As part of my doctoral work, I conducted experiments in grounded cognition, the hypothesis that humans recruit multi-modal representations (visual and motor-based simulations) when processing language. So, for example, when processing sentences, people rapidly and unconsciously simulate the depicted situation. This holds even when people process metaphorical language, that is, non-literal sentences, about time.
Previous research shows that we process sentences about the past and future by using physical space (the past is behind us, and the future is in front of us). However, little work has examined the granularity of this mapping (is tomorrow physically closer to us than next year?) or whether processing such language influences our sense of the physical space around us.
In a novel experimental paradigm, participants were blindfolded and listened to sentences about future events taking place either tomorrow or next year. For example:
Tomorrow/Next year, she will discover a planet.
While blindfolded, participants were then asked to estimate the distance to a previously established point in front of them by walking to that point. Participants were guided by an apparatus with a laser distance measure attached. The prediction was as follows:
Participants would underestimate the distance to the point in front of them after listening to sentences about tomorrow and overestimate that distance after listening to sentences about next year.
As predicted, participants underestimated the distance to the point in front of them after processing sentences about tomorrow and overestimated that distance after sentences about next year.
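The dependent measure above can be sketched as a signed estimation error: walked distance minus true distance, where negative values indicate underestimation and positive values indicate overestimation. A minimal sketch, assuming a fixed target distance; every number below is made up for illustration and does not come from the study:

```python
# Hypothetical illustration of the signed-error measure.
# Error = walked distance - true distance (metres):
# negative -> underestimation, positive -> overestimation.
# All values are invented for demonstration, not the study's data.
from statistics import mean

TRUE_DISTANCE = 6.0  # assumed distance to the established point, in metres

# Walked distances per sentence condition (hypothetical values)
walked = {
    "tomorrow": [5.4, 5.7, 5.5, 5.8],
    "next_year": [6.5, 6.3, 6.6, 6.4],
}

mean_errors = {
    condition: mean(d - TRUE_DISTANCE for d in distances)
    for condition, distances in walked.items()
}

for condition, err in mean_errors.items():
    print(f"{condition}: mean signed error = {err:+.2f} m")
```

With these invented numbers, the "tomorrow" condition yields a negative mean error (underestimation) and "next_year" a positive one (overestimation), mirroring the predicted pattern.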
These findings provide further evidence for the grounded cognition hypothesis, particularly for how people process abstract concepts like time.