There are some truly strange arguments made in the psychological literature from time to time. Some might even be so bold as to call that frequency “often”, while others might dismiss the field of psychology as a variety of pseudoscience and call it a day. Now, were I to venture some guesses as to why strange arguments seem so popular, I’d have two main possibilities in mind: first, there’s the lack of a well-grounded theoretical framework that most psychologists tend to suffer from and, second, there’s a certain pressure on psychologists to find and publish surprising results (surprising in that they document something counter-intuitive or some human failing; I blame this one for the lion’s share of these strange arguments). These two factors might come together to result in rather nonsensical arguments being put forth fairly regularly without being spotted for what they are. One such strange argument that has crossed my field of vision fairly frequently in the past few weeks is the following: that our minds are designed to actively create false information, and that this false information is supposed to help us make better choices. Though it comes in various guises across different domains, the underlying logic is always the same: false beliefs are good. On the face of it, such an argument seems silly. In all fairness, however, it only seems that way because, well, it is that way.
Given the strangeness of these arguments, it’s refreshing to come across papers critical of them that don’t pull any rhetorical punches. For that reason, I was immediately drawn to a recent paper entitled “How ‘paternalistic’ is spatial perception? Why wearing a heavy backpack doesn’t – and couldn’t – make hills look steeper” (Firestone, 2013; emphasis his). The general idea the paper argues against is the apparently popular suggestion that our perception essentially tells us – the conscious part of us, anyway – many little lies to get us to do or not do certain things. As the paper’s title implies, one such argument goes that wearing a heavy backpack will make hills actually look steeper: not just feel harder to climb, mind you, but actually look visually steeper. The reason some researchers posited this is that they realized, correctly, that wearing a heavy backpack makes hills harder to climb. In order to dissuade us from climbing them under such conditions, then, our perceptual system is thought to make the hill look harder to climb than it actually is, so we don’t try. Additionally, such biases are said to make decisions easier by reducing the cognitive processing required to make them.
Suggestions like these do violence to our intuitive experience of the world. Were you looking down the street unencumbered, for instance, your perception of the street would not visibly lengthen before your eyes were you to put on a heavy backpack, despite the distance now being harder to travel. Sure; you might be less inclined to take that walk down the street with the heavy backpack on, but that’s a different matter from whether you would see the world any differently. Those who favor the embodied model might (and did) counter that it’s not that the distances themselves change, but rather the units on the ruler used to measure one’s position relative to them that do (Proffitt, 2013). In other words, since our measuring tool looks different, the distances look different. I find such an argument wanting, as it appears to be akin to suggesting that we should come to a different measurement of a 12-foot room depending on whether we’re using foot-long or yard-long measuring sticks, but perhaps I’m missing some crucial detail.
In any case, there are many other problems with the embodied account that Firestone (2013) goes through, such as the magnitude of the effect sizes – which can be quite small – being insufficient to accurately adjust behavior, there being little to no objective way of scaling one’s relative abilities to certain kinds of estimates, and, perhaps most damningly, that many of these effects fail to replicate or can be eliminated by altering the demand characteristics of the experiments in which they’re found. Apparently, subjects in these experiments made some connection – often explicitly – between having just been asked to put on a heavy backpack and then being asked to estimate the steepness of a hill: they inferred what the experimenter wanted and adjusted their estimates accordingly.
While Firestone (2013) makes many good points in suggesting why the paternalistic (or embodied) account probably isn’t right, there are some I would like to add to the list. The first of these additions is that, in many cases, the embodied account seems to be useless for discriminating between even directly comparable actions. Consider the following example in which such biases might come into play: you have a heavy load to transport from point A to point B, and you want to figure out the easiest way of doing so. One route takes you over a steep hill; another route takes you the longer distance around the hill. How should we expect perceptual estimates to be biased in order to help you solve the task? On the one hand, they might bias you to avoid the hill, as the hill now looks steeper; on the other hand, they might bias you to avoid the more circuitous route, as distances now look longer. It would seem the perceptual bias resulting from the added weight wouldn’t help you make a seemingly simple decision. At best, such biases might make you decide not to bother carrying the load in the first place, but the moment you put it down, the perceptions of these distances ought to shrink, making the task seem more manageable. All such a biasing system would seem to do in cases like this, then, is add extra cognitive processing into the mix in the form of whatever mechanisms are required to bias your initial perceptions.
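To make that concrete, here is a toy sketch of the route problem. The cost function, the bias rule, and every number in it are my own inventions for illustration; nothing here comes from Firestone (2013) or Proffitt (2013). Because the load-driven bias inflates both percepts, the numbers change but the ranking of the two routes does not:

```python
def effort(steepness_deg, distance_m):
    """Toy cost estimate: effort grows with distance and with steepness."""
    return distance_m * (1.0 + steepness_deg / 45.0)

def inflate(percept, load_kg, k=0.02):
    """Hypothetical 'paternalistic' bias: heavier loads inflate any percept."""
    return percept * (1.0 + k * load_kg)

# (steepness in degrees, distance in meters) -- both invented
routes = {"over the hill": (20.0, 300.0),
          "around the hill": (2.0, 900.0)}
load = 25.0  # kg of gear on your back

true_costs = {r: effort(s, d) for r, (s, d) in routes.items()}
biased_costs = {r: effort(inflate(s, load), inflate(d, load))
                for r, (s, d) in routes.items()}

print("true percepts favor:  ", min(true_costs, key=true_costs.get))
print("biased percepts favor:", min(biased_costs, key=biased_costs.get))
# Both lines print 'over the hill': the bias changed the numbers, not the choice.
```

The biased pass is pure overhead: it needs the true percepts as its input and hands back the same choice they would have produced on their own.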
The next addition I’d like to make also concerns the embodied account not being useful: the embodied account, at least at times, would seem to get causality backwards. Recall that the hypothesized function of these ostensible perceptual distortions is to guide actions. Provided I’m understanding the argument correctly, then, these perceptual distortions ought to occur before one decides what to do, not after the decision has already been made. The problem is that they don’t seem to be able to work in that fashion, and here’s why: these biasing systems would be unable to know in which direction to bias perceptions prior to a decision being made. If, for instance, some part of your mind is trying to bias your perception of the steepness of a hill so as to dissuade you from climbing it, that would seem to imply that some part of your mind had already made the decision as to whether or not to try and make the climb. If the decision hadn’t been made, the direction and extent of the bias would remain undetermined. Essentially, these biasing rules are being posited to turn your perceptual systems into superfluous yes-men.
On that point, it’s worth noting that we are talking about biasing existing perceptions. The proposition on the table seems to be the following chain of events: first, we perceive the world as it is (or at least as close to that state as possible; what I’ll call the true belief). This leaves most of the cognitive work already done, as I mentioned above. Then, from those perceptions, an action is chosen based on some expected cost/benefit analysis (i.e., don’t climb the hill because it will be too hard). Following this, our mind takes the true belief it has already used to make the decision and turns it into a false one. This false belief then biases our behavior so as to get us to do what we were going to do anyway. Since the decision can be made on the basis of the initially-calculated true information, the false belief seems to have no apparent benefit for your immediate decision. The real effect of these false beliefs, then, ought to show up in subsequent decisions. This raises yet another troubling possibility for the model: in the event that some perception – like steepness – is used to generate estimates of multiple variables (such as energy expenditure, risk, and so on), a biased perception will similarly bias all of those estimates.
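To illustrate that last point – and, to be clear, the functions and numbers below are invented stand-ins for whatever estimates the mind actually computes – a single inflated steepness percept drags every estimate derived from it along for the ride:

```python
def energy_cost(steepness_deg):
    """Hypothetical energy estimate (kcal) derived from perceived steepness."""
    return 100.0 + 8.0 * steepness_deg

def slip_risk(steepness_deg):
    """Hypothetical risk estimate derived from the very same percept."""
    return min(1.0, steepness_deg / 60.0)

true_steepness = 10.0                       # degrees
perceived_steepness = true_steepness * 1.5  # the posited backpack distortion

print(energy_cost(true_steepness), slip_risk(true_steepness))            # 180.0 ~0.167
print(energy_cost(perceived_steepness), slip_risk(perceived_steepness))  # 220.0 0.25
```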
A quick example should highlight some of the potential problems with this. Let’s say you’re a camper returning home with a heavy load of gear on your back. Because you’re carrying a heavy load, you mistakenly perceive that your camping group is farther away than they actually are. Suddenly, you notice a rather hungry-looking predator approaching you. What do you do? You could try to run back to the safety of your group, or you could try to fight it off (forgoing other behavioral options for the moment). Unfortunately, because you mistakenly believe that your group is farther away than they are, you miscalculate the probability of making it to them before the predator catches up with you and opt to fight it off instead. Since the basis for that decision is false information, the odds of it being the best choice are diminished. This analysis works in the opposite direction as well. There are two types of errors you might make: thinking you can make the distance when you can’t, or thinking you can’t make it when you can. Both of these are errors to be avoided, and avoiding errors is awfully hard when you’re working with bad information.
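Here is the camper scenario reduced to a toy calculation; the speeds, distances, and fight odds are all made up purely for the sake of the example. The run-or-fight choice hinges on whether you believe you can close the gap to your group before the predator closes its own, so inflating the perceived distance is enough to flip you into the worse option:

```python
def best_option(perceived_gap_m, your_speed, predator_gap_m, predator_speed,
                p_win_fight=0.30):
    """Crude run-or-fight rule: run only if you'd reach the group before being caught."""
    time_to_group = perceived_gap_m / your_speed
    time_until_caught = predator_gap_m / (predator_speed - your_speed)
    p_escape = 1.0 if time_to_group < time_until_caught else 0.0
    return "run" if p_escape > p_win_fight else "fight"

true_gap = 120.0    # meters to your group
biased_gap = 180.0  # the same gap, as perceived under a heavy load

print(best_option(true_gap, your_speed=5.0, predator_gap_m=60.0, predator_speed=7.0))    # run
print(best_option(biased_gap, your_speed=5.0, predator_gap_m=60.0, predator_speed=7.0))  # fight
```

With the true gap, running is the better bet; the inflated percept is what pushes the camper into the fight.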
It seems hard to find the silver lining in these false-belief models. They don’t seem to save any cognitive load, as they require the initially true beliefs to already be present in the mind somewhere. They don’t seem to help us make decisions either. At best, false beliefs lead us to do the same thing we would have done with true beliefs anyway; at worst, false beliefs lead us to make worse decisions than we otherwise would. These models appear to require that our minds take the best possible state of information they have access to and then spend additional effort distorting it. Despite these (perhaps not-so) clear shortcomings, false-belief models appear to be remarkably popular, and are used to explain topics ranging from religious beliefs to ostensible misperceptions of sexual interest. Given that people generally seem to understand that it’s beneficial to see through the lies of others and not be manipulated with false information, it seems peculiar that we have a harder time recognizing that it’s similarly beneficial to avoid lying to ourselves.
References: Firestone, C. (2013). How “Paternalistic” Is Spatial Perception? Why Wearing a Heavy Backpack Doesn’t – and Couldn’t – Make Hills Look Steeper. Perspectives on Psychological Science, 8, 455-473.
Proffitt, D. (2013). An Embodied Approach to Perception: By What Units Are Visual Perceptions Scaled? Perspectives on Psychological Science, 8, 474-483.
Copyright Jesse Marczyk