How We Think - An Analysis of Cognition
How do we think? What ideas of self are embedded in this process? A review of classic theories of self is presented and synthesized in this article by Phin Upham.
Classic theories of cognition each present a view of self (both explicit and implicit), whose consequences and presuppositions amount to three claims: 1) we avoid the complexity of the world by employing simplifying “models” or heuristics, and these heuristics can be revealed by clever counterexamples that expose their inadequacies; 2) we employ similar self-models to view our own identity and to mediate and moderate our own behavior; and 3) there are competing ideas of what these behavioral and self-models look like. The question of the attitudes and frameworks that define behavior is difficult not only because we are dynamic beings who are constantly changing and updating our behavior (though this week’s evidence would claim that we do not update certain aspects and internal structures of ourselves), but also because it is likely that we are not internally consistent in our actions. It is entirely possible, as the evolutionary psychologists Tooby and Cosmides argue, that we are “Swiss-army knives” equipped with heuristics for many different types of situations, which would present some problems for these hypotheses as presented.
The psychologists Tversky and Kahneman argue that we use mental shortcuts, or “heuristics,” which are (generally) useful ways of organizing the world. These heuristics can be shown to lead us into error in many situations but, it is assumed, are efficient and generally accurate in daily life (though a more evolutionary claim would merely say they were accurate in the evolutionarily relevant EEA). Tversky and Kahneman believe that we use heuristics such as representativeness, availability, and anchoring to simplify the world around us. They are able to show that these simplifications can lead to mistakes in logic (under- or overestimating the likelihood of events, for example), but they stop short of exploring why we might employ such tricks. They mention that these heuristics act to “simplify” the world, and they describe the contexts in which we use each.
Hazel Rose Markus continues by applying Tversky and Kahneman’s logic to the realm of self-perception. She claims that not only do we use shortcuts to interpret and define the world, as Tversky and Kahneman argued, but we use shortcuts to define and interpret ourselves as well; she calls these internal mental models “self-schemata.” She designed a study that divides people into dependents, independents, and aschematics, and illustrates that dependents and independents, by having a model of self, are able to self-define more quickly and efficiently and to maintain a more robust and contextualized image of self. Having a schema facilitates the processing of social information. The unanswered questions are whether these subjects are correct in their self-identification and how they came to these self-labels. If these labels are useful, and they need not be accurate, then the questions, for me, become: 1) is it more, less, or equally useful to have an accurate label than an inaccurate label (or no label)? 2) which label is the most useful and successful? 3) how do we form and change such labels? There is compelling evidence, for example, that people who have an inflated perception of their own abilities (as compared to the estimation of those around them) perform better than those with a realistic perception of their own abilities (this latter group is even sometimes depressed).
The scholar Bem provides a theory of how we might develop such a model of self, and goes a bit further, providing a partial overview of the field, contrasting models, and attempting to give a genealogy of how these models develop in humans as part of his explanation. Bem constructs a model he calls “self-perception theory,” which essentially claims that our attitudes and behavior have less to do with private knowledge about ourselves than we think. We sometimes socially define how we ought to attribute and label the “feelings” we have (though the nature of these internal feelings is never explored), especially when they are weak and ambiguous, and there is evidence that this is quite often the case. He generates a model of how such a theory might begin to develop as a child grows, demonstrates that people can be fooled into attributing feelings incorrectly if given false reasons for feeling the way they do, and shows that irrelevant factors (such as lighting) can change the way one feels about a mental object (the understanding of a cartoon). Finally, he provides evidence that we attribute to others the same biases we unconsciously use on ourselves (via the toy experiment). I find this essay refreshing in its audacity: it practically turns on its head the old theory that we understand ourselves and apply that understanding, with an “other minds” assumption, to others, and says that the truth is instead, at least in part, the other way around.
The next three essays, by Aronson, Steele, and Staw, provide more straightforward ways we can model (some of) our behavior. Aronson explores the idea of cognitive dissonance, which postulates that when we hold “dissonant” opinions in our mind, or are in “tension” with others, there is a desire to add “consonant” cognitions by changing one or both views so that they are complementary. This theory seems to explain part of our thought processes, but it falls short, as the author recognizes, of providing either a satisfying reason for this behavior or a precise “how” component. Steele explores the idea of self-affirmation: the idea that we often rationalize and interpret our surroundings in order to make them fit with our “theories of self.” So if we do something bad, we may do something good in another area entirely to “balance” the bad out and so maintain a general equilibrium. This seems complementary to dissonance theory (or the other way around), since the conflicting action could be seen as dissonant with our general perception of self, so we either add a consonant factor (the countervailing act) to rectify this or rationalize our action away. I think a synthesis of these two positions is possible. Lastly, Staw provides an “escalation of commitment” model in which we respond strongly to structural patterns such as past sunk resources. We do not always behave rationally when pursuing a course of action; certain factors can cause us to forge ahead even when logic tells us to cut our investment.
What troubles me most about the latter three (I think Bem’s piece largely escapes this criticism) is that there seems to be an assumption that humans are otherwise “rational,” and that these theories describe deviations from that norm. As Tversky and Kahneman themselves put it, “[we do not comply with fully rational behavior because] statistical principles are not learned from everyday experience because the relevant instances are not coded appropriately” (1130). This implies, as many of the other authors do, that wherever we have not yet been found to systematically deviate from “rational choice,” we are indeed using the principles of rationality. This seems highly suspect to me. We have little internal access to how we generate our decisions (though we can and do justify those decisions with rational arguments). We really know little of what goes on in our own minds, or even of how we construct a sentence. I can imagine an ape making largely the same decisions (minus the rationalizations), but no one would claim that he or she was thinking “rationally.” So perhaps the authors have been approaching the question from the wrong angle entirely. Perhaps the study of how we think lies in other areas altogether.
Tooby and Cosmides, the husband-and-wife team of evolutionary psychologists, provide a compelling “Swiss army knife” model of the human mind. During the EEA (the environment of evolutionary adaptedness), we developed specific, compartmentalized strategies to deal with specific situations. They show, for example, that we can perform a logical task perfectly when it is framed in a concrete context such as “which fruit might be poisoned,” but we completely fumble the same task when the same question is posed abstractly in terms of “x” and “y.” If we were thinking rationally, this would not be the case. My point here is simply to undermine the idea that the mind’s default mode is rationality and that heuristics are the deviation. Indeed, where our heuristics are good enough to “approximate” rationality sufficiently, there may be no practical difference.
Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185: 1124-1131.
Markus, H. (1977). Self-schemata and processing information about the self. Journal of Personality and Social Psychology, 35: 63-78.
Bem, D.J. (1972). Self-perception theory. In L. Berkowitz (ed.), Advances in Experimental Social Psychology, Volume 6. New York: Academic Press.
Aronson, E. (1969). The theory of cognitive dissonance. In L. Berkowitz (ed.), Advances in Experimental Social Psychology, Volume 4. New York: Academic Press.
Steele, C.M. (1988). The psychology of self-affirmation: Sustaining the integrity of the self. In L. Berkowitz (ed.), Advances in Experimental Social Psychology, Volume 21. San Diego: Academic Press (pp. 261-302).
Staw, B.M. (1981). The escalation of commitment to a course of action. Academy of Management Review, 6: 577-587.
Samuel Phineas Upham has a PhD in Applied Economics from the Wharton School (University of Pennsylvania). Phin is a Term Member of the Council on Foreign Relations. He can be reached at email@example.com.
Tags: Cognition, Analysis
This work is licensed under a Creative Commons Attribution-No Derivative Works 3.0 License.