Monday, December 1, 2008

Week 10: Social cognition (Part 1)

Jackendoff, R. (2003). An agenda for a theory of social cognition. In R. Jackendoff (Author), Language, consciousness, culture: Essays on mental structure. Cambridge, MA: MIT Press.

Beer, J. S., Shimamura, A. P., & Knight, R. T. (2004). Frontal lobe contributions to executive control of cognitive and social behavior. In M. S. Gazzaniga (Ed.), The cognitive neurosciences III (pp. 1091-1104). Cambridge, MA: MIT Press.

Ochsner, K. N. (2007). Social cognitive neuroscience: Historical development, core principles, and future promise. In A. Kruglanski & E. T. Higgins (Eds.), Social psychology: A handbook of basic principles (2nd ed., pp. 39-66). New York: Guilford Press.

The articles for this week’s topic were only somewhat illuminating on social cognition itself, as they covered the topic in much broader strokes than would have been helpful as an introduction. Nevertheless, some interesting information was gleaned.

Beer et al. (2004) review several studies describing the differential roles of lateral versus medial/orbitofrontal regions of the prefrontal cortex. Specifically, the lateral prefrontal cortex is repeatedly implicated in what are traditionally conceived of as cognitive processes, and the article focuses primarily upon the role of the LPFC in attention. Evidence from several lesion and animal studies implicates the LPFC in allocating attention by inhibiting or exciting neuronal activity in sensory (visual or auditory) regions. Lesion studies demonstrate how damage to this region impairs the ability to filter out irrelevant information. In sum, the LPFC plays an important role in controlled information processing. By contrast, the medial and orbitofrontal regions of the PFC appear to be implicated in self-regulation and the processing of social information, serving to integrate emotional and cognitive information.

This idea is not new to us here, as in prior weeks we have seen this region implicated in the integration of cognition and emotion in decision making, reasoning, and behavioral inhibition (à la dear Phineas Gage). It makes sense, then, that processing social information would implicate this region as well. One could hypothesize that acting in socially appropriate ways requires a process similar to decision making, wherein one decides whether to follow an emotional impulse or inhibit it based upon contextual and historical information. Beer et al. also review studies implicating the medial PFC in encoding information relevant to the self and in making inferences about others’ behaviors (theory of mind). This also makes sense, as this region appears to be important for integrating information about the emotional salience of stimuli with reward-based information about potential responses. One could argue social information is inherently emotional, as it is ultimately pertinent to the maintenance and attainment of survival goals. We learn to navigate the social world in ways that maximize our own survival, maintaining proximity to important others who aid our survival and distance from those who threaten it. At the risk of being reductionist, it seems to circle back to that old familiar theme of cognitive/affective integration.

Jackendoff’s article picks up this point by discussing the myriad social interactions one must learn to navigate. Taking a much different approach, he argues for an analogy between social cognition and language development. Both language and social competency are attained through the interplay between “hardwired” computational capacity and culturally driven, externally derived information. We are born with the capacity to learn language, but the nuances of what we learn and how our language is ultimately used are shaped by our linguistic interactions with those around us. Similarly with social behavior – we have an innate capacity for social processes such as empathy, “cheater detection” (the ability to detect the intentions of others), theory of mind, emotional contagion (the ability to mirror the emotions expressed by others), and self-monitoring/self-regulation in the service of socially appropriate behavior. How these capacities are expressed is a function of the immediate world we live in, and as such is culturally and group specified. The more I read, the more beautifully orchestrated human nature seems – information gathered through the internal lens, the external synthesized with the internal, and the combination offered back out again to the world.

The third article in this series, by Kevin Ochsner, is less a treatise on social cognition than a manifesto for the new discipline of social cognitive neuroscience. The article lays out an agenda for this new line of research, encouraging a multi-layered analysis of social psychology, examining the behavioral, computational/representational, and neuronal levels. The chapter suggests ways in which previously distinct disciplines of social psychology, social cognition, and cognitive neuroscience can work together to answer questions in a more constrained and meaningful way. While the chapter was a very interesting read, it did not provide much by way of examining specific research in social cognition, so was less useful to our immediate purposes here. Perhaps the Thagard chapter will provide more fine-grained insight…

Sunday, November 23, 2008

Weeks 8 & 9: Reasoning and Problem Solving

Litt, A., Eliasmith, C., & Thagard, P. (forthcoming). Neural affective decision theory: Choices, brains, and emotions. Cognitive Systems Research.

Evans, J. S. B. T. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7, 454-459.

Thagard, P. (2007). Abductive inference: From philosophical analysis to neural mechanisms. In A. Feeney & E. Heit (Eds.), Inductive reasoning: Experimental, developmental, and computational approaches (pp. 226-247). Cambridge: Cambridge University Press.


This week’s readings focus on the processes of reasoning and decision making. Throughout each article and chapter, a common theme emerges: both reasoning and decision making appear to reflect a final solution reached through the interaction of dual processing streams, one involving emotional processing and the other more deliberate cognitive processing. Evans (2003) presents a dual-process theory of reasoning, whereby two systems essentially “compete” for the final solution. “System 1” represents rapid, automatic processing of concepts and beliefs formed through associative learning. It is through this system that innate and instinctual behaviors are accessed. Evans proposes that the end result of this rapid, automatic processing is what becomes available to consciousness. System 2, by contrast, represents much slower, methodical processing, making use of working memory to elaborate upon information from System 1, engaging in more sophisticated hypothetical thinking and forecasting, constructing mental models, and analyzing possible outcomes. Through this process, System 2 essentially has the capacity to override System 1. Evans presents examples from studies in which syllogisms are used to evaluate the relationship between beliefs and analytical deductive reasoning. In one type of study, participants are asked to endorse only those conclusions that logically follow the preceding premises, regardless of their beliefs. Results show participants have a very difficult time overriding prior beliefs and show belief bias in their endorsement of conclusions, rejecting otherwise logically sound solutions.
For example, in the syllogism “No nutritional things are inexpensive; some vitamin tablets are inexpensive; therefore, some vitamin tablets are not nutritional,” participants had difficulty endorsing the conclusion even though it logically follows from the premises, having a difficult time “buying” that we can conclude some vitamins are not nutritional from the fact that they are inexpensive. If instructions emphasize endorsing conclusions only on the basis of their logical merit, participants are able to do so, but only with effort. Evans proposes these studies reflect the inhibition and override of System 1 by System 2. If System 2 requires working memory and other higher cognitive processes in order to override System 1, then measures of intelligence ought to correlate with this override ability. This has indeed been demonstrated, with higher IQ scores correlating with a greater ability to find correct solutions in reasoning tasks. In short, the greater cognitive capacity an individual has, the better able they are to go beyond “gut reactions” to problems and find other possible solutions. This is analogous to the stereotype of the reckless, emotional decision-maker versus the cool, calm, collected and calculated one – Inspector Clouseau versus James Bond, if you will.
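Evans’s override account can be caricatured in a few lines of code. This is only a toy sketch under my own assumptions – the function names, the single “capacity” number, and the 0.5 threshold are all illustrative, not anything from the article:

```python
# Toy sketch of a dual-process account of belief bias (after Evans, 2003).
# All names and numeric values here are illustrative assumptions.

def system1(conclusion_believable: bool) -> bool:
    """Fast, associative response: endorse whatever fits prior beliefs."""
    return conclusion_believable

def system2(conclusion_valid: bool) -> bool:
    """Slow, analytic response: endorse only logically valid conclusions."""
    return conclusion_valid

def endorse(conclusion_valid: bool, conclusion_believable: bool,
            capacity: float, threshold: float = 0.5) -> bool:
    """System 2 overrides System 1 only when cognitive capacity suffices."""
    if capacity >= threshold:
        return system2(conclusion_valid)
    return system1(conclusion_believable)

# The vitamin syllogism: logically valid but unbelievable. A low-capacity
# (or unmotivated) reasoner rejects it; a high-capacity one endorses it.
print(endorse(conclusion_valid=True, conclusion_believable=False, capacity=0.3))  # False
print(endorse(conclusion_valid=True, conclusion_believable=False, capacity=0.9))  # True
```

The IQ correlation Evans cites corresponds here to `capacity` raising the odds that the analytic answer, rather than the believable one, is produced.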

In Thagard’s article, he provides arguments for a neural account of abductive reasoning. Abductive reasoning refers to inference involving the generation and evaluation of explanatory hypotheses. Thagard argues that the process of abductive reasoning is inherently emotional, based on the fact that reasoning begins when something is puzzling and resolves when a target explanatory solution is arrived at. Both the puzzlement and the satisfaction with the explanatory solution are, in essence, emotional events. He suggests that the ability to find causal relations begins with very early perceptual processing, as demonstrated by studies showing infants as young as 2.5 months expect that a stationary object will be displaced when hit by a moving object. Thagard proposes there is a neurally encoded image schema that establishes the causal relationship tying the neural structure representing the hypothesis to the neural structure representing the target explanation. Abductive inference is the “transformation of representational neural structures that produces neural structures that provide causal explanations.” Abductive inference involves not only verbal-linguistic processing but also inference from multiple perceptual modalities (such as inferring, from seeing a scratch on your car in a supermarket parking lot and a shopping cart nearby, that the shopping cart caused the scratch). All types of inference are inherently emotional in that what motivates one to find a causal explanation is the emotional thrust of puzzlement, and what marks a solution is the satisfaction that solution elicits. Here again, we see the interaction between emotion and cognition.

Litt, Eliasmith, and Thagard provide an interesting account of the role of emotion in decision making. Decision making involves weighing various response choices and their potential consequences. As discussed earlier in Week 6, this involves both emotional and contextual information, implicating the VMPFC, amygdala, and hippocampus. The current article extends this work, demonstrating through neurocomputational modeling how amygdala activation (representing emotional salience) influences ongoing response selection. In essence, the greater the emotional arousal generated by a stimulus, the greater the subjective value placed on that stimulus by the OFC. Valuations are exponentially dampened or intensified depending upon lowered or heightened states of arousal. The authors provide equations representing this process, demonstrating how the level of amygdala activation can in essence cancel out OFC responses. Greater negative predictions elicit higher levels of arousal, and there is greater aversion to potential losses than attraction to potential gains in predicted outcomes. The authors go on to present fascinating accounts of the way framing a problem can influence decision making. Potential for loss is more arousing than potential for gain; therefore, the way a problem is presented, emphasizing overall losses as opposed to overall gains, influences which decision is made. For example, studies by Tversky & Kahneman (1981, 1986) found that when given a choice of two plans to control an outbreak expected to kill 600 people, participants were inclined to choose a plan that would result in 200 people being saved but to reject a plan resulting in 400 people being killed. Objectively, both choices are exactly the same (200 people live, 400 people die), but the option framed as saving people was more desirable than the option framed as letting people be killed.
The same framing phenomenon occurs in the famous trolley-footbridge dilemma (Greene et al., 2001, 2004), wherein participants are far more willing to divert a runaway trolley onto a side track by flipping a switch, killing one person but saving five, than to physically push one person off a footbridge into the trolley’s path, likewise killing one person but saving the rest. Even though the outcomes are objectively identical, the distance between the impersonal act of flipping a switch and the resulting death is greater than the distance between making physical contact with an individual and causing death. The latter elicits far greater amygdala and OFC activation than the former, suggesting greater emotional salience. Another aspect of framing explains why people sometimes make choices that are objectively less valuable but hedonically more valuable. The authors give the example that winning $20 feels like a gain when the comparison is winning only $1, whereas winning $20 feels like a loss when the comparison is winning $100. It is objectively the same outcome – $20 is $20 – but one outcome is more desirable than the other. The authors suggest the difference in desirability results from the distance between the actual outcome and the expected outcome. If you expect to earn $100, $20 feels like a loss; if you expect to win $1, $20 is a gain.
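The reference-dependent, arousal-modulated valuation described above can be sketched with a prospect-theory-style value function. The exponents, the loss-aversion coefficient, and the multiplicative arousal term below are illustrative assumptions on my part, in the spirit of Tversky & Kahneman’s formulation rather than the actual Litt et al. equations:

```python
def subjective_value(outcome: float, reference: float = 0.0,
                     loss_aversion: float = 2.25, arousal: float = 1.0) -> float:
    """Reference-dependent valuation with loss aversion, scaled by arousal.

    A prospect-theory-style sketch: gains and losses are judged relative
    to an expected reference point, losses loom larger than gains, and
    arousal amplifies or dampens the whole valuation. All parameter
    values are illustrative, not taken from Litt et al.
    """
    delta = outcome - reference
    if delta >= 0:
        value = delta ** 0.88                      # diminishing sensitivity to gains
    else:
        value = -loss_aversion * (-delta) ** 0.88  # losses weighted more heavily
    return arousal * value

# Winning $20 feels like a gain against a $1 expectation...
print(subjective_value(20, reference=1) > 0)    # True
# ...but like a loss against a $100 expectation.
print(subjective_value(20, reference=100) < 0)  # True
```

The same asymmetry captures the outbreak framing: “200 saved” is evaluated as a gain from a zero-survivor reference, while “400 killed” is evaluated as a loss, so the identical outcome gets opposite subjective signs.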
The article by Litt et al. maps well onto the article by Evans: we can assume the hedonic value and emotional contribution to a decision result from “System 1” processes, arising from prior learned associations and deeply held beliefs (such as the belief that killing another human being is wrong). The degree to which the emotional aspects of a decision or reasoning process win out is related to the degree to which further elaboration and hypothesizing about possible solutions, generated through “System 2” processes, override System 1 contributions. Regardless, it appears we cannot “escape” bottom-up, affect-driven influences on what would otherwise be construed as a cognitive process.

Saturday, November 22, 2008

Week 7: Consciousness (Part 2)

Srinivasan, N. (2008). Interdependence of attention and consciousness. Progress in Brain Research, 168, 65-75.

This article seeks to understand consciousness by exploring its relationship with attention. First, an important consideration in following the article’s arguments is the way consciousness is defined: throughout the article, consciousness is taken to mean awareness, rather than mere perception. The article presents two conceptualizations of the relationship between attention and consciousness. On the one hand, attention is thought to be necessary for conscious awareness, in that we are not conscious of that which we do not attend to. Evidence supporting this idea includes studies of inattentional blindness, wherein irrelevant stimuli are not reported as seen when participants are not aware the stimuli will be present (Mack & Rock, 1998), and studies of change blindness, wherein subtle changes in objects are not perceived outside of focused attention on the object (Rensink, 2002). On the other hand, consciousness is thought to precede attention, with selective attention operating on what is already conscious. From this perspective, perceptual processing leads to conscious perception, and attention acts to focus awareness in order to take appropriate action. While the article cites studies supporting this view (e.g., Lamme, 2003), the studies themselves are not presented, so it is hard to draw conclusions about this viewpoint. In essence, the entire argument represents a sort of “chicken-and-egg” dilemma.
It might be useful to return to the definitions presented earlier. Merriam-Webster defines consciousness (n.) as “the quality or state of being aware especially of something within oneself; the state or fact of being conscious of an external object, state, or fact.” Conscious (adj.) is defined as “perceiving, apprehending, or noticing with a degree of controlled thought or observation.” In other words, to be conscious of something we are not only perceiving it but also attending to it to some degree. If this is the case, I would argue that attention is a necessary part of what makes something that is perceived something we are consciously aware of. If so, I would place perception on one end of a continuum and focused attention on the other, with consciousness operating as degrees along this continuum. Srinivasan presents one interesting theory that, while not exactly the same concept, would support this view: Dehaene et al. (2006) have proposed that consciousness and attention may function as a 2x2 matrix in which one factor is stimulus strength (bottom-up) and the other is controlled attention (top-down). This results in four classes of processing: subliminal-unattended, subliminal-attended, preconscious, and conscious (although they don’t really define what is meant by “preconscious”). Again, degrees of consciousness depend on the interaction between perception and attention.
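Dehaene et al.’s taxonomy is easy to render as a small decision rule. The continuous strength scale and the 0.5 cutoff are my own illustrative assumptions; only the four class labels come from the article:

```python
def processing_class(stimulus_strength: float, attended: bool,
                     threshold: float = 0.5) -> str:
    """Classify a stimulus in Dehaene et al.'s (2006) 2x2 taxonomy.

    Crosses bottom-up stimulus strength with top-down attention.
    The numeric strength scale and cutoff are illustrative assumptions.
    """
    strong = stimulus_strength >= threshold
    if not strong:
        return "subliminal-attended" if attended else "subliminal-unattended"
    return "conscious" if attended else "preconscious"

print(processing_class(0.9, attended=True))   # conscious
print(processing_class(0.9, attended=False))  # preconscious: strong but unattended
print(processing_class(0.2, attended=True))   # subliminal-attended: too weak to reach awareness
```

Note how the rule encodes the continuum idea from the paragraph above: neither strength nor attention alone suffices for the “conscious” class; it takes both.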
Having degrees of conscious awareness might be important adaptively. At any given moment, certain aspects of our internal and external environment are important to attend to, and others are not. Without degrees of consciousness operating as a sort of filter, we would be inundated with stimuli and essentially incapacitated. Procedural memory can be thought of in this way: when we learn to ride a bike, we are initially aware of all the movements of our hands, feet, body, balance, etc. Once we get the hang of it, we no longer think about how our body needs to move in order to ride, and can shift our attention to our surroundings, thereby avoiding crashing into walls or being hit by a car. One can only imagine how difficult riding a bike would be if we had to divide our attention between awareness of our bodily movements and information about our surroundings simultaneously. There is some evidence to suggest obsessive-compulsive disorder may represent an inability to filter out irrelevant information from conscious awareness, causing an inability to disengage from stimuli. A recent study by Calamari et al. (in press) demonstrated that participants with OCD performed more slowly on a learning task than healthy controls, yet the participants with OCD were able to describe all the elements that went into their selection of specific movements. In other words, they were consciously attending to irrelevant information that affected their overall performance, whereas healthy controls were able to learn the task and filter out of awareness all the steps it took to perform it, thereby allowing them to perform more quickly. It would be interesting to pursue this line of inquiry further to better understand the implications for OCD. Perhaps this could shed further light upon the relationship between attention and consciousness.

Saturday, November 15, 2008

Week 7: Consciousness (Part 1)

CONSCIOUSNESS (noun)
1 a: the quality or state of being aware especially of something within oneself b: the state or fact of being conscious of an external object, state, or fact c: awareness; especially: concern for some social or political cause 2: the state of being characterized by sensation, emotion, volition, and thought: mind 3: the totality of conscious states of an individual 4: the normal state of conscious life 5: the upper level of mental life of which the person is aware as contrasted with unconscious processes

CONSCIOUS (adjective; from Latin com + scire “to know”)
1: perceiving, apprehending, or noticing with a degree of controlled thought or observation (was conscious that someone was watching) 2 archaic: sharing another's knowledge or awareness of an inward state or outward fact 3: personally felt (conscious guilt) 4: capable of or marked by thought, will, design, or perception 5: self-conscious 6: having mental faculties undulled by sleep, faintness, or stupor: awake (was conscious during the surgery) 7: done or acting with critical awareness (a conscious effort to do better) 8 a: likely to notice, consider, or appraise (a bargain-conscious shopper) b: being concerned or interested c: marked by strong feelings or notions (a race-conscious society)
synonyms see aware

This week’s two main articles were fascinating, and the stuff of mental gymnastics. What constitutes consciousness? How does consciousness emerge? What role does attention play in consciousness? It occurred to me while reading that to fully understand how consciousness emerges, we have to be clear about what we mean by consciousness in the first place – hence the definitions above. It seems everything from perception to controlled processing counts as “being conscious,” if we are to take the definitions above. The question seems to be, however: if we take a continuum from automatic to controlled processes, with sensory perception at the automatic end and focused attention at the controlled end, where along this continuum would we place actual consciousness? And what role does affect play in the generation (or constitution) of consciousness? But I am getting ahead of myself somewhat… first, the readings.
Thagard, P., & Aubie, B. (in press). Emotional consciousness: A neural model of how cognitive appraisal and somatic perception interact to produce qualitative experience. Consciousness and Cognition.

In the Thagard and Aubie article, a neural model of emotional consciousness is described. The authors start by stating that any model of conscious emotional experience must be able to explain differentiation between varied emotional states; integration between varying mental processes, including perception, memory, judgment, and inference; intensity of emotional arousal; emotional valence (positive or negative); and the changes or shifts from one emotional state to another. They go on to suggest emotional consciousness must not be limited to either perceptions of bodily states or cognitive appraisals of one’s state, as early emotion theorists tended to suggest by defining emotions as either somatic perceptions or appraisals. Rather, Thagard and Aubie present a neurocomputational model in which emotional representations comprise both perceptions and judgments. Their model is based upon the idea that mental representations are generated not only by inputs from external or internal stimuli, but also by inputs between neural populations, such that one neural population is tuned to the firing of another (out of which more complex representations arise). From this perspective, neural structures are attuned to the firing patterns of other neural structures, and these patterns influence each other in a dynamic fashion. This allows for a model of parallel constraint satisfaction, wherein the activation of each structure constrains the activation of others until an acceptable solution is arrived at, based upon external and internal representations.
Thagard and Aubie term this the EMOCON model of emotional consciousness. The structures implicated in this process span brain stem to higher cortical regions, both cortical and subcortical: dorsolateral prefrontal cortex (DLPFC), orbitofrontal cortex (OFC), ventromedial prefrontal cortex (VMPFC), amygdala, insula, hippocampus, thalamus, ventral striatum, and raphe nucleus. Emotional consciousness does not result from any final output from one of these areas, but instead is an ongoing dynamic process resulting from feedback between these structures. In this way, both somatic sensations and cognitive processes are integrated, each playing a role in overall emotional consciousness. The EMOCON model satisfies each of the explanatory criteria mentioned above. For example, the dynamic interaction of these structures serves to explain the integration of various mental processes. The strength and pattern of neuronal firing within and between neural populations serves to explain variance in intensity and valence, as well as differentiation (using a neurocomputational model, the authors demonstrate how the strongest emotion gains full activation and suppresses other emotions, or how two emotions can become co-activated, representing mixed emotions).
Thagard and Aubie posit an important role for working memory in emotional consciousness. The current, most salient representation (including internal and external perceptions and associations from long-term memory) remains active in working memory. However, because representations in working memory decay over time, if the current representation is not further elaborated, or if attention shifts, the previously represented emotion begins to decay. The authors suggest this is what accounts for shifts in emotional consciousness, or emotional change. This is an interesting idea clinically – following the EMOCON model, depressive rumination or anxious worry involves continual manipulation of negative or threatening information and representations in working memory, which in turn serves to perpetuate the experience of depression or anxiety. To get a better sense of the importance of this process in the maintenance or severity of depression or anxiety, it would be interesting to see whether lower rates of rumination or worry are associated with poorer working memory in this population, and in turn whether poorer performance on working memory tasks could predict lower symptom severity.
The article goes on to present neurocomputational models of emotional consciousness, wherein final “solutions” are arrived at through explanatory coherence. Propositions are accepted through a process whereby neurons spiking in parallel cause other neurons to spike in either an excitatory or inhibitory direction until the network stabilizes, representing the final solution. However, the authors suggest emotional valence also plays an important role in the acceptance of a final solution. Emotional coherence occurs when the acceptance of a proposition is swayed by its emotional valence, such that positive valence encourages acceptance of a proposition and negative valence encourages rejection. This neurocomputational model provides further evidence for the integration of both cognitive and affective processes in overall consciousness.
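The settling process described here can be sketched as a tiny parallel constraint-satisfaction network. This is a generic sketch in the spirit of explanatory/emotional coherence, not the actual EMOCON implementation; the weights, decay rate, and valence values are invented for illustration:

```python
# Minimal parallel constraint-satisfaction sketch: units are propositions,
# positive weights encode explanatory support, negative weights encode
# incompatibility, and the network settles iteratively to a stable state.
# All parameter values are illustrative assumptions.

def settle(weights, valence=None, steps=100, decay=0.05, rate=0.1):
    """weights[i][j]: symmetric link between propositions i and j.
    valence[i]: optional emotional bias toward acceptance/rejection."""
    n = len(weights)
    acts = [0.01] * n  # small initial activations
    for _ in range(steps):
        new = []
        for i in range(n):
            net = sum(weights[i][j] * acts[j] for j in range(n) if j != i)
            if valence is not None:
                net += valence[i]  # emotional coherence: valence sways acceptance
            a = (1 - decay) * acts[i] + rate * net
            new.append(max(-1.0, min(1.0, a)))  # clamp activations to [-1, 1]
        acts = new
    return acts

# Two rival hypotheses that inhibit each other; hypothesis 0 carries
# slightly positive emotional valence, hypothesis 1 slightly negative.
weights = [[0.0, -0.5], [-0.5, 0.0]]
final = settle(weights, valence=[0.2, -0.2])
print(final[0] > 0 > final[1])  # True: the emotionally favored hypothesis is accepted
```

The valence term is the only “emotional” ingredient; removing it leaves plain explanatory coherence, which makes the authors’ point that affect acts as one more constraint among many.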
The take-home message of this article is: emotional consciousness is the result of the integration of perception, memory, attention, and sensation, which is further colored by emotional intensity and valence. What becomes conscious is the end result of this integration, facilitated by the manipulation of this representation in working memory. This leads to a question addressed in the second article for this week – is what becomes conscious what is attended to? Or do we attend to what is conscious?

Saturday, October 25, 2008

Week 6: Computational models of cognition and affect (Part 2)

Thagard, P. (2008). How molecules matter to mental computation. In P. Thagard, Hot thought: Mechanisms and applications of emotional cognition (pp. 115-131). Cambridge, MA: MIT Press.

In this chapter, Thagard argues that computational models of cognition need to consider the influence of neuromodulators at the molecular level. He argues for understanding processes in the brain as the result of both chemical and electrical activity. He points out that many chemical influences on synaptic activation result from the activity of cells far removed from the local neural network, such as when the release of hormones influences distant synaptic firing. Much of the chapter goes into technical detail on how chemicals such as hormones influence the action of neurotransmitters and synaptic activity, but the overall thesis is that if cognitive scientists are to construct accurate computational models, they must take into consideration the effects of chemical processes on the activation of these models, rather than approach them as if they were electrical computers. The effects of chemical processes, he argues, become even more important to computational models of cognition as the evidence for the role of emotion in cognition mounts. Evidence already exists for the effects of hormones and other neuromodulators on emotion; therefore, these same chemical reactions ultimately affect cognition. He notes that while it may not be necessary to consider every system at the molecular level, knowledge about molecular processes should be considered one type of map useful for certain levels of analysis – he offers the example that while a large map of Europe is useful for locating Switzerland north of Italy, another, more fine-grained map is needed for navigating the terrain of the Swiss Alps. I would agree that if a truly holistic account of cognition is to be developed, consideration of the molecular processes contributing to patterns of neural activation would only serve to enrich it.
While it might not serve cognitive science for researchers to attempt the formidable task of becoming expert in all levels of analysis, integrating findings from the molecular and network levels would likely strengthen the understanding of cognition at both levels and support a more holistic understanding.

Week 6: Computational models of cognition and affect (Part 1)

Wagar, B. M., & Thagard, P. (2004). Spiking Phineas Gage: A neurocomputational theory of cognitive-affective integration in decision making. Psychological Review, 111, 67-79.

In this article, Wagar & Thagard present a new computational model of cognitive-affective processing, called GAGE after the famous case of Phineas Gage (whose personality changed dramatically after a tamping iron destroyed much of his left frontal lobe, transforming him from a reliable, dependable, and level-headed figure into an impulsive and profane individual). GAGE focuses on the contribution of the ventromedial prefrontal cortex to gauging future consequences and behaving accordingly. Specifically, the VMPFC has been implicated in the ability to refrain from behavior leading to an immediate reward when that behavior has negative future consequences, or to delay immediate reward for a future, larger reward. In the GAGE model, Wagar and Thagard examine how the VMPFC and amygdala interact with the hippocampus to coordinate potential responses with bodily states associated with the current situation and with contextual information about the situation. They were particularly interested in the mechanism by which context moderates emotional reactions to stimuli.
Their model extends A. Damasio’s (1994) somatic marker hypothesis, whereby feelings or emotional states become associated with the long-term outcomes of certain responses to a given situation. The VMPFC is thought to play an important role in generating somatic markers. In this hypothesis, sensory representations of a response to a current situation activate knowledge about previous emotional experiences in similar situations. These markers act as biases influencing the higher cognitive processes that coordinate responses. Wagar & Thagard extend this by specifying four key brain structures involved in the process. First, the VMPFC responds in concert with the amygdala. However, Wagar & Thagard suggest the mechanism by which the amygdala response (the immediate emotional response) versus the VMPFC response (based on potential outcomes) wins access to higher cognitive processing is gating by the nucleus accumbens, which in turn gates information based upon contextual input received from the hippocampus.
The process is hypothesized to unfold as follows: 1) the VMPFC receives input from sensory cortices, representing behavioral options; 2) the VMPFC also receives input from limbic regions, most notably the amygdala, providing information about internal bodily states; 3) the VMPFC records signals defining a given response by encoding representations of the stimuli and comparing them to the behavioral significance of somatic states previously associated with the response; 4) the VMPFC generates a "memory trace" representing the action and its expected consequences; 5) through reciprocal connections to the amygdala, the VMPFC elicits a reenactment of the bodily states associated with the specific action; 6) this covert emotional reaction is passed on to overt decision-making processes; however, the transmission of this information is gated by the NAcc, as controlled by the hippocampus, because 7) the hippocampus controls VMPFC and amygdala throughput by depolarizing the NAcc based upon context: only those activation signals from the VMPFC and amygdala that are consistent with the current context produce spike activity in the NAcc and are allowed through. As stated in the article, "The hippocampus influences the selection of a given response by facilitating within the NAcc only those responses that are congruent with the current context" (p. 70).
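The gating step (6-7) is the crux of the model, and a toy sketch may make it concrete. The following is my own illustration, not Wagar & Thagard's actual network: the response vectors, the context vector, and the cosine-similarity threshold are all invented for the example, and in GAGE itself the gating emerges from spiking dynamics rather than an explicit similarity test.

```python
import numpy as np

def nacc_gate(responses, context, threshold=0.7):
    """Toy NAcc gate: pass through only those VMPFC/amygdala response
    signals that are congruent with the hippocampal context vector."""
    gated = []
    for label, signal in responses:
        # Cosine similarity stands in for "congruence with the current context"
        congruence = np.dot(signal, context) / (
            np.linalg.norm(signal) * np.linalg.norm(context))
        if congruence >= threshold:  # hippocampus "depolarizes" the NAcc
            gated.append(label)
    return gated

# Hypothetical response signals arriving at the NAcc
approach = np.array([1.0, 0.2, 0.1])      # tuned to a safe context
avoid = np.array([0.1, 0.9, 0.3])         # tuned to a threatening context
safe_context = np.array([1.0, 0.1, 0.0])  # hippocampal context signal

print(nacc_gate([("approach", approach), ("avoid", avoid)], safe_context))
# → ['approach']: only the context-congruent response reaches higher processing
```

The point of the sketch is simply that the hippocampus never generates responses itself; it only selects, from among the candidates the VMPFC and amygdala offer, the ones that fit the current context.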
This is where the article loses me a bit, because I am not entirely certain by what process the hippocampus is purported to match current contextual information with memory traces about past contexts, as generated by the VMPFC, in order to choose which potential response information to allow through. For example, given the tendency of individuals with anxiety and mood disorders, and particularly trauma, to misread current contextual information, this would be an important part of the process to understand. Individuals who have experienced trauma, for example, show a tendency to disproportionately map past representations onto current contexts. If Wagar & Thagard are suggesting the hippocampus matches the current context to past memory traces, this would imply that individuals with trauma have deficits in the ability of the hippocampus to accurately gauge the current context. (However, perhaps it is the result of affective influences on sensory processing, such that information about the current context is distorted and thereby matches memory traces more closely.) In addition, while I can understand the mechanics of this proposed process, it leaves me with open questions about the hippocampus's "motivation."
Nevertheless, Wagar & Thagard go on to present evidence from two studies of the GAGE model. The goal of the first study was to see if GAGE could simulate the experimental results of the Iowa gambling task reported by Bechara et al. (1994). In this task, participants are given a choice of four decks and are asked to make a series of card selections from among them. They are given $2,000 as a loan to start, and play the decks to try to capitalize on this loan. "Bad" decks give immediate rewards but long-term net losses, whereas "good" decks give larger delayed rewards and an overall net gain. The results of the initial experiment showed that normal participants quickly adopted a strategy of pulling cards from the good decks, thereby demonstrating the ability to delay reward for greater ultimate gains. By contrast, participants with VMPFC lesions never learned this strategy, and continued to act upon immediate rewards without regard to future consequences. In the current study, Wagar and Thagard trained GAGE on this same task. When the VMPFC was taken out of the model, GAGE acted only upon immediate reward, whereas leaving the VMPFC in the model resulted in more selections from the "good" decks and greater overall gains. In essence, without the influence of the VMPFC, the model was acting upon emotional reactions elicited by the amygdala and reflecting the immediate situation. When the VMPFC was included, decisions were based upon potential outcomes, not immediate affective appraisals. VMPFC and amygdala responses were modulated by gating in the NAcc, which in turn was modulated by the hippocampus, in line with the proposed model above.
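As a rough intuition for this result (not a reproduction of GAGE, which is a neural-network model), one can write a toy agent in which an intact "VMPFC" tracks each deck's running-average outcome, a crude stand-in for somatic markers about expected consequences, while a "lesioned" agent simply chases the largest immediate payoff it has seen. The deck payoff schedules below are simplified inventions, not Bechara et al.'s actual schedules.

```python
import random

# Toy Iowa gambling task. "Bad" decks (A, B) pay $100 per card but carry
# large penalties (net loss over time); "good" decks (C, D) pay $50 with
# smaller penalties (net gain over time). Schedules are invented.
def draw(deck):
    if deck == "A": return 100 - (250 if random.random() < 0.5 else 0)
    if deck == "B": return 100 - (1250 if random.random() < 0.1 else 0)
    if deck == "C": return 50 - (50 if random.random() < 0.5 else 0)
    return 50 - (250 if random.random() < 0.1 else 0)   # deck D

def play(trials=100, vmpfc_intact=True):
    """Crude agent: intact, it picks the deck with the best running-average
    outcome (a stand-in for somatic markers about future consequences);
    'lesioned', it picks the deck with the biggest single payoff seen."""
    avg = {d: 0.0 for d in "ABCD"}
    count = {d: 0 for d in "ABCD"}
    best = {d: 0.0 for d in "ABCD"}
    total = 0
    for t in range(trials):
        if t < 8:
            deck = "ABCD"[t % 4]              # sample every deck first
        elif vmpfc_intact:
            deck = max(avg, key=avg.get)      # expected long-run outcome
        else:
            deck = max(best, key=best.get)    # immediate reward only
        payoff = draw(deck)
        total += payoff
        count[deck] += 1
        avg[deck] += (payoff - avg[deck]) / count[deck]   # running average
        best[deck] = max(best[deck], payoff)
    return total

random.seed(1)
intact = sum(play(vmpfc_intact=True) for _ in range(50)) / 50
lesioned = sum(play(vmpfc_intact=False) for _ in range(50)) / 50
print(intact, lesioned)  # the intact agent should come out ahead on average
```

Even this crude version reproduces the qualitative pattern: without a record of long-run outcomes, the agent gravitates to the bad decks' large immediate payoffs and loses money overall.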
In a second study, the goal was to simulate the role of context in the integration of physiological affective arousal and cognition; specifically, the mechanism by which context moderates emotional reactions to stimuli. The model was asked to gauge emotional reactions as positive or negative while in a positive versus a negative context. The results of this study showed that when the NAcc was presented with two possible VMPFC representations, the hippocampally derived context drove GAGE's behavior, such that positive contexts elicited positive responses and vice versa. The researchers go on to suggest that the NAcc stores associations between the VMPFC and the hippocampus, eliciting representations based on the current context.
The two studies are summarized as follows: Study 1 demonstrates that "the VMPFC and the amygdala interact to produce emotional signals indicating expected outcomes and that these expected outcomes compete with immediate outcomes for amygdala output…temporal coordination between the VMPFC and amygdala is a key component to eliciting emotional reactions to stimuli" (p. 76). Study 2 demonstrates that context exerts an effect on cognitive-affective integration, such that "For the signals from the VMPFC and the amygdala to access brain areas responsible for higher order reasoning, context information from the hippocampus must unlock the NAcc gate, allowing this information to pass through" (p. 76). These conclusions lead to a few questions. First, does this suggest that the hippocampus overrides potential-outcome decisions presented by the VMPFC? In other words, if the VMPFC is creating memory traces based on past experiences in similar situations, is the VMPFC assuming one context that the hippocampus either rejects or confirms? Do the context appraisals formed by the hippocampus represent contextual memories or the actual current context? And how does the hippocampus form judgments about the current context, unless it is comparing it to prior encounters with that context? Isn't contextual memory formation a key function of the hippocampus? There seems to be almost a memory loop going on here: the VMPFC takes in sensory information and judges appropriate behavioral responses based upon the behavioral outcomes of past encounters with the stimuli, which would imply some form of contextual representation. This information is then gated according to how the hippocampus judges the current context, and whether the information presented by the VMPFC matches that judgment. The hippocampus encodes contextual memories about past encounters with the current context, which you would think influences the VMPFC's initial representations.
Is this process more dynamic than the GAGE model is implying?

Friday, October 24, 2008

Week 5: Language Acquisition and Processing (Part 2)

Jia, G., Aaronson, D., & Wu, Y. (2002). Long-term language attainment of bilingual immigrants: Predictive variables and language group differences. Applied Psycholinguistics, 23, 599-621.

This article presents a study exploring the long-term attainment of a second language, specifically the factors relating to long-term L2 decline. The study sought to answer four main questions: 1) given long-term L2 attainment decline versus long-term L1 increase, to which aspects of language proficiency and to which bilingual groups can the findings be generalized; 2) what are the mechanisms leading to the switching or maintenance of the dominant language between young and older arrivals; 3) what environmental or affective variables might be involved; and 4) are there differences apparent in other groups previously studied, namely Chinese-English and Spanish-English bilinguals, and are there additional social or cultural variables influencing differences in attainment between bilingual groups above and beyond language distance? To answer these questions, the study 1) investigated the grammatical proficiency of 44 Mandarin-English speakers to examine the relationship between long-term L1 and L2 attainment; 2) using a language background questionnaire, explored additional social, environmental, and affective variables; 3) collected normative data on L1 proficiency for Mandarin monolinguals between the ages of 9 and 16 to compare relative L1 proficiency between bilinguals and their monolingual counterparts; and finally, 4) gathered data on the long-term L2 attainment of other groups to examine the generalizability of the results to other bilingual groups (specifically, Korean-English, Mandarin-English, and Cantonese-English bilinguals and European-language English bilinguals).
In the initial study, participants were presented with a listening and a reading task designed to assess judgments about the grammaticality of sentences. Each task was presented in both English and Mandarin. Judgments in English included morphology (past tense, plurals, third person, present/past progressive, etc.) and syntax (articles, predicate structures, particle movement, pronominalization, etc.). Judgments in Mandarin included word order, inappropriate insertion of words, and inappropriate omission of words. Both grammatically correct and incorrect sentences were presented. Results showed that younger AoA was associated with higher accuracy on the English listening and reading tasks and lower accuracy on the Mandarin listening task. There was also a negative correlation, such that better performance on L2 was associated with poorer performance on L1. Higher performance was also associated with self-report ratings of proficiency in both L1 and L2. This study also assessed environmental and cultural variables. Higher performance on the English listening and reading tasks was associated with younger AoA and more years of education in the U.S., but not length of time in the U.S. Better performance on the English listening task was associated with more frequent usage of English at home, as well as more people speaking English at home. Better performance on the Mandarin task was associated with less frequent usage of English, and fewer people speaking English, at home. The gap between L1 and L2 proficiency was also associated with the level of the speaker's mother's proficiency in English, such that the more proficient the mother was in speaking English, the more proficient the children were. Turning to the normative data comparing proficiency in Mandarin between bilinguals and Mandarin monolinguals, the bilinguals tended to have arrived with less than adult proficiency in Mandarin.
The authors suggest future studies should examine whether level of L1 proficiency in early learners has an effect on L2 acquisition.
Examining the generalizability of these results with other bilinguals, Asian language speakers evidenced stronger AoA effects and significantly lower accuracy on the listening and reading tasks than European language bilinguals. This finding is in line with the proposals of Hernandez and Li, wherein the lexical difference between L1 and L2 influences levels of lexical attainment, as is evident in greater AoA effects in Chinese-English bilinguals than in Spanish-English bilinguals.
In general, the results of this study show that individuals who immigrate at a young age tend to switch dominant languages from L1 (Mandarin) to L2 (English), whereas older immigrants tend to maintain their dominant language. However, the maintenance of L1 as the dominant language was influenced by the extent to which English was spoken at home. This suggests it is not merely AoA effects on the ability to acquire the lexical aspects of L2 that prevent greater L2 attainment, but perhaps a combination of factors, including the extent to which the "language of life" is expressed in L2 rather than L1. Thinking back to Harris, Gleason, and Aycicegi (2006), it would be interesting to see the extent to which a late learner who is immersed in the L2 language and culture, such as by being married to a native speaker, would have less difficulty detecting grammatical errors than a late learner who remained in a household where L1 and its accompanying cultural practices were intact. However, this again makes me think of my own step-mother, who would very likely have great difficulty detecting all the grammatical errors in a listening task. If she performed poorly on the tasks in this study, one could assume AoA effects on her ability to acquire English are present to a large extent. She is an individual who speaks L2 to her husband, her children, her step-children, her coworkers, and her friends on a daily basis; in other words, her language of life has been English for the last 25 years. The only remaining contact she has with L1 is in conversations with her sisters. Despite the length of time she has been in the U.S. and living in an English-speaking household, she is less than proficient in her ability to speak English, particularly in her pronunciation of English words, which, according to this study, would have predicted lower attainment by her eldest daughter. That did not prove to be the case.
However, her eldest daughter was only six when she arrived, and was less than proficient in Vietnamese. Her daughter's superior attainment of English (such that she sounds no different than a native speaker) fits with the conclusions of this study that younger immigrants tend to acquire L2 to a higher proficiency and even switch dominant languages from L1 to L2, and perhaps her younger age of arrival canceled out the effects of her mother's lower language proficiency.
In sum, this study, by focusing on differences in grammatical ability, lends support to the proposal by Hernandez and Li (2007) that perhaps AoA effects are the result of a critical period for sensorimotor processing, which in turn affects the ability to discern lexical differences between L1 and L2, and therefore affects grammatical accuracy and attainment in L2. It would be interesting to see further studies of late learners, in which differences in environmental and social factors and their relationship to overall L2 attainment are explored.

Week 5: Language Acquisition and Processing (Part 1)

Hernandez, A.E., & Li, P. (2007). Age of acquisition: Its neural and computational mechanisms. Psychological Bulletin, 133, 638-650.

This article explores possible neural and computational underpinnings of AoA effects, and presents an argument for specific aspects of language acquisition that are more susceptible to AoA effects. Hernandez and Li first present a few of the existing theoretical accounts of AoA. Brown and Watson (1987) propose AoA effects in word learning are the result of “phonological completeness,” whereby early-learned words are stored holistically (thus more easily retrieved), while late-learned words are fragmented and need to be reconstructed. However, this theory has been disputed based on the fact that in a segmentation task reaction times were found to be faster for early-learned words than late-learned words, which the authors suggest is counter to what should be the case if the late-learned words were already fragmented. (This argument was a little difficult to follow simply because the article does not explain what a “segmentation task” actually is.) Another theory of AoA effects is the “cumulative frequency” hypothesis. This theory proposes early-learned words are more easily accessed because of the additive effects of their frequent usage over time. Late-learned words, according to this theory, would have been encountered less frequently and therefore would be less easily accessed. However, research in older adults did not find AoA effects of specific words to increase with age.
Another theory is the "semantic locus hypothesis," which posits that early-learned words have a semantic advantage over late-learned words because they are represented in the semantic network first, and affect the way later-learned words are semantically processed. This would suggest a semantic "map" is formed for early words that affects later words. Hernandez and Li point out that, if this were the case, bilingual speakers would match semantic concepts to two separate forms, AoA would transfer from one language to the next, and L2 lexical items should inherit L1's lexical AoA. This does not turn out to be the case, however, as L2 lexical speed is associated with the age at which a word was learned in L2 and not the corresponding age it was learned in L1. For the semantic locus hypothesis to work, therefore, there would have to be separate semantic stores for each language. Hernandez and Li therefore suggest AoA exerts its effects at the lexical level, not the semantic level. Further, evidence from computational modeling reviewed in the article suggests increased rigidity in word learning with age, which would be more suggestive of AoA effects at the lexical level, since one could assume semantic processing becomes more enriched with age, given the increased capacity to manipulate concepts and infer semantic meaning as one gets older.
The article goes on to present some intriguing evidence from neuroimaging studies: increased activation of Heschl's gyrus, implicated in auditory processing, when making lexical decisions about early-learned words, whereas increased inferior prefrontal cortex activation, implicated in effortful retrieval of semantic meaning, is evidenced for late-learned words. The authors suggest this may represent a reliance on auditory processing for early-learned words and a reliance on semantic processing for late-learned words. If the lexical/semantic distinction in AoA is correct, this would make sense, in that early encoding of lexical information would involve increased reliance upon auditory processing to detect subtleties between phonemes and morphemes in phrase construction in order to learn lexical rules. In line with this, another study cited in the article found that participants coactivated auditory representations when making lexical decisions pertaining to early-learned words, even though these words were presented visually. Hernandez and Li conclude that the neural substrates of early-learned lexical information appear to reflect more automatic processing, whereas late-learned words require additional processing, mapping lexical information onto semantic information.
This has interesting implications for the learning of a second language. Hernandez and Li review studies that have investigated semantic and lexical processing in L1 and L2. In neuroimaging studies, tasks involving syntactic processing evidenced greater AoA effects than tasks involving semantic processing. In a study of Chinese-English bilinguals, in which individuals read sentences containing syntactic violations, differences between bilinguals and native speakers arose for both syntactic and semantic violations, but were apparent at different ages. Differences between L2 learners and native speakers in the detection of syntactic violations appeared in participants who had learned English as early as 2 years of age, whereas differences in the detection of semantic violations appeared only after age 11. In addition, late learners rated themselves as less proficient in L2. Another study found that when late learners of L2 processed grammatical violations, they showed increased activation, relative to early learners, in areas implicated in motor planning and articulatory effort, suggesting early learners were able to process words more automatically whereas late learners required additional, more effortful processing. These differences were not found for semantic processing; instead, differences in semantic processing were found only when comparing late learners with high proficiency to late learners with low proficiency. The authors suggest, therefore, that AoA effects exist for grammatical violations, whereas proficiency is associated with semantic violations. Further, the article goes on to discuss how the magnitude of AoA effects in L2 acquisition is greatest when the differences between L1 and L2 are large. For example, AoA effects in judging grammatical errors are smaller in Spanish-English learners than in Chinese-English learners.
The authors suggest this difference in the magnitude of AoA effects results from the ability to detect subtle differences in syntax between the two languages, or what would constitute a syntactic violation in L2, which places phonological and articulatory demands on the learner, which in turn require a certain level of auditory and motor processing. They argue that it is here where AoA exerts its effects: perhaps the "window" for learning the lexical aspects of a second language has more to do with sensorimotor processing. They relate this to the critical period evidenced in the development of sensorimotor processing in visual domains, and in the behavioral and neural domains of learning music (perfect pitch; synchronization of motor movements to visually presented stimuli). They posit that language, in essence, reflects a sensorimotor ability. Because areas involved in sensorimotor processing undergo rapid organization and reorganization early in life, and evidence a loss of plasticity thereafter, perhaps the difficulty a late learner of L2 has in mastering the subtleties of L2 (such as noun-verb agreement in Chinese-English speakers, or verb irregularities in Spanish-English speakers) is the result of reduced plasticity in processing and articulating these subtle differences. The study by Jia, Aaronson, & Wu reviewed in the next posting lends further support to this hypothesis.
I immediately think of my own step-mother, who came to America in 1975 from her native Vietnam. After 33 years speaking English and 25 years living with an English-speaking husband in an English-speaking household, she has an excellent grasp of the semantics of English, and has even written books and poetry about her experiences using English. However, she still struggles with the grammatical subtleties of English, such as dropping the last sound of words, using correct noun-verb agreement, or choosing the wrong verb form in sentences. By contrast, her oldest daughter, who was 6 years old when they arrived in America, has no trace of a Vietnamese accent, and sounds no different than a native speaker of English. This account may also lend support to the emotional context of learning hypothesis of Harris, Gleason, and Aycicegi (2006) reviewed in the last posting: one could argue that regardless of the ability to master the grammatical or lexical aspects of language, the ability to master the semantic aspects of a second language continues throughout life, and grasping semantic meaning would also imply relating that meaning to the self through emotional processes. Therefore, one could develop a richer emotional connection to certain words from L2 learned late, and a critical window for pairing emotion and words may not necessarily exist. Instead, as Harris, Gleason & Aycicegi (2006) suggest, it may be the case that the greater emotional salience of early-learned words accounts for the differences found between the emotional intensity of reprimands in L1 versus L2, rather than AoA effects being at play.

Wednesday, October 22, 2008

Week 4: Special Topic: Affect and language

Harris, C.L., Gleason, J.B.,& Aycicegi, A. (2006). When is a first language more emotional? Psychophysiological evidence from bilingual speakers. In A. Pavlenko (Ed.), Bilingual minds: Emotional experience, expression, and representation. Clevedon, United Kingdom: Mulilingual Matters.

In this article, Harris and Gleason move away from computational accounts of language acquisition in bilingual speakers and consider the role of emotion. They note that, traditionally, investigations into emotional and subjective experiential accounts of language have been shunned due to the idiosyncratic nature of emotion, which runs counter to a research agenda that seeks to identify universality in linguistic processes. However, Harris and Gleason point out that there is significant overlap in subjective accounts of language, and that the subjective quality of specific experiences can be very similar across individuals. They point to consistent results found in recent research conducted by Dewaele (2004), in which greater emotional intensity was endorsed when speakers used taboo words in their native language than in a second language. Harris and Gleason also point out that when emotion has been considered in psycholinguistics, it has thus far been limited to investigations of monolingual speakers. The article goes on to review recent studies conducted by the authors further investigating the role of emotion in bilingualism; specifically, are utterances in L1 more emotionally arousing than utterances in L2?
In a series of studies, the authors used psychophysiological measurement (SCR) in conjunction with self-report to investigate emotional responses to specific phrases. SCR was chosen because these responses are generated not only by threat but also indicate the degree of relevance a specific stimulus holds for the individual. In the first study, participants included native Turkish speakers residing in the U.S. Participants were presented with a series of phrases in both L1 (Turkish) and L2 (English) that included taboo words, reprimands, aversive words, positive words, and neutral words. Whereas emotional reaction to taboo words was high in both Turkish and English, the greatest difference in responding was found for reprimands. The authors suggest the greater emotional responsivity to reprimands in L1 may be the result of the emotional environment in which these phrases were initially learned. Research suggests conversational aspects of autobiographical memory may be encoded in a specific language, and memories of being reprimanded may therefore be encoded in a way that pairs specific words with their emotional contexts. The authors also point out that language acquisition occurs in the same years as the development of emotion regulation. Reprimands by parents may elicit highly emotionally charged responses from the child, as maintaining attachment relationships with the parent (i.e., not jeopardizing this relationship through wrongdoing) is a paramount motivational goal. Memories of reprimands, therefore, are encoded as highly emotionally salient events. Following research on emotion and memory, it is likely that specific words, as emotionally relevant auditory stimuli, become associated with these events and encoded as emotionally salient. In support of this, greater responses were elicited in this study when participants were presented with reprimands in L1 (Turkish) as auditory stimuli rather than as visual stimuli.
As these reprimands are generally heard before being read, and reading reprimands may not occur in as emotionally charged a situation as hearing them, the visual stimuli of written reprimands may not have been encoded in memory with the same degree of emotional salience.
In a second study, Spanish-English bilinguals were included and categorized according to age of acquisition: early learners (born to immigrant parents, who learned English around age 5); balanced bilinguals (arrived in America around age 6 or 7); and late learners (learned English around age 12). The early learners endorsed English as their most proficient language, balanced bilinguals endorsed both, and late learners endorsed Spanish. Similar to the first study, the greatest difference between L1 and L2 responses was found for reprimands, but only among the late learners. Further, the emotional arousal of L2 was weaker than that of L1 only for those who acquired L2 past the age of 7. This suggests a decline in the emotional significance of L2 as a function of both age of acquisition and degree of proficiency in L2.
To investigate whether one language in these studies was inherently more emotional than the other, a third study examined ratings of the emotional intensity of phrases in Spanish versus English through a questionnaire study involving both American and Colombian students. This study found the items were rated similarly. However, another study conducted by the authors found that native Turkish speakers rated Turkish phrases higher than English speakers rated the English counterparts. The authors suggest cultural differences may exist in the approach to these ratings, wherein baseline ratings are higher or lower depending upon culture. This is an important observation; however, the article does not tie this finding back to the earlier comparison of American and Colombian students, so it is unclear whether similar effects exist between Spanish and English speakers in those cultures.
The article goes on to explore possible reasons why a first language might be more emotionally forceful than a second language. A few psycholinguistic theories are reviewed, such as Johnson and Newport's theory of a maturational mechanism in which genes for acquiring language are expressed more strongly in early childhood. This would suggest the affective primacy of L1 is related to a general language acquisition mechanism present early in life. Birdsong came to different conclusions, pointing to motivational factors. It is interesting to note, however, that both of these lines of research investigate grammatical knowledge, and investigate very different L1 languages: Johnson and Newport studied Korean- and Chinese-English speakers, whereas Birdsong studied Spanish-English speakers. As will be explored in a review of Hernandez and Li (2007; see separate post), the differences found between these investigations may have more to do with the lexical distance between L1 and L2 than with differences in affective or semantic significance. In other words, if two languages are more grammatically similar, differences in grammatical knowledge as a function of age of acquisition might be less distinct than differences accounted for by levels of motivation or length of time speaking. In contrast, languages that are grammatically very different might evidence greater AoA effects if, as Hernandez and Li suggest, an AoA effect exists for the acquisition of syntax in L2 (such that acquiring L2 syntax is easier at a younger age). This point will be explored further in a separate post.
Returning to the present article, another suggestion as to why L1 may be more emotionally salient is that because early language development occurs in tandem with the development of emotion regulation, early words and phrases may have more connections with the amygdala, whereas later-learned words may have more connections with cortical areas. However, recall that the studies above found the greatest differences in emotional salience for reprimands. Perhaps the greater emotional intensity of early-learned phrases has more to do with the more highly charged emotional context, whereas the same phrases learned later are less emotionally charged?
In line with this, Harris and Gleason propose a "context of learning theory" in which a language has a distinctive emotional feel because it is learned and/or habitually used in a distinctive emotional context. Words and phrases become stored in memory in a context-dependent manner. They go on to present anecdotal evidence that L1 is not always the most emotional language. They point to stories of colleagues who have come to the U.S. to study and subsequently have remained in the U.S., getting married and starting families. These individuals report their second language (English) as feeling more emotional. The authors suggest that experiencing highly emotional events, such as having children, in the context of L2 may result in L2 becoming more emotional. It would be interesting to test this hypothesis: using the same method as in the studies above, one could see whether the presentation of specific words or phrases related to caring for children (such as "crib," "bottle," or "the baby is crying") would be more emotional for these individuals in L1 or L2. In other words, would specific words or phrases accompanying these highly emotionally charged experiences be encoded with more emotional significance, much in the same way reprimands in childhood were found to be?
Harris and Gleason’s context of learning theory fits nicely with the Hernandez and Li article concerning age of acquisition. Harris and Gleason pose the question of whether a maturational mechanism may be at play in the results reported above, such that first-acquired languages are always more emotional. However, they point out that the early contexts in which language is first acquired are also generally more emotional (for example, naturalistic settings tend to be more emotionally charged than classroom settings), which may play more of a causal role. In addition, if differences in the emotional force between languages are a function of maturational mechanisms, how would this account for the anecdotal evidence presented above, in which individuals who acquired a second language later in life nevertheless report L2 as feeling more emotional? An important consideration, however, is which specific aspects of language are affected by age of acquisition (AoA), a point explored by Hernandez and Li (2007). AoA effects may be present only for syntax, and not for semantics, such that a “critical period” for processing syntactic information may occur early in life, whereas the processing of semantic information can occur throughout the lifespan. If this is the case, it would lend further support to Harris and Gleason’s argument that the greater emotional force of a first language has more to do with the emotional context in which the language was acquired and less to do with maturational mechanisms, because the syntax of the phrases presented in the studies above does not vary, only the semantic content. This would also help explain why L2 might still be more emotional when learned late, if L2 is frequently used in highly emotional contexts. It seems more plausible that specific words and phrases become highly associated with specific emotional contexts and are encoded in memory as such.
Again, to test this theory, it would be interesting to replicate the studies above using words or phrases congruent with highly charged emotional contexts experienced later in life, to see whether similar effects arise.

(n.b. - Further discussion of the Hernandez and Li article appears as a separate post.)

Sunday, September 28, 2008

Week 3: Culture, Language, Cognition and Affect

This week’s readings take an interesting look at cultural differences in the experience and expression of emotion, as well as the role of emotion in social relationships.


Potter, S. (1988). The cultural construction of emotion in rural Chinese social life. Ethos, 16, 181–208.

Potter’s article spotlights a fundamental difference between Americans and Chinese in the way that individual emotional experiences are both valued and used to guide action. Specifically, Potter focuses on the impact of experienced and expressed emotions on social structure. The article begins by pointing out that in the context of American culture, the “form and meaning” of social relationships are directly determined by the emotional experience of the individual, such that the experience of emotion both validates and maintains social structure and guides social action. For example, a “loveless” marriage is a legitimate reason for divorce, or anger at an institution is a valid reason to organize and press for change. Potter also points out that the important role emotions play in these social relationships puts an emphasis on continual emotional validation, such that individuals must reaffirm their emotional position through repeated emotional expression (e.g., parents hugging their children; spouses saying “I love you”), and without these repeated emotional displays the relationship may come into question or be considered devoid of meaning.
By contrast, Potter suggests individual emotional experience in rural Chinese life is viewed as irrelevant to the maintenance of social order or social action, perhaps even threatening to some extent. In rural Chinese life, the social order exists and remains intact regardless of the inner experience of the individual. This is a major distinction between the ways in which emotions guide experience in American versus Chinese culture. As Potter states, “To root all meaningful social experience in the self is, from one point of view, an affirmation of the self and the importance of the individual. From another point of view, it puts an intolerable burden on individual experience.” In China, social structure is viewed as continuous, carried down across generations, such that the transgressions of an ancestor can have implications for the status of an individual generations later, without reference to that individual’s current experience. Further, because individual emotional experience is irrelevant to the social order, an emotion is never viewed as a legitimate rationale for social action – emotional expression has no formal consequences. Therefore, emotions are expressed more freely, without regard to how the expressed emotion might affect social standing. Potter gives the example of a villager ranting in rage at an officer, without concern of retribution by the officer for this display. This is very different from our own cultural context, where shouting at an officer can land you in jail. The fundamental difference is that in one context the display of anger is not viewed as potentially changing the situation, and therefore is not viewed by the authority figure as threatening, whereas in the other context it is intended to produce some wanted result, which may be in conflict with the officer’s goal and requires the officer’s response.
Potter describes an interesting developmental implication of this alternate view of the role of individual emotional experience in social structure. In America, emotional displays of children are responded to immediately and therefore produce results for the child, whether positive or negative. Further, the responses to children’s emotional displays are judgment-laden, such that expressions of frustration or temper elicit negative reactions from caregivers, and pleasant expressions elicit positive reactions. The child therefore learns two important lessons – first, that expressed emotions can guide the actions of others and produce wanted or unwanted consequences; and second, that some emotions are inherently “bad” while others are inherently “good.” By contrast, because individual emotions are viewed as irrelevant to social structures in rural Chinese life, emotional displays by children are largely ignored, and are neither encouraged nor suppressed. Thus, the Chinese child learns a) emotional displays do not produce results from others, and b) emotions are neither good nor bad. It is through this developmental perspective, I think, that the difference between our own view of individual emotion and the views of rural Chinese can best be understood.
In reading this article, I found it very difficult to disentangle individual emotional experience from interactions with the outside world, and found myself often questioning, “yes, but is it true that the internal emotions of the Chinese individual do not guide interactions at all?” Looking at it from this developmental perspective, however, Potter’s argument is easier to understand. It is likely the case that internal emotional experiences do indeed color individual experience in similar ways across cultures, but seeing the different results that emotional expressions produce between the two cultures offers a greater understanding of the different ways emotions might be used to interact with the social world and motivate behavior. If we view emotional experience as existing in a sort of feedback loop, then it is easy to see how different the perspective on emotion between the two cultures might be. This is interesting clinically as well. One of the more recent trends in the treatment of emotional disorders borrows directly from eastern (particularly Buddhist) traditions that encourage taking a non-judgmental stance towards emotional experiences. Potter’s article, I think, in some ways highlights one mechanism by which taking this non-judgmental, accepting stance might be particularly difficult for Westerners to adopt. An accepting stance towards emotional experiences requires, to some extent, loosening the connection between internal emotional experiences and their potential consequences, something that, following Potter’s arguments, might be particularly challenging for Americans to accomplish, and is something to bear in mind clinically.

Tsai, J. L., Knutson, B., & Fung, H. H. (2006). Cultural variation in affect valuation. Journal of Personality and Social Psychology, 90, 288-307.

Tsai et al. present an interesting new theory of emotion that attempts to close the gap between what ethnographers and scientists have found in studying differences in emotion experience across cultures. The article begins by pointing out that while ethnographers by and large have reported wide variation in emotions across cultures, scientists have reported many similarities. They propose a new theory to account for this discrepancy – “affect valuation theory.” This theory posits that two forms of affect exist: “ideal affect,” the affect individuals would like to experience or value highly, and “actual affect,” the affect people actually experience. They further suggest that ideal affect is more related to cultural factors, while actual affect is related to temperamental factors. They set out to test these hypotheses in two separate studies, and chose to explore these differences between two cultures that differ along the lines of individualism versus collectivism.
Study 1 was conducted in an undergraduate sample of European Americans (EA) and Asian Americans (AA). They posited that the EA group would value high-arousal positive states (HAP), as these states are congruent with the individualist need to act upon the environment to achieve individual goals. By contrast, they posited the AA group would value low-arousal positive states (LAP), congruent with the collectivist goal of adjusting to the environment. Study 1 set out to demonstrate that 1) ideal affect differs from actual affect, and 2) culture influences pure ideal affect more than pure actual affect. Results showed that a two-factor model of ideal and actual affect provided a better fit than a one-factor model. Further, EA participants valued HAP more than AA participants. However, this study was unable to conclusively demonstrate the hypothesis that the difference between ideal and actual affect was mediated by individualism-collectivism and not by temperamental factors. Therefore, a second study was conducted to directly address this hypothesis. The sample in Study 2 included EA, Chinese Americans (CA), and Chinese from Hong Kong (CH). Results from this study lent further support to the two-factor model of affect. Results also showed that the EA group valued HAP more than both the CA and CH groups, and that the CA and CH groups valued LAP more. One further analysis was conducted to investigate the relationship between discrepancies in actual and ideal affect and depression. The discrepancy between actual and ideal HAP was significantly associated with depression in the EA and CA groups, but not the CH group. By contrast, the discrepancy between actual and ideal LAP was significantly associated with depression in the CA and CH groups, but not the EA group. Finally, affective traits were found to be more strongly associated with actual affect than ideal affect.
These findings lend support to the suggestion that discrepancies exist between how people feel and how they want to feel, and how they want to feel is influenced by cultural factors.
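The logic of the discrepancy analysis is easy to make concrete. The following sketch uses entirely hypothetical data (not Tsai et al.’s), simply to illustrate the kind of computation involved: for each participant, subtract actual HAP from ideal HAP and correlate that discrepancy with a depression score.

```python
import numpy as np

# Toy illustration of an ideal-vs-actual affect discrepancy analysis.
# All numbers are simulated; nothing here is from Tsai et al. (2006).
rng = np.random.default_rng(1)
n = 100

ideal_hap = rng.uniform(3, 5, n)               # how people want to feel (1-5 scale)
actual_hap = ideal_hap - rng.uniform(0, 2, n)  # how they report actually feeling
discrepancy = ideal_hap - actual_hap           # wanting to feel better than one does

# Simulated depression scores that track the discrepancy, plus noise.
depression = 2.0 * discrepancy + rng.normal(0, 0.5, n)

r = np.corrcoef(discrepancy, depression)[0, 1]
print(f"correlation between HAP discrepancy and depression: {r:.2f}")
```

In the real studies the correlation was tested separately within each cultural group, which is what allowed the authors to show that HAP discrepancies mattered for some groups (EA, CA) and LAP discrepancies for others (CA, CH).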
These studies, while focusing on cultural differences, also seem to have some clinical significance. For example, people with anxiety and mood disorders may report valuing positive emotions highly, in line with the American view that positive emotions are good and negative emotions are bad. However, clinical experience indicates that for many suffering from mood and anxiety disorders, the actual experience of positive affect is aversive, often provoking further anxiety. Viewing this through an “ideal” versus “actual” affect distinction might be helpful in understanding this clinical discrepancy, and helps to highlight one potential source of the low mood resulting from it.

Wilkins, R., & Gareis, E. (2006). Emotion expression and the locution “I love you”: A cross-cultural study. International Journal of Intercultural Relations, 30, 51-75.

In this study, the locution “I love you” was investigated along the following domains: a) when does it occur, b) with whom, c) in what language (native or non-native), d) about which topics, e) as part of what interactional sequences, and f) with what consequences. The study was conducted using an online survey enrolling undergraduates from a communications course. Results showed the locution was used most with lovers, followed by parents and grandparents and their respective children and grandchildren. The expression was used rarely with siblings or cousins, and most often never with neighbors and coworkers. Males used it less frequently than females. Domestic students used it more frequently overall than international students, and were more likely to use it with parents and children. Non-verbal usage was more common with spouses. Non-native English speakers used the locution more often in English than in their native language. Further exploration was conducted using a qualitative questionnaire, through which cultural differences emerged that were by and large similar to the differences highlighted in the Potter article. Specifically, differences appeared to emerge based upon the consequences of, or the relationship between, using the locution and maintaining social order, such that in cultures in which the expression of individual emotion has a direct bearing on relationships the expression was used more frequently than in cultures where expression has little or no bearing on the relationship. Cultures also differed on the weight of the phrase itself, with some cultures reserving it for only the most paramount circumstances, thereby preserving its special meaning, and others using it across multiple contexts of varying importance.
Because of the small sample size, it is hard to draw any specific conclusions about this study. Nevertheless, it lends further support to the idea that variations in emotion expression may be intricately woven within the specific social implications of that expression within the specific cultural context. At a much broader level, it speaks to the important interaction between affect, the cognitive interpretation of affect, and the consequences of that interpretation on behavior.

Saturday, September 20, 2008

This week’s readings were extremely thought provoking (no pun intended), and in many ways a must-read for clinical psychologists. In essence, each article argues for the elimination of the cognition/emotion distinction, providing evidence for the interaction and integration of emotion and cognition, and against the notion of cognition and emotion as separable constructs. The articles immediately brought to mind the current arguments put forth by some in the field against the utility of cognitive therapy – some have proposed deemphasizing cognitive reappraisal in treatment and moving towards more purely behavioral or experiential therapies. When considering the way information about the world and internal states is processed, however, it becomes clear that behavior, cognition, and emotion are so intricately intertwined that a lack of consideration for any one of these domains in therapy results in only a partial picture of the patient’s experience. Each of these articles is summarized below.

1) Duncan, S., & Barrett, L.F. (2007). Affect is a form of cognition: A neurobiological analysis. Cognition and Emotion, 21, 1184-1211.

Duncan and Barrett (2007) propose that, while there may be an argument for a phenomenological or functional distinction between cognition and emotion, this distinction should not be mistaken for an ontological one. They begin with a discussion of core affect, which they describe as valence and arousal, and which acts as a “neurophysiologic barometer that sums up the individual’s relationship to the environment” (p. 1186), with self-reported feelings functioning as “barometer readings.” They argue that core affect is in effect core knowledge, and forms the core of conscious experience, serving the primary function of evaluating the potential somatovisceral impact of external stimuli and organizing behavior accordingly. They go on to demonstrate that in addition to subcortical structures traditionally implicated in affective processing (amygdala, ventral striatum, insula), structures traditionally viewed as cognitive (orbitofrontal cortex, ventromedial prefrontal cortex, anterior cingulate cortex) are integral in the computation of the value of stimuli and in coordinating visceral and motor responses. For example, while regions of the amygdala function to evaluate the predictive value of stimuli, the OFC functions to generate an appropriate response based upon that prediction. The article goes on to discuss how sensory processing is widely distributed throughout the brain via interconnected structures, using the example of the amygdala’s role in visual processing. The amygdala can influence visual processing indirectly through top-down control of attention via the dorsolateral prefrontal cortex. The amygdala can also directly enhance visual processing through reciprocal projections to the ventral visual stream, by modulating the intensity of neuronal firing. Finally, the amygdala can influence visual processing through connections with regions of the brainstem, by modulating the release of neurotransmitters that in turn enhance sensory processing.
Further, Duncan and Barrett point out that core affective circuitry such as the amygdala, OFC, and VMPFC offers the only route by which sensory information reaches the brainstem and basal forebrain. A key way in which the amygdala modulates the processing of visual information is through heightened awareness of stimuli. In essence, the amygdala monitors the affective significance of stimuli, influencing both cortical control of attention and response generation, and sensory processing. The authors point out that the heightened activation of the amygdala seen in anxiety and mood disorders appears to affect sensory processing, leading to the heightened sensitivity to affective information seen in these disorders. As such, disruptions in core affective circuitry have direct consequences for sensory processing. Sensory information that is deemed to have greater significance to the somatovisceral homeostasis of the individual is preferentially processed, in line with studies reported in Treat & Dirks (2007). Duncan and Barrett go on to demonstrate how core affect influences memory, consciousness, and language, all traditionally considered domains of cognition. In sum, the article provides an argument for the inseparability of cognition and emotion, showing that all aspects of cognitive processing are integrated with the processing of affective information.

2) Pessoa, L. (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience, 9, 148-158.

Pessoa continues along a similar argument, using evidence from structural connectivity models to demonstrate the interrelationship between “cognitive” and “affective” regions. He argues that while distinctions between anatomical structures considered cognitive versus affective can be made, these structures are so highly interconnected that cognitive and emotional processing are far more integrated than purely anatomical divisions would suggest. In fact, following an analysis by Young and colleagues (2000), he suggests the amygdala acts as a central hub in a vast distributed network reaching nearly every region of the brain. He proposes a cognitive-affective control circuit for the processing of information, reaching from cortical areas to the brainstem and comprising the lateral PFC, the OFC, the anterior cingulate cortex, the amygdala, the nucleus accumbens, and the ventral tegmental area. He further proposes that behavior is the result of both affective and cognitive computations. He concludes by proposing that we must go beyond looking at interactions between cognition and emotion, and instead consider how cognition and emotion are integrated (how each in effect constrains the other). In essence, Pessoa suggests we consider how both affective information (such as the reward properties of stimuli) and cognitive information (such as perceptual processing, attention, and memory) are orchestrated to motivate behavior.

3) Thagard, P. (2008). How cognition meets emotion: Beliefs, desires, and feelings as neural activity. In G. Brun, U. Doguoglu & D. Kuenzle (Eds.), Epistemology and emotions. Aldershot: Ashgate.

Thagard takes a decidedly philosophical point of view, through which he argues for the inseparability of cognition and emotion in epistemology. He points out that epistemology has traditionally ignored emotion, and offers the counter viewpoint that the acquisition and expression of knowledge is inherently affective and therefore inherently emotional (to the extent that one knows something through evaluating the significance of that knowledge to oneself as an individual, or acquires knowledge based upon the motivation or desire to know something). He also argues against conventional views that mental states are propositional attitudes (relations between the self and a proposition), instead suggesting beliefs, desires, and emotions are the result of patterns of neural activity. While much of this article argues an alternative view of epistemology, one major point Thagard makes was well taken. Since knowledge and the acquisition of knowledge are inherently affective, the quest for knowledge and the way that knowledge is processed (perceived or understood) are influenced by the desires and goals of the seeker. This is an important consideration for anyone pursuing scientific knowledge. Given the evidence presented in the previous two papers about the affective influence on information processing, it is important to recognize that the pursuit of knowledge and the interpretation of that knowledge are always intertwined with our own individual beliefs and desires (i.e., affective motivation). Therefore, we are always operating within our own individual biases. Thus the importance of sharing and collaboration with peers in the quest for scientific knowledge!

Sunday, September 14, 2008

Week 1: Contemporary cognitive science: New directions

1) Eliasmith, C. (2003). Moving beyond metaphors: Understanding the mind for what it is. Journal of Philosophy, 100, 493-520.


This article proposes a new theory to describe representation and dynamics in cognitive science. It begins with a very brief overview of the history of cognitive science and the predominant theories of cognitive systems: symbolicism, which relies upon the metaphor of mind as computer; connectionism, which relies on more “brain-style” models of processing, conceptualizing computation and representation as occurring through connected networks and nodes; and dynamicism, which argues that the mind is a physical, dynamic system whose state changes over time and whose functions cannot be viewed as discrete or static. According to the article, early accounts of cognitive systems, driven by behaviorism, were limited to the observable end products of thought (limited to inputs and outputs), whereas in the mid-1950s - with the aid of technological advances that produced computers and facilitated a working metaphor for cognitive processes - researchers began looking at internal states, internal processes, and internal representations to understand cognition. As Eliasmith puts it, people began to peer “inside the black box.” The article goes on to suggest that the metaphors provided through symbolic, connectionist, and dynamic accounts of cognitive processing, while all helpful analogies, may nevertheless be constraining how cognitive processes are understood. The author proposes a new theory, representation and dynamics in neural systems (R&D theory), inspired both by modern control theory and recent findings from neuroscience. The author suggests modern control theory, which considers internal system state variables in the processing of inputs and generation of outputs, “offers tools better-suited than computational theory for understanding biological systems as fundamentally physical, dynamic systems operating in changing, uncertain environments” (p. 6).
R&D theory is governed by three main principles: (1) neural representations are the result of non-linear encoding and weighted linear decoding; (2) transformations of neural representations are carried out by connected populations of neurons, also using linear decoding; and (3) “neural dynamics are characterized by considering neural representations as control theoretic state variables. Thus, the dynamics of neurobiological systems can be analyzed using control theory.” The remaining sections of the article provide support for the theory, and present arguments for the utility of R&D theory above and beyond existing theories for explaining cognitive systems.
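Principle 1 can be made concrete with a small numerical sketch. The following is my own illustration of the general idea (not code from Eliasmith): a scalar stimulus is encoded non-linearly into the firing rates of a population of neurons with random tuning curves, and a set of linear decoding weights, found by least squares, recovers the stimulus from those rates.

```python
import numpy as np

# Illustrative sketch of non-linear encoding / weighted linear decoding.
# The tuning-curve parameters below are arbitrary choices, not values
# from the article.
rng = np.random.default_rng(0)

n_neurons = 50
x = np.linspace(-1, 1, 200)  # scalar stimulus values to represent

# Each neuron gets a preferred direction (+1 or -1), a gain, and a bias;
# its firing rate is a rectified-linear (hence non-linear) function of x.
enc = rng.choice([-1.0, 1.0], n_neurons)
gain = rng.uniform(0.5, 2.0, n_neurons)
bias = rng.uniform(-1.0, 1.0, n_neurons)
rates = np.maximum(0.0, gain * np.outer(x, enc) + bias)  # shape (200, 50)

# Weighted linear decoding: least-squares weights mapping rates back to x.
decoders, *_ = np.linalg.lstsq(rates, x, rcond=None)
x_hat = rates @ decoders

rms_error = np.sqrt(np.mean((x - x_hat) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```

With enough neurons the linear decode recovers the represented variable well, which is the sense in which a population of non-linear units can carry a clean representation; principle 2 then amounts to choosing decoding weights for a function of x rather than x itself.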

Given my limited knowledge thus far of existing theories of cognitive systems, this article was in general quite difficult for me to follow. While it provided a general account of existing theories, numerous references to key concepts relevant to the theories, with which I am unfamiliar, made it difficult at times to fully grasp some of the examples and explanations. I feel this article will be a good one to return to later down the road. Nevertheless, it was encouraging to read accounts of cognitive processing that line up with what I have learned so far about affective processing; specifically, that affect and cognition are best understood as the result of dynamic interactions between different levels of processing by interconnected brain regions. I look forward to re-reading this article when I have had a chance to digest a bit more of the cognitive literature.

2) Treat, T.A., & Dirks, M.A. (2007). Bridging clinical and cognitive science. In T.A. Treat, R.R. Bootzin, and T.B. Baker (Eds), Psychological clinical science: Papers in honor of Richard McFall. New York: Taylor & Francis Group.

In this article, Treat and Dirks extol the virtues of taking an interdisciplinary approach towards understanding the role of cognition in psychopathology. They discuss the limits of applying research paradigms from cognitive science directly towards answering clinical questions, and the shortcomings of staying within disciplinary boundaries when trying to extend cognitive science to clinical problems, such as through the use of cognitive therapy. They propose a new, integrated approach called “quantitative clinical-cognitive science.” The article goes on to summarize recent studies using this approach, in which the perceptual organization of two clinical populations is investigated. The studies cited each investigated specific differences in the processing of visual information, exploring whether processing was influenced in a disorder-specific manner. For example, using photographs of normatively heavy and normatively thin women with either sad or happy facial expressions, participants were asked to rate two photographs according to how similar they were. Participants who had endorsed bulimic symptoms consistently rated similarity according to body type, disregarding affective information. The same held true in a memory recognition task, where participants endorsing clinically significant eating disorder symptoms had better memory for body shape than affective information. Similar disorder-specific perceptual processing was found in a sample of college men who perceived unwanted sexual advances to be justified. When shown photographs of women who were or were not wearing revealing clothing and who varied in facial expressions of affect, participants judged photographs as similar based upon dress and not affect, and demonstrated greater memory for information about physical appearance than for facial expressions.
The article suggests that attention should be paid to specific types of cognitive processing biases, and that perhaps by understanding disorder-specific or individual-specific deficits in the processing of information we can develop cognitive therapies that more directly target these deficits. Further, the article suggests that research paradigms pay closer attention to tapping into these individual differences in processing when studying clinical phenomena. This suggestion is not only sound, but opens up a new and important approach to investigating how distorted cognitions might fuel psychopathology. For example, it is increasingly apparent that the way in which information is taken in and interpreted can affect both affective and behavioral responding. Understanding how information is being distorted in psychopathology might enable more finely tuned cognitive therapies that target core processes, rather than general deficits.

3) Thagard, P. (2005). Being interdisciplinary: Trading zones in cognitive science. In S.J. Derry, C.D. Schunn & M. A. Gernsbacher (Eds.), Interdisciplinary collaboration: An emerging cognitive science (pp. 317-339). Mahwah, N.J.: Erlbaum.

This article in general extols the virtues of cognitive science as an interdisciplinary science, recounting the history of both people and places integral to its formation. Thagard discusses how bringing together philosophers, linguists, psychologists, and computer scientists has contributed towards expanding ideas and conceptualizations about how the mind works. He also discusses how the field of cognitive science could not have evolved without the support of forward-thinking institutions, pointing to such things as Carnegie Mellon’s early support of a joint appointment in psychology and computer science, as well as joint degrees. He notes the early work of many of cognitive science’s “pioneers,” such as Noam Chomsky’s work on linguistics, George Miller’s “The Magical Number Seven, Plus or Minus Two,” and Marvin Minsky, Allen Newell, and Herbert Simon’s work in artificial intelligence. Thagard summarizes by saying the success of cognitive science lies in the establishment of these “fertile trading zones,” in which various disciplines and institutions have come together to share ideas and inform each other. It is through these collaborations that knowledge has flourished. In essence, cognitive science provides a good example of how sharing knowledge across disciplinary boundaries can advance our knowledge well beyond what is gleaned by sticking to our borders – a lesson clinical psychology would benefit from learning, and has begun to learn through growing collaborations between clinical psychology, neuroscience, and social cognition.