Monday, December 1, 2008

Week 10: Social cognition (Part 1)

Jackendoff, R. (2003). An agenda for a theory of social cognition. In R. Jackendoff (Author), Language, consciousness, culture: Essays on mental structure. Cambridge, MA: MIT Press.

Beer, J. S., Shimamura, A. P., & Knight, R. T. (2004). Frontal lobe contributions to executive control of cognitive and social behavior. In M. S. Gazzaniga (Ed.), The cognitive neurosciences III (pp. 1091-1104). Cambridge, MA: MIT Press.

Ochsner, K. N. (2007). Social cognitive neuroscience: Historical development, core principles, and future promise. In A. W. Kruglanski & E. T. Higgins (Eds.), Social psychology: A handbook of basic principles (2nd ed., pp. 39-66). New York: Guilford Press.

The articles for this week’s topic were only somewhat illuminating on social cognition itself, as they covered it in much broader strokes than would have been helpful as an introduction. Nevertheless, some interesting information was gleaned.

Beer et al. (2004) review several studies that describe the differential roles of lateral versus medial/orbitofrontal regions of the prefrontal cortex. Specifically, the lateral prefrontal cortex (LPFC) is repeatedly implicated in what are traditionally conceived of as cognitive processes, and the article focuses primarily upon the role of the LPFC in attention. Evidence from several lesion and animal studies implicates the LPFC in allocating attention by inhibiting or exciting neuronal activity in sensory regions (visual or auditory). Lesion studies demonstrate how damage to this region impairs the individual’s ability to filter out irrelevant information. In sum, the LPFC plays an important role in controlled information processing. By contrast, the medial and orbitofrontal regions of the PFC appear to be implicated in self-regulation and the processing of social information, serving to integrate emotional and cognitive information.

This idea is not new to us here, as in prior weeks we have seen this region implicated in the integration of cognition and emotion in decision making, reasoning, and behavioral inhibition (à la dear Phineas Gage). It makes sense, then, that processing social information would implicate this region as well. One could hypothesize that acting in socially appropriate ways requires a process similar to decision making, wherein one decides whether to follow an emotional impulse or inhibit that impulse based upon contextual and historical information. Beer et al. also review studies implicating the medial PFC in encoding information relevant to the self and in making inferences about others’ behaviors (theory of mind). This also makes sense, as this region appears to be important for integrating information about the emotional salience of stimuli with reward-based information about potential responses to that information. One could argue social information is inherently emotional, as it involves processing information that is ultimately pertinent to the maintenance and attainment of survival goals. We learn to navigate the social world in ways that maximize our own survival by maintaining proximity to important others who aid in our survival and maintaining distance from those who threaten it. At the risk of being reductionist, it seems to circle back to that old familiar theme of cognitive/affective integration.

Jackendoff’s article picks up this point by discussing the myriad social interactions that one must learn to navigate. Taking a much different approach, he argues for an analogy between social cognition and language development. Both language and social competency are attained through the interplay between a “hardwired” computational capacity and culturally driven, externally derived information. We are born with the capacity to learn language, but the nuances of what we learn and how our language is ultimately used are shaped by our language interactions with those around us. Similarly with social behavior: we have an innate capacity for social processes such as empathy, “cheater detection” (the ability to detect the intentions of others), theory of mind, emotional contagion (the ability to mirror the emotions expressed by others), and self-monitoring/self-regulation in the service of socially appropriate behavior. How these capacities are expressed is a function of the immediate world we live in, and as such is culturally and group specific. The more I read, the more beautifully orchestrated human nature seems – information gathered through the internal lens, the external synthesized with the internal, and the combination offered back out again to the world.

The third article in this series, by Kevin Ochsner, is less a treatise on social cognition than a manifesto for the new discipline of social cognitive neuroscience. The article lays out an agenda for this new line of research, encouraging a multi-layered analysis of social psychology that examines the behavioral, computational/representational, and neuronal levels. The chapter suggests ways in which the previously distinct disciplines of social psychology, social cognition, and cognitive neuroscience can work together to answer questions in a more constrained and meaningful way. While the chapter was a very interesting read, it did not provide much by way of specific research in social cognition, so it was less useful for our immediate purposes here. Perhaps the Thagard chapter will provide more fine-grained insight…

Sunday, November 23, 2008

Weeks 8 & 9: Reasoning and Problem Solving

Litt, A., Eliasmith, C., & Thagard, P. (forthcoming). Neural affective decision theory: Choices, brains, and emotions. Cognitive Systems Research.

Evans, J. St. B. T. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7, 454-459.

Thagard, P. (2007). Abductive inference: From philosophical analysis to neural mechanisms. In A. Feeney & E. Heit (Eds.), Inductive reasoning: Experimental, developmental, and computational approaches (pp. 226-247). Cambridge: Cambridge University Press.


This week’s readings focus on the processes of reasoning and decision making. Throughout each article and chapter, a common theme is reported: both reasoning and decision making appear to reach a final solution through the interaction of dual processing streams, one involving emotional processing and the other involving more cognitive processing. Evans (2003) presents a dual-process theory of reasoning, whereby two systems essentially “compete” for the final solution. “System 1” processing is rapid and automatic, drawing on concepts and beliefs formed through associative learning. It is through this system that innate and instinctual behaviors are accessed. Evans proposes that the end result of this rapid and automatic processing is what becomes available to consciousness. System 2, by contrast, represents much slower, methodical processing, making use of working memory systems to elaborate upon information from System 1, engaging in more sophisticated hypothetical thinking and forecasting, constructing mental models, and analyzing possible outcomes. Through this process, System 2 essentially has the capacity to override System 1.

Evans presents examples from studies in which syllogisms are used to evaluate the relationship between beliefs and analytical deductive reasoning. In one type of study, participants are asked to endorse only those conclusions that logically follow from a preceding premise, regardless of their beliefs. Results show participants have a very difficult time overriding prior beliefs and show belief bias in their endorsement of conclusions, rejecting otherwise logically deduced solutions. For example, in the syllogism “No nutritional things are inexpensive; Some vitamin tablets are inexpensive; Therefore, some vitamin tablets are not nutritional,” participants demonstrated difficulty endorsing the conclusion even though it logically follows from the preceding premises, having a difficult time “buying” that we can conclude some vitamins are not nutritional based on the fact they are inexpensive. If instructions emphasize the importance of endorsing conclusions only on the basis of their logical merit, participants are able to do so, but only with effort. Evans proposes that these studies capture the process by which System 2 inhibits and overrides System 1.

If System 2 requires working memory and other higher cognitive processes in order to override System 1, then measures of intelligence ought to correlate with the ability of System 2 to inhibit System 1. This has indeed been demonstrated, with higher IQ scores correlating with greater ability to find correct solutions in reasoning tasks. In short, the greater cognitive capacity an individual has, the better able they are to go beyond “gut reactions” to problems and find other possible solutions. This is analogous to the stereotype of the reckless, emotional decision-maker versus the cool, calm, collected and calculated one – Inspector Clouseau versus James Bond, if you will.
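To make this competition concrete for myself, here is a toy sketch in code. It is my own illustration, not a model from Evans’s article: a belief-driven System 1 answer competes with a logic-driven System 2 answer, and the chance of an analytic override grows with cognitive capacity and instructed effort (the function names and parameter values are purely hypothetical).

```python
import random

def system1(believable_conclusion: bool) -> bool:
    """Fast, automatic response: endorse whatever matches prior belief."""
    return believable_conclusion

def system2(logically_valid: bool) -> bool:
    """Slow, analytic response: endorse only logically valid conclusions."""
    return logically_valid

def respond(logically_valid, believable_conclusion, capacity=0.5, effort=0.5):
    """System 2 overrides System 1 with a probability that grows with
    cognitive capacity (cf. the IQ correlations) and instructed effort."""
    p_override = min(1.0, capacity * effort * 2)
    if random.random() < p_override:
        return system2(logically_valid)
    return system1(believable_conclusion)

# "Some vitamin tablets are not nutritional": logically valid but unbelievable.
trials = [respond(logically_valid=True, believable_conclusion=False,
                  capacity=0.9, effort=0.9) for _ in range(1000)]
print("endorsement rate (high capacity, high effort):", sum(trials) / len(trials))
```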

In his article, Thagard provides arguments for a neural account of abductive reasoning. Abductive reasoning refers to inference involving the generation and evaluation of explanatory hypotheses. Thagard argues that the process of abductive reasoning is inherently emotional: it begins when something is puzzling and is resolved when a target explanatory solution is arrived at, and both the puzzlement and the satisfaction with the explanatory solution are in essence emotional events. He suggests that the ability to find causal relations begins with very early perceptual processing, as demonstrated by studies showing infants as young as 2.5 months expect that a stationary object will be displaced when hit by a moving object. Thagard proposes there is a neurally encoded image schema that establishes the causal relationship, tying the neural structure representing the hypothesis to the neural structure representing the target explanation. Abductive inference is the “transformation of representational neural structures that produces neural structures that provide causal explanations.” Abductive inference involves not only verbal-linguistic processing but also inference from multiple perceptual modalities (such as inferring, from seeing a scratch on your car in a supermarket parking lot and a shopping cart nearby, that the shopping cart caused the scratch). All types of inference are inherently emotional in that what motivates one to find a causal explanation is the emotional thrust of puzzlement, and what marks a solution is the satisfaction that solution elicits. Here again, we see the interaction between emotion and cognition.

Litt, Eliasmith, and Thagard provide an interesting account of the role of emotion in decision making. Decision making involves the weighting of various response choices and their potential consequences. As discussed earlier in Week 6, this involves both emotional and contextual information, implicating the VMPFC, amygdala, and hippocampus. The current article extends this work, demonstrating through neurocomputational modeling how amygdala activation (representing emotional salience) influences ongoing response selection. In essence, the greater the emotional arousal generated by stimuli, the greater the subjective value placed on those stimuli by the OFC. Valuations are exponentially dampened or intensified depending upon the lowered or heightened state of arousal. The authors provide equations representing this process, demonstrating how the level of amygdala activation can in essence cancel out OFC responses. Greater negative predictions elicit higher levels of arousal, and there is greater aversion to potential losses than attraction to potential gains in predicted outcomes.

The authors go on to present fascinating accounts of the way in which framing a problem can influence decision making. Potential for loss is more arousing than potential for gain; therefore, the way a problem is presented, emphasizing overall losses as opposed to overall gains, influences which decision is made. For example, studies by Tversky & Kahneman (1981, 1986) found that when given a choice of two plans to control an outbreak expected to kill 600 people, participants were inclined to choose a plan described as saving 200 people but to reject a plan described as resulting in 400 people being killed. Objectively, both of these choices are exactly the same (200 people live, 400 people die), but when presented as an opportunity to save people the choice was more desirable than when presented as letting people die. A related phenomenon occurs in the famous trolley and footbridge dilemmas (Greene et al., 2001, 2004), wherein participants are far more willing to flip a switch that diverts a runaway trolley so that it kills one person instead of five than they are to physically push one person off a footbridge into the trolley’s path, killing that person but saving the rest. The outcomes are comparable (one person dies, several are saved), but the distance between the impersonal action of flipping a switch and the resulting death is greater than the distance between making physical contact with an individual and causing death. The latter elicits far greater amygdala and OFC activation than the former, suggesting greater emotional salience.

Another aspect of framing explains why people sometimes make choices that are objectively less valuable but hedonically more valuable. The authors give the example that winning $20 feels like a gain when the comparison is winning only $1, whereas winning $20 feels like a loss when the comparison is winning $100. It is objectively the same outcome, $20 is $20, but one outcome is more desirable than the other. The authors suggest the difference in desirability results from the distance between the actual outcome and the expected outcome. If you expect to win $100, $20 feels like a loss; if you expect to win $1, $20 feels like a gain.
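To get a feel for how arousal-modulated valuation and reference-point framing could work, here is a rough sketch in code. It is not Litt et al.’s actual model (their equations and parameters differ), but it captures the flavor of loss aversion, exponential dampening or intensification by arousal, and the $20-versus-expectation example; all constants are illustrative assumptions.

```python
import math

LOSS_AVERSION = 2.0       # assumed weight: losses loom larger than gains
BASELINE_AROUSAL = 50.0   # assumed arousal level at which valuation is neutral

def arousal(outcome, expectation):
    """Amygdala-like arousal grows with the gap from expectation,
    more steeply for anticipated losses than for gains."""
    gap = outcome - expectation
    return abs(gap) * (LOSS_AVERSION if gap < 0 else 1.0)

def subjective_value(outcome, expectation):
    """OFC-like valuation, exponentially dampened or intensified
    depending on whether arousal sits below or above baseline."""
    gap = outcome - expectation
    base = gap if gap >= 0 else LOSS_AVERSION * gap
    return base * math.exp(0.01 * (arousal(outcome, expectation) - BASELINE_AROUSAL))

# Framing: the same $20 feels like a gain against a $1 expectation
# and like a loss against a $100 expectation.
print(subjective_value(20, expectation=1))    # modest positive value
print(subjective_value(20, expectation=100))  # strongly negative value
```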
The article by Litt et al. maps well onto the article by Evans, wherein we can assume the hedonic value and emotional contribution to a decision result from “System 1” processes, arising from prior learned associations and innate beliefs (such as the belief that killing another human being is bad). The degree to which the emotional aspects of a decision or reasoning process win out depends on the degree to which further elaboration and hypothesizing about possible solutions, generated through “System 2” processes, override System 1 contributions. Regardless, it appears we cannot “escape” bottom-up, affect-driven influences on what would otherwise be construed as a cognitive process.

Saturday, November 22, 2008

Week 7: Consciousness (Part 2)

Srinivasan, N. (2008). Interdependence of attention and consciousness. Progress in Brain Research, 168, 65-75.

This article seeks to understand consciousness by exploring the relationship between consciousness and attention. First, an important consideration in following the arguments of the article is the way consciousness is defined: consciousness is taken to mean awareness throughout the article, rather than mere perception. The article presents two conceptualizations of the relationship between attention and consciousness. On the one hand, attention is thought to be necessary for conscious awareness, in that we are not conscious of that which we do not attend to. Evidence supporting this idea is presented, such as studies of inattentional blindness, wherein irrelevant stimuli are not reported as seen when participants were not aware the stimuli would be present (Mack & Rock, 1998), or studies of change blindness, wherein subtle changes in objects are not perceived outside of focused attention on the object (Rensink, 2002). On the other hand, consciousness is thought to precede attention, wherein selective attention operates on what is already conscious. From this perspective, perceptual processing leads to conscious perception, and attention acts to focus awareness in order to take appropriate action. While the article cites studies supporting this view (e.g., Lamme, 2003), the studies themselves are not presented, so it is hard to draw conclusions about this viewpoint. In essence, the entire argument represents a sort of “chicken-and-egg” dilemma.
It might be useful to return to the definitions presented earlier. Merriam-Webster defines consciousness (n) as: “the quality or state of being aware especially of something within oneself; the state or fact of being conscious of an external object, state, or fact.” Conscious (adj) is defined as: “perceiving, apprehending, or noticing with a degree of controlled thought or observation.” In other words, to be conscious of something we are not only perceiving it but also attending to it to some degree. If this is the case, I would argue that attention is a necessary part of what makes something that is perceived something we are consciously aware of. I would then place perception on one end of a continuum and focused attention on the other, with consciousness operating as degrees along this continuum. Srinivasan presents one interesting theory that, while not exactly the same concept, would support this view: Dehaene et al. (2006) have proposed that consciousness and attention may function as a 2x2 matrix in which one factor is stimulus strength (bottom-up) and the other is controlled attention (top-down). This results in four classes of processing: subliminal-unattended, subliminal-attended, preconscious, and conscious (although they don’t really define what is meant by “preconscious”). Again, degrees of consciousness depend on the interaction between perception and attention.
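A tiny sketch of how the four classes in that 2x2 scheme might be carved up in code; the thresholds and labels are my own placeholders rather than anything specified by Dehaene et al.:

```python
def processing_class(stimulus_strength: float, top_down_attention: float) -> str:
    """Classify processing by bottom-up strength crossed with top-down attention."""
    strong = stimulus_strength >= 0.5
    attended = top_down_attention >= 0.5
    if not strong and not attended:
        return "subliminal-unattended"
    if not strong and attended:
        return "subliminal-attended"
    if strong and not attended:
        return "preconscious"   # strong enough, but access awaits attention
    return "conscious"          # strong stimulus plus top-down amplification

print(processing_class(0.9, 0.1))  # preconscious
print(processing_class(0.9, 0.9))  # conscious
```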
Having degrees of conscious awareness might be important adaptively. At any given moment, there are certain aspects of our internal and external environment that are important to attend to, and others that are not. Without degrees of consciousness operating as a sort of filter, we would be inundated with stimuli and essentially incapacitated. Procedural memory can be thought of in this way: when we learn to ride a bike, we are initially aware of all the movements of our hands, feet, body, balance, etc. Once we get the hang of it, we no longer think of how our body needs to move in order to ride, and can shift our attention to our surroundings, thereby avoiding crashing into walls or being hit by a car. One can only imagine how difficult riding a bike would be if we had to divide our attention between awareness of our bodily movements and information about our surroundings simultaneously. There is some evidence to suggest obsessive-compulsive disorder may represent an inability to filter out irrelevant information from conscious awareness, causing an inability to disengage from stimuli. A recent study by Calamari et al. (in press) demonstrated that participants with OCD performed more slowly on a learning task than healthy controls, yet the participants with OCD were able to describe all the elements that went into their selection of specific movements. In other words, they were consciously attending to irrelevant information that affected their overall performance, whereas healthy controls were able to learn the task and filter out of awareness all the steps it took to perform it, thereby allowing them to perform more quickly. It would be interesting to pursue this line of inquiry further to better understand the implications for OCD. Perhaps this could shed further light upon the relationship between attention and consciousness.

Saturday, November 15, 2008

Week 7: Consciousness (Part 1)

CONSCIOUSNESS (noun)
1 a : the quality or state of being aware especially of something within oneself  b : the state or fact of being conscious of an external object, state, or fact  c : awareness; especially : concern for some social or political cause  2 : the state of being characterized by sensation, emotion, volition, and thought : mind  3 : the totality of conscious states of an individual  4 : the normal state of conscious life  5 : the upper level of mental life of which the person is aware as contrasted with unconscious processes

CONSCIOUS (adjective; from Latin com + scire “to know”)
1 : perceiving, apprehending, or noticing with a degree of controlled thought or observation (was conscious that someone was watching)  2 archaic : sharing another's knowledge or awareness of an inward state or outward fact  3 : personally felt (conscious guilt)  4 : capable of or marked by thought, will, design, or perception  5 : self-conscious  6 : having mental faculties undulled by sleep, faintness, or stupor : awake (was conscious during the surgery)  7 : done or acting with critical awareness (a conscious effort to do better)  8 a : likely to notice, consider, or appraise (a bargain-conscious shopper)  b : being concerned or interested  c : marked by strong feelings or notions (a race-conscious society)
synonyms see aware

This week’s two main articles were fascinating, and the stuff of mental gymnastics. What constitutes consciousness? How does consciousness emerge? What role does attention play in consciousness? It occurred to me while reading that to fully understand and consider how consciousness emerges, we have to be clear about what we mean by consciousness in the first place – hence the definitions above. It seems everything from perception to controlled processing is considered “being conscious,” if we are to take the definitions above. The question, however, seems to be: if we were to take a continuum from automatic to controlled processes, wherein sensory perception falls on the automatic end and focused attention falls on the controlled end, where along this continuum would we place actual consciousness? And what role does affect play in the generation of consciousness (or the constitution of consciousness)? But I am getting ahead of myself somewhat…first, the readings.
Thagard, P., & Aubie, B. (in press). Emotional consciousness: A neural model of how cognitive appraisal and somatic perception interact to produce qualitative experience. Consciousness and Cognition.

In the Thagard and Aubie article, a neural model of emotional consciousness is described. The authors start by stating that any model of conscious emotional experience must be able to explain differentiation between varied emotional states; integration between varying mental processes, including perception, memory, judgment, and inference; the intensity of emotional arousal; emotional valence (positive or negative); and the changes or shifts from one emotional state to another. They go on to suggest that emotional consciousness must not be limited to either perceptions of bodily states or cognitive appraisals of one’s state, as early emotion theorists tended to suggest by defining emotions as either somatic perceptions or appraisals. Rather, Thagard and Aubie present a neurocomputational model in which emotional representations comprise both perceptions and judgments. Their model is based upon the idea that mental representations are generated not only by inputs from external or internal stimuli, but also by inputs between neural populations, such that one neural population is tuned to the firing of another (out of which more complex representations arise). From this perspective, neural structures are in tune with the firing patterns of other neural structures, and these patterns of firing influence each other in a dynamic fashion. This allows for a model of parallel constraint satisfaction, wherein the activations of different structures constrain one another, based upon external and internal representations, until an acceptable solution is arrived at.
Thagard and Aubie term this the EMOCON model of emotional consciousness. Structures implicated in this process comprise both cortical and subcortical structures: dorsolateral prefrontal cortex (DLPFC), orbitofrontal cortex (OFC), ventromedial prefrontal cortex (VMPFC), amygdala, insula, hippocampus, thalamus, ventral striatum, and raphe nucleus – structures spanning the brain stem to higher cortical regions. Emotional consciousness does not result from any final output from one of these areas, but instead is an ongoing dynamic process resulting from feedback between these structures. In this way, both somatic sensations and cognitive processes are integrated, each playing a role in overall emotional consciousness. The EMOCON model satisfies each of the explanatory criteria mentioned above. For example, the dynamic interaction of these structures serves to explain the integration of various mental processes, and the strength and pattern of neuronal firing within and between neural populations serve to explain variations in intensity and valence, as well as differentiation (using a neurocomputational model, the authors demonstrate how the strongest emotion gains full activation and suppresses other emotions, or how two emotions can become co-activated, representing mixed emotions).
Thagard and Aubie posit an important role for working memory in emotional consciousness. The current, most salient representation (including internal and external perceptions and associations from long-term memory) remains active in working memory. However, because representations in working memory decay over time, if the current representation in working memory is not further elaborated, or if attention shifts, the previously represented emotion begins to decay. The authors suggest this is what accounts for shifts in emotional consciousness, or emotional change. This is an interesting idea clinically – following the EMOCON model, depressive rumination or anxious worry involves the continual manipulation of negative or threatening information and representations in working memory, which in turn serves to perpetuate the experience of depression or anxiety. To get a better sense of the importance of this process in the maintenance or severity of depression or anxiety, it would be interesting to see if lower rates of rumination or worry are associated with poorer working memory in this population, and in turn if poorer performance on working memory tasks could predict lower symptom severity.
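A toy way to picture the decay-versus-rehearsal idea (my own illustration, not anything from the article):

```python
def wm_trace(steps, decay=0.8, rehearse=False, boost=0.3):
    """Activation of a working-memory representation over time:
    it decays each step unless it is re-elaborated (rehearsed)."""
    activation, history = 1.0, []
    for _ in range(steps):
        activation *= decay                  # passive decay
        if rehearse:                         # rumination/worry as rehearsal
            activation = min(1.0, activation + boost)
        history.append(round(activation, 3))
    return history

print(wm_trace(8, rehearse=False))  # the emotional representation fades
print(wm_trace(8, rehearse=True))   # continual elaboration keeps it active
```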
The article goes on to present neurocomputational models of emotional consciousness, wherein final “solutions” are arrived at through explanatory coherence. Propositions are accepted through a process whereby neurons spiking in parallel cause other neurons to fire in either an excitatory or inhibitory direction until the network stabilizes, representing the final solution. However, the authors suggest emotional valence also plays an important role in the acceptance of a final solution. Emotional coherence occurs when the acceptance of a proposition is swayed by the emotional valence of that proposition, such that positive emotional valence encourages the acceptance of a proposition and negative emotional valence encourages its rejection. This neurocomputational model provides further evidence for the integration of both cognitive and affective processes in overall consciousness.
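Here is a minimal constraint-satisfaction sketch in the spirit of that description; the weights, the valence bias, and the update rule are my own illustrative assumptions, not the EMOCON equations:

```python
def settle(act, weights, valence, clamped=(), steps=200, rate=0.1, vweight=0.2):
    """Units excite/inhibit each other in parallel; each also gets a push
    from its emotional valence, until the network settles on a solution."""
    for _ in range(steps):
        new = dict(act)
        for u in act:
            if u in clamped:                       # observed evidence stays on
                continue
            net = sum(w * act[v] for (a, v), w in weights.items() if a == u)
            net += vweight * valence.get(u, 0.0)   # emotional coherence term
            new[u] = max(-1.0, min(1.0, act[u] + rate * net))
        act = new
    return {u: round(x, 2) for u, x in act.items()}

weights = {("hypothesis_A", "evidence"): 1.0,       # A explains the evidence well
           ("hypothesis_B", "evidence"): 0.3,       # B explains it weakly
           ("hypothesis_A", "hypothesis_B"): -1.0,  # rival hypotheses inhibit
           ("hypothesis_B", "hypothesis_A"): -1.0}  # each other
act = {"hypothesis_A": 0.01, "hypothesis_B": 0.01, "evidence": 1.0}
print(settle(act, weights, {"hypothesis_B": -0.5}, clamped={"evidence"}))
# -> hypothesis_A settles near 1.0; hypothesis_B is suppressed
```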
The take-home message of this article is: emotional consciousness is the result of the integration of perception, memory, attention, and sensation, which is further colored by emotional intensity and valence. What becomes conscious is the end result of this integration, facilitated by the manipulation of this representation in working memory. This leads to a question addressed in the second article for this week – is what becomes conscious what is attended to? Or do we attend to what is conscious?

Saturday, October 25, 2008

Week 6: Computational models of cognition and affect (Part 2)

Thagard, P. (2008). How molecules matter to mental computation. In P. Thagard (Ed.), Hot thought: Mechanisms and applications of emotional cognition (pp. 115-131). Cambridge, MA: MIT Press.

In this chapter, Thagard argues that computational models of cognition need to consider the influence of neuromodulators at the molecular level. He argues for understanding processes in the brain as the result of both chemical and electrical activity. He goes on to point out that many of the chemical influences on synaptic activation occur as the result of the activity of cells far removed from the local neural network, such as when the release of hormones influences distant synaptic firing. Much of the chapter goes into technical detail about how chemicals such as hormones influence the action of neurotransmitters and synaptic activity, but the overall thesis is that if cognitive scientists are to construct accurate computational models, they must take into consideration the effects of chemical processes on the activation of these models, rather than approach them as if they were electrical computers. The effects of chemical processes, he argues, become even more important to consider in computational models of cognition as the evidence for the role of emotion in cognition mounts. Evidence already exists for the effects of hormones and other neuromodulators on emotion; therefore, these same chemical reactions ultimately affect cognition. He points out that while it may not be necessary to consider systems only at the molecular level, knowledge about molecular processes should be considered one type of map that is useful for certain levels of analysis – he presents the example that while a large map of Europe is useful for locating Switzerland north of Italy, another, more fine-grained map is useful for navigating the terrain of the Swiss Alps. I would tend to agree that if a truly holistic account of cognition is to be developed, consideration of the molecular processes contributing to patterns of neural activation would only serve to enrich this development. While it might not serve cognitive science for researchers to attempt the formidable task of becoming expert in all levels of analysis, integrating findings from molecular and network levels of analysis would likely strengthen the understanding of cognition at both of these levels and support a more holistic understanding.
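A bare-bones sketch of the general point (my own illustration, not a model from the chapter): if a diffuse neuromodulator is treated as a multiplicative gain on synaptic efficacy, identical electrical inputs produce different activations under different chemical states.

```python
import math

def unit_activation(inputs, weights, modulator_gain=1.0):
    """Sigmoid unit whose effective synaptic weights are scaled by a
    hormone/neuromodulator level originating far from the local circuit."""
    net = sum(x * w * modulator_gain for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-net))

inputs, weights = [0.6, 0.2, 0.9], [0.8, -0.4, 0.5]
print(unit_activation(inputs, weights, modulator_gain=0.5))  # low chemical gain
print(unit_activation(inputs, weights, modulator_gain=2.0))  # high chemical gain
```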

Week 6: Computational models of cognition and affect (Part 1)

Wagar, B. M., & Thagard, P. (2004). Spiking Phineas Gage: A neurocomputational theory of cognitive-affective integration in decision making. Psychological Review, 111, 67-79.

In this article, Wagar & Thagard present a new computational model of cognitive-affective processing, called GAGE after the famous case of Phineas Gage (whose personality changed dramatically after a tamping iron destroyed much of his left frontal lobe, transforming him from a reliable, dependable, and level-headed figure into an impulsive and profane individual). GAGE focuses on the contribution of the ventromedial prefrontal cortex (VMPFC) to gauging future consequences and behaving accordingly. Specifically, the VMPFC has been implicated in the ability to refrain from behavior leading to an immediate reward if that behavior has future negative consequences, or to delay immediate reward for a future, larger reward. In the GAGE model, Wagar and Thagard examine how the VMPFC and amygdala interact with the hippocampus to coordinate potential responses with bodily states associated with the current situation and with contextual information about the situation. They were particularly interested in the mechanism by which context has a moderating effect on emotional reactions to stimuli.
Their model is an extension of A. Damasio’s (1994) somatic marker hypothesis, whereby feelings or emotional states become associated with the long-term outcomes of certain responses to a given situation. The VMPFC is thought to play an important role in generating somatic markers. In this hypothesis, sensory representations of a response to a current situation activate knowledge about previous emotional experiences in similar situations. These markers act as biases influencing higher cognitive processes that coordinate responses. Wagar & Thagard extend this by suggesting four key brain structures involved in this process. First, the VMPFC responds in concert with the amygdala. However, Wagar & Thagard suggest the mechanism by which the amygdala response (the immediate emotional response) versus the VMPFC response (based on potential outcomes of responses) wins access to higher cognitive processing is through gating by the nucleus accumbens (NAcc), which in turn gates information based upon contextual information received from the hippocampus. The process is hypothesized to unfold as follows: 1) the VMPFC receives input from sensory cortices, representing behavioral options; 2) the VMPFC also receives input from limbic regions, most notably the amygdala, providing information about internal bodily states; 3) the VMPFC records signals defining a given response by encoding representations of the stimuli and comparing them to the behavioral significance of somatic states that have previously been associated with the response; 4) the VMPFC generates a “memory trace” representing the action and the expected consequences of that action; 5) through reciprocal connections to the amygdala, the VMPFC elicits a reenactment of the bodily states associated with the specific action; 6) this covert emotional reaction is passed on to overt decision-making processes – however, the transmission of this information is gated by the NAcc, as controlled by the hippocampus, because 7) the hippocampus controls VMPFC and amygdala throughput by depolarizing the NAcc based upon context – the NAcc allows through only those activation signals from the VMPFC and amygdala that are consistent with the current context, permitting spike activity in the NAcc. As stated in the article, “The hippocampus influences the selection of a given response by facilitating within the NAcc only those responses that are congruent with the current context” (p. 70).
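To keep the gating step straight in my head, here is a schematic sketch of the idea; the labels and the simple context-matching test are my own simplifications, not the actual GAGE implementation:

```python
def nacc_gate(candidate_signals, current_context):
    """candidate_signals: list of (source, response, context_tag, strength).
    Only context-congruent signals pass through to 'higher' processing;
    the strongest of those drives the response."""
    passed = [s for s in candidate_signals if s[2] == current_context]
    return max(passed, key=lambda s: s[3], default=None)

signals = [
    ("amygdala", "grab the immediate reward", "safe_context", 0.9),
    ("VMPFC", "wait for the larger future reward", "risky_context", 0.7),
]
print(nacc_gate(signals, current_context="risky_context"))
# -> the VMPFC's outcome-based response wins when the context says "risky"
```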
This is where the article loses me a bit, because I am not entirely certain by what process the hippocampus is purported to match current contextual information with the memory traces about past contexts generated by the VMPFC, in order to choose which potential response information to allow through. For example, given the tendency of individuals with anxiety and mood disorders, and particularly trauma, to misread current contextual information, this would be an important part of the process to understand. Individuals who have experienced trauma, for example, show a tendency to disproportionately map past representations onto current contexts. If Wagar & Thagard are suggesting the hippocampus matches the current context to past memory traces, this would imply individuals with trauma have deficits in their hippocampus’s ability to accurately gauge the current context. (However, perhaps it is the result of affective influences on sensory processing, such that information about the current context is distorted and thereby may match memory traces more closely.) In addition, while I can understand the mechanics of this proposed process, it leaves me with open questions about the hippocampus’s “motivation.”
Nevertheless, Wagar & Thagard go on to present evidence from two studies of the GAGE model. The goal of the first study was to see if GAGE could simulate the experimental results of the Iowa gambling task (Bechara et al., 1994). In this task, participants are given a choice of four decks and are asked to make a series of card selections from the four decks. They are given $2,000 as a loan to start, and play the decks to try to capitalize on this loan. “Bad” decks give immediate rewards but long-term net losses, whereas “good” decks give larger delayed rewards and an overall net gain. The results from the initial experiment showed normal participants quickly adopted a strategy of pulling cards from the good decks, thereby demonstrating the ability to delay reward for greater ultimate gains. By contrast, participants with VMPFC lesions never learned this strategy, and continued to act upon immediate rewards without regard to future consequences. In the current study, Wagar & Thagard trained GAGE on this same task. When the VMPFC was taken out of the model, GAGE acted only upon immediate reward, whereas leaving the VMPFC in the model resulted in more selections from “good” decks and greater overall gains. In essence, without the influence of the VMPFC, the model was acting upon emotional reactions elicited by the amygdala and reflecting the immediate situation. When the VMPFC was included, decisions were based upon potential outcomes, not immediate affective appraisals. VMPFC and amygdala responses were modulated by gating from the NAcc, which in turn was modulated by the hippocampus, in line with the proposed model above.
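As a way of thinking through this result, here is a toy re-creation of the contrast (a sketch, not the GAGE network): one agent weighs learned long-run outcomes while the other chooses on immediate reward alone, with purely illustrative payoff values:

```python
import random

DECKS = {"A": (100, -1250, 0.1), "B": (100, -1250, 0.1),  # "bad": big immediate wins
         "C": (50, -250, 0.1),  "D": (50, -250, 0.1)}     # "good": net gain over time

def draw(deck):
    win, loss, p_loss = DECKS[deck]
    return win + (loss if random.random() < p_loss else 0)

def choose(estimates, vmpfc_intact):
    if vmpfc_intact:                                  # weigh learned net outcomes
        return max(estimates, key=estimates.get)
    return max(DECKS, key=lambda d: DECKS[d][0])      # chase immediate reward only

def play(trials=100, vmpfc_intact=True):
    total, estimates = 2000, {d: 0.0 for d in DECKS}
    for _ in range(trials):
        deck = choose(estimates, vmpfc_intact)
        outcome = draw(deck)
        total += outcome
        estimates[deck] += 0.1 * (outcome - estimates[deck])  # running average
    return total

print("intact model:  ", play(vmpfc_intact=True))
print("lesioned model:", play(vmpfc_intact=False))
```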
In a second study, the goal was to simulate the role of context in the integration of physiological affective arousal and cognition; specifically, the mechanism by which context moderates emotional reactions to stimuli. They had the machine gauge emotional reactions as positive or negative while in a positive versus a negative context. The results of this study showed that when the NAcc was presented with two possible VMPFC representations, the hippocampal-derived context drove GAGE’s behavior, such that positive contexts elicited positive responses and vice versa. The researchers go on to suggest that the NAcc stores associations between VMPFC and hippocampus to elicit representations based on the current context.
The two studies are summarized as such: Study 1 demonstrates that “the VMPFC and the amygdala interact to produce emotional signals indicating expected outcomes and that these expected outcomes compete with immediate outcomes for amygdala output…temporal coordination between the VMPFC and amygdala is a key component to eliciting emotional reactions to stimuli (p. 76).” Study 2 demonstrates that context exerts an effect on cognitive-affective integration, such that “For the signals from the VMPFC and the amygdala to access brain areas responsible for higher order reasoning, context information from the hippocampus must unlock the NAcc gate, allowing this information to pass through (p.76).” These conclusions lead to a few questions. First, does this suggest that the hippocampus overrides potential outcome decisions presented by the VMPFC? In other words, if the VMPFC is creating memory traces based on past experiences in similar situations, is it the case that the VMPFC is assuming one context that the hippocampus either rejects or confirms? Do the context appraisals formed by the hippocampus represent contextual memories or the actual current context? And how does the hippocampus form judgments about the current context, unless it is comparing it to prior encounters with the context? Isn’t contextual memory formation a key function of the hippocampus? There seems to be almost a memory loop going on here – the VMPFC is taking in sensory information and judging appropriate behavioral responses based upon the behavioral outcome of past encounters with the stimuli – which would imply some form of contextual representation. This information then is gated by the way in which the hippocampus judges the current context, and whether the information presented by the VMPFC is matching this judgment. The hippocampus encodes contextual memories about past encounters with the current context, which you would think influences the VMPFC’s initial representations. Is this process more dynamic than the GAGE model is implying?

Friday, October 24, 2008

Week 5: Language Acquisition and Processing (Part 2)

Jia, G., Aaronson, D., & Wu, Y. (2002). Long-term language attainment of bilingual immigrants: Predictive variables and language group differences. Applied Psycholinguistics, 23, 599-621.

This article presents a study exploring the long-term attainment of a second language (L2), specifically factors relating to long-term L2 decline. The study sought to answer four main questions: 1) given long-term L2 attainment decline versus long-term L1 increase, to which aspects of language proficiency and to which bilingual groups can the findings be generalized; 2) what are the mechanisms leading to the switching or maintenance of the dominant language between younger and older arrivals; 3) what environmental or affective variables might be involved; and 4) are there differences from other groups previously studied, namely Chinese-English and Spanish-English bilinguals, and are there additional social or cultural variables influencing differences in attainment between bilingual groups above and beyond language distance. To answer these questions, the study 1) investigated the grammatical proficiency of 44 Mandarin-English speakers to examine the relationship between long-term L1 and L2 attainment; 2) explored additional social, environmental, and affective variables using a language background questionnaire; 3) collected normative data on L1 proficiency for Mandarin monolinguals between the ages of 9 and 16 to compare relative L1 proficiency between bilinguals and their monolingual counterparts; and finally, 4) gathered data on the long-term L2 attainment of other groups to examine the generalizability of the results to other bilingual groups (specifically, Korean-, Mandarin-, and Cantonese-English bilinguals and European-language-English bilinguals).
In the initial study, participants were presented with a listening task and a reading task designed to assess judgments about the grammaticality of sentences. Each task was presented in both English and Mandarin. Judgments in English included morphology (past tense, plurals, third person, present/past progressive, etc.) and syntax (articles, predicate structures, particle movement, pronominalization, etc.). Judgments in Mandarin included word order, inappropriate insertion of words, and inappropriate omission of words. Both grammatically correct and incorrect sentences were presented. Results showed that younger age of arrival (AoA) was associated with higher accuracy on the English listening and reading tasks and lower accuracy on the Mandarin listening task. There was also a negative correlation, such that better performance in L2 was associated with poorer performance in L1. Higher performance was also associated with self-reported ratings of proficiency in both L1 and L2. This study also assessed environmental and cultural variables. Higher performance on the English listening and reading tasks was associated with younger AoA and more years of education in the U.S., but not with length of time in the U.S. Better performance on the English listening task was associated with more frequent English usage at home, as well as more people speaking English at home. Better performance on the Mandarin task was associated with less frequent usage of English, and fewer people speaking English, at home. The variance between L1 and L2 proficiency was also associated with the level of the speaker’s mother’s proficiency in English, such that the more proficient the mother was in speaking English, the more proficient the children were. Looking at the normative data comparing proficiency in Mandarin between bilinguals and Mandarin monolinguals, the bilinguals tended to arrive with less than adult proficiency in Mandarin. The authors suggest future studies should examine whether the level of L1 proficiency in early learners has an effect on L2 acquisition.
Examining the generalizability of these results with other bilinguals, Asian language speakers evidenced stronger AoA effects and significantly lower accuracy on the listening and reading tasks than European language bilinguals. This finding is in line with the proposals of Hernandez and Li, wherein the lexical difference between L1 and L2 influences levels of lexical attainment, as is evident in greater AoA effects in Chinese-English bilinguals than in Spanish-English bilinguals.
In general, the results of this study show that individuals who immigrate at a young age tend to switch dominant languages from L1 (Mandarin) to L2 (English), whereas older immigrants tend to maintain their dominant language. However, the maintenance of L1 as the dominant language was influenced by the extent to which English was spoken at home. This suggests it is not merely AoA effects on the ability to acquire the lexical aspects of L2 that prevent greater L2 attainment, but perhaps a combination of factors, including the extent to which the “language of life” is expressed in L2 rather than L1. Thinking back to Harris, Gleason, and Aycicegi (2006), it would be interesting to see the extent to which a late learner who is immersed in the L2 language and culture, such as by being married to a native speaker, would have less difficulty detecting grammatical errors than a late learner who remained in a household where L1 and its accompanying cultural practices were intact. However, this again makes me think of my own step-mother, who would very likely have great difficulty detecting all the grammatical errors in a listening task. If she performed poorly on the tasks in this study, one could assume AoA effects on her ability to acquire English are present to a large extent. She is an individual who speaks L2 to her husband, her children, her step-children, her coworkers, and her friends on a daily basis – in other words, her language of life has been English for the last 25 years. The only remaining contact she has with L1 is in conversations with her sisters. Despite the length of time she has been in the U.S. and living in an English-speaking household, she is less than proficient in her ability to speak English, particularly in her pronunciation of English words, which, according to this study, would have resulted in lower attainment by her eldest daughter; this did not prove to be the case. However, her eldest daughter was only six when she arrived, and was less than proficient in Vietnamese. Her daughter’s superior attainment of English (such that she sounds no different from a native speaker) fits with the conclusions of this study that younger immigrants tend to acquire L2 to a higher proficiency and even switch dominant languages from L1 to L2, and perhaps her younger age of arrival cancels out the effects of her mother’s lower language proficiency.
In sum, this study, by focusing on differences in grammatical ability, lends support to the proposal by Hernandez and Li (2007) that perhaps AoA effects are the result of a critical period for sensorimotor processing, which in turn affects the ability to discern lexical differences between L1 and L2, and therefore affects grammatical accuracy and attainment in L2. It would be interesting to see further studies of late learners, in which differences in environmental and social factors and their relationship to overall L2 attainment are explored.