By definition, ulterior motives are hidden: They are motives of which, as De La Rochefoucauld suggests, we would be ashamed were they made public. In fact, there is the possibility that some of our ulterior motives are so well hidden that even we don't recognize them.
Historically, psychology has not concerned itself much with ulterior motives; it prefers pieces of the puzzle that are more easily exposed—motives such as hunger and sex and our need for affection and achievement. But in its search for the forces that drive our behavior, it sometimes uncovers hidden motives.
The forces that drive us to action define motivation. The term derives from the Latin verb movere, which means "to move." Hence motivation deals with the forces, conscious or otherwise, that underlie our behaviors. Motivation theorists try to understand why we do certain things and not others, what instigates behavior, and why it stops.
Motives are very closely tied to emotions. Clearly, complex combinations of feelings such as hate, fear, love, disgust, anger, and on and on are the reasons for much of what we do.
Early motivation theorists looked at biology and behaviorism for their explanations. It seemed to them that certain basic instincts and drives that we inherit might account for much of our behavior.
After all, they reasoned, we are animals, and many animal behaviors appear to be biologically determined: migration in birds and butterflies; spawning and nesting in fish and fowl; mating in mouse and man; and on and on. These complex behaviors, labeled instincts, are common to all members of a species, are apparently unlearned, and are little affected by experience (Figure 6.1).
The instinct model. It is no longer a popular explanation for human behavior.
Many theorists, including XXXXX XXXXX (1890/1950), listed an enormous number of human instincts, among which were tendencies toward jealousy, cleanliness, clasping, biting, sucking, and even kleptomania! At one point, Bernard (1924) counted more than 6,000 human instincts. But most of these are not instincts at all. They're not common to all members of our species, and they don't remain unchanged by experience. They have little in common with instincts such as migration or nesting. Furthermore, lists of instincts explain very little about human behaviors. To say that we are jealous because we have a jealousy instinct does nothing to explain our jealousy: It merely labels it. This is what is called the nominal fallacy—the assumption that naming something explains it.
The nominal fallacy is not uncommon in psychology. To say, for example, that Robert sets fires because he is a pyromaniac or that Sandra has trouble learning because she is mentally challenged or that Nora does not eat because she is anorectic does not explain a single one of these behaviors: It merely labels it. To attribute fighting to an aggressive instinct is, in the final analysis, no more revealing than to say that people fight because they fight.
Although instincts are now studied in relation to animal rather than human behavior, many psychologists argue that inherited biological tendencies play an important role in human behavior. As we saw in Chapter 2, for example, evolutionary psychologists point out that biology "prepares" us to learn certain things such as language or aversions to bitter-tasting and possibly poisonous foods, or fear of snakes. Others suggest that our tendencies toward aggressiveness and toward eating too much of the wrong things may also have a biological basis (van Honk, Harmon-Jones, Morgan, & Schutter, 2010).
The drive-reduction model of motivation.
Another historical approach to explaining human behavior rests on the appealing and obviously true idea that people usually behave so as to achieve pleasure and to avoid pain. This notion, labeled psychological hedonism, or the pain-pleasure principle, seems, at first glance, to be a good explanation for human behavior, supported both by science and by anecdotal evidence (Riediger, Schmiedek, Wagner, & Lindenberger, 2009).
But, by itself, it explains nothing. Basically it says that whatever we do, we do because doing so is pleasurable now or is expected to lead to pleasure in the future, or because it moves us away from pain or is expected to move us away from pain in the future. The problem with this explanation is that it does not lead to any valid predictions or explanations unless pain and pleasure can be defined beforehand. But these are subjective states, not easily defined. If The Great Magician Mammoon eats broken bottles, we must assume that he, too, is driven toward pleasure and away from pain, but we would be hard pressed to predict his glass-eating behavior in the first place.
Still, there are clearly some common elements in our estimates of what is pleasant and unpleasant. As we saw in Chapter 4, certain stimuli, including food, drink, praise, and 50-dollar bills, are likely to be positive reinforcers. These are stimuli that satisfy our various needs. Hence, they are motivating: they provide a behavioristic explanation for our actions.
One way of looking at needs is to say that they drive behavior. Thus, the need for food gives rise to a hunger drive; for drink, to a thirst drive; and for sex, to a sex drive. These are basic physiological needs. From a hedonistic point of view, satisfaction of a basic need may be described as pleasurable and failure to satisfy a need as unpleasant.
That physiological drives are related to actual needs is clear with respect to hunger and thirst. Deprivation of food and water leads to detectable physiological changes that are responsible for our awareness of the needs. Drinking or eating then leads to a reduction in the drive—hence the label drive reduction. But that there is more to hunger and thirst than simple physiology and drive reduction is also clear. As we see later, a host of external stimuli, as well as cognitive and emotional states, contribute to our eating and drinking behaviors (Figure 6.2).
With respect to sex, the picture is much murkier. Sexual deprivation is not accompanied by tissue changes, although sexual urges are clearly influenced by hormonal factors. Perhaps much more important for humans, however, are the varieties of other factors—cognitive, emotional, perceptual, and cultural—that are inextricably linked with sexual behavior.
While the basic physiological needs seem clear, there is less agreement about what our psychological needs are. Likely candidates include the need for affection, belonging, achievement, independence, social recognition, and self-esteem.
Research supports the notion that outcomes such as emotional well-being and academic achievement are linked with satisfied psychological needs (Faye & Sharpe, 2008). However, unlike physiological drives that can be assumed to be common to most individuals, it is not entirely clear that everyone has the same psychological needs. In fact, many of these needs appear to be at least partly, if not entirely, learned. As a result, some people seem to have a higher need for acceptance or for achievement or for love than others.
Note, too, that physiological needs can be entirely satisfied, at least temporarily: You can eat or drink until you absolutely don't want any more. Psychological needs, on the other hand, are not so easily satisfied. Few people are ever totally sated with love or achievement or affection.
One of the weaknesses of earlier drive-reduction explanations is that they attributed behavior to tensions that accompany inner states and that are subsequently reduced as a result of appropriate behaviors. But even hunger is not entirely an internal state. If it were, people would always eat only as much as required to activate the physiological mechanism that says, "Whoa, you've had enough." But no, many people eat far more if the food looks and tastes especially good.
And some eat more if they're given a small appetizer first, even though the appetizer should have begun to reduce the hunger drive. Even rats will run a little faster toward the goal box when given a taste of food beforehand (Zeaman, 1949) (Figure 6.3). Why? Because the food has incentive value; it provides the rat with incentive motivation. Basically, incentive relates to the subjective value of a goal or reward. The higher the value, the greater the incentive; hence the more motivating the goal or reward.
Drives alone cannot explain some behaviors. In Zeaman's 1949 study, rats that had already been given some food (b) performed better on a maze they knew led to food than did presumably hungrier rats (a).
That different goals have different incentive value is evident in the fact that monkeys typically work harder to obtain a banana than a piece of lettuce (Harlow, 1953)—and that humans willingly pay more for a steak or a lobster than for a bowl of soup. These have more incentive value for us. Nor do we have to be given a taste beforehand; gifted as we are with imaginations, we can anticipate the consequences of our behavior. And there is little doubt that our anticipations are powerful influences in directing our activities. Anticipation underlines the importance of the cognitive aspects of motivation.
Maslow (1970) suggests that human actions may be accounted for by two systems of needs: the basic needs, and the metaneeds. The basic needs are called deficiency needs because when they are unsatisfied, they lead to behaviors designed to satisfy them. They include both physiological needs (food, drink) and psychological needs (security, love, and self-esteem).
Metaneeds are higher-level needs; they include cognitive needs, aesthetic needs, and the need for self-fulfillment. They're called growth needs because activities relating to them don't result from deficiencies, but from the organism's tendency toward growth.
According to Maslow, needs are hierarchically arranged as shown in Figure 6.4. What this means is that lower-level needs must be satisfied before other needs are attended to. Thus, a starving person might not hunger for knowledge or self-esteem—only for food (Harper & Guilbault, 2008).
Maslow's hierarchy of needs. That the pyramid is open shows that self-actualization is a never-ending process.
Maslow was concerned with developing a theory that would encompass the more "human" qualities of behavior that define our "higher nature." We have an overpowering tendency toward growth, explains Maslow. This defines our very essence and is absolutely fundamental to mental health and happiness. The highest and most important of our growth needs is the need for self-actualization (discussed in Chapter 8).
Unlike the more basic needs, whose satisfaction can be ensured through appropriate activities (eating, for example), self-actualization, as a need, is never satisfied. Self-actualization is rarely an achieved state but more an ongoing process. Depiction of Maslow's theory as a triangle is misleading, explains Rowan (1998), because it implies an end point to personal growth. Hence the open triangle in Figure 6.4.
Some early behavioristic positions tended to view us as passive victims of forces over which we have little control. Motives were seen as internal or external prods that pushed us this way and that.
Cognitive positions present a more active view of the human animal. They take into account the wealth of human emotions involved in behavior, dealing with these not as forces over which we have no control, but as feelings that we actively and sometimes very consciously manipulate. The cognitive theorist tries to understand the sequence of ongoing human behavior as it is mediated and controlled by ongoing cognitive activity, of which affect (emotion) is a central component.
Note, however, that not all behavioristic positions view the organism as totally passive and reactive. Even Skinner's rat is an active organism that explores the environment and that emits responses rather than simply reacting blindly to external forces. The main contrast between behavioristic and cognitive approaches to motivation is that the cognitive theorist looks at the role of rewards and punishments in terms of the individual's understanding and anticipation. In so doing, cognitive theories take into account what might well be the single most powerful explanatory concept in human motivation: our ability to delay gratification. Doing so involves thinking, imagining, and self-verbalization—some uniquely human activities.
Festinger and Carlsmith (1959) asked college students to perform an extremely boring and apparently pointless task as part of an experiment. They were then divided into three groups, one of which did nothing further, serving as a control group. The remaining two groups were asked to help the experimenter by lying to some new subjects, telling them that the experiment was interesting, exciting, and useful. All agreed to do so. For their services, one group of subjects was paid $20; the other group, ignorant of how much the others had been paid, was given a single dollar.
After members of these two groups had each spoken to the "new" subjects (actually confederates of the investigators), they were interviewed to uncover their true feelings about the experiment. Not surprisingly, the control group still found the experiment boring and useless. The $20 group also found it boring; they hadn't changed their minds. Strikingly, however, the group who had been paid a single dollar now thought the experiment was useful and quite interesting (Figure 6.5).
Cognitive dissonance and "forced compliance." In the Festinger and Carlsmith study, the group paid only $1 to lie experienced dissonance (conflict) about lying, and convinced themselves the task was really quite enjoyable. The group paid $20 felt justified in lying, experienced little dissonance, and continued to believe the experience had been boring.
Why should those who were paid a single dollar change their minds while the well-paid group did not? The answer, explains Festinger (1957), lies in the theory of cognitive dissonance. It predicts that whenever there is conflict between cognitions (for example, conflicting information about behavior, beliefs, values, and desires), people will try to reduce the conflict. Cognitive dissonance (cognitive conflict) may arise when people do things contrary to their beliefs, when they compromise principles, or when they observe people doing things they don't expect of them.
In the Festinger and Carlsmith experiment, participants experienced dissonance because they told a lie. But if they were well paid for telling the lie, there was little dissonance because the behavior seemed justified. So those paid $20 did not change their minds about how boring the task was. But those who were paid very little money had less justification for lying; they therefore experienced more dissonance. Changing attitude is one of the easiest ways of reducing dissonance—which is what happened.
Cognitive dissonance theory contradicts the saying "The grass is greener on the other side of the fence." When subjects are given a choice between two relatively equal options, the one they select is the one they later rank as the more desirable. Why? Cognitive dissonance theory provides an easy explanation: Whatever conflict results from having had to make an uncertain choice is quickly reduced when participants become convinced they've made the best choice. It should not be surprising that people who buy houses, cars, and other objects that require major decisions immediately begin to exaggerate the positive aspects of their choices. They are simply reducing dissonance or guarding against its appearance.
There are several ways of reducing dissonance, depending on the behavior from which dissonance originated (Figure 6.6). Changing attitudes is one common method, illustrated in the experiments just described. A second method involves changing behavior. Those who stop smoking, for example, reduce dissonance created by their smoking by changing their behaviors.
A model of cognitive dissonance. Everyone occasionally experiences conflicts between beliefs or desires and reality. There are many ways of trying to reduce cognitive dissonance.
Distorting information or perceptions can also reduce dissonance. When Gibbons, Eggleston, and Benthin (1997) interviewed individuals who had stopped smoking but later started again, they found that they had significantly distorted their perception of the risks associated with smoking.
Cognitive dissonance theory has become a relatively common approach to therapy. Changes in the attitudes and behaviors of patients with eating disorders or addictions can sometimes be brought about by deliberately creating cognitive dissonance. For example, Smith-Machin (2009) created cognitive dissonance among groups of female undergraduates with eating disorders using discussion, exercises, and homework assignments aimed primarily at countering cultural pressures and beliefs about women's bodies. She found significant reductions in dieting behaviors and bulimic symptoms among these women (bulimia and other eating disorders are discussed later in this chapter).
There are two competing forces in achievement behavior: the desire to excel and do well, and an opposing fear of failure. Those who fear failure too much are less likely to seek out challenging and difficult undertakings. Failure, as this girl is discovering, is often uncomfortable.
The urges that propel us toward achievement are complex and not easily defined or measured, although their influence can be extremely powerful. And those urges can vary widely from one individual to another. Some people have a strong drive to excel, to meet some inner standard of excellence, to do well. These people have a high need for achievement (nAch); others, not so much.
McClelland and his associates' pioneering investigations of need for achievement revealed several interesting findings (McClelland, Atkinson, Clark, & Lowell, 1953). First, and least surprising, those with the highest measured need for achievement typically achieve at a higher level. But this is not always the case; just having a burning drive to achieve and succeed brings no guarantees. Sometimes those who are more intelligent, more talented, more persistent, or simply luckier are the highest achievers.
Another interesting finding from this research is that children with high nAch scores tend to be moderate risk takers; those with lower scores tend to be either very low risk takers or very high risk takers. McClelland (1958) had young children play a ring-toss game where they could stand as close to the target as they wanted, and they would win prizes for accurate tosses. High need achievers tended to stand a moderate distance away. But low need achievers either ensured success by standing very close to the target or ensured failure by standing very far away. When there is very little probability of success, failure carries little stigma.
There are two competing forces involved in achievement behavior, explains McClelland: One is the desire to achieve success; the other is a fear of failure. What happens is that achievement-oriented behavior is a combined function of approach tendencies (resulting from a desire to achieve) and avoidance tendencies (resulting from fear of failure).
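The combination of approach and avoidance tendencies is often expressed using Atkinson's (1957) formalization, which is closely associated with McClelland's account. The formula below is that formalization, not something stated in this text, and the numeric motive strengths are hypothetical; it is offered only as an illustrative sketch of why high need achievers prefer moderate risks (as in the ring-toss study described above).

```python
# Sketch of Atkinson's resultant achievement motivation:
# (Ms - Maf) * Ps * (1 - Ps), where Ms is the motive to succeed,
# Maf the motive to avoid failure, and Ps the probability of success.
# The motive values used below are hypothetical.

def resultant_tendency(motive_success, motive_failure, p_success):
    """Approach minus avoidance: (Ms - Maf) * Ps * (1 - Ps)."""
    return (motive_success - motive_failure) * p_success * (1 - p_success)

# For a high need achiever (Ms > Maf), the product Ps * (1 - Ps)
# peaks at Ps = 0.5, so moderately risky tasks are most attractive:
tendencies = {p: resultant_tendency(0.8, 0.2, p) for p in (0.1, 0.5, 0.9)}
# tendencies[0.5] is largest; very easy (Ps near 1) and very hard
# (Ps near 0) tasks hold little attraction.
```

Note how the same algebra captures the ring-toss result: standing a moderate distance from the target maximizes the resultant tendency when the desire to succeed outweighs the fear of failure.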
Why should we fear failure? Because failure might say something about what we are like: It might reflect badly on our estimates of ourselves.
We don't all react the same way to our successes and failures, explains Weiner (2008). Some of us believe we do well or poorly because we are intelligent or not so intelligent; others think they are just lucky or unlucky. Attribution theory, which we discuss again in Chapter 10, looks at these issues.
Our attributions, says Weiner, depend on our locus of control—that is, on whether we are internally oriented or externally oriented. If I am internally oriented, I tend to take responsibility for the consequences of my own actions, attributing them either to my ability or to effort, both of which are under my control. But if I am externally oriented, I attribute success or failure to factors that are not under my control, such as the difficulty of the tasks I face or luck. Because these factors are not under personal control, the externally oriented individual does not accept responsibility for either success or failure.
Weiner also differentiates between causes that are stable and those that are unstable. Ability and task difficulty are stable factors; that is, they don't vary for a given task and a given individual. Effort, however, can be high or low; and luck can be present or absent. Thus, these are unstable variables (Figure 6.7).
Four important possible attributions for success and failure. Our explanations for why we succeed or fail can be internal or external; they can also invoke causes that are stable or unstable.
One of the striking and consistent findings from attribution studies is that people who are high in need achievement are much more likely to attribute outcomes to internal factors for which they have personal responsibility. They are likely to think that ability and effort—or lack thereof—are responsible for their successes and failures.
When they are successful, individuals low in measured need for achievement may attribute success to any of the four causal factors. That is, they might conclude they succeeded because they worked hard (effort), they are intelligent (ability), the task was exceptionally easy (difficulty), or they were just fortunate (luck).
However, when those who are low in need for achievement are not successful, they are more likely to attribute the outcome to lack of ability. They see failure as reflecting on their abilities. That, explains Dweck (2006), is because they naively believe that intelligence is fixed and unchanging, a belief that, as we saw in Chapter 5, is a myth. But even if it is a myth, this belief (the entity theory) shapes goals and efforts. Those who are convinced that intelligence is fixed often go to great lengths to convince people that they have a lot of it. Or they struggle to hide the fact that they don't have very much. What happens in either case is that the individual avoids challenges that might expose weaknesses.
If, on the other hand, you believe that intelligence is malleable, that it can be improved with effort (the incremental theory), you will develop what Dweck labels a growth mindset. Knowing that you can develop astonishing skills and talents if you work at it, you will be willing to accept challenges that would stagger others (Dweck & Grant, 2008) (Figure 6.8).
Beliefs about intelligence, achievement goals, and achievement behavior. Adapted from C. S. Dweck (1986). Motivational processes affecting learning. American Psychologist, 41, 1040–1048. Copyright © American Psychological Association. Used by permission.
Our personal opinions of how competent we are in different situations—our judgments of self-efficacy—are important in determining what tasks we choose and how much effort we put into them. High self-efficacy judgments ensure that the student pictured will not be intimidated by the formulas behind her.
Attribution theory presents an active view of people. We don't simply behave; we actively evaluate our behaviors and try to make sense of them. When we fail or succeed, we try to understand the reasons why. Depending on our predispositions, our personality characteristics, and our previous histories, we ascribe causes to specific factors.
Basic to our attributions is what we think of our personal competence—in Bandura's (1997) terms, our self-efficacy judgments. These are evaluations we make of our personal effectiveness in different situations. Those with high self-efficacy see themselves as capable and effective.
Judgments of self-efficacy are instrumental in determining what people do; hence they're important as motives. Under most circumstances, people don't undertake activities that they expect to perform badly. In contrast, those with high judgments of self-efficacy are more likely to accept challenges that might demonstrate the validity of their judgments. Slanger and Rudestam (1997) found that level of self-efficacy judgments was one of the variables that most clearly differentiated between high and low risk takers in sports such as sky-diving, kayaking, rock climbing, and skiing.
Self-efficacy judgments determine not only what tasks people will choose, but also how much time and effort they're willing to put in. Those who don't see themselves as very capable are far more likely to give up rather than persist when they encounter difficulties. That's one of the reasons why self-efficacy judgments are so important in schools. Williams and Williams (2010), for example, found a significant relationship between math achievement and self-efficacy in a study that looked at students in 33 different schools.
Not only does high self-efficacy contribute to high achievement, but there is a sort of reciprocal determinism at play in the sense that high achievement, in turn, contributes to a heightened sense of competence. In fact, direct experiences of success or failure (termed enactive because they result from our own actions) are probably the most important sources of information we have about our competence.
Second, we learn about our effectiveness from vicarious (secondhand) sources—that is, by comparing our performance with that of others. That we do better or less well than peers is highly informative.
A third source of influence is persuasion. If others express faith in your abilities, if they continually urge, "Why don't you try? I know you can do it," you might, in the end, come to believe just a little more in your effectiveness and competence.
Finally, explains Bandura (1997), emotions can have a direct impact on your estimates of capability. Under conditions of extreme arousal, for example, you might decide that you are capable of outrunning a threatening gorilla or of swimming across a river to save a baby. And if an activity makes you feel especially good, you are more likely to conclude that you're good at it than if it makes you feel frustrated and unhappy (Figure 6.9).
Sources of information that influence our judgments of personal effectiveness and competence.
We are not simple creatures, you and I, whose motives expose themselves easily to the probing psychologist trying to find another piece of the puzzle. No matter how overwhelmingly positive our judgments of self-efficacy and how high our expectations of success, there are other factors at play. Our choices, explain Eccles and Wigfield (2002), and our persistence and performance, are profoundly influenced by the value of the outcome we expect. And the cost of the activity, in terms of amount of effort required, sacrifices entailed, and other opportunities given up, also has to be taken into account. This is the basis of Eccles's expectancy-value theory (Wigfield, Tonks, & Klauda, 2009).
Expectancy, in this theory, is similar to a judgment of self-efficacy; it is defined by individuals' beliefs about how well they will do on a task. Value is a combined function of four factors: the personal importance of the task in terms of how it fits into the individual's plans and self-image (attainment value); the personal satisfaction the person gets from doing the task (intrinsic value); what the task contributes to short- and long-term goals (utility value); and its cost in terms of the amount of effort required, the probability of failure, associated stress, conflicting options, and so on.
In short, it's as though we make choices based on a sort of mental calculus. The important factors in our calculation include our expectations of success, our judgments of our effectiveness and competence (our self-efficacy), and the values and costs associated with each of the various options (Figure 6.10).
Eccles's expectancy-value theory of motivation, a sort of mental calculus we use to guide our choices and our efforts.
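As a rough illustration only: the theory itself does not prescribe a single formula, but a common simplification multiplies expectancy by net value. The weighting scheme and the ratings below are assumptions introduced for this sketch, not part of Eccles's model.

```python
# Illustrative sketch of an expectancy-value "mental calculus".
# All ratings are hypothetical values on a 0-1 scale; the additive
# value combination and subtraction of cost are assumptions.

def motivation(expectancy, attainment, intrinsic, utility, cost):
    """Multiply expectancy of success by net subjective value."""
    value = attainment + intrinsic + utility - cost
    return expectancy * value

# Comparing two hypothetical course options:
stats = motivation(expectancy=0.8, attainment=0.6, intrinsic=0.3,
                   utility=0.9, cost=0.5)   # 0.8 * 1.3 = 1.04
poetry = motivation(expectancy=0.9, attainment=0.2, intrinsic=0.8,
                    utility=0.2, cost=0.1)  # 0.9 * 1.1 = 0.99
```

Even this toy version shows the theory's central point: a high expectancy of success is not enough; a costly option with little attainment or utility value loses out despite being easier.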
In situations of high arousal, the body's physiological systems prepare the individual to respond. If arousal becomes too high, panic may result and the effectiveness of behavior may drop. Some studies indicate that many soldiers under attack fail to fire their rifles. Some even run away.
The mental calculus that is the basis of Eccles's expectancy-value theory describes a cognitive side of human motivation. It tries to explain how we choose among different options, how we decide on goals and actions.
There is another side to human motivation. It has to do with the fact that, as Turner and Goodin (2008) explain, when we succeed in attaining a goal, we feel positive emotions; failure can lead to negative emotions. Hence emotions are a fundamental part of human motivation. And some of the physiological aspects of emotion can be detected and measured. These are the changes that define arousal.
The term arousal has both physiological and psychological meaning. As a physiological term, it refers to activity of the sympathetic nervous system. Physiological arousal ranges from states of very low activity such as those characteristic of sleep or deep coma (alpha, theta, or delta waves; low electrodermal response, or electrical conductivity of the skin; low respiration and heart rate), to very high activity such as might be characteristic of extreme anger, fear, or panic.
As a psychological term, arousal refers to the alertness or vigilance of the organism and to the emotions that accompany physiological arousal. Thus, an individual at a very low level of arousal might be asleep; a moderately aroused individual is alert and attentive; one who is extremely aroused may be in a state of extreme emotion.
The effectiveness of our behavior is closely tied to arousal level. At very low levels of arousal, such as when you are asleep or almost asleep, you might have trouble responding to the simplest question. But if what later awakens you is the fact that your house is on fire, you might immediately become so highly aroused that your responses would not be any more appropriate. High arousal, evident in increasing anxiety and sometimes even fear or panic, explains why some highly competent students do poorly in tense oral or written examinations. Considerable research indicates that anxiety reduces school performance (Chen, 2009).
It seems, explained Yerkes and Dodson (1908) more than a century ago, that there is a level of arousal at which behavior is most effective. Lower or higher levels of arousal are associated with increasingly ineffective behavior. This observation, now known as the Yerkes-Dodson law, has been widely accepted in psychology (Landers, 2007) (Figure 6.11).
The Yerkes-Dodson law. As arousal increases, performance becomes more effective until an optimal level is reached. Increases beyond this level lead to decreasingly effective behavior.
In a classic experiment, Hebb and his colleagues (Hebb, 1972; Heron, 1957) paid college students to do absolutely nothing. For as long as they wanted, they lay on a cot, getting up only to go to the bathroom and to sit on the edge of their cots at mealtime. The remainder of the time they lay down, their ears covered with U-shaped foam pillows, their eyes with translucent visors, their hands with cardboard cuffs. Overhead, a fan hummed constantly to mask any other noises. Although they could hear, see, and feel, their stimulus world was unchanging.
This was the first of a large number of studies on sensory deprivation. For example, in a recent demonstration, the BBC (UK) arranged for six volunteers to spend 48 hours alone in totally darkened nuclear bunkers (Total Isolation, 2008). The outcome in this demonstration was highly consistent with previous studies, most of which found that participants typically cannot endure the isolation for very long. In the original experiment, two days was the usual duration. In conditions of more extreme deprivation—where, for example, subjects are immersed in brine solutions in silence and darkness, floating unattended—length of stay is considerably shorter.
Among the striking findings from these studies is the observation that most subjects eventually experience some impairment of perceptual and intellectual functioning. Tasks that are very simple before isolation become extremely difficult and sometimes impossible after prolonged sensory deprivation. In the BBC demonstration, one subject's performance on simple memory tasks had dropped by 36 percent. In addition, many isolation participants experience emotional changes and rapidly fluctuating moods ranging from nervousness and irritability to anger or fear.
The most striking finding involves the appearance of hallucinations among some sensory deprivation subjects (Mason & Brady, 2009). These uncontrolled "apparitions," typically visual, are mildly amusing at first and can be dismissed if the subject wishes. Later, however, they become more pronounced, less amusing, and quite persistent. Three of the six participants in the BBC study reported hallucinations involving objects such as snakes, zebras, and oysters; and a fourth became convinced that her sheets were soaking wet.
Sensory deprivation research strongly suggests that humans have a need for variety in sensory stimulation. If this is so, perhaps some of our otherwise unexplainable behaviors might be accounted for—behaviors such as curiosity and exploration. That we have a need for stimulation is the basis for an arousal theory of motivation.
Arousal theory is premised on two assumptions (Hebb, 1972). The first, already mentioned, is the Yerkes-Dodson law—the belief that there is an optimal level of arousal for different behaviors and that this level varies both for different individuals and for different behaviors. For example, playing well in an intense, physical sporting activity such as football may require that players be "psyched up" (highly aroused). The same level of arousal may be a detriment in a highly cognitive task like writing an examination.
The second assumption is that individuals behave so as to maintain an optimal level of arousal. This is evident, for example, in highly arousing situations where an intensely frightened individual tries to escape or change the situation. One consequence of escaping from an angry mob will surely be a reduction in arousal.
The assumption is double-edged, however: It predicts not only that people behave so as to reduce arousal when it is too high, but also that they will attempt to increase arousal when it's inappropriately low. Evidence of this is clear in sensory deprivation studies, where participants whistle, sing, talk to themselves, try desperately to engage the experimenter in conversation whenever food is brought in, and otherwise try to increase the amount of stimulation they're receiving. One of the effects of sensory stimulation is to increase arousal.
The physiological changes that accompany arousal are brought about by activity of the autonomic nervous system, which is not ordinarily under the individual's conscious control. We cannot easily "will" our skins to become more conductive to electricity, our hearts to beat faster, and our brain-wave activity to change.
Still, there are ways in which we can control these changes. For example, simply imagining highly arousing scenes, or seeing them depicted in pictures, films, or words, can clearly increase physiological arousal. Hence one source of arousal is internal, cognitive activity.
A second obvious source of arousal is external stimulation. The sheer amount of stimulation, however, may not be especially important. In the original isolation experiments there was considerable sensory stimulation, including the sound of a fan, the pressure of the cardboard cuffs, sensations relating to clothing and diffuse light, and perhaps even tastes inside the mouth and vague odors in the environment. But this stimulation was constant and unchanging, and arousal level consequently dropped dramatically. Recordings of subjects during isolation indicate that their brain-wave functioning was more like that of the deeper stages of sleep than like that of subjects who are awake (Zubek, 1973).
The types of stimulation that appear to cause the greatest arousal are those associated with emotion. At one extreme, emotion-laden experiences can increase arousal dramatically, as is evident in panic situations.
In general, the most arousing stimuli are those that capture interest and attention—that is, stimuli that are surprising, novel, meaningful, ambiguous, or complex (Berlyne, 1960). These qualities of stimuli are linked with emotions such as interest and excitement; their opposites are associated with boredom and apathy. All these emotions are closely linked with motivation.
Emotions have presented a great deal of difficulty for psychologists. They cannot easily be defined or described; they are difficult to measure; their physiological bases don't clearly differentiate among them. Yet they are a fundamental part of being human. In fact, it is difficult to imagine any human experience that does not involve emotions.
Emotions (or affect) have two broad dimensions. The first is intensity, which, for most emotions, can range from low to high. Thus, anger might range from annoyance to rage; disgust, from mild aversion to utter revulsion; joy, from contentment to absolute ecstasy.
Emotions vary not only in intensity, but also in terms of whether they are positive or negative. Joy, love, happiness, and interest are generally positive feelings; fear, anger, rage, and disgust are more unpleasant.
Emotion can be defined as the "feeling" or "affective" component of human behavior, where feeling describes a subjective state that can be pleasant or unpleasant, as well as intense or mild. In addition, emotion entails detectable physiological changes and is sometimes accompanied by predictable behaviors. For example, anger is generally accompanied by increased heart and respiration rate as well as other physiological changes, and it may also be evident in changes in facial expressions, voice, and body language.
There are several thousand words referring to emotion in the English language, so it is very difficult to agree on the precise number of different emotions of which we are capable. Many psychologists have tried to summarize our emotions with lists of basic emotions. For example, Izard (2009) suggests that we can easily distinguish among six distinct emotions: sadness, anger, disgust, fear, interest, and joy/happiness. Others, such as Parrott (2004), divide basic emotions into related secondary and tertiary emotions. For example, secondary emotions related to joy include cheerfulness, zest, contentment, pride, optimism, enthrallment, and relief. And each of these secondary emotions might be described more precisely in terms of a tertiary emotion. For example, cheerfulness brings with it the possibility of many other emotions, including amusement, bliss, gaiety, glee, jolliness, joviality, delight, satisfaction, and ecstasy.
Not only have we not agreed about how many emotions we have; we cannot always agree about whether a given emotion is positive or negative. Is surprise pleasant or unpleasant? In fact, we cannot really say because emotion is not a property of stimulation, but a property of our subjective reaction to stimulation.
What emotion is this woman feeling?
To define an emotion is to speak about what we presume to be the subjective experience of being in that emotional state. Emotion, however, has other dimensions. Among the most important of these for human interaction is its expression. Emotions can be expressed in behavior, as, for example, when you run from an angry competitor or chase after something you want. In these examples, the motivational component of the emotion is clear. Most emotions may be seen as having approach or avoidance tendencies, although these will not always be expressed overtly.
Emotions may also be expressed verbally. Much of the richness of human conversation derives from the expression and interpretation of emotion. So, too, our books, movies, and art forms play on and with our emotions.
Emotions are expressed nonverbally as well, most extensively studied through facial expressions. Ekman (2005) and others have noted that a number of emotional expressions appear to be innate and common to all members of our species. Raising the eyebrows is a quasi-universal expression of greeting or acknowledgment; smiling, a universal gesture of approval and friendliness.
Many emotional expressions, however, are learned and culture-specific. Klineberg (1938) examined some Chinese novels to discover how their authors describe emotional expression. It might seem strange to us that sticking out the tongue means surprise; clapping the hands indicates anxiety or disappointment; and scratching the ears shows happiness. The Chinese might be equally surprised to find us frowning when we are puzzled, chewing our lips in concentration, pounding fist in palm to signal determination or anger, and wetting our lips in anticipation.
What is Emotion?
Emotions chart the landscape of life. People know the words and the facial expressions that are distinctive to feelings such as happiness, sadness, fear, and anger. When we become emotional, our brain notices and our body reacts. A new view treats emotions as social sensory systems that help us form attachments and negotiate social hierarchies, a view that has also been extended to other species.
In addition to how we experience and express it, we know that emotion is related to activity of the autonomic nervous system, as we saw earlier in this chapter. Popular literature abounds with descriptions of physiological activity that are so meaningful, given their contexts, that the reader need not be told anything more about the nature or intensity of the emotion. "My heart beat fast/stood still/quaked/shivered/jumped/rattled in my throat/stopped." "My hair stood on end." "Shivers ran up my spine." "My hands were cold/clammy/perspired/trembled/shook."
Early theorists made much of these physiological changes. William James (1890/1950) went so far as to say that emotion results from them, an idea being developed simultaneously by a Danish psychologist, Lange. As a result, the theory came to be known as the James-Lange theory. In essence, it maintains that an emotion-related stimulus gives rise to certain physiological changes and that the individual perceives these physiological changes and then interprets them as an emotion. As James (1890/1950) put it, "We feel sorry because we cry . . . afraid because we tremble" (p. 1006).
Two other researchers, Cannon (1929, 1939) and Bard, objected to the James-Lange theory. They thought our awareness of physiological changes is too slow to explain how we can instantly react to emotion-laden situations. Besides, physiological changes that accompany different emotions don't appear to differentiate among them; they're not specific enough to bring about distinct emotions. Also, Cannon had demonstrated with cats that if he cut nerves linking the brain to those parts of the body most obviously involved in emotional reactions, the cats still behaved as though they "felt" emotion. So Cannon and Bard proposed the Cannon-Bard theory.
The Cannon-Bard theory suggests that when an individual perceives an emotion-related situation or object, the hypothalamus sends messages both to the cortex, where the emotion is felt, and to the body, where physiological reactions take place. As a result, awareness of emotion and awareness of physiological changes are independent, although they result from the same source of stimulation. It is the organism's awareness of the emotional significance of an experience that gives rise to an emotion.
It turns out that both the James-Lange and the Cannon-Bard theories are probably at least partly right. In an early study, Maranon (1924) injected human participants with epinephrine, a compound very similar to noradrenaline, and found that while they experienced physiological reactions similar to those that accompany intense emotion, they did not feel any specific emotion.
Schachter later replicated the Maranon studies to try to clarify the relationship between physiological changes and cognitions in producing emotional states (Schachter & Singer, 1962). Participants in an experiment were told they would receive injections of a new drug that would improve their performance on a test of visual perception. Some subjects received injections of epinephrine; the others received a placebo. After the injections, they were given one of three types of information about the effects of the drug. The informed group was told what the actual effects of the drug might be and how long these effects normally last; the ignorant group was told that the drug would have absolutely no side effects; and the misinformed group was told that a slight numbness and itchiness might result.
Members of each group were then assigned to one of two experimental conditions, or to a control group. The control group waited quietly for a period of time, and members were then questioned about their emotional reactions. Not surprisingly, they reported no particular emotion, as had been the case in the earlier Maranon studies.
In one experimental group, the euphoria group, subjects were left to wait in a room with another individual who was introduced to them as a fellow subject. In fact this person was a confederate of the experimenters who had been instructed to perform a standardized "euphoric-manic" routine—dancing, playing basketball with crumpled pieces of paper, making little projectiles and launching them with rubber bands, playing with a hula hoop, and otherwise trying to convey the impression that this was really a lot of fun.
In the second experimental group, the anger group, subjects were left with a confederate and were asked to fill out a long and intensely personal questionnaire while waiting. It asked about the personal hygiene of every member of the subject's family—how often they took a bath and brushed their teeth, and who had the most disagreeable body odor; subjects weren't allowed to answer "no" or "none." One question read: "With how many men (other than your father) has your mother had extramarital relationships? 4 and under ___; 5–9 ___; 10 and over ___." The confederate, acting according to precise directions, became progressively angrier while completing the questionnaire, finally crumpling it up and leaving the room.
Following this, all subjects were individually interviewed to uncover their emotional reactions. Results were as follows: Subjects who were uninformed or misinformed and who had received epinephrine exhibited and felt anger or euphoria, depending on the experimental condition; informed subjects and those who had received placebos typically did not.
Schachter's explanation for these findings is the basis of his two-factor theory of emotion: One factor is an undifferentiated state of physiological arousal that underlies all emotions—as James had argued. The second factor is the individual's interpretation of arousal in light of what caused it. Thus, subjects who had a logical explanation for their physiological states (the informed group) experienced no emotion. They simply labeled their physiological states according to the explanations given by the experimenter. Those who had been misinformed or not informed attributed their physiological states to emotions they thought they should be feeling and labeled these emotions according to the confederate's behavior. In short, although physiological changes are clearly involved in emotional reaction, the individual's cognitive label for the change determines the nature of the emotion (Figure 6.12).
Three historical explanations of emotion.
Both the Cannon-Bard theory and Schachter's two-factor theory hold that the nature and intensity of an emotion depend largely on the individual's conscious understanding of the emotional significance of an event. These are, in a sense, attribution theories of motivation. Both emphasize that it is the individual's conscious understanding of the meaning of a situation (or of the reasons for physiological arousal) that determines the emotion (Moors, 2009).
Some theorists point out, however, that not all emotional reactions require a conscious understanding of a situation. Scherer (2005), for example, argues that emotional responses can also result from unconscious cognitive activity. Someone experiencing a classically conditioned fear response, for example, might not always have a clear, conscious explanation for the accompanying emotion.
Other theorists have proposed that emotional responses are represented in the brain in complex, linked networks that can be activated by a wide range of related stimuli (Lewis, 2005). These emotional responses may well have been classically conditioned in the first place. The behaviorist John B. Watson (1930) suggested many decades ago that infants are born with three basic emotional reactions: fear, rage, and love. Each of these is elicited by specific stimuli as an unlearned reflex. For example, stroking the infant evokes emotions related to love; confining the infant might elicit rage; and fear can be brought about by loud noises. Over time, these emotions become classically conditioned to a wide range of other stimuli (see Chapter 2). Whenever an individual has an emotional experience, information about the situation, the individual's behavior, and a variety of other related details is stored in memory and linked with other related emotions. As a result, previously neutral stimuli can come to have emotional significance for an individual.
The notion that there are unconscious as well as conscious processes involved in emotional reactions has led to an important model of emotional processing of fear-related stimuli: the dual-pathway model. What this model says, in effect, is that there are two systems involved in fear reactions. One is unconscious and rapid; the other is slower and conscious (LeDoux, 2010).
When you see an avalanche come barreling down on you, visual and auditory stimuli race to your thalamus. From there, the information forks out into the two paths of this dual-pathway system. One path streaks to the amygdala—that part of the brain directly involved with fear and danger—and leads you to react almost instantly. Meanwhile, the other path carries information to the cortex, allowing you to interpret the situation more thoroughly (Figure 6.13). A fraction of a second later, you might decide that your initial reaction was unwarranted, that this looked like an avalanche but was just a few snowflakes on the periphery of your vision and you can relax.
The dual-pathway fear system. Signals associated with fear are first processed in the thalamus. From there, signals streak toward the amygdala, giving rise to immediate physiological and muscular reactions. Signals also go to the visual cortex, where they are interpreted more carefully. Now the individual, who has already started to run, might decide it is only a trained wolf.
In a sense, our emotions control us: They make us do things. When we speak of experiences that have moved us, we speak of things that have evoked profound emotion. We can as easily speak of the emotions themselves as being moving, for novelists have not yet counted all the things we do in the name of passion. Love is a motive as surely as is hunger; so too are hate and fear, and the vast array of more subtle, less passionate emotions.
But we are not necessarily prisoners of our emotions. Even very young infants can do things to control their emotions. When 8-month-old Jessica is frightened, she might suck her thumb, bury her face in her mother's lap, or close her eyes and make the bad thing go away.
But we adults are wiser. We know that closing our eyes seldom makes bad things disappear. We have our own ways of controlling emotions. In the first place, our successful intelligence encourages us to seek out situations likely to lead to pleasant emotions and avoid those with unpleasant possibilities.
There are more extreme forms of control as well. As we saw in Chapter 2, the sometimes dramatic mood-altering effects of drugs are one example; electrical or surgical brain intervention is another. For example, early studies showed that something similar to rage could be evoked in cats, dogs, primates, and other animals by stimulating appropriate areas of their brains (Flynn, 1967). These studies also showed that violent emotional reactions could be completely inhibited in these animals.
Emotion and Cognition
Psychologist and author Paul Ekman explains that, while "…cognition is always there during emotion," there is generally no consciousness of cognition in the process of emotional experience. Dr. Ekman adds that there are exceptions, however, such as re-experiencing an emotion by recalling the incident that led to the original emotion.
Demonstrating his confidence in such procedures, Delgado (1969) entered a bull ring armed only with a radio transmitter. The bull facing him, menacingly angry, had radio-activated electrodes implanted in his brain. He charged! But with a flourish and a flick of the switch, Delgado stopped him dead in his tracks. The bull had been turned into an apparently docile and friendly beast. What was being controlled, however, is not clear. Whether Delgado's bull had actually become docile, whether it was just confused, or whether its motor system was paralyzed remains uncertain.
Research with humans also indicates that certain parts of the brain, especially of the limbic system, are closely involved in emotional reactions. For example, Koelsch (2010) showed that music, which can evoke very strong emotions, reliably leads to activation of virtually all limbic structures.
Odors, which often evoke strong emotions, also lead to activation of the limbic system. Schredl and associates (2009) stimulated 15 sleeping subjects during rapid-eye-movement sleep with one of two smells: hydrogen sulphide (a rotten egg smell) or phenyl ethyl alcohol (the smell of roses). Not only did these smells lead to activation of the limbic system, but they also affected the content of the subjects' dreams. The authors suggest it might be valuable to study the effect that olfactory stimuli conditioned to pleasant reactions might have on nightmares.
Among limbic system structures that are involved in interpreting emotions and in guiding behavior in appropriate directions, the amygdala is especially important in processing fear reactions (Bush, Schafe, & LeDoux, 2009). There is some suggestion that impairment of amygdala functioning in individuals may be linked to psychopathology and, consequently, to criminal behavior. Such individuals may be unemotional, callous, and fearless, and more prone to committing acts of violence without fear of consequences (DeLisi, Umphress, & Vaughn, 2009).
The amygdala and other structures of the limbic system, such as the hypothalamus, are involved not only in fear reactions but also in the recognition of emotional expression (Batista & Freitas-Magalhaes, 2009). Individuals with impaired limbic system functioning—as sometimes happens as a result of diseases such as Parkinsonism or alcoholism—may have difficulty interpreting the meaning of facial expressions (Marinkovic et al., 2009).
Not surprisingly, brain surgery can also be used to control emotional reactions. It has sometimes been used with mentally disturbed people for whom all other forms of therapy have been unsuccessful. The practice of removing parts of the cortex (for example, prefrontal lobotomy or leukotomy), once relatively common, has largely been abandoned since the discovery that very small lesions have many of the same positive effects without the same side effects. Rosemary Kennedy, sister of President John F. Kennedy, was left permanently incapacitated after a leukotomy (Feldman, 2001).
In this human anatomy class, it's likely that at least some students are using cognitive disengagement strategies to lessen the potential emotional impact of the situation.
It is possible that, at least to some degree, we are masters of our emotions and not their prisoners. In an experiment conducted by Lazarus and his associates, subjects were exposed to films of woodshop accidents (Koriat, Melkman, Averill, & Lazarus, 1972). In the films, one man lacerates the tips of his fingers, another cuts off a finger, and a third is skewered through the midsection by a plank propelled from a circular saw. He dies.
Some subjects were asked to detach themselves from events in the film; others were asked to involve themselves. In neither case were they told how to do this. Heart rate changes were recorded for every individual during presentation of the film, and reports were obtained of subjective emotional states. Significantly, subjects who were asked to detach themselves from the film had much weaker emotional reactions in terms of both heart rate and self-report. Those who were told to involve themselves experienced profound emotional reactions.
When questioned about their strategies, a majority of the involvement group said they imagined they were the person to whom the accidents were happening, or they attempted to relate the accidents to other accidents they might have witnessed or in which friends or relatives had been involved. Members of the detachment group pretended that the events had been "staged" for the filming, or paid particular attention to the technical details of the film.
The cognitive control of emotions is an important coping mechanism, explains Lazarus (1974, 1999). For example, a study of patients before and after surgery revealed that those who adopted detachment strategies, not wanting to know details of their surgery or symptoms of recovery or complications, experienced more rapid and smoother recoveries than patients who were more involved. Lazarus speculates that paying undue attention to possible signs of complications, or even to signs of recovery, is probably associated with more anxiety (stress) and is negatively associated with recovery. In Lazarus's view, cognitive activity does a great deal to control emotional states.
Emotion is seldom a single, identifiable response to a given situation; more often it is a complex of responses. Moreover, this complex shifts continually. Anger turns into despair; grief into elation; rage into apathy; joy into sorrow; anxiety into relief. Emotions ebb and flow as we appraise our situations, our relationships with people and things, our probabilities of attaining or not attaining goals. The fundamental point is that we are the ones doing the appraising; the emotion resides not in the situation but in our appraisal of it. Furthermore, we exercise control over our appraisals, sometimes deliberately reducing emotional reaction, as may be the case with anger or fear, sometimes purposely enhancing it, as with love or joy.
Some of the more obvious emotions and motives have been extensively investigated—especially negative emotions such as anxiety, fear, and stress and the more basic motives, including hunger and sex. In the remainder of this chapter, we look at hunger and sex; Chapter 9 looks at anxiety and fear in relation to mental health; and in Chapter 10, we look at a powerful sex-related social emotion: love.
Subjectively, hunger may be described as the bodily sensations that result from not eating for a period of time. Such sensations range from mild discomfort (the gentle growling of a hollow belly) to severe pain (the tortured pangs of intense hunger). Normally, death by starvation is preceded by cessation of hunger pains.
Hunger is a function of certain physiological mechanisms related to survival. It's also a function of taste, smell, appearance, and learning, and it doesn't always relate to nutrition.
For many years most psychologists and physiologists believed that hunger results directly from the actions of an empty or nearly empty stomach. So do we eat because the stomach is empty and it begins to contract? Early theorists thought this might be the case. Cannon and Washburn (1912) developed an ingenious test to look at this question. Washburn swallowed a balloon that was then inflated inside his stomach. The balloon was connected to a recorder so that stomach contractions could be measured. Whenever he felt a hunger pang, Washburn depressed a key to record the event.
After a while, Washburn's stomach contracted, although he sometimes had to wait a long time for this to happen. At about the same time, he thought he felt hunger. But that piece of the puzzle did not really fit. It turns out that even people with no stomachs get hungry and that hunger persists even when neural pathways from the stomach to the brain are cut. There have now been many experiments with "gastric" balloons (Schachter, 1971). Contractions may sometimes be involved in sensations of hunger, but sometimes not, and it is likely that the duodenum (upper part of the small intestine) is even more involved.
The brain, too, plays a key role in hunger motivation. Satisfying hunger is highly rewarding. It leads to a pleasant state that can easily become conditioned to various situations—for example, to the taste and smell of food, to its appearance, and to other situational factors. In fact, argues Panksepp (2010), the positive emotional states that accompany eating, in the same way as those that accompany drug use, may be very important in explaining addictions.
Using MRI recordings, Martin and associates (2010) found significantly increased brain activity in the limbic systems (medial prefrontal cortex) of subjects when they simply looked at images of food. Interestingly, obese participants displayed more brain activity than normal-weight participants both before and after a meal. This activity, the authors suggest, is closely related to food motivation.
It has been known for some time that certain parts of the hypothalamus are involved in eating behavior. Patients with tumors or other injury on the hypothalamus sometimes overeat and consequently become obese (Rowell & Faruqui, 2010). And MRI studies now indicate that the cerebellum, too, is closely involved in controlling hunger, perhaps via its connections with the hypothalamus (Zhu & Wang, 2008).
Taste and smell, too, clearly contribute to food motivation, as is dramatically illustrated in studies using the black blowfly. When this fly is given a choice between a totally nonnutritive sugar substitute and a more nutritious alternative, it insists on eating the sweeter substance. And it will continue to do so until it starves to death (Dethier, 1976).
Like the blowfly, we often prefer tasty over less tasty food, regardless of nutritional value, although when we are really hungry, taste becomes less important than the immediate availability of food (Hoefling & Strack, 2010). The evidence suggests that as we become hungrier, we pay more attention to food cues. When participants are asked to detect specific rapidly presented visual targets that are randomly inserted among other images, hungry subjects are easily distracted by food-related images (Piech, Pastorino, & Zald, 2010).
Hunger is a complex motive, tied not only to the body's need for food and to brain activity but also to various metabolic factors. For example, as levels of glucose (a form of sugar) in the blood drop, hunger increases; as blood glucose levels rise, there is a decline in hunger. But the relationship between blood sugar level and hunger is not quite so simple because subjects who are given sugar substitutes such as aspartame don't subsequently eat more than subjects given sucrose, although their measured blood-glucose levels are significantly lower (Anton et al., 2010).
Many hormones and chemicals alter hunger. For example, fat cells produce leptin, a hormone that signals the hypothalamus to reduce appetite. Mice that lack the gene for leptin typically become obese. When they examined children's metabolic profiles, Eriksson and associates (2010) found that leptin was the best predictor of being overweight among 8-year-old children.
Another chemical related to hunger control is the cannabinoid THC (tetrahydrocannabinol), the active ingredient in marijuana. There is considerable nonanecdotal evidence, in both rats and humans, that THC increases hunger and leads to overeating, even in satiated organisms. There is also evidence of differences in cannabinoid sensitivity between some obese people and others of normal weight. Drugs that counter the effects of cannabinoids are occasionally successful in combating obesity, although many of these also have negative side effects (Bermudez-Silva, Viveros, McPartland, & de Fonseca, 2010).
Unfortunately, our hunger control systems don't always work perfectly. Overeating, termed hyperphagia, is one possibility; it sometimes results in obesity. Obesity is a global problem. Estimates are that about 15 percent of the world's population—slightly more than 1 billion people—and more than one third of the adult U.S. population are obese (Figure 6.14). Americans now spend about $60 billion a year in efforts to lose weight, mostly on drugs, physicians, weight-reducing programs, patent weight-reducing medications, and weight-reducing literature (Worldometers, 2010). An impressive 30 percent of all those who seriously attempt to lose weight will be substantially successful. Sadly, only 6 percent of these—hence fewer than 2 percent of all who try to lose weight—will maintain their reduced weights for any length of time.
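The "fewer than 2 percent" figure follows directly from the two rates just cited, as this minimal sketch confirms (the variable names are illustrative, not from the source):

```python
# Verify the weight-loss statistics cited in the text:
# 30% substantially succeed; 6% of those maintain the loss.
success_rate = 0.30   # fraction who substantially succeed at losing weight
maintain_rate = 0.06  # fraction of the successful who keep the weight off

long_term = success_rate * maintain_rate
print(f"{long_term:.1%}")  # prints "1.8%", i.e. fewer than 2 percent of all who try
```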
Percentage of adult population with a body mass index over 30, based on health estimates rather than self-reported data. Based on U.S. Bureau of the Census, 2010, Table 1306. Retrieved August 20, 2010, from http://www.census.gov/compendia/statab/2010/tables/10s1306.pdf
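The body mass index used as the cutoff in the figure is conventionally computed as weight in kilograms divided by the square of height in meters, with 30 or more classified as obese; a brief sketch (the example values are hypothetical):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

# A BMI of 30 or more is the conventional threshold for obesity.
print(bmi(95, 1.75))  # about 31.0, so classified as obese
```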
A variety of factors may be at play in obesity. Metabolic factors, such as a malfunctioning pituitary gland or hypothalamus, account for a small number of cases. Genetic tendencies are also involved, and a handful of gene defects have now been identified as causing severe obesity (Farooqi, 2010). But the fact that incidence of obesity among children in the United States has nearly doubled since 1980 suggests that environmental factors may be more important (Obesity Rates Continue to Climb, 2007). It is unlikely that our genes have changed significantly in just a few decades.
The most important factors that underlie obesity include the overwhelming popularity and availability of calorie-dense, high-sugar, and high-fat foods, coupled with an increasingly sedentary lifestyle among children. And, as Must and Anderson (2010) point out, by the time children have reached adolescence, detrimental eating and exercise habits have been repeated and reinforced so often that they are very difficult to change.
Obesity is a complex problem. After gastric bypass surgery for severe obesity, Beverley Keating, pictured here with her husband Doug and son Eldon Burke, regained half the weight she had lost. She claims she was not psychologically ready to be thin.
Malfunctions of our hunger control systems may be evident in overeating (hyperphagia); they may also be manifested in undereating (termed aphagia). Clinically, they may lead to one of three eating disorders: anorexia nervosa, commonly shortened to anorexia; bulimia; and binge eating disorder.
Anorexia is classified as a mental disorder. Its symptoms include unwillingness or inability to eat and eventual emaciation. It is often characterized by a significantly distorted body image and is sometimes fatal. Its possible causes include cultural standards that glamorize thinness, family and peer pressure, and occasionally genetic factors and psychological issues such as low self-esteem or depression. It is most common among adolescent girls (Keel, Eddy, Thomas, & Schwartz, 2010).
Literally, bulimia means "ox hunger." It is not clear whether this is intended to mean as hungry as an ox or hungry enough to eat an entire ox. Its main characteristics are recurrent episodes of often secretive and guilt-ridden binge eating, occurring at least twice a week over a period of months. Binges are typically followed by compensating behaviors, including self-induced vomiting (purging) and sometimes the use of laxatives or diuretics to combat weight gain. Bulimia is most common among adolescent girls.
Binge eating disorder is also marked by episodes of excessive compulsive eating but, unlike bulimia, does not involve purging or other attempts to get rid of excess calories. As a result, whereas bulimics are often very thin or of normal weight, those with binge eating disorder are often obese.
There are a variety of possible therapies for anorexia, bulimia, and binge eating disorder, including cognitive therapies (Murphy, Straebler, Cooper, & Fairburn, 2010), behavior therapies (Kroger et al., 2010), drug therapies, and sometimes even hospitalization.
Interestingly, these three eating disorders often manifest at different ages. Anorexia appears earliest, peaking at around age 14, and then again around age 18. Bulimia peaks somewhat later, at around age 19, and often occurs among girls who were previously anorectic. Binge eating disorder appears even later, at around age 25 (Keel et al., 2010).
As we saw, various factors are related to greater risk of eating disorders, including genetic background, emotional disorders, brain injury and disease, and metabolic malfunction. But perhaps more important than any of these factors are cultural pressures stemming from what Saguy and Gruys (2010) describe as the American media's glorification of excessive thinness. In a sense, we equate being thin with high social status; we make a moral virtue of it—an observation that is reflected in the fact that the average waist sizes of Miss America pageant winners decreased from around 26 inches in 1921 to below 24 inches by 1986 (Freese & Meland, 2002). By the same token, we associate fatness with lower status; we make of it the sins of gluttony and sloth.
These are some of the many pieces of the obesity puzzle. But we don't yet know for sure if we have all the pieces or just how they go together.
Nor do we completely understand sexual motivation, another powerful biological motive. Unlike hunger and thirst, sex is not necessary for the individual's survival; but it is clearly essential for the survival of the species. Nor is it a simple motive, easily explained by a lack or deficiency. If you do not eat, you eventually become very hungry. In contrast, the relationship between the sex drive and lack of sex is not quite so clear.
What does seem clear is that hormonal factors are closely linked to sexual urges. At puberty, when the body begins to produce the sex hormones—mainly estrogen among females and androgens (mainly testosterone) among males—sexual drive increases dramatically. And injections of sex hormones are known to increase sexual drive (Meletis & Wood, 2009).
But hormones are only part of the story. In fact, when the organs that produce sex hormones, the ovaries in women and the testes in men, are removed, sex drive does not necessarily disappear. Even nonhuman primates display sexual motivation that seems to be independent of hormones. Giles (2008) summarizes a number of studies that have shown that nonhuman primates that have been castrated continue to display both sexual interest and activity. Similarly, women who have had their ovaries removed typically experience little change in sex drive.
Sexual motivation has a number of cultural and learned components as well, as is clear from the sometimes dramatically different sexual behaviors of different cultures. Hence, aspects of sexual motivation are clearly learned. Cultural standards play an important role in determining what we find sexually arousing, even as they define which sexual behaviors are acceptable.
And for humans, perhaps one of the greatest sexual motives of all is love—a topic that science has sometimes found awkward. It is a piece of the puzzle that we examine in Chapter 10.