More on the brain size debate: Part II

While we’re on the subject of brain size, I wanted to share another interesting Temple Grandin theory. In Animals in Translation, Grandin suggests that we humans may be suffering from a species superiority complex. While she agrees that domestication was responsible for a 10 percent reduction in brain size in dogs, she contends that the civilizing process cut both ways. Taming wolves had a profound effect on the evolution of the human brain, according to Grandin.

Recent scientific findings suggest that human-wolf cohabitation began as long as 100,000 years ago. If these findings are correct, Grandin says:

Wolves and people were together at the point when Homo sapiens had just barely evolved from Homo erectus. When wolves and humans first joined together people only had a few rough tools to their name, and they lived in very small nomadic bands that probably weren’t any more socially complicated than a band of chimpanzees . . .

This means that when wolves and people first started keeping company they were on a lot more equal footing than dogs and people are today. Basically, two different species with complementary skills teamed up together, something that had never happened before and has never really happened since.
(Animals in Translation, 304-306)

This is more than a nostalgic ode to the ongoing love affair between humans and dogs. If our interspecies relationship does, in fact, date back this far, Grandin (among others) thinks it may provide the key to understanding how humans developed the complex social networks and behaviors that allowed them to thrive:

During all those years when early humans were associating with wolves they learned to act and think like wolves. Wolves hunted in groups; humans didn’t. Wolves had loyal same-sex and nonkin friendships; humans probably didn’t, judging by the lack of same-sex and nonkin friendships in every other primate species today . . .

Wolves, and then dogs, gave early humans a huge survival advantage . . . by serving as lookouts and guards, and by making it possible for humans to hunt big game in groups, instead of hunting small prey as individuals. Given everything wolves did for early man, dogs were probably a big reason why early man survived and Neanderthals didn’t. Neanderthals didn’t have dogs.
(Animals in Translation, 304-306)

Not feeling quite so paternalistic towards little Fifi now, are you? Or maybe you’re finding it hard to accept the idea that wolves may be responsible for inculcating us with the more humane aspects of human behavior. You certainly wouldn’t be the first. But before you dismiss Grandin’s theory, consider this:

Archaeologists have discovered that 10,000 years ago, just at the point when humans began to give their dogs formal burials, the human brain began to shrink. . . It shrank by 10 percent, just like the dog’s brain. And what’s interesting is what part of the human brain shrank. In all of the domestic animals the forebrain, which holds the frontal lobes, and the corpus callosum, shrank. But in humans it was the midbrain, which handles emotions and sensory data, and the olfactory bulbs, which handle smell.
(Animals in Translation, 304-306)

To Grandin, this suggests that “dog brains and human brains specialized: humans took over the planning and organizing tasks, and dogs took over the sensory tasks.”

Many scientists have greeted Grandin’s theory as heresy. But, before you rush to judgment, stop and think. Why would you assume that a relationship powerful enough to influence the development of the canine brain would leave the human brain untouched? Given what we now know about the brain’s “plasticity,” it makes sense that a synergistic interspecies relationship like this would leave its mark on both species.

We humans are so accustomed to being at the top of the evolutionary pile that we find it difficult to remember this wasn’t always the case. Us depend on dogs? Pshaw. But if you had to place a wager, who would you bet on to survive in the wild? A naked, pink-skinned biped with an oversized head and no sense of community, or a pack of mutually supportive runners with big teeth and built-in temperature control?


More on the brain size debate: Part I

Okay, I know I promised the next entry would be devoted to Temple Grandin’s views on language — a subject well worth exploring — but I’ve found myself distracted by some of my other reading this week. (So much to read, so little time.) Be assured, we’ll delve into “Grandin on Language” at a later date. Today, I find my thoughts once again turning to the teaspoon of gray matter separating the male and female brain. (“When it comes to brains, does size really matter?”) I revisited this entry after the Tangled Bank Carnival and found myself no less irate over Terence Kealey’s pseudoscientific theory of female domestication.

(For those who have neither the time nor inclination to read “Does size really matter?” Kealey argues that women have smaller brains — and therefore less intellectual brawn — because they’ve been successfully domesticated by men. To buttress his point, he draws on evidence presented in a long-term study of foxes showing that the brains of domesticated fox pups shrank over successive generations. While it’s true that the female human’s brain has shrunk by something like 10 percent over the millennia, Kealey neglects to mention that the male brain has shrunk as well.)

As is often the case when I’m preoccupied with a topic, the synchronicities started proliferating. Everywhere I looked, I found more information on the unique processing abilities of the female brain. I happened to be reading Katherine Ellison’s fascinating book on the intellectual benefits of motherhood, The Mommy Brain, when I stumbled on the following passage.

There are a few intriguing differences between men’s and women’s cerebral equipment that some scientists are convinced lead to differences in the way the two genders think. The most notable difference is overall size: The female brain, on average, is about 15 percent smaller than the average male brain. This is a particularly incendiary topic; a few modern scientists have suggested that brain size is related to intelligence, implying that women, in general, are dumber. Other experts have challenged the larger-is-better view by pointing out that women’s brains are more tightly packed with neurons, ergo, more efficient machines.
But here’s where it starts to get really interesting . . .
Recent fMRI scans have established that women do seem to use more of their brains at once, compared to men, in response to some stimuli. In one study, researchers looked at the brain activity in ten men and ten women who were listening to an audiotape of a John Grisham novel. In all cases, the men’s brains showed activity exclusively in the left temporal lobe, but women’s brains were active in both the right and left temporal lobes.

One other physical difference that is often cited in comparisons of male and female brains has to do with a highway of nerve fibers connecting the left and right hemispheres. Called the corpus callosum, the “calloused body,” it’s an arched structure running midline from the back to the front of the head. Some studies have found that a female human’s corpus callosum is on average slightly thicker on one end than a male’s, in proportion to total brain size. (Another smaller connecting bridge is called the anterior commissure, found to be on average about 12 percent larger in women.) Size does now seem to matter, as many experts suspect that women’s more generously equipped brain is more efficient at relaying information back and forth between the intuitive right hemisphere and the businesslike left. Helen Fisher, a Rutgers University anthropologist . . . writes that women’s “well-connected brains facilitate their ability to gather, integrate and analyze more diverse kinds of information,” an aspect of what she calls “web thinking.”
(The Mommy Brain, 73-74)

Put that in your pipe and smoke it, Mr. Kealey.

In all seriousness, I’m not trying to add fuel to the proverbial fire here. My goal isn’t to argue that the female brain is superior. The female brain is simply specialized to perform certain tasks, as is the male brain. This makes perfect evolutionary sense. The idea that the gender charged with propagating the species has been engineered to be “dumber,” on the other hand, does not.

I find the brain size debate ludicrous and fervently hope that it will soon be put to rest, immortalized in the annals of misguided scientific detours, alongside eugenics and creationism.


Tangled Bank Blog Carnival!

Neurontic is proud to be one of the featured sideshows at Tangled Bank Blog Carnival 58, sponsored by archaeologist-cum-polymath Martin Rundkvist. Visit Martin's always entertaining SALTO SOBRIUS to learn more about insect cuisine, what J.Lo has in common with the Tanystropheus dinosaur, and the evolutionary psychology of niceness.


Temple Grandin: On thinking like an animal

I’ve been meaning to read Temple Grandin ever since reading about her in Oliver Sacks’ 1995 book, An Anthropologist on Mars. But for some reason, her books continually ended up on the bottom of the pile on my nightstand. What a shame. Having just finished Grandin's Animals in Translation, I regret dragging my feet for so long. Her life story is like catnip for the psychologically curious.

Arguably America’s foremost animal behaviorist, Grandin has spent her 32-year career working to ensure that the animals that end up in slices on our dining room tables are treated with care and compassion in their final moments. She single-handedly convinced McDonald’s -- perhaps the biggest purchaser of beef in the world -- to mandate that its beef suppliers adopt her recommendations for humane treatment. If you, like me, are a guilty but unreformed carnivore, this alone should make her worthy of interest. But Grandin’s achievements are rendered even more impressive when you learn that there was a time in her life when some believed she’d never even speak.

Grandin was born with a severe case of autism. Sacks paints a vivid picture of her first few years of life:

At six months, she started to stiffen in her mother’s arms, at ten months, to claw her “like a trapped animal.” . . . Temple describes her world as one of sensations heightened, sometimes to an excruciating degree: she speaks of her ears, at the age of two or three, as helpless microphones, transmitting everything, irrespective of relevance, at full, overwhelming volume—and there was an equal lack of modulation in all her senses . . . She was subject to sudden impulses and, when these were frustrated, violent rage. She perceived none of the usual rules and codes of human relationship. She lived, sometimes raged, inconceivably disorganized, in a world of unbridled chaos. In her third year, she became destructive and violent:
“Normal children use clay for modeling; I used feces and then spread my creations all over the room. I chewed puzzles and spit the cardboard mush out on the floor. I had a violent temper, and when thwarted, I’d throw anything handy—a museum quality vase or leftover feces. I screamed continually.”
(Note to reader: According to Sacks, autistics have an unparalleled ability to accurately recall events from very early life. Some have detailed memories of their first year.)

In Animals in Translation, Grandin says her unquenchable rage was the result of her frustrated attempts to communicate; the consequence of being utterly powerless to reach outside herself and relay her needs. Thankfully, she was born into a family with the patience and tolerance necessary to pull her out of her insular universe. Two things saved her from a life of mute wrath, she says: learning to read and animals.

Reading introduced her to worlds outside her own. It fed her boundless curiosity and taught her that language is a vehicle for information—a realization that compelled her to master the intricacies of communication. (We’ll discuss this more in the next entry.)

In animals, Grandin found kindred spirits. She intuitively grasped that their experience of the world resembled her own. It was a world made up of minute details; a world perceived through pictures rather than language. Grandin sensed that animals, like her, were prone to sensory overload and driven largely by fear.

For those who’ve read up on autism, Grandin’s ability to “relate” to animals may come as a surprise. Autism is marked by an inability to empathize. Autistics find it next to impossible to grasp the inner workings of someone else’s mind. They lack what psychologists call “a theory of mind.” For a normally functioning person, this is a difficult concept to grasp, because our ability to infer another’s emotions is so instinctive.

I find it easiest to think about it like this: empathy is essentially a function of projection. We build models of other people’s internal experiences by assuming that their reactions to events mirror our own, to a greater or lesser degree. Autistics can’t do this automatically. Any sense of another’s inner world has to be assembled intellectually. The idea of a shared emotional reality is entirely foreign to them. (No one is sure why at this point, although many theorize that it has to do with a deficit of mirror neurons.)

Grandin was born without the equipment to intuit other people’s emotions. She learned early on, however, that this was not the case with animals. Because she grasped that animals’ sensory experiences were similar to her own — that they, like her, were absorbed by details and defined the world based on their sensory experiences (as opposed to language) — she felt an immediate kinship.

But how could that be? Why was it that she had an instinctual sympathy with animals and not with people? Grandin’s still not entirely sure, but she has a theory that’s pretty convincing.

She believes that the culprit is faulty frontal lobes. Autistics, like animals, have underdeveloped neocortexes. And it is this difference, according to Grandin, that makes autistics more like animals, in some ways, than people. I’ll let her tell the rest:

When you compare human and animal brains, the only difference that’s obvious to the naked eye is the increased size of the neocortex in people . . . The neocortex is the top layer of the brain, and includes the frontal lobes as well as all of the other structures where higher cognitive functions are located.

To understand why animals seem so different from normal human beings, yet so familiar at the same time, you need to know that the human brain is really three different brains, each one built on top of the previous at three different times in evolutionary history . . . the first and oldest brain, which is physically the lowest down inside the skull, is the reptilian brain. The next brain, in the middle, is the paleomammalian brain. The third, and newest brain, highest up inside the head, is the neomammalian brain.

Roughly speaking, the reptilian brain corresponds to that in lizards and performs basic life support functions like breathing; the paleomammalian brain corresponds to that in mammals and handles emotion; and the neomammalian brain corresponds to that in primates -- especially people -- and handles reason and language. All animals have some neomammalian brain, but it’s much larger and more important in primates and people.

Put simply, the neomammalian brain is “the great generalizer.” It is the hardware that enables us “normals” to take all of the disparate data we receive, organize it, and draw logical conclusions. Being able to generalize is one of humanity’s most important gifts. Categorization is the fundamental building block of normal intelligence. Without categories, we would be captive to a continuous, undifferentiated stream of sensory input. (Much like Grandin was as a child.)

Due to their smaller frontal lobes, animals spend far less time generalizing than people, according to Grandin. Their worlds are made up of a series of sense impressions – pictures – that they react to largely on a moment-by-moment basis. Grandin believes that autistics have much the same experience. And she thinks this is the result of faulty wiring in the frontal lobes. Consequently, she says many autistics rely on their second (or paleomammalian) brain to make sense of the world:

When [a normal person’s] frontal lobes are down [due to lack of sleep or trauma, for instance], you have your animal (paleomammalian) brain to fall back on . . . the animal brain is the default position for people . . . and people are like animals, especially when their frontal lobes aren’t working up to par.

I think that’s the reason for the special connection autistic people like me have to animals. Autistic people’s frontal lobes almost never work as well as normal people’s do, so our brain function ends up being somewhere in between human and animal. We use our animal brains more than normal people do, because we have to . . . Autistic people are closer to animals than normal people are.

Grandin is aware that autism has robbed her of the full human experience, but she also believes her disorder has allowed her to avoid making one of the central mistakes of “normals”: over-generalizing.

The price human beings pay for having such big, fat frontal lobes is that normal people become oblivious in a way that animals and autistic people aren’t. Normal people stop seeing the details that make up the big picture and see only the big picture instead . . . A normal person doesn’t become conscious of what he’s looking at until after his brain has composed the sensory bits and pieces into wholes.

In this sense, Grandin argues, it’s unjust to claim that autistic people “live in their own little world.” She believes that autistics, like animals, are far better at accepting the unfiltered version of reality. It’s normals who continually try to project their beliefs onto reality. Being too cerebral, Grandin says (in a phrase reminiscent of both George W. Bush and Jesse Jackson) causes people to “abstractify.” We become so infatuated with our ideological picture of the world that we grow blind to what’s happening around us. The price we pay for abstractification is steep, says Grandin:

In my experience, people become more radical when they’re thinking abstractly. They bog down in permanent bickering where they’ve lost touch with what’s actually happening in the real world . . . Today the abstract thinkers are in charge, and abstract thinkers get locked into abstract debates and arguments that aren’t based in reality. I think this is one of the reasons there is so much partisan fighting inside government.

That Grandin has been able to transform herself from a feral, cardboard-chewing mute into such an astute observer of human and animal nature is nothing short of miraculous. And her musings don’t end here. In the next entry we’ll explore her theories on the function of language, and why people have more in common with prairie dogs than you might think.


Semen: "It's better than cats."

Previously posted on Annotate
April 2006

I know I’m dating myself by referencing an SNL bit circa 1986, but I couldn’t resist. Those of you who’ve read Microscopic Mind Control know that toxoplasma, the parasite people pick up from house cats, is purported to make women more "outgoing and warmhearted." Well, according to State University of New York psychologist Gordon Gallup, semen is an even more powerful organic anti-depressant. In 2002, Gallup conducted a study suggesting that women who regularly engage in unprotected sex (both vaginal and oral) are happier than their conscientious counterparts. (Semen Acts as Anti-depressant)

When I first stumbled on Gallup’s findings, I couldn’t help but wonder what prompted this line of inquiry. It seems less like science than a plan hatched in a boy’s locker room. ‘Let’s see, how can we convince girls to swallow?’ ‘I’ve got it! We’ll just tell them that sperm’s good for them.’

So how did Gallup measure the emotional benefits of semen? He divided 293 female students into groups based on how often their partners used condoms. Then he administered the Beck Depression Inventory to assess their wellbeing. Anyone who scores under 17 on the 21-question emotional inventory is considered mentally sound.

Gallup found that women who never used condoms scored 8 on the Beck scale; women who used them intermittently scored 10.5; women who frequently used protection came in at 15; and those who swore by condoms scored 11.3. (Incidentally, Gallup also administered the test to a group of abstainers. They scored 13.5.)

There are several reasons I’m suspicious of Gallup’s claim that unprotected sex is a mental magic bullet. First, the psychologist maintains that he screened out other criteria that might influence a woman’s state of mind. He adjusted for the stability of relationships and the possibility that certain personality types were inherently predisposed to use condoms. While I commend him for trying, it strikes me that a number of other factors contribute to mood: job satisfaction, family relationships, exercise and eating habits, to name just a few.

Second, the numbers just don’t add up. If Gallup’s hypothesis is correct, I find it strange that the women who used condoms most of the time scored 15, indicating a minor case of the blues, while women who always used them came in at a respectable 11.3.
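For what it’s worth, the inconsistency is easy to lay out and check. A minimal sketch (group labels are my paraphrase; the scores are the ones reported in the study summary):

```python
# Beck Depression Inventory scores from Gallup's 2002 study, as summarized
# here (lower = better mood; 17 and above suggests clinical depression).
scores = {
    "never use condoms": 8.0,
    "sometimes use condoms": 10.5,
    "usually use condoms": 15.0,
    "always use condoms": 11.3,
    "abstinent": 13.5,
}

# If semen exposure alone drove mood, scores should rise steadily as
# condom use increases. They don't: "usually" outscores "always".
ordered = ["never use condoms", "sometimes use condoms",
           "usually use condoms", "always use condoms"]
values = [scores[g] for g in ordered]
is_monotonic = all(a <= b for a, b in zip(values, values[1:]))

print(is_monotonic)          # False: the trend breaks at "always use condoms"
print(max(scores.values()))  # 15.0, still below the clinical cutoff of 17
```

The second print is the other problem in miniature: even the gloomiest group sits below the depression threshold, so nothing here was an anti-depressant in any clinical sense.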

But, most importantly, Gallup seems to be indulging in a little hyperbole here. He may have demonstrated that semen enhances mood, but he certainly hasn’t proven that it’s an anti-depressant. Why? Because nobody in his study was depressed to begin with! Not one of his test subjects scored 17 or above on the Beck scale, the range required to qualify as clinically depressed.

Still, it’s worth noting that a more recent study seems to support Gallup’s notion that semen is an upper. A group of researchers at the State University of New York conducted a study of 1000 women, which provides further evidence that regular semen exposure boosts mood. (HappySemen)

Unlike Gallup, this team of researchers provides biological evidence to back up their claims. They ascribe semen’s mood-altering effects to a cocktail of hormones. Semen, apparently, contains a wide range of psychoactive chemicals:
*Arginine: an amino acid believed to increase blood flow and blunt pain
*Tyrosine: a chemical that gets converted into adrenaline
*Dopamine: the much-publicized happy drug
*Beta-endorphins: painkillers that lower anxiety
And, oddly:
*Estrogen and Oxytocin: two "female" hormones linked respectively with happiness and pair bonding.
Who knew? Actually, there is a kind of Darwinian logic to the idea that semen makes women happy. If women feel good when they come in contact with sperm, they’re more likely to procreate. (I’m surprised that evolutionary psychologists haven’t already latched on to these findings.)

These studies are intriguing, but not persuasive enough to convince me that semen-based drugs will be the next big thing in anti-depressants. And for the sake of teenage girls everywhere, I hope adolescent boys aren’t spending their time poring over scientific journals.

Demystifying ESP

Previously posted on Annotate
April 2006

I’m about to say something that will no doubt make me very unpopular with mainstream scientists. I believe in ESP. Hang on a second before you delete this blog from your RSS feed. Let’s just look at the component parts of that acronym. When people hear ESP, they immediately think of supernatural phenomena. In fact, there is nothing inherently mystical about the term ESP. It simply means ‘extra-sensory perception.’

When I say I believe in ESP, what I mean is that I’m convinced that some people have heightened powers of perception. These people have such sensitive antennae that they’re able to pick up on environmental cues that you and I miss. And their gift is often mislabeled as telepathy.

I am not the only person who harbors this suspicion. In fact, there’s an entire academic discipline devoted to discovering the biological underpinnings of ESP. It’s called neurotheology. In recent years, neurotheologists have conducted several studies that lend credence to the theory of “super-sensers.”

One particularly interesting study was conducted in 2002 by researchers at Goldsmiths College in London. They recruited a group of people to take an ESP test. The subjects ranged from “normals” to people who claimed to be clairvoyant.

Each participant was seated in front of a computer monitor that cycled through cards, each bearing one of five symbols: a cross, a star, a square, a circle, and three wavy lines. The test subject was asked to predict the symbol on a given card from looking at the back. Predictably, normal people did not perform well. But many of the so-called clairvoyants did.

Here’s why: it wasn’t actually an ESP test. The researchers had rigged the program so that the face of each card flashed across the screen for 14.3 milliseconds before the subjects were asked to guess. “Normals” missed this environmental cue entirely. But many of the people who claimed to have ESP picked up on it and were able to accurately “predict” the symbols.
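To make the mechanism concrete, here’s a toy simulation. This is my own illustration, not the Goldsmiths researchers’ code, and the detection probabilities are invented: a subject who consciously registers the subliminal flash names the card correctly; everyone else guesses at one-in-five chance.

```python
import random

SYMBOLS = ["cross", "star", "square", "circle", "wavy lines"]

def run_trials(n_trials, p_detect, rng):
    """Simulate the rigged test: each card's face is flashed subliminally
    before the guess. With probability p_detect the subject consciously
    registers the flash and 'predicts' the card; otherwise they guess."""
    hits = 0
    for _ in range(n_trials):
        card = rng.choice(SYMBOLS)
        if rng.random() < p_detect:
            guess = card                  # flash consciously registered
        else:
            guess = rng.choice(SYMBOLS)   # pure chance: 1 in 5
        hits += guess == card
    return hits / n_trials

rng = random.Random(0)
print(run_trials(10_000, 0.0, rng))  # a "normal": ~0.20, chance level
print(run_trials(10_000, 0.5, rng))  # a "super-senser": well above chance
```

Nothing here demonstrates telepathy; it just shows how above-chance “prediction” falls straight out of ordinary perception plus a leaky stimulus.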


Neurotheologist Michael Thalbourne of the University of Adelaide in Australia has a theory about why some people are super-sensers. He believes that their enhanced powers of perception are the result of information bubbling up from the adaptive unconscious.


In his book Blink, Malcolm Gladwell offers a good working definition of the adaptive unconscious. He quotes psychologist Timothy Wilson, who says:

The mind operates most efficiently by relegating a good deal of high-level, sophisticated thinking to the unconscious, just as a modern jetliner is able to fly on automatic pilot with little or no input from the human, or ‘conscious’ pilot. The adaptive unconscious does an excellent job of sizing up the world, warning people of danger, setting goals, and initiating action in a sophisticated, efficient manner.
Imagine that you’re crossing a busy thoroughfare when you notice a semi speeding towards you. You don’t stand in the path of the truck consciously assembling all the information you have about automobiles and collisions and then decide you’re in danger. You run. You run because your adaptive unconscious has processed all of the environmental data in milliseconds and triggered your internal alarm.

So, what would happen if the boundary between your conscious mind and your adaptive unconscious were particularly permeable? You would have conscious access to external cues that most of us aren’t aware of. And you would have insight into the adaptive unconscious’ decision-making process--insight that could easily masquerade as clairvoyance.

Say, for instance, you have a friend who has a knack for guessing who’s on the other end of the phone before picking up the receiver. Chances are this friend is a super-senser with enhanced access to her adaptive unconscious. She may pick up on cues that are lost on you and make prognostications based on a surfeit of unconscious information.

Let’s say, for example, that the phone rings at 3:00pm on a Sunday afternoon and your friend is convinced her grandmother’s on the other end. If her guess proves correct, you might chalk this up to telepathy. But the explanation is probably far less mysterious.

She may be making a prediction based on information stored in the adaptive unconscious. She may remember that her grandmother always calls on Sundays at about noon. But she may also take into account the fact that her grandmother recently got back from a trip to California. When she doesn’t receive a call at noon, she may factor in her grandmother’s jet lag and accurately predict that she’s running three hours behind. Your friend wouldn’t have access to the entire decision-making process occurring in her adaptive unconscious; she’d only be privy to its conclusion. Consequently, this prediction would seem as uncanny to her as it does to you.

This explanation of clairvoyance is not unlike Gladwell’s theory of thin slicing. In Blink, Gladwell posits that the adaptive unconscious allows us to efficiently sort through a storehouse of subconscious data, compare it to what we’re seeing, and make informed decisions in the space of seconds. What I’m proposing is that “telepaths” do more than make decisions based on this information, they make predictions. And because these predictions are based on verifiable external cues most of us miss, and informed by past experiences stored in the adaptive unconscious, they’re often spot on.

The science of dreaming

Previously posted on Annotate
April 2006

Most people don’t know that Sigmund Freud was a frustrated neurologist. Before he abandoned himself to abstraction, the father of psychoanalysis was a practicing physician, intent on developing "a neural model of behavior." (Kandel Interview) But Freud found neuroscience too blunt a tool, in the early twentieth century, to serve his purposes.

If brain science had been further along at the turn of the century, we might have been spared the Oedipus Complex and the concept of penis envy. But we may also have missed out on his theory that dreams are a form of wish fulfillment--a hypothesis that still holds sway among many scientists and psychologists. (The Interpretation of Dreams)

Over the past 50-odd years, many scientists have tried to debunk Freud’s theory, which initially seemed far too fuzzy-headed to qualify as real science. Most were unsuccessful. But one alternative theory did gain credence.

In the 1970s, sleep specialist J. Allan Hobson presented compelling evidence that Freud’s theory of dreaming was itself a form of wish fulfillment. According to Hobson, Freud had read far too much into dreaming. He’d constructed a theory to reflect his belief that dreams had meaning. But Hobson’s research suggested that nighttime hallucinations were nothing more than the brain’s attempt to turn a set of random stimuli into a cohesive narrative. And, unlike Freud’s, Hobson’s theory was supported by credible biological evidence.

By the ‘70s, neurologists believed that the pons, a primitive brain region located on the brain stem, was the seat of dreaming. Patients who had damage in the pons never achieved REM sleep--the sleep cycle then believed to produce dreams. It was also widely believed that dreaming didn’t involve the brain’s intellectual and emotional regions.

Hobson wasn’t satisfied with these findings. Given the emotional intensity of many dreams, he found it hard to accept that the brain’s emotional centers didn’t contribute to the dream experience.

Through a series of experiments, Hobson revealed that when REM sleep was triggered, it activated the production of a neurotransmitter called acetylcholine. He demonstrated that the pons used acetylcholine during REM to "send impulses to various brain regions . . . including the limbic system, the emotional center of the brain." Hobson’s theory, known as the reciprocal-interaction model, posited that "the sleeping brain tries to do with these [arbitrary] signals exactly what it does in the waking state with sensory inputs: make sense of them." (Sweet Dreams Are Made of This)

Hobson’s findings were persuasive and many scientists adopted his theory wholesale. That is, until Mark Solms, a member of NYU’s Psychoanalytic Association, proposed an alternative theory. Like Hobson, Solms believed that the pons triggered REM sleep, but he contended "the origin of dream content lay in the highest-level brain regions." And he also had scientific evidence to back up his claim.

If, as Hobson posited, the pons and REM sleep were wholly responsible for dreaming, then people with a damaged pons would never dream. Solms demonstrated that this wasn’t the case. He studied 26 people who could no longer achieve REM, due to brain lesions on the pons, and found that only one member of the group had completely lost the ability to dream. Solms went on to identify "two areas [in the brain] in which damage could cause complete loss of the dream experience": the white matter of the frontal lobes and the occipitotemporoparietal cortex. Neither was anywhere near the pons.

Solms’ research proved that people dream even when they aren't experiencing REM sleep. But, more importantly, it suggested that dream content originates in the brain regions associated with rational thought. In other words, Solms agrees with Freud. Dream imagery is not random, according to Solms. It is a form of unconscious thought.

Today, neuroscientists, psychologists and psychiatrists are split into two camps: Hobson supporters and Solms advocates. The debate rages on and no one has emerged as the clear winner. But I was interested to read of a recent study about near-death experiences that seems to support Solms’ theory.

News@Nature.com recently published an article detailing the findings of neurophysiologist Kevin Nelson of the University of Kentucky, Lexington. Nelson conducted a study of 55 people who reported having "white light" experiences as they lay close to death. Of these 55, 33 (or 60 percent) claimed that they’d had "at least one incident where they felt sleep and wakefulness blurred together." Put simply, Nelson’s research suggests that near-death experiences could be a form of dreaming.

If this proves to be the case, it buttresses Solms’ belief that dreams "can be shaped by hidden emotions or motives." Who among us wouldn’t wish to save ourselves the agony of experiencing the final moments of consciousness? It makes perfect sense that the brain would manufacture a pleasant dream landscape to blunt the pain and fear of death. This is, in a sense, the ultimate in wish fulfillment.

Drink to your health?

Previously posted on Annotate
April 2006

We’ve all noted the fickleness of nutritional standards. One week we’re told that eating eggs is tantamount to courting death; the next week, they’re deemed safe in moderation. One second, eating pasta is celebrated as part of the Mediterranean Diet; the next, enjoying a spaghetti dinner is the equivalent of mainlining lard. First coffee is good for you; then it’s bad for you. We all switch to decaf only to find out that it’s even worse for you.

Up until about three months ago, I was as bewildered as everyone else. But now that I spend a good chunk of my time perusing scientific journals, I’m beginning to understand what fuels this ceaseless fluctuation: headlines.

We live in the age of the 24-hour news cycle. No news was once good news. But these days, no news means a massive void on television and the Internet. Media minions are constantly on the lookout for anything that faintly resembles a "breaking" story. And contrarian scientific studies make for good headlines.

Unfortunately, few of us have time to read the articles that accompany these headlines. We do a quick scan of the first paragraph and accept this week’s findings as gospel. We never get to the part where the researcher responsible for the study "qualifies" his or her claims.

Last week, the American Heart Association’s regrettably named journal, Stroke, published an article that’s bound to be picked up by the mainstream media. The headline will read something like this: Drinking Shown to Improve Cognition in Women.

Speed-reading the first paragraph of this piece will be enough to convince many women that it’s time to break open the chardonnay. Here’s what it will say:

After conducting a randomized study of 3,298 Manhattan residents, Columbia neurologist Clinton Wright found that “women who had up to two drinks a day scored about 20 percent higher on the Mini Mental State Exam (MMSE) than women who didn't drink at all or who consumed less than one drink a week.”

Sounds convincing, doesn’t it? And it won’t seem any less so if you read on to the second graf, which will include official-sounding details like:

This study was conducted in a subsample of 2,215 [stroke-free] participants with both alcohol and carotid plaque data available. Their average age was 69. Fifty-four percent of the participants were Hispanic, 25 percent black and 21 percent white.

Don’t get me wrong. Wright followed all of the appropriate protocols. And his findings — as far as they go — are credible. But as he states quite plainly in the latter half of the Science Daily article, "the study is limited by the use of the MMSE, which is not a very sensitive test."

What the Science Daily reporter neglects to mention is just how limited the MMSE is. When I think cognition test, I think IQ test. The MMSE is to the IQ test as the 1-yard dash is to the marathon. The test was not intended to measure normal cognitive functioning—it was designed to screen for dementia.

Here’s a sampling of the questions:

*What is the year, month, date, season, day of the week?
*In which state, county, town/city, hospital/street are you now?
*Repeat these three words - e.g. car, ball, key
*Name these objects - e.g. key, watch
*Repeat the three words from before

You get the idea. This is not a high-level assessment. If I administered this test to a group of stroke-free women, I’d be far less concerned with the people who performed well than with those who performed poorly. Clearly, something would be terribly wrong with those women.
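To make concrete just how low the bar is, here’s a toy sketch of how a screening instrument like the MMSE is scored: each item is worth a few points, the points are summed, and only a score below some cutoff flags the test-taker for a proper dementia workup. The items mirror the sample questions above, but the point values and the cutoff fraction are invented for illustration; they are not the real MMSE scoring (the actual test totals 30 points).

```python
# Illustrative only: item point values and the cutoff fraction are made up,
# not the real MMSE scoring.
ITEMS = {
    "orientation to time":  5,  # year, month, date, season, day of the week
    "orientation to place": 5,  # state, county, town/city, hospital/street
    "registration":         3,  # repeat three words (car, ball, key)
    "naming":               2,  # name these objects (key, watch)
    "recall":               3,  # repeat the three words from before
}
CUTOFF_FRACTION = 0.8  # below 80 percent of possible points -> flag for follow-up

def screen(points_earned):
    """Sum the points across items and decide whether the score raises a flag."""
    total_possible = sum(ITEMS.values())
    score = sum(min(points_earned.get(item, 0), maximum)
                for item, maximum in ITEMS.items())
    return score, score < CUTOFF_FRACTION * total_possible

# A healthy adult should ace a test this easy:
print(screen({"orientation to time": 5, "orientation to place": 5,
              "registration": 3, "naming": 2, "recall": 3}))   # (18, False)

# Missing the date, the town, and all three recalled words is the red flag:
print(screen({"orientation to time": 3, "orientation to place": 4,
              "registration": 3, "naming": 2, "recall": 0}))   # (12, True)
```

The point of the sketch is that a perfect score tells you almost nothing; only failure is informative.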

It seems to me that all Wright has proven at this point is that moderate drinking may help women ward off dementia. Frankly, that’s enough to convince me to keep drinking. But it’s a far cry from improving normal cognition.

Hi, my name is Orli and I'm a dyscalculiac

Previously posted on Annotate
April 2006

I have spent most of my life losing keys, driver’s licenses, cigarette lighters—even the occasional car. I couldn’t tell you which direction is west to save my life. My math skills are abysmal. I’m clumsy, forgetful, and utterly useless when it comes to names.

I can, however, tell you that Elizabeth Barrett Browning was anorexic long before we had a word for it; calculus was invented by two men almost simultaneously (Isaac Newton and Gottfried Leibniz); and that Yale literary critic Harold Bloom thinks that Shakespeare is responsible for inventing the modern concept of humanity.

At times, my brain seems like little more than a repository for esoterica. While this comes in handy during games of Trivial Pursuit, it can make everyday life difficult. On balance, I’d much prefer to hold on to my ATM card than to remember that French philosopher Michel Foucault popularized the term Panopticon.

I’ve long been convinced that I’m the butt of some kind of evolutionary practical joke. I seem to have been equipped with few of the skills necessary for day-to-day survival. But, hey, if you need a working definition of synesthesia, I’m your girl.

Over the years, I’ve found ways to counterbalance most of these inborn deficiencies. I do a lot of, 'Hey—you,' on meeting people I haven’t seen in a while. I’ve developed an elaborate cubby system to keep track of the practical paraphernalia required for daily existence. I avoid all sports, apart from Ping Pong. And I no longer own a car.

The one thing that still sends me into fits of hysteria is math. The prospect of taking the math portion of the GRE was almost enough to dissuade me from applying to graduate school. Thankfully, a math-savvy friend agreed to teach me all the formulas I should have mastered in 9th grade. After six months of tutoring, we were both pleased when I got a 430 in math. (That puts me right around the 15th percentile.)

On arriving at NYU, I was relieved to find that it was teeming with verbal idiot savants. (No offense, guys.) So, I thought I might not be the only one interested to learn that scientists have a name for the numerically challenged: dyscalculiacs.

Yes, there’s an actual syndrome.

Just to be clear, being bad at math doesn’t necessarily mean that you’re a dyscalculiac. I've included a handy checklist of some of the symptoms associated with dyscalculia below:

Dyscalculia Symptoms (Adapted from Dyscalculia)

*Normal or accelerated language acquisition: verbal, reading, writing. Poetic ability. Good visual memory for the printed word.

*Good in the areas of science (until a level requiring higher math skills is reached), geometry (figures with logic not formulas), and creative arts.

*Difficulty with the concepts of time and direction.

*Mistaken recollection of names. Poor name/face retrieval.

*Inconsistent results in addition, subtraction, multiplication and division. Poor with money and credit.

*Inability to grasp and remember math concepts, rules, formulas, sequence. May be able to perform math operations one day, but draw a blank the next.

*Gets lost or disoriented easily. May have a poor sense of direction, lose things often, and seem absent-minded.

*May have poor athletic coordination.

I haven’t consulted a professional, but a quick review of the symptoms was enough for a self-diagnosis. This list could have been titled: “Things that Orli’s loved ones find alternately annoying and endearing about her.” (Okay, I only hope the last part’s true.)

If any of this sounds familiar to you, don’t worry. I’m not planning on starting a support group. I’m skilled with a calculator and my algebra days are behind me. As for the rest of it, those who know me have learned to think of them as eccentricities. Still, it’s oddly liberating to put a name to my “disorder.”

Penfield's Homunculus and the mystery of phantom limbs

Previously posted on Annotate
April 2006

Phantom limbs are not a modern phenomenon. There are records of people "haunted" by amputated appendages dating all the way back to the sixteenth century. Consequently, we have more than 500 years worth of theories about what causes phantom limbs—some quite ingenious. After losing his right arm in the Napoleonic Wars, British naval hero Lord Nelson believed that his phantom arm was proof positive of the existence of a soul. After all, if his arm could outlive its corporeal existence, why not the rest of him? This was a soothing hypothesis. Most were not.

For the uninitiated, phantom limbs are ghost appendages. After having an arm or leg removed, some patients continue to experience feeling in the missing limb. This is not some vague sense that an arm or leg is "revisiting" them. It is a visceral experience of continued "life" in the lost appendage. In his compulsively readable book Phantoms in the Brain, V.S. Ramachandran recounts the story of Tom Sorenson’s phantom left arm. Sorenson lost his arm in a car accident.

[After his crash], even though he knew that his arm was gone, Tom could still feel its ghostly presence below the elbow. He could wiggle each "finger," "reach out" and "grab" objects that were within arm’s reach. Indeed, his phantom arm seemed to be able to do anything that the real arm would have done . . . Since Tom had been left-handed, his phantom would reach for the receiver whenever the telephone rang.
(Phantoms in the Brain, 21)

This, in and of itself, is unsettling. But to make matters worse, many people’s phantom limbs cause them excruciating pain. If someone had arthritis in the appendage that was removed, they often continue to experience joint pain. But given the limb's ghostly nature, pain medications do nothing to alleviate their suffering. Some phantom hands torture their owners by balling up into fists. Phantom nails dig into phantom palms and the patient is powerless to stop it.

Until fairly recently, many psychologists maintained that phantom limbs were caused by pathological denial. "As recently as fifteen years ago, a paper in The Canadian Journal of Psychiatry stated that phantom limbs are merely the result of wishful thinking." The agony of living with a phantom was compounded by patients’ belief that they were somehow bringing it on themselves.

Over the years, a variety of physiological explanations for phantom limbs have been bandied about, but none were terribly convincing. The most credible theory was that phantoms were caused by frayed nerve endings misfiring in the stump. Working from this hypothesis, some people with phantoms underwent repeated surgeries to shorten their stumps, only to have the pain return.

Stymied, medical professionals threw up their hands. For years, phantom limbs were relegated to the category of "unsolvable medical mysteries." Then neurologist V.S. Ramachandran read an article by Dr. Tim Pons, of the National Institutes of Health, which sparked his interest.

Pons and his colleagues were researching the Penfield Homunculus:

To further their research, they did some abominable things to a monkey. Pons and his team severed the nerve connections to a monkey’s arm, leaving it paralyzed. Then, they opened up its brain and rooted around in an area called the somatosensory cortex.

The somatosensory cortex contains a neural map of all of our body parts. A neurosurgeon named Wilder Penfield charted the points corresponding to each body region. His homunculus was designed to illustrate these connections. By and large, the somatosensory cortex follows a logical progression, i.e., the lips are next to the jaw, which is next to the tongue, etc. But if you look at it closely, you’ll see that the map contains a couple of inconsistencies. The face, for some odd reason, is next to the hand. And the feet are next to the genitals.

Why does this matter to us? Well, what Pons found was that when he stimulated the part of the somatosensory cortex associated with the face, the cells associated with the hand also fired. One of the great drawbacks of research monkeys is that they can’t talk. But Ramachandran guessed that if this one could, he'd be saying, ‘Wow. My dead arm’s moving.’

To test his theory, Ramachandran developed a study so simple it almost defies belief. He recruited some volunteers, bought a box of Q-tips, and went to work in his basement laboratory. He sat Tom in a chair, moistened his swab, and began gently stroking different areas of his face. I’ll let Ramachandran tell the rest:

I swabbed his cheek. "What do you feel?"

"You are touching my cheek."

"Anything else?"

"Hey, you know it’s funny," said Tom. "You’re touching my missing thumb—my phantom thumb."

I moved the Q-tip to his upper lip. "How about here?"

"You’re touching my index finger and my upper lip."

Once, when the water accidentally trickled down his face, he exclaimed with considerable surprise that he could actually feel the warm water trickling down the length of his phantom arm.
(Phantoms in the Brain, 29-34)

How is this possible? According to Ramachandran, when the somatosensory cortex learned that Tom’s arm no longer worked, it decided to reassign “arm” space to another purpose. The neurons correlating with his face started invading the neighboring area associated with his arm. The face neurons and the arm neurons began to overlap. So, when Tom’s face was stimulated, it inadvertently triggered feelings in his missing appendage.
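The remapping story can be cartooned in a few lines of code. In the sketch below, the "cortical strip," the invasion rule, and the labels are all invented for illustration; real cortical reorganization is vastly messier. The idea is just that when a body part's input disappears, neighboring territory takes over its patches while those patches keep a ghost of their old label.

```python
# A cartoon of cortical remapping: each slot in the "strip" is a patch of
# somatosensory cortex tagged with the body part it responds to. Penfield's
# map puts the face right next to the hand/arm region.
cortex = ["face", "face", "arm", "arm", "hand", "trunk"]

def deafferent(strip, lost_part):
    """Lose input from a body part; neighboring territory invades the
    vacated patches, which keep their old label as a phantom."""
    remapped = []
    for i, part in enumerate(strip):
        if part != lost_part:
            remapped.append({part})
            continue
        neighbors = [p for p in strip[max(0, i - 1):i + 2] if p != lost_part]
        remapped.append({neighbors[0], part} if neighbors else {part})
    return remapped

def feels_like(strip, touched):
    """Stimulate every patch responsive to `touched`; report every sensation."""
    felt = set()
    for labels in strip:
        if touched in labels:
            felt |= labels
    return felt

# After losing the arm, a touch on the face also registers on the phantom arm:
print(sorted(feels_like(deafferent(cortex, "arm"), "face")))  # ['arm', 'face']
```

Swabbing the "cheek" patches now fires both labels at once, which is roughly what Tom reported with the Q-tip.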

Using materials purchased at a corner drugstore, Ramachandran solved the mystery of phantom limbs. But he didn’t stop there. Having figured out roughly what caused phantom appendages, he was eager to help alleviate patients’ pain.

For reasons that are too complicated to go into here, he surmised that patients couldn’t control their phantom limbs because of cross wiring in their optic circuitry. So, he designed a simple contraption to prove his hypothesis. He took a cardboard box, cut two armholes in the side, and placed a mirror inside. When subjects put their working arms and phantoms in the box, it created the illusion that they were seeing both their limbs.

One of Ramachandran’s patients regularly struggled with the feeling that his phantom hand was making a tight fist that he couldn’t release. When he placed his real and ghost arm in the box, Ramachandran instructed him to make a fist with his working hand. Because of the mirror, the patient saw two fists. He was then asked to loosen his real hand. Magically, both fists opened. By tricking the brain into "seeing" the phantom, the patient was able to control its movements.

But that’s not all. One of Ramachandran’s patients used the box repeatedly for a week and called him with some exciting news.

"Doctor," he exclaimed, "it’s gone!"

"What’s gone?" (I thought maybe he’d lost the mirror box.)

"My phantom is gone . . . my phantom arm, which I had for 10 years. It doesn’t exist anymore."
(Phantoms in the Brain, 49)

How could a rudimentary illusion result in the first ever "phantom amputation"? Ramachandran believes that, thanks to the mirror, the parietal lobe was overwhelmed with conflicting signals. Eventually, it solved the sensory conundrum by saying, “To hell with it, there’s no arm here.”

This neurological magic hasn’t worked on all of Ramachandran’s patients, but his research promises to revolutionize the treatment of phantom pain. In the meantime, at least amputees can be assured that it’s not "all in their mind."

Virtual Footnote

In studying phantoms, Ramachandran solved another, less pressing mystery: what causes foot fetishes.

After his first paper on phantom limbs was published, he got a call from an engineer in Arkansas who had lost a leg.

"I lost my leg below the knee about two months ago, but there’s something I don’t understand. I’d like your advice," [he said to Ramachandran.]

"What’s that?"

"Well, I feel a little embarrassed to tell you this . . . Doctor, every time I have sexual intercourse, I experience sensations in my phantom foot . . . I actually experience my orgasm in my foot. And therefore it’s much bigger than it used to be because it’s no longer confined to just my genitals."
(Phantoms in the Brain, 36)

I think my first response to this news would have been, 'Congratulations.' But for Ramachandran it was a revelation. Remember: as far as the somatosensory cortex is concerned, the genitals and the feet are right next to each other. It makes perfect sense that, in some people, there’s an overlap between the genital region and the foot region. The result? Your feet would be an erogenous zone.

Sex, Love, and SSRIs

Previously posted on Annotate
March 2006

Would you rather be miserable and smitten, or serene and passionless? If you’re suffering from depression and your doctor has prescribed SSRIs (or selective serotonin reuptake inhibitors), these are your options, according to anthropologist Helen Fisher. Fisher, who has been called the "doyenne of desire," believes that Prozac and other SSRIs are robbing us of our ability to form satisfying romantic relationships. (Excerpt from Love)

It’s no secret that SSRIs squelch the sex drive. Over the past decade, Prozac’s libido-dampening effects have become such a part of the cultural conversation that the issue was even highlighted on Sex and the City. But Fisher’s not just talking about sex; she’s talking about love. According to her:

Serotonin-enhancing antidepressants (such as Prozac and many others) can jeopardize feelings of romantic love, feelings of attachment to a spouse or partner, one's fertility and one's genetic future.
(World Question Center)

Sound a little alarmist? Well, Fisher says she’s got biological evidence to support her claim.

SSRIs, like Prozac, work in two ways: they increase levels of serotonin, and they limit the activity of dopaminergic pathways--the pipelines that deliver dopamine to different regions of the brain. Scientists still aren’t entirely sure why this alleviates depression. But they do know that upping serotonin and curbing dopamine helps to blunt extreme emotions, and prevent the obsessive thinking that is believed to trigger depression. This is the good news.

The bad news is that both of these actions contribute to the suppression of sexual desire. Some people seem to be immune to this effect. Most are not. According to recent reports, 73 percent of SSRI users are libidinally-challenged.

Doctors have been grappling with this problem since the introduction of SSRIs. But most concur that a lack of sexual desire is preferable to debilitating melancholy. If Fisher’s right, that may change.

According to Fisher, passion is more than a pleasant byproduct of procreation. Without passion, a woman’s ability to choose an appropriate partner is impaired. And long-term relationships are destined to fail. Why? It’s all about the orgasm.

Fisher believes that orgasms evolved for two reasons. First, women use orgasms to filter out inappropriate mates.

A woman unconsciously uses orgasms as a way of deciding whether or not a man is good for her. If he’s impatient and rough, and she doesn’t have an orgasm, she may instinctively feel he’s less likely to be a good husband and father.
(Excerpt from Love)

But sex becomes no less important once you’ve chosen a partner. The second reason the orgasm evolved, according to Fisher, is to intensify the feelings of attachment that promote long-term bonding. When couples experience orgasms, they produce "a flood of oxytocin."

In recent years, scientists have become increasingly interested in oxytocin (not to be confused with the much-abused painkiller OxyContin). (DumbCrooks:OxyMorons) We’ve known for a while that dopamine is responsible for the elation and insanity that accompany falling in love. But the chemical components of long-term love remained somewhat mysterious. Researchers now think that oxytocin holds the key.

In long-term relationships that work . . . oxytocin is believed to be abundant in both partners. In long-term relationships that never get off the ground . . . or that crumble once the [dopamine] high is gone, chances are the couple has not found a way to stimulate or sustain oxytocin production.
(Excerpt from Love)

Orgasms aren’t the only way to promote the production of oxytocin. Small amounts of the hormone are released when a mother nurses or "when we hug long-term spouses or children." But when it comes to pair bonding, the orgasm is the single most effective way to increase levels of the lasting-love hormone. And if you’re not having satisfying sex with your partner, Fisher says, your chances of sustaining feelings of affection and kinship are severely compromised.

But wait--it gets even worse. According to Fisher, SSRIs also impact your ability to procreate, by increasing levels of serotonin.

Serotonin increases prolactin [and] prolactin can impair fertility by suppressing hypothalamic GnRH release, suppressing pituitary FSH and LH release, and/or suppressing ovarian hormone production. [And] strong serotonin-enhancing antidepressants adversely affect sperm volume and motility.

What does all this mean? If Fisher is correct, it means that if you’re taking Prozac to treat your depression, you may end up lonely and childless. But don’t despair just yet. Fisher’s theory has yet to be proven. And her doomsaying is contradicted by plenty of anecdotal evidence: I’ve known many people who’ve managed to maintain healthy relationships while taking SSRIs. That said, many of them look forward to a new generation of antidepressants that don’t impair the sex drive.

Be assured, scientists are on the case. Researchers are beginning to home in on the mechanisms of depression.

Scientists have discovered a protein in the brain called P11 that may explain how drugs like Prozac fight depression . . . The finding, published in the current issue of Science, also could point the way to a new generation of drugs for depression.
(Study Sheds Light on How Depression Drugs Work)

The more we learn about how depression works, the better the chances become of developing a targeted drug with limited side effects.


The anatomy of conformity

Previously posted on Annotate
March 2006

In the wake of World War II, stunned by the German people’s adoption of Hitler’s horrific vision of Aryan purity, psychologists set out to discover the mechanisms of social control. One of the most famous studies to emerge during this period was conducted by Gestalt psychologist Solomon Asch. In the early 1950s, Asch designed a series of studies that became known as the Asch Conformity Experiments.

Asch recruited a group of students to participate in what he called a "vision test." Each participant was seated in a classroom filled with what he presumed to be fellow test subjects. In reality, the "peer group" was made up of Asch’s confederates. The test subject was then presented with two cards. Card 1 had a line on it. Card 2 contained three lines of varying lengths, marked a, b, and c. The test subject was then asked to identify which line on Card 2 was the same length as the line on Card 1.

Left to their own devices, the test subjects invariably picked line "c." But when the peer group insisted on a different answer, the guinea pigs frequently fell in line. For instance, if Asch’s confederates claimed that line "b" was the correct answer, 33 percent of the test subjects concurred.

In the face of intense social pressure, many of Asch’s subjects denied the truth staring them in the face.

Since the 1950s, psychologists have wondered what was going on in the brains of Asch’s volunteers. Were they simply lying in order to reduce their discomfort? Or was it possible that their perceptions altered in response to group pressure?

Most of us would assume that the subjects were lying. We can plainly see that line "c" on Card 2 corresponds with the line on Card 1. But as it turns out, our view of reality is more susceptible to suggestion than previously supposed.

In June 2005, psychiatrist Gregory Berns of Emory University published a study that picked up where Asch’s left off. And his results are startling. The key difference in this study was that Berns’ subjects were placed in an MRI machine, which tracked activity in their brains while they underwent the test.

Before taking the test, each subject was ushered into a waiting room where he met four other "volunteers." These volunteers were, of course, plants--actors who had been instructed to give incorrect responses. To promote a feeling of solidarity, the subject and fellow "volunteers" were left alone in the room to talk, practice for the test, and take pictures of one another. Once they’d developed a rapport, the test commenced.

The real volunteer was put into an MRI machine and asked to "mentally rotate images of three-dimensional objects to determine if the objects were the same or different." (What Other People Say) The four actors were also in the room.

The rest of the test followed a similar model to Asch’s. When asked to correctly identify similar shapes, the four actors gave the wrong answer. Berns’ test subjects parroted the actors’ incorrect answers 41 percent of the time.

If the subjects were flat out lying, Berns reasoned, the MRI would reveal activity in the forebrain, the area associated with conscious deception. (Forebrain) But that’s not what showed up on the scan. The test subjects that concurred with the group showed changes in the posterior brain, the part responsible for visual and spatial perception:

In fact, the researchers found that when people went along with the group on wrong answers, activity increased in the right intraparietal sulcus, an area devoted to spatial awareness . . . There was no activity in brain areas that make conscious decisions, the researchers found.
(What Other People Say)

The subjects weren’t lying at all. The MRI scans reveal that those who capitulated to the group actually experienced a shift in perception. If the group insisted that a shape was square, suggestible subjects literally "saw" a square.

These findings may not surprise those of you who’ve read You are getting very sleepy. You already know how malleable the human mind is. What I found really intriguing about Berns’ study was the contrarians.

The subjects who stuck to their guns, insisting that the square was indeed a square despite group pressure, "showed activation in the right amygdala and right caudate nucleus--regions associated with emotional salience." Put simply, standing apart from the group exacted an emotional toll on the holdouts.

Berns’ MRI scans have answered the questions introduced by Asch’s study fifty-odd years ago. But they haven’t solved the real riddle. Why is it that some people are so susceptible to social pressure, while others are capable of resisting? The real breakthrough will come when we know what biological or social factors enable people to stand apart from the group.

You are getting very sleepy

Previously posted on Annotate
March 2006

As a born skeptic, I was always convinced that hypnosis was quack science. Then I reached the end of my tether. I’d promised myself I would quit smoking before I turned 30. In the months approaching my birthday, I still found myself sucking down a pack a day. I tried self-control. I tried tapering. I tried cold turkey. Nothing worked.

Unwilling to commit to regular Smokers Anonymous meetings, I decided to try hypnosis. Three hundred dollars later, I found myself in a small, dingy office sitting across from a maddeningly serene hypnotherapist with a shock of red hair. He led me through a series of exercises. I imagined myself as a child: energetic, ebullient, unhampered by addiction. Didn’t I want that again, the therapist asked? Didn’t I crave health more than nicotine? Clearly not, I thought. But when I walked out of his office I found myself chucking my unfinished pack of Camel Lights.

It worked--for six months. Apparently, I’m more committed to debauchery than health. Still, when I decide to quit, I intend to try hypnosis again. It’s the best solution I’ve stumbled on so far. And now I know why.

Neuroscientists have discovered that when impressionable people undergo hypnosis, they experience verifiable shifts in perception:

Recent brain studies in people who are susceptible to suggestion indicate that when they act on the suggestions their brains show profound changes in how they process information. The suggestions, researchers report, literally change what people see, hear, feel and believe to be true.
(This Is Your Brain Under Hypnosis)

Brain imaging revealed that, after being hypnotized, American test subjects “saw” colors that weren’t there, and lost the ability to read English. This may seem unbelievable, but it makes sense given our increased knowledge of how the brain processes information:

Information from the eyes, ears and body is carried to primary sensory regions in the brain. From there, it is carried to so-called higher regions where interpretation occurs.
(This Is Your Brain Under Hypnosis)

Humans are extremely efficient at carrying sensory data downward. Surprisingly, we are less capable of feeding our interpretations of this information back up to the conscious mind. This is a function of our machinery. “There are 10 times as many nerve fibers carrying information down as there are carrying up.”

What does this mean exactly? It means that our conscious impressions are often based on logical extrapolations. We use past experience to fill in missing information and form conclusions. We have, for instance, an existing framework in place for perceiving faces. We don’t need to reinvent the wheel every time we meet someone. Each new face we encounter is initially stored as a slight variation of an existing "face recognition" template. Intellectual information is filtered through existing theoretical frameworks. I know. It’s all a little abstract. What you really need to know is “what you see isn’t always what you get.”

Hypnosis, apparently, works because it influences the parts of our brains involved in making logical leaps based on existing frameworks. For example, you might see a flower and recognize it as yellow. But a skilled hypnotherapist can override this impression. She can tell you, “No, you’re mistaken. The flower is blue.” If you’re susceptible to hypnosis, your brain will accept this new interpretation. “If the top (or conscious mind) is convinced, the bottom level of data is overruled.”
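One way to make "the bottom level of data is overruled" concrete is the precision-weighted averaging used in predictive-processing accounts of perception: what you perceive blends the sensory signal with the prior expectation, each weighted by how much confidence the brain places in it. The hue values and weights below are invented purely for illustration; this is a sketch of the general idea, not a model of hypnosis.

```python
def perceive(sensory_hue, prior_hue, sensory_weight, prior_weight):
    """Blend sensory evidence and prior expectation into a single percept,
    weighted by the confidence assigned to each."""
    total = sensory_weight + prior_weight
    return (sensory_weight * sensory_hue + prior_weight * prior_hue) / total

YELLOW, BLUE = 60.0, 240.0  # hue angles in degrees (illustrative)

# Ordinarily the senses dominate, and the flower looks yellow:
print(perceive(YELLOW, BLUE, sensory_weight=9.0, prior_weight=1.0))  # 78.0, near yellow

# If suggestion inflates the prior's weight, the same retinal
# input is perceived as blue:
print(perceive(YELLOW, BLUE, sensory_weight=1.0, prior_weight=9.0))  # 222.0, near blue
```

Same input, same formula; only the confidence assigned to the top-down expectation changes.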

These findings have profound implications, because they suggest that our “perceptions can be manipulated by expectations.” According to Michael Posner, a neuroscientist at the University of Oregon, “This is fundamental to the study of cognition.” We are inching closer and closer to identifying the biological mechanisms that determine how we perceive the world.

Current research suggests that human beings are the architects of their own reality. Put simply, science is confirming what new-age philosophers have been saying for decades: you are ultimately responsible for your perception of reality. And if your current perception is causing you heartache, it’s possible to change it.

Stranger than science fiction

Previously posted on Annotate
March 2006

You know those guys in high school who never learned to talk to girls? The ones who didn’t bother with acne medication, sported glasses that appeared to have been passed down from a long-dead uncle, and knew an alarming amount of Star Trek trivia? Well, apparently, they’ve all grown up and become neuroscientists.

It’s the only possible explanation. No one else could have come up with the idea of teaching a disembodied brain to fly an F-22 fighter jet. That’s right, folks--a group of neuroscientists in Florida grew a brain in a Petri dish and taught it how to fly.

They extracted 25,000 neural cells from a rat embryo, planted them in a glass dish, and grew a brain--or, at least, a close approximation. The brain or "live computation device" was "suspended in a specialized liquid to keep [it] alive and then laid across a grid of 60 electrodes." Once the brain had formed all the appropriate cellular connections, the scientists used the electrodes to stimulate different areas and measure neural activity.

Initially, they used their newfangled "living computer" to chart the brain processes involved in storing information. They soon became so adept at it that they decided to try something more ambitious: they hooked the brain up to a flight simulator. Preliminary testing was not terribly promising. "When we first hooked [it] up, the plane crashed all the time," said chief researcher Thomas DeMarse. (Why this brain flies on rat cunning)

But the brain learned quickly. The organic neural network adapted and grew increasingly skilled at piloting the virtual jet. According to DeMarse, "the brain [learned] to control the pitch and roll of the aircraft. After a while, it [produced] a nice straight and level trajectory."
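The experiment is, at bottom, a closed feedback loop: read the culture’s activity off the electrode grid, translate it into pitch-and-roll commands, and feed the plane’s error back so the network can adapt. Here’s a toy sketch of such a loop, in which a simple adaptive gain stands in for the living network. Every name and number below is illustrative; none of it reflects DeMarse’s actual stimulation protocol:

```python
# Toy closed-loop controller sketch. A one-parameter adaptive "gain"
# stands in for the neural culture; the real experiment fed error
# signals back as patterned electrode stimulation.

def fly(steps=200, learning_rate=0.05):
    pitch = 1.0   # plane starts tilted; straight-and-level is 0.0
    gain = 0.0    # the stand-in network starts out knowing nothing
    errors = []
    for _ in range(steps):
        error = pitch            # deviation from level flight
        errors.append(abs(error))
        pitch -= gain * error    # network's output nudges the controls
        gain += learning_rate * error * error  # adapt while error persists
    return errors

errors = fly()
```

At first the corrective output is zero and the plane stays off level; as the gain adapts, the error shrinks toward zero--a cartoon version of the "nice straight and level trajectory" DeMarse describes.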

As bizarre as it may sound, the Florida researchers’ findings do have practical applications. DeMarse’s team has paved the way for the development of "hybrid computers," one part circuit board, one part living tissue. Before you panic, the ultimate goal is not to create walking, talking cyborgs, a la the Terminator. It’s to create biologically enhanced computers to perform sophisticated operations with little room for human error.

If our understanding of "neural computation" continues to advance at its current rate, bio-computers may soon be used to pilot unmanned aircraft on military missions deemed too dangerous for humans, and to perform complex computations that stymie even the most gifted mathematicians and programmers.

Bio-computers will make Deep Blue, the supercomputer that went head to head with chess champion Garry Kasparov in the late ‘90s, look like a prototype, according to DeMarse:

"The algorithms that living computers use are extremely fault-tolerant," Dr DeMarse said. "A few neurons die off every day in humans without any noticeable drop in performance, and yet if the same were to happen in a traditional silicon-based computer the results would be catastrophic."
(Why this brain flies on rat cunning)

Okay, I gotta hand it to them. The idea of a self-correcting computer is pretty impressive. But I won’t be satisfied until I can plug myself in, instantaneously download a program, and proclaim: “I know Kung Fu.”

Don't be afraid of your genes

Previously posted on Annotate
March 2006

I’ve been talking a lot about genes lately (because I’m obsessed) and what I’m finding is that many people are alarmed by genetics. I believe there are two primary reasons for this--one quite valid.

A Vast Social Engineering Project?

This fear of genetics arises, in part, from the belief that we may soon find ourselves enmeshed in a vast social engineering project. Neuroscientists already have the power to tinker with human nature and this power will only increase with time.

It’s likely that within the next 10 years, people suffering from post-traumatic stress disorder will have the option of expunging the memories that haunt them. But losing a memory means, in a sense, losing a part of the self. How many will be comfortable re-engineering their minds to this degree?

Even more bewildering is the idea that parents will one day have the ability to "adjust" their children’s personalities before they’re even born. “Shall we extract the selfish gene?” a doctor might ask. “The ADD gene? The genes that correlate with depression?” Depression is a clinical illness that causes a great deal of agony. But it’s also associated with a high degree of sensitivity, compassion, and creativity. Will curing the disease blunt these gifts?

The newfound power to re-engineer human nature is both intoxicating and disturbing. What are the side effects of extracting pieces of the human puzzle? Will we lose ourselves in a rush towards perfection?

These are all legitimate concerns. That said, there’s no turning back. We’re in the throes of a scientific revolution and no amount of wishful thinking will stop it. Genetic advances will force us to redefine what it means to be human.

We can take solace in the fact that we aren’t the first generation to wrestle with what it means to be human. Every great scientific paradigm shift, from the Copernican Revolution to Darwinism, "profoundly affected the way in which we view ourselves and our place in the cosmos." (Phantoms in the Brain) Those of us who embrace human idiosyncrasies can only hope that common sense wins out.

The Myth of Genetic Determinism

The second major factor contributing to the fear of genetics can be traced back to the age-old nature vs. nurture debate. As author Nancy Andreasen says in her excellent book, Brave New Brain:

When people discuss the causes of [human behavior], they frequently fall into a . . . false dichotomy. Are [illnesses] due to genes or environment? . . . The first and most basic problem with this dichotomy is that very few things in human life—normal or abnormal—are due solely to genes, or solely to non-genetic factors.

Many people suffer from the mistaken assumption that their genes determine their destiny. While it’s true that genes may predispose you to certain conditions, like heart disease, manic depression, or breast cancer, possessing a genetic marker doesn’t seal your fate.

Genes are a two-way street. Your actions influence their behavior. If you are genetically predisposed to alcoholism, this doesn’t mean that you will inevitably become an alcoholic. If you’re prone to alcoholism and you guzzle a pint of vodka every day--well, yeah, chances are, you’ll become an addict. If, however, you drink in moderation, stay aware, and treat your body with respect, you can easily avoid the disease.

A healthy-living individual with a predisposition to alcoholism is far less likely to fall prey to addiction than a chronic alcohol abuser with no genetic proclivity. By over-imbibing on a regular basis, the "normal" drinker will change his brain chemistry. He will, in fact, alter the behavior of his genes. The satisfaction centers of his brain will become dependent on the substance. When he stops using, his levels of CREB (Got CREB?) will dip; he’ll become extremely anxious; and he’ll be inclined to drink again to produce the chemical cocktail necessary for serenity.

A good example of just how "plastic" genes are is myopia, or near-sightedness. The prevalence of myopia in our society suggests that it’s genetically predetermined. It’s not. People genetically prone to myopia only develop the disorder if they engage in a particular behavior: reading. Recent studies have shown that "genes cause short sight only in those who learn to read. In societies where few people read, myopia will correlate more closely with reading than with 'myopia genes.’" (Genes So Liberating)

Obviously, few of us are so put out by near-sightedness that we’d opt for illiteracy. But this example illustrates my point: possessing certain genes doesn’t rob us of free will.

Instead of fearing your genes, get to know them. Take note of diseases that run in your family and make decisions that support your health. And keep in mind that, as neuroscientists plumb the depths of the genetic code, we become increasingly likely to stumble on cures for diseases. Knowing that genes respond to their environment “frees us from genetic determinism,” Dr. Andreasen says. “It offers us hope that we will some day learn ways to modify the genetic instructions that lead to diseases.”


Decoding Consciousness

Previously posted on Annotate
March 2006

"It is better to tackle ten fundamental [scientific] problems and succeed in only one, than to tackle ten trivial ones and solve them all," Francis Crick once told his devoted pupil V.S. Ramachandran, director of the Center for Brain and Cognition at the University of California, San Diego.

Ramachandran, apparently, took this advice to heart. The man who contends that neuroscience is ushering in "the greatest [scientific] revolution of all" believes that understanding the circuitry of the brain will soon allow us to tackle the existential questions that have plagued philosophers for centuries. He is so confident, in fact, that he's started to consider the problem of consciousness.

This ambitious project is not purely an exercise in hubris. Ramachandran is picking up where his mentor left off. After co-discovering the structure of DNA, Crick turned his attention to the study of self-awareness. He was convinced that he’d found a way to decode the biological underpinnings of consciousness. The correct way to approach the slippery problem, in Crick’s view, was to understand how we process visual information.

It seems an odd way to start. But it makes more sense when you learn about a phenomenon called “blindsight.”

It all started with a patient known as GY. GY suffered from a peculiar vision problem. He was completely blind on his left side due to damage to his right visual cortex. When you and I close one of our eyes, the other eye compensates, offering us a fairly broad field of vision. But if a person’s right visual cortex is impaired, he is blind to everything in the left half of his visual field. It’s an odd--and, I imagine, extremely disorienting--condition.

In the late ‘90s, two Oxford researchers, Lawrence Weiskrantz and Alan Cowey, became preoccupied with this vision problem and recruited GY to undergo a series of tests.

Here’s what happened:

When examining [GY], Weiskrantz noticed something really strange. He showed the patient a little spot of light in the blind region. Weiskrantz asked him, "What do you see?" The patient said "nothing" [but then] he told the patient: "I know you can't see it but please reach out and touch it." The patient . . . must have thought this is a very eccentric request.

So [GY] said . . . I can't see it how can I point to it? Weiskrantz said: “Well, just try anyway, take a guess.” The patient reached out to touch the object and imagine the researcher's surprise when the patient reached out and pointed to it accurately--pointed to the dot that he [could not] consciously perceive. After hundreds of trials, it became obvious that he could point accurately on 99 percent of trials even though he claimed on each trial that he was just guessing . . . From [GY’s] point of view it might as well have been an experiment on ESP.
(From BBC 2003 Reith Lecture: Synapses and the Self)

Based on these findings, Weiskrantz and Cowey came to a startling conclusion: the patient was, in fact, “seeing.” He simply wasn’t conscious of it. How is that possible, you ask? Vision is a complex mechanism involving multiple parts of the brain. GY’s visual cortex was damaged, but another key brain region involved in seeing remained intact: "the pathway going through his brain stem and superior colliculus." What the researchers eventually concluded was that GY was “seeing” with the pathway to the superior colliculus.

By evolutionary standards, the visual cortex is relatively new. The pathway leading to the superior colliculus is old--practically primordial. GY’s ability to consistently perceive the spot of light, without being consciously aware of it, suggested something astounding: it suggested that only the newer visual cortex contributed to "conscious awareness." The ancient colliculus pathway, on the other hand, could "do its job perfectly well without being conscious." This implies that humans were not always self-aware, and that consciousness was an evolutionary adaptation.

The discovery of blindsight spurred a flurry of research in the field of consciousness. Crick was hardly the only scientist concerned with consciousness, but he was the most vocal. He believed that consciousness "hinged on the behavior of neurons" and that it might "be clustered within either one or multiple areas of the brain." (Princeton.edu)

I know. It’s a little vague. But applying the scientific method to the study of consciousness is a dodgy business and scientists are reluctant to make grand proclamations. The mere suggestion that consciousness was a function of neuron activity was highly controversial. (It does, after all, challenge traditional beliefs about the soul.)

Ramachandran has continued Crick’s study of consciousness, but he’s approaching it from a different angle. He believes that a cluster of cells called mirror neurons might be responsible for self-awareness. He discussed his nascent theory in a 2003 BBC lecture, Neuroscience-The New Philosophy. According to Ramachandran:

. . . our brains [are] essentially model-making machines. We need to construct useful . . . simulations of the world that we can act on. Within the simulation, we need also to construct models of other people's minds because we're intensely social creatures, us primates. We need to do this so we can predict their behavior.

Evolution, Ramachandran says, imbued us with the power to intuit other people’s intentions, enabling us to forge social connections and defend ourselves against potential aggressors. It did this by engineering the cells we’ve come to call mirror neurons.

Put simply, mirror neurons are "empathic" cells. They allow us to emotionally simulate another person’s internal reality. (For more on this see: Mirror Neurons Revisited)

Ramachandran believes that this ability to “read” the intentions of others may have predated consciousness. He surmises that once humans became adept at reading the emotions of others, they turned this power inward and began analyzing their own intentions. Thus, consciousness was born.

If he’s right (and, at present, we have no way of knowing), consciousness may be a relatively simple phenomenon. Increased understanding of the behavior of mirror neurons may unlock the mysteries of the self.

The idea that self-awareness is the product of a complex set of chemical reactions in your brain may be unsettling to some. There are people who feel that this clinical approach to understanding the psyche robs life of its magic. But comprehending the biological underpinnings of "the self" makes it no less miraculous, in my view. The more I learn about neuroscience, the more I marvel at nature’s ingenuity and creativity.

Manipulating your mind

Previously posted on Annotate
March 2006

In Mind Wide Open, Steven Johnson writes about advances in neurofeedback technology. “Your Attention Please” describes Johnson's attempts to pedal a virtual bicycle using the power of his brain.

He’s at a training session organized by a firm called The Attention Builders. As the name suggests, the company is in the business of building attention. The firm’s software was designed to familiarize children suffering from attention deficit disorder (ADD) with the experience of concentrating. To do this, they employ some newfangled technology. A helmet that wouldn’t look out of place in TRON is used to track the subject’s brain waves, moment by moment. This piece of machinery syncs up with a console that runs a variety of video games. This particular game involves an animated bike rider. Theoretically, when the subject is experiencing peak attention, he can “will” the animated bicyclist to pedal. Here’s what happened when Johnson donned the helmet:

After a few minutes of calibration, [the technician] announces that the system is ready, and launches the bike game. A long stretch of intense humiliation begins. From the very outset, my bike refuses to budge . . . I try staring intently at the screen. I try staring intently at the wall . . . the bicycle remains frozen . . . I’m trying to focus on the game, but very quickly, I find myself focusing on the possibility that I have been suffering from ADD for years without realizing it.
Five minutes pass, and the game ends. All told the bike has made only a handful of brief nudges forward. I’m ready to swallow an entire bottle of Ritalin.
(Mind Wide Open, Steven Johnson, 76)

The ability to pedal the bike depends on the subject’s ability to control his theta levels—the brain waves that result in distractibility. The helmet’s readings of Johnson’s theta levels suggested that his attention span was only slightly superior to a fruit fly’s. This, thankfully, was not the case. His inability to pedal the bike turned out to be the result of a technical hiccup. After recalibrating, he took the test again and proved to have impressive control over his theta levels.
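The game logic, as described, amounts to a threshold gate on a stream of brain-wave readings: while theta stays low enough, the bike moves. A minimal sketch of that gating idea follows--the threshold, the smoothing window, and the function name are all invented for illustration, not details of The Attention Builders’ actual software:

```python
# Hypothetical neurofeedback gate: smooth incoming theta readings and
# advance the on-screen bike only while smoothed theta stays below a
# "focused" threshold. All numbers here are made up.

def bike_distance(theta_readings, threshold=4.0, window=3):
    distance = 0
    recent = []
    for theta in theta_readings:
        recent.append(theta)
        if len(recent) > window:
            recent.pop(0)               # keep a short moving window
        smoothed = sum(recent) / len(recent)
        if smoothed < threshold:        # low theta ~ focused attention
            distance += 1               # bike pedals forward one step
    return distance

focused = bike_distance([3.0] * 10)     # sustained focus: bike rolls
distracted = bike_distance([6.0] * 10)  # high theta: bike stays frozen
```

Johnson’s frozen bike corresponds to readings that never dip below the threshold--which is why a miscalibrated sensor reads as a hopeless attention span.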

Johnson’s venture into the realm of neurofeedback paints a clear picture of its current limitations. The technology is good, but not great. In some respects, it’s still frustratingly rudimentary, which is why I was so intrigued by John Geirland’s recent piece for Wired, Buddha on the Brain.

I won’t delve into the details of the article here. Suffice it to say, it’s recommended reading for anyone interested in Buddhism. What grabbed my attention (and immediately brought Johnson’s bike pedaling debacle to mind) was Geirland’s account of a study on brain activity in Buddhist monks.

The study was spearheaded by Richard Davidson, a neuroscience professor at the University of Wisconsin-Madison. Davidson is a longtime “spiritual seeker” and a personal friend of the Dalai Lama. This makes his interest in the subject entirely understandable. It also makes his findings somewhat suspect in the eyes of the scientific community. Many neuroscientists have contested Davidson’s results, but I find them intriguing enough to throw caution to the wind and present them to you anyway:

In June 2002, Davidson’s [team] positioned 128 electrodes on the head of Matthieu Ricard. A French-born monk from the Shechen Monastery in Katmandu, Ricard had racked up more than 10,000 hours of meditation.

[A researcher] asked Ricard to meditate on "unconditional loving-kindness and compassion." He immediately noticed powerful gamma activity—brain waves . . . indicating intensely focused thought. Gamma waves are usually weak and difficult to see. Those emanating from Ricard were easily visible, even in the raw EEG output. Moreover, oscillations from various parts of the cortex were synchronized--a phenomenon that sometimes occurs in patients under anesthesia.

The researchers had never seen anything like it. Worried that something might be wrong with their equipment or methods, they brought in more monks [provided courtesy of the Dalai Lama], as well as a control group of college students inexperienced in meditation. The monks produced gamma waves that were 30 times as strong as the students'. In addition, larger areas of the meditators' brains were active, particularly in the left prefrontal cortex, the part of the brain responsible for positive emotions.
(Buddha on the Brain)

Davidson’s study indicates that master meditators are capable of “alter[ing] the structure and function” of their brains through sheer force of will or--err . . . active non-resistance. These findings suggest that human beings have the ability to train their minds to conjure up and prolong optimal mental states, without the aid of virtual reality helmets or electroshock therapy. They suggest, in short, that our brains are far more advanced than our technology. To some this seems achingly obvious. To others, it’s a true revelation.

Davidson’s findings simultaneously thrill me and fill me with anxiety. It’s nice to know my noggin’s capable of tapping into a bottomless well of compassion. But my schedule is already pretty full and I just can’t see fitting in the requisite 10,000 hours of meditation.

Psychedelic Pharmacology

Previously posted on Annotate
March 2006

Clarence Darrow famously said: "I have never killed a man, but I have read many obituaries with great pleasure." It’s likely that Dr. John Halpern experienced a similar kind of schadenfreude on hearing of Timothy Leary’s death in 1996.

For those of you too young to remember him as anything other than Uma Thurman’s godfather, Leary was a renowned academic who launched the now infamous Harvard Psilocybin Project. The research project, which Leary developed in partnership with Richard Alpert (later known as Ram Dass), used psychedelics to facilitate "life-altering spiritual insights" in alcoholics and convicted criminals.

Today the idea of using hallucinogens for medicinal purposes sounds highly unorthodox. But in the early ‘60s, when Leary’s project got underway, many psychologists were optimistic about their curative powers. Few of them, however, dipped into the product as often as Leary.

Leary’s rampant drug use eventually got him dismissed from Harvard. Unfazed, he renounced the academic life and became the leading proselytizer for psychedelic self-actualization. In this role, he inspired countless hippies to "expand their consciousness" by engaging in such spiritually revelatory activities as dropping acid and watching Fantasia. You and I might find this cute. The medical establishment did not.

Thanks to Leary, the use of psychedelics in medical research became taboo. And John Halpern thinks that’s a crying shame. "That man screwed it up for so many people," Halpern said of the late visionary.

Halpern’s interest in psychedelics was spawned by a conversation he had with one of his medical school mentors in the early ‘90s. He was growing increasingly frustrated with his inability to help his alcoholic and drug-dependent patients:

He sounded off to an older psychiatrist, who mentioned that LSD and related drugs had once been considered promising treatments for addiction. "I was so fascinated that I did all this research," Halpern recalls. "I was reading all these papers from the 60s and going, whoa, wait a minute! How come nobody's talking about this?"
(Psychedelic Medicine)

The more he researched, the more incensed he became. There was a stockpile of evidence suggesting that hallucinogens were a singularly effective means of treating people in the grips of addiction, according to Halpern. But Leary’s legacy had left psychiatrists so gun-shy that no one wanted to touch it. Halpern is one of a growing number of iconoclastic psychiatrists who are bent on changing that. And if they have their way, pharmacists will soon be dispensing as much Ecstasy as they are Prozac.

If you think he’s part of a fringe movement, think again. Halpern is the associate director of Harvard University's McLean Hospital substance abuse program. (Yup, he’s picking up where Leary left off. He, however, seems less inclined to raid the hospital pharmacy when things get dull.) To Halpern, hallucinogens are a serious business. He truly believes in their medicinal potential and he’s collected enough evidence to convince the FDA he’s right.

In February 2005, the FDA agreed to let Halpern prescribe MDMA (Ecstasy) to terminal cancer patients. And he’s working on getting permission to use LSD to treat sufferers of cluster headaches. But these are just baby steps--strategic maneuvers designed to get the medical establishment comfortable with the idea of using psychedelics in a psychiatric setting. In the long run, he hopes to get the go-ahead for a study that would evaluate the use of drugs like LSD, psilocybin, and DMT in treating addiction. Some evidence suggests that the "profound insights and cathartic emotions" spurred by these drugs can reduce drug cravings. Many contend that this is the result of "afterglow," but Halpern suspects that there is also a biochemical component.

Halpern is just one of a vanguard of psychiatric researchers around the world who are investigating the therapeutic potential of hallucinogens. Psychiatrist Francisco Moreno, of the University of Arizona, is testing psilocybin as a potential treatment for sufferers of obsessive-compulsive disorder. Michael Mithoefer, a South Carolina doctor, is conducting a clinical trial to evaluate the use of MDMA in ameliorating post-traumatic stress disorder. A Russian psychiatrist by the name of Evgeny Krupitsky has administered ketamine (Special K) to upwards of 300 heroin addicts and alcoholics with good results. (66 percent of the alcoholics Krupitsky treated with Special K in one of his studies remained clean for a year or more after their session, while only 24 percent of the non-ketamine imbibers stayed sober.)

Charles Grob, a UCLA Medical Center psychiatrist, headed up a 1996 study of the psychological and physical effects of ayahuasca, a DMT-rich drug historically used by shamans in the Amazon. Uniao do Vegetal (UDV), a Brazilian church, administers ayahuasca as a sacrament. Grob studied UDV churchgoers to determine the long-term effects of ayahuasca. What he found was startling:

UDV members who regularly took ayahuasca were on average physiologically and psychologically healthier than a control group of non-worshippers. The UDV followers also had more receptors for the neurotransmitter serotonin, which has been linked to lower rates of depression and other disorders.
(Psychedelic Medicine)

Studies like these have convinced many psychiatrists that it’s time to take a second look at psychedelics. Risk-averse pharmaceutical companies, however, are in no rush to sink money into hallucinogen-based drug development. And it’s hard to imagine that changing any time soon.

While I’m intrigued by the idea of using psychedelics as an alternative form of therapy, I’m also skeptical. Given the option, I’m not sure I’d choose a 12-hour foray into an LSD-induced parallel universe over a cluster headache. That might change, however, if I got my hands on the new synthetic compound designed to "induce transcendent experiences as reliably as LSD . . . but with a greatly reduced risk of a bad trip." (The Third Culture)

Got Creb?

Previously posted on Annotate
February 2006

I live with a man who can easily dredge up the names of people who testified in the Watergate hearings, because he watched them on TV--when he was four. He can recite dialogue from movies he hasn’t seen since the early ‘80s. And he can tell you with absolute certainty that Admiral Ackbar, the fish-faced commander from Star Wars, was a member of the Mon Calamari species. I, on the other hand, have trouble remembering high school. I fear that if I don’t do something to staunch the flow of data pouring out of my head, he’ll spend the autumn of his life reminding me what my name is, while feeding me from a sippy cup. So, I have a special interest in “cyclic-AMP response element binding protein,” commonly known as CREB.

CREB is a brain protein that scientists believe may hold the key to long-term memory. In order to understand how CREB works, you need to know a little about the chemical reactions occurring in your brain on a daily basis.

It’s easiest to think of your brain as the guts of a computer and your conscious mind as the keyboard. Flicking the memory switch is the equivalent of pressing "Save." When you enter this command on a keyboard, it sends a combination of signals through a network of circuits in your computer. Your brain works in much the same way. When your mental "Save" is activated, it sets off a series of chemical reactions in a network of neurons. In healthy people, this process "trigger[s] a synthesis of proteins," required for memory formation. CREB is the binding protein--the glue that holds it all together. (New York Academy of Sciences)

When your body isn’t producing enough CREB, your long-term memory starts to deteriorate. Alzheimer's patients, for instance, suffer from a severe deficit of CREB. Neurologists believe that increasing the brain’s production of CREB may reverse memory loss. Based on this assumption, clinicians are racing to develop CREB-enhancement drugs. At the moment, 40 different drugs are in production. I await them with bated breath--as does Kate Moss, no doubt. (Female First)

Even if you aren’t in danger of sinking into premature dementia, there are reasons to be interested in CREB-promoters. A researcher at the University of Illinois has discovered that the brain protein is also linked to anxiety and alcoholism.

Psychiatrist Subhash Pandey recently designed a study to identify the function of CREB in rats’ brains. He found that chronically anxious rats suffered from a shortage of CREB in the central amygdala. Further testing revealed that this deficiency was hereditary. But neurotic rats didn’t simply suffer in silence; they self-medicated--with alcohol. Pandey found that drinking suppressed fear in his high-strung subjects. Why? Because hitting the sauce increased their "levels of active CREB." To Pandey, this suggests that "genetically high anxiety levels are important in the promotion of higher alcohol consumption in humans." (Bright Surf)

As far as I know, the current crop of CREB-based drugs only targets memory loss. But I wouldn’t be surprised if they discovered that CREB-promoters had other applications. If recent findings about CREB prove credible, these drugs might be effective in combating alcoholism and chronic anxiety, as well as memory loss.

The Introvert Advantage

Previously posted on Annotate
February 2006

Introversion is a loaded word. Just look it up in the dictionary and here’s what you’ll find:

Introversion: The state or tendency toward being wholly or predominantly concerned with and interested in one's own mental life
(Merriam-Webster Online)

Doesn’t sound so good, does it? Sounds downright narcissistic. And this is no accident. Sigmund Freud coined the term “introvert” to describe one of the traits associated with narcissism. In Freud’s view, introverts were neurotics who had taken "a turn from reality to phantasy [sic]." According to Freud, introversion denoted "the turning away of the libido from the possibilities of real satisfaction."


In other words, Freud believed that introverts were emotionally stunted. Unable to confront the fearful prospect of sex, the introvert retreated into himself, sublimating all of his libidinal urges into an unhealthy preoccupation with his own delusional inner life.

In retrospect, this theory seems so absurd that it’s hard to believe that it gained any traction. But it did. For decades, psychologists subscribed to the notion that introversion was a low-grade pathology. Although introverts make up something like a third of the world’s population, it wasn’t uncommon for psychologists to write clinical definitions of introversion that went something like this:

Introversion is normally characterized by a hesitant, reflective, retiring nature that keeps itself to itself, shrinks from objects, is always slightly on the defensive and prefers to hide behind mistrustful scrutiny.
(From The Introvert Coach)

Extroverts, on the other hand, were treated as paragons of mental health:

Extroversion is normally characterized by an outgoing, candid, and accommodating nature that adapts easily to a given situation [and] quickly forms attachments.
(From The Introvert Coach)

Modern definitions aren’t as flagrantly prejudiced, but introversion is still stigmatized. Given the option, most people would prefer to be extroverts. Punch the word “introversion” into Amazon.com and you come up with a list of self-help books like: Introvert to Extrovert and The Highly Sensitive Person. Look up “extroversion” and you get a whole lotta nada.

But all of those self-satisfied extroverts out there might be interested to learn that recent scientific findings suggest that introversion is not a psychological disorder--it’s a physiological trait with some distinct advantages.

Brain scans reveal that introverted children "have more brain activity, in general, [than extroverts] specifically in their frontal lobes." (Introverts in an Extrovert’s World) Experts have long understood that introverts need solitude to recharge, while extroverts are energized by social interaction. They did not, however, understand why. Scientists now believe they’ve discovered the biological foundation of both temperaments.

Extroverts have a high degree of activity in the back of their brains, particularly in the areas associated with digesting sensory input. They are more attuned to the outside world, according to neurologists, because that’s where most of their stimulus comes from. And brain scans suggest that extroverts seek constant stimulation, because they have "less internally generated brain activity."

Introverts, on the other hand, have such a surfeit of brain activity that it’s sometimes difficult for them to attend to what’s happening around them. While this might result in some social awkwardness, it has a host of benefits. Introverts have more acetylcholine, a chemical that enhances "long-term memory, the ability to stay calm and alert, and perceptual learning." They also have increased activity in the frontal lobe, which has been linked to high-level problem-solving skills, long-term planning, and a facility with language. Perhaps it’s no accident that an estimated 60 percent of the world’s best minds have been introverts. (Pop Matters)

As an introvert, I can’t help being a little pleased with these findings. It would be nice to see a glut of extroversion self-help books on the market—books like: Mastering the Art of Concentration: The Extrovert’s Guide to Blocking Out Superfluous Sensory Input or From Small Talk to Real Talk: How Not to Bore Introverts at Parties. But the truth is, you can’t draw a clear line in the sand between introverts and extroverts. Most of us fall somewhere along a continuum. It’s just that some land closer to the poles than others.

Want to know your stats? Take the Myers-Briggs Test: Human Metrics.

On Writer's Block

Previously posted on Annotate
February 2006

I’ve never read anything that captures the torment of a bad day of writing as well as the following passage from the preface of Joan Didion’s Slouching Towards Bethlehem:

. . . I sit in a room literally papered with false starts and cannot put one word after another and imagine that I have suffered a small stroke, leaving me apparently undamaged but actually aphasic.

I remember being intensely relieved when I first read this ten years ago. It seemed clear to me that only people of a certain disposition were afraid they were suffering from an undiagnosed embolism at the age of 22. And that I was one of those people. But so was Joan Didion! I felt a profound and, as it turns out, entirely unwarranted sense of kinship.

Recent research suggests that this feeling of temporary aphasia is not at all uncommon among writers. And Didion’s depiction of her distress may be more than metaphorical. It may be a surprisingly accurate description of the mechanics of writer’s block.

Neurologist Alice Flaherty, of Massachusetts General Hospital, began researching language after suffering from a temporary bout of hypergraphia. The mirror image of writer’s block, hypergraphia is a syndrome that results in an uncontrollable compulsion to write. After four months of scribbling Zen koan-like sentence fragments on Post-Its in the middle of the night, Flaherty was driven to figure out exactly what was happening to her. So, she embarked on an exploration of the regions of the brain responsible for language.

Brain scans of people suffering from hypergraphia quickly revealed abnormal activity in the limbic system. Flaherty found that hypergraphia was associated with hyperactivity in the temporal lobe, the area of the cerebral cortex responsible for assigning meaning and significance to language. For hypergraphics, everything seems saturated with meaning--hence, the desperation to get it all down on paper.

Hypergraphia is relatively rare. (Frankly, I’m not sure whether I’m happy about that or not, but that's neither here nor there.) What’s important to us is that in the process of researching her own condition, Flaherty made some startling discoveries about writer’s block. She discovered that patients who claimed to be suffering from block were experiencing unusual brain activity as well. Like those of hypergraphics, blocked writers’ temporal lobes appeared to be super-charged, but brain scans showed decreased activity in another language center--the frontal lobe.

While her findings are still preliminary, Flaherty believes that writer’s block is the product of reduced interaction between the temporal lobe and frontal lobe, the primary center for producing language. When activity in the frontal lobe is suppressed, as Flaherty contends it is in those with block, the temporal lobe kicks into overdrive. This means, essentially, that when you have a case of writer’s block, your facility with language decreases and your ability to assign meaning is amplified. Consequently, you end up producing less material, but your “inner critic” is more powerful than ever.

But more important than discovering how writer’s block occurs is discovering why. Unfortunately, Flaherty can’t provide us with the answer just yet. Thus far, all neurologists have been able to establish is that block occurs more frequently when people are depressed. And, as many of you already know, depression is far more likely to occur in highly creative people. Writers, for instance, are eight to ten times more likely to be depressive than less creatively inclined people. Poets, it seems, are even worse off. According to some research, 70 percent of poets suffer from manic depression. (Seattle Post-Intelligencer)

Flaherty encourages people in the throes of a deep depression to take advantage of the medications on the market. But she cautions against blunting your emotions, because this too can contribute to a dip in creativity. Neuroscientists say you are most likely to be visited by “the muse” when you’re in a state of arousal, because emotions like passion, fury, and ecstasy increase activity in the temporal lobe (the meaning maker). According to Flaherty, “slight agitation” is far preferable to serenity if you want to make great art. (Infinite Mind)

(For more on creativity and depression, see CBS.com)

Flaherty also notes that perseveration is a hallmark of writer’s block. Perseveration is just a fancy word for being in a mental rut. Writers suffering from block often describe the experience of grasping for meaning. Flaherty suggests that this is the result of perseveration. In these moments, writers are caught in a cycle of trying to solve a mental problem that is no longer relevant. The trick, she says, is to step away. Take a walk. (Exercise and shock are, apparently, two of the easiest ways of defeating perseveration.) Or have a cup of coffee. (Stimulants are thought to aid in refocusing your attention. But once you become a habitual user, the effects are minimal at best.) The key thing is to get out of the feedback loop.

Understanding the neuro-dynamics of writer’s block doesn’t have to be disempowering. In fact, it can have the opposite effect. Knowing that writer’s block really is “all in your head” can free you up. No more self-flagellation. Just step away from the computer, walk to the cafe, and have a cup of coffee. If that doesn't work, you might consider some happy drugs.


What's Funny?

Previously posted on Annotate
February 2006

V.S. Ramachandran’s professional interest in laughter began about ten years ago in Vellore, India, when he was asked to evaluate a patient with a bad case of pain asymbolia. Pain asymbolia is a rare condition, which causes people to misinterpret the physical symptoms of pain. While victims are aware of “painful” sensations, they don’t associate them with feelings of suffering. This condition is almost too bizarre to comprehend, but Ramachandran’s description of his consultation paints a fairly clear picture:

. . . [when] I took a needle and pricked him with [it] to determine . . . his pain sensations . . . he started giggling, telling me: doctor, I feel the pain but it doesn't hurt. It feels very funny, like a tickle and he would start laughing uncontrollably.
(From BBC Reith Lecture 1)

The irony of this reaction made Ramachandran stop to consider the underlying purpose of laughter. In his view, it’s a singularly odd behavior:

. . . [A] martian ethologist watching all of [you] would be very surprised . . . Every now and then all of [you] stop what you're doing, shake your head, make this funny staccato rhythmic . . . hyena-like sound. Why do you do it?

Convinced that the ability to laugh was hard-wired in the human brain, Ramachandran set out to discover why this behavior had evolved. First, he tried to pinpoint what makes something “funny.” After much musing, he came to the conclusion that:

. . . [T]he common denominator of all jokes and humour [sic] despite all the diversity is that you take a person along a garden path of expectation and at the very end you suddenly introduce an unexpected twist that entails a complete re-interpretation of all the previous facts. That's called a punch-line of the joke.
(From BBC Reith Lecture 1)

But this was not enough to explain laughter; otherwise, “every great scientific discovery . . . would be funny,” Ramachandran concluded. Something is funny, he determined, only if the resulting shift in your perception is of “trivial consequence.” This is all a bit abstract, so let me provide a couple of examples.

According to Ramachandran’s thesis, Watergate was not funny. Why? Because while the revelation that the President of the United States was a felon required a “complete re-interpretation of . . . previous facts," the punch line had dire implications.

Now try this:

Why did the scientist install a knocker on his door?

Answer: So he could win the No-Bell prize.

Okay, granted, it’s not terribly funny. (I challenge you to find a joke that meets the standards of political correctness. It’s not as easy as it sounds.) The point here is: jokes elicit a chuckle, because they alter our perception in a superficial way.

So, why are we programmed to laugh when a mind-altering experience proves to be trivial? According to Ramachandran, this behavior evolved as a way to defuse anxiety. In potentially threatening situations, early humans used verbal cues to alert their kin. When these situations turned out to be harmless, humans needed a way to get the message across. Laughter evolved as a way of saying: ‘Hey, I take it back. Everything’s okay.' It's nature’s “false alarm."

Sound suspiciously simple? Well, many neuroscientists and psychologists contend that it is. Still, Ramachandran’s study of the patient suffering from pain asymbolia seems to support his theory:

When we examined his brain, when we [did] the CT scan we found there was damage to the region called the insular cortex . . . The insular cortex receives pain signals . . . From the insular cortex the message goes to the amygdala . . . where you respond emotionally to the pain . . . and take the appropriate action. So my idea was, maybe what's happened on this patient is, the insular cortex is normal. That's why he says, doctor I can feel the pain, but the message, the wire that goes from the insular to the rest of the limbic system and the anterior cingulate is cut.
Therefore you have the two key ingredients you need for laughter and humour [sic], namely one part of the brain signaling [sic] a potential danger . . . but the very next instant the anterior cingulate says but I'm not getting any signal. Big deal, there is no danger here, forget it . . . and the patient starts laughing and giggling uncontrollably, OK.
(From BBC Reith Lecture 1)

Put simply, Ramachandran’s findings suggest that laughter primarily utilizes three areas in the brain: the insular cortex, the amygdala, and the anterior cingulate. The insular cortex produces the physical sensation of pain, while the amygdala processes these sensations into the “feeling of pain.” Once the feeling has registered, the amygdala sends a message to the anterior cingulate to activate the “fight or flight” response.

Normal people experiencing pain will retreat from it whenever possible. The Vellore patient, in contrast, simply laughed. He laughed, according to Ramachandran, because the connection between his insular cortex and amygdala was severed. When the amygdala failed to recognize the danger signs, the anterior cingulate short-circuited and sent out the “false alarm” signal--prompting laughter.
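Ramachandran’s proposed circuit can be caricatured as a simple signal path. The sketch below is my own toy gloss, not anything from Ramachandran himself; it just makes the “cut wire” logic explicit:

```python
# Toy model of the laughter circuit described above (a cartoon, not
# real neuroscience): pain travels insular cortex -> amygdala ->
# anterior cingulate. If the insula-to-amygdala "wire" is cut, the
# cingulate sees a danger cue with no follow-up threat signal and
# issues the "false alarm" response -- laughter.

def respond_to_pain(insula_to_amygdala_intact: bool) -> str:
    sensed = True  # the insular cortex registers the painful sensation
    # The amygdala only produces the "feeling of pain" if the wire is intact.
    threat = sensed and insula_to_amygdala_intact
    if threat:
        return "fight or flight"       # anterior cingulate confirms danger
    return "laughter (false alarm)"    # danger cue arrived, threat never did

print(respond_to_pain(True))   # a normal response to a pinprick
print(respond_to_pain(False))  # the Vellore patient's response
```

Passing False stands in for the Vellore patient’s lesion: the sensation registers, the threat signal never arrives, and the system emits the false alarm.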

While Ramachandran’s findings are still controversial, evidence seems to suggest that his unorthodox approach to brain research is bearing fruit. His belief that neuroscience will usher in a new era of human thought may be grandiose. (The greatest revolution of all) But who knows? He might be right.

"The greatest revolution of all"

Previously posted on Annotate
February 2006

Think neuroscience is boring? Think again, says V.S. Ramachandran, director of UC San Diego’s Center for Brain and Cognition. In the coming years, Ramachandran says, neuroscience promises to revolutionize the way “we view ourselves and our place in the cosmos.” (BBC Reith Lecture 1) If he sounds less like a neurologist than a new-age prophet, don’t let that surprise you. Ramachandran--one of the few scientists just as likely to quote the Upanishads as he is to cite the research of Richard Dawkins--is on a mission to tear down the walls that separate science and philosophy.

According to Ramachandran, human history has been punctuated by three evolutionary leaps in understanding--three distinct paradigm shifts, which required a complete reinterpretation of what it meant to be human.

First there was the Copernican Revolution, when man realized he wasn’t the center of the universe. (Clearly some New York City subway patrons are still struggling with this notion.) Next came the Darwinian Revolution, which “culminated in the view that we are not angels but merely hairless apes.” (BBC Reith Lecture 1) And then came Freud, who taught us that we’re all slaves to the unconscious.

Ramachandran believes that, thanks to neuroscience, we’re on the cusp of yet another paradigm shift, one that will make Freud’s discoveries appear utterly quotidian:

. . .[We] are poised for the greatest revolution of all--understanding the human brain. This will surely be a turning point in the history of the human species for, unlike those earlier revolutions in science, this one is not about the outside world, not about cosmology or biology or physics, but about ourselves, about the very organ that made those earlier revolutions possible . . .
(From BBC Reith Lecture 1)

According to “Rama," understanding the byzantine circuitry of the human brain will eventually enable us to answer the existential questions that have plagued philosophers for centuries:

* What is the meaning of my existence?
* Why do I laugh?
* Why do I dream?
* Why do I enjoy art, music and poetry?
* What is the scope of free will?

Rama may sound like he’s suffering from a profound case of hubris, but the neuroscience community seems well on its way to answering some of these questions. For instance, based on his research, Rama believes he knows why we laugh.

In the next entry, we’ll delve into Ramachandran's findings about laughter. Are they credible? Is Rama really the harbinger of the next great evolutionary leap in scientific understanding? Or is he more mad scientist than cultural luminary?

Emotional Discrimination

Previously posted on Annotate
February 2006

Psychology is supposed to be the empathetic science. So, it surprised me to learn that many psychologists believe the entire range of human feeling can be distilled down to a list of ten. On the off chance this list grew too unwieldy, it was subdivided into two categories: primary emotions (happiness, sadness, fear, anger, surprise, and disgust) and secondary emotions (embarrassment, jealousy, guilt, and pride). (Steven Johnson, Mind Wide Open, 37) Let’s say, for the sake of argument, that it was necessary to compile this emotional Top Ten list: who decided that “embarrassment” qualified, but “love” didn’t make the cut? Call me finicky, but this strikes me as reductionism run amok.

Thankfully, Simon Baron-Cohen, the guru of autism research, recently embarked on a quest to compile a more comprehensive emotional inventory. The Cambridge psychologist set his minions to work poring over thesauri to document “discrete emotional concepts.” They found thousands. Only after much agonizing were they able to whittle the list down to a manageable number: 412.

Baron-Cohen’s ultimate goal was to quantify “emotional detection” skills. And he had a hunch that he could do this by homing in on the amygdala—the part of the brain’s medial temporal lobe believed to be responsible for fear and pleasure responses. Prior research has shown that amygdala damage impairs people’s ability to “read” fear in others. But Baron-Cohen suspected that this piece of brain hardware was far more sophisticated than previously imagined. He theorized that the amygdala read not just fear, but the full gamut of human emotion.

To prove his hypothesis, Baron-Cohen devised an eye-reading test. The test consists of a series of pictures of eyes frozen in various emotional states. Accompanying each picture are four emotional descriptors, cherry-picked from his catalogue. The more often you choose the correct emotion, the higher your test score.

It’s virtually impossible to do justice to the subtlety of this test without the benefit of pictures, but it goes something like this: You’re shown a picture of a man with a wild-eyed stare. Your choices are: jealous, panicked, arrogant, or hateful. Sound easy? Well, it’s not. Author Steven Johnson took the test fully prepared to ace it. What he found was that the longer he stared at each picture, the more confused he got:

. . . with each image, the clarity of the initial emotion grew less intense the longer I analyzed it . . . When I tried to interpret the images consciously, surveying each lid and crease for the semiotics of affect, the data became meaningless: folds of tissue, signifying nothing.
(From Mind Wide Open, 41)

Does this mean that Johnson has brain damage? No. It means that the part of our brains wired for emotional detection isn’t intellectual, it’s intuitive. The more he allowed gut instinct to lead him, the better he did on the test. In the end, Johnson scored a B+. As it turns out, this is not something to be terribly proud of. Most of us are capable of this level of emotional discrimination. Autistics, in contrast, tend to flunk the eye-reading test.
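The bookkeeping behind a test like this is simple enough to sketch: each item pairs an image with four candidate emotions, and your score is the fraction you get right. Here is a minimal sketch in Python; the items and answer key are invented for illustration and are not drawn from Baron-Cohen’s actual materials:

```python
# Hypothetical "reading the eyes" scoring routine. The items below are
# invented stand-ins; the real test uses photographs of eyes, which
# obviously can't be reproduced here.

ITEMS = [
    # (image_id, four candidate emotion words, correct answer)
    ("eyes_01", ("jealous", "panicked", "arrogant", "hateful"), "panicked"),
    ("eyes_02", ("playful", "comforting", "irritated", "bored"), "playful"),
    ("eyes_03", ("despondent", "relieved", "shy", "excited"), "despondent"),
]

def score_test(answers):
    """Return the fraction of items answered correctly.

    `answers` maps image_id -> the emotion word the subject chose.
    """
    correct = sum(
        1
        for image_id, choices, truth in ITEMS
        if answers.get(image_id) in choices and answers.get(image_id) == truth
    )
    return correct / len(ITEMS)

subject = {"eyes_01": "panicked", "eyes_02": "irritated", "eyes_03": "despondent"}
print(f"score: {score_test(subject):.0%}")  # 2 of 3 correct
```

Only the scoring logic is captured here; the hard part of the real test is, of course, the stimuli themselves.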

In an effort to find out why, Baron-Cohen hooked his subjects up to an MRI machine while they took the test. Here’s what he found:

. . . MRI scans of [normally functioning] people taking the ‘reading the eyes’ test . . . found that the amygdala lights up in trying to figure out people’s thoughts and feelings. In people with autism, they show highly reduced amygdala activity.
(From Mind Wide Open, 41)

These findings suggest that Baron-Cohen is right in assuming “the amygdala is actually used to detect a much broader range of emotions.” What scientists will do with this information remains to be seen.

Microscopic mind control

Previously posted on Annotate
February 2006

What would you say if I told you that parasites are infesting the brains of half the human population? Or creepier still, that these little buggers have the power to control people’s behavior, making some irascible, others docile, and still others certifiably insane? You’d probably say I’d watched one too many X-Files reruns—and you’d be right—but that doesn’t change the fact that it could be true.

It’s easy to dismiss parasites out of hand. Unless your intestines are playing host to a parasitic colony, parasites seem fairly benign. But it turns out stomach upset is one of their lesser powers.

According to science writer Carl Zimmer, parasites can be downright Machiavellian. In Return of the Puppet Masters, Zimmer informs us that:

The lancet fluke . . . forces its ant host to clamp itself to the tip of grass blades, where a grazing mammal might eat it, [thus allowing the parasite] into the gut of a sheep or some other grazer . . . to complete its life cycle.

Even more disturbing, National Geographic recently reported that hairworm parasites can drive their grasshopper hosts to suicide. (Suicide Grasshoppers Brainwashed) Apparently, adult hairworms use grasshoppers as a kind of surrogate womb, implanting their eggs in the insects’ bodies. I know—it’s unsavory. But the real trouble starts when a baby hairworm reaches maturity and wants to strike out on its own. The only way out is to “euthanize” the grasshopper. To accomplish this, the worm pumps out a potent chemical cocktail that attacks the grasshopper’s central nervous system, prompting it to head to the nearest body of water and drown itself. Once the host dies, the worm extricates itself—through the rectum. And thanks to a fairly depraved group of French biologists, you can see it here: Canal IRD.

Now that you’re sufficiently grossed out, let’s turn our attention back to the human mind snatchers. Anyone who’s spent time around a pregnant woman has probably heard of Toxoplasma. It’s a parasite people pick up from handling soil and cat litter. Scientists estimate that close to 50 percent of the world’s population carries the parasite, but, until recently, they believed that Toxoplasma only posed a threat to people with weakened immune systems. Well, all that’s changed now.

When scientists learned about the more bizarre aspects of parasitic behavior, they decided to take a closer look at Toxoplasma. Given that parasites were sometimes capable of manipulating their hosts, researchers wondered if Toxoplasma might be influencing human behavior. A group of scientists from Oxford University decided to infect rats with the parasite and conduct a series of tests. Why rats? It turns out we have more in common with them than a talent for laughter. (Are rats laughing at us?) We also share similar brain structures. Here’s a brief description of the study, courtesy of Carl Zimmer:

[Oxford] scientists studied the rats in a six-foot by six-foot [enclosed maze] . . . In each corner of the enclosure they put a nest box . . . On each of the nests they added a few drops of a particular odor. On one they added the scent of fresh straw bedding, on another the bedding from a rat's nests, on another the scent of rabbit urine, on another, the urine of a cat.

When they set healthy rats loose in the enclosure, the animals rooted around curiously and investigated the nests. But when they came across the cat odor, they shied away and never returned to that corner . . . the odor of a cat triggers a sudden shift in the chemistry of rat brains that brings on intense anxiety.

Then the researchers put Toxoplasma-carrying rats in the enclosure . . . The scent of a cat in the enclosure didn't make them anxious . . . they even took a special interest in the spot and came back to it over and over again.

So what does all this mean? It means that Toxoplasma-infected rats are drawn to cats, like lambs to the slaughter. But the findings also suggest that the parasite can impact people. Another study seems to indicate that Toxoplasma alters human personality. Interestingly, the parasite appears to affect men and women differently. Male carriers are more self-reproaching, insecure, jealous, and suspicious, while female hosts are more “outgoing and warmhearted.”

Not such a bad deal for the ladies, you say? Wait till you hear the rest. Some scientists postulate that being exposed to high levels of Toxoplasma during pregnancy may “cause” or “contribute” to schizophrenia. Working off this hypothesis:

E. Fuller Torrey, of the Stanley Medical Research Institute . . . [and] the Oxford scientists . . . raised human cells in Petri dishes and infected them with Toxoplasma. Then they dosed the cells with a variety of drugs used to treat schizophrenia. Several of the drugs--most notably haloperidol--blocked the growth of the parasite.
(Return of the Puppet Masters)

Put simply, scientists now have a parasitic conspiracy theory to explain schizophrenia—or at least certain forms. (For reasons that Zimmer doesn’t go into, Toxoplasma only provides a key to understanding some strains of the disease.) To date, there’s no conclusive evidence. But there’s certainly enough to warrant further inquiry.

Think about it. Eradicating particular types of schizophrenia could be as simple as administering antibiotics to pregnant mothers. Amazing, no?

Pinker says: Don't bother disciplining the kids

Previously posted on Annotate
January 2006

I recently started reading The Blank Slate, by Steven Pinker, a Harvard psychologist much lauded for his poetic approach to science writing.

There can be no doubt: the man’s a great writer. But he’s also far smarter than the average bear (i.e., me), and I occasionally get lost in the dense thicket of his ideas. Still, I’m always drawn to his work, because he’s willing to confront the scientific fallacies born of political correctness. He doesn’t shy away from the notion that there are inherent physiological differences between men and women, for instance—differences that may account for men’s proficiency with math and women’s superior language skills.

That said, I don’t always agree with him. I’m particularly skeptical of his take on the nature vs. nurture debate. Those who’ve been following this blog can probably guess where I stand on this issue. I think it’s an artificial dichotomy. I tend to agree with Steven Johnson, author of Mind Wide Open, who says of the protracted nature vs. nurture debate:

Are our mental faculties simply the product of evolved genes, or are they shaped by the circumstances of our upbringing? . . . [This] question has a clear, and I believe convincing answer: they’re both. We are a mix of nature and nurture through and through . . .

In The Blank Slate, Pinker paints a decidedly different picture. In the chapter “The Many Roots of Our Suffering,” Pinker writes:

. . . children do not allow their personalities to be shaped by their parents’ nagging, blandishments, or attempts to serve as role models . . . the effect of being raised by a given pair of parents within a given culture is surprisingly small: children who grow up in the same home end up no more alike in personality than children who were separated at birth; adopted siblings grow up to be no more similar than strangers.

I haven’t finished The Blank Slate yet, but this paragraph seems to place Pinker squarely in nature’s corner.

Pinker’s thesis seems to fly in the face of common sense. I, for one, am convinced that I wasn’t impervious to my parents’ “blandishments” as a child, and I don’t think I’m alone in this.

Pinker appears to be implying that there is no such thing as learned behavior. So, how does he account for me, you, and the baby sticklebacks raised by an ornery father? (See "Are rats laughing at us?")

Unfortunately, the nature vs. nurture debate doesn’t seem to be as settled as Johnson suggests. Many are still quick to dismiss the role of nurture.

Are rats laughing at us?

Previously posted on Annotate
January 2006

Last weekend, The New York Times Magazine ran a fascinating article by Charles Siebert, The Animal Self, about a newly minted field of psychology called Animal Personality. The burgeoning psychological school subscribes to the theory that animals, like humans, are born with innate character traits, which are either magnified or diminished by their formative experiences.

Pet owners across the country, no doubt, greeted this news with a resounding, 'Duh.' But in psychology circles, Animal Personality is highly controversial. To traditionalists, the theory smacks of quackery. But the field’s founder, psychologist Sam Gosling, is quick to point out that he’s not the first to acknowledge the “humanity” of those on the lower tiers of the food chain. Even Darwin believed that animals had emotions. After polishing off On the Origin of Species, Darwin turned his attention to studying intersections between human and animal behavior. Darwin’s book The Expression of the Emotions in Man and Animals was published in 1872. By the late 1940s, however, the “perils of anthropomorphizing” began to loom large in the minds of many psychologists. Articles like The Behavior of Mammalian Herds and Packs and Insect Societies, once prevalent in psychological literature, began to disappear.

This disappearance, according to Siebert, was due in large part to the emergence of the theory of Behaviorism. Pioneers of Behaviorism, like B.F. Skinner, “stressed the inherent inscrutability of mental states and perceptions to anyone but the person experiencing them.” In other words, Behaviorists believed that our minds are like vaults, impervious to the efforts of even the most skilled psychological safecracker. Once you accept the notion that people are incapable of intuiting the feelings of their fellow humans, explorations of animal psychology start to seem just a little bit silly.

After years of neglect, animal psychology is experiencing a rebirth, thanks to Gosling and his fellow travelers. Psychology’s gatekeepers aren’t terribly happy about this, but common sense and scientific advances seem to be on Gosling’s side.

Obviously, Skinner was wrong. Our minds aren’t lock boxes. As Steven Johnson says in his book Mind Wide Open, there is a “growing appreciation for the art of mindreading,” thanks, in part, to the “discovery of mirror neurons.” (For more on mirror neurons, see Psychic Cells.) And while we may not be as adept at intuiting the emotions of animals, we have reached a point when controlled studies can give us some insight into their emotional makeup.

So, what have Gosling and other Animal Personality specialists learned? They’ve learned that rats laugh; fruit flies can be so overbearing that no one wants to mate with them; and while some octopi are shy and retiring, others are unrepentant flirts. Roland Anderson, a scientist at the Seattle Aquarium, christened one of their octopi “Leisure Suit Larry,” because he was always groping the guests. (The Animal Self)

Okay, you say, so animals are people too—what does this have to do with me?

Well, for one thing, studying animal psychology may help us to resolve the eternal “nature vs. nurture” debate. According to the Animal Personality folks, all signs point to nature AND nurture. (I think this deserves another, 'Duh.') For instance, studies show that three-spined stickleback fish from predator-heavy environments are shy, while sticklebacks from less dangerous waters are outgoing. At first, scientists thought that this might be the result of evolutionary adaptation alone. But, as it turns out, baby sticklebacks from a laid-back environment become timid when raised by a father from a predator-heavy one.

This is not (quite obviously) irrefutable evidence that humans are subject to the influences of both nature and nurture. But if psychologists accept the hypothesis that animals’ personalities are forged much in the way ours are, then researchers are free to conduct a whole range of experiments that can ultimately help us to resolve this question once and for all. (Want further details? Listen to Gosling’s conversation with Tom Ashbrook of wbur.org)

Imagine how much earlier we would have gotten to this point if psychologists had listened to Darwin. (Hello, have you heard of hubris?) I find myself slightly ticked off at B.F. Skinner. How could one of the great minds of mid-twentieth century psychology completely write off intuition? Sorta makes you wonder if the guy was suffering from a deficit of mirror neurons, doesn’t it? Perhaps, old Mr. Skinner had a touch of "mindblindness." (Psychology Today)