
Why our children see more than we do

As with the children in C.S. Lewis's Chronicles of Narnia, research indicates that young children experience many things which adults can no longer see…


From the Wild Lands in the North, to the Great Deserts in the South, and the Majestic River of Telmar in the West, to the High Castle of Cair Paravel in the East, the land of Narnia is an extraordinary region populated by centaurs, dragons, talking animals and all manner of wonders which no adult human may ever see.

And although no adult may set foot in the land of Narnia, children may enter it through the famous wardrobe… as Lucy, Edmund, Susan and Peter famously discover in C.S. Lewis's Chronicles of Narnia.

But they can do this only as long as they remain children.

It is with sadness that we see first Peter and Susan, then Edmund and Lucy, each learn that, beyond a certain age, they will never be able to return to Narnia.

This narrative of expulsion from paradise is as old as the story of the Garden of Eden, or the myth of “The Golden Age.”

Indeed, most adults, at some level, feel that the adult world somehow lacks the magic, the wonder and the sheer sense of possibility they once experienced as children.

We tend to see the cultural acclimatisation and education of our children as a process of opening their minds to more and more knowledge.

However, recent developments in neuroscience suggest that the opposite is in fact the case.

Extraordinary as it may seem, it is now clear that, in certain key areas, our awareness of the world around us actually diminishes rather than expands as we grow older and become more socially acclimatised to the needs of our own particular tribe or social grouping.

Whilst education and the development of the social brain enable us to find our niche in society, this process often comes at the expense of significant cognitive abilities.

To put it bluntly: we become blinded to anything other than that which our mother culture defines as reality.

Each of us is born with around 100 billion neurons in our brains… to imagine how enormous this is, consider that it is about the same as the number of stars in the Milky Way.

And in a child’s first years of life, the brain is constantly being sculpted by its cultural surroundings, as it refines its circuits in response to environmental experiences.

Since brain circuits organize and reorganize themselves in response to an infant's interactions with his or her environment, the synapses (the points where neurons connect) are either built and strengthened, or weakened and pruned away, as needed. (This process is often catchily summarised as "the neurons that fire together wire together".)
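The "fire together, wire together" idea can be illustrated with a toy model. The sketch below is purely illustrative, not a model from the research described here: the synapse names, learning rate, decay rate and pruning threshold are all invented for the example. It shows the basic mechanism: connections that are repeatedly co-activated get stronger, while unused connections decay and are eventually pruned away.

```python
# Toy Hebbian model: "neurons that fire together wire together".
# All names and constants are illustrative, not taken from any study.

LEARNING_RATE = 0.1     # strengthening when pre- and post-synaptic cells co-fire
DECAY_RATE = 0.02       # weakening of unused connections
PRUNE_THRESHOLD = 0.05  # below this, the synapse is removed entirely

def hebbian_step(weights, pre_active, post_active):
    """One update: strengthen co-active synapses, decay the rest, prune the weak."""
    updated = {}
    for synapse, w in weights.items():
        if synapse in pre_active and post_active:
            w += LEARNING_RATE   # fire together -> wire together
        else:
            w -= DECAY_RATE      # unused -> weaken
        if w >= PRUNE_THRESHOLD:
            updated[synapse] = w  # synapse survives this round
        # otherwise it is pruned away and disappears from the circuit
    return updated

# Two equally weak connections, standing in for two "phoneme detectors".
weights = {"native_phoneme": 0.1, "foreign_phoneme": 0.1}

# The infant's environment only ever presents the native phoneme.
for _ in range(20):
    weights = hebbian_step(weights, pre_active={"native_phoneme"}, post_active=True)

print(weights)  # only the native-phoneme synapse survives, much strengthened
```

After twenty exposures, the unused "foreign" connection has been pruned entirely, while the "native" one has grown far stronger: a crude cartoon of the tuning-in process described below.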


Patricia K. Kuhl is a Professor of Speech and Hearing Sciences at the Institute for Learning & Brain Sciences at the University of Washington. She specializes in language acquisition and the neural bases of language. Using magnetoencephalography (MEG, a relatively new technology that measures the magnetic fields generated by the activity of brain cells), Kuhl has been able to show, for the first time, just how babies acquire language.

All spoken languages consist of basic units of sound, called phonemes, and these phonemes combine to form syllables. For example, in English, the consonant sound "t" and the vowel sound "o" are both phonemes that combine to form the syllable "to", as in "tomato".

In total there are more than 200 phonemes, representing all the sounds that the human voice can create. But most languages use only a fraction of this number. In some languages it can be as few as 15, whilst in English it is over 40.

Patricia K. Kuhl discovered that before 8 months of age, the neurons in a baby’s brain could recognise the phonemes of any language on the planet.

After this point, they quickly learn to ignore the vast majority of phonemes and concentrate only on those used in their native language. And within a few months they have lost this ability altogether.

In her 2011 TED talk, “The Linguistic Genius of Babies,” Kuhl describes this process as babies going from being “citizens of the world,” to becoming “language-bound” members of their own tribal grouping.

(Intriguingly, similar tests done on adults show that these neurons continue to fire in recognition of all 200 phonemes when presented with any "foreign" language. However, this information is no longer processed consciously, so the listener is not aware that they can "hear" them.)

Similarly, in the visual domain, it has been shown that very young babies have cognitive abilities that become lost as they begin to grow into their culturally acclimatized selves.

According to a study led by Olivier Pascalis, a psychologist at England’s University of Sheffield, human babies start out with the ability to recognize a wide range of faces, even among races or species different from their own.

The study focused on the ability to recognize and categorize faces, determine identity and gender, and read emotions. The findings suggest that, in humans, whether or not you retain this ability is a question of "use it or lose it."


In the study, six-month-old infants were able to recognize the faces of individuals of different races, or even of different species (in this case, monkeys), something most adults cannot do.

Babies who received specific visual training retained the ability. But those with no training lost the skill altogether by the time they were nine months old.

This is because by the time they’re nine months old, face recognition is based on a much narrower model, one that is based on the faces they see most often.

This more specialized view, in turn, diminishes our early ability to make distinctions among other species and other races. For instance, if an infant is exposed mainly to Asian faces, he or she will grow up less skilled at discerning among, say, different Caucasian faces.

Michelle de Haan, one of the study’s authors, said: “We usually think about development as a process of gaining skills, so what is surprising about this case is that babies seem to be losing ability with age.

“This is probably a reflection of the brain’s ‘tuning in’ to the perceptual differences that are most important for telling human faces apart and losing the ability to detect those differences that are not so useful.”

Even if children were not to lose such cognitive abilities as they "tune in" to their contextual cultural norms, we also know that a large part of their cultural acclimatisation would prevent them from expressing views that are at odds with the social groupings in which they find themselves.

Even when we see the world differently, our adult brains have all too often been wired to keep our thoughts to ourselves.

Research conducted by Solomon Asch of Swarthmore College has clearly demonstrated the degree to which an individual’s own opinions will conform to those of the group in which he finds himself.

Meanwhile, research conducted by Stanley Milgram of Yale has shown how likely people are to obey authority figures, even when their orders go against their own personal morality.

Perhaps this is why we love the ability of children to speak the truth. To say what we have all been thinking, even though it is not culturally acceptable.

After all, it is the child who is not blinded by culture who, on seeing the Emperor's New Clothes, says "But he isn't wearing anything at all!"


Understanding Consciousness in the Digital Age. Part 1

What was she thinking? Raquel Welch, in a promotional photograph for "One Million Years B.C.", in her fur bikini, ready to do battle with dinosaurs. We tend to see such historical figures as being mentally and physically the same as us. However, research using MRI technology suggests that, historically, human brains may have been structurally and functionally very different from our own.


Let’s face it, with a gloriously daft plot that has Raquel Welch running around in a fur bikini and cave men doing battle with dinosaurs, the movie “One Million Years B.C.” was never going to win any prizes for historical accuracy.

However, the fact that it wasn't completely laughed out of the cinema when it was released in 1966 illustrates a fascinating cultural bias…

That popular culture tends to see history as little more than an extended costume drama.

We tend to see historical figures as being essentially the same as us… in both mind and body. Admittedly, we recognize that these fellows may be a bit behind with technology, and that they may have some curious superstitions and beliefs, not to mention some dubious personal hygiene habits. But given a good bath, a few lessons in English and, of course, a change of clothes, we imagine that most historical characters could be introduced to contemporary society with ease.

This is largely because, when we have given it any thought at all, we have always tended to see the adult human brain as something fixed and unchangeable.

And we assume that if changes do take place in the structure of the human brain, they happen through natural selection, over the course of many generations and many millennia.

Not everyone thinks this way, however. Back in 1976, Julian Jaynes, a professor of psychology at Princeton, published a revolutionary new approach to the history of the mind in The Origin of Consciousness in the Breakdown of the Bicameral Mind.

In this extraordinary book, Jaynes argued that consciousness was, in fact, only a fairly recent development in human evolution, emerging as late as the end of the second millennium BCE.

Before this time, Jaynes argues, people experienced the world rather like schizophrenics who experience “command hallucinations”, or “voices” that tell them what to do. (In fact, according to Jaynes, schizophrenia is simply a vestige of humanity’s earlier pre-conscious state.)

Jaynes called this cognitive state "bicameralism", reasoning that the instructions or "voices" came from the right-brain counterparts of the left-brain language centres: specifically, Wernicke's area and Broca's area. These regions are largely dormant in the right hemispheres of most modern humans, but auditory hallucinations have occasionally been seen to correspond to increased activity in these areas.

Using the earliest writings as evidence for his theory, Jaynes showed that the characters described in the Iliad do not exhibit the kind of self-awareness we normally associate with consciousness. Rather, these bicameral-minded individuals are guided by mental commands issued by external "gods".

Julian Jaynes argued that consciousness is, in fact, only a fairly recent development in human evolution, emerging as late as the end of the second millennium BCE. Before this time, people like Helen of Troy in Homer's Iliad experienced the world rather as schizophrenics do, hearing "command hallucinations", or "voices", that told them what to do. (A handy excuse when you have just sparked a major international incident.)


And whilst no mention is made of any kind of introspection in the Iliad and the older books of the Old Testament, the corresponding works written after this period, such as the later books of the Old Testament or the later Homeric work, the Odyssey, show signs of a very different kind of mentality… an early form of consciousness.

According to Jaynes, human consciousness first emerged as a neurological adaptation to developing social complexity, as the bicameral mind began to break down during the social chaos of "The Bronze Age Collapse" in the second millennium BCE.

Historians are unclear as to the cause of "The Bronze Age Collapse", but between 1206 and 1150 BCE the cultures of the Mycenaean kingdoms, the Hittite Empire and the New Kingdom of Egypt collapsed, and almost every city between Pylos and Gaza was violently destroyed and largely abandoned.

This was a time of large scale economic collapse, and mass migrations across the region (Christian and Hebrew scholars associate Moses and the story of Exodus with this time), creating social stresses that required societies to intermingle, forcing people to learn new languages and customs and generally, to become more flexible and creative.

Jaynes argues that self-awareness, or human consciousness, was the culturally evolved solution to this problem. Jaynes further suggests that the concepts of prophecy and prayer arose during this breakdown period, as people attempted to summon instructions from the “gods” and the Biblical prophets bemoaned the fact that they no longer heard directly from their one, true god.

According to Jaynes, relics of the bicameral mind can still be seen today in cultural phenomena like religion, schizophrenia and the persistent need amongst human beings for external authority in decision-making.

Jaynes's big idea is really quite breathtaking in its boldness. And although many of his other, related ideas have since been validated by modern brain-imaging techniques, Jaynes's bicameral hypothesis remains highly controversial.

At the time of publication, one of Jaynes's more enthusiastic supporters was Marshall McLuhan, who was undoubtedly drawn to Jaynes's ideas about the origins of consciousness and how they shed light on the origins of language and writing. (It is entirely possible that the Bronze Age Collapse was the result of a collapse in early media and the breakdown of the Bronze Age mind, rather than the cause of it.)

Marshall McLuhan is, of course, famous for his thinking on the impact of media on consciousness.

McLuhan and onetime collaborator Walter Ong were both fascinated by how the shift from an oral-based stage of consciousness to one dominated by writing and print changed the way humans think.

McLuhan went on to expand on these ideas in The Gutenberg Galaxy, particularly the significance of the invention of moveable type and printing in terms of its impact on human consciousness.

It has been said that the digital age we are now entering is greater than any of these previous information revolutions, in that its impact may be no less than the equivalent of the invention of writing and the invention of the printing press… all at the same time.

However, until recently we have been lacking proof that even changes of this magnitude can have an immediate effect on human consciousness.

But over the last few years, new research using MRI scanners, has revealed that the adult human brain can, in fact, be significantly transformed by experience.

"Neuroplasticity" is the term used to describe this newfound ability of the brain to transform itself structurally and functionally as a result of its environment. Significantly, the most widely recognized forms of neuroplasticity are related to learning and memory.

If you live in central London you will be very familiar with the sight of guys, looking a bit lost, as they ride around on mopeds with a clipboard attached to the handlebars.

These are would-be London taxi drivers doing what is known locally as “The Knowledge”.

London taxi drivers who do "The Knowledge" develop a larger, modified hippocampus, the part of the brain associated with memory and navigation.


In order to drive a traditional black London cab, these drivers must first pass a rigorous exam which requires a thorough knowledge of every street within a six-mile radius of Charing Cross. It usually takes around three years to do "The Knowledge" and memorise this vast labyrinth of streets in central London, and on average only a quarter of those who start the course manage to complete it. However, it now appears that those who do attain "The Knowledge" find themselves not only with a new job, but also with a modified brain.

A study published in 2000 by a research team at the Department of Imaging Neuroscience at University College London demonstrated that London taxi drivers who do "The Knowledge" develop a larger hippocampus, the part of the brain associated with memory and navigation, than they had previously.


As Dr Eleanor Maguire, who led the research team put it, “There seems to be a definite relationship between the navigating they do as a taxi driver and the brain changes. The hippocampus has changed its structure to accommodate their huge amount of navigating experience. This is very interesting because we now see there can be structural changes in healthy human brains.”

Over the past decade, other researchers using MRI scanners have observed similar structural and functional changes to the brain. For example, one study ("Temporal and Spatial Dynamics of Brain Structure Changes during Extensive Learning", Draganski et al., 2006) shows how medical students undergo significant changes to their brains whilst studying for exams.

If this type of transformation can be observed in the adult brain, imagine what changes might be occurring in the minds of children growing up in the midst of the digital revolution.

In 2001, the American educationalist Marc Prensky drew everyone's attention to the impact this was beginning to have on the way our brains are wired, when he published an article called "Digital Natives, Digital Immigrants".

As Prensky points out: "Our students have changed radically. Today's students are no longer the people our educational system was designed to teach. Today's students have not just changed incrementally from those of the past, nor simply changed their slang, clothes, body adornments, or styles, as has happened between generations previously.

A really big discontinuity has taken place. One might even call it a "singularity" – an event which changes things so fundamentally that there is absolutely no going back. This so-called "singularity" is the arrival and rapid dissemination of digital technology in the last decades of the 20th century. It is now clear that as a result of this ubiquitous environment and the sheer volume of their interaction with it, today's students think and process information fundamentally differently from their predecessors.

These differences go far further and deeper than most educators suspect or realize… it is very likely that our students’ brains have physically changed – and are different from ours – as a result of how they grew up….

Digital Natives are used to receiving information really fast. They like to parallel process and multi-task. They prefer their graphics before their text rather than the opposite. They prefer random access (like hypertext). They function best when networked. They thrive on instant gratification and frequent rewards. They prefer games to “serious” work.”

These points that Prensky makes about the changing nature of consciousness are further developed in "Communities Dominate Brands", published in 2005 by Tomi Ahonen and Alan Moore. Here, Ahonen and Moore identify two key drivers behind these changes:

Firstly, they identify the act of gaming as the key factor in the rewiring of the Digital Native's neural circuitry. Since the mid-1990s, children have been playing games on electronic devices, rather than, like the previous generation, simply being passive consumers of broadcast media. And it is the interactive nature of electronic gaming, according to Ahonen and Moore, that is responsible for changing the brains of this generation structurally and functionally, and for creating a constant appetite for social interactivity.

Secondly, Ahonen and Moore emphasise the difference between the world of the PC and the fixed internet and the world of the mobile device and the mobile internet. The former is a world where you "log on" and "log off"; the latter, a world where you are constantly connected. They characterize the former as "The Networked Age" (1992–2004) and the latter as "The Connected Age" (2002–2014?). And where the key characteristic of "The Networked Age" is "Acquiring", the key characteristic of "The Connected Age" is "Sharing".

It is this constant connection, and the desire to share and interact with other engaged minds, that Ahonen and Moore predicted would create new "elective", dynamic communities, and these new social groupings would be the engine of massive social change. The subsequent rise of social media and its impact on major events like "The Arab Spring" are testament to the validity of Ahonen and Moore's predictions.

However, it is probably the world of gaming that has the most to teach us about the changes that are taking place in the wiring of the brains of our "Digital Natives". And we will explore this in a little more detail in part two.


Invisible gorillas, erotic dancers, and what lies beyond the visible spectrum.


Most people know about the spectrum of colours that can be seen with the naked eye, and that beyond this visible spectrum of colour, there are things that we cannot see, like infra-red, for example.

In recent years, however, it has become apparent that there are many things that we do not consciously see that can have profound effects on our behaviour. These are things that the unconscious mind sees “under the radar” of consciousness.

In 2004, two researchers from Harvard University, Christopher Chabris and Daniel Simons, were awarded the Ig Nobel Prize in Psychology for the experiment known as “The Invisible Gorilla.”

In this experiment, participants are shown a video featuring two teams, one wearing white shirts, the other black. The teams move around in a circle, passing basketballs to one another. In order to occupy your attention, you are asked to count the number of passes made by the team wearing white.

Halfway through the video, someone wearing a full-body gorilla suit walks slowly to the middle of the screen, pounds their chest, and then walks out of the frame.

If you were to watch the video without being asked to count the passes, you would, of course, see the gorilla. But in tests, when people were asked to concentrate on the passes, about half the people did not see the gorilla at all.

Chabris and Simons call this phenomenon 'inattentional blindness'. It occurs when you direct your attention, like a mental spotlight, at the basketball passes; because your mind is so focused on this activity, everything else is left in darkness. In this state, even when you look straight at the gorilla you won't see it, because it's simply not what you're looking for.
That is not to say, however, that at some level your mind hasn’t registered it.

Our brains are physical systems and hence have finite ­resources. Compared to a computer chip, which is capable of processing billions of bits of information every second, our conscious brains (that part of our thinking in which we are aware of thinking) can only process a mere 40 bits of information per second.


In "The User Illusion: Cutting Consciousness Down to Size", Tor Nørretranders points out that our senses receive about 12 million bits of information every second. Of those 12 million bits, 10 million come from our eyes, 1 million from our sense of touch, and the rest from all the other senses: hearing, smell, taste, and spatial sensations.

And, this is the important bit: because our conscious brains can only process 40 bits per second, the remaining information is processed subconsciously.

That's a ratio of something like 99.999 percent subconscious processing to well under 0.001 percent actual conscious thinking.
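It's worth doing the back-of-envelope arithmetic on Nørretranders's figures. A couple of lines of Python (the constants are simply the numbers quoted above) make the imbalance vivid:

```python
# Back-of-envelope check of the bandwidth figures quoted from Nørretranders.
SENSORY_BITS_PER_SEC = 12_000_000   # total input across all the senses
VISUAL_BITS_PER_SEC = 10_000_000    # the portion arriving through the eyes
CONSCIOUS_BITS_PER_SEC = 40         # what conscious thought can process

conscious_share = CONSCIOUS_BITS_PER_SEC / SENSORY_BITS_PER_SEC * 100
visual_share = VISUAL_BITS_PER_SEC / SENSORY_BITS_PER_SEC * 100

print(f"Conscious processing: {conscious_share:.4f}% of sensory input")
print(f"Visual input:         {visual_share:.0f}% of sensory input")
# The conscious share works out at roughly 0.0003 percent, comfortably
# under the 0.001 percent headline figure; either way, all but a sliver
# of what we perceive is processed subconsciously, and most of it is visual.
```

The exact percentage matters less than the order of magnitude: conscious attention is a rounding error on the total stream, and five-sixths of that stream is visual.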

And this information we receive “under the radar” of consciousness would appear to have a powerful effect on behaviour.

According to research conducted by Professors Gavan Fitzsimons and Tanya Chartrand of Duke University and Gráinne Fitzsimons of the University of Waterloo, published in the Journal of Consumer Research in April 2008, when people are subliminally exposed to either an IBM or an Apple logo, those exposed to the Apple logo behave in a more creative fashion than those shown the IBM logo.

Gavan Fitzsimons explains: “Each of us is exposed to thousands of brand images every day, most of which are not related to paid advertising. We assume that incidental brand exposures do not affect us, but our work demonstrates that even fleeting glimpses of logos can affect us quite dramatically.”

To demonstrate the effects of brands on behavior, the researchers selected two household names, with contrasting and clearly defined brand characteristics. They asked the participants to complete what appeared to be a simple visual acuity task, during which either the Apple or IBM logo was flashed so quickly that they were completely unaware they had seen anything.

The participants were then asked to complete a task designed to evaluate how creative they were, by listing as many uses as possible for a brick other than obvious ones such as building a wall. Those who were exposed to the Apple logo generated significantly more unexpected, oblique and creative uses for the brick than those who had "seen" the IBM logo.

As Gráinne Fitzsimons puts it: “This is the first clear evidence that subliminal brand exposures can cause people to act in very specific ways.”

But perhaps even more dramatic than the discovery that subliminal exposure to brands can affect behaviour, was the research published by a group of evolutionary psychologists from the University of New Mexico, in their 2007 paper “Ovulatory cycle effects on tip earnings by lap dancers: economic evidence for human estrus?”

What they discovered was that lap dancers' earnings vary in direct proportion to the stages of their ovulatory cycles.

So, on average, a lap dancer would earn $335 per evening during estrus, that part of their ovulatory cycle when they are most likely to conceive, $260 per evening during the couple of weeks that form the luteal phase, and only $185 per evening during menstruation. (By contrast, participants using contraceptive pills showed no estrous earnings peak.)

As the researchers describe it in their paper: “All participants in this study worked as lap dancers in Albuquerque “gentlemen’s clubs” circa November 2006 through January 2007. The clubs serve alcohol; they are fairly dark, smoky, and loud (with a DJ playing rock, rap, or pop music). Most club patrons are Anglo or Hispanic men aged 20 to 60, ranging from semiskilled laborers to professionals; they typically start the evening by getting a stack of US$20 bills from the club’s on-site ATM and having a couple of drinks.

The dancers in these clubs perform topless, but by law are required to wear underwear or a thong of some sort. During the evening, each dancer performs on an elevated central stage to advertise her presence, attractiveness, and availability for lap dances. These dances result in only modest tip earnings (typically $1–5 tips from the men seated closest to the stage, amounting to just 10% of her total earnings).

The rest of the time, she will walk around the club asking men if they want a "lap dance." A lap dance typically costs $10 per 3-min song in the main club area or $20 in the more private VIP lounge. Lap dances require informal "tips" rather than having explicit "prices" (to avoid police charges of illegal "solicitation"), but the tipping is vigorously enforced by bouncers. Dancers thus maximize their earnings by providing as many lap dances as possible per shift."

The direct correlation between the tips earned and the ovulatory status of these women suggests that this information was somehow communicated to their customers through some form of non-verbal communication, and that it was perceived by the subconscious part of the brain that processes 12 million bits of information every second, rather than by the conscious part chugging along at a mere 40 bits per second.

What both the Apple vs. IBM research and the lap dancers study clearly show is that, in large part, our behaviours are driven by experiences that we are not consciously aware of.

And, that the vast majority of these experiences are primarily visual.

And that, in a nutshell, is why the traditional marketing practice of proposition testing doesn't work.

It's all a question of bandwidth. Consider this: we have seen that something like 99.999 percent of our perception is subconscious processing, and of that processing capability, 10 million of the 12 million bits per second are purely visual.

So proposition testing only speaks to 0.001 percent of the available attention in the group.

In order to get real insights out of any focus group, you need to engage the whole human being: their conscious and subconscious selves, the rational and the emotional, or System 1 and System 2 thinking, as Daniel Kahneman describes it in "Thinking, Fast and Slow".

And you need to use visually rich stimulus.

A number of years ago our agency, Chemistry, developed a process that we call “Creative Planning” which does just this. It is based on the belief that consumers cannot relate in any meaningful way to propositions, but do respond to narratives placed in a visually rich context.

We find that using these methods in qualitative research, creates much higher engagement with consumers, providing much better, more profound insights than the use of propositions out of context.

Now, "Creative Planning" isn't perfect. But to be fair, consumers in focus groups are never going to be engaged to the same degree as the customers of a lap dancing club. Whatever the time of the month.


How Brer Rabbit survived the Black Holocaust. The resilience of narrative in social media.

Anyone hoping to understand the real power of social media would do well to consider the extraordinary tale of a certain individual who often goes by the name of Brer Rabbit.

Now, nobody knows exactly how old Brer Rabbit really is, but he is clearly many, many hundreds of years old.

He was smuggled across the Atlantic to America in stories told by African slaves, where he found fame and fortune in popular books and movies, becoming a character beloved by generations of children around the world.

In more recent years, these books and movies have become mired in arguments about political correctness and all but disappeared from the popular imagination. But, remarkably, the ancient oral storytelling tradition that gave birth to this character, keeps his adventures alive to this very day.

The Atlantic slave trade was a human tragedy on a scale like no other. The “Black Holocaust” or “Maafa” (a word derived from the Swahili term for “disaster”, or “great tragedy”) lasted for almost four hundred years, and although we have no way of knowing exactly how many people died as a result, many modern historians estimate a staggering death toll of at least ten million men, women and children.

The most deadly part of the journey was the notorious “Middle Passage” where prisoners were held below decks in slave ships for months as they crossed the Atlantic Ocean.

Despite the appalling conditions in which they were transported, it is thought that around eleven million Africans survived the journey to become slaves in the Americas.

The majority came from the west coast of Africa, and they came from at least 45 separate racial groupings. These included the BaKongo, the Mandé, the Akan, the Wolof, the Igbo, the Mbundu and the Yoruba, to name but a few.

Most slaves came from the west coast of Africa, with at least 45 separate racial groupings, speaking over 1,400 different Niger-Congo languages

Mostly these people would have spoken one of the Niger-Congo family of languages (these days, some 85 percent of the population of Africa speak a Niger-Congo language). However, it is estimated that there are at least 1,400 of these Niger-Congo languages.

Huddled together, in chains, in the darkness of the great slave ships, many of these people could not even talk with one another.

Over the years, West African Pidgin English, also called Guinea Coast Creole English, became the lingua franca along the West African coast.

This language began its life among slave traders doing business along the coast, but it quickly spread up the river systems into the West African interior because of its value as a common trade language among different tribes, even amongst Africans who had never seen a white man.

It is still spoken to this day in West Africa.

Slaves in the Americas found West African Pidgin English as useful as a common language on the plantations as they had found it back home in West Africa as a trading language. And when they had children, these children adopted their own version of West African Pidgin English as their native language, thus creating a number of American English-based creole languages.

One of these creole languages is called Gullah and is still spoken today by about 250,000 people in the Southern United States, specifically, on the coasts of South Carolina and throughout the State of Georgia.

And it was in the language of Gullah that a young Irish American called Joel Chandler Harris was first to hear the animal stories and songs that were to bring him worldwide fame with the tales of Brer Rabbit.

Joel Chandler Harris was a journalist who wrote for a newspaper called “The Constitution” in Atlanta, Georgia, in the years immediately following the American Civil War, a war that had destroyed much of the South and wreaked particular devastation on Atlanta.

Harris published his first Brer Rabbit tale, “The Story of Mr. Rabbit and Mr. Fox as Told by Uncle Remus”, in a phonetic version of the Gullah language, in the July 20, 1879 issue of the newspaper, under the heading “Negro Folklore”. He would publish 184 more of these tales during the next 27 years.

He became a household name, not just across the States but also around the world, with readers who delighted in these strange tales told in the creole language of Gullah.

Because of this, Joel Chandler Harris’s position amongst American men of letters at the start of the 20th century was second only to that of Mark Twain.

And his influence on other writers was equally far-reaching; the children’s literature analyst John Goldthwaite has said that the Uncle Remus tales are “irrefutably the central event in the making of modern children’s story.” In terms of content, their influence on children’s writers such as Rudyard Kipling, A.A. Milne, Beatrix Potter, and Enid Blyton is substantial. Not to mention their stylistic influence on modernism in the works of Pound, Eliot, Joyce, and Faulkner.

And yet, today, few children would recognize the name Uncle Remus, let alone that of Joel Chandler Harris.

In the late 1960s most Brer Rabbit books were removed from schools and libraries in the States because they were deemed racist. And despite the enduring popularity of the signature song “Zip-a-Dee-Doo-Dah”, the Disney movie “Song of the South”, which was based on these stories, has not been seen in its entirety for over fifty years, and has never been released on home video or DVD.

In 1981 the writer Alice Walker, author of “The Color Purple”, accused Harris of “stealing a good part of my heritage” in a blistering essay called “Uncle Remus, No Friend of Mine”. Strangely enough, and to be fair to Joel Chandler Harris, he would probably have agreed with much of what Alice Walker had to say.

Crucially, Harris saw himself as an ethnographic collector of the oral traditions of these former slaves rather than an original author of fictional literature in the style of Mark Twain. His tales were roundly praised by leading folklore scholars of the day. He became intrigued with the new “science of ethnology” and became a charter member of the American Folklore Society (along with Twain). As he began to fill his library with ethnological texts, journals and folklore collections, he was struck by the fact that the tales he was collecting bore striking resemblances to tales from cultures in other parts of the world.

Which they clearly do.

In English, “Brer Rabbit” means “Brother Rabbit”. And indeed “Brer Fox”, “Brer Bear”, “Brer Wolf” and “Brer Buzzard” are in fact “Brother Fox”, “Brother Bear”, “Brother Wolf” and “Brother Buzzard”.

As such, the names of these characters betray their very ancient origins in Western Africa.

As Karen Armstrong has pointed out in her brilliant “Short History of Myth”, pre-agrarian, hunter-gatherer societies exhibit a strong sense of identification with all living creatures, particularly those that are hunted for food. Seeing all animals as siblings is a common expression of this perception.

Brother Rabbit is a trickster, and as such is another iteration of Brother Spider, or Anansi. Brer Rabbit tales, like the Anansi stories, depict a physically small and vulnerable creature using his cunning intelligence to prevail over larger animals. Brer Rabbit originated in the folklore of the Bantu-speaking peoples of south and central Africa, whereas the Anansi tales, some of the best known in West Africa, are believed to have originated with the Ashanti people in Ghana.

Although many Brer Rabbit and Anansi stories are easily interchangeable, they often took on a whole new level of meaning on the plantations.

In the introduction to the first volume, Harris wrote: “…it needs no scientific investigation to show why (the Negro) selects as his hero the weakest and most harmless of all animals, and brings him out victorious in contests with the bear, the wolf, and the fox.” And Brer Rabbit, born into this world with “needer huff ner hawn” – neither hooves nor horns – has to use trickery to survive. The enjoyment of his amoral, and immoral, adventures was made all the more fun by their role as a thinly veiled code for the black slave out-foxing his white masters.

It has been said that these stories were usually told by one adult to another. And children, if they were lucky, would get to listen in.

And the adult tone of many of the stories reflects this. Stealing, lying, cheating, torture, savage beatings, and even cold-blooded murder are normal fare for what has been described as “this malevolent rabbit”.

Take “The Sad Fate of Mr. Fox,” in which Brer Rabbit not only tricks Brer Fox into getting himself beaten to death by Mr. Man, but then takes Brer Fox’s severed head back to his wife, pretending that it’s beef for her soup pot. Another story has Brer Rabbit slowly scalding Brer Wolf to death, while yet another has him killing Brer Bear by engulfing him in a swarm of bees.

Several stories even touch on sex as a theme, usually with Brer Rabbit beating Brer Fox and the other animals for the attentions of “Miss Meadows and de gals,” who then make merry in a thinly disguised brothel.

Br’er Fox and Br’er Bear from Uncle Remus, His Songs and His Sayings: The Folk-Lore of the Old Plantation, by Joel Chandler Harris

But perhaps no tale better demonstrates Brer Rabbit’s supreme wickedness than “Mr. Rabbit Nibbles Up the Butter,” in which “lumberin’” Brer Possum gets burned to death in his own fire. The little white boy, who is listening to Old Uncle Remus tell this dark tale, protests indignantly that since Brer Rabbit stole the butter, he should be the one to be punished for it, not poor Brer Possum. To which Remus shrugs and says: “In dis worl’, lots er folks is gotter suffer fer udder folks sins.”

In the late 1990s, I travelled along the coast of West Africa with a good friend of mine, Winston, a West Indian with African slave ancestors who had been born on the small Caribbean island of St Vincent.

On the westerly shores of Ghana there is a beautiful stretch of beach, lined with palm trees, where the Atlantic surf crashes up on the golden sand and creates the very image of a tropical paradise. And here, on a promontory, a 16th century Portuguese castle stands like a dark, brooding equatorial Elsinore.

This is Cape Coast Castle, and for almost four centuries, this was the centre of the North Atlantic slave trade in West Africa.

Cape Coast Castle. Centre of the West African slave trade for over four hundred years.

The castle itself is a dark, oppressive place. The immeasurable human pain and suffering it has witnessed, over hundreds and hundreds of years, seems to be ingrained into the very fabric of the walls.

The Gate of No Return, Cape Coast Castle, Elmina

Within the bowels of this castle is a doorway that is known as “The Gate of No Return.” Through this doorway you can see the surf crashing on the golden beaches below.

It feels like a portal to another world.

And for millions of Africans it was just that, as they passed through this gate on their way to a life of slavery, over the horizon, in the Americas. If they did not perish on the way.

As Emily Raboteau puts it so powerfully in a piece called “The Throne of Zion. A Pilgrimage to São Jorge Da Mina, Ghana’s Oldest and Most Notorious Slave Castle”:

“This, then, was the door. It struck me as vaginal. You passed through it and onto a ship for Suriname or Curaçao, or through similar doorways for Cuba or Jamaica, Savannah or New Orleans. You passed through it, lost everything, and became something else. You lost your language. You lost your parents. You were no longer Asante or Krobo, Ewe or Ga. You became black. You were a slave. Your children inherited your condition. You lost your children. You lost your gods, as you had known them. You slaved. You suffered, like Christ, the new god you learned of. You learned of the Hebrew slaves of old. In the field, you sang about Moses and Pharaoh. You built a church, different from your masters’. You prayed for freedom. You wondered about the Promised Land, where that place might be.”

The only things they carried with them were their memories and their stories.

After a few hours in this dark claustrophobic castle, we were all quite relieved to get out into the late afternoon sunshine.

George, a local teacher who had offered to show us around the castle, suggested a place a little way back down the coast where we could get a cold beer.

An hour or so later, we were sitting outside a small wooden bar on a beach, a couple of miles east of Elmina, watching the sun set over the promontory and the castle, and swapping stories.

As the light began to fall George started to tell Anansi stories. It emerged that Winston had been told similar stories, by his grandmother, as a child on the Caribbean island of St Vincent. Our spirits revived with the cold beer, Winston told one of his Anansi stories. Then George told one of his. Then Winston responded with another.

This went on for a while until, with the sun slowly setting behind the silhouette of Elmina Castle, something really extraordinary happened:

Winston told a particularly funny Anansi story…

One that George had never heard before…

And at that moment it struck us all like a thunderbolt… At some remote point in the last four hundred years, this story had travelled over the vast expanse of the Atlantic Ocean to the Caribbean island of St Vincent (after, perhaps, passing through the “Gate of No Return” which stood ominously behind us in the gathering darkness). There it was passed down from generation to generation, until Winston brought it back across the Atlantic to share with us that evening.

The fact is that these Brer Rabbit and Anansi stories have the most amazing ability to travel across vast swathes of space and time. And media.

Which is why these trickster tales are alive and well, and still being shared on a daily basis.

Many of the original books are out of print, the movie “Song of the South” is deemed by the executives at Disney too politically sensitive to be re-released, and there have been many attempts to make the stories more socially acceptable by removing the Uncle Remus character and the use of the Gullah language. And yet these stories are flourishing, not in traditional media, but in that original social media… the shared oral tradition.

The rabbit who survived the Black Holocaust may well have a few more surprises for us yet.


Vapour Trails

I was driving across the burning desert
When I spotted six jet planes
Leaving six white vapor trails across the bleak terrain
It was the hexagram of the heavens
it was the strings of my guitar
Amelia, it was just a false alarm

The drone of flying engines
Is a song so wild and blue
It scrambles time and seasons if it gets thru to you
Then your life becomes a travelogue
Of picture post card charms
Amelia, it was just a false alarm

People will tell you where they’ve gone
They’ll tell you where to go
But till you get there yourself you never really know
Where some have found their paradise
Others just come to harm
Oh, Amelia it was just a false alarm

A ghost of aviation
She was swallowed by the sky
Or by the sea like me she had a dream to fly
Like Icarus ascending
On beautiful foolish arms
Amelia, it was just a false alarm

Maybe I’ve never really loved
I guess that is the truth
I’ve spent my whole life in clouds at icy altitude
And looking down on everything
I crashed into his arms
Amelia, it was just a false alarm

I pulled into the Cactus Tree Motel
To shower off the dust
And I slept on the strange pillows of my wanderlust
I dreamed of 747s
Over geometric farms
Dreams Amelia – dreams and false alarms

“Amelia” by Joni Mitchell

The more we know about people, socially, culturally and personally, the more we feel we can anticipate how they might respond to any given situation.

And yet, it is impossible to predict anyone else’s behaviour with certainty.

Despite how close we might feel to another human being, we can never really tell what they are going to do next.

Each of us has our own unique sense of being… the sense of an autobiographical self that is poised between the remembered past and the anticipated future.

And the nature of this sense of being, whether we realise it or not, is freedom.

And whilst it is sometimes hard to realise this sense of freedom in ourselves, it is practically impossible to experience it in others, because we can only ever experience their actions in the past.

This sense of freedom is central to the ideas of the Existentialist philosopher, Jean-Paul Sartre.

Hazel Barnes is probably best known as the person who introduced the works of Jean-Paul Sartre to a wider American audience. But as her writing shows, she was also something of a philosopher herself.

To explain the idea that freedom is at the heart of human experience, she came up with a wonderful visual analogy.

A visual analogy that is both evocative and deeply illuminating.

The way we experience other human beings, she explains, is a bit like what happens when you look up to see a jet aircraft flying across a clear blue sky.

You can see the white vapour trail stretching out for miles behind the plane, so you know where it has come from, and what the pilot’s previous actions were.

In other words, you can experience his past. And from this past, you can also anticipate where the pilot might go in the future.

What you cannot really know, however, is what is happening in the mind of the pilot or indeed what the pilot’s next move will be.

As an existentialist, Barnes understood that the core reality of every human being is freedom.

And that our daily denial of this essential nature is what Sartre characterised as “Bad Faith”, since it is a fundamental betrayal of our true selves.

Just because the pilot is flying in one direction, does not mean that he will always do so.

Every second he flies in that direction, he is doing so, not because he has no alternative, but because he is choosing to do so.

Because his true nature is freedom he has the potential to fly in any direction he wants.

And this is the unknowable part of any other human being.


It is 8.30am on a beautiful, clear Autumn morning in the pretty town of Albany, in upstate New York, where the trees are all changing colour to a brilliant blaze of red and gold.

Anyone who happens to look up at the clear blue sky might see the pure white line of a single vapour trail stretching across the sky from the east.

And, if they stop for a moment and carry on watching the plane, they will see something really quite unusual.

Because at this point the plane, which is a Boeing 767, banks sharply and turns south, leaving a vapour trail at right angles to its original flight path.

The aircraft is in fact American Airlines Flight 11, flying from Boston to Los Angeles, and the pilot is a man called Mohamed Atta.

Atta had been born on September 1, 1968 in the town of Kafr el-Sheikh, close to the shores of the Mediterranean Sea, in Northern Egypt.

His parents were ambitious for him, and, as a child, Atta was discouraged from socialising and spent most of his time at home studying. His father, Mohamed el-Amir Awad el-Sayed Atta, was a successful lawyer. His mother, Bouthayna Mohamed Mustapha Sheraqi, was also highly educated.

Atta had two sisters, and his parents were equally ambitious for them too. One would go on to become a medical doctor and the other a college professor.

In 1985, all this hard work paid off for Atta, when he secured a place at Cairo University to study engineering.

Atta excelled at his studies there, and in his final year he was admitted into the prestigious Cairo University architecture programme, graduating in 1990 with a degree in architecture.

Nobody could have predicted that ten years later he would find himself at the controls of a Boeing 767, as he flew over the golden forests of upstate New York.

Nobody could have known that he would turn the plane away from its destination, Los Angeles, and begin flying due south down the Hudson River Valley towards New York.

And nobody could have imagined that around fifteen minutes later, at just after 8:46am, he would straighten up the plane, accelerate, and fly American Airlines Flight 11 straight into the side of the North Tower of the World Trade Center.

With his strong engineering background, Mohamed Atta, of course, knew that the Boeing 767, traveling at over 465 miles per hour and carrying more than 10,000 gallons of jet fuel, would explode immediately, killing everybody on board.

But nobody else could have predicted that.

Not even Atta’s father, Mohamed el-Amir Awad el-Sayed Atta, who, to this day, continues to deny that his son could ever have been involved in something so unthinkable.

Because, however close we feel to another human being, we can never really tell what they are going to do next.

World Trade Center Attacked


How a brand determines experience: The ATM story

In 1989, Midland Bank, a traditional British high street bank with a history of low customer satisfaction scores, decided to launch a separately branded, phone-based operation. This was to be called “First Direct” and was the first phone banking operation of its kind in the UK.

Not only did this new venture look very different to its old bricks and mortar cousin, the new brand also behaved very differently. Significantly, one of the key criteria for frontline staff was that they had never worked in a bank before.

The launch was a great success. By May 1991 the bank had 100,000 customers on its books. Not only that, these customers were also saying that they were much, much more satisfied with the service than they ever had been with their traditional banking experience.

By May 2001 First Direct, now with the highest customer satisfaction in the market, was gaining one third of all its new business through referrals, with customers recommending the service to friends, on average, once every 4 seconds.

This difference in satisfaction ratings was most evident in the use of ATMs.

Midland Bank customers continued to give low satisfaction scores for their ATMs – around 25% – while First Direct customers were giving satisfaction scores of up to 70%.

The extraordinary thing was that they were both using the same ATMs.

In other words, their attitudes to the two brands were having a significant and measurable impact on their actual experience.

Even though the physical experience was exactly the same.


Don’t try to be original, just try to be good

It was the great American graphic designer Paul Rand who first said: “Don’t try to be original, just try to be good.” And this advice is nowhere more valuable than in the discipline of typography.
These days any fool can knock up a half-decent piece of design, given the wonderful menu-driven, template-based digital design tools we now have at our fingertips.
However, to produce something that is really “good”, you really need to understand the fundamental principles of typographic design.
This stuff isn’t subjective; it’s like music: there are some things that are right and some things that are just plain wrong. You need to learn classical before you can express yourself in free-form jazz.
I came across two great introductions to this esoteric art recently.
The first is this neat little online game which teaches you the mysterious art of “kerning”. (For complete novices, a top tip: vertical letters like an “I” followed by another “I” always need to be spaced apart, whilst round letters like an “O” followed by another “O” always need to be closer together.)
The second is this handy reference chart that summarises some of the more important principles of the discipline.
And as the man says: don’t try to be original, just try to be good.
Because it’s only when you get to be good that you have the ability to express your originality.


Photo Opportunities

With “Photo Opportunities”, Corinne Vionnet, a Swiss-based artist, has created a series of haunting images that also provide an interesting commentary on the stereotypical way in which we relate to familiar landmarks.

To create these mysterious visions, Vionnet gathered hundreds of tourists’ photos from online keyword searches and photo-sharing sites, then superimposed them one on top of another to achieve the desired effect. What you’re seeing, then, is hundreds of versions of the same photograph placed on top of one another.

Vionnet told Yvi magazine that the point of her creations is to show that people, tourists mainly, need to photograph their travels to prove that they were there.

She continued:
We are looking at a monument that we somehow already know. As a part of knowing that we have also been there, we need the photograph to fix the memory of our visit. By pressing the shutter button, time becomes event, a unique moment. The images made by tourists are picture imitations. They demonstrate the desire to produce a photograph of an image that already exists, one like those we have already seen. It is in fact a style of manipulating the viewer.


The Silent Treatment

It’s an old advertising cliché: “The client loves it, but…” A phrase usually followed by what the client really thinks. The fact is, we often dislike providing criticism that sounds, well, critical. Particularly when it has to be delivered in person.

Essanay, the studio that achieved fame with the silent movies of Charlie Chaplin, had no such compunctions when it came to rejecting scripts.

As befits a silent movie studio, they didn’t mince words when it came to providing scriptwriters with clear and concise feedback, as this pre-printed rejection slip so eloquently demonstrates.


It’s not information overload… it’s filter failure


A bit like Dave Bowman in the final moments of “2001: A Space Odyssey”, many people feel assaulted by the sheer volume of information they have to process at any given point in time. And most feel that this problem of “information overload” is simply an inevitable by-product of new media.

In this brilliant recent talk, Clay Shirky, the author who once coined the phrase “the internet runs on love”, begs to differ on both counts…

Clay points out with great clarity that “information overload” is, in fact, nothing new, since the problem first appeared with the first media revolution: the one that Johannes Gutenberg ushered in with the invention of movable type in the mid-15th century.

Within a few years, for the first time in human history, there were suddenly more books available than even the fastest reader could read in a lifetime.

Clay explains how we all need to get over this sense of feeling besieged by the wealth of information that appears to demand our attention, and describes the implications for re-evaluating our concepts of filtering, privacy, and urgency.

And, with a phrase borrowed from Yitzhak Rabin, he sums it all up with some very smart advice:

“When you have the same problem for a long time… maybe it’s not a problem. Maybe it’s a fact”.

Click here for the talk.