From the Wild Lands in the North, to the Great Deserts in the South, and the Majestic River of Telmar in the West, to the High Castle of Cair Paravel in the East, the land of Narnia is an extraordinary region populated by centaurs, dragons, talking animals and all manner of wonders which no adult human may ever see.
And although no adult may set foot in the land of Narnia, children may enter it through the famous wardrobe… as Lucy, Edmund, Susan and Peter famously discover in C.S. Lewis’s Chronicles of Narnia.
But they can do this only as long as they remain children.
It is with sadness that we see first Peter and Susan, then Edmund and Lucy, each learn that, beyond a certain age, they will never be able to return to Narnia.
This narrative of expulsion from paradise is as old as the story of the Garden of Eden, or the myth of “The Golden Age.”
Indeed, most adults, at some level, feel that the adult world somehow lacks the magic, the wonder and the sheer sense of possibility they once experienced as children.
We tend to see the cultural acclimatisation and education of our children as a process of opening their minds to more and more knowledge.
However, recent developments in neuroscience suggest that the opposite is in fact the case.
Because, extraordinary as it may seem, it is now clear that our awareness of the world around us, rather than expanding, actually diminishes in certain key areas as we grow older and become more socially acclimatised to the needs of our own particular tribe or social grouping.
Whilst education and the development of the social brain enable us to find our niche in society, this process often comes at the expense of significant cognitive abilities.
To put it bluntly: we become blinded to anything other than that which our mother culture defines as reality.
We are, each of us, born with around 100 billion neurons in our brains… to imagine how enormous this is, just consider that it is about the same as the number of stars in the Milky Way.
And in a child’s first years of life, the brain is constantly being sculpted by its cultural surroundings, as it refines its circuits in response to environmental experiences.
Since brain circuits organize and reorganize themselves in response to an infant’s interactions with his or her environment, the synapses—the points where neurons connect—are either built and strengthened, or weakened and pruned away as needed. (This process is often catchily described as “the neurons that fire together wire together”.)
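The “fire together, wire together” idea can be illustrated with a toy sketch in code. (This is purely illustrative: the update rule and its numeric constants are assumptions for the sake of the example, not a model drawn from the neuroscience literature.)

```python
def hebbian_update(weight, pre_active, post_active,
                   strengthen=0.1, decay=0.02):
    """Toy Hebbian rule: co-activation strengthens a synapse;
    otherwise it slowly decays, and is 'pruned' once it hits zero."""
    if pre_active and post_active:
        weight += strengthen   # "fire together, wire together"
    else:
        weight -= decay        # unused connections weaken
    return max(weight, 0.0)    # a pruned synapse cannot go negative

# A connection that is exercised often ends up strong;
# one that is rarely used withers away entirely.
used, unused = 0.5, 0.5
for _ in range(100):
    used = hebbian_update(used, True, True)
    unused = hebbian_update(unused, False, False)
```

After the loop, the frequently used synapse has grown far stronger, while the neglected one has decayed to nothing: exactly the strengthening-versus-pruning dynamic described above.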
Patricia K. Kuhl is a Professor of Speech and Hearing Sciences at the Institute for Learning & Brain Sciences at the University of Washington. She specializes in language acquisition and the neural bases of language. Using magnetoencephalography (MEG, a relatively new technology that measures the magnetic fields generated by the activity of brain cells), Kuhl has, for the first time in human history, been able to show just how babies acquire language.
All spoken languages consist of basic units of sound, called phonemes. These phonemes combine to form syllables. For example, in English, the consonant sound “t” and the vowel sound “o” are both phonemes that combine to form the syllable “to”, as in “tomato”.
In total there are more than 200 phonemes, representing all the sounds that the human voice can create. But most languages use only a fraction of this number. In some languages it can be as few as 15, whilst in English it is over 40.
Patricia K. Kuhl discovered that before 8 months of age, the neurons in a baby’s brain could recognise the phonemes of any language on the planet.
After this point, babies quickly learn to ignore the vast majority of phonemes and concentrate only on those used in their native language. And within a few months they have lost the ability to distinguish the others altogether.
In her 2011 TED talk, “The Linguistic Genius of Babies,” Kuhl describes this process as babies going from being “citizens of the world,” to becoming “language-bound” members of their own tribal grouping.
(Intriguingly, similar tests done on adults show that these neurons continue to fire in recognition of all 200 phonemes when presented with any “foreign” language. However, this information is no longer processed consciously, so the listener is not aware that they can “hear” them.)
Similarly, in the visual domain, it has been shown that very young babies have cognitive abilities that become lost as they begin to grow into their culturally acclimatized selves.
According to a study led by Olivier Pascalis, a psychologist at England’s University of Sheffield, human babies start out with the ability to recognize a wide range of faces, even among races or species different from their own.
The study focused on the ability to recognize and categorize faces, determine identity and gender, and read emotions. The findings suggest that, in humans, whether or not you retain this ability is a question of “use it or lose it.”
In the study, six-month-old infants were able to recognize the faces of individuals of different races, or even of different species—in this case, monkeys—something most adults cannot do.
Babies who received specific visual training retained the ability. But those with no training lost the skill altogether by the time they were nine months old.
This is because by the time they’re nine months old, face recognition is based on a much narrower model, one that is based on the faces they see most often.
This more specialized view, in turn, diminishes our early ability to make distinctions among other species and other races. For instance, an infant exposed mainly to Asian faces will grow up less skilled at discerning among, say, different Caucasian faces.
Michelle de Haan, one of the study’s authors, said: “We usually think about development as a process of gaining skills, so what is surprising about this case is that babies seem to be losing ability with age.
“This is probably a reflection of the brain’s ‘tuning in’ to the perceptual differences that are most important for telling human faces apart and losing the ability to detect those differences that are not so useful.”
Even if children were not to lose such cognitive abilities as they “tune in” to their contextual cultural norms, we also know that a large part of their cultural acclimatisation would prevent them from expressing views that are at odds with the social groupings in which they find themselves.
Even when we see the world differently, our adult brains have all too often been wired to keep our thoughts to ourselves.
Research conducted by Solomon Asch of Swarthmore College has clearly demonstrated the degree to which an individual’s own opinions will conform to those of the group in which he finds himself.
Research conducted by Stanley Milgram of Yale, meanwhile, has shown how likely people are to obey authority figures even when their orders go against their own personal morality.
Perhaps this is why we love the ability of children to speak the truth. To say what we have all been thinking, even though it is not culturally acceptable.
After all, it is the child who is not blinded by culture who, on seeing the Emperor’s New Clothes, says, “But he isn’t wearing anything at all!”…
Let’s face it, with a gloriously daft plot that has Raquel Welch running around in a fur bikini and cave men doing battle with dinosaurs, the movie “One Million Years B.C.” was never going to win any prizes for historical accuracy.
However, the fact that it wasn’t completely laughed out of the cinema when it was released in 1966 illustrates a fascinating cultural bias…
That popular culture tends to see history as little more than an extended costume drama.
We tend to see historical figures as being physically the same as us… in both mind and body. Admittedly, we recognize that these fellows may be a bit behind with technology, and that they may have some curious superstitions and beliefs, not to mention some dubious personal hygiene habits. But, given a good bath, a few lessons in English, and of course, a change of clothes we imagine that most historical characters could be introduced to contemporary society with ease.
This is largely because, when we have given it any thought at all, we have always tended to see the adult human brain as something fixed and unchangeable.
And we assume that if changes do take place in the structure of the human brain, they do so through natural selection, over the course of many generations and many millennia.
Not everyone thinks this way, however. Back in 1976, Julian Jaynes, a professor of psychology at Princeton, published a revolutionary new approach to the history of the mind in The Origin of Consciousness in the Breakdown of the Bicameral Mind.
In this extraordinary book, Jaynes argued that consciousness was, in fact, only a fairly recent development in human evolution, emerging as recently as the end of the second millennium BCE.
Before this time, Jaynes argues, people experienced the world rather like schizophrenics who experience “command hallucinations”, or “voices” that tell them what to do. (In fact, according to Jaynes, schizophrenia is simply a vestige of humanity’s earlier pre-conscious state.)
Jaynes called this cognitive state “bicameralism”, reasoning that the instructions or “voices” came from the right-brain counterparts of the left-brain language centres—specifically, Wernicke’s area and Broca’s area. These regions are somewhat dormant in the right brains of most modern humans, but auditory hallucinations have occasionally been seen to correspond to increased activity in these areas.
Using the earliest writings as evidence for his theory, Jaynes argued that the characters described in the Iliad do not exhibit the kind of self-awareness we normally associate with consciousness. Rather, these bicameral-minded individuals are guided by mental commands issued by external “gods”.
And whilst no mention is made of any kind of introspection in the Iliad or the older books of the Old Testament, corresponding works written after this period, like the later books of the Old Testament or the later Homeric work, the Odyssey, show signs of a very different kind of mentality… an early form of consciousness.
According to Jaynes, human consciousness first emerged as a neurological adaptation to developing social complexity, as the bicameral mind began to break down during the social chaos of “The Bronze Age Collapse” in the second millennium BCE.
Historians are unclear as to the cause of “The Bronze Age Collapse”, but between 1206 and 1150 BCE, the cultures of the Mycenaean kingdoms, the Hittite Empire, and the New Kingdom of Egypt collapsed, and almost every city between Pylos and Gaza was violently destroyed and largely abandoned.
This was a time of large scale economic collapse, and mass migrations across the region (Christian and Hebrew scholars associate Moses and the story of Exodus with this time), creating social stresses that required societies to intermingle, forcing people to learn new languages and customs and generally, to become more flexible and creative.
Jaynes argues that self-awareness, or human consciousness, was the culturally evolved solution to this problem. Jaynes further suggests that the concepts of prophecy and prayer arose during this breakdown period, as people attempted to summon instructions from the “gods” and the Biblical prophets bemoaned the fact that they no longer heard directly from their one, true god.
According to Jaynes, relics of the bicameral mind can still be seen today in cultural phenomena like religion, schizophrenia and the persistent need amongst human beings for external authority in decision-making.
Jaynes’s big idea is really quite breathtaking in its boldness. And although many of his other, related ideas have since been validated by modern brain-imaging techniques, Jaynes’s bicameral hypothesis remains highly controversial.
At the time of publication, one of Jaynes’s more enthusiastic supporters was Marshall McLuhan, who was undoubtedly drawn to Jaynes’s ideas about the origins of consciousness and the light they shed on the origins of language and writing. (It is entirely possible that the Bronze Age Collapse was the result of a collapse in early media, and of the breakdown of the Bronze Age mind, rather than the cause of it.)
Marshall McLuhan is, of course, famous for his thinking on the impact of media on consciousness.
McLuhan and onetime collaborator Walter Ong were both fascinated by how the shift from an oral-based stage of consciousness to one dominated by writing and print changed the way humans think.
McLuhan went on to expand on these ideas in The Gutenberg Galaxy, particularly the significance of the invention of moveable type and printing in terms of its impact on human consciousness.
It has been said that the digital age we are now entering is greater than any of these previous information revolutions, in that its impact may be no less than the equivalent of the invention of writing and the invention of the printing press… all at the same time.
However, until recently we have lacked proof that even changes of this magnitude can have an immediate effect on human consciousness.
But over the last few years, new research using MRI scanners has revealed that the adult human brain can, in fact, be significantly transformed by experience.
“Neuroplasticity” is the term used to describe this newfound ability of the brain to transform itself structurally and functionally as a result of its environment. Significantly, the most widely recognized forms of neuroplasticity are related to learning and memory.
If you live in central London you will be very familiar with the sight of guys, looking a bit lost, as they ride around on mopeds with a clipboard attached to the handlebars.
These are would-be London taxi drivers doing what is known locally as “The Knowledge”.
In order to drive a traditional black London cab, these drivers must first pass a rigorous exam which requires a thorough knowledge of every street within a six-mile radius of Charing Cross. It usually takes around three years to do “The Knowledge” and memorise this vast labyrinth of streets in central London, and on average, only a quarter of those who start the course manage to complete it. However, it now appears that those who manage to attain “The Knowledge” find themselves not only with a new job, but also with a modified brain.
A study published in 2000 by a research team at the Department of Imaging Neuroscience at University College London demonstrated that London taxi drivers who do “The Knowledge” develop an enlarged hippocampus, the part of the brain associated with memory and navigation.
As Dr Eleanor Maguire, who led the research team put it, “There seems to be a definite relationship between the navigating they do as a taxi driver and the brain changes. The hippocampus has changed its structure to accommodate their huge amount of navigating experience. This is very interesting because we now see there can be structural changes in healthy human brains.”
Over the past decade, other researchers, using MRI scanners, have observed similar structural and functional changes to the brain. For example, one study (Draganski et al., “Temporal and Spatial Dynamics of Brain Structure Changes during Extensive Learning”, 2006) shows how medical students undergo significant changes to their brains whilst studying for exams.
If this type of transformation can be observed in the adult brain, imagine what changes might be occurring in the minds of children growing up in the midst of the digital revolution.
In 2001, the American educationalist Marc Prensky drew everyone’s attention to the impact this was beginning to have on the way our brains are wired, when he published an article called “Digital Natives, Digital Immigrants”.
As Prensky points out: “Our students have changed radically. Today’s students are no longer the people our educational system was designed to teach. Today’s students have not just changed incrementally from those of the past, nor simply changed their slang, clothes, body adornments, or styles, as has happened between generations previously.
A really big discontinuity has taken place. One might even call it a ‘singularity’ – an event which changes things so fundamentally that there is absolutely no going back. This so-called ‘singularity’ is the arrival and rapid dissemination of digital technology in the last decades of the 20th century. It is now clear that as a result of this ubiquitous environment and the sheer volume of their interaction with it, today’s students think and process information fundamentally differently from their predecessors.
These differences go far further and deeper than most educators suspect or realize… it is very likely that our students’ brains have physically changed – and are different from ours – as a result of how they grew up….
Digital Natives are used to receiving information really fast. They like to parallel process and multi-task. They prefer their graphics before their text rather than the opposite. They prefer random access (like hypertext). They function best when networked. They thrive on instant gratification and frequent rewards. They prefer games to “serious” work.”
These points that Prensky makes about the changing nature of consciousness are further developed in “Communities Dominate Brands”, published in 2005 by Tomi Ahonen and Alan Moore. Here, Ahonen and Moore identify two key drivers behind these changes:
Firstly, they identify the act of gaming as the key factor in the rewiring of the Digital Native’s neural circuitry. Since the mid-1990s, children have been playing games on electronic devices, rather than, like the previous generation, simply being passive consumers of broadcast media. And it is the interactive nature of electronic gaming, according to Ahonen and Moore, that is responsible for changing the brains of this generation structurally and functionally, and creating a constant appetite for social interactivity.
Secondly, Ahonen and Moore emphasise the difference between the world of the PC and the fixed internet, and the world of the mobile device and the mobile internet. The former is a world where you “log on” and “log off”; the latter a world where you are constantly connected. They characterize the former as “The Networked Age” (1992-2004) and the latter as “The Connected Age” (2002-2014?). And where the key characteristic of “The Networked Age” is “Acquiring”, the key characteristic of “The Connected Age” is “Sharing”.
It is this constant connection, and the desire to share and interact with other engaged minds, that Ahonen and Moore predicted would create new “elective”, dynamic communities, and that these new social groupings would be the engine of massive social change. The subsequent rise of social media and its impact on major social events like “The Arab Spring” is testament to the validity of Ahonen and Moore’s predictions.
However, it is probably the world of gaming that has the most to teach us about the changes taking place in the wiring of the brains of our “Digital Natives”. And we will explore this in a little more detail in part two.