This is the last chapter in the 'introductory phase' of the essay, before we embark upon a lengthy excursion into Darwinism and symbolic cognition. It should be noted that at several points here, reference is made to a newer draft of chapter 7 than that posted on this blog, in which objective reality is modelled as an emergent, ‘computational’ feature arising from quantum interactions. Douglas Hofstadter’s concept of the Strange Loop is also obliquely referenced here, and will form a future line of enquiry in the development of these ideas.
Blackmore explores several properties of the experience of consciousness which have evidently served as the source of a vast array of traditional interpretations of its underlying nature, interpretations which may rest upon instinctual human perception rather than upon any true neurological reality:
“The most natural way to think about consciousness is probably… [that] the mind feels like a private theatre. Here I am, inside the theatre, located roughly somewhere inside my head and looking out through my eyes. But this is a multi-sensational theatre. So I experience touches, smells, sounds, and emotions… and I can use my imagination too. All these are ‘contents of my consciousness’ and ‘I’ am the audience of one who experiences them.”
Another common image is that of the stream of consciousness, a phrase coined by William James but similar to an analogy found in some forms of Buddhism. Blackmore narrates:
“Our conscious life really does feel like a continuously flowing stream of sights, sounds, smells, touches, thoughts, emotions, worries, and joys – all of which happen, one after another, to me. This way of conceiving of our own minds is so easy, and so natural, that it hardly seems worth questioning.”
Yet the problems of consciousness are among the most elusive in modern science, and current models reject these “apparently innocent analogies”. Such analogies depend heavily upon an experience of consciousness which may be an evolved illusion presented to the self (which may in turn be a similarly evolved illusion!) in order for that self to function.
Ned Block holds that consciousness exhibits two fundamental properties: phenomenal and access. The former represents unstructured experience centred upon our bodies and sensations, operating independently of any behavioural response, whereas the latter is available to reasoning, behavioural control and cognition. While this idea has found some acceptance, we might criticise its somewhat Kantian structure – phenomenal consciousness looks suspiciously like a pseudo- or epi-noumenal concept, while (somewhat confusingly) access consciousness plays the role of the phenomenal – and on those grounds reject the model. However, the concept of access consciousness will have brief relevance later, when we consider some underlying requirements for the evolution of language, in particular the notion of behavioural and bodily control as a precursor to the intention to communicate.
For several decades of the twentieth century, having observed the modular structure of the brain with respect to certain functions such as language or visual recognition, neurologists sought a module which might constitute a source of consciousness – or perhaps two sources, corresponding to Block's distinction above – but none has been forthcoming. Instead, consciousness appears to be delocalised across the whole of the cortical structure. Daniel Dennett has suggested that consciousness may emerge not from one point or area but rather from the computational circuitry found across the whole of the brain, and posits a 'Multiple Drafts' model in which a given event liberates a variety of sensory inputs, and a variety of interpretations of those inputs, creating the equivalent of multiple versions of the same story.
Each of these versions becomes a percept available to elicit or modify behaviour, and it is here that consciousness is to be found: in the flows of information around the brain as these diverse percepts are formed. There is no clear boundary between conscious experience and any other information processing in the brain, and no central experiencer – no 'Cartesian Theatre', in his words – conferring an unambiguous interpretation of reality upon any given draft. Descartes' maxim, 'I think, therefore I am', thus breaks down in the delocalised, process-based light of Dennett's model.
This is a significant challenge to our previously discussed experience of consciousness, with its mind/body duality and its apparently unitary nature as it is presented to us. Both of these assumptions form the fundament of the ‘Cartesian Theatre’, and both are to be recognised as neural constructs, or perhaps as emergent features arising from the computational functions of the brain. Dennett rejects completely the notion that there is a specific place within the brain where consciousness takes place, or that there exists some moment in the stream of experiences at which things “mysteriously become conscious.” Blackmore summarises why this should be so:
“To begin with, there is no centre in the brain which corresponds to this notion, for the brain is a radically parallel processing system with no central headquarters. Information comes in to the senses and is distributed all over the place for different purposes. In all of this activity there is no central place in which ‘I’ sit and watch the show…There is no place in which the arrival of thoughts or perceptions marks the moment at which they become conscious…”
There is nothing in this that can support a theatre of consciousness, but Dennett adds that we cannot simply shift our conception of it from a literal reality to some kind of analogical distributed process, or widespread neural network, in which the ‘thing-ness’ of consciousness, reflected in the word’s linguistic status as a noun, can still be retained. Consciousness is an emergent feature of the brain’s computations, present everywhere that computations occur (which is to say, in every part of the brain), and not for the first time on our journey will we see human inner experience confounded in this way. Unitary consciousness seems to be an illusory projection or construal of a much more complex and stranger fundamental neurological reality, and the role of symbolic cognition in construing some of this conscious identity and selfhood may perhaps be assumed. Here, then, we step over another threshold in coming to understand ourselves in a new way, and we find that traditional, dualist and Cartesian assumptions about ourselves cannot survive these revelations.
The delocalised aspects of Dennett’s model bear more than a passing resemblance to Steven Mithen's concept of cognitive fluidity, in which the modular primate mind combined several different modes of information processing – primarily social, natural history and technical, and later linguistic and abstract – to arrive at the cognitively modern human being. By utilising creative processes reliant upon metaphor and analogy, as well as a less localised neurological foundation, archaic humans were able to knock down the information-processing barriers in their brains, and as such Mithen considered cognitive fluidity to be an important property of consciousness.
Here, then, our Initial Model of Perception is fundamentally challenged: there is no singular consciousness to which perceptual data is presented. Rather, consciousness has fluid, mercurial properties emerging from multiple cognitions of a variety of perceptual drafts, and the apparent centrality of the phenomenon is illusory. The ‘stream of consciousness’ metaphor, read as a delocalised view, can perhaps explain one of the most distinctive features of visionary experience, that of constant movement and flow; but we must theorise that during such visions the natural ‘flow’ experience of consciousness has merely been intensified, rather than posit that visionary experience interrogates consciousness itself. We might speculate that this intensified ‘flow’ is caused by perceptual drafts responding and re-responding to each other in a kind of internal recursive cascade of cognitions, from which the experience of a heightened and unitary state of visionary consciousness apparently emerges. But visionary experience, like normal consciousness, may not necessarily offer experiential insight into what consciousness really is or how it relates to the brain. It appears as emergent, indeed as illusorily flowing yet unitary, as any other conscious state, and this emergent property is what confounds all simplistic explanations.
Emergence is a ubiquitous feature of evolved systems, and contrary to 'reducible' or fundamentalist theories in which all the characteristics of a given system can be predicted using a few simple rules or equations, emergent phenomena exhibit unpredictable patterns and regularities despite the original system lacking those characteristics: one must 'play out' the actions of the system for the emergent phenomena to be observed. We have already seen how Classical objectivity appears to emerge from a morass of trillions of quantum interactions, and we concluded that this objectivity was to a certain extent computational. This resonates with our discussion on consciousness, and further examples can neatly elucidate how computational phenomena and evolved systems can liberate complex and irreducible emergent behaviours.
The software game Langton's Ant simulates a creature (the 'ant') living in a virtual world governed by simple rules which should, according to reducible theories, produce predictable outcomes. The world consists of an infinite two-dimensional plane of pixel squares, evolving over discrete steps of time. At the beginning of the game, all pixels are uniformly coloured black, but a pixel will turn white once the 'ant' has occupied it – the 'ant' thus starts on a black square. As the game starts, three rules of movement come into play: if the 'ant' is on a black pixel, it turns right 90 degrees and moves forward one square, but if the 'ant' finds itself on a white pixel, it turns left 90 degrees before again advancing one square. As the 'ant' leaves a pixel, that pixel's colour inverts, from black to white or from white to black.
On the surface, this simple formulation might be expected to liberate regular geometric patterns of black and white pixels, and at first this is what happens. After a few hundred moves, however, this geometric regularity begins to break up and descend into chaos. Random noise then ensues for roughly ten thousand moves before something remarkable happens. Quite unbidden, the ant begins to exhibit cycling behaviour: a repeating sequence of 104 moves over the pixels, each cycle leaving the ant displaced diagonally relative to its position at the start of the cycle, from which point the 104-move cycle restarts. From out of this iterative behaviour, the ant builds a road, or 'highway', which extends diagonally and seems to continue forever.
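The ant's world is simple enough to sketch in a few lines of code. The following is a minimal simulation of the rules just described, using the colour convention of the text (all pixels start black); the starting direction is an arbitrary choice and only affects which way the eventual road points:

```python
def run_ant(n_steps):
    """Simulate Langton's Ant on an (effectively) infinite plane.

    Rules as in the text: on a black pixel the ant turns right 90
    degrees, on a white pixel it turns left 90 degrees; the pixel it
    leaves inverts its colour; the ant then advances one square.
    Returns the list of (position, heading) after each move.
    """
    headings = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    white = set()          # sparse grid: only white pixels are stored
    x, y, h = 0, 0, 0      # start at the origin, facing north
    trajectory = []
    for _ in range(n_steps):
        if (x, y) in white:
            h = (h - 1) % 4        # white pixel: turn left...
            white.discard((x, y))  # ...and invert it back to black
        else:
            h = (h + 1) % 4        # black pixel: turn right...
            white.add((x, y))      # ...and invert it to white
        dx, dy = headings[h]
        x, y = x + dx, y + dy      # advance one square
        trajectory.append(((x, y), h))
    return trajectory
```

Plotting the white pixels after, say, 12,000 moves reveals the road; inspecting the trajectory shows that once the chaotic phase has passed, the ant's net displacement over each 104-move cycle settles to a constant diagonal offset.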
This road is wholly unpredictable, partaking of an irreducibility that cannot be expressed in terms of the initial conditions – the several simple rules of the virtual world – of the game. The only way the road can be seen is to play the game out and observe, and not for the first time, we find that observation, rather than essence or underlying reality, is key to our present thesis.
A second and even more striking example of emergence can be found in the work of the engineer Adrian Thompson, who works in the field of evolvable hardware and evolves circuits using actual logic cells, since (Langton's Ant notwithstanding) he believes that computer simulations will not necessarily exhibit the emergent properties he is researching. His approach is to decide on a task, generate some random circuits, and then iterate through a series of steps, guided by the performance of the circuits, to improve that performance towards completing the task.
In the mid-1990s, he set one hundred of these logic cells the task of distinguishing between two audio input tones, one of 1,000 Hertz (cycles per second) and one of 10,000 Hertz, and of producing an output signal indicating that the circuit had made the distinction – a task he considered might eventually be useful in creating voice-recognition software. As might be expected, the first completely random arrangement of the logic cells failed to produce any result, but slowly, over thousands of steps, with his computer rating the fitness of each circuit's attributes towards completing the task and adjusting accordingly, features appeared in the evolving circuits which began to resemble a coherent result.
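Thompson's iterative procedure is, in essence, a genetic algorithm. His experiments scored candidate configurations on a physical chip, but the select-recombine-mutate loop itself can be sketched in software. The fitness function below is a hypothetical stand-in (simply counting the 1-bits in a genome), not his tone-discrimination measure, and all parameter values are illustrative:

```python
import random

def evolve(fitness, genome_len=64, pop_size=50, generations=200,
           mut_rate=0.02, seed=0):
    """Minimal generational genetic algorithm of the kind Thompson used.

    A genome is a bitstring (for Thompson, a chip configuration); each
    generation keeps the fittest fifth and refills the population with
    mutated crossovers of those survivors.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]             # keep the fittest fifth
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(genome_len):         # point mutation
                if rng.random() < mut_rate:
                    child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits; Thompson's real fitness measured how
# well the chip's output separated the two input tones.
best = evolve(fitness=sum)
```

The loop structure is the same whatever the fitness function; in Thompson's case, only the scoring step touched real hardware, which is precisely what let the evolving circuits exploit the physical quirks described below.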
After some 4100 steps, the logic cells completed the task, but in a way that no human could have ever designed. Bellows explains:
“Dr. Thompson peered inside... to gain insight into its methods, but what he found inside was baffling. The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest, with no pathways that would allow them to influence the output, yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other [chips] of the same type.”
The evolved process thus made no use of features such as an internal clock to distinguish the frequencies, but instead took advantage of the specifics and quirks of the logic cells, and did so far more efficiently than any designer could have conceived, with only 37 cells in use. Most striking were the five cells functionally disconnected from the circuit yet nonetheless essential to its operation, disclosing the same kind of delocalised phenomenon discussed earlier. Thompson reported at one point that he had no idea how the circuit worked.
Such emergent phenomena resonate with our current theme, particularly in light of Dennett's Multiple Drafts model. It is quite possible – and here we are engaging in some speculation, since consciousness remains a mystery – that consciousness partakes of the same type of emergent order as the ant's road or Thompson's logic cells, though whether this view should be taken literally or confined to a useful analogy is an open question.
If we consider the human brain as a system, then we quickly conclude that the initial conditions of that system are redolent with complexity. We might take a single neuron to represent a single element (or, in the language of Langton's Ant, a pixel), but were we to do so, we would have to concede that the rules governing these elements are far more complicated and numerous than those of the 'ant's' world or Thompson's logic cells. They would include quantum principles, the laws of physics and chemistry, and biochemical interactions, as well as genetic, epigenetic, mitochondrial and protein behaviours. There would also be the limitations and flexibilities of neurological predispositions, such as the propensity of the primary visual cortex to detect lines rather than curves, and the sheer scale of interneural connectivity, in which each neuron connects with and communicates to thousands of other neurons.
This dizzying picture of the human brain's initial conditions intensifies in light of the fact that the brain contains something of the order of 86 billion of these neurons, and the rules which govern all these principles are not, at the current state of our development, fully understood; nor is it known how these principles can be pragmatically applied to the specific field in question, in the 'warm and wet' context of biology. Thus we can perhaps marvel here at an ocean of ignorance, a great void in our knowledge in which it is difficult to raise questions of a pertinent, let alone scientific, nature, and gazing into this lacuna we can understand why many commentators resort to quick and ready explanations, or to pseudo-science, in seeking an origin for our consciousness.
But no matter how complex, an evolved system like this must surely exhibit unpredictable emergent phenomena similar to those seen in the preceding examples. Might such a system even display several orders of emergence, with first-order emergent behaviours descending back into chaotic randomness until still higher orders of regular pattern emerge at a later point? We might speculate that consciousness represents such a higher order of behaviour, irreducible to the myriad initial conditions of the human brain. Certainly the apparently delocalised properties of consciousness suggest an emergent nature of some kind, and the notion that we experience consciousness without knowing what it is finds a ready mirror in Thompson's admission that his circuit works but he does not know how.
Taking the speculations further, into what might be considered wild territory bordering on the nonsensical, we could ask whether this emergent viewpoint can throw any analytical or scientific light on concepts such as the ‘self’, the ‘ego’ or the ‘mind’. To be sure, these words are highly culturally conditioned, and as before, their nominal nature raises an unconscious expectation of ‘thing-ness’ and unitarity; but it seems possible that they might represent a set of recursive processes in which higher-order emergent phenomena, founded ultimately upon human neurology, reflexively respond to each other to colour our experiences and cognitions in ways simultaneously esoteric and fundamentally essential to our identities as human beings.
Of course, it is quite possible that these phenomena may have their origins in simpler emergent constructs, their apparent unitary nature heavily coloured by our symbolic cognitions and cultural expectations rather than partaking of higher order emergence, but in essence what might be suggested is that the unitarity and coherence of mind and the self may represent the same kinds of cognitive illusions emerging from neurological computational processes as those proposed by Dennett for consciousness.
Perennial ideas of the ‘soul’ and the ‘spirit’ might also partake of this computational illusion, although it is to be noted that our linguistic examples at the beginning of this chapter strongly suggest that general human understandings of the soul rest more simply upon symbolic cognitions of the unitary experience of consciousness. Visionary sensations such as ‘soul flight’ and ‘out of body experiences’ would seem to perceptually confirm this sense impression – that is, experientially and subjectively rather than empirically – that the ‘soul’ can move about, but we might speculate that this represents an experience of the ‘intensified trajectory’ in which the nature and structure of neurological computation changes in some form or enters a state of elevated neural excitation, and this change is then symbolically re-cognized and experienced as flight or movement.
Thus we propose that all of these essential human experiences are as illusory and fleeting as any computational artefact; that they nonetheless remain essential to our experience might be seen as a kind of paradox requiring resolution. In the fullness of time upon our journey, we may uncover hints of how this resolution may be achieved. But here we find ourselves in the clouds, making idle speculations which may themselves be clouded by symbolic cognitive processes, and we must finally remind ourselves that if, in our current state of understanding, we know little about the nature of consciousness, then our understanding of the mind and the self must be similarly inconclusive.
To return to earth and sense, then: it is to be noted that we have not presented any kind of theory of consciousness here, but rather speculated upon some of its properties, concluding that evolved phenomena can have strange and unexpected emergent outcomes. As we turn our attention more closely to how such situations apply to human behaviour and culture, we find that within the field of evolution, a strictly Darwinian approach to viewing ourselves can liberate surprising connections between apparently disparate aspects of our being which we consider unique to humanity. These delocalised connections have the capacity to deepen our self-understanding and bring us to a radically reoriented sense of what it means to be alive as a human being.