The brain is the seat of human intelligence. How are memories stored? How are decisions made? How does the brain’s wiring scheme compare to a computer’s? How do we make brains more like computers, and computers more like brains? These questions stand defiant and notorious, yet simple, belying their audacity to seize the Earth’s helm from God and tack upwind into futures unknown.
Understanding our brains remains the ultimate barrier to increasing happiness, productivity, and meaning in our lives. A thriving Earth homeostasis also depends on a working understanding of our cognitive functioning. A truly stable planetary society will require a sophisticated intuition of the holes in our collective cognition, the ways in which we do not measure up to the task of balancing a grossly unbalanced ecosystem. We must also seek an operational understanding of our man-made intelligence, the computer, that proto-species still in its infancy. The fact that computers can only mimic intelligence is irrelevant. The notion that anything constructed by man can never be considered alive is misguided. The observation that computers are physically separate from human bodies, and thus warrant classification as distinct entities, is an illusion.
Humans must learn to broaden their primal notions of self. The prime component of self is the mind, generated by an organ whose complex interplay of electro-chemical firing patterns and shifting neural geometries cannot be readily explained or synthesized. We live inside majestic electron-storms, oblivious to the mystery of consciousness. We fancy ourselves powerfully in command, despite the fact that we cannot explain our functioning. And yet the humble computer, humming sentinel of our own creation, stands infinitely determinable. We can forge any sort of logic within its golden brain and expect it to work with a hitherto unknown precision and iterative magnitude. This little proto-mind we create, control, and master is just a device, we say, embedded into our homes or carried in our pockets, but just a thing and certainly not part of our consciousness, our proprioception, our self.
Humanity, too, is in its infancy. We often react to impulses rather than choose our own course. We are imprisoned inside ourselves - blinded by experiences, shackled by developmental years, limited by genetic make-up and memetic tendencies. As we mature we re-define the boundaries that limit us, sagely conceding that the definitions and meanings of our upbringing are ultimately within our control. As we evolve, we become aware of our role in contriving neat categories for the pieces of the world. As we grow, we speak less of causality and objective truth, and more of relationships, nuances, and shades of grey. We seek out new perspectives and original attitudes. We begin to listen.
The alloying of soul intelligence with machine intelligence will allow a world that acts intelligently as a whole. Elucidating the mechanical, computational, and philosophical inter-relationships becomes our highest priority.
It is commonly assumed that human consciousness is simply an advanced version of a computer, or sometimes vice versa, when in fact the two operate in entirely different ways. The most obvious difference is material: the brain is made of flesh, the computer of silicon and gold. But there are other, more striking differences.
The architectures of the two are radically different, leading to dissimilar kinds of algorithmic processing, pattern recognition and parsing, and memory storage. The computer is a single system of logic. The functions of a computer can be mapped in an electrical diagram or algorithmic flow chart. Each decision is made in linear fashion, one after the other.
The brain, on the other hand, eludes a mapping to this degree of precision. The brain is a distributed system of associativity, with billions of firing neurons and trillions of synapses interacting in complex, virtually incomprehensible ways. Coupled to this raw cognition is a hormonal-emotional system that may override a logical sequence of thoughts because you “are having a bad day” or because “there is a tiger flying at your face.” These differences represent divergent approaches to handling information and interacting with an environment.
Assume that a given electrical system is wired correctly. If you decide to add a new wire from this switch to that one, and fifty new wires from this node to that, the system will no longer function. The current will not even flow. You will be looking at a lifeless tangle of wires and switches. The brain, on the other hand, is characterized by nothing but a hot mess of circuitry in every direction. And it adds its own new “wires” from here to there as it sees fit. These new circuits do not collapse the functioning of the system. They typically optimize an existing process or add an entirely new one.
To illustrate: no one expects a squirrel in their mailbox, until they open their mailbox and a starving, angry squirrel comes barreling out. To exaggerate and slightly simplify brain functioning, the node for “mailbox” and the node for “squirrel” just gained a connecting bridge. That is, the two just developed a new neuron or group of neurons between them. This informs the total system that mailboxes and squirrels do indeed occasionally mix. You will now open mailboxes with a caution you never would have considered before.
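To push the exaggeration one step further, the new bridge can be sketched in code. This is a toy sketch under loose assumptions, not any real model of cortex: concepts are named nodes, and a surprising co-occurrence wires a new edge between them. All names here are invented for illustration.

```python
# A toy associative network: concepts are nodes, experiences add edges.
class AssociativeMemory:
    def __init__(self):
        self.links = {}  # concept -> set of associated concepts

    def experience(self, a, b):
        """A surprising co-occurrence wires a new 'bridge' between nodes."""
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def associated(self, a, b):
        return b in self.links.get(a, set())

memory = AssociativeMemory()
print(memory.associated("mailbox", "squirrel"))  # False: no link yet

memory.experience("mailbox", "squirrel")         # the angry squirrel erupts
print(memory.associated("mailbox", "squirrel"))  # True: caution from now on
```

The point of the sketch is the last two lines: the system keeps working after the new wire is added, and the new wire changes future behavior.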
Computers are supremely logical entities. This claim seems dubious to anyone accustomed to the “illegal operations” of Windows or the frailty of online Flash applications. But these are faults of the human programmer and not of the computer itself. Modern computing involves thousands of programming languages on numerous platforms at multiple levels of abstraction. Despite the complexity, however, a computer utilizes only a small number of relatively simple operations. Clever applications and combinations of simple terms lead to the staggering power and detail we enjoy today. Just as DNA is composed of only four chemical bases and yet encodes the full array of biological life on Earth, and just as English contains only 26 letters but infinite possibilities for expression, so too the computer is simple in the microcosm and complex in the macrocosm.
The computer’s ultimate devotion to logic is a design feature. The base level language of a computer is called binary, and it is the conceptual foundation upon which all else sits. Every move of the mouse, every stroke of a keyboard, every database query, and every computation is translated into binary. Binary is what makes computers possible. It is called binary because it is a very simple language, consisting of exactly two possible characters: 0 and 1. These can be thought of as any opposite pair: black or white, fight or flight, left or right. But 0 and 1 is how the computer represents it.
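The translation of a keystroke into binary can be shown concretely. The sketch below, with the illustrative helper name `to_binary`, maps each character to the eight 0s and 1s of its ASCII/Unicode code point, which is essentially what the machine stores:

```python
# Every keystroke ends up as 0s and 1s: each character becomes the
# binary form of its code point, padded to eight digits.
def to_binary(text):
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_binary("A"))   # 01000001
print(to_binary("Hi"))  # 01001000 01101001
```

Two characters, 0 and 1, are enough: eight of them already distinguish 256 different symbols.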
Imagine you are in a maze, being chased by the Minotaur. Turning back the way you came is never an option. A curiosity of this particular maze is that there are no four-way junctions; every junction is a T-junction, a fork in the road. As you run forward you come to a decision: left, or right? You choose left and proceed until again you can go no further without deciding left or right. Your decision process in this maze is a kind of binary language. There are only two options, but there are thousands of junctions at which to decide. Even though at each junction you are limited to left or right, you can wind a million different paths through the maze depending on the order of your lefts and rights.
A computer is a big silicon maze through which electrical current winds a path. The base level of computation is very simple: 0 and 1. But when decisions are compounded successively, trillions of different outcomes become possible. And unlike fantasy mazes made of magic, where walls slide around and dynamically change their shape, a computer’s maze is hard-wired. The circuits themselves never change positions. So if the instruction is “Take 8 million left turns”, the current always reaches the same place. This is what makes computers fantastically useful: the same input always produces the same output. The same instructions always produce the same result. The very essence of a computer’s design ensures predictable behavior.
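The maze analogy can be sketched directly. In this illustrative toy (the function name `follow` is invented), a path is a string of left/right decisions read as bits; the same string always lands in the same cell, and n junctions yield 2 to the power n possible destinations:

```python
def follow(turns):
    """Interpret a string of 'L'/'R' decisions as bits.
    The resulting number is the cell you end up in: purely deterministic."""
    cell = 0
    for turn in turns:
        cell = cell * 2 + (1 if turn == "R" else 0)
    return cell

path = "LRRL"
print(follow(path))    # 6: the same instructions...
print(follow(path))    # 6: ...always land in the same cell
print(2 ** len(path))  # 16 possible destinations for 4 junctions
```

Nothing in the walk depends on anything but the instruction string; that is the hard-wiring of the silicon maze.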
Computers are not completely limited to their logical, rational bailiwick. Methods can be devised to perform “intuitive” tasks like facial recognition. There is little similarity, though, between the brain’s method and the computer’s method. Whatever the exact internal mechanism of the brain, the task of looking at a picture and recognizing your friend’s face is trivially easy. It is not strenuous or time-consuming, and in fact seems quite automatic. You would have a difficult time merely seeing the picture as differently colored blobs. Your friend’s face simply jumps out of the picture. And this remains true even if it is scrunched up into an uncharacteristic grin, covered in Halloween make-up, or seen through a fog. The picture could have a large ink stain obscuring much of your friend’s face, and though it might take a moment’s additional scrutiny, it is highly likely that you would recognize your friend through it all.
A computer has a much harder time. It cannot simply “look at” the picture. It must deal with everything in strictly logical, sequential terms. It will analyze the picture, pixel by pixel, starting in the upper left corner and moving right, then moving down to the next row when it reaches the end. Facial recognition software reads pixels the way we read words. It will classify each pixel according to programmed criteria and make mathematical calculations with this data. So, it determines the first pixel is the color gray with such-and-such numerical value, and records this. It sees that the next pixel is also gray, with an ever so slightly higher numerical value, and records this. It establishes numerical values for the color of every pixel and then performs mathematics to determine gradient areas, borders, shading, and other aspects of a visual image. A computer facial recognition program categorizes and classifies the image in terms of regions and the relative differences between those regions. It does not “recognize”. It makes trillions of precise calculations.
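The scan described above can be sketched in miniature. Here a tiny grayscale “image” of numbers is read left to right, top to bottom, and a large jump between neighboring pixels is flagged as a possible border. The pixel values and the threshold are invented for illustration; real systems compute far richer gradients, but in the same arithmetical spirit:

```python
# A toy grayscale image: each number is one pixel's brightness.
image = [
    [120, 122, 125, 200],
    [119, 121, 124, 198],
    [118, 120, 199, 197],
]

# Scan pixel by pixel, recording sharp transitions between neighbors.
edges = []
for row in range(len(image)):
    for col in range(1, len(image[row])):
        gradient = image[row][col] - image[row][col - 1]
        if abs(gradient) > 50:          # a large jump suggests a border
            edges.append((row, col, gradient))

print(edges)  # transitions found by arithmetic, not by "seeing"
```

The machine never sees a face; it sees a list of coordinates where numbers change quickly.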
The brain has not been reduced to a series of minor operations. The features of brain functioning are relatively well-known, but the sheer scale and non-uniformity of cortical topography quickly confound an intuitive understanding. Brain cells come in all shapes and sizes. Most of them have characteristic parts such as cell bodies, dendrites, and axons. Branch-like dendrites connect with axons via small synapses, which can be filled with a whole host of specialized chemicals. The exact chemical make-up of a synapse determines aspects like electrical capacitance and information flow. But neurons are not wired up in neat one-to-one pairs. Each neuron can be connected to hundreds or thousands of other neurons in arrangements too intricate to fully map, let alone comprehend. The dissection of brains remains too clumsy to tease all secrets from the most granular level of neural circuitry.
Jeff Hawkins left the computer engineering industry in the 1980s to research brains. He came up against what he saw as shortcomings in the fields of cognitive science, artificial intelligence, and neural networks. His chief complaint was that researchers studied simplified systems far removed from real human brains. Cognitive imaging gives a great deal of information about regions of cortical activity, but only by asking the participant to concentrate on the same task or thought over and over again. It conveys no sense of the dynamism of the mind’s real functioning. The world is a fast-moving place, and the human mind rapidly adapts to new information streaming in fast and furious.
Similarly, neural network researchers were removed from real brain functionality. The majority of neural networks at that time were three to ten functional neurons hooked together. The pattern recognition and “learning” attributes of these networks had very interesting properties, and yet the networks were so vastly simplified compared to the brain as to be almost unrecognizable. Neuro-anatomists are aware of certain brain regions containing as many as one thousand feedback circuits per signaling circuit. That is, for one neuron performing some task, there are a thousand others watching. This level of complexity, and other nuances found by dissecting brains, were not addressed in researchers’ systems.
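For a sense of how small such networks were, here is a sketch in the same spirit: a single artificial neuron trained by the classic perceptron rule to compute logical AND. The function name, learning rate, and epoch count are arbitrary illustrative choices, not drawn from any particular researcher’s system:

```python
# One artificial neuron learning AND by the perceptron rule:
# nudge the weights whenever the output disagrees with the target.
def train_and_gate(epochs=20, rate=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            bias += rate * err
    return w, bias

w, b = train_and_gate()
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(*x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

A handful of such units wired together was a typical “neural network” of the era; set it beside a region with a thousand feedback circuits per signaling circuit and the gap is plain.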
Brains work fantastically well, despite the staggering complexity under the hood. They auto-associate. This suggests that they are constructed in such a way as to harness the physical and chemical properties of the universe to perform calculations and predictions. The same claim can be made of computers, but the brain harnesses a far greater variety of chemical and physical elements, and seems the more impressive in that regard. So it is a complex computational process, housed in a complex internal structure, that grants the human mind such versatile powers. But the computer, a simple computational framework inside a relatively simple internal structure, can emulate a great many functions of a human consciousness. The simplicity of computers allows for a different kind of complexity. And so we see that contrasting methods can bring about similar results, which leads us to ask: what then is intellect? And what of sentience? Are human beings the sum of their parts, or is there something else? Is it the “soul” that makes us what we are? And will a pile of wires and circuit boards ever transcend the sum of its collective functioning?
The modern era is one of tremendous intelligence. Increasingly specialized human beings design ever greater contraptions and gizmos for a galaxy of purposes. And these contraptions, whether they are real mechanical objects you can hold in your hand or more ephemeral software creations, have begun to exhibit signs of the intelligence of their creators. It is still a far cry to say that the Watson computer, winner of a Jeopardy tournament, is truly intelligent on the inside. No one claims that the chess champion Deep Blue computer ever lies awake, staring into the darkness, contemplating the philosophy of its purpose in the universe. But an intelligent human being who can neither answer every question on Jeopardy nor hope to best a Grand Master in chess cannot help but be impressed by an entity that does. How then should we think of these tremendously powerful machines? The time, energy, and resources that go into every machine are staggering, especially when viewed in the context of designing each machine’s software to a specific purpose. The Watson computer, supremely excellent at its task, cannot hope to suggest a chess move. It can only respond to questions. We are impressed by the depth, but not the breadth, of such machines’ capabilities.
What shall be the criteria of intelligence? What things rightly deserve the moniker? Alan Turing, the British World War II era mathematician and cryptologist who helped break the German Enigma code, devised a series of thought experiments around the issue. The summation of these concepts is called the Turing Test. If a computer can fool a human into thinking he or she is talking with a fellow human, that computer is said to pass the Turing Test. Given the difficulty of defining the concept of “thinking”, Turing sought to answer the question “Can machines think?” in straightforward terms. If a computer can mimic the output of a human being reliably and convincingly, then it does not matter whether the computer is really “thinking” or not.
To counter this argument, philosophy professor John Searle devised a situation called the Chinese Room. Searle imagines a person in a locked room with an enormous tome containing instructions for manipulating Chinese symbols. Chinese speakers write a question on a piece of paper and slide it under the door. The person in the room carefully follows the instructions in the book, crafting a reply in Chinese before sliding it back under the door. The Chinese speaker outside sees a sensible response and concludes that there is someone intelligent inside the room, answering the questions. In fact, the person inside the room does not understand Chinese as anything more than unintelligible squiggles. The person outside the room imagines he is talking to a thinking entity when in fact he is communicating with a very comprehensive set of instructions. This is a functional analogue to what computers do: execute specific instructions based on input data.
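That functional analogue is easy to make literal. The sketch below is a minimal “room”: a rulebook of canned Chinese replies and a lookup that matches input against it. The rules themselves are placeholders invented for this example; nothing in the program understands anything:

```python
# The "tome" of instructions: question in, prescribed reply out.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",       # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样?": "今天天气很好.",  # "How's the weather?" -> "It's lovely."
}

def room(question):
    # Follow the instructions mechanically; no comprehension involved.
    return RULEBOOK.get(question, "请再说一遍.")  # "Please say that again."

print(room("你好吗?"))  # a sensible reply, produced without understanding
```

From outside the door, a fluent answer; inside, only symbol-matching.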
These thought experiments illustrate the different ways of thinking about the subject. Turing remained unconcerned with the “behind the scenes” activity and was only focused on the output. After all, we do not expect fellow human beings to share their inner worlds with us, nor to be even capable of conveying the experience of their innermost souls. We focus on the behavior and ideas we can observe. We imagine that our peers are mostly like ourselves but often find that the hidden qualities and motivations of others wildly differ from our own. We do not necessarily condemn someone for having a deviating path to the same answer. Indeed, we respect ingenuity we do not possess.
Searle is more comprehensive in his analysis. The mimicry of intelligence does not equate to understanding. But to carry his example a bit further, we must realize that the tome containing the instructions for Chinese conversation was created by some intelligent entity. So while the Chinese speaker is not literally communicating with an understanding being, they are speaking to the author of the tome. This raises the idea that intelligence can take different forms in space and time, that ultimately any mechanical device or even simple set of instructions is necessarily imbued with the intellect of the creator.
Is my hand an extension of my arm, of my mind, of my will? If it is replaced with a perfect mechanical replica, is it no less useful? Is it no less my own? And what if my brain is replaced by a perfect reproduction, accurate down to every synapse?
Many consciousness researchers think of the brain as theoretically reproducible. While no one is even close to mapping the brain precisely, let alone constructing it, researchers argue that there is no magic in cortical intelligence. Take every piece of a brain and replace it with a functionally identical piece and that brain will exhibit all the same aspects of consciousness, intelligence, and sentience that it did with its evolutionarily designed components. This speaks to the nature of human intelligence, and not computers, but there is a philosophical symmetry between contemplating the mind as functionally mechanistic and pondering computers imbued with abstract intelligence. If a brain is merely the sum of its parts then it is difficult to say that computers are so different from us.
Consciousness is the vessel by which we traverse the oceans of perception. Our consciousness defines our lives; it is the context of every event, experience, love, hate, joy, sorrow, and creation. But what is it, fundamentally? Can our internal worlds be explained by the sum of our parts? Can there be a true accounting for behavior? Can there be a comprehensive algorithmic understanding of mental circuitry? And what is to be made of the decidedly unconscious computer? Is its computational proclivity, massively exceeding our own, to be ignored on the measuring stick for sentience? For now, perhaps. We gave birth to a class of entities with unique properties and will spend the entirety of our lives trying to understand, educate, and evolve our heartless little children.
We remain an odd couple, humans and computers, but there is nothing wrong with a little cogito-mechanical bio-diversity. It is the differences that make the whole strong. One picks up where the other leaves off. Combining the breadth of human cognition with the depth of computing prowess will magnify the power and wisdom of intelligent entities to shape the world, the galaxy, the universe. And as we shape the cosmos, it shapes us, and we grow up.