Two and a half million years ago, the Homo genus of primates emerged by virtue of a rapidly growing brain, among the largest relative to body size of any mammal. The human brain has become the defining feature of our species, and recent advances in brain research have inspired neuroscientists and programmers alike to turn knowledge of this mysterious and complex organ into biomimetic (‘life-imitating’) technologies.
One such technology was introduced in 2012 by scientists at the University of Waterloo. Spaun, short for Semantic Pointer Architecture Unified Network, is the largest computer simulation of a functioning brain to date. It is the brainchild of Chris Eliasmith, a professor of philosophy and systems design engineering at Waterloo, who developed the system as a proof-of-principle supplement to his recent book, How to Build a Brain.
The model is composed of 2.5 million simulated neurons and four different neurotransmitters, which allow it to ‘think’ using the same kinds of neural connections as the mammalian brain. Instead of code, Spaun receives visual input in the form of numbers and symbols, to which it responds by performing simple tasks with a simulated robotic arm. The tasks resemble basic IQ-test questions, including pattern recognition and redrawing visual input from memory.
“There are no connections in the model that aren’t in the brain,” explains Eliasmith. “Models like Spaun are not expressed using standard computational structures. In order to run on today’s computers, we have to translate the model into code; but it is more natural and efficient to run on specialized hardware that is structured more like a brain.”
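To make the idea concrete, here is a minimal sketch of the kind of spiking circuit Spaun is assembled from, written in Nengo, the Python simulation package developed by Eliasmith’s group. The population sizes and the function computed below are illustrative choices, not details taken from Spaun itself.

```python
import numpy as np
import nengo

with nengo.Network(label="toy Spaun-style circuit") as model:
    # A time-varying stimulus standing in for Spaun's visual input.
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    # Populations of simulated spiking neurons that represent values.
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    # Connections between populations compute functions of what they represent.
    nengo.Connection(a, b, function=lambda x: x ** 2)
    # Record b's decoded output, filtered like a postsynaptic current.
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)  # simulate one second of neural activity
print(sim.data[probe][-3:])  # decoded estimate of the squared input
```

Spaun chains millions of such populations into perception, memory, and motor systems; the principle, though, is the same as in this toy circuit.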
Models like Spaun differ from other forms of artificial intelligence (AI) in that they are committed to solving problems in the same way humans do. Cognitive computing systems sit at the other end of the spectrum: they are capable of ‘machine learning,’ which allows them to analyze and recall patterns and trends from large amounts of data. These systems are undoubtedly clever, but their problem-solving strategies bear little resemblance to humans’.
For instance, in 2006 IBM’s research team started developing a new cognitive computer, a namesake of IBM’s former CEO, Thomas J. Watson, which became the first of its kind to replicate the language and analytical ability of humans. Watson made headlines around the world after it beat long-time Jeopardy! champions Ken Jennings and Brad Rutter on the popular game show in 2011. But the servers storing Watson’s data filled an entire room above the set, and the system ran over 100 algorithms per clue to find the most probable answer.
It took the IBM team two years to shrink Watson from its original 16-terabyte, room-filling state to the size of a pizza box, all while increasing its processing speed by 240 per cent. Even so, it remains a far cry from the efficiency of a human brain, which consumes less power than a light bulb and weighs an average of three pounds.
The advent of neuromorphic devices has brought us closer to this ideal. IBM’s SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) project has recently developed prototypes of a silicon neurosynaptic chip that mimics the brain’s natural processing capacity using artificial ‘neurons’ connected to one another. The building block for programming these chips is the ‘corelet,’ which represents a network of synapses dedicated to a specific function; corelets can be combined and programmed into more complex applications. The project’s ultimate goal is a human-scale cognitive computing system, on the order of 10 billion neurons and 100 trillion synapses, that would occupy the same volume as a brain.
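IBM’s actual Corelet language is proprietary, but the composition idea can be sketched in a few lines of Python. The class and wiring below are hypothetical stand-ins, not IBM’s API; the per-core figures of 256 neurons and 65,536 synapses do match the published SyNAPSE core design.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the 'corelet' idea: a small fixed network of
# neurons and synapses encapsulating one function, composable into larger
# applications. An illustration, not IBM's actual Corelet language.
@dataclass
class Corelet:
    function: str
    neurons: int = 256        # one SyNAPSE core: 256 neurons...
    synapses: int = 65_536    # ...and 256 x 256 programmable synapses
    parts: List["Corelet"] = field(default_factory=list)

def compose(name: str, parts: List[Corelet]) -> Corelet:
    """Wire corelets together, pooling their neurons and synapses."""
    return Corelet(
        function=name,
        neurons=sum(p.neurons for p in parts),
        synapses=sum(p.synapses for p in parts),
        parts=parts,
    )

# Two primitive corelets combined into a toy vision front end.
edges = Corelet("edge-detect")
motion = Corelet("motion-estimate")
vision = compose("vision-frontend", [edges, motion])
print(vision.neurons, vision.synapses)  # 512 131072
```

Scaling this composition up by seven orders of magnitude is, in essence, the project’s path to its 10-billion-neuron goal.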
These brain-like technologies would not be possible without the recent surge in discoveries about brain function at the genetic, molecular, and behavioural levels. In part, these discoveries can be attributed to advances in genetics, stem cell biology, and imaging.
Several innovative tools have made an impact on research into brain circuitry: genetically engineered viruses that trace the pathways of infected neurons to map their connections in brain tissue; the Brainbow technique, a genetic method for fluorescently labelling individual neurons developed at Harvard Medical School in 2007; and the emergent field of optogenetics, which is used to artificially stimulate nerve cell activity. Even so, it can take up to twenty years for new research to make its way into technology.
The creation of brain-machine interfaces is an excellent example of this translation. Neuroengineering has already made such interfaces a reality, as with cochlear implants and deep brain stimulation for Parkinson’s disease, both built on research into auditory function and disease pathology. One laboratory at the Montreal Neurological Institute is laying the scientific groundwork for visual brain-machine interfaces to treat blindness. Christopher Pack, a professor of visual neurophysiology at McGill, studies the function of the brain’s visual cortical circuits at a mathematical level. Pack envisions a small camera connected directly to the visual cortex as a solution to retinal degeneration.
“There is still a long way to go in terms of developing the surgical approach, the technology that interfaces with the brain, and the algorithms that permit the camera to communicate with the brain,” Pack told The Daily. “Previous work has succeeded in allowing blind subjects to detect spots of light and perhaps crude shapes, but the longer term goal would be to restore the perception of detailed vision – things like faces, letters, motion, et cetera.”
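The mathematical level Pack refers to can be made concrete with a textbook example from visual neurophysiology, not specific to his lab: the Gabor function, the standard model of a primary visual cortex simple cell’s spatial tuning, and the kind of building block a camera-to-cortex algorithm would have to respect. The parameters below are illustrative.

```python
import numpy as np

def gabor(size=21, wavelength=6.0, theta=0.0, sigma=3.0, phase=0.0):
    """Gabor receptive field: a sinusoidal grating under a Gaussian
    envelope, the standard model of a V1 simple cell's spatial tuning."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # axis along preferred orientation
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# A model neuron's response is the dot product of its receptive field with
# an image patch: stimuli at the preferred orientation drive it strongly,
# orthogonal ones barely at all.
rf = gabor(theta=0.0)
preferred = gabor(theta=0.0)          # grating matching the field
orthogonal = gabor(theta=np.pi / 2)   # grating rotated 90 degrees
print(np.sum(rf * preferred))         # large response
print(np.sum(rf * orthogonal))        # near zero
```

Restoring detailed vision would mean driving millions of such cells, each with its own orientation and position preferences, in a coordinated way.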
Pack’s research is still in the early stages of translation into the tech sphere, but it is a clear indicator of where neuroscience is headed. We are hurtling toward the time when computers may surpass their creators in intelligence, a point the great mathematician John von Neumann described as a ‘singularity.’ Shortly after von Neumann’s death, an unfinished manuscript of his entitled “The Computer and the Brain” was published, outlining his thoughts on the computer as a brain-like processor. Even in the 1950s, the parallels were evident – and the conclusion startling. With the rate of technological change accelerating the way it has in the last decade, it may well be time to start redefining what it means to be human.