Intelligence augmentation – or IA – has been defined as follows:
“IA involves supplementing our own brain’s abilities using a range of different technologies.”
Richard Yonck, Foresight Analyst.
IA technologies come in two sorts: mindhacking-based, in which neuroplastic cognitive processes are deliberately trained or modified, and biohacking-based, in which physiological and biochemical processes are deliberately trained or modified.
Why is there a mushrooming interest in IA? Three factors contribute to this:
1. Cognitive capital
It is known that the practical advantage of having a high IQ increases as our work environments become more fluid and complex – that is, novel, ambiguous, changing, rapid, unpredictable, or multifaceted. A high IQ is of high value in any strategic capacity requiring problem solving, decision-making and self-control in the midst of complexity. In rapidly changing work environments IQ is increasingly viewed as valuable ‘cognitive capital’, and there are market pressures to accumulate this capital, much as there are market forces to accumulate social capital and financial capital.
2. Self-optimization & attaining personal potential
The ‘quantification of the self’ – in which cognitive and biological capacities such as IQ and sleep are measured and improved through feedback loops – is part of a growing movement to optimize personal performance and potential, both physically and cognitively. An interesting review of this movement can be found at this link.
3. Brain health and aging
The demographics of the world are changing fast, with greatly increasing proportions of older people resulting from increasing modernization and economic development. Aging is associated with some forms of cognitive decline, as can be seen in this graph of how IQ sub-factors decline with age. Fluid intelligence (Gf – problem solving ability) and visuo-spatial intelligence (Gv) are highlighted. The most dramatic decline is in processing speed (Gs – top-most curve).
In addition, with an aging population an increasing proportion of the population suffers from dementia. The most common type of dementia is Alzheimer’s disease: in 2006 there were 26.6 million sufferers worldwide, and Alzheimer’s is predicted to affect 1 in 85 people globally by 2050.
Thus, due to the demographics of an aging population, tackling different forms of cognitive decline to ensure brain health has become a major cultural priority.
Mindhacks and Biohacks
There are a number of mindhacking and biohacking fields that stand at the threshold of radical augmentation of human intelligence, which I will now review. The extent to which augmented human intelligence becomes increasingly precious cognitive capital is controversial, potentially yielding both benefits and abuses. Regardless of our feelings about it, we would be wise to anticipate the kind of future IA is likely to be a part of.
Computer hacking can be defined as:
“to be able to indirectly gain access to, and make personal use of, computer systems through skills, tactics and detailed knowledge.”
Analogously, mindhacking – or mindhacks – can be defined as:
“To be able to indirectly gain access to, and make personal use of, cognitive, motivational or emotional systems through skills, tactics and detailed knowledge.”
Mindhacking techniques have been supported by far-reaching advances in behavioral & cognitive psychology and cognitive neuroscience over the past decade. More than ever before, behavior can be conditioned, information processing mechanisms can be manipulated, and cognitive processes can be trained and modified indirectly.
Recent gains in cognitive neuroscientific understanding of the link between prefrontal working memory circuitry, problem solving ability and IQ have resulted in the development of scientifically based brain training software. This type of software selectively targets prefrontal neural networks, resulting in long-term neuroplastic changes that increase short-term memory capacity, problem solving ability, self-control and overall IQ. The scientific basis for these powerful mindhacking methods is reviewed here. Advances by psychometricians working on the factors underlying performance on IQ and aptitude tests have also enabled targeted training of the five factors of IQ, incorporated in the i3 Mindware® software.
Brain-computer interfaces, or BCIs, are another type of IA being explored. A BCI gives a user the ability to control a computer or other device using only their thoughts. BCIs may also help with the treatment of brain disorders due to stroke, epilepsy or Alzheimer’s disease, and they are beginning to be used to enhance cognitive functioning and intelligence. University of Pittsburgh scientists recently gave monkeys the ability to control a robotic arm to pick up food and bring it to their mouths just by thinking about it. Hair-like electrodes in a computer chip implanted in the monkeys’ brains picked up nerve signals; wires then carried the signals through the skull, and computer software converted the brain signals into the robotic arm’s movements. The world’s first human testing of a mind-controlled artificial limb has also now begun. A joint project between the Pentagon and the Johns Hopkins Applied Physics Laboratory (APL), the Modular Prosthetic Limb is fully controlled by sensors implanted in the brain, and can even restore the sense of touch by sending electrical impulses from the limb back to the sensory cortex. Together with the BCIs that have been developed for the operation of computer interfaces and wheelchairs, these technologies offer hope of a more interactive and embodied life to those with spinal cord damage.
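To make the decoding step concrete, here is a minimal sketch of how recorded firing rates might be mapped to arm velocities with a simple linear decoder. This is an illustration only – not the Pittsburgh or APL teams’ actual pipeline – and the simulated data, array shapes and least-squares fit are all assumptions.

```python
import numpy as np

# Illustrative only: simulate 50 neurons' firing rates over 1000 time steps,
# paired with the 3-D hand velocities recorded during a calibration session.
rng = np.random.default_rng(0)
true_tuning = rng.normal(size=(50, 3))            # hidden tuning of each neuron
velocity = rng.normal(size=(1000, 3))             # known arm velocities (training data)
rates = velocity @ true_tuning.T + 0.1 * rng.normal(size=(1000, 50))

# Fit a linear decoder: find W minimising ||rates @ W - velocity||^2
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# At run time, a new window of firing rates is decoded into a movement command
new_rates = rng.normal(size=(1, 50))
arm_command = new_rates @ W                       # 3-D velocity sent to the robotic arm
print(arm_command)
```

Real BCIs use far more sophisticated, adaptive decoders, but the core idea – learning a mapping from neural activity to movement during a calibration session – is the same.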
Memory systems are also now benefiting from brain-computer enhancement. Scientists have successfully developed an artificial hippocampus in the rat. The hippocampus is a brain structure which transfers short-term memories into stable, long-term memories. In a breakthrough study in June 2011, rats’ memories for how to access food with a lever were pharmacologically blocked so they lost their long-term memory for this action. Long-term memory capability returned to the rats when the research team activated the prosthetic hippocampus, programmed to duplicate the memory-encoding function for that specific lever-pulling memory. The researchers went on to show that if a prosthetic device with its carefully wired-up electrodes were implanted in animals with a normal, functioning hippocampus, the device could actually strengthen the memory trace being generated internally in the brain and improve the memory capability of normal rats.
This research all points to a day when our ability to plug into computer devices to enhance cognitive functioning could become an everyday reality.
Pharmacological methods include nootropics. Nootropics – also known as smart drugs, memory enhancers, cognitive enhancers and intelligence enhancers – are drugs, supplements and nutraceuticals (products isolated or purified from foods) that are designed to improve mental functions such as memory, motivation, attention and intelligence. Among these are Ampakines, tested by DARPA – an agency of the United States Department of Defense responsible for the development of new technology for use by the military – in an effort to improve the attention span and alertness of soldiers in the field, as well as facilitate their learning and memory. This focus complements broader strategic goals, in which ‘information operations’ (IO) is now a core competency, along with air, ground, maritime and special operations. IO is defined by the Department of Defense as:
“The ability to control the information environment, including interrelated physical, informational, and cognitive dimensions, is now seen as vital to national security. And it is the cognitive dimension, in which “people think, perceive, visualize, and decide,” that is seen as most important.”
A substantial proportion of intelligence is considered to be heritable – estimates vary from 50-70% – and China has begun sequencing the genomes of 1,000 Chinese adults with IQs of 145 or higher. Therapeutic strategies to promote neuroplasticity and improve learning ability are also being explored by biochemical and genetic approaches. To quote from Richard Yonck:
A 2010 European Neuroscience Institute study found memory and learning in elderly mice restored to youthful levels when a cluster of genes was activated using a single enzyme. Several stem cell research studies offer hope not only for degenerative mental pathologies but also for restoring our ability to learn rapidly. In another study, mice exposed to the natural soil bacterium, Mycobacterium vaccae, found their learning rate and retention significantly improved, possibly the result of an autoimmune response. All of these suggest we’ve only begun to scratch the surface when it comes to improving or augmenting intelligence.
In summary, our brief review makes clear that intelligence augmentation (IA) through a diversity of mindhacking and biohacking methodologies is now firmly on ‘the agenda’ of our Zeitgeist as we move into the 21st century.
Our thoughts and inner mental life are encoded not in the electrical firing of individual brain cells or neurons, but in patterns of electrical activity in networks (or circuits) of neurons. There are many different networks in the brain – each serving a different function – and many different possible patterns of activation within these networks.
Life can be understood as a series of problems – better called ‘challenges’ – that need to be solved. …How to fix w, how to save for x, how to negotiate y, how to meet z. The more intelligent we are, the better we are able to tackle these problems efficiently and effectively. The AI researcher Kurzweil defined intelligence as:
the ability to optimally use limited resources – including time – to achieve goals.
Intelligence involves applying rules, heuristics (rules of thumb) or strategies to solve problems, and exploring a ‘problem space’ with these rules. For example, finding a wallet may involve applying the rule ‘search one room at a time’.
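As a toy illustration of applying a rule to explore a problem space – my own sketch, not drawn from any cited study – the ‘search one room at a time’ rule might look like this in code, with an assumed ‘most recently visited first’ heuristic ordering the search:

```python
# Toy illustration: the 'search one room at a time' rule for finding a wallet.
rooms = {
    "hallway": ["keys"],
    "kitchen": ["mug", "wallet"],
    "bedroom": ["book"],
}

# Heuristic (assumed): check the most recently visited rooms first.
recently_visited = ["kitchen", "hallway", "bedroom"]

def find_item(item, search_order):
    """Apply the rule: search one room at a time, in heuristic order."""
    for room in search_order:
        if item in rooms[room]:
            return room
    return None

print(find_item("wallet", recently_visited))  # -> 'kitchen'
```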
Insights through trial and error
But new insights and new types of solutions usually also need lots of room for trial and error – for experimentation. Here is an example.
My dad and I were trying to move a very heavy bag of cement down a steep hill and then over a shoulder-height wall onto a terrace. We knew we’d make the task much easier by a) working together, and b) using a wheelbarrow. We shared the load of the bag to get it into the wheelbarrow, and got to the bottom of the hill in no time. But then we were confronted with the problem of how to get it over the wall onto the terrace. We tried lifting it together but it was too heavy. Dad got up onto the terrace, and we tried pushing and pulling. We were experimenting by trial and error. But nothing was working. Then I had an ‘ah ha’ moment – a flash of insight. Why not break open the paper covering of the cement, and just scoop it up in handfuls?! That would work. Problem solved. In this example, trial and error gave way to a strategy – a new rule – that worked just fine.
Our daily lives are filled with these kinds of experiences, in which we abandon old behaviours that are no longer efficient or effective and develop new, more appropriate ones. This ability lies at the heart of our general intelligence. A recent study, published on May 13th in the journal Neuron, has investigated exactly what goes on in neural circuits during these ‘insight’ moments that follow periods of trial and error, in which a better strategy arises to deal with a problem.
Insights from recent research
“The ability of animals and humans to infer and apply new rules …relies critically on the frontal lobes,” explains Dr. Jeremy K. Seamans from the Brain Research Centre at the University of British Columbia (UBC) and Vancouver Coastal Health Research Institute.
In our study, we examined how groups of frontal cortex neurons in rat brains switch from encoding a familiar rule to a completely novel rule that could only be deduced through trial and error.
Rats are intelligent mammals. Like humans, they appear to figure out new rules through trial and error when trying to solve problems. Dr Seamans and his colleagues looked at networks of neurons in the rat frontal cortex as the rats worked on a food-rewarded task that needed a new, ‘insightful’ solution. While it took many attempts of trial and error to figure out the new rule, the researchers found that the network of brain cells did not change its electrical pattern gradually, but showed an abrupt transition to a new pattern corresponding to the new, effective behaviour – as if the network had experienced an ‘ah ha’ moment.
In the present problem solving context where the animal had to infer a new rule by accumulating evidence through trial and error, such sudden neural and behavioral transitions may correspond to moments of ‘sudden insight’.
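A minimal sketch of what such an abrupt transition looks like in data: simulate an ensemble whose mean firing pattern switches suddenly at one trial, then locate the switch by finding the split that best separates the two patterns. This is illustrative only; it is not the statistical method used in the Neuron study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 200 trials of a 20-neuron ensemble whose activity pattern
# switches abruptly at trial 120 (the hypothetical 'ah ha' moment).
pattern_a = rng.normal(size=20)
pattern_b = rng.normal(size=20)
trials = np.vstack([pattern_a + 0.3 * rng.normal(size=(120, 20)),
                    pattern_b + 0.3 * rng.normal(size=(80, 20))])

# Locate the change-point: the split that maximises the distance between
# the mean ensemble patterns before and after it.
def split_score(t):
    return np.linalg.norm(trials[:t].mean(axis=0) - trials[t:].mean(axis=0))

change_point = max(range(10, 190), key=split_score)
print(change_point)  # ~120: the abrupt transition to the new network state
```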
Problem solving, sleep and brain plasticity
So insights, experienced as ‘breakthroughs’, have a real basis in the brain that involves a rapid reorganisation of electrical patterns into new, more adaptive configurations. This is a type of very rapid brain plasticity, and it is an essential aspect of applying our intelligence to daily challenges in life.
Interestingly, other research indicates that sleeping on a problem can help with just this kind of brain plasticity. Dream (REM) sleep has been shown to help attain solutions to new problems by stimulating associative networks, allowing the brain to make new and useful associations between unrelated ideas (link).
Daniel Durstewitz, Nicole M. Vittoz, Stan B. Floresco, Jeremy K. Seamans. Abrupt Transitions between Prefrontal Neural Ensemble States Accompany Behavioral Transitions during Rule Learning. Neuron, 2010; 66 (3): 438-448 DOI: 10.1016/j.neuron.2010.03.029
The human cerebral cortex – the more recently evolved portion of the brain that is used for thinking and our conscious experience – is a complex network of functionally specialized regions interconnected by neuron fibers. As Dr. Alex Fornito from the Melbourne Neuropsychiatry Centre at the University of Melbourne puts it:
The brain is an extraordinarily complex network of billions of nerve cells interconnected by trillions of fibres.
How is the brain organized?
The organizational principles for the brain’s neural networks remain largely unknown. However, it’s becoming clear that the brain is organized like many other networks in nature:
Complex networks, from ecosystems to metabolic pathways, occur in diverse fields of biological science. Nervous systems are complex networks at multiple scales of time and space. It has been shown that …brain networks have characteristically small-world properties of dense or clustered local connectivity with relatively few long-range connections. (Reference)
What is called ‘small-world’ topology (spatial arrangement) is an attractive model for brain network organization because it supports:
- Segregated and parallel information processing
- Resilience against pathological attack
- Low wiring costs for network communication
Small world networks solve the economic problem of maximising information processing or communication efficiency while minimising costs, i.e. the amount of ‘wiring’ (and metabolism) needed to make the connections useful for communication purposes. The better networks solve this trade-off problem, the more ‘cost-efficient’ they are.
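To see these properties in numbers, a common exercise – sketched here with the networkx library, and not taken from any of the studies cited – is to compare a Watts-Strogatz small-world graph with a random graph of the same size and edge count (i.e. the same wiring ‘cost’): clustering stays high while communication efficiency remains close to the random graph’s.

```python
import networkx as nx

n, k, p = 100, 6, 0.1  # nodes, neighbours per node, rewiring probability

small_world = nx.connected_watts_strogatz_graph(n, k, p, seed=0)
# Random graph with the same number of nodes and edges: equal wiring 'cost'.
random_net = nx.gnm_random_graph(n, small_world.number_of_edges(), seed=0)

for name, g in [("small-world", small_world), ("random", random_net)]:
    clustering = nx.average_clustering(g)   # dense, clustered local connectivity
    efficiency = nx.global_efficiency(g)    # ease of long-range communication
    print(f"{name}: clustering={clustering:.2f}, efficiency={efficiency:.2f}")
```

For the same edge budget, the small-world graph keeps clustering well above the random graph’s while sacrificing very little global efficiency – the cost-efficiency trade-off described above.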
The brain’s network is small-world optimized
Small-world, cost-efficient brain networks have been naturally selected over the course of evolution. Cost-efficiency is one way of defining ‘brain fitness’.
The brain tries to maximize its bang-for-buck by striking a balance between making more connections to promote efficient communication and minimising the “cost” or amount of wiring required to make these connections. Dr. Alex Fornito
Here is a diagram showing the small-world architecture of the human brain:
Recent study demonstrates that brain network cost-efficiency is largely genetically based
In a study led by Dr Fornito, published recently in The Journal of Neuroscience, a cost-efficiency index of the overall brain network was measured using fMRI brain imaging in both identical (monozygotic) twins (16 pairs) and non-identical (dizygotic) twins (13 pairs). Differences in cost-efficiency between individuals were dramatic, and the study found that genetics accounted for a full 60% of this variation in the brain’s overall efficiency.
“Our findings indicate that a brain’s ‘cost-efficiency’ has a strong genetic basis. …Ultimately, this research may help us uncover which specific genes are important in explaining differences in cognitive abilities.” Dr Alex Fornito
Some of the strongest genetic effects were observed for regions of the prefrontal cortex, which play a vital role in different aspects of intelligence: problem solving, planning, decision-making and memory.
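The twin logic behind the 60% figure can be illustrated with Falconer’s classic formula – a simplification of the structural-equation modelling such studies actually use – in which heritability is roughly twice the difference between the identical-twin and non-identical-twin correlations. The correlation values below are made up for illustration, not taken from the study.

```python
# Falconer's formula: h^2 ~= 2 * (r_MZ - r_DZ)
# Illustrative correlations only, not the values reported by Fornito et al.
r_mz = 0.70  # similarity of network cost-efficiency in identical twin pairs
r_dz = 0.40  # similarity in non-identical twin pairs

heritability = 2 * (r_mz - r_dz)
shared_environment = r_mz - heritability   # c^2 = r_MZ - h^2
print(f"h^2 = {heritability:.2f}, c^2 = {shared_environment:.2f}")
# h^2 = 0.60: 60% of the variation attributable to genes, as in the study
```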
Link with IQ
Previous work has shown that people with more efficient brain connections score higher on tests of intelligence (more on this later), and that brain network cost-efficiency is reduced in the elderly as well as people with schizophrenia, particularly in the prefrontal cortex.
Thus optimization of network ‘cost-efficiency’ represents an important principle for the brain’s functional organization, and this organization – to a large extent – may account for differences in intelligence.
Implications for IQ Mindware working memory training?
It is known that working memory training (such as the dual n-back training of IQ Mindware brain training software) results in prefrontal changes in synaptic connections. An interesting question is whether this kind of brain training may result in increased ‘cost-efficiency’ in critical brain networks.
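For readers unfamiliar with the task, here is a minimal sketch of the stimulus and scoring logic of a dual n-back sequence – my own illustration, not IQ Mindware’s actual implementation: the trainee must respond whenever the current grid position or letter matches the one from n steps back.

```python
import random

def dual_n_back_trial(n=2, length=20, seed=0):
    """Generate a dual n-back stream and the correct responses.

    Each step pairs a spatial position (3x3 grid) with a spoken letter;
    a 'match' response is correct when the position or letter repeats
    the stimulus from n steps earlier.
    """
    rng = random.Random(seed)
    positions = [rng.randrange(9) for _ in range(length)]
    letters = [rng.choice("CHKLQRST") for _ in range(length)]
    answers = []
    for i in range(length):
        pos_match = i >= n and positions[i] == positions[i - n]
        letter_match = i >= n and letters[i] == letters[i - n]
        answers.append((pos_match, letter_match))
    return positions, letters, answers

positions, letters, answers = dual_n_back_trial()
print(sum(p or l for p, l in answers), "match events in the stream")
```

Training software raises n as performance improves, which is what keeps the load on prefrontal working memory circuits near its limit.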