Information

Does a human-to-human interface carry the same information from the brain?

I'd like to know whether the human-to-human interface carries the same information from the brain, or whether it is just a signal that switches electrical impulses on or off to make the limb contract.

Is it even possible to take someone else's signal coming from the brain and redirect it to somebody else, so that the other person can use it for the same result? Or is it always going to be a digital on/off signal rather than analog?


The answer for the particular interface that you linked to is that a single scalar variable is being passed. The EMG activity in one person controls an electrical signal that is connected to a nerve in the second person, which, when activated, causes the muscle to contract. The EMG activity is thresholded, and then the binary information of whether or not it passes that threshold is passed to a "TENS unit", whose output can be controlled in an analog way.

While the actual information being passed in this demonstration is binary, there is no reason why it must be so. It is simply a result of the design decisions the creators made.
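
To make that concrete, here is a minimal sketch in Python of how such a stage could work: the sender's EMG is reduced to a windowed RMS envelope, compared against a threshold, and only the resulting on/off bit is forwarded to the stimulator. The sampling rate, window length and threshold value are assumptions made for illustration, not values from the actual demonstration.

```python
# Illustrative sketch only -- sampling rate, window size and threshold are assumed.
import numpy as np

SAMPLE_RATE_HZ = 1000    # assumed EMG sampling rate
WINDOW_SAMPLES = 100     # 100 ms analysis window
THRESHOLD = 0.2          # assumed activation threshold (arbitrary units)

def emg_envelope(window: np.ndarray) -> float:
    """Root-mean-square amplitude of one window of raw EMG samples."""
    return float(np.sqrt(np.mean(window ** 2)))

def stimulator_commands(raw_emg: np.ndarray) -> list[bool]:
    """One on/off command per window: True means 'trigger the TENS unit'.

    Only the binary threshold crossing is forwarded, so all of the analog
    detail in the sender's EMG collapses to a single bit per window.
    """
    commands = []
    for start in range(0, len(raw_emg) - WINDOW_SAMPLES + 1, WINDOW_SAMPLES):
        level = emg_envelope(raw_emg[start:start + WINDOW_SAMPLES])
        commands.append(level > THRESHOLD)
    return commands

# One second of synthetic EMG with a burst of activity in the middle.
rng = np.random.default_rng(0)
quiet = 0.02 * rng.standard_normal(400)
burst = 0.5 * rng.standard_normal(200)
print(stimulator_commands(np.concatenate([quiet, burst, quiet])))
# -> mostly False, with True for the windows covering the burst
```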


The roots of cognitive linguistics are in Noam Chomsky’s 1959 critical review of B. F. Skinner’s Verbal Behavior. Chomsky's rejection of behavioural psychology and his subsequent anti-behaviourist activity helped bring about a shift of focus from empiricism to mentalism in psychology under the new concepts of cognitive psychology and cognitive science. [4]

Chomsky considered linguistics a subfield of cognitive science in the 1970s but called his model transformational or generative grammar. Having been engaged with Chomsky in the linguistic wars, [5] George Lakoff united in the early 1980s with Ronald Langacker and other advocates of neo-Darwinian linguistics in a so-called "Lakoff–Langacker agreement". It is suggested that they picked the name "cognitive linguistics" for their new framework to undermine the reputation of generative grammar as a cognitive science. [6]

Consequently, there are three competing approaches that today consider themselves true representatives of cognitive linguistics. One is the Lakoffian–Langackerian brand with capitalised initials (Cognitive Linguistics). The second is generative grammar, while the third approach is proposed by scholars whose work falls outside the scope of the other two. They argue that cognitive linguistics should not be taken as the name of a specific selective framework, but as a whole field of scientific research that is assessed by its evidential rather than theoretical value. [3]

Generative Grammar

Generative grammar functions as a source of hypotheses about language computation in the mind and brain. It is argued to be the study of 'the cognitive neuroscience of language'. [7] Generative grammar studies behavioural instincts and the biological nature of cognitive-linguistic algorithms, providing a computational–representational theory of mind. [8]

This in practice means that sentence analysis by linguists is taken as a way to uncover cognitive structures. It is argued that a random genetic mutation in humans has caused syntactic structures to appear in the mind. On this view, therefore, the fact that people have language does not depend on its communicative purpose. [9] [10]

For a famous example, linguist Noam Chomsky argued that sentences of the type "Is the man who is hungry ordering dinner?" are so rare that it is unlikely children will have heard them. Since they can nonetheless produce them, it was further argued that the structure is not learned but stems from an innate cognitive language component. Generative grammarians then took it as their task to find out all about innate structures through introspection, in order to form a picture of the hypothesised language faculty. [11] [12]

Generative grammar promotes a modular view of the mind, considering language as an autonomous mind module. Thus, language is separated from mathematical logic to the extent that inference plays no role in language acquisition. [13] The generative conception of human cognition is also influential in cognitive psychology and computer science. [14]

Cognitive Linguistics (linguistics framework)

One of the approaches to cognitive linguistics is called Cognitive Linguistics, with capital initials, but it is also often spelled cognitive linguistics with all lowercase letters. [15] This movement saw its beginning in the early 1980s, when George Lakoff's metaphor theory was united with Ronald Langacker's Cognitive Grammar, with subsequent models of Construction Grammar following from various authors. The union entails two different approaches to linguistic and cultural evolution: that of the conceptual metaphor, and that of the construction.

Cognitive Linguistics defines itself in opposition to generative grammar, arguing that language functions in the brain according to general cognitive principles. [16] Lakoff's and Langacker's ideas are applied across sciences. In addition to linguistics and translation theory, Cognitive Linguistics is influential in literary studies, [17] education, [18] sociology, [19] musicology, [20] computer science [21] and theology. [22]

A. Conceptual metaphor theory

According to American linguist George Lakoff, metaphors are not just figures of speech, but modes of thought. Lakoff hypothesises that principles of abstract reasoning may have evolved from visual thinking and mechanisms for representing spatial relations that are present in lower animals. [23] Conceptualisation is regarded as being based on the embodiment of knowledge, building on physical experience of vision and motion. For example, the 'metaphor' of emotion builds on downward motion while the metaphor of reason builds on upward motion, as in saying "The discussion fell to the emotional level, but I raised it back up to the rational plane." [24] It is argued that language is not a cognitive capacity, but instead relies on other cognitive skills which include perception, attention, motor skills, and visual and spatial processing. [16] The same is said of other cognitive phenomena, such as the sense of time:

"In our visual systems, we have detectors for motion and detectors for objects/locations. We do not have detectors for time (whatever that could mean). Thus, it makes good biological sense that time should be understood in terms of things and motion." —George Lakoff

In Cognitive Linguistics, thinking is argued to be mainly automatic and unconscious. [25] Like in neuro-linguistic programming, language is approached via the senses. [26] [27] [28] Cognitive linguists study the embodiment of knowledge by seeking expressions which relate to modal schemas. [29] For example, in the expression "It is quarter to eleven", the preposition to represents a modal schema which is manifested in language as a visual or sensorimotoric 'metaphor'.

B. Cognitive and construction grammar

Constructions, as the basic units of grammar, are conventionalised form–meaning pairings which are comparable to memes as units of linguistic evolution. [30] [31] [32] [33] These are considered multi-layered. For example, idioms are higher-level constructions which contain words as middle-level constructions, and these may contain morphemes as lower-level constructions. It is argued that humans not only share the same body type, which allows a common ground for embodied representations, but that constructions also provide a common ground for uniform expressions within a speech community. [34] Like biological organisms, constructions have life cycles which are studied by linguists. [30]

According to the cognitive and constructionist view, there is no grammar in the traditional sense of the word. What is commonly perceived as grammar is an inventory of constructions: a complex adaptive system [35] or a population of constructions. [36] Constructions are studied in all fields of language research, from language acquisition to corpus linguistics. [35]

Integrative cognitive linguistics

There is also a third approach to cognitive linguistics which neither directly supports the modular (Generative Grammar) nor the anti-modular (Cognitive Linguistics) view of the mind. Proponents of the third view argue that, according to brain research, language processing is specialised although not autonomous from other types of information processing. Language is thought of as one of the human cognitive abilities, along with perception, attention, memory, motor skills, and visual and spatial processing, rather than being subordinate to them. Emphasis is laid on a cognitive semantics that studies the contextual–conceptual nature of meaning. [37]

Cognitive Perspective on Natural Language Processing

Cognitive Linguistics offers a scientific, first-principles direction for quantifying states of mind through Natural language processing. [38] As mentioned earlier, Cognitive Linguistics approaches grammar with a nontraditional view. Traditionally, grammar has been defined as a set of structural rules governing the composition of clauses, phrases and words in a natural language. From the perspective of Cognitive Linguistics, grammar is seen as the rules of arrangement of language which best serve communication of the experience of the human organism through its cognitive skills, which include perception, attention, motor skills, and visual and spatial processing. [16] Such rules are derived from observing conventionalized pairings of form and meaning in order to understand sub-context in the evolution of language patterns. [30] The cognitive approach of identifying sub-context by observing what comes before and after each linguistic construct provides a grounding of meaning in terms of sensorimotoric embodied experience. [27]

When taken together, these two perspectives form the basis of defining approaches in Computational linguistics with strategies to work through the Symbol grounding problem, which posits that, for a computer, a word is merely a symbol, which is a symbol for another symbol and so on in an unending chain without grounding in human experience. [39] The broad set of tools and methods of Computational linguistics is available as Natural language processing, or NLP. Cognitive Linguistics adds a new set of capabilities to NLP: cognitive NLP methods enable software to analyze sub-context in terms of internal embodied experience. [27]
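
As a toy illustration of the symbol grounding problem just described, the sketch below walks a chain of dictionary-style definitions: every word is defined only by other words, so the chain never bottoms out in embodied experience. The miniature dictionary is invented purely for this example.

```python
# Toy illustration of the symbol grounding problem: every entry is defined
# only in terms of other symbols (the mini-dictionary is invented for this sketch).
TOY_DICTIONARY = {
    "quarter": ["fourth", "part"],
    "fourth": ["quarter", "ordinal"],
    "part": ["portion", "piece"],
    "portion": ["part", "share"],
}

def follow_definitions(word: str, steps: int = 5) -> list[str]:
    """Follow the first definition of each word; the chain of symbols never
    reaches anything grounded in sensory or motor experience."""
    chain = [word]
    for _ in range(steps):
        word = TOY_DICTIONARY.get(word, [word])[0]
        chain.append(word)
    return chain

print(follow_definitions("quarter"))
# -> ['quarter', 'fourth', 'quarter', 'fourth', 'quarter', 'fourth']
```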

Methods

The goal of Natural language processing (NLP) is to enable a computer to "understand" the contents of text and documents, including the contextual nuances of the language within them. The perspective of traditional Chomskyan linguistics offers NLP three approaches or methods to identify and quantify the literal contents, the who, what, where and when in text – in linguistic terms, the semantic meaning or Semantics of the text. The perspective of Cognitive Linguistics offers NLP a direction to identify and quantify the contextual nuances, the why and how in text – in linguistic terms, the implied pragmatic meaning or Pragmatics of the text.

The three NLP approaches to understanding literal semantics in text based on traditional linguistics are Symbolic NLP, Statistical NLP, and Neural NLP. The first method, Symbolic NLP (1950s to early 1990s), is based on first principles and rules of traditional linguistics. The second method, Statistical NLP (1990s to 2010s), builds upon the first method with a layer of human-curated and machine-assisted corpora for multiple contexts. The third approach, Neural NLP (2010 onwards), builds upon the earlier methods by leveraging advances in deep neural network methods to automate the tabulation of corpora and parse models for multiple contexts in shorter periods of time. [40] [41] All three methods are used to power NLP techniques like stemming and lemmatisation in order to obtain statistically relevant listings of the who, what, where and when in text through Named-entity recognition and Topic model programs. The same methods have been applied with NLP techniques like the Bag-of-words model to obtain statistical measures of emotional context through Sentiment analysis programs. The accuracy of a sentiment analysis system is, in principle, how well it agrees with human judgments. [42] Because evaluation of sentiment analysis is becoming more and more specialty-based, each implementation needs a separate training model and specialized human verification, raising Inter-rater reliability issues. However, the accuracy is considered generally acceptable for use in evaluating emotional context at a statistical or group level. [43] [44]
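
As a toy version of the bag-of-words sentiment analysis mentioned above, the sketch below collapses a text into unordered word counts and sums per-word weights. The tiny lexicon and the scoring rule are invented for illustration; real sentiment analysis programs rely on much larger curated lexicons or trained statistical and neural models.

```python
# Toy bag-of-words sentiment scorer (lexicon and weights invented for this sketch).
from collections import Counter

SENTIMENT_LEXICON = {
    "good": 1.0, "great": 2.0, "progress": 1.5,
    "bad": -1.0, "terrible": -2.0, "setback": -1.5,
}

def bag_of_words(text: str) -> Counter:
    """Collapse a text into unordered word counts -- word order is discarded."""
    return Counter(text.lower().split())

def sentiment_score(text: str) -> float:
    """Sum lexicon weights over the bag of words; the sign gives the polarity."""
    counts = bag_of_words(text)
    return sum(weight * counts[word] for word, weight in SENTIMENT_LEXICON.items())

print(sentiment_score("great progress on the project"))     # 3.5 -> positive
print(sentiment_score("a terrible setback and a bad day"))  # -4.5 -> negative
```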

Cognitive NLP is a developmental trajectory of NLP aimed at understanding contextual pragmatics in text by emulating intelligent behavior and apparent comprehension of natural language. This method is a rules-based approach which involves assigning meaning to a word, phrase, sentence or piece of text based on the information presented before and after the piece of text being analyzed.
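
A minimal sketch of that "look before and after" idea is shown below: the sense of an ambiguous word is chosen by scoring cue words that appear within a window around it. The cue sets, window size and example sentence are assumptions made for this sketch, not part of any published cognitive NLP system.

```python
# Minimal context-window sketch (cue words and window size are assumptions).
SENSE_CUES = {
    "bank": {
        "financial institution": {"money", "loan", "deposit", "account"},
        "river edge": {"river", "water", "fishing", "shore"},
    }
}

def assign_sense(tokens: list[str], index: int, window: int = 3) -> str:
    """Pick a sense for tokens[index] from cue words seen within `window`
    tokens before and after it."""
    target = tokens[index].lower()
    context = {t.lower() for t in tokens[max(0, index - window):index]
               + tokens[index + 1:index + 1 + window]}
    senses = SENSE_CUES.get(target, {})
    if not senses:
        return "unknown"
    best = max(senses, key=lambda sense: len(senses[sense] & context))
    return best if senses[best] & context else "unknown"

tokens = "she walked along the river to the bank to go fishing".split()
print(assign_sense(tokens, tokens.index("bank")))   # -> 'river edge'
```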

The specific meaning of cognitive linguistics, the proper address of the name, and the scientific status of the enterprise have been called into question. It is claimed that much of so-called cognitive linguistics fails to live up to its name. [6]

"It would seem to me that [cognitive linguistics] is the sort of linguistics that uses findings from cognitive psychology and neurobiology and the like to explore how the human brain produces and interprets language. In other words, cognitive linguistics is a cognitive science, whereas Cognitive Linguistics is not. Most of generative linguistics, to my mind, is not truly cognitive either." [2]

It is suggested that the aforementioned frameworks, which make use of the label 'cognitive', are pseudoscience because their views of the mind and brain defy basic modern understanding of neuroscience, and are instead based on scientifically unjustified guru teachings. Members of such frameworks are also said to have presented other researchers' findings as their own work. [3] While this criticism is accepted for the most part, it is claimed that some of the research has nonetheless produced useful insights. [45]


Praise Is Fleeting, but Brickbats We Recall

MY sisters and I have often marveled that the stories we tell over and over about our childhood tend to focus on what went wrong. We talk about the time my older sister got her finger crushed by a train door on a trip in Scandinavia. We recount the time we almost missed the plane to Israel because my younger sister lost her stuffed animal in the airport terminal.

Since, fortunately, we’ve had many more pleasant experiences than unhappy ones, I assumed that we were unusual in zeroing in on our negative experiences. But it turns out we’re typical.

“This is a general tendency for everyone,” said Clifford Nass, a professor of communication at Stanford University. “Some people do have a more positive outlook, but almost everyone remembers negative things more strongly and in more detail.”

There are physiological as well as psychological reasons for this.

“The brain handles positive and negative information in different hemispheres,” said Professor Nass, who co-authored “The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships” (Penguin 2010). Negative emotions generally involve more thinking, and the information is processed more thoroughly than positive ones, he said. Thus, we tend to ruminate more about unpleasant events — and use stronger words to describe them — than happy ones.

Roy F. Baumeister, a professor of social psychology at Florida State University, captured the idea in the title of a journal article he co-authored in 2001, “Bad Is Stronger Than Good,” which appeared in The Review of General Psychology. “Research over and over again shows this is a basic and wide-ranging principle of psychology,” he said. “It’s in human nature, and there are even signs of it in animals,” in experiments with rats.

As the article, which is a summary of much of the research on the subject, succinctly puts it: “Bad emotions, bad parents and bad feedback have more impact than good ones. Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones.”

So Professor Baumeister and his colleagues note, losing money, being abandoned by friends and receiving criticism will have a greater impact than winning money, making friends or receiving praise.

In an experiment in which participants gained or lost the same amount of money, for instance, the distress participants expressed over losing the money was greater than the joy that accompanied the gain.

“Put another way, you are more upset about losing $50 than you are happy about gaining $50,” the paper states.

In addition, bad events wear off more slowly than good ones.

And just to show that my family’s tendency to focus on the negative is not unusual, interviews with children and adults up to 50 years old about childhood memories “found a preponderance of unpleasant memories, even among people who rated their childhoods as having been relatively pleasant and happy,” Professor Baumeister wrote.

As with many other quirks of the human psyche, there may be an evolutionary basis for this. Those who are “more attuned to bad things would have been more likely to survive threats and, consequently, would have increased the probability of passing along their genes,” the article states. “Survival requires urgent attention to possible bad outcomes but less urgent with regard to good ones.”

And Professor Nass offered another interesting point: we tend to see people who say negative things as smarter than those who are positive. Thus, we are more likely to give greater weight to critical reviews.

“If I tell you that you are going to give a lecture before smarter people, you will say more negative things,” he said.

So this is all rather depressing. There is an upside, however. Just knowing this may help us better deal with the bad stuff that will inevitably happen.

Take the work of Teresa M. Amabile, a professor of business administration and director of research at the Harvard Business School. She asked 238 professionals working on 26 different creative projects from different companies and industries to fill out confidential daily diaries over a number of months. The participants were asked to answer questions based on a numeric scale and briefly describe one thing that stood out that day.

“We found that of all the events that could make for a great day at work, the most important was making progress on meaningful work — even a small step forward,” said Professor Amabile, a co-author of “The Progress Principle: Using Small Wins to Ignite Joy, Engagement and Creativity at Work” (Harvard Business Review Press, 2011). “A setback, on the other hand, meant the employee felt blocked in some way from making such progress. Setbacks stood out on the worst days at work.”

After analyzing some 12,000 diary entries, Professor Amabile said she found that the negative effect of a setback at work on happiness was more than twice as strong as the positive effect of an event that signaled progress. And the power of a setback to increase frustration is over three times as strong as the power of progress to decrease frustration.

“This applies even to small events,” she said.

If managers or bosses know this, then they should be acutely aware of the impact they have when they fail to recognize the importance to workers of making progress on meaningful work, criticize, take credit for their employees’ work, pass on negative information from on top without filtering and don’t listen when employees try to express grievances.

The answer, then, is not to heap meaningless praise on our employees or, for that matter, our children or friends, but to criticize constructively — and sparingly.

Professor Nass said that most people can take in only one critical comment at a time.

“I have stopped people and told them, ‘Let me think about this.’ I’m willing to hear more criticism but not all at one time.”

He also said research had shown that how the brain processed criticism — that we remembered much more after we heard disapproving remarks than before — belied the effectiveness of a well-worn management tool, known as the criticism sandwich. That is, offering someone a few words of praise, then getting to the meat of the problem, and finally adding a few more words of praise.

Rather, Professor Nass suggested, it’s better to offer the criticism right off the bat, then follow with a list of positive attributes.

Also, perhaps the very fact that we tend to praise our children when they’re young — too much and for too many meaningless things, I would argue — means they don’t get the opportunity to build up resilience when they do receive negative feedback.

Professor Baumeister said: “If criticism was more common, we might be more accepting of it.”

Oddly, I find this research, in some ways, reassuring. It’s not just me. I don’t need to beat myself up because I seem to fret excessively when things go wrong.

It turns out that a strategy I started years ago apparently can be effective. I have a “kudos” file in which I put all the praise I’ve received, along with e-mails from friends or family that make me feel particularly good.

As Professor Baumeister noted in his study, “Many good events can overcome the psychological effects of a bad one.” In fact, the authors quote a ratio of five goods for every one bad.

That’s a good reminder that we all need to engage in more acts of kindness — toward others and ourselves — to balance out the world.

Excuse me now. I’m off to read my kudos file. And if you would like to add to it, feel free.


The Human Eye and the Brain

We can all agree that eyesight is important when "viewing" an interface (LOL, duh), but there's one more thing to it that's even more important — the brain. Our brain is actually the processor that interprets everything that enters our eye through the pupil, lens and optic nerve, and as with any type of processor, it has a unique way of interpreting things. What we, the UX designers, need to understand is how to set things up in a way that the human brain figures them out the way we want it to.

Ever started analysing your eyesight? How it really works? There's this fovea and macula and retina and optic nerves and a bunch of other eye parts, but let me simplify and sum things up quickly here. There's a small portion of your eyesight that actually sees a sharp image, called "foveal vision". How small? Imagine a circle about 1.5–2 cm across. The rest is peripheral vision, which has not-so-great resolution. And since peripheral vision has poor resolution, our brain "steps in" (between the rapid, always-occurring eye movements), fills in the blanks with our knowledge and expectations, and tricks us into thinking that we see sharp-ish. Crazy, right? This is just a simplified explanation of what's going on in there, but if you wish to explore the subject further, there's a list of excellent references down below.

So basically, there's no way users can see your entire website at once — not even half of it. Knowing this is extremely important when building interface elements that relate to each other — CTA buttons with marketing copy, warning and info messages with forms or checkout options. They all need to be close to each other to minimise eye movement and to decrease the chance of ending up in a blind spot.

There’s no way users can “see” your entire website at once
— not even the half of it.

To better understand how an interface will look to users, I have developed a primitive mockup system that mimics what goes on with our eyesight in that split second when our foveal vision focuses on something. It's basically a set of different blur intensities depending on the distance from the foveal area, and it looks sort of like this.
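
For anyone who wants to build something similar, here is a rough sketch using the Pillow imaging library: progressively stronger Gaussian blur is applied the farther a region lies from a simulated foveal point. The ring radii, blur strengths and file names are arbitrary assumptions for illustration, not the tool I actually used.

```python
# Rough 'foveal focus' mockup sketch (ring radii, blur strengths and file names assumed).
from PIL import Image, ImageDraw, ImageFilter

def foveal_mockup(path: str, focus_xy: tuple[int, int],
                  rings=((150, 0), (300, 4), (600, 8), (10_000, 14))) -> Image.Image:
    """Blur a screenshot more strongly the farther a region is from focus_xy.

    `rings` lists (radius_px, blur_radius) pairs from the sharp foveal centre
    outwards; the last ring should be large enough to cover the whole image.
    """
    sharp = Image.open(path).convert("RGB")
    result = sharp.filter(ImageFilter.GaussianBlur(rings[-1][1]))  # fully blurred base
    # Paste progressively sharper layers inside progressively smaller circles.
    for radius_px, blur_radius in reversed(rings[:-1]):
        layer = sharp if blur_radius == 0 else sharp.filter(ImageFilter.GaussianBlur(blur_radius))
        mask = Image.new("L", sharp.size, 0)
        ImageDraw.Draw(mask).ellipse(
            [focus_xy[0] - radius_px, focus_xy[1] - radius_px,
             focus_xy[0] + radius_px, focus_xy[1] + radius_px], fill=255)
        result.paste(layer, (0, 0), mask)
    return result

# Example: simulate a user's gaze landing on a point near an assumed button location.
foveal_mockup("screenshot.png", focus_xy=(380, 220)).save("foveal_preview.png")
```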

And here's a slow-motion view of the same screenshot as users scan the page. Even when they're focused on a single item, other "sale" items are emphasised just enough to be noticeable in peripheral vision, thus increasing the chance of eye detection. MANGO did an excellent job here; key items are well structured and emphasised throughout the entire website.

And here’s a bad example. The screenshot was taken from a Croatian job-listing website. I stumbled on this issue while trying to log in with the wrong password. Can you notice where the error message is before the screen changes? I couldn’t find it at all. After several failed attempts to log in, I just assumed the server had a problem. The “page flick” didn’t help out either there was no visual cue or reason to look so far away from where my attention was focused.

One could point out a lot of problems here: the lack of cues through iconography (a simple warning icon would have made all the difference), the unclear color associations (the header and some buttons look identical in color) and so on. But distance plays a crucial role here. Putting the error message closer could have been the hint that compelled me to read what it says. Proximity is important, wherever feasible.

So remember, use that proximity Gestalt principle, structure information as "logical units" and keep the relevant information and options in the same unit. This is really important when dealing with complex interfaces with lots and lots of different data. It will ensure less scanning or searching time for users, and consequently you will create a more usable interface and a better user experience.

Yes, proximity is really important. If it can be achieved.

But let’s be fair. When dealing with large amounts of data on not so large canvas — there has to be some compromises. Not everything can be next to each other. What then?


Basic information processing and memory (Frontal lobe and the brain)

Memory refers to the storing of information in a reliable long-term substrate. Basic information processing refers to executing operations (e.g., mathematical operations and algorithms) on the information stored in memory.

Basic information processing and memory were the initial reason for creating computers. The human brain has adapted to these tasks only with difficulty and is not particularly good at them. It was only with the development of writing as a way to store information and support information processing that humans were able to take information processing and record keeping to an initial level of proficiency.

Currently, computers are able to process and store information at levels far beyond what humans are capable of. The last decades have seen an explosion in the capability to store different forms of information in computers, such as video or audio, areas in which the human brain previously had an advantage over computers. There are still mechanisms of memory that are unknown to us and that promise even greater efficiency in computers if we can copy them (e.g., our ability to remember episodes); however, they have to do with the efficient processing of those memories rather than with information storage itself.


Errors at the start of life

Only one in three fertilizations leads to a successful pregnancy. Many embryos fail to progress beyond early development. Cell biologists at the Max Planck Institute (MPI) for Biophysical Chemistry in Göttingen (Germany), together with researchers at the Institute of Farm Animal Genetics in Mariensee and other international colleagues, have now developed a new model system for studying early embryonic development. With the help of this system, they discovered that errors often occur when the genetic material from each parent combines immediately after fertilization. This is due to a remarkably inefficient process.

Human somatic cells typically have 46 chromosomes, which together carry the genetic information. These chromosomes are first brought together at fertilization, 23 from the father's sperm, and 23 from the mother's egg. After fertilization, the parental chromosomes initially exist in two separate compartments, known as pronuclei. These pronuclei slowly move towards each other until they come into contact. The pronuclear envelopes then dissolve, and the parental chromosomes unite.

The majority of human embryos, however, end up with an incorrect number of chromosomes. These embryos are often not viable, making erroneous genome unification a leading cause of miscarriage and infertility.

"About 10 to 20 percent of embryos that have an incorrect number of chromosomes result from the egg already containing too few or too many chromosomes prior to fertilization. This we already knew," explains Melina Schuh, director at the MPI for Biophysical Chemistry. "But how does this problem arise in so many more embryos? The time immediately after the sperm and egg unite -- the so-called zygote stage -- seemed to be an extremely critical phase for the embryo's development. We wanted to find out why this is the case."

Insights from a new model system

For their investigations, the scientists analyzed microscopy videos of human zygotes that had been recorded by a laboratory in England. They additionally set out to find a new model organism suitable for studying early embryonic development in detail. "Together with our collaboration partners at the Institute of Farm Animal Genetics, we developed methods for studying live bovine embryos, which closely resemble human embryos," explains Tommaso Cavazza, a scientist in Schuh's department. "The timing of the first cell divisions is comparable in human and bovine embryos. Furthermore, the frequency of chromosomes distributing incorrectly is about the same in both systems." Another advantage of this model system is that the eggs from which the bovine embryos developed were obtained from slaughterhouse waste, so no additional animals had to be sacrificed.

Schuh's team fertilized the bovine eggs in vitro and then used live-cell microscopy to track how the parental genetic material unites. They found that the parental chromosomes cluster at the interface between the two pronuclei. In some zygotes, however, the researchers noticed that individual chromosomes failed to do so. As a result, these chromosomes were 'lost' when the parental genomes united, leaving the resulting nuclei with too few chromosomes. These zygotes soon showed developmental defects.

"The clustering of chromosomes at the pronuclear interface seems to be an extremely important step," Cavazza explains. "If clustering fails, the zygotes often make errors that are incompatible with healthy embryo development."

Dependent on an inefficient process

But why do parental chromosomes often fail to cluster correctly? The Max Planck researchers were able to uncover that as well, as Cavazza reports: "Components of the cytoskeleton and the nuclear envelope control chromosome movement within the pronuclei. Intriguingly, these elements also steer the two pronuclei towards each other. So we are dealing with two closely linked processes that are essential, but often go wrong. Thus, whether an embryo will develop healthily or not depends on a remarkably inefficient process."

The scientists' findings are also relevant for in vitro fertilization in humans. It has been discussed for some time whether the accumulation of the so-called nucleoli at the pronuclear interface in human zygotes could be used as an indicator for the chance of successful fertilization. Zygotes in which these pronuclear components all cluster at the interface have a better chance of developing successfully, and could therefore be preferentially used for fertility treatment. "Our observation that chromosomes need to cluster at the interface to guarantee healthy embryo development supports this selection criterion," Schuh says.


Researcher controls colleague's motions in first human brain-to-brain interface

University of Washington researchers have performed what they believe is the first noninvasive human-to-human brain interface, with one researcher able to send a brain signal via the Internet to control the hand motions of a fellow researcher.

Using electrical brain recordings and a form of magnetic stimulation, Rajesh Rao sent a brain signal to Andrea Stocco on the other side of the UW campus, causing Stocco's finger to move on a keyboard.

While researchers at Duke University have demonstrated brain-to-brain communication between two rats, and Harvard researchers have demonstrated it between a human and a rat, Rao and Stocco believe this is the first demonstration of human-to-human brain interfacing.

"The Internet was a way to connect computers, and now it can be a way to connect brains," Stocco said. "We want to take the knowledge of a brain and transmit it directly from brain to brain."

The researchers captured the full demonstration on video recorded in both labs. The version available at the end of this story has been edited for length.

Rao, a UW professor of computer science and engineering, has been working on brain-computer interfacing (BCI) in his lab for more than 10 years and just published a textbook on the subject. In 2011, spurred by the rapid advances in BCI technology, he believed he could demonstrate the concept of human brain-to-brain interfacing. So he partnered with Stocco, a UW research assistant professor in psychology at the UW's Institute for Learning & Brain Sciences.

On Aug. 12, Rao sat in his lab wearing a cap with electrodes hooked up to an electroencephalography machine, which reads electrical activity in the brain. Stocco was in his lab across campus wearing a purple swim cap marked with the stimulation site for the transcranial magnetic stimulation coil that was placed directly over his left motor cortex, which controls hand movement.

The team had a Skype connection set up so the two labs could coordinate, though neither Rao nor Stocco could see the Skype screens.

Rao looked at a computer screen and played a simple video game with his mind. When he was supposed to fire a cannon at a target, he imagined moving his right hand (being careful not to actually move his hand), causing a cursor to hit the "fire" button. Almost instantaneously, Stocco, who wore noise-canceling earbuds and wasn't looking at a computer screen, involuntarily moved his right index finger to push the space bar on the keyboard in front of him, as if firing the cannon. Stocco compared the feeling of his hand moving involuntarily to that of a nervous tic.

"It was both exciting and eerie to watch an imagined action from my brain get translated into actual action by another brain," Rao said. "This was basically a one-way flow of information from my brain to his. The next step is having a more equitable two-way conversation directly between the two brains."

The technologies used by the researchers for recording and stimulating the brain are both well-known. Electroencephalography, or EEG, is routinely used by clinicians and researchers to record brain activity noninvasively from the scalp. Transcranial magnetic stimulation, or TMS, is a noninvasive way of delivering stimulation to the brain to elicit a response. Its effect depends on where the coil is placed; in this case, it was placed directly over the brain region that controls a person's right hand. By activating these neurons, the stimulation convinced the brain that it needed to move the right hand.

Computer science and engineering undergraduates Matthew Bryan, Bryan Djunaedi, Joseph Wu and Alex Dadgar, along with bioengineering graduate student Dev Sarma, wrote the computer code for the project, translating Rao's brain signals into a command for Stocco's brain.

"Brain-computer interface is something people have been talking about for a long, long time," said Chantel Prat, assistant professor in psychology at the UW's Institute for Learning & Brain Sciences, and Stocco's wife and research partner who helped conduct the experiment. "We plugged a brain into the most complex computer anyone has ever studied, and that is another brain."

At first blush, this breakthrough brings to mind all kinds of science fiction scenarios. Stocco jokingly referred to it as a "Vulcan mind meld." But Rao cautioned this technology only reads certain kinds of simple brain signals, not a person's thoughts. And it doesn't give anyone the ability to control your actions against your will.

Both researchers were in the lab wearing highly specialized equipment and under ideal conditions. They also had to obtain and follow a stringent set of international human-subject testing rules to conduct the demonstration.

"I think some people will be unnerved by this because they will overestimate the technology," Prat said. "There's no possible way the technology that we have could be used on a person unknowingly or without their willing participation."

Stocco said years from now the technology could be used, for example, by someone on the ground to help a flight attendant or passenger land an airplane if the pilot becomes incapacitated. Or a person with disabilities could communicate his or her wish, say, for food or water. The brain signals from one person to another would work even if they didn't speak the same language.

Rao and Stocco next plan to conduct an experiment that would transmit more complex information from one brain to the other. If that works, they then will conduct the experiment on a larger pool of subjects.


Where does the science stand?

The Tactical Assault Light Operator Suit (TALOS), which President Obama likened to “Iron Man,” could make American soldiers stronger and largely impervious to bullets. (Credit: AP Images)

On Feb. 25, 2014, President Barack Obama met with Army officials and engineers at the Pentagon to discuss plans to create a new super armor that would make soldiers much more dangerous and harder to kill. The president joked that “we’re building ‘Iron Man,’” but Obama’s jest contained more than a kernel of truth: The exoskeleton, called the Tactical Assault Light Operator Suit (TALOS), does look vaguely like the fictional Tony Stark’s famous Iron Man suit. The first prototypes already are being built, and if all goes as planned, American soldiers may soon be much stronger and largely impervious to bullets.

A little more than a year later and an ocean away, scientists with the United Kingdom’s National Health Service (NHS) announced that by 2017, they plan to begin giving human subjects synthetic or artificial blood. If the NHS moves ahead with its plans, it would be the first time people receive blood created in a lab. While the ultimate aim of the effort is to stem blood shortages, especially for rare blood types, the success of synthetic blood could lay the foundation for a blood substitute that could be engineered to carry more oxygen or better fight infections.

Scientists are making tissue and blood in the laboratory. Synthetic blood may be used in human beings as soon as 2017. (Credit: AP Images)

In April 2016, scientists from the Battelle Memorial Institute in Columbus, Ohio, revealed that they had implanted a chip in the brain of a quadriplegic man. The chip can send signals to a sleeve around the man’s arm, allowing him to pick up a glass of water, swipe a credit card and even play the video game Guitar Hero.

Roughly around the same time, Chinese researchers announced they had attempted to genetically alter 213 embryos to make them HIV resistant. Only four of the embryos were successfully changed and all were ultimately destroyed. Moreover, the scientists from the Guangzhou Medical University who did the work said its purpose was solely to test the feasibility of embryo gene editing, rather than to regularly begin altering embryos. Still, Robert Sparrow of Australia’s Monash University Centre for Human Bioethics said that while editing embryos to prevent HIV has an obvious therapeutic purpose, the experiment more broadly would lead to other things. “Its most plausible use, and most likely use, is the technology of human enhancement,” he said, according to the South China Morning Post.

As these examples show, many of the fantastic technologies that until recently were confined to science fiction have already arrived, at least in their early forms. “We are no longer living in a time when we can say we either want to enhance or we don’t,” says Nicholas Agar, a professor of ethics at Victoria University in Wellington, New Zealand, and author of the book “Humanity’s End: Why We Should Reject Radical Enhancement.” “We are already living in an age of enhancement.”

The road to TALOS, brain chips and synthetic blood has been a long one that has included many stops along the way. Many of these advances come from a convergence of more than one type of technology – from genetics and robotics to nanotechnology and information technology. These technologies are “intermingling and feeding on one another, and they are collectively creating a curve of change unlike anything we humans have ever seen,” journalist Joel Garreau writes in his book “Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies – and What It Means to Be Human.”

The combination of information technology and nanotechnology offers the prospect of machines that are, to quote the title of Robert Bryce’s recent book on innovation, “Smaller Faster Lighter Denser Cheaper.” And as some futurists such as Ray Kurzweil argue, these developments will occur at an accelerated rate as technologies build on each other. “An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view,” writes Kurzweil, an American computer scientist and inventor whose work has led to the development of everything from checkout scanners at supermarkets to text-reading machines for the blind. “So we won’t experience 100 years of progress in the 21st century – it will be more like 20,000 years of progress (at today’s rate).”

GENETIC EDITING AND ENGINEERING

In the field of biotechnology, a big milestone occurred in 1953, when American biologist James Watson and British physicist Francis Crick discovered the molecular structure of DNA – the famed double helix – that is the genetic blueprint for life. Fifty years later, in 2003, two international teams of researchers led by American biologists Francis Collins and Craig Venter succeeded in decoding and reading that blueprint by identifying all of the chemical base pairs that make up human DNA.

Finding the blueprint for life, and successfully decoding and reading it, has given researchers an opportunity to alter human physiology at its most fundamental level. Manipulating this genetic code – a process known as genetic engineering – could allow scientists to produce people with stronger muscles, harder bones and faster brains. Theoretically, it also could create people with gills or webbed hands and feet or even wings – and, as Garreau points out in his book, could lead to “an even greater variety of breeds of humans than there is of dogs.”

In recent years, the prospect of advanced genetic engineering has become much more real, largely due to two developments. First, inexpensive and sophisticated gene mapping technology has given scientists an increasingly more sophisticated understanding of the human genome.

The second important development involves the powerful new gene editing technology known as CRISPR. While gene editing itself is not new, CRISPR offers scientists a method that is much faster, cheaper and more accurate. “It’s about 1,000 times cheaper [than existing methods],” says George Church, a geneticist at Harvard Medical School. “It could be a game changer.” CRISPR is so much more efficient and accurate than older gene-editing technology because it adapts a bacterial immune-defense mechanism to target and splice out parts of a cell’s DNA and replace them with new genetic code.

CRISPR is already dramatically expanding the realm of what is possible in the field of genetic engineering. Indeed, on June 21, 2016, the U.S. government announced that it had approved the first human trials using CRISPR, in this case to strengthen the cancer-fighting properties of the immune systems of patients suffering from melanoma and other deadly cancers. “CRISPR’s power and versatility have opened up new and wide-ranging possibilities across biology and medicine,” says Jennifer Doudna, a researcher at the University of California at Berkeley and a co-inventor of CRISPR.

According to Doudna and others, CRISPR could provide new treatments or even cures to some of today’s most feared diseases – not only cancer, but Alzheimer’s disease, Parkinson’s disease and others.

An even more intriguing possibility involves making genetic changes at the embryonic stage, also known as germline editing. The logic is simple: alter the genes at an embryo’s eight- or 16-cell stage (to, say, eliminate the gene for Tay-Sachs disease) and that change will occur in each of the resulting person’s trillions of cells – not to mention in the cells of their descendants. When combined with researchers’ growing understanding of the genetic links to various diseases, CRISPR could conceivably help eliminate a host of maladies in people before they are born.

But many of the same scientists who have hailed CRISPR’s promise, including Doudna, also have warned of its potential dangers. At a National Academy of Sciences conference in Washington, D.C., in December 2015, she and about 500 researchers, ethicists and others urged the scientific community to hold off editing embryos for now, arguing that we do not yet know enough to safely make changes that can be passed down to future generations.

Those at the conference also raised another concern: the idea of using the new technologies to edit embryos for non-therapeutic purposes. Under this scenario, parents could choose a variety of options for their unborn children, including everything from cosmetic traits, such as hair or eye color, to endowing their offspring with greater intellectual or athletic ability. Some transhumanists see a huge upside to making changes at the embryonic level. “This may be the area where serious enhancement first becomes possible, because it’s easier to do many things at the embryonic stage than in adults using traditional drugs or machine implants,” says Nick Bostrom, director of the Future of Humanity Institute, a think tank at Oxford University that focuses on “big picture questions about humanity and its prospects.”

But in the minds of many philosophers, theologians and others, the idea of “designer children” veers too close to eugenics – the 19th- and early 20th-century philosophical movement to breed better people. Eugenics ultimately inspired forced sterilization laws in a number of countries (including the U.S.) and then, most notoriously, helped provide some of the intellectual framework for Nazi Germany’s murder of millions in the name of promoting racial purity.

There also may be practical obstacles. Some worry that there could be unintended consequences, in part because our understanding of the genome, while growing, is not even close to complete. Writing in Time magazine, Venter, who helped lead the first successful effort to sequence the human genome, warns that “we have little or no knowledge of how (with a few exceptions) changing the genetic code will affect development and the subtlety associated with the tremendous array of human traits.” Venter adds: “Genes and proteins rarely have a single function in the genome and we know of many cases in experimental animals where changing a ‘known function’ of a gene results in developmental surprises.”

A BETTER BRAIN?

For many transhumanists, expanding our capacities begins with the organ that most sets humans apart from other animals: the brain. Right now, cognitive enhancement largely involves drugs that were developed and are prescribed to treat certain brain-related conditions, such as Ritalin for attention deficit disorder or modafinil for narcolepsy. These and other medications have been shown in lab tests to help sharpen focus and improve memory.

But while modafinil and other drugs are now sometimes used (off label) to improve cognition, particularly among test-cramming students and overwhelmed office workers, the improvements in focus and memory are relatively modest. Moreover, many transhumanists and others predict that while new drugs (say, a specifically designed, IQ-boosting “smart pill”) or genetic engineering could result in substantially enhanced brain function, the straightest and shortest line to dramatically augmenting cognition probably involves computers and information technology.

As with biotechnology, information technology’s story is littered with important milestones and markers, such as the development of the transistor by three American scientists at Bell Labs in 1947. Transistors are the electronic signal switches that gave rise to modern computers. By shrinking the electronic components to microscopic size, researchers have been able to build ever smaller, more powerful and cheaper computers. As a result, today’s iPhone has more than 250,000 times more data storage capacity than the guidance computer installed on the Apollo 11 spacecraft that took astronauts to the moon.

Nanotechnology makes it possible to encode a great deal of information in a very tiny space. (Credit: AP Images)

One of the reasons the iPhone is so powerful and capable is that it uses nanotechnology, which involves “the ability to see and to control individual atoms and molecules.” Nanotechnology has been used to create substances and materials found in thousands of products, including items much less complex than an iPhone, such as clothing and cosmetics.

Advances in computing and nanotechnology have already resulted in the creation of tiny computers that can interface with our brains. This development is not as far-fetched as it may sound, since both the brain and computers use electricity to operate and communicate. These early and primitive brain-machine interfaces have been used for therapeutic purposes, to help restore some mobility to those with paralysis (as in the example involving the quadriplegic man) and to give partial sight to people with certain kinds of blindness. In the future, scientists say, brain-machine interfaces will do everything from helping stroke victims regain speech and mobility to successfully bringing people out of deep comas.

Right now, most scientists working in the brain-machine-interface field say they are solely focused on healing, rather than enhancing. “I’ve talked to hundreds of people doing this research, and right now everyone is wedded to the medical stuff and won’t even talk about enhancement because they don’t want to lose their research grants,” says Daniel Faggella, a futurist who founded TechEmergence, a market research firm focusing on cognitive enhancement and the intersection of technology and psychology. But, Faggella says, the technology developed to ameliorate medical conditions will inevitably be put to other uses. “Once we have boots on the ground and the ameliorative stuff becomes more normal, people will then start to say: we can do more with this.”

Doing more inevitably will involve augmenting brain function, which has already begun in a relatively simple way. For instance, scientists have been using electrodes placed on the head to run a mild electrical current through the brain, a procedure known as transcranial direct-current stimulation (tDCS). Research shows that tDCS, which is painless, may increase brain plasticity, making it easier for neurons to fire. This, in turn, improves cognition, making it easier for test subjects to learn and retain things, from new languages to mathematics. Already there is talk of implanting a tDCS pacemaker-like device in the brain so recipients do not need to wear electrodes. A device inside someone’s head could also more accurately target the electrical current to those parts of the brain most responsive to tDCS.

According to many futurists, tDCS is akin to an early steam train or maybe even a horse-drawn carriage before the coming of jumbo jets and rockets. If, as some scientists predict, full brain-machine interface comes to pass, people may soon have chips implanted in their brains, giving them direct access to digital information. This would be like having a smartphone in one’s head, with the ability to call up mountains of data instantly and without ever having to look at a computer screen.

The next step might be machines that augment various brain functions. Once scientists complete a detailed map of exactly what different parts of our brain do, they will theoretically be able to augment each function zone by placing tiny computers in these places. For example, machines may allow us to “process” information at exponentially faster speeds or to vividly remember everything or simply to see or hear better. Augments placed in our frontal lobe could, theoretically, make us more creative, give us more (or less) empathy or make us better at mathematics or languages. (For data on whether Americans say they would want to use potential technology that involved a brain-chip implant to improve cognitive abilities, see the accompanying survey, U.S. Public Wary of Biomedical Technologies to ‘Enhance’ Human Abilities.)

Genetic engineering also offers promising possibilities, although there are possible obstacles as well. Scientists have already identified certain areas in human DNA that seem to control our cognitive functions. In theory, someone’s “smart genes” could be manipulated to work better, an idea that almost certainly has become more feasible with the recent development of CRISPR. “The potential here is really very great,” says Anders Sandberg, a neuroscientist and fellow at Oxford University’s Future of Humanity Institute. “I mean scientists are already working on … small biological robots made up of small particles of DNA that bind to certain things in the brain and change their chemical composition.

“This would allow us to do so many different things,” Sandberg adds. “The sky’s the limit.”

In spite of this optimism, some scientists maintain that it will probably be a long time before we can bioengineer a substantially smarter person. For one thing, it is unlikely there are just a few genes or even a few dozen genes that regulate intelligence. Indeed, intelligence may be dependent on the subtle dance of thousands of genes, which makes bioengineering a genius much harder.

Even if scientists find the right genes and “turn them on,” there is no guarantee that people will actually be smarter. In fact, some scientists speculate that trying to ramp up intelligence – whether by biology or machines – could overload the brain’s carrying capacity. According to Martin Dresler, an assistant professor of cognitive neuroscience at Radboud University in the Netherlands, some researchers believe that “evolution forced brains to develop toward optimal … functioning.” In other words, he says, “if there still was potential to optimize brain functioning by adding certain chemicals, nature would already have done this.” The same reasoning could also apply to machine enhancement, Dresler adds.

Even the optimistic Sandberg says that enhancing the brain could prove more difficult than some might imagine because changing biological systems can often have unforeseen impacts. “Biology is messy,” he says. “When you push in one direction, biology usually pushes back.”

THE FUTURE OF BLOOD

Given the brain’s importance, cognitive enhancement might be the holy grail of transhumanism. But many futurists say enhancement technologies will likely be used to transform the whole body, not just one part of it.

This includes efforts to manufacture synthetic blood, which to this point have been focused on therapeutic goals. But as with CRISPR and gene editing, artificial blood could ultimately be used as part of a broader effort at human enhancement. It could be engineered to clot much faster than natural human blood, for instance, preventing people from bleeding to death. Or it could be designed to continuously monitor a person’s arteries and keep them free of plaque, thus preventing a heart attack.

Synthetic white blood cells also could potentially be programmed. Indeed, like virtually any computer, these cells could receive “software updates” that would allow them to fight a variety of threats, such as a new infection or a specific kind of cancer. 1

Scientists already are developing and testing nanoparticles that could enter the bloodstream and deliver medicine to targeted areas. These microscopic particles are a far cry from synthetic blood, since they would be used once and for very specific tasks – such as delivering small doses of chemotherapy directly to cancer cells. However, nanoparticles could be precursors to microscopic machines that could potentially do a variety of tasks for a much longer period of time, ultimately replacing our blood.

It’s also possible that enhanced blood will be genetically engineered rather than synthetically made. “One of the biggest advantages of this approach is that you would not have to worry about your body rejecting your new blood, because it will still come from you,” says Oxford University’s Sandberg.

Regardless of how it is made, one obvious role for enhanced or “smart” blood would be to increase the amount of oxygen our hemoglobin can carry. “In principle, the way our blood stores oxygen is very limited,” Sandberg says. “So we could dramatically enhance our physical selves if we could increase the carrying capacity of hemoglobin.”

According to Sandberg and others, substantially more oxygen in the blood could have many uses beyond the obvious benefits for athletes. For example, he says, “it might prevent you from having a heart attack, since the heart doesn’t need to work as hard, or it might be that you wouldn’t have to breathe for 45 minutes.” In general, Sandberg says, this super blood “might give you a lot more energy, which would be a kind of cognitive enhancement.”

(For data on whether Americans say they would want to use potential synthetic blood substitutes to improve their own physical abilities, see the accompanying survey, U.S. Public Wary of Biomedical Technologies to ‘Enhance’ Human Abilities.)

HYPE OR PARADIGM SHIFT?

So where is all of this new and powerful technology taking humanity? The answer depends on whom you ask.

Having more energy or even more intelligence or stamina is not the end point of the enhancement project, many transhumanists say. Some futurists, such as Kurzweil, talk about the use of machines not only to dramatically increase physical and cognitive abilities but to fundamentally change the trajectory of human life and experience. For instance, Kurzweil predicts that by the 2040s, the first people will upload their brains into the cloud, “living in various virtual worlds and even avoiding aging and evading death.”

Kurzweil – who has done more than anyone to popularize the idea that our conscious selves will soon be able to be “uploaded” – has been called everything from “freaky” to “a highly sophisticated crackpot.” But in addition to being one of the world’s most successful inventors, he has – if book sales and speaking engagements are any indication – built a sizable following for his ideas.

Kurzweil is not the only one who thinks we are on the cusp of an era when human beings will be able to direct their own evolution. “I believe that we’re now seeing the beginning of a paradigm shift in engineering, the sciences and the humanities,” says Natasha Vita-More, chairwoman of the board of directors of Humanity+, an organization that promotes “the ethical use of technology to expand human capacities.”

Still, even some transhumanists who admire Kurzweil’s work do not entirely share his belief that we will soon be living entirely virtual lives. “I don’t share Ray’s view that we will be disembodied,” says Vita-More, who along with her husband, philosopher Max More, helped found the transhumanist movement in the United States. “We will always have a body, even though that body will change.”

In the future, Vita-More predicts, our bodies will be radically changed by biological and machine-based enhancements, but our fundamental sensorial life – that part of us that touches, hears and sees the world – will remain intact. However, she also envisions something she calls a whole-body prosthetic, which, along with our uploaded consciousness, will act as a backup or copy of us in case we die. “This will be a way to ensure our personal survival if something happens to our bodies,” she says.

Others, like Boston University bioethicist George Annas, believe Kurzweil is wrong about technological development and say talk of exotic enhancement is largely hype. “Based on our past experience, we know that most of these things are unlikely to happen in the next 30 or 40 years,” Annas says.

He points to many confident predictions made in the last 30 or 40 years that turned out to be unfounded. “In the 1970s, we thought that by now there would be millions of people with artificial hearts,” he says. Currently, only a small number of patients have artificial hearts, and the devices are used as a temporary bridge to keep patients alive until a human heart can be found for transplant.

More recently, Annas says, “people thought the Human Genome Project would quickly lead to personalized medicine, but it hasn’t.”

Faggella, the futurist who founded TechEmergence, sees a dramatically different future and thinks the real push will be about, in essence, expanding our consciousness, both literally and figuratively. The desire to be stronger and smarter, Faggella says, will quickly give way to a quest for a new kind of happiness and fulfillment. “In the last 200 years, technology has made us like gods … and yet people today are roughly as happy as they were before,” he says. “So, I believe that becoming a super-Einstein isn’t going to make us happier and … that ultimately we’ll use enhancement to fulfill our wants and desires rather than just make ourselves more powerful.”

What exactly does that mean? Faggella can’t say for sure, but he thinks that enhancement of the mind will ultimately allow people to have experiences that are quite simply impossible with our current brains. “We’ll probably start by taking a human version of nirvana and creating it in some sort of virtual reality,” he says, adding “eventually we’ll transition to realms of bliss that we can’t conceive of at this time because we’re incapable of conceiving it. Enhancing our brains will be about making us capable.”


Acknowledgements

I wish to thank Susan Anton, Melinda Zeder, Tim Lewens, Polly Wiessner, Tim Ingold, Robert Sussman, Kim Sterelny, Jeffery Peterson, Celia Deane-Drummond and Marc Kissel for their influence on the themes and content in this article and the organizers of the ‘New trends in evolutionary biology: biological, philosophical and social science perspectives’, co-sponsored by the Royal Society and the British Academy, for their kind invitation to participate. I also thank the editor of Interface Focus and two anonymous reviewers for substantial and efficient critiques and commentary on earlier versions of this article. Agustin Fuentes is responsible for 100% of the development and writing of this article.


Your brain on imagination: It's a lot like reality, study shows

Imagine a barking dog, a furry spider or another perceived threat and your brain and body respond much like they would if you experienced the real thing. Imagine it repeatedly in a safe environment and soon your phobia -- and your brain's response to it -- subsides.

That's the takeaway of a new brain imaging study led by University of Colorado Boulder and Icahn School of Medicine researchers, suggesting that imagination can be a powerful tool in helping people with fear and anxiety-related disorders overcome them.

"This research confirms that imagination is a neurological reality that can impact our brains and bodies in ways that matter for our wellbeing," said Tor Wager, director of the Cognitive and Affective Neuroscience Laboratory at CU Boulder and co-senior author of the paper, published in the journal Neuron.

About one in three people in the United States have anxiety disorders, including phobias, and 8 percent have post-traumatic stress disorder. Since the 1950s, clinicians have used "exposure therapy" as a first-line treatment, asking patients to face their fears -- real or imagined -- in a safe, controlled setting. Anecdotally, results have been positive.

But until now, very little has been known about how such methods impact the brain or how imagination neurologically compares to real-life exposure.

"These novel findings bridge a long-standing gap between clinical practice and cognitive neuroscience," said lead author Marianne Cumella Reddan, a graduate student in the Department of Psychology and Neuroscience at CU Boulder. "This is the first neuroscience study to show that imagining a threat can actually alter the way it is represented in the brain."

For the study, 68 healthy participants were trained to associate a sound with an uncomfortable, but not painful, electric shock. Then, they were divided into three groups and either exposed to the same threatening sound, asked to "play the sound in their head," or asked to imagine pleasant bird and rain sounds -- all without experiencing further shocks.

The researchers measured brain activity using functional magnetic resonance imaging (fMRI). Sensors on the skin measured how the body responded.

In the groups that imagined and heard the threatening sounds, brain activity was remarkably similar, with the auditory cortex (which processes sound), the nucleus accumbens (which processes fear) and the ventromedial prefrontal cortex (associated with risk and aversion) all lighting up.

After repeated exposure without the accompanying shock, the subjects in both the real and imagined threat groups experienced what is known as "extinction," where the formerly fear-inducing stimulus no longer ignited a fear response.

Essentially, the brain had unlearned to be afraid.
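
One way to see why repeated, shock-free presentations weaken a learned fear response is through a simple associative-learning formalism. The sketch below, in TypeScript, uses the classic Rescorla-Wagner update rule purely as an illustration; it is not the model used in the study, and the learning rate and trial counts are arbitrary.

    // Toy illustration (not the study's model): the Rescorla-Wagner rule.
    // V is the learned association between the tone and the shock; presenting
    // the tone without the shock drives V back toward zero ("extinction").
    function rescorlaWagnerStep(V: number, outcome: number, alpha = 0.3): number {
      return V + alpha * (outcome - V); // move V toward the observed outcome
    }

    let association = 0;

    // Acquisition: the tone is repeatedly paired with the shock (outcome = 1).
    for (let trial = 0; trial < 10; trial++) {
      association = rescorlaWagnerStep(association, 1);
    }
    console.log("after conditioning:", association.toFixed(2)); // ~0.97

    // Extinction: the tone recurs, really or in imagination, with no shock (outcome = 0).
    for (let trial = 0; trial < 10; trial++) {
      association = rescorlaWagnerStep(association, 0);
    }
    console.log("after extinction:", association.toFixed(2)); // ~0.03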

"Statistically, real and imagined exposure to the threat were not different at the whole brain level, and imagination worked just as well," said Reddan.

Notably, the group that imagined birds and rain sounds showed different brain reactions, and their fear response to the sound persisted.

"I think a lot of people assume that the way to reduce fear or negative emotion is to imagine something good. In fact, what might be more effective is exactly the opposite: imagining the threat, but without the negative consequences," said Wager.

Previous research has shown that imagining an act can activate and strengthen regions of the brain involved in its real-life execution, improving performance. For instance, imagining playing piano can boost neuronal connections in regions related to the fingers. Research also shows it's possible to update our memories, inserting new details.

The new study suggests that imagination may be a more powerful tool than previously believed for updating those memories.

"If you have a memory that is no longer useful for you or is crippling you, you can use imagination to tap into it, change it and re-consolidate it, updating the way you think about and experience something," said Reddan, stressing that something as simple as imagining a single tone tapped into a complex network of brain circuits.

She notes that there was much more variance in brain activity in the group that imagined the tone than in the group that actually heard it, suggesting that those with a more vivid imagination may experience greater brain changes when simulating something in their mind's eye.

As imagination becomes a more common tool among clinicians, more research is necessary, they write.

For now, Wager advises, pay attention to what you imagine.

"Manage your imagination and what you permit yourself to imagine. You can use imagination constructively to shape what your brain learns from experience."


Minimising the cognitive load

There are three types of cognitive load:

  • Intrinsic cognitive load is the inherent difficulty of a task. In UX terms, it’s the energy people need to absorb new information while keeping track of whatever task they are trying to accomplish with your product.
  • Extraneous cognitive load is anything taking up mental resources to deal with problems that are not related to the task itself. For instance, in design, this could be caused by random use of font sizes (which is, of course, not the same as purposefully deployed fonts); see the sketch after this list.
  • Germane cognitive load is the load used to construct and process schemas. This is particularly interesting for areas such as teaching, but it’s also more complex to address through design.
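
To make the extraneous-load point concrete, here is a minimal TypeScript sketch of a constrained typography scale. The token names and values are hypothetical; the point is simply that components ask for a semantic role rather than an arbitrary pixel size, so any visual variation is purposeful rather than random.

    // Minimal sketch with hypothetical token names: a fixed typography scale.
    // Limiting font sizes to a small, named set removes one common source of
    // extraneous cognitive load (arbitrary visual variation between screens).
    const typeScale = {
      caption: "13px",
      body: "16px",
      heading: "24px",
      display: "32px",
    } as const;

    type TypeRole = keyof typeof typeScale;

    // Components request a semantic role, never a raw pixel value.
    function fontSizeFor(role: TypeRole): string {
      return typeScale[role];
    }

    // Usage: every caption in the product renders at the same size.
    console.log(fontSizeFor("caption")); // "13px"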

The part that we can most easily tackle is the extraneous cognitive load: the load that is, essentially, the result of poor design. Our goal should be to minimise it as much as possible. What follows are five ways to reduce extraneous cognitive load.

