Can logic be traced back to neurons?

In our everyday experience, and particularly in mathematics and philosophy, there exists a kind of binary order, in the sense that some statements can be classified as true or false.

Does classifying a statement as either true or false have a specific neural correlate in the brain? In other words, is it possible to correlate the process of assigning true/false with some particular feature of the brain?


Certainly some statements are either true or false. Either it rains or it doesn't. But when you think about many of these statements more carefully, the truth of these statements isn't really binary at all, or the binary fact is largely irrelevant.

Using the weather example, for most people it is not very helpful to be told that it isn't raining when there is a hurricane or it is snowing, neither of which would usually count as useful "not rain" for someone deciding what to wear. So the answer to "Is it raining?" usually isn't "No", but rather something like "You should take your umbrella, because it looks like it might rain later".

The brain also isn't binary. Axons do not simply forward an electrical impulse or not; they carry impulses of different intensities and frequencies. Neurons are usually not activated by input from a single synapse, but by many synapses at once. Synapses do not simply release transmitters or not; they release them in varying amounts. Functions are not located in one area, but in multiple areas. And so on.

Logic and maths (and psychological theories) are simplifications of a very complex reality. They provide simple solutions that often work (because life is flexible enough to adapt to a rigid approach), but just as often logical or mathematical (or psychological) approaches to dealing with real problems are destined to fail.


Can logic be traced back to neurons? - Psychology

In order to understand the great debate between the Neuron Doctrine, associated with Cajal, and the Nerve Network, associated with Golgi, it is perhaps necessary to backtrack a little in order to see how these theories originated.

Technology had finally developed enough for scientists to explore how the body functioned at the cellular level.
Equipped with the carmine stain, Otto Friedrich Karl Deiters observed the anatomy of nerve cells, particularly the somas and dendrites. Deiters wondered how these nerve cells communicated. Because staining techniques at that point were not very advanced, Deiters hypothesized, around 1865, that perhaps nerve cells were fused together, as that is what it looked like from his slides (the synapse was not visible to scientists until the 1950s). Joseph von Gerlach took the fusing idea a step further and suggested that all nerve cells are interconnected, creating a giant web; thus the nerve network theory was established (Finger, 2000).

Not long afterward, in 1873, Golgi invented the silver stain, which gave scientists a much clearer view of nerve cells (Finger, 2000).

In 1886, Wilhelm His suggested that it might be possible that nerve cells do not fuse together. He backed up his claim by pointing out that motor neurons are not connected to muscle fibers; therefore, he speculated, if nerve cells in the peripheral nervous system are not connected, nerve cells in the central nervous system may not be connected either. Around that same time, August Forel suggested that nerve cells may simply touch to send the nerve impulse, but are not technically fused (Finger, 2000).

This was when Santiago Ramon y Cajal came onto the scene. Cajal improved Golgi’s stain and created better slides. At the German Anatomical Society meeting of 1889, Cajal presented his slides and expressed his opinion that he found no evidence of nerve cells fusing together (Finger, 2000).

Thus, in 1891, Wilhelm von Waldeyer, with his prestige, officially established that nerve cells are separate entities called neurons. This theory became known as the neuron doctrine (Finger, 2000).

Though it was established that neurons were most likely separate units, it was still not determined how they communicated, or in what direction. Golgi believed that the dendrites’ only job was to provide nutrition, so that only the axons communicated information. Cajal, however, believed that communication flowed in one direction, with dendrites receiving information and axons sending it out. He came up with this hypothesis by examining the sense organs. In the eyes, for example, neurons are all positioned with their dendrites facing outward, to receive information from the outside world, while the axons point in toward the brain to deliver that information (Finger, 2000).

However, Cajal still did not answer the question of how information passed between neurons. He hesitantly agreed with other scientists that perhaps dendrites communicate with axons by touching, but he was not entirely convinced (Finger, 2000).

*Interesting Fact: Way back in 1872, before Golgi even invented the silver stain, Alexander Bain suggested that when learning takes place, nerve cells grow closer together, and when memory loss occurs, nerve cells grow further apart (Bain, 1873):

“For every act of memory, every exercise of bodily aptitude, every habit, recollection, train of ideas, there is a specific grouping, or co-ordination, of sensations and movements, by virtue of specific growths in the cell junctions.” (p. 91).

“If the brain is a vast network of communication between sense and movement – actual and ideal – between sense and sense, movement and movement, by innumerable conducting fibres, crossing at innumerable points, — the way to make one definite set of currents induce a second definite set is in some way or other to strengthen the special points of junction where the two sets are most readily connected […]” (p. 92)

*Another interesting fact: The idea that nerve cells might communicate either chemically or electrically can be traced back to 1877 and Emil du Bois-Reymond.

“Of known natural processes that might pass on excitation, only two are, in my opinion, worth talking about. Either there exists at the boundary of the contractile substance a stimulative secretion in the form of a thin layer of ammonia, lactic acid, or some other powerful stimulatory substance, or the phenomenon is electrical in nature.” (qtd. in Finger, 2000, p. 260)

The question of exactly how nerve cells communicate was answered by Otto Loewi, who performed an experiment showing that neurons can communicate via chemicals. Though the story varies in minute details, Otto Loewi was said to have had insomnia and to have woken up often during the night. One night he woke up with the idea for an experiment to test whether neurons communicate via chemicals. He wrote it down, and when he later carried out the experiment it was a success (Finger, 2000).

He took two frog hearts. One heart had its vagus nerve still attached, while the other had its vagus nerve removed. He bathed the heart with the vagus nerve in neutral Ringer’s solution. He stimulated the vagus nerve, which made the heart slow down. He then took some of the Ringer’s solution and applied it to the second heart, which immediately slowed its beating as well, just as if he had stimulated it directly. This showed that the vagus nerve released chemicals that told the heart to slow its beating, and that those chemicals could make the other heart do the same thing (Sabbatini, 2003).

Bain, A. (1873). Mind and Body: The Theories of Their Relation. London: Henry S. King.

Finger, S. (2000). Minds Behind the Brain: A History of the Pioneers and Their Discoveries. New York, NY: Oxford University Press.


In my blog, I intend to explore the neurological nature of theatrical performance. In the domains of affective neuroscience and cognitive psychology, the focus of most researchers seems to stay on the processes involved in genuine human emotional activity. An activity such as acting, however, involves mostly artificially stimulated emotions, and yet it requires full commitment; its emotional processes are therefore often labeled genuine, even though they are motivated by the need to perform. In real life, emotions are often uncontrolled. They are labeled spontaneous; they don’t call before they show up at someone’s door, they just happen. In the performing arts, it is quite the opposite: emotions are provoked in performers, who then (ideally) receive reactions and empathy from the audience. I am interested in the physical and psychological manifestations of artificially and purposefully stimulated emotive processing. I am hoping to demystify emotions by taking a scientific approach to the most ephemeral aspect of human behavior and nature. I am also exploring the concept of emotional prosody and the aural/musical/vocal basis of human emotions.

Human beings embody, express, process, inhibit, function, act, feel. All the verbs I just listed, along with many more, have as their sources the essential parts of what constitutes a human: body, mind, emotion, and behavior. In his dissertation, Kemp (2008) states that cognitive science acknowledges the central role of the body and enables a better understanding of the relationship between thought and expression (p. 20). Acting, on the other hand, does not explain the body-mind-soul relationship, but rather provides the richest material for exploring and experimenting with human emotions. How does theatrical performance/activity conceptually relate to cognitive science and affective neuroscience? The main thing both disciplines share is the idea of the duality of human nature. Are emotions manifested through the body, or is the body producing emotions as an integral part of its purpose? Following the same logic, the acting traditions argue: can physical work stimulate imagination to the point that the actor lives through the emotions of the character, or does a psychological approach to acting guarantee deep understanding and therefore meaningful expression? Kemp (2008) proposes that “the two approaches are in fact representative of positions on a continuum, rather than being mutually exclusive or necessarily oppositional. The empirically based concept of the embodied mind provides a foundation that explains the effectiveness of approaches to training and rehearsal that consciously link physicality and environment in the expression of meaning” (p. 24).

Unfortunately, until recently researchers and thinkers didn’t have the luxury of being informed by scientific evidence of neurological activity and embodied cognition. And yet the juxtaposition of the emotional and the mental has always been present in both science and the arts. Historically, acting has always reflected the latest trends in philosophical and cultural thought. For generations and even centuries, acting style maintained a very high level of artificiality, and what we now know as “believable acting” was simply nonsense. In the early 19th century, just several decades before psychology emerged as a discipline, Henry Siddons and Johann Jacob Engel summarized the pre-realistic European acting style in their book “Practical Illustrations of Rhetorical Gesture and Action”. The book describes and illustrates several emotions and their physical expressions, in a way that is very similar to the system of discrete emotions used in neuroscience. The pre-realistic school of acting assumed that “habit becomes a kind of nature” (p. 3). By providing illustrations of various gestures and poses, each connected to a specific emotion, the authors ensured that conventional emotional expressions became institutionalized via theatre and were therefore internalized by many generations of theatre practitioners. Before psychology existed, acting relied on captured, generalized emotional stereotypes.

One thing, though, was missing from their account: fake and artificial or not, emotions kept engaging audiences by making them feel and empathize. As Lewis, Gibson, and Lannon simply put it, “Detecting an emotion changes the observer’s own emotional tone in the direction of the emotion he’s observing” (p. 4). Some researchers of the 20th century would argue that theatre owed its glory to mirror neurons.

Nearly 20 years ago, mirror neurons were discovered in macaque monkeys by Rizzolatti and colleagues (Drinko, 2013, p. 26). After a series of tests, it was concluded that the mammalian brain is capable of engaging in what Lewis (2000) calls “the internal neural simulation of behavior it observes in others” (p. 5). This theory clearly has great potential to literally explain the functionality of performance in general and of theatre in particular. At the beginning of the Western theatre tradition as we know it, Aristotle, the famous author of the Poetics, described the main functions of tragedy as fear, pity, and catharsis. While many historians argue over whether those translations from Ancient Greek are accurate, or even whether Aristotle existed to begin with, there is not much authentic evidence to work with, since the Greeks didn’t leave us their secrets on a flash drive. Assuming that this interpretation of Aristotle’s suggested dramatic functions is roughly compatible with the actual truth, we see how the list includes an affective state (fear/terror), empathy (pity), and purgation (catharsis).

Below is the full Aristotelian definition of tragedy: “Tragedy, then, is an imitation of an action that is serious, complete, and of a certain magnitude; in language embellished with each kind of artistic ornament, the several kinds being found in separate parts of the play; in the form of action, not of narrative; with incidents arousing pity and fear, wherewith to accomplish its katharsis of such emotions” (Butcher, 1917).

Aristotle has been immortalized as the “father” of Western philosophy, drama, and even neuroscience. While his relation to neuroscience may seem like a stretch, it is worth mentioning that Aristotle was a trained doctor and researcher himself. While he acknowledged the duality of human nature manifested in the tension between the mind and the heart, he did not believe in the brain’s involvement in emotions (Gross, p. 247). If only he had lived to see mirror neurons, he would have known that empathy, an essential component of theatre, calls the brain its home. Empathy has been inscribed in the history of drama since its known beginning, as well as in the history of humankind. In their review article, Bernhardt and colleagues (2012) conclude that multiple studies, mostly based on empathy for pain, showed that “empathic responses recruit, to some extent, brain areas similar to those engaged during the corresponding first-person state”. Lindenberger (2010) describes the mirror-neuronal process as two consecutive phases: stage one, imitation of the observed actions; stage two, internalization of the information and, as a result, understanding of it (p. 4). Those two stages may indeed constitute true empathy, and yet they only seem to be manifested in someone who is experiencing the event/emotion/story vicariously. When applied to people impersonating and embodying characters in a story, empathy alone cannot be enough.

Obviously, there is an endless number of acting techniques. The ones that prevail today, contemporary with modern neuroscience, tend to be based on the psychological approach. Realistic acting is assumed to be the most common acting style people are exposed to, whether via television, cinema, or live performance. We are going to set aside improvisational methods and other non-traditional, experimental approaches: in order to stay focused, let’s assume that realistic actors generally approach a character in a broadly similar way. And this way involves two stages of processing. First, the actor gets acquainted with the character by reading his or her story. During this stage, the actor is in the audience’s shoes: the incoming information resonates with his or her mind and perpetuates empathy. The actor’s goal, however, is not only to comprehend the story and the character affectively, but to undergo a process of transformation in order to portray and embody the given material. The actor must exist in the imaginary, or given, circumstances: therefore, logically, his or her body needs to adjust and to start functioning as that of the character. Since the body clearly includes the brain, can it be assumed that the actor rewires his or her brain to function as that of the non-existent character, too? Kemp (2008) suggests that “the experience of emotion is something that is part of a disembodied consciousness rather than the processes of the body” (p. 21). In this case, the emotions and the mind seem to be merged together, which contradicts the very traditional heart/mind dichotomy. But if we take a generalized realistic acting technique and trace every step of a character’s coming to life, it appears that consciousness and emotion walk hand in hand.

Once the actor internalizes information about the character, such as background, demographics, looks, relationship history, beliefs, and lifestyle (pretty much the equivalent of anyone’s first meeting with a psychologist), he or she connects that personal history with the given circumstances of the material being performed. Where is the line between the actor and the character? Where does the actor stop making decisions and begin choosing guided by the emotions of the character? Creating a character is essentially reconstructing a human being from scratch, attributing all human aspects to his or her being and existence. Emotions then become the driving force of this process of creation. On stage or on screen, the actor creates a life, re-creates and re-tells a story. Without living emotions, the audience wouldn’t buy it (literally and figuratively).

More on the current struggle to create an interdisciplinary bond between cognitive psychology and acting:

Bernhardt, B. C., & Singer, T. (2012). The neural basis of empathy. Annual Review of Neuroscience, 35, 1-23.

Butcher, S. H. (Ed.). (1917). The poetics of Aristotle. Macmillan.

Drinko, C. (2013). Theatrical improvisation, consciousness, and cognition. Palgrave Macmillan.

Engel, J. J., Siddons, H., & Engel, M. (1822). Practical Illustrations of Rhetorical Gesture and Action: Adapted from the English Drama: From a Work on the Subject by M. Engel. Sherwood, Neely and Jones.

Gross, C. (1995). Aristotle on the Brain. The Neuroscientist, 1, 245-250.

Kemp, R. J. (2010). Embodied acting: cognitive foundations of performance (Doctoral dissertation, University of Pittsburgh).

Lewis, Gibson, & Lannon. A primer on the neurobiology of inspiration. Retrieved from http://www.terrypearce.com/pdf/PREREAD_gibson_et_al_061024.pdf

Lindenberger, H. (2010). Arts in the brain; or, what might neuroscience tell us? In F. L. Aldama (Ed.), Toward a Cognitive Theory of Narrative Acts (pp. 13-35). Austin: University of Texas Press.


The Skeptics Society & Skeptic magazine

For decades now computer scientists and futurists have been telling us that computers will achieve human-level artificial intelligence soon. That day appears to be off in the distant future. Why? In this penetrating skeptical critique of AI, computer scientist Peter Kassan reviews the numerous reasons why this problem is harder than anyone anticipated.


On March 24, 2005, an announcement was made in newspapers across the country, from the New York Times 1 to the San Francisco Chronicle, 2 that a company 3 had been founded to apply neuroscience research to achieve human-level artificial intelligence. The reason the press release was so widely picked up is that the man behind it was Jeff Hawkins, the brilliant inventor of the PalmPilot, an invention that made him both wealthy and respected. 4

You’d think from the news reports that the idea of approaching the pursuit of artificial human-level intelligence by modeling the brain was a novel one. Actually, a Web search for “computational neuroscience” finds over a hundred thousand webpages and several major research centers. 5 At least two journals are devoted to the subject. 6 Over 6,000 papers are available online. Amazon lists more than 50 books about it. A Web search for “human brain project” finds more than eighteen thousand matches. 7 Many researchers think of modeling the human brain or creating a “virtual” brain as a feasible project, even if a “grand challenge.” 8 In other words, the idea isn’t a new one.

Hawkins’ approach sounds simple. Create a machine with artificial “senses” and then allow it to learn, build a model of its world, see analogies, make predictions, solve problems, and give us its solutions. 9 This sounds eerily similar to what Alan Turing 10 suggested in 1948. He, too, proposed to create an artificial “man” equipped with senses and an artificial brain that could “roam the countryside,” like Frankenstein’s monster, and learn whatever it needed to survive. 11

The fact is, we have no unifying theory of neuroscience. We don’t know what to build, much less how to build it. 12 As one observer put it, neuroscience appears to be making “antiprogress” — the more information we acquire, the less we seem to know. 13 Thirty years ago, the estimated number of neurons was between three and ten billion. Nowadays, the estimate is 100 billion. Thirty years ago it was assumed that the brain’s glial cells, which outnumber neurons by nine times, were purely structural and had no other function. In 2004, it was reported that this wasn’t true. 14

Even the most ardent artificial intelligence (A.I.) advocates admit that, so far at least, the quest for human-level intelligence has been a total failure. 15 Despite its checkered history, however, Hawkins concludes A.I. will happen: “Yes, we can build intelligent machines.” 16

A Brief History of A.I.

Duplicating or mimicking human-level intelligence is an old notion — perhaps as old as humanity itself. In the 19th century, as Charles Babbage conceived of ways to mechanize calculation, people started thinking it was possible — or arguing that it wasn’t. Toward the middle of the 20th century, as mathematical geniuses Claude Shannon, 17 Norbert Wiener, 18 John von Neumann, 19 Alan Turing, and others laid the foundations of the theory of computing, the necessary tool seemed available.

In 1955, a research project on artificial intelligence was proposed; the conference held the following summer is considered the official inauguration of the field. The proposal 20 is fascinating for its assertions, assumptions, hubris, and naïveté, all of which have characterized the field of A.I. ever since. The authors proposed that ten people could make significant progress in the field in two months. That ten-person, two-month project is still going strong — 50 years later. And it has involved the efforts of more like tens of thousands of people.

A.I. has splintered into three largely independent and mutually contradictory areas (connectionism, computationalism, and robotics), each of which has its own subdivisions and contradictions. Much of the activity in each of the areas has little to do with the original goals of mechanizing (or computerizing) human-level intelligence. However, in pursuit of that original goal, each of the three has its own set of problems, in addition to the many that they share.

1. Connectionism

Connectionism is the modern version of a philosophy of mind known as associationism. 21 Connectionism has applications to psychology and cognitive science, as well as underlying the schools of A.I. 22 that include both artificial neural networks 23 (ubiquitously said to be “inspired by” the nervous system) and the attempt to model the brain.

The latest estimates are that the human brain contains about 30 billion neurons in the cerebral cortex — the part of the brain associated with consciousness and intelligence. The 30 billion neurons of the cerebral cortex contain about a thousand trillion synapses (connections between neurons). 24

Without a detailed model of how synapses work on a neurochemical level, there’s no hope of modeling how the brain works. 25 Unlike the idealized and simplified connections in so-called artificial neural networks, those synapses are extremely variable in nature — they can have different cycle times, they can use different neurotransmitters, and so on. How much data must be gathered about each synapse? Somewhere between kilobytes (tens of thousands of numbers) and megabytes (millions of numbers). 26 And since the cycle time of synapses can be more than a thousand cycles per second, we may have to process those numbers a thousand times each second.
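
To get a feel for the scale being described, here is a minimal back-of-envelope sketch in Python that simply multiplies the figures quoted above; the synapse count, the kilobytes-to-megabytes range, and the thousand-updates-per-second rate are the article's estimates, not measurements, and the script does nothing more than restate them as totals.

    # Back-of-envelope arithmetic using the article's own figures.
    SYNAPSES = 1e15                    # "about a thousand trillion synapses"
    NUMBERS_PER_SYNAPSE = (1e4, 1e6)   # "kilobytes ... to megabytes" of numbers per synapse
    UPDATES_PER_SECOND = 1e3           # "a thousand cycles per second"

    for n in NUMBERS_PER_SYNAPSE:
        stored = SYNAPSES * n                     # numbers that must be stored
        per_second = stored * UPDATES_PER_SECOND  # numbers to recompute each second
        print(f"{n:.0e} numbers/synapse -> {stored:.1e} stored, {per_second:.1e} updates/s")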

Have we succeeded in modeling the brain of any animal, no matter how simple? The nervous system of a nematode (worm) known as C. (Caenorhabditis) elegans has been studied extensively for about 40 years. Several websites 27 and probably thousands of scientists are devoted exclusively or primarily to it. Although C. elegans is a very simple organism, it may be the most complicated creature to have its nervous system fully mapped. C. elegans has just over three hundred neurons, and they’ve been studied exhaustively. But mapping is not the same as modeling. No one has created a computer model of this nervous system — and the number of neurons in the human cortex alone is 100 million times larger. C. elegans has about seven thousand synapses. 28 The number of synapses in the human cortex alone is over 100 billion times larger.
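
The two ratios in this paragraph follow directly from the counts given; a short Python check, using the article's figures (including the roughly 30 billion cortical neurons and thousand trillion synapses cited earlier), reproduces the "100 million times" and "over 100 billion times" claims.

    # Ratios implied by the article's counts for C. elegans vs. the human cortex.
    CELEGANS_NEURONS = 302        # "just over three hundred neurons"
    CELEGANS_SYNAPSES = 7_000     # "about seven thousand synapses"
    CORTEX_NEURONS = 30e9         # "about 30 billion neurons" in the cerebral cortex
    CORTEX_SYNAPSES = 1e15        # "about a thousand trillion synapses"

    print(f"neuron ratio:  {CORTEX_NEURONS / CELEGANS_NEURONS:.1e}")    # ~1.0e+08, i.e. ~100 million
    print(f"synapse ratio: {CORTEX_SYNAPSES / CELEGANS_SYNAPSES:.1e}")  # ~1.4e+11, i.e. >100 billion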

The proposals to achieve human-level artificial intelligence by modeling the human brain fail to acknowledge the lack of any realistic computer model of a synapse, the lack of any realistic model of a neuron, the lack of any model of how glial cells interact with neurons, and the literally astronomical scale of what is to be simulated.

The typical artificial neural network consists of no more than 64 input “neurons,” approximately the same number of “hidden neurons,” and a number of output “neurons” between one and 256. 29 This, despite a 1988 prediction by one computer guru that by now the world should be filled with “neuroprocessors” containing about 100 million artificial neurons. 30

Even if every neuron in each layer of a three-layer artificial neural net with 64 neurons in each layer is connected to every neuron in the succeeding layer, and if all the neurons in the output layer are connected to each other (to allow creation of a “winner-takes-all” arrangement permitting only a single output neuron to fire), the total number of “synapses” can be no more than about 17 million, although most artificial neural networks typically contain far fewer — usually no more than a hundred or so.

Furthermore, artificial neurons resemble generalized Boolean logic gates more than actual neurons. Each neuron can be described by a single number — its “threshold.” Each synapse can be described by a single number — the strength of the connection — rather than the estimated minimum of ten thousand numbers required for a real synapse. Thus, the human cortex is at least 600 billion times more complicated than any artificial neural network yet devised.
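
For readers who have never seen one, the kind of artificial neuron being criticized here really is this small. The sketch below is a generic threshold unit, not any particular library's implementation, and the final lines simply redo the article's "600 billion times" arithmetic from its own figures.

    # A complete artificial "neuron" of the kind described above: one threshold
    # per unit, one weight per "synapse".
    def artificial_neuron(inputs, weights, threshold):
        """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    print(artificial_neuron([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.8))  # -> 1

    # The article's complexity comparison, using its own numbers.
    ANN_WEIGHTS = 17e6                # "about 17 million" synapses, one number each
    CORTEX_SYNAPSES = 1e15            # cortical synapses
    NUMBERS_PER_REAL_SYNAPSE = 1e4    # "estimated minimum of ten thousand numbers"
    print(f"{CORTEX_SYNAPSES * NUMBERS_PER_REAL_SYNAPSE / ANN_WEIGHTS:.1e}")  # ~5.9e+11, i.e. ~600 billion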

It is impossible to say how many lines of code a model of the brain would require; conceivably, the program itself might be relatively simple, with all the complexity in the data for each neuron and each synapse. But the distinction between the program and the data is unimportant. If each synapse were handled by the equivalent of only a single line of code, the program to simulate the cerebral cortex would be roughly 25 million times larger than what’s probably the largest software product ever written, Microsoft Windows, said to be about 40 million lines of code. 31 As a software project grows in size, the probability of failure increases. 32 The probability of successfully completing a project 25 million times more complex than Windows is effectively zero.
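
The "25 million times larger" figure is just the ratio of the two numbers quoted in the paragraph, as this one-line check shows.

    CORTEX_SYNAPSES = 1e15   # one line of code per synapse, per the article's assumption
    WINDOWS_LOC = 40e6       # "said to be about 40 million lines of code"
    print(CORTEX_SYNAPSES / WINDOWS_LOC)  # 25000000.0, i.e. ~25 million times Windows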

Moore’s “Law” is often invoked at this stage in the A.I. argument. 33 But Moore’s Law is more of an observation than a law, and it is often misconstrued to mean that about every 18 months computers and everything associated with them double in capacity, speed, and so on. But Moore’s Law won’t solve the complexity problem at all. There’s another “law,” this one attributed to Niklaus Wirth: software gets slower faster than hardware gets faster. 34 Even though, according to Moore’s Law, your personal computer should be about a hundred thousand times more powerful than it was 25 years ago, your word processor isn’t. Moore’s Law doesn’t apply to software.
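
The "hundred thousand times" figure follows from the 18-month doubling period; a quick calculation, treating Moore's Law as a clean exponential (which is itself a simplification), makes the arithmetic explicit.

    YEARS = 25
    DOUBLING_PERIOD = 1.5                    # years, i.e. "about every 18 months"
    growth = 2 ** (YEARS / DOUBLING_PERIOD)  # compounded doublings over 25 years
    print(round(growth))                     # ~103968, roughly a hundred thousand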

And perhaps last, there is the problem of testing. The minimum number of software errors observed has been about 2.5 errors per function point. 35 A software program large enough to simulate the human brain would contain about 20 trillion errors.
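
One way to reconstruct the "20 trillion errors" figure is sketched below. Note that the 125 lines of code per function point is my bridging assumption (a commonly quoted rule of thumb for C-like languages), not a number given in the article; the one-line-per-synapse count and the 2.5 errors per function point are the article's.

    LINES_OF_CODE = 1e15              # one line per synapse, as above
    LOC_PER_FUNCTION_POINT = 125      # assumption: a common rule of thumb, not from the article
    ERRORS_PER_FUNCTION_POINT = 2.5   # the article's minimum observed error rate
    errors = LINES_OF_CODE / LOC_PER_FUNCTION_POINT * ERRORS_PER_FUNCTION_POINT
    print(f"{errors:.1e}")            # 2.0e+13, i.e. about 20 trillion errors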

Testing conventional software (such as a word processor or Windows) involves, among many other things, confirming that its behavior matches detailed specifications of what it is intended to do for every possible input. If it doesn’t, the software is examined and fixed. Connectionistic software comes with no such specifications — only the vague description that it is to “learn” a “pattern” or act “like” a natural system, such as the brain. Even if one discovers that a connectionistic software program isn’t acting the way you want it to, there’s no way to “fix” it, because the behavior of the program is the result of an untraceable and unpredictable network of interconnections.

Testing connectionistic software is also impossible due to what’s known as the combinatorial explosion. The retina (of a single eye) contains about 120 million rods and 7 million cones. 36 Even if each of those 127 million neurons were merely binary, like the beloved 8×8 input grid of the typical artificial neural network (that is, either responded or didn’t respond to light), the number of different possible combinations of input is a number greater than 1 followed by 38,230,809 zeroes. (The number of particles in the universe has been estimated to be about 1 followed by only 80 zeroes. 37 ) Testing an artificial neural network with input consisting of an 8×8 binary grid is, by comparison, a small job: such a grid can assume any of 18,446,744,073,709,551,616 configurations — orders of magnitude smaller, but still impossible.
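
Both of the big numbers in this paragraph can be checked in a couple of lines; the digit count for the retina case comes from the base-10 logarithm of 2 raised to the 127 million binary inputs.

    import math

    RETINA_CELLS = 127_000_000                          # 120 million rods + 7 million cones
    zeroes = math.floor(RETINA_CELLS * math.log10(2))   # exponent of the power of ten just below 2**N
    print(f"2**{RETINA_CELLS} ~ 1 followed by {zeroes:,} zeroes")  # ~38,230,809 zeroes

    print(f"8x8 binary grid: {2**64:,} configurations")  # 18,446,744,073,709,551,616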

2. Computationalism

Computationalism was originally defined as the “physical symbol system hypothesis,” meaning that “A physical symbol system has the necessary and sufficient means for general intelligent action.” 38 (This is actually a “formal symbol system hypothesis,” because the actual physical implementation of such a system is irrelevant.) Although that definition wasn’t published until 1976, it co-existed with connectionism from the very beginning. It has also been referred to as “G.O.F.A.I.” (good old-fashioned artificial intelligence). Computationalism is also referred to as the computational theory of mind. 39

The assumption behind computationalism is that we can achieve A.I. without having to simulate the brain. The mind can be treated as a formal symbol system, and the symbols can be manipulated on a purely syntactic level — without regard to their meaning or their context. If the symbols have any meaning at all (which, presumably, they do — or else why bother manipulating them?), that can be ignored until we reach the end of the manipulation. The symbols are at a recognizable level, more-or-less like ordinary words — a so-called “language of thought.” 40

The basic move is to treat the informal symbols of natural language as formal symbols. Although, during the early years of computer programming (and A.I.), this was an innovative idea, it has now become a routine practice in computer programming — so ubiquitous that it’s barely noticeable.

Unfortunately, natural language — which may not literally be the language of thought, but which any human-level A.I. program has to be able to handle — can’t be treated as a system of formal symbols. To give a simple example, “day” sometimes means “day and night” and sometimes means “day as opposed to night” — depending on context.

Joseph Weizenbaum 41 observes that a young man asking a young woman, “Will you come to dinner with me this evening?” 42 could, depending on context, simply express the young man’s interest in dining, or his hope to satisfy a desperate longing for love. The context — the so-called “frame” — needed to make sense of even a single sentence may be a person’s entire life.

An essential aspect of the computationalist approach to natural language is to determine the syntax of a sentence so that its semantics can be handled. As an example of why that is impossible, Terry Winograd 43 offers a pair of sentences:

The committee denied the group a parade permit because they advocated violence.

The committee denied the group a parade permit because they feared violence. 44

The sentences differ by only a single word (of exactly the same grammatical form). Disambiguating these sentences can’t be done without extensive — potentially unlimited — knowledge of the real world. 45 No program can do this without recourse to a “knowledge base” about committees, groups seeking marches, etc. In short, it is not possible to analyze a sentence of natural language syntactically until one resolves it semantically. But since one needs to parse the sentence syntactically before one can process it at all, it seems that one has to understand the sentence before one can understand the sentence.

In natural language, the boundaries of the meaning of words are inherently indistinct, whereas the boundaries of formal symbols aren’t. For example, in binary arithmetic, the difference between 0 and 1 is absolute. In natural language, the boundary between day and night is indistinct, and arbitrarily set for different purposes. To have a purely algorithmic system for natural language, we need a system that can manipulate words as if they were meaningless symbols while preserving the truth-value of the propositions, as we can with formal logic. When dealing with words — with natural language — we just can’t use conventional logic, since one “axiom” can affect the “axioms” we already have — birds can fly but penguins and ostriches are birds that can’t fly. Since the goal is to automate human-style reasoning, the next move is to try to develop a different kind of logic — so-called non-monotonic logic.

What used to be called logic without qualification is now called “monotonic” logic. In this kind of logic, the addition of a new axiom doesn’t change any axioms that have already been processed or inferences that have already been drawn. The attempt to formalize the way people reason is quite recent — and entirely motivated by A.I. And although the motivation can be traced back to the early years of A.I., the field essentially began with the publication of three papers in 1980. 46 However, according to one survey of the field in 2003, despite a quarter-century of work, all that we have are prospects and hope. 47
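
A toy sketch (not any of the formal non-monotonic logics the survey refers to) shows what "non-monotonic" means in practice: adding a new fact can withdraw a conclusion that had already been drawn, which classical, monotonic logic never does.

    # Default reasoning: birds fly, unless the bird is a known exception.
    def can_fly(animal, facts):
        if animal in facts.get("flightless", set()):
            return False
        return animal in facts.get("birds", set())

    facts = {"birds": {"tweety"}}
    print(can_fly("tweety", facts))      # True  -- concluded by default

    facts["flightless"] = {"tweety"}     # new "axiom": Tweety turns out to be a penguin
    print(can_fly("tweety", facts))      # False -- the earlier conclusion is retracted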

An assumption of computationalists is that the world consists of unambiguous facts that can be manipulated algorithmically. But what is a fact to you may not be a fact to me, and vice versa. 48 Furthermore, the computationalist approach assumes that experts apply a set of explicit, formalizable rules. The task of computationalists, then, is simply to debrief the experts on their rules. But, as numerous studies of actual experts have shown, 49 only beginners behave that way. At the highest level of expertise, people don’t even recognize that they’re making decisions. Rather, they are fluidly interacting with the changing situation, responding to patterns that they recognize. Thus, the computationalist approach leads to what should be called “beginner systems” rather than “expert systems.”

The way people actually reason can’t be reduced to an algorithmic procedure like arithmetic or formal logic. Even the most ardent practitioners of formal logic spend most of their time explaining and justifying the formal proofs scattered through their books and papers — using natural language (or their own unintelligible versions of it). Even more ironically, none of these practitioners of formal logic — all claiming to be perfectly rational — ever seem to agree with each other about any of their formal proofs.

Computationalist A.I. is plagued by a host of other problems. First of all, its systems don’t have any common sense. 50 Then there’s “the symbol-grounding problem.” 51 The analogy is trying to learn a language from a dictionary (without pictures) — every word (symbol) is simply defined using other words (symbols), so how does anything ever relate to the world? Then there’s the “frame problem” — which is essentially the problem of which context to apply to a given situation. 52 Some researchers consider it to be the fundamental problem in both computationalist and connectionist A.I. 53

The most serious computationalist attempt to duplicate human-level intelligence — perhaps the only serious attempt — is known as CYC 54 — short for enCYClopedia (but certainly meant also to echo “psych”). The head of the original project and the head of CYCORP, Douglas Lenat, 55 has been making public claims about its imminent success for more than twenty years. The stated goal of CYC is to capture enough human knowledge — including common sense — to, at the very least, pass an unrestricted Turing Test. 56 If any computationalist approach could succeed, it would be this mother of all expert systems.

Lenat had made some remarkable predictions: at the end of ten years (by 1994, he projected), the CYC knowledge base would contain 30-50% of consensus reality. 57 (It is difficult to say what this prediction means, because it assumes that we know what the totality of consensus reality is and that we know how to quantify and measure it.) The year 1994 would represent another milestone in the project: CYC would, by that time, be able to build its knowledge base by reading online materials and asking questions about them, rather than having people enter information. 58 And by 2001, Lenat said, CYC would have become a system with human-level breadth and depth of knowledge. 59

In 1990, CYC produced what it termed “A Midterm Report.” 60 Given that the effort started in 1984, calling it this implied that the project would be successfully completed by 1996, although in the section labeled “Conclusion” it refers to three possible outcomes that might occur by the end of the 1990s. One would hope that by that time CYC would at least be able to do simple arithmetic. In any case, the three scenarios are labeled “good” (totally failing to meet any of the milestones), “better” (which shifts the achievements to “the early twenty-first century” and still consists of “doing research”), and “best” (in which the achievement still isn’t “true A.I.” but only the “foundation for … true A.I.” in 2015).

Even as recently as 2002 (one year after CYC’s predicted achievement of human-level breadth and depth of knowledge), CYC’s website was still quoting Lenat making promises for the future: “This is the most exciting time we’ve ever seen with the project. We stand on the threshold of success.” 61

Perhaps most tellingly, Lenat’s principal coworker, R.V. Guha, 62 left the team in 1994, and was quoted in 1995 as saying, “CYC is generally viewed as a failed project. The basic idea of typing in a lot of knowledge is interesting but their knowledge representation technology seems poor.” 63 In the same article, Guha is further quoted as saying of CYC, as could be said of so many other A.I. projects, “We were killing ourselves trying to create a pale shadow of what had been promised.” It’s no wonder that G.O.F.A.I. has been declared “brain-dead.” 64

3. Robotics

The third and last major branch of the river of A.I. is robotics — the attempt to build a machine capable of autonomous intelligent behavior. Robots, at least, appear to address many of the problems of connectionism and computationalism: embodiment, 65 lack of goals, 66 the symbol-grounding problem, and the fact that conventional computer programs are “bedridden.” 67

However, when it comes to robots, the disconnect between the popular imagination and reality is perhaps the most dramatic. The notion of a fully humanoid robot is ubiquitous not only in science fiction but in supposedly non-fictional books, journals, and magazines, often by respected workers in the field.

This branch of the river has two sub-branches, one of which (cybernetics) has gone nearly dry, the other of which (computerized robotics) has in turn forked into three sub-branches. Remarkably, although robotics would seem to be the most purely down-to-earth engineering approach to A.I., its practitioners spend as much time publishing papers and books as do the connectionists and the computationalists.

Cybernetic Robotics

While Turing was speculating about building his mechanical man, W. Grey Walter 68 built what was probably the first autonomous vehicle, the robot “turtles” or “tortoises,” Elsie and Elmer. Following a cybernetic approach rather than a computational one, Walter’s turtles were controlled by a simple electronic circuit with a couple of vacuum tubes.

Although the actions of this machine were trivial and exhibited nothing that even suggested intelligence, Grey has been described as a robotics “pioneer” whose work was “highly successful and inspiring.” 69 On the basis of experimentation with a device that, speaking generously, simulated an organism with two neurons, he published two articles in Scientific American 70 (one per neuron!), as well as a book. 71

Cybernetics was the research program founded by Norbert Wiener, 72 and was essentially analog in its approach. In comparison with (digital) computer science, it is moribund if not quite dead. Like so many other approaches to artificial intelligence, the cybernetic approach simply failed to scale up. 73

Computerized Robots

The history of computerized robotics closely parallels the history of A.I. in general:

  • Grand theoretical visions, such as Turing’s musings (already discussed) about how his mechanical creature would roam the countryside.
  • Promising early results, such as Shakey, said to be “the first mobile robot to reason about its actions.” 74
  • A half-century of stagnation and disappointment. 75
  • Unrepentant grand promises for the future.

What a roboticist like Hans Moravec predicts for robots is the stuff of science fiction, as is evident from the title of his book, Robot: Mere Machine to Transcendent Mind. 76 For example, in 1997 Moravec asked the question, “When will computer hardware match the human brain?” and answered, “in the 2020s.” 77 This belief that robots will soon transcend human intelligence is echoed by many others in A.I. 78

In the field of computerized robots, there are three major approaches:

  • TOP-DOWN  The approach taken with Shakey and its successors, in which a computationalist computer program controls the robot’s activities. 79 Under the covers, the programs take the same approach as good old-fashioned artificial intelligence, except that instead of printing out answers, they cause the robot to do something.
  • OUTSIDE-IN  Consists of creating robots that imitate the superficial behavior of people, such as responding to the presence of people nearby, tracking eye movement, and so on. This is the approach largely taken recently by people working under Rodney A. Brooks. 80
  • BOTTOM-UP  Consists of creating robots that have no central control, but relatively simple mechanisms to control parts of their behavior. The notion is that by putting together enough of these simple mechanisms (presumably in the right arrangement), intelligence will “emerge.” Brooks has written extensively in support of this approach. 81

The claims of roboticists of all camps range from the unintelligible to the unsupportable.

As an example of the unintelligible, consider MIT’s Cog (short for “cognition”). The claim was that Cog displayed the intelligence (and behavior) of, initially, a six-month-old infant. The goal was for Cog to eventually display the intelligence of a two-year-old child. 82 A basic concept of intelligence — to the extent that anyone can agree on what the word means — is that (all things being equal) it stays constant throughout life. What changes as a child or animal develops is only the behavior. So, to make this statement at all intelligible, it would have to be translated into something like this: the initial goal is only that Cog will display the behavior of a six-month-old child that people consider indicative of intelligence, and later the behavior of a two-year-old child.

Even as corrected, this notion is also fallacious. Whatever behaviors a two-year-old child happens to display, as that child continues to grow and develop it will eventually display all the behavior of a normal adult, because the two-year-old has an entire human brain. However, even if we manage to create a robot that mimics all the behavior of a two-year-old child, there’s no reason to believe that that same robot will, without any further programming, ten years later display the behavior of a 12-year-old child, or later display the behavior of an adult.

Cog never even displayed the intelligent behavior of a typical six-month-old baby. 83 For it to behave like a two-year-old child, of course, it would have to use and understand natural language — thus far an insurmountable barrier for A.I.

The unsupportable claim is sometimes made that some robots have achieved “insect-level intelligence,” or at least that some robots duplicate the behavior of insects. 84 Such claims seem plausible simply because very few people are entomologists and so are unfamiliar with how complex and sophisticated insect behavior actually is. 85 Other experts, however, are not sure that we’ve achieved even that level. 86

According to the roboticists and their fans, Moore’s Law will come to the rescue. The implication is that we have the programs and the data all ready to go, and all that’s holding us back is a lack of computing power. After all, as soon as computers got powerful enough, they were able to beat the world’s best human chess player, weren’t they? (Well, no — a great deal of additional programming and chess knowledge was also needed.)

Sad to say, even if we had unlimited computer power and storage, we wouldn’t know what to do with it. The programs aren’t ready to go, because there aren’t any programs.

Even if it were true that current robots or computers had attained insect-level intelligence, this wouldn’t indicate that human-level artificial intelligence is attainable. The number of neurons in an insect brain is about 10,000, and in a human cerebrum about 30,000,000,000. But if you put together 3,000,000 cockroaches (this seems to be the A.I. idea behind “swarms”), you get a large cockroach colony, not human-level intelligence. If you somehow managed to graft together 3,000,000 natural or artificial cockroach brains, the result certainly wouldn’t be anything like a human brain, and it is unlikely that it would be any more “intelligent” than the cockroach colony would be. Other species have brains as large as or larger than those of humans, and none of them display human-level intelligence — natural language, conceptualization, or the ability to reason abstractly. 87 The notion that human-level intelligence is an “emergent property” of brains (or other systems) of a certain size or complexity is nothing but hopeful speculation.

Conclusions

With admirable can-do spirit, technological optimism, and a belief in inevitability, psychologists, philosophers, programmers, and engineers are sure they shall succeed, just as people dreamed that heavier-than-air flight would one day be achieved. 88 But 50 years after the Wright brothers succeeded with their proof-of-concept flight in 1903, aircraft had been used decisively in two world wars; the helicopter had been invented; several commercial airlines were routinely flying passengers all over the world; the jet airplane had been invented; and the speed of sound had been broken.

After more than 50 years of pursuing human-level artificial intelligence, we have nothing but promises and failures. The quest has become a degenerating research program 89 (or actually, an ever-increasing number of competing ones), pursuing an ever-increasing number of irrelevant activities as the original goal recedes ever further into the future — like the mirage it is.


That Is Not How Your Brain Works

The 21st century is a time of great scientific discovery. Cars are driving themselves. Vaccines against deadly new viruses are created in less than a year. The latest Mars Rover is hunting for signs of alien life. But we’re also surrounded with scientific myths: outdated beliefs that make their way regularly into news stories.

Being wrong is a normal and inevitable part of the scientific process. We scientists do our best with the tools we have, until new tools extend our senses and let us probe more deeply, broadly, or precisely. Over time, new discoveries lead us to major course corrections in our understanding of how the world works, such as natural selection and quantum physics. Failure, therefore, is an opportunity to discover and learn. 1


But sometimes, old scientific beliefs persist, and are even vigorously defended, long after we have sufficient evidence to abandon them. As a neuroscientist, I see scientific myths about the brain repeated regularly in the media and corners of academic research. Three of them, in particular, stand out for correction. After all, each of us has a brain, so it’s critical to understand how that three-pound blob between your ears works.

Myth number one is that specific parts of the human brain have specific psychological jobs. According to this myth, the brain is like a collection of puzzle pieces, each with a dedicated mental function. One puzzle piece is for vision, another is for memory, a third is for emotions, and so on. This view of the brain became popular in the 19th century, when it was called phrenology. Its practitioners believed they could discern your personality by measuring bumps on your skull. Phrenology was discredited by better data, but the general idea was never fully abandoned. 2

Today, we know the brain isn’t divided into puzzle pieces with dedicated psychological functions. Instead, the human brain is a massive network of neurons. 3 Most neurons have multiple jobs, not a single psychological purpose. 4 For example, neurons in a brain region called the anterior cingulate cortex are regularly involved in memory, emotion, decision-making, pain, moral judgments, imagination, attention, and empathy.

LIZARD BRAIN: Why does the tale linger that our instincts stem from a part of our brain inherited from reptilian ancestors? Because if bad behavior stems from our inner beasts, then we’re less responsible for some of our actions.

I’m not saying that every neuron can do everything, but most neurons do more than one thing. For example, a brain region that’s intimately tied to the ability to see, called primary visual cortex, also carries information about hearing, touch, and movement. 5 In fact, if you blindfold people with typical vision for a few days and teach them to read braille, neurons in their visual cortex become more devoted to the sense of touch. 6 (The effect disappears in a day or so without the blindfold.)

In addition, the primary visual cortex is not necessary for all aspects of vision. Scientists have believed for a long time that severe damage to the visual cortex in the left side of your brain will leave you unable to see out of your right eye, assuming that the ability to see out of one eye is largely due to the visual cortex on the opposite side. Yet more than 50 years ago, studies on cats with cortical blindness on one side showed that it is possible to restore some of the lost sight by cutting a connection deep in the cat’s midbrain. A bit more damage allowed the cats to orient toward and approach moving objects.

Perhaps the most famous example of puzzle-piece thinking is the “triune brain”: the idea that the human brain evolved in three layers. The deepest layer, known as the lizard brain and allegedly inherited from reptile ancestors, is said to house our instincts. The middle layer, called the limbic system, allegedly contains emotions inherited from ancient mammals. And the topmost layer, called the neocortex, is said to be uniquely human—like icing on an already baked cake—and supposedly lets us regulate our brutish emotions and instincts.


This compelling tale of brain evolution arose in the mid 20th century, when the most powerful tool for inspecting brains was an ordinary microscope. Modern research in molecular genetics, however, has revealed that the triune brain idea is a myth. Brains don’t evolve in layers, and all mammal brains (and most likely, all vertebrate brains as well) are built from a single manufacturing plan using the same kinds of neurons.

Nevertheless, the triune brain idea has tremendous staying power because it provides an appealing explanation of human nature. If bad behavior stems from our inner beasts, then we’re less responsible for some of our actions. And if a uniquely human and rational neocortex controls those beasts, then we have the most highly evolved brain in the animal kingdom. Yay for humans, right? But it’s all a myth. In reality, each species has brains that are uniquely and effectively adapted to their environments, and no animal brain is “more evolved” than any other.


So why does the myth of a compartmentalized brain persist? One reason is that brain-scanning studies are expensive. As a compromise, typical studies include only enough scanning to show the strongest, most robust brain activity. These underpowered studies produce pretty pictures that appear to show little islands of activity in a calm-looking brain. But they miss plenty of other, less robust activity that may still be psychologically and biologically meaningful. In contrast, when studies are run with enough power, they show activity in the majority of the brain. 7

Another reason is that animal studies sometimes focus on one small part of the brain at a time, even just a few neurons. In pursuit of precision, they wind up limiting their scope to the places where they expect to see effects. When researchers instead take a more holistic approach that considers all the neurons in a brain — say, in flies, worms, or even mice — the results look more like whole-brain effects. 8

Pretty much everything that your brain creates, from sights and sounds to memories and emotions, involves your whole brain. Every neuron communicates with thousands of others at the same time. In such a complex system, very little that you do or experience can be traced to a simple sum of parts.

Myth number two is that your brain reacts to events in the world. Supposedly, you go through your day with parts of your brain in the off position. Then something happens around you, and those parts switch on and “light up” with activity.

Brains, however, don’t work by stimulus and response. All your neurons are firing at various rates all the time. What are they doing? Busily making predictions. 9 In every moment, your brain uses all its available information (your memory, your situation, the state of your body) to take guesses about what will happen in the next moment. If a guess turns out to be correct, your brain has a head start: It’s already launching your body’s next actions and creating what you see, hear, and feel. If a guess is wrong, the brain can correct itself and hopefully learn to predict better next time. Or sometimes it doesn’t bother correcting the guess, and you might see or hear things that aren’t present or do something that you didn’t consciously intend. All of this prediction and correction happens in the blink of an eye, outside your awareness.

If a predicting brain sounds like science fiction, here’s a quick demonstration. What is this picture?

If you see only some curvy lines, then your brain is trying to make a good prediction and failing. It can’t match this picture to something similar in your past. (Scientists call this state “experiential blindness.”) To cure your blindness, visit lisafeldmanbarrett.com/nautilus and read the description, then come back here and look at the picture again. Suddenly, your brain can make meaning of the picture. The description gave your brain new information, which conjured up similar experiences in your past, and your brain used those experiences to launch better predictions for what you should see. Your brain has transformed ambiguous, curvy lines into a meaningful perception. (You will probably never see this picture as meaningless again.)

Predicting and correcting is a more efficient way to run a system than constantly reacting in an uncertain world. This is clear every time you watch a baseball game. When the pitcher hurls the ball at 96 miles per hour toward home plate, the batter doesn’t have enough time to wait for the ball to come close, consciously see it, and then prepare and execute the swing. Instead, the batter’s brain automatically predicts the ball’s future location, based on rich experience, and launches the swing based on that prediction, to be able to have a hope of hitting the ball. Without a predicting brain, sports as we know them would be impossible to play.

What does all this mean for you? You’re not a simple stimulus-response organism. The experiences you have today influence the actions that your brain automatically launches tomorrow.

T he third myth is that there’s a clear dividing line between diseases of the body, such as cardiovascular disease, and diseases of the mind, such as depression. The idea that body and mind are separate was popularized by the philosopher René Descartes in the 17th century (known as Cartesian dualism) and it’s still around today, including in the practice of medicine. Neuroscientists have found, however, that the same brain networks responsible for controlling your body also are involved in creating your mind. 10 A great example is the anterior cingulate cortex, which I mentioned earlier. Its neurons not only participate in all the psychological functions I listed, but also they regulate your organs, hormones, and immune system to keep you alive and well.

Modern research in molecular genetics has revealed that the triune brain idea is a myth.

Every mental experience has physical causes, and physical changes in your body often have mental consequences, thanks to your predicting brain. In every moment, your brain makes meaning of the whirlwind of activity inside your body, just as it does with sense data from the outside world. That meaning can take different forms. If you have tightness in your chest that your brain makes meaningful as physical discomfort, you’re likely to visit a cardiologist. But if your brain makes meaning of that same discomfort as distress, you’re more likely to book time with a psychiatrist. Note that your brain isn’t trying to distinguish two different physical sensations here. They are pretty much identical, and an incorrect prediction can cost you your life. Personally, I have three friends whose mothers were misdiagnosed with anxiety 11 when they had serious illnesses, and two of them died.

When it comes to illness, the boundary between physical and mental is porous. Depression is usually catalogued as a mental illness, but it’s as much a metabolic illness as cardiovascular disease, which itself has significant mood-related symptoms. These two diseases occur together so often that some medical researchers believe that one may cause the other. That perspective is steeped in Cartesian dualism. Both depression 12 and cardiovascular disease 13 are known to involve problems with metabolism, so it’s equally plausible that they share an underlying cause.

When thinking about the relationship between mind and body, it’s tempting to indulge in the myth that the mind is solely in the brain and the body is separate. Under the hood, however, your brain creates your mind while it regulates the systems of your body. That means the regulation of your body is itself part of your mind.

Science, like your brain, works by prediction and correction. Scientists use their knowledge to fashion hypotheses about how the world works. Then they observe the world, and their observations become evidence they use to test the hypotheses. If a hypothesis did not predict the evidence, then they update it as needed. We’ve all seen this process in action during the pandemic. First we heard that COVID-19 spread on surfaces, so everyone rushed to buy Purell and Clorox wipes. Later we learned that the virus is mainly airborne and the focus moved to ventilation and masks. This kind of change is a normal part of science: We adapt to what we learn. But sometimes hypotheses are so strong that they resist change. They are maintained not by evidence but by ideology. They become scientific myths.

Lisa Feldman Barrett (@LFeldmanBarrett) is a professor of psychology at Northeastern University and the author of Seven and a Half Lessons About the Brain. Learn more at LisaFeldmanBarrett.com.

1. Firestein, S. Failure: Why Science Is So Successful Oxford University Press, Oxford, UK (2015).

2. Uttal, W.R. The New Phrenology MIT Press, Cambridge, MA (2001).

3. Sporns, O. Networks of the Brain MIT Press, Cambridge, MA (2010).

4. Anderson, M.L. After Phrenology MIT Press, Cambridge, MA (2014).

5. Liang, M., Mouraux, A., Hu, L., & Lannetti, G.D. Primary sensory cortices contain distinguishable spatial patterns of activity for each sense. Nature Communications 4, 1979 (2013).

6. Merabet, L.B., et al. Rapid and reversible recruitment of early visual cortex for touch. PLoS One 3, e3046 (2008).

7. Gonzalez-Castillo, J., et al. Whole-brain, time-locked activation with simple tasks revealed using massive averaging and model-free analysis. Proceedings of the National Academy of Sciences 109, 5487-5492 (2012).

8. Kaplan, H.S. & Zummer, M. Brain-wide representations of ongoing behavior: A universal principle? Current Opinion in Neurobiology 64, 60-69 (2020).

9. Hutchinson, J.B. & Barrett, L.F. The power of predictions: An emerging paradigm for psychological research. Current Directions in Psychological Science 28, 280-291 (2019).

10. Kleckner, I.R., et al. Evidence for a large-scale brain system supporting allostasis and interoception in humans. Nature Human Behavior 1, 0069 (2017).

11. Martin, R., et al. Gender disparities in common sense models of illness among myocardial infarction victims. Health Psychology 23, 345-353 (2004).


The Degeneration of Dopamine Neurons in Parkinson's Disease: Insights from Embryology and Evolution of the Mesostriatocortical System

Correspondence: Dr. Philippe Vernier, Development, Evolution, Plasticity of the Nervous System, UPR2197, Institute of Neurobiology A. Fessard, CNRS, 91198 Gif-sur-Yvette, France.

Abstract

Parkinson's disease (PD) is, to a large extent, specific to the human species. Most symptoms are the consequence of the preferential degeneration of the dopamine-synthesizing cells of the mesostriatal-mesocortical neuronal pathway. The reasons for this can be traced back to the evolutionary mechanisms that shaped the dopamine neurons in humans. In vertebrates, dopamine-containing neurons and nuclei do not exhibit homogeneous phenotypes. In this respect, mesencephalic dopamine neurons of the substantia nigra and ventral tegmental area are characterized by a molecular combination (tyrosine hydroxylase, aromatic amino acid decarboxylase, monoamine oxidase, vesicular monoamine transporter, dopamine transporter—to name a few), which is not found in other dopamine-containing neurons of the vertebrate brain. In addition, the size of these mesencephalic DA nuclei is tremendously expanded in humans as compared to other vertebrates. Differentiation of the mesencephalic neurons during development depends on genetic mechanisms, which also differ from those of other dopamine nuclei. In contrast, pathophysiological approaches to PD have highlighted the role of ubiquitously expressed molecules such as α-synuclein, parkin, and microtubule-associated proteins. We propose that the peculiar phenotype of the dopamine mesencephalic neurons, which has been selected during vertebrate evolution and reshaped in the human lineage, has also rendered these neurons particularly prone to oxidative stress, and thus to the fairly specific neurodegeneration of PD. Considerable evidence has accumulated demonstrating that perturbed regulation of DAT-dependent dopamine uptake, DAT-dependent accumulation of toxins, dysregulation of TH activity, and the high sensitivity of DA mesencephalic neurons to oxidants are key components of the neurodegenerative process in PD. This view points to the contribution of nonspecific mechanisms (α-synuclein aggregation) in a highly specific cellular environment (the dopamine mesencephalic neurons) and provides a robust framework to develop novel and rational therapeutic schemes in PD.


Bugs, mice, and people may share one ‘brain ancestor’

Humans, mice, and flies share the same genetic mechanisms that regulate the formation and function of brain areas involved in attention and movement control, according to a new study.

The findings shed light on the deep evolutionary past connecting organisms with seemingly unrelated body plans.

They also may help scientists understand the subtle changes that can occur in genes and brain circuits that can lead to mental health disorders such as anxiety and autism spectrum disorders.

“The crucial question scientists are trying to answer is: Did the brains in the animal kingdom evolve from a common ancestor?”

Resemblances between the nervous systems of vertebrates and invertebrates have been known since the early 18th century, but only recently have scientists asked whether such similarities are due to corresponding genetic programs that already existed in a common ancestor of vertebrates and invertebrates that lived more than half a billion years ago.

“The crucial question scientists are trying to answer is: Did the brains in the animal kingdom evolve from a common ancestor?” says coauthor Nicholas Strausfeld, professor of neuroscience at the University of Arizona. “Or, did brains evolve independently in different lineages of animals?”

Microscopic image of a fruit fly brain showing several neurons specified during development of the deutocerebral-tritocerebral boundary, or DTB, revealed by Green Fluorescent Protein. The circuits arising from the DTB play crucial roles in the regulation of behavior. (Credit: Jessika Bridi/Hirth Lab/King’s College London)

The study provides evidence of underlying gene regulatory networks governing the formation of two corresponding structures in the developing brains of fruit flies and vertebrates including mice and humans.

Uncovering previously unknown similarities in how their brains develop during embryogenesis, the study further supports the hypothesis of a basic brain architecture shared across the animal kingdom.

The evolution of the brain

The study in the Proceedings of the National Academy of Sciences provides strong evidence that the mechanisms that regulate genetic activity required for the formation of important behavior-related brain areas are the same for insects and mammals.

“The findings indicate that the evolution of their very different brains can be traced back to a single ancestral brain more than a half billion years ago.”

Most strikingly, the authors demonstrate that when these regulatory mechanisms are inhibited or impaired in insects and mammals, the animals show very similar behavioral problems. This indicates that the same building blocks that control the activity of genes are essential both to the formation of brain circuits and to the behavior-related functions those circuits perform. According to the researchers, this provides evidence that these mechanisms likely arose in a common ancestor.

“Our research indicates that the way the brain’s circuits are put in place is the same in humans, flies, and mice,” says senior study author Frank Hirth from the Institute of Psychiatry, Psychology, and Neuroscience at King’s College London. “The findings indicate that the evolution of their very different brains can be traced back to a single ancestral brain more than a half billion years ago.”

Using neuroanatomical observations and developmental genetic experiments, the researchers traced nerve cell lineages in the developing embryos of fruit flies and mice to identify how adult brain structures, along with their functionalities, unfold.

The team focused on those areas of the brain known as the deutocerebral-tritocerebral boundary, or DTB, in flies and the midbrain-hindbrain boundary, or MHB, in vertebrates including humans.

“In both vertebrates and arthropods, this boundary belongs to the anterior part of the brain and separates it from the rest,” Strausfeld says. “The anterior part integrates sensory inputs, forms memories, and plans and controls complex actions. The part behind it is essential for controlling balance and autonomic functions like breathing.”

Ancient but stable over time

Using genomic data, the researchers identified the genes that play a key role in the formation of brain circuits of the DTB in flies and the MHB in mice and men, and ascertained that these circuits play crucial roles in the regulation of behavior. They then ascertained which regions of the genome control when and where these genes are expressed.

They found that those genomic regions are very similar in flies, mice, and humans, indicating that they share the same fundamental genetic mechanism by which these brain areas develop.

“For many years researchers have been trying to find the mechanistic basis underlying behavior. We have discovered a crucial part of the jigsaw puzzle…”

Manipulating the relevant genomic regions in flies resulted in impaired behavior. This corresponds to findings from research on people where mutations in these gene regulatory sequences or the regulated genes themselves have been associated with behavioral disorders, including anxiety and autism spectrum disorders.

The research builds on previous work led by Hirth showing that the early divisions of the fly’s brain into distinctive parts, followed by an extended nerve cord, correspond to the three front-to-back divisions of the developing mouse brain and its spinal cord. Both in flies and mice, the development of each morphologically corresponding part requires the same set of genes, called homeobox genes, suggesting homologous genetic programs for brain development in invertebrates and vertebrates.

Evidence from soft tissue preservation in fossils of ancient arthropods studied by Strausfeld suggests that overall brain morphologies present in arthropod lineages living today must indeed have originated before the early Cambrian era, more than 520 million years ago.

“This implies that basic neural arrangements can be ancient and yet highly stable over geological time,” he says. “You could say the jigsaw puzzle of how the brain evolved still lacks an image on the box, but the pieces currently being added suggest a very early origin of essential circuits that, over an immense span of time, have been maintained, albeit with modification, across the great diversity of brains we see today.”

“For many years researchers have been trying to find the mechanistic basis underlying behavior,” Hirth says. “We have discovered a crucial part of the jigsaw puzzle by identifying these basic gene regulatory mechanisms required for midbrain circuit formation and function. If we can understand these very small, very basic building blocks, how they form and function, this will help find answers to what happens when things go wrong at a genetic level to cause these disorders.”

Additional researchers from King’s College London, the University of Arizona, the University of Leuven, and Leibniz Institute DSMZ contributed to the research.

Funding for this study came from the Ministry of Education of Brazil, King’s College London, the Research Foundation Flanders, the US National Science Foundation, the UK Medical Research Council, the UK Biotechnology and Biological Sciences Research Council, and the UK Motor Neuron Disease Association.


History of critical thinking

The intellectual roots of critical thinking can be traced back to 350 BC. The first to embrace critical thinking practices was the famous Greek philosopher Socrates, who was preoccupied with the observation that many people based their ideologies on empty rhetoric rather than on sound, rational thinking.

Confused meanings, contradictory beliefs, biases, and self-delusion formed the foundation of their argumentation, and Socrates constantly questioned those practices.

He was also wary of authority, because he believed that a person in power does not necessarily possess sound knowledge and insight; such a person might simply be a skilled manipulator and performer.

Socrates’ method of revealing unreasonable argumentation is called “Socratic Questioning” and epitomizes the idea of clarity and logical consistency.

His approach was later taken up by Plato, Aristotle, and the Greek skeptics, all of whom held that things might be different from how they appear and that the critical mind should read between the lines to establish a reasonable truth.

The ancient Greek philosophers were the forefathers of the critical thinking movement and influenced the work of other renowned thinkers like Francis Bacon, Descartes, Niccolo Machiavelli, Isaac Newton, Adam Smith, and Immanuel Kant.


Where does conscience come from? The neurology of conscience

According to the encyclopedia, conscience is a personal sense of the moral content of one’s own conduct, intentions, or character with regard to a feeling of obligation to do right or be good. Conscience, usually informed by acculturation and instruction, is thus generally understood to give intuitively authoritative judgments regarding the moral quality of single actions.

According to Wikipedia, conscience is a cognitive process that elicits emotion and rational associations based on an individual's moral philosophy or value system.

Conscience is the ability by which we make moral judgments about our own actions and our whole moral being. It is the part of the human psyche that causes mental anguish - guilt, remorse - when we violate our value system in our actions, thoughts, or words, and that gives us a sense of well-being and satisfaction when we act in accordance with that value system.

Conscience depends on our current moral judgment: if we do not consider something immoral, our conscience will not trouble us about it. Conscience is a faculty, an innate, genetic, evolutionary property of a person. At the same time, both its content and its strength depend on historical development, individual upbringing, living conditions, environment, and the spirit of the times.

Conscience, according to the philosophical approach, is a property related to morality. According to the religious approach, conscience is the voice of God within us, which speaks out, answering yes or no, whenever we are faced with a moral dilemma; on this view, conscience is a person’s God-given capacity for self-examination. According to the evolutionary approach, conscience is a property of our nature, created by selection. Its presence confers a selective advantage and has adaptive significance: those who have a conscience feel an inner compulsion to follow the rules of the group and do not need to be forced to follow them by external pressure.

Conscience is therefore the judgment and qualification of our actions, thoughts and intentions within ourselves. It is an innate capacity, shaped and modified by experience.

Conscience is obviously a psychological phenomenon, and its function is crucially linked to the brain and to neurological processes. This connection is clearly visible in the effects of certain brain injuries on conscience as a function that influences behavior. Numerous case studies of brain damage have shown that damage to particular areas of the brain can reduce or eliminate inhibitions, with a corresponding radical change in behavior. A classic example of the behavior-conscience-brain link is the case of Phineas Gage, whose extensive brain injury induced behavioral changes that can be explained by the effects of the change in conscience caused by the injury.

What is the physical origin of conscience? What is the neural process in the brain that results in conscience?

A step toward a neurological understanding of conscience is to notice that the emotional and behavioral regulating function of conscience can be triggered not only by our own actions but also by the behavior of others. For example, the emotional background of giving help to a person in distress or need is similar to the function of conscience. If we see someone being robbed on the street, we rush to their aid, guided by our conscience.

We do know something about the neurological background of this kind of behavior. Experiments suggest that it depends on the presence and function of mirror neurons in the brain. Perceiving an event that happens to someone else triggers the activity of mirror neurons, which generates an effect, a sensation, an emotional state, as if the event had happened to us. Because we experience the other person's situation in this way, behavioral mechanisms are induced that resemble how we would behave if the event had happened to us. The consequence of this state of mind is the helpful, supportive behavior that the situation requires. This type of behavior can equally be described as being guided by our conscience to help the other person.

In this way, by analogy, conscience - its philosophical concept, its emotional-psychological process - can be traced back to a specific neural process, the neurological activity of mirror neurons. On this basis, conscience as a phenomenon can be defined as a concrete neurological activity.

On this hypothesis, the phenomenon of conscience is the activity of mirror neurons triggered by our own actions, or even our intentions, which induces the brain (emotional) state that would arise if we were on the receiving end of that behavior. Conscience is therefore a neurological process, a feedback of our own behavior through the activation of mirror neurons: the effect of our actual or even intended actions on ourselves, simulated by mirror neurons acting in a feedback-like manner.

This explains why conscience is an innate property: it is the activity of mirror neurons, whose presence is a product of genetics and evolution. If this hypothesis could be experimentally proven, then the abnormal functioning of conscience in some individuals might be due to the absence or malfunctioning of mirror neurons, and could therefore be explained by neurological, physiological, and genetic causes. It would also explain the subjective nature of conscience: the feedback function of mirror neurons can only activate emotional states that have already been formed by a person's history, individual upbringing, living conditions, environment, and social influences. These states are tied to the subjective history of the individual.

Thus, on the basis of the mechanism outlined, conscience is not merely a philosophical concept or a supernatural phenomenon, but a concrete neurological process that can be adequately explained.


That Is Not How Your Brain Works

The 21st century is a time of great scientific discovery. Cars are driving themselves. Vaccines against deadly new viruses are created in less than a year. The latest Mars Rover is hunting for signs of alien life. But we’re also surrounded with scientific myths: outdated beliefs that make their way regularly into news stories.

Being wrong is a normal and inevitable part of the scientific process. We scientists do our best with the tools we have, until new tools extend our senses and let us probe more deeply, broadly, or precisely. Over time, new discoveries lead us to major course corrections in our understanding of how the world works, such as natural selection and quantum physics. Failure, therefore, is an opportunity to discover and learn. 1

But sometimes, old scientific beliefs persist, and are even vigorously defended, long after we have sufficient evidence to abandon them. As a neuroscientist, I see scientific myths about the brain repeated regularly in the media and corners of academic research. Three of them, in particular, stand out for correction. After all, each of us has a brain, so it’s critical to understand how that three-pound blob between your ears works.

Myth number one is that specific parts of the human brain have specific psychological jobs. According to this myth, the brain is like a collection of puzzle pieces, each with a dedicated mental function. One puzzle piece is for vision, another is for memory, a third is for emotions, and so on. This view of the brain became popular in the 19th century, when it was called phrenology. Its practitioners believed they could discern your personality by measuring bumps on your skull. Phrenology was discredited by better data, but the general idea was never fully abandoned. 2

Today, we know the brain isn’t divided into puzzle pieces with dedicated psychological functions. Instead, the human brain is a massive network of neurons. 3 Most neurons have multiple jobs, not a single psychological purpose. 4 For example, neurons in a brain region called the anterior cingulate cortex are regularly involved in memory, emotion, decision-making, pain, moral judgments, imagination, attention, and empathy.

LIZARD BRAIN: Why does the tale linger that our instincts stem from a part of our brain inherited from reptilian ancestors? Because if bad behavior stems from our inner beasts, then we’re less responsible for some of our actions. Galina Gala / Shutterstock

I’m not saying that every neuron can do everything, but most neurons do more than one thing. For example, a brain region that’s intimately tied to the ability to see, called primary visual cortex, also carries information about hearing, touch, and movement. 5 In fact, if you blindfold people with typical vision for a few days and teach them to read braille, neurons in their visual cortex become more devoted to the sense of touch. 6 (The effect disappears in a day or so without the blindfold.)

In addition, the primary visual cortex is not necessary for all aspects of vision. Scientists have believed for a long time that severe damage to the visual cortex in the left side of your brain will leave you unable to see out of your right eye, assuming that the ability to see out of one eye is largely due to the visual cortex on the opposite side. Yet more than 50 years ago, studies on cats with cortical blindness on one side showed that it is possible to restore some of the lost sight by cutting a connection deep in the cat’s midbrain. A bit more damage allowed the cats to orient toward and approach moving objects.

Perhaps the most famous example of puzzle-piece thinking is the “triune brain”: the idea that the human brain evolved in three layers. The deepest layer, known as the lizard brain and allegedly inherited from reptile ancestors, is said to house our instincts. The middle layer, called the limbic system, allegedly contains emotions inherited from ancient mammals. And the topmost layer, called the neocortex, is said to be uniquely human—like icing on an already baked cake—and supposedly lets us regulate our brutish emotions and instincts.

This compelling tale of brain evolution arose in the mid 20th century, when the most powerful tool for inspecting brains was an ordinary microscope. Modern research in molecular genetics, however, has revealed that the triune brain idea is a myth. Brains don’t evolve in layers, and all mammal brains (and most likely, all vertebrate brains as well) are built from a single manufacturing plan using the same kinds of neurons.

Nevertheless, the triune brain idea has tremendous staying power because it provides an appealing explanation of human nature. If bad behavior stems from our inner beasts, then we’re less responsible for some of our actions. And if a uniquely human and rational neocortex controls those beasts, then we have the most highly evolved brain in the animal kingdom. Yay for humans, right? But it’s all a myth. In reality, each species has brains that are uniquely and effectively adapted to their environments, and no animal brain is “more evolved” than any other.

So why does the myth of a compartmentalized brain persist? One reason is that brain-scanning studies are expensive. As a compromise, typical studies include only enough scanning to show the strongest, most robust brain activity. These underpowered studies produce pretty pictures that appear to show little islands of activity in a calm-looking brain. But they miss plenty of other, less robust activity that may still be psychologically and biologically meaningful. In contrast, when studies are run with enough power, they show activity in the majority of the brain. 7

Another reason is that animal studies sometimes focus on one small part of the brain at a time, even just a few neurons. In pursuit of precision, they wind up limiting their scope to the places where they expect to see effects. When researchers instead take a more holistic approach that focuses on all the neurons in a brain—say, in flies, worms, or even mice—the results show what looks more like whole-brain effects. 8

Pretty much everything that your brain creates, from sights and sounds to memories and emotions, involves your whole brain. Every neuron communicates with thousands of others at the same time. In such a complex system, very little that you do or experience can be traced to a simple sum of parts.

Myth number two is that your brain reacts to events in the world. Supposedly, you go through your day with parts of your brain in the off position. Then something happens around you, and those parts switch on and “light up” with activity.

Brains, however, don’t work by stimulus and response. All your neurons are firing at various rates all the time. What are they doing? Busily making predictions. 9 In every moment, your brain uses all its available information (your memory, your situation, the state of your body) to take guesses about what will happen in the next moment. If a guess turns out to be correct, your brain has a head start: It’s already launching your body’s next actions and creating what you see, hear, and feel. If a guess is wrong, the brain can correct itself and hopefully learn to predict better next time. Or sometimes it doesn’t bother correcting the guess, and you might see or hear things that aren’t present or do something that you didn’t consciously intend. All of this prediction and correction happens in the blink of an eye, outside your awareness.
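
As a rough illustration of this predict-then-correct loop (a toy sketch only, not a claim about how neurons actually compute; the function and numbers are invented for illustration), consider:

```python
# Toy predict-and-correct loop: keep a running guess about the next value of a
# "sensory" stream and nudge the guess by a fraction of each prediction error.

def predict_and_correct(signal, learning_rate=0.3):
    guess = signal[0]                    # start from the first observation
    for observed in signal[1:]:
        error = observed - guess         # how wrong was the last prediction?
        guess += learning_rate * error   # correct the guess for next time
        yield guess, error

# A noisy, slowly rising stream of "sensations".
stream = [1.0, 1.2, 0.9, 1.4, 1.6, 1.5, 1.9, 2.1]
for prediction, err in predict_and_correct(stream):
    print(f"prediction={prediction:.2f}  error={err:+.2f}")
```

The only point of the sketch is the shape of the loop: guess first, compare the guess with what actually arrives, adjust, repeat.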

If a predicting brain sounds like science fiction, here’s a quick demonstration. What is this picture?

If you see only some curvy lines, then your brain is trying to make a good prediction and failing. It can’t match this picture to something similar in your past. (Scientists call this state “experiential blindness.”) To cure your blindness, visit lisafeldmanbarrett.com/nautilus and read the description, then come back here and look at the picture again. Suddenly, your brain can make meaning of the picture. The description gave your brain new information, which conjured up similar experiences in your past, and your brain used those experiences to launch better predictions for what you should see. Your brain has transformed ambiguous, curvy lines into a meaningful perception. (You will probably never see this picture as meaningless again.)

Predicting and correcting is a more efficient way to run a system than constantly reacting in an uncertain world. This is clear every time you watch a baseball game. When the pitcher hurls the ball at 96 miles per hour toward home plate, the batter doesn’t have enough time to wait for the ball to come close, consciously see it, and then prepare and execute the swing. Instead, the batter’s brain automatically predicts the ball’s future location, based on rich experience, and launches the swing from that prediction in order to have any hope of hitting the ball. Without a predicting brain, sports as we know them would be impossible to play.

What does all this mean for you? You’re not a simple stimulus-response organism. The experiences you have today influence the actions that your brain automatically launches tomorrow.

The third myth is that there’s a clear dividing line between diseases of the body, such as cardiovascular disease, and diseases of the mind, such as depression. The idea that body and mind are separate was popularized by the philosopher René Descartes in the 17th century (known as Cartesian dualism), and it’s still around today, including in the practice of medicine. Neuroscientists have found, however, that the same brain networks responsible for controlling your body are also involved in creating your mind. 10 A great example is the anterior cingulate cortex, which I mentioned earlier. Its neurons not only participate in all the psychological functions I listed, but they also regulate your organs, hormones, and immune system to keep you alive and well.

Every mental experience has physical causes, and physical changes in your body often have mental consequences, thanks to your predicting brain. In every moment, your brain makes meaning of the whirlwind of activity inside your body, just as it does with sense data from the outside world. That meaning can take different forms. If you have tightness in your chest that your brain makes meaningful as physical discomfort, you’re likely to visit a cardiologist. But if your brain makes meaning of that same discomfort as distress, you’re more likely to book time with a psychiatrist. Note that your brain isn’t trying to distinguish two different physical sensations here. They are pretty much identical, and an incorrect prediction can cost you your life. Personally, I have three friends whose mothers were misdiagnosed with anxiety 11 when they had serious illnesses, and two of them died.

When it comes to illness, the boundary between physical and mental is porous. Depression is usually catalogued as a mental illness, but it’s as much a metabolic illness as cardiovascular disease, which itself has significant mood-related symptoms. These two diseases occur together so often that some medical researchers believe that one may cause the other. That perspective is steeped in Cartesian dualism. Both depression 12 and cardiovascular disease 13 are known to involve problems with metabolism, so it’s equally plausible that they share an underlying cause.

When thinking about the relationship between mind and body, it’s tempting to indulge in the myth that the mind is solely in the brain and the body is separate. Under the hood, however, your brain creates your mind while it regulates the systems of your body. That means the regulation of your body is itself part of your mind.

Science, like your brain, works by prediction and correction. Scientists use their knowledge to fashion hypotheses about how the world works. Then they observe the world, and their observations become evidence they use to test the hypotheses. If a hypothesis doesn’t predict the evidence, they update it as needed. We’ve all seen this process in action during the pandemic. First we heard that COVID-19 spread on surfaces, so everyone rushed to buy Purell and Clorox wipes. Later we learned that the virus is mainly airborne, and the focus moved to ventilation and masks. This kind of change is a normal part of science: We adapt to what we learn. But sometimes hypotheses are so entrenched that they resist change. They are maintained not by evidence but by ideology. They become scientific myths.

Lisa Feldman Barrett (@LFeldmanBarrett) is a professor of psychology at Northeastern University and the author of Seven and a Half Lessons About the Brain. Learn more at LisaFeldmanBarrett.com.

1. Firestein, S. Failure: Why Science Is So Successful Oxford University Press, Oxford, UK (2015).

2. Uttal, W.R. The New Phrenology MIT Press, Cambridge, MA (2001).

3. Sporns, O. Networks of the Brain MIT Press, Cambridge, MA (2010).

4. Anderson, M.L. After Phrenology MIT Press, Cambridge, MA (2014).

5. Liang, M., Mouraux, A., Hu, L., & Iannetti, G.D. Primary sensory cortices contain distinguishable spatial patterns of activity for each sense. Nature Communications 4, 1979 (2013).

6. Merabet, L.B., et al. Rapid and reversible recruitment of early visual cortex for touch. PLoS One 3, e3046 (2008).

7. Gonzalez-Castillo, J., et al. Whole-brain, time-locked activation with simple tasks revealed using massive averaging and model-free analysis. Proceedings of the National Academy of Sciences 109, 5487-5492 (2012).

8. Kaplan, H.S. & Zimmer, M. Brain-wide representations of ongoing behavior: A universal principle? Current Opinion in Neurobiology 64, 60-69 (2020).

9. Hutchinson, J.B. & Barrett, L.F. The power of predictions: An emerging paradigm for psychological research. Current Directions in Psychological Science 28, 280-291 (2019).

10. Kleckner, I.R., et al. Evidence for a large-scale brain system supporting allostasis and interoception in humans. Nature Human Behaviour 1, 0069 (2017).

11. Martin, R., et al. Gender disparities in common sense models of illness among myocardial infarction victims. Health Psychology 23, 345-353 (2004).


The Skeptics Society & Skeptic magazine

For decades now computer scientists and futurists have been telling us that computers will achieve human-level artificial intelligence soon. That day appears to be off in the distant future. Why? In this penetrating skeptical critique of AI, computer scientist Peter Kassan reviews the numerous reasons why this problem is harder than anyone anticipated.

On March 24, 2005, an announcement was made in newspapers across the country, from the New York Times 1 to the San Francisco Chronicle, 2 that a company 3 had been founded to apply neuroscience research to achieve human-level artificial intelligence. The reason the press release was so widely picked up is that the man behind it was Jeff Hawkins, the brilliant inventor of the PalmPilot, an invention that made him both wealthy and respected. 4

You’d think from the news reports that the idea of approaching the pursuit of artificial human-level intelligence by modeling the brain was a novel one. Actually, a Web search for “computational neuroscience” finds over a hundred thousand webpages and several major research centers. 5 At least two journals are devoted to the subject. 6 Over 6,000 papers are available online. Amazon lists more than 50 books about it. A Web search for “human brain project” finds more than eighteen thousand matches. 7 Many researchers consider modeling the human brain or creating a “virtual” brain a feasible project, even if a “grand challenge.” 8 In other words, the idea isn’t a new one.

Hawkins’ approach sounds simple. Create a machine with artificial “senses” and then allow it to learn, build a model of its world, see analogies, make predictions, solve problems, and give us their solutions. 9 This sounds eerily similar to what Alan Turing 10 suggested in 1948. He, too, proposed to create an artificial “man” equipped with senses and an artificial brain that could “roam the countryside,” like Frankenstein’s monster, and learn whatever it needed to survive. 11

The fact is, we have no unifying theory of neuroscience. We don’t know what to build, much less how to build it. 12 As one observer put it, neuroscience appears to be making “antiprogress” — the more information we acquire, the less we seem to know. 13 Thirty years ago, the estimated number of neurons was between three and ten billion. Nowadays, the estimate is 100 billion. Thirty years ago it was assumed that the brain’s glial cells, which outnumber neurons by nine times, were purely structural and had no other function. In 2004, it was reported that this wasn’t true. 14

Even the most ardent artificial intelligence (A.I.) advocates admit that, so far at least, the quest for human-level intelligence has been a total failure. 15 Despite its checkered history, however, Hawkins concludes A.I. will happen: “Yes, we can build intelligent machines.” 16

A Brief History of A.I.

Duplicating or mimicking human-level intelligence is an old notion — perhaps as old as humanity itself. In the 19th century, as Charles Babbage conceived of ways to mechanize calculation, people started thinking it was possible — or arguing that it wasn’t. Toward the middle of the 20th century, as mathematical geniuses Claude Shannon, 17 Norbert Wiener, 18 John von Neumann, 19 Alan Turing, and others laid the foundations of the theory of computing, the necessary tool seemed available.

In 1955, a research project on artificial intelligence was proposed; a conference the following summer is considered the official inauguration of the field. The proposal 20 is fascinating for its assertions, assumptions, hubris, and naïveté, all of which have characterized the field of A.I. ever since. The authors proposed that ten people could make significant progress in the field in two months. That ten-person, two-month project is still going strong — 50 years later. And it has involved the efforts of more like tens of thousands of people.

A.I. has splintered into three largely independent and mutually contradictory areas (connectionism, computationalism, and robotics), each of which has its own subdivisions and contradictions. Much of the activity in each of the areas has little to do with the original goals of mechanizing (or computerizing) human-level intelligence. However, in pursuit of that original goal, each of the three has its own set of problems, in addition to the many that they share.

1. Connectionism

Connectionism is the modern version of a philosophy of mind known as associationism. 21 Connectionism has applications to psychology and cognitive science, as well as underlying the schools of A.I. 22 that include both artificial neural networks 23 (ubiquitously said to be “inspired by” the nervous system) and the attempt to model the brain.

The latest estimates are that the human brain contains about 30 billion neurons in the cerebral cortex — the part of the brain associated with consciousness and intelligence. The 30 billion neurons of the cerebral cortex contain about a thousand trillion synapses (connections between neurons). 24

Without a detailed model of how synapses work on a neurochemical level, there’s no hope of modeling how the brain works. 25 Unlike the idealized and simplified connections in so-called artificial neural networks, those synapses are extremely variable in nature — they can have different cycle times, they can use different neurotransmitters, and so on. How much data must be gathered about each synapse? Somewhere between kilobytes (tens of thousands of numbers) and megabytes (millions of numbers). 26 And since the cycle time of synapses can be more than a thousand cycles per second, we may have to process those numbers a thousand times each second.
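
Putting those figures together gives a sense of the scale (the byte values below are one reading of "kilobytes" and "megabytes", chosen only to make the arithmetic concrete):

```python
# Back-of-envelope estimate built from the figures quoted above.
SYNAPSES = 1_000_000_000_000_000          # ~a thousand trillion cortical synapses
LOW_PER_SYNAPSE = 10 * 1024               # "kilobytes" case: ~10 KB of state per synapse
HIGH_PER_SYNAPSE = 1024 ** 2              # "megabytes" case: ~1 MB of state per synapse
UPDATES_PER_SECOND = 1000                 # synapse cycle times can exceed 1,000 per second

for label, per_synapse in [("low", LOW_PER_SYNAPSE), ("high", HIGH_PER_SYNAPSE)]:
    total_bytes = SYNAPSES * per_synapse
    print(f"{label} estimate: ~{total_bytes / 1e18:,.0f} exabytes of state, "
          f"revisited ~{UPDATES_PER_SECOND} times per second")
```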

Have we succeeded in modeling the brain of any animal, no matter how simple? The nervous system of a nematode (worm) known as C. (Caenorhabditis) elegans has been studied extensively for about 40 years. Several websites 27 and probably thousands of scientists are devoted exclusively or primarily to it. Although C. elegans is a very simple organism, it may be the most complicated creature to have its nervous system fully mapped. C. elegans has just over three hundred neurons, and they’ve been studied exhaustively. But mapping is not the same as modeling. No one has created a computer model of this nervous system — and the number of neurons in the human cortex alone is 100 million times larger. C. elegans has about seven thousand synapses. 28 The number of synapses in the human cortex alone is over 100 billion times larger.
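
A quick sanity check of that comparison, using 302 as the commonly cited neuron count for the C. elegans hermaphrodite and the cortical figures quoted earlier:

```python
# Scale comparison: C. elegans versus the human cerebral cortex.
c_elegans_neurons = 302                       # fully mapped nematode nervous system
c_elegans_synapses = 7_000                    # "about seven thousand synapses"
cortex_neurons = 30_000_000_000               # ~30 billion cortical neurons
cortex_synapses = 1_000_000_000_000_000       # ~a thousand trillion cortical synapses

print(cortex_neurons / c_elegans_neurons)     # ~1e8: about 100 million times more neurons
print(cortex_synapses / c_elegans_synapses)   # ~1.4e11: over 100 billion times more synapses
```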

The proposals to achieve human-level artificial intelligence by modeling the human brain fail to acknowledge the lack of any realistic computer model of a synapse, the lack of any realistic model of a neuron, the lack of any model of how glial cells interact with neurons, and the literally astronomical scale of what is to be simulated.

The typical artificial neural network consists of no more than 64 input “neurons,” approximately the same number of “hidden neurons,” and a number of output “neurons” between one and 256. 29 This, despite a 1988 prediction by one computer guru that by now the world should be filled with “neuroprocessors” containing about 100 million artificial neurons. 30

Even if every neuron in each layer of a three-layer artificial neural net with 64 neurons in each layer is connected to every neuron in the succeeding layer, and if all the neurons in the output layer are connected to each other (to allow creation of a “winner-takes-all” arrangement permitting only a single output neuron to fire), the total number of “synapses” can be no more than about 17 million, although most artificial neural networks contain far fewer, usually no more than a hundred or so.

Furthermore, artificial neurons resemble generalized Boolean logic gates more than actual neurons. Each neuron can be described by a single number — its “threshold.” Each synapse can be described by a single number — the strength of the connection — rather than the estimated minimum of ten thousand numbers required for a real synapse. Thus, the human cortex is at least 600 billion times more complicated than any artificial neural network yet devised.
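
To make this concrete, here is a small illustrative sketch: first a toy "artificial neuron" reduced to single numbers (the inputs, weights, and threshold are invented for the example), then the complexity ratio recomputed from the essay's own figures.

    # (1) An artificial neuron reduced to single numbers: one weight per
    #     "synapse", one threshold per neuron (toy values, invented here).
    def artificial_neuron(inputs, weights, threshold):
        """Fire (return 1) if the weighted input sum reaches the threshold."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    print(artificial_neuron([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.5))  # 1

    # (2) The "at least 600 billion times more complicated" figure, recomputed
    #     from the essay's numbers: 10^15 real synapses times ~10^4 numbers each,
    #     versus ~17 million artificial "synapses" at one number apiece.
    real_cortex = 1e15 * 1e4
    largest_ann = 17e6 * 1.0
    print(f"ratio: {real_cortex / largest_ann:.1e}")  # ~5.9e+11, i.e. ~600 billion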

It is impossible to say how many lines of code the model of the brain would require; conceivably, the program itself might be relatively simple, with all the complexity in the data for each neuron and each synapse. But the distinction between the program and the data is unimportant. If each synapse were handled by the equivalent of only a single line of code, the program to simulate the cerebral cortex would be roughly 25 million times larger than what’s probably the largest software product ever written, Microsoft Windows, said to be about 40 million lines of code. 31 As a software project grows in size, the probability of failure increases. 32 The probability of successfully completing a project 25 million times more complex than Windows is effectively zero.
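
The 25-million figure follows directly from those two numbers; a one-line check:

    # The size comparison above, from the essay's own figures.
    SYNAPSE_LINES = 1e15    # one hypothetical line of code per synapse
    WINDOWS_LINES = 40e6    # the essay's figure for Microsoft Windows
    print(f"{SYNAPSE_LINES / WINDOWS_LINES:.1e}")  # 2.5e+07, i.e. 25 million times larger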

Moore’s “Law” is often invoked at this stage in the A.I. argument. 33 But Moore’s Law is more of an observation than a law, and it is often misconstrued to mean that about every 18 months computers and everything associated with them double in capacity, speed, and so on. In any case, Moore’s Law won’t solve the complexity problem at all. There’s another “law,” this one attributed to Niklaus Wirth: software gets slower faster than hardware gets faster. 34 Even though, according to Moore’s Law, your personal computer should be about a hundred thousand times more powerful than it was 25 years ago, your word processor isn’t. Moore’s Law doesn’t apply to software.

And perhaps last, there is the problem of testing. The lowest defect rate observed in software has been about 2.5 errors per function point. 35 At that rate, a software program large enough to simulate the human brain would contain about 20 trillion errors.

Testing conventional software (such as a word processor or Windows) involves, among many other things, confirming that its behavior matches detailed specifications of what it is intended to do for every possible input. If it doesn’t, the software is examined and fixed. Connectionistic software comes with no such specifications, only the vague description that it is to “learn” a “pattern” or act “like” a natural system, such as the brain. Even if you discover that a connectionistic program isn’t acting the way you want it to, there’s no way to “fix” it, because the behavior of the program is the result of an untraceable and unpredictable network of interconnections.

Testing connectionistic software is also impossible due to what’s known as the combinatorial explosion. The retina (of a single eye) contains about 120 million rods and 7 million cones. 36 Even if each of those 127 million neurons were merely binary, like the beloved 8×8 input grid of the typical artificial neural network (that is, either responded or didn’t respond to light), the number of different possible combinations of input is a number greater than 1 followed by 38,230,809 zeroes. (The number of particles in the universe has been estimated to be about 1 followed by only 80 zeroes. 37 ) Testing an artificial neural network with input consisting of an 8×8 binary grid is, by comparison, a small job: such a grid can assume any of 18,446,744,073,709,551,616 configurations — orders of magnitude smaller, but still impossible.
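
The arithmetic behind both figures is easy to reproduce; a short sketch:

    # The combinatorial arithmetic behind the paragraph above.
    from math import log10

    RETINAL_NEURONS = 127_000_000  # ~120 million rods + ~7 million cones, treated as binary
    digits = int(RETINAL_NEURONS * log10(2)) + 1
    print(f"2 ** 127,000,000 has {digits:,} decimal digits")   # 38,230,810

    GRID_CELLS = 8 * 8             # the 8 x 8 binary input grid
    print(f"8 x 8 grid configurations: {2 ** GRID_CELLS:,}")   # 18,446,744,073,709,551,616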

2. Computationalism

Computationalism was originally defined as the “physical symbol system hypothesis,” meaning that “A physical symbol system has the necessary and sufficient means for general intelligent action.” 38 (This is actually a “formal symbol system hypothesis,” because the actual physical implementation of such a system is irrelevant.) Although that definition wasn’t published until 1976, it co-existed with connectionism from the very beginning. It has also been referred to as “G.O.F.A.I.” (good old-fashioned artificial intelligence). Computationalism is also referred to as the computational theory of mind. 39

The assumption behind computationalism is that we can achieve A.I. without having to simulate the brain. The mind can be treated as a formal symbol system, and the symbols can be manipulated on a purely syntactic level — without regard to their meaning or their context. If the symbols have any meaning at all (which, presumably, they do — or else why bother manipulating them?), that can be ignored until we reach the end of the manipulation. The symbols are at a recognizable level, more-or-less like ordinary words — a so-called “language of thought.” 40

The basic move is to treat the informal symbols of natural language as formal symbols. Although, during the early years of computer programming (and A.I.), this was an innovative idea, it has now become a routine practice in computer programming — so ubiquitous that it’s barely noticeable.

Unfortunately, natural language (which may not literally be the language of thought, but which any human-level A.I. program has to be able to handle) can’t be treated as a system of formal symbols. To give a simple example, “day” sometimes means “day and night” and sometimes means “day as opposed to night,” depending on context.

Joseph Weizenbaum 41 observes that a young man asking a young woman, “Will you come to dinner with me this evening?” 42 could, depending on context, simply express the young man’s interest in dining, or his hope to satisfy a desperate longing for love. The context — the so-called “frame” — needed to make sense of even a single sentence may be a person’s entire life.

An essential aspect of the computationalist approach to natural language is to determine the syntax of a sentence so that its semantics can be handled. As an example of why that is impossible, Terry Winograd 43 offers a pair of sentences:

The committee denied the group a parade permit because they advocated violence.

The committee denied the group a parade permit because they feared violence. 44

The sentences differ by only a single word (of exactly the same grammatical form). Disambiguating these sentences can’t be done without extensive — potentially unlimited — knowledge of the real world. 45 No program can do this without recourse to a “knowledge base” about committees, groups seeking marches, etc. In short, it is not possible to analyze a sentence of natural language syntactically until one resolves it semantically. But since one needs to parse the sentence syntactically before one can process it at all, it seems that one has to understand the sentence before one can understand the sentence.
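
A toy sketch makes the point concrete. It is not a real parser or anyone's actual system: both sentences have the same surface structure, and the pronoun is resolved only by consulting a hand-coded scrap of world knowledge, invented purely for this illustration.

    # Toy illustration: the two sentences parse identically, so the pronoun
    # "they" can only be resolved by consulting encoded world knowledge.
    # The "knowledge base" below is invented solely for this example.
    WORLD_KNOWLEDGE = {
        "advocated violence": "the group",    # permit-seekers might advocate it
        "feared violence": "the committee",   # permit-deniers might fear it
    }

    def resolve_they(sentence):
        """Pick a referent for 'they' by knowledge lookup, not by grammar."""
        for clause, referent in WORLD_KNOWLEDGE.items():
            if clause in sentence:
                return referent
        return "unresolved"

    s1 = "The committee denied the group a parade permit because they advocated violence."
    s2 = "The committee denied the group a parade permit because they feared violence."
    print(resolve_they(s1))  # the group
    print(resolve_they(s2))  # the committee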

In natural language, the boundaries of the meanings of words are inherently indistinct, whereas the boundaries of formal symbols are not. In binary arithmetic, for example, the difference between 0 and 1 is absolute, while in natural language the boundary between day and night is indistinct, and is set arbitrarily for different purposes. To have a purely algorithmic system for natural language, we would need a system that can manipulate words as if they were meaningless symbols while preserving the truth-value of propositions, as we can with formal logic. But when dealing with words, with natural language, we can’t use conventional logic, because a new “axiom” can affect the “axioms” we already have: birds can fly, but penguins and ostriches are birds that can’t. Since the goal is to automate human-style reasoning, the next move is to try to develop a different kind of logic, so-called non-monotonic logic.

What used to be called logic without qualification is now called “monotonic” logic. In this kind of logic, the addition of a new axiom doesn’t change any axioms that have already been processed or inferences that have already been drawn. The attempt to formalize the way people reason is quite recent — and entirely motivated by A.I. And although the motivation can be traced back to the early years of A.I., the field essentially began with the publication of three papers in 1980. 46 However, according to one survey of the field in 2003, despite a quarter-century of work, all that we have are prospects and hope. 47
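
A minimal sketch of the non-monotonic flavor being described, using the birds-and-penguins example from the previous paragraph; it is an illustrative toy, not any particular formal system from the papers cited.

    # Default reasoning in miniature: "birds fly" holds until a new fact
    # (penguin, ostrich) arrives, at which point the earlier inference is
    # withdrawn -- something monotonic logic never does.
    def can_fly(facts):
        if "penguin" in facts or "ostrich" in facts:
            return False            # the exception overrides the default
        return "bird" in facts      # default rule: birds fly

    facts = {"bird"}
    print(can_fly(facts))           # True  -- inferred by default

    facts.add("penguin")            # a new "axiom" is added...
    print(can_fly(facts))           # False -- the earlier conclusion no longer holds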

An assumption of computationalists is that the world consists of unambiguous facts that can be manipulated algorithmically. But what is a fact to you may not be a fact to me, and vice versa. 48 Furthermore, the computationalist approach assumes that experts apply a set of explicit, formalizable rules. The task of computationalists, then, is simply to debrief the experts on their rules. But, as numerous studies of actual experts have shown, 49 only beginners behave that way. At the highest level of expertise, people don’t even recognize that they’re making decisions. Rather, they are fluidly interacting with the changing situation, responding to patterns that they recognize. Thus, the computationalist approach leads to what should be called “beginner systems” rather than “expert systems.”

The way people actually reason can’t be reduced to an algorithmic procedure like arithmetic or formal logic. Even the most ardent practitioners of formal logic spend most of their time explaining and justifying the formal proofs scattered through their books and papers — using natural language (or their own unintelligible versions of it). Even more ironically, none of these practitioners of formal logic — all claiming to be perfectly rational — ever seem to agree with each other about any of their formal proofs.

Computationalist A.I. is plagued by a host of other problems. First of all, its systems don’t have any common sense. 50 Then there’s “the symbol-grounding problem.” 51 The analogy is trying to learn a language from a dictionary (without pictures): every word (symbol) is simply defined using other words (symbols), so how does anything ever relate to the world? Then there’s the “frame problem,” which is essentially the problem of which context to apply to a given situation. 52 Some researchers consider it to be the fundamental problem in both computationalist and connectionist A.I. 53

The most serious computationalist attempt to duplicate human-level intelligence — perhaps the only serious attempt — is known as CYC, 54 short for enCYClopedia (but certainly meant also to echo “psych”). Douglas Lenat, 55 the head of the original project and of CYCORP, has been making public claims about its imminent success for more than twenty years. The stated goal of CYC is to capture enough human knowledge — including common sense — to, at the very least, pass an unrestricted Turing Test. 56 If any computationalist approach could succeed, it would be this mother of all expert systems.

Lenat made some remarkable predictions: at the end of ten years, by 1994, he projected, the CYC knowledge base would contain at least 30% of consensus reality. 57 (It is difficult to say what this prediction means, because it assumes that we know what the totality of consensus reality is and that we know how to quantify and measure it.) The year 1994 would represent another milestone in the project: CYC would, by that time, be able to build its knowledge base by reading online materials and asking questions about them, rather than by having people enter information. 58 And by 2001, Lenat said, CYC would have become a system with human-level breadth and depth of knowledge. 59

In 1990, CYC produced what it termed “A Midterm Report.” 60 Given that the effort started in 1984, calling it this implied that the project would be successfully completed by 1996, although the section labeled “Conclusion” refers to three possible outcomes that might occur by the end of the 1990s. (One would hope that by that time CYC would at least be able to do simple arithmetic.) In any case, the three scenarios are labeled “good” (totally failing to meet any of the milestones), “better” (which shifts the achievements to “the early twenty-first century” and still consists of “doing research”), and “best” (in which the achievement still isn’t “true A.I.” but only the “foundation for … true A.I.” — in 2015).

Even as recently as 2002 (one year after CYC’s predicted achievement of human-level breadth and depth of knowledge), CYC’s website was still quoting Lenat making promises for the future: “This is the most exciting time we’ve ever seen with the project. We stand on the threshold of success.” 61

Perhaps most tellingly, Lenat’s principal coworker, R.V. Guha, 62 left the team in 1994, and was quoted in 1995 as saying, “CYC is generally viewed as a failed project. The basic idea of typing in a lot of knowledge is interesting but their knowledge representation technology seems poor.” 63 In the same article, Guha is further quoted as saying of CYC, as could be said of so many other A.I. projects, “We were killing ourselves trying to create a pale shadow of what had been promised.” It’s no wonder that G.O.F.A.I. has been declared “brain-dead.” 64

3. Robotics

The third and last major branch of the river of A.I. is robotics: the attempt to build a machine capable of autonomous intelligent behavior. Robots, at least, appear to address many of the problems of connectionism and computationalism: embodiment, 65 lack of goals, 66 the symbol-grounding problem, and the fact that conventional computer programs are “bedridden.” 67

However, when it comes to robots, the disconnect between the popular imagination and reality is perhaps the most dramatic. The notion of a fully humanoid robot is ubiquitous not only in science fiction but in supposedly non-fictional books, journals, and magazines, often by respected workers in the field.

This branch of the river has two sub-branches, one of which (cybernetics) has gone nearly dry, the other of which (computerized robotics) has in turn forked into three sub-branches. Remarkably, although robotics would seem to be the most purely down-to-earth engineering approach to A.I., its practitioners spend as much time publishing papers and books as do the connectionists and the computationalists.

Cybernetic Robotics

While Turing was speculating about building his mechanical man, W. Grey Walter 68 built what were probably the first autonomous vehicles: the robot “turtles” or “tortoises,” Elsie and Elmer. Walter followed a cybernetic approach rather than a computational one; his turtles were controlled by a simple electronic circuit with a couple of vacuum tubes.

Although the actions of these machines were trivial and exhibited nothing that even suggested intelligence, Walter has been described as a robotics “pioneer” whose work was “highly successful and inspiring.” 69 On the basis of experimentation with a device that, speaking generously, simulated an organism with two neurons, he published two articles in Scientific American 70 (one per neuron!), as well as a book. 71

Cybernetics was the research program founded by Norbert Wiener, 72 and was essentially analog in its approach. In comparison with (digital) computer science, it is moribund if not quite dead. Like so many other approaches to artificial intelligence, the cybernetic approach simply failed to scale up. 73

Computerized Robots

The history of computerized robotics closely parallels the history of A.I. in general:

  • Grand theoretical visions, such as Turing’s musings (already discussed) about how his mechanical creature would roam the countryside.
  • Promising early results, such as Shakey, said to be “the first mobile robot to reason about its actions.” 74
  • A half-century of stagnation and disappointment. 75
  • Unrepentant grand promises for the future.

What a roboticist like Hans Moravec predicts for robots is the stuff of science fiction, as is evident from the title of his book, Robot: Mere Machine to Transcendent Mind. 76 For example, in 1997 Moravec asked the question, “When will computer hardware match the human brain?” and answered, “in the 2020s.” 77 This belief that robots will soon transcend human intelligence is echoed by many others in A.I. 78

In the field of computerized robots, there are three major approaches:

  • TOP-DOWN  The approach taken with Shakey and its successors, in which a computationalist computer program controls the robot’s activities. 79 Under the covers, the programs take the same approach as good old-fashioned artificial intelligence, except that instead of printing out answers, they cause the robot to do something.
  • OUTSIDE-IN  Consists of creating robots that imitate the superficial behavior of people, such as responding to the presence of people nearby, tracking eye movement, and so on. This is the approach largely taken recently by people working under Rodney A. Brooks. 80
  • BOTTOM-UP  Consists of creating robots that have no central control, but relatively simple mechanisms to control parts of their behavior. The notion is that by putting together enough of these simple mechanisms (presumably in the right arrangement), intelligence will “emerge.” Brooks has written extensively in support of this approach. 81

The claims of roboticists of all camps range from the unintelligible to the unsupportable.

As an example of the unintelligible, consider MIT’s Cog (short for “cognition”). The claim was that Cog displayed the intelligence (and behavior) of, initially, a six-month-old infant. The goal was for Cog to eventually display the intelligence of a two-year-old child. 82 A basic concept of intelligence — to the extent that anyone can agree on what the word means — is that (all things being equal) it stays constant throughout life. What changes as a child or animal develops is only the behavior. So, to make this statement at all intelligible, it would have to be translated into something like this: the initial goal is only that Cog will display the behavior of a six-month-old child that people consider indicative of intelligence, and later the behavior of a two-year-old child.

Even as corrected, this notion is fallacious. Whatever behaviors a two-year-old child happens to display, as that child continues to grow and develop it will eventually display all the behavior of a normal adult, because the two-year-old has an entire human brain. However, even if we manage to create a robot that mimics all the behavior of a two-year-old child, there is no reason to believe that that same robot will, without any further programming, ten years later display the behavior of a 12-year-old child, or later still, display the behavior of an adult.

Cog never even displayed the intelligent behavior of a typical six-month-old baby. 83 For it to behave like a two-year-old child, of course, it would have to use and understand natural language — thus far an insurmountable barrier for A.I.

The unsupportable claim is sometimes made that some robots have achieved “insect-level intelligence,” or at least that some robots duplicate the behavior of insects. 84 Such claims seem plausible simply because very few people are entomologists, and most are unfamiliar with how complex and sophisticated insect behavior actually is. 85 Other experts, however, are not sure that we’ve achieved even that level. 86

According to the roboticists and their fans, Moore’s Law will come to the rescue. The implication is that we have the programs and the data all ready to go, and all that’s holding us back is a lack of computing power. After all, as soon as computers got powerful enough, they were able to beat the world’s best human chess player, weren’t they? (Well, no — a great deal of additional programming and chess knowledge was also needed.)

Sad to say, even if we had unlimited computer power and storage, we wouldn’t know what to do with it. The programs aren’t ready to go, because there aren’t any programs.

Even if it were true that current robots or computers had attained insect-level intelligence, this wouldn’t indicate that human-level artificial intelligence is attainable. The number of neurons in an insect brain is about 10,000, and in a human cerebrum about 30,000,000,000. But if you put together 3,000,000 cockroaches (this seems to be the A.I. idea behind “swarms”), you get a large cockroach colony, not human-level intelligence. If you somehow managed to graft together 3,000,000 natural or artificial cockroach brains, the results certainly wouldn’t be anything like a human brain, and it is unlikely that the assembly would be any more “intelligent” than the cockroach colony would be. Other species have brains as large as or larger than the human brain, and none of them display human-level intelligence — natural language, conceptualization, or the ability to reason abstractly. 87 The notion that human-level intelligence is an “emergent property” of brains (or other systems) of a certain size or complexity is nothing but hopeful speculation.

Conclusions

With admirable can-do spirit, technological optimism, and a belief in inevitability, psychologists, philosophers, programmers, and engineers are sure they shall succeed, just as people dreamed that heavier-than-air flight would one day be achieved. 88 But 50 years after the Wright brothers succeeded with their proof-of-concept flight in 1903, aircraft had been used decisively in two world wars; the helicopter had been invented; several commercial airlines were routinely flying passengers all over the world; the jet airplane had been invented; and the speed of sound had been broken.

After more than 50 years of pursuing human-level artificial intelligence, we have nothing but promises and failures. The quest has become a degenerating research program 89 (or actually, an ever-increasing number of competing ones), pursuing an ever-increasing number of irrelevant activities as the original goal recedes ever further into the future — like the mirage it is.


History of critical thinking

The intellectual roots of critical thinking can be traced back to 350 BC. The first to embrace critical thinking practices was the famous Greek philosopher Socrates, who was obsessed with the notion that many people were basing their ideologies on empty rhetoric rather than on sane and rational thinking.

Confused meanings, contradictory beliefs, biases, and self-delusion formed the foundation of their argumentation, and Socrates constantly questioned those practices.

He also had an aversion to authority, because he believed that a person in power does not necessarily possess sound knowledge and insight; such a person might just be a good manipulator and performer.

Socrates’ method of revealing unreasonable argumentation is called “Socratic Questioning” and epitomizes the idea of clarity and logical consistency.

His approach was later adopted by Plato, Aristotle, and the Greek skeptics, all of whom held that things might be different from how they appear and that the critical mind should read between the lines 1 to establish a reasonable truth.

The ancient Greek philosophers were the forefathers of the critical thinking movement and influenced the work of other renowned thinkers like Francis Bacon, Descartes, Niccolo Machiavelli, Isaac Newton, Adam Smith, and Immanuel Kant.


The Degeneration of Dopamine Neurons in Parkinson's Disease: Insights from Embryology and Evolution of the Mesostriatocortical System



Abstract

Parkinson's disease (PD) is, to a large extent, specific to the human species. Most symptoms are the consequence of the preferential degeneration of the dopamine-synthesizing cells of the mesostriatal-mesocortical neuronal pathway. The reasons for this can be traced back to the evolutionary mechanisms that shaped the dopamine neurons in humans. In vertebrates, dopamine-containing neurons and nuclei do not exhibit homogeneous phenotypes. In this respect, mesencephalic dopamine neurons of the substantia nigra and ventral tegmental area are characterized by a molecular combination (tyrosine hydroxylase, aromatic amino acid decarboxylase, monoamine oxidase, vesicular monoamine transporter, dopamine transporter, to name a few) which is not found in other dopamine-containing neurons of the vertebrate brain. In addition, the size of these mesencephalic DA nuclei is tremendously expanded in humans as compared to other vertebrates. Differentiation of the mesencephalic neurons during development depends on genetic mechanisms, which also differ from those of other dopamine nuclei. In contrast, pathophysiological approaches to PD have highlighted the role of ubiquitously expressed molecules such as α-synuclein, parkin, and microtubule-associated proteins. We propose that the peculiar phenotype of the dopamine mesencephalic neurons, which has been selected during vertebrate evolution and reshaped in the human lineage, has also rendered these neurons particularly prone to oxidative stress, and thus to the fairly specific neurodegeneration of PD. Considerable evidence has accumulated to demonstrate that perturbed regulation of DAT-dependent dopamine uptake, DAT-dependent accumulation of toxins, dysregulation of TH activity, and the high sensitivity of DA mesencephalic neurons to oxidants are key components of the neurodegeneration process of PD. This view points to the contribution of nonspecific mechanisms (α-synuclein aggregation) in a highly specific cellular environment (the dopamine mesencephalic neurons) and provides a robust framework to develop novel and rational therapeutic schemes in PD.


Can logic be traced back to neurons? - Psychology

In order to understand the great debate of the Neuron Doctrine, which is associated with Cajal, Vs. the Nerve network, which is associated with Golgi, it is perhaps necessary to backtrack a little bit in order to see how these theories originated.

Technology (see technology tab) was finally developed enough for scientists to explore how the body functioned at the cellular level.
Equipped with the carmine stain, Otto Friedrich Karl Deiters observed the anatomy of nerve cells: particularly, the somas and dendrites. Deiters wondered how these nerve cells communicated. Because the staining techniques at that point were not well advanced, after looking at his slides Deiters, around 1865, hypothesized that perhaps nerve cells were fused together, as this is what it looked like on his slides (the synapse was not visible to scientists until the 1950s). Joseph von Gerlach took the fusing idea a step further and suggested that all nerve cells are interconnected, which creates a giant web; thus the nerve network was established (Finger, 2000).

Not long afterward, in 1873, Golgi invented the silver stain, which gave scientists a much clearer view of nerve cells (Finger, 2000).

In 1886 Wilhelm His suggested that perhaps nerve cells do not fuse together. He backed up his claim by pointing out that motor neurons are not connected to muscle fibers; therefore, he speculated, if nerve cells in the peripheral nervous system are not connected, nerve cells in the central nervous system may not be connected either. Around that same time, August Forel suggested that nerve cells may simply touch to send the nerve impulse, but are not technically fused (Finger, 2000).

This was when Santiago Ramon y Cajal came onto the scene. Cajal improved Golgi’s stain and created better slides. At the 1889 meeting of the German Anatomical Society, Cajal presented his slides and expressed his view that he had found no evidence of nerve cells fusing together (Finger, 2000).

Thus, in 1891, Wilhelm von Waldeyer, with his prestige, officially established that nerve cells are separate entities called neurons. This theory became known as the neuron doctrine (Finger, 2000).

Though it was established that neurons were most likely separate units, it was still not determined how they communicated or in what direction. Golgi believed that the dendrites’ only job was to provide nutrition, so that only the axons communicated information. Cajal, however, believed that communication flowed in one direction, with dendrites receiving information while axons sent out information. He came up with this hypothesis by examining the sense organs. In the eyes, for example, neurons are all positioned with their dendrites facing outward, to receive information from the outside world, while the axons point in toward the brain to deliver that information (Finger, 2000).

However, Cajal still did not answer the question of how information passed between neurons. He hesitantly agreed with other scientists that perhaps dendrites communicated with axons by touching, but he was not entirely convinced (Finger, 2000).

*Interesting Fact: Way back in 1872, before Golgi even invented the silver stain, Alexander Bain suggested that when learning takes place, nerve cells grow closer together, and when memory loss occurs, nerve cells grow further apart (Bain, 1873):

“For every act of memory, every exercise of bodily aptitude, every habit, recollection, train of ideas, there is a specific grouping, or co-ordination, of sensations and movements, by virtue of specific growths in the cell junctions.” (pp. 91).

“If the brain is a vast network of communication between sense and movement – actual and ideal – between sense and sense, movement and movement, by innumerable conducting fibres, crossing at innumerable points, — the way to make one definite set of currents induce a second definite set is in some way or other to strengthen the special points of junction where the two sets are most readily connected […]” (pp. 92)

*Another interesting fact: The idea that nerve cells might communicate either chemically or electrically can be traced back to 1877 and Emil du Bois-Reymond:

"Of known natural processes that might pass on excitation, only two are, in my opinion, worth talking about. Either there exists at the boundary of the contractile substance a stimulative secretion in the form of a thin layer of ammonia, lactic acid, or some other powerful stimulatory substance, or the phenomenon is electrical in nature." (qtd. in Finger, 2000, p. 260)

The question of exactly how nerve cells communicate was answered by Otto Loewi, whose experiment showed that neurons can communicate via chemicals. Though the story varies in its minute details, Loewi was said to have insomnia and to wake often during the night. One night he woke up with the idea for an experiment to test whether neurons communicate via chemicals. He wrote it down, and when he later performed the experiment, it was successful (Finger, 2000).

He took two frog hearts. One heart had its vagus nerve still attached, while the other had its vagus nerve removed. He bathed the heart with the vagus nerve in neutral Ringer's solution and stimulated the vagus nerve, which made the heart slow down. He then took some of that Ringer's solution and applied it to the second heart, which immediately slowed its beating as well, just as if he had stimulated it directly. This showed that the vagus nerve released chemicals that told the heart to slow its beating, and that those chemicals could make the other heart do the same thing (Sabbatini, 2003).

Bain, A. (1873). Mind and Body: The Theories of Their Relation. London: Henry S. King.

Finger, S. (2000). Minds Behind the Brain: A History of the Pioneers and Their Discoveries. New York, NY: Oxford University Press.


Bugs, mice, and people may share one ‘brain ancestor’


Humans, mice, and flies share the same genetic mechanisms that regulate the formation and function of brain areas involved in attention and movement control, according to a new study.

The findings shed light on the deep evolutionary past connecting organisms with seemingly unrelated body plans.

They also may help scientists understand the subtle changes that can occur in genes and brain circuits that can lead to mental health disorders such as anxiety and autism spectrum disorders.


Resemblances between the nervous systems of vertebrates and invertebrates have been known since the early 18th century, but only recently have scientists asked whether such similarities are due to corresponding genetic programs that already existed in a common ancestor of vertebrates and invertebrates that lived more than half a billion years ago.

“The crucial question scientists are trying to answer is: Did the brains in the animal kingdom evolve from a common ancestor?” says coauthor Nicholas Strausfeld, professor of neuroscience at the University of Arizona. “Or, did brains evolve independently in different lineages of animals?”

Microscopic image of a fruit fly brain showing several neurons specified during development of the deutocerebral-tritocerebral boundary, or DTB, revealed by Green Fluorescent Protein. The circuits arising from the DTB play crucial roles in the regulation of behavior. (Credit: Jessika Bridi/Hirth Lab/King’s College London)

The study provides evidence of underlying gene regulatory networks governing the formation of two corresponding structures in the developing brains of fruit flies and vertebrates including mice and humans.

Uncovering previously unknown similarities in how their brains develop during embryogenesis, the study further supports the hypothesis of a basic brain architecture shared across the animal kingdom.

The evolution of the brain

The study in the Proceedings of the National Academy of Sciences provides strong evidence that the mechanisms that regulate genetic activity required for the formation of important behavior-related brain areas are the same for insects and mammals.


Most strikingly, the authors demonstrate that when these regulatory mechanisms are inhibited or impaired in insects and mammals, they experience very similar behavioral problems. This indicates that the same building blocks that control the activity of genes are essential to both the formation of brain circuits but also the behavior-related functions they perform. According to the researchers, this provides evidence that these mechanisms likely arose in a common ancestor.

“Our research indicates that the way the brain’s circuits are put in place is the same in humans, flies, and mice,” says senior study author Frank Hirth from the Institute of Psychiatry, Psychology, and Neuroscience at King’s College London. “The findings indicate that the evolution of their very different brains can be traced back to a single ancestral brain more than a half billion years ago.”

Using neuroanatomical observations and developmental genetic experiments, the researchers traced nerve cell lineages in the developing embryos of fruit flies and mice to identify how adult brain structures, along with their functionalities, unfold.

The team focused on those areas of the brain known as the deutocerebral-tritocerebral boundary, or DTB, in flies and the midbrain-hindbrain boundary, or MHB, in vertebrates including humans.

“In both vertebrates and arthropods, this boundary belongs to the anterior part of the brain and separates it from the rest,” Strausfeld says. “The anterior part integrates sensory inputs, forms memories, and plans and controls complex actions. The part behind it is essential for controlling balance and autonomic functions like breathing.”

Ancient but stable over time

Using genomic data, the researchers identified the genes that play a key role in the formation of brain circuits of the DTB in flies and the MHB in mice and humans, and ascertained that these circuits play crucial roles in the regulation of behavior. They then determined which regions of the genome control when and where these genes are expressed.

They found that those genomic regions are very similar in flies, mice, and humans, indicating that they share the same fundamental genetic mechanism by which these brain areas develop.


Manipulating the relevant genomic regions in flies resulted in impaired behavior. This corresponds to findings from research on people where mutations in these gene regulatory sequences or the regulated genes themselves have been associated with behavioral disorders, including anxiety and autism spectrum disorders.

The research builds on previous work led by Hirth showing that the early division of the fly’s brain into distinct parts, followed by an extended nerve cord, corresponds to the three front-to-back divisions of the developing mouse brain and its spinal cord. Both in flies and in mice, the development of each morphologically corresponding part requires the same set of genes, called homeobox genes, suggesting homologous genetic programs for brain development in invertebrates and vertebrates.

Evidence from soft tissue preservation in fossils of ancient arthropods studied by Strausfeld suggests that overall brain morphologies present in arthropod lineages living today must indeed have originated before the early Cambrian era, more than 520 million years ago.

“This implies that basic neural arrangements can be ancient and yet highly stable over geological time,” he says. “You could say the jigsaw puzzle of how the brain evolved still lacks an image on the box, but the pieces currently being added suggest a very early origin of essential circuits that, over an immense span of time, have been maintained, albeit with modification, across the great diversity of brains we see today.”

“For many years researchers have been trying to find the mechanistic basis underlying behavior,” Hirth says. “We have discovered a crucial part of the jigsaw puzzle by identifying these basic gene regulatory mechanisms required for midbrain circuit formation and function. If we can understand these very small, very basic building blocks, how they form and function, this will help find answers to what happens when things go wrong at a genetic level to cause these disorders.”

Additional researchers from King’s College London, the University of Arizona, the University of Leuven, and Leibniz Institute DSMZ contributed to the research.

Funding for this study came from the Ministry of Education of Brazil, King’s College London, the Research Foundation Flanders, the US National Science Foundation, the UK Medical Research Council, the UK Biotechnology and Biological Sciences Research Council, and the UK Motor Neuron Disease Association.


In my blog, I intend to explore the neurological nature of theatrical performance. In the domains of affective neuroscience and cognitive psychology, the focus of most researchers seems to stay on the processes involved in genuine human emotional activity. An activity such as acting, by contrast, involves mostly artificially stimulated emotions; yet it requires full commitment, and its emotional processes are therefore often labeled genuine, even though they are motivated by the need to perform. In real life, emotions are often uncontrolled. They are labeled as spontaneous; they don’t call before they show up at someone’s door, they just happen. In the performing arts, it is quite the opposite: emotions are provoked in performers, who then (ideally) receive reactions and empathy from the audience. I am interested in the physical and psychological manifestations of artificially and purposefully stimulated emotive processing. I am hoping to demystify emotions by taking a scientific approach to the most ephemeral aspect of human behavior and nature. I am also exploring the concept of emotional prosody and the aural/musical/vocal basis of human emotions.

Human beings embody, express, process, inhibit, function, act, feel. All the verbs I just listed, along with many more, have as their sources the essential parts of what constitutes a human: body, mind, emotion, and behavior. In his dissertation, Kemp (2008) states that cognitive science acknowledges the central role of the body and enables a better understanding of the relationship between thought and expression (p. 20). Acting, on the other hand, does not explain the body-mind-soul relationship, but rather provides the richest material for exploring and experimenting with human emotions. How does theatrical performance relate conceptually to cognitive science and affective neuroscience? The main thing the two disciplines share is the idea of the duality of human nature. Are emotions manifested through the body, or does the body produce emotions as an integral part of its purpose? Following the same logic, the acting traditions argue: can physical work stimulate the imagination to the point that the actor lives through the emotions of the character, or does the psychological approach to acting guarantee deep understanding and therefore meaningful expression? Kemp (2008) proposes that “the two approaches are in fact representative of positions on a continuum, rather than being mutually exclusive or necessarily oppositional. The empirically based concept of the embodied mind provides a foundation that explains the effectiveness of approaches to training and rehearsal that consciously link physicality and environment in the expression of meaning” (p. 24).

Unfortunately, until recently researchers and thinkers did not have the luxury of being informed by scientific evidence about neurological activity and embodied cognition. And yet the juxtaposition of the emotional and the mental has always been present in both science and the arts. Historically, acting has always reflected the latest trends in philosophical and cultural thought. For generations and even centuries, acting style maintained a very high level of artificiality, and what we now know as “believable acting” was simply nonsense. In the early 19th century, just several decades before psychology emerged, Henry Siddons and Johann Jacob Engel summarized the European pre-realistic acting style in their book “Practical Illustrations of Rhetorical Gesture and Action.” The book describes and illustrates several emotions and their physical expressions, in a way that is very similar to the system of discrete emotions used in neuroscience. The pre-realistic school of acting assumed that “habit becomes a kind of nature” (p. 3). By providing illustrations of various gestures and poses, each connected to a specific emotion, the authors ensured that these conventional emotional expressions were institutionalized via the theatre and therefore internalized by many generations of theatre practitioners. Before there was a psychology, acting relied on captured, generalized emotional stereotypes.

One thing they were missing, though: fake and artificial or not, emotions kept engaging the audiences by making them feel and empathize. As Lewis, Gibson, and Lannon simply put it, “Detecting an emotion changes the observer’s own emotional tone in the direction of the emotion he’s observing” (p. 4). Some researchers of the 20th century would argue that theatre owed its glory to mirror neurons.

Nearly 20 years ago, mirror neurons were discovered in macaque monkeys by Rizzolatti and colleagues (Drinko, 2013, p. 26). After a series of tests, it was concluded that the mammalian brain is capable of engaging in what Lewis (2000) calls “the internal neural simulation of behavior it observes in others” (p. 5). This theory clearly has great potential to explain the functionality of performance in general and of theatre in particular. At the very beginning of the Western theatre tradition as we know it, Aristotle, the famous author of the Poetics, described the main functions of tragedy as fear, pity, and catharsis. While many historians argue about whether those translations from Ancient Greek are accurate, or even whether Aristotle existed to begin with, there is not really much authentic evidence to work with, since the Greeks didn’t leave us their secrets on a flash drive. Assuming that this interpretation of Aristotle’s suggested dramatic functions is roughly compatible with the actual truth, we see how the list includes an affective state (fear/terror), empathy (pity), and purgation (catharsis).

Below is the full Aristotelian definition of tragedy: “Tragedy, then, is an imitation of an action that is serious, complete, and of a certain magnitude; in language embellished with each kind of artistic ornament, the several kinds being found in separate parts of the play; in the form of action, not of narrative; with incidents arousing pity and fear, wherewith to accomplish its katharsis of such emotions” (Butcher, 1917).

Aristotle has been immortalized as the “Father” of Western philosophy, of drama, and even of neuroscience. While his relation to neuroscience may seem like a stretch, it is worth mentioning that Aristotle was a trained doctor and researcher himself. While he acknowledged the duality of human nature, manifested in the tension between the mind and the heart, he did not believe in the brain’s involvement in emotions (Gross, 1995, p. 247). If only he had lived to see mirror neurons, he would have known that empathy, an essential component of theatre, calls the brain its home. Empathy has been inscribed in the history of drama since its known beginnings, as well as in the history of humankind. In their review article, Bernhardt and Singer (2012) conclude that multiple studies, mostly based on empathy for pain, showed that “empathic responses recruit, to some extent, brain areas similar to those engaged during the corresponding first-person state.” Lindenberger (2010) describes the mirror-neuron process as two consecutive phases: stage one, imitation of the observed actions; stage two, internalization of the information and, as a result, understanding of it (p. 4). Those two stages may indeed constitute true empathy, and yet they seem to be manifested only in someone who is experiencing the event, emotion, or story vicariously. When applied to people impersonating and embodying characters in a story, empathy alone cannot be enough.

Obviously, there is an endless number of acting techniques. The ones that prevail today, in the era of modern neuroscience, tend to be based on the psychological approach. Realistic acting is assumed to be the most common acting style people are exposed to, whether via television, cinema, or live performance. We are going to set aside improvisational methods and other non-traditional, experimental approaches: in order to stay focused, let’s assume that realistic actors generally approach a character in a broadly similar way. And this way involves two stages of processing. First, the actor gets acquainted with the character by reading his or her story. During this stage of the process, the actor is in the audience’s shoes: the incoming information resonates with his or her mind and generates empathy. The actor’s goal, however, is not only to comprehend the story and the character affectively, but to undergo a process of transformation in order to portray and embody the given material. The actor must exist in the imaginary, or given, circumstances: therefore, logically, his or her body needs to adjust and to start functioning as that of the character. Since the body clearly includes the brain, can it be assumed that the actor rewires his or her brain to function as that of the non-existent character, too? Kemp (2008) suggests that “the experience of emotion is something that is part of a disembodied consciousness rather than the processes of the body” (p. 21). In this case, the emotions and the mind seem to be merged together, which contradicts the very traditional heart/mind dichotomy. But if we take a generalized realistic acting technique and trace every step of a character’s coming to life, it appears that consciousness and emotion walk hand in hand.

Once the actor internalizes information about the character, such as background, demographics, looks, relationship history, beliefs, and lifestyle (pretty much the equivalent of anyone’s first meeting with a psychologist), he or she connects that personal history with the given circumstances of the material being performed. Where is the line between the actor and the character? Where does the actor stop making decisions and begin choosing guided by the emotions of the character? Creating a character is essentially reconstructing a human being from scratch, attributing all human aspects to his or her being and existence. Emotions then become the driving force of this process of creation. On stage or on screen, the actor creates a life, re-creates and re-tells a story. Without living emotions, the audience wouldn’t buy it (literally and figuratively).

More on the current struggle to create an interdisciplinary bond between cognitive psychology and acting:

Bernhardt, B. C., & Singer, T. (2012). The neural basis of empathy. Annual review of neuroscience, 35, 1-23.

Butcher, S. H. (Ed.). (1917). The poetics of Aristotle. Macmillan.

Drinko, C. (2013). Theatrical improvisation, consciousness, and cognition. Palgrave Macmillan.

Engel, J. J., Siddons, H., & Engel, M. (1822). Practical Illustrations of Rhetorical Gesture and Action: Adapted from the English Drama: from a Work on the Subject by M. Engel. Sherwood, Neely and Jones.

Gross, C. (1995). Aristotle on the Brain. The Neuroscientist, 1, 245-250.

Kemp, R. J. (2010). Embodied acting: cognitive foundations of performance (Doctoral dissertation, University of Pittsburgh).

Lewis, Gibson and Lannon. A primer on the neurobiology of inspiration. Published at http://www.terrypearce.com/pdf/PREREAD_gibson_et_al_061024.pdf

Lindenberger, H. (2010). Arts in the Brain or, What Might Neuroscience Tell Us? Toward a Cognitive Theory of Narrative Acts, ed. Frederick Luis Aldama, Austin: University of Texas Press, pp. 13-35


Where does conscience come from? The neurology of conscience

According to the encyclopedia, conscience is a personal sense of the moral content of one’s own conduct, intentions, or character with regard to a feeling of obligation to do right or be good. Conscience, usually informed by acculturation and instruction, is thus generally understood to give intuitively authoritative judgments regarding the moral quality of single actions.

According to Wikipedia, conscience is a cognitive process that elicits emotion and rational associations based on an individual's moral philosophy or value system.

Conscience is the ability by which we make moral judgments about our own actions and our whole moral being. It is that part of the human psyche which, when we violate our value system by our actions, thoughts, or words, causes mental anguish (guilt, remorse), and which, when we act in accordance with our value system, gives us a sense of well-being and satisfaction.

Conscience depends on our current moral judgment: if we do not consider something immoral, our conscience cannot trouble us about it. Conscience is a faculty, an innate, genetic, evolutionary property of a person. At the same time, conscience depends on historical development, individual upbringing, living conditions, environment, and the spirit of the times, both in its content and in its strength.

According to the philosophical approach, conscience is a property related to morality. According to the religious approach, conscience is the voice of God within us, which speaks out, answering yes or no, whenever we are faced with a moral dilemma; on this view, conscience is a person's God-given capacity for self-examination. According to the evolutionary approach, conscience is a property of our nature, created by selection. Its presence confers a selective advantage and has adaptive significance: those who have a conscience feel an inner compulsion to follow the rules of the group and do not need to be forced to follow them by external pressure.

Conscience is therefore the judgment and qualification of our actions, thoughts and intentions within ourselves. It is an innate capacity, shaped and modified by experience.

Conscience is obviously a psychological phenomenon; its function is crucially linked to the brain and to neurological processes. This connection is clearly visible in the effects of certain brain injuries on conscience as a function influencing behavior. Numerous case studies have shown that damage to particular areas of the brain results in the reduction or elimination of inhibitions, with a corresponding radical change in behavior. A classic example of the behavior-conscience-brain link is the case of Phineas Gage: Gage's extensive brain injury induced behavioral changes, which can be explained by the behavioral effects of the change in conscience caused by the injury.

What is the physical origin of conscience? What is the neural process in the brain that results in conscience?

A step toward a neurological understanding of conscience is to consider that the emotion- and behavior-regulating function of conscience can be triggered not only by our own actions but also by the behavior of others. For example, the emotional background of giving help to a person in distress or need is similar to the function of conscience: if we see someone being robbed on the street, we rush to their aid guided by our conscience.

We do know something about the neurological background of this kind of behavior. Scientific experiments suggest that it is underpinned by the presence and function of mirror neurons in the brain. Perceiving an event happening to someone else triggers the activity of mirror neurons, which generates an effect, a sensation, an emotional state, as if the event had happened to us. As a result of experiencing the other person's situation, behavioral mechanisms are induced similar to those that would operate if the event had happened to us. The consequence of this state of mind is the helpful, supportive behavior the situation requires. This type of behavior can also be described as our conscience guiding us to help the other person.

In this way, by analogy, conscience, as a philosophical concept and an emotional-psychological process, can be traced back to a specific neural process: the activity of mirror neurons. On this basis, conscience as a phenomenon can be defined as a concrete neurological activity.

On this hypothesis, the phenomenon of conscience is the activity of mirror neurons triggered by our own actions, or even intentions, which induces the brain (emotional) state that would arise if we were observing that behavior in someone else. Conscience is therefore a neurological process: a feedback of our own behavior, actual or merely intended, onto ourselves through the activation of mirror neurons.

This explains why conscience is an innate property: it is the activity of mirror neurons, whose presence is created by genetics and shaped by evolution. If this hypothesis can be experimentally proven, then the abnormal functioning of conscience in some individuals could be due to the absence or malfunctioning of mirror neurons, and could therefore be explained by neurological, physiological, and genetic causes. This would also explain the subjective nature of conscience: only emotional states that have been formed through a person's historical development, individual upbringing, living conditions, environment, and social influences can be activated by the feedback function of the mirror neurons, and these states are tied to the subjective history of the individual.

Thus, on the basis of the mechanism outlined, conscience is neither a purely philosophical concept nor a supernatural phenomenon, but a concrete neurological process that can be adequately explained.