Miguel Nicolelis

Professor and Co-Director of the Center for Neuroengineering, Department of Neurobiology, Duke University Medical Center, USA

What can a monkey’s thoughts tell us about Parkinson’s disease? A lot, actually. The neurobiologist Miguel Nicolelis (born 1961) has found a way to implant electrode arrays into a monkey’s brain to detect its motor intent – and thus to control reaching and grasping movements performed by a robotic arm. Nicolelis’s multi-electrode recordings in behaving animals have already revolutionized neuroscience and broken ground for a new generation of neuroprosthetic devices. His current experiments with brain-machine interfaces suggest that we will soon be able to mitigate the clinical effects of neurological dysfunctions caused by Parkinson’s disease and spinal-cord injuries. Nicolelis, who hails from Brazil and currently works as Professor and Co-Director of the Center for Neuroengineering at Duke University’s Medical Center, has been chosen by the journal “Science” as one of the 100 most influential scientists, for thoughts that, literally, provoke actions.

Breaking the Wall of Neurological Disorder: How Brain-Waves Can Steer Prosthetics.


20 years ago I was in New Jersey; I remember being deeply moved.

Ladies and gentlemen, it is a great honour to be here. I would like to, first of all, thank the Einstein Foundation for this opportunity to speak to you at such a wonderful celebration of a date that holds a lot of significance for me too, even though I was born in Brazil and, at that time 20 years ago, was studying in the United States. I remember this day very vividly because, for me and for my colleagues, around this time 20 years ago, we were learning to do something that to us was the breaking of a small but important wall. At that time, we were learning for the first time to read signals like these, to read a true brainstorm, a brain symphony: ten seconds of a hundred cells of a brain that was thinking, that was producing motion out of vision, out of abstract information that came from the world, trying to reach for it.

At that time, when we first reached this milestone of reading the electrical activity produced by 100 brain cells, we had no idea what would come in the next two decades. Within just ten days we realized that, by being able to read these thoughts, these very small snippets of 100, 200, and now almost 1,000 of the millions of neurons, the brain cells that produce everything that we are, everything that we actually do, every hint of the history of mankind from beginning to end, we could break another wall. We could actually relieve the brain from the constraints of the body and liberate these symphonies, this brain activity, these waves of electricity, from the constraints of our biological being, so that we could allow brains to enact their voluntary will directly into machines, without the interference of our flesh.

This is what we built ten years after we started listening to these symphonies. We built a brain-machine interface, combining ways to listen to the electrical storms that brains produce, carrying information about motion and about voluntary will. By using a series of mathematical and computational engineering tools, we are able to extract from these electrical signals the type of information that a brain could use to directly control an artificial device, and to use that device to enact the brain’s will on the world, whether next to it or very far from it.
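[Editor’s note: the extraction step described above can be sketched as a simple linear decoder, a least-squares mapping from the firing rates of the recorded cells to a kinematic variable such as hand position. Everything below is illustrative: the data are synthetic stand-ins for real recordings, and this is a minimal sketch of the kind of model involved, not the laboratory’s actual algorithm.]

```python
import numpy as np

# Minimal sketch of linear neural decoding (synthetic data; hypothetical
# parameters). Real brain-machine interfaces use related linear models
# mapping firing rates of many neurons to movement variables.
rng = np.random.default_rng(0)

n_samples, n_neurons = 500, 100
true_weights = rng.normal(size=(n_neurons, 2))   # unknown mapping to (x, y)

# Simulated firing-rate matrix: one row per time bin, one column per neuron
rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
# Simulated hand position generated from the rates, plus measurement noise
position = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder by least squares: position ≈ rates @ W
W, *_ = np.linalg.lstsq(rates, position, rcond=None)

# Decode a fresh "thought": predict an arm command from new firing rates
new_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
predicted_xy = new_rates @ W
print(predicted_xy.shape)  # (1, 2): a decoded x/y command for the device
```

With enough training samples, the fitted weights recover the underlying mapping closely, which is what lets the decoded output drive a robotic limb in real time.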

By instrumenting this new artefact, this artificial tool that became part of the brain’s voluntary will, we could send messages back from the brain, wherever this tool was located. And then, not only could we use our usual senses, like vision and touch, to decode this information; we could actually send information directly back to the brain in the language a brain can understand: electrical waves.

This was very important for us neurobiologists, because we could use this information to test a series of hypotheses, a series of theories, on how large populations of neurons in the brain code information. But our dream, the true wall that we want to break in the future, is represented here. We want to use this knowledge and these brainstorms to directly restore abilities that some of us, millions of us, lose, without the knowledge of those of us who continue to live our lives normally.

We all know that when a severe lesion affects part of the nervous system, like the spinal cord, the messages that we produce in our brains about motion, about moving, about exploring the world, cannot reach our muscles; our brains can no longer command our bodies to move at will. Yet even after these lesions occur, our brains, for the rest of our lives, continue to produce these messages, continue to dream about motion, about this task that most of us perform without even thinking, but that some of us can no longer do.

Our idea is to use brain-machine interfaces, the linkage of living brain tissue to devices that we build, to restore motion to these patients: to decode the electrical activity with computational models and then use it to command a new body, a robotic device, an exoskeleton that these patients would wear as if it were their own body, and then move through the world.

I would like to show you a simple example to illustrate how this concept works, because we have already made something similar work in animals, although we haven’t reached the final goal. This is a monkey that learned to walk as we do, bipedally, on a treadmill. While it was doing that, we recorded and decoded the activity of hundreds of brain cells that produce the commands for the animal’s body to move on the treadmill. This information was transformed into digital commands that were then used to reproduce the entire step-cycle that the animal produced as it walked. These are some of the patterns of just a few of the hundreds of cells that we can now record in brains like this. When we combine this electrical activity using very simple computational models, we can reproduce this step-cycle very well, and the whole locomotion pattern of these animals in a variety of conditions. At that point we realized that we could try to break a really major wall and liberate the electrical storms, the thinking, the brain activity of this primate, to see if we could enact its will very far from its body.

It was at that time, having a monkey walking at Duke University on the East Coast of the United States, that together with our good friend Gordon Cheng at ATR Robotics in Kyoto, we designed a protocol that allowed us to send signals from this brain all the way to Japan, so that we could fulfil the dream of a humanoid robot, CB1. The dream of a humanoid robot, as you probably all know, is to behave like a human, like a primate. That is what was done: the signals were sent to Kyoto, and then video images of CB1 walking in Kyoto were sent back to Duke and projected in front of the monkey. This round-the-world trip took 20 milliseconds less than it takes for brain activity to be produced in the brain and reach the body.

That brain-machine interface on the other side of the world resulted in this more recent experiment. You should be seeing a robot walking. (Technical difficulties with PowerPoint presentation) What you should be seeing, I am sorry, was this robot, CB1, walking autonomously under the command of the brain activity broadcast from a monkey that was walking at Duke University. The images of the robot’s legs were projected back in front of the animal. What we learned at that point was that when we stopped the treadmill and Idoya, our monkey, stopped moving, the robot continued to walk. Idoya realized that she just needed to think about moving, because the object of her desire was moving. She didn’t need to send the signals back to her legs; she just needed to imagine walking, and we could send those signals to command CB1 to walk.

This is what we want to do. This is the future for us. This is our future wall to break. This is a whole-body exoskeleton that my good friend Gordon Cheng is developing right here in Munich, to be commanded in the future by the brain signals of a body that can no longer move, the brain signals of a person who is paralyzed. We hope that we will be able to take advantage of brain-machine interfaces to help those who are immobile regain mobility and, once again, roam freely around the world. This is our approach for treating body paralysis using brain-machine interfaces.

But there is something more: this is going to be a project distributed around the world. We are going to achieve this task through a non-profit consortium of laboratories all over the world that will collaborate, each providing its expertise, to build this brain-machine interface for millions of people to benefit from: the laboratory of my good friend Klaus Müller here at the Technical University Berlin, the Swiss Federal Institute of Technology in Lausanne, Gordon Cheng at the Technical University of Munich, the International Institute of Neuroscience of Natal, Brazil, the Sírio-Libanês Hospital in São Paulo, and the Center for Neuroengineering at Duke University.

But there is more, because we can now not only read and listen to signals, to symphonies, from the brain; we have also recently learned to send messages back directly to the brain. Using electricity and light, we can deliver small messages back to the brain, hoping to recover, restore, and correct the defects that we find in a variety of neurological disorders.

One of these disorders is Parkinson’s disease. As you know, millions of people are afflicted by this disorder. It produces clinical symptoms in which patients lose their ability to move. They shake; they cannot move; they cannot initiate movements; they lose the ability to coordinate movements correctly.

In our study of Parkinson’s, we used transgenic mice, mice that are genetically altered so that they express the symptoms and signs of Parkinson’s disease. One possible reason for the symptoms we see in these animals is that their neural activity is all synchronized: brain cells fire all together at the same time in the motor cortex and in other structures of the brain. So what we did was to send a simple message to the brains of these animals, an electrical message. Upon detecting this synchrony of the storms, we delivered an electrical signal to the surface of the spinal cord, on the back of these animals, very superficially and non-invasively.
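[Editor’s note: the detection step can be illustrated with a toy synchrony index: compare the variance of the population-averaged activity with the average single-cell variance, and trigger stimulation when the ratio crosses a threshold. The index, the threshold, and the synthetic spike trains below are assumptions made for illustration, not the published method.]

```python
import numpy as np

# Toy synchrony-triggered stimulator (illustrative only; hypothetical
# synchrony index and threshold, synthetic spike trains).
rng = np.random.default_rng(1)

def synchrony_index(spikes):
    """Ratio of population-average variance to mean single-cell variance:
    near 1 when all cells fire in lockstep, near 0 when they fire
    independently (averaging cancels out independent fluctuations)."""
    pop = spikes.mean(axis=0)                 # population-average trace
    return pop.var() / spikes.var(axis=1).mean()

# Desynchronized activity: 50 cells firing independently over 1000 time bins
async_spikes = rng.binomial(1, 0.2, size=(50, 1000)).astype(float)

# Parkinsonian-like activity: the same cells locked to one common rhythm
common = (np.sin(np.linspace(0, 40 * np.pi, 1000)) > 0.5).astype(float)
sync_spikes = np.clip(common + rng.binomial(1, 0.05, size=(50, 1000)), 0, 1)

THRESHOLD = 0.5  # assumed trigger level

def stimulate(spikes):
    # Fire the spinal-cord stimulator only when pathological synchrony appears
    return synchrony_index(spikes) > THRESHOLD

print(stimulate(async_spikes), stimulate(sync_spikes))  # False True
```

In this sketch the stimulator stays silent for healthy, desynchronized firing and triggers only on the lockstep pattern, mirroring the closed-loop logic described above: detect the synchrony, then inject a disrupting signal.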

What we saw was this mouse with Parkinsonian symptoms at the beginning of this movie; it is just staying in that corner, unable to move, because it is shaking. It is severely Parkinsonian. Upon delivery of these electrical signals to the spinal cord, we simply disrupt the synchrony that the brain is producing. These waves of electricity are now produced randomly out of phase, and at that moment, instantaneously, this mouse can walk again, can move again. As long as we produce this little noise in the brain, the brain behaves like the normal brain of a normal mouse. This device is rapidly moving to the clinical stage. If it works, it is going to be a very simple procedure, one that every patient who suffers from Parkinson’s disease can use. We will be starting a new dialogue. With these simple messages, we will be able to start talking to brains directly, trying to establish the conversation that, we hope, may in a few decades bring down the terrible walls of neurological disorders. Thank you very much.