Klaus-Robert Müller
Professor at the Department of Machine Learning, Technische Universität Berlin; Director of the Bernstein Focus Neurotechnology, Berlin, Germany
Breaking the Wall between Mind and Machine. How Neurotechnology Can Expand Human Capacity for Action.
Twenty years ago, breathless and deeply moved, my wife and I followed everything on a tiny black-and-white TV screen.
Thank you very much for the opportunity, the honour, and the pleasure of presenting the Berlin Brain-Computer Interface at this very special conference. This is joint work with my colleagues Benjamin Blankertz, Michael Tangermann, and Gabriel Curio. We all come from different institutions: Gabriel is a neurologist at the Charité; Benjamin, Michael, and I share an institution, and Benjamin is also with Fraunhofer. This is joint work of the Bernstein Center of Neurotechnology.
Let us start with an example of what a brain-computer interface does. You see the subject here wearing an EEG cap: 64 electrodes at a 1000 Hz sampling rate. The subject is asked to imagine left-hand and right-hand movements. By his thoughts alone, he is controlling the cursor that you can see, and he tries to hit the blue ball as often as possible. If you watch closely, the subject is not moving; he is only thinking "left" and "right". Since this is not a monkey, you can ask the subject what this feels like, and I happen to be the subject here. I was actually pulling a virtual string in my imagination, to the right and to the left, in a very focused, concentrated, and relaxed manner.
I did this for maybe half an hour, and then, all of a sudden, I didn't have to do this string pulling anymore; rather, it seemed that the cursor had become part of my body, so that I could put it in any position. This is often called "skill acquisition": it is as if I had learned to play tennis and the racket had become an extension of my arm.
Let me just briefly discuss what kind of walls we had to break to achieve this. First of all, we had to break the wall between physiology and data analysis. My own field is data analysis, and we have to decode this messy EEG data in real time. When we started working on BCI about ten years ago, we stood on the shoulders of the pioneers in the field: Birbaumer, Pfurtscheller, Wolpaw. Patients had to train for about two to three hundred hours to change their brain signals such that you could decode them.
Now we address this from the other side: in our case, the machines learn, not the subjects. We now need about ten minutes of learning on the subject's side, a gain of three orders of magnitude. With this innovation, you can come to the lab, do the experiment, and communicate with the machine with your brain within a single morning.
Let me briefly discuss the physiology, because this is one part of the story. Since it is late afternoon, you may close your eyes; then an idle rhythm kicks in on the occipital side, somewhere here at the back of the brain. If you open your eyes again, this rhythm is suppressed. If I keep my arm at rest, there is an idle rhythm over the motor cortex, contralaterally. If I start moving it, this idle rhythm is suppressed. The interesting thing is: if I only imagine a movement, then the idle rhythm is also suppressed contralaterally: left imagination, right hemisphere; right imagination, left hemisphere. This gives us enough physiology to get one bit out of the brain, so to speak.
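The contralateral suppression of the idle (mu) rhythm can be sketched in a few lines: compare mu-band power over the left and right motor cortex to recover the one bit. This is a toy illustration on synthetic signals, not the BBCI pipeline; the channel names C3/C4 and the 8-12 Hz band are standard EEG conventions, everything else is made up for the example.

```python
import numpy as np

FS = 1000          # sampling rate in Hz, as in the talk
MU_BAND = (8, 12)  # the sensorimotor "idle" (mu) rhythm

def mu_band_power(x, fs=FS, band=MU_BAND):
    """Power in the mu band of a single-channel epoch, via the FFT."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].sum()

def decode_one_bit(c3_epoch, c4_epoch):
    """Left imagery suppresses mu over the RIGHT motor cortex (C4);
    right imagery suppresses it over the LEFT motor cortex (C3)."""
    return "left" if mu_band_power(c4_epoch) < mu_band_power(c3_epoch) else "right"

# Synthetic one-second epochs: a 10 Hz idle rhythm that is attenuated
# contralaterally to the imagined hand, plus a little noise.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
idle = np.sin(2 * np.pi * 10 * t)
noise = lambda: 0.1 * rng.standard_normal(FS)
# Left-hand imagery: mu suppressed over C4, intact over C3.
c3, c4 = idle + noise(), 0.2 * idle + noise()
print(decode_one_bit(c3, c4))  # → left
```

In practice the band power is of course estimated continuously on sliding windows, and the left/right contrast is learned per subject rather than hard-coded.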
What are the challenges? As you follow my talk, you listen, you watch me, you maybe play with something, you think about dinner, and all sorts of things at the same time. We are actually facing a cerebral cocktail-party problem, because our EEG sensors pick up superpositions of all the brain signals that are active. For the purpose of decoding, only the motor cortex is interesting; all the rest, however necessary for living, should be suppressed. That is the challenge: how to extract the part of the data that we are really after.
If you decided to come to our lab, we would take about half an hour for the montage: putting the EEG cap on your head, putting gel into the electrodes, sampling the electromagnetic field emitted by your brain at 64 positions. The first thing is a ten-minute training session in which the subject is asked to assume certain brain states. We can do this together. Put yourself now into a very special state: relaxed and focused after this long day. You shouldn't move, and you should imagine a left-hand movement or a right-hand movement, say squeezing a ball, pulling a string, playing the piano, or something like that. So, just try it. Relax. It starts immediately: when you see the letter, you have to imagine the corresponding movement. Now left imagination... not anymore. Left imagination again... not anymore. You have to keep your eyes open, of course, in order to see the screen. Now right imagination...
So, this gives you a good impression of how the training works. After ten minutes we have gathered about 100 data points of "leftness" and "rightness". With this nice source of data, we would like to infer the spatial and temporal patterns of left and right activity. This is where machine learning comes into play. You see for this one subject that left imagination has a very nice map: this is a view of the brain from above, this is the nose, and this is a sampling of the field over the scalp. The activity is right over the motor cortex, and that is a very good starting point. With this, we can try to infer and distinguish between the two brain states.
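A standard way to learn such spatial filters from labelled left/right epochs is Common Spatial Patterns (CSP), widely used in this literature. The sketch below is a minimal numpy/scipy version on synthetic data, not the actual BBCI code: it learns filters that maximise the variance ratio between the two classes, extracts log-variance features, and classifies with a simple nearest-mean rule (standing in for the linear classifier one would normally train).

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X_a, X_b, n_pairs=1):
    """X_*: epochs of shape (n_epochs, n_channels, n_samples).
    Returns 2*n_pairs spatial filters as rows."""
    cov = lambda X: sum(x @ x.T / np.trace(x @ x.T) for x in X) / len(X)
    Ca, Cb = cov(X_a), cov(X_b)
    # Generalised eigenproblem Ca w = lam (Ca + Cb) w: the extreme
    # eigenvalues give the filters with the largest variance ratio.
    _, W = eigh(Ca, Ca + Cb)
    return np.hstack([W[:, :n_pairs], W[:, -n_pairs:]]).T

def log_var_features(W, X):
    """Log band power (log variance) of each spatially filtered epoch."""
    return np.array([np.log(np.var(W @ x, axis=1)) for x in X])

# Synthetic data: each class has a stronger rhythm on one channel.
rng = np.random.default_rng(1)
def make_epochs(active_ch, n=50, n_ch=4, n_s=200):
    X = rng.standard_normal((n, n_ch, n_s))
    X[:, active_ch] *= 4.0
    return X

Xl, Xr = make_epochs(0), make_epochs(1)
W = csp_filters(Xl, Xr)
mu_l = log_var_features(W, Xl).mean(axis=0)
mu_r = log_var_features(W, Xr).mean(axis=0)

def classify(x):
    f = log_var_features(W, x[None])[0]
    return "left" if np.linalg.norm(f - mu_l) < np.linalg.norm(f - mu_r) else "right"

acc = np.mean([classify(x) == "left" for x in make_epochs(0, n=20)])
print(f"accuracy on held-out left epochs: {acc:.2f}")
```

The real system additionally selects subject-specific frequency bands and time windows; the ten-minute calibration described above supplies exactly the labelled epochs this kind of procedure needs.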
We can now use the inferred classifiers and filters for the disabled, for example for spelling. Here is a video where we did the spelling in a rather peculiar way. Remember, there is one bit of information, and we translate this bit: right imagination turns the arrow, left imagination chooses by elongating the arrow. In this way, with two choices, you can reach any letter in the alphabet. I should say that this video was taken at CeBIT in 2006. Picture about 30 people standing around the subject and maybe two TV camera teams; it was quite hard for the subject to stay relaxed and focused. I think Guido was the subject in this video; he misspelled only once, when the German Science Minister put her hand on his shoulder. I should also say that behind this white wall was the main power supply of Hall 8, a cable of this size. I haven't told you yet, but EEG signals are on the order of 10 to 15 microvolts, so you have to solve the cerebral cocktail-party problem well.
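To see why one bit per decision suffices to reach any letter, consider the simplest possible encoding: repeatedly halve the candidate set with each left/right decision. This is a hypothetical binary-tree stand-in for the arrow interface in the video, meant only to illustrate the information content: about five binary decisions select one of 27 symbols.

```python
import math
import string

ALPHABET = list(string.ascii_uppercase) + ["_"]  # 27 symbols incl. space

def spell_path(target, symbols=ALPHABET):
    """Sequence of left/right decisions that narrows the symbol set
    down to the target, halving the candidates at each step."""
    lo, hi, path = 0, len(symbols), []
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if symbols.index(target) < mid:
            path.append("left")    # keep the first half
            hi = mid
        else:
            path.append("right")   # keep the second half
            lo = mid
    return path

print(spell_path("E"))  # → ['left', 'left', 'right', 'right', 'left']
print(math.ceil(math.log2(len(ALPHABET))))  # 5 decisions reach any symbol
```

At ceil(log2 27) = 5 bits per letter, the achievable spelling speed is set directly by how many reliable bits per minute the decoder delivers.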
There is another demo I would like to point to. Outside, in the coffee room, there is a full-size pinball machine, and we have now used the BBCI to control it. You can see for yourself that the brain activity of the subject out there is used to control the right and left flippers of the pinball machine. The interesting thing is that you have to get timing and dynamics right, and recall: this is a non-invasive system; there is no brain surgery here. Of course, the point is not that our patients should start playing pinball. The point is that this pinball demo is a proxy showing how fast you can actually communicate with the machine in such a time-critical environment.
Outside the medical world, in the more playful world, what walls are yet to be broken? We go out of the lab with our EEG system and measure EEG while driving a car. The idea is to predict cognitive workload while driving, or to assess drowsiness or cognitive alertness, in real time. Going outside the lab is a huge wall, and we have started to take small bricks out of it. There is a very, very interesting world beyond that wall.
What are the future walls to break? Sensors are one issue. Today's EEG is still very close to the classical EEG that Berger invented at the beginning of the last century, and the montage takes 30 to 45 minutes. What we would like to have are dry, contact-free electrodes: a cheap system where you pull on the cap and start measuring right away. We would like to walk around with an EEG in order to observe human decision-making in realistic situations and to turn man-machine interaction into action, because right now you always have to wash your hair after the experiment. You can imagine the funny looks I got when I came to the TU computer science department asking for a shower room.
Machine learning has contributed to making the walls fall in this field. We can do non-invasive transmission at a high information rate: in the CeBIT video, the subject was able to transfer six to eight bits per minute over the whole period, and these experiments lasted six hours.
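To put six to eight bits per minute into perspective, one can convert it into letters per minute, and compute per-selection information with the Wolpaw formula, a standard BCI metric (the numbers below are illustrative, not measurements from the talk).

```python
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Wolpaw information-transfer rate per selection: log2(N) corrected
    for the decoder's error rate."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

bits_per_letter = math.log2(27)  # 26 letters plus a space symbol
for bpm in (6, 8):
    print(f"{bpm} bits/min ≈ {bpm / bits_per_letter:.1f} letters/min")
# A binary (left/right) decoder at 90% accuracy carries ~0.53 bits per decision:
print(round(wolpaw_bits_per_selection(2, 0.9), 2))
```

So six to eight bits per minute corresponds to roughly one to two spelled letters per minute, which matches the pace visible in the spelling video.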
Future applications are, of course, in rehabilitation, and you heard Miguel Nicolelis talk: I think it is yet to be seen where non-invasive methods will find their place, but the data analysis will certainly find its place again in the 'walk-again' project. We have a new measurement device with which you can look at the thinking and behaving brain in real time, and which can help us understand the brain. All this is done in the Bernstein Centers. There are also new ways of turning man-machine interaction into action: we can condition the interaction with a machine on the brain state. There is a long way to go with the sensors. My thanks go to the BBCI core team and to the European and German taxpayers. Thank you very much for your attention.