---
layout: post
title: "Can We Copy the Brain?"
---

Can We Copy the Brain?

A review of IEEE Spectrum Special Report: Can We Copy the Brain?

The IEEE Spectrum this month has a special report on synthetic brains. In this article I will review the report and comment on the status of the quest to replicate the human brain in synthetic systems. This article touches on neuroscience, neuromorphic engineering, artificial neural networks, deep learning, and computing hardware both biological and synthetic, and on how all of these come together in the grand human challenge of creating a synthetic brain at or above human level.

Why We Should Copy the Brain: we should do this because we want to create intelligent machines that can do our work for us. In order to do our work, machines will have to live in our environment, have senses similar to our own, and be able to accomplish the same kinds of tasks. It does not stop there: machines can do most tasks better than we can, just as we do better than other life forms. And we would like them to do things we cannot do, and to do better the things we can. That is called progress, and we need it to bypass biological evolution and speed it up. The article has a good summary of what this will look like, and of what machines will do for us. More comments below in section PS1. For jobs, see PS3.

In the Future, Machines Will Borrow Our Brain’s Best Tricks: the human brain is one of the most efficient computing machines we know of. In that sense, it is the best “brain” in the known universe (known to us, at least). And what is a brain? It is a computer that allows us to live our life in our environment. What is life? For now, let’s just say our life is devoted to procreating, ensuring the best for our offspring, promoting future generations and their success, and preserving the best environment for all this to happen (or are we?).

And today we humans are trying to build artificial brains, inspired by our own. In recent years (there are many more articles and reviews online!), artificial neural networks and deep learning have steadily eroded many gaps between computer and human abilities. It is inevitable that they will become more and more like a person, because we are building them with exactly that goal in mind! We want them to do things for us: drive our cars, provide customer service, act as perfect digital assistants, read our minds and predict our needs. We also want them to pervade every instrument and sensor in the world, so they can better assist us in getting the right information at the right time, sometimes without us even asking for it.

But building a synthetic brain does not mean we need to copy our own. In fact, that is not our goal at all; our goal is to make an even better one! Our brain is made of cells and biological tissue, while our synthetic brains are made of silicon and wires. The physics of these media is not the same, and thus we only take inspiration from the brain’s algorithms as we build better and larger synthetic hardware, one step at a time. We are already dreaming of neural networks that can create computing architectures by themselves. The same way a neural network trains its weights, it could also learn to eliminate the last human input in all this: learn to create the neural network model definition itself!
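To make this last idea concrete, here is a minimal, purely illustrative sketch of architecture search: generate candidate layer configurations at random and keep the one that scores best. The `score` function here is a toy stand-in for a real train-and-validate loop, and all names and numbers are hypothetical, not from the article.

```python
import random

def score(architecture):
    # Toy stand-in for validation accuracy: prefer roughly 3 layers
    # of width around 64. A real search would train and evaluate here.
    depth_penalty = abs(len(architecture) - 3)
    width_penalty = sum(abs(w - 64) for w in architecture) / 64.0
    return -(depth_penalty + width_penalty)

random.seed(0)
# Each candidate is a list of hidden-layer widths, 1 to 5 layers deep.
candidates = [
    [random.choice([16, 32, 64, 128]) for _ in range(random.randint(1, 5))]
    for _ in range(50)
]
best = max(candidates, key=score)
```

Real systems replace the toy objective with trained-model accuracy and the random sampler with a learned controller, but the loop structure is the same.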

Even if we wanted to limit ourselves to creating a clone of our brain, it would still rapidly evolve beyond our capabilities, because one of the goals of building it is to continuously learn new knowledge and improve behavior. It is thus inevitable that we will end up with a “better” brain than ours, possibly so much better that we cannot even imagine it, perhaps the way our brain compares to that of an insect, and more. There may be no limit to how intelligent and knowledgeable a creature can become.

The Brain as Computer: Bad at Math, Good at Everything Else: we humans have studied neural networks for a long time now, and we have been studying our brains for a long time as well. But we still do not know how we can predict what is going to happen just by looking at a scene, something we do every moment of our lives. We still do not know how we learn new concepts, how we add them to what we already know, how we use the past to predict the future, and how we recognize complex spatio-temporal data, such as actions in the real world. See this for a summary. We also do not know how to best interact with a real or simulated world, or how we learn to interact with the world.

We may not know all this, but we are making strides. We started by learning to recognize objects (and faces, and road scenes). We then learned to categorize and create sequences (including speech, text comprehension, language translation, image-to-text translation, image-to-image translation, and many more). We are still trying to learn how to learn without a lot of labeled data (unsupervised learning). And we started playing video games: first simple ones, then difficult ones, now very complex ones. It is only a matter of time before AI algorithms learn about the mechanics and physics of our world.

And we got really good at it, better than humans at all these tasks, or at least some! And we are not planning to stop until we have robots that can do common tasks for us: cook, clean, wash dishes, fold laundry, talk to us (Alexa, Siri, Cortana, etc.), understand our feelings and emotions, and many more tasks commonly associated with human intellect and abilities! But how do we get there? We have been very good at having neural networks categorize things; now we need them to predict: to learn and categorize long sequences of events. And since there is an infinite number of possible events, we cannot train an AI with examples alone, because we do not have all the examples, so we need it to learn on its own. The best theories of how our brain learns to do this say that it constantly predicts the future, so it knows to ignore all unimportant and previously seen events, while still noticing when an event is new. Unsupervised and self-supervised learning will be important components. More here.
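The predict-constantly-and-flag-surprise idea above can be sketched in a few lines: a model that always predicts the next symbol from transition counts, treats correct predictions as unimportant, and flags anything unpredicted as novel. This is a toy illustration, not the article's method; the class name and data are invented.

```python
from collections import defaultdict

class NextSymbolPredictor:
    """Counts symbol transitions and predicts the most frequent successor."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, symbol):
        """Return True if the symbol was unpredicted (i.e. 'novel')."""
        novel = True
        if self.prev is not None and self.counts[self.prev]:
            predicted = max(self.counts[self.prev],
                            key=self.counts[self.prev].get)
            novel = (predicted != symbol)
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1  # learn the transition
        self.prev = symbol
        return novel

model = NextSymbolPredictor()
# A repeating pattern becomes predictable; the intruder 'X' stands out.
flags = [model.observe(s) for s in "abcabcabcabcXabc"]
```

After a few repetitions the `abc` pattern stops raising flags, while the unexpected `X` still does: unimportant, previously seen events are ignored, new ones are noticed.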

Note also that much of this deep learning progress did not come out of neuroscience or psychology, the same way that making good batteries did not come out of alchemy.

Computing hardware is mentioned in this article as well, with the claim that conventional computers may not be as good as some neuromorphic approaches. We comment on this topic here. There will surely be more efficient hardware coming, hardware that may be able to run the latest and greatest deep learning algorithms, such as our work here. And it may have neuromorphic components, such as spiking networks and asynchronous communication of sparse data, but it is not this day. Today, neuromorphic hardware has yet to run anything similar to the great successes of deep learning algorithms, such as Google/Baidu speech recognition on mobile phones, Google text translation in the wild on your phone, or the tagging of your images in the cloud. These are tangible results we have today, and they use deep learning, back-propagation on labeled datasets, and conventional digital hardware, soon to be specialized hardware.

What Intelligent Machines Need to Learn From the Neocortex: well, this is a “duh” moment. Jeff Hawkins wrote a very exciting book only a decade or so ago. It hugely inspired all my students, our e-Lab, and myself to work on synthetic brains, and to take inspiration from what we know, and can measure, of the human brain.

But since then, artificial neural networks and deep learning have stolen his thunder. Of course we need to learn to rewire. Of course we are learning a sparse representation; all deep neural networks do this — see PS2. And of course we need to learn to act in an environment (embodiment); we already do this by learning to play video games and drive cars.

But of course it does not say how to tackle real-world tasks, because Numenta is still stuck in its peculiar business model, where it helps neither itself nor the community. It would be better off listening to the community, sharing its successes, and funding smart people and startups. Maybe Jeff thinks he alone can solve all this and is better at it than anyone else. We are all victims of this egocentric behavior…

I have to add that I agree with Jeff’s frustration at how categorization-centric deep learning algorithms fail to tackle more complex tasks. We have written about this here. But as you can read in this link, we are all working in this area, and there will very soon be much advancement, as there has been in categorization tasks. Rest assured of that! Jeff says: “As I consider the future, I worry that we are not aiming high enough”. If Jeff and Numenta would join us, we would all be faster and better off, and could retarget our aims.

AI Designers Find Inspiration in Rat Brains: here we get to the source of all the problems in brain/cognition/intelligence research: studying the brain. I spent more than 10 years trying to build better neuroscience instrumentation, with the goal of helping neuroscientists understand how humans perceive the visual world. See this, slides 16 onward. This was at a time when people were still poking neurons with one or a few wires, making only limited progress on the topics I was most interested in: understanding how neural networks are wired, encode information, and build higher-level representations of the real world. Why do I care about this? Because knowing how the brain does some of this would allow us to build a synthetic brain faster, as we would apply principles from biology first, rather than trying to figure things out by trial and error. And bear in mind that biology got there by trial and error anyway, over billions of years of evolution…

With time, I grew increasingly frustrated with the progress of brain studies and neuroscience, because:

Working with artificial neural networks allows us to surpass many of these limitations, while keeping a variable degree of loyalty to biological principles. Artificial neural networks can be designed in computer simulations, can run really fast, and can be used today for practical tasks. This is basically what deep learning has been doing over the last 5–10 years. These systems are also fully observable: we know exactly how each neuron works, what response it gives, and what the inputs were, in all conditions. We also know in full detail how they were trained to perform in a specific manner.
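The full-observability point can be shown with a toy network: unlike probing tissue with wires, every weight and every intermediate activation of an artificial network is available for inspection. The 2-2-1 network and its weights below are illustrative, not taken from any real model.

```python
import math

def forward(x, w1, w2):
    """Run a tiny 2-2-1 tanh network and return every intermediate value."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w1]
    output = sum(wo * h for wo, h in zip(w2, hidden))
    # Every quantity in the computation is observable, for any input.
    return {"input": x, "hidden": hidden, "output": output}

w1 = [[0.5, -0.3], [0.8, 0.2]]   # input -> hidden weights
w2 = [1.0, -1.0]                 # hidden -> output weights
trace = forward([1.0, 2.0], w1, w2)
```

Nothing here is hidden: we can log `trace` for every input the network ever sees, which is exactly the kind of complete recording that is out of reach for biological neurons.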

But the question is: which biological principles are important to follow? While we have no answer to this question, we can definitely conclude that if an artificial neural network can solve a practical task, it will be important, regardless of whether it perfectly mimics a biological counterpart. Studying 1 mm³ of cortex and hoping to get an idea of how the brain works and learns is ill-founded. We may get a lot of data and detail, but most of it can be discarded, because only the underlying working principle matters. For example: we do not need to know where every molecule or drop of water is in a stream; all we need to know is where it is going on average and what the average flow is. And to test these underlying models, we can use our ideas or simulations, or even better, design a system that can design a synthetic system for us. We do not need to reverse-engineer every aspect of a piece of tissue, as it has little relevance to its underlying algorithms and operating principles. In the same way, we did not need to know how our ears and vocal cords work in order to send voice all over the world with radios and cell phones, surpassing the capabilities of any biological entity on this planet. The same can be said for airplane wings.

The article says: “AI is not built on neural networks similar to those in the brain. They use an overly simplified neuron model. The differences are many, and they matter.” This claim has been uttered many times to suggest that biology has some special property we do not know of, and that we cannot make any progress without knowing it. This is plain nonsense: we have made plenty of progress without knowing all the details; in fact, this may prove the details are not important. There has not been a single piece of evidence showing that adding some “detail” to the simple artificial neuron of deep learning improves system performance. There is no evidence because all neuromorphic systems operate on toy models and data, and are not able to scale to the successes of deep learning, so no comparison can be made to date. One day this may be the case, but I assure you your “detail” can simply be encoded as more neurons, more layers, or a simple rewiring.

Reading this article in Spectrum reminds me that the situation has not changed in the last 5–10 years: we still do not know how much of the brain works, and we still do not have the tools to investigate it. There is much information to back this claim; there have been two large initiatives to study the brain, one in the USA and one in Europe, both with very limited success. I am not negative about this field; I am just stating my observations here. I hope some smart minds come along and invent new tools, as I have tried and may have failed to do, for now. But please, let us stop poking the brain with a few wires while claiming that EEG will one day solve all our problems. I would be a very happy person if we could make some strides in neuroscience, but I think the current way of doing things, its basic research, goals, and tools, has to be reset and redesigned.

And maybe that is why, in this IEEE Spectrum article, all the neuroscientists say it will take hundreds of years to reach human-level AI, while all the AI researchers say it will take 20–50 years. In neuroscience so far, there has been little progress toward explaining neural networks, while in AI and deep learning, progress occurs on a daily basis. I call this the AI/neuroscience divide, and it will only grow.

Neuromorphic Chips Are Destined for Deep Learning — or Obscurity and We Could Build an Artificial Brain Right Now: we have already talked about this extensively in this post.

Why Rat-Brained Robots Are So Good at Navigating Unfamiliar Terrain: note that we implemented a similar system here. All it has to do is remember the view from a certain location. Since navigation is a basic need for anything that moves in an environment, it can be implemented as the representation of space from a certain point of view: knowing where you are means recalling a certain view.
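The "knowing where you are means recalling a certain view" idea can be sketched as a memory of view vectors keyed by place, with localization as a nearest-neighbor lookup. This is a minimal sketch under the assumption that views are already encoded as feature vectors; the class, places, and vectors are all invented for illustration.

```python
import math

class ViewMemory:
    """Stores one feature vector per visited place; localizes by recall."""

    def __init__(self):
        self.views = {}  # place name -> stored view (feature vector)

    def remember(self, place, view):
        self.views[place] = view

    def localize(self, view):
        """Return the place whose stored view is closest to the query view."""
        return min(self.views,
                   key=lambda place: math.dist(self.views[place], view))

memory = ViewMemory()
memory.remember("kitchen", [0.9, 0.1, 0.0])
memory.remember("hallway", [0.1, 0.8, 0.2])
memory.remember("bedroom", [0.0, 0.2, 0.9])
# A slightly noisy new view is matched to the closest remembered place.
where = memory.localize([0.85, 0.15, 0.05])
```

A real system would produce the view vectors with a learned encoder and handle views never seen before, but the recall step is exactly this lookup.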

Can We Quantify Machine Consciousness? Consciousness is just a model of the world and of yourself: like seeing yourself from another person’s perspective, but fully aware of your own feelings and senses. Any brain with enough computational power will be able to do this, whether in silicon or in biology. As we build more and more complex artificial neural networks, we see the same pattern: in the end, a small number of neurons encode “experiences” of all kinds. Today they may encode objects and classes, but they can also encode rewards and punishments, and soon long sequences such as actions and deferred rewards. These are no different from human experiences, as far as neuroscience can tell us.

I do not believe consciousness is a prerogative of humans; any artificial brain large enough will be able to create a model of itself and its environment. In fact, we have already argued this has to happen in order to predict what happens in an environment and what the results of our actions will be. I do not think very highly of this piece, and I would rather see the authors use their intellect to help us create usable systems, rather than conjecture about philosophy and things anyone can say anything about.

The fixation that bothers me the most is: humans are special, the human brain is special, nothing will ever compare, nothing can be better. Well, this is all nonsense. In evolutionary history, we see this happening all the time. We have evolved from single-cell beings into what we are, and even though we are more capable than other life forms, it does not mean we always will be, or that we have some special property. Our DNA is basically the same as that of many other beings on this planet! We keep evolving, and we are lucky to be at the top now, but it is not meant to last. Evolution goes on, with or without you.

And our evolution on this planet is based on biological materials. Nothing says we cannot build intelligence out of rocks and metals, or out of fire and smoke. In fact, it may have already happened, but our limited senses may prevent us from seeing it. It is not very open-minded to think humans are special in any way; in fact, if you look around, you will see that humans at some point in the future will be what cockroaches are to us now.

Post scriptum

PS1: Why should we copy the brain, part II: many people think the human brain and the human race are special, but if you look at the evolutionary tree, you can see the only advantage we have is that we are at the top of the pyramid, nothing else. Other, lower life forms share a lot with us, including the same destiny we would face when confronted with a higher intelligence. We may be visited by a higher intelligence, or we may create it, as I think will happen in a very short time-frame. Once we create something more intelligent than us, as Jürgen Schmidhuber says in the article, this intelligence will most probably want to explore and understand the universe we live in, not confined by a biological body and its (relatively) short time-frame.

But before a true super-human AI comes along, we will try to replicate many other human abilities in computer programs, such as drawing, making art, painting, sculpting, making music, and so on. And while at the beginning synthetic art may only be as good as human art, it will eventually evolve into a higher form, maybe one that we will never comprehend.

An interesting movie about this topic is Transcendence. I like how the AI in this movie merges with humanity for the greater good.

PS2: if you have a neuron with 10,000 inputs and only 100 are active, this is a sparse signal. If you have a neuron with 100 inputs, all of them active, this is not sparse, but the data communicated is the same; hence the efficiency of the sparse system: the same information is carried by a small fraction of the connections.
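PS2 in code, with the numbers from the paragraph: both representations communicate 100 active values, but in the sparse case they sit among 10,000 lines, so the code can be stored as just the active indices. The particular choice of which 100 lines are active is arbitrary here.

```python
sparse_width, dense_width, n_active = 10_000, 100, 100

# Arbitrary choice of 100 active lines out of 10,000 (illustrative only).
active_indices = list(range(0, sparse_width, sparse_width // n_active))

sparsity = 1 - n_active / sparse_width   # fraction of silent inputs: 0.99
# Both codes carry the same number of active values...
same_payload = (len(active_indices) == dense_width)
# ...but the sparse one can be transmitted as a short list of indices
# instead of 10,000 wire values.
```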

PS3: if you worry about machines taking all our jobs, do not worry! We will not need jobs in a world where machines do all our work; we can just play and have fun! And we do not want the jobs that machines will do anyway, as we are trying to get rid of the jobs that are dangerous and boring, and to spend more time at leisure, perhaps ending up like the humans in Wall-E.

For more:

For more about everything you read here, see this video.

About the author

I have almost 20 years of experience in neural networks, in both hardware and software (a rare combination). See more about me here: Medium, webpage, Scholar, LinkedIn, and more…