
Uncertainty in deep learning and neural network? I do not feel that way!

Note: this is an old Blogger post from Thursday, June 5, 2014

As you might know, neural networks are often in the news these days, with many success stories.

Neural networks are now the state-of-the-art algorithms for understanding complex sensory data such as images, videos, speech, audio, and music.

Neural networks recently got rebranded under the name Deep Learning (DL) or deep neural networks.

In 2012 they made the news when they outperformed every other algorithm by more than 10% on an industry-standard image dataset:

http://image-net.org/challenges/LSVRC/2012/results.html

They also had similar improvements in speech recognition, up to 20%:

http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html?_r=0

And the same happened in many other tasks, recently reaching human-level performance in familiar face identification:

https://www.facebook.com/publications/546316888800776/

Yet industries and investors are wary.

They often see new algorithms come and go, almost on a year-to-year basis.

They say: “Why do a start-up on deep learning or neural networks? What happens when a better algorithm comes up and swipes you off your tablet?”

It is a legitimate doubt; however, I am certain we should not worry about this anymore.

Neural Networks are here to stay for many years.

There are 3 strong reasons for this:

  1. NEURAL NETWORKS ARE STRONGLY SUPERIOR:

In 2012, deep neural networks proved to be the best algorithms for understanding complex sensory data, improving accuracy by 10–20% instead of the typical 1–2% year-over-year gain.

This is a big difference in algorithm performance on complex data. I have not seen such a large improvement in my entire career.

Imagine a 100-meter dash athlete running the race in 7.5 seconds, beating everyone else by 2 full seconds, when previous records were typically improved by just 0.1 seconds. Wouldn’t you be surprised? “Almost unreal!”

  2. NEURAL NETWORKS WILL REACH HUMAN-LEVEL PERFORMANCE IN MULTIPLE TASKS:

The human brain is a large neural network. Deep learning provides large neural network models inspired by biological neural systems, such as the human brain.

The human brain is the best “processor” of complex sensory data in the known universe. We can understand images, videos, voice, sound like no computer can today!

A model of the human brain, such as the artificial neural networks used in deep learning, can scale to human performance as datasets and network topologies improve. An example is Facebook DeepFace mentioned above.

As we deepen our knowledge of the neural topology of our brain and improve the artificial neural network models, we can reach human performance in many more tasks. And this is happening now: every few months I witness a new result in this direction.

  3. NEURAL NETWORKS CAN SCALE:

We have seen many algorithms in the past that could not scale to the complexity of sensory data. They do well for a few years, then disappear under a new wave of “new” algorithms. While this was happening, neural networks relentlessly continued to evolve, sometimes outside the limelight, for more than 60 years. We owe this progress to many smart colleagues and researchers.

But one difference that sets neural networks apart from other techniques is that they are a very scalable model. Their ability to understand data grows with the size of the model and with the data available to train it. By adding layers of neurons, different connections and connection topologies, and more input data across space and time, the models provide ever-increasing abilities to understand the complex data they are trained on, and new data of the same kind.
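To make the scaling point concrete, here is a minimal NumPy sketch of a fully connected network whose depth and width are nothing more than a list of layer sizes; the sizes, initialization, and ReLU non-linearity below are illustrative choices, not a description of any particular system:

```python
import numpy as np

def init_network(layer_sizes, seed=0):
    """Create weights and biases for a fully connected net of arbitrary depth."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))  # He-style init
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def forward(x, params):
    """Forward pass: ReLU on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:       # all but the last layer are hidden
            x = np.maximum(x, 0.0)    # ReLU non-linearity
    return x

# Scaling the model is just a longer list of layer sizes:
shallow = init_network([784, 128, 10])            # one hidden layer
deep = init_network([784, 512, 512, 256, 10])     # more layers, more capacity

x = np.zeros((1, 784))                # e.g. a flattened 28x28 image
print(forward(x, shallow).shape)      # (1, 10)
print(forward(x, deep).shape)         # (1, 10)
```

Frameworks of this era such as Torch, Theano, or Caffe build models by exactly this kind of layer stacking, and add the GPU training that makes the larger versions practical.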

Bottom line:

Deep Learning and neural networks are here to stay. For at least 10 years, or for as long as it takes us to reach human performance levels in understanding raw sensory data.

And if you want to invest and push the future of technology, this is it!

Why should you believe me? “This is just your opinion!”, “Who are you anyway?”

I am a professor and an inventor, an engineer and a scientist. I want my work to change the world. I met US President Obama because of the success of my research. I teach college-level computational neuroscience, microchip design, computer architecture, machine learning, and deep learning, to name a few.

I have 20+ years of experience in the design of neuromorphic systems. I have seen neural networks in analog microchips, digital microchips, computer code, and theory. I have seen many examples of artificial neural systems that work and ones that do not, both in hardware and software. I have designed many systems myself and tested code and algorithms first-hand, not just through the work of others.

My goal is to help humanity with my knowledge of technology and science. And this is the way to do it now, with deep neural networks!

I am not in love with the algorithm, I love what works and can scale. Deep neural networks work and scale — right now.

I have seen a lot of things and I have the experience to spend my life on important goals for humanity as a whole. I would not work in this area if I was not strongly convinced of the power and potential of neural networks.

Believe me now or let time convince you, at the expense of losing a big opportunity.

While you decide, all my colleagues and I will continue every day to push the envelope of technology, breaking the barriers that are impeding current computers from understanding images, videos, speech, audio and any complex sensory data.

It is our choice, our destiny.

And it is inevitable.

Comments welcome!

Posted on Blogger on Thursday, June 5, 2014

About the author

I have almost 20 years of experience in neural networks in both hardware and software (a rare combination). See about me here: Medium, webpage, Scholar, LinkedIn, and more…

This post is licensed under CC BY 4.0 by the author.
