5 Common Misconceptions About Deep Learning

Foteini Agrafioti (@fagrafioti)

November 25, 2015

From huffingtonpost.ca

Deep learning is the buzzword of the moment inside tech circles, and as the public tunes into what this breakthrough in artificial intelligence (AI) means for the future of technology, a number of common misconceptions have emerged. Below, our machine learning experts at Architech Labs clear up some of the confusion.

1. Deep learning systems learn like babies.

To compare deep learning systems to babies implies that the systems will grow and mature over time by themselves, but that's not how it works. A more accurate way to use this analogy is to describe the research -- not the systems themselves -- as being in its infancy. Much like first-time parents, we're still in the early stages of understanding how everything works.

2. Deep neural networks learn like the brain.

The artificial neural nets in deep learning systems are only loosely inspired by the biological neural nets in the human brain. Though much of the terminology applied to deep learning -- neurons and activations, for example -- is borrowed from brain research, the two systems work very differently.

A good way to illustrate this point is to compare deep learning "brains" to the brains of human children -- at least what we know of both so far (and the truth is we don't know that much). One of the major differences is that human children can explore the world and form knowledge on their own, without any external teaching. Armed with this unsupervised knowledge, children can then break the requirements of a task down into sub-goals and take action to accomplish them.

To date, deep learning hasn't been able to match this capability: these systems need to be taught exactly what to learn, what is what, and what to do once the learning is done. Researchers are working on improving these models, but we aren't there yet.
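To make that contrast concrete, here is a minimal sketch of supervised learning -- a single perceptron on a toy problem, with made-up numbers. The point to notice is that every training example must arrive with a human-supplied label `y`; the system is told exactly "what is what" rather than discovering it on its own:

```python
import numpy as np

# Toy labelled data (hypothetical): four inputs, each paired with a
# human-supplied label. Only (1,1) is labelled positive (logical AND).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights, learned from the labels
b = 0.0           # bias term

for _ in range(20):                       # a few passes over the labelled data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi             # classic perceptron update
        b += (yi - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # -> [0, 0, 0, 1]: it learned AND only because the labels spelled it out
```

Without the label column `y`, the same algorithm has nothing to converge toward -- which is exactly the gap between today's supervised systems and a child's unsupervised exploration.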

3. Artificial intelligence presents an existential threat to the human race.

We all remember the scene in the original Jurassic Park when Jeff Goldblum's scientist character utters the immortal line: "[S]cientists were so preoccupied with whether or not they could that they didn't stop to think if they should." More than two decades later, Tesla mastermind Elon Musk delivered an updated warning when he told a crowd at the MIT Aeronautics and Astronautics Department's Centennial Symposium that AI posed the greatest existential threat known to humans and compared what we're doing with technology to "summoning the demon."

More moderate thinkers, like AI expert Thomas G. Dietterich of Oregon State University, have highlighted both the benefits and the risks involved in our increasing dependency on AI and what scenarios could hypothetically arise in the future. And it's been a long-running statistical parlour game amongst experts to predict just when that human-comparable AI era will emerge. At present, median estimates hover somewhere around 2040-2050.

While it's valid to argue that technological advancement could suddenly explode and usher in the terrifying unknown (the so-called technological singularity), you could make the same predictive argument about almost any aspect of the human condition: population explosion, raw material and energy consumption, pandemics, natural disasters, debt, social inequality, nuclear war -- and that's just the short list.

On the brighter side of the argument, deep learning has already given us far better tools to tackle the problems listed above: better access to information, more advanced tools to design new medications, and increasingly sophisticated models to conduct and synthesize scientific research across an almost unlimited breadth of subfields.

But it's also important to acknowledge the potential risks and keep them in mind as we determine the degree to which we invite AI into our lives. At present, people are already dubious about the way major technology companies are mining our search engine data "under the hood" in order to spam our email and social media feeds with hyper-targeted marketing. This is a far cry from dystopian territory, but it presents enough of an issue around privacy and coercion to warrant serious discussion. As with any new technology, whether it's biotech, nanotech, or autonomous vehicles, it's important to identify future risks and use these hypothetical scenarios to present the need for a new framework and significant debate around their ethical use.

4. Deep neural networks already have an understanding of the world similar to humans.

Today's best deep neural networks can organize knowledge in a semantically coherent fashion by learning from lots of examples with supervision. For instance, when a deep learning system is fed a massive body of text, like the entire content of Wikipedia, it can learn word relationships that behave like simple equations: king - man + woman = queen; or France - Paris + London = England. But despite how impressive this is, these systems are still far from understanding the world in the intuitive way humans do.
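The word-vector arithmetic above can be sketched with toy, hand-picked vectors. The numbers here are illustrative only -- real systems learn vectors with hundreds of dimensions from large corpora such as Wikipedia -- but the mechanics are the same: add and subtract vectors, then find the nearest word:

```python
import numpy as np

# Hypothetical 3-dimensional word vectors, chosen by hand for illustration.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.2, 0.8]),
}

def nearest(target, exclude):
    """Return the vocabulary word whose vector is closest (cosine) to target."""
    best, best_sim = None, -2.0
    for word, vec in vectors.items():
        if word in exclude:
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target) + 1e-9)
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# king - man + woman lands nearest to "queen"
result = nearest(vectors["king"] - vectors["man"] + vectors["woman"],
                 exclude={"king", "man", "woman"})
print(result)  # -> queen
```

The analogy "works" only because the learned geometry happens to encode the gender relationship as a consistent vector offset; there is no understanding of royalty or gender behind it.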

The ultimate goal, then, is to reach the point where computers understand the world in a way that's similar to our understanding, so we can use them in ways that are also intuitive to us. A good example is the way we currently engage with search engines. When we're looking for information, we type exact keywords into the search bar, guessing at the terms the machine will recognize. Compare that to the way we would ask an actual person for the same information. We communicate well with other humans because we have a shared understanding of the world.
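A toy sketch makes the gap concrete. The documents and queries below are hypothetical; the point is that a naive engine matches literal words, so a synonymous query comes back empty even though a human would understand it instantly:

```python
# Hypothetical document collection for a naive keyword engine.
docs = [
    "cinema showtimes for downtown theatres tonight",
    "recipe for tomato soup",
]

def keyword_search(query, docs):
    """Return documents containing every word of the query verbatim."""
    words = query.lower().split()
    return [d for d in docs if all(w in d.lower().split() for w in words)]

print(keyword_search("cinema showtimes", docs))  # finds the first document
print(keyword_search("movie times", docs))       # -> []: no literal match, no shared understanding
```

A person hearing "movie times" would point you to the cinema listing without a second thought; the literal matcher cannot, because it has no model of what the words mean.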

Major search engines are already building natural language understanding into their products, letting users phrase a request naturally instead of plugging in isolated keywords. As far as we've come in this area, however, truly intuitive human-computer communication is still a long way off.

5. AI-powered robots will soon have self-awareness.

TechRadar recently published an article reporting that a robot had passed the so-called "self-awareness test." In the test, roboticists at the Rensselaer Polytechnic Institute in New York programmed a trio of robots: one was given the ability to speak, while the other two were rendered mute. The researchers then asked the robots which one of them could talk, to see if the machines could figure it out themselves. All three robots tried to answer "I don't know," but the two muted robots were unable to speak the reply aloud. The robot that heard its own voice recognized itself as the one that could talk and said, "Sorry, I know now!"

Readers without a deeper understanding of the experiment's context took these results as confirmation that the machine revolution is imminent. But Yann LeCun, one of the pioneers of deep learning and current Director of AI Research at Facebook, was swift to dismiss the hysteria and reassure the human race that AI is "nowhere near" a self-aware state.