Should artificial intelligence copy the human brain?

The biggest breakthrough in AI, deep learning, has hit a wall, and a debate is raging about how to get to the next level.

Information from The Wall Street Journal

Everything we’re injecting artificial intelligence into—self-driving vehicles, robot doctors, the social-credit scores of more than a billion Chinese citizens and more—hinges on a debate about how to make AI do things it can’t at present. What was once merely an academic concern now has consequences for billions of dollars’ worth of talent and infrastructure and, you know, the future of the human race. That debate comes down to whether the current approaches to building AI are enough. With a few tweaks and the application of enough brute computational force, will the technology we have now be capable of true “intelligence,” in the sense we imagine it exists in an animal or a human?

On one side of this debate are the proponents of “deep learning”—an approach that, since a landmark paper in 2012 by a trio of researchers at the University of Toronto, has exploded in popularity. While far from the only approach to artificial intelligence, it has demonstrated abilities beyond what previous AI tech could accomplish.

The “deep” in “deep learning” refers to the number of layers of artificial neurons in a network. As in their biological counterparts, networks with more layers of neurons are capable of more sophisticated kinds of learning.
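For readers who want to see what “depth” means concretely, here is a minimal sketch in Python. The layer sizes, library calls and random values are illustrative assumptions, not taken from any production system: a network is just a stack of weight matrices, and a deeper network simply has more of them.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_network(layer_sizes):
        # One weight matrix per pair of adjacent layers; more entries in
        # layer_sizes means a "deeper" network.
        return [rng.standard_normal((n_in, n_out)) * 0.1
                for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

    def forward(weights, x):
        # Pass the input through every layer, applying a simple
        # nonlinearity after each one.
        for w in weights:
            x = np.maximum(0.0, x @ w)
        return x

    shallow = make_network([784, 10])            # a single layer of connections
    deep    = make_network([784, 256, 128, 10])  # several stacked layers

    print(forward(deep, rng.standard_normal(784)).shape)  # -> (10,)

Each extra matrix in the stack is one more layer of artificial neurons between the input and the output.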

‘We need to take inspiration from nature.’

—Gary Marcus, NYU

To understand artificial neural networks, picture a bunch of points in space connected to one another like the neurons in our brains. Adjusting the strength of the connections between these points is a rough analog for what happens when a brain learns. The result is a neural wiring diagram, with favorable pathways to desired results, such as correctly identifying an image. Today’s deep-learning systems don’t, however, resemble our brains. At best, they look like the outer portion of the retina, where a scant few layers of neurons do initial processing of an image.
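The “adjusting” itself can be shown in a few lines. The sketch below is a deliberately tiny, hypothetical example: a single artificial neuron nudging its connection strengths toward a desired output. Real systems repeat this over millions of connections and examples.

    # Signals arriving at one artificial neuron, and the connection
    # strengths (weights) that the "learning" will adjust.
    inputs  = [0.5, -1.2, 0.3]
    weights = [0.1,  0.4, -0.2]
    target  = 1.0   # the output we want for this input
    rate    = 0.1   # how big a nudge to take on each pass

    for step in range(50):
        output = sum(w * x for w, x in zip(weights, inputs))
        error  = output - target
        # Strengthen or weaken each connection in proportion to how much
        # it contributed to the error (one step of gradient descent).
        weights = [w - rate * error * x for w, x in zip(weights, inputs)]

    print(round(sum(w * x for w, x in zip(weights, inputs)), 3))  # ~1.0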

It’s very unlikely that such a network could be bent to all the tasks our brains are capable of. Because these networks don’t know things about the world the way a truly intelligent creature does, they are brittle and easily confused. In one case, researchers were able to dupe a popular image-recognition algorithm by altering just a single pixel.
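The one-pixel result came from research on real image-recognition networks using a far more sophisticated search; the hypothetical sketch below only illustrates the kind of test involved, with a deliberately simplistic stand-in classifier and a made-up image.

    import numpy as np

    # A fake 8x8 grayscale "image" sitting close to the classifier's
    # decision boundary (values chosen so the demo is deterministic).
    image = np.full((8, 8), 0.495)

    def classify(img):
        # Stand-in for a trained model: the label depends only on total
        # brightness, which makes it easy to fool.
        return "bus" if img.sum() > img.size / 2 else "not a bus"

    def find_one_pixel_flip(img):
        # Alter one pixel at a time and report the first change that
        # makes the predicted label flip.
        original = classify(img)
        for row in range(img.shape[0]):
            for col in range(img.shape[1]):
                perturbed = img.copy()
                perturbed[row, col] = 1.0  # change just this one pixel
                if classify(perturbed) != original:
                    return row, col
        return None

    print(find_one_pixel_flip(image))  # -> (0, 0): one pixel flips the label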

Despite its limitations, deep learning powers the gold-standard software in image and voice recognition, machine translation and beating humans at board games. It’s the driving force behind Google’s custom AI chips and the AI cloud service that runs on them, as well as Nvidia Corp.’s self-driving car tech.

Andrew Ng, one of the most influential minds in AI and former head of Google Brain and Baidu Inc.’s AI division, has said that with deep learning, a computer should be able to do any mental task that the average human can accomplish in a second or less. Naturally, the computer should be able to do it even faster than a human.

On the other side of this debate are researchers such as Gary Marcus, former head of Uber Technologies Inc.’s AI division and currently a New York University professor, who argues that deep learning is woefully insufficient for accomplishing the sorts of things we’ve been promised. It could never, for instance, take over all white-collar jobs and lead us to a glorious future of fully automated luxury communism.

Dr. Marcus says that to get to “general intelligence”—which requires the ability to reason, learn on one’s own and build mental models of the world—will take more than what today’s AI can achieve.

“That they get a lot of mileage out of [deep learning] doesn’t mean that it’s the right tool for theory of mind or abstract reasoning,” says Dr. Marcus.

To go further with AI, “we need to take inspiration from nature,” says Dr. Marcus. That means coming up with other kinds of artificial neural networks, and in some cases giving them innate, pre-programmed knowledge—like the instincts that all living things are born with.

Many researchers agree with this, and are working to supplement deep-learning systems in order to overcome their limitations, says David Duvenaud, an assistant professor of machine learning at the University of Toronto. One area of intense research is determining how to learn from just a few examples of a phenomenon—instead of the millions that deep-learning systems typically require.
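As one simplified illustration of learning from a few examples, a system can compare a new input against a handful of labeled ones rather than training on millions. The features, labels and numbers below are invented for the sake of the sketch, not drawn from any of this research.

    import math

    # Three labeled examples per class, described by two invented features.
    examples = {
        "school bus": [(0.90, 0.80), (0.85, 0.75), (0.95, 0.90)],
        "fire truck": [(0.20, 0.90), (0.25, 0.85), (0.15, 0.95)],
    }

    def classify(point):
        # Pick the label whose few examples are closest on average.
        def avg_distance(label):
            pts = examples[label]
            return sum(math.dist(point, ex) for ex in pts) / len(pts)
        return min(examples, key=avg_distance)

    print(classify((0.88, 0.82)))  # -> school bus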

Researchers are also trying to give AI the ability to build mental models of the world, something even babies can accomplish by the end of their first year. Thus, while a deep-learning system that has seen a million school buses might fail the first time it’s shown one that’s upside-down, an AI with a mental model of what constitutes a bus—wheels, a yellow chassis, etc.—would have less trouble recognizing an inverted one.
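A hypothetical sketch of that idea: if a system checks for the parts that make up a bus rather than matching a pixel pattern, turning the image upside-down changes nothing. The part list and the detector below are stand-ins invented for illustration, not a real model.

    # An invented "mental model" of a bus: the parts that should be present.
    BUS_MODEL = {"wheels", "yellow chassis", "rows of windows"}

    def detect_parts(scene):
        # Stand-in for a perception step that lists the parts it found.
        return set(scene)

    def looks_like_bus(scene):
        # Check for the presence of parts, not their arrangement, so an
        # upside-down bus still counts.
        return BUS_MODEL.issubset(detect_parts(scene))

    upright  = ["yellow chassis", "rows of windows", "wheels", "stop sign arm"]
    inverted = list(reversed(upright))  # same parts, flipped arrangement

    print(looks_like_bus(upright), looks_like_bus(inverted))  # True True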

Supplementing deep learning with other kinds of AI is all well and good, says Thomas Dietterich, former president of the Association for the Advancement of Artificial Intelligence, but it’s important not to lose sight of the magic of deep learning and machine learning in general.

“For machine-learning research, the goal is to see how far we can get computer systems to learn just from data and experience, as opposed to building it in by hand,” says Dr. Dietterich. The problem isn’t that innate knowledge in an AI is bad, he says; it’s that humans are bad at knowing what kind of innate knowledge to program into them in the first place.

“In principle we don’t need to look at biology” to figure out how to build future AIs, says Dr. Duvenaud. But the kinds of more sophisticated systems that will succeed deep-learning-focused tech don’t work yet, he says.

Until we figure out how to make our AIs more intelligent and robust, we’re going to have to hand-code into them a great deal of existing human knowledge, says Dr. Marcus. That is, a lot of the “intelligence” in artificial intelligence systems like self-driving software isn’t artificial at all. As much as companies need to train their vehicles on as many miles of real roads as possible, for now, making these systems truly capable will still require inputting a great deal of logic that reflects the decisions made by the engineers who build and test them.
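What that hand-coded logic might look like, in a purely hypothetical and simplified form: the thresholds, the function and the scenario below are invented for illustration, not anyone’s actual self-driving code.

    def plan_action(pedestrian_probability, speed_mph):
        # Hand-written policy: the probability would come from a learned
        # perception model, but every rule and threshold here reflects a
        # decision made by a human engineer.
        if pedestrian_probability > 0.5:
            return "brake hard"
        if pedestrian_probability > 0.2 and speed_mph > 25:
            return "slow down"
        return "maintain speed"

    print(plan_action(pedestrian_probability=0.6, speed_mph=30))  # -> brake hard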
