Podcast notes: Can you teach a machine to think?
Karen Hao, MIT Technology Review’s senior AI reporter, and Will Douglas Heaven, its senior editor for AI, spoke on the Deep Tech podcast about the possibility of artificial general intelligence (AGI) — when AI can think and act like humans. For this special feature, we summarized some of their main talking points. [Note: Responses were lightly edited for clarity.]
Heaven on the rise of AGI:
"Building a machine that can think and do things that people can do has been the goal of AI since the very beginning, but it's been a long, long struggle. And past hype has led to failure. So this idea of artificial general intelligence has become, you know, very controversial and very divisive — but it's having a comeback. That's largely thanks to the success of deep learning over the last decade."
Heaven on different kinds of general intelligence:
"We're sort of stuck on this idea of human-like intelligence, largely, I think, because humans have long been the best example of general intelligence that we've had, so it's obvious why they're a role model. You know, we want to build machines in our own image. But you just look around the animal kingdom and there are many, many different ways of being intelligent: from the sort of social intelligence that ants have, where they can collectively do really remarkable things, to octopuses, which we're only just beginning to understand, but which are intelligent in a very alien way compared to ourselves."
Hao on the need for value alignment:
"If you observe the conversations that are happening among people who talk about how we need to think about mitigating threats around superintelligence, or around AGI, whatever you want to call it, they will talk about this challenge of value alignment. Value alignment is defined as: how do we get this superintelligent AI to understand our values and align with our values? If it doesn't align with our values, it might go do something crazy, and that's how it starts to harm people."
Hao on GPT-3:
"What I think GPT-3 has done differently is that there are just orders of magnitude more data now being used to train this transformer technique. So what OpenAI did with GPT-3 is they're not just training it on more examples of words from corpora like Wikipedia, or from articles from the New York Times, or Reddit forums, or all of these things; they're also training it on sentence patterns and paragraph patterns, looking at what makes sense as an intro paragraph versus a conclusion paragraph."
Heaven on AI vs. the human brain:
"In spite of the massive complexity of some of the neural networks we're seeing today in terms of their size and their connections, we are orders of magnitude away from anything that matches the scale of a brain, even sort of a rather basic animal brain. So yeah, there's an enormous gulf between that idea and the ability to do it, especially with the present deep learning technology."
Heaven on the importance of AI being different from us:
"The very mission of building an AGI that is human-like is perhaps pointless, because we have human intelligences, right? We have ourselves. So why do we need to make machines that do those things? It'd be much, much better to build intelligences that can do things we can't do, intelligences that work in different ways, to complement our abilities."