Gary Marcus on the Future of Artificial Intelligence and the Brain
Dec 15 2014

Gary Marcus of New York University talks with EconTalk host Russ Roberts about the future of artificial intelligence (AI). While Marcus is concerned about how advances in AI might hurt human flourishing, he argues that truly transformative smart machines are still a long way away and that to date, the exponential improvements in technology have been in hardware, not software. Marcus proposes ways to raise standards in programming to reduce mistakes that would have catastrophic effects if advanced AI does come to fruition. The two also discuss "big data's" emphasis on correlations, and how that leaves much to be desired.


READER COMMENTS

rhhardin
Dec 15 2014 at 11:34am

Coleridge has a refutation of AI in chapters V-VIII of Biographia Literaria (link); it is what Wittgenstein would call a grammatical argument, so it still works today. The motto would be Schelling’s “Matter has no inwards.”

We remove one surface, but to meet with another.

I used to say as a joke that AI has the longest history of undelivered future promise of any field, ever.

I think the field attracts male philosopher skeptic types from what used to be analytic philosophy.

My own programs that started out trying to do something intelligent with data wound up simply as good algorithms, nothing intelligent at all. Just the opposite. I suspect that will always happen.

The intelligence problem is so hard to figure out that it’s undoubtedly being thought of wrong. Language is getting in the way, or you have a false picture of what you’re trying to do.

On the brain’s wiring, I might remark, with Derrida, that wiring takes the place of writing, and imports language into your thoughts, so it seems like it could be a basis for intelligence.

Studying the formation of letters isn’t going to get you anywhere.

What would happen if intelligence depended on existence at two times at once?

Presence is a little tricky.

Again with Derrida.

rhhardin
Dec 15 2014 at 12:05pm

On checking that programs work, there’s a huge field of formal verification that’s used widely for hardware design checking, based on BDDs or CNF checking more than enumeration.

You can establish that if a small piece of hardware has inputs constrained to A then its outputs are constrained by B (B being what you want it to do).

Then larger pieces of hardware can assume B and in turn be verified from that to have outputs satisfying C, and so on up until the whole thing is proven to work.

For programs, it doesn’t quite work because their state space is too large to deal with. BDDs and CNF postpone the point where exponential blow-up happens quite a ways, but not indefinitely.
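
For a concrete, if tiny, illustration of the assume/guarantee style of checking described above, here is a minimal Python sketch: it verifies a one-bit full adder exhaustively and then reuses that verified block inside a ripple-carry adder. The circuit and function names are invented for illustration, and real tools replace the enumeration below with BDD- or CNF/SAT-based reasoning to postpone the exponential blow-up.

from itertools import product

def full_adder(a, b, cin):
    # Gate-level one-bit adder: returns (sum, carry_out).
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def check_full_adder():
    # Property B: for all boolean inputs (constraint A),
    # the outputs equal the arithmetic sum of the inputs.
    for a, b, cin in product([0, 1], repeat=3):
        s, cout = full_adder(a, b, cin)
        assert 2 * cout + s == a + b + cin
    return True

def ripple_adder(xs, ys):
    # Compose full adders; property B is now assumed for each one.
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

def check_ripple_adder(width=3):
    # Property C for the composed circuit, again by brute enumeration here.
    for xs in product([0, 1], repeat=width):
        for ys in product([0, 1], repeat=width):
            out, carry = ripple_adder(xs, ys)
            got = sum(s << i for i, s in enumerate(out)) + (carry << width)
            want = (sum(x << i for i, x in enumerate(xs))
                    + sum(y << i for i, y in enumerate(ys)))
            assert got == want
    return True

print(check_full_adder(), check_ripple_adder())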

Don McArthur
Dec 15 2014 at 2:09pm

My prediction for the 21st-century iteration of Bread & Circuses that will be provided to mollify those rendered useless by technological advance is three-part: a guaranteed basic income, legalized recreational cannabis, and immersive virtual reality.

Don McArthur
Dec 15 2014 at 5:42pm

Nice timing. This available today from the NY Times:

“As Robots Grow Smarter, American Workers Struggle to Keep Up”

http://www.nytimes.com/2014/12/16/upshot/as-robots-grow-smarter-american-workers-struggle-to-keep-up.html

Cris Sheridan
Dec 16 2014 at 2:23am

Loving these AI-themed interviews!

Let’s start with Google Translate, since this is a good stepping stone for how to think about this debate. Natural language processing made its momentous breakthrough not with “smarter” or “more intelligent” algorithms, but with massive amounts of data available on the web. As Google researchers, including Peter Norvig, wrote on this subject in 2009, “Simple models and a lot of data trump more elaborate models based on less data.” (see link)
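
To make that point concrete, here is a deliberately simple sketch of the data-driven approach: translation by phrase-table lookup, where the “model” is nothing more than counts over aligned phrase pairs. The phrase pairs and counts are invented for illustration; real systems mine billions of aligned sentences from the web.

from collections import Counter, defaultdict

# Hypothetical aligned phrase pairs harvested from parallel text.
aligned_phrases = [
    ("guten morgen", "good morning"),
    ("guten morgen", "good morning"),
    ("guten morgen", "morning"),          # a noisy alignment
    ("wie geht es dir", "how are you"),
    ("wie geht es dir", "how are you"),
]

# The "model" is just a table of counts: no grammar theory involved.
table = defaultdict(Counter)
for src, tgt in aligned_phrases:
    table[src][tgt] += 1

def translate(phrase):
    # Pick the most frequently observed translation.
    options = table.get(phrase)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(translate("guten morgen"))     # -> good morning
print(translate("wie geht es dir"))  # -> how are you

With a handful of pairs this looks trivial; with web-scale data the same idea starts to look intelligent, which is exactly the shift being described.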

We can apply this basic fact to many of the recent breakthroughs in AI. That said, “artificial intelligence” is now a misnomer and we refer instead to “big data” since the emphasis is less on machine intelligence and more on the quantity of human-generated data/information/intelligence (this point really should be understood…it’s a bit of a paradigm shift and one reason why big data has captured headlines over the last couple years).

Keeping that in mind, let’s now frame the issue this way: Bostrom and others conceive of a hypothetical superintelligent machine that exists independently of us, which may also become a threat. As an extension of this logic, it is conceived as a closed system that could possibly be confined if necessary. Does this type of reasoning fit with the real world? No. What we find is that this trend toward higher-scale intelligence or superintelligence is not some “god in a box” but, in fact, a distributed collective intelligence inextricably tied to humanity itself.

Via the International Society for Information Studies, “The Global Brain can be defined as the self-organizing network formed by all people on this planet together with the information and communication technologies that connect and support them. As the Internet becomes faster, smarter, and more encompassing, it increasingly links its users into a single information processing system, which functions like a nervous system for the planet Earth. The intelligence of this system is collective and distributed: it is not localized in any particular individual, organization or computer system. It rather emerges from the interactions between all its components – a property characteristic of a complex adaptive system.”

If this emerging superintelligence or global brain is formed from society, can it also be a threat to humanity at the same time? Of course! As history teaches us time and time again, we are the greatest threat to ourselves; technology merely amplifies our destructive capabilities when used for such purposes.


Andrew Burleson
Dec 16 2014 at 12:58pm

I really enjoyed the Gary Marcus interview. As a programmer, I thought his take on what is realistic and near to us, and what is unknown and likely far from us, was very accurate. I sat down to write my thoughts in response to this article and realized it was a *lot*, so I’ve broken this up into smaller segments.

1. Brute Force vs. Intelligence
In the opening segment Russ and Gary discuss how progress toward artificial intelligence has been linear, not exponential. In many cases, things like Siri that seem to be intelligent are actually not intelligent at all, but rely on clever brute-force solutions.

I worked on voice interaction software, so I have some insight into this case. If you’ve ever read a choose your own adventure book, voice interaction software is written a lot like that. There is a tremendous amount of brute force involved, which goes something like this:

– The microphone captures your voice as sound waves and sends them to the voice recognition software for processing

– The voice recognition software compares the captured sound waves against a substantial database of audio language patterns and creates a “fuzzy match” of words you might have said, ranked by how closely they match the pattern, resulting in a list sort of like this: “[I, eye, my, why] [write, ride, pride, pried, died] [by, my, I, thy, why] [pike, bike, Mike, mic]”

– The voice recognition software then looks at the string of words it thinks you may have said and compares them against language rules (grammar), to determine combinations that make sense. The computer can attempt every combination of words in the above example and see that “Eye price why Mike” and many other combinations are grammatically incorrect and eliminate those. It can then compare grammatically coherent combinations and determine that “I died by pike” is less likely than “I ride my bike”.

– With the speech converted to text, the voice interaction software can then analyze the nouns and verbs in the text, compare them against all the possible interactions, and pick the most likely match. The software can proceed through a script where it does things like asking for clarification, for example: “I couldn’t find anything called mickey dees near you, where did you want to go?” This is also where programmers could add special rules to make the software seem smarter, such as knowing that “mickey dees” really means “McDonald’s.”

In this way brute force can create a useful simulation of intelligent interaction, but it is not intelligent. The computer doesn’t “understand speech” or “know how to work your phone.”
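
Here is a toy Python sketch of the “fuzzy match plus grammar scoring” steps above. The candidate word lists and the bigram plausibility scores are made up for illustration; real recognizers use acoustic models and large statistical language models rather than a hand-written table.

from itertools import product

# Ranked word candidates per time slot (hypothetical recognizer output).
candidates = [
    ["I", "eye", "my", "why"],
    ["ride", "write", "pride", "pried"],
    ["my", "by", "thy", "why"],
    ["bike", "pike", "Mike", "mic"],
]

# A tiny "grammar" as bigram plausibility scores (invented numbers).
bigram = {
    ("I", "ride"): 0.9, ("ride", "my"): 0.9, ("my", "bike"): 0.9,
    ("I", "write"): 0.6, ("write", "my"): 0.3, ("my", "mic"): 0.2,
}

def plausibility(words):
    # Multiply bigram scores; unseen pairs get a small penalty.
    score = 1.0
    for a, b in zip(words, words[1:]):
        score *= bigram.get((a, b), 0.01)
    return score

# Brute-force every combination and keep the most plausible one.
best = max(product(*candidates), key=plausibility)
print(" ".join(best))  # -> I ride my bike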

2. The Turing Test, Computation vs. Intuition
Gary talked about the Turing Test. The voice interaction example above illustrates why there is a need for “sequels” to the Turing Test. It has turned out that through brute force we can create compelling simulations of specific interactions. Given the ability of a computer to store a nearly infinite “script” of possible behaviors, and to select from among these possibilities nearly instantaneously, it is becoming increasingly feasible to write software that can convincingly imitate any particular human behavior.

The reason this is not like intelligence is that it is not generally possible for a computer to make deductions or intuitive leaps that even small children can make easily. For example, you could work very hard to get a computer program to recognize birds in videos, and probably even succeed in making software that could identify birds reliably. But as the Google cat recognizer illustrated, there is a kind of reasoning about what it means to be a bird that is not included in the task of visually identifying what a bird is. If you ask a small child “what do birds do?” the child would probably say “they fly.” You could easily write a script to tell the computer the answer to this question, but even if you wrote a program to know all the facts about birds, the computer is not going to be able to reason that the essence of “bird-ness” is flight – so much so that we think of creatures like penguins as belonging to a special class of “flightless birds.”
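
A minimal sketch of that gap: a scripted fact table answers exactly what it stores and nothing more. The facts and attribute names below are invented for illustration.

bird_facts = {
    ("sparrow", "can_fly"): True,
    ("penguin", "can_fly"): False,
    ("sparrow", "has_feathers"): True,
    ("penguin", "has_feathers"): True,
}

def lookup(entity, attribute):
    # Scripted "knowledge": returns only what was explicitly stored.
    return bird_facts.get((entity, attribute), "don't know")

print(lookup("penguin", "can_fly"))        # -> False (stored)
print(lookup("sparrow", "can_fly"))        # -> True (stored)
# The child's question has no entry, and nothing here can generalize
# that flight is central to "bird-ness":
print(lookup("bird", "what_do_birds_do"))  # -> don't know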

The ability to reason about something in the world and distill conclusions about its essence is something unlike computation. We don’t yet really understand what reasoning is.

3. Limits of Computation Hardware
The conversation moved into an analysis of computer hardware versus the neural networks of the brain. Many people look at the brute force simulations of intelligence and see hardware as the only gap between today’s “good fakes” and tomorrow’s “genuine AI.” The assumption is that as computational power continues to grow exponentially the computing power needed to achieve intelligence will eventually be available and then artificial intelligence will be inevitable.

This doesn’t actually make sense, though, for two reasons. First, computer hardware is already close to, or possibly more powerful than, the brain in terms of computational throughput. There’s room for debate here, but the gap is not very large any more. And yet we’re nowhere near being able to simulate a human brain currently.

Second, the growth of computing power has changed dramatically in the last ten years. While we’re still able to pack more transistors into computers, we haven’t been able to do much about clock speed or power throughput in a decade, which means the speed of computation has not been increasing. Instead, chip makers are creating CPUs with more and more cores on them. Essentially, computers now come with more CPUs inside rather than faster CPUs.

The problem is, software can’t easily take advantage of these additional cores. Programs are, fundamentally, a series of steps for a computer to follow to compute a result. The steps usually need to go in a certain order. Multi-core computers can do more things at once, but they can’t take an existing program and run it faster. Instead the programmers need to re-write the program so that it is split into tasks that can be executed simultaneously, with the final result assembled at the end. This is very often possible, but much more difficult than just running a sequential program faster. There are also limits to how “concurrent” a process can be made: you can’t run step 2 of a program if it needs the result of step 1 and step 1 isn’t finished yet.
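
A minimal Python sketch of that distinction, using a made-up “heavy” computation: the independent tasks can be spread across cores with a process pool, while the dependent loop cannot, because each step needs the previous step’s result.

from multiprocessing import Pool

def heavy(x):
    # Stand-in for an expensive, independent computation.
    return sum(i * i for i in range(x)) % 997

if __name__ == "__main__":
    inputs = [200_000] * 8

    # Independent tasks: easy to run simultaneously on several cores.
    with Pool() as pool:
        parallel_results = pool.map(heavy, inputs)

    # Dependent steps: step n needs step n-1, so extra cores sit idle.
    state = 1
    for _ in range(8):
        state = heavy(state + 200_000)

    print(parallel_results, state)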

4. Intuition Hardware
By comparison to computer hardware, which is optimized for incredibly fast computation (much faster than the brain), we understand the brain to be composed of neural networks that are massively parallel. But we don’t really know how this works. As Gary said, even trying to take a tiny slice of the brain and simulate its behavior is an extremely difficult computational task.

We know the brain is a complex adaptive system. One of the tenets of complexity theory is that it may not actually be possible to model a complex system, because in order to do so one would have to perfectly capture the exact state of every actor in the system at a moment in time, which is not just incredibly difficult but might not even be possible within the laws of physics. Therefore merely modeling the brain accurately may not be feasible.

Let’s assume, though, that we could find a way to make an exact copy of a neural network, and in this way create an “intuition machine” that worked just like a brain. As Gary pointed out, we would expect this “brain in a box” to have exactly the same performance characteristics as the brains in our skulls: i.e., it would be capable of self-awareness and intuitive reasoning, but it would have a crappy memory and think at human speed, not computer speed.

I would propose, hypothetically, that this may actually be the cost of intuition. We understand the output of massively parallel neural networks in the human brain to be human consciousness. We also understand that this true intelligence is very different from the brute force computation that a computer does. In fact, the stark difference between our mechanical computation hardware and our biological intuition hardware is the very thing that makes the computation hardware so useful to us: it is good at what we are bad at, and therefore it may follow that it cannot also be good at what we are good at.

5. Environmental Adaptation
Russ and Gary close by discussing the shifts in the labor market that are occurring and will continue or accelerate due to improvements in technology, namely the ability of computers to move into more and more “semi intelligent” tasks like driving.

It’s important to note Gary’s example of *how* the Google car is able to drive itself. It relies on extraordinarily detailed mapping of the environment and behavior rules for navigating that environment, and it cannot operate outside of that environment. The car doesn’t actually “know how to drive”; instead it can run on the tracks the Google engineers have laid for it.

As humans when we want to take advantage of a technology like this we often engage in *environmental adaptation.* In fact, cars themselves are the technology we have done the most environmental adaptation for. Humans are well-suited for traveling through rough terrain, over mountains and such, with no roads required. Cars can’t do that at all. But we like cars so much that we’ve laid concrete paths for them everywhere and in fact changed our environment so that the distances between places are actually quite infeasible for humans to operate in, because the cars work better that way. But the roads don’t literally go *everywhere*, there are places where you would need an “off-road” vehicle and there are places where even an off-road vehicle could not reach.

In the same way I can easily imagine the mapping work required to make driverless cars work rolling out just like cell-phone coverage. It works on the interstate and in major cities, and eventually in smaller and smaller cities, but probably is never very good in remote rural areas.

6. The Role of the Human
I appreciated that the conversation ended with the economic impact of machines moving into more and more “routine tasks,” and how this is bit by bit eroding the labor market. As Gary said, there will be lots of money to be made by a small number of people, but not much to go around for the rest. My favorite examples of this are QuickBooks and TurboTax. As these programs get better and better, Intuit becomes increasingly wealthy, while the market demand for “ordinary” accountants drops. What happens if you have years of training and experience as an accountant but you can’t find a job any more?

Russ sometimes imagines “Human Flourishing”, where most people have very little money but their lives are still nice because almost all their needs can be met freely or very cheaply. I think this is unlikely, however, because we are not seeing decreases in the cost of “basic opportunity”, things like housing, education, healthcare, and transportation, that would lead to life being rich with little or no money. Instead all these basics of life are becoming *more expensive*.

What strikes me as more realistic is that we end up with some kind of revolts or uprisings by large masses of underemployed people against the rising elites, and governments around the world implement populist taxation and regulation schemes designed to either reverse technological unemployment by restricting technology, or to “offset” widespread underemployment by massive wealth redistribution.

Of course, predictions of the future are generally wrong, and I’m sure mine fall into that category.

Great episode, Russ. I really enjoyed it!

Cris Sheridan
Dec 16 2014 at 3:55pm

@Andrew Burleson, Great points! Your explanation of how voice recognition software works was especially fascinating…tremendous brute force indeed!

On the replication/simulation of the human brain in a computer, again I just want to point out that this is already underway before our very eyes when viewed at the appropriate scale. The global markets and the economy have been created by us and are the closest thing we have to a representation of human intelligence, decision making, and value.

Mort Dubois
Dec 16 2014 at 8:30pm

@ Andrew Burleson: Comment of the Year!

As for the episode, I felt that this was a much more nuanced picture of AI than Otteson or Kurzweil. Those episodes struck me as interesting illustrations of how a scary idea, presented by an ostensibly credible person, is valuable enough to give those actors a large incentive to devote themselves to propagating unprovable fantasy. There’s a payoff in reputation and, in some venues, cash.

If we presume that an appearance on EconTalk is worth X for the guest’s reputation, for a reasonable expenditure of their own time, then Otteson and Kurzweil are the equivalent of Marcus. As a consumer of the show, which again costs me only time, I received a very different product from the former than from the latter. I had a hard time taking Kurzweil and Otteson seriously – their arguments simply don’t hold water – so I had only the satisfaction of feeling superior to the guest. Marcus, on the other hand, delivered plausible ideas that forced me to think carefully about whether I thought he was correct. On the whole, that’s the experience I look for from EconTalk. Takeaway for Russ: thank you for providing not just the outlandish shock story, but the less exciting but more informative look as well.

Fepps
Dec 17 2014 at 5:15am

Hubert Dreyfus’ work on AI certainly deserves a mention here…

Russ Roberts
Dec 17 2014 at 10:30am

Andrew Burleson,

Wonderful comment, full of insight. Thanks so much.

I did want to correct a misimpression that I may have given to listeners.

By “human flourishing” I do not mean the enjoyment of material prosperity.

Human flourishing to me is about the deep satisfactions we receive from creating things, working with others, being appreciated for what we accomplish and so on. It’s the opportunity to use one’s skills and gifts to earn money, make the world a better place, help others commercially or through charities or whatever else gives people deep satisfaction. Part of the benefit of material prosperity is to give people time to find ways to use leisure that are entertaining and that produce delight. But that is only one part of what I think of as human flourishing.

I also believe we live in a time where very large numbers of people in the richer countries spend their time in pleasant, delightful, and rewarding ways, both on the job and in their leisure time. My worry about the future is that that life may not be available to all people but only to a small elite.

Amren Miller
Dec 17 2014 at 4:21pm

Indeed, the greatest question: how to steer the future toward a positive general utopia, instead of one only for the so-called elite. It’s a real question. And perhaps, like I always say, the answer may need to come from killing unearned income at the top of our society, or at least attacking it through taxes.

And it never ceases to amaze me how obsessed some economists are with the notion that so much of human happiness can be found through the market. Your middle paragraph was not exclusively focused on that idea, but it had that as a major focus. I would love to see the economy shrink in some ways. We already work and consume too much. And major parts of commerce are not truly needed. But they pay the bills! That’s the market for you, right? I sell video games to pay for my house! The market is fine if you were only earning spending money, but sucks when you have a mortgage and education to pay for. No good easy answers…….

Greg G
Dec 17 2014 at 10:28pm

As someone handicapped by age and aptitude when it comes to understanding technology, I really appreciate how accessible this discussion was. It’s rare to find someone who has a deep understanding of this stuff and even rarer to find someone who is good at explaining it to the layman.

Ditto for Andrew Burleson. Add me to the chorus of people blown away by his comment. The first time I glanced through the comments I skipped that one because it was so long. After reading it I was disappointed it was so short. I learned something from almost every sentence.

Scott Campbell
Dec 18 2014 at 10:58am

I was listening to a presentation about a milling machine so sophisticated that it required a stupid human interface to make it operable. That is the future of robotics. The machines will be so sophisticated that our only means of interacting will be through stupid human interfaces. What this means is that the machine will offer us two choices for each decision, culminating in the best course of action. Statistically speaking, a machine could predict the answer, but the probability of it being wrong is still significant enough that some answers will best be made by the human brain.

Russ Roberts
Dec 18 2014 at 1:44pm

Amren Miller,

You write:

And it never ceases to amaze me how obsessed some economists are with the notion that so much of human happiness can be found through the market. Your middle paragraph was not exclusively focused on that idea, but it had that as a major focus. I would love to see the economy shrink in some ways. We already work and consume too much. And major parts of commerce are not truly needed. But they pay the bills! That’s the market for you, right? I sell video games to pay for my house! The market is fine if you were only earning spending money, but sucks when you have a mortgage and education to pay for. No good easy answers…….

I assume you’re talking about my comment just above yours. Not sure what you mean when you write: “And it never ceases to amaze me how obsessed some economists are with the notion that so much of human happiness can be found through the market.”

After my family and my religion, EconTalk is probably the most satisfying thing I do. It’s part of the market. A number of people, including me, get paid to make it happen. That you don’t have to pay out of pocket for it is irrelevant. Do you think it odd that I find working on EconTalk a deeply gratifying and rewarding way to spend my time? Or the books I write that people pay to read–is writing a book or reading one that you’ve purchased a strange way to find happiness or pleasure?

A lot of different things give our lives meaning and make them satisfying. Those things include the work we do, especially when we work with others to create useful and delightful products and services. Is there something obsessive about believing that? Our work, and what people do with our work, seems to be an important part of life beyond the salary attached to it.

As I point out above, we do lots of things for others, both alone and by joining with others in charities and volunteer work, that are part of a rewarding and meaningful life. But work is, or can be, part of a meaningful life as well. If you want to call that being “obsessed with the market,” that’s OK with me.

Sean B.
Dec 18 2014 at 4:24pm

At some point, EconTalk should invite Kurzweil to the show, rather than talk around Kurzweil’s points whenever AI/robotics comes up or, as in this episode, argue with Kurzweil in a proxy fashion via a detractor who oversimplifies his ideas and reasoning. I think in the case of this episode (“So, what would Ray Kurzweil say in response?”) he might argue that the exponential advance of hardware has ramifications for what is possible in software, pushing against the limits of what is possible to simulate in a brute-force way, which may lead to techniques that trade detail for speed. In other words, advances in hardware lead to advances in software, often ones that are difficult to conceive until the capability is ubiquitous and the application obvious (e.g. Uber).

Marcus mischaracterizes Kurzweil by applying his “accelerated returns” theory to a chat bot. Kurzweil is more about paradigm shifts… say, from chat-bot gimmickry to neural network simulation. And also more disciplines becoming dependent on hardware capability; being able to scale with hardware (again, chat bots vs simulation). He would also point out the deceptive nature of exponential growth— it looks like it’s going nowhere until suddenly it’s smashing expectations. To paraphrase him, people like to think linearly and have a hard time conceptualizing exponential gains intuitively.

Otherwise an interesting episode. I’m a big fan of the show; by far my favorite podcast.

Debashish Ghosh
Dec 19 2014 at 3:48am

Russ, there was a part where you and Gary discuss the efforts of some AI researchers to attempt to create intelligence by replicating/simulating the human brain. Both of you were of the opinion that such a goal doesn’t make a lot of sense, because human minds have several shortcomings such as imperfect memory, the tendency to make mistakes, etc. At first I agreed with that, but when I thought a little more I began to think it might be a worthwhile approach after all.

  • Wouldn’t a machine that could mimic a human brain be useful for many of the relatively mundane jobs in today’s world, such as answering phones in call centers? These are roles where the human operator typically goes through a list of options with the caller, or follows a flowchart sort of approach (if the caller reports Problem 1, go over possible solutions A, B, C and so on). If a machine could replicate that, wouldn’t that be valuable, even if it suffered from an equally poor memory as the human and had the same rate of making mistakes?
  • Despite the limits on access to human brains to study, it is possible that we might find ways to map all the neural pathways and replicate them. Might it not be worthwhile to continue exploring this approach until it becomes clearer that we are reasonably close to creating AI through other approaches that don’t suffer from human-like limitations?

Daniel Barkalow
Dec 19 2014 at 2:14pm

I think a good way of summarizing advances in the field of AI is: 60 years ago, we thought we could make machines think, and then they would be able to do tasks we care about. In the past 60 years, we have made little if any progress on making machines think, and great progress on making machines able to do tasks we care about without thinking.

There are really two claims: (1) in 20 years, we’ll have solved The Hard Problem and (2) we’ll be able to achieve a practical application if and only if we solve The Hard Problem. Invariably, after 20 years, we find that we’ve achieved the practical application without solving The Hard Problem, or even making any progress at all on it.

Furthermore, in the past decade, this has largely been recognized in the field, so people generally aren’t even trying to work on The Hard Problem; instead, they’re working around it, and doing so very successfully. Working on making machines able to think is really more related to trying to understand how humans think than to practical applications, and the practitioners are generally (like the guest) cognitive scientists who want to test their theories.

As a side point, I think it’s a bit inaccurate to talk about some of the alternatives to thinking as “brute force”. There are some incredibly clever techniques and great engineering involved in coming up with these solutions. They aren’t really any more “brute force” than, for example, making aerodynamic cars is.

Andrew Burleson
Dec 19 2014 at 5:58pm

Cris, Mort, Russ, and Greg G., thanks for the kind words. Greg G, in particular, I’m honored and really appreciate your comment.

Russ, thanks for the additional remarks on flourishing. Your take on these things is very interesting. You allude to your own feelings on wealth, inequality, unemployment, and technology from time to time; have you written or podcasted your personal take on these things in greater depth before? If not, I would be very interested to hear your thoughts in more detail.

Daniel Barkalow,
For what it’s worth, my characterization of computational power as brute force is not meant to convey that it is inelegant or clumsy. The engineering to harness a bunch of little circuits and logic gates and layer by layer build up the operating system and modern software on top of that is practically magical. The examples are just meant to illustrate how the computer does not “understand” what it is doing.

Cris Sheridan
Dec 19 2014 at 6:35pm

Correct me if I’m wrong but it appears the change of perspective I’m advocating (that is, AI/superintelligence is being created outwardly on a global scale in what is most popularly referred to as a “global brain”) does not strike EconTalk listeners as particularly interesting.

If this is true (and anyone cares to respond) I am very curious to know why. This was a very dramatic shift in my own thinking and seemed to put the whole “hard problem of intelligence” in the proper light.

Russ, given your familiarity with Hayek (and even Smith), I would think this would align more closely with your views as well.

Andrew Burleson
Dec 19 2014 at 10:54pm

Cris,

I think your concept is very interesting.

I agree that if we consider the activity of civilization as a whole we may already be living in the nascent era of super intelligence. The internet is facilitating the aggregation and distribution of information on a scale that has never before been possible, and this is allowing new ideas and information to emerge from the system in a way that would not be possible without it.

In a way, I see this as the logical outcome of the increased merging of our intuition hardware (brains) with better and more ubiquitous computation hardware (electronics). Frankly, humans are lousy at computation and recall, but the more that we correct for that with our machines the more we see collective “super intelligence” emerge.

Cris Sheridan
Dec 20 2014 at 6:02pm

Andrew, thanks for your feedback. It sounds like we are thinking along the same lines. My personal shift away from the largely mechanistic framework that has typified the AI community began when I started understanding human intelligence as an emergent property of a complex adaptive system that is biological in nature. Since it is illogical to think that we can produce greater than human-level intelligence through less complex hardware (a machine), I agree that it is more accurate to think about how machines are augmenting our pre-existing biology, most notably, as you point out, at the societal level. This opens the discussion to a much broader array of subjects like: superorganism, global brain, collective intelligence, cybernetics, networks, evolution, emergence, self-organization, etc. (see The Global Superorganism: an evolutionary-cybernetic model of the emerging network society)

As an aside, I am very interested in how this is taking shape through the financial markets and think this is perhaps where we are seeing the most unexpected results with the advent of high frequency trading.

Ron Crossland
Jan 2 2015 at 1:22pm

Russ – I appreciate the focus on artificial intelligence and the guests you have interviewed recently. Stimulating conversations to say the least.

I share your puzzlement concerning the current fear of super intelligent machines. It may be more prominent in the US simply because we are passing through a fear phase stimulated in large part by the idea of terror (unexpected attacks from malevolent sources – human or cyber).

In terms of artificial intelligence, I think moving directly from programmable machines to the associative architecture of an embodied neurology leaves out an amazing number of steps along the way. A discussion of “intelligence” might survey the number of steps required to place “Deep Blue” on a continuum between a 1930s Rolex and a human brain.

The idea that current silicon-based processes may ever reach artificial intelligence performance seems very doubtful based on what we know. What we don’t know is whether or not we will be able, using organic chemistry, to develop some hybrid silicon/organic machine. This is the “laboratory manufactured” portion of your thoughts on increasing human intelligence with ever-increasing connected technology.

For me, a manmade intelligent machine is more likely to arise from technologies not yet in use than from variations on faster silicon.





AUDIO TRANSCRIPT

 

Time
Podcast Episode Highlights
0:33Intro. [Recording date: December 8, 2014.] Russ: We're going to talk about human intelligence, artificial intelligence, building on a recent talk and article on the subject that he has done; whether we should be worried about artificial intelligence running amok. Gary, welcome to EconTalk. Guest: Thanks very much. I should mention, by the way that I have a more recent book that's very relevant, which is called The Future of the Brain: Essays by the World's Leading Neuroscientists. Maybe we'll touch on that. Russ: Excellent. We'll put a link up to it. Now, there've been a lot of really smart folks raising the alarm about artificial intelligence, or as it's usually called, AI. They are worried about it taking over the world, forcing humans into second-class status at best or maybe destroying the human race. Elon Musk and Stephen Hawking have both shown concern. And here at EconTalk I recently spoke with Nick Bostrom about the potential for superintelligence, which is what he calls it, to be an anti-human force that we would lose control of. So, let's start with where we stand now. What are the successes of artificial intelligence, what are its capabilities today in 2014? Guest: I think we're a long way from superintelligence. People have been working on AI for 50 or 60 years, depending on how you count. And we have some real success stories. Like, Google Translate--pretty impressive. You can put in a news story in any language you like, get the translation back in English. And you will at least figure out what the story was about. You probably won't get all the details right. Google Translate doesn't actually understand what it translates. It's parasitic on human translators. It tries to find sentences that are similar in some big database, and it sort of cuts and pastes things together. It's really cool that we have it. It's free. It's an amazing thing. It's a produce of artificial intelligence. But it's not truly intelligent. It can't answer a question about what it reads; it can't take a complicated sentence and translate it, that, into good English. Apparently I can't, either. It has problems. Even though it does what it does well. It's also typical of the kind of state of AI, which is kind of like it's an idiot savant. The savant that's mastered this critic[?] of translation without understanding anything deeper. So, Google Translate couldn't play chess. It couldn't ride a bicycle. It just does this one thing well. And that's characteristic of AI. You can think, for example, of chess computers. That's all they do. Watson is really good at playing Jeopardy, but IBM (International Business Machines), hasn't yet really mastered the art of applying it to other problems--working in medicine for example. But nobody would use Watson as their doctor just yet. So we have a lot of specialist computers that do particular problems. Superintelligence I think would at a minimum require things like the ability to confront a new problem and say, 'How do I solve that?' So, read up on Wikipedia and see. Superintelligence ought to be able to figure out how to put a car together, for example. We don't have an AI system that's anywhere near being able to do that. So, it's[?] in progress; but we also have to understand that the progress is limited. On some of the deeper questions, we still don't know how to build genuinely intelligent machines. 
Russ: Now, to be fair to AI and those who work on it, I think, I don't know who, someone made the observation but it's a thoughtful observation that any time we make progress--well, let me back up. People say, 'Well, computers can do this now, but they'll never be able to do xyz.' Then, when they learn to do xyz, they say, 'Well, of course. That's just an easy problem. But they'll never be able to do what you've just said'--say--'understand the question.' So, we've made a lot of progress, right, in a certain dimension. Google Translate is one example. Siri is another example. Wayz, is a really remarkable, direction-generating GPS (Global Positioning System) thing for helping you drive. They seem sort of smart. But as you point out, they are very narrowly smart. And they are not really smart. They are idiot savants. But one view says the glass is half full; we've made a lot of progress. And we should be optimistic about where we'll head in the future. Is it just a matter of time? Guest: Um, I think it probably is a matter of time. It's a question of whether are we talking decades or centuries. Kurzweil has talked about having AI in about 15 years from now. A true artificial intelligence. And that's not going to happen. It might happen in the century. It might happen somewhere in between. I don't think that it's in principle an impossible problem. I don't think that anybody in the AI community would argue that we are never going to get there. I think there have been some philosophers who have made that argument, but I don't think that the philosophers have made that argument in a compelling way. I do think eventually we will have machines that have the flexibility of human intelligence. Going back to something else that you said, I don't think it's actually the case that goalposts are shifting as much as you might think. So, it is true that there is this old thing that whatever used to be called AI is now just called engineering, once we can do it. Russ: Right. Guest: There's some truth in that. But there's also some truth in the fact that the early days of AI promised things that we still haven't achieved. Like there was a famous summer project to understand vision. Well, computers still don't do vision. And that was 50-some years ago. And computers can only do vision in limited ways, like met-camera[?] does face recognition, and that's helpful for its autofocus. Russ: Amazing. Guest: And you know, that's pretty cool. But there's no digital camera you can point out in the world and say, 'Watch what's going on and explain it to me.' There is actually a program that Google just released that does a little bit of that. But if you read the fine print, they don't give you any accuracy data. And then some really weird results there, that like, if a 2-year-old made errors like that you would bring them to a doctor and say, 'Is there some kind of brain damage here? Why is my 2-year-old doing this?'
6:32Russ: So, we talked here in a recent episode, and you read, talked about it, the cat-recognition program that Google has. Not so good. Guest: So, the cat recognizer was the biggest neural network every constructed to date. It was on the front page of the New York Times about 2 years ago. Turns out that nobody is actually using it any more. The Times got very excited about something that was sort of a demo, but not really that rich. So, what it really would do is it would recognize cat faces of a particular sort. It wouldn't even recognize a line drawing of a cat face. It would just cluster together a bunch of similar stimuli. Well, I have a 2-year old; that's not what he does with cats. He doesn't just recognize this particular view of a cat. He can recognize many different views of cats. And he can recognize drawings of cats; he can recognize cartoons of cats. We don't know how to build any access to [?] that. Russ: So, what would Ray Kurzweil say in response--you know, he's an optimist, he thinks--in many dimensions; we'll talk about some of other ones as well. But he says it's "fifteen years away." Besides the fact that it makes it more fun to listen to him when he says that, what do you think his--what does he have in mind? Does he have something in mind? Guest: He's always talking about this exponential law. He's talking about Moore's Law. So, he's saying, 'Look at this; look at how much cheaper transistors have gotten, how many more we can pack in, how much faster computers have gotten.' And this is an acceleration here. He calls it the Law of Accelerating Returns, or something like that. And that's true for some things. But it's not for others. So, for a strong artificial intelligence, which is what we are really talking about, where you have a system that really is as flexible and clever as a person, you look over the last 50 years and you don't really see exponential growth. So, like, we have this chat bot called Siri. Back in the 1960s before I was born so it's a funny use of the word 'we', but the field had ELIZA that pretended to be a psychiatrist. And some people were fooled. Russ: And some people presumably got comfort from it. Guest: And some people presumably got comfort from it. But it didn't really understand what it was talking about. And it was really kind of a parlor trick. And if you talked to it for long enough you would realize. Now we have Eugene Goostman, that does a little bit better--"one that's earned the Turing test this year [?]". But did that by pretending to be a 13-year-old Russian boy who didn't know our culture and our language, but was basically a big evasion[?], as ELIZA was. It's not really any smarter. Siri is a little bit smarter than ELIZA, because it can tell you about the movies and maybe the weather and so forth. But I wouldn't say that Siri is an exponential increase on what it was before. I would say it's a lot incremental engineering for 50 years. But not anything like exponential important. I think Kurzweil conflates the exponential improvement in hardware--which is undeniable--with software, where we can exponentially improve certain things--[?] has gotten exponentially better. But on the hard problem of intelligence, of really understanding the world, being able to flexibly interpret it and act on it, we haven't made exponential progress. I mean, linear progress; and not even a lot of that.
9:34Russ: So, let me raise an unattractive thought here. And I'll lump in myself in a different way, or at least my profession, to try to soften the ugliness of it. Isn't it impossible[?] that people that people who are involved in AI, who of course are the experts, are a little more optimistic about both the potential for progress and the impact of it on our lives? And maybe they ought to be, because they are self-interested. I think about economists-- Guest: Whoa. I should say that I am involved. I actually started a very quiet startup company. I would like to see AI as enhanced[?], from a personal process, perspective. I write in the AI journals; I just had accepted yesterday, in Communications of the ACM, which is one of the big journals; I have another one coming out in AM Magazine[?]. So, I mean, I am part of the field, now. I am kind of converted over from cognitive science to artificial intelligence in some ways. Russ: Well, that's okay. You're allowed to be self-reflective about your [?]. Guest: And I look around in the field, and a lot of people are really excited. And there a lot people that aren't. So, I'm running a workshop in Austin[?], co-running I should say, workshop in Austin about sequels to the Turing test[desk?]. This is coming up in January. And my co-organizers and I are just doing an interview, and we talked about why we did this. We are trying to build a sequel to the Turing test. And we all have this. And the field has gotten really good at building trees, but the forest isn't there yet. And I don't think you'll actually find that many people in the field that will disagree. Russ: No, I know; but in terms of the--and by the way, explain what the Turing Test is, for those who don't know. And we'll come back to it. Guest: The Turing Test is this famous test of Alan Turing, devised to say whether a computer was intelligent. And he did it in the days of B. F. Skinner and behaviorism, and so forth. And we wouldn't do it the way he did it. But he said, let's operationally define intelligence as, let's see if you can fool me into thinking you are actually a person, if you are actually a machine. And I don't think it's actually that meaningful a test. So, if we don't have that long a conversation, I can make a computer that kind of pretends to not be very smart; that's what this program Eugene Goostman did--not very smart or not very sophisticated, can be very paranoid, and so forth, and so evades the questions. All that's really showing is how easy it is to fool a person, but it's not actually a true measure of intelligence. It was a nice try but it was 60 years ago, before people really had computers, and somehow it's become this divine test. But it doesn't [?] with the times, which is the point of this session, that Manuela Veloso, Francesca Rossi, and I are running at the Triple[?] AI Society, the big artificial intelligence society.
12:26Russ: Let me come back to this question of bias. What I was going to say is I think if you ask most economists how well we understand the business cycle, say, booms and busts, recessions, recoveries, depression, they'd say, well, we have a pretty good understanding but it's just a matter of time before we really master it. And I have a different perspective. I don't think it's just a matter of time. So I accept your point that there are certain people in AI who think we haven't gotten very far. But it seems to me that there are a lot of people in AI who think it's only a matter of time, and that the consequences are going to be enormous. They're not going to just be like a marginal improvement or marginal challenge. They "threaten the human race." Guest: Before we get to those consequences, which I actually do think are important, I'll just say that there's this very interesting [?] by a place called MIRI in Berkeley, MIRI (Machine Intelligence Research Institute). And what they found is that they traced people's prediction of how far away AI is. And the first thing to know is what they found is, the central prediction, I believe it was the modal prediction, close to the median prediction, was 20 years away. But what's really interesting is that they then went back and divided the data by year, and it turns out that people have always been saying it's 20 years away. And they were saying it was 20 years away in 1955 and they're saying it now. And so people always think it's just around the corner. The joke in the field is that if you say it's 20 years away, you can get a grant to do it. If you said it was 5 years away, you'd have to deliver it; and if 100 years, nobody's going to talk to you. Russ: Yeah. Twenty is perfect. Let's go back to your point about the progress not being as exponential as, say, the hardware, as people might have hoped. You said it's been linear at best, maybe not so much. It seems to me that we've made very little progress on the qualitative aspect and a lot of progress on the quantitative aspect--which is what you'd expect. Right? You'd expect there to be a chess-playing program that can move more quickly, look at more moves, etc. A driverless car is a little bit more sophisticated, it seems to me: it requires maybe a different kind of processing in real time. Guest: Actually, driverless cars are really interesting because you could do it in different ways. Same with chess. You could imagine playing chess like people do. The Grand Masters only look at a few positions. It's really interesting that they're able to do that. Nobody knows how to program a machine to do that. Instead chess was solved in a different way, through brute force, through looking at lots of positions really fast, with some clever tricks about deciding which [?]; but looking at billions of positions rather than dozens. It turns out in driving you can also imagine a couple of ways to do it. One would be, you teach a machine to have, say, values about what a car is worth and what a person is worth, and you give it a 3-dimensional understanding of the geometry of the world, and all of these kinds of things. In a way, what Google's actually doing is coming closer to brute force: an enormous amount of data, a lot of canned coded cases, although I'm not exactly sure how they're doing it. And they rely on incredibly detailed road maps--much more detailed than the regular maps that you rely on. 
They rely on things down to a much finer degree--I don't know if it's by the inch or something like that, I don't have the exact data, which they don't share very freely. But from what I understand, the car can drive around in the Bay area because they have very detailed maps there. They wouldn't be able to drive in New York, because they don't have the same maps. And so they are relying on this specialist data rather than a general understanding of what it is to drive and interact with other objects and so forth. Russ: Yeah; I think it was David Autor who was talking about it here on EconTalk. He said it's more like a train on tracks than it is like the way a person drives. Guest: Yeah; it's a very good analogy. Russ: And--so let's talk about that non-brute force strategy for a little bit. I think a lot of people believe that it's just a matter of time before we understand the chemistry and biology and physics of the brain, and we'll be able to replicate that in a box, and make a really good one--or a really big one--so that it would look at a dozen moves in a chess game and just go, 'Oh, yeah.' It would have what we call intuition. What are your thoughts on that? Guest: Well, my new book, The Future of the Brain, which is an edited book with a lot of contributors, not just me, is partly on that question. And there are several things I would say, kind of bringing together what everybody there has written. The first is: nobody thinks that we are that close to that. So, people are trying to figure out how to look, for example, at one cubic millimeter of cortex and figure out what's going on there. And people will be thrilled if we can do that in the next decade. Not that many people think we'll really get that far. So there's a lot of question about how long it will take in order to have, say, a complete wiring diagram. And where we are now is we have some idea about how to make a wiring diagram where we don't actually know what the units are. So, imagine you have a diagram for a radio but I've obscured what's a resistor, what's a transistor, and so forth. You just know something goes here. Well, that's not going to tell you very much. People are aware of the problem. So, part of the Brain Initiative is sponsoring programs to figure out what kinds of neurons do we have. How many different kinds of neurons are there in the brain? We don't even know that yet. A lot of people think it's like 800 or 1000. We don't know what they're all there for, or why there are so many different ones. We know there's an enormous amount of diversity in the brain, but we don't have at all a handle on what the diversity is about. So, that's one issue: when will we actually have enough data? And a sub-question there is: Can we ever get it from the living human brain? So, we can cut up mammals' brains and most people won't get too upset about it. But nobody's going to cut up their living relatives in order to figure out how the brain works. Russ: They might want to. It's gauche. Guest: They might want to. Most people are going to draw the line there. So, there's actually interesting things you can do. Like, you can take some brain tissue from people with epilepsy where they have to remove part of the brain. And you don't want to sort of cut too little out because then you leave things in and sort of like removing a tumor: it's a kind of delicate balance. So you get some extra brain tissue from living human brains that you can look at. It's not that we have 0 data. 
But it's pretty difficult to get the data that we need. And then, if you have it in a dish it's not the same thing as having it in the live brain. So, it's not clear when we are going to get the data that we would need to do a complete simulation of the human brain. I'm willing to go on record as betting that that won't happen in the next decade, and maybe not the next 2 decades. Then you have the question of how do they put it all together in a simulation. And there are people working on the question; it's a very interesting question but it's a pretty hard one. And even if people figure out what they need to do, which requires figuring out what level of analysis to have, which is something your economic audience would understand--like, do you want to model things at the level of the individual or the corporation; what's my sampling unit? Well, that comes up in the brain. So, do I want to model things at the level of the individual neuron, the individual synapse, or the molecules within? It makes a difference with the simulation, how [?] simulation is. In the worst case, we might need to go down to the level of the molecule. The chance that the brain simulation will run in real time is basically zero. Russ: Why is that? Guest: So, the computational complexity gets so vast. You can think about, like, the weather right now. People know how to build simulations of the weather where you take a lot of detailed information and you predict where we're going to be in the next hour. That works pretty. Right? Predicting weather in the next hour is great. Predicting it in the next day is okay, Predicting it two weeks from now--forget about it. Russ: But we're pretty good at--November through January is going to be colder than--right? Guest: Yeah, you get some broad trends; I can give you some broad trends without doing a detailed simulation of the brain. Like I can tell you if I offer somebody a choice between $1000 and $5, they are going to take the thousand dollars. I don't need to do the brain simulation to get that right. But if I really want to predict your detailed behavioral patterns, then to do that at any length of time beyond a few seconds is probably going to be really difficult. It's going to be very computationally expensive. And if there are minor errors, as there may well be, then you may wind up in totally the wrong place. We also think about the famous butterfly flapping its wings in Cincinnati changes the weather somewhere else. Really, there are[?] effects like that in the brain. It's just not clear that any time soon there is really going to be a way of building it in AI. And then the third objection I have to that whole approach is, we're not trying to build replications of human beings. I love Peter Norvig's line on that. Peter Norvig is Director of Research at Google. He says, 'Look, I already built two of those'--meaning his kids. Russ: Yeah. He's good at that. We know how to do that pretty well. Guest: The real question is how do we build a machine that's actually smarter and doesn't inherit our limitations. Another book I wrote is called Kluge which is about the limitations of the human mind. So, for example, our memories are pretty lousy. Nobody wants to build a machine that has lousy memory. Why would you do that? If all you could do is emulate every detail of the brain without understanding it, that's what you'd wind up with--a computer that's just as bad at remembering where it put its car keys as my brain is. That's not what we want. 
We really have to understand the brain to simulate it. And that's a pretty hard problem.
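As a rough illustration of why the level of analysis dominates the cost of a brain simulation, here is a back-of-envelope sketch in Python. Every figure in it is an order-of-magnitude assumption (commonly cited estimates of neuron and synapse counts, an assumed machine throughput), not a number from the conversation.

```python
# Back-of-envelope sketch (all figures are order-of-magnitude assumptions)
# of how the chosen level of analysis drives the cost of a brain simulation.
NEURONS = 8.6e10        # commonly cited estimate for the human brain
SYNAPSES = 1.0e14       # commonly cited estimate
MACHINE_FLOPS = 1.0e16  # assumed sustained throughput of a large machine

levels = {
    # level of analysis: (units to simulate, assumed ops per unit per simulated second)
    "point neurons":     (NEURONS, 1e4),
    "detailed synapses": (SYNAPSES, 1e4),
    "molecular detail":  (SYNAPSES * 1e3, 1e6),  # crude stand-in for molecule-level modeling
}

for name, (units, ops_per_unit) in levels.items():
    ops_per_sim_second = units * ops_per_unit
    slowdown = ops_per_sim_second / MACHINE_FLOPS
    print(f"{name:18s} ~{ops_per_sim_second:.0e} ops per simulated second "
          f"-> ~{slowdown:.0e}x real time on the assumed machine")
```

At the coarse, point-neuron level the arithmetic looks feasible; pushed toward molecular detail, the same arithmetic puts real-time simulation many orders of magnitude out of reach, which is the worry voiced above.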
22:31 Russ: Given all of that, why are people so obsessed right now--this week, almost, it feels like--with the threat of super AI, or real AI, or whatever you want to call it: the Musk, Hawking, Bostrom worries? We haven't made any progress--much. We're not anywhere close to understanding how the brain actually works. We are not close to creating a machine that can think, that can learn, that can improve itself--which is what everybody's worried about or excited about, depending on their perspective, and we'll talk about that in a minute. But why do you think there's this sudden uptick, this spike, in focusing on the potential and threat of it right now? Guest: Well, I don't have a full explanation for why people are worried now. I actually think we should be worried. I don't understand exactly why there was such a shift in the public view. So, I wanted to write about this for The New Yorker a couple of years ago, and my editor thought, 'Don't write this. You have this reputation as this sober scientist who understands where things are. This is going to sound like science fiction. It will not be good for your reputation.' And I said, 'Well, I think it's really important and I'd like to write about it anyway.' We had some back and forth, and I was able to write some about it--not as much as I wanted. And now, yeah, everybody is talking about it. I don't know if it's because Bostrom's book came out, or because there's been a bunch of hype--AI stories that make AI seem closer than it is--so it's more salient to people. I'm not actually sure what the explanation is. All that said, here's why I think we should still be worried about it. If you talk to people in the field, I think they'll actually agree with me that nothing too exciting is going to happen in the next decade. There will be progress and so forth, and we're all looking forward to the progress. But nobody thinks that 10 years from now we're going to have a machine like HAL in 2001. However, nobody really knows, downstream, how to control the machines. So, the more autonomy that machines have, the more dangerous they are. So, if I have an Angry Birds app on my phone and I'm not hooked up to the Internet, the worst that's going to happen if there's some coding error is maybe the phone crashes. Not a big deal. But if I hook up a program to the stock market, it might lose me a couple hundred million dollars very quickly--if I had enough invested in the market, which I don't. But some company did in fact lose a hundred million dollars in a few minutes a couple of years ago, because a program with a bug that is hooked up and empowered can do a lot of harm. I mean, in that case it's only economic harm; and [?] maybe the company went out of business--I forget. But nobody died. But then you raise things another level: If machines can control the trains--which they can--and so forth, then machines, either deliberately or unintentionally--or maybe we don't even want to talk about intentions--if they cause damage, can cause real damage. And I think it's a reasonable expectation that machines will be assigned more and more control over things. And they will be able to do more and more sophisticated things over time. And right now, we don't even have a theory about how to regulate that. Now, anybody can build any kind of computer program they want. There's very little regulation. There's some, but very little regulation. It's, in little ways, kind of like the Wild West. And nobody has a theory about what would be better.
So, what worries me is that there is at least potential risk. I'm not sure it's as bad as, like, Hawking said. Hawking seemed to think it's like night follows day: They are going to get smarter than us; they're not going to have any room for us; bye-bye humanity. And I don't think it's as simple as that. That the world will eventually have machines that are smarter than us, I take for granted. But that they might not care about us, that they might not wish to do us harm--you know, computers have gotten smarter and smarter, but they haven't shown any interest in our property, for example, or our health, or whatever. So far, computers have been indifferent to us. Russ: Well, I think they have no intention other than what we put in them. And I think the parallel worry with the idea that some day we are going to cross this boundary from these idiot savants into a thinking machine is, 'Well, then, if they are thinking they must have intention. They must have consciousness.' I think that's the worry. I just don't know if that's a real--I don't know if that's a legitimate worry. I'm skeptical. I'm not against it; I don't think it's wrong. It's just not obvious. Guest: It's not obvious that consciousness comes with being smarter. That's the first thing I would say. And the second thing I would say is, it's not obvious that even if they make a transition to being a lot smarter--whatever that means--that they will care about our concerns then, either. But, at the same time, it's not obvious that they won't. I haven't seen somebody prove that they won't, or show me a regulation that will guarantee our safety. Russ: Yeah, that's a whole separate issue, when you think about--okay, let's take it seriously: what are we possibly going to do? I can't imagine what we might do to protect "ourselves"--humans--from these machines. Other than unplugging them, which, you know, Bostrom I think exaggerates, but he suggests it might not be possible to unplug them. They'll just take charge of our brains and fool us and manipulate us and--the next thing you know, we're gone. I don't find that plausible. It's interesting. Maybe we should worry about it. But given that we can't imagine what the skillset of these things is going to be, it's hard to know what we might do to prevent it from happening. Guest: I mean, at some level I agree with you. But I think there's a difference between what you and I can't imagine, sitting here on this phone call, and what might come of having society invest a little bit of money in academic programs to think about these things, and so forth. And maybe with enough intense interest we might come up with something. I'll give you an example. There is a field in AI, let's say in computer science, called program verification, in which you try to make sure a program actually does what it's supposed to do. Which most people, most of the time, don't do. Most of the time, they release something; there are bugs; they fix the bugs. And in some domains that's okay. In a car, it's not really okay. And, you know, the stronger, more powerful a machine gets, the less okay it is to just say, 'Oh, we'll try that; we'll see if there are bugs and we'll fix them.' You would actually like a science of how you assure yourself that the machine is going to do what you want it to do. And there is such a field. It's not, I think, up to the job so far. But you could think about how you grow a field like that so that it might help us. So, there are academic avenues you can consider. And there are legal avenues, too.
Do we need to think more about what the penalties are? How serious a crime is it? Most people think that software violations, unless they are something like embezzlement, are not that serious. But maybe there should be some class of software violations that should be treated with much more severe penalties. Russ: Well, an air traffic control system that went awry, or ran amok, would be horrifying. Obviously, the driverless car that swerves off the road into a crowd. These are obviously bad things. Right now we have something of a legal system to deal with it; but you are right: it would probably have to be fashioned somewhat differently. But when you talk about that kind of regulation, it reminds me a little bit of the FDA (Food and Drug Administration). Right? The FDA is designed to try to make sure that the human-created intelligence in pharmaceuticals is "safe." I don't think it's been a very good--I think it's been a very bad way to do that. I'm not sure we want to go down that road for computer programs. Obviously we'd need a computer program that would measure whether they are safe or not. And of course, that's impossible. In my opinion. Because there's no such thing as 'safe'--it inevitably involves judgment. Guest: Yeah. I mean, I think there are steps one could take; but I don't think they add up to something that makes me feel totally confident. So, that's why I still worry, even though I don't think the problem is an immediate one. I guess the other thing that Bostrom and others talked about is, the problem could come more quickly than we think. I mean, I wouldn't want the whole species to bet on my particular pessimism about the field. I mean, I could be wrong. Russ: That's a good point. Guest: I could give a lot of arguments for why I think, you know, in the next decade not that much is going to happen. But maybe someone will come up with some clever idea that nobody really considered before, and it will come quickly. Russ: And all of our appliances will conspire while we are asleep to take over the house. Right? That's the worry, right? And we won't even know about it. They'll have extracted our organs and sold them on markets before we can even wake up. Guest: Well, you make it sound ridiculous. But-- Russ: I'm trying. Guest: But 20 years from now, the Internet of Things will be pervasive. People will be habituated to it, just like they are habituated to the complete lack of privacy that they have on Facebook. And they'll be used to the fact that all of their devices are on the web. And people will create--what do they call them--I'll call it 'black malware' on the web; ransomware is the word. Where they create something that says, 'I'm going to erase your hard drive unless you send me some PayPal money.' Now multiply that by the Internet of Things. Russ: Yeah. I'd say that's more worrisome than--I'd say, as some listeners have pointed out in response to the Bostrom episode, that's a little more frightening than HAL run amok. Guest: I think in the short to medium term, it is.
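The program-verification idea mentioned above can be shown in a minimal sketch: state a specification and check an implementation against it. Real verification tools establish the property for all inputs symbolically; the toy version below (the clamp function and its specification are invented purely for illustration) just checks a small bounded domain exhaustively, which is exactly the kind of approach that stops scaling as programs and state spaces grow.

```python
# Minimal sketch of the idea behind program verification (illustrative only):
# write down a specification and check an implementation against it.
# Real tools prove the property for all inputs; this toy checks a bounded domain.

def clamp(x, lo, hi):
    """Implementation under scrutiny: clamp x into the interval [lo, hi]."""
    return max(lo, min(x, hi))

def spec_holds(x, lo, hi):
    """Specification: the result lies in [lo, hi], and equals x when x is already in range."""
    y = clamp(x, lo, hi)
    in_range = lo <= y <= hi
    unchanged_if_ok = (y == x) if lo <= x <= hi else True
    return in_range and unchanged_if_ok

def check_exhaustively(bound=20):
    """Check the spec for every (x, lo, hi) with lo <= hi in a small integer box."""
    for lo in range(-bound, bound + 1):
        for hi in range(lo, bound + 1):
            for x in range(-bound, bound + 1):
                assert spec_holds(x, lo, hi), (x, lo, hi)
    print("Specification holds on the bounded domain.")

if __name__ == "__main__":
    check_exhaustively()
```

The contrast is the point: exhaustive checking of a three-integer toy is trivial, while assuring yourself about a program empowered to trade or drive means covering a state space no enumeration can touch, which is why the field matters and why it is not yet up to the job.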
32:22 Russ: Let's go back to some of the technical side of things. And you speculated about this in a recent talk you gave; we'll post that on the episode's web page. Why haven't we made more progress? As you say, we've made a lot of progress in certain areas. Why have some of the optimists been disappointed? Where do you think AI has gone wrong? Guest: Well, I think in the early days people simply didn't realize how hard the problem was. I mean, people really thought that you could solve vision in a summer. There was a grant for it; there was a proposal; they said, this is what we are going to do. And people just didn't understand the complexity, I think. First and foremost, the way in which top-down knowledge about how the world works interfaces with bottom-up knowledge about, like, what the pixels look like--if you've got pixels in a row, is there a line there in this diagram? And we're pretty good now, 50 years later, at the bottom-up stuff: do these patterns of dots look like a number '6' or a number '7'? We've trained on a lot of examples; we can get a machine to do that automatically. But the top-down stuff that we really need to understand the world--nobody's got a solution yet. I think it's partly because you need to do a lot of hard work to get that right. It's possible to build relatively simple algorithms that do the bottom-up stuff. And right now the commercial field of AI is dominated by approaches like that, where you use Big Data and you get things kind of partly right. So, nobody cares if your recommendation is only 70% correct. So, if I told you you'd like a book by Gary Marcus, and you don't, well, it's not the end of the world. But there are domains where you need to get things right. Driving is one of them; maybe you can do that by brute force and maybe you can't. Google hasn't quite proven yet that you can. If you wanted a robot in your home, then the standard needs to be very high. It's not enough to be sort of 70% correct using a statistical technique. So, the 70%-correct statistical technique gives you the translation that gives you the gist. Nobody would use Google Translate on a legal contract, though, because the gist wouldn't be good enough. And similarly, you wouldn't want a robot that is right most of the time. Right? Because if it's wrong a little bit, it puts your cat in your dishwasher, and that's bad. And so-- Russ: Steers you down a one-way street the wrong way. Guest: [?] there is a higher standard for what is required, but nobody knows how to do that yet. So, people are kind of focusing on where the streetlights are. The streetlights are how to make money off Big Data. And that's kind of where the field is focused right now. And understandably so. There's money to be made. But that's not getting us to the deeper level there. Russ: And in your talk, I think you made a very perceptive point that what Big Data is really about is: this thing is related to this other thing. And that's not what we really want. Guest: I mean, what it's mostly doing, right, is statistical analysis, correlational analysis. And correlation can only get you so far. Usually correlations are out there in the world because there are causal principles that make them true. But if you only pick up on the correlation rather than the causal principle, then you are wrong in the cases where maybe there is another principle that applies, or something like that. And so, statistical correlations are good guides, but they are not great guides.
And yet that's kind of where most of the work is right now. Russ: Well, that's where we're at in economics. That's where we're at in epidemiology. That's where we're at, to some extent, with analyzing climate. These are all complex systems where we don't fully understand how things connect, and we hope that the things we've measured are enough. And I think they often aren't. So, I'm more of a pessimist about the potential of Big Data. Guest: I had that piece in The New York Times called "Eight (No, Nine!) Problems With Big Data," and expressed exactly that view. The graphic that you're talking about actually came from something that the Times's freelance artist did for that Op-Ed. And we went through all the kinds of problems that you get with Big Data--maybe you can put that one in the show notes. Ultimately they are variations on the theme of correlation and causation. And there are some more sophisticated cases. But if all you are relying on is the Big Data and you don't have a deeper conceptual understanding of the problem, things can go wrong at any minute. Like, a famous example now is Google Flu Trends, which worked very well for a while. Russ: Google what? Guest: Flu Trends. Like, do you have the flu? Russ: Okay. Guest: And what it did was it looked at the searches people were doing. And for a while, they were pretty well correlated: more searches for these words meant more people had the flu. And then it stopped working. And nobody really quite knew why. Because it was just some correlational data, it was a guide, but it was a very fallible guide. There were all these papers written when it first came out about how it was much better than the CDC (Centers for Disease Control); it was much faster than the data that the CDC was collecting, and so forth. And it is faster. It's immediate. But that doesn't make it right. Russ: Yeah. It's interesting.
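A tiny synthetic example, in the spirit of the Flu Trends story, of how a predictor fitted to a correlation in one regime breaks down when the mechanism generating the data shifts. All of the data and numbers below are invented for illustration.

```python
# Toy illustration (synthetic, invented data): a correlation fitted in one regime
# can fail badly when the underlying cause of the correlation changes.
import random

random.seed(0)

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Regime A: search volume is mostly driven by actual flu cases.
cases_a = [random.uniform(50, 150) for _ in range(100)]
searches_a = [10 * c + random.gauss(0, 50) for c in cases_a]
a, b = fit_line(searches_a, cases_a)  # "predict cases from searches"

# Regime B: a media scare inflates searches independently of actual cases.
cases_b = [random.uniform(50, 150) for _ in range(100)]
searches_b = [10 * c + 800 + random.gauss(0, 50) for c in cases_b]

err_a = sum(abs(a + b * s - c) for s, c in zip(searches_a, cases_a)) / 100
err_b = sum(abs(a + b * s - c) for s, c in zip(searches_b, cases_b)) / 100
print(f"mean prediction error in regime A: {err_a:.1f} cases")
print(f"mean prediction error in regime B: {err_b:.1f} cases")  # much larger: the correlation broke
```

The fitted line only ever captured "searches track cases"; once searching is driven by something else, the correlation still exists in the old coefficients but the causal story behind it is gone, and the predictions drift.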
37:45 Russ: What's the upside? Let's not be so worried for the moment about, say, my coffee maker, which I can program, taking out my internal organs while I'm sleeping. Let's talk about something a little cheerier. I've been surprised--maybe I don't read enough, obviously--but when they talk about the potential for AI, they use words like 'energy,' 'medicine,' and 'science.' And I'm curious--these are all things we all care about; they are really important. I'd like to go to the doctor; people are using AI to interpret x-rays; that's a good thing. Sometimes, and maybe a lot of the time--I was talking to Daphne Koller about this--maybe they are better than humans. Great. That's an improvement. What we really want, though, is a cure for cancer. I think, ideally. Are those things--we want "free energy," we want a battery that lasts more than a day--these are the things that are going to change the texture and quality of life. Are they in reach if we make enough progress? Guest: I think so. I mean, we were talking a minute ago, I guess, about epidemiology and things like that. I think that a lot of biological problems--we'll start with biology--are very, very complex in a way that an individual human brain probably can't fathom. So, think about the number of molecules. There are hundreds of thousands of different molecules in the body. And the interactions among them matter. You can think of it like a play with a hundred thousand different actors. Right? Your brain just can't handle that. People write plays--who was the guy?--Robert Altman would make movies with like 30 characters, and your brain would hurt trying to follow them. Well, biology is hundreds of thousands of characters. And really, it's like hundreds of thousands of tribes. Because each of those molecules comes in many, many copies, slightly different from one another. It might be that no human brain can ever really grok that, can never really interpret all those [?]. And a really smart machine might be able to. Right now, machines aren't that smart. They can keep track of all those characters, but they don't really understand the relations between them. But imagine a machine that really understood, say, how a container works. How a blood vessel works. How molecules are transported through. Really had the conceptual apparatus that a good scientist has, but the computational apparatus that the machine has. Well, that could be pretty exciting. That could really fundamentally change medicine. So, that's part of why I keep doing this, despite the worries: I do think on balance that probably it's going to be good for us rather than bad. I think it's like a lot of other technologies: there are some risks and there are some rewards. I think the rewards are in these big scientific problems and big engineering problems that individual brains can't quite handle. Russ: That's a little bit mesmerizing and fascinating. I should tell our followers--I wrote a followup to the Bostrom episode that is up at econtalk.org; you are welcome to go check it out. There were some interesting moments in that conversation. But one of the things I raise in that followup is related to your point about lots of molecules being analogous to lots of characters in a play. Which is--one analogy I think about is history. So, we don't have a theory of history. We don't pretend to understand the "real" cause of WWI or the American Civil War. We understand it's a messy, unscientific enterprise. People have different stories to tell; they have evidence for their stories.
But we don't pretend that we're going to ever discover the real source of the Second World War, the First World War, the Civil War. Or why one side won rather than the other. We have speculation. But the real problem is what you just said--there are a hundred thousand players. Sometimes it's just 10--Kaiser Wilhelm and Lloyd George and Clemenceau and the Czar and Woodrow Wilson--and that's already too hard for us. We can't--we don't have enough data; we don't have enough evidence; there's too much going on. And again, I think of economics as being like that. There are many people who disagree. But I think these are in many ways, possibly, fundamentally insoluble. Is that possible? Guest: Well, I think there's a difference between, like, predicting everything that's going to happen in this particular organism from this moment going forward, and understanding the role of this molecule such that I can build something that interacts with it. And realizing that if I do, things might change. So, I don't know that the entire problem is graspable, but I don't think that rules out that, if you better understand the nature of some of those interactions, you will be able to intervene. Russ: No, I agree. And obviously we've made--medicine's a beautiful example of how little we know and yet we've made extraordinary progress, maybe not as extraordinary as we'd like, in helping people deal with things that we call pathologies--things that are disease, etc. And I think we have a lot of potential there, for pharmaceuticals customized to your own particular metabolism and body, etc. I think that's coming; I think we'll make progress there. Guest: Well, I think AI will be really important in making that progress, actually. If you think about how much data is in your genome, it's too much for you to actually sort out by yourself. But you might, for example, be able to run in silico simulations in order to get a sense of whether this drug is likely to work with your particular genome. And probably that's just too hard a computation for one doctor to do. So, thank God, machines can help with it. Russ: Absolutely. Yeah. And they'll figure out the dose, [?] whether it will work or not; they'll tailor the dose, which is remarkably blunt at current levels of medical understanding. Guest: They'll find a cocktail for you. Russ: Sure. Because interactions are too hard for us. In theory, I guess simulation could take us a long way there. Guest: I would add, on the point about simulation, that intelligent simulation, let's call it, is a lot better than blind simulation. Like, if you really have to go down to the level of the individual molecule, you get back into that problem I was talking about before, computational complexity. You really want the simulations to have some understanding of the [?] principles that are there in order to do it efficiently.
44:25 Russ: Let's talk about how humans--let's move away from this machine that understands everything, including what I need next; that not only knows what drug to give me but knows that I shouldn't go skiing tomorrow because I'm not going to really like it so much. That's sort of, to me, this unrealistic but maybe possible future of machines, of our interactions with machines. What about the possibility of humans just being augmented by technology? We think about wearables and, I assume--people are already doing it, of course--implantables. What's the potential for machines to be tied to my brain in ways they aren't now? Now I'm just listening to them or looking at them. But maybe it becomes more direct. Is that going to happen? Guest: Well, something else you can add to your show notes is a piece I wrote on brain implants for The Wall Street Journal with Christof Koch. And we talked about these kinds of things and we went through some of the limitations. So, for example, right now a problem with brain implants is the risk of infection. So, we put something in, but we've got to clean the dressing every week or you might have an infection that will kill you. It's a pretty serious restriction. I would love to have Google on board, directly interfaced with my brain, giving me all the information I need as I need it. But I don't really want to pay the risk of infection and death. And so there are some technical problems like that that need to be solved. And probably they will be. There are some energy and power problems that need to be solved. There are some interface problems. So, we know enough about how the motor cortex works to make it so that you can roughly move a robot arm with your thoughts. You can't move it that well; it's sort of inefficient. It's like one of those things you see in a little carnival where you've got a little gear driving this thing--it's not a very direct connection. But we know something about it. We don't know anything about how to interface ideas to machines. So, the software and the pulling things out of your memory is not that hard: Google solves that, and Spotlight and Apple solve that, and so forth. We have technology for things like that. But the problem of interpreting your brain state so that we know what search query to run, that's pretty hard. It's so hard that we've made no progress on it so far. We will eventually. There's no reason to think that there's no code there to be understood. It's a matter of cracking codes. The code might be different for different individuals; you might have to do a lot of calibration. But there are probably some general laws that could help us get started. And some day we'll figure out those laws. But we haven't yet.
47:09 Russ: Let's talk about the economic effects and talk about employment. Of course, it's a big issue right now. This is a little more plausible to me: it's not so much that AI is going to know how to interview really interesting, smart people so I won't be able to do EconTalk any more. There are plenty of technological advancements that we've seen in the last 25 years that have made people unemployable or certain skills unusable in the workforce. What do you think is coming there in the shorter run, before we get to this superintelligence? What are some of the things that are going to make it challenging for certain skills to be employable? Guest: The first major skillset that's going to diminish in value, pretty rapidly, is driving. In the next two decades, most taxi drivers will lose their jobs; delivery truck drivers, bus drivers. Most of that will go away. And it'll certainly go away in three decades, and probably in two. Some of the problems are still on the software side, but I think they're mostly solvable. There are some liability issues, and people getting used to the idea. But eventually machines will drive better than people. And they'll do it cheaper, and they'll be able to do it 24 hours a day, and so the trucking companies will want to do it, taxi companies will want to do it-- Russ: You'll be safer, in theory. It's a glorious thing, in theory: it'll use less energy, it'll be more efficient. Guest: Eventually all that will come to pass. And there the 'eventually' really is like a 20-, 30-year horizon. It's not 100 years. There's no reason that it will take that long. And so that's a pretty radical shift to society. There are lots of people that make their living driving. And it's not clear what those people will do. The common story I hear is, well, we'll all get micropayments; Google will pay for our information--there's a [?] story--or we'll all make tons of money on YouTube and Etsy and so forth. And I don't buy that. I think that there's a little bit of money--well, actually, there's a lot of money to be made for a small number of people. You look at YouTube videos; the top 100 people make a real career out of it. But most people don't. And that's going to be true in each of these domains. So you might get a few hundred thousand people, if you are really lucky, across a whole lot of different creative enterprises, making some money; and then you are going to have several hundred thousand people that really don't have an alternative career. And the problem's going to get worse, because the same thing is going to happen in the service industry. So, in some places you can already order your pizza by touching a touchpad; you don't need a waiter there any more. There's someone who has a burger assembly plant that's completely automated; and I'm sure McDonald's is investing in that kind of thing. There are going to be fewer people working in fast food. There's going to be a whole lot of industries, one by one, that disappear. What I think the endgame is here--and I don't know how in America we are going to get there--is in fact a guaranteed minimum income from the state. The state is going to have to tax more heavily the people that own all of these technologies--I think that's clear. And there's going to have to be a separation in people's lives between how they find meaning and how they work. So, you and I grew up in an era in which meaning, especially for men but also for many women, comes from work. I mean, not solely from that--it comes from parenting and so forth.
But that's going to change. It's going to have to change, because for most people that's not going to be an option any more. People are going to have to make meaning in a different way. Russ: Yeah, it's interesting. I think a lot of the deepest questions around these technological changes are political and cultural. So you said those driverless cars are coming in 20 or 30 years--driverless vehicles. Guest: Could be 10. Russ: No, I think it could be 10, too. I think we'll have the technology. The question is whether we'll have the political will to fight for it and make it happen. So, right now, just to take a trivial example, Uber, which is, to me, the forerunner of the driverless car--because I think that's the way you'll be picked up; you'll be picked up by a drone, whether it's in the air or on the ground, that's going to drive you where you ask it to go. And it'll figure out through a network system how not to run into other things. But Uber's having a lot of trouble--everyone who uses it, almost everyone, thinks it's the greatest thing since sliced bread. And yet there are many cities where you are not allowed to use it, because it hurts the cab drivers who have paid a lot for their medallions; or people are alarmed by it. They find it somehow unattractive that they can charge certain prices at certain times, that they don't do x, y, or z. So, one question is the political will. The cultural will is another area, where your point about meaning is a fantastic point. Because to me, I think that's what matters. I think people--the pie is going to be really big, and dividing it up is going to be not as hard as you might think. But the challenge is: how much fun is it going to be to watch YouTube all day? I mean, people do seem to be drawn to it. I, myself, have trouble sometimes pulling myself away from entertaining videos. But that's a strange life, compared to, as you say, the way we grew up. Guest: I personally never watch YouTube. But I will admit I spend a lot of time on my iPad, just doing other things. I think that to some extent the pain will be eased for some people because a lot of [?] available-- Russ: Say that again--a lot of what? Guest: The pain will be eased. So, the Oculus Rift and its competitors--a lot of people are going to enjoy immersing themselves in virtual worlds. So, it might be that this is a sort of 'eat cake' solution, a kind of software-driven cake that nobody imagined before. And it might be that some people don't find that meaningful. Some people might do physical things, go back to the land. I think different people respond differently. I do have to say that the Web and i-devices and all those kinds of things really do suck up a lot of people's time; and I think that's part of what will happen. That will be the more true [?]. Russ: Yeah, I see it as a possible--obviously there will be cultural change as to what's acceptable and what's considered honorable and what's considered praiseworthy. My parents, and to some extent me--we frown on people who sit on the Internet all day, to some extent. But part of that is happening with us, too. So, we're not--but our children, they think it's normal. They don't think anything is remarkable about it at all, to inhabit a virtual world for long periods of time. And I presume it will become even more normal. So, some of these worries I think won't be worries. But as you point out--we have a lot of hardwired things in us that are not easily changed by culture, perhaps.
I think about just how physically fit so many people are, how physically active, in a world where being physically active is really not as valuable as it used to be, and maybe isn't even so healthy. People tout its healthiness; it makes you live longer. But a lot of it, I think, is just a desire for real stuff. Nassim Taleb points out how weird it is that when you check into a hotel you see a person's bags being carried by an employee of the hotel, and then half an hour later that same person is in the gym lifting heavy things--instead of lifting his own bags. We're a complicated species. Guest: We are a complicated species. I think what's interesting about the iPad, for example, is how well it taps into our innate psychology. So I think we do have an evolved psychology; it's a malleable one, malleable through culture. But people have figured out how to build toys that didn't exist before that really drive us--first it was television, now it's the iPhone, with the iPod in between. These toys really do tap into needs that have existed for hundreds of thousands of years.
55:30 Russ: So, why don't we close with--I want to ask you what's coming in 20 years, because--well, that's not the best way to think about it, but what's coming soon, besides driverless cars, that excites you or that worries you? Guest: I'm actually pretty excited about the virtual reality stuff. I'm ambivalent about it. I think that it's going to be incredibly exciting, and some people aren't going to want to leave it. I think it's going to be fun--like, you step into a virtual reality system and suddenly you are climbing Mt. Everest. And I think that's different from playing a conventional video game where you might be walking around--I think there'll be some real, visceral excitement to that. And that might be 10 years out; it certainly won't be more than 20. Part of the technology is already in place. I think that's going to feel very powerful. It may wear off. I remember being really excited by high-definition television and watching videos of underwater creatures and thinking this was the most amazing thing ever. Russ: Mesmerizing. Yeah. Guest: I was totally mesmerized for months. And now, you know, I watch my HDTV once a month, maybe, and it's fine; but it doesn't really do much for me any more. Russ: I've felt that way about my iPad, too. When I first got it I just couldn't believe what it could do. I enjoyed just touching it, watching it, putting it through its paces. Now, it's like, eh. Guest: I need it for work, my iPad. Russ: Yeah, it's a different thing. Guest: I think some of the excitement goes away and then it's a matter of, do these actually help me? And maybe that'll be true with virtual reality. But I think you'll also have some people, at least for a little while, who will check out. And that's not necessarily what you want for your society. So, it's complicated. I think that's the next big movement that I see coming--virtual reality is in some way going to fundamentally change the texture of society. And I don't really want to guess which way: whether it's going to be positive or negative, or just fun, or whether it doesn't last very long, or whether it's a long-term thing. But at least for a little while, that's going to be a big thing.