Robots and the Future of Jobs: The Economic Impact of Artificial Intelligence

Monday, November 14, 2016
Speakers
James Manyika

Senior Partner, McKinsey & Company, Inc.; Director, McKinsey Global Institute; Member, Board of Directors, Council on Foreign Relations

Daniela Rus

Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and Director of the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology

Edwin van Bommel

Chief Cognitive Officer, IPsoft, Inc.

Presider
John Paul Farmer

Director of Technology and Civic Innovation, Microsoft Corporation

CFR Board Member James Manyika, Massachusetts Institute of Technology's Daniela Rus, and IPsoft Chief Cognitive Officer Edwin van Bommel join Microsoft Director of Technology and Civic Innovation John Paul Farmer to discuss the economic effects of artificial intelligence. The panelists discuss the influence of AI as a tool in humans' lives, the challenges it can pose, and specific applications of AI such as autonomous vehicles.

FARMER: Good morning, everyone, and welcome to this conversation. I want to make one point, that this is on the record. But we’re going to have a great time discussing “Robots and the Future of Jobs: The Economic Impact of Artificial Intelligence.”

So I’ll start with simple introductions, and then we’ll lay out some definitions about the kinds of terms that will be involved in this conversation. So my name is John Paul Farmer. I’m with Microsoft. That’s my day job. Very happy to be here with three experts on the topic.

Next to me is Dr. James Manyika, who is a recovering roboticist. And his day job is at McKinsey, at the McKinsey Global Institute, where he’s been focusing on the future of jobs and the future of work in this new era.

In the middle, we have Dr. Daniela Rus. Dr. Rus is a professor and roboticist at MIT, and she is also the director of the Computer Science and Artificial Intelligence Lab there.

And at the end, we have Edwin van Bommel. Edwin is formerly of McKinsey, but now he’s the chief cognitive officer at IPsoft.

So, with that, let me lay out some definitions that are going to be important, I think, to following this conversation. You may have read in Foreign Affairs and elsewhere about this fourth industrial revolution, the changes that are happening in our society today and many more that will be coming down the pike. So as we—as we talk about these things, one, we should all be on the same page in terms of what artificial intelligence is. What do we mean when we say AI? And the definition that many accept is it’s the development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and even translation between languages. AI is sometimes humorously referred to as whatever computers can’t do today. It’s always the next thing coming.

Machine learning. Machine learning is another term you’re going to hear a lot, sometimes thought of as a rebranding of AI, of artificial intelligence. But there’s one key difference, which is that it takes a much more probabilistic approach as opposed to deterministic. So it looks at not just yes or no; it looks at a 30 percent chance of X, a 10 percent chance of Y, and so on.
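To make that probabilistic framing concrete, here is a minimal sketch using scikit-learn (an illustrative choice, not a tool named by the panel): instead of a hard yes/no answer, the model reports a probability for each outcome. The data is invented for illustration.

```python
# A minimal sketch of the probabilistic framing described above: instead of a
# hard yes/no answer, a classifier can report a probability for each outcome.
# scikit-learn and the toy data here are illustrative choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: two numeric features, binary label.
X = np.array([[0.2, 1.0], [0.4, 0.9], [0.9, 0.2], [1.1, 0.1], [0.3, 0.8], [1.0, 0.3]])
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# A deterministic rule outputs only 0 or 1; a probabilistic model reports,
# say, "30 percent chance of class 0, 70 percent chance of class 1."
print(model.predict([[0.6, 0.5]]))        # hard decision
print(model.predict_proba([[0.6, 0.5]]))  # class probabilities
```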

Big data, a term that I think we’ve all heard. Data is the raw material. Some people call it the new oil for this new era. But at the same time, there is no agreed-upon threshold for what makes a dataset “big.”

And finally, data science. Data science is essentially the application of these various terms and these various practices for business purposes.

So let’s get started. AI, artificial intelligence, has been around for a long time. What’s different now? Daniela, do you want to start this off?

RUS: So let me—let me jump in. So what’s different is this extraordinary convergence of software and hardware developments. So let me explain.

On the hardware side, I am sure that most of you know about Moore's Law, which states that the number of transistors on a chip, and with it the computational power of a computer, doubles about every 18 months. Well, we have similar exponentials in many other aspects. We have similar exponentials in wireless communication, in storage, in the capabilities of hardware systems.

So all of these things have been developing and advancing to the point where computers now are extraordinarily fast. And in fact, the size of the transistor is approaching the size of the atom, so we're really getting to the maximum of what you can compute. We also have tremendous advances in how we move data and how we store data. And on the hardware side we also have tremendous advances in the quality of sensors and of motors.
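A quick back-of-the-envelope sketch of the doubling Dr. Rus describes: if capability doubles every 18 months, a decade of that compounding is roughly a hundredfold increase. The time horizons below are arbitrary illustrations.

```python
# Back-of-the-envelope illustration of the doubling described above:
# if capability doubles every 18 months, how much does it grow over time?
DOUBLING_PERIOD_MONTHS = 18

def growth_factor(years: float) -> float:
    """Multiplicative growth after `years`, assuming one doubling per 18 months."""
    doublings = (years * 12) / DOUBLING_PERIOD_MONTHS
    return 2 ** doublings

for years in (3, 5, 10):
    print(f"{years:2d} years -> ~{growth_factor(years):,.0f}x")
# 10 years is ~6.7 doublings, roughly a 100x increase.
```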

On the other side, we have made tremendous progress in algorithms. So we know how to solve problems in ways that we did not know 10 years ago.

And all of these are coming together to enable things that we did not even imagine 10 years ago. And I would just like to call your attention to one example. In 2010, nobody was talking about self-driving cars, and now everyone takes them for granted. It's only been six years, six very short years. And the reason we are here is because we have extraordinary computation power, we have good sensors and controllers, but we also have good algorithms for making maps, for localizing in a map, for planning paths, and in general for decision-making. And so here we are. In the future, I expect we will have hyper-exponentials; in other words, I expect that progress will come increasingly fast.
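One small piece of the algorithmic stack she mentions, planning a path through a known map, can be sketched in a few lines. Real self-driving planners are far more sophisticated; this is only breadth-first search on a toy occupancy grid, with the grid and coordinates invented for illustration.

```python
# A minimal sketch of path planning on a known map: breadth-first search over
# a toy occupancy grid. Real autonomous-driving planners are far richer.
from collections import deque

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None.
    `grid` is a list of strings where '#' marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

grid = ["....",
        ".##.",
        ".#..",
        "...."]
print(plan_path(grid, (0, 0), (2, 3)))
```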

FARMER: So, James, people at McKinsey are investing more and more time and energy on this. Is it for those reasons? Are there additional reasons that you’d share?

MANYIKA: Well, I would just add that one of the things that has brought artificial intelligence to the public consciousness in the last five or six years, in addition to what Dr. Rus just described, is the fact that we've actually had some fairly spectacular demonstrations, things that have made this quite real for most people. Think about what IBM was able to illustrate by playing "Jeopardy" and winning; the fact that we've had autonomous cars; the fact that we can now do visual perception and machine-based reading of visual data at error rates that are actually better than human capability. So I think the fact that we've had these demonstrations. Then, of course, you've got DeepMind showing, with AlphaGo, that a machine can play Go and other very complicated games in ways that seem to exceed what humans can do. So I think it's when we've started to approach and do better than what humans can do that it's certainly gotten all our attention.

And I think it’s also—there’s another important distinction that is also important to make, and it’s often put in the language of narrow AI versus AGI. I think when people talk about narrow AI—back to definitions—they typically are talking about specialized machines or algorithms that can do particular, specific tasks very, very well—reading an MRI image or driving a car autonomously, but very specific tasks. When people talk about AGI, which is still a term to be developed and defined, they’re talking mostly about trying to build machines and algorithms that can work on a variety of generalized problems. And that’s still much further away from us. So you have teams like—again, the DeepMind team is trying to do that, and many other teams are trying to do that. But these two paths of narrow AI and AGI are kind of proceeding at very different paces.

In our case, what we’ve been trying to understand in the research we’ve been doing is trying to understand what is the impact of all of this on jobs, on work, and also on wages as these technologies sort of play out in the—in the real world. And I’m sure we’ll get into that later on.

FARMER: Absolutely. And when you say AGI, you’re referring to artificial general intelligence.

MANYIKA: Yes.

FARMER: So, Edwin, one of the things or one of the manifestations of artificial intelligence that’s probably most common in our lives today is virtual agents, virtual assistants—so Microsoft’s Cortana, Apple’s Siri. And at IPsoft, you’re focused on Amelia, is that correct? Can you tell us about that kind of—that work and what you’re doing?

VAN BOMMEL: Yeah, I think there are actually two different types of these virtual agents.

The first category I would label as what I would call personal agents. You have them on your phone, like Siri. You have them on your computer, like Cortana. You might have them in the kitchen, like Alexa. What these agents do is help you with everyday tasks, right? Can you put on the music? Can you find me a cab? What's the weather? All these types of things, which is already making our lives much easier. And what you see is that they get more and more integrated into the household, right? It's also for your lights, for your temperature, all these things.

Then there is the second part, which is the enterprise agents, which go much deeper. So, still in a conversation like you would have with Siri, they will now help you, for instance, to open up an insurance policy, to file an insurance claim, or to resolve a bill with your mobile phone provider. They go much deeper into an enterprise and help you solve complex problems, which goes beyond purely giving you a link to a website and letting you do self-service. It's like having your best customer representative on the phone who is really helping you solve the problem. And this requires dialogue, it requires emotional recognition of the situation, and it requires that these assistants can also execute processes.

FARMER: So let’s move on, then, to the future of work. Let’s get back to that topic that I think was brought up already by a couple of folks on the panel. James, I know this is something that you and your team have focused a lot of time and energy on. Would you like to lay out what you’ve found over the last couple of years you’ve been working on this, what this means for—in terms of full automation but also partial automation, and how this can impact not just jobs but also wages?

MANYIKA: Well, first of all, as you might imagine, this is one of the topics on everybody's mind, which is, are the machines going to take away all our jobs and there won't be anything for anybody to do? Here is where we've come at this. And, by the way, lots of people have been looking at this question. Some of you may have already seen a famous report that was done by Osborne and Frey at Oxford, which basically said 47 percent of all jobs in the U.S. would be automated. And I think that was one of the early analyses done on that, in 2013.

What we've tried to do, building on that, is to take a look at the level of particular tasks and activities. If you think about what any of us do, any of our jobs or activities consists of numerous different tasks that we do on a near-daily basis. So we took a task-level analysis, and we analyzed something like over 2,000 different tasks that people do in about 800 occupations. It turns out there's actually some very good data that the BLS, the Bureau of Labor Statistics, collects, as well as the O*NET database that various people have contributed to, and other datasets.

And what you find when you go through all of that: we did this taking into account currently demonstrated technology. In other words, we were looking at automation, artificial intelligence, machine learning, and other technologies that are working today, not things yet to be invented. And so, when you go through that and you ask the question, how many of these tasks could actually be automated in the next decade with currently available technology, the conclusion we got to was that something like 45 percent of activities and tasks that are conducted in the economy could be automated. And that's a technical-feasibility answer. You have to take into account a whole bunch of other considerations beyond technical feasibility before you conclude that that's how many activities will go away: what is the cost of that automation, what are the labor-supply dynamics that are alternatives to doing that automation, and what are going to be some of the regulatory and social-acceptance considerations. (Audio break.)

When we lay that back into whole jobs, keeping in mind what I said earlier, which is that any of our jobs consists of lots of tasks, we found that only something like 5 percent of whole jobs in their entirety could be automated. However, we also found that even though it's only 5 percent, if you look at the different skill categories, the middle-skill category is the one that will be automated the most. And for that group, the percentage looks more like 15 to 20 percent. Now, to put that in historical context, if you look back over the last five decades, we've already been automating middle-skill jobs at pretty high rates, and that rate has been hovering around 8 to 10 percent per decade. What we did find is that with current technology, that historical rate of automation of about 8 to 10 percent every decade is probably going to go up to about 15 to 20 percent. And that has profound implications for the workforce, particularly given the fact that those middle-skill jobs are where most middle-income individuals and families happen to be. So that's an important finding.

The second finding, which I think in some ways may be even more important, is that even when you don't have full automation of a job in its entirety, something like 30 percent of activities in about 60 percent of jobs could be automated. Well, what does that actually mean? It means that most jobs change, even though the job may not go away in its entirety.
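As a rough illustration of the task-level arithmetic Dr. Manyika describes, the sketch below treats each occupation as a bundle of tasks with time weights and a yes/no feasibility flag, then computes the share of all activity that could be automated and the share of jobs where a large fraction of tasks could be. The occupations, weights, and flags are invented for illustration; the actual analysis covered roughly 2,000 tasks across about 800 occupations.

```python
# A hedged sketch of the task-level aggregation described above. Each
# occupation is a bundle of tasks with a time weight and a flag for whether
# currently demonstrated technology could perform the task. All figures here
# are invented for illustration.
occupations = {
    "teller":  [("process withdrawals", 0.4, True),
                ("resolve disputes",    0.3, False),
                ("advise customers",    0.3, False)],
    "driver":  [("drive fixed routes",  0.7, True),
                ("load and unload",     0.2, True),
                ("handle exceptions",   0.1, False)],
    "analyst": [("collect data",        0.5, True),
                ("interpret findings",  0.3, False),
                ("present to clients",  0.2, False)],
}

def automatable_share(tasks):
    """Fraction of an occupation's time spent on technically automatable tasks."""
    return sum(weight for _, weight, feasible in tasks if feasible)

shares = {job: automatable_share(tasks) for job, tasks in occupations.items()}
overall = sum(shares.values()) / len(shares)
partially_automatable = sum(1 for s in shares.values() if s >= 0.3) / len(shares)

print(shares)
print(f"share of all activity that is automatable: {overall:.0%}")
print(f"share of jobs with at least 30% of tasks automatable: {partially_automatable:.0%}")
```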

A historical example may be useful to contemplate. Think about what bank tellers used to do before the ATM machine. And what happened with the advent of ATM machines is that we automated a portion of what they did. So the activity that had to do with getting your money out of the bank, that portion has been automated. But bank tellers didn’t go away. Their jobs basically changed. So I think this change in jobs is a much bigger effect, and it has implications also for the kinds of skills, capabilities that people are going to have to have if they’re going to work alongside machines.

FARMER: So it’s fairly easy to see the problems and the threats of these changes to jobs. At the same time, what you just referred to could be seen as opening up human capacity to do—to do more, to do new things, new tasks. Edwin, I know this is something that you’ve been working on. How do you think about your technologies at IPsoft helping people not simply do what they’re already doing better, but move on to new and bigger things?

VAN BOMMEL: I think there are a couple of things which I see from the market practice when we implement Amelia. The first thing is that when Amelia is starting to take over conversations, there are a lot of tasks still left for humans which are normally not being done—like, for instance, spending more time proactively with clients, spending more time on advising, if you talk, for instance, about the financial sector. So that’s the first thing which I see.

The second thing is that, to manage platforms like Amelia but also Watson, you need a new generation of jobs. You need people in what I would call a cognitive center of excellence, who will manage all these systems and machines, make sure that they keep on learning, and make sure that they stay within regulations and are compliant with what the company really wants. So that's, I think, the second thing.

If you think even bigger, I would also see in the near future that people can start spending more time on higher-value work: a doctor, for instance, spending more time taking care of patients who have more complex diseases, or even doing more research. So when you take out the mundane work, it really opens up a lot of new categories where humans can have much more value than in some of the jobs we're currently automating.

FARMER: Thank you.

RUS: Can I jump in and add to that?

FARMER: Absolutely. Please do, Daniela.

RUS: So I very much agree with you. And I'm extraordinarily excited about the possibility of using AI as a tool, because this is how we think about it: AI as a tool to help humans in their lives, in their jobs, at work, at play. And so I would say that the teller's job of handing out cash has largely gone away. But honestly, that gives us access to our funds 24/7. And in fact, I was able to get some cash for my taxi in New York this morning at 6:00 a.m. when I went to the airport. I wouldn't have been able to do that had I had to walk into an open branch. So it's important to think about AI as a tool, because this is how the community is developing the field.

I would also say that as the tools become available, the jobs change gradually. It is not the case that we flip a switch and all of a sudden the jobs go away. The jobs change gradually, and that allows us to train the people who are doing the jobs, to get them to be more capable, to have facility with the new tools. And so this training will take some time, just like the job evolution will take some time. I think we should be concerned about the new generation, who will have to live in a world that is much more digitized and computerized than our world today, if we can imagine such a thing. But that means we really have to teach and prepare our kids for dealing with technology. That means we have to teach them how to program, how to think computationally, how to make things. These are all 21st-century capabilities that should be part of literacy for everyone.

FARMER: So the concern is that this time might be different. Do you think those concerns are irrational?

RUS: What do you mean?

FARMER: Well that, yes, if you look historically at how technology has changed the workforce and the nature of jobs.

RUS: Oh, right.

FARMER: But people are saying, well, AI is getting so much better, we’re on a steeper curve now and it might come faster. Is that a reasonable concern?

RUS: Well, so I see the progress accelerating. So the rate at which we have new tools is going up. But that just means that we have to be prepared to understand the basics. All these tools come out of the same basic technology. And if we understand it, we are prepared to use the tools and to improve them.

MANYIKA: Yeah, I actually think it’s a legitimate question to ask, you know, is this time different. To answer that—I’m not quite sure I have a perfect answer to that—but I think there are a couple things worth considering that may be different. If you take historical examples, I think, for example, we all know that if you’d gone back all the way to 1900, the percentage of the American population, the workforce that was working in agriculture was 48 percent. Now it’s down to 2 percent and we still feed everybody. And, well, you know, everybody moved into, if you like, higher-wage activities—industrialization and now services. So you could say, well, maybe we’ll follow that same historical path and we’ll find other things to do.

But I think there are a couple things that could be different. One is the pace of change. Those changes typically took much longer to play through, and people had time to adapt. The changes we're talking about are happening much faster. The other possible difference is that as people work with machines, and we're talking here about people in all kinds of activities working with machines, it might actually have interesting effects when it comes to the impact on wages.

So take the idea of people working with technology. Several economists, including in some of the work you've done, have pointed out that you could actually have two outcomes. One is that if somebody is highly skilled and educated and they're now working with a machine, that's a win-win, right? You have a very skilled person working with the machine, and the results are so much better for everybody. They do much more spectacular work, and so forth. But at the same time you often have the other impact, which is that when people start working with machines, that also has the potential of depressing wages.

So think of it this way: One interesting example is to imagine what a London cab driver used to have to learn to navigate London streets, the so-called Knowledge. They had to memorize the map of London, quite frankly. It often took them a very long time to learn. And so by definition the supply of people who went through that entire process and could do the job was always somewhat constrained. Fast-forward to today, where you've got GPS systems. What's required of the driver no longer includes learning the map; they just need to know how to drive.

The effect of that, in a labor sense, is that you've suddenly expanded the pool of potential people who could actually do that job. And that has the potential of depressing wages for that activity. So you have these potential outcomes: really good outcomes for people who are very skilled (and of course, if we try to educate everybody, that will help produce that outcome), but also this other outcome, which puts pressure on wages. So there are several reasons why this could be different this time.

FARMER: So I think bringing up the taxi drivers in London is a great segue. Most of this conversation in the last few minutes has been about automation. But what about autonomy? Autonomous vehicles are the thing most people are thinking about, but autonomy is actually going to get into so many different parts of our lives, and so many different industries beyond simply driving. How do you think about autonomy as opposed to automation?

RUS: So I have been working on self-driving systems since 2010. In fact, we have a program in Singapore where we have created a number of different types of autonomous vehicles, all operating on the same technology: cars, golf carts, and wheelchairs. And everyone is excited, right? In the recent past every car manufacturer has announced self-driving car efforts. Does that mean that all the taxis will disappear tomorrow? Absolutely not. And there are actually two reasons. One is technological and one regulatory.

So on the technology side, the technology is really only ready for driving autonomously at low speed in low-complexity environments. For instance, if you take self-driving golf carts and you put them in a retirement community or an active-adult community, people who do not have mobility are able to go places. And the reason they can do so with these autonomous machines is that those machines move slowly enough that their sensor systems can keep up with all the other slow-moving machines, people, and animals in the scene.

If you put the same system in the middle of New York, I don't know how it's going to do. It's just not going to perform. So we don't know how to deal with autonomy when the speed of the vehicle is high and the complexity of the environment is high. We also don't know how to deal with autonomy in bad weather; no autonomous system today can drive in snow or in heavy rain. And we don't know how to deal with self-driving cars with respect to how humans and machines interact. On the road today you have a silent language between yourself and the other drivers that helps you cut in, move over, or make a left turn much more easily. But we do not know how to create such a negotiation for cars.

So these are some of the technological obstacles. And then there are regulatory obstacles. As of today, there is no law that says we can have self-driving vehicles anywhere in the U.S. We have laws that say we can test autonomous vehicles in some states. And there have been encouragements to all the other states to think about what it would look like to begin to test autonomous vehicles. So this is testing. This means that you have your robot car, there's a human sitting in the passenger seat, and that human is responsible for everything that happens in the car. And it took several years to get to where we are today. So imagine all the challenges if you were to remove the human from the car. In other words, who is liable? Is it the programmer, the user, the owner, the manufacturer? How do we think about liability? These are really profound questions, and we do not have answers yet.

FARMER: James, go ahead.

MANYIKA: Well, I was going to add one additional thing when it comes to autonomous cars, building on what Dr. Rus just said, which is that it also raises a whole range of questions about how humans work with machines in some profound ways. Think about it: would you rather have partial automation or full automation? Already today we know that when we fly in planes, for the most part pilots are really flying the plane only a small portion of the time. So if you now start to have partially automated driverless cars, where the cars are doing the driving 90 or 99 percent of the time and the human occasionally intervenes, do we get to a place where the ability of the human to take over is so diminished that they actually can't do it effectively?

We obviously had an example of this with the Air France plane that crashed flying from South America a few years ago, when the autopilot handed control back to the pilots, but the pilots had not been used to operating in those conditions, their skills had atrophied, and they probably did a worse job handling the situation. So it starts to raise questions: is it better to go all the way to full automation, or partial automation, which gives the perception that people still have control? What does that do to skills and capability? Is that just there to make us feel better?

RUS: So there is another model.

FARMER: I’m sorry. I think we’ve naturally transitioned into some of the concerns that people have about AI.

RUS: But there is another model that I would like to add to the discussion. All the car companies today are pursuing sequential autonomy, with the exception of Tesla. And we have seen some of the issues with Tesla. By the way, Google found that humans cannot be trusted to take over at millisecond timeframes.

But at MIT we have a new project we are working on in collaboration with the Toyota Research Institute, where we are pursuing parallel autonomy, the idea that the autonomy system is a guardian angel to you, not a chauffeur. In other words, the human drives, but this guardian angel system can see the road and estimate what is happening on the road much, much better than we can with our naked eyes. The guardian angel looks at what you want to do, looks at what is happening on the road, and if your maneuver is unsafe (if, say, you're about to move into a different lane without noticing someone in your blind spot), the car takes over and keeps you safe.

So this is really an exciting alternative way to think about autonomy as a tool, as a way of protecting the human driver, and ultimately as a way of combining with autonomy at rest that we’ve talked about, to the point where the car could become your friend. It could help you. It could watch over your shoulders if you’re tired, especially on treacherous pieces of the highway. It could talk with your refrigerator and remind you to go get some cat food, et cetera.
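A minimal sketch of the guardian-angel idea, under the assumption that the system only vetoes maneuvers its sensors judge unsafe, for example a lane change with a vehicle in the blind spot. This is an invented toy, not the MIT/Toyota system.

```python
# A minimal sketch of the "guardian angel" idea: the human proposes the
# maneuver, and the autonomy layer intervenes only if its sensors say the
# maneuver is unsafe. Invented toy logic, not the MIT/Toyota system.
from dataclasses import dataclass

@dataclass
class DetectedVehicle:
    lane: int          # lane index on the road
    gap_m: float       # longitudinal gap to our car, in meters

def guardian_angel(current_lane, intended_lane, detections, min_gap_m=10.0):
    """Return the lane the car should actually take."""
    if intended_lane == current_lane:
        return current_lane  # no lane change requested; do nothing
    for v in detections:
        # A vehicle in the target lane closer than the safe gap (e.g., in the
        # blind spot) means the requested lane change is unsafe.
        if v.lane == intended_lane and abs(v.gap_m) < min_gap_m:
            return current_lane  # override: keep the current lane
    return intended_lane  # maneuver looks safe; let the driver proceed

detections = [DetectedVehicle(lane=2, gap_m=-4.0)]   # car in the blind spot
print(guardian_angel(current_lane=1, intended_lane=2, detections=detections))  # stays in lane 1
```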

FARMER: Now, anytime we're talking about autonomy, ultimately we're talking about the algorithms that are governing the machine. And those algorithms are programmed by human beings. And those human beings have their own biases and weaknesses and imperfections, which ultimately make it into the devices. Edwin, when you're thinking about the work that you're doing or these issues that have been brought up, how are you addressing those concerns?

VAN BOMMEL: Maybe two things. First of all, building on what you were saying about the car assisting the human driver, that's also what we see with Amelia. There are a couple of clients who increasingly want Amelia to assist their employees to be successful and to stay compliant. So there too, in a more office-type environment, we see the same thing: technology advising humans and helping humans perform their jobs.

And coming back to your question, I think actually it's a brilliant point. And I think the technology is currently not far enough along that I would let these systems learn unsupervised. So I would say, despite all the human biases, it's still much better that, before something goes live, before something is really offered as advice or as support, humans have approved all the learning mechanisms in there, because you would rather have something you can explain than something you cannot explain when things happen.

And the structures you need for that are very comparable to what you normally do when you train human beings to do the same job. So from that perspective, it's not that different. The big difference, coming back to what you said earlier, is the scalability of these things. You now train a machine that might serve not 10 people, like a human being would, but thousands or hundreds of thousands of people, and that could potentially go global if you think about the scale some of the new technology providers have. That being said, I strongly support, despite human biases, having humans supervise the machines, given where the technology is.

FARMER: So no—

MANYIKA: Yeah—

FARMER: Oh, go ahead, James.

MANYIKA: But I think one of the things we're really going to need to think through is that while humans have biases, the machines will too, partly because the way machine learning actually works is that there's typically training data, and that data will also carry all kinds of biases. We saw the example of what happened with some of the bots that were picking up traffic and information on the internet and ended up with biases, in some cases biases that were discriminatory, and so forth.

So I think we're dealing with potentially two very different kinds of biases. Humans typically have cognitive biases, and the machines' biases are arguably, at some level, baked into the data itself. So we're going to need to think through these things and think about where machines are better and where they are not. In some ways, I think autonomous cars are a very interesting window into, and way to examine, a lot of these things.
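A quick sketch of the point about data-borne bias: before any model is trained, a simple check of outcome rates by group in the training data often reveals the bias the model would go on to reproduce. The records below are invented for illustration.

```python
# A quick sketch of the point above: the bias is often already visible in the
# training data itself, before any model is fit. Toy records, invented here.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

def positive_rate(rows, group):
    rows = [r for r in rows if r["group"] == group]
    return sum(r["label"] for r in rows) / len(rows)

for g in ("A", "B"):
    print(g, f"{positive_rate(records, g):.0%}")
# A model fit to this data will tend to reproduce the gap between groups
# unless the imbalance is measured and corrected for.
```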

Dr. Rus, you just mentioned, for example, some of the product-liability questions and how we deal with those. There are other questions that some have posed in the form of the classic trolley problems, which emanate mostly from philosophy: if you couldn't stop a train or a car and you had to run over five people versus a kid, which way would you go? Those questions now become quite real in the case of autonomous systems, because the algorithms can actually slow down time, so to speak, and somebody is going to have to program how to think about that choice and how to deal with it.

So then you say, well, OK, does it matter what ethical framework one is using to make that choice? If you're a consequentialist, you may program it one way, where you care more about the outcomes. If you're a deontologist, you go another way, where you care about the principles and the rules being applied. So who is making those choices as we build these systems, I think, becomes an interesting question.

Then you have another question, which is one of the things that's interesting about machine learning and AI: what you might describe as the detection problem. Now that many of these systems have, in a sense, passed what you might call the Turing test (the point at which you can no longer tell whether this is a machine or a human being), how do we know when these systems are being applied and used? So there's a question about how we detect the use of these systems, and whether we care about that. If we're being fed information of one sort, and it comes from a machine versus a human, are we more or less comfortable with that?

So you have these kind of different classes of questions. And then, of course, you have the ones that some have raised which are the more existential questions. I don’t know what the panel thinks. I’m not worried about those, because I think they’re so far away, but—

RUS: I would like to jump in and say something about the trolley problem, if I may. So the trolley problem is about whether to kill one person or five, maybe one young child or five elderly people. But I would like to be optimistic and say that our technology will advance so that the trolley problem will simply go away, because our sensors will know that the kid is running around the corner and will see the other group of people. We will see all these unusual events that we cannot detect with our naked eyes. But the sensors on the car, especially if you begin to network the cars with the infrastructure and with the other cars, will know. And they will adjust the control system of the car in order for the car to be safe and not have to choose between one versus five. So this is the challenge that we are undertaking as technologists, as roboticists.

FARMER: So I think this conversation could keep going for a very long time. But what I’d like to do is invite members to join the conversation with their own questions. Just a reminder, this is on the record. So keep that in mind. And please wait for the microphone, speak directly into it, stand, state name and affiliation, and, of course, limit yourself to one question and always make it a question.

Looks like we’ve got a question back middle, right here.

Q: Hi. Ian Murray of Peak Ten.

Dr. Rus, I have a question about autonomous vehicles. And anyone on the panel can answer but, you know, you read these very optimistic projections of, you know, 40 million autonomous cars by 2020. And you start to dismiss it but then you think, well, it could be that there are 40 million semi-autonomous cars on the road by 2020. And so I’d like, if possible, for you to kind of describe the sequence, you think what will be adopted first and in what quantity. And then what do you think is the timeline to this nirvana of fully autonomous cars on the road? Thank you.

RUS: So I think what could be adopted immediately is slow-moving vehicles in low-complexity environments; in other words, putting golf carts in retirement communities and off of public roads, because the technology is truly ready for that.

In terms of the public roads, it will take some time, because the perception systems of robot cars are not ready to deal with the kind of complexity and the kind of fast decision-making that road situations require. And they still make mistakes. Even if these vehicles make mistakes 1 percent of the time, that is not good enough. The most advanced computer vision systems still have a recognition accuracy of only about 80 percent. So how would you get into a vehicle that is 20 percent likely to make a mistake? It will take some time to get to that point.

And a big challenge is that the mileage of the test fleets we have is nowhere near what human-driven vehicles accumulate. In a single year, Toyota vehicles around the planet drive over 1 trillion miles. To test autonomous cars to that same level of driving will take an enormous amount of time. So we have to develop technology that begins to verify and guarantee what the robot will do, and at what point.
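The testing point can be made concrete with a standard back-of-the-envelope calculation, assuming failures arrive independently (a Poisson model): demonstrating with 95 percent confidence that a failure rate is below some target requires roughly three times the reciprocal of that rate in failure-free miles. The target rate below is an illustrative assumption, not a figure from the panel.

```python
# Back-of-the-envelope version of the testing point above, assuming failures
# arrive independently (a Poisson model). To claim with 95% confidence that
# the failure rate is below a target, you need roughly 3x the reciprocal of
# that rate in failure-free test miles. The target rate is illustrative.
import math

def miles_needed(target_failures_per_mile, confidence=0.95):
    """Failure-free miles needed to bound the rate at `confidence`."""
    return -math.log(1 - confidence) / target_failures_per_mile

# e.g., to show fewer than 1 serious failure per 100 million miles:
target = 1 / 100_000_000
print(f"{miles_needed(target):,.0f} failure-free miles")  # ~300 million
```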

Now, you asked about—

FARMER: I was going to say, specifically on the question of autonomous vehicles, because I know some people in the room might have heard about Otto. Otto is an autonomous truck company that was acquired by Uber. And just a week or two ago it delivered beer (I think that was their first big proof case) through autonomous driving on real public highways. And I know one big concern people have is that truck driving is a major industry, and a lot of people are employed as truck drivers in this country. It seems, Dr. Rus, that you may be a bit more skeptical about how quickly this will come, but the fear is still there. And I'm curious how you feel about that.

RUS: So I would say that it is not difficult to have a successful run, especially if there is a human in the seat ready to take over. That is not a difficult challenge. But to be always safe, that is a challenge. And so I really believe in the guardian angel approach to improving the job of the driver rather than taking it away. And for the truck-driving industry, that is especially important, because truck drivers have a monotonous job. There's the humming in the cab that reminds you of the humming in the mother's belly, right, so it naturally lulls you into relaxing, into sleep. And if you do this for many hours, it's very easy to lose focus. So I think that truck drivers could be especially helped by autonomy systems that could watch over their shoulders and watch the road.

FARMER: So sticking—

RUS: And it's important also to know that car crashes are the eighth-leading cause of death in the world. And most of these car crashes are due to human error. So deploying technologies that help us prevent human error will be a tremendous advance, a tremendous support for all the professional drivers out there.

FARMER: Yeah. I think over 30,000 people die in car crashes every year just here in the United States.

I’d love to hear from Edwin and James about the future of trucking and how you see that changing with some of these advancements on autonomous vehicles.

VAN BOMMEL: Yeah. Obviously I’m not the super—(laughs)—expert in this—in this field, and I don’t see Amelia driving a truck very soon.

I think, overall, it comes back to where the liability lies around these things. Because if you have a person driving a car, you already have a lot at stake. But with a truck, the impact of an accident is just much bigger because of the weight and the size, et cetera. But honestly, beyond that I don't have a lot of insight into this development.

MANYIKA: The only comment I would add is to actually agree with Dr. Rus; I like this idea of the guardian angel a lot, because one of the places where we are likely to see automation is, in fact, in these highly routine activities, and also where you don't want errors to happen. And it turns out that truck driving, particularly long-haul truck driving, is actually fairly structured, more structured than driving on a city street. So, in fact, you'll probably see automation happen faster on long-distance routes than on city streets, for exactly those reasons.

And it is the case that truck driving does employ a lot of people. It’s actually a very high-employment arena. And in fact, also the wages for it are relatively high. So we’re going to—we’re going to have to grapple with that question.

But I also want to emphasize, in case you get the impression that we're all pessimistic, that there's actually a lot to be excited about. The guardian angel notion is one. But also, one of the things that machine learning and artificial intelligence are going to do is allow us to make some quite spectacular breakthroughs in several areas where we've been limited by human cognition. Think about some of the problems in materials science. Think about some of the problems in climate science. Think about some of the problems of discovery in the life sciences, where you're trying to understand patterns of how genomics and synthetic biology work, which are very difficult to analyze and understand with conventional methods.

And I think those are some of the areas where we're going to see breakthroughs that we couldn't otherwise make. So I think we should also look forward to the prospect of these spectacular breakthroughs when we're able to go beyond what humans can actually do.

FARMER: That’s exciting.

VAN BOMMEL: I actually would second that. So if you think about where we stand currently with AI but also other technologies, look at cloud and look at mobile, which make AI accessible to many, many people.

A great example, from the first wave of digitization, was M-PESA, which brought digital payments to other countries. If you think about now putting AI on mobile devices, distributed by a global cloud, you can give people who normally don't have easy access to medical advice a doctor, a basic practitioner; and you can give the many people who have been cut off from financial advice by banks a personal financial adviser and help them through their day-to-day finances as well.

So there are many things which are very beneficial for everyone in the world if we make it—if we make it happen.

RUS: So if I could say it in different words, I would say that any job that requires understanding and knowing about the massive amount of information that exists in books (any field where you have experts) could be enhanced by current natural-language-processing technology, which is capable of reading those books much faster than a human could, understanding what's in them, and extracting the salient information, so that doctors can benefit from knowing about case studies they don't personally know about, or lawyers can benefit from decisions they don't know about.
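A toy sketch of the extraction idea Dr. Rus describes: score each sentence by the frequency of its words across a document and keep the highest-scoring ones. Real clinical or legal NLP systems are far more capable; the document and scoring here are invented for illustration.

```python
# A toy sketch of extractive summarization: score each sentence by the
# frequency of its words across the document and keep the top ones.
# Real clinical or legal NLP systems are far more sophisticated.
import re
from collections import Counter

def top_sentences(text, k=2):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    return sorted(sentences, key=score, reverse=True)[:k]

doc = ("The patient presented with chest pain. Prior cases of chest pain with "
       "these markers responded to early intervention. The weather was cold. "
       "Early intervention reduced complications in similar cases.")
for s in top_sentences(doc):
    print("-", s)
```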

FARMER: Certainly. Certainly.

Let’s go to the next question. Yes, ma’am, right here in the second row.

Q: (Off mic)—benefits of AI I buy, and I think we’re all very excited about that. But coming back here to the results of our election, which were fueled in large part by a lot of people who are not the highly educated, the highly skilled but the lower skilled and the lower educated, who feel left behind, and while the guardian angel idea is wonderful, lovely—hopefully it works—clearly a lot of jobs have been lost due to globalization and automation and more will be lost.

And James, you mentioned the impact on wages for the taxi drivers. I’d like to hear more about what are we going to do as a policy matter to address that population, who feels left behind and is going to be probably more left behind unless they all have guardian angels. And what is the impact on wages? And what—if you were advising the new administration, what would your policy recommendations be?

MANYIKA: I think the question of what this will do to work and jobs and wages, I think, is going to be the challenge for the next decade, quite honestly. I don’t think there are easy solutions to it. I think there are partial solutions to it. I think one response to it is clearly education, without a doubt. So to the extent that people can be better educated and prepared to be able to work with technologies—that’s clearly part of the solution.

I happen to think that that’s not a complete solution. I think we have to look at other solutions. And I think as you open up the possible kind of solution space, I think we have to grapple with the question of, in a world of abundance—assuming that, of course, these technologies are going to create all kinds of abundance and so forth—how do we take care of the least of us who don’t have that?

So I think it’s a legitimate question to talk about questions about basic income. I think it’s a legitimate question to talk about different kinds of safety nets. I think it’s a—it’s a fair question to ask about how do we create new ways for people to earn incomes in ways that may be different to the way we thought about ways that people earn incomes.

I think one of the things that is maybe important to reflect upon is the fact that this thing we've called a job has historically been a way to do lots of things. It's a way to get income; it's a way to get meaning, a sense of self-respect and dignity; a way to occupy time. It does all of these different things. And I think these technologies are starting to un-bundle that a bit, because we can now get economic things done without necessarily needing labor and work to get them done.

It's not lost on me that if you look at the last hundred years, even for an economy like the United States, the share of national income that goes to wages has been declining, because we now combine labor and capital to get the outputs that we need, and the proportions of those things are changing a bit.

So I think we have to contemplate some solutions that go beyond simply saying, let’s educate everybody. Of course, education will help, but I don’t think that’s the only answer.

RUS: So if I could just add one thing about education, I would say that it is very important for our country to consider training young kids in computational thinking and in data science, starting with kindergarten, starting with first grade, and going through graduation. A number of countries around the world have started doing that. And our children will be well served to get these skills, because today there is a shortage of people to fill jobs in the IT sector, and this is projected to continue.

And I believe that in the future all the fields will be computational fields, from anthropology to zoology. So the kids who will be prepared to bend the power of computing to their own will will be much better prepared for the job market than those who are not. And so thinking seriously about what constitutes literacy and what are the mandatory subjects to teach to our children is very important.

FARMER: And do any of the panelists expect regulatory intervention with the stated mission of preserving jobs?

MANYIKA: I don’t. I don’t. I think—I think it’s still going to be important to find a way to have people engage and participate in the economy as a way to find a livelihood. And I don’t think that should be necessarily a regulatory thing. I think we have to find different pathways for people to participate.

I still find it quite striking that, if you take the U.S., we often pay attention to the unemployment numbers. I think we should really be looking at three numbers: the unemployed, the underemployed, but also the inactive. If you put all those three together, something like 30 percent of the adult working-age population in the United States is actually not participating.

So the question is, how do we find other pathways for people to participate in the economy? We used to think about that just with the mindset of jobs, but there are now other ways to participate in the economy. We've seen, for better or for worse, independent work and the gig economy. Not to say those are better or worse, but at least they provide broader pathways for people to participate in the economy beyond just traditional jobs.

FARMER: Yeah.

MANYIKA: So I actually like to think about the problem as really an income question, of which jobs are just a subset: how do we help people generate income and participate in the economy?

FARMER: So we have a handful of minutes left, and I’d like to get to a number of these questions.

Let’s try to go with a bit of a lightning round, punchy answers. Sir, in the fourth row.

Q: Steven Blank.

You all keep talking about teaching more. But if you go back—and making people more intelligent, able to use this stuff—if you go back to the industrial era, guys like Frank Gilbreth, the time-and-motion people, made the complex work easier. We say “dumbing down,” but they in fact made it possible for people who didn’t have much education to work in highly—fairly complex situations.

We're not going to change our educational system very quickly to the degree you're saying. But isn't lowering the barriers to entry, making it easier to get into more jobs, another goal of artificial intelligence?

FARMER: Yes?

RUS: So I would say yes, absolutely. And there are small steps in those directions. For instance, in manufacturing we have recently introduced a number of low-cost products that are meant to be programmed by demonstration and to make an impact on small- to medium-sized organizations, where a little bit of automation that can change frequently enough to keep up with the products will make a big difference.

So, for instance, companies like Rethink Robotics are distributing these amazingly flexible robots that are actually helping non-experts automate and improve the effectiveness of their operations. But we have to do much more of that.

MANYIKA: The only thing I would add to that is I think that certainly should be the goal, but every time you do that, by lowering the skills required to operate and work, you’re also going to expand the pool of available people who can do that, and you’re probably going to put pressure on the wages for those activities. So the question then becomes, is your goal employment or is it wages? If it’s both, then you have a slight challenge with that.

VAN BOMMEL: Maybe from a different perspective (inaudible), I also see that the AI platforms themselves are becoming easier to train, so that you don't need a computer scientist to do all the training; you can actually give the people who are doing the job on a day-to-day basis the ability to train these systems as well.

RUS: So just one more positive thing about computerization. I would say that today everyone with a smart phone is able to use computing in a much more meaningful way than we ever thought was possible. So we already experience a certain level of democratization of access to computation and access to tools.

And this was not the case 20 years ago. Just think about where we were 20 years ago. Twenty years ago, we were talking about a possibility that everyone might have a computer at some distant point in the future. And it’s extraordinary to me to see that the technology has advanced and the human interaction—human-machine interaction has advanced to the point where people can use it in substantive ways, whether they know how to program or not, whether they understand the machine or not, just like with driving.

MANYIKA: Yeah, and maybe one additional point. I think if we think about this from the point of view of whether this is good for the economy, the answer is unequivocally yes. Is it good for users and consumers of the technology? I think it's almost all yes.

I mean, the questions we have to grapple with are the ones about us as participants, as workers in the—in the economy. That’s where we have this complicated set of issues we’re going to need to work through.

FARMER: Excellent.

In the back there holding the paper.

Q: Thank you. Mark Longstreth from Soros Fund Management.

Back to autonomous cars: you know, we had spoken before about the different business models Tesla has versus some of the others. One thing I think is interesting is that Google and Uber have a very limited fleet with which they're trying to collect data on their own, whereas Tesla just installs all the sensors in its production fleet, so they're collecting data on all their users. So you'd think that if software is what makes the difference in the future, Tesla would be positioned to win that race.

RUS: Well, there's no question that access to data is a significant advantage, and Tesla will have access to a lot of data. But it's interesting to consider aspects of autonomous driving that are actually not captured by current systems. How a human interacts with a car that does not have a driver is something we're not experiencing and not collecting data on. And this is an important part of developing learning systems for autonomy.

So, for instance, if you go to your Uber driver, there is a protocol. He has a number, you have a number, you have this exchange, and then you know you can carry on. But if you show up at a car that has nobody in it, how do you identify yourself to the car? How does the car know that you're in, but that it should also wait for your daughter, who is lingering or maybe halfway inside the car? How does the car know that you've buckled in?

There are so many issues about human-machine interaction that are not currently studied and at some point we will need to have solutions, we will need to study these problems in order to have comprehensive systems that can cope with people and robots.

FARMER: So I think we have time for one final question.

Q: Hi, I’m Ken Miller—

FARMER: We’ve got a microphone coming from the side here.

Q: I’m Ken Miller.

I wanted to talk about the guardian angel maybe ultimately guarding itself. A programmer up in Toronto told me late last week that when she programs the machine, she doesn't quite know what's going on inside during the billions and billions of calculations. And the question is basically about super-intelligence and whether it's a danger to us in a broader sense.

RUS: So in the guardian angel space, a different way to pose the same question would be to ask: in parallel with the software that controls the car, can we also provide a system that explains what the car is doing or why it has made a certain decision? Let's say the car accidentally crashes into the curb. Could the car evaluate its own dynamics and then say: given my current dynamics and the map I had just consulted, I thought that taking this turn was safe, but it turns out I was using a dated map, and for that reason the curb was no longer where I thought it was?

So we are certainly ready to begin to deliver on this side of explaining what the systems are doing. Beyond that, I think the questions are wide open. But this idea that AI systems need to explain themselves is very much on the mind of the community.
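One way to sketch the self-explanation idea is to log the inputs behind each driving decision so the system can later report why it acted as it did. The fields, wording, and scenario below are invented for illustration.

```python
# A minimal sketch of the self-explanation idea above: log the inputs behind
# each driving decision so the system can later report why it acted as it did.
# The fields and wording are invented for illustration.
from datetime import datetime, timezone

decision_log = []

def record_decision(action, map_version, observations, reason):
    decision_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "map_version": map_version,
        "observations": observations,
        "reason": reason,
    })

def explain_last():
    d = decision_log[-1]
    return (f"At {d['time']} I chose '{d['action']}' because {d['reason']} "
            f"(map {d['map_version']}, observed: {', '.join(d['observations'])}).")

record_decision("take right turn", "2016-10-map-v3",
                ["curb 1.2m to the right", "clear crosswalk"],
                "the planned turn appeared safe given the stored map geometry")
print(explain_last())
```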

MANYIKA: Yeah, and just to build on that, I think there are two parts to your question. On the question of explainability, that's actually a very important concept. I'll give you two examples. If you're applying these tools to, say, medical diagnosis, you may not care how the system got to the diagnosis as long as you get the best one. But if you're applying the system to, say, the criminal justice system, where you want to be able to explain why the judgment was made this way as opposed to that way, for accountability and transparency reasons, then we have a problem, I think.

So that’s—this explainability question has become an important thing that currently, you know, AI researchers are starting to work on. In fact, my old friends who used to be in the research lab that I used to be a part of who are now at DeepMind are working on this question of how do we start to explain why the machine made the choice that it did.

The second part of what you’re asking, which I think is—in my mind at least, you know, I think it’s still a faraway problem—is really the one of super-intelligence, where I think it is in principle possible that you could have emergent behaviors that were not actually programmed into it, where the machine actually decides to set its own goal or rewrite its own program and so forth. But I think that’s a challenge that’s still far, far, far away.

I know there’s a debate about this in the public discourse. People like Elon Musk and others have said one thing. But I think many people that I know and certainly people that I’ve worked with think that’s a faraway problem. I don’t know what you think, but—

RUS: Yeah, I agree. I would quote my colleague, Andrew Ng, who said that worrying about super-intelligence is like worrying about crowding on Mars. At some point in the future, that could be a problem, but that’s so distant—that’s such a distant time away that we can’t really imagine it.

And so I would say—with respect to super-intelligence, I would say that today machines are really good at some things. They’re really good at detecting patterns. They’re really good at making predictions about those patterns. But they are not good at abstractions, generalizations, cognitive reasoning. They’re not good at many things that we are so good at.

So if you look at the AlphaGo program and you say, wow, how could this program beat a grand master, I would say, well, the machine beat the grand master because it had access to all the games in the world, and then it played against itself for a long time, and the computer hardware, the computation, was fast enough to deal with that particular size of board.

But if we increase the size of that board significantly, will the machine still be able to have the same performance? Probably not, but the human might.

And also, the machine that played Go is probably not able to play chess or poker or any other game, because the machine is narrowly trained on that particular task. But if you take a human: if you play Go, you have probably played chess, and you have probably played poker. And if I explain the rules to you, you will very quickly become reasonably good at them. But today's machines cannot, because they are so narrowly focused on single tasks.
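A tiny calculation makes the narrowness point concrete: an upper bound on Go board configurations is 3 raised to the number of points (each point empty, black, or white), so even a modest increase in board size explodes the space a system was tuned for. The board sizes below are illustrative.

```python
# A tiny calculation behind the narrowness point above: an upper bound on Go
# board configurations is 3**(n*n), since each point is empty, black, or white
# (most of these are not legal positions, but the growth rate is the point).
import math

for n in (9, 13, 19, 21):
    digits = int(n * n * math.log10(3)) + 1
    print(f"{n}x{n} board: ~10^{digits - 1} configurations")
# Going from 19x19 to 21x21 multiplies the space by roughly 3**80 (~10^38),
# so a system tuned to one board size cannot simply be reused on another.
```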

VAN BOMMEL: Maybe also building on that, I think it depends a lot on the type of use case, where you will apply certain types of learning just to make sure that, where you want to have control, you can keep control until the machine can actually explain its own reasoning and you can identify why it has taken certain decisions. In other cases, you can still let a human verify what a machine will do before it actually does it, or at least make it transparent why the machine took certain actions. So—

FARMER: Well, I think we could be here all day.

VAN BOMMEL: Yeah.

FARMER: I think we’ve heard that certainly—net/net it seems like the panelists believe artificial intelligence is a force for good, but there are some real questions, particularly around the future of work and the impact on jobs and wages.

So we’re going to have a lunch after this. I’d like to invite all the members to stay, to speak with one another and with our panelists. And please join me in thanking the panel. (Applause.)

(END)

This is an uncorrected transcript.
