Sitting in the belly of the Kentucky Center for the Performing Arts, in the hubbub of IdeaFestival 2016—a 3-day series of lectures, panels, and performances in Louisville, Kentucky—I overheard a conversation I couldn't ignore. Two professors were hashing out how we might save the planet from super-intelligent artificial intelligence. As a kid I'd spent late nights working on the same question via various video games and Lego dioramas, so I was familiar with the basic parameters. The longer I listened in, the more I figured these two experts—Dr. Susan Schneider of the University of Connecticut, and Dr. Roman Yampolskiy of the University of Louisville—probably tried to avoid banal questions from grown-up science fiction fans.
Then I remembered my media badge.
All of which is to say that what follows reflects substantial indulgence and generosity from both Dr. Schneider and Dr. Yampolskiy. I hope you enjoy the conversation as much as I did. Most of all, I hope the time they spent answering my questions isn't the reason that the Terminators/the Borg/the Cylons win.
Scalawag: Thanks for taking the time to chat. To start with, tell me: what do you each work on?
Dr. Schneider: I'm a philosopher and cognitive scientist working on the nature of self and mind. I frequently think about artificial intelligence (AI), and about the metaphysics of the mind involved in AI—I think it's obvious that there are philosophical issues lurking in AI. The possibility of creating human-level intelligence or beyond raises issues: What is it to be a self? Could an AI have a mind or be a person, and could it be conscious?
Dr. Yampolskiy: I'm a computer scientist. My background is primarily in cybersecurity, and I study AI safety.
Scalawag: Why in Louisville?
Y: They gave me a job. Thank you to the taxpayers of the Commonwealth [of Kentucky] for making this possible and for putting us on the map.
S: Amen.
Scalawag: So when is AI safety going to become an issue?
Y: 30 years ago. Across that whole time, we've had intelligent systems, and we've had them failing in various ways. The damage is already real. As their complexity increases, failures become more common and more damaging. That's why it's crucial we start studying this stuff now. It's too late, once you have a car, to develop brakes.
Scalawag: Hold on—what are some examples of AI safety failures in the last thirty years?
Y: Think about stock market trading—there have been flash crashes where mis-calibrations wiped out a trillion dollars of wealth. There have been false alarms in military warning systems that almost caused nuclear war. These weren't AGIs, but they illustrate the concept.
(AGI refers to an artificial general intelligence—general in the sense that, like a human, it might not only do math but also critique dance performances and find its way across a city. You can imagine an AGI being much more intelligent than a human being, which presents all sorts of problems: for instance, what happens if it tries to kill us? Thanks to my inexpert questions, we used the term "AI" a little imprecisely throughout this conversation, but often we were referring to AGIs of this kind.)
Scalawag: So how do we keep AI safe?
Y: No one knows how to keep AI safe. If someone tells you they do, they're lying. And no one has any idea how to get there. That's why it's such an important and interesting problem.
Scalawag: Well where do we start? What are the avenues of inquiry?
Y: We start by buying a little time so we can do research. We're working on confinement: confining AI to restricted spaces so that we have time to develop safeguards—similar to what we sometimes do with computer viruses. We work to make sure systems have fewer bugs—verified code is one effort in this regard, putting together formal proofs of code outcomes. And we try to learn from cybersecurity, making sure the system isn't accessible to malevolent actors.
Scalawag: If AI is conscious, isn't confinement an ethical issue?
S: Yes, it would be. It's important to bear in mind that we frequently make ethical decisions that impinge on the rights of people. Still, if an AI feels, making it work for us without its consent would be slavery. To unplug it would be horrible. So we have to think about consciousness for a number of reasons—including safety, actually. If it's conscious, if it can feel pain or the burning drive of curiosity, then we have an ethical obligation to treat it fairly—just like I think we have an obligation to non-human animals. And, of course, obligations to other humans. Whereas if [an AI] isn't conscious, I would feel less uncomfortable unplugging it or restricting it to a confined environment. Consciousness is a game-changer.
Y: But doesn't unplugging an AI remove all pain entirely?
S: Maybe, but doing so ends the AI's future! Which is one reason why, by way of analogy, we as a human society don't endorse the murder of human beings.
For a few minutes the back-and-forth outpaced my stenography, which I regret. We eventually took Susan's point. I slipped in another question.
Scalawag: Why should we fund AI safety research? It seems like funding science fiction when right here in Louisville, thousands of people are unemployed, or don't have access to healthcare, or housing.
S: You've got to recognize that science fiction is quickly becoming science fact. Right now, research suggests robots are going to supplant humans in millions and millions of jobs. Self-driving cars are already on the verge of pushing millions of people out of work. AI is coming fast, and if we don't think about it now, it could overtake us.
Y: It's also a question of impact—you're talking about poverty, and I agree, people not being wealthy seems disappointing—but it's nothing compared to being killed.
Scalawag: So it seems like there are political consequences to the emergence of AI. Set aside the technical matters for a second—are we capable of handling AI from a political perspective? Can we figure out who should have a say in controlling it and how?
Y: It sounds nice to say that there should be democracy on these questions, but I would prefer experts—those who actually understand the issues—to have more weight to their opinions. "One person, one vote" works for lots of issues, but probably not for scientific issues.
Scalawag: There are experts in political theory. Should they be part of conversations about use of AI?
Y: Yes.
Scalawag: And are they?
Y: It's hard to tell what's going on—there's no central AI governance; there are no central decision-making bodies.
Scalawag: How will AI change politics?
Y: Hopefully it will remove all human politicians. [Laugh]
S: Note that he's joking.
Y: People have biases, people can be bribed—algorithms can't. There's research into rule by algorithm, which would mean a society is governed by algorithms using a scientific approach.
Scalawag: Who chooses the algorithms?
Y: Well, one thing to do is have a competition between algorithms to see which is best. Let's say we all agree we want to reduce murders—we can see which algorithm is most effective, within a given experimental window, at bringing the murder rate down.
Scalawag: But who gets to agree on what the aims of these algorithms should be? And who gets to prioritize values? So to take your example, maybe controlling the murder rate seems like a relatively easy point of agreement, but who gets to decide if gun control is a permissible mechanism for the algorithm to use? That seems like it's still human politics.
Y: Well, there are some organizations working on this. We had a panel on AI safety organized by the White House, for instance. Internationally, organizations of engineers like the IEEE have ethics boards on superintelligence and AI safety—they released guidelines on this just recently—and in fact Google, Microsoft, and other top companies have come together to create a consortium to manage future AI research.
S: We really do have to have open discussions that include society as a whole. AI safety shouldn't just be settled by a bunch of CEOs—because the issues involve all of humanity. It's really frightening to think this might all be adjudicated behind closed doors. A proper public dialogue doesn't just include business leaders, but philosophers, theologians, activist groups, and privacy advocates.
Scalawag: Are there spaces for this kind of dialogue?
S: Well, that new development today is a nice start—a large, relatively open collaborative on AI. Check out the TechRepublic article on the Partnership on AI. It was started by Alphabet (Google's parent company) and Facebook, and we'll have to wait and see if they actually loop everyone in. If it's just the same old closed-door format, it won't be so great. There have also been government-sponsored AI safety meetings across the country. Universities are holding forums. So there are a variety of potential spaces, but all of them underscore the need for a wide range of voices, as well as the importance of quality science writing in bringing these issues to the public.
Scalawag: Back to the front-line part of AI safety—will we get it right in time?
Y: No.
Scalawag: So why work on it?
Y: If we work on it, at least we have a fighting chance.
Scalawag: What does a fighting chance look like?
Y: Well, we can delay negative outcomes. We may find partial solutions—whatever amount of human autonomy we can preserve, however many additional years we can give the species, that's inherently good.
Scalawag: Can we buy fifty or so years? I'm just trying to die before the robots can kill me.
Y: No one can tell you. After the singularity, no one can make real predictions. That's part of the definition.
Scalawag: For the benefit of our readers, what's the singularity and what makes you so confident in it?
Y: The point in time where science fiction and science converge. All the things we predict about the future—tech, superintelligence, super-intelligent machines—become reality. Here's how it happens, basically: Once we have human-level artificial intelligence, which doesn't seem so far away anymore, we can automate science and engineering. AIs don't need to sleep or eat, so they drastically speed up the timeframes on which new software and hardware can be released. As they improve, they accelerate: new systems every month, then every week, then every day—at that point, you have an intelligence explosion. We can't predict what happens after that. Why am I confident in this? Well, there seem to be a lot of trends pointing in the same direction. Look at software, hardware, anything—they converge at about the same time.
Scalawag: Sounds terrifying. Are you afraid?
Y: There's a lot of risk involved in suddenly no longer being the most intelligent species on the planet. You know, suddenly, we're no longer in charge. Especially when you look historically at what humans have done to those they perceived to be less intelligent—that's frightening.
Scalawag: It is frightening. Do you think AI could bring with it an ethical revolution?
Y: There is research on moral enhancement—how to use AI to make people more moral—but the problem is we don't agree on what that means. We all have different religions, different cultural backgrounds, different experiences. I think it is very unlikely we'll solve ethics after thousands of years of trying.
Just then, Susan returned with a fresh coffee.
Scalawag: Susan, we were just talking about whether machines could solve ethics.
S: That's a really interesting question. It's certainly possible that artificial intelligence could offer all kinds of insights. We just don't know—but you've got to think that super-intelligent AI could open all kinds of doors to philosophical questions. That's separate from AI safety, of course, but it's in some ways the dream of transhumanists—that machine intelligence opens all kinds of intellectual paths we simply couldn't see, in the same way a cat can't conceptualize general relativity.
Y: All that is possible, but it also assumes there is a solution. On the other hand, it's possible the AI may come up with a new set of guidelines and we may not like them very much.
Scalawag: In that case, who's right?
Y: Whoever's in charge will be declared the winner. Or, self-declared.
Scalawag: You know, the way a lot of Americans conceptualize the Civil War—or for that matter, The Matrix—we like to think there's some kind of vague, inherent force behind correct ethics. Do you think righteous human underdogs would have a leg up on evil machines?
Y: Well, I'm not a historian but from what I understand those who win write history and construct these stories.