The thing about artificial intelligence is, you don’t want it getting too smart. At least, not in a Skynet, self-aware, Matrix-y, “let’s-enslave-the-humans!” kind of way.
And while those dark visions seem best suited for the realm of science fiction, consider this: Facebook recently pulled the plug on an AI project because it invented its own language—one that humans couldn’t understand.
The research team at Facebook Artificial Intelligence Research (FAIR) built two AIs, named Alice and Bob (for A and B, get it?), who were tasked with learning to negotiate between themselves to trade hats, balls and books. By observing and imitating human trading and bartering practices, they were expected to cut deals based on the relative value of each object.
After some preliminary flirtations, the chatbots got down to business, apparently deciding that using English was simply totes inefficient. So they abandoned it.
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
Umm—okay.
This seeming devolution into babbling is anything but—it represents the lightning-fast optimization of language: inventing code words and stripping out superfluous linguistics until all that was left was the pure stuff of the deal. Just the facts, ma’am.
“There was no reward to sticking to English language,” Dhruv Batra, a visiting research scientist at FAIR from Georgia Tech, told Fast Company. “Agents [the AIs] will drift off understandable language and invent codewords for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
The problem, of course, is that while we know why this language creation happens, we don’t know what the bots are actually saying.
“It’s important to remember, there aren’t bilingual speakers of AI and human languages,” says Batra.
Facebook hit the reset button and this time programmed a requirement that the AIs converse in English, noting that FAIR’s focus is creating personal assistants that interact with people. The unspoken bit, of course: chatbots (and eventually android-like things, potentially) that do things better than humans, make their own decisions and carry on conversations among themselves without human oversight could lead to some very dark scenarios indeed. We all saw Alien: Covenant, right?
The situation, deliciously, comes in the middle of a classic tech-CEO war of words. A week ago, Facebook’s Mark Zuckerberg called Elon Musk’s doomsaying “pretty irresponsible” when asked about the Tesla genius’s habit of invoking the robot-overlord trope—which he does often.
"I have pretty strong opinions on this. I am optimistic," Zuckerberg said, while roasting a brisket in his backyard and fielding fan questions using Facebook Live. "And I think people who are naysayers and try to drum up these doomsday scenarios—I just, I don't understand it. It's really negative.”
Musk was quick to reply, tweeting, “I’ve talked to Mark about this. His understanding of the subject is limited.”
Ouch.
Earlier in July, Musk addressed the National Governors Association summer meeting, saying that AI poses a “fundamental risk to the existence of human civilization.”
“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” he said. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
Hey look, I get Zuck’s perspective on Musk. This is the guy who refers to the mission to Mars as “plan B for the human race,” already envisioning the extinction-level event that is sure to come along sooner or later. Serious buzzkill, amirite? And Zuckerberg also wants to make money—lots of money—for shareholders and himself by pioneering and leading one of the hottest new tech sectors out there. But Musk isn’t alone in raising the longer-term implications of fully developed AI—Bill Gates and Stephen Hawking are among those in the wary camp.
AI researcher Stuart Russell, professor of computer science at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach, once compared developing AI to the splitting of the atom—and we all saw how that ended up.
“From the beginning, the primary interest in nuclear technology was the inexhaustible supply of energy... I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence,” he said. “Both seem wonderful until one thinks of the possible risks.”
While I actually think the primary interest in nuclear technology started out being something much more destructive (“Oppenheimer’s deadly toy”), it’s hard not to agree with his broader point: In the end, what starts out as good, helpful, interesting tech can evolve into something that threatens our very existence if left unchecked, or developed without stringent ethical research parameters. Listen up, Zuck.