Monday, July 15, 2024

What is AI?


Right around the same time, Tegmark founded the Future of Life Institute, with a remit to study and promote AI safety. Depp’s costar in the movie, Morgan Freeman, was on the institute’s board, and Elon Musk, who had a cameo in the film, donated $10 million in its first year. For Cave and Dihal, Transcendence is a perfect example of the multiple entanglements between popular culture, academic research, industrial production, and “the billionaire-funded fight to shape the future.”

On the London leg of his world tour last year, Altman was asked what he’d meant when he tweeted: “AI is the tech the world has always wanted.” Standing at the back of the room that day, behind an audience of hundreds, I listened to him offer his own kind of origin story: “I was, like, a very nervous kid. I read a lot of sci-fi. I spent a lot of Friday nights home, playing on the computer. But I was always really interested in AI and I thought it’d be very cool.” He went to college, got rich, and watched as neural networks became better and better. “This can be tremendously good but also could be really bad. What are we going to do about that?” he recalled thinking in 2015. “I ended up starting OpenAI.”

Why you should care that a bunch of nerds are fighting about AI

Okay, you get it: No one can agree on what AI is. But what everyone does seem to agree on is that the current debate around AI has moved far beyond the academic and the scientific. There are political and moral components in play—which doesn't help when everyone is already convinced that everyone else is wrong.

Untangling all this is hard. It can be difficult to see what's going on when some of those moral views take in the entire future of humanity and anchor it in a technology that nobody can quite define.

But we can’t just throw our hands up and walk away. Because no matter what this technology is, it’s coming, and unless you live under a rock, you’ll use it in one form or another. And the form that technology takes—and the problems it both solves and creates—will be shaped by the thinking and the motivations of people like the ones you just read about. In particular, by the people with the most power, the most cash, and the biggest megaphones.

Which leads me to the TESCREALists. Wait, come back! I realize it’s unfair to introduce yet another new concept so late in the game. But to understand how the people in power may mold the technologies they build, and how they explain them to the world’s regulators and lawmakers, you need to really understand their mindset.

Timnit Gebru

Gebru, who founded the Distributed AI Research Institute after leaving Google, and Émile Torres, a philosopher and historian at Case Western Reserve University, have traced the influence of several techno-utopian belief systems on Silicon Valley. The pair argue that to understand what’s going on with AI right now—both why companies such as Google DeepMind and OpenAI are in a race to build AGI and why doomers like Tegmark and Hinton warn of a coming catastrophe—the field must be seen through the lens of what Torres has dubbed the TESCREAL framework.

The clunky acronym (pronounced tes-cree-all) replaces an even clunkier list of labels: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. A lot has been written (and will be written) about each of these worldviews, so I’ll spare you here. (There are rabbit holes within rabbit holes for anyone wanting to dive deeper. Pick your forum and pack your spelunking gear.)