Stephen Fry Explains Why Artificial Intelligence Has a “70% Risk of Killing Us All”

Apart from his comedic, dramatic, and literary endeavors, Stephen Fry is widely known for his avowed technophilia. He once wrote a column on that theme, “Dork Talk,” for the Guardian, in whose inaugural dispatch he laid out his credentials by claiming to have been the owner of only the second Macintosh computer sold in Europe (“Douglas Adams bought the first”), and never to have “met a smartphone I haven’t bought.” But now, like many of us who were “dippy about all things digital” at the end of the last century and the beginning of this one, Fry seems to have his doubts about certain big-tech projects in the works today: take the “$100 billion plan with a 70 percent risk of killing us all” described in the video above.

This plan, of course, has to do with artificial intelligence in general, and “the logical AI subgoals to survive, deceive, and gain power” in particular. Even at this relatively early stage of development, we’ve witnessed AI systems that seem to be altogether too good at their jobs, to the point of engaging in what would count as deceptive and unethical behavior were the subject a human being. (Fry cites the example of a stock market-investing AI that engaged in insider trading, then lied about having done so.) What’s more, “as AI agents take on more complex tasks, they create strategies and subgoals which we can’t see, because they’re hidden among billions of parameters,” and quasi-evolutionary “selection pressures also cause AI to evade safety measures.”

In the video, MIT physicist and machine learning researcher Max Tegmark speaks portentously of the fact that we are, “right now, building creepy, super-capable, amoral psychopaths that never sleep, think much faster than us, can make copies of themselves, and have nothing human about them whatsoever.” Fry quotes computer scientist Geoffrey Hinton warning that, in inter-AI competition, “the ones with more sense of self-preservation will win, and the more aggressive ones will win, and you’ll get all the problems that jumped-up chimpanzees like us have.” Hinton’s colleague Stuart Russell explains that “we need to worry about machines not because they’re conscious, but because they’re competent. They may take preemptive action to ensure that they can achieve the objective that we gave them,” and that action may be less than impeccably considerate of human life.

Would we be better off just shutting the whole enterprise down? Fry raises philosopher Nick Bostrom’s argument that “stopping AI development could be a mistake, because we could eventually be wiped out by another problem that AI could’ve prevented.” This would seem to dictate a deliberately cautious form of development, but “nearly all AI research funding, hundreds of billions per year, is pushing capabilities for profit; safety efforts are tiny in comparison.” Though “we don’t know if it will be possible to maintain control of super-intelligence,” we can nevertheless “point it in the right direction, instead of rushing to create it with no moral compass and clear reasons to kill us off.” The mind, as they say, is a fine servant but a terrible master; the same holds true, as the case of AI makes us see afresh, for the mind’s creations.

Related content:

Stephen Fry Voices a New Dystopian Short Film About Artificial Intelligence & Simulation Theory: Watch Escape

Stephen Fry Reads Nick Cave’s Stirring Letter About ChatGPT and Human Creativity: “We Are Fighting for the Very Soul of the World”

Stephen Fry Explains Cloud Computing in a Short Animated Video

Stephen Fry Takes Us Inside the Story of Johannes Gutenberg & the First Printing Press

Stephen Fry on the Power of Words in Nazi Germany: How Dehumanizing Language Laid the Foundation for Genocide

Neural Networks for Machine Learning: A Free Online Course Taught by Geoffrey Hinton

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities and the book The Stateless City: a Walk through 21st-Century Los Angeles. Follow him on Twitter at @colinmarshall or on Facebook.





