Kate Bush, Annie Lennox and 1,000 Musicians Protest AI with a New Silent Album

The good news is that an album has just been released by Kate Bush, Annie Lennox, Damon Albarn of Gorillaz, The Clash, Tori Amos, Hans Zimmer, Pet Shop Boys, Jamiroquai, Yusuf (previously known as Cat Stevens), Billy Ocean, and many other musicians besides, most of them British. The bad news is that it contains no actual music. But the album, titled Is This What We Want?, has been created in hopes of preventing even worse news: the government of the United Kingdom choosing to let artificial-intelligence companies train their models on copyrighted work without a license.

Such a move, in the words of the project's leader Ed Newton-Rex, "would hand the life's work of the country's musicians to AI companies, for free, letting those companies exploit musicians' work to outcompete them." As a composer, he naturally has an interest in these matters, and as a "former AI executive," he presumably has insider knowledge about them as well.

"The government's willingness to agree to these copyright changes shows how much our work is undervalued and that there is no protection for one of this country's most important assets: music," Kate Bush writes on her own website. "Each track on this album features a deserted recording studio. Doesn't that silence say it all?"

As the Guardian's Dan Milmo reports, "it is understood that Kate Bush has recorded one of the dozen tracks in her studio." Those tracks, whose titles add up to the phrase "The British government must not legalise music theft to benefit AI companies," aren't strictly silent: in a manner that might well have pleased John Cage, they contain a variety of ambient noises, from footsteps to humming machinery to passing cars to crying babies to vaguely musical sounds emanating from somewhere in the distance. Whatever its influence on the U.K. government's deliberations, Is This What We Want? (the title Sounds of Silence having presumably been unavailable) may have pioneered a new genre: protest song without the songs.

You can stream Is This What We Want? on Spotify.

Related content:

Artificial Intelligence, Art & the Future of Creativity: Watch the Final Chapter of the "Everything is a Remix" Series

Artificial Intelligence Creativity Machine Learns to Play Beethoven in the Style of The Beatles' "Penny Lane"

Watch John Cage's 4′33″ Played by Musicians Around the World

ChatGPT Writes a Song in the Style of Nick Cave–and Nick Cave Calls it "a Grotesque Mockery of What It Is to Be Human"

Noam Chomsky on ChatGPT: It's "Basically High-Tech Plagiarism" and "a Way of Avoiding Learning"

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities and the book The Stateless City: a Walk through 21st-Century Los Angeles. Follow him on the social network formerly known as Twitter at @colinmarshall.

How Do You Use AI in Your Daily Life? Share the Applications That Have Made a Big Difference

Image by Jernej Furman, via Wikimedia Commons

It would be difficult to imagine the last couple of years without artificial intelligence, even if you don't use it. Can you recall the last day without some AI-related news item or social-media post — or indeed, a time when the hype didn't slide into utopian or apocalyptic terms? "If I look five or ten years down the road, it seems like we will be in a world in which the use of AI tools will not just be normal," writes Justin Weinberg at Daily Nous, offering a more sober take. "Facility with them will be expected, and that expectation will inform the social and professional norms we'll all be subject to, whether we like it or not."

To his audience of philosophy academics, Weinberg poses a question: are you using AI? And furthermore, "Is there a particular kind of task you think you'd like to learn how to use AI for, but don't know how?" Here at Open Culture, we'd like to ask something similar of our readers. If you use AI in your daily life in meaningful ways, what do you use it for? We've previously featured applications like OpenAI's text-generating ChatGPT and image-generating DALL-E, both of which have astonished users with the rapidity of their evolution. Now, tools promising "the power of AI" proliferate daily across ever more diverse fields of human endeavor.

For many of us, AI has thus far amounted to little more than a technology with which to amuse ourselves, albeit a very impressive one. I myself have laughed as hard at AI-generated stories as I have at anything else over the past year or two, though much depends on the thought I put into the prompts. But I've also heard the occasional story of genuine benefit that an AI tool has brought to someone's personal or professional life, whether by clearly explaining a long-misunderstood concept, filling the gaps in a child's education, or helping to determine what kind of care to seek for a medical problem.

If you have any such experiences yourself, please do leave a comment on this post telling us about them — and don't forget to mention what variety of AI you're using. Open Culture readers may well be getting real mileage out of AI "for summarizing complex academic texts, translating historical documents, or exploring philosophy, literature, and science more deeply"; for generating "poetry, music composition, or visual art in the vein of historical and avant-garde styles"; or for "practice with foreign languages, whether through translation, conversation, or grammar correction." At least, that's what ChatGPT thinks. We look forward to reading your thoughts in the comments below.

Related content:

Google Launches a New Course Called "AI Essentials": Learn How to Use Generative AI Tools to Increase Your Productivity

Sci-Fi Writer Arthur C. Clarke Predicted the Rise of Artificial Intelligence & the Existential Questions We Would Need to Answer (1978)

Google & MIT Offer a Free Course on Generative AI for Teachers and Educators

Unlock AI's Potential in Your Work and Daily Life: Take a Popular Course from Google

Noam Chomsky on ChatGPT: It's "Basically High-Tech Plagiarism" and "a Way of Avoiding Learning"

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities and the book The Stateless City: a Walk through 21st-Century Los Angeles. Follow him on the social network formerly known as Twitter at @colinmarshall.

Unlock AI’s Potential in Your Work and Daily Life: Take a Popular Course from Google

Generative AI is rapidly becoming an essential tool for streamlining work and solving complex challenges. However, knowing how to use GenAI effectively isn't always obvious. That's where Google Prompting Essentials comes in. This course will teach you to write clear and specific instructions—known as prompts—for AI. Once you can prompt well, you can unlock generative AI's potential more fully.

Launched in April, Google Prompting Essentials has become the most popular GenAI course offered on Coursera. The course itself is divided into four modules. First, "Start Writing Prompts Like a Pro" will teach you a 5-step method for crafting effective prompts. (Watch the video from Module 1 above, and more videos here.) With the second module, "Design Prompts for Everyday Work Tasks," you will learn how to use AI to draft emails, brainstorm ideas, and summarize documents. The third module, "Speed Up Data Analysis and Presentation Building," teaches techniques for uncovering insights in data, visualizing results, and preparing presentations. The final module, "Use AI as a Creative or Expert Partner," explores advanced techniques such as prompt chaining and multimodal prompting. Plus, you will "create a personalized AI agent to role-play conversations and provide expert feedback."
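
The course itself isn't code-based, but prompt chaining is easy to picture in code: the output of one prompt becomes the input to the next. Below is a minimal, purely illustrative Python sketch, assuming the OpenAI Python SDK (openai >= 1.0) and an API key in the OPENAI_API_KEY environment variable; the model name, the ask helper, and the example prompts are hypothetical choices, not anything prescribed by Google Prompting Essentials.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send one prompt to a chat model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Step 1 of the chain: brainstorm raw ideas.
ideas = ask("Brainstorm five names for a reusable water bottle aimed at hikers.")

# Step 2: feed the first response into a follow-up prompt.
pitch = ask(
    "Pick the strongest name from the list below and write a two-sentence "
    "elevator pitch for it:\n\n" + ideas
)

print(pitch)
```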

Offered on the Coursera platform, Google Prompting Essentials costs $49. Once you complete the course, you will receive a certificate from Google to share with your network and employer. Better yet, you will understand how to make GenAI a more useful tool in your life and work. Enroll here.

Note: Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.

Launch Your Project Management Career with Google’s AI-Enhanced Professional Certificate


Back in 2021, Google released a series of certificate programs, including one focused on Project Management. Designed to give students "an immersive understanding of the practices and skills needed to succeed in an entry-level project management role," the certificate program features six courses overall, including:

  • Foundations of Project Management
  • Project Initiation: Starting a Successful Project
  • Project Planning: Putting It All Together
  • Project Execution: Running the Project
  • Agile Project Management
  • Capstone: Applying Project Management in the Real World

More than 1.7 million people have since enrolled in the course sequence. And Google has now updated the courses with six new videos on how to use AI in project management. The videos will teach students how to boost project management skills with AI, identify potential project risks with generative AI, use AI to improve project communications, and more.

The Project Management program takes about six months to complete (assuming you put in 10 hours per week), and it should cost about $300 in total. Following a 7-day free trial, students will be charged $49 per month until they complete the program.

All Google career courses are hosted on the Coursera platform. Finally, it's worth mentioning that anyone who enrolls in this certificate before November 30, 2024, will get access to Google AI Essentials at no cost.

Note: Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.

Stephen Fry Explains Why Artificial Intelligence Has a “70% Risk of Killing Us All”

Apart from his comedic, dramatic, and literary endeavors, Stephen Fry is widely known for his avowed technophilia. He once wrote a column on that theme, "Dork Talk," for the Guardian, in whose inaugural dispatch he laid out his credentials by claiming to have been the owner of only the second Macintosh computer sold in Europe ("Douglas Adams bought the first"), and never to have "met a smartphone I haven't bought." But now, like many of us who were "dippy about all things digital" at the end of the last century and the beginning of this one, Fry seems to have his doubts about certain big-tech projects in the works today: take the "$100 billion plan with a 70 percent risk of killing us all" described in the video above.

This plan, of course, has to do with artificial intelligence in general, and "the logical AI subgoals to survive, deceive, and gain power" in particular. Even in this relatively early stage of development, we've witnessed AI systems that seem to be altogether too good at their jobs, to the point of engaging in what would count as deceptive and unethical behavior were the subject a human being. (Fry cites the example of a stock market-investing AI that engaged in insider trading, then lied about having done so.) What's more, "as AI agents take on more complex tasks, they create strategies and subgoals which we can't see, because they're hidden among billions of parameters," and quasi-evolutionary "selection pressures also cause AI to evade safety measures."

In the video, MIT physicist and machine learning researcher Max Tegmark speaks portentously of the fact that we are, "right now, building creepy, super-capable, amoral psychopaths that never sleep, think much faster than us, can make copies of themselves, and have nothing human about them whatsoever." Fry quotes computer scientist Geoffrey Hinton warning that, in inter-AI competition, "the ones with more sense of self-preservation will win, and the more aggressive ones will win, and you'll get all the problems that jumped-up chimpanzees like us have." Hinton's colleague Stuart Russell explains that "we need to worry about machines not because they're conscious, but because they're competent. They may take preemptive action to ensure that they can achieve the objective that we gave them," and that action may be less than impeccably considerate of human life.

Would we be better off just shutting the whole enterprise down? Fry raises philosopher Nick Bostrom's argument that "stopping AI development could be a mistake, because we could eventually be wiped out by another problem that AI could've prevented." This would seem to dictate a deliberately cautious form of development, but "nearly all AI research funding, hundreds of billions per year, is pushing capabilities for profit; safety efforts are tiny in comparison." Though "we don't know if it will be possible to maintain control of super-intelligence," we can nevertheless "point it in the right direction, instead of rushing to create it with no moral compass and clear reasons to kill us off." The mind, as they say, is a fine servant but a terrible master; the same holds true, as the case of AI makes us see afresh, for the mind's creations.

Related content:

Stephen Fry Voices a New Dystopian Short Film About Artificial Intelligence & Simulation Theory: Watch Escape

Stephen Fry Reads Nick Cave's Stirring Letter About ChatGPT and Human Creativity: "We Are Fighting for the Very Soul of the World"

Stephen Fry Explains Cloud Computing in a Short Animated Video

Stephen Fry Takes Us Inside the Story of Johannes Gutenberg & the First Printing Press

Stephen Fry on the Power of Words in Nazi Germany: How Dehumanizing Language Laid the Foundation for Genocide

Neural Networks for Machine Learning: A Free Online Course Taught by Geoffrey Hinton

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities and the book The Stateless City: a Walk through 21st-Century Los Angeles. Follow him on Twitter at @colinmarshall or on Facebook.

Fritz Lang First Depicted Artificial Intelligence on Film in Metropolis (1927), and It Frightened People Even Then

Artificial intelligence seems to have become, as Michael Lewis labeled a previous chapter in the recent history of technology, the new new thing. But human anxieties about it are, if not an old old thing, then at least part of a tradition longer than we may expect. For vivid evidence, look no further than Fritz Lang's Metropolis, which brought the very first cinematic depiction of artificial intelligence to theaters in 1927. It "imagines a future cleaved in two, where the affluent from lofty skyscrapers rule over a subterranean caste of laborers," writes Synapse Analytics' Omar Abo Mosallam. "The class tension is so palpable that the invention of a Maschinenmensch (a robot capable of work) upends the social order."

The sheer tirelessness of the Maschinenmensch "sows havoc in the city"; later, after it takes on the form of a young woman called Maria — a transformation you can watch in the clip above — it "incites workers to rise up and destroy the machines that keep the city functioning. Here, there is a suggestion to associate this new invention with an unraveling of the social order." This robot, which Guardian film critic Peter Bradshaw describes as "a brilliant eroticization and fetishization of modern technology," has long been Metropolis' signature figure, more iconic than HAL, Data, and WALL-E put together.

Still, those characters all rate mentions of their own in the articles reviewing the history of AI in the movies recently published by the BFI, RTÉ, Pictory, and other outlets besides, as do the likes of The Day the Earth Stood Still, Alien, Blade Runner (and even more so its sequel Blade Runner 2049), Ghost in the Shell, The Matrix, and Ex Machina. Not all of these pictures present their artificially intelligent characters primarily as existential threats to the existing order; the BFI's Georgina Guthrie highlights video essayist-turned-auteur Kogonada's After Yang as an example that treats the role AI could assume in society as a much more complex — indeed, much more human — matter.

From Metropolis to After Yang, as RTÉ's Alan Smeaton points out, "AI is usually portrayed in movies in a robotic or humanoid-like fashion, presumably because we can easily relate to humanoid and robotic forms." But as the public has come to understand over the past few years, we can perceive a technology as potentially or actually intelligent even if it doesn't resemble a human being. Perhaps the age of the fearsome mechanical Art Deco gynoid will never come to pass, but we now feel more keenly than ever both the seductiveness and the threat of Metropolis' Maschinenmensch — or, as it was named in the original on which the film was based, Futura.

Related content:

Metropolis: Watch Fritz Lang's 1927 Masterpiece

Artificial Intelligence, Art & the Future of Creativity: Watch the Final Chapter of the "Everything is a Remix" Series

Hunter S. Thompson Chillingly Predicts the Future, Telling Studs Terkel About the Coming Revenge of the Economically & Technologically "Obsolete" (1967)

Amazon Offers Free AI Courses, Aiming to Help 2 Million People Build AI Skills by 2025

Isaac Asimov Predicts the Future in 1982: Computers Will Be "at the Center of Everything;" Robots Will Take Human Jobs

Google Launches a New Course Called "AI Essentials": Learn How to Use Generative AI Tools to Increase Your Productivity

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Isaac Asimov Predicts the Future of Online Education in 1988–and It’s Now Coming True in the Age of AI & Smartphones

"I have never let my schooling interfere with my education." Though that line probably originated with a Canadian novelist called Grant Allen, it's long been popularly attributed to his more colorful nineteenth-century contemporary Mark Twain. It isn't hard to understand why it now has so much traction as a social media-ready quote, though during much of the period between Allen's day and our own, many must have found it practically unintelligible. The industrialized world of the twentieth century attempted to make education and schooling synonymous, an ambition sufficiently wrongheaded that, by the nineteen-eighties, no less powerful a mind than Isaac Asimov was lamenting it on national television.

"In the old days you used to have tutors for children," Asimov tells Bill Moyers in a 1988 World of Ideas interview. "But how many people could afford to hire a pedagogue? Most children went uneducated. Then we reached the point where it was absolutely necessary to educate everybody. The only way we could do it is to have one teacher for a great many students and, in order to organize the situation properly, we gave them a curriculum to teach from." And yet "the number of teachers is far greater than the number of good teachers." The ideal solution, personal tutors for all, would be made possible by personal computers, "each of them hooked up to enormous libraries where anyone can ask any question and be given answers."

At the time, this wasn't an obvious future for non-science-fiction visionaries to imagine. "Well, what if I want to learn only about baseball?" asks a faintly skeptical Moyers. "You learn all you want about baseball," Asimov replies, "because the more you learn about baseball the more you might grow interested in mathematics to try to figure out what they mean by those earned run averages and the batting averages and so on. You might, in the end, become more interested in math than baseball if you follow your own bent." And indeed, similarly equipped with a personal-computer-as-tutor, "someone who is interested in mathematics may suddenly find himself very enticed by the problem of how you throw a curve ball."

The trouble was how to get every household a computer, which was still seen by many in 1988 as an extravagant, not necessarily useful purchase. Three and a half decades later, you see a computer in the hand of nearly every man, woman, and child in the developed countries (and many developing ones as well). This is the technological reality that gave rise to Khan Academy, which offers free online education in math, sciences, literature, history, and much else besides. In the interview clip above, its founder Sal Khan remembers how, when his internet-tutoring project was first gaining momentum, it occurred to him that "maybe we're in the right moment in history that something like this could become what Isaac Asimov envisioned."

More recently, Khan has been promoting the educational use of a technology at the edge of even Asimov's vision. Just days ago, he published the book Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing) and made a video with his teenage son demonstrating how the latest version of OpenAI's ChatGPT — sounding, it must be said, uncannily like Scarlett Johansson in the now-prophetic-seeming Her — can act as a geometry tutor. Not that it works only, or even primarily, for kids in school: "That's another trouble with education as we now have it," as Asimov says. "It is for the young, and people think of education as something that they can finish." We may be as relieved as generations past when our schooling ends, but now we have no excuse ever to finish our education.

Find a transcript of Asimov and Moyers' conversation here.

Related content:

1,700 Free Online Courses from Top Universities

Isaac Asimov Predicts the Future in 1982: Computers Will Be "at the Center of Everything;" Robots Will Take Human Jobs

Arthur C. Clarke Predicts the Future in 1964 … And Kind of Nails It

Noam Chomsky Spells Out the Purpose of Education

The President of Northwestern University Predicts Online Learning … in 1934!

Salman Khan Returns to MIT, Gives Commencement Speech, Likens School to Hogwarts

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Google Launches a New Course Called “AI Essentials”: Learn How to Use Generative AI Tools to Increase Your Productivity

This week, Google announced the launch of Google AI Essentials, a new self-paced course designed to help people learn AI skills that can boost their productivity. Taught by Google's AI experts, and assuming no prior knowledge of programming, the course ventures to show students how to "use AI in the real world," with an emphasis on helping students:

  • Develop ideas and content. If you're stuck at the beginning of a project, use AI tools to help you brainstorm new ideas. In the course, you'll use a conversational AI tool to generate concepts for a product and develop a presentation to pitch the product.
  • Make more informed decisions. Let's say you're planning an event. AI tools can help you research the best location to host it based on your criteria. You can also use AI to help you come up with a tagline or slogan.
  • Speed up daily work tasks. Clear out that inbox faster using AI to help you summarize emails and draft responses, as sketched just after this list.
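
The inbox example from the last bullet is the easiest to make concrete. Here is a small, self-contained Python sketch of what a clear, specific prompt for summarizing an email and drafting a reply might look like; the template structure (role, task, format) and the helper name build_email_prompt are illustrative assumptions rather than the course's official method, and the resulting text can be pasted into any chat-based AI tool.

```python
def build_email_prompt(email_text: str, tone: str = "friendly but concise") -> str:
    """Assemble a clear, specific prompt that asks an AI assistant to
    summarize an email and draft a reply. Structure: role, task, format."""
    return (
        "You are an assistant helping me manage my inbox.\n"
        f"Task: summarize the email below in two sentences, then draft a reply in a {tone} tone.\n"
        "Format: a 'Summary:' section followed by a 'Draft reply:' section.\n\n"
        "Email:\n"
        f"{email_text}"
    )


# Example usage: paste the resulting prompt into your AI tool of choice.
print(build_email_prompt(
    "Hi, could you send over the Q3 figures before Friday's review? Thanks, Dana"
))
```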

Google AI Essentials features five modules (the video above comes from Module 1) and takes about 9 hours to complete. The tuition is currently set at $49, and those who complete the course will earn a Google certificate that they can share with their professional network.

Google AI Essentials follows up on another course recently featured here on OC, Generative AI for Educators. Find it here.

Note: Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.

Related content:

Google & Coursera Launch New Career Certificates That Prepare Students for Jobs in 2–6 Months: Business Intelligence & Advanced Data Analytics

Google & MIT Offer a Free Course on Generative AI for Teachers and Educators

Google & Coursera Create a Career Certificate That Prepares Students for Cybersecurity Jobs in 6 Months
