Late last year, Amazon announced AI Ready, a new initiative “designed to provide free AI skills training to 2 million people globally by 2025.” This includes eight free AI and generative AI courses, some designed for beginners, and others designed for more advanced students.
As the Wall Street Journal podcast notes above, Amazon created the AI Ready initiative with three goals in mind: 1) to increase the overall number of people in the workforce who have a basic understanding of AI, 2) to compete with Microsoft and other big companies for AI talent, and 3) to expose a large number of people to Amazon’s AI systems.
If you're new to AI, you may want to start by exploring the AI Ready courses.
You can find more information (including more free courses) on this AI Ready page. We have other free AI courses listed in the Relateds below.
Andrew Ng, an AI pioneer and Stanford computer science professor, has released a new course called Generative AI for Everyone. Designed for a non-technical audience, the course will “guide you through how generative AI works and what it can (and can’t) do. It includes hands-on exercises where you’ll learn to use generative AI to help in day-to-day work.” The course also explains “how to think through the lifecycle of a generative AI project, from conception to launch, including how to build effective prompts,” and it discusses “the potential opportunities and risks that generative AI technologies present to individuals, businesses, and society.” Given the coming prevalence of AI, it’s worth spending six hours with this course (the estimated time needed to complete it). You can audit Generative AI for Everyone for free, and watch all of the lectures at no cost. If you would like to take the course and earn a certificate, it will cost $49.
Many of us can remember a time when artificial intelligence was widely dismissed as a science-fictional pipe dream unworthy of serious research and investment. That time, safe to say, has gone. “Within a decade,” writes blogger Samuel Hammond, the development of artificial intelligence could bring about a world in which “ordinary people will have more capabilities than a CIA agent does today. You’ll be able to listen in on a conversation in an apartment across the street using the sound vibrations off a chip bag” (as previously featured here on Open Culture). “You’ll be able to replace your face and voice with those of someone else in real time, allowing anyone to socially engineer their way into anything.”
“The problem with the way we build AI systems now is we give them a fixed objective,” Russell says. “The algorithms require us to specify everything in the objective.” Thus an AI charged with de-acidifying the oceans could quite plausibly come to the solution of setting off “a catalytic reaction that does that extremely efficiently, but consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.” The key to this problem, Russell argues, is to program in a certain lack of confidence: “It’s when you build machines that believe with certainty that they have the objective, that’s when you get sort of psychopathic behavior, and I think we see the same thing in humans.”
A less existential but more common worry has to do with unemployment. Full AI automation of the warehouse tasks still performed by humans, for example, “would, at a stroke, eliminate three or four million jobs.” Russell here turns to E. M. Forster, who in the 1909 story “The Machine Stops” envisions a future in which “everyone is entirely machine-dependent,” with lives not unlike the e‑mail- and Zoom-meeting-filled ones we lead today. The narrative plays out as a warning that “if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it.” The mind, as the saying goes, is a wonderful servant but a terrible master. The same is true of machines, and even truer, we may well find, of mechanical minds.
William Shakespeare’s plays have endured not just because of their inherent dramatic and linguistic qualities, but also because each era has found its own way of envisioning and re-envisioning them. The technology involved in stage productions has changed over the past four centuries, of course, but so has the technology involved in art itself. A few years ago, we featured here on Open Culture an archive of 3,000 illustrations of Shakespeare’s complete works going back to the mid-nineteenth century. That site was the PhD project of Cardiff University’s Michael Goodman, who has recently completed another digital Shakespeare project, this time using artificial intelligence: Paint the Picture to the Word.
“Every image collected here has been generated by Stable Diffusion, a powerful text-to-image AI,” writes Goodman on this new project’s About page. “To create an image using this technology a user simply types a description of what they want to see into a text box and the AI will then produce several images corresponding to that initial textual prompt,” much as with the also-new AI-based art generator DALL‑E.
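To get a sense of just how little the user has to supply, here is a minimal sketch of that workflow using Hugging Face's open-source diffusers library. This is not Goodman's actual setup; the checkpoint name, prompt, and hardware assumption (a CUDA GPU with the weights already downloaded) are illustrative only.

```python
# A minimal sketch (not Goodman's pipeline): generating a few
# Shakespeare-inspired images with Stable Diffusion via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one public checkpoint; an illustrative choice
    torch_dtype=torch.float16,
).to("cuda")

prompt = "King Lear raging against the storm on the heath, expressionist oil painting"
images = pipe(prompt, num_images_per_prompt=4).images  # several candidates per prompt
for i, image in enumerate(images):
    image.save(f"lear_{i}.png")
```

Each call returns several candidates, leaving the human to choose among them; that act of selection is where much of Goodman's curation comes in.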
Each of the many images Goodman created is inspired by a Shakespeare play. “Some of the illustrations are expressionistic (King John, Julius Caesar), while some are more literal (Merry Wives of Windsor).” All “offer a visual idea or a gloss on the plays: Henry VIII, with the central characters represented in fuzzy felt, is grimly ironic, while in Pericles both Mariana and her father are seen through a watery prism, echoing that play’s concern with sea imagery.”
Lyricists must write concretely enough to be evocative, yet vaguely enough to allow each listener his personal interpretation. The nineteen-sixties and seventies saw an especially rich balance struck between resonant ambiguity and massive popularity, aided (as many involved parties have admitted) by the use of certain psychoactive substances. Half a century later, the visions induced by those same substances offer the closest comparison to the striking fruits of visual artificial-intelligence projects like Google’s Deep Dream a few years ago or DALL‑E today. Only natural, perhaps, that these advanced applications would sooner or later be fed psychedelic song lyrics.
The video at the top of the post presents the Electric Light Orchestra’s 1977 hit “Mr. Blue Sky” illustrated with images generated by artificial intelligence straight from its words. It was a much-anticipated endeavor from the YouTube channel SolarProphet, which has also put up similarly AI-accompanied presentations of such already goofy-image-filled comedy songs as Lemon Demon’s “The Ultimate Showdown” and Neil Cicierega’s “It’s Gonna Get Weird.”
Just above appears a video for David Bowie’s “Starman” with AI-visualized lyrics, created by YouTuber Aidontknow. Created isn’t too strong a word, since DALL‑E and other applications currently available to the public provide a selection of images for each prompt, leaving it to human users to provide specifics about the aesthetic and, in the case of these videos, to select the result that best suits each line. One delight of this particular production, apart from the boogieing children, is seeing how the AI imagines various starmen waiting in the sky, all of whom look suspiciously like early-seventies Bowie. Of all his songs of that period, surely “Life on Mars?” would be choice number one for an AI music video; but then, its imagery may well be too bizarre for current technology to handle.
Back in 2017, Coursera co-founder and former Stanford computer science professor Andrew Ng launched a five-part series of courses on “Deep Learning” on the edtech platform, a series meant to “help you master Deep Learning, apply it effectively, and build a career in AI.” These courses extended his initial Machine Learning course, which has attracted almost 5 million students since 2012, in an effort, he said, to build “a new AI-powered society.” Among other things, the courses teach students to:
• Build machine learning models in Python using the popular machine learning libraries NumPy and scikit-learn.
• Build and train supervised machine learning models for prediction and binary classification tasks, including linear regression and logistic regression.
• Build and train a neural network with TensorFlow to perform multi-class classification.
• Apply best practices for machine learning development so that your models generalize to data and tasks in the real world.
• Build and use decision trees and tree ensemble methods, including random forests and boosted trees.
• Use unsupervised learning techniques, including clustering and anomaly detection.
• Build recommender systems with a collaborative filtering approach and a content-based deep learning method.
• Build a deep reinforcement learning model.
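As a taste of the first few items on that list, here is a minimal, self-contained sketch of the kind of supervised-learning exercise such courses assign: training and evaluating a logistic-regression classifier with scikit-learn. The dataset and parameters are illustrative choices, not taken from the courses themselves.

```python
# A minimal supervised-learning example in the spirit of the courses above:
# fit a logistic-regression classifier and check how well it generalizes.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small built-in binary-classification dataset (malignant vs. benign tumors).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)  # extra iterations to ensure convergence
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
```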
Note: Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.
DALL‑E, an artificial intelligence system that generates viable-looking art in a variety of styles in response to user-supplied text prompts, has been garnering a lot of interest since it debuted this spring.
It has yet to be released to the general public, but while we’re waiting, you could have a go at DALL‑E Mini, an open-source AI model that generates a grid of images inspired by any phrase you care to type into its search box.
Some of the concepts are learnt (sic) from memory as it may have seen similar images. However, it can also learn how to create unique images that don’t exist such as “the Eiffel tower is landing on the moon” by combining multiple concepts together.
Several models are combined together to achieve these results:
• an image encoder that turns raw images into a sequence of numbers with its associated decoder
• a model that turns a text prompt into an encoded image
• a model that judges the quality of the images generated for better filtering
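That last piece, the quality judge, is an image-text similarity model used to rank the candidates before the best ones appear in the grid. As a rough illustration of how that ranking step works, here is a minimal sketch using the publicly available CLIP checkpoint from Hugging Face; treating it as a stand-in for DALL‑E Mini's own ranking model is an assumption, and the candidate file names are placeholders.

```python
# Rank a batch of generated candidate images by how well they match the prompt,
# using a public CLIP model as a stand-in for DALL-E Mini's quality judge.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "the Eiffel tower is landing on the moon"
candidates = [Image.open(f"candidate_{i}.png") for i in range(9)]  # placeholder files

inputs = processor(text=[prompt], images=candidates, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_image.squeeze(1)  # one similarity score per image

best_first = scores.argsort(descending=True)
print("Best match:", f"candidate_{best_first[0].item()}.png")
```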
My first attempt to generate some art using DALL‑E Mini failed to yield the hoped-for weirdness. I blame the blandness of my search term: “tomato soup.”
Perhaps I’d have better luck with “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh.”
Ah, there we go!
I was curious to know how DALL‑E Mini would riff on its namesake artist’s handle (an honor Dali shares with the titular AI hero of Pixar’s 2008 animated feature, WALL‑E).
Hmm… seems like we’re backsliding a bit.
Let me try “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh with Salvador Dali.”
Ye gods! That’s the stuff of nightmares, but it also strikes me as pretty legit modern art. Love the sparing use of red. Well done, DALL‑E Mini.
At this point, vanity got the better of me and I did the AI art-generating equivalent of googling my own name, adding “in a tutu” because who among us hasn’t dreamed of being a ballerina at some point?
Let that be a lesson to you, Pandora…
Hopefully we’re all planning to use this playful open AI tool for good, not evil.
It’s all fun and games when you’re generating “robot playing chess” in the style of Matisse, but dropping machine-generated imagery on a public that seems less capable than ever of distinguishing fact from fiction feels like a dangerous trend.
Additionally, DALL‑E’s neural network can yield sexist and racist images, a recurring issue with AI technology. For instance, a reporter at Vice found that prompts including search terms like “CEO” exclusively generated images of white men in business attire. The company acknowledges that DALL‑E “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”
Co-creator Dayma does not duck the troubling implications and biases his baby could unleash:
While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. While the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.
And a Twitter user who goes by St. Rev. Dr. Rev blows minds and opens multiple cans of worms, using panels from cartoonist Joshua Barkman’s beloved webcomic, False Knees.
Much has been made in recent years of the “de-aging” processes that allow actors to credibly play characters far younger than themselves. But it has also become possible to de-age film itself, as demonstrated by Peter Jackson’s celebrated new docu-series The Beatles: Get Back. The vast majority of the material that comprises its nearly eight-hour runtime was originally shot in 1969, under the direction of Michael Lindsay-Hogg for the documentary that became Let It Be.
Those who have seen both Lindsay-Hogg’s and Jackson’s documentaries will notice how much sharper, smoother, and more vivid the very same footage looks in the latter, despite the sixteen-millimeter film having languished for half a century. The kind of visual restoration and enhancement seen in Get Back was made possible by technologies that have only emerged in the past few decades, technologies previously seen at work in Jackson’s They Shall Not Grow Old, a documentary acclaimed for its restoration of century-old World War I footage to a time-travel-like degree of verisimilitude.
“You can’t actually just do it with off-the-shelf software,” Jackson explained in an interview about the restoration processes involved in They Shall Not Grow Old. This necessitated marshaling, at his New Zealand company Park Road Post Production, “a department of code writers who write computer code in software.” In other words, a sufficiently ambitious project of visual revitalization (making media from bygone times even more lifelike than it was to begin with) becomes as much a job of computer programming as of traditional film restoration or visual effects.
This also goes for the less obvious but no-less-impressive treatment given by Jackson and his team to the audio that came with the Let It Be footage. Recorded in large part monaurally, these tapes presented a formidable production challenge. John, Paul, George, and Ringo’s instruments share a single track with their voices, and not just their singing voices but their speaking ones as well. On first listen, this renders many of their conversations inaudible, and probably by design: “If they were in a conversation,” said Jackson, “they would turn their amps up loud and they’d strum the guitar.”
This means of keeping their words from Lindsay-Hogg and his crew worked well enough in the wholly analog late 1960s, but it has proven no match for the artificial intelligence/machine learning of the 2020s. “We devised a technology that is called demixing,” said Jackson. “You teach the computer what a guitar sounds like, you teach them what a human voice sounds like, you teach it what a drum sounds like, you teach it what a bass sounds like.” Supplied with enough sonic data, the system eventually learned to distinguish from one another not just the sounds of the Beatles’ instruments but of their voices as well.
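Jackson's demixing system was built in-house and isn't publicly available, but the underlying idea, training a model to pull individual sources out of a single mixed track, now exists in open-source form. As a loose illustration (not the technology used on Get Back), here is how the Spleeter library separates a recording into stems; the input file name is just a placeholder.

```python
# Separate a mixed recording into vocals, drums, bass, and "other" stems
# using Spleeter's pretrained 4-stem model. Purely illustrative: this is
# not the system Jackson's team built, and the file name is a placeholder.
from spleeter.separator import Separator

separator = Separator("spleeter:4stems")  # vocals / drums / bass / other
separator.separate_to_file("rooftop_rehearsal.wav", "separated/")
# Results are written to separated/rooftop_rehearsal/{vocals,drums,bass,other}.wav
```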
Hence, in addition to Get Back’s revelatory musical moments, its many once-private but now crisply audible exchanges between the Fab Four. “Oh, you’re recording our conversation?” George Harrison at one point asks Lindsay-Hogg in a characteristic tone of faux surprise. But if he could hear the recordings today, his surprise would surely be real.
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.