Artificial Intelligence Creates Realistic Photos of People, None of Whom Actually Exist

Each day in the 2010s, it seems, brings another startling development in the field of artificial intelligence — a field widely written off not all that long ago as a dead end. But now AI looks just as alive as the people you see in these photographs, despite the fact that none of them have ever lived, and it’s questionable whether we can even call the images that depict them “photographs” at all. All of them come, in fact, as products of a state-of-the-art generative adversarial network, a type of artificial intelligence algorithm that pits multiple neural networks against each other in a kind of machine-learning match.
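To make that adversarial setup concrete, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch. It is not Nvidia’s code, and it trains on a toy two-dimensional dataset rather than face photographs; it only illustrates how a generator learns to fool a discriminator that is simultaneously learning to spot its fakes.

```python
# Hypothetical, minimal GAN sketch (not Nvidia's StyleGAN): a generator and a
# discriminator are pitted against each other on a toy 2-D Gaussian dataset.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # stand-in "real" data
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: learn to score real samples high and fakes low.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to produce samples the discriminator scores as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In the face-generating systems described here the networks are far larger and trained on real photographs, but the competitive loop is the same in spirit.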

These neural networks have, it seems, competed their way to generating images of fabricated human faces that genuine humans have trouble distinguishing from images of the real deal. Their architecture, described in a paper by the Nvidia researchers who developed it, “leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis.” What they’ve come up with, in other words, not only makes it easier than ever to create fake faces, but also makes those faces more customizable than ever.
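For readers curious what “scale-specific control” might look like in practice, here is a deliberately simplified, hypothetical toy, not the actual StyleGAN implementation: each layer of a small generator is modulated by its own style vector, so coarse layers can take their styles from one latent code and fine layers from another, roughly the mixing behind the grid of combined faces.

```python
# Hypothetical toy illustrating per-layer "style" modulation and style mixing.
# An illustration of the idea only, not the StyleGAN architecture.
import torch
import torch.nn as nn

class ToyStyleGenerator(nn.Module):
    def __init__(self, latent_dim=16, channels=32, n_layers=4):
        super().__init__()
        # Mapping network: turns a latent code z into an intermediate code w.
        self.mapping = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                                     nn.Linear(latent_dim, latent_dim))
        # One style projection per layer, so each layer can be controlled separately.
        self.styles = nn.ModuleList([nn.Linear(latent_dim, channels) for _ in range(n_layers)])
        self.layers = nn.ModuleList([nn.Linear(channels, channels) for _ in range(n_layers)])
        self.const = nn.Parameter(torch.randn(channels))  # learned constant starting point
        self.to_rgb = nn.Linear(channels, 3)               # stand-in for image output

    def forward(self, per_layer_latents):
        # per_layer_latents: one latent code per layer; they need not be the same.
        x = self.const.expand(per_layer_latents[0].shape[0], -1)
        for layer, style, z in zip(self.layers, self.styles, per_layer_latents):
            w = self.mapping(z)
            x = torch.relu(layer(x)) * style(w)  # the style scales this layer's features
        return self.to_rgb(x)

gen = ToyStyleGenerator()
z_a, z_b = torch.randn(1, 16), torch.randn(1, 16)
# "Coarse" layers take face A's code, "fine" layers take face B's: a mixed result.
mixed = gen([z_a, z_a, z_b, z_b])
```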

“Of course, the ability to create realistic AI faces raises troubling questions. (Not least of all, how long until stock photo models go out of work?)” writes James Vincent at The Verge. “Experts have been raising the alarm for the past couple of years about how AI fakery might impact society. These tools could be used for misinformation and propaganda and might erode public trust in pictorial evidence, a trend that could damage the justice system as well as politics.”


But still, “you can’t doctor any image in any way you like with the same fidelity. There are also serious constraints when it comes to expertise and time. It took Nvidia’s researchers a week training their model on eight Tesla GPUs to create these faces.”

Though “a running battle between AI fakery and image authentication for decades to come” seems inevitable, the current ability of computers to create plausible faces certainly fascinates, especially when compared to their ability just four years ago, the hazy black-and-white fruits of which appear just above. Put that against the grid of faces at the top of the post, which shows how Nvidia’s system can combine the features of the faces on one axis with the features on the other, and you’ll get a sense of the technological acceleration involved. Such a process could well be used, for example, to give you a sense of what your future children might look like. But how long until it puts convincing visions of moving, speaking, even thinking human beings before our eyes?

via Petapixel

Related Content:

Scientists Create a New Rembrandt Painting, Using a 3D Printer & Data Analysis of Rembrandt’s Body of Work

Artificial Intelligence Writes a Piece in the Style of Bach: Can You Tell the Difference Between JS Bach and AI Bach?

Artificial Intelligence Program Tries to Write a Beatles Song: Listen to “Daddy’s Car”

Google Launches a Free Course on Artificial Intelligence: Sign Up for Its New “Machine Learning Crash Course”

Google Launches Three New Artificial Intelligence Experiments That Could Be Godsends for Artists, Museums & Designers

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.





Comments (5)
  • Auke says:

    This is awesome for D&D DMs — endless number of authentic NPC illustrations!

  • Mikey says:

    Great. Just what we need. Real Democrat NPC voters. I guess the dead aren’t enough to get them elected

  • Wheredoyoucomefrom says:

    Good to know right wing social media manipulators are so present in online comment sections. Maybe you’ll drive someone to send more bombs or shoot up another place of worship

  • Bill L says:

    Where is the report button to report extremely toxic comments? (referring to the one by Wheredoyoucomefrom of course.)

  • MB says:

    For several years, we have had Photoshop to modify pictures, so all the possible problems that could occur with fake photos are far from new.
    However, it’s face animation that could become the next real problem: being in front of an AI-generated image on your screen, moving and acting like a normal human being.

    You could even take an existing person and mimic the movements of their face to make them say anything you want. That could become problematic in court and could destroy someone’s reputation.

