By Russ Banham
Perspectives magazine, published by Dell Technologies
The AI-generated headshots created by Generated Photos are designed, in part, to be a competitive alternative to pricey photoshoots with models, camera operators, fashion stylists, makeup artists, sets and lighting. Pick a beautiful background, pop in an attractive face, pay a one-time fee of $2.99, and print! That’s all there is to it…maybe.
Although not traditional deepfakes in the sense of swapping one person’s face for another’s to create a counterfeit image, AI-generated faces present a similar risk of forgery, most visibly in fake social media profiles.
“When a person’s profile image on social media looks realistic and attractive, it multiplies the possibilities for deception,” says Siwei Lyu, professor of computer science and engineering at the University at Buffalo, whose primary research focus is digital media forensics, the analysis of digital images and video to detect manipulation.
Lyu is considered among the world’s top researchers in the nascent field of AI-generated images. Over the past 20 years, he has twinned his academic background in machine learning (he has a Ph.D. in computer science from Dartmouth College) with a lifelong interest in forensics (the application of scientific methods to investigate crime) to create realistic fake faces and figure out ways to detect the counterfeit images. In 2019, he testified before Congress on the ubiquity of deepfakes and their associated threats.
Despite the risks, AI-generated faces have much to commend them, Lyu maintains. “You can create as many highly realistic faces as you want, with enormous control over an image’s gender, age, ethnic group and so on,” he says. “Unlike traditional deepfakes, the images aren’t made using real human subjects, so you don’t need to obtain proper consent forms from someone to release their image. You can post them online without worrying about privacy issues, as they aren’t real people. From a business standpoint, the opportunities are huge.”
Regrettably, so are the risks. Russian and Chinese spies alike are alleged to have used fake photos in LinkedIn accounts to cultivate business relationships and glean competitive information. Six or seven years ago, this criminal enterprise was limited by the available face-generation technology, which was imperfect at best. That is no longer the case, making this new generation of deepfakes a highly effective fraud tool.
Models reimagined
Yesterday’s deepfakes were a way to entertain people by swapping the face of a public figure onto the body of someone doing things one would not necessarily associate with that person, from the sublime to the ridiculous.
While the First Amendment appears to shield these creations from slander and defamation claims, regulators are pushing for laws requiring content creators to disclose whether a video is real or fake. To a degree, Generated Photos’ faces sidestep these concerns, since they do not involve actual human beings.
That said, the headshots look exactly like real people (this writer couldn’t tell the difference between the computer-generated faces and photographs of his kids). Moreover, there are considerable market opportunities inherent in the always-improving, machine-learning technology used to create this content, says Ivan Braun, founder and CEO of Generated Photos.
Like other companies monetizing AI-generated facial images, Braun credits AI researcher Ian Goodfellow with the development of generative adversarial networks (GANs), the deep learning architecture used to produce realistic fake faces.
Goodfellow introduced GANs in 2014 at the University of Montreal and was later named one of MIT Technology Review’s “35 Innovators Under 35.” After receiving his Ph.D. at the university, he became a research scientist at Google Brain and is presently Apple’s director of machine learning.
Generated Photos uses a tool developed by Nvidia that’s based on Goodfellow’s groundbreaking research. Instead of a paintbrush and paint, faces are drawn with software code. Because GANs rely on deep learning, the sub-field of machine learning built on multi-layered neural networks, the realism of AI-generated faces improves over time without direct human intervention (hence: machine learning).
“There are really two parts to it, involving the machine-learning algorithm and then basic innovation and judgment,” Braun says. “Like a painter, the artist uses the algorithm to create a realistic face from random combinations of millions and millions of pixels, to the point where it is virtually impossible to distinguish between a real face and an AI-generated one.”
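In GAN terms, that “painter” is really a pair of neural networks locked in competition: a generator that turns random noise into candidate images, and a discriminator that tries to tell those candidates apart from real photographs. The toy PyTorch sketch below illustrates the adversarial loop under deliberately simplified assumptions (tiny fully connected networks, flattened grayscale images); it is a teaching sketch, not Nvidia’s tool or Generated Photos’ pipeline.

```python
# Minimal GAN training loop (illustrative sketch, not a production pipeline).
# The generator maps random noise to fake images; the discriminator scores
# images as real or fake. Each network improves by competing with the other.
import torch
import torch.nn as nn

LATENT_DIM = 100      # size of the random noise vector fed to the generator
IMG_PIXELS = 64 * 64  # toy grayscale image, flattened to one vector

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),  # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # one real-vs-fake logit per image
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, LATENT_DIM))

    # Discriminator step: label real images 1 and generated images 0.
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1))
              + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call the fakes real.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# One toy step on random stand-in "real" data, just to show the loop runs.
train_step(torch.rand(8, IMG_PIXELS) * 2 - 1)
```

Production systems replace these toy networks with deep convolutional architectures trained on millions of photos, but the push-and-pull is the same: as the discriminator gets harder to fool, the generator’s faces get more realistic.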
By producing what Braun calls “hyper-realistic faces,” Generated Photos is compelling advertisers to rethink the use of traditional photoshoots with live models. “We’re making modeling more productive by eliminating the need to hire all the people engaged in a typical fashion shoot, from the photographer to the stylists,” he explains.
His biggest market, however, is video gaming. “One of our clients has a game that simulates different situations involving a business crisis like a factory blowing up,” he says. “The PR manager calls up the CEO asking what she should tell the press. Meanwhile, the mayor is texting the CEO. All the characters have AI-generated faces that look as realistic as if they were filmed for a documentary.”
The company also has made zombie faces for video games that look as menacingly real as the walking dead in George A. Romero’s best zombie movies. Another market? Academia. “The University of Texas is using our faces to study and measure human emotions that provoke trust,” Braun says. “And a very large social media company signed a contract with us to train their [fraud detection] models to distinguish AI-generated photos from authentic images, reducing the threat of fake profiles.”
AI counterintelligence
That’s good news, since the use of AI-generated faces for criminal intents is worrisome. In 2019, the Associated Press reported on Katie Jones, a 30-something woman who worked at a major think tank and whose LinkedIn connections included top-level personnel at the Brookings Institution and the Heritage Foundation. Jones knew a deputy assistant secretary of state and a senior aide to a senator, among other important politicos. She wasn’t a real person, however.
Jones’s “face” and profile allegedly were created by Russian or Chinese spies using GAN technology. After the AP contacted LinkedIn about Ms. Jones, her social media account suddenly disappeared. “That’s probably because her fake account had attracted several real LinkedIn accounts,” Lyu says.
Since the story ran, LinkedIn has taken steps to educate users on how to tell a real profile photo from an AI-generated one. Not that the task is easy. Just take a peek at some of Generated Photos’ faces to see if you can tell the difference from the real deal (or click over to generate your own facial images). When it comes to spotting deception, “one thing to look for is the light reflection in the eyes,” Lyu says.
He explains that the cornea is a highly reflective hemisphere, and in a genuine photo both eyes reflect the same light source in nearly the same way. In many GAN images, the reflection in one eye differs subtly from the reflection in the other. To make the task easier, Lyu’s team of computer scientists at the university has developed a tool that examines these eyeball light reflections to automatically identify fake faces with 94% effectiveness.
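A stripped-down version of that reflection check can be sketched in a few lines. The code below is a simplified illustration of the idea, not Lyu’s published tool: it assumes the two eye crops have already been located and aligned (in practice by a facial-landmark detector), binarizes the brightest pixels as a stand-in for the corneal highlight, and compares the two masks with intersection-over-union, where consistent highlights score near 1.0 and mismatched ones near 0.

```python
# Toy corneal-highlight consistency check (illustrative, not Lyu's tool).
# Inputs are same-sized grayscale crops of the left and right eye.
import numpy as np

def highlight_map(eye_crop: np.ndarray) -> np.ndarray:
    """Binary mask of the brightest pixels, a crude proxy for the specular highlight."""
    return eye_crop >= 0.9 * eye_crop.max()

def highlight_iou(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Intersection-over-union of the two highlight masks (1.0 = identical)."""
    left, right = highlight_map(left_eye), highlight_map(right_eye)
    intersection = np.logical_and(left, right).sum()
    union = np.logical_or(left, right).sum()
    return float(intersection) / max(float(union), 1.0)

# Toy usage: matching highlights score 1.0; a displaced highlight scores 0.0.
eye = np.zeros((32, 32)); eye[10:13, 14:17] = 255        # highlight upper right
shifted = np.zeros((32, 32)); shifted[20:23, 4:7] = 255  # highlight lower left
print(highlight_iou(eye, eye.copy()))  # -> 1.0, consistent (real-photo-like)
print(highlight_iou(eye, shifted))     # -> 0.0, inconsistent (GAN-like tell)
```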
Other “tells” are the background imagery and hair. “Everyone’s hair is not perfectly continuous, with random hairs sticking out here and there. In GAN images, the hair is often too perfect, what we call ‘picture imperfect,’ pun intended,” he says.
The background of a facial photo, on the other hand, should be blurrier than the face itself. “The blurriness must reflect the distance between the person and the objects behind them,” Lyu says. “If the objects are 20 feet behind instead of 6 feet, the background should be blurrier. We’ve developed an algorithm that calculates the proper blurriness to discern authenticity or fakery.”
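A back-of-the-envelope version of that consistency check can compare a standard sharpness proxy, the variance of the image’s Laplacian, between a face crop and a background crop. The sketch below illustrates the idea under that assumption; it is not the algorithm Lyu’s team actually built. A background nearly as sharp as the face is the suspicious case.

```python
# Crude face-vs-background sharpness comparison (illustrative sketch only).
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness proxy: variance of a 4-neighbor Laplacian response."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def blur_ratio(face_crop: np.ndarray, background_crop: np.ndarray) -> float:
    """Background-to-face sharpness ratio; a shallow depth of field
    should push this well below 1.0."""
    return laplacian_variance(background_crop) / max(laplacian_variance(face_crop), 1e-9)

# Toy usage: the "blurred" background is the same texture lightly smoothed.
rng = np.random.default_rng(0)
face = rng.random((64, 64))
sharp_bg = rng.random((64, 64))
blurred_bg = (sharp_bg + np.roll(sharp_bg, 1, 0) + np.roll(sharp_bg, 1, 1)
              + np.roll(sharp_bg, (1, 1), (0, 1))) / 4  # cheap 2x2 box blur
print(blur_ratio(face, sharp_bg))    # near 1.0: suspiciously uniform sharpness
print(blur_ratio(face, blurred_bg))  # well below 1.0: plausible depth of field
```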
Lyu’s thrill of the hunt is clear. “It’s really a ‘cat-and-mouse’ game,” he says, smiling widely. “The better we get at creating fake faces, tweaking the models to develop more realistic synthesized images, the better we need to get at discovering the counterfeits. It keeps us engaged and busy.”
When asked to project the future of AI-generated human images, both Braun and Lyu say they wouldn’t be surprised if filmmakers used the GAN technology a decade from now to create movies that look like today’s blockbusters, albeit without actors (“And the Academy Award for Best Fake Actor goes to…”).
“We’re developing face-swap technology right now, where you take AI-generated faces and slap them on bodies that have already been filmed,” says Braun. “Given the learning capabilities of the GAN technology, it’s easy to see how this can morph into entirely AI-generated images 10 years from now.”
Not surprisingly, Lyu says the technology may be used to create a real-time video of a human being with an AI-generated face and an AI-synthesized voice, posing as someone else for criminal purposes. He offers a scenario involving a bank manager and an employee:
“The manager schedules an important Zoom conference with the employee, and sure enough, there she is, looking just like his boss. Unbeknownst to him, though, she was generated by AI algorithms. The employee says ‘Hi,’ and she says, ‘Listen, I don’t have time to chat, but I need you to immediately send $250,000 in a real-time payment from one of our account holders to this account at another bank, right now!’”
While plausible enough to envision, the hope is that 10 years from now the bank will have the tools to instantly flush out such deceptions. Thanks to machine learning, in this cat-and-mouse game of constant pursuits, captures and recurring escapes, tomorrow’s mousetrap will be better than today’s.
Pulitzer-nominated journalist and author Russ Banham writes frequently about cyber security.