In a groundbreaking study, NVIDIA releases examples of truly lifelike GAN-generated human faces

NVIDIA’s new GAN can create photos of imaginary human faces

We’ve already seen a plethora of deep fakes and AI-generated art in the last few years, and it was just over a month ago that China’s Xinhua News Agency announced that it had created an AI-generated news anchor. But AI has just upped its game once again. This December, NVIDIA released a paper titled “A Style-Based Generator Architecture for Generative Adversarial Networks,” in which they announce their success in creating high-quality images of AI-generated human faces.

Borrowing from style transfer techniques, NVIDIA’s architecture allows for more control than before over the different aspects of each image. In the case of human faces, each image is controlled at three levels of “style”: coarse styles such as pose or face shape; middle styles such as facial features; and fine styles such as color scheme. By mixing and matching these styles, the GAN can create endless realistic images of entirely imaginary people.
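The mixing idea can be sketched in a few lines of Python. This is a hypothetical illustration, not NVIDIA’s actual architecture: the layer counts and group boundaries are made up, but the principle is the same — each generator layer receives its own style vector, so the coarse layers can be fed one person’s style while the fine layers receive another’s.

```python
import numpy as np

# Illustrative layer groups (not NVIDIA's real layer counts).
N_LAYERS = 9          # toy number of generator layers
GROUPS = {
    "coarse": range(0, 3),  # pose, face shape
    "middle": range(3, 6),  # facial features
    "fine":   range(6, 9),  # color scheme
}

def mix_styles(w_a, w_b, take_from_b):
    """Per-layer style list: w_a everywhere, except the layer groups
    named in take_from_b, which are copied from w_b."""
    styles = [w_a] * N_LAYERS
    for name in take_from_b:
        for layer in GROUPS[name]:
            styles[layer] = w_b
    return styles

rng = np.random.default_rng(0)
w_a = rng.normal(size=512)  # style vector for "person A"
w_b = rng.normal(size=512)  # style vector for "person B"

# Keep A's pose and facial features, but borrow B's color scheme.
mixed = mix_styles(w_a, w_b, take_from_b=["fine"])
```

Feeding `mixed` into the generator layer by layer would yield a face with A’s coarse and middle traits but B’s coloring — the kind of fine-grained control the article describes.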

NVIDIA’s GAN still relies on traits from source photos

Perhaps a better term for the AI-generated faces is “customized” faces – all of the traits are drawn from the source photographs. But the fact remains that the generated faces depict people who do not exist in real life, and the images are of such high quality that most people can’t tell they aren’t photos of real people.

So far, the biggest giveaway of a GAN-generated human face is the hair, which is very difficult for an AI to render convincingly. GANs also tend to struggle with facial symmetry – generated photos might have two different-colored eyes or oddly shaped teeth – and they have a hard time generating text and numbers. However, these difficulties apply mostly to older GAN models, and as NVIDIA’s video shows, at first glance it’s impossible to tell that the faces aren’t real.


GANs have come a long way since 2014

GAN technology was introduced in 2014 and was originally only capable of generating blurry images. Since then, GANs have improved considerably. The basic premise is that two neural networks are pitted against each other: the first (the generator) produces images from random noise, while the second (the discriminator) judges whether each image looks real. The discriminator’s feedback drives the generator until it produces images that are satisfactorily lifelike.
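The adversarial game can be demonstrated on a toy problem. The sketch below is a minimal, assumed setup – nothing to do with NVIDIA’s model: the “images” are just numbers drawn from a target distribution, the generator is a linear map `G(z) = a*z + b`, and the discriminator is a logistic classifier `D(x) = sigmoid(w*x + c)`. Each step, the discriminator is nudged to score real samples high and fakes low, then the generator is nudged to fool it.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters: G(z) = a*z + b
w, c = 0.0, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(1000):
    real = rng.normal(3.0, 1.0, size=64)   # "real data": samples from N(3, 1)
    z = rng.normal(size=64)                # random noise fed to the generator
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    grad = (1 - d_fake) * w   # gradient of log D(fake) w.r.t. the fake sample
    a += lr * np.mean(grad * z)
    b += lr * np.mean(grad)

# After training, the generator's output distribution drifts toward the real one.
fake_mean = float(np.mean(a * rng.normal(size=1000) + b))
```

The same push-and-pull, scaled up from scalars to millions of pixels and network weights, is what lets modern GANs produce photorealistic faces.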

NVIDIA’s tool can play with everything from hair, skin color and facial features to accessories like hats or glasses. Aside from human faces, GANs have also been used to create everything from cats to cars to bedrooms. The possibilities are infinite.

Real-world consequences of GAN-generated images

GAN-generated images have implications for everybody from professional photographers and stock photo models to lovesick Tinder users. Imagine if all you had to do to get a high-quality photo was type some keywords into a machine. And imagine the advertising potential and privacy issues if bots start flooding Tinder, asking personal questions of unwitting users and collecting sensitive data before people realize they’re speaking to a bot.

The technology still consumes enormous amounts of computing power, and it isn’t realistic to expect it to be readily available in the near future. To create the images in their promotional video, NVIDIA’s team trained the network for a week on eight Tesla GPUs, using 70,000 photos from Flickr. But given the pace at which AI tends to advance, we can probably expect this technology to become an important part of our lives in the next few years.

What are the ethical implications of GAN-generated images in the media and on social networks?