False Faces
- Alex Zhang
- Jan 2
- 4 min read
I am an avid social media user, and I’ve experienced the often-cited “isolation” that social media can bring. The algorithms behind platforms like Instagram and TikTok are designed to surface whatever content keeps you engaged for as long as possible, and this creates several problems. First, time spent on social media is time spent alone, time that could have been spent socializing in person. Second, social media can make it difficult for people, especially the young and impressionable, to discern what is real, what is normal, and what is staged. With the rise of ever-better AI video and image generation models, social media can distance us from reality and from one another, defamiliarizing both (defamiliarization being the modernist concept of taking what was once familiar and warping it into something unfamiliar) by setting unrealistic expectations and spreading misinformation. Both issues are quite prevalent in my own life, especially since the release of OpenAI’s Sora 2 video generator. Social media thus tends to disconnect us from the rest of the world, contradicting the human tendency to seek connection with one another.

The project I created to address these issues was inspired by the contemporary work thispersondoesnotexist.com by software engineer Philip Wang: the deployment of an image generator trained on faces. The website defamiliarizes the real human face, and human perception itself, by using an AI model to generate hyperrealistic yet nonexistent people.
The contemporary work that inspired this project, thispersondoesnotexist.com, deploys an instance of StyleGAN2 trained on a large dataset of human faces. StyleGAN2 is a generative adversarial network (GAN), a class of models that excels at photorealistic image generation. On thispersondoesnotexist.com, the GAN generates a new realistic face every time the user refreshes the page. Wang’s platform raises awareness of the ever-increasing power of AI and challenges the long-standing assumption that images are proof of reality. The website defamiliarizes the very concept of an image: photos become the output of an algorithm rather than a snapshot of reality, raising questions about human perception and the future of belief, especially as AI-generated content meets social media.
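To make the mechanism concrete, here is a minimal sketch of how a pretrained StyleGAN2 generator turns a random latent vector into a face, in the spirit of what thispersondoesnotexist.com does on every refresh. It assumes the NVlabs stylegan2-ada-pytorch codebase (the `dnnlib` and `legacy` modules come from that repository), and the checkpoint filename is a placeholder.

```python
import torch
import PIL.Image
import dnnlib   # from the NVlabs stylegan2-ada-pytorch repository
import legacy   # ditto

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
network_pkl = 'ffhq.pkl'  # placeholder: a StyleGAN2 checkpoint pretrained on faces

# Load the exponential-moving-average generator from the pickle.
with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)

z = torch.randn([1, G.z_dim], device=device)      # a fresh random latent = a fresh "person"
label = torch.zeros([1, G.c_dim], device=device)  # face models are unconditional, so no class label
img = G(z, label, truncation_psi=0.7)             # psi < 1 trades diversity for realism

# Map the output from [-1, 1] to displayable 8-bit RGB and save.
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save('face.png')
```

Every page refresh amounts to drawing a new `z`; the “person” exists only as a point in the generator’s latent space.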
My creative piece is similar to Wang’s: I fine-tuned a StyleGAN2 model on a personally curated face dataset. However, instead of fine-tuning the GAN on realistic faces, I augmented every image in the dataset in ways that defamiliarized the face: random mirrorings and non-linear transformations that made each face “monster-like” (an illustrative sketch of such augmentations follows below). These augmentations represent how social media can distort our understanding of one another, isolating us with repetitive information. I compared the outputs of the model before fine-tuning, after training for 1 kimg (a unit denoting that the model has been shown 1,000 training images), and after 24 kimg. Before fine-tuning, the model generated normal, realistic faces, as seen in Figure 1. After 1 kimg, the colors started to shift and the texture of the skin began to redden, almost as if the face were being burned, as seen in Figure 2. By 24 kimg, as seen in Figure 3, the faces had become unrecognizable, with some generations reduced to oceans of skin tone; the model could no longer generate a functional face at all. This defamiliarization of the face extends the defamiliarizations of Wang and Picasso. Human faces are what we recognize most quickly and accurately, but under the repetition of distorted information on social media, represented here by the GAN’s training process, we can lose our perception of, empathy for, and affection toward those closest to us.
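The augmentations are described above only at a high level, so the sketch below is a hypothetical reconstruction: a coin-flip mirroring plus a sinusoidal displacement warp as one possible “non-linear transformation.” The amplitude and period values are assumptions chosen for illustration.

```python
import random
import numpy as np
from PIL import Image

def defamiliarize(img: Image.Image) -> Image.Image:
    """Apply a random mirroring and a non-linear warp to a face image."""
    if random.random() < 0.5:
        img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)  # random mirroring

    arr = np.asarray(img)
    h, w = arr.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # Non-linear transformation: displace each pixel along sine waves,
    # bending features without tearing the image apart.
    amp, period = 0.05 * w, 0.3 * h  # assumed warp strength and wavelength
    src_x = np.clip(xs + amp * np.sin(2 * np.pi * ys / period), 0, w - 1).astype(int)
    src_y = np.clip(ys + amp * np.sin(2 * np.pi * xs / period), 0, h - 1).astype(int)
    return Image.fromarray(arr[src_y, src_x])
```

Running every training image through a function like this before fine-tuning is what steers the GAN away from realistic faces and toward the distortions shown in Figures 2 and 3.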
Going through the process of creating this project has reinforced for me the power of information and the danger of its corruption. When information is weaponized, as with the propaganda in Orwell’s 1984, victims can have their entire perception of reality altered, and the corruption of this GAN model is evidence in miniature: a model that could previously generate flawless faces was broken by a few thousand distorted images. Working in the medium of machine learning has also made me realize that creative expression can come from anything, even cold, hard metal. Looking back, I find it somewhat ironic to have turned an AI, one of the biggest perpetrators of misinformation in modern media, into a victim of that same misinformation. As AI models grow more powerful and modern media expands, the line between what is real and what is generated will only continue to blur. Ultimately, this project has shown me that the early modernist concerns of Picasso and his contemporaries, identity and its fragmentation, have only intensified in the digital age. If we don’t learn to recognize when our sense of one another is being distorted, whether by media, repetition, or AI, we risk losing the empathy, authenticity, and love for each other that make us human.

Works Cited
Wang, Philip. thispersondoesnotexist.com. February 2019.
Orwell, George. Nineteen Eighty-Four. 1949. Penguin Classics, 2021.