Characteristics and Prevalence of Fake Social Media Profiles with AI-generated Faces
Abstract
Recent advancements in generative artificial intelligence (AI) have raised concerns about its potential to create convincing fake social media accounts, but empirical evidence is lacking. In this paper, we present a systematic analysis of Twitter (X) accounts using human faces generated by Generative Adversarial Networks (GANs) for their profile pictures. We present a dataset of 1,420 such accounts and show that they are used to spread scams, disseminate spam, and amplify coordinated messages, among other inauthentic activities. Leveraging a feature of GAN-generated faces—consistent eye placement—and supplementing it with human annotation, we devise an effective method for identifying GAN-generated profiles in the wild. Applying this method to a random sample of active Twitter users, we estimate a lower bound for the prevalence of profiles using GAN-generated faces between 0.021% and 0.044%—around 10,000 daily active accounts. These findings underscore the emerging threats posed by multimodal generative AI. We release the source code of our detection method and the data we collect to facilitate further investigation. Additionally, we provide practical heuristics to assist social media users in recognizing such accounts.
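The abstract's detection idea rests on a known artifact of GAN face generators trained on aligned datasets: the eyes land at nearly the same pixel positions in every image. A minimal sketch of that heuristic is below; the reference eye coordinates and the threshold are illustrative assumptions, not the paper's calibrated values, and the eye centers are presumed to come from an upstream facial-landmark detector.

```python
import math

# Hypothetical reference eye centers for GAN-aligned faces,
# as normalized (x, y) image coordinates. These values are
# illustrative placeholders, NOT the paper's calibrated constants.
REF_LEFT_EYE = (0.38, 0.48)
REF_RIGHT_EYE = (0.62, 0.48)

def eye_placement_score(left_eye, right_eye):
    """Mean distance of the detected eye centers from the reference
    positions; smaller scores mean placement more consistent with
    GAN alignment."""
    d_left = math.dist(left_eye, REF_LEFT_EYE)
    d_right = math.dist(right_eye, REF_RIGHT_EYE)
    return (d_left + d_right) / 2

def looks_gan_generated(left_eye, right_eye, threshold=0.02):
    """Flag a face whose eyes sit almost exactly at the reference
    spots (threshold chosen arbitrarily for illustration)."""
    return eye_placement_score(left_eye, right_eye) < threshold
```

For example, a face with eye centers right on the reference points would be flagged, while a photo with eyes well away from those positions would not. In practice the paper combines an automated signal like this with human annotation, since real portraits occasionally happen to be aligned the same way.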