Amy Gaeta reviews this fourth volume in Sternberg Press’s Research/Practice series, on the work of Trevor Paglen: The volume broadly aims to dispel the mystery behind genAI images – the notion that they just appear like ‘magic’, or, perhaps more dangerous, the notion that genAI images are representational of the real world.
11 March 2025
Generative AI makes me uncomfortable. I read some text churned out by ChatGPT or I look at yet another AI-generated image on my social media feed and something within me is uneasy. Perhaps I am disturbed by my awareness of the gross environmental impacts of generative AI, and that generating something as simple as an email seems wasteful. Or maybe my discomfort arises from the fact that many popular generative AI models rely on the exploited, arduous labour of people in the Global South. Yet the coloniality and unsustainability of AI provoke outrage in me more than discomfort. This feeling is something else: an unease with knowing that AI text or images are not quite human, not quite ‘real’. I want to dismiss the feeling as much as I want to dismiss the AI-generated content from my purview. This discomfort is rather important, however, and to banish it from my body would be to refuse to consider what I might learn about generative AI, the power of estrangement, and myself – as one of the millions of people forced to confront, and be subjected to, AI content, and more broadly machine vision, every day. To understand how machines are built to see the world is to understand how we are made to see the world.
I was reminded of my ambivalent affective response to generative AI upon reading the latest instalment in the Research/Practice series from Sternberg Press, Trevor Paglen: Adversarially Evolved Hallucinations, edited by Anthony Downey. This short volume features one essay from Downey, a conversation between Downey and Paglen, and a large selection of images from Paglen’s Adversarially Evolved Hallucinations series (2017–ongoing). Paglen’s series explores the outputs and datasets of a type of generative AI model called a generative adversarial network (GAN), with specific emphasis on hallucinations. The norm in critical discussions of genAI images is to treat hallucinations – genAI images that ‘go wrong’, or, rather, do not follow the text prompt – as errors and therefore as signs of the limitations of genAI models. Rather than cast off hallucinations as mere flukes, Downey and Paglen each argue, in different ways, for the need to think of hallucinations as central to genAI image models, for multiple reasons. For one, these errant images can act as openings into the ‘black boxes’ of AI. Moreover, these hallucinations can serve as ‘computationally generated disquieting allegories of our world’ (Downey, p 47).
The volume broadly aims to dispel the mystery behind genAI images – the notion that they just appear like ‘magic’, or, perhaps more dangerous, the notion that genAI images are representational of the real world. By exploring how machine learning is deployed as a way of knowing, Paglen and Downey both warn against promoting ‘AI as a heuristic device: capable, that is, of making sense of, if not predefining, how we perceive the world’ (Downey, p 49). This short yet dense volume invites a conversation about the importance of understanding the mechanisms behind hallucinations in order to highlight the inner workings of genAI, and in doing so it poses several questions about the future of the epistemological authority of images as truth-telling objects and the status of human optical sight in relation to machine vision.
Paglen focuses on the phenomenon of ‘machine realism’, where the creation of a training set for a genAI model involves people classifying and categorising a mass of images, typically in a reductive manner built on the ‘assumption that those categories, alongside the images contained in them, correspond to things out there in the world’ (Paglen, p 118). Machine realism is full of flattening reductions. This is where semiotic insights from visual culture and art theory come into their own, as each refuses any easy equation between an image of an object and the actual object. For Paglen, this is the hallucination – not the ‘incorrect’ image or ‘failed output’, but the fallacy of one-to-one correspondence upon which machine realism rests.
Trevor Paglen, Data Set: Omens and Portents, 2017, courtesy of Trevor Paglen Studio
Unlike the more reductive training sets used in genAI systems (ie a dataset of rainbows containing only realistic pictures of rainbows), Paglen’s corpora are organised more thematically, such as the Omens and Portents corpus, which contains the image category of rainbows alongside categories of comets, black cats and eclipses. Or consider another corpus, American Predators, which contains the image categories of Mark Zuckerberg, drones, wolves and Venus flytraps. By assembling such seemingly arbitrary image categories (why these images for this corpus and not others with similar denotations? what is the logic of this taxonomy?) into a given corpus, Paglen prompts us to ask just what is the ‘stuff’ that trains AI and may ultimately train our own eyes – a logic sketched in the toy example below. More importantly, Paglen begins to demystify the magic of genAI by so plainly explaining the process of creating the series at a computational level and by showing the actual image categories in some of the corpora.
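To make the taxonomic point concrete, here is a minimal sketch of how such a labelled corpus might be organised in code. It is purely illustrative: the data structure is an assumption of mine, drawn only from the category names given in the book, and bears no relation to Paglen’s actual data format.

```python
# Illustrative only: a toy sketch of a thematically organised training corpus,
# using category names from Paglen's series. The dictionary structure is an
# assumption, not Paglen's actual format.
corpora = {
    'Omens and Portents': ['rainbows', 'comets', 'black cats', 'eclipses'],
    'American Predators': ['Mark Zuckerberg', 'drones', 'wolves', 'Venus flytraps'],
}

# A conventional pipeline treats each label as if it corresponded to
# 'things out there in the world' -- the machine realism Paglen contests.
for corpus, categories in corpora.items():
    for label in categories:
        print(f'{corpus} -> {label}: images filed under this label train the model')
```

Even in so small a sketch, the arbitrariness of the taxonomy is visible: nothing in the structure itself justifies why these categories, and not others, belong together.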
Trevor Paglen, Rainbow (Corpus: Omens and Portents), 2017, courtesy of Trevor Paglen Studio
But what are the stakes of machine realism, and what can we learn from hallucinations? The use of ‘adversarially’ in the book title and in Paglen’s project refers, of course, to the use of GANs, the machine learning framework Paglen used to create the images in the series. Put quickly and roughly, GANs consist of two neural networks that work in opposition to each other to create images. The first is the generator, which creates a ‘fake’ image that looks like the ‘real’ thing; the second is the discriminator, which tries to determine whether the data presented by the generator is real or fake. In short, the generator is always trying to ‘fool’ the discriminator. Over time this adversarial process continues (they evolve, echoing the project’s title), meaning the generator creates ever more realistic images that are harder for the discriminator to detect as fake. Training a GAN ultimately yields a pair of artificial neural networks: one that generates images and one that learns to identify them.
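For readers who want to see this adversarial game at a computational level, the sketch below shows its bare logic. It is a minimal, hypothetical example in PyTorch, using toy two-dimensional data in place of images; the network sizes and training settings are illustrative assumptions and have nothing to do with Paglen’s actual models or corpora.

```python
# A minimal GAN sketch in PyTorch, for illustration only. Toy 2-D points
# stand in for 'real' images; all sizes and settings are assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2  # illustrative dimensions

# The generator maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# The discriminator scores a sample as real (1) or fake (0).
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0   # stand-in for real training images
    fake = G(torch.randn(64, latent_dim))    # generator's attempt at mimicry

    # Discriminator step: learn to tell real from fake.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As the loop iterates, the generator’s samples drift towards the ‘real’ distribution – the same escalating mimicry that, at vastly greater scale, produces both the convincing images and the hallucinations discussed here.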
‘Evolved’ also, of course, recalls evolution, thereby posing the question of what genAI means for humanity. This is partly what Downey asks in his chapter ‘The Return of the Uncanny: Artificial Intelligence and Estranged Futures’. Taking up the provocations of Paglen’s series, he asks: ‘If AI models of image production replace ocular-centric ways of seeing, do these models have the capacity to further estrange, if not profoundly alienate, us from the world and our responsibility for the potential impact of such images?’ (Downey, p 48). Responsibility here is key, as the word haunts discussions of AI ethics and AI regulation around the world. The more people are alienated from the processes behind AI, the more we run the risk of being unable to appropriately allocate, or even discuss, matters of accountability.
When ‘adversarial’ sits alongside ‘evolved’, another, related theme in Paglen and Downey’s shared remit emerges – the visual culture of empire, the state and martial power – as does the potential for emergent harm and social control. GenAI is not just a set of convenient or fun computational tools used to create images for everyday consumers. It has existing and prospective applications in critical sectors, including the military, security, healthcare and policing, to name a few. Paglen’s series and his emphasis on hallucination suggest that these hallucinations may give us a sense of the utter instability of the computational models increasingly embedded in the functioning of contemporary global infrastructures.
To understand the project and its potential, one must first grasp just how breathtakingly odd Paglen’s hallucinated images are in comparison with the usual characteristics of genAI images, especially images of human beings. Take Paglen’s A Pale and Puffy Face (Corpus: The Interpretation of Dreams) from 2018. This image was hallucinated by a genAI system trained on a dataset that Paglen organised – what he calls a ‘corpus’ – of images of pale and puffy faces. You can see part of the training data in the book, along with some of his other major corpora. The images vary in definition and features but do indeed all show what can be seen as pale and puffy faces. The image generated from the Interpretation of Dreams corpus is somewhat monstrous – a poorly defined, amorphous, white and reddish long blob against a black, cloudy background. Only because the title indicates a face am I able to make out the outline of a potential eye and mouth. GenAI images of faces and people are often defined by traits of idealised Western beauty: smooth, clear skin, sleek and skinny features, conventional cisgender presentations and, of course, whiteness. This norm has been frequently criticised for obvious reasons of racial, sexist and ableist bias.
Left: Trevor Paglen, Angel (Corpus: Spheres of Heaven), Adversarially Evolved Hallucination, 2017, dye sublimation print, 71 x 54.5 cm; right: A Pale and Puffy Face (Corpus: The Interpretation of Dreams), Adversarially Evolved Hallucination, 2018, dye sublimation print, 101.5 x 81 cm, courtesy of Trevor Paglen Studio
A Pale and Puffy Face is indeed uncanny; [1] it signals something like a typical genAI image, but not quite; something that may resemble a human, yet not quite. The uncanny is not just estrangement; it is also familiarity – a sense of home laced with a sense of alienation. Given this interplay between home and alienness, we must seek ways to think with the uncanniness of the image, to turn it into a site of knowing AI and its infrastructures differently. We should give just as much attention to genAI’s so-called failures, mess-ups and inconsistencies as to its apparent successes. This is perhaps one of the most potent and exciting claims of Paglen’s and Downey’s work in this volume.
Initiated in 2017, Paglen’s project predates the current wave of AI hype and widespread access to relatively high-quality AI tools. The series thus functions as a sort of instigator for a host of questions that are already here, allowing us to reflect more critically on the status of the image and the eye in an accelerating world of AI images. As popular genAI platforms move from being based on GANs to being based on diffusion models (which operate more quickly and produce sharper images), how can we use the hallucinations of these newer systems to work backwards to see how AI systems are trained to see? As genAI becomes a commodity tool central to many institutional operations, will its hallucinations – the system failing to operate as instructed – still signal estrangement, or will they instead spark frustration or anger? And what does it mean to resist the influence of genAI image machines on human perception, at both an individual and a collective level? As he has done differently across his career, Paglen shows here that artistic experimentation in the age of machine vision means not just using these computational tools, but finding out what these tools are, mean and do on one’s own terms.
[1] See Anthony Downey, ‘The Return of the Uncanny: Artificial Intelligence and Estranged Futures’, Visual Studies, published online October 2024
Trevor Paglen: Adversarially Evolved Hallucinations, edited by Anthony Downey, is no 4 in Sternberg Press’s Research/Practice series, 2024, 160 pages, 57 colour illustrations, ISBN 978-3-95679-583-1
Amy Gaeta is a Research Associate at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. She uses feminist theory and critical disability studies to analyse the emotional, aesthetic and political dimensions of human-tech relations, especially those concerning domesticated military technology. Gaeta asks how semi-autonomous technologies impact the formation of subjecthood and ideas of humanness. She earned her PhD in English and Visual Cultures at the University of Wisconsin-Madison. Her work has appeared in First Monday; the Journal of Visual Culture; Culture, Theory and Critique; and more. She is strongly committed to the aims of disability justice, many of which inform her work as a researcher, advocate and a poet.