An exhibition at San Francisco's De Young Museum features artists working with the mechanisms and meanings of AI and machine learning, and the implications for new ways of thinking, creative practice and being human – reviewed here by Frances DeVuono
image: Lynn Hershman Leeson, Shadowstalker, 2019, video still/detail
11 May 2021
‘Uncanny Valley: Being Human in the Age of AI’, De Young Museum, San Francisco,
22 February 2020 – 27 June 2021
‘Uncanny Valley: Being Human in the Age of AI’ at the De Young Museum in San Francisco is a timely show. Fourteen artists or art groups have made works that address how artificial intelligence (AI) is applied to nearly all the habits of daily life in the Western world. It is timely because we are at a moment when journalists, scholars and legislators in the United States, and elsewhere, are scrambling to grapple with the implications of our digitally connected world and the power of the companies that own its platforms.
The term ‘uncanny valley’ is attributed to Masahiro Mori, a robotics professor who used the phrase in the 1970s to describe the psychological zone in which humans become uncomfortable with robots and artificial intelligence as they approach the human, whether visually or behaviourally. The sweet spot of acceptance rests on the idea that when humans are consciously aware of AI, it should be plainly artificial (think of Siri’s voice on an iPhone), or it should be so utterly concealed from view that we are unaware of its existence (think of the times you have been fooled by a chatbot when complaining about a product online). That our tolerance for AI is shifting, and differs across generations, is a given.
The works in ‘Uncanny Valley’ span media and issues from labour to social networking, surveillance to machine learning, human collaborations with AI to the role it plays in exporting Western values, and more. As visual art about the digital world, there is a certain quotient of beauty, fun and fantasy to be found in this exhibition, but it is largely serious. Curator Claudia Schmuckli’s excellent catalogue essay sets the tone. She explains how AI ‘learns’ from itself, describes what AI algorithms are and cautions that these technologies are optimised toward certain solutions and are therefore necessarily incomplete – and possibly wrong. Geared as the exhibition is toward those who go to art museums, I initially wondered how ‘Uncanny Valley’ might contribute to current debates around the powerful companies (those the computer scientist and critic Jaron Lanier labels the ‘Three Sirens’) such as Facebook, Amazon and Google, which accrue most of their AI data from us for free. After several visits to the exhibition, it is, in my view, a good start.
One of the first installations is The Doors by Zach Blas. Accompanied by sounds from the eponymous band, it is a caricature of male Silicon Valley culture. The Doors is a complex, multimedia installation consisting of six large monitors arranged in a circle, dotted with plastic plants and bathed in green light. What looks like a modest Ikea shelving unit sits in the centre of the room. Its shelves are filled with vials of the various drugs popularly believed within the tech community to improve cognitive function, ranging from herbs to pharmaceuticals to LSD. The Doors is a comic, if troubling, visualisation of high tech’s neoliberalism and hubris, mixed as it is with nostalgia for 1960s countercultural ideas, from health food to psychedelics, and set alongside its embrace of libertarianism and monopoly capital.
Installation view of Zach Blas’s The Doors, 2019, in the De Young Museum’s ‘Uncanny Valley’ exhibition, photography by Gary Sexton
Naturally, the exhibition makes use of interactive media, with the assumption that viewers will have a smartphone on which they can download apps to play Ian Cheng’s game in BOB (Bag of Beliefs), and/or ask Martine Syms’s appealingly frank avatar personal questions in Mythiccbeing. Even Simon Denny’s careful examination of Amazon’s draconian labour practices offers a glimpse of digital phantasmagoria through augmented reality (AR). Amazon worker cage patent drawing as virtual King Island Thornbill cage is, as the title indicates, a replica of a cage Amazon designed to control workers’ production. Direct a smartphone at the cage and an AR image of the endangered thornbill appears within it. This may be an unnecessarily mixed metaphor: given the company’s much-reported on-the-job injuries, its notorious six-minute allocated bathroom breaks and its 100% turnover rate, Denny could simply have used an Amazon worker in lieu of the bird.
AIDOL, by Lawrence Lek, an animated film in English and Mandarin, is a respite from the real. A futuristic fable about an ageing diva and an AI weather satellite that aspires to become an artist, it is, visually and aurally, a wonderful reminder of how technology gives pleasure: AIDOL is laced with the kind of high-resolution imagery that only started appearing in games and films less than a decade ago. An installation by Christopher Kulendran Thomas, with Annika Kuhlmann, titled Being Human and combining a film with pieces of mid-twentieth-century modernist art, is less clear cut. The film is underpinned by the story of Kulendran Thomas’s return to Eelam, Sri Lanka, the home of his activist uncle and the proposed site of an independent state for Tamils. It touches on Western philosophies of humanism, digital ‘deepfake’ pop stars, and the culturally homogenising power of art and money. The actual sculptures and paintings dotted in front of and behind the projection screen reify the latter.
Installation view of Christopher Kulendran Thomas’s Being Human, 2019, in collaboration with Annika Kuhlmann, courtesy of the artists and the De Young Museum
In keeping with its broad reach, the art in ‘Uncanny Valley’ addresses AI’s social applications, documents its history and/or uses visual metaphors to describe its reach. Hito Steyerl’s The City of Broken Windows, an installation about community and municipal policing, might fall into the first category. Trevor Paglen’s They Took the Faces from the Accused and the Dead fills an enormous wall with over a thousand prints set in a grid. Each was taken from databases of the criminally accused and the dead that were used in early attempts to develop facial-recognition technology. Now, of course, our own social media posts fulfil that function. The Zairja Collective’s collages, which look like brilliantly coloured abstractions, use images of mining pits as a metaphor for data pools. In a similar fashion, Pierre Huyghe’s Exomind (Deep Water), a crouched bronze figure with an actual beehive as its head (placed outside, in the De Young’s grounds), becomes a more poetic analogy, suggesting that the processes of artificial intelligence are modelled after nature. At first glance, Agnieszka Kurant’s A.A.I. sculptures, made from the activities of termite colonies, might seem similar, but the title refers to ‘artificial artificial intelligence’, the use of poorly paid, crowd-sourced, online human workers through services such as Amazon’s MTurk – part of an increasing labour pool for tasks that, as yet, require low-level but human implementation.
Trevor Paglen, They Took the Faces from the Accused and the Dead, 2019, detail, courtesy of the artist
Triple-Chaser by Forensic Architecture was first shown at the 2019 Whitney Biennial. The film was instrumental in exposing Warren B Kanders, then vice chair of the Whitney Museum Board, as the CEO of the Safariland Group, the manufacturer of Triple-Chaser, a tear gas grenade used against civilian protesters in countries including the USA, Israel and Turkey. The film additionally shows how Forensic Architecture applied AI, or ‘machine learning’, to make that identification. They uploaded a few images of the canisters, letting AI recreate those images by the thousands and millions to generate new data sets that could then be used to identify the grenade’s actual use in photos taken at demonstrations worldwide. That this machine learning correctly identified Triple-Chaser use might be seen as a social victory, but in this exhibition the group also touches on the potential for abuse in that very same technology.
Lynn Hershman Leeson’s installation gives a more tangible demonstration of how that abuse affects us. The first thing one sees upon entering Leeson’s largely darkened space is a computer interface where viewers are encouraged to input their email addresses. I did, and immediately regretted it. Faster than a Google search and without any sorting, multiple lines about my personal life were projected onto the wall. After demonstrating how much of our lives is digitally archived, her accompanying video, Shadowstalker, theatrically describes how these archives feed the machine learning now being applied to predict behaviours by police departments and educational institutions across the US. Schmuckli’s catalogue essay alludes to this when she points out that ‘AI learns from the data traces of people’s engagement with social media platforms, e-commerce sites or search engines’. But it is worth remembering that AI is no wiser than the humans whose data it uses, and is susceptible to reflecting human biases. Later in her catalogue essay, Schmuckli points this out by explaining that ‘actual algorithmic learning is a process based on trial and error, interpolation and extrapolation, information compression and information loss, and is thus fundamentally heuristic’. Digital prediction, commonly called ‘predictive analytics’, and its potential for gender, ethnic, racial and class bias became more publicly visible with Dr Timnit Gebru’s dismissal from Google last December. Gebru, one of the very few African American women in high tech, was originally hired by Google to research ethical artificial intelligence, including language and racial and gender disparities in facial-analysis technologies. Apparently she excelled at it, but when she submitted her report, which included ‘a research paper critical of (profitable) large-scale AI systems’, Google asked her to leave.
Conversations with Bina48 may be the most curious piece within this large exhibition, but the images of it linger. As poetic as Huyghe’s Exomind (Deep Water) or Lek’s AIDOL, and with the autobiographical qualities of Syms’s Mythiccbeing, Conversations with Bina48 consists of four videos in which the artist, Stephanie Dinkins, is seen quietly talking to a brown-skinned bust of a chatbot. Both figures are shown from the shoulders up, wearing white t-shirts and colourful neck scarves, one tangibly human and the other patently not. Because chatbots respond to verbal or aural cues based on their existing data set, the conversations seem awkward or superficial, which makes some of the backstory relevant.
Video still from Stephanie Dinkins, Conversations with Bina48, 2014–present, courtesy of the artist
Bina48 (Breakthrough Intelligence via Neural Architecture 48) is an actual robot commissioned by SiriusXM founder and biopharmaceutical entrepreneur Martine Rothblatt, who modelled and named it after her wife, Bina. The Rothblatts believe in cryonics and are dedicated to the possibility of cloning minds, making digital copies that can be preserved outside the body. This may, like the fascination of other wealthy entrepreneurs such as Jeff Bezos and Elon Musk with futuristic exploration, seem hubristic, but watching Dinkins gently probe Bina48 on issues of emotion or identity suggests something else. It looks like Dinkins is patiently waiting. The catalogue reinforces that idea when it states that ‘although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black identity or with knowledge of Black history’. It goes on to state that ‘Dinkins’ work situates this omission amid the larger tech industry’s lack of diversity…’. What we see are videos of Dinkins’s carefully modulated intelligence engaging with this less than fully formed symbol of the future. It appears as if the artist, as both a woman and an African American, is waiting for technology to catch up and address her as a human being. The entire exhibition suggests that we all are.
 Amazon (owned by Jeff Bezos, whose reported net worth is between US$115 and 187 billion) has long been criticised for its labour practices, and at the time of this writing Amazon workers in Alabama had just voted on unionisation. See David Weigel, ‘The Trailer: Amazon workers in Alabama just voted on a union. What’s next?’, The Washington Post, 30 March 2021, accessed 30 March 2021
 The term MTurk comes from the robotic chess player of the late eighteenth century called The Mechanical Turk, later revealed to have a human inside. On hidden, online human labour and its implications, see Alana Semuels, ‘The Internet Is Enabling a New Kind of Poorly Paid Hell’, The Atlantic, 23 January 2018, accessed 01 April 2021
 See Claudia Schmuckli, ‘Automatic Writing and Statistical Montage’, Beyond the Uncanny Valley: Being Human in the Age of AI, Fine Arts Museums of San Francisco, distributed by Cameron Books, 2020, p 8
 Ibid, p 12
 See Martine Rothblatt, interviewed by Katherine Klein, a University of Pennsylvania professor, in an edited transcript: ‘Digital Immortality and the Future of Humanity’, 03 December 2015, accessed 30 March 2021
 Janna Keegan and Claudia Schmuckli, ‘Conversations with Bina’, Beyond the Uncanny Valley: Being Human in the Age of AI, Fine Arts Museums of San Francisco, distributed by Cameron Books, 2020, p 37
Frances DeVuono is an art writer, artist and former Associate Professor of Art at the University of Colorado Denver. She was a Contributing Editor for Artweek, and her reviews and articles have appeared in magazines such as Art in America, Arts, Art Papers, Sculpture Magazine and New Art Examiner, among others. She currently lives in Berkeley, California.