Studio International

Published 30/09/2019

Trevor Paglen — interview: ‘Everything is surveillance software at this point’

As he opens a new project at the Barbican Centre, the artist and critical geographer explains online image sets, surveillance culture and the return of human classification

by JOE LLOYD

Few artists have scrutinised the mechanisms of political and corporate power as keenly as Trevor Paglen (b1974, Camp Springs, Maryland). The American artist, author and geographer’s myriad projects have included a comprehensive account of the CIA’s extraordinary rendition programme (2006), an unveiling of the patches worn by US Army Black Ops squads (2007) and a chronicling of classified military bases using high-powered telescopes (2012). Of late, he has trained his gaze on surveillance, going so far as to become a certified scuba diver in order to capture the underwater cables that enable the internet, which are tapped by the US National Security Agency.

The digital has increasingly figured in Paglen’s project of making the invisible visible and the visible invisible. The Autonomy Cube (2014), developed with Jacob Appelbaum, allowed users to connect to the internet via the anonymising software Tor, while the exhibition A Study of Invisible Images (2017), at New York’s Metro Pictures, concerned the way computers perceive and categorise pictures. From ‘Apple’ to ‘Anomaly’, a new installation at the Curve, Barbican Centre, London, extends this strand of Paglen’s work. It centres on ImageNet, a research project that has gathered a dataset of around 15 million images, sorted by human workers into more than 20,000 categories. ImageNet is used to train a range of AI software, including surveillance systems, forming the basis on which they judge new visual information. Some of these categories are apparently innocuous – “cloud”, “orchard”, “anchovy”. Others, which sort pictures of humans, contain value judgments: “traitor”, “schemer”, “first offender”, “alcoholic”. Paglen’s work makes this system perceptible, exposing material usually confined to computer programs.

For an exhibition concerned with the invisible operations of data, Paglen’s show at the Barbican is surprisingly analogue. It begins with a small-scale work, The Treachery of Object Recognition, which shows René Magritte’s painting Ceci n’est pas une Pomme (1964) subjected to ImageNet categorisation. Then, sprawling across the Curve’s mammoth wall like a patchwork quilt is the eponymous piece itself.

It includes 30,000 photographs from ImageNet, organised into 100 categories. Some tags sit alongside related terms – “porker” is adjacent to “ham and eggs”, which is adjacent to “abattoir” – while others show less overt connections. The subjects gradually wend their way towards humans, where the technical flaws in ImageNet become more evident. For “divorce lawyer”, for instance, a majority of images depict practitioners from the same firm, standing against a red background. More sinisterly, value judgments begin to creep in. Under “schemer”, we find the Dalai Lama, while “fucker” returns a sole shot of Osama bin Laden enmeshed in photographs of staged erotic poses. Pete Doherty and Amy Winehouse come up as “alcoholic”, and Donald Trump makes an appearance under “moneygrabber”, although in the years since ImageNet’s inception he would undoubtedly have found his way into other unsavoury categories.

Ahead of the exhibition’s opening, Paglen explained online classification systems, the omnipresence of surveillance, and his motivations as an artist.

Joe Lloyd: Much of your recent art has involved presenting that which is usually invisible to human eyes. How will From ‘Apple’ to ‘Anomaly’, your new piece at the Barbican, be physically manifested?

Trevor Paglen: It consists of thousands of images installed across the entirety of the Curve. The source material for the piece is ImageNet, a massive database of images and labels that are used in training artificial intelligence, particularly in object recognition. It begins with the concept of an apple, showing a lot of pictures of apples. Then it moves to different categories, such as “apple orchard”. Basically, over the course of the piece, the categories that you’re seeing become more relational and more historical.

You start to see the ways in which the relationship between images and labels is always historical and always cultural, but more so in some places than others. As you get towards the end of the piece, you’re looking at categories that are entirely relational, some of which have no relationship whatsoever to visuality. So there are categories such as “anomaly”. These are all categories that are used in machine learning. Part of the purpose of the piece is showing how suspect the project of machine learning in relation to images is in the first place, from an aesthetic or even a philosophical standpoint. And then showing the harm you can perpetuate by taking categories that are historical and relational – and often more about judgment than they are about description – and applying them to people.

JL: Before ImageNet Roulette [Paglen’s application that let you see how the program classifies an image of yourself], I hadn’t encountered ImageNet. And when I tried to access it, it was down for maintenance.

TP: It’s not surprising that you don’t know about ImageNet. It’s very much a thing in machine learning research, and nobody else knows about it. I have a big article coming out next week with a friend of mine named Kate Crawford, who is a critical AI researcher, about the politics of classification. We’d given a talk in Berlin where we pointed out some of the really bad politics baked into it. And then, about a week later, that website was under maintenance, which it hadn’t been for about 10 years. So we were a little bit suspicious of that.

JL: How does ImageNet group images into categories?

TP: The dataset was built by researchers at Stanford and at Princeton universities. And they constructed it using an old framework that was developed at Princeton in the 1980s, called WordNet. WordNet was an effort to categorise all the words in the English language by a series of nested categories. It was basically a big classification system. The concept of chair would be nested in the concept of furniture, and so on. It was a giant taxonomy of everything in the world.
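
For readers who want to see this nesting concretely, here is a minimal sketch – assuming Python with the NLTK library and its WordNet corpus, neither of which is named in the interview – that prints the chain of categories sitting above “chair”:

import nltk
nltk.download("wordnet", quiet=True)  # fetch the WordNet data on first run
from nltk.corpus import wordnet as wn

chair = wn.synset("chair.n.01")    # the everyday sense of "chair"
path = chair.hypernym_paths()[0]   # one chain from the root category down to "chair"
print(" > ".join(s.name().split(".")[0] for s in path))
# e.g. entity > physical_entity > object > ... > furniture > seat > chair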

JL: That sounds like some sort of early modern encyclopaedic system.

TP: It’s weird – it seems very medieval! Anyway, they removed the verbs and adjectives from WordNet and took just the nouns. I think their idea was that nouns were the things in the world for which a picture can exist. They took a subset of about 12,000 of those categories. Then they scraped the entire internet for images, in 2008 or 2009, and collected tens of millions.

Finally, they created a system using Amazon Mechanical Turk, a crowdsourcing platform for clickworking. About 25,000 people categorised all the images, and they ended up with a database of about 20,000 categories and about 15m images, all labelled and categorised. And within those, there are about 2,500 categories of people. The further you get into the classification of people, the more suspect it gets.

JL: It’s enormously sinister. It reminds me of the physiognomy used by 19th-century criminologists to determine a person’s character by their external appearance, or the phrenology [measurement of skulls] used to justify racism.

TP: That’s one of the things Kate and I talk about in the article, showing the connection to phrenology. The face of the criminal, the size of the cranium: these things that seemed to be part of history have very much come back.

JL: As well as its ethical flaws, ImageNet seems a fairly inaccurate system of classification. How often does contemporary surveillance software use it?

TP: It depends on what you mean by surveillance software, because everything is surveillance software at this point! ImageNet is used for a lot of different things. It is a kind of training set that you use when you want to test the efficiency of a system you’ve built against someone else’s. It’s a good benchmark because you can easily compare your systems to others. And because it’s so vast, it has provided the seeds for other classificatory models. You could use it as a baseline classifier; then, once it has classified your data, you could incorporate those results and build a more robust system on top of them. So ImageNet is often the first layer in a stack of models that are built on each other. I think that spread is very wide.
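
To make the stacking Paglen describes concrete, here is a minimal sketch – assuming PyTorch and torchvision, neither of which is named in the interview – in which a network pretrained on ImageNet is frozen and reused as the base of a new system, with only a small, purely hypothetical two-way classifier trained on top:

import torch
import torch.nn as nn
from torchvision import models

# Start from a network whose weights were learned on ImageNet.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the ImageNet-derived features so they are reused as-is...
for param in backbone.parameters():
    param.requires_grad = False

# ...and attach a new, trainable head (the two-way label here is illustrative only).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Only the new head's parameters are updated during training.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

In this pattern, whatever the pretrained layers have absorbed – including whatever is baked into ImageNet’s categories – is carried forward into the new system.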

JL: It seems to me that the images in systems such as ImageNet, which exist only digitally, are fundamentally different from visual images as we commonly understand them.

TP: We’re used to thinking of images as things that people look at, in the sphere of culture. Looking at ImageNet, you could more appropriately describe them as part of infrastructure.

JL: You say that all software is surveillance at this point. Do you draw a line between different processes of surveillance?

TP: I think, increasingly, even seemingly innocuous applications are gathering all types of data. Of course, there are people trying to build privacy-based projects, but they are very, very small exceptions to the greater rule. At this point in my own thinking, I don’t distinguish between state surveillance and corporate surveillance. They are profoundly linked with one another. Cops and intelligence agencies have access to any kind of data collection. The spheres are overlapping and interwoven. And huge amounts of data are collected – everything that you could imagine.

It’s affecting your life more and more. In the past, you could say, “I don’t like the idea that GCHQ is checking my emails”, but it wouldn’t seem to affect your everyday life unless it came to do so in a big way. But the collection of data by Google and Facebook is going to directly affect your life in ways that are more concrete. Your collected data, for example, influences your health insurance or credit rating. All the ways in which we are classified by companies are increasingly mediated by our data trails, which has a very real effect on our de facto liberties, our ability to utilise various institutions in society, and what it costs to use them.

JL: It seems we have entered a period where people tacitly accept that they are being constantly surveilled. Why do you think this is?

TP: The fact of the matter is that if you want to participate in society at this point you have to use a smartphone, you have to use these kinds of technologies. You don’t have a choice; I don’t have a choice. If I want to do my job, I have to use tools that are spying on me all the time. I think we sometimes rationalise it and say: “Oh, well, I’ve got nothing to hide.” We imagine that we have some free will here, but really we don’t. These are policies imposed on us by infrastructures that we really don’t have a choice about using.

I use programs such as Tor for, say, searches about health questions, because I don’t want there to be records of me querying data on health issues. And I think in general that’s a smart thing to do, but it’s not a strategy. It’s very much a small step, an edge case.

JL: Can you think of any positive applications of surveillance software?

TP: Nope! Some people are going to make a bunch of money, but I don’t think that’s necessarily a positive for anyone else.

JL: How have surveillance companies and government agencies reacted to your critique?

TP: For the most part, pretty positively! Ironically, some of the things that I am very critical about are things that people who work in these industries would also critique. It’s interesting that when you talk to people who really do use and understand these systems, they are often the most worried about them, but they’re not often in a position to change them.

JL: Your work is quite far removed from conventional artistic practice. How do you see your role as an artist?

TP: As an artist, your job is to learn to see. And seeing is always changing. Seeing is historical, seeing is cultural. And the nature of seeing in 2019 involves AI systems. It’s different from forms of mechanical seeing in the 1980s, which are themselves different from forms of mechanical seeing in the 1880s. There are threads that tie those things together, but those things are different. I think that seeing always has politics to it, has economies to it. I’m trying to learn how to see all of that at work. And I think artists have always tried to contribute to an understanding of how all those things inform the way we make sense of the world.

Trevor Paglen: From ‘Apple’ to ‘Anomaly’ is at the Barbican Centre, London, until 16 February 2020. Kate Crawford and Trevor Paglen: Training Humans is at the Fondazione Prada Osservatorio, Milan, until 24 February 2020.