Studio International

Published 21/05/2019

Casey Reas – interview: ‘There is an increased understanding that software is central to our lives’

Reas is known as the man who helped to create the open-source programming language Processing and brought coding within the grasp of visual artists. Here, he talks about how his work has changed over the course of his career and gives his views on the future of creativity and computers

by CAROLINE MENEZES

For the American artist Casey Reas (b1972), programming is a way of thinking. Alongside Ben Fry, he revolutionised the way a whole generation approaches coding by launching Processing in 2001. This computer language has become the leading open-source platform for all kinds of creative work. His role in creating Processing has made him well known worldwide. But before he was Reas the famous programmer, he was already an artist whose works helped to renew the potential of algorithms in art. Breaking away from the idea of the computer as a mere tool, his career has been based on the notion of coding as an artistic language able to bring about new and unexpected results.

Reas’ body of work consists mainly of series of pieces developed from different versions and applications of software that he creates. The long-running series Ultraconcentrated (2003-) and Atomism (2012-) are examples of this approach, often described as generative art. Their software produces abstract images that emerge from a set of instructions the artist gives to the computer.
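The principle can be illustrated in a few lines of Processing itself. The sketch below is a minimal, hypothetical example rather than Reas’ own software: a fixed set of instructions, combined with noise and randomness, accumulates into a different abstract image on every run.

void setup() {
  size(800, 450);
  background(255);
  noFill();
  stroke(0, 40);  // translucent strokes accumulate into texture
}

void draw() {
  // each frame, draw an ellipse whose position drifts along a noise field
  float x = noise(frameCount * 0.01) * width;
  float y = noise(frameCount * 0.01 + 100) * height;
  ellipse(x, y, random(10, 80), random(10, 80));
}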

[video3]

Ultraconcentrated, 2013. Diptych, custom software, two computers. Dimensions variable, unique, 1920 x 1080 pixels, each 56 x 100 in (142.2 x 254 cm).

The visual manifestation can take different forms: live projections, videos, animations, installations, prints, drawings or, alongside other inputs, real-time performative events. A 2013 version of Ultraconcentrated, for instance, was developed as an installation in which a software system captured television content and transformed it into a live projection of geometric pattern systems. Another version, from 2015, was a generative animation made by shuffling and distorting a collection of photos published in an issue of the New York Times.

Recently, Reas has added another layer of technology to his creative practice. He is making artwork with Generative Adversarial Networks (GANs), a new type of generative process that uses deep-learning algorithms, a branch of modern artificial intelligence (AI). The concept behind GANs is more complex than that of his previous work, in which he gave a set of rules to the software so that it executed an operation. In this case, the artist feeds the software with data, “teaching” it what he intends, and the algorithm then creates unique images drawing on what it has learned. His most recent exhibition, Compressed Cinema at DAM Gallery in Berlin, introduced this new set of artworks developed using AI to the public for the first time.

[image3]

In the following conversation, which began in Berlin and continued by email, Reas offers his view on the future of creativity and computers. I first ask him about the recent transformations in his practice and the new challenges he is pursuing. His experiments with AI have brought surprisingly visible references from the natural world into the artworks on display in Berlin. The Untitled set of c-prints (2019), which he describes as akin to “photographic images”, are more organic than the pixellated images he has previously produced. Although newly created with machine learning, they appear to be fragments of faded old pictures. He also presented Earthly Delight 1.1 and Earthly Delight 2.1 (2019), two animations running live on screens with custom software. Both resembled experiments with the delicate material of analogue film. Reas talks about the influences and ideas behind these latest creations and how these factors led him to explore the possibilities of GANs.

The artist, who is also a professor at the University of California, Los Angeles, explains how GANs differ from GOFAI (good old-fashioned artificial intelligence), which was used by earlier generations who dared to work with AI and the arts. He also speaks about the issues GANs are raising in the art field and their importance to contemporary artists, and about what he has witnessed over his 20-year career regarding the reception of computer art in museums and other art institutions. His trajectory started in 2000, when he began to show his artwork in celebrated places such as the Ars Electronica Center in Linz, Austria. A year after his official public debut, he participated in an exhibition at the Museum of Modern Art in New York. Over two decades, he has built up a long list of solo and group shows and has artworks in prominent collections around the world, including the Pompidou Centre, Paris, the Victoria & Albert Museum, London, and the San Francisco Museum of Modern Art, California.

[image4]

Caroline Menezes: How does the new work presented in your latest exhibition, Compressed Cinema at DAM Gallery in Berlin – Earthly Delight 1.1, Earthly Delight 2.1 and the c-prints Untitled Film Stills 2.1-2.24 – differ from a previous series of yours, such as Ultraconcentrated or Atomism? What are their creative and aesthetic challenges?

Casey Reas: The Earthly Delight software works share much in common with software in the Ultraconcentrated series, but I do think the work has changed significantly with the Untitled Film Stills. From my perspective, it was a natural shift that took place over many years, but almost everyone who sees the new work mentions how they feel it’s different. At first, I rejected the idea that the work was different, but now I think it’s clear that it is. Over 20 years, the work transitioned from purely generative and geometric systems, to working with sets of photos and videos within generative software, to this third phase. The work moved from references to drawing and painting more towards photography and cinema. It has shifted in focus from abstracted systems towards more subjective spaces. The Untitled Film Stills are highly personal in my own way. I’m pursuing a new idea of Compressed Cinema – creating hybrid forms of visual media in sequence with sound.

[video1]

Earthly Delight 1.1, 2019. Custom software (colour, silent), computer, screen or projector. Dimensions variable, horizontal. (Two-minute capture from generative software/video).

CM: Unlike what one imagines of an artist who normally uses coding as a creative language, you were not in front of a computer at the time of the inception of the artworks Earthly Delight 1.1 and Earthly Delight 2.1. They emerged from your stay at an artistic residency in Colorado in which the landscape ignited your process. Could you tell us more about this experience?

CR: A residency at Anderson Ranch [Arts Center] in Colorado had been organised for a long time. One month before I was scheduled to go, it clicked that the mountains where I would be staying were similar to those where Stan Brakhage lived much of his life. I had been rewatching a set of Brakhage’s films as video transfers after a recent 16mm screening in Los Angeles, and the idea emerged to spend the week interpreting his film The Garden of Earthly Delights as a digital video. This 35mm film was created entirely by pasting pieces of plants on to clear strips of film. It was an experimental film neither drawn nor photographed, but collaged. I saw this week in Colorado as an opportunity to focus closely on something that I was deeply curious about, and it was something very different from what I normally do. I created a collection of 90-second digital video loops that week by making scans of collages I created on a flatbed scanner. Each subsequent video in the series moved further away from the original source material.

[video2]

Earthly Delight 2.1, 2019. Custom software (colour, silent), computer, screen or projector. Dimensions variable, horizontal. (Two-minute capture from generative software/video).

CM: Could you give more details about this connection between the work of the experimental film-maker Stan Brakhage and Earthly Delight 1.1 and Earthly Delight 2.1? How is it possible to conceptually retrieve a medium – in this case, a non-narrative animation (Brakhage’s films) – that involves a certain “handicraft” technique (he painted on the celluloid, for example), and transpose it to a contemporary computer artwork made with AI?

CR: The software of Earthly Delight 1.1 and Earthly Delight 2.1 emerged out of the tests just mentioned. I was interested in the idea of the Brakhage films, of directly collaging natural materials to construct a film, and how this idea would translate from analogue film to digital video. I wanted to see how the media emerged differently when the core idea of The Garden of Earthly Delights was applied to digital video in 2019. Instead of working with clear film and an optical printer, I used a transparency scanner. I composed the video on the glass and scanned the material. This large scan was cut into small frames and assembled into a video.
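The slicing step Reas describes can be approximated in a few lines of Processing. The sketch below is a hypothetical illustration, not his actual tooling; the filename and frame dimensions are assumptions.

PImage scan;
int frameW = 1920, frameH = 1080;  // assumed frame size

void setup() {
  scan = loadImage("scan.png");  // assumed name for the large flatbed scan
  int n = 0;
  // walk across the scan, cutting one frame-sized tile at a time
  for (int y = 0; y + frameH <= scan.height; y += frameH) {
    for (int x = 0; x + frameW <= scan.width; x += frameW) {
      PImage frame = scan.get(x, y, frameW, frameH);
      frame.save("frame-" + nf(n++, 4) + ".png");
    }
  }
  exit();  // the numbered sequence can then be assembled into a video
}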

The Earthly Delight 1.1 and Earthly Delight 2.1 software in the show at DAM emerged about nine months later, after more tests. The frames in the original video experiments were used to train a machine-learning system called a generative adversarial network (GAN) to generate new images that felt similar to the frames of the original film, but are unique and have a different visual quality. I felt this shift toward creating a simulation of natural images was needed to move this work from a study or sketch to a finished work.

CM: What aspects of AI art are of interest to you? How can we see these aspects in your artistic production?

CR: This all started for me in the 1990s when I found cybernetics and artificial life. These ideas were the most engaging I had ever experienced. Making images and working with systems within the arts was the most natural thing to be doing, but it took time to find this path. I started to learn to write code in my mid-20s to begin to work with ideas from artificial-life research in a visual way. One clear example is the work I started to make with coded simulations of Braitenberg vehicles in the early 2000s through my Tissue and MicroImage prints. I wasn’t interested in AI as cognition at that time, and that is still true. Then, I was engaged with emergence, behaviour and abstracting systems. Now, I’m working with machine-learning systems as a way to generate images. In all of this work, I’ve wanted to get outside of myself and my own frames of reference to discover something I hadn’t experienced before.
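For readers unfamiliar with Braitenberg vehicles: they are thought experiments in which a few direct sensor-to-motor connections produce seemingly purposeful behaviour. The hypothetical Processing sketch below, a minimal illustration rather than Reas’ Tissue or MicroImage software, wires two light sensors to two wheels so that crossed connections steer the vehicle toward a stimulus.

float x, y, heading;               // vehicle position and direction
float lightX = 650, lightY = 120;  // the stimulus

void setup() {
  size(800, 450);
  x = width / 2;
  y = height / 2;
}

// a sensor reading that falls off with distance to the light
float sense(float sx, float sy) {
  return 1.0 / (1.0 + dist(sx, sy, lightX, lightY) * 0.01);
}

void draw() {
  background(255);
  // two sensors sit ahead of the body, angled left and right of the heading
  float leftS  = sense(x + cos(heading - 0.5) * 12, y + sin(heading - 0.5) * 12);
  float rightS = sense(x + cos(heading + 0.5) * 12, y + sin(heading + 0.5) * 12);
  // crossed wiring: each sensor drives the opposite wheel,
  // so the stronger signal turns the vehicle toward the light
  float leftWheel  = rightS * 2;
  float rightWheel = leftS * 2;
  heading += (leftWheel - rightWheel) * 2.0;
  x += cos(heading) * (leftWheel + rightWheel);
  y += sin(heading) * (leftWheel + rightWheel);
  fill(255, 200, 0);
  ellipse(lightX, lightY, 16, 16);  // light source
  fill(0);
  ellipse(x, y, 10, 10);            // vehicle
}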

[image5]

CM: The artistic use of AI has a long trajectory. Artists such as Harold Cohen built paths spanning decades using a symbolic AI (GOFAI) approach to creating expert systems with hard-coded rules to make art. Nowadays, several artists are using deep-learning networks, such as GANs, and big data to generate artworks. How do you perceive this transition from the symbolic AI to a deep-learning paradigm?

CR: I’ve been looking at Harold Cohen’s work since the 1990s, already a few decades after that work started, but that is when I began to engage with these areas. When I started drawing with code in the early 2000s, my focus was on artificial life (AL). My view of GOFAI is that it went through shifts and splits through the 80s and 90s with a new focus on behaviour rather than intelligence. I was heavily influenced by artificial-life research and I had little interest in the more traditional domains of AI. My primary references were reading [the Australian roboticist] Rodney Brooks and [the Italian neuroscientist and cyberneticist] Valentino Braitenberg and looking at how these ideas were being applied within robotics. This was fuelled by my coursework at MIT. I see the push toward GANs as being similar to the move many artists made toward AL in the 90s. I think GANs have more potential within the visual arts than AL did. In my way of thinking about them, GANs are pattern machines. I don’t think there’s a pretence of intelligence. I imagine them as a highly specialised organ that is the opposite of so-called general intelligence.

[image8]

CM: There is a continuing discussion about the ownership of artworks made with deep-learning technologies. Some people argue that the creative part of the process is carried out by the models devised by engineers, and not by the artists, who usually train the pre-made networks with a dataset of images. There is also criticism that technologies such as GANs are “merely” emulative: conditioned by the initially supplied dataset, they cannot generate any new information. What is your opinion about this debate in terms of aesthetic decisions and from a creative perspective?

CR: These claims don’t match my experience. I’ve trained dozens and dozens of models on custom data sets over the last year and a half and I’ve experienced images generated from the models that have no clear relationship to the training images. For me, this is the primary excitement and reason to be working with GANs. They assist with creating unexpected images, unlike any that have been created before. They can be unlike photographs and paintings – they are truly something new. If a GAN is trained on a narrow range of homogeneous images, it’s true that what comes out is mundane and can’t be distinguished from the training data in an engaging way.

However, there’s a balance that can be reached where the training data is diverse enough to pull out unexpected patterns, but not so diverse that the system only produces noise. The model can be pushed and pulled in any direction based on the curation of the training images. As to the claim that the true creator of an image created with a GAN is the architect of the model, I feel the primary work done to define GANs is extraordinary and creative. The new ideas developed by Ian Goodfellow et al, and released through the paper Generative Adversarial Networks, are essential for all artists working with GANs. However, I don’t feel this visionary work is relevant to the question of authorship of an image created by an artist. I think of a GAN model as a complex camera. Like a camera, a GAN is an apparatus that can be used by an artist to make pictures. The quality of the image that is created with the apparatus has everything to do with how the artist uses it and little to do with the machine itself.

[image6]

CM: Almost 60 years after the first artworks were made with computers, it is still unusual to see computer or algorithmic art as part of a large contemporary art exhibition (such as the Venice Biennale or Documenta). You have exhibited in great art institutions, such as the Pompidou Centre, the San Francisco Museum of Modern Art and the Victoria and Albert Museum. Do you think that there is a division between “mainstream/traditional” and “computer/algorithmic” art? Is there any contrast between the exhibition of your work at long-established art institutions and spaces dedicated to art and technology?

CR: I think you’re correct: it is still rare to see this kind of work in the larger art sphere, but less rare than it was a decade ago. More and more artists are creating work with technology outside of spaces dedicated to art and technology. I don’t feel this lack of access to institutional spaces is unique to computational work – forms such as sound art, performance and video/film are also limited within many established institutions that focus on contemporary art. I know most about what is happening in the United States, and here I’ve seen institutions such as the Whitney Museum of American Art and MoMA set up acquisition committees for media-oriented work. There’s more energy now around conservation of digital media than ever before, and developing conservation techniques for this kind of work is an essential step needed for collections to invest resources in acquisition.

The larger institutions I’ve worked with are divided into different departments: painting, photography, etc. When I’ve shown at the Pompidou Centre and the San Francisco Museum of Modern Art, it was in exhibitions initiated by the design curators, which is not the context that I see my work existing within. The recent Programmed: Rules, Codes, and Choreographies in Art, 1965-2018 show at the Whitney was important for me because my work was shown adjacent to the work of Sol LeWitt, Charles Gaines and others working with systems and drawings. These are the primary influences for my earlier work. A diptych from my Software Structures (2004) was projected large on a wall next to a LeWitt wall drawing.

[image7]

CM: Can you feel a difference from when you began working with computer art and nowadays in terms of the public’s reception?

CR: Yes, I think there have been tremendous changes in how schools, museums, galleries, and the public engage with work created with software. People think differently about software now because it has become essential. When I was in college, all photography was analogue and now people carry small, high-quality digital cameras in their pockets. This was also before the first graphical web browser was released in 1993. As our culture has become more dependent on software for work and play, there has been an increased understanding that software, and discourse around software, is central to our lives.

CM: Creative use of computers is now ubiquitous, and we can say that you made a significant contribution to that, as Processing, the open-source computer language you initiated with Ben Fry in 2001, is taught everywhere at different levels. Many people have their first contact with the creative use of computers or even with coding using Processing. What is your vision concerning the future of creativity and computers? What is the role of AI in this future?

CR: I see many possible futures and some might exist at the same time, but I’m not a futurist. I find truth in the oft-quoted William Gibson quip: “The future is already here – it's just not very evenly distributed.” We started Processing to disseminate ideas about creativity and computers that had developed at MIT. We wanted to push those ideas out of that institution and into the larger world. We did our best to bring ideas and information that were specialised and concentrated at the time into new spaces. The new vision is one shared by additional close collaborators at the Processing Foundation, including Lauren McCarthy, Dan Shiffman, Dorothy Santos, Saber Khan, and Johanna Hedva. We think the next threshold is increased access for new communities that haven’t participated in coding before. Through fellowships and new educational initiatives in high schools, we hope to empower people from wider backgrounds and communities to work with code in the arts and to shift the larger culture around coding. We believe software, and the tools to learn it, should be accessible to everyone.