Martine Syms and Gina Trapani at the 7th edition of Seven on Seven. Photo: Madison McGaw/BFA
On Saturday, the seventh edition of Rhizome's Seven on Seven conference took place at the New Museum. For the conference, Rhizome pairs seven artists with seven technologists and gives them a simple assignment: make something in twenty-four hours, and present the results the following day at a public conference. In the past, participants have launched artworks and startup companies; the potential risk and reward involved in these encounters between leading figures in distinct but overlapping fields is what lends Seven on Seven its particular drama. Whatever the results, the conference is always a fascinating look at the process of collaboration and a snapshot of contemporary concerns in the discourse around art and technology. The theme of Empathy & Disgust ran throughout this edition; as a point of departure, it gave participants a framework around which to structure their collaborations. In particular, it seemed to point many of them toward the problem of having to relate to the computer, whether it treats us as data points or patients, consumers or targets. Without further ado, here are the big ideas to emerge from this edition!
Kate Crawford interviews Laura Poitras. Photo: Madison McGaw/BFA.
Kate Crawford began her conversation with Laura Poitras by asking about the "cultural and artistic moments" around mass surveillance. It was an apt starting point, because in the intense public discussions that surrounded the Snowden leaks, the importance of Poitras' artistic practice was easy to overlook. But in addition to transforming public awareness, Poitras' film was also an important document of the experience of standing up to the state and living under constant, invasive surveillance. There is a need for such stories; for her part, Poitras cited 1984 as a source of inspiration, and described the difficulties of trying to make art under constant watch: the arduous logistics of maintaining secrecy, the problem of trying to record one's thoughts in full view of a hostile government, the symptoms of stress that manifest under surveillance. She maintained that artists should be given wide latitude in their approach to questions of surveillance, even if their work sometimes raises ethical questions. But while she acknowledged the validity of a range of artistic positions on surveillance (such as that of Sophie Calle), her own position was clear: "Surveillance is a kind of power, and counter-surveillance is an act of resistance."
Stanya Kahn and Rus Yusupov give their presentation. Photo: Madison McGaw/BFA.
"Messaging is an intimate one-on-one way of connecting, but it also implies a distance," Stanya Kahn said, explaining why the video she made with Rus Yusupov took the form of a dialogue. The philosophical differences between the two marked their collaboration: Kahn's video work has many characteristics that would play well on Vine—weird characters, penis costumes, body horror—but she sees political potential in art. Yusupov, for his part, believes that "the future of art is bliss," not politics. But despite their differences, Kahn and Yusupov stuck with the process. They found the most common ground in the idea that constraints are beneficial to the creative process. This idea can be seen in the design of Vine, which gives users constraints such as six seconds and a square frame; Kahn also uses self-imposed constraints as a way to shape her videos. The two worked on ways to offer these additional constraints to Vine users, such as: "BLINDFOLD / RUNNING SHOT / AT LEAST 2 GIFS / NIGHT." But in the end, their contribution will be remembered for the humor they were able to find in their situation as collaborators with diverging philosophies.
Or perhaps it will be for Yusupov's transformation from feelings of disgust to empathy. Initially, he was disgusted at the unfairness of a process that allowed Ai Weiwei and Jacob Appelbaum to spend two days together with their own filmmaker (something none of the other participants were granted). He embarked on a secret mission, asking Rhizome staff and journalist Kashmir Hill what had been produced in Beijing, covertly filming all the while. He and Kahn discussed the idea of leaking Ai and Appelbaum's project as their presentation. Ultimately, however, Yusupov began to feel more empathy for the heavily surveilled duo, and although he did learn what Ai and Appelbaum made, he and Kahn (spoiler alert) decided to keep mum. Their video is below:
Liam Gillick and Nate Silver give their presentation. Photo: Madison McGaw/BFA.
Artist Gillick came into the collaboration with statistician and data journalist Silver thinking about the concept of what he called "phantom data"—the unquantifiable or unknowable. Focusing on what data can't tell us might seem to pose a challenge to the statistician's worldview, but Silver had his own term ready for this phenomenon: "dark corners." Finding common ground in skepticism, but approaching it with very different vocabularies, the two took one of the questions most commonly addressed using statistical analysis—how to reduce risk—and inverted it, asking, "how can we guarantee risk?" Applying this question to Seven on Seven itself and the process of creativity and innovation it demanded, Silver observed that our understanding of innovation suffers from "sample bias": we have a distorted perception of the success rate of new ideas because only the successful ones are discussed. Failure in creative production and innovation represents "phantom data" or a "dark corner." Looking for a data set that could fill in this gap, Silver and Gillick delved into the database of failed trademarks: 4.5 million company names that represent ideas that didn't quite play out, from Krautsource to Porna. They curated a selection of the names and opened their presentation with this surreal found text, set to music, which offered a glimpse of the little-shared stories of failure in creativity and innovation.
Many of the presentations at Seven on Seven were characterized by an interest in the problem of human-computer relationships. Henrot and Holmes, instead of thinking of the computer as an artificial human, looked to other non-human categories to help understand our relationship with it. Drawing on the writing of Peter Sloterdijk, they proposed thinking of the computer as a kind of god or supernatural presence. Their project drew inspiration from the I Ching; as Henrot observed, this divinatory system predates any written record of civilization: "The desire to know the future comes before the recording of what's happened. And that's why we directed ourselves to this idea instead of the idea of archiving or what happened yesterday." The two developed a software tool that takes a reading of a user's desktop in answer to specific questions, responding with a mixture of imagery and remixed text drawn from poets such as Dickinson (WWII codes, they pointed out, were based on poetry). While the methodology behind their application was charmingly dubious, it was complex enough to distract an anxious mind from obsession with a specific problem, which they argued was a key function of any divinatory system. And there is something to the idea that the contents of our desktop reveal something about ourselves that we may not even see: "The personal computer is a space that represents you somehow, or a moment of your being...The desktop is where things rest when they haven't been processed yet. In that way, it's similar to a dream."
Paglen and Krieger identified a shared interest in machine vision early in the work day. Their first idea was to develop tools to fool machine vision algorithms—to “cloak” images so that humans could understand them, and machines could not. Imagine a "goldfish" filter on Instagram that could make any image look like a goldfish to an algorithm. This specific idea didn't quite work out, but spoofing machine vision in general turned out to be extremely easy to do, so the pair pulled in a third collaborator to deepen their research: artist and technologist Adam Harvey, who has done extensive research into machine vision and cloaking. Working together, the three decided to ask a different question: how can we teach ourselves to see like machines? They selected a range of images that have unique cultural importance, and analyzed them through various machine vision algorithms: edge detection, nudity detection, facial recognition. The results, presented as a slideshow, serve as a snapshot of how vision is carried out at this historical moment, making visible some of the ideologies behind algorithms that are often seen as neutral or objective: ethics and historical context are stripped; non-white skin is overlooked; everything is reduced to faces, bodies, and objects. And the Mona Lisa is interpreted as: "a beautiful young blonde in a black dress taking a selfie."
Syms and Trapani’s contribution took the form of a quiz. What is your mother-in-law’s maiden name? Who was your favorite uncle? These questions appear on a black screen; after you have filled in many answers, a result is generated. The questions it asks may seem familiar; they are all drawn from security questions asked during the sign-up process on a range of websites. Like the text used by Gillick and Silver, this text is both an index of this moment in capitalism and a snapshot of human experience. It’s an index of capitalism in that these questions exist because websites do not want to pay for customer service representatives to talk users through the process of resetting their password, and it’s a snapshot of human experience in that so much of our experience on the web is about being asked questions. (“What’s happening?” asks Twitter. "What 90s girl group are you?" asks Buzzfeed.) Why this eagerness to answer questions, even when they are posed by a bot? For Syms and Trapani, we are always looking for the same thing, whether we are trying to understand our deep connection with Bikini Kill or telling our feed about our train delay. Thus, the result of their quiz is always the same: “You are connected to something larger than yourself.”
Heather Corcoran, Laura Poitras, Kashmir Hill, Kate Crawford. Photo: Madison McGaw/BFA.
The quote is actually from Appelbaum's Twitter feed, but it sums up his collaboration with artist Ai Weiwei very well. Another quote that might have served, if it weren't so US-centric: "F*CK THE NSA," from Appelbaum's T-shirt. As Appelbaum isn't able to travel to the US, and Ai isn't able to leave China, the two met at Ai's Beijing studio, along with filmmaker Laura Poitras, who documented the experience. The story of the meeting of the "global dissident elite" was trenchantly covered by Kashmir Hill in a must-read story at fusion.net. The project, in the end, took the form of eight toy pandas from which the stuffing had been removed and replaced with shredded Snowden documents and an SD card (contents unknown, supplied by Appelbaum). The project was called "Panda to Panda," and Poitras' film on the collaboration, entitled "Surveillance Machine," will be online in the coming weeks.
Hannah Black read unremittingly into a microphone. A Mac running Apple Translate tried to keep up; when Black paused, a Python script would attempt to interpret her words back to her. "Enemy" became "Miami," and at one point it referred to the "extinct (sic) Algonquian (sic) people"; Thrice looked horrified that their creation would say such a thing, and Black accused the bot of a colonialist slur. Overall, the performance focused on the question of what kinds of human labor can be mechanized, or are valued enough to mechanize, and dramatized the bot's failure to perform the "affective" labor of understanding and care. But the presentation was most memorable for the specific observations that emerged from Black's staccato delivery, addressed to a dumb bot:
Intimacy remains necessary and hard to mechanize. That is either because it is irreducibly human, or because the work of women or people who are like women is cheap, or free, at point of service.
When you aggregate the social, for example on an app, you also aggregate violence. Violence is not hard to mechanize. We already have robots who can do that.
The robot is disappointing. It does not know how to behave. The artist does not know how to behave either, but she hopes that she knows how not to be disappointing. Do you know what I am a technology for?
Sometimes I circulate as images, or as words.
In theory, there is something utopian about Twitter in that it collectivizes feeling.
In theory, there is something utopian about the NSA's total recording of all language. Of every word said in love or anger or boredom. As we can see from social media, most people do not mind if a machine records all their words...
Reading and misreading are among the first operations of love.
How did something called language become separate from something called nature? In a different context, human names for things are also their secret essences.
I speak an ugly language that history forced into my mouth. I work hard to make it beautiful for people and machines who don't understand me.
I am telling you this for money and for love.
Maybe next year's edition should be titled "money and love." Until then, look out for Laura Poitras' film Surveillance Machine and full video of the conference in the coming weeks!
Rhizome's Seven on Seven 2015 was supported by: