So last week I presented my research on 'Abstract Machines: Art and the Age of the Algorithm' to the LEI and ARI labs. An interesting debate ensued, with my thesis director Jean-Louis Boissier heading the charge, mostly around the ontological status of programmed images, a question I suspect will generate further debate. While we both apparently still refer to my old mentor Raymond Bellour and his fundamental work on cinema and semiotics, it looks as if a disagreement is emerging around the status of the image once it has been seized by the processor. Precisely where we place the 'entre' of Bellour's Entre-Images is probably where we still have some issues to discuss. I propose a somewhat strange concept, more or less revolving around the idea of a Frankenstein process in which the image renegotiates with the processor at each iteration in a discrete, disembodied process, whereas for Boissier the algorithm seems to work at the temporal edge of the image, between images, acting on the image as a whole. I might be misrepresenting Jean-Louis here, so I'll quit while I'm behind, but I thought the distinction interesting and look forward to debating this issue in a future session, perhaps during the defense itself.
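For what it's worth, here is how I would caricature the two positions in code -- a minimal Processing sketch of my own invention, purely illustrative, and certainly not how either of us would formalize the argument. Hold the mouse down for the pixel-by-pixel 'renegotiation'; release it for a whole-image operation applied between frames.

```
// Two caricatures of how an algorithm can 'seize' an image.
PImage img;

void setup() {
  size(400, 400);
  img = createImage(width, height, RGB);
  img.loadPixels();
  for (int i = 0; i < img.pixels.length; i++) {
    img.pixels[i] = color(random(255));  // seed the image with noise
  }
  img.updatePixels();
}

void draw() {
  if (mousePressed) {
    // 'Frankenstein' reading: every pixel renegotiates its value with
    // the processor at each iteration, a discrete, disembodied process.
    img.loadPixels();
    for (int i = 0; i < img.pixels.length; i++) {
      float b = brightness(img.pixels[i]);
      img.pixels[i] = color(constrain(b + random(-16, 16), 0, 255));
    }
    img.updatePixels();
  } else {
    // 'Between images' reading: the algorithm acts at the temporal edge,
    // transforming the previous image as a whole before the next appears.
    img.filter(BLUR, 1);
  }
  image(img, 0, 0);
}
```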
As for the rest, I began the talk with an easy distinction: separating the algorithmic layer of computers from their computational layer -- an idea that many reading this probably already take for granted. It is not a new position, but it is one I've been working with for quite some time, and it is the very object of this thesis. On this subject, Marius Watz over at Generator.x has a recent post subtitled 'Your new procedural lifestyle' where he mentions Michael Mateas' 'Procedural Literacy: Educating the New Media Practitioner', which from my quick scan deals precisely with this issue.
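To pin that distinction down with a deliberately trivial example -- mine, not Mateas', and nothing more than a sketch -- the comment at the top of the following Processing code is the algorithmic layer: a machine-independent description. Everything below it is the computational layer: one concrete realization among many possible ones.

```
// Algorithmic layer: "for each cell of a grid, draw a diagonal leaning
// one way or the other, chosen at random." That sentence is the algorithm,
// independent of any machine. What follows is the computational layer:
// one realization in Processing, with all its platform-bound details.
int cell = 10;
int x = 0, y = 0;

void setup() {
  size(400, 400);
  background(255);
  stroke(0);
}

void draw() {
  if (random(1) < 0.5) {
    line(x, y, x + cell, y + cell);   // diagonal leaning one way
  } else {
    line(x, y + cell, x + cell, y);   // diagonal leaning the other way
  }
  x += cell;
  if (x >= width) { x = 0; y += cell; }
  if (y >= height) noLoop();          // stop once the grid is full
}
```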
However, once we got to the meat of the thesis -- and the part I'm having the most fun with -- i.e. the theoretical 'diagrams' of various artistic and commercial machines, the discussion veered into a strange ideological debate, leading us yet again to the pros and cons of using Processing in artistic practice. Ultimately, I'm building these diagrams with Processing because it represents a common pedagogical and artistic platform and allows me to easily share the code with anyone willing to take the next step. You don't have to be a rocket scientist to realize that it would probably be a good idea to have access to working code from a thesis exploring the relationship between art and code. Processing is far more open to these needs than the alternatives, while still remaining accessible to anyone willing to make the effort.
But Processing is apparently still a tough pill to swallow for many who have an artistic past steeped in other environments. There is also a (somewhat justified) fear of visually (and procedurally) formatting the art to a specific school of thought.
There was also an interesting observation from my colleague Jean-Michel Géridan: as curators become more and more interested in code-based works, artworks are increasingly chosen based on preconceptions about the environment and compiler with which they were programmed. In this respect, Processing would currently be 'in', whereas other environments like Director would be 'out'. So true.