RHIZOME DIGEST: 6.21.02

<br />RHIZOME DIGEST: June 21, 2002<br /><br />Content:<br /><br />+opportunity+<br />1. Lenssen Ute: Call for applications - Bauhaus Kolleg Dot.City<br /><br />+announcement+<br />2. yukiko shikata: &quot;art.bit collection&quot; at ICC<br /><br />+thread+<br />3. John Klima, sgp, and Christopher Fahey: Context Breeder Mid-Project<br />Report<br /><br />+interview+<br />4. David Mandl: Harwood interview – TextFM<br /><br />+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +<br /><br />1.<br /><br />Date: 6.19.02<br />From: Lenssen Ute (lenssen@bauhaus-dessau.de)<br />Subject: Call for applications - Bauhaus Kolleg Dot.City<br /><br />Apply now!<br /><br />The Bauhaus Dessau Foundation announces the<br /><br />IVth BAUHAUS KOLLEG DOT.CITY 2002/2003<br /><br />On September 26, 2002, the first trimester of the Bauhaus Kolleg DOT.CITY<br />will start. Apply now for an outstanding interdisciplinary program. Work<br />together with architects, geographers, sociologists, artists and other<br />professionals on one of the most challenging topics in urbanism:<br /><br />The Impact of Information and Communication Technology on Cities.<br /><br />Learn from experienced international specialists.
Join people from all<br />over the world coming together at a unique place: the Bauhaus Dessau.<br /><br />1st trimester: Finding Human/ICT-Interfaces, Sept 26 - Dec 4, 2002<br />Application Deadline: August 1, 2002<br /><br />2nd trimester: Creating Dot.Urban Amplifiers, Jan 23 - Apr 17, 2003<br />Application Deadline: Dec 6, 2002<br /><br />3rd trimester: Planning the Dot.City, May 22 - Aug 14, 2003<br />Application Deadline: Apr 4, 2003<br /><br />For details of the current program see<br />&lt;<a rel="nofollow" href="http://www.bauhaus-dessau.de/en/kolleg.asp?p=dot">http://www.bauhaus-dessau.de/en/kolleg.asp?p=dot</a>&gt;<br /><br />and for details of the application see<br />&lt;<a rel="nofollow" href="http://www.bauhaus-dessau.de/en/kolleg.asp?p=application">http://www.bauhaus-dessau.de/en/kolleg.asp?p=application</a>&gt;<br /><br />See also FORUM, the official webpage of the BAUHAUS KOLLEG DOT.CITY:<br />&lt;<a rel="nofollow" href="http://www.bauhaus-dessau.de/dotcity/">http://www.bauhaus-dessau.de/dotcity/</a>&gt;<br /><br />Join our first live chat: &lt;<a rel="nofollow" href="http://www.bauhaus-dessau.de/dotcity/chat.asp">http://www.bauhaus-dessau.de/dotcity/chat.asp</a>&gt;<br />Thursday, June 27th, 2002<br />1830 - 2000 [ GMT ]<br />1930 - 2100 [ Central European Time ]<br />1430 - 1600 [ US Eastern Standard Time ]<br />2230 - 0000 [ Indian Standard Time ]<br />0730 - 0900 [ New Zealand Time, next day ]<br /><br />Chat with us about the concept of our TELECITY-EXHIBITION.<br /><br />Ute Lenssen<br />Bauhaus Dessau Foundation<br />BAUHAUS KOLLEG<br />Project Manager<br />Gropiusallee 38<br />06846 Dessau<br />Tel: ++49 (0)340-6508-402<br />Fax: ++49 (0)340-6508-404<br />E-mail: lenssen@bauhaus-dessau.de &lt;<a rel="nofollow" href="mailto:lenssen@bauhaus-dessau.de">mailto:lenssen@bauhaus-dessau.de</a>&gt;<br />&lt;<a rel="nofollow" href="http://www.bauhaus-dessau.de">http://www.bauhaus-dessau.de</a>&gt;<br /> <br />+ + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + + + + +<br /><br />+ad+<br /><br />**MUTE MAGAZINE NEW ISSUE** Coco Fusco/Ricardo Dominguez on activism and<br />art; JJ King on the US military's response to asymmetry and Gregor<br />Claude on the digital commons. Matthew Hyland on David Blunkett, Flint,<br />Michigan; and Brandon Labelle on musique concrète and 'Very Cyberfeminist<br />International'. <a rel="nofollow" href="http://www.metamute.com/mutemagazine/issue23/index.htm">http://www.metamute.com/mutemagazine/issue23/index.htm</a><br /><br />+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +<br /><br />2.<br /><br />Date: 6.19.02<br />From: yukiko shikata (sica@dasein-design.com)<br />Subject: &quot;art.bit collection&quot; at ICC<br /><br />&quot;art.bit collection&quot;<br /><br />Date: June 21 (Fri) - August 11 (Sun), 2002, 10:00am-6:00pm<br />Closed: Mondays and August 4 (Sun)<br />Venue: NTT InterCommunication Center [ICC], Gallery A, B<br />Address: Tokyo Opera City Tower 4F, 3-20-2 Nishi-Shinjuku,<br />Shinjuku-ku, Tokyo, 163-1404 Japan<br /><br />URL for this exhibition: <a rel="nofollow" href="http://www.art-bit.jp/">http://www.art-bit.jp/</a><br />—<br />on &quot;Art.Bit Collection&quot;<br /><br />In the art world, a work of art is called an &quot;art piece.&quot; The word<br />&quot;piece&quot; designates a thing that actually exists, but since software<br />creations exist only as binary data, calling them an &quot;art piece&quot; doesn't<br />quite fit.
Substituting &quot;bit&quot; for &quot;piece,&quot; we have decided to call such<br />a work an &quot;art bit.&quot;<br /><br />In the case of software, which is used as a medium, material, tool, and<br />environment for art, it is necessary to know the conditions of the &quot;art<br />bit&quot;; under the present circumstances, however, when the market is<br />glutted with high-performance application software, it is becoming<br />increasingly difficult to stretch the individual's imaginative powers.<br />Some people have even become convinced that no new software is needed<br />beyond what already exists. Software ought not to be simply a tool that<br />allows us to imitate actual operations and rationalize routine work. We<br />must delve down and discover new possibilities that are latent in<br />software and experiment with them through trial and error as &quot;art bits.&quot;<br /><br />The &quot;Art.Bit Collection&quot; exhibit brings together and displays works that<br />explore software possibilities in this sense – programming languages<br />(especially visual programming languages and language-environment<br />software for computer music), network community (software available on<br />the Internet for creating and exhibiting artwork), software for<br />visualization of the World Wide Web, new application software, and<br />interactive works.
Although we perhaps cannot say that these art bits<br />have as yet evolved into major works, we can say that each of them<br />contains a &quot;bit of art&quot; that shows extraordinary creativity.<br /><br />—<br /><br />7 categories with 39 works<br /><br />*Visual Programming Environment (8 works): How can we create open-ended<br />programming environments for the end user?<br /><br />*Media Programming Environment (5 works)<br /><br />*CommunityWare (1 work)<br /><br />*Virtual Environment (3 works): Experience the strange reality of<br />virtual environments inside the computer.<br /><br />*Web Browser historical view and alternatives (7 works): A look at the<br />history and future of the Web browser.<br /><br />*Behind the Network (5 works): Visualizing the streams of data moving<br />across the network, and the many processes at work behind it.<br /><br />*NoiseWare - deconstructing desktop and application (10 works):<br />Injecting noise into the desktop and applications to reconstruct your<br />common sense about the computer.<br />–<br /><br />NTT InterCommunication Center [ICC]<br />Tel: +81-3-5353-0800 (International)<br />E-mail: query@ntticc.or.jp<br />URL: <a rel="nofollow" href="http://www.ntticc.or.jp/">http://www.ntticc.or.jp/</a><br />URL for this exhibition: <a rel="nofollow" href="http://www.art-bit.jp/">http://www.art-bit.jp/</a><br /><br />+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +<br /><br />+ad+<br /><br />Limited-time offer! Subscribe to Leonardo Electronic Almanac (LEA), the<br />leading electronic newsletter in its field, for $35 for 2002 and receive<br />as a bonus free electronic access to the on-line versions of Leonardo<br />and the Leonardo Music Journal.
Subscribe now at:<br /><a rel="nofollow" href="http://mitpress2.mit.edu/e-journals/LEA/INFORMATION/subscribe.html">http://mitpress2.mit.edu/e-journals/LEA/INFORMATION/subscribe.html</a>.<br /><br />+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +<br /><br />3.<br /><br />Date: 6.16.02-6.19.02<br />From: John Klima (klima@echonyc.com), sgp (somebody@sgp-7.net),<br />Christopher Fahey (askROM@graphpaper.com)<br />Subject: Context Breeder Mid-Project Report<br /><br />[editor's note: An interesting thread on John Klima's Context Breeder<br />project, which is in development but open for drive-bys and beta<br />testers, started on Raw this week. Salient themes are visualization and<br />an artist's obligation, or lack thereof, to usability.]<br /><br />John Klima posted:<br /><br />heya all,<br /><br />The Rhizome alt.interface mid-project report is online at<br /><a rel="nofollow" href="http://www.rhizome.org/Context_Breeder">http://www.rhizome.org/Context_Breeder</a> with a link to the app as it<br />exists. The report is also included below for your convenience.<br /><br />the original project proposal and description are at<br /><a rel="nofollow" href="http://www.cityarts.com/rhizome">http://www.cityarts.com/rhizome</a> if yer not familiar with it.<br /><br />the current application url is:<br /><br /><a rel="nofollow" href="http://www.rhizome.org/Context_Breeder/userface.htm">http://www.rhizome.org/Context_Breeder/userface.htm</a><br /><br />Give it a try, see if you can figure out the interface, pound on it and<br />create new genes. You create a gene by selecting four artbase objects.<br />users will eventually have the ability to accept an existing default gene<br />with no muss or fuss, and be able to find genes they previously created.<br />The creation interface is the most complex, so I managed it first. slide<br />the blue dot for big change, red dot for small, click around on things,<br />lemme know.
will eventually add pop-up help and loading status bars; for now, give<br />it a few moments to load.<br /><br />best, j<br /><br />Context Breeder Mid-Project Report 15 June 2002:<br /><br />The Context Breeder project is well along in its development cycle. By<br />using server-side php scripts to accept and return data with a Java 1<br />front end, the data backbone for Context Breeder is fully established.<br />Work on the front end continues, with significant milestones<br />accomplished, including cross-platform browser-based 3d rendering and a<br />gene creation interface that populates the rendering with new genes<br />(give it a try <a rel="nofollow" href="http://www.rhizome.org/Context_Breeder/userface.htm">http://www.rhizome.org/Context_Breeder/userface.htm</a>).<br /><br />The 3d gene pool rendering displays sequences of four gif images,<br />representing four Artbase objects. Context Breeder has two such 3d<br />renderings that will be finalized as a single interface. The first<br />displays the sequences as a transparent stack, the second displays them<br />in orbits. By combining the stacks and orbits, the sequences will be<br />arranged by their similarity with each other.<br /><br />The 3d rendering is populated by a java interface that allows the user<br />to create a sequence of four genes. Complete information about the<br />artbase objects in the scene will be visible below the 700 x 200 pixel<br />3d rendering area. This is by no means the finished 3d rendering;<br />however, it fully demonstrates the instant dynamic adding of gene<br />sequences to the pool, constituting the major functional hurdle for the<br />project.<br /><br />Summary of code modules written thus far:<br />1. php scripts retrieve and add genes to the artbase.<br />2. a java interface accesses the database through the php scripts.<br />3. a cross-platform 3d rendering displays genes as .gifs in stacks.<br />4. a cross-platform 3d rendering displays genes as .gifs in orbits.<br /><br />Breakdown of tasks ahead:<br />1. integrate the two 3d renderings based on gene similarity.<br />2. enable travel through the rendering.<br />3. enable gene crossover and lifespan.<br />4. beta feedback and debug.<br /><br />[sgp] wrote:<br /><br />John, this is a really interesting project and I have lots of questions.<br />You seem to be taking on the challenge of creating a better<br />visualization scheme for associative data by finding alternative<br />metaphors for their organization, display and interconnectedness. Much<br />of my criticism and commentary below is informed by my experience<br />dealing with Thinkmap and, more recently, teaching interface design.<br /><br />I found the whole experience to be not very intuitive. This is not<br />necessarily a problem. I know we have had discussions in the past about<br />your interest in providing challenging gaming mechanisms and interaction<br />designs. While I tend to agree with you on those, I find it important to<br />provide a layered experience. Another reason I bring this up is because<br />the project is wavering between being a tool and an interface - it is<br />two screens: the first a user interface (tool), the second a<br />visualization of the artbase. I say this because in your explanation of<br />the piece, it is clear to me that user participation is important to the<br />project's success. Therefore legibility of the interface as a usable<br />tool is important, while legibility of the interface as visualization<br />can rely on additional parameters - like those found in genetics (?).<br /><br />Below I will try to identify key aspects of the interface and<br />interaction design that I found confusing.<br /><br />There seem to be 3 main modes of activity for the gene pool selection<br />interface.
- Scrolling: physical metaphors and measure<br /><br />There are two different methods to scroll a list provided to the user -<br />a blue dot and a red dot. The fact that they are even scroll bars is<br />obfuscated by the lack of typical scroll bar conventions: up/down arrows<br />or the physical metaphors of a button in a slot, track or delimited<br />slide zone. You provide a box in which the dots reside, but it is<br />unclear how they relate to that box, in part because the box looks more<br />like a framing element, creating modules, common to the entire<br />interface. Once users catch on that the dots are scroll bars, the<br />interface responds well; the feedback is as expected. However, it is<br />difficult to understand what proportion the red dot scrolls compared to<br />the blue. For example, the blue dot lets users jump 20 names at a time,<br />while the red scrolls within those 20. This is a nice feature, but an<br />unusual one, and it therefore needs more visual clarification of<br />measurement.<br /><br />- Selecting: multiple clicking/highlighting options to designate choice<br /><br />It is very subtly implied (reading left to right) that the sequence of<br />browsing is blue dot, red dot, click box to position, hit button. I<br />found this sequence so subtle as to be invisible due to various other<br />competing interactions. The red dot is both scrolling and indicating<br />selection. There is a visual connection between the red dot and the<br />placement box rather than a connection between the artbase item and the<br />placement box. Having many items hot (clickable) makes it easy to get<br />out of sequence. Non-linear selection is great, and I am all for<br />serendipity, but it becomes very important for users to be able to track<br />their current state.
Therefore, I would disconnect the red dot from highlighting a<br />selection and let the user click on the item to select it, OR keep the<br />connection, have the placement box constantly update as users scroll,<br />and then provide an obvious way to select the next placement box.<br /><br />- Producing: one big interface<br /><br />So, I have tried to make a distinction between what I think are<br />currently confusing interface issues and their possible outcomes, one<br />being the more typical usability-oriented and the other more<br />serendipitous. My last comment about the interface is a general layout<br />one. What if you placed the scrolling window at the top and the<br />placement boxes and go button at the bottom? I would eliminate the<br />extra readout list currently in the upper right corner, as that<br />information could easily be incorporated into the placement boxes. This<br />adjustment would actually allow you to place the second window,<br />&quot;addgene.php&quot;, at the bottom of the interface, thereby making it one<br />fluid experience. Right now, being taken to a second blank window, I<br />forget my choices and am left to drift through an un-annotated field of<br />thumbnails. Which ones are the ones I selected? How are the others being<br />generated? Why in a circle? Can the user interface foreshadow some of<br />these structures?<br /><br />All that said, what I find exciting is the possibility for the 3d<br />environment to be readily updateable because it is part of the same<br />interface. This, for me, would be a great and fluid context breeder.<br />Placing the thumbnail visualization back into a selection environment<br />could allow you to highlight the existing structures at play in the<br />curation and categorization of art works. It would also allow you to<br />address Patrick's fine comment about wanting to search by numerous<br />criteria beyond alpha-numeric listing.
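<br /><br />[editor's note: The &quot;gene&quot; under discussion is a sequence of four Artbase<br />objects, and Klima's report says the pool will be arranged by similarity.<br />The metric itself is never specified in the thread, so the sketch below is<br />purely hypothetical: it scores two genes by how many Artbase object IDs<br />they share, then sorts the pool around the user's own gene.]<br /><br />

```python
# Hypothetical sketch of the similarity ordering described in the thread.
# A "gene" is modeled as a sequence of four Artbase object IDs; the real
# project's metric is not documented, so shared-ID counting is an
# illustrative stand-in.

def similarity(gene_a, gene_b):
    """Score two four-object genes by the Artbase object IDs they share."""
    return len(set(gene_a) & set(gene_b))

def order_pool(your_gene, pool):
    """Sort the pool so the most similar genes sit nearest your own gene,
    for use in a combined stack/orbit layout."""
    return sorted(pool, key=lambda g: similarity(your_gene, g), reverse=True)
```

<br />[editor's note continues: order_pool would then decide which genes are<br />drawn nearest the user's own in the combined stack/orbit rendering.]<br /><br />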
On the other hand, I had been<br />considering the 3d environment as the 2nd experience, when it is really<br />the primary one; one could therefore consider the user interface as a<br />kind of heads-up display, making it all one fluid piece. Which makes me<br />wrap up by asking: how does all this relate, if at all, to current<br />methods of genetic visualization and sequencing? What does an additional<br />dimension (2d to 3d) afford you? Ok, I'll stop… I apologize for being<br />so long-winded, but I am excited to see where you're going to take the<br />project.<br /><br />Best regards, [sgp]<br /><br />ps: I had no technical problems on my PC (Win2000) / IE 5<br /><br />John Klima responded:<br /><br />hi scott &amp; all,<br /><br />thanks a ton for your detailed assessment. the first time you look at<br />anything of mine it's not very intuitive, but in this case it *is* very<br />simple. i think a few bits of pop-up instruction will go a long way<br />(which i fully plan to implement). regarding your more specific points,<br />i don't think the first time i looked at any scroll bar, i intuitively<br />knew what it did. but once i tried it, it became quite obvious. i'd like<br />to suggest we are at the point of computing sophistication where ugly<br />little up and down arrows can be dispensed with, and seeing a list with<br />a gizmo next to it is all it takes to say &quot;scroll.&quot;<br /><br />the discussions we've had about this topic in general have always led<br />me to believe that no interface is intuitive, only similar to past<br />interfaces, or in the worst cases, simply habitual. i also think that a<br />good interface does not need to be intuitive; it needs to be easy to<br />master, and effective in its operation. that is not necessarily the<br />definition of intuitive.
i also believe the only way to create new<br />interfaces that do more than the existing paradigms is to simply not<br />worry about whether grandma and little billy can use it. i guess that's<br />why i'll never be a web designer. but seriously, the only way to make<br />more effective interfaces is to demand a bit more of the user. and i<br />think this really comes down to habit. we are used to seeing a scroll<br />bar that has arrows, thumb boxes, heavy raised borders, all this crap<br />that takes up space and perhaps makes it more confusing for grandma and<br />little billy. if it is assumed that everything in an interface has a<br />purpose, two dots in a rectangle next to a list seems obvious enough. a<br />quick investigation and their purpose is revealed. which is of course<br />part of the fun, and this is, of course, not an online realtime stock<br />trading application.<br /><br />to address your comments on selection, i'm still playing with things.<br />however, i quite like the sequence of events for selecting the four<br />objects. the red line connects to the red dot, which also highlights<br />its neighboring text entry in the list. to add that object, clicking on<br />the text entry OR clicking on one of the four boxes adds that entry.<br />clicking on a different entry adjusts the list to it, and it appears in<br />the selection box. so it becomes quite quick to set all four image boxes<br />to the same object, which one might actually want to do, or two of one<br />and two of another. i think the mechanism, though perhaps not<br />&quot;intuitive,&quot; is highly effective for making the selection. a few little<br />tweaks and drill-downs and it will be super effective.
btw the problem with<br />*automatically* selecting the highlighted list entry into the image box<br />is the network lag it takes to load it, and also the fact that my<br />unfiltered database list has lots of entries where there are no images.<br />however your points are well taken and i'll likely incorporate some of<br />them into the interface. keep in mind though that prolly 90% of first<br />time users will opt for a default gene and never use the creation<br />interface, so my primary concern will be to make it fast and effective<br />to use once you know the (few) oddball mechanisms.<br /><br />as far as the 3d rendering is concerned, there is no room for the<br />selection mechanism on this page because there will be a whole other<br />interface that the rendering is only a part of. if it had consisted of<br />only these two interfaces, they would have been on the same page. so in<br />the final version, there will be an interface of similar look and feel<br />to the selection interface, wrapped around the 3d rendering. also the 3d<br />rendering is in no way the final appearance of the sequences; right now<br />it functions only as a proof of function - it can accept user selection<br />of artbase objects into a 3d rendering of their thumbnails. also the<br />selection screen is not really part of the interface per se. the real<br />interface will be inside and around the rendering where connections are<br />made between genes, and will be fully annotated. btw, the gene you<br />created is the first one that appears in the rendering, and all the<br />others spiral out from there.
in the final interface, your gene will<br />act something like a crosshair in the center of the screen, and the<br />other genes will be stacked and orbited in the scene according to their<br />similarity with your gene.<br /><br />thanks again for your detailed evaluation, it will really help when i'm<br />faced with decisions where i'd prefer to say &quot;oh, fuck the user.&quot; i<br />look forward to your comments in the future.<br /><br />best, j<br /><br />Christopher Fahey wrote:<br /><br />While I advocate usability professionally, and while I think that poor<br />usability often unwittingly ruins a lot of ambitious net.art work<br />(<a rel="nofollow" href="http://010101.sfmoma.org/">http://010101.sfmoma.org/</a>), I also think that John's project has a<br />formal goal beyond the conceptual algorithm which recombines the Artbase<br />&quot;DNA&quot;: He is also experimenting with user interface paradigms, and as<br />such we should not expect the interface to stick to normal interface<br />standards.<br /><br />A really great book on web site usability is titled &quot;Don't Make Me<br />Think,&quot; and in my day job as an information architect and interaction<br />designer I think this is a great rule of thumb. But in an art context, I<br />think the opposite can be quite true: &quot;Make me think!&quot; is the name of<br />the game. Josh Davis once said that we shouldn't make interfaces that<br />assume the user is stupid. I agree.<br /><br />That said, I think most net.artists, including John, need to keep in<br />mind the usability of their work. Just because it's art doesn't<br />necessarily mean that the artist has carte blanche with the GUI. If<br />subverting the interface is the point, then go ahead and rock it Jodi<br />style and make every button and widget a total mystery. If building
If building<br />compelling, elegant, and innovative interactive experiences is your goal<br />(this well describes John Klima's whole artistic practice, IMHO), then<br />usability should be a factor in your equation.<br /><br />I reserve judgement on the usability of John's interface, but it seems<br />to me at this in-progress stage that it is not so challenging that his<br />audience wont figure it out after a little bit of thinking. Also, it<br />shows promise as something that might actually be an interesting<br />interactive experience when it's done.<br /><br />-Cf<br />John Klima replied:<br /><br />chris,<br /><br />thanks for your input. the usability of the creation interface is an<br />issue in as far as it's effectivness to create a sequence. however the<br />interface's measure of usability is not dependant on how intuitive, or<br />similar to other interfaces, it happens to be.<br /><br />and i have to insist that an artist does indeed have carte blanche with<br />the gui, it is the only arena that allows for this. its is the payback<br />for not having any practical thing to market, at least you can do what<br />ever you want. but thats another heavily worked topic.<br /><br />while full bore chaos that one often experiences in a jodi piece is<br />great, it does not have to be an all or nothing affair. an interface<br />does not have to be completely enigmatic or completely comprehensible,<br />and in a sense something that is neither is the most interesting, 'cause<br />you swear it makes sense but you just don't know why. 
that's a fun kind<br />of mental friction.<br /><br />best, j<br /><br />+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +<br /><br />4.<br /><br />Date: 6.14.02<br />From: David Mandl (dmandl@panix.com)<br />Subject: Harwood interview – TextFM<br /><br />[This interview appears in the current issue of The Brooklyn Rail]<br /><br /><a rel="nofollow" href="http://www.thebrooklynrail.org">http://www.thebrooklynrail.org</a><br /><br />———————————–<br /><br />Harwood interview: TextFM<br /><br />Dave Mandl<br /><br />Though you might not be aware of it if you live in the U.S.–where<br />mobile-phone technology is still a creaky Tower of Babel–&quot;texting&quot; is a<br />massively popular phenomenon in the rest of the industrialized world,<br />especially among young people. Formally known as &quot;SMS&quot; (for Short<br />Message Service), texting is a way to send text messages from one mobile<br />phone to another quickly, easily, and cheaply. There are currently more<br />than thirty million text messages a month being sent worldwide, and that<br />number is expected to rise to more than a hundred million in the next<br />two years.<br /><br />TextFM, a &quot;simple, lightweight, open media system&quot; designed by Londoners<br />Graham Harwood and Matthew Fuller, takes advantage of the widespread<br />availability of SMS-capable mobile phones to allow people to broadcast<br />_voice_ messages over the public radio airwaves. Using TextFM is<br />simple: You send a normal text message to a central phone number, where<br />it is captured by a computer. The computer converts the message to<br />speech using voice-synthesis software, and your spoken text is then sent<br />to a transmitter and broadcast over an FM radio frequency.
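<br /><br />[editor's note: The flow just described - an SMS captured by a computer,<br />converted to speech, and routed to a transmitter - can be sketched as<br />below. The interview mentions optional &quot;switches&quot; for voice, pitch, and<br />speed but not their actual syntax, so the /v /p /s form here is invented<br />for illustration, as are all function names.]<br /><br />

```python
# Hypothetical sketch of the TextFM pipeline: parse optional leading
# switches from an incoming SMS, hand the body to a text-to-speech
# engine, and route the audio to the FM transmitter.  The /v (voice),
# /p (pitch), /s (speed) switch syntax is invented for illustration.

import re

DEFAULTS = {"voice": 1, "pitch": 50, "speed": 100}

def parse_message(text):
    """Split optional leading switches from the message body."""
    opts = dict(DEFAULTS)
    keys = {"v": "voice", "p": "pitch", "s": "speed"}
    while True:
        m = re.match(r"\s*/([vps])(\d+)\s+", text)
        if not m:
            break
        opts[keys[m.group(1)]] = int(m.group(2))
        text = text[m.end():]
    return opts, text.strip()

def handle_sms(text, synthesize, broadcast):
    """One message through the pipeline: parse, synthesize, broadcast."""
    opts, body = parse_message(text)
    audio = synthesize(body, **opts)  # e.g. a wrapper around a TTS engine
    broadcast(audio)                  # e.g. feed the transmitter's audio input
```

<br />[editor's note continues: synthesize and broadcast stand in for whatever<br />TTS engine and transmitter hookup a given TextFM node uses; the same<br />parsed stream could also be forwarded to other nodes.]<br /><br />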
As part of<br />your message you can also include several optional codes (&quot;switches&quot;)<br />specifying the language your message is in, which of ten voices to use,<br />the pitch of the voice, and the speed at which you want the text read.<br /><br />The TextFM software is non-proprietary, &quot;open source&quot; code, meaning it<br />can be freely downloaded–and even customized, if necessary–by anyone.<br />Anyone with access to a computer running the Linux operating system<br />(which is itself free, open-source software) can set up their own TextFM<br />&quot;server.&quot; Installations are currently running in Vienna, London, and<br />Amsterdam, with more locations in the works. One of the current goals<br />of the project is to grow a decentralized network of TextFM servers<br />around the world: After a message is received and broadcast at one<br />TextFM site, it can then be forwarded to other sites in the network for<br />broadcast there.<br /><br />I spoke to Graham Harwood (who is currently doing a residency at De Waag<br />in Amsterdam) during a recent visit to New York, where he gave a<br />presentation on TextFM at the Museum of Modern Art.<br /><br />———–<br /><br />DM: Can you describe how TextFM servers in different locations would<br />work together?<br /><br />GH: The server doing the voice synthesis sits there [in Amsterdam], and<br />so people text to my phone, my computer reads the text messages straight<br />off, then sends those streams to the server in Austria [where they] join<br />the stream of people texting there. And the same is happening there, on<br />their server. So it's looking more and more likely that you can have<br />different nodes of this device. 
Because one of the big problems has<br />been getting around the airwaves problem [i.e., getting access to radio<br />frequencies to broadcast over]; the radio thing is a complete nightmare.<br /><br />DM: That's interesting, because one of the original goals of the project<br />was opening up the airwaves. So do you now see the future being more in<br />webcasting these messages, streaming over the net rather than continuing<br />with the radio model?<br /><br />GH: No. Generally it's a localized project. [Local administrators can<br />send messages] off into radio, or off into a public announcement speaker<br />system, or some other viable way for the local area. Because the laws<br />on radio are so very different between different borders and different<br />places, there's not a kind of one-solution-fits-all. It looks like<br />you've got to have a lot of different elements of the project that can<br />be locked together in different ways to suit local environments. It<br />could be in a public address system in a particular environment, it can<br />be in a club, you can use a CB…<br /><br />DM: So it's completely decentralized and autonomous: &quot;Here's your<br />stream; do what you want with it. If you have access to some radio<br />frequency, then broadcast it. If you want to webcast it, do that.&quot; What<br />kinds of messages have people been experimenting with?<br /><br />GH: One kind of speculative notion would be if we can set up a series of<br />speakers aimed at a public building here, or a public monument or<br />something, and do the same in a number of countries, and then use these<br />different nodes to actually just send shit to these public address<br />systems, it would be a really good method of–<br /><br />DM: An audio bulletin board.<br /><br />GH: Yeah. Because a lot of people in Vienna use texting as they're<br />walking past the public-address system there to just write in their text<br />message that just booms out in that locality.
So it's almost like<br />graffitiing as you walk past. And one of the really invigorating<br />notions about SMS is that everyone has their own remote in their pocket,<br />you know: as you walk past some kind of bulletin board, some kind of<br />address system, you can just leave something, post something, place it<br />there, in a mobile space. And that is really a kind of social dynamic,<br />because it gets it back out into the streets, out of your bedroom and<br />your screen.<br /><br />What's interesting about it is the complete system, not the content<br />of the system. It's the media systems that are being brought into play<br />for particular purposes. And the content of it is kind of secondary.<br />For me, if it's particularly geared at a physical object or a physical<br />space, then I'll quite happily send a stream of Bush probability<br />speaking [a Harwood project that creates ersatz Bush speeches based on<br />word frequencies in previous Bush speeches], or some other activity. And<br />so I think those are the real core interests for me, and it also came<br />about because of this thing of wanting at first to create a local media<br />system, and then seeing how people wanted to actually interact with or<br />manipulate that system. Not just content. And that became part of the<br />project.<br /><br />DM: What do you mean by &quot;manipulate the system&quot;?<br /><br />GH: I mean being able to change voice, trigger events, change pitch of<br />voice. We did one experiment with a group of students where we took<br />this trip of Bush to some South American country and combined [his<br />speech] with a bunch of other robots crawling other websites, and put<br />that [material] together–<br /><br />DM: So you just inject it into the stream?<br /><br />GH: Inject it into the stream, yeah. At timed intervals. And of course<br />you get these kinds of reactions to it from people texting.
So it's not<br />a completely _open_ system, but it's a system that's using language as<br />data, and then allowing people to interrupt that.<br /><br />DM: You're going to be doing something with Resonance FM [a new<br />community radio station in London]?<br /><br />GH: Yeah, we're going to do it with Resonance. I think we're going to<br />use nighttimes.<br /><br />DM: You mean in a time slot between the hours of so-and-so…?<br /><br />GH: In the different kinds of testing we've done, we've seen that TextFM<br />works really badly in some environments and really well in others.<br />That's quite interesting in itself. If you only have a three-hour time<br />slot somewhere and you just do it, it's crap. Because the network<br />doesn't develop. If you do it, though, in a kind of closed<br />conference-type session, it works very well. Like where there is a<br />particular subject and you use a local PA system, and people are<br />dropping their messages into it. It works really well like that. Where<br />it works the best is when you've got something ongoing over a month<br />period or something like that, where it can build up its own clientele.<br />If you've got a specific action with a public-address system against a<br />particular building, that works very well. But these light encounters<br />with it in public spaces are bad. Because people don't get it.<br /><br />DM: This project seems more humanist, in a way, than the net, just<br />because there's a voice involved–though I haven't heard it; I don't<br />know how synthesized and cyberpunk it sounds…<br /><br />GH: The aesthetic of voice synthesis is bad. A lot of people hate it. I<br />went through a thing of really hating it, but then I began to like it<br />because it's like the country-and-western of the cyber world. It's naff,<br />it's tasteless, and it grates. That's one of the things in<br />Amsterdam–I've done it at some reasonably bourgeois events. 
And people<br />kept turning it off, because they found it so annoying, and I was in<br />heaven. And people got really scared of it as well, because once you<br />alter the pitch and rate of the thing, you get into some really grating,<br />tasteless aesthetics, which I have a fascination for, social elites' use<br />of aesthetics. Also I did things like use a lot of harmonies with the<br />voice synthesis, with jingles and stuff. So those horrible synthesized<br />voices are actually singing harmony with a TextFM jingle. And we use<br />birdsong, British birdsong, as the audio track. So that's the background<br />all the time in TextFM. Because birds kind of have these intricate<br />media systems by which they declare territory and intention. It's also<br />like the music sound of the twittering of the birds. So it fits really<br />well. &quot;What kind of aesthetic can you choose for such a system?&quot; And<br />birdsong seemed to be the most stupid and appropriate [laughs].<br /><br /><a rel="nofollow" href="http://www.scotoma.org/cgi-bin/textfm/textfm.pl">http://www.scotoma.org/cgi-bin/textfm/textfm.pl</a><br /><br />–<br />Dave Mandl<br />dmandl@panix.com<br />davem@wfmu.org<br /><a rel="nofollow" href="http://www.wfmu.org/~davem">http://www.wfmu.org/~davem</a><br /><br />+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +<br /><br />Rhizome.org is a 501(c)(3) nonprofit organization. If you value this<br />free publication, please consider making a contribution within your<br />means.<br /><br />We accept online credit card contributions at<br /><a rel="nofollow" href="http://rhizome.org/support">http://rhizome.org/support</a>. Checks may be sent to Rhizome.org, 115<br />Mercer Street, New York, NY 10012. 
Or call us at +1.212.625.3191.<br /><br />Contributors are gratefully acknowledged on our web site at<br /><a rel="nofollow" href="http://rhizome.org/info/10.php3">http://rhizome.org/info/10.php3</a>.<br /><br />Rhizome Digest is supported by grants from The Charles Engelhard<br />Foundation, The Rockefeller Foundation, The Andy Warhol Foundation for<br />the Visual Arts, and with public funds from the New York State Council<br />on the Arts, a state agency.<br /><br />+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +<br /><br />Rhizome Digest is filtered by Rachel Greene (rachel@rhizome.org).<br />ISSN: 1525-9110. Volume 7, number 25. Article submissions to<br />list@rhizome.org are encouraged. Submissions should relate to the theme<br />of new media art and be less than 1500 words. For information on<br />advertising in Rhizome Digest, please contact info@rhizome.org.<br /><br />To unsubscribe from this list, visit <a rel="nofollow" href="http://rhizome.org/subscribe.rhiz">http://rhizome.org/subscribe.rhiz</a>.<br /><br />Subscribers to Rhizome Digest are subject to the terms set out in the<br />Member Agreement available online at <a rel="nofollow" href="http://rhizome.org/info/29.php3">http://rhizome.org/info/29.php3</a>.<br />