In audio practice X

Funk's SoundBox 2012

Funk's SoundBox 2012 logo

As partly told in a previous commentary, I spent time in Portugal in February 2013. Recordings I made during the visit consist of the files presented on PennSound, a series of ambient tracks in Porto and Buçaco, documentation of the proceedings of the Poesia experimental: materialidades e representações digitais colloquium at University Fernando Pessoa, and my "Seminário Transversal" for the Materialidades da Literatura seminar at the University of Coimbra. At the tail end of the excursion, proposals to submit work for the Electronic Literature Organization’s Chercher le texte conference (Paris, September 2013) Virtual Gallery were due. Mulling over what to propose during the course of stimulating days in Portugal, wowed by PO.EX’s documentation style and by the possibilities of making new work within the context of documentary work, I decided to propose compiling recordings I had made during the previous year.

My proposal description for Funk’s SoundBox 2012 simply stated, “In this multi-track interactive application, users will find ambient sounds, discussions led by some of the most influential figures in the field of digital writing, grand improvisations, audio play, and more weaved into a single sonic projection.” For years I had made abundant recordings, but had never thought to package and produce them for public consumption in any organized way or with such focus.

Once the proposal was accepted in May, four months of intensive labor of various kinds followed. Closely reviewing all the recordings—which consist of ambient sounds, discussions led by (and between) digital writers/scholars/students, musical improvisations, studio play, readings and festival performances, &c.—for content, making selections, and preparing (i.e., ripping and mastering) 427 unique audio files and then converting them to .mp3 takes a long time! Such work is interesting research on many registers, and not without long stretches of tedium, which I usually welcome as time for contemplation—though it demands a kind of split attention, because one always needs to pay mind to technical, temporal, and other production components.

With the exception of Girassol studio recordings made directly to my laptop (with which I use an external M-Audio Fast Track Pro sound card), all of the recordings for the project were made in .wav format (44.1 kHz) with a handheld device capable of 24-bit/96 kHz recording (an Edirol R-09HR, made by Roland) directly to an SD card, via which files were transferred to the laptop. I used Ableton Live 8.2.8 to master the .wav files, which I then converted to the Web-compatible .mp3 format with Adobe Soundbooth. Since the audio content of the project amounts to approximately twenty-four hours of material, I spent many days sequestered in Girassol in order to achieve the result I sought. With more than sixty contributors—including prominent figures in a range of fields, such as John Cayley, Mary Flanagan, Nick Montfort, Vanessa Place, Joan Retallack, Alan Sondheim, and many others—seeking permissions, posting preview files, and sending updates were not trivial tasks either. Having an assistant/intern, or even a small team, to help out would have been useful. Yet because the work could be made a priority (thanks in large part to Amy Hufnagel), I managed to accomplish what I set out to do—though doing these tasks on a project of this scale as a solo act is inadvisable!
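
For readers curious about the conversion step: I did mine in Adobe Soundbooth, but a batch .wav-to-.mp3 pass of this kind can also be scripted. The following is merely an illustrative sketch, not my process; it is written in JavaScript for Node.js and assumes ffmpeg is installed and that the mastered .wav files sit in a hypothetical folder named "masters":

    // Hypothetical batch .wav-to-.mp3 conversion sketch (Node.js + ffmpeg);
    // not the Adobe Soundbooth workflow described above. Assumes ffmpeg is
    // on the PATH and the mastered .wav files are in a folder named "masters".
    const fs = require('fs');
    const path = require('path');
    const { execFileSync } = require('child_process');

    const inputDir = 'masters';
    for (const file of fs.readdirSync(inputDir)) {
      if (path.extname(file).toLowerCase() !== '.wav') continue;
      const src = path.join(inputDir, file);
      const dest = path.join(inputDir, path.basename(file, path.extname(file)) + '.mp3');
      // 192 kbps is a reasonable Web-delivery bitrate; adjust as needed.
      execFileSync('ffmpeg', ['-y', '-i', src, '-codec:a', 'libmp3lame', '-b:a', '192k', dest]);
    }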

Fortunately, as I began to design the interface in August, a most unexpected thing happened. Planning to use Flash software to construct the interface, I contacted my friend Jim Andrews (whose program dbCinema I've performed with) with a question about using Flash’s audio capabilities (in particular, regarding code for sound-embed buttons). Andrews had used Flash for audio projects for many years (see Nio for an example), and I knew he would be a helpful consultant. I did not expect to have an elaborate engagement with him about it, or that he—once he heard the premise of what I wanted to do—would spend time authoring examples of how my designs could be implemented through a combination of HTML5 code and JavaScript. We thus ended up collaborating on the project’s premium attribute—its “version Stereo”—which interactively hyperlinks all of the vocal and musical tracks included in the most complex of the three interfaces I designed. (The other versions, named “Table” and “Inventory”, allow users to play multiple files by individual contributors and offer an index for downloads, but not the ability to play multiple files by any contributor at the same time.) Being fluent in HTML was certainly helpful, because once I had a general understanding of how the code Andrews provided worked, it was not difficult to author the necessary additions.
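
To give a sense of what a sound-embed button involves in HTML5 and JavaScript, here is a minimal sketch of the general approach. This is not Andrews's actual code; the element id and file name are placeholders:

    <!-- Minimal sound-embed button sketch: an HTML5 audio element toggled
         by a button. "track01.mp3" is a placeholder file name. -->
    <audio id="track01" src="track01.mp3" preload="none"></audio>
    <button onclick="toggleTrack('track01')">play / pause</button>
    <script>
      // Play the named track if it is paused; otherwise pause it.
      function toggleTrack(id) {
        var track = document.getElementById(id);
        if (track.paused) {
          track.play();
        } else {
          track.pause();
        }
      }
    </script>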

My intent was to permit users to mix and match sound tracks with verbal tracks. Andrews's code structure functions perfectly for the task. I/we were able to create pull-down lists for both types of tracks, and to enable users to activate (and deactivate) any number of tracks in each category, as well as adjust the volume and timeline position of each. Possibility and play in such re-mixing often deliver surprising juxtapositions (or textual chaos, depending on how you use it) and expand the project in a way that presenting one sound file at a time would not. I would not argue that all archives should be presented this way; I only wish to suggest that such components can have utility and offer fine alternatives to the norm.
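
The mechanics can be sketched simply. In the spirit of the HTML5/JavaScript approach described above (this is an illustration, not the code Andrews and I used; ids and file names are placeholders), each selection from a pull-down list spawns an independent audio element with its own play/pause, volume, and timeline controls, so any number of tracks can sound at once:

    <!-- Hypothetical multi-track sketch: each pull-down selection adds an
         independent audio element to the mix. File names are placeholders. -->
    <select id="verbal-tracks" onchange="addTrack(this.value)">
      <option value="">choose a verbal track</option>
      <option value="reading01.mp3">reading01.mp3</option>
      <option value="discussion02.mp3">discussion02.mp3</option>
    </select>
    <div id="mixer"></div>
    <script>
      // Append a new audio element so tracks play simultaneously, each with
      // its own built-in play/pause, volume, and seek (timeline) controls.
      function addTrack(src) {
        if (!src) return;
        var audio = document.createElement('audio');
        audio.src = src;
        audio.controls = true;
        document.getElementById('mixer').appendChild(audio);
        audio.play();
      }
    </script>

A second pull-down for the sound/music tracks works the same way, which is what allows verbal and musical material to be layered against each other.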

Notably, at the same time I reached out to David Jhave Johnston with similar questions regarding the SoundBox audio design, because I have tremendous respect for what he did with Flash in his work MUPS (Mash Up PennSound) in 2011-2012. Johnston ultimately sent me all of the code for MUPS, with instructions on how to implement it with my files, though I did not pursue making another iteration of the materials beyond the Andrews version. The spirit of collaboration evident in my dialogs with Andrews and Johnston was inspiring and helped support SoundBox 2012 enormously. I also appreciate that several contributors—particularly David Clark, Roberto Simanowski, and Stephanie Strickland—closely engaged with their recordings and selectively refined the content; their attention to detail adds polish to the project.

My original plans were altered and developed into an application that was easier to produce, more effective in affect, and quite easy to use. I cannot think of a way I could be more pleased by the end result, other than to say—as I do in notes that accompany the project—that a fully annotated set of “liner notes” for each file would be a positive addition.

My notes further divulge a backdrop to the story and approaches one can take with what I have put together. I explain that the project captures “many different voices and sounds travelling up and down the building and involvement of a year. The content is creative, critical, and other; the selective process of its production, too, is a creative and critical endeavor”, and posit my “hope to see many more examples of this type of scholarship emerge in the Digital Humanities as time passes”.

Funk’s SoundBox 2012 was, thanks to my NJIT colleague Andrew Klobucar, nominated for a 2013 Digital Humanities Award in the category of “Best use of DH for fun”. It did not win the award, but certainly many more people were (and continue to be) exposed to the work as a result of the nomination. During the winter 2014 Polar Vortex I repurposed some of the code Andrews and I devised to create Funk’s SoundBox version archival (We Press, 2014), which appears (for simplicity’s sake) to be a uni-track construction; in fact, users can play with multi-tracking by clicking on the box with my name in the upper left corner of the screen.