xCoAx 2020 8th Conference on Computation, Communication, Aesthetics & X 8–10 July, Graz online

Raw Green Rust: Improvisation (with FluCoMa and UniSSON)

Keywords: Computer Music, Machine Listening, Signal Decomposition, Audiovisual Performance.

Raw Green Rust make abstract electronic music informed by wide-ranging musical tastes, using custom software instruments and processes with gestural controllers (Rawlinson, Green and Murray-Rust 2018, 2019). An important aspect of our improvising approach is to constantly sample and transform each other, in pursuit of an organic, shifting sound mass. Our performance practice builds on existing strands of work in creative computing, computer music and musicology but seeks to make playful and agile use of technology to explore shared musical agency.

We conceive of our work in ecosystemic terms (Waters 2007, Green 2011). Mutual connectivity through networked audio and control data offers radical possibilities for making sonic outcomes that are fluid, shared and responsive through performative agency that is distributed across people and processes. Performative agency in software lies in its capacity to act as a conduit and focus of interaction and exchange, as an object that can influence or change behaviours (Bown, Eldridge and McCormack 2009). For Raw Green Rust, this manifests in applications of the FluCoMa and UniSSON toolsets, alongside our already well-established plunderphonic aesthetic. [1]

Fluid Corpus Manipulation

Fluid Corpus Manipulation (FluCoMa) [2] is a five-year ERC-funded project, focusing on musical practices that work with collections of recorded audio and machine listening technologies. The project will produce software toolkits, learning resources and a community platform, in the hope of facilitating distinctively artistic and divergent approaches to researching machine listening (Green, Tremblay and Roma 2018).

In the context of this performance, the focus is on how techniques such as audio novelty measurement, dimensionality reduction and clustering can be used to facilitate live corpus co-creation between the members of the group. What effects might this have on the timescales over which we can quote and transform each other’s gestures?
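The corpus pipeline named above can be sketched in generic terms: per-slice audio features are reduced to a low-dimensional space and then grouped, so that performers can browse and quote each other's material by timbral neighbourhood. The snippet below is a minimal illustration using scikit-learn stand-ins rather than the FluCoMa objects themselves, and the feature matrix is a synthetic placeholder, not real audio analysis.

```python
# Sketch of a corpus pipeline analogous to the one described above:
# per-slice features -> dimensionality reduction -> clustering.
# scikit-learn stands in for the FluCoMa tools; the "features" here
# are synthetic placeholders rather than real audio descriptors.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend each row is a 13-dimensional descriptor summary
# (e.g. mean MFCCs) for one slice of the live corpus.
features = rng.normal(size=(200, 13))

# Standardize, then reduce to 2D for browsing/plotting.
reduced = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(features)
)

# Group slices into a handful of clusters a player could draw from.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(reduced)
print(reduced.shape, labels.shape)
```

In a live setting, each incoming slice from another player would be analysed and projected into the same space, making "quoting" a matter of retrieving nearby cluster members.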

Unity Supercollider Sound Object Notation

The sound-symbol relationship is the main focus of the Unity Supercollider Sound Object Notation project (UniSSON) [3] (Rawlinson and Pietruszewski 2019).

In electronic music, especially when performed by groups, it can be hard for audiences and performers to know who is doing what, as movement and action are decoupled from sonic results. If the audience and performers are not able to audibly or visibly (at a gestural level) perceive contribution, how might it otherwise be represented and communicated?

The main output from this research is a suite of software tools that presents a real-time multi-temporal and multi-resolution view of sonic data. The current state of the software gives a clear view of which audio features belong to which player, and indicates relationships between events/streams and gestures while exploring legibility and co-agency in laptop performance.

A prior Raw Green Rust performance, Beyond Festival 2019.
Multi-temporal, multi-resolution visualization of audio features in UniSSON.


FluCoMa has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 725899).


  1. A Raw Green Rust performance can be heard at rawgreenrust.bandcamp.com.
  2. For more information see www.flucoma.org.
  3. For more information see www.pixelmechanics.com/unisson.

