Why I Live Code - by co34pt

As an exiled classical violinist, dormant guitarist, habitual electronic tinkerer and (as of 2014) live coder, I got interested in making electronic music when I listened to Portishead's 'Dummy' and Boards of Canada's 'Geogaddi' (among others) in my early teens. I began learning how to produce it as soon as I could by experimenting with FL Studio alongside early YouTube tutorials, the first milestone of this being the release of my first 'album' of 'Ambient music' as a .zip of 128kbps .mp3 files on MediaFire.

These Digital Audio Workstation (hereafter DAW) compositions and arrangements were a lot of fun to make, and enabled me to experiment with many techniques and genres, but I couldn't 'perform' them. This was of course until I discovered Ableton Live. As someone who had been confined to static DAW arrangements for some time, Ableton with its emphasis on live performance through alternative interfaces/controller mechanisms was my platform of choice for around five years. Ableton's emphasis on performance initially allowed me to compose music in a performative manner by using loops, triggers and controllers, and eventually gave me the confidence to take specific compositions to a stage, with varying degrees of success. I then began composing and performing music using a mix of a few proprietary DAWs and programs.

here's an old performance of mine

After a while I had some reservations about my continuing use of proprietary DAWs, for a few reasons.

First was the inflexible nature of the kinds of performances I was delivering. I had a set of compositions (or 'songs', if you will), which were arranged into a set of loops which could be triggered in theoretically any combination, but in order for the songs to make sense as pieces of music, the order had to be reasonably strictly obeyed. I had some flexibility in the way I applied effects to individual channels, but this to me did not translate to directly 'performing' tracks in the way I would 'perform' with a traditional instrument - I felt as if my performances had become glorified button pushing ceremonies. I am very aware that there are much more 'live' ways to play with various DAWs than the methods I used, but this was not how I had ended up performing. Around the time I decided to give up on proprietary DAWs I was pretty immersed in playing improvised music with guitar/violin/electronics/various media during my Music degree, and I wanted to be able to bring an improvisatory instrumental spirit to my performances of electronic music. In performing with proprietary DAWs however I personally fell far short.

Second was the fact that the software was --h u g e--, and !DEMANDING!. My performance DAW suite of choice took up around 54GB of hard disk space, and became very difficult for my laptop to handle if I used any external software instruments at all. As a result of this, each individual track was an unwieldy bundle of samples and instruments, which would take a large amount of processing power to render. If I then wanted to perform a set of these tracks, I'd often have to combine a number of live 'projects' together and save them as one large project, as having to load each individual song before I played it would take minutes, breaking the flow of performance. What resulted were metaprojects which would be utterly enormous, unresponsive and would sometimes crash on loading. They could also be quite buggy, and performances felt 'risky' in the sense that any movement could topple them and bring my entire performance with it. While I'm all for embracing the possibility of a crash, this possibility being a structural feature of a performance without that being my intention was not an enjoyable way to perform.

Third is that the software is proprietary, and I was unhappy with what that represents. Leading up to the time I eventually gave up with proprietary DAWs (and subsequently proprietary software in general, where possible) I had been watching a number of lectures by Richard Stallman discussing proprietary software and user freedom. This, coupled with the work of glitch artists (particularly Rosa Menkman and Nick Briz) focusing on the role of platforms and softwares as often unacknowledged intermediaries in our material experiences of technology, presented me with a set of issues I could not personally resolve. While I released all of my music under Creative Commons in disagreement with copyright legislation, I was producing music using tools that were not only bound by the legislation I disagreed with, but tools that purposefully restricted the way that I could use them. In the words of Richard Stallman:

'With software, either the users control the program (free software) or the program controls the users (proprietary or nonfree software).' The proprietary nature of the software also means that it can only be run on certain systems by those with the financial ability to run it (or willingness to break various laws), on top of having to have access to a computer. The copyleft approach I had to the works I produced was very difficult (if not impossible) to apply to the materials used to make the works themselves.

Fourth was my relationship to traditions of performance in 'laptop music'. Even with controllers, performances I would deliver would always be me staring into a black box in the form of a laptop, occasionally triggering things on a controller. While I attempted to get around this in some ways by projecting a video of my controller as part of the visuals during sets, this didn't alleviate the problem of obfuscation. I was very used to a direct cause-and-effect relationship between actions and sounds, and for that relationship to be apparent to an audience. Whether I was bowing a violin, chugging away at 12/8 swing, or playing guitar with a handheld fan and a wood file (actually happened), the cause-effect relationship between myself and any potential audience was pretty clear. I felt as if my performances of electronic music did not have this kind of immediacy, and I didn't like that at all†. I'm very aware that this kind of immediacy isn't something that everyone strives for in laptop performance, but I missed it dearly. In addition to this, performances of electronic music of this type offered no opportunities for me as an audience member to learn about its construction besides how it sounded. I've always been fascinated by the construction of music and art, and the ability to deconstruct this in real-time is something I really value, much like the YouTube FL Studio tutorials I followed to learn how to make electronic music in the first place (I did this because I didn't realise the software actually had a manual, and I didn't realise my performance DAW even had a manual until I had been using it for three years). With this 'black boxing' of the performance setup, I had no layers to peel back - if a performer did something cool and I wanted to do it, tough luck, time to go home and reverse-engineer it without any idea what tools were used in its construction! I've never been enamoured of obfuscation or secrecy around technique. Why should techniques be a big secret? Much like the copyrighting and locking-down of the software, performance traditions that obscure the mechanisms one can use to do 'cool things' are pretty frustrating for me, whether or not that is the intention of the performer.

With these issues in mind, what was the answer to my problems with digital music performance? The best answer I have found is live coding, but it took me a while to get there.

Until around 2014, I had been dead-set against 'music-programming' (at the time I meant Pure Data and Max/MSP), as I was convinced that the integration of programming and music would take the 'human element' out of the music I was performing. Needless to say this was short-sighted and incorrect, and was probably a hangover from my education in the classical music tradition through the British schooling system, in which electronic music was often derided as something not to be taken seriously, and not as 'real music'. I had overcome this once I learned that my university took electronic music pretty seriously, but the idea of programming still stuck around as 'non-musical'. As was reasonably common among my peers, I found programming to be an alienating concept: with its syntax, language, args/ints/strings/longs and so on, it seemed the exact opposite of what I considered music creation to be - intuitive, tactile, 'musical'. How could

{SinOsc.ar(LFSaw.ar(XLine.kr([0.01,0.02],[400,500],100)).range(1,2000).round(200))}.play; 

be music if it didn't look like any music I had ever played before?

Around the time I was considering these issues and starting to look for alternatives I was fortunate enough to audit some classes by John Bowers where I learned how to use Pure Data and Arduino for multimedia performance and installation work. As a result, I actually learned how programming worked and what it was capable of, and began producing interactive digital works and performances. In addition, I was using free and open source software almost exclusively to create these works (with the exception of Max/MSP for video). It turned out that by using programming I could not only escape the trappings of limited systems for artistic expression by creating my own, but could extend outside of audio and into video, graphics and electronics through the use of open standards. I had overcome my fear of code!

While this was great for developing artworks, and provided a way out of using proprietary software (again, with the exception of Max/MSP), it didn't provide me with a solution for the music performance problem.

However, a housemate of mine at the time had been teaching me a little SuperCollider, a platform for audio synthesis and algorithmic composition. SuperCollider seemed to be the best platform for applying my newfound programming enthusiasm to electronic music, with the ability to operate outside of proprietary software, and the ability to choose the terms on which I would interact with the music I created (what DAW environment will let you play 1,000 copies of a three minute sound at random speeds with one action?). Around the time that I learned basic SuperCollider skills I had to complete my final year of my undergraduate music course, where I elected to do a 40-minute performance in place of a formal written dissertation. I figured the best thing to do would be to put my money where my mouth is (so to speak) and take the plunge away from proprietary DAWs into performing music with code. When I decided to do this Algorave had been in my periphery for a little while as live-coding's answer to electronic music performance. The TOPLAP Draft Manifesto alongside some events I had attended in Newcastle and Sheffield featuring live coding musicians piqued my interest in Algorave and what it could offer me by way of an approach to electronic music performance, and it turned out to be a great working answer to my main gripes with performing electronic music with proprietary DAWs.
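
To give a flavour of what that one-action flexibility looks like, here's a minimal sketch (nothing from my actual sets, and the sample path is just a placeholder) that reads a long sample into a buffer and then spawns a crowd of copies at random playback rates with a single evaluation:

// read a long sample into a buffer first (placeholder path)
s.boot;
b = Buffer.read(s, "/path/to/a-three-minute-sound.wav");

// then, in one action, spawn lots of copies at random playback rates
(
1000.do {
	// each copy plays the whole buffer at a random rate between 0.25x and 4x,
	// freeing itself when it finishes; scale the count down if your CPU complains
	{ PlayBuf.ar(b.numChannels, b, exprand(0.25, 4.0), doneAction: 2) * 0.01 }.play;
};
)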

"First was the inflexible nature of the kinds of performances I was delivering" - Live Coding tends to revolve around wholly or partly improvised performances, and the ability to write code in a non-linear way and execute it in real time and have the results instantly rendered as audio opened the playing field for me hugely. While it is possible to have live coding performances with a very set trajectory which evolve in the manner of a meticulous composition, it's equally possible to start from literally nothing except a running synthesis server. With a language as broad as SuperCollider, I could integrate anything from blistering noise based on non-linear maps through to 5/4 kick drums through to complex sample manipulation through to 4/4 kick-snare-clap patterns within one performance. While of course it's not always productive (or possible) to draw on such wildly disparate techniques during performances, the fact that the possibilities exist is very important. In addition to this, there are a plethora of live coding languages that can all be networked to one degree or another (although I usually stick to SuperCollider for reasons I'll detail in a later post).

"Second was the fact that the software was --h u g e--, and !DEMANDING!" - In switching to a programming platform like SuperCollider to make music, one is presented with the ability to start from basically zero. The SuperCollider source code is currently (as of March 2017) an 14.6 MB download from GitHub, and runs without any GUI by default, meaning that system load is very low out of the box (SuperCollider comfortably runs on Raspberry Pi), with the loading of extended functionality and libraries at the discretion of the user. In addition, projects are written and loaded as text files, which take up very little disk space and can be loaded near-instantly. By switching out my proprietary DAW for a live coding setup, I wouldn't have to wait minutes for projects to load (or have them crash outright after loading), and the separation of editor/server/interpreter in SuperCollider makes the management of any crashes much easier. If i need to, I can also perform on low-cost, low-power hardware, or use SuperCollider to create embedded installation works.

As it is a programming language, SuperCollider can be (and has been) built up to a fully-functioning DAW-type environment if necessary. With this I could try to like-for-like replace a proprietary DAW environment if I wanted, but doing so would, for me, partially defeat the point of learning how to live code in the first place. In live coding I can build and maintain an environment that suits me as a performer, keeping a simple, effective workflow to articulate my ideas within.

"Third is that the software is proprietary" - With a few exceptions (notably Max/MSP), live coding draws from rich ecosystem of free and open source tools, often with practitioners being active contributors to the software packages that they use (a good example being Alex McLean and TidalCycles). In adopting Live Coding as a method for electronic music performance I could finally leave the Apple ecosystem and the proprietary DAW paradigm in favour of using GNU/Linux and open source tools. I could now have full access to the tools I would be using to create music and the ability to modify these tools as I wished. In addition, so can anyone else! I can happily write a set of tutorials on how I live code electronic music knowing that anyone who has access to a computer running a compatible operating system should have the ability to follow that tutorial without them having to have access to hundreds of pounds worth of software and a license for Windows or an Apple machine. Live Coding was the last piece of the puzzle in my transition to a fully open source art practice, both in the tools I use and the work I create, which is now the focus of my PhD research. I try to keep an updated GitHub repo containing my live coding setup and sets, and I am going to be writing some docs/guides on how I live code dance music using SuperCollider and my own custom boilerplate code. The repo can be found here and a set of resources on how to live code in SuperCollider can be found here.

"Fourth was my relationship to traditions of performance in 'laptop music'" - I'm far from the first person to pick up on this, but the TOPLAP manifesto's 'Obscurantism is dangerous. Show us your screens.' seemed like a beautiful answer to the kinds of indecipherable laptop performances that frustrated me as a concert-goer. Important to 'Show us your screens' too is its corollary:

'It is not necessary for a lay audience to understand the code to appreciate it, much as it is not necessary to know how to play guitar in order to appreciate watching a guitar performance.' By adopting a text-based interface to perform and also projecting that text-based interface for an audience to see during a performance, a number of things are achieved. First, for anybody interested, the textual makeup of a performance is shown, showcasing the inner workings of a performance as it comes together, live on stage. This is useful for me as a live coder because I can see how 'cool things' are done as the 'black box' of the performance laptop is removed to some degree - I've learned a whole bunch of techniques by going along to algoraves and following the projections to see what is being done by the performer (this also includes live streaming one's sets, which I have done a decent amount of). In addition to this, for anyone who doesn't understand the specifics of the language being used (or isn't interested) this opening of the laptop performance ecology serves the purpose of exposing the materiality of the performance - in watching a performer type and execute code you are seeing the performer at work, how they respond to various stimuli during performance, and how their thoughts are translated to text. Further, through the selective writing of, navigation through, and execution of text, the kinetic intent of the music is demonstrated. Much as an instrumentalist stamping their foot to a beat more than likely shows the path of their playing, a live coder hurriedly typing ~kickdrum.play (or equivalent) shows their vision of the music in real time.
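
For anyone wondering what a line like ~kickdrum.play might actually stand for, here's a minimal sketch using SuperCollider's ProxySpace (one of several ways to do it, and the synth itself is just an illustrative assumption rather than anything from my sets):

// set up a ProxySpace so ~names become sound-producing proxies
p = ProxySpace.push(s.boot);

// define a very rough kick drum as a node proxy...
~kickdrum = {
	var trig = Impulse.kr(2);                                 // two hits per second
	var env = EnvGen.kr(Env.perc(0.001, 0.3), trig);          // percussive amplitude envelope
	var freq = EnvGen.kr(Env([120, 45], [0.08], \exp), trig); // quick downward pitch sweep
	SinOsc.ar(freq) * env * 0.5 ! 2
};

// ...and this is the gesture the audience sees: the moment the kick comes in
~kickdrum.play;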

More significantly though, I'd argue this projection of text is more than the fleeting glimpse one can see when observing a traditional instrumentalist at work. In watching a performer articulate their music as a text file on screen, I feel as if I am watching a performer build and manipulate a sculpture over the course of a performance, with the form of that sculpture being mirrored in the changes in the music heard throughout the performer's set. That might involve a performer starting from absolutely nothing and building a performance from minimal roots, regularly deleting their entire text and starting again, or a performer loading a pre-written text and selectively executing and modifying it, drawing on an extensive codebase to craft a detailed performance (I've seen Yaxu alone do both), or anything in between. As I perform using SuperCollider, the level of verbosity required means I often type and navigate through text a lot, yet I am always shocked at how little code I actually have at the end of a performance. My performances are usually composed of a select few carefully-maintained symbiotic micro-structures which I edit extensively. I don't write an awful lot from scratch, but I fairly meticulously edit and re-edit what I do write, executing the same piece of code many times in one performance with slight changes to fit the other few running pieces of code.

In watching a live coding performance, you can see the performer not only deal with the environment of performance in real time in a way that is potentially useful to practitioners and (relatively) transparent to 'lay-persons', but see them dealing with both the history of, and potential futures of their performance in an engaging way.

It's also undeniably eye-catching.

So with all of this in mind I decided to take the plunge and learn to live code. I was fortunate enough to have a great opportunity to uproot everything I knew about performing electronic music in the form of my final-year undergraduate dissertation, which I used as an opportunity to deliver a 40-minute live coding performance. I was also fortunate enough to have some teaching on how to live code using SuperCollider from Data Musician and Algobabe Shelly Knotts. I've since played a bunch of Algoraves and live shows (a lot of which can be found here), streamed a whole bunch of sets, and applied live coding approaches to other projects.

Reasonably quickly Live Coding became 'how I made music', and a few realisations followed:

In live coding I could not only embrace alternate traditions of laptop performance, but also paradigms of laptop music. The way I had worked in DAW software had always been dominated by audio loops, MIDI data and VST plugins, and these methods are much less immediately accessible in live coding performance with SuperCollider. Much is made in the live coding community of the role of the algorithm in performance, and I've only recently realised what that actually meant, after initially being quite scared by the 'maths-ness' of the term. In creating a drum pattern in a DAW environment, I would layer together drum loops and play instrumental lines using a keyboard to achieve the desired rhythms, but in a live coding environment I specify a bunch of behaviours to determine how drums are 'played', and similarly with melodies, textures and bass. In performing I am creating multiple rule-governed self-managing instrumental 'players', and shepherding them around to create a performance, rather than 'playing' the music in a traditional sense - this is something that is intuitively quite easy to achieve through live coding in SuperCollider, but something I found quite difficult to achieve in a DAW environment. Incidentally, I find this method of performance much more tactile and 'instrumental' than the DAW paradigm, even though this method of performance was the very thing I was afraid would take the 'human element' out of music!
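
As a rough illustration of what I mean by a rule-governed 'player' (using only the built-in \default synth and standard pattern classes, not anything from my actual sets):

// a self-running 'player': I specify rules for what it plays rather than playing it myself
(
Pdef(\player1,
	Pbind(
		\instrument, \default,                                     // SuperCollider's built-in synth
		\scale, Scale.minor,
		\degree, Pwrand([0, 2, 4, 7], [0.4, 0.3, 0.2, 0.1], inf),  // weighted choice of scale degrees
		\dur, Prand([0.25, 0.25, 0.5], inf),                       // rhythm picked from a small pool
		\amp, Pwhite(0.05, 0.2, inf)                                // wandering dynamics
	)
).play;
)

// while it runs I can re-evaluate the Pdef with different rules
// and the 'player' picks them up without stopping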

Aspects of music as fundamental as pitch and rhythm organisation are easy to experiment with too. I'm a big fan of using Euclidean rhythms and some constrained randomness to generate compound rhythmic patterns, as well as using the Harmonic Series to determine pitch for melodies and textures, and the bare-bones 'do it yourself' nature of live coding in SuperCollider means that I can fairly easily build performance systems based around non-standard musical techniques.
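
For example, here's a small sketch along those lines (a hand-rolled Euclidean rhythm function and harmonic-series pitches, assuming nothing beyond the standard class library):

// k hits spread as evenly as possible over n steps (a Bresenham-style Euclidean rhythm)
~euclid = { |k, n|
	n.collect { |i|
		if((((i + 1) * k / n).floor - (i * k / n).floor) > 0, 1, 0)
	}
};
~euclid.(3, 8); // -> [ 0, 0, 1, 0, 0, 1, 0, 1 ], a rotation of the familiar 3-in-8 pattern

// a pattern gated by a Euclidean rhythm, with constrained randomness
// choosing which partial of a 55 Hz harmonic series sounds on each hit
(
Pbind(
	\instrument, \default,
	\dur, 0.125,
	\amp, Pseq(~euclid.(5, 8) * 0.2, inf),  // the 'off' steps get amplitude 0, acting as rests
	\freq, Prand((1..8) * 55, inf)          // partials 1-8 over a 55 Hz fundamental
).play;
)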

Electronic Music also has problems with diversity, and there are a number of facets of the live coding community that are actively addressing this. There are groups such as SoNA and YSWN encouraging the involvement of women in the live coding community, and socially-concerned organisations such as Access Space are also actively involved. My experience both attending and taking part in live coding events shows a commitment to addressing these issues too - while there is no formal code of conduct, a general commitment to inclusivity in participation (no all-male bills at Algoraves), attitudes and language is commonplace. With the recent #Algofive stream showcasing not only a diverse global network of artists but a diversity of approaches to live coding too, it's a community I'm very proud to be a part of.

Like everything, Live Coding does have its problems. I've realised that all of the freedom that live coding in SuperCollider offers also comes with the drawback that I have to build my own frameworks to perform with, starting from the basics, which is sometimes pretty paralysing. If I'm stuck for inspiration, it's actually quite hard to get myself out of a rut, and discovering how to use different features is actually quite difficult without the software having a 'manual'. Further to this, Open Source software and libraries can sometimes be scantily documented, with incredibly useful tools remaining difficult to access because only the creator of those tools knows how to use them properly. In addition, the issue of performative transparency isn't quite as clear cut as 'I'm projecting code, therefore my intent, action and gesture in performance are immediately and clearly articulated' - in '[showing] your screens', the black box has just been shifted to the processes underlying the code itself. There's also the issue of 'code literacy' presenting a barrier to entry to live coding, but this is addressed through the publishing of learning tools by the community, through languages that require less specialist knowledge to use effectively, and through workshops to engage those unfamiliar with live coding and programming in general. I am also very aware that my somewhat idealistic notions of what I want to demonstrate through performance may well not matter to other performers, and this is fine too.

All things considered, I live code because it allows me to use free/libre/open source tools to create flexible musical environments in which I can perform electronic music in a way that I feel gives me the ability to think and play like an improviser. My initial fears that coding music would lead me to academic 'maths music' turned out to be completely unfounded - performing with live coding is far and away the closest I have come to an 'instrumental' way of performing electronic music. Let's keep going with those repetitive conditionals!

I have written (and am continuing to write) resources/guides/tutorials/docs etc on live coding with SuperCollider here. My website is here.

† As a caveat to this, the closest I probably came to this cause-effect relationship becoming clear while using DAW software was with Mutual Process, an improvised music project with Adam Denton of Trans/Human. For Mutual Process I performed manipulations of live-recorded samples of Denton's guitar, which were fed back to him - and I used a number of controllers to live-patch effects and record/process samples. I had a huge amount of control over this setup to the point where I felt as if I could impact upon the performance with physical control gestures, and embody my action within the music somewhat. Interestingly enough this performance setup was a complete 'hack' of Ableton's core functionality.