PADWORKS Building a solo live performance with iPad

Master's thesis | Tuomas Ahva | May 2016
Master's degree program | Sound in New Media
Aalto University | School of Art, Design and Architecture | Media Lab Helsinki

Introduction

This chapter explains what has drawn me into the world of iPad music making and what I find interesting in it. Why am I doing research like this, what is its background, and what questions am I trying to answer? The research leans heavily on my own practice as an iPad musician; this chapter also briefly explains the practice that is included in the research process.

Research interest

This section explains my motives for doing this research. What makes the iPad an interesting object of study? What novelties does it bring to the music-making process, especially when used in live performance?

The iPad is not a musical instrument by design, but there are a lot of applications that turn it into a versatile musical instrument. In this research I want to explore and exploit the ways the iPad can be used in live situations and show interesting uses for the iPad as a musical instrument. In addition, I want to use live visuals and project to the audience what’s happening on stage and on the touch screen. In order to fully concentrate on the capabilities of the iPad, I’m building a solo live performance.

I’ve been using the iPad as a musical instrument since 2012 and have treated it as an equal instrument alongside my bass, guitar and laptop. In fact, I started with the iPad for no special reason. However, compared to other tablets, the iPad has the largest selection of musical apps available at the moment. Furthermore, when the iPad’s operating system, iOS, is compared with Android, it performs with a shorter latency[1] time (Szanto & Vlaskovits 2015). This may be the main reason why the iPad works better as a musical instrument than other tablets, at least when compared to Android devices.

The research focuses on live situations. The iPad works in other phases of the music-making process, too: it can be used as a portable digital recording studio or as a handy digital notepad when composing. However, the iPad is particularly interesting as a live instrument. Just plug in your iPad and you’re ready for the gig. The possibilities for human-computer interaction are more easily at hand with the iPad than with regular laptop computers. The iPad has a fairly big touch screen for playing, built-in sensors like the accelerometer and gyroscope for controlling the resulting sound, and protocols like MIDI and Bluetooth for communicating with other players and musical instruments. I focus on the iPad in a live situation and exclude the recording and practising phases from the research.

Computers have been used for making music for more than 50 years (Elsea 1996). In principle the iPad is only a portable computer, and by modern computing standards not a very powerful one. Many things that can be done with an iPad can be done faster and more easily with a laptop. On the other hand, many things can be done with the iPad that would be very difficult with a laptop. That’s what interests me.

The first chapter is the introduction. In the second chapter I lay out the methods and define myself as an artist. The third chapter is the theoretical part, which locates the practice in the theoretical discourse of Computer Music and NIME[2]. The practice part begins in the fourth chapter, which describes the different elements of building a musical performance with an iPad. The practice part continues in chapter five, which lists interesting iPad apps and reflects on them from my artistic perspective. I select the apps based both on how interesting I find them as an artist and on how they can be seen as interesting instruments specifically because they exist on an iPad and not, for example, as desktop applications. This selection includes the apps that I use for the compositions, and I analyse what is special about them. In chapter six I go through building the setup for the live performance, both audio and visuals. Chapter seven is the conclusion, where I, as an artist-researcher, go through the composition process and the outcome. Chapter eight discusses next steps for the research.

Background

This section explains how my previous experience supports the idea of doing this research as artistic research.

I’m in a duo called Haruspex where my main instrument is the iPad. I’ve used the iPad as an instrument in various gigs but I’ve never really analysed what I’ve been doing. There isn’t much prior research on the topic of using the iPad as a live instrument. In this research I would like to break down the use of the iPad as a live instrument into parts that can be discussed further.

Along with the gigs with Haruspex, one of the most important gigs was a concert with the iPad Orchestra in late 2014. The iPad Orchestra was an ensemble formed after a year-long course at the Sibelius Academy in 2013–2014. The subject of the course was to discuss and explore the possibilities of the tablet computer as a musical instrument. I took part in the course and got even more interested in the subject. We ended up using iPads instead of other tablets because the iPad had the best selection of musical apps available at the time. The course culminated in a final concert where we played iPads as a band of six players and also together with other, more traditional instruments[3]. The concert was well received by the audience. Many interested people came to us after the concert and asked about the apps that we were using.

There is existing research about a particular iPad app as a musical instrument (Trump & Bullock 2014) and a book about the iPad as a musical instrument (Johnston 2015), and I think there will be more content about the subject available in the near future. Currently, however, this kind of literature is not very abundant. The whole field of mobile technology is developing so quickly that it’s hard to keep track of what’s happening and what the existing possibilities are. The iPad book, together with electronic music guides and numerous how-to-use-a-specific-app tutorials on YouTube, forms a nice background to the topic for anyone starting to make music with an iPad. It’s important to set the research background somewhere, and this is my attempt to do so. I do it as artistic research, using my own art as its basis.

Brown and Sorensen (2008, p. 156) argue that a practitioner is not equivalent to a researcher and that researchers shouldn’t believe they can automatically be practitioners. I somewhat disagree. Of course, it’s not automatic that a practitioner is a good researcher, and vice versa. Klein (2010, p. 1) claims that many practitioners already do a lot of research when preparing their practice. I believe that this knowledge is transferable to research. It’s essential to find a suitable method for researching the practice (Hannula et al 2003, p. 13). I do agree with Brown and Sorensen (2008, p. 156) that it requires a certain capability in both domains. I use my experience of playing the iPad as a live instrument as a starting point to outline the actual practice part of this research. Additionally, I wanted the research to make me compose more music and analyse what I am doing – and by making me build a whole solo live performance around the iPad it surely does this.

Research questions 

This section lays out the research questions. One of them is the most important; the other three follow from it.

Sometimes iPad apps drive me mad; sometimes I love them. Usually I feel that I spend too little time with them to truly explore all of their possibilities. I think that certain things make the iPad a really nice live instrument: it can be used as an electronic music device while still accepting input in a non-discrete manner, a bit like traditional instruments. My personal interest lies in the intersection of those two worlds. I want to find out how a live set can be built using the iPad as its centre point.

Main question:

How to build an interesting and musically versatile solo performance with iPad?

Subquestions:

How does the iPad function as a musical instrument in a live situation?

What is an iPad good for in musical live performance?

What is an iPad not good for and what are its limitations in musical live performance?

Additionally, I want to find new ways to enhance my creativity. I believe that the iPad lies in a field that has a lot to give, already now and especially in the future.

I anchor the research in computer music and NIME research. I use my own practice and art as the basis for the research; the method is called Practice as Research. Throughout the research process I experiment with different interesting apps and keep a diary at the same time. I compose songs for the iPad – for myself to perform, using only the iPad – and I explain how the compositions are constructed. Then, as a final result, I shoot videos of the performances. It’s also a good way to make sure I’m actually able to perform the compositions live.

I go through the compositions in the light of NIME and computer music theories by answering questions like “do I make use of the methods of computer music?”, “does the iPad as live instrument enable expressivity in live performance?” and most importantly, “what are the building blocks and methods of building the solo live performance with iPad?”

Practice

This section gives an overall view of the practice part of the research.

This research is artistic research, and it locates itself theoretically in the literature review in chapter three. I am the artist who produces the art that is used as research material.

The actual practice part of the research consists of four compositions for the iPad. The performance is intended to be performed live, but within the scope of this research I shoot videos of the performances in a studio. However, I want the compositions to require practice; it won’t be just pressing play buttons. By practising I’m able to determine what the iPad is good for, what it is not so good for, and what its limitations are. I want to see how much presetting needs to be done before the gig and to what extent the different apps and ways of playing leave room for spontaneity and improvisation, both positive qualities for me.

When readers review the results of this research they should take into consideration my artistic ideas and tendencies, which are described in chapter two. Zappi and McPherson (2014) have shown that even a simple instrument can lead to a wide variety of musical styles. This research is a subjective interpretation of what the selected apps can do, but it can nevertheless provide insight and inspiration for others, too. I aim to give results that can be applied in many ways.

The goal of this research is not to compare the iPad with other ways of doing similar things; that is not at the core of this research. For me as an artist it’s important and interesting to find out how the iPad enhances the creative thinking and creativity of a musician. In many parts of the research I rely on my own intuition in finding out what I find new and interesting. I’ve been using the iPad in live gigs without analysing the performance. I know what has been working for me and what has been difficult, but now I’d like to dig a bit deeper and find out why certain things work and other things don’t.


Method

In artistic research, the researcher tries to be in a relationship with the subject (Hannula et al 2003, p. 46). It’s important to find out what the expectations, needs, interests and fears towards the research subject are (ibid.). I interpret this guidance to mean that it’s important to write down one’s feelings towards the topic, the research and the practice. The goal of this chapter is to describe the general qualities of artistic research, my research process and my artistic tendencies.

Artistic research

The point of this section is to empower me to do the research as I intend to do it and to describe on a high level how artistic research should be carried out.

Hannula et al (2003, p. 9) write about artistic research: “Results of the research, the outcome, are a surprise for the researcher(s).” This idea inspired me to do my master’s thesis as artistic research. I want to explore the unknown. I want to do something experimental. There’s something similar in what Hélène Cixous (2008, pp. 145–147) writes: “So, you’re tracing a secret that is escaping. You’re approaching the secret and it escapes. Painting or writing takes place when you’re tracing a secret, as a matter of fact, painting or writing is tracing a secret.” I feel that doing artistic research is similar to what Cixous writes – I want to trace a secret. According to Barrett and Bolt (2007, p. 5), artistic research provides a more profound model of learning – “one that not only incorporates the acquisition of knowledge pre-determined by the [researcher’s] curriculum – but also involves the revealing or production of new knowledge not anticipated by the curriculum” (ibid.).

According to Hannula et al (2003, pp. 13–14), artistic research should only be done if the research has an impact on the art and the art has an impact on the research. The research then has the potential to shed light both artistically and scientifically. The whole research process should be tightly coupled with the art. If the art has no impact on the research, then Hannula et al (2003, pp. 13–14) claim that the artistic part is separate from the research and it doesn’t make much sense for the researcher to also be the artist of the research.

I base my research on the ideas and approach of Hannula et al (2003), whose work formed the basis of artistic research in Finland. Hannula et al (2003, p. 16) highlight that the artistic and scientific sides of the research should interact with each other throughout the research. Only then can the research be critically reflected upon by the artist-researcher. An important question is (Hannula et al 2003, pp. 16–17): how does the experience of the artist guide the formation of theoretical knowledge – and vice versa, how do the reading, thinking and theoretical discussion guide the artistic experience? It’s important that the artist-researcher explicitly describes all the hermeneutical loops, re-evaluations of the topic and choices of discourse that the artistic experience leads to (ibid. p. 17).

In my case the normal research cycle begins with me reading internet forums on musical iPad apps and getting inspired by a new app or a feature that someone has discovered. After discovering something interesting I start jamming, and after the jamming session I write a few sentences about it, answering questions like: what did I find interesting in the session, what problems did I have, and what settings do I need to make in order to do it again? Then, based on the diary entries, I record some kind of demo as an audible diary entry. With the demos I write more specific instructions that will become a composition at some point. Finally, I read research papers on the subject, putting the research into context for myself.

This artistic research cycle is depicted in figure 1 below. It is not as strict as it sounds, though. During the research I jump between different tasks and let the whole process inspire me; in fact, the art inspires the research as much as the research inspires the art.


Figure 1. My research process cycle.

I also have a university education in engineering, and I have been involved in software projects. I’m highly interested in what technology, especially mobile technology, will bring us in the future. I wish this to be part of my artistic identity, and I want to make music using methods that are inspired by technology. In my opinion this guides the artistic experience fairly drastically. One of the possible paths for this master’s thesis was to develop a musical instrument app of my own. But then I realised there’s still so much uncharted territory in the existing iPad apps that I want to go there first. All the brilliant developers deserve to have their inventions put to use. Maybe the time for my own app comes later. In general, my background in technology and engineering practices is definitely one of the reasons why I’m interested in new innovations in interfaces and ways to create interesting soundscapes.

Another essential element of good research is criticality (ibid. p. 25). Scientific ideas should be tested systematically against empirical results so that erroneous ones can be discarded. In this research, the concepts of performing live with an iPad are tested in practice and erroneous assumptions are discarded in the conclusion. For example, if I assume that I’m able to build a rhythmic song without syncing the tempo between different apps, I clearly have to state whether that is possible or not. If not, I need to find another way to sync the rhythmic elements between the apps. If that proves too difficult, I need to describe the process and state that the assumption was erroneous.

How can the artist as a researcher maintain critical distance from the research process? Barrett and Bolt (2007, pp. 140–141) list three things to be considered. Firstly, the researcher should locate himself in the field of theory and practice in the literature review. Secondly, the researcher should have a clear methodological and conceptual framework in which the researcher argues, demonstrates and uses terms like “I conclude” and “I suppose” as they relate to the hypothesis and design of the project. Thirdly, the researcher should discuss the work in relation to lived experience, other works, application of the results obtained, contribution to discourse, new possibilities, obstacles encountered and the remaining problems to be addressed in future research.

I think these three points are covered in this research. The field of theory is discussed in the next chapter. For the second point I’ve adopted a fairly personal research approach, relating the theory to my own thinking. The third point is rather vast, but wherever applicable I relate the research to my career on a larger scale. I think keeping a diary helps with the conclusions.

In addition, Barrett and Bolt (2007, p. 139) emphasize that the researcher should make a clear statement of the origin of ideas, relating them to current and previous projects: “The researcher should trace the genesis of ideas in his/her own works as well as the works/ideas of others and compare them and map the way they inter-relate and examine how earlier work has influenced development of current work and identify gap/contribution to knowledge/discourse made in the works. The researcher should also assess the work in terms of the way it has extended knowledge and how his/her own work as well as related work has been, or may be used and applied by others.”

There’s not much academic discourse on the topic of this research, but there is a lot of material available online produced by other iPad musicians. Sometimes, when the creative process is heavily based on intuition, it’s not possible to trace the influences, but whenever I can clearly state where an influence comes from, I have added it to the diary I’m keeping.

Klein (2010, p. 4) argues that there is no real distinction between scientific and artistic research: both aim at gaining broader knowledge within the field of the research. Artistic research can therefore also be scientific (Ladd 1979, in Klein 2010, p. 4). Artists also argue that the definition of science is somewhat ambiguous (Hannula et al 2003, p. 10). I think this is a good counterargument to those who criticize artistic research for its lack of scientific methods.

In this research I adopt the freedom of artistic research and engage in it without worrying about how I should categorize the research in the academic world. After all, I have a research question and the ways and means to find an answer to it. I think that will take me quite far.

Myself as a musician

My iPad musicianship is defined through my previous experience with the iPad, and it is used as a point of reflection in this research. The goal of this section is to set expectations about the kind of music I’m going to make, the artistic choices behind it, and my experience in using the iPad as a live instrument. Experience, feelings and aesthetics are important concepts in this.

Being a musician often requires virtuosic handling of the chosen instrument. However, I don’t think I master any instrument so well that I could be hired by an orchestra to play parts from sheet music. I’ve never practised any instrument for an extensive span of time, but somehow I’ve always had a drive to become a musician. Hours of independent practice of fingerings and scales haven’t appealed to me, and I haven’t been able to define what kind of musicianship is my cup of tea. I started studying engineering, but I’ve always regarded music as my dearest pastime.

My growth as a musician has followed very ordinary lines. My parents made me play the piano when I was eight. After a few months I wanted to quit. My parents warned me: “If you quit, someday you’ll regret it.” And how painfully right they were! Fortunately, a few years later my good friend Juha asked me to start playing the bass; he wanted to play the guitar. That’s how it finally got started. We’ve been playing together ever since, and I’ve been able to call myself a musician.

Nowadays I tend to define my musicianship using the following five points of view.

  1. Sense of musical community and communication
  2. Emphasis on live performance
  3. Experimental pop as a genre, with contradictions and surprises
  4. Physicality and honesty in live performance
  5. Playfulness

The first characteristic that defines me as a musician is the sense of musical community and communication. A few years after starting to play the bass I realized what appealed to me in music and what was at the core of my musicianship. It wasn’t just the fame and fortune of potential rockstardom. It was those moments when I was playing together with the band and we communicated through playing. We were even able to tell each other jokes by playing, and to convey feelings by playing. It was a new experience to me. Improvisation played a significant role in this. I admire virtuosity and virtuoso players, but I don’t think virtuosity matters if the music that’s being played doesn’t suit the social context, or if the music doesn’t deliver the feelings it’s supposed to.

“If you want to make it to the top, practice.” Maybe it wasn’t those exact words, but that’s how I remember a Sprite commercial from the 90s telling how amateur basketball players would make it to the NBA. What’s the NBA of musicians? Perhaps pondering over that has led me to a situation where practising hasn’t exactly been at the core of my musical development. There are moments when I regret that. I enjoy practising and seeing my development, but I often feel that I need accompaniment, I need other people to play with. I need band mates.

But as I’ve grown older I’ve realised it’s more and more difficult to find common free time to practise regularly. So I needed another approach to find the motivation to make myself practise, and I found it in live performance. If there’s no band to play with, I need an audience to play for. That’s why live performance is such an important aspect for me. It makes me practise, and perhaps some day it will pay off and I’ll notice I’ve become some sort of virtuoso myself. I enjoy spending time in the studio, absorbed in the sounds and music there, but – to me – the true essence of music is playing live. The second characteristic of my musicianship is this emphasis on live performance.

The third thing that defines me as a musician is related to how I feel about musical genres. It may be said that currently my genre is electronic music, but I’m not leaning on the traditions of electronic music. I want to explore how electronic and computational means can be used to create music that defies genres.

In my personality one feature prevails: I want to please other people. That usually means I lean towards pop music. However, I also want to surprise people and do tricks, and that’s not usual in pop. Perhaps, as the little brother of my family, I’ve become the entertainer without the need to take responsibility; I can concentrate on doing tricks. Some of my idols in music, e.g. groups like Animal Collective, The Books and Battles, are quite experimental while still maintaining something ‘pop’ in their music. I want to present contradictions.

The fourth aspect that defines me as a musician concerns the traits that I think make a good live performance: physicality and honesty. I like it very much when a musical performance is at the same time a physical performance. I find live coding[4] a very interesting subject, but it lacks the physicality that is often tied to traditional instruments. I also think that laptops in general lack the physical aspect as live instruments, and the iPad brings that idea back to computer music without the need to tangle with physical controllers.

Another aspect of a good live performance is being honest to the audience. For example, it’s easy to cheat the audience so that a musical performance looks physically more demanding than it actually is, as is arguably the case in many EDM[5] live performances. What’s important is that the performer shouldn’t feel as if he/she is betraying the audience by making the musical performance look more complicated than it actually is. In short, I tend to think that playing music live requires some sort of special skill, technical or artistic, and live performance is about showing this skill to the audience. But if this skill is merely, say, a push of the play button, then I in the audience feel betrayed and start wondering whether the performer is doing it for the sake of art or for something else. That’s what I consider honesty in live performance. I think honesty is important in life, and I want to remember that in my art, too.

The fifth aspect of my musicality is playfulness. I don’t avoid dark music or negative feelings, but my wish is that even in the darkest moment there’s a blinking light ahead that makes the listener smile. Perhaps my compositions are more often in major than in minor.

I believe that all these five things can be heard in this research, and they go well together with my music and the subject – iPad as a live instrument.  

My experience as iPad musician

I’ve used the iPad as an instrument in various gigs, mostly with my band Haruspex. It’s a duo consisting of Ava Grayson and me. We started with improvised noise music. It was a good approach for us, because we could just start playing without defining what we were actually playing. Ava plays her laptop and I play various iPad apps. Ava has created fantastic MAX[6] patches in which she usually manipulates sound samples and creates new soundscapes from them. My role is to add another layer on top of what she’s playing. We’ve been really happy with what Haruspex has been doing and we plan to release an album in late 2016. Haruspex has been a really fruitful playground for me to play the iPad in. It made my role as the iPad player easy because I wasn’t restricted to any particular musical quality or any particular app. I could choose any musical application and create sounds with it.

I like to consider the iPad more as an acoustic instrument than a computer, so that I can give the computer analog commands instead of precise keyboard commands. Achieving that with an outcome that falls within specific limits (tempo, scale, chord progression) usually requires making settings beforehand. It means a little bit of work but pays off later: when an instrument makes spontaneous changes possible in the music that I’m playing, it is also more expressive to me.

We also like to have a visual element in our performances. Sometimes we project the screen of the iPad for the audience, sometimes we project videos from a laptop. Projecting the iPad screen works nicely with some apps, like Tachyon and Geo Synth, and I’d like to use the iPad more for the visuals, too. The grand idea behind using images from the iPad is that I want to fight the notion some people have that electronic musicians might just as well be checking their email as performing music while playing their electronic instruments.

That’s pretty much where Haruspex was left in summer 2015, and that’s where I’m continuing from with the practice part of this research.

One important occasion for me as an iPad musician was the concert of the iPad Orchestra at the Sibelius Academy in Helsinki in late 2014. I like to think of it as an occasion where I and the players with me legitimised iPads as musical instruments. We played about 10 songs, a few classical compositions and some improvised music, both unaccompanied and accompanied by traditional instruments. My personal contribution was fairly modest; I played in two songs. In the first one I wanted to showcase how the iPad can be used to approach live playing with a very low barrier. I played Bach’s ‘Air on a G String’ with Magic Piano[7], accompanied by a fellow iPad musician playing a real cello. The nuances that the sound of Magic Piano provided left a great deal to be desired, but I was nevertheless happy with the results. The other song was ‘Norwegian Wood’ by the Beatles. I invited all the members of the iPad Orchestra to play in the song. I played acoustic guitar sounds from GarageBand[8] and a sitar sound from SampleTank[9], using GeoSynth[10] as an interface.

The feedback we got was very positive. I was very happy with how the arrangement of ‘Norwegian Wood’ worked. The band of six iPads worked well. We didn’t sync the iPads but played them as if they were traditional instruments. The apps we used were GarageBand for guitars and electric piano, Animoog[11] for the bass and Impaktor[12] for the drums.

For a while there was a buzz about our gig, but we haven’t played new gigs. I think the main reason is that it doesn’t make a big difference whether the songs we played are performed with more traditional instruments or with iPads. There’s just the novelty factor of using iPads, and after we showed ourselves, and the world, that iPads can actually be used in the way we used them, the idea of playing more became rather boring. I think that we should find material for the set that is specifically related to iPads, not just any music or any songs. That’s one of the reasons why I’m doing this research, too.

In addition to Haruspex and the iPad Orchestra I’ve used the iPad on various more informal occasions, mostly at friends’ parties and as a DJ player. I’ve used it many times as a drum machine (mainly the DM1 app) in live gigs. I think the iPad functions as a very good drum machine, actually being more versatile than a hardware drum machine, because it can at the same time be used as (nearly) any other instrument, too.


Feelings  

The expectations that I have towards this research are divided in two. On the one hand there are the academic expectations: showing the academic world that I’m able to handle the subject in a meaningful way and conduct artistic research. On the other hand there are my personal artistic expectations, which I’ve set quite high. Now that I’m devoting so much time and effort to a project, the outcome should be nearly perfect.

But it’s good to say it aloud and honestly: the outcome is probably not the ultimate masterpiece. However, it is something interesting, and the most important thing is that it gets finished and isn’t forgotten in a digital drawer. It’s wise to get the research done and out in the world as soon as possible. The subject is rather new, and there is fairly little research on it. But there is related activity going on all the time, and eventually there will be research, too. The sooner I get this done, the greater the chance that other people will find it interesting and even useful. My personal interest in the subject is to produce something unique that makes clever use of emerging technology – and makes me produce interesting music.

I feel great about doing something that makes me a more productive musician. But it’s also contradictory. I often tend to think that music is most genuine when it works as a whole – when I as a listener pay attention to the entire creation instead of only analysing details and how it’s been produced. Now, for the practice part of the research, I do the opposite: I produce music through analysis. One of my goals is to expose the musical interface to the audience and make them see how the music is created. It’s a little bit frightening. Will I become less interesting and too contemplative as a musician? On the other hand, only the audience, i.e. the viewers and listeners, can provide me with this insight.

Aesthetics

Personally, in this artistic research, the most significant result is the music that comes with the research. But how can I tell whether the music is good or not? In the context of the research it might not matter, but personally it’s very important. The aesthetics need to be validated by a bigger audience. The aesthetic judgement makes an important contribution to the pragmatism of the research (Brown and Sorensen 2008, p. 161). However intimidating it is, I need to give the result to the general audience to judge, rate and review. If the end result is aesthetically pleasing to others too, it will keep me pushing forward and not leave the practice in this research as a one-time experiment. That’s also something worth striving for.

What are my aesthetic preferences besides experimental pop music in major? My style is a mixture of many sources. There isn’t one particular genre of music that my music belongs to. Perhaps the most significant feature is that I want to cross the borders of different genres – but break the boundaries in a kind way. My intention is not to shock like punk in the 70s, or to be progressive for the sake of progressiveness. I think the main goal is to inspire both feelings and thoughts. I want to tickle both sides of the human brain.

I’ve grown to listen to and experience all kinds of sounds and music. I want to bring out the best sides of the different musical worlds that I know. If I’m patiently painting a long soundscape, at some point I want to reward the listener, and myself, with a hook. If I’m trying to construct a perfect pop song, I want to add experimental sounds, or use experimental methods to produce it. I want to compose electronic music that doesn’t sound like electronic music. I want to compose music that has an acoustic feel to it but makes use of the latest inventions in music technology.

I was very inspired by a talk given by the musician, composer and sound designer Tuomas Norvio in May 2015. He gave a speech at an event aimed at theatre sound designers and spoke about his methods and thoughts on sound design. His latest work had been sound design for dance and circus performances, but he has a background in the pop group Killer and in the pioneering Finnish electronic music group Rinneradio. I was inspired by the way he explained how he creates different forms using sound. With the forms he conveys the feelings that the director wants to convey. The most important thing he has realized is that those forms don’t need to fit any pattern or scale or genre. The sounds need to act as messengers, and he tries to deliver messages that he thinks are effective but also aesthetically pleasing. I thought his words were very wise and I could easily share his views.

Theoretical background

The theoretical background provides an understanding of what previous researchers have discovered. In this chapter I go through what can be done musically with a computer, or with a computer full of sensors, and what aspects should be taken into consideration when designing a new digital instrument. Not all the points covered in this chapter show up in the practice section, but at least they point readers towards further reading if some field receives too little attention here.

The first part of the theoretical background comes from the field of electronic music, or more precisely computer music. The distinction between electronic and computer music is nowadays not that clear, because in principle nearly all the aspects that were once essential to electronic music can be achieved in the digital domain using computers.

The second part of the theoretical background is in NIME. Some research has already been done in the NIME community on using the touch screen, different sensors and the iPad as a musical instrument.


Computers in music

The point of this section is to explain that music in the context of this research should not be regarded as Computer Music, or Electronic Music, or anything special – just music. This idea is communicated by going through how computer music has evolved over the past 50+ years and how computers have gradually become more and more responsive and interactive. Nowadays computers are just one way to make music, and a rather important one. Music created with computers doesn’t necessarily have to sound like what we normally think of as computer music.

“After humans started making music with something else than our voice and heartbeat, we incorporated machines of sorts in our music. And computers are just one example in that process.” (Cox & Warner 2004, p. 113) I think this quote summarizes how I feel about music technology today. I feel that computers are not something separate from other means of making music. A computer can be used for creating music just like any more traditional musical instrument. On the other hand, a computer as a musical instrument is more powerful than a traditional acoustic instrument, because computers can take and obey orders and repeat them as long as wanted, while the player does something else or adds something on top of what’s already being played. Bongers (2007, p. 9) says: "The essence of a computer is that it can change function under the influence of its programming." Computers do exactly and literally what we tell them to do, without the ability to interpret what we might mean. Computers are also very strict in how they take input. Traditionally we manipulate computers with devices such as keyboards, where a key is binary: it’s either pressed or not, with no middle ground. I believe this has influenced how computers have been used as musical instruments.

However, the use of the touch screen and sensors for giving input to the computer is gradually changing how computers are used as musical instruments. Already the addition of a touch screen is a step towards computers becoming better instruments: they can be manipulated with ten fingers. Different sensors provide even more analog-like[13] means of giving orders to the computer, which makes the computer a more expressive instrument.
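
To make the contrast with binary key presses concrete, the following small Python sketch of my own (not code from any app discussed in this thesis; the function and value ranges are purely illustrative) shows the kind of continuous mapping a sensor makes possible: a tilt value from an accelerometer is mapped smoothly onto a filter cutoff frequency.

    # A hypothetical, illustrative mapping - not an existing iPad API. A tilt value
    # read from an accelerometer (normalised to -1.0 ... 1.0) is mapped to a filter
    # cutoff on a logarithmic scale, so equal tilts feel like equal musical changes.
    # Compare this to a key press, which is simply on or off.
    def cutoff_from_tilt(tilt: float, low_hz: float = 200.0, high_hz: float = 8000.0) -> float:
        t = (max(-1.0, min(1.0, tilt)) + 1.0) / 2.0   # clamp and rescale to 0..1
        return low_hz * (high_hz / low_hz) ** t

    print(cutoff_from_tilt(-1.0))   # 200.0 Hz (fully tilted one way)
    print(cutoff_from_tilt(0.0))    # ~1265 Hz (held level)
    print(cutoff_from_tilt(1.0))    # 8000.0 Hz (fully tilted the other way)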

It’s likely that as soon as we figure out better ways to give (more ambiguous) orders to computers, and are able to teach a computer to interpret the player better, computers will be even better instruments. But that may require a bit more intelligence than computers have today.

There was a time when I personally disregarded electronic music as not proper music and not to my taste. I thought that there’s little life in a steady electronic beat and basically no groove. Recently I’ve changed my mind. I think it’s mainly because enough time has passed and computers have invaded the music production world so thoroughly that I’ve been exposed to many groovy examples of electronic music. I also think that a combination of a drum machine and a real drummer produces groovy results. In one way or another computers are part of nearly every commercial music production, if not as instruments, then in the recording or post-production phase. Anything you can do in an analog studio and with analog instruments you can do with computers, and even more. And this is constantly evolving.

I don’t think there’s a special need to talk about computer music as such, unless something specific is meant by it. Computers are used in music, but not necessarily for computer music. However, it’s still important to state that this is research on digital electronic instruments.

A musical instrument is an object that has been created with the intention of producing musical sounds. Bongers (2007, p. 9) divides musical instruments into different types.

Computers are digital electronic instruments. According to Bongers (2007, p. 9), digital electronic instruments play an important role in the development of instruments: "Although there have been programmable mechanical systems and analogue electronic computers, the digital computer has had the biggest impact on society and therefore forms a separate category." This doesn’t mean that computers have to be kept separate from other instruments. Just as we mix string instruments with percussive instruments, we can mix digital instruments with other instruments. Taking the idea even further, the power of digital instruments is that they can mimic the sounds of all the other groups and go beyond them, providing sounds that no other instrument does (Bongers 2007, p. 10).

However, I think the distinction between electric and electronic is important. The evolution from electronic instruments to digital, computer-based instruments has brought freedom to the design of the interface. Players’ interaction with traditional instruments is often tied to the physical qualities of the instrument, whereas in digital instruments the sound engine is usually separated from the playing interface. In the digital domain the playing interface can be designed separately, and it’s rather easy to try out different interfaces and come up with the most suitable one for the situation. In the context of this research a digital electronic instrument is any computer program that can be used for producing musical sounds.

One issue that is often underlined in computer music is how the audience feels about experiencing it in a live situation, mainly because the sound source, or the action that triggers the sound, is not visible to the audience. It can cause mixed feelings. One might argue that the listener is very rarely interested in how music is produced or made. I would say that in live music it makes a difference, at least in terms of expressivity. If the music comes to life via a complex but elegant computer algorithm, but the audience cannot see it, the impact remains small.

One approach to tackle this is to take the performance and the musicians away from the live concert and offer an acousmatic[14] concert experience without any intention to give a visual representation of the music, only loudspeakers. Another approach, and mine as well, is to make the computer screen visible and show the audience what kind of interaction is taking place. This can be done, for example, by projecting the screen of the computer to the audience. Collins (2003) puts it bluntly: “How we could readily distinguish an artist performing with powerful software like SuperCollider[15] or Pure Data[16] from someone checking their email whilst DJ’ing with iTunes?”


Brief history of computer music

Computers have been making intentional noise since 1947 (Elsea 1996). The incorporation of computers into music started slowly, but now, almost 70 years later, computers are used for many tasks in music making. Probably even the most purely acoustic recording is produced with some computing power involved at some phase of the production. In a live set the most common place to see powerful computing is at the table next to the performer, in the form of a laptop, usually with a bunch of musical controllers to trigger the sounds and the music.

The earliest programming environment for sound synthesis, called MUSIC, appeared in 1957 and was written by Max Mathews at AT&T Bell Laboratories (Wang 2007, p. 58). The same year the Illiac Suite – the first complete computer composition – was created using computer algorithms (Essl 2007, p. 112). After Mathews’ initial contribution, the main development took place in various music research centers, such as M.I.T., the University of Illinois at Urbana-Champaign, the University of California at San Diego, the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, and the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris (Elsea 1996). Research at these centers was aimed at producing both hardware and software (ibid.). For a decade the sound synthesis software (and punch cards) was coupled with the particular hardware platform it was implemented on, but in 1968 computer music programming was implemented in the high-level general-purpose programming language FORTRAN and could be ported to any computer system that ran FORTRAN (Wang 2007, p. 60).

In the early days of computer music, computers were treated separately from other instruments, even from electronic ones. Since then, computers and electronics have been getting closer to each other, and electronic instruments such as synthesizers have become more and more digital. The convergence of computers and other instruments started already in the 60s. However, composers only started to get involved in the computer music scene in the late 70s. The use of computers for composing was difficult because the computers were located in the research centers, and the process of getting compositions into audible form was very time consuming. For composers interested in new sounds and new possibilities, the analog synthesizer provided faster results than computers. This led to a situation where the full advantage of computers was not utilized: computers were not used as a sound source but to control analog synthesizers. This also led to the development of sequencers. (Elsea 1996)

Finally it was the development of the microprocessor in the 70s that made computers controlling synthesizers accessible to composers and musicians who were not involved in the network of research institutions. In 1981 a consortium of musical instrument manufacturers began talks that led to the MIDI[17] standard in 1983. This made it possible to connect any computer to any synthesizer. Since then music stores have become stuffed with MIDI devices of all sorts, and the hybrid system is nowadays the norm. (ibid.)
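
To give a concrete picture of what the standard actually specifies, the following small Python sketch shows the three bytes that make up a MIDI note message. It builds the raw bytes by hand rather than using any particular MIDI library; in practice an app or framework does this for the player, and the note and velocity values here are arbitrary examples of mine.

    # A MIDI 'note on' message is three bytes: a status byte (0x90 plus the channel),
    # the note number (0-127) and the velocity (0-127). 'Note off' uses status 0x80.
    def note_on(channel: int, note: int, velocity: int) -> bytes:
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def note_off(channel: int, note: int) -> bytes:
        return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

    msg = note_on(channel=0, note=60, velocity=100)   # middle C, moderately loud
    print(msg.hex())                                  # prints '903c64'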


According to Elsea (ibid.), the coming of MIDI had little effect on the computer music research institutions, because MIDI was primarily used where quick and simple connections were needed. Research centers were interested in other things, for example developing better ways to give orders to the computer, such as Csound (ibid.). MIDI had the greatest impact on the commercial sector, not only because of the connectivity but also because of the price. The Yamaha DX-7, the first programmable digital music synthesizer equipped with MIDI, was priced at two thousand dollars, whereas the previous-generation machine capable of music synthesis, the PDP-11, cost a hundred thousand dollars (Schedel 2007, p. 29). It meant that electronic and computer music gradually found their way to the musicians on the street as well.

Different ways to use computers in music 

Computers are used for two different tasks in music: 1) electronic methods of producing sound and music and 2) computational methods of making music (e.g. Collins & d’Escrivan 2007). The boundary between those two approaches is blurring all the time, since there are DAWs[18] that are mainly used for recording and sequencing but at the same time include several ways to create and manipulate sounds and use computational methods for creating music.

Taking the idea a bit further, computers are used for five different tasks (ibid.):

  1. traditional sequencing and multi-track recording
  2. sound synthesis (and effects)
  3. creating algorithmic processes
  4. music research (to study properties of sound or rules embedded in musical aesthetics)
  5. creating digital instruments and augmenting existing instruments, i.e. hyperinstruments

In the context of this research, tasks 1, 2, 3 and 5 are the most important. In this chapter I mostly concentrate on the computer as a tool for creating algorithmic processes; that’s where the real novelty lies in using a computer as a musical instrument. The fifth task, creating digital instruments and adding computational power to existing instruments, is discussed in more detail in the next chapter.

For the first task – traditional sequencing and multi-track recording – it’s quite clear to see the advantages of a computer over traditional analog tape for recording and sequencing. Since everything is done digitally, there are simply no physical tapes to store and manipulate, and the scalability in the digital domain is much greater. All the cutting, pasting and looping activities have become significantly easier with digital recording studios, not to mention the power of ‘undo’.

The following two tasks are about algorithms. An algorithm is a sequence of instructions for solving a specific problem in a limited number of steps (Essl 2007, p. 107). Every algorithm can be translated into a computer program, and computers are usually, if not better, at least many times faster at processing algorithms than humans. Basically, digital sound synthesis and effects are also algorithms, but I treat them separately, because synthesis and effects exist in the analog domain, too. Computers are very good tools for sound synthesis and effects. Digital signal processing (DSP) is the method in which sounds are constructed sample by sample. Effects are applied using the same principle: taking in one sample at a time and calculating a new value for it according to the effect algorithm. Computers can be used to do very precise manipulations, but a lot of computing power is also required. A second of CD-quality audio consists of 44,100 samples, so in practice the computer needs to process over 44,000 samples every second. This hasn’t been a big task for computers for a long time, but for complicated effect calculations even modern processors may be using their full capacity.
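
As an illustration of what ‘sample by sample’ means in practice, here is a minimal Python sketch of my own (not how any particular app is implemented): one second of a sine tone is synthesized and a single echo is applied to it, one sample at a time.

    import math

    SAMPLE_RATE = 44100  # CD quality: 44,100 samples per second

    # Synthesis: one second of a 440 Hz sine tone, built sample by sample.
    dry = [0.5 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]

    # A simple effect, also computed one sample at a time:
    # a single echo that repeats the signal 250 ms later at half the volume.
    delay_samples = int(0.25 * SAMPLE_RATE)
    wet = []
    for n, x in enumerate(dry):
        echo = 0.5 * dry[n - delay_samples] if n >= delay_samples else 0.0
        wet.append(x + echo)

    print(len(wet), "samples processed")   # 44100 samples for one second of audio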

The basic pattern of how a digital instrument works is this: there is input from a human player or from another musical application, then processing according to the parameters of the input, and then output. It’s a simple pattern, but the advantage of digital instruments lies in the fact that these simple elements can be highly complex. The input doesn’t necessarily have to come from a human player, and there may be several inputs to the same system. The process can be defined and programmed to the finest detail by the instrument maker, and it can be highly complex, too.


Figure 2. How all digital instruments work: input, process, output.
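
The following minimal Python sketch illustrates the pattern of Figure 2. The names and the mapping are my own illustration only; a real instrument would read a touch position or a sensor value and send audio or MIDI out instead of printing.

    import random

    def read_input() -> float:
        """Stand-in for the input stage: here just a random touch position 0.0-1.0."""
        return random.random()

    def process(position: float) -> float:
        """Map the position onto a pitch within one octave above A 440 Hz."""
        return 440.0 * 2 ** position

    def output(frequency: float) -> None:
        """Stand-in for the output stage: print instead of making sound."""
        print(f"play {frequency:.1f} Hz")

    for _ in range(4):
        output(process(read_input()))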

The third task – algorithmic processes – is very much a product of the digital domain. There are many ways in which computers can be programmed, and many ways to create algorithmic processes. Computers can play patterns on their own, act as accompaniment, or analyze music and play according to that analysis. They can be used as generative music machines, playing constantly evolving music on their own. Or, more simply, computers can be used to produce random values and thus bring automatic variation to the musical process.
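
As a small example of such automatic variation, the following sketch (my own, not taken from any specific app) performs a random walk over a pentatonic scale and so produces an ever-changing but scale-bound melody.

    import random

    SCALE = [60, 62, 64, 67, 69, 72]   # C major pentatonic, as MIDI note numbers

    def generate(length: int, start_index: int = 0) -> list[int]:
        """Random walk over the scale: move up, down or stay, one step at a time."""
        notes, i = [], start_index
        for _ in range(length):
            i = max(0, min(len(SCALE) - 1, i + random.choice([-1, 0, 1])))
            notes.append(SCALE[i])
        return notes

    print(generate(8))   # e.g. [60, 62, 62, 64, 62, 60, 62, 64] - different every run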

The fourth task – music research – relates to the fact that synthesis can be seen as the opposite of analysis. If something can be constructed, then it can also be deconstructed, or analysed, and computers are used for analysis. There are many aspects that can be analysed in music, starting from the contents of the sound wave, like pitch, tone colour, tempo and harmony. But computers can also analyze the contents of music on a macro level, such as repeating patterns and musical style. Many of the analysis techniques are also used for effects and sound manipulation, tasks that were not possible with analog electronic equipment.
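
As a toy illustration of analysis as the opposite of synthesis, the following Python sketch synthesizes a tone and then recovers its pitch with a naive FFT peak-pick. Real pitch trackers are far more robust; this only shows the principle.

    import numpy as np

    SAMPLE_RATE = 44100
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    signal = np.sin(2 * np.pi * 440.0 * t)           # synthesis: a 440 Hz sine

    spectrum = np.abs(np.fft.rfft(signal))           # analysis: magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), 1 / SAMPLE_RATE)
    print(f"estimated pitch: {freqs[np.argmax(spectrum)]:.1f} Hz")   # ~440.0 Hz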

The fifth task – digital instrument building – is about how computational power is used for building and augmenting instruments; how the aforementioned ways of using computers for music making take place in instruments intended for live playing.


Let the instrument play

Computers in music have made new kinds of composition methods possible, and at the same time they have caused disruption in the social and cultural practice of music making (Rowe 1993). I think what Rowe says is true for many disruptive practices. Music would have survived just fine without the introduction of computers, but their appeal was so strong that researchers wanted to keep using and researching them. Where there’s something new and emerging, there’s also conservatism, which causes resistance.

One example of new composition techniques is interactive music systems, or interactive instruments. Chadabe’s (1997, p. 291) definition of interactive instruments says: "These instruments were interactive in the same sense that performer and instrument were mutually influential. The performer was influenced by the music produced by the instrument, and the instrument was influenced by the performer’s controls." Simply put, there’s an algorithm in a computer that causes it to change its behaviour based on the input of the player. Interactive music systems may create a shared creative process in which the computer influences the performer as much as the performer influences the computer (Drummond 2009, p. 125).
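
The following toy sketch, written in the spirit of Chadabe’s definition, illustrates the idea. All names and rules in it are my own invention: the accompaniment thins out when the (pretend) player is busy and fills in when the player is quiet, so each side influences the other.

    import random

    class Accompanist:
        def __init__(self):
            self.density = 0.5   # probability of playing a note on each step

        def listen(self, notes_played_by_human: int) -> None:
            """If the player is busy, thin out the accompaniment; if quiet, fill in."""
            if notes_played_by_human > 3:
                self.density = max(0.1, self.density - 0.1)
            else:
                self.density = min(0.9, self.density + 0.1)

        def step(self) -> str:
            return "note" if random.random() < self.density else "rest"

    acc = Accompanist()
    for human_notes in [5, 5, 1, 0, 4]:      # a pretend stream of player activity
        acc.listen(human_notes)
        print(human_notes, "->", acc.step(), f"(density {acc.density:.1f})")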

According to Rowe (1993) algorithmic composers explore some highly specific techniques of composition at the same time that they create a novel and engaging form of interaction between humans and computers. Such responsiveness allows these systems to participate in live performances of both notated and improvised music (ibid).

Chadabe (1997, p. 291) writes about interactive instruments: "musical outcome from these interactive composing instruments was a result of the shared control of both the performer and the instrument’s programming, the interaction between the two creating the final musical response." Drummond (2009, p. 125) says that interactive computer music systems such as those Chadabe describes challenge the traditional, clearly delineated western art-music roles of instrument, composer and performer. And there’s no reason why a computer musician couldn’t mix interactive musical systems with other instruments that have a simpler relationship between input and output.

Drummond (2009, p. 124) says that "an interactive system has the potential for variation and unpredictability in its response, and depending on the context may well be considered more in terms of a composition or structured improvisation rather than an instrument." It’s difficult to define the borders between interactive instruments (or systems) and structured improvisation compositions.

Another field where computers provide an advantage over other means of making music is generative music. Generative music is a term popularised by Brian Eno, referring to music that is ever-different and changing, and that is created by a system: “[A]ll of my ambient music I should say, really was based on that kind of principle, on the idea that it's possible to think of a system or a set of rules which once set in motion will create music for you.” (Eno 1996). A computer is a good system for setting up such rules and creating ever-changing music.

I think interactive systems, including systems producing generative music, are interesting and increasingly important for me as a solo musician. I’m building a solo set where I want to create a rich soundscape and play multiple sounds and timbres at the same time. I think that apps that work as interactive music systems could be helpful there. Instead of preparing an extensive amount of samples and loops, I could use ‘smart’ applications – music systems that interpret my playing and respond to it. Or, instead of having full control over what’s playing, I could give a generative music system some power and let it guide me.

Programming your own music 

In the course of the evolution of using computers for music making, the focus has turned from getting music out of a computer to making the computer interact with music. There are many different ways to give orders to the computer and many different interfaces for doing so.

There are thousands of software instruments with rich user interfaces that can be played directly without the need to program them. But if the ready-made apps don’t provide what the musician is striving for, it’s also possible to create your own musical program using dedicated environments. There are musical programming environments with a graphical user interface (e.g. MAX and Pure Data) and with a text-based interface (e.g. Csound and SuperCollider). A graphical interface presents the data flow directly, in a what-you-see-is-what-you-get kind of way, whereas the text-based systems don’t have this representation. Understanding the syntax and semantics is required to make sense of the text-based systems. However, many tasks such as specifying complex logical behaviour are more easily expressed in text-based code. (Wang 2007, p. 67)

Text is a powerful and compact way of giving orders. Text-based systems are usually used for run-time modification of programs to make music (ibid.). If this happens in a live situation, the activity is often called live coding (ibid.). Live coding music means creating a musical performance with a computer whose screen is projected for the audience to see. Live coders make use of the audio synthesis and manipulation capabilities of the musical programming environments they are using.

Some musical programming environments (such as SuperCollider) also enable networked music, which means that the musicians don’t have to be physically in the same place. Nearly every computer is connected to some network, so musicians can make use of the network to communicate with other musicians or with the audience. This is quite a big topic in computer music but it is not discussed more extensively in this research.

One method that can be used in live coding, and also in composing or creating interactive music systems, is algorithmic composition. By using algorithmic methods such as automatisms, random operations, rule-based systems and autopoetic strategies, some artistic decisions are partly delegated to an external instance (Essl 2007, p. 108). This can be regarded as giving up artistic freedom, but on the other hand it enables the artist to gain new dimensions that expand investigation beyond a limited personal horizon (ibid.). Algorithms can be regarded as powerful means to extend our experience – algorithms might even develop into something that may be seen as an ‘inspiration machine’ (ibid.). It’s possible to form algorithms with both graphical and text-based interfaces, but in text form they are often very compact and more easily understandable. The use of algorithms is not restricted to computers, but computers are very good tools for an algorithmic approach. Due to its rule-based nature, every algorithm can be expressed as a computer program (ibid.).

It’s easy to define computer music as something separate from music played with acoustic instruments. In some extreme cases, like live coding, it may seem a very distant practice. My view is different, which is reflected in the following quote: “Music has always inhabited the space between nature and technology, intuition and artifice” (Cox & Warner 2004, p. 113). According to Cox and Warner (ibid.) machines are no less important in the evolution of music than the human heartbeat and voice. Acoustic instruments, too, can be regarded as mechanical machines of sorts. Following the same idea we can think of a symphony orchestra as a machine, with the conductor as the player of the musical machine. Actually, at least in theory it would be possible to construct a symphony orchestra out of computers, since computers are well capable of replicating the sounds of the acoustic instruments used in one. If the computers were equipped with proper sensors to follow the conductor, a symphony orchestra made of computers playing the corresponding instrument sounds could be conducted in a similar way to a traditional symphony orchestra consisting of human players.

There are all kinds of wild ideas about the role of computers in music in the future. It’s pretty evident that computers are, and are going to remain, a permanent part of the recording studio. It’s not clear, however, how computers are going to be used in live gigs in the future. There are many ways to use them, and I think computer musicians have only scratched the surface of their capacity. During the past decades computers have become more and more portable, with various methods to interact with them. Making music with a computer can be regarded as its own genre. However, I don’t want to make a big distinction between music that is produced with a computer and music produced with some other instrument, whether it is a saxophone, a piano, or even a symphony orchestra.

 

New Interfaces for Musical Expression (NIME)

The goal of this section is to find relevant subjects in the field of NIME that can be used later in the definition of my practice and then in the analysis section as reflection points. After every NIME conference, the research papers are made public for researchers around the world to study. It’s easy to see what the topics of each year have been. In the ‘proceedings’, the research papers of each conference, the topics of touch screens, tablets and mobile music have been covered in recent years. They are a nice source of background and inspiration for this research.

The second part of the theoretical background comes from New Interfaces for Musical Expression (NIME). Here’s what’s been said about NIME on their own website[19]: “The International Conference on New Interfaces for Musical Expression gathers researchers and musicians from all over the world to share their knowledge and late-breaking work on new musical interface design. The conference started out as a workshop at the Conference on Human Factors in Computing Systems (CHI) in 2001. Since then, an annual series of international conferences have been held around the world, hosted by research groups dedicated to interface design, human-computer interaction, and computer music.”

NIME is both a yearly conference and a field of research. The topics of NIME range from augmented interfaces for traditional instruments to touch screen interfaces as musical input methods to playing music using gesture recognition without any interface at all – and everything in between. NIME could perhaps exist without computers, but as the research is mainly about digital electronic instruments, there’s usually a computer involved.

The research around the topics of NIME took its first steps at the same time as digital instruments started to become popular. According to Jordà (2007, p. 97) the interest towards alternative music controllers started to grow with the advent of MIDI. The role of MIDI was important in this: it standardised the separation between input (control) and output (sound) of electronic music devices. After MIDI, in the late 1990s the introduction of OSC[20] provided even more possibilities for the players and makers of experimental musical interfaces to interact with the instrument (Phillips 2008).

NIME covers concepts of human–computer interaction for musical instruments. There are different ways in which researchers approach the topic, but the main idea is to explore how a player can give orders to a computer to play music in a precise but rich way, and how the response of the computer can be sent back to the player. Miranda and Wanderley (2006) propose a model for a digital musical instrument where the instrument contains a “control surface” and a “sound generation unit” conceived as independent modules related to each other by mapping strategies (the arrows between the boxes in figure 3). The model that Miranda and Wanderley suggest is depicted in figure 3, where the main components, gestural controller and sound production, are what could be in the ‘Process’ box in figure 2 (see page 19). Miranda and Wanderley emphasize different forms of feedback from digital musical instruments: primary (tactile and visual) and secondary (audible) feedback.

Figure 3. Approach to represent a digital musical instrument (Miranda & Wanderley 2006)

Sensors play an essential role in many of the NIME research topics: they are how gestures are fed into digital musical instruments. There are many types of sensors that can be used in musical instruments, such as distance, flex and pressure sensors. Sensors watch the real-world actions of a player and transmit them to a computer. Some sensors are able to determine gestures out of the box, but gesture recognition patterns can also be programmed into the computer.

One important concept in NIME is mapping, that is, the connection between gestural parameters (input) and sound control parameters or audible results (output) (Jordà 2004, p. 327). The most direct kind of mapping, which associates each single sound control parameter (e.g., pitch, amplitude, etc.) with an independent control dimension, has proved to be musically unsatisfying, exhibiting a toy-like characteristic that does not allow for the development of virtuosity. More complex mappings, which, depending on the type of relation between inputs and synthesis parameters, are usually classified as one-to-many, many-to-one or many-to-many, have proven to be more musically useful and interesting (Hunt & Kirk 2000, p. 251).

Figure 4. Different kinds of mappings. Image by Valtteri Wikström (SOPI 2015).

One-to-one mapping maps one input directly to one output and, as a mathematical function, takes the form y(x). One-to-one mapping is usually about scaling and transforming data. One-to-many mapping is about using limited controls for a more complex system, but it’s mathematically similar to one-to-one mapping. One-to-many mapping can create conceptual difficulties for the interface, though. Sometimes it makes sense to control a single output with many inputs, which is called many-to-one mapping. Its mathematical functions take the form y(x1,x2,x3,…,xn). In the most complex case, many-to-many mapping, the designer needs to think conceptually about the relationship between outputs as well as inputs. Mathematically many-to-many mapping is similar to many-to-one mapping. (SOPI 2015)
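The four mapping types can be illustrated as plain functions from control inputs to synthesis parameters. The sketch below is my own, written in Python; the parameter names (cutoff, resonance and so on) are only illustrative, not taken from any particular instrument.

# A sketch of the four mapping types as plain functions from control
# input(s) to synthesis parameter(s).

def one_to_one(x):
    """One input -> one output: e.g. a fader position scaled to cutoff in Hz."""
    return 200 + x * 8000                       # y(x)

def one_to_many(x):
    """One input -> several outputs: one gesture drives a whole timbre."""
    return {"cutoff": 200 + x * 8000,
            "resonance": 0.2 + x * 0.6,
            "reverb_mix": x ** 2}               # still functions of a single x

def many_to_one(x1, x2, x3):
    """Several inputs -> one output: y(x1, x2, x3)."""
    return (x1 + x2 + x3) / 3 * 8000            # e.g. three sensors set one cutoff

def many_to_many(pressure, tilt):
    """Several inputs -> several outputs; the relationships must be designed."""
    return {"amplitude": pressure * (1 - 0.3 * tilt),
            "vibrato_depth": tilt * pressure}

print(one_to_many(0.5))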

Interfaces

The interfaces of digital musical instruments are free from many physical constraints. A basic touch screen can be configured to have two dimensions, the X and Y axes, whereas a piano has only an X axis. It has a one-dimensional playing interface: going from left to right, the pitch rises. In addition, the amplitude of a piano note can be controlled by the way the key is pressed, which can be regarded as a very limited Y axis. In comparison, an XY pad on a touch interface can be configured to send pitch data according to the touch position on the X axis and amplitude data according to the touch position on the Y axis. Thus the player is able to alter the sound freely on both the X and Y axes at the same time.
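A minimal sketch of this XY-pad mapping, written in Python with assumed screen dimensions and pitch range, could look like this:

# X coordinate of a touch is mapped to pitch, Y coordinate to amplitude.
# Screen size and pitch range are assumptions for illustration only.

SCREEN_W, SCREEN_H = 1024, 768      # touch surface resolution (assumed)
LOW_MIDI, HIGH_MIDI = 48, 84        # three octaves from C3 to C6 (assumed)

def touch_to_sound(x, y):
    """Convert a touch point (pixels) into (frequency in Hz, amplitude 0..1)."""
    midi_note = LOW_MIDI + (x / SCREEN_W) * (HIGH_MIDI - LOW_MIDI)
    frequency = 440.0 * 2 ** ((midi_note - 69) / 12)   # equal temperament
    amplitude = 1.0 - (y / SCREEN_H)                   # top of the screen = loud
    return frequency, amplitude

print(touch_to_sound(512, 200))     # a touch in the middle, near the top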

I see touch screen interfaces as a seamless continuation of the evolution of electronic music. I attended Bob Ostertag’s lecture at the digital art festival Resonate ‘15[21] in April 2015. Bob Ostertag is an experimental sound artist and writer who has lived through the evolution of synthesizers. He’s been making music with synthesizers and experimenting with different playing interfaces. He gave a very thought-provoking speech.

“Back in the 70’s with the early synthesizers there was a debate about the keyboards on the synthesizer. So Robert Moog had a keyboard on his synthesizer [...] and made a lot of money. Don Buchla did not put keyboards in his synthesizers. [..] My side of the debate said ‘why would you put a keyboard on these things?’ [...] We already have pianos, we already have organs that work really well. So, let’s do something new. And at the time all we had was knobs. So, we imagine that in the future there would be these new machine-human interfaces that were incredible – that will allow us to control synthesizers in a way that was smart, idiomatic to the medium. And over the last 40 years I’ve experimented with almost every interface that’s been proposed, and I think they all fail [...]. They all fail in a sense that there’s no human-machine interface that would inspire you to practice six hours per day for 20 years to become a virtuoso with like a violin that would inspire you, or like an oboe would inspire you.“

After the speech he performed one of his early compositions for analog synthesizers, which he had reconstructed in MAX, using an iPad as the interface to perform it. So even though, in his opinion, the iPad also fails as an interface for controlling a synthesizer, he evidently considered it, in 2015, good and interesting enough to control the synth with.

Mann (2007, p. 2) describes the possibilities of digital instruments and interfaces: “Modern improvements to user-interfaces allow one musician to play a larger, more complex and intricate repertoire.” It’s also a positive prognosis of where the development of computers as instruments, and of new interfaces to play them, is leading. It’s an ongoing development that started before computers and electronics: “The harpsichord or piano can be used to play very richly intricate compositions that a single musician would not be able to play on a harp. Similarly, an organist is often said to be ‘conducting’ a whole ‘orchestra’ of organ pipes.” (ibid.)

Mann (ibid.) continues by comparing earlier automated instruments with electronic instruments: “Some instruments, such as orchestrons, player-pianos, barrel organs, and electronic keyboards can even play themselves, in whole or in part (i.e. partially automated music for a musician to play along with). For example, on many modern keyboard instruments a musician can select a ‘SONG’, ‘STYLE’, and ‘VOICE’, set up a drum beat, start up an arpeggiator, and press only a small number of keys to get a relatively full sound that would have required a whole orchestra back in the old days before we had modern layers of abstraction between our user-interfaces and our sound-producing media.“ Similarly, a single computer can be used to conduct multiple sound sources, and a touch screen is one approach to allow non-discrete input. We are gradually able to perform more complex tasks with less effort using computers. "What can be seen in this historical development is a decrease in visibility: everything becomes smaller and less tangible, while at the same time complexity increases. This contradiction urges developers to pay more attention to the design of the interface. A whole field of research and design has emerged in the last few decades, offering us methodological and structured approaches in human-computer interaction." (Bongers 2007, p. 9)

When new interfaces are developed, there are many choices to be made. The interface should be powerful without hiding its features, and simple while still providing easy access to all of them. Touch screens are just the beginning, and there are many interesting musical interfaces to come. For example, Apple has plans for creating touch screens that provide tactile feedback. Tactile feedback could be used for telling how the fingers are situated on a touch screen without the need to look at them.

The E in NIME

Humans have feelings, computers don’t. Humans can interpret feelings, while computers cannot. This leads to a very important concept in musical performance: expression. Expression is the act of conveying feeling in a work of art or in the performance of a piece of music[22]. Dobrian and Koppelman (2006) have studied expression in new musical interfaces: how to enable expression with instruments that are not traditional in nature, in particular digital instruments which involve computers as the sound source.

Traditional instruments come with different options for expression. Digital instruments usually don’t have such qualities unless they are specifically designed into the instrument. Digital musical interfaces should provide ways for the musician to express feelings; musicians should be able to alter the music according to their current emotions and audience reactions[23]. Poepel (2005, p. 228) lists different elements of expression that work on a note level: tempo, sound level, timing, intonation, articulation, timbre, vibrato, tone attacks, tone decays and pauses. Then there are expressive aspects on a phrase level, such as rubato and crescendo (ibid.).

A good musical instrument is expressive, in such a way that the player has the means to deliver the music to the audience in a desired way. According to Dobrian and Koppelman (2006, p. 278) control enables expression, but a controllable instrument isn’t necessarily expressive. Does the expressiveness of digital instruments reach the level of traditional acoustic instruments? Basically all the elements of expression could exist in digital instruments, but they need to be designed and programmed into the instrument separately, in a process where there are always many decisions and compromises to make. Dobrian and Koppelman (2006, p. 278) say that one-to-one mappings are good for precise control, but one-to-many mappings and gesture–sound relationships bring better expressive qualities when they are well designed. Designing them well means a lot of work, and that work should be done together with the players. In traditional acoustic instruments the qualities of expressiveness exist naturally (ibid.). Dobrian and Koppelman (2006, p. 279) say that expression is a product of musical training, something that keeps a professional musician interested in the instrument.

What if the player is not a human being but a computer itself? Can a computer be expressive? In my opinion, computers are currently not very good at being expressive, and all the expressiveness needs to be programmed into the composition. Humans are far more interesting players than computers. If a human player is replaced with a computer, a very important factor, human inaccuracy, is lost. Inaccuracy and slight mistakes often bring life to music. For me as an artist it’s interesting when human players make use of the computational power of the computer: its ability to synthesize interesting sounds, analyze musical content, run simultaneous processes and use algorithmic patterns in ways that a human player with a single traditional instrument wouldn’t necessarily be able to play. In that kind of approach the inaccuracy and slight mistakes exist in a different form.

It’s also possible to augment existing instruments. According to Bongers (2007, p. 14) adding electronic elements (sensors and interfaces) to an instrument leads to hybrid instruments or hyperinstruments[24]. Bongers (ibid.) continues: “With these hybrid instruments the possibilities of electronic media can be explored while the instrumentalist can still apply the proficiency acquired after many years of training." This is probably one of the reasons why so many interfaces of musical applications take the form of an existing instrument. On the one hand it's easy for an existing virtuoso to start playing the new instrument with acquired skills. On the other hand it may be laziness on the part of the instrument designer. If it's a new instrument with different features, why should the interface be the same? Or even worse, why resemble an existing interface if it's not suitable for the medium, e.g. a touch screen? Fortunately there’s existing research on this topic. For example Anderson et al. (2015) claim that a touch screen instrument layout based on major third intervals, instead of the usual fourth-interval tuning, is more easily learned by new users without prior musical experience. So, combining aspects of traditional instruments with new digital instruments can lead to a more gradual learning curve, and perhaps an even richer experience for the player.

There’s one major flaw in touch screens in particular as a musical interface, but also in many other digital instrument interfaces: the lack of tactile (or haptic) feedback. Tactile feedback from the instrument to the player is essential so that the player knows how the fingers (or any body part that is used for musical input) are situated on the playing area. Arguably, for virtuoso players of traditional instruments tactile feedback is the most important factor, because it makes it possible to anticipate the outcome prior to playing a sound. “Acoustic instruments typically provide such feedback inherently: for example, the vibrations of a violin string provide feedback to the performer via his or her finger(s) about its current performance state, separate to the pitch and timbral feedback the performer receives acoustically.” (Drummond 2009, p. 130) This also means that the player doesn’t need to pay attention to the results but may sense the outcome beforehand: "With electronic instruments, due to the decoupling of the sound source and control surface, the tactual feedback has to be explicitly built in and designed to address the sense of touch. It is an important source of information about the sound, often sensed at the point where the process is being manipulated (at the fingertips or lips). This immediate feedback supports the articulation of the sound.” (Bongers 2007, p. 15)

We are approaching a future where digital systems are able to provide useful tactile feedback, but that time is not quite here yet. Papetti et al. (2015) think that touch screens are not able to provide meaningful feedback for the player: "The use of multi-touch surfaces in music started some years ago with the JazzMutant Lemur touchscreen controller and the reacTable, and the trend is now exploding with iPads and other tablets. While the possibility to design custom GUIs has opened to great flexibility in live electronics and interactive installations, such devices still cannot convey a rich haptic experience to the performer." The examples on Papetti’s list illustrate the problem Bongers mentioned: they lack the possibility to articulate the sound before it is actually heard.

Instrument vs. controller

Traditionally, musical sounds are produced by exciting a physical object, e.g. a string or a percussive surface. With digital instruments, musical sounds can be produced by sending musical messages; the message is analyzed and processed, and the corresponding musical output is played. In electronic instruments the method of producing sounds is usually the latter. There’s always a clear separation between input and output in instruments that involve computers, and the line between an instrument and a controller is a fine one when talking about electronic instruments.

In the strictest sense, someone could define all computer-based instruments as controllers. But I somewhat disagree, because it’s possible to blur the line between instrument and controller. A controller can be anything: it can be a USB MIDI keyboard or a glove with sensors sending OSC messages to a computer. In both cases the controller objects don’t make any sound on their own, but still there would be no music without the controller. In the context of this research it’s sufficient to note that a controller can usually send its control messages to various processing units. It’s not relevant to this research to argue what’s an instrument and what’s a controller; it’s all part of a bigger topic: digital instrument design, or new interfaces for musical expression.

Jordà (2004, p. 321), a developer of the Reactable[25], sees the design of new digital instruments as a highly collaborative effort: “New digital instrument design is quite a broad subject, which includes highly technological areas (e.g., electronics and sensor technology, sound synthesis and processing techniques, computer programming), human-related disciplines (associated with psychology, physiology, ergonomics and human-computer interaction components), plus all the possible connections between them (e.g., mapping techniques), and the most essential of all, music in all its possible shapes.”

Jordà has classified controllers into three different groups (Jordà 2004, p. 328):

The first two categories are associated with existing instruments. They profit from known playing techniques (Jordà 2007, p. 97) and may address a potentially higher number of instrumentalists who have acquired virtuosity with some traditional instrument. However, many controllers, which are usually ‘midified’ versions of traditional instruments, have remained imitative and conservative (ibid.). The third group consists of all the rest: controllers that do not necessarily resemble any existing instruments. Perhaps the third group will yield something new and exciting, something that cannot be foreseen before it has been created.

In this research, the focus is on the third group: iPad apps that run on a portable touch screen device with sensors. However, the first two groups should not be neglected; that’s where traditional musical ideas are easier to implement.

New forms of virtuosity

Traditional instruments have been on the market for hundreds of years, and the concept of what virtuosity means with each of them has existed for a long time. Virtuosity is defined as great skill in music or another artistic pursuit[26]. In music, virtuosity goes together with the term instrumentalism, or instrumental technique[27]. Often the virtuosi are instrumentalists: they know every aspect of their instrument and how to use them. They know which position is best for certain types of chords or passages. This kind of skill evolves only after hours and hours of practice. This can hardly be said about new musical instruments yet. The possibilities for creating new digital instruments are vast, and there are many ways in which the players of new digital instruments can become virtuosi.

Dobrian and Koppelman (2006, p. 279) say that virtuosity facilitates expression and, on the other hand, that the lack of virtuosity inhibits expression. Jordà (2004, pp. 336–337) claims that the formula for creating an instrument that enables the growth of virtuosity is “variability + reproducibility”. There need to be many things that a musician can play with the instrument, but it also needs to be possible to repeat musical patterns and passages. Other factors to consider are non-linearity, control, predictability and confidence in using the instrument (ibid.).

Dobrian and Koppelman (2006, p. 280) give a list of possible directions in which to take the research and discussion of NIME in order to facilitate expression:

It would be important to involve musicians who have highly trained muscles and nerves in the development process of new digital instruments. This would mean that digital instruments would become better instruments than they are now, and that there would be more new digital instruments on the market. Dobrian and Koppelman (2006, p. 280) assume that there are plenty of traditional instrument virtuosi who are too intimidated to try out any new musical instruments, either because they feel that the new instruments are too technology oriented or because they have had experience with poor computerized models of their instrument in the past. "For an instrument to be considered potentially expressive by a trained musician, it must necessarily have a certain degree of complexity in the relationship between input control data and sonic result." (ibid., p. 279)

I haven't seen research that examines how new musical instruments spread. It may be through virtuoso players who are able to create a community around the instrument. On the other hand, it may also be through new music that sounds so interesting that people want to find out more about it and realize they like it because of the new instrument; thus the word spreads. Perhaps the most effective way would be to put the instrument in the hands of a celebrity, instead of a virtuoso, so that millions of people get to see it. But that approach may be rather difficult, too.

Even though the definition of virtuosity is clear, it’s good to ponder whether virtuosity exists in various forms. Jordà (2007, p. 105) argues that distinct virtuosity paradigms coexist: a classical virtuoso has infinite precision and love for detail, like for example a goldsmith, whereas a new digital instruments virtuoso, close to a virtuoso in jazz music, could be compared to a bullfighter for the ability to deal with the unexpected. Even though I don’t appreciate bullfighting as a sport or activity, the idea behind Jordà’s thinking summarises nicely how I see the new form of virtuosity.


iPad as a live instrument

Until this point the research has been about using computers for music making in general. From this chapter onwards, the focus is on the iPad.

In 1992 improviser and software developer Emile Tobenfeld was asked about desirable features of a software instrument for computer-assisted free improvisation. He listed seven things (Tobenfeld 1992, p. 93–94):

Tobenfeld’s list is for a free improviser who often works unaccompanied and is not much concerned with traditional structures. However, the answer looks pretty much like what computer musicians, not only improvisers, desire. All of the features have been covered in the development of computer music during the past 25 years. Furthermore, the computer musicians of today can imagine all sorts of wild gestural controls with a combination of the screen and other sensors.

The features listed are not found only on laptop computers; all the aspects that Tobenfeld lists are also covered by a combination of different iPad apps. I would assume Tobenfeld would have accepted the iPad as a good instrument, and I find that affirming. My personal approach is somewhat improvisational and experimental, not caring too much for musical conventions. If it’s often said that technology evolves fast, then sometimes it’s amazing to realize how slowly it actually happens. The list was written over 20 years ago. The tasks Tobenfeld listed are all possible, but it’s not entirely clear what the best ways to accomplish them are.

 

iPad musicianship

The goal of this section is to give an idea of the different ways in which the iPad is used for music.

I’m a member of an active Facebook group called iPad Musician[28]. It’s a place where musicians and producers interested in adopting the iPad into their workflow meet and discuss. It’s a very fruitful source of information and knowledge for me. It’s quite a tech-oriented group, which is not surprising since many challenges that users face are related to some limitation of an application, a communication protocol such as MIDI, or a piece of hardware that is used with the iPad. There are also app developers among the participants. This is nice because early adopters in the group can take part in beta, or even alpha, testing, and it’s possible to have a really short feedback loop between developers and users.

There are many types of iPad musicians in the group. Based on my experience, I would divide the members into two larger groups. On one side there are musicians and producers who use the iPad as the most portable studio they can imagine. On the other side there are experimental musicians who don’t quite know yet where they are going with the iPad, but who surely produce cool new sounds, even music; they know what they are doing, but the path is still somewhat unclear. There are many different ways the iPad can be used in music making for both of these groups.

The musicians in the first group – iPad producers – are producing music in a café, at their summer cottages or twisting the virtual knobs by the pool. They use the iPad as their DAW and enjoy the fact that computing power is taking smaller and smaller forms all the time, without the number of features getting smaller. In fact, there are more and more features available in the music apps all the time, especially in the DAWs, which are evolving quite rapidly. In my opinion, the biggest development has been in how the difference between audio and MIDI is diminishing. To people who start making music in 2016 the difference between audio and MIDI might be puzzling, because they go so closely together. To anyone who started making electronic music back when MIDI was first introduced, mixing those two worlds might be considered heresy. However, nowadays virtually any audio source can be used together with MIDI data, and musicians and producers can create entirely new soundscapes and rhythms.

The musicians in the second group – experimental musicians – are making use of the new possibilities that the iPad provides. It’s a new device with new musical applications developed just for the iPad, intended to be played with multiple fingers. There’s a book called Drone, Glitch and Noise: Making Experimental Music on iPads and iPhones, written by Clif Johnston, one of the active members of the iPad Musician group. The book is a very good kickstart for making music on the iPad even if your musicianship is not necessarily experimental. According to Johnston (2015) experimental musicians usually base their music creation on improvisation and create new kinds of soundscapes with a variety of effects.

Clif Johnston has written another book called iPad Music School, which takes a step back from Drone, Glitch and Noise and introduces some of the basic applications that are good to start from if the reader is interested in using the iPad in music making. These two books are currently the only ones that I know to exist as guides on how to use the iPad in music making[29]. In addition, the internet is full of how-to videos for many iPad apps, there are discussion forums for the users of certain apps, and music technology magazines cover the topic of iPad musicianship in their features and reviews from time to time. It mostly lives online, though. Presumably there will be more books and more official information sources about iOS music in the near future. Once that happens, iPad musicianship will slowly start its approach towards the mainstream.

Besides iPad producers and experimental musicians there are also users who use the iPad as an extension to their current setup, mostly using their desktop computer as their workstation but enhancing the soundscapes with iPad apps that are only available as mobile applications. The possibilities have expanded recently: there are new apps that make it possible to send both MIDI and audio between a desktop computer and the iPad using a data cable. Furthermore, at the end of 2015 a new technology called Link was introduced. It’s developed by Ableton[30], and its main focus is to sync the tempo of iPad apps with the tempo of Ableton Live running on the desktop. This will enable many new ways of collaboration between iPad and desktop musicians. Link also works without Ableton Live, and it can be used to sync different iPad apps. It’s promising but still a new technology, so we will see whether it will be adopted or not.

iPad in numbers

This section introduces relevant facts and figures about the iPad.

The first iPad came to market in spring 2010. Since then there have been 11 different iPad models, the most advanced being the iPad Pro, now in March 2016, with a big 12.9-inch screen. The other advanced models are the iPad Air 2 and iPad Mini 4. The difference between the Air and Mini models is the size of the screen, which in the Air is 9.7 inches (250 mm) and in the Mini 7.9 inches (200 mm). The iPad Pro has more computing power and more RAM than the rest of the models, so it performs better with many tasks in music making.

My current iPad is relatively old: a 4th generation iPad with a 9.7-inch screen, a 1.4 GHz dual-core processor (the Air 2 has a 1.5 GHz triple-core processor) and 1 gigabyte of RAM (the Air 2 has 2 GB). The technical details are not of great importance to this research, but it’s worth noting that using some of the most computation-intensive effects causes unwanted clicks, and using too many apps at the same time uses up all the available RAM and causes problems, too. In practice it means that I need to limit the number of apps open simultaneously and often use effects that don’t require heavy computing. It’s mostly a side note and it won’t prevent me from pursuing my attempt to build a solo live set upon the 4th generation iPad.

In addition to audio input, the iPad has many sensors that can be used as input for musical applications: the multi-touch screen, headset controls, a proximity sensor, an ambient light sensor, a 3-axis accelerometer, a digital compass, a 3-axis gyroscope, two cameras, and a fingerprint sensor[31]. Also GPS and data connections, WiFi and bluetooth, can be used as musical inputs. They can also be used for communication between players and musical apps.

There are of course many different ways in which the sensors, or a combination of them, can be used as musical input. Some of the sensors, most significantly the touch screen, can provide very accurate values. The iPad could also sense different gestures without the screen being touched, and many types of gestures can be made while holding the iPad in the hand. The question is how the developer of the application has mapped the input to the output. When the values of different sensors are put together, for example the touch screen with the accelerometer or gyroscope, it’s possible to enhance the possibilities for expressivity: changes in angle or a slight shaking movement could give vibrato to the output sound, or the player could affect the amplitude of the output. The input from a camera can be used as an ambiguous source of noise, and the proximity sensor as a theremin-like instrument. Imagination is the limit.
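As an example of such a sensor combination, the following Python sketch (my own, with assumed sensor ranges and mapping constants) adds vibrato to a touch-selected pitch according to how much the device is tilted:

import math

def pitch_with_vibrato(base_freq, tilt_g, t):
    """Return the instantaneous frequency at time t (seconds).

    base_freq -- pitch chosen on the touch screen (Hz)
    tilt_g    -- accelerometer reading on one axis, roughly -1..1 g (assumed range)
    """
    depth = abs(tilt_g) * 0.03          # more tilt -> deeper vibrato (up to 3 %)
    rate = 5.0                          # vibrato rate in Hz
    return base_freq * (1 + depth * math.sin(2 * math.pi * rate * t))

# The flatter the device, the steadier the tone; tilting it makes the pitch waver.
for t in [0.0, 0.05, 0.1, 0.15]:
    print(round(pitch_with_vibrato(440.0, 0.5, t), 2))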

There’s an app for that

The goal of this section is to highlight that it’s not the iPad itself that is a musical instrument but the musical apps that have been developed for it.

Even though the iPad is a piece of hardware and it includes sensors that can be used as musical inputs, it’s not the hardware that makes the iPad an interesting instrument. The iPad itself doesn’t provide meaningful sounds; it is the musical apps that make the iPad the instrument it is. App developers are constantly providing new musical applications for musicians. The more experience we gain in how to use the iPad as an instrument, how the apps can interplay and how different sensors can be used to provide input for the instruments, the more advanced possibilities we get for music making with the iPad. It’s highly fascinating!

It can be a trap, too. There are so many interesting new apps coming almost every week that it’s easy to forget music making and concentrate on the cool new features and sounds that the apps provide. Sometimes the wisest advice is to turn on airplane mode, concentrate on the existing apps and learn how to use them thoroughly in your own music. There will always be limitations in the existing apps, and there’s always an update to fix some of the limitations. Sometimes the limitations can even enhance creativity. The most important thing is to get some music created.

iPad applications are downloaded from the App Store; that’s basically how all apps are installed on the iPad. The emergence of the App Store and mobile software as part of music making means that there are numerous applications available to musicians for a much smaller price than the same features would cost on a desktop computer. The whole application ecosystem runs on volume rather than high pricing. Hopefully the indie developers of musical apps earn enough money from app sales to keep on developing new apps and also to keep old apps updated. For us musicians, the low prices compared to desktop applications are an advantage.

I’ve done experiments with developing my own musical apps, but never anything more than prototypes. Digital sound processing requires special skills and careful use of the processing power. It’s rather easy to get started with app development; however, creating an outstanding app that musicians adopt requires dedication. There are many musical instrument apps for the iPad that never reach the point where they would be more than experimental prototypes. Frankly, the odds that a new musical app will ever grow to be a true instrument with capabilities for musical expression are quite small. But there is certainly a multitude of applications that are easy and fun to use, have potential for professional use and can be used for musical performance. I’ll go through some of those apps in the next chapter.

Workflows for building a live piece

The goal of this section is to present approaches for building a versatile live piece with an iPad, so that it contains multiple instrument layers.

Just as there are plenty of different apps to use for music making, there are plenty of different ways to approach creating an interesting and sonically complex composition. A suitable workflow depends on various factors: the style of music, personal preferences, the performance of the iPad, whether the musician is working alone or with other musicians, and whether the musicians intend to perform live (Johnston 2015, c. 5).

If not aiming at sonic complexity, perhaps the simplest way is to open an instrument app that has a keyboard and built-in sounds and just play the keyboard, like playing a piano. There are many apps that work just like this. Basically, musical apps with a keyboard interface could be used as a solo performance instrument just as they are. But then it would be just that one instrument, and it would probably start to sound boring. In my experience, the expressivity and fun come from combining different musical apps for a performance, and I’m aiming at interesting and sonically complex compositions.

There are also apps that can be used as stand-alone live performance tools, but they are closed environments and I haven’t found one that would be to my exact liking. Perhaps the reason I see the use of several apps, instead of just one, as an interesting approach is that the touch screen as a means of interaction doesn't make the development of virtuosity easy.

There are a few ways to approach using the iPad as a live instrument. Clif Johnston (2015, c. 5) lists five approaches to making experimental music with the iPad:

  1. Improvisation
  2. Soundscaping and live-glitching
  3. Multi-tracking
  4. Linear sequencing
  5. Pattern sequencing

Improvisation on the iPad can be done in many ways, and it can be very joyful. But if the player has any specific goals in mind, a successful workflow requires a fair bit of preparation: creating the right sounds and presets, mapping MIDI, setting up automation, getting everything synced, and making sure that none of the apps crashes and that there are no unwanted glitches or breaks in the sound when switching between apps or recording loops from different apps. Improvisation on an iPad can thus involve all kinds of preparation tasks. It’s a big concept, and basically soundscaping and live-glitching, multi-tracking, linear sequencing and pattern sequencing can all be approaches to improvisation. The focus of this research is not on improvised music, and the approaches can be used for structured compositions, too.

Soundscaping in this context refers to creating soundscapes and altering their sonic content with effects rather than playing musical notes. Live-glitching refers to using environmental or ambient sounds as the source for soundscaping (Johnston 2015, c. 5). This can be fun and can be done simply with the iPad’s built-in mic, with interesting results. There are apps for this kind of approach, most notably Soundscaper and Fieldscaper[32]. Either of those apps can provide a nice background soundscape for a live piece.

Multi-tracking generally means working in a DAW with several recorded audio tracks (ibid.). The multi-tracking environments available for the iPad are basically not very good for live performance, but basic looping could be done in a multi-tracking environment. One of the cornerstone iPad music apps, Apple’s own GarageBand, hasn’t worked well for looping live, because it cuts off the audio when a new track is added to the session. However, there are dedicated apps for looping, so they are generally better alternatives. Performing live with a multitrack DAW would probably require a significant amount of preparation beforehand.

According to Johnston (ibid.) linear sequencing usually refers to sending musical messages from one app to another; in other words, it means using one app as a controller for another. In practice this could mean creating a MIDI sequence in a piano roll[33] and then sending that sequence to one or more applications which play the sound. Pattern sequencing refers to an approach with ready-made audio loops, which is basically just looping with patterns (ibid.). I interpret pattern sequencing as a more complex version of linear sequencing, with a more intricate structure.

With the aforementioned approaches and apps that make good use of the different sensors, I would say that there’s a great possibility for the iPad to become an expressive live instrument.


Another dimension in building a live piece is the interplay of different apps. Clif Johnston’s list covers that in linear sequencing (i.e. sending MIDI messages from one app to another). These are the ways to achieve the interplay of different apps:

  1. Control messages
     a. MIDI sync and control (+ Ableton Link[34], Korg Wist[35], OSC)
     b. Sending MIDI note messages
  2. Routing audio
     a. Audiobus[36]
     b. Inter-app audio[37]

Even though MIDI is a fairly old technology, it’s still important: it’s the longest-living standard for programming electronic instruments (Billias 2016). MIDI is at the core of electronic music, but it’s not the only way to control electronic instruments. It’s a very useful protocol, because it enables communication even with analog synths. Different types of control messages are the key to building multilayered soundscapes from one source. Korg has its own sync technology, Wist, and OSC can also be used for several different purposes. The latest addition in the field of control messages is Ableton Link, which syncs two or more Link-enabled devices reliably and accurately over a WiFi network. That’s a promising take from Ableton, and it makes it easier to combine the iPad wirelessly with a laptop-based workflow.

MIDI sync covers only syncing devices, and in practice it makes it possible to play audio loops from different sources in time with each other. Sending actual MIDI notes is a separate way to make apps interact with each other. In the iPad’s MIDI environment, MIDI messages can be sent from an app capable of sending MIDI notes to another app which knows how to receive MIDI note messages and is configured to play notes according to the message content. On the iPad it can all be done inside one device, so in theory there could be several apps playing one pattern sent from one source. It’s possible to combine MIDI sync and MIDI messaging in such a way that there are several MIDI sequences sending messages of their own, and they are synced with MIDI sync. This is depicted in figure 5.
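To show what these messages look like on the wire, here is a small Python sketch of the raw MIDI bytes involved. The status byte values follow the MIDI specification; the actual routing between apps is left abstract here, because it depends on the apps in question.

# MIDI note messages carry the musical content from a sequencer app to a
# sound-producing app, while clock messages (24 per quarter note) keep
# several receivers in sync.

def note_on(channel, note, velocity=100):
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

MIDI_CLOCK = bytes([0xF8])   # sent 24 times per quarter note by the sync master
MIDI_START = bytes([0xFA])   # tells all synced apps to start together

# One sequencer sends the same pattern to two receiving apps on different channels:
pattern = [60, 63, 67, 70]
for receiver_channel in (0, 1):
    for note in pattern:
        print(note_on(receiver_channel, note).hex(),
              note_off(receiver_channel, note).hex())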

Building a MIDI messaging scheme on the iPad is not very reliable. It’s not always guaranteed that different apps recognize each other in the MIDI environment (which is usually shown in the settings of the apps). In addition, MIDI messaging seems to cause quite a lot of stuttering and glitching when multiple apps are playing at the same time.


Figure 5. Example of a MIDI messaging combined with MIDI sync.

The second way, routing audio from app to app, can also be used to build rich musical soundscapes. The usual way is to combine different instruments and effects via an audio router such as Audiobus and loop them in a looping app or in a sequencer app. A simple setup in Audiobus is presented in figure 6, using the Bebot instrument app as the sound source, effecting the sound signal in an effect app called ToneStack and finally passing the signal to a looper application called Loopy HD, which supports 12 loops playing simultaneously[38].


Figure 6. A simple Audiobus setup for audio routing: Bebot as the input, the sound filtered in ToneStack and Loopy HD as the output.

The two approaches do not rule each other out. It’s possible to combine control messaging and audio routing, and usually the desired end result can be found in a combination of the two.

 


Main iPad music app categories

In this chapter I list apps that I’ve found useful and inspiring during jam sessions and rehearsals. The goal of this chapter is to provide a limited but representative list of apps to start making music with on an iPad. It’s not an exhaustive list by any means, but it hopefully gives a good overview for the reader and provides an understanding of how to work with different kinds of musical apps. The apps listed here can also be found online at www.tuomasahva.net/padworks.

The number of apps is simply too large for them to be listed completely, and it helps to divide the apps into different categories. I divide the apps into five categories:

  1. iPad instruments
  2. Controller apps
  3. Sync and connection apps
  4. Effects
  5. Loopers and DAWs

There are apps that belong to two or more groups, but I introduce each app in the category which I think represents its main use.


Figure 7. Different categories of musical apps.


iPad instruments

In this section I list interesting iPad instruments, including the ones I use in the compositions. 

In my experience, instrument apps can be divided into three major groups. The division follows Jordà’s list and I regard it as a good way to categorise iPad instrument apps. It’s not entirely straightforward, though: in some cases it’s not clear whether an app is an instrument or something else. The iPad instrument categories are:

  1. Virtual replicas of physical instruments
  2. Instruments with novel interface
  3. Experimental instruments

Virtual replicas of physical instruments

In the first group there are apps that are virtual replicas of physical instruments. They often sound really good: their sounds are based on instruments that have been developed over many years and played and legitimized by many players. But many of those apps are not very playable on the iPad. Often they can be controlled from a MIDI sequencer or played with an external controller, for example a MIDI keyboard. Virtual replicas are usually very nice sound sources, and they are expressive in a similar manner as synthesizers with keyboards usually are. The first virtual replica instrument that usually comes up in discussions of iPad apps is Animoog.

Figure 8. Animoog

Animoog was one of the first synths to appear in the App Store. It was first released in late 2011[39], and ever since it has remained an important synthesizer for iPad musicians. It’s developed and published by Moog and it’s constantly updated to stay on par with the new features of new iOS versions (and their quirks). It’s not a straight replica of an existing synthesizer, but it has qualities from Moog’s different hardware synths. The interface replicates the controls of a physical synthesizer.

Animoog is not the most playable iOS synth, but the sounds it produces can be used for many different musical styles and purposes. It’s not tied to physical constraints: it has a scale-specific keyboard. After setting a scale, only the ‘correct’ notes are available on the keyboard. The synthesizer engine of Animoog is very versatile: there are over a thousand presets for different sounds, and those presets and sound packs are worth exploring. Animoog is almost a must for any iOS musician.
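The scale-specific keyboard is easy to illustrate in code. The sketch below is my own, in Python; the mapping from key positions to scale degrees shows the general idea, not Animoog's actual implementation.

# Once a scale is chosen, the on-screen keys map only to notes of that scale,
# so 'wrong' notes simply do not exist on the interface.

SCALES = {
    "major":      [0, 2, 4, 5, 7, 9, 11],
    "minor_pent": [0, 3, 5, 7, 10],
}

def key_to_midi(key_index, root=60, scale="minor_pent"):
    """Map the n-th key of the on-screen keyboard to a MIDI note in the scale."""
    degrees = SCALES[scale]
    octave, degree = divmod(key_index, len(degrees))
    return root + octave * 12 + degrees[degree]

# The first eight keys of a C minor pentatonic keyboard:
print([key_to_midi(i) for i in range(8)])   # -> [60, 63, 65, 67, 70, 72, 75, 77]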

“The EMS VCS3 (or ‘Putney’ as it’s often referred to by vintage connoisseurs) was one of the first mass produced synthesizers of the 1960s and early ’70s. Exotic yet flexible set of features made it a deep resource for experimentation and it quickly became a favorite for legendary rock artists like The Who, Pink Floyd, and Brian Eno. It was also the source of countless Doctor Who effects, as it was one of the crucial synths in the BBC’s Radiophonic Workshop.” (Preve 2015)

Figure 9. iVCS

iVCS is an example where a physical instrument is copied to its finest detail and with good results. It’s clumsy to use, but I believe so was the original instrument. I think iVCS sits perfectly in the (still to some extent experimental) iOS musician community. It leaves a lot of room for exploration and trial and error.

Synthesizers are not the only kind of instruments that are modelled as virtual replicas. There are a couple of apps available that can be used for playing audio files from an interface that resembles turntables. Djay 2 is the one that I’m most familiar with.


Figure 10. Djay 2

Some people might argue about whether a turntable is a musical instrument. Stephen Webber, Program Director at Berklee College of Music, says that the turntable is first and foremost for DJ-ing, but that there is a subset of DJs who play the turntable as a musical instrument, and those are called 'turntablists' (in Neal 2004). Djay 2 for iPad is a virtual replica of two turntables side by side. The records spin as if they were on a turntable; they can be scratched, the spinning speed can be adjusted, and so on. In addition, there are all the advantages of digital sound processing: numerous effects that can be applied to the sound, and the ability to sync the tempos of the records. If a physical turntable can be used as a musical instrument, then certainly the virtual replica can be, too.

In practice Djay 2 could work as a nice instrument for creating backing tracks and loops. It also works for proper DJ-ing, at least if the DJ is careful not to touch the wrong spots on the screen.

Instruments with novel interface

In the second group of iPad instruments there are apps that have a highly playable user interface and have been developed with aspects of good usability in mind – and/or they have a novel approach to manipulating sounds and music. Some of them also make use of the possibilities of the iPad’s sensors. They may be rather difficult to approach because the player doesn’t necessarily have a concept in mind of what the app is supposed to do and what to do with it.

If I had to name a single app that inspires me most as an iPad musician, it would be Samplr. The big waveforms of Samplr invite the player to interact with the audio files.

Figure 11. Samplr

I don’t know the origins of the interface, but the app definitely has a new interface for musical expression. It has qualities similar to what Emile Tobenfeld (1992) described: the processes can be left playing on their own, but they are still easy to catch and the player can make adjustments to them. I don’t see many platforms other than a touchscreen tablet where an instrument like this could exist.

Samplr has six slots for waveforms, seven different modes in which a waveform can be played and five effects, adjustable individually for each waveform and also globally. The multitouch gestures can be recorded and looped, with different lengths for each waveform. Samplr is a very versatile instrument. Another app whose interface basically consists of playable waveforms is Borderlands Granular.

Figure 12. Borderlands Granular

Borderlands Granular is a granular synthesis app: it creates new sounds from existing waveforms using a technique called granular synthesis. It’s been developed by the indie developer Chris Carlson, and it won an Ars Electronica prize in 2013. Here’s how Ars Electronica describes Borderlands Granular:

“Borderlands Granular is a new musical instrument that allows users to explore, touch, and transform sound with granular synthesis, a technique that involves the superposition of small fragments of sound, or grains, to create complex, evolving timbres and textures. The software enables flexible, realtime improvisation and is designed to allow users to engage with sonic material on a fundamental level, breaking free of traditional paradigms for interaction with granular synthesis. The user is envisioned as an organizer of sound, simultaneously assuming the roles of curator, performer, and listener. The user interface emphasizes gestural interaction and visual feedback over knobs and sliders. Users create, drag, and throw pulsing collections of grains over a landscape of audio files, dynamically and polyphonically sampling the waveforms beneath their fingertips. Performers may also use the iPad's built-in accelerometer to sculpt sound with gravity and may record, save, and share their work.” (Ars Electronica 2013)

Borderlands Granular feels like a whole new way of interacting with sound, and the granular synthesis engine leaves much room for exploration. It can be used for soundscapes, but with clever programming of the movement of the grains, it’s possible to build highly musical processes.
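For readers unfamiliar with the technique, the following Python sketch (my own, using NumPy, with arbitrary parameters) shows the core idea of granular synthesis: short windowed fragments of a source waveform are layered at new positions to form an evolving texture. Borderlands does this in real time, with far richer control than this offline toy example.

import numpy as np

SR = 44100
source = np.sin(2 * np.pi * 220 * np.arange(SR) / SR)   # one second of a 220 Hz tone

def granulate(source, n_grains=200, grain_len=2048, out_len=3 * SR, seed=0):
    """Scatter short, windowed grains of the source over a longer output buffer."""
    rng = np.random.default_rng(seed)
    out = np.zeros(out_len)
    window = np.hanning(grain_len)                       # fade each grain in and out
    for _ in range(n_grains):
        src_pos = rng.integers(0, len(source) - grain_len)
        dst_pos = rng.integers(0, out_len - grain_len)
        out[dst_pos:dst_pos + grain_len] += source[src_pos:src_pos + grain_len] * window
    return out / np.max(np.abs(out))                     # normalise to -1..1

texture = granulate(source)
print(texture.shape, float(texture.min()), float(texture.max()))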

Samplr and Borderlands Granular are about playing waveforms whereas TC-11 is perhaps the most extreme synthesizer for the iPad.

Figure 13. TC-11

This is how TC-11 is described on its own website:[40] “TC-11 is a programmable modular synthesizer on the iPad, controlled by multi-touch and device motion controllers. All synthesis parameters can be controlled by these two sources, allowing for countless unique patch configurations. TC-11 does not use on-screen objects like knobs or buttons for synthesis control. Instead, your touches are the controllers. Distances, angles, rotation, speeds, and timings created by the touches are used to push synthesis parameters in real-time. TC-11 opens up every inch of the screen for performance. Plus, the iPad's device motion capabilities can be used as controllers. The accelerometer, gyroscope and compass can be assigned to synthesis parameters to turn your iPad into a expressive motion-controlled synth.”

TC-11 is so versatile that it’s rather difficult to grasp and master. Its presets alone are overwhelming, and in addition it’s possible to program your own patches and wire them to sensor data from the accelerometer and gyroscope. In the hands of a player who knows the app very well, TC-11 might be a very expressive instrument.

TC-11 is a very interesting example of an app that makes use of sensors, but there aren’t that many iPad instruments that make great use of the accelerometer and gyroscope. This is probably going to change, though. The latest update to GarageBand brought a way to manipulate effects by tilting the device, Borderlands Granular also has functionality for controlling sounds by tilting, and ThumbJam (covered later in this chapter) has a very expressive playing interface that makes great use of sensor data.

Geo Synth and Tachyon don’t make use of sensors but they are still quite playable instruments. Both are developed by Wizdom Music[41], a company founded by Jordan Rudess, the keyboard player of Dream Theater. Wizdom Music has created several apps that focus on playability and expressivity, and they are constantly working on new apps. In addition to these two apps, I’ve been using SampleWiz, which is a sampler app.


Figure 14. Geo Synth

The notes in Geo Synth are positioned on a grid that resembles a guitar fretboard. However, it’s not tied to the physical qualities of a guitar: the interface can be adjusted to many sizes and to different positions on the scale. Developing a digital instrument like this has probably benefited from having a virtuoso player like Rudess involved in the development process, as Dobrian and Koppelman (2006, p. 280) have claimed. In skilled hands it’s a truly expressive instrument, and I assume seasoned guitarists enjoy playing an interface like this. By routing its MIDI note messages onwards, Geo Synth can also be used as a playing interface for another instrument app.

Tachyon has a different interface paradigm: instead of a grid, the scale runs only horizontally, on the X axis. There’s a good reason for that: Tachyon is about sound morphing, and the Y axis controls the tone of the voice. The sound is a combination of two instruments, which allows independent control of pitch and tone in a way that is not possible with traditional instruments. The visual look of the app is also interesting: each instrument is represented by an image drawn as dots on the screen, and just as the sound is a morph of the two selected instruments, the image representing the sound is a morph of the two images.


Figure 15. Tachyon.

Bebot has an interface similar to Tachyon’s: a horizontal axis controls the pitch and the vertical dimension controls the tone, or timbre.


Figure 16. Bebot

Bebot is a sweet little creature. Behind the toy-like interface there’s a powerful polyphonic synth engine. Bebot makes things easy for the player: it can be set to sing in any scale. The notes of the scale can be freely selected from the 12 steps of the octave, and the pitch can be made to snap to the played note – or left free for more expressivity by disabling the autotune. Despite its looks, Bebot is definitely not a toy.
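As an illustration of what such scale snapping amounts to, the Python sketch below is an assumed, simplified model – not Bebot’s actual code. It maps a horizontal touch position to a pitch and optionally snaps it to the nearest note of a chosen scale; the example scale is my own choice.

```python
# A minimal sketch of Bebot-style scale snapping (assumed behaviour, not
# Bebot's code): map a horizontal touch position to a pitch, then optionally
# snap it to the nearest note of a chosen scale.
A_MINOR_PENT = [57, 60, 62, 64, 67, 69, 72, 74, 76, 79]  # MIDI notes across the pad

def touch_to_pitch(x, snap=True):
    """x in [0, 1], from the left edge to the right edge of the screen."""
    continuous = A_MINOR_PENT[0] + x * (A_MINOR_PENT[-1] - A_MINOR_PENT[0])
    if not snap:                       # 'autotune' disabled: free, expressive pitch
        return continuous
    return min(A_MINOR_PENT, key=lambda n: abs(n - continuous))

print(touch_to_pitch(0.48))            # snapped to the nearest scale note
print(touch_to_pitch(0.48, snap=False))  # free pitch for slides and vibrato
```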

Then there are instruments that don’t have an on-screen interface for playing at all. An example is the percussive instrument app Impaktor.


Figure 17. Impaktor

Impaktor is a drum synthesizer that turns almost any surface into a playable percussion instrument. It uses the built-in microphone of the iOS device and lets the player tap out rhythm parts and create percussive compositions. Impaktor has a six-channel sequencer, which can be used for creating rich soundscapes within the app alone.

Impaktor works as a nice rhythmic platform. Unfortunately it cannot be synced with other electronic instruments. The hits can be quantized[42] in the sequencer. The app has a nice organic feel to it: it’s responsive, with no noticeable latency, and it detects velocity pretty well, too. Another responsive and playable instrument is ThumbJam.
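The kind of processing this presumably involves can be sketched roughly as follows – a simplified Python illustration, not Impaktor’s implementation: watch the microphone amplitude frame by frame and report a hit, with a velocity, whenever the level suddenly jumps above a threshold.

```python
# Rough onset-detection sketch (illustration only): report a hit whenever the
# RMS level of a short microphone frame jumps above a threshold, and derive a
# crude velocity from how loud the frame is.
import numpy as np

def detect_hits(mic_signal, sr=44100, frame=256, threshold=0.05):
    hits = []
    prev_rms = 0.0
    for i in range(0, len(mic_signal) - frame, frame):
        rms = np.sqrt(np.mean(mic_signal[i:i + frame] ** 2))
        if rms > threshold and rms > prev_rms * 2:      # sudden jump = a tap
            velocity = min(127, int(rms * 400))         # crude velocity scaling
            hits.append((i / sr, velocity))             # (time in seconds, velocity)
        prev_rms = rms
    return hits

# Quick demo: one second of silence with two synthetic "taps"
sig = np.zeros(44100)
sig[11025] = 1.0
sig[33075] = 0.9
print(detect_hits(sig))   # two (time, velocity) pairs
```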


Figure 18. ThumbJam

ThumbJam is the Swiss Army knife of performance apps for the iPad; it’s one of those must-have apps every iPad performer ought to own. It’s a credible-sounding sample-based instrument with a playable interface, a loop-based recording environment and MIDI features for sending and receiving MIDI. The scale can be freely set to match the scale of the song. You can do the same thing as in Bebot, but in ThumbJam there’s a selection of scales to choose from; it’s really quick to limit the available notes to those of a specific scale in a specific key.

It’s possible to add as many as four different instruments to the play area of ThumbJam, creating a whole band in one app: bass, drums, guitar and synth. However, it’s a challenge to play four instruments at the same time from the play area of one app.

In addition, there’s a set of features on the play area that make ThumbJam a really expressive instrument: depending on the instrument and how it has been configured, the position of the tap (left to right) on a note can control pan or volume, and shaking a finger can add vibrato (or tremolo) to a note. It’s also possible to configure pitch bend to respond to tilting the iPad. These can add a lot of expression to a performance.

What does it mean that ThumbJam is sample-based? In practice it means that the sounds cannot be tweaked the way they can in synthesizers such as Bebot. However, with ThumbJam’s MIDI capabilities it’s possible to send the MIDI data to any other MIDI-enabled instrument. And since many of the instruments sound fairly good, it’s also worth trying to set ThumbJam to receive MIDI notes from other apps. It has one good feature that I haven’t seen in many instruments: the possibility to transpose incoming and outgoing MIDI data.
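Conceptually the transpose feature is very simple. A minimal Python sketch (illustrative only, not ThumbJam’s code) of shifting MIDI note messages by a number of semitones could look like this:

```python
# Transpose note-on / note-off messages by n semitones before passing them on
# (illustration of the concept, not ThumbJam's implementation).
def transpose(message, semitones=12):
    status, note, velocity = message
    if status & 0xF0 in (0x90, 0x80):                 # note-on / note-off only
        note = max(0, min(127, note + semitones))     # clamp to the MIDI range
    return (status, note, velocity)

print(transpose((0x90, 60, 100)))   # middle C up an octave -> (0x90, 72, 100)
```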

Experimental iPad instruments

The third group consists of instruments that are neither especially playable nor modelled on historical instruments – but there’s nevertheless something to them. I call these experimental iPad instruments. They can have some added value, like a unique way of creating sounds or a unique way of being played, or they can simply be visually interesting.

Color Chime is both visually interesting and played in a very unique way.

Figure 19.  Color Chime

There’s basically no need for musical knowledge to play Color Chime. It has a white canvas, and the player can add differently colored shapes to the canvas just by tapping it. The shapes start fading into the horizon, and once they reappear they make a sound: the timbre depends on the shape and the pitch on the distance from the center point of the canvas. It’s basically a two-bar loop that keeps playing. In addition, there are settings for the key in which the notes are played, delay and filter effects, and tempo. It’s rather difficult to do the same thing twice with this app; there is no indication of what the pitch of an added shape will be, and no metronome.

In all its simplicity the app is both sonically and visually very pleasing. Color Chime is basically a toy, but an innovative one. Another app that might fall into the toy category is AirVox.


Figure 20. AirVox

AirVox uses the camera as its input method. The camera is calibrated, and after calibration the distance of the hand determines the pitch to be played. Just like the theremin[43], with no tactile feedback of any kind, AirVox is fairly difficult to play. However, as in many other iPad apps, the key and the scale can be set, and AirVox works as a solo instrument, at least if the solo can be more or less improvised. And if you’re a theremin player, you might find AirVox easy to play.

Spinphony is not suited for solos, but it can be used for interesting backing tracks. What does Spinphony actually do? I’m not sure. It is based on image detection and it produces intriguing sounds.

In Spinphony there’s a spinning disc and three detector spots on the disc area. Apparently a change in the image at a detector spot is interpreted as hitting a note, and the detected image can be changed to any image.


Figure 21. Spinphony

Spinphony may not be part of a workflow for serious music making, but it’s definitely a source of inspiration. Computers can be programmed to calculate complex algorithms, but they can also be programmed to detect things that the human eye doesn’t necessarily see, like turning an image into music. I think Spinphony is a nice example of how randomness can be used for music, and not only by making the computer output random values. Any image can be turned into music with Spinphony. It has a balance between randomness and control that’s fascinating.

An app called Sector is also based on randomness, but randomness that takes place in a Markov chain. A Markov chain is a random process in which the transition from one state to the next depends only on the current state[44].


Figure 22. Sector.

Sector probably falls into the category of an IDM[45] instrument. There’s a circle in the UI, and it contains an audio file. The file can be anything, but a beat track at least will provide IDM-like results. The circle is divided into sectors, and each sector contains a slice of the audio file. Rules determine in which order and how each sector is played. The circle visualises a matrix of transition probabilities (a Markov chain), and curved lines connect the sectors to show where the playhead can move next. In addition, there are different ways each sector can be played (called warps), and these are controlled with probabilities, too.
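A toy version of this playback logic could look like the Python sketch below. It is my own simplification with made-up probabilities, not Sector’s implementation: each sector has a row of transition probabilities that decides where the playhead jumps next.

```python
# Toy Markov-chain playback in the spirit of Sector (simplified illustration):
# row i of the matrix gives the probabilities of jumping from sector i to
# sectors 0..3 on the next step.
import random

transitions = [
    [0.1, 0.7, 0.1, 0.1],
    [0.0, 0.1, 0.8, 0.1],
    [0.2, 0.0, 0.1, 0.7],
    [0.6, 0.2, 0.1, 0.1],
]

def play_order(start=0, steps=16):
    sector = start
    order = [sector]
    for _ in range(steps - 1):
        sector = random.choices(range(4), weights=transitions[sector])[0]
        order.append(sector)
    return order

print(play_order())   # mostly 0 -> 1 -> 2 -> 3 -> 0, with occasional surprises
```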

Although the app is especially suited to a certain type of music, it can also be used as a normal drum machine. It also invites experimentation: with its probability calculations it could be used to produce music that breaks out of the norms.

Controller apps

In this section I go through apps that are used for controlling other apps, or audio files.

Since (nearly) every electronic instrument has an interface to play, it’s sometimes challenging to make the distinction between instruments and controllers. Moreover, many controller apps have a default sound; the same interface can be used as an instrument on its own on one occasion and for controlling other instruments on another. Some controllers work as sequencers and are thus very close to being DAWs and loopers.

It’s not surprising that apps from different categories share common features. Just think of DAWs on desktop computers: they can be used as an instrument, effect, controller, sequencer or looper. Ableton Live, for example, has all these qualities, but at its core it’s an app for putting a performance or composition together.

I divide controller apps into three categories:

  1. performative controller apps
  2. programmable sequencer apps (and drum machines)
  3. pure controller apps

Performative controllers can be used in a live situation to control a performance live; they cannot be programmed. Programmable sequencers, which also include drum machines, can be programmed. Finally, there are separate controller applications which I call pure controller apps.

Performative controller apps

GuitarCapo+ is essentially a virtual guitar with the possibility to play different chords from an interface that resembles a guitar’s six strings. It could be placed in the category of virtual replica instruments, but it has features that make it work as a live performance controller. It has an interface for playing different chords, with different strumming or picking styles, accompanied by corresponding bass notes. With its MIDI out functionality, GuitarCapo+ can be used as a controller for any instrument that can receive MIDI note messages, and that has been its main use for me.


Figure 23. GuitarCapo+

How GuitarCapo+ can be used is an example of what I like in music creation: the guitar picking patterns follow the conventions of the guitar as an interface. When that data is sent to another app that doesn’t sound like a guitar, the results are usually surprising.

Chordion is a controller with quite nice built-in sounds, and it suits live performance very well. It works in a similar way to GuitarCapo+. However, it’s not possible to have the built-in sounds playing at the same time as the sounds triggered via MIDI out messages, which I consider a limitation.

Chordion has two playing sides, a bit like an accordion: one side is for chords or arpeggios, and the other is for melodies. The melody side has a playing surface that changes dynamically according to the chord played on the chord side – so, again, no wrong notes are played. Chordion also has a built-in drum machine, so it could be used on its own for building a one-man band.


Figure 24. Chordion

Orphion has a background in research, so it works as a nice example of an iPad app as a NIME. The design process, tests and decisions made during development have been presented in the NIME proceedings in 2014.


Figure 25. Orphion

Orphion is an expressive polyphonic controller app with nice built-in sounds. The interface can be edited to contain any combination of notes, and each individual touch area can be set to many sizes. It’s possible to create generic layouts for certain scales and intervals, such as a layout resembling piano keys or a grid layout. However, it’s also possible to create a separate layout for each individual track or segment of a song, and that’s probably where the app works best.

The MIDI out functionality seems very well made and works without any problems. There are different touch styles for interacting with the playing area: it’s possible to send notes with different MIDI velocity by touching the note area in different ways. The pitch and timbre associated with each pad depend on the initial point of touch, touch point size and size variation, and position after the initial touch (Trump & Bullock 2014, p. 159). The different touch inputs are highlighted in different colors.

Whereas Orphion is the result of research on the touch screen as a musical interface, Launchpad is basically a virtual model of a physical controller. Launchpad is an app developed by Novation, a British music hardware manufacturer that is taking steps into the software world with its Launchkey and Launchpad apps. Both are designed to work together with Novation’s hardware, but they also work on their own – and in the software version it’s easy to spot the advantages of software instruments.


Figure 26. Launchpad

Launchpad might just as well be labelled an instrument as a controller, but I’ve placed it in the controller section for historical reasons: the Launchpad hardware by Novation has existed for a few years now, and that hardware version is clearly a MIDI controller.

The Launchpad app is about launching samples and looping them. It doesn’t support live recording of loops; loop packs are available as separate purchases, but it’s also possible to import your own samples. A nice feature is that the software version gives better visual feedback: the file name for each pad is shown on the UI, something the hardware Launchpad lacks. In addition, the same interface contains both loop triggering and effecting of the loops; as physical hardware, these would have to be two separate controllers.

Launchpad is designed for live performance. It’s possible to play eight simultaneous loops, all of different lengths, and each channel can be affected individually, so it’s possible to build a fairly organic-sounding live performance with the app. For sample triggering without a need for further sound mangling, Launchpad works nicely as an Ableton Live type of controller.

Programmable sequencer apps

The difference between performative controller apps and programmable sequencer apps is that it’s possible to program song structure in programmable sequencers. Performative controllers need to be played live, whereas programmable sequencers work – to some extent – on their own.

Midisequencer is a simple 16-step MIDI sequencer app. Sixteen steps is a modest number, but otherwise it’s full of features. It’s possible to program MIDI effects into the sequence, such as sending chords instead of single notes on certain steps, and to launch a new file – for example a different part of the song – while performing. However, I find Midisequencer a bit clumsy for that.

As with MIDI controllers in general, there are no restrictions on where the MIDI note data is sent: it could go to a synth, a drum app, or even an effect app.
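The core idea of such a step sequencer can be sketched in a few lines of Python – an illustration of the general concept, not Midisequencer’s implementation: each of the 16 steps holds zero or more MIDI notes, so a step can send a single note or a whole chord.

```python
# Bare-bones 16-step sequencer sketch (illustration only): each step holds
# zero or more MIDI note numbers, so a step can fire a single note or a chord.
import time

BPM = 120
STEP_S = 60.0 / BPM / 4          # one step = a sixteenth note

pattern = [[60], [], [63], [], [67], [], [60, 63, 67], [],   # 7th step: a chord
           [58], [], [63], [], [65], [], [58, 62, 65], []]

def send_note_on(note, velocity=100):
    print(f"note on {note} vel {velocity}")   # stand-in for a real MIDI out call

def run(bars=1):
    for _ in range(bars):
        for step in pattern:
            for note in step:
                send_note_on(note)
            time.sleep(STEP_S)

run()
```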


Figure 27. Midisequencer

For building an entire song with a MIDI piano roll, there’s an app called Auxy. It doesn’t have the MIDI effects that Midisequencer has, but it’s easier to control an entire song structure with an interface like Auxy’s. Thanks to its built-in sounds, Auxy works quite well for translating a song idea into an entire song, and – if the idea works – it’s easy to expand the sonic qualities and send MIDI notes to external instruments. Despite all of its limitations, Auxy works nicely for live performance.


Figure 28. Auxy

Drum machines are a very established part of electronic music. The screen of the iPad is ideal for creating a drum machine: just like the buttons on a physical drum machine, it’s possible to push multiple buttons at the same time on a virtual one. However, virtual versions are not bound by the limitations of physical drum machines; multiple views and multiple control interfaces can be packed into an individual app.

I use drum machines quite rarely as controllers. However, the whole paradigm is very similar to other programmable sequencer apps. All the drum machines that I’ve used on the iPad support MIDI in and out messages as well as MIDI sync.

DM1 was the first drum machine app that I used on the iPad. The developers of the app, Fingerlab, describe DM1 as an advanced vintage drum machine; it offers modeled vintage drum kits to choose from. It’s very quick to start playing with DM1, but it takes a bit of studying to get the best out of it.

One feature that I especially enjoy in DM1 is the possibility to randomise different parameters: note length, pitch, pan and velocity. Randomisation creates variation in the rhythm that can be used either to make the drum machine sound more human or to make it sound even more machine-like.
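As a rough illustration of what per-hit randomisation does – my own sketch, not DM1’s code – the following Python function jitters velocity and timing slightly to loosen up a programmed beat.

```python
# "Humanise" a programmed beat (illustration only): nudge each hit's velocity
# and timing by a small random amount so the pattern feels less rigid.
import random

def humanise(hits, vel_amount=15, time_amount=0.01):
    """hits: list of (time_in_seconds, velocity) tuples."""
    out = []
    for t, vel in hits:
        vel = max(1, min(127, vel + random.randint(-vel_amount, vel_amount)))
        t = max(0.0, t + random.uniform(-time_amount, time_amount))
        out.append((t, vel))
    return out

print(humanise([(0.0, 100), (0.5, 100), (1.0, 100), (1.5, 100)]))
```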


Figure 29. DM1

Funkbox is another vintage drum machine, similar to DM1. However, unlike DM1, Funkbox has a separate bass sequencer, so the same app can conveniently be used to create the rhythm part and a separate bass line. That’s an example of a common situation: one iPad app in a category is missing some features that another has. Many apps do one thing very well while lacking features that similar apps have. In addition, in my experience Funkbox has a more reliable MIDI implementation: when DM1 has failed at MIDI sync, Funkbox has usually worked.


Figure 30. Funkbox.

Elastic Drums is quite different from DM1 and Funkbox. It’s based on synthesized sounds, which in practice means that its sounds can be tweaked in many sonic directions. Elastic Drums has six tracks that can contain not only drum sounds but other types of sounds as well; even the bass line can live in the same sequence.

Elastic Drums has a nice effect automation feature. There are many effects that can be applied to the sounds, and they can be automated by recording a knob movement or a pattern on an XY pad. It also has live performance effect features similar to Launchpad’s, so it’s a really versatile controller/instrument. Automating many effects uses a lot of processor power, though, which can cause unwanted clicking and popping.


Figure 31. Elastic Drums.

Pure controller apps

There are apps that cannot be used for music on their own at all; they’ve been designed to be controllers for other apps from the start. I call them pure controller apps.

Audiobus Remote is a recent addition to the App Store, and a game changer in the performance workflow. It’s an iOS-born controller app: a controller for Audiobus. It provides better access to all the apps that are open in Audiobus, and it can be used from another iDevice over Bluetooth or from the same iPad.

It’s really convenient to switch between apps with Audiobus Remote, turn effects on and off, and start recording in a looper or a DAW. Which controls are available in Audiobus Remote depends on each app’s developer.


Figure 34. Audiobus, GeoSynth, Loopy HD and Samplr on Audiobus Remote on an iPhone. GeoSynth and Samplr haven’t implemented any additional controls for Audiobus Remote.

Sync and connection apps

Sync and connection apps are a kind of utility app that cannot produce any music on their own, but they are the glue that combines different apps. Frequently it’s the sync and utility apps that define the workflow of building a composition.

Even though many of the apps that I have mentioned can be used on their own to create entire compositions, the truly interesting possibilities lie in combining different apps. Sync and connection apps may be the most important group of apps when building an interesting and sonically complex performance. They are the way to take sound sources to unexplored sonic territories before finally passing the signal to output.

Perhaps the most important sync and connection app is Audiobus. When Audiobus came into being in 2012 it basically changed the whole iPad music scene. Before that all the apps had been individual boxes that didn’t talk to each other. In fact, there was no need for separate effect apps on the iPad before Audiobus, because it wasn’t possible to feed sounds from one instrument app into an effect app.

The idea of Audiobus is simple: to route audio from one app to another. The development of the first versions must have been a tedious job; this kind of functionality probably wasn’t documented in the iOS APIs[46]. I’m really glad the developers of Audiobus succeeded. Nowadays Audiobus is involved in most of my test sessions and jams, and in a major part of the final compositions, too.

In Audiobus, apps are divided into inputs, effects and outputs. It’s not always clear cut which role an app belongs to, so it’s worth exploring what options each app gives. The basic rule is that an instrument app is an input, an effect app is an effect, and DAWs and loopers are outputs – but many apps can be used in two or all three positions, depending on the case.
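Conceptually the routing model is just a chain of processing stages. The Python sketch below illustrates that input → effect → output idea in the abstract; it is a mental model only, not the Audiobus API.

```python
# Conceptual input -> effect -> output chain (a mental model of the routing,
# not the Audiobus API): each slot takes or produces an audio buffer.
import numpy as np

def instrument(n=44100):                       # "input" slot: a sound source
    t = np.arange(n) / 44100
    return 0.5 * np.sin(2 * np.pi * 110 * t)

def distortion(buf):                           # "effect" slot
    return np.tanh(4.0 * buf)

def looper(buf, loops=2):                      # "output" slot: here, a crude looper
    return np.tile(buf, loops)

signal = looper(distortion(instrument()))      # the Audiobus-style chain
```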


Figure 32. Empty Audiobus scene with slots for input, effect and output.

After the introduction of Audiobus, Apple introduced its own protocol for making apps talk to each other: Inter-App Audio (IAA). It does essentially the same thing as Audiobus. The main difference is that Audiobus is a separate app, whereas IAA resides within the apps that support it.

A major advantage of Audiobus over IAA – precisely because Audiobus is a separate app – is the possibility to save presets. A set of apps configured to talk to each other can be opened from Audiobus, which even remembers the states of the apps.

AudioShare is an essential part of the music making workflow: it can be used for storing and organising audio files and documents on the filesystem. But it also works nicely as an audio router, either through Audiobus or directly with its IAA functionality. It’s an easy tool for recording improvisations and jam sessions, because it works like a memo app with the ability to play the sound through the IAA pipe. If I’m improvising and just want to record the output of the whole audio chain, I usually use AudioShare; that way the files stay in one place and better organized. It can also be used for adding effects to instruments in a live performance, just like Audiobus. AudioShare also has very handy audio trimming and converting features, so if I need to pass audio files from one app to another, I usually use AudioShare for that.


Figure 33. AudioShare

MiMix is a mixing app intended to be used together with Audiobus. It can be used to set the levels and panning of the different instruments in the Audiobus scene; it’s sometimes nice to configure the stereo field for a live performance. There’s a new app called AUM – developed by Kymatica, the developer behind AudioShare and Sector – that works as a similar mixer to MiMix. It was released just as I was finishing the research, so I haven’t had time to play with it yet.


Figure 35: Inputs from AUFX:Space, Animoog and Caramel in MiMix.  


Effects

In this section I cover the effect apps that have a role in the compositions or that I’ve found otherwise interesting.

Many iPad apps have built-in effects, but in order to get exactly the sound one has in mind, additional effects are needed. Just like instruments, there are effects modelled after physical effect modules such as guitar pedals and amps, and there are also effects built directly for the touch screen of the iPad. The iPad works quite nicely as a digital effect processor for an electric guitar, and the effects modelled after guitar effects work well in that case. They usually contain far more than a player can need during one gig; one example is ToneStack, which also contains practical utilities like a metronome and a tuner. However, some kind of MIDI pedal board makes using effects with a guitar much more convenient.


Figure 36. ToneStack effect chain: octaver,  Crystalline (external app) and an amp modeller.

ToneStack has one significant feature for all iOS musicians: it enables the use of external effect apps via IAA. This means that effects on their own can be routed in a similar way to how different instruments are routed in Audiobus. With this possibility – even if the effects are used for a guitar – it’s possible to include all kinds of effects in the effect chain.

The developer Holderness Media has created a set of effects specially designed for a touch screen interface. They work nicely with Audiobus and Audiobus Remote. Crystalline is a shimmer reverb/delay effect, Caramel is a crunch and crusher effect, and Johnny is a multiwave tremolo effect.

All of them have two modes: perform and tweak. Perform mode has two playable XY areas that can be manipulated, which makes them very interesting for live performance.


Figure 37. The XY pads of Crystalline effect app.

AUFX:Space is an effect app developed by Kymatica, the same developer who’s behind AudioShare and Sector. There’s nothing special about AUFX:Space, but it works very reliably, doesn’t eat all the processing power, not even on my iPad 4, and has some very imaginative presets considering that it is basically just a reverb app. Kymatica has other effect apps too, but I haven’t played around with them as much.


Figure 38. AUFX:Space

All the aforementioned effects take in an audio signal and process it. Jam Synth, for its part, converts the audio signal to MIDI data and creates different MIDI-based effects. It is designed to be used as a guitar effect, but I usually feed it input from a microphone, singing or talking. The effects are fairly expressive, because Jam Synth recognises the amplitude of the incoming signal quite well and produces very cool sounds. The pitch tracking also seems to work fairly well.
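A rough sketch of audio-to-MIDI conversion of this kind – simplified, real pitch trackers are considerably more robust – could estimate the fundamental frequency with autocorrelation and map it to the nearest MIDI note, as in the Python example below.

```python
# Simplified audio-to-MIDI sketch (illustration only, not Jam Synth's code):
# estimate the fundamental with autocorrelation, then map Hz to a MIDI note.
import numpy as np

def freq_to_midi(f):
    return int(round(69 + 12 * np.log2(f / 440.0)))

def track_pitch(frame, sr=44100, fmin=60.0):
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / 1000)                        # ignore pitches above 1000 Hz
    lag_max = int(sr / fmin)                        # and below fmin
    lag = lag_min + np.argmax(ac[lag_min:lag_max])  # strongest periodicity
    return freq_to_midi(sr / lag)

t = np.arange(2048) / 44100
print(track_pitch(np.sin(2 * np.pi * 196 * t)))     # ~G3 -> MIDI note 55
```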


Figure 39. Jam Synth

DAW’s and loopers

In this section I present the DAW and looper apps that I use.

Depending on the use case, there are a few DAWs to choose from when working on the iPad. The DAWs on the iPad, however, don’t work that well for performing live; they are mostly designed for recording and editing.

One DAW requires a special mention, though. Apple’s own GarageBand for the iPad has excellent, expressive built-in instruments that are a joy to play. It’s even possible to build a whole live piece in GarageBand’s multitrack sequencer, but it requires a lot of preparation, and the instrumentation, the length of the loops and the length of the different song sections all need to be predefined. It doesn’t leave much room for improvised live song building.

However, GarageBand is becoming better for live playing. The latest update, in late 2015, introduced several new things. Live Loops is a whole new view that works in a similar way to Launchpad: there’s a grid of samples that can be launched, and a live performance can be arranged on the fly. Another notable new feature is effects that can be applied to the song in the same way as in Launchpad. Many good features of Launchpad have been reborn in GarageBand. Surprisingly, GarageBand takes it a bit further into experimental territory: it’s possible to control the effects by tilting the iPad, using the gyroscope. The third new thing is a new way to create drum tracks: a kind of drummer robot that doesn’t have to be explicitly programmed and instead takes in a set of quite ambiguous orders.

GarageBand is a very nice app to start from. The synth sounds are good, the instruments are easy to play, and playing the string sections in particular is fun. I believe GarageBand could be used to record a string section for a desktop project in an expressive way. It’s a very limited system, though: IAA instruments work, but it’s not possible to make GarageBand act as an instrument in Audiobus, which would be a nice addition. The simple sampler of GarageBand has also been the backbone of many enjoyable mini-compositions that I’ve created. For more serious music making, DAWs like Cubasis or Auria Pro are probably better options.

 


Figure 40. GarageBand with new effect view opened on iPhone version of the app.

The looping apps on the iPad are better suited to live performance than the DAWs, which seem to be modeled after software used in recording studios rather than on stage. Loopers probably work better because they are more idiomatic to electronic music, which is often built on repeating patterns and samples represented as sequences or loops.

Loopy, or Loopy HD, is a looping app by A Tasty Pixel, who is also associated with the development of Audiobus and Audiobus Remote. Loopy can record up to 12 loops, and the app can be integrated with other software via MIDI, Audiobus or IAA.


Figure 41. 12 tracks on Loopy HD.

There are other looping apps available too, but Loopy is a very good go-to looping app, because it’s associated with Audiobus and works very well with it. I hope that’s a small guarantee that their interplay will keep working even if Apple changes the audio-related code again in the next iOS update.

Description of the compositions

This chapter describes the practice part of the research: the compositions and how they came into being. One of the goals is to contribute to the iPad musician community and give other musicians instructions on how to build a similar live setup. I also introduce the composition process and present the final compositions in written form. Videos of live performances are available online at www.tuomasahva.net/padworks.

 

My live setup

This section describes the live setup built for performing the compositions. The goal is to provide enough information so that a similar setup could be built by other musicians.  

My live setup is built around an iPad. In addition, I have an assumption that using external devices to play in sounds has a bigger impact from the audience’s perspective[47]. It’s good to acknowledge that this assumption has affected how I’ve built my live setup. The live setup contains two parts: audio for playing the music and visuals for showing the audience what’s happening on the stage.

Audio setup

One of the most common reactions I’ve got when discussing with my musician friends my intention to build a solo set around an iPad relates to the reliability of the iPad as a sound source. Is it reliable enough to just plug in the 3.5 mm audio plug and play the gig? Doesn’t the cable slip out by accident? This is of course a bigger deal with an iPad than with, for example, an electric guitar, because there is usually automated or looped audio playing on the iPad, and if all of it cuts out at once, the effect is more drastic than just losing the guitar sound from the band’s soundscape.

Having the 3.5 mm audio jack as the only sound source does require a bit of care, but I’ve never had problems during my gigs. In addition, there are other ways to get the audio out of the iPad: hardware docks. A dock is a good alternative for those who need to get audio into the iPad, because docks usually offer both 6.35 mm and XLR inputs. The default audio input (the internal microphone) of an iPad is prone to feedback and probably not a good choice for a performance.

In order to have better control of audio input and output, I have a dock that I place the iPad in. By placing the iPad in the dock I lose some of its expressive qualities: I can’t lift, bend or shake the iPad as freely. On the other hand, I gain the ability to charge the iPad during the set and to have several audio inputs (including a microphone cable). Audio input is an essential part of my music, and having a full battery is rather important, too.

The iPad connects to a dock with a digital data connection. In older iPads[48] the connection was the 30-pin dock connector, but since the fourth-generation iPad the connection has been Lightning. Lightning can be used to transfer any digital content in and out of the iPad, both audio and visuals. Using a dock brings other limitations, too: projecting the screen of the iPad while using the Lightning connector as audio output is not possible. I assume this isn’t a problem for many, but it is for me, because I would like to project the iPad screen while I’m performing.

The dock that I’m using is the Focusrite iTrack Dock. It has two dual audio inputs, each accepting both a 6.35 mm audio plug and an XLR cable, as well as two audio outputs. The dock works with all iPads that have a Lightning connector. There’s also a USB MIDI connection, but unfortunately I haven’t been able to make it work with my USB keyboard. This is not a rare situation with iPad music hardware: the connections can be tricky to set up, and apps don’t work with all possible hardware. However, the majority of my use is audio input, and for that the iTrack Dock works just fine.


Figure 42. Focusrite iTrack Dock (photo from Focusrite website)

One of the big advantages of the iPad is that it’s portable and contains many sensors out of the box. In principle it would be possible to plug the iPad into the PA and play a gig without any other cables. In practice, quite a few cables are needed: I usually use two stereo cables and a microphone cable to set up the input, and two mono cables for output to the PA system.


Figure 43. My audio setup.

In addition, I often use Audiobus Remote for better control of switching between apps, recording and launching loops, and controlling effects. Audiobus Remote could be used from the same device the music is played from, but I use it from another device via Bluetooth, so I don’t need a separate WiFi network for that.

Setup for visuals

For the live set I prepare a system that is used for live visuals, projecting to the audience what’s happening on the stage and on the iPad. I want to enhance what Trump and Bullock (2014, p. 159) call the traceability of a musical instrument: the public should be able to see the causal connection between gestures and sound on the stage. With a cello, for example, the movements of the bow give traceability for the listener (ibid.). The gestures of playing musical iPad apps are usually very small, and traceability resides in the user interface, more clearly in some apps than in others. I’d like to enhance that at least a little.

I want to display the playing interface for the audience, and show them what’s happening with the apps, and how I maneuver them. I want to show that I’m actually playing the music, not just pressing play and then pretending to be playing.

The visual system for Padworks is based on the iPhone app RecoLive Multicam, which is designed to be used as a production switcher for streaming live video. Up to four iPhones or iPads can be connected wirelessly over WiFi to one host iPhone. The host iPhone takes in the video stream from each connected device, and it’s possible to switch live between the streams. I’m using two old iPhones as cameras and streaming their video to a third iPhone, which I use as the host. It all works wirelessly, which is nice, but in practice the iPhones streaming video need to be connected to chargers. And if I’m playing in a place with no WiFi, I have to set up a separate WiFi network for it.

It makes the live setup a bit more complicated, but it serves a purpose: I’m able to show the audience what’s happening on stage, and to some extent I’m able to control the video output at the same time as I’m playing.

Figure 44. Setup for projecting live visuals  

I’d like to project the screen of the iPad as a live visual, too, but that cannot be done while using the dock. However, my current setup has an advantage over just projecting the current screen: it’s possible to display the interaction between player and iPad in a better way.

Aspects of iPad music apps that I find interesting

At some point in the research process I realized I needed to lock down the things I wanted to do in the practice part of this research. I had played with tens of different apps, read about many others, and installed many apps without ever opening them even for testing, and it seemed like a never-ending process. I realized that it’s virtually impossible to include all the apps – not even all of the good ones – in the compositions. At that point I made a list of things that I wanted to include in the compositions: things I found worth exploring more thoroughly than mere testing. The list is shown in figure 45 below.


Figure 45. Aspects that I find interesting in building a live performance with iPad.

Why did I come up with this kind of list? To me, Samplr stands out as an app that is hard to imagine existing without the iPad. It makes use of the touch screen, visualises the sound source and makes waveforms playable, all in a way that makes it easy to create nice-sounding, interesting music. It’s not the most expressive instrument; it doesn’t make use of sensors like the accelerometer or gyroscope, but I think the real estate of the touch screen is really well used. Currently Samplr is not receiving regular updates, and there are a couple of features I would really like to see, like being able to record pitch changes of a sample. However, it’s a killer app as it is, and it deserves to be presented to wider audiences. I think all iOS musicians should voice their love for Samplr, and perhaps developer Marcos Alonso will update it more frequently.

Loopy (or Loopy HD) is a looping app developed by indie developer Michael Tyson. There’s basically nothing special about Loopy: it’s a digital looping machine, just like the ones that have been available to musicians since the 80s (Peters 1996). Of course it’s a compact one, and the real advantage is that it isn’t tied to any physical interface; touch screen interfaces can be quite intuitive and informative when interacting with a looping machine.

Loopy is interesting not only because it’s a handy little looping machine but because the developer Michael Tyson is also behind Audiobus and Audiobus Remote. When I heard about Audiobus Remote in late 2015, I first thought it probably wouldn’t be very useful for me. But when I tried it, I realized it speeds up many situations and enables effecting, loop recording and loop launching in ways that were not possible before. That’s why I wanted to include Audiobus Remote in the compositions.

MIDI sync has existed for a long time in electronic music. In the iPad context it makes it possible to sync different apps to the same tempo. Moreover, if I learn how to do MIDI sync with apps on the iPad, I can probably quite easily control analog synths with the iPad and thus integrate it into a bigger electronic music setup. I haven’t tried that myself yet, though[49]. In late 2015 Ableton introduced their Link technology, which syncs Link-enabled apps together over WiFi with less hassle than MIDI sync. That’s another thing I haven’t yet had time to try, but the timing seems right for this kind of functionality to become widely used.
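For reference, MIDI sync itself is a simple protocol: the master sends 24 clock messages per quarter note, and followers derive their tempo from the spacing of those messages. The Python sketch below illustrates the idea generically; it is not tied to any particular app, and the send function is a stand-in for a real MIDI output.

```python
# Generic MIDI clock master sketch (illustration only): 24 timing-clock
# messages (0xF8) per quarter note, framed by start (0xFA) and stop (0xFC).
import time

def send_midi_clock(bpm=120, beats=4, send=lambda b: print(f"clock {hex(b)}")):
    interval = 60.0 / bpm / 24          # 24 pulses per quarter note
    send(0xFA)                          # MIDI start
    for _ in range(beats * 24):
        send(0xF8)                      # MIDI timing clock
        time.sleep(interval)
    send(0xFC)                          # MIDI stop

send_midi_clock(bpm=100, beats=1)
```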

While MIDI sync can be used for keeping two or more apps in sync with each other while they play their individual parts, sending MIDI notes to another app is functionality where a controller app sends note information to instrument apps. Since, at least in theory, all MIDI-enabled apps can communicate with each other, it’s an interesting opportunity to send the same message to several apps in order to create multi-layered soundscapes.

It’s difficult to define what iPad virtuosity is. Perhaps it’s virtuosity if I play ThumbJam’s expressive interface and make it sound like an actual acoustic instrument. On the other hand, virtuosity probably takes different forms when the instrument is an iPad. MIDI sync or Ableton Link is not always available; I think recording a tight loop without MIDI sync might be one of the things a virtuoso does without any problems.

Along with Samplr, I think Borderlands Granular is an application that would not exist if the touch screen had not been invented. To me, Borderlands represents new soundscapes. Using it with its default settings and sounds, it’s easy to get interesting soundscapes out of it. However, if a novel instrument like Borderlands is examined thoroughly, coming up with different ways of creating music, I believe it’s possible to create music that is totally unheard of. One such moment came when I realized that Borderlands can be used for creating beats and rhythms.

TC-11 is an electronic instrument with endless possibilities for creating sounds. It’s a very expressive instrument that can make use of the device’s sensor data, like the accelerometer and gyroscope. Since I’m usually using a dock for the iPad, it’s not possible to lift the iPad and play TC-11 with the accelerometer data. The developer of TC-11, Kevin Schlei, has thought of different use cases – and probably also that an iPad is pretty big for waving in the air. He has developed TC-Orbiter, an application that sends control data to a host running TC-11, so TC-11 can be played in very expressive ways using two devices.

ThumbJam has an interesting feature which seems to work pretty well, too: a quick-access pitch detector that translates the audio signal to MIDI and plays the corresponding notes on the chosen instrument. This gives a nice option to play totally different instruments by playing something else through ThumbJam’s pitch detection. It could be used, for example, for a jazz trumpet solo played by someone who doesn’t know how to play the trumpet.

The iPad is a good platform for audio effects, and there are forward-thinking guitarists like Adrian Belew[50] who do traditional effecting of their guitar on the iPad. Since all the effects are digital and physically located in the iPad, it’s possible to build all kinds of effect combinations, at least as far as the processor allows without producing too much latency.

Conceptually Animoog is not very interesting. I tend to think that, with Animoog, Moog is doing the same thing it did with traditional synthesizers in the 1970s. Moog synthesizers became popular because they had a familiar interface, the keyboard; Buchla synthesizers didn’t have a keyboard and didn’t become as popular (Ostertag 2015). It’s a somewhat similar situation now with novel-interface apps versus virtual replicas, and I tend to think that an unfamiliar interface can lead to as-yet-unexplored soundscapes. Why does Animoog interest me then? It has fantastic sounds, and luckily it can be controlled with controller apps.

SoundScaper is an example of generative music on the iPad. I think letting the computer create part of the sounds and music opens interesting possibilities for the musician. The nice little Spinphony app is another example of a generative sound engine. My plan is to add an interesting iPad effect like Jam Synth to its electronic sounds to produce new soundscapes.

The iPad also provides possibilities other than generative music that are essentially computer music, for example live coding. One app that can be used for live coding is BitWiz by Kymatica. It can be used for creating sounds in the style of bytebeat[51], and in addition to the algorithm used for creating the sounds, some of the algorithm’s components can be controlled with XY pads. Spacevibe doesn’t offer live coding, but its XY pads are built in a way that makes it rather difficult to play very accurately, which makes it an interesting addition to the iPad instrument arsenal.
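For readers unfamiliar with bytebeat, the idea is to evaluate a short integer expression for t = 0, 1, 2, … and play the low byte of the result as 8-bit audio, typically at 8 kHz. The Python sketch below uses a well-known public bytebeat formula as an example; it is not taken from BitWiz.

```python
# Classic one-line bytebeat (a well-known public formula, not from BitWiz):
# evaluate an integer expression for t = 0, 1, 2, ... and use the low byte of
# the result as an 8-bit audio sample, typically played back at 8 kHz.
def bytebeat(t):
    return (t * (t >> 5 | t >> 8)) & 0xFF   # one sample, value in 0..255

samples = bytes(bytebeat(t) for t in range(8000 * 5))   # five seconds at 8 kHz
# 'samples' could be written to a raw 8-bit / 8 kHz file or fed to an audio API.
# In a live-coding setup, editing the expression changes the music in real time.
```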

One of the basic structures of a touch screen instrument is the XY pad: the X axis for pitch and the Y axis for a parameter like amplitude. A nice example of an instrument based on an XY pad is Bebot, which also happens to have a great sound engine. Furthermore, digital interfaces allow the user to set the X and Y axes to control anything; in Bebot’s case the X axis can be set to any scale and the Y axis can control different effect parameters. The Y axis can only be mapped to one parameter at a time, but in practice that has proven sufficient for me many times, at least when combined with some other effects via Audiobus or IAA.

One aspect of electronic music that I’d like to help change is making it more transparent what’s happening on the stage. Many app interfaces work nicely as a visual element of the performance, for example Tachyon’s. But beyond being an abstract visual element, I think it would be valuable for the community to provide live visuals from the iPad that show what’s happening on stage.


Figure 46. Tachyon as abstract visual element.

After a couple of months of experimenting and jamming I realised that the compositions would not follow the basic song structures of pop music. That wasn’t really my goal either, though I thought it would be interesting to perform a cover song to highlight basic pop song structure. Perhaps it was my songwriting style, or the apps I was leaning towards, that kept me from following any specific patterns. However, it would be interesting to see how the iPad supports a solo performance with a more traditional ABABCBB[52] pattern, for example.

 

Many of my recent favorite artists (like a band called The Books) use a lot of spoken samples in their music. I’ve been thinking of using vocal sounds in my own music, and the iPad is a very good platform for that, because the source material can be recorded directly on the iPad and there are then many possibilities for using the samples. One interesting approach is to time-stretch the audio sample; the samples could also be routed into a vocoder, and even plain vocal looping often produces interesting results.

One thing that digital instruments provide is the possibility for the interface to be dynamic. One simple example is automatic octave change that for example Geo Synth has: when the scale is played in a certain pattern, the octave is changed automatically. That enables playing passages that extend to many octaves.

The camera is an interesting input method. It’s not precise at all and doesn’t provide any tactile feedback; the player gets only audible feedback, a bit like with a theremin. My idea is to use AirVox for theremin-style playing. It’s not a very playable or accurate method, but it’s the kind of thing that might look interesting from the audience’s perspective. Another app that makes use of the camera is Nature Oscillator, which produces interesting, fairly noisy sounds but also works nicely as a live visual.

Impaktor is one of the unique iPad instruments. When the rhythm is tapped on a physical object and picked up by a microphone, the tapping is also a performative element. Impaktor shines when played with the built-in microphone, which isn’t ideal for big live performances, but when used with an external microphone it gives a nice organic feel to the performance. As a matter of fact, one of my main goals is to maintain a certain organic feel in my playing. I think the iPad, with its large multitouch screen and various built-in sensors, provides an opportunity for that in a more approachable way than laptops do as electronic music instruments. I like the idea that the main process which produces the music is in my fingertips and not inside the computer’s processor, much like with acoustic instruments.

Compositions

Even though I rely on improvisational methods in music making, I wanted the end results of this research to be fixed compositions that can be reproduced. The goal of this section is to describe the end result of the practice part of this research: four compositions and the factors that affected the composition process. Brown and Sorensen (2008, p. 160) say that in media art research aesthetics generally plays a critical role, and I agree with them. But even if the aesthetics of the end result please me as an artist, I cannot validate the success or failure of my research project; I must leave the judgement to the general public. Therefore I’m making the results, sounds and videos, public. They can be found online at www.tuomasahva.net/padworks.

With the ideas presented in the previous section I started composing my own music. I was striving to create a playable solo set for myself that would be interesting for the audience, musically versatile, and a showcase of the possibilities of the iPad as a musical instrument. I had a vision that I could create something meaningful by picking a component or two from the list and starting to improvise, and a plan to add more elements on top once I got further into the composition. Whenever I got stuck, I looked at the list and let it guide me.

I had an ambitious plan to include each and every item from my wishlist in the compositions. However, I soon realized that the amount of ideas would suffice for an 80-minute double album, and I was only aiming for a five-song EP. Eventually, the five-song EP shrank to four songs, and Padworks is now four compositions, about 35 minutes of music in total.


Figure 47. Accomplished ingredients.

Looking back now, the number of achieved goals is pretty good. There are many items on the list that I didn’t achieve, but they can be left for future compositions.

Each of the songs is an actual notated composition with some improvised sections. The notation for the songs is presented as text.[53] I wrote instructions for demo songs in the research diary and found that this was actually a good way to present the notation for the compositions, too. I can read staff notation, but only very slowly. Even if I were more fluent with traditional musical symbols, many things I needed to remember for the compositions were iPad-specific – related to the settings of the music apps on the iPad – and those would have required special markings combined with the musical symbols. I decided that written notation would work best for me, and probably for other iPad musicians too.

 I composed four songs in total. The songs are called:

 

The compositions don’t represent any specific musical style. I was aiming at a versatile end result, taking the compositional ingredients into account. The videos of me performing the compositions live can be found at www.tuomasahva.net/padworks.

I describe each composition and list the apps used. I explain the compositional goals and what the composing process was like. For each composition, I list the following components:

These components can be used in the conclusion for defining how to build an interesting and musically versatile live performance with the iPad.

Clorochime

The song called Clorochime started off with the idea of building something with the simple, toy-like Color Chime synth. I wanted to show that even simple apps like Color Chime can be used for serious music making, and that with some thought it can be just as good an iPad instrument as any other, with its own twists and limitations.

I used an exceptionally large number of apps for the song because I wanted to add effects to the sounds of the instrument apps. The instruments were Color Chime, Tachyon, Bebot and Impaktor. The main sound of the song comes from the Crystalline effect, which I use with both Tachyon and Bebot. The other effects used are Caramel, Johnny and AUFX:Space. The apps are presented in the table below.

Table 1. Apps used in Clorochime.

iPad instruments: Color Chime, Tachyon, Bebot, Impaktor

Controller apps: Audiobus Remote

Sync and connection apps: Audiobus, MiMix

Effect apps: Crystalline, Caramel, Johnny, AUFX:Space

DAWs and loopers: Loopy HD

I was looping everything with Loopy HD. The connection and sync apps used for the song were MiMix for setting gain levels and stereo panning, and Audiobus for routing the instruments to Loopy and also for adding the effects to the sounds. I used Audiobus Remote for better control of loop recording and switching between apps.


Figure 48. Audiobus setup for Clorochime.

The goal of the composition was to build an entire song from something I started with Color Chime. The challenge was that there was no clear way to sync Color Chime with another app. I decided to sync it manually by setting the same BPM[54] in Loopy HD and in Color Chime, and then recording a short 4-bar loop from Color Chime; that way the differences between the clocks of the two apps would be practically unnoticeable. It took a bit of practice to repeat the recording of the loop into Loopy, but I managed to do it. It also meant that once I had recorded a loop from Color Chime, I didn’t want to use Color Chime for another layer of loops, because the apps would have drifted out of sync quite swiftly. Another goal was to add more organic percussive elements to the song by using Impaktor with a contact mic.
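A quick back-of-the-envelope calculation shows why this manual approach works. The Python sketch below, my own illustration, computes how long a 4-bar loop lasts at a given BPM and how fast two slightly mismatched tempos drift apart.

```python
# Rough timing check for the manual-sync approach (my own illustration):
# loop length at a given BPM, and drift per loop for a small tempo mismatch.
def loop_length_s(bpm, bars=4, beats_per_bar=4):
    return bars * beats_per_bar * 60.0 / bpm

print(loop_length_s(120))             # 8.0 seconds per 4-bar loop at 120 BPM

def drift_per_loop_ms(bpm_a, bpm_b, bars=4):
    return abs(loop_length_s(bpm_a, bars) - loop_length_s(bpm_b, bars)) * 1000

print(drift_per_loop_ms(120, 120.1))  # about 6.7 ms per loop: audible only after many repeats
```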

The composition includes two apps that have an XY axis interface. It’s easy to set up two instruments like that in the same key and scale, and once that’s done, it’s easy to play and loop them without worrying about hitting wrong notes.


Figure 49. Playing Bebot while performing Clorochime. Audiobus Remote as a controller app opened in iPad Mini.

At first, during the composition process, I was using DM1 as the drum machine for the song. It produced so much load on the processor that it made the iPad glitch, so I decided to drop it and only record loops from it that I trigger during the performance. A drum machine would bring one more layer of spontaneity, as it would be possible to create any kind of drum pattern on the fly; I don’t think dropping it affected the end result that much, though. However, even after hours of practice, it’s still challenging for me to record drum loops from Impaktor into a tightly synced song like Clorochime. It worked out reasonably well, but I think the more important aspect is that it adds a human factor to the beats.

Table 2. Components of Clorochime.

Clorochime

Ingredients

  • Loopy
  • Recording loops without MIDI sync
  • Live visuals from the iPad & Tachyon for visuals
  • Bebot
  • Impaktor
  • Traditional effects
  • Organic feel
  • Audiobus Remote

Computational methods for creating and processing music

  • Changing the song key on the fly is a trivial task for a computer but would be challenging for many human players. That kind of feature exists in synthesizers, but couldn't be done with more traditional instruments.
  • Clorochime was the first composition I came up with. I think I wasn’t yet inspired by the research material, so it doesn’t contain many things that I do with iPad’s computing power. Basically the whole song could be played with synthesizers without any computer involved.

Input methods:

  • Audio input (Color Chime from another iDevice)
  • Contact mic input for Impaktor
  • Touch screen: XY pads set to specific scale (Tachyon and Bebot), looper app interface (Loopy), experimental interface (Color Chime), controller interface (Audiobus Remote)
  • Bluetooth for control messages (switching apps, starting/stopping loops and recording with Audiobus Remote)

Approaches for creating a musically versatile live performance:

  • Approach is multi-tracking from Johnston’s (2015) list.

iPad virtuosity factors:

  • I think I managed to do what I planned with syncing Color Chime with Loopy's clock. I think I also proved that a toy-like app like Color Chime can be used for serious music making.
  • Both Tachyon and Bebot set on the same scale allow semi-improvisational passages.
  • Due to the experimental interface of Color Chime it's rather difficult to do the same thing twice with it. The song always begins slightly differently because of that. However, I think it's nice that the app works as a kind of random generator.

Special artistic decisions for making the composition interesting:

  • I wanted to use contact mic as input for Impaktor because it gives a more physical feel to the song.

Parkfun

The starting point of Parkfun was that I simply wanted to use the unique touch screen instrument Samplr and play its waveform. In the final version I used Samplr and Geo Synth as instruments and Loopy HD as a looper. In addition, I used Audiobus Remote as a remote control, for better control of loop recording and playback. All the effects for the instruments are built-in effects in the instrument apps.

The goal of the composition was to use elements that can be created from scratch when performing live. In practice I was aiming at using Samplr as both instrument and sampler, not recording the loops in Loopy. I wanted to add some solo guitar shredding from Geo Synth, using its automatic octave change: climbing automatically up and down the octaves as I played. I was aiming at using the apps as instruments, without any other controllers involved.

Table 3. Apps used in Parkfun.

| iPad instruments | Controller apps | Sync and connection apps | Effect apps | DAWs and loopers |
|---|---|---|---|---|
| Samplr | Audiobus Remote | Audiobus | | Loopy HD |
| Geo Synth | | | | |

The approach in Parkfun was close to pattern sequencing with an improvised solo. Every app has its limitations. As much as I love Samplr, its limit of six samples is something I've experienced as a limitation quite a few times. In order to build the soundscape that I wanted, I needed more than six layers. A way to achieve what I wanted was to record loops from Samplr into a looper app, and then construct the whole composition as a combination of looping from Samplr and Loopy. All the pre-recorded samples that I launch from Loopy were originally recorded from Samplr.


Figure 50. Performing Parkfun.

I think Samplr is a truly unique instrument that could not exist without the multitouch screen. The way I used Samplr in this composition is relatively conservative, but I truly believe that only Samplr could have led this song to sound the way it does now.


Table 4. Components of Parkfun.

Parkfun

Ingredients:

  • Samplr
  • Loopy
  • Automatic octave change
  • MIDI sync
  • Audiobus Remote
  • Live visuals from the iPad

Computational methods for creating and processing music:

  • The soundscape of the song is very much a result of manipulating samples from the waveform view. Waveform presentation is how sound files are usually presented in computer interfaces.
  • Some arpeggios in Samplr are based on random values.
  • I use the automatic octave changer in Geo Synth; it's easy to program, but it extends the playing interface quite radically.

Input methods:

  • Touch screen (manipulating samples in Samplr, a grid interface of Geo Synth, looping interface in Loopy)
  • Bluetooth for control messages (switching apps, starting/stopping loops and recording with Audiobus Remote)
  • MIDI sync control messages from Loopy to Samplr

Approaches for creating a musically versatile live performance:

  • The song is a combination of pattern sequencing and improvisation from Johnston’s (2015) list.  

iPad virtuosity factors:

  • Syncing different musical apps together is sometimes difficult, and it's an acquired skill.
  • Mastering a solo with an interface that has an automatic octave changer would require a lot of practice.
  • Making good use of all the aspects and features of the Samplr app is something to strive for.

Special artistic decisions for making the composition interesting:

  • I wanted to use the Samplr app as much as possible because I think it's a truly magnificent instrument.

Shenanigans Love

The starting point for the song was a very inspiring video I saw on Sound Test Room[55], where GuitarCapo+ was configured to send MIDI messages to Animoog, creating very fascinating sounds. At the same time I was testing how Jam Synth works using voice as input. Other apps used in the song were ThumbJam and Bassline as instruments, Caramel as an effect and AudioShare as an audio router, routing ThumbJam's drums through Caramel's overdrive.

Table 5. The apps used in Shenanigans Love.

| iPad instruments | Controller apps | Sync and connection apps | Effect apps | DAWs and loopers |
|---|---|---|---|---|
| Animoog | GuitarCapo+ | AudioShare | Caramel | |
| Jam Synth | | | | |
| Bassline | | | | |
| ThumbJam | | | | |

A practical goal for the composition was to create something where the arpeggio sent from GuitarCapo+ to Animoog would work together with Jam Synth. Another goal was to use linear sequencing in controlling the composition. I didn't want to use any app to store patterns or launch any loops, but to control the instruments from one control app with no ongoing process other than the arpeggio. In practice this meant that I could play the pre-selected notes in any order and for any length I wanted.


Figure 51. Playing Shenanigans Love.

The song starts with a simple voice-to-MIDI Jam Synth melody, accompanied by bass notes from GuitarCapo+. Then I start the arpeggio, using the built-in acoustic guitar sound of GuitarCapo+. Apparently the simultaneous playing of Jam Synth and the GuitarCapo+/Animoog combination requires so much processing power that it causes quite bad glitches. I assume that with a newer iPad the defects wouldn't be so drastic. When I leave Jam Synth and just start adding new layers of instruments, the glitching and lagging still exist, but they're somewhat manageable. However, the end result would require a bit more practice: I should learn to trust my ears instead of the beat counted with my foot, because that way the lag won't be so noticeable.

There is a bit too much glitching and lagging for me to honestly say that this kind of audio processing, with this combination of apps on my 4th-generation iPad, could work in a professional setting. However, as an experimental composition, the pieces work nicely together, and in the end the glitches and lag are part of the fun.

Table 6. Components of Shenanigans Love

SHENANIGANS LOVE

Ingredients:

  • Jam Synth
  • MIDI note sending from one app to another
  • Animoog
  • Organic feel
  • Live visuals from the iPad

Computational methods for creating and processing music:

  • Audio-to-MIDI conversion
  • Sending MIDI inputs to several apps within the same device

Input methods:

  • Touch screen: notes / chords in GuitarCapo+, drum pads in ThumbJam
  • Mic input to Jam Synth
  • MIDI note sending from GuitarCapo+ to Animoog, Bassline and ThumbJam

Approaches for creating a musically versatile live performance:

  • Approach is linear sequencing from Johnston’s (2015) list.

iPad virtuosity factors:

  • Managing MIDI note sending from one app to another, or to multiple apps
  • Constructing the sonic landscape by sending MIDI messages to several apps at the same time

Special artistic decisions for making the composition interesting:

  • I think Jam Synth gives a nice, natural feel to the otherwise glitchy composition.
  • I wanted to use one app as a controller and send the note messages to multiple apps and see when the iPad starts glitching.

Luaka Bebop

The starting point of the composition was highly ambitious. I wanted to create a simple chord structure and then improvise on top of that, using my voice.

I constructed the backbone for the song in Elastic Drums. It worked well for that purpose because I could build the chord structure, bass line and also the song structure in the same app. I use ThumbJam for playing the trumpet samples, and most importantly for pitch detection, playing the sung melodies with another sound, in this case a trumpet sound. Then I use AirVox as a theremin-like instrument.

Table 7. Apps used in Luaka Bebop.

| iPad instruments | Controller apps | Sync and connection apps | Effect apps | DAWs and loopers |
|---|---|---|---|---|
| AirVox | | | | |
| Elastic Drums | | | | |
| ThumbJam | | | | |

The composition starts with a pre-programmed sequence from Elastic Drums, and then moves on to a full pattern of 4 sequences, repeating those for half of the song. The main melody is played with a trumpet sound from ThumbJam, then a sung, pitch-detected layer of trumpet is added on top, until only the pitch-detected trumpet sound is left. That leaves my hands free to bring AirVox into the foreground and play a simple solo passage. After the solo there's a bridge; the song structure is controlled with pre-saved song files, and Elastic Drums allows changing files without the sound stopping. This leads to a closing passage, with a more upbeat rhythm and a more cheery trumpet melody, now decorated with strings from ThumbJam. ThumbJam's layout allows more than one instrument to be loaded and played at the same time.

Camera as an input method is pretty interesting. It also gives a nice performative element to the live performance, something that easily catches viewers' attention. The pitch detection also works well in ThumbJam (even though the dual-instrument layout somehow messes with it, and the output octave sometimes gets stuck), and an app like Elastic Drums works nicely as a backing track. However, Elastic Drums doesn't have exact scale settings, so the programming of melodic and harmonic elements needs to be done carefully.
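The core of that kind of pitch tracking is easy to sketch once a fundamental frequency has been estimated from the mic signal: the detected frequency is rounded to the nearest MIDI note, which then triggers the instrument sample. The sketch below shows only that mapping step; the frequency estimation itself is the hard part and is assumed to happen elsewhere. It illustrates the general principle, not ThumbJam's actual code.

```python
import math

# Sketch: map a detected fundamental frequency to the nearest MIDI note.
# Frequency estimation from the mic signal is assumed to be done elsewhere.
def freq_to_midi(freq, a4=440.0):
    return int(round(69 + 12 * math.log2(freq / a4)))

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(midi_note):
    return NOTE_NAMES[midi_note % 12] + str(midi_note // 12 - 1)

for sung_freq in (220.0, 247.0, 330.0, 392.0):   # example detected frequencies (Hz)
    n = freq_to_midi(sung_freq)
    print(f"{sung_freq:6.1f} Hz -> MIDI {n} ({note_name(n)})")
```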

I think I achieved the overall goal of the composition: I was able to improvise quite freely on top of the backing track. However, the combination of these apps produces a few glitches. I intentionally used samples from acoustic instruments, trumpet and cello, because I wanted to highlight the expressive quality of an XY-axis instrument like ThumbJam. It would have been even more expressive if I hadn't used the dock; parameters like vibrato and pitch bend can be controlled with the accelerometer of the iPad. Instead, I set those expressive controls at my fingertips: position on the Y axis controlled amplitude and finger movement controlled vibrato.


Figure 52. Playing AirVox ‘the iPad theremin’ during Luaka Bebop.

In addition, I didn't use the full potential of Elastic Drums. It's a very versatile drum machine app that has something that can make music very interesting: randomisation. There are also effects that can be set for each instrument in the drum sequence, and those effects can be randomised and automated in a very flexible way.

One last note about this composition is that I didn't have to write full written instructions for myself, because I composed the song right before Media Lab's Christmas Demo Day. Intensive rehearsal made it stick in my head without instructions (for the other songs I had to write down a list of instructions). If there are gaps in the composition process, writing down the instructions is very advisable. However, I did film myself playing it so that I could pick out the best ideas from the experiments.

Table 8. Components of Luaka Bebop

Luaka Bebop

Ingredients:

  • AirVox “theremin”
  • vocals-to-MIDI solo in ThumbJam
  • Organic feel
  • Live visuals from the iPad

Computational methods for creating and processing music:

  • Audio-to-MIDI conversion
  • Effect automation
  • Sending MIDI inputs to several apps within the same device

Input methods :

  • Touch screen: ThumbJam’s XY pad, controller buttons of Elastic Drums
  • Gyroscope in ThumbJam for slight alterations to the sound
  • Mic input, audio-to-MIDI in ThumbJam
  • Camera in AirVox

Approaches for creating a musically versatile live performance:

  • Approach is a combination of pattern sequencing and improvisation from Johnston’s (2015) list.

iPad virtuosity factors:

  • Giving input from mic and manipulating and switching between apps at the same time
  • Playing multiple instruments on the same interface (two instruments loaded in ThumbJam)

Special artistic decisions for making the composition interesting:

  • Even though AirVox is not a very controllable instrument, I wanted to use it because it looks good from the audience's viewpoint

Conclusions

Based on the research process and the evaluation of different aspects of iPad as a musical instrument, I draw conclusions on how iPad works as a musical instrument and how a potentially interesting solo performance can be built with it.

I think iPad makes the musician think about live gigs and playing music in general in a slightly different way – in a way that only iPad inspires, partly because of its limitations. It has all the qualities that make it a nice instrument for electronic music, for building sonically complex, multilayered music with only one player. On the other hand, the touch screen and all the other sensors give it a more tactile and tangible feel, a bit more like an acoustic instrument.

I created four compositions for this research. The compositions don’t represent any specific musical style or genre but it’s all music that I enjoy playing and I’m happy to say that I’ve created it. I want to keep on playing it live and develop the whole concept further.

iPad – an instrument, a controller, or an interactive music machine?

What is the essence of iPad as a musical instrument? The goal of this section is to describe that.  

One significant feature of the iPad is the interplay of different apps. This is often done using control messaging such as MIDI, or by routing audio between apps. This leads to a situation where the musician uses an instrument to control another instrument. That's a clear example of how the definition of a controller can be ambiguous. Of course, there are also apps that have been designed to work as controllers, but often those apps also have default sounds which, as in the case of Orphion (Trump & Bullock 2014), have been built to work as nice-sounding instruments on their own, too.

Is iPad an instrument or a controller? Or does it make any difference? Separating gesture and timbre, as in MIDI, has been essential to digital instruments, and MIDI was developed to send messages between different electronic hardware instruments. iPad makes good use of MIDI, often without any external cable. The MIDI implementation of iPad apps means that the data should move equally well to another app as to a hardware synthesizer.
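The same idea can be sketched on a desktop with the Python mido library, used here only as a stand-in for illustration; on the iPad the apps talk to each other over Core MIDI virtual ports, and the port name below is made up. One program acts as the controller and sends note messages; whatever is listening on the port turns them into sound with its own timbre.

```python
import time
import mido

# Sketch: a "controller" sending MIDI notes to whatever synth listens on the
# same virtual port. Desktop stand-in with mido; the port name is hypothetical.
out = mido.open_output("ExamplePort", virtual=True)

arpeggio = [60, 64, 67, 72]          # C major arpeggio as MIDI note numbers
for note in arpeggio:
    out.send(mido.Message("note_on", note=note, velocity=90))
    time.sleep(0.25)                 # crude timing, good enough for a sketch
    out.send(mido.Message("note_off", note=note))

out.close()
```

The sender knows nothing about the receiver's sounds, which is exactly the separation of gesture and timbre that MIDI was built on.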

Nearly every iPad instrument offers some kind of interface for playing. There are many examples of what an iPad instrument app can look like. Without any other sensors involved, the interface is just glass. Separation of input and output reduces the “feel” associated with producing a certain kind of sound (Roads, 1996). That's the case with digital instruments, including iPad. However, iPad's touch screen and other sensors provide better opportunities for coupling gesture and timbre.

iPad has advantages over traditional and electronic instruments. Even though iPad is not the most powerful computing machine of the day, it can be used for many tasks that wouldn't be possible with other instruments. iPad can be used just like any computer to perform the tasks described in chapter three: traditional sequencing and multi-track recording, sound synthesis and effects. Basically there's no difference in use between an iPad and a desktop computer. However, iPad has existed for only six years, so there are fewer ready-made applications available for it than for a desktop computer.

In theory, iPad could be used for algorithmic composition, live coding or generative music, or as an interactive music machine. It has many built-in sensors that could be used for interactive music. In early 2016 Apple released a fairly simple app that builds a lot of expectations for the future; Music Memos is a recording app which detects the pitch and beat of incoming sounds. Using that data, an algorithm automatically creates a backing rhythm track. The idea is promising: you could be carrying an entire band in your pocket, ready for whatever composition and improvisation you come up with. Magic Piano by the company Smule is another naive example. There are no tempo constraints; it is a game built around expressive musical timing: the player is completely free to express each note in time – at any tempo, with variation, rubato, swing, rolling chords, and trills (Wang, 2015). I think Magic Piano is a nice example, an entry-level musical application that is easy and fun to play but provides reward for more skilled players, too. It's not a very sophisticated interactive system, but it makes good use of the possibilities of iPad.

Moreover, I see Magic Piano and similar apps as a continuation of Max Mathews's Baton. Max Mathews has worked with instruments and systems which allow the player to control certain musical effects like amplitude, tempo and balance over the course of an entire piece of music, but not the notes (Jordà 2004, p. 324). These simple interactive musical instruments are nice attempts to provide amateur players with a similar joy of playing as professionals get from more traditional instruments. All in all, there are not many apps in the App Store that would truly work as interactive music machines. Not yet.

I also agree with Drummond (2009, p. 124) when he states that "Interactive systems blur traditional distinctions between composing, instrument building, systems design and performance." I think the same idea lies in many musical computer applications, especially in iPad applications. Some of them are clearly instruments; they resemble existing traditional instruments. Some applications, or perhaps they could be called systems, like Liine's Lemur[56], are pure controllers. Lemur is designed to be an additional control interface for music software, especially designed for live performance. Lemur doesn't make any sounds as a standalone application; it's pure controller software. But it's possible to build your own interface in the software, and even develop automation there.

Then there are examples of DAW-like apps that work equally well for live playing as for studio work, like Korg's Gadget[57]. Some apps blur the line between traditional instruments and digital instruments with familiar interfaces optimized for the touch screen, like Geo Synth and the iFretless apps. They all have one of the usual interfaces for an iPad instrument: a grid that resembles the fretboard of a bass or a guitar.

There are some examples of new musical interfaces that will augment the possibilities of the iPad as an alternative controller even further. I believe that gesture recognition (without the need to touch the screen) will quite soon be implemented in the iPad, and then a whole new approach will become possible. This is already done on a larger scale with Kinect, and for hand gestures with Leap Motion. Already fifteen years ago Cutler et al. (2000) listed alternative control interfaces, and notable examples were XY surfaces with pressure and angle sensitivity. The Apple Watch has pressure sensitivity, and it will quite soon become available for the bigger screens, too. Now we are waiting for the first clever uses of 3D Touch in touch screen instruments.

I think iPad is a bit of everything: an instrument, a controller – and very potentially – an interactive music system.

How to build a solo live performance

Audiobus and the other apps described above form the technical setup. But how to build a solo live performance that is musically versatile and interesting for both the player and the audience?

Based on this research there are three main ways to build an interesting and musically versatile solo live performance with iPad. Those three main ways are:

  1. Automation
  2. Looping
  3. Sending musical messages from one app to another

Automation means that there are one or more automated processes playing at the same time; different apps playing at the same time without being linked. The word automation in this case means that the different apps have a sequence going on that the player has played in, or simply started by pushing the play button. It can be more, though. There could be an automated process controlling the song structure, changing from verse to chorus and back. There could be automation of effects that gives an interesting feel to the song. Or there can be an algorithmic composition as part of the song, or an interactive music machine reacting to what's being played. The automation approach together with live playing on top provides nice results. Basically any musical app which can play patterns or automated soundscapes on its own in the background can be used for this approach. My preferred automation apps are Samplr, Borderlands Granular and Soundscaper.

The second approach, looping, is similar to automation. To some extent looping is a subcategory of automation, because automation can include looping. But looping differs from automation in that there is usually one main app where the looping takes place. Loops are recorded, muted and stopped in the main app, or with a controller app like Audiobus Remote. Usually there's a possibility to add effects on top of the loops, too. This approach is very fruitful for creating compositions out of vocal input, singing and beatboxing. My preferred looping applications are Loopy HD, Djay 2 and Launchpad.

The third approach, sending musical messages from one app to another, means that one app sends the other apps instructions about what to play. The patterns that are played are stored in the sender app, and the apps that produce the sounds are just slaves to the messages. The most common thing that I do is to send MIDI sync messages from one app to another. The apps are then able to play sequences and loops in sync, without the beat drifting off. There are many other ways MIDI can be used in iPad music: notes, control data, program changes. The iPad can also be used for sending MIDI to hardware instruments. It's worth noting that some apps provide more reliable MIDI implementations than others. My go-to apps with MIDI clock are Loopy HD and FunkBox, and I usually sync Samplr with other apps.
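MIDI sync itself is simple in principle: the master app sends a start message followed by a steady stream of clock ticks, 24 per quarter note, and the slaved apps derive their tempo from the tick rate. Below is a minimal sketch of such a clock master, again using mido on the desktop as a stand-in; the port name is made up, and real apps use a far more accurate timer than time.sleep.

```python
import time
import mido

# Sketch of a MIDI clock master: "start", then 24 clock ticks per quarter note.
# Desktop stand-in with mido; the port name is made up for the example.
BPM = 120
TICKS_PER_QUARTER = 24                          # defined by the MIDI spec
tick_interval = 60.0 / (BPM * TICKS_PER_QUARTER)

out = mido.open_output("ExampleSyncPort", virtual=True)
out.send(mido.Message("start"))
try:
    for _ in range(TICKS_PER_QUARTER * 4 * 4):  # four bars of 4/4
        out.send(mido.Message("clock"))
        time.sleep(tick_interval)               # real apps schedule ticks on an audio clock
finally:
    out.send(mido.Message("stop"))
    out.close()
```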

Even though I've simplified the approaches to building a live performance down to three, the usual approach is to combine two or three of them and play an instrument on top, one that's not part of any of the processes. I sometimes use iPad in the good old way, as a keyboard instrument, as described by Mann (2007): having separate tracks for beat, bass line and style, and then altering them according to the song structure. For creativity it's sometimes interesting to set limitations and try to come up with something creative using only one approach. The five approaches from Drone, Glitch and Noise (Johnston 2015, ch. 5) can be seen as combinations of the three approaches I have described.

Computers are good at repeating patterns, executing algorithms and being accurate, but in order to give the composition an organic feel, using one or more of iPad’s expressive instruments on top of the ongoing processes usually makes the composition.

In this research I've created compositions according to my aesthetic choices, but using iPad as a live instrument is not limited to any musical genre. In addition, there is no single best app for a certain purpose; there are so many factors that affect the process.

Performance challenges when using iPad as a live instrument

Can a single app become so versatile that it's an expressive instrument on its own? Is it possible to develop touch screen mobile instruments that enable virtuosity? If so, what kind of virtuosity is it? 

I'd like to go back to Jordà's (2007, p. 105) thoughts about virtuosity with NIME: a classical virtuoso has infinite precision and love for detail, like a goldsmith, but a new digital instrument virtuoso, closer to a virtuoso in jazz music, could be compared to a bullfighter for the ability to deal with the unexpected.

Being an iPad player doesn't necessarily mean the ability to play a passage as quickly as possible. To me, playing like a virtuoso violin player is not the essence of iPad musicianship. This is highlighted when the iPad is used as a solo instrument. There are usually automated sound processes and loops playing in the background, and it's not so much about quick movement of fingers on the touch screen as about the interplay of different apps.

While there are automated processes playing musical patterns and the player is sending control data to control the sounds and processes, it's possible that something unexpected occurs. That's different from traditional instruments, which don't usually play any sounds on their own. In my opinion, the essence of iPad virtuosity lies somewhere between being an expert on the interplay of different apps and being able to play expressive music on the touch screen. A certain amount of love for technical things is needed in order to really like iPad as an instrument. But on the other hand, I see the iPad bringing computer music back to being a closer companion to music played with traditional instruments.

Based on the iPad virtuosity factors that I've listed for each of the compositions, here is what virtuosity means for an iPad musician:

Based on my experience, the most significant factor of iPad virtuosity is managing several ongoing processes at the same time. Managing multiple processes on iPad requires keeping some of them in mind; there's no single interface that would indicate what is on and what is off. Fortunately apps like Audiobus Remote and AUM are clearing the way and providing better ways for managing multiple apps at the same time. Of course it's not always true, but the more apps playing at the same time the player can handle, the more versatile the musical output can be.

"With the freedom of design in the case of electronic instruments, the learning curve does not need to be steep, while at the same time the instrument should facilitate the development of virtuosic levels." (Bongers 2007, p. 15) Still, many musical apps for the iPad are either imitations of an existing instrument or simple one-trick-ponies. iPad provides a platform with many possibilities for the app developers. The possibilities to become a digital instrument builder has expanded significantly with the mobile app ecosystem. Perhaps virtuoso piano players are able play iPad piano at a marvelling level. And perhaps the simple apps with simple sounds are not intended to be used by musicians. But the question is: is it possible to create unique instrument apps for the iPad that don’t have a violin-like learning curve but it still provides the possibility to become a virtuoso player? There are some attempts to show that it is, but basically no proven results yet. Time will tell.

Just like a virtuoso cello player knows all the possibilities of the instrument, an iPad player has to become familiar with the features of the tablet before it's possible to become a virtuoso. Still, it's probably a very narrow set of applications, or a combination of applications, that the player learns to play in a virtuosic way. Both worlds, hardware and software, are constantly evolving, and they need to work together in order to provide a meaningful platform for instrumentalism.

In my own experiments I've mostly been taking general glimpses of the existing apps, not trying to master them with the goal of becoming a virtuoso player. The possibilities are so vast that it's hard to say, and I don’t even want to say, which app is my instrument of choice. But I think there are apps for all the skill levels and many purposes: for virtuosi, generalists, sound designers and novices.

Strengths and weaknesses of iPad as a musical instrument

What are the strengths, weaknesses, opportunities and threats of iPad as a musical instrument? The points presented here mostly relate to the iPad regarded as an electronic instrument and a NIME.

Jordà (2007, p. 104) points out an important aspect of a good instrument: ”Good new instruments should learn from their traditional ancestors and not impose their music on the performers. A good instrument should not be allowed, for example, to produce only good music. A good instrument should also be able to produce terribly bad music, either at the player’s will or at the player’s misuse.” If these conditions are not met, it may be the case that the player is not able to play music but plays with the music. Sometimes it's rewarding just to play along, but the player should always have control in the end. This is clearly an opportunity for the iPad. iPad brings all the sophisticated and futuristic computational music tools into the pockets of consumers. It's far easier just to try out new things. However, this may be a pitfall in some of the interactive musical systems.

It's clearly a weakness that iPad may be regarded as a toy, not a professional musical instrument. However, with the multitude of apps that there are for iPad, there's no worry that iPad would only be a musical toy. So it's not wise to wait for the perfect combination of apps to appear, but to learn how to use the current apps in a way that works for your music.

One of the key things in making compositions for the iPad is being able to remember what the settings for each app were. Apps like Audiobus with their state saving capability make the life of iPad musicians a bit easier. State saving helps a lot in creating sonically complex compositions, because it's easier to have more than just a couple of apps in the chain of sound. However, this usually covers only sounds and programmed patterns. In many cases it's still more difficult to remember what was played on an iPad instrument than if it had been played on a guitar or a piano; touch screen software instruments don't provide tactile feedback.
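Where the apps themselves offer no state saving, the same idea can be approximated by keeping a per-song settings file and reading it back before the gig. The sketch below uses my own ad hoc structure and placeholder values; it has nothing to do with the actual preset format of Audiobus or any other app.

```python
import json

# Toy sketch of per-song state saving: remember which apps are in the chain and
# their key settings. Ad hoc structure and placeholder values for illustration.
setlist = {
    "Clorochime": {
        "apps": ["Color Chime", "Tachyon", "Bebot", "Impaktor", "Crystalline",
                 "Caramel", "Johnny", "AUFX:Space", "MiMix", "Audiobus", "Loopy HD"],
        "settings": {
            "Loopy HD": {"bpm": 120, "bars_per_loop": 4},   # placeholder values
            "MiMix": {"panning": "as rehearsed"},
        },
    },
}

with open("padworks_setlist.json", "w") as f:
    json.dump(setlist, f, indent=2)

with open("padworks_setlist.json") as f:
    restored = json.load(f)
print(restored["Clorochime"]["settings"]["Loopy HD"])
```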

Tactile feedback is often naturally present in traditional instruments: “Despite the many advantages of the separation of user interface from sound-producing medium, a price we pay for this separation is a loss of physicality.” (Mann 2007, p. 2) This may hinder the development of virtuosity as we know it, and it can also be regarded as bad for the audience. Papetti et al. (2015) note that tactile feedback may seem like a minor issue, but argue that it matters greatly: "Looking at current musical interfaces, that of tactile feedback seems like a minor issue as compared to ergonomics or gesture mapping. Nevertheless, several recent studies suggest that the development of musical skills strongly relies on tactile and kinesthetic cues: These would inform sophisticated control strategies that allow experienced musicians to achieve top performance levels (for example in terms of precise timing and accurate intonation), and enable expressivity and self-monitoring."

Placing fingers on the right spots of the screen needs care. At some point I thought I would be able to lift my head while performing and look at the audience, but that's basically not possible. The connection between the audience and the musician needs to be built in a different way. I chose to display the stage to the audience via live video. I wanted to project the screen of the iPad as it was, but that wasn't possible while using a dock. I believe the lack of tactile feedback from the musical apps also made it difficult to remember the correct settings for each composition and the correct fingerings for each passage.

I created written notation for the compositions and used written instructions for practicing the songs. However, in the end the best way to take notes for the final compositions was to shoot video while rehearsing and then go through the video in detail, taking notes from it. Three was the lowest number of apps I used for a single composition, but even that is so many instruments playing at the same time that it's fairly difficult to remember what's happening in each of them. What are the correct settings, and what should be done with each app as the composition goes forward? That's why it's a relief to also include improvised sections in the compositions. A real strength of iPad as a musical instrument is in improvised passages: it's possible to set the scale for many iPad instruments so that improvisation on the scale becomes very easy. When the scale is set correctly, there are no “wrong” notes, because they can be removed from the interface altogether.

The big touch screen and the sensors provide a way to give non-discrete input to iPad, just like it’s possible to bow cello very hard, very softly and everything in between. The sensors are already there; there’s no need to add anything to the iPad, it may be used as an expressive instrument as it is now. Everything is in a compact form in one device. It’s a very practical thing that there are no extraneous cables lying around when playing the iPad.  

The fact that iPad is not an instrument by design can be seen as an advantage, too. Wang (2009, p. 303) has observed that most users of the social instrument Ocarina are not musicians, and yet are able to be musically expressive. According to the company behind Ocarina, Smule, Ocarina serves as an experiment in making use of technology to explore different types of musical mobile and social experiences (ibid.). iPad (and perhaps even more so iPhone) is a good platform for this kind of experimentation.

Despite all the good points, iPad is not quite ready yet. At least not ready as a musical instrument. It's a very promising platform for music making and live performance, but some things just aren't there yet. The performance of the iPad's processor is limited, and so is the amount of memory. I had to limit the apps used for some of the songs, and I needed to pre-record some parts as samples because I wasn't able to perform them live.

The number of apps that make good use of the sensors is not very big yet. There are only a few clear examples of meaningful ways to incorporate the sensors into live playing, like ThumbJam's accelerometer vibrato, the camera theremin of AirVox and the use of the accelerometer and gyroscope in Borderlands Granular. Or perhaps it's just that we haven't yet got used to the idea of playing computers using gestures; the time is not right for that yet. The apps that I presented are much more about the touch screen user interface than about the whole iPad, with all its sensors, as an interface. So far the sensors remain something experimental. Perhaps the latest update to GarageBand will convince other app developers to start experimenting with alternative gesture input methods.

Even though iPad is a computer, not all means of creating computer music make sense on the iPad, one example being live coding. But on the other hand, live coding has probably been developed because of the limited ways (keyboard and mouse) to give musical input to a computer. That's not true for the iPad: the big touch screen is playable, and there are apps with good expressive qualities.

The application ecosystem also has advantages and disadvantages: every operating system update brings something good and something bad. There are apps that make use of new features (like Audiobus Remote and its Bluetooth connection), but on the other hand some apps may stop working or start glitching after an update. During test sessions the iPad started glitching and some apps crashed from time to time. It's not totally reliable to run multiple apps at the same time. iOS is still a fairly new technology; it's not yet in a phase where everything works seamlessly together. Every iOS update breaks something, and audio apps seem to be very fragile in that sense. In practice it means that it may be wise to use apps coming from the same developer, because that may reduce the risk of having broken software.

However, I assume three things would make my life as an iPad musician playing live gigs much less stressful. Firstly, I should compose songs for a defined and locked set of apps, flaws and all, and not update the apps or the OS. Secondly, I should have two iPads for the performance, and it would be good to have somebody to start all the necessary apps and make the necessary settings before the next song starts. It now takes a bit too much time to make sure everything is 100% as planned before the audience (and I) start feeling nervous. And thirdly, I should dedicate the iPad just to music. Some of the apps remember their settings really well, without explicitly saving them, even after closing the app and shutting down the iPad. But if you also use the iPad for other things, you may end up showing something on it to somebody else, accidentally change a setting, and not remember it until the gig has already started and you realize you really need to find an alternative solution.

All the cons exist because iPad is not only an instrument but much more. However, that's a good thing too, because you end up carrying your instrument almost everywhere and you have it there when inspiration strikes. Like so many things, it's a two-sided coin.


Next steps

What do I intend to do with my findings? This chapter explains what future uses this work could provide, both for me and for others in the field.

In this artistic research I used the theoretical background of electronic music and research from the NIME community to analyse my own practice as an iPad musician. The research question was how to build an interesting and musically versatile solo live performance with iPad. I believe this research has given an answer to that.

I think this research benefits the whole field of electronic music, because it provides insight into an emerging technology from an artistic viewpoint, concentrating on the live use of iPad. I think this artistic research could give inspiration to research the topic further and define what iPad musicianship means. According to Jordà (2007, p. 99), graphic tablets with good resolution along the XY axes and with pressure and angle sensitivity have also proven useful for music. I think I have shown that a tablet with good resolution and angle sensitivity, but no pressure sensitivity, also has good use for music.

I was striving for an interesting and musically versatile end result. The compositions contain examples of each of the three approaches to using iPad as a live instrument for layered music. I used the aforementioned apps for the compositions. I tried to make use of the features that feel natural to use without extensive setup. To some extent that was the case. The number of things needed to prepare a track for live performance can grow so overwhelming that I simply didn't remember everything that I was supposed to do. Then it took a bit of improvisation to achieve what I had planned, like playing in and pre-recording loops for the compositions.

The field of using iPad as a musical live instrument is rather young. Building entire compositions is still somewhat an experimental practice. There are numerous interesting, playable, even expressive musical apps available for the iPad. I have presented different groups of musical iPad apps for different purposes. The grouping is based on my own experience. The groups are overlapping and  the update cycle is shorter for mobile software than for desktop software. When the software gets updated and the apps get new features, the apps may move from one group to another.

Next steps as a performance artist

How would I like to develop the performance further?

During the final months of the research process I realized that I need to pay attention to how the musical iPad performance is presented to the audience. I have constructed a system that projects the interface for the audience to see. In fact, a Padworks performance on its own answers the research question: watching a Padworks performance provides insight into how an interesting and musically versatile solo performance can be built with iPad.

I want to take that idea further, not only focusing on the educational aspect. I want to build an audio-visual platform for solo musicians. There are plenty of musical apps available, more than one iPad musician will ever need. But currently, even though it would technically be possible, there's no flexible way to blend and project visuals from different apps during the performance. I'd love a system where I could stream different camera images, apply effects to them, and at the same time blend in other video material too. I'd like to build Videobus, an Audiobus of visuals, that a DIY artist could use for projecting different visual content.

Next steps as a musician

How am I going to continue making music after the research? I've listed the apps presented in this research at www.tuomasahva.net/padworks, but the online list also contains apps that are not presented in this research. I'm planning to keep it up to date, at least to some extent.

There isn't yet a multitool app that would have its roots in every category. Perhaps iOS is such a new platform that the first all-around DAWs that can be used in many ways haven't yet seen the light of day. I predict that this development is only at an early stage and we just haven't seen the iOS audio multitools yet. I want to stay informed about how iOS as a music making platform evolves. What are the new innovative apps that are yet to be invented? How does the use of different built-in sensors in musical apps develop in the future? What are the first musical apps to make use of the Apple Pencil[58]? I want to see the answers to all these questions.

I also want to keep up with what Michael Tyson is developing, because he looks like a driving force (along with Jonathan Liljedahl of AudioShare and AUM) in putting out forward-thinking, high-quality audio software. Michael Tyson is currently working on a project called ‘Loopy: Masterpiece edition’[59], which is audio software one or two levels more ambitious than Audiobus and Loopy combined, probably aiming at functionality similar to Ableton Live, but standalone on the iPad.

Many ideas that I have had for composing music using interesting aspects of the iPad and the apps were left out of this research, simply because time was limited. I want to keep working on those. In addition, I feel that fewer elements per composition provide more coherent results; I wasn't adding that many elements once I had the skeleton of a composition figured out.

Even though I'm aiming at a solo performance with this research, I think that keeps me motivated. By making computer music with an iPad that invites direct touch interaction, I'm able to create such rich soundscapes and interesting music that I'm drawn to spend an excessive amount of time with it. I feel that spending time playing music with my iPad will lead me to a new, unexplored level.

This all goes well along with the thoughts of John Cage. He has written: “When Theremin provided an instrument with genuinely new possibilities, Thereministes did their utmost to make the instrument sound like some old instrument, giving it a sickeningly sweet vibrato, and performing upon it, with difficulty, masterpieces from the past. Although the instrument is capable of a wide variety of sound qualities, obtained by the mere turning of a dial, Thereministes act as censors, giving the public those sounds they think the public will like. We are shielded from new sound experiences.” (Kostelanetz 1991)

Now we iPad musicians act as censors. Even though iPads are good at imitating existing instruments, I feel that we should concentrate on new inventions. Thanks to the big touch screen, many ways of interaction resemble interacting with a more traditional instrument, and I think iPad works nicely in education, providing easy access to a variety of sounds. However, I'm personally interested in creating something new and unheard of and unveiling that to the world.

References

Anderson, Hans; Lin, Kin Wah Edward; Agus, Natalie and Lui, Simon. 2015. Major Thirds: A Better Way to Tune Your iPad. In Proceedings of the International Conference on New Interfaces for Musical Expression, 2015. http://www.nime.org/proceedings/2015/nime2015_157.pdf (accessed 3 April 2016).

Ars Electronica 2013. Borderlands Granular Prix Winner. http://prix2013.aec.at/prixwinner/9168/ (accessed 2 February 2016).

Barrett, Estelle & Bolt, Barbara. 2007. Practice as Research: Approaches to Creative Arts Enquiry. I B Tauris & Co Ltd. 224 pages.

Billias, Athan. N.d. MIDI history: Chapter 2 – Player Pianos 1850–1930. https://www.midi.org/articles/midi-and-player-pianos (accessed 6 August 2015).

Bongers, Bert. 2007. Electronic Musical Instruments: Experiences of a New Luthier. Leonardo Music Journal, Volume 17, 2007, MIT Press, pp. 9-16.

Brown, Andrew R. and Sorensen, Andrew. 2008. Integrating Creative Practice and Research in the Digital Media Arts. Practice-led Research, Research-led Practice in the Creative Arts. H. Smith and R. Dean. Edinburgh, Edinburgh University Press, pp. 153–165.

Chadabe, Joel. 1997. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, NJ: Prentice Hall. 270 pages.

Cixous, Hélène 2008. White Ink. Interviews on sex, text and politics. Susan Sellers (editor) New York: Columbia University Press.

Collins, Nick. 2003. Generative Music and Laptop Performance. Contemporary Music Review 22 (4). Pp. 67–79.

Collins, Nick & d’Escrivan, Julio. 2007. The Cambridge Companion to Electronic Music. Cambridge University Press, United Kingdom. 287 pages.

Cox, Christoph & Warner, Daniel, eds. Audio Culture: Readings in Modern Music. New York: Continuum, 2004. 454 pages.

Cutler, Marty; Robair, Gino and Bean. 2000. The Outer Limits.  Electronic Musician. August 1, 2000. http://www.emusician.com/gear/1332/the-outer-limits/31755 (accessed 3 April 2016).

Dobrian, Christopher and Koppelman, Daniel. 2006. The 'E' in NIME: Musical Expression with New Computer Interfaces. Proceedings of the 2006 International Conference on New Interfaces for Musical Expression (NIME06), Paris, France. http://www.nime.org/proceedings/2006/nime2006_277.pdf (accessed 3 April 2016).

Drummond, Jon. 2009. Understanding interactive systems. Organised Sound / Volume 14 / Issue 02 / August 2009, pp 124-133. Cambridge University Press 2009.

Elsea, Peter. 1996. A short history of Computer Music. University of California, Santa Cruz, CA http://artsites.ucsc.edu/ems/music/equipment/computers/history/history.html (accessed 8 June 2015).

Eno, Brian. 1996. Generative Music. http://www.inmotionmagazine.com/eno1.html (accessed 3 February 2016).

Hannula, Mika; Suoranta, Juha & Váden, Tere. 2003. Otsikko uusiksi. Taiteellisen tutkimuksen suuntaviivat. Niin & näin -kirjat 2003. 23°45-sarja. 100 pages. Published as a free PDF file in 2012: http://netn.fi/sites/www.netn.fi/files/Hannula_Suoranta_Vaden_Otsikko_uusiksi-web_0.pdf (accessed 3 April 2016).

Hunt, A. and Kirk, R. 2000. Mapping Strategies for Musical Performance. In M. M. Wanderley and M. Battier (eds.) Trends in Gestural Control of Music. Paris: IRCAM–Centre Pompidou.

Johnston, Clif. 2015. Drone, Glitch and Noise: Making Experimental Music on iPads and iPhones (Apptronica Music App Series Book 1) Kindle Edition. Retrieved from Amazon.com.

Jordà, Sergi. 2004. Instruments and Players: Some Thoughts on Digital Lutherie. Journal of New Music Research, 33:3, pp. 321–341.

Jordà, Sergi. 2007. Interactivity and Live Computer Music. In: Collins, Nick & d’Escrivan, Julio, eds.  The Cambridge Companion to Electronic Music. Cambridge University Press, United Kingdom.  pp. 89–106.

Klein, Julian. 2010. What is Artistic Research? Research Catalogue 30 March 2010. http://www.researchcatalogue.net/view/15292/15293/0/0 (accessed 10 June 2015).

Kostelanetz, Richard. 1991. John Cage, An Anthology. Da Capo Press. March 21 1991. 237 pages.

Mann, Steve. 2007. Natural Interfaces for Musical Expression: Physiphones and a physics-based organology. In Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME07), New York, NY, USA. http://www.nime.org/proceedings/2007/nime2007_118.pdf (accessed 3 April 2016).

Miranda, Eduardo R. and Wanderley, Marcelo M. 2006. New digital musical instruments: control and interaction beyond the keyboard. The Computer Music and Digital Audio Series, Vol.21. A-R Editions, Inc., Middleton, Wisconsin, USA.

Neal, Rome. 2004. Turntablism 101. CBS News March 25, 2004. http://www.cbsnews.com/news/turntablism-101/ (accessed 7 February 2016).

Norvio, Tuomas 2015. Lecture. Sound design day organized by the light and sound design department of the Theater Academy of Helsinki in May 3 2015.

Papetti, Stefano; Schiesser, Sébastien & Fröhlich, Martin. 2015. Multi-point vibrotactile feedback for an expressive musical interface. In Proceedings of the International Conference on New Interfaces for Musical Expression, 2015. http://www.nime.org/proceedings/2015/nime2015_118.pdf (accessed 3 April 2016).

Peters, Michael. 1996. The Birth of Loop. Version 1.1 (created Oct 13, 1996, modified 2004 and 2006). http://www.loopers-delight.com/history/Loophist.html (accessed 3 February 2016).

Phillips, Dave. 2008. An Introduction to OSC. Linux Journal, Nov 12 2008. http://www.linuxjournal.com/content/introduction-osc (accessed 31 July 2015).

Poepel, Cornelius. 2005. On Interface Expressivity: A Player-Based Study. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME05), Vancouver, BC, Canada, 2005. Pp. 228–231. http://www.nime.org/proceedings/2005/nime2005_228.pdf (accessed 3 April 2016).

Preve, Francis. 2015. ApeSoft iVCS reviewed: the EMS Putney synth on your iPad. Published May 13 2015. http://www.keyboardmag.com/gear/1183/apesoft-ivcs-reviewed-the-ems-putney-synth-on-your-ipad/52186 (accessed 4 February 2016).

Roads, Curtis. 1996. Computer Music Tutorial. The MIT Press. Cambridge Massachusetts. 1256 pages.

Rowe, Robert. 1993. Interactive Music Systems: Machine Listening and Composing. The MIT Press. Cambridge Massachusetts. 278 pages.

Schedel, Margaret. 2007. Electronic Music and The Studio. In: Collins, Nick & d’Escrivan, Julio, eds.  The Cambridge Companion to Electronic Music. Cambridge University Press, United Kingdom. Pp. 24–37.

SOPI. 2015. Sound and Physical Interaction. SOPI research group. Mapping. http://sopi.aalto.fi/teaching/pid/mapping/ (accessed 30 November 2015).

Swift, Andrew. 1997. A brief Introduction to MIDI, SURPRISE (Imperial College of Science Technology and Medicine). http://www.doc.ic.ac.uk/~nd/surprise_97/journal/vol1/aps2/ (accessed 12 June 2015).

Szanto, Gabor & Vlaskovits, Patrik. 2015. Android’s 10 Millisecond Problem: The Android Audio Path Latency Explainer. http://superpowered.com/androidaudiopathlatency/#axzz3achzqkYV (accessed 3 April 2016).

Tobenfeld, Emile. 1992. A System for Computer Assisted Gestural Improvisation, in Proceedings of the 1992 International Computer Music Conference. International Computer Music Association, San Francisco, CA. Pp. 93–96.

Trump, Sebastian; Bullock, Jamie. 2014. Orphion: A Gestural Multi-Touch Instrument for the iPad. In Proceedings of the International Conference on New Interfaces for Musical Expression 2014. Pp. 159–162.

Wang, Ge. 2007. A History of Programming and Music. In: Collins, Nick & d’Escrivan, Julio, eds.  The Cambridge Companion to Electronic Music. Cambridge University Press, United Kingdom. Pp. 55–71.

Wang, Ge. 2009. Designing Smule’s Ocarina: The iPhone’s Magic Flute. In Proceedings of the International Conference on New Interfaces for Musical Expression, 2009. http://www.nime.org/proceedings/2009/nime2009_303.pdf (accessed 3 April 2016).

Wang, Ge. 2015. Game Design for Expressive Mobile Music. In Proceedings of the International Conference on New Interfaces for Musical Expression, 2015. https://nime2015.lsu.edu/proceedings/143/0143-paper.pdf (accessed 3 April 2016).

Zappi, Victor & McPherson, Andrew P. 2014. Dimensionality and Appropriation in Digital Musical Instrument Design. In Proceedings of the International Conference on New Interfaces for Musical Expression (Baptiste Caramiaux, Koray Tahiroglu, Rebecca Fiebrink, Atau Tanaka, eds.), Goldsmiths, University of London, 2014. Available: http://www.nime.org/proceedings/2014/nime2014_409.pdf (accessed 21 May 2015).


Appendix – written notations for the compositions

Notations taken from the research diary.

Clorochime

Preparations

  1. Open Loopy
  2. Open Mimix
  3. Open Bebot
  4. Open Crystalline
  5. Open Tachyon

Start playing Color Chime on the iPhone (Show Color Chime as background video)

Loopy: Record 2 bars of Color Chime noise notes, in a way that they are not audible

Color Chime: Do another delay mayhem, together with the effect, stop Color Chime

Loopy: Raise the level of the volume to ~zero

iPhone: Open Audiobus remote

iPad (switch background video to iPad video)

And what happens after that, I don’t know :) Perhaps something like this:


Parkfun

Samplr, Borderlands, Loopy

|Dm   |F          | F       |Dm       |G         |Dm C|

| D+A | F + A  | F + G | D + A  |  G + C | D + A  C + G   |


Shenanigans Love

Jam synth little guitorgan in a minor

Humming through mic

Chord progression: C, Em, Am, Em, C, Em, G

Turn on GuitarCapo, AudioShare, and open ThumbJam and Caramel there, Open Animoog with a correct sound

Open Jam Synth

Start humming

Play bass notes (C, Em, Am, Em, C, Em, G, G) from GuitarCapo+ on top of the humming

Start playing and singing with automatic strumming

Leave it to C

Take off 12 string mode

Play the B part: F, Em, G x2

Leave it to C and turn on 12 string mode

Turn on Animoog

Play the basic chord structure x 2

Play B part x 2

Turn off Animoog Midi Sync

Turn on Animoog and ThumbJam (drums with 2 octave lower midi receive)

Play the basic chord structure x 2

Play B part x 2

Leave it to C

Turn on automatic strumming

Hum the melody

Leave it to freeze


Luaka Bebop

The idea is to create a song that kind of resembles a Miles Davis type of bebop jazz song. So the point was to create a simple backing drum track, then a simple chord progression, and then kind of improvise with my own voice.

Open the Luaka Bebop project in Elastic Drums

Thumbjam with Tenor sax

Start off with elastic drums

Do something interesting with the first scene

Start playing the whole progression in Elastic Drums

Play single notes of sax

Play sax chords

Play sax chords with left hand and something else with right hand

Start humming a solo kind of thing with +2 octave

Switch to -1 octave

Hum single chords

Start airvox

Play solo

Go back to thumbjam

Play stuff with trumpet and cello

Profit


[1] Latency is the time interval between stimulation and response, or, from a more general point of view, the time delay between the cause and the effect of some physical change in the system being observed. In practice it’s the time between a touch of the screen and the audio that is heard as a consequence of the touch.

[2] NIME – New Interfaces for Musical Expression is an international conference dedicated to scientific research on the development of new technologies for musical expression and artistic performance. In this research the term is also used in a wider sense, covering all the research around the conference.

[3] Some material from the concert can be found online at www.tuomasahva.net/padworks.

[4] Live coding is a programming practice centred upon the use of improvised interactive programming. Live coding is often used to create sound and image based digital media, and is particularly prevalent in computer music, combining algorithmic composition with improvisation (Collins, 2003). It will be covered briefly later in this research.

[5] EDM stands for Electronic Dance Music and is one of the most popular genres of modern pop music. A live EDM concert often involves DJs performing on huge stages, without any traditional instrument players.

[6] MAX is a visual programming language for music and multimedia developed and maintained by the software company Cycling '74.

[7] Magic Piano is an app that enables its users to play a song with a piano or other instruments just by tapping on the beams of light scrolling down the screen.

[8] GarageBand is Apple’s own entry level music app, originally for desktop computers, but nowadays its development seems to be focused on the iOS version. GarageBand for iOS is good for its built-in instruments and their intuitive touch screen playing interfaces. It’s a good starting point for exploring musical iOS apps. http://www.apple.com/ios/garageband/

[9] SampleTank is an app that contains many sounds but only a very simple playing interface. It’s more intended to be played with another controller as an interface. There are many different sound banks to choose from.

[10] GeoSynth is an instrument app with a playable grid interface. It contains its own sounds but can also be used as a playing interface for other apps.

[11] Animoog is a synth app by Moog Music, the company behind some of the most popular hardware synthesizers.

[12] Impaktor is a drum synthesizer that uses the microphone of the iOS device to turn any surface into a playable percussion instrument.

[13] Analog, digital and discrete are words that may cause confusion. In this context the question is whether the input method is analog, i.e. continuous, or discrete.

[14] Acousmatic music: “The whole point of acousmatic music, expressed in the meaning of the word ‘acousmatic’, is that there is nothing to watch, no observable activity to confirm how the sounds are made, and often no certainty about where the sounds originate. The implication is that we should perceive and respond to the sounds – the music – through listening alone. Acousmatic music is by definition an invisible sonic art, which invests in the liberty of an open sound world and in the imagination of the interpreting listener” (Denis Smalley in Collins & d’Escrivan 2007, pp. 78–79).

[15] SuperCollider is a text-based open source programming language and environment for real time audio synthesis and algorithmic composition.

[16] Pure Data, or PD in short, is an open source visual programming language often used for processing and generating sound.

[17] MIDI – Musical Instrument Digital Interface is a technical standard that describes a protocol, digital interface and connectors and allows a wide variety of electronic musical instruments, computers and other related devices to connect and communicate with one another (Swift 1997).
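
As a minimal sketch of what this protocol looks like at the byte level, the Python lines below build one note-on and one note-off message; the channel, note number and velocity are arbitrary example values.

    # A MIDI note-on message is three bytes: status (0x90 | channel), note, velocity.
    channel = 0          # MIDI channels are numbered 0-15 in the protocol
    note = 60            # middle C
    velocity = 100       # how hard the key was "hit" (0-127)

    note_on = bytes([0x90 | channel, note, velocity])
    note_off = bytes([0x80 | channel, note, 0])   # note-off for the same pitch

    print(note_on.hex(), note_off.hex())          # '903c64 803c00'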

[18] DAW – Digital Audio Workstation, a common term used for software for recording, editing and mixing music.

[19] http://www.nime.org/

[20] OSC (Open Sound Control) is a protocol for networking sound synthesizers, computers, and other multimedia devices for purposes such as musical performance.  
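
As a minimal sketch of what a single OSC message looks like on the wire, the Python lines below build one by hand and send it over UDP using only the standard library; the address pattern /filter/cutoff, the value, the host and the port are invented example values, not something any particular app expects.

    import socket
    import struct

    def osc_string(s: str) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes.
        data = s.encode("ascii") + b"\x00"
        return data + b"\x00" * (-len(data) % 4)

    # One OSC message: address pattern, type tag string, then the arguments.
    message = (
        osc_string("/filter/cutoff")   # example address pattern
        + osc_string(",f")             # one 32-bit float argument follows
        + struct.pack(">f", 0.75)      # big-endian float, e.g. a normalised value
    )

    # Send it to a (hypothetical) receiver listening on localhost:9000.
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(message, ("127.0.0.1", 9000))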

[21] Resonate  is a yearly festival of digital arts held in Belgrade, Serbia http://resonate.io/ 

[22] Definition from Oxford dictionary.

[23] I think it’s boring and dismissive towards the audience if an electronic musician is performing without actually playing anything (perhaps only pressing ‘start’ and ‘stop’ and twisting some fader knobs), and is thus not able to adapt the performance to audience reactions. If that’s the true nature of the music being played, like in an acousmatic performance, it’s ok. But if it’s just for making things easier, then I find it difficult to understand. I think artists should be honest to themselves and to the audience.

[24] For more information about hyperinstruments, see the work of the group of Tod Machover at the MIT Media Lab, and Joe Paradiso, “New Ways to Play: Electronic Music Interfaces,” IEEE Spectrum 34, No. 12, cover article and pp. 18–30 (1997).

[25] Reactable is an electronic musical instrument with a tabletop user interface that has been developed within the Music Technology Group at the Universitat Pompeu Fabra in Barcelona, Spain. http://mtg.upf.edu/project/reactable 

[26] Definition of virtuosity from Oxford Dictionary of English

[27] Definition of instrumentalism from Oxford Dictionary of English

[28] iPad Musician Facebook group can be found at www.facebook.com/groups/Ipadmusician/.

[29] If any readers are thinking of starting to use the iPad in their music making process, I can warmly recommend Clif Johnston’s books. They are available as Kindle books. You don’t need a Kindle to read them; there is an app for that, too.

[30] Ableton is the German company behind Ableton Live, one of the DAWs that is clearly designed as much for live playing as for studio recording.

[31] The list is gathered from Wikipedia https://en.wikipedia.org/wiki/iPad.

[32] Soundscaper and Fieldscaper are experimental sound apps, both developed by Igor Vasiliev: http://audio-mastering-studio.blogspot.com/. The user can use ordinary sound samples to create new and unusual sounds.

[33] A piano roll is a continuous roll of paper with perforations (holes) punched into it; the perforations represent note control data (https://en.wikipedia.org/wiki/Piano_roll). Many DAWs handling MIDI include a virtual piano roll that defines the MIDI messages to be sent.
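
To make the analogy concrete, the Python sketch below treats a toy piano roll as data: each ‘hole’ is a note with a start beat, a length and a pitch, and the list expands into the note-on and note-off messages a sequencer would send. The notes and timings are invented example values.

    # A toy piano roll: (start_beat, length_in_beats, MIDI_note) triples.
    roll = [(0.0, 1.0, 60), (1.0, 0.5, 64), (1.5, 0.5, 67)]

    events = []
    for start, length, note in roll:
        events.append((start, "note_on", note))
        events.append((start + length, "note_off", note))

    # Sorting by time gives the stream of MIDI messages a sequencer would send.
    for time, kind, note in sorted(events):
        print(f"beat {time:4.2f}: {kind:8s} note {note}")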

[34] Ableton Link is a technology that keeps devices in time over a wireless network.

[35] WIST (Wireless Sync-Start Technology) is Korg's technology which allows for wireless sync-start between two WIST-compatible apps on two iPads and/or iPhones located near each other. http://www.korguser.net/wist/ 

[36] Audiobus is a standalone app that can be described as a glue between different music apps. It can be used to send audio from one app to another, and place filters in between. More about Audiobus in the coming chapters.

[37] Inter-app audio (IAA in short) is Apple’s own way to route audio between apps. It’s not a separate app, but implemented within musical apps.

[38] More about all of the mentioned apps in the next chapter.

[39] More info can be found for example at https://en.wikipedia.org/wiki/Animoog 

[40] More info at http://www.bitshapesoftware.com/instruments/tc-11/ 

[41] More info at http://www.wizdommusic.com/ 

[42] Quantization is a way to repair imperfection in digital music. It is the process of transforming performed musical notes, which may have some imprecision, to an underlying musical representation that eliminates this imprecision. The process results in notes being set on beats and on exact fractions of beats. More info can be found at https://en.wikipedia.org/wiki/Quantization_(music).
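
As a small illustration of the arithmetic behind this, the Python sketch below snaps recorded note onsets to the nearest sixteenth-note grid line at a given tempo; the tempo and the onset times are invented example values.

    # Snap note onset times (in seconds) to the nearest sixteenth-note grid line.
    bpm = 120.0
    beat = 60.0 / bpm            # one quarter note = 0.5 s at 120 BPM
    grid = beat / 4.0            # sixteenth-note grid = 0.125 s

    onsets = [0.02, 0.61, 1.13, 1.74]          # slightly imprecise performance
    quantized = [round(t / grid) * grid for t in onsets]

    print(quantized)             # [0.0, 0.625, 1.125, 1.75]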

[43] The theremin is an early electronic music instrument controlled with two hands but without physical contact. It usually consists of two metal antennae that sense the distance of the player’s hands: one hand controls the pitch and the other the amplitude (i.e. volume). More info can be found at https://en.wikipedia.org/wiki/Theremin.

[44] See more for example in https://en.wikipedia.org/wiki/Markov_chain.

[45] IDM stands for Intelligent Dance Music, a genre of experimental electronic music under which artists like Aphex Twin, Boards of Canada and Squarepusher are usually labelled. More info at https://en.wikipedia.org/wiki/Intelligent_dance_music.

[46] API – Application Programming Interface is a common term in computer programming. It roughly means the description of the interface through which one program can access and use another program.

[47] Is it so? Perhaps that could be a topic for further research.

[48] 1st, 2nd and 3rd generation iPads.

[49] I’m sure there are several examples on Youtube.

[50] More info at http://flux.noii.se/ 

[51] For more info, see for example http://thecreatorsproject.vice.com/blog/meet-bytebeat-a-brand-new-electronic-music-genre 

[52] ABABCBB (A – verse, B – chorus, C – middle part).

[53] The musical notations for the compositions are presented as an appendix to this research. They can also be found online at www.tuomasahva.net/padworks.

[54] Beats per minute (BPM) is how the tempo of electronic music is usually measured; for example, at 120 BPM one beat lasts 60/120 = 0.5 seconds.

[55] The Sound Test Room is a very good source of information about new musical iPad apps. They provide good walk-throughs and reviews at http://thesoundtestroom.com/.

[56] Lemur was one of the first touch screen interfaces to prove that there is a use for a touch screen interface in music. At the time it was relatively expensive hardware, costing more than an iPad does. Now the same functionality, and more, is available to tablet users for a few tens of euros. More info can be found at http://createdigitalmusic.com/2010/11/jazzmutant-lemur-controller-is-dead-long-live-multitouch/ (accessed 31 March 2016).

[57] Gadget is a collection of software instruments in one DAW-like music production app.

[58] Apple Pencil is a digital stylus pen that can be used as an input device for touch screens. It has force sensitivity and angle detection. It’s easy to imagine innovative uses for both features in musical apps. More information about Apple Pencil can be found at https://en.wikipedia.org/wiki/Apple_Pencil.

[59] See more at http://masterpieceedition.tumblr.com/.