Ximena Alarcón introduces the technologies for Telematic Sonic Performance, which she broadly defines as a performance playing with words and sounds between people in distant locations using the Internet.
Part one explores the history, context, and practices of Telematic Sonic Performance.
Part two looks at the technical elements of performing telematically, including software.
A BASIC KIT
When musicians first play music together via the Internet, they notice with disappointment the delay in signal transmission caused by the medium. This delay results from a combination of technical factors: the Internet bandwidth and its upload and download speeds, the types of sounds, the quality of the audio signal, and whether the connection is via Wi-Fi or Ethernet. For synchronous music-making online, latency must be 25 milliseconds or less. Incorporating “delay” and “accessibility” creatively can become an intrinsic part of the performance. To start making and playing any kind of sound-related performance, you need people in distant locations. They might have different kits, but all sides need to share the same streaming software.
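To get a feel for where that 25-millisecond budget goes, here is a rough sketch in Python. One-way latency is at least the audio buffering at each end plus the network transit time; the buffer size, sample rate, and network figure below are illustrative assumptions, not measurements of any particular system.

```python
# Rough one-way latency budget for networked audio (illustrative figures).

def buffer_latency_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """Delay introduced by one audio buffer, in milliseconds."""
    return 1000.0 * buffer_frames / sample_rate_hz

def total_latency_ms(buffer_frames: int, sample_rate_hz: int,
                     network_ms: float) -> float:
    """Capture buffer + network transit + playback buffer."""
    per_buffer = buffer_latency_ms(buffer_frames, sample_rate_hz)
    return 2 * per_buffer + network_ms

# Example: 128-frame buffers at 48 kHz plus 15 ms of network transit.
print(round(total_latency_ms(128, 48000, 15.0), 1))  # 20.3 ms, under 25 ms
```

The arithmetic makes the trade-off visible: smaller buffers reduce delay but risk audio dropouts, while distance sets a hard floor on the network term.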
Choosing your streaming sound software (bi-directional or unidirectional):
This is software that sends and receives sound. Musicians, audio engineers, and network engineers have developed bidirectional audio streaming software to play music with good audio quality and minimal delay. (See below for some popular choices.)
All types of sounds and/or data to transmit (acoustic, electronic).
It is worth considering which sounds, and how much data, you are sending, and what is viable. This can prompt you to think about accessibility, and to envision environmentally friendly streaming practices and technologies.
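A quick back-of-envelope calculation shows how much data an uncompressed stream actually sends. The sketch below (in Python, with illustrative parameters) computes the raw PCM bitrate, ignoring packet overhead:

```python
# Back-of-envelope bitrate of an uncompressed audio stream (illustrative).

def stream_bitrate_kbps(sample_rate_hz: int, bit_depth: int,
                        channels: int) -> float:
    """Raw PCM bitrate in kilobits per second (no packet overhead)."""
    return sample_rate_hz * bit_depth * channels / 1000.0

# CD-quality stereo: 44.1 kHz, 16-bit, 2 channels.
print(stream_bitrate_kbps(44100, 16, 2))  # 1411.2 kbps, roughly 1.4 Mbps
```

Roughly 1.4 Mbps per direction for uncompressed stereo is well beyond what compressed videoconferencing audio uses, which is one concrete way to weigh audio quality against accessibility and energy use.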
Sound hardware and processing software
Such as microphones, sound cards, sensors, and mobile phones.
These are used for sending sounds to, and receiving sounds from, distant locations. In most scenarios you need to use headphones to avoid sound feedback into the microphone (unless feedback is your artistic intention).
If you intend to make your performance public, you need to use broadcasting software such as OBS and/or rely on the integration of the videoconferencing system with YouTube channels. For example, Zoom can livestream directly to YouTube or Facebook, as can StreamYard.
Some general comments on available software for telematic performances:
The CCRMA research group developed JackTrip, open source software exclusively dedicated to the bidirectional streaming of high-quality, uncompressed audio. JackTrip allows you to adjust the delay of sending and receiving signals between locations, as well as the number of audio channels (up to 8) that you would like to use. You can decide which location will act as a server and which as clients. It is free and open source, available for Mac, Windows and Linux, and you need to use the Terminal, along with the JACK router software. JackTrip needs good broadband Internet access, and it has been used mainly at universities, which can provide this capacity. Using a domestic Internet connection, you have all the permissions to allow the incoming and outgoing signals, but you may struggle with bandwidth; if, however, you would like to take advantage of an institutional network (e.g. a university's), you need to ask for permission to open ports and to be assigned public, external IP addresses. An active community supports JackTrip via a forum. Michael Dessen created a tutorial (2016) that is a good introduction, and Kenneth Fields has created a graphic interface for JackTrip called Artsmesh.
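As a sketch of how a JackTrip session is typically started from the Terminal (the IP address here is hypothetical, and flags can vary between versions, so consult the JackTrip documentation for your installation):

```shell
# On the machine acting as server (JACK must already be running):
jacktrip -s -n 2          # wait for a client, streaming 2 audio channels

# On the client machine, connect to the server's public IP address:
jacktrip -c 192.0.2.10 -n 2 -q 8
# -c : connect as a client to the given address
# -n : number of audio channels to send/receive
# -q : buffer queue length; larger values add delay but reduce glitches
```

Both machines must be running the JACK audio server first, and the relevant UDP ports (starting at 4464 by default) must be open on any firewall in between.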
SoundJack is another dedicated bidirectional audio streaming software, which uses high-quality sound compression and is more accessible over domestic connections. Created by Alexander Carôt, it has evolved from a simple desktop-based interface to a browser-based one. You need to open an account, download the SoundJack client, and then log in via the browser to access a stage. On the stage, you can see all the parameters of your network and audio streaming, join others who are jamming, or create your own room. There is a chat window and a basic video feature for communicating with others. SoundJack recently released tutorials, and the community is very active on the web. Another software worth exploring that uses audio compression is Jamulus, which was recently tested by the C4 choral ensemble. For an in-depth technical overview of other available software for network-based collaborative music-making, you can watch a full lecture released by Stefano Fasciani of the University of Oslo.
Videoconferencing systems are now the most accessible option for the general public and are currently used to perform, accepting the delay and the audio compression that these systems involve. Some of them have incorporated sound-friendly features, such as letting you select sound cards as the devices for sound input and output.
Zoom, in particular, offers the option to “enable original sound”, which bypasses its built-in audio processing and compression; it was recently used in the OptoSonic concert in the NowNet Arts performance series. For details on setting up Zoom for music, see Zoom for Composers, put together by Sound and Music, or Jim Daus Hjernøe’s instructional videos.
Google Hangouts has options to control the incoming sound features if you are the host.
For any bidirectional transmission of audio, you should ideally always connect via an Ethernet cable.
On 24 April 2020 I ran an online workshop as part of Sound and Music’s series of Zoom workshops for composers. Considering technologies and artistic intentions, I asked attendees how they would like to connect with others creatively using sound across distant locations at this historical time (of the Covid-19 pandemic).
The answers ranged from access to the highest-quality audio technologies, to choices of music genre made specifically for telematics, to the creation of scores for this setup. Using the technology we had in the talk, the Zoom app, we sent and received sound by performing The New Sound Meditation by Pauline Oliveros.
Hopefully this performance exercise and this article have opened possibilities of how and why to engage in telematic sonic performance and music making in this historical time. My personal answer to the question of why is that today, more than ever, I would like to be in touch — playing with silences, sounds and words.
Note: The score used above is from Deep Listening: A Composer’s Sound Practice (Pauline Oliveros 1998, Copyright © 2005 Deep Listening Publications), courtesy of Pauline Oliveros Publications / Members ASCAP. All Rights Reserved.
Sound and music have the ability to touch us and to transmit vibrations, stimulating memories and moving us emotionally and physically. I believe in intentions that go further than music and that create connections for listening, for democracy, balance and transformation; an understanding of the best possible human capabilities which, amplified and mediated with technologies, help us to creatively devise ways of listening and sounding together when we are physically close to each other, and in the distance.
Dr Ximena Alarcón is a sound artist interested in listening to in-between sonic spaces in the context of human migration. Since 2012, Ximena has practiced the making of telematic sonic improvisatory performances with people sharing experiences of migration all over the world. She includes Deep Listening® practice and improvisation, as forms of expanding the perception of place, space, time, identities and narratives, using a wide range of technologies. Ximena has collaborated with sound artists, musicians and performance artists, exploring possibilities and challenges of this medium. She is a Sound Artist Researcher and Project Leader of INTIMAL Project, and teaches Deep Listening practice using online and physical locations.