Nicola Jane Buttigieg recently attended the UK Beatbox Championships and found herself considering the future of beatbox and how it might integrate with other notated music.
The 2019 UK Beatbox Championships, presented once again to sensational annual acclaim by London's 5th Element and Lyrix Organix, recently returned to Islington's The Garage. True to their titles, I watched both organisations gear themselves to promote beatbox: the famous fifth and key element of hip hop, built on organic multi-vocalist techniques.
This new-generation instrument of ever-developing 'multivocalism' appears to have grown largely from the mimicry of computers and sound design. Musicologists may wonder about its potential for written notation. Could its language grow from the code formations in which its digital sound qualities originate, or be made more accessible to vocalists by coupling it with existing speech phonetics and established traditional rhythm-notation concepts? To contemplate this idea, one must first understand the foundations of beatbox sound production. Fortunately, the recent UK Beatbox Championships assisted me with just that.
Pre-event happenings included a workshop led by panellist Bart Voogt (aka B-Art Human Beatbox) from the Netherlands, in which younger competitors relished the opportunity to be guided in true beatbox masterclass fashion. Tutored toward specific goals for the impending (and competitive) evening's revelry, topics phased and flanged from musicality concepts, such as mitigating monotony in rhythm or avoiding 'mistakes' through timely decisions about dynamic power, speed and tonal appeal, to those of technical precision, such as efficiently delivering laryngeal melodies in counter-pattern with beat-keeping mouth obstructions, or judging where the 'hook' might be brought in. I saw compelling demonstrations of good mic technique, of keeping the teeth out of view while still producing a 'tight sound', and of the bizarrely effective use of one finger above the nostril to enhance sound quality and accuracy. Novices were firmly reminded to 'fill up the mic' rather than 'fill up the room' during multivocal execution, as sheer loudness never serves a musical experience. Mastering effective beatbox technique is certainly not about ego, although hearing oneself properly in the speaker is always considered of utmost importance.
Audiences later had the pleasure of an interlude performance by B-Art's neighbouring judge from Belgium, Senjka Danhieux (aka Roxorloops), who demonstrated the signature skills of the notable beatboxer greats. Progressing through the generations, the audience marvelled as he shifted from the old-school click rolls established by Doug E. Fresh to the first iconic breathy separations of Darren Robinson, also known as 'Buffy' of The Fat Boys hip hop trio. The first trend of 'talking' (rapping) between beats was revealed as a habit of 'Biz Markie' Marcel Theo Hall, and Kenny Muhammad, of the ever-favoured 'wind' and 'water' techniques, was acknowledged as the first to popularise the now most skeletal beatbox sound, the 'inward K'. After drawing attention to the UK's own inspirational 'Killa Kela', who popularised the virtuosic term 'multivocalism' and its attendant idea of technical accuracy in beatbox, Roxorloops completed his limelight guest lecture with a rendition of Rahzel Manely Brown's simultaneous 'beat and chorus' delivery of 'If Your Mother Only Knew'. This final demonstration reflected the epitome of multivocal polyphony in the beatbox skillset, and he went on to express that mimicking these greats had led him to his own brand of sound and delivery; his demonstration that evening might in turn inspire younger audiences to mimic, and likewise go on to build their own brands of sound. Watch Roxorloops' performance below.
Armed with their own brand, however, beatboxers typically blend into collaborative music-making by intuition alone. They essentially engage as a kind of 'mini-composer' working within a main arrangement, similar to jazz delivery, which has in fact developed its own improvisation-led notation over time. What, then, is the realistic possibility of written notation for exact beatbox-specified articulations in music? Elaine Gould, author of 'Behind Bars: The Definitive Guide to Music Notation', devotes her final chapter to notational developments in electroacoustic music. She recognises that new technology enables composers to work with sophisticated live synthesis and transformation of sounds using computers, and that this rapidly developing area and its new works often present 'new notational issues', much like the one raised here.
Zani, co-manager of the 5th Element UK Beatbox Championships, suggested there are existing systems that convert recorded beatbox audio into digital MIDI data (which can be applied, roughly, to a traditional paper score), though these serve mainly the purpose of further editing in a Digital Audio Workstation (DAW) such as Ableton. Whilst useful for production workflow, newer software such as 'Vochlea' goes further, enabling real-time MIDI conversion of beatboxed or sung audio direct from a mic. He noted that Vochlea has its limitations, however, as substantial preparation is required to program the activations for the specific MIDI data.
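To make the audio-to-MIDI idea concrete, here is a minimal sketch of its simplest possible form: detect percussive onsets in a signal by watching short-term energy, then emit a MIDI-style event for each one. This is an illustrative toy, not how Vochlea or any commercial tool works; a real system would also classify each hit (kick, snare, hi-hat and so on) rather than assume a single drum sound, and the threshold and frame size here are arbitrary.

```python
import numpy as np

def detect_onsets(signal, sr, frame=512, threshold=0.2):
    """Find percussive onsets as frames where short-term energy first
    crosses above a threshold after silence."""
    n_frames = len(signal) // frame
    energy = np.array([np.sum(signal[i*frame:(i+1)*frame] ** 2)
                       for i in range(n_frames)])
    active = energy > threshold
    # Rising edges: frames that are active but whose predecessor was not
    onset_frames = np.flatnonzero(np.diff(active.astype(int)) == 1) + 1
    return onset_frames * frame / sr  # frame index -> seconds

def onsets_to_midi_events(onset_times, note=36, velocity=100):
    """Map each onset to a (time_s, note, velocity) tuple. Note 36 is the
    General MIDI kick drum -- a placeholder for the classification step
    a real tool would perform."""
    return [(round(float(t), 3), note, velocity) for t in onset_times]

# Synthetic "beatbox" input: silence with three noise bursts as stand-in kicks
sr = 8000
rng = np.random.default_rng(0)
signal = np.zeros(sr * 2)
for start in (0.25, 1.0, 1.5):
    i = int(start * sr)
    signal[i:i + 400] = rng.uniform(-1, 1, 400)

events = onsets_to_midi_events(detect_onsets(signal, sr))
print(events)  # three kick events near 0.25 s, 1.0 s and 1.5 s
```

Even this toy shows why such conversions map only "roughly" onto a paper score: the detected times are quantised to the analysis frame, and nothing in the MIDI data captures the articulation that distinguishes one beatbox sound from another.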
Ideally, the formulation of a well-recognised beatbox notation would remove the barrier separating multivocalism from other traditional 'examinable' instruments, whose player-learning standards are judged and discussed by reference to the written score. Given the changing technologies and devices being mimicked, how could such a newly proposed language be 'fluid' enough to accommodate the ongoing trends and developments in sound design that Gould acknowledges?
Berlin-based digital musician Cortexelus James of Dadabots recently collaborated with the UK's own Reeps One on a live performance medium integrating machine learning and generative audio at Ars Electronica Festival 2019 in Austria. He considers that this barrier could be tackled with tailored application tools, following the recent trend of programming with 'deep neural networks'. Once there is a phonetic language for beatbox, and a large dataset of manually coded annotations of beatbox recordings, it would be possible to 'train' both a text-to-speech engine (a beatbox synthesizer) and a speech-to-text engine (an automatic beatbox annotator). This technique follows the existing concept of 'zerospeech', which opens up possibilities such as the automatic discovery of new phonemes using data 'clustering methods'.
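The 'clustering methods' James mentions can be illustrated with a toy example. The sketch below is not the zerospeech pipeline itself; it simply shows the core idea, that unlabelled acoustic feature vectors can be grouped into phoneme-like categories automatically, using a plain k-means implementation on made-up two-dimensional features (real systems would cluster high-dimensional features extracted from audio).

```python
import numpy as np

def farthest_point_init(points, k):
    """Pick k spread-out starting centroids: begin with the first point,
    then repeatedly take the point farthest from all chosen centroids."""
    centroids = [points[0]]
    for _ in range(k - 1):
        dists = np.min(
            np.linalg.norm(points[:, None] - np.array(centroids)[None], axis=2),
            axis=1)
        centroids.append(points[dists.argmax()])
    return np.array(centroids)

def kmeans(points, k, iters=20):
    """Plain k-means (Lloyd's algorithm): alternately assign each point
    to its nearest centroid, then move each centroid to the mean of its
    assigned points."""
    centroids = farthest_point_init(points, k)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j)
            else centroids[j]
            for j in range(k)])
    return labels

# Synthetic 2-D "acoustic features" for three unlabelled articulation types
# (stand-ins for, say, kick-, snare- and hi-hat-like beatbox sounds)
rng = np.random.default_rng(1)
features = np.vstack([
    rng.normal([0.0, 0.0], 0.1, (50, 2)),   # articulation type A
    rng.normal([1.0, 0.0], 0.1, (50, 2)),   # articulation type B
    rng.normal([0.5, 1.0], 0.1, (50, 2)),   # articulation type C
])

labels = kmeans(features, k=3)
# With no labels supplied, the clustering recovers the three sound categories
```

This is the sense in which new 'phonemes' could be discovered automatically: the algorithm is never told what the categories are, only how many to look for, and it finds the groupings in the data itself.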
Could these predictions by James and Gould come together to guide musicologists toward an as yet undiscovered written beatbox form, ideal for ‘multivocalism’ hardcopy publications?