We have had some technical issues over the past week or so, mostly involving microphones not working, which means we are behind schedule again. This couldn't really be helped given the lockdown, so I am not overly concerned. Now that we have working microphones, all recording is finished, leaving just the mixing of the second track and the mastering of both. I also finished the cover art some time ago.
In this post I wanted to discuss how this collaboration has been beneficial, as I get to use recorded material rather than relying on my own VST instruments. The main reason is authenticity: with VST instruments there is only so much realism that can be achieved, especially for non-keyboard instruments such as flute or guitar. For example, when finishing the first track I added my own electric guitar sound, but Ciaran wanted to replace it as it was 'not real guitar', which I then realised was a good improvement. Of course, from the beginning of the project I had wanted him to record flutes, as I don't have a virtual version of those and imagine it is fairly hard to create a realistic virtual flute, but I have now further realised the benefits of using audio recordings.
With a MIDI input you can only control note velocity and placement to give expression, rather than the small authentic details and human imperfections that come from live performance. This led me to look into the 'uncanny valley' theory, developed by Masahiro Mori in 1970, which concerns robots and virtual characters: being close to, but not quite at, human likeness causes people to react negatively (Schneider et al., 2007). Avdeeff discusses how this can also be seen in the audio world when a lack of 'soul' and 'human creativity' is perceived, for example in the fourth movement of Hiller's Illiac Suite (1956), which was composed using Markov chains and arguably lacks a strong narrative, causing unease (Avdeeff, 2019, pp. 4–8). I can see how this relates back to my practice, perhaps not at the same extreme level of unease, but in how a 'too perfect' MIDI file could be perceived as boring, expressionless, and lacking humanity.
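As an aside, the general idea of Markov-chain composition like that behind the Illiac Suite's fourth movement can be sketched in a few lines of Python. The note names and transition table below are invented for illustration and are not taken from Hiller's actual material:

```python
import random

# First-order Markov chain: the next note depends only on the current note.
# This transition table is a made-up example, not Hiller's actual data.
TRANSITIONS = {
    "C": ["D", "E", "G"],
    "D": ["C", "E"],
    "E": ["C", "D", "G"],
    "G": ["C", "E"],
}

def markov_melody(transitions, start, length, seed=None):
    """Generate a melody by repeatedly picking a random successor note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(markov_melody(TRANSITIONS, "C", 8, seed=1))
```

Each note is chosen only from the successors of the previous one, which is exactly why such output can feel locally plausible yet lack the long-range narrative Avdeeff describes.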
I have learned a lot over the course of this project about mixing audio recordings, especially vocals, as I don't have much experience with this; for example, I read an article on parallel compression (Mayzes, 2020). Roads (2015, p. 371) explains how pop music has very defined stages of production, from songwriting to performing and on to mixing, whereas in electronic music these phases become tangled together and mixing happens throughout almost the whole production process. I have realised that I prefer the pop workflow, as having defined stages feels more manageable than the constant tweaking electronic music demands, though I have ended up combining both approaches in this project by adding and mixing virtual instruments both before and after the recording stage.
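For anyone curious, the core of parallel compression is just blending a heavily compressed copy of a signal back under the untouched original, lifting quiet detail without squashing the peaks. A minimal NumPy sketch, assuming a deliberately naive per-sample compressor (a real one would also smooth the gain with attack and release times):

```python
import numpy as np

def naive_compress(x, threshold=0.2, ratio=8.0):
    # Naive per-sample compressor: any level above the threshold is
    # reduced by the ratio. Real compressors smooth this gain change
    # over time with attack/release envelopes.
    over = np.abs(x) > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (np.abs(x[over]) - threshold) / ratio)
    return out

def parallel_compress(dry, wet_gain=0.5):
    # Blend the compressed ("wet") copy back in with the untouched
    # dry signal, rather than replacing it.
    wet = naive_compress(dry)
    return dry + wet_gain * wet
```

Because the dry signal passes through untouched, the transients survive, and the `wet_gain` fader controls how much compressed body is added underneath.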
References
Avdeeff, M. (2019). Artificial Intelligence & Popular Music: SKYGGE, Flow Machines, and the Audio Uncanny Valley. Arts, 8(4), pp. 1–13.
Mayzes, R. (2020). 8 Parallel Compression Hacks: Take Your Mixing to the Next Level. Musicianonamission.com. [Online] Available at: https://www.musicianonamission.com/parallel-compression-guide/ [Accessed 28 April 2020].
Roads, C. (2015). Composing Electronic Music: A New Aesthetic. New York: Oxford University Press.
Schneider, E., Yifan, W. and Shanshan, Y. (2007). Exploring the Uncanny Valley with Japanese Video Game Characters. Paper presented at the DiGRA 2007 Conference: Situated Play, Tokyo, Japan, 24–28 September.