The history of recording studios traces back to the late 1800s and Thomas Alva Edison's invention of the phonograph. Edison had initially set out to improve the telephone; instead, the phonograph provided a way to record and reproduce sound. He applied for a patent in 1877 for his "talking machine", which used foil-covered cylinders.
Over the years, scientific experimentation and innovation led to the emergence of electrical recording. In electrical recording, sound is captured with electronic devices such as microphones, amplifiers, and electrical record cutters, rather than with the phonograph horn that had been used to record sound mechanically (Barrett, 2005).
Since the invention of sound reproduction in the late nineteenth century, studio practices in musical recording have evolved in parallel with technological advances. In early mechanical recordings, technical constraints drove sound engineers to develop inventive methods, such as placing microphones very close to acoustic sources, to capture sound in a way that preserved the clarity of the musical discourse (Tyler, 2016).
Several authors agree that early recordings aimed at the highest possible fidelity; however, Milner (2009) reported that Edison already "believed that perfect recording requires music that was truer, purer, more real than the music event it documented", which suggests that fundamentally different approaches to musical recording have existed, at least in theory, since the invention of sound reproduction. Over the years, technological advances have made Edison's wish come true, and "recording's metaphor has shifted from figurative to actual reality (mimetic space) to the reality of illusion (a virtual world where everything is possible)" (Moorefield, 2005).
Recording technology has profoundly reshaped the way artists perform and compose music. Consequently, listening to recordings has become the primary means of hearing music (Gracyk, 1997). To date, research in this area has centered on music production in a Western context. However, the Internet and digital technologies have extended access to recorded music, and although they are responsible for the downfall of physical media (i.e., the CD), they have also given rise to new musical productions in emerging countries. The launch of Spotify in 2008 made a significant difference, as it offers a huge catalogue for a subscription fee, or free of charge with advertisements between songs. Spotify, like other streaming apps and websites, has been criticized for the low royalties the service generates for artists. Spotify, however, has stated that 70% of its revenue goes to rights holders and has emphasized that its service diverts listeners from illegal download and streaming sites, which generate no revenue for artists (MN2S, 2015).
As of 2019, Spotify reports 108 million subscribers, 232 million monthly active users, more than 13 billion in revenue paid to rights holders since launch, over 3 billion playlists, more than 50 million tracks, over 450 thousand podcasts, and availability in 79 markets (Perez, 2019; Spotify, 2019).
Spotify accepts WAV and FLAC audio formats. The platform processes every file uploaded by an aggregator and converts it to meet its quality standards. Loudness is normalized to a target of -14 LUFS (Loudness Units relative to Full Scale), measured as integrated loudness. Spotify then converts the tracks to the lossy formats it uses internally, such as Ogg Vorbis and AAC (Spotify, 2019).
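To illustrate what the -14 LUFS target means in practice, the following is a minimal sketch that measures a file's integrated loudness and applies a static gain toward that value. It uses the third-party Python libraries soundfile and pyloudnorm rather than Spotify's own processing chain, and the file names and target constant are purely illustrative assumptions.

```python
# Sketch: loudness normalization toward a -14 LUFS integrated target.
# Not Spotify's pipeline; file names are illustrative placeholders.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -14.0  # Spotify's stated integrated loudness reference

data, rate = sf.read("master.wav")           # hypothetical input file
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # measured integrated loudness (LUFS)

# Apply a static gain so the measured loudness matches the target
normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
sf.write("master_-14LUFS.wav", normalized, rate)
```

In practice Spotify applies its own normalization on playback; the sketch only shows the kind of measurement and gain adjustment the -14 LUFS figure refers to.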
Driven by the continuous growth of technology, streaming has taken over the music industry. Global recorded music revenues jumped 9.7%, from $17.1 billion in 2017 to $19.1 billion in 2018. A large share of that came from streaming, which accounted for 47% of global revenue: 37% from paid streaming and 10% from ad-supported streaming. Although this is still below the physical-format revenues of the early 2000s, streaming shows a clear upward trend (IFPI, 2018; Spotify, 2018).
Sound is an airborne form of vibration: it travels through the air before reaching our ears. Sound can result from a one-off event, such as percussion, or from a periodic event, such as the sinusoidal vibration of a tuning fork. The sound due to percussion is called a transient, whereas a periodic stimulus produces a steady-state sound with a definite frequency (Watkinson, 2001).
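The distinction can be made concrete with a small sketch (Python/NumPy; the sample rate, frequency, and decay constant are assumptions chosen for illustration): a steady-state sound is modelled as a sinusoid at a fixed frequency, while a percussive transient is modelled as a short burst of noise that decays quickly.

```python
import numpy as np

rate = 44100                      # samples per second (assumed)
t = np.arange(rate) / rate        # one second of evenly spaced time points

steady = np.sin(2 * np.pi * 440.0 * t)                 # periodic: a 440 Hz "tuning fork" tone
transient = np.random.randn(rate) * np.exp(-t * 40)    # noise burst that dies away quickly
```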
The main task of recording is to translate waveforms into other media so that they can be stored, manipulated, and played back. Digital recording samples the waveform at evenly spaced time points and represents each sample as a precise number. Digital recordings, whether stored on a compact disc (CD), digital audio tape (DAT), or a personal computer, do not degrade over time and can be copied perfectly without introducing any additional noise (Audacity, 2019).
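A minimal sketch of this sampling step follows (Python/NumPy; the CD-style sample rate, 16-bit depth, and test tone are assumptions, not values taken from the sources): the waveform is evaluated at evenly spaced time points and each measurement is stored as a precise 16-bit integer.

```python
import numpy as np

rate = 44100                                  # samples per second (assumed, CD-style)
t = np.arange(rate) / rate                    # evenly spaced time points, 1 second
waveform = 0.5 * np.sin(2 * np.pi * 1000.0 * t)   # a 1 kHz test tone

# Each sample becomes a precise number: here, a 16-bit signed integer
samples = np.round(waveform * 32767).astype(np.int16)
```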
Digital audio can be edited and mixed without introducing any additional noise, and many digital effects can be applied to digitized recordings, for example to simulate reverberation, enhance certain frequencies, or change the pitch (Audacity, 2019).

Multi-track recording, or "multi-tracking", is a way of recording music in which individual recordings of multiple sound sources, or "tracks", are combined to create a single recording. This is the most common method of recording popular music, and virtually all popular music is now made this way. In this process, each instrument or voice is recorded onto its own track, and the tracks can then be played back together. Individual tracks can also be balanced to the correct volume through a mixing board, and a wide range of audio effects (such as reverb, delay, and compression) can be added.
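The mix-down idea can be sketched as follows (Python/NumPy; the track contents and gain values are illustrative assumptions rather than a description of any particular mixing board): each source occupies its own track, receives its own level, and the tracks are summed into a single recording.

```python
import numpy as np

rate = 44100
t = np.arange(rate) / rate

# Stand-in "tracks": in practice these would be separately recorded instruments or voices
tracks = {
    "vocal":  np.sin(2 * np.pi * 220.0 * t),
    "guitar": np.sin(2 * np.pi * 330.0 * t),
    "drums":  np.random.randn(rate) * np.exp(-t * 30),
}
gains = {"vocal": 0.8, "guitar": 0.5, "drums": 0.6}    # fader level per track

mix = sum(gains[name] * tracks[name] for name in tracks)   # sum the balanced tracks
mix /= np.max(np.abs(mix))                                 # simple peak normalization
```

Effects such as reverb, delay, or compression would be applied per track before this summing stage, which is what a mixing board and its insert chain provide in a real session.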