Updated: Jul 9
In this particular case, the client wanted us to transform a mono home recording of voice and acoustic guitar into a streaming-ready song, thus avoiding the extra costs of studio time.
When I work with stems or material that I know was recorded properly, I can normally go straight into the DAW - in this case Logic Pro X - and use an external editor for a few touch-ups to the audio if needed.
But in this case the issues were severe, so it was necessary to do a full editing pass in iZotope RX first, and then move to the DAW for the final master.
Note that there is no real mixing stage here, since the recording supplied by the client was already a blend of voice and acoustic guitar.
In terms of home recording, one interface I recommend for its low cost and great feature set is the Focusrite Scarlett Solo - if you are starting your journey in audio, it's worth considering as a starting point.
The Editing Stage
Listening to the original mono recording, I identified the following issues that needed treatment before attempting a master:
General hiss and noise;
A few 'clangs' and other random noises, probably due to movements made by the singer during the performance;
Abrupt interruptions of sound at the beginning and end of the song.
For the first issue, I sampled the hiss during the few moments where the song paused, then applied the noise reduction with the quality control set to 'best' instead of 'fast'.
A few important notes about this step:
Spectral De-noise was favoured over Voice De-noise, as the track is a blend of voice and music (acoustic guitar);
Learning the hiss is normally better than using the adaptive mode, since the attempt to vary the level of the noise floor can cause audible artefacts in the resulting sound. Ideally, I would have asked the singer to record a bit of silence before she started singing, so I could have a few seconds of hiss only;
Small adjustments to the artefact control settings are recommended - use your ears and preview before applying;
Don't go for very aggressive attenuation - monitor the noise-only output if needed and make sure the song quality is not being impacted.
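To make the idea behind this step concrete, here is a minimal Python sketch of a spectral gate (using numpy/scipy). This is only an illustration of the principle - learning a per-frequency noise floor from a noise-only clip and gently attenuating bins that sit at that floor - and is nowhere near as sophisticated as RX's Spectral De-noise:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_denoise(audio, sr, noise_clip, reduction_db=12.0, n_fft=2048):
    """Gently gate STFT bins against a noise floor learned from noise_clip.
    Illustrative sketch only, not iZotope's algorithm."""
    # Learn the per-frequency noise floor from a noise-only region
    _, _, noise_spec = stft(noise_clip, sr, nperseg=n_fft)
    noise_floor = np.mean(np.abs(noise_spec), axis=1, keepdims=True)

    _, _, spec = stft(audio, sr, nperseg=n_fft)
    mag, phase = np.abs(spec), np.angle(spec)

    # Modest attenuation only - aggressive settings damage the song itself
    floor_gain = 10.0 ** (-reduction_db / 20.0)
    gain = np.where(mag > 2.0 * noise_floor, 1.0, floor_gain)
    _, cleaned = istft(mag * gain * np.exp(1j * phase), sr, nperseg=n_fft)
    return cleaned
```

This is also why recording a few seconds of silence helps so much: the noise-only clip gives the gate an honest picture of the noise floor, instead of forcing it to guess adaptively.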
For the second issue, I had to go case by case, removing each problem with Spectral Repair's attenuate mode.
Here are the important observations:
While I started from the 'attenuate unwanted event' setting, I tweaked the parameters by ear and increased the number of bands to improve accuracy;
Each individual 'clang' required slightly different settings, and careful selection of the audio region to minimise the impact on the surrounding sound;
Have a look at the before-and-after graph below.
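For the curious, the core idea of an 'attenuate' repair can be sketched in a few lines of Python (numpy/scipy). This is a hypothetical stand-in, not RX's algorithm: select a time-frequency region around the clang and pull it down to the level of the neighbouring frames:

```python
import numpy as np
from scipy.signal import stft, istft

def attenuate_event(audio, sr, t_start, t_end, f_lo, f_hi, n_fft=2048):
    """Pull a time-frequency region down to the level of neighbouring frames.
    A rough sketch of what a spectral-repair 'attenuate' pass does."""
    f, t, spec = stft(audio, sr, nperseg=n_fft)
    mag, phase = np.abs(spec), np.angle(spec)

    ti = (t >= t_start) & (t <= t_end)   # frames covering the unwanted event
    fi = (f >= f_lo) & (f <= f_hi)       # frequency bands covering the event

    # Reference level: the same bands in the frames just outside the selection
    ref = np.mean(mag[np.ix_(fi, ~ti)], axis=1, keepdims=True)
    mag[np.ix_(fi, ti)] = np.minimum(mag[np.ix_(fi, ti)], ref)  # only attenuate

    _, out = istft(mag * np.exp(1j * phase), sr, nperseg=n_fft)
    return out
```

Careful selection matters here too: make the region too wide and you dull the guitar around the clang; too narrow and part of the noise survives.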
The third issue was easier to fix: it just required a few strategically positioned fades to get rid of the abrupt transitions at the beginning and the end of the song.
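A fade of this kind is trivial to express in code. Here's a minimal numpy sketch using raised-cosine ramps, which tend to sound smoother than straight linear fades (the fade lengths are arbitrary examples, not the values I used):

```python
import numpy as np

def apply_fades(audio, sr, fade_in_s=0.05, fade_out_s=0.2):
    """Raised-cosine fade-in/fade-out to remove abrupt starts and stops."""
    out = np.asarray(audio, dtype=np.float64).copy()
    n_in, n_out = int(fade_in_s * sr), int(fade_out_s * sr)
    # Half-cosine ramps from 0 to 1 (fade-in) and 1 to 0 (fade-out)
    out[:n_in] *= 0.5 * (1.0 - np.cos(np.pi * np.arange(n_in) / n_in))
    out[-n_out:] *= 0.5 * (1.0 + np.cos(np.pi * np.arange(n_out) / n_out))
    return out
```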
The Mastering Stage
I then approached the mastering stage in generally the same way I approach other songs, with the exception that I used the imager in Ozone 9 more prominently to create a simulated stereo effect, since this was a mono recording.
The building blocks of the mastering chain were:
UAD Ampex ATR-102 tape simulator at 30 ips, to create the desired saturation and compression effect;
Gullfoss Master by Sound Theory, an intelligent EQ to spice up the sound;
Ozone 9 Equaliser, just a few touches to fix the final tonal balance;
Ozone 9 Imager, as seen in the picture above;
Ozone 9 Maximiser calibrated to a -2 dB peak and -14 LUFS, respecting the loudness levels required by most streaming services and allowing headroom for the lossy compression some of them apply;
iZotope MBIT+ dither at 24 bits, exporting the project as a 48 kHz 24-bit WAV file.
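As a final illustration, the basic principle behind dithering before a bit-depth reduction can be sketched in a few lines of numpy. This is plain TPDF dither with no noise shaping; MBIT+ is a proprietary noise-shaping dither, so treat this only as a demonstration of the idea:

```python
import numpy as np

def dither_and_quantise(audio, bit_depth=24, seed=0):
    """Quantise float audio in [-1, 1] to signed ints with TPDF dither.
    Plain (non-noise-shaped) dither - a sketch of the principle, not MBIT+."""
    rng = np.random.default_rng(seed)
    full_scale = 2.0 ** (bit_depth - 1)
    # TPDF dither: sum of two uniform noises, +/- 1 LSB peak to peak,
    # which decorrelates the quantisation error from the signal
    tpdf = (rng.uniform(-0.5, 0.5, np.shape(audio)) +
            rng.uniform(-0.5, 0.5, np.shape(audio)))
    ints = np.round(np.asarray(audio) * (full_scale - 1) + tpdf)
    return np.clip(ints, -full_scale, full_scale - 1).astype(np.int64)
```

Without the dither term, quiet material picks up quantisation distortion that is correlated with the signal; with it, the error becomes a benign, steady noise floor.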