Updated: Jun 18
Recently a client approached me with a challenging task.
They had a song that was already mastered, really loud, squashed to -6 LUFS... and two segments of the song contained spoken word that they wanted to be louder than the surrounding music at that point.
So the challenges:
- Music Rebalance in iZotope RX 7 can rebalance a vocal within an already-mixed song, but what about headroom?
- How do you make the transition from the modified part of the song to the rest sound seamless?
Well, LUFS measures our perception of loudness, so as long as the momentary LUFS of the edited parts matched that of the untouched parts around them, the transition should sound seamless.
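As a quick sketch of that idea (with made-up numbers, not the client's actual readings): matching loudness between two segments boils down to a dB offset, which becomes a linear amplitude factor when applied to the samples.

```python
# Hypothetical momentary LUFS readings: the untouched song vs. the edited segment.
reference_lufs = -6.2   # surrounding, unedited material
segment_lufs = -3.8     # edited segment after the vocal boost

# Gain (in dB) needed so the segment reads the same loudness as its surroundings.
gain_db = reference_lufs - segment_lufs   # -2.4 dB here

# Equivalent linear amplitude factor to apply to the segment's samples.
gain_linear = 10 ** (gain_db / 20)

print(round(gain_db, 1), round(gain_linear, 3))  # -2.4 0.759
```

In RX the Loudness module does this measurement and correction for you; the arithmetic above is just what it is doing under the hood.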
So the job was:
- Using markers, highlight the sections where the vocals are
- Check the stats to record the initial headroom (peak, RMS, LUFS)
- Reduce the gain of the whole song to open headroom for the rebalance, but only by the amount necessary, since we don't want to lose resolution. It's a loud song and I was working in 32-bit float, so this was unlikely to truncate any important information
- Boost the spoken word relative to the song using Music Rebalance. A 6 dB lift was enough to please the client
- That led to that segment of the song 'sticking out' compared with the rest. So the next step was to measure the LUFS of the segments immediately before and after the affected part, then use the Loudness module to even out that segment accordingly
- Bring the gain back up by the same amount it was reduced
- Check the stats again to make sure no clipping occurred (the client wanted the song as loud as possible). Job done.
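The gain round-trip in those steps can be sketched in a few lines. This is a toy model with hypothetical numbers (a five-sample "song", a -6 dB headroom cut), not the RX workflow itself; the point is that in 32-bit float a cut-then-restore is essentially lossless, and the final clip check is just a peak test against full scale.

```python
import math

def db_to_linear(db):
    """Convert a dB gain value to a linear amplitude factor."""
    return 10 ** (db / 20)

def peak_db(samples):
    """Peak level in dBFS of a list of float samples (1.0 = full scale)."""
    return 20 * math.log10(max(abs(s) for s in samples))

# Toy signal with samples near full scale, as in a loud master.
song = [0.95, -0.9, 0.97, -0.96, 0.94]

# 1. Reduce gain to open headroom for the rebalance (hypothetical -6 dB).
headroom_db = -6.0
song = [s * db_to_linear(headroom_db) for s in song]

# 2. (Music Rebalance boosts the spoken word here; not modeled in this sketch.)
# 3. (The edited segment's loudness is evened out against its neighbours.)

# 4. Restore the gain that was removed.
song = [s * db_to_linear(-headroom_db) for s in song]

# 5. Check that no sample clips, i.e. none exceeds 0 dBFS (|1.0| in float).
assert all(abs(s) <= 1.0 for s in song), "clipping detected"
print(f"final peak: {peak_db(song):.2f} dBFS")
```

Because the samples stay in floating point throughout, the restored signal matches the original to within rounding noise; it is the boosted vocal segment, not the round-trip, that could push a peak over full scale, hence the final check.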
I took a few snapshots of the RX 7 screens throughout the process to help you understand what was done.
Interested in editing services? Book a session below