TRANSCRIPT
Something Christmassy on the soundtrack today: I’ve delved deep into the archives, back to 2004, when the quintet got together and we recorded some Christmas stuff, which has been sitting on my hard drive more or less ever since. So it’s about time that it saw the light of day.
It’s been another week of roving Christmas gigs for me. I’ve got a couple of Mood Merchants ones coming up this week, and a few more enquiries coming through, so hopefully I’ll have some more of that stuff to share with you sometime after Christmas.
Today I’m trying to solve a couple of problems in one hit. I’m looking to develop a technique for going out to other musicians’ houses and recording audio and video there, to add to a multitrack recording, which is going to end up being a multitrack video recording. That’s the kind of thing people have done a lot, obviously, during Covid: we’ve emailed files to each other and put them all together, and that all works fine, but it’s pretty labour intensive to get all of that video synced up and edited in a video editor, particularly if you’re doing multiple takes, which you’ll often want to do if you want something that sounds really good. I’m going to talk through a little bit today about how I’ve set up this studio here to be able to record a little bit of tuba quartet stuff. I’m going to use that for my Christmas video next week, because I didn’t think anyone would want to sit down and listen to me talk, but I did want to put something up there that’s going to spread a little bit of Christmas cheer, and give the YouTube algorithm some love as well.
So that’s what I’m going to be talking about today. I’m going to try and keep it pretty brief, not too technical, so I hope you’ll stick around and check out what I’ve been up to.
So when I make my videos, as much as possible I avoid using the audio that’s recorded by the camera. I always record the audio separately, and then I combine them later in software. The software I use is called DaVinci Resolve. It’s free, it’s great, highly recommended, and it makes it very easy to record your audio separately. So I’ve got a microphone up here which is being recorded through a proper audio interface onto my computer right now. The camera records its own audio with its own microphones, which doesn’t sound very good, but it’s good enough for Resolve to match it up with the audio that I’m recording properly and automatically replace the bad audio with the good audio. So let’s have a look and see how that works, just for basic syncing of audio recorded externally to the camera.
So I’ve got all my individual clips here, this is in DaVinci Resolve, and I’ve recorded them one at a time, and I’ve got one big long audio track, which is how I usually do it; in this case it’s been recorded on my Zoom recorder. So I just select everything, right click, and say “Auto Sync Audio”, “Based on Waveform”, then wait about twenty seconds, and it’s going to go and replace all of the crappy audio that’s been recorded by the camera with nice clear audio recorded by a proper mic plugged into a proper recorder.
So that works great for stuff like this where I’m recording the same audio into the camera that I’m recording into the microphone. It’s obviously not quite the same quality, but Resolve has no trouble matching those two things up and making them sync.
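For the curious, waveform-based sync like this boils down to cross-correlation: slide one recording against the other and find the lag where they line up best. Here’s a minimal sketch in Python (the signals, sample rate, and function name are made up for illustration; Resolve’s actual algorithm isn’t public):

```python
import numpy as np

def estimate_offset(camera_audio, good_audio, sample_rate):
    """Estimate where the good audio sits inside the camera audio.

    Both inputs are mono float arrays at the same sample rate.
    Returns the offset in seconds (positive: the material starts
    later in the camera recording than in the good recording).
    """
    # Cross-correlate the two signals; the peak marks the lag at
    # which they line up best.
    corr = np.correlate(camera_audio, good_audio, mode="full")
    lag = np.argmax(corr) - (len(good_audio) - 1)
    return lag / sample_rate

# Toy example: the same material, but the camera file has 0.5 s of
# extra lead-in at the front (say I pressed record a bit earlier).
sr = 1000
t = np.arange(0, 5, 1 / sr)
signal = np.sin(2 * np.pi * 3 * t) * np.random.default_rng(0).random(len(t))
camera = np.concatenate([np.zeros(int(0.5 * sr)), signal])
print(round(estimate_offset(camera, signal, sr), 3))  # prints 0.5
```

Once you know the offset, lining the clips up on a timeline is just arithmetic.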
When I’m doing a multitrack recording, though, the audio that I want to match my video recording up to is going to be different. Let’s say I’m recording a single part on a single instrument, a baritone or a tuba: the camera’s going to be hearing just that part. But the audio I want to sync it up to, the audio that you’re going to hear in the final product, is a mixed multitrack version, which contains that part but also all the other layers that I’ve recorded. And Resolve is not smart enough to hear “oh, that’s where the second baritone is” and put that clip at that point in the timeline.
So what I’m developing today is a very cheap and easy way to trick Resolve into matching up those clips and getting them positioned at the right place on the timeline, even when the audio is something different, and I’m going to use what’s called a sync track.
Now, a proper sync track would be done with a thing called a timecode generator, and that’s what they do on movie sets to get all their microphones and all their cameras synced up. I’m obviously operating on a constrained budget, so what I’ve got instead is an audiobook of The Hound of the Baskervilles, Arthur Conan Doyle’s famous Sherlock Holmes mystery, which I’ve downloaded for free from the Gutenberg website, where they’re freely available. And I’m going to use that as my sync track. I figured, well, the software’s obviously good at matching up spoken word, like it does every week when I make these videos, so let’s use some spoken word as a special track that’s going to go onto my recording. You’re not going to hear it, I’m not going to hear it, but the camera is going to hear it, and Resolve is going to use it to align those clips and make sure they all stay together. So let’s see if this is going to work.
All the audio is going through Reaper here, and you can see I’ve got all my tracks set up, and on those tracks I’ve got dozens of different takes. It took me a bit of effort, especially on the baritone parts, to get all those sorted out. I’ve got this track down the bottom that says ‘sync’, and I might just turn it on so you can hear it. What the sync track is, is The Hound of the Baskervilles (a LibriVox dot org recording). This particular track, the sync track, is being fed out of this, and then that feeds straight into the camera, this one here.
Here’s our sync file, this is just the audiobook (“Chapter One of the Hound of the Baskervilles”). It’s pretty interesting. If you listen to what this actually sounds like, you’ll be able to see me playing tuba here, and you might be able to hear it (“picked up the stick which our visitor had left behind him”). You can hear me just maybe in one side of your headphones, and in the other side you can hear Sherlock Holmes (“not infrequent …”) which is what we’re going to use to sync up the audio.
I’ve just selected all the movie clips and I’m going into the clip attributes here. Channel 2 is the Sherlock Holmes, Channel 1 is the microphone audio. So I’m just going to put Sherlock Holmes on both channels, like this.
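That clip-attributes change is really just copying the sync channel over the mic channel. If you ever wanted to do the same thing to a stereo file outside Resolve, say a 16-bit stereo WAV exported from the camera, a rough sketch with Python’s standard wave module might look like this (the function name and file layout are my own assumptions):

```python
import wave
import numpy as np

def sync_channel_to_both(in_path, out_path):
    """Copy channel 2 (the sync/audiobook channel) of a 16-bit stereo
    WAV onto both channels, like the clip-attributes change in Resolve."""
    with wave.open(in_path, "rb") as w:
        params = w.getparams()
        assert params.nchannels == 2 and params.sampwidth == 2
        frames = np.frombuffer(w.readframes(params.nframes), dtype=np.int16)
    stereo = frames.reshape(-1, 2)               # interleaved L/R frames
    both = np.repeat(stereo[:, 1:2], 2, axis=1)  # channel 2 on both sides
    with wave.open(out_path, "wb") as w:
        w.setparams(params)
        w.writeframes(both.astype(np.int16).tobytes())
```

Doing it in Resolve’s clip attributes is non-destructive, though, which is why that’s the approach I’m using here.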
Okay, well it’s sort of working. I’ve had a few issues, which I think I’ve sorted out now. I’m going to the baritone 2 part here; I’ve organised all the clips into a bin, and there’s also an audio bin with the two audio files in it, one of which is the sync and the other is the actual audio. By selecting both bins at once I can see all those files. I’m going to select everything, but then deselect the actual mixdown, because I’m not going to be using that for the sync; I’m going to be using the sync for the sync. Right click, and I’m going to say “Create New Multicam Clip Using Selected Clips”, and I’m going to call this “Bari 2 sync”. It’s important to have the angle sync on “Sound” here. I’m going to say “Create”. It’s going to go ahead and analyse everything, and it creates a new multicam clip here, with 16 angles, it looks like, so I’m going to open that up in the timeline and see what it looks like. And you can see, here is our sync track, and scattered about on the timeline are all of the various clips, which have been automatically synced up according to where they were recorded in the piece. Now, there’s a whole lot of angles there, which is going to be a bit annoying to edit; you can see there are lots of different takes of this one.
So having done all that, I’ve realised I can probably do better, and this again goes back to the way it’s properly done with timecode. Instead of having the sync track as a track on my recorder, played back into the camera every time I layer a new track, I think it would be better to have the sync track separate from both the recorder and the camera, playing one continuous track for the whole length of the recording session. Each take that I record would then contain the audio from the instrument, but also the audio from the sync source. And that, I think, is doable within Reaper. If each take had its own sync audio, then after I’ve done my editing and put together all the good takes within Reaper, I could export that into Resolve, and also export the sequence of sync audio which corresponds to the exact takes that I’ve actually used. It would be even better if I could throw a whole heap of clips into Resolve, run a sync, and have it look for a spot on the sync track which corresponds to the sync on the camera; if there is one, it throws the clip onto the timeline, and if there isn’t, it ignores it. Now, I think that’s possible. I don’t know if it will work the way I have in mind, but that’s maybe what I’m going to try next time.
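That match-or-ignore behaviour can be prototyped with the same cross-correlation idea: normalise the correlation so a perfect match scores about 1.0, and only accept a clip whose peak clears a threshold. A hypothetical sketch (the function name and the 0.8 threshold are mine, not anything Resolve exposes):

```python
import numpy as np

def find_in_sync_track(sync_track, clip_audio, sample_rate, threshold=0.8):
    """Look for clip_audio inside a long sync_track.

    Returns the position in seconds if a confident match is found,
    or None if the clip doesn't seem to belong to this session.
    Both inputs are mono float arrays at the same sample rate.
    """
    # Roughly normalise so an exact match correlates at about 1.0.
    clip = (clip_audio - clip_audio.mean()) / (clip_audio.std() * len(clip_audio))
    track = (sync_track - sync_track.mean()) / sync_track.std()
    corr = np.correlate(track, clip, mode="valid")
    peak = np.argmax(corr)
    if corr[peak] < threshold:
        return None  # no convincing match anywhere on the sync track
    return peak / sample_rate
```

A clip from a different session would correlate near zero everywhere and get rejected, while a genuine take lands at its exact position on the session timeline.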
I hope the leadup to Christmas is great for you. I’ve got a few gigs left to go, and working Christmas Day, so I’m going to release the video of the multitrack that you’ve been checking out today on Christmas Day. You don’t have to watch it on Christmas Day, if you’re still digesting your lunch or whatever, but it’s going to be up there if you’d like to see me play some tuba quartet stuff for a little bit of celebration.
It’s been great to have you with me today, and I hope to catch you again after Christmas and into the New Year. I’m going to keep them coming, and I hope you’re going to keep watching. Thank you, bye for now.