Posts 1 to 27 of 27
  1. #1


    Thanks in advance for any help.

    Some friends and I decided to make one of those split screen videos. The idea is to post a video with the screen divided into 5 sections. Each one shows a different musician and we're all playing the same song.

    So, to get started, one guy sent around some mp3s and charts. Apparently, I'm supposed to listen to an mp3 and record a guitar track with video. They want the audio better than the usual phone audio; it's supposed to be in a lossless format like WAV. Then I send it to the Mac/Logic guy, who figures out how to put it together into a split screen video.

    I'm trying to figure out the best way to do it. I have Windows 10 and a Scarlett interface.

    Initially, I thought I'd play the mp3 on my phone and use some kind of video software on the laptop with the Scarlett as the microphone input.

    That might be best, but the monitoring would be odd. I'd have the mp3 of the backing track on my phone, but how would I hear the guitar? If I monitor from the laptop, I'm going to get some latency. I could split the guitar signal and play one side through an amp in the room. Then, I'd hear the mp3 thru the earbuds and the guitar via leakage around the earbuds.

    Maybe I could put both the mp3 and the guitar through the Scarlett? It's the two-channel model.

    Or, do I have to load the mp3 into something like Ableton (I have an old version that came with the Scarlett, but it's a pain to use) and then figure out how to monitor and play without latency? I know that can be done, but it feels like a brain twister to get all the right buttons pressed.

    What's the easiest way to do this that doesn't require learning some complicated, unintuitive software?

    Thanks.

  3. #2


    Final Cut Pro on 90 day trial (how I did it)

  4. #3


    Oh shit Windows. No idea.

    (But well done not falling for the Mac hype. I have a very expensive, sleek, brushed-metal laptop with a horrible carbuncle sticking out the side so I can have silly fripperies like USB ports, HDMI sockets, that kind of useless shit.

    The reason this is necessary is that Apple wanted the USB-C port, which they call by some stupid name, to double as a power input and a video out. You can do it all with one type of port, right? Elegant design.

    This would be fine, if needlessly wanky, if the COMPUTER HAD MORE THAN 2 USB-C PORTS.

    If I touch the dongle, for instance to plug in a peripheral, it loses contact, both screens turn off and I lose the audio interface.)

    So I'll be joining you (for this and many, many other reasons) next computer. Screw Apple.

    Can anyone use Macs to get work done any more?

  5. #4


    The Focusrite interface has direct monitoring, so latency shouldn't be a problem. The guitar going into the interface comes directly back to you via the headphones without making the round trip in and out of the computer. Just mute the track you are recording onto while you record and you won't hear the latency. Recording that way is called zero-latency monitoring.

    There will be a tiny bit of latency from your interface to your DAW, that's there no matter how you record. And by tiny, I mean maybe 3ms, about the time it takes sound to travel from a guitar on your lap to your ear, less time than it takes sound to travel from the drums 8 feet away to your ear. Just to put ms of latency in context. Sound travels about 1100 feet per second or 1.1 feet per ms.
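    That arithmetic can be sketched in a few lines, just as a back-of-the-envelope check using the same rough 1100 ft/s figure:

```python
SPEED_OF_SOUND_FT_PER_S = 1100.0  # rough room-temperature figure

def latency_as_distance_ft(latency_ms):
    """How far sound travels in `latency_ms` milliseconds."""
    return SPEED_OF_SOUND_FT_PER_S * latency_ms / 1000.0

def distance_as_latency_ms(feet):
    """How long sound takes to cover `feet` feet."""
    return feet / SPEED_OF_SOUND_FT_PER_S * 1000.0

print(latency_as_distance_ft(3))   # 3 ms of latency: about 3.3 feet
print(distance_as_latency_ms(8))   # drums 8 feet away: about 7.3 ms
```

    So 3 ms of interface latency is literally less delay than you hear from a drum kit across the room.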

    I do this all the time...

    These steps make it easy for the engineer to line things up:

    Ideally they sent you the track (WAV or mp3) with two or three lead-in bars that have a click. They also tell you the bpm so you can set your DAW (Ableton) to the same tempo; it's just cleaner and easier to communicate this way.

    You put the mp3 at the beginning of the time line, the very front edge to the left of the DAW.

    When you render, you need to send your individual track so it can be mixed in. Render the entire song so that the silent lead-in bars are included in your track. The engineer then just sets your track to the beginning of the timeline and it should be perfectly lined up, no guesswork involved.

    I close mic my guitar amp (I use an sm57), mic into the interface. Dial in your sound. If I was the engineer I'd ask you to have no effects other than any pedal or amp gain that you like, but if for performance purposes you like to have reverb and/or delay etc. I'd say okay. Performance is the most important.

    It's not too important, but some engineers prefer that you set the input gain on the interface so the meters on the DAW peak around 2/3rds of the way up. The important part is just don't clip.

    Monitor through headphones so there is no mic bleed.

    Record the video on your phone simultaneously while recording to the DAW. The phone will pick up your guitar playing in the video, which will help the video editor line things up in the video editing process. You also could do a pick rake, finger snap, or hand clap on the 1 of bar 2 of the count-in before you start playing, for video lining-up purposes; that's what those movie clapboards are for.

    You will be sending two files, a video file and an audio file. The audio recorded on your smartphone will be muted as part of the video editing process. The audio file from your DAW will be used for the audio mixing, that entire audio mix becomes a track in the video editor software.

    It sounds like a bit of a process but once you've done it a few times you can do it without thinking. The hard part is the audio mixing and video editing but you're not doing that.

    Last edited by fep; 04-24-2020 at 08:14 PM.

  6. #5


    Whatever you do, does it matter if there’s some latency between the backing track and you playing? I’d assume that the chap who is going to have to sync up 5 different videos would only want each video to contain that person’s audio and not the backing track? So you would exclude the backing track when finalising your video (of course your video has to be in sync with your own guitar audio).

    Otherwise he will have a terrible job trying to mix 5 audio streams which all include the same backing track.

    You could also send him the video and your audio as separate files if that’s what he wants.

    Unless of course I have misunderstood the process here.

  7. #6
    My assumption was that I'd listen to the backing track through earbuds. I'd record a track with guitar only.

    Apparently, we're using a 16 beat count-in, so I'd play quarters on that and omit the last one. That would help line up the tracks in the final video.

    I'm assuming the guy who assembles the video can drag it back and forth in time, so latency isn't an issue for me. I could be wrong.

    I'm trying to figure out a way to do this that doesn't require that I learn how to use Ableton, or buy something else. The bassist and I did some duos with Ableton a couple of years back. We figured it out, but it was really unpleasant to use.

    Here's my current thinking, based on the idea that I can put the phone output into the Scarlett interface and monitor without latency.

    Then, all I need to do is record into the Camera app on the laptop. That app is pretty simple. In fact, it only seems to have one button.

    Apparently, I use the windows settings menu to change from the built in mic to the Scarlett. Output of Scarlett becomes the microphone input on the laptop.

    Then, I start playback on my cell phone, into Scarlett channel 1. Plug the guitar into channel 2. Monitor from the Scarlett, through headphones or the home stereo; room sound can't get onto the track.

    Hit the record button, and I should be recording video and guitar. If it records in stereo, I might have the backing track on one side and my guitar on the other, which I think would be fine. Otherwise, I have to make sure that the Scarlett doesn't send the backing track along to the laptop. So, I have to monitor both channels but output only the channel with the guitar.

    Or, am I missing something?

    For example, when the last guy tries to line up all the audio and video, will his software have, as its minimum time unit, one frame? That's fine for the video, but it will screw up the audio, if he can't be more precise.

  8. #7


    This I have also done with pretty good results:

    Along with playing quarters on the 16 count-ins... The simplest way is to monitor with earbuds or headphones from your wife's phone or from your computer, and then just play your guitar while recording video with your smartphone. That's it, done. No mics, no computer, nothing.

    Now you just send one video file of your guitar playing.

    The video file from your smartphone has two tracks, video and audio, that can be split. In my DAW, Reaper, I can import a video file and it automatically splits the audio and video so I can mix the audio just like any other audio track.

    I should also note, I transfer my video files from my phone to my computer with the USB cable. Video files are very large and I haven't been able to send them directly from my Samsung phone; this may be different with Apple phones. I then send the file via Gmail, which automatically interfaces with Google Drive storage to send large files. Also, the person doing the video editing is going to need a powerful computer to be able to run 5 videos simultaneously in video editing software.

    BTW, on my projects I do the audio mixing and video editing.

    My guess, this is what John Legend did on this video, his sound wasn't as good as the others but it was pretty good. With electric guitar it wouldn't matter as much as voice:
    Last edited by fep; 04-24-2020 at 08:40 PM.

  9. #8


    Quote Originally Posted by rpjazzguitar

    For example, when the last guy tries to line up all the audio and video, will his software have, as its minimum time unit, one frame? That's fine for the video, but it will screw up the audio, if he can't be more precise.
    This is all true, you can only move by one frame, at least in my video editing software VSDC.

    But it is not a problem, as I'm importing the audio to my DAW and mixing there. In the video editing software I mute all the videos and import the audio track from the DAW. I line the videos up to that one audio track. So, in the final edit you only hear the audio mix from the DAW.
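    To put the frame-versus-sample precision gap in numbers (a quick sketch; 30 fps and 44.1 kHz are assumed, typical figures):

```python
FPS = 30               # assumed video frame rate
SAMPLE_RATE = 44100    # assumed audio sample rate

frame_ms = 1000.0 / FPS                 # one video frame: about 33.3 ms
samples_per_frame = SAMPLE_RATE / FPS   # 1470 audio samples per frame
sample_ms = 1000.0 / SAMPLE_RATE        # one audio sample: about 0.023 ms

print(frame_ms, samples_per_frame, sample_ms)
```

    So frame-level alignment is roughly 1500 times coarser than sample-level alignment, which is why the audio is lined up in the DAW rather than the video editor.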

  10. #9


    One more thing: I often have an inferior video because of the lighting. Experiment with lighting and placement of the camera. You also may want to be conscious of the background. Some guitars and amps in the background might be cool.

  11. #10


    I would listen to the MP3 through phone and earbuds. If recording an amp, I would use that for monitoring and just send it to the Scarlett. If recording direct to the Scarlett, I would either use an A/B box to send the signal to an amp before the Scarlett, or use an output of the Scarlett for zero-latency monitoring.

    It is easy to sync everything when editing the videos. Here's one I made recently. The violinist sent me her melody video, I just played by listening to it through headphones, and then synced the two videos using VSDC. These programs can mix video and audio very accurately, like DAWs do.

    On this video, she recorded herself with an iPhone, so no syncing needed on her video. I recorded sound and video separately (video camera and sound recorder), and synced them using VSDC. Then I synced the two videos together. (I actually used reaper as well cause my audio had two sources that needed to be synced as well, but you don't have to do that).

    It's not that difficult to do. You just zoom in on the sound waveform till you see it clearly, and then move it around while watching the video.


    If you do it often, the easiest way by far is to buy a camera with a line in. Then you do the videos without even using a computer.
    Last edited by Alter; 04-24-2020 at 11:59 PM.

  12. #11
    First of all, thanks to all who replied.

    Here's what I ended up doing.

    I plugged my phone into the mic/line channel of a KC 150 using a 1/8 to 1/4 adapter. That was my playback.

    I found an app called "Camera" on my laptop. Very simple. Push the button, it makes a video using the laptop's camera and microphone.

    To improve the audio, I used a Focusrite Scarlett 2 channel interface. Plugged the guitar into it. Right clicked on the speaker icon in the windows system tray and set the microphone input to the Focusrite box.

    Tested it, and, sure enough, I got a video with guitar. An MP4, which seems to be okay with the rest of my group.

    Hit the video record button, started the playback on my phone and recorded.

    The result is a video with guitar only. As it turned out, the backing track did not have the promised 16-beat count-in. But, the arrangement had the guitar resting for the first 8 measures while there was a bass drum beat on the backing track, so I strummed muted strings along with that. I figure that will help the video guy later line it up, and then he can mute the audio in that section.

    Anyway, it worked and I didn't have to descend to the 7th circle of Ableton to do it.

  13. #12


    A lot of small details to get this done. But once you've done it once with success you have a template going forward and it's pretty easy.

    Fingers crossed that the video editor has a powerful machine that can handle 5 videos at a time. It's highly dependent on how high a resolution everyone used when recording the videos. I have some workarounds for that too if it becomes a problem.

  14. #13
    Things evolved. Now the goal is to do audio only, but in a lossless format. The guy at the end of the chain wants everybody to use a DAW.

    I'm still in denial, I think, trying to avoid going down a rabbit hole with Ableton, so I'm going to see if Audacity will do it. Should be dirt simple. I put in the supplied backing track as track 1 and record onto track 2, somehow avoiding latency.

    I'm guessing, please enlighten me, that I monitor direct from the Focusrite box. The guitar is plugged right into it. Audacity can feed into it.

    So, I play along and it sounds synced in my headphone.

    Now, it can't be synced with the original track because of the A/D and D/A conversion round trip. Does the DAW re-write Track 1 as it adds Track 2? Is that how it eliminates latency?

    If not, can you explain, conceptually, how latency is avoided? Or do you have to drag and drop one of the tracks later, to compensate?
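    (For anyone wondering the same thing: conceptually, the DAW doesn't rewrite Track 1. It slides the newly recorded take earlier in time by the measured round-trip latency. A rough sketch of the idea, not Audacity's actual code:)

```python
import numpy as np

def compensate_latency(recorded, round_trip_ms, sample_rate=44100):
    """Shift the overdub earlier by the measured round-trip latency,
    padding the end with silence so the length stays the same."""
    offset = int(round(round_trip_ms * sample_rate / 1000.0))
    if offset == 0:
        return recorded.copy()
    pad = np.zeros(offset, dtype=recorded.dtype)
    return np.concatenate([recorded[offset:], pad])

# e.g. at a 1 kHz sample rate (for easy numbers), 3 ms = 3 samples
take = np.arange(10, dtype=float)
shifted = compensate_latency(take, round_trip_ms=3, sample_rate=1000)
print(shifted)  # starts at sample 3, ends with 3 samples of silence
```

    Audacity's "latency correction" setting is exactly this kind of fixed negative offset applied to each new recording.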

    Thanks.

  15. #14


    I'm wondering if your final chain guy realizes that audio can be split out of the video and placed in the DAW. Maybe Reaper does this but the DAW he uses doesn't?

    You have recorded direct from an electric guitar, I assume? I would have done it with a mic on your amp and the mic plugged into the Focusrite. But with the direct guitar it's fine; I would probably run it through an amp sim in my DAW if I was doing the mixing. Unless it's an acoustic guitar with a piezo or something.

    If you want me to separate the audio from your video, you can send me the video. Note I don't re-record your video file; I just import it into Reaper and it shows the audio track by itself, then I just render the audio track. It's completely lossless. PM me if you want to try this.

  16. #15
    Very kind of you to offer. Looks like I won't need to bother you.

    The assignment evolved. No more video. So, it looks like I'm going to load the backing track into Audacity and record an overdub. I can get my sound by going right out of my pedalboard. I could mic an amp (and maybe I will), but the recorded guitar sounds okay to me. OTOH, they just added a guitar solo into the arrangement, so maybe I need to rethink that. No big deal. I have a mic and the Focusrite will take it.

    My main concern was learning how to eliminate latency from Audacity. I watched a couple of videos and learned how to use the compensation parameter. That ought to take care of it.

    I confess to being a little surprised by this. I'd have thought that the computer, which often seems to be doing stuff I know nothing about, would produce annoying variability in the latency number. But, apparently not.

    Alternatively, I could have just played the backing track on my phone into one channel of the Focusrite and the guitar into the other. The Final Chain Guy would have to line up the result by dragging it, but he'd have the original backing track to work with.

    But, he's a dedicated techie. He hates "quick and dirty" as much as I like it. So, he wants to load my file and have it line up because I started by loading the backing track into Audacity, played along with it and fixed the latency with the compensation parameter. If we all do that, all he has to do is load the files into his DAW. He's a Mac guy and has Logic, among other things. He worked as a programmer for a Logic competitor at one time, so he's intrepid.

    Fep, you have been very kind. Thanks!
    Last edited by rpjazzguitar; 04-25-2020 at 11:53 PM.

  17. #16


    If necessary, you can also correct latency problems post-recording in Audacity, by using the Time Shift tool, i.e. select the delayed track and 'slide' it back into the correct position. It's a button in the top row, looks like a double-headed arrow.

  18. #17


    If he's the guru, he should be able to deal with syncing. If it's an audio only file, latency isn't an issue, AFAIK. It's not rocket science to sync the audio files, people have been doing that since long before personal computers existed, perhaps before any electronic computers existed, and certainly without using any of them. It's easier to sync digital audio files than tape.

  19. #18
    Thanks for that.

    What happened since is more related to group dynamics than music or audio engineering.

    The group split along the same lines as liberal arts and engineering majors.
    It was 3:2 for the quants.

    Then, one of them, who is the de-facto leader (it's his tune we're recording) switched sides. He monitored with earbuds from his phone and recorded through his amp into the built-in microphone on his Zoom recorder. Any lower tech and he'd have needed cans, string and a candle. To be clear, he listened to what he was playing via leakage around his earbuds.

    One of the other quants suggested that learning to use a sequencer would help protect against Alzheimer's Disease. I'm guessing that was meant partly as a joke.

    I didn't want to monitor with leakage, as appealing as that may sound.

    So, I played my phone into my keyboard amp. That was my monitoring -- not connected to anything else. No headphones.

    Then I plugged the output of my pedalboard into the Focusrite. Plugged the USB output into my laptop. Plugged the 1/4 inch output into my Little Jazz.

    Started Audacity and hit Record. Then started the playback on my phone. I then played along with the backing track which was coming out of the kb amp.

    Finished the tune, hit Stop.

    Hit the arrow for playback and heard it quite nicely through the Little Jazz. Sounded the same as what I heard as I was playing it.

    I exported the track to mp3 and emailed it.

    Done.

    Next up, will be trying to do the playback from Track 1 in Audacity and add a guitar track on Track 2. I can either measure the latency or just slide the track into place later.

    One thing I don't understand is how somebody can record without audible latency. When I tried it was beyond obvious and made the tracks unusable as is.

    This occurred to me: If I had a three channel interface .... Audacity Track 1 is the reference recording. Play along with it and record that version on Tracks 2 and 3. I guess you jumper the output of Track 1 into the input of Track 2.

    Then, if this makes any sense, Tracks 2 and 3 will be in sync with each other, even though they are out of sync with Track 1.

    It seems like software could do that automatically, rather than making the user fiddle around with latency numbers.

    What am I missing?
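    (Software can in fact do this automatically: the usual trick is cross-correlating the recorded track against the reference and taking the peak as the lag. A minimal sketch of the idea, assuming NumPy; real alignment tools are fancier:)

```python
import numpy as np

def find_offset_samples(reference, recorded):
    """Estimate how many samples `recorded` lags `reference`
    by finding the peak of their cross-correlation."""
    corr = np.correlate(recorded, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

rng = np.random.default_rng(0)
ref = rng.standard_normal(200)
delayed = np.concatenate([np.zeros(13), ref])  # simulate 13 samples of latency
print(find_offset_samples(ref, delayed))  # 13
```

    Once you have the lag, sliding the track by that many samples is the same "drag it into place" step, just done for you.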


    Edit: The leader called and said my guitar tone sounded flat and was in mono. He wanted stereo. I play a mono instrument. He's just learning Reaper and hadn't tried to simply cut and paste into a new track. So, I did another version. Took the L and R outputs of my pedalboard into the two channels of the Focusrite. Audacity recognized this and recorded in stereo with no action required on my part other than hitting Record. I turned the treble all the way up on the guitar, although I doubt that was the problem. Playback sounded fine to me both times, but that was through the Little Jazz, which always sounds great.
    Last edited by rpjazzguitar; 04-26-2020 at 11:12 PM.

  20. #19


    Quote Originally Posted by christianm77
    Can anyone use Macs to get work done any more?
    Many, many people, me included. But I never bought a 2-port computer of any description.

    I guess you must have bought that 2-port laptop when it first came out, all light-weight and shiny; otherwise you would have bought a Mac with more ports.

    I'm sorry you are so frustrated with your laptop. But as someone who spent 20 years diagnosing and fixing computers professionally, there is no way I would ever own a machine running Windows. I spend my life needing a DAW, graphics editing (I have abandoned Adobe for Affinity), and video production, in addition to the workaday applications. I have lots of 3rd-party utilities that make my life easier, and I have 8 external hard drives, 3 printers, and an audio interface hanging from my $2000 2018 Mac mini; I restart the computer maybe once a month. I have an 8-year-old MacBook Pro retina that will run the latest operating system for running audio apps like MainStage on gigs, or when I just need to travel.

    All that said, Apple does throw the unexpected curveball every so often with OS updates, but I've learned – I'm no longer the first one to try out the latest update to the operating system. I'd go with Linux, except the audio, video, and graphics offerings are so lacking.
    Last edited by Ukena; 04-27-2020 at 07:48 AM.

  21. #20


    MP3 is not a lossless format, it's lossy. For lossless, you can use FLAC or WAV. But if the boss accepts MP3, good enough. Audacity will happily export to any format you specify.

  22. #21
    Understood. Audacity, or even my pocket recorder (Yamaha Pocketrak), will do either MP3 or WAV. So, I sent an MP3 and a note that I can send a WAV if the group decides to go that route. I got a note back saying it was fine.

    I usually use MP3. The files are easier to work with and I don't hear much difference on the playback gear I usually use.

  23. #22


    Audacity is not really designed as a recording tool, it is more for editing audio files. I’m not a techie, but I believe it is not optimised for low latency (doesn’t use ASIO drivers or something? not that that means a lot to me).

    If you want to do multitrack recording, you probably need to consider other methods. I use a small Korg digital recorder which allows overdubs and has hardly any latency.

    Having said that, I recently got a new more powerful PC (my old one was truly ancient and really slow), it also has a fast solid state drive for the software to run on. I did a quick overdub test in Audacity, there was very little latency. So more computer power probably helps.

  24. #23


    Quote Originally Posted by grahambop
    Having said that, I recently got a new more powerful PC (my old one was truly ancient and really slow), it also has a fast solid state drive for the software to run on. I did a quick overdub test in Audacity, there was very little latency. So more computer power probably helps.
    lol ignore that. I was a bit suspicious of this so I investigated further, turns out my Audacity settings already have a negative latency correction in place. It is about the right amount so it reduces the perceived latency to under 10 ms.

    Since I recently downloaded and installed the latest version from scratch on my new PC, I don’t know if it came with that as a standard setting, or maybe it picked up some kind of settings which I already had (I copied all data over from my old PC first). Either way it’s quite handy.

  25. #24


    Quote Originally Posted by rpjazzguitar

    Edit: The leader called and said my guitar tone sounded flat and was in mono. He wanted stereo. I play a mono instrument. He's just learning Reaper and hadn't tried to simply cut and paste into a new track. So, I did another version. Took the L and R outputs of my pedalboard into the two channels of the Focusrite. Audacity recognized this and recorded in stereo with no action required on my part other than hitting Record. I turned the treble all the way up on the guitar, although I doubt that was the problem. Playback sounded fine to me both times, but that was through the Little Jazz, which always sounds great.
    He wanted stereo... Yikes. Unless you are recording with a stereo effect I can't make sense of this. I have never heard anyone make that request and I'm old.

    A mono track in stereo sounds the same as a mono track. Unless there is something different between the right and left of the stereo track, it will sound as though it is coming right down the center. It's the same as mono: both speakers are playing the exact same thing, just like a mono track panned to the center. If I wanted your guitar to have a stereo sound, I would start with a mono track. There are a bunch of ways to process and create a stereo sound: duplicating the track, panning each copy to opposite sides, and processing the copies differently with EQ or chorus or a stereo width effect, etc.; or my most often used, no duplicate tracks, just the track panned favoring one side and a send to a reverb bus with that reverb send panned to the other side.

    When I receive a stereo track, given there are no stereo effects or qualities to the track, I would downmix the track to mono (which is a command in Reaper). A true stereo track can be tricky to pan (although Reaper has ways to do it); normally when you pan a stereo track towards the right, for instance, you are losing whatever was on the left. Imagine panning a stereo keyboard track to the right: you'd be reducing the volume of the lower notes and increasing the volume of the higher notes. In general you want to mix with mono tracks so you can pan the various tracks to create stereo effect and separation between the instruments. A big part of panning is giving the mix more clarity.
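    To illustrate the difference: plain duplication of a mono take still sounds mono, while delaying one copy slightly adds width. A rough NumPy sketch of the idea (not any DAW's actual processing; the 441-sample default is an assumed ~10 ms at 44.1 kHz):

```python
import numpy as np

def mono_to_stereo(mono):
    """Plain duplication: both channels identical, still sounds mono."""
    return np.stack([mono, mono], axis=1)

def mono_to_wide(mono, delay_samples=441):
    """Duplicate and delay one side slightly, one of the simplest
    ways to fake stereo width from a mono source."""
    delayed = np.concatenate(
        [np.zeros(delay_samples, dtype=mono.dtype), mono[:-delay_samples]]
    )
    return np.stack([mono, delayed], axis=1)

take = np.arange(1000, dtype=float)
print(mono_to_stereo(take).shape)  # (1000, 2)
print(mono_to_wide(take, 10).shape)  # (1000, 2)
```

    The first function is what "cut and paste into a new track" gives you; the second is the kind of processing you'd add on top if you actually wanted width.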

    In Reaper, with your mouse, right-click over the track in the track panel (i.e. the left section of the screen) and from the menu that appears select Duplicate Track. Takes one second.
    Last edited by fep; 04-27-2020 at 10:28 AM.

  26. #25
    I think what happened was pretty basic. He got the track, loaded it in Reaper, and could only hear the guitar through one of his earbuds. He didn't know how to make the track come out of both earbuds. I had something I wanted to improve in my track, so I just re-recorded using both sides of the Focusrite.

    As an aside: there was a repeated figure which I screwed up one time. Since I liked the rest of the track, I cut and pasted a good rendition of the figure to replace the bad one - in Audacity. Seemed to work fine - but I was only hearing the guitar -- no reference track. When he put it together with the rest of the tracks, it turned out there was a fraction of a second of delay in the guitar after the edit. So, he either has to slide it over in Reaper or I have to re-record.

    Apparently, you have to carefully match the length of the cut/paste pieces, or something. The figure was followed by silence in both cases, so I figured if I matched the beginning perfectly, the end (which was in the silent section) wouldn't matter. Live and learn.

    If we're lucky and the virus continues for a long time, I'll learn how to do this.

  27. #26


    I'm sure an experienced Reaper user could make your punch work. It's really easy to position your punch, trim it, stretch it, change its tempo, break it into pieces, etc. It's crazy what can be easily done. If I'm imagining this correctly, he just needs to trim the silence and slide it into place.

    One of the great things about Reaper is this guy's tutorials. Just to illustrate, here is his punching-in video. You may want to go to 04:15 of the video, where he gets into "hands free" punching. Maybe Audacity does something similar. You should be able to do this seamlessly.

    You might want to turn your friend on to the Reaper Mania videos on youtube, they are also on the Reaper.fm site.


  28. #27
    In case this is of any interest ...

    The project morphed again.

    The guy who wanted everyone to use a "sequencer" prevailed, and probably, for good reason.

    So, I switched from Audacity to Reaper.

    Latency was an issue with Audacity but not Reaper. I wouldn't swear I had the settings right on Audacity.

    I watched a very well done tutorial video. This one.


    Recording is a couple of clicks. Overdubbing is about the same, and I couldn't hear a problem with latency, which was reported by the program as 9 ms each way.

    Moving tracks around to synchronize them (it's a motley recording crew contributing tracks on this project) was a little laborious, but straightforward.

    In Ableton, you mark the section you want to replace, start somewhere in front of it, play along with the track, and Ableton punches you in and out.

    In Reaper (if I understand it) and Audacity, you just record a new track and then silence the part of the old track you don't want. Then, in mixdown you hear only the replacement.

    And, for this purpose, that was all I needed to do.

    Next time, I'll be able to record my parts in Reaper and they'll line up with no sliding of tracks backward or forward in time.

    Reaper seems very well done, was stable all day, not too difficult to learn and, apparently, very deep.

    Next virus: video editing.