Favorite method to time stretch part of a cue?

Can’t cut/paste. I am new to doing this particular thing, but I know enough to know that you can’t cut/paste. Time stretch… how? If the track is 24 bars long, how do I time stretch bars 9-12 only?
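For what it’s worth, finding where bars 9-12 actually sit in the raw audio is just tempo arithmetic, whichever editor ends up doing the stretch. A minimal sketch, assuming 4/4 and a 44.1 kHz file (both assumptions, plug in the cue’s real values):

```python
# Convert a bar range to sample offsets, given tempo and sample rate.
# 4/4 and 44.1 kHz are assumptions; substitute the cue's actual values.

def bar_range_in_samples(first_bar, last_bar, bpm, sample_rate=44100, beats_per_bar=4):
    """Return (start, end) sample offsets for bars first_bar..last_bar, 1-based, inclusive."""
    samples_per_beat = 60.0 / bpm * sample_rate
    start = round((first_bar - 1) * beats_per_bar * samples_per_beat)
    end = round(last_bar * beats_per_bar * samples_per_beat)
    return start, end

# Bars 9-12 of a 24-bar cue:
start, end = bar_range_in_samples(9, 12, bpm=79)
```

Any warp tool is ultimately pinning those two boundaries and moving points between them, whatever the GUI calls it.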

Can’t cut. Cutting is out. Do you know what a violin section recorded with hall ambience sounds like? Cannot cut.

So I should d/l a competitor’s product? :laughing: I don’t care about cumbersome. Even if it is, it’s probably easier to do it in Cubase than to d/l and install Reaper. So, Cubase does this? How? Free Warp, as JSQ suggests?

This sounds like what I need. Will try FW later - thanks!

Watch a couple of vids on YouTube, not that I can suggest any, but that’s how I usually find out the really practical stuff, like which buttons to press and where those buttons are! Useful for building sheds and connecting solar panels as well as Cubase tutorials :slight_smile:

Mike.

FreeWarp will do this … but it is not intuitive nor easy. I pretty much do what you are trying to do all the time. If you mess with it and get it to do this easily, I’d love to hear how.

I find it quite intuitive. When you switch FreeWarp on, it puts markers on each barline (there may be a preference for this, or it may be my particular audio events??), and I delete the ones I don’t want by shift-clicking them. Then I click and drag on note starts (eyeballing the waveform and playing/rewinding) to add a marker and drag it to the beat I need it to be at. I may add many of those markers, sometimes one for each note of the instrument if it’s really randomly out of time.

But overall, I prefer my other method where I adjust the click track to match the music first, it seems easier to me…

Mike.

My problem with that is that orchestral tracks have legato passages that don’t detect against the timeline. The manual insertion process disrupts the entire audio event instead of just the section between the markers. Or at least, I can’t get it to work that way.

JMCecil - this is off topic of course, but is it to do with the events becoming musical events but not having their tempo definition set correctly? I.e. once you make an adjustment using FreeWarp it puts the event into musical mode which could radically change the way it plays because it then conforms (incorrectly) to the tempo map? I stand by my use of ‘Set Tempo From Definition…’ to make the clip conform to the current tempo map first, it may solve your problems…

But at the end of the day it’s always trouble when you start tweaking the tempo map to match video cues. I always end up with cues which overlap but are in different tempos, etc. Nightmare. I tend to go with the quickest solution and then bounce it all down as soon as I can! Or work with two projects and copy and paste mixdowns etc.

Mike.

I get how it works, Mike. I’ve spent days trying to get it to work. I’m saying that “Set Tempo From Definition” does not work with legato instrument passages in my experience. It totally F’s them up. But even that method, assuming it works, is WAY cumbersome on 32-track mixdowns with multiple stems, fighting each one individually. The other problem is that when you insert your own markers, Cubase adds them as “quantized” locations, which kills the timing down the length of the file.

I gave up, and now do this, which literally takes seconds. And, I don’t need a physics degree and decoder ring to do something simple.

This is how freewarp should work. I shouldn’t need any kind of tempo markers, or special mode. Just mark 2 boundary locations and stretch in the middle. It needs to work from the project window as well.
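Conceptually, what’s being asked for here is a piecewise-linear time map: each warp marker pairs a position in the source audio with a position on the grid, and dragging the middle marker stretches only the two segments on either side of it while the boundaries stay pinned. A rough illustrative sketch (the marker values are made up):

```python
import bisect

# Sketch of the time map behind warp markers: each marker is a
# (source_time, target_time) pair; between markers, time is interpolated
# linearly, so dragging one marker stretches only its two neighbouring
# segments. The marker positions below are hypothetical.

def warp(t, markers):
    """Map a source time t to warped time via (source, target) marker pairs."""
    sources = [s for s, _ in markers]
    i = bisect.bisect_right(sources, t) - 1
    i = max(0, min(i, len(markers) - 2))          # clamp to a valid segment
    (s0, d0), (s1, d1) = markers[i], markers[i + 1]
    return d0 + (t - s0) * (d1 - d0) / (s1 - s0)

# Two fixed boundaries at 0 s and 4 s, middle marker dragged from 2.0 s to 2.5 s:
markers = [(0.0, 0.0), (2.0, 2.5), (4.0, 4.0)]
print(warp(1.0, markers))  # 1.25 - audio before the dragged marker is stretched
print(warp(4.0, markers))  # 4.0  - the boundaries themselves stay put
```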

I’m willing to be wrong … VERY WILLING. Please tell me this simple process is doable and I’m just over complicating things?

That movie looks very similar to Cubase to me, adding anchor points and then adjusting the middle. But one thing is that you can’t use those stretching techniques in the timeline directly, only in the waveform/sample editor, which is a pity! However, I sometimes cut up the notes and stretch the resulting events, and I do that on the timeline (using the time-stretch tool). If I’ve got a live concert to trawl through then I’ve got special shortcut keys to speed it up, one hand on the keyboard, the other on the mouse, and it goes pretty quickly - although it is rather tedious!

I’ll be interested to hear from the OP how the time-stretching sounds because I usually avoid it due to the burbling, stuttering or distortion effects…

Mike.

I’m very interested in Jeff’s take on this as well. Let us know how it goes. All I can say is it DOESN’T work the way the video shows in my experience. Because you have to set a reference tempo or grid, it never works on media without good beat detection. That’s just my take. Again, I’m really curious how Jeff gets along. Maybe I need to take another crack at it.

I say to stay away from timestretch. I have found that your sound will only worsen unless the stretch is very short.

Best to do what you can with scissors (even remove a complete section of the performance if you have to). It’s salvage; it’s not going to be pretty.

I usually start with hitpoint slicing, then see what is optimal to slice (e.g. percussive-like notes). If individual notes are accessible you’re golden; if not, just start muting or cutting areas that are bad…

Also, bounce the audio around those sections. Don’t try to edit a long piece of audio; break it up into short chunks for this kind of thing.

GL

Me too! :slight_smile:

I have the project open now, and am at p. 345 of the manual - the FW section.

Again with the scissors! :angry:

Ok, you scissorheads - here is the portion of the track that needs to be fixed.

http://www.jeffreyhayat.com/vtimetest.mp3

It’s 79 bpm. Go ahead, grab the scissors tool, and tell me where I am supposed to slice…

Eh, FW doesn’t seem to be doing it. If I insert a warp marker, then drag the warp marker to a new position as instructed in the manual, it moves ALL of the audio. :confused:

Maybe a screenshot would also help…

To me that sounds like a candidate for hit-point slicing… If needed, apply a volume fade-out at the end of each section.

I think that is more acceptable than using AudioWarp, which will completely destroy your stereo image wherever there is stretching.

Minor edits could be acceptable though, at least in my experience. That Reaper video looks much easier than the multi-stage editing I have to do to get a similar result.

Alright, got it done.

I am proud to announce that FW does not work as the manual says it does. :unamused:

It actually works exactly as I expected it to. In the S.Ed., place a warp line where need be, grab (select) a section of the track/event, and move the mouse slightly to the left or slightly to the right. It’s actually pretty easy. All you need is a good ear and some patience.

Here’s a before and after, with a basic k&s for reference:

http://www.jeffreyhayat.com/vtimetest-before.mp3

http://www.jeffreyhayat.com/vtimetest-after.mp3

Still not perfect - it will never be (which is one of the reasons live players are better than samples) - but it’s better. And any artifacts that are there will be hidden by some extra reverb and the rest of the orchestra.

Thanks all! :slight_smile:

Oh, I’ve never read the manual :wink:

Glad you’ve got it sorted. You can change the type of algorithm it uses in the Pool too, to minimise nasty effects. Also, I think if you bounce down it might even apply a more accurate algorithm than the real-time version (but best check, that could be old info!!). At the very least, bouncing stops any random processing fluctuations, which I’ve noticed in the past with real-time processing…

Mike.

I’m very wary of trying to align all acoustic performances to hitpoints, because it can kill the flow of that stream.

Acoustic music has the opportunity to ‘breathe’, and each instrument does not have to align with every other; force them to and they stop ‘dancing’ and just ‘march’. Music can be marching at ‘reinforcement’ points and drifting apart at other times.

The trick is to adjust the minimum to make the performance sound like it was supposed to be that way.

We record all our tracks in timebase mode, with my wife setting the basic timing with her guitar and vocals. She is not playing, and definitely not singing, ‘on the bar’ of some rigid beat.

When we do extra tracks, we are playing to that basic track with no metronome, but playing to the feel, and so the timing can vary quite a lot, but sounds natural.

Now, I am not an accomplished musician and still suffering from performance anxiety, so while my lead has the feel, my fingers don’t always play at the time I want them to, so I have to fix up a few timings.

I wish Cubase had a simple means of defining two fixed points and varying a third to get the timing, like someone wrote that Reaper does.

However, not being one to have multiple DAWs, in the tracklist, I:

  1. Set resolution to 1ms.
  2. Set selector mode to timestretch.
  3. Turn off scrolling.
  4. Set looping on.
  5. Select the track to show it vertically expanded (or manually adjust its size).
  6. Set the locators to include a couple of bars before and after the problem area, so that when looping, I can hear the effect of any changes as I am doing them in sufficient musical context.
  7. Expand the display to cover only the notes of interest.
  8. Cut the track at the note before, the note after, and the note to be changed, trying to make each cut in an area of low signal (the reason for this is explained later).
  9. Start playback.
  10. Adjust the note forward or back until it sounds ‘right’.
  11. Adjust the tail of the note before so that it joins the head of the adjusted note. If it is ‘under’ the affected note’s segment, click just to the left of the lower start of the affected note and you will be adjusting the previous note’s tail.
    This can take some trial and error, and I sometimes make an exaggerated movement forward and back just to see which seems better, then adjust from there.
  12. If nothing seems to work, you may have to adjust the note before or after instead, or even further away. I just undo (Ctrl-Z) to get back to before the cuts.
  13. If you get the right feel, bounce the track if you want to make sure you have captured the changes.
  14. Repeat steps 6 to 13 for each of any other changes.
  15. Bounce the track if you haven’t already done so.


    These changes will produce an artefact right at the cut points, which can be seen in RX’s spectral view as a thin vertical line through all frequencies. If you have chosen a low-signal area, the line will probably not be audible, but to delete it in RX, I:
  16. Select 1ms as the feathering cross-fade in the Miscellaneous Preferences.
  17. Select the Time-Frequency select mode.
  18. Expand the timebase until the artefact line is about 15-20mm (~3/4in) on the display.
  19. Click-and-drag a selection rectangle from the lowest frequency up to the top of the artefact (but definitely not right to the top, otherwise deleting below will cut out a complete vertical segment, rather than just reducing the levels), and about 1/2 a mm on each side of the ‘shaft’ of the artefact.
  20. Press the Delete key, which if the selection was done correctly, should produce a slight vertical black line, if you can see anything at all.
  21. Repeat steps 18 to 20 for each artefact.
  22. Save the file.


    Note that I record and edit everything at 192ksps, so the audible processing artefacts from the stretching and RX are minimal.
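For anyone without RX, a short crossfade across each cut does a similar job of hiding the click: instead of surgically erasing the vertical line afterwards, you blend the two segments over a millisecond or so at the join. A bare-bones sketch, with plain Python lists standing in for sample buffers (the 192-sample fade length is an assumption: 1 ms at 192 ksps):

```python
# Hide a click at an edit point with a short linear crossfade.
# fade_len is in samples; 192 samples = 1 ms at 192 kHz (assumed rate).

def crossfade(tail, head, fade_len):
    """Join two clips, blending the last fade_len samples of `tail`
    into the first fade_len samples of `head`."""
    assert len(tail) >= fade_len and len(head) >= fade_len
    out = list(tail[:-fade_len])
    for n in range(fade_len):
        g = n / fade_len                           # head gain ramps 0 -> 1
        out.append(tail[len(tail) - fade_len + n] * (1.0 - g) + head[n] * g)
    out.extend(head[fade_len:])
    return out

# Two clips that already match at the join pass through essentially untouched:
joined = crossfade([1.0] * 500, [1.0] * 500, fade_len=192)
```

A linear (equal-gain) fade like this is the safer default for correlated material, such as a cut inside one sustained note; an equal-power curve is usually preferred only when the two sides are unrelated.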

We record video of our performances for YouTube.

I have noticed that trying to hitpoint-align EVERY note variance can make the video look like it was badly dubbed, whereas adjusting only the few notes that MUST be changed to get the musical feel right can go totally unnoticed, even when varied by up to 100ms, unless you know they are there and are specifically watching for them.

Just for fun I downloaded the mp3 of the strings. I made a project in Cubase, added BFD just on the half notes. Dicked around for about 20 minutes and couldn’t get it aligned.

I made a Reaper project, added BFD, set 3 markers and exported this. From beginning to end in less than 3 minutes, which included putting the beats in BFD and waiting for the export.