Support for Drawing Tablets?

Have you seen the Seaboard? “MTF Exclusive: Jamie Cullum on the Seaboard @ Music Tech Fest 2013” - YouTube

I use a Wacom graphic tablet for note insertion in Sibelius. Granted, it’s nothing more than mouse emulation (read: rudimentary and nothing particularly sophisticated), but it actually works much better than a mouse, because it’s more natural and more like writing with a pencil on paper. Incidentally, I was doing the same thing years ago when I was using Finale. I haven’t used any recent version of Finale, but I’m absolutely sure that graphic tablet mouse emulation works with that software too.

I hope Steinberg’s new notation program will include native touch and graphic tablet support.

Thanks for the input. :smiley:

I got a Wacom… It’s not just hooking it up. I’d also have to do some real work to make my DAW desk accommodate it ergonomically; my setup is currently all oriented around a MIDI keyboard and controller sliders. I wonder how one fits all this hardware around ya so it’s a convenience and not more of a PITA.

For example, I’ve had as many as 4 screens, but then I realised they were making my -monitors- sound like crap.

Always trade-offs.

—JC



Has nothing to do with what I’m trying to achieve, but I’m sure it’s very ‘expressive’.

—JC


I keep the Wacom (it’s a 20" model, i.e. fairly large) in a slot on a custom-built desk that holds computer, monitors, keyboards, near-fields etc. and only pull it out when I use it. When in use, I typically keep it on my lap. It doesn’t need to be ON your desk.

Wow.
I use these in my day job every day; they aren’t all that if you aren’t actually drawing.

The right click is gonna be what gets ya! Also “in-between fingers” fatigue, cramping, wrists… We have evolved into mouse-using people for PC tasks. I sit here and try to use the stylus for everyday tasks, and it never lasts long. Switching from stylus to mouse over and over again ain’t too fun either, but you can get very good at it over time.

I agree with this. It took me -forever- to warm up to a Wacom simply because it’s so -annoying- (for me anyway) for anything -except- real drawing. I’ve seen guys demo it and it reminds me of vacuum cleaner salesmen—they do magic, but in the real world? Eh… not so great. :smiley:

That said, for -drawing-, a tablet became FANTASTIC after about 100hrs of fighting it. And -if- that could be transferred to a music notation metaphor, it would be -wonderful-. But as I began by ranting… getting the notes onto a ‘stave’ is only half the battle. -Then- the palette has to be smart enough so that when you put in a ‘>’ or ‘.’ or hairpins, it ‘tells’ VSL… ‘switch to marcato’… now cresc… now stacc… OK, now slur these two notes… The symbols have to be tightly integrated with all the controller and keyswitch junk I HATE.
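Just to make the ‘symbol tells the sample lib what to do’ idea concrete, here’s a tiny sketch in Python. All the keyswitch note numbers and symbol names here are invented for illustration; real libraries (VSL, Samplemodeling, etc.) each define their own layouts, and this isn’t any actual API.

```python
# Hypothetical mapping from notation symbols to sampler keyswitches.
# Note numbers are made up; real libraries define their own.
ARTICULATION_KEYSWITCH = {
    ">": 24,       # marcato
    ".": 25,       # staccato
    "slur": 26,    # legato
    "cresc": None, # better handled as a CC11 ramp than a keyswitch
}

def keyswitch_for(symbol):
    """Return the keyswitch note to send before a note, or None
    if the symbol isn't a keyswitch-style articulation."""
    return ARTICULATION_KEYSWITCH.get(symbol)
```

The point is only that the translation layer is a lookup plus some per-library plumbing; the hard part is agreeing on the mapping across vendors.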

For -me-, my ‘Starship Enterprise’ would be:
a) a notation program that worked with a tablet well
b) and then note-expression stuff that auto-magically converts the symbols into a universal format that all big sample libs understand.

OR…
a) sample libs that work like Samplemodeling, so you can capture a real-time performance and all the controller junk gets automagically converted to the relevant articulations.

As it stands, DAWs remind me of those fancy video editors like After Effects or Blender. You sketch out a ‘wireframe’ and then spend a TON of time tweaking it to get the final ‘rendering’. And it’s my belief that this is why so much digital ‘art’ is so mechanical. You need the realtime feedback.

Ironically, this is not a problem for me with pencil and paper. Since you’re just ‘hearing it in yer head’, I feel zero frustration. Yeah, I can’t hear it, but I know that when I -do- get it in front of players they -magically- take very simple ‘instructions’ (notes, dots, hairpins) and turn it into ‘music’.

With DAWs, it still feels like more -programming- than anything else.

—JC


True, but entering notes with a graphic tablet and durations/rests with the computer keyboard still beats entering notes with mouse & computer keyboard or music keyboard & computer keyboard. Like I said, it’s rudimentary and far from perfect, but if you’re used to writing music with pencil and paper, that’s as close as you’ll ever get today. Of course I totally wish that Steinberg’s new notation software will be far more advanced in that regard.

You’re lucky you get a response! :wink: :wink:

Generally, people can only see a solution that is just a step in front of where they are. Quantum leaps or left field ideas generally don’t get traction, regardless of how much time/effort/money they would save or how many opportunities they would open up.

Look at how many centuries it took to get something like Leonardo’s helicopter to actually fly.

But when an idea’s time has come, it flies!

The problem with trying to do such ‘playing’ indirectly by keyboard/tablet/whatever is that:

a) sample libraries only model discrete scenarios of the continuous spectrum that real performances can transition freely between.

b) the controller action repertoire is very generic and tends to isolate parameters that are interacting dynamically within an actual performance.


For example, on a SoundsOnline thread (EastWest Sounds), someone was trying to model a classical violin performance from a video. Their first attempt was good, but exhibited some of the stiffness of a lot of sample-based stuff.

When I looked at the video, I noticed that:

a) during the stronger sections, the notes were not only louder, but the performer took shorter and more abrupt bow strokes, probably reflecting the higher tension in their arms, so that the notes were slightly ahead of the orchestra.

b) during the quieter sections, the performer drew the bow longer, and seemingly more relaxed, so the notes were not only softer, but slightly behind the orchestra.

I pointed these out to him, and his second attempt required a lot of tweaking, but also sounded more natural.


To me, this says we are only going to get good sampler performances if we can:

a) set up parameters so that they interact in the same way a true performer’s physiology/temperament/emotion would have them in relation to the actual instrument’s dimensions/inertia.

b) control the interaction by just a couple of abstracted meta-parameters, making it easier to perform in real time, or using automation curves.


For example, to get a more realistic violin performance, an ‘intensity’ parameter, perhaps controlled by foot pedal or automation curve, could:

a) with increasing ‘level’, simultaneously:
___1) increase the level of notes.
___2) move the notes more forward in time.
___3) blend-in/select the more staccato patches.
___4) increase the initial bow bounce.

b) with decreasing ‘level’, simultaneously:
___1) decrease the level of notes.
___2) retard the notes more in time.
___3) blend-in/select the more legato patches.
___4) soft start the notes.

Now also imagine another meta-parameter for feel/genre that changes the bias amongst the patches, in much the same way that ‘volume’ selects between patches that match the timbre for different playing levels.
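The ‘intensity’ behaviour described above could be sketched as a single function that fans one value out to several interacting parameters. To be clear, everything here is an assumption for illustration: the function name, the ranges, and the constants are all invented, and no real sampler exposes exactly this.

```python
# Hypothetical 'intensity' meta-parameter (0.0 = relaxed, 1.0 = intense).
# One input drives level, timing, articulation blend, and attack together.

def apply_intensity(note_velocity, intensity):
    """Map one 0.0-1.0 'intensity' value onto several note parameters."""
    # a1/b1: scale the note level up or down with intensity
    velocity = min(127, int(note_velocity * (0.6 + 0.8 * intensity)))
    # a2/b2: push notes ahead of the beat when intense (negative = earlier),
    # drag them behind when relaxed
    timing_offset_ms = (intensity - 0.5) * -40.0
    # a3/b3: crossfade from legato patches (0.0) toward staccato (1.0)
    staccato_blend = intensity
    # a4/b4: shorter attack ('bow bounce') when intense, soft start when relaxed
    attack_ms = 80.0 - 70.0 * intensity
    return velocity, timing_offset_ms, staccato_blend, attack_ms
```

Driven by a foot pedal or an automation curve, one lane would then replace four or five that currently have to be drawn by hand and kept in sync.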


For guitar samples, the time between individual strings in a strum would have to vary inversely with tempo.
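The strum relation is simple enough to show numerically. The spread constant below is an arbitrary assumption, just to make the inverse relationship visible:

```python
# Toy sketch: gap between successive strings shrinks as tempo rises.

def strum_offsets_ms(num_strings, tempo_bpm, spread_const=1200.0):
    """Return per-string onset offsets in ms, spaced inversely to tempo."""
    gap = spread_const / tempo_bpm  # faster tempo -> tighter strum
    return [i * gap for i in range(num_strings)]
```

At 120 BPM the strings land 10 ms apart; double the tempo and the same strum tightens to 5 ms gaps.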


I see that while artistry shifts the upper boundary of what can be manifested, analysing and quantifying what makes great performances great helps shift up the lower boundary for everyone else.

One just has to see how difficult it used to be even for a trained professional to touch up portraits in photo editing programs, compared to what an untrained person can now do with Portrait Professional in a few simple keystrokes in 10 minutes. That is because someone distilled all the complexity into a few key parameters and made a program that makes it easy for ANYONE to do it.

Keep making suggestions suntower. One day someone will take up the challenge!

To follow on from what I wrote above, maybe we won’t really get more realistic and easily played sampler-based instruments until samples are more micro-adjustable depending upon meta-parameters.

Imagine a sample instrument consisting of a whole lot of very short impulses (from each stage of several notes), which, according to a couple of meta-parameters, are dynamically selected and morphed (in time and timbre) between.

Just maybe these huge multi-GB sample libraries, sampling complete notes, could be substantially reduced, at the expense of increased CPU, to just a few MB, while being a whole lot more versatile!
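At the core of that impulse idea is just interpolation between short snippets, driven by a meta-parameter. Here is a deliberately naive numeric sketch (linear crossfade only; real morphing would also warp time and timbre, which this doesn’t attempt):

```python
# Toy sketch of the morphing idea: blend two equal-length 'impulse'
# sample lists according to a 0.0-1.0 meta-parameter t.

def morph(impulse_a, impulse_b, t):
    """Linearly crossfade impulse_a (t=0.0) toward impulse_b (t=1.0)."""
    return [(1 - t) * a + t * b for a, b in zip(impulse_a, impulse_b)]
```

The trade-off the post describes is exactly this: storage drops to a handful of impulses, while the CPU takes over the work of generating everything in between.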

@Patanjali… all that is way too up in the clouds for =me=. All I can tell you is that the -performance- part of the equation is mostly possible NOW. Samplemodeling gets =much= closer, using just Kontakt scripting. VSL can create ‘auto-switching’ patches that are pretty good. Not perfect, but pretty good.

The problem is that, all that controller data doesn’t get translated to notation properly. And/or there’s no support for ‘expression’.

It’s -possible-, it’s just that sample lib makers don’t choose to support Note Expression and notation programs don’t translate the symbols into universally understood MIDI/controller stuff… again Note Expression could probably handle this. The reason they don’t support it? Lack of demand by… as you say… people who can’t see beyond the status quo.

If notation could hold all the levels of detail that can be expressed in MIDI, scores would probably become too cluttered, or have too many symbols to remember. After all, the few dynamic symbols (ppp pp p mp mf f ff fff) don’t really cover MIDI’s 127 levels. Or we would have to depart from the traditional symbols and go for things like L54 or L114.
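The coarseness of that grid is easy to see if you spread the eight traditional marks evenly over the MIDI range. The mapping below is an illustrative assumption; samplers and editors each use their own curves:

```python
# Rough, evenly spaced mapping from the eight dynamic marks to MIDI velocity.
DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def dynamic_to_velocity(mark):
    """Spread the eight marks across 1-127 in equal steps."""
    step = 127 // len(DYNAMICS)  # ~15 velocity values per mark
    return (DYNAMICS.index(mark) + 1) * step
```

Roughly fifteen distinct MIDI velocities hide behind every single printed dynamic, which is the clutter-vs-precision trade-off in a nutshell.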

I remember one particular MIDI rendering of a recording of ‘Somewhere Over the Rainbow’ that was far too complex, with many shifts in tempo, suggesting that notating it specifically enough to ensure it was reproduced exactly would take far too much time.

Unfortunately, MIDI is still the #1 bottleneck we have to deal with. And it’s starting to make less and less sense to cling to a 1982 standard. But then again, no one has the courage to just discard it and rebuild it from scratch, because everything third-party (not to mention all MIDI hardware) would become instantly incompatible. But maybe it’s possible to develop a new, truly modern standard that maintains backward compatibility with MIDI?

Seems to be tablet-only at present. Maybe they’ll consider it worth porting to “proper” computers with touch-screen or pad. Maybe if enough people asked?

Some might say that Cubase was made for use with tablets…

mild tranquilizers and painkillers being particularly popular.

:laughing:

Finally, a company that hasn’t fallen for the con that they have to ‘sell their apps for a couple of dollars and still make money’.

It isn’t cheap compared to most apps, but it certainly is not expensive for those for whom it works.

It’s -never- been the product that makes me reach for the valium.

It’s the (almost) complete lack of communication from the company.

I view Cubase and any organisation (government, church, etc.) as being like a supertanker… it takes a LONG time to make -any- course changes. Not a problem. It’s my living so I -want- them to be careful. But when one -depends- on it and you have no idea what direction we’re sailing… or when or -if- we’re stopping off at certain places? Yeah, that makes me seasick.

All people like me want is -communication-. Good docs. If I ask, ‘are you working on “X”?’ a simple, yes or no. A general timeline. Stuff that people have expected from mission-critical software for decades. I think one reason I get more frustrated than most is because I used to work for companies where that is simply -expected- from software vendors and I figured -everyone- would want that. I keep forgetting that the biggest question most ‘musicians’ have is: ‘Does it come in Fiesta Red?’

—JC

WOW! $22? Hell, if it actually -works-, I’d pay 10x that. I’d get it now, but I don’t have a full-size iPad, and I doubt it’s more than a toy on a phone. But THANKS! Definitely worth checking out!!!

—JC


Should have got a Note 3 then!