Support for Drawing Tablets?

The problem with trying to do such ‘playing’ indirectly by keyboard/tablet/whatever is that:

a) sample libraries model only discrete points on the continuous spectrum that real performances transition freely between.

b) the controller action repertoire is very generic and tends to isolate parameters that are interacting dynamically within an actual performance.


For example, on a SoundsOnline thread (EastWest Sounds), someone was trying to model a classical violin performance from a video. Their first attempt was good, but exhibited some of the stiffness of a lot of sample-based stuff.

When I looked at the video, I noticed that:

a) during the stronger sections, the notes were not only louder, but the performer took shorter and more abrupt bow strokes, probably reflecting the higher tension in their arms, so that the notes were slightly ahead of the orchestra.

b) during the quieter sections, the performer drew the bow longer, and seemingly more relaxed, so the notes were not only softer, but slightly behind the orchestra.

I pointed these out, and their second attempt, though it required a lot of tweaking, sounded much more natural.


To me, this says we are only going to get good sampler performances if we can:

a) set up parameters so that they interact the way a real performer’s physiology/temperament/emotion would couple them, given the actual instrument’s dimensions and inertia.

b) control the interaction with just a couple of abstracted meta-parameters, making it easier to perform in real time or with automation curves.


For example, to get a more realistic violin performance, an ‘intensity’ parameter, perhaps controlled by foot pedal or automation curve, could:

a) with increasing ‘level’, simultaneously:
   1) increase the level of notes.
   2) push the notes forward in time.
   3) blend-in/select the more staccato patches.
   4) increase the initial bow bounce.

b) with decreasing ‘level’, simultaneously:
   1) decrease the level of notes.
   2) pull the notes back in time.
   3) blend-in/select the more legato patches.
   4) soft-start the notes.
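To make the idea concrete, here is a minimal sketch of such an ‘intensity’ mapping. Everything here is an illustrative assumption — the function name, the velocity range, and the scaling constants are invented, not taken from any existing sampler:

```python
# Hypothetical sketch: one 'intensity' meta-parameter (0.0-1.0) driving
# several coupled articulation parameters at once, per the list above.
# All names and scaling constants are illustrative assumptions.

def intensity_map(intensity):
    """Map a single intensity value to coupled performance parameters."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    return {
        # 1) note level rises with intensity (MIDI velocity 40..127)
        "velocity": round(40 + intensity * 87),
        # 2) timing: high intensity pushes notes ahead of the beat,
        #    low intensity drags them behind (milliseconds)
        "timing_offset_ms": round((intensity - 0.5) * 40),  # -20..+20 ms
        # 3) crossfade toward staccato patches as intensity rises
        "staccato_blend": intensity,      # 0 = all legato, 1 = all staccato
        # 4) initial bow bounce grows with intensity; soft start otherwise
        "attack_bounce": intensity ** 2,  # gentle at low, sharp at high
    }
```

A foot pedal or automation curve writes the single intensity value, and the mapping keeps all four parameters moving together the way they do in a real performance.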

Now also imagine another meta-parameter for feel/genre that changes the bias amongst the patches, in much the same way that ‘volume’ selects between patches that match the timbre for different playing levels.
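A feel/genre bias like that could work much like a velocity crossfade, just along a different axis. A minimal sketch, with assumed patch names and an assumed (triangular) bias curve:

```python
# Hypothetical sketch: a 'feel' meta-parameter (0.0-1.0) that biases the
# blend among articulation patches, analogous to how velocity crossfades
# between dynamic layers. Patch names and bias curve are assumptions.

def patch_weights(feel, patches=("legato", "detache", "staccato")):
    """Return normalized blend weights biased by 'feel'.

    feel = 0.0 favours the first patch, feel = 1.0 the last."""
    n = len(patches)
    positions = [i / (n - 1) for i in range(n)]  # patches spread over [0, 1]
    # triangular weight: falls off with distance from the 'feel' position
    raw = [max(0.0, 1.0 - abs(p - feel) * (n - 1)) for p in positions]
    total = sum(raw)
    return {name: w / total for name, w in zip(patches, raw)}
```

At the extremes one patch dominates; in between, neighbouring patches blend smoothly, the same way adjacent dynamic layers do under a volume crossfade.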


For guitar samples, tempo would have to inversely vary the delay between the individual strings of a strum: the faster the tempo, the tighter the strum.
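That inverse relationship is easy to sketch: keep the strum a fixed fraction of a beat, so the inter-string delay shrinks automatically as tempo rises. The 1/16-beat strum span and six-string default are assumptions for illustration:

```python
# Hypothetical sketch: spread the strings of a strum across a fixed
# fraction of a beat, so the inter-string delay varies inversely with
# tempo. The 1/16-beat span is an assumed default, not a standard.

def strum_offsets_ms(tempo_bpm, strings=6, beat_fraction=1 / 16):
    """Per-string onset offsets (ms) for a downstroke strum."""
    beat_ms = 60000.0 / tempo_bpm      # one beat in milliseconds
    span_ms = beat_ms * beat_fraction  # total strum duration
    step = span_ms / (strings - 1)     # delay between adjacent strings
    return [round(i * step, 2) for i in range(strings)]
```

Doubling the tempo halves the spread, so the strum keeps the same rhythmic footprint instead of smearing across the beat.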


I see that while artistry shifts the upper boundary of what can be achieved, analysis and quantification of what makes great performances great helps raise the lower boundary for everyone else.

One only has to see how difficult it was even for a trained professional to touch up portraits in a photo editing program, compared with what an untrained person can now do in ten minutes with a few simple keystrokes in Portrait Professional. That is because someone distilled all the complexity into a few key parameters and built a program that makes it easy for ANYONE to do it.

Keep making suggestions, suntower. One day someone will take up the challenge!