Friday, December 23, 2011

Almost done with audio

TL;DR: Still no pretty pictures. Sorry.

Since last post:

  • Mixer. Per-source volume and pan. It can have multiple modules as sources, and it expands any mono source to fill all output channels (subject to the pan setting).
  • Stk Voicer support. Can essentially make polyphonic versions of any Instrmnt and trigger it from the script environment.
  • Started on Stk Filters, but apart from delay, most of them seem ill-suited to my purposes (see below).

Currently the mixer module only supports stereo linear cross-fade panning. I'll re-visit multi-channel placement later. Audio positioning for game objects will rely only on volume and pan for now.
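As a rough illustration of what that panning amounts to, here is a minimal sketch (the function name and the 0.0-to-1.0 pan convention are my own assumptions, not the engine's actual API): a mono sample is scaled by volume and split linearly between the two output channels.

```cpp
// Minimal sketch of stereo linear cross-fade panning for a mono source.
// Assumed convention: pan = 0.0 is hard left, 1.0 is hard right.
inline void panMonoToStereo(float in, float volume, float pan,
                            float& outLeft, float& outRight)
{
    outLeft  += in * volume * (1.0f - pan);  // mono source feeds the left channel...
    outRight += in * volume * pan;           // ...and the right, weighted by pan
}
```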

What's still left to do?

  • Ignore all Stk Generators for now... they are too low-level to merit script instantiation. They might make a comeback as LFO-type control inputs in the future...
  • Ignore most Stk Filters. Filters in Stk seem to be meant as helper classes for the internal workings of custom Instrmnt-derived classes. Many of them do not normalize, and without extreme tweaking they aren't useful in an intuitive real-time modular fashion. Compromise: a script-instantiated filter which can be set up as an HP, BP, LP or BS filter. Internally an IIR Stk filter can be used with a known, calculated set of normalized coefficients for Butterworth filters (see the sketch after this list). This will be sufficient for games.
  • Ignore score files for now... The effort that would have to go into creating a scoring environment is beyond the scope of a gaming project. It is a love of mine, hence the long interlude. I will come back to this, but not yet.
  • Ogg file loading/playback for music needs to happen (Easier to compose music in Renoise for now).
  • Stk Effects to support: Chorus, JCRev, Echo... the rest can come later.
  • Implement tying motion-states to audio sources.
Tick these off and I'll have enough audio tech to my liking. 
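For the Butterworth compromise mentioned in the filter item above, the coefficient math is well known. Here is a minimal sketch, assuming a second-order low-pass via the bilinear transform (the helper name and struct are mine; the resulting b/a values would be handed to an Stk IIR/biquad-style filter):

```cpp
// Sketch: normalized coefficients for a 2nd-order Butterworth low-pass.
// a0 is normalized to 1; b0..b2 are feed-forward, a1..a2 are feedback terms.
#include <cmath>

struct Biquad { double b0, b1, b2, a1, a2; };

Biquad butterworthLowpass(double cutoffHz, double sampleRateHz)
{
    const double Q    = 1.0 / std::sqrt(2.0);                    // Butterworth Q
    const double K    = std::tan(M_PI * cutoffHz / sampleRateHz);
    const double norm = 1.0 / (1.0 + K / Q + K * K);

    Biquad c;
    c.b0 = K * K * norm;
    c.b1 = 2.0 * c.b0;
    c.b2 = c.b0;
    c.a1 = 2.0 * (K * K - 1.0) * norm;
    c.a2 = (1.0 - K / Q + K * K) * norm;
    return c;
}
```

The HP, BP and BS variants at the same cutoff and Q only change the numerator (b) terms; the denominator stays the same, which keeps the script-facing filter simple.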

After that? Menu GUI and exposing the keyboard-map to the script environment, then we'll be close to ThingMaker version zero point something!

OK, so there is still a way to go, but an end is in sight.

Tuesday, December 20, 2011

No pretty pictures here, please move along...

TL;DR: I made a sound from within the scripting environment, and it was hard!

Disclaimer: programmers only.

It has been almost a month since the last post and no "visible" progress has been made...
Lots of code has been committed just to make a couple of extra lines of script work.

Basically, I had to write an interface class for an abstract "sound module"... something with an internal audio buffer and references to inputs and outputs of other sound modules. Script interfaces had to be written for connecting modules together, with logic to detect any feedback between modules. The only sub-classed objects so far are a master module, which abstracts the "default" audio output device, and a Stk Instrmnt wrapper class.
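To make that concrete, here is a hypothetical sketch of such a module interface (the class and member names are my own, not the project's real ones): an internal StkFrames buffer plus a list of upstream modules to pull audio from.

```cpp
// Hypothetical "sound module" interface: an internal audio buffer and
// references to the upstream modules it pulls its input from.
#include <vector>
#include "Stk.h"   // stk::StkFrames; include path depends on the Stk install

class SoundModule
{
public:
    SoundModule(unsigned int nFrames = 512, unsigned int nChannels = 2)
        : buffer_(nFrames, nChannels) {}
    virtual ~SoundModule() {}

    // Fill the internal buffer for the current block and return it,
    // pulling from sources_ as needed (see the recursion further down).
    virtual stk::StkFrames& process() = 0;

    // Wire an upstream module in; the feedback check happens in the
    // script-facing connect call (sketched later in this post).
    void addSource(SoundModule* source) { sources_.push_back(source); }
    const std::vector<SoundModule*>& sources() const { return sources_; }

protected:
    stk::StkFrames buffer_;               // this module's internal audio block
    std::vector<SoundModule*> sources_;   // modules we pull input from
};
```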

When I designed for using Stk, I planned around the Stk base interface classes as much as possible, and, being obsessed with the sound module idea, I settled on the most common audio-processing virtual function shared by all higher-level Stk interface classes... namely: StkFrames& tick(StkFrames& frames).

Since this most common tick function processes a block of audio frames, I thought it best to process the audio in as large a chunk as "real-time" permits... Stk defaults to 512 frames, which at 44.1 kHz equates to roughly 11.6 ms (good enough for games!).
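As a rough illustration of that block-based ticking (a sketch, not the engine's code; Plucked is just an arbitrary example Instrmnt, and exact header paths depend on the Stk install):

```cpp
// Fill one 512-frame block from an Stk Instrmnt via the StkFrames tick.
#include "Plucked.h"
#include "Stk.h"

int main()
{
    stk::Stk::setSampleRate(44100.0);

    stk::Plucked pluck(110.0);        // lowest playable frequency
    pluck.noteOn(220.0, 0.8);         // frequency, amplitude

    stk::StkFrames frames(512, 1);    // one processing chunk: 512 mono frames
    pluck.tick(frames);               // StkFrames& tick(StkFrames& frames, unsigned int channel = 0)

    // Block latency: 512 / 44100 ~= 11.6 ms per chunk.
    double blockMs = 1000.0 * frames.frames() / stk::Stk::sampleRate();
    (void)blockMs;
    return 0;
}
```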

I wasted time, felt stupid and did more work than necessary when I decided to design for feedback support after the "chunk" optimizations were already in place.

What am I talking about? Say you have an instrument which you plug into a mixer. That mixer goes through an EQ and into a splitter. The splitter then has one output to a chorus effect and another to a reverb effect. The chorus goes to the master output, but the reverb goes back into the mixer the instrument is plugged into. Why would you want to do this? Well, why not? It might sound cool... who knows, but the point is you might just NEED it somewhere.

The problem is now apparent: the mixer's second input depends on the output of some later stage, which in turn depends on the mixer's output. The best you can hope for in a digital system is to let the mixer do a single frame's tick, propagate that tick through all modules until you have updated the module the mixer input depends on, and then use that as the input to the mixer's next single-frame tick.

So, if like me, you went ahead optimizing for chunks, and then wanted to add a special feedback case which just does single ticks, then you're probably also half-wasting your time (I'm sure it's possible, just probably not the way I went about it).

What I kept from this misadventure is the ability to detect feedback when a connection is created, and to throw a script exception and disallow the connection if so. A lot can still be done with just feed-forward modular audio networks, so apart from the wasted time, I'm not too bummed.
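The check itself is just a walk of the connection graph. Here is a minimal sketch, reusing the hypothetical SoundModule names from the sketch above (the function name and the exception type are mine): before wiring source into destination, recurse through source's own inputs; if destination is already upstream of source, the new connection would close a loop.

```cpp
// Sketch: detect whether connecting 'source' into 'destination' would
// create a feedback loop, by walking source's upstream connections.
bool createsFeedback(const SoundModule* source, const SoundModule* destination)
{
    if (source == destination)
        return true;
    for (SoundModule* upstream : source->sources())
        if (createsFeedback(upstream, destination))
            return true;
    return false;
}

// In the script-facing connect call (ScriptError is a hypothetical exception type):
//   if (createsFeedback(src, dst))
//       throw ScriptError("feedback connections are not supported");
//   dst->addSource(src);
```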

So, I now have the master module, which is the entry point for connection to other modules, and the Stk Instrmnt wrapper (which currently supports all the sub-classed instruments Stk has to offer). Further sub-classed modules still need to be added to complete the whole sound experience, but the essentials are coming together nicely.

The basic flow of the audio engine looks like this:

  • audio callback requests buffer
  • audio engine processes the master module
  • master module processes its source modules recursively
...still not noticing any non-stupidity-induced breaking sound artefacts, so all is still well and real-time(ish).
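In code, that pull model might look roughly like this (again using the hypothetical SoundModule sketch from above; the mixing loop is deliberately naive and assumes every module uses the same frame and channel counts):

```cpp
// Sketch: a module that sums its sources, processing them recursively first.
// A real mixer would also apply per-source volume and pan here.
class SummingModule : public SoundModule
{
public:
    stk::StkFrames& process()
    {
        for (unsigned int i = 0; i < buffer_.size(); ++i)
            buffer_[i] = 0.0;

        for (SoundModule* source : sources_)
        {
            stk::StkFrames& input = source->process();   // recurse into sources first
            for (unsigned int i = 0; i < buffer_.size(); ++i)
                buffer_[i] += input[i];
        }
        return buffer_;
    }
};

// The audio callback would then simply copy master.process() into the
// buffer the device requested (the exact shape depends on the audio API).
```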

Hopefully the next update will have some video with "actual" sound. In the distant future, I hope to re-visit the modular feedback issue.