Audio Developer Diary – Episode 7 – What? Don’t you do that?

Today, a little detour through some of the “stuff” that needs to get done with every release. I’m going to talk (carefully) about clients who engage me to build products for them: the assumptions some clients make, the things they aren’t aware of, and the things that simply get forgotten.

So I build plug-ins for myself and for other people, and it’s pretty easy to get caught up in the thrill of the new piece of software that makes the noises you’ve always dreamed of. But if there’s only one thing potential customers should take away from this post it’s this:

A cool piece of audio software alone is NOT a product.

There’s a whole range of additional processes, tools, and services that need to be in place or need to get done. Often these can’t be done by your coder (or at the very least shouldn’t be). Let’s look at a few:

Codesigning and Notarization. These days (and especially on the macOS platform) there’s a bunch of hoops the operating system or DAW supplier will require the plug-in development team to jump through. All of this is to validate that the software being shipped is legitimate and secure, and not full of nasty malware and viruses. That’s quite a reasonable thing to require, but it means the owner of the plug-in has to validate the software in a bunch of different ways – and this often includes setting up developer accounts with these companies (like Avid for Pro Tools/AAX work and Apple for anything on the Mac). These accounts have passwords and are used to validate software, so you don’t want to be handing those passwords out to anyone. Perhaps you can get your coder (me) to do these steps, but in the end it’s a bad security idea and most coders (me) are very reluctant to do it for you.

Installers. End-users expect a one-click install process, so these need building for each of your target platforms (e.g. macOS, Windows, Linux). There are lots of software options to help with this process, but as your installer runs as a standalone piece of code it too needs to go through the codesigning and notarization above. So it gets convoluted (on the Mac) very quickly.

Support. This one’s a biggie. Not just because of how important it is – and it’s VERY important – but because of understanding who does what, and what they will need in order to do it. Let’s divide Support into the three classic IT industry types:

Level 1 Support: Someone who helps end-users with web-based access to your products and services. It might be resetting a forgotten password, dealing with problems navigating through your e-commerce platform, or helping them trace where their order has got to. Note how this has almost nothing to do with any single product.

Level 2 Support: Someone who helps with end-users’ difficulties with a given product – it might be getting the installer to run properly on their operating system, making sure they’ve put the plug-in in a folder their DAW will recognise, or getting the plug-in to do something they think it should do. Note that none of this is because “the plug-in is broken”, though end-users often report it that way. It’s that the end-user hasn’t read the manual (and really, these days, who does?), or hasn’t followed the instructions in their DAW, or assumes the software will do something it was never designed to do. It’s easy to identify “Level 2 issues” – once everyone understands what they should and shouldn’t be doing, the software works as advertised; there is no need for any change to the code itself.

Level 3 Support: Someone who makes changes to the code in a product because that code is broken. Software ships with bugs, they break out into the wild, and they need fixing – that’s this role. Note this is where the software fails in exactly the same way for everyone who runs it in exactly the same way, or for a suitably large percentage of the user-base.

The coder should ONLY do Level 3 Support. In an ideal world the coder never talks to the end-user – just to a person from Level 2 Support who has verified that the bug is real. Honestly, I’m a coder – and I know a lot of them – mostly you shouldn’t let us anywhere near the paying public…

So someone – different someones – is going to do Level 1, 2, and 3 Support, and in order to do it correctly each of them will need systems configured for all the possible end-user set-ups. As a *minimum* they will need computers running each of the operating systems you intend to provide software for (e.g. macOS, Windows, Linux), and often several versions of those operating systems too. There’s more: Level 2 and 3 Support will need a very wide range of DAW software as well (e.g. Pro Tools, Live, Reaper, Cubase, Studio One etc. – the list can get very long).

Clients often expect or assume that I will be doing all the things I’ve mentioned here (or a very large part of them), and I’m not. I *could* do all of it, but it would turn prohibitively expensive for them very quickly. So at some point in every potential-client conversation I say that I won’t be doing these things and why (basically security and cost), but much of it seems so far away and so abstract that many potential clients either ignore it or assume it’s fixable later – which it is, *IF* you remember that you are doing it, not me. Which is a project management issue, but don’t get me started on that topic. Otherwise there’s a sentence that gets spoken in every client-developer relationship at some point, and it goes like this:

“What? Don’t you do that?”……

Audio Developer Diary – Episode 6 – A Balancing Act

So everything is a compromise: time, effort, capabilities, features, ease of use – it just goes on and on. Today I want to look at just one small compromise – audio output capability, by which I mean the noises you *can* make with an instrument. At one end of this spectrum I can limit the kinds of sounds an instrument makes to a set of “acceptable” or “nice” sounds. The problem at this end is that you tend to end up with a samey, heard-it-all-before sort of instrument. At the other end it’s a jungle, mostly populated with strange, unlikeable noise that users have to wade through to find any sort of (by then very well hidden) gem. So: somewhere in the middle, then.

Instead of coming out with some sort of wise-sage rule-of-thumb, it might be worth walking some way through the development process to see how I got to where I am right now. I think this is a good idea because, frankly, I don’t have a wise-sage rule-of-thumb to offer up… so it’s this or nothing…

Quattro is, as I’ve already mentioned, a 4-voice virtual instrument (where a voice is some sort of independent audio source), so let’s look at one of those audio sources:

Each of the audio sources (voices from here on in) actually uses more than one type of audio generation. Each voice has a Sampler – which can be loaded with any multi-sampled, round-robined set of audio wave files to give you a close approximation of a real-world instrument, or a designed sound that I or some mad audio-design genius has created. On top of that there’s a “classic” subtractive synth. So to be entirely honest there are 8 sound sources, not 4 – but who’s counting?

Let’s start with the Sampler section:

Well, to start with it was just that, “a sampler” – which in and of itself is no small piece of coding: disk-streaming, playback, pitch shifting/tuning, voice loading and mapping, etc. All the usual stuff we expect of a sampler engine. Not that I did any of that coding… (hello Christoph), I just leveraged it.

It took no time at all before I decided to add reverse and offset. Reverse does what it says and plays the sample in reverse; offset starts playback a little way (user-selected) into each wave file – and adds a surprising amount of timbral difference to the outcome. The sampler stayed like this for a while. However, if you’ve seen any of the CR Kontakt instruments (or the Audio Reward ones) you will know I wrote a granular “engine” to apply to loaded samples like these, so it was only a matter of time before it got ported to this environment and added here. You get a very different bang-for-your-buck with this engine; there are a lot of different and usable sounds that can be massaged out of a pretty simple set of samples. The bad news? It’s not cheap on CPU, and I was adding 4 of these… It took a long time to trim down the original engine to be more CPU-friendly and still retain 90% of the sound-sculpting possibilities. A nice side effect was that this reduced the number of UI widgets, making it easier to use.
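As a rough illustration (and nothing like the actual engine code, which lives deep inside the sampler I leveraged), reverse and offset can be thought of as simple transformations of the sample buffer. The function name, the fraction-based `offset` parameter, and the order of operations below are all my own hypothetical choices:

```python
def render_sample(samples, reverse=False, offset=0.0):
    """Sketch of reverse/offset playback on a sample buffer.

    samples: list of audio frames (floats)
    reverse: play the buffer back-to-front
    offset:  fraction (0..1) of the file to skip at the start,
             applied before any reversal
    """
    start = int(len(samples) * offset)   # user-selected start point
    out = samples[start:]                # drop the skipped portion
    if reverse:
        out = out[::-1]                  # play what's left backwards
    return out

# A tiny 8-frame "wave file"
buf = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
print(render_sample(buf, offset=0.5))        # [0.4, 0.5, 0.6, 0.7]
print(render_sample(buf, reverse=True)[:3])  # [0.7, 0.6, 0.5]
```

Even a sketch like this makes the timbral point obvious: skipping the attack portion of a sample changes its character far more than you’d expect from such a small control.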

I’m sure something else will occur to me to add – but it hasn’t shown up in the last 3 months, so maybe it never will (yes, yes, I can hear everyone saying wavetable… it’s noted).

On to the subtractive synth:

So this is the standard HISE wave-generating set-up – two oscillators, not one, that you can cross-mix. So where I said “4 sound sources”, it was really 8… well, actually, it’s 12.

There’s nothing in here right now that seems out of the ordinary – all the normal oscillator shapes, octave, detune, and pulse width for those square waves. Except…..

I’ve been playing around with noise generation, using highly tuned filters to get to a given note: instead of playing a note, the oscillators just generate white noise, and I use (essentially) a band-pass filter that moves to the played note’s frequency, only allowing that part of the noise spectrum through. It’s (a bit) more complicated than that, and it forces the synth into playing mono, but it does give you some interesting sounds. So if anything is going in here next, that is likely to be it.
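A minimal sketch of the idea: white noise pushed through a band-pass filter centred on the played note. I’m using a standard RBJ-cookbook biquad here purely for illustration – the actual filter in the synth is different (and, as I said, more complicated) – and all the names below are mine:

```python
import math
import random

def midi_to_hz(note):
    """Standard equal-temperament MIDI note number to frequency."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def bandpass(signal, f0, fs, q=20.0):
    """RBJ-cookbook band-pass biquad (constant 0 dB peak gain),
    centred on f0 Hz at sample rate fs, narrowness set by q."""
    w = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w) / (2.0 * q)
    a0 = 1.0 + alpha
    b0, b2 = alpha / a0, -alpha / a0          # b1 is zero for this type
    a1, a2 = -2.0 * math.cos(w) / a0, (1.0 - alpha) / a0
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b0 * x + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

fs = 8000
noise = [random.uniform(-1.0, 1.0) for _ in range(fs)]
# "Play" A4: the broadband noise comes out pitched around 440 Hz
pitched = bandpass(noise, midi_to_hz(69), fs)
```

With a high Q the output rings at the centre frequency, which is exactly the pitched-noise effect described above; sweeping `f0` with the played note is what makes it feel like an oscillator.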

So when I look back, I haven’t really done that much to enhance the 1 Sampler + 1 Subtractive Synth = 1 Voice layout – though even that’s pretty deep when you have 4 “Voices”. But even here I’m balancing CPU vs capability, capability vs audio originality, and all of that vs usability. It’s an ongoing struggle, and I’m tired and need to lie down.

Audio Developer Diary – Episode 5 – Soul of a New Machine

So this week will have to be quick – ’cause I’m mad-busy.

I wanted to talk about all the stuff round the outside of what we think of as “the product” that actually, in the end, contributes heavily to its definition, or its (cough) “soul”.

We think of the sounds you can make with a VST or AU (effect or instrument) as the defining thing about it – and sure, that’s what we buy – but there’s a bunch of other stuff, some of which I’ve alluded to here, that makes it a worthwhile experience. I’m sure you can think of some; support, for instance, readily comes to mind. If a company puts out a product but won’t address bugs or help users with complex or difficult install or usability issues, then the value of said product falls – sometimes drastically.

But leaving aside the obvious for a minute, there’s a whole bunch of other stuff: the aforementioned installer, for instance. Products often need easy-to-use installers for each of the platforms they run on – usually at least Mac and Windows – so that’s two more programs to write. And speaking of macOS: these days there’s a whole set of hoops Apple require developers to jump through to get a product that will even open or run on your shiny new Mac. We have to code-sign all the software and notarize anything that runs standalone (so that means the installer too). I don’t want to get too far into the technical how-tos here, but it means joining (and paying for) the Apple Developer Program and communicating in a very, very specific way with the Apple servers – and Apple have been known to change the rules with very little notice. It’s a pain, but it makes the day-1 user experience much better, so we do it.

There’s a bunch of stuff inside the program itself that is required too. These days many users want a LOT of presets – so these get to be really important: that they sound good, and that the user can find what they are looking for easily, and can add and remove presets in easy and intuitive ways.

Which brings me to the preset browser in Quattro. As I may have mentioned, I wanted Quattro to be an extensible platform for content (sounds) – so I envisage adding tens or even hundreds of “extension packs” that the user can buy and install (there’s that installer again) at any time they wish.

So that means there may well be hundreds, if not many thousands, of presets available to the user. How to manage finding and organising them all? A tag-based system seemed easily the best option – so that’s what I’ve built – but not just any old tag-based system…

First off, I hate tag-based systems where I choose a category (over on the left in the image above) and it shows me a set of tags, not all of which result in finding at least one preset. If I select a tag called “Bonkers” then I expect to see at least one preset tagged that way. Sure, if I select “Bonkers” and “Sleepy” then there may be no results – that’s fine – but why show me a tag if the program already knows there are no presets tagged with it? It’s these little things that bug me – so the browser above doesn’t mess you around like that.

Also, why have a tag-based system if the tags are fixed, and I can’t add to them (with my own unique “Hull-based humour”) or remove them if I think they are just plain wrong? Again, the preset browser above lets you add tags to existing presets, and remove them. And of course it lets you add new presets, overwrite (save) presets, and delete presets.

And what if you get some massively long list of presets, but you sort of remember the name you were looking for included “Bonk” somewhere? Again: a search filter for the returned results. Finally, there’s the ability to just page through all the search results trying them out (those little fwd/back arrows next to the name in blue).
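The two behaviours above – never showing a “dead” tag, and substring-searching the results – are easy to sketch in a few lines. This is just my illustrative model of the idea (presets as a name-to-tag-set mapping, with function names of my own invention), not Quattro’s actual code:

```python
def visible_tags(presets, selected):
    """Return only tags that, combined with the current selection,
    would still match at least one preset (no dead tags shown)."""
    live = [tags for tags in presets.values() if selected <= tags]
    return set().union(*live) - selected if live else set()

def search(presets, selected, text=""):
    """Preset names matching every selected tag AND a substring search."""
    return sorted(n for n, tags in presets.items()
                  if selected <= tags and text.lower() in n.lower())

presets = {
    "Bonkers Lead": {"Bonkers", "Lead"},
    "Sleepy Pad":   {"Sleepy", "Pad"},
    "Bonk Bass":    {"Bonkers", "Bass"},
}
print(visible_tags(presets, {"Bonkers"}))  # never offers "Sleepy" here
print(search(presets, set(), "bonk"))      # ['Bonk Bass', 'Bonkers Lead']
```

The point of `visible_tags` is exactly the “Bonkers” complaint: once a tag is selected, the browser only offers tags that can still narrow the results to a non-empty set.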

All of this with variable numbers of presets in variable numbers of expansion packs – including no expansion packs at all…

It’s taken months of effort to get it where it is, and all so the machine has a really nice “soul”….

Audio Developer Diary – Episode 4 – Getting clean…

Today I’m going to talk about “look” – well, mainly about “look”. This often has a massive impact on people buying, or not buying, your plug-in.

As an audio developer I spend a huge amount of clock-cycles (machine and brain) on getting the instrument to sound good, where “good” is often more than a bit subjective, but this tends to skew my thinking about “look” – and here I’m talking about aesthetics, NOT (cough) “practical issues”. There’s a whole other discussion about layout, widget choices (combo-box or radio buttons?), what should and shouldn’t be on a given page – but no: here I’m talking about what it actually looks like.

At the highest level this “look” breaks down into skeuomorphic or vectorised. Skeuomorphic is a fancy term for “mimic the real world”: make the knobs look like knobs that exist in the real world – in fact, make the whole interface look like a real object. I guess we’ve all seen EQ plug-ins that slavishly try to look exactly like a photo of a Pultec EQ, right down to the scratches and rust, or the fake wooden ends on soft synths. The arguments for this go like this:

  1. It’s a real-world thing that’s already proven to be a good design – why change it?
  2. People will recognise each widget (object) for what it is and will expect it to work the way it does – knobs turn, switches flick, etc. This is sometimes called “cultural association” or “cultural mapping”.

The counter-arguments go like this:

  1. Are you sure it’s a proven best design? There are many, many examples in the real world of very poor interface design. I recommend every user-interface designer read “The Design of Everyday Things” by Don Norman; it’s full of bad real-world design. My favourite: any door that needs a user manual is a failure. User manual? Yeah – a sign on the door that says “PUSH” or “PULL” is a user manual, and you really shouldn’t need it.
  2. Things in the virtual environment don’t work quite the same way as in the real world – you can’t actually grab a knob; you are clicking and dragging a mouse. It’s not the same, and it introduces what’s sometimes called “cultural dissonance” – it’s subtle, but it’s there.
  3. Limiting yourself to real-world-styled interactions reduces the flexibility and power you can invest in your interface. Look at the image at the top – right in the middle is an X/Y pad. Sure, some synths have joysticks – and I’ve even seen those emulated in software – but that X/Y pad up there is showing two cross-hairs (the blue one is user-movable; the grey one shows the “movement” added by the randomisation process), so that sort of thing is never really going to work well with a skeuomorphic design.

The arguments for and against something more abstracted to the computer interface (what I called vectorised above) are pretty much just those arguments swapped over.

So, clearly, I went the vectorised route. Nearly everything in the UI is vectorised (drawn by a set of commands, not as little pictures of things). A few things are not. The knobs, for instance: there are some nice features that the dev library I use gives me instant access to if I use “picture”-based knobs:

You can shift-click on the widget and it opens a value editor

As you drag over the knob a little pop-up shows you the current value

You can do all this AND have a vectorised knob – using a trick from the ever-great David Healey – but it’s a bit of additional work and I haven’t got round to it. Not sure I ever will.

So, having decided on this “vectorised” or “clean” approach, I’ve spent a huge amount of time on stuff like colours and layout. Here, as an example, is what the Main UI looked like earlier:

Yeah, mostly the same – but with some little changes to make things cleaner. Compare the two:

  1. The preset area has had its “loud” colour scheme tempered.
  2. A bunch of controls in the X/Y pad area have been changed or hidden – and whole new features added (“Humanise” and “Freq”).

Point 2 here is another example of one of the decisions developers make: hide complexity (ease of use) vs instant accessibility (power). In the old UI, can you see those two grey-bar-thingys below the X/Y pad and to the right? Should the user select velocity-based modulation of the X/Y position, these controls allowed them to set the X and Y range for that modulation. Now they are hidden: if you click the word “Velocity” in the top left of the new UI, it pops up a window that lets you set how much effect you want it to have:

Same for Humanise – you set the amount in a pop-up that shows up if you click on the word “Humanise”:

So I’ve traded away visual complexity (a good thing) and lost some functional immediacy (a bad thing). Why this decision? Well, it’s mostly about how often you would be changing these values – and I decided it wasn’t often enough to justify them cluttering the screen every day. Plus, the old UI dealt only with velocity-based gain modulation; the new UI deals with that as well as velocity-based filter frequency modulation, and the humanisation controls too – so it would be pretty crowded in there.
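Under the hood, those hidden range controls boil down to a simple mapping from incoming velocity into the user-set min/max. A hedged sketch – the function and parameter names are mine, and I’m assuming a plain linear mapping, where the real thing may well use a curve:

```python
def velocity_mod(velocity, lo, hi):
    """Map MIDI velocity (0..127) into a user-set modulation range.

    lo/hi play the role of the hidden "grey bar" settings: the
    modulation amounts applied at velocity 0 and 127 respectively.
    """
    v = max(0, min(127, velocity))       # clamp out-of-range input
    return lo + (hi - lo) * (v / 127.0)

# e.g. gain modulation constrained to the 0.25..0.75 range
print(velocity_mod(0,   0.25, 0.75))   # 0.25
print(velocity_mod(127, 0.25, 0.75))   # 0.75
```

The same two-number mapping serves both the gain and the filter-frequency modulation; only the destination parameter differs, which is part of why the pop-up approach scales better than keeping every range slider on the main screen.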

So every little widget, its placement, and how it looks gets a level of consideration you might not imagine when you’re just looking at the UI from a user’s point of view. And that’s a good thing – really, you don’t want to be spending ANY time worrying about that stuff; in the end, it’s what you pay the developer to DO.

Audio Developer Diary – Episode 3 – What’s the name of the game?

OK, so here it is: the post about naming things. Why does this matter? Well… an example might help:

I released a product a while back called Atmosia 2. It was a BIG upgrade from Atmosia – moving from Kontakt-only to VST/AU plug-in, adding in every sound from the 3 products in the “…ia” series and a whole slew of functionality, and (crucially for this story) moving from two sounds at the same time to 4 sounds at the same time. These “sounds” were called “Voices”, so the release materials said “…is a 4-Voice ROMpler system…”. That’s just 5 words right there – but it confused a few people. First, a small number thought it meant the instrument was 4-note polyphonic; it wasn’t, it was 256-note polyphonic. And at least one person decided that “ROMpler” meant the developer had stolen all the sounds in it from other sources and was passing them off as their own. He went on a fairly popular Facebook group and said so – several times – until I and several other people corrected him.

That last bit was just plain ignorance – which is fine: I’m ignorant about a lot of things – but he did broadcast this view to several thousand of my potential customers. In any case, I can’t do anything about that. But the other confusion, about it being only 4-note polyphonic? How many people decided this was the case and (as I would) decided it wasn’t powerful enough for real work, and passed?

The problem stems from those hardware manufacturers I mentioned before (I named a few, but they are all pretty much as bad as each other). They can agree on a lovely and far-from-simple standard like MIDI 1.0, but they can’t decide what to call the actual noises (or the components thereof) coming out of their boxes? So we get “voices”, “sounds”, “layers”, and a swag more too. But I’m getting hung up on this example – it’s really here just to demonstrate that anything I name can be (mis)construed as something else, so careful naming is required.

Quattro makes the “Voices” problem much worse, because of the depth of each (cough) “voice”. So to do a bit of an end-around (and shorten this post by several hundred words) I decided to call these noise-making groupings Layers. There are 4 layers in Quattro (hence the name) – but each of these layers has a sampler and a synth in it. The sampler plays back multi-sampled sounds, and the synth (at the moment) plays back classic subtractive oscillator noises. When (as I’m sure I will) I get to using wavetables, the line between even these two well-understood concepts starts to technically blur.

Now, when you look at Quattro’s “Main” interface it doesn’t even mention layers or voices – so why does it matter? Well, as I said: marketing materials and the manual, that’s why.

But it gets worse – “Main interface”? What’s that? Well, it’s the page of stuff above, but is “Main” the correct title for it? This title appears in the buttons along the top (so users can select different pages – or tabs – see? even this is full of alternatives). The next button is called “Editor”, and you get this sort of thing there:

You can edit and change the details of each layer – so, “Editor”? Or “Details”? (It was actually called Details for a while.) Next along is “Effects” – phew, an easy one. Then (in the first of the screenshots here) is “Arps”, and in the second is “Player” – yep, it was just a pretty nice arpeggiator, but I’m currently planning to give it a per-note sequencer (that I have no idea how to name), a step-sequencer (xoxoxo style), and a chord-player, so “Arps” wasn’t going to work.

Actually, you can look at almost any piece of text in the screenshot above and there’s a 50/50 chance it’s open to interpretation. Look at the SAMPLER section:

  • “Voice” – oh no, not that again? It means the noise that gets loaded – often, but not always, a sampled instrument.
  • “Reverse” – this one, I’m hoping, is pretty obvious: it plays the voice backwards.
  • “Off Set” – a thing or an action? What does it even mean? It actually means: start playback some way into the sample, past the beginning…
  • DON’T get me started on those granular names…

Look at the preset browser on the “Main” page – even the headings. “Expansion” or “Categories”? “Filters” (which can get confused with audio filters) or “Tags”? The latter is what they actually are. And “Search” – which isn’t even a heading; it’s a text entry area that narrows the results that appear below it.

So the point here is that every time a developer writes some text – any text – in an interface, they should (and do) think: “Is this the right word? Does this convey enough meaning?” Because if you get the naming right, the user gets to have fun and NOT have to learn a whole new naming convention for each product they try to use, and in the end that’s what we all want.

So it may seem trivial, silly even, to spend hours (and I do – often at night in bed) agonising over the name of some bit of the interface – but get this wrong and it can really ruin the user experience. There’s been nothing unique or different about this process this time – I’m worrying at the problem of naming things, including the product name, even as I type…

Audio Developer Diary – Episode 2 – Design vs Evolution

Very few customers show up at my virtual door with any sort of instrument design. UI design, yes; an overall idea of how they want it to sound, also yes. But audio routing? Internal instrument structure? Nope, not so often. That’s quite understandable if you think about it – because all that design work is hidden beneath the UI design, so it seems to happen by magic. Except, of course, it doesn’t.

The internal design of the instrument is really, really important. It can make some features easy and deny others, or make them incredibly hard to do. In truth (with a nod to our title here) a good design will allow lots of great evolution to happen: you get a good instrument to start with, and if a cool idea occurs to you while you’re working, it can be a lot easier to build it in.

So I started on design. This was going to be a sample-based instrument, so how many “voices” should be included? (A quick aside: see how “voices” is in quotes? That’s because what you name a thing is a problem, and the hardware audio industry hasn’t helped any – I’m looking at you Roland, Yamaha, Korg… actually, I could go on all day with this list of manufacturers. Suffice to say I will do another post about the names of things – because it’s pretty important, and there is no definitive right answer… sadly.)

Back to our first question. Well, I’d built 2-voice, 3-voice and 4-voice instruments (though to be fair only synths ever seem to have 3 voices – 3 oscillators, usually). But why not 5-voice, 8-voice, hell, 12-voice? Well, apart from the added load on the end-user’s system, it was in my experience a land of diminishing returns: 2 voices was nice, 4 voices was really kinda good, and anything past that never seemed to add enough “umph” (<- technical term) to be worth the effort. So 4 “voices” was the sweet spot.

I already had a 4-“voice” product – Atmosia 2 – so, remembering my list of “learnings”, I took that and loaded it up with a bunch of semi-random sound sources to see what it gave me. It gave me 3 things:

  1. 4 “voices” was a good choice – I could make really interesting and good things with it.
  2. The multi-menu system of Atmosia 2 wasn’t ideal as an approach to sound-design. I would have to change that around somehow.
  3. The most effective controls for getting nice new sounds were (no surprises) the individual voice volume controls – and tweaking each one in turn was a pain. I would need a better way to manage and control the voice volumes.

So, for number 1 – great, no surprises there. For number 2? Well, I’d already decided on a different layout that might help, so hold off on judgement and try the new approach to see if things get easier to use. For number 3? Sigh. I already *knew* what the solution was: an X/Y pad. I just didn’t want to have to build it. It meant messing about with vector drawing, log maths and discrete event handling. There was that internal voice saying “please, please don’t make me do this… it’s a PITA, come on, it’ll all be fine…”. Of course that voice lost, but it did bargain me down to “let’s do that later…”.
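The “log maths” part of an X/Y pad is mostly about mapping pad position to parameter values: frequency-ish parameters feel wrong on a linear scale, so one axis usually maps exponentially. A sketch of the idea, with ranges and function names that are my own assumptions rather than anything from the actual pad:

```python
import math

def pad_to_freq(x, lo=20.0, hi=20000.0):
    """Map a normalised pad position (0..1) onto Hz, log-scaled so
    each octave occupies roughly equal pad distance."""
    return lo * (hi / lo) ** x

def freq_to_pad(f, lo=20.0, hi=20000.0):
    """Inverse mapping, needed when placing the cross-hair back on
    the pad from a stored parameter value."""
    return math.log(f / lo) / math.log(hi / lo)

print(round(pad_to_freq(0.0)))   # 20
print(round(pad_to_freq(1.0)))   # 20000
print(round(pad_to_freq(0.5)))   # 632  (geometric midpoint, not 10010)
```

That last line is the whole reason for the log mapping: halfway across the pad should feel like the perceptual middle of the range, not a frequency way up in the top octave.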

So: 4 “voices”, each of which would need a set of editing controls. What should they be? Well, the “standard” stuff: a volume envelope, a multi-mode filter, a set of modulators (LFOs for volume, pitch, and filter frequency), and some per-voice effects (I chose delay and wave-shaping/saturation – I can’t remember why now, I just did… call it whimsy). I had the “field effect” – a convolution processor aimed at changing timbre, not adding reverb – so I added that in too. Then a set of “controls”: pan, stereo width, pitch, and pitch-wheel amount.

Next, a set of global (instrument-level) features. First, some effects: Chorus, Phaser, Reverb, Delay, Compressor and Limiter. Some instrument-level controls too: Mono/Stereo, Glide, my Infinite Round Robin Engine (IRRE), and my Drift processor (which emulates older-synth-style tuning drift).

That seemed enough for a start… so off I went to (re)build all of it.

Audio Developer Diary – Episode 1 – Patience Grasshopper…

(OK, even before we begin, an admission: just like the last time I did this, I am already well on the way with development of the plug-in I will be talking about, so (again) I’m having to backtrack a bit to cover the start-up phase of this development. But anyway: QUATTRO…)

Over the last couple of years I’ve built a whole swag of plug-ins for myself and for other people – about 25 of them – and whilst there were unique ones, most were “ROMpler-style” products, and unsurprisingly there was a huge chunk of functionality that was very similar in each: voice selection, filters, effects, envelopes, etc. So for myself I wanted to build a one-product-to-rule-them-all type of thing, a “player” that I could enhance and improve over time – never having to go back to the bare metal.

But that meant being able to install a plug-in and then add to it with expansions, only updating the plug-in itself when I had new functionality – and allowing users to mix and match which sound sets they wanted to use. Yeah, like Nexus – but with deep(er) editing… This expansion capability was “coming soon” to HISE (the product I mostly use to build things), but at the start of all this it wasn’t here yet. So, just like users, the developer community sometimes has to have a fair degree of patience waiting for a feature to be added to the underlying building blocks of their product set. I waited, and tried not to get “naggy” about it.

The nice thing about waiting for a thing is that you get some thinking space. So that’s what I did: a bit of thinking.

So, to stretch this kung-fu TV series analogy even further… if you’ve seen the show, there are bits where “grasshopper” has to learn how to walk on rice paper without leaving a mark – and that requires considerable restraint and balance. I realised that was something I was going to need too…

Building a massively powerful engine with tons and tons of capability is all well and good, but I’d come to learn that what most users really, really want is a simple and intuitive interface that’s quick and easy to use. These two things don’t fit together easily. There are two classic solutions to this dilemma:

  1. Hide it all forever – give the user a set of simple interface controls and make these drive a whole range of features under the covers, where the user never sees them.
  2. Hide them – until they’re asked for. Put the controls for all this cool new functionality in a separate part of the interface and only go there when the user asks for it.

Option 1 is great – but it gets hairy the more features you add, and it can get counter-intuitive. Having one dial that controls (say) all the time-related stuff perhaps makes some sense for modulator speeds, but delay times too? Yeah, it gets messy quickly. Added to that, I wasn’t sure what all the functionality would eventually be – and adding stuff that broke the “paradigm” would be deeply frustrating to the end-user, not to mention all the undefined and yet-to-be-thought-up content that might end up in here.

Option 2 let me set up an overarching simple UI that would get a lot of good stuff done quickly, and then offer users an “Editor” page where they can dig deep and get to all the really cool stuff.

So I went with option 2 – good job, too – ’cause there was a truckload of functionality coming down the pipe…

An Audio Developer Diary – Take 2

Hello and welcome back to the first blog post in well over a year – I will try not to let it all slip away like that again.

So I accidentally ended up going back through the blog posts on this site – and re-read all the ones about being a Kontakt developer and the trials and tribulations of developing a product called Orbiter. It was informative and (surprisingly) quite enjoyable, though more than a bit geeky. So I thought it might be fun to do it all again, only this time from a slightly different perspective. A lot has changed in those two years, for all of us.

The other thing I will try hard to do, and will probably fail at from time to time (so please forgive me), is not get so detailed about the ins and outs of the product. I built Orbiter, and even I got lost reading back through all 24 posts about what a feature did or didn’t do – god help anyone else. The things I found most useful were the more overall “learnings”/observations – some of which I even managed to keep in mind and execute (eventually).

So what were these “learnings” and how did I do?

  • I should know my limits – yeah, I’m a bit better at this, but not much. I still try to make stuff that may be beyond my current knowledge/ability.
  • I should move to another development platform – Done! Tick! yep! (see below)
  • Try not to add a million things (less might well be more) – hmm, I think I’ve cut it down from a million to maybe a hundred thousand things – so there’s been some improvement here.
  • Finish one feature, and test it, before starting another – yes, but truth be told I think that’s easier to achieve in the development environment I now use – so I’ve been sort of forced into it.
  • Once a feature is IN, then get it done – yes; again, the environment is leading me down this path.
  • An idea for a feature should stay just an idea for a week before I commit to it, because it will change as I think about it – oh well, “sometimes”; other times the idea is just too cool not to implement the first chance I get.
  • Nothing gives better insight than playing the instrument – duh. Though I’m known to forget this.
  • Refactoring is your GOOD friend – but no developer REALLY loves doing it. Still, it gets done (sometimes).
  • Reusable components will pay off in the end – yep, again the environment I use has forced a lot of this on me.
  • Nobody cares, stop whining – really? what nobody? OK no whining….

So what’s different?

Well, from a development perspective, I don’t build in Kontakt anymore. I use a development library/environment called HISE, along with a little C++, to ship plug-ins (VST, AU, AAX) and standalone apps (macOS, iOS, Windows).

Next: how I make my living. I used to build Kontakt instruments for myself (and my partners) as a side activity to my day job as a real-time telecommunications person. These days I spend all my working hours building audio products, for my customers and for myself. Yep, I’m a professional audio developer – with 15+ years of experience – man, where did the time go?

Next time I will talk about one of the products I’ve been working on – called QUATTRO – there’s an image of its main screen at the top there… so watch out for the next post…