
(Note: This post is going up quite late in the month, but I actually wrote most of it at the start of December. Capturing images and movies to accompany these posts has become a significant bottleneck, not because it's so very much work but because I'm either working on the game and want to focus on making progress, or not working on the game, in which case I don't want to open the project when I'm meant to be taking a break. I probably just need to get in the habit of taking more screenshots and recordings as I go.)
This month was quite a divergence. I mentioned in the last DevBlog that I'd started working on replacing the sound effect generation system used in the project; I linked to a little cohost post going into some detail on the challenges as I perceived them at the time, but I didn't go into much detail in the post itself. However, since the last month has been spent almost entirely on this sub-project, I'm going to go into detail here. Before I do, though, here's a little video of the whole thing in action:
First, some background: Back in 2007, a little sound-generation tool called sfxr was released. It was shared with the indie game-dev community as a tool for quickly and easily generating sounds, particularly for game jam projects, where impressive quality was a lower priority than speed and convenience. It quickly became a standard tool; it's very likely that something you've played uses sounds generated by sfxr, particularly if it's a retro-style indie game. In the 15 intervening years, various ports and variations have been developed, two of which are of particular interest here: first, bfxr, a version which adds several new parameters and can save sound parameters to files for later modification; and second, the lesser-known ufxr, a Unity port of sfxr with most of bfxr's features.

This latter was what I found when, a year and a half ago, I started this project. I was curious whether someone had made a version of sfxr that ran natively in Unity and simulated its sounds in real-time, and wondered what-all one might accomplish with such a tool. ufxr did not, in the end, turn out to be such a tool: while it did have ways to play sounds directly, it achieved this by creating an object, completely rendering the sound, loading it into the object, and then deleting the object once playback was complete. This works fine if you're just playing occasional sounds with little concern for latency, but it caused significant performance issues when new sounds needed to be played every frame – and, of course, I wanted every sound played to be a bit different, because otherwise what's the point of simulating them rather than just rendering out a sound and playing it normally?
As a stop-gap, I built a system that cached a set number of mutations for each sound and played them back randomly. This still wasn't great! The game lagged significantly on start-up as it rushed to render and cache every sound, and sounds only started playing once the caching operation had finished, so the game was mute for a while after starting. Furthermore, because mutations were just a random percentage deviation applied to each parameter, it was very common for sounds to end up a bit off – way too loud or too long as some parameter drifted outside of the expected zone and completely changed the feel of the sound. This outcome was invisible beforehand and difficult to address once it came up.
This obviously wasn’t going to work. I needed to do something: I could strip out the whole apparatus, render the sounds to wave files, and play them back. This would be the easy and smart thing to do, being pretty much how most games handle sound playback. It was, of course, unacceptable to me, because I am a maniac. No, rather than this completely reasonable solution, I decided to create my own sound effect synthesizer in Unity – not just a port of sfxr, though I was certainly inspired by it and shamelessly stole all of the ideas I could get my hands on, but something built from the ground up in the way that made the most sense to me.
This sub-project, which I now call Soundmakr, is basically done. Developing this system took the whole month, which is honestly fine – while I was hoping it would take less, I was also concerned it might take more, and I am quite pleased with the result. I'm now going to go into some detail on what went into it, and the steps it took to bring it to fruition.
First things first, I had to generate a sound. A sound, from a programmer's perspective, is just a big array of numbers between -1 and 1 representing the waveform. Normally one would play such a sound by loading it into a Unity AudioSource and letting it play, but that means you have to render the whole thing at once, which just leads back to the unacceptably slow methods I was trying to avoid. Fortunately, Unity provides another way: a function called "OnAudioFilterRead" which is automatically called on any script that shares an object with an audio source. Though, as the name suggests, this is meant for processing audio, it also makes it possible to write your own data into the audio buffer – which is exactly what I did. I copied all of the basic waveforms used by ufxr, most of which were pretty straightforward, and was able to play them using this function. A good start!
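To illustrate the technique, here's a minimal sketch (not Soundmakr's actual code): a script like this, attached to the same GameObject as an AudioSource, writes a continuous sine wave into the audio buffer.

```csharp
using UnityEngine;

// Attach to a GameObject that also has an AudioSource.
// OnAudioFilterRead runs on the audio thread, so avoid
// allocations and Unity API calls inside it.
public class SineGenerator : MonoBehaviour
{
    public float frequency = 440f;
    private float phase;
    private float sampleRate;

    void Awake()
    {
        // Cache this on the main thread; don't query it per-sample.
        sampleRate = AudioSettings.outputSampleRate;
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        // The buffer is interleaved: one sample per channel per frame.
        for (int i = 0; i < data.Length; i += channels)
        {
            float sample = Mathf.Sin(2f * Mathf.PI * phase);
            for (int c = 0; c < channels; c++)
                data[i + c] = sample;

            phase += frequency / sampleRate;
            if (phase > 1f) phase -= 1f;
        }
    }
}
```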
In addition, I created equivalents to the other basic waveform parameters. The duty cycle value, previously only used by square waves, was replaced with a generalized “cycle bias” value that can modify other waveforms in the same way, such as by shaping the triangle wave into a saw wave – I don't know if this is a common synthesizer ability, or if there's a more established name for it that I'm ignorant of. I recreated the phase-shift effect, which combines the waveform with a slightly shifted copy of itself, and the harmonic overtone effect, which combines the waveform with copies at various multiples of the frequency. The “bit crush” effect was replaced with simply having a variable sample rate: though the true output sample rate is locked to whatever Unity's audio settings specify, the synthesizer itself only simulates at the set sample rate. This makes the synthesizer significantly more efficient, but I'm still not sure how many troublesome side-effects it may be creating…
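For the curious, the cycle bias idea boils down to warping where in the cycle the waveform peaks. A simplified sketch (not the exact Soundmakr code, and assuming a bias strictly between 0 and 1):

```csharp
public static class WaveformSketch
{
    // Shape a triangle wave by moving its peak: the wave rises from -1
    // to 1 over the first "bias" fraction of the cycle, then falls back.
    // bias = 0.5 gives a symmetric triangle; bias near 0 or 1 gives a saw.
    public static float BiasedTriangle(float phase, float bias) // phase in [0, 1)
    {
        if (phase < bias)
            return -1f + 2f * (phase / bias);
        return 1f - 2f * ((phase - bias) / (1f - bias));
    }
}
```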
Regardless, once I had a basic waveform, I had to control how loud it was. sfxr uses a simple attack-hold-release envelope, where the sound fades in over the attack period, is held for a period, then fades out over the release period. (These are the standard terms in musical synthesis, though sfxr itself uses the terms “sustain” and “decay” for hold and release, and adds a “sustain punch” value which is added to the volume during the hold phase.) I replaced this with the much more standard attack-decay-sustain-hold-release envelope: the sound fades in during attack, decays down to the sustain level (equivalent to “sustain punch” + 1), is held for the duration of the hold value, then fades out over the release. This may seem like a pretty minor change, but one of the things I would like to do someday is adapt this tool towards musical use – in which case little changes like this could be pretty important.
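As a sketch, an envelope like this can be expressed as a pure function of elapsed time – this illustrates the shape rather than Soundmakr's exact code, and it assumes all phase durations are nonzero:

```csharp
using UnityEngine;

public static class EnvelopeSketch
{
    // Attack-decay-sustain-hold-release as a function of elapsed time.
    // All durations are in seconds; the attack peaks at 1, then the
    // level decays to sustainLevel, holds, and releases to silence.
    public static float Evaluate(float t, float attack, float decay,
                                 float sustainLevel, float hold, float release)
    {
        if (t < attack) return t / attack;                              // fade in
        t -= attack;
        if (t < decay) return Mathf.Lerp(1f, sustainLevel, t / decay);  // decay
        t -= decay;
        if (t < hold) return sustainLevel;                              // hold
        t -= hold;
        if (t < release) return Mathf.Lerp(sustainLevel, 0f, t / release); // fade out
        return 0f;                                                      // sound is over
    }
}
```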
Okay, I have a sound that fades in and out! It's a synthesizer! However, to have something equivalent in functionality to what I was replacing, I needed to add filters. This was a task I severely underestimated: while I knew enough about waveforms and envelopes to bang out serviceable ones in a couple of days, I quickly found out I didn't know shit about filters. For those who haven't done audio work, a filter is usually a tool for boosting or attenuating certain frequencies of a sound – but what is a frequency? When you're working at the level of individual audio samples, values between -1 and 1, nothing you can see in any one sample indicates what the frequency of the sound is. Looking at the source code of the predecessor tools didn't seem to help: the filter functions remained bafflingly opaque. What I learned, in the many days subsequently spent trying to understand filters, is that filters are very complicated.
I'd need to take at least one class to really understand filters, but I managed to scrape together enough functional knowledge to move forward. To sum up: we can infer the frequency of a sound from the rate at which it changes over time, which we can calculate from the differences between the current sample and the N previous samples. We can also attenuate the sound by mixing in the same sound offset by a given number of samples – since a sound added to an inverted version of itself is silence, and a sound phase-shifted by half its cycle length is more-or-less inverted. This is, as I understand it, the principle noise-canceling headphones work on. This was all nowhere near enough to let me devise a suitable filter, or even really understand the filter code I was working from, but it was enough to let me look through a number of pre-built filter solutions online until I found one that mostly worked, and then bang on the relevant parameters until it did what I needed. I thought I'd finished getting filters working several times before finally getting something functional – I at least hope that it's finished now.
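To give a flavor of how these ideas turn into code, here's about the simplest filter there is: a one-pole low-pass, which blends each input sample with the previous output. Since high frequencies are, by definition, fast sample-to-sample changes, smoothing the signal this way attenuates them. (This is an illustration of the concept, not Soundmakr's actual filter.)

```csharp
using UnityEngine;

// One-pole low-pass filter: each output sample is a blend of the
// new input and the previous output.
public class OnePoleLowPass
{
    private float previous;
    private readonly float alpha; // 0..1; lower = stronger filtering

    public OnePoleLowPass(float cutoffHz, float sampleRate)
    {
        // A common approximation for mapping cutoff frequency to the
        // blend factor.
        float x = 2f * Mathf.PI * cutoffHz / sampleRate;
        alpha = x / (x + 1f);
    }

    public float Process(float input)
    {
        previous += alpha * (input - previous);
        return previous;
    }
}
```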
Okay! But… the sfxr synths don’t just play a sound that fades in and fades out, do they? They have all these other parameters: Vibrato, frequency sweep, change amount, and so forth! Here’s the part of the project I’m proudest of, my unique contribution to the lineage: Rather than having all these individual sliders and oscillators, I created a system of Low Frequency Controllers. There are four of these: Sweep, Oscillators 1 and 2, and Step. Sweep changes the controlled value proportional to time, with an additional acceleration parameter; the oscillators control the value according to a waveform and frequency, with the waveform having all the same available shapes as the main tone-generating oscillator; and the step controller changes the value at set intervals, with an option to periodically repeat.
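In code, the three controller behaviors boil down to something like this (a sketch with illustrative parameter names, not Soundmakr's actual code):

```csharp
using UnityEngine;

public static class ControllerSketch
{
    // Sweep: linear change over time, plus an acceleration term.
    public static float Sweep(float t, float rate, float acceleration) =>
        rate * t + 0.5f * acceleration * t * t;

    // Oscillator: a waveform at a given frequency and depth (sine here;
    // the real thing offers all the main oscillator's shapes).
    public static float Oscillate(float t, float frequencyHz, float depth) =>
        depth * Mathf.Sin(2f * Mathf.PI * frequencyHz * t);

    // Step: jump by a fixed amount at set intervals, optionally wrapping
    // after a number of steps so the pattern repeats.
    public static float Step(float t, float interval, float stepAmount, int repeatAfter)
    {
        int steps = (int)(t / interval);
        if (repeatAfter > 0) steps %= repeatAfter;
        return steps * stepAmount;
    }
}
```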
While in the *fxr synthesizers only a few parameters have controls like these, in my synth nearly any value can be controlled. One of the reasons this is possible, from a purely performance-related perspective, is that most of these only need to be simulated at most a hundred times a second to sound smooth, as opposed to the thousands of updates per second needed to generate a tonal waveform, so adding new effects costs almost nothing in terms of performance. The challenge, however, was creating an interface for adding and modifying all of these available controls without it becoming a completely overwhelming mess. I am quite happy with the solution I developed.
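This is the classic control-rate/audio-rate split; as a rough sketch of the idea (again an illustration, not Soundmakr's actual render loop):

```csharp
using UnityEngine;

public static class ControlRateSketch
{
    // Renders a vibrato sine tone. The controller (the slow vibrato
    // LFO) is only re-evaluated ~100 times per second; the tone
    // oscillator advances on every single sample.
    public static float[] Render(int sampleCount, float sampleRate)
    {
        var buffer = new float[sampleCount];
        int controlInterval = Mathf.Max(1, (int)(sampleRate / 100f));
        float frequency = 440f;
        float phase = 0f;

        for (int i = 0; i < sampleCount; i++)
        {
            if (i % controlInterval == 0)
            {
                float t = i / sampleRate;
                frequency = 440f + 10f * Mathf.Sin(2f * Mathf.PI * 5f * t); // cheap: control rate
            }
            phase += frequency / sampleRate;
            buffer[i] = Mathf.Sin(2f * Mathf.PI * phase); // expensive: every sample
        }
        return buffer;
    }
}
```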

New controls can be quickly added (or removed) by clicking the icons to the left of the value to be controlled. Controls are color-coded for quick visual reading, and this, together with the connecting lines, makes it immediately obvious which control type maps to which icon. Universal control values, such as the oscillator waveforms and the sweep speed multiplier, live in their own color-coded sections at the bottom.
Though I'm quite happy with where this interface ended up, working with Unity's Editor was A Struggle, as always. You may have noticed the knob controls on the right, and that some sliders have a range of values instead of a single value: these control the sound mutations, changes that happen every time the sound is played. The knob control is a largely undocumented and unused Unity Editor feature, and it took a lot of trial and error to get it working; that I bothered at all is mostly a testament to how perfectly this control type fit my needs for this particular parameter. Nearly every value can be mutated, and once it is, its slider is converted to a double-slider so you can easily set the acceptable bounds for the sound. Another subtle challenge was playing the sound when the game wasn't running – as it turns out, OnAudioFilterRead doesn't work in the editor, at least as far as I was able to tell. To handle this, I simply created a hidden background object to play the sounds. This required rendering the sound completely before playback, something I generally wanted to avoid, but since it's just one sound at a time while running the editor, it's not a big deal (and the technique was necessary anyway in order to export sounds as wave files).
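The editor-playback workaround boils down to something like this (a simplified sketch – the actual rendering code and the cleanup are elided):

```csharp
using UnityEngine;

public static class EditorPreview
{
    // Plays a fully-rendered sound in the editor, where
    // OnAudioFilterRead isn't available: pack the samples into an
    // AudioClip and play it on a hidden, throwaway object.
    public static void Play(float[] samples, int sampleRate)
    {
        var clip = AudioClip.Create("preview", samples.Length, 1, sampleRate, false);
        clip.SetData(samples, 0);

        var go = new GameObject("SoundPreview") { hideFlags = HideFlags.HideAndDontSave };
        var source = go.AddComponent<AudioSource>();
        source.clip = clip;
        source.Play();
        // (Cleanup elided: the object and clip should be destroyed
        // once playback finishes.)
    }
}
```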
Now I had the sound editor and the synthesizer working almost perfectly, as far as I could tell. I began the work of porting it into the Bound City project… and quickly noticed Some Issues. Most immediately, every sound in the old system was addressed by string identifiers, names which were not present at all in the new sound type. It would be easy enough to add a name to each sound, but this felt pretty sloppy – why should a sound need a name? Instead, I gave each Sound Player a library, a little list of known sound files which could be assigned string aliases for quick and easy playback. I also did a little interface work to make building sound libraries quick and painless.
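Conceptually, the library is just an alias-to-sound lookup; a minimal sketch (with "SoundAsset" standing in for the actual sound type):

```csharp
using System.Collections.Generic;

public class SoundAsset { /* stand-in for the real sound type */ }

public class SoundLibrary
{
    // Maps short aliases ("jump", "hit") to sound assets, so gameplay
    // code can refer to sounds by name without the sound itself
    // needing one.
    private readonly Dictionary<string, SoundAsset> sounds =
        new Dictionary<string, SoundAsset>();

    public void Register(string alias, SoundAsset sound) => sounds[alias] = sound;

    public SoundAsset Get(string alias) =>
        sounds.TryGetValue(alias, out var sound) ? sound : null;
}
```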

The biggest stumbling block, near the end of the sub-project, turned out to be the difficulty of porting sounds from the old system to the new one. I hadn't set out to do it this way, but the whole structure of my synthesizer ended up almost completely different from sfxr's: rather than building each sample one value at a time, modifying each parameter as it goes, my synth simply takes the elapsed time since the beginning of the simulation and returns whatever sample is appropriate at that point in time. This made it very difficult to convert from *fxr parameters to the sorts of parameters I was using. Additionally, my filters work completely differently from the *fxr filters, so I have no idea how one could meaningfully convert those values. For the most part, I just experimented via trial and error – and, for the most part, that was enough to import sounds vaguely similar to the ones I had before, but each and every one of them required lots of tweaks and modifications. I got all of the sounds ported from the old system to the new one, but it took several days.
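To make the structural difference concrete, here's the same upward-sweeping sine tone written both ways (an illustration, not either tool's actual code). Note how the time-based version needs the integral of the frequency to get the phase right – a hint at why converting parameters between the two models was so awkward:

```csharp
using UnityEngine;

// sfxr-style: state mutates sample by sample, so sample N only makes
// sense after computing samples 0..N-1.
public class IncrementalSine
{
    const float SampleRate = 44100f;
    float phase;
    float freq = 220f;

    public float Next()
    {
        freq += 0.001f;               // sweep: +44.1 Hz per second
        phase += freq / SampleRate;
        return Mathf.Sin(2f * Mathf.PI * phase);
    }
}

// Soundmakr-style: a pure function of elapsed time; any sample can be
// computed directly, but the phase must be the *integral* of the
// frequency, not the frequency itself.
public static class TimeBasedSine
{
    public static float SampleAt(float t)
    {
        // freq(t) = 220 + 44.1t, so phase(t) = 220t + 22.05t^2
        float phase = 220f * t + 22.05f * t * t;
        return Mathf.Sin(2f * Mathf.PI * phase);
    }
}
```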
Okay! It's done! Is it? Well, mostly, for now. I have a number of ideas for improvements I would like to make, and I'm still finding little bugs and issues and fixing them as I get to them. Most improvements can be deferred for a while yet, since they're more in the realm of expanded and advanced functionality; a few tweaks, making some behaviors more consistent and intuitive and getting a couple of minor features working, might happen much sooner. I've also been considering how I might want to go about publishing and/or selling this work: though it required significant labor on my part, it was, at the start of it all, based on a freeware tool, so I feel slightly scummy just selling it outright. On the other hand, I am very proud of it and would like both to share it and to make money from it! Most likely, in the near future I will build a Unity UI for it – that is, for the Unity player, not the editor – and release that as a freeware version of the tool, one capable of building, playing, and saving all of the same sounds. Meanwhile, the version I'm using, the one that simulates sounds in real-time during gameplay, will be available for purchase on the Unity Asset Store… that is, once I build the demo version, write up the documentation, make the store page, and so on. I may try to focus on that around the dead area that always pops up near the end of the year. I will certainly post about it here once I do.
So, with all that mostly behind me, I'm working on the game again. Much as when I left it, there's not much left to do for this demo version… the demo version whose sound I'd originally planned to rework only after releasing it. Will it be done by the end of the year? Maybe! I've mostly been fidgeting with this and that, making slight improvements, making everything a little nicer, and fixing minor bugs. As I settle back in, I hope to finish banging out the largely minor changes needed to wrap up: testing everything, ensuring everything's in place, adding stuff that I only belatedly notice is missing, and so forth. It feels good to get something done, at any rate.
If you’d like to help support this project or my writing, please consider supporting me on Patreon.