The synthesizer-related classes Synth, SynthSound, and SynthParameters have already been covered on the overview page. This leaves only SynthOscillator, SynthEnvelopeGenerator, and SynthVoice. SynthVoice is by far the most important, so let's start with that. I'm not going to pore over every line of code; instead, my goal is to point out what's important, to help you understand the code as you read it for yourself.
Because our synthesizer class Synth inherits from juce::Synthesiser, it inherits a whole pre-built mechanism for dynamically assigning SynthVoice objects to MIDI notes as they are played. We don't have to write any of that tricky code at all. (In fact, the main reason I wrote VanillaJuce at all was because Obxd, the only complete, uncomplicated JUCE-based synthesizer example I was able to find, was not based on juce::Synthesiser.)
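To give you an idea of how little code that takes, here is a minimal sketch of how a juce::Synthesiser subclass is typically wired up. This is an illustration, not VanillaJuce's actual Synth constructor; the voice count and the SynthSound constructor arguments are assumptions.

    static const int kNumVoices = 16;      // assumed polyphony

    Synth::Synth()
    {
        // The Synthesiser base class takes ownership of the voices and the sound,
        // and handles all voice assignment from then on.
        for (int i = 0; i < kNumVoices; ++i)
            addVoice(new SynthVoice());
        addSound(new SynthSound());        // the real SynthSound constructor may take arguments
    }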
The key juce::SynthesiserVoice member functions that SynthVoice overrides are:
    void startNote(int midiNoteNumber, float velocity, SynthesiserSound* sound, int currentPitchWheelPosition) override;
    void stopNote(float velocity, bool allowTailOff) override;
    void pitchWheelMoved(int newValue) override;
    void controllerMoved(int controllerNumber, int newValue) override;
    void renderNextBlock(AudioSampleBuffer& outputBuffer, int startSample, int numSamples) override;
The juce::Synthesiser::renderNextBlock() function calls each active voice's renderNextBlock() function once for every “block” (buffer) of output audio. Our implementation is this:
    void SynthVoice::renderNextBlock(AudioSampleBuffer& outputBuffer, int startSample, int numSamples)
    {
        while (--numSamples >= 0)
        {
            if (!ampEG.isRunning())
            {
                clearCurrentNote();
                break;
            }

            float aeg = ampEG.getSample();
            float osc = osc1.getSample() * osc1Level.getNextValue()
                      + osc2.getSample() * osc2Level.getNextValue();
            float sample = aeg * osc;

            outputBuffer.addSample(0, startSample, sample);
            outputBuffer.addSample(1, startSample, sample);
            ++startSample;
        }
    }
Don't worry about all the details yet. At this point, just note that the while loop iterates over all the samples in outputBuffer, and at each step a new sample value (sample) is computed for this voice and added to whatever may already be there in outputBuffer, using its addSample() function. In this way, all of the active voices (sounding notes) are effectively summed together into the output buffer.
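Conceptually, the base class does something like this for each block, after dispatching any MIDI events that fall within it (this is a sketch of the idea, not the actual JUCE source, which also splits the block at MIDI event boundaries):

    for (auto* voice : voices)                    // every voice owned by the Synthesiser
        if (voice->isVoiceActive())               // false once clearCurrentNote() has been called
            voice->renderNextBlock(outputBuffer, startSample, numSamples);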
startNote() gets called each time a voice is assigned to play a new MIDI note:
    void SynthVoice::startNote(int midiNoteNumber, float velocity, SynthesiserSound* sound, int currentPitchWheelPosition)
    {
        ignoreUnused(midiNoteNumber);    // accessible as SynthesiserVoice::getCurrentlyPlayingNote()

        tailOff = false;
        noteVelocity = velocity;
        pParams = dynamic_cast<SynthSound*>(sound)->pParams;

        double sampleRateHz = getSampleRate();
        setPitchBend(currentPitchWheelPosition);
        setup(false);
        ampEG.start(sampleRateHz);
    }
The function arguments contain all the details to specify which note is to be played, at what key velocity, and based on which SynthSound (we dynamic_cast the incoming SynthesiserSound* pointer to SynthSound*); they also tell us where the MIDI controller's pitch-wheel is positioned (it might not be in the middle).
Most of the work of setting up the new note is delegated to the setup() function, and startNote() finishes by telling the ampEG (amplifier envelope generator) to start the attack phase of the note. We'll get to setup() in a moment, but for now, note that it is also called from soundParameterChanged() (when any parameter is changed via the GUI) and from pitchWheelMoved(). (It would normally also be called from controllerMoved(), but VanillaJuce's implementation of that function is empty.)
    void SynthVoice::soundParameterChanged()
    {
        if (pParams == 0) return;
        setup(false);
    }

    void SynthVoice::pitchWheelMoved(int newValue)
    {
        setPitchBend(newValue);
        setup(true);
    }
pitchWheelMoved() delegates the real work to setPitchBend(), which transforms the 14-bit unsigned MIDI pitch-bend value newValue into a signed float value in the range -1.0 to +1.0, and then calls setup() with its Boolean parameter set to true.
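setPitchBend() itself isn't reproduced here, but a rough sketch of the kind of mapping it performs looks like this (pitchBendValue is an illustrative member-variable name; the actual VanillaJuce code may differ in detail):

    // Map the 14-bit MIDI pitch-wheel value (0..16383, centre 8192) onto -1.0..+1.0.
    void SynthVoice::setPitchBend(int midiPitchWheelValue)
    {
        pitchBendValue = (midiPitchWheelValue - 8192) / 8192.0f;   // illustrative member variable
    }

Here is setup():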
    void SynthVoice::setup (bool pitchBendOnly)
    {
        double sampleRateHz = getSampleRate();
        int midiNote = getCurrentlyPlayingNote();
        float masterLevel = float(noteVelocity * pParams->masterLevel);
        double pbCents = pitchBendCents();

        double cyclesPerSecond = noteHz(midiNote + pParams->osc1PitchOffsetSemitones,
                                        pParams->osc1DetuneOffsetCents + pbCents);
        double cyclesPerSample = cyclesPerSecond / sampleRateHz;
        osc1.setFrequency(cyclesPerSample);
        if (!pitchBendOnly)
        {
            osc1.setWaveform(pParams->osc1Waveform);
            osc1Level.reset(sampleRateHz, ampEG.isRunning() ? 0.1 : 0.0);
            osc1Level.setValue(float(pParams->oscBlend * masterLevel));
        }

        cyclesPerSecond = noteHz(midiNote + pParams->osc2PitchOffsetSemitones,
                                 pParams->osc2DetuneOffsetCents + pbCents);
        cyclesPerSample = cyclesPerSecond / sampleRateHz;
        osc2.setFrequency(cyclesPerSample);
        if (!pitchBendOnly)
        {
            osc2.setWaveform(pParams->osc2Waveform);
            osc2Level.reset(sampleRateHz, ampEG.isRunning() ? 0.1 : 0.0);
            osc2Level.setValue(float((1.0 - pParams->oscBlend) * masterLevel));
        }

        if (!pitchBendOnly)
        {
            ampEG.attackSeconds = pParams->ampEgAttackTimeSeconds;
            ampEG.decaySeconds = pParams->ampEgDecayTimeSeconds;
            ampEG.sustainLevel = pParams->ampEgSustainLevel;
            ampEG.releaseSeconds = pParams->ampEgReleaseTimeSeconds;
        }
    }
Don't worry about the details; they'll become clear as you study the code for yourself, and especially when we look at the SynthOscillator and SynthEnvelopeGenerator classes. For now, the important point to note is the pitchBendOnly flag: when setup() is called from pitchWheelMoved(), only the oscillator frequencies are updated, and the waveform, levels, and envelope settings are left alone.
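One small detail worth spelling out is the noteHz() helper used in setup(). It does the standard equal-tempered conversion from a MIDI note number (A440 is note 69) plus a detune offset in cents to a frequency in Hz; a sketch of that calculation (not necessarily VanillaJuce's exact code) is:

    #include <cmath>

    // Frequency in Hz for a given MIDI note number plus an offset in cents.
    // MIDI note 69 is A440; one semitone is 100 cents.
    static double noteHz(int midiNoteNumber, double centsOffset)
    {
        return 440.0 * std::pow(2.0, (midiNoteNumber - 69) / 12.0)
                     * std::pow(2.0, centsOffset / 1200.0);
    }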
stopNote() is a little complicated because of its Boolean allowTailOff parameter. allowTailOff will normally be true, indicating that the note should continue to sound but begin its release phase, because the MIDI key which was down is now up. allowTailOff will be false, however, in the event of a MIDI "panic" ("all notes off") situation, in which case notes should stop sounding immediately rather than "tail off".
    void SynthVoice::stopNote(float velocity, bool allowTailOff)
    {
        ignoreUnused(velocity);

        if (allowTailOff && !tailOff)
        {
            tailOff = true;
            ampEG.release();
        }
        else
        {
            clearCurrentNote();
        }
    }
The tailOff member variable is used to ensure that the “tail-off” (release) operation happens only once. SynthesiserVoice::clearCurrentNote() tells the controlling Synthesiser instance that the voice is no longer active; renderNextBlock() will no longer be called until the voice is reassigned.
The only other interesting aspect of the SynthVoice class is its osc1Level and osc2Level member variables, which are defined as LinearSmoothedValue<float>. They are needed because of a rather tricky aspect of juce::Synthesiser's voice-assignment algorithm: an already-sounding voice can be re-triggered at a new level, either because the same note is played again or because the voice is stolen (both cases are discussed below).
If you are not careful, the result of playing a note first loudly and then very softly will be an audible click as the sounding note's amplitude suddenly drops from the old, louder level to the new, softer one. Using LinearSmoothedValue objects ensures that this volume change gets stretched out over a short interval instead (VanillaJuce uses 100 milliseconds; see the calls to osc1Level.reset() and osc2Level.reset() in setup()).
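In isolation, the technique looks like this (a minimal illustration using the same LinearSmoothedValue calls, not code taken from VanillaJuce):

    // Spread a level change over a 100 ms ramp instead of jumping to it instantly.
    LinearSmoothedValue<float> level;
    level.reset(44100.0, 0.1);           // sample rate, ramp length in seconds
    level.setValue(1.0f);                // target level for a loud note

    // ... later, the same voice is re-used for a soft note:
    level.setValue(0.2f);                // new target; getNextValue() now ramps toward it

    // in the per-sample rendering loop:
    float gain = level.getNextValue();   // steps a little closer to the target on each call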
VanillaJuce's oscillator class is designed for coding simplicity, not CPU-efficiency or sound quality. Here's the whole thing:
    class SynthOscillator
    {
    private:
        SynthOscillatorWaveform waveForm;
        double phase;        // [0.0, 1.0]
        double phaseDelta;   // cycles per sample (fraction)

    public:
        SynthOscillator();

        void setWaveform(SynthOscillatorWaveform wf) { waveForm = wf; }
        void setFrequency(double cyclesPerSample);
        float getSample();
    };

    SynthOscillator::SynthOscillator()
        : waveForm(kSawtooth)
        , phase(0)
        , phaseDelta(0)
    {
    }

    void SynthOscillator::setFrequency(double cyclesPerSample)
    {
        phaseDelta = cyclesPerSample;
    }

    float SynthOscillator::getSample()
    {
        float sample = 0.0f;

        switch (waveForm)
        {
        case kSine:
            sample = (float)(std::sin(phase * 2.0 * double_Pi));
            break;
        case kSquare:
            sample = (phase <= 0.5) ? 1.0f : -1.0f;
            break;
        case kTriangle:
            sample = (float)(2.0 * (0.5 - std::fabs(phase - 0.5)) - 1.0);
            break;
        case kSawtooth:
            sample = (float)(2.0 * phase - 1.0);
            break;
        }

        phase += phaseDelta;
        while (phase > 1.0) phase -= 1.0;

        return sample;
    }
Every time getSample() is called, the phase member variable, a number in the range 0.0 to 1.0, is used in a simple math expression to generate one sample of the appropriate waveform (sine, square, triangle, or sawtooth). Then phase is advanced by adding the small fraction phaseDelta, with wraparound so it remains in the range 0.0 to 1.0. As you can see in the SynthVoice::setup() code above, phaseDelta is computed by dividing the desired note frequency in Hz (cycles per second) by the plugin host's current sampling frequency (samples per second), yielding a cycles-per-sample value (aka normalized frequency). For example, an A440 note rendered at a 44.1 kHz sample rate gives phaseDelta = 440 / 44100 ≈ 0.00998 cycles per sample.
This simplistic code is acceptable for low-frequency oscillators (LFOs), but it's not good enough for audio-frequency oscillators, because the mathematical functions which define the waveforms are not band-limited (with the exception of the sine waveform, which is in fact perfectly band-limited). As a result, higher-frequency harmonics will be "aliased" to completely different audio frequencies when you play higher notes. For example, at a 44.1 kHz sample rate, a sawtooth played at 5 kHz has harmonics at 10, 15, 20, 25 kHz and beyond; everything above the 22.05 kHz Nyquist frequency folds back down into the audible range as inharmonic tones.
VanillaJuce is essentially an early iteration of a project which was eventually renamed SARAH (Synthèse à Rapide Analyse Harmonique, or “synthesis with fast harmonic analysis”), which I plan to publish soon.
The SynthEnvelopeGenerator class implements a simple "ADSR" envelope function with linear Attack and Decay ramps, a constant Sustain level, and a linear Release ramp. juce::LinearSmoothedValue is used to facilitate generating the linear ramps. To understand the code, it will be helpful to keep in mind that the ADSR envelope always begins and ends at the value 0.0.
Here is the class declaration for SynthEnvelopeGenerator:
    typedef enum
    {
        kIdle,
        kAttack,
        kDecay,
        kSustain,
        kRelease
    } EG_Segment;

    class SynthEnvelopeGenerator
    {
    private:
        double sampleRateHz;
        LinearSmoothedValue<double> interpolator;
        EG_Segment segment;

    public:
        double attackSeconds, decaySeconds, releaseSeconds;
        double sustainLevel;    // [0.0, 1.0]

    public:
        SynthEnvelopeGenerator();

        void start(double _sampleRateHz);   // called for note-on
        void release();                     // called for note-off
        bool isRunning() { return segment != kIdle; }
        float getSample();
    };
juce::LinearSmoothedValue is a template class, which in this case is instantiated with a base type of double to define the member variable interpolator. It has several member functions, of which the following four are used in SynthEnvelopeGenerator: setValue(), reset(), getNextValue(), and isSmoothing().
Unfortunately, the juce::LinearSmoothedValue class does not provide a function to set the interpolator's current value directly, so we have to resort to a three-step sequence: call setValue() to set the target value, then call reset(), which happens to set the current value to the target value (I only know this because I peeked at the juce::LinearSmoothedValue source code), and finally call setValue() a second time to set the new target value. You'll see this pattern more than once in the SynthEnvelopeGenerator code:
    SynthEnvelopeGenerator::SynthEnvelopeGenerator()
        : sampleRateHz(44100)
        , attackSeconds(0.01)
        , decaySeconds(0.1)
        , releaseSeconds(0.5)
        , sustainLevel(0.5)
        , segment(kIdle)
    {
        interpolator.setValue(0.0);
        interpolator.reset(sampleRateHz, 0.0);
    }

    void SynthEnvelopeGenerator::start (double _sampleRateHz)
    {
        sampleRateHz = _sampleRateHz;

        if (segment == kIdle)
        {
            // start new attack segment from zero
            interpolator.setValue(0.0);
            interpolator.reset(sampleRateHz, attackSeconds);
        }
        else
        {
            // note is still playing but has been retriggered or stolen:
            // start new attack from where we are
            double currentValue = interpolator.getNextValue();
            interpolator.setValue(currentValue);
            interpolator.reset(sampleRateHz, attackSeconds * (1.0 - currentValue));
        }

        segment = kAttack;
        interpolator.setValue(1.0);
    }

    void SynthEnvelopeGenerator::release()
    {
        segment = kRelease;
        interpolator.setValue(interpolator.getNextValue());
        interpolator.reset(sampleRateHz, releaseSeconds);
        interpolator.setValue(0.0);
    }

    float SynthEnvelopeGenerator::getSample()
    {
        if (segment == kSustain) return float(sustainLevel);

        if (interpolator.isSmoothing()) return float(interpolator.getNextValue());

        if (segment == kAttack)         // end of attack segment
        {
            if (decaySeconds > 0.0)
            {
                // there is a decay segment
                segment = kDecay;
                interpolator.reset(sampleRateHz, decaySeconds);
                interpolator.setValue(sustainLevel);
                return 1.0f;
            }
            else
            {
                // no decay segment; go straight to sustain
                segment = kSustain;
                return float(sustainLevel);
            }
        }
        else if (segment == kDecay)     // end of decay segment
        {
            segment = kSustain;
            return float(sustainLevel);
        }
        else if (segment == kRelease)   // end of release
        {
            segment = kIdle;
        }

        // after end of release segment
        return 0.0f;
    }
Remember where I talked about how juce::Synthesiser will re-trigger a voice back to its attack phase if the same MIDI note goes on, then off, then on again? That requires an even uglier version of the setValue, reset, setValue sequence of calls, in which the argument to the initial setValue() call is obtained by calling getNextValue(), to ensure that the new ramp begins exactly where the one being truncated leaves off, avoiding another kind of "click" transient.
When the VanillaJuce plugin is compiled and instantiated in a DAW, and the user presses down a note on a MIDI keyboard (or the same sequence of MIDI events occurs during the playback of a recorded MIDI sequence), the following things happen:

1. The host calls VanillaJuceAudioProcessor::processBlock(), which calls renderNextBlock() with this pointing to the one and only Synth object (member variable synth of VanillaJuceAudioProcessor).
2. juce::Synthesiser::renderNextBlock() scans the incoming MIDI buffer, finds the note-on event, and calls noteOn(), which assigns a free SynthVoice and calls its startNote() member function.
3. For this and every subsequent audio block, the Synthesiser calls the voice's renderNextBlock(), which adds the voice's samples into the output buffer as described above.

When the MIDI note-off event occurs in the MIDI input sequence:

1. juce::Synthesiser::renderNextBlock() finds the note-off event and calls noteOff(), which calls stopNote() on the voice with allowTailOff set to true, so the amplitude envelope enters its release phase.
2. The voice keeps rendering through the release; when ampEG.isRunning() finally returns false, renderNextBlock() calls clearCurrentNote(), and the voice becomes available for reassignment.
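The processBlock() call that drives all of this is quite small. Roughly (this is a sketch; VanillaJuce's actual processBlock() also has parameter handling to do and may differ in detail):

    void VanillaJuceAudioProcessor::processBlock(AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
    {
        buffer.clear();    // start from silence; each active voice adds its samples on top
        synth.renderNextBlock(buffer, midiMessages, 0, buffer.getNumSamples());
    }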
There are two special voice-assignment scenarios you should be aware of. The first one, note reassignment, was already discussed above: if a MIDI note-on event occurs while there is already an active voice sounding the same MIDI note number, juce::Synthesiser::noteOn() will simply call startNote() again on the active SynthVoice instance. Care must then be taken to ensure that there is not much of an audible "click" as the note goes back to its Attack phase.
The second special case concerns what happens when the synthesizer runs out of voices. That case is actually almost identical to the first one; the only difference is how the juce::Synthesiser code selects which active voice to reassign—a process called note-stealing. Have a look at the juce::Synthesiser source code to learn exactly how its note-stealing algorithm works, and be aware that this is just one of several possible ways to do it. If you wanted a different note-stealing algorithm, you would simply have to override more of the juce::Synthesiser member functions.
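For example, juce::Synthesiser declares a virtual findVoiceToSteal() function (check your JUCE version for its exact signature), so a custom stealing policy could look roughly like this sketch (MyStealingSynth is a made-up name):

    class MyStealingSynth : public Synthesiser
    {
    protected:
        SynthesiserVoice* findVoiceToSteal(SynthesiserSound* soundToPlay,
                                           int midiChannel, int midiNoteNumber) const override
        {
            ignoreUnused(soundToPlay, midiChannel, midiNoteNumber);
            // Naive policy: always steal the first voice. A real implementation
            // would prefer the quietest or oldest-sounding note instead.
            return voices[0];
        }
    };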