Overview of the VanillaJuce code

In the following, whenever I want to refer to a pair of files, e.g. PluginProcessor.h/.cpp, I'll use the typewriter font, but leave off the file extension, e.g. PluginProcessor.

The VanillaJuce code consists of three groups of files:

  1. PluginProcessor and PluginEditor represent the VanillaJuce plugin, as seen from the outside, i.e. by a DAW or other plugin host program.
  2. All the files starting with Synth represent the synthesizer (DSP) aspect.
  3. All the files starting with Gui represent the GUI aspect.

The "processor" object

The PluginProcessor files are the most important. These define a new C++ class VanillaJuceAudioProcessor, derived from the JUCE superclass AudioProcessor. Every plugin needs an AudioProcessor-derived object (this object instance is the plugin). The GUI, which is defined by the PluginEditor files, is actually optional; the fact that VanillaJuceAudioProcessor::hasEditor() returns true is what tells the plugin host that this particular plugin also has a custom GUI.

The processor needs to be able to notify the GUI editor when it changes one or more synth parameters (e.g. when a new preset is selected), so it can update the GUI display. This can be done in any number of ways, but I chose to have the VanillaJuceAudioProcessor class also derive from the JUCE mix-in class ChangeBroadcaster, and the VanillaJuceAudioProcessorEditor inherit from ChangeListener. The processor calls its sendChangeMessage() function to notify the editor, which results in a call to the editor's changeListenerCallback() function.
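
The snippet below is only a minimal sketch (not the actual VanillaJuce source) of how this broadcaster/listener pair is typically wired up in JUCE; the hasEditor() override mentioned above is shown too, and the method bodies are illustrative only.

class VanillaJuceAudioProcessor : public AudioProcessor,
                                  public ChangeBroadcaster
{
public:
    bool hasEditor() const override { return true; }    // tells the host we supply a GUI

    void setCurrentProgram (int index) override
    {
        // ... switch to the newly selected preset, then notify any listeners (the editor)
        sendChangeMessage();
    }
    // ...
};

class VanillaJuceAudioProcessorEditor : public AudioProcessorEditor,
                                        public ChangeListener
{
public:
    VanillaJuceAudioProcessorEditor (VanillaJuceAudioProcessor& p)
        : AudioProcessorEditor (&p), processor (p)
    {
        processor.addChangeListener (this);      // register for change notifications
    }

    ~VanillaJuceAudioProcessorEditor()
    {
        processor.removeChangeListener (this);
    }

    void changeListenerCallback (ChangeBroadcaster*) override
    {
        // re-read the current parameter values and refresh the on-screen controls
    }

private:
    VanillaJuceAudioProcessor& processor;
};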

The "Synth" objects

The DSP aspect of VanillaJuce is represented by four main classes, as follows (a sketch of how the processor constructor assembles them appears after the list):

  • Synth (derived from the JUCE Synthesiser class) represents the synthesizer itself
    • There is exactly one Synth instance, which is a member variable of VanillaJuceAudioProcessor.
  • SynthVoice (derived from SynthesiserVoice) represents the complete sound-generating apparatus for a single voice, i.e., one sounding note.
    • SynthVoice encapsulates two SynthOscillator objects and one SynthEnvelopeGenerator object, which it uses to render incoming MIDI notes to output audio.
    • The VanillaJuceAudioProcessor constructor creates 16 SynthVoice objects and adds them to the Synth instance.
  • SynthParameters (not derived from any JUCE class) is basically a struct full of member variables representing, e.g., oscillator waveforms, ADSR settings, etc.—all the details which collectively define one synth preset (or “program” in plugin parlance)
    • The VanillaJuceAudioProcessor object has a programBank member variable, which is an array of 128 SynthParameters objects.
  • SynthSound (derived from the JUCE class SynthesiserSound) serves to link the other three classes.
    • The VanillaJuceAudioProcessor constructor creates exactly one SynthSound object and adds it to the Synth instance, but retains a pointer to it in its pSound member variable.
    • The SynthSound object contains a reference to the Synth object (which never changes), and a pointer to the currently-selected preset (a SynthParameters object, one of the elements of the processor's programBank array)
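
Putting the pieces together, here is a rough sketch of the processor constructor described above. The SynthSound constructor signature and the pParams member name are assumptions made for illustration; the pSound and programBank members (and, here, a synth member for the Synth instance) correspond to the description in the list.

VanillaJuceAudioProcessor::VanillaJuceAudioProcessor()
{
    // one shared SynthSound, linked to the Synth and to the initial preset
    pSound = new SynthSound (synth);
    pSound->pParams = &programBank[0];
    synth.addSound (pSound);             // the Synthesiser holds it by reference-counted pointer

    // 16 voices => up to 16 simultaneously sounding notes
    for (int i = 0; i < 16; ++i)
        synth.addVoice (new SynthVoice());
}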

The SynthSound object

The JUCE documentation says very little about the SynthesiserSound class. The class itself is almost trivial:

class JUCE_API  SynthesiserSound    : public ReferenceCountedObject
{
protected:
    //==============================================================================
    SynthesiserSound();
 
public:
    /** Destructor. */
    virtual ~SynthesiserSound();
 
    //==============================================================================
    /** Returns true if this sound should be played when a given midi note is pressed.
 
        The Synthesiser will use this information when deciding which sounds to trigger
        for a given note.
    */
    virtual bool appliesToNote (int midiNoteNumber) = 0;
 
    /** Returns true if the sound should be triggered by midi events on a given channel.
 
        The Synthesiser will use this information when deciding which sounds to trigger
        for a given note.
    */
    virtual bool appliesToChannel (int midiChannel) = 0;
 
    /** The class is reference-counted, so this is a handy pointer class for it. */
    typedef ReferenceCountedObjectPtr<SynthesiserSound> Ptr;
 
 
private:
    //==============================================================================
    JUCE_LEAK_DETECTOR (SynthesiserSound)
};

The constructor and destructor are empty, and the two pure-virtual member functions appliesToNote() and appliesToChannel() are very simple. appliesToNote() is clearly there to support things like keyboard splits, where different sounds are used for different note ranges, and appliesToChannel() would appear to work similarly to support multi-timbral synths, where different MIDI channels trigger different sounds. But what is this mysterious “sound” thing, and why does this class even exist?
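
For illustration only (this class is not part of VanillaJuce), a sound restricted to the lower half of the keyboard might look like this; it is the sort of thing appliesToNote() makes possible:

// Hypothetical "sound" for a keyboard-split patch: it only responds to notes
// below middle C, on any MIDI channel.
class BassSplitSound : public SynthesiserSound
{
public:
    bool appliesToNote (int midiNoteNumber) override     { return midiNoteNumber < 60; }
    bool appliesToChannel (int midiChannel) override     { return true; }
};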

The answer can be found in class SynthesiserVoice, specifically SynthesiserVoice::startNote(). Have a look at this collection of override functions in class SynthVoice. (The ellipses … indicate where other code has been omitted for clarity.)

class SynthVoice : public SynthesiserVoice
{
    ...
 
    bool canPlaySound(SynthesiserSound* sound) override
    { return dynamic_cast<SynthSound*> (sound) != nullptr; }
 
    ...
 
    void startNote(int midiNoteNumber, float velocity, SynthesiserSound* sound, int currentPitchWheelPosition) override;
    void stopNote(float velocity, bool allowTailOff) override;
    void pitchWheelMoved(int newValue) override;
    void controllerMoved(int controllerNumber, int newValue) override;
 
    void renderNextBlock(AudioSampleBuffer& outputBuffer, int startSample, int numSamples) override;
 
    ...
};
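
Inside startNote(), the voice is handed a generic SynthesiserSound pointer, and downcasting it to SynthSound is what gives the voice access to the shared preset data. The following is only a sketch of that idea (the pParams member name and the setup details are assumptions, not the actual VanillaJuce code):

void SynthVoice::startNote (int midiNoteNumber, float velocity,
                            SynthesiserSound* sound, int /*currentPitchWheelPosition*/)
{
    // canPlaySound() has already guaranteed that this cast will succeed
    auto* synthSound = dynamic_cast<SynthSound*> (sound);
    if (synthSound == nullptr) return;

    // reach through the sound to the currently-selected preset
    SynthParameters* pParams = synthSound->pParams;

    // use pParams (waveforms, ADSR settings, etc.), plus the note number and
    // velocity, to set up the two oscillators and the envelope generator
    // ...
}

In other words, the "sound" is the one object shared by all voices, and it is through this object that each voice finds the data describing the sound it must produce.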