Tag Archives: synthesizer

Announcing Fragment – The Collaborative Spectral Synthesizer

Although I developed a proof of concept one year ago, this more serious version began in September 2016 from the original codebase. After a few months of hard work, it is now almost ready to be shipped to the world!

So, what is Fragment?

Fragment is a web-based collaborative spectral musical instrument driven by real-time visuals generated by its users from shared GLSL scripts.

If this sounds cryptic, that is alright, I will explain the concept here.

Fragment is first and foremost a simple additive synthesizer, meaning that under the hood, the synthesis engine just adds sine waves of different frequencies together!

The originality comes from how those frequencies are controlled. In Fragment you have three essential visual components : a GLSL code editor, a canvas (which can be called the score) and what I call slices. These components are the core of Fragment, together with the global timer and the synthesis engine, which you cannot see but can hear.

  • The GLSL code editor is used to generate the visuals; it is essentially a fragment shader which computes the color of each pixel of the canvas individually
  • The score (or canvas) is where the output of the fragment shader is shown
  • Slices are parts of the score; they are 1 pixel wide and span the full height of the canvas, so they are vertical strips of the score

So, how does it work?

Fragment's simplified flow is essentially : GLSL code > canvas > slice > synthesis engine

The code that users type produces visuals (a bunch of pixels). Those pixels are captured by the vertical slices, the slices are merged into one, and a conversion process transforms the pixels into “notes” : one pixel equates to a sine wave, with the amplitude defined by the pixel brightness, the frequency defined by the vertical position of the pixel in the slice and the stereo channel defined by the color of the pixel. This happens at a rate of 60 fps.
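As a rough illustration of that conversion, here is a minimal sketch (this is not Fragment's actual code; the function, its parameters and the exact frequency mapping are assumptions) of how one pixel of a slice could be turned into a sine wave "note" :

    // minimal sketch of the pixel-to-note idea (not Fragment's actual code)
    // red/green are the 0-255 pixel components, y is the vertical position in the
    // slice (0 = bottom); slice_height, base_frequency and octaves are assumptions
    function pixelToNote(red, green, y, slice_height, base_frequency, octaves) {
        return {
            // vertical position maps to frequency (bottom = lowest note)
            frequency: base_frequency * Math.pow(2, (y / slice_height) * octaves),
            // brightness maps to amplitude, color maps to the stereo channel
            amplitude_left: red / 255,
            amplitude_right: green / 255
        };
    }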

Fragment adds a lot of goodies on top of that, such as MIDI-controllable uniforms (variables that you use in your GLSL code and that you update from controllers), an unlimited number of slices, frequency shifting of individual slices, textures import, a configurable score, many other small features and, most importantly, collaborative features : Fragment can be used locally or online, people can join and share sessions to experiment together, and almost everything is synchronized between users.
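To give an idea of what a MIDI-controllable uniform means in practice, here is a small hypothetical sketch (not Fragment's code; the gl context, the program variable and the uniform name are assumptions) that maps a MIDI control change to a GLSL uniform with the Web MIDI API :

    // hypothetical sketch : drive a float uniform from a MIDI CC controller
    // "gl", "program" and the uniform name "my_uniform" are assumptions
    navigator.requestMIDIAccess().then(function (midi_access) {
        midi_access.inputs.forEach(function (input) {
            input.onmidimessage = function (ev) {
                if ((ev.data[0] & 0xf0) === 0xb0) { // control change message
                    var value = ev.data[2] / 127;   // normalize to 0..1

                    gl.useProgram(program);
                    gl.uniform1f(gl.getUniformLocation(program, "my_uniform"), value);
                }
            };
        });
    });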

Fragment will be released sometime in January; the beta testing phase is just beginning and there are still some things to refine.

Fragment – The Collaborative Spectral Synthesizer

If you like this project and want to follow its progress, you can find us on social media :

facebook twitter soundcloud youtube


KORG Audio Gallery AG-10 – ai² PCM synthesizer

Some weeks ago I got my hands on a Korg AG-10, the lowest-end model of the ai² synthesis series, which started with the 01/W in 1991.

The AG-10 was basically sold as a General MIDI wavetable sound module and came with two floppies of bundled software (Passport Trax, KORG MIDI driver, KORG SMF converter, Passport MIDI player, Passport QuickTunes). It was a great compact GM box for its time, but it is more than a simple “rompler” : it has “hidden” editing capabilities. There was no software to program the synthesis engine at the time it was sold, although most of the information needed to program it can be found in the manual.

Here are the specs :

  • AI² synthesis engine, same as 01/W, full digital processing with 4 Mb ROM
  • GM compliant
  • 32-voice polyphony
  • 16 parts / 16 channels
  • Two FX units (Reverb, Chorus)
  • Drum kits : 4
  • Outputs : Headphone jack, L/R RCA jacks, 1 x MIDI Out, 1 x MIDI Thru, To Host computer (PC I/F) interface
  • Inputs : L/R RCA jacks, 1 x MIDI in
  • PC1/2 host select switch on the back side
  • Front volume slider, power button and power LED/MIDI led indicator
  • Power : DC 12 V, 400 mA

With its editing capabilities, the AG-10 can be used as a kind of synthesizer and you can get some serious vintage sonic character out of it. It approaches the sonic capabilities of the other ai² series synthesizers for a cheap price, and with its own character, since it also has hidden waveforms and hidden modulations, making it possible to make it sound like a Roland D-50 / D-110, an Ensoniq VFX / SQ-R, an Emu Morpheus, or even like the Roland D-70's DLM (circuit-bending-like loop modulation).

The only issue is that there is no editor available right now for the AG-10. There used to be one which included the hidden waveforms/hidden modulations, but I cannot find it anymore… however, it should be possible to make another one fairly easily.

I made images of the floppy disks to test and archive the bundled software with DOSBox and Windows 3.1; those images are available below.

Download :

These units are fairly hard to find and you won’t see many for sale these days, but if you spot one, it might be worth picking up : it is likely to be cheap and it is not a simple GM sound module!


Fragment Synthesizer : GLSL powered HTML5 spectral synthesizer

Some years ago I found out about the Virtual ANS synthesizer, a very good spectral synthesizer simulating the Russian ANS synthesizer, a unique photoelectronic musical instrument where the score is a drawn sound spectrogram : the x axis of the score represents time and the y axis represents frequency.

Since then I have had a great interest in this sort of synthesis and method of composing, and I have an ongoing large project heavily related to the Virtual ANS.

Fragment Synthesizer is a fun side experiment made quickly, where the initial thought was : what if you used GLSL-produced images as the source of a spectral synthesizer?

The result is the Fragment Synthesizer web application.

This is a full-blown stereophonic spectral synthesizer (the color matters : red for left, green for right) which constantly plays a slice of a GLSL-produced image/animation. You can compose by editing the fragment shader, or by just copy-pasting code from Shadertoy and then converting it to the Fragment Synthesizer format by clicking on the convert button.

The web app consists of 3 parts :

  • The score produced by a fragment shader with a vertical bar representing the slice which will be played by the synthesizer
  • A live code editor with the ability to compile as you type (with error reporting); this is used to edit the GLSL fragment shader and thus to compose (the code editor is powered by the CodeMirror library, see the sketch after this list)
  • Controls (volume slider, button to convert Shadertoy code to Fragment Synthesizer code and a slider to move the playing slice) powered by my own JavaScript widget library
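As an aside, here is a minimal sketch of how such compile-as-you-type behavior can be wired up with CodeMirror and WebGL. This is an illustration, not the app's actual code : the textarea id, the gl context variable and the rebuild step are assumptions.

    // minimal compile-as-you-type sketch (not the actual app code)
    // assumes an existing WebGL context "gl" and a textarea with id "code"
    var editor = CodeMirror.fromTextArea(document.getElementById("code"), {
        mode: "x-shader/x-fragment",
        lineNumbers: true
    });

    editor.on("change", function () {
        var shader = gl.createShader(gl.FRAGMENT_SHADER);

        gl.shaderSource(shader, editor.getValue());
        gl.compileShader(shader);

        if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
            console.log(gl.getShaderInfoLog(shader)); // error reporting
            gl.deleteShader(shader);
            return; // keep the previous working program
        }

        // ...link the freshly compiled shader into a new program and swap it in
    });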

How does it work?

Audio side :

The Fragment Synthesizer is just a simple additive synthesizer under the hood; it is powered by a simple wavetable which is generated with this code :

        _wavetable_size = 32768,
        
        _wavetable = (function (wsize) {
                var wavetable = new Float32Array(wsize),

                    wave_phase = 0,
                    wave_phase_step = 2 * Math.PI / wsize,

                    s = 0;

                for (s = 0; s < wsize; s += 1) {
                    wavetable[s] = Math.sin(wave_phase);

                    wave_phase += wave_phase_step;
                }

                return wavetable;
            })(_wavetable_size),
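Note that the wavetable size is a power of two : this lets the audio callback further below wrap the (fractional) phase index of an oscillator with a single bitwise AND instead of a modulo when reading a sample, roughly like this :

    // sketch : reading one sample from the wavetable for an arbitrary phase value
    // the bitwise AND both truncates the fractional index and wraps it at the table
    // size, which only works because _wavetable_size is a power of two (32768)
    var phase_index = 123456.789; // any running phase value
    var s = _wavetable[phase_index & (_wavetable_size - 1)];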

There is then one oscillator for each line of the score; the oscillators are generated by this function :

    var _generateOscillatorSet = function (n, base_frequency, octaves) {
        var y = 0,
            frequency = 0.0,
            octave_length = n / octaves;
        
        _oscillators = [];

        for (y = n; y >= 0; y -= 1) {
            frequency = base_frequency * Math.pow(2, y / octave_length);

            var osc = {
                freq: frequency,
                
                phase_index: Math.random() * _wavetable_size, 
                phase_step: frequency / _audio_context.sampleRate * _wavetable_size
            };
            
            _oscillators.push(osc);
        }
    };

On the Fragment Synthesizer, the starting frequency is 16.34 Hz (bottom of the score) and the y axis spans 10 octaves. This is hardcoded, but it could be fun to let the user change it. The number of oscillators changes with the score height, which depends on the window height : if the user resizes the window, the number of oscillators will change.
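With those hardcoded values, the oscillator set is presumably built with a call along these lines (a sketch; the exact call site is not shown here) :

    // one oscillator per score line, 16.34 Hz at the bottom, spanning 10 octaves
    _generateOscillatorSet(_canvas_height, 16.34, 10);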

One of the most important functions is _computeNoteBuffer : it transforms the pixel array of a vertical slice into a data structure that is both usable and fast to process, a single Float32 typed array. Each entry of this array (an entry is in fact 5 values, because the data is packed linearly into the array) describes which oscillator to play along with data related to how it should play. Here is what the 5 values are :

  • The index of the oscillator to play
  • The previous left side gain value for this oscillator
  • The previous right side gain value for this oscillator
  • The current left side gain value for this oscillator
  • The current right side gain value for this oscillator

The gain value for each side is determined by the red and green components of the pixel value.

The previous gain value is kept because the gain is interpolated during playback; this produces a better sound, without crackles, when the gain value varies greatly.

This function is actually called from within the audio callback (it is very fast, so this is fine), once per visual frame. Here is how the number of samples before the next note is computed (for example, at a 44100 Hz sample rate this gives 735 samples per frame) :

        _fps = 60,
        _note_time = 1 / _fps,
        _note_time_samples = Math.round(_note_time * _sample_rate),

Here is the code of the _computeNoteBuffer function :

    var _computeNoteBuffer = function () {
        for (i = 0; i < _note_buffer.length; i += 1) {
            _note_buffer[i] = 0;
        }
        
        var note_buffer = _note_buffer,
            pvl = 0, pvr = 0, pr, pg, r, g,
            inv_full_brightness = 1 / 255.0,

            dlen = _data.length,
            y = _canvas_height - 1, i,
            volume_l, volume_r,
            index = 0;

        for (i = 0; i < dlen; i += 4) {
            pr = _prev_data[i];
            pg = _prev_data[i + 1];

            r = _data[i];
            g = _data[i + 1];

            if (r > 0 || g > 0) {
                volume_l = r * inv_full_brightness;
                volume_r = g * inv_full_brightness;
                
                pvl = pr * inv_full_brightness;
                pvr = pg * inv_full_brightness;

                note_buffer[index] = y;
                note_buffer[index + 1] = pvl;
                note_buffer[index + 2] = pvr;
                note_buffer[index + 3] = volume_l - pvl;
                note_buffer[index + 4] = volume_r - pvr;
            } else {
                if (pr > 0 || pg > 0) {
                    pvl = pr * inv_full_brightness;
                    pvr = pg * inv_full_brightness;

                    note_buffer[index] = y;
                    note_buffer[index + 1] = pvl;
                    note_buffer[index + 2] = pvr;
                    note_buffer[index + 3] = -pvl;
                    note_buffer[index + 4] = -pvr;
                }
            }

            y -= 1;

            index += 5;
        }
        
        _prev_data = _data;
        
        _swap_buffer = true;
    };

Now, the core audio code where the magic happens (nothing really fancy here except the interpolation) :

    var _audioProcess = function (audio_processing_event) {
        var output_buffer = audio_processing_event.outputBuffer,
            
            output_data_l = output_buffer.getChannelData(0),
            output_data_r = output_buffer.getChannelData(1),
            
            output_l = 0, output_r = 0,
            
            wavetable = _wavetable,
            
            note_buffer = _note_buffer,
            note_buffer_len = note_buffer.length,
            
            wavetable_size_m1 = _wavetable_size - 1,
            
            osc,
            
            lerp_t_step = 1 / _note_time_samples,
            
            sample,
            
            s, j;
        
        for (sample = 0; sample < output_data_l.length; sample += 1) {
            output_l = 0.0;
            output_r = 0.0;

            for (j = 0; j < note_buffer_len; j += 5) {
                var osc_index = note_buffer[j],
                    previous_volume_l = note_buffer[j + 1],
                    previous_volume_r = note_buffer[j + 2],
                    diff_volume_l = note_buffer[j + 3],
                    diff_volume_r = note_buffer[j + 4];

                osc = _oscillators[osc_index];

                s = wavetable[osc.phase_index & wavetable_size_m1];

                output_l += (previous_volume_l + diff_volume_l * _lerp_t) * s;
                output_r += (previous_volume_r + diff_volume_r * _lerp_t) * s;

                osc.phase_index += osc.phase_step;

                if (osc.phase_index >= _wavetable_size) {
                    osc.phase_index -= _wavetable_size;
                }
            }
            
            output_data_l[sample] = output_l;
            output_data_r[sample] = output_r;
            
            _lerp_t += lerp_t_step;
            
            _curr_sample += 1;

            if (_curr_sample >= _note_time_samples) {
                _lerp_t = 0;

                _curr_sample = 0;

                _computeNoteBuffer();
            }
        }
    };
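For reference, a callback with this signature is typically attached to a Web Audio ScriptProcessorNode; here is a minimal sketch of the wiring (the 4096 buffer size and the node variable name are assumptions) :

    // sketch : attach the audio callback to a ScriptProcessorNode
    // 0 input channels, 2 output channels for stereo; 4096 is an assumed buffer size
    var _script_node = _audio_context.createScriptProcessor(4096, 0, 2);

    _script_node.onaudioprocess = _audioProcess;
    _script_node.connect(_audio_context.destination);
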
Visual side :

On the visual side, it is powered by WebGL and GLSL : there is a basic screen-aligned quad with a fragment shader applied to it, and for each frame a readPixels call is made to get the pixel array of the user-chosen (_play_position) vertical slice, which will then get converted in the audio callback by the _computeNoteBuffer function. Here is the code called for each frame :

    var _frame = function (raf_time) { 
        _gl.useProgram(_program);
        _gl.uniform1f(_gl.getUniformLocation(_program, "globalTime"), (raf_time - _time) / 1000);
        _gl.uniform2f(_gl.getUniformLocation(_program, "iMouse"), _mx, _my);

        _gl.drawArrays(_gl.TRIANGLE_STRIP, 0, 4);

        if (_swap_buffer) {
            _gl.readPixels((_canvas_width - 1) * _play_position, 0, 1, _canvas_height, _gl.RGBA, _gl.UNSIGNED_BYTE, _data);

            _swap_buffer = false;
        }

        _raf = window.requestAnimationFrame(_frame);
    };

And voilà, the core of the synthesizer explained.

Note : borrowed from Shadertoy, the resolution, iMouse and globalTime uniforms are defined and can be used in the fragment shader to do cool stuff! 🙂

The full source code is available on GitHub.
