
Multi-user collaborative text editing with ShareDB and CodeMirror

Do you want to add multi-user collaborative text editing to your application? This article will show you how to do it using ShareDB and the CodeMirror library.

ShareDB, which used to be ShareJS, is a real-time database backend based on Operational Transformation (OT) of JSON documents; it allows real-time synchronization of documents and many other nifty things. In this article, I will focus on the implementation of a multi-user collaborative text editor using ShareDB and CodeMirror, a versatile text editor implemented in JavaScript for browsers.

Some projects already exist, such as a CodeMirror binding for ShareDB and ShareJS, but they are outdated or not working correctly. As there are limited resources on this subject, I am sharing my implementation for CodeMirror; it can also be useful for other editors such as the Ace Editor.

In this article, I assume that there is already a CodeMirror instance named “code_editor” on the client side.

The Node.js server is extremely simple, ShareDB will do everything on its own… we just make ShareDB listen on a WebSocketJSONStream :

var http = require('http');
var express = require('express');
var ShareDB = require('sharedb');
var ShareDB_logger = require('sharedb-logger');
var WebSocket = require('ws');
var WebSocketJSONStream = require('websocket-json-stream');

var share = new ShareDB();

var sharedb_logger = new ShareDB_logger(share);

var app = express();
var server = http.createServer(app);

var wss = new WebSocket.Server({server: server});
wss.on('connection', function(ws, req) {
    console.log("client connected");

    var stream = new WebSocketJSONStream(ws);
    share.listen(stream);

    // the "close" event of the individual socket fires when that client disconnects
    ws.on('close', function() {
        console.log("client disconnected");
    });
});

server.listen(3000);
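
Express is only used here to create the HTTP server; if you also want it to serve the client page, here is a minimal sketch (assuming the client files live in a hypothetical “public” directory) :

// serve the client files (index.html, bundled client JavaScript...) from "public"
app.use(express.static('public'));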

Here are the dependencies to add to the “package.json” file to build the server with “npm” :

    "dependencies": {
        "sharedb": "1.0.0-beta.6",
        "ot-text": "1.0.1",
        "websocket-json-stream": "0.0.3",
        "express": "4.14.0",
        "ws": "1.1.1",
        "sharedb-logger": "0.1.4"
    }

“sharedb-logger” is useful to monitor all ShareDB messages.

On the client side, you need to connect to the server through WebSocket (ShareDB is transport agnostic but it will require additional code to implement other transports) :

var ws = new WebSocket("ws://127.0.0.1:3000"); // Connect to localhost on port 3000

Then bind ShareDB to the WebSocket :

var sharedb_connection = new ShareDB.Connection(ws);
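
Note that “ShareDB” on the client refers to the ShareDB client library; here is a minimal sketch of how it could be pulled in, assuming the client code is bundled with a tool such as Browserify or webpack :

// the ShareDB client build lives under "sharedb/lib/client"
var ShareDB = require('sharedb/lib/client');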

Now it is time to get the document we are interested in :

sharedb_doc = sharedb_connection.get("my_collection", "my_document");

We are getting a document named “my_document” in the collection “my_collection”. If the connection is established and the document exists on the server, we will get the latest snapshot of that document from the server. If it does not exist yet, we create it and set its content to the text editor content; if it does exist, we just assign the document content to our text editor. We do all of that in the callback of the document “subscribe” function, where we also start listening to the “op” event to integrate remote document changes into our CodeMirror instance.

    sharedb_doc.subscribe(function(err) {
        if (err) {
            console.log(err); // handle the error
        }
        
        if (!sharedb_doc.data) { // the document does not exist yet, so we create it from the code editor content
            sharedb_doc.create(code_editor.getValue());
        } else { // it exists, so we set the code editor content to the latest document snapshot
            code_editor.setValue(sharedb_doc.data);
        }

        // listen to the "op" event, which fires when a change in content (an operation) is applied to the document; the "source" argument indicates the origin of the operation, which is local (truthy) or remote (false)
        sharedb_doc.on('op', function(op, source) {
            var i = 0, j = 0,
                from,
                to,
                operation,
                o;
            
            if (source === false) { // we only integrate operations coming from the server
                for (i = 0; i < op.length; i += 1) {
                    operation = op[i];
                    
                    for (j = 0; j < operation.o.length; j += 1) {
                        o = operation.o[j];
                        
                        if (o["d"]) { // delete operation
                            from = code_editor.posFromIndex(o.p);
                            to = code_editor.posFromIndex(o.p + o.d.length);
                            code_editor.replaceRange("", from, to, "remote");
                        } else if (o["i"]) { // insert operation
                            from = code_editor.posFromIndex(o.p);
                            code_editor.replaceRange(o.i, from, from, "remote");
                        } else {
                            console.log("Unknown type of operation.")
                        }
                    }
                }
            }
        });
        
        sharedb_doc_ready = true; // we will use this flag in the "changes" event of CodeMirror to determine whether the document is ready
    });

You can look here for information concerning the text operations. They are pretty easy to manipulate because there are basically two types, insert and delete, and both come with the character position at which they happen; we need to do some conversion for the position because CodeMirror uses lines and characters while ShareDB just uses character indexes. The very important thing is the last argument that we pass to the CodeMirror “replaceRange” function: all of these calls will fire the “changes” event of CodeMirror, which we listen to in order to submit operations… so to avoid cyclic changes, we “type” the origin by setting it to “remote” and we ignore changes with this origin in the CodeMirror “changes” event.
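
As an illustration, here is roughly what the two kinds of operations handled above look like (the positions and strings are made up) :

    // insert the string "foo" at character index 10 of the document
    var insert_op = { p: 10, i: "foo" };

    // delete the 3 characters "bar" starting at character index 42
    var delete_op = { p: 42, d: "bar" };

    // CodeMirror works with {line, ch} positions while text0 uses flat character
    // indexes, posFromIndex() converts an index back into a CodeMirror position
    var from = code_editor.posFromIndex(insert_op.p);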

Now it is time to listen to changes from CodeMirror through the “changes” event and push the operation to ShareDB :

    CodeMirror.on(code_editor, 'changes', function (instance, changes) {
        var op,
            change,
            start_pos,
            chars,

            i = 0, j = 0;

        if (!sharedb_doc_ready) { // if the document is not ready, we just ignore all changes, a much better way to handle this would be to call the function again with the same changes at regular intervals until the document is ready (or just cancel everything if the document will never be ready due to errors or something else)
            return;
        }

        op = {
            p: [],
            t: "text0",
            o: []
        };

        // reverse the changes so that they are processed in order (this avoids issues with simultaneous operations)
        changes.reverse();

        for (i = 0; i < changes.length; i += 1) {
            change = changes[i];
            start_pos = 0;
            j = 0;

            if (change.origin === "remote") { // do not submit back things pushed by remotes... ignore all "remote" origins
                continue;
            }

            while (j < change.from.line) {
                start_pos += code_editor.lineInfo(j).text.length + 1;
                j += 1;
            }

            start_pos += change.from.ch;

            if (change.to.line != change.from.line || change.to.ch != change.from.ch) {
                chars = "";

                for (j = 0; j < change.removed.length; j += 1) {
                    chars += change.removed[j];

                    if (j !== (change.removed.length - 1)) {
                        chars += "\n";
                    }
                }

                op.o.push({ p: start_pos, d: chars });
            }

            if (change.text) {
                op.o.push({ p: start_pos, i: change.text.join('\n') });
            }
        }

        if (op.o.length > 0) {
            sharedb_doc.submitOp(op);
        }
    });

Here we loop through all the changes that happened, determine what kind of change each one is and its start position (remember: CodeMirror uses lines and characters, so we have to do some computation to transform a line&character position into a character-based position), then we submit the operation to ShareDB.
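
As a side note, CodeMirror also provides an “indexFromPos” helper which performs this line&character to character index conversion directly; the manual loop above could presumably be replaced by something like this :

    // equivalent start position computed with CodeMirror's built-in helper
    start_pos = code_editor.indexFromPos(change.from);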

That is it: you should now have a fully working multi-user collaborative text editor with CodeMirror…

For an easier way to add collaborative features to your application, you can also integrate TogetherJS, a library which will almost “automatically” enable them for many things and provides some nice widgets such as a chat box, audio chat… My preference goes to ShareDB as it seems easier to control and manage while remaining fairly easy to use for advanced stuff, and it does not come with any fancy extras.

09/08/2017: Added `changes.reverse()` to the CodeMirror changes handler so that changes are processed in order; this avoids issues with simultaneous operations.


Fragment Synthesizer : GLSL powered HTML5 spectral synthesizer

Some years ago I found the Virtual ANS synthesizer, a very good spectral synthesizer simulating the Russian ANS synthesizer, a unique photoelectronic musical instrument where the score is a drawn sound spectrogram: the x axis of the score represents time and the y axis represents frequency.

Since then I have had a great interest in this sort of synthesis and method of composing, and I have an ongoing large project heavily related to the Virtual ANS.

Fragment Synthesizer is a fun side experiment, made quickly, where the initial thought was: what if you used GLSL-produced images as the source of a spectral synthesizer?

The result is the Fragment Synthesizer web application.

This is a full-blown stereophonic spectral synthesizer (the colors matter: red for left, green for right) which constantly plays a slice of a GLSL-produced image/animation. You can compose by editing the fragment shader or by just copy-pasting code from Shadertoy and converting it to the Fragment Synthesizer format by clicking on the convert button.

The web app consists of 3 parts :

  • The score produced by a fragment shader with a vertical bar representing the slice which will be played by the synthesizer
  • A live code editor with the ability to compile as you type (with error reporting); this is used to edit the GLSL fragment shader and subsequently to compose (the code editor is powered by the CodeMirror library)
  • Controls (volume slider, button to convert Shadertoy code to Fragment Synthesizer code and a slider to move the playing slice) powered by my own JavaScript widget library

How does it work?

Audio side :

Under the hood, the Fragment Synthesizer is just a simple additive synthesizer; it is powered by a simple wavetable which is generated with this code :

        _wavetable_size = 32768,
        
        _wavetable = (function (wsize) {
                var wavetable = new Float32Array(wsize),

                    wave_phase = 0,
                    wave_phase_step = 2 * Math.PI / wsize,

                    s = 0;

                for (s = 0; s < wsize; s += 1) {
                    wavetable[s] = Math.sin(wave_phase);

                    wave_phase += wave_phase_step;
                }

                return wavetable;
            })(_wavetable_size),

There is then an oscillator for each line of the score; oscillators are generated by this function :

    var _generateOscillatorSet = function (n, base_frequency, octaves) {
        var y = 0,
            frequency = 0.0,
            octave_length = n / octaves;
        
        _oscillators = [];

        for (y = n; y >= 0; y -= 1) {
            frequency = base_frequency * Math.pow(2, y / octave_length);

            var osc = {
                freq: frequency,
                
                phase_index: Math.random() * _wavetable_size, 
                phase_step: frequency / _audio_context.sampleRate * _wavetable_size
            };
            
            _oscillators.push(osc);
        }
    };

On the Fragment Synthesizer the starting frequency is 16.34 Hz (bottom of the score) and the y axis spans 10 octaves. This is hardcoded, but it could be fun to let the user change it. The number of oscillators changes with the score height, which depends on the window height: if the user resizes the window, the number of oscillators will change.
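
For reference, with the values described above the oscillator set would be built with a call along these lines (the exact arguments are an assumption, “_canvas_height” being the score height in pixels) :

    // one oscillator per score line, 16.34 Hz at the bottom, spanning 10 octaves
    _generateOscillatorSet(_canvas_height, 16.34, 10);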

One of the most important functions is the _computeNoteBuffer function: it transforms the pixel array of a vertical slice into a usable and fast-to-process data structure consisting of a single Float32 typed array. Each entry (an entry is in fact 5 values because the data is packed linearly into the array) describes which oscillator to play along with data related to how it should play; here is what the 5 values are :

  • The index of the oscillator to play
  • The previous left side gain value for this oscillator
  • The previous right side gain value for this oscillator
  • The difference between the current and the previous left side gain value for this oscillator
  • The difference between the current and the previous right side gain value for this oscillator

The gain value for each side is determined by the red and green components of the pixel value.

The previous gain value is used because the gain is interpolated during playback; this produces a better sound, without crackles when the gain value varies greatly.
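
Concretely, since the note buffer stores the previous gain and the difference between the current and the previous gain, the audio callback only has to do a simple linear interpolation, roughly like this :

    // _lerp_t goes from 0 to 1 between two _computeNoteBuffer() calls
    var gain_l = previous_volume_l + diff_volume_l * _lerp_t;
    var gain_r = previous_volume_r + diff_volume_r * _lerp_t;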

This function is actually called in the audio callback (it is very fast, so this is fine); it is called for each frame. Here is how the number of samples before the next note is computed :

        _fps = 60,
        _note_time = 1 / _fps,
        _note_time_samples = Math.round(_note_time * _sample_rate),
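
For example, at a 44100 Hz sample rate this gives Math.round((1 / 60) * 44100) = 735 samples between two notes.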

Here is the code of the _computeNoteBuffer function :

    var _computeNoteBuffer = function () {
        for (i = 0; i < _note_buffer.length; i += 1) {
            _note_buffer[i] = 0;
        }
        
        var note_buffer = _note_buffer,
            pvl = 0, pvr = 0, pr, pg, r, g,
            inv_full_brightness = 1 / 255.0,

            dlen = _data.length,
            y = _canvas_height - 1, i,
            volume_l, volume_r,
            index = 0;

        for (i = 0; i < dlen; i += 4) {
            pr = _prev_data[i];
            pg = _prev_data[i + 1];
            r = _data[i];
            g = _data[i + 1];

            if (r > 0 || g > 0) {
                volume_l = r * inv_full_brightness;
                volume_r = g * inv_full_brightness;
                
                pvl = pr * inv_full_brightness;
                pvr = pg * inv_full_brightness;

                note_buffer[index] = y;
                note_buffer[index + 1] = pvl;
                note_buffer[index + 2] = pvr;
                note_buffer[index + 3] = volume_l - pvl;
                note_buffer[index + 4] = volume_r - pvr;
            } else {
                if (pr > 0 || pg > 0) {
                    pvl = pr * inv_full_brightness;
                    pvr = pg * inv_full_brightness;

                    note_buffer[index] = y;
                    note_buffer[index + 1] = pvl;
                    note_buffer[index + 2] = pvr;
                    note_buffer[index + 3] = -pvl;
                    note_buffer[index + 4] = -pvr;
                }
            }

            y -= 1;

            index += 5;
        }
        
        _prev_data = _data;
        
        _swap_buffer = true;
    };

Now the core audio code where the magic happens (nothing really fancy here except the interpolation) :

    var _audioProcess = function (audio_processing_event) {
        var output_buffer = audio_processing_event.outputBuffer,
            
            output_data_l = output_buffer.getChannelData(0),
            output_data_r = output_buffer.getChannelData(1),
            
            output_l = 0, output_r = 0,
            
            wavetable = _wavetable,
            
            note_buffer = _note_buffer,
            note_buffer_len = note_buffer.length,
            
            wavetable_size_m1 = _wavetable_size - 1,
            
            osc,
            
            lerp_t_step = 1 / _note_time_samples,
            
            sample,
            
            s, j;
        
        for (sample = 0; sample < output_data_l.length; sample += 1) {
            output_l = 0.0;
            output_r = 0.0;

            for (j = 0; j < note_buffer_len; j += 5) {
                var osc_index = note_buffer[j],
                    previous_volume_l = note_buffer[j + 1],
                    previous_volume_r = note_buffer[j + 2],
                    diff_volume_l = note_buffer[j + 3],
                    diff_volume_r = note_buffer[j + 4];

                osc = _oscillators[osc_index];

                s = wavetable[osc.phase_index & wavetable_size_m1];

                output_l += (previous_volume_l + diff_volume_l * _lerp_t) * s;
                output_r += (previous_volume_r + diff_volume_r * _lerp_t) * s;

                osc.phase_index += osc.phase_step;

                if (osc.phase_index >= _wavetable_size) {
                    osc.phase_index -= _wavetable_size;
                }
            }
            
            output_data_l[sample] = output_l;
            output_data_r[sample] = output_r;
            
            _lerp_t += lerp_t_step;
            
            _curr_sample += 1;

            if (_curr_sample >= _note_time_samples) {
                _lerp_t = 0;

                _curr_sample = 0;

                _computeNoteBuffer();
            }
        }
    };

Visual side :

On the visual side, it is powered by WebGL and GLSL: there is a basic screen-aligned quad with a fragment shader applied to it, and for each frame a readPixels call is made to get the pixel array of the vertical slice chosen by the user (_play_position), which is then converted in the audio callback by the _computeNoteBuffer function. Here is the code called for each frame :

    var _frame = function (raf_time) { 
        _gl.useProgram(_program);
        _gl.uniform1f(_gl.getUniformLocation(_program, "globalTime"), (raf_time - _time) / 1000);
        _gl.uniform2f(_gl.getUniformLocation(_program, "iMouse"), _mx, _my);

        _gl.drawArrays(_gl.TRIANGLE_STRIP, 0, 4);

        if (_swap_buffer) {
            _gl.readPixels((_canvas_width - 1) * _play_position, 0, 1, _canvas_height, _gl.RGBA, _gl.UNSIGNED_BYTE, _data);

            _swap_buffer = false;
        }

        _raf = window.requestAnimationFrame(_frame);
    };

And voilà, the core of the synthesizer explained.

Note : Borrowed from Shadertoy, the resolution, iMouse and globalTime uniforms are defined and can be used in the fragment shader to do the cool stuff! 🙂

The full source code is available on GitHub.
