Kotoist Is A FREE Plugin For Algorithmic Music Composition


Ales Tsurko, an indie developer from Belarus, has released Kotoist, a freeware algorithmic composition plugin built around the Koto programming language. At the moment, a macOS binary is available for download, but you need to build the Windows version yourself from the provided source code.

Now, as a fair warning, this article is going to get a bit technical. 🤓

Kotoist is a new VST plugin for lightning-quick music creation based on algorithms and patterns, aimed at ‘Live Coding’ and ‘Algorithmic Composition.’ You can control each and every note of your composition separately and create some fascinating sounds.

The source code for Kotoist can be found on GitHub, alongside the macOS binaries. Windows users will have to build the plugin from source to try it out (for now); installation instructions are provided on the GitHub page.

It has been published under an open-source license, which means you are free to use it as you please. The source code can be obtained free of charge and modified to your heart’s content.

Kotoist’s interface may look like a command prompt, but you hardly need any programming know-how to operate it. It runs on very simple commands.

You type code into the editor to get the desired results, and you can attach a “Snippet” of code to each MIDI note.

To choose a Snippet for editing, click on the Snippet button, which has a grid-like table icon on it. This opens the “Snippet chooser” view. Select a Snippet to work on; it will be highlighted in yellow.

Kotoist connects to your DAW through the ‘midi_out’ function, which takes two arguments:

  1. The pattern used for playing the music.
  2. The quantization in beats.

Quantization is relatively straightforward, but the pattern argument is where most of the musical magic lies.
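
To make the shape of that call concrete, here is a minimal sketch in Koto. The midi_out name and its two arguments follow the description above, but the exact Kotoist syntax is an assumption on my part, so treat it as an illustration rather than copy-paste code.

    # Minimal sketch of the call shape: the first argument is the
    # pattern, the second is the quantization in beats.
    pattern = {dur: 0.25}   # a pattern with a single key (see the list below)
    midi_out pattern, 1     # play it, quantized to 1 beat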

The pattern argument is built from a set of keys that control how the notes are generated. These include:

  • dur – note duration
  • length – note length
  • degree – step in the scale
  • scale – the scale to use; to view the available scales, execute the print_scales function (see the one-liner after this list)
  • root – root note
  • transpose – simple transpose
  • mtranspose – transpose relating to the scale
  • octave – octave number (from 0)
  • channel – MIDI channel number
  • amp – amplitude (from 0.0 to 1.0)
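
As a quick illustration of the scale entry above, listing the available scales is just a one-line call in the editor (the output format is whatever Kotoist prints; only the call itself is shown here):

    # Print the names of the scales Kotoist knows about
    print_scales()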

You set these values by writing code in the editor.

For example, “pattern.dur” followed by the desired value sets the note duration.
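
Putting several of those keys together, a snippet might look something like the sketch below. The key names come from the list above, but the specific values, the “major” scale name, and the overall syntax are illustrative assumptions rather than verified Kotoist code.

    # Illustrative snippet: a short, quiet line in a chosen scale.
    pattern = {}             # start from an empty pattern map
    pattern.dur = 0.5        # half a beat per note
    pattern.degree = 2       # third step of the scale
    pattern.scale = "major"  # assumed name; run print_scales() to check
    pattern.root = 0         # root note
    pattern.octave = 5       # octave number
    pattern.amp = 0.6        # amplitude between 0.0 and 1.0

    # Send the pattern to the DAW, quantized to 4 beats.
    midi_out pattern, 4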

To turn your code into music, you evaluate it using the build button, which has a hammer icon on it. You can evaluate your entire code, or just a specific part by selecting it before hitting build.

To evaluate, you need to be in the “Snippets” view. To hear the result, play the note corresponding to the evaluated Snippet. Once you have several snippets in place, the process of composing new music becomes a lot faster.

Now, I know most musicians aren’t comfortable enough with coding to make good use of this tool just yet. But it represents a new era in digital music, where we can produce hours of generated music from a few algorithms.

So, do you think ‘algorithmically generated music’ could ever replace a human? Or are they merely tools to enhance our musical creativity? We’d love to hear your feedback in the comments.

Download: Kotoist (macOS binary, Windows source code)



About Author

Anshul Jain is a content writer and session guitarist from India. An avid fan of rock and metal, he is 1/6th of the progressive act Black Flower.

4 Comments

      • I’m no programmer (though I used to dabble) so this is very likely inaccurate, but should approximate the concept enough that you’ll be a lot closer to understanding it than you were before.

        Think of it like how Minecraft (or any other game with procedural world generation) generates worlds from seeds; if you edit a parameter (or part of the seed, which is just a string of text fed into the program and interpreted however the programmer designed it to) something about the music/world will change along with it. It’s pretty cool; it’s like computer-augmented creation.

  1. As someone looking into getting into Eurorack and who LOVES making generative patches on my Behringer Neutron and Volca Modular (all the modular/semi-modular gear I have for now), this is pretty cool and reminds me a lot of a digitized, code-based version of making a generative patch in Eurorack.

    I still think what we know of now as modular synthesis will remain well into the future, as it has a tangible magic that can’t be beaten. I can’t see a computer code-based algorithmic composer ever outright replacing the hands-on cable-plugging and knob-turning experience completely. Same for human-made compositions, as there’s a certain organic-ness that you can pick up on when something is human-made.

    I naturally and easily see through a person when they’re not being genuine, so I would be surprised if people didn’t start picking up on differences between computer-generated and hand-crafted music. People are more perceptive than many give us credit for, which includes that which is beyond surface-level perceptual reality.
