MicroMusic Is A FREE AI Tool For Windows That Recreates Synth Sounds Using Vital

MicroMusic is a new AI sound reconstruction tool for Vital, free to download for Windows users. Developed by a team of software engineering students at the University of Waterloo in Ontario, Canada, it works with the Vital synthesizer plugin, which is also free to download.

The purpose of MicroMusic is to make recreating synth sounds faster and easier. 

As the developers explain, “Even when you know exactly how you want your synth to sound, and even if you have a reference sample, it can take hours of hard work and iteration to tune your synth to sound just right.”

However, MicroMusic will now supposedly do the hard part for you, so you can spend more time making music and less time fiddling with dials. 

You can see the tool in action via the demo linked above, which showcases the process of replicating sounds from audio files.

To replicate a sound, you feed MicroMusic an audio sample and it outputs a Vital preset file. According to the developers, the tool uses machine learning to find the “optimal parameters to create the closest matching preset it can.”
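
The developers don’t describe how the matching works beyond that quote, but preset estimation from audio is commonly framed as a regression problem: extract a time-frequency representation of the reference sample and have a trained neural network predict normalized synth parameter values from it. The sketch below is purely illustrative and is not MicroMusic’s code; the model file, parameter names, and feature shape are all assumptions.

```python
# Illustrative sketch only – not MicroMusic's implementation.
# Assumes a hypothetical pre-trained Keras model ("preset_model.h5") that maps
# a fixed-size mel-spectrogram to a handful of normalized synth parameters.
import json

import librosa
import numpy as np
import tensorflow as tf

PARAM_NAMES = ["osc_1_level", "filter_1_cutoff", "env_1_attack"]  # example names only

def audio_to_features(path, sr=16000, n_mels=128, frames=256):
    """Load an audio file and return a fixed-size mel-spectrogram batch."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel = librosa.power_to_db(mel, ref=np.max)
    if mel.shape[1] < frames:  # pad short clips to a constant width
        mel = np.pad(mel, ((0, 0), (0, frames - mel.shape[1])))
    return mel[:, :frames][np.newaxis, ..., np.newaxis]  # shape (1, n_mels, frames, 1)

model = tf.keras.models.load_model("preset_model.h5")  # hypothetical model file
params = model.predict(audio_to_features("reference.wav"))[0]

# Vital presets are JSON-based; write the predicted values into a minimal
# (likely incomplete) preset as a starting point.
preset = {"settings": {name: float(v) for name, v in zip(PARAM_NAMES, params)}}
with open("predicted.vital", "w") as f:
    json.dump(preset, f)
```

A real pipeline would also have to map each normalized prediction back to its parameter’s actual range and fill in the rest of the fields a valid preset needs.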

Downloading MicroMusic is quite simple, as there’s no email or login required to get the ZIP file. 

The developers say the project grew out of their own frustration after countless hours spent tweaking synth parameters in search of the sound they wanted.

The team started development last year and aims to make MicroMusic the “go-to tool for all music producers who work with synthesizers.”

MicroMusic has been a big hit with producers, with users describing the plugin as revolutionary and praising the team’s generosity in releasing it for free. 

The MicroMusic AI was trained on over one million unique presets, and the developers gained this data by randomly generating Vital presets. The AI was also trained to be “note agnostic” and can handle polyphonic sounds. 
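
The developers haven’t published details of that training pipeline, but a dataset built from randomly generated presets would typically pair each random parameter vector with audio rendered from it, so a network can learn the mapping back from sound to parameters. Below is a rough sketch of what such a data-generation loop could look like; the parameter names, ranges, and the rendering step are placeholders, since rendering Vital presets offline requires a headless plugin host that isn’t shown here.

```python
# Rough sketch of generating (preset parameters -> rendered audio) training pairs
# from random presets. All parameter names/ranges are hypothetical examples, and
# render_preset_to_wav() stands in for an offline rendering step (headless host).
import json
import os
import random

PARAM_RANGES = {
    "osc_1_level": (0.0, 1.0),
    "filter_1_cutoff": (20.0, 120.0),  # made-up range for illustration
    "env_1_attack": (0.0, 2.0),
}

def random_preset():
    """Draw one random parameter vector, uniformly within each range."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def build_dataset(n, out_dir="dataset"):
    os.makedirs(out_dir, exist_ok=True)
    for i in range(n):
        params = random_preset()
        preset_path = os.path.join(out_dir, f"{i}.vital")
        with open(preset_path, "w") as f:
            json.dump({"settings": params}, f)
        # Placeholder: render each preset to audio with a headless host, then
        # store (rendered audio, params) as one supervised training example.
        # render_preset_to_wav(preset_path, os.path.join(out_dir, f"{i}.wav"))

if __name__ == "__main__":
    build_dataset(1000)
```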

The developers note that MicroMusic is currently a work in progress, and users can expect further refinements in the future. They also state that Mac and Linux are “not supported yet.”

And for those who wish to support the developers, the team does operate a Patreon page.

Download: MicroMusic (FREE, Vital synth required)

About Author

Steve is a musician and journalist who hails from Melbourne, Australia. He learned everything he knows about production from Google and used that vast knowledge to create a series of records you definitely haven’t heard of.

34 Comments

  1. So, no one ever has to learn anything anymore, because now we have a tool that rips off (or at least tries to) the sounds people have already created … Meh …

    • So I guess we should never use a piano again, someone already used that sound. Should have stopped with Mozart, Beethoven was wasting his time copying the piano sounds Mozart was using.

      • We’re talking about the ability to create sounds using a synthesizer, and yes, that includes piano – one should be able to create a piano sound, any other known instrument, as well as the sounds they imagine (that’s the hard part, and an AI tool surely won’t help with it).

        • The tree that does not bend with the wind direction, breaks. Adapt or die.

          I’m a sound designer excited about this because it will make my life easier.

          Being a purist is a mediocre way to cope with the inability to adapt to changes and learn new things.

          • What’s there to adapt to? Those who know how to program synths don’t need a tool to do that for them. I’ve always loved new tools when they make sense.

            • Alex, your comments are daft. That’s why I suspect you have an agenda and/or are somehow threatened.

              AI and Vital are tools. A hammer is a tool. One can use it to smash someone’s skull or to pound nails to help build a house.

              Another of the many cool things about FLOSS (Free/Libre Open Source Software) like Vital is that one doesn’t need to wait around to see if a proprietary software developer wants to ok a change. They simply look at the code and leverage/augment it, such as with artificial intelligence code that is already/readily available.

          • Holy, you seem to have really mixed that one up. First, as far as I know, people are not plants and definitely are not static like trees (unlike robots), so I am not sure what you are trying to say that makes sense in reality (not the virtual one). Perhaps AI will understand you, though; I’m sure of it :)

            Second, from what I can tell, if someone disagrees with the use of AI in music production or sound design, that doesn’t make them a purist. What sort of thinking is that? So if a robot disagrees with humans, that makes it a “purist” robot, I guess. lol

            Last thing: perhaps you are right that using AI will make your life easier. After all, so many people have an online (virtual) identity these days. One would almost think it’s the next step in human evolution, right? ;)

            As an actual sound designer, though? Not so much. At least not one who is really honest with his/her work. That’s all I want to say.

    • Scherbenfabrik

      If you don’t learn anything, you won’t be able to use the sounds you generate. But creativity-wise, I hope it will run on Win7. Can’t wait to throw all kinds of weird stuff in it and see what comes out. Wanna see an AI do that.

      I get your point though, especially with music being produced like on an assembly line these days. But isn’t that the consumers’ fault rather than the creators’ or the tools’? I think some people confuse more output with more creativity.

      • Neither consumers nor creators. Consumers consume what they’re given by the industry, and the industry pushes creators to make the kind of music you described :)

    • This tool doesn’t do that. I’d actually respect a tool that could recognize the effects and wave shapes in a synth sample, but this doesn’t do that.

      If you watch the original video, you’ll see that on the 4th preset supposedly generated, he pulls up Vital without actually loading the preset, which means the Vital instance already had the preset loaded before he used the app to generate it, which is impossible. Unless the entire video is staged and he’s a scammer farming excited people with a staged video and software that doesn’t do what he says.

      Don’t believe me? Search the app’s folder for “wav” and watch one of the five results be a Vital preset with everything already configured, which just so happens to match every result that occurs in both the video and every test I’ve tried with the software.

      • Wow, strong accusations there…
        I haven’t tried the software yet, but I checked what I downloaded, and there are several references to TensorFlow, and in the data files (*.h5) you can see conv2d, batch normalization, and dense layers, which are real neural network concepts, so I had hoped there was some true research behind this…

    • People still need at least some artistic ability, creativity, and production skill to create the rest of the mix. All this does is create individual synths.

  2. “MicroMusic has been a big hit with producers, with users describing the plugin as revolutionary”
    “MicroMusic will now supposedly do the hard part for you”

    Have you even tried it?

    Most of the top comments appear to be just hype from people who haven’t tried it (…like this article) or can’t use it (Mac/Linux) but still post “revolutionary” and fire emojis!!! And those who claim to have tried it either say it doesn’t work or that it just produces supersaws/noise and nothing close to what’s shown in the video.

    I can’t find one positive comment outside of YouTube, and no other videos demonstrating a product that claims to be better than Synplant 2 at polyphonic sound recreation.

    • As far as I can tell it’s a scam to get internet clout and patreon donations.

      I have tried it extensively, and I’ve also dug into the code and the application’s folders. It takes the same amount of time to process an 8-minute song as it does a 2-second one-shot sample. The results are the same. All it does is randomly shuffle the parameters of a Vital preset hidden inside the install folder. Every single result it produces is the same preset with a couple of minor tweaks.

      I have yet to see it make anything remotely close to the sample I input, even when I had it try to make a preset from a one-shot sample that was rendered from a preset it made itself. It couldn’t even copy itself.

      If you watch the original video, you’ll see that on the 4th preset supposedly generated, he pulls up Vital without actually loading the preset, which means the Vital instance already had the preset loaded before he used the app to generate it, which is impossible. Unless the entire video is staged and he’s a scammer farming excited people with a staged video and software that doesn’t do what he says.

      Don’t believe me? Search the app’s folder for “wav” and watch one of the five results be a Vital preset with everything already configured, which just so happens to match every result that occurs in both the video and every test I’ve tried with the software.

    • Just another attention-seeking tech bro wanting investors.
      It was trained with “EDM” sounds only: the standard American EDM starter kit of Porter, Zedd, and Mau5.

  3. Frits van Zanten

    PeaZip won’t extract it; hints: not accessible, corrupted, password.

    I also think ‘gui’ is not a good name for the main folder in a ZIP archive.

    Extracting with 7-Zip works, but starting the app outputs:

    WARNING:tensorflow:AutoGraph is not available in this environment: functions lack code information. This is typical of some environments like the interactive Python shell. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information.
    scipy\__init__.py:169: UserWarning: A NumPy version >=1.18.5 and <1.26.0 is required for this version of SciPy (detected version 1.26.0

    I’ll pass.

  4. Frits van Zanten

    Since my earlier comment seems to have disappeared: extracting with PeaZip gave error messages, but 7-Zip did the job. The .exe then gave errors about Python.

  5. I am so frustrated with all of the people telling me that my personal experience with this software not working is “just a bug lol”. I’ve tried this with samples that are short, and samples that are 4-8 minutes long. I’ve tried this on samples I’ve made, on pro sample pack samples I own, and even on samples made by presets that this software generates.

    It never works. It never sounds remotely close to what I input. It takes the same amount of time to process a 1 MB WAV file as it does a 60+ MB render of a song I made. The result is always the same two wavetables, with the same effects and the same modulation routing. Every time.

    If you look at the video, you’ll see that he forgets to actually load the preset into Vital on the 4th or 5th preset. He just pulls up Vital with the preset already loaded, which is impossible because it means he loaded the preset BEFORE it had been “generated”. That means he staged and faked at least that part of the video. Deceptive.

    If you try it yourself, go ahead and look at the various Python scripts. Look at generator.py and explain why it uses code to generate RANDOM parameter values if it’s actually analyzing the sound files you put in.

    Search “wav” in the install directory. You’ll find the default Vital preset file that it renames and shuffles the parameters on.

    Try to input a WAV file with a “.WAV” extension vs “.wav”. It errors out, meaning that somehow a developer who supposedly knows how to use PyTorch for AI features failed to handle file-extension case anywhere in their code. It also means the app can’t tell that a file is a WAV file from the data it holds; it relies on the case of the file extension instead, which suggests the code isn’t even looking at the contents of the WAV file.

    I was so excited when I downloaded this software, and now that I’ve tried it so much and flooded my Vital with these multiple presets that are all just copies of the same basic preset hidden inside of it… I feel so deceived, cheated of my time, my attention, and my hype for AI development.

    I’m so sick of all this hype from people who haven’t tried the software at all, and I know I can’t be alone in feeling misled and cheated. I feel really sorry for all the people who donated to the patreon after watching that staged video.

    I’ll debate AI in music up and down, all day, I will. To the best of my knowledge, this isn’t real, and if it is, this AI is being exploited by a deceptive developer who grossly misrepresents what’s actually happening with this software.

  6. After trying it, it oddly seems to work somewhat better with a full song than with a rendered single preset from another synth. Trying to reproduce a preset from Surge XT, it wasn’t even close, spitting out 10 almost identical presets that sounded nothing like the original. After I dropped in a full song containing 3 or 4 layered analog synth sounds from a Moog Matriarch, it produced 10 different presets with better results, seemingly trying to reproduce the various sounds throughout the tune. It wasn’t amazing, but it was interesting. I give the team credit for the work and the use of newer technology. I haven’t used Synplant 2 yet, but from its demo and the reviews I’ve seen, Synplant 2 seems to be pretty far ahead. Still, for a free tool, this is interesting for sure.

  7. Jeffery Wright

    WARNING:tensorflow:AutoGraph is not available in this environment: functions lack code information. Do I need to be running an AI environment? Interesting concept, but not practical.

  8. Drag-and-drop insanity! It does not generate 100% replicas, but finding hundreds of similar ones in seconds is crazy; I have a cascade of ideas right now. I quite like working on hundreds of variations of my own presets used in melodies.

  9. Some presets are JSON or XML. They trained it with one million presets? The people who created the royalty-free presets didn’t want them to be re-sold or used for AI training.

  10. Very disappointing. I was quite excited about the potential of this: like people mentioned before, creating variations of my own leads and good starting points for recreating more complex patches. But the results are pretty terrible. I think the video is very misleading, as the tool only really works for very generic EDM sounds, like basic square plucks and saw leads. Feed it anything else, like acoustic sounds (guitar samples, bell sounds, piano, whatever), and it will just export the same basic square plucks and saw leads. What this tool does do is something every electronic musician should be able to do in the first place: make basic plucks and leads, and recognize what’s a square wave and what’s a saw wave. It completely misses the intricacies of sounds, like vibratos and other pitch changes, or changes in timbre and parameters, and it also does a very poor job of recreating the transient and sustain of sounds.
