
Google releases Magenta studio beta, an open source python machine learning library for music artists

  • 3 min read
  • 14 Nov 2018


On 11th November, the Google Brain team released Magenta Studio in beta, a suite of free music-making plugins built on Magenta's open source tools and machine learning models. The tools are available both as standalone Electron applications and as plugins for Ableton Live.

What is Project Magenta?


Magenta is a research project which was started by some researchers and engineers from the Google Brain team with significant contributions from many other stakeholders. The project explores the role of machine learning in the process of creating art and music. It primarily involves developing new deep learning and reinforcement learning algorithms to generate songs, images, drawings, and other materials. It also explores the possibility of building smart tools and interfaces to allow artists and musicians to extend their processes using these models.

Magenta is powered by TensorFlow and is distributed as an open source Python library. The library lets users manipulate music and image data, which can then be used to train machine learning models and generate new content from them. The project aims to demonstrate that machine learning can enable and enhance the creative potential of all people.
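As a rough illustration of the kind of symbolic music data such a library manipulates, here is a minimal pure-Python sketch of a note-sequence structure. The class names and fields are hypothetical stand-ins for illustration, not Magenta's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    pitch: int         # MIDI pitch number; 60 is middle C
    start_time: float  # seconds
    end_time: float
    velocity: int = 80

@dataclass
class NoteSequence:
    notes: list = field(default_factory=list)

    def add_note(self, pitch, start, end, velocity=80):
        self.notes.append(Note(pitch, start, end, velocity))

    def total_time(self):
        return max((n.end_time for n in self.notes), default=0.0)

# Build a C-major triad held for half a second.
seq = NoteSequence()
for pitch in (60, 64, 67):
    seq.add_note(pitch, 0.0, 0.5)

print(len(seq.notes), seq.total_time())  # 3 0.5
```

A structure like this is what a generative model would consume as training data or emit as output, before conversion to an actual MIDI file.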

If Magenta Studio is used via Ableton, the Ableton Live plugin reads and writes clips from Ableton's Session View. If a user chooses to run it as a standalone application, it reads and writes files from the user's file system without requiring Ableton.

Some of the demos include:

#1 Piano Scribe


Many of the generative models in Magenta.js require the input to be a symbolic representation such as Musical Instrument Digital Interface (MIDI). Piano Scribe converts raw audio to MIDI using Onsets and Frames, a neural network trained for polyphonic piano transcription. This means audio alone is enough to obtain MIDI output in the browser.
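To make the transcription step concrete, here is a toy decoder in the spirit of Onsets and Frames: it turns per-pitch onset and frame probabilities into note events by thresholding. The threshold, frame rate, and data layout are illustrative assumptions, not the model's actual implementation:

```python
def activations_to_notes(onsets, frames, threshold=0.5, frame_time=0.032):
    """Toy decoder: a note starts where the onset probability crosses
    the threshold and sustains while the frame probability stays above
    it. `onsets` and `frames` map pitch -> per-frame probabilities."""
    notes = []
    for pitch, onset_probs in onsets.items():
        frame_probs = frames[pitch]
        active_since = None
        for i, (on, fr) in enumerate(zip(onset_probs, frame_probs)):
            if active_since is None and on >= threshold:
                active_since = i
            elif active_since is not None and fr < threshold:
                notes.append((pitch, active_since * frame_time, i * frame_time))
                active_since = None
        if active_since is not None:  # note still sounding at the end
            notes.append((pitch, active_since * frame_time,
                          len(onset_probs) * frame_time))
    return notes

# One pitch with a clear onset that sustains for three frames.
onsets = {60: [0.9, 0.1, 0.1, 0.0]}
frames = {60: [0.9, 0.8, 0.7, 0.1]}
print(activations_to_notes(onsets, frames))
```

The real model's acoustic front end (audio to probabilities) is the hard, learned part; this sketch only shows the final step from probabilities to note events.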


#2 Beat Blender


Beat Blender is built by Google Creative Lab using MusicVAE. Users can generate two-dimensional palettes of drum beats and draw paths through the latent space to create evolving beats.
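The "two-dimensional palette" can be pictured as bilinear interpolation between four corner points in MusicVAE's latent space. The sketch below uses plain Python lists as stand-in latent codes; in Beat Blender, each grid cell would be decoded by MusicVAE into a drum beat:

```python
def lerp(a, b, t):
    """Linear interpolation between two vectors, elementwise."""
    return [x + (y - x) * t for x, y in zip(a, b)]

def latent_palette(c00, c10, c01, c11, size=4):
    """Bilinear interpolation over four corner latent vectors,
    yielding a size x size grid of latent codes."""
    grid = []
    for i in range(size):
        v = i / (size - 1)
        left = lerp(c00, c01, v)
        right = lerp(c10, c11, v)
        grid.append([lerp(left, right, j / (size - 1)) for j in range(size)])
    return grid

# Four hypothetical 2-D latent codes at the palette corners.
palette = latent_palette([0, 0], [1, 0], [0, 1], [1, 1], size=3)
print(palette[1][1])  # centre of the palette -> [0.5, 0.5]
```

Drawing a path through the palette then amounts to decoding a sequence of neighbouring latent codes, which is why the beats evolve smoothly rather than jumping between unrelated patterns.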

#3 Tenori-off


Users can use Magenta.js to generate drum patterns by hitting the "Improvise" button. It is a take on an electronic sequencer.

#4 NSynth Super


This is a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds and then create a completely new sound based on those characteristics. Rather than simply layering or mixing sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds. For instance, users can get a sound that's part flute and part sitar all at once.
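The distinction between mixing sounds and synthesizing a new one can be sketched as follows. Both functions below compute the same interpolation, but the NSynth idea is to apply it in a learned embedding space and then decode the result into a single new timbre; the embedding values here are made up for illustration, and the decoder (a WaveNet-style network in the real system) is omitted:

```python
def crossfade(a, b, t=0.5):
    """A plain crossfade of two waveforms: both sources stay audible,
    layered on top of each other rather than fused into a new sound."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def blend_embeddings(emb_a, emb_b, t=0.5):
    """The NSynth idea in miniature: interpolate between two points in
    an embedding space that captures each sound's acoustic qualities.
    The real system decodes the blended embedding into audio; these
    lists are hypothetical stand-ins for learned embeddings."""
    return [(1 - t) * x + t * y for x, y in zip(emb_a, emb_b)]

flute_embedding = [0.9, 0.1, 0.4]  # hypothetical values
sitar_embedding = [0.2, 0.8, 0.6]
print(blend_embeddings(flute_embedding, sitar_embedding))
```

The arithmetic is identical; what matters is the space it happens in. A crossfade averages pressure waves, so you hear two instruments at once, while interpolating learned embeddings and decoding yields one instrument that is part flute, part sitar.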

You can head over to the Magenta Blog for more exciting demos. Alternatively, head over to magenta.tensorflow.org to read more about this announcement.
