
OpenAI introduces MuseNet: A deep neural network for generating musical compositions

  • 4 min read
  • 26 Apr 2019


OpenAI has built a new deep neural network called MuseNet for composing music, the details of which it shared in a blog post yesterday. The research organization has made a prototype of a MuseNet-powered co-composer available for users to try until May 12th.

https://twitter.com/OpenAI/status/1121457782312460288

What is MuseNet?


MuseNet uses the same general-purpose unsupervised technology as OpenAI’s GPT-2 language model: the Sparse Transformer. This transformer allows MuseNet to predict the next note based on a given set of notes. To enable this behavior, the Sparse Transformer uses something called “sparse attention”, where each output position computes weightings from only a subset of input positions.
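To make the idea concrete, here is a minimal NumPy sketch of that pattern (an illustration of sparse attention in general, not OpenAI’s actual kernels): each output position attends to a strided subset of earlier positions instead of all of them.

```python
import numpy as np

def sparse_attention(q, k, v, stride=4):
    """Toy sparse attention: each output position i attends only to a
    strided subset of positions <= i (illustrative, not OpenAI's kernel)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        idx = np.arange(0, i + 1, stride)          # the subset of input positions
        scores = q[i] @ k[idx].T / np.sqrt(d)      # similarity to that subset
        weights = np.exp(scores - scores.max())    # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[idx]                  # weighted sum of values
    return out

q = k = v = np.random.randn(16, 8)
print(sparse_attention(q, k, v).shape)  # (16, 8)
```

A full dense-attention layer would instead score every earlier position for every output; restricting the computation to a subset is what makes very long sequences tractable.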

For audio pieces, a 72-layer network with 24 attention heads is trained using the recompute and optimized kernels of the Sparse Transformer. This gives the model a long context, which enables it to remember long-term structure in a piece.
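Here, “recompute” refers to recomputing activations during the backward pass rather than storing them, which trades extra compute for a large memory saving and helps fit long contexts. A hedged PyTorch sketch of the idea, using the article’s layer and head counts but an assumed model width (this is not OpenAI’s training code):

```python
import torch
from torch.utils.checkpoint import checkpoint

# Scale from the article: 72 layers, 24 attention heads.
# d_model=768 is an assumption; it just has to be divisible by nhead.
layers = torch.nn.ModuleList([
    torch.nn.TransformerEncoderLayer(d_model=768, nhead=24)
    for _ in range(72)
])

def forward(x):
    for layer in layers:
        # "Recompute": skip storing this layer's activations and
        # recalculate them in the backward pass, trading compute for
        # the memory needed to fit a long context.
        x = checkpoint(layer, x, use_reentrant=False)
    return x

x = torch.randn(256, 1, 768, requires_grad=True)  # (sequence, batch, features)
y = forward(x)
```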

For training the model, the researchers collected data from various sources. The dataset includes MIDI files donated by ClassicalArchives and BitMidi, as well as data from online collections covering Jazz, Pop, African, Indian, and Arabic styles.
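Training a next-note predictor on such a dataset requires flattening each MIDI file into a sequence of events. A minimal sketch using the mido library (an assumption for illustration; the article does not describe OpenAI’s preprocessing):

```python
import mido  # pip install mido

def midi_to_events(path):
    """Flatten a MIDI file into (note, velocity, delta_time) tuples --
    a simple stand-in for the token sequences a model trains on."""
    events = []
    for msg in mido.MidiFile(path):  # iterates merged tracks; time is in seconds
        if msg.type == "note_on" and msg.velocity > 0:
            events.append((msg.note, msg.velocity, msg.time))
    return events

# events = midi_to_events("some_piece.mid")  # hypothetical file path
```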

The model is capable of generating 4-minute musical compositions with 10 different instruments, and it is aware of different music styles from composers and bands like Bach, Mozart, and the Beatles. It can also convincingly blend different music styles to create a completely new piece.

The MuseNet prototype, which has been made available for users to try, only comes with a small subset of options. It supports two modes:

  • In simple mode, users can listen to uncurated samples generated by OpenAI. To generate a piece yourself, you just need to choose a composer or style and, optionally, the start of a famous piece.
  • In advanced mode, users can directly interact with the model. Generating music in this mode takes much longer but gives an entirely new piece. Here’s how the advanced mode looks:


[Image: the MuseNet advanced mode interface]

Source: OpenAI
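In both modes, conditioning amounts to priming the model’s token sequence: a token for the chosen composer or style goes first, optionally followed by the encoded opening of a famous piece, and the model then predicts the rest. A schematic, self-contained sketch of that loop (every name here is a hypothetical stand-in, not OpenAI’s interface):

```python
import random

# All names below are hypothetical stand-ins, not OpenAI's interface.
STYLE_TOKENS = {"chopin": 0, "mozart": 1, "jazz": 2}  # assumed style vocabulary
VOCAB_SIZE = 128                                      # assumed token count

def predict_next(tokens):
    """Stand-in for the trained model's next-note prediction."""
    return random.randrange(len(STYLE_TOKENS), VOCAB_SIZE)

def generate(style, prime_tokens=(), max_len=64):
    # Conditioning: the style token goes first, optionally followed by
    # the encoded opening of a famous piece; the model fills in the rest.
    tokens = [STYLE_TOKENS[style]] + list(prime_tokens)
    while len(tokens) < max_len:
        tokens.append(predict_next(tokens))
    return tokens

print(generate("chopin", prime_tokens=[60, 64, 67])[:10])
```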



What are its limitations?


The music generation tool is still a prototype, so it does have some limitations:

  • To generate each note, MuseNet calculates the probabilities across all possible notes and instruments. Though the model gives higher priority to your instrument choices, there is a possibility that it will choose something else (see the sampling sketch after this list).
  • MuseNet finds it difficult to generate a piece when given odd pairings of styles and instruments. The generated music will sound more natural if you pick instruments closest to the composer or band’s usual style.
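As mentioned in the first item above, sampling explains the first limitation: the model produces a score for every possible (note, instrument) token and samples from the resulting distribution, so boosting your chosen instruments raises their odds without guaranteeing them. A small NumPy sketch of that sampling step (illustrative, not MuseNet’s code):

```python
import numpy as np

def sample_token(logits, preferred_idx, boost=2.0, temperature=1.0):
    """Sample one (note, instrument) token. Boosting the preferred
    instruments raises their probability but never guarantees them,
    which is why MuseNet can still pick something else.
    (Illustrative sketch, not MuseNet's actual code.)"""
    logits = logits.copy()
    logits[preferred_idx] += boost           # prioritize the user's choices
    probs = np.exp(logits / temperature)     # softmax over all possible tokens
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

logits = np.random.randn(100)                # scores for every candidate token
print(sample_token(logits, preferred_idx=[5, 6, 7]))
```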


Many users have already started testing out the model. While some are quite impressed by the AI-generated music, others think it is evident that the music is machine-generated and lacks emotional depth.

Here’s an opinion one Redditor shared about the different music styles:

“My take on the classical parts of it, as a classical pianist. Overall: stylistic coherency on the scale of ~15 seconds. Better than anything I've heard so far. Seems to have an attachment to pedal notes.

Mozart: I would say Mozart's distinguishing characteristic as a composer is that every measure "sounds right". Even without knowing the piece, you can usually tell when a performer has made a mistake and deviated from the score. The Mozart samples sound... wrong. There are parallel 5ths everywhere.

Bach: (I heard a bach sample in the live concert) - It had roughly the right consistency in the melody, but zero counterpoint, which is Bach's defining feature. Conditioning maybe not strong enough?

Rachmaninoff: Known for lush musical textures and hauntingly beautiful melodies. The samples got the texture approximately right, although I would describe them more as murky more than lush. No melody to be heard.”

Another user commented, “This may be academically interesting, but the music still sounds fake enough to be unpleasant (i.e. there's no way I'd spend any time listening to this voluntarily).”

Though this model is in the early stages, an important question that comes to mind is who will own the generated music. “When discussing this with my friends, an interesting question came up: Who owns the music this produces? Couldn't one generate music and upload that to Spotify and get paid based off the number of listens?” another user added.

To know more, visit OpenAI’s official website. Also, check out an experimental concert by MuseNet that was live-streamed on Twitch.

OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence

OpenAI Five bots destroyed human Dota 2 players this weekend

OpenAI Five beats pro Dota 2 players; wins 2-1 against the gamers