Hands-On Music Generation with Magenta

Explore the role of deep learning in music generation and assisted music composition

Product type: Paperback
Published: January 2020
Publisher: Packt
ISBN-13: 9781838824419
Length: 360 pages
Edition: 1st
Author: Alexandre DuBreuil
Table of Contents

Preface
Section 1: Introduction to Artwork Generation
Chapter 1: Introduction to Magenta and Generative Art
Section 2: Music Generation with Machine Learning
Chapter 2: Generating Drum Sequences with the Drums RNN
Chapter 3: Generating Polyphonic Melodies
Chapter 4: Latent Space Interpolation with MusicVAE
Chapter 5: Audio Generation with NSynth and GANSynth
Section 3: Training, Learning, and Generating a Specific Style
Chapter 6: Data Preparation for Training
Chapter 7: Training Magenta Models
Section 4: Making Your Models Interact with Other Applications
Chapter 8: Magenta in the Browser with Magenta.js
Chapter 9: Making Magenta Interact with Music Applications
Assessments
Other Books You May Enjoy

What this book covers

Chapter 1, Introduction to Magenta and Generative Art, will show you the basics of generative music and what already exists. You'll learn about new techniques for artwork generation, such as machine learning, and how those techniques can be applied to produce music and art. We'll introduce Google's Magenta open source research platform, along with the open source machine learning platform it is built on, TensorFlow, give an overview of Magenta's different parts, and install the software required for this book. We'll finish the installation by generating a simple MIDI file on the command line.

Chapter 2, Generating Drum Sequences with the Drums RNN, will show you what many consider the foundation of music: percussion. We'll explain why RNNs are important for music generation. You'll then learn how to generate drum sequences with the Drums RNN and its pre-trained drum kit model, calling it both on the command line and directly in Python. We'll introduce the different model parameters, including the model's MIDI encoding, and show how to interpret the model's output.
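In Python, the call looks roughly like the following. This is a minimal sketch, assuming Magenta 1.x (the version current at publication) and that the pre-trained drum_kit_rnn.mag bundle has already been downloaded; the file paths are illustrative.

```python
import magenta.music as mm
from magenta.models.drums_rnn import drums_rnn_sequence_generator
from magenta.models.shared import sequence_generator_bundle
from magenta.protobuf import generator_pb2, music_pb2

# Load the pre-trained "drum_kit" bundle and build its sequence generator.
bundle = sequence_generator_bundle.read_bundle_file("drum_kit_rnn.mag")
generator_map = drums_rnn_sequence_generator.get_generator_map()
generator = generator_map["drum_kit"](checkpoint=None, bundle=bundle)
generator.initialize()

# Ask for 16 seconds of drums from an empty primer; temperature controls
# how conservative or surprising the sampling is.
options = generator_pb2.GeneratorOptions()
options.args["temperature"].float_value = 1.0
options.generate_sections.add(start_time=0, end_time=16)
sequence = generator.generate(music_pb2.NoteSequence(), options)

mm.sequence_proto_to_midi_file(sequence, "drums_output.mid")
```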

Chapter 3, Generating Polyphonic Melodies, will show the importance of Long Short-Term Memory (LSTM) networks in generating longer sequences. We'll see how to use a monophonic Magenta model, the Melody RNN, an LSTM network with lookback and attention configurations. You'll also learn to use two polyphonic models, the Polyphony RNN and the Performance RNN, both LSTM networks using a specific encoding, with the latter supporting note velocity and expressive timing.
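Generating a melody follows the same pattern as the drums example above, swapping in the Melody RNN's generator and one of its configurations. A minimal sketch, again assuming Magenta 1.x and a downloaded attention_rnn.mag bundle (paths and primer are illustrative):

```python
import magenta.music as mm
from magenta.models.melody_rnn import melody_rnn_sequence_generator
from magenta.models.shared import sequence_generator_bundle
from magenta.protobuf import generator_pb2, music_pb2

bundle = sequence_generator_bundle.read_bundle_file("attention_rnn.mag")
generator = melody_rnn_sequence_generator.get_generator_map()["attention_rnn"](
    checkpoint=None, bundle=bundle)
generator.initialize()

# Prime with a single C4 quarter note so the model has a starting context.
primer = music_pb2.NoteSequence()
primer.tempos.add(qpm=120)
primer.notes.add(pitch=60, velocity=80, start_time=0, end_time=0.5)
primer.total_time = 0.5

# Continue the primer up to the 16-second mark.
options = generator_pb2.GeneratorOptions()
options.generate_sections.add(start_time=0.5, end_time=16)
sequence = generator.generate(primer, options)
mm.sequence_proto_to_midi_file(sequence, "melody_output.mid")
```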

Chapter 4, Latent Space Interpolation with MusicVAE, will show the importance of the continuous latent space of variational autoencoders (VAEs) for music generation, compared to standard autoencoders (AEs). We'll use the MusicVAE model, a hierarchical recurrent VAE, from Magenta to sample sequences and then interpolate between them, effectively morphing smoothly from one to another. We'll then see how to add groove, or humanization, to an existing sequence using the GrooVAE model. We'll finish by looking at the TensorFlow code used to build the VAE model.
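Sampling and interpolation come down to a couple of calls on MusicVAE's TrainedModel class. A minimal sketch, assuming Magenta 1.x and a downloaded checkpoint for the small 2-bar drums configuration (the checkpoint path is illustrative):

```python
import magenta.music as mm
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

# Load the pre-trained 2-bar drums model.
config = configs.CONFIG_MAP["cat-drums_2bar_small"]
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path="cat-drums_2bar_small.lokl.ckpt")

# Sample two 2-bar drum sequences (32 steps) from the latent space...
start, end = model.sample(n=2, length=32, temperature=1.0)

# ...then interpolate between them: each intermediate sequence is decoded
# from a point on the line between the two latent codes, so the music
# morphs smoothly from the first sequence to the second.
sequences = model.interpolate(start, end, num_steps=6, length=32)
for index, sequence in enumerate(sequences):
    mm.sequence_proto_to_midi_file(sequence, "interpolation_%d.mid" % index)
```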

Chapter 5, Audio Generation with NSynth and GANSynth, will cover audio generation. We'll first provide an overview of WaveNet, an existing model for audio generation that is especially efficient in text-to-speech applications. In Magenta, we'll use NSynth, a WaveNet autoencoder model, to generate small audio clips that can serve as instruments for a backing MIDI score. NSynth also enables audio transformations such as scaling, time stretching, and interpolation. We'll also use GANSynth, a faster approach based on generative adversarial networks (GANs).
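Encoding audio into NSynth's latent representation and synthesizing it back takes two calls to the fastgen module. A minimal sketch, assuming Magenta 1.x and a downloaded WaveNet checkpoint (file names and paths are illustrative):

```python
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

# NSynth works on 16 kHz audio; load one second of it.
sample_length = 16000
audio = utils.load_audio("cat.wav", sample_length=sample_length, sr=16000)

# Encode the waveform into the model's latent representation, then decode
# it back to audio. Mixing the encodings of two different sounds before
# synthesis is how NSynth interpolates between instruments.
checkpoint = "wavenet-ckpt/model.ckpt-200000"
encoding = fastgen.encode(audio, checkpoint, sample_length)
fastgen.synthesize(encoding, save_paths=["cat_decoded.wav"],
                   checkpoint_path=checkpoint)
```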

Chapter 6, Data Preparation for Training, will show why training our own models is crucial: it allows us to generate music in a specific style, or with specific structures or instruments. Building and preparing a dataset is the first step before training our own model. To do that, we'll first look at existing datasets and APIs that help us find meaningful data. Then, we'll build two MIDI datasets for specific styles: dance and jazz. Finally, we'll prepare the MIDI files for training using data transformations and pipelines.
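As an example of the kind of preparation involved, the following sketch reads a MIDI file, keeps it only if it is in 4/4 (an assumption many Magenta pipelines make), and quantizes it to a 16th-note grid. It assumes Magenta 1.x; the file name is illustrative:

```python
import magenta.music as mm

sequence = mm.midi_file_to_note_sequence("dance_track.mid")

# Keep only straightforward 4/4 material.
is_four_four = any(ts.numerator == 4 and ts.denominator == 4
                   for ts in sequence.time_signatures)
if is_four_four:
    # Snap note start and end times to a 16th-note grid
    # (4 steps per quarter note).
    quantized = mm.quantize_note_sequence(sequence, steps_per_quarter=4)
    print("Quantized %d notes" % len(quantized.notes))
```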

Chapter 7, Training Magenta Models, will show how to tune hyperparameters, such as batch size, learning rate, and network size, to optimize network performance and training time. We'll also cover common training problems, such as overfitting and models that fail to converge. Once a model's training is complete, we'll show how to use the trained model to generate new sequences. Finally, we'll show how to use the Google Cloud Platform to train models faster in the cloud.
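Hyperparameters are passed to the training script as a flag. The sketch below launches Drums RNN training from Python by shelling out to Magenta 1.x's drums_rnn_train console script; the paths, network size, and step count are illustrative:

```python
import subprocess

subprocess.run([
    "drums_rnn_train",
    "--config=drum_kit",
    # Where checkpoints and TensorBoard logs are written.
    "--run_dir=/tmp/drums_rnn/logdir/run1",
    # The SequenceExamples produced during data preparation.
    "--sequence_example_file=/tmp/drums_rnn/training_drum_tracks.tfrecord",
    # Hyperparameters: a smaller network and batch size train faster.
    "--hparams=batch_size=64,rnn_layer_sizes=[64,64]",
    "--num_training_steps=20000",
], check=True)
```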

Chapter 8, Magenta in the Browser with Magenta.js, will introduce a JavaScript implementation of Magenta that has gained popularity for its ease of use, since it runs in the browser and can be shared as a web page. We'll introduce TensorFlow.js, the technology Magenta.js is built upon, and show which models are available in Magenta.js, including how to convert our previously trained models. Then, we'll create small web applications using GANSynth and MusicVAE to sample audio and sequences, respectively. Finally, we'll see how Magenta.js can interact with other applications, using the Web MIDI API and Node.js.

Chapter 9, Making Magenta Interact with Music Applications, will show how Magenta fits into the broader picture by making it interact with other music applications, such as digital audio workstations (DAWs) and synthesizers. We'll explain how to send MIDI sequences from Magenta to FluidSynth and DAWs using the MIDI interface. By doing so, we'll learn how to handle MIDI ports on all platforms and how to loop MIDI sequences in Magenta. We'll show how to synchronize multiple applications using MIDI clocks and transport information. Finally, we'll cover Magenta Studio, a standalone packaging of Magenta based on Magenta.js that can also integrate into Ableton Live as a plugin.
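Sending notes from Python to a synthesizer such as FluidSynth comes down to opening a MIDI output port and writing messages to it. A minimal sketch using the Mido library (the port name is system-specific, so the one shown is an assumption):

```python
import time

import mido

# List the MIDI output ports available on this machine.
print(mido.get_output_names())

with mido.open_output("FluidSynth virtual port (1)") as port:
    # Bass drum: General MIDI percussion note 36 on channel 10 (index 9).
    port.send(mido.Message("note_on", channel=9, note=36, velocity=100))
    time.sleep(0.5)
    port.send(mido.Message("note_off", channel=9, note=36))
```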
