In this chapter, we'll use the following tools:
- Command line or Bash to launch Magenta from the terminal
- Python and Magenta to convert trained models for Magenta.js
- TensorFlow.js and Magenta.js to create music generation apps in the browser
- JavaScript, HTML, and CSS to write Magenta.js web applications
- A recent browser (Chrome, Firefox, Edge, Safari) for up-to-date web APIs
- Node.js and npm to install Magenta.js and its dependencies server-side
- FluidSynth to listen to generated MIDI from the browser
In Magenta.js, we'll make use of the Music RNN and MusicVAE models for MIDI sequence generation, and GANSynth for audio generation. We'll cover their usage in depth, but if you need more information, the Magenta.js Music README in the Magenta.js source code (github.com/tensorflow/magenta-js/tree/master/music) is a good place to start.
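To give a feel for what a browser-based Magenta.js app looks like, here is a minimal single-page sketch that loads Magenta.js from a CDN, asks the Music RNN model to continue a short seed melody, and plays the result. The seed notes and the generation parameters (16 steps, temperature 1.0) are illustrative choices; the `basic_rnn` checkpoint URL and the `mm` global come from the Magenta.js README, but check it for current versions before relying on them:

```html
<!DOCTYPE html>
<html>
<body>
<!-- Magenta.js bundle (includes TensorFlow.js); exposes the global `mm` -->
<script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0"></script>
<script>
  // A short quantized seed sequence: two notes, 4 steps per quarter note.
  const seed = {
    notes: [
      {pitch: 60, quantizedStartStep: 0, quantizedEndStep: 4},
      {pitch: 64, quantizedStartStep: 4, quantizedEndStep: 8},
    ],
    totalQuantizedSteps: 8,
    quantizationInfo: {stepsPerQuarter: 4},
  };

  // Music RNN checkpoint hosted by the Magenta team.
  const musicRnn = new mm.MusicRNN(
      'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');

  musicRnn.initialize()
      // Ask the model for 16 more quantized steps at temperature 1.0.
      .then(() => musicRnn.continueSequence(seed, 16, 1.0))
      // Play the continuation with Magenta.js's built-in player.
      .then((continuation) => new mm.Player().start(continuation));
</script>
</body>
</html>
```

Opening this file in a recent browser is enough to hear output; no server-side code is required, since the model checkpoint is downloaded and run client-side by TensorFlow.js.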