NSynth

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Rustycandle (talk | contribs) at 08:20, 3 November 2022. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

NSynth neural synthesizer
Original author(s): Google Brain, DeepMind, Magenta
Initial release: 6 April 2017
Repository: github.com/magenta/magenta/tree/main/magenta/models/nsynth
Written in: Python
Type: Software synthesizer
License: Apache 2.0
Website: magenta.tensorflow.org/nsynth

NSynth (a portmanteau of "Neural Synthesizer") is a software algorithm, outlined in an April 2017 paper,[1] that generates new sounds through neural-network-based synthesis, employing a WaveNet-style autoencoder to learn its own temporal embeddings from four different sounds.[2] Google later released an open-source hardware interface for the algorithm, called NSynth Super,[3] used by musicians such as Grimes[4] and YACHT[5] to generate experimental music using AI. The research and development of the algorithm was part of a collaboration between Google Brain, Magenta, and DeepMind.[6]

Development

Dataset

The NSynth dataset is composed of 305,979 one-shot instrumental notes, each with a unique pitch, timbre, and envelope, sampled from 1,006 instruments from commercial sample libraries.[7] For each instrument the dataset contains four-second 16 kHz audio snippets ranging over every pitch of a standard MIDI piano, played at five different velocities.[8] The dataset is made available under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.[9]
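The structure described above can be sketched in a few lines of Python. This is an illustrative sketch only: the field names loosely follow the published dataset description, but the exact record layout and the instrument id shown are hypothetical, and the arithmetic simply shows why the note count (305,979) is smaller than a full instrument × pitch × velocity grid.

```python
# A sketch of one NSynth record; field names are illustrative.
example = {
    "instrument": 42,              # hypothetical instrument id (0-1005)
    "pitch": 60,                   # MIDI note number within the piano range
    "velocity": 100,               # one of the five sampled velocities
    "audio": [0.0] * (4 * 16000),  # four seconds of 16 kHz samples
}

# If every instrument covered the full grid of pitches and velocities:
full_grid = 1006 * 88 * 5          # 442,640 combinations
actual_notes = 305_979             # many instruments span only part of the range

print(len(example["audio"]))       # 64000 samples per note
print(actual_notes < full_grid)    # True
```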

Model

A spectral autoencoder model and a WaveNet autoencoder model are publicly available on GitHub.[10] The baseline model computes a spectrogram with fft_size 1024 and hop_size 256, trains with an MSE loss on the spectrogram magnitudes, and reconstructs audio with the Griffin-Lim algorithm. The WaveNet model trains on mu-law-encoded waveform chunks of 6144 samples. It learns 16-dimensional embeddings that are downsampled by a factor of 512 in time.[1]
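The mu-law encoding and the temporal downsampling mentioned above can be sketched with NumPy. This is a minimal sketch of the standard 8-bit mu-law companding formula used by WaveNet-style models, not the repository's exact implementation; the helper name mu_law_encode and the quantization details are assumptions for illustration.

```python
import numpy as np

MU = 255  # 8-bit mu-law, as used by WaveNet-style models

def mu_law_encode(x, mu=MU):
    """Compand a waveform in [-1, 1] and quantize to mu + 1 integer levels."""
    companded = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    # Map [-1, 1] onto integer bins {0, ..., mu}
    return ((companded + 1) / 2 * mu + 0.5).astype(np.int32)

# One 6144-sample training chunk (a synthetic sine wave for illustration):
chunk = np.sin(np.linspace(0, 440 * 2 * np.pi, 6144))
encoded = mu_law_encode(chunk)

# A four-second note at 16 kHz, downsampled by 512 in time into
# 16-dimensional embeddings:
n_samples = 4 * 16000                       # 64,000 samples
embedding_shape = (n_samples // 512, 16)

print(encoded.min(), encoded.max())  # integer codes within [0, 255]
print(embedding_shape)               # (125, 16)
```

The companding step compresses quiet samples less than loud ones, which is why WaveNet-style models can get away with only 256 output classes per sample.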



NSynth Super
The NSynth Super front panel: a metal box with a brightly colored touchscreen.
NSynth Super front panel
Manufacturer: Google Brain, Google Creative Lab
Dates: 2018
Technical specifications
Synthesis type: Neural-network sample-based synthesis
Input/output
Left-hand control: Pitch bend, ADSR
External control: MIDI

NSynth Super

In 2018, Google released a hardware interface for the NSynth algorithm, called NSynth Super,[11][12] designed to provide musicians with an accessible physical interface to the algorithm for use in their artistic production.[13]

The design files, source code, and internal components are released under the open-source Apache 2.0 license,[14] enabling hobbyists and musicians to freely build and use the instrument.[15]

Hardware

The instrument pairs a brightly colored touchscreen with physical controls for pitch bend and ADSR envelope shaping, and accepts external MIDI input.

Influence

Notable artists, including Grimes and YACHT, have used NSynth Super in their music productions.[16]

The technology was also demonstrated in the "Music and Machine Learning" session at Google I/O 2019.[5]

References

  1. ^ a b Engel, Jesse; Resnick, Cinjon; Roberts, Adam; Dieleman, Sander; Eck, Douglas; Simonyan, Karen; Norouzi, Mohammad (2017). "Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders". arXiv:1704.01279 [cs.LG].
  2. ^ "Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders". research.google.
  3. ^ "Google's open-source neural synth is creating totally new sounds". wired.co.uk.
  4. ^ "73 | Grimes (c) on Music, Creativity, and Digital Personae – Sean Carroll". www.preposterousuniverse.com.
  5. ^ "Music and Machine Learning (Google I/O'19)". youtube.com.
  6. ^ "NSynth: Neural Audio Synthesis". Magenta.
  7. ^ "NSynth Dataset". activeloop.ai.
  8. ^ arXiv:1907.08520.
  9. ^ "The NSynth Dataset". tensorflow.org.
  10. ^ "NSynth: Neural Audio Synthesis". GitHub.
  11. ^ "NSYNTH SUPER". NSYNTH SUPER.
  12. ^ "Google built a musical instrument that uses AI and released the plans so you can make your own". CNBC. 13 March 2018.
  13. ^ "NSynth Super is an AI-backed touchscreen synth". The Verge.
  14. ^ "googlecreativelab/open-nsynth-super". April 1, 2021 – via GitHub.
  15. ^ "Open NSynth Super". hackaday.io.
  16. ^ "73 | Grimes (c) on Music, Creativity, and Digital Personae – Sean Carroll". www.preposterousuniverse.com.



