{{Short description|Usage of artificial intelligence to generate music}}
{{Use dmy dates|date=November 2024}}
{{Artificial intelligence}}
'''Music and artificial intelligence (AI)''' is the development of [[music software]] programs which use AI to generate music.<ref>{{cite journal |author1=D. Herremans |author2=C.H. |author3=Chuan, E. Chew |year=2017 |title=A Functional Taxonomy of Music Generation Systems |journal=ACM Computing Surveys |volume=50 |issue=5 |pages=69:1–30 |arxiv=1812.04186 |doi=10.1145/3108242 |s2cid=3483927}}</ref> As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn based on past data, such as in computer accompaniment technology, wherein the AI is capable of listening to a human performer and performing accompaniment.<ref>{{Cite web |last=Dannenberg |first=Roger |title=Artificial Intelligence, Machine Learning, and Music Understanding |url=https://pdfs.semanticscholar.org/f275/4c359d7ef052ab5997d71dc3e9443404565a.pdf |url-status=dead |archive-url=https://web.archive.org/web/20180823141845/https://pdfs.semanticscholar.org/f275/4c359d7ef052ab5997d71dc3e9443404565a.pdf |archive-date=August 23, 2018 |access-date=August 23, 2018 |website=Semantic Scholar |s2cid=17787070}}</ref> Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. There are other AI applications in music that cover not only music composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control. Current research includes the application of AI in [[Musical composition|music composition]], [[performance]], theory and digital [[Audio signal processing|sound processing]].


[[Erwin Panofsky]] proposed that in all art, there existed three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject.<ref>[http://tems.umn.edu/pdf/Panofsky_iconology2.pdf Erwin Panofsky, ''Studies in Iconology: Humanistic Themes in the Art of the Renaissance''. Oxford 1939.]</ref><ref>{{Citation |last=Dilly |first=Heinrich |title=Panofsky, Erwin: Zum Problem der Beschreibung und Inhaltsdeutung von Werken der bildenden Kunst |date=2020 |work=Kindlers Literatur Lexikon (KLL) |pages=1–2 |editor-last=Arnold |editor-first=Heinz Ludwig |url=https://doi.org/10.1007/978-3-476-05728-0_16027-1 |access-date=2024-03-03 |place=Stuttgart |publisher=J.B. Metzler |language=de |doi=10.1007/978-3-476-05728-0_16027-1 |isbn=978-3-476-05728-0}}</ref> AI music explores the first of these, creating music without the "intention" which usually lies behind it, leaving composers who listen to machine-generated pieces feeling unsettled by the lack of apparent meaning.<ref name="miranda">{{Cite journal |date=2021 |title=Handbook of Artificial Intelligence for Music |url=https://books.google.com/books?id=p7I2EAAAQBAJ |journal=SpringerLink |language=en |doi=10.1007/978-3-030-72116-9 |isbn=978-3-030-72115-2 |editor-last1=Miranda |editor-first1=Eduardo Reck}}</ref>


==History==
Artificial intelligence finds its beginnings in music with the transcription problem: accurately recording a performance into musical notation as it is played. [[Marie-Dominique-Joseph Engramelle|Père Engramelle]]'s schematic of a "piano roll", a mode of automatically recording note timing and duration in a way which could be easily transcribed to proper musical notation by hand, was first implemented by German engineers J.F. Unger and J. Hohlfield in 1752.<ref name=":0">{{Cite journal |title=Research in music and artificial intelligence |url=https://dl.acm.org/doi/epdf/10.1145/4468.4469 |access-date=2024-03-06 |journal=ACM Computing Surveys |date=1985 |language=en |doi=10.1145/4468.4469 |last1=Roads |first1=Curtis |volume=17 |issue=2 |pages=163–190 }}</ref>


In 1957, the ILLIAC I (Illinois Automatic Computer) produced the "Illiac Suite for String Quartet", a completely computer-generated piece of music. The computer was programmed to accomplish this by composer [[Lejaren Hiller]] and mathematician [[Leonard Isaacson]].<ref name=miranda/>{{rp|v–vii}}
In 1960, the Russian researcher Rudolf Zaripov published the world's first paper on algorithmic music composition, using the [[Ural (computer)|Ural-1]] computer.<ref>{{cite journal|last=Zaripov|first=Rudolf|title=Об алгоритмическом описании процесса сочинения музыки (On algorithmic description of process of music composition)|journal=[[Proceedings of the USSR Academy of Sciences]]|year=1960|volume=132|issue=6}}</ref>


In 1965, inventor [[Ray Kurzweil]] developed software capable of recognizing musical patterns and synthesizing new compositions from them. The computer first appeared on the quiz show ''[[I've Got a Secret]]''.<ref>{{Cite web |title=Ray Kurzweil |url=https://nationalmedals.org/laureate/ray-kurzweil/ |access-date=2024-09-10 |website=National Science and Technology Medals Foundation |language=en-US}}</ref>


By 1983, [[Yamaha Corporation]]'s Kansei Music System had gained momentum, and a paper was published on its development in 1989. The software utilized music information processing and artificial intelligence techniques to essentially solve the transcription problem for simpler melodies, although higher-level melodies and musical complexities are regarded even today as difficult deep-learning tasks, and near-perfect transcription is still a subject of research.<ref name=":0" /><ref>{{Cite journal |last1=Katayose |first1=Haruhiro |last2=Inokuchi |first2=Seiji |date=1989 |title=The Kansei Music System |url=https://www.jstor.org/stable/3679555 |journal=Computer Music Journal |volume=13 |issue=4 |pages=72–77 |doi=10.2307/3679555 |jstor=3679555 |issn=0148-9267}}</ref>



In 1997, an artificial intelligence program named Experiments in Musical Intelligence (EMI) appeared to outperform a human composer at the task of composing a piece of music to imitate the style of [[Bach]].<ref>{{cite news |last1=Johnson |first1=George |title=Undiscovered Bach? No, a Computer Wrote It |url=https://www.nytimes.com/1997/11/11/science/undiscovered-bach-no-a-computer-wrote-it.html |access-date=29 April 2020 |work=The New York Times |date=11 November 1997| quote=Dr. Larson was hurt when the audience concluded that his piece -- a simple, engaging form called a two-part invention -- was written by the computer. But he felt somewhat mollified when the listeners went on to decide that the invention composed by EMI (pronounced ''Emmy'') was genuine Bach.}}</ref> EMI would later become the basis for a more sophisticated algorithm called [[Emily Howell]], named for its creator.


In 2002, the music research team at the Sony Computer Science Laboratory Paris, led by French composer and scientist [[François Pachet]], designed the Continuator, an algorithm uniquely capable of resuming a composition after a live musician stopped.<ref>{{Cite journal |last=Pachet |first=François |date=September 2003 |title=The Continuator: Musical Interaction With Style |url=http://www.tandfonline.com/doi/abs/10.1076/jnmr.32.3.333.16861 |journal=Journal of New Music Research |volume=32 |issue=3 |pages=333–341 |doi=10.1076/jnmr.32.3.333.16861 |issn=0929-8215|hdl=2027/spo.bbp2372.2002.044 |hdl-access=free }}</ref>

[[Emily Howell]] would continue to make advancements in musical artificial intelligence, publishing its first album ''From Darkness, Light'' in 2009.<ref>{{Cite news |last=Lawson |first=Mark |date=2009-10-22 |title=This artificially intelligent music may speak to our minds, but not our souls |url=https://www.theguardian.com/commentisfree/2009/oct/22/music-computer-compose-copy |access-date=2024-09-10 |work=[[The Guardian]] |language=en-GB |issn=0261-3077}}</ref> Since then, many more pieces by artificial intelligence and various groups have been published.


In 2010, [[Iamus (computer)|Iamus]] became the first AI to produce a fragment of original contemporary classical music, in its own style: "Iamus' Opus 1". Located at the Universidad de Málaga (University of Málaga) in Spain, the computer can generate a fully original piece in a variety of musical styles.<ref>{{Cite news |date=2013-01-02 |title=Iamus: Is this the 21st century's answer to Mozart? |url=https://www.bbc.com/news/technology-20889644 |access-date=2024-09-10 |work=BBC News |language=en-GB}}</ref><ref name="miranda" />{{rp|468–481}} In August 2019, a large dataset consisting of 12,197 MIDI songs, each with its lyrics and melody,<ref>{{Citation |last=yy1lab |title=yy1lab/Lyrics-Conditioned-Neural-Melody-Generation |date=2024-11-13 |url=https://github.com/yy1lab/Lyrics-Conditioned-Neural-Melody-Generation |access-date=2024-11-19}}</ref> was created to investigate the feasibility of neural melody generation from lyrics using a deep conditional LSTM-GAN method.
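
A minimal sketch of a lyrics-conditioned generator of this kind is shown below. It is an illustrative outline in [[PyTorch]], not the published model; the layer sizes and the syllable-level <code>lyrics_embedding</code> input are assumptions made for the example.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class ConditionalMelodyGenerator(nn.Module):
    """Toy lyrics-conditioned melody generator (the GAN's generator side).

    Each output step is a (pitch, duration, rest) triple predicted from a
    noise vector concatenated with a syllable-level lyrics embedding.
    """

    def __init__(self, lyrics_dim=128, noise_dim=100, hidden_dim=256, out_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(lyrics_dim + noise_dim, hidden_dim,
                            num_layers=2, batch_first=True)
        self.project = nn.Linear(hidden_dim, out_dim)

    def forward(self, lyrics_embedding, noise):
        # lyrics_embedding: (batch, seq_len, lyrics_dim), one vector per syllable
        # noise: (batch, seq_len, noise_dim), sampled from N(0, 1)
        x = torch.cat([lyrics_embedding, noise], dim=-1)
        hidden, _ = self.lstm(x)
        return self.project(hidden)   # (batch, seq_len, 3): pitch, duration, rest

# A discriminator scoring (melody, lyrics) pairs would complete the GAN;
# both networks are trained adversarially on the paired MIDI/lyrics data.
generator = ConditionalMelodyGenerator()
fake_melody = generator(torch.randn(8, 20, 128), torch.randn(8, 20, 100))
</syntaxhighlight>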


With progress in [[generative AI]], models capable of creating complete musical compositions (including lyrics) from a simple text description have begun to emerge. Two notable web applications in this field are [[Suno AI]], launched in December 2023, and [[Udio]], which followed in April 2024.<ref>{{Cite web |last=Nair |first=Vandana |date=2024-04-11 |title=AI-Music Platform Race Accelerates with Udio |url=https://analyticsindiamag.com/ai-music-platform-race-accelerates-with-udio/ |access-date=2024-04-19 |website=Analytics India Magazine |language=en-US}}</ref>


==Software applications==

===Interactive scores===
Multimedia scenarios in interactive scores are represented by temporal objects, temporal relations, and interactive objects. Examples of temporal objects are sounds, videos and light controls. Temporal objects can be triggered by interactive objects (usually launched by the user), and several temporal objects can be executed simultaneously. A temporal object may contain other temporal objects: this hierarchy allows the start or end of a temporal object to be controlled through the start or end of its parent. Hierarchy is ever-present in all kinds of music: music pieces are often characterized by movements, parts, motives, and measures, among other segments.<ref>Mauricio Toro, Myriam Desainte-Catherine, Camilo Rueda. Formal semantics for interactive music scores: a framework to design, specify properties and execute interactive scenarios. Journal of Mathematics and Music 8 (1)</ref><ref>{{cite web|title=Open Software System for Interactive Applications|url=https://ossia.io/|access-date=23 January 2018|language=en-EN}}</ref>
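
This hierarchy of temporal objects maps naturally onto a tree-like data structure. The sketch below is only an illustration of that idea, with hypothetical class and field names; it is not drawn from any particular interactive-score system.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TemporalObject:
    """A node in an interactive score: a sound, video, or light cue."""
    name: str
    duration: float                      # seconds; may be altered by temporal relations
    children: List["TemporalObject"] = field(default_factory=list)
    triggered_by: Optional[str] = None   # name of an interactive object, if any

    def start(self, time: float) -> None:
        # Starting a parent starts every child it contains (the hierarchy
        # described above); offsets are omitted here for simplicity.
        print(f"{time:6.2f}s  start {self.name}")
        for child in self.children:
            child.start(time)

# A piece grouped into a movement and motives, one motive waiting on user input.
piece = TemporalObject("piece", 300.0, children=[
    TemporalObject("movement 1", 120.0, children=[
        TemporalObject("motive A", 15.0),
        TemporalObject("motive B", 15.0, triggered_by="footswitch"),
    ]),
])
piece.start(0.0)
</syntaxhighlight>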

===Computer Accompaniment (Carnegie Mellon University)===
The Computer Music Project at Carnegie Mellon University develops computer music and interactive performance technology to enhance human musical experience and creativity. This interdisciplinary effort draws on [[music theory]], [[cognitive science]], [[artificial intelligence]] and [[machine learning]], [[human computer interaction]], [[real-time systems]], [[computer graphics]] and animation, [[multimedia]], [[programming languages]], and [[signal processing]].<ref>[http://www-2.cs.cmu.edu/~music/ Computer Music Group]. 2.cs.cmu.edu. Retrieved on 2010-12-22.</ref>
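
Computer accompaniment generally depends on score following: aligning what the system hears from the performer with a stored score so that the accompaniment stays in step. The fragment below is a deliberately simplified, hypothetical illustration of that idea, using a greedy matcher over MIDI pitch numbers; production systems rely on more robust alignment methods.

<syntaxhighlight lang="python">
def follow_score(score, detected, window=4):
    """Greedy online score follower.

    score    -- list of expected MIDI pitches, in order
    detected -- pitches heard from the performer, in order
    Returns the index in the score reached after each detected note, which an
    accompaniment process could use to schedule its own notes.
    """
    position = 0
    trace = []
    for pitch in detected:
        # Search a small window ahead for the heard pitch, tolerating
        # wrong or extra notes from the performer.
        for offset in range(window):
            if position + offset < len(score) and score[position + offset] == pitch:
                position += offset + 1
                break
        trace.append(position)
    return trace

melody = [60, 62, 64, 65, 67, 69, 71, 72]   # C major scale
heard = [60, 62, 63, 65, 67]                # one wrong note (63)
print(follow_score(melody, heard))          # [1, 2, 2, 4, 5]
</syntaxhighlight>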


===ChucK===
Developed at Princeton University by Ge Wang and Perry Cook, ChucK is a text-based, cross-platform language.[16] By extracting and classifying the theoretical techniques it finds in musical pieces, the software is able to synthesize entirely new pieces from the techniques it has learned.[17] The technology is used by SLOrk (Stanford Laptop Orchestra)[18] and PLOrk (Princeton Laptop Orchestra).

===Jukedeck===
Jukedeck was a website that let people use artificial intelligence to generate original, royalty-free music for use in videos.[19][20] The team started building the music generation technology in 2010,[21] formed a company around it in 2012,[22] and launched the website publicly in 2015.[20] The technology used was originally a rule-based algorithmic composition system,[23] which was later replaced with artificial neural networks.[19] The website was used to create over 1 million pieces of music, and brands that used it included Coca-Cola, Google, UKTV, and the Natural History Museum, London.[24] In 2019, the company was acquired by ByteDance.[25][26][27]

===MorpheuS===
MorpheuS[28] is a research project by Dorien Herremans and Elaine Chew at Queen Mary University of London, funded by a Marie Skłodowská-Curie EU project. The system uses an optimization approach based on a variable neighborhood search algorithm to morph existing template pieces into novel pieces with a set level of tonal tension that changes dynamically throughout the piece. This optimization approach allows for the integration of a pattern detection technique in order to enforce long term structure and recurring themes in the generated music. Pieces composed by MorpheuS have been performed at concerts in both Stanford and London.
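
The search idea behind MorpheuS can be illustrated with a toy variable neighborhood search that nudges a melody toward a target tension profile. The tension measure and the move set below are simplified assumptions made for the example, not the project's actual implementation.

<syntaxhighlight lang="python">
import random

def tension(melody):
    """Crude stand-in for a tonal tension measure: larger leaps = more tension."""
    return [abs(b - a) for a, b in zip(melody, melody[1:])]

def cost(melody, target):
    return sum((t - g) ** 2 for t, g in zip(tension(melody), target))

def variable_neighborhood_search(melody, target, iterations=2000):
    """Morph a pitch sequence so its tension profile approaches `target`."""
    best = list(melody)
    for step in range(iterations):
        # The neighborhood size cycles, so stalled searches try larger moves
        # (the "variable" part of the method).
        k = 1 + (step % 3)
        candidate = list(best)
        for _ in range(k):
            i = random.randrange(len(candidate))
            candidate[i] += random.choice([-2, -1, 1, 2])  # small pitch perturbation
        if cost(candidate, target) < cost(best, target):
            best = candidate
    return best

template = [60, 62, 64, 65, 67, 65, 64, 62]   # template piece (MIDI pitches)
profile = [2, 2, 5, 7, 5, 2, 1]               # desired tension per interval
print(variable_neighborhood_search(template, profile))
</syntaxhighlight>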
===AIVA===
{{main|AIVA}}
Created in February 2016 in [[Luxembourg]], [[AIVA]] is a program that produces soundtracks for any type of media. The algorithms behind AIVA are based on deep learning architectures.<ref>{{Cite web |date=2017-03-09 |title=A New AI Can Write Music as Well as a Human Composer |url=https://futurism.com/a-new-ai-can-write-music-as-well-as-a-human-composer |access-date=2024-04-19 |website=Futurism}}</ref> AIVA has also been used to compose a rock track called ''On the Edge'',<ref>{{Cite web |last=Technologies |first=Aiva |date=2018-10-24 |title=The Making of AI-generated Rock Music with AIVA |url=https://medium.com/@aivatech/the-making-of-ai-generated-rock-music-with-aiva-9ae0257e6d5c |access-date=2024-04-19 |website=Medium |language=en}}</ref> as well as a pop tune, ''Love Sick'',<ref>{{Cite AV media |url=https://www.youtube.com/watch?v=gQSPjAYTlx8 |title=Lovesick {{!}} Composed with AIVA Artificial Intelligence - Official Video with Lyrics {{!}} Taryn Southern |date=2 May 2018}}</ref> in collaboration with singer [[Taryn Southern]],<ref>{{Cite web |last=Southern |first=Taryn |date=2018-05-10 |title=Algo-Rhythms: The future of album collaboration |url=https://techcrunch.com/2018/05/10/ai-is-the-future-of-rhythm-nation/ |access-date=2024-04-19 |website=TechCrunch |language=en-US}}</ref> for the creation of her 2018 album ''I am AI''.


===Google Magenta===
[[File:Hypnotic ambient electronic music by MusicLM.mp3|right|thumb|20-second music clip generated by MusicLM using the prompt "hypnotic ambient electronic music"]]
Google's Magenta team has published several AI music applications and technical papers since their launch in 2016.<ref>{{Cite web |date=2016-06-01 |title=Welcome to Magenta! |url=https://magenta.tensorflow.org/blog/2016/06/01/welcome-to-magenta/ |access-date=2024-04-19 |website=Magenta |language=en}}</ref> In 2017 they released the [[NSynth]] algorithm and dataset,<ref>{{Cite journal |last1=Engel |first1=Jesse |last2=Resnick |first2=Cinjon |last3=Roberts |first3=Adam |last4=Dieleman |first4=Sander |last5=Eck |first5=Douglas |last6=Simonyan |first6=Karen |last7=Norouzi |first7=Mohammad |date=2017 |title=Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders |journal=PMLR |arxiv=1704.01279}}</ref> and an [[open source]] hardware musical instrument, designed to facilitate musicians in using the algorithm.<ref>{{Citation |title=Open NSynth Super |date=2023-02-13 |url=https://github.com/googlecreativelab/open-nsynth-super |publisher=Google Creative Lab |access-date=2023-02-14}}</ref> The instrument was used by notable artists such as [[Grimes]] and [[Yacht (band)|YACHT]] in their albums.<ref>{{Cite web |title=Cover Story: Grimes is ready to play the villain |url=https://crackmagazine.net/article/long-reads/grimes-is-ready-to-play-the-villain/ |access-date=2023-02-14 |website=Crack Magazine}}</ref><ref>{{Cite web |date=2019-09-18 |title=What Machine-Learning Taught the Band YACHT About Themselves |url=https://losangeleno.com/people/what-machine-learning-taught-the-band-yacht-about-themselves/ |access-date=2023-02-14 |website=Los Angeleno |language=en-US}}</ref> In 2018, they released a piano improvisation app called Piano Genie. This was later followed by Magenta Studio, a suite of 5 MIDI plugins that allow music producers to elaborate on existing music in their DAW.<ref>{{Cite web |title=Magenta Studio |url=https://magenta.tensorflow.org/studio/ |access-date=2024-04-19 |website=Magenta |language=en}}</ref> In 2023, their machine learning team published a technical paper on GitHub that described MusicLM, a private text-to-music generator which they'd developed.<ref>{{Cite web |date=2023 |title=MusicLM |url=https://google-research.github.io/seanet/musiclm/examples/ |access-date=2024-04-19 |website=google-research.github.io}}</ref><ref>{{Cite web |last=Sandzer-Bell |first=Ezra |date=2024-02-16 |title=Best Alternatives to Google's AI-Powered MusicLM and MusicFX |url=https://www.audiocipher.com/post/musiclm |access-date=2024-04-19 |website=AudioCipher |language=en}}</ref>


===Riffusion===
{{excerpt|Riffusion}}


=== Spike AI ===
Spike AI is an AI-based [[audio plug-in]], developed by [[Spike Stent]] in collaboration with his son Joshua Stent and friend Henry Ramsey, that analyzes tracks and offers suggestions for improving clarity and other aspects of a mix during [[Audio mixing (recorded music)|mixing]]. Communication is handled by a [[chatbot]] trained on Spike Stent's personal data. The plug-in integrates into [[digital audio workstation]]s.<ref name="Mix-Magazine-Spike">{{Cite web |last=Levine |first=Mike |date=2024-10-04 |title=Spike AI — A Mix Product of the Week |url=https://www.mixonline.com/business/spike-ai-a-mix-product-of-the-week |access-date=2024-11-19 |website=[[Mix (magazine)]] |publisher=[[Future US]] |language=en-US}}</ref><ref name="SOS-Spike">{{Cite web |title=Spike Stent offers his expertise in Spike AI |url=https://www.soundonsound.com/news/spike-stent-offers-his-expertise-spike-ai |access-date=2024-11-19 |website=[[Sound on Sound]]}}</ref>
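
As a rough illustration of the kind of analysis a mixing assistant performs (this sketch is not Spike AI's method; the band limits and threshold are arbitrary assumptions), a mix can be screened for an over-heavy low-mid region:

<syntaxhighlight lang="python">
import numpy as np

def band_energy(signal, rate, low, high):
    """Fraction of total spectral energy between `low` and `high` Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    band = spectrum[(freqs >= low) & (freqs < high)].sum()
    return band / spectrum.sum()

def clarity_hint(signal, rate=44100):
    # A cluttered 200-500 Hz region is a common cause of a "muddy" mix.
    low_mid = band_energy(signal, rate, 200, 500)
    if low_mid > 0.4:                      # arbitrary illustrative threshold
        return "Low-mid energy is high; consider cutting around 300 Hz."
    return "Low-mid balance looks reasonable."

# Two seconds of synthetic audio: a 300 Hz-heavy signal triggers the hint.
t = np.linspace(0, 2, 2 * 44100, endpoint=False)
mix = np.sin(2 * np.pi * 300 * t) + 0.2 * np.sin(2 * np.pi * 2000 * t)
print(clarity_hint(mix))
</syntaxhighlight>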


== Musical applications ==
Artificial intelligence can change the way producers create music by generating new iterations of a track that follow a prompt supplied by the creator. These prompts steer the AI toward the particular style the artist is aiming for.<ref name=miranda/>

AI is also used in musical analysis, where it performs tasks such as feature extraction, pattern recognition, and music recommendation.<ref>{{Cite journal |last=Zhang |first=Yifei |date=December 2023 |title=Utilizing Computational Music Analysis and AI for Enhanced Music Composition: Exploring Pre- and Post-Analysis |journal=Journal of Advanced Zoology |volume=44 |issue=S-6 |pages=1377–1390 |doi=10.17762/jaz.v44is6.2470 |s2cid=265936281 |doi-access=free }}</ref>
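
A minimal sketch of such an analysis pipeline, assuming the open-source librosa library and placeholder file names, could extract timbral and harmonic features and then recommend the catalogue track closest to a query:

<syntaxhighlight lang="python">
import numpy as np
import librosa

def features(path):
    """Summarize a recording as a single timbre + harmony feature vector."""
    audio, rate = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=rate, n_mfcc=13)     # timbre
    chroma = librosa.feature.chroma_stft(y=audio, sr=rate)       # harmony
    return np.concatenate([mfcc.mean(axis=1), chroma.mean(axis=1)])

def recommend(query_path, catalogue_paths):
    """Return the catalogue track whose features are closest to the query."""
    query = features(query_path)
    distances = [np.linalg.norm(query - features(p)) for p in catalogue_paths]
    return catalogue_paths[int(np.argmin(distances))]

# Hypothetical file names, for illustration only.
print(recommend("query.wav", ["track_a.wav", "track_b.wav", "track_c.wav"]))
</syntaxhighlight>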


=== Composition ===
Artificial intelligence has had a major impact on the composition sector: it has influenced the ideas of composers and producers and has the potential to make the industry more accessible to newcomers. It is already used in collaboration with producers, who prompt the AI to follow specific requirements that fit their needs and use the results to generate ideas and explore musical styles. Future compositional uses of the technology include style emulation and fusion, as well as revision and refinement.<ref name=miranda/> Software such as [[ChatGPT]] has been used by producers for these tasks, while tools such as [[IZotope|Ozone 11]] have been used to automate time-consuming and complex activities such as [[Mastering (audio)|mastering]].<ref>{{Cite web |last=Sunkel |first=Cameron |date=2023-12-16 |title=New Research Reveals Top AI Tools Utilized by Music Producers |url=https://edm.com/gear-tech/top-ai-tools-music-producers |access-date=2024-04-03 |website=EDM.com - The Latest Electronic Dance Music News, Reviews & Artists |language=en}}</ref>
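
For example, a chord progression suggested by a text model can be rendered to MIDI for further refinement in a DAW. The sketch below assumes the pretty_midi library and uses a hard-coded progression standing in for a model's reply; it illustrates the workflow rather than any particular product's implementation.

<syntaxhighlight lang="python">
import pretty_midi

# Stand-in for a chord progression suggested by a text model.
progression = [["C4", "E4", "G4"], ["A3", "C4", "E4"],
               ["F3", "A3", "C4"], ["G3", "B3", "D4"]]

midi = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)          # acoustic grand piano

for bar, chord in enumerate(progression):
    for name in chord:
        pitch = pretty_midi.note_name_to_number(name)
        piano.notes.append(pretty_midi.Note(velocity=80, pitch=pitch,
                                            start=bar * 2.0, end=bar * 2.0 + 2.0))

midi.instruments.append(piano)
midi.write("sketch.mid")                           # import into a DAW to refine
</syntaxhighlight>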


==Copyright==
In the United States, the current legal framework tends to apply traditional copyright laws to AI, despite its differences with the human creative process.[53] However, music outputs solely generated by AI are not granted copyright protection. In the compendium of the U.S. Copyright Office Practices, the Copyright Office has stated that it would not grant copyrights to “works that lack human authorship” and “the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”[54] In February 2022, the Copyright Review Board rejected an application to copyright AI-generated artwork on the basis that it "lacked the required human authorship necessary to sustain a claim in copyright."[55]
The situation in the European Union (EU) is similar to that in the US, because its legal framework also emphasizes the role of human involvement in a copyright-protected work.<ref name=":2">Bulayenko, Oleksandr; Quintais, João Pedro; Gervais, Daniel J.; Poort, Joost (February 28, 2022). [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4072806 "AI Music Outputs: Challenges to the Copyright Legal Framework"]. ''reCreating Europe Report''. Retrieved 2024-04-03.</ref> According to the [[European Union Intellectual Property Office]] and the recent jurisprudence of the [[Court of Justice of the European Union]], the originality criterion requires a work to be the author’s own intellectual creation, reflecting the author’s personality as evidenced by the creative choices made during its production, which presupposes a distinct level of human involvement.<ref name=":2" /> The reCreating Europe project, funded by the European Union’s Horizon 2020 research and innovation program, examines the challenges posed by AI-generated content, including music, and argues for legal certainty and balanced protection that encourages innovation while respecting copyright norms.<ref name=":2" /> The recognition of [[AIVA]] marks a significant departure from traditional views on authorship and copyright in music composition, allowing an AI artist to release music and earn royalties. This acceptance makes AIVA a pioneering instance of an AI being formally acknowledged within music production.<ref>Ahuja, Virendra (June 11, 2021). [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3864922 "Artificial Intelligence and Copyright: Issues and Challenges"]. ''ILI Law Review Winter Issue 2020.'' Retrieved 2024-04-03.</ref>


Recent advancements in artificial intelligence by groups such as [[Stability AI]], [[OpenAI]], and [[Google]] have prompted a large number of copyright claims against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their training datasets restricted to the public domain.<ref>{{Cite journal |last=Samuelson |first=Pamela |date=2023-07-14 |title=Generative AI meets copyright |journal=Science |language=en |volume=381 |issue=6654 |pages=158–161 |doi=10.1126/science.adi0656 |issn=0036-8075|doi-access=free |pmid=37440639 }}</ref>


==Musical deepfakes==
A more nascent development of AI in music is the application of [[audio deepfake]]s to cast the lyrics or musical style of a pre-existing song onto the voice or style of another artist. This has raised many concerns regarding the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity.<ref>[https://ceur-ws.org/Vol-3528/paper3.pdf DeepDrake ft. BTS-GAN and TayloRVC: An Exploratory Analysis of Musical Deepfakes and Hosting Platforms]</ref> It has also raised the question of to whom the authorship of these works is attributed. As AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole.<ref>[https://www.cigionline.org/enwiki/static/documents/DPH-paper-Josan.pdf AI and Deepfake Voice Cloning: Innovation, Copyright and Artists’ Rights]</ref> Preventative measures have begun to be developed by [[Google]] and [[Universal Music Group]], which are negotiating a framework that takes royalties and credit attribution into account when producers replicate the voices and styles of artists.<ref>{{Cite news |title=Google and Universal Music negotiate deal over AI 'deepfakes' |url=https://www.ft.com/content/6f022306-2f83-4da7-8066-51386e8fe63b |access-date=2024-04-03 |newspaper=Financial Times|date=8 August 2023 |last1=Murgia |first1=Madhumita |last2=Nicolaou |first2=Anna }}</ref>

=== "Heart on My Sleeve" ===
In 2023, an artist known as ghostwriter977 created a musical deepfake called "[[Heart on My Sleeve (Ghostwriter977 song)|Heart on My Sleeve]]" that cloned the voices of [[Drake (musician)|Drake]] and [[The Weeknd]] by feeding an assortment of vocal-only tracks from the respective artists into a deep-learning algorithm, creating an artificial model of each artist's voice; this model could then be mapped onto original [[Scratch vocal|reference vocals]] with original lyrics.<ref name=":02">{{Cite magazine |last=Robinson |first=Kristin |date=2023-10-11 |title=Ghostwriter, the Mastermind Behind the Viral Drake AI Song, Speaks For the First Time |url=https://www.billboard.com/music/pop/ghostwriter-heart-on-my-sleeve-drake-ai-grammy-exclusive-interview-1235434099/ |access-date=2024-04-03 |magazine=Billboard |language=en-US}}</ref> The track was submitted for [[Grammy Awards|Grammy]] consideration for best rap song and song of the year.<ref>{{Cite web |title=Drake/The Weeknd deepfake song "Heart on My Sleeve" submitted to Grammys |url=https://www.thefader.com/2023/09/06/drake-the-weeknd-song-heart-on-my-sleeve-submitted-to-grammys |access-date=2024-04-03 |website=The FADER |language=en}}</ref> It went viral, gained traction on [[TikTok]], and received a positive response from listeners, leading to its official release on [[Apple Music]], [[Spotify]], and [[YouTube]] in April 2023.<ref name=":12">{{Cite web |title=The AI deepfake of Drake and The Weeknd will not be eligible for a GRAMMY |url=https://mixmag.net/read/ai-deepfake-drake-and-the-weeknd-track-is-not-eligible-for-grammy-award-news |access-date=2024-04-03 |website=Mixmag}}</ref> Many believed the track was fully composed by AI software, but the producer stated that the songwriting, production, and original pre-conversion vocals were still his own work.<ref name=":02" /> The track was later withdrawn from Grammy consideration because it did not follow the guidelines necessary to be considered for an award,<ref name=":12" /> and it was eventually removed from all music platforms by [[Universal Music Group]].<ref name=":12" /> The song was a watershed moment for AI voice cloning, and models have since been created for hundreds, if not thousands, of popular singers and rappers.


=== "Where That Came From" ===
In 2013, country music singer [[Randy Travis]] suffered a [[stroke]] which left him unable to sing. In the meantime, vocalist James Dupré toured on his behalf, singing his songs for him. Travis and longtime producer [[Kyle Lehning]] released a new song in May 2024 titled "[[Where That Came From]]", Travis's first new song since his stroke. The recording uses AI technology to re-create Travis's singing voice, having been composited from over 40 existing vocal recordings alongside those of Dupré.<ref name="Tennesseean">{{cite web|url=https://www.tennessean.com/story/entertainment/music/2024/05/06/randy-travis-now-where-that-came-now-ai-origin/73585407007/|title=Randy Travis' shocks music industry with AI pairing for 'Where That Came From.' How the song came together|author=Marcus K. Dowling|date=May 6, 2024|website=The Tennesseean|accessdate=May 6, 2024}}</ref><ref>{{cite web|url=https://apnews.com/article/randy-travis-artificial-intelligence-song-voice-589a8c142f70ed8ccf53af6d32c662dc|title=With help from AI, Randy Travis got his voice back. Here's how his first song post-stroke came to be|author=Maria Sherman|date=May 6, 2024|website=AP News|accessdate=May 6, 2024}}</ref>


==See also==
* [[Computational models of musical creativity]]
* [[Generative artificial intelligence]]
* [[Generative music]]
* [[List of music software]]
* [[Music information retrieval]]
* {{section link|OpenAI|Music generation}}


==References==
{{Reflist}}

==External links==
*[https://opendream.ai/ OpenDream]


{{Artificial intelligence navbox}}
{{Computer music}}


{{DEFAULTSORT:Music And Artificial Intelligence}}
[[Category:Artificial intelligence art]]
[[Category:Cognitive musicology]]
[[Category:Computer music]]

Spike AI

[edit]

Spike AI is an AI-based audio plug-in, developed by Spike Stent in collaboration with his son Joshua Stent and friend Henry Ramsey, that analyzes tracks and provides suggestions to increase clarity and other aspects during mixing. Communication is done by using a chatbot trained on Spike Stent's personal data. The plug-in integrates into digital audio workstation.[49][50]

Musical applications

[edit]

Artificial Intelligence has the opportunity to impact how producers create music by giving reiterations of a track that follow a prompt given by the creator. These prompts allow the AI to follow a certain style that the artist is trying to go for.[5]

AI has also been seen in musical analysis where it has been used for feature extraction, pattern recognition, and musical recommendations.[51]

Composition

Artificial intelligence has had a major impact on composition, influencing the ideas of composers and producers and potentially making the industry more accessible to newcomers. It is already used in collaboration with producers: artists use such software to generate ideas and explore musical styles by prompting the AI with specific requirements that fit their needs. Anticipated compositional uses of the technology include style emulation and fusion, as well as revision and refinement.[5] Software such as ChatGPT has been used by producers for these tasks, while tools such as Ozone 11 have been used to automate time-consuming and complex activities such as mastering.[52]

Copyright

In the United States, the current legal framework tends to apply traditional copyright law to AI, despite its differences from the human creative process.[53] However, music outputs generated solely by AI are not granted copyright protection. In the Compendium of U.S. Copyright Office Practices, the Copyright Office has stated that it would not grant copyright to “works that lack human authorship” and that “the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”[54] In February 2022, the Copyright Review Board rejected an application to copyright AI-generated artwork on the basis that it "lacked the required human authorship necessary to sustain a claim in copyright."[55]

The situation in the European Union (EU) is similar to that in the US, because its legal framework also emphasizes the role of human involvement in a copyright-protected work.[56] According to the European Union Intellectual Property Office and recent jurisprudence of the Court of Justice of the European Union, the originality criterion requires the work to be the author’s own intellectual creation, reflecting the author’s personality through the creative choices made during its production, which in turn requires a distinct level of human involvement.[56] The reCreating Europe project, funded by the European Union’s Horizon 2020 research and innovation programme, examines the challenges posed by AI-generated content, including music, and recommends legal certainty and balanced protection that encourages innovation while respecting copyright norms.[56] The recognition of AIVA marks a significant departure from traditional views on authorship and copyright in music composition, allowing an AI artist to release music and earn royalties. This acceptance makes AIVA a pioneering instance of an AI being formally acknowledged within music production.[57]

Recent advances in artificial intelligence by groups such as Stability AI, OpenAI, and Google have drawn numerous copyright claims against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their training datasets restricted to the public domain.[58]

Musical deepfakes

A more nascent development of AI in music is the application of audio deepfakes to render the lyrics or musical style of a pre-existing song in the voice or style of another artist. This has raised concerns about the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity.[59] It has also raised the question of to whom the authorship of these works is attributed. Because AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole.[60] Preventative measures have recently begun to be developed by Google and Universal Music Group, which are negotiating royalty and credit-attribution arrangements that would allow producers to replicate the voices and styles of artists.[61]

"Heart on My Sleeve"

In 2023, an artist known as ghostwriter977 created a musical deepfake called "Heart on My Sleeve" that cloned the voices of Drake and The Weeknd. An assortment of vocal-only tracks from each artist was fed into a deep-learning algorithm, creating an artificial model of each voice, which could then be mapped onto original reference vocals with original lyrics.[62] The track was submitted for Grammy consideration for best rap song and song of the year.[63] It went viral on TikTok and received a positive response from listeners, leading to its official release on Apple Music, Spotify, and YouTube in April 2023.[64] Many believed the track was composed entirely by AI software, but the producer stated that the songwriting, production, and original (pre-conversion) vocals were his own.[62] The track was later withdrawn from Grammy consideration because it did not follow the guidelines necessary to be considered for an award,[64] and it was removed from all music platforms by Universal Music Group.[64] The song was a watershed moment for AI voice cloning, and models have since been created for hundreds, if not thousands, of popular singers and rappers.

"Where That Came From"

In 2013, country music singer Randy Travis suffered a stroke which left him unable to sing. In the years that followed, vocalist James Dupré toured on his behalf, singing Travis's songs for him. Travis and longtime producer Kyle Lehning released a new song in May 2024 titled "Where That Came From", Travis's first new song since his stroke. The recording uses AI technology to re-create Travis's singing voice, compositing it from over 40 existing vocal recordings together with those of Dupré.[65][66]

See also

References

  1. ^ D. Herremans; C.H.; Chuan, E. Chew (2017). "A Functional Taxonomy of Music Generation Systems". ACM Computing Surveys. 50 (5): 69:1–30. arXiv:1812.04186. doi:10.1145/3108242. S2CID 3483927.
  2. ^ Dannenberg, Roger. "Artificial Intelligence, Machine Learning, and Music Understanding" (PDF). Semantic Scholar. S2CID 17787070. Archived from the original (PDF) on 23 August 2018. Retrieved 23 August 2018.
  3. ^ Erwin Panofsky, Studies in Iconology: Humanistic Themes in the Art of the Renaissance. Oxford 1939.
  4. ^ Dilly, Heinrich (2020), Arnold, Heinz Ludwig (ed.), "Panofsky, Erwin: Zum Problem der Beschreibung und Inhaltsdeutung von Werken der bildenden Kunst", Kindlers Literatur Lexikon (KLL) (in German), Stuttgart: J.B. Metzler, pp. 1–2, doi:10.1007/978-3-476-05728-0_16027-1, ISBN 978-3-476-05728-0, retrieved 3 March 2024
  5. ^ a b c d e Miranda, Eduardo Reck, ed. (2021). "Handbook of Artificial Intelligence for Music". SpringerLink. doi:10.1007/978-3-030-72116-9. ISBN 978-3-030-72115-2.
  6. ^ a b Roads, Curtis (1985). "Research in music and artificial intelligence". ACM Computing Surveys. 17 (2): 163–190. doi:10.1145/4468.4469. Retrieved 6 March 2024.
  7. ^ Zaripov, Rudolf (1960). "Об алгоритмическом описании процесса сочинения музыки (On algorithmic description of process of music composition)". Proceedings of the USSR Academy of Sciences. 132 (6).
  8. ^ "Ray Kurzweil". National Science and Technology Medals Foundation. Retrieved 10 September 2024.
  9. ^ Katayose, Haruhiro; Inokuchi, Seiji (1989). "The Kansei Music System". Computer Music Journal. 13 (4): 72–77. doi:10.2307/3679555. ISSN 0148-9267. JSTOR 3679555.
  10. ^ Johnson, George (11 November 1997). "Undiscovered Bach? No, a Computer Wrote It". The New York Times. Retrieved 29 April 2020. Dr. Larson was hurt when the audience concluded that his piece -- a simple, engaging form called a two-part invention -- was written by the computer. But he felt somewhat mollified when the listeners went on to decide that the invention composed by EMI (pronounced Emmy) was genuine Bach.
  11. ^ Pachet, François (September 2003). "The Continuator: Musical Interaction With Style". Journal of New Music Research. 32 (3): 333–341. doi:10.1076/jnmr.32.3.333.16861. hdl:2027/spo.bbp2372.2002.044. ISSN 0929-8215.
  12. ^ Lawson, Mark (22 October 2009). "This artificially intelligent music may speak to our minds, but not our souls". The Guardian. ISSN 0261-3077. Retrieved 10 September 2024.
  13. ^ "Iamus: Is this the 21st century's answer to Mozart?". BBC News. 2 January 2013. Retrieved 10 September 2024.
  14. ^ yy1lab (13 November 2024). yy1lab/Lyrics-Conditioned-Neural-Melody-Generation. Retrieved 19 November 2024.
  15. ^ Nair, Vandana (11 April 2024). "AI-Music Platform Race Accelerates with Udio". Analytics India Magazine. Retrieved 19 April 2024.
  16. ^ ChucK => Strongly-timed, On-the-fly Audio Programming Language. Chuck.cs.princeton.edu. Retrieved on 2010-12-22.
  17. ^ Foundations of On-the-fly Learning in the ChucK Programming Language
  18. ^ Driver, Dustin. (1999-03-26) Pro - Profiles - Stanford Laptop Orchestra (SLOrk), pg. 1. Apple. Retrieved on 2010-12-22.
  19. ^ a b "From Jingles to Pop Hits, A.I. Is Music to Some Ears". The New York Times. 22 January 2017. Retrieved 3 January 2023.
  20. ^ a b "Need Music For A Video? Jukedeck's AI Composer Makes Cheap, Custom Soundtracks". techcrunch.com. 7 December 2015. Retrieved 3 January 2023.
  21. ^ "What Will Happen When Machines Write Songs Just as Well as Your Favorite Musician?". motherjones.com. Retrieved 3 January 2023.
  22. ^ Cookson, Robert (7 December 2015). "Jukedeck's computer composes music at touch of a button". Financial Times. Retrieved 3 January 2023.
  23. ^ "Jukedeck: the software that writes music by itself, note by note". Wired UK. Retrieved 3 January 2023.
  24. ^ "Robot rock: how AI singstars use machine learning to write harmonies". standard.co.uk. March 2018. Retrieved 3 January 2023.
  25. ^ "TIKTOK OWNER BYTEDANCE BUYS AI MUSIC COMPANY JUKEDECK". musicbusinessworldwide.com. 23 July 2019. Retrieved 3 January 2023.
  26. ^ "As TikTok's Music Licensing Reportedly Expires, Owner ByteDance Purchases AI Music Creation Startup JukeDeck". digitalmusicnews.com. 23 July 2019. Retrieved 3 January 2023.
  27. ^ "An AI-generated music app is now part of the TikTok group". sea.mashable.com. 24 July 2019. Retrieved 3 January 2023.
  28. ^ D. Herremans; E. Chew (2016). "MorpheuS: Automatic music generation with recurrent pattern constraints and tension profiles". IEEE Transactions on Affective Computing. PP(1). arXiv:1812.04832. doi:10.1109/TAFFC.2017.2737984. S2CID 54475410.
  29. ^ "A New AI Can Write Music as Well as a Human Composer". Futurism. 9 March 2017. Retrieved 19 April 2024.
  30. ^ Technologies, Aiva (24 October 2018). "The Making of AI-generated Rock Music with AIVA". Medium. Retrieved 19 April 2024.
  31. ^ Lovesick | Composed with AIVA Artificial Intelligence - Official Video with Lyrics | Taryn Southern. 2 May 2018.
  32. ^ Southern, Taryn (10 May 2018). "Algo-Rhythms: The future of album collaboration". TechCrunch. Retrieved 19 April 2024.
  33. ^ "Welcome to Magenta!". Magenta. 1 June 2016. Retrieved 19 April 2024.
  34. ^ Engel, Jesse; Resnick, Cinjon; Roberts, Adam; Dieleman, Sander; Eck, Douglas; Simonyan, Karen; Norouzi, Mohammad (2017). "Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders". PMLR. arXiv:1704.01279.
  35. ^ Open NSynth Super, Google Creative Lab, 13 February 2023, retrieved 14 February 2023
  36. ^ "Cover Story: Grimes is ready to play the villain". Crack Magazine. Retrieved 14 February 2023.
  37. ^ "What Machine-Learning Taught the Band YACHT About Themselves". Los Angeleno. 18 September 2019. Retrieved 14 February 2023.
  38. ^ "Magenta Studio". Magenta. Retrieved 19 April 2024.
  39. ^ "MusicLM". google-research.github.io. 2023. Retrieved 19 April 2024.
  40. ^ Sandzer-Bell, Ezra (16 February 2024). "Best Alternatives to Google's AI-Powered MusicLM and MusicFX". AudioCipher. Retrieved 19 April 2024.
  41. ^ a b c Coldewey, Devin (15 December 2022). "Try 'Riffusion,' an AI model that composes music by visualizing it".
  42. ^ a b Nasi, Michele (15 December 2022). "Riffusion: creare tracce audio con l'intelligenza artificiale". IlSoftware.it.
  43. ^ "Essayez "Riffusion", un modèle d'IA qui compose de la musique en la visualisant". 15 December 2022.
  44. ^ a b "文章に沿った楽曲を自動生成してくれるAI「Riffusion」登場、画像生成AI「Stable Diffusion」ベースで誰でも自由に利用可能". GIGAZINE. 16 December 2022.
  45. ^ a b Llano, Eutropio (15 December 2022). "El generador de imágenes AI también puede producir música (con resultados de otro mundo)".
  46. ^ "Mubert launches Text-to-Music interface – a completely new way to generate music from a single text prompt". 21 December 2022.
  47. ^ "MusicLM: Generating Music From Text". 26 January 2023.
  48. ^ "5 Reasons Google's MusicLM AI Text-to-Music App is Different". 27 January 2023.
  49. ^ Levine, Mike (4 October 2024). "Spike AI — A Mix Product of the Week". Mix (magazine). Future US. Retrieved 19 November 2024.
  50. ^ "Spike Stent offers his expertise in Spike AI". Sound on Sound. Retrieved 19 November 2024.
  51. ^ Zhang, Yifei (December 2023). "Utilizing Computational Music Analysis and AI for Enhanced Music Composition: Exploring Pre- and Post-Analysis". Journal of Advanced Zoology. 44 (S-6): 1377–1390. doi:10.17762/jaz.v44is6.2470. S2CID 265936281.
  52. ^ Sunkel, Cameron (16 December 2023). "New Research Reveals Top AI Tools Utilized by Music Producers". EDM.com - The Latest Electronic Dance Music News, Reviews & Artists. Retrieved 3 April 2024.
  53. ^ "Art created by AI cannot be copyrighted, says US officials – what does this mean for music?". MusicTech. Retrieved 27 October 2022.
  54. ^ "Can (and should) AI-generated works be protected by copyright?". Hypebot. 28 February 2022. Retrieved 27 October 2022.
  55. ^ Re: Second Request for Reconsideration for Refusal to Register A Recent Entrance to Paradise (Correspondence ID 1-3ZPC6C3; SR # 1-7100387071) (PDF) (Report). Copyright Review Board, United States Copyright Office. 14 February 2022.
  56. ^ a b c Bulayenko, Oleksandr; Quintais, João Pedro; Gervais, Daniel J.; Poort, Joost (February 28, 2022). "AI Music Outputs: Challenges to the Copyright Legal Framework". reCreating Europe Report. Retrieved 2024-04-03.
  57. ^ Ahuja, Virendra (June 11, 2021). "Artificial Intelligence and Copyright: Issues and Challenges". ILI Law Review Winter Issue 2020. Retrieved 2024-04-03.
  58. ^ Samuelson, Pamela (14 July 2023). "Generative AI meets copyright". Science. 381 (6654): 158–161. doi:10.1126/science.
  59. ^ DeepDrake ft. BTS-GAN and TayloRVC: An Exploratory Analysis of Musical Deepfakes and Hosting Platforms
  60. ^ AI and Deepfake Voice Cloning: Innovation, Copyright and Artists’ Rights
  61. ^ Murgia, Madhumita; Nicolaou, Anna (8 August 2023). "Google and Universal Music negotiate deal over AI 'deepfakes'". Financial Times. Retrieved 3 April 2024.
  62. ^ a b Robinson, Kristin (11 October 2023). "Ghostwriter, the Mastermind Behind the Viral Drake AI Song, Speaks For the First Time". Billboard. Retrieved 3 April 2024.
  63. ^ "Drake/The Weeknd deepfake song "Heart on My Sleeve" submitted to Grammys". The FADER. Retrieved 3 April 2024.
  64. ^ a b c "The AI deepfake of Drake and The Weeknd will not be eligible for a GRAMMY". Mixmag. Retrieved 3 April 2024.
  65. ^ Marcus K. Dowling (6 May 2024). "Randy Travis' shocks music industry with AI pairing for 'Where That Came From.' How the song came together". The Tennesseean. Retrieved 6 May 2024.
  66. ^ Maria Sherman (6 May 2024). "With help from AI, Randy Travis got his voice back. Here's how his first song post-stroke came to be". AP News. Retrieved 6 May 2024.

Further reading
