User:Younghazi/Music and artificial intelligence

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Ejromero03 (talk | contribs) at 23:41, 27 March 2024. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Introduction

Music and artificial intelligence is the development of music software programs which use AI to generate music. As with applications in other fields, AI in music simulates cognitive tasks. A prominent feature is the capability of an AI algorithm to learn from past data, as in computer accompaniment technology, wherein the AI listens to a human performer and plays accompaniment. Artificial intelligence also drives interactive composition technology, in which a computer composes music in response to a live performance. Other AI applications in music cover not only composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control.

Erwin Panofsky proposed that in all art there exist three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject. AI music engages only the first of these, creating music without the "intention" usually behind it, which can leave composers who listen to machine-generated pieces unsettled by the lack of apparent meaning.

History

Artificial intelligence's role in music began with the transcription problem: accurately recording a performance into musical notation as it is played. Père Engramelle's schematic of a "piano roll", a means of automatically recording note timing and duration so that a performance could easily be transcribed into proper musical notation by hand, was first implemented by the German engineers J.F. Unger and J. Hohlfield in 1752.
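The core of the transcription problem described above is mapping freely timed performance data onto a discrete notational grid. The following minimal sketch (illustrative only, not drawn from any system named in this article; the function name and parameters are hypothetical) shows one way to quantize recorded note-onset times to a sixteenth-note grid, much as a piano roll discretizes timing and duration:

```python
def quantize_onsets(onsets_sec, tempo_bpm=120, subdivision=4):
    """Snap performed onset times (in seconds) to the nearest grid slot.

    subdivision=4 means four slots per beat, i.e. a sixteenth-note grid.
    Returns integer grid indices that could then be rendered as notation.
    """
    beat_sec = 60.0 / tempo_bpm          # duration of one beat in seconds
    grid_sec = beat_sec / subdivision    # duration of one grid slot
    return [round(t / grid_sec) for t in onsets_sec]

# A slightly uneven performance of four sixteenth notes at 120 BPM
# (ideal onsets would fall at 0.000, 0.125, 0.250, 0.375 seconds):
print(quantize_onsets([0.0, 0.13, 0.24, 0.38]))  # prints [0, 1, 2, 3]
```

Real transcription systems must additionally infer tempo, meter, and note durations from the audio itself, which is what makes the problem hard; this sketch assumes tempo and subdivision are already known.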

Software Applications

Musical Applications

Identification:

Composition:

https://link-springer-com.libproxy1.usc.edu/book/10.1007/978-3-030-72116-9

Musical Analysis:

The question of who owns the copyright to AI music outputs remains uncertain. When AI is used as a collaborative tool within the human creative process, current US copyright law is likely to apply. However, music outputs generated solely by AI are not granted copyright protection. In the Compendium of U.S. Copyright Office Practices, the Copyright Office has stated that it would not grant copyrights to "works that lack human authorship" and that "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author." In February 2022, the Copyright Review Board rejected an application to copyright AI-generated artwork on the basis that it "lacked the required human authorship necessary to sustain a claim in copyright."

Recent advances in artificial intelligence by groups such as Stability AI, OpenAI, and Google have prompted a large number of copyright claims against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their training datasets restricted to the public domain.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4072806

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3864922

Musical Deepfakes

(PREVIOUS) A more nascent development of AI in music is the application of audio deepfakes to cast the lyrics or musical style of a preexisting song into the voice or style of another artist. This has raised many concerns about the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity. It has also raised the question of to whom authorship of these works is attributed. As AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole.

(NEW) Heart on My Sleeve

Notes:

A musical AI deepfake known as "Heart on My Sleeve", which imitates two popular artists, Drake and The Weeknd, and was submitted for consideration for two Grammy Awards (Best Rap Song and Song of the Year)

Received popularity through TikTok

Released in April 2023 on Apple Music, Spotify, and YouTube

Would later be removed by Universal Music Group

Featured Metro Boomin's famous producer tag


Health Impacts of AI Music

References

  1. ^ Williams, Duncan; Hodge, Victoria J.; Wu, Chia-Yu (2020). "On the use of AI for Generation of Functional Music to Improve Mental Health". Frontiers in Artificial Intelligence. 3. doi:10.3389/frai.2020.497864. ISSN 2624-8212.

  1. ^ Nicolaou, Anna; Murgia, Madhumita (August 8, 2023). "Google and Universal Music negotiate deal over AI 'deepfakes'". Financial Times. Retrieved 2024-03-27.
  2. ^ Feffer, Michael; Lipton, Zachary C.; Donahue, Chris. "DeepDrake ft. BTS-GAN and TayloRVC: An Exploratory Analysis of Musical Deepfakes and Hosting Platforms" (PDF). Carnegie Mellon University.