User:SparkWorks16/Music and artificial intelligence

Lead
Add: Erwin Panofsky proposed that in all art there exist three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject. AI music explores the foremost of these, creating music without the "intention" that usually lies behind it, leaving composers who listen to machine-generated pieces unsettled by the lack of apparent meaning.

History
Add the following:

Artificial intelligence finds its beginnings in music with the transcription problem: accurately recording a performance into musical notation as it is played. Père Engramelle's schematic of a "piano roll," a mode of automatically recording note timing and duration in a way that could easily be transcribed into proper musical notation by hand, was first implemented by the German engineers J.F. Unger and J. Hohlfield in 1752.

In 1957, the ILLIAC I (Illinois Automatic Computer) produced the "Illiac Suite for String Quartet," a completely computer-generated piece of music. The computer was programmed to accomplish this by the composer Lejaren Hiller and the mathematician Leonard Isaacson.

By 1983, Yamaha Corporation's Kansei Music System had gained momentum, and a paper was published on its development in 1989. The software used music information processing and artificial intelligence techniques to essentially solve the transcription problem for simple melodies, generating sheet music as the user played on a keyboard. This only held for simple pieces, however; transcribing higher-level melodic and musical complexity is regarded even today as a difficult deep learning task, and near-perfect transcription remains a subject of research.

EMI would later become the basis for a more sophisticated algorithm called Emily Howell, named for its creator.

In 2002, the music research team at the Sony Computer Science Laboratory Paris, led by French composer and scientist François Pachet, designed the Continuator, an algorithm uniquely capable of resuming a composition after a live musician stopped.

Emily Howell would continue to make advancements in musical artificial intelligence, publishing its first album, "From Darkness, Light," in 2009, and its second, "Breathless," in 2012.

In 2010, Iamus became the first AI to produce a fragment of original contemporary classical music in its own style: "Iamus' Opus 1." Housed at the Universidad de Málaga (University of Málaga) in Spain, the computer can generate a fully original piece in a variety of musical styles in the span of eight minutes.

ChucK
Developed at Princeton University by Ge Wang and Perry Cook, ChucK is a text-based, cross-platform audio programming language. By extracting and classifying the theoretical techniques it finds in musical pieces, the software is able to synthesize entirely new pieces from the techniques it has learned. The technology is used by SLOrk (Stanford Laptop Orchestra) and PLOrk (Princeton Laptop Orchestra).
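As a text-based language, ChucK expresses sound synthesis directly in code, with time advanced explicitly by the program. A minimal illustrative sketch (not drawn from the article) that plays one second of a sine tone:

```chuck
// connect a sine oscillator to the digital-to-analog converter (speakers)
SinOsc s => dac;
// set the oscillator's frequency to concert A (440 Hz)
440 => s.freq;
// advance time by one second; audio is synthesized as time passes
1::second => now;
```

The explicit advancement of time (`=> now`) is the language's signature "strongly-timed" idiom, which is what makes it practical for the live, synchronized performance settings of laptop orchestras such as SLOrk and PLOrk.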

Copyright
Add: Recent advancements in artificial intelligence by groups such as Stability AI, OpenAI, and Google have incurred an enormous number of copyright claims levelled against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their datasets restricted to the public domain.

Musical deepfakes
A more nascent development of AI in music is the application of audio deepfakes to cast the lyrics or musical style of a preexisting song onto the voice or style of another artist. This has raised many concerns regarding the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity. It has also raised the question of to whom the authorship of these works is attributed. As AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole.