Project description
We are introducing an AI-powered music generation solution. With these new technologies, anyone in the world can compose music on their own, without musical skills, formal music education, or access to specialized resources.
All one needs is a musical idea and a smartphone. With our app, the user will be able to turn this idea into a coherent musical story, complete with sound, lyrics and artwork that fit together.
The roadmap includes deciphering the DNA of musical creativity and breaking it down into its elementary components, which can then serve as building blocks for new content. To offer these services, our team will build state-of-the-art AI by leveraging large-scale computation and large datasets.
In a new music-oriented social network, people will be able to share, comment on and react to songs from around the world.
Using blockchain technology, we can radically simplify the rigid royalty-based legal system. Professional composers will be rewarded for their content in a distributed digital marketplace. It will also be possible to monetize songs and song parts (e.g. beats or movie sounds) as NFTs.
We will begin by offering aMUZE as an app on the iOS App Store. Users will provide some text as input on their smartphones (either by writing or by speaking). After making a few adjustments, such as duration or loudness, and selecting among the available emotions or music genres, aMUZE will create the melodic equivalent of the provided message. Users will be free to use the resulting musical snippet as they please: for example, they can export this melodic message and send it to friends via their favorite messaging app, use it as a beat for a song, or simply draw inspiration from it while songwriting.
This will be the first step towards AI-powered music composition based on lyrics. As a next step, users of aMUZE will be able to compose music from a simple melody that they whistle or hum into the microphone of their device. aMUZE will try to “understand” the original idea the user intended and create a song. Depending on their musical skills, users will be able to dive deep into the notes and refine the result using composition features found in music composition software (Digital Audio Workstations).
Gebert Rüf Stiftung – Instruction Sheet: Publishing your Project on our Website, 10.3.2021

At first, aMUZE will support selected music genres, including simple lullabies, children’s songs, arcade melodies, basic electronic music and meditation music.
Within our app, composers and content creators will be able to generate images based on seed words like “spring”, “flowers”, “sun” or “holiday”. After generating an image to their liking, they can use it as artwork for their song and make it part of their musical story.
With aMUZE, users will also be able to generate lyrics based on seed words and a few adjustable options; for example, they could reuse the same words used earlier for image generation. Given the name of a favorite artist and a timeframe (e.g. Michael Jackson, 80s), aMUZE will create lyrics that best fit this description.
Our emotional analysis algorithms will inform users about the psychological impact of the content they consume.
aMUZE will also offer its users a social network where they can share their songs and comment on and react to each other’s creations.
Songs and song parts will gain a financial dimension based on blockchain technology. If users activate this feature, their creations will be offered in the aMUZE marketplace at a price depending on specific criteria.
People participating in the project
, Project manager | Full stack developer (iOS)
Prof. Dr. Alexander, Supervisor
Dr. Luca Mazzola, Supervisor
Prof. Dr. Richard, Lead of AI R&D
Prof. Dr. Kostas, Lead of Software Development
Dimitri Spicher, Full stack developer (iOS), transcription
Shivam Adarsh, Back-end developer, composition
Last update of this project presentation: 18.12.2022