
In 2024, I started generating music with AI models. I've generally found that the latest models are now capable of producing essentially any sound. The limitations of AI music are now generally caused by my selecting the wrong inferences (a finished track usually takes 600-1000 generation runs plus editing), by not understanding song structure, or by not understanding what listeners like and dislike.
Every time I publish a new release, which I've been doing about once or twice per month, I post a poll to Manifold to bring it to users' attention. Each poll can be bet on through one of the related markets.
Once I feel that I have achieved a satisfactory song, I will attempt to get a radio station to play it, even if only as a novelty during a morning show or something like that. "Six Weeks From AGI" (https://soundcloud.com/steve-sokolowski-2/six-weeks-from-agi) may be sufficient, or I may wait for another song later this year, but I will try before AGI is actually achieved and anyone can generate something better than this with no effort.
If any station plays any portion of any song I have ever created where the entire sound is generated by AI, then this market will resolve to YES. If time runs out, it will resolve to NO.
Whether the song is played once, becomes a Billboard hit, or has only a portion aired by a small rural station's morning show to criticize what AI is able to achieve is not relevant to the resolution.
I intend to pay for and use the latest models and software tools as they are released, although I believe the musical "Turing Test" has already been passed.
Update 2025-02-06 (PST) (AI summary of creator comment): AI Created Song Clarification:
If an AGI model outputs MIDI files for instruments, which are then input into music production software to create the arrangement, that song will be considered an AI-created song.
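For illustration only, here is a minimal sketch of what that pipeline could look like: a model emits note data, and a short script writes it to a MIDI file with the mido library for import into a DAW. The note values and file name are hypothetical examples, not the creator's actual toolchain.

```python
# Minimal sketch: turn model-emitted note data into a MIDI file for a DAW.
# The notes list below is a hypothetical example of what a model might output.
import mido

notes = [(60, 480), (64, 480), (67, 480), (72, 960)]  # (MIDI pitch, ticks)

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('program_change', program=0, time=0))  # piano patch
for pitch, length in notes:
    track.append(mido.Message('note_on', note=pitch, velocity=80, time=0))
    track.append(mido.Message('note_off', note=pitch, velocity=80, time=length))

mid.save('arrangement_stem.mid')  # then imported into the production software
```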
@SteveSokolowski I’m interested in knowing more about the AI model that you’re using to produce MIDI files
@probajoelistic I"m not using one yet. I'm just reserving that possibility for o3 being able to do it.
I think, though, that I had a breakthrough that may end up making that unnecessary. I'm working on a "reasoning" or "test-time compute" architecture for music models over the next few days by using o3-mini-high to output Python.
The realization I had is that music models right now are like GPT-4.5 - they just output stuff and have no idea what it sounds like. We can add "reasoning" to them by connecting them to Gemini Pro 2.0 Experimental 0205 and entering a loop where Gemini improves the prompt until it is satisfied with the results.
In my manual testing, dragging and dropping files by hand, this works dramatically well. For example, it can completely eliminate the "AI-sounding vocals" issue. I'm hoping to have this automated by the end of the week.
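As a rough sketch of what that automated loop could look like (not the creator's actual code): a placeholder music-generation call produces an audio file, Gemini reviews it through the google-generativeai SDK, and the critique is folded back into the prompt until the critic is satisfied. The Gemini model ID and the generate_audio stub are assumptions to be swapped for whatever tools are actually in use.

```python
# Sketch of a "test-time compute" loop for music generation, assuming the
# google-generativeai SDK; the model id and generate_audio stub are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
critic = genai.GenerativeModel("gemini-2.0-pro-exp-02-05")  # assumed model id

def generate_audio(prompt: str) -> str:
    """Placeholder: render `prompt` with your music model and return the
    path to the resulting audio file."""
    raise NotImplementedError

def refine(prompt: str, max_rounds: int = 5) -> str:
    """Generate audio, let Gemini critique it, fold the critique back into
    the prompt, and stop once the critic is satisfied."""
    for _ in range(max_rounds):
        audio_path = generate_audio(prompt)
        audio = genai.upload_file(audio_path)
        review = critic.generate_content([
            "You are reviewing an AI-generated song. If the vocals or mix "
            "sound artificial, explain how to rewrite the generation prompt "
            "to fix it. If the track already sounds natural, reply only "
            "with SATISFIED.",
            f"Current prompt: {prompt}",
            audio,
        ]).text
        if "SATISFIED" in review:
            break
        prompt = f"{prompt}\n\nRevision notes from the last attempt:\n{review}"
    return prompt
```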