In a remarkable leap forward in the generative AI race that ChatGPT helped ignite, Stable Audio, a text-to-audio AI model, is poised to change the way we create music and possibly challenge musicians for their jobs.

Developed by Stability AI, the company behind the text-to-image synthesis model Stable Diffusion, this audio model takes AI capabilities to a whole new level.

The model turns simple text prompts into finished audio: typing "dramatic intro music" can instantly generate a soaring symphony, while "lofi hip hop beat melodic chillhop 85 bpm" yields a high-quality track.

The model delivers not only music but also sound effects, such as an airline pilot speaking over an intercom or the ambience of a bustling restaurant.

To train the model, Stability AI partnered with AudioSparx, licensing a massive dataset of over 800,000 audio files and their corresponding text metadata. After being fed 19,500 hours of audio, Stable Audio learned to imitate sounds based on text descriptions, bridging the gap between text and lifelike audio generation.
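A quick back-of-the-envelope check on those figures suggests the training clips average under a minute and a half each (both numbers are taken directly from the article; the calculation is just illustrative):

```python
# Sanity check on the reported training data:
# 800,000 files totaling 19,500 hours of audio.
NUM_FILES = 800_000
TOTAL_HOURS = 19_500

total_seconds = TOTAL_HOURS * 3600
avg_clip_seconds = total_seconds / NUM_FILES

print(f"average clip length: {avg_clip_seconds:.2f} s")  # prints "average clip length: 87.75 s"
```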

The best part is that Stable Audio isn't keeping this capability locked away: Stability AI plans to offer it in a free tier and a $12-per-month Pro plan. The free tier lets you generate up to 20 tracks per month, each lasting up to 20 seconds, while the Pro plan gives you the freedom to create 500 tracks per month, each with a duration of up to 90 seconds.
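Multiplying out the announced limits shows how different the two monthly audio budgets are (track counts and durations are from the article; the totals assume every track uses its maximum length):

```python
# Maximum audio per month under each announced plan.
plans = {
    "Free": {"tracks": 20, "max_seconds": 20},
    "Pro": {"tracks": 500, "max_seconds": 90},
}

budgets = {}
for name, plan in plans.items():
    # Total seconds if every track is generated at its maximum duration.
    budgets[name] = plan["tracks"] * plan["max_seconds"]
    print(f"{name}: up to {budgets[name]} s (~{budgets[name] / 3600:.2f} h) per month")
```

That works out to under seven minutes of audio on the free tier versus 12.5 hours on Pro.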

Notably, Stable Audio is not the first music generator to employ latent diffusion techniques. However, its 44.1 kHz stereo output (often called "CD quality") surpasses earlier attempts such as Google's AI music generator MusicLM, which produces 24 kHz audio, and Meta's AudioCraft. Its sibling model, Stable Diffusion, boasts more than 10 million users, according to company claims.
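The fidelity gap is easy to quantify in raw data-rate terms. A minimal sketch, assuming uncompressed 16-bit PCM and mono output for MusicLM (neither detail is stated in the article; only the sample rates and "stereo" for Stable Audio are):

```python
BITS_PER_SAMPLE = 16  # assumed: common PCM bit depth, not stated in the article

def pcm_bitrate(sample_rate_hz: int, channels: int) -> int:
    """Bits per second of uncompressed PCM audio."""
    return sample_rate_hz * channels * BITS_PER_SAMPLE

cd_quality = pcm_bitrate(44_100, channels=2)  # Stable Audio: 44.1 kHz stereo
musiclm = pcm_bitrate(24_000, channels=1)     # MusicLM: 24 kHz (mono assumed)

print(f"44.1 kHz stereo: {cd_quality / 1000:.1f} kbit/s")  # prints "44.1 kHz stereo: 1411.2 kbit/s"
print(f"24 kHz mono: {musiclm / 1000:.1f} kbit/s")         # prints "24 kHz mono: 384.0 kbit/s"
```

Under those assumptions, Stable Audio's output carries several times the raw audio data of MusicLM's.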

As AI-generated music continues to evolve, questions inevitably arise about its impact on musicians. Human creativity remains unmatched for now, but it's undeniable that the gap between human and AI-generated music is closing rapidly.