Illustrious XL Model Series: Addressing Open-Source Concerns & Future Plans

It's been a while since we released Illustrious XL v0.1, and we know many of you have been eagerly waiting for updates. We also recognize that many are disappointed with the closed-source nature of Illustrious XL v1.0, and we want to address this directly. A lot has happened since then, and we're truly grateful for the open-source community's contributions, whether large-scale fine-tuned models, ControlNets, or the countless LoRAs and adapters that have been developed.


Development Journey:

When we started working on the Illustrious XL series, our goal was simple: there weren’t any strong pretrained models available for illustrations, so we decided to build one ourselves—a pretrain-level fine-tuned model that artists and researchers could actually use.


We also knew that keeping everything in-house wouldn’t help the field move forward. That’s why we released v0.1 to the public and focused on training newer variations, pushing the model’s capabilities further with improved quality, deeper knowledge, and architectural refinements.


Along the way, we discovered something unexpected. The model wasn’t just good at illustrations—it could also interpret natural language, handle complex prompts, and generate high-resolution images, far beyond what we originally planned.


Our Model Versions:
  • v0.1 (trained in May 2024)
  • v1.0 (July 2024)
  • v1.1 (August 2024)
  • v2.0 (September 2024)
  • v3 (November 2024)
  • v3.5 (a special variant incorporating Google's v-parameterization; see the brief sketch after this list)

These models take another step forward in natural language composition and image generation.
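For readers curious what v-parameterization means in practice: instead of predicting the added noise, the network is trained to predict a "velocity" target that mixes the clean image and the noise (Salimans & Ho, 2022). The sketch below illustrates the objective in PyTorch; the function names, the model signature, and the schedule coefficients alpha_t and sigma_t are illustrative assumptions, not Illustrious internals.

    import torch
    import torch.nn.functional as F

    def v_target(x0: torch.Tensor, noise: torch.Tensor,
                 alpha_t: torch.Tensor, sigma_t: torch.Tensor) -> torch.Tensor:
        # v-parameterization target (Salimans & Ho, 2022): v = alpha_t * eps - sigma_t * x0,
        # where alpha_t = sqrt(alpha_bar_t) and sigma_t = sqrt(1 - alpha_bar_t) under a
        # variance-preserving schedule. Both should be broadcastable to x0's shape,
        # e.g. shaped (batch, 1, 1, 1).
        return alpha_t * noise - sigma_t * x0

    def v_prediction_loss(model, x0, t, alpha_t, sigma_t):
        # Illustrative training step: diffuse x0 to x_t, then regress the network output onto v.
        noise = torch.randn_like(x0)
        x_t = alpha_t * x0 + sigma_t * noise    # forward-diffused sample
        v_pred = model(x_t, t)                  # the network predicts v instead of the noise
        return F.mse_loss(v_pred, v_target(x0, noise, alpha_t, sigma_t))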


That said, we can’t drop everything all at once. There’s a clear roadmap ahead, and open-source releases are part of it. But rather than rushing, we want to do this the right way—with explanations, insights, and research-backed improvements.

 

Our Future Plans:

Now, after months of work behind the scenes, we’re finally ready to move forward. We’ll be rolling out our latest models step by step while progressively open-sourcing previous versions so they can be studied and improved upon. Expect breakthroughs like true 2K-resolution generation and better natural language alignment along the way.


Commitment to Open Source:

This will take time, but we're moving fast. Our next-generation models are already in development, tackling some of the fundamental limitations of the base SDXL architecture. As we progress, older models will naturally be deprecated, and weight releases will follow accordingly. Our team aims to proceed thoughtfully, ensuring that each release is accompanied by comprehensive explanations and insights.


Backward Compatibility:

One last thing—we’re not just here to release models. Every model we’ve built is designed with backward compatibility in mind, because Illustrious XL wasn’t just about making something new—it was about creating a better foundation for fine-tuning. That’s why we’ve put so much effort into training LoRAs properly, and soon, we’ll be sharing insights on how to train them more effectively.
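As a structural reminder of what a LoRA adapter actually is (and why a stable base model matters for it), here is a minimal, illustrative PyTorch sketch of a low-rank adapter wrapped around a frozen linear layer; the class name, rank, and scaling convention are assumptions for illustration, not a description of our training setup.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """A frozen linear layer plus a trainable low-rank update: y = W x + (B A) x * scale."""

        def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)              # the base weights stay frozen
            self.lora_A = nn.Linear(base.in_features, rank, bias=False)
            self.lora_B = nn.Linear(rank, base.out_features, bias=False)
            nn.init.kaiming_uniform_(self.lora_A.weight)
            nn.init.zeros_(self.lora_B.weight)       # B starts at zero, so training begins from the base model
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.lora_B(self.lora_A(x)) * self.scale

Because the base weights stay frozen and the adapter only contributes a low-rank delta, keeping the base model backward compatible is what allows existing adapters to carry over.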


In summary, Onoma AI plans to roll out open-source weights step by step and encourages the community to stay tuned for upcoming developments; we're just getting started.