
OpenAI Is Returning to Its Original Mission of Openness

After six long years, the company is launching its first new open-weight LLMs.

by Oyinebiladou Omemu
Photo by Growtika / Unsplash

For the first time since the release of GPT-2 in 2019, OpenAI is opening the gates, albeit not fully, with the launch of gpt-oss-120b and gpt-oss-20b.

These models represent a step back toward OpenAI's original mission of ensuring that artificial intelligence is broadly accessible.

To be clear, these aren't open-source models. What OpenAI has shared are the weights: the billions of numerical parameters that encode what the model learned during training. That's enough for developers to build, fine-tune, and customize AI applications, but OpenAI is keeping its training data and code private, so competitors can't fully reverse engineer how the models were made.


The names, gpt-oss-120b and gpt-oss-20b, roughly reflect their parameter counts: 117 billion and 21 billion, respectively. These parameters act like internal dials that the model adjusts during training to generate intelligent responses. More parameters generally translate to higher performance, but also to higher hardware demands. The larger 120b model, for instance, requires a single 80GB GPU, such as an NVIDIA H100, to run efficiently. The 20b model, meanwhile, is optimized for accessibility: it can run on a consumer device with just 16GB of RAM.
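Those hardware figures follow from simple arithmetic: memory for the weights is parameter count times bits per parameter. The sketch below assumes roughly 4 bits per parameter, in line with reports that OpenAI ships the gpt-oss weights in a quantized MXFP4 format; the function name and the exact bit-width are illustrative assumptions, not figures from the article.

```python
def approx_weight_memory_gib(params_billion: float, bits_per_param: float) -> float:
    """Back-of-envelope memory needed just to hold the weights, in GiB.

    Ignores activation memory, KV cache, and runtime overhead, so real
    requirements are somewhat higher than this estimate.
    """
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

# gpt-oss-20b: ~21B params at ~4 bits/param -> under 16GB of RAM
print(round(approx_weight_memory_gib(21, 4), 1))    # ≈ 9.8 GiB

# gpt-oss-120b: ~117B params at ~4 bits/param -> fits on one 80GB GPU
print(round(approx_weight_memory_gib(117, 4), 1))   # ≈ 54.5 GiB

# The same 117B model at full 16-bit precision would not fit at all
print(round(approx_weight_memory_gib(117, 16), 1))  # ≈ 218.0 GiB
```

The last line shows why the quantized release matters: at ordinary 16-bit precision, the 120b model would need multiple GPUs rather than one.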

What makes these models especially intriguing isn’t just their performance, though that’s impressive in itself, but how closely they mirror OpenAI’s proprietary offerings. On reasoning benchmarks, gpt-oss-120b scores just behind o3, while gpt-oss-20b places somewhere between o3-mini and o4-mini. While they can’t yet handle multimodal inputs like images or audio, they offer features like chain-of-thought reasoning and tool use, critical functions for tackling complex tasks and automating workflows.


This reversal comes amid growing competition from China, where models like DeepSeek R1, Moonshot's Kimi, and Alibaba's Qwen are pushing the envelope in open-source reasoning performance, and doing so at a fraction of the cost.

Interestingly, OpenAI declined to benchmark its new models against those Chinese releases, leaving the comparisons to the broader AI community instead, a subtle but telling move that suggests confidence in its models alongside caution about inviting direct comparison.

Meanwhile, Meta, once the loudest voice in the open-weight movement, is reportedly stepping back. And now, in a turn of events, OpenAI, long criticized for drifting from its founding ideals, is the one re-engaging with the open community.


