Who's Open Now?

Arlo Gilbert

For two years, the open-source AI conversation had a simple map. Meta was the champion, and Llama was the model everyone reached for. Google kept its best work behind APIs. If you wanted capable open weights, you went to Meta.

That map is wrong now.

On April 2, Google released Gemma 4 under Apache 2.0. Four model variants. The smallest runs on a phone. The largest, a dense 31B parameter model, ranks #3 on the Arena AI leaderboard. Apache 2.0 means unrestricted commercial use, modification, and redistribution. The 26B mixture-of-experts variant activates only 3.8 billion parameters at inference, so quantized versions run on consumer GPUs. Context windows go up to 256K tokens. Google handed the community its most capable open weights ever and said: do whatever you want with them.
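To make the hardware claim concrete, here's a minimal sketch of what running a quantized Gemma 4 variant on a single consumer GPU might look like with Hugging Face transformers and bitsandbytes. The checkpoint name and the 4-bit settings are my assumptions, not a published recipe; check the actual model listing before copying any of this.

```python
# Minimal sketch: loading a 4-bit quantized Gemma 4 variant on one consumer GPU.
# The model ID "google/gemma-4-26b-moe" is a placeholder, not a confirmed name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-4-26b-moe"  # hypothetical checkpoint name

# 4-bit quantization keeps the memory footprint within a consumer GPU's VRAM.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on whatever GPU(s) are available
)

prompt = "Summarize the Apache 2.0 license in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```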

Six days later, Meta shipped Muse Spark. It's the first model from Meta Superintelligence Labs, built from scratch over nine months under Alexandr Wang. Wang joined Meta last year as part of a $14.3 billion investment in Scale AI. Muse Spark is proprietary. No weights, no license, no community access to the architecture. It powers Meta AI across Facebook, Instagram, WhatsApp, and the standalone app.

Meta says it "hopes to open-source future versions of the model."

The company that made open-source AI its identity just shipped its flagship model closed. Meanwhile, Google, which kept its best work behind APIs for years, gave away the most capable open model it's ever released.

The business logic

The reasons on both sides are legible if you follow the incentives.

Meta spent years distributing Llama to build an ecosystem. It worked. Thousands of applications, research projects, and fine-tuned derivatives were built on Llama weights. That ecosystem established Meta as a serious AI player outside social media. But it also armed competitors. Every company that fine-tuned Llama for its own product got a head start paid for by Meta's research budget.

Now Meta is in a capability race. Wang's team built Muse Spark from the ground up, and Meta has decided that sharing architectural innovations is a cost it won't pay right now. Wall Street agreed: the stock rose about 9% on the announcement.

This pattern has precedent. In 1981, IBM opened the PC architecture to win market share fast. It worked brilliantly. Within two years, IBM dominated the market. Then Compaq reverse-engineered the BIOS, legal clones flooded in, and by the end of the decade IBM had lost control of the platform it created. Open ecosystems drive adoption fast, but they're hard to defend once competitors show up building on your standard.

Meta's bet is that it captured enough value from the Llama ecosystem to justify closing the next generation. Maybe. But "hope to open-source future versions" reads like a press release, not a product roadmap.

Google is running a different calculation. It doesn't need to sell model weights. It needs developers building on its infrastructure. Every team that downloads Gemma 4, trains on it locally, and scales up will eventually need more compute than a local GPU provides. That's when they land in Google Cloud, where the actual revenue lives.

Why this is actually clarifying

I'm going to say something that might sound optimistic for a blog that spends a lot of time pointing out what's broken: the clarity we got this week is useful.

For the past couple of years, the open-versus-closed debate in AI has been muddy. Companies released model weights with restrictive custom licenses. Others talked about openness while gating their best capabilities behind an API. Llama was "open" but its license wasn't Apache 2.0. The landscape was full of half-measures and fine print, and figuring out what you could actually do with any given model required a lawyer.

This week made things cleaner. Meta is in the business of selling model access. Google is in the business of selling infrastructure around open models. Neither is pretending to be something it isn't. Both positions are honest.

If you're building AI products, that honesty makes your decisions easier. When we evaluate a model at Osano for privacy-sensitive workloads, three things matter. Can we run it on our own infrastructure? Can we fine-tune it for our domain? Can we inspect it when something goes wrong? Gemma 4 under Apache 2.0 answers yes to all three. For companies handling regulated data, running inference locally without routing through a third-party API is a compliance requirement, not a nice-to-have.
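As a rough illustration of what "yes to all three" looks like in practice, here's a sketch of a fully local setup: weights loaded from our own storage with no remote calls, and a small LoRA adapter attached for domain fine-tuning. The local path and the target module names are assumptions about the release, not documented specifics.

```python
# Sketch: local-only loading plus a LoRA adapter for domain fine-tuning.
# The path and target module names are assumptions; adjust to the actual release.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

local_path = "/models/gemma-4-7b"  # weights already downloaded to our own storage

# local_files_only=True guarantees nothing is fetched from a remote hub at load time.
tokenizer = AutoTokenizer.from_pretrained(local_path, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(local_path, local_files_only=True)

# Small LoRA adapter: train a handful of projection matrices on our own data
# while the base weights stay frozen and inspectable.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # sanity check: only the adapter weights train
```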

Meta's move clarifies something different. Anyone who built on Llama still has access to the existing models. But if the best new capabilities are proprietary going forward, the gap between open and closed will widen. That's worth pricing into your planning now, not later.

Three things worth doing this week

If you're making model decisions right now, this is a good week to act on them.

Look at Gemma 4 for real. Not the benchmarks (though the 31B model punches above its weight). Look at the license. Look at the model sizes. The smaller variants running on consumer hardware under Apache 2.0 are a genuine option for workloads where data can't leave your environment. If that describes your situation, set up a test.

Audit your Llama dependencies. If your product or pipeline relies on Llama weights, think about what happens when the next generation isn't freely available. You're not stuck today. But the direction Meta just signaled is worth factoring into your roadmap.

Stop treating "open source" as a binary. Apache 2.0 is not the same as a custom non-commercial license. A model that runs on a Raspberry Pi is not the same as one that needs an H100 cluster. The specifics of each release matter more than whether someone calls it "open." Read the actual license before you build on it.

The simplest way to predict what any company will do with its models is to look at where it makes money. If the revenue comes from selling model access, expect the best work to stay closed. If the revenue comes from cloud compute and tooling, expect openness, because giving away models that drive adoption is just good business. That's not ideology on either side. And this week, both sides got a lot more visible.
