What 'open' actually means
The word "open" in AI is doing a lot of heavy lifting, and most of it is marketing. Before evaluating specific models, you need to understand the spectrum of openness, because it directly affects what you can legally deploy in your enterprise.
Fully open source (Apache 2.0, MIT). You can use the model for any purpose, modify it, distribute it, and build commercial products with it. No usage restrictions, no reporting requirements, no revenue thresholds. Gemma 4 (Apache 2.0) and Mistral's Apache-licensed models fall here. This is the cleanest option for enterprise deployment.
Community licence with restrictions (Llama licence, Qwen licence). Meta's Llama models use a custom licence that permits commercial use but imposes conditions: if the products and services of your organisation exceed 700 million monthly active users, you need a separate licence from Meta. Note that the threshold counts your organisation's total user base, not the users of the specific deployment -- so a large consumer company can be caught by it even when the model only powers an internal tool. More practically, the custom licence means your legal team needs to review and approve it -- adding weeks to deployment timelines.
Open weights, restricted use. Some models release weights but restrict certain use cases. For example, models trained on specific datasets may prohibit use in certain industries or for certain applications. Always read the full licence, not just the headline.
Gated access. Some "open" models require you to accept terms and request access through Hugging Face or the provider's portal before downloading. This is mostly an administrative speed bump rather than a substantive restriction -- though the click-through terms are binding, so they belong in legal review too -- and it matters for automated deployment pipelines, which need an authenticated download rather than an anonymous one.
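For pipelines, the usual pattern is a token stored in CI secrets rather than an interactive `huggingface-cli login`. A minimal sketch using the `huggingface_hub` client library -- the repo id is a hypothetical placeholder, and this assumes the account behind the token has already accepted the model's terms on the Hub:

```python
import os


def resolve_token() -> str:
    """Read the Hub access token from the HF_TOKEN environment variable,
    the non-interactive equivalent of `huggingface-cli login`."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "Set HF_TOKEN to a token whose account has accepted the model's terms"
        )
    return token


def fetch_model(repo_id: str, target_dir: str) -> str:
    """Download a snapshot of a (possibly gated) model repo.

    Returns the local directory containing the downloaded files.
    Imported lazily so the token-handling logic above has no
    third-party dependency.
    """
    from huggingface_hub import snapshot_download  # pip install huggingface_hub

    return snapshot_download(
        repo_id=repo_id,          # e.g. "some-org/some-gated-model" (placeholder)
        local_dir=target_dir,
        token=resolve_token(),    # anonymous downloads fail for gated repos
    )
```

If the token's account has not accepted the terms, the download fails with an authorisation error -- which is exactly the failure mode to watch for when a pipeline that worked for one model breaks on a gated one.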
For enterprise deployment, Apache 2.0 is the gold standard. Your legal team signs off once, and every team in the organisation can deploy without additional review. Any other licence creates friction proportional to the number of teams that want to use the model.