Open AI Models: The Key to Adoption
The path to broad AI deployment runs on open weights
Earlier this week, we got a sneak peek at a new report from Georgetown’s Center for Security and Emerging Technology (CSET) that provides the most comprehensive empirical analysis to date of how researchers actually use open-weight AI models. The findings have real implications for both AI policy and the future of American compute infrastructure.
Open-weight models allow researchers and developers outside the organization that designed them to see what a model knows and how it makes decisions. For example, imagine a search engine that shared its ranking algorithm and thus allowed customization for specific use cases, such as media monitoring across a more diverse set of publications of your choosing.
After analyzing over 500 research papers, CSET identified seven distinct use cases that require access to model weights: fine-tuning (67% of papers), examination (27%), compression (27%), modification (16%), continued pretraining (8%), combination (8%), and hardware benchmarking (1%). (The categories overlap, so the shares sum to more than 100%.)
Meanwhile, closed models with API access enable essentially one use case: prompting.
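To make the contrast concrete, here is a minimal sketch, assuming the Hugging Face transformers and PyTorch libraries with GPT-2 standing in for any open-weight model, of two things weight access enables that a prompt-only API does not: examining parameters and fine-tuning on your own data.

```python
# Sketch: two things weight access enables beyond prompting.
# Assumes Hugging Face transformers + PyTorch; GPT-2 stands in
# for any open-weight model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Examination: the weights are ordinary tensors you can inspect.
for name, param in list(model.named_parameters())[:3]:
    print(name, tuple(param.shape))

# Fine-tuning: compute a loss on your own data and backpropagate
# into the weights, something a prompt-only API cannot offer.
batch = tokenizer("A proprietary training example.", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()  # gradients now sit on the open weights
```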
The Commercial Reality
Here’s what the policy chatter about open weights often misses: these uses aren’t just academic. The seven use cases are the fundamental drivers of enterprise AI adoption and commercial deployment:
Fine-tuning is how companies adapt models to proprietary data and specific workflows.
Compression is how models get deployed to edge devices (e.g., smartphones) and cost-constrained environments; see the quantization sketch after this list.
Combination is how multi-model agent systems get built.
Modification is how models get optimized for specific hardware and use cases.
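As one illustration of the compression point, here is a minimal sketch in plain PyTorch of post-training int8 weight quantization; the layer size is hypothetical, but the technique is the kind of weight-level transformation that only access to the weights permits.

```python
# Sketch: post-training int8 weight quantization (compression).
# Plain PyTorch; the 4096x4096 layer is a hypothetical stand-in
# for one weight matrix of an open model.
import torch

weights = torch.randn(4096, 4096)  # fp32: ~67 MB for this layer

# Symmetric per-tensor quantization: map floats onto int8 via a scale.
scale = weights.abs().max() / 127.0
q_weights = torch.clamp((weights / scale).round(), -127, 127).to(torch.int8)

# int8 storage is 4x smaller; dequantize on the fly at inference time.
dequantized = q_weights.to(torch.float32) * scale
print(f"fp32 bytes: {weights.numel() * 4:,}  int8 bytes: {q_weights.numel():,}")
print(f"mean abs quantization error: {(weights - dequantized).abs().mean():.5f}")
```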
The report documents researchers building medical diagnosis systems that outperform GPT-4 on specialized tasks, creating efficient models that run on smartphones, and developing domain-specific models for chemistry, law, and radiology.
This is the path to broad AI deployment, instead of everyone paying API fees to a handful of providers forever.
Compute Infrastructure
Token Factories Need Customization: First, what’s a “token”? A token is the basic unit of text a language model reads and writes, typically a word or a fragment of a word; the massive computers running in even larger data centers exist to produce these tokens at scale.
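A quick illustration of tokenization, assuming the Hugging Face transformers library with the GPT-2 tokenizer as a representative open example:

```python
# Sketch: what a "token" is. Assumes the Hugging Face transformers
# library, with the GPT-2 tokenizer as a representative example.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
text = "Token factories need customization."

# A tokenizer splits text into subword pieces; the model reads and
# generates these pieces, not raw characters or whole words.
tokens = tokenizer.tokenize(text)
print(tokens)  # long or rare words typically split into several pieces
print(tokenizer.convert_tokens_to_ids(tokens))  # the IDs the model consumes
```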
The vision of these massive compute clusters generating work (tokens) at scale depends on diverse workloads. That diversity comes from thousands of organizations customizing and optimizing models, work that requires access to the models’ weights.
Chip Demand Follows Deployment Diversity: When hospitals want to run medical AI locally, when manufacturers want models on factory floors, when financial firms want sovereign compute, they need to fine-tune, compress, and optimize models. Each deployment scenario creates more chip demand, up and down the value chain.
The API Model Concentrates, Open Models Distribute: API-only access from frontier labs (i.e., closed models) means compute demand concentrates in a few providers’ data centers. Open models mean compute demand spreads across enterprises, research institutions, edge deployments, and specialized clusters: precisely the broad base needed to justify a massive chip-manufacturing scale-up.
The Thinking Machines Signal
Thinking Machines Lab (founded by former OpenAI CTO Mira Murati) just launched Tinker, “a flexible API for fine-tuning language models” that “empowers researchers to experiment with models by giving them control over the algorithms and data.”
Tinker lets users fine-tune models using custom algorithms and data. Early adopters at Princeton, Stanford, Berkeley, and Redwood Research are using it for everything from mathematical theorem proving to chemistry reasoning to trial-and-error training.
This is the direction serious AI builders are heading: toward more optimization, more customization, more experimentation.
The American Way
Open models embody something fundamentally American:
Non-Walled Gardens: The internet’s value came from openness, not from AOL’s curated experience. Open models follow that pattern. Anyone can build, anyone can compete, anyone can innovate.
Sovereign Compute: American enterprises and institutions can deploy on their own infrastructure, with their own data, under their own control. No API dependencies on potential competitors or adversaries.
Independent Builders: The report shows 87% of papers came from academic institutions—the traditional engine of American research. The accessibility of open models means universities and independent researchers can participate in the AI revolution.
Competitive Dynamics: The report notes 65% of papers had U.S. authors versus 38% with Chinese authors. But Chinese competitors like DeepSeek and Alibaba are already releasing high-performing open models. The question is not whether open models exist; it’s whether American researchers and companies can access and build on them.
The Bottom Line
Open models are essential infrastructure for the commercial AI deployment that will drive compute demand at scale. The seven use cases CSET identified are how AI goes from demo to deployment, from prototype to production.
The choice isn’t the 2023 framing of open versus safe. It’s between an AI ecosystem that’s concentrated, dependent, and brittle, and one that’s distributed, sovereign, and resilient.