Everyone's still rushing to build AI companies.
But the game already changed. And most people missed it.
The Party's Over for Model Trainers
A year ago, every pitch deck said "we're training our own model." Investors ate it up. Differentiated! Proprietary! Moat!
Today? Training a frontier foundation model costs $100M minimum. OpenAI, Anthropic, Google, Meta — they already won that race.
If you're a startup still talking about training your own LLM, you're bringing a knife to a drone strike.
The New Game: Inference
Here's what actually matters now: deployment.
• Training = teaching the AI (Big Tech's game)
• Inference = using the AI (everyone's game)
Every ChatGPT message. Every AI-generated image. Every copilot suggestion. That's inference. That's where the money is.
Why?
• The models exist. GPT-4, Claude, Llama — pick your weapon.
• The APIs are cheap. Costs dropped 90% in 18 months.
• The winners are builders, not researchers. Best product wins, not best benchmark.
What This Means If You're Building
The startups winning right now aren't building models. They're building workflows.
They're asking one question: "What can I do with GPT-4 that OpenAI will never build themselves?"
The answer is always vertical. Specific. Unsexy.
AI for insurance claims? Boring. Massive market.
AI for legal document review? Boring. Massive market.
AI for restaurant inventory? Boring. Massive market.
Meanwhile, 500 startups are building "AI assistants" that are basically ChatGPT with a different font. Good luck with that.
The Real Moat
It's not the model. It's the workflow.
It's not the AI. It's the data you feed it.
It's not the technology. It's the distribution.
The companies that win the next phase of AI won't have the smartest models. They'll be the ones who figured out how to make existing models actually useful for specific jobs.
The gold rush attracted the prospectors.
The gold mine rewards the operators.
Your move,
Blaine