Don't Bet Against The Platform
Why platforms like ChatGPT win, what it means for product builders, and how to adapt in the age of AI platformification
When I first left college, GrubHub and Uber Eats were just starting out and Domino's was the market leader in delivery experience. A fellow classmate and I started working on a project so that brands could craft their own delivery apps. It was going to be an interactive app builder, but with the logistics and POS stuff all sorted out, so that restaurants could create custom, branded apps and not worry about the complexities of managing drivers and such.
Basically, we operated completely under the assumption that these restaurants would truly want their own custom branded experiences and wouldn't want to be tied to a centralized platform with its fees and stipulations.
Which was obviously wrong in retrospect.
I've spent the past year writing about agentic UI and how AI products should be designed. But watching the OpenAI DevDay presentation today, I'm realizing I might be making the same mistake again.
Just like how most restaurants don't have their own custom ordering UX anymore, AI experiences may be taking their first major step toward becoming platformized. ChatGPT Apps is OpenAI's big bet to centralize everything into one hub, and the message is that companies won't be building their own AI interfaces. They'll build tiny applets inside OpenAI's interface instead, using MCP and proprietary UI tech under the hood.
Why platforms always seem to win
Speed isn't your advantage anymore
Traditionally, startups moved faster than enterprises. Small teams could ship features in days while big companies took months. That was the whole advantage behind small teams.
But now, in a lot of ways, large teams have caught up and are shipping features in weeks. OpenAI went from chatbot to GPT Store to ChatGPT Apps in under three years. The combination of AI tooling and the fact that AI is still a relatively nascent field without a ton of pre-existing baggage means that big companies can move faster than before.
Scale crushes everything
Large teams like OpenAI also have user bases to A/B test at massive scale. They have eval and data science teams with more data than any small team could dream of. In AI specifically, you can't build good evals without a significant user base.
Startups and smaller teams have to build everything from scratch. Auth, privacy, GDPR compliance, accessibility, distribution. Larger teams can iterate faster because they have the data that startups simply cannot replicate.
So the platform ends up winning, and the parallels to food delivery apps are striking:
- Users want broad choice. Access to many restaurants for food delivery, access to many apps for generative AI.
- Complex workflows require dedicated infrastructure. Logistics and driver management for food delivery, chat UX and evaluation/finetuning systems for GenAI.
- Users prefer economic efficiency. One DoorDash+ membership covers many restaurants, one ChatGPT subscription covers many apps. Paying $20/month to the platform beats paying $5-10 each to dozens of individual services.
These are real benefits, but we should take a step back, learn from history, and be honest with ourselves about the risks of platformification. While the US was living under ZIRP (Zero Interest-Rate Policy – a period when the Federal Reserve maintained interest rates at or near 0%, roughly from 2008-2015 and again from 2020-2022), delivery apps absolutely blossomed, coming to a head during COVID. Artist and cultural critic Brad Troemel explores this concept in depth in his ZIRPSLOP Report, examining how this era of easy money funneled unprecedented venture capital into startups and gave rise to what he calls the "Millennial Lifestyle Subsidy": a brief window where VC-funded companies offered artificially cheap services, reshaping American culture and enabling high-risk business models that prioritized growth over profitability. Those companies were able to quickly expand, grow their user bases, disrupt existing industries, and gain huge amounts of market and mindshare.
But as ZIRP ended, we've witnessed large-scale enshittification of these delivery platforms, with higher prices and struggling restaurateurs and drivers footing the bill. Enshittification, a term coined by Cory Doctorow, describes the predictable lifecycle of platforms: first, they're good to users to build market share; then they abuse users to please business customers; finally, they abuse both groups to extract maximum value for shareholders.
Enshittification is the inevitable end state for platforms that achieve market dominance, and I think OpenAI is positioning itself to follow the same playbook.
The thing people are doing right now
In 2024 and 2025, the default move for product builders seemed to be to add a chat window to your existing app, add an API endpoint, hook up some workflows, and ship it. This model doesn't seem to be working: users report that generative AI is nowhere near as life-changing as advertised, and MIT reports that 95% of GenAI pilots fail to make it to production.
Why it might not be working
There are a few reasons I think…
- Evals are critical but very difficult – maybe even impossible without sufficient data at scale. Most AI pilots don't have enough data.
- Legacy devs don't really know how to code for AI effectively yet. Lots of trial and error.
- Frameworks like LangChain and CrewAI are seeing quite a bit of pushback, with vendor lock-in fears and implementation struggles. I can personally attest to spending hours in dependency hell on my own project: figuring out which version of CopilotKit works with my version of Zod, my version of Mastra, and my version of AI SDK… endless cycles of node_modules debugging. The ecosystem is fragmented, and even experienced devs get stuck.
- Non-deterministic workflows are weird for people to wrap their heads around. Companies want predictability and DAGs, but AI doesn't really work that way. This unpredictability makes it hard to build reliable products and set user expectations.
- Then there's the other direction: folks building agents as simple ReAct loops with tools (there's a minimal sketch of that loop after this list), which is actually closer to how agents really work. But it's also unpredictable and lacks that "I am Jarvis, I am autonomously doing all of your day-to-day work tasks safely and automagically" vision that sold people on AI in the first place.
- Then there's the problem of choosing where to put AI. Using it for critical decisions where errors matter. Insurance claims, medical records, scheduling… This is also a huge point of contention. Picking the wrong spot for AI can have real consequences.
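To make the ReAct bullet concrete, here's a minimal sketch of that loop. It's illustrative TypeScript, not any particular framework: callModel is a stubbed-out stand-in for a real LLM call, and the tool is fake.

```ts
// Minimal ReAct-style loop (illustration only, no framework).
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { thought: string; toolCall?: ToolCall; finalAnswer?: string };

// Fake tool standing in for whatever your agent can actually do.
const tools: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
  getWeather: async (args) => `It is sunny in ${String(args.city)}`,
};

// Stubbed "model" so the sketch runs: one tool call, then a final answer.
let turnCount = 0;
async function callModel(history: string[]): Promise<ModelTurn> {
  turnCount += 1;
  if (turnCount === 1) {
    return { thought: "I should check the weather first", toolCall: { name: "getWeather", args: { city: "Austin" } } };
  }
  return { thought: "I have enough to answer", finalAnswer: history[history.length - 1] };
}

async function runAgent(task: string, maxSteps = 8): Promise<string> {
  const history = [`Task: ${task}`];
  for (let step = 0; step < maxSteps; step++) {
    const turn = await callModel(history);                               // reason
    history.push(`Thought: ${turn.thought}`);
    if (turn.finalAnswer) return turn.finalAnswer;                       // done
    if (turn.toolCall) {
      const result = await tools[turn.toolCall.name](turn.toolCall.args); // act
      history.push(`Observation: ${result}`);                            // observe, then loop
    }
  }
  return "Step limit reached";                                            // non-determinism needs a guardrail
}

runAgent("What's the weather like?").then(console.log);
```

That's the whole pattern: reason, act, observe, repeat, with a step cap because you can't guarantee the model ever decides it's finished.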
All these challenges combine to make building effective AI features much harder than it first appears. The result is a landscape where many pilots stall, and even successful launches struggle to deliver on the original promise.
Taco Bell tried AI voice ordering at 500+ locations. It didn't seem to go very well. People posted videos of the system completely failing to understand basic orders. They're rethinking the whole thing now.
Banks had to rehire humans after chatbot failures. 55% of companies regret AI-driven layoffs.
But even when you pick the right use case
And even when you’re building things out “the right way,” there's still all this technical complexity you inherit. Streaming UX, stop generation UX, tool call UX, error handling, logging, analytics, RAG, data retrieval, data storage, auth, identity. It's complicated and non-standard for an entire generation of SaaS developers who learned and thrived in a different era.
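To give a feel for just one slice of that list, here's a rough sketch of streaming a reply with a working "stop generation" button, using only fetch, ReadableStream, and AbortController. The /api/chat endpoint and its plain-text token stream are made up for the example.

```ts
// Sketch: stream a model reply and let the user stop generation mid-stream.
// The /api/chat endpoint and its plain-text token stream are hypothetical.
const controller = new AbortController();

async function streamReply(prompt: string, onToken: (t: string) => void): Promise<void> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
    signal: controller.signal, // the stop button aborts this request
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      onToken(decoder.decode(value, { stream: true })); // render tokens as they arrive
    }
  } catch (err) {
    if ((err as Error).name !== "AbortError") throw err; // user hit stop; that's fine
  }
}

// Somewhere in the UI: stopButton.addEventListener("click", () => controller.abort());
```

And that's before error states, retries, tool-call rendering, logging, RAG, and everything else on the list.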
What comes next
I started to see the writing on the wall when OpenAI released its latest Sora update, which effectively turned it into an AI-slop TikTok clone built to drive endless engagement and ad consumption.
Someone on Twitter asked: "If you genuinely believed you were 2-4 years away from AGI, is Sora slop really the thing you'd release?" I can't stop thinking about that. This is OpenAI's big move toward platformizing the AI space, and it makes sense economically: they are absolutely bleeding money and need to recoup costs somehow. The only way to do that right now might be to establish themselves as the centralized player. Be the hub where they can run advertisements, charge for premium services, and charge you just to be on the platform. Be the place where everyone wants to be. If history is any guide, though, this also means they will start to slow on innovation as they leverage their market position to squeeze as much value as possible out of their customers, their developers, and their ecosystem.
The economics are different this time
Before we even get to how enshittification affects us as product builders downstream of companies like OpenAI, we have to note that there's something generally weird about the economics of AI apps to begin with.
Traditional B2B SaaS margins run 70-90%, whereas AI-native SaaS margins are often in the 50-60% range (leading AI startup Anthropic has reportedly seen gross margins of 50-55%) because of the ongoing cost of serving the models. This has a huge effect on what app builders can meaningfully support.
In traditional SaaS, once you build the platform, the marginal cost to serve each additional customer is basically zero. You can literally just add another row to a Postgres database and call it a day for the large majority of SaaS. In AI-native products, however, costs keep scaling with usage: every additional conversation costs you real money.
API calls per user are ongoing, not one-time. Compute time scales with usage. Model licensing fees, content moderation, storage for context and memory. It adds up.
In early 2023, analysts estimated ChatGPT was costing OpenAI between $100K-$700K per day to operate. More recently, in January 2025, OpenAI CEO Sam Altman revealed that the company is losing money on its $200/month ChatGPT Pro subscriptions because "people use it much more than we expected," with individual queries on advanced models costing up to $1,000 to process.
GitHub Copilot, which launched at $10 per month, was reportedly costing Microsoft an average of $30 per month to serve each user in early 2023—losing $20 per user per month on average. Some power users were costing the company as much as $80-$90 per month to serve.
What this means if you're building something
When you build chat into your product, you inherit these economics. You pay for every interaction. You deal with power users who generate outsized costs. You implement usage caps, which degrade UX. You lose money on your heaviest users.
Questions you might want to ask yourself
If enshittification happens, will your product keep up when tokens are 1.5x more expensive? 2x? 3x? 5x? Will your product survive if your model provider institutes a charge to even exist on the platform? How much can you absorb?
Can your unit economics work with 30-60% margins instead of 70-90%?
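A quick back-of-envelope check, with entirely made-up numbers (your subscription price, usage, and token prices will differ), shows how fast margin evaporates as token prices climb:

```ts
// Back-of-envelope gross margin under rising token prices. All numbers are assumptions.
const pricePerUserPerMonth = 20;           // what you charge
const tokensPerUserPerMonth = 2_000_000;   // a heavy chat user (assumption)
const baseCostPerMillionTokens = 5;        // blended input/output price (assumption)

function grossMargin(tokenPriceMultiplier: number): number {
  const inferenceCost =
    (tokensPerUserPerMonth / 1_000_000) * baseCostPerMillionTokens * tokenPriceMultiplier;
  return (pricePerUserPerMonth - inferenceCost) / pricePerUserPerMonth;
}

for (const multiplier of [1, 1.5, 2, 3, 5]) {
  console.log(`${multiplier}x token prices -> ${(grossMargin(multiplier) * 100).toFixed(0)}% gross margin`);
}
// On these assumptions: 50%, 25%, 0%, -50%, -150%.
// Infrastructure, support, and payroll all come out of whatever is left.
```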
How to win
If platforms are winning, how do you build a defensible product that can survive and thrive in this new world?
Going deep on specific verticals
Platforms are horizontal. Depth might be your advantage.
Figma, now inside ChatGPT Apps, doesn't seem worried. ChatGPT can't replace deep design collaboration. Figma can exist within ChatGPT while still owning its workflows. The platform becomes distribution, but Figma still owns the actual design tools, version control, commenting systems, and component libraries. All the complicated stuff that took years to build.
Owning your data
If users interact through ChatGPT but you own proprietary data, you might be defensible.
Think of an app like Zillow. Their strength is in the fact that they have such a wide and thorough list of homes. Listings, price history, neighborhood data, school ratings, tax records. This data is super defensible. It took years to aggregate and clean. Zillow in ChatGPT owns the property data. ChatGPT is just the interface.
Think hard about where consumption happens
Your product might be consumed 25% via your own interface, 40% via ChatGPT Apps, 20% via Claude or Cursor, 15% via other platforms or API. Which might be fine. You're still providing value. You're still defensible if you own the data or the vertical depth. You just don't own the interface anymore.
This is kind of a weird mental shift. We're used to thinking the interface IS the product. But maybe in AI, the interface becomes commoditized and the data/logic is the actual product.
Building with platform assumptions from the start
Don't build your product, then think about how it fits into ChatGPT. Build assuming 50%+ of users will interact via ChatGPT, Claude, or similar platforms from the start.
Build API first. Build MCP second. Build UI third.
Your UI becomes a demo, a showcase, a way for power users to do advanced things. But the main way people use your product is through platform integrations you built from day one, not tacked on later.
Think about integration from day one, not as an afterthought. Which protocols matter? Which platforms have your users? How does your data model need to be structured so it can be exposed through multiple interfaces?
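As a sketch of the "build MCP second" step: a minimal server that exposes your data layer as a single tool might look roughly like this. This uses the MCP TypeScript SDK as I understand it, and searchListings is a placeholder for your own proprietary data, so treat it as a starting point to check against the current SDK docs rather than a drop-in implementation.

```ts
// Sketch: expose your proprietary data to MCP clients (ChatGPT, Claude, IDEs) as one tool.
// Package and API names are per my reading of @modelcontextprotocol/sdk; verify against current docs.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Placeholder for your real data layer: the part that's actually defensible.
async function searchListings(query: string): Promise<string[]> {
  return [`listing matching "${query}" (stub)`];
}

const server = new McpServer({ name: "my-product", version: "0.1.0" });

server.tool(
  "search_listings",
  { query: z.string().describe("Free-text search over listings") },
  async ({ query }) => ({
    content: [{ type: "text", text: JSON.stringify(await searchListings(query)) }],
  })
);

// Any MCP-capable client can now drive your data without ever touching your UI.
await server.connect(new StdioServerTransport());
```

The point isn't this exact code; it's that the tool surface, not the screen, becomes the contract you design around.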
Where your users actually are
General consumers seem to use ChatGPT. Mass market, mobile-first, UX leaders. They're not even downloading your app. They're asking ChatGPT for help and expecting it to just work.
Developers use Claude, Cursor, Claude Code in coding contexts. They're already in their IDE. They're not switching to your web app to use your tool. They want it embedded in their existing workflow. They want it working via Zed’s ACP.
Enterprise uses Microsoft Copilot and Azure. They have compliance requirements, IT policies, existing contracts. They're not adopting some new platform. They want your functionality inside the tools they already approved.
Education uses Google Classroom with Gemini and Canvas LMS. Teachers and students aren't leaving those environments. They want your educational tool accessible where they already are.
Think about where your user interacts with your product. At their desk? Desktop apps, browser extensions. On mobile? iOS Shortcuts, Android App Actions. In their IDE? VSCode extensions, Claude Code. In workflow tools? Slack apps, Teams bots.
Find the appropriate platform ecosystem where your users live and build that integration well, then expand.
The pattern again
Ten years ago, I thought restaurants would want their own apps. But consumers didn't. Consumers went to platforms.
The insight wasn't wrong. Just one abstraction layer off.
Companies want their own AI chat experiences right now. They're building chat into their products. But users might just go to ChatGPT and Claude much like how they just open up GrubHub. The insight about customized experiences might be right. The execution via ChatGPT Apps, not your own chat, might be where it happens.
Ultimately, the lesson is to stay flexible, watch platform trends closely, and build with integration and defensibility in mind. The platform always wins—unless you find a way to win alongside it.