The reason big tech is giving away AI agent frameworks
Major tech companies including Amazon, Google, and Microsoft are giving away free AI agent frameworks to pull developers onto their paid cloud infrastructure. It is the same playbook that won the container wars a decade ago, and developers building agents today should understand the playbook before they commit.
According to coverage surfaced via FireCrawl Discovery and corroborated by reporting at The New Stack, the wave of free AI agent frameworks hitting developers in 2025 is not generosity. It is a land grab. The frameworks in question (LangGraph from LangChain, CrewAI, Google's ADK, AWS Strands, and Microsoft's Agent frameworks) are being distributed as open-source software with a very specific commercial purpose baked in.
Why This Matters
This is the container wars of 2015 playing out again, but the stakes are bigger this time. When Kubernetes won orchestration, AWS, Google, and Microsoft each built managed Kubernetes services and extracted revenue from the infrastructure layer while the framework itself stayed free. The same trap is being set now, and developers who standardize on AWS Strands today are making an infrastructure bet whether they realize it or not. The AI inference market is projected to dwarf container compute costs, which means the lock-in being established right now carries more long-term value than anything that happened with Docker or Kubernetes.
The Full Story
A decade ago, developers building containerized applications faced a confusing choice between competing orchestration platforms. By 2017, Kubernetes had won, but AWS, Google, and Microsoft had already positioned their managed Kubernetes services (EKS, GKE, and AKS) to capture the infrastructure revenue that the free framework generated. Developers got a free tool. The cloud providers got long-term customers locked into their billing systems.
That story is repeating itself in 2025 with AI agent frameworks. Right now, developers face a similarly bewildering choice among at least five major frameworks, all free, all open source, and all backed by companies with significant paid cloud services to sell. LangChain's LangGraph focuses on graph-based orchestration. CrewAI builds around role-based agent teams that collaborate on defined tasks. Google's ADK ties tightly to Vertex AI. AWS Strands integrates with AWS's broader AI services infrastructure. Microsoft's Agent frameworks connect to Azure OpenAI Service and the Copilot ecosystem.
The business logic is straightforward. The framework is free to download and use locally. The moment a developer wants to scale their agent, handle production traffic, or call a frontier language model at volume, they need compute and inference services. A developer who has spent three months building on AWS Strands is going to deploy on AWS. The framework choice is the infrastructure commitment, just made earlier in the development cycle and with less visibility into the financial implications.
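The scale of that inference commitment is easy to underestimate during prototyping. A back-of-envelope calculation shows how quickly a multi-step agent's model spend compounds; all the numbers below are illustrative assumptions, not real vendor prices:

```python
# Back-of-envelope monthly inference cost for a production agent workload.
# Every input value here is an illustrative assumption, not a vendor quote.

def monthly_inference_cost(
    runs_per_day: int,
    steps_per_run: int,
    tokens_per_step: int,
    usd_per_million_tokens: float,
) -> float:
    """Estimate monthly model spend for a multi-step agent."""
    tokens_per_day = runs_per_day * steps_per_run * tokens_per_step
    return tokens_per_day * 30 / 1_000_000 * usd_per_million_tokens

# Example: 10,000 agent runs/day, 8 model calls per run, 3,000 tokens
# per call, at an assumed blended rate of $5 per million tokens.
cost = monthly_inference_cost(10_000, 8, 3_000, 5.0)
print(f"${cost:,.0f}/month")  # → $36,000/month
```

A prototype that costs pennies locally becomes a five-figure monthly line item at modest production volume, which is exactly the revenue the framework sponsor is positioned to capture.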
This pattern is especially effective during periods when developers are still figuring out what agent development even looks like. The frameworks that engineers learn first tend to persist in their workflows. Familiarity creates switching costs that are separate from technical dependencies. A team that becomes fluent in Google ADK by mid-2025 will carry that fluency into hiring decisions, internal tooling, and future project choices.
There is a secondary effect worth understanding. Open-source frameworks generate data. When thousands of developers use a framework and report bugs, request features, and contribute code, the platform provider learns what real agent workflows look like, which integration patterns matter most, and which use cases drive the highest compute costs. That information directly sharpens the paid product offering. Community contributors are, in effect, providing free research and development work to the sponsoring company.
The sophistication of underlying language models is also changing how this plays out. Earlier agent frameworks had to encode significant runtime complexity because models were unreliable at following multi-step instructions. As models from OpenAI, Anthropic, and others improve at sustained reasoning, frameworks can become thinner orchestration layers. That makes switching between frameworks easier in theory, but the cloud infrastructure dependencies created during development persist regardless of how thin the abstraction becomes.
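The "thin orchestration layer" point can be made concrete. Stripped of vendor integrations, the core of most agent frameworks is a short loop: send state to a model, parse the reply, execute a tool call, repeat. A minimal sketch in pure Python, with a stubbed model standing in for any real API and a line-based protocol invented purely for illustration:

```python
from typing import Callable

# Tools the agent may call; in a real framework these wrap external services.
TOOLS: dict[str, Callable[[str], str]] = {
    "echo": lambda arg: f"echo:{arg}",
    "upper": lambda arg: arg.upper(),
}

def run_agent(model: Callable[[list[str]], str], task: str, max_steps: int = 5) -> str:
    """Minimal agent loop: at each step the model either calls a tool or finishes.

    The model returns lines like "TOOL upper hello" or "DONE answer". This
    protocol is invented for illustration; real frameworks use structured
    tool-calling APIs.
    """
    history = [f"TASK {task}"]
    for _ in range(max_steps):
        reply = model(history)
        history.append(reply)
        if reply.startswith("DONE "):
            return reply.removeprefix("DONE ")
        if reply.startswith("TOOL "):
            _, name, arg = reply.split(" ", 2)
            history.append(f"RESULT {TOOLS[name](arg)}")
    return "max steps reached"

# Stub model: route the task through a tool, then finish with the result.
def stub_model(history: list[str]) -> str:
    last = history[-1]
    if last.startswith("TASK "):
        return f"TOOL upper {last.removeprefix('TASK ')}"
    if last.startswith("RESULT "):
        return f"DONE {last.removeprefix('RESULT ')}"
    return "DONE ?"

print(run_agent(stub_model, "ship it"))  # → SHIP IT
```

The loop itself is trivially portable. What is not portable is everything outside it: the deployment target, the inference endpoint, and the observability stack the framework defaults you into.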
Key Details
- At least five major frameworks are currently competing for developer attention: LangGraph, CrewAI, Google ADK, AWS Strands, and Microsoft's Agent frameworks.
- The strategic parallel is the 2015 to 2017 container orchestration war, which ended with Kubernetes dominant but cloud providers capturing infrastructure revenue.
- Google's ADK channels deployments to Vertex AI. AWS Strands directs workloads to AWS inference services. Microsoft's frameworks connect to Azure OpenAI Service.
- The C&F organization published "The BIG AI Framework" white paper in January 2026 to help enterprises evaluate agent platforms before committing.
- OpenAI also participates in this market with its own Agents SDK, adding a sixth major player to the framework competition.
What's Next
Expect consolidation in the agent framework market over the next 18 to 24 months, following the same arc as container orchestration. One or two frameworks will pull ahead in community size and enterprise adoption, and the companies behind losing frameworks will either pivot to supporting the winner or quietly deprecate their offering. Enterprises evaluating agent platforms in 2025 should document their cloud infrastructure dependencies now, because the switching costs of changing both framework and cloud provider simultaneously will be significant once agent workloads reach production scale.
How This Compares
The most instructive comparison is the Kubernetes story. Google open-sourced Kubernetes in 2014, and by 2017 it had won the orchestration market. But AWS and Microsoft, rather than conceding, built EKS and AKS to capture the managed infrastructure revenue that Kubernetes workloads required. The framework was neutral. The infrastructure was not. That dynamic is exactly what is being constructed in the agent framework market, except this time every major player is releasing their own framework simultaneously rather than rallying around a single winner.
Compare this also to how Hugging Face built its ecosystem. Hugging Face open-sourced Transformers, built a massive community, and then monetized through its hosted inference and enterprise services. The difference is that Hugging Face's framework was genuinely model-agnostic at the infrastructure layer. Most of the current agent frameworks carry at least soft dependencies on their parent company's model or deployment services, which makes the lock-in more direct.
OpenAI's Agents SDK represents an interesting outlier. OpenAI does not operate a general-purpose cloud platform, so its framework lock-in targets model spend rather than compute infrastructure. A developer building on the Agents SDK is not choosing AWS versus Google Cloud. They are committing to GPT-4o or its successors for their agent's reasoning layer. That is a different kind of dependency, but the commercial logic is identical. You can browse AI tools and platforms to compare the current options with their infrastructure requirements mapped out.
FAQ
Q: What is an AI agent framework and why does it matter? A: An AI agent framework is a software toolkit that helps developers build AI systems capable of taking multi-step actions, like browsing the web, writing code, or calling external services. It matters because the framework you choose often determines which cloud platform you end up paying to run your agents in production, creating long-term cost and vendor commitments.
Q: Is it bad to use frameworks like LangGraph or AWS Strands? A: Not inherently. These are capable tools with real communities behind them, and you can find setup guides that walk through each one. The important thing is to go in with eyes open. Understand that choosing a framework backed by a cloud provider creates a natural pull toward that provider's paid services when you scale, and factor that into your architecture decisions early.
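One concrete way to factor lock-in into architecture early is to keep all model and inference calls behind a narrow interface your own code owns, so any vendor-specific SDK lives in a single adapter. A hedged sketch; the `ChatModel` protocol and the stand-in class are invented for illustration, not part of any framework's API:

```python
from typing import Protocol

class ChatModel(Protocol):
    """The one seam where a vendor SDK is allowed to appear."""
    def complete(self, prompt: str) -> str: ...

class FakeModel:
    """Stand-in for a vendor adapter (e.g. one wrapping Bedrock or Vertex)."""
    def complete(self, prompt: str) -> str:
        return f"stub reply to: {prompt}"

def summarize(model: ChatModel, text: str) -> str:
    # Application code depends only on the ChatModel interface, so
    # switching inference providers means rewriting one adapter class,
    # not every call site.
    return model.complete(f"Summarize: {text}")

print(summarize(FakeModel(), "agent frameworks"))
```

This does not remove the pull toward the sponsor's cloud, but it keeps the switching cost confined to one file instead of spread across the codebase.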
Q: How did the container wars end and will agent frameworks follow the same path? A: The container wars ended with Kubernetes as the dominant standard by around 2017, but AWS, Google, and Microsoft all won commercially by offering managed Kubernetes services. Most analysts expect agent frameworks to consolidate similarly, with one or two frameworks pulling ahead in adoption while the others fade, though the timeline for that consolidation is likely 18 to 24 months away.
The agent framework market is at the same inflection point that container orchestration was in early 2016, right before the winners became obvious. Developers and engineering teams that understand the infrastructure economics now will make better architectural decisions than those who treat framework selection as a purely technical question. Follow AI agents news to track which frameworks are gaining enterprise traction as the market sorts itself out. Subscribe to the AI Agents Daily newsletter for daily updates on AI agents, tools, and automation.