As telecom CEOs and CTOs consider their artificial intelligence (AI) investments, a crucial question arises: does the true value of AI lie in foundation models—such as large language models (LLMs)—or in their applications?
There’s no doubt that we’re in the middle of an AI model gold rush, with major players like SK Telecom investing $100 million in Anthropic and the Global Telco AI Alliance forming to develop industry-specific LLMs. Microsoft has committed $13 billion to OpenAI, while Amazon has invested $8 billion in Anthropic, and T-Mobile has also made a significant move, allocating $100 million to OpenAI.
But here's what many in telecom haven't yet realized: foundation models are rapidly commoditizing, and the real value is shifting to those focused on domain-specific implementations. This isn't just a theory; it's happening now. Foundation models are following the same path as cloud infrastructure: once an advantage, now a commodity.
Like cloud computing—initially a hyperscaler differentiator, now a standard utility—the real value no longer lies in owning the models but in translating them into tailored, telecom-ready solutions.
The commoditization of foundation models: a shifting AI landscape
As models become standardized and widely available, their strategic value will continue to diminish. Smart telecom operators are already shifting their bets to where long-term advantage actually lies: the solution layer, where AI turns from theory into transformation.
AI is evolving so quickly that models considered groundbreaking just a few months ago are now widely available and increasingly standardized across multiple providers. This commoditization is happening at an even faster pace than expected, driven by several key factors:
- Plummeting prices – Training and inference costs have been dropping tenfold year over year, and competition keeps pushing prices down: a classic indicator of commoditization.
- Ready availability – New releases from Anthropic, Google, Meta, OpenAI, and dozens more arrive every week. AWS Bedrock alone offers access to over 100 foundation models. Accessing a powerful LLM is no longer an issue for enterprises.
- Low switching costs – Applications can easily swap the underlying model.
- Closing performance gaps – Most LLMs now perform comparably on tasks like code generation, math, image and video generation, audio processing, and general knowledge queries. While there might be subtle variations depending on the modality required, LLMs’ central offerings are becoming more alike.
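The "low switching costs" point above can be made concrete with a thin provider-agnostic wrapper. This is a minimal illustrative sketch, not any vendor's actual SDK; the provider names and the `complete` callable are stand-ins for real API calls:

```python
from dataclasses import dataclass
from typing import Callable

# Each provider is reduced to a plain prompt -> completion function,
# so the application layer never depends on a specific vendor SDK.
@dataclass
class ModelProvider:
    name: str
    complete: Callable[[str], str]

def make_stub(name: str) -> ModelProvider:
    # Stand-in for a real SDK call (e.g., an HTTP request to the vendor).
    return ModelProvider(name, lambda prompt: f"[{name}] answer to: {prompt}")

class TelcoApp:
    """Application logic stays fixed while the underlying model is swapped."""
    def __init__(self, provider: ModelProvider):
        self.provider = provider

    def summarize_ticket(self, ticket_text: str) -> str:
        prompt = f"Summarize this support ticket: {ticket_text}"
        return self.provider.complete(prompt)

app = TelcoApp(make_stub("model-a"))
print(app.summarize_ticket("SIM not provisioning"))

# Swapping the model is a one-line change; no application code is touched.
app.provider = make_stub("model-b")
print(app.summarize_ticket("SIM not provisioning"))
```

Because the application owns the interface, the model underneath behaves like a utility: replaceable when a cheaper or better one appears.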
The value of the application layer
A marginally stronger model won't move the needle anymore. The real differentiator is building domain-specific applications, armed with relevant context and knowledge, that drive tangible business impact: boosting revenue by double digits, uncovering critical insights, or slashing costs.
Models alone can’t deliver results in complex enterprise environments. Only custom-built tools with specific industry logic can turn AI into bottom-line value. In telecom, this is critical: industry expertise inside a general model is almost worthless, but inside a specialized application, it becomes a game-changer. That knowledge asymmetry is your competitive edge.
To unlock real intelligence, applications must integrate directly with your business systems. Foundation models overlook your unique terms, processes, and architecture—they aren’t aware of your internal systems. Your proprietary data can’t power public models, but it can supercharge private applications. And as foundation models become interchangeable, business-ready AI tools give you the flexibility to swap models like utilities while protecting your AI investments from market volatility.
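The claim that proprietary data supercharges private applications usually comes down to retrieval: the public model never ingests your internal knowledge base, but the application fetches relevant internal records and injects them into each request. A minimal sketch, in which the knowledge-base entries and the toy word-overlap scoring are illustrative assumptions (a real system would use embeddings or a search index):

```python
# Hypothetical internal knowledge base; entries are made up for illustration.
PRIVATE_KB = {
    "plan-5g-promo": "Internal: 5G promo plan bills via legacy rater R2.",
    "port-in-flow": "Internal: port-ins require manual OSS step before activation.",
}

def retrieve(query: str, kb: dict[str, str], top_k: int = 1) -> list[str]:
    # Toy relevance score: count of shared lowercase words.
    q = set(query.lower().split())
    scored = sorted(kb.values(), key=lambda doc: -len(q & set(doc.lower().split())))
    return scored[:top_k]

def build_prompt(question: str) -> str:
    # The private context rides along with each request; the model itself
    # stays generic and interchangeable.
    context = "\n".join(retrieve(question, PRIVATE_KB))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Why did the port-in fail before activation?"))
```

The point of the pattern: the differentiating asset is the private data and the integration around it, not the model receiving the prompt.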
This is where the smart money flows—into the AI layer that understands your business.
The impact of vertical AI in telco
In recent months, we've seen the rise of agentic AI as more organizations trial AI agents. Yet McKinsey research shows that while 79% of companies have deployed generative AI, 78% report no significant bottom-line impact, a gap they call the "Gen AI paradox." The problem? Most deployments are horizontal tools and copilots that lack deep domain integration. McKinsey also finds that around 90% of transformative, vertical AI use cases remain stuck in pilot mode. If agents can already execute tasks autonomously, what do they still lack to drive impact through sophisticated reasoning and decision-making?
Foundation models are smart in the abstract—but Totogi BSS Magic makes them smart about you. Implementing horizontal AI agents does not deliver the needed results, and making each AI agent knowledgeable about your ecosystem is a massive effort. This is exactly where BSS Magic comes in. It overlays your entire BSS stack with an AI-powered telco ontology: the accumulated brainpower of decades of operator experience, mapped to your unique data structure, processes, business rules, and organizational logic.
Suddenly, AI agents understand how to launch a product, fix a provisioning error, or migrate millions of subscribers without breaking a thing. As agents and applications are built on top of Totogi BSS Magic, every agent or application inherits this context from day one, so you’re not teaching each one the basics from scratch. That’s how you turn generic AI into telecom-grade AI—and real results.
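The architecture described here, a shared domain ontology that every agent inherits rather than re-learns, can be sketched generically. To be clear, this is an illustrative design under assumed names (`TelcoOntology`, `Agent`, and all fields are hypothetical), not Totogi's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TelcoOntology:
    # Domain knowledge mapped once: entities, where they live, and the
    # business rules that constrain any action against them.
    entities: dict[str, str] = field(default_factory=lambda: {
        "subscriber": "table CRM.SUBS, keyed by MSISDN",
        "provisioning_error": "OSS queue PROV_ERR, retried via workflow W-17",
    })
    business_rules: list[str] = field(default_factory=lambda: [
        "Never deactivate a subscriber with an open billing dispute.",
    ])

class Agent:
    """Each agent receives the shared ontology at construction, so no
    per-agent re-teaching of the basics is needed."""
    def __init__(self, role: str, ontology: TelcoOntology):
        self.role = role
        self.ontology = ontology

    def plan(self, task: str) -> str:
        rules = "; ".join(self.ontology.business_rules)
        return f"[{self.role}] task={task!r} constrained by: {rules}"

shared = TelcoOntology()
fixer = Agent("provisioning-fixer", shared)
migrator = Agent("subscriber-migrator", shared)
print(fixer.plan("fix provisioning error"))
print(migrator.plan("migrate 2M subscribers"))
```

The design choice worth noting is that the ontology is a single shared object: updating a business rule once immediately constrains every agent built on top of it.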
The name of the game: AI applications that deliver value
The model race is noise. The real opportunity isn’t in marginally better language models—it’s in applications that deliver measurable business outcomes, like increased revenue and faster time-to-market. While competitors sink hundreds of millions into foundation models that quickly commoditize, Analysys Mason predicts that telecom firms prioritizing AI-powered application development could realize an additional $15–20 billion in global revenue by 2027. That’s not just market growth—it’s market leadership.