
Why Telcos Should Rethink Their Large Language Model (LLM) Strategy

September 23, 2025

Avoiding the Telco to “Digitalco” to “Techco” to “Aico” Trap

The telecommunications industry stands at a pivotal crossroads. As telcos transition to technology-driven enterprises, artificial intelligence (AI), particularly large language models (LLMs), emerges as a central pillar. The potential benefits are enormous: smarter customer service, more resilient networks, and leaner operations. However, a foundational challenge persists: how to monetize vast data assets effectively. Despite holding large volumes of valuable customer and network intelligence, many telcos have struggled to convert that data into meaningful revenue or differentiated services.

The Talent and Resource Dilemma

A pressing reality for telcos is the scarcity of the talent and resources crucial for AI success. Facing intense competition from hyperscalers, fintechs, and tech giants, telcos struggle to attract AI expertise: those sectors offer higher pay, cutting-edge projects, and faster career growth, creating a “brain drain.” As a result, telcos must rethink their AI strategies, carefully balancing scale and specialization to navigate talent scarcity and investment risk. Telcos in Korea, Japan, and China are notable exceptions, as those markets have deep domestic pools of AI expertise.

The Rising Costs and Risks of Building LLMs

Building proprietary LLMs comes with staggering costs, often ranging from $10 million to $100 million in compute alone, excluding infrastructure, talent, and ongoing update costs. Security concerns also loom large: telcos manage sensitive subscriber data and national critical infrastructure, making their AI systems prime targets. Larger LLMs are harder to audit, more vulnerable to data leakage, and costlier to secure. For most telcos, investing in a homegrown LLM is less about mastering technology and more about managing a high-stakes gamble with uncertain rewards. Moreover, most telcos simply do not need the compute that frontier LLMs demand: their AI workloads are a fraction of those run by hyperscalers and LLM providers such as OpenAI.

LLM vs. SLM: Finding the Right Fit

As telcos navigate the complexities of AI integration, the choice between deploying LLMs and small language models (SLMs) becomes critical. While LLMs offer broad capabilities, they come with significant costs and infrastructure demands that may not align with telcos’ specific needs. Here’s why SLMs present a compelling alternative:

  • Efficiency and Cost-Effectiveness: SLMs are designed to operate efficiently on standard servers or edge devices, making them a cost-effective solution for telcos. Unlike LLMs, which require extensive computational resources, SLMs can handle real-time customer queries and network events without the need for high-end infrastructure. This efficiency translates into substantial savings, especially for telcos with limited budgets.
  • Tailored Solutions for Specific Needs: The one-size-fits-all approach of LLMs often fails to address the unique requirements of telcos. SLMs, however, can be fine-tuned with telecom-specific datasets, providing domain-specific excellence in tasks such as fraud detection and SIM-swap prevention. This targeted approach ensures that telcos can deploy AI solutions that are directly aligned with their operational goals.
  • Data Sovereignty and Compliance: With increasing regulatory scrutiny around data privacy, SLMs offer a significant advantage by allowing deployment entirely on-premises. This capability is crucial for telcos operating under stringent data protection rules and AI regulations such as the EU AI Act. By maintaining data sovereignty, telcos can ensure compliance while leveraging AI to enhance their services.
  • Scalability and Flexibility: SLMs provide the flexibility to scale AI capabilities according to the evolving needs of the business. Unlike the rigid structure of LLMs, SLMs can be adapted and expanded as new challenges and opportunities arise, offering telcos a dynamic toolset to stay competitive.

The answer isn’t an LLM for everything. The starting point should be identifying which AI workloads are actually required and which language models and retrieval-augmented generation (RAG) pipelines serve them. For many telco tasks, SLMs, including lightweight models that run at the edge, are sufficient.
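To make that pattern concrete, here is a minimal sketch of a small model paired with a lightweight retrieval step over telecom documents, the kind of workload that can run on a standard server or edge box without GPU clusters. The model names, sample documents, and query are illustrative assumptions only, not recommendations from this article; a production deployment would add telecom-specific fine-tuning, evaluation, and access controls.

```python
# Minimal SLM + RAG sketch: retrieve telecom-specific context, then answer
# with a small model. Model names and documents are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# 1. A tiny in-memory "knowledge base" standing in for telco runbooks/FAQs.
documents = [
    "SIM-swap requests require a one-time PIN sent to the registered device.",
    "Roaming charges in Zone 2 are capped at 5 EUR per day.",
    "Cell outages are escalated to tier-2 support after 15 minutes.",
]

# 2. Embed the documents once with a small sentence encoder (CPU is enough).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = encoder.encode(documents, convert_to_tensor=True)

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Return the top_k documents most similar to the query."""
    query_embedding = encoder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, doc_embeddings, top_k=top_k)[0]
    return [documents[hit["corpus_id"]] for hit in hits]

# 3. A small instruction-tuned model generates an answer grounded in the
#    retrieved context instead of relying on a frontier-scale LLM.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    result = generator(prompt, max_new_tokens=80, return_full_text=False)
    return result[0]["generated_text"].strip()

if __name__ == "__main__":
    print(answer("How do I verify a SIM-swap request?"))
```

The design point is that both the encoder and the generator are small enough to run on commodity hardware, and the retrieval step keeps answers grounded in the telco’s own data, which stays on-premises.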

Partnering with AI Leaders: Pragmatic and Immediate Scale with Lower Risk

For many telcos, partnering with AI giants such as OpenAI, Anthropic, Google DeepMind, and Meta presents a pragmatic approach. Partnering provides:

  • Instant access to secure, enterprise-grade AI capabilities without massive upfront capital.
  • The ability to focus internal expertise on AI application development rather than underlying model infrastructure.

Telco examples confirm this approach’s value: Vodafone’s AI copilot cut call center handling times by 25%, while AT&T’s collaboration with Microsoft Azure enabled predictive fault detection that reduced network downtime. These wins come from effectively deploying existing models rather than duplicating development efforts.

When Building In-House LLMs Still Matters

Some specialized situations warrant custom LLM development:

  • Regulatory demands for absolute data sovereignty, forbidding third-party hosting.
  • Proprietary knowledge or processes that form a competitive moat.
  • Usage at such scale that third-party licensing fees would outweigh internal development costs.
  • Special language or dialect needs: Telcos serving linguistically unique regions (e.g., Bahasa Indonesia, regional dialects) may require models tailored to these languages, which are often underserved by general AI platforms.

However, such efforts require significant capital, along with deep alignment of data strategy, AI talent, and long-term business goals, and are therefore viable only for a few players with substantial resources. Outside Korea, Japan, and China, most telcos lack the scale or the language-specific requirements to justify such an endeavor.

Balancing Scale, Specialization, and Monetization

The future for telcos lies in balancing foundational LLM partnerships with strategic SLM deployments. This balance enables telcos to:

  • Harness scale where it matters for broad AI capabilities.
  • Develop precise, sovereign AI tools tuned to telco-specific challenges.
  • Overcome long-standing barriers in data monetization by unlocking new business models powered by AI-driven data intelligence.
  • Address talent scarcity by focusing limited internal AI expertise on deploying and maintaining specialized applications rather than building foundational models from scratch.

Telcos that rethink their AI strategies to integrate scale, specialization, and monetization will be best positioned to thrive as technology-driven enterprises amid fierce industry competition.

In conclusion, the journey to becoming an "Aico" is not just about adopting the latest AI technologies but about strategically leveraging them to create sustainable value. By focusing on what matters—security, efficiency, and domain-specific excellence—telcos can transform challenges into opportunities and lead the way in the digital age.
