Industry 4.0 smart factories, private 5G campuses, neutral-host venues, and a global MVNO revival are all pounding on operators’ doors. As operators roll out 5G Standalone (SA) networks, these deployments unlock opportunities to monetize network capabilities like never before. Operators are investing significantly in the upgrade to 5G SA, drawn by the promise of offering tailored connectivity and specialized services to diverse customer segments. Each new business opportunity—whether it’s powering an industrial smart factory with ultra-low-latency connections, enabling an MVNO with differentiated services, or hosting a dedicated slice for a high-demand venue—places unique demands on the network and the charging system. Monetizing these varied, specialized opportunities efficiently requires the operator to deploy dedicated environments, known as “tenants,” rapidly and flexibly. A tenant, in this context, is an independent entity such as a private network operator, MVNO, or enterprise customer, each needing its own distinct policies, resources, charging, billing, and analytics frameworks.
However, rapidly provisioning these tenants is challenging with traditional, single-tenant architectures, which demand extensive manual configurations and dedicated resources and hardware for each new customer or scenario. As discussed in our previous blog post on network slicing, multi-tenancy is critical to achieving efficient and scalable monetization of diverse 5G use cases. In this blog, we dive deeper into why multi-tenancy is not just beneficial but essential for operators aiming to capitalize on the full commercial potential of 5G. We’ll discuss the specific challenges posed by single-tenancy approaches, the tangible advantages of multi-tenancy, and how Totogi’s Charging-as-a-Service radically simplifies and accelerates partner onboarding.
Why every campus, IoT network, and MVNO is a “tenant”
In a 5G SA world, a tenant is first and foremost a commercial customer—an enterprise campus, an IoT service provider, a venue owner, or an MVNO—each with its own particular needs and specialized products, and each expecting its own contract terms, usage counters, SLAs, charging, and invoicing. One tenant can command multiple slices (a smart factory might pair an ultra‑reliable motion‑control slice with a massive IoT telemetry slice), while a single shared slice can simultaneously serve several sub‑tenants, as happens when rival MVNOs ride the same neutral‑host network in a stadium. What matters isn’t how the radio is carved up; it’s that every customer gets airtight separation for policies, data, and money.
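To make the tenant‑to‑slice relationship concrete, here is a minimal data‑model sketch in Python. The class names and identifiers are ours, purely for illustration: one tenant can own several slices, while one shared slice can appear under several tenants.

```python
from dataclasses import dataclass, field

@dataclass
class Slice:
    slice_id: str   # e.g., an S-NSSAI-style slice identifier
    profile: str    # "URLLC", "mMTC", "eMBB", ...

@dataclass
class Tenant:
    tenant_id: str
    name: str
    slices: list[Slice] = field(default_factory=list)  # one tenant, many slices

# A smart factory pairs a motion-control slice with a telemetry slice...
factory = Tenant("t-001", "AcmeFactory", [
    Slice("s-motion", "URLLC"),
    Slice("s-telemetry", "mMTC"),
])

# ...while rival MVNOs share one neutral-host slice in a stadium.
stadium_slice = Slice("s-stadium", "eMBB")
mvno_a = Tenant("t-002", "MVNO-A", [stadium_slice])
mvno_b = Tenant("t-003", "MVNO-B", [stadium_slice])
```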
The “heavier” infrastructure—roof‑mounted gNB radios, licensed spectrum blocks, metro‑fiber rings, and the pooled compute that runs cloud‑native 5G Core functions such as the User Plane Function (UPF) and Access & Mobility Management Function (AMF)—is purposely built for sharing. A single 5G cell can schedule tens of thousands of devices, so welcoming a new tenant seldom requires another antenna farm. What it does require is elastic capacity: when the smart‑factory’s URLLC slice spins up a midnight maintenance run or an IoT fleet doubles its telemetry rate, additional CPU cycles and I/O are reserved in the public‑cloud fabric for both the Core and the charging plane, then released when demand subsides. Because Totogi’s charger is fully serverless, the same pool of stateless functions rates every event irrespective of tenant, expanding or contracting automatically and sparing operators the capital headache of sizing racks in advance.
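As a rough illustration of what “stateless functions rating every event irrespective of tenant” can look like, here is a hypothetical serverless‑style handler. The event shape, rates, and in‑memory stores are invented for the sketch; a real charger would read tenant state from durable, tenant‑partitioned storage rather than module‑level dicts.

```python
import json

# Hypothetical tenant-scoped state; in production this would live in
# durable, tenant-partitioned cloud storage, not module-level dicts.
TENANT_RATES = {"t-001": 0.002, "t-002": 0.0015}    # price per MB, illustrative
TENANT_BALANCES = {"t-001": 500.0, "t-002": 120.0}  # prepaid balances

def rate_event(event, context=None):
    """Stateless handler in the style of a serverless function: any
    warm instance can rate any tenant's event, because all tenant
    state is looked up per request rather than held in the function."""
    body = json.loads(event["body"])
    tenant_id = body["tenant_id"]   # the tenant ID steers the request
    usage_mb = body["usage_mb"]

    charge = usage_mb * TENANT_RATES[tenant_id]
    TENANT_BALANCES[tenant_id] -= charge  # would be an atomic DB update

    return {"statusCode": 200,
            "body": json.dumps({"tenant": tenant_id, "charged": charge})}

# Two tenants, one pool of identical functions:
print(rate_event({"body": json.dumps({"tenant_id": "t-001", "usage_mb": 40})}))
print(rate_event({"body": json.dumps({"tenant_id": "t-002", "usage_mb": 15})}))
```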
Some artefacts, however, should remain tenant‑specific—chiefly the business logic that turns usage into revenue and guarantees into SLAs. Each tenant keeps its own product catalogue, rating rules, and policy definitions so that a price tweak for a stadium’s high‑capacity slice cannot bleed into an automotive OEM’s latency‑critical slice. Tenant‑scoped data stores hold balances, tiered discounts, charging records, and performance KPIs, all encrypted with tenant‑unique keys. Observability follows the same rule of separation: the mining company’s dashboard that tracks packet‑loss inside its pit sensors is invisible to the MVNO next door. By ring‑fencing these logical layers while sharing spectrum, transport, and compute, operators blend ruthless resource efficiency with rock‑solid commercial isolation.
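One common pattern for tenant‑unique keys is envelope encryption with a distinct key per tenant; the sketch below uses Python’s cryptography library to show the idea. Key handling is deliberately simplified (production systems keep keys in a KMS or HSM), and none of this describes Totogi’s internal design.

```python
from cryptography.fernet import Fernet

# Each tenant gets its own key, so no key can open another tenant's
# namespace. Production systems would hold these keys in a KMS/HSM.
tenant_keys = {tid: Fernet(Fernet.generate_key()) for tid in ("t-001", "t-002")}

def store_record(tenant_id: str, record: bytes) -> bytes:
    """Encrypt a record under the owning tenant's key."""
    return tenant_keys[tenant_id].encrypt(record)

def load_record(tenant_id: str, blob: bytes) -> bytes:
    """Decrypting with any other tenant's key raises InvalidToken."""
    return tenant_keys[tenant_id].decrypt(blob)

blob = store_record("t-001", b"balance=500.0")
assert load_record("t-001", blob) == b"balance=500.0"
```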
Single tenancy versus multi‑tenancy: what’s the difference?
In a single‑tenant charging model, the operator stands up a dedicated software stack—virtual machines, databases, mediation, catalogues—for each commercial customer. When a healthcare campus requests a private 5G network, the operator must roll out an entirely new CCS stack in its own VPC: procure licenses, stand up virtual machines, load rating and policy tables, commission a dedicated database, and pass vendor acceptance testing that can take months. Repeat the same ritual for the next MVNO and the next factory, and the operations team is soon nursing half a dozen parallel environments that duplicate 90% of the same code, devour budget on extra licences and infrastructure, and tie up skilled engineers in perpetual maintenance.
A multi‑tenant charger flips that logic: one cloud‑native codebase runs once, but every request carries a tenant ID that steers it into an isolated namespace. The factory’s rating rules, the MVNO’s balance tree and the stadium’s KPI feed live side‑by‑side in the same serverless pool yet remain cryptographically separate. Adding a new tenant is no heavier than a handful of user-friendly screens and a push of a button—exactly how public clouds onboard customers every day.
With those definitions nailed, let’s see why clinging to single‑tenant deployments drags operators down.
Single‑tenant pitfalls: the hidden costs
Choosing a one‑tenant‑per‑instance charging model may appear straightforward during initial planning, yet each deployment imposes significant capital outlay and slows the introduction of new services. A dedicated Convergent Charging System (CCS) instance requires its own perpetual licenses, reserved compute capacity, security services, and high‑availability footprint. Before traffic can flow, solution architects must produce tenant‑specific design documents, provision the environment, install the charging software, populate catalogues and policy tables, and complete several rounds of system integration and user‑acceptance testing. When this cycle repeats tenant after tenant, the up‑front spend is baked into each commercial offer, limiting discount headroom and pricing flexibility, while the proliferation of bespoke stacks clogs IT pipelines, multiplies technical debt, and drives sustained operational inefficiency.
Single‑tenant deployments also prolong time to market, as each fresh instance must traverse the same labor‑intensive design, provisioning, and testing gauntlet, no matter the size of the deal. Commissioning a bespoke stack is never a click‑and‑go affair; it involves capacity planning, database provisioning, policy table uploads, vendor acceptance testing, and months of integration with mediation, the network, and other BSS applications. While engineers juggle tickets, competitors equipped with multi‑tenant SaaS chargers are already live and billing—winning logos simply by being first to market.
Even after go‑live, the model bleeds cash in OPEX. Each isolated environment hoards compute that sits idle off‑peak yet still racks up cloud invoices, while burst demand forces costly over‑provisioning because capacity cannot be shared across tenants. There is no elastic safety net: the operator either pays for slack resources that sit unused or risks SLA breaches when usage spikes. A pay‑per‑use, multi‑tenant charger sidesteps that lose‑lose trade‑off by expanding and contracting a single serverless pool only when traffic warrants it. Finally, single tenancy locks the operator into vendor‑driven timelines. Upgrades, patches, and change requests must be cloned and regression‑tested for every environment. The coterie of vendor specialists who know how to tune each stack becomes a bottleneck; their calendars, not your product managers’ roadmaps, set the cadence for new offers and regulatory tweaks. The longer releases drag on, the less competitive the tariff catalogue becomes, reinforcing a vicious cycle of lost market share.
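A back‑of‑envelope calculation shows how quickly idle reserve adds up. All numbers below are hypothetical, chosen only to illustrate the shape of the trade‑off between sizing for peak and paying per transaction.

```python
# All numbers are hypothetical, chosen only to illustrate the trade-off.
PEAK_TPS = 2_000           # transactions/sec the siloed stack is sized for
AVG_TPS = 300              # actual average load
HOURS_PER_MONTH = 730
COST_PER_TPS_HOUR = 0.01   # illustrative cost of reserved rating capacity
COST_PER_TXN = 0.000005    # illustrative pay-per-use price per transaction

reserved = PEAK_TPS * HOURS_PER_MONTH * COST_PER_TPS_HOUR
pay_per_use = AVG_TPS * 3_600 * HOURS_PER_MONTH * COST_PER_TXN

print(f"reserved for peak: ${reserved:,.0f}/month")    # ~$14,600
print(f"pay-per-use:       ${pay_per_use:,.0f}/month") # ~$3,942
```

The reserved bill tracks the peak; the pay‑per‑use bill tracks the average. The wider the gap between the two, the more a siloed stack overpays.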
The benefits of a multi‑tenant Convergent Charging System
A true multi‑tenant CCS turns the liabilities of single‑tenant deployments on their head. Instead of carving out a standalone stack for each customer, one shared, cloud‑native codebase serves every tenant, with requests routed by a tenant ID that enforces strict logical separation. Data isolation is achieved through dedicated namespaces: each tenant’s catalogs, rating rules, balances, and performance dashboards sit behind its own encryption keys, so commercial information never leaks across boundaries. Because these namespaces run in a common, serverless execution pool, compute and storage are allocated dynamically. Idle capacity is surrendered back to the cloud provider, eliminating the wasted spend that plagues dedicated environments, while sudden peaks—whether an MVNO promotion or an IoT firmware push—are absorbed in milliseconds.
The economics shift immediately. Up‑front licenses, database instances, and high‑availability clusters are no longer multiplied per customer; they are amortized across the entire tenant base. The operator can price new private‑network or MVNO offers more aggressively because a large slice of fixed cost has vanished.
Operational agility follows the same curve. Onboarding a tenant is largely self‑service: an administrator sets up a new tenant with a few clicks, selects a service template, and configures business parameters through a single control plane. There is no multi‑week provisioning marathon, no waiting for vendor professional services, and no cascade of regression tests across dozens of parallel stacks. Change requests give way to feature requests, which take a fraction of the time to get from specification to GA. Thanks to accelerated CI/CD pipelines, such updates are published once and are instantly available to all tenants and their customers: no downtime for any tenant, no upgrades to schedule, and no risk.
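To give a feel for what “a few clicks” can reduce to under the hood, here is a hypothetical REST call for tenant onboarding. The endpoint, field names, and template identifier are invented for this sketch and are not Totogi’s published API.

```python
import requests

# Hypothetical onboarding call: the endpoint, fields, and template name
# are invented for this sketch, not Totogi's published API.
resp = requests.post(
    "https://charging.example.com/api/v1/tenants",
    headers={"Authorization": "Bearer <operator-token>"},
    json={
        "name": "HealthcareCampus",
        "template": "private-5g-basic",   # pre-built service template
        "parameters": {
            "currency": "USD",
            "default_plan": "flat-rate-iot",
            "sla_tier": "gold",
        },
    },
    timeout=10,
)
resp.raise_for_status()
print("tenant ready:", resp.json())
```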
These efficiencies translate directly into competitive advantage. Shorter time to market means the operator can secure enterprise logos before rivals even complete sizing exercises. Lower unit costs enhance margin headroom, enabling flexible pricing strategies in price‑sensitive segments. Most important, the ability to scale elastically—paying only for usage, never for idle reserve—aligns cost with revenue in real time, a discipline that traditional capital‑heavy telecom infrastructure rarely achieved.
Economics side by side: single‑tenant vs. multi‑tenant charging
A quick comparison shows how fast costs and timelines diverge when every tenant gets its own stack versus when they share a cloud‑native, multi‑tenant CCS.
| Dimension | Single‑Tenant CCS | Multi‑Tenant CCS (Totogi) |
|---|---|---|
| Up‑front CAPEX | New licenses, VMs, databases, and HA clusters for each tenant drive six‑figure set‑up bills. | One shared codebase and infrastructure across all tenants; incremental cost per tenant is negligible. |
| Time to market | 4–9 months: design documents, provisioning, integration, and UAT repeated for every deal. | Days to weeks: create the tenant in the UI, apply a template, start rating immediately. |
| OPEX: cloud bills | Idle compute locked inside siloed stacks; over‑provisioning to protect SLAs wastes budget. | Serverless pool scales per request—operators pay only for consumption, never for idle reserve. |
| Scalability | Capacity locked per tenant; bursts require manual resizing and extra hardware bookings. | Elastic by design; bursts absorbed automatically without manual intervention. |
| Upgrade overhead | Patches cloned and regression‑tested for every stack—slow, risky, and people‑intensive. | One continuous deployment stream; all tenants benefit instantly, with no downtime. |
| Pricing flexibility | High fixed cost forces rigid tariffs and limits discount headroom. | Low unit cost enlarges margin headroom and supports aggressive, usage‑based pricing. |
| Vendor dependence | Heavy reliance on vendor PS and bespoke integrations per tenant. | Centralized SaaS updates and standardized APIs minimize external touch points. |
Data isolation and security—facts, not myths
Multi‑tenancy does not imply porous boundaries. Every tenant in Totogi’s cloud is walled off by a dedicated namespace whose objects—catalogs, balances, policies, usage records—are encrypted with tenant‑specific keys. Service calls carry a signed tenant token, and cross‑namespace access is blocked cryptographically. Compliance audits confirm that logical isolation matches or exceeds the guarantees of physically separate stacks, without the inefficiency.
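As an illustration of the signed‑tenant‑token pattern, the sketch below uses PyJWT to bind a tenant claim to every service call. The key, claim names, and check are ours, not a description of Totogi’s actual mechanism.

```python
import jwt  # PyJWT: one common way to implement signed tenant tokens

SIGNING_KEY = "demo-secret"  # illustrative; real systems use asymmetric keys

def authorize(token: str, requested_namespace: str) -> bool:
    """Reject any call whose signed tenant claim does not match the
    namespace it is trying to reach."""
    claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    return claims["tenant_id"] == requested_namespace

token = jwt.encode({"tenant_id": "t-001"}, SIGNING_KEY, algorithm="HS256")
assert authorize(token, "t-001")      # same namespace: allowed
assert not authorize(token, "t-002")  # cross-namespace: denied
```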
Totogi point of view
Totogi Charging‑as‑a‑Service was built as a multi‑tenant, serverless platform from day one. No virtual machines to size, no containers to orchestrate—just high‑performance, event‑driven functions that expand when traffic surges and contract when it ebbs. Operators onboard new tenants through a self‑service portal, launch differentiated tariffs in minutes, and pay strictly for what they process. The result: faster deals, leaner cost curves, and the freedom to experiment without capital risk.
Totogi Charging‑as‑a‑Service eliminates traditional CCS pitfalls through a fully managed, multi‑tenant SaaS platform. Our solution maximizes operational efficiency, reduces CAPEX and OPEX, and dramatically accelerates tenant onboarding. Plus, with usage‑based pricing and a free tier, Totogi offers a genuinely risk‑free environment for telcos to experiment and innovate rapidly.