Cloud Native BNG Overview

The Cloud Native Broadband Network Gateway (cnBNG) redefines the traditional physical BNG by decoupling the subscriber management functions of the control plane (CP) from the forwarding functions of the user plane (UP), giving service providers greater flexibility and scalability. The cnBNG architecture is based on Control and User Plane Separation (CUPS): the CP performs the policy and charging rule function (PCRF), while the UP performs the policy enforcement function (PEF) of the overall BNG subscriber management solution. The cnBNG solution provides optimum scale dimensioning in terms of the number of subscriber sessions and forwarding capacity, and aims at rapid deployment of multi-access services for users. It is also a step towards converging fixed-line and mobile networks at all network layers.

Overview

The Broadband Network Gateway (BNG) is the access point for subscribers, through which they connect to the broadband network. When a connection is established between BNG and Customer Premise Equipment (CPE), the subscriber can access the broadband services provided by the Network Service Provider (NSP) or Internet Service Provider (ISP).

BNG establishes and manages subscriber sessions. When a session is active, BNG aggregates traffic from various subscriber sessions from an access network, and routes it to the network of the service provider.

BNG is deployed by the service provider and is present at the first aggregation point in the network, such as the edge router. An edge router, like the Cisco ASR 9000 Series Router, must be configured to act as the BNG. Because the subscriber connects directly to the edge router, BNG effectively manages subscriber access and subscriber management functions such as:

  • Authentication, Authorization, and Accounting (AAA) of subscriber sessions

  • Address assignment

  • Security

  • Policy management

  • Quality of Service (QoS)
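Taken together, these functions amount to per-subscriber session state that the BNG must create, track, and account for. The sketch below is purely illustrative; the class and field names are invented for explanation and are not part of any Cisco software.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative per-subscriber session record; names are invented,
# not a Cisco API.
@dataclass
class SubscriberSession:
    subscriber_id: str
    ip_address: Optional[str] = None      # address assignment
    authenticated: bool = False           # AAA outcome
    qos_profile: str = "default"          # Quality of Service
    policy_rules: List[str] = field(default_factory=list)  # policy management
    bytes_in: int = 0                     # accounting counters
    bytes_out: int = 0

# A session record is created when the CPE connects, then updated as
# authentication and address assignment complete.
session = SubscriberSession(subscriber_id="cpe-0042")
session.authenticated = True
session.ip_address = "203.0.113.10"
```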

Implementing the BNG provides the following benefits:

  • Communicates with the authentication, authorization, and accounting (AAA) server to perform session management and billing functions in addition to the routing function. This makes the BNG solution more comprehensive.

  • Provides different network services to the subscriber. This enables the service provider to customize the broadband package for each customer based on their needs.

Cisco provides two BNG solutions:

  • Physical BNG, where the BNG Control Plane (CP) and the User Plane (UP) are tightly coupled inside a Cisco IOS XR platform: the CP runs on an x86 CPU and the UP runs on a physical NPU or ASIC.

    For more information about the physical BNG, refer to the latest version of the Broadband Network Gateway Configuration Guide for Cisco ASR 9000 Series Routers.

  • Virtual BNG (vBNG), where the BNG CP and UP run as separate VM-based Cisco IOS XR instances on general-purpose x86 UCS servers.

Evolution of cnBNG

The Cisco Cloud Native Broadband Network Gateway (cnBNG) provides a new dimension to the Control Plane and User Plane Separation (CUPS) architecture of the Broadband Network Gateway (BNG), enabling flexibility and rapid scaling for Internet Service Providers (ISPs).

Figure 1. Evolution of BNG to cnBNG


The architectural change is an evolution from an integrated traditional BNG running on a single router to a disaggregated solution, where the centralized subscriber management runs on an elastic and scalable Cloud Native Control Plane (CP) and the User Plane (UP) delivers the forwarding functionality.

cnBNG Architecture

In the cnBNG architecture, the CPs and UPs are clearly and cleanly separated from each other and run in completely distinct and independent environments.

The BNG CP is moved out to a container-based microservice cloud environment.

The UP can run on any of the physical platforms that support the BNG UP, such as the Cisco ASR 9000 Series Routers.

The following figure illustrates the overall cnBNG architecture.

Figure 2. cnBNG Architecture


Features and Benefits

The cnBNG supports the following features:

  • Path to convergence: With a shared subscriber management infrastructure, common microservices across the policy layer, and shared UPs for BNG and mobile backhaul, cnBNG paves the way for true Fixed Mobile Convergence (FMC).

  • Flexibility of scaling: The cnBNG architecture provides flexibility by decoupling the scalability dimensions. The CP can be scaled with the number of subscribers to be managed, and UPs can be added based on bandwidth requirements. Instead of building the CP for peak usage, the orchestrator can be triggered to deploy the relevant microservices as needed to handle an increased rate of transactions.

  • Distributed UPs: With reduced operational complexity and minimal integration effort with the centralized CP, UPs can be distributed closer to end users to offload traffic to the nearest peering points and CDNs. This reduces core transport costs.

  • Cost-effective and leaner user planes: With the subscriber management functions moved to the cloud, you can choose cost-effective UP models for optimized deployment requirements.

The benefits of the cnBNG architecture are:

  • Simplified and unified BNG CP

  • Platform-independent and Network Operating System (NOS)-agnostic BNG CP

  • Unified Policy interface across both BNG and mobility

  • Common infrastructure across wireline and mobility

  • Seamless migration from existing deployments

  • Leverage the common infrastructure across access technologies

  • Standardized model driven interface with the UP

  • Data externalization for North-bound interfaces (NBI)

  • Highly available and fault tolerant

  • Simplified Subscriber Geo redundancy

  • Horizontally scalable CP

  • Independent CP and UP upgrades

  • Feature agility with CI and CD

  • Manageability and Operational Simplification

cnBNG Components

The cnBNG solution comprises the following components:

Subscriber Microservices Infrastructure

The Cisco Ultra Cloud Core Subscriber Microservices Infrastructure (SMI) is a layered stack of cloud technologies that enables rapid deployment and seamless life-cycle operations for microservices-based applications.

The SMI stack consists of the following:

  • SMI Cluster Manager—Creates the Kubernetes (K8s) cluster, creates the software repository, and provides ongoing LCM for the cluster including deployment, upgrades, and expansion.

  • Kubernetes Management—Includes the K8s master and etcd functions, which provide LCM for the NF applications deployed in the cluster. This component also provides cluster health monitoring and resource scheduling.

  • Common Execution Environment (CEE)—Provides common utilities and OAM functionalities for Cisco cloud native NFs and applications, including licensing and entitlement functions, configuration management, telemetry and alarm visualization, logging management, and troubleshooting utilities. Additionally, it provides consistent interaction and experience for all customer touch points and integration points in relation to these tools and deployed applications.

  • Common Data Layer (CDL)—Provides a high performance, low latency, stateful data store, designed specifically for 5G and subscriber applications. This next generation data store offers HA in local or geo-redundant deployments.

  • Service Mesh—Provides sophisticated message routing between application containers, enabling managed interconnectivity, additional security, and the ability to deploy new code and new configurations in a low-risk manner.

  • NB Streaming—Provides Northbound Data Streaming service for billing and charging systems.

  • NF/Application Worker nodes—The containers that comprise an NF application pod.

  • NF/Application Endpoints (EPs)—The NF's/application's interfaces to other entities on the network.

  • Application Programming Interfaces (APIs)—SMI provides various APIs for deployment, configuration, and management automation.
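As a rough mental model of what a stateful, geo-redundant data store such as the CDL offers, consider the toy sketch below. The API is invented for illustration and is not the CDL interface; real replication is far more involved.

```python
# Toy model of a replicated session store; NOT the CDL API.
class SessionStore:
    def __init__(self):
        self.primary = {}   # local copy of session records
        self.replica = {}   # stands in for the geo-redundant copy

    def put(self, key, record):
        self.primary[key] = record
        self.replica[key] = dict(record)  # simplified synchronous replication

    def get(self, key):
        # Fall back to the replica if the primary copy is lost.
        return self.primary.get(key) or self.replica.get(key)

store = SessionStore()
store.put("sub-001", {"ip": "203.0.113.10", "state": "up"})
store.primary.clear()                 # simulate loss of the local site
print(store.get("sub-001")["state"])  # the replica still serves the record
```

The point of the sketch is only that session state survives the loss of one copy, which is what lets a stateless CP microservice restart or scale without dropping subscribers.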

For more information on SMI components, refer to the "Overview" chapter of the Ultra Cloud Core Subscriber Microservices Infrastructure Deployment Guide.

For information on the Cisco Ultra Cloud Core, see https://www.cisco.com/c/en/us/products/collateral/wireless/packet-core/datasheet-c78-744630.html.

cnBNG Control Plane

The Cisco cnBNG CP is built on the Cisco® Cloud Native Infrastructure, a Kubernetes-based platform that provides a common execution environment for container-based applications. The CP is built on the principles of stateless microservices, so that it can scale with ease and introduce services faster and more cost-effectively.

Figure 3. cnBNG Control Plane Architecture


The CP runs as a Virtual Machine (VM) to adapt to existing service provider-deployed virtual infrastructure. It is built from the ground up on a clean-slate architecture with a view on ‘Converged Subscriber Services’ and is aligned to 3GPP and BBF standards.

The cnBNG CP manages subscriber management functions such as:

  • Authentication, authorization, and accounting of subscriber sessions

  • IP Address assignment

  • In-built DHCP Server

  • Security

  • Policy management

  • Quality of Service (QoS)
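Address assignment from the in-built DHCP server boils down to leasing addresses out of configured pools. The following simplified sketch shows that core idea only; it is not the cnBNG implementation, and the class and method names are invented.

```python
import ipaddress

# Simplified address-pool allocator; invented for illustration,
# not the cnBNG in-built DHCP server.
class AddressPool:
    def __init__(self, subnet: str):
        network = ipaddress.ip_network(subnet)
        self.free = list(network.hosts())   # usable host addresses
        self.leases = {}                    # subscriber-id -> address

    def assign(self, subscriber_id: str) -> str:
        if subscriber_id in self.leases:    # renewal keeps the same address
            return str(self.leases[subscriber_id])
        address = self.free.pop(0)
        self.leases[subscriber_id] = address
        return str(address)

    def release(self, subscriber_id: str) -> None:
        address = self.leases.pop(subscriber_id, None)
        if address is not None:
            self.free.append(address)

pool = AddressPool("192.0.2.0/29")
print(pool.assign("cpe-0042"))  # first usable address: 192.0.2.1
```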

Service providers can choose from a wide range of available ASR 9000 form factors, based on exact deployment requirements. The CUPS architecture allows these UPs to run in a distributed mode, at the edge of the network, for early traffic offload.

For more information about the cnBNG control plane, refer to the Cloud Native Broadband Network Gateway Control Plane Configuration Guide.

cnBNG User Plane

The UP delivers the forwarding functionality of the entire cnBNG solution. With the CP handling the subscriber management functionality, the cnBNG architecture enables the UP to be more distributed and to interoperate with the cnBNG CP with minimal integration effort. The cnBNG Subscriber Provisioning Agent (SPA), the common interface between the UP and the CP, is bundled with the existing Cisco IOS XR image to transform an integrated physical BNG router into a cnBNG user plane.
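The CP/UP division of labor can be pictured as the CP deciding per-session rules and the UP merely enforcing them in its forwarding state. The message shapes below are invented for illustration; the actual CP-UP interface (per BBF TR-459) is considerably richer.

```python
# Invented CP/UP message flow for illustration only; the real
# interface (per BBF TR-459) is far more involved.
def control_plane_decide(event):
    """CP: authenticate and pick a policy for a session event."""
    if event["type"] == "session-start":
        return {"action": "install",
                "subscriber": event["subscriber"],
                "qos": "100mbps-profile"}
    return {"action": "remove", "subscriber": event["subscriber"]}

def user_plane_apply(rule, forwarding_table):
    """UP: enforce the rule the CP decided on."""
    if rule["action"] == "install":
        forwarding_table[rule["subscriber"]] = rule["qos"]
    else:
        forwarding_table.pop(rule["subscriber"], None)
    return forwarding_table

table = {}
rule = control_plane_decide({"type": "session-start", "subscriber": "cpe-0042"})
user_plane_apply(rule, table)
print(table)  # {'cpe-0042': '100mbps-profile'}
```

Because only the rule messages cross the CP-UP boundary, the two sides can scale, upgrade, and be placed independently, which is the essence of the CUPS split.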

For more information about the cnBNG UP, see the Cloud Native BNG User Plane Overview chapter.

License Information

cnBNG supports the following licenses:

License               Description
Application Base      Per cluster
Session (Increments)  Network-wide

These are the software license PIDs for cnBNG:

Cisco cnBNG Control Plane:

Product ID     Description
CN-BNG-BASE-L  Base PID for cnBNG Control Plane (per cluster)
CN-BNG-100k-L  Session scale base license for 100,000 subscribers (network-wide)
CN-BNG-400k-L  Session scale base license for 400,000 subscribers (network-wide)
CN-BNG-1M-L    Session scale base license for 1,000,000 subscribers (network-wide)
CN-BNG-2M-L    Session scale base license for 2,000,000 subscribers (network-wide)

Cisco cnBNG User Planes:

Refer to the ASR 9000 data sheet for ordering information: https://www.cisco.com/c/en/us/products/routers/asr-9000-series-aggregation-services-routers/datasheet-listing.html

Standard Compliance

The cnBNG solution is aligned with the following standard:

TR-459 Control and User Plane Separation for a disaggregated BNG

Limitations and Restrictions

The cnBNG has the following limitations and restrictions in this release:

  • High availability on CP is not supported.

  • Only one subnet is supported per VRF.

  • QoS provisioning is supported only through services.