The days of a proprietary radio-access network are vanishing as carriers look for disaggregated systems that bring more flexibility while reducing cost.
Even though 5G is standardized and in deployment, it is still very much evolving. One area where 5G is departing from its initial concept is the space between the wireless link and the wired network that carries the bits. Because 5G must support several overall use cases, the need has arisen for a flexible architecture on the front end, just after the radio, known as the radio-access network (RAN).
Wireless carriers need a transport technology with greater complexity and significantly more hardware and software than they needed with LTE networks. Carriers need networks and network components that are easy and economical to deploy, offer high reliability, and minimize power consumption. This need to control expenses has led to an industry-wide shift from 4G’s dedicated hardware and proprietary software to open software stacks installed on open, commercial-off-the-shelf (COTS) hardware platforms.
4G’s proprietary components
You can think of wireless networks in terms of the core and the RAN. The core encompasses the backbone plus metro and regional networks (Figure 1). The core aggregates data at the edge of the RAN, which transfers the aggregated data to the radio tower. Early networks used fixed switches and routers to direct data. The goal more recently has been to develop software defined networks (SDNs) that can be dynamically reconfigured to address changes in demand.
4G was largely implemented with custom hardware running proprietary software stacks. When a carrier chose an equipment vendor, it became a long-term commitment. That approach was tolerable for 4G networks, but given 5G and the drive for lower total cost of ownership, carriers have begun developing open-source solutions. 5G’s goal is interchangeable COTS ARM or x86 servers running open-source software stacks.
5G is different
The 5G network is almost entirely different from 4G LTE, beginning with frequency band. 5G picks up where 4G leaves off, spanning the spectrum from 6 GHz to 300 GHz. Higher frequencies support significantly smaller cell sizes, enabling 5G cells to provide highly localized coverage in locations such as neighborhoods, manufacturing plants, or even within houses and other structures.
5G disaggregates the 4G baseband unit (BBU) into a radio unit (RU), distributed unit (DU), and centralized unit (CU) (Figure 2). Decoupling these functions gives carriers flexibility because they can co-locate the RU, DU, and CU or deploy them in different locations as needed. A network requiring the lowest possible edge latency may, for example, locate the RU, DU, and CU together at the edge, maximizing performance for far-edge-connected user applications. The trade-off is that each tower then needs environmentally controlled enclosures. Alternatively, multiple RUs may be served by one DU, lowering network costs while providing adequate performance where longer latencies are acceptable. Carriers may deploy a mixture of architectures to target different markets and geographies.
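To see why sharing a DU among several RUs lowers equipment cost, consider a rough tally of units per deployment. This is only an illustrative sketch; the cell counts and fan-out ratios are hypothetical, not figures from the article:

```python
# Illustrative sketch: how RU-to-DU fan-out affects equipment count.
# All numbers are hypothetical examples, not figures from the article.

def units_needed(num_cells: int, rus_per_du: int) -> dict:
    """Count RUs and DUs for a deployment where each cell has one RU
    and each DU serves up to `rus_per_du` RUs."""
    rus = num_cells
    dus = -(-rus // rus_per_du)  # ceiling division
    return {"RU": rus, "DU": dus}

# Co-located (lowest latency): one DU per RU at every tower.
edge = units_needed(num_cells=12, rus_per_du=1)    # {'RU': 12, 'DU': 12}

# Pooled: one DU serves six RUs, at the price of longer fronthaul latency.
pooled = units_needed(num_cells=12, rus_per_du=6)  # {'RU': 12, 'DU': 2}
```

The same arithmetic scales to any mixture of architectures: latency-critical cells get dedicated DUs, while latency-tolerant cells share them.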
Figure 3 shows a deeper view of 5G network hardware and interconnections.
The 5G RU consists of an RF transmitter and a lower physical-layer (LO PHY) block, typically implemented as an FPGA or ASIC optimized for packet management. It operates at wireline speed and can deliver latencies of less than 1 ms. The RU connects to the DU over what’s known as the fronthaul, the link between the LO PHY and the higher physical layer (HI PHY).
The DU manages radio packet transmissions to and from the RU over the fronthaul link. The primary components of the DU are the radio link controller (RLC), the media access controller (MAC), and the HI PHY. The MAC incorporates software that communicates with the RLC and a hardware module that communicates with the PHY. The DU can incorporate hardware accelerators such as GPUs or FPGAs and can operate with a latency of less than 5 ms. The DU connects to the CU over an F1 midhaul interface. A COTS DU implementation would consist of a server chassis with hardware-acceleration PCIe cards and an open-source MAC/RLC stack.
The CU consists of a control plane (CP) and a user plane (UP). The configuration mimics that of LTE, making it easier to integrate a 5G network with a 4G LTE network. Plus, it provides flexibility for unique 5G RAN configurations. The CP and the UP reside together in the CU and can operate with latencies of around 10 ms.
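A rough latency budget helps compare placements of these three units. The sketch below sums the per-stage ceilings named above (RU under 1 ms, DU under 5 ms, CU around 10 ms) with an assumed fiber propagation delay of roughly 5 µs/km; the delay figure and link distances are illustrative assumptions, not values from the article:

```python
# Illustrative end-to-end latency budget using the per-stage ceilings
# from the text (RU < 1 ms, DU < 5 ms, CU ~ 10 ms). The fiber delay of
# ~5 us/km and the link distances are assumptions for illustration.

FIBER_US_PER_KM = 5.0  # assumed one-way propagation delay in fiber

STAGE_MS = {"RU": 1.0, "DU": 5.0, "CU": 10.0}

def budget_ms(fronthaul_km: float, midhaul_km: float) -> float:
    """Sum the processing ceilings plus one-way fiber delay per link."""
    fiber_ms = (fronthaul_km + midhaul_km) * FIBER_US_PER_KM / 1000.0
    return sum(STAGE_MS.values()) + fiber_ms

co_located = budget_ms(fronthaul_km=0, midhaul_km=0)     # 16.0 ms
centralized = budget_ms(fronthaul_km=10, midhaul_km=40)  # 16.25 ms
```

The point of the sketch is that, at metro distances, fiber propagation adds only fractions of a millisecond; the processing stages dominate the budget, which is why carriers can pool DUs and CUs where applications tolerate the extra delay.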
The RAN Intelligent Controller (RIC) sits upstream from the CU. This function virtualizes the radio network into a series of functions accessible by upstream core controllers.
The shift toward open
The RU, DU, and CU include all of the functions and interfaces necessary for a software-defined network, or virtual RAN (vRAN). The network orchestration and automation layer at the core does, however, need software to manage the process. LTE networks manage this task through proprietary hardware and software. Cost constraints in 5G have inspired carriers to look for a standardized, open-source option that leverages COTS hardware. In response, four key open-source initiatives have emerged: the Akraino Edge Stack, the O-RAN Alliance, the Open Networking Automation Platform (ONAP), and the Open Compute Project (OCP).
The Akraino Edge Stack
Launched in 2018 and now part of the LF Edge Initiative, the Akraino Edge Stack focuses on developing open software stacks for the network edge. The organization emphasizes modular design, which enables reuse of software components. Known as Akraino blueprints, the stacks serve various subsets of the edge cloud infrastructure, including enterprise edge, over-the-top edge, provider edge, and carrier edge. When installed on “bare-metal” servers, the blueprints convert the machines into application-specific appliances.
Akraino has multiple carrier blueprints in development as it works to create 5G telco appliances that speed RAN deployment. The group recently released the Akraino radio-edge cloud (REC) blueprint, which provides an essential component that lets the orchestration and automation layer interface with the vRAN.
Running on a Linux CentOS distribution, the REC packages its management and monitoring software in containers managed with Kubernetes. The stack virtualizes a bare-metal server so that it can be abstracted as a software service exposed through APIs. The overlying control layer can call these APIs to interact with the data plane at the network layer.
The O-RAN Alliance
The O-RAN Alliance is dedicated to the realization of an open, intelligent RAN. The alliance is developing open virtualized network elements such as an open DU and open CU. As with Akraino, the focus is on building modular reference designs that are both reusable and standardized. The approach not only speeds integration and deployment, it lets developers skip writing code blocks for common functions, freeing them to spend time innovating.
The O-RAN effort is closely tied to the development of the Akraino blueprints. The idea is that the Akraino blueprints abstract the hardware layer and then the O-RAN/ONAP software stacks run on top of that and interface with the APIs (Figure 4).
The RIC is co-located with the CU. It is connected to the orchestration and automation stack at the core by backhaul and connected to the CU and the DU by midhaul. It will run atop the Akraino REC blueprint, which is optimized to minimize latency between the RIC and the DU/CU (Figure 5). The Akraino REC is integrated with the regional controller at the core edge to provide fully automated deployment of the REC to edge sites.
The Open Networking Automation Platform (ONAP)
The 5G network is expected to support a variety of applications with dramatically different requirements. A mobile device streaming video can tolerate higher latencies but may be highly mobile. Smart factories don’t move but demand the lowest possible latency. Automated vehicles present the dual challenges of ultrahigh reliability and ultralow latency. Other variables include bandwidth and cost. Effectively serving these diverse applications requires the ability to virtualize the network so that it can act as a collection of network slices, each of which can be dynamically reconfigured to provide the quality of service required by each application.
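One way to picture slicing is as a table of per-slice service guarantees that an orchestrator matches application requests against. The sketch below is a minimal, hypothetical model; the slice names and QoS limits are illustrative examples, not values from any standard:

```python
# Hypothetical sketch of network slices as QoS records that an
# orchestrator could match applications against. The slice names and
# limits below are illustrative, not drawn from any specification.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    max_latency_ms: float     # worst-case latency the slice guarantees
    min_bandwidth_mbps: float # bandwidth the slice can commit

SLICES = [
    Slice("video-streaming", max_latency_ms=100.0, min_bandwidth_mbps=25.0),
    Slice("smart-factory",   max_latency_ms=1.0,   min_bandwidth_mbps=10.0),
    Slice("automotive",      max_latency_ms=5.0,   min_bandwidth_mbps=5.0),
]

def pick_slice(need_latency_ms: float, need_bw_mbps: float) -> str:
    """Return the first slice meeting both requirements, or raise."""
    for s in SLICES:
        if s.max_latency_ms <= need_latency_ms and s.min_bandwidth_mbps >= need_bw_mbps:
            return s.name
    raise ValueError("no slice satisfies the request")

# A mobile video viewer tolerates high latency but wants bandwidth:
chosen = pick_slice(need_latency_ms=150.0, need_bw_mbps=20.0)
# chosen == "video-streaming"
```

In a real deployment the orchestrator would also reconfigure slices dynamically as demand shifts, which is exactly the role the control fabric described next is meant to play.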
The building blocks discussed so far provide a means to create network slices, but they need a top-level control fabric at the core to orchestrate and manage services. The Open Networking Automation Platform (ONAP), an open-source networking project hosted by the Linux Foundation, was established to address this need.
ONAP is critical for 5G deployment. It supports orchestration, automation, and end-to-end lifecycle management of network services. It is highly complex and computationally intensive; running just one instance of ONAP requires 140 cores and 140 GB of RAM. ONAP interfaces with the RAN as shown in Figure 6.
The Open Compute Project
Creating interoperability in the networking world requires standardized form factors and interfaces. The Open Compute Project (OCP) was launched to establish hardware specifications that achieve this standardization. One of the specifications to come out of the OCP is the openEDGE chassis (Figure 7). Its shallow form factor, low power requirements, and processing density are optimized for telco and edge applications.
5G promises enormous performance improvements that could fundamentally change global communications. It gives telco operators the opportunity to create new markets and consumer services. To succeed in 5G, carriers need network equipment that offers flexibility, low total cost of ownership, and fast time to market. Open 5G hardware and software enable these goals.
A version of this article originally appeared as a white paper “Understanding 5G Transport Networks.”
Darrin Vallis is a computing systems architect with over 20 years’ experience in hardware and software development. His expertise covers embedded systems, PC clients, storage, and hyperscale servers. Recently he has been focused on cloud, edge, and 5G design. Darrin is a Navy veteran, holds ten patents, and has been published fourteen times in technical journals. When not at his lab in Austin, Texas, Darrin can usually be found designing and building race cars or somewhere on a race track.