5G networks find flexibility in FPGA-based modems

April 09, 2018



Base station hardware must be able to accommodate the broad requirements of 5G network workloads. This is driving increased interest in FPGA technology across the next-generation wireless infrastructure.

If you buy into the hype, 5G networks are poised to revolutionize the current wireless infrastructure. Headline capabilities of 5G technology include peak 20 Gbps download and 10 Gbps upload speeds, 1 ms latencies, up to 1,000x greater capacity per km² than 4G, 3x spectrum efficiency, 100x better network energy efficiency, and the integration of multiple radio access technologies into a single network (Figure 1). Many, if not all, of these characteristics make 5G extremely attractive for the Internet of Things (IoT).

Figure 1. According to the International Telecommunications Union (ITU), 5G (shown here as IMT-2020) is projected to deliver significant advantages over 4G (shown as IMT-Advanced), including triple the spectrum efficiency, 100x energy efficiency, and 1,000x capacity.

In fact, many 5G IoT deployments have already begun. Beyond demonstrations of 5G at the 2018 Olympics in Pyeongchang, South Korea, the University of Bristol’s Smart Internet Lab recently deployed an end-to-end 5G network testbed in Bristol. The testbed demonstrates a variety of smart city use cases, including autonomous transportation, augmented reality, and smart tourism, which are enabled by 5G New Radio (NR) radio heads connected to a virtual 5G baseband pool.

5G NR is the new air interface for 5G networks. Although it is not backward compatible with LTE, 5G NR provides spectrum coverage from sub-1 GHz to 100 GHz. Signals are sent from 5G NR radio heads using a scalable orthogonal frequency-division multiplexing (OFDM) waveform, which uses closely spaced sub-carrier signals to send data simultaneously across several parallel channels.
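The parallel-channel idea behind OFDM can be sketched in a few lines: data symbols are assigned to sub-carriers in the frequency domain, and an inverse FFT turns them into a single time-domain waveform. This is an illustrative sketch only; the sub-carrier count, QPSK mapping, and cyclic-prefix length are assumptions, not the actual 5G NR numerology.

```python
import numpy as np

N_SUBCARRIERS = 64   # hypothetical sub-carrier count (not the NR value)
CP_LEN = 16          # assumed cyclic-prefix length

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SUBCARRIERS)

# QPSK mapping: two bits per sub-carrier, unit average power
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# Each sub-carrier carries one symbol; the IFFT sends them all in parallel
time_domain = np.fft.ifft(symbols)
# Prepend the cyclic prefix (a copy of the symbol's tail)
ofdm_symbol = np.concatenate([time_domain[-CP_LEN:], time_domain])

# Receiver: strip the prefix and FFT back to recover the per-carrier symbols
recovered = np.fft.fft(ofdm_symbol[CP_LEN:])
```

Over an ideal channel, `recovered` matches the transmitted `symbols` exactly; in practice the cyclic prefix is what lets a real multipath channel be equalized per sub-carrier.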

Many current 5G network architectures deploy these NR radio heads in base stations with massive multiple-input, multiple-output (MIMO) antennas that use multiple transmitters and receivers to transfer more data, more quickly. Such infrastructure can support various access and connectivity scenarios for applications like enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (URLLC).
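The throughput benefit of adding antennas can be seen in the Shannon capacity of a MIMO link. The sketch below compares a single-antenna link against a 4x4 link over a random Rayleigh channel; the SNR, antenna counts, and channel model are illustrative assumptions, not a base-station model.

```python
import numpy as np

rng = np.random.default_rng(1)
snr = 10.0       # linear SNR (10 dB), assumed
nt = nr = 4      # transmit/receive antennas, assumed

# Rayleigh fading channel: i.i.d. complex Gaussian entries, unit variance
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# MIMO capacity with equal power per antenna: log2 det(I + (SNR/nt) * H H^H)
mimo_capacity = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)).real

# Single-antenna (SISO) baseline: log2(1 + SNR)
siso_capacity = np.log2(1 + snr)

print(f"SISO: {siso_capacity:.2f} bit/s/Hz, 4x4 MIMO: {mimo_capacity:.2f} bit/s/Hz")
```

The multiple spatial streams let the 4x4 link carry several times the spectral efficiency of the single-antenna link at the same SNR, which is the "more data, more quickly" effect massive MIMO scales up to dozens or hundreds of antennas.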

More traditional distributed small cells and fixed wireless access points will also remain, but the constant requirement across all 5G infrastructure is the need for extreme flexibility. This has led many carriers, telecom companies, and researchers such as the University of Bristol to implement software-defined networking (SDN) capabilities as the backbone of their 5G networks. Raghu Rao, Principal System Architect, Wireless Communications at Xilinx, explains this transition.

“One aspect is the variety of deployment types we are seeing in 5G,” Rao says. “The physical layer is being split, with a portion of the physical layer moving to the radios, especially in the context of this new massive MIMO technology. And then we have small cells and home gateways. And then we also have fixed wireless access. Then we have the traditional macro cell and metro cell sort of deployments. So if you look at the variety of deployments, what you need is extreme flexibility. A software-driven approach is a lot more capable of supporting this diversity than a 100-percent hardware scenario like it used to be in 4G, where everything was in a box.

“The other aspect to that is the support for various spectrum,” he continues. “In 5G there is sub-6 GHz and greater-than-6 GHz. Now, even if you look at the traditional sub-6 GHz deployments, there’s a huge range of frequencies on which one wants to talk. In the case of License Assisted Access (LAA) and LTE-Unlicensed, you're talking about anchoring an unlicensed carrier with a licensed carrier with over a GHz of gap between them. Some of these deployments require extreme flexibility, even on the RF side of things.

“Both of these place certain requirements on the design of the hardware at the infrastructure site,” Rao says. “There are different types of access and connectivity coming into the picture, all of them connecting to the same packet core using similar types of infrastructure and similar standards.

“What we see people looking for is a very, very flexible modem,” he adds.

5G flexibility on FPGAs

As Rao points out, the underlying base station hardware must be able to accommodate the broad requirements of 5G network workloads. This is driving increased interest in FPGA technology across the next-generation wireless infrastructure.

“There are aspects of a modem that cannot entirely be software on a server or software running on an x86 processor, such as extremely compute-intensive tasks,” Rao says. “The approach that we see people taking is the software-plus-acceleration approach, especially at what we call layer one, or the physical layer. Those workloads are accelerated, and a great choice for acceleration is FPGAs.”

In Q4 2017, Xilinx announced its Zynq UltraScale+ RFSoC product line that, among other use cases, targets the 5G wireless RF signal chain. RFSoCs incorporate analog-to-digital and digital-to-analog converters (ADCs/DACs) that operate at up to 4 GSps and 6.4 GSps, respectively, as well as soft-decision forward error correction (SD-FEC), a quad-core Arm Cortex-A53, and a dual-core Arm Cortex-R5 alongside programmable logic fabric. The company states that the devices provide a 50 to 75 percent power and footprint reduction over competing SoC-based architectures for use in remote radio heads for massive MIMO, baseband, and wireless backhaul systems.

Figure 2. Xilinx Zynq UltraScale+ RFSoCs integrate multi-GSps RF data converters and forward error correction (FEC) in support of 5G infrastructure applications that require spectral efficiency, power efficiency, and network densification.

The integrated data converters in RFSoC chips are particularly advantageous in systems with multi-mode requirements, as they can dynamically support 3GPP LTE and NB-IoT simultaneously, for instance. Rao describes this versatility by returning to the example of licensed and unlicensed carriers sharing the same platform.

“The RFSoC architecture supports what we call ‘direct RF,’ where you can sample at extremely high speeds,” he says. “Sampling at those high speeds allows the rest of it to be done digitally, possibly in the FPGA, and so it can be moved into a software-like approach. All of this supports a considerably flexible, programmable modem.

“Depending on the deployment scenario, a single RFSoC can handle multiple workloads,” Rao continues. “It could be reconfigured or the configurations could be built in ahead of time. There are certain workloads like massive MIMO that would require multiple RFSoCs, but there are many others where one or two RFSoCs are perfectly capable.”
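The direct RF approach Rao describes can be sketched as a digital downconverter: a fast ADC samples the carrier directly, then mixing and filtering happen in logic rather than in analog circuitry. The sketch below is a simplified illustration; the 4 GSps rate echoes the RFSoC figure above, while the carrier frequency, signal, and crude average-and-decimate filter are assumptions.

```python
import numpy as np

FS = 4.0e9     # 4 GSps sample rate, per the RFSoC ADC figure
FC = 900.0e6   # example carrier frequency (assumed)
DECIM = 16     # decimation factor (assumed)

# "ADC output": the RF carrier sampled directly, no analog downconversion
t = np.arange(4096) / FS
rf = np.cos(2 * np.pi * FC * t)

# Digital downconversion: complex mix to baseband with a numerically
# controlled oscillator...
mixed = rf * np.exp(-2j * np.pi * FC * t)

# ...then a crude low-pass filter: average blocks of DECIM samples and
# keep one output per block
baseband = mixed.reshape(-1, DECIM).mean(axis=1)
```

After the mix, the wanted component sits at DC with amplitude 0.5 and the image at twice the carrier frequency is suppressed by the averaging; in a real design the filter, oscillator, and decimation chain would all live in the FPGA fabric, which is what makes the signal chain reprogrammable.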

Beyond base stations

While Xilinx expects RFSoC technology to be deployed in 5G base stations everywhere, the company is also seeing interest in small cell settings, macro cells, virtual baseband units, cloud radio access networks (RANs), and even telecom clouds. As network infrastructure is increasingly governed by software, expect FPGA technology to appear in places you wouldn’t normally suspect.

“Everyone is looking for a low cost, quickly deployable system, and for that FPGAs are a great solution. Time to market and all of those advantages help FPGAs in many of the remote radio head deployments,” Rao says. “But baseband is a new kind of use for our devices. We used to find a spot in baseband for connectivity reasons. But this new, flexible, software approach to baseband has opened up newer opportunities for FPGAs to maximize throughput and increase power efficiency.”



Brandon Lewis

Brandon Lewis, Editor-in-Chief of Embedded Computing Design, is responsible for guiding the property's content strategy, editorial direction, and engineering community engagement, which includes IoT Design, Automotive Embedded Systems, the Power Page, Industrial AI & Machine Learning, and other publications. As an experienced technical journalist, editor, and reporter with an aptitude for identifying key technologies, products, and market trends in the embedded technology sector, he enjoys covering topics that range from development kits and tools to cyber security and technology business models. Brandon received a BA in English Literature from Arizona State University, where he graduated cum laude.

He can be reached by email at [email protected].
