Introduction to PCIe 6.0

By Paul Karazuba

Vice President of Marketing

PLDA

June 01, 2021


PCI Express (Peripheral Component Interconnect Express, or PCIe) has existed for some time as a method to move data quickly within chips and between components in a system.

To examine its origins, one would have to go back to the early 1990s and start with Intel's PCI standard for local buses. Appearing first in servers and later making its way to desktops, the PCI slot was the desktop PC expansion card standard for a decade. As computing evolved, the industry recognized the need for a standard addressing a new bus architecture and the internal connection of multiple chips. With that, PCIe was born, and in 2003, led by PCI-SIG (the Peripheral Component Interconnect Special Interest Group), the 1.0 specification was ratified, offering a then-blistering 2.5 GT/s data rate.

Finding its footing in a variety of uses including solid-state drives, graphics card acceleration, and networking, PCIe has continued to evolve to suit the needs of the market. PCIe can be found in almost every modern computing system; not only just in personal computers and servers for which PCIe was originally developed, but in places like high-end mobile, IoT devices, automobiles, medical devices, and many more.

Figure 1 shows the evolution of the specification over time:

| PCIe Specification | Data Rate (GT/s/lane) | Encoding | x16 Bandwidth* (GB/s) | Specification Ratification Year |
|---|---|---|---|---|
| 1.x | 2.5 | 8b/10b | 4 | 2003 |
| 2.x | 5 | 8b/10b | 8 | 2007 |
| 3.x | 8 | 128b/130b | 15.75 | 2010 |
| 4.0 | 16 | 128b/130b | 31.5 | 2017 |
| 5.0 | 32 | 128b/130b | 63 | 2019 |
| 6.0 | 64 | PAM4, FLIT | 128 | 2021 (anticipated) |

Figure 1: PCIe Evolution over Time

Let's take a look at the PCIe 6.0 specification, how it has evolved from past revisions, and why it should matter to system designers and chip architects considering their next-generation designs.

First, as with each prior revision, PCIe 6.0 doubles the maximum data rate, this time to 64 GT/s per lane. PCI-SIG recognized early in the specification process that NRZ signaling, used since PCIe 1.0, simply could not support the 64 GT/s rate targeted by PCIe 6.0. Accordingly, the specification transitions to PAM4, a four-level modulation scheme that carries two bits per symbol instead of one, doubling throughput at the same symbol rate. The trade-off is that PAM4's reduced voltage margins introduce a significantly higher Bit Error Rate (BER), which has prompted the adoption of a Forward Error Correction (FEC) mechanism to mitigate errors. Fortunately, the FEC mechanism is lightweight enough to have minimal impact on latency. Note also that because 64 GT/s PAM4 runs at the same symbol rate, and hence the same Nyquist frequency, as 32 GT/s NRZ, channel loss is comparable to PCIe 5.0, and so the reach of a PCIe 6.0 signal on a PCB will be the same as that of a PCIe 5.0 signal.
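The bandwidth figures in Figure 1 can be reproduced from the per-lane data rate and the line-code overhead. The following Python sketch (the function name is ours, not from any PCIe tooling) shows the arithmetic:

```python
def x16_bandwidth_gb_s(rate_gt_s: float, enc_num: int = 1, enc_den: int = 1) -> float:
    """x16 link bandwidth in GB/s: per-lane rate (GT/s) times line-code
    efficiency, times 16 lanes, divided by 8 bits per byte."""
    return rate_gt_s * enc_num / enc_den * 16 / 8

# PCIe 3.x: 8 GT/s with 128b/130b encoding -> ~15.75 GB/s
print(round(x16_bandwidth_gb_s(8, 128, 130), 2))
# PCIe 6.0: 64 GT/s; FLIT mode drops the 128b/130b line code -> 128 GB/s
# (FEC and flit overhead are ignored in this raw figure)
print(x16_bandwidth_gb_s(64))
```

The same formula yields every row of the table; for example, 2.5 GT/s with 8b/10b encoding gives the 4 GB/s listed for PCIe 1.x.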

PCIe 6.0 also introduces FLIT mode, in which packets are organized into Flow Control Units (flits) of fixed size, as opposed to the variable-size packets of past specification releases. The original motivation for FLIT mode was that forward error correction operates on fixed-size blocks; however, FLIT mode also simplifies data management at the controller level, yielding higher bandwidth efficiency, lower latency, and a smaller controller footprint. Consider bandwidth efficiency for a moment: with fixed-size flits, packet framing at the Physical Layer is no longer needed, saving 4 bytes per packet. FLIT encoding also does away with the 128b/130b line code and the DLLP overhead of previous PCIe specifications, resulting in significantly higher TLP efficiency, especially for smaller packets.
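As a concrete illustration, the PCIe 6.0 flit described in PCI-SIG's public material is 256 bytes, most of which carries transaction-layer data. The field sizes in this sketch are taken from that material and should be treated as illustrative assumptions rather than normative values:

```python
# Sketch of the PCIe 6.0 flit layout (field sizes per PCI-SIG's public
# material; treat them as illustrative assumptions, not normative values).
FLIT_BYTES = 256
TLP_BYTES = 236   # slots for transaction-layer packet bytes
DLP_BYTES = 6     # data-link payload, replacing standalone DLLPs
CRC_BYTES = 8     # flit-level CRC
FEC_BYTES = 6     # forward error correction check bytes

# The four fields account for the whole flit
assert TLP_BYTES + DLP_BYTES + CRC_BYTES + FEC_BYTES == FLIT_BYTES

# Steady-state link efficiency before TLP headers: roughly 92% of raw bandwidth
print(f"{TLP_BYTES / FLIT_BYTES:.1%}")
```

Because this overhead is fixed per flit rather than per packet, small TLPs no longer pay their own framing and DLLP tax, which is where the efficiency gain over earlier generations is most pronounced.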

While the changes in PCIe 6.0 are significant relative to previous generations, there is strong and widespread industry support for its adoption. PCIe is present in virtually every modern computing architecture, and you should expect PCIe 6.0 to gain quick adoption in performance-critical applications across HPC and cloud computing, enterprise storage and networking, and emerging areas such as AI/machine learning and automotive.


Paul Karazuba serves as vice president of marketing at PLDA Inc. in San Jose, CA. PLDA is a developer and licensor of semiconductor Intellectual Property (SIP) specializing in high-speed interconnect supporting multi-gigabit rates (2.5G to 64G) and protocols such as PCI Express, CXL and CCIX.

