The PCIe® 6.0 Specification Webinar Q&A: Supported Features in PCIe 6.0 Specification
The PCI Express® (PCIe®) 6.0 specification will provide multiple features, including Transaction Layer Packet (TLP) payload enhancements, new encoding and decoding, shared credit pooling, Forward Error Correction (FEC), CRC protection, and more. PCIe 6.0 architecture introduces PAM4 signaling, which raises the raw error rate, requiring FEC mechanisms to manage the resulting First Bit Error Rate (FBER). The channel of the PCIe 6.0 architecture will have a similar reach to PCIe 5.0 architecture: approximately 12” on the motherboard and 3-4” on the add-in card with the materials available. These are rough estimates, however, and the exact dB figure is under evaluation. Review the PCIe 6.0 specification, version 0.7 blog for additional details.
This post includes answers to questions about PCIe 6.0 specification features that were asked during the PCIe 6.0 specification webinar.
- What size TLP payload is supported in the PCIe 6.0 specification?
As with PCI Express technology today, a TLP can carry anywhere from 0 DW (Double Words; one DW equals 4 bytes) to 1024 DW of payload, although enhancements to the Max Payload Size mechanism will generally encourage implementations to use a 128 DW (512-byte) maximum payload size. TLP prefixes will also be supported, although in a modified form. From a transaction layer perspective, the size of the TLP is more or less unchanged. What changes is the arrangement of the bits in the TLP, so that the transaction layer can process packets by looking at fixed locations, without needing the physical layer's help to identify the start and end of each TLP. This improves efficiency, since we no longer incur the framing overhead.
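The DW-to-byte arithmetic above can be captured in a trivial sketch (the function name and bounds check are ours, not from the spec):

```python
DW_BYTES = 4  # one Double Word (DW) is 4 bytes

def payload_bytes(dw: int) -> int:
    """Convert a TLP data payload length in DW to bytes.

    PCIe allows 0 to 1024 DW of payload per TLP.
    """
    if not 0 <= dw <= 1024:
        raise ValueError("TLP payload must be 0..1024 DW")
    return dw * DW_BYTES

print(payload_bytes(128))   # 512 bytes, the size the 6.0 spec encourages
print(payload_bytes(1024))  # 4096 bytes (4 KB), the absolute maximum
```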
- Why are encoding and decoding different in the PCIe 6.0 specification, compared to the PCIe 5.0 specification?
For the 64.0 GT/s data rate, we had to adopt PAM4 signaling, which results in a high First Bit Error Rate (FBER). The high error rate requires us to incorporate a Forward Error Correction (FEC) mechanism to bring the replay probability down to an acceptable level. FEC works on a fixed number of symbols. If we kept the older encoding and protected each TLP/DLLP/IDLE separately, the codeword size would vary dynamically; we would then need a framing token, itself independently FEC-protected, just to indicate the size of the next FEC codeword, resulting in a very inefficient interconnect. Moving to fixed-size, FEC-protected units makes FLIT-based encoding a natural fit, since FLITs are of fixed size. The FLIT is the basic unit of transfer, within which we can carry variable-sized transactions, data link payload, etc.
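To make the fixed-size idea concrete, here is a minimal sketch (our own illustration, not the spec's wire format) of slicing a variable-length TLP/DLLP byte stream into fixed 256B FLITs:

```python
FLIT_BYTES = 256  # fixed FLIT size in PCIe 6.0 FLIT mode

def pack_into_flits(stream: bytes) -> list[bytes]:
    """Slice a continuous TLP/DLLP byte stream into fixed-size FLITs.

    Illustrative only: a real FLIT also reserves bytes for the DLLP,
    CRC, and FEC, and IDLEs are defined symbols rather than zero pad.
    """
    flits = []
    for i in range(0, len(stream), FLIT_BYTES):
        chunk = stream[i:i + FLIT_BYTES]
        flits.append(chunk.ljust(FLIT_BYTES, b"\x00"))  # pad final FLIT
    return flits
```

Because every FLIT is the same size, the FEC codeword size is fixed, and no framing token is needed to locate packet boundaries on the wire.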
- Is a shared credit pool a part of the PCIe 6.0 specification?
The shared credit pool will be a part of the PCIe 6.0 specification. It is optional for a receiver to implement but mandatory for a PCIe 6.0 device to support as a transmitter. The shared credit pool is orthogonal to the data rate or FLIT mode support, so a PCIe 6.0 device can operate at 32.0 GT/s data rate and still use the shared credit pool.
- What are the mechanical changes in the form factor and for rugged designs?
The form-factor specifications typically lag behind the base specification. This is expected since we need information from the base specification for the form factor specification. Therefore, it is premature to talk about the changes in form factor, such as Card Electro Mechanical (CEM). The intent is to be fully backward compatible even with a small set of changes needed for PCIe 6.0 slots/form factors, which is consistent with what we have done in prior generations.
- Is PCI-SIG® getting rid of x32 support?
The x32 and x12 modes, while present in the base specification, were never adopted; there has been no form factor specification, or even designs, for these widths. All the other widths (x1, x2, x4, x8, and x16) have seen widespread adoption since the PCIe 1.0 specification. After much deliberation, we decided to drop support for the x12 and x32 modes.
- Is there any difference in reference clock for the PCIe 6.0 specification?
We will continue with the same reference clock, as well as support for the common clock, SRNS, and SRIS clocking modes, as in the PCIe 5.0 specification and prior generations.
- How many lanes are considered for the 10^-6 error rate?
The FBER of 10^-6 applies for any number of lanes, since it is defined per bit rather than per lane.
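Because FBER is a per-bit rate, the chance that a given FLIT is hit scales with the FLIT size in bits, not with lane count. A quick back-of-the-envelope calculation (our own arithmetic, assuming independent bit errors, which the answers below note is optimistic):

```python
FBER = 1e-6            # target first-bit error rate
FLIT_BITS = 256 * 8    # bits in a 256-byte FLIT

# Probability that at least one bit in a FLIT is in error:
p_flit_hit = 1 - (1 - FBER) ** FLIT_BITS
print(f"{p_flit_hit:.2e}")  # roughly 2e-3, hence the need for FEC
```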
- What is a good TLP size to get maximum efficiency in PCIe 6.0 technology mode given FLIT size of 256B?
TLP efficiency still improves with larger payload sizes, even though FLIT Mode removes the per-TLP framing overhead and moves to a fixed overhead for CRC and DLLP. So, while the bandwidth efficiency gain from a larger payload size per TLP is diminished in FLIT Mode, it is still there. For example, for a 100% read or write mix in FLIT Mode, the link efficiency increases from 0.89 to 0.91 going from a 512B to a 4KB TLP payload, whereas for a 50-50 read/write mix the difference is more pronounced: 0.86 vs. 0.91 for 512B vs. 4KB. The payload size therefore becomes a choice driven by the usage model. Increasing the payload size increases latency (e.g., there is a latency penalty of about 29 ns for a x16 link and about 450 ns for a x1 link between a 512B and a 4KB payload), so there is a trade-off to be made. Most systems have a 256B or a 512B max payload size today, so that will probably continue.
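The quoted efficiencies can be approximated with a deliberately simplified model. We assume the 256B FLIT carries 236B of transaction-layer bytes (the remaining 20B covering DLLP, CRC, and FEC) and a hypothetical 16B header per TLP; the header size is our assumption, and the model ignores reads, completions, and DLLP traffic:

```python
FLIT = 256      # total FLIT size, bytes
TL_BYTES = 236  # transaction-layer bytes per FLIT (20B to DLLP/CRC/FEC)
HDR = 16        # assumed header bytes per TLP (hypothetical; varies by type)

def link_efficiency(payload: int) -> float:
    """Payload bytes delivered per raw link byte, for back-to-back
    writes of a fixed payload size (simplified model)."""
    tlp_bytes = payload + HDR                  # one TLP at the transaction layer
    wire_bytes = tlp_bytes * FLIT / TL_BYTES   # add fixed per-FLIT overhead
    return payload / wire_bytes

print(round(link_efficiency(512), 2))   # lands close to the quoted 0.89
print(round(link_efficiency(4096), 2))
```

Even this crude model shows the diminishing return: the per-FLIT overhead is fixed, so growing the payload only amortizes the header.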
- Is the 8B CRC different from the link CRC from previous PCIe technology generations?
Yes, it is different. FLIT Mode uses an 8B CRC over the fixed-size FLIT, which includes the transaction layer and data link layer packets. Prior generations used a 4B CRC for variable-sized transaction layer packets and a separate 2B CRC for data link layer packets.
- What will the BER visible to the system be? After FEC, replay, etc.?
We do not expect the BER to be visible at the application or user level, just as in prior generations. However, as in the past, we have mechanisms to measure margins and log any errors (including replays, etc.) that get corrected, so that one can check system health and ensure everything is working within the spec-defined limits.
- I understand the trade-offs you are making here. But won't a higher BER give you more channel length?
This is a very good question. We did extensive studies before settling on the 10^-6 FBER. As you have seen in the presentation, 10^-6 is the critical number for keeping the latency impact of FEC (and CRC) below 2 ns and for keeping the bandwidth overhead low, in line with our <2% target. Another point to note is that the BER will be much worse than the FBER, about an order of magnitude worse, due to burst errors within a lane as well as lane-to-lane correlation. If we relaxed the FBER, we would need networking-style FEC, even with retry, to keep the retry probability below 10^-5. Based on our analysis, we are confident that we can achieve the existing channel reach at a 10^-6 FBER. For longer channels, we can deploy retimers.
Based on our experience over the last two decades, channels always improve over time; we keep developing better materials with lower loss characteristics. But once we set a target FBER and deploy the FEC/CRC accordingly, that does NOT change over time. We will be stuck with it for the life of the technology, so we need to make the right set of trade-offs. A higher FBER may give us an extra inch or two of channel reach today, but that is not worth the area, performance, cost, and power penalty, and, above all, losing a substantial segment of latency- and power-sensitive usage models. We are already meeting our key metrics, including channel reach, even with today’s materials that are deployed in volume.
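The interplay between FBER, FEC, and the retry target can be sanity-checked with a deliberately simplified model: assume independent bit errors (the answer above notes that real errors are bursty and correlated across lanes) and an FEC that corrects any single errored 8-bit symbol per FLIT (the actual scheme is interleaved and differs in detail):

```python
from math import comb

FBER = 1e-6       # first-bit error rate target
SYM_BITS = 8      # treat the FLIT as a sequence of 8-bit FEC symbols
N_SYMS = 256      # 256 symbols in a 256B FLIT

p_sym = 1 - (1 - FBER) ** SYM_BITS  # P(a symbol has >= 1 bad bit)

# If FEC fixes one bad symbol per FLIT, a retry is needed only when
# two or more symbols are hit (tail terms beyond k=4 are negligible):
p_retry = sum(comb(N_SYMS, k) * p_sym**k * (1 - p_sym)**(N_SYMS - k)
              for k in range(2, 5))
print(f"{p_retry:.1e}")  # on the order of 1e-6, under the 1e-5 target
```

Under these optimistic assumptions, a 10^-6 FBER keeps the retry probability below 10^-5; relaxing the FBER would push this number up quickly, since the dominant term grows with the square of the symbol error rate.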
- Can you comment on channel loss? How many dB?
The channel will have a similar reach to the PCIe 5.0 specification. Basically, we are targeting about 12” of reach on the motherboard and 3-4” on the add-in card with the materials available in the timeframe of the PCIe 6.0 specification's introduction. These are very rough estimates. The exact dB figure is under evaluation and was introduced with the PCIe 6.0 specification, version 0.7.
Learn more about the PCIe 6.0 specification
This series of Q&A blogs continues to provide answers asked by attendees during the live webinar presentation. The recording of the PCIe 6.0 Specification: The Interconnect for I/O Needs of the Future webinar is available to view on-demand on the PCI-SIG YouTube channel. Watch our website for updates on upcoming webinars from PCI-SIG or subscribe to the PCI-SIG BrightTALK channel.