Frequently Asked Questions

The following FAQ list was generated using standard responses provided to PCI-SIG members by Technical Support and PCI-SIG Administration. For questions regarding the PCI Specifications, please refer to the specifications themselves as the authoritative text.

PCI Express - 3.0

Why is a new generation of PCIe needed?

PCI-SIG responds to the needs of its members. As applications evolve to consume the I/O bandwidth provided by the current generation of the PCIe architecture, PCI-SIG begins to study the requirements for technology evolution to keep abreast of performance and feature requirements.

What are the initial target applications for PCIe 3.0?

It is expected that graphics, Ethernet, InfiniBand, storage and PCIe switches will continue to drive the bandwidth evolution for the PCIe architecture and these applications are the current targets of the PCIe 3.0 technology. In the future, other applications may put additional bandwidth and performance demands on the PCIe architecture.

Does PCIe 3.0 enable greater power delivery to cards?

The PCIe Card Electromechanical (CEM) 3.0 specification consolidates all previous form factor power delivery specifications, including the 150W and the 300W specifications.

Is PCIe 3.0 more expensive to implement than PCIe 2.x?

PCI-SIG attempts to define and evolve the PCIe architecture in a manner consistent with low-cost and high-volume manufacturability considerations. While PCI-SIG cannot comment on design choices and implementation costs, optimized silicon die size and power consumption continue to be overarching imperatives that inform PCIe specification development and architecture evolution.

Has there been a new compliance specification developed for PCIe 3.0?

For each revision of its specification, PCI-SIG develops compliance tests and related collateral consistent with the requirements of the new architecture. All of these compliance requirements are incremental in nature and build on the prior generation of the architecture. PCI-SIG anticipates releasing compliance specifications as they mature along with corresponding tests and measurement criteria. Each revision of the PCIe technology maintains its own criteria for product interoperability and admission into the PCI-SIG Integrators List.

Does this mean that PCIe is finished at 8GT/s? What comes next?

The PCI-SIG will study the requirements of its members and of the industry for the next generation of the PCIe architecture following the successful release of the PCIe 3.0 specifications. Higher signaling rates depend on a number of factors. The PCI-SIG is committed to delivering the most robust and high-performance I/O interconnect specifications, while at the same time maintaining an uncompromised focus on low cost, low power, high volume manufacturability and compatibility, by taking advantage of breakthroughs in signaling technologies and silicon process capabilities.

Section 2.3.1.1 - If an End Point returns a Completion (Cpl) with no data and Successful Completion status to a memory read request, should this be handled as a Malformed TLP or as an Unexpected Completion?

A compliant device would not return a Completion (Cpl) with no data and Successful Completion status to a memory read request, so normally this should not occur. If a properly formed Cpl is received that matches the Transaction ID, it is recommended that it be handled as an Unexpected Completion, but it is permitted to be handled as a Malformed TLP.

Section 4.2.6.2.3 - Can you please clarify the below statement quoted from section "4.2.6.2.3. Polling.Configuration" of PCI Express 3.0 specification: Receiver must invert polarity if necessary (see Section 4.2.4.4). Does this imply the polarity inversion can only be initiated by receiver in Polling.Configuration state or can the inversion happen in Polling.Active state as well?

When polarity needs to be inverted, it must be done before exiting Polling.Configuration; this permits the inversion to be performed in Polling.Active as well.

Section 4.2.6.3.1.2 - The question is regarding Configuration.Linkwidth.Start state in the case of upconfiguration. It is written in the spec: "The Transmitter sends out TS1 Ordered Sets with Link numbers and Lane numbers set to PAD on all the active Upstream Lanes; the inactive Lanes it is initiating to upconfigure the Link width; and if upconfigure_capable is set to 1b, on each of the inactive Lanes where it detected an exit from Electrical Idle since entering Recovery and has subsequently received two consecutive TS1 Ordered Sets with Link and Lane numbers, each set to PAD, in this substate." It is not mentioned here if sending of these TS1s should be done ONLY on lanes that detected a receiver in the last time the LTSSM was at Detect state. Is it so?

The only Lanes that can be part of an upconfigure sequence are Lanes that were part of the configured Link when configuration was performed with LinkUp = zero, and Lanes that failed to detect a Receiver cannot be part of that initially configured Link. Therefore, a Port in the Configuration.Linkwidth.Start state must only transmit TS1s on a subset of the Lanes that detected a Receiver while last in the Detect state, regardless of whether it is attempting to upconfigure the Link width.

Section 4.2.6.4.2 - According to pg227 of spec, "When using 128b/130b encoding, TS1 or TS2 Ordered Sets are considered consecutive only if Symbols 6-9 match Symbols 6-9 of the previous TS1 or TS2 Ordered Set". When in Recovery.Equalization and if using 128b/130b encoding, is it required that lane/link numbers (symbol 2) match in TS1s to be considered as consecutive or is it need not match?

The Receiver is not required to check the Link and Lane numbers while in Recovery.Equalization.

Section 4.2.6.4.2.2.1 - In redoing equalization (after a successful completion of one) what settings does an EP use in Recovery.Equalization phase 0?

The Transmitter sends TS1 Ordered Sets using the Transmitter settings specified by the Transmitter Presets received in the EQ TS2 Ordered Sets during the most recent transition to 8.0 GT/s data rate from 2.5 GT/s or 5.0 GT/s data rate.

Section 4.2.6.4.3 - When a device has down configured the number of operational lanes, what is the expected power state and characteristics of the unused lanes?

The unused transmitter Lanes are put into Electrical Idle. It is recommended that the receiver terminations be left on.

Section 4.2.6.4.3 - When the Link is down configured and a rate change request occurs, do the unused Lanes also participate in the rate change?

The transmitter of the unused lanes remains in Electrical Idle during the speed change.

Section 4.2.6.4.1 - The specification says to transition from Recovery.RcvrLock to Recovery.RcvrCfg upon receiving eight consecutive TS1 or TS2 Ordered Sets. If a Port in the Recovery.RcvrLock state receives x (where x < 8) consecutive TS1 Ordered Sets and then receives (8 - x) consecutive TS2 Ordered Sets, should it transition to Recovery.RcvrCfg, or should it wait until it receives 8 consecutive TS2 Ordered Sets before transitioning to Recovery.RcvrCfg (essentially discarding the received TS1 Ordered Sets)?

The transition requirements can be satisfied by receiving 8 TS1s, 8 TS2s, or a combination of TS1s and TS2s totaling 8.
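
For illustration only, the qualification described in this answer can be modeled with a short Python sketch (the function name and input representation are hypothetical, and real hardware must additionally check Link/Lane numbers and other TS contents):

    def ready_to_exit_rcvrlock(received_ordered_sets):
        # Count consecutive TS1/TS2 Ordered Sets; TS1s and TS2s both contribute
        # to the same run of eight, per the answer above.
        consecutive = 0
        for os_type in received_ordered_sets:
            if os_type in ("TS1", "TS2"):
                consecutive += 1
                if consecutive == 8:
                    return True
            else:
                consecutive = 0  # anything else breaks the consecutive run
        return False

    # Example: 3 consecutive TS1s followed by 5 consecutive TS2s satisfies the requirement.
    assert ready_to_exit_rcvrlock(["TS1"] * 3 + ["TS2"] * 5)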

Section 4.2.7.3 - PCIe 3.0 Base spec section 4.2.7.4 states that "Receivers shall be tolerant to receive and process SKP Ordered Sets at an average interval between 1180 to 1538 Symbol Times when using 8b/10b encoding and 370 to 375 blocks when using 128b/130b encoding." For 128b/130b encoding, if the Transmitter sends one SKP OS after 372 blocks and a second after 376 blocks, the average interval comes out to be 374 blocks, which falls in the valid range. So is this allowed, or must every SKP interval count fall inside the 370 to 375 block range?

At 8 GT/s, a SKP Ordered Set must be scheduled for transmission at an interval between 370 to 375 blocks. However, the Transmitter must not transmit the scheduled SKP Ordered Set until it completes transmission of any TLP or DLLP it is sending, and sends an EDS packet. Therefore, the interval between SKP OS transmissions may not always fall within a 370 to 375 block interval.

For example, if a SKP Ordered Set remains scheduled for 6 block times before framing rules allow it to be transmitted, the interval since the transmission of the previous SKP OS may be 6 blocks longer than normal, and the interval until the transmission of the next SKP OS may be 6 Blocks shorter than normal. But the Transmitter must schedule a new SKP Ordered Set every 370 to 375 blocks, so the long-term average SKP OS transmission rate will match the scheduling rate.

Receivers must size their elastic buffers to tolerate the worst-case transmission interval between any two SKP Ordered Sets (which will depend on the Max Payload Size and the Link width), but can rely on receiving SKP Ordered Sets at a long term average rate of one SKP Ordered Set for every 370 to 375 blocks. The SKP Ordered Set interval is not checked by the Receiver.
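
As a rough illustration of this scheduling behavior (a toy model with hypothetical names, not taken from the specification), the sketch below schedules a SKP Ordered Set every 370 to 375 blocks and defers each transmission by up to a few blocks to stand in for framing delays; individual transmission intervals drift while the long-term average tracks the scheduling rate:

    import random

    def simulate_skp_intervals(total_blocks=100_000, max_defer=6):
        # Schedule a SKP OS every 370-375 blocks; transmit it only after a
        # (randomized) deferral standing in for TLP/DLLP completion and the EDS packet.
        intervals, last_tx = [], 0
        next_scheduled = random.randint(370, 375)
        while True:
            tx_block = next_scheduled + random.randint(0, max_defer)
            if tx_block > total_blocks:
                break
            intervals.append(tx_block - last_tx)
            last_tx = tx_block
            next_scheduled += random.randint(370, 375)  # scheduling clock keeps its own cadence
        return intervals

    ivals = simulate_skp_intervals()
    print(min(ivals), max(ivals), sum(ivals) / len(ivals))  # extremes drift; average stays ~370-375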

Section 3.5.2.1 - If a device can receive TLPs and Flow Control DLLPs normally but does not receive any Ack or Nak DLLPs, does it exit to the DL_Inactive state?

When an Ack or Nak is not received and the REPLAY_TIMER expires, the TLPs in the Transmit Retry Buffer are retransmitted. The Replay Timer Timeout error is also reported.

Section 7.5.3 - An Endpoint sends a Memory Request Upstream to a Switch. How will the Switch determine if it needs to route that packet Upstream or to an Endpoint below another Downstream Port?

Each Port of a Switch contains registers that define Memory Space apertures. The Memory Base/Limit registers define an aperture for 32-bit non-prefetchable Memory Space, and the Prefetchable Memory Base/Limit registers together with their corresponding Upper registers define an aperture for 64-bit prefetchable Memory Space. With a properly configured Switch, the basic behavior is as follows:

1. If the TLP address falls within the aperture of another Downstream Port, the TLP is routed to that Downstream Port and sent Downstream.

2. If the TLP address falls within a Memory Space range mapped by any BAR within the Switch, the TLP is routed to the Function containing that BAR.

3. Otherwise, if the TLP address falls within an aperture of the Upstream Port, the TLP is handled as an Unsupported Request.

4. Otherwise, the TLP is routed to the Upstream Port, where it is sent to the Upstream Link or to another Function associated with the Upstream Port.

If a Switch implements and supports Access Control Services (ACS), ACS mechanisms provide additional controls governing whether each Memory Request TLP is routed normally to another Downstream Port, blocked as an error, or redirected Upstream even if its address falls within the aperture of another Downstream Port. See Section 6.12.
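
A minimal Python sketch of that decode order is shown below; the data structures and helper names are hypothetical, and ACS, locked transactions, and other special cases are ignored:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Aperture:
        base: int
        limit: int
        def contains(self, addr: int) -> bool:
            return self.base <= addr <= self.limit

    @dataclass
    class Port:
        name: str
        aperture: Aperture

    def route_upstream_memory_tlp(downstream_ports: List[Port], ingress: Port,
                                  internal_bars: List[Tuple[str, Aperture]],
                                  upstream_aperture: Aperture, addr: int):
        # 1. Another Downstream Port's aperture: route Downstream through that Port.
        for port in downstream_ports:
            if port is not ingress and port.aperture.contains(addr):
                return ("route_downstream", port.name)
        # 2. A BAR implemented within the Switch: deliver to the Function containing it.
        for func_name, bar in internal_bars:
            if bar.contains(addr):
                return ("deliver_to_function", func_name)
        # 3. Inside the Upstream Port's aperture but otherwise unclaimed: Unsupported Request.
        if upstream_aperture.contains(addr):
            return ("unsupported_request", None)
        # 4. Otherwise: route toward the Upstream Link.
        return ("route_upstream", None)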

Section 4.2.3 - Section 4.2.3 states, "After entering L0, irrespective of the current Link speed, neither component must transmit any DLLP if the equalization procedure must be performed, and until the equalization procedure completes." Does that result in the following sequence?

1. Negotiate a Link and enter L0. Do not allow DLLP transmission while in L0.

2. Change the data rate to 8.0 GT/s and execute the equalization procedure.

3. Enter L0. Allow DLLP transmission.

Yes, that is the expected sequence when the autonomous equalization mechanism is executed. Note that Section 4.2.3 also describes other equalization mechanisms.

Section 4.2.6 - Table 4-14 says that Receiver Errors should not be reported in the L0s or L1 states. During L1 entry, an Upstream Port's transmitter may be in Electrical Idle while its receivers are not in Electrical Idle. Similarly, a Port's transmitters may be in Electrical Idle for L0s, while its receivers are not in Electrical Idle. In these situations, should the Port report Receiver Errors such as 8b10b errors?

If the receivers are in L0s, Receiver Errors should not be reported. It does not matter whether the transmitters are in L0 or L0s for reporting of Receiver Errors. Section 4.2.6.5 specifies the 3 conditions required for the LTSSM to transition from L0 to L1. Until all of these conditions are satisfied, the LTSSM is in L0 state, and should report Receiver Errors, even if its transmitters are in Electrical Idle.

Section 4.2.4.2 - When upconfiguring a Link in the LTSSM Configuration.Linkwidth.Start state, are the Lanes which are being activated required to transmit an EIEOS first when they exit Electrical Idle?

No. Lanes being activated for upconfiguration are not required to align their exit of Electrical Idle with the transmission of any Symbol, Block, or Ordered Set type. Furthermore, the Lanes are not required to exit Electrical Idle before the LTSSM enters the Configuration.Linkwidth.Start state.

Section 2.3.2 - If a Requester receives a CplLk or CplDLk Completion that does not match the Transaction ID for any of the Requester's outstanding Requests, and the Requester does not support this type of Completion, is the Completion handled as an Unexpected Completion or a Malformed TLP?

Only Host CPUs are permitted to generate locked transaction sequences, so Endpoints should never receive CplLk or CplDLk Completions that match their Transaction IDs. An Endpoint is permitted to handle the case in question either as a Malformed TLP or an Unexpected Completion, depending upon implementation specific factors, such as whether it decodes these types of Completions. For this case it is recommended that an Endpoint handle it as an Unexpected Completion since it may be the result of a misrouted TLP, and best handled as an Advisory Non-Fatal Error as described in Section 6.2.3.2.4.5.

Section 4.2.6.4.4 - Table 4-5 defines that the valid range of Link Number (Symbol 1 of a TS1 Ordered Set) is 0-31 for Downstream Ports that support 8.0 GT/s or above. If a Downstream Port in the LTSSM Recovery.RcvrCfg state receives TS1 Ordered Sets with a Link Number that is not in the range 0-31, do they qualify as "TS1 Ordered Sets ... with Link or Lane numbers that do not match what is being transmitted" ?

Yes. The received Link Number (not in the range 0-31) does not match the transmitted Link Number (in the range 0-31).

Section 4.2.6.4.1 - While in the LTSSM Recovery.RcvrLock state, if a Port receives TS Ordered Sets with a Link or Lane number that does not match those being transmitted on at least one Lane, but receives TS Ordered Sets with Link and Lane numbers that match those being transmitted and the speed_change bit is equal to 1b on at least one other Lane, should the Port transition to the LTSSM Recovery.RcvrCfg state or the LTSSM Detect state after a 24 ms timeout?

The Port should transition to the LTSSM Recovery.RcvrCfg state.

Section 4.2.6.6.1.3 - How can I configure the RC, if permissible, to send 4096 FTS Ordered Sets to the EP while the RC transitions out of L0s?

Setting the Extended Synch bit in the Link Control register of the two devices on the link will increase the number of FTS Ordered Sets to 4096, but the Extended Synch bit is used only for testing purposes.

Section 4.2.6.4.1 - When the directed_speed_change variable is changed (as a result of receiving eight consecutive TS1 or TS2 Ordered Sets with the speed_change bit set while in Recovery.RcvrLock), is the eight_consecutive counter cleared so that the device does not transition to the Recovery.RcvrCfg state at this time?

When setting the directed_speed_change variable (in response to receiving 8 consecutive TS1 or TS2 Ordered Sets with the speed_change bit set), it is recommended, but not required, to reset the counters/status of received TS1 or TS2 Ordered Sets. That is, it is recommended that a Device receive an additional 8 consecutive TS1 or TS2 Ordered Sets with the speed_change bit set after it has started transmitting TS1 Ordered Sets with the speed_change bit set before it transitions from Recovery.RcvrLock to Recovery.RcvrCfg.

Section 5.5.3.3.1 - Section 5.5.3.3.1 of the PCIe spec states the following: In order to ensure common mode has been established, the Downstream Port must maintain a timer, and the Downstream Port must not send TS2 training sequences until a minimum of TCOMMONMODE has elapsed since the Downstream Port has started both transmitting and receiving TS1 training sequences.

If the Downstream Port receives no valid TS1 Ordered Sets but does receive valid TS2 Ordered Sets, should it timeout and transition to Detect?

No, the timer is to guarantee that the Transmitter will stay in Recovery.RcvrLock for a minimum time to establish common mode. The Port must wait to transition from Recovery.RcvrLock until this timer has expired, and the timer does not start counting until an exit from Electrical Idle has been detected. Errata A21 modified this section of the L1 PM Substates with CLKREQ ECN document.

Section 7.11.7 - Software has enabled Virtual Channel VC1 and currently UpdateFC DLLPs for VC1 are being transmitted on the link. Now, software disables VC1. So my question is, should UpdateFC DLLPs for VC1 be transmitted on the link?

When VC Enable for VC1 is set to 0b, the device must stop transmitting UpdateFC DLLPs for VC1.

Section 7.28.3 - The Link Control 2 register requires the default value of the Target Link Speed field to be the highest supported speed. Should the default value of the M-PCIe Target Link Speed Control field be 10b?

The default value of the M-PCIe Target Link Speed Control field is 01b.

Section 4.2.6.9 - When in the Disabled state the Upstream Port transitions to Detect when an Electrical Idle exit is detected at the receiver. Is an Electrical Idle exit required to be detected on all Lanes?

An Electrical Idle exit is required to be detected on at least one Lane.

Section 7.8.6 - Is the L1 Exit Latency in the Link Capabilities register only the ASPM L1.0 exit latency or does it include the added ASPM L1.2 to ASPM L1.0 latency?

The ASPM L1 Exit Latency in the Link Capabilities register indicates the L1/L1.0 to L0 latency, and does not include added latency due to Clock Power Management, L1.1 or L1.2.

Section 5.3.1.4.1 - While in the D2 state, a Function must not initiate any Request TLPs on the Link with the exception of a PME Message. What are the requirements in the D3hot state?

While in the D3hot state, a Function must not initiate any Request TLPs on the Link with the exception of a PME Message.

Section 5.3.1.4.1 - A Root Port is connected to a multifunction Endpoint. The Root Port is ECRC capable. The multifunction Endpoint only has 1 function that is ECRC capable, the others are not. Software enables ECRC checking and generation in the Root Port and also enables ECRC checking and generation in the 1 Endpoint function that supports it. Given that one function is enabled for ECRC check, is the EP required to check the TD bit & ECRC on all TLPs that target any of the endpoint's functions regardless of whether the receiving function is ECRC capable?

Per Section 2.7.1, the device is required to check ECRC for all TLPs where it is the ultimate PCI Express Receiver. Note that per Section 6.2.4, an ECRC Error is not Function-specific, so it must be logged in all Functions of that device.

Section 8.4.2 - A Switch Upstream Port receives a Memory Read TLP while in the D3hot state. The Upstream Port handles the received Memory Read Request as an Unsupported Request. Is the Switch Upstream Port allowed or required to send a Completion with a Completion Status of UR, or must it transmit the Completion only after the power state is programmed to D0?

The Completion with UR status is required to be transmitted while the Port is in D3hot, assuming that the Port remains powered long enough to transmit the Completion.

Section 7.28.3 - When the maximum M-PCIe Link speed supported is 2.5 GT/s, what will be the Link speed following a reset?

The Link Speed following reset will be the result of the Configuration process. During the M-PCIe discovery and configuration process, the RRAP is used to discover M-PHY capabilities and to analyze and configure the configuration attributes accordingly. Depending on the high speeds supported by both components, the Link Speed and Rate Series may be configured for HS-G1, HS-G2, or HS-G3 and Rate A or B, respectively. For this particular example, the Link Speed could be either HS-G1 or HS-G2, depending on the supported Link Speeds of the other component on the Link.

Section 3.5.2.1 - The M-PCIe ECN contains no information on the REPLAY_TIMER and the Ack Transmission Latency Limit. What are the recommended values for the following?

  • Gear 1 - Rate A
  • Gear 1 - Rate B
  • Gear 2 - Rate A
  • Gear 2 - Rate B
  • Gear 3 - Rate A
  • Gear 3 - Rate B

We suggest using the 2.5 GT/s values for Gears 1 and 2 at Rates A and B, and the 5.0 GT/s values for Gear 3 at Rates A and B, until clarification is received from the workgroup. This clarification will be included in the next errata release.

Section 6.20 - Is a PASID permitted on a Completion? [refer to Section 6.20 – Lines 8-13 on page 628 of PCIe 3.1]

Section 6.20 – Lines 8-13 on page 628 of PCIe 3.1 states:
A PASID TLP Prefix is permitted on:

- Memory Requests (including AtomicOp Requests) with Untranslated Addresses (See Section 2.2.4.1).

- Translation Requests and Translation Message Requests as defined in the Address Translation Services Specification.

The PASID TLP Prefix is not permitted on any other TLP.

No, the text is correct as-is: a PASID is not permitted on a Completion. We will consider whether an errata is needed to clarify this.

Section 2.2.9 - If a Link is up and the bus and device numbers have been snooped by the Endpoint, and then the Link is disabled and enabled again, should the Endpoint set the bus and device number fields to zero in a Completion that it sends before the first CfgWr is received?

Yes, when the LTSSM enters the Disabled state, the DLCMSM transitions to DL_Inactive, the Link transitions to DL_Down, and this causes the equivalent of a Hot Reset to the Endpoint. See Sections 2.2.9 & 3.2.1.
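
As a hedged illustration of this behavior (class and method names are hypothetical), the captured Completer ID can be modeled as being cleared on DL_Down and re-captured from the first Configuration Write:

    class EndpointCompleterId:
        # Toy model: Bus/Device numbers captured from Configuration Writes are
        # cleared whenever the Data Link Layer reports DL_Down, so Completions
        # sent before the next CfgWr use zero for those fields.
        def __init__(self):
            self.bus, self.device = 0, 0

        def on_dl_down(self):  # e.g. LTSSM enters Disabled, DLCMSM goes to DL_Inactive
            self.bus, self.device = 0, 0

        def on_config_write(self, bus, device):
            self.bus, self.device = bus, device  # snoop Bus/Device numbers from the CfgWr

        def completer_id(self, function_number):
            return (self.bus, self.device, function_number)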

Section 4.2.6.4 - What does the specification require for transmitting TS1 Ordered Sets while in the Recovery.RcvrLock state?

While the LTSSM is in Recovery.RcvrLock, the Transmitter must send TS1 Ordered Sets continuously on all configured Lanes, with the following exceptions:

1. At data rates above 2.5 GT/s, send an EIEOS every 32 TS1 Ordered Sets (see Section 4.2.4.2). The EIEOS guarantees that the Electrical Idle exit will be detected by the link partner.

2. At all data rates, send SKP Ordered Sets according to Section 4.2.7.3, for clock compensation.
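
For illustration, here is a simplified Python sketch of this transmit pattern (SKP Ordered Set insertion is omitted, and the names are hypothetical):

    def rcvrlock_transmit_stream(data_rate_gt_s, count):
        # Continuous TS1s, with an EIEOS after every 32 TS1 Ordered Sets when
        # operating above 2.5 GT/s. SKP Ordered Set scheduling is omitted for brevity.
        ts1_since_eieos = 0
        for _ in range(count):
            yield "TS1"
            ts1_since_eieos += 1
            if data_rate_gt_s > 2.5 and ts1_since_eieos == 32:
                yield "EIEOS"  # ensures the link partner detects the Electrical Idle exit
                ts1_since_eieos = 0

    print(list(rcvrlock_transmit_stream(8.0, 40)))  # 32 TS1s, one EIEOS, then 8 more TS1s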

PCI Express - 4.0

What is PCI Express (PCIe) 4.0? What are the requirements for this evolution of the PCIe architecture?

PCIe 4.0 is the next evolution of the ubiquitous and general-purpose PCI Express I/O specification. At 16GT/s bit rate, the interconnect performance bandwidth will be doubled over the PCIe 3.0 specification, while preserving compatibility with software and mechanical interfaces. The key requirement for evolving the PCIe architecture is to continue to provide performance scaling consistent with bandwidth demand from a variety of applications with low cost, low power and minimal perturbations at the platform level. One of the main factors in the wide adoption of the PCIe architecture is its sensitivity to high-volume manufacturing capabilities and materials such as FR4 boards, low-cost connectors and so on.

What is the bit rate for the PCIe 4.0 specification and how does it compare to prior generations of PCIe?

Based on PCI-SIG feasibility analysis, the bit rate for the PCIe 4.0 specification will be 16GT/s. This bit rate represents the optimum tradeoff between performance, manufacturability, cost, power and compatibility. PCI-SIG analysis covered multiple topologies. All of these studies confirmed the potential feasibility of 16GT/s signaling with low-cost enablers.

What are the results of the feasibility testing for the PCIe 4.0 specification?

After technical analysis, the PCI-SIG has determined that 16 GT/s on copper, which will double the bandwidth over the PCIe 3.0 specification, is technically feasible at approximately PCIe 3.0 power levels. The preliminary data also confirms that a 16GT/s interconnect can be manufactured in mainstream silicon process technology and can be deployed with existing low-cost materials and infrastructure, while maintaining compatibility with previous generations of PCIe architecture. In addition, the PCI-SIG will investigate advancements in active and idle power optimizations as they become available.

What were the requirements outlined for the feasibility analysis?

In assessing potential improvements to the connector, materials, silicon and channel improvements, PCI-SIG required that compatibility, low-cost and high-volume manufacturing be maintained.

Will PCIe 4.0 products be compatible with existing PCIe 1.x, PCIe 2.x and PCIe 3.x products?

PCI-SIG is proud of its long heritage of developing compatible architectures and its members have consistently produced compatible and interoperable products. In keeping with this tradition, the PCIe 4.0 architecture is compatible with prior generations of this technology, from software to clocking architecture to mechanical interfaces. That is to say PCIe 1.x, 2.x and 3.x cards will seamlessly plug into PCIe 4.0-capable slots and operate at the highest performance levels possible. Similarly, all PCIe 4.0 cards will plug into PCIe 1.x-, PCIe 2.x- and PCIe 3.x-capable slots and operate at the highest performance levels supported by those configurations.

Why is a new generation of PCIe architecture needed?

PCI-SIG responds to the needs of its members. As applications evolve to consume the I/O bandwidth provided by the current generation of the PCIe architecture, PCI-SIG begins to study the requirements for technology evolution to keep abreast of performance and feature requirements.

What are the initial target applications for the PCIe 4.0 architecture?

The PCIe 4.0 specification will address the many applications pushing for increased bandwidth at a low cost including server, workstation, desktop PC, notebook PC, tablets, embedded systems, peripheral devices, high-performance computing markets and more. The target implementations are entirely at the discretion of the designer.

Is PCIe 4.0 architecture more expensive to implement than PCIe 3.x?

PCI-SIG attempts to define and evolve the PCIe architecture in a manner consistent with low-cost and high-volume manufacturability considerations. While PCI-SIG cannot comment on design choices and implementation costs, optimized silicon die size and power consumption continue to be important considerations that inform PCIe specification development and architecture evolution.

Will there be a new compliance specification developed for the PCIe 4.0 specification?

For each revision of its specification, PCI-SIG develops compliance tests and related collateral consistent with the requirements of the new architecture. All of these compliance requirements are incremental in nature and build on the prior generation of the architecture. PCI-SIG anticipates releasing compliance specifications as they mature along with corresponding tests and measurement criteria. Each revision of the PCIe technology maintains its own criteria for product interoperability and admission into the PCI-SIG Integrators List.

PCI Express - 5.0

What is PCI Express (PCIe) 5.0 and what requirements guided its development?

The PCIe 5.0 architecture is an evolution of the ubiquitous and general-purpose PCI Express I/O architecture. It supports a maximum bit rate that is double that of the PCIe 4.0 architecture. The key requirement for evolving the PCIe architecture is to continue to provide performance scaling consistent with bandwidth demand from a variety of applications, and with low cost, low power, and minimal perturbations at the platform level. One of the main factors in the wide adoption of the PCIe architecture is its sensitivity to high-volume manufacturing capabilities and materials, low-cost connectors and so on.

What bit rates does the PCIe 5.0 specification support and how does it compare to prior PCIe generations?

A PCIe Link consists of 1, 2, 4, 8, 12, 16, or 32 Lanes, all operating at one of the supported signaling rates.

  • PCIe 1.0 provided an effective 2.5 Gigabits/second/Lane/direction of raw bandwidth.
  • PCIe 2.0 added support for 5.0 Gigabits/second/Lane/direction of raw bandwidth.
  • PCIe 3.0 added support for 8.0 Gigabits/second/Lane/direction of raw bandwidth.
  • PCIe 4.0 added support for 16.0 Gigabits/second/Lane/direction of raw bandwidth.
  • PCIe 5.0 adds support for 32.0 Gigabits/second/Lane/direction of raw bandwidth.

A PCIe 5.0 Link consisting of 32 Lanes and operating at a bit rate of 32 GT/s provides an effective raw bandwidth of 128 Gigabytes/second in each direction simultaneously. 
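
The raw figures above follow from a simple calculation, shown below in Python for convenience; note that 8b/10b or 128b/130b encoding overhead further reduces the usable data rate and is not included in these raw numbers:

    RATES_GT_S = {"PCIe 1.0": 2.5, "PCIe 2.0": 5.0, "PCIe 3.0": 8.0,
                  "PCIe 4.0": 16.0, "PCIe 5.0": 32.0}

    def raw_bandwidth_gbytes_per_s(rate_gt_s, lanes):
        # Raw per-direction bandwidth: signaling rate (GT/s) x Lanes / 8 bits per byte.
        return rate_gt_s * lanes / 8

    for gen, rate in RATES_GT_S.items():
        print(f"{gen}: x32 Link = {raw_bandwidth_gbytes_per_s(rate, 32):.0f} GB/s per direction")
    # PCIe 5.0: 32 GT/s x 32 Lanes / 8 = 128 GB/s raw in each direction, matching the figure above.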

Why is a new generation of PCIe architecture needed?

PCI-SIG responds to the needs of its members. As applications evolve to consume the I/O bandwidth provided by the current generation of the PCIe architecture, PCI-SIG begins to study the requirements for technology evolution to keep abreast of performance and feature requirements.
