Frequently Asked Questions

The following FAQ list was generated using standard responses provided to PCI-SIG members by Technical Support and PCI-SIG Administration. For questions regarding the PCI specifications, please refer to the specifications themselves as the authoritative text.

PCI Express - 3.0

What is PCI Express (PCIe) 3.0? What are the requirements for this evolution of the PCIe architecture?

PCIe 3.0 is the next evolution of the ubiquitous and general-purpose PCI Express I/O standard. At the 8GT/s bit rate, the interconnect performance bandwidth is doubled over PCIe 2.0, while preserving compatibility with software and mechanical interfaces. The key requirement for evolving the PCIe architecture is to continue to provide performance scaling consistent with the bandwidth demand from leading applications, at low cost and low power and with minimal perturbations at the platform level. One of the main factors in the wide adoption of the PCIe architecture is its insensitivity to high-volume manufacturing materials and tolerances such as FR4 boards, low-cost clock sources, connectors and so on. In providing full compatibility, the same topologies and channel reach as in PCIe 2.0 are supported for both client and server configurations. Another important requirement is the manufacturability of products using the most widely available silicon process technology. For the PCIe 3.0 architecture, PCI-SIG believes a 65nm process or better will be required to optimize silicon area and power.

What is the bit rate for PCIe 3.0 and how does it compare to prior generations of PCIe?

The bit rate for PCIe 3.0 is 8GT/s. This bit rate represents the optimum tradeoff between manufacturability, cost, power and compatibility.
The PCI-SIG analysis covered multiple topologies and configurations, including servers. All of these studies confirmed the feasibility of 8GT/s signaling with low-cost enablers and with minimal increases in power, silicon die size and complexity.

How does the PCIe 3.0 8GT/s "double" the PCIe 2.0 5GT/s bit rate?

The PCIe 2.0 bit rate is specified at 5GT/s, but with the 20 percent overhead of the 8b/10b encoding scheme, the delivered bandwidth is actually 4Gbps per lane per direction. PCIe 3.0 removes the requirement for 8b/10b encoding and uses a more efficient 128b/130b encoding scheme instead. By removing this overhead, the interconnect bandwidth is doubled to 8Gbps with the implementation of the PCIe 3.0 specification. This bandwidth is the same as that of an interconnect running at 10GT/s with the 8b/10b encoding overhead. In this way, the PCIe 3.0 specifications deliver the same effective bandwidth, but without the prohibitive penalties associated with 10GT/s signaling, such as PHY design complexity and increased silicon die size and power. The following table summarizes the bit rate and approximate bandwidths for the various generations of the PCIe architecture:

PCIe architecture   Raw bit rate   Interconnect bandwidth   Bandwidth per lane per direction   Total bandwidth for x16 link
PCIe 1.x            2.5GT/s        2Gbps                    ~250MB/s                           ~8GB/s
PCIe 2.x            5.0GT/s        4Gbps                    ~500MB/s                           ~16GB/s
PCIe 3.0            8.0GT/s        8Gbps                    ~1GB/s                             ~32GB/s

Total bandwidth represents the aggregate interconnect bandwidth in both directions.
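The arithmetic behind the table can be sketched in a few lines. This is an illustrative helper, not part of any PCI-SIG specification; the generation parameters are taken from the table above.

```python
# Per-lane PCIe bandwidth from raw bit rate and line-code efficiency.

def lane_bandwidth_mbps(gt_per_s: float, payload_bits: int, symbol_bits: int) -> float:
    """Effective MB/s per lane per direction."""
    efficiency = payload_bits / symbol_bits  # 8b/10b -> 0.8, 128b/130b -> ~0.985
    return gt_per_s * 1000 * efficiency / 8  # GT/s -> Mb/s -> MB/s

gen1 = lane_bandwidth_mbps(2.5, 8, 10)     # ~250 MB/s
gen2 = lane_bandwidth_mbps(5.0, 8, 10)     # ~500 MB/s
gen3 = lane_bandwidth_mbps(8.0, 128, 130)  # ~985 MB/s, i.e. ~1 GB/s
```

Note how the 128b/130b efficiency (~98.5 percent) is what lets 8GT/s deliver double the PCIe 2.0 bandwidth despite not doubling the raw bit rate.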

Do PCIe 3.0 specifications only deliver a signaling rate increase?

The PCIe 3.0 specifications comprise the Base and the Card Electromechanical (CEM) specifications. There may be updates to other form factor specifications as the need arises. Within the Base specification, which defines a chip-to-chip interface, updates have been made to the electrical section to comprehend 8GT/s signaling. As the technology definition progresses through the PCI-SIG specification development process, additional ECN and errata will be incorporated with each review cycle. For example, the current PCIe protocol extensions that address interconnect latency and other platform resource usage considerations have been rolled into the PCIe 3.0 specification revisions. The final PCIe 3.0 specification consolidates all ECN and errata published since the release of the PCIe 2.1 specification, as well as interim errata.

Will PCIe 3.0 products be compatible with existing PCIe 1.x and PCIe 2.x products?

PCI-SIG is proud of its long heritage of developing compatible architectures and its members have consistently produced compatible and interoperable products. In keeping with this tradition, the PCIe 3.0 architecture is fully compatible with prior generations of this technology, from software to clocking architecture to mechanical interfaces. That is to say PCIe 1.x and 2.x cards will seamlessly plug into PCIe 3.0-capable slots and operate at their highest performance levels. Similarly, all PCIe 3.0 cards will plug into PCIe 1.x- and PCIe 2.x-capable slots and operate at the highest performance levels supported by those configurations.

The following chart summarizes the interoperability between various generations of PCIe and the resultant interconnect performance level:

Transmitter Device   Receiver Device   Channel   Interconnect Data Rate
8GHz                 8GHz              8GHz      8.0GT/s
5GHz                 5GHz              5GHz      5.0GT/s
2.5GHz               2.5GHz            2.5GHz    2.5GT/s
8GHz                 5GHz              8GHz      5.0GT/s
8GHz                 2.5GHz            8GHz      2.5GT/s
5GHz                 2.5GHz            5GHz      2.5GT/s

In short, the notion of the compatible highest performance level is modeled after the mathematical least common denominator (LCD) concept. Also, PCIe 3.0 products will need to support 8b/10b encoding when operating in a pre-PCIe 3.0 environment.
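The "least common denominator" rule in the chart above reduces to taking the minimum supported rate across the transmitter, receiver and channel. This is a toy sketch of that rule only; the actual rate is negotiated through link training, not a simple function call.

```python
# Negotiated PCIe data rate: the highest rate every element supports,
# i.e. min() over the capabilities. Illustrative only.

def negotiated_rate(tx_gt_s: float, rx_gt_s: float, channel_gt_s: float) -> float:
    return min(tx_gt_s, rx_gt_s, channel_gt_s)

assert negotiated_rate(8.0, 5.0, 8.0) == 5.0   # PCIe 3.0 device with a PCIe 2.x partner
assert negotiated_rate(8.0, 2.5, 8.0) == 2.5   # PCIe 3.0 device with a PCIe 1.x partner
```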

What are the PCIe protocol extensions, and how do they improve PCIe interconnect performance?

The PCIe protocol extensions are primarily intended to improve interconnect latency, power and platform efficiency. These protocol extensions pave the way for better access to platform resources by various compute- and I/O-intensive applications as they interact with and through the PCIe interconnect hierarchy. There are multiple protocol extensions and enhancements being developed and they range in scope from data reuse hints, atomic operations, dynamic power adjustment mechanisms, loose transaction ordering, I/O page faults, BAR resizing and so on. Together, these protocol extensions will increase PCIe deployment leadership in emerging and future platform I/O usage models by enabling significant platform efficiencies and performance advantages.

When was the PCIe 3.0 specification made available?

PCI-SIG released the PCIe 3.0 specification on November 17, 2010.

What is 8b/10b encoding?

8b/10b encoding is a byte-oriented coding scheme that maps each byte of data into a 10-bit symbol. It guarantees a deterministic DC wander and a minimum edge density over a per-bit time continuum. These two characteristics permit AC coupling and a relaxed clock data recovery implementation. Since each byte of data is encoded as a 10-bit quantity, this encoding scheme guarantees that in a multi-lane system, there are no bubbles introduced in the lane striping process.

What is scrambling? How does scrambling impact the PCIe 3.0 architecture?

Scrambling is a technique where a known binary polynomial is applied to a data stream in a feedback topology. Because the scrambling polynomial is known, the data can be recovered by running it through a feedback topology using the inverse polynomial. Scrambling affects the PCIe architecture at two levels: the PHY layer and the protocol layer immediately above the PHY. At the PHY layer, scrambling introduces more DC wander than an encoding scheme such as 8b/10b; therefore, the Rx circuit must either tolerate the DC wander as margin degradation or implement a DC wander correction capability. Scrambling does not guarantee a transition density over a small number of unit intervals, only over a large number. The Rx clock data recovery circuitry must be designed to remain locked to the relative position of the last data edge in the absence of subsequent edges. At the protocol layer, an encoding scheme such as 8b/10b provides out-of-band control characters that are used to identify the start and end of packets. Without an encoding scheme (i.e. scrambling only) no such characters exist, so an alternative means of delineating the start and end of packets is required. Usually this takes the form of packet length counters in the Tx and Rx and the use of escape sequences. The choice for the scrambling polynomial is currently under study.
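An additive scrambler of the kind described above can be sketched with a small LFSR: the transmitter XORs the data with a pseudo-random sequence, and a receiver running the identical LFSR from the same seed XORs again to recover it. The 16-bit polynomial x^16 + x^5 + x^4 + x^3 + 1 from earlier PCIe generations is used here purely as an illustration, since the answer notes the PCIe 3.0 polynomial was still under study.

```python
# Additive (XOR) scrambling sketch. Scrambling and descrambling are the
# same operation because both sides generate the identical keystream.

def lfsr_stream(seed: int, nbytes: int) -> bytes:
    """Keystream from a 16-bit Galois LFSR (illustrative polynomial)."""
    state, out = seed, bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            msb = (state >> 15) & 1
            state = (state << 1) & 0xFFFF
            if msb:
                state ^= 0x0039  # taps for x^5, x^4, x^3 and the constant term
            byte = (byte << 1) | msb
        out.append(byte)
    return bytes(out)

def scramble(data: bytes, seed: int = 0xFFFF) -> bytes:
    return bytes(d ^ k for d, k in zip(data, lfsr_stream(seed, len(data))))

payload = b"example TLP payload"
assert scramble(scramble(payload)) == payload  # inverse operation recovers the data
```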

What is equalization? How is Tx equalization different from Rx equalization? What is trainable equalization?

Equalization is a method of distorting the data signal with a transform representing an approximate inverse of the channel response. It may be applied at the Tx, at the Rx, or at both. A simple form of equalization is the Tx de-emphasis specified in PCIe 1.x and PCIe 2.x, where data is sent at full swing after each polarity transition and at reduced swing for all bits of the same polarity thereafter. De-emphasis reduces the low-frequency energy seen by the Rx; since channels exhibit greater loss at high frequencies, the effect of equalization is to compensate for that loss. Equalization may also be used to compensate for ripples in the channel response that occur due to reflections from impedance discontinuities such as vias or connectors. Equalization may be implemented using various types of algorithms; the two most common are linear equalization (LE) and decision feedback equalization (DFE). Linear equalization may be implemented at the Tx or the Rx, while DFE is implemented at the Rx. Trainable equalization refers to the ability to adjust the tap coefficients: each combination of Tx, channel and Rx has a unique set of coefficients yielding an optimum signal-to-noise ratio. The training sequence consists of adjustments to the tap coefficients while applying a quality metric to minimize the error. The choice of the type of equalization to require in the next revision of the PCIe specifications depends largely on the interconnect channel optimizations that can be achieved at the lowest cost point. It is the intent of PCI-SIG to deliver the optimum combination of channel and silicon enhancements at the lowest cost for the most common topologies.
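The simple Tx de-emphasis rule described above (full swing after each polarity transition, reduced swing for repeated bits) can be modeled directly. The swing values are illustrative, not the specified de-emphasis ratios.

```python
# Tx de-emphasis sketch: first bit after a polarity transition at full
# swing, subsequent same-polarity bits at reduced swing.

def deemphasize(bits, full=1.0, reduced=0.7):
    levels, prev = [], None
    for b in bits:
        swing = full if b != prev else reduced  # transition -> full swing
        levels.append(swing if b else -swing)   # map bit to a signed level
        prev = b
    return levels

# 1,1,1,0: first 1 at full swing, repeats de-emphasized, the 0 full again
assert deemphasize([1, 1, 1, 0]) == [1.0, 0.7, 0.7, -1.0]
```

Suppressing the repeated-bit (low-frequency) energy this way pre-compensates for the channel's higher loss at high frequencies.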

Why is a new generation of PCIe needed?

PCI-SIG responds to the needs of its members. As applications evolve to consume the I/O bandwidth provided by the current generation of the PCIe architecture, PCI-SIG begins to study the requirements for technology evolution to keep abreast of performance and feature requirements.

What are the initial target applications for PCIe 3.0?

It is expected that graphics, Ethernet, InfiniBand, storage and PCIe switches will continue to drive the bandwidth evolution for the PCIe architecture and these applications are the current targets of the PCIe 3.0 technology. In the future, other applications may put additional bandwidth and performance demands on the PCIe architecture.

Does PCIe 3.0 enable greater power delivery to cards?

The PCIe Card Electromechanical (CEM) 3.0 specification consolidates all previous form factor power delivery specifications, including the 150W and the 300W specifications.

Is PCIe 3.0 more expensive to implement than PCIe 2.x?

PCI-SIG attempts to define and evolve the PCIe architecture in a manner consistent with low-cost and high-volume manufacturability considerations. While PCI-SIG cannot comment on design choices and implementation costs, optimized silicon die size and power consumption continue to be overarching imperatives that inform PCIe specification development and architecture evolution.

Has there been a new compliance specification developed for PCIe 3.0?

For each revision of its specification, PCI-SIG develops compliance tests and related collateral consistent with the requirements of the new architecture. All of these compliance requirements are incremental in nature and build on the prior generation of the architecture. PCI-SIG anticipates releasing compliance specifications as they mature along with corresponding tests and measurement criteria. Each revision of the PCIe technology maintains its own criteria for product interoperability and admission into the PCI-SIG Integrators List.

Does this mean that PCIe is finished at 8GT/s? What comes next?

The PCI-SIG will study the requirements of its members and of the industry for the next generation of the PCIe architecture following the successful release of the PCIe 3.0 specifications. Higher signaling rates depend on a number of factors. The PCI-SIG is committed to delivering the most robust and high-performance I/O interconnect specifications, while at the same time maintaining an uncompromised focus on low cost, low power, high volume manufacturability and compatibility, by taking advantage of breakthroughs in signaling technologies and silicon process capabilities.

Section 2.3.1.1 - If an End Point returns a Completion (Cpl) with no data and Successful Completion status to a memory read request, should this be handled as a Malformed TLP or as an Unexpected Completion?

A compliant device would not return a Completion (Cpl) with no data and Successful Completion status to a memory read request, so normally this should not occur. If a properly formed Cpl is received that matches the Transaction ID, it is recommended that it be handled as an Unexpected Completion, but it is permitted to be handled as a Malformed TLP.

Section 4.2.6.2.3 - Can you please clarify the below statement quoted from section "4.2.6.2.3. Polling.Configuration" of PCI Express 3.0 specification: Receiver must invert polarity if necessary (see Section 4.2.4.4). Does this imply the polarity inversion can only be initiated by receiver in Polling.Configuration state or can the inversion happen in Polling.Active state as well?

When polarity needs to be inverted, it must be done before exiting Polling.Configuration; this permits the inversion to happen in Polling.Active as well.

Section 4.2.6.3.1.2 - The question is regarding Configuration.Linkwidth.Start state in the case of upconfiguration. It is written in the spec: "The Transmitter sends out TS1 Ordered Sets with Link numbers and Lane numbers set to PAD on all the active Upstream Lanes; the inactive Lanes it is initiating to upconfigure the Link width; and if upconfigure_capable is set to 1b, on each of the inactive Lanes where it detected an exit from Electrical Idle since entering Recovery and has subsequently received two consecutive TS1 Ordered Sets with Link and Lane numbers, each set to PAD, in this substate." It is not mentioned here if sending of these TS1s should be done ONLY on lanes that detected a receiver in the last time the LTSSM was at Detect state. Is it so?

The only Lanes that can be part of an upconfigure sequence are Lanes that were part of the configured Link when configuration was performed with LinkUp = zero, and Lanes that failed to detect a Receiver cannot be part of that initially configured Link. Therefore, a Port in the Configuration.Linkwidth.Start state must only transmit TS1s on a subset of the Lanes that detected a Receiver while last in the Detect state, regardless of whether it is attempting to upconfigure the Link width.

Section 4.2.6.4.2 - According to pg227 of spec, "When using 128b/130b encoding, TS1 or TS2 Ordered Sets are considered consecutive only if Symbols 6-9 match Symbols 6-9 of the previous TS1 or TS2 Ordered Set". When in Recovery.Equalization and if using 128b/130b encoding, is it required that lane/link numbers (symbol 2) match in TS1s to be considered as consecutive or is it need not match?

The Receiver is not required to check the Link and Lane numbers while in Recovery.Equalization.

Section 4.2.6.4.2.2.1 - In redoing equalization (after a successful completion of one) what settings does an EP use in Recovery.Equalization phase 0?

The Transmitter sends TS1 Ordered Sets using the Transmitter settings specified by the Transmitter Presets received in the EQ TS2 Ordered Sets during the most recent transition to 8.0 GT/s data rate from 2.5 GT/s or 5.0 GT/s data rate.

Section 4.2.6.4.3 - When a device has down configured the number of operational lanes, what is the expected power state and characteristics of the unused lanes?

The unused transmitter Lanes are put into Electrical Idle. It is recommended that the receiver terminations be left on.

Section 4.2.6.4.3 - While down configured and a rate change request occurs, do the unused lanes also participate in the rate change?

The transmitter of the unused lanes remains in Electrical Idle during the speed change.

Section 4.2.6.4.1 - The specification says to transition from Recovery.RcvrLock to Recovery.RcvrCfg, upon receiving eight consecutive TS1 or TS2 Ordered Sets. If a Port in Recovery.RcvrLock state receives x (where x < 8) number of consecutive TS1 ordered sets and then receives (8 - x) number of consecutive TS2 ordered sets, should it transition to Recovery.RcvrCfg, OR should it wait for receiving 8 consecutive TS2 ordered sets to transition to Recovery.RcvrCfg (basically discarding the received TS1 Ordered Sets).

The transition requirements can be satisfied by receiving 8 TS1s, 8 TS2s, or a combination of TS1s and TS2s totaling 8.
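The counting rule above (any mix of consecutive TS1/TS2 Ordered Sets totaling eight) can be sketched as a simple counter. This is an illustrative model, not LTSSM implementation guidance.

```python
# Recovery.RcvrLock exit check: 8 consecutive Ordered Sets that are each
# either TS1 or TS2; anything else resets the count. Illustrative only.

def reached_rcvrcfg(ordered_sets, needed=8):
    consecutive = 0
    for os_type in ordered_sets:
        consecutive = consecutive + 1 if os_type in ("TS1", "TS2") else 0
        if consecutive >= needed:
            return True
    return False

assert reached_rcvrcfg(["TS1"] * 3 + ["TS2"] * 5)                # mixed 3 + 5 qualifies
assert not reached_rcvrcfg(["TS1"] * 7 + ["SKP"] + ["TS2"] * 7)  # interruption resets
```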

Section 4.2.7.3 - PCIe 3.0 Base spec section 4.2.7.4 states that "Receivers shall be tolerant to receive and process SKP Ordered Sets at an average interval between 1180 to 1538 Symbol Times when using 8b/10b encoding and 370 to 375 blocks when using 128b/130b encoding." For 128b/130b encoding, if the Transmitter sends one SKP OS after 372 blocks and a second after 376 blocks, the average interval comes out to be 374 blocks, which falls in the valid range. So is this allowed, or must every SKP interval fall inside the 370 to 375 block range?

At 8 GT/s, a SKP Ordered Set must be scheduled for transmission at an interval between 370 to 375 blocks. However, the Transmitter must not transmit the scheduled SKP Ordered Set until it completes transmission of any TLP or DLLP it is sending, and sends an EDS packet. Therefore, the interval between SKP OS transmissions may not always fall within a 370 to 375 block interval.

For example, if a SKP Ordered Set remains scheduled for 6 block times before framing rules allow it to be transmitted, the interval since the transmission of the previous SKP OS may be 6 blocks longer than normal, and the interval until the transmission of the next SKP OS may be 6 Blocks shorter than normal. But the Transmitter must schedule a new SKP Ordered Set every 370 to 375 blocks, so the long-term average SKP OS transmission rate will match the scheduling rate.

Receivers must size their elastic buffers to tolerate the worst-case transmission interval between any two SKP Ordered Sets (which will depend on the Max Payload Size and the Link width), but can rely on receiving SKP Ordered Sets at a long term average rate of one SKP Ordered Set for every 370 to 375 blocks. The SKP Ordered Set interval is not checked by the Receiver.
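The scheduling-versus-transmission distinction above can be illustrated with a small model: scheduling is strictly periodic, while each actual transmission may be deferred until the packet in flight completes. Names and numbers here are illustrative.

```python
# SKP OS scheduling sketch: scheduled every `schedule_interval` blocks,
# transmitted only after any per-SKP deferral (waiting out a TLP/DLLP).

def skp_tx_times(schedule_interval, deferrals, count):
    """Block number at which each SKP OS actually goes out."""
    times, scheduled = [], 0
    for i in range(count):
        scheduled += schedule_interval          # scheduling is strictly periodic
        times.append(scheduled + deferrals[i])  # deferral shifts the transmission
    return times

tx = skp_tx_times(372, [0, 6, 0], 3)     # second SKP held 6 blocks by a TLP
gaps = [b - a for a, b in zip(tx, tx[1:])]
assert gaps == [378, 366]                # individual gaps stray outside 370-375
assert tx[-1] == 3 * 372                 # the long-term average matches the schedule
```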

Section 3.5.2.1 - If a device can receive TLPs and flow control DLLPs normally but does not receive any Ack or Nak, does it exit to the DL_Inactive state?

When an Ack or Nak is not received and the REPLAY_TIMER expires, the TLPs in the Transmit Retry Buffer are retransmitted. The Replay Timer Timeout error is also reported.

Section 7.5.3 - An Endpoint sends a Memory Request Upstream to a Switch. How will the Switch determine if it needs to route that packet Upstream or to an Endpoint below another Downstream Port?

Each Port of a Switch contains registers that define Memory Space apertures. The Memory Base/Limit registers define an aperture for 32-bit non-prefetchable Memory Space. The Prefetchable Memory Base/Limit & their corresponding Upper registers define an aperture for 64-bit prefetchable Memory Space. Here is the basic behavior with a properly configured Switch. If the TLP address falls within the aperture of another Downstream Port, the TLP is routed to that Downstream Port and sent Downstream. If the TLP address falls within a Memory Space range mapped by any BAR within the Switch, the TLP is routed to the Function containing that BAR. Otherwise, if the TLP address falls within an aperture of the Upstream Port, the TLP is handled as an Unsupported Request. Otherwise, the TLP is routed to the Upstream Port where it is sent to the Upstream Link or another Function associated with the Upstream Port.

If a Switch implements and supports Access Control Services (ACS), ACS mechanisms provide additional controls governing whether each Memory Request TLP is routed normally to another Downstream Port, blocked as an error, or redirected Upstream even if its address falls within the aperture of another Downstream Port. See Section 6.12.
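The routing decision described above (before any ACS redirection) can be sketched as a sequence of aperture checks. The function and structure are illustrative only; BAR hits within the Switch itself are omitted for brevity.

```python
# Upstream-received Memory Request routing in a Switch, per the basic
# behavior described above. Apertures are (base, limit) address ranges.

def route_upstream_tlp(addr, downstream_apertures, upstream_aperture):
    for port, (base, limit) in downstream_apertures.items():
        if base <= addr <= limit:
            return port          # peer-to-peer: route to that Downstream Port
    base, limit = upstream_aperture
    if base <= addr <= limit:
        return "UR"              # claimed by the Upstream Port aperture: error
    return "upstream"            # default route toward the Upstream Link

apertures = {"port1": (0x9000_0000, 0x9FFF_FFFF)}
up = (0x8000_0000, 0xAFFF_FFFF)
assert route_upstream_tlp(0x9800_0000, apertures, up) == "port1"
assert route_upstream_tlp(0xA000_0000, apertures, up) == "UR"
assert route_upstream_tlp(0xC000_0000, apertures, up) == "upstream"
```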

Section 4.2.3 - Section 4.2.3 states, "After entering L0, irrespective of the current Link speed, neither component must transmit any DLLP if the equalization procedure must be performed, and until the equalization procedure completes." Does that result in the following sequence?

1. Negotiate a Link and enter L0. Do not allow DLLP transmission while in L0.
2. Change the data rate to 8.0 GT/s and execute the equalization procedure.
3. Enter L0. Allow DLLP transmission.

Yes, that is the expected sequence when the autonomous equalization mechanism is executed. Note that Section 4.2.3 also describes other equalization mechanisms.

Section 4.2.6 - Table 4-14 says that Receiver Errors should not be reported in the L0s or L1 states. During L1 entry, an Upstream Port's transmitter may be in Electrical Idle while its receivers are not in Electrical Idle. Similarly, a Port's transmitters may be in Electrical Idle for L0s, while its receivers are not in Electrical Idle. In these situations, should the Port report Receiver Errors such as 8b10b errors?

If the receivers are in L0s, Receiver Errors should not be reported. It does not matter whether the transmitters are in L0 or L0s for reporting of Receiver Errors. Section 4.2.6.5 specifies the 3 conditions required for the LTSSM to transition from L0 to L1. Until all of these conditions are satisfied, the LTSSM is in L0 state, and should report Receiver Errors, even if its transmitters are in Electrical Idle.

Section 4.2.4.2 - When upconfiguring a Link in the LTSSM Configuration.Linkwidth.Start state, are the Lanes which are being activated required to transmit an EIEOS first when they exit Electrical Idle?

No. Lanes being activated for upconfiguration are not required to align their exit of Electrical Idle with the transmission of any Symbol, Block, or Ordered Set type. Furthermore, the Lanes are not required to exit Electrical Idle before the LTSSM enters the Configuration.Linkwidth.Start state.

Section 2.3.2 - If a Requester receives a CplLk or CplDLk Completion that does not match the Transaction ID for any of the Requester's outstanding Requests, and the Requester does not support this type of Completion, is the Completion handled as an Unexpected Completion or a Malformed TLP?

Only Host CPUs are permitted to generate locked transaction sequences, so Endpoints should never receive CplLk or CplDLk Completions that match their Transaction IDs. An Endpoint is permitted to handle the case in question either as a Malformed TLP or an Unexpected Completion, depending upon implementation specific factors, such as whether it decodes these types of Completions. For this case it is recommended that an Endpoint handle it as an Unexpected Completion since it may be the result of a misrouted TLP, and best handled as an Advisory Non-Fatal Error as described in Section 6.2.3.2.4.5.

Section 4.2.6.4.4 - Table 4-5 defines that the valid range of Link Number (Symbol 1 of a TS1 Ordered Set) is 0-31 for Downstream Ports that support 8.0 GT/s or above. If a Downstream Port in the LTSSM Recovery.RcvrCfg state receives TS1 Ordered Sets with a Link Number that is not in the range 0-31, do they qualify as "TS1 Ordered Sets ... with Link or Lane numbers that do not match what is being transmitted" ?

Yes. The received Link Number (not in the range 0-31) does not match the transmitted Link Number (in the range 0-31).

Section 4.2.6.4.1 - While in the LTSSM Recovery.RcvrLock state, if a Port receives TS Ordered Sets with a Link or Lane number that does not match those being transmitted on at least one Lane, but receives TS Ordered Sets with Link and Lane numbers that match those being transmitted and the speed_change bit is equal to 1b on at least one other Lane, should the Port transition to the LTSSM Recovery.RcvrCfg state or the LTSSM Detect state after a 24 ms timeout?

The Port should transition to the LTSSM Recovery.RcvrCfg state.

Section 4.2.6.6.1.3 - How can I configure the RC, if permissible, to send 4096 FTS Ordered Sets to the EP while the RC transitions out of L0s?

Setting the Extended Synch bit in the Link Control register of the two devices on the link will increase the number of FTS Ordered Sets to 4096, but the Extended Synch bit is used only for testing purposes.

Section 4.2.6.4.1 - When the directed_speed_change variable is changed (as a result of receiving eight consecutive TS1 or TS2 Ordered Sets with the speed_change bit set while in Recovery.RcvrLock), is the eight_consecutive counter cleared, so that the device does not transition to the Recovery.RcvrCfg state at this time?

When setting the directed_speed_change variable (in response to receiving 8 consecutive TS1 or TS2 Ordered Sets with the speed_change bit set), it is recommended, but not required, to reset the counters/status of received TS1 or TS2 Ordered Sets. That is, it is recommended that a Device receive an additional 8 consecutive TS1 or TS2 Ordered Sets with the speed_change bit set after it has started transmitting TS1 Ordered Sets with the speed_change bit set before it transitions from Recovery.RcvrLock to Recovery.RcvrCfg.

Section 5.5.3.3.1 - Section 5.5.3.3.1 of the PCIe spec states the following: In order to ensure common mode has been established, the Downstream Port must maintain a timer, and the Downstream Port must not send TS2 training sequences until a minimum of TCOMMONMODE has elapsed since the Downstream Port has started both transmitting and receiving TS1 training sequences.

If the Downstream Port receives no valid TS1 Ordered Sets but does receive valid TS2 Ordered Sets, should it timeout and transition to Detect?

No, the timer is to guarantee that the Transmitter will stay in Recovery.RcvrLock for a minimum time to establish common mode. The Port must wait to transition from Recovery.RcvrLock until this timer has expired, and the timer does not start counting until an exit from Electrical Idle has been detected. Errata A21 modified this section of the L1 PM Substates with CLKREQ ECN document.

Section 7.11.7 - Software has enabled Virtual Channel VC1 and currently UpdateFC DLLPs for VC1 are being transmitted on the link. Now, software disables VC1. So my question is, should UpdateFC DLLPs for VC1 be transmitted on the link?

When VC Enable for VC1 is set to 0b, the device must stop transmitting UpdateFC DLLPs for VC1.

Section 7.28.3 - The default Target Link Speed field in the Link Control 2 register requires the field be set to the highest supported speed. Should the default value of the M-PCIe Target Link Speed Control field be 10b?

The default value of the M-PCIe Target Link Speed Control field is 01b.

Section 4.2.6.9 - When in the Disabled state the Upstream Port transitions to Detect when an Electrical Idle exit is detected at the receiver. Is an Electrical Idle exit required to be detected on all Lanes?

An Electrical Idle exit is required to be detected on at least one Lane.

Section 7.8.6 - Is the L1 Exit Latency in the Link Capabilities register only the ASPM L1.0 exit latency or does it include the added ASPM L1.2 to ASPM L1.0 latency?

The ASPM L1 Exit Latency in the Link Capabilities register indicates the L1/L1.0 to L0 latency, and does not include added latency due to Clock Power Management, L1.1 or L1.2.

Section 5.3.1.4.1 - While in the D2 state, a Function must not initiate any Request TLPs on the Link with the exception of a PME Message. What are the requirements in the D3hot state?

While in the D3hot state, a Function must not initiate any Request TLPs on the Link with the exception of a PME Message.

Section 5.3.1.4.1 - A Root Port is connected to a multifunction Endpoint. The Root Port is ECRC capable. The multifunction Endpoint only has 1 function that is ECRC capable, the others are not. Software enables ECRC checking and generation in the Root Port and also enables ECRC checking and generation in the 1 Endpoint function that supports it. Given that one function is enabled for ECRC check, is the EP required to check the TD bit & ECRC on all TLPs that target any of the endpoint's functions regardless of whether the receiving function is ECRC capable?

Per Section 2.7.1, the device is required to check ECRC for all TLPs where it is the ultimate PCI Express Receiver. Note that per Section 6.2.4, an ECRC Error is not Function-specific, so it must be logged in all Functions of that device.

Section 8.4.2 - A Switch Upstream Port receives a Memory Read TLP while in the D3hot state and handles the received Memory Read Request as an Unsupported Request. Is the Switch Upstream Port allowed or required to send a Completion with UR Completion Status, or must it transmit the Completion only after the power state is programmed back to D0?

The Completion with UR status is required to be transmitted while the Port is in D3hot, assuming that the Port remains powered long enough to transmit the Completion.

Section 7.28.3 - When the maximum M-PCIe Link speed supported is 2.5 GT/s, what will be the Link speed following a reset?

The Link Speed following reset will be the result of the Configuration process. During the M-PCIe discovery and configuration process, RRAP is used to discover M-PHY capabilities and to analyze and configure the configuration attributes accordingly. Depending on the high-speed gears supported by both components, the Link Speed and Rate Series may be configured for HS-G1, HS-G2 or HS-G3 and RATE A or B, respectively. For this particular example, the Link Speed could be either HS-G1 or HS-G2, depending on the supported Link Speeds of the other component on the Link.

Section 3.5.2.1 - The M-PCIe ECN contains no information on the REPLAY_TIMER and the Ack Transmission Latency Limit. What are the recommended values for the following?

- Gear 1 - Rate A
- Gear 1 - Rate B
- Gear 2 - Rate A
- Gear 2 - Rate B
- Gear 3 - Rate A
- Gear 3 - Rate B

We suggest using the 2.5 GT/s values for Gears 1 and 2 at Rates A and B, and the 5.0 GT/s values for Gear 3 at Rates A and B, until clarification is received from the workgroup. This clarification will be included in the next errata release.

Section 6.20 - Is a PASID permitted on a Completion? [refer to Section 6.20 – Lines 8-13 on page 628 of PCIe 3.1]

Section 6.20 – Lines 8-13 on page 628 of PCIe 3.1 states:
A PASID TLP Prefix is permitted on:

- Memory Requests (including AtomicOp Requests) with Untranslated Addresses (See Section 2.2.4.1).

- Translation Requests and Translation Message Requests as defined in the Address Translation Services Specification.

The PASID TLP Prefix is not permitted on any other TLP.

No, the text is correct as-is: a PASID is not permitted on a Completion. We will consider whether an errata is needed to clarify this.


Section 2.2.9 - If a link is up and bus and device numbers are snooped by the Endpoint, and then the link is disabled and enabled again, should the Endpoint set the bus and device number fields to zero in a Completion that it sends before the first CfgWr is received?

Yes, when the LTSSM enters the Disabled state, the DLCMSM transitions to DL_Inactive, the Link transitions to DL_Down, and this causes the equivalent of a Hot Reset to the Endpoint. See Sections 2.2.9 & 3.2.1.

Section 4.2.6.4 - What does the specification require for transmitting TS1 Ordered Sets while in the Recovery.RcvrLock state?

While the LTSSM is in Recovery.RcvrLock, the Transmitter must send TS1 Ordered Sets continuously on all configured Lanes, with the following exceptions:

1. At data rates above 2.5 GT/s, send an EIEOS after every 32 TS1 Ordered Sets (Section 4.2.4.2). The EIEOS guarantees that the Electrical Idle exit will be detected by the link partner.
2. At all data rates, send SKP Ordered Sets according to Section 4.2.7.3, for clock compensation.
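The resulting transmit pattern can be sketched as a simple stream generator. This is an illustrative model of the EIEOS insertion rule only; SKP OS scheduling is omitted for brevity.

```python
# Recovery.RcvrLock transmit pattern sketch: continuous TS1 Ordered Sets,
# with an EIEOS after every 32 TS1s at data rates above 2.5 GT/s.

def rcvrlock_stream(num_ts1, above_2_5gt=True):
    out = []
    for i in range(1, num_ts1 + 1):
        out.append("TS1")
        if above_2_5gt and i % 32 == 0:
            out.append("EIEOS")  # keeps the Electrical Idle exit detectable
    return out

stream = rcvrlock_stream(64)
assert stream.count("EIEOS") == 2
assert stream[32] == "EIEOS" and stream[65] == "EIEOS"
```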