Frequently Asked Questions
The following FAQ list was generated using standard responses provided to PCI-SIG members by Technical Support and PCI-SIG Administration. For questions regarding the PCI specifications, please refer to the specifications themselves as the authoritative text.
PCI Express
Who developed the ExpressCard standard? The ExpressCard standard was created by a broad coalition of PCMCIA member companies, including Dell, Hewlett Packard, IBM, Intel, Lexar Media, Microsoft, SCM Microsystems and Texas Instruments. PCMCIA developed the new standard with assistance from the USB Implementers Forum (USB-IF) and PCI-SIG. PCMCIA was a non-profit trade association founded in 1989 to establish technical standards for PC Card technology and to promote interchangeability among computer systems. In 2010, PCMCIA transferred its technology assets to the USB-IF and dissolved as an organization.
How can PCI Express developers build components that can be integrated into ExpressCard products? First, developers must use silicon that has passed the PCI-SIG compliance program. PCI Express components that successfully pass the PCI-SIG compliance tests may be usable in an ExpressCard module if they allow for a module design that meets the additional requirements (e.g., power management, power and thermal) detailed in the ExpressCard standard.
What is the compliance program for ExpressCard products? The ExpressCard compliance program is now being run by the USB-IF. For full details, visit https://www.usb.org/members.
PCI Express - 2.0
How can I get a copy of the PCI Express (PCIe) 2.0 specification? Members may access specifications online on our Specifications web page; non-members may purchase specifications (an order form is available on our Ordering Information web page).
What prompted the need for another generation of PCI Express (PCIe)? The PCIe 1.1 specification was developed to meet the needs of most I/O platforms. However, a few applications, such as graphics, continue to require more bandwidth in order to enrich user experiences. PCI-SIG also saw the opportunity to add new functional enhancements, as well as to incorporate all edits it had received to the PCIe 1.1 spec (via ECNs). In response to these needs, PCI-SIG developed PCI Express 2.0 (PCIe 2.0). It provides faster signaling, which doubles the bit rate from 2.5GT/s to 5GT/s.
What are the benefits of PCIe 2.0? What business opportunities does it bring to the market? While doubling the bit rate satisfies high-bandwidth applications, faster signaling has the advantage of allowing various interconnect links to save cost by adopting a narrower configuration. For example, a PCI Express 1.1 x8 link (8 lanes) yields a total aggregate bandwidth of 4GB/s, which is the same bandwidth obtained from a PCI Express 2.0 x4 link (4 lanes) that adopts the 5GT/s signaling technology. This can result in significant savings in platform implementation cost while achieving the same performance level. Backward compatibility is retained: 2.5GT/s adapters can plug into 5.0GT/s slots and will run at the slower rate. Conversely, PCIe 2.0 adapters running at 5.0GT/s can plug into existing PCIe slots and run at the slower rate of 2.5GT/s.
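As a quick arithmetic check of the x8/x4 equivalence above, here is a minimal sketch (the 0.8 factor is the 8b/10b coding efficiency, and aggregate bandwidth counts both directions):

```c
#include <stdio.h>

/* Aggregate (both-direction) link bandwidth in GB/s:
   GT/s x 0.8 (8b/10b efficiency) x lanes x 2 directions / 8 bits per byte */
static double aggregate_gb_per_s(double gt_per_s, int lanes)
{
    return gt_per_s * 0.8 * lanes * 2.0 / 8.0;
}

int main(void)
{
    printf("PCIe 1.1 x8: %.1f GB/s\n", aggregate_gb_per_s(2.5, 8)); /* 4.0 */
    printf("PCIe 2.0 x4: %.1f GB/s\n", aggregate_gb_per_s(5.0, 4)); /* 4.0 */
    return 0;
}
```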
Are both 2.5GT/s and 5GT/s signaling rates supported in the PCIe 2.0 specification? The PCIe Base 2.0 specification supports both 2.5GT/s and 5GT/s signaling rates, in order to retain backward compatibility with existing PCIe 1.0 and 1.1 systems. Aside from the faster bit rate, there are a number of improvements in this specification that allow greater flexibility and reliability in designing PCIe links. For example, the interconnect can be dynamically managed for platform power and performance considerations through software controls. Another significant RAS feature is the inclusion of new controls to allow a PCIe link to continue to function even when some lanes become non-operational.
Is PCIe 2.0 backward compatible with PCIe 1.1 and 1.0? Yes. The PCIe Base 2.0 specification supports both the 2.5GT/s and 5GT/s signaling technologies. A device designed to the PCIe Base 2.0 specification may support 2.5GT/s, 5GT/s or both. However, a device designed to operate specifically at 5GT/s must also support 2.5GT/s signaling. The PCIe Base specification covers chip-to-chip topologies on the system board. For I/O extensibility across PCIe connectors, the Card Electromechanical (CEM) and ExpressModule™ specifications will also need to be updated, but this work will not impact mechanical compatibility of the slots, cards or modules.
What other features are introduced in the PCIe 2.0 specification? The predominant feature in PCIe 2.0 is the 5GT/s speed, which comes with new mechanisms for software control of link speed, reporting of speed and width changes, and control of loopback, among other new features.
What were the initial target applications for PCIe 2.0? The same set of core applications that benefited from the introduction of the PCIe 1.0 architecture (high-performance graphics, enterprise-class storage and high-speed networking) led the charge for adoption of PCIe 2.0.
What test tools and other infrastructure are available to support the development of PCIe 2.0 products? The established PCIe ecosystem delivers both pre-silicon and post-silicon tools to assist design engineers with implementing PCIe 2.0 products. In addition, PCI-SIG provides updated hardware test fixtures and test software upgrades to facilitate compliance verification at its Compliance Workshops.
Where can interested parties get more information? PCI-SIG is the sole source for PCIe specifications. In addition, both the PCI-SIG and its members provide a plethora of technical and marketing collateral in support of the PCIe architecture. Please visit www.pcisig.com for additional information.
Section 4.2.4.1 - What does Link Upconfigure mean? What is it used for? Link Upconfigure means the device is capable of increasing the link width. When Upconfigure is supported by both devices on a link, the link width may be reduced to conserve power; when link utilization is expected to increase, the devices can increase the link width again to support the higher data rate the device requires.
Section 4.2.4.3 - What is the purpose of the "inferred" electrical idle? The purpose of the "inferred" electrical idle is to permit a method of detecting an electrical idle that does not use an analog circuit. Using an analog circuit can be difficult at 5.0 GT/s and the inferred method is an alternate (permitted) method.
Section 4.2.6.10.1 - The Loopback slave should wait until Symbol lock is achieved after a link speed change during the Loopback.Entry substate. However, the base spec does not appear to define whether symbol lock should be achieved on some Lanes or all Lanes. The Loopback slave transitions to Loopback.Active immediately after exiting Electrical Idle following the link speed change. It attempts to acquire symbol lock on all of the lanes that were active when it entered Loopback.Entry.
Section 2.2.4.1 - In the PCIe 2.0 spec, page 57, there is a sentence: "For Memory Read Requests and Memory Write Requests, the Address Type field is encoded as shown in Table 2-5, with full descriptions contained in the Address Translation Services Specification, Revision 1.0." If the value of the AT field is invalid, what will PCIe do? Will it report an error, and if so, what error will be reported? Endpoints that do not support Address Translation Services set the AT field to 00b on transmitted TLPs and ignore the AT field on received TLPs.
Section 2.2.62. - How does a CPU know that a device exists and where the device is located? Configuration software reads configuration space address 00h, the Vendor ID register (using different bus, device and function numbers). When it gets a valid response, it knows a device exists at that ID.
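A minimal sketch of that probing loop is shown below. The cfg_read32 helper is a hypothetical platform-specific configuration-read hook, not a spec-defined API; enumeration software treats an all-ones Vendor ID as "no function present":

```c
#include <stdint.h>

/* Hypothetical platform hook that issues a configuration read. */
uint32_t cfg_read32(unsigned bus, unsigned dev, unsigned fn, unsigned off);

/* Probe every bus/device/function by reading dword 00h
   (Vendor ID / Device ID). FFFFh Vendor ID => nothing responded. */
void enumerate(void)
{
    for (unsigned bus = 0; bus < 256; bus++)
        for (unsigned dev = 0; dev < 32; dev++)
            for (unsigned fn = 0; fn < 8; fn++) {
                uint32_t id = cfg_read32(bus, dev, fn, 0x00);
                if ((id & 0xFFFF) != 0xFFFF) {
                    /* a function exists at this bus/device/function */
                }
            }
}
```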
Section 4.2.6.4.3 - An Endpoint is in the Recovery.RcvrCfg state and has received the 8 required consecutive TS2s. But before it is able to complete sending 16 TS2s, the Downstream Port sends an EIEOS and then starts sending TS1s. At this point, should the Endpoint move to Recovery.Idle after sending 16 TS2s? Or is it required to reset its RX counter, start counting TS1s and try to go to Configuration? Transition to Recovery.Idle after sending the 16 TS2s, since the requirements for that transition are met.
Section 5.3.2.3 - Is the following error scenario valid?
- RC sends PME_Turn_Off message to EP
- EP doesn't respond with an Ack due to delay
- EP responds with PME_TO_Ack message
- EP sends PM_Enter_L23 without sending an Ack.
Can the EP omit the Ack? No. The Endpoint is required to send an Ack for the PME_Turn_Off message. There is no valid reason for an extended delay of the Ack.
Section 2.7.2.2 - Per the PCIe 2.0 spec, p. 128, a Poisoned I/O or Memory Write Request, or a Message with data (except for vendor-defined Messages), that addresses a control register or control structure in the Completer must be handled as an Unsupported Request (UR) by the Completer. Should a Completer receiving this kind of TLP report the error as UR or as Poisoned TLP Received? The intent is for this error case to be handled as a Poisoned TLP Received error. Errata are being developed against the 2.1 Base spec to clarify this. Due to ambiguous language in earlier versions of the spec, a component will be permitted to handle this error as an Unsupported Request, but this will be strongly discouraged.
Section 2.9.1 - For a PCIe 2.0 Switch, when the Upstream Port goes to DL_Down, it is stated on pg. 131 line 11 that the config registers will be reset, and line 15 says to propagate the reset to all other ports (which I interpret as all Downstream Ports, am I right?). But on line 11 of pg. 130, it says Downstream Port registers are not affected except for status updates. Do these contradict? Yes, when its Link reports DL_Down, the Upstream Port on the Switch (and all devices downstream of it) is reset. The Section 2.9.1 text covers two contexts: the context of a Downstream Port in DL_Down and the context of an Upstream Port in DL_Down. Care must be taken to apply the requirements in this section to the correct context.
Section 4.2.6.10.1 - I have a question about the LTSSM in the Loopback state. When the LTSSM is in Loopback.Entry (p. 233, line 24), the Loopback master will send TS1s with the Compliance Receive bit (Symbol 5 bit 4) = 0b and the Loopback bit = 1b, and wait up to 100 ms to receive identical TS1s with the Loopback bit asserted. At this time, both sides of the link are probably at 5GT/s. If the Loopback slave cannot achieve Symbol lock, how long does the Loopback slave need to wait, and what is the next substate? The slave stays in Loopback.Active indefinitely until it receives an EIOS (or detects or infers an Electrical Idle). There is no timeout.
Section 4.2.6.1.1 - According to Section 4.2.6.1.1 in PCIe Base Specification 2.0, "The next state is Detect.Active after a 12 ms timeout or if Electrical Idle is broken on any Lane". Does this mean next state is Detect.Active only when electrical idle is broken? It means the next state is Detect.Active after a 12 ms timeout, or the next state is Detect.Active (prior to the end of the 12 ms timer) if Electrical Idle is broken on any Lane.
Section 6.18 - If a Switch supports the LTR feature, which of its ports must support LTR? If a Switch supports the LTR feature, it must support the feature on its Upstream Port and all Downstream Ports.
SECTION 4.2.6.3.5.2 - Based on the PCIe 2.0 spec, Line 13, page 212: "The next state is Configuration.Idle immediately after all Lanes that are transmitting TS2 Ordered Sets receive eight consecutive TS2 Ordered Sets with matching Lane and Link numbers (non-PAD) and identical data rate identifiers (including identical Link Upconfigure Capability (Symbol 4 bit 6)), and 16 consecutive TS2 Ordered Sets are sent after receiving one TS2 Ordered Set." Do the received eight consecutive TS2 Ordered Sets with identical data rate identifiers (including identical Link Upconfigure Capability (Symbol 4 bit 6)) need to match the transmitted TS2 Ordered Sets if the next state is Configuration.Idle? The received Link number must match the transmitted Link number. The received Lane number must match the transmitted Lane number. The received data rate identifier must be the same on all received lanes (but is not required to be the same as the transmitted data rate identifier). The received Link Upconfigure Capability bit must be the same on all received lanes (but is not required to be the same as the transmitted Link Upconfigure Capability bit).
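To make the answer concrete, here is a hedged sketch of the per-lane receive conditions it describes; the struct and field names are illustrative, not spec-defined:

```c
/* Hypothetical record of the relevant TS2 fields on one lane. */
struct ts2 { int link_num, lane_num, rate_id, upconfig; };

/* Receive-side checks for one lane, per the answer above:
   Link/Lane numbers must match what we transmitted; the data rate
   identifier and Upconfigure bit need not match what we transmitted,
   but must be identical across all eight received TS2s (and across lanes). */
static int lane_conditions_met(const struct ts2 rx[8],
                               int tx_link_num, int tx_lane_num)
{
    for (int i = 0; i < 8; i++) {
        if (rx[i].link_num != tx_link_num) return 0;
        if (rx[i].lane_num != tx_lane_num) return 0;
        if (rx[i].rate_id  != rx[0].rate_id)  return 0;
        if (rx[i].upconfig != rx[0].upconfig) return 0;
    }
    return 1;
}
```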
SECTION 7.5.3.6 - Can you please clarify the behavior of a Switch Downstream Port when the Secondary Bus Reset bit is Set in its Bridge Control register? It is our understanding that a Secondary Bus Reset will not affect anything in the Downstream Port where it is Set, only in components Downstream (i.e. components on or below the secondary bus of that virtual Bridge). Should the primary side of the virtual Bridge reset or preserve its Requester ID after the Secondary Bus Reset bit is Set? When software sets the Secondary Bus Reset bit in a Switch Downstream Port, the Downstream Port must not reset any of its own configuration settings, and it must transition the Link below it to the Hot Reset state, assuming the Link is not down. The description of the Secondary Bus Reset bit in Section 7.5.3.6 states "Port configuration registers must not be changed, except as required to update Port status."
SECTION 4.2.8 - In the PCIe Base Spec 2.0, Section 4.2.8, page 239, under the Key below the table, it states:
D Delay Symbol K28.5 (with appropriate disparity)
What exactly does the term 'appropriate disparity' mean in the above lines from the Spec? Appropriate disparity means that the D symbol must have the correct disparity for the specified sequence of symbols.
SECTION 6.1.4 - This question relates to MSI; more specifically, it also relates to the Conventional PCI 3.0 spec (on page 237), where it states that the Multiple Message Enable field (bits 6-4 of the Message Control register) defines the number of low-order message data bits the function is permitted to modify to generate its system-software-allocated vectors. Does this mean that the binary value of the LSBs of the message data specifies the vector number? Yes (up to a total of 5 bits). Also, to avoid confusion for the function, software sets to 0 each of the low-order message data bits that the function is permitted to modify to generate its system-software-allocated vectors.
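For illustration, a minimal sketch of how the per-vector Message Data value is formed under that rule (the function and parameter names are illustrative; mme is the Multiple Message Enable value):

```c
#include <stdint.h>

/* MME gives the number of low-order Message Data bits the function
   may modify, so the function may generate 2^mme vectors (mme <= 5).
   Software allocates base_data with those low bits zero, so ORing in
   the vector number yields the per-vector Message Data. */
uint16_t msi_message_data(uint16_t base_data, unsigned mme, unsigned vector)
{
    uint16_t mask = (uint16_t)((1u << mme) - 1); /* modifiable low bits */
    return (uint16_t)((base_data & ~mask) | (vector & mask));
}
```

With mme = 3, for example, the function may produce 8 vectors by ORing vector numbers 0 through 7 into the low bits of the allocated Message Data.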
SECTION 4.2.6.4.4 - Referring to section 4.2.6.4.4 (Recovery.Idle), our EP is implemented such that it sends Idle data once it enters Recovery.Idle. If the Hot Reset bit is asserted in two consecutive received TS1 Ordered Sets, we will move to the Hot Reset state. Will the RC respond to the Idle data that the EP sends out and falsely transition to the L0 state even though the RC is directed to enter Hot Reset? For this case, the LTSSM of the Downstream Port above the Endpoint is already in the Hot Reset state, since that is how it transmitted TS1 Ordered Sets with the Hot Reset bit asserted.
SECTION 4.2.6.5 - In Base Spec 2.1 on page 246 line 10, it states that "If directed" is defined as both ends of the Link having agreed to enter L1 etc. and then refers to Section 4.3.2.1, but there is no such section in the spec. Is there a section in the spec that provides more detail on this? The reference in the spec should be to Section 5.3.2.1, which provides more detail (note that this reference will be fixed through upcoming errata to the 2.1 spec).
SECTION 7.5.1.1 - We implement the Memory Space Enable and IO Space Enable bits in our Endpoint. If the Endpoint receives a Memory Write TLP when the Memory Space Enable bit is not set, how should the Endpoint handle this TLP? Also, if the Endpoint receives a Memory Write TLP whose data payload exceeds Max_Payload_Size when the Memory Space Enable bit is not set, how should the Endpoint handle this TLP? For the first case, the Endpoint must handle the Request as an Unsupported Request. For the second case, it is recommended that the Endpoint handle the Request as a Malformed TLP, but the Endpoint is permitted to handle the Request as an Unsupported Request.
SECTION 2.3.1 - What is the correct behavior if a read or write exceeds a BAR limit? For example, say a BAR is 128 bytes, and a read or write request to the address space mapped by the BAR is for a size larger than 128 bytes. What is the correct response from the device? It should be handled as an Unsupported Request.
SECTION 4.2.6.4.4 - Is the following lane setting valid: executing a downconfiguration from x4 to x2, with lane0=ACTIVE, lane1=INACTIVE, lane2=ACTIVE, lane3=INACTIVE? The active lanes must be consecutively numbered lanes starting with lane 0. Your example would configure as a x1 link.
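A small sketch of that rule: only lanes numbered consecutively from lane 0 count toward the negotiated width (the bitmap encoding here is illustrative, not from the spec):

```c
/* lane_mask bit i set = lane i is active. */
static unsigned negotiated_width(unsigned lane_mask)
{
    unsigned width = 0;
    while (lane_mask & (1u << width))
        width++;
    /* A real LTSSM would also round down to a supported link width
       (x1, x2, x4, x8, ...); omitted in this sketch. */
    return width;
}
/* Example: mask 0x5 (lanes 0 and 2 active, lane 1 inactive) -> 1,
   i.e. the x1 link described in the answer above. */
```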
SECTION 7.7 - Is a PCI Express Root Complex required to support MSI? All PCI Express device Functions (including Root Ports) that are capable of generating interrupts must implement MSI, MSI-X or both.
SECTION 4.2.6.2.1 -- This is in reference to the Polling.Active state as described in section 4.2.6.2.1 - "Next state is Polling.Configuration after at least 1024 TS1 Ordered Sets were transmitted, and all Lanes that detected a Receiver during Detect receive eight consecutive TS1 or TS2 Ordered Sets or their complement with both of the following conditions." We have a question relative to the statement "eight consecutive TS1 or TS2 Ordered Sets". Our understanding is that it means 8 consecutive TS1s or 8 consecutive TS2s, not a mixture of TS1s and TS2s. The transition to Polling.Configuration follows either 8 consecutive TS1s or 8 consecutive TS2s on all lanes that detected a receiver in Detect. Note that the intent of the spec is also to allow the 8 to be any mixture of 8 consecutive TS1s or TS2s for this particular case (not necessarily for other LTSSM transitions, however). Note also that PCIe 2.0 Errata item A42 (Polling.Active Substate) modifies this section (see Errata item A42 at www.pcisig.com/specifications/pciexpress/base2/).
SECTION 6.2.3.2.3 -- If a device encounters more than one error, will it log all the errors or only the most significant error (according to the precedence list)? It is recommended that only the highest-precedence error associated with a single TLP be reported. However, it is recognized that reasonable implementations may not be able to support the recommended precedence order, which is why this is recommended rather than required behavior.
SECTION 7.8.6 -- Relative to Bits 3:0 in Section 7.8.6 - Link Capabilities Register, Supported Link Speeds: is it OK for my device to report 0010b and support only 5GT/s (and not 2.5GT/s)? No. A device that supports 5GT/s must also be able to support and operate at 2.5GT/s.
SECTION 4.2.6.2.1 -- Device A has transmitters on 8 lanes. Device B has transmitters on 4 lanes. Both devices are connected via a link. During the Receiver Detection sequence in Detect.Active, Device A detects that Device B has receivers on 4 lanes. Does Device B detect that Device A has receivers on 8 lanes? A PCIe Link is symmetric, so each component has the same number of Transmitters as Receivers. Since Device B has transmitters on only 4 lanes, it also has receivers on 4 Lanes. Hence it would not be capable of detecting receivers on 8 lanes of Device A.
SECTION 4.2.6.2.1 -- During Polling.Active, should Device A transmit TS1s on 4 lanes while Device B transmits TS1s on 8 lanes? Or must TS1s be transmitted in both directions on the identical number of lanes? Since Device B has transmitters on only 4 lanes, it cannot transmit TS1s on more than 4 lanes. Device A will transmit TS1s on only the lanes where it detected receivers (and that is a maximum of 4 lanes).
SECTION 4.2.6.6.2.2 -- I have an LTSSM L0s question. Let's say we have an EP that has both its RX and TX in L0s - specifically Rx_L0s.Idle and Tx_L0s.Idle. Also assume the EP receives an EI exit, and then the receiver transitions from Rx_L0s.Idle to Rx_L0s.FTS. What should Tx_L0s.Idle transition to, or should it stay in the same state? The transmitter stays in Tx_L0s.Idle.
PCI Express - 3.0
What is PCI Express (PCIe) 3.0? What are the requirements for this evolution of the PCIe architecture? PCIe 3.0 is the next evolution of the ubiquitous and general-purpose PCI Express I/O standard. At an 8GT/s bit rate, the interconnect performance bandwidth is doubled over PCIe 2.0, while preserving compatibility with software and mechanical interfaces. The key requirement for evolving the PCIe architecture is to continue to provide performance scaling consistent with bandwidth demand from leading applications, at low cost, low power and with minimal perturbations at the platform level. One of the main factors in the wide adoption of the PCIe architecture is its compatibility with high-volume manufacturing materials and tolerances, such as FR4 boards, low-cost clock sources, connectors and so on. In providing full compatibility, the same topologies and channel reach as in PCIe 2.0 are supported for both client and server configurations. Another important requirement is the manufacturability of products using the most widely available silicon process technology. For the PCIe 3.0 architecture, PCI-SIG believes a 65nm process or better will be required to optimize silicon area and power.
What is the bit rate for PCIe 3.0 and how does it compare to prior generations of PCIe? The bit rate for PCIe 3.0 is 8GT/s. This bit rate represents the optimum tradeoff among manufacturability, cost, power and compatibility.
How does the PCIe 3.0 8GT/s "double" the PCIe 2.0 5GT/s bit rate? The PCIe 2.0 bit rate is specified at 5GT/s, but with the 20 percent overhead of the 8b/10b encoding scheme, the delivered bandwidth is actually 4Gbps per lane per direction. PCIe 3.0 removes the requirement for 8b/10b encoding and uses a more efficient 128b/130b encoding scheme instead. By removing this overhead, the interconnect bandwidth can be doubled to approximately 8Gbps per lane per direction with the implementation of the PCIe 3.0 specification. This bandwidth is the same as an interconnect running at 10GT/s with the 8b/10b encoding overhead. In this way, the PCIe 3.0 specifications deliver the same effective bandwidth, but without the prohibitive penalties associated with 10GT/s signaling, such as PHY design complexity and increased silicon die size and power. The following table summarizes the bit rate and approximate bandwidths for the various generations of the PCIe architecture; total bandwidth represents the aggregate interconnect bandwidth in both directions:

PCIe architecture   Raw bit rate   Interconnect bandwidth   Bandwidth per lane per direction   Total bandwidth for x16 link
PCIe 1.x            2.5GT/s        2Gbps                    ~250MB/s                           ~8GB/s
PCIe 2.x            5.0GT/s        4Gbps                    ~500MB/s                           ~16GB/s
PCIe 3.0            8.0GT/s        8Gbps                    ~1GB/s                             ~32GB/s
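The per-lane figures follow directly from the raw bit rate multiplied by the coding efficiency, as this minimal check shows:

```c
#include <stdio.h>

/* Per-lane, per-direction payload bandwidth in Gbps
   = raw rate (GT/s) x coding efficiency. */
int main(void)
{
    printf("PCIe 1.x: %.2f Gbps\n", 2.5 * 8.0 / 10.0);    /* 8b/10b    -> 2.00 */
    printf("PCIe 2.x: %.2f Gbps\n", 5.0 * 8.0 / 10.0);    /* 8b/10b    -> 4.00 */
    printf("PCIe 3.0: %.2f Gbps\n", 8.0 * 128.0 / 130.0); /* 128b/130b -> 7.88 */
    return 0;
}
```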
Do PCIe 3.0 specifications only deliver a signaling rate increase? The PCIe 3.0 specifications comprise the Base and the Card Electromechanical (CEM) specifications. There may be updates to other form factor specifications as the need arises. Within the Base specification, which defines a chip-to-chip interface, updates have been made to the electrical section to comprehend 8GT/s signaling. As the technology definition progresses through the PCI-SIG specification development process, additional ECNs and errata will be incorporated with each review cycle. For example, the current PCIe protocol extensions that address interconnect latency and other platform resource usage considerations have been rolled into the PCIe 3.0 specification revisions. The final PCIe 3.0 specification consolidates all ECNs and errata published since the release of the PCIe 2.1 specification, as well as interim errata.
Will PCIe 3.0 products be compatible with existing PCIe 1.x and PCIe 2.x products? PCI-SIG is proud of its long heritage of developing compatible architectures and its members have consistently produced compatible and interoperable products. In keeping with this tradition, the PCIe 3.0 architecture is fully compatible with prior generations of this technology, from software to clocking architecture to mechanical interfaces. That is to say, PCIe 1.x and 2.x cards will seamlessly plug into PCIe 3.0-capable slots and operate at their highest performance levels. Similarly, all PCIe 3.0 cards will plug into PCIe 1.x- and PCIe 2.x-capable slots and operate at the highest performance levels supported by those configurations. The following chart summarizes the interoperability between various generations of PCIe and the resultant interconnect performance level (the link trains to the highest rate that both the card and the slot support):

Slot / Card     PCIe 1.x card   PCIe 2.x card   PCIe 3.0 card
PCIe 1.x slot   2.5GT/s         2.5GT/s         2.5GT/s
PCIe 2.x slot   2.5GT/s         5.0GT/s         5.0GT/s
PCIe 3.0 slot   2.5GT/s         5.0GT/s         8.0GT/s
In short, the notion of the compatible highest performance level is modeled after the mathematical least common denominator (LCD) concept. Also, PCIe 3.0 products will need to support 8b/10b encoding when operating in a pre-PCIe 3.0 environment.
What are the PCIe protocol extensions, and how do they improve PCIe interconnect performance? The PCIe protocol extensions are primarily intended to improve interconnect latency, power and platform efficiency. These protocol extensions pave the way for better access to platform resources by various compute- and I/O-intensive applications as they interact with and through the PCIe interconnect hierarchy. Multiple protocol extensions and enhancements are being developed; they include data reuse hints, atomic operations, dynamic power adjustment mechanisms, loose transaction ordering, I/O page faults, BAR resizing and so on. Together, these protocol extensions will extend PCIe deployment leadership in emerging and future platform I/O usage models by enabling significant platform efficiencies and performance advantages.
When was the PCIe 3.0 specification made available? PCI-SIG released the PCIe 3.0 specification on November 17, 2010.
What is 8b/10b encoding? 8b/10b encoding is a byte-oriented coding scheme that maps each byte of data into a 10-bit symbol. It guarantees a deterministic DC wander and a minimum edge density over a per-bit time continuum. These two characteristics permit AC coupling and a relaxed clock data recovery implementation. Since each byte of data is encoded as a 10-bit quantity, this encoding scheme guarantees that in a multi-lane system, there are no bubbles introduced in the lane striping process.
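The bounded-DC-wander property can be illustrated by tracking running disparity: every valid 10-bit 8b/10b symbol carries four, five or six 1s, so the imbalance between 1s and 0s never grows without bound. A sketch (using the GCC/Clang popcount builtin):

```c
/* Running disparity (RD) is +/-1 between symbols. A symbol with six 1s
   (disparity +2) drives RD positive, one with four 1s (disparity -2)
   drives it negative, and a neutral symbol (five 1s) leaves it unchanged. */
static int update_running_disparity(int rd, unsigned sym10)
{
    int ones = __builtin_popcount(sym10 & 0x3FF); /* low 10 bits only */
    if (ones == 6) return +1;
    if (ones == 4) return -1;
    return rd;
}
```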
What is scrambling? How does scrambling impact the PCIe 3.0 architecture? Scrambling is a technique where a known binary polynomial is applied to a data stream in a feedback topology. Because the scrambling polynomial is known, the data can be recovered by running it through a feedback topology using the inverse polynomial. Scrambling affects the PCIe architecture at two levels: the PHY layer and the protocol layer immediately above the PHY. At the PHY layer, scrambling introduces more DC wander than an encoding scheme such as 8b/10b; therefore, the Rx circuit must either tolerate the DC wander as margin degradation or implement a DC wander correction capability. Scrambling does not guarantee a transition density over a small number of unit intervals, only over a large number. The Rx clock data recovery circuitry must be designed to remain locked to the relative position of the last data edge in the absence of subsequent edges. At the protocol layer, an encoding scheme such as 8b/10b provides out-of-band control characters that are used to identify the start and end of packets. Without an encoding scheme (i.e., scrambling only) no such characters exist, so an alternative means of delineating the start and end of packets is required. Usually this takes the form of packet length counters in the Tx and Rx and the use of escape sequences. The choice for the scrambling polynomial is currently under study.
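As an illustration of the feedback topology described above, here is a sketch of a side-stream (additive) scrambler built on the 16-bit LFSR polynomial G(X) = X^16 + X^5 + X^4 + X^3 + 1 that PCIe 1.x/2.x pair with 8b/10b; the PCIe 3.0 polynomial was still under study when this FAQ was written. Details such as which characters are scrambled and when the LFSR resets are omitted here; descrambling is the identical operation with the same seed:

```c
#include <stdint.h>

static uint16_t lfsr = 0xFFFF;  /* seed value; reset per the spec's rules */

/* Advance the Galois LFSR one shift per bit and XOR its output
   (the X^16 tap) with each data bit, LSB first. */
uint8_t scramble_byte(uint8_t data)
{
    uint8_t out = 0;
    for (int i = 0; i < 8; i++) {
        unsigned fb = (lfsr >> 15) & 1;                    /* X^16 tap */
        out |= (uint8_t)((((data >> i) & 1) ^ fb) << i);
        lfsr = (uint16_t)(lfsr << 1);
        if (fb)
            lfsr ^= 0x0039;  /* feed back into the X^5, X^4, X^3, X^0 terms */
    }
    return out;
}
```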
What is equalization? How is Tx equalization different from Rx equalization? What is trainable equalization? Equalization is a method of distorting the data signal with a transform representing an approximate inverse of the channel response. It may be applied at the Tx, the Rx, or both. A simple form of equalization is Tx de-emphasis as specified in PCIe 1.x and PCIe 2.x, where data is sent at full swing after each polarity transition and is sent at reduced swing for all bits of the same polarity thereafter. De-emphasis reduces the low-frequency energy seen by the Rx; since channels exhibit greater loss at high frequencies, equalization compensates for this loss. Equalization may also be used to compensate for ripples in the channel that occur due to reflections from impedance discontinuities such as vias or connectors. Equalization may be implemented using various types of algorithms; the two most common are linear (LE) and decision feedback (DFE). Linear equalization may be implemented at the Tx or the Rx, while DFE is implemented at the Rx. Trainable equalization refers to the ability to adjust the tap coefficients. Each combination of Tx, channel, and Rx will have a unique set of coefficients yielding an optimum signal-to-noise ratio. The training sequence consists of adjustments to the tap coefficients while applying a quality metric to minimize the error. The choice for the type of equalization to require in the next revision of the PCIe specifications depends largely on the interconnect channel optimizations that can be derived at the lowest cost point. It is the intent of PCI-SIG to deliver the optimum combination of channel and silicon enhancements at the lowest cost for the most common topologies.
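As an example of the simple Tx equalization mentioned above, here is a behavioral sketch of de-emphasis: full swing on the bit following a polarity transition, reduced swing for subsequent bits of the same polarity. The -3.5 dB level is one of the de-emphasis values PCIe specifies; the function shape is illustrative:

```c
#include <math.h>

/* Return the normalized Tx output level for the current bit (0 or 1),
   given the previous bit. 10^(-3.5/20) ~= 0.668 of full swing. */
double tx_level(int bit, int prev_bit)
{
    double full    = 1.0;
    double deemph  = full * pow(10.0, -3.5 / 20.0);  /* reduced swing */
    double swing   = (bit != prev_bit) ? full : deemph;
    return bit ? +swing : -swing;
}
```

Because the de-emphasized bits carry less low-frequency energy, the spectrum of the transmitted signal is pre-shaped to offset the channel's high-frequency loss, which is exactly the "approximate inverse of the channel response" idea in the answer above.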