How PCIe interface developments have changed and advanced

August 27, 2021

By Ed Cady, contributing editor

Peripheral component interconnect express, or PCIe, has become the interface standard for connecting high-speed components, such as graphics, memory, and storage.

Typically, PCIe interface signaling and physical layer implementations rely on printed circuit board assemblies (PCBAs) and edge connectors. Several standardized and non-standardized cabling solutions are available that support a variety of applications and related topologies.

Older interface standards (such as ISA, EISA, PCI, or PCI-X) often made use of internal flat cables and edge connectors from previous generations.

Several of the early applications included internally cabled power distribution options, usually combined within the IO ribbon cable legs.

Now, next-generation PCIe CEM edge connectors are available, offering higher-speed performance. Newer standards, such as CXL 1.0, also use the latest PCIe CEM connector. The latest internal cables for PCIe 6.0 and PCIe 7.0 applications are expected to hit the market by or before 2022.

The PCIe 1.0 from Samtec (circa 2004), which offered 2.5GT/s NRZ per lane, a 4x internal edge connector, paddleboards, and an extruded ribbon flat-cable assembly with power jumpers and a 10G bus extender link.

The earliest PCIe 1.0 applications included production testbeds for motherboards, add-in boards, server chassis, and backplane extenders. The wire termination was becoming increasingly important. 

The PCIe 2.0 from Meritec (circa 2007), which offered 5GT/s per lane and 8x internal cables attached to paddleboards with PCIe CEM standard edge connectors, typically making a full bus extender in a loosely bundled flat-to-oval cable assembly with a 40G link. At the time, new and better twin-axial cables supported longer link reaches.


The PCIe 2.0 applications included several embedded computer planar cables for cPCI and ATCA interconnects, in various form factors and topologies. The well-shielded designs typically supported external flat and round full-bus, twin-axial cable applications.

A few design options for a ribbon, twin-axial internal PCIe 2.0 cable (Meritec, 2007).

 

The PCIe 3.0 from Corsair (circa 2021), which offered 8GT/s NRZ per lane in an x16, 300mm internal chassis-extension, five-legged cable harness. This was ideal for industrial reliability and included an advanced airflow profile and a 128GT link.
This PCIe 3.0's internal backplane extender cable uses a CEM connector and latch interface. It is shielded and available with different mounting and latching options.
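The aggregate figures quoted in these captions (a 10G x4 link at 2.5GT/s, a 40G x8 link at 5GT/s, a 128GT x16 link at 8GT/s, and the 256G x16 PCIe 4.0 link described below) follow from multiplying the per-lane transfer rate by the lane count. A minimal sketch of that arithmetic in Python, ignoring encoding and protocol overhead:

# Raw aggregate transfer rate per direction, ignoring encoding/protocol overhead.
def aggregate_rate_gt(per_lane_gt_s: float, lanes: int) -> float:
    return per_lane_gt_s * lanes

# Per-lane rates (GT/s) by PCIe generation, as cited in this article.
per_lane = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0, "4.0": 16.0,
            "5.0": 32.0, "6.0": 64.0, "7.0": 128.0}

# The lane counts quoted in the article: x4 (Gen1), x8 (Gen2), x16 (Gen3/Gen4).
for gen, lanes in (("1.0", 4), ("2.0", 8), ("3.0", 16), ("4.0", 16)):
    print(f"PCIe {gen} x{lanes}: {aggregate_rate_gt(per_lane[gen], lanes):g}G raw")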

The PCIe 3.0 internal-cabling solutions brought several choices with additional applications, beyond embedded bus extenders and test adapters. These devices offered better crosstalk control and interconnect options.

For example, internal flat-foil shielded, twin-axial cables enabled advanced interconnect solutions, usually inside server and storage boxes. Application-specific foldable solutions also became an option, which extended into the PCIe 4.0 inside-the-box applications.

It's worth noting that flat-foil shielded cables are still used for many 16GT/s, 25Gb/s, and 32GT/s NRZ per-lane applications.

3M's internal flat-foil shielded, twin-axial cables (circa 2014).

It did not take long to realize that most tightly bent, foil-shielded, twin-axial cable assemblies failed to meet performance requirements, particularly at 56G PAM4 or 112G PAM4 per-lane rates or higher. This was because of the link-budget limitation at each cable crease or fold (which consumed 0.5dB or more).

The straight, internal cable assemblies almost always performed better.
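A rough way to see why: each crease subtracts from the end-to-end insertion-loss budget. Here is a minimal sketch assuming a hypothetical 16dB channel budget and a hypothetical 10dB of straight-cable loss; only the 0.5dB-per-fold penalty comes from the observation above.

# Hypothetical budget and cable-loss numbers for illustration only;
# the 0.5 dB per-fold penalty is the figure cited in the text.
CHANNEL_BUDGET_DB = 16.0   # assumed total insertion-loss budget
STRAIGHT_LOSS_DB = 10.0    # assumed loss of the straight cable run
LOSS_PER_FOLD_DB = 0.5     # per-crease penalty from the article

def remaining_margin_db(folds: int) -> float:
    """Insertion-loss margin left after the cable run and its folds."""
    return CHANNEL_BUDGET_DB - STRAIGHT_LOSS_DB - folds * LOSS_PER_FOLD_DB

for folds in (0, 4, 8, 12):
    print(f"{folds} folds -> {remaining_margin_db(folds):.1f} dB margin")

With these assumed numbers, a dozen folds erase the entire 6dB of margin, which is one way to see why straight assemblies tend to hold up better at 56G and 112G PAM4 per-lane rates.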

The PCIe 4.0 from ADT-Link (circa 2017), which included 16GT/s NRZ per lane and an x16 internal cable with a CEM connector for use with 256G links. The finer-pitch ribbon cables used here required an advanced wire-termination design, particularly for the ground-wire termination SI design and for the symmetrical structures and processes.

The PCIe 4.0 facilitated better and faster automated optical inspection for in-line production test equipment. Manufacturers often used clear polymer materials in the design, such as silicone.

Another example of the PCIe 4.0 from ADT-Link, designed with silicone.
The PCIe 5.0 from Amphenol (circa 2019), which included 32GT/s NRZ per lane and a CEM SMT edge connector.

The PCIe 5.0 x16 to GenZ 4C 1.1 connector adapter cable assemblies now offer several power-delivery options and are compatible with the PCIe CEM r5.0 32GT/s NRZ per-lane edge connectors.

The five RSVD pins support the flex bus system, and 12V and 48V power options are specified for internal cables.

The GenZ SFF TA 1002’s smaller form factor is ideal for reliability at 56G PAM4 or 112G PAM4 per lane.

A smaller SFF TA 1002 cabled to a paddleboard SMT PCIe CEM cable connector (circa 2019). It shows the fold bend-radius diameter achieved using Optimax Twinax.

The PCIe 6.0 64GT/s PAM4 per-lane specification is nearly complete and scheduled for release in 2021. Many new internal cable assemblies, connector applications, and products are currently in development, including the PCIe 6.0 CEM x16 connector and several M.2 connector adapter cables and harnesses.

It will be interesting to see if the SFF TA 1002 x32 connector or another type will become the next PCIe 7.0 CEM for the latest PCIe internal cable design standard.

Today's advances require even smaller form-factor packaging with tighter routing requirements inside the box. The cables and connectors must also handle high-temperature interiors without damage or reduced performance. High speed is also a must.

In fact, several internal high-speed IO cable assemblies are now designed for double-generation performance capabilities, such as for 53 and 106G, 56 and 112G, or 128 and 224G. 

Demands for 8, 16, and 32-lane link options are also expected to increase, especially for internal pluggable connector cables with an SFF TA 1002 on one end.

High-performance expectations are being met by the new Twinax flat, raw cable, which supports PCIe 6.0 64GT and potentially PCIe 7.0 128GT, as well as external 56/112G per-lane DAC applications. 

Luxshare Tech's new Optimax Twinax cable has proven capable of accurate and stable SI performance when folded and in active bending applications. Its test results exceed the requirements of many corporate testing regimes.

The simulation models closely match physical measurements of real-life performance across several testing regimes. This is a notable accomplishment, as the industry has struggled with faulty 100G signal modeling, simulation reliability, and real-life performance.

Such raw cable performance allows for tighter routings for inside-the-box cable assemblies and smaller form factors, which are ideal for PECL, EDSFF, OCP NIC, Ruler, and others.

A few features that seem to be making a difference in cable performance include:

  • Improved conductor and dielectric insulation materials
  • More symmetric designs
  • Stringently controlled tolerances
  • Better process control
  • Inline SI full testing
  • Active optical inspection and histograms
  • Multiple testing regimes per application sets (using Telcordia, TIA/EIA, ISO, and Tier 1 user labs)

Here, an optimal wrap is used to protect the cable signals right to their termination points. The dielectric insulation also offers symmetrical memory. (Luxshare-Technologies, 2021)

The Optimax Twinax family set includes:

1. A bending radius down to 2x cable OD with minimal SI degradation (see the sketch after this list)
2. 33 to 24 AWG Twinax, with 26, 30, and 40GHz bandwidth options for 112G+
3. 16, 25, and 32Gb/s NRZ rates, as well as 56, 62, and 112Gb/s PAM4
4. Impedance options including 85, 90, 92, 95, 100, and 104 Ohms
5. Several drain and pair counts, as well as a single pair or laminated types
6. Various temperature rated raw cable types, such as 85 or 105 degrees Celsius
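Item 1 above amounts to a simple routing rule of thumb: the minimum bend radius is twice the cable's outer diameter. A small sketch of that check, using a hypothetical 3mm OD (the OD value is an assumption, not taken from the list):

# Minimum bend radius rule from item 1: 2x the cable outer diameter (OD).
MIN_BEND_RADIUS_FACTOR = 2.0

def min_bend_radius_mm(cable_od_mm: float) -> float:
    return MIN_BEND_RADIUS_FACTOR * cable_od_mm

def bend_ok(requested_radius_mm: float, cable_od_mm: float) -> bool:
    """True if a routing bend respects the 2x-OD minimum."""
    return requested_radius_mm >= min_bend_radius_mm(cable_od_mm)

od = 3.0  # hypothetical outer diameter in mm
print(f"Minimum bend radius for a {od} mm OD cable: {min_bend_radius_mm(od)} mm")
print("6 mm bend OK:", bend_ok(6.0, od))   # True, right at the limit
print("4 mm bend OK:", bend_ok(4.0, od))   # False, too tight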

The PCIe 7.0 128GT/s PAM4 per-lane internal cable solutions are likely to include inside-the-box optical interconnect options, such as COBO OBO or different CPO types.

There's a good chance that Optimax Twinax copper internal cable could support PCIe 7.0 128GT/s and 128G PAM4 short reaches inside the box, including rack applications. But we'll have to wait and see.

A few observations…
The use of higher-speed signaling and wider (16- and 32-lane) IO PHY interfaces will greatly increase the requirement for circuits with greater power and control. With its smaller footprint, the GenZ internal interconnect system supporting 256-lane interfaces will likely provide an ideal option for hyperscaler data center systems, as PCIe currently supports only 128 lanes.

Successful, higher-volume internal cable manufacturing will also require a faster production ramp, similar to the consumer high-speed cabling methodologies already largely in place, along with rigorous quality control.

Currently, the CXL accelerator link uses the latest PCIe CEM connector revisions. This CXL link is an internal connector and cabling application. GenZ has an agreement with the CXL Consortium for an external link interface for inter-rack topologies. But will CXL developers also use SFF TA 1002 connectors and cables, or other types, to achieve PCIe 7.0 128GT/s per-lane performance?

It will be interesting to find out. 

 
