San Francisco — At last year’s OFC 2024, 1.6T, the next speed grade for optical connections, was all talk, with one exception. In a back room, Keysight demonstrated a 1.6T optical link between an arbitrary waveform generator and a bit-error-rate tester. The demonstration transmitted raw, unstructured bits. One thing was clear: AI was going to drive engineers to develop 1.6T optics that carry structured bits such as Ethernet frames. Fast forward to 2025, and many companies exhibited 1.6T pluggable optical modules.
The people who design, operate, and maintain telecom and data center networks must deliver data to consumers, businesses, and governments. AI is stressing these networks and will continue to do so, pushing the need for ever-higher data rates per connection. The computing behind AI is also driving electricity demand ever higher. The energy needed per bit keeps dropping, but traffic is growing fast enough to more than offset any per-bit savings.
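A rough, illustrative calculation (the numbers here are hypothetical, not figures cited at the show) makes the point. Total power scales as energy per bit times aggregate traffic:

P_{\text{total}} = E_{\text{bit}} \times R

If E_{\text{bit}} improves 30% per generation while traffic R doubles, P_{\text{total}} still grows by a factor of 2 \times 0.7 = 1.4, so power rises about 40% despite the per-bit gains.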
Although 1.6T hype was in full swing and products were on display, most networks are just now moving from 200G to 400G. 800G is ready for deployment, but it will take a few more years to ramp up. Standards for 1.6T are still a few years out, as organizations such as the Ethernet Alliance and OIF are just now demonstrating 800G interoperability. I expect 1.6T demonstrations at OFC 2026.
Today, we don’t know how many lanes will make up a 1.6T connection. Currently, 224G is the top speed for data traveling over copper. 448G is in the works, but new, as-yet-unforeseen problems must be solved first.
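The arithmetic behind the open question is simple; what’s undecided is which split the electrical lanes will support (the lane counts below are illustrative options, not a settled standard):

1.6\,\text{Tb/s} = 16 \times 100\,\text{Gb/s} = 8 \times 200\,\text{Gb/s} = 4 \times 400\,\text{Gb/s}

Today’s 224G-class serial lanes point to an eight-lane electrical interface; 448G lanes, if and when they arrive, would cut that to four.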
Other technologies aimed at today’s bandwidth limitations include co-packaged optics, where optical engines mount on the network switch PCB rather than residing in plug-in modules. This shortens the copper distance between the electrical-to-optical interface and the ASICs on the board. Unfortunately, it makes installation and maintenance harder, given the lack of pluggable optical modules.
(l-r) Alan Weckel, 650 Group; Craig Thompson, NVIDIA; Don Barnetson, Credo; Nathan Tracy, OIF and TE Connectivity; Josef Berger, Marvell.
“Front pluggable modules are not going away anytime soon,” said Credo’s Don Barnetson at a press-analyst panel during OFC 2025. “The front pluggable has a lot of utility.” Indeed, Barnetson also spoke of active electrical cables (AECs) even replacing optical cables in some applications. That’s because as compute density increases, the distances between processors within a rack shrink, and copper cabling can handle the data rates over those shorter reaches. “For the first time, we’re actually converting optics back into copper because it turns out it’s a better solution. It’s more reliable, it burns less power, and it costs less than optical.”
“How are we going to tie all these GPUs together and interconnect them with those switches?” asked OIF president Nathan Tracy of TE Connectivity. “We’re doing that with cable backplanes.” Tracy went on to explain how TE delivers thousands of kilometers of cable backplanes to a single customer. Data rates run at 50 Gb/sec and 100 Gb/sec with forecasts going to 100,000 km of aggregate copper backplanes running at 200 Gb/sec. Tracy also noted that the industry is looking to deliver 400 Gb/sec on a single differential pair.
On the optical side, OIF has developed 400ZR optics, which data centers have been installing. “We’ve broken through the barrier to 800 gig and [are] now breaking through the barrier to 1.6T interoperable coherent modules,” Tracy said.
“GPU bandwidth is doubling every two years,” said NVIDIA’s Craig Thompson. “We’ve seen bandwidth across a single lane double in roughly the same period for the last few generations and I don’t really see it slowing down yet. The ride will continue for at least the next few years and it will be a really exciting time. We need the ecosystem to come along with us. We need innovation, startups, and investment. We need bigger networks.”
With AI driving the need for bigger and faster networks, where will the power come from and how will it reach the GPUs? On the grid side, there’s talk of building new nuclear power plants or tapping other sources of electricity because the existing grid won’t be able to support all of the additional demand.
Once AC mains power reaches a data rack, it must be converted to DC and distributed to the equipment in the rack. Tracy noted that the solid copper busbars that carry power through a rack now need liquid cooling running through them to push more power into the rack. Then there’s board-level power delivery, which must supply hundreds and even low thousands of amps to a single GPU.
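A back-of-the-envelope example shows why the board-level currents climb so high (the wattage and rail voltage here are hypothetical, chosen only for illustration):

I = \frac{P}{V} = \frac{1000\,\text{W}}{0.8\,\text{V}} = 1250\,\text{A}

A kilowatt-class GPU fed from a sub-volt core rail needs well over a thousand amps, which is why liquid-cooled busbars and beefier board-level power delivery keep coming up.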
“It’s all about the power,” said Josef Berger of Marvell. “Power delivery to the rack, power delivery to the aisle, to each site, and then to each region. A lot of buildings are not limited by anything other than power delivery into that area or that site. To [scale] these clusters to not just tens of thousands or hundreds of thousands but the millions requires multi-site AI clusters that then need super high-speed, low-power connectivity between all those different sites.”
The need for better use of power is pushing engineers to look beyond just the optics and electronics to the network itself. “We’ve been working like crazy to take power out of the network for the last 15 years, ever since we started moving to coherent [optics],” said Nokia’s James Watt at the company’s press and customer briefing. “The power constraints and the impacts of power in the AI world are not just on the equipment; they’re on architecture and latency requirements. The power equation must be solved.”
1.6T is already too slow
A slide from NVIDIA at OFC 2025 shows that the coming move from 1.6T to 3.2T will likely bring more power and cooling issues.
Now that 1.6T optical modules are commonplace on the show floor, it’s time to talk 3.2T. While 3.2T is still years away, given that 1.6T has yet to reach deployment, the discussions are already happening. The slide from NVIDIA’s Ashkan Seyedi, presented at one of the exhibit-hall theaters, argues that today’s optical module form factors won’t be able to dissipate the heat sufficiently once 3.2T becomes real.
That’s one reason we’re seeing linear pluggable optics (LPO), also called linear-drive optics. Conventional modules include a DSP, but DSPs consume power and generate heat. Eliminating the DSP from the module and letting the host equipment’s SerDes handle the signal conditioning removes that heat generator from the module, reducing its need for cooling. With the signal processing in the host, the optical module essentially runs in an “analog,” or linear, mode.
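As a rough, hypothetical comparison (these wattages are illustrative, not vendor specifications), if the DSP accounts for roughly half of a conventional module’s power draw, pulling it out cuts the heat the module itself must shed nearly in half:

P_{\text{LPO}} \approx P_{\text{module}} - P_{\text{DSP}} \approx 25\,\text{W} - 12\,\text{W} = 13\,\text{W}

Some of the equalization burden shifts to the host-side SerDes, but removing the DSP from the cramped module faceplate is what eases the cooling problem.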