Co-Packaged Optics (CPO) Book – Scaling with Light for the Next Wave of Interconnect
Deep Dives
Explore related topics with these Wikipedia articles, rewritten for enjoyable reading:
- Mach–Zehnder interferometer: The article discusses Mach–Zehnder Modulators (MZMs) as a key modulator type in CPO systems. Understanding the underlying interferometer physics explains how these devices split and recombine light to encode data, which is fundamental to how optical signals are generated in these systems (see the sketch after this list).
- Wavelength-division multiplexing: WDM is explicitly mentioned as one of the key vectors for scaling bandwidth in CPO systems. This technology allows multiple data streams to share a single optical fiber by using different wavelengths of light, and understanding its physics and engineering trade-offs is essential to grasping why CPO can scale bandwidth more effectively than copper.
- SerDes: SerDes technology is discussed extensively throughout the article, both as a current scaling limitation and as a key component of copper and optical interconnects alike. The article covers SerDes scaling limits and alternatives such as Wide I/O, making a clear understanding of how SerDes converts parallel data to serial transmission highly relevant.
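As a quick illustration of the interferometer physics behind an MZM, the standard transfer function of an ideal, lossless, balanced Mach–Zehnder modulator relates output optical power to the phase difference applied between its two arms; data is encoded by driving that phase difference with the electrical signal. This is the textbook relation rather than anything quoted in the article itself:

```latex
% Ideal, lossless, balanced Mach–Zehnder modulator transfer function.
% Input light is split equally between two arms, a voltage-controlled
% phase shift \Delta\phi(V) is applied to one arm, and the arms recombine.
\[
  \frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}
  = \cos^{2}\!\left(\frac{\Delta\phi(V)}{2}\right),
  \qquad
  \Delta\phi(V) = \pi\,\frac{V}{V_{\pi}},
\]
% where V_{\pi} is the drive voltage needed to swing the modulator from
% full transmission (\Delta\phi = 0) to full extinction (\Delta\phi = \pi).
```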
Co-Packaged Optics (CPO) has long promised to transform datacenter connectivity, but the technology has taken years to reach the market, with tangible, deployment-ready products only arriving in 2025. In the meantime, pluggable transceivers have kept pace with networking requirements and remain the default path thanks to their relative cost-effectiveness, familiarity in deployment, and standards-based interoperability.
However, the heavy networking demands that come with AI workloads mean that this time is different. The AI networking bandwidth roadmap is such that interconnect speed, range, density and reliability requirements will soon outpace what transceivers can provide. CPO will bring some benefit and more options to scale-out networking, but it will be central to scale-up networking, where it will be the main driver of bandwidth increases for the latter part of this decade and beyond.
Today’s copper-based scale-up solutions, such as NVLink, already provide tremendous bandwidth: 7.2 Tbit/s per GPU, soon to be 14.4 Tbit/s per GPU in the Rubin generation. Yet copper-based links are limited in range to two meters at most, which caps the scale-up domain at one or two racks, and it is increasingly difficult to keep scaling bandwidth over copper. In Rubin, Nvidia will deliver another doubling of bandwidth per copper lane through bi-directional SerDes, but scaling copper by developing ever-faster SerDes is a slow and highly challenging grind. CPO can deliver the same or better bandwidth density, provides additional vectors for scaling bandwidth, and enables larger scale-up domains.
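To make the scaling-vector argument concrete, the sketch below tallies per-GPU, per-direction bandwidth as lanes × per-lane rate for copper, and as fibers × wavelengths × per-wavelength rate for an optical link. The lane, fiber and wavelength counts are purely illustrative assumptions chosen to reproduce the article's headline figures, not disclosed NVLink or CPO configurations.

```python
def copper_bandwidth_tbps(lanes: int, gbps_per_lane: float) -> float:
    """Per-direction copper bandwidth: the practical scaling knobs are
    lane count (limited by beachfront and board area) and per-lane rate
    (limited by SerDes technology)."""
    return lanes * gbps_per_lane / 1000.0

def optical_bandwidth_tbps(fibers: int, wavelengths: int, gbps_per_lambda: float) -> float:
    """Per-direction optical bandwidth: optics adds wavelength count (WDM)
    and fiber count as extra multiplicative scaling vectors on top of the
    per-lane rate."""
    return fibers * wavelengths * gbps_per_lambda / 1000.0

# Illustrative copper configuration: 36 lanes at 200 Gbit/s reproduces the
# article's 7.2 Tbit/s per GPU figure (the lane split is an assumption).
print(copper_bandwidth_tbps(lanes=36, gbps_per_lane=200))   # 7.2

# Doubling the per-lane rate (the hard, slow vector) reaches 14.4 Tbit/s.
print(copper_bandwidth_tbps(lanes=36, gbps_per_lane=400))   # 14.4

# A hypothetical optical engine: the same 200 Gbit/s per lane, but with
# 8 wavelengths per fiber across 9 fibers, also reaches 14.4 Tbit/s
# without requiring a faster SerDes generation.
print(optical_bandwidth_tbps(fibers=9, wavelengths=8, gbps_per_lambda=200))  # 14.4
```

The point is not the particular numbers but the extra multiplicative terms: optics can grow bandwidth through wavelengths and fiber count as well as per-lane rate, rather than relying solely on faster SerDes.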
A starting point for understanding the impetus for CPO is to consider the many inefficiencies and trade-offs involved in using a transceiver for optical communication. Transceivers can be used to achieve greater link range, but the cage on the front panel of a networking switch or compute tray that a transceiver plugs into is typically situated 15–30 cm from the XPU or switch ASIC. This means the signal must first be driven electrically over that 15–30 cm by a long-reach (LR) SerDes, then recovered and conditioned by a Digital Signal Processor (DSP) inside the transceiver before being converted into an optical signal. With CPO, optical engines are instead placed next to the XPU or switch ASIC, meaning the DSP can be eliminated and lower-power SerDes can be used to move data from the XPU to the optical engine. ...
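A rough per-bit power tally makes this trade-off clearer. The figures below are illustrative assumptions, not measurements from the article; what matters is the structure of the comparison, dropping the DSP stage and replacing the long-reach SerDes with a short-reach one.

```python
# Hypothetical energy-per-bit budgets (pJ/bit). The individual numbers are
# assumptions for illustration only; the article does not quote them.
PLUGGABLE = {
    "LR SerDes (host ASIC -> front-panel cage)": 5.0,
    "DSP recover/condition in transceiver":      7.0,
    "optical TX/RX (laser, modulator, receiver)": 6.0,
}

CPO = {
    "short-reach SerDes (ASIC -> optical engine)": 1.5,
    "optical TX/RX (laser, modulator, receiver)":  6.0,
    # No DSP stage: the optical engine sits next to the ASIC, so the signal
    # does not need to be recovered after a lossy 15-30 cm board trace.
}

def total_pj_per_bit(budget: dict[str, float]) -> float:
    """Sum the per-stage energy costs for one direction of the link."""
    return sum(budget.values())

for name, budget in (("pluggable", PLUGGABLE), ("CPO", CPO)):
    print(f"{name:9s}: {total_pj_per_bit(budget):.1f} pJ/bit")
```

Even with generous uncertainty on each line item, most of the per-bit savings in a comparison like this come from removing the DSP and shortening the electrical reach the SerDes must drive.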
This excerpt is provided for preview purposes. Full article content is available on the original publication.