Co-Packaged Optics: How Silicon Photonics Integrated at the Chip Package Level Is Rewriting Data Centre Power Economics

The most significant technological transition in data centre networking infrastructure since the invention of the Ethernet switch is underway, and most of the technology industry has not yet appreciated its scale. Co-Packaged Optics — the integration of optical engines directly with switch and router ASIC dies in the same package — is not an incremental improvement on pluggable optical modules. It is a fundamental rearchitecting of how data centre networking silicon interfaces with the physical world, with consequences for data centre power consumption, bandwidth density, and economics that will shape the industry for the next decade.

From Pluggable to Co-Packaged: The Architecture Change

For the past twenty years, data centre optical connectivity has followed a standardised architecture: the switch ASIC drives a standardised electrical interface (currently CAUI-8 or similar) out to a pluggable optical module seated in a front-panel cage, where the transceiver converts the electrical signals to optical and launches them onto fibre. This pluggable-optics architecture has served the industry well, enabling vendor interoperability and simple field replacement.

Its fundamental limitation is the electrical interface between the ASIC and the optical engine. Driving high-speed SerDes lanes from the ASIC die across the package substrate and PCB traces to a pluggable module at 100G per lane requires signal amplification, equalisation, and retiming that consumes roughly 5-15W per 400G port, representing 25-30% of the total switch system power budget. A 51.2T switch carries 512 such lanes (128 × 400G ports), so this electrical interconnect overhead alone amounts to roughly 0.6-2kW per switch, and across the tens of thousands of switches in a large data centre fleet it scales into the tens of megawatts.
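The per-switch arithmetic is worth making explicit. A minimal sketch, treating the article's 5-15W-per-port range as an assumption rather than a measured figure:

```python
# Interconnect power overhead for a 51.2T switch built from 400G ports.
# The watt ranges are the article's estimates, not vendor measurements.
PORTS = 128                      # 128 x 400G ports = 51.2T
PLUGGABLE_W_PER_PORT = (5, 15)   # electrical drive/EQ/retime power, pluggable
CPO_W_PER_PORT = (1, 3)          # same function with co-packaged optics

plug_low, plug_high = (w * PORTS / 1_000 for w in PLUGGABLE_W_PER_PORT)
cpo_low, cpo_high = (w * PORTS / 1_000 for w in CPO_W_PER_PORT)

print(f"Pluggable interconnect overhead: {plug_low:.2f}-{plug_high:.2f} kW/switch")
print(f"CPO interconnect overhead:       {cpo_low:.2f}-{cpo_high:.2f} kW/switch")
```

At both ends of the range the CPO figure is about one fifth of the pluggable figure, which is where the 80% reduction quoted below comes from.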

CPO eliminates this electrical interface entirely. The optical engine, a silicon photonics PIC that generates, modulates, and detects optical signals, is co-packaged with the switch ASIC and connected by ultra-short electrical traces measured in millimetres rather than centimetres or metres. At those distances the signal needs no amplification or retiming. Power consumption drops to 1-3W per 400G port, an 80% reduction. Bandwidth density increases as well, because the optical engine occupies space inside the package rather than external front-panel cage space.

The Commercial Rollout

CPO has transitioned from research concept to commercial product in the period 2023-2026. Intel's Optical I/O chiplet — a CPO optical engine based on their silicon photonics process — began sampling to hyperscaler customers in 2024. Broadcom's 51.2T Tomahawk 5 switch ASIC is available in CPO configurations for OEM design-ins at Microsoft and Google. Marvell's Brightlane CPO platform ships in configurations supporting 1.6T per port. NVIDIA's Quantum-X800 InfiniBand switch integrates CPO to achieve 51.2T with dramatically lower power than pluggable equivalents.

The economics are compelling at hyperscaler scale. A hyperscaler deploying 50,000 51.2T switches, the spine layer of a major AI training cluster, saves over 10MW of power by choosing CPO over pluggable optics, worth roughly $5M per year in electricity at $0.05/kWh before cooling overhead. Add the eliminated pluggable-module purchases and the reduced cooling load, and the total cost of ownership advantage of CPO over pluggable at this scale can exceed $500M over a five-year depreciation period. This is not a marginal technology preference; it is a capital allocation decision worth hundreds of millions of dollars.
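The electricity figure can be sanity-checked directly. A minimal sketch, using a deliberately conservative 2W-per-port saving (the low end of the pluggable range minus the high end of the CPO range), not a vendor number:

```python
# Back-of-envelope deployment economics for the scenario above.
# All inputs are illustrative assumptions from the article's ranges.
SWITCHES = 50_000
PORTS_PER_SWITCH = 128           # 128 x 400G = 51.2T
SAVING_W_PER_PORT = 2            # conservative pluggable-minus-CPO delta
TARIFF_USD_PER_KWH = 0.05
HOURS_PER_YEAR = 8_760

power_saved_mw = SWITCHES * PORTS_PER_SWITCH * SAVING_W_PER_PORT / 1e6
annual_usd = power_saved_mw * 1_000 * HOURS_PER_YEAR * TARIFF_USD_PER_KWH
print(f"{power_saved_mw:.1f} MW saved, ${annual_usd / 1e6:.1f}M/year in electricity")
# -> 12.8 MW saved, $5.6M/year in electricity
```

Even at the conservative end the fleet-level saving clears 10MW; at the optimistic end of the per-port ranges it is several times larger.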

"Co-Packaged Optics is to data centre networking what the transition from through-hole to surface-mount was to PCB manufacturing — an irreversible architectural transition driven by physics and economics that will define the next twenty years of infrastructure design. PhotonicDC.com covers it from chip architecture to data centre deployment."

Agentic AI and Robotics: Photonic Infrastructure Demands

The bandwidth and latency requirements of agentic AI systems — AI agents that must process language model outputs and take actions in near-real-time — create a specific photonic infrastructure demand that CPO addresses. An AI agent managing a complex workflow needs model inference latency in the single-digit millisecond range to maintain responsiveness. This pushes the interconnect between the inference compute (GPU cluster) and the agent orchestration layer onto optics: at 100G-per-lane rates and above, copper reaches only a few metres before retimers and additional switching hops add latency and power, while photonic links span a row or a hall with propagation delay close to the speed of light in fibre.

For humanoid robots and physical AI systems that rely on cloud inference, photonic data centre infrastructure providing sub-millisecond round-trip latency to edge endpoints is not a performance luxury — it is a functional requirement. A surgical robot that experiences 5ms versus 0.5ms inference latency from cloud LLM processing operates in qualitatively different safety regimes. PhotonicDC.com covers the complete photonic infrastructure stack that makes real-time AI agent and robot operation from cloud infrastructure possible.
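The sub-millisecond round-trip requirement translates directly into a distance budget. A minimal sketch of the fibre propagation delay alone, assuming a silica refractive index of about 1.468 and ignoring switching and queueing delay (which only tighten the budget):

```python
# Propagation-latency budget for cloud-to-edge inference over fibre.
# Assumes silica fibre (n ~ 1.468); switching/queueing delay is excluded.
C_KM_PER_MS = 299_792.458 / 1_000    # speed of light in vacuum, km per ms
FIBRE_KM_PER_MS = C_KM_PER_MS / 1.468  # ~204 km per ms in fibre

def round_trip_ms(distance_km: float) -> float:
    """Fibre propagation delay, there and back."""
    return 2 * distance_km / FIBRE_KM_PER_MS

for km in (10, 50, 100):
    print(f"{km:>4} km: {round_trip_ms(km):.2f} ms round trip")
```

Propagation alone consumes nearly a full millisecond at 100km, so a sub-millisecond round trip confines the inference cluster to within roughly 100km of the edge endpoint before any switch hop is counted — which is exactly why shaving microseconds inside the data centre fabric matters.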

Own the CPO and Silicon Photonics Intelligence Domain

PhotonicDC.com — Co-Packaged Optics, silicon photonics, optical switching, and the complete photonic data centre infrastructure story. Available now.

Acquire This Domain →

Continue Reading

AI Bandwidth Crisis

Why AI Killed Copper

Jan 12, 2026 · 11 min
Domain Value

PhotonicDC.com Domain Value Analysis

Feb 17, 2026 · 7 min