
April 14, 2025

IEEE Study Leverages Silicon Photonics for Scalable and Sustainable AI Hardware

PISCATAWAY, N.J., April 14, 2025 — The emergence of AI has profoundly transformed numerous industries. Driven by deep learning and Big Data, AI requires significant processing power to train its models. While existing AI infrastructure relies on graphics processing units (GPUs), the substantial processing demands and energy costs associated with their operation remain key challenges.

A recent study published in the IEEE Journal of Selected Topics in Quantum Electronics demonstrates a novel AI acceleration platform based on photonic integrated circuits (PICs) that offers superior scalability and energy efficiency compared with conventional GPU-based architectures.

The study, led by Dr. Bassem Tossoun, a Senior Research Scientist at Hewlett Packard Labs, demonstrates how PICs leveraging III-V compound semiconductors can efficiently execute AI workloads. Unlike traditional AI hardware built around electronic deep neural networks (DNNs), photonic AI accelerators use optical neural networks (ONNs), which operate at the speed of light with minimal energy loss.
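
To illustrate the idea behind an ONN layer, here is a minimal numerical sketch (not from the paper): an arbitrary weight matrix can be factored into two unitary transforms, which meshes of Mach-Zehnder interferometers can implement, plus a diagonal gain/loss stage, so a matrix-vector product can in principle be carried out optically. The example below only models that decomposition with NumPy; the matrix, vector, and sizes are arbitrary illustrations.

```python
import numpy as np

# Illustrative model of how an ONN layer can realize a weight matrix W:
# W = U @ diag(s) @ Vh, where the unitaries U and Vh map onto interferometer
# meshes and diag(s) onto per-channel attenuation/amplification.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # weight matrix of one network layer (arbitrary example)
x = rng.normal(size=4)        # input activations encoded in optical amplitudes

U, s, Vh = np.linalg.svd(W)   # singular value decomposition: W = U @ diag(s) @ Vh

y_photonic = U @ (s * (Vh @ x))   # "optical" path: mesh, gain/loss stage, mesh
y_reference = W @ x               # conventional electronic matrix-vector product

print(np.allclose(y_photonic, y_reference))  # True: both paths compute the same result
```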

“While silicon photonics are easy to manufacture, they are difficult to scale for complex integrated circuits. Our device platform can be used as the building blocks for photonic accelerators with far greater energy efficiency and scalability than the current state-of-the-art,” explains Dr. Tossoun.

The team used a heterogeneous integration approach to fabricate the hardware, combining silicon photonics with III-V compound semiconductors to integrate lasers and optical amplifiers on-chip, reducing optical losses and improving scalability.

The fabrication started with silicon-on-insulator (SOI) wafers that have a 400 nm-thick silicon layer. Lithography and dry etching were followed by doping for metal-oxide-semiconductor capacitor (MOSCAP) devices and avalanche photodiodes (APDs). Next, selective growth of silicon and germanium was performed to form the absorption, charge, and multiplication layers of the APD. III-V compound semiconductors (such as InP or GaAs) were then integrated onto the silicon platform using die-to-wafer bonding. A thin gate oxide layer (Al₂O₃ or HfO₂) was added to improve device efficiency, and finally a thick dielectric layer was deposited for encapsulation and thermal stability.

This unique photonic platform can achieve wafer-scale integration of all of the devices required to build an optical neural network on a single photonic chip, including active devices such as on-chip lasers and amplifiers, high-speed photodetectors, energy-efficient modulators, and non-volatile phase shifters. Going forward, the photonic platform will enable datacenters to accommodate more AI workloads and help solve several optimization problems.

The platform addresses computational and energy challenges, paving the way for robust and sustainable AI accelerator hardware.

Reference

  • Title of original paper: Large-Scale Integrated Photonic Device Platform for Energy-Efficient AI/ML Accelerators
  • Journal: IEEE Journal of Selected Topics in Quantum Electronics
  • DOI: 10.1109/JSTQE.2025.3527904

Source: IEEE Photonics Society
