Revolutionizing Power Management with 3D Integration

Power management is the unsung hero (or sometimes the villain) of high-performance computing. In advanced systems, bulky components such as capacitors and inductors occupy significant board space and contribute to power losses. Nowhere is this more apparent than in modern AI hardware, like GPUs and AI accelerators, which can consume hundreds of watts. On a flagship GPU board, it's common for about 50% of the area to be devoted to power delivery and regulation circuitry – think of all the voltage regulators, inductors, and capacitors spread around the chip. Even with all that hardware, roughly 20–30% of the power can be lost as heat on its way from the power supply to the silicon. These losses come from the resistance of PCB traces, inefficiencies in the converters, and the fact that power passes through multiple voltage-conversion stages before it reaches the die. The heat generated not only wastes energy but also creates thermal challenges that can throttle performance. Power management has become a limiting factor: you can design a super-fast AI chip, but if you can't deliver power to it efficiently, it will never reach its potential. Traditional power management ICs (PMICs) are typically separate chips (or modules) on the board – this separation is part of the problem, as it introduces distance and parasitics between the power source and the load (the AI chip).
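To see why a low-voltage rail loses so much power in distribution, a quick I²R estimate helps. The numbers below (a 400 W chip, a 0.8 V core rail, ~0.3 mΩ of board-path resistance) are illustrative assumptions, not figures from the article:

```python
# Hypothetical illustration: estimate I^2 * R distribution loss for a
# high-current, low-voltage rail delivered across a PCB.

def distribution_loss(chip_power_w, rail_voltage_v, path_resistance_ohm):
    """Return (loss_w, loss_fraction) for delivering chip_power_w at
    rail_voltage_v through a path with the given series resistance."""
    current_a = chip_power_w / rail_voltage_v          # I = P / V
    loss_w = current_a ** 2 * path_resistance_ohm      # P_loss = I^2 * R
    return loss_w, loss_w / (chip_power_w + loss_w)

# A 400 W accelerator on a 0.8 V rail draws 500 A; even ~0.3 mOhm of
# board-level resistance then dissipates substantial power as heat.
loss_w, frac = distribution_loss(400, 0.8, 0.0003)
print(f"loss: {loss_w:.0f} W ({frac:.0%} of input)")  # prints "loss: 75 W (16% of input)"
```

The current, not the power, sets the loss: halving the distance the 500 A current travels halves the resistance and the wasted watts, which is exactly the lever that on-chip regulation pulls.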

Local Power Delivery – A Paradigm Shift: CDimension's approach revolutionizes power management by moving it onto the chip. We integrate the buck regulator (the DC-DC converter that steps voltage down) right into the same package, or even onto the same silicon, as the AI processor. This concept is known as point-of-load regulation – you're regulating the voltage exactly where it's needed, rather than inches away on the PCB. The advantages are immediate: the distance electricity travels in its low-voltage, high-current form is minimized, greatly reducing resistive losses (Joule heating). Also, by being on-chip or in-package, we can use much smaller and faster interconnects for power delivery, slashing the inductance that causes voltage droops and requires large capacitors as compensation. In our design, the moment the AI circuit needs more current, the integrated regulator is just microns away to supply it, rather than having to signal across a board to a separate chip.
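The droop argument can be made concrete with V = L · dI/dt. The inductance values below are order-of-magnitude assumptions for a board-level path versus an in-package path, not CDimension measurements:

```python
# Hypothetical comparison of transient voltage droop V = L * dI/dt for a
# board-level power path vs. an in-package one. Inductances are assumed
# order-of-magnitude values for illustration.

def droop_mv(inductance_h, delta_i_a, delta_t_s):
    """Voltage droop in millivolts for a load step delta_i_a over delta_t_s."""
    return inductance_h * (delta_i_a / delta_t_s) * 1e3

step_a = 100        # 100 A load step...
dt_s = 100e-9       # ...arriving over 100 ns

board_path = 1e-9       # ~1 nH for a board/socket path (assumed)
package_path = 10e-12   # ~10 pH for an in-package path (assumed)

print(f"board path:      {droop_mv(board_path, step_a, dt_s):.0f} mV")
print(f"in-package path: {droop_mv(package_path, step_a, dt_s):.1f} mV")
```

With these assumed numbers the board path droops by a full volt on a fast 100 A step, which is why boards carry rows of bulk capacitors; the in-package path droops by tens of millivolts, so far less compensation is needed.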

MoS₂ Power Switches – >100× Faster: The heart of any switching regulator is the power transistor that chops the voltage – and this is where our materials make a huge difference. We utilize MoS₂-based power FETs that switch extraordinarily fast, more than 100 times faster than equivalent silicon power transistors. Faster switching has two key benefits: it allows the regulator to respond more quickly to changes in load (important for AI chips that can go from idle to peak consumption in a nanosecond), and it enables the use of higher switching frequencies. Higher frequency in a converter means you can use smaller inductors and capacitors to filter the output. In fact, with sufficiently high frequency, these passive components shrink to the point where they too can be integrated in-package or on-die. CDimension's buck regulator operates at much higher frequencies (>20 MHz) than conventional on-board regulators (~1 MHz or lower), which means we're on the path to integrating even the filtering components. This further reduces the footprint and eliminates the rows of large capacitors you see on boards. Moreover, our wide-bandgap 2D transistors waste less energy as heat during switching, improving efficiency at the device level.
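The frequency-versus-size trade-off follows from the standard buck-converter ripple relation L = V_out · (1 − D) / (f_sw · ΔI), with duty cycle D = V_out / V_in. The operating point below (12 V in, 0.8 V out, 2 A allowed ripple) is an assumed example, not a CDimension specification:

```python
# Buck-converter inductor sizing vs. switching frequency, using the standard
# ripple relation L = V_out * (1 - D) / (f_sw * dI), with D = V_out / V_in.
# Operating-point numbers are assumed for illustration.

def required_inductance_nh(v_in_v, v_out_v, f_sw_hz, ripple_a):
    """Minimum inductance (nH) to keep current ripple at ripple_a."""
    duty = v_out_v / v_in_v
    return v_out_v * (1 - duty) / (f_sw_hz * ripple_a) * 1e9

# 12 V -> 0.8 V rail with 2 A of allowed inductor ripple current:
for f_mhz in (1, 20):
    l_nh = required_inductance_nh(12, 0.8, f_mhz * 1e6, 2.0)
    print(f"{f_mhz:>2} MHz -> {l_nh:.0f} nH")
```

Because the required inductance scales as 1/f_sw, moving from ~1 MHz to >20 MHz shrinks the inductor by the same 20× factor, which is what makes in-package or on-die passives plausible.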

Efficiency Gains and Thermal Benefits: By combining these factors – local delivery and superior transistors – the efficiency of power conversion improves markedly. In tests, our integrated regulator design shows significantly less voltage drop from the board connector to the chip, meaning less power wasted as heat. In quantitative terms, where a traditional setup might lose around 20% of power in distribution, our approach cuts that loss to less than one percent. This reclaimed power either goes into useful computation or reduces the load on the system's cooling. Additionally, because regulation happens right at the chip, we can tailor the power supply dynamically across different parts of the chip ("dynamic voltage and frequency scaling" – DVFS – per region) with finer granularity, something difficult to do with one big external regulator. This means parts of the AI chip not in use can have their voltage lowered quickly to save energy, then cranked back up when needed, with minimal lag. Thermally, distributing power more efficiently gives the chip package a more uniform heat profile (no hotspots where current enters the die), and there is less total heat to dissipate. We observed more than 50% less heat generation in our integrated power delivery simulations compared to traditional setups – a huge win for maintaining performance under heavy AI workloads.
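A back-of-envelope comparison makes the reclaimed power tangible. The 20% and <1% loss fractions come from the text above; the 400 W chip-power figure is an assumed example:

```python
# Compare distribution heat for the loss fractions cited in the text
# (~20% traditional vs. <1% integrated). The 400 W load is assumed.

def loss_and_total(chip_power_w, loss_fraction):
    """Return (heat_lost_w, total_drawn_w) when loss_fraction of the
    drawn power is dissipated in distribution."""
    total_w = chip_power_w / (1 - loss_fraction)
    return total_w - chip_power_w, total_w

chip_w = 400
for label, frac in (("traditional", 0.20), ("integrated", 0.01)):
    heat_w, total_w = loss_and_total(chip_w, frac)
    print(f"{label:>11}: {heat_w:5.1f} W lost of {total_w:.0f} W drawn")
```

Under these assumptions, roughly 96 W that previously became distribution heat is either returned to the supply budget or removed from the cooling load.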

System Impact – Smaller and Simpler Boards: Removing large PMICs and dozens of passive components from the board simplifies the design and frees area for other uses (or allows the board or device to be smaller). In data centers, this might mean AI accelerator boards can pack more compute into the same space, or enjoy improved airflow with fewer obstructions. In edge devices like robots or cars, reducing board size and weight can be crucial. Reliability can also improve: fewer soldered components mean fewer points of failure, an especially important consideration in automotive AI hardware. Manufacturing is eased as well: instead of sourcing and assembling a complex power delivery network on the PCB, much of that function is handled by our integrated module, which can be tested at the chip level.

3D integration is rewriting the rulebook for power management in AI hardware. By treating power delivery as an integral part of the chip (rather than an afterthought on the board), CDimension is achieving efficiency and performance levels unattainable with conventional methods. The buck doesn’t stop at the board anymore – it happens right on the chip, where it belongs. This is enabling AI processors to get all the clean power they need to run faster and more efficiently, which ultimately translates to more AI computations per watt and more capable systems. As AI demands continue to soar, such innovations in power management are not just nice-to-have – they will be essential to make the next leaps in performance feasible.
