The shift to 800-VDC power architectures in AI factories

The wide adoption of artificial-intelligence models has led to a redesign of data center infrastructure. Traditional data centers are being replaced with AI factories, specifically designed to meet the computational capacity and power requirements required by today’s machine-learning and generative AI workloads.

Data centers traditionally relied on a microprocessor-centric (CPU) architecture to support cloud computing, data storage, and general-purpose compute needs. With the introduction of large language models and generative AI applications, however, this architecture can no longer keep pace with the computational capacity, power density, and power-delivery demands of AI models.

AI factories, by contrast, are purpose-built for large-scale training, inference, and fine-tuning of machine-learning models. A single AI factory can integrate several thousand GPUs, reaching power consumption levels in the megawatt range. According to a report from the International Energy Agency, global data center electricity consumption is expected to double from about 415 TWh in 2024 to approximately 945 TWh by 2030, representing almost 3% of total global electricity consumption.

Meeting this power demand takes more than a simple data center upgrade; it requires an architecture capable of delivering higher efficiency and greater power density.

Following a trend already seen in the automotive sector, particularly in electric vehicles, Nvidia Corporation presented an 800-VDC power architecture at Computex 2025, designed to efficiently support the multi-megawatt power demand of the compute racks in next-generation AI factories.

Power requirements of AI factories

The power profile of an AI factory differs significantly from that of a traditional data center. Because of the large number of GPUs employed, an AI factory’s architecture requires high power density, low latency, and high bandwidth.

To maximize computational throughput, an increasing number of GPUs must be packed into ever-smaller spaces and interconnected using high-speed copper links. This inevitably leads to a sharp rise in per-rack power demand, increasing from just a few dozen kilowatts in traditional data centers to several hundred kilowatts in AI factories.

Delivering such high current levels over traditional low-voltage rails, such as 12, 48, and 54 VDC, is both technically and economically impractical. Resistive power losses, as shown in the following formula, increase with the square of the current, significantly reducing efficiency and requiring copper conductors with extremely large cross-sectional areas.

P_loss = V_drop × I = I² × R, where V_drop is the voltage drop across the conductor and R its resistance.
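
As a rough, back-of-the-envelope illustration of this relationship, the following Python snippet compares the current and conduction loss of delivering the same rack power at 54 VDC and at 800 VDC. The rack power and conductor resistance are assumed values chosen only for illustration:

```python
# Back-of-the-envelope comparison of conduction (I^2 x R) losses at two bus
# voltages. Rack power and busbar resistance are illustrative assumptions.

RACK_POWER_W = 200_000         # assumed per-rack power draw (W)
BUSBAR_RESISTANCE_OHM = 0.001  # assumed end-to-end conductor resistance (ohm)

for bus_voltage in (54.0, 800.0):
    current = RACK_POWER_W / bus_voltage          # I = P / V
    loss = BUSBAR_RESISTANCE_OHM * current ** 2   # P_loss = R x I^2
    print(f"{bus_voltage:5.0f} VDC: I = {current:7.1f} A, "
          f"conduction loss = {loss / 1000:6.2f} kW")
```

With these assumed figures, moving from 54 V to 800 V cuts the current by roughly 15× and the conduction loss in the same conductor by more than 200×.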

To support high-speed connectivity among multiple GPUs, Nvidia developed the NVLink point-to-point interconnect system. Now in its fifth generation, NVLink enables thousands of GPUs to share memory and computing resources for training and inference tasks as if they were operating within a single address space.

A single Nvidia GPU based on the Blackwell architecture (Figure 1) supports up to 18 NVLink connections at 100 GB/s, for a total bandwidth of 1.8 TB/s, twice that of the previous generation and 14× higher than PCIe Gen5.
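
The aggregate figure quoted above follows directly from the per-link numbers; the quick check below treats roughly 128 GB/s as the commonly cited bidirectional bandwidth of a PCIe Gen5 x16 slot (an assumption for comparison, not a figure from this article):

```python
# Reproducing the aggregate NVLink bandwidth figure quoted above.
NVLINK_LINKS_PER_GPU = 18     # fifth-generation NVLink links per GPU
NVLINK_GBPS_PER_LINK = 100    # bidirectional GB/s per link
PCIE_GEN5_X16_GBPS = 128      # assumed bidirectional GB/s for a x16 slot

aggregate = NVLINK_LINKS_PER_GPU * NVLINK_GBPS_PER_LINK   # 1,800 GB/s
print(f"Aggregate NVLink bandwidth: {aggregate / 1000:.1f} TB/s")
print(f"Ratio vs. PCIe Gen5 x16:    {aggregate / PCIE_GEN5_X16_GBPS:.0f}x")
```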

Figure 1: Blackwell-architecture GPUs integrate two reticle-limit GPU dies into a single unit, connected by a 10-TB/s chip-to-chip link. (Source: Nvidia Corporation)

800-VDC power architecture

Traditional data center power distribution typically uses multiple cascaded power conversion stages: utility medium-voltage AC (MVAC), low-voltage AC (LVAC, typically 415/480 VAC), uninterruptible power supplies (UPS), and power distribution units (PDUs). Within the IT rack, multiple power supply units (PSUs) perform an AC-to-DC conversion before final DC-to-DC conversions (e.g., 54 VDC to 12 VDC) on the compute tray itself.

This architecture is inefficient for three main reasons. First, each conversion stage introduces power losses that limit overall efficiency. Second, the low-voltage rails must carry high currents, requiring large copper busbars and connectors. Third, the management of three-phase AC power, including phase balancing and reactive power compensation, requires a complex design.

Conversely, the transition to an 800-VDC power backbone minimizes I²R resistive losses. By doubling the distribution voltage from the industry-standard high end (e.g., 400 VDC) to 800 VDC, the system can deliver the same power output while halving the current (P = V × I), reducing power loss by a factor of four for a given conductor resistance.

By adopting this solution, next-generation AI factories will have a centralized primary AC-to-DC conversion outside the IT data hall, capable of converting MVAC directly to a regulated 800-VDC bus voltage. This 800 VDC can then be distributed directly to the compute racks via a simpler, two-conductor DC busway (positive and return), eliminating the need for AC switchgear, LVAC PDUs, and the inefficient AC/DC PSUs within the rack.
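
To see why collapsing the conversion chain matters, the end-to-end efficiency can be modeled as the product of per-stage efficiencies. The sketch below mirrors the two chains described above; the individual stage efficiencies are illustrative assumptions, not published figures:

```python
# End-to-end efficiency as the product of per-stage efficiencies.
# Stage lists mirror the chains described above; efficiencies are assumed.

traditional_chain = {
    "MVAC -> LVAC transformer": 0.985,
    "UPS (double conversion)":  0.96,
    "PDU / distribution":       0.995,
    "Rack PSU (AC -> 54 VDC)":  0.955,
    "54 VDC -> 12 VDC":         0.97,
}

direct_800v_chain = {
    "MVAC -> 800 VDC (centralized)": 0.975,
    "800-VDC busway":                0.998,
    "800 VDC -> 12 VDC (in rack)":   0.975,
}

def chain_efficiency(stages: dict) -> float:
    """Multiply the per-stage efficiencies of a conversion chain."""
    eta = 1.0
    for stage_eta in stages.values():
        eta *= stage_eta
    return eta

print(f"Traditional chain: {chain_efficiency(traditional_chain):.1%}")
print(f"800-VDC chain:     {chain_efficiency(direct_800v_chain):.1%}")
```

Even with optimistic per-stage numbers, every additional stage multiplies in another loss term, which is why eliminating the LVAC, UPS, PDU, and rack-PSU stages pays off.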

Nvidia’s Kyber rack architecture is designed to leverage this simplified bus. Power conversion within the rack is reduced to a single-stage, high-ratio DC-to-DC conversion (800 VDC to the 12-VDC rail used by the GPU complex), often employing highly efficient LLC resonant converters. This late-stage conversion minimizes resistive losses, provides more space within the rack for compute, and improves thermal management.
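
As a minimal sketch of how such an LLC stage is dimensioned, the resonant tank sets the operating frequency via f_r = 1/(2π√(L_r·C_r)), and at resonance the voltage conversion ratio is dominated by the transformer turns ratio. The component values below are illustrative and not taken from any specific Kyber design:

```python
import math

# Series resonant frequency of an LLC tank: f_r = 1 / (2*pi*sqrt(L_r * C_r)).
# Component values are illustrative assumptions, not a published design.
L_r = 2.0e-6    # resonant inductance (H)
C_r = 22.0e-9   # resonant capacitance (F)

f_r = 1.0 / (2.0 * math.pi * math.sqrt(L_r * C_r))
print(f"Series resonant frequency: {f_r / 1e3:.0f} kHz")

# At resonance, the ideal gain is set by the transformer turns ratio, so an
# 800-V to 12-V conversion implies a ratio of roughly 67:1 (in practice split
# across stacked or interleaved stages).
print(f"Ideal turns ratio for 800 V -> 12 V: {800 / 12:.0f}:1")
```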

This solution is also capable of scaling power delivery from the current 100-kW racks to over 1 MW per rack using the same infrastructure, ensuring that the AI factory’s power-delivery infrastructure can support future increased GPU energy requirements.
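
A quick check of what that scaling means for the distribution current at a fixed 800-VDC bus (the rack power levels below are illustrative):

```python
# Busway current at a fixed 800-VDC distribution voltage as rack power scales.
BUS_VOLTAGE_V = 800.0

for rack_power_kw in (100, 250, 600, 1000):
    current_a = rack_power_kw * 1000 / BUS_VOLTAGE_V
    print(f"{rack_power_kw:5d}-kW rack -> {current_a:6.0f} A on the busway")
```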

The 800-VDC architecture also mitigates the volatility of synchronous AI workloads, which are characterized by short-duration, high-power spikes. Supercapacitors located near the racks help attenuate sub-second peaks, while battery energy storage systems connected to the DC bus manage slower events (seconds to minutes), decoupling the AI factory’s power demand from the grid’s stability requirements.
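
A rough sizing sketch for such a rack-level buffer: the usable energy between the fully charged and minimum allowed bus voltages must cover the excess spike power for its duration. All values below are illustrative assumptions:

```python
# Rough supercapacitor sizing for sub-second power spikes.
# Usable energy between V_max and V_min: E = 0.5 * C * (V_max**2 - V_min**2).
# All figures below are illustrative assumptions.

SPIKE_POWER_W = 50_000    # assumed excess power above the steady draw (W)
SPIKE_DURATION_S = 0.5    # assumed spike duration (s)
V_MAX = 800.0             # bus voltage with the buffer fully charged (V)
V_MIN = 720.0             # minimum allowed bus voltage during the spike (V)

energy_needed_j = SPIKE_POWER_W * SPIKE_DURATION_S
capacitance_f = 2.0 * energy_needed_j / (V_MAX**2 - V_MIN**2)
print(f"Energy to buffer:   {energy_needed_j / 1000:.1f} kJ")
print(f"Capacitance needed: {capacitance_f:.2f} F at the 800-VDC bus")
```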

The role of wide-bandgap semiconductors

The implementation of 800-VDC architecture can benefit from the superior performance and efficiency offered by wide-bandgap semiconductors such as silicon carbide and gallium nitride.

SiC MOSFETs are the preferred technology for the high-voltage front-end conversion stages (e.g., AC/DC conversion of 13.8-kV utility voltage to 800 VDC, or in solid-state transformers). SiC devices, typically rated for 1,200 V or higher, offer higher breakdown voltage and lower conduction losses than silicon at these voltage levels while operating at moderately high switching frequencies. Their maturity and robustness make them the best candidates for handling the primary power entry point into the data center.
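
A simplified way to see the conduction-loss argument is to compare a SiC MOSFET’s resistive channel with the roughly fixed saturation drop of a silicon IGBT carrying the same current; the device figures below are generic assumptions rather than datasheet values:

```python
# Illustrative conduction-loss comparison at partial load: a SiC MOSFET's
# resistive channel (P = I^2 * R_on) vs. a silicon IGBT's roughly fixed
# saturation drop (P ~ V_ce_sat * I). Device figures are assumptions.

I_LOAD_A = 20.0       # assumed device current (A)
R_ON_SIC = 0.025      # assumed 1,200-V-class SiC MOSFET on-resistance (ohm)
V_CE_SAT_SI = 1.8     # assumed silicon IGBT saturation voltage (V)

p_sic = I_LOAD_A ** 2 * R_ON_SIC     # resistive conduction loss
p_igbt = V_CE_SAT_SI * I_LOAD_A      # saturation-drop conduction loss
print(f"SiC MOSFET conduction loss: {p_sic:.1f} W")
print(f"Si IGBT conduction loss:    {p_igbt:.1f} W")
```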

GaN HEMTs, on the other hand, are suitable for high-density, high-frequency DC/DC conversion stages within the IT rack (e.g., 800 VDC to 54 VDC or 54 VDC to 12 VDC). GaN’s material properties, such as higher electron mobility, lower specific on-resistance, and reduced gate charge, enable switching frequencies into the megahertz range.

This high-frequency operation permits the use of smaller passive components (inductors and capacitors), reducing the size, weight, and volume of the converters. GaN-based converters have demonstrated power densities exceeding 4.2 kW/l, ensuring that the necessary power conversion stages can fit within the constrained physical space near the GPU load, maximizing the compute-to-power-delivery ratio.
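
The link between switching frequency and passive size can be seen from the inductor-ripple equation of a simple buck-type stage, L = V_out·(1 − D)/(ΔI·f_sw); the operating point and ripple target below are assumed for illustration:

```python
# Inductance required for a target current ripple in a buck-type stage:
# L = V_out * (1 - D) / (delta_I * f_sw), with duty cycle D = V_out / V_in.
# Operating point and ripple target are illustrative assumptions.

V_IN = 54.0      # input voltage (V)
V_OUT = 12.0     # output voltage (V)
DELTA_I = 5.0    # allowed peak-to-peak inductor current ripple (A)

duty = V_OUT / V_IN
for f_sw in (100e3, 500e3, 2e6):   # silicon-typical vs. GaN-class frequencies
    inductance = V_OUT * (1.0 - duty) / (DELTA_I * f_sw)
    print(f"f_sw = {f_sw / 1e3:6.0f} kHz -> L = {inductance * 1e6:5.2f} uH")
```

Raising the switching frequency from 100 kHz to 2 MHz shrinks the required inductance by a factor of 20, which is where most of the volume and weight savings come from.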

Market readiness

Leading semiconductor companies, including component manufacturers, system integrators, and silicon providers, are actively collaborating with Nvidia to develop full portfolios of SiC, GaN, and specialized silicon components to support the supply chain for this 800-VDC transition.

For example, Efficient Power Conversion (EPC), a company specializing in advanced GaN-based solutions, has introduced the EPC91123 evaluation board, a compact, GaN-based 6-kW converter that supports the transition to 800-VDC power distribution in emerging AI data centers.

The converter (Figure 2) steps 800 VDC down to 12.5 VDC using an LLC topology in an input-series, output-parallel (ISOP) configuration. Its GaN design delivers high power density, occupying under 5,000 mm² with a height of 8 mm, making it well-suited for tightly packed server boards. Placing the conversion stage close to the load reduces power losses and increases overall efficiency.
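
In an ISOP arrangement such as this, each series-connected module blocks only a fraction of the 800-VDC input while the paralleled outputs share the load current. The sketch below assumes a module count purely for illustration; it does not describe the EPC91123’s actual partitioning:

```python
# Input-series, output-parallel (ISOP) sharing: each module blocks V_in / N
# and supplies I_out / N. Module count and ratings are illustrative.

V_IN = 800.0      # total input bus voltage (V)
V_OUT = 12.5      # regulated output voltage (V)
P_OUT = 6_000.0   # total output power (W)
N_MODULES = 8     # assumed number of stacked LLC modules

v_per_module = V_IN / N_MODULES          # input voltage seen by each module
i_out_total = P_OUT / V_OUT              # total output current
i_per_module = i_out_total / N_MODULES   # output current per module

print(f"Per-module input voltage:  {v_per_module:.0f} V")
print(f"Total output current:      {i_out_total:.0f} A")
print(f"Per-module output current: {i_per_module:.0f} A")
```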

Figure 2: The EPC GaN converter evaluation board integrates the 150-V EPC2305 and the 40-V EPC2366 GaN FETs. (Source: Efficient Power Conversion)

Navitas Semiconductor, a semiconductor company offering both SiC and GaN devices, has also partnered with Nvidia to develop an 800-VDC architecture for the emerging Kyber rack platform. The system uses Navitas’s GaNFast, GaNSafe, and GeneSiC technologies to deliver efficient, scalable power tailored to heavy AI workloads.

Navitas introduced 100-V GaN FETs in dual-side-cooled packages designed for the lower-voltage DC/DC stages used on GPU power boards, along with a new line of 650-V GaN FETs and GaNSafe power ICs that integrate control, drive, sensing, and built-in protection functions. Completing the portfolio are GeneSiC devices, built on the company’s proprietary trench-assisted planar technology, that offer one of the industry’s widest voltage ranges—from 650 V to 6,500 V—and are already deployed in multiple megawatt-scale energy storage systems and grid-tied inverter projects.

Alpha and Omega Semiconductor Limited (AOS) also provides a portfolio of components (Figure 3) suitable for the demanding power conversion stages in an AI factory’s 800-VDC architecture. Among these are the Gen3 AOM020V120X3 and the top-side-cooled AOGT020V120X2Q SiC devices, both suited for use in power-sidecar configurations or in single-step systems that convert 13.8-kV AC grid input directly to 800 VDC at the data center’s edge.

Inside the racks, AOS supports high-density power delivery through its 650-V and 100-V GaN FET families, which efficiently step the 800-VDC bus down to the lower-voltage rails required by GPUs.

In addition, the company’s 80-V and 100-V stacked-die MOSFETs, along with its 100-V GaN FETs, are offered in a shared package footprint. This commonality gives designers flexibility to balance cost and efficiency in the secondary stage of LLC converters as well as in 54-V to 12-V bus architectures. AOS’s stacked-die packaging technology further boosts achievable power density within secondary-side LLC sockets.

Figure 3: AOS’s portfolio supports 800-VDC AI factories. (Source: Alpha and Omega Semiconductor Limited)

Other leading semiconductor companies have also announced their readiness to support the transition to 800-VDC power architectures, including Renesas Electronics Corp. (GaN power devices), Innoscience (GaN power devices), onsemi (SiC and silicon devices), Texas Instruments Inc. (GaN and silicon power modules and high-density power stages), and Infineon Technologies AG (GaN, SiC, and silicon power devices).

For example, Texas Instruments recently released a 30-kW reference design for powering AI servers. The design uses a two-stage architecture built around a three-phase, three-level flying-capacitor PFC converter, which is then followed by a pair of delta-delta three-phase LLC converters. Depending on system needs, the unit can be configured to deliver a unified 800-VDC output or split into multiple isolated outputs.

Infineon, besides offering its CoolSiC, CoolGaN, CoolMOS, and OptiMOS families of power devices, has also introduced a 48-V smart eFuse family and a hot-swap controller reference board designed for 400-V and 800-V power architectures in AI data centers. These products enable developers to design reliable, robust, and scalable solutions for protecting and monitoring energy flow.

The reference design (Figure 4) centers on Infineon’s XDP hot-swap controller. Among high-voltage devices suitable for a DC bus, the 1,200-V CoolSiC JFET offers the right balance of low on-resistance and ruggedness for hot-swap operation. Combined with this SiC JFET technology, the digital controller can drive the device in linear mode, allowing the power system to remain safe and stable during overvoltage conditions. The reference board also lets designers program the inrush-current profile according to the device’s safety operating area, supporting a nominal thermal design power of 12 kW.
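
A simplified view of what the programmed inrush profile must manage: precharging the downstream DC-link capacitance at a limited current, and the energy the pass device dissipates while operating in linear mode. The capacitance and current limit below are illustrative assumptions:

```python
# Hot-swap precharge of a downstream DC link at a programmed inrush current.
# Charge time: t = C * V_bus / I_inrush. While the pass device operates in
# linear mode it dissipates roughly the same energy that ends up stored in
# the capacitor bank: E ~ 0.5 * C * V_bus**2. Values are illustrative.

V_BUS = 800.0       # DC bus voltage (V)
C_LINK = 500e-6     # assumed downstream DC-link capacitance (F)
I_INRUSH = 10.0     # programmed inrush current limit (A)

t_charge = C_LINK * V_BUS / I_INRUSH      # time to precharge the DC link
e_dissipated = 0.5 * C_LINK * V_BUS ** 2  # energy absorbed by the pass device
print(f"Precharge time:        {t_charge * 1e3:.0f} ms")
print(f"Energy in pass device: {e_dissipated:.0f} J (checked against the SOA)")
```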

Figure 4: Infineon’s XDP hot-swap controller reference design supports 400-V/800-V data center architectures. (Source: Infineon Technologies AG)

