
Nvidia has unveiled a major expansion to its AI hardware strategy with the launch of NVLink Fusion, a platform that allows third-party processors to work seamlessly alongside Nvidia GPUs.
Previously restricted to Nvidia’s own chips, the NVLink interconnect technology will now support central processing units (CPUs) and application-specific integrated circuits (ASICs) from external vendors.
The move, announced by CEO Jensen Huang during his keynote at Computex 2025 in Taiwan, marks a shift in Nvidia’s approach, one that aims to secure its position at the heart of next-generation AI infrastructure, even in hybrid systems that do not rely exclusively on its components.
MediaTek, Qualcomm, and others onboard NVLink Fusion
The new NVLink Fusion programme has already secured several key partners. Chipmakers including MediaTek, Marvell, Alchip, Astera Labs, Synopsys, and Cadence have joined the initiative.
The integration enables clients such as Qualcomm Technologies and Fujitsu to connect their own CPUs to Nvidia GPUs in AI data centre environments.
By doing so, these companies can benefit from Nvidia’s ecosystem and interconnect bandwidth while maintaining flexibility in hardware design.
This development allows clients to build semi-custom AI systems, rather than relying solely on off-the-shelf solutions from Nvidia.
Ray Wang, a US-based technology analyst, noted that NVLink Fusion helps Nvidia tap into demand for data centres built on ASICs, traditionally considered alternatives to Nvidia’s general-purpose graphics processors.
Wang said the new approach “consolidates Nvidia as the centre of next-generation AI factories”, especially as tech giants like Google, Amazon and Microsoft continue developing proprietary AI chips.
Nvidia responds to custom chip competition
Nvidia currently dominates the GPU market used in general AI training, but it faces increasing competition from hyperscalers investing in custom silicon.
The new NVLink Fusion strategy appears to be a direct response, allowing Nvidia to retain a role in AI compute systems that integrate diverse hardware.
According to Rolf Bulk, equity analyst at New Street Research, the flexibility offered by NVLink Fusion could reduce demand for Nvidia’s own CPUs.
At the system level, however, he pointed out that the feature makes Nvidia’s GPUs more attractive to customers exploring alternative system architectures.
While tech rivals AMD, Intel, and Broadcom are notably absent from the Fusion ecosystem, Nvidia’s willingness to work with competitors’ chips could help cement its dominance in AI compute environments, even as the architecture landscape becomes more fragmented.
New DGX Cloud platform and Taiwan expansion
Alongside NVLink Fusion, Nvidia also introduced the DGX Cloud Lepton, a platform designed to provide developers with access to tens of thousands of GPUs through a unified global marketplace.
The system aims to address GPU supply shortages and streamline access to compute capacity across Nvidia’s cloud network.
In a regional boost, Huang also announced plans for a new office and AI supercomputer project in Taiwan in collaboration with Foxconn, officially known as Hon Hai Technology Group.
The move reinforces Taiwan’s role as a global AI hub and expands Nvidia’s presence in Asia, a critical market for its supply chain and ecosystem partners, including chip foundry TSMC.
The upcoming GB300 Grace Blackwell system, scheduled for release in Q3 2025, will offer a leap in performance for AI workloads and further complement Nvidia’s evolving product stack.
The post Nvidia opens NVLink Fusion ecosystem, expands Taiwan footprint with Foxconn appeared first on Invezz