
Will UALink Kill NVIDIA?

That was my first thought when AMD briefed me about the Ultra Accelerator Link Consortium (UALink), a joint effort among AMD, Broadcom, Cisco, Google, HPE, Intel, Meta, and Microsoft to create a set of standards allowing AI accelerators to interoperate. Currently, NVIDIA owns the AI accelerator market and is the preeminent leader in AI technology.

But when one vendor dominates a market the way NVIDIA does, the ability of any consortium to break that dominance tends to be very limited, because the dominant vendor has three unique advantages: ownership of the existing ecosystem, deep relationships with customers who are largely still learning to use NVIDIA’s technology, and the fact that it is really, really hard to catch a tech company from behind once it reaches dominance.

So, this effort has little chance of harming NVIDIA much, but it could force NVIDIA to listen and adjust to the unmet demands this initiative is surfacing, including the need to get away from single-source solutions. NVIDIA is already under antitrust review, so it should be open to at least some of what this consortium is promising in order to avoid an adverse decision.

Let’s talk about what changes UALink could drive into the accelerator market.

Open Ecosystem

NVIDIA is closed in its approach to most things. That is not uncommon in the tech industry: Oracle and Apple also tend to use closed models, and both are remarkably successful with them. However, the market at large prefers the concept of “open” because buyers want the ability to fully understand a technology they deploy, particularly one like AI that promises to be extremely mission-critical.

Given this, I am surprised that IBM isn’t part of this effort, since it pivoted to being open before Microsoft did. I expect this is just a timing issue, since IBM would also seem to embrace what this consortium stands for.

Interoperability

Interoperability is something most vendors, including Microsoft, fought for years. Ironically, Microsoft was forced to pivot to interoperability by the European Commission in the early part of this century. The surprising result was that, rather than becoming less successful and less competitive, Microsoft used that pivot to become an interoperability leader and was even more successful as a result.

This interoperability requirement becomes more important the larger the customer, because at large scale, interoperability problems can result in catastrophic failures, an inability to quickly determine whose technology is at fault, and delays in getting the failure fixed.

While the consortium’s interoperability efforts are limited in that they assure interoperability between AI pods but not within the pods themselves (something I expect will get sorted out later), they do allow buyers to mix and match pod vendors, invite competitive bidding, and more easily create a tuned, multi-vendor environment.

This multi-vendor environment is critical in the early years of any major technology advancement because no one vendor is expert in every type of AI technology or solution. It’s critical that companies continue to pick the best vendor for a particular project rather than being tied to one that isn’t ideal for it, because no vendor is ideal for everything.

Wrapping Up

As noted, I don’t see the UALink Consortium creating much risk to NVIDIA’s survival, but it does hit a couple of areas IT buyers are enthusiastic about: an open ecosystem and interoperability. Given that NVIDIA is under investigation for antitrust concerns, both concepts could become attractive to regulators trying to ensure competition and to NVIDIA, which would prefer not to operate under an antitrust cloud.

As a result, I expect the consortium’s biggest gains will center on those two buyer-critical elements, and that in the end the effort will produce a far more customer-friendly group of solutions that better meets the needs of the emerging accelerator market.

Oh, and if I wasn’t clear, the answer to the question in the title is, “No.”
