AMD’s Navi will be a traditional monolithic GPU, not a multi-chip module | PCGamesN



Contrary to what most of us in the tech press had hoped, the next-gen AMD Navi graphics cards will use a familiar monolithic GPU design rather than the multi-chip layout that might have delivered an efficient high-end gaming card.

Most of us had thought AMD would start to use its Infinity Fabric interconnect to join smaller GPUs together in a single package to create a high-performance multi-chip module (MCM) with the new Navi graphics silicon. But we recently spoke with David Wang, the new SVP of engineering for AMD’s Radeon Technologies Group (RTG), and there’s pretty much zero chance that’s going to be worked into next year’s Navi GPUs.


It’s definitely something AMD’s engineering teams are investigating, but it still looks a long way from being workable for gaming GPUs, and definitely not in time for the AMD Navi release next year. “We are looking at the MCM type of approach,” says Wang, “but we’ve yet to conclude that this is something that can be used for traditional gaming graphics type of application.”

When the previous RTG lead, Raja Koduri, had been waxing lyrical about his Vega baby he had introduced the notion that the Infinity Fabric interconnect would be the perfect system to splice a bunch of discrete GPUs together on a single ASIC design. 

AMD Vega layout with Infinity Fabric

"Infinity Fabric allows us to join different engines together on a die much easier than before," Koduri explained. "As well it enables some really low latency and high-bandwidth interconnects. This is important to tie our different IPs (and partner IPs) together efficiently and quickly. It forms the basis of all of our future ASIC designs.

“We haven't mentioned any multi GPU designs on a single ASIC, like Epyc, but the capability is possible with Infinity Fabric."

While there was nothing definite, the suggestion was that Infinity Fabric would become a key component in future GPU designs, there to ensure multiple slices of graphics silicon could communicate quickly and efficiently across a single package. Because of that, the tech world expected Navi might come with some interesting new multi-GPU layouts.

And Infinity Fabric does indeed seem to be the perfect interconnect. Unfortunately, just because you can plug a bunch of GPUs together doesn't mean you should. The problem is that, especially in the gaming world, the software isn't there to make such a multi-chip graphics card design worthwhile. For CPUs the infrastructure is already there, baked into the OS, to allow multiple chips to function invisibly, as with Ryzen's discrete CCX design.

Infinity Fabric beyond the SoC

That infrastructure doesn’t exist for graphics cards outside of CrossFire and Nvidia’s SLI. And even that kind of multi-GPU support is dwindling to the point where it’s practically dead. Game developers don’t want to spend the resources to code their games specifically for a multi-GPU array with a minuscule install base, and the same would be true of an MCM design.

“To some extent you’re talking about doing CrossFire on a single package,” says Wang. “The challenge is that unless we make it invisible to the ISVs [independent software vendors] you’re going to see the same sort of reluctance.

“We’re going down that path on the CPU side, and I think on the GPU we’re always looking at new ideas. But the GPU has unique constraints with this type of NUMA [non-uniform memory access] architecture, and how you combine features... The multithreaded CPU is a bit easier to scale the workload. The NUMA is part of the OS support so it’s much easier to handle this multi-die thing relative to the graphics type of workload.”

So, is it possible to make an MCM design invisible to a game developer so they can address it as a single GPU without expensive recoding?

“Anything’s possible…” says Wang.

AMD Vega GPU

But realistically it’s more of a software problem than a hardware one. The Infinity Fabric interconnect should be able to provide an interface that is wide enough, and fast enough, to handle the communication needed to make it look and feel like one chip, but getting the OS and the applications to see it that way is a lot tougher.

It seems, however, that the MCM approach is only an issue in the gaming world, with the professional space more accepting of multi-GPU and potential MCM designs.

“That’s gaming,” AMD’s Scott Herkelman tells us. “In professional and Instinct workloads multi-GPU is considerably different; we are all in on that side. Even in blockchain applications we are all in on multi-GPU. Gaming on the other hand has to be enabled by the ISVs. And ISVs see it as a tremendous burden.”

Does that mean we might end up seeing diverging GPU architectures for the professional and consumer spaces to enable MCM on one side and not the other?

“Yeah, I can definitely see that,” says Wang, “because of one reason we just talked about, one workload is a lot more scalable, and has different sensitivity on multi-GPU or multi-die communication. Versus the other workload or applications that are much less scalable on that standpoint. So yes, I can definitely see the possibility that architectures will start diverging.”

AMD Radeon Instinct professional GPUs

So David Wang expects that at some point AMD will have dedicated silicon and dedicated architectures for the professional and gaming sides, but he doesn’t believe it will diverge to the point where they are completely different, unrelated designs.

“The fact that the GPU is the most efficient machine in the world to handle large batch efficiently, that’s not going to change,” he says, “whether you’re using it for computation or graphics. But, you know, I think there’ll be tweaks, so that for computation it may be biased to one kind of architecture and gaming will be biased to another type of architecture, but I think the fundamental building blocks and concept will still be pretty similar.”

AMD's Navi GPUs will start shipping next year in 7nm trim, but what level of performance Navi will deliver is still up for debate. Without an MCM setup it looks likely that we're talking about a mainstream GPU, an RX 680 successor to the RX 580, rather than something that's going to punch it out with Nvidia's GTX 1180.

Especially if Sony's PS5 really does have something to do with Navi's design...

MacDaffy · 1 week ago
> one workload is a lot more scalable, and has different sensitivity on multi-GPU or multi-die communication.

CG path tracing is a highly scalable workload. Epic says in a couple of years we might not see anything other than ray tracing in game engines. Fully path-traced renders run at 1 fps at 1080p now (no RTX required) and would take advantage of MCM: https://www.reddit.com/r/technology/comments/8gp45m/ai_advances_cgi_industry_by_several_years_otoy/

Dave James · 1 week ago
I think Epic might be a little ambitious in thinking that in only a couple of years there will just be ray-tracing in game engines. There will probably be some form of hybrid ray-tracing effects in most game engines by then, but the big switch from rasterization to ray-tracing isn't going to happen that quickly.

It was something we spoke with David Wang about too:

“I do think ray-tracing has its place, but is this going to be a sea-change overnight? Will people compromise on 4K display performance and start to become jaggy? I don’t think so.

"So, I think it will take time for this sea-change to happen, but even in the end I still believe it will co-exist with the conventional rasterization technology. Just because frame rate and resolution are so important to people.”
