The next-gen Nvidia RTX 2080 is coming, which means we’re getting ever closer to a whole new world of fresh-faced GeForce graphics cards. The next-gen GPUs will be revealed on August 20, just before Gamescom 2018 kicks off.
It’s looking like AMD will cede graphics tech supremacy to Nvidia for 2018 – without even putting up much of a fight – leaving the green team as the only GPU crew likely to deliver us a new generation of graphics cards before the end of the year. But Nvidia has opted to break the rules with the next-gen graphics cards in favour of a more bombastic marketing opportunity.
Nvidia has announced that its GeForce RTX 2080 will be powered by the Nvidia Turing architecture, built from the ground up for ray tracing performance. It also looks like Nvidia plans on taking the lid completely off its next-gen GeForce cards at the end of August. The Hot Chips tech symposium, the GeForce Gaming Celebration, and Gamescom are all hitting around the 20th, so we’re expecting big things late this summer.
Nvidia RTX 2080 release date
All the evidence is currently pointing towards a Q3 release for the new Nvidia graphics cards. Nvidia is probably still presenting its ‘next generation mainstream GPU’ at the Hot Chips symposium on August 20, and Nvidia’s Gamescom pre-show event, at which it is due to announce the next-gen GPUs, is happening at exactly the same time.
Nvidia RTX 2080 specs
The final specs for the RTX 2080 and RTX 2080 Ti have both been leaked in the days before the official announcement is set to go live. The straight RTX 2080 is set to have 2,944 CUDA cores, with 8GB GDDR6, and the RTX 2080 Ti is expected to sport a full 4,352 CUDA cores with 11GB GDDR6 memory.
Nvidia RTX 2080 price
The pre-order pricing that has come out of the leaked PNY details shows the cards coming in at $800 for the overclocked XLR8 version of the RTX 2080 and $1,000 for the equivalent RTX 2080 Ti version. These are likely placeholder prices right now, with the reference cards probably a little lower. Though not much…
Nvidia RTX 2080 performance
The purpose of the newest GeForce graphics cards is to give some credence to the claims that real-time ray tracing is the future of gaming, and so they’re going to have to deliver on that front. At that price they’ll have to beat the GTX 1080 Ti in traditional games too.
There’s been a lot of talk about the possibility of a new Nvidia Turing architecture being the basis of the new GeForce cards. And with the latest announcements from Nvidia on that front, it looks like it’s actually going to happen.
The flagship of the Turing GeForce lineup will be the RTX 2080, which Nvidia has all but confirmed through a YouTube video it posted in the early hours of August 14, 2018. The video teases the RTX 2080 naming scheme, and shows that Nvidia is taking the opportunity to skip over the 11-series nomenclature and go straight to 20.
The Nvidia RTX 2080 has been teased in a video posted by Nvidia on its YouTube channel, and is set to be announced on August 20, during the GeForce Gaming Celebration.
If you can’t get there in person the countdown to the unveiling is going on right now, and you can tune into the livestream right here with us at PCGamesN.
Nvidia’s pre-Gamescom GeForce Gaming Celebration is set for August 20, and has promised “new, exclusive, hands-on demos of the hottest upcoming games, stage presentations from the world’s biggest game developers, and some spectacular surprises.”
If that doesn’t mean Jen-Hsun Huang running around a stage waving around a new graphics card I’m going to be seriously disappointed…
It’s interesting to note that the event on the evening of August 20 in Cologne is happening at exactly the same time as the Hot Chips presentation that Stuart Oberman was scheduled to be giving around “Nvidia’s Next Generation Mainstream GPU” before it was struck from the listings pages.
The Hot Chips talk was removed from the agenda, with a big fat TBC in its place, but that’s no guarantee it’s been cancelled. Nvidia wouldn’t do a full graphics card launch at the Hot Chips event, so it’s pretty likely that Gamescom is where it will start talking up its next generation of GPUs.
The effusive Nvidia CEO did announce at a pre-Computex event that the next-gen GeForce cards wouldn’t launch until “a long time from now.” But then he’s going to say that even if they were imminent. Which they are.
That August – September window marries well with the recent rumour that board partners are starting to brief their engineers on the specifics of the new GPUs. With that going down right now it seems prudent to expect the final product from the AIBs in the tail end of the summer.
Previous rumours had pegged this Q3 release date because SK Hynix is seriously cranking up volume production of GDDR6, and has recently signed a large supply deal with Nvidia. But SK Hynix is not the only company making GDDR6 memory as both Samsung and Micron are getting involved in the new graphics memory technology.
The RTX 2080 will presumably arrive sporting a GT104 GPU – but that’s not been confirmed just yet. And what sort of configuration that chip, if that even is its name, might have is still up for debate.
| | RTX 2080 Ti | RTX 2080 | GTX 1080 Ti | GTX 1080 |
|---|---|---|---|---|
| VRAM | 11GB GDDR6 | 8GB GDDR6 | 11GB GDDR5X | 8GB GDDR5X |
The full Turing GPU in the top-end Quadro cards – that $10,000 version – has 4,608 CUDA cores inside it, which means the expected RTX 2080 Ti isn’t far short of that for around a tenth of the price. The GDDR6 memory gives the GPU a further boost too, offering a huge amount more bandwidth than the previous generation.
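That bandwidth claim can be sanity-checked with a bit of napkin maths. The 14Gbps GDDR6 data rate and 352-bit bus below are assumptions based on the leaked 11GB configuration, not confirmed specs:

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin data rate in Gbps.
# The 14Gbps GDDR6 figure and 352-bit bus are assumed from the leaks, not confirmed.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_2080_ti = peak_bandwidth_gbs(352, 14)   # rumoured 11GB GDDR6 card
gtx_1080_ti = peak_bandwidth_gbs(352, 11)   # last-gen 11GB GDDR5X flagship

print(f"RTX 2080 Ti: {rtx_2080_ti:.0f} GB/s")  # 616 GB/s
print(f"GTX 1080 Ti: {gtx_1080_ti:.0f} GB/s")  # 484 GB/s
```

If those assumed figures hold, that’s a jump of well over 25% in raw bandwidth, generation on generation.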
These aren’t going to be the energy-efficient cards of their forebears, however, as both the RTX 2080 and RTX 2080 Ti are looking like they’re rocking a 285W TDP. That means their massive GPUs are going to be rather more power-hungry than other recent Nvidia cards.
And what of the RTX 2070? We still haven’t seen any recent leaks about that card, which could mean that Nvidia is just going to launch the ultra-enthusiast cards first and leave the rest of us drooling over specs sheets of GPUs we’ll never afford.
The Streaming Multiprocessor (SM) of the recently announced Turing chip is chock full of silicon designed for machine learning and ray tracing. How much of that will make the transition over to the gaming GPU we don’t yet know. But we can only assume the ray tracing silicon will survive, as the card is supposedly putting RTX in its name in reference to Nvidia’s ray tracing platform…
The exact specs are still a little up in the air, but a recent leak seems to have made them a little more real.
- GPU: Nvidia GT104
- CUDA cores: 2,944
- VRAM: 8GB GDDR6
- Memory bus: 256-bit
With the Pascal generation, Nvidia stripped out the double precision cores for the GP104 silicon, and it may do the same with Turing. Historically, it has then merged SMs together – with GP100, for example, there were ten SMs in a graphics processing cluster (GPC), but just five in a GP104 GPC, despite each cluster containing the same number of CUDA cores. Each GP104 SM therefore has double the cores sharing the same instruction cache and shared memory.
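The cluster maths works out neatly, assuming the known Pascal SM sizes of 64 FP32 cores per GP100 SM and 128 per GP104 SM:

```python
# Sanity check on the GPC layout described above: GP100 packs ten 64-core SMs
# into each cluster, while GP104 merges them into five 128-core SMs, leaving
# the per-cluster CUDA core count identical.

def cores_per_gpc(sms_per_gpc, cores_per_sm):
    return sms_per_gpc * cores_per_sm

gp100_gpc = cores_per_gpc(10, 64)   # ten SMs of 64 FP32 cores each
gp104_gpc = cores_per_gpc(5, 128)   # five doubled-up 128-core SMs

print(gp100_gpc, gp104_gpc)  # 640 640
```

Whether Turing’s gaming silicon gets the same SM consolidation treatment is, for now, pure speculation.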
Ray tracing is enhanced by the new RT cores in the Turing architecture, but the Tensor cores definitely help in cleaning up a ray-traced image too. With WinML also looking to bring machine learning into the gaming space, we’re likely to see more pro-level silicon remaining in our gaming GPUs in the future.
On the memory side, the RTX 2080 comes with GDDR6 support rather than the more expensive – and largely unnecessary for gaming – HBM2. Samsung, SK Hynix, and Micron are all going to town on the new graphics memory, and both Samsung and SK Hynix have specifically mentioned the tech playing a key role in this year’s next-gen graphics card releases. And with AMD seemingly not bringing anything new to market this year, that just leaves a single player in the game…
Given the amount of work the GPU will have to do with real-time ray tracing, and the amount of data that needs shunting around, it’s not surprising to see 11GB of GDDR6 on the top-end Turing gaming cards. Though that’s going to be expensive, and still dependent on the vagaries of memory supply.
There have been fresh rumours, from a writer on Tom’s Hardware with friends in technical places, that the new cards will feature a new video output. This almost certainly means the VirtualLink USB Type-C connector on the Quadro Turing cards will also make its way over to its gaming RTX 2080 sibling. There’s also speculation that this means the RTX 2080 will natively run HDMI 2.1 out of the box, hopefully delivering the bandwidth required to deal with 4K HDR at 120Hz without messing with the colours too much. It could also introduce Game Mode VRR (variable refresh rate) that might even give us non-hardware based G-Sync.
You’ll probably need some new cables, however, as the bandwidth for HDMI 2.1 jumps from 18Gbps with HDMI 2.0 to a heady 48Gbps.
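A rough calculation shows why 4K HDR at 120Hz needs that extra headroom. This is a raw pixel-data estimate that ignores blanking intervals and link-encoding overhead, so the real on-the-wire figure is somewhat higher:

```python
# Back-of-the-envelope uncompressed video bandwidth: pixels per second times
# bits per pixel. Ignores blanking intervals and link-encoding overhead, so
# treat this as a lower bound on the required link rate.

def raw_video_gbps(width, height, refresh_hz, bits_per_pixel):
    return width * height * refresh_hz * bits_per_pixel / 1e9

# 4K at 120Hz with 10 bits per colour channel (30 bits per pixel) for HDR
uhd_hdr_120 = raw_video_gbps(3840, 2160, 120, 30)
print(f"4K 120Hz 10-bit: ~{uhd_hdr_120:.1f} Gbps raw")  # ~29.9 Gbps
# Comfortably beyond HDMI 2.0's 18Gbps, but well within HDMI 2.1's 48Gbps.
```

That’s why HDMI 2.0 setups have to resort to chroma subsampling – “messing with the colours” – to hit those refresh rates at 4K.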
Graphics cards are expensive beasts, especially in these troubled, frontier-like, crypto-goldrush times of ours, and even with the recent price drops. When you factor in the new GDDR6 memory technology costing some 20% more than its GDDR5 forebear, it wouldn’t be at all surprising to see the top-end RTX 2080 coming in at around $699 at launch. Or potentially even more.
It seems Nvidia has dropped the ridiculous Founders Edition schtick, so that would be the base price tag, and the reference point for other models. Expect any and all overclocked, or third-party cooled, versions of the RTX 2080 to come in nearer $800 – $1,000.
The RTX 2070 would likely begin at the same price that the GTX 1080 started out at. Yeah, ouch. The RTX 2060 can’t come soon enough with those kinds of launch prices…
Without any actual competition at the high-end of the graphics market, Nvidia knows it can almost price with impunity, knowing people will pay because there is no other performance alternative.
A lack of competition is definitely not going to do us consumers any favours at all.
The new 20-series graphics cards are going to have to be capable of real-time raytracing. That’s going to be one of the first tests anyone does when they get their hands on both the new cards and Futuremark’s upcoming 3DMark raytracing benchmark.
What we do know of Turing is that the top-end Quadro RTX 8000 is able to run the Unreal Engine reflections demo on its own. Previously, this demo had been shown off using a DGX Station, which is powered by four Tesla V100 GPUs and costs $70,000, so those RT cores are evidently pulling their weight in the new Turing GPU architecture.
Obviously, it will also need to game like a frickin’ hero. And, given that it will potentially appear at the same initial price point as the GTX 1080 Ti, it has to outperform the fastest of the last generation GeForce graphics cards in traditional gaming workloads too.
And that’s no mean feat.