NVIDIA: Introducing the Next Generation Lovelace GPU
Source: Hardware Luxx, added 29th Dec 2020. NVIDIA has not yet fully rolled out the current Ampere generation, and already the first rumors about the successor generation are making the rounds. Once again, the early information comes from Twitter user @kopite7kimi, who also came up with (mostly) correct information quite early on for the Ampere generation.
For a few days it has also seemed to be known that the next gaming architecture from NVIDIA will be called Lovelace. Ada Lovelace was a British mathematician, considered by some historians "the first person who can be described as a programmer" – see Wikipedia. NVIDIA thus stays with its tradition of naming architectures after people who made their name in technical and mathematical fields. Hopper (named after Grace Hopper, the computer scientist and US Navy rear admiral), the architecture previously traded as the Ampere successor, is apparently being pushed back a step.
It is currently impossible to estimate when Lovelace as an architecture will come onto the market in the form of first products. As mentioned, NVIDIA will initially want to build out the current Ampere line-up: in January we expect the introduction of the mobile variants as well as a GeForce RTX 3060, and a GeForce RTX 3080 Ti with 20 GB of graphics memory is apparently in the starting blocks. AMD wants to update its RDNA architectures at relatively short intervals; according to AMD's own roadmap, RDNA 3 is expected for the end of 2021. Accordingly, NVIDIA should feel pressure to bring an Ampere successor, which would now be Lovelace, onto the market just as quickly.
NVIDIA will therefore currently be in the design phase, and it cannot be long before the tape-out has to be mastered. Accordingly, there may well already be first leaks on which the information from @kopite7kimi is based.
But now we come to this first information, which should be treated with caution. While the fully equipped Ampere GPU GA102 comes with seven GPCs, 42 TPCs and thus 84 SMs containing 10,752 FP32 compute units, the AD102 GPU is said to have 12 GPCs, 72 TPCs and thus 144 SMs. At 128 FP32 compute units per SM, that would make a full 18,432 FP32 compute units. Assuming a similar ratio for the INT32 compute units, an AD102 GPU would come to 9,216 INT32 compute units.
These sample calculations all assume that NVIDIA retains the basic structure of the architecture. A roughly 70 % larger configuration of the GPU would be quite conceivable; a further shrink to a 5 nm manufacturing process would create the necessary potential.
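The sample calculations above follow directly from the GPC/TPC/SM hierarchy. As a minimal sketch (assuming the rumored counts and that the Ampere per-SM layout of 128 FP32 and 64 INT32 units carries over, which is exactly the caveat stated above):

```python
# Hypothetical unit-count arithmetic based on the leaked figures.
# Defaults mirror the Ampere layout: 6 TPCs per GPC, 2 SMs per TPC,
# 128 FP32 and 64 INT32 units per SM.
def shader_units(gpcs, tpcs_per_gpc=6, sms_per_tpc=2,
                 fp32_per_sm=128, int32_per_sm=64):
    sms = gpcs * tpcs_per_gpc * sms_per_tpc
    return {"SMs": sms,
            "FP32": sms * fp32_per_sm,
            "INT32": sms * int32_per_sm}

ga102 = shader_units(gpcs=7)   # Ampere GA102: 84 SMs, 10,752 FP32
ad102 = shader_units(gpcs=12)  # rumored AD102: 144 SMs, 18,432 FP32

print(ga102)
print(ad102)
print(f"FP32 growth: {ad102['FP32'] / ga102['FP32'] - 1:.0%}")
```

The growth works out to roughly 71 %, which matches the "70 % larger" estimate in the text.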
A larger cache is also mentioned. However, this grows with the larger configuration alone – both the total capacity of the L1 caches and the shared L2 cache. Whether NVIDIA will use a large L3 cache like AMD's Infinity Cache remains to be seen. Information on the memory hierarchy (including the memory interface) is still completely missing.
Over the beginning and course of 2021 we will certainly receive more information, making it easier to estimate the configuration and the release date. The fast cadence that AMD wants to set with its RDNA architectures suggests an equally fast cadence at NVIDIA, and the leaps in performance seem to be getting bigger again.