Hynix has been one of the “big three” memory manufacturers for decades. Together with Samsung and Micron, it dominates the memory market, producing a substantial percentage of the world’s DRAM and NAND. The company, originally founded as Hyundai Electronics Industrial Co. in 1983, was sold a few years ago to SK Group, a large Korean conglomerate, hence the name “SK Hynix.” Just last year, Hynix agreed to purchase Intel’s NAND business for $9 billion.
The SK Hynix Gold P31 SSD was announced in October last year and has been receiving attention from enthusiasts ever since. Today we finally bring you our review of the Hynix Gold P31, which is built entirely from Hynix components, a feat that previously only Samsung could claim. The controller is an in-house Hynix design called ACNT038, or “Cepheus.” The flash chips are modern 128-layer 3D TLC, and an LPDDR4-3733 DRAM chip provides 1 GB of memory for the SSD’s mapping tables.
The Hynix Gold P31 comes in capacities of 512 GB ($75) and 1 TB ($135). Endurance for these models is set to 500 TBW and 750 TBW respectively. Hynix includes a five-year warranty with the Gold P31.
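For a rough sense of what those endurance ratings mean in practice, here is a back-of-the-envelope conversion to drive writes per day (DWPD) over the five-year warranty. The formula and figures below are our own illustration, not an official Hynix rating:

```python
# Rough endurance check: convert a TBW rating into drive writes per day (DWPD)
# over the warranty period. Illustrative only; assumes writes are spread
# evenly across every day of the five-year warranty.
def dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    return tbw / (capacity_tb * warranty_years * 365)

print(f"512 GB model: {dwpd(500, 0.512):.2f} DWPD")  # ~0.53 drive writes/day
print(f"1 TB model:   {dwpd(750, 1.0):.2f} DWPD")    # ~0.41 drive writes/day
```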
New Line Cinema and Warner Brothers Animation are working on a new Lord of the Rings movie that’s bound for theaters, with an anime twist (via Variety). The movie will be called The Lord of the Rings: The War of the Rohirrim, and it will cover the history behind the fortress at Helm’s Deep — whose battle is one of the most exciting parts of the entire original trilogy with its exploding walls, shield surfing, and incredible horse stunts.
Kenji Kamiyama will be directing the movie, and it’ll be interesting to see how well his style maps to a medieval fantasy setting — he’s mostly tackled futuristic sci-fi projects before, including Ghost in the Shell: Stand Alone Complex, 009 Re:Cyborg, and Netflix’s Ultraman. The movie is being written by the creators of The Dark Crystal: Age of Resistance, and while Peter Jackson isn’t directly involved, one of the writers for the original trilogy (and the Hobbit movies) is acting as a consultant.
While the names behind the film are interesting, it doesn’t seem like Warner Brothers or New Line are going deep into the Tolkien well for story ideas — the events The War of the Rohirrim will depict take place only a few hundred years before the battle shown in The Two Towers. Compare that with, say, Amazon’s Lord of the Rings show, which is set in the Second Age, way before the War of the Ring — the characters in the anime movie may be unfamiliar to most viewers, but the world will likely be pretty similar to what we’ve already seen. Of course, there’s still room for this project to be interesting, but perhaps it would’ve been more exciting if we were getting a story about something like the Silmarils or the fall of Númenor.
So far, we don’t know details like when the movie will be released or who will be starring in it.
With the launch of the Nvidia GeForce RTX 3070 Ti, we’re collecting information about all of the partner cards that have launched or will launch soon. We have listings from seven companies, ranging from top-end liquid cooling models to budget-friendly cards.
The RTX 3070 Ti is Nvidia’s latest mid-range to high-end SKU in the RTX 3000-series lineup. The GPU is based on a fully enabled GA104 die consisting of 6144 CUDA cores, operating at up to a 1770 MHz boost frequency at reference spec. The card comes with 8GB of GDDR6X memory running at 19 Gbps and has a 290W TDP.
EVGA
To keep things simple during this GPU shortage crisis, EVGA has only released two SKUs for the RTX 3070 Ti, the XC3 Gaming and the FTW3 Ultra Gaming. You can grab both of these cards right now on EVGA’s store if you have the company’s Elite membership. If not, you’ll need to wait until tomorrow to grab the cards, if they happen to be available.
Nothing has really changed with the RTX 3070 Ti’s FTW3 and XC3 designs; both cards feature a triple-fan cooler and a fully blacked-out shroud. The XC3 is the stealthier of the two, with a dual-slot cooler and barely any RGB in sight.
The FTW3 model is much larger at 2.75 slots in thickness, and features much more RGB than its cheaper counterpart.
The FTW3 model runs at a boost frequency of 1860MHz while the XC3 runs at a lower 1770MHz.
Gigabyte
Gigabyte has done the exact opposite of EVGA and released five different SKUs for the RTX 3070 Ti, ranging from the flagship Aorus Master model down to the RTX 3070 Ti Eagle, a more budget-friendly card.
Because Gigabyte does not have its own store, expect to buy (or wait to buy) these cards from popular retailers such as MicroCenter, Newegg, Amazon, Best Buy and others.
Aesthetically, each RTX 3070 Ti SKU differs only subtly from its RTX 3080 and RTX 3070 counterparts. Gigabyte has kept the same colors as the 3080 and 3070 cards while tweaking a few design elements on each SKU.
The only exception to this is the RTX 3070 Ti Vision, which shares the exact same design as the RTX 3080 and 3070 models.
All RTX 3070 Ti models are triple-fan cooler designs, presumably due to the 3070 Ti’s really high TDP of 290W. The Aorus Master is the top trim with a beefy triple slot heatsink, and lots of RGB. The Gaming variant is Gigabyte’s mid-range SKU, and the Eagle represents Gigabyte’s lowest-end offering. The Vision model is aimed more towards the prosumer market, with less “gamery” aesthetics.
MSI
MSI will be offering three custom-designed versions of the RTX 3070 Ti: the Suprim, Gaming Trio and Ventus. Each model also comes in an OC variant, doubling the number of options to six.
The Suprim is the flagship card, with a silver and grey finish and a shroud that measures beyond two slots in thickness. RGB lighting can be seen around the fans and on the side.
The Gaming Trio is the mid-range offering, featuring a blacked out shroud along with red and silver accents. The card is similar in height to the Suprim and is over two slots thick.
The Ventus is MSI’s budget, entry-level card, featuring a fully blacked-out shroud with grey accents and, again, a thickness of more than two slots. If you want a stealthy appearance, this is the card to go for.
Compared to the RTX 3080 and RTX 3070 equivalent models, there’s very little difference between them and the RTX 3070 Ti SKUs. They all are incredibly similar in size, and aesthetically are largely identical besides a few backplate design changes and a couple of accent changes on the main shroud.
Zotac
Zotac is coming out with just two models for the RTX 3070 Ti, the Trinity and AMP Holo.
Both the Trinity and Holo feature triple-fan cooler designs with largely identical design elements, and both use grey and black color combinations.
The main difference between the cards is a slightly different boost speed of 1800MHz on the Trinity vs 1830MHz on the Holo, and the Holo features a much larger RGB light bar on the side, making the Trinity the more “stealthy” of the two.
Inno3D
Inno3D is releasing four different SKUs for the 3070 Ti: the iChill X4, iChill X3, X3 and X3 OC.
The iChill X4 and X3 are almost identical in every respect; the only major addition on the X4 is a fourth fan, mounted to give the card some active airflow from the side. We are not sure how much this will affect temps, but it’s a cool-looking feature.
Both the iChill X3 and X4 feature very aggressive styling for a graphics card, with a black and metal finish and several exposed screws in the metal, similar to race cars. On the side is a very bright, large strip of RGB that looks like something out of Cyberpunk 2077. The RGB has a neon glow to it, with the ‘iChill’ logo sitting in the middle.
The iChill X3 and X4 feature 1830 MHz boost frequencies and thicknesses beyond two slots.
Inno3D’s RTX 3070 Ti X3 and X3 OC, on the other hand, are the complete opposite of the iChill cards. The shroud is a very basic black affair with no RGB or lighting anywhere on the card. This is Inno3D’s budget-friendly option, which explains the simplistic design.
The X3 comes with a 1770 MHz boost clock, while the OC model features a 1785 MHz boost frequency. The X3 has a flat two-slot thickness, allowing the card to fit in slimmer chassis.
Galax
Galax is coming in with four different versions of the RTX 3070 Ti, including dual fan options.
The flagship model for Galax is the 3070 Ti EXG, available in black or white. These cards feature large triple-fan coolers and thicknesses beyond two slots. The shrouds are very basic, pure black or pure white depending on the color you purchase, with the lighting coming from RGB-illuminated fans.
The RTX 3070 Ti SG is probably the most interesting of all of the 3070 Ti cards as a whole, with a unique add-on cooling solution. The card comes with the same shroud and fan design as the EXG, but features a significantly cut-down PCB, to make way for a large cut-out at the end to allow the installation of an additional fan to the rear of the card. If space allows, this additional fan gives the rear of the card a push-pull design, for maximum airflow.
Next, we have the 3070 Ti EX, a dual-fan option available in black or white flavors. This is the first SKU we’ve seen with a dual-fan solution for the 3070 Ti, so this will be a great option for users looking for a compact solution for smaller chassis. However, like the other Galax cards, the thickness is higher than two slots, so keep that in mind for smaller builds.
Besides the dual fan cooler, everything else is very similar to the EXG models with a pure black or white finish (depending on the flavor you choose) and RGB fans.
Lastly, there’s the Galax RTX 3070 Ti, a card with no fancy name, representing the budget end of Galax’s lineup.
The card is super basic with a carbon fiber-looking black shroud, and black fans. Unlike the EX model, this card is boxier with fewer angles to the design.
Palit
Palit is introducing three versions of the RTX 3070 Ti: the GameRock, GameRock OC, and GamingPro.
The GameRock appears to be the company’s flagship model for the 3070 Ti. The card comes in a wild-looking grey shroud paired with a layer of see-through diamond-like material all along the fan area. This part is all RGB illuminated.
The GameRock cards use triple-fan coolers and are more than two slots thick.
The GamingPro, on the other hand, is a more normal card, with a black and grey shroud and some fancy silver accents which act as fan protectors on the middle and rear fans. This card is similar in size to the GameRock cards.
The GameRock OC comes with an 1845 MHz boost clock, while the vanilla GameRock and the GamingPro both run at the reference 1770 MHz boost clock.
Asus Z590 WiFi Gundam Edition (Image credit: Asus)
In a collaboration with Sunrise and Sotsu, Asus announced a special lineup of PC components inspired by the Gundam anime series last year. While the products were originally specific to the Asian region, they have now made their way over to the U.S. market.
Asus introduced two opposing series. The Gundam series is based on the RX-78-2 Gundam, while the Zaku series borrows inspiration from the MS-06S Char’s Zaku II. The list of components includes motherboards, graphics cards, power supplies, monitors and other peripherals. Specification-wise, the Gundam- and Zaku-based versions are identical to their vanilla counterparts.
For now, there’s not a lot to choose from. Newegg currently sells only four Gundam-themed products from Asus. On the motherboard end, we have the Z590 WiFi Gundam Edition and the TUF Gaming B550M-Zaku (Wi-Fi). The U.S. retailer also lists the RT-AX86U Zaku II Edition gaming router and the TUF Gaming GT301 Zaku II Edition case.
The Z590 WiFi Gundam Edition, which retails for $319.99, is an LGA1200 motherboard that supports Intel’s latest 11th Generation Rocket Lake-S processors. The motherboard supports up to 128GB of memory and memory frequencies up to DDR4-5133 without breaking a sweat. The Z590 WiFi Gundam Edition also offers PCIe 4.0 support on both its M.2 ports and PCIe expansion slots, as well as Wi-Fi 6 connectivity with added Bluetooth 5.0 functionality.
The TUF Gaming B550M-Zaku (Wi-Fi), on the other hand, leverages the B550 chipset to accommodate multiple generations of Ryzen processors, up to the latest Zen 3 chips. The microATX motherboard also supports the latest technologies, such as PCIe 4.0, Wi-Fi 6 and USB Type-C. Newegg has the TUF Gaming B550M-Zaku (Wi-Fi) up for purchase for $219.99.
The RT-AX86U Zaku II Edition is one of Asus’ most recent dual-band Wi-Fi 6 gaming routers. The router, which sells for $299.99, offers speeds up to 5,700 Mbps and 160 MHz channels. A quad-core 1.8 GHz processor and 1GB of DDR3 memory power the RT-AX86U Zaku II Edition.
Lastly, the TUF Gaming GT301 Zaku II Edition is a $119.99 mid-tower case for ATX motherboards. It offers generous support for radiators up to 360mm and a tempered glass side panel to show off your hardware. There’s also a convenient headphone hanger to keep your headphones safe and at hand.
Microsoft is back teasing Windows 11 again. In a new video on YouTube, the software giant has published an 11-minute (ahem) collection of startup sounds from various versions of Windows. They’re all slowed down by 4,000 percent, and Microsoft positions this as a relaxing video for those far too excited by the Windows event on June 24th.
“Having trouble relaxing because you’re too excited for the June 24th Microsoft Event?” asks the YouTube caption. “Take a slow trip down memory lane with the Windows 95, XP, and 7 startup sounds slowed down to a meditative 4,000 percent reduced speed.”
I think this is very much teasing a new startup sound for Windows 11, or whatever the next version of Windows will be called. Microsoft has been teasing “a new version of Windows” recently, and has dropped a number of hints that it could in fact be called Windows 11.
Among those hints is the event starting at 11AM ET and the event invite that has a window that creates a shadow with an outline that looks very much like the number 11. Microsoft execs have been teasing a “next generation of Windows” announcement for months, and it’s clear from this latest video that these 11 teasers will continue in the weeks ahead.
We’re expecting Microsoft to announce a new version of Windows with significant user interface changes. Microsoft has been working on something codenamed “Sun Valley,” which the company has referred to as a “sweeping visual rejuvenation of Windows.” There will be many other changes, including some significant Windows Store ones, so read our previous coverage for what to expect.
We’ll find out on June 24th whether Microsoft is ready to dial the version number of Windows up to 11, simply name it Windows Sun Valley, or something else entirely. The Windows elevent (as we’re now calling it) will start at 11AM ET on June 24th, and The Verge will be covering all the news live as it happens.
Intel introduced the Iris Xe discrete graphics processor months ago, but so far, only a handful of OEMs and a couple of graphics card makers have adopted it for their products. This week, VideoCardz discovered another vendor, Gunnir, that offers a desktop system and a standalone Intel DG1 graphics card with a rare D-Sub (VGA) output, making it an interesting board design.
It’s particularly noteworthy that the graphics card has an HDMI 2.0 port and a D-Sub output that can be used to connect outdated LCD or even CRT monitors. In 2021, this output (often called the VGA connector, though the 15-pin D-Sub is not exclusive to monitors) is not particularly widespread, as it does not properly support resolutions beyond 2048×1536, and image quality at resolutions above 1600×1200 depends heavily on the quality of the output circuitry and the cable (which is typically low). Still, adding a D-Sub output to a low-end PC makes some sense because some old LCD screens are still in use, and retro gaming with CRT monitors has become a fad.
As far as formal specifications are concerned, the Gunnir Lanji DG1 card is powered by Intel’s cut-down Iris Xe Max GPU with 80 EUs clocked at 1.20 GHz ~ 1.50 GHz paired with 4GB of LPDDR4-4266 memory connected to the chip using a 128-bit interface. The card has a PCIe 3.0 x4 interface to connect to the host. The card can be used for casual games and for multimedia playback (a workload where Intel’s Xe beats the competition). Meanwhile, DG1 is only compatible with systems based on Intel’s 9th- and 10th-Gen Core processors and motherboards with the B460, H410, B365, and H310C chipsets.
It is unclear where these products are available (presumably from select Chinese retailers or to select Chinese PC makers), and at what price.
Intel lists Gunnir on its website, but the card it shows is not actually a custom Gunnir design; it is a typical entry-level reference board from Colorful, a company that officially denies producing Intel DG1 products since it exclusively makes Nvidia-powered graphics cards.
Google is using machine learning to help design its next generation of machine learning chips. The algorithm’s designs are “comparable or superior” to those created by humans, say Google’s engineers, but can be generated much, much faster. According to the tech giant, work that takes months for humans can be accomplished by AI in under six hours.
Google has been working on how to use machine learning to create chips for years, but this recent effort — described this week in a paper in the journal Nature — seems to be the first time its research has been applied to a commercial product: an upcoming version of Google’s own TPU (tensor processing unit) chips, which are optimized for AI computation.
“Our method has been used in production to design the next generation of Google TPU,” write the authors of the paper, led by Google’s head of ML for Systems, Azalia Mirhoseini.
AI, in other words, is helping accelerate the future of AI development.
In the paper, Google’s engineers note that this work has “major implications” for the chip industry. It should allow companies to more quickly explore the possible architecture space for upcoming designs and more easily customize chips for specific workloads.
An editorial in Nature calls the research an “important achievement,” and notes that such work could help offset the forecasted end of Moore’s Law — an axiom of chip design from the 1970s that states that the number of transistors on a chip doubles every two years. AI won’t necessarily solve the physical challenges of squeezing more and more transistors onto chips, but it could help find other paths to increasing performance at the same rate.
The specific task that Google’s algorithms tackled is known as “floorplanning.” This usually requires human designers who work with the aid of computer tools to find the optimal layout on a silicon die for a chip’s sub-systems. These components include things like CPUs, GPUs, and memory cores, which are connected together using tens of kilometers of minuscule wiring. Deciding where to place each component on a die affects the eventual speed and efficiency of the chip. And, given both the scale of chip manufacturing and the number of computational cycles involved, nanometer-scale changes in placement can end up having huge effects.
Google’s engineers note that designing floor plans takes “months of intense effort” for humans, but, from a machine learning perspective, there is a familiar way to tackle this problem: as a game.
AI has proven time and time again that it can outperform humans at board games like chess and Go, and Google’s engineers note that floorplanning is analogous to such challenges. Instead of a game board, you have a silicon die. Instead of pieces like knights and rooks, you have components like CPUs and GPUs. The task, then, is to simply find each board’s “win conditions.” In chess that might be checkmate; in chip design, it’s computational efficiency.
Google’s engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been randomly generated. Each design was tagged with a specific “reward” function based on its success across different metrics like the length of wire required and power usage. The algorithm then used this data to distinguish between good and bad floor plans and generate its own designs in turn.
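To make the “reward” idea concrete, here is a toy sketch of how a floor plan candidate might be scored by combining a few proxy metrics. The weights, metric names, and normalization are illustrative assumptions on our part, not Google’s actual reward function:

```python
# Toy floorplanning reward in the spirit described above: lower wirelength,
# power, and routing congestion all earn a higher (less negative) reward,
# which the reinforcement learning agent tries to maximize.
from dataclasses import dataclass

@dataclass
class FloorplanMetrics:
    wirelength_m: float   # total estimated wire length, meters (hypothetical)
    power_w: float        # estimated power draw, watts (hypothetical)
    congestion: float     # 0..1 fraction of over-utilized routing cells

def reward(m: FloorplanMetrics, w_wire=1.0, w_power=0.5, w_cong=2.0) -> float:
    # Negate the weighted cost so that better layouts get larger rewards.
    return -(w_wire * m.wirelength_m + w_power * m.power_w + w_cong * m.congestion)

print(reward(FloorplanMetrics(wirelength_m=42.0, power_w=3.1, congestion=0.12)))
```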
As we’ve seen when AI systems take on humans at board games, machines don’t necessarily think like humans and often arrive at unexpected solutions to familiar problems. When DeepMind’s AlphaGo played human champion Lee Sedol at Go, this dynamic led to the infamous “move 37” — a seemingly illogical piece placement by the AI that nevertheless led to victory.
Nothing quite so dramatic happened with Google’s chip-designing algorithm, but its floor plans nevertheless look quite different to those created by a human. Instead of neat rows of components laid out on the die, sub-systems look like they’ve almost been scattered across the silicon at random. An illustration from Nature shows the difference, with the human design on the left and machine learning design on the right. You can also see the general difference in the image below from Google’s paper (orderly humans on the left; jumbled AI on the right), though the layout has been blurred as it’s confidential:
This paper is noteworthy, particularly because its research is now being used commercially by Google. But it’s far from the only aspect of AI-assisted chip design. Google itself has explored using AI in other parts of the process like “architecture exploration,” and rivals like Nvidia are looking into other methods to speed up the workflow. The virtuous cycle of AI designing chips for AI looks like it’s only just getting started.
While high bandwidth memory (HBM) has yet to become a mainstream type of DRAM for graphics cards, it is a memory of choice for bandwidth-hungry datacenter and professional applications. HBM3 is the next step, and this week, SK Hynix revealed plans for its HBM3 offering, bringing us new information on expected bandwidth of the upcoming spec.
SK Hynix’s current HBM2E memory stacks provide an unbeatable 460 GBps of bandwidth per device. JEDEC, which makes the HBM standard, has not yet formally standardized HBM3. But just like other makers of memory, SK Hynix has been working on next-generation HBM for quite some time.
Its HBM3 offering is currently “under development,” according to an updated page on the company’s website, and “will be capable of processing more than 665GB of data per second at 5.2 Gbps in I/O speed.” That’s up from 3.6 Gbps in the case of HBM2E.
SK Hynix is also expecting bandwidth of greater than or equal to 665 GBps per stack — up from SK Hynix’s HBM2E, which hits 460 GBps. Notably, some other companies, including SiFive, expect HBM3 to scale all the way to 7.2 GTps.
Nowadays, bandwidth-hungry devices, like ultra-high-end compute GPUs or FPGAs use 4-6 HBM2E memory stacks. With SK Hynix’s HBM2E, such applications can get 1.84-2.76 TBps of bandwidth (usually lower because GPU and FPGA developers are cautious). With HBM3, these devices could get at least 2.66-3.99 TBps of bandwidth, according to the company.
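Those per-stack and per-device numbers follow from simple arithmetic on the pin speed and stack count; a quick sanity check, assuming HBM3 keeps the 1024-bit-wide interface of HBM2E:

```python
# Per-stack bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
# Assumes HBM3 retains HBM2E's 1024-bit interface, which JEDEC has not yet confirmed.
def stack_bandwidth(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    return pin_rate_gbps * bus_width_bits / 8  # GB/s

hbm2e = stack_bandwidth(3.6)   # ~460 GB/s per stack
hbm3  = stack_bandwidth(5.2)   # ~665 GB/s per stack
print(f"HBM2E: {hbm2e:.0f} GB/s, HBM3: {hbm3:.0f} GB/s per stack")
print(f"4-6 HBM3 stacks: {4 * hbm3 / 1000:.2f}-{6 * hbm3 / 1000:.2f} TB/s")
```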
SK Hynix did not share an anticipated release date for HBM3.
In early 2020, SK Hynix licensed DBI Ultra 2.5D/3D hybrid bonding interconnect technology from Xperi Corp., specifically for high-bandwidth memory solutions (including 3DS, HBM2, HBM3 and beyond), as well as various highly integrated CPUs, GPUs, ASICs, FPGAs and SoCs.
The DBI Ultra supports from 100,000 to 1,000,000 interconnects per square-millimeter and allows stacks up to 16 high, allowing for ultra-high-capacity HBM3 memory modules, as well as 2.5D or 3D solutions with built-in HBM3.
Just when you thought things were as bad in the GPU market as they could be, something new pops up. Demand for gaming graphics cards, game consoles, and GPUs used for mining cryptocurrencies have driven prices of graphics cards and graphics memory to new heights in recent months, but according to TrendForce, that’s about to get even worse. Contract prices of GDDR memory are expected to grow another 8% – 13% later this year due to numerous factors. The only question is how badly the price of graphics memory will affect the prices of actual graphics cards, too.
Graphics DRAM represents a relatively small fraction of the overall memory market, which is largely dominated by LPDDR memory for smartphones, mainstream DRAM for PCs, and server DRAM for datacenters. To that end, GDDR always has to fight for limited DRAM production capacities with other types of memory. Due to relatively low graphics memory volumes and superior performance characteristics, these chips are usually fairly expensive. But that’s a liability in a severe under-supply situation — the price of GDDR6/GDDR6X DRAM has increased significantly.
Memory makers traditionally serve large contract customers first. In the case of GDDR6 and GDDR6X DRAMs, the largest consumers are Nvidia (which bundles memory with some of its GPUs), contract manufacturers that produce game consoles for Microsoft and Sony (Flextronics, Foxconn), and several GPU makers (Asus, Colorful, Palit, etc.). As a result of this prioritization, smaller clients are struggling to get graphics memory.
TrendForce says that GDDR fulfillment rates for some medium- and small-size customers have been around 30% for some time, which is why spot prices of graphics memory sometimes exceeded contract prices by up to 200%. To some degree, GDDR6 spot prices were affected by increasing crypto pricing (particularly Ethereum), so a drop in the coin’s value also reduced GDDR6 spot pricing.
In contrast, GDDR5 pricing hasn’t fluctuated significantly. That’s mostly because it’s really only used for GeForce GTX 1650/1660 and some OEM-oriented graphics boards.
There are several factors that will affect graphics memory pricing in the coming months:
Demand for gaming PCs remains at high levels, so GPU makers will require more GDDR6 and GDDR6X SGRAM chips.
The latest game consoles from Microsoft and Sony use 16Gb GDDR6 memory chips, whereas graphics cards use 8Gb GDDR6 devices, so makers of graphics memory are not exactly flexible.
Since Nvidia, contract manufacturers, and select makers of graphics cards will remain the top consumers of GDDR6 and GDDR6X memory, other players will still be severely supply-constrained.
As demand for servers and mainstream PCs is increasing, graphics memory has to fight for production capacity, affecting its pricing.
Since GDDR6 and GDDR6X pricing is set to increase, the bill-of-materials (BOM) costs for GPUs will also increase. Since there are supply constraints of other components and logistics problems in place, it is unlikely anything could offset the BOM increase in the third quarter. And with that, it will get more expensive for manufacturers to build GPUs.
Meanwhile, GPU pricing is inflated because of demand from gamers and miners. Therefore, if AMD and Nvidia increase their GPU supply in Q3 and demand from miners recedes because of lower Ethereum value, then GPU pricing could actually decrease. Unfortunately, we don’t know exactly what will happen, so the future is hard to predict.
Ragnar Locker has claimed another victim. BleepingComputer reported yesterday that the ransomware group forced Adata to take its systems offline in May. Even though Adata says it has since resumed normal operations, the group claims that it was able to steal 1.5TB of data before the company detected its attack.
It’s not clear how the ransomware attack affected Adata’s ability to manufacture its storage, memory, and power solutions. The company told BleepingComputer that “things are being moved toward the normal track, and business operations are not disrupted for corresponding contingency practices are effective.”
Ragnar Locker has reportedly claimed that it was able to “collect and exfiltrate proprietary business information, confidential files, schematics, financial data, Gitlab and SVN source code, legal documents, employee info, NDAs, and work folders” as part of this attack. But those files have not yet been shared with the public.
The ransomware group has been operating since at least November 2019. Sophos offered some insight into how the ransomware itself operated in May 2020, and the FBI said in November 2020 that it has targeted “cloud service providers, communication, construction, travel, and enterprise software companies.”
It seems Ragnar Locker isn’t bashful, either, with Threatpost reporting in November 2020 that it took out Facebook ads threatening to leak the 2TB of data it stole from Campari Group unless it was paid $15 million in Bitcoin. Other high-profile attacks have targeted Energias de Portugal (a Portuguese electric company) and Capcom.
Ransomware doesn’t necessarily get as much attention as it used to, but attacks are still common, and they’re still able to affect large companies like Adata or Quanta Computer. The attacks often follow the pattern set by Ragnar Locker by attempting to block access to data while simultaneously threatening to leak it to the public.
Attacks continue to target consumers, too, with a recent example being Android ransomware that masqueraded as a mobile version of Cyberpunk 2077 to find its victims. Companies have even started selling “self-defending” SSDs to consumers to ease concerns about being targeted by these kinds of attacks.
Adata told BleepingComputer that it is “determined to devote ourselves making the system protected than ever, and yes, this will be our endless practice while the company is moving forward to its future growth and achievements.” Somebody’s gotta make sure those efforts to capitalize on Chia aren’t disrupted again.
The Nvidia GeForce RTX 3070 Ti continues the rollout of the Ampere architecture, which powers the GPUs behind many of the best graphics cards. Last week Nvidia launched the GeForce RTX 3080 Ti, a card that we felt increased the price too much relative to the next step down. The RTX 3070 Ti should do better, both by virtue of only costing $599 (in theory) and because there’s up to a 33% performance gap between the existing GeForce RTX 3070 and GeForce RTX 3080. That’s a $100 increase in price relative to the existing 3070, but both the 3070 and 3080 will continue to be sold, in “limited hash rate” versions, for the time being. We’ll be adding the RTX 3070 Ti to our GPU benchmarks hierarchy shortly, if you want to see how all the GPUs rank in terms of performance.
The basic idea behind the RTX 3070 Ti is simple enough. Nvidia takes the GA104 GPU that powers the RTX 3070 and RTX 3060 Ti, only this time it’s the full 48 SM variant of the chip, and pairs it with GDDR6X. While Nvidia could have tried doing this last year, both the RTX 3080 and RTX 3090 were already struggling to get enough GDDR6X memory, and delaying by nine months allowed Nvidia to build up enough inventory of both the GPU and memory for this launch. Nvidia has also implemented its Ethereum hashrate limiter, basically cutting mining performance in half on crypto coins that use the Ethash / Dagger-Hashimoto algorithm.
Will it be enough to avoid having the cards immediately sell out at launch? Let me think about that, no. Not a chance. In fact, miners are probably still trying to buy the limited RTX 3080 Ti, 3080, 3070, 3060 Ti, and 3060 cards. Maybe they hope the limiter will be cracked or accidentally unlocked again. Maybe they made too much money off of the jump in crypto prices during the past six months. Or maybe they’re just optimistic about where crypto is going in the future. The good news, depending on your perspective, is that mining profitability has dropped significantly during the past month, which means cards like the RTX 3090 are now making under $7 per day after power costs, and the RTX 3080 has dropped down to just over $5 per day.
GeForce RTX 3070 Ti: Not Great for Mining but Still Profitable
Even if the RTX 3070 Ti didn’t have a limited hashrate, it would only net about $4.25 a day. With the limiter in place, Ravencoin (KAWPOW) and Conflux (Octopus) are the most profitable crypto coins right now, and both of those hashing algorithms still appear to run at full speed. Profitability should be a bit higher with tuning, but right now, we’d estimate making only $3.50 or so per day. That’s still enough for the cards to ‘break even’ in about six months, but again, profitability has dropped and may continue to drop.
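The break-even estimate is simple division; a hedged sketch using the $599 MSRP and our rough $3.50-per-day figure (actual payback obviously shifts with coin prices, power costs, and the price you really pay for the card):

```python
# Rough mining break-even: days until net earnings cover the card's price.
# Both inputs are moving targets; the values below are illustrative only.
def break_even_days(card_price_usd: float, net_profit_per_day_usd: float) -> float:
    return card_price_usd / net_profit_per_day_usd

days = break_even_days(599, 3.50)
print(f"~{days:.0f} days (~{days / 30:.1f} months)")  # ~171 days, roughly six months
```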
Gamers will certainly hope profitability keeps falling, but even without crypto coin mining, demand for GPUs continues to greatly exceed supply. By launching the RTX 3070 Ti, with its binned GA104 chips and GDDR6X memory, Nvidia continues to steadily increase the number of GPUs it’s selling. Nvidia is also producing more Turing GPUs right now, mostly for the CMP line of miner cards, and at some point, supply should catch up. Will that happen before the next-gen GPUs arrive? Probably, but only because the next-gen GPUs are likely to be pushed back thanks to the same shortages facing current-gen chips.
Okay, enough of the background information. Let’s take a look at the specifications for the RTX 3070 Ti, along with related Nvidia GPUs like the 3080, 3070, and the previous-gen RTX 2070 Super:
GPU Specifications
Graphics Card         | RTX 3080   | RTX 3070 Ti | RTX 3070   | RTX 2070 Super
----------------------|------------|-------------|------------|---------------
Architecture          | GA102      | GA104       | GA104      | TU104
Process Technology    | Samsung 8N | Samsung 8N  | Samsung 8N | TSMC 12FFN
Transistors (Billion) | 28.3       | 17.4        | 17.4       | 13.6
Die Size (mm^2)       | 628.4      | 392.5       | 392.5      | 545
SMs / CUs             | 68         | 48          | 46         | 40
GPU Cores             | 8704       | 6144        | 5888       | 2560
Tensor Cores          | 272        | 192         | 184        | 320
RT Cores              | 68         | 48          | 46         | 40
Base Clock (MHz)      | 1440       | 1575        | 1500       | 1605
Boost Clock (MHz)     | 1710       | 1765        | 1725       | 1770
VRAM Speed (Gbps)     | 19         | 19          | 14         | 14
VRAM (GB)             | 10         | 8           | 8          | 8
VRAM Bus Width        | 320        | 256         | 256        | 256
ROPs                  | 96         | 96          | 96         | 64
TMUs                  | 272        | 192         | 184        | 160
TFLOPS FP32 (Boost)   | 29.8       | 21.7        | 20.3       | 9.1
TFLOPS FP16 (Tensor)  | 119 (238)  | 87 (174)    | 81 (163)   | 72
RT TFLOPS             | 58.1       | 42.4        | 39.7       | 27.3
Bandwidth (GBps)      | 760        | 608         | 448        | 448
TDP (watts)           | 320        | 290         | 220        | 215
Launch Date           | Sep 2020   | Jun 2021    | Oct 2020   | Jul 2019
Launch Price          | $699       | $599        | $499       | $499
The GeForce RTX 3070 Ti provides just a bit more theoretical computational performance than the 3070, thanks to the addition of two more SMs. It also has slightly higher clocks, giving it 7% more TFLOPS — and it still has 27% fewer TFLOPS than the 3080. More important by far is that the 3070 Ti goes from 14Gbps of GDDR6 and 448 GB/s of bandwidth to 19Gbps GDDR6X and 608 GB/s of bandwidth, a 36% improvement. In general, we expect performance to land between the 3080 and 3070, but closer to the 3070.
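Those theoretical figures can be reproduced from the table above with the standard formulas; a quick check using the reference boost clocks and memory speeds listed (our arithmetic, not Nvidia’s marketing material):

```python
# FP32 TFLOPS = CUDA cores * 2 FMA ops per clock * boost clock, and
# bandwidth (GB/s) = memory speed (Gbps) * bus width (bits) / 8.
def fp32_tflops(cores: int, boost_mhz: int) -> float:
    return cores * 2 * boost_mhz / 1e6

def bandwidth_gbps(mem_speed_gbps: float, bus_width_bits: int) -> float:
    return mem_speed_gbps * bus_width_bits / 8

print(fp32_tflops(6144, 1765))   # ~21.7 TFLOPS, RTX 3070 Ti
print(fp32_tflops(5888, 1725))   # ~20.3 TFLOPS, RTX 3070
print(bandwidth_gbps(19, 256))   # 608 GB/s, 19 Gbps GDDR6X
print(bandwidth_gbps(14, 256))   # 448 GB/s, 14 Gbps GDDR6
```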
Besides performance specs, it’s also important to look at power. It’s a bit shocking to see that the 3070 Ti has a 70W higher TDP than the 3070, and we’d assume nearly all of that goes into the GDDR6X memory. Some of it also allows for slightly higher clocks, but generally, that’s a significant increase in TDP just for a change in VRAM.
There’s still the question of whether 8GB of memory is enough. These days, we’d say it’s sufficient for any game you want to play, but there are definitely instances where you’ll run into memory capacity issues. Not surprisingly, many of those come in games promoted by AMD; it’s almost as if AMD has convinced developers to target 12GB or 16GB of VRAM at maximum quality settings. But a few judicious tweaks to settings (like dropping texture quality a notch) will generally suffice.
The difficulty is that there’s no good way to get more memory other than simply doing it. The 256-bit interface means Nvidia can do 8GB or 16GB — nothing in between. And with the 3080 and 3080 Ti offering 10GB and 12GB, respectively, there was basically no chance Nvidia would equip a lesser GPU with more GDDR6X memory. (Yeah, I know, but the RTX 3060 12GB remains a bit of an anomaly in that department.)
GeForce RTX 3070 Ti Design: A Blend of the 3070 and 3080
Unlike the RTX 3080 Ti, Nvidia actually made some changes to the RTX 3070 Ti’s design. Basically, the 3070 Ti has a flow-through cooling fan at the ‘back’ of the card, similar to the 3080 and 3090 Founders Edition cards. In comparison, the 3070 just used two fans on the same side of the card. This also required some tweaks to the PCB layout, so the 3070 Ti doesn’t use the exact same boards as the 3070 and 3060 Ti. It’s not clear exactly how much the design tweak helps with cooling, but considering the 290W vs. 220W TDP, presumably Nvidia did plenty of testing before settling on the final product.
Overall, whether the change significantly improves the cooling or not, we think it does improve the look of the card. The RTX 3070 and 3060 Ti Founders Editions looked a bit bland, as they lacked even a large logo indicating the product name. The 3080 and above (FE models) include RGB lighting, though, which the 3070 Ti and below lack. Third party cards can, of course, do whatever they want with the GPU, and we assume many of them will provide beefier cooling and RGB lighting, along with factory overclocks.
One question we had going into this review was how well the card would cool the GDDR6X memory. The various Founders Edition cards with GDDR6X memory can all hit 110 degrees Celsius on the memory with various crypto mining algorithms, at which point the fans kick into high gear and the GPU throttles. Gaming tends to be less demanding, but we still saw 102C-104C on the 3080 Ti. The 3070 Ti doesn’t have that problem. Even with mining algorithms, the memory peaked at 100C, and temperatures in games were generally 8C–12C cooler. That’s the benefit of only having to cool 8GB of GDDR6X instead of 10GB, 12GB, or 24GB.
GeForce RTX 3070 Ti: Standard Gaming Performance
TOM’S HARDWARE GPU TEST PC
Our test setup remains unchanged from previous reviews, and like the 3080 Ti, we’ll be doing additional testing with ray tracing and DLSS — using the same tests as our AMD vs. Nvidia: Ray Tracing Showdown. We’re using the test equipment shown above, which consists of a Core i9-9900K, 32GB DDR4-3600 memory, 2TB M.2 SSD, and the various GPUs being tested — all of which are reference models here, except for the RTX 3060 (an EVGA model running reference clocks).
That gives us two sets of results. First is the traditional rendering performance, using thirteen games, at 1080p, 1440p, and 4K with ultra/maximum quality settings. Then we have ten more games with RT (and sometimes DLSS, where applicable). We’ll start with 4K, since this is a top-tier GPU more likely to be used at that resolution, plus it’s where the card does best relative to the other GPUs — CPU bottlenecks are almost completely eliminated at 4K, but more prevalent at 1080p. If you want to check 1080p/1440p/4K medium performance, we’ll have those results in our best graphics cards and GPU benchmarks articles — though only for nine of the games.
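When we quote a single “X% faster overall” number across a suite like this, it comes from aggregating per-game results; a geometric mean of per-game fps ratios is one common way to do that. The sketch below is a generic illustration with made-up numbers, not necessarily the exact aggregation behind our charts:

```python
# Aggregate relative performance across a game suite using the geometric mean
# of per-game fps ratios, so no single outlier dominates the overall number.
from math import prod

def overall_speedup(fps_a: list[float], fps_b: list[float]) -> float:
    ratios = [a / b for a, b in zip(fps_a, fps_b)]
    return prod(ratios) ** (1 / len(ratios))

card_a = [92.0, 78.5, 110.3, 64.1]   # hypothetical per-game fps for card A
card_b = [85.0, 73.0, 100.0, 60.5]   # hypothetical per-game fps for card B
print(f"Card A is {(overall_speedup(card_a, card_b) - 1) * 100:.1f}% faster overall")
```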
The RTX 3070 Ti does best as a 1440p gaming solution, which remains the sweet spot in terms of image quality and performance requirements. Overall performance ended up 9% faster than the RTX 3070 and 13% slower than the RTX 3080, so the added memory bandwidth only goes so far toward removing bottlenecks. However, a few games benefit more, like Assassin’s Creed Valhalla, Dirt 5, Horizon Zero Dawn, Shadow of the Tomb Raider, and Strange Brigade — all of which show double-digit percentage improvements relative to the 3070.
Some of the games are also clearly hitting other bottlenecks, like the GPU cores. Borderlands 3, The Division 2, Far Cry 5, FFXIV, Metro Exodus, and Red Dead Redemption 2 all show performance gains closer to the theoretical 7% difference in compute that we get from core counts and clock speeds. Meanwhile, Watch Dogs Legion ends up showing the smallest change in performance, improving just 3% compared to the RTX 3070.
The RTX 3070 Ti makes for a decent showing here, but we’re still looking at an MSRP increase of 20% for a slightly less than 10% increase in performance. Compared to AMD’s RX 6000 cards, the 3070 Ti easily beats the RX 6700 XT, but it comes in 6% behind the RX 6800 — which, of course, means it trails the RX 6800 XT as well.
On the one hand, AMD’s GPUs tend to sell at higher prices, even when you see them in places like the Newegg Shuffle. At the same time, RTX 30-series hardware on eBay remains extremely expensive, with the 3070 selling for around $1,300, compared to around $1,400 for the RX 6800. Considering the RTX 3070 Ti is faster than the RTX 3070, it remains to be seen where street pricing lands. Of course, the reduced hashrates for Ethereum mining on the 3070 Ti may also play a role.
Next up is 1080p testing. Lowering the resolution tends to make games more CPU limited, and that’s exactly what we see. The 3070 Ti was 7% faster than the 3070 this time and 11% slower than the 3080. It was also 7% faster than the 6700 XT and 6% slower than the 6800. While you can still easily play games at 1080p on the RTX 3070 Ti, the same is true of most of the other GPUs on our charts.
We won’t belabor the point, other than to note that our current test suite is slightly more tilted in favor of AMD GPUs (six AMD-promoted games compared to four Nvidia-promoted games, with three ‘agnostic’ games). We’ll make up for that when we hit the ray tracing benchmarks in a moment.
Not surprisingly, while 4K ultra gaming gave the RTX 3070 Ti its biggest lead over the RTX 3070 (11%), it also got its biggest loss (17%) against the 3080. 4K also narrowed the gap between the 3070 Ti and the RX 6800, as AMD’s Infinity Cache starts to hit its limits at 4K.
Technically, the RTX 3070 Ti can still play all of the test games at 4K, just not always at more than 60 fps. Nearly half of the games we tested came in below that mark, with Valhalla and Watch Dogs Legion being the two lowest scores — and they’re still in the mid-40s. The RTX 3070 was already basically tied with the previous generation RTX 2080 Ti, which means the RTX 3070 Ti is now clearly faster than the previous-gen halo card, at half the price.
GeForce RTX 3070 Ti: Ray Tracing and DLSS Gaming Performance
So far, we’ve focused on gaming performance using traditional rasterization graphics. We’ve also excluded using Nvidia’s DLSS technology in order to provide an apples-to-apples comparison. Now we’ll focus on ray tracing performance, with DLSS 2.0 enabled where applicable. We’re only using DLSS in Quality mode (2x upscaling) in the six games where it is supported. We’ll have to wait for AMD’s FSR to see if it can provide a reasonable alternative to DLSS 2.0 in the coming months, though Nvidia clearly has a lengthy head start. Note that these are the same tests we used in our recent AMD vs. Nvidia Ray Tracing Battle.
Nvidia’s RTX 3070 Ti does far better — at least against the AMD competition — in ray tracing games. It’s not a complete sweep, as the RX 6800 still leads in Godfall, but the 3070 Ti ties or wins in every other game. In fact, the 3070 Ti basically ties the RX 6800 XT in our ray tracing test suite, and that’s before we enable DLSS 2.0.
Even 1080p DXR generally ends up being GPU limited, so the rankings don’t change much from above. DLSS doesn’t help quite as much at 1080p, but otherwise, the 3070 Ti ends up right around 25% faster than the RX 6800 — the same as at 1440p. We’ve mentioned before that Fortnite is probably the best ‘neutral’ look at advanced ray tracing techniques, and the 3070 Ti is about 5–7% faster there. Turn on DLSS Quality and it’s basically double the framerate of the RX 6800.
GeForce RTX 3070 Ti: Power, Clocks, and Temperatures
We’ve got our Powenetics equipment working again, so we’ve added the 3080 Ti to these charts. Unfortunately, there was another slight snafu: We couldn’t get proper fan speeds this round. It’s always one thing or another, I guess. Anyway, we use Metro Exodus running at 1440p ultra (without RT or DLSS) and FurMark running at 1600×900 in stress test mode for our power testing. Each test runs for about 10 minutes, and we log the result to generate the charts. For the bar charts, we only average data where the GPU load is above 90% (to avoid skewing things in Metro when the benchmark restarts).
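That “average only the loaded samples” step is easy to reproduce; here is a minimal sketch of the filtering, assuming a log of (GPU load, power) samples. The sample data is made up, and the 90% threshold matches the cutoff described above:

```python
# Average power over a logged run, ignoring samples where the GPU is mostly idle
# (for example, between Metro benchmark iterations).
def avg_power_under_load(samples, load_threshold=90.0):
    loaded = [power for load, power in samples if load > load_threshold]
    return sum(loaded) / len(loaded) if loaded else 0.0

log = [(99, 285.0), (98, 280.2), (12, 95.0), (97, 283.4)]  # (gpu_load %, watts)
print(f"{avg_power_under_load(log):.1f} W")  # averages only the >90% load samples
```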
Nvidia gives the RTX 3070 Ti a 290W TDP, and it mostly makes use of that power. It averaged about 282W for our Metro testing, but that’s partly due to the lull in GPU activity between benchmark iterations. FurMark showed 291W of power use, right in line with expectations.
Core clocks were interesting, as the GeForce RTX 3070 Ti actually ended up with slightly lower clocks than the RTX 3070 in FurMark and Metro. On the other hand, both cards easily exceeded the official boost clocks by about 100 MHz. Custom third-party cards will likely hit higher clocks and performance, though also higher power consumption.
While we don’t have fan data (or noise data — sorry, I’m still trying to get unpacked from the move), the RTX 3070 Ti did end up hitting the highest temperatures of any of the GPUs in both Metro and FurMark. As we’ve noted before, however, none of the cards are running “too hot,” and we’re more concerned with memory temperatures. The 3070 Ti thankfully didn’t get above 100C on GDDR6X junction temperatures during testing, and even that value occurred while testing crypto coin mining.
GeForce RTX 3070 Ti: Good but With Diminishing Returns
We have to wonder what things would have been like for the RTX 3070 Ti without the double whammy of the Covid pandemic and the cryptocurrency boom. If you look at the RTX 20-series, Nvidia started at higher prices ($599 for the RTX 2070 FE) and then dropped things $100 with the ‘Super’ updates a year later. Ampere has gone the opposite route: Initial prices were excellent, at least on paper, and every one of the cards sold out immediately. That’s still happening today, and the result is a price increase — along with improved performance — for the 3070 Ti and 3080 Ti.
Thankfully, the jump in pricing on the 3070 Ti relative to the 3070 isn’t too arduous. $100 more for the switch to GDDR6X is almost palatable. Except that, while the 3070 offers about 90% of the 3070 Ti’s performance for 80% of the price and represents an arguably better buy, the real problem is the RTX 3080. It’s about 12–20% faster across our 13-game test suite and only costs $100 more (a 17% price increase).
Well, in theory anyway. Nobody is really selling RTX 3080 for $700, and they haven’t done so since it launched. The 3080 often costs over $1,000 even in the lottery-style Newegg Shuffle, and the typical price is still above $2,000 on eBay. It’s one of the worst cards to buy on eBay, based on how big the markup is. In comparison, the RTX 3070 Ti might only end up costing twice its MSRP on eBay, but that’s still $1,200. And it could very well end up costing more than that.
We’ll have to see what happens in the coming months. Hopefully, the arrival of two more desktop graphics cards in the form of the RTX 3080 Ti and RTX 3070 Ti will alleviate the shortages a bit. The hashrate limiter can’t hurt either, at least if you’re only interested in gaming performance, and the drop in mining profitability might help. But we’re far from being out of the shortage woods.
If you can actually find the RTX 3070 Ti for close to its $600 MSRP, and you’re in the market for a new graphics card, it’s a good option. Finding it will be the difficult part. This is bound to be a repeat of every AMD and Nvidia GPU launch of the past year. If you haven’t managed to procure a new card yet, you can try again (and again, and again…). But for those who already have a reasonable graphics card, there’s nothing really new to see here: slightly better performance and higher power consumption at a higher price. Let’s hope supply and prices improve by the time fall blows in.
Samsung introduced the first mass-produced 3D NAND memory, dubbed V-NAND, in 2013, well ahead of its rivals. Samsung started with 24-layer V-NAND chips back then, and now, having gained plenty of experience with multi-layer flash memory, it is on track to introduce 176-layer V-NAND devices. But that’s only the beginning — Samsung says it envisions V-NAND chips with more than 1,000 layers in the future.
176-Layer V-NAND on Track for This Year, PCIe 5.0 Coming
Samsung intends to begin producing consumer SSDs powered by its seventh-gen V-NAND memory that features 176 layers and, according to the company, the industry’s smallest NAND memory cells. This new flash’s interface boasts a 2000 MT/s data transfer rate, allowing Samsung to build ultra-fast SSDs with PCIe 4.0 and PCIe 5.0 interfaces. The drives will use an all-new controller ‘optimized for multitasking huge workloads,’ so expect a 980 Pro successor that demonstrates strong performance in workstation applications.
Over time, Samsung will introduce data center-grade SSDs based on its 176-Layer V-NAND memory. It’s logical to expect the new drives to feature enhanced performance and higher capacities.
While 176-layer V-NAND chips are nearing mass production, Samsung has already built the first samples of its eighth-gen V-NAND with over 200 layers. Samsung says that it will begin producing this new memory based on market demand. Companies typically introduce new types of NAND devices every 12 to 18 months, so you could make more or less educated guesses about Samsung’s planned timeline for 200+ layer V-NAND.
There are several challenges that Samsung and other NAND makers face in their pursuit to increase the number of layers. Making NAND cells smaller (and layers thinner) requires new materials to store charges reliably, and etching hundreds of layers is also challenging. Since it isn’t feasible or economical to etch hundreds of layers in a single pass (i.e., building a 1,000-layer 3D NAND device in one step), manufacturers use techniques like string stacking, which is itself quite difficult to execute in high volume.
Finally, flash makers need to ensure that their 3D NAND stacks are thin enough to fit into smartphones and PCs. As a result, they can’t simply increase the number of layers forever, but Samsung believes that 1,000+ layer chips are feasible.
Big Plans for V-NAND
Earlier this year SK Hynix said that it envisioned 3D NAND with over 600 layers, so Samsung is certainly not alone with its big plans for 3D NAND.
It is impossible to say when Samsung will develop 1,000-layer V-NAND or when SK Hynix will launch its 600-layer flash memory. Keeping in mind that manufacturers no longer aim to double the number of layers every year, it is likely that large makers have 3D NAND roadmaps that stretch at least five to ten years out.
The NVIDIA GeForce RTX 3070 Ti is the company’s attempt at bolstering its sub-$700 lineup targeting a segment of the gaming market that predominantly games at 1440p, but needs an upgrade path toward 4K UHD. Cards from this segment are very much capable of 4K gaming, but require a tiny bit of tweaking. There are also handy features like DLSS to fall back on. NVIDIA already has such a product in the RTX 3070, so why did it need the new RTX 3070 Ti? The answer lies in AMD’s unexpected return to the high-end graphics market with its Radeon RX 6800 series “Big Navi” graphics cards. The RX 6800 was found to outclass the RTX 3070 in most games that don’t use raytracing, and the more recently released RX 6700 XT only adds to the pressure as it trades blows with the RTX 3070 at a slightly lower price.
The GeForce RTX 3070 Ti is one half of a two-part refresh by NVIDIA for the higher end of its GeForce RTX 30-series “Ampere” product stack, the other being the RTX 3080 Ti we reviewed last week. NVIDIA attempted to set the RTX 3070 Ti apart from the RTX 3070 without significantly increasing manufacturing costs (i.e., without having to tap into the larger GA102 silicon). It did this with two changes. First, the RTX 3070 Ti maxes out the GA104 chip, enabling all 6,144 CUDA cores physically present as opposed to the 5,888 on the RTX 3070—a 4% increase. Next, NVIDIA gave the memory sub-system a major boost by giving this card 19 Gbps GDDR6X memory instead of the 14 Gbps GDDR6 on the RTX 3070. This in itself is a 35% increase in memory bandwidth, even if the memory size remains the same at 8 GB. Slightly higher GPU clock speeds wrap things up. The idea is to outclass the RX 6700 XT and make up ground lost to the RX 6800.
The “Ampere” graphics architecture debuts the second generation of NVIDIA’s ambitious RTX real-time raytracing technology, which combines raytraced elements with conventional raster 3D to significantly improve realism. It pairs second-generation RT cores (fixed-function hardware that accelerates raytracing, now handling even more raytraced effects) with third-generation Tensor cores, which accelerate AI deep learning and leverage the sparsity phenomenon to significantly increase AI inference performance, and the new Ampere CUDA core, which doubles compute performance over the previous generation by leveraging concurrent INT32+FP32 math.
The new GeForce RTX 3070 Ti Founders Edition graphics card comes with an all-new design that looks like a cross between the RTX 3080 FE and RTX 3070 FE. It implements the same dual-axial flow-through concept as the RTX 3080 FE, but with styling elements more reminiscent of the RTX 3070 FE. The design involves two fans, one on either side of the card, and a PCB that is shorter than the card itself, so fresh air drawn in by one fan is exhausted from the other side for better heat dissipation. NVIDIA is pricing the GeForce RTX 3070 Ti Founders Edition at $599, a $100 premium over the RTX 3070. We expect that current market conditions will push the card to around $1300, matching the RTX 3070 and slightly below the $1400 RX 6800 non-XT.
The MSI GeForce RTX 3070 Ti Suprim X is the company’s top custom-design graphics card based on the swanky new RTX 3070 Ti high-end graphics card by NVIDIA. The Suprim series represents MSI’s best efforts in the areas of product design, factory-overclocked speeds, cooling performance, and more. NVIDIA debuted the RTX 3070 Ti and RTX 3080 Ti to augment its RTX 30-series “Ampere” graphics card family, particularly as it faced unexpected competition from rival AMD in the high-end with the Radeon RX 6000 series “Big Navi” graphics cards. The RTX 3070 Ti is designed to fill a performance gap between the RTX 3070 and RTX 3080, letting NVIDIA better compete with the RX 6700 XT and RX 6800, which posed stiff competition to the RTX 3070. Cards from this segment are expected to offer maxed-out gaming at 1440p with raytracing enabled, and also retain the ability to play at 4K UHD with reasonably good settings.
The GeForce RTX 3070 Ti is based on the same GA104 silicon as the RTX 3070, but NVIDIA made two major design changes—first, it has maxed out the GA104, enabling all 6,144 CUDA cores as opposed to 5,888 on the RTX 3070; and second, it is using faster 19 Gbps GDDR6X memory in place of 14 Gbps GDDR6 memory. The memory sub-system alone sees a significant 35% uplift in bandwidth. The memory size is still 8 GB.
The GeForce “Ampere” graphics architecture debuts the second-generation of NVIDIA’s path-breaking RTX real-time raytracing technology that combines raytraced effects, such as reflections, shadows, lighting, and global-illumination, with conventional raster 3D graphics to increase realism. “Ampere” combines second-generation RT cores with third-generation Tensor cores that accelerate AI, and faster “Ampere” CUDA cores.
The MSI RTX 3070 Ti Suprim X is an attempt by MSI to match NVIDIA’s Founders Edition cards in terms of aesthetics. A premium-looking, brushed metal cooler shroud greets you, with its trio of TorX 4.0 fans and a dense aluminium fin-stack heatsink. MSI has given the RTX 3070 Ti its top factory overclock at 1860 MHz, compared to the 1770 MHz reference. In this review, we take the card out for a spin to see whether MSI has built a better-looking and better-performing card than the NVIDIA Founders Edition.
The Palit GeForce RTX 3070 Ti GameRock OC is the company’s most premium custom-design implementation of NVIDIA’s latest high-end graphics card. The RTX 3070 Ti, along with last week’s RTX 3080 Ti launch, forms part of an attempt to refresh the high-end segment in the face of competition from AMD and its “Big Navi” Radeon RX 6800 series. This segment of graphics cards is targeted at those who want maxed-out gaming at 1440p with raytracing, but also the ability to play at 4K UHD with reasonably good details. NVIDIA already has such a SKU in the RTX 3070, but it was embattled by the RX 6700 XT and RX 6800, which is likely what prompted the RTX 3070 Ti launch.
NVIDIA created the GeForce RTX 3070 Ti out of the same GA104 silicon as the RTX 3070, by maxing it out. You hence get all 6,144 CUDA cores physically present on the chip, compared to just 5,888 on the RTX 3070. Another major change is memory, with NVIDIA opting for fast 19 Gbps GDDR6X memory over 14 Gbps GDDR6. This results in a significant 35% increase in memory bandwidth over the RTX 3070. The memory size remains 8 GB, though. Wrapping things up are the slightly higher GPU clock speeds. The resulting product, NVIDIA believes, should be competitive against the RX 6800, restoring competition to the sub-$600 market segment.
Palit bolstered the RTX 3070 Ti with its highest factory overclock, at an 1845 MHz boost frequency, compared to the 1770 MHz reference. The GameRock OC series from Palit has always represented over-the-top designs, and this card is no exception. A neatly executed “icebox” pattern tops the cooler shroud, not unlike the G.SKILL Trident Z Royal memory modules, and this element is illuminated with addressable RGB.
At this time, Palit is unable to provide an MSRP for the GameRock OC. I’d estimate that it’ll end up around $1350 on the open market, or $50 higher than the RTX 3070 Ti Founders Edition.
GeForce RTX 3070 Ti Market Segment Analysis
Card                          | Price              | Cores | ROPs | Core Clock | Boost Clock | Memory Clock | GPU     | Transistors | Memory
------------------------------|--------------------|-------|------|------------|-------------|--------------|---------|-------------|----------------------
RX 5700 XT                    | $370               | 2560  | 64   | 1605 MHz   | 1755 MHz    | 1750 MHz     | Navi 10 | 10300M      | 8 GB, GDDR6, 256-bit
RTX 2070                      | $340               | 2304  | 64   | 1410 MHz   | 1620 MHz    | 1750 MHz     | TU106   | 10800M      | 8 GB, GDDR6, 256-bit
RTX 3060                      | $900               | 3584  | 48   | 1320 MHz   | 1777 MHz    | 1875 MHz     | GA106   | 13250M      | 12 GB, GDDR6, 192-bit
RTX 2070 Super                | $450               | 2560  | 64   | 1605 MHz   | 1770 MHz    | 1750 MHz     | TU104   | 13600M      | 8 GB, GDDR6, 256-bit
Radeon VII                    | $680               | 3840  | 64   | 1400 MHz   | 1800 MHz    | 1000 MHz     | Vega 20 | 13230M      | 16 GB, HBM2, 4096-bit
RTX 2080                      | $600               | 2944  | 64   | 1515 MHz   | 1710 MHz    | 1750 MHz     | TU104   | 13600M      | 8 GB, GDDR6, 256-bit
RTX 2080 Super                | $690               | 3072  | 64   | 1650 MHz   | 1815 MHz    | 1940 MHz     | TU104   | 13600M      | 8 GB, GDDR6, 256-bit
RTX 3060 Ti                   | $1300              | 4864  | 80   | 1410 MHz   | 1665 MHz    | 1750 MHz     | GA104   | 17400M      | 8 GB, GDDR6, 256-bit
RX 6700 XT                    | $1000              | 2560  | 64   | 2424 MHz   | 2581 MHz    | 2000 MHz     | Navi 22 | 17200M      | 12 GB, GDDR6, 192-bit
RTX 2080 Ti                   | $1400              | 4352  | 88   | 1350 MHz   | 1545 MHz    | 1750 MHz     | TU102   | 18600M      | 11 GB, GDDR6, 352-bit
RTX 3070                      | $1300              | 5888  | 96   | 1500 MHz   | 1725 MHz    | 1750 MHz     | GA104   | 17400M      | 8 GB, GDDR6, 256-bit
RTX 3070 Ti                   | $1300 (MSRP: $600) | 6144  | 96   | 1575 MHz   | 1770 MHz    | 1188 MHz     | GA104   | 17400M      | 8 GB, GDDR6X, 256-bit
Palit RTX 3070 Ti GameRock OC | $1350              | 6144  | 96   | 1575 MHz   | 1845 MHz    | 1188 MHz     | GA104   | 17400M      | 8 GB, GDDR6X, 256-bit
RX 6800                       | $1400              | 3840  | 96   | 1815 MHz   | 2105 MHz    | 2000 MHz     | Navi 21 | 26800M      | 16 GB, GDDR6, 256-bit
RX 6800 XT                    | $1700              | 4608  | 128  | 2015 MHz   | 2250 MHz    | 2000 MHz     | Navi 21 | 26800M      | 16 GB, GDDR6, 256-bit
RTX 3080                      | $1500              | 8704  | 96   | 1440 MHz   | 1710 MHz    | 1188 MHz     | GA102   | 28000M      | 10 GB, GDDR6X, 320-bit
RTX 3080 Ti                   | $2200              | 10240 | 112  | 1365 MHz   | 1665 MHz    | 1188 MHz     | GA102   | 28000M      | 12 GB, GDDR6X, 384-bit