According to leaks, some Navi 21 models may operate at maximum clock speeds of up to 2.4 GHz.
AMD will introduce the RDNA2 architecture and the Radeon RX 6000 series graphics cards on October 28, just over a week from now. Early performance figures have already leaked, suggesting that at least one model competes in roughly the same performance class as the GeForce RTX 3080.
The familiar and credible leaker _rogame has tweeted his latest information about the Navi 21 graphics chip. According to the leak, Navi 21 XT graphics cards would run at 1450 – 1500 MHz Base, 2000 – 2100 MHz Game, and 2200 – 2400 MHz Boost clock frequencies. The somewhat slower Navi 21 XL runs, according to the leaker, at 1350 – 1450 MHz Base, 1800 – 1900 MHz Game, and 2100 or possibly 2200 MHz Boost clock frequencies.
According to _rogame, the lower of the above clock frequencies would be the reference clocks, and the higher ones the factory clocks of the AIB models. According to driver information, Navi 21 comes not only in XT and XL but also in XTX and XLE models. The XTX model is rumored to be available only from AMD, while AIBs (add-in board partners) get the best XT chips, but this is impossible to confirm at this stage.
Patrick Schur is still a newcomer on the leak front, but so far his information has proven reliable. According to Schur, at least one specific AIB manufacturer’s Navi 21 XT model would have 16 GB of GDDR6 memory, an approximately 2.4 GHz Game clock, and a 255-watt TGP (Total Graphics Power). It is possible that Schur has mixed up the Boost and Game clocks.
It’s no secret at this point that Intel is working on its own graphics cards, but it seems that Amnesia: Rebirth has more or less revealed the performance of at least one of Intel’s future graphics card models. As we can see in the attached screenshot, the game’s Steam page lists not two but three GPU brands in the graphics requirements section.
Although not much is known about the Intel Xe-HPG to this day, it is expected to perform at the level of the other graphics cards listed in the recommended requirements, so we could say that the performance of the Intel Xe-HPG will be similar to that of an AMD Radeon RX 580.
Obviously, all this may be mere guesswork from the game developer, so we can’t take it as a performance metric at face value. However, it is still interesting to see Intel’s dedicated graphics cards begin to find their place in the hardware requirements of the latest games, something that was almost unthinkable just a year or two ago.
Jordi Bercial
Avid tech and electronics enthusiast. I’ve messed around with computer components almost since I learned to walk. I started working at Geeknetic after winning a contest on their forum for writing hardware articles. Drift, mechanics and photography lover. Don’t be shy and leave a comment on my articles if you have any questions.
In this article, which our team will regularly update, we will maintain a growing list of information pertaining to upcoming hardware releases based on leaks and official announcements as we spot them. There will obviously be a ton of rumors on unreleased hardware, and it is our goal to—based on our years of industry experience—exclude the crazy ones. In addition to these upcoming hardware release news, we will regularly adjust the structure of this article to better organize information. Each time an important change is made to this article, it will re-appear on our front page with a “new” banner, and the additions will be documented in the forum comments thread. This article will not leak information we signed an NDA for.
Feel free to share your opinions and tips in the forum comments thread and subscribe to the same thread for updates.
Adds AVX-512 instructions (so far available only on the HEDT platform, since Skylake-X). New extensions: AVX512F, AVX512CD, AVX512DQ, AVX512BW, and AVX512VL, plus AVX512_IFMA and AVX512_VBMI
20-30% broadening of various number crunching resources, wider execution window, more AGUs
18% IPC gains vs Cascade Lake
SHA-NI and Vector-AES instruction sets, up to 75% higher encryption performance vs. “Skylake”
Supports unganged memory mode
Integrated GPU based on new Gen11 architecture, up to 1 TFLOP/s ALU compute performance
Integrated GPU supports DisplayPort 1.4a and DSC for 5K and 8K monitor support
Gen11 also features tile-based rendering, one of NVIDIA’s secret-sauce features
Integrated GPU supports VESA adaptive V-sync, all AMD FreeSync-capable monitors should work with this
Ice Lake introduces Intel TME (Total Memory Encryption), also Intel Platform Firmware Resilience (Intel PFR)
Intel Core i9-10990XE
Release Date: unknown, originally early 2020, seems cancelled now
22-cores + HyperThreading
Uses Cascade Lake-X architecture
LGA2066 Socket
1 MB L2 cache per core, 30.25 MB shared L3 cache
4 GHz base, up to 5 GHz boost
Roughly matches Threadripper 3960X in Cinebench
Intel Rocket Lake [updated]
Release Date: Q1 2021
Succeeds “Comet Lake”
Variants: Rocket Lake-“S” (mainstream desktop), -“H” (mainstream notebook), -“U” (ultrabook), and -“Y” (low power portable)
14 nanometer production process
Seems to be limited to eight cores (two fewer than 10-core Comet Lake)
Some indication of mixed HyperThreading configurations, for example 8-core, 12-thread
Uses “Cypress Cove” core, which seems to be a backport of “Willow Cove” to 14 nm process
Up to 10% IPC improvement over Skylake
No FIVR, uses SVID VRM architecture
125 W maximum TDP
Compatible with 400-series chipsets
Possible they release 500-series chipsets with added features
Socket LGA1200 (just like Comet Lake)
Supports PCI-Express 4.0
20 PCIe lanes
Intel Xe integrated graphics, based on Gen 12 with HDMI 2.0b and DisplayPort 1.4a
Engineering Sample: Family 6, Model 167, Stepping 0, 8c/16t, 3.4 GHz base, 5.0 GHz boost
Engineering Sample: Family 6, Model 167, Stepping 0, 8c/16t, 3.2 GHz base, 4.3 GHz boost
Intel Willow Cove and Golden Cove Cores
Release Date: 2021
Succeeds “Sunny Cove”
Willow Cove improves on-die caches, adds more security features, and takes advantage of 10 nm+ process improvements to increase clock speeds versus Sunny Cove
Golden Cove will add significant single-thread (IPC) increases over Sunny Cove, add on-die matrix multiplication hardware, improved 5G network-stack HSP performance, and more security features than Willow Cove
Intel Alder Lake [updated]
Release Date: H2 2021
Mixes CPU cores of various processing power (and energy consumption), similar to big.LITTLE designs for mobile devices
Combines up to eight Golden Cove with up to eight Gracemont (Atom) cores
The two core types support different instruction sets; for example, Golden Cove has AVX-512, TSX-NI and FP16, which Gracemont lacks
10 nm process
Uses Socket LGA1700
Alder Lake for desktop: 37.5 mm x 45 mm package
Desktop CPUs come in 125 W and 80 W
Could use Foveros 3D Stacking technology
Possible CPU configurations 8+8+1 (8 big cores, 8 small cores, GT1 integrated), and 6+0+1 (6 big cores, no small cores and GT1 integrated)
Includes Gen12 Xe iGPU
DDR5 memory support
PCI-Express 5.0 support
Includes CLDEMOTE instruction, which demotes cache lines to a more distant level of the cache hierarchy
Intel Sapphire Rapids
Release Date: H2 2021
Successor to Cooper Lake
8-channel DDR5
Uses Socket LGA4677
For enterprise / data center
10 nm+ production process
Willow Cove CPU cores
PCIe 5.0
Probably 7 nm process
Platform name: Eagle Stream
Includes CLDEMOTE instruction, which demotes cache lines to a more distant level of the cache hierarchy
Intel Grand Ridge [added]
Release Date: 2022 or later
Produced on 7 nm HLL+ process
Successor to Atom “Snow Ridge”
24 cores across 6 clusters with 4 cores each
4 MB L2 per cluster, plus L3 cache
Uses Gracemont CPU core
Dual-channel DDR5
PCI-Express Gen 4 with 16 lanes
Intel Elkhart Lake
Release Date: Unknown
Produced on 10 nm process
Designed for next-gen Pentium Silver and Celeron processors
CPU cores use Tremont architecture
GPU uses Gen 11
Dual-core and Quad-core configurations
Single-channel memory controller with DDR4 and LPDDR4/x support
Engineering sample: 1.9 GHz, 5/9/12 W TDP
Intel Meteor Lake [updated]
Release Date: 2022 or 2023
Succeeds “Alder Lake”
New microarchitecture, more advanced than “Willow Cove”, possibly “Golden Cove”
As of late 2020 Intel is adding support for Meteor Lake to the Linux Kernel
Lisa Su in a CES 2020 interview said “we will have a high-end Navi […] it is important”
AMD CFO: “Big Navi” will be a halo product and not merely a lofty performance increase over the RX 5700 XT to make AMD competitive against GeForce “Ampere.”
Adds support for DirectX 12 Ultimate: variable-rate shading and hardware-accelerated ray-tracing (DXR version 1.1)
AMD RDNA 2 [updated]
Announcement: October 28
Lisa Su: “we will have our new next-generation RDNA architecture that will be part of our 2020 lineup”
TSMC, 7 nm Plus (probably not 7 nm+ EUV)
Up to 18% higher transistor density
Higher clock speeds than RDNA
50% better performance per Watt than RDNA, twice the efficiency of GCN
Adds variable rate shading
Adds support for BFloat16
Adds AV1 video decode hardware acceleration
Adds hardware raytracing acceleration (DXR version 1.1)
Supports Microsoft DirectX 12 Ultimate API (DXR, VRS, Mesh Shaders & Sampler Feedback)
Same GPU architecture powers PlayStation 5 & Xbox Series X
AMD Radeon RX 6500 [added]
Release date: unknown
40 Compute Units / 2560 Stream Processors
192-bit GDDR6 memory
7 nanometer production process
RDNA2 architecture
Codename “Navy Flounder”
Below $250
AMD RDNA 3
Release Date: Late 2021 or 2022
“Advanced Node”, probably TSMC 6 nm or 5 nm
AMD CDNA and CDNA2 [updated]
Release Date: 2020 for CDNA and 2021-2022 for CDNA2
New architecture that focuses on compute for “Radeon Instinct”
TSMC 7 nm or 7 nm+
128 Compute Units = 8192 shaders
Arcturus engineering sample has 120 CUs (7680 shaders), 878 MHz for the core clock, 750 MHz SoC clock, and 1200 MHz memory clock
Compute only—Rasterization, display controllers and media encoding hardware removed
SDV OpenCL performance in Geekbench: 55373 points, with 3.53 Gpixels/s in Sobel, 1.30 Gpixels/s in Histogram Equalization, 16 GFLOPs in SFFT, 1.62 Gpixels/s in Gaussian Blur, 4.51 Msubwindows/s in Face Detection, 2.88 Gpixels/s in RAW, 327.4 Mpixels/s in DoF, and 13656 FPS in Particle Physics. Roughly matches the 11 CU Vega IGP of Picasso
SDV is 15.2 cm long, 96 Execution Units, PCI-Express x16, slot only power (so 75 W), 3x DisplayPort, 1x HDMI, high noise levels
Up to 2x performance uplift for Intel Xe integrated graphics over previous Gen 11
Using a multi-chip design approach, with Foveros, Intel Xe scales up to 512 EUs with 500 W
512 EU model is datacenter only, 300 W 256 EU model for enthusiast markets
Targeted at 1080p gameplay, CES demonstration showed working gameplay on Destiny 2
Could be produced at Samsung to leverage their 10 nm tech, while Intel ramps up its own
Future Xe GPUs could be built on TSMC 6 nm and 3 nm nodes
Raytracing hardware acceleration support will definitely be included on the data-center GPUs (and probably on the consumer models, too)
Double-digit TFLOP/s scaling all the way up to 0.1+ PFLOP/s
Will be used in upcoming Cray Aurora Supercomputer for Argonne National Laboratory in 2021
Targeting a wide segment of markets, including consumer (client-segment) graphics, enthusiast-segment, and data-center compute
Uses new graphics control panel that’s being introduced during 2019
Intel Discrete GPU / Arctic Sound
Release Date: 2020
Intel will hold a world tour in 2019, to build enthusiasm for the new architecture
Advanced management for power and clocks
Test chip: 8×8 mm² die area, 1.54B transistors, 14 nm, 50-400 MHz clock, EUs at 2x clock if needed
Raja Koduri who left AMD in late 2017 is somehow involved
Confirmed to support VESA Adaptive Sync
Intel Ponte Vecchio
Release Date: 2021 or 2022
Discrete GPU
Produced on 7 nanometer production process
Probably not 7 nanometer Intel but 7 nm TSMC or even 6 nm TSMC
Multiple GPU dies will be combined into a single accelerator
Architected “for HPC modeling and simulation workloads and AI training”
Workloads can be processed by GPU and CPU at the same time, using Intel oneAPI
Foveros packaging technology
Xe link to combine multiple GPUs (CXL interconnect)
Release Date: September 2020, at the same time as Zen 3.
Highly likely these were scrapped when AMD decided to enable compatibility with 400 and 500 series chipsets
Socket AM4
Supporting Zen 3 Ryzen 4000 processors
Support for older CPUs very likely, probably at least Ryzen 3000
PCI-Express 4.0
Memory
DDR5 System Memory [updated]
Release Date: Late 2020, probably 2021
JEDEC standard finalized as of Jul 15th 2020
Demo’d in May 2018 by Micron: DDR5-4400
Samsung 16 Gb DDR5 DRAM developed since February 2018
Samsung has completed functional testing and validation of a LPDDR5 prototype: 10 nm class, 8 Gbit, final clocks: DDR5-5500 and DDR5-6400
Samsung has started 16 Gb LPDDR5 mass production in Aug 2020
SK Hynix 4800 – 5600 Mbps, 1.1 V
SK Hynix also has 16 Gb DDR5-5200 samples ready, 1.1 V, mass production expected 2020
April 2020: Hynix has 8.4 Gbps DDR5, minimum density per die is 8 Gbit, maximum is 64 Gbit
ECC is now supported by all dies (no longer specific to server memory modules)
SK Hynix demonstrated DDR5 RDIMM modules at CES 2020: 4800 MHz, 64 GB
Micron is shipping LPDDR5 for use in Xiaomi phones (Feb 2 2020). 5.5 Gbps and 6.4 Gbps
Samsung has begun production for LPDDR5 for mobile devices (Feb 25 2020). 16 GB, 5.5 Gbps
4800 – 6400 Mbps
Expected to be produced using 7 nm technologies
32 banks, 8 bank groups
64-bit link at 1.1 V
Burst length doubled to BL16
Bank count increased from 16 to 32
Fine grain refresh feature
Improved power efficiency enabled by Vdd going from 1.2 V to 1.1 V as compared to DDR4
On-die ECC
Voltage regulators on the DIMM modules
AMD DDR5 memory support by 2021/2022, with Zen 4
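As a back-of-the-envelope check on the transfer rates quoted above, the peak bandwidth of one DDR5 channel follows directly from the data rate and the 64-bit link width (a sketch only; sustained real-world throughput is lower):

```python
def peak_bandwidth_gbps(transfer_rate_mtps: int, bus_width_bits: int = 64) -> float:
    """Peak theoretical bandwidth in GB/s: transfers per second times bytes per transfer."""
    return transfer_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

# The DDR5 range cited above: 4800 - 6400 MT/s
for rate in (4800, 5600, 6400):
    print(f"DDR5-{rate}: {peak_bandwidth_gbps(rate):.1f} GB/s per 64-bit channel")
```

At DDR5-6400 this works out to 51.2 GB/s per channel. Note that DDR5 DIMMs actually split the link into two independent 32-bit subchannels, but the total width per DIMM is unchanged.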
HBM2E Graphics Memory [updated]
Release Date: 2020
Offers 3.2 Gbps per pin (33% faster than HBM2)
Rambus offers a 4.0 Gbps memory interface controller
Samsung Flashbolt: 16 Gb per die, 8-layers stacked, 16 GB per chip with 410 GB/s bandwidth
Hynix: 460 GB/s, 3.6 Gbps, eight 16 Gb chips are stacked for a single 16 GB chip
Hynix: mass production has started as of July 2020
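The per-stack figures above can be reproduced from the per-pin rate: HBM stacks use a 1024-bit interface, so peak bandwidth is simply the per-pin rate times 128 bytes (a rough sanity check against the vendor claims, not vendor data itself):

```python
def hbm_stack_bandwidth_gbps(gbps_per_pin: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM stack in GB/s: per-pin rate times bus width in bytes."""
    return gbps_per_pin * bus_width_bits / 8

print(hbm_stack_bandwidth_gbps(3.2))  # 409.6 GB/s, matching Samsung's "410 GB/s" Flashbolt
print(hbm_stack_bandwidth_gbps(3.6))  # 460.8 GB/s, matching Hynix's "460 GB/s"
```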
HBM3 Graphics Memory
Release Date: Not before 2019
Double the memory bandwidth per stack (4000 Gbps expected)
Expected to be produced using 7 nm technologies
HBMNext Memory [added]
Release Date: Late 2022 or 2023
JEDEC work in progress
Micron involved
GDDR6X Graphics Memory
Release Date: 2020
Will first be used on new GeForce RTX 3000 / Ampere Series
Silicon Fabrication Tech
TSMC 7 nanometer+
Release Date: Q4 2019
TSMC N7+ is successor to original 7 nm node
Uses EUV (Extreme Ultra Violet)
15-20% more density and improved power consumption over N7
TSMC 6 nanometer
Release Date: Unknown
Backwards compatible with 7 nm process—no new design tools needed
Uses EUV (Extreme Ultra Violet), up to four EUV layers
18% higher logic density than N7
TSMC 5 nanometer [updated]
Release Date: March 2020 to tape-out customer designs
Risk production as of Q2 2019
High volume production: Q2 2020
Uses TSMC’s second implementation of EUV (Extreme Ultra Violet)
Up to 1.8x the density of 7 nm
Up to 14 layers
+15% higher clocks
30% lower power than N7
Intel might be a customer of this node
N5P “Plus” node: improvement to N5 while staying on 5 nm, 84-87% increase in transistor densities over N7
TSMC 5 nanometer+
Release Date: 2021
High-volume production in Q4 2020
Uses EUV (Extreme Ultra Violet)
TSMC 4 nanometer [updated]
Mass production: 2023
Codename “N4”
Uses EUV lithography
TSMC 3 nanometer [updated]
April 2020: On-Track
Risk production: 2021
Volume production: H1 2022
FinFET technology
Uses TSMC’s third implementation of EUV (Extreme Ultra Violet)
10-15% speed improvement at iso-power or 25-30% power reduction at iso-speed, compared to N5.
55,000 wafers per month at the start, 100,000 by 2023
TSMC 2 nanometer [updated]
No details known other than “TSMC has started development”
June 2020: TSMC is accelerating R&D
Sep 2020: Fab construction has begun
Will use Gate-All-Around (GAA) technology
Samsung 6 nanometer
Release Date: Unknown
First product taped out as of Q2 2019
Uses EUV (Extreme Ultra Violet)
Special variant for customers
Samsung 5 nanometer
Release Date: 2020
Ready for customer sample production as of Q2 2019
Mass production in Q2 2020
Yields are challenging as of Q2 2020
Uses EUV (Extreme Ultra Violet)
Up to 25% higher density than 7 nm
20% lower power consumption
10% higher performance
Samsung 3 nanometer
Release Date: 2022
50% less power while delivering 30% more performance
45% less silicon space taken per transistor (vs 7 nm)
Intel 7 nanometer
Release Date: 2022 or 2023
Succeeded by 7 nm+ node in 2022, and 7 nm++ in 2023
Uses EUV (Extreme Ultra Violet)
4x reduction in design rules
Planned to be used on multiple products: CPU, GPU, AI, FPGA, 5G networking
Other
Hynix 4D NAND
Release Date: H1 2019
Developed by SK Hynix
Sampling in Q4 2018
Products demonstrated at CES 2020: Platinum P31 M.2 NVMe and Gold P31—PCIe 3.0 x4, using flash, DRAM and controller made by Hynix, over 3 GB/s read/write.
Reduces chip physical size, while increasing capacity at the same time
Every year it seems we have less and less time, an impression intensified by, among other things, technology developing at a dizzying pace. That is why the series The Most Interesting News of the Last Week was created (or rather, reactivated). Every Monday, for the busiest PurePC readers, we publish a summary of the news that enjoyed your greatest interest over the last seven days. Perhaps you will also find in it news that you have missed so far. Without further ado, I invite you to an overview of the most interesting topics of the week from 12 to 18 October 2020.
Last week’s highlights: 12 – 18 October 2020. What of interest happened in the hardware, gaming and broader technology industries? You will find a brief summary below.
An overview of the most interesting topics of the week:
Watch Dogs Legion – hardware requirements for Ray Tracing and DLSS
Entertainment
Ubisoft has published updated hardware requirements for the PC version. In the requirements without Ray Tracing and DLSS, one thing has changed – the AMD Radeon VII has disappeared from the 4K ultra-settings tier, replaced by the new NVIDIA GeForce RTX 3080. More changes have been made to the requirements for the game with ray tracing and DLSS 2.0 enabled.
Read on…
Apple iPhone 12 officially – 4 smartphone models with 5G for everyone
Mobile devices
During the “Hi, Speed” event, Tim Cook reminded viewers not only of the bitten apple’s recent premieres. He also presented the latest model of the HomePod family of smart speakers (the Apple HomePod Mini), but above all a new generation of iPhones that stand out from previous editions primarily through 5G network support.
Read on…
NVIDIA may refresh GeForce RTX 3000 graphics cards in 7 nm
Graphic cards
According to DigiTimes, NVIDIA wants to move its Ampere consumer GPUs to TSMC’s 7 nm process. According to the source, the scale of this transition is to be very large. TSMC has now allegedly become more “NVIDIA-friendly”, which may be because much of the company’s production capacity is now focused on the new 5 nm lithography.
Read on…
Assassin’s Creed Valhalla – we know the hardware requirements of the PC version
Entertainment
Ubisoft revealed that the absolute minimum for 1080p at 30 frames per second is a quad-core Intel Core i5-4460 or Ryzen 3 1200 working with graphics cards like the GeForce GTX 960 or AMD R9 380. These hardware requirements are thus similar to Watch Dogs Legion’s. The recommended configuration for Full HD and 30 FPS also looks almost identical.
Read on…
Cyberpunk 2077 – style above all else? These are the vehicles and mods in the game
Entertainment
The fourth Night City Wire show revealed a trailer showing a selection of vehicles in the game, from trucks to armored cars to limousines and sports cars (they will come in different versions). A representative of the studio revealed that we will “summon” them just like Roach in The Witcher 3 (there was also a reference to her jumping onto roofs).
Read on…
AMD A9-9820 – Xbox console performance in an APU for 125 USD?
Motherboards
Unique motherboards have appeared on AliExpress. They carry a soldered APU in the form of the AMD A9-9820, which probably comes from the Xbox One S console and offers performance close to the Intel Core i5-7400. The price is 125 dollars, or approximately 489 PLN. You won’t find an APU like the A9-9820 on AMD’s website, but Chuwi uses it in its mini PC, the Chuwi AeroBox.
Read on…
AMD Ryzen 9 5950X with OC up to 6 GHz in an Apple iMac Pro
Processors
New information about the upcoming Zen 3 processors appears on the network every day, and the latest report comes straight from the popular GeekBench 5 benchmark. It lists a new version of the Apple iMac Pro with a 16-core AMD Ryzen 9 5950X processor. The GeekBench database entry shows that the processor was overclocked to 6 GHz during the tests.
Read on…
All MSI B450 and X470 motherboards will support AMD Ryzen 5000
Motherboards
MSI has officially confirmed that all motherboards with AMD 400-series chipsets will get a (beta) BIOS adding support for the latest Ryzen 5000 processors. Updates for individual models will be released by January 2021. As an aside, we would like to remind you that owners of boards with AMD 300-series chipsets (X370, B350 and A320) will, according to AMD’s statement, not receive BIOSes with Ryzen 5000 support.
Read on…
AMD Radeon RX 6000 – new information about graphics cards
Graphic cards
A partial specification (still unofficial parameters, of course) of the Radeon RX 6000 cards based on the Navi 21 XT and Navi 21 XL cores has leaked, thanks to which we more or less know what to expect. According to information that appeared on the network, the upcoming cards are to be characterized by high core clocks and a high TGP factor.
Read on…
Samsung Galaxy S21 – the first smartphone renders are disappointing
Mobile devices
Leakster Ice universe has uploaded renders of the Samsung Galaxy S21 (or S30; we do not yet know which naming the manufacturer will use) to his Twitter profile. First impressions? The presented model looks very similar to the latest top Korean smartphones – the back still has a camera island on the left, but this time it stretches over the device’s frame, which looks quite interesting.
Read on…
Here are some larger articles that appeared on PurePC last week:
Does the CPU restrict the GeForce RTX 3080 in games?
Samsung Galaxy S20 FE smartphone test – cheaper, not worse
Corsair K60 RGB PRO keyboard test with Cherry Viola switches
EK AIO 360 D-RGB cooling test – performance above all!
NZXT H1 case test – better than the Xbox Series X?
Xiaomi Mi 10T Pro smartphone test: Pro Edition night photography
GeForce RTX 3080 and RTX 3090 test at 3440 x 1440 with HDR
Creative Sound BlasterX Katana test – soundbar with a sound card
ADATA XPG Summoner keyboard test – affordable and good mechanical keyboard
ASUS ZenBook 14 test with Intel Core i7-1165G7. The premiere of Tiger Lake
Information about the operating frequencies of Big Navi, also known as Navi 21, has emerged on the network once again. We return to talk of frequencies well beyond 2 GHz, with big leaps between Base, Game and Boost Clock.
by Manolo De Agostini, published 19 October 2020 at 07:53 in the Video Cards channel – AMD, Radeon, Navi
There are about ten days left until the presentation of AMD’s new Radeon RX 6000 video cards based on the RDNA 2 architecture. We are approaching the event with an idea of both the heatsink of the new flagship (Big Navi) and its performance, thanks to the small previews offered by the manufacturer. We do not know many other details; in the past there has been talk of the alleged technical specifications of the various chips, and there have also been rumors of very high operating frequencies, above 2000 MHz.
The rumor has resurfaced in the past few hours, seasoned with a few other details. As always, take everything with a grain of salt. According to a Twitter leaker, whose information would come from a third-party manufacturer (so it shouldn’t concern AMD’s reference design), there will be cards based on the flagship Navi 21 XT GPU capable of running at a 2.4 GHz Game Clock, a term coined last year to denote the typical operating frequency during gaming.
According to Videocardz, the reference cards will instead have a Game Clock of around 2.3 GHz, while the Navi 21 XL variant should stop at around 2.2 GHz. However, another leaker proposes a slightly different picture, in which these frequencies represent not the Game Clock but the Boost Clock. For Navi 21 XT he speaks of a base frequency of 1450 / 1500 MHz, which rises to 2000 / 2100 MHz in Game Clock and up to 2200 / 2400 MHz in Boost Clock. For Navi 21 XL, the leaker indicates a Base Clock of 1350 / 1400 MHz, a Game Clock of 1800 / 1900 MHz, and a Boost Clock of 2100 / 2200 MHz.
In general it looks like there will be quite a jump between the Base Clock and the other frequencies, with +500 / 600 MHz between Base and Game Clock and a further +100 / 300 MHz between Game and Boost Clock. Since this is not the first time we have heard of such high clocks, we are starting to believe the rumors have some basis, or AMD is deliberately circulating incorrect information in order to protect the launch.
Regarding Navi 21 XT, there is also talk of 16 GB of GDDR6 memory and a TGP of 255 W, a value over which, however, the partners would have ample control. It is currently unclear whether we will also see a third variant of the Navi 21 GPU, called “XTX”, which could be used only by AMD itself for certain projects and/or initiatives. AMD’s flagship cards, according to previous rumors, should have a 256-bit bus, perhaps assisted by a technology such as the rumored Infinity Cache.
João Silva 36 mins ago Featured Tech News, Graphics
Details about the AMD Navi 21 GPUs have leaked, detailing some specifications of the Navi 21 XT and XL GPUs. These details include base, game, and boost clocks of both GPUs, plus the TGP and memory capacity of the Navi 21 XT.
The first leak comes from @patrcikschurr, who stated that it was from an AiB partner graphics card. According to his tweet, the Navi 21 XT will be clocked at around 2.4GHz (game clock) and will feature 16GB of GDDR6 memory and a 255W TGP design.
After Patrick Schur’s tweet, @_rogame further detailed the clock frequencies of both the Navi 21 XT and XL GPUs. Apparently, the Navi 21 XL, expected to be the lesser version of the Navi 21 GPU, will come with a base clock between 1350MHz and 1400MHz, a game clock between 1800MHz and 1900MHz, and will be capable of boosting up to 2100-2200MHz. The Navi 21 XT, on the other hand, is clocked higher, featuring a base clock of 1450-1500MHz, a game clock ranging from 2000MHz to 2100MHz, and a boost clock of up to 2200-2400MHz. The reference cards should be closer to the bottom of the clock range, while AIB cards are expected to be closer to the top of it.
Besides the Navi 21 XT and XL, rumours say that there’s also an XTX GPU featuring more CUs and possibly different clock speeds. The cards equipped with these GPUs are all expected to feature 16GB of GDDR6 memory, 256-bit interfaces, and Infinity cache.
AMD has scheduled the announcement of its Radeon RX 6000 graphics cards for October 28th.
KitGuru says: Are you interested in getting one of the upcoming AMD Radeon RX 6000 graphics cards? Do you think the flagship Radeon RX 6000 card will outperform the RTX 3080?
Next year, AMD will introduce a total of three series of APU processors for laptops: Van Gogh (Athlon processors) with Zen 2 cores and an integrated graphics chip based on the RDNA architecture, Lucienne, a refreshed version of Renoir (Zen 2 / Vega), and Cezanne. The last of these groups will use the new Zen 3 cores, but will still be based on Vega graphics chips. From information that appeared a few weeks ago, we know that in the case of low-voltage processors four units will appear: AMD Ryzen 5 5500U, Ryzen 5 5600U, Ryzen 5 5700U and Ryzen 5 U.
Information on the AMD Ryzen 5 5600U processor for laptops has appeared on the network. It will use the improved Zen 3 cores.
The AMD Ryzen 5 5600U is to be a 6-core, 12-thread processor, similar to the Ryzen 5 4600U. It will be based on Zen 3 cores and integrated Radeon graphics with 7 Compute Units. This means we get 1 CU more compared to the current Ryzen 5 processors from the Renoir family. The iGPU clock speed will also be higher, at 1800 MHz. For comparison, the current Vega 6 chips in the Ryzen 5 4600U / 4600H are clocked at 1500 MHz, while the Vega 7 in the Ryzen 7 4700U / Ryzen 7 4800H runs at 1600 MHz.
– ExecutableFix (@ExecuFix) October 17, 2020
The AMD Ryzen 5 5600U will also have higher clock speeds despite an unchanged TDP of 15 W (configurable up to 25 W). The base clock of the processor is to be 2.3 GHz, while in Boost mode it will reach 4.2 GHz. For comparison, the AMD Ryzen 5 4600U is clocked at 2.1 GHz and 4.0 GHz, respectively. The clock speeds will therefore be 200 MHz higher than its predecessor’s. The premiere of the AMD Cezanne and Lucienne APUs
In a little over 10 days AMD holds its second conference this month, during which we will finally get to know the final specification of the Radeon RX 6000 family of graphics cards, based on the RDNA 2 architecture. The closer we get to the presentation, the more information reaches the web. We recently wrote about the size of the Navi 21 core, which will appear this year in three versions – Navi 21 XTX, Navi 21 XT and Navi 21 XL. A fourth variant, Navi 21 XE, will be released next year. A partial specification (still unofficial parameters, of course) of the upcoming cards has appeared on the Internet, so we more or less know what to expect. And we should expect very high core clocks and a high TGP factor.
A partial specification of the Radeon RX 6000 cards based on the Navi 21 XT and Navi 21 XL cores has appeared on the network.
All three Navi 21 variants, i.e. Navi 21 XTX, Navi 21 XT and Navi 21 XL, will be equipped with 16 GB of GDDR6 VRAM on a 256-bit bus. In addition, the cards are to feature a solution called Infinity Cache, which is ultimately meant to compensate for the relatively narrow memory bus. At the moment, however, we do not know whether all three Navi 21 variants will have the same CU configuration, or whether different versions will have more or fewer of them. According to information that appeared on the network, the upcoming cards are to have high core clocks.
For the Navi 21 XL variant, the base core frequency is to be 1980 MHz, while in Boost mode it should reach 2190 MHz. The Navi 21 XT variant will have even higher clocks: the base frequency will be set between 2065 and 2160 MHz, and the Boost clock will range from 2300 to 2410 MHz. The TGP (Total Graphics Power) of the Navi 21 XT graphics chip itself will be 255 W. However, the TBP (Total Board Power) should be expected to be even higher, especially as the new cards will also be equipped with a USB Type-C port, which on GeForce RTX 2000 cards drew an additional 30 W.
The MSI Optix MAG273R delivers more contrast than most IPS panels and video processing that’s without fault. HDR is lacking, but budget-minded gamers should check it out.
For
Excellent contrast
Saturated color
Low input lag
Good value
Against
HDR looks like SDR
Light gamma
Features and Specifications
In any kind of gaming competition, monitor speed is always an important factor. Resolution, pixel density and contrast all affect the image quality, but professional players need high frame rates and instant control response in their best gaming monitor.
If you want to keep gameplay above 100 frames per second (fps), a monitor with 1080p resolution is the easiest way to make it happen. Moving fewer pixels means less processing power is required. For shoppers, this approach keeps hardware costs down (no need for the top cards on our GPU benchmarks hierarchy) and still provides an excellent gaming experience.
An attractive choice for gamers on a budget, the MSI Optix MAG273R is a 1080p, 27-inch IPS monitor that provides a 144 Hz refresh rate, Adaptive-Sync, extended color and HDR for $250-$260 (as of this writing). It’s aimed at eSports enthusiasts but works equally well in all kinds of fast-moving action games.
Dimensions w/ base (WxHxD): 24.3 x 16.6-21.8 x 8.1 inches (617 x 422-554 x 206mm)
Panel Thickness: 2.6 inches (65mm)
Bezel Width: Top/sides: 0.4 inch (9mm); Bottom: 0.5 inch (12mm)
Weight: 13.5 pounds (6.1kg)
Warranty: 3 years
The Optix MAG273R starts with an IPS panel equipped with a flicker-free backlight capable of a claimed 250 nits of max brightness and wide color gamut, namely sRGB+. MSI takes a unique approach to extended color in this case. Most wide gamut screens simply follow the DCI-P3 spec and under-saturate the green primary by 10% or so. The MAG273R, on the other hand, follows sRGB for most of the saturation range and pumps up only the brightest hues. The net effect is a little more natural looking. We’ll explain further on page three.
AMD FreeSync is the native Adaptive-Sync technology for fighting screen tearing, but we were able to run Nvidia G-Sync on our test sample, even though it’s not certified by Nvidia (you can learn how in our How to Run G-Sync on a FreeSync Monitor tutorial). FreeSync runs from 30 to 144 Hz and supports Low Framerate Compensation (LFC) to ensure that you won’t see tearing at any speed.
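LFC works by repeating frames when the game's frame rate falls below the panel's minimum variable-refresh rate, so the display always refreshes inside its 30-144 Hz window. A simplified sketch of that logic (real drivers use much finer-grained timing):

```python
def lfc_refresh(fps, vrr_min=30, vrr_max=144):
    """Return (multiplier, effective refresh rate) that keeps the panel
    inside its variable-refresh window by repeating frames as needed."""
    if fps >= vrr_min:
        return 1, min(fps, vrr_max)  # in range: refresh tracks the game
    mult = 2
    while fps * mult < vrr_min:      # double/triple/... until in range
        mult += 1
    return mult, fps * mult

print(lfc_refresh(20))  # (2, 40): each frame shown twice, panel at 40 Hz
print(lfc_refresh(12))  # (3, 36): each frame shown three times
```

This is why a FreeSync range whose maximum is at least twice its minimum (here 144/30) matters: it guarantees a valid multiplier exists for any low frame rate.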
Also included is a motion blur reduction feature in the form of a backlight strobe, though engaging it disables both HDR and Adaptive-Sync.
MSI’s MAG273R is billed as HDR-ready and we confirmed HDR10 support in our tests. But in terms of Adaptive-Sync, HDR only works alongside FreeSync, not the unofficial G-Sync support we uncovered.
There aren’t a ton of bells and whistles here but ultimately, performance is the key to success. At around $250, the MAG273R looks like a decent value for budget systems. Let’s take a look.
Assembly and Accessories of MSI MAG273R
Once you secure a Phillips-head screwdriver, assembling the MSI MAG273R is a simple matter. The base attaches to the upright with a captive bolt, and the panel hooks on and is secured with two screws.
Additional hardware is included for the 100mm VESA mount if you want to use your own bracket. Included cables are HDMI and USB, along with a small external power supply.
Product 360 for MSI MAG273R
Image 1 of 4
Image 2 of 4
Image 3 of 4
Image 4 of 4
The MSI MAG273R has a flush-mounted bezel that’s 9mm wide. It’s only visible when there’s an image on-screen. A wider strip runs across the bottom with the MSI logo in the center. The screen is free of grain or artifacts and prevents reflection of all but the brightest room lights. The stand is a quality piece with a 5.2-inch height adjustment and -5/20 degrees of tilt. There is no swivel or portrait mode. Movements are firm and free of play.
The power toggle is a tiny key underneath the right side, and other controls are all under the management of a single joystick which is on the back. Also in the back is an attractive shield-shaped graphic showing a dragon against a red background.
Across the top, above the upright is an LED lighting feature. You can toggle it in the on-screen display (OSD) or use MSI’s Gaming OSD desktop app. The RGB can be coordinated with MSI’s Mystic Light products for a system-wide light show.
The MSI MAG273R’s input panel offers two HDMI 2.0 ports and a single DisplayPort 1.2a (for gaming comparisons, see our HDMI vs DisplayPort article). You can run FreeSync at 144 Hz over either interface, but the unofficial G-Sync support requires DisplayPort. Both input types also support HDR but only with FreeSync. A 3.5mm audio port provides sound for headphones or powered speakers. There are no speakers built in. You also get three USB 2.0 (one upstream and two down) for peripheral hookup.
OSD Features of MSI MAG273R
Pressing the joystick brings up the MAG273R’s OSD, which fills a large portion of the screen. At the top of each menu is information on resolution, refresh rate, HDR and FreeSync status and active input.
The Gaming menu offers five picture modes aimed at different game types. User is the default and best mode, as it allows for image adjustments. Night Vision raises the black level to make shadow detail more visible. Response time is a three-level overdrive. We ran it on its maximum without visible ghosting.
Anti-Motion Blur is a backlight strobe that takes out overdrive and Adaptive-Sync. It also reduces brightness by 50%. We noticed visible phasing when using it, so we recommend leaving it off. Refresh Rate is an FPS counter you can place in any corner of the screen. Alarm Clock is a countdown timer, and Screen Assistance offers an array of aiming points. You get five different reticle shapes in either red or white, and you can place it anywhere on the screen using the joystick.
The Professional sub-menu repeats the backlight strobe option and adds dynamic contrast (HDCR) and four more picture modes. The last one, HDR, attempts to simulate the effect of HDR with SDR content. We weren’t fans of it, but users should try it for themselves before deciding. Image Enhancement adds ringing around high-contrast objects. We suggest leaving it off.
The Image menu has the MAG273R’s calibration controls. You get three color temp presets, plus a set of RGB sliders. We were able to achieve excellent grayscale tracking with them. We wish there were extra gamma presets because we found the default luminance curve to be a little light; measuring closer to 1.9 rather than 2.2.
MSI Optix MAG273R Calibration Settings
We stuck with the MAG273R’s User mode for our calibration and testing. After selecting the User color temp, we tweaked the RGB sliders to achieve a very accurate white point. There are no color gamut options, so DCI-P3 is the gamut used for all content, SDR and HDR.
Below are our recommended settings for the MSI Optix MAG273R.
Picture Mode: User
Brightness (200 nits): 49
Brightness (120 nits): 20
Brightness (100 nits): 14
Brightness (80 nits): 8 (minimum 56 nits)
Contrast: 70
Color Temp: User (Red 92, Green 100, Blue 93)
Applying an HDR10 signal automatically switches the monitor into HDR mode, where all image controls are locked out.
Gaming and Hands-on
The first thing we noticed when starting up Windows on the MSI MAG273R is its excellent contrast. Though contrast on this IPS panel isn’t quite in the same league as a VA monitor, the MSI has a little more dynamic range than most IPS panels, and the difference was visible in a side-by-side comparison. The MAG273R also makes very good use of its wide color gamut to enhance SDR content without going too far into over-saturation. This quality makes it easy to forget you’re looking at an FHD screen. Though pixel density is just 81 pixels per inch (ppi), it fools the eye into thinking resolution is higher.
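The 81 ppi figure quoted above follows directly from the panel geometry: pixel density is the diagonal pixel count divided by the diagonal size in inches. A quick check:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density = diagonal resolution in pixels / diagonal inches."""
    return math.hypot(width_px, height_px) / diagonal_in

ppi = pixels_per_inch(1920, 1080, 27)
print(round(ppi, 1))  # 81.6 -- matches the ~81 ppi quoted above
```

The same formula shows why a 27-inch 1440p panel (about 109 ppi) looks noticeably sharper for text work, as the next paragraph suggests.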
Detail in tiny fonts and icons is solid, but you can see the occasional jagged line. If you spend most of your time editing documents and spreadsheets, a 27-inch 1440p or 4K resolution monitor, like those on our Best 4K Gaming Monitors list, is a better tool.
But when viewing graphics, photos or videos, we became less aware of the MAG273R’s lower resolution. Moving images rendered cleanly with the overdrive set to its maximum speed. There was no ghosting, and motion blur was minimal at 60 Hz and even less at 144 Hz.
Moving into the jungles and caves of Tomb Raider, we were again struck by the MAG273R’s contrast. Though our tests show only about 15% more dynamic range, it looked like a lot more to the naked eye. MSI engineered the color tracking to make excellent use of the wide gamut, and it showed in this SDR-encoded game. Detail popped nicely with sharp rendering of textures and surfaces. Specular highlights shined brightly while dark scenes came close to true black with easily seen shadow detail.
For video processing, we stuck with FreeSync and max overdrive, since we saw no benefit to the MAG273R’s backlight strobe option. The backlight strobe dimmed the picture too much, and the loss of Adaptive-Sync was easily noticeable. With either FreeSync or (unofficial) G-Sync active though, we had no issues. Control response was instantaneous with no stutter, tearing or ghosting. The MAG273R has a top-of-the-line overdrive implementation. Motion resolution was always high regardless of how fast the action was.
Frame rates in all cases stayed maxed at 144 fps when paired with a system using either a Radeon RX 5700 XT or GeForce GTX 1080 Ti graphics card. Many systems with less processing power than ours will be able to run the MAG273R at 144 Hz, thanks to its FHD panel. Speed is this monitor’s forte for sure.
HDR gaming is pretty much a non-starter here. There is no visual benefit to running in HDR mode because contrast was no higher than it is in SDR mode. HDR worked fine with FreeSync, but when we turned on G-Sync (which lacks Nvidia certification), HDR didn’t work. HDR here is best reserved for movie watching; it isn’t of any use while gaming.
The RTX 3090 Gaming X Trio is a video card designed entirely by MSI, from the PCB to the heatsink to the factory overclock. The card delivers all the power you need to play big at 4K resolution, even at maximum detail. Many advantages, but also some drawbacks, from a price certainly prohibitive for many to dimensions even larger than the Founders Edition.
by Manolo De Agostini published on 16 October 2020 in the Video Cards channel MSI NVIDIA GeForce Ampere
The GeForce RTX 3090, which we recently tested in the Founders Edition version, is a huge card in every sense: from its dimensions to its technical specifications, it cannot be said that Nvidia has left anything to chance. Based on the GA102 GPU with 28 billion transistors, it counts 10496 active CUDA cores flanked by 24 GB of GDDR6X memory at 19.5 Gbps. This is too much VRAM for mere gaming: the GeForce RTX 3090 was born as a card that winks at those who perform complex renderings and need a lot of graphics memory. After all, Nvidia speaks of it as a replacement for the Titan RTX.
We have already established that, although the GeForce RTX 3090 undoubtedly earns the scepter of the fastest gaming video card on the planet, buying it for that purpose makes little sense given the large price difference with the GeForce RTX 3080 and the much smaller performance gap, even at 4K. Nvidia tried to give the RTX 3090 a little extra charm by talking about 8K gaming thanks to DLSS, but in the end that is marketing, because gaming at this resolution is still off the radar of 99.98% of gamers (a percentage we made up, but it should give the idea, ed).
Therefore, more than from the performance point of view, today we take a look at the MSI RTX 3090 Gaming X Trio 24G, the first “custom” RTX 3090 to reach our editorial office, simply to verify that the card is well made and that it guarantees the performance expected of a card of this level. In terms of technical specifications, the MSI Trio comes with a boost clock of up to 1785 MHz, versus the 1695 MHz of the Founders, i.e. 90 MHz more, which should return a few more fps than the Nvidia card without upsetting the picture.
Before talking about performance and everything else, let’s take a look at how MSI’s custom card is built: it comes in dimensions even larger than the Founders Edition, a full 335 x 140 x 56 mm for 1.5 kilograms of weight. These are numbers that force you to consider the size of your case and to pay extra attention if you need to move your computer.
The MSI GeForce RTX 3090 Gaming X Trio 24G shows a triple-fan heatsink and another detail that immediately stands out, namely the three 8-pin PCI Express connectors feeding a card rated at a TGP of 370 W, compared to the 350 W of the Founders. On the back we find three DisplayPort 1.4a outputs and an HDMI 2.1 port, which allows you to drive an 8K screen at 60 Hz with a single cable. Also present, as on the Founders, is an NVLink connector to use two cards in parallel.
As anticipated, the Tri Frozr 2 cooling system provides three TorX 4.0 fans with ball bearings. The fans are characterized by pairs of blades connected by an outer ring which, according to MSI, increases air pressure by up to 20% compared to the previous TorX 3.0, with static pressure going from 2.76 mmAq to 3.35 mmAq. In this way the fans push more fresh air along the radiator below. The fans, as always, are equipped with Zero Frozr technology, which stops them entirely at idle to eliminate noise.
Below the fans we see the voluminous radiator, formed by two blocks connected by several heatpipes and characterized by fins with a new wave design that guides the air in a targeted way to optimize the cooling of the various components on the PCB and contain noise. Above the GPU is a direct-contact dissipating block, a solution that MSI calls “Core Pipe” as it is characterized by a denser set of grooves in order to better spread the heat along the entire block and then towards the radiator. Compared to the previous “Oval Pipe” system, the company declares a 50% improvement in efficiency. MSI, as on other products, has placed thermal pads on the memory, VRM and capacitors, so as to leave nothing to chance.
The video card has a PCB customized by MSI with the addition of 2 ounces of copper for better conductivity and more fuses to reduce the possibility of electrical damage. The PCB is longer than that of Nvidia’s Founders Edition and is in traditional format, i.e. no V-shaped tail like the cards produced directly by Nvidia, but a rectangular shape. On the back MSI has placed a backplate with underlying heatpipes for better heat diffusion, complete with a pad to facilitate the cooling of the GDDR6X chips.
The new video card also features RGB lighting, both front and rear, with a strip that runs along the backplate, fully controllable via MSI software. It should be noted that, given the weight and dimensions, the bundle includes a support bracket in case you want to secure the card in your PC. The MSI card costs 1879 euros, versus the 1549 euros of the Founders Edition, a full 330 euro difference.
Specifications: GeForce RTX 3090 FE | MSI RTX 3090 Gaming X Trio
Architecture and GPU: Ampere GA102 | Ampere GA102
Production process: Samsung 8 nm | Samsung 8 nm
Die size: 628 mm² | 628 mm²
Transistors: 28 billion | 28 billion
CUDA cores: 10496 | 10496
TMUs / ROPs: 328 / 112 | 328 / 112
Tensor / RT cores: 328 / 82 | 328 / 82
Base clock: 1395 MHz | 1395 MHz
Boost clock: 1695 MHz | 1785 MHz
Memory capacity: 24 GB GDDR6X | 24 GB GDDR6X
Memory bus: 384-bit | 384-bit
Memory speed: 19.5 Gbps | 19.5 Gbps
Bandwidth: 936 GB/s | 936 GB/s
TGP: 350 W | 370 W
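The 936 GB/s bandwidth figure follows directly from the other memory specs: peak bandwidth is the bus width in bytes multiplied by the per-pin data rate. A quick check:

```python
def memory_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s = (bus width / 8 bits per byte) * per-pin
    data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# RTX 3090: 384-bit bus, 19.5 Gbps GDDR6X
print(memory_bandwidth_gbs(384, 19.5))  # 936.0, as in the spec table
```

The same formula explains why the Navi 21 leak's 256-bit bus looks narrow on paper, and why AMD would pair it with an Infinity Cache to compensate.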
Test configuration
Tests were conducted at video resolutions of 1920 x 1080, 2560 x 1440 and 3840 x 2160 pixels, always trying to use very high quality settings to shift the load as much as possible onto the GPU. Below are the video cards included in this comparison:
Nvidia GeForce RTX 3090 (Founders Edition)
Nvidia GeForce RTX 3080 (Founders Edition)
Nvidia GeForce RTX 2080 Ti (Founders Edition)
Nvidia GeForce RTX 2080 (Founders Edition)
Nvidia GeForce RTX 2070 Super (Founders Edition)
Nvidia GeForce RTX 2060 Super (Founders Edition)
Nvidia GeForce RTX 2060 (Founders Edition)
AMD Radeon VII (reference board)
AMD Radeon RX 5700 XT (reference board)
AMD Radeon RX 5700 (reference board)
AMD Radeon RX 5600 XT (Sapphire Pulse)
Below is the configuration of the system used for the tests:
Operating system: Windows 10 Pro Italian
Processor: Intel Core i9-10900K
Power supply: CoolerMaster Silent Pro Gold 936 Watt
Below are the titles included in the comparison, both for traditional tests and with RTX and DLSS technologies enabled – some games were tested both in traditional mode and with RTX active:
Metro Exodus – Ultra – DX 12 (RTX test)
Shadow of the Tomb Raider – DX 12 – Maximum – TAA (RTX test)
Red Dead Redemption 2 – Ultra – quality level: favor quality – Vulkan
Borderlands 3 – DX 12 – Badass
Doom Eternal – Vulkan
Control – DX 12 – Ultra (RTX test)
Wolfenstein Youngblood – mein leben, average of the two benchmarks, Vulkan (RTX test: DLSS on quality, RTX reflections yes)
Performance: MSI RTX 3090 Gaming X Trio 24G
To verify the performance of MSI’s card, we used the same drivers used to test the Founders Edition in order to have consistency in the results. During testing we saw some desktop crashes and system freezes – a much-discussed subject in previous weeks – but nothing that prevented us from completing the tests. We then installed the most up-to-date drivers, noting greater product stability and nearly identical results.
Frequencies, consumption, temperatures and noise
Power consumption, temperatures and operating noise affect the evaluation of a video card perhaps less than its ability to generate an adequate number of frames per second, but they remain very important in defining the overall picture. To evaluate the behavior of the MSI RTX 3090 Gaming X Trio 24G we collected data by running the Hitman 2 benchmark in a loop for 15 minutes. The recorded values were compared with those of the GeForce RTX 3090 Founders Edition.
The higher factory frequencies are also reflected in power consumption, and MSI is very clear in its technical specifications, with a TGP of 370 W, 20 W higher than that of the Founders Edition: in the graph you can see that our findings match the manufacturer’s declarations. MSI’s card has a rated boost clock of 1785 MHz, a value that is constantly exceeded in our measurements: the average over the 15-minute stress test is approximately 1875 MHz, slightly higher than the 1813 MHz of the Founders Edition.
The GPU temperature is slightly higher than on the GeForce RTX 3090 Founders Edition, with an average figure under load that is however by no means worrying, approximately 73 °C.
Regarding noise, MSI’s cooling system does a good job considering the hardware on the PCB. During testing we were never annoyed by the noise produced by the fans, except for some cases of coil whine while loading certain tests. In general, the card is slightly quieter than the FE, but we are talking about a limited difference.
MSI RTX 3090 Gaming X Trio 24G, photos with a thermal imaging camera
Here are some shots taken with a FLIR thermal imaging camera of the MSI RTX 3090 Gaming X Trio 24G under load; as you can see, temperatures, even behind the GPU, do not reach problematic values.
Overclock
Every GPU and video card has different overclocking limits, so our sample may have performed better or worse than other units of the same model tested by other sites. To start, we ran OC Scanner, the automatic overclocking system accessible from MSI Afterburner, on the MSI RTX 3090 Gaming X Trio 24G: being an automatic system, it is obviously not at all aggressive, and the algorithm increased the core clock by 84 MHz and the VRAM by 200 MHz. To see if the card had additional headroom, we proceeded manually by setting Afterburner as follows:
Core Voltage: +50
Power Limiter: 100%
Core Clock: +100 MHz (1495 / 1885 MHz)
Mem clock: +800 MHz (1319 MHz)
We verified that these settings were stable (beyond them we saw various kinds of crashes), and we can therefore confirm what we had already seen in previous tests with the new Ampere GPUs: the graphics chips have a narrow margin – at least on air cooling and without risking too much – while the GDDR6X memories overclock considerably.
Conclusions
The MSI RTX 3090 Gaming X Trio is a well-made video card, capable of allowing Nvidia’s Ampere GPU to unleash all its power. As this is a fully customized and scarcely available product, the price exceeds that of the Founders Edition – itself already prohibitive for the majority of gamers – by over 300 euros.
We have already talked about the value of the RTX 3090 in a general sense: its relative impact in the gaming world compared to the 3080 and its more professional positioning, for those who perform complex renderings and need a lot of video memory. What was said in the Founders Edition review is confirmed here too, with the addition that MSI’s implementation proves valuable, albeit with the drawbacks of size – it is really huge and therefore not suitable for all PCs – and price, which could however fall in the future. When is difficult to say, as Nvidia has confirmed product availability issues due to “too much demand” until the end of the year.
The growing semiconductor industry is boosting the business of chip contract manufacturer TSMC. In the third quarter of 2020 the company turned over around 12.14 billion US dollars, of which 4.78 billion US dollars remained as profit (10.58 billion / 4.08 billion euros). This is a record – until now Q4/2019 was the strongest quarter, with sales of around 11.5 billion US dollars. Compared to the same quarter last year (Q3/2019), sales and profit increased by 29.2 and 35.9 percent respectively – both above expectations.
TSMC is the largest chip contract manufacturer in the world. Samsung Semiconductor, the second largest, comes to only a few billion US dollars per quarter once you deduct its own memory production: in the most recently disclosed Q2/2020, the semiconductor division achieved just under 14 billion US dollars, of which almost 80 percent came from memory, i.e. not contract work.
7 nm in front: 7-nanometer processes account for TSMC’s largest share of sales at 35 percent; customers include AMD (Ryzen 3000, Epyc 7002, Radeon RX 5000) and Nvidia (exclusively the A100 GPU accelerator). Production of AMD’s Ryzen 5000 series and Radeon RX 6000 series (“Big Navi”) should also have started by now. The new 5 nm production generation started in the third quarter of 2020 and already accounted for 8 percent of sales. The largest customer is currently Apple with the A14 Bionic mobile processor, which sits in all iPhone 12 models and in the 2020 version of the iPad Air. Older manufacturing processes generated the usual background noise in sales: 16 nm brought 18 percent and 28 nm 12 percent. Even the 150/180 nm generation was still represented with 7 percent.
TSMC’s sales broken down by production generation. Right at the front: the in-house 7 nm processes.
(Image: TSMC)
Broken down by chip type, smartphones accounted for the largest share of sales at 46 percent. “High Performance Computing” chips – including server processors, but also desktop and notebook hardware – followed with 37 percent.
More development budget: TSMC continued to put a good 8 percent of sales into research and development.
Unfortunately, the performance tests run by AMD were done on its new Ryzen 9 5900X platform, so no direct comparisons can be made. However, based on Borderlands 3, performance would be roughly at the level of the GeForce RTX 3080.
Although AMD’s unveiling today was specifically about Ryzen processors based on the Zen 3 architecture, the company took the opportunity to release small crumbs of additional information about the upcoming RDNA2 graphics cards. AMD will hold a launch event for the RDNA2 architecture and Radeon RX 6000 series graphics cards on 28 October, less than three weeks from now.
At today’s Ryzen unveiling, Lisa Su showed for the first time game footage running on a Radeon RX 6000 series card, the graphics card better known as “Big Navi”. According to AMD, the clip recorded from the Borderlands 3 benchmark ran at over 60 FPS. Shortly after the demo, the company gave an official reading: 61 FPS at 4K resolution with the DirectX 12 API and BadAss settings. In addition, the company says the graphics card achieves at 4K resolution an average frame rate of 88 FPS in Call of Duty: Modern Warfare and 73 FPS in Gears of War 5. Both used the DirectX 12 API and Ultra-level settings.
The first official performance data, of course, immediately sparked a heated debate on the forums about how performance compares to NVIDIA’s new graphics cards. Unfortunately, however, it is impossible to compare the figures directly to tests run by other sites, as the test platform used is different. Moreover, at least in the case of Call of Duty, the game lacks a standardized benchmark, which would make the comparison meaningless even on the same platform if the test location in the game is not known. The closest benchmark at the moment is probably the Borderlands 3 test run by Eurogamer’s Digital Foundry, which also used the DirectX 12 API and BadAss settings. According to the Eurogamer test, the average frame rate of the GeForce RTX 3080 is the same 61 FPS that AMD reported for Big Navi. However, in this test too, despite the high resolution, it is worth considering the different test platforms, which may have some effect on the result.
Update: According to PCWorld, the head of AMD’s Radeon division, Scott Herkelman,
The French publisher Ubisoft has now published the system requirements for the upcoming action adventure Assassin’s Creed Valhalla. The title, due on 10 November 2020, requires at a resolution of 1080p (30 FPS) and low details at least an AMD Ryzen 3 1200 or an Intel Core i5-4460. At least 8 GB of RAM should be installed in the computer. In addition, either an AMD Radeon R9 380 with 4 GB or a GeForce GTX 960 with 4 GB VRAM is required. 50 GB of available hard disk space is sufficient. Ubisoft also recommends using an SSD instead of an HDD. The operating system for all presets is 64-bit Windows 10.
With the High preset and a resolution of 1080p (30 FPS), an AMD Ryzen 5 1600 or an Intel Core i7-4790 is mandatory. As for the graphics card, an AMD Radeon RX 570 with 8 GB or a GeForce GTX 1060 with 6 GB VRAM is required. 8 GB of RAM is sufficient here as well. Unlike the low settings, the High preset requires an SSD with 50 GB of free space. All players who want to play the new action adventure with very high details at 1080p and 60 FPS need an AMD Ryzen 7 or an Intel Core i7-6700. As for the graphics card, an AMD Radeon Vega 64 (8 GB) or a GeForce GTX 1080 (8 GB) is required. Both SSD and RAM requirements are identical to the 1080p preset with 30 FPS.
For Very High with 1440p and 30 FPS, every gamer should have an AMD Ryzen 7 2700X or an Intel Core i7-7700 installed in the gaming computer. As for graphics cards, count on an AMD Radeon Vega 56 with 8 GB or a GeForce GTX 1070 with 8 GB. At least 16 GB of RAM must be available. Here, too, an SSD with 50 GB of free space is required. For 60 FPS at the same resolution and details, an AMD Ryzen 5 3600X or an Intel Core i7-8700K is needed. The GPU should be an AMD Radeon RX 5700 XT with 8 GB or a GeForce RTX 2080 Super with 8 GB.
For the supreme discipline with ultra-high settings at a resolution of 2160p and 30 FPS, an AMD Ryzen 7 3700X or an Intel Core i7-9700K is required. As for graphics cards, the publisher recommends either an AMD Radeon RX 5700 XT (8 GB) or a GeForce RTX 2080 (8 GB). RAM and storage requirements are identical to those mentioned above.
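The preset tiers above can be condensed into a simple lookup table. A sketch (the keys and field names are my own; the values follow Ubisoft's published figures as quoted above, Intel CPUs shown for brevity):

```python
# Assassin's Creed Valhalla requirement tiers as quoted in the article.
# Dictionary keys and field names are illustrative, not official labels.
REQUIREMENTS = {
    "1080p30_low":       {"cpu": "Core i5-4460",  "gpu": "GTX 960 4GB",
                          "ram_gb": 8,  "disk_gb": 50},
    "1080p30_high":      {"cpu": "Core i7-4790",  "gpu": "GTX 1060 6GB",
                          "ram_gb": 8,  "disk_gb": 50},
    "1440p30_very_high": {"cpu": "Core i7-7700",  "gpu": "GTX 1070 8GB",
                          "ram_gb": 16, "disk_gb": 50},
    "1440p60_very_high": {"cpu": "Core i7-8700K", "gpu": "RTX 2080 Super 8GB",
                          "ram_gb": 16, "disk_gb": 50},
    "2160p30_ultra":     {"cpu": "Core i7-9700K", "gpu": "RTX 2080 8GB",
                          "ram_gb": 16, "disk_gb": 50},
}

print(REQUIREMENTS["2160p30_ultra"]["ram_gb"])  # 16
```

Laid out this way, the pattern is easy to see: disk space is constant at 50 GB across all tiers, and the RAM requirement only steps from 8 to 16 GB once you move past 1080p.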
Do you want to play Assassin’s Creed Valhalla in Full HD with low details at 30 fps, or in 4K with ultra details? You must have the right hardware! Here are the minimum and recommended requirements communicated by Ubisoft.
by Manolo De Agostini, published 16 October 2020 at 19:21 in the Videogames channel Ubisoft Assassin’s Creed
On 10 November Assassin’s Creed Valhalla arrives, and to allow all fans of the series to enjoy the new chapter at its best, Ubisoft has published the system requirements of the PC version, ranging from the minimum configuration to play in Full HD with low details at 30 fps up to 4K with ultra details at the same frame rate.
The software house has even released a video (above) in which it illustrates the characteristics of the PC version of the game, including multi-monitor support, various editable settings and more. Let’s see the detailed requirements together:
Note: requires a GPU with DirectX 12 support (Feature Level 12_0)
Worth noting are the requirement for 64-bit Windows 10 and the need for a GPU with DirectX 12 support (Feature Level 12_0), but this last requirement should not be a problem for many, since even older (but still widespread) video cards provide this support.
Ubisoft does not indicate what it takes to play in 4K at ultra details and 60 fps, but most likely an Nvidia GPU such as the RTX 3080, or a high-end AMD GPU from the new Radeon RX 6000 series (to be presented on 28 October and teased by AMD at the Ryzen 5000 event), should handle the heavy lifting.
In other times, I would now be in Los Angeles, or elsewhere on the North American continent, watching the announcement of the latest AMD Ryzen processors from the audience, a few meters from the stage. But 2020 is not that kind of year, so, as with the launch event of the RTX series, I watched this one at home, in front of the monitor, just like you.
For those who have already watched the launch event, we do not have too many extra things to show you today. For those who did not watch the event but want to, you can find the recording here. In short, Lisa Su, AMD CEO, Mark Papermaster, AMD CTO, and Robert Hallock, AMD Director of Technical Marketing, briefly presented the first 4 models in the AMD Ryzen 5000 range, based on the Zen 3 architecture, some of the novelties they bring, as well as a series of in-house benchmarks.
We also found out what these models are called, their specifications, their prices and the launch date. Last but not least, we saw a teaser for the AMD Radeon RX 6000 series, with the event ending in an invitation to join AMD again on 28 October, when they will provide similar details about the new AMD video cards.
For those who can’t wait to watch the information in video form, I focused