Intel’s new W-1300 series of Xeon processors briefly emerged in a compatibility list showing all CPUs supported in Intel’s latest LGA1200 socket, though the entries were taken down a few hours later. The listing suggests that these new Xeon processors will use the same socket as Intel’s consumer-grade Rocket Lake products.
Because these chips are compatible with the LGA 1200 socket, they should be nearly identical to Intel’s upcoming Core i5, i7, and i9 Rocket Lake parts, but with a few alterations, including support for vPro technologies and ECC memory.
Intel has been doing this for years with its lower-end Xeon processors, re-purposing lower-end Core i5, i7, and i9 parts and turning them into Xeon chips. The strategy makes a lot of sense, as not all servers and workstations require HEDT levels of processing power and connectivity.
However, since the Skylake generation, Intel has severely limited its entry-level Xeons’ motherboard compatibility and requires them to run on chipsets designed for workstations and servers. So while these chips share socket compatibility with Intel’s consumer desktop chips, the W-1300 Xeons will not work in a standard H-, B-, or Z-series motherboard.
The W-1300 CPUs, which appeared on an ASRock list, were the W-1390, W-1390T, W-1350P, W-1350, and W-1370.
The main differences we can find between the Xeon chips are TDP and, for some, the amount of L3 cache. For instance, the W-1350, W-1390, and W-1370 all have an 80W TDP. The W-1390T has the lowest TDP at just 35W, and the W-1350P has the highest at 125W.
Additionally, the W-1350 and W-1350P are equipped with less L3 cache, coming in at 12MB instead of 16MB. Presumably, this reduction is due to a lower core count compared to their siblings.
Unfortunately, that’s all we know (if it’s even accurate) for now. We still don’t know what prices, core counts, or boost frequencies these chips will have. (But expect a maximum of 8 cores for W-1300 chips due to the Rocket Lake architecture.)
Hopefully, we should have more information on Intel’s new W-1300 chips right around or after the official Rocket Lake launch.
Samsung has announced that it has developed the industry’s first 512GB memory module using its latest DDR5 memory devices that use high-k dielectrics as insulators. The new DIMM is designed for next-generation servers that use DDR5 memory, including those powered by AMD’s Epyc ‘Genoa’ and Intel’s Xeon Scalable ‘Sapphire Rapids’ processors.
Samsung’s 512GB DDR5 registered DIMM (RDIMM) memory module uses 32 16GB stacks, each built from eight 16Gb DRAM devices. The 8-Hi stacks use through-silicon via (TSV) interconnects to ensure low power consumption and clean signaling. For some reason, Samsung does not disclose the maximum data transfer rate its RDIMM supports, which is not completely unexpected, as the company cannot disclose specifications of next-generation server platforms.
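For readers who want to check the math behind that capacity figure, here is a minimal sketch of the arithmetic, assuming the stated configuration (32 stacks, eight 16Gb dies per stack); the variable names are ours, not Samsung’s.

```python
# Back-of-envelope check of the 512GB RDIMM capacity described above.
GBIT_PER_DIE = 16          # 16Gb DDR5 dies
DIES_PER_STACK = 8         # 8-Hi TSV stacks
STACKS_PER_DIMM = 32

gbit_per_stack = GBIT_PER_DIE * DIES_PER_STACK        # 128 Gb
gbyte_per_stack = gbit_per_stack / 8                  # 16 GB per stack
dimm_capacity_gb = gbyte_per_stack * STACKS_PER_DIMM  # 512 GB per module

print(f"Per-stack capacity: {gbyte_per_stack:.0f} GB")
print(f"Module capacity:    {dimm_capacity_gb:.0f} GB")
```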
An interesting thing about Samsung’s 512GB RDIMM is that it uses the company’s latest 16 Gb DDR5 memory devices which replace traditional insulators with a high-k material originally used for logic gates to lower leakage current. This is not the first time Samsung has used HKMG technology for memory as, back in 2018, it started using it for high-speed GDDR6 devices. Theoretically, usage of HKMG could help Samsung’s DDR5 devices to hit higher data transfer rates too.
Samsung says that because of DDR5’s reduced voltages, the HKMG insulating layer and other enhancements, its DDR5 devices consume 13% less power than predecessors, which will be particularly important for the 512GB RDIMM aimed at servers.
When used with server processors featuring eight memory channels and two DIMMs per channel, Samsung’s new 512GB memory modules allow you to equip each CPU with up to 8TB of DDR5 memory, up from 4TB today.
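The 8TB-per-socket figure follows directly from the platform topology described above; a quick sketch, assuming eight memory channels and two DIMMs per channel:

```python
# Maximum DDR5 capacity per CPU socket with 512GB RDIMMs,
# assuming 8 memory channels and 2 DIMMs per channel as described above.
CHANNELS_PER_CPU = 8
DIMMS_PER_CHANNEL = 2
DIMM_CAPACITY_GB = 512

total_gb = CHANNELS_PER_CPU * DIMMS_PER_CHANNEL * DIMM_CAPACITY_GB
print(f"{total_gb} GB = {total_gb / 1024:.0f} TB per socket")   # 8192 GB = 8 TB

# The same topology with today's 256GB modules tops out at 4TB,
# which matches the 'up from 4TB' figure above.
```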
Samsung says it has already started sampling various DDR5 modules with various partners from the server community. The company expects its next-generation DIMMs to be validated and certified by the time servers using DDR5 memory hit the market.
“Intel’s engineering teams closely partner with memory leaders like Samsung to deliver fast, power-efficient DDR5 memory that is performance-optimized and compatible with our upcoming Intel Xeon Scalable processors, code-named Sapphire Rapids,” said Carolyn Duran, Vice President and GM of Memory and IO Technology at Intel.
Two upcoming professional graphics cards from Nvidia — the RTX A4000 and the RTX A5000 — have received an OpenCL 1.2 certification from the Khronos Group, the consortium that oversees that API. The submission for certification indicates that Nvidia is getting ready to release these products commercially.
Nvidia submitted its yet-to-be-launched RTX A4000 and RTX A5000 proviz graphics cards along with the appropriate drivers to Khronos Group back in mid-February, as noticed by @Komachi_Ensaka. By now, the organization has tested the boards and found that they conform to the OpenCL 1.2 specification.
It is noteworthy that the new professional graphics cards were submitted to Khronos Group alongside the RTX A6000 board, and all three were submitted as Quadro RTX A6000/A5000/A4000 products despite the fact that Nvidia started phasing out its Quadro brand last October and ceased to use it with Ampere-based proviz boards. However, these are professional GPUs, so we don’t expect them to compete with the best graphics cards for gaming or carry the GeForce branding.
Nvidia’s RTX A6000 professional graphics card is based on the GA102 GPU with 10752 active CUDA cores as well as 48 GB of memory. Specifications of Nvidia’s RTX A4000 and RTX A5000 products are unknown. The GPU developer only used its TU102 and TU104 for its Quadro RTX family launched in 2018. If it follows the same approach with the RTX A-series cards, then both the RTX A4000 and the RTX A5000 will be powered by the GA104 chip. Theoretically, Nvidia could use the GA106 for the RTX A4000.
Neither RTX A4000 nor the RTX A5000 boards have been formally announced, and Nvidia does not typically comment on rumors, so we’ll have to wait for an official announcement for confirmation of these specs and models.
Intel uploaded a detailed description of its upcoming Ponte Vecchio Xe-HPC GPU for supercomputers just hours after outlining its IDM 2.0 strategy that involves usage of internal and external manufacturing capabilities. Ponte Vecchio, which employs components produced by Intel, Samsung, and TSMC using a variety of process technologies, demonstrates Intel’s vision of the future in the best way possible.
The chip codenamed Ponte Vecchio is the company’s first GPU based on the Xe-HPC microarchitecture and will initially be used for Argonne National Laboratory’s Aurora supercomputer alongside Intel’s next-generation Xeon Scalable ‘Sapphire Rapids’ processors. The machine will be one of the industry’s first supercomputers to exceed 1 ExaFLOPS of FP64 performance.
Over time, the part will be available to other customers and Intel might even customize it as it is relatively easy to do given the fact that the Ponte Vecchio uses a disaggregated modular architecture, Intel’s new approach to complex processors.
In fact, it would be impossible to build a monolithic Ponte Vecchio as it is a massive processor featuring 47 components, over 100 billion transistors, and offering PetaFLOPS-class AI performance (more on this later).
The Ponte Vecchio includes the following tiles/chiplets:
2 base tiles made using Intel’s 10 nm SuperFin technology
16 compute tiles produced by TSMC initially and then by Intel when its 7 nm technology is ready for high-volume manufacturing (HVM).
8 Rambo cache tiles fabbed using Intel’s 10 nm Enhanced SuperFin process
11 EMIB links made by Intel
2 Xe Link I/O tiles made by a foundry
8 HBM memory stacks produced by a DRAM manufacturer
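Summing the tile counts listed above is a quick sanity check against the 47-component figure; a minimal sketch (the labels are ours):

```python
# Tally of the Ponte Vecchio tiles listed above; should total 47 components.
tiles = {
    "base tiles (Intel 10nm SuperFin)": 2,
    "compute tiles (TSMC, later Intel 7nm)": 16,
    "Rambo cache tiles (Intel 10nm Enhanced SuperFin)": 8,
    "EMIB links (Intel)": 11,
    "Xe Link I/O tiles (external foundry)": 2,
    "HBM stacks (DRAM vendor)": 8,
}
print(sum(tiles.values()))  # 47
```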
At present, Intel is only using its Ponte Vecchio Xe-HPC GPUs in the lab. While the modular design of the device allows the company to build it more or less cost-efficiently, tailoring the design’s thermals, voltages, and frequencies is tricky and will take some time.
One interesting thing to note about Intel’s Ponte Vecchio description is that the chipmaker said that it offers ‘PetaFLOPS-class AI performance.’ There are numerous AI workloads that require different compute precision.
Intel usually considers FP16 to be the optimal precision for AI, so when the company says that its Ponte Vecchio is a ‘PetaFLOP scale AI computer in the palm of the hand,’ this might mean that the GPU delivers about 1 PFLOPS of FP16 performance, or 1,000 TFLOPS FP16. To put that number into context, Nvidia’s A100 compute GPU provides about 312 TFLOPS of FP16 performance.
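To make that comparison concrete, here is the arithmetic behind it, assuming ‘PetaFLOPS-class’ really does mean roughly 1 PFLOPS of FP16 throughput:

```python
# Rough comparison of the implied Ponte Vecchio FP16 figure against Nvidia's A100.
# Assumes 'PetaFLOPS-class AI performance' means ~1 PFLOPS of FP16.
ponte_vecchio_fp16_tflops = 1000   # 1 PFLOPS = 1,000 TFLOPS (assumed)
a100_fp16_tflops = 312             # Nvidia A100 FP16 figure cited above

print(f"Implied advantage: ~{ponte_vecchio_fp16_tflops / a100_fp16_tflops:.1f}x")  # ~3.2x
```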
Argonne National Laboratory’s Aurora supercomputer is due in 2022.
It would seem that Nvidia might have encountered a setback leading up to the GeForce RTX 3080 Ti’s release. According to a message posted on the Board Channels forums, the chipmaker has purportedly pushed the launch date to mid-May. The GeForce RTX 3080 Ti was previously rumored to debut in mid-April.
The GeForce RTX 3090 and the Radeon RX 6900 XT are two of the best graphics cards on the market right now. The truth of the matter is that neither flagship is cheap or in stock. The GeForce RTX 3090 retails for $1,499, while the Radeon RX 6900 XT sells for $999, if you can find one outside of eBay (see our GPU price index). The general consensus is that Nvidia is cooking up the GeForce RTX 3080 Ti to compete with the Radeon RX 6900 XT at the $999 price bracket. For that to happen, the GeForce RTX 3080 Ti’s performance would have to be on equal ground with or better than the Radeon RX 6900 XT.
Nvidia has gone to lengths to keep the GeForce RTX 3080 Ti under wraps, but it presumably takes after the GeForce RTX 3090. It’s probable that the graphics card uses the same GA102 silicon. There is speculation that the GeForce RTX 3080 Ti comes equipped with 82 Streaming Multiprocessors (SMs), same as the GeForce RTX 3090. However, there’s another group that thinks that Nvidia might disable two SMs to bring the CUDA count down to 10,240. The latter seems reasonable since Nvidia wouldn’t want the GeForce RTX 3080 Ti’s performance to be too close to the flagship SKU.
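Both rumored configurations are consistent with Ampere’s 128 CUDA cores per SM; a quick sketch of that arithmetic (the SM counts are the rumored figures, not confirmed specs):

```python
# CUDA core counts implied by the rumored SM configurations.
# Ampere (GA102) packs 128 FP32 CUDA cores per Streaming Multiprocessor.
CUDA_CORES_PER_SM = 128

for sms in (82, 80):   # 82 SMs = full RTX 3090 config; 80 SMs = two disabled
    print(f"{sms} SMs -> {sms * CUDA_CORES_PER_SM:,} CUDA cores")
# 82 SMs -> 10,496 CUDA cores
# 80 SMs -> 10,240 CUDA cores
```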
Nvidia GeForce RTX 3080 Ti Specifications
| Specification | GeForce RTX 3090 | GeForce RTX 3080 Ti* | GeForce RTX 3080 | GeForce RTX 3070 |
| --- | --- | --- | --- | --- |
| Architecture (GPU) | Ampere (GA102) | Ampere (GA102) | Ampere (GA102) | Ampere (GA104) |
| CUDA Cores / SP | 10,496 | 10,496 / 10,240 | 8,704 | 5,888 |
| RT Cores | 82 | 82 / 80 | 68 | 46 |
| Tensor Cores | 328 | 328 / 320 | 272 | 184 |
| Texture Units | 328 | 328 / 320 | 272 | 184 |
| Base Clock Rate | 1,395 MHz | 1,365 MHz | 1,440 MHz | 1,500 MHz |
| Boost Clock Rate | 1,695 MHz | 1,665 MHz | 1,710 MHz | 1,730 MHz |
| Memory Capacity | 24GB GDDR6X | 12GB GDDR6X | 10GB GDDR6X | 8GB GDDR6 |
| Memory Speed | 19.5 Gbps | 19 Gbps | 19 Gbps | 14 Gbps |
| Memory Bus | 384-bit | 384-bit | 320-bit | 256-bit |
| Memory Bandwidth | 936 GBps | 912.4 GBps | 760 GBps | 448 GBps |
| ROPs | 112 | 112 | 96 | 96 |
| L2 Cache | 6MB | 6MB | 5MB | 4MB |
| TDP | 350W | 350W | 320W | 220W |
| Transistor Count | 28.3 billion | 28.3 billion | 28.3 billion | 17.4 billion |
| Die Size | 628 mm² | 628 mm² | 628 mm² | 392 mm² |
| MSRP | $1,499 | $999 | $699 | $499 |

*Specifications are unconfirmed.
Reputable hardware leaker kopite7kimi reported that the GeForce RTX 3080 Ti could feature a 1,665 MHz boost clock, 30 MHz lower than the GeForce RTX 3090. If Nvidia applies the same treatment to the base clock then the GeForce RTX 3080 Ti should check in at 1,365 MHz.
On the memory front, the GeForce RTX 3080 Ti reportedly features 12GB of 19 Gbps GDDR6X memory, half of what’s on the GeForce RTX 3090 and slightly slower. However, it may still retain the 384-bit memory interface. If so, the memory bandwidth would peak around 912.4 GBps. The GeForce RTX 3090 delivers a hash rate up to 106.5 MH/s on Ethereum so it’ll be interesting to see just how the GeForce RTX 3080 Ti performs in mining Ethereum.
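The bandwidth figure follows from the rumored memory speed and bus width; a quick check (the 912.4 GBps figure above appears to round the effective data rate slightly differently):

```python
# Peak memory bandwidth = effective data rate (Gbps per pin) x bus width / 8.
def bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbps(19.0, 384))   # ~912 GBps -- rumored RTX 3080 Ti
print(bandwidth_gbps(19.5, 384))   # 936 GBps  -- RTX 3090
print(bandwidth_gbps(19.0, 320))   # 760 GBps  -- RTX 3080
```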
Previous Ampere launches have taught us that initial supply is extremely limited, and we don’t expect the situation to change with the GeForce RTX 3080 Ti. It’ll probably require a miracle to beat scalpers to one, much less find it at the rumored $999 price tag. Let’s hope that the extra month gives Nvidia enough time to build up its GeForce RTX 3080 Ti stock. If not, it’s time to pull another all-nighter, continuously hitting that F5 button in hopes of snagging one away from the scalpers.
With clear audio, a great microphone and an understated but attractive design, the Fnatic React+ is aimed at eSports gamers, but it’s a great all-around headset for media and working from home too. The bundled USB sound card adds great-sounding 7.1 virtual surround sound to PC gaming, and a 3.5mm jack means you can use it with other gaming devices too.
For
Very good virtual 7.1 surround
Simple, attractive design
Superb microphone clarity
Swappable ear cushions
USB-A and 3.5mm
Against
Vestigial inline volume/mic switch is redundant when using USB
No software
Earcups don’t swivel
The Fnatic React+ adds virtual surround sound to the feature set that made the original React popular with gamers: large, clear drivers with very good gaming audio quality and excellent stereo separation, a design that remains comfortable throughout long gaming sessions, and a microphone with top-of-its class clarity. All that is wrapped in an understated design that looks cool enough for eSports gaming but subtle enough for teleconferencing.
The React+ pairs the original React headphones with Fnatic’s XP USB sound card (no relation to Windows XP), which adds 7.1 simulated surround sound at the touch of a button, and an extra set of earpads. Yet, the cans are still cheaper than many of the best gaming headsets, at just $99.99 as of writing. The resulting package, while not without its quirks, offers superb performance for a headset in its price class.
Fnatic React+ Specs
| Spec | Fnatic React+ |
| --- | --- |
| Driver Type | 53mm |
| Impedance | 23 Ohms |
| Frequency Response | 20 – 40,000 Hz |
| Microphone Type | Cardioid boom, detachable |
| Connectivity | 3.5mm or USB Type-A |
| Cables | 3.9 feet (1.2m) 3.5mm cable, 3.3 feet (1m) USB cable, 6.5 feet (2m) extender/mic splitter |
| Weight | 0.8 pounds (348g) |
| Lighting | None |
| Software | None |
| Extra | 1x extra set of ear cushions |
Design and Comfort
For a design marketed directly at the eSports crowd, the Fnatic React+ headset has a tasteful, understated aesthetic without any elements you’d describe as bling. There’s no RGB lighting here, just a matte-black plastic finish with white accents. There’s a Fnatic logo on each earcup, and the company name is subtly embossed on the side and top of the headband.
The one hint of color is the soft, bright orange mesh fabric inside the earcups, helpfully stamped “R” and “L” to assist in putting them on correctly when the microphone is unplugged. The React+ ships with comfortable, memory foam-filled, faux leather-covered earpads installed. But you can also swap these for the included velour earpads. Those will feel more airy, particularly helpful for gamers who get warm while playing.
The oval, enclosed earcups are mounted on adjustable metal hangers, which feel very solid and should hold up well to regular use. The earcups completely enclose your ears, providing very good passive noise isolation. They can swivel vertically for comfort when being worn, but there’s no horizontal swivel axis to fold them out and flatten them for easier transport or storage.
With either set of pads in place, the React+ headset was comfortable even on my rather large head. At 0.8 pounds, it’s not as lightweight as some wired headsets; the similarly specced MSI Immerse GH61, for example, is 0.6 pounds. Thankfully, the React+ didn’t feel overly heavy in use. The clamping force is strong enough to provide good noise isolation without becoming uncomfortable over time, which is not always the case with my big noggin. Meanwhile, a strip of memory foam padding across the inside of the headband aids comfort.
When using the microphone, it snaps solidly into the left earcup, but if you’re playing a solo game, listening to music or watching a movie, you can easily pop it out.
The React+ also includes Fnatic’s XP USB sound card, which the company also sells separately for $23. The sound card is enclosed in a small, oval controller with a 3.5mm jack on one end and a 3.3-foot-long USB-A cable on the other. Its matte black design matches the headphones, with rocker switches for headphone volume and microphone level, a button to toggle 7.1-channel surround sound and a microphone mute switch on the side. The controller adds little weight to the headphone setup, and the rockers are well-positioned for quick adjustment when gaming.
Overall, it’s well-designed, but an additional analog volume dial and microphone switch near the top of the headphone cable (left over from the original design, which didn’t include the sound card) can cause frustration: accidentally brush the analog dial and you’ll wonder why the volume rocker on the sound card suddenly won’t go high enough. That said, if Fnatic had omitted the analog controls from the React+ bundle, they’d be unavailable when using the headphones sans sound card with other devices.
The headset also comes with a 6.6-foot extension cable that splits the microphone and audio jacks for devices that don’t support both on a single connector.
The one design element I’d change, if given the chance, is that the 3.5mm cable is permanently attached to the headset. Without a removable cable, the headphones will be rendered useless if the primary cable is damaged by your cat, kids, or other sinister elements.
Audio Performance
The 53mm drivers Fnatic uses in the React+ are calibrated for gaming, with a separate chamber for bass frequencies to help separate them from the mids and highs. This helps keep bass from explosions and gunshots from overwhelming other game sounds. Though the sound is relatively pure, mids and highs are slightly boosted, and the result is much better audio clarity from complex game soundscapes than you’d expect from headphones in this price range. Playing Metro 2033, Call of Duty: Warzone and Apex Legends, environmental sound and voices remained clear even in heavy combat situations.
This clarity isn’t lost when engaging the React+ virtual surround sound by pressing the surround button in the center of the USB sound card controller. The effect is convincing and adds a more enveloping quality to the audio without changing it to the point where clarity is lost.
Playing Watch Dogs: Legion, the surround sound significantly enhanced immersion as I walked and drove around the city. Even in the sedate environment of Microsoft Flight Simulator, the directional audio as I panned around my plane in external views was noticeably more enveloping than the default stereo audio heard with surround disabled.
The in-game soundscape of the React+ is excellent because the bass separation, large drivers and clarity across frequencies mean you won’t miss important dialogue or environmental sounds in the heat of play. It’s a significant improvement over using headphones geared for music playback while gaming, where heavy bass emphasis can muddy the audio.
These cans also sound great when watching movies on the PC, as those same characteristics also keep audio clear during film and TV action sequences.
Conversely, the one area where the cans are more pedestrian is music. Albums like Logic’s The Incredible True Story and Kendrick Lamar’s DAMN. benefit from the boosted bass on more music-oriented headsets, and Pink Floyd’s classic Dark Side of the Moon sounded off with the React+’s emphasized mids and highs when compared to my (admittedly more expensive) Sennheiser Momentum 2.0 wired headphones.
With the leatherette ear cushions, the passive noise isolation from the large earcups is excellent; in my home office I only heard the loudest outside sounds when playing games. They also do a good job of keeping the noise from leaking out and disturbing others nearby. It is passive isolation, though, so if you use these to listen to music on your next flight, they can only block out so much. The velour cups are slightly less isolating than the leatherette.
Microphone
The detachable cardioid microphone includes a pop filter and has a flexible but stiff arm that stayed in position well and never came loose during gaming. There’s no noise cancellation, but it targets the mouth well enough that it didn’t pick up environmental sounds when I was gaming.
Fellow players reported that my vocals were very clear. And when I listened to audio from the microphone recorded on my PC, it sounded very pure, although perhaps a tiny bit higher in pitch than natural. As you’d expect from a headset marketed squarely at the eSports market, Fnatic does a great job with the microphone here.
In addition to a microphone mute switch, the XP sound card controller includes a mic level adjust rocker as well. This is great when you’re in-game, and your teammates complain about your mic’s volume. It’s much easier to quickly adjust mic sensitivity with the rocker instead of having to tweak it using audio settings on your computer.
Features and Software
The headset uses a 3.5mm TRRS plug to connect to the USB sound card. You can omit the sound card and use the plug to connect to other devices. Fnatic says the headset is compatible with Macs, as well as Xbox, Nintendo Switch, PlayStation 4 and (if you still have a headphone jack or adapter) mobile phones. The USB adapter is only fully supported under Windows, but we found the headset worked well plugged directly into an Xbox Series X controller and a Switch, though we missed the surround sound and the ability to adjust microphone levels.
There’s no bundled software, so you won’t be able to adjust equalization in-game. That said, the ability to toggle surround sound and adjust microphone and volume levels using physical buttons is more convenient when in-game than having to switch to an app.
Bottom Line
For a penny under $100, the Fnatic React+ performs like a more expensive headset. Audio is clear and sharp, both in your ears and coming from your microphone. The addition of effective, clear virtual 7.1-channel surround sound addresses the chief complaint about the original React (if you bought that, Fnatic offers a $29.99 bundle that includes the XP USB sound card and velour earpads to bring it up to React+ level), and the additional volume controls on the USB soundcard are a godsend if you need to quickly make adjustments during a frantic battle.
I’d love for the primary headset cable to be removable though. Not only would that make it less susceptible to being taken out by cable damage, but then we could omit the analog volume dial and microphone mute switch, which are redundant when using the USB sound card.
Overall, the Fnatic React+ offers superb audio for gaming and movies, decent (if unexceptional) music playback, and a look that’s cool without turning your head into a light show, so you’re not going to get strange looks if you’re wearing it during a Zoom call. The React+ also offers stiff competition to some of the best gaming headsets, such as the HyperX Cloud Alpha; it comes in at around the same price but adds 7.1 surround sound to the mix.
You can certainly find headsets with more features, but not in the React+’s price range. For gamers on a budget, this is a top choice.
Founded in 1998, Razer is a US-based peripherals and gaming equipment company. The history of the DeathAdder hearkens all the way back to 2006. Almost 15 years later, Razer finally gives it the wireless treatment with the V2 Pro. Equipped with Razer’s Focus+ sensor capable of up to 20,000 CPI, the DeathAdder V2 Pro has a battery life of up to 70 hours in low-latency 2.4 GHz mode and up to 120 hours in long-endurance Bluetooth mode. Razer’s second generation optical switches ensure no double-clicking while keeping latency low. The rubber side grips are injection-molded for improved longevity, and a lightweight construction of 87 g coupled with 100% PTFE mouse feet promises great handling. Additionally, the DeathAdder V2 Pro is fully compatible with the Razer Chroma charging dock also used for the Viper Ultimate and Basilisk Ultimate. Basic RGB lighting is included, while Razer Synapse has the usual customization options, along with on-board memory support.
Bang & Olufsen’s latest pair of headphones are the Beoplay HX. They’re over-ear, noise canceling, and offer up to a truly impressive 35 hours of battery life. The headphones launch in black today for $499 (£499 / €499), but there’s a white model coming at the end of April, to be followed by a white and brown version in May.
At $499, the Beoplay HX are among the more expensive wireless noise-canceling headphones available. But this isn’t unfamiliar territory for Bang & Olufsen: the previous Beoplay H9 headphones cost exactly the same — and this is the company that also sells an $800 pair of Bluetooth headphones.
Thirty-five hours of battery life beats pretty much all competitors (and it rises to 40 hours if you turn ANC off). The $549 AirPods Max are rated at just 20 hours with ANC on, while our top pick, the $350 Sony WH-1000XM4, can go for up to 30 hours. Others, like the Sennheiser Momentum 3 Wireless and Shure Aonic, are rated for 16 and 20 hours, respectively.
Beyond battery life, the other thing your $499 gets you is build quality. The Beoplay HX’s ear cushions are made from lambskin with a memory foam interior, while the headband uses cow hide and knitted fabric in its construction. The ear cups themselves feature an aluminum disc surrounded by a recycled plastic housing, and the arm sliders are also aluminum.
The rest of the Beoplay HX specs are typical. There’s a USB-C port for charging, a 3.5mm jack for wired connections, buttons on the left and right ear cup, and also touch controls on just the right side. The headphones support Bluetooth 5.1, and for codecs, you get aptX Adaptive, AAC, and SBC. Google Fast Pair and Microsoft Swift Pair are both included for easy pairing with their respective platforms. And yes, the headphones come with a 3.5mm cable in the box, unlike the AirPods Max.
Large companies like Google have been building their own servers for many years in a bid to get machines that best suit their needs. Most of these servers run Intel’s Xeon processors, with or without customizations, but feature additional hardware that accelerates certain workloads. For Google, this approach is no longer good enough. This week the company announced that it has hired Intel veteran Uri Frank to lead a newly established division that will develop custom system-on-chips (SoCs) for the company’s datacenters.
Google is not a newbie when it comes to hardware development. The company introduced its own Tensor Processing Unit (TPU) back in 2015 and today it powers various services, including real-time voice search, photo object recognition, and interactive language translation. In 2018, the company unveiled its video processing units (VPUs) to broaden the number of formats it can distribute videos in. In 2019, it followed with OpenTitan, the first open-source silicon root-of-trust project. Now Google installs its own and third-party hardware onto the motherboards next to an Intel Xeon processor. Going forward, the company wants to pack as many capabilities as it can into SoCs to improve performance, reduce latencies, and reduce the power consumption of its machines.
“To date, the motherboard has been our integration point, where we compose CPUs, networking, storage devices, custom accelerators, memory, all from different vendors, into an optimized system,” Amin Vahdat, Google Fellow and Vice President of Systems Infrastructure, wrote in a blog post. “Instead of integrating components on a motherboard where they are separated by inches of wires, we are turning to SoC designs where multiple functions sit on the same chip, or on multiple chips inside one package.”
These highly integrated system-on-chips (SoCs) and system-in-packages (SiPs) for datacenters will be developed in a new development center in Israel, which will be headed by Uri Frank, vice president of engineering for server chip design at Google, who brings 24 years of custom CPU design and delivery experience to the company. The cloud giant plans to recruit several hundred world-class SoC engineers to design its SoCs and SiPs, so these products are not going to jump into Google’s servers in 2022, but will likely reach datacenters by the middle of the decade.
Google has a vision of tightly integrated SoCs replacing relatively disintegrated motherboards. The company is eager to develop the building blocks of its SoCs and SiPs itself, but will have nothing against buying them from third parties if needed.
“Just like on a motherboard, individual functional units (such as CPUs, TPUs, video transcoding, encryption, compression, remote communication, secure data summarization, and more) come from different sources,” said Vahdat. “We buy where it makes sense, build it ourselves where we have to, and aim to build ecosystems that benefit the entire industry.”
Google’s foray into datacenter SoCs is consistent with what its rivals Amazon Web Services and Microsoft Azure are doing. AWS already offers instances powered by its own Arm-based Graviton processors, whereas Microsoft is reportedly developing its own datacenter chip too. Google has yet to disclose whether it intends to build its own CPU cores or license them from Arm or another party, but since the company is early in its journey, it is probably considering different options at this point.
“I am excited to share that I have joined Google Cloud to lead infrastructure silicon design,” Uri Frank wrote in a blog post. “Google has designed and built some of the world’s largest and most efficient computing systems. For a long time, custom chips have been an important part of this strategy. I look forward to growing a team here in Israel while accelerating Google Cloud’s innovations in compute infrastructure. Want to join me? If you are a world class SOC designer, open roles will be posted to careers.google.com soon.”
After almost a decade of total market dominance, Intel has spent the past few years on the defensive. AMD’s Ryzen processors continue to show improvement year over year, with the most recent Ryzen 5000 series taking the crown of best gaming processor: Intel’s last bastion of superiority.
Now, with a booming hardware market, Intel is preparing to make up some of that lost ground with the new 11th Gen Intel Core Processors. Intel is claiming these new 11th Gen CPUs offer double-digit IPC improvements despite remaining on a 14 nm process. The top-end 8-core Intel Core i9-11900K may not be able to compete with its Ryzen 9 5900X AMD rival in heavily multi-threaded scenarios, but the higher clock speeds and alleged IPC improvements could be enough to take back the gaming crown. Along with the new CPUs, there is a new chipset to match, the Intel Z590. Last year’s Z490 chipset motherboards are also compatible with the new 11th Gen Intel Core Processors, but Z590 introduces some key advantages.
First, Z590 offers native PCIe 4.0 support from the CPU, which means the PCIe and M.2 slots powered off the CPU will offer PCIe 4.0 connectivity when an 11th Gen CPU is installed. The PCIe and M.2 slots controlled by the Z590 chipset are still PCIe 3.0. While many high-end Z490 motherboards advertised this capability, it was not a standard feature for the platform. In addition to PCIe 4.0 support, Z590 offers USB 3.2 Gen 2×2 from the chipset, a standard that delivers speeds of up to 20 Gb/s. Finally, Z590 boasts native support for 3200 MHz DDR4 memory. With these upgrades, Intel’s Z-series platform has feature parity with AMD’s B550. On paper, Intel is catching up to AMD, but only testing will tell if these new Z590 motherboards are up to the challenge.
The MSI Performance Gaming line, or “MPG” for short, from MSI is generally pitched as the middle ground between the no-holds-barred MEG line and more value-oriented MAG line. The MSI MPG Z590 Carbon EK X is an exception. Developed in partnership with and distributed by EKWB, the MSI MPG Z590 Carbon EK X features a monoblock for CPU and VRM cooling as well as all the tools you need to integrate it into your custom water-cooling build.
The MSI MPG Z590 Carbon EK X features a 16-phase Vcore VRM on a 6-layer PCB. There is also 2.5 Gb/s LAN and built-in WiFi 6E, as well as three M.2 slot heatsinks and even a physical RGB LED off switch. EK is including a leak test kit with the MSI MPG Z590 Carbon EK X, so you can build with confidence.
Let’s take a closer look at what the MSI MPG Z590 Carbon EK X has to offer.
Networking:
1x Intel I225-V 2.5G LAN
1x Intel WiFi 6E AX210 module
Rear Ports:
4x USB 2.0 ports
2x USB 3.2 Gen 1 5 Gbps Type-A
3x USB 3.2 Gen 2 10 Gbps Type-A
1x USB 3.2 Gen 2×2 20 Gbps Type-C
1x HDMI port
1x DisplayPort
1x 2.5G LAN
2x SMA WiFi connectors
5x Audio Connectors
1x Optical S/PDIF Out
1x Flash BIOS Button
Audio:
1x Realtek ALC4080 Codec
Fan Headers:
8x 4-pin
Form Factor:
ATX Form Factor: 12.0 x 9.6 in.; 30.5 x 24.4 cm
Exclusive Features:
Custom EK monoblock
EK leak test kit
2.5 Gb/s LAN
Intel WiFi 6E
Mystic Light
Frozr heatsink design
M.2 Shield Frozr
PCIe Steel Armor
Pre-installed I/O shielding
Testing for this review was conducted using a 10th Gen Intel Core i9-10900K. Stay tuned for an 11th Gen update when the new processors launch!
There are a lot of topics, both serious and fun, that are out there to be covered by The Verge, and it falls on our news writers to cover them: from coronavirus and space exploration to YouTube and Super Nintendo World. Mitchell Clark is one of those writers; among other articles, he wrote one of the best explanations you can find of what exactly NFTs are. We took a look (remotely via photos) of Mitchell’s desk and asked him some questions about his stuff.
Tell me a little about yourself. What is your background, and what do you do at The Verge?
Like Jay, I’m a news writer, tasked with keeping The Verge’s readers up to date with news about pretty much anything you could think of. Lately, it’s been a lot of NFTs, but it’s really just a grab bag every day I come into work, which keeps it exciting.
I also literally just got here — I started in December. I previously did a little of everything, from slinging fast-food chicken fingers to professionally fixing people’s phone problems to doing training, testing, and coding for software the government uses. Basically, pretty much anything not related to my degree in video production.
How did you decide where and how to set up your workspace?
I live in a relatively small and cheap city, so I’m luxuriating in a two-bedroom apartment. I’ve worked at home ever since we moved here in 2017, so as soon as we got all the moving boxes out of the second room, I claimed it as my office. As for where the desk is: it used to be up against the window, but the sun kept getting in my eyes, so I moved it against the wall instead.
Tell me a little about the desk itself.
It’s called the iMovR Energize, and it’s a motorized standing desk. And yes, I do actually work standing up a lot. I don’t often work sitting at it, though — the cat is banned from the office, but if I’m in here he’ll sit outside the door and scream. So if I’m going to work sitting down, I do it on the couch so he doesn’t guilt-trip me.
Half of the reason why I chose the Energize was because it’s ostensibly made in the US, and the other half is that there are almost no reviews of it, and I wanted to do one and have it stand out. As far as I can tell, I’m still the only person who’s done a video review of it on YouTube, the TL;DR of which is that it’s a good desk. If it lasts for 10 years, it may actually be worth the almost $1,000 price tag.
I think that’s the simplest desk chair I’ve seen so far.
Yeahhhh, it’s an Ikea Trollberget. I went with it over an office chair in the optimistic hope that it would help me not slouch so much. The seat part of it tilts back and forth, so it really requires some core strength to sit up straight, which is great when I actually do that, but honestly I usually just put my elbows on the desk and curve my body into some horrible “S” shape. If I lived somewhere I could find a used Herman Miller, I’d probably give one of those a try.
Tell us a bit about your audio setup. It looks like you’ve put considerable thought into it.
Yes, I have. It’s a Shure Beta 87A microphone, mounted on a Heil PL2T arm and connected to a Focusrite Scarlett 2i4 audio interface. The headphones are the Beyerdynamic DT 770 Pro 80 ohms, which aren’t super fun for music (hence the fifth-gen iPod with KZ ES4 earbuds) but are great for accurately reproducing vocals.
The whole setup is optimized for one thing: making sure that my voice is as clear and echo-free as possible. I was tired of having to go into a cave of blankets to record voiceovers, so I got a microphone with a very narrow (supercardioid) pickup pattern, and it works great. I also sometimes use it as an improvised video mic, for which it’s only okay. Usually, it just makes me sound really great on Zoom calls (and lets me pretend I’m going to make more episodes of a podcast I made three episodes of and then gave up on).
Okay, now it’s time to talk about your other tech: your computer, display setup, and other tech stuff.
Alright! My computer is a 13-inch M1 MacBook Pro — I went with the Pro over the Air mainly for the brighter screen. When I’m working from my desk and not the couch, I plonk that on a Twelve South Curve stand and plug it into a… *checks B&H order history* Dell U2415 24-inch monitor.
It’s 16:10, which is nice, but unfortunately it’s got a 1920 x 1200 resolution. I seem to be especially sensitive to low resolutions (I can immediately tell the difference between YouTube at 1080p and 720p on my iPhone Mini), so my next big upgrade may be to LG’s 24-inch UltraFine 4K (if I can find one used).
I switch between a Magic Trackpad and Logitech G502 Hero for my mousing needs. Changing which device and hand I use helps stave off wrist pain, and I’ve discovered that any mouse without Logitech’s ratcheting / free-spinning scroll wheel is almost unusable for me. For my keyboard, I use the peculiarly named Ducky One 2 with Cherry MX Browns. The main theme is wired: I’ve always run into weird, annoying issues with Bluetooth keyboards and mice.
The final Big Thing on my desk is an OWC ThunderBay 4. Being into video production and photography (Fujifilm X-T3 for digital, Nikon F3HP for film, by the way), I accumulate a lot of absolutely massive files: I’ve currently got 11TB of data spread out across 17TB of drives.
You mentioned that you had a bit of a cable issue.
Yeah, I just up (down? side-to-side?)-graded from an iMac Pro, which had just enough ports to plug in my five bajillion peripherals. Now my computer has two ports, so I have an absolute nightmare of a situation.
Here’s my current setup: I connect my laptop with Thunderbolt to the ThunderBay 4. Somehow that provides enough power to trickle-charge the laptop, and provides a Thunderbolt pass-through, which I currently have a USB-C Satechi Clamp Hub Pro plugged into. Plugged into that are my mouse and keyboard, and my monitor’s built-in USB hub, which has even more devices plugged into it (notably the scanner and Scarlett). Then I use my laptop’s second Thunderbolt port to plug in the monitor (good thing the ThunderBay can charge the computer, I’m out of ports).
I’ve got an OWC Thunderbolt 4 Dock on preorder to save me from this triple-hub chain nightmare, but until then, I’ve just got a mess of wires and am hoping nothing breaks.
I see your keyboard is right near your desk. Do you ever take a break to make some music?
I can’t actually play piano to be honest, even though I’ve literally had this keyboard since I was seven years old. I do have it hooked up to my computer through the Scarlett’s MIDI interface, so sometimes if I find a really cool-sounding synth in Logic, I’ll mash at the keyboard until I get something that sounds good. Its main job, though, is to sit there, guilting me until I actually learn even a drop of music theory.
Looks like a great setup for storing your bikes, but I’d be nervous about crashing into them if I push my chair back too hard…
I’d never even thought about that, but thankfully my chair doesn’t have wheels so I’d really have to try for it. The biggest risk with the bikes is that I’ll look out my window, see the paved trail that runs right outside it (and keeps going for 100 miles into a different state), and not be able to resist the temptation to take a ride!
For any other apartment-dwellers, the bike stand is probably a great option: it’s made by a company called Delta Design. I bought mine at Costco, but as always when I find something I like there, it’s no longer available. Amazon still sells it, and REI has a nicer-looking version, too.
Tell us a bit about your decorations: the great collection you’ve got on your bulletin board, the sculptures on your windowsill, etc.
I always want to have things that, as Marie Kondo would put it, spark joy around me while I’m working. So, I try to decorate with things made by creators or friends, or with art that is associated with some sort of memory. Some of the pins are from webcomics or podcasts that I enjoy, some are from Etsy, and the vintage and Michigan-related ones I got from my grandma, who apparently collected them. I’m on the record as absolutely loving Kentucky Route Zero, so I figured I’d get a poster of it, too.
The coolest story, though, goes with the metal bonsai trees. I did karate for about 10 years (and have missed doing it for six), and my sensei had a friend who would make the trees by hand. He’d give them out every year as awards for people who exemplified certain qualities of the Shotokan dojo kun. I don’t remember which I got them for, but they’re good reminders of some pretty good rules.
What’s on the shelving beneath the bulletin board?
A little bit of everything! There’s an Epson Perfection V550 scanner, which I use for everything from the mundane (scanning documents and birthday / holiday cards) to the exciting only to me (scanning all the film negatives I’ve developed). I also keep all my camera gear there, with one of the drawers having a mishmash of GoPro accessories, a Rode VideoMic Go, Zoom H5, and other video gear. The other drawer has “ancient media” like VHS tapes, cassette tapes, and vinyl records.
Oh, and there’s a label maker, which I’m pretty sure doesn’t have any tape left.
Finally — do you often hide under your desk?
Only in the summer, when it’s hot and I need to get out of the sunlight! But I do work from the floor a lot, either just sitting on it or laying down. I’ve been told it’s weird (usually by my wife, who comes home and finds me laying on the floor, with the cat having sprawled himself across my legs), but it works for me.
With the number of leaks concerning Nintendo’s upgraded Switch console over the past few months, we can be almost certain that the Japanese gaming company is indeed preparing to launch an update to the Switch. This morning Bloomberg added some more details to the picture. As it turns out, Nintendo’s upgraded console will be powered by a new system-on-chip designed by Nvidia. Interestingly, the new SoC will even support some of Nvidia’s latest graphics technologies.
The upgraded version of Nintendo’s Switch console is expected to come with a 7-inch OLED screen, an upgrade from a 6.2-inch 720p LCD screen used on the currently available model. A higher resolution display automatically requires a significant upgrade of the graphics subsystem of a console, so it is not particularly surprising that the revamped Switch will use an all-new Nvidia SoC that can handle 4K graphics when docked to an external TV.
The original Nintendo Switch is powered by Nvidia’s Tegra X1 SoC, featuring four Arm Cortex-A57 general-purpose cores as well as a GM20B GPU with 256 Maxwell-architecture CUDA cores (note that Nintendo’s Switch does not use the four low-power Cortex-A53 cores also found in the X1). This processor was introduced in early 2015, and by now it is completely out of date.
The new system-on-chip from Nvidia will feature new general-purpose CPU cores as well as a new GPU that will support Nvidia’s Deep Learning Super Sampling (DLSS) that enhances graphics quality in games that support it, reports Bloomberg citing sources familiar with the matter. The console will also most likely come with more memory featuring higher bandwidth (think LPDDR4X or LPDDR5).
It is hard to say exactly what the new Nvidia SoC for Nintendo’s upgraded Switch will pack, but DLSS requires Tensor cores, so we are definitely talking about the Volta, Turing, or Ampere architectures here. The exact configuration of the GPU is unknown, but if Nintendo wants proper 4K graphics on both internal and external screens, it should not skimp on graphics performance.
The information about the new SoC, of course, comes from an unofficial source and has to be taken with a grain of salt. For obvious reasons, neither Nintendo nor Nvidia commented on the matter.
Meanwhile, in a bid to maintain backwards compatibility with games for Switch, Nintendo had to use an SoC with Nvidia’s graphics, so a new chip from the green giant seems perfectly reasonable. Nvidia has experience integrating its latest GPU architectures into SoCs for automobiles, so it should not be a problem for the company to design a new processor for Nintendo’s upcoming game console.
Although it’s a pricey drive, Corsair’s MP600 Pro is fast, secure, and keeps its cool with innovative cooling solutions.
For
+ Competitive performance
+ Innovative and functional thermal solutions
+ AES 256-bit encryption
+ 5-year warranty
Against
– Less endurance than the non-Pro model
– Smaller-than-expected SLC cache
– Slow-to-recover SLC cache
– Dated software support
– Costly
Features and Specifications
Powered by Phison’s new PCIe Gen4 NVMe SSD controller and Micron’s 96-Layer TLC flash, Corsair’s all-new MP600 Pro is the company’s fastest M.2 NVMe SSD yet. With sequential read/write throughput that stretches up to a blistering 7.1 / 6.5 GBps, the MP600 Pro offers nearly bus-saturating performance and looks brilliant with innovative custom heatsinks, making it a sure contender for our list of Best SSDs. Though the drive is pricey, Corsair offers not only a standard drive cooled with a heatsink but also a Hydro X Edition with a water block for those who want a truly water-cooled M.2 SSD.
Historically, Corsair’s SSDs have a solid design and rank well in both performance and value. Until now, the MP600 served as the company’s top dog, sporting a sleek design, AES 256-bit encryption, and packing top speeds of up to 5 / 4.4 GBps of sequential read/write throughput.
Now, over a year later and just in time for Intel’s Rocket Lake launch, the company has upgraded to faster hardware to create a Pro model for those who want even more speed. Corsair’s MP600 Pro improves upon its predecessor, trading out the Phison E16 controller for the new E18 and interfacing with a faster flash with a 1,200 MTps transfer rate. Those enhancements yield up to 660,000 / 800,000 random read/write IOPS.
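Those rated throughput figures sit right up against what a PCIe 4.0 x4 link can carry, which is why the flash-side upgrade matters. A rough sketch, assuming the E18’s usual eight NAND channels with an 8-bit bus per channel (our assumption, not a Corsair spec):

```python
# Rough bandwidth budget for a Phison E18 drive with 1,200 MTps NAND.
# Assumes 8 NAND channels, 8 bits per transfer per channel (typical, not vendor-confirmed).
channels = 8
nand_mtps = 1200                       # mega-transfers per second per channel
raw_flash_mbps = channels * nand_mtps  # 9,600 MBps of raw flash bandwidth

pcie4_x4_gbps = 16 * 4 * (128 / 130)   # 16 GT/s per lane, 4 lanes, 128b/130b encoding
pcie4_x4_mbps = pcie4_x4_gbps * 1000 / 8

print(f"Raw flash bandwidth: ~{raw_flash_mbps} MBps")
print(f"PCIe 4.0 x4 ceiling: ~{pcie4_x4_mbps:.0f} MBps")
# The flash back end (~9.6 GBps) comfortably outruns the ~7.9 GBps host link,
# so rated 7,000 MBps reads are limited by the interface, not the NAND.
```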
Unlike Team Group’s form-over-function attempt at water-cooling an SSD, Corsair’s MP600 Pro Hydro X Edition is the first truly water-cooled M.2 NVMe SSD we’ve seen. Available with either a heatsink or the Hydro X water block for integration into a custom water-cooled PC, both sleek, innovative designs ensure cool operation.
Specifications
| Product | Force MP600 Pro 1TB | Force MP600 Pro 2TB | Force MP600 Pro Hydro X 2TB |
| --- | --- | --- | --- |
| Pricing | $224.99 | $434.99 | $459.99 |
| Capacity (User / Raw) | 1000GB / 1024GB | 2000GB / 2048GB | 2000GB / 2048GB |
| Form Factor | M.2 2280 | M.2 2280 | M.2 2280 |
| Interface / Protocol | PCIe 4.0 x4 / NVMe 1.4 | PCIe 4.0 x4 / NVMe 1.4 | PCIe 4.0 x4 / NVMe 1.4 |
| Controller | Phison PS5018-E18 | Phison PS5018-E18 | Phison PS5018-E18 |
| DRAM | DDR4 | DDR4 | DDR4 |
| Memory | Micron 96L TLC | Micron 96L TLC | Micron 96L TLC |
| Sequential Read | 7,000 MBps | 7,000 MBps | 7,000 MBps |
| Sequential Write | 5,500 MBps | 6,550 MBps | 6,550 MBps |
| Random Read | 360,000 IOPS | 660,000 IOPS | 660,000 IOPS |
| Random Write | 780,000 IOPS | 800,000 IOPS | 800,000 IOPS |
| Security | AES 256-bit encryption | AES 256-bit encryption | AES 256-bit encryption |
| Endurance (TBW) | 700 TB | 1,400 TB | 1,400 TB |
| Part Number | CSSD-F1000GBMP600PRO | CSSD-F2000GBMP600PRO | CSSD-F2000GBMP600PROHXE |
| Warranty | 5 Years | 5 Years | 5 Years |
The MP600 Pro is available in capacities of 1TB and 2TB for $225 and $435, respectively. The Hydro X Edition only comes in a 2TB capacity with a slightly higher price tag of $460. Corsair rates the MP600 Pro to deliver speeds of up to 7,000 / 6,550 MBps in sequential read/write transfers and up to 660,000 / 800,000 random read/write IOPS under heavy load.
Corsair didn’t improve the MP600 Pro’s endurance ratings, though. In fact, the Pro has notably lower endurance than the MP600. Corsair’s MP600 Pro comes backed by a five-year warranty and is rated to endure up to 700TB of written data per 1TB of drive capacity, while the original MP600 carries a much higher 1,800TB-per-1TB-of-capacity rating.
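Expressed as drive writes per day (DWPD) over the five-year warranty, the gap is easier to gauge; a quick sketch using the figures above:

```python
# Convert TBW ratings into drive writes per day (DWPD) over the 5-year warranty.
def dwpd(tbw: float, capacity_tb: float, warranty_years: int = 5) -> float:
    return tbw / (capacity_tb * warranty_years * 365)

print(f"MP600 Pro: {dwpd(700, 1):.2f} DWPD")   # ~0.38 DWPD
print(f"MP600:     {dwpd(1800, 1):.2f} DWPD")  # ~0.99 DWPD
```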
Corsair supports the MP600 Pro with its SSD Toolbox software, too, but the GUI is dated compared to better SSD utilities like Samsung’s Magician, WD’s SSD Dashboard, or Intel’s Memory and Storage Tool.
A Closer Look
Corsair’s MP600 Pro is an M.2 2280 SSD, and our sample comes with a large extruded aluminum heatsink rather than the XM2 water block. The heatsink measures 24 x 14.5 x 70 mm and comes with plenty of fins to dissipate the SSD’s heat output, even in situations with little to no airflow. However, the fins are large enough that they could potentially block a GPU.
Unlike Adata’s XPG Gammix S70, you can remove the MP600 Pro from the heatsink, which is always a plus. But bear in mind, doing so may ruin the thermal pad between the heatsink and SSD, so you might have to do some patchwork or replace it with a new strip entirely if you plan to reinstall the heatsink later.
Phison’s PS5018-E18, a fast eight-channel PCIe Gen4 NVMe SSD controller, resides under the hood. This controller offers among the fastest write speeds we’ve seen, thanks to its DRAM cache that assures responsive access to the file mapping table. The controller also interfaces with a single 8Gb package of SK Hynix DDR4.
Furthermore, the controller incorporates three Cortex-R5 cores clocked at 1GHz, plus two lower-clocked Dual CoXProcessor 2.0 cores that handle the host’s requests and the SSD’s internal NAND management algorithms. The controller also supports APST, ASPM, and the L1.2 standby power state for efficiency, as well as thermal throttling to ensure cool operation. However, like the Sabrent Rocket 4 Plus, the MP600 Pro comes with a low throttle temperature limit: thermal throttling triggers if the temperature exceeds 68 degrees Celsius, and the algorithm dynamically reduces performance by roughly 50 MBps for every 1 degree Celsius over that limit.
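As a rough illustration of that behavior, the sketch below models the described ramp: full speed up to 68 degrees Celsius, then roughly 50 MBps shaved off per degree above it. The baseline throughput and the absence of a hard floor are our simplifying assumptions, not Phison documentation.

```python
# Illustrative model of the described throttling: ~50 MBps lost per degree C above 68C.
# Baseline throughput and lack of a hard floor are simplifying assumptions.
THROTTLE_POINT_C = 68
PENALTY_MBPS_PER_C = 50

def throttled_throughput(temp_c: float, baseline_mbps: float = 7000) -> float:
    over = max(0.0, temp_c - THROTTLE_POINT_C)
    return max(0.0, baseline_mbps - PENALTY_MBPS_PER_C * over)

for t in (60, 68, 75, 90):
    print(f"{t}C -> ~{throttled_throughput(t):.0f} MBps")
```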
The MP600 Pro’s controller interfaces with Micron’s 96-Layer 3D TLC flash at speeds of up to 1,200 MTps. The 1TB model uses four NAND packages, each containing four 512Gb dies, while the 2TB models use eight NAND packages with four 512Gb dies apiece. This flash features a robust quad-plane architecture and many innovative design features, including CuA (CMOS under Array) and tile grouping for responsive random read access. However, it isn’t as cutting-edge as Micron’s 176-Layer flash, which should hit the market soon.
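A quick check of those die counts against the advertised capacities (this assumes the listed 512Gb, i.e. 64GB, dies and nothing else in each package):

```python
# Sanity check: NAND package/die counts vs. advertised capacity, assuming 512Gb dies.
DIE_GBIT = 512
DIE_GB = DIE_GBIT / 8   # 64 GB per die

for model, packages, dies_per_pkg in (("1TB", 4, 4), ("2TB", 8, 4)):
    total_gb = packages * dies_per_pkg * DIE_GB
    print(f"{model}: {packages} packages x {dies_per_pkg} dies = {total_gb:.0f} GB raw")
# 1TB: 16 dies -> 1024 GB raw; 2TB: 32 dies -> 2048 GB raw
```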
On Monday executives at SK Hynix gave a keynote speech to the IEEE International Reliability Physics Symposium (IRPS) where they shared their vision about the company’s mid-term and long-term technological goals. SK Hynix believes it can continue increasing capacity of its 3D NAND chips by increasing the number of layers to over 600. Furthermore, the company is confident that it can scale DRAM technologies below 10nm with the help of extreme ultraviolet (EUV) lithography. Ultimately, SK Hynix wants to converge memory and logic into one device to address emerging workloads.
“We are improving materials and design structures for technical evolution in each field of DRAM and NAND, and solving the reliability problems step by step,” said Seok-Hee Lee, CEO of SK Hynix. “If the platform is innovated successfully based on this, it is possible to achieve the DRAM process below 10nm and stack over 600 layers of NAND in the future.”
The Future of 3D NAND: 600-Layers and Counting
3D NAND has proven to be a very efficient architecture in terms of both performance and scalability, so SK Hynix will continue using it for years to come. Back in December 2020, SK Hynix introduced its 176-layer ‘4D’ 3D NAND memory with a 1.60 Gbps interface. The company has already started sampling 512Gb 176-layer chips with makers of SSD controllers, so expect drives based on the new type of 3D NAND memory sometime in 2022.
Just a few years ago the company believed that it could scale 3D NAND to ~500 layers, but now it is confident that it can go beyond 600 layers in the long term. By increasing the number of layers, SK Hynix (and other producers of 3D NAND) will have to keep making layers thinner and NAND cells smaller, and introduce new dielectric materials to maintain uniform electric charges, thereby preserving reliability. The company is already among the leaders in atomic layer deposition, so one of its next goals is to implement high-aspect-ratio contact (HARC) etching technology. Also, for 600+ layers it will probably have to string stack additional decks.
SK Hynix did not even hint at when the industry should expect 3D NAND devices with over 600 layers, or what capacities such an incredible number of layers will bring. With its 176-layer technology SK Hynix is looking at 1Tb products, so with 600 layers, per-device capacities will get even more impressive.
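As a purely illustrative back-of-envelope figure, naive linear scaling from the current 176-layer, 1Tb parts hints at where 600+ layers could land, ignoring everything else (cell size, bits per cell, array efficiency) that also moves the number:

```python
# Naive, purely illustrative extrapolation: scale die capacity linearly with layer count.
# Real capacities also depend on cell size, bits per cell, and array efficiency.
current_layers, current_gbit = 176, 1024   # 176-layer generation targeting 1Tb dies
future_layers = 600

naive_gbit = current_gbit * future_layers / current_layers
print(f"~{naive_gbit / 1024:.1f} Tb per die")   # ~3.4 Tb, illustrative only
```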
The Future of DRAM: Below 10 nm with EUV
Just like Samsung Semiconductor, and unlike Micron Technology, SK Hynix believes that adoption of EUV lithography is the most straightforward way to keep increasing the performance of DRAM while also boosting the capacity of memory chips and keeping their power consumption in check. With DDR5, the company will have to introduce memory devices with capacities of over 16Gb and data transfer rates of up to 6400 MT/s (initially) that will be stacked together to build high-capacity DRAM chips.
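For context on what those DDR5 numbers mean in bandwidth terms, here is the standard per-module arithmetic, assuming a conventional 64-bit-wide (two 32-bit subchannel) DIMM:

```python
# Peak bandwidth of a DDR5-6400 module: data rate (MT/s) x bus width (bytes).
DATA_RATE_MTS = 6400
BUS_WIDTH_BYTES = 8        # 64 data bits per DIMM (two 32-bit subchannels)

peak_mbps = DATA_RATE_MTS * BUS_WIDTH_BYTES
print(f"~{peak_mbps / 1000:.1f} GB/s per DIMM")   # ~51.2 GB/s
```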
Since future memory products will have to bring together high performance, high capacity, and limited power consumption, advanced manufacturing technologies will get even more important. To successfully implement EUV, SK Hynix is developing new materials and photoresists for stable EUV patterning and defect management. In addition, the company is looking at innovating the cell structure while maintaining its capacitance by using thinner dielectrics made of materials with high dielectric constant.
It is noteworthy that SK Hynix is now also looking at ways to reduce resistance “of the metal for interconnect”, which is an indicator that the sizes of DRAM transistors have gotten so small that their contacts are about to become a bottleneck. With EUV, transistors will shrink their sizes, gain performance, and reduce their power, so contact resistance will indeed become a bottleneck somewhere at 10nm or below. Producers of logic solved this issue in different ways: Intel decided to use cobalt instead of tungsten, whereas TSMC and Samsung Foundry switched to selective tungsten deposition process. SK Hynix did not elaborate about its way to fight contact resistance, but only said it was seeking “next-generation electrode and insulating materials and introducing new processes.” It remains to be seen what DRAM makers will use to lower contact resistance, but it is evident that memory makers have essentially the same issues as their logic peers.
Converging Processing and Memory
In addition to making DRAM faster and boosting its capacity, SK Hynix is looking to converge memory and processing. Nowadays, leading-edge processors for supercomputers use high-bandwidth memory (HBM) that is connected to them via an interposer; SK Hynix calls this concept PNM (Processing Near Memory). SK Hynix asserts that the next step is PIM (Processing In Memory), with the processor and the memory existing within a single package, whereas ultimately the company is looking at CIM (Computing In Memory), where the CPU and the memory are integrated into a single die.
To a large degree, SK Hynix’s CIM concept resembles Samsung’s PIM (Processing in Memory) concept introduced this past February and set to become an industry standard defined by JEDEC. Samsung’s HBM-PIM embeds 32 FP16-capable programmable computing units (PCUs) that run at 300 MHz into a 4Gb memory die. The PCUs can be controlled using conventional memory commands and execute some basic computations. Samsung claims that its HBM-PIM memory is already in trials with leading AI solution providers’ accelerators, and indeed the technology makes a lot of sense for AI and other workloads that do not require high precision but benefit from a large number of simple cores that can be made using DRAM fabrication processes.
At this point it is unclear whether SK Hynix’s CIM will be implemented in accordance with the upcoming JEDEC standard proposed by Samsung or whether SK Hynix will go with a proprietary technology. But at the least, the largest makers of DRAM in the world have a similar vision of converged memory and logic devices.
Converging logic and memory makes a lot of sense for niche applications. Meanwhile, there are more common applications that can benefit from tighter integration of memory, storage, and processing. To that end, SK Hynix is developing heterogeneous computing interconnect technology for tightly integrated packages containing processing IP, DRAM, NAND, micro-electro-mechanical systems (MEMS), radio frequency identification (RFID), and various sensors. Again, the company did not provide many details here.