The Corsair K70 RGB TKL is a powerful yet compact gaming keyboard. We didn't notice an immediate benefit from the 8,000 Hz polling rate, but with a sleek look plus premium media controls and keycaps, this keyboard's in a league of its own.
For
+ Space-saving, durable build
+ Premium keycaps
+ Media keys
+ Some software-free RGB control
Against
– Close keys can require getting used to
– Expensive
Let’s be real: Mechanical keyboards can get expensive. While the best budget mechanical keyboards can give you the switches you need, the best gaming keyboards often come with extra bells and whistles that up the price. At $140, the Corsair K70 RGB TKL is one example, but you get a lot for that price.
Corsair's been dubbing keyboards "K70" for a while. Just look at our Corsair K70 RGB Red review from 2016 or the most recent iteration, the low-profile Corsair K70 RGB MK.2. The keyboard we're reviewing here brings the tenkeyless (TKL) form factor to the lineup.
The K70 RGB TKL is a competitive board that earns its price with extra features, like programmable keys and per-key RGB controlled through manageable software. And as someone who games full-time, I see the quality of the keyboard's build as a great investment. This is a sturdy keyboard that should hold up over extended use. And since this is a TKL keyboard, you'll have all the space you need on your desk for your mouse, letting you focus exclusively on playing.
On top of that, Corsair is continuing its trend of upping the polling rate of its gaming keyboards, with the K70 RGB TKL offering an 8,000 Hz polling rate — 8 times the 1,000 Hz you usually see. The usefulness of that high spec, however, is debatable.
Corsair K70 RGB TKL Specs
Switches: Cherry MX Red (tested), Cherry MX Silent Red or Cherry MX Speed Silver
Lighting: Per-key RGB
Onboard Storage: 8MB
Media Keys: Yes
Interface: USB Type-A
Cable: 6 feet (1.8m) USB-C to USB-A, braided, detachable
Additional Ports: None
Keycaps: Doubleshot PBT plastic
Software: Corsair iCue
Dimensions (LxWxH): 14.2 x 6.5 x 1.9 inches
Weight: 2.1 pounds
Extra: 1x ABS plastic A, S, D, Q, E and R keycaps, 2x ABS plastic W and D keycaps, 1x keycap puller
Design
The Corsair K70 RGB TKL Champion Series is a tournament-ready keyboard with a colorful and durable design in a small form factor. As a TKL keyboard, it forgoes the numpad in favor of more desk space, which makes it great for people who don't have a lot of room on their desk or who travel a lot. At 14.2 x 6.5 x 1.9 inches, the K70 RGB TKL is similar to but slightly taller than other TKL keyboards, such as the Razer BlackWidow V3 Tenkeyless (14.3 x 6.1 x 1.6 inches) and the more petite Roccat Vulcan TKL Pro (14.2 x 5.3 x 1.3 inches). Another downside for travel is the K70 RGB TKL's weight: It's 2.1 pounds compared to 1.9 pounds for the Razer and 1.5 pounds for the Roccat.
But part of that slightly larger design comes thanks to the K70 RGB TKL's luxurious media keys. There are five dedicated hotkeys, plus an aluminum, textured volume roller, which is a decent accomplishment to fit on a TKL.
All those keys felt pretty solid, especially compared to the cheap plastic alternatives available on lower-priced keyboards.
This brings us to the overall durability of the keyboard. The K70 RGB TKL feels more rigid and sturdy than the ~$250 Logitech G915 Lightspeed full-sized wireless gaming keyboard I often use (which has an identical design to its TKL counterpart, the Logitech G915 TKL). The Logitech is conveniently lightweight (2.3 pounds) and thin (0.9 inches) but feels like it might break if dropped. Suddenly, the K70 RGB TKL's $140 price tag starts to make more sense. The K70 RGB TKL lives in a plastic chassis with a black matte finish and an aluminum frame.
With its media key layout and brushed aluminum finish, the K70 RGB TKL looks more interesting than a lot of other TKLs (looking at you, Razer BlackWidow V3 Tenkeyless). And it's mature and subdued enough to go well with any setup. But I'm not wowed by its overall look; out of the box, this appears to be a tool for competitive gamers, not a showy looker. You can add a little more flair, however, with the included silver W, A, S, D, Q, E, R and F keycaps. These keycaps are made of cheaper ABS plastic than the doubleshot PBT caps that come installed by default, but they add more color to the design and a slight texturing that I like a lot.
For even more customization, you'll have to rely on the K70 RGB TKL's per-key RGB effects. You'll need the iCue software to create and fine-tune your own RGB effects, but you can also toggle through 10 presets and control speed and direction using FN shortcuts. You can also create profiles in iCue with different RGB effects and store them in the keyboard's onboard memory. When you toggle through profiles with the dedicated profile switch button, the RGB will change accordingly. As somebody who loves having a variety of RGB settings on my keyboard, it's wonderful to be able to control these settings regardless of whether iCue is running.
Next to the profile switch button are an RGB brightness key and a Windows lock key. These and the media keys are also reprogrammable via iCue for ultimate customization.
Corsair didn’t skimp when it came to the keycaps. The use of doubleshot PBT plastic delivers a more premium feel than standard ABS plastic. And doubleshot means the legends will never fade. The keycaps feel strong at 1.5mm thick and have a matte coating that easily fought off grease and fingerprints during my testing. With many still working from home, you’d be hard-pressed to find someone who isn’t eating near their keyboard, so this feature is highly appealing.
The K70 RGB TKL uses a detachable, high-quality braided USB-C to USB-A cable. Some keyboards' USB cables can feel thin or cheap, but this one should survive a good amount of bending and wear. The cable is 6 feet long, which is standard among gaming keyboards but can still feel a little long in actual use, which is why I prefer one of the best wireless keyboards when possible.
Typing Experience on Corsair K70 RGB TKL
The Corsair K70 RGB TKL comes with either Cherry MX Speed Silver, Cherry MX Silent Red or Cherry MX Red switches. All three options actuate with 45g of force and are linear, a mechanical switch style that tends to be a favorite among gamers for its interruption-free travel. Our review unit came with Red switches, which are specced for 2.0mm pretravel and 4.0mm total travel. Those who want less travel (and, potentially, more speed) may prefer the Speed Silver switches (1.2mm / 3.4mm) or the quieter Silent Reds (1.9mm / 3.7mm).
Pressing keys on the K70 RGB TKL felt light and easy, and keystrokes seemed to register quickly. But there's very little space between the keys, which, combined with the light actuation force of Cherry MX Reds, made typos more common. As such, the K70 RGB TKL may require a slight adjustment period, but this wasn't a huge concern, as I was eventually able to adapt.
The doubleshot PBT keycaps were also a boon, both for typing and gaming. The quality plastic was more comfortable than the keycaps on most other keyboards I've tried. My typing accuracy increased slightly, and, as noted, I could type with less pressure, which made typing feel easier.
8,000 Hz Polling Rate
Corsair kicked off its polling rate race with the 4,000 Hz K100 RGB last year, and it's continuing that race with the 8,000 Hz K70 RGB TKL. The keyboard is launching alongside the Corsair Sabre RGB Pro gaming mouse, which also has an 8,000 Hz polling rate, showing a newfound dedication to Hz from the gaming brand.
Your keyboard (or other peripheral) polling rate tells you how many times per second the device sends data to your PC. Instead of doing so 1,000 times a second, like the vast majority of gaming keyboards, the K70 RGB TKL can do it 8,000 times per second. It achieves this through what Corsair calls Axon, “an embedded onboard system with Corsair’s purpose-engineered, real-time operating system” running on a system-on-chip (SoC) with multi-threading in order to “process multiple complex instructions in parallel.” Corsair claims Axon uses an advanced scheduling algorithm. There are some caveats though.
First, there are some requirements. You'll need a USB 3.0 port, and you'll have to download the iCue software and change the polling rate (from 1,000 Hz) in order to use the 8,000 Hz polling rate. Corsair also noted in its reviewer's guide that the keyboard "transmits keystrokes to the PC up to 8x faster than standard" but can only "detect keypresses up to 4x faster than conventional gaming keyboards." The vendor doesn't get too specific in terms of system requirements for 8,000 Hz. A rep told us, "Keyboards send a lot less data, so 8,000 Hz has only a small added CPU usage impact" but added, "the more up-to-date the system is – the smoother the experience."
But similarly to when we used the 4,000 Hz polling rate on the K100 RGB, I didn't notice a difference when moving from 1,000 Hz to 8,000 Hz on the K70 RGB TKL, despite using a system running an AMD Ryzen 9 5950X CPU. There's a bit of future-proofing here, and it wouldn't hurt for a very competitive pro player to have this feature handy. But as a low-level competitive player, I didn't notice my speed or accuracy increase in Fortnite or Destiny.
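For perspective on what those polling rates mean in raw numbers, the quick back-of-the-envelope calculation below (a simple Python sketch, not anything from Corsair's tooling) converts each rate into the interval between USB reports.

```python
# Convert keyboard polling rates into the time between USB reports.
# Illustrative arithmetic only; not related to Corsair's firmware or iCue.
for rate_hz in (1_000, 4_000, 8_000):
    interval_ms = 1_000 / rate_hz  # milliseconds between reports
    print(f"{rate_hz:>5} Hz -> one report every {interval_ms:.3f} ms")

# Output:
#  1000 Hz -> one report every 1.000 ms
#  4000 Hz -> one report every 0.250 ms
#  8000 Hz -> one report every 0.125 ms
```

In other words, moving from 1,000 Hz to 8,000 Hz shaves at most about 0.875 ms off a keystroke's worst-case report delay, which helps explain why the difference is so hard to perceive in practice.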
Gaming Experience on Corsair K70 RGB TKL
This is still a powerful gaming weapon though, as it feels incredibly responsive and fast on the battlefield (whether gaming at 1,000 Hz or 8,000 Hz). I used the K70 RGB TKL during intense Fortnite matches, as well as crucible matches in Destiny, and it didn’t disappoint. The quick and easy actuation of the go-to Cherry MX Red switches honestly made me feel like I was able to better focus on gameplay without looking at my keyboard as often as I normally do.
The best part was how lightly I had to touch the keys for them to register. This really cut down on hand fatigue. When I play, I usually overpress buttons and can even be guilty of mashing (gasp!). On Corsair’s TKL, I quickly realized I didn’t need to press the keys nearly as hard. That really reduced hand pain, which I sometimes experience after several hours of gaming.
And while the tight spacing of the keys was a bit of a hindrance for general typing, this became helpful when gaming, as it meant my fingers had less distance to travel to input my next move. Meanwhile, the TKL form factor gave me a little more room to breathe with my mouse, and I found it easier to focus on the game than when using a full-sized keyboard. I have always been a fan of a larger build but now I am thinking compact is the way moving forward.
Those doubleshot PBT keycaps also came in handy in action. The premium plastic doesn’t get slick, including from sweaty hands. These keys managed to stay dry during high-pressure gaming.
Features and Software on Corsair K70 RGB TKL
[Image gallery: 2 images]
To create new RGB effects or make onboard or software-based profiles, you'll need iCue, which I found user-friendly. The Corsair K70 RGB TKL features 8MB of onboard storage, allowing you to customize to your heart's content. You can store up to 50 onboard profiles, depending on their configuration, each with custom macros and RGB settings that can use up to 20 lighting layers.
A unique feature, the keyboard also includes a Tournament Switch on the top edge. This could help you focus on your game more by swapping the keyboard to static backlighting to reduce distractions and disabling programmed actions / macros. As someone who’s been known to press incorrect buttons or clumsily drop things in the heat of battle, I found this to be a great addition.
Bottom Line
If you want a powerhouse of a keyboard made for competitive gameplay, the Corsair K70 RGB TKL is an immediate must-have. This keyboard isn’t just pleasant to look at, it is an efficient tool that will take your gameplay to the next level, thanks to responsive keys, high-end PBT keycaps and a lot of customization options both with or without software.
At $140, this is an expensive wired gaming keyboard though. For comparison, the HyperX Alloy Origins Core, one of the best budget mechanical keyboards, is about half the price, and the Razer BlackWidow V3 Tenkeyless is currently $100. But the K70 RGB TKL gives you a lot for the price. Not only does it have a robust feature set, including media keys, it's also a tough keyboard. I will definitely be using it more for my tournament gaming needs. And there are pricier TKLs than the K70 RGB TKL, such as the $160 Roccat Vulcan TKL Pro with its optical-mechanical switches or the wireless Logitech G915 TKL, which starts at about $200 and is excellent but not for everyone, since it's low-profile.
Ultimately, the K70 RGB TKL can be an efficient weapon in your gaming toolkit, granting you the look and functionality you need for your most competitive setup.
Intel’s Bleep announcement starts at the 27:24 mark in its GDC 2021 presentation.
Last month, during its virtual GDC presentation, Intel announced Bleep, a new AI-powered tool that it hopes will cut down on the amount of toxicity gamers have to experience in voice chat. According to Intel, the app "uses AI to detect and redact audio based on user preferences." The filter works on incoming audio, acting as an additional user-controlled layer of moderation on top of what a platform or service already offers.
It’s a noble effort, but there’s something bleakly funny about Bleep’s interface, which lists in minute detail all of the different categories of abuse that people might encounter online, paired with sliders to control the quantity of mistreatment users want to hear. Categories range anywhere from “Aggression” to “LGBTQ+ Hate,” “Misogyny,” “Racism and Xenophobia,” and “White nationalism.” There’s even a toggle for the N-word. Bleep’s page notes that it’s yet to enter public beta, so all of this is subject to change.
With the majority of these categories, Bleep appears to give users a choice: would you like none, some, most, or all of this offensive language to be filtered out? Like choosing from a buffet of toxic internet slurry, Intel’s interface gives players the option of sprinkling in a light serving of aggression or name-calling into their online gaming.
Bleep has been in the works for a couple of years now — PCMag notes that Intel talked about this initiative way back at GDC 2019 — and it's working with AI moderation specialists Spirit AI on the software. But moderating online spaces using artificial intelligence is no easy feat, as platforms like Facebook and YouTube have shown. Although automated systems can identify straightforwardly offensive words, they often fail to consider the context and nuance of certain insults and threats. Online toxicity comes in many, constantly evolving forms that can be difficult for even the most advanced AI moderation systems to spot.
“While we recognize that solutions like Bleep don’t erase the problem, we believe it’s a step in the right direction, giving gamers a tool to control their experience,” Intel’s Roger Chandler said during its GDC demonstration. Intel says it hopes to release Bleep later this year, and adds that the technology relies on its hardware accelerated AI speech detection, suggesting that the software may rely on Intel hardware to run.
LG has announced a pledge to issue future Android OS updates to many of its smartphones despite confirming earlier this week that it’ll be leaving the phone business altogether. The Velvet, Wing, and G- and V- series phones from 2019 or later should be getting three Android updates from their year of release, and “certain 2020 models such as LG Stylo and K series” will get two updates.
For example, the Velvet came out last year with Android 10, and its Android 11 rollout is currently in progress. That means that it should also be getting Android 12 and 13 at some point, assuming Google continues its yearly cadence. LG variously describes this announcement as a “three-year pledge” and a “three-OS-update guarantee.”
The announcement is a little surprising because LG phones generally wouldn’t have been expected to get that many updates even while LG was actually in the phone business. The company announced a dedicated “Software Upgrade Center” in 2018, but little to nothing changed about its Android update situation. Most of its latest premium phones aren’t scheduled to receive Android 11 until the end of 2021.
Earlier this week, LG’s Korean website indicated that some selected models would also get an Android 12 update.
Apple is officially opening up its Find My tracking network to third-party companies (as it promised last year). Now, any hardware manufacturer can add software-side support for Apple's localized network to track missing items — so long as they play by Apple's Made for iPhone (MFi) accessory rules.
The first wave of items that can now be tracked starting today include VanMoof’s S3 and X3 e-bikes, Belkin’s SoundForm Freedom True Wireless Earbuds, and the Chipolo One Spot tracking tag — all of which can now rely on Apple’s crowdsourced Bluetooth network (which encompasses millions of iPhones, iPads, and Macs).
Users will be able to add those devices to the updated Find My app starting today and can track them through that app much in the same way that they’d track any missing Apple product.
Apple says that third-party devices looking to add support will have to apply through the company’s MFi program for authorized accessories and “adhere to all the privacy protections of the Find My network that Apple customers rely on.” Approved products will sport a new “Works with Apple Find My” badge to let customers know that they’re compatible with Apple’s network.
Additionally, Apple announced that it’d be offering a chipset specification for third-party hardware companies to integrate with the Ultra-Wideband systems in Apple’s more recent phones for even more precise tracking in the future. Apple has also long been rumored to be working on its own AirTags product, which would offer similar UWB-based tracking.
It probably won't make the list of best CPU coolers for end users anytime soon, but Microsoft's data center servers could be getting a massive thermal management upgrade in the near future. Right now, the software giant is testing a radical new cooling technology known as boiling liquid cooling, which promises to deliver higher performance, better reliability, and cheaper maintenance than the traditional air cooling systems used in data centers today.
Servers equipped with this new prototype cooling system look very similar to mineral oil PCs, if you've seen one. Dozens of server blades are packed tightly together in a fully submerged tank of boiling liquid. The liquid, of course, is non-conductive, so the servers can operate safely while submerged.
The liquid is a special, undisclosed recipe that boils at 122 degrees Fahrenheit (90 degrees lower than the boiling point of water). The low boiling point is needed to drive heat away from critical components. Once the liquid boils, the vapor rises to the surface, where cooled condensers turn it back into liquid that returns to the tank.
Effectively, this system is one gigantic vapor chamber. Both rely on phase changes (liquid boiling into vapor and condensing back) to move heat from system components to a cooling surface, whether that's a heatsink or, in this case, a condenser.
Death of Moore’s Law Is to Blame
Microsoft says it is developing such a radically new cooling technology because of the rising power and heat demands of computer components, which are only going to get worse.
The software giant claims that the death of Moore's Law is to blame: transistors on computer chips have become so small that they're approaching atomic scales, and soon it will be physically impossible to shrink them any further on new process nodes.
To counter this, chipmakers have had to increase power consumption quite significantly to keep increasing CPU performance — namely by adding more and more cores to a CPU.
Microsoft notes that CPUs have increased from 150 watts to more than 300 watts per chip, and GPUs have increased to more than 700 watts per chip on average. Bear in mind that Microsoft is talking about server components and not about consumer desktops where the best CPUs and best graphics cards tend to have less power consumption than that.
If server components get more and more power-hungry, Microsoft believes this new liquid solution will be necessary to keep costs down on server infrastructure.
Boiling Liquid Is Optimized For Low Maintenance
Microsoft took inspiration from the datacenter server clusters operating on seabeds when developing the new cooling technology.
A few years ago Microsoft unleashed Project Natick, which was a massive operation to bring datacenters underwater to inherit the benefits of using seawater as a cooling system.
To do this, the server chambers were filled with dry nitrogen instead of regular air and used cooling fans, a heat exchanger, and a specialized plumbing system that piped seawater through the cooling system.
What Microsoft learned was the sheer reliability of water/liquid cooling. The servers on the seafloor experienced one-eighth the failure rate of replica servers on land with traditional air cooling.
Analysis of the situation indicates that the lack of humidity and the absence of oxygen's corrosive effects were responsible for the improved reliability of these servers.
Microsoft hopes its boiling liquid technology will have the same effects. If so, we could see a revolution in the data center world, with servers that are smaller and much more reliable. Plus, this new cooling system could boost server performance as well.
Perhaps the boosting algorithms we see on Intel and AMD’s desktop platforms can be adopted into the server world so processors can automatically hit higher clocks when they detect more thermal headroom.
Making good on its promise, AMD has deployed new patches to the Linux kernel to mitigate the potential security risk with the Predictive Store Forwarding (PSF) feature. Linux publication Phoronix spotted five patches that allow users to disable Predictive Store Forwarding if security is a concern.
Predictive Store Forwarding is a feature baked into AMD's Zen 3 processors that boosts code execution performance by predicting the relationship between loads and stores. In its whitepaper, the chipmaker outlined both the benefits and the security implications of Predictive Store Forwarding. The potential vulnerability is similar to Spectre v4 (Speculative Store Bypass), which also affected Intel processors. We reached out to AMD about the feature, and the chipmaker responded with this statement:
“AMD recommends leaving the feature enabled. We do however outline methods to disable PSF if desired.”
Software that uses "sandboxing" is more susceptible to the exploit, which is why AMD gives users the power to turn off Predictive Store Forwarding. As Phoronix noted, Predictive Store Forwarding is enabled by default even on the patched Linux kernel. The publication shared two ways to disable it: through the existing Spectre v4 mitigation control, or by adding the nopsfd kernel boot parameter.
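For readers who want to check what their kernel currently reports, the sketch below simply lists the mitigation status files Linux exposes under sysfs. This is a generic illustration rather than AMD's or Phoronix's procedure, and whether PSF shows up as its own entry or under the Spectre v4 / speculative store bypass control depends on the kernel version and how the patches land.

```python
# List the Linux kernel's reported CPU vulnerability/mitigation status.
# Generic illustration: the exact entries shown depend on the kernel version.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```

Actually disabling the feature still comes down to the two options above: the Spectre v4 mitigation control or booting with nopsfd on the kernel command line.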
Predictive Store Forwarding's job is to improve performance, so you might wonder whether disabling it incurs a significant performance hit. Fortunately, it doesn't: Phoronix conducted a plethora of tests ahead of AMD's patches and discovered performance deltas of less than half a percent.
Intel's long-delayed 10nm+ third-gen Xeon Scalable Ice Lake processors mark an important step forward for the company as it attempts to fend off intense competition from AMD's 7nm EPYC Milan processors, which top out at 64 cores — a key advantage over Intel's existing 14nm Cascade Lake Refresh chips, which top out at 28 cores. The 40-core Xeon Platinum 8380 serves as the flagship of Intel's revamped lineup, which the company says delivers up to a 20% IPC uplift on the strength of the new Sunny Cove core architecture paired with the 10nm+ process.
Intel has already shipped over 200,000 units to its largest customers since the beginning of the year, but today marks the official public debut of its newest lineup of data center processors, so we get to share benchmarks. The Ice Lake chips drop into dual-socket Whitley server platforms, while the previously-announced Cooper Lake slots in for quad- and octo-socket servers. Intel has slashed Xeon pricing up to 60% to remain competitive with EPYC Rome, and with EPYC Milan now shipping, the company has reduced per-core pricing again with Ice Lake to remain competitive as it targets high-growth markets, like the cloud, enterprise, HPC, 5G, and the edge.
The new Xeon Scalable lineup comes with plenty of improvements, like increased support for up to eight memory channels that run at a peak of DDR4-3200 with two DIMMs per channel, a notable improvement over Cascade Lake’s support for six channels at DDR4-2933 and matching EPYC’s eight channels of memory. Ice Lake also supports 6TB of DRAM/Optane per socket (4TB of DRAM) and 4TB of Optane Persistent Memory DIMMs per socket (8 TB in dual-socket). Unlike Intel’s past practices, Ice Lake also supports the full memory and Optane capacity on all models with no additional upcharge.
Intel has also moved forward from 48 lanes of PCIe 3.0 connectivity to 64 lanes of PCIe 4.0 (128 lanes in dual-socket), improving both I/O bandwidth and increasing connectivity to match AMD’s 128 available lanes in a dual-socket server.
Intel says that these additives, coupled with a range of new SoC-level optimizations, a focus on improved power management, along with support for new instructions, yield an average of 46% more performance in a wide range of data center workloads. Intel also claims a 50% uplift to latency-sensitive applications, like HammerDB, Java, MySQL, and WordPress, and up to 57% more performance in heavily-threaded workloads, like NAMD, signaling that the company could return to a competitive footing in what has become one of AMD’s strongholds — heavily threaded workloads. We’ll put that to the test shortly. First, let’s take a closer look at the lineup.
Intel Third-Gen Xeon Scalable Ice Lake Pricing and Specifications
We have quite the list of chips below, but we've actually filtered out the downstream Intel parts, focusing instead on the high-end 'per-core scalable' models. All told, the Ice Lake family spans 42 SKUs, with many of the lower-TDP (and thus lower-performance) models falling into the 'scalable performance' category.
Intel also has specialized SKUs targeted at maximum SGX enclave capacity, cloud-optimized for VMs, liquid-cooled, networking/NFV, media, long-life and thermal-friendly, and single-socket optimized parts, all of which you can find in the slide a bit further below.
| Processor | Cores / Threads | Base / Boost – All Core (GHz) | L3 Cache (MB) | TDP (W) | 1K Unit Price / RCP |
|---|---|---|---|---|---|
| EPYC Milan 7763 | 64 / 128 | 2.45 / 3.5 | 256 | 280 | $7,890 |
| EPYC Rome 7742 | 64 / 128 | 2.25 / 3.4 | 256 | 225 | $6,950 |
| EPYC Milan 7663 | 56 / 112 | 2.0 / 3.5 | 256 | 240 | $6,366 |
| EPYC Milan 7643 | 48 / 96 | 2.3 / 3.6 | 256 | 225 | $4,995 |
| Xeon Platinum 8380 | 40 / 80 | 2.3 / 3.2 – 3.0 | 60 | 270 | $8,099 |
| Xeon Platinum 8368 | 38 / 76 | 2.4 / 3.4 – 3.2 | 57 | 270 | $6,302 |
| Xeon Platinum 8360Y | 36 / 72 | 2.4 / 3.5 – 3.1 | 54 | 250 | $4,702 |
| Xeon Platinum 8362 | 32 / 64 | 2.8 / 3.6 – 3.5 | 48 | 265 | $5,448 |
| EPYC Milan 7F53 | 32 / 64 | 2.95 / 4.0 | 256 | 280 | $4,860 |
| EPYC Milan 7453 | 28 / 56 | 2.75 / 3.45 | 64 | 225 | $1,570 |
| Xeon Gold 6348 | 28 / 56 | 2.6 / 3.5 – 3.4 | 42 | 235 | $3,072 |
| Xeon Platinum 8280 | 28 / 56 | 2.7 / 4.0 – 3.3 | 38.5 | 205 | $10,009 |
| Xeon Gold 6258R | 28 / 56 | 2.7 / 4.0 – 3.3 | 38.5 | 205 | $3,651 |
| EPYC Milan 74F3 | 24 / 48 | 3.2 / 4.0 | 256 | 240 | $2,900 |
| Xeon Gold 6342 | 24 / 48 | 2.8 / 3.5 – 3.3 | 36 | 230 | $2,529 |
| Xeon Gold 6248R | 24 / 48 | 3.0 / 4.0 | 35.75 | 205 | $2,700 |
| EPYC Milan 7443 | 24 / 48 | 2.85 / 4.0 | 128 | 200 | $2,010 |
| Xeon Gold 6354 | 18 / 36 | 3.0 / 3.6 – 3.6 | 39 | 205 | $2,445 |
| EPYC Milan 73F3 | 16 / 32 | 3.5 / 4.0 | 256 | 240 | $3,521 |
| Xeon Gold 6346 | 16 / 32 | 3.1 / 3.6 – 3.6 | 36 | 205 | $2,300 |
| Xeon Gold 6246R | 16 / 32 | 3.4 / 4.1 | 35.75 | 205 | $3,286 |
| EPYC Milan 7343 | 16 / 32 | 3.2 / 3.9 | 128 | 190 | $1,565 |
| Xeon Gold 5317 | 12 / 24 | 3.0 / 3.6 – 3.4 | 18 | 150 | $950 |
| Xeon Gold 6334 | 8 / 16 | 3.6 / 3.7 – 3.6 | 18 | 165 | $2,214 |
| EPYC Milan 72F3 | 8 / 16 | 3.7 / 4.1 | 256 | 180 | $2,468 |
| Xeon Gold 6250 | 8 / 16 | 3.9 / 4.5 | 35.75 | 185 | $3,400 |
At 40 cores, the Xeon Platinum 8380 reaches new heights over its predecessors that topped out at 28 cores, striking higher in AMD's Milan stack. The 8380 comes in at $202 per core, which is well above the $130-per-core price tag of the previous-gen flagship, the 28-core Xeon 6258R. However, it's far less expensive than the $357-per-core pricing of the Xeon 8280, which had a $10,009 price tag before AMD's EPYC upset Intel's pricing model and forced drastic price reductions.
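Those per-core figures fall straight out of the 1K-unit prices in the table above; here is the arithmetic as a quick Python sketch.

```python
# Per-core pricing derived from the 1K-unit list prices in the table above.
chips = {
    "Xeon Platinum 8380 (Ice Lake)": (8_099, 40),
    "Xeon Gold 6258R (Cascade Lake)": (3_651, 28),
    "Xeon Platinum 8280 (Cascade Lake)": (10_009, 28),
}

for name, (price_usd, cores) in chips.items():
    print(f"{name}: ${price_usd / cores:.0f} per core")

# Output:
# Xeon Platinum 8380 (Ice Lake): $202 per core
# Xeon Gold 6258R (Cascade Lake): $130 per core
# Xeon Platinum 8280 (Cascade Lake): $357 per core
```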
With peak clock speeds of 3.2 GHz, the 8380 has a much lower peak clock rate than the previous-gen 28-core 6258R’s 4.0 GHz. Even dipping down to the new 28-core Ice Lake 6348 only finds peak clock speeds of 3.5 GHz, which still trails the Cascade Lake-era models. Intel obviously hopes to offset those reduced clock speeds with other refinements, like increased IPC and better power and thermal management.
On that note, Ice Lake tops out at 3.7 GHz on a single core, and you’ll have to step down to the eight-core model to access these clock rates. In contrast, Intel’s previous-gen eight-core 6250 had the highest clock rate, 4.5 GHz, of the Cascade Lake stack.
Surprisingly, AMD’s EPYC Milan models actually have higher peak frequencies than the Ice Lake chips at any given core count, but remember, AMD’s frequencies are only guaranteed on one physical core. In contrast, Intel specs its chips to deliver peak clock rates on any core. Both approaches have their merits, but AMD’s more refined boost tech paired with the 7nm TSMC process could pay dividends for lightly-threaded work. Conversely, Intel does have solid all-core clock rates that peak at 3.6 GHz, whereas AMD has more of a sliding scale that varies based on the workload, making it hard to suss out the winners by just examining the spec sheet.
Ice Lake’s TDPs stretch from 85W up to 270W. Surprisingly, despite the lowered base and boost clocks, Ice Lake’s TDPs have increased gen-on-gen for the 18-, 24- and 28-core models. Intel is obviously pushing higher on the TDP envelope to extract the most performance out of the socket possible, but it does have lower-power chip options available (listed in the graphic below).
AMD has a notable hole in its Milan stack at both the 12- and 18-core mark, a gap that Intel has filled with its Gold 5317 and 6354, respectively. Milan still holds the top of the hierarchy with 48-, 56- and 64-core models.
[Image gallery: 12 images]
The Ice Lake Xeon chips drop into Whitley server platforms with Socket LGA4189-4/5. The FC-LGA14 package measures 77.5mm x 56.5mm and has an LGA interface with 4189 pins. The die itself is predicted to measure ~600mm2, though Intel no longer shares details about die sizes or transistor counts. In dual-socket servers, the chips communicate with each other via three UPI links that operate at 11.2 GT/s, an increase from 10.4 GT/s with Cascade Lake. The processor interfaces with the C620A chipset via four DMI 3.0 links, meaning it communicates at roughly PCIe 3.0 speeds.
The C620A chipset also doesn’t support PCIe 4.0; instead, it supports up to 20 lanes of PCIe 3.0, ten USB 3.0, and fourteen USB 2.0 ports, along with 14 ports of SATA 6 Gbps connectivity. Naturally, that’s offset by the 64 PCIe 4.0 lanes that come directly from the processor. As before, Intel offers versions of the chipset with its QuickAssist Technology (QAT), which boosts performance in cryptography and compression/decompression workloads.
[Image gallery: 12 images]
Intel’s focus on its platform adjacencies business is a key part of its messaging around the Ice Lake launch — the company wants to drive home its message that coupling its processors with its own differentiated platform additives can expose additional benefits for Whitley server platforms.
The company introduced new PCIe 4.0 solutions, including the new 200 GbE Ethernet 800 Series adaptors that sport a PCIe 4.0 x16 connection and support RDMA iWARP and RoCEv2, and the Intel Optane SSD P5800X, a PCIe 4.0 SSD that uses ultra-fast 3D XPoint media to deliver stunning performance results compared to typical NAND-based storage solutions.
Intel also touts its PCIe 4.0 SSD D5-P5316, which uses the company’s 144-Layer QLC NAND for read-intensive workloads. These SSDs offer up to 7GBps of throughput and come in capacities stretching up to 15.36 TB in the U.2 form factor, and 30.72 TB in the E1.L ‘Ruler’ form factor.
Intel’s Optane Persistent Memory 200-series offers memory-addressable persistent memory in a DIMM form factor. This tech can radically boost memory capacity up to 4TB per socket in exchange for higher latencies that can be offset through software optimizations, thus yielding more performance in workloads that are sensitive to memory capacity.
The “Barlow Pass” Optane Persistent Memory 200 series DIMMs promise 30% more memory bandwidth than the previous-gen Apache Pass models. Capacity remains at a maximum of 512GB per DIMM with 128GB and 256GB available, and memory speeds remain at a maximum of DDR4-2666.
Intel has also expanded its portfolio of Market Ready and Select Solutions offerings, which are pre-configured servers for various workloads that are available in over 500 designs from Intel’s partners. These simple-to-deploy servers are designed for edge, network, and enterprise environments, but Intel has also seen uptake with cloud service providers like AWS, which uses these solutions for its ParallelCluster HPC service.
[Image gallery: 10 images]
Like the benchmarks you’ll see in this review, the majority of performance measurements focus on raw throughput. However, in real-world environments, a combination of throughput and responsiveness is key to deliver on latency-sensitive SLAs, particularly in multi-tenant cloud environments. Factors such as loaded latency (i.e., the amount of performance delivered to any number of applications when all cores have varying load levels) are key to ensuring performance consistency across multiple users. Ensuring consistency is especially challenging with diverse workloads running on separate cores in multi-tenant environments.
Intel says it focused on performance consistency in these types of environments through a host of compute, I/O, and memory optimizations. The cores, naturally, benefit from increased IPC, new ISA instructions, and scaling up to higher core counts via the density advantages of 10nm, but Intel also beefed up its I/O subsystem to 64 lanes of PCIe 4.0, which improves both connectivity (up from 48 lanes) and throughput (up from PCIe 3.0).
Intel says it designed the caches, memory, and I/O, not to mention power levels, to deliver consistent performance during high utilization. As seen in slide 30, the company claims these alterations result in improved application performance and latency consistency by reducing long tail latencies to improve worst-case performance metrics, particularly for memory-bound and multi-tenant workloads.
[Image gallery: 12 images]
Ice Lake brings a big realignment of the company's die that provides cache, memory, and throughput advances. The coherent mesh interconnect returns with a similar arrangement of horizontal and vertical rings to the Cascade Lake-SP lineup, but with a realignment of the various elements, like cores, UPI connections, and the eight DDR4 memory channels that are now split into four dual-channel controllers. Here we can see that Intel shuffled the cores around on the 28-core die and now places two execution cores at the bottom of the die, clustered with I/O controllers, since some I/O now also sits at the bottom of the die.
Intel redesigned the chip to support two new sideband fabrics, one controlling power management and the other used for general-purpose management traffic. These provide telemetry data and control to the various IP blocks, like execution cores, memory controllers, and PCIe/UPI controllers.
The die includes a separate peer-to-peer (P2P) fabric to improve bandwidth between cores, and the I/O subsystem was also virtualized, which Intel says offers up to three times the fabric bandwidth compared to Cascade Lake. Intel also split one of the UPI blocks into two, creating a total of three UPI links, all with fine-grained power control of the UPI links. Now, courtesy of dedicated PLLs, all three UPIs can modulate clock frequencies independently based on load.
Densely packed AVX instructions augment performance in properly-tuned workloads at the expense of higher power consumption and thermal load. Intel's Cascade Lake CPUs drop to lower frequencies (a reduction of roughly 600 to 900 MHz) during AVX-, AVX2-, and AVX-512-optimized workloads, which has hindered broader adoption of AVX code.
To reduce the impact, Intel has recharacterized its AVX power limits, thus yielding (unspecified) higher frequencies for AVX-512 and AVX-256 operations. This is done in an adaptive manner based on three different power levels for varying instruction types. This nearly eliminates the frequency delta between AVX and SSE for 256-heavy and 512-light operations, while 512-heavy operations have also seen significant uplift. All Ice Lake SKUs come with dual 512b FMAs, so this optimization will pay off across the entire stack.
Intel also added support for a host of new instructions to boost cryptography performance, like VPMADD52, GFNI, SHA-NI, Vector AES, and Vector Carry-Less multiply instructions, and a few new instructions to boost compression/decompression performance. All rely heavily upon AVX acceleration. The chips also support Intel’s Total Memory Encryption (TME) that offers DRAM encryption through AES-XTS 128-bit hardware-generated keys.
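If you want to see whether a given Linux machine exposes these extensions, they show up as CPU feature flags; the sketch below checks /proc/cpuinfo for them. The mapping from the instruction names above to flag strings (for example, VPMADD52 appearing as avx512ifma) is our reading of the common Linux flag names, not something taken from Intel's materials.

```python
# Check /proc/cpuinfo (Linux) for the ISA extensions discussed above.
# The flag-name mapping here is our assumption based on common Linux naming.
FLAGS_OF_INTEREST = {
    "avx512ifma": "VPMADD52 (52-bit integer fused multiply-add)",
    "gfni": "Galois Field New Instructions",
    "sha_ni": "SHA extensions",
    "vaes": "Vector AES",
    "vpclmulqdq": "Vector carry-less multiply",
}

with open("/proc/cpuinfo") as f:
    cpu_flags = set()
    for line in f:
        if line.startswith("flags"):
            cpu_flags = set(line.split(":", 1)[1].split())
            break

for flag, description in FLAGS_OF_INTEREST.items():
    status = "present" if flag in cpu_flags else "absent"
    print(f"{flag:<12} {description}: {status}")
```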
Intel also made plenty of impressive steps forward on the microarchitecture, with improvements to every level of the pipeline allowing Ice Lake's 10nm Sunny Cove cores to deliver far higher IPC than 14nm Cascade Lake's Skylake-derivative architecture. Key improvements to the front end include larger reorder, load, and store buffers, along with larger reservation stations. Intel increased the L1 data cache from 32 KiB, the capacity it has used in its chips for a decade, to 48 KiB, and moved from 8-way to 12-way associativity. The L2 cache moves from 4-way to 8-way associativity and is also larger, but the capacity depends on the specific type of product — for Ice Lake server chips, it weighs in at 1.25 MB per core.
Intel expanded the micro-op cache (UOP) from 1.5K to 2.25K micro-ops, the second-level translation lookaside buffer (TLB) from 1536 entries to 2048, and moved from a four-wide allocation to five-wide to allow the in-order portion of the pipeline (front end) to feed the out-of-order (back end) portion faster. Additionally, Intel expanded the Out of Order (OoO) Window from 224 to 352. Intel also increased the number of execution units to handle ten operations per cycle (up from eight with Skylake) and focused on improving branch prediction accuracy and reducing latency under load conditions.
The store unit can now process two store data operations for every cycle (up from one), and the address generation units (AGU) also handle two loads and two stores each cycle. These improvements are necessary to match the increased bandwidth from the larger L1 data cache, which does two reads and two writes every cycle. Intel also tweaked the design of the sub-blocks in the execution units to enable data shuffles within the registers.
Intel also added support for its Software Guard Extensions (SGX) feature that debuted with the Xeon E lineup, and increased capacity to 1TB (maximum capacity varies by model). SGX creates secure enclaves in an encrypted portion of the memory that is exclusive to the code running in the enclave – no other process can access this area of memory.
Test Setup
We have a glaring hole in our test pool: Unfortunately, we do not have AMD’s recently-launched EPYC Milan processors available for this round of benchmarking, though we are working on securing samples and will add competitive benchmarks when available.
We do have test results for the AMD’s frequency-optimized Rome 7Fx2 processors, which represent AMD’s performance with its previous-gen chips. As such, we should view this round of tests largely through the prism of Intel’s gen-on-gen Xeon performance improvement, and not as a measure of the current state of play in the server chip market.
We use the Xeon Platinum 8280 as a stand-in for the less expensive Xeon Gold 6258R. These two chips are identical and provide the same level of performance, with the difference boiling down to the more expensive 8280 supporting quad-socket servers, while the Xeon Gold 6258R tops out at dual-socket support.
[Image gallery: 7 images]
Intel provided us with a 2U Server System S2W3SIL4Q Software Development Platform with the Coyote Pass server board for our testing. This system is designed primarily for validation purposes, so it doesn’t have too many noteworthy features. The system is heavily optimized for airflow, with the eight 2.5″ storage bays flanked by large empty bays that allow for plenty of air intake.
The system comes armed with dual redundant 2100W power supplies, a 7.68TB Intel SSD P5510, an 800GB Optane SSD P5800X, and an E810-CQDA2 200GbE NIC. We used the Intel SSD P5510 for our benchmarks and cranked up the fans for maximum performance in our benchmarks.
We tested with the pre-installed 16x 32GB DDR4-3200 DIMMs, but Intel also provided sixteen 128GB Optane Persistent Memory DIMMs for further testing. Due to time constraints, we haven’t yet had time to test the Optane DIMMs, but stay tuned for a few demo workloads in a future article. As we’re not entirely done with our testing, we don’t want to risk prying the 8380 out of the socket yet for pictures — the large sockets from both vendors are becoming more finicky after multiple chip reinstalls.
| Server | Memory | Tested Processors |
|---|---|---|
| Intel S2W3SIL4Q | 16x 32GB SK hynix ECC DDR4-3200 | Intel Xeon Platinum 8380 |
| Supermicro AS-1023US-TR4 | 16x 32GB Samsung ECC DDR4-3200 | EPYC 7742, 7F72, 7F52 |
| Dell/EMC PowerEdge R460 | 12x 32GB SK hynix DDR4-2933 | Intel Xeon 8280, 6258R, 5220R, 6226R |
To assess performance with a range of different potential configurations, we used a Supermicro AS-1023US-TR4 server with three different EPYC Rome configurations. We outfitted this server with 16x 32GB Samsung ECC DDR4-3200 memory modules, ensuring the chips had all eight memory channels populated.
We used a Dell/EMC PowerEdge R460 server to test the Xeon processors in our test group. We equipped this server with 12x 32GB Sk hynix DDR4-2933 modules, again ensuring that each Xeon chip’s six memory channels were populated.
We used the Phoronix Test Suite for benchmarking. This automated test suite simplifies running complex benchmarks in the Linux environment. The test suite is maintained by Phoronix, and it installs all needed dependencies and the test library includes 450 benchmarks and 100 test suites (and counting). Phoronix also maintains openbenchmarking.org, which is an online repository for uploading test results into a centralized database.
We used Ubuntu 20.04 LTS to maintain compatibility with our existing test results and leveraged the default Phoronix test configurations with the GCC compiler for all tests below. We also tested all platforms with all available security mitigations enabled.
Naturally, newer Linux kernels, software, and targeted optimizations can yield improvements for any of the tested processors, so take these results as generally indicative of performance in compute-intensive workloads, but not as representative of highly-tuned deployments.
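As a reference for anyone who wants to reproduce this kind of run, kicking off a Phoronix Test Suite benchmark is essentially a one-line command; the sketch below wraps it in Python. The test profile name is an example from the public OpenBenchmarking catalog, not necessarily the exact configuration we used.

```python
# Run a Phoronix Test Suite benchmark non-interactively from Python.
# Assumes phoronix-test-suite is installed and `phoronix-test-suite batch-setup`
# has been run once to configure batch mode.
import subprocess

# Example profile from the public catalog; swap in whichever test you need.
TEST_PROFILE = "pts/build-linux-kernel"

subprocess.run(["phoronix-test-suite", "batch-benchmark", TEST_PROFILE], check=True)
```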
Linux Kernel, GCC and LLVM Compilation Benchmarks
[Image gallery: 2 images]
AMD's EPYC Rome processors took the lead over the Cascade Lake Xeon chips at any given core count in these benchmarks, but here we can see that the 40-core Ice Lake Xeon 8380 has tremendous potential for these types of workloads. The dual 8380 processors complete the Linux compile benchmark, which builds the Linux kernel at default settings, in 20 seconds, edging out the 64-core EPYC Rome 7742 by one second. Naturally, we expect AMD's Milan flagship, the 7763, to take the lead in this benchmark. Still, the implication is clear — Ice Lake-SP has significantly improved performance, thus reducing the delta between Xeon and competing chips.
We can also see a marked improvement in the LLVM compile, with the 8380 reducing the time to completion by ~20% over the prior-gen 8280.
Molecular Dynamics and Parallel Compute Benchmarks
[Image gallery: 6 images]
NAMD is a parallel molecular dynamics code designed to scale well with additional compute resources; it scales up to 500,000 cores and is one of the premier benchmarks used to quantify performance with simulation code. The Xeon 8380s notch a 32% improvement in this benchmark, slightly beating the Rome chips.
Stockfish is a chess engine designed for the utmost in scalability across increased core counts — it can scale up to 512 threads. Here we can see that this massively parallel code scales well with EPYC’s leading core counts. The EPYC Rome 7742 retains its leading position at the top of the chart, but the 8380 offers more than twice the performance of the previous-gen Cascade Lake flagship.
We see similarly impressive performance uplifts in other molecular dynamics workloads, like the Gromacs water benchmark that simulates Newtonian equations of motion with hundreds of millions of particles. Here Intel's dual 8380s take the lead over the EPYC Rome 7742 while pushing out nearly twice the performance of the 28-core 8280.
We see a similarly impressive generational improvement in the LAMMPS molecular dynamics workload, too. Again, AMD's Milan will likely be faster than the 7742 in this workload, so it isn't a given that the 8380 has taken the definitive lead over AMD's current-gen chips, though it has tremendously improved Intel's competitive positioning.
The NAS Parallel Benchmarks (NPB) suite characterizes Computational Fluid Dynamics (CFD) applications, and NASA designed it to measure performance from smaller CFD applications up to "embarrassingly parallel" operations. The BT.C test measures Block Tri-Diagonal solver performance, while the LU.C test measures performance with a lower-upper Gauss-Seidel solver. The EPYC Rome 7742 still dominates in this workload, showing that Ice Lake's broad spate of generational improvements still doesn't allow Intel to take the lead in all workloads.
Rendering Benchmarks
[Image gallery: 4 images]
Turning to more standard fare, provided you can keep the cores fed with data, most modern rendering applications also take full advantage of the compute resources. Given the well-known strengths of EPYC’s core-heavy approach, it isn’t surprising to see the 64-core EPYC 7742 processors retain the lead in the C-Ray benchmark, and that applies to most of the Blender benchmarks, too.
Encoding Benchmarks
[Image gallery: 3 images]
Encoders tend to present a different type of challenge: As we can see with the VP9 libvpx benchmark, they often don’t scale well with increased core counts. Instead, they often benefit from per-core performance and other factors, like cache capacity. AMD’s frequency-optimized 7F52 retains its leading position in this benchmark, but Ice Lake again reduces the performance delta.
Newer software encoders, like the Intel-Netflix designed SVT-AV1, are designed to leverage multi-threading more fully to extract faster performance for live encoding/transcoding video applications. EPYC Rome’s increased core counts paired with its strong per-core performance beat Cascade Lake in this benchmark handily, but the step up to forty 10nm+ cores propels Ice Lake to the top of the charts.
Compression, Security and Python Benchmarks
[Image gallery: 5 images]
The Pybench and Numpy benchmarks are used as a general litmus test of Python performance, and as we can see, these tests typically don’t scale linearly with increased core counts, instead prizing per-core performance. Despite its somewhat surprisingly low clock rates, the 8380 takes the win in the Pybench benchmark and improves Xeon’s standing in Numpy as it takes a close second to the 7F52.
Compression workloads also come in many flavors. The 7-Zip (p7zip) benchmark exposes the heights of theoretical compression performance because it runs directly from main memory, allowing both memory throughput and core counts to heavily impact performance. As we can see, this benefits the core-heavy chips, which easily outpace the chips with lower core counts. The Xeon 8380 takes the lead in this test, but other independent benchmarks show that AMD's EPYC Milan would lead this chart.
In contrast, the gzip benchmark, which compresses two copies of the Linux 4.13 kernel source tree, responds well to speedy clock rates, giving the 16-core 7F52 the lead. Here we see that 8380 is slightly slower than the previous-gen 8280, which is likely at least partially attributable to the 8380’s much lower clock rate.
The open-source OpenSSL toolkit uses SSL and TLS protocols to measure RSA 4096-bit performance. As we can see, this test favors the EPYC processors due to its parallelized nature, but the 8380 has again made big strides on the strength of its higher core count. Offloading this type of workload to dedicated accelerators is becoming more common, and Intel also offers its QAT acceleration built into chipsets for environments with heavy requirements.
Conclusion
Admittedly, due to our lack of EPYC Milan samples, our testing today of the Xeon Platinum 8380 is more of a demonstration of Intel’s gen-on-gen performance improvements rather than a holistic view of the current competitive landscape. We’re working to secure a dual-socket Milan server and will update when one lands in our lab.
Overall, Intel's third-gen Xeon Scalable is a solid step forward for the Xeon franchise. AMD has steadily chewed away data center market share from Intel on the strength of its EPYC processors, which have traditionally beaten Intel's flagships by massive margins in heavily-threaded workloads. As our testing, and testing from other outlets, shows, Ice Lake drastically reduces the massive performance deltas between the Xeon and EPYC families, particularly in heavily threaded workloads, placing Intel on a more competitive footing as it faces an unprecedented challenge from AMD.
AMD will still hold the absolute performance crown in some workloads with Milan, but despite EPYC Rome’s commanding lead in the past, progress hasn’t been as swift as some projected. Much of that boils down to the staunchly risk-averse customers in the enterprise and data center; these customers prize a mix of factors beyond the standard measuring stick of performance and price-to-performance ratios, instead focusing on areas like compatibility, security, supply predictability, reliability, serviceability, engineering support, and deeply-integrated OEM-validated platforms.
AMD has improved drastically in these areas and now has a full roster of systems available from OEMs, along with broadening uptake with CSPs and hyperscalers. However, Intel benefits from its incumbency and all the advantages that entails, like wide software optimization capabilities and platform adjacencies like networking, FPGAs, and Optane memory.
Although Ice Lake doesn’t lead in all metrics, it does improve the company’s positioning as it moves forward toward the launch of its Sapphire Rapids processors that are slated to arrive later this year to challenge AMD’s core-heavy models. Intel still holds the advantage in several criteria that appeal to the broader enterprise market, like pre-configured Select Solutions and engineering support. That, coupled with drastic price reductions, has allowed Intel to reduce the impact of a fiercely-competitive adversary. We can expect the company to redouble those efforts as Ice Lake rolls out to the more general server market.
Wisk Aero, a joint venture between Boeing and Kitty Hawk, is suing rival air taxi firm Archer Aviation for allegedly stealing its trade secrets and infringing on its patents. Wisk is seeking unspecified monetary damages and an injunction against Archer to prevent it from using the allegedly stolen technology. In response, Archer says it has “no reason to believe” that it possesses any of Wisk’s intellectual property.
In a complaint filed in the US District Court of Northern California, Wisk said it is suing Archer “to stop a brazen theft of its intellectual property and confidential information, and protect the substantial investment of resources and years of hard work and effort of its employees and their vision of the future in urban air transportation.”
Santa Clara-based Archer came out of stealth in the spring of 2020 after having poached key talent from Kitty Hawk, the flying taxi company bankrolled by Google co-founder Larry Page and run by Sebastian Thrun, the Stanford AI and robotics whiz who launched Google’s self-driving car unit. The company also hired engineers away from Airbus’ Vahana project. According to Avionics International, Archer was able to lure these engineers by offering higher salaries.
Archer was founded by Adam Goldstein and Brett Adcock, co-founders of Vettery, a marketing software-as-a-service company, which the two sold to Switzerland-based staffing firm Adecco Group in 2018 for $100 million. But Wisk argues that Archer didn’t just steal talent — it also stole trade secrets. Wisk accuses Archer of misappropriating “thousands of highly confidential files containing very valuable trade secrets, as well as the use of significant innovations Wisk has patented.”
Archer’s emergence “surprised the industry,” Wisk claims, based on its shortened time frame for going to market with its electric aircraft with just a fraction of the staff of other, more established urban air mobility firms. According to Wisk:
Archer’s stated timeline for releasing an aircraft was a fraction of the time taken by its serious competitors, using a fraction of the number of employees of those competitors. The development of an entirely new kind of passenger aircraft requires years of engineering and significant expertise to get right, as demonstrated by Wisk and the other leading players in this space. For example, after 10 years of hard engineering and testing, Wisk is currently developing its sixth-generation aircraft, which it plans to certify with the U.S. Federal Aviation Administration (FAA). We believe it is virtually impossible for Archer to have produced an originally-designed aircraft in this timeframe that has gone through the necessary testing and is ready for certification with the FAA.
Most surprising to Wisk was the design of Archer’s prototype aircraft — mainly because it closely resembled Wisk’s own prototype. Both aircraft feature six front rotors, each with five blades, that can tilt either horizontally or vertically, as well as six rear rotors that each consist of two blades and remain fixed in a vertical position. Archer’s aircraft also includes an “unconventional” V-shaped tail, similar to Wisk’s patented design.
“The striking similarity in these designs could not have been a coincidence,” Wisk says. The month that Wisk filed its patent application was the same month that Archer hired away 10 of Wisk’s engineers, the company says.
With its suspicions raised, Wisk said it hired a forensic investigator, who returned with some “troubling” information:
We discovered that one of those engineers downloaded thousands of Wisk files near midnight, shortly before he announced his resignation and immediately departed to Archer. Those files contain our valuable trade secrets and confidential information about Wisk’s aircraft development spanning the history of the company, accumulated over countless hours of incremental progress by scores of engineers. Another engineer downloaded numerous files containing test data just before departing for Archer. Yet another wiped any trace of his computer activities shortly before leaving for Archer.
Wisk also points out that among its competitors, no two prototypes look the same, which makes Archer’s alleged theft of its trade secrets stand out even more.
Archer has recently made news by raising $1.1 billion by going public through a reverse merger with a special purpose acquisition company, or SPAC. The merger, which is valued at $3.8 billion, is also backed by Stellantis, the parent company of Fiat Chrysler and Peugeot, and United Airlines. United has placed a $1 billion order for 200 Archer electric vertical takeoff and landing (eVTOL) aircraft, with an option to purchase 100 more for $500 million.
In an interview with The Verge last month, the startup’s co-founders gave a lot of credit to Page’s Kitty Hawk and Wisk for helping launch the eVTOL industry, while also acknowledging having poached many of the key players from Wisk to help start Archer.
“We started the business three or four years ago. And our team here is a team that basically kind of started the space,” Adcock told The Verge. “So Larry Page basically invented the space in a large way. In 2010, he started a company that’s now called Wisk. It used to be called Kitty Hawk, and then Zero, but it’s called Wisk now. That was like the big, big group in the space.”
Adcock continued to heap praise on Page, saying, “You know, Larry’s spent a considerable amount of money and time over 10 years basically maturing the technologies, like motors, batteries, flight control, software, aircraft design, these type of things.”
But that didn’t prevent him from hiring away many of the key players at Wisk and Airbus’ Vahana, including Tom Muniz, who ran engineering at Wisk and is now chief operating officer at Archer; and Geoffrey Bower, who was chief engineer at Vahana and now holds that same title at Archer. “We’ve basically been bringing over kind of the best folks in the world here to kind of tackle this problem last several years,” he added.
A spokesperson for Archer called the lawsuit “regrettable” and denied the allegations that the company had stolen Wisk’s intellectual property.
“It’s regrettable that Wisk would engage in litigation in an attempt to deflect from the business issues that have caused several of its employees to depart,” the spokesperson said. “The plaintiff raised these matters over a year ago, and after looking into them thoroughly, we have no reason to believe any proprietary Wisk technology ever made its way to Archer. We intend to defend ourselves vigorously.”
Air taxis, sometimes misidentified as “flying cars,” are essentially helicopters without the noisy, polluting gas motors. A number of startups have emerged in recent years with prototype aircraft that are electric-powered, able to carry a handful of passengers, and intended for short flights within a city or region. Analysts predict that the flying taxi market could grow to $150 billion in revenue by 2035.
Update April 6th 1:08PM ET: This story has been updated to include Archer’s response.
Tech companies don’t just want to identify you using facial recognition — they also want to read your emotions with the help of AI. For many scientists, though, claims about computers’ ability to understand emotion are fundamentally flawed, and a little in-browser web game built by researchers from the University of Cambridge aims to show why.
Head over to emojify.info, and you can see how your emotions are “read” by your computer via your webcam. The game will challenge you to produce six different emotions (happiness, sadness, fear, surprise, disgust, and anger), which the AI will attempt to identify. However, you’ll probably find that the software’s readings are far from accurate, often interpreting even exaggerated expressions as “neutral.” And even when you do produce a smile that convinces your computer that you’re happy, you’ll know you were faking it.
This is the point of the site, says creator Alexa Hagerty, a researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk: to demonstrate that the basic premise underlying much emotion recognition tech, that facial movements are intrinsically linked to changes in feeling, is flawed.
“The premise of these technologies is that our faces and inner feelings are correlated in a very predictable way,” Hagerty tells The Verge. “If I smile, I’m happy. If I frown, I’m angry. But the APA did this big review of the evidence in 2019, and they found that people’s emotional space cannot be readily inferred from their facial movements.” In the game, says Hagerty, “you have a chance to move your face rapidly to impersonate six different emotions, but the point is you didn’t inwardly feel six different things, one after the other in a row.”
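To make that premise concrete, here’s a minimal sketch of the kind of frame-by-frame classification these systems perform. It leans on the open-source fer package and OpenCV purely as stand-ins; the model behind emojify.info or any commercial product isn’t public, and the webcam source, package choice, and label set here are assumptions for illustration only.

```python
# A rough sketch of webcam "emotion recognition" using the open-source fer
# package (pip install fer opencv-python). This illustrates the general
# approach Hagerty is critiquing, not the model behind emojify.info.
import cv2
from fer import FER

detector = FER()  # pretrained facial-expression classifier

capture = cv2.VideoCapture(0)  # open the default webcam
ok, frame = capture.read()     # grab a single frame
capture.release()

if ok:
    for face in detector.detect_emotions(frame):
        # face["emotions"] maps labels like "happy", "sad", "angry",
        # "surprise", "fear", "disgust", and "neutral" to scores.
        scores = face["emotions"]
        label = max(scores, key=scores.get)
        print(f"Predicted expression: {label} ({scores[label]:.2f})")
else:
    print("Could not read a frame from the webcam.")
```

What the sketch makes obvious is that the model only maps facial movement to a label from a fixed list; it has no access to what the person in front of the camera actually feels, which is exactly the gap Hagerty’s game is designed to expose.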
A second mini-game on the site drives home this point by asking users to identify the difference between a wink and a blink — something machines cannot do. “You can close your eyes, and it can be an involuntary action or it’s a meaningful gesture,” says Hagerty.
Despite these problems, emotion recognition technology is rapidly gaining traction, with companies promising that such systems can be used to vet job candidates (giving them an “employability score”), spot would-be terrorists, or assess whether commercial drivers are drowsy. (Amazon is even deploying similar technology in its own vans.)
Of course, human beings also make mistakes when we read emotions on people’s faces, but handing over this job to machines comes with specific disadvantages. For one, machines can’t read other social cues like humans can (as with the wink / blink dichotomy). Machines also often make automated decisions that humans can’t question and can conduct surveillance at a mass scale without our awareness. Plus, as with facial recognition systems, emotion detection AI is often racially biased, more frequently assessing the faces of Black people as showing negative emotions, for example. All these factors make AI emotion detection much more troubling than humans’ ability to read others’ feelings.
“The dangers are multiple,” says Hagerty. “With human miscommunication, we have many options for correcting that. But once you’re automating something or the reading is done without your knowledge or consent, those options are gone.”
If you’re finding that background noise is disrupting voice or video calls made from your computer, then a new piece of software from Nvidia might help (provided you have the necessary hardware to run it). Released in April 2020, RTX Voice uses the hardware found in Nvidia’s RTX (and more recently, GTX) GPUs to process your incoming and outgoing audio and eliminate almost all background noise.
I recorded a quick demonstration to show how it works, captured from a Blue Snowball microphone using the built-in call recording functionality in Zoom. When I don’t have the software enabled, you can hear the loud clacking of my mechanical keyboard in the background of the call. But when I turn on RTX Voice, the sound completely disappears.
As well as processing your microphone’s input so that the people you’re speaking to can’t hear any background noise around you, you can also set the software to eliminate background noise coming in from other people. So you can save yourself from your colleagues’ loud keyboard as well as protecting them from your own. It’s a win-win.
How to use RTX Voice to reduce background noise
RTX Voice is pretty simple to use, but the big caveat is that you need the right hardware. To run it, you’ll need an Nvidia GeForce RTX or GTX, Quadro, or Titan graphics card, since the software uses the GPU to process your audio. That means you’re out of luck if you’ve got a Mac or a Windows machine without a dedicated Nvidia GPU.
Beyond the hardware requirement, the other thing to note about RTX Voice is that, since the processing is done by your graphics card, it can take resources away from games or other graphically intensive applications you’re running. I ran some quick and dirty benchmarks to try to gauge the performance impact and found that running RTX Voice on my Discord microphone input reduced Unigine’s Heaven benchmark by just over 3fps, or around 6 percent, rising to over 8fps, or 14 percent, if I used the software to process incoming audio as well. That more or less tracks with YouTuber EposVox’s report of a 4 to 10 percent reduction when using it on his microphone, rising to 20 percent with both mic and speakers.
I think that makes RTX Voice a much better option for calls where you’re unlikely to be running something graphically intensive at the same time, like a work conference call, rather than while you’re running a game simultaneously. If you’re looking for something more gaming-specific, Discord recently launched its own noise suppression feature, which might be a better alternative.
RTX Voice can be set up in just a couple of minutes.
First, update the driver software of your graphics card if it’s not already running on version 410.18 or above
Download RTX Voice from Nvidia’s website and install it
Once the software is installed, you can configure it to improve your incoming audio, outgoing audio, or both. Nvidia recommends only turning it on for your input device (read: microphone) to minimize the impact the audio processing will have on the performance of your system. You can also select how much noise suppression you want. I left it at 100 percent, but you might want to play around to find what works best for you.
Once installed, “Nvidia RTX Voice” will appear as an audio input and / or output device for your PC. That means you can go into your voice chat app of choice and select it as though you’d plugged an extra microphone or set of speakers into your PC. Check out Nvidia’s site for specific instructions on how to configure the software for individual applications, including Zoom.
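If you want to double-check that the virtual device has actually registered with the operating system before digging through an app’s settings, you can enumerate your audio devices programmatically. The sketch below uses the third-party sounddevice Python package, which is my own assumption rather than anything Nvidia ships, and the exact device name string may vary by driver version.

```python
# Check whether the "NVIDIA RTX Voice" virtual microphone/speakers are
# registered with the operating system. Uses the third-party sounddevice
# package (pip install sounddevice); the device name can vary by driver
# version, so match loosely.
import sounddevice as sd

matches = [
    (index, device["name"])
    for index, device in enumerate(sd.query_devices())
    if "rtx voice" in device["name"].lower()
]

if matches:
    for index, name in matches:
        print(f"Found virtual audio device #{index}: {name}")
else:
    print("No RTX Voice device found. Is the software installed and running?")
```

Any app that lets you pick an input or output device should show the same entry once it appears in this list.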
Nvidia’s software isn’t unique. In addition to Discord’s feature, Microsoft also plans to add a similar piece of functionality to Teams later this year. The advantage of RTX Voice, however, is that it works across a much broader range of apps. Nvidia’s site lists 12 apps that it’s validated. However, I tested out audio recording app Audacity, which Nvidia doesn’t list as being supported, and found that RTX Voice worked just fine, so there are likely to be other unlisted apps that also work.
Not everyone will have the hardware to take advantage of this latest feature, and for others, the performance hit won’t be worth it. However, if, like me, your gaming PC is mainly being used as a work computer these days, then using RTX Voice is a no-brainer.
Correction: This article originally stated that RTX Voice won’t work on a Windows machine with a dedicated GPU when it should have read that it won’t work on a Windows machine without a dedicated GPU. We regret the error.
Update 10:31AM, April 6th: Nvidia has extended RTX Voice support to earlier GTX, Quadro, and Titan-branded graphics cards, so we’ve updated this post with the relevant info.
Nvidia’s RTX Voice feature, which eliminates background noises that would otherwise come through your microphone, is no longer limited to RTX graphics cards. The latest update to the software enables any GeForce GTX, Quadro, or Titan-branded Nvidia GPU going back several generations to use the feature (via Tom’s Hardware).
Given that Nvidia’s newest RTX 30-series graphics cards are pricey and tough to find (as are just about any kind of somewhat capable GPU), this is a good way to extend the utility of older hardware. If you update your graphics card to Nvidia’s 410.18 driver, models as far back as the GTX 600-series released in 2012 should be able to run the feature, which you can download from Nvidia’s website.
This update comes just shy of a year since the feature’s original announcement. It’s also been about a year since some people quickly figured out a rather simple hack to get it working on older cards without Nvidia’s help, despite the company’s initial claims that RTX Voice tapped the AI-focused Tensor cores that are found exclusively in RTX graphics cards. Alas, it’s official now, so go download it if you work or play in a noisy environment that you’d like to make more quiet.
Bloomberg today reported that a shortage of inexpensive display driver chips has delayed production of the LCD panels used in, well, pretty much every product category you can think of. Displays are ubiquitous, and many devices can’t function without them. But for the displays to work, they require a display driver — no, not Nvidia or AMD display drivers, those are software. We’re talking about a tiny chip that sends instructions and signals to the display.
That’s a fairly simple function, at least compared to those performed by the vastly more powerful components inside the device proper, which is why many display drivers cost $1. But a component’s price doesn’t always reflect its importance, as anyone who’s built a high-end PC, bought one of the best gaming monitors, and then realized they forgot to get a compatible cable can attest. That missing link is both cheap and vital.
All of which means that a display driver shortage can cause delays for smartphones, laptops, and game consoles; automobiles, airplanes, and navigation systems; and various appliances, smart home devices, and other newly be-screened products.
“I have never seen anything like this in the past 20 years since our company’s founding,” Himax CEO Jordan Wu told Bloomberg. He should know — Himax claims to be the “worldwide market leader in display driver ICs” for many product categories.
Himax’s share price has risen alongside demand for display drivers. Yahoo Finance data puts its opening share price for May 1, 2020 at $3.53; it opened at $14.06 on Monday. The market, at least, is acutely aware of display drivers’ importance.
Unfortunately, there isn’t much Himax can do to improve the availability of display drivers, Wu told Bloomberg, because it’s a fabless company that relies on TSMC for production. TSMC simply can’t keep up with all the demand it’s experiencing.
Companies will have to sit on otherwise-ready displays (assuming panel production improves) until that changes. This probably seems familiar to manufacturers waiting for SSD controller supply to rebound after the February disruption of a Samsung fab.
That’s only part of the problem, of course, as the global chip shortage affects practically every aspect of the electronics industry. It’s a matter of improving the availability of CPUs, GPUs, mobile processors, chipsets, display panels, single board computers, and who-knows-how-many other components. No biggie.
E3 2021 will be an all-digital event. The Entertainment Software Association has announced that this year’s event, which will be free of charge, will take place from June 12 to June 15.
Last year, the event was canceled due to the COVID-19 pandemic, which is still ongoing. For the virtual event, Nintendo, Xbox, Capcom, Konami, Ubisoft, Take-Two Interactive, Warner Bros. Games, and Koch Media will participate, with promises of “more to come.” It is likely that other companies will hold adjacent events, much like they have during in-person E3s.
“For more than two decades, E3 has been the premier venue to showcase the best that the video game industry has to offer, while uniting the world through games,” Stanley Pierre-Louis, president and CEO of the ESA, said in a press release. “We are evolving this year’s E3 into a more inclusive event, but will still look to excite the fans with major reveals and insider opportunities that make this event the indispensable center stage for video games.”
The exact format has yet to be unveiled, though it will likely feature a number of pre-recorded presentations and interviews, and possibly some game demonstrations. And surely, there will be a lot of world premiere trailers.
E3 typically makes the ESA a lot of money, so this is yet another hit to the trade group’s budget. But there’s an upside for fans: this year, everyone will be able to attend. The ESA ended its announcement by saying, “ESA looks forward to coming back together to celebrate E3 2022 in person.” Let’s hope that actually happens.
E3 2021 will take place June 12th-15th this year as a free, “reimagined, all-virtual” event, the Entertainment Software Association announced today. Organizers announced that the lineup includes companies such as Nintendo, Xbox, Capcom, Konami, Ubisoft, Take-Two Interactive, and Warner Bros. Games. Sony is notably missing from that list so far.
E3, gaming’s biggest annual conference in North America, typically takes place in downtown LA every June and attracts a mix of developers, press, and consumers. Last year’s event was canceled in April due to COVID-19. In its absence, Geoff Keighley launched Summer Game Fest in partnership with many developers to deliver game reveals and news; the digital event is also returning this June.
E3 is expected to resume in person next summer.
ESA president and CEO Stanley Pierre-Louis says the organization is “evolving this year’s E3 into a more inclusive event” that will still include game reveals and news. The organization confirmed in February that it would hold an online event only this year.
TikTok creators will soon be able to add automatically generated captions to their videos as the app attempts to make itself more accessible. The option to add auto captions will appear in the editing page after a video has been uploaded or recorded. TikTok says the feature will be available in American English and Japanese at first, but it plans to add support for more languages “in the coming months.”
The platform is adding the feature to make TikTok videos easier to watch for deaf and hard of hearing viewers. However, a TikTok dialogue box also says the feature is useful for anyone watching videos “when it’s difficult or inconvenient for them to listen to audio.” Creators can edit their captions after they’ve been automatically generated to fix any mistakes, and viewers can turn captions off via the captions button on the share panel.
As automatic transcription has gotten better over the years, services have increasingly been adding it to their software to make content more accessible. Last month, Google built the feature into Chrome, allowing it to generate captions for audio played through the browser. The company’s Live Caption system is also available as a system-wide feature for select Android devices. Video chat services like Zoom and Google Meet can auto-generate captions during calls, and Instagram also seems to be testing a similar feature for its videos.
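For a rough sense of how accessible the building blocks of auto-captioning have become, here’s a minimal sketch using the open-source SpeechRecognition package. It’s an illustrative stand-in, not TikTok’s or Google’s production pipeline, and the clip.wav file, the package choice, and the use of Google’s free web recognizer are all assumptions.

```python
# A bare-bones auto-captioning sketch using the SpeechRecognition package
# (pip install SpeechRecognition). This is an illustrative stand-in, not
# TikTok's or Google's production captioning pipeline.
import speech_recognition as sr

recognizer = sr.Recognizer()

# "clip.wav" is a hypothetical short audio track pulled from a video.
with sr.AudioFile("clip.wav") as source:
    audio = recognizer.record(source)

try:
    caption = recognizer.recognize_google(audio)  # free web API, English by default
    print(caption)
except sr.UnknownValueError:
    print("Speech was unintelligible; a real pipeline would flag this for manual editing.")
except sr.RequestError as error:
    print(f"Recognition service unavailable: {error}")
```

A real captioning system adds a lot on top of this, including timing each phrase to the video and, as TikTok does, letting creators correct mistakes before publishing.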
TikTok’s auto captions are only its latest accessibility feature. The app also warns creators about videos that might trigger photosensitive epilepsy and provides filters for viewers to avoid these videos. A text-to-speech feature was also added late last year.