With the vast majority of all-in-one (AIO) and small form-factor (SFF) PCs relying on proprietary motherboards, the Thin Mini-ITX form factor is not particularly widespread, which makes it difficult for PC shops and DIY enthusiasts to build AIO and SFF computers. However, Thin Mini-ITX motherboards are not going the way of the dodo, and ASRock’s recently announced AM4 X300TM-ITX is a good example of continued interest in the platform.
The ASRock X300TM-ITX platform combines compatibility with AMD’s Ryzen APUs (up to Zen 2-based Ryzen 4000-series) with an expansive feature set, including a USB 3.1 Gen 1 Type-C connector, a COM port, and an LVDS header, all of which are rather exotic for what are typically inexpensive Thin Mini-ITX motherboards.
Furthermore, the COM port and LVDS header make this platform useful for commercial systems that actually need these types of connectivity. ASRock doesn’t officially position the motherboard for business or commercial PCs, but it does support AMD Ryzen Pro APUs, so you can certainly use it to build a PC with Pro-class features.
As the name suggests, ASRock’s X300TM-ITX motherboard is based on the rather dated AMD X300 chipset that was originally designed for entry-level systems aimed at overclockers, but it still supports the vast majority of AMD’s APUs with an (up to) 65W TDP (except the upcoming Ryzen 5000-series processors). The board also supports up to 64GB of DDR4-3200 memory across two SO-DIMM memory modules, an M.2-2280 slot for SSDs with a PCIe 3.0 or a SATA interface, and one SATA connector.
ASRock aims the X300TM-ITX motherboard at thin entry-level systems that don’t typically use discrete graphics cards, so it doesn’t have a PCIe x16 slot for an add-in card. Instead, the platform uses AMD’s integrated Radeon Vega GPUs. Meanwhile, the LVDS header supports resolutions of up to 1920×1080 at 60Hz, and the HDMI 2.1 connector supports HDCP 2.3. There is no word about DisplayPort support over the USB Type-C connector, and you should be aware that HDMI-to-DisplayPort adapters may not work with all displays.
ASRock’s X300TM-ITX has an M.2-2230 slot for a Wi-Fi card along with a GbE port. It also has USB Type-A connectors as well as a 3.5-mm audio input and output.
The platform is already mentioned on the manufacturer’s website, so it should be available for purchase soon. Unfortunately, ASRock didn’t touch on pricing in its press release.
Memory maker Adata is preparing to release its first-ever DDR5 memory modules aimed at the gaming market, possibly adding a new tier to our list of Best RAM. Adata will release these new DDR5 modules under the new ‘Caster’ series branding. The kits will come in three capacities: 8GB, 16GB, and 32GB configurations. Memory speeds will also be significantly higher than the JEDEC spec, ranging from 6,000 MHz to 7,400 MHz at a voltage of just 1.1V.
The Caster DDR5 memory modules will come in both RGB and non-RGB flavors, and just as you’d expect from gaming-flavored RAM, both variants come with a full heatsink cover.
The heatsinks feature a two-tone design with a matte black finish and an angled silver-metal finish in the middle that sports triangular shapes. Both variants feature the same design, but the RGB model is quite noticeably taller to make room for the additional RGB equipment necessary to light up the DIMM.
Without a doubt, this is the first major sign of DDR5 hitting the mainstream market sooner rather than later. We’ve already confirmed that Intel will use both DDR4 and DDR5 with its next-gen Alder Lake chips, and AMD is rumored to use DDR5 for its new AM5 Zen4 chips.
Altogether, this means we’ll see a major change in memory adoption, from DDR4 to DDR5, over the 2022 to 2023 timeframe. This is good news, as DDR5’s significantly higher memory bandwidth and other optimizations will allow both Intel and AMD to produce higher core count CPUs that perform better without the current DDR4 bottlenecks. More memory bandwidth also helps feed hungry graphics engines, which might ultimately result in faster integrated graphics, too.
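For a rough sense of what the jump from DDR4-3200 to those Caster speeds means in raw throughput, here’s a back-of-the-envelope calculation (treating the rated MHz figures as MT/s; these are theoretical ceilings, not benchmarks):

```python
# Peak bandwidth of a single 64-bit DIMM channel: transfer rate (MT/s)
# x 8 bytes per transfer. A theoretical ceiling, not a measured result.
def peak_bandwidth_gbs(mt_per_s):
    return mt_per_s * 8 / 1000

print(peak_bandwidth_gbs(3200))  # DDR4-3200: 25.6 GB/s
print(peak_bandwidth_gbs(6000))  # Caster entry kit: 48.0 GB/s
print(peak_bandwidth_gbs(7400))  # Caster top kit: 59.2 GB/s
```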
The Razer Kraken V3 X will keep you satisfied with an excellent microphone and solid, rich audio reproduction.
For
+ Lightweight
+ Solid audio reproduction and thump
+ Great microphone
+ Succulently soft ear cups
Against
– All-plastic design
Designed to compete with the best gaming headsets without breaking the bank, Razer’s Kraken V3 X combines a comfortable ear cup design with strong audio output, an excellent microphone and software that greatly enhances the experience. This $69 set of USB cans is thumpy thanks to Razer’s patented TriForce 40mm drivers, while offering a dash of RGB style in the ear cups.
Razer Kraken V3 X Specs
Driver Type: 40mm neodymium magnet
Impedance: 32 Ohms
Frequency Response: 12 Hz – 28 kHz
Microphone Type: HyperClear cardioid (unidirectional)
Connectivity: USB Type-A (PC)
Weight: 0.6 pounds (285g)
Cord Length: 6 feet (USB Type-A cable)
Lighting: RGB on earcups
Software: Razer Synapse and 7.1 Surround Sound
Design and Comfort of Razer Kraken V3 X
Though it’s made from lightweight plastic, the Razer Kraken V3 X feels very sturdy. The unit’s Hybrid-Fabric memory foam ear cups are succulently soft and the headband is highly adjustable, fitting comfortably on my obnoxiously large head. When I plugged it in, the three-headed snake logo on each ear cup illuminated in RGB.
The left earcup houses the flexible, quite bendy Razer HyperClear cardioid microphone, along with a volume knob and a mute button. The Razer Kraken V3 X is fine to wear for long periods of time, as it doesn’t tend to get hot with extended use, unlike many other over-ear gaming headphones I have previously reviewed.
Audio Performance of Razer Kraken V3 X
The headset uses a pair of Razer-designed 40mm TriForce drivers that pump out thunderous, distortion-free bass and sweet sound throughout the audio spectrum. From warm, throaty lows to angelic highs, the rich sound of the Razer Kraken V3 X surprised me.
First, I went to YouTube to listen to Busta Rhymes’ “Put Your Hands Where My Eyes Could See,” because the thick, bold bassline would be an excellent test of the Kraken V3 X’s capabilities. The headset came through with flying colors, pushing out clear, loud, thunderous bass that Thor Odinson would be proud of.
My favorite moment came while listening to Earth, Wind & Fire’s “September.” At the beginning of the song, the Razer Kraken V3 X reproduced the softer tones of the finger snaps and guitar melody sweetly. When the horn section takes over with its powerful rhythm, the Krakens proved they were audio titans.
The Razer Kraken V3 X also has plenty of gaming prowess. While playing Borderlands 2, the 7.1 spatial surround sound helped me hear some creeps off to my right, and I was able to turn around swiftly with my sniper rifle and blow a villain’s head off before he could roast me with a flamethrower. The sound of explosions was exquisite when I shot out a barrel filled with chemicals, taking out three enemies.
After I was done with Borderlands 2, I decided to knock some heads, so I launched Batman: Arkham Knight. Again, the spatial sound software helped me: I heard footsteps to my left and batarang-ed a would-be attacker. I thoroughly enjoyed hearing the bone-crunching punches, and my favorite sound, the Batmobile’s thruster firing, was bombastically reproduced as it launched me off a bridge and onto a rooftop.
To test the movie viewing experience, I watched Avengers: Infinity War via Disney Plus. The audio captured the thunderous bass and every nuance so well that it sounded like it did when I watched this film in an IMAX theater.
During the scene where Star-Lord is feeling insecure about Thor’s presence and starts deepening his voice, I picked up the subtle shift in tone from the moment Chris Pratt begins his impression. Every fight scene and explosion sounded realistic. When Iron Man battles Thanos, roots his armor’s feet, and double-punches Thanos into the debris, I could literally hear individual rocks fly off and land elsewhere.
Microphone on Razer Kraken V3 X
The Razer Kraken V3 X comes with Razer’s HyperClear cardioid microphone, which has a rated frequency response of 100 Hz to 10 kHz with a sensitivity of -42dB. It’s very flexible and bendy and really does a nice job when recording audio.
I took part in an afternoon Google Meet, and everyone said that my voice came in loud and clear, with the microphone nicely picking up my natural deep timbre. When I made an appearance on my friend’s baseball podcast, he commented that the mic had excellent pickup and recorded very nicely.
Features and Software of Razer Kraken V3 X
The Razer Kraken V3 X is a solid performer on its own, but I highly recommend you download Razer’s Synapse software, which will allow you to configure the RGB lighting effects, create lighting profiles, and adjust the volume.
The real winner here is Razer’s 7.1 Surround Sound download; it’s a game changer that takes the sound quality up many notches. The normal audio performance, as previously mentioned, is solid, but the truly thunderous, high-quality audio that makes these cans worth the money comes when the unit is paired with the software. They go from sounding like $69 headphones to sounding like a pair of $200 headphones.
Bottom Line
For $69.99 you get an excellent-sounding pair of headphones, especially if you remember to download Razer’s 7.1 surround sound software. Yes, they’re plastic, but they’re very stylish, with the RGB lighting adding a little panache and flair. The Kraken V3 X is also super lightweight, and the hybrid cloth and memory foam cups will cradle your ears in soft comfort.
With the excellent microphone performance, you will be able to bark out orders to your friends during games or even host a podcast with crystal-clear audio. If you don’t mind spending a bit more money and want a headset with a 3.5mm jack, you should consider the HyperX Cloud Alpha, but if you want a high-quality, affordable USB gaming headset, the Razer Kraken V3 X is a great choice.
Intel kicked off Computex 2021 by adding two new flagship 11th-Gen Tiger Lake U-series chips to its stable, including a new Core i7 model that’s the first laptop chip for the thin-and-light segment that boasts a 5.0 GHz boost speed. As you would expect, Intel also provided plenty of benchmarks to show off its latest silicon.
Intel also teased its upcoming Beast Canyon NUCs that are the first to accept full-size graphics cards, making them more akin to a small form factor PC than a NUC. These new machines will come with Tiger Lake processors. Additionally, the company shared a few details around its 5G Solution 5000, its new 5G silicon for Always Connected PCs that it developed in partnership with MediaTek and Fibocom. Let’s jump right in.
Intel 11th-Gen Tiger Lake U-Series Core i7-1195G7 and i5-1155G7
Intel’s two new U-series Tiger Lake chips, the Core i7-1195G7 and Core i5-1155G7, slot in as the new flagships for the Core i7 and Core i5 families. These two processors are UP3 models, meaning they operate in the 12-28W TDP range. These two new chips come with all the standard features of the Tiger Lake family, like the 10nm SuperFin process, Willow Cove cores, the Iris Xe graphics engine, and support for LPDDR4x-4266, PCIe 4.0, Thunderbolt 4 and Wi-Fi 6/6E.
Intel expects the full breadth of its Tiger Lake portfolio to span 250 designs by the holidays from the usual suspects, like Lenovo, MSI, Acer and ASUS, with 60 of those designs using the new 1195G7 and 1155G7 chips.
Intel Tiger Lake UP3 Processors
| Processor | Cores / Threads | Graphics (EUs) | Operating Range (W) | Base Clock (GHz) | Single-Core Turbo (GHz) | Max All-Core (GHz) | Cache (MB) | Graphics Max Freq (GHz) | Memory |
|---|---|---|---|---|---|---|---|---|---|
| Core i7-1195G7 | 4C / 8T | 96 | 12 – 28 | 2.9 | 5.0 | 4.6 | 12 | 1.40 | DDR4-3200, LPDDR4x-4266 |
| Core i7-1185G7 | 4C / 8T | 96 | 12 – 28 | 3.0 | 4.8 | 4.3 | 12 | 1.35 | DDR4-3200, LPDDR4x-4266 |
| Core i7-1165G7 | 4C / 8T | 96 | 12 – 28 | 2.8 | 4.7 | 4.1 | 12 | 1.30 | DDR4-3200, LPDDR4x-4266 |
| Core i5-1155G7 | 4C / 8T | 80 | 12 – 28 | 2.5 | 4.5 | 4.3 | 8 | 1.35 | DDR4-3200, LPDDR4x-4266 |
| Core i5-1145G7 | 4C / 8T | 80 | 12 – 28 | 2.6 | 4.4 | 4.0 | 8 | 1.30 | DDR4-3200, LPDDR4x-4266 |
| Core i5-1135G7 | 4C / 8T | 80 | 12 – 28 | 2.4 | 4.2 | 3.8 | 8 | 1.30 | DDR4-3200, LPDDR4x-4266 |
| Core i3-1125G4* | 4C / 8T | 48 | 12 – 28 | 2.0 | 3.7 | 3.3 | 8 | 1.25 | DDR4-3200, LPDDR4x-3733 |
The four-core eight-thread Core i7-1195G7 brings the Tiger Lake UP3 chips up to a 5.0 GHz single-core boost, which Intel says is a first for the thin-and-light segment. Intel has also increased the maximum all-core boost rate up to 4.6 GHz, a 300 MHz improvement.
Intel points to additional tuning for the 10nm SuperFin process and tweaked platform design as driving the higher boost clock rates. Notably, the 1195G7’s base frequency declines by 100 MHz to 2.9 GHz, likely to keep the chip within the 12 to 28W threshold. As with the other G7 models, the chip comes with the Iris Xe graphics engine with 96 EUs, but those units operate at 1.4 GHz, a slight boost over the 1165G7’s 1.35 GHz.
The 1195G7’s 5.0 GHz boost clock rate also comes courtesy of Intel’s Turbo Boost Max Technology 3.0. This boosting tech works in tandem with the operating system scheduler to target the fastest core on the chip (‘favored core’) with single-threaded workloads, thus allowing most single-threaded work to operate 200 MHz faster than we see with the 1185G7. Notably, the new 1195G7 is the only Tiger Lake UP3 model to support this technology.
Surprisingly, Intel says the 1195G7 will ship in higher volumes than the lower-spec’d Core i7-1185G7. That runs counter to our normal expectations that faster processors fall higher on the binning distribution curve — faster chips are typically harder to produce and thus ship in lower volumes. The 1195G7’s obviously more forgiving binning could be the result of a combination of the lower base frequency, which loosens binning requirements, and the addition of Turbo Boost Max 3.0, which only requires a single physical core to hit the rated boost speed. Typically all cores are required to hit the boost clock speed, which makes binning more challenging.
The four-core eight-thread Core i5-1155G7 sees more modest improvements over its predecessor, with boost clocks jumping an additional 100 MHz to 4.5 GHz, and all-core clock rates improving by 300 MHz to 4.3 GHz. We also see the same 100 MHz decline in base clocks that we see with the 1195G7. This chip comes with the Iris Xe graphics engine with 80 EUs that operate at 1.35 GHz.
Intel’s Tiger Lake Core i7-1195G7 Gaming Benchmarks
Intel shared its own gaming benchmarks for the Core i7-1195G7, but as with all vendor-provided benchmarks, you should view them with skepticism. Intel didn’t share benchmarks for the new Core i5 model.
Intel put its Core i7-1195G7 up against the AMD Ryzen 7 5800U, but the chart lists an important caveat here — Intel’s system operates between 28 and 35W during these benchmarks, while AMD’s system runs at 15 to 25W. Intel conducted these tests on the integrated graphics for both chips, so we’re looking at Iris Xe with 96 EUs versus AMD’s Vega architecture with eight CUs.
Naturally, Intel’s higher power consumption leads to higher performance, thus giving the company the lead across a broad selection of triple-A 1080p games. However, this extra performance comes at the cost of higher power consumption and thus more heat generation. Intel also tested using its Reference Validation Platform with unknown cooling capabilities (we assume they are virtually unlimited) while testing the Ryzen 7 5800U in the HP ProBook 455.
Intel also provided benchmarks with DirectX 12 Ultimate’s new Sampler Feedback feature. This new DX12 feature reduces memory usage while boosting performance, but it requires GPU hardware-based support in tandem with specific game engine optimizations. That means this new feature will not be widely available in leading triple-A titles for quite some time.
Intel was keen to point out that its Xe graphics architecture supports the feature, whereas AMD’s Vega graphics engine does not. UL has a new 3DMark Sampler Feedback benchmark under development, and Intel used the test release candidate to show that Iris Xe graphics offers up to 2.34X the performance of AMD’s Vega graphics with the feature enabled.
Intel’s Tiger Lake Core i7-1195G7 Application Benchmarks
Here we can see Intel’s benchmarks for applications, too, but the same rules apply — we’ll need to see these benchmarks in our own test suite before we’re ready to claim any victors. Again, you’ll notice that Intel’s system operates at a much higher 28 to 35W power range on a validation platform while AMD’s system sips 15 to 25W in the HP ProBook 455 G8.
As we’ve noticed lately, Intel now restricts its application benchmarks to features that it alone supports at the hardware level. That includes AVX-512 based benchmarks that leverage the company’s DL Boost suite that has extremely limited software support.
Intel’s benchmarks paint convincing wins across the board. However, be aware that the AI-accelerated workloads on the right side of the chart aren’t indicative of what you’ll see with the majority of productivity software. At least not yet. For now, unless you use these specific pieces of software very frequently in these specific tasks, these benchmarks aren’t very representative of the overall performance deltas you can expect in most software.
In contrast, the Intel QSV benchmarks do have some value. Intel’s Quick Sync Video is broadly supported, and the Iris Xe graphics engine supports hardware-accelerated 10-bit video encoding, a feature that, as Intel rightly points out, isn’t supported on Nvidia’s MX-series GPUs either.
Intel’s support for hardware-accelerated 10-bit encoding does yield impressive results, at least in its benchmarks, showing a drastic ~8X reduction in transcode time for a Handbrake 4K 10-bit HEVC to 1080p HEVC conversion. Again, bear in mind that this is with the Intel chip running at a much higher power level. Intel also shared a chart highlighting its broad support for various encoding/decoding options that AMD doesn’t support.
Intel Beast Canyon NUC
Intel briefly showed off its upcoming Beast Canyon NUC that will sport 65W H-Series Tiger Lake processors and be the first NUC to support full-length graphics cards (up to 12 inches long).
The eight-liter Beast Canyon certainly looks more like a small form factor system than what we would expect from the traditional definition of a NUC, and as you would expect, it comes bearing the Intel skull logo. Intel’s Chief Performance Strategist Ryan Shrout divulged that the system will come with an internal power supply. Given the size of the unit, that means there will likely be power restrictions for the GPU. We also know the system uses standard air cooling.
Intel is certainly finding plenty of new uses for its Tiger Lake silicon. The company recently listed new 10nm Tiger Lake chips for desktop PCs, including a 65W Core i9-11900KB and Core i7-11700KB, and told us that these chips would debut in small form factor enthusiast systems. Given that Intel specifically lists the H-series processors for Beast Canyon, it doesn’t appear these chips will come in the latest NUC. We’ll learn more about Beast Canyon as it works its way to release later this year.
Intel sold its modem business to Apple back in 2019, leaving a gap in its Always Connected PC (ACPC) initiative. In the interim, Intel has worked with MediaTek to design and certify new 5G modems with carriers around the world. The M.2 modules are ultimately produced by Fibocom. The resulting Intel 5G Solution 5000 is a 5G M.2 device that delivers up to five times the speed of the company’s Gigabit LTE solutions. The solution is compatible with both Tiger and Alder Lake platforms.
Intel claims that it leads the ACPC space with three out of four ACPCs shipping with LTE (more than five million units thus far). Intel’s 5G Solution 5000 is designed to extend that to the 5G arena with six designs from three OEMs (Acer, ASUS and HP) coming to market in 2021. The company says it will ramp to more than 30 designs next year.
Intel says that while it will not be the first to come to market with a 5G PC solution, it will be the first to deliver them in volume, but we’ll have to see how that plays out in the face of continued supply disruptions due to the pandemic.
Intel’s EVP Michelle Johnston Holthaus will take to the stage tonight, May 30, 2021, to present Intel’s ‘Innovation Unleashed’ keynote for Computex 2021 before the show officially kicks off Monday. Due to the pandemic, Computex 2021 is an all-virtual experience, so you can pull up a seat and see the keynote here live at 10pm ET (7pm PT) or watch the replay later. We’ll also have all of our standard coverage after the event (we’ll provide links as it emerges).
Intel has kept details about its announcements close to the chest, but we know that the presentation will cover consumer CPUs and Xeon processors. That means we could hear more about the company’s upcoming Alder Lake processors, though we certainly wouldn’t expect a full launch until later this year. Intel has already teased an Alder Lake demo earlier this year, so we expect more of the same.
Given the increasing amount of information that’s emerged about the company’s DG2 GPUs based on the Xe-HPG architecture, we could also learn more details about the company’s forthcoming discrete graphics cards.
Here are the few details that Intel has shared in its press release:
“Join Intel Executive Vice President Michelle Johnston Holthaus for Intel’s first virtual COMPUTEX keynote and a firsthand look at how the strategies of new CEO Pat Gelsinger, along with the forces of a rapidly accelerating digital transformation, are unleashing a new era of innovation at Intel — right when the world needs it most.
Johnston Holthaus will welcome Intel’s Steve Long, corporate vice president of Client Computing Group Sales, and Lisa Spelman, corporate vice president and general manager of the Xeon and Memory Group, to outline how Intel innovations help expand human potential by expanding technology’s potential. This includes collaborating with partners to drive innovation across the technology ecosystem — from the data center and cloud to connectivity, artificial intelligence, and the intelligent edge.”
And with that, pull up a seat for the show, and stay tuned for our coverage afterward.
YouTube channel Moore’s Law Is Dead has published what it claims to be one of the first images of Intel’s upcoming enthusiast-grade DG2-series graphics card based on the Xe-HPG architecture (possibly codenamed ‘Niagara Falls’). The board does look like a graphics card, but it doesn’t have any Intel logotypes (they might have been removed to protect the source) or any other clear indication this is a DG2 GPU, so we should view any gleaned information with some skepticism.
Intel’s upcoming DG2 lineup is projected to include at least two graphics cards with either 384 or 512 execution units (EUs) and up to 16GB of memory that communicates over a 256-bit interface. The YouTube channel has published an image of Intel’s alleged DG2 graphics card and shared some additional information about Intel’s possible plans. The report says that while Intel might formally introduce its DG2-series graphics cards in Q4 2021, the cards won’t be widely available until Q1 2022.
Performance-wise, the top-of-the-range DG2 is projected to be slightly slower than Nvidia’s GeForce RTX 3080. Still, Intel is reportedly pricing the product ‘aggressively’ and is looking at a ‘sweet spot’ in the $349 to $499 range to grab market share.
The picture of the board also gives us a few points to chew over. First, the board has DisplayPort and HDMI interfaces and houses memory chips, so this is definitely a graphics card. The memory chips are installed in a pattern previously attributed to Intel’s upcoming high-end graphics cards with Xe-HPG GPUs, so this may indeed be Intel’s DG2.
Secondly, Intel’s high-end Xe-HPG GPU has a rather sophisticated multi-phase (10+) voltage regulating module (VRM). The VRM consists of two blocks on both sides of the GPU with a power management controller located near the display outputs. Such a VRM hints at the complexity and size of the graphics processor. In any case, this is an early sample and not a commercial product, and since this is a development board, some elements might be installed on the PCB merely for testing purposes.
Another thing that catches the eye is the pair of eight-pin auxiliary PCIe power connectors, which can deliver up to 300W of power to the GPU and its memory. Additionally, the card can draw another 75W from the motherboard, which means we’re looking at a power-hungry graphics card. Notably, the power connectors face the front of the PC, which increases the effective length of the card; modern AMD and Nvidia graphics cards, in contrast, place their power connectors on the top edge near the back. Previously leaked pictures of an alleged Intel DG2 card showed a board with power connectors on top, but since we don’t know how old either sample is, it’s impossible to draw any conclusions here.
Finally, just like the latest AMD Radeon and Nvidia GeForce graphics cards, the alleged Intel DG2 desktop board seems to be slightly taller than the bracket, which is logical as its developers needed to accommodate the sophisticated power supply circuitry somewhere. It still isn’t as tall as Nvidia’s GA102-based reference designs, though.
Keeping in mind that Intel’s higher-end Xe-HPG graphics cards seem to be quite a bit out on the horizon, even accurate information about their current state should be considered preliminary – hardware gets more mature, and plans tend to change during the design process.
One of the more common Windows stop codes is named IRQL_NOT_LESS_OR_EQUAL. This cryptic-seeming name refers to the Interrupt ReQuest Level (IRQL) that PCs use to signal events in urgent need of attention or response. In fact, IRQL_NOT_LESS_OR_EQUAL (sometimes referred to as just “IRQL”) is typically a memory-related error that occurs when a system process or a device driver seeks access to a memory address for which it lacks valid access rights. Memory allocations for processes usually have an upper bound address, so the “NOT_LESS_OR_EQUAL” part refers to an attempt to access an address outside (greater than) that boundary value.
What Happens When the IRQL_NOT_LESS_OR_EQUAL Error Occurs?
This error triggers an OS stop, which causes Windows to crash and triggers what’s often called a Blue Screen of Death, or BSOD. Windows displays a stop screen, while it’s collecting forensics data in the background. When that collection phase ends, the PC reboots by default (unless you’ve changed settings to instruct it to shut down instead). This BSOD is shown as the lead-in graphic for this story.
Most Common Causes for the IRQL_NOT_LESS_OR_EQUAL Error
When this error is triggered, there are numerous potential causes that may be worth investigating. Your best bet is to think about what on your PC has changed recently. The list of potential causes includes:
● Corrupt system files: These are best addressed using the DISM /Online /Cleanup-Image /CheckHealth command (run at an administrative command prompt or in an administrative PowerShell session). If this command finds anything to report, run DISM /Online /Cleanup-Image /RestoreHealth to clean things up. Next, run the system file checker by typing SFC /SCANNOW until it reports nothing found or fixed (this sometimes takes two or three iterations). If it works, this will often fix the IRQL error as well; the whole sequence is sketched in the snippet after this list.
● Incompatible device drivers: if you’ve recently updated a driver, try rolling it back to the prior version. You can do this by opening the Device Manager (available when you hit Win + X), right clicking on the device in question, selecting Properties, navigating to the Driver tab and hitting the “Roll Back Driver” button. If the button is grayed out, you may have to uninstall the current driver and install the previous version manually.
● Faulty hardware: if a device is malfunctioning or failing, you’ll usually see error reports in Reliability Monitor, which you run by typing perfmon /rel in the Run box, at a command prompt, or in PowerShell.
The best thing you can do with a failing or malfunctioning device is to disconnect it from the PC (though for important devices – e.g. mouse, keyboard, disk drive, and so forth – you may also have to replace the ailing device with a known good working instance).
● Damaged or incomplete software installation: If you’ve recently installed an application or update, look in Reliability Monitor for installation failure messages (for updates and upgrades these will also appear in Update History). In such cases, your best bet is to uninstall the problem software and see if the problem goes away.
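Here’s a minimal sketch of the system-file repair sequence from the first bullet as a script, assuming it runs from an elevated (administrator) Python session on Windows. The commands themselves are exactly those given above; only the wrapper is ours.

```python
import subprocess

def run(cmd):
    # Echo the command, then hand it to the Windows shell.
    print(">", cmd)
    return subprocess.run(cmd, shell=True)

run("DISM /Online /Cleanup-Image /CheckHealth")
run("DISM /Online /Cleanup-Image /RestoreHealth")

# SFC sometimes needs two or three passes before it reports
# nothing found or fixed.
for _ in range(3):
    run("SFC /SCANNOW")
```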
Graphics Drivers Often Cause IRQL_NOT_LESS_OR_EQUAL Errors
I’ve seen the IRQL error on more than half-a-dozen occasions in my 6-plus years of working with Windows 10. In all but one instance, the cause was a buggy Nvidia GeForce graphics driver. In all of those cases, by rolling back to the preceding version, I stopped the IRQL error dead in its tracks.
That’s why I don’t remove duplicate Nvidia graphics drivers from my Windows PCs until the new one has run without problems for a couple of weeks. The excellent GitHub project DriverStore Explorer is great at performing such cleanups, when the time comes. Don’t be too quick to make such cleanups, and you’ll leave the rollback option open to yourself, should you need it.
Try a Clean Boot
If the preceding suggested repairs provide no relief from IRQL_NOT_LESS_OR_EQUAL errors, a Windows 10 clean boot should be your next step. A clean boot starts Windows 10 with the barest minimum set of drivers and startup programs. It seeks to eliminate possible causes of trouble that have been added to the startup environment over time.
To perform a clean boot:
1. Launch the msconfig System Configuration utility. You can get there by hitting WinKey+R and entering “msconfig.”
2. Uncheck Load startup items in the General Tab under the Selective Startup setting. This disables all startup items currently present on this PC.
3. Navigate to the Services tab, click the Hide all Microsoft services checkbox at lower left, then click “Disable all.” This disables all non-Microsoft (mostly OS) services on this PC. You can now click OK to close the window.
Your PC is now set up for a clean boot, so you’ll want to restart to try further troubleshooting. This may allow you to replace or reinstall otherwise problematic or reluctant drivers, applications, updates and so forth. When you’ve finished your troubleshooting, you must then go back and reverse all changes.
If you’ve already disabled some startup items in Task Manager, for example, you might want to make a screenshot to capture the list of disabled items before you disable those still active. That way you’ll know what to leave alone when you put things back the way they used to be.
Troubleshooting non-Windows Services
If your troubleshooting leads you to suspect non-Windows services are involved in the IRQL_NOT_LESS_OR_EQUAL error, you need to conduct a process of elimination to identify the culprit (or culprits). This means turning on 3rd-party services in groups. Shawn Brink at Tenforums.com recommends a binary search technique in his clean boot tutorial. This works pretty well, and helps you zero in quickly. I sometimes do things in groups by vendor (Chrome, Nvidia, Intel, and so forth) and that seems to work well, too.
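In rough terms, the binary search looks like the sketch below, which assumes a single culprit service; each call to error_occurs_with stands in for the manual cycle of enabling exactly that group in msconfig, rebooting, and checking whether the BSOD returns.

```python
def find_culprit(services, error_occurs_with):
    # Halve the suspect list on every reboot-and-test cycle.
    suspects = list(services)
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        if error_occurs_with(half):
            suspects = half                           # culprit is in this half
        else:
            suspects = suspects[len(suspects) // 2:]  # culprit is in the rest
    return suspects[0]

# With 32 third-party services, this isolates the culprit in about
# five reboots instead of thirty-two.
```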
See Who Else Is Getting IRQL_NOT_LESS_OR_EQUAL Errors
If you visit TenForums.com, BleepingComputer.com, Answers.Microsoft.com or the Tom’s Hardware forums and search for IRQL_NOT_LESS_OR_EQUAL, you will see how often the error has been reported lately, as there may be a new driver or update wreaking havoc. You will also get some very good ideas on how others have approached diagnosis of the underlying cause, and what fixes they’ve applied.
It’s especially helpful to read through fixes that claim success because these might work for you, too. On the other hand, unsuccessful fixes can be informative, too, because they tell you which repairs to try later rather than sooner (or not at all).
Our COVID-19 vaccines have passed their first tests with flying colors. They work unbelievably well, and they’re helping to slow the spread of disease in countries where they’re widely available. Now, scientists are turning to the next key question: how long will they work that well?
In people who were sick with COVID-19 and then got vaccinated, new research shows that they probably work for years. That group has powerful memory cells in their bone marrow that produce new antibodies when they’re needed. And they work so well that they can even block variants of the virus, studies show. These people may not even need vaccine boosters to stay protected long term.
Protection may be different for people who got vaccinated but never had COVID-19, Michel Nussenzweig, an immunologist at Rockefeller University in New York, told The New York Times. The immune system responds differently to vaccines than it does to natural infection, so they might need boosters against variants — even if they have strong and long-lasting protection against the original coronavirus strain. “That’s the kind of thing that we will know very, very soon,” Nussenzweig said.
Luckily, other research is charging ahead to figure out exactly what those potential boosters might look like. Scientists are homing in on the levels of antibodies someone needs to be protected against COVID-19. That benchmark, known as the immune correlate of protection, will give them an idea of the safety threshold — if someone’s antibodies drop below it, they might be more vulnerable to infection again.
Zeroing in on that threshold does two things. First, it gives scientists a way to monitor protection in people who have already been vaccinated. They can watch to see how long it takes for antibodies to decline below it, and get an idea of when people might need that booster shot. Antibodies naturally decline over time, and they’re not the only measure of protection (those long-lasting memory cells in bone marrow are another, for example). But they’re an early look at how immunity might be changing.
Second, having a threshold for protection opens up a shortcut to creating any needed booster shots against COVID-19 variants. COVID-19 vaccine trials included tens of thousands of people. They took months to run, because researchers needed to watch how frequently people with and without the shots got sick. Once we have a good idea of the immune response that stops infections, though, they can test boosters — which are functionally the same vaccine, with small tweaks — in smaller groups of people. We already know the shots are safe, so all they may need to do is check if the new version also pushes people’s immune system above the cutoff.
Together, this research outlines a way to keep people safe from COVID-19 going forward. It starts to ease fears that protection against the coronavirus would start to fade over time, putting communities at risk for outbreaks down the road. The virus is tricky, and variants are a curveball, but — luckily — the human immune system has ammunition as well.
Here’s what else happened this week.
Research
What Breakthrough Infections Can Tell Us
Finding and analyzing the rare cases of COVID-19 in people who have been vaccinated can give us crucial information about variants. But testing vaccinated people too often can have drawbacks, as well. (Katherine Wu / The Atlantic)
Moderna says its COVID-19 vaccine is effective in teens
The shot could be the second vaccine available for people under 18 and key toward safely reopening schools in the fall. (Nicole Wetsman / The Verge)
Development
Half of all US adults are now fully vaccinated against COVID-19
The country hit this huge milestone in less than six months. (Bill Chappell / NPR)
The U.S. may never hit the herd immunity threshold. That’s OK.
In places like the United States, where half of the population has been vaccinated, the pandemic will slow and stop being a threat even without reaching the level needed for herd immunity. (Erin Mordecai, Mallory Harris and Marc Lipsitch / The New York Times)
We have bigger problems than COVID-19’s origins
Spending too much time arguing over the lab leak speculation is a distraction from the important steps governments need to take to end this pandemic and prepare for the next one. (Nicole Wetsman / The Verge)
Perspectives
During a brainstorming session at the end of April, we discussed a number of ideas, including gift cards, direct payments and free tickets to sporting events. My chief adviser, Ann O’Donnell, who has been with me since my time in Congress, hesitantly suggested the idea of a lottery. She almost didn’t mention it because of its seeming absurdity. “This is kind of a wacky idea but ….”
—Ohio Governor Mike DeWine wrote in The New York Times about the decision to give $1 million to five vaccinated adults through a lottery.
More than numbers
To the people who have received the 1.8 billion vaccine doses distributed so far: thank you.
To the more than 168,927,298 people worldwide who have tested positive, may your road to recovery be smooth.
To the families and friends of the 3,509,402 people who have died worldwide — 592,938 of those in the US — your loved ones are not forgotten.
Update 28/05/2021 3:13 pm PT: Intel has provided us with the following statement that sheds more light on the latest Tiger Lake desktop processors:
“Intel has partnered with customers interested in expanding their product portfolio with enthusiast, small form-factor desktop designs. The Intel Core i9-11900KB processor is a BGA solution built with unique specifications and performance specifically for these designs.”
Update 28/05/2021 11:13 am PT: Intel has updated the product pages for the Tiger Lake B-series processors to confirm that they are indeed desktop processors. We’ve amended the article to reflect the change.
Original Article:
If you think Intel was done with Tiger Lake, then you have another thing coming. The chipmaker has unceremoniously posted four new Tiger Lake chips (via momomo_us) in its ARK database, and according to the listings, the processors have already launched.
The quartet of new processors are listed under the Tiger Lake family, with the 11th Generation moniker. However, they carry the “B” suffix, a designation that Intel hasn’t used until now. The product pages for the Core i9-11900KB, Core i5-11500B, Core i7-11700B and Core i3-11100B list the aforementioned processors as desktop chips. The “B” is rumored to stand for BGA (Ball Grid Array), which makes sense since Intel doesn’t specify a type of socket for the B-series parts; there’s a strong possibility these processors are soldered to the motherboard via the BGA package.
The core configurations for the listed Tiger Lake processors stick to Intel’s guidelines. The Core i9 and Core i7 are equipped with eight cores and 16 threads, but with clock speeds as the main differentiating factor. The Core i5 and Core i3 SKUs arrive with six-core, 12-thread and four-core, eight-thread setups, respectively. It would appear that the Tiger Lake B-series processors benefit from Thermal Velocity Boost (TVB), though.
Intel Tiger Lake B-Series Specifications
| Processor | Cores / Threads | Base / Boost / TVB Clocks (GHz) | L3 Cache (MB) | TDP (W) | Graphics | Graphics Base / Boost Clocks (MHz) | RCP |
|---|---|---|---|---|---|---|---|
| Core i9-11900KB | 8 / 16 | 3.3 / 4.9 / 5.3 | 24 | 65 | Intel UHD Graphics | 350 / 1,450 | $417 |
| Core i7-11700B | 8 / 16 | 3.2 / 4.8 / 5.3 | 24 | 65 | Intel UHD Graphics | 350 / 1,450 | ? |
| Core i5-11500B | 6 / 12 | 3.3 / 4.6 / 5.3 | 12 | 65 | Intel UHD Graphics | 350 / 1,450 | ? |
| Core i3-11100B | 4 / 8 | 3.6 / 4.4 / 5.3 | 12 | 65 | Intel UHD Graphics | 350 / 1,400 | ? |
Since the B-series parts all enjoy a 65W TDP, it stands to reason that they are faster than Intel’s recently announced 45W Tiger Lake-H processors. The 20W margin gives the B-series access to TVB, after all, which can be a difference-maker in certain workloads. According to Intel’s specification sheets, only the Core i9-11900KB and Core i7-11700B can be configured down to 55W; the Core i5-11500B and Core i3-11100B have a fixed 65W TDP.
The Core i9-11900KB is the only chip of the lot that comes with an unlocked multiplier. The octa-core processor features a 3.3 GHz base clock, 4.9 GHz boost clock and 5.3 GHz TVB boost clock. Although the Core i9-11900KB and the Core i9-11980HK share the same maximum 65W TDP, the former leverages TVB to boost to 5.3 GHz, 300 MHz higher than the latter.
Comparing tier to tier, we see higher base clocks on the B-series SKUs; the difference ranges from 400 MHz to 700 MHz, depending on which models you’re looking at. Obviously, TVB gives the B-series higher boost clocks on paper, but if we don’t take TVB into consideration, the improvement is small. For example, the Core i7-11700B has a 4.8 GHz boost clock, only 200 MHz higher than the Core i7-11800H, and the Core i5-11500B is rated for a 4.6 GHz boost clock, 100 MHz faster than the Core i5-11400H.
It seems that Intel only made improvements to the processing aspect of the B-series. The iGPU and Tiger Lake’s other features look untouched. Like Tiger Lake-H, the B-series also comes with native support for DDR4-3200 memory and a maximum capacity of 128GB. However, the B-series seems to offer less memory bandwidth. For comparison, Tiger Lake-H delivers up to 51.2 GBps of maximum memory bandwidth, while the B-series tops out at 45.8 GBps.
It’s unknown what Intel’s intentions are for the Tiger Lake B-series lineup. Given the 65W TDP, it’s reasonable to think that Intel launched the new processors to compete with AMD’s Ryzen 5000G (codename Cezanne) desktop APUs that will eventually make their way to the DIY market.
A seven-year-old flaw in DRAM chips is making another comeback. Google revealed this week that it’s discovered a new technique, Half-Double, that can be used to exploit the Rowhammer bug thought to have been fixed with the release of DDR4.
Rowhammer was discovered in 2014 when researchers showed that it was possible to manipulate data stored in DDR3 memory by repeatedly accessing (“hammering”) a single row of memory cells to cause bit flips in adjacent rows.
Manufacturers responded with Target Row Refresh (TRR) mitigations, but in March 2020, researchers showed that it was possible to bypass those protections in a paper titled “TRRespass: Exploiting the Many Sides of Target Row Refresh.”
But TRRespass still operated under the assumption that Rowhammer attacks were only capable of affecting rows of memory adjacent to the row being hammered. Google said that doesn’t seem to be the case, which is where Half-Double comes in.
“Unlike TRRespass, which exploits the blind spots of manufacturer-dependent defenses, Half-Double is an intrinsic property of the underlying silicon substrate,” Google said. “This is likely an indication that the electrical coupling responsible for Rowhammer is a property of distance, effectively becoming stronger and longer-ranged as cell geometries shrink down. Distances greater than two are conceivable.”
Google said it’s been working with JEDEC, a trade group devoted to open standards for the semiconductor industry that counts more than 300 companies among its members, and “other industry partners” to work on solutions to Rowhammer.
“We are disclosing this work because we believe that it significantly advances the understanding of the Rowhammer phenomenon, and that it will help both researchers and industry partners to work together, to develop lasting solutions,” Google said. “The challenge is substantial and the ramifications are industry-wide. We encourage all stakeholders (server, client, mobile, automotive, IoT) to join the effort to develop a practical and effective solution that benefits all of our users.”
This is the Raspberry Pi Pico on steroids. The power of the RP2040 with the extra conveniences that make creating projects a breeze.
For
+ Identical Pico pinout
+ Battery charging
+ Stemma QT / Qwiic connector
+ Large flash memory
+ USB C
Against
– Costs much more than a Pico
There are now a slew of RP2040-powered boards on the market, from the smallest, Adafruit’s QT Py RP2040 and Pimoroni’s Tiny 2040, to the largest, Adafruit’s Feather RP2040 and our Editor’s Choice, the Cytron Maker Pi Pico. The Raspberry Pi Pico itself is a $4 microcontroller that offers lots of GPIO pins and programmable IO that can be used to simulate many types of interfaces, even full retro computer systems.
The Raspberry Pi Pico form factor, a DIP package, is at home in a breadboard, in a protoboard or surface-mount soldered into your project, and Pimoroni’s $17 Pico LiPo shares that same form factor but adds many more features. The board is three times the price of a typical Raspberry Pi Pico, but that extra money is well spent: it provides a drop-in replacement for an existing Pico project with added features such as battery charging, a USB-C port, 16MB of flash memory and a Stemma QT / Qwiic connector. All of these extras make this board a joy to use. And use it we did!
Pimoroni Pico LiPo Hardware Specifications
System on Chip
RP2040 microcontroller chip designed by Raspberry Pi in the United Kingdom.
Dual-core Arm Cortex M0+ processor, flexible clock running up to 133 MHz.
264KB of SRAM, and 4 / 16MB of on-board Flash memory
8 × Programmable I/O (PIO) state machines for custom peripheral support.
Stemma QT / Qwiic connector
SWD debug breakout
Castellated module allows soldering directly to carrier boards.
Power
USB C for data and power
2-pin JST connector for LiPo / Li-ion batteries. Onboard battery monitoring and LED status indicator.
Design and Use of the Pimoroni Pico LiPo
Pico LiPo works great with MicroPython; Pimoroni has its own spin that comes with modules for its range of boards. To get the best from Pico LiPo, though, we need to use CircuitPython, especially when using Stemma QT / Qwiic components. If you really need MicroPython but want to use Stemma QT / Qwiic devices, you can try Adafruit’s latest project, which merges the two; Pimoroni even has a download ready to go that works with the Pico LiPo.
Pimoroni’s Pico LiPo is the Raspberry Pi Pico on steroids. It shares the same size and form factor along with the same GPIO pinout, but we also get battery charging, Stemma QT / Qwiic and a toggle power button. The most important feature on this board is the battery charging, handled by an MCP73831 charge controller at a steady 215mA charging current, which easily charged our LiPo battery as we tested the board.
The XB6096I2S battery protector prevents the battery from straying into voltages that may harm its health. There is no MicroPython or CircuitPython module for monitoring the battery in code, but GPIO 24 is used to detect charging, and GPIO 29 can be used to monitor the battery voltage. This does mean that we only have three analog inputs, the same as the Raspberry Pi Pico but fewer than Adafruit’s Feather RP2040. The sacrifice of an analog input is worth it when we consider that the pin can be used to monitor our battery status, a key feature of Pico LiPo.
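As a rough illustration, here’s a minimal CircuitPython sketch of that monitoring arrangement. It assumes the Pico LiPo’s CircuitPython build exposes the stock Pico pin names (GP24, and GPIO 29 as A3) and that GPIO 29 sits behind a 3:1 divider as on the stock Pico, so treat the pin mapping and scale factor as assumptions to verify against Pimoroni’s pinout.

```python
import board
import analogio
import digitalio

# GPIO 24 reads high while charging power is present (per the article).
charging = digitalio.DigitalInOut(board.GP24)
charging.direction = digitalio.Direction.INPUT

# GPIO 29 / ADC3 monitors the battery rail through a divider.
vsys = analogio.AnalogIn(board.A3)

def battery_volts():
    # 16-bit ADC reading scaled to the 3.3V reference, times the
    # assumed 3:1 divider.
    return (vsys.value / 65535) * 3.3 * 3

print("Charging:", charging.value)
print("Battery: %.2f V" % battery_volts())
```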
A great feature of the battery is that it can act as a basic UPS. Our project can be powered via the USB C interface, but should the power drop out, it switches to battery with zero downtime. Pico LiPo shares the same GPIO as the Raspberry Pi Pico which means we get all the pins, unlike other boards such as Adafruit’s Feather RP2040. But what the Pico LiPo shares with Adafruit’s and SparkFun’s boards is a Stemma QT connector (Qwiic on SparkFun boards) which makes connecting compatible devices exceptionally easy.
Stemma QT / Qwiic is really a bespoke breakout for I2C devices, and both Adafruit and SparkFun have a slew of compatible components such as temperature sensors, screens and capacitive inputs. Using our trusty MPR121 12 point capacitive touch sensor and the latest version of CircuitPython 7 for the Pico LiPo, we quickly hacked up a demo to test the Stemma QT connector.
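A condensed version of that kind of demo looks like the sketch below, using Adafruit’s CircuitPython MPR121 library; the exact GPIOs behind the Stemma QT / Qwiic connector are an assumption here, so check your board’s pinout before wiring up.

```python
import time
import board
import busio
import adafruit_mpr121  # Adafruit's CircuitPython MPR121 driver

# The Stemma QT / Qwiic connector is just I2C broken out; the pins
# below are assumptions -- confirm them against the board's pinout.
i2c = busio.I2C(board.GP5, board.GP4)  # SCL, SDA
touch = adafruit_mpr121.MPR121(i2c)

while True:
    for pad in range(12):
        if touch[pad].value:
            print("Pad", pad, "touched")
    time.sleep(0.1)
```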
Everything worked splendidly and we can see Stemma QT / Qwiic being adopted by many makers. Just next to the Stemma QT / Qwiic connector is a three pin JST-SH connector which breaks out the three debug pins, typically at the base of the Raspberry Pi Pico. These pins are used to pull data from a running RP2040 without cluttering the default UART port. Using these pins and another Raspberry Pi Pico as a debug host we can interactively work with the SRAM, CPU and memory mapped IO directly from our chosen development environment. If you are building mission critical RP2040 applications, then this is a key feature. For most of us, this is a fun feature to explore.
The power button is a toggle switch. That may not sound exciting straight away, but hear us out. The power button can fully turn off the board; it is not a momentary switch that merely resets the SoC. So in the field, with a battery powered project, we can conserve battery by simply pressing a button. When we need the board, press the power button to restart your project. Simple yet effective.
The BOOT button is normally used to put the Pico LiPo into a mode where the firmware can be installed, but Pico LiPo can also use that button in your code, a trend started by Pimoroni’s Tiny 2040 board. There are three LEDs present on the board, power (lightning icon), battery charging status (battery icon) and a user LED (exclamation point) connected to GPIO 25. All of these LEDs offer an at-a-glance status update.
As we mentioned earlier, the Pico LiPo shares the same pinout and castellations as a Raspberry Pi Pico, which means we can drop this board into an existing project and benefit from its extra features. We tested this by reusing our CircuitPython weather station project along with Pimoroni’s Pico Wireless pack. It worked exceptionally well: we queried the API, returned the data and stored it to the microSD card. We tested the project on battery, with a green LED informing us that the data collection was complete, and it worked with no issues.
Use Cases for the Pimoroni Pico LiPo
Pico LiPo provides the power of the Raspberry Pi Pico, and gives us so much more. The battery features alone make this board worth the money. Expect to see this board in portable projects such as props (NeoPixel lightsaber?), data collection projects using sensors and when joined to the Pico Wireless we have a battery powered Wi-Fi enabled data collection device. Pico LiPo would also be useful in robotics projects but an external power source would be needed for the motors and motor controller as the GPIO can only provide 3.3V at a maximum 600mA.
Bottom Line
Pimoroni’s Pico LiPo costs more than a typical Pico, but for the extra money we get a fully featured product: the power of the RP2040, all of the GPIO pins, and, as icing on the cake, the Stemma QT / Qwiic connector and battery charging. This is a truly excellent board that should be in your projects!
The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (Berkeley Lab) this week announced its new supercomputer, which will combine deep learning and simulation computing capabilities. The Perlmutter system will use AMD’s top-of-the-range 64-core EPYC 7763 processors as well as Nvidia’s A100 compute GPUs to push out up to 180 PetaFLOPS of ‘standard’ performance and up to four ExaFLOPS of AI performance. All told, that would make it the second-fastest supercomputer in the world behind Japan’s Fugaku.
“Perlmutter will enable a larger range of applications than previous NERSC systems and is the first NERSC supercomputer designed from the start to meet the needs of both simulation and data analysis,” said Sudip Dosanjh, director of NERSC.
NERSC’s Perlmutter supercomputer relies on HPE’s heterogeneous Cray Shasta architecture and is set to be delivered in two phases:
Phase 1 features 12 heterogeneous cabinets comprising 1,536 nodes. Each node packs one 64-core AMD EPYC 7763 ‘Milan’ CPU with 256GB of DDR4 SDRAM and four Nvidia A100 40GB GPUs connected via NVLink. The system uses a 35PB all-flash storage subsystem with 5TB/s of throughput.
The first phase of NERSC’s Perlmutter can deliver 60 FP64 PetaFLOPS of performance for simulations, and 3.823 FP16 ExaFLOPS of performance (with sparsity) for analysis and deep learning. The system was installed earlier this year and is now being deployed.
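Those headline figures line up with the accelerator count. A quick sanity check, using Nvidia’s published A100 peaks (9.7 TFLOPS FP64 and 624 TFLOPS FP16 tensor with sparsity; both theoretical, not sustained, numbers):

```python
# Phase 1 accelerator count and back-of-the-envelope peak throughput.
nodes, gpus_per_node = 1536, 4
gpus = nodes * gpus_per_node              # 6,144 A100s

print(gpus * 9.7 / 1000)   # ~59.6 FP64 PetaFLOPS, i.e. the quoted ~60
print(gpus * 624 / 1e6)    # ~3.83 FP16 ExaFLOPS, i.e. the quoted 3.823
```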
While 60 FP64 PetaFLOPS puts Perlmutter into the Top 10 list of the world’s most powerful supercomputers, the system does not stop there.
Phase 2 will add 3,072 AMD EPYC 7763-based CPU-only nodes with 512GB of memory per node that will be dedicated to simulation. FP64 performance for the second phase will be around 120 PFLOPS.
When the second phase of Perlmutter is deployed later this year, the combined FP64 performance of the supercomputer will total 180 PFLOPS, which will put it ahead of Summit, the world’s second most powerful supercomputer. However, it will still trail Japan’s Fugaku, which weighs in at 442 PFLOPS. Meanwhile, in addition to formidable throughput for simulations, Perlmutter will offer nearly four ExaFLOPS of FP16 throughput for AI applications.
After months of build-up, we finally see a GPU-Z validation (courtesy of Matthew Smith) for Nvidia’s looming GeForce RTX 3080 Ti graphics card. Barring any last-second surprises, the information from the validation entry should be the final specifications for the GeForce RTX 3080 Ti.
If you haven’t been following the rumor mill, the GeForce RTX 3080 Ti utilizes the GA102 silicon. The submission fails to specify the exact die revision, but we expect it to come with the new silicon that has the improved Ethereum anti-mining limiter. With 80 enabled Streaming Multiprocessors (SMs), the Ampere graphics card debuts with 10,240 CUDA cores, 320 Tensor cores and 80 RT cores. The reference clock speeds appear to be 1,365 MHz base and 1,665 MHz boost. With these specifications, the GeForce RTX 3080 Ti pushes out a single-precision performance of up to 34.1 TFLOPs.
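That 34.1 TFLOPs figure follows directly from the shader count and boost clock, since each Ampere CUDA core retires two FP32 operations per clock via fused multiply-add:

```python
# Peak FP32 throughput: CUDA cores x 2 FLOPs per clock (FMA) x boost clock.
cuda_cores = 10240
boost_ghz = 1.665
print(cuda_cores * 2 * boost_ghz / 1000)  # ~34.1 TFLOPs
```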
For comparison, the GeForce RTX 3080 and GeForce RTX 3090 offer around 29.77 TFLOPs and 35.58 TFLOPs, respectively. If we look at single-precision performance figures alone, the GeForce RTX 3080 Ti is up to 14.5% faster than a GeForce RTX 3080. At the same time, the flagship GeForce RTX 3090 is only 4.3% faster than a GeForce RTX 3080 Ti, which explains the close proximity between the two graphics cards in Geekbench 5.
Nvidia GeForce RTX 3080 Ti Specifications
| | GeForce RTX 3090 | GeForce RTX 3080 Ti* | GeForce RTX 3080 | GeForce RTX 3070 |
|---|---|---|---|---|
| Architecture (GPU) | Ampere (GA102) | Ampere (GA102) | Ampere (GA102) | Ampere (GA104) |
| CUDA Cores / SP | 10,496 | 10,240 | 8,704 | 5,888 |
| RT Cores | 82 | 80 | 68 | 46 |
| Tensor Cores | 328 | 320 | 272 | 184 |
| Texture Units | 328 | 320 | 272 | 184 |
| Base Clock Rate | 1,395 MHz | 1,365 MHz | 1,440 MHz | 1,500 MHz |
| Boost Clock Rate | 1,695 MHz | 1,665 MHz | 1,710 MHz | 1,730 MHz |
| Memory Capacity | 24GB GDDR6X | 12GB GDDR6X | 10GB GDDR6X | 8GB GDDR6 |
| Memory Speed | 19.5 Gbps | 19 Gbps | 19 Gbps | 14 Gbps |
| Memory Bus | 384-bit | 384-bit | 320-bit | 256-bit |
| Memory Bandwidth | 936 GBps | 912.4 GBps | 760 GBps | 448 GBps |
| ROPs | 112 | 112 | 96 | 96 |
| L2 Cache | 6MB | 6MB | 5MB | 4MB |
| TDP | 350W | 350W | 320W | 220W |
| Transistor Count | 28.3 billion | 28.3 billion | 28.3 billion | 17.4 billion |
| Die Size | 628 mm² | 628 mm² | 628 mm² | 392 mm² |
| MSRP | $1,499 | $999 – $1,099 | $699 | $499 |
*Specifications are unconfirmed.
Looking at the memory system, the GeForce RTX 3080 Ti pairs 12GB of GDDR6X memory running at 19 Gbps with a 384-bit memory interface, for a maximum theoretical memory bandwidth of up to 912.4 GBps. That’s 20% more bandwidth than a GeForce RTX 3080 and only 2.5% less than the GeForce RTX 3090.
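The bandwidth math is straightforward: per-pin data rate times bus width in bytes. (The quoted 912.4 GBps implies an effective rate fractionally above 19 Gbps.)

```python
# Peak memory bandwidth: data rate (Gbps) x bus width (bits) / 8 bits per byte.
print(19.0 * 384 / 8)   # RTX 3080 Ti: 912 GBps
print(19.0 * 320 / 8)   # RTX 3080: 760 GBps
print(19.5 * 384 / 8)   # RTX 3090: 936 GBps
```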
The GPU-Z validation submission doesn’t specify the GeForce RTX 3080 Ti’s TDP (thermal design power) rating. However, there is heavy speculation that the Ampere graphics card could max out at 350W, which is the same thermal limit for the GeForce RTX 3090.
Nvidia most likely produced the GeForce RTX 3080 Ti to cross swords with AMD’s Radeon RX 6900 XT at the $999 price bracket. The GeForce RTX 3090 was already a formidable opponent for the Radeon RX 6900 XT. However, the flagship Ampere part’s $1,499 price tag dissuaded consumers from taking the graphics card into consideration. At a rumored price range between $999 and $1,099, the GeForce RTX 3080 Ti should be a very attractive option. The Ampere offering has yet to prove its worth beside the Radeon RX 6900 XT, though.
Hopefully, we won’t have to wait long to find out whether the rumored dates are accurate. The GeForce RTX 3080 Ti may see an official announcement on May 31, with retail availability following on June 4, although exact pricing remains a mystery.
Unisantis Electronics, a startup led by Fujio Masuoka, the inventor of NAND flash memory, has developed Dynamic Flash Memory (DFM), a volatile type of memory that promises four times the density of dynamic random access memory (DRAM) along with higher performance and lower power consumption.
DRAM relies on arrays of charge-storage cells consisting of one capacitor and one access transistor per data bit. A cell’s capacitor is charged when a ‘1’ is written and discharged when a ‘0’ is written, with the transistor acting as the switch that connects the capacitor to its bitline. The arrays are arranged in horizontal wordlines and vertical bitlines. Each column of cells is connected to a pair of complementary (‘+’ and ‘−’) bitlines with their own sense amplifier, which is used to read data from and write data to the cells. Both read and write operations act on an entire wordline at a time, so it is impossible to address a single bit.
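To make that row-at-a-time behavior concrete, here is a hypothetical toy model (illustrative Python, not real controller code; names like ToyDram are invented) showing that touching any single bit first requires latching the whole wordline into the sense amplifiers and later writing it back:

```python
# Toy model of a DRAM array: one capacitor + one access transistor per bit.
# It illustrates why reads and writes operate on a whole wordline (row):
# single bits are only reachable through the row latched in the sense amps.

class ToyDram:
    def __init__(self, rows: int, cols: int):
        # True = capacitor charged ('1'), False = discharged ('0')
        self.cells = [[False] * cols for _ in range(rows)]
        self.row_buffer = None  # the sense amplifiers' latched copy of a row
        self.open_row = None

    def activate(self, row: int) -> None:
        """Assert a wordline: destructively read the entire row into the latches."""
        self.row_buffer = self.cells[row][:]
        self.cells[row] = [False] * len(self.cells[row])  # charge is drained
        self.open_row = row

    def precharge(self) -> None:
        """Close the row: restore (write back) the latched row to the capacitors."""
        if self.open_row is not None:
            self.cells[self.open_row] = self.row_buffer[:]
            self.open_row, self.row_buffer = None, None

    def read_bit(self, row: int, col: int) -> bool:
        if self.open_row != row:
            self.precharge()
            self.activate(row)  # a single bit still costs a full-row operation
        return self.row_buffer[col]

    def write_bit(self, row: int, col: int, value: bool) -> None:
        if self.open_row != row:
            self.precharge()
            self.activate(row)
        self.row_buffer[col] = value  # committed to the cells on precharge

dram = ToyDram(rows=4, cols=8)
dram.write_bit(2, 5, True)
dram.precharge()
print(dram.read_bit(2, 5))  # True
```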
Throughout the history of DRAM, manufacturers have focused on making memory cells smaller by applying new cell structures and process technologies in a bid to increase capacity, reduce power consumption, and improve performance.
Unisantis’ Dynamic Flash Memory uses a Dual Gate Surrounding Gate Transistor (SGT) to eliminate the capacitor and adopts a 4F² gain-cell layout (smaller than the 6F² layout used by DRAM today), which increases bit density by up to four times compared to DRAM. DFM is not the industry’s first capacitor-less type of random access memory (RAM), but previous attempts were unsuccessful.
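For perspective, the move from a 6F² to a 4F² cell by itself accounts for only a 1.5× density gain (F, the process feature size, cancels out of the comparison); the remainder of the claimed 4× advantage presumably comes from the capacitor-less gain cell and the vertical SGT structure. A minimal sketch of that arithmetic:

```python
# Cell footprint = layout factor x F^2, where F is the process feature size.
# F^2 cancels when comparing layouts, so the ratio holds at any node.
def layout_density_gain(old_factor: float, new_factor: float) -> float:
    return old_factor / new_factor

print(layout_density_gain(6, 4))  # 1.5x from the 6F2 -> 4F2 cell alone
```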
According to Unisantis, unlike ZRAM (where the margin between ‘1’ and ‘0’ was too narrow), its DFM delivers a significantly wider ‘1’/‘0’ margin, which increases speed and improves the reliability of the memory cell. DFM uses the PL (Plate Line) gate to ‘stabilize’ the FB (Floating Body) by separating the ‘1’ write and ‘0’ erase modes, Unisantis says.
Unisantis is an IP licensing company that does not produce memory or commercialize its technologies. The company’s DFM will only come to market if Unisantis manages to persuade the industry (namely SoC and memory makers) to adopt the technology. Since DFM uses conventional CMOS materials and does not require particularly sophisticated manufacturing methods, it may indeed be commercialized. Meanwhile, the company’s Dual Gate Surrounding Gate Transistor (SGT) IP could be licensed by various parties that want to take advantage of GAAFET-type transistors.
The DFM technology was described by its inventors, Drs. Koji Sakui and Nozomu Harada, earlier this month at the 13th IEEE International Memory Workshop.
Acer is bringing Intel’s new high-powered 11th Gen Tiger Lake chips to its gaming laptops, alongside new mobile RTX 3000-series GPU options and a new design for one laptop in particular. New, more budget-friendly RTX 3000 gaming desktops are also joining those mobile options, giving gamers who haven’t yet managed to get hold of Ampere another avenue to buy an RTX 3000-series GPU.
Specs | Predator Triton 500 SE | Predator Helios 500 | Predator Orion 3000 | Acer Nitro 50
CPU | Up to 11th Gen Intel Core i9 H-series | Up to 11th Gen Intel Core i9 HK-series | 11th Gen Intel Core i7 | Up to AMD Ryzen 9 5900 or 11th Gen Intel Core i7
GPU | Up to RTX 3080 | Up to RTX 3080 | RTX 3070 | RTX 3060 Ti
Memory | Up to 64GB DDR4-3200 | Up to 64GB DDR4-3200 | Up to 64GB DDR4-3200 | Up to 64GB DDR4-3200
Storage | Up to 4TB PCIe 4.0 NVMe SSD | 2x up to 1TB PCIe 4.0 NVMe SSDs with 1x up to 2TB SATA HDD | Up to 1TB PCIe 4.0 NVMe SSD and 3TB HDD | Up to 1TB PCIe NVMe SSD and 3TB HDD
Display | 16 inches, 16:10 FHD: 165 Hz Mini LED, 165 Hz LCD, or 240 Hz IPS | 17.3 inches: 4K @ 120 Hz Mini LED with HDR, or FHD @ 360 Hz | N/A | N/A
Starting Price | $1,749 | $2,499 | $1,199 | $949
NA Release Date | June | August | July | July
Acer Predator Triton 500 SE
Aesthetically, the star of the show here is the new Acer Predator Triton 500 SE. This model brings the general look and feel of the silvery Predator Triton 300 SE to a larger laptop. Based on photos Acer shared, the chassis seems to be somewhat darker without being fully black. We praised the Triton 300 SE for its power-to-size ratio, but the new Predator Triton 500 SE is likely to focus more on power.
Speaking of power, the Predator Triton 500 SE packs up to an 11th Gen Intel Core i9 H-series processor (so no overclocking), plus up to an RTX 3080 mobile GPU and up to 64GB of DDR4-3200 RAM. Storage is all PCIe NVMe and can go up to 4TB, while the 16-inch display embraces the new 16:10 trend with an FHD screen that can reach up to 240 Hz. That top refresh rate is only available on the IPS model, not the 165 Hz LCD and Mini LED models, and Acer hasn’t said what panel type the 165 Hz LCD option uses.
Acer Predator Helios 500 Refresh
If you want something a little larger and with a very premium display (especially for a laptop), you may want to opt for the refreshed Predator Helios 500. We got to spend some time with a configuration of the Acer Predator Helios 500, and what stood out most for us was its Mini LED display option. You can read about our experience in our Acer Predator Helios 500 hands-on article.
But long story short, this model maintains the same look as the current Predator Helios 500 but upgrades the internals to bring them in line with Intel and Nvidia’s latest offerings: up to an 11th Gen Intel Core i9 HK-series CPU (overclocking is a go!) paired with up to an RTX 3080 mobile GPU.
Besides Mini LED, you can also get the laptop with an FHD panel that uses a typical LED backlight and offers a 360 Hz refresh rate, the fastest available on laptop displays today.
New Acer Predator Gaming Desktops
Acer’s new attempts at the best gaming PCs in desktop form are a bit more constrained. The Predator Orion 3000 is limited to an 11th Gen Intel Core i7 with an RTX 3070, but it does let you customize storage and memory: storage maxes out at 1TB of PCIe NVMe supplemented by up to 3TB of HDD space, while memory can go up to 64GB of DDR4-3200 RAM.
The Acer Nitro 50 series carries the sole AMD gaming machine (the N50-120) Acer announced today, as well as an Intel model (the N50-620).
The AMD model has up to an AMD Ryzen 9 5900, while the Intel model has up to an 11th Gen Intel Core i7 CPU. Either way, you’ll get an RTX 3060 Ti for your GPU. Storage and memory options are the same as what’s available on the Predator Orion 3000.
Prices and Release Dates
The Predator Triton 500 SE will be the first of these laptops to hit U.S. store shelves. It launches today at Best Buy (and in June everywhere else) for a starting price of $1,749. The desktops will both follow in July, with the Predator Orion 3000 starting at $1,199 and the Intel Nitro 50 model starting at $949.
Finally, the Predator Helios 500 will launch in August for a starting price of $2,499.
We don’t yet have a release date or starting price for the Nitro 50 AMD model.