Intel has quietly added its Xeon W-1300 series Rocket Lake processors for entry-level workstations to its database, effectively giving the chips a silent introduction aimed at businesses. Unlike their desktop counterparts, the new Xeon W CPUs support ECC memory and come with Xe-LP graphics backed by drivers certified for professional applications.
The new Xeon W-1300 CPUs feature six or eight cores based on the Cypress Cove microarchitecture, a built-in GPU with 32 EUs featuring the Xe-LP architecture, AVX-512 support, 20 PCIe 4.0 lanes, and DDR4-3200 support. In general, the new processors for entry-level workstations provide the same benefits as Intel’s 11th Gen Core parts provided for desktops, including higher general-purpose performance, improved graphics, and faster PCIe support.
Intel’s Xeon W-1300-series ‘Rocket Lake’ CPU family includes seven models with a 35W (T-series), 80W or 125W (P-series) TDP that are compatible with LGA1200 motherboards based on Intel’s W480 (with the latest BIOS updates) as well as W580 chipsets. The parts featuring a 35W and 125W TDP have the same clocks as their Core-branded siblings, whereas 80W SKUs are faster than Core processors rated for a 65W TDP.
The new Xeon W-1300 series processors share their design and architecture with Intel’s latest 11th Generation Core processors and feature the same core and clock configurations (except for the 80W models). In addition, all Xeon W-1300 CPUs come with Intel’s UHD Graphics P750 with 32 EUs and support up to 128GB of DDR4-3200 memory with ECC. In general, Xe-LP-based P-series graphics (which essentially means driver certifications for more than 15 popular CAD and graphics applications) and ECC support are the primary features that differentiate Intel’s Xeon W-1300-series CPUs from Intel’s 11th Generation Core products. Naturally, these features command a price premium.
Intel Xeon W-1300-Series ‘Rocket Lake’ CPUs
Model | Cores/Threads | Base Clock | Max Turbo | TDP | iGPU | RCP
Xeon W-1390P | 8/16 | 3.50 GHz | 5.30 GHz | 125W | UHD Graphics P750 | $539
Xeon W-1390 | 8/16 | 2.80 GHz | 5.20 GHz | 80W | UHD Graphics P750 | $494
Xeon W-1390T | 8/16 | 1.50 GHz | 4.90 GHz | 35W | UHD Graphics P750 | $494
Xeon W-1370P | 8/16 | 3.60 GHz | 5.20 GHz | 125W | UHD Graphics P750 | $428
Xeon W-1370 | 8/16 | 2.90 GHz | 5.10 GHz | 80W | UHD Graphics P750 | $362
Xeon W-1350P | 6/12 | 4.00 GHz | 5.10 GHz | 125W | UHD Graphics P750 | $311
Xeon W-1350 | 6/12 | 3.30 GHz | 5.00 GHz | 80W | UHD Graphics P750 | $255
At present, the list of Intel’s Xeon W-1300 series features five eight-core models equipped with a 16MB LLC (W-1390P, W-1390, W-1390T, W-1370P, W-1370) and two six-core models featuring a 12MB LLC (W-1350P, W-1350). Intel’s Xeon W parts are not compatible with platforms based on Intel’s B, H, Q, and Z-series chipsets.
Intel typically isn’t known for listing chips in its database if they aren’t either currently available or expected to land in the very near future. As such, we expect workstations based on the new Xeon W-1300 CPUs to arrive in the coming months.
Intel’s Iris Xe DG1 may be shaping up to be a disappointment, but the chipmaker’s approaching Xe-HPG DG2 GPU could be a solid performer. German publication Igor’s Lab (Igor Wallossek) has shared the alleged specifications for the DG2 in its desktop and mobile formats.
The Xe-HPG DG2 block diagram seemingly suggests that Intel had originally planned to pair the GPU with its Tiger Lake-H chips, which are rumored to launch next week. It would seem that Intel didn’t make the window for Tiger Lake-H, however, as Wallossek claims that the chipmaker will use the DG2 for Alder Lake-P instead. The DG2 reportedly features the BGA2660 package.
Apparently, the DG2 was supposed to communicate with Tiger Lake-H through a high-speed PCIe Gen 4.0 x12 interface. The 12-lane connection is a bit unorthodox, so it’s uncertain if that was a typo. The DG2 would be the first GPU to offer DisplayPort 2.0 support. Oddly, the GPU only supports HDMI 2.0 and not HDMI 2.1. However, Wallossek did mention that this was an outdated diagram and DG2 could perhaps come with HDMI 2.1.
Wallossek shared a drawing of the board layout for a Tiger Lake-H chip that’s accompanied by the DG2. We spotted a total of six memory chips. Evidently, only two of the memory chips are actually attached to the DG2. This would mean that the remaining four memory chips are probably soldered memory chips for the system.
Nevertheless, we can’t discard the possibility that all six memory chips are for the DG2. The leaked specifications suggest that the DG2 can leverage up to 16GB of GDDR6 memory.
Intel Xe-HPG DG2 GPU Specifications
Spec | SKU 1 | SKU 2 | SKU 3 | SKU 4 | SKU 5
Package Type | BGA2660 | BGA2660 | BGA2660 | TBC | TBC
Supported Memory Technology | GDDR6 | GDDR6 | GDDR6 | GDDR6 | GDDR6
Memory Speed | 16 Gbps | 16 Gbps | 16 Gbps | 16 Gbps | 16 Gbps
Interface / Bus | 256-bit | 192-bit | 128-bit | 64-bit | 64-bit
Memory Size (Max) | 16 GB | 12 GB | 8 GB | 4 GB | 4 GB
Smart Cache Size | 16 MB | 16 MB | 8 MB | TBC | TBC
Graphics Execution Units (EUs) | 512 | 384 | 256 | 196 | 128
Graphics Frequency (High), Mobile | 1.1 GHz | 600 MHz | 450 MHz | TBC | TBC
Graphics Frequency (Turbo), Mobile | 1.8 GHz | 1.8 GHz | 1.4 GHz | TBC | TBC
TDP Mobile (Chip Only) | 100W | 100W | 100W | TBC | TBC
TDP Desktop | TBC | TBC | TBC | TBC | TBC
Wallossek listed a total of five potential DG2 GPUs. The SKU 1, SKU 2 and SKU 3 could be considered the high-performance versions, while the SKU 4 and SKU 5 are likely the entry-level models. They have one common denominator though. Regardless of the model, the DG2 allegedly utilizes 16 Gbps GDDR6 memory chips. The GPU alone should consume up to 100W, maybe around 125W if we factor in the GDDR6 memory chips. The desktop variants of the DG2 might arrive with a TDP over 200W.
The flagship DG2 GPU seemingly has 512 EUs that can clock up to 1.8 GHz. This particular model is equipped with 16GB of 16 Gbps GDDR6 memory across a 256-bit memory interface. This works out to 512 GBps of memory bandwidth.
The budget DG2 SKUs are limited to 196 and 128 EUs. The boost clock speeds are unknown for the moment. The memory configuration consists of 4GB of 16 Gbps GDDR6 memory that communicates through a 64-bit memory bus, so the maximum memory bandwidth on these models is 128 GBps.
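For reference, those bandwidth figures follow directly from the leaked memory speed and bus width. Here is a minimal sketch of the arithmetic in Python, using the DG2 numbers quoted above:

```python
# Peak memory bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte.
def gddr6_bandwidth(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a GDDR6 memory configuration."""
    return data_rate_gbps * bus_width_bits / 8

print(gddr6_bandwidth(16, 256))  # flagship SKU: 512.0 GB/s
print(gddr6_bandwidth(16, 64))   # entry-level SKUs: 128.0 GB/s
```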
Assuming that Wallossek’s time frame is accurate, production of the SKU 4 and SKU 5 models should start between late October and early December. He thinks they may be ready just in time for the Christmas holidays. Production of the SKU 1 through SKU 3 models should start between December 2021 and early March 2022.
Samsung, SK Hynix, and Micron allegedly used their control over the memory market to fix DRAM pricing, accuses a new class-action lawsuit from Hagens Berman.
If you’re experiencing some deja vu, that’s because Hagens Berman filed similar suits in 2002 and 2018. The first was settled for $345 million in 2006; the second was dismissed by the U.S. District Court for the Northern District of California in 2020.
The firm said in 2018 that “An investigation has revealed that a group of the largest electronics manufacturers that produce dynamic random-access memory (DRAM) may have agreed to collectively raise the price of memory used in mobile phones and computers from 2016-2017, illegally inflating the price paid by consumers.”
Hagens Berman also said it “believes that those who unknowingly paid high prices for their computers and mobile devices deserve compensation for the greed and wrongdoing of these major electronics corporations.”
There’s no denying that DRAM prices increased more than anyone would have guessed in 2016 and 2017. IC Insights said in July 2017 that DRAM pricing rose 111% year-over-year and predicted that it would rise another 40% before the year ended.
Those higher prices resulted from a combination of increasing demand for DRAM and Samsung, SK Hynix, and Micron’s unwillingness to up their bit output in response. (The exact same thing is expected to happen throughout this year.)
The problem is proving the companies fixed DRAM pricing, and given that Hagens Berman’s previous suit was dismissed, it seems like that’s harder to prove than one might expect given the companies’ control over the DRAM market.
It’s not clear what changes Hagens Berman made after the 2018 suit was dismissed, but BusinessKorea reported that the firm filed a follow-up suit in the same U.S. District Court on May 3, which shows it’s not ready to give up on the case just yet.
According to IC Insights, Samsung is soon expected to make a comeback in the semiconductor industry and once again overtake Intel as the largest semiconductor manufacturer in the world. The change is predicted to take place around Q2 2021, which isn’t far off.
Intel has long been a dominant player in the semiconductor industry and currently holds the longest run as the number one semiconductor manufacturer in the world, starting in 1993 and lasting all the way through 2016.
It took 23 years before Samsung finally displaced Intel from the top spot in 2017, thanks to growing memory sales during that time. It was a good moment for competition, and it finally proved Intel could be beaten by another competitor in the semiconductor industry.
It should come as no surprise that Samsung was the company to beat Intel; over the past decade, Samsung has become a mega-corporation in the tech industry, establishing itself as the world’s leading DRAM and NAND flash manufacturer while also producing many other devices such as TVs, phones, and smart home appliances.
But Samsung’s lead was short-lived: after just two quarters, the company suffered a 17% drop in revenue due to a sharp decline in memory sales, allowing Intel to regain the number one position in 2018.
Luckily for Samsung, Intel’s sales have mostly flatlined since 2020, leading to a minor decline in revenue. This has allowed Samsung, with its slow but continuous increases in revenue, to almost match Intel in sales performance over the past few months.
If this trend continues, Samsung should once again displace Intel as the lead semiconductor manufacturer.
The NH-L9x65 provides all the impressive Noctua engineering you’d expect in an exceptionally tiny footprint. And while it’s expensive, unlike many other compact downdraft cooler designs it boasts 100% PCIe card and memory compatibility on Intel platforms.
For
+ Quality build and design
+ Simple to install
+ 100% memory and graphics cards clearance on ITX platforms
Against
– Price
– Fan noise at 100% PWM
Features and Specifications
Noctua is held in high regard due to exceptional build quality and some of the best large heatpipe air coolers money can buy. But armed with a 14mm-thin fan just 90mm in diameter, how does the compact Noctua NH-L9x65 perform in small system builds while squeezing into a footprint of only 95 x 95mm and a height of just 65mm (3.74 x 3.74 x 2.56 inches)? The majority of Noctua’s line focuses on large, nearly silent cooling behemoths like the NH-D15, but is the same possible in a pint-sized package?
Specifications
Height: 2.0″ / 51mm (2.55″ / 65mm with fan)
Width: 3.75″ / 95mm
Depth: 3.75″ / 95mm
Memory Clearance: No limit
Assembly Offset: 0
Cooling Fans: (1x) 90mm x 14mm
Connectors: (1x) 4-pin PWM
Weight: 15.0 oz / 425g
Intel Sockets: LGA 115x, 1200, 2011x (square ILM), 2066
AMD Sockets: AM2(+), AM3(+), AM4, FM1, FM2(+) (some AMD sockets may require a backplate)
Warranty: 6 years
Web Price: $55
Features
The Noctua NH-L9x65 might be incredibly compact, but the boxed contents include a wide array of components to allow for most Intel and AMD desktop processor sockets, although some AMD chips might require an applicable backplate (see Noctua’s product site for details). Most of the mounting hardware is nickel plated, and a 4-pin PWM speed reduction cable and angled screwdriver are included in the box.
Also per the Noctua norm, an enamel and nickel-plated metal logo badge comes standard, as does a tube of the company’s NT-H1 thermal compound. Noctua covers the NH-L9x65 with a generous 6-year warranty.
The heatsink of the NH-L9x65 features four heatpipes which are tightly angled to wrap into the stack of 50 cooling fins overhead. This design gives the cooler a compact footprint, but also means that due to the 95mm cooler length and width, the overall cooling fin stack has less surface area to dissipate heat than larger downdraft coolers.
Airflow over the NH-L9x65 is provided by the NF-A9x14 PWM fan (90 x 14mm). This small, slim fan is rated for up to 2500 RPM, has built-in rubber mounting pads and uses a 4-pin PWM header. A pair of wire clips secure and support the fan during use.
The NH-L9x65 features a milled, nickel-plated base, which clamps each of the four heatpipes against the cooler’s integrated mounting plate. On either side of the mounting plate sits a spring-loaded machine screw, which tensions the cooler against the mounting crossbars and simplifies alignment over the CPU during installation.
The base of the NH-L9x65 is milled flat, preventing any light from peeking beneath our steel ruler held flat against the base. Also visible from this angle are the small tolerances between the cooler base, heatpipes and the heatsink fins themselves, showing just how compact this component truly is.
Mounting and removing the NH-L9x65 reveals a uniform spread of our standard MX-4 testing paste. Concentric circles show that the cooler is centered and that the mount allows for equal tensioning, provided the installer takes their time and applies pressure evenly to each side.
With the mounting hardware installed and the cooler positioned over the crossbars, getting the Noctua NH-L9x65 into place is as easy as aligning the machine tension screws over the threaded mounting posts. We can also see how the cooler claims an unlimited memory DIMM clearance by lining up just a few millimeters from the installed system RAM.
The fan clips to the heatsink to complete the downdraft airflow design of the NH-L9x65.
We have with us the ASRock Radeon RX 6900 XT OC Formula, the company’s new flagship graphics card, positioned a notch above even the RX 6900 XT Phantom Gaming. This sees the company revive its topmost “OC Formula” brand co-developed by Nick Shih, which represents the company’s boutique range of motherboards and graphics cards for professional overclockers taking a crack at world records of all shapes and sizes. What triggered the company to come out with another RX 6900 XT-based graphics card in particular is a concerted push by AMD to preempt NVIDIA’s rumored GeForce RTX 3080 Ti, an SKU slotted between the RTX 3080 and RTX 3090.
The Radeon RX 6900 XT GPU at the heart of the ASRock RX 6900 XT OC Formula isn’t the same chip as the one in the RX 6900 XT Phantom Gaming. AMD refers to this silicon as the Navi 21 “XTXH”. It is the highest bin of the Navi 21, designed to sustain up to 10% higher clock speeds than the regular RX 6900 XT. With its default “performance” BIOS, the RX 6900 XT OC Formula can now boost up to 2475 MHz, and achieve game clocks of up to 2295 MHz. The reference AMD Radeon RX 6900 XT sustains only up to 2250 MHz boost, and 2015 MHz game clocks, while ASRock’s previous RX 6900 XT-based flagship, the RX 6900 XT Phantom Gaming, does 2340 MHz boost, with 2105 MHz game clocks. Compared to the reference design, that’s exactly a 10 percent OC from ASRock.
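For anyone who wants to check the math, here is a quick Python sketch of how those factory-overclock percentages work out from the clock speeds quoted above:

```python
# Reference RX 6900 XT clocks vs. ASRock's two custom cards (MHz).
reference = {"boost": 2250, "game": 2015}
cards = {
    "RX 6900 XT Phantom Gaming": {"boost": 2340, "game": 2105},
    "RX 6900 XT OC Formula": {"boost": 2475, "game": 2295},
}

for name, clocks in cards.items():
    boost_oc = (clocks["boost"] / reference["boost"] - 1) * 100
    game_oc = (clocks["game"] / reference["game"] - 1) * 100
    print(f"{name}: +{boost_oc:.1f}% boost, +{game_oc:.1f}% game clock")
# The OC Formula works out to +10.0% boost and +13.9% game clock over reference.
```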
The AMD Radeon RX 6900 XT is AMD’s current-generation flagship graphics card, which, along with the RX 6800 series, propelled the company back to the big leagues of enthusiast-segment graphics cards against NVIDIA. The RX 6900 XT is powered by AMD’s RDNA2 graphics architecture, which is its first to feature full DirectX 12 Ultimate readiness, including real-time raytracing. The RDNA2 architecture transcends platforms, and also powers the latest PlayStation 5 and Xbox Series X/S consoles, which makes it easier for game developers to optimize for the architecture on the PC.
At the heart of the RX 6900 XT is the 7 nm Navi 21 silicon, which it maxes out. This chip features 5,120 stream processors spread across 80 RDNA2 compute units, 80 Ray Accelerators (components that accelerate raytracing), 288 TMUs, 128 ROPs, and an impressive 16 GB of GDDR6 memory. This memory, however, runs across a 256-bit wide memory interface. AMD attempted to shore up bandwidth by using the fastest JEDEC-standard 16 Gbps memory chips and deploying Infinity Cache, a 128 MB last-level cache on the GPU die, which speeds up transfers between the GPU and the memory by acting as a scratchpad. Together with the GDDR6 memory, Infinity Cache unleashes effective memory bandwidth of up to 2 TB/s.
The ASRock RX 6900 XT OC Formula features the company’s most opulent custom board design to date, with a large triple-slot, triple-fan cooling solution that’s packed with innovations; the company’s most over-the-top power-delivery solution ever on a graphics card; and design optimization for professional overclocking using liquid or extreme cooling methods. The Navi 21 XTXH silicon not only sustains boost frequencies better, but is also designed for better overclocking headroom than the original Navi 21 XTX powering the reference RX 6900 XT. In this review, we take our first look at this exotic new graphics card to tell you whether ASRock has tangibly improved performance over the reference RX 6900 XT, and whether it gets any closer to the RTX 3090.
Kingston has sent over the brand’s overclockable DDR5 memory modules to its motherboard partners for qualification. The company plans to ship the new DDR5 products in the third quarter of this year to compete with the best RAM on the market.
Kingston’s DDR5 memory is equipped with an XMP profile for an easy and fast setup. In addition, the memory modules feature a programmable PMIC (power management integrated circuit), so motherboard partners can have some fun with them. The standard operating voltage for DDR5 is 1.1V; however, an adjustable PMIC allows vendors to push the memory modules beyond JEDEC’s baseline.
DDR5 not only pushes the speed limit, but also the capacity envelope. Some memory vendors already have DDR5-10000 on the drawing board, while others are aiming for 512GB memory modules. We’ve already gotten a first taste of what DDR5 brings to the table in some early RAM benchmarks, and it looks very promising.
Intel’s 12th Generation Alder Lake processors are rumored to be the first consumer chips to embrace DDR5. Although the chipmaker hasn’t committed to a specific date, Alder Lake production is scheduled to ramp up in the second half of this year. If we’re optimistic, the first Alder Lake chips could land in late 2021 or maybe early 2022. So while Kingston expects to ship DDR5 memory modules in the third quarter of the year, consumers might not be able to exploit them until later in the year, barring any setbacks.
The server and data center market, on the other hand, will welcome DDR5 with open arms. Intel’s looming 4th Generation Xeon Scalable (Sapphire Rapids) will arrive with DDR5 support. AMD’s next generation of EPYC chips (Genoa) will launch this year as well. AMD has stated that Genoa will support “new memory,” likely referring to DDR5.
Reviews for Capcom’s Resident Evil Village have gone live, and we’re taking the opportunity to look at how the game runs on the best graphics cards. We’re running the PC version on Steam, and while patches and future driver updates could change things a bit, both AMD and Nvidia have provided Game Ready drivers for REV.
This installment in the Resident Evil series adds DirectX Raytracing (DXR) support for AMD’s RDNA2-based RX 6000 cards and Nvidia’s RTX cards — both the Ampere and Turing architectures. AMD is promoting Resident Evil Village, and it’s on the latest-gen consoles as well, so there’s no support for Nvidia’s DLSS technology. We’ll look at image quality in a moment, but first let’s hit the official system requirements.
Capcom notes that in either case, the game targets 1080p at 60 fps, using the “Prioritize Performance” and presumably “Recommended” presets. Capcom does state that the framerate “might drop in graphics-intensive scenes,” but most mid-range and higher GPUs should be okay. We didn’t check lower settings, but we can confirm that 60 fps at 1080p will certainly be within reach of a lot of graphics cards.
The main pain point for anyone running a lesser graphics card will be VRAM, particularly at higher resolutions. With AMD pushing 12GB and 16GB on its latest RX 6000-series cards, it’s not too surprising that the Max preset uses 12GB of VRAM. It’s possible to run 1080p Max on a 6GB card, and 1440p Max on an 8GB card, but 4K Max definitely wants more than 8GB VRAM — we experienced inconsistent frametimes in our testing. We’ve omitted results from the charts for cards where performance wasn’t reliable.
Anyway, let’s hit the benchmarks. Due to time constraints, we’re not going to run every GPU under the sun in these benchmarks, but will instead focus on the latest gen GPUs, plus the top and bottom RTX 20-series GPUs and a few others as we see fit. We used the ‘Max’ preset, with and without ray tracing, and most of the cards we tested broke 60 fps. Turning on ray tracing disables Ambient Occlusion, because that’s handled by the ray-traced GI and Reflection options, but every other setting is on the highest quality option (which means variable-rate shading is off for our testing).
Our test system consists of a Core i9-9900K CPU, 32GB of RAM and a 2TB SSD — the same PC we’ve been using for our graphics card and gaming benchmarks for about two years now, because it continues to work well. With the current graphics card shortages, acquiring a new high-end GPU will be difficult — our GPU pricing index covers the details. Hopefully, you already have a capable GPU from pre-2021, back in the halcyon days when graphics cards were available at and often below MSRP. [Wistful sigh]
Granted, these are mostly high-end cards, but even the RTX 2060 still posted an impressive 114 fps in our test sequence — and it also nearly managed 60 fps with ray tracing enabled (see below). Everything else runs more than fast enough as well, with the old GTX 1070 bringing up the caboose with a still more than acceptable 85 fps. Based on what we’ve seen with these GPUs and other games, it’s a safe bet that cards like the GTX 1660, RX 5600 XT, and anything faster than those will do just fine in Resident Evil Village.
AMD’s RDNA2 cards all run smack into an apparent CPU limit at around 195 fps for our test sequence, while Nvidia’s fastest GPUs (2080 Ti and above) end up with a lower 177 fps limit. At 1080p, VRAM doesn’t appear to matter too much, provided your GPU has at least 6GB.
Turning on ray tracing drops performance, but the drop isn’t too painful on many of the cards. Actually, that’s not quite true — the penalty for DXR depends greatly on your GPU. The RTX 3090 only lost about 13% of its performance, and the RTX 3080 performance dropped by 20%. AMD’s RX 6900 XT and RX 6800 XT both lost about 30-35% of their non-RT performance, while the RTX 2080 Ti, RX 6800, RTX 3070, RTX 3060 Ti, and RTX 3060 plummeted by 40–45%. Meanwhile, the RX 6700 XT ended up running at less than half its non-DXR rate, and the RTX 2060 also saw performance chopped in half.
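To make those percentages concrete, here is a small illustrative Python helper for expressing the DXR penalty from a pair of frame-rate measurements; the example numbers below are placeholders rather than our recorded results:

```python
def dxr_penalty(fps_raster: float, fps_dxr: float) -> float:
    """Percentage of performance lost when ray tracing (DXR) is enabled."""
    return (1 - fps_dxr / fps_raster) * 100

# Placeholder values for illustration only.
print(f"{dxr_penalty(177, 154):.0f}% slower with DXR")  # ~13%, an RTX 3090-like drop
print(f"{dxr_penalty(195, 130):.0f}% slower with DXR")  # ~33%, an RX 6800 XT-like drop
```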
Memory and memory bandwidth seem to be major factors with ray tracing enabled, and the 8GB and lower cards were hit particularly hard. Turning down a few settings should help a lot, but for these initial results we wanted to focus on maxed-out graphics quality. Let us know in the comments what other tests you’d like to see us run.
The performance trends we saw at 1080p become more pronounced at higher resolutions. At 1440p Max, more VRAM and memory bandwidth definitely helped. The RX 6900 XT, RX 6800 XT, RTX 3090, and RTX 3080 only lost a few fps in performance compared to 1080p when running without DXR enabled, and the RX 6800 dipped by 10%. All of the other GPUs drop by around 20–30%, but the 6GB RTX 2060 plummeted by 55%. Only the RTX 2060 and GTX 1070 failed to average 60 fps or more.
1440p and ray tracing with max settings really needs more than 8GB VRAM — which probably explains why the Ray Tracing preset (which we didn’t use) opts for modest settings everywhere else. Anyway, the RTX 2060, 3060 Ti, and 3070 all started having problems at 1440p with DXR, which you can see in the numbers. Some runs were much better than we show here, others much worse, and after repeating each test a bunch of times, we still aren’t confident those three cards will consistently deliver a good experience without further tweaking the graphics settings.
On the other hand, with ray tracing enabled, cards with 10GB or more VRAM don’t show nearly the drop going from 1080p to 1440p that we saw without ray tracing. The RTX 3060 only lost 18% of its 1080p performance, and chugs along happily at just shy of 60 fps. The higher-end AMD and Nvidia cards were all around the 15% drop mark as well.
But enough dawdling. Let’s just kill everything with some 4K testing…
Well, ‘kill’ is probably too strong of a word. Without ray tracing, most of the GPUs we tested still broke 60 fps, but those that came up short fell well short. The RTX 3060 is still generally playable, but Resident Evil Village appears to expect 30 fps or more, as dropping below that tends to cause the game to slow down. The RX 5700 XT should suffice in a pinch, even though it lost 67% of its 1440p performance, but the 1070 and 2060 would need lower settings to even take a crack at 4K.
Even with DXR, the RTX 2080 Ti and RX 6800 and above continue to deliver 60 fps or more. The RTX 3060 also still manages a playable 41 fps — this isn’t a twitch action game, so sub-60 frame rates aren’t the end of the world. Of course, we’re not showing the cards that dropped into the teens or worse — which is basically all the RTX cards with 8GB or less VRAM.
The point isn’t how badly some of the cards did at 4K Max (with or without DXR), but rather how fast a lot of the cards still remained. The DXR switch often imposed a massive performance hit at 1080p, but at 4K the Nvidia cards with at least 10GB VRAM only lost about 15% of their non-DXR performance. AMD’s GPUs took a larger 25% hit, but it was very consistent across all four GPUs.
Resident Evil Village Graphics Settings
[Gallery: 8 screenshots of the advanced graphics settings menus]
You can see the various advanced settings available in the above gallery. Besides the usual resolution, refresh rate, vsync, and scaling options, there are 18 individual graphics settings, plus two more settings for ray tracing. Screen space reflections, volumetric lighting and shadow quality are likely to have the biggest impact on performance, though the other settings can add up as well. For anyone with a reasonably high-end GPU, though, you should be able to play at close to max quality (minus ray tracing if you don’t have an appropriate GPU, naturally).
But how does the game look? Capturing screenshots with the various settings on and off is a pain, since there are only scattered save points (typewriters), and some settings appear to require a restart to take effect. Instead of worrying about all of the settings, let’s just look at how ray tracing improves things.
Resident Evil Village Image Quality: Ray Tracing On / Off
[Gallery: 18 comparison screenshots, alternating ray tracing off and on for each scene]
Or doesn’t, I guess. Seriously, the effect is subtle at the best of times, and in many scenes, I couldn’t even tell you whether RT was on or off. If there’s a strong light source, it can make a difference. Sometimes a window or glass surface will change with RT enabled, but even then (e.g., in the images of the truck and van) it’s not always clearly better.
The above gallery should be ordered with RT off and RT on for each pair of images. You can click (on a PC) to get the full images, which I’ve compressed to JPGs (and they look visually almost the same as the original PNG files). Indoor areas tend to show the subtle lighting effects more than outside, but unless a patch dramatically changes the way RT looks, Resident Evil Village will be another entry in the growing list of ray tracing games where you could skip it and not really miss anything.
Resident Evil Village will release to the public on May 7. So far, reviews are quite favorable, and if you enjoyed Resident Evil 7, it’s an easy recommendation. Just don’t go in expecting ray tracing to make a big difference in the way the game looks or feels.
Microsoft is finally preparing to refresh its Windows 95-era icons. The software giant has been slowly improving the icons it uses in Windows 10, as part of a “sweeping visual rejuvenation” planned for later this year. We saw a number of new system icons back in March, with new File Explorer, folder, Recycle Bin, disk drive icons, and more. Microsoft is now planning to refresh the Windows 95-era icons you still sometimes come across in Windows 10.
Windows Latest has spotted new icons for the hibernation mode, networking, memory, floppy drives, and much more as part of the shell32.dll file in preview versions of Windows 10. This DLL is a key part of the Windows Shell, which surfaces icons in a variety of dialog boxes throughout the operating system. It’s also a big reason why Windows icons have been so inconsistent throughout the years. Microsoft has often modernized other parts of the OS only for an older app to throw you into a dialog box with Windows 95-era icons from shell32.dll.
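As an aside, it’s easy to poke at shell32.dll yourself. The snippet below is a minimal, Windows-only sketch that uses Python’s ctypes to call the Win32 ExtractIconExW API; passing -1 as the icon index makes the function return the total number of icons embedded in the file:

```python
import ctypes

# ExtractIconExW(file, -1, NULL, NULL, 0) returns the number of icons in the file.
shell32 = ctypes.windll.shell32
count = shell32.ExtractIconExW(r"C:\Windows\System32\shell32.dll", -1, None, None, 0)
print(f"shell32.dll currently contains {count} icons")
```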
Hopefully this also means Windows will never ask you for a floppy disk drive when you dig into Device Manager to update a driver. That era of Windows, along with these old icons, has been well and truly over for more than a decade now.
All of this work to improve the consistency of Windows is part of Microsoft’s design overhaul to Windows 10, codenamed Sun Valley. The visual changes are expected to appear in the Windows 10 21H2 update that should arrive in October. Microsoft has not officially detailed its Sun Valley work, but a job listing earlier this year teased a “sweeping visual rejuvenation of Windows.”
Microsoft has so far revealed new system icons for Windows 10, alongside File Explorer icon improvements, and more colorful Windows 10 icons that appeared last year. Rounded corners will also be a big part of Sun Valley, alongside changes to built-in apps and the Start menu.
We’re expecting to hear more about Sun Valley at Microsoft’s Build conference later this month, or as part of a dedicated Windows news event.
The Oppo Reno5 lineup launched at the beginning of this year, but it’s only now getting a European release. The most affordable of the bunch, the Reno5, is here and looks well-equipped too: an OLED panel with a high refresh rate, fast charging, a capable SoC, a lightweight build and plenty of base storage and memory.
And in a (not so) surprising move, Oppo is releasing this one under two names in Europe. The Reno5 is launching in Eastern Europe, whereas Western Europe is getting it as the Find X3 Lite. The two models are identical in specs as you can see.
Oppo Reno5 5G • Oppo Find X3 Lite
So even though we specifically got the Reno5 model for review, our findings should apply equally to both devices.
While the Oppo brand is yet to make a name for itself in Europe, it’s well known in Asia and positioned as a premium brand elsewhere. So it’s no wonder that the company avoids undercutting the competition price-wise and instead focuses on making well-executed handsets with a premium look and feel.
The Reno5 (or Find X3 Lite, if you prefer) uses a bright, 90Hz OLED panel and a 64MP main camera and it also offers one of the fastest charging technologies. It’s also nicely compact and pocketable.
Probably the biggest selling point of this one is its size and ergonomics. In a market where behemoths rule, the Reno5 5G is a breath of fresh air with its compact 6.43-inch display and a weight of 172g.
Oppo Reno5 5G specs at a glance:
Body: 159.1×73.4×7.9mm, 172g; Gorilla Glass 5 front, plastic back and frame.
Display: 6.43″ AMOLED, 90Hz, 430 nits (typ), 750 nits (peak), 1080x2400px resolution, 20:9 aspect ratio, 410ppi.
Chipset: Qualcomm SM7250 Snapdragon 765G 5G (7 nm): Octa-core (1×2.4 GHz Kryo 475 Prime & 1×2.2 GHz Kryo 475 Gold & 6×1.8 GHz Kryo 475 Silver); Adreno 620.
Memory: 128GB 8GB RAM, 256GB 12GB RAM; UFS 2.1.
OS/Software: Android 11, ColorOS 11.1.
Rear camera: Wide (main): 64 MP, f/1.7, 26mm, 1/1.73″, 0.8µm, PDAF; Ultra wide angle: 8 MP, f/2.2, 119˚, 1/4.0″, 1.12µm; Macro: 2 MP, f/2.4; Depth: 2 MP, f/2.4.
Front camera: 32 MP, f/2.4, 24mm (wide), 1/2.8″, 0.8µm.
Video capture: Rear camera: 4K@30fps, 1080p@30/60/120fps; gyro-EIS, HDR; Front camera: 1080p@30fps, gyro-EIS.
Battery: 4300mAh; Fast charging 65W, 100% in 35 min (advertised), Reverse charging, SuperVOOC 2.0.
Misc: Fingerprint reader (under display, optical); 3.5mm jack.
The phone also comes with 128GB of base storage, and the Snapdragon 765G 5G is nothing to scoff at.
What we can scoff at is the phone’s current pricing. The launch price of €450 is quite optimistic considering that the competition in the midrange is quite heated and this phone comes with a plastic back and frame.
But let’s not rush to any conclusions as this phone might offer more than what meets the eye at first glance. First, time for an unboxing.
Unboxing the Oppo Reno5 5G
The phone comes in a premium-looking box in a fresh mint color. It contains the usual user manuals and the 65W-capable wall charger with a USB-A to USB-C cable.
Oppo has also thrown in a bonus case, along with a pair of 3.5mm headphones.
Resident Evil Village is the latest addition to the long-running horror series, and just like last year’s Resident Evil 3 remake, it is built on Capcom’s RE Engine. We test over 25 GPUs at 1080p, 1440p and 4K to find out what sort of hardware you need to run this game at maximum settings, while also looking at the performance and visual quality of the game’s ray tracing options.
Watch via our Vimeo channel or over on YouTube at 2160p.
In terms of visual settings, there are a number of options in the display menu. Texture and texture filtering settings are on offer, as well as variable rate shading, resolution, shadows, and so on. There’s also a selection of quick presets, and for our benchmarking today we opted for the Max preset, but with V-Sync and CAS disabled.
One interesting thing about the Max preset is the default ambient occlusion setting – FidelityFX CACAO, which stands for Combined Adaptive Compute Ambient Occlusion, a technology optimised for RDNA-based GPUs. To make sure this setting wouldn’t unfairly penalise Nvidia GPUs, we tested CACAO vs SSAO with both the RX 6800 and RTX 3070:
Both GPUs only lost 3% performance when using CACAO instead of SSAO, so we were happy to use the former setting for our benchmarking today.
Driver Notes
AMD GPUs were benchmarked with a pre-release driver provided by AMD for Resident Evil Village.
Nvidia GPUs were benchmarked with the 466.27 driver.
Test System
We test using a custom-built system from PCSpecialist, based on Intel’s Comet Lake-S platform.
CPU: Intel Core i9-10900K, overclocked to 5.1GHz on all cores
Motherboard: ASUS ROG Maximus XII Hero Wi-Fi
Memory: Corsair Vengeance DDR4-3600 (4 x 8GB), CL 18-22-22-42
Graphics Card: Varies
System Drive: 500GB Samsung 970 Evo Plus M.2
Games Drive: 2TB Samsung 860 QVO 2.5″ SSD
Chassis: Fractal Meshify S2 Blackout Tempered Glass
CPU Cooler: Corsair H115i RGB Platinum Hydro Series
Power Supply: Corsair 1200W HX Series Modular 80 Plus Platinum
Operating System: Windows 10 2004
Our 1-minute benchmark pass came from quite early on in the game, as the player descends into the village for the first time. Over the hour or so that I played, the results do seem representative of wider gameplay, with the exception of intense combat scenes, which can be a bit more demanding. Those are much harder to benchmark accurately though, as there’s more variation from run to run, so I stuck with this outdoor scene.
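For what it’s worth, one way to quantify that run-to-run variation is to repeat each pass a few times and report the average and spread. The sketch below uses made-up fps numbers purely to illustrate the idea:

```python
from statistics import mean, stdev

# Hypothetical per-run average fps for two candidate benchmark scenes.
runs = {
    "village descent (used here)": [143.2, 141.8, 144.0],
    "intense combat scene": [118.5, 131.2, 109.9],
}

for scene, fps in runs.items():
    print(f"{scene}: {mean(fps):.1f} fps average, +/- {stdev(fps):.1f} fps between runs")
```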
1080p Benchmarks
1440p Benchmarks
2160p (4K) Benchmarks
Closing Thoughts
After previously looking at the Resident Evil 3 remake last year, a game which is also built on Capcom’s RE Engine, I wasn’t too surprised to see that overall performance is pretty similar between both games.
That’s certainly a good thing though, as the game plays very well across a wide range of hardware. At the lower end, weaker GPUs like the GTX 1650, or older cards like the GTX 1060 6GB, still deliver a very playable experience at 1080p max settings. Village also scales very well, so if you have a higher-end GPU, you will be rewarded with significantly higher frame rates.
AMD does see the benefit of its partnership with Capcom for this one, as RDNA-based GPUs do over-perform here compared to the average performance we’d expect from those cards. The RX 6700 XT is matching the RTX 3070, for instance – when we’d typically expect it to be slower – while the RX 6900 XT is 7% faster than the RTX 3090 at 1440p.
In terms of visual fidelity, I don’t think the RE Engine delivers a cutting edge experience like you’d get from Cyberpunk 2077 or Red Dead Redemption 2 when using Ultra settings, but it still looks good and I am particularly impressed with the detailed character models.
The only negative point for me is that the ray tracing is pretty underwhelming. As we demonstrate in the video above, it doesn’t really deliver much extra from a visual perspective, at least in my opinion. Overall though, Resident Evil Village looks good and runs well on pretty much any GPU, so it definitely gets a thumbs up from me.
KitGuru says: Capcom’s newest game built on the RE Engine delivers impressive performance and looks good while doing so.
Gigabyte is getting into pre-built gaming PCs, starting with two new models – the Model X and the Model S. The Model X is a more traditional ATX system based on your choice of Intel Z590 or AMD X570 and an RTX 3080 GPU, while the Model S is a compact, 14-litre PC that packs high-end hardware despite its small size.
The Aorus Model X chassis offers good thermal performance and stylish aesthetics thanks to a half-vented, half-tempered-glass front panel with RGB lighting and a half-vented top panel with RGB. Rated for acoustic performance below 40dB while gaming, the Model X has an interior organised to allow less experienced users to mount an SSD or add another component to the system with ease. The chassis comes with an integrated GPU bracket and a 360mm AIO cooler, and the side panel can be either transparent or metallic.
The Aorus Model S shares some similarities with other cases such as the NZXT H1 and the darkFlash DLH21. Featuring an AIO thermal design, the Model S has more space to fit the remaining components. The air intakes are concealed to keep the sleek aesthetics of the chassis, which features an RGB-lit Aorus logo on the front panel. During operation, the rated noise performance sits just below 36dB.
Whether you choose AMD or Intel for the CPU, some specifications are shared across both variants. For instance, the Model S comes with a 750W power supply for both Intel and AMD configurations. There are also some differences, with AMD-based PCs coming with slower memory options compared to an Intel-based PC.
The following table shows the specifications of the AMD-powered Aorus Model X and S gaming systems:
Model | Aorus Model X | Aorus Model S
Platform | X570 | B550
CPU | AMD R9 5900X | AMD R9 5900X
RAM | 32GB DDR4-3600 RGB | 32GB DDR4-3600
GPU | RTX 3080 | RTX 3080
PSU | 850W 80 Plus Gold | 750W 80 Plus Gold
Storage 1 | M.2 2280 Gen4 1TB | M.2 2280 Gen4 1TB
Storage 2 | M.2 2280 NVMe 2TB | M.2 2280 NVMe 2TB
The next table shows the specifications of the Intel-based Aorus Model X and S gaming PCs:
Model | Aorus Model X | Aorus Model S
Platform | Z590 | Z590
CPU | Intel Core i9-11900K | Intel Core i9-11900K
RAM | 16GB DDR4-4400 RGB | 32GB DDR4-4000
GPU | RTX 3080 | RTX 3080
PSU | 850W 80 Plus Gold | 750W 80 Plus Gold
Storage 1 | M.2 2280 Gen4 1TB | M.2 2280 Gen4 1TB
Storage 2 | M.2 2280 NVMe 2TB | M.2 2280 NVMe 2TB
The Intel version of the Model S comes with 32GB of DDR4-4000 memory and the Intel Model X with 16GB of DDR4-4400 memory, while the AMD versions of both PCs come with DDR4-3600 memory instead. It’s also worth noting that the AMD Model S comes with a B550 motherboard, while the Model X features an X570 motherboard.
KitGuru says: What do you think of Gigabyte’s latest Aorus gaming PCs? Would you go for an Intel or AMD based system?
Intel ‘Atlas Canyon’ NUC 11 Essential to feature Jasper Lake processors
A new leak shows that Intel is working on a new affordable NUC powered by Jasper Lake processors. Codenamed ‘Atlas Canyon’, the NUC 11 Essential leak details its specifications and a possible release date, which might be as late as Q1 2022 due to the ongoing chip shortage.
The leak, which was shared by FanlessTech, shows Intel has apparently removed the 2.5-inch drive bay found on its predecessor and replaced it with an M.2 slot. The small and compact casing includes an active cooling system, but a fanless design seems doable given the low TDP.
The slide below shows that there will be three CPU options: the 4C/4T Pentium Silver N6005 (up to 3.3GHz), the 4C/4T Celeron N5105 (up to 2.9GHz), and the 2C/2T Celeron N4505 (up to 2.7GHz). The NUC 11 Essential supports up to 16GB of DDR4-2933 memory in dual-channel configuration and up to two 4K displays. Some models include 64GB of eMMC storage.
Image credit: FanlessTech
The NUC 11 Essential supports Wi-Fi 6, Bluetooth 5.2, and 1Gbps Ethernet connectivity. As for ports, there’s an HDMI 2.0b port, a DisplayPort 1.4, 2x front USB-A 3.1 ports, 2x rear USB-A 3.1 ports, 2x rear USB-A 2.0 ports, an audio-in 3.5mm jack, and an audio-out 3.5mm jack.
The NUC 11 Essential will reportedly be available as a mini PC, a barebone kit, and as a board only. All should feature a 3-year warranty.
KitGuru says: Despite its entry-level specs, the NUC 11 Essential is very useful as a media PC for the living room or as a work computer that can be mounted on the back of a mid-size monitor to save some desk space.
Synology has a new pair of professional-grade storage racks launching this week. The 12-bay RackStation RS2421+ and RS2421RP+, and 16-bay RS2821RP+ will be available starting this month, built to excel in large-scale infrastructure backups, business-level file serving and private cloud services.
Speaking on the new racks, Julien Chen, product manager at Synology, explained that both new RackStation products support “essential remote work applications” as well as offering a path for mass storage upgrades, with redundant power to ensure file servers are protected in the event of a surge or outage.
In the table below, you can see the specs and features for both new RackStations:
Spec | RS2421+ / RS2421RP+ | RS2821RP+
CPU | Quad-core AMD V1500B | Quad-core AMD V1500B
Memory | 4 GB ECC DDR4 (max. 32 GB) | 4 GB ECC DDR4 (max. 32 GB)
Form Factor | 2U | 3U
Drive Bays | 12 (max. 24) | 16 (max. 28)
iSCSI 4K random read IOPS | 106K | 105K
iSCSI 4K random write IOPS | 59K | 59K
SMB Seq. 64K read | 2200 MB/s | 2200 MB/s
SMB Seq. 64K write | 1154 MB/s | 1164 MB/s
Network Interface | 4 x Gigabit RJ-45 | 4 x Gigabit RJ-45
PCIe Slots | 1 x Gen 3.0 x8 slot | 1 x Gen 3.0 x8 slot
Redundant Power Supply | RP+ model only | Yes
Warranty | 5-year limited warranty | 5-year limited warranty
Both new RackStation units boast higher performance than their predecessors. The RS2421(RP)+ gets a 103% and 161% boost to random write and read IOPS, while the RS2821(RP)+ delivers 115% and 162% higher random write/read IOPS.
Both devices can be fitted with a dual-port Synology 10GbE or 25GbE NIC for better throughput, or a Synology M.2 adapter card and NVMe SSDs to create a speedy cache. Each rackmount also comes with a three-year warranty, which can be extended to five years.
KitGuru Says: Are you considering a server upgrade for your business? Will you be considering an upgrade to a Synology RackStation for storage?
Narrowing down the RTX 3080 Ti launch window has been a pain for insiders, with the date shifting every couple of weeks. The latest reports indicate that the RTX 3080 Ti will now be announced on the 31st of May, followed by a retail launch in June. According to some sources, the RTX 3070 Ti will launch in a similar time frame.
After multiple delays, the latest rumours regarding the announcement of the RTX 3080 Ti point to a reveal on May 31st. The review embargo is expected to end on June 2nd, so the release should be around that date. The RTX 3070 Ti is also rumoured to be announced on May 31st, but this card will release a bit later, with a reported review embargo of June 9th.
On a similar note, retail listings for the RTX 3080 Ti have been shared by VideoCardz. On Aquila Technology, there’s a listing for an MSI RTX 3080 Ti Ventus 3X 12G OC priced at NZD $2,543.46 (£1,316.23) and a Gigabyte RTX 3080 Ti Gaming OC priced at NZD $3,152.50 (£1,631.40). Over on Computer Perth, listings for a Gigabyte RTX 3080 Ti Vision OC and a Gigabyte RTX 3080 Ti Eagle OC were spotted, both with an AUD $1,732.75 (£966.75) price tag. Computer Perth’s listings mention the NDA ends in late May.
The RTX 3080 Ti is expected to feature the GA102-225 GPU with 10,240 CUDA cores and a 1,665MHz boost clock. It should come with 12GB of GDDR6X memory at 19Gbps across a 384-bit bus, as well as the Ethereum mining limiter.
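If those memory specs hold, the implied peak bandwidth follows from the usual GDDR6X arithmetic; a one-line sketch:

```python
# 19 Gbps per pin across a 384-bit bus: 19 * 384 / 8 = 912 GB/s peak bandwidth.
print(19 * 384 / 8)  # 912.0
```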
The RTX 3070 Ti is rumoured to ship with a GA104-400 GPU with 6144 CUDA cores, 48 RT cores, 192 Tensor cores and 8GB of GDDR6X memory.
KitGuru says: After five delays, let’s just hope that this one will be the last. Are you hoping to get one of Nvidia’s upcoming Ti GPUs at launch?