Matthew Wilson, 1 day ago
MSI has been dabbling in the world of all-in-one PCs for a while now and this week, we’re getting some brand new models. Today, MSI announced the Modern AM241 and Modern AM271 series of all-in-one PCs, featuring Intel 11th Gen processors.
The new Modern 24 and 27 series PCs are designed with efficiency and productivity in mind, while also looking rather elegant. Each system comes with an IPS-grade display for wide viewing angles and better colours. Under the hood, you’ll find an Intel 11th Gen Core series processor, with MSI offering up to an Intel Core i7-1165G7, but Core i3 and Core i5 configurations are also available.
In the table below, you can see the full specification list for the MSI Modern AM241 and AM271 PCs:
Specification
Modern AM241
Modern AM241T
Modern AM241P
Modern AM241TP
Modern AM271
Modern AM271P
CPU
Up to Intel® Core™ i7-1165G7
OS
Windows 10 Home – MSI recommends Windows 10 Pro for business
DISPLAY
Modern AM241 / AM241T: 23.8″ IPS Grade Panel LED Backlight (1920*1080 FHD) with MSI Anti-Flicker technology
Modern AM241P / AM241TP: 23.8″ IPS Grade Panel LED Backlight (1920*1080 FHD) with MSI Anti-Flicker technology
Modern AM271: 27″ IPS Grade Panel LED Backlight (1920*1080 FHD) with MSI Anti-Flicker technology
Modern AM271P: 27″ IPS Grade Panel LED Backlight (1920*1080 FHD) with MSI Anti-Flicker technology
TOUCH PANEL
Non-Touch for Modern AM241 / In-cell 10-Point Touch for Modern AM241T
Non-Touch for Modern AM241P / In-cell 10-Point Touch for Modern AM241TP
Non-Touch for Modern AM271
Non-Touch for Modern AM271P
ADJUSTABLE STAND
Modern AM241 / AM241T: -5° ~ 15° (Tilt)
Modern AM241P / AM241TP: -4° ~ 20° (Tilt); 0 ~ 130mm (Height)
Modern AM271: -5° ~ 15° (Tilt)
Modern AM271P: -4° ~ 20° (Tilt); 0 ~ 130mm (Height)
OPTICAL DRIVE
N/A
AUDIO
2 x 2.5W Speakers
LAN
1 x RJ45 (10/100/1000)
WIRELESS LAN
Intel Wireless-AC 9462 or Intel Wi-Fi 6 AX201 (one or the other, depending on configuration)
BLUETOOTH
5.1
USB 3.2 PORT
4 (2x USB 3.2 Gen 2 Type C, 2x USB 3.2 Gen 2 Type A)
USB 2.0 PORT
3
HDMI IN
1
HDMI OUT
1
AUDIO
1x Mic-in/Headphone-out Combo
5-WAY NAVIGATOR
1
KEYBOARD / MOUSE
Optional
AC ADAPTER
90W / 120W (Core i3 above)
AIO WALL MOUNT KIT III
Support Standard VESA Mount (75x75mm)
DIMENSION (WXDXH)
Modern AM241 / AM241T: 541.40 x 175.09 x 406.86 mm (21.31 x 6.89 x 16.02 inch)
Modern AM241P / AM241TP: 541.40 x 194.68 x 534.92 mm (21.31 x 7.66 x 21.06 inch)
Modern AM271: 611.75 x 169.96 x 436.06 mm (24.08 x 6.69 x 17.17 inch)
Modern AM271P: 611.75 x 169.96 x 553.52 mm (24.08 x 6.69 x 21.79 inch)
NET WEIGHT
Modern AM241 / AM241T: 4.65 kg (10.25 lbs)
Modern AM241P / AM241TP: 6.16 kg (13.58 lbs)
Modern AM271: 5.82 kg (12.83 lbs)
Modern AM271P: 7.42 kg (16.36 lbs)
GROSS WEIGHT
Modern AM241 / AM241T: 7.35 kg (16.20 lbs)
Modern AM241P / AM241TP: 8.45 kg (18.63 lbs)
Modern AM271: 8.60 kg (18.96 lbs)
Modern AM271P: 10.00 kg (22.05 lbs)
With more people working from home and relying on virtual meetings, MSI has bumped up the specs of the webcam, delivering 1080p quality. The option to remove the webcam is also there for those concerned about privacy.
Using MSI Instant Display Technology, the Modern AM series can also be used as a standalone monitor for a second system, meaning you don’t have to boot up the PC hidden behind the display. These all-in-one systems also support driving a second monitor through an additional HDMI output. Standard VESA mounts are supported for those who prefer a monitor arm – MSI even has a ready-to-go solution for that with the VESA Arm MT81.
We’re still waiting on pricing and availability information, but we’ll update if/when we hear more. Discuss on our Facebook page, HERE.
KitGuru Says: Do any of you use an all-in-one PC for work at all? What do you think of the new MSI Modern series systems?
Despite Apple’s focus on developing its own chips, it looks like the company still needs AMD’s help for higher-power workstation GPUs. That’s according to new entries on Geekbench 5, which show an unannounced ‘Radeon Pro W6900X’ SKU powering an Apple Mac Pro 7.1.
With the launch of macOS Big Sur 11.4 Beta, Apple introduced support for Radeon consumer-grade cards on its OS. Professional Radeon cards are not yet supported, but that might change soon with the new Mac Pro 7.1.
First spotted by Benchleaks, nine entries for a Mac Pro 7.1 equipped with an AMD Radeon Pro W6900X and running macOS 11.4 have appeared in the Geekbench 5 database. All the entries seem to belong to the same system, which featured a 12C/24T Intel Core i9-10920X CPU and 192GB of DDR4-2933 memory.
The entries do not show the card’s specifications, but performance-wise, it scored slightly above the Radeon RX 6900 XT. It’s unclear whether the card will be exclusive to Mac systems like the Radeon Pro Vega II, but the Radeon Pro W6900X scored about 66% higher than that card.
These entries coincide with the appearance of a photo showing an undisclosed AMD graphics card. The uploader didn’t share any information about the card, but we believe it might be the AMD Radeon Pro W6900X, the OEM variant of the Radeon Pro card we have previously shared, or a combination of both.
Discuss on our Facebook page, HERE.
KitGuru says: Apple plans to become more independent from CPU and GPU manufacturers, but for now, it still depends heavily on the likes of Intel and AMD for high-powered solutions. Will Apple eventually release its own workstation-class CPUs and GPUs?
AMD’s Threadripper consumer HEDT processors continue to be praised strongly for their excellent compute performance and connectivity options. But what if you want more than 256GB of memory? What if you want your RAM to run in 8-channel mode? What if you want more than 64 PCIe Gen 4 lanes? Well… that’s where Threadripper Pro comes in.
Watch via our Vimeo Channel (Below) or over on YouTube at 2160p HERE
Video Timestamps:
00:00 Start
00:15 Some details/pricing
01:15 Star of the show – Threadripper Pro 3975WX
03:20 The CPU cooler
03:46 Memory setup / weird plastic shrouds with fans
05:27 AMD Radeon Pro W5700 GPU
07:00 Motherboard
08:55 Storage options
09:41 1000W PSU (Platinum) and custom setup
10:32 Luke’s thoughts and I/O panels
11:22 The Chassis
11:40 Cooling and tool-less design
12:35 Summary so far
14:02 Performance tests
16:49 System temperatures, power and noise testing
19:05 System under idle conditions – ‘rumbling’ noise we experienced
19:22 Pros and Cons / Closing thoughts
Primary Specifications:
32-core AMD Threadripper Pro 3975WX processor
128GB of 3200MHz ECC DDR4 memory in 8-channel mode
AMD Radeon Pro W5700 graphics card with 8GB GDDR6 VRAM
WD SN730 256GB NVMe SSD
1kW 80Plus Platinum PSU
We are examining the Lenovo ThinkStation P620 workstation that is built around Threadripper Pro and its 8-channel memory support. There are a few options for the base processor on Lenovo’s website including 12, 16, 32, and 64 core options. Specifically, we are looking at the 32-core Threadripper Pro 3975WX chip and we are hoping that Lenovo can keep it running at the rated 3.5-4.2GHz speeds beneath that modestly sized CPU cooler.
Partnering this 280W TDP monster with its 128 PCIe Gen 4 lanes is 128GB of 8-channel DDR4 3200MHz ECC memory. While a 128GB installation is merely small-fry for Threadripper Pro, the 3200MHz modules running in 8-channel mode should allow for some excellent results in bandwidth-intensive tasks. Plus, you get a 1600MHz Infinity Fabric link for the Zen 2 cores.
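As a back-of-the-envelope sanity check (our own arithmetic, not a Lenovo or AMD figure), eight channels of DDR4-3200 work out to roughly 204.8 GB/s of theoretical peak bandwidth, double what a quad-channel Threadripper setup offers:

```python
# Theoretical peak bandwidth of 8-channel DDR4-3200 (illustrative arithmetic)
transfers_per_second = 3200e6  # DDR4-3200 = 3200 MT/s per channel
bytes_per_transfer = 8         # 64-bit data bus per channel
channels = 8

peak_gbps = transfers_per_second * bytes_per_transfer * channels / 1e9
print(f"Theoretical peak: {peak_gbps:.1f} GB/s")  # 204.8 GB/s
```

Real-world figures will land below this, but it shows why 8-channel mode matters for bandwidth-intensive tasks.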
I will, however, emphasise my dislike for Lenovo’s decision to deploy a 40mm fan and shroud to cool each DIMM bank. This seems unnecessary for a 128GB installation and merely adds noise and points of failure. Metal heatspreaders on the DIMMs would have been a better choice if enhanced cooling is deemed necessary.
Graphics comes in the form of an 8GB Radeon Pro W5700 blower-style card which we have already reviewed on KitGuru. That makes this an all-AMD system as far as the key components go. Another key benefit is ISV certification for the Lenovo P620. That point will be music to the ears of system buyers in a business environment with users who run software on the guaranteed support list.
Another point that will garner particular attention from prospective buyers is the display output connectivity. On its ‘pro-grade’ card, AMD deploys five Mini-DisplayPort 1.4 connections and one USB-C port. That gives you convenient access to six total display outputs which is super. As highlighted in our review of the Radeon Pro W5700, you can power five 4K monitors or three 5K alternatives, making this an excellent workstation proposition.
Lenovo uses its own WRX80 motherboard to house the sWRX8 Threadripper Pro CPU. The power delivery solution looks competent and Lenovo’s use of proper finned VRM heatsinks with passive cooling is to be commended. The motherboard provides six PCIe Gen 4 slots in total – four at x16 bandwidth and two at x8. However, only two x16 slots remain usable due to the slot spacing, and the top one will likely interfere with the RAM fan’s header.
It is disappointing to see Lenovo offering up sub-par expansion slot capability; there is no clear way to use the full 128-lane capability of Threadripper Pro. That is especially disappointing for users who will want multiple graphics cards alongside high-bandwidth networking and storage devices. However, the limited expandability is a clear compromise stemming from Lenovo’s use of a compact chassis with just a couple of 80mm fans for intake and exhaust airflow.
At least you do get dual, cooled M.2 slots on the motherboard. One of those is occupied by a 256GB WD SN730 SSD in our install. Clearly, most users will want to adjust the storage configuration. But this is clearly a very subjective requirement, so I respect Lenovo for offering a basic, cheap drive for the baseline configuration.
Power is delivered by a 1kW 80Plus Platinum unit. Lenovo highlights 92% efficiency on the configurator page, but this is likely a mistake for 230/240V UK customers given the more stringent 80Plus Platinum requirements for those operating voltages. The PSU’s tool-less design is absolutely superb and works very well; a single connector port feeds power from the unit through the motherboard where it is then distributed accordingly, including via break-out cables for PCIe and SATA connectors.
Connectivity for the system is just ‘OK’. You get 10GbE Aquantia AQC107 networking onboard, but a secondary network adapter is disappointingly omitted. I would have liked to see a few more USB ports on the rear IO, including some in Type-C form and preferably rated for 20Gbps. However, the front IO is excellent with four 10Gbps USB connections, two of which are Type-C. I also appreciated the system’s built-in speaker when using the unit without a proper set of speakers.
The chassis is well-built and feels sturdy given its compact form. Manhandling the hefty system is easy thanks to the front handle, and the internal tool-less design is excellent. Lenovo’s configurator gives an option to upgrade to a side panel with a key lock to prevent unauthorised access, which is good to see.
With that said, cooling certainly looks to be limited with just two 80mm intake fans on the chassis. The graphics card, CPU, PSU, and (annoyingly) RAM also have fans to take care of their own cooling. If you are thinking of adding a second high power GPU, though, the internals are likely to get very toasty.
Priced at around £5.5-6K inc. VAT in the UK (depending on the graphics card situation given current shortages), we are keen to see how Threadripper Pro performs in this reasonably compact workstation.
Detailed Specifications
Processor: AMD Threadripper Pro 3975WX (32 cores/64 threads, 3.5/4.2GHz, 280W TDP, 144MB L2+L3 cache, 128 PCIe Gen 4 lanes, up to 2TB 8-channel DDR4-3200 ECC memory support)
Motherboard: Lenovo WRX80 Threadripper Pro Motherboard
Memory: 128GB (8x16GB) SK Hynix 3200MHz C24 ECC DDR4, Octa-channel
Graphics Card: 8GB AMD Radeon Pro W5700 (RDNA/Navi GPU, 36 compute units, 2304 stream processors, 205W TDP, 1183MHz base clock, 1750MHz GDDR6 memory on a 256-bit bus for 448GBps bandwidth)
System Drive: 256GB WD SN730 PCIe NVMe SSD
CPU Cooler: Lenovo dual-tower heatsink with 2x 80mm fans
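For the curious, the W5700’s quoted 448GBps memory bandwidth falls straight out of the spec line above: GDDR6 at a 1750MHz memory clock moves 14Gbps per pin across a 256-bit bus. A quick sketch of that arithmetic (our own working, for illustration):

```python
# GDDR6 bandwidth arithmetic for the quoted Radeon Pro W5700 figures
memory_clock_mhz = 1750                            # memory clock as listed in the specs
effective_rate_gbps = memory_clock_mhz * 8 / 1000  # GDDR6 transfers 8 bits per pin per clock -> 14 Gbps
bus_width_bits = 256

bandwidth_gbps = effective_rate_gbps * bus_width_bits / 8  # divide by 8 to convert bits to bytes
print(f"{bandwidth_gbps:.0f} GB/s")  # 448 GB/s
```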
Yesterday marked the 36th anniversary of the first power-on of an Arm processor. Today, the company announced the deep-dive details of its Neoverse V1 and N2 platforms that will power the future of its data center processor designs and span up to a whopping 192 cores and 350W TDP.
Naturally, all of this becomes much more interesting given Nvidia’s pending $40 billion Arm acquisition, but the company didn’t share further details during our briefings. Instead, we were given a deep dive look at the technology roadmap that Nvidia CEO Jensen Huang says makes the company such an enticing target.
Arm claims its new, more focused Neoverse platforms come with impressive performance and efficiency gains. The Neoverse V1 platform is the first Arm core to support Scalable Vector Extensions (SVE), bringing up to 50% more performance for HPC and ML workloads. Additionally, the company says that its Neoverse N2 platform, its first IP to support newly-announced Arm v9 extensions, like SVE2 and Memory Tagging, delivers up to 40% more performance in diverse workloads.
Additionally, the company shared further details about its Neoverse Coherent Mesh Network (CMN-700), which will tie together the latest V1 and N2 designs with intelligent high-bandwidth, low-latency interfaces to other platform additives, such as DDR, HBM, and various accelerator technologies, using a combination of industry-standard protocols (like CCIX and CXL) and Arm IP. This new mesh design serves as the backbone for the next generation of Arm processors based on both single-die and multi-chip designs.
If Arm’s performance projections pan out, the Neoverse V1 and N2 platforms could provide the company with a much faster rate of adoption in multiple applications spanning the data center to the edge, putting even more pressure on x86 stalwarts Intel and AMD, especially considering the full-featured connectivity options available for both single- and multi-die designs. Let’s start with the Arm Neoverse roadmap and objectives, then dive into the details of the new chip IP.
Arm Neoverse Platform Roadmap
[Image gallery: 15 slides]
Arm’s roadmap remains unchanged from the version it shared last year, but it does help map out the steady cadence of improvements we’ll see over the next few years.
Arm’s server ambitions took flight with the Cortex-A72 in 2015, which offered performance and performance-per-watt equivalent to a traditional thread on a standard competing server architecture.
Arm says its current-gen Neoverse N1 cores, which power AWS Graviton 2 chips and Ampere’s Altra, equal or exceed a ‘traditional’ (read: x86) SMT thread. Additionally, Arm says that, given the N1’s energy efficiency, one N1 core can replace three x86 threads while using the same amount of power, providing an overall 40% better price-to-performance ratio. Arm chalks much of this design’s success up to the Coherent Mesh Network 600 (CMN-600) that enables linear performance scaling as core counts increase.
Arm has revised both its core architecture and the mesh for the new Neoverse V1 and N2 platforms that we’ll cover today, which now support up to 192 cores and 350W TDPs. Arm says the N2 core will take the uncontested lead over an SMT thread on competing chips while offering superior performance-per-watt.
Additionally, the company says that the Neoverse V1 core will offer the same performance as competing cores, marking the first time the company has achieved parity with two threads running on an SMT-equipped core. Both chips utilize Arm’s new CMN-700 mesh that enables either single-die or multi-chip solutions, offering customers plenty of options, particularly when deployed with accelerators.
As one would expect, Arm’s Neoverse N2 and V1 target hyperscale and cloud, HPC, 5G, and infrastructure edge markets. Customers include Tencent, Oracle Cloud with Ampere, Alibaba, and AWS with Graviton 2 (which is available in 70 of 77 AWS regions). Arm also has two exascale-class supercomputer deployments planned with Neoverse V1 chips: SiPearl “Rhea” and the ETRI K-AB21.
Overall, Arm claims that its Neoverse N2 and V1 platforms will offer best-in-class compute, performance-per-watt, and scalability compared to competing x86 server designs.
Arm Neoverse V1 Platform ‘Zeus’
[Image gallery: 3 slides]
Arm’s existing Neoverse N1 platform scales from the cloud to the edge, encompassing everything from high-end servers to power-constrained edge devices. The next-gen Neoverse N2 platform preserves that scalability across a spate of usages. In contrast, Arm designed the Neoverse V1 ‘Zeus’ platform specifically to introduce a new performance tier as it looks to more fully penetrate HPC and machine learning (ML) applications.
The V1 platform comes with a wider and deeper architecture that supports Scalable Vector Extensions (SVE), a type of SIMD instruction. The V1’s SVE implementation runs across two lanes with a 256b vector width (2x256b), and the chip also supports the bFloat16 data type to provide enhanced SIMD parallelism.
With the same (ISO) process, Arm claims up to 1.5x IPC increase over the previous-gen N1 and a 70% to 100% improvement to power efficiency (varies by workload). Given the same L1 and L2 cache sizes, the V1 core is 70% larger than the N1 core.
The larger core makes sense, as the V-series is optimized for maximum performance at the cost of both power and area, while the N2 platform steps in as the design that’s optimized for power-per-watt and performance-per-area.
Per-core performance is the primary objective for the V1, as it helps to minimize the performance penalties for GPUs and accelerators that often end up waiting on thread-bound workloads, not to mention to minimize software licensing costs.
Arm also tuned the design to provide exceptional memory bandwidth, which impacts performance scalability, and next-gen interfaces, like PCIe 5.0 and CXL, provide I/O flexibility (much more on that in the mesh section). The company also focused on performance efficiency (a balance of power and performance).
Finally, Arm lists technical sovereignty as a key focus point. This means that Arm customers can own their own supply chain and build their entire SoC in-country, which has become increasingly important for key applications (particularly defense) among heightened global trade tensions.
[Image gallery: 2 slides]
The Neoverse V1 represents Arm’s highest-performance core yet, and much of that comes through a ‘wider’ design ethos. The front end has an 8-wide fetch, 5-8 wide decode/rename unit, and a 15-wide issue into the back end of the pipeline (the execution units).
As you can see on the right, the chip supports HBM, DDR5, and custom accelerators. It can also scale out to multi-die and multi-socket designs. The flexible I/O options include the PCIe 5 interface and CCIX and CXL interconnects. We’ll cover the Arm’s mesh interconnect design a bit later in the article.
Additionally, Arm claims that, relative to the N1 platform, SVE contributes to a 2x increase in floating point performance, 1.8x increase in vectorized workloads, and 4x improvement in machine learning.
[Image gallery: 7 slides]
One of the V1’s biggest changes is the option to use either the 7nm or 5nm process, whereas the prior-gen N1 platform was limited to 7nm. Arm also made a host of microarchitecture improvements spanning the front end, core, and back end to provide big speedups relative to prior-gen Arm chips, added support for SVE, and made accommodations to promote enhanced scalability.
Here’s a bullet list of the biggest changes to the architecture. You can also find additional details in the slides above.
Front End:
Net of 90% reduction in branch mispredicts (for BTB misses) and a 50% reduction in front-end stalls
V1 branch predictor decoupled from instruction fetch, so the prefetcher can run ahead and prefetch instructions into the instruction cache
Widened branch prediction bandwidth to enable faster run-ahead (2x32b per cycle)
Increased capacity of the dual-level BTB (Branch Target Buffers) to capture more branches in larger instruction footprints and to lower taken-branch latency; improved branch accuracy to reduce mispredicts
Enhanced ability to redirect hard-to-predict branches earlier in the pipeline, at fetch time, for faster branch recovery, improving both performance and power
Mid-Core:
Net increase of 25% in integer performance
Micro-Op (MOP) Cache: L0 decoded instruction cache optimizes the performance of smaller kernels in the microarchitecture, 2x increase in fetch and dispatch bandwidth over N1, lower-latency decode pipeline by removing one stage
Added more instruction fusion capability, which improves performance and power efficiency for the most commonly-used instruction pairs
OoO (Out of Order) window increased by 2x to enhance parallelism; also increased integer execution bandwidth with a second branch execution unit and a fourth ALU
SIMD and FP Units: Added a new SVE implementation — 2x256b operations per cycle — doubling raw execute capability from 2x128b pipelines in N1 to 4x128b in V1, for a 4x improvement in ML performance (slide 10)
Back End:
45% increase to streaming bandwidth by increasing load/store address bandwidth by 50%, via a third load/store address generation unit (AGU)
To improve SIMD and integer/floating point execution, added a third load data pipeline and improved load bandwidth for integer and vector; doubled store bandwidth and split scheduling into two pipes
Increased load/store buffer window sizes and MMU capacity, allowing for a larger number of cached translations
Reduced latencies in the L2 cache to improve single-threaded performance (slide 12)
This diagram shows the overall pipeline depth (left to right) and bandwidth (top to bottom), highlighting the impressive parallelism of the design.
[Image gallery: 6 slides]
Arm also instituted new power management and low-latency tools to extend beyond the typical capabilities of Dynamic Voltage Frequency Scaling (DVFS). These include the Max Power Mitigation Mechanism (MPMM) that provides a tunable power management system that allows customers to run high core-count processors at the highest possible frequencies, and Dispatch Throttling (DT), which reduces power during certain workloads with high IPC, like vectorized work (much like we see with Intel reducing frequency during AVX workloads).
At the end of the day, it’s all about Power, Performance, and Area (PPA), and here Arm reiterated its projections: with the same (ISO) process, up to a 1.5x IPC increase over the previous-gen N1, a 70% to 100% improvement in power efficiency (varying by workload), and a V1 core that is 70% larger than the N1 core given the same L1 and L2 cache sizes.
The Neoverse V1 supports Armv8.4, but the chip also borrows some features from future v8.5 and v8.6 revisions, as shown above.
Arm also added several features to manage system scalability, particularly as it pertains to partitioning shared resources and reducing contention, as you can see in the slides above.
[Image gallery: 8 slides]
Arm’s Scalable Vector Extensions (SVE) are a big draw of the new architecture. Firstly, Arm doubled compute bandwidth to 2x256b with SVE while providing backward support for Neon at 4x128b.
However, the key here is that SVE is vector length agnostic. Most vector ISAs have a fixed number of bits in the vector unit, but SVE lets the hardware set the vector length in bits. However, in software, the vectors have no length. This simplifies programming and enhances portability for binary code between architectures that support different bit widths — the instructions will automatically scale as necessary to fully utilize the available vector bandwidth (for instance, 128b or 256b).
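The vector-length-agnostic idea can be sketched in plain Python (a toy model of ours, not Arm code): the loop is written without baking in a width, and the “hardware” vector length only changes how many lanes each iteration covers, never the result.

```python
def vla_sum(a, b, vector_lanes):
    """Toy model of an SVE-style vector-length-agnostic loop.
    `vector_lanes` stands in for the hardware vector width; the
    loop body is identical regardless of its value."""
    out = [0] * len(a)
    i = 0
    while i < len(a):
        # whilelt-style predicate: only lanes that fall inside the array are active
        active = min(vector_lanes, len(a) - i)
        for lane in range(active):
            out[i + lane] = a[i + lane] + b[i + lane]
        i += vector_lanes
    return out

a, b = list(range(10)), list(range(10, 20))
# Same answer whether the "hardware" offers 4 lanes (128b of 32b elements) or 8 (256b)
assert vla_sum(a, b, 4) == vla_sum(a, b, 8) == [x + y for x, y in zip(a, b)]
```

The same binary-portability property is what lets one SVE executable run unmodified on 128b and 256b implementations.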
Arm shared information on several fine-grained instructions for the SVE instructions, but much of those details are beyond the scope of this article. Arm also shared some simulated V1 and N2 benchmarks with SVE, but bear in mind that these are vendor-provided and merely simulations.
Arm Neoverse N2 Platform ‘Perseus’
[Image gallery: 17 slides]
Here we can see the slide deck for the N2 Perseus platform, with the key goal being a focus on scale-out implementations. Hence, the company optimized the design for performance-per-watt and performance-per-area, along with a healthy dose of cores and scalability. As with the previous-gen N1 platform, this design can scale from the cloud to the edge.
Neoverse N2 has a newer core than the V1 chips, but the company isn’t sharing many details yet. However, we do know that N2 is the first Arm platform to support Armv9 and SVE2, which is the second generation of the SVE instructions we covered above.
Arm claims a 40% increase in single-threaded performance over N1, but within the same power and area efficiency envelope. Most of the details about N2 mirror those we covered with V1 above, but we included the slides above for more details.
[Image gallery: 20 slides]
Arm provided the above benchmarks, and as with all vendor-provided benchmarks, you should take them with a grain of salt. We have also included the test notes at the end of the album for further perusal of the test configurations.
Arm’s SPEC CPU 2017 single-core tests show a solid progression from N1 to N2, and then a higher jump in performance with the V1 platform. The company also provided a range of comparisons against the Intel Xeon 8268 and an unspecified 40-core Ice Lake Xeon system, and the EPYC Rome 7742 and EPYC Milan 7763.
Coherent Mesh Network (CMN-700)
[Image gallery: 5 slides]
Arm allows its partners to adjust core counts and cache sizes, use different types of memory (such as DDR5 and HBM), and select various interfaces (like PCIe 5.0, CXL, and CCIX), all of which requires a very flexible underlying design methodology. Add in the fact that Neoverse can span from the cloud and edge to 5G, and the interconnect also has to cover a full spectrum of power points and compute requirements. That’s where the Coherent Mesh Network 700 (CMN-700) steps in.
Arm focuses on security through compliance and standards, Arm open-source software, and Arm IP and architecture, all rolled under the SystemReady umbrella that serves as the underpinning of the Neoverse platform architecture.
Arm provides customers with reference designs based on its own internal work, with the designs pre-qualified in emulated benchmarks and workload analysis. Arm also provides a virtual model for software development.
Customers can then take the reference design, choose between core types (like V-, N- or E-Series) and alter core counts, core frequency targets, cache hierarchy, memory (DDR5, HBM, Flash, Storage Class Memory, etc.), and I/O accommodations, among other factors. Customers also dial in parameters around the system-level cache that can be shared among accelerators.
There’s also support for multi-chip integration. This hangs off the coherent mesh network and provides plumbing for I/O connectivity options and multi-chip communication accommodations through interfaces like PCIe, CXL, CCIX, etc.
The V-Series CPUs address the growth of heterogeneous workloads by providing enough bandwidth for accelerators, support for disaggregated designs, and multi-chip architectures that help offset the slowing of Moore’s Law.
These types of designs help address the fact that the power budget per SoC (and thus thermals) is increasing, and also allow scaling beyond the reticle limits of a single SoC.
Additionally, I/O interfaces aren’t scaling well to smaller nodes, so many chipmakers (like AMD) are keeping PHYs on older nodes. That requires robust chip-to-chip connectivity options.
Here we can see the gen-on-gen comparison with the current CMN-600 interface found on the N1 chips. The CMN-700 mesh interface supports four times more cores and system-level cache per die, 2.2x more nodes (cross points) per die, 2.5x memory device ports (like DRAM, HBM) per die, and 8x the number of CCIX device ports per die (up to 32), all of which supplies intense scalability.
[Image gallery: 4 slides]
Arm improved cross-sectional bandwidth by 3x, which is important for scaling core counts, scaling out with bandwidth-hungry GPUs, and supporting faster memories like DDR5 and HBM (the design accommodates 40 memory controllers for DDR and/or HBM). Arm also has options for double mesh channels for increased bandwidth. Additionally, a hot-spot reroute feature helps avoid areas of contention on the fabric.
The AMBA Coherent Hub Interface (CHI) serves as the high-performance interconnect for the SoC that connects processors and memory controllers. Arm improved the CHI design and added intelligent heuristics to detect and control congestion, combine operations to reduce transactions, and conduct data-less writes, all of which help reduce traffic on the mesh. These approaches also help with multi-chip scaling.
Memory Partitioning and Monitoring (MPAM) helps reduce the impact of noisy neighbours by isolating VMs and keeping them from hogging the system-level cache (SLC). Arm extends this software-controlled system to the memory controller as well, all of which helps manage shared resources and reduce contention. The CPU, accelerator, and PCIe interfaces all have to work together too, so the design applies the same traffic management techniques between those units.
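To illustrate why this matters, here is a toy Python model of way-partitioned cache allocation, the general technique behind this kind of isolation (our own sketch; MPAM’s actual mechanism and granularity differ): a noisy streaming tenant evicts a victim’s working set when the cache is fully shared, but not when each tenant is confined to its own ways.

```python
# Toy model of way-partitioned cache allocation (our illustration, not Arm's design)
class WayPartitionedCache:
    def __init__(self, num_ways=8):
        self.lines = {}   # way index -> cached address
        self.lru = []     # way indices, least-recently-used first
        self.num_ways = num_ways

    def access(self, addr, allowed_ways):
        """Look up addr; on a miss, allocate only within allowed_ways. True on hit."""
        for way, cached in self.lines.items():
            if cached == addr:
                self.lru.remove(way)
                self.lru.append(way)  # promote to most-recently-used
                return True
        free = [w for w in allowed_ways if w not in self.lines]
        if free:
            way = free[0]
        else:
            # evict the least-recently-used line within this tenant's partition
            way = next(w for w in self.lru if w in allowed_ways)
            self.lru.remove(way)
            del self.lines[way]
        self.lines[way] = addr
        self.lru.append(way)
        return False

def victim_hits(victim_ways, noisy_ways, rounds=5):
    cache = WayPartitionedCache(num_ways=8)
    hits = 0
    for _ in range(rounds):
        for addr in range(4):            # victim's small, reused working set
            hits += cache.access(addr, victim_ways)
        for addr in range(100, 108):     # noisy neighbour streams 8 lines per round
            cache.access(addr, noisy_ways)
    return hits

shared = victim_hits(victim_ways=range(8), noisy_ways=range(8))       # no partitioning
isolated = victim_hits(victim_ways=range(4), noisy_ways=range(4, 8))  # MPAM-style split
print(shared, isolated)
```

In this toy run the fully shared cache loses every victim hit to the streaming tenant, while the partitioned case keeps the victim’s working set resident after the first round.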
[Image gallery: 4 slides]
The mesh supports multi-chip designs through CXL or CCIX interfaces, and here we see a few of the use cases. CCIX is typically used inside the box or between chips, be that heterogeneous packages, chiplets, or multi-socket designs. In contrast, CXL steps in for memory expansion or pools of memory shared by multiple hosts; it’s also used for coherent accelerators such as GPUs, NPUs, and SmartNICs.
Slide 14 shows an example of a current connection topology — PCIe connects to the DPU (Data Plane Unit, i.e. SmartNIC), which then provides the interconnection to the compute accelerator node. This allows multiple worker nodes to connect to shared resources.
Slide 15 shows us the next logical expansion of this approach — adding disaggregated memory pools that are shared between worker nodes. Unfortunately, as shown in slide 16, this creates plenty of bottlenecks and introduces other issues, such as spanning the home nodes and system-level cache across multiple dies. Arm has an answer for that, though.
[Image gallery: 4 slides]
Addressing those bottlenecks requires a re-thinking of the current approaches to sharing resources among worker nodes. Arm designed a multi-protocol gateway with a new AMBA CXS connection to reduce latency. This connection can transport CCIX 2.0 and CXL 2.0 protocols much faster than conventional interconnections. This system also provides the option of using a Flit data link layer that is optimized for the ultimate in low-latency connectivity.
This new design can be tailored for either socket-to-socket or multi-die compute SoCs. As you can see to the left on Slide 17, this multi-protocol gateway can be used either with or without a PCIe PHY. Removing the PCIe PHY creates an optimized die-to-die gateway for lower latency for critical die-to-die connections.
Arm has also devised a new Super Home Node concept to accommodate multi-chip designs. This implementation allows composing the system differently based on whether it is a homogeneous design (direct connections between dies) or a heterogeneous one (compute and accelerator chiplets connected to an I/O hub). The latter design is becoming more attractive because I/O doesn't scale well to smaller process nodes, so fabbing the I/O hub on an older node can save quite a bit of investment and help reduce design complexity.
Thoughts
Arm's plans for a 30%+ gen-on-gen IPC growth rate stretch across the next three iterations of its platforms (V1, N2, Poseidon) and will conceivably continue into the future. We haven't seen gen-on-gen gains in that range from Intel in recent history, and while AMD notched large gains with the first two Zen iterations, the EPYC Milan chips suggest it might not be able to execute such large generational leaps in the future.
If Arm's projections play out in the real world, that puts the company not only on an intercept course with x86 (it's arguably already there in some respects), but on a path to performance superiority.
Wrapping in the well-thought-out coherent mesh design makes these platforms all the more formidable, especially in light of the ongoing shift toward offloading key workloads to compute accelerators of various flavors. Additionally, bringing complex designs, like chiplet, multi-die, and hub-and-spoke layouts, under one umbrella of pre-qualified reference designs could hasten migration to Arm architectures, at least for the cloud players. The attraction of licensable interconnects that democratize these complex interfaces is yet another arrow in Arm's quiver.
Perhaps one of the most surprising tidbits of info that Arm shared in its presentations was one of the smallest — a third-party firm has measured that more than half of AWS’s newly-deployed instances run on Graviton 2 processors. Additionally, Graviton 2-powered instances are now available in 70 out of the 77 AWS regions. It’s natural to assume that those instances will soon have a newer N2 or V1 architecture under the hood.
This type of uptake, and the economies of scale and other savings AWS enjoys from using its own processors, will force other cloud giants to adapt with their own designs in kind, perhaps touching off the type of battle for superiority that can change the entire shape of the data center for years to come. There simply isn’t another vendor more well-positioned to compete in a world where the hyperscalers and cloud giants battle it out with custom silicon than Arm.
The ID-Cooling SE-207-XT is a great option for builders looking for the performance of large air cooling on a budget. It isn't going to perform like a $100 premium air cooler, particularly on the highest-end CPUs, but it does provide enticing performance for a lot less.
For
+ Budget pricing
+ Easy to install
+ Simple, aesthetic design
Against
– Fan noise at full speed
– Lags behind larger, premium air coolers
Features and Specifications
ID-Cooling's SE-207-XT is a seven-heatpipe, dual-tower assault on large air cooling with a name that's difficult to remember, but that might soon change. Making use of a pair of 120mm cooling fans with zero RGB capability, the SE-207-XT is menacingly matte black, making for a no-nonsense approach for system builders seeking a stealthed-out PC.
The SE-207-XT isn’t as large as some of the behemoth heatpipe coolers we’ve seen in recent years. And while it is true that it isn’t going to jump to the top of our cooling charts, it isn’t lagging that far behind the leaders, either. This makes the SE-207-XT a great mid-range, budget-priced, large air cooler for those looking for the cooling benefits of a huge CPU cooling tower, while focusing the majority of their build budget on other components.
ID-Cooling SE-207-XT Specifications
Height: 6.125″ / 155.6mm
Width: 4.88″ / 124mm
Depth: 4.0″ / 101.6mm (5.63″ / 143mm w/ fans)
Base Height: 1.75″ / 44.5mm
Assy. Offset: 0.0 (centered), 1.0″ / 25.4mm (w/ front fan)
Cooling Fans: (2) 120 x 25mm
Connectors: (2) 4-pin PWM
Weight: 40.1 oz / 1138g
Intel Sockets: 115x, 1200, 2011, 2066
AMD Sockets: AM4
Warranty: 2 years
Web Price: $60
Features of ID-Cooling SE-207-XT
The SE-207-XT is accompanied by a modest set of mounting hardware to accommodate most current AMD and Intel desktop CPU sockets. The Intel backplate features pre-assembled mounting posts, making it very sturdy and eliminating the tedious assembly steps we normally see with backplate setups. A third set of spring wire clips is provided so the cooler can take an additional fan for a push/pull/pull configuration, if you are so inclined. Likewise, an included 3-way PWM splitter is ready to handle the default 2-fan setup out of the box, or that triple-fan layout.
An included syringe of ID-Cooling's ID-TG25 thermal compound means system builders won't be left ordering a tube of thermal paste or making an extra trip to the local electronics store.
ID-Cooling covers the SE-207-XT with a 2-year warranty.
The SE-207-XT makes use of seven copper heatpipes which snake through 44 individual stacked cooling fins on each divided tower. The heatpipes are offset for dissipation and airflow throughout each cooling tower and collect at the base within the solid cantilever mounting brace. The cooling fins on each cooling tower allow air to flow both straight through as well as out the lateral sides of the tower, rather than ducting air all the way through the cooler.
The solid base collects the seven heatpipes and encapsulates them within the cantilever mounting plate with a milled-copper base to make direct contact with the CPU IHS. The machine screws on the mounting plate are permanently affixed and align over the mounting bars, which are secured to the motherboard socket hardware mounting locations. The mounting screws help align the SE-207-XT when it comes time to tension the cooler down and finish the installation process, which we will detail shortly.
The base of the SE-207-XT is milled perfectly flat; no ambient light is visible between a steel rule and the milled copper baseplate. Additionally, the offset of the heatpipes and the fixed tension screws can be seen a bit more clearly from this angle.
The base of the SE-207-XT makes for a consistent thermal compound spread pattern during installation, and it seems to be a bit more ‘clingy’ to residual MX-4 compound than usual, although nothing alarming.
Cooling for the SE-207-XT comes from a pair of included 120mm ID-Cooling ID-12025M12S series 4-pin PWM fans rated for up to 1800 RPM and 76.1 CFM. These fans also feature rubber noise-reducing mounting pads on each corner of both sides and utilize a hydraulic bearing.
During installation, the mounting crossbars are affixed atop the SE-207-XT's plastic offsets to the backplate mounting posts, and chunky machine-cap nuts hold everything securely to the motherboard. The center of the image shows the tension screws secured to the threaded studs on the mounting crossbars, which help align the cooler directly over the CPU and simplify installation.
Once the SE-207-XT is mounted, each of the 120mm PWM fans is secured to the cooler to move air right to left toward the rear case fan, providing a direct channel of air through the cooling towers. While the fan positioning can be adjusted via the spring clips to account for taller memory modules, be advised that RAM height can still be an issue in some instances where sticks sit directly beneath the cooling tower itself.
The Patriot Viper Steel RGB DDR4-3600 C20 is only worthy of consideration if you’re willing to invest your time to optimize its timings and if you can find the memory on sale with a big discount.
For
+ Runs at C16 with fine-tuning
+ Balanced design with RGB lighting
+ RGB compatibility with most motherboards
Against
– Very loose timings
– Overpriced
– Low overclocking headroom
Patriot, which is no stranger to our list of Best RAM, has many interesting product lines in its broad repertoire. However, the memory specialist recently revamped one of its emblematic lineups to keep up with the current RGB trend. As the name conveys, the Viper Steel RGB series arrives with a redesigned heat spreader and RGB illumination.
The new series marks the second time that Patriot has incorporated RGB lighting onto its DDR4 offerings, with the first being the Viper RGB series that debuted as far back as 2018. While looks may be important, performance also plays a big role, and the Viper Steel RGB DDR4-3600 memory kit is here to show us what it is or isn’t made of.
Viper Steel RGB memory modules come with a standard black PCB and a matching matte-black heat spreader. It was nice on Patriot's part to keep the aluminum heat spreader as clutter-free as possible: only the golden Viper logo and the typical specification sticker are present, and the latter is removable.
At 44mm (1.73 inches), the Viper Steel RGB isn’t excessively tall, so we expect it to fit under the majority of the CPU air coolers in the market. Nevertheless, we recommend you double-check that you have enough clearance space for the memory modules. The RGB light bar features five customizable lighting zones. Patriot doesn’t provide a program to control the illumination, so you’ll have to rely on your motherboard’s software. The compatibility list includes Asus Aura Sync, Gigabyte RGB Fusion, MSI Mystic Light Sync, and ASRock Polychrome Sync.
The Viper Steel RGB is a dual-channel 32GB memory kit, so you receive two 16GB memory modules with an eight-layer PCB and dual-rank design. Although Thaiphoon Burner picked up the integrated circuits (ICs) as Hynix chips, the software failed to identify the exact model. However, these should be AFR (A-die) ICs, more specifically H5AN8G8NAFR-VKC.
You’ll find the Viper Steel RGB defaulting to DDR4-2666 and 19-19-19-43 timings at stock operation. Enabling the XMP profile on the memory modules will get them to DDR4-3600 at 20-26-26-46. The DRAM voltage required for DDR4-3600 is 1.35V. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
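Those XMP numbers translate directly into absolute latency. As a rough illustration (this is standard DDR arithmetic, not a figure from Patriot's spec sheet), first-word latency in nanoseconds is the CAS latency divided by the memory clock, which is half the data rate:

```python
# First-word latency in nanoseconds: CAS latency (in clock cycles) divided
# by the memory clock in MHz, which is half the DDR data rate.
# latency_ns = CL / (data_rate / 2) * 1000 = 2000 * CL / data_rate

def first_word_latency_ns(data_rate_mts: int, cas: int) -> float:
    return 2000 * cas / data_rate_mts

# The Viper Steel RGB's XMP profile versus a tighter DDR4-3600 C16 kit:
print(round(first_word_latency_ns(3600, 20), 2))  # 11.11 ns
print(round(first_word_latency_ns(3600, 16), 2))  # 8.89 ns
```

That roughly 2.2 ns gap against a C16 kit at the same data rate is why loose timings matter even when the frequency on the box looks identical.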
Comparison Hardware
Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage | Warranty
G.Skill Trident Z Royal | F4-4000C17D-32GTRGB | 2 x 16GB | DDR4-4000 (XMP) | 17-18-18-38 (2T) | 1.40V | Lifetime
Crucial Ballistix Max RGB | BLM2K16G40C18U4BL | 2 x 16GB | DDR4-4000 (XMP) | 18-19-19-39 (2T) | 1.35V | Lifetime
G.Skill Trident Z Neo | F4-3600C16D-32GTZN | 2 x 16GB | DDR4-3600 (XMP) | 16-16-16-36 (2T) | 1.35V | Lifetime
Klevv Bolt XR | KD4AGU880-36A180C | 2 x 16GB | DDR4-3600 (XMP) | 18-22-22-42 (2T) | 1.35V | Lifetime
Patriot Viper Steel RGB | PVSR432G360C0K | 2 x 16GB | DDR4-3600 (XMP) | 20-26-26-46 (2T) | 1.35V | Lifetime
Our Intel test system consists of an Intel Core i9-10900K and Asus ROG Maximus XII Apex on the 0901 firmware. On the opposite side, the AMD testbed leverages an AMD Ryzen 5 3600 and ASRock B550 Taichi with the 1.30 firmware. The MSI GeForce RTX 2080 Ti Gaming Trio handles the graphical duties on both platforms.
Intel Performance
Things didn’t go well for the Viper Steel RGB on the Intel platform. The memory ranked at the bottom of our application RAM benchmarks and came in last place on the gaming tests. Our results didn’t reveal any particular workloads where the Viper Steel RGB stood out.
AMD Performance
The loose timings didn't substantially hinder the Viper Steel RGB's performance, although it logically lagged behind DDR4-3600 rivals with tighter timings. The Viper Steel RGB's data rate allowed it to run in a 1:1 ratio with our Ryzen 5 3600's FCLK, so it didn't take any performance hits, unlike the DDR4-4000 offerings. With a capable Zen 3 processor that can operate at a 2,000 MHz FCLK, however, the Viper Steel RGB will probably not outperform the high-frequency kits.
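The 1:1 behavior described above can be sketched in a few lines. The 1,800 MHz FCLK ceiling below is an assumption typical of Zen 2 parts like our Ryzen 5 3600, not a guaranteed figure for every chip:

```python
# Toy model of the Ryzen memory/fabric ratio: the memory clock (MCLK) is
# half the DDR4 data rate, and it stays 1:1 with the Infinity Fabric only
# while it's at or below the chip's FCLK ceiling (assumed 1800 MHz here).

def fabric_ratio(data_rate_mts: int, fclk_ceiling_mhz: int = 1800) -> str:
    mclk = data_rate_mts // 2
    return "1:1" if mclk <= fclk_ceiling_mhz else "2:1"

print(fabric_ratio(3600))  # 1:1 -> no added latency for this kit
print(fabric_ratio(4000))  # 2:1 -> the DDR4-4000 kits take the penalty
print(fabric_ratio(4000, fclk_ceiling_mhz=2000))  # 1:1 on a capable Zen 3 chip
```

This is why a DDR4-3600 kit can beat nominally faster memory on Zen 2: dropping to 2:1 adds fabric latency that the extra bandwidth rarely recovers.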
Overclocking and Latency Tuning
Overclocking potential isn’t the Viper Steel RGB’s strongest trait. Upping the DRAM voltage from 1.35V to 1.45V only got us to DDR4-3800. Although we had to maintain the tRCD, tRP, and tRAS at their XMP values, we could drop the CAS Latency down to 17.
Lowest Stable Timings
Memory Kit | DDR4-3600 (1.45V) | DDR4-3800 (1.45V) | DDR4-4000 (1.45V) | DDR4-4133 (1.45V) | DDR4-4200 (1.45V)
G.Skill Trident Z Neo DDR4-3600 C16 | 13-14-14-35 (2T) | N/A | N/A | N/A | 19-19-19-39 (2T)
Crucial Ballistix Max RGB DDR4-4000 C18 | N/A | N/A | 16-19-19-39 (2T) | N/A | 20-20-20-40 (2T)
G.Skill Trident Z Royal DDR4-4000 C17 | N/A | N/A | 15-16-16-36 (2T) | 18-19-19-39 (2T) | N/A
Klevv Bolt XR DDR4-3600 C18 | 16-19-19-39 (2T) | N/A | N/A | 18-22-22-42 (2T) | N/A
Patriot Viper Steel RGB DDR4-3600 C20 | 16-20-20-40 (2T) | 17-26-26-46 (2T) | N/A | N/A | N/A
As we’ve seen before, you won’t be able to run Hynix ICs at very tight timings. That’s not to say that the Viper Steel RGB doesn’t have any wiggle room though. With a 1.45V DRAM voltage, we optimized the memory to run at 16-20-20-40 as opposed to the XMP profile’s 20-26-26-46 timings.
Bottom Line
It comes as no surprise that the Viper Steel RGB DDR4-3600 C20 will not beat competing memory kits that have more optimized timings. The problem is that C20 is basically at the bottom of the barrel by DDR4-3600 standards.
The Viper Steel RGB won’t match or surpass the competition without serious manual tweaking. The memory kit’s hefty $199.99 price tag doesn’t do it any favors, either. To put it into perspective, the cheapest DDR4-3600 2x16GB memory kit on the market starts at $154.99, and it checks in with C18. Unless Patriot rethinks the pricing for the Viper Steel RGB DDR4-3600 C20, the memory kit will likely not be on anyone’s radar.
The Android version of Google and Apple’s COVID-19 exposure notification app had a privacy flaw that let other preinstalled apps potentially see sensitive data, including if someone had been in contact with a person who tested positive for COVID-19, privacy analysis firm AppCensus revealed on Tuesday. Google says it’s currently rolling out a fix to the bug.
The bug cuts against repeated promises from Google CEO Sundar Pichai, Apple CEO Tim Cook, and numerous public health officials that the data collected by the exposure notification program could not be shared outside of a person’s device.
AppCensus first reported the vulnerability to Google in February, but the company failed to address it, The Markup reported. Fixing the issue would be as simple as deleting a few nonessential lines of code, Joel Reardon, co-founder and forensics lead of AppCensus, told The Markup. “It’s such an obvious fix, and I was flabbergasted that it wasn’t seen as that,” Reardon said.
Updates to address the issue are “ongoing,” Google spokesperson José Castañeda said in an emailed statement to The Markup. “We were notified of an issue where the Bluetooth identifiers were temporarily accessible to specific system level applications for debugging purposes, and we immediately started rolling out a fix to address this,” he said.
The exposure notification system works by pinging anonymized Bluetooth signals between a user’s phone and other phones that have the system activated. Then, if someone using the app tests positive for COVID-19, they can work with health authorities to send an alert to any phones with corresponding signals logged in the phone’s memory.
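Conceptually, the matching step amounts to a set intersection performed on the device. The sketch below is a deliberately simplified illustration; the real exposure notification system derives rolling identifiers cryptographically from daily keys rather than comparing raw IDs like this:

```python
# Greatly simplified sketch of on-device exposure matching: a phone keeps a
# local log of anonymized identifiers it has heard over Bluetooth, and when
# health authorities publish identifiers for positive cases, matching is a
# simple intersection. The real system derives rolling identifiers from
# daily cryptographic keys; this toy skips all of that.

def find_exposures(heard_ids: set, published_positive_ids: set) -> set:
    return heard_ids & published_positive_ids

phone_log = {"a91f", "77c2", "d044", "b3e9"}  # IDs this phone overheard
positive = {"d044", "0c11"}                   # IDs published for positive cases

print(find_exposures(phone_log, positive))  # {'d044'} -> trigger an exposure alert
```

The key privacy property is that the comparison happens locally, which is exactly why logs of the overheard identifiers leaking to other apps undermines the design.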
On Android phones, the contact tracing data is logged in privileged system memory, where it's inaccessible to most software running on the phone. But apps that are preinstalled by manufacturers get special system privileges that would let them access those logs, putting sensitive contact-tracing data at risk. There is no indication any apps have actually collected that data at this point, Reardon said.
Preinstalled apps have taken advantage of their special permissions before — other investigations show that they sometimes harvest data like geolocation information and phone contacts.
The analysis did not find any similar issues with the exposure notification system on iPhone.
The problem is an implementation issue and not inherent to the exposure notification framework, and it should not erode trust in public health technologies, Serge Egelman, the chief technology officer at AppCensus, said in a statement posted on Twitter. “We hope the lesson here is that getting privacy right is really hard, vulnerabilities will always be discovered in systems, but that it’s in everyone’s interest to work together to remediate these issues,” Egelman said.
If you haven’t gotten your hands on an Xbox Series X, you may be able to pick up one very soon, but without the RDNA 2 graphics, of course. The recently uncovered AMD 4700S Desktop Kit (via momomo_us) has found its way into a mini-ITX gaming PC at Tmall in China.
When the AMD 4700S emerged last week, the obscure processor raised a lot of questions. For one, the chip doesn’t carry the Ryzen branding, suggesting that it might be a custom processor that AMD developed for one of its clients. Stranger still, the processor is available for purchase as part of the AMD 4700S Desktop Kit.
Starting with what we know so far, the AMD 4700S is an octa-core Zen 2 processor with simultaneous multithreading (SMT). The Tmall merchant listed the AMD 4700S with 12MB of L3 cache, although we saw the chip with 8MB in a previous Geekbench 5 submission. The processor runs with a 3.6 GHz base clock and a 4 GHz boost clock. While we saw the AMD 4700S with 16GB of memory, we were uncertain of its nature. However, we suspected that the AMD 4700S is a variant of the processor that powers Microsoft’s latest Xbox Series X gaming console. The new mini-ITX listing appears to confirm our suspicions.
Apparently, the AMD 4700S is outfitted with 16GB of GDDR6 memory, which is the same amount of memory in the Xbox Series X. It appears that AMD is salvaging defective dies that don’t meet the requirements for the Xbox Series X and reselling them as the AMD 4700S.
Logically, AMD can’t just sell the same processor that it produces for Microsoft (for obvious reasons). Therefore, the AMD 4700S could be a result of a defective die with a faulty iGPU, similar to Intel’s graphics-less F-series chips. On the other hand, AMD could simply have disabled the iGPU inside the AMD 4700S, which is a shame given how generous GDDR6 memory is with bandwidth.
The only image of the mini-ITX system's interior reveals a motherboard that looks to be about the same size as the Xbox Series X's board. There are no memory slots, and we can see some of the GDDR6 chips surrounding the processor. Naturally, AMD reworked the motherboard for PC usage, as evidenced by the added capacitors, passive heatsink, power connectors, and connectivity ports. Since the AMD 4700S lacks a working iGPU, AMD added a PCIe 3.0 x16 expansion slot for a discrete graphics card.
AMD 4700S Benchmarks
Processor | Cinebench R20 Single-Core | Cinebench R20 Multi-Core | Cinebench R15 Single-Core | Cinebench R15 Multi-Core
Ryzen 7 4750G | 411 | 4,785 | 199 | 2,085
AMD 4700S | 486 | 3,965 | 160 | 1,612
Core i7-9700 | 508 | 3,643 | 200 | 1,469
Thanks to the listing, we can also get an idea of how the processor inside the Xbox Series X performs compared to today's desktop processors. However, it's important to highlight that the AMD 4700S may not be the exact processor used in Microsoft's latest console. The Series X chip runs at 3.8 GHz, or 3.6 GHz when simultaneous multithreading is active. The AMD 4700S, on the other hand, clocks in at 3.6 GHz with a 4 GHz boost clock. On paper, the AMD 4700S should have headroom for faster compute cores since it doesn't have an active iGPU eating into its power budget, so the heightened clock speeds make sense.
In general, the AMD 4700S lags behind the Ryzen 7 4750G (Renoir) and Core i7-9700 (Coffee Lake) in single-core workloads. The AMD 4700S did outperform the Core i7-9700 in multi-core workloads. However, it still placed behind the Ryzen 7 4750G.
It remains to be seen whether AMD is selling the AMD 4700S to retail customers or just OEMs. Thus far, we've seen the AMD 4700S Desktop Kit retailing for €263.71 (~$317.38) at Tulostintavaratalo, a retailer in Finland. The Chinese mini-ITX gaming system is listed for 4,599 yuan (~$709.12), but that price factors in the Radeon RX 550, 5TB SSD, CPU cooler, power supply and case.
High-performance memory kits have evolved over the last few years, both in styling and technology. Styling has shifted to heavier heat sinks, LED light bars, and fancy RGB control software, while the technology has done what it inevitably will by producing greater speeds and densities at generally lower cost as DDR4 has matured. The latest processors and graphics cards have been almost impossible to get over the last six months, but memory pricing and availability have remained steady, which makes now the perfect time for Acer to launch a brand-new line of DDR4 memory under its Predator brand. You may recognize the Predator brand from Acer's highly successful gaming monitors or its range of gaming laptops and desktops. You may even know the brand because of the Thanos All-In-One gaming chair.
Acer has branched out into a wide variety of gaming products and peripherals. Now, Acer is taking the plunge into core hardware with the aid of business partner BIWIN Storage, a large Chinese OEM with 25 years of experience in the storage and microelectronics business. Acer has granted them permission to produce memory kits under the Predator brand.
The Predator Apollo RGB kit I have for testing today is one of their top-spec kits: 16 GB (2x 8 GB) at 3600 MHz, 14-15-15-35 timings, and 1.45 V. 3600 MHz has become the new gold standard for Ryzen builds, driving new focus into memory kits targeting a previously obscure specification. Let’s see how the Predator Apollo RGB holds up in this ultra-competitive segment!
Colette, a short film featured in the Oculus VR game Medal of Honor: Above and Beyond, has won this year’s Academy Award for Best Documentary (Short Subject). Presented by Oculus Studios and Electronic Arts’ Respawn Entertainment, and later acquired and distributed by The Guardian, it’s the first time a video game industry project has won an Oscar.
Directed by Anthony Giacchino, Colette features a French Resistance survivor, Colette Marin-Catherine, returning to Germany for the first time since the end of World War II to visit a slave labor camp where her brother was killed. The documentary is presented in a traditional 2D format whether you watch it in the Oculus TV app or elsewhere.
“The real hero here is Colette herself, who has shared her story with integrity and strength,” Oculus Studios director of production Mike Doran says in a statement. “As we see in the film, resistance takes courage, but facing one’s past may take even more. Allowing us to preserve this pilgrimage for future generations was a true act of bravery and trust. We hope this award and the film’s reach means, as Colette says, that Jean-Pierre’s memory, as well as all of those who resisted, are no longer lost in the ‘Night and Fog’ of Dora.”
“It’s true what they say: It really is an honor just to be nominated. And it’s an incredible moment to win. We’re humbled by this recognition from the Academy of Motion Picture Arts and Sciences and would like to extend our sincere congratulations to all of our fellow nominees. It’s a privilege to stand alongside you.”
Medal of Honor: Above and Beyond was not well-received as a video game, with many reviews highlighting its huge system requirements and 170GB installation size — much of which was down to the inclusion of extensive historical and documentary footage. Now that one of those films has won an Oscar, the project may get more positive attention than before.
You can watch Colette for free online on YouTube, Oculus TV, or at The Guardian.
It finally happened. Out of the blue, you’re enjoying a game, watching a movie, or just reading on the internet, when your operating system decides it doesn’t want to cooperate and suddenly you’re facing down a BSOD or Blue Screen of Death. A BSOD is something that no Windows user wants to see, because it means that your system has crashed, costing you time and perhaps even resulting in data loss.
Perhaps the worst thing about getting a Blue Screen of Death is that it could be the result of any number of issues, from a faulty piece of hardware to a driver error to a page fault in a non-paged area (which occurs when Windows can't find data that should always be resident in memory). However, all isn't lost, and we're going to show you how to enable and use a minidump log file to diagnose the problem.
Why You Need a Minidump File to Diagnose Your BSOD
In earlier versions of Windows, the BSOD showed you some error codes that were at least a little bit helpful. However, in Windows 10, the screen gives you a stop code you can write down and research and a QR code you can use with your phone. However, this only sends you to the Microsoft website and provides a description of certain error codes.
What we find useful is configuring Windows to save a file that contains lots of information regarding the BSOD and how we can go about fixing the error. This is called a minidump file.
How to Configure Windows to Save a Minidump File
By default, the option to create a minidump file is not enabled so you’ll need to turn it on. Do this now, even if you don’t have a BSOD problem, because otherwise you won’t have a log when the crash happens.
1. Navigate to the System Properties Control Panel menu. You can get there by typing “sysdm.cpl” into the Windows search box. Or by going to Settings->System->About and clicking Advanced system settings.
2. Select the Advanced tab, then click the Settings button under Startup and Recovery.
3. Enable the following options:
● Write an event to the system log
● Automatically restart
● Write debugging information -> Small memory dump (256KB)
With this enabled, whenever Windows crashes, the minidump file will be created under “%SystemRoot%\Minidump”, which normally translates to C:\Windows\Minidump. You can change this location if you choose to, but keep in mind that most programs that troubleshoot minidump logs look in the default location, so it's best to leave it as it is.
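Those same checkboxes map to values under the CrashControl registry key, which can be handy when configuring many machines at once. The fragment below is a sketch based on the documented value names; double-check it against your Windows build before importing:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl]
; Write an event to the system log
"LogEvent"=dword:00000001
; Automatically restart
"AutoReboot"=dword:00000001
; 3 = small memory dump (256KB)
"CrashDumpEnabled"=dword:00000003
```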
How to Read the Minidump and See What Caused Your BSOD
Now that the minidump is configured, you’ll need to download an application that can read the file and provide useful information. A tool called BlueScreenView comes recommended for doing just this.
You can download BlueScreenView by going to the official website and selecting either the 32-bit or 64-bit version of the application.
After downloading the tool, extract it to a folder so it can be run.
Once the tool is extracted, double-click the “BlueScreenView” icon to get started. BlueScreenView will open the default minidump location and parse the logs it finds there. If you've experienced a number of crashes or haven't removed older minidump files, be mindful of the dates associated with each log.
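If the Minidump folder has accumulated several crashes, a few lines of Python can list them newest-first before you open anything in BlueScreenView. The folder path shown is the Windows default, and the helper function name is our own:

```python
# Small helper that lists minidump files newest-first, so you know which
# log matches the crash you're chasing. The default Windows location is
# C:\Windows\Minidump; pass a different folder if you've changed it.
from pathlib import Path

def list_minidumps(folder: str) -> list:
    dumps = Path(folder).glob("*.dmp")
    newest_first = sorted(dumps, key=lambda p: p.stat().st_mtime, reverse=True)
    return [p.name for p in newest_first]

# e.g. list_minidumps(r"C:\Windows\Minidump")
```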
Using BlueScreenView to Understand Minidump Files
When you first use BlueScreenView, it will provide you with several pieces of information and at first, it may seem confusing. However, the format is straightforward and it does highlight the important information to get you started.
The files or applications that caused the crash will be highlighted in red, giving you a good idea of where to start correcting the issue.
In this screenshot, we can see that on this specific minidump, an issue was detected that affected three files: dxgmms2.sys, ntoskrnl.exe and watchdog.sys.
Further along the upper panel, the right-hand columns tell us what caused the crash. In this image, we can see that watchdog.sys caused the problem. This is a good starting point, as you can now search Google or Bing to see how it could become a problem and find possible solutions.
We know that watchdog.sys is the potential cause, but what about dxgmms2.sys and ntoskrnl.exe? As those were the affected files, we need to find out what they are as well. A quick search shows that dxgmms2.sys is related to the Windows DirectX drivers, while ntoskrnl.exe is the operating system kernel executable, responsible for keeping the operating system running.
Using this view of the Windows minidump file, we can deduce that the BSOD was likely caused by a graphics driver issue, which can typically be corrected by installing a newer version of the driver or reinstalling the current driver.
What If The Minidump File Shows A Hardware Error?
While driver issues are usually easily fixed, a BSOD that results from failed hardware is a different story. One such example is the FAULTY_HARDWARE_CORRUPTED_PAGE error. Here, you would still use an application such as BlueScreenView to find the cause of the error, but when a hardware error occurs, there's no magical fix that will correct it. For this specific error, let's say the culprit was an installed memory module.
To figure out if this is the actual cause, we'd have to test the memory. There are several ways to do this: using a hardware memory checker or an application. Seeing how most people don't have access to a physical memory checker, we'll opt for the application route. Thankfully, Microsoft has included a memory diagnostics tool in Windows dating back to Windows 7. To use it, open a run prompt and type “mdsched”.
You'll have two options to choose from: Restart Now, or Check for problems the next time you start my computer. If you choose the first option, be sure to save your work, as Windows will close out.
Once your computer restarts, the memory checker will load and start checking your memory. Depending on how much memory you have installed, the process can take a while. While the test is running, you’ll see a progress bar and an overall status. Any errors that may be encountered will be displayed under the status section.
Once the test is completed, your PC will boot back into Windows. If there are no errors, you can conclude that your memory is not at fault.
The Oscars are tonight, and despite my best efforts I still haven't seen all the Best Picture contenders, so I'll hold off on making predictions (though honestly, I'm not sure how anyone beats Chadwick Boseman for Best Actor). Lots of good trailers this week, including the return of the most wholesome show on all of streaming and the next big Marvel movie.
Ted Lasso Season 2
You know when you really love a show and it gets popular and everyone else likes it… and you brace for its sophomore season to be not quite as good as its debut? I am hopeful that Ted Lasso will not fall into this pattern, and the trailer for season two looks extremely promising (Ted: “Back home if a team was playing poorly, we don’t call them unlucky, what do we call ‘em, Coach?” Beard: “New York Jets.”) Jason Sudeikis, Brendan Hunt, Hannah Waddingham, and Juno Temple return along with others from season one for Ted Lasso, which drops on Apple TV Plus July 23rd. And if you’re interested in a little bonus Ted Lasso content, developer David Smith managed to figure out the recipe for the shortbread cookies Ted makes for Hannah. So wholesome.
Shang-Chi and the Legend of the Ten Rings
The next big Marvel movie is the first to center an Asian superhero (and has an almost entirely Asian cast) and it looks very, very fun even if you don’t know the backstory and connection to the Marvel Cinematic Universe. Simu Liu plays Shang-Chi, whose father trained him as a child to be an assassin. He tries to escape and live a normal life but as anyone who has ever watched a superhero movie knows, it ain’t that easy. Tony Leung, Michelle Yeoh, and Awkwafina also star in Shang-Chi and the Legend of the Ten Rings, coming to theaters September 3rd.
Annette
Adam Driver and Marion Cotillard play a comedian and opera singer who fall in love and have a child they name Annette, “a mysterious little girl with an exceptional destiny.” The trailer has that French cinema feel, for sure, which makes sense, since Annette is the English-language debut of French director Leos Carax. Annette will open the 74th Cannes Film Festival on July 6th.
Fathom
Scientists Michelle Fournet and Ellen Garland embark on separate research trips in opposite hemispheres to try to better understand the songs of humpback whales. No one really knows why whales “sing,” despite decades of research. It’s challenging work that one of the scientists compares to “pointing all of our satellites skyward and listening for a sign from outer space.” The trailer is just beautiful (obviously watch with the sound on). Fathom arrives on Apple TV Plus June 25th.
Here Today
Billy Crystal wrote, directed, and stars in this movie about an aging comedy writer who befriends a woman (Tiffany Haddish) whose boyfriend bet on a date with him at an auction. Who would not be content to watch Haddish and Crystal just bounce jokes off each other for 90 minutes? The underlying premise, though, is that Crystal is losing his memory and Haddish steps up to help him. Here Today arrives in theaters May 7th.
The Conjuring: The Devil Made Me Do It
I continue to be surprised at the endurance of The Conjuring movies, which continue to be quite scary. I have to be honest that I’m not a super big fan of these movies because of the way they often put children in terrifying situations, a theme which seems to persist in this latest installment. Paranormal investigators Ed and Lorraine Warren (Patrick Wilson and Vera Farmiga) are back to investigate a murder committed by a young man who claims he was possessed by the devil. The Conjuring: The Devil Made Me Do It premieres on HBO Max and in theaters on June 4th.
Comics have never been bigger: with Marvel TV shows, DC movies, and indie adaptations growing by the day, comic books are more prominent in pop culture than ever. This biweekly Verge column recommends comic series new and old, whether you’re a longtime fan or a newcomer.
I’ve played a lot of Fortnite over the years, but I’ve never really thought about the internal lives of the many characters that inhabit the battle royale island. I was always too busy seeking shelter and supplies or exploring whatever big event was happening at the time. But now that I’ve read Zero Point, a crossover between Batman and Fortnite, I can’t stop thinking about what’s actually happening to them as they fight to the death over and over again.
What is it? Zero Point is a new six-issue series — the first is available now — that attempts to make sense of the convoluted world of Fortnite. At the outset, a mysterious crack appears in Gotham’s skies, and Batman heads out to investigate. He spots Harley Quinn by the disturbance and is eventually pulled into it against his will. As you can probably guess, on the other side of the rift is the Fortnite island.
Batman immediately notices some strange things about this unknown place. He seems to be suffering from some form of amnesia; aside from some muscle memory — i.e., the ability to fight and use gadgets — he can’t remember anything about who he is. He also can’t talk, and for some reason, everyone is trying to kill each other. At one point, he posits a possible explanation: “I’ve gone mad.” It isn’t until he sees Catwoman that his memory is jostled just a little bit.
The issue ends without much resolution, but it poses a lot of questions I’ve never really considered much before. Why can’t the characters talk? And is there a reason everyone is actually trying to kill each other? The developers at Epic Games have steadily been building out the lore of Fortnite through in-game events and other means, even tying in some of the many licensed characters that have been added. But the comic goes in a different direction. Reading it is like seeing what goes through Batman’s head in the middle of a battle royale match.
It’s completely weird and utterly fascinating — and I’m very curious where it’s headed.
Who’s it by? Zero Point is written by Christos Gage, with art by Nelson Faro DeCastro, Reilly Brown, and John Kalisz. Donald Mustard — the chief creative officer at Epic — is listed as a story consultant. (He also apparently made a variant cover for the second issue.)
Where can I read it? Zero Point is available in both physical and digital forms, but there are some small complications. If you grab a physical issue or read through DC Universe Infinite, you’ll get some bonus in-game items via a code; for issue one, that means a Harley Quinn Fortnite character. However, if you pick up an issue via Comixology, you won’t get a code, just the book. Issue 1 is available now, with subsequent issues coming on May 4th, May 18th, June 1st, June 15th, and July 6th.
B450 Motherboard (Image credit: GS Group & Philax)
GS Group Holding and Philax have started manufacturing Russia’s first domestically-produced B450 motherboard. Philax plans to release at least 40,000 motherboards to the Russian market.
Philax specifically chose the B450M Pro4 because it can accept a TPM module, which is important for government agencies. GS Group Holding and Philax’s partnership doesn’t stop with motherboards, though. The duo also has plans to produce up to 50,000 monitors, and there’s an 18-month project to develop and produce motherboards for Russia’s homemade Elbrus and Baikal processors.
Avid enthusiasts will probably find Philax’s B450 motherboard very familiar. That’s because the design is based on ASRock’s B450M Pro4. Philax and ASRock presumably reached a deal for the former to use the design, likely under a licensing agreement of some sort. Obviously, Philax’s rendition doesn’t carry the ASRock brand; in fact, it doesn’t even sport the model name.
Although the B450 chipset is a bit outdated, it’s compatible with a wide range of Ryzen processors and APUs, including the latest Ryzen 5000 (Vermeer) and Ryzen 4000 (Renoir) lineups. Adhering to the micro-ATX form factor, the motherboard comes with four DDR4 memory slots. It supports DDR4-3200 and above memory modules.
While not generous, the AM4 motherboard does come with the necessities. It provides four SATA III ports for standard hard drives and SSDs and up to two M.2 slots for high-speed drives. The expansion options on the B450 motherboard consist of two PCIe 3.0 x16 slots and one PCIe 2.0 x1 slot. The speed varies depending on the processor choice.
Ryzen APUs will be able to take advantage of the motherboard’s D-Sub, DVI-D, or HDMI port. Connectivity-wise, the B450 motherboard offers a PS/2 combo port, two USB 2.0 ports, four USB 3.1 Gen1 ports, and even USB 3.1 Gen2 Type-A and Type-C ports.
Intel’s latest low-power eight-core Core i9-11900T and Core i7-11700T ‘Rocket Lake’ desktop processors with a 35W TDP for LGA1200 motherboards are already available in Europe and Japan. But there’s no sign of them in the U.S. yet.
Most performance enthusiasts are eager to get Intel’s unlocked K-series processors with a 95W – 125W TDP that can boost their clocks sky-high and support all the latest technologies. But some enthusiasts prefer small form-factor, low-power builds and would still like CPUs with eight or ten cores and all the latest technologies. Intel typically addresses these users with its T-series processors featuring a 35W TDP, but sometimes these chips are hard to get.
Intel formally introduced its low-power eight-core Core i9-11900T and Core i7-11700T ‘Rocket Lake’ CPUs with a 35W TDP along with their high-performance i9-11900K and i7-11700K brethren on March 30, 2021. But unlike the ‘unlocked and unleashed’ K-series processors, the new T-series products were not immediately available at launch. Fortunately, the situation is starting to change.
Akiba PC Hotline and Hermitage Akihabara report that the new 35W Core i9-11900T and Core i7-11700T CPUs in bulk and boxed versions are readily available in at least four stores in Tokyo, Japan. The higher-end i9-11900T model is sold for ¥60,478 – ¥62,700 with VAT, whereas the i7-11700T SKU is priced at ¥45,078 – ¥47,300 including tax.
Geizhals.EU, a price search engine in Europe, finds that Intel’s Core i9-11900T is available in dozens of stores in Austria, Germany, and Poland starting at €455 with VAT ($462 without taxes). Meanwhile, there are no offers for the cheaper Core i7-11700T at this point.
But at the moment, the new Rocket Lake-T CPUs are not available in the U.S. at Amazon and Newegg. In fact, the stores are not even taking pre-orders on these parts. The situation has already prompted enthusiasts of low-power SFF builds to start a thread on Reddit to monitor their availability.
Intel’s latest Core i9-11900T and Core i7-11700T processors indeed look quite attractive. The CPUs feature eight cores with Hyper-Threading, 16MB of cache, a modern Xe-based integrated GPU, and support for up to 128GB of memory. The i9-11900T and i7-11700T have relatively low base frequencies of 2.00 GHz and 1.50 GHz, respectively, but rather high all-core boost clocks of 3.60 GHz and 3.70 GHz. When installed in compatible motherboards, they can hit high frequencies and pretty much guarantee great system responsiveness and decent performance in mainstream applications (assuming adequate cooling). So while these are definitely niche chips, it’s not surprising that demand for them is fairly high.