A star engineer, Raja Koduri is one of the Intel executives who tends to reveal development progress of upcoming products via social media. A couple of months ago, he announced that the first GPU based on Intel’s Xe HPG architecture had powered on, and this week, he seemingly teased the bring-up process of Intel’s upcoming entrant into the best gaming graphics card race.
“From 2012 to 2021 — same Intel Folsom lab, many of the same engineers with more grey hair,” Raja Koduri, general manager of Architecture, Graphics, and Software at Intel, wrote in a Twitter post Thursday. “I was at Apple back then, getting hands on with pre-production Crystalwell, nine years later playing with a GPU that is >20x faster!”
The two pictures Koduri included show him running tests on two development systems: one based on a 2012 pre-production Intel Haswell chip with Iris Pro 5200 graphics and its 128MB eDRAM package (Crystalwell), and another powered by an upcoming Intel Xe GPU. He didn’t specify that the GPU was Xe HPG, but the use of 3DMark, as well as the video outputs, points toward the gaming-focused part.
The second image partly reveals the Xe bring-up board. Such boards are designed to provide maximum flexibility in terms of GPU configuration and power delivery, so while they have display outputs, they do not look like graphics cards at all. That said, it’s not surprising to see the Xe HPG development board come with a cooling system that looks like it belongs with a server CPU.
It’s also interesting that Koduri says the Xe GPU was over 20 times faster than Intel’s Iris Pro 5200 from 2013. Of course, an upcoming discrete graphics processor should be an order of magnitude faster than an eight-year-old integrated GPU. And “>20x faster” could mean a range of things (not that we would expect Koduri to share performance numbers at this stage).
For some comparison, Intel’s Iris Pro 5200 with 40 execution units (EUs) and 128MB of eDRAM scores 1,426 graphics points in 3DMark FireStrike. By contrast, the latest Intel Iris Xe G7 integrated GPU with 96 EUs scores between 5,800 and 5,900 graphics points, making it over four times faster than its ancestor in said benchmark.
Modern discrete graphics cards, such as Nvidia’s GeForce RTX 3060 Ti, score around 31,000 graphics points in 3DMark FireStrike, so they are, indeed, over 20 times faster than Intel’s Iris Pro 5200. Meanwhile, Nvidia’s top-of-the-range GeForce RTX 3090 scores between 52,000 and 53,000 graphics points in 3DMark FireStrike (37 times faster than the Iris Pro 5200).
We can’t draw any firm conclusions on Intel Xe HPG’s performance based on Koduri’s tweet. But if by over 20 times faster, the exec meant something close to 20 times faster, then we can expect the GPU to compete against products like the GeForce RTX 3060 Ti. We’ll have to wait for much more information to see.
AMD still has its Zen 3 desktop APUs under wraps, but a Chinese eBay merchant already started selling engineering samples. The AMD Ryzen 3 5300G, which was previously sold for $176.99, is no longer available on eBay, but we still have the benchmarks that were listed.
The Zen 3 microarchitecture powers AMD’s latest 7nm processors, spanning from the mobile chips to the core-heavy server offerings. While the chipmaker has already released its Ryzen 5000 mobile (Cezanne) parts, the DIY market is still awaiting the desktop variants, which may be able to compete with the best CPUs. It’s expected that AMD’s next-generation APUs will leverage Zen 3 cores and slot into the AM4 CPU socket. Based on AMD’s history, the chips will likely come with Vega graphics but with a small generational uplift.
The Zen 3 processor listed on eBay carries the 100-000000262-30_Y designation, which is the orderable part number, and the poster listed it as a Ryzen 3 5300G. Without AMD’s confirmation though, we can’t know for sure. It’s possible the chip will come out as the Ryzen 3 Pro 5350G, with equal specs but bringing extra features around things like security. In any case, the chip listed should be the baby brother to the Ryzen 7 5700G or Ryzen 7 Pro 5750G.
AMD Ryzen 3 5300G Specifications
| Processor | Cores / Threads | Base / Boost Clock (GHz) | L2 Cache (MB) | L3 Cache (MB) | TDP (W) |
| --- | --- | --- | --- | --- | --- |
| Ryzen 3 5300G* | 4 / 8 | 3.5 / ? | 2 | 8 | 65 |
| Ryzen 3 3300X | 4 / 8 | 3.8 / 4.3 | 2 | 16 | 65 |
| Ryzen 3 Pro 4350G | 4 / 8 | 3.8 / 4.0 | 2 | 4 | 65 |
| Ryzen 3 3100 | 4 / 8 | 3.6 / 3.9 | 2 | 16 | 65 |
| Core i3-10100 | 4 / 8 | 3.6 / 4.3 | 1 | 6 | 65 |

*Specs not confirmed by AMD
Based on the eBay listing, the Ryzen 3 5300G will arrive as a quad-core, 7nm processor with simultaneous multithreading (SMT) enabled. The APU appears to have a 3.5 GHz base clock, but the boost clock wasn’t shared. It seemingly clocks in lower than its predecessors, but remember that Zen 3’s performance uplift comes from IPC advancements rather than high clock speeds. On top of that, the clock speeds should be taken with a grain of salt, since the processor in question is an engineering sample.
Cezanne offers twice as much L3 cache as Renoir APUs, so it’s not surprising to see the Ryzen 3 5300G come equipped with an 8MB L3 cache. However, that’s still half of what’s found on Zen 2 desktop Ryzen chips.
Given the model name, the Ryzen 3 5300G should be the successor to the Ryzen 3 4300G. Unfortunately, AMD decided to reserve desktop Renoir for pre-built OEM systems. You could still pick one up from the grey market, but it doesn’t come with any support or a warranty.
It’s uncertain if AMD will change its mind with desktop Cezanne. However, the rumors point to the possibility of the Zen 3 APUs arriving on the DIY market.
AMD Ryzen 3 5300G Benchmarks
| Processor | CPU-Z Single Thread | CPU-Z Multi Thread | Fritz Chess Benchmark | Cinebench R15 |
| --- | --- | --- | --- | --- |
| Ryzen 3 5300G | 553.22 | 2,985.12 | 20,072 | 1,117 |
| Ryzen 3 3300X | 528 | 2,824 | 19,674 | 1,101 |
| Ryzen 3 Pro 4350G | 501 | 2,766 | 17,831.2 | 957.46 |
| Ryzen 3 3100 | 474 | 2,645 | 17,251 | 1,015 |
| Core i3-10100 | N/A | 2,461 | 16,037 | 1,001 |
In the CPU-Z benchmark shared on eBay, the Ryzen 3 5300G reportedly delivered 10.4% and 4.8% higher single-threaded performance than the Ryzen 3 Pro 4350G (Zen 2) and Ryzen 3 3300X (Zen 2), respectively. When it came to multi-threaded performance, the Ryzen 3 5300G was up to 7.9% faster than the Ryzen 3 Pro 4350G and up to 21.3% faster than the Core i3-10100 (Comet Lake).
The Ryzen 3 5300G’s dominance also extended to the other tests, including the Fritz Chess and Cinebench R15 benchmarks. In the former, the Zen 3 APU outperformed the Ryzen 3 Pro 4350G and Core i3-10100 by 12.6% and 25.2%, respectively.
In Cinebench R15, we can see the Ryzen 3 5300G rising above the Ryzen 3 Pro 4350G by 16.7% and the Core i3-10100 by 11.6%.
| Game | 1080p, Low Settings | 1080p, Medium Settings | 1080p, High Settings |
| --- | --- | --- | --- |
| Battlefield V | 48 fps | 37 fps | 29 fps |
| Battlefield 4 | 95 fps | 82 fps | 47 fps |
While the Ryzen 3 5300G’s processing prowess is impressive, many will probably pick up the Zen 3 APU for its gaming potential. The Ryzen 3 5300G already appears to be a decent APU for gaming at 1080p resolution, but its 720p gaming performance should be even more spectacular.
At 1080p, the Ryzen 3 5300G’s Vega graphics engine reportedly pushed frame rates up to 48 frames per second (fps) on Battlefield V and 95 fps on Battlefield 4 with low settings. With medium settings, the APU’s listed frame rates dropped to 37 fps and 82 fps, respectively.
On high settings the Ryzen 3 5300G’s graphical performance took a hit. The APU ran Battlefield V at 29 fps, which is just 1 fps below what we consider playable, and Battlefield 4 at 47 fps.
It’s unclear why AMD is taking so long to announce desktop Cezanne. The engineering samples are evidently out in the wild already. With the current graphics card shortage, the Zen 3 APUs could be a legit option for gamers with tight budgets.
Executives of Loongson Technology, a company spun out of the Chinese Academy of Sciences, said at a recent conference that the next-gen Loongson 5000-series processors were on track to be released this year. The new MIPS64-compatible CPUs are aimed at client PCs as well as multiprocessor servers. Interestingly, the new chips may be the last high-end MIPS64 offerings from the company.
The chips in question are the 2.50 GHz quad-core Loongson 3A5000 for client PCs and the 16-core Loongson 3C5000 for servers with up to 16 processors. Both chips are set to be made using a 12nm process technology (most likely one of TSMC’s nodes), reports CnTechPost, citing a small conference held earlier this year. Both CPUs are said to be based on a new internal microarchitecture that is compatible with the MIPS64 instruction set, and to feature enlarged caches and a new memory controller.
Based on some previous reports, the 3A5000 was taped out in April 2020, which is why it is due in the coming months, whereas the 3C5000 was taped out in August 2020, so it will be released towards the end of 2021 if everything goes as planned.
One interesting thing about Loongson Technology is that the company is reportedly ‘looking forward to join the open-source instruction consortium.’ The consortium mentioned by Loongson’s executives is almost certainly RISC-V International, which essentially means that going forward, the company will focus on RISC-V.
Loongson has historically developed MIPS-compatible CPU cores, so switching to RISC-V should not be too challenging for the company as the architectures have many similarities. Meanwhile, the adoption of RISC-V means that Loongson’s upcoming processors (or cores) will be supported by a broad ecosystem of software and hardware, something that will inevitably make them more competitive.
Developing new RISC-V-compatible microarchitectures and cores will take several years, so for now, Loongson will have to promote its 3A5000 among PC makers and its 3C5000 among server and HPC customers.
Although Intel has not yet officially launched its 11th-Gen Core processors for desktops codenamed Rocket Lake, these CPUs were available from a single retailer for a brief period of time, so enthusiasts have already begun experimenting. Recently, one experimenter decided to remove the Core i7-11700K’s lid (delid) to reveal the die underneath.
This week MoeBen, an enthusiast from Overclock.net forums, delidded Intel’s Core i7-11700K processor. Even though he used special tools for delidding, the CPU died as a result of his manipulations.
The main thing that strikes the eye about Intel’s Rocket Lake is its rather massive die. A quick comparison of Rocket Lake’s silicon to delidded previous-generation Intel processors reveals that the die of Intel’s eight-core Core i7-11700K is both ‘taller’ and ‘wider’ than that of Intel’s 10-core Core i9-10900K. Also, the new CPU uses slightly different packaging, with resistors placed differently.
Based on rough napkin math using the size of Intel’s LGA115x/1200 packaging (38 mm × 38 mm), an estimate for the Rocket Lake die puts it at around 11.78 mm × 24.58 mm, or 289.5 mm². Such a large die area puts Rocket Lake in the league of the company’s LCC high-end desktop and server processors. For example, Intel’s 10-core Skylake-SP with a massive cache is around 322 mm².
Intel’s Rocket Lake processors pack eight cores based on the Cypress Cove microarchitecture (a 14nm backport of the company’s Sunny Cove microarchitecture), an integrated GPU featuring the Xe architecture, a new media encoding/decoding engine, a revamped display pipeline, and a new memory controller.
Essentially, Rocket Lake uses CPU and GPU IP originally designed for Intel’s 10nm-class process technologies, yet since it is made using one of Intel’s 14nm nodes, it is natural that said IP consumes more silicon area. To that end, it is not surprising that the new CPU is substantially bigger than its predecessor despite having fewer cores: since those cores are larger (and faster), they take up more die space.
Intel is projected to officially launch its Rocket Lake processors on March 30, 2021.
Chrome’s Android app now lets you preview a webpage before committing to clicking on a link, 9to5Google reports. The feature appears to have been enabled via a server-side update to version 89 of the browser, and can be accessed by long-pressing on a link and then tapping “Preview page.” It seems to be Android-only for the time being.
It’s a small but helpful feature if you want to quickly check the contents of a webpage without fully leaving your current page. Maybe that’s to get the gist of an article by reading its first couple of paragraphs, or because you’re still vigilant about being Rick-rolled in 2021.
The feature has been included in other browsers for a little while now. On iOS, both Safari and Edge already preview a webpage when you long-press a link, and neither requires the additional step of selecting “Preview page” from a menu.
Chrome’s support for link previews on Android has been in the works for a little while, and was spotted while it was in development way back in December 2018 by XDA Developers. Now, however, it appears to be available to everyone without having to be manually enabled.
Samsung is being impacted by yet another supply shortage in the tech industry, this time related to its own SSD controllers. Samsung’s Texas factories, which are responsible for producing SSD controllers, have been idle since February due to power outages caused by severe weather conditions. The company still hasn’t resumed production at the facilities, and according to a report from DigiTimes, this will halt the production of Samsung’s PCIe SSD controllers until May.
This situation could be very detrimental to the company over the course of a few months. Samsung’s factories in Texas are responsible for producing most of Samsung’s SSD controllers worldwide. Sources tell DigiTimes that up to 75% of its PCIe SSD controller production will be affected this month, impacting the company’s products for high-end desktop PCs.
According to the report, the situation doesn’t end there—Samsung’s supply issues could spill out into the server and mainstream PC markets by April, as well.
Even though Samsung’s NAND flash is not built in Austin, every SSD needs a controller to function properly, so this shortage will directly affect Samsung’s ability to produce SSDs.
To counter this issue, many OEMs responsible for building PCs have already made arrangements to switch to competing storage solutions for the time being. Expect the DIY market to get hit as well. So if you’re eyeballing a Samsung SSD for a new build, like the Samsung 980 we reviewed earlier today, get your order in as soon as possible before prices skyrocket due to a supply/demand imbalance.
Fortunately, Samsung expects to restart its SSD controller production by April and resume shipping PCIe SSD controllers by May, so this shortage could only last a few months. However, as we’ve seen time and again, shortages can impact the market long after production equalizes, as empty supply chains can take quite a bit of time to resume normal operations.
Supermicro’s 1023US-TR4 is a slim 1U dual-socket server designed for high-density compute environments in high-end cloud computing, virtualization, and enterprise applications. With support for AMD’s EPYC 7001 and 7002 processors, this high-end server packs up to two 64-core EPYC Rome processors, allowing it to cram 128 cores and 256 threads into one slim chassis.
We’re on the cusp of Intel’s Ice Lake and AMD’s EPYC Milan launches, which promise to reignite the fierce competition between the long-time x86 rivals. In preparation for the new launches, we’ve been working on a new set of benchmarks for our server testing, and that’s given us a pretty good look at the state of the server market as it stands today.
We used the Supermicro 1023US-TR4 server for EPYC Rome testing, and we’ll focus on examining the platform in this article. Naturally, we’ll add in Ice Lake and EPYC Milan testing as soon as those chips are available. In the meantime, here’s a look at some of our new benchmarks and the current state of the data center CPU performance hierarchy in several hotly-contested price ranges.
Inside the Supermicro 1023US-TR4 Server
The Supermicro 1023US-TR4 server comes in the slim 1U form factor. And despite its slim stature, it can host an incredible amount of compute horsepower under the hood. The server supports AMD’s EPYC 7001 and 7002 series chips, with the latter series topping out at 64 cores apiece, which translates to 128 cores and 256 threads spread across the dual sockets.
Support for the 7002-series chips requires a 2.x board revision, and the server can accommodate CPU cTDPs of up to 280W. That means it can accommodate the beefiest of EPYC chips, which currently come in the form of the 64-core EPYC 7H12 with its 280W TDP.
The server has a tool-less rail mounting system that eases installation into server racks and the CSE-819UTS-R1K02P-T chassis measures 1.7 x 17.2 x 29 inches, ensuring broad compatibility with standard 19-inch server racks.
The front panel comes with standard indicator lights, like a unit identification (UID) light that helps with locating the server in a rack, along with drive activity, power, status light (to indicate fan failures or system overheating), and two LAN activity LEDs. Power and reset buttons are also present at the upper right of the front panel.
By default, the system comes with four tool-less 3.5-inch hot-swap SATA 3 drive bays, but you can configure the server to accept four NVMe drives on the front panel, and an additional two M.2 drives internally. You can also add an optional SAS card to enable support for SAS storage devices. The front of the system also houses a slide-out service/asset tag identifier card to the upper left.
Popping the top off the chassis reveals two shrouds that direct air from the two rows of hot-swappable fans. A total of eight fan housings feed air to the system, and each housing includes two counter-rotating 4cm fans for maximum static pressure and reduced vibration. As expected with servers intended for 24/7 operation, the system can continue to function in the event of a fan failure. However, the remainder of the fans will automatically run at full speed if the system detects a failure. Naturally, these fans are loud, but that’s not a concern for a server environment.
Two fan housings are assigned to cool each CPU, and a simple black plastic shroud directs air to the heatsinks underneath. Dual SP3 sockets house both processors, and they’re covered by standard heatsinks that are optimized for linear airflow.
A total of 16 memory slots flank each processor, for a total of 32 memory slots that support up to 4TB of registered ECC DDR4-2666 with EPYC 7001 processors, or an incredible 8TB of ECC DDR4-3200 memory (via 256GB DIMMs) with the 7002 models, easily outstripping the memory capacity available with competing Intel platforms.
We tested the EPYC processors with 16x 32GB DDR4-3200 Samsung modules for a total memory capacity of 512GB. In contrast, we loaded down the Xeon comparison platform with 12x 32GB SK hynix DDR4-2933 modules, for a total capacity of 384GB of memory.
The H11DSU-iN motherboard’s expansion slots consist of two full-height 9.5-inch PCIe 3.0 slots and one low-profile PCIe 3.0 x8 slot, all mounted on riser cards. An additional internal PCIe 3.0 x8 slot is also available, but this slot only accepts proprietary Supermicro RAID cards. All told, the system exposes a total of 64 lanes (16 via NVMe storage devices) to the user.
As one would imagine, Supermicro has other server offerings that expose more of EPYC’s available 128 lanes to the user and also come with the faster PCIe 4.0 interface.
The rear I/O panel includes four gigabit RJ45 LAN ports powered by an Intel i350-AM4 controller, along with a dedicated IPMI port for management. Here we find the only USB ports on the machine, which come in the form of two USB 3.0 ports, along with a COM port and a VGA port.
Two 1000W Titanium-Level (96%+) redundant power supplies provide power to the server, with automatic failover in the event of a failure, as well as hot-swappability for easy servicing.
The BIOS is easy to access and use, while the IPMI web interface provides a wealth of monitoring capabilities and easy remote management that matches the type of functionality available with Xeon platforms. Among many options, you can update the BIOS, use the KVM-over-LAN remote console, monitor power consumption, access health event logs, monitor and adjust fan speeds, and monitor the CPU, DIMM, and chipset temperatures and voltages. Supermicro’s remote management suite is polished and easy to use, which stands in contrast to other platforms we’ve tested.
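Much of that monitoring can also be scripted against the same BMC from the command line, which is handy when managing these boxes at scale. Below is a rough sketch using the generic ipmitool utility; the BMC address and credentials shown are placeholders, not Supermicro defaults.

```
# Query the chassis power state over IPMI-over-LAN
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> chassis power status

# Dump sensor readings: fan speeds, CPU/DIMM temperatures, and voltages
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> sensor list
```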
Test Setup
| Processor | Cores / Threads | 1K Unit Price | Base / Boost (GHz) | L3 Cache (MB) | TDP (W) |
| --- | --- | --- | --- | --- | --- |
| AMD EPYC 7742 | 64 / 128 | $6,950 | 2.25 / 3.4 | 256 | 225 |
| Intel Xeon Platinum 8280 | 28 / 56 | $10,009 | 2.7 / 4.0 | 38.5 | 205 |
| Intel Xeon Gold 6258R | 28 / 56 | $3,651 | 2.7 / 4.0 | 38.5 | 205 |
| AMD EPYC 7F72 | 24 / 48 | $2,450 | 3.2 / ~3.7 | 192 | 240 |
| Intel Xeon Gold 5220R | 24 / 48 | $1,555 | 2.2 / 4.0 | 35.75 | 150 |
| AMD EPYC 7F52 | 16 / 32 | $3,100 | 3.5 / ~3.9 | 256 | 240 |
| Intel Xeon Gold 6226R | 16 / 32 | $1,300 | 2.9 / 3.9 | 22 | 150 |
| Intel Xeon Gold 5218 | 16 / 32 | $1,280 | 2.3 / 3.9 | 22 | 125 |
| AMD EPYC 7F32 | 8 / 16 | $2,100 | 3.7 / ~3.9 | 128 | 180 |
| Intel Xeon Gold 6250 | 8 / 16 | $3,400 | 3.9 / 4.5 | 35.75 | 185 |
Here we can see the selection of processors we’ve tested for this review, though we use the Xeon Platinum 8280 as a stand-in for the less expensive Xeon Gold 6258R. These two chips are identical and provide the same level of performance, with the difference boiling down to the more expensive 8280 coming with support for quad-socket servers, while the Xeon Gold 6258R tops out at dual-socket support.
| Server | Memory | Tested Processors |
| --- | --- | --- |
| Supermicro AS-1023US-TR4 | 16x 32GB Samsung ECC DDR4-3200 | EPYC 7742, 7F72, 7F52, 7F32 |
| Dell/EMC PowerEdge R460 | 12x 32GB SK hynix DDR4-2933 | Intel Xeon 8280, 6258R, 5220R, 6226R, 6250 |
To assess performance with a range of different potential configurations, we used the Supermicro 1023US-TR4 server with four different EPYC Rome configurations. We outfitted this server with 16x 32GB Samsung ECC DDR4-3200 memory modules, ensuring that both chips had all eight memory channels populated.
We used a Dell/EMC PowerEdge R460 server to test the Xeon processors in our test group, giving us a good sense of performance with competing Intel systems. We equipped this server with 12x 32GB SK hynix DDR4-2933 modules, again ensuring that each Xeon chip’s six memory channels were populated. These configurations give the AMD-powered platform a memory capacity advantage, but that comes as an unavoidable side effect of the capabilities of each platform. As such, bear in mind that memory capacity disparities may impact the results below.
We used the Phoronix Test Suite for testing. This automated suite simplifies running complex benchmarks in the Linux environment: it is maintained by Phoronix, it installs all needed dependencies, and its test library includes 450 benchmarks and 100 test suites (and counting). Phoronix also maintains openbenchmarking.org, an online repository for uploading test results into a centralized database. We used Ubuntu 20.04 LTS and the default Phoronix test configurations with the GCC compiler for all tests below. We also tested both platforms with all available security mitigations enabled.
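For readers who want to set up a comparable run, the general Phoronix Test Suite workflow on Ubuntu 20.04 looks like the sketch below. The specific test profile name is an assumption drawn from the public openbenchmarking.org catalog, not a record of our exact invocations.

```
# Install the Phoronix Test Suite (packaged in Ubuntu's universe repository)
sudo apt install phoronix-test-suite

# Browse the test profiles available in the library
phoronix-test-suite list-available-tests

# Install dependencies for a test, run it, and collect the results
# (pts/build-linux-kernel is the timed Linux kernel compile profile)
phoronix-test-suite benchmark pts/build-linux-kernel
```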
Linux Kernel and LLVM Compilation Benchmarks
We used the 1023US-TR4 for testing with all of the EPYC processors in the chart, and here we see the expected scaling in the timed Linux kernel compile test with the AMD EPYC processors taking the lead over the Xeon chips at any given core count. The dual EPYC 7742 processors complete the benchmark, which builds the Linux kernel at default settings, in 21 seconds. The dual 24-core EPYC 7F72 configuration is impressive in its own right — it chewed through the test in 25 seconds, edging past the dual-processor Xeon 8280 platform.
AMD’s EPYC delivers even stronger performance in the timed LLVM compilation benchmark — the dual 24-core 7F72s even beat the dual 28-core 8280s. Performance scaling is somewhat muted between the flagship 64-core 7742 and the 24-core 7F72, largely due to the strength of the latter’s much higher base and boost frequencies. That impressive performance comes at the cost of a 240W TDP rating, but the Supermicro server handles the increased thermal output easily.
Molecular Dynamics and Parallel Compute Benchmarks
NAMD is a parallel molecular dynamics code designed to scale well with additional compute resources; it scales up to 500,000 cores and is one of the premier benchmarks used to quantify performance with simulation code. The EPYC processors are obviously well-suited for these types of highly-parallelized workloads due to their prodigious core counts, with the dual 7742 configuration completing the workload 28% faster than the dual Xeon 8280 setup.
Stockfish is a chess engine designed for the utmost in scalability across increased core counts — it can scale up to 512 threads. Here we can see that this massively parallel code scales well with EPYC’s leading core counts. But, as evidenced by the dual 24-core 7F72s effectively tying the 28-core Xeon 8280s, the benchmark also generally responds well to the EPYC processors. The dual 16-core 7F52 configuration also beat out both of the 16-core Intel comparables. Intel does pull off a win, though, as the eight-core 6250 processors beat the 7F32s.
We see similarly impressive performance in other molecular dynamics workloads, like the Gromacs water benchmark that simulates Newtonian equations of motion with hundreds of millions of particles and the NAS Parallel Benchmarks (NPB) suite. NPB characterizes Computational Fluid Dynamics (CFD) applications, and NASA designed it to measure performance from smaller CFD applications up to “embarrassingly parallel” operations. The BT.C test measures Block Tri-Diagonal solver performance, while the LU.C test measures performance with a lower-upper Gauss-Seidel solver.
Regardless of the workload, the EPYC processors deliver a brutal level of performance in highly-parallelized applications, and the Supermicro server handled the heat output without issue.
Rendering Benchmarks
Turning to more standard fare, provided you can keep the cores fed with data, most modern rendering applications also take full advantage of the compute resources. Given the well-known strengths of EPYC’s core-heavy approach, it isn’t surprising to see the 64-core EPYC 7742 processors carve out a commanding lead in the C-Ray and Blender benchmarks. Still, it is impressive to see the 7Fx2 models beat the competing Xeon processors with similar core counts nearly across the board.
The performance picture changes somewhat with the Embree benchmarks, which test high-performance ray tracing libraries developed at Intel Labs. Naturally, the Xeon processors take the lead in the Asian Dragon renders, but the crown renders show that AMD’s EPYC can offer leading performance even with code that is heavily optimized for Xeon processors.
Encoding Benchmarks
Encoders tend to present a different type of challenge: As we can see with the VP9 libvpx benchmark, they often don’t scale well with increased core counts. Instead, they often benefit from per-core performance and other factors, like cache capacity.
However, newer encoders, like Intel’s SVT-AV1, are designed to leverage multi-threading more fully to extract faster performance for live encoding/transcoding video applications. Again, we can see the impact of EPYC’s increased core counts paired with its strong per-core performance as the EPYC 7742 and 7F72 post impressive wins.
Python and Sysbench Benchmarks
The Pybench and Numpy benchmarks are used as a general litmus test of Python performance, and as we can see, these tests don’t scale well with increased core counts. That allows the Xeon 6250, which has the highest boost frequency of the test pool at 4.5 GHz, to take the lead.
Compression and Security
Compression workloads also come in many flavors. The 7-Zip (p7zip) benchmark measures peak theoretical compression performance because it runs directly from main memory, allowing both memory throughput and core counts to heavily impact performance. As we can see, this benefits the EPYC 7742 tremendously, but it is noteworthy that the 28-core Xeon 8280 offers far more throughput than the 24-core 7F72 if we normalize based on core counts. In contrast, the gzip benchmark, which compresses two copies of the Linux 4.13 kernel source tree, responds well to speedy clock rates, giving the eight-core Xeon 6250 the lead due to its 4.5 GHz boost clock.
The open-source OpenSSL toolkit uses SSL and TLS protocols to measure RSA 4096-bit performance. As we can see, this test favors the EPYC processors due to its parallelized nature, but offloading this type of workload to dedicated accelerators is becoming more common for environments with heavy requirements.
SPEC CPU 2017 Estimated Scores
We used the GCC compiler and the default Phoronix test settings for these SPEC CPU 2017 test results. SPEC results are highly contested and can be impacted heavily by various compilers and flags, so we’re sticking with a bog-standard configuration to provide as level a playing field as possible. It’s noteworthy that these results haven’t been submitted to the SPEC committee for verification, so they aren’t official. Instead, view the above tests as estimates based on our testing.
The multi-threaded portion of the SPEC CPU 2017 suite is of most interest for the purpose of our tests, which is to gauge the ability of the Supermicro platform to handle heavy extended loads. As expected, the EPYC processors post commanding leads in both the intrate and fprate subtests, and close monitoring of the platform didn’t find any thermal throttling during these extended-duration tests. The Xeon 6250 and 8280 processors take the lead in the single-threaded intrate tests, while the AMD EPYC processors post impressively strong single-core measurements in the fprate tests.
Conclusion
AMD has enjoyed a slow but steadily-increasing portion of the data center market, and much of its continued growth hinges on increasing adoption beyond hyperscale cloud providers to more standard enterprise applications. That requires a dual-pronged approach of not only offering a tangible performance advantage, particularly in workloads that are sensitive to per-core performance, but also having an ecosystem of fully-validated OEM platforms readily available on the market.
The Supermicro 1023US-TR4 server slots into AMD’s expanding constellation of OEM EPYC systems and allows discerning customers to upgrade from the standard 7002-series processors to the high-frequency H- and F-series models. It also supports up to 8TB of ECC memory, an incredible amount of capacity for memory-intensive workloads. Notably, the system comes with the PCIe 3.0 interface while the second-gen EPYC processors support PCIe 4.0, but this arrangement allows customers that don’t plan to use PCIe 4.0 devices to procure systems at a lower price point. As one would imagine, Supermicro has other offerings that support the faster interface.
Overall we found the platform to be robust, and out-of-the-box installation was simple with a tool-less rail kit and an easily-accessible IPMI interface that offers a cornucopia of management and monitoring capabilities. Our only minor complaints are that the front panel could use a few USB ports for easier physical connectivity. The addition of a faster embedded networking interface would also free up an additional PCIe slot. Naturally, higher-end Supermicro platforms come with these features.
As seen throughout our testing, the Supermicro 1023US-TR4 server performed admirably and didn’t suffer from any thermal throttling issues regardless of the EPYC processors we used, which is an important consideration. Overall, the Supermicro 1023US-TR4 server packs quite the punch in a small form factor that enables incredibly powerful and dense compute deployments in cloud, virtualization, and enterprise applications.
If you’ve ever needed remote access to a PC, you’ve probably tried VNC or other apps such as TeamViewer. However, this kind of software only works within the remote computer’s OS, which means it can’t access the BIOS, reboot, install an operating system or power on the computer. There are several solutions that allow you to remote control a PC independently of its operating system, but using a KVM over IP is one of the most convenient and affordable.
While a store-bought KVM-over-IP device can cost hundreds of dollars, it’s easy to use a Raspberry Pi to create your own. A developer named Maxim Devaev designed his own system called Pi-KVM, which he is planning to sell as a $130 kit. However, if you have the right parts, you can use the software he’s developed and your Pi to put it together for far less.
Below, we’ll show you how to build your own Raspberry Pi-powered KVM over IP that can output full HD video, control GPIO ports and USB relays, configure server power using ATX functions and more. You’ll be able to control the whole setup via a web browser from another device, either over the internet via Tailscale VPN or on your local network.
What You Need to Build a KVM Over IP with Raspberry Pi
Raspberry Pi 4 or Raspberry Pi Zero
16 GB or larger microSD Card. (See best microSD cards for Raspberry Pi)
HDMI-to-CSI bridge like this one, or a USB HDMI capture dongle (https://amzn.to/2ZO9tjo)
USB female to dual male Type-A splitter like this one.
USB C to Type-A cable
5V, 3 amp power supply with USB Type-A output. You’ll be plugging a type-A cable into it so the official Raspberry Pi power supply won’t do.
Setting Up the SD Card for Raspberry Pi KVM Over IP
The software you need for the Raspberry Pi is all contained on a custom disk image that you must download and burn to a microSD card. Here’s how to do that with Raspberry Pi Imager, but you can also use other burning software such as balenaEtcher.
1. Download the Pi-KVM disk image. The first thing we need to do is download the ready-made image from pikvm.org. Note that there are different versions, depending on which Pi you use and whether you use the HDMI-to-CSI bridge or an HDMI-to-USB capture dongle. The image file is in BZ2 format, so you’ll need to uncompress it.
2. Extract the IMG file from the BZ2 file you downloaded. On Windows, BZ2 support isn’t built in, but you can use 7-Zip to do it.
3. Launch Raspberry Pi Imager. If you don’t have it installed already, you can download it from the Raspberry Pi Foundation’s website.
4. Select “Choose OS” -> “Use Custom” and locate the Pi-KVM image. Then click “Choose SD Card” and pick your microSD card, making sure you select the correct drive.
5. Click Write.
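If you prefer the command line on Linux or macOS, the same decompress-and-flash process can be done with bunzip2 and dd. Treat this as a sketch only: the image filename below is a placeholder for whichever build you downloaded, and /dev/sdX must be replaced with your card’s actual device node; double-check it, because dd will overwrite whatever it points at.

```
# Decompress the BZ2 archive to recover the raw IMG file
bunzip2 pikvm-image.img.bz2

# Write the image to the microSD card (replace /dev/sdX with your card)
sudo dd if=pikvm-image.img of=/dev/sdX bs=4M status=progress conv=fsync
```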
Setting Up the Raspberry Pi for KVM Over IP
Now that we have finished burning the microSD card, we can move on to installing the HDMI-to-CSI-2 bridge or USB-to-HDMI dongle and prepping the OTG USB-C cable.
1. Connect the CSI ribbon cable from the HDMI-to-CSI-2 bridge to the Raspberry Pi’s CSI camera port. Make sure that the blue marking faces the black clamp. If you are using an HDMI-to-USB dongle instead, connect it to a USB port on your Pi. If you are using a Pi Zero, you will need a micro-USB to USB Type-A hub.
2. Disable the 5V pin on one of the USB Type-A male connectors from your splitter. The easiest way to do this is to place a small piece of Kapton tape over the right-most pin on the connector. You could also try cutting the wire that leads to that pin, but that’s more complicated.
This will be the connector that attaches to a USB port on the PC you wish to control. If you don’t disable that 5V pin, it will back-feed power from the wall to the PC, possibly damaging its USB port.
3. Connect the USB C-to-A cable to the Type-A female connector on the splitter. This will provide power to the Pi. Your cables should look like the picture below.
4. Connect the USB-C cable to the Raspberry Pi 4’s USB-C port.
5. Connect the unmodified Type-A male to your power supply.
6. Attach the USB Type-A connector and HDMI to the PC you wish to remote control.
7. Insert the microSD card we created and power on the Raspberry Pi.
Setting Up the Pi-KVM Software
At this point, we are ready to start using the Pi-KVM. The first boot will take longer than expected because the software initially enlarges the file system to fill the microSD card, so be patient and let it boot.
1. Locate your Raspberry Pi’s IP address. You can do this by looking through your router’s control panel to see which devices are connected, or by using the ARP method below.
To find the Pi’s IP using this method, launch Windows PowerShell, run the command “arp -a” and you’ll see a list of devices on your local network. Any entry whose MAC address begins with b8:27:eb: or dc:a6:32: is a Raspberry Pi.
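Because Windows prints MAC addresses with dashes rather than colons, you can filter the ARP table down to likely Raspberry Pis in one line. A minimal example using only the built-in arp and findstr tools:

```
# Show only ARP entries whose MAC starts with a Raspberry Pi prefix
arp -a | findstr "b8-27-eb dc-a6-32"
```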
2. Navigate to the Pi’s IP address in a browser on your client computer (the one you are using to control the other PC). You will be redirected to the login page.
3. Log in. The default username and password are both “admin”.
4. Click the KVM icon.
You should now be presented with a screen like the one shown below, providing you with access to the remote PC and a number of other menus. Some additional options aren’t enabled by default; you can unlock them by following the instructions on the Pi-KVM GitHub page.
Keep in mind that the more storage you have on your SD card, the more ISO images you can store and use for future PC setups.
With the proper GPIO hookups, you can also enable the use of ATX controls.
To expand the functionality of the Pi-KVM and allow for more display inputs, you can connect it to a 4-port HDMI switch with USB control.
Updating Pi-KVM to the Latest Version
Pi-KVM is always getting new features so it’s important to keep the software up to date. Fortunately, you don’t need to reflash the microSD card. To update:
1. Click the Terminal icon on Pi-KVM’s main menu. A CLI shell will appear.
2. Become a super user by typing “su” and then entering “root” as the password.
3. Type “rw” to make the file system read/write.
4. Enter “pacman -Syu” and “Y” to get updates.
Reminder: set the file system back to read-only with “ro” on the command line when done.
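Put together, an update session in the Pi-KVM web terminal should look something like this:

```
su           # become root; the default password is "root"
rw           # remount the file system read/write
pacman -Syu  # download and install the latest Pi-KVM packages
ro           # set the file system back to read-only when done
```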
Access Pi-KVM Over the Internet
You can use Tailscale to access Pi-KVM over the internet. This is a convenient and free (for private use) tool for organizing a small VPN network.
1. Create a Tailscale account. The Solo plan is free for personal use only.
2. Click the Terminal icon on Pi-KVM’s main menu.
3. Become a super user by typing “su” and then entering “root” as the password.
4. Type “rw” to make the file system read/write.
5. Type “pacman -S tailscale-pikvm” to install the Tailscale VPN service on the Pi-KVM.
6. Type “reboot” to perform a soft reboot of the Pi-KVM.
7. After the reboot has been performed, we will need to gain access to the terminal again, so repeat steps 2 through 4.
8. Type “systemctl enable --now tailscaled” to enable the service.
9. Type “tailscale up” to begin the sign-in process.
10. Follow the link shown in the terminal to authorize this installation.
11. Once connected successfully, you will see “Success” appear in the terminal.
12. Navigate to https://login.tailscale.com/admin/machines to view the IP address assigned by the Tailscale VPN.
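Condensed into a single terminal session, the server-side setup from the steps above looks roughly like this:

```
su                                 # become root (password: root)
rw                                 # make the file system read/write
pacman -S tailscale-pikvm          # install the Tailscale package for Pi-KVM
reboot                             # soft reboot

# After the reboot, open the web terminal again, become root, then:
su
rw
systemctl enable --now tailscaled  # enable and start the Tailscale daemon
tailscale up                       # begin sign-in; follow the printed link
```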
On the Client Side
This section shows you how to install Tailscale on the workstation side. Tailscale supports most operating systems, including Windows, macOS, and Linux.
1. Download Tailscale for your OS from https://tailscale.com/download.
2. Navigate to https://login.tailscale.com/admin/machines to view the IP address assigned by the Tailscale VPN.
3. Navigate to the IP address given by Tailscale in your browser. It will connect you to your Pi-KVM.
This is a very affordable way to build a modern, fast KVM over IP, and the software is provided for free. There are more features this tutorial hasn’t covered, such as VPN, sharing the network from your Pi to the PC, VNC and many more; if you wish to learn about them, visit the Pi-KVM GitHub page or join the Discord.
It looks like NCSoft is bringing the Unreal Engine 4 upgrade to Blade & Soul worldwide after all. Initially, the UE4 upgrade was only available on a Korean server, but this summer, the developers will be bringing the new features to the full live game in the west.
This summer, the MMORPG Blade & Soul will be upgraded to Unreal Engine 4. The change in game engine will bring improved performance and reduced lag and loading times. As you would expect, there will also be graphical upgrades, bringing new depth and clarity to the in-game visuals.
Previously, these upgrades were previewed on the Final Frontier server in Korea. That server will be closing down as the developers intend to push the full game to Unreal Engine 4 with additional enhancements.
The Unreal Engine 4 upgrade is arriving over the summer, so it is still a few months away. Other content updates coming to Blade & Soul this year include a new character class, new specialisations for existing classes, new dungeons and other new systems.
KitGuru Says: Have many of you played Blade & Soul since launch? Will you be returning for the big engine upgrade later this year?
Recently, Guilty Gear Strive had an open beta period allowing fans to get an early look at the upcoming fighting game. While initial impressions were positive, it seems the game isn’t quite yet ready for prime time, as Guilty Gear Strive has now been delayed from April to June.
Making the announcement on Twitter, the game’s developer Arc System Works said “We have made the tough decision to move the release date of Guilty Gear Strive (previously planned for April 9, 2021) to June 11, 2021.”
Explaining the reason for this delay, the studio said “Since we have received valuable feedback after the recent Open Beta Test, we would like to make the most of this opportunity to provide the best game possible. We need extra time to polish some aspects of the game, such as the online lobbies and the server’s stability.”
The team concluded by saying “We believe it best to use the extra time to improve the game’s quality and provide a better experience to all our players. Thank you for your patience and understanding.”
Guilty Gear Strive’s visuals and gameplay have been well received by most fans. While it is disappointing to hear that the game has been delayed, the extra time will undoubtedly make for a better overall experience when it finally releases on the 11th of June.
KitGuru says: Did you try the open beta? What did you think of the game? Are you disappointed by the delay? Let us know down below.
Building the smart home of your dreams? You may want to check out Steve’s Siri-controlled garage door management system from Steve Does Stuff on YouTube. It uses a Raspberry Pi integrated with Siri support to check the status of and control up to three individual garage doors.
Users can interact with a custom interface hosted on a Flask-based web server. The dashboard offers various control options as well as log information with usage history for each door.
You’ll need a Raspberry Pi Zero W for this project as Wi-Fi support is absolutely critical to the system design. All of the information needs to be accessible through a network connection to provide the web-based support. It also uses a 4-channel relay, magnetic reed switch, and a hammer header for the Pi.
Because the dashboard runs on a Flask server, it can be accessed from a computer, smartphone, tablet or even a smartwatch. The system can accept input from and output data using Siri as an interface.
Visit the project’s GitHub page for detailed steps on how the project works, and be sure to check out our list of best Raspberry Pi projects for more awesome creations from the Pi community.
GTA Online might be one of the most popular and longest-lasting games to date, but it isn’t without its flaws. More specifically, GTA Online has an insane loading time of up to six minutes on most PC hardware (possibly more on slower PCs), which is absurd given how old the game is. A fix might be in the works, or at least, a fan of GTAV has figured out the problem and fixed it with a simple solution. Well, ‘fixed’ might be too strong a word, but the situation has improved at least.
Before all this, the GTAV fan “T0ST” was super annoyed at the absurdly long wait times he was getting in GTA Online. His load times were a whopping six minutes just to get into the game. To start troubleshooting, he started running Task Manager to see what was taking up so many resources during that 6-minute time frame.
He found that the game was consistently loading only two CPU cores for the entire duration of the load time. Nothing else was being utilized, so storage, graphics, and network bandwidth didn’t appear to be the bottleneck. This is unusual behavior, as a loading game should be pulling assets from storage into system memory and talking with the server, which could have at least partially explained the long wait times.
So T0ST delved deeper, into the game code itself, and eventually found the problem. It turns out the game is doing a ton of extra work for no reason at all: it repeatedly parses a 10MB JSON file containing a bunch of in-game store items as it works to load the game, something it could completely skip and do later. It also restarts its scan of the 10MB JSON every few bytes, further exacerbating the problem.
T0ST fixed this by creating his own DLL with optimized code and installing it into the game files. That cut load times down to a far more palatable 1 minute and 50 seconds. That’s still pretty awful for a game that came out so long ago, but it’s one-third the time as before. Hopefully, this will encourage Rockstar to finally take a look at this portion of the game code; using data from T0ST, it should be able to implement the fix in a future patch.
Following a report of Gigabyte canceling its GeForce RTX 3090 Turbo, other graphics card manufacturers have also delisted their GeForce RTX 3090 graphics cards with blower designs.
In an age of fancy shrouds and flashy RGB lighting, graphics cards with blower designs are hard to find. But as old as it may be, this kind of cooler still has a place in modern systems, especially in SFF builds where the graphics card’s heat needs to be expelled out the back rather than circulated inside a cramped case. But system integrators had found another use for the GeForce RTX 3090 blower models, which could be the main reason for these cancellations.
A previous report from China claims that system integrators were incorporating Gigabyte’s GeForce RTX 3090 Turbo into their server products. That certainly wasn’t good for business, at least from Nvidia’s perspective. The chipmaker might not have been too happy to find out that vendors prefer the GeForce RTX 3090 over some of its more luxurious models, such as the A100 or some other Quadro offering. GeForce and Titan graphics cards aren’t designed for servers or data centers, but system integrators have found the GeForce RTX 3090’s traits too attractive from a price-to-performance standpoint.
Nvidia GeForce RTX 3090 Blower GPUs
| Vendor | Model |
| --- | --- |
| Asus | Turbo GeForce RTX 3090 |
| Emtek | GeForce RTX 3090 24GB Blower Edition |
| Galax | GeForce RTX 3090 24GB Classic |
| Gigabyte | GeForce RTX 3090 24GB Turbo |
| MSI | GeForce RTX 3090 24GB Aero |
News outlet VideoCardz has noticed that other manufacturers, including Asus, MSI and Galax have also removed the product pages for their respective GeForce RTX 3090 blower designs. Galax has reportedly confirmed to the publication that it has canceled the GeForce RTX 3090 24GB Classic, and the GeForce RTX 3080 Classic as well. No reasons were given as to why the brand retired a graphics card that has been on the market for two months.
Asus, on the other hand, has only delisted the Turbo GeForce RTX 3090. The company is still offering the Turbo GeForce RTX 3080 and RTX 3070 blower cards, so not all is lost. However, it’s still a hit to gamers who want to put together an SFF system with a GeForce RTX 3090 blower design. There might still be some leftover stock of the graphics cards on the market, though.
We’ve reached out to the different vendors to see if they can provide some insight on the cancellations.
If you’re after an affordable injection of clarity and detail for your wired headphones, iFi has it in the Can
For
Expansive, detailed sound
High-end feature set
Classy build and finish
Against
A little sonically polite
Often when the What Hi-Fi? team receives a new product for review, we like to pit it against a similarly specified class leader within its price category. But here, that’s not really possible, because the rather unique iFi Zen Can is an all-analogue headphone amplifier that costs just £149 ($150).
Features
iFi says the Zen Can has many features usually reserved for high-end headphone amps – it employs basically the same Class A discrete power output stage as the outfit’s flagship headphone amplifier, the Pro iCan, which is more than 11 times the price of the model on review here.
It also promises prodigious drive capability for such a modestly priced headphone amp, delivering 1600mW (7.2V) into 32 ohms from the single-ended output. It’s an amp that iFi bills as ‘nitro for your headphones’ and you certainly do get a substantial power jolt for the money.
Although petite, the Zen Can is a desktop headphone amp rather than a portable device since it requires mains power (a 5V charger is included). Although it offers wired listening, you could of course pair it with the Zen Blue to add Bluetooth connectivity. As well as a headphone amp, it can double as a preamp to feed a power amp or a pair of active speakers, with the use of a dedicated, balanced 4.4mm to XLR cable.
Build
As with the other iFi Zen Series products, such as the Zen DAC, Zen Phono stage and aforementioned Zen Blue, the Zen Can is smartly finished with a sturdy and neatly sized aluminium enclosure, the dimensions of which are akin to a large hip-flask or a small pair of binoculars.
In the centre of the Can’s front panel is a premium-feeling rotary volume control. To the left, beside the power button and input switch, is a control for selecting the appropriate gain, with little white LED lights to denote the level you’ve selected. You get four settings in 6dB steps – 0dB, 6dB, 12dB and 18dB. These options ensure good headphone matching and an adequate range of operation for the volume control.
iFi Zen Can tech specs
Inputs: 4.4mm, RCA, 3.5mm jack
Frequency response: 20Hz – 20kHz
Dimensions (hwd): 158 x 117 x 35mm
Weight: 515g
To the right are a pair of headphone outputs – a 6.3mm output for headphones that have a standard single-ended connector (compatible with all headphones, provided you have a 3.5mm-to-6.3mm adapter), and a 4.4mm Pentaconn balanced output for headphones with a balanced connection.
Next to the headphone sockets is a button to engage the latest versions of iFi’s ‘XBass’ and ‘3D’ sonic tailoring options for headphones. Again, tiny LED lights signify which options are deployed. XBass adjusts the frequency response to augment low-frequency performance, which could be useful with open-back headphones that might ‘leak’ deep bass. We try it with our Grado SR325e cans and like the extra ounce of power through the low end. It’s not particularly subtle, but it is fun.
Meanwhile, 3D aims to compensate for the ‘in-head localisation’ effect that can occur when using headphones to listen to music that was mixed using a pair of speakers. It does a good job of widening the headphone soundstage to deliver a more speaker-like experience. Both XBass and 3D engage purely analogue processing and may be bypassed entirely if you prefer, but there’s much to like about them – particularly the immersive and opened-out presentation we’re treated to when using 3D.
Around the back, the Zen Can offers stereo RCA and 3.5mm inputs, plus a balanced 4.4mm Pentaconn input. There’s also another 4.4mm connection to provide a balanced output so that the iFi can connect to an appropriately equipped power amplifier or pair of active speakers. All the Zen Can’s inputs and outputs are gold plated, too – a nice premium touch.
Though Class A circuitry often produces a lot of heat, the iFi Zen Can only runs slightly warm – it never gets hot, even when we keep it running overnight. That’s no mean feat and a tribute to iFi’s engineers.
All in all, it’s a lot of attention to detail within a resoundingly classy build. The fact that iFi has implemented all of this in a headphone amp retailing for just £149 ($149) is certainly impressive.
Sound
We stream a Tidal Master file of FKA twigs’ Two Weeks from our MacBook Pro, and the heavily altered vocal and bassy intro are expansive and cohesive. Twigs’ ethereal vocal is three-dimensional, textured, well-timed and hugely impactful. It’s a solid step-up in terms of detail and space over simply plugging the same Grado headphones into our laptop.
Switching to a hi-res (24-bit/88.2kHz) FLAC file from our server, we listen to Michael Jackson’s Thriller album in its entirety. There’s a human feel to Jackson’s vocal, alongside a pleasingly musical and competent layering of each musical passage. We play Billie Jean and while the strings toy with our left ear, a synth presents itself to our right, and Jackson’s numerous harmony lines are all different, emotive and noteworthy.
The melodic outro to Human Nature feels sparkling and accurate across the frequencies too, thanks in part to the space it is afforded, while often imperceptible vocal licks as the track ends aren’t lost. It may not be the liveliest presentation, but it still entertains.
We stream a Tidal Masters file of Joy Division’s Love Will Tear Us Apart, this time on our Astell & Kern Kann Alpha portable music player (using its 4.4mm Pentaconn balanced output), and there’s just a bit of excess politeness to the sound. This track is raw and untethered, but through the iFi it’s a shade off for rhythmic precision and attack. That said, we look again at the Zen Can’s price and find it easy to forgive.
Verdict
The iFi Zen Can is a resoundingly good upgrade on plugging your wired headphones directly into your laptop or other source. It’s a solid, talented and capable little performer. And on top of this, it offers a premium-feeling build for a truly affordable price.
If you’ve read the review of the NETGEAR Orbi LTE router, you might have guessed that this review was on its way. Indeed, this was the very first product I received for review in the UK, with testing done in a hotel while I was sorting out more permanent accommodations, as the next few pages will no doubt indicate. However, circumstances were such that I received two units accidentally, had to return the first one, and test the second unit, which meant the Orbi review was finished first. Regardless, here we are and thanks again to NETGEAR for sending a review sample to TechPowerUp!
The Nighthawk MR2100, also referred to as the Nighthawk M2, is an interesting product in more ways than one. It is obviously a mobile hotspot router, as the form factor and company image above show. A few years ago, NETGEAR made waves with the MR1100, a truly all-in-one portable LTE router that worked with just about any carrier worldwide but had poor battery life and a lower maximum throughput. The company aimed to change that with the MR2100, which brought a better battery and double the WiFi throughput, but somehow managed to create a product that never had a retail launch in the US. Sure, there were some ways to get it through certain carriers, but it is missing some LTE bands that a few specific carriers in the US and some European countries utilize. With the recent launch of NETGEAR’s brand-new 5G WiFi 6 mobile router, does the MR2100 still merit a place in 2021? We aim to address this question in this review, which begins with a look at the specifications in the table below.