ASRock Rack has quietly unveiled its new 1U short-depth, low-power server based on AMD’s Ryzen 5000 processors and the X570 chipset. The 1U2-X570/2T can be used for light server workloads or as a remote desktop.
Both AMD and Intel offer a broad range of Epyc and Xeon processors for a range of workloads. These CPUs support numerous server-grade features and are optimized for 24/7 operation, but they sell at a premium over comparable desktop parts. Meanwhile, entry-level servers are supposed to be inexpensive and do not require any advanced functionality, which is why some server makers offer machines based on desktop CPUs. The 1U2-X570/2T is a good example of such a product.
The ASRock 1U2-X570/2T server uses the company’s X570D4I-2T mini-ITX motherboard and supports various AMD Ryzen and Ryzen Pro desktop processors with up to 105W TDP, including the latest 5000-series CPUs with up to 16 cores. The motherboard has four SO-DIMM slots supporting up to 128GB of DDR4-2400 (2R/2DR) or DDR4-2933 (1R) memory with or without ECC (ECC is only supported by AMD Pro CPUs). Storage on the server comprises one M.2-2280 slot for a PCIe 4.0 x4 or SATA SSD, two bays for 2.5-inch/7mm drives, and two bays for 3.5-inch drives. The server comes with Intel’s X550-AT2 controller that drives two 10GbE ports, plus a dedicated 1GbE connector for remote management enabled by the ASPeed AST2500 BMC. The machine is fed by a 265W 80Plus Bronze PSU.
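Out-of-band management through a BMC like the AST2500 typically works over IPMI or the standard Redfish REST API. Purely as an illustration, and assuming this firmware exposes Redfish (the address and credentials below are placeholders, not ASRock defaults), polling the machine over that dedicated 1GbE port from Python could look something like this:

```python
import requests

# Placeholder BMC address and credentials for illustration only.
BMC = "https://192.168.10.50"
AUTH = ("admin", "password")

# BMCs commonly ship with self-signed certificates, hence verify=False here.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    node = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(node.get("Model"), node.get("PowerState"), node.get("Status", {}).get("Health"))
```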
While the X570D4I-2T motherboard has a PCIe 4.0 x16 slot for graphics cards, the 1U2-X570/2T machine cannot accommodate any add-in cards since this is a short depth machine. Furthermore, its 265W power supply is not really designed to handle decent graphics cards or special-purpose accelerators that typically consume well over 100W.
The ASRock Rack 1U2-X570/2T is not the first server from the company that is powered by AMD’s Ryzen 4000/5000 processors and the X570 chipset, as the company has three more machines in the lineup. All the 1U machines are designed to operate as remote entry-level workstations or handle light server workloads; they support up to 128GB of memory, can be equipped with up to seven 3.5-inch hard drives and at least two M.2 SSDs, can accommodate an FHHL PCIe 4.0 x16 add-in board, and come with relatively low-wattage PSUs (up to 450W).
The ASRock 1U2-X570/2T is already listed on the company’s website, but there is no word about its price or availability timeframe.
Intel’s new W-1300 series of Xeon processors briefly emerged in a compatibility list showing all CPUs supported in Intel’s latest LGA1200 socket. However, the entries were taken down a few hours later. This suggests that these new Xeon processors will use the same socket as Intel’s consumer-grade Rocket Lake products.
Because these chips are compatible with the LGA 1200 socket, they should be nearly identical to Intel’s upcoming Core i5, i7, and i9 Rocket Lake parts, but have a few alterations, including support for vPro technologies and ECC memory.
Intel has been doing this for years with its lower-end Xeon processors, re-purposing lower-end Core i5, i7, and i9 parts and turning them into Xeon chips. The strategy makes a lot of sense, as not all servers and workstations require HEDT levels of processing power and connectivity.
However, since the Skylake generation, Intel has severely limited its entry-level Xeons’ motherboard compatibility and requires them to run on chipsets designed for workstations and servers. So yes, while these chips have the same socket as Intel’s consumer desktop chips, Intel’s W-1300 Xeons will not work in a standard H-, B-, or Z-series motherboard.
The W-1300 CPUs, which appeared on an ASRock list, were the W-1390, W-1390T, W-1350P, W-1350, and W-1370.
The main differences we can find between each Xeon chip are its TDP and, for some, the amount of L3 cache. For instance, chips like the W-1350, W-1390, and W-1370 have a TDP of 80W. The W-1390T has the lowest TDP at just 35W, and the W-1350P has the highest at 125W.
Additionally, the W-1350 and W-1350P are equipped with less L3 cache, coming in at 12MB instead of 16MB. Presumably, this reduction in L3 cache is due to a lower core count compared to their siblings.
Unfortunately, that’s all we know for now (if it’s even accurate). We still don’t know what prices will be, what core counts these chips will have, or what boost frequencies they will be equipped with. (But expect a maximum of 8 cores for W-1300 chips due to the Rocket Lake architecture.)
Hopefully, we should have more information on Intel’s new W-1300 chips right around or after the official Rocket Lake launch.
Large companies like Google have been building their own servers for many years in a bid to get machines that suit their needs as closely as possible. Most of these servers run Intel’s Xeon processors with or without customizations, but feature additional hardware that accelerates certain workloads. For Google, this approach is no longer good enough. This week the company announced that it had hired Intel veteran Uri Frank to lead a newly established division that will develop custom system-on-chips (SoCs) for the company’s datacenters.
Google is not a newbie when it comes to hardware development. The company introduced its own Tensor Processing Unit (TPU) back in 2015 and today it powers various services, including real-time voice search, photo object recognition, and interactive language translation. In 2018, the company unveiled its video processing units (VPUs) to broaden the number of formats it can distribute videos in. In 2019, it followed with OpenTitan, the first open-source silicon root-of-trust project. Now Google installs its own and third-party hardware onto the motherboards next to an Intel Xeon processor. Going forward, the company wants to pack as many capabilities as it can into SoCs to improve performance, reduce latencies, and reduce the power consumption of its machines.
“To date, the motherboard has been our integration point, where we compose CPUs, networking, storage devices, custom accelerators, memory, all from different vendors, into an optimized system,” Amin Vahdat, Google Fellow and Vice President of Systems Infrastructure, wrote in a blog post. “Instead of integrating components on a motherboard where they are separated by inches of wires, we are turning to SoC designs where multiple functions sit on the same chip, or on multiple chips inside one package.”
These highly integrated system-on-chips (SoCs) and system-in-packages (SiPs) for datacenters will be developed in a new development center in Israel, which will be headed by Uri Frank, vice president of engineering for server chip design at Google, who brings 24 years of custom CPU design and delivery experience to the company. The cloud giant plans to recruit several hundred world-class SoC engineers to design its SoCs and SiPs, so these products are not going to jump into Google’s servers in 2022, but will likely reach datacenters by the middle of the decade.
Google has a vision of tightly integrated SoCs replacing relatively disintegrated motherboards. The company is eager to develop the building blocks of its SoCs and SiPs, but will have nothing against buying them from third parties if needed.
“Just like on a motherboard, individual functional units (such as CPUs, TPUs, video transcoding, encryption, compression, remote communication, secure data summarization, and more) come from different sources,” said Vahdat. “We buy where it makes sense, build it ourselves where we have to, and aim to build ecosystems that benefit the entire industry.”
Google’s foray into datacenter SoCs is consistent with what its rivals Amazon Web Services and Microsoft Azure are doing. AWS already offers instances powered by its own Arm-based Graviton processors, whereas Microsoft is reportedly developing its own datacenter chip too. Google has yet to disclose whether it intends to build its own CPU cores or license them from Arm or another party, but since the company is early in its journey, it is probably considering different options at this point.
“I am excited to share that I have joined Google Cloud to lead infrastructure silicon design,” Uri Frank wrote in a blog post. “Google has designed and built some of the world’s largest and most efficient computing systems. For a long time, custom chips have been an important part of this strategy. I look forward to growing a team here in Israel while accelerating Google Cloud’s innovations in compute infrastructure. Want to join me? If you are a world class SOC designer, open roles will be posted to careers.google.com soon.”
Google is expanding efforts to design its own chips with the hiring of Uri Frank, an Intel veteran with over two decades of experience in custom CPU design, the company has announced. Frank will head up a new Israel-based team for Google, and will serve as the company’s VP of Engineering for server chip design. “I look forward to growing a team here in Israel while accelerating Google Cloud’s innovations in compute infrastructure,” Frank wrote in a LinkedIn post announcing the move.
As Google and other tech giants have sought more performance and power efficiency, they’ve increasingly turned towards custom chip designs tailored towards specific use cases. Google has already introduced several custom chips including its Tensor Processing Unit (to help with tasks like voice search and photo object recognition), Video Processing Units, and OpenTitan, an open-source security-focused chip.
On the consumer side, Google already designs custom chips like the Titan M and Pixel Neural Core for its phones. There have also been reports that Google is designing processors that could eventually power its Pixel phones and Chromebooks.
Despite the hire, Google cautions that it’s not planning on building every server chip itself. “We buy where it makes sense, build it ourselves where we have to, and aim to build ecosystems that benefit the entire industry,” the company explains. But the big change will be trying to integrate these different pieces of hardware on a single system on chip (SoC), rather than via a motherboard where they’re separated by “inches of wires” that introduce latency and reduce bandwidth. “The SoC is the new motherboard,” Google says.
Other tech giants have similar custom chip ambitions. Amazon has its ARM-based Graviton server chips while Facebook has announced data center chip designs of its own. Microsoft is also thought to be working on designing its own server chips, as well as processors for its lineup of Surface PCs. Apple has several chip designs to its credit, and is currently in the process of transitioning its Mac lineup from Intel to its own ARM-based processors.
Mushkin has been away from the memory game for a while, but the company is back with a new series of DDR4 memory. It recently revealed its Redline Lumina RGB product line that’s designed and optimized for the most up-to-date Intel and AMD platforms.
The Redline Lumina RGB memory arrives with an aluminum heat spreader that’s complemented with snazzy RGB lighting. The light bar features 16 high-quality RGB LEDs to provide a vivid and smooth illumination. Like many modern memory kits, Redline Lumina RGB is ready to integrate itself into the majority of motherboard RGB ecosystems, including Asus Aura Sync, MSI Mystic Light Sync and ASRock Polychrome.
Mushkin offers the Redline Lumina RGB in dual-channel packages in three different densities. You can choose from 16GB (2X8GB), 32GB (2X16GB) or 64GB (2X32GB) configurations.
Mushkin has all its bases covered from a performance standpoint. The Redline Lumina RGB starts at DDR4-2666 and extends up to DDR4-4133. The timings are not bad either, with CAS Latency (CL) values spanning from CL16 to CL19. The kits also stay within sensible voltage territory regardless of frequency, with the DRAM voltage ranging from 1.2V to 1.35V.
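For context on what those timings mean in absolute terms, first-word latency can be estimated as the CAS cycle count multiplied by the clock period (2000 divided by the data rate, in nanoseconds, since DDR4 transfers twice per clock). The mid-range pairing below is an assumption for illustration rather than a confirmed Mushkin SKU:

```python
# Rough first-word latency: CL cycles x (2000 / data rate) ns for DDR4.
kits = [("DDR4-2666 CL16", 2666, 16), ("DDR4-3600 CL18", 3600, 18), ("DDR4-4133 CL19", 4133, 19)]
for name, rate, cl in kits:
    print(f"{name}: ~{cl * 2000 / rate:.1f} ns")
# ~12.0 ns, ~10.0 ns and ~9.2 ns -- the faster kits keep absolute latency in check.
```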
Equipped with Intel XMP 2.0 support, Redline Lumina RGB memory kits take the hassle out of setup. With the click of a button (or at least the enabling of a preset), you can get the memory kits up to their advertised memory frequency in a jiffy. The company says all Redline Lumina RGB memory kits are thoroughly hand-tested in North America. Confident of their quality, Mushkin backs every Redline Lumina RGB memory kit with a limited lifetime warranty.
You can already find the Redline Lumina at Newegg. The 16GB and 32GB memory kits start at $96.99 and $179.99, respectively, while the 64GB memory kits sell for $324.99.
The news embargo has lifted on Intel’s 11th Gen Rocket Lake-S, so we know plenty about feeds, speeds and prices. In addition, we have discussed the Core i7-11700K after a few hundred CPUs mysteriously went on sale early in Germany and France. Add in the fact that Intel launched its Z590 motherboards all the way back in January and you might think we can only now wait for the review embargo to lift on 30th March. Nope, there’s still plenty to discuss.
One of the nuggets of information came from Der 8auer who did some nifty analysis using a photo of a delidded Rocket Lake CPU to calculate the die size.
After that he superimposed an Intel block diagram on the image to show the new Cypress Cove cores are much larger than the Skylake cores that Intel has been using in recent times. Cypress Cove is Intel’s way of using Ice Lake cores on a 14nm fabrication process as they have been unable to use the intended 10nm process. We have known for some time that Intel has been obliged to cut the core count of Core i9 from ten cores to eight but seeing the bare Rocket Lake die drives home the point that they really had little choice in the matter.
We have been wondering why Intel is pushing the Z590 chipset so hard when H570 and B560 have very similar feature sets. Factor in that Intel now allows memory overclocking on non-Z chipsets and it boils down to this: you need Z590 if you want to overclock your CPU, and if you’re not fussed about overclocking you can save money with H570 or B560.
We had a similar question hanging over Core i9-11900K as it looks like a decently binned version of Core i7-11700K with slightly better clock speeds and the addition of Thermal Velocity Boost which increases the speed of one or two cores, provided the temperature remains under control.
The breaking news is that Z590 supports a feature called Intel Adaptive Boost Technology that was not included in our technical briefing and which only broke cover in the past few days. Intel ABT allows your 11th Gen Core i9 to run at 5.1GHz on all cores, provided your Z590 motherboard has the correct BIOS support. We have received a steady flow of Beta BIOS for various motherboards and have every expectation that Intel ABT will be included in our launch review.
KitGuru says: We have our doubts about the premium that is being charged for Core i9 and Z590, however the release of new features that are restricted to this hardware will certainly make life more interesting.
Intel Alder Lake Specifications (Image credit: VideoCardz)
Intel’s 12th Generation Alder Lake processors might make it to the market by the end of this year. In the meantime, VideoCardz has shared a truckload of information on what we can expect from Intel’s first hybrid desktop processors. Some of the specifications fall in line with what Intel has told the world, but we still recommend you take the information with some caution.
Starting with the processor itself, Intel manufactures Alder Lake on its 10nm Enhanced SuperFin process. The leaked illustration shows Alder Lake with a maximum of eight Golden Cove cores and eight Gracemont cores. The setup matches one of the many Alder Lake configurations that we’ve already seen, and the marketing material throws some numbers around.
Intel is reportedly touting up to 20% higher single-threaded performance with Golden Cove. It’s unknown which previous microarchitecture serves as the chipmaker’s point of reference for this comparison, though. Given the roadmap, Willow Cove, which powers Tiger Lake, precedes Golden Cove. However, Intel didn’t bring Tiger Lake to the desktop, so it would make more sense to compare Golden Cove to Cypress Cove (Rocket Lake) for a desktop-to-desktop comparison. On the other end, the Gracemont cores purportedly deliver twice the multi-threaded performance on Alder Lake. Since Tremont is the only microarchitecture before Gracemont, the reference should be straightforward.
The die shot also confirms the presence of the Xe LP engine for integrated graphics and the Gaussian and Neural Accelerator 3.0 (GNA 3.0) for AI workloads. Alder Lake also appears to support PCIe 5.0, DDR5 memory, Wi-Fi 6E, and Thunderbolt 4.
Alder Lake processors will require a new socket, specifically LGA1700. Compared to the current LGA1200 socket, we’re looking at a considerable 41.7% increase in pins. Leaked photographs of Alder Lake reveal that Intel will distribute the extra pins vertically, so Alder Lake chips are taller as opposed to wider than Intel’s current desktop processors. A transition to a new socket allows Intel to release a brand-new wave of chipsets, and the chipmaker is rumored to be cooking up the 600-series chipsets for Alder Lake.
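That 41.7% figure follows directly from the two pin counts, as a quick check shows:

```python
# LGA1200 -> LGA1700 pin-count increase quoted above.
print(f"{(1700 - 1200) / 1200:.1%}")  # 41.7%
```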
Besides a motherboard upgrade, Alder Lake will also force adopters to invest in a new cooling solution. It’s been a while since Intel’s desktop processors have suffered such a radical change. For a long time, Intel owners could recycle their coolers from LGA115x through LGA1200, but Alder Lake will finally change that. In addition to motherboard and memory vendors, cooling manufacturers will also benefit from Alder Lake’s release.
Alder Lake is flexible with regard to memory support. The interface is still limited to dual-channel memory, so that remains the same. According to VideoCardz’s information, the hybrid chips natively support DDR5-4800 and DDR4-3200 memory, and we already know that mobile variants support LPDDR4 and LPDDR5. The publication has learned that only premium Z690 motherboards will arrive with DDR5 support. In terms of expansion, Alder Lake seemingly offers 16 PCIe 5.0 lanes and four PCIe 4.0 lanes.
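As a rough yardstick for what the DDR5 jump buys, and not a leaked benchmark, here is the simple peak-bandwidth math for those two dual-channel configurations (each channel moves 8 bytes per transfer):

```python
# Peak theoretical bandwidth: data rate (MT/s) x 8 bytes per transfer x 2 channels.
for name, rate_mts in [("DDR5-4800", 4800), ("DDR4-3200", 3200)]:
    print(f"{name}: {rate_mts * 8 * 2 / 1000:.1f} GB/s")
# DDR5-4800: 76.8 GB/s vs. DDR4-3200: 51.2 GB/s
```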
With the existing 500-series chipset, Intel finally expanded the Direct Media Interface (DMI) from four lanes to eight lanes. The upcoming 600-series chipset will retain the same x8 link, but the connection will be upgraded to Gen4 data rates. Unfortunately, the diagram of the 600-series chipset doesn’t disclose the number of PCIe 4.0 or PCIe 3.0 lanes that it will provide.
As for the 600-series’ other attributes, the chipset supports integrated Wi-Fi 6E connectivity, discrete Thunderbolt 4, USB 3.2 Gen 2×2 ports, and Intel Optane Memory H20.
In an odd disclosure that comes after Intel recently released the details of its 11th-Generation Core Rocket Lake-S processors, the company has unveiled a “new” Adaptive Boost Technology that allows the chip to operate at up to 100C during normal operation. This new tech will feel decidedly familiar to AMD fans, as it operates in a very similar fashion to AMD’s existing boost mechanism that’s present in newer Ryzen processors. This marks the fourth boost technology to come standard with some Intel chips, but in true Intel style, the company only offers the new feature on its pricey Core i9 K and KF processors, giving it a new way to segment its product stack.
In a nutshell, the new Adaptive Boost Technology (ABT) feature allows Core i9 processors to dynamically boost to higher all-core frequencies based upon available thermal headroom and electrical conditions, so the peak frequencies can vary. It also allows the chip to operate at 100C during normal operation.
In contrast, Intel’s other boost technologies boost to pre-defined limits (defined in a frequency lookup table) based on the number of active cores, and you’re guaranteed that the chip can hit those frequencies if it is below a certain temperature and the motherboard can supply enough power. Even though Intel has defined a 5.1 GHz peak for ABT if three or more cores are active, it doesn’t come with a guaranteed frequency – peak frequencies will vary based upon the quality of your chip, cooler, PSU, and motherboard power circuitry.
Think of ABT much like a dynamic auto-overclocking feature. Still, because the chip stays within Intel’s spec of a 100C temperature limit, it is a supported feature that doesn’t fall into the same classification as overclocking. That means the chip stays fully within warranty if you choose to enable the feature (it’s disabled by default in the motherboard BIOS).
Intel does have another boost tech, Thermal Velocity Boost, that allows the processor to shift into slightly higher frequencies if the processor remains under a certain temperature threshold (70C for desktop chips). However, like Intel’s other approaches, it also relies upon a standard set of pre-defined values and you’re guaranteed that your chip can hit the assigned frequency.
In contrast, ABT uplift will vary by chip — much of the frequency uplift depends upon the quality of your chip. Hence, the silicon lottery comes into play, along with cooling and power delivery capabilities. We’ve included a breakdown of the various Intel boost technologies a bit further below.
Intel’s approach will often result in higher operating temperatures during intense work, but that doesn’t differ too much from AMD’s current approach because ABT is very similar to AMD’s Precision Boost 2 technology. AMD pioneered this boosting technique for desktop PCs with its Ryzen 3000 series, allowing the chip to boost higher based upon available thermal and electrical headroom, and not based on a lookup table. Still, the company dialed up the temperature limits with its Ryzen 5000 processors to extract the utmost performance within the chips’ maximum thermal specification.
According to AMD’s official guidelines, that means the processor can run at much higher temperatures than we would previously perceive as normal; 95C is common with stock coolers, triggering some surprise from the enthusiast community. However, the higher temperatures are fully within AMD’s specifications, just as Intel’s upper limit of 100C will fall within its own boundaries.
Here’s the breakdown of Intel’s various boost mechanisms:
Turbo Boost 2.0: Increased frequency if chip operates below power, current, and temperature specifications.
Turbo Boost Max 3.0: Fastest cores are identified during binning, then the Windows scheduler targets the fastest two active cores (favored cores) with lightly-threaded applications. Chip must be below power, current, and temperature specifications.
Single-Core Thermal Velocity Boost: Fastest active favored core can boost higher than Turbo Boost Max 3.0 if below a pre-defined temperature threshold (70C) and all other factors adhere to TB 3.0 conditions.
All-Core Thermal Velocity Boost: Increases all-core frequency when all cores are active and the chip is under 70C.
Adaptive Boost Technology: Allows dynamic adjustment of all-core turbo frequencies when three or more cores are active. This feature doesn’t have a guaranteed boost threshold; it will vary based on chip quality, your cooler, and power delivery.
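To tie the list together, here is a rough, illustrative Python sketch of how these mechanisms stack on a Core i9-class Rocket Lake part. Only the 5.1 GHz ABT target, the three-core threshold, and the 70C/100C limits come from Intel’s disclosures; the per-core turbo bins are placeholders, not Intel’s published tables, and the real silicon uses far more detailed telemetry:

```python
def active_core_boost_ghz(active_cores, temp_c, abt_enabled=False, power_ok=True):
    """Simplified model of how the boost mechanisms layer; not Intel's actual algorithm."""
    # Turbo Boost 2.0: pre-defined bins keyed to active-core count (placeholder values).
    per_core_turbo = {1: 5.0, 2: 5.0, 3: 4.8, 4: 4.8, 5: 4.7, 6: 4.7, 7: 4.6, 8: 4.6}
    freq = per_core_turbo[active_cores]

    # Thermal Velocity Boost: one extra 100 MHz bin if the package stays under 70C.
    if temp_c < 70:
        freq += 0.1

    # Adaptive Boost: with three or more cores active, opportunistically chase
    # 5.1 GHz as long as the chip stays under 100C and the board can feed it.
    if abt_enabled and active_cores >= 3 and temp_c < 100 and power_ok:
        freq = max(freq, 5.1)
    return freq

print(active_core_boost_ghz(8, 85.0, abt_enabled=True))   # 5.1 -- ABT kicks in
print(active_core_boost_ghz(8, 85.0, abt_enabled=False))  # 4.6 -- plain turbo bin
```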
Overall, AMD’s Precision Boost 2 and Intel’s Adaptive Boost Technology represent both companies’ attempts to extract the maximum performance possible within the confines of their respective TDP limits. In its traditional style, AMD offers the feature as standard on all of its newer Ryzen processors, while Intel positions it as a premium feature for its highest-end Core i9 K and KF processors. As you would imagine, we’ll have full testing of the feature in our coming review.
Asus has listed the ROG Maximus XIII Apex on its website, implying that the successor to the ROG Maximus XII Apex may be closer than we think. The new iteration to the Apex series has been engineered to tame Intel’s 11th Generation Rocket Lake processors.
Built around the new Z590 chipset and existing LGA1200 socket, the ROG Maximus XIII Apex comes equipped with an 18-phase power delivery subsystem. Each power stage, which can manage up to 90 amps, is accompanied by a MicroFine Alloy choke that can do 45 amps. Asus revamped the power design on the ROG Maximus XIII Apex completely by getting rid of the phase doublers. The motherboard also employs 10K Japanese black metallic capacitors that can take a beating. The VRM area is properly cooled with thick, aluminum passive heatsinks. The ROG Maximus XIII Apex feeds Rocket Lake chips through a pair of 8-pin EPS power connectors.
The overclocking toolkit on the ROG Maximus XIII Apex includes a double-digit debug LED, voltage read points, and a plethora of buttons and switches to aid in overclocking. There are also three condensation sensors that are placed strategically across the motherboard to notify you when condensation occurs around the processor, memory or PCIe slot. In total, the ROG Maximus XIII Apex has five temperature sensors, five 4-pin fan headers, two full-speed fan headers, and an assortment of headers for watercooling setups.
Like previous Apex motherboards, the ROG Maximus XIII Apex only provides two DDR4 memory slots. While memory capacity is limited to 64GB, the motherboard supports memory frequencies above DDR4-5000 with ease. The ROG Maximus XIII Apex sports Asus’ OptiMem III technology, featuring an optimized memory tracing layout to improve memory overclocking.
The ROG Maximus XIII Apex offers numerous options for storage, providing eight SATA III ports and up to four M.2 slots. The two M.2 slots on the motherboard itself are PCIe 4.0-ready and come armed with an aluminum heatsink and embedded backplates to provide passive cooling. The other two M.2 slots reside on Asus’ ROG DIMM.2 module, which connects to the motherboard through a DDR4-type interface beside the memory slots. The DIMM.2 module accommodates M.2 drives with lengths up to 110mm.
The expansion slots on the ROG Maximus XIII Apex consist of two PCIe x16 slots and one PCIe x8 slot. Wired and wireless networking come in the shape of a 2.5 Gigabit Ethernet port and Wi-Fi 6E connectivity with support for up to 6GHz bands. The audio system on the ROG Maximus XIII Apex uses Realtek’s ALC4080 audio codec complemented with a Savitech SV3H712 amplifier and high-end Nichicon audio capacitors.
In regards to USB ports, the ROG Maximus XIII Apex has four USB 3.2 Gen 1 ports, five USB 3.2 Gen 2 ports and one USB 3.2 Gen 2×2 Type-C port at the rear panel. There’s an additional USB 3.2 Gen 2×2 header on the motherboard. The ROG Maximus XIII Apex doesn’t supply any display outputs so it’s mandatory to pair it with a discrete graphics card.
ROG motherboards have a very rich software suite. On this iteration, Asus has directly implemented MemTest86 into the ROG Maximus XIII Apex’s firmware so overclockers can test memory stability without any hassles. Additionally, a one-year AIDA64 Extreme subscription is also included.
The pricing for the ROG Maximus XIII Apex is currently unknown. The previous Z490 version retailed for $356.99, so we can expect the Z590 followup to be priced around that range, if not a little higher.
Finally, AMD’s motherboard partners have begun rolling out new BIOS updates to fix the widespread USB stability and connectivity issues. However, the current revisions are still in Beta form, with final firmware revisions due in April.
The firmware addresses widespread USB connectivity issues present on a number of Ryzen-based systems equipped with Zen 2 or Zen 3 CPUs and 400- or 500-series motherboards. The problems center on random dropouts for USB-connected devices and affect several different device types, manifesting as unresponsive external capture devices, momentary keyboard connection drops, slow mouse responses, and issues with VR headsets, external storage devices, and USB-connected CPU coolers.
The new BIOS patch appears to address the USB 2.0 controllers on 400- and 500-series motherboards. We still aren’t sure if other USB devices, like USB 3.0 headers connected to the CPU directly or other USB 3.0/3.1 controllers, were affected. Also, there is no information yet on whether or not the fix has any impact on performance.
When checking to see if your motherboard has the new fix, look for mention of the USB 2.0 fixes in the description of the latest BIOS on your board maker’s web page. The fix’s presence is harder to detect because AMD did not update the AGESA code with a new version; instead, the fix still runs on the latest ComboV2 1.2.0.1 AGESA code.
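If you’re on Linux and want to confirm which BIOS build you’re currently running before hunting for the beta, something like the snippet below works (dmidecode needs root, and the AGESA revision itself usually only shows up in the vendor’s release notes or a Windows tool such as HWiNFO):

```python
import subprocess

# Query the SMBIOS/DMI tables for the firmware and board identification strings.
for field in ("bios-version", "bios-release-date", "baseboard-product-name"):
    out = subprocess.run(["dmidecode", "-s", field], capture_output=True, text=True)
    print(f"{field}: {out.stdout.strip()}")
```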
But be patient; most boards still do not have a BIOS with the fix, with only a few 500-series boards (and no 400-series boards) having the update at this time. Presumably, it will be a few weeks before all mainstream 400- and 500-series motherboards receive the update.
Cooler Master’s MasterFrame 700 is an open-air presentation case and test bench that transforms in just a few minutes. But while it’s built like a tank and has a true quality finish, it requires a skilled, patient builder to make the best of it.
For
+ Gorgeous open-chassis looks
+ Excellent build quality
+ Beautiful paint finish
+ Heavy steel panels
+ Includes lightly-tinted glass panel
Against
– Very heavy
– Paint finish in threads makes some screws difficult to insert
– Transforming from open-air case to test bench requires longer AIO tubes
– Can be tedious to work in (needs a skilled, patient builder)
– Motherboard tray covers back of socket
Assembling the MasterFrame 700
When Cooler Master reached out asking if I could have a look at its upcoming MasterFrame 700 open-air chassis / test bench, I was scratching my head a little about how to approach it. I wondered what mainstream appeal there could be in a test bench.
And while ‘mainstream’ is absolutely not how I would describe the MasterFrame 700, it actually left me quite impressed. After my experience with it, I can appreciate its appeal as an open-air chassis to showcase pretty builds.
It won’t be making it onto our Best PC Cases list as it’s not a chassis meant for the masses, but if you’re into this sort of thing, it might be worth reading on to find out more about the MasterFrame 700 – if the photos haven’t already convinced you.
Specifications
Type: Open-air / test bench chassis
Motherboard Support: Mini-ITX, Micro-ATX, ATX, E-ATX
Dimensions (HxWxD): 16.1 x 12.1 x 27.6 inches (410 x 306 x 702 mm)
Max GPU Length: 17.7 inches (450 mm); up to 12.2 inches (310 mm) for maximum compatibility
CPU Cooler Height: 6.2 inches (158 mm)
Max PSU Length: 8.3 inches (210 mm)
External Bays: ✗
Internal Bays: 4x 3.5-inch, 7x 2.5-inch
Expansion Slots: 8x
Front I/O: 2x USB 3.0, USB-C, 3.5 mm headphone/mic combo
Other: Removable tempered glass panel
Left Fans: None (up to 2x 140mm or 3x 120mm)
Right Fans: None (up to 2x 140mm or 3x 120mm)
Top Fans: None (up to 2x 140mm or 3x 120mm in test bench mode, on a radiator)
Bottom Fans: None (up to 2x 140mm or 3x 120mm in test bench mode, on a radiator)
Side Fans: ✗
RGB: No
Damping: No
Warranty: 1 Year
Normally, we start off case reviews with a tour of the features, build a standardized system in it, and wrap up with thermal and acoustic testing – but today we’re foregoing the usual format. Instead, I’m going to take you on the path I took to familiarize myself with the product, which starts off with assembling the MasterFrame 700.
Unboxing, Layer by Layer
The Cooler Master MasterFrame 700 comes flat-packed into a relatively compact, briefcase-style box. None of the components come assembled, and as such we have to start with assembling the case. I started off by fixing the case’s radiator wings to the main frame, which was easily accomplished by using three countersunk screws per hinge, of which there are four. I also stuck on four rubber feet.
The hinges are beautifully manufactured to a mirror finish. In fact, all the parts are quite nicely made with a very smooth and even paint job. The panels themselves are also very thick steel, and altogether, it’s a very heavy chassis that oozes quality – which is no surprise given that it’s partly manufactured by hand.
However, at this stage, I already ran into my first issue – with the wings on, the entire chassis was tilted quite far forward, which didn’t seem right.
With no manual to be found (yet), I played around a bit with the wing layout and eventually got the wings attached the correct way – with the text on the user’s side and the straight edge at the bottom – the top of the wings are slanted down slightly for style.
I then proceeded to attach the PCIe bracket, PSU bracket, and rear cover. The case comes with two PSU brackets that you can install one above the other in case you want to run a second power supply. I don’t really see the need for a second PSU, but I suppose the addition of just one extra bracket can’t do much harm for those who do.
At this point, the chassis was almost assembled and ready for system installation. I also chucked on the glass panel holder, a small SSD bracket at the rear, a fifth rubber foot at the bottom of the rear cover, the IO panel at the top, and voila:
However, during this assembly I had a few moments where I got stuck, not knowing exactly how to fix a certain bracket to the main frame. When it was all done and built, which took longer than the pictures suggest, I was wondering where the manual was. I had already turned the box over twice looking for it. But eventually, I found it hidden under the glass panel.
Yep. I had placed the glass panel aside for when I neared the end of the build, but in doing so totally overlooked that the manual might be in there. Oh well, we made it this far.
Neat Little Details
The MasterFrame 700 comes with a few neat little details that show thoughtful design. For example, it includes a magnetic rubber pad shaped like the Cooler Master logo that you can use to keep track of screws, a VESA 100 mount in case you want to wall-mount the chassis, and instructions on where to place the standoffs for the motherboard.
That said, I’m not sure I fully understand the VESA mount. It’s part of the main frame, but behind it sit the PSU mount, the cable management space and the rear cover to hang hard drives onto. As such, you’d need to make a lot of sacrifices to wall-mount this chassis flat against the wall – or you’ll need an arm. And it had better be a strong arm, because this chassis is very heavy with a system installed in it.
There’s a newly discovered attack on SMS messaging that’s almost invisible to victims, and seemingly sanctioned by the telecom industry, uncovered in a report by Motherboard. The attack uses text-messaging management services that are aimed at businesses to silently redirect text messages from a victim to hackers, giving them access to any two-factor codes or login links that are sent via text message.
Sometimes, the companies providing the service don’t send any sort of message to the number that’s being redirected, either to ask permission or even to notify the owner that their texts are now going to someone else. Using these services, attackers are not only able to intercept incoming text messages, but they can reply as well.
Joseph Cox, the Motherboard reporter, had someone successfully carry out the attack on his number, and it only cost the attacker $16. When he contacted other companies providing SMS redirection services, some of them reported that they had seen this sort of attack before.
The specific company that Motherboard used has reportedly fixed the exploit, but there are many others like it — and there doesn’t seem to be anyone holding the companies to account. When asked why this type of attack is even possible, AT&T and Verizon simply directed The Verge to contact CTIA, the trade organization for the wireless industry. CTIA wasn’t immediately available for comment, but it told Motherboard that it had “no indication of any malicious activity involving the potential threat or that any customers were impacted.”
Hackers have found many ways to exploit SMS and cellular systems to get at other people’s texts — methods like SIM swapping and SS7 attacks have been seen in the wild for a few years now and have sometimes even been used against high-profile targets. With SIM swapping, though, it’s pretty easy to tell that you’re being attacked: your phone will completely disconnect from the cellular network. With SMS redirection, it could be quite a while before you notice that someone else is getting your messages — more than enough time for attackers to compromise your accounts.
The main concern with SMS attacks is the implications they could have for the security of your other accounts. If an attacker can get a password reset link or code sent to your phone number, they can intercept it and get into your account. Text messages are also sometimes used to send login links, as Motherboard found with Postmates, WhatsApp, and Bumble.
This also serves as a reminder that SMS should be avoided for anything security related, if possible — for two-factor authentication, it’s better to use an app like Google Authenticator or Authy. Some password managers even have support for 2FA built in, like 1Password or many of the other free managers we recommend. That said, there are still services and companies that only use text messages as a second factor — the banking industry is infamous for it. For those services, you’ll want to make sure that your password is secure and unique, and then push both for them to move away from SMS and for the cellular industry to work on making itself more secure.
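For the curious, app-based two-factor codes are just TOTP: a shared secret plus the current time, with no carrier involved at all. Here’s a minimal sketch using the pyotp library (the account name and issuer are made up for illustration):

```python
import pyotp

secret = pyotp.random_base32()      # normally generated and stored by the service
totp = pyotp.TOTP(secret)

# URI the service would encode as a QR code for Google Authenticator or Authy to scan.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService"))

code = totp.now()                   # six-digit code that rotates every 30 seconds
print(code, totp.verify(code))      # server-side check -- nothing travels over SMS
```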
Zadak is a fairly young manufacturer of computer hardware—they were founded in 2015 in Taiwan. Besides SSDs, they’ve released cases, memory, and closed-loop watercoolers. All their products are targeted at the DIY PC space.
The Zadak Spark RGB is an M.2 NVMe SSD that goes all out on RGB bling. Thanks to support for all major motherboard vendors, you can use the Spark RGB with your mobo’s RGB control software, and it just works. No additional cabling is required; the SSD will show up as a separate ARGB element in your motherboard’s RGB software.
Under the hood, the Zadak Spark RGB SSD is based on a Phison PS5012-E12 controller, paired with Micron 96-layer 3D TLC flash and 1 GB of DDR4 DRAM cache. PCI-Express 3.0 x4 is used as the host interface.
The Zadak Spark RGB SSD is available in capacities of 512 GB ($130), 1 TB ($220), and 2 TB ($390). Endurance for these models is set at 360 TBW, 726 TBW, and 1550 TBW respectively. Zadak provides a five-year warranty for the Spark RGB.
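Put another way, those endurance ratings work out to roughly 0.4 drive writes per day sustained across the five-year warranty, as a quick back-of-the-envelope calculation shows:

```python
# TBW divided by (capacity x warranty days) = sustained drive writes per day.
ratings = {"512 GB": (0.512, 360), "1 TB": (1.0, 726), "2 TB": (2.0, 1550)}
for name, (capacity_tb, tbw) in ratings.items():
    print(f"{name}: ~{tbw / (capacity_tb * 5 * 365):.2f} DWPD")
# Roughly 0.39 to 0.42 DWPD for each capacity.
```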
Specifications: ZADAK Spark RGB 1 TB
Brand: ZADAK
Model: SPARK PCIe Gen 3×4 M.2 RGB SSD
Capacity: 1024 GB (953 GB usable), no additional overprovisioning
AMD has announced via a reddit post that it has found a fix for the widely-reported USB connectivity issues that have impacted systems with Ryzen processors, saying, “With your help, we believe we have isolated the root cause and developed a solution that addresses a range of reported symptoms[…].” The fix comes after AMD acknowledged reports of the issues last month and asked users to help it pinpoint the source of the problem by submitting detailed logs.
AMD will release a new AGESA 1.2.0.2 to motherboard vendors in ‘about a week,’ and downloadable beta BIOSes with the patch will land in early April. Naturally, fully-validated BIOS versions with the fix will arrive shortly thereafter.
AMD hasn’t provided further clarity about the fix or the nature of the underlying problem, but the issues seemed confined to Ryzen 3000 and 5000 series CPUs in 500- and 400-series motherboards (i.e., X570, X470, B550, and B450) and consisted of random dropouts for USB-connected devices. The complaints encompassed several different types of USB devices, including unresponsive external capture devices, momentary keyboard connection drops, slow mouse responses, issues with VR headsets, external storage devices, and USB-connected CPU coolers.
Motherboard vendors build firmware upon the AGESA bedrock, so improvements to the underlying code take some time to filter out to the general public. As a reminder, AGESA (AMD Generic Encapsulated System Architecture) is a bootstrap protocol that initializes processor cores, memory, and the HyperTransport (now Infinity Fabric) controller.
Here’s AMD’s post regarding the matter:
“We would like to thank the community here on r/AMD for its assistance with logs and reports as we investigated the intermittent USB connectivity you highlighted. With your help, we believe we have isolated the root cause and developed a solution that addresses a range of reported symptoms, including (but not limited to): USB port dropout, USB 2.0 audio crackling (e.g., DAC/AMP combos), and USB/PCIe Gen 4 exclusion.
AMD has prepared AGESA 1.2.0.2 to deploy this update, and we plan to distribute 1.2.0.2 to our motherboard partners for integration in about a week. Customers can expect downloadable BIOSes containing AGESA 1.2.0.2 to begin with beta updates in early April. The exact update schedule for your system will depend on the test and implementation schedule for your vendor and specific motherboard model. If you continue to experience intermittent USB connectivity issues after updating your system to AGESA 1.2.0.2, we encourage you to download the standalone AMD Bug Report Tool and open a ticket with AMD Customer Support.”If you’re experiencing issues with USB connectivity issues now, AMD has previously issued a few suggestions on how to resolve the issue. You can apply those fixes now while you wait for the new BIOS revisions. We followed up with AMD for more information on the nature of the problems, but the company says it isn’t providing further information on the matter.