Gigabyte has officially launched its CMP 30HX board for cryptocurrency mining. The board uses the same components as the company’s mid-range graphics cards but naturally lacks display outputs. Perhaps the most interesting detail about the product is that it comes with a three-month warranty.
Gigabyte’s CMP 30HX D6 6G board is powered by Nvidia’s TU116 graphics processor (possibly with some parts disabled) clocked at 1785 MHz and connected to 6GB of GDDR6 memory over a 192-bit interface. The board measures 224.5 × 121.2 × 39.6 mm and has one eight-pin auxiliary PCIe power connector, so expect its power rating to be around 125W, the same as Palit’s CMP 30HX board.
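For readers wondering how a single eight-pin connector squares with a 125W rating, here is a quick back-of-envelope sketch using the standard PCIe power-delivery limits (the per-connector figures are general spec ceilings, not numbers from Gigabyte):

```python
# Rough power-budget check for a card with one 8-pin auxiliary connector.
# The limits below are the usual PCIe/ATX spec ceilings, not Gigabyte's data.

slot_limit_w = 75        # PCIe x16 slot can deliver up to 75W
eight_pin_limit_w = 150  # one 8-pin PCIe connector adds up to 150W
board_rating_w = 125     # expected rating per the article

budget = slot_limit_w + eight_pin_limit_w
print(f"Deliverable power budget: {budget} W")                 # 225 W
print(f"Margin over the rating: {budget - board_rating_w} W")  # 100 W
```

In other words, one eight-pin connector plus the slot leaves ample headroom for a 125W board.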
To ensure longevity in tough conditions, Gigabyte uses Ultra Durable certified components for the card, a valuable feature for miners who run their boards 24/7. The GPU is cooled by Gigabyte’s WindForce 2X cooling system, comprising an aluminum radiator, a composite copper heat pipe and two fans that spin in alternate directions to exhaust hot air above the card, another valuable feature for miners.
What miners will not be happy about is the three-month warranty that Gigabyte offers with its CMP 30HX D6 6G board. The card is more than likely to survive longer if cooled properly, but Gigabyte wants to play it safe. Meanwhile, the EU mandates a two-year warranty on electronics, so in Europe this product could get the same warranty as regular graphics cards.
Nvidia introduced its Cryptocurrency Mining Processor (CMP) lineup in mid-February. Nvidia originally planned to earn $50 million selling GPUs for cryptocurrency mining, but now the company expects its CMP revenue to be about $150 million in the first quarter. Gigabyte is among the first makers of graphics cards to officially confirm that it sells a CMP product, but it will almost certainly not be the only manufacturer to do so.
The Secretlab Titan is a minimalist chair with many customizable adjustments. Unlike many gaming chairs, there’s no lumbar pillow. But it’s still a good throne for taller people.
For
+ Very sturdy
+ Easy to assemble
+ Firm and comfortable
+ Helps maintain good posture
Against
– No lumbar pillow
– Lumbar adjustment knob inconveniently placed
Besides comfort, a gaming chair’s overall look has a big impact on whether or not it’s a good fit for your gaming den. If you’re streaming or web chatting, your chair (especially a larger gaming chair) will make an appearance. And some people just don’t want to add an ugly piece of furniture to their home.
Secretlab Titan SoftWeave: MSRP $530 (direct pricing $429)
The Secretlab Titan in its SoftWeave Fabric iteration ($429) strikes a good look for gamers tired of the bright colors and stark lines of racing style chairs. Instead, it opts for a cleaner look fitting for minimalists. Yet, as a larger version of the Secretlab Omega targeting a bigger and taller audience, it still offers the features you want in a gaming throne, like adjustable armrests, a tall backrest and a multi-tilt mechanism.
Secretlab Titan Specs
| Spec | Secretlab Titan |
| --- | --- |
| Upholstery | Fabric (tested), faux leather, or leather |
| Total Height (with base) | 51.7 - 55.4 inches |
| Seat Height | 18.7 - 22.4 inches |
| Backrest Height | 33 inches |
| Backrest Width (Shoulder Level) | 21.7 inches |
| Seating Area Width (total) | 20.5 inches |
| Seating Area Depth | 19.7 inches |
| Armrest Width | 3.9 inches |
| Armrest Height | 26 - 33 inches |
| Maximum Weight Supported | 290 pounds |
| Weight | 77 pounds |
| Warranty | 3 or 5 years |
Design of Secretlab Titan
The Secretlab Titan comes with three different types of upholstery: faux leather (starts at $399), leather (starts at $799) or fabric (starts at $429). Our review unit came in the fabric upholstery, which Secretlab dubs SoftWeave Fabric (for a look at the Prime 2.0 PU leather option, see our Secretlab Omega review). Once you go SoftWeave, you get your choice of color: a black color scheme, Charcoal Blue (black with bright blue details), Cookies & Cream (which is what I tested) or, for $20 extra, an Overwatch D.Va-themed version.
Secretlab’s Cookies & Cream colorway makes for a mostly grey chair. Black runs along the side, from the backrest to the seat. The material is a nice change of pace compared to the many leather and faux leather gaming chairs out there. It also adds to the chair’s overall sleek, expensive vibe.
The neck pillow, armrests and the base of the chair are all black, as is the Secretlab logo machine-embroidered into the top of the chair. The stitchwork of the Secretlab name and logo, the giant T on the backrest, and “Titan” on the back of the chair and seat all add to the high-class feel of the chair.
Secretlab’s SoftWeave Fabric is a signature blend, including the company’s own yarn. The yarn produces a fluffy texture that’s cozy to sit in. SoftWeave is a breathable material, which helped cut down on sweating and sticking to the chair during long gaming sessions.
This patented fabric is supposed to be easy to clean, but because it’s woven, I found that dirt and crumbs can get caught in the material, even after going over it with a damp paper towel.
After spilling things on my chair, I can confirm that the SoftWeave fabric does hold onto crumbs. The good news is the fabric feels durable enough not to tear accidentally from jewelry or pet claws.
The chair’s frame is made from steel, and that’s all topped with Secretlab’s proprietary Cold-Cure Foam Mix, making for a firm chair that encourages straight posture (more on that in the next section).
Unlike the Secretlab Omega, the Titan doesn’t come with an additional lumbar pillow.
Pillow fans need not fret too much though. Secretlab does include a neck pillow with the Titan. It’s memory foam (different and softer than the chair’s Cold-Cure Foam Mix) covered in a velvet-like material with Secretlab’s name and logo stitched in. The pillow’s super-soft material brings a luxurious touch to the chair.
Sitting in the chair’s aluminum alloy base are five XL caster wheels coated with PU faux leather. They move silently across my hardwood floors and don’t get caught on my low shag carpet either.
Comfort and Adjustments on Secretlab Titan
Secretlab recommends the Titan for gamers 5’9” – 6’7” and up to 290 pounds. At 5’8”, I’m just under Secretlab’s recommended height and not tall enough to sit comfortably with the chair raised 3.7 inches to its maximum height. At this setting, my feet don’t touch the ground. In this regard, I may have been better off with Secretlab’s Omega chair, which is a smaller size and appropriate for people up to 5’11” and 240 pounds.
Overall, this is a firm chair that encourages you to sit up straight, thanks to its hard steel frame covered in Secretlab’s dense Cold-Cure Foam Mix. This differs from other types of memory foam in that each piece is crafted from a single piece of foam using aluminum molds, rather than multiple layers of foam. The foam mix also uses air pockets to absorb pressure. It’s not the hardest chair; it provides some give. But the Titan’s more rigid feel will still encourage your spine to stay straight.
One of our favorite parts of Secretlab’s Omega chair is the generous inclusion of two ultra-luxurious pillows. But the Titan foregoes the large lumbar pillow in favor of a knob that changes the firmness of the backrest’s lumbar area. This approach is less common, but we’ve seen it before, including on the Noblechairs Hero.
Turning the knob on the side of the backrest to the right makes the backrest protrude for more lumbar support. Turn the knob left to lessen support, and the backrest will recede. While an extra pillow would’ve been nice (especially considering the quality of Secretlab’s lumbar pillow), my body didn’t miss it. The Titan’s lumbar adjustments proved sufficient. I liked it best at the middle setting, where I felt my shoulders were lifted and my posture improved.
The downside is the knob for adjusting lumbar support isn’t easily accessible because it’s on the side of the chair.
This is the first chair that’s made me comfortable using the recline feature. It has the light weight of a smaller office chair, making the 180-degree recline less daunting, while still offering the full comfort and security of a gaming chair. The lever to recline is conveniently on the right side under the armrest.
The full recline is accompanied by a multi-tilt mechanism that makes the chair tilt based on how you’re sitting for added support. Knowing I can adjust the tension of the tilting mechanism is comforting. If there’s too much tilt, I can just tighten the knob. Or if I like the ability to rock slightly without feeling stuck in place, I can loosen the tension. The benefit of the tilt mechanism is most noticeable when I do a full recline. The chair seemingly adjusts to my body weight shifting in the chair.
The Titan is a large chair with a fitting seat that doesn’t sink in when you plop down. The seat is 19.7 inches from front to back and 20.5 inches across. For comparison, the AndaSeat Jungle gaming chair, which I found to be in need of more seat, is 16.9 inches deep and 14.2 inches across (point of contact only). The Titan’s seat allows me to sit freely without my thighs spilling over the side or being cradled into the chair, like I experienced with the Jungle chair.
There are wider seats to be had though. The Anda Seat Spider-Man, for example, has a seat that’s 20.5 inches deep and has a point of contact that’s 22.4 inches wide, leaving me with more than enough room.
Secretlab crafted the included neck pillow with its own memory foam, plus a layer of cooling gel to dissipate heat. The pillow feels very dense, and when I squeeze it, it takes a while to return to its original form. Meanwhile, the cooling gel combined with the velvet-like pillowcase is a good pairing, resulting in a pillow that’s malleable, stays cool and aids my neck when I’m trying to maintain good posture.
However, those of you with naturally curly and coily hair may be wary of the pillow’s velvet-like material. Velvet is said to dry out curly hair, and although Secretlab hasn’t confirmed that the pillow uses true velvet, the material is very similar and, therefore, may also be drying.
For even more customization, the Titan’s armrests adjust in four directions for maximum comfort. A button on the inside of each armrest pivots it in, out, forward and back. A second button on the inside toward the back moves the armrests right and left. The button on the outside of the armrest adjusts the height. Made from metal covered in PU faux leather, the armrests provide a soft, no-slip, strain-free sitting experience.
Assembly of Secretlab Titan
This is a Titan of a chair, but assembling it was a much smaller task. Secretlab provides all the tools needed to put this chair together, namely a Phillips-head screwdriver, an Allen wrench and, of course, nuts and bolts. The included directions were easy to follow, arriving on a giant notecard with a QR code for those who’d like to watch an assembly video. It took me a little under 30 minutes to build the Titan.
Connecting the backrest to the seat is easier than I’ve experienced with other gaming chairs because there’s a metal frame. Instead of screwing the bolts into the fabric and potentially having a hard time finding the holes inside the chair, the metal bar is on the outside, making it easy to guide the bolts in and secure the chair. And the side panels are magnetic and cover up the metal frame.
The base isn’t as heavy as that of the Anda Seat Spider-Man gaming chair, so attaching the wheels was simple. The levers were already attached to the seat too. All I had to do was slide on the handles.
Bottom Line
The Secretlab Titan has a minimalist feel, from its assembly to its final look. The Titan is not a flashy gaming chair. It’s a subtle yet pricey-looking design that can fit in any setting, from the office to the gaming space. It’s also available in Napa leather and PU leather.
Comfort is king, and the Titan doesn’t disappoint there either. Its firm foam design ensures that my body doesn’t sink. The levers are also conveniently placed, making adjustments to the recline, tilt and height intuitive. And it’s hard not to love the ability to adjust the armrests, even when I’m lying in a full recline. Like a true Titan, the chair is firm and secure in its movements.
If you want a chair with plenty of adjustments, a firm and reliable feel, and a look that’s a bit more elevated and matches its price tag, the Secretlab Titan is a good choice.
Before Fantasian launched on Apple Arcade, most of the discussion centered on how it looked. There was a good reason for that. Mistwalker, the studio helmed by Final Fantasy creator Hironobu Sakaguchi, pioneered a new technique that involved crafting more than 150 incredible, charming dioramas, which were then photographed to become the locations you explore in the game. Whether you’re venturing through a wind-swept desert, luxury cruise ship, or robot metropolis, every area you visit in Fantasian was handcrafted from real-world materials. It not only looks amazing, but it lends the game’s fantasy world a very particular vibe.
Fantasian is also just a great game. It’s not particularly original, hewing very closely to the Final Fantasy games that Sakaguchi built his career on. But it also takes those ideas and mechanics and changes them just enough to feel fresh, while also making some notable quality-of-life changes. Fantasian isn’t just an adorable handcrafted game from the creator of Final Fantasy; it’s the most approachable Japanese RPG I’ve ever played.
The premise is, admittedly, not very original. If you’ve played a JRPG any time over the last two decades, it’ll probably feel familiar. You play as Leo, a hero who has lost his memory and quickly gets pulled into a quest for the fate of the world. It’s hard to talk too much about details of the story since, for most of the game, Leo doesn’t even know what’s going on. Early on, you’re mostly just following leads, as Leo goes pretty much anywhere he can in order to potentially learn more about who he is. You’ll explore mysterious and well-guarded vaults, visit gorgeous cities on the water, and eventually be transported to a mechanical realm ruled by robots. It’s a lengthy adventure, though what’s available now is actually only the first of two parts, with the second expected to release later this year.
The story itself is… fine. Fantasian certainly doesn’t live up to its spiritual predecessors in that regard. I never felt any real sense of urgency as I saved the world, and few of the main cast of characters are very memorable. It’s cute at times, and there are some interesting surprises and worldbuilding details to dig into. Mostly, it just gets the job done. Fantasian is much more about the vibe than the narrative. I loved poking around each and every location, seeing the buildings and landscape from different angles. I didn’t really care why I was doing it. The handmade diorama art style is reminiscent of the pre-rendered backgrounds from the PlayStation era, except with a much finer level of detail. An incredible soundtrack from frequent Sakaguchi collaborator Nobuo Uematsu only adds to the nostalgia. It’s a world you want to linger in.
Fantasian doesn’t just look and sound familiar; it also plays a lot like the classics it’s inspired by. That means random battles, turn-based combat, and relatively straightforward character customization. The battles, in particular, do some interesting things. While there are familiar elements, like enemies with elemental weaknesses and bosses that change forms mid-fight, there are also some nice twists, like spells and weapons that you can aim to maximize your attack by hitting as many enemies as possible in one shot. It makes battles, particularly bosses, feel more strategic and less mindless. It’s really satisfying when you line things up just right.
Perhaps the best addition is the awkwardly named dimengeon — a portmanteau of dimension and dungeon — that lets you put off random battles for a short period. Essentially, when you turn the feature on, any monster you come across gets trapped in this device, and it can hold up to 30 of them at a time. Whenever you want — or whenever it’s full — you can then take on all of the trapped monsters in one giant brawl. It’s a great feature for when you just want to explore without fighting or when you’re in a rush to get to a save point.
There are a handful of other quality-of-life tweaks (plentiful save points, locations that are big but simple to navigate, touch and gamepad controls that work equally well, little need for grinding) that add up to an incredibly accessible take on the classic formula. Fantasian streamlines the genre, keeping the parts that work best while updating the rest for modern tastes. There are a few small hiccups, like flat cutscenes and some jarring difficulty spikes toward the end. But at its best, Fantasian is everything Sakaguchi does best, just in a slightly smaller and more refined package.
SiFive on Tuesday said that its OpenFive division has successfully taped out the company’s first system-on-chip (SoC) on TSMC’s N5 process technology. The SoC can be used for AI and HPC applications and can be further customized by SiFive customers to meet their needs. Meanwhile, elements from this SoC can be licensed and used for other N5 designs without significant effort.
The SoC contains SiFive E76 32-bit CPU core(s) aimed at AI, microcontrollers, edge computing, and other relatively simple applications that do not require full precision. It uses OpenFive’s D2D (die-to-die) interface for 2.5D packages as well as OpenFive’s High Bandwidth Memory (HBM3) IP subsystem, which includes a controller and a PHY supporting data transfer rates of up to 7.2 Gbps.
The announcement represents a milestone for SiFive and OpenFive, as the SoC is the first RISC-V-based device to be made using a 5nm node. Meanwhile, the announcement also contains two interesting details. The first is, of course, OpenFive’s implementation of an HBM3 solution and its rather bold data transfer rate expectation (2X compared to the fastest HBM2E available today). The second is OpenFive’s D2D interface for chiplets, which uses 16 Gbps NRZ signaling with a clock-forwarding architecture, comprises 40 IOs per channel, and provides throughput of up to ~1.75 Tbps/mm.
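To put those D2D numbers in perspective, the sketch below turns them into a per-channel figure. It assumes all 40 IOs carry data at the full NRZ rate and that the per-millimeter figure refers to raw aggregate signaling along the die edge; neither assumption is spelled out by OpenFive:

```python
# Back-of-envelope check of the quoted OpenFive D2D figures.
# Assumptions (ours, not OpenFive's): every IO carries data at the full
# NRZ rate, and the Tbps/mm figure counts raw edge (beachfront) bandwidth.

lane_rate_gbps = 16              # NRZ signaling rate per IO
ios_per_channel = 40             # IOs per D2D channel
quoted_density_tbps_per_mm = 1.75

channel_bw_tbps = lane_rate_gbps * ios_per_channel / 1000
print(f"Raw bandwidth per channel: {channel_bw_tbps:.2f} Tbps")       # 0.64 Tbps

# How tightly channels would have to pack along the die edge to hit the quote
channels_per_mm = quoted_density_tbps_per_mm / channel_bw_tbps
print(f"Implied channel density: {channels_per_mm:.1f} channels/mm")  # ~2.7
```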
The current design is unlikely to be used ‘as is’, but parties interested in building a high-performance 5nm RISC-V SoC for AI or HPC applications can take it as a base design and equip it with their own or third-party IP (e.g., custom accelerators, high-performance FP64-capable cores, etc.).
Alternatively, all three key components of the SoC implemented using TSMC’s N5 node — the E76 core, the D2D interface and its physical implementation (which includes built-in PLL, programmable output drivers, and link training state machines), and the HBM3 memory solution (controller, I/O, PHY) — can be licensed separately.
The tape-out means that the finalized chip design has been submitted to TSMC for manufacturing, which in turn implies that the SoC has been successfully simulated and verified. First silicon is expected back in Q2 2021.
Corsair has just announced two all-new models of its Corsair One pre-built, named the a200 and i200. Both models will be upgraded with the latest hardware from Intel, AMD, and Nvidia.
Despite measuring in at just 12 liters, Corsair promises an uncompromised desktop experience with the Corsair One. Thanks to dual liquid cooling solutions for both the CPU and GPU, you can expect high performance out of the system’s components.
You also get the same amount of I/O as you would on a standard computer tower, with the front panel including a 3.5mm audio jack, two USB 3.0 ports and a single USB 3.2 Gen 2 Type-C port.
Meanwhile, the rear I/O will change depending on which model you choose, but either way, you will get the same amount of connectivity as you would on a standard mini-ITX desktop, so expect plenty of display outputs and USB ports, as well as Wi-Fi 6.
Corsair One a200 & i200 Specifications
| Spec | a200 | i200 |
| --- | --- | --- |
| CPU | Up to a Ryzen 9 5900X | Up to a Core i9-11900K |
| Motherboard | AMD B550 Mini-ITX board | Intel Z490 |
| Memory | Up to 32GB | Up to 32GB |
| Graphics Card | GeForce RTX 3080 | GeForce RTX 3080 |
| SSD | Up to a 1TB NVMe Gen 4.0 drive | Up to a 1TB NVMe Gen 4.0 drive |
| Hard Drive | Up to 2TB | Up to 2TB |
| Power Supply | 750W 80 Plus Platinum | 750W 80 Plus Platinum |
The a200 will be based on AMD’s latest hardware and will come with a B550 chipset motherboard and your choice of a Ryzen 5 5600X, Ryzen 7 5800X, or Ryzen 9 5900X. You will also get up to 32GB of RAM, up to 3TB of SSD and hard disk storage, and a 750W SFX PSU.
The i200, on the other hand, will feature Intel’s latest Rocket Lake platform, powered by a Z490 motherboard and up to a Core i9-11900K. The memory, storage, and PSU configuration remains the same as on the a200.
Both models will also be getting an RTX 3080 for graphics horsepower, featuring a massive 8,704 CUDA cores and 10GB of GDDR6X, all in a form factor measuring just 12 liters.
Corsair is currently listing a model of the a200 at $3,799.99 and the i200 at $3,599.99, though it’s possible there may be more options later.
The Corsair One is one of the most compact high-performance PCs you can buy today, so it’s great to see Corsair updating the chassis with the latest CPUs and GPUs, and we expect to see it in our labs soon.
Now that Intel has finally launched its 3rd Generation Xeon Scalable ‘Ice Lake’ processors for servers, it is only a matter of time before the company releases its Xeon W-series CPUs featuring the same architecture for workstations. Apparently, some of these upcoming processors are already in the wild, being evaluated by workstation vendors.
Puget Systems recently built a system based on the yet-to-be-announced Intel Xeon W-3335 processor clocked at 3.40 GHz using Gigabyte’s single-socket MU72-SU0 motherboard, 128 GB of DDR4 memory (eight 16GB modules), and Nvidia’s Quadro RTX 4000 graphics card. Exact specifications of the CPU are unknown, but given its '3335' model number, we’d speculate that this is an entry-level model. The workstation vendor is obviously evaluating the new Ice Lake platform for workstations from every angle, and it has published a benchmark result of the machine in its PugetBench for Premiere Pro 0.95.1.
The Intel Xeon W-3335-based system scored 926 overall points (standard export: 88.2; standard live playback: 126.1; effects: 63.6; GPU score: 63.6). For comparison, a system powered by AMD’s 12-core Ryzen 9 5900X equipped with 16GB of RAM and a GeForce RTX 3080 scored 915 overall points (standard export: 100.9; standard live playback: 79.6; effects: 93.9; GPU score: 100.7).
Given that we do not know the exact specifications of the Intel Xeon W-3335 CPU, it is hard to draw any conclusions about its performance, especially keeping in mind that platform drivers may not yet be ready for Ice Lake-W. Still, we can now make some assumptions about the CPU’s ballpark performance.
Intel has not disclosed what to expect from its Xeon W-series ‘Ice Lake’ processors, but in general the company tends to offer key features of its server products to its workstation customers as well. In the case of the Xeon W-3335, it is evident that the CPU retains an eight-channel memory subsystem, though we do not know anything about the number of PCIe lanes it supports.
In any case, since workstation vendors are already testing the new Xeon-W CPUs, expect them to hit the market shortly.
In its review of PowerColor’s Radeon RX 6900 XT Red Devil Ultimate, French publication Overclocking.com discovered that the graphics card is based on a new variant of the Navi 21 (Big Navi) silicon. The review brought to light the possibility that other vendors may also be preparing faster custom Radeon RX 6900 XT graphics cards.
There are currently three variations of the Navi 21 die on the market. The XL version is used in the Radeon RX 6800, the XT in the Radeon RX 6800 XT and, lastly, the XTX in the Radeon RX 6900 XT. As revealed in the review, the Radeon RX 6900 XT Red Devil Ultimate leverages the Navi 21 XTXH die, which is why even the latest version of GPU-Z doesn’t recognize it. With PowerColor’s help, Overclocking.com got its hands on the latest version of AMDVbFlash, a utility to flash firmware on Radeon graphics cards. The tool effectively confirms the existence of the Navi 21 XTXH silicon on the RX 6900 XT Red Devil Ultimate.
The Radeon RX 6900 XT already utilizes the full Navi 21 die, which brings 5,120 shading units and 80 ray tracing acceleration cores. Therefore, the XTXH variant in all likelihood is just a higher-binned die with improved clock speeds and a more generous power limit. Since AMD provides the dies to its partners, it’s reasonable to think that the XTXH is AMD’s idea rather than the partners doing their own binning.
Coming back to the Radeon RX 6900 XT Red Devil Ultimate, the RDNA 2 graphics card comes with two modes of operation. The silent profile limits the game and boost clocks to 2,135 MHz and 2,335 MHz, respectively, while the OC profile cranks them up to 2,235 MHz and 2,425 MHz. Basically, we’re looking at 10.9% and 7.8% higher game and boost clock speeds, respectively, compared to the vanilla Radeon RX 6900 XT. Does the increase warrant a new die revision? Apparently AMD (or at least PowerColor) thinks so.
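Those percentages line up with AMD’s reference clocks for the vanilla Radeon RX 6900 XT, which are 2,015 MHz game and 2,250 MHz boost (the reference values are added here for context, not taken from the review):

```python
# Reproduce the quoted clock uplifts of the Red Devil Ultimate's OC profile
# against AMD's reference RX 6900 XT clocks (assumed: 2,015 / 2,250 MHz).

reference  = {"game": 2015, "boost": 2250}   # MHz, AMD reference card
oc_profile = {"game": 2235, "boost": 2425}   # MHz, Red Devil Ultimate OC mode

for kind in ("game", "boost"):
    uplift = (oc_profile[kind] / reference[kind] - 1) * 100
    print(f"{kind} clock uplift: {uplift:.1f}%")   # ~10.9% and ~7.8%
```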
Overclocking.com noticed that with the Radeon RX 6900 XT Red Devil Ultimate the core and memory frequency sliders were unlocked in AMD’s Radeon software. It’s uncertain if the newly lifted limits are a product of the Navi 21 XTXH’s firmware. The power limit option is still locked though. But the Radeon RX 6900 XT Red Devil Ultimate has a 330W power restriction, so there is enough thermal headroom for overclocking. Overclocking.com got its sample to 2,750 MHz on air and up to 2,850 MHz under liquid nitrogen.
The Radeon RX 6900 XT Red Devil Ultimate is probably just one of many custom Radeon RX 6900 XT iterations that will leverage the Navi 21 XTXH silicon. Given the timing of the review, we wouldn’t be surprised if vendors announce these higher-binned Radeon RX 6900 XT graphics cards in the next couple of days. But with the current situation for graphics cards, we fear the announcements might as well be vaporware.
Nvidia this week introduced a host of professional graphics solutions for desktops and laptops, which carry the Nvidia RTX A-series monikers and do not use the Quadro branding. The majority of the new units are based on the Ampere architecture and therefore bring the latest features along with drivers certified by developers of professional software.
Nvidia started to roll out its Ampere architecture to the professional market last October when it announced the Nvidia RTX A6000 graphics card based on the GA102 GPU with 10,752 CUDA cores and 48GB of memory. The graphics board costs $4,650 and is naturally aimed at high-end workstations that cost well over $10,000. To address market segments with different needs, Nvidia this week introduced its RTX A5000 and RTX A4000 professional graphics cards.
The Nvidia RTX A5000 sits below the RTX A6000 but has the exact same feature set, including support for 2-way multi-GPU configurations using NVLink as well as GPU virtualization, so it can be installed into a server and used remotely by several clients (or used in regular desktop machines). The RTX A5000 is based on the GA102 GPU and is equipped with 24GB of GDDR6 memory with ECC. The RTX A5000 peaks at 27.8 FP32 TFLOPS, which is nearly 30% below the RTX A6000’s 38.7 FP32 TFLOPS, so it likely has far fewer CUDA cores. The board has four DisplayPort 1.4a outputs and comes with a dual-slot blower-type cooler.
Next up is the Nvidia RTX A4000, which is based on the GA104 and carries 16GB of GDDR6 memory with ECC. The product tops out at 19.2 FP32 TFLOPS and is designed solely for good old ‘individual’ workstations. Meanwhile, to keep up with the latest trends toward miniaturization, the RTX A4000 uses a single-slot blower-type cooling system.
Nvidia plans to start shipments of the new RTX A-series professional graphics cards later this month, so expect them in new workstations in May or June.
Mobile Workstations Get Amperes and Some Turings
In addition to new graphics cards for desktop workstations, Nvidia also rolled out a lineup of mobile Nvidia RTX A-series GPUs that includes four solutions: the RTX A5000 and RTX A4000 based on the GA104 silicon (just like the RTX 3070/RTX 3080 for laptops), the RTX A3000, and the RTX A2000 based on the GA106 chip (like the RTX 3060 for laptops).
The higher-end mobile Nvidia RTX A5000 has 6,144 CUDA cores and 16GB of GDDR6, and the RTX A4000 has 5,120 CUDA cores and 8GB of GDDR6. These are essentially the mobile GeForce RTX 3080/3070, but with drivers certified by ISVs for professional applications. Performance of these GPUs tops out at 21.7 and 17.8 FP32 TFLOPS, respectively.
By contrast, the RTX A3000 with 4,096 CUDA cores and 6GB of memory seems to be a rather unique solution as it has more execution units than the GeForce RTX 3060, yet it features a similar 192-bit memory interface. As for performance, it will be up to 12.8 FP32 TFLOPS. Meanwhile, the entry-level RTX A2000 with 2,560 CUDA cores and 4GB of GDDR6 memory will offer up to 9.3 FP32 TFLOPS.
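As a rough cross-check of these figures, Ampere delivers about two FP32 FLOPs per CUDA core per clock, so the quoted TFLOPS numbers imply the boost clocks below. Treat these as estimates only (they are not official specs), since actual clocks depend on each laptop's TGP configuration:

```python
# Implied boost clocks from the quoted CUDA core counts and FP32 TFLOPS,
# assuming ~2 FP32 FLOPs per CUDA core per clock on Ampere (an estimate,
# not an official Nvidia figure).

mobile_gpus = {            # name: (CUDA cores, peak FP32 TFLOPS)
    "RTX A5000": (6144, 21.7),
    "RTX A4000": (5120, 17.8),
    "RTX A3000": (4096, 12.8),
    "RTX A2000": (2560, 9.3),
}

for name, (cores, tflops) in mobile_gpus.items():
    boost_ghz = tflops * 1e12 / (2 * cores) / 1e9
    print(f"{name}: ~{boost_ghz:.2f} GHz implied boost clock")
```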
All of these GPUs are rated for a wide TGP range (e.g., the RTX A5000 can be limited to 80W or to 165W) and support Max-Q, Dynamic Boost, and WhisperMode technologies, so expect actual performance of Nvidia’s RTX A-series GPUs to vary from design to design, just like it happens with their GeForce RTX counterparts.
Nvidia expects its partners among manufacturers of mobile workstations to adopt its new RTX A-series solutions this quarter.
Some New Turings Too
In addition to new Ampere-based professional graphics solutions for desktops and laptops, Nvidia also introduced its T1200 and T600 laptop GPUs that also come with drivers certified by developers of professional applications. These products use unknown Turing silicon and are mostly designed to replace integrated graphics, so they do not feature very high performance and lack RT as well as Tensor cores.
Microsoft today announced the next iteration of its Surface laptop, the Surface Laptop 4. It will start at $999 when it goes on sale on April 15. Perhaps its biggest selling point is choice, with options for either 11th Gen Intel Core processors or an 8-core AMD Ryzen (again called the Microsoft Surface Edition).
Both the 13.5-inch and 15-inch versions of the Surface Laptop 4 will offer Intel and AMD options. This is a change from the Surface Laptop 3, which offered Intel in the 13.5-incher and AMD in the 15-incher (with the exception of business models).
| Spec | Microsoft Surface Laptop 4 (13.5-inch) | Microsoft Surface Laptop 4 (15-inch) |
| --- | --- | --- |
| CPU | Up to AMD Ryzen Microsoft Surface Edition R5 4680U (6 cores), up to Intel Core i7-1185G7 | Up to AMD Ryzen Microsoft Surface Edition R7 4980U (8 cores), up to Intel Core i7-1185G7 |
| Graphics | AMD Radeon RX Graphics or Intel Xe Graphics | AMD Radeon RX Graphics or Intel Xe Graphics |
| RAM | Up to 16GB (AMD), up to 32GB (Intel), LPDDR4x 3,733 MHz | Up to 16GB (AMD, DDR4 2,400 MHz), up to 32GB (Intel, LPDDR4x 3,733 MHz) |
| Storage | Up to 256GB (AMD), up to 1TB (Intel) | Up to 512GB (AMD), up to 1TB (Intel) |
| Display | 13.5-inch PixelSense display, 2256 x 1504, 3:2 | 15-inch PixelSense display, 2496 x 1664, 3:2 |
| Networking | Wi-Fi 6 (802.11ax), Bluetooth 5.0 | Wi-Fi 6 (802.11ax), Bluetooth 5.0 |
| Starting Price | $999 (AMD), $1,299 (Intel) | $1,299 (AMD), $1,799 (Intel) |
The design of the Surface Laptop 4 is largely unchanged, with a 3:2 touchscreen display at 201 pixels per inch and options for an Alcantara fabric or metal deck. There is, however, one new color, ice blue, which debuted on the Surface Laptop Go last year.
Many of the biggest changes can’t be seen. For the first time, Microsoft is offering a 32GB RAM option on the Surface Laptop (paired with an Intel Core i7 and 1TB of storage on both sizes). The company is claiming up to 19 hours of battery life on the smaller device with an AMD Ryzen 5 or 17 hours with a Core i7. On the bigger size, it’s suggesting up to 17.5 hours with an AMD Ryzen 7 and 16.5 hours with an Intel Core i7. Microsoft is also claiming a 70% performance increase, though it doesn’t say with which processor.
The new AMD Ryzen Microsoft Surface Edition chips are based on Ryzen 4000 and Zen 2, rather than Ryzen 5000 and Zen 3, which is just rolling onto the market. We understand Microsoft’s chips are somewhat customized, including frequencies similar to the newer chips. But these new processors should, in theory, lead to increased stability and battery life.
While Microsoft is being more flexible by allowing both Intel and AMD options on both sizes of machine, you won’t find them with identical specs when it comes to RAM and storage. The 13.5-inch laptop will offer a Ryzen 5 with 8GB or 16GB of RAM and 256GB of storage, while the Intel 11th Gen Core range will include a Core i5/8GB RAM/512GB SSD option to start, as well as both Core i5 and Core i7 models with 16GB of RAM and 512GB of storage, and a maxed-out version with a Core i7, 32GB of RAM and a 1TB storage drive. The Ryzen versions only come in platinum, while all but the top-end Intel model also come in ice blue, sandstone and black.
On the 15-inch model, you can get a Ryzen 7 with 8GB of RAM and either 256GB or 512GB of storage, or an R7 with 16GB of memory and a 512GB SSD. For Intel, you can choose between an Intel Core i7 with either 16GB of memory and 512GB of storage or 32GB of memory and 1TB of storage. These only come in platinum and black.
Commercial models will add more configurations for businesses, including a 13.5-inch model with 512GB of storage and a Ryzen processor. Overall, there are a lot of configurations, so hopefully people are able to find what they want. But there are definitely more options on the Intel side of the Surface fence.
The port situation is largely the same as last year, including USB Type-A, USB Type-C, a headphone jack and the Surface Connect port. Microsoft still isn’t going with Thunderbolt, and will be using USB-C 3.1 Gen 2 on both the Intel and AMD models. The replaceable SSD is back, though Microsoft continues to state that it isn’t user serviceable, and that it should only be removed by authorized technicians.
It’s been a long wait for the Surface Laptop 4. The Surface Laptop 3 was introduced at an event in October 2019 and went on sale that November. Last year, Microsoft revealed the cheaper, smaller Surface Laptop Go but didn’t update the flagship clamshell. We’ll go hands on with the Surface Laptop 4, so let’s hope the wait was worth it.
Microsoft is also revealing a slew of accessories designed for virtual work. They include the $299.99 Surface Headphones 2+ for Business, which is certified for Microsoft Teams with a dongle, shipping this month; Microsoft Modern USB and wireless headsets ($49.99 and $99.99, respectively, releasing in June); the Microsoft Modern USB-C Speaker ($99.99, releasing in June); and the Microsoft Modern webcam, a $69.99 camera with 1080p video, HDR and a 78-degree field of view that will go on sale in June.
Nvidia introduced two new data processing units (DPUs) at its GTC event on Monday, the BlueField-3 and the BlueField-4, which it says will be available in 2022 and 2024, respectively. The new DPUs are designed to offload network, storage, security, and virtualization loads from datacenter CPUs.
Nvidia’s BlueField-3 DPU is a custom-built system-on-chip that integrates 16 general-purpose Arm Cortex-A78 cores along with multiple accelerators for software-defined storage, networking, security, streaming, and TLS/IPSEC cryptography, to name a few. It also has its own DDR5 memory interface. The Cortex-A78 cores can run a virtualization software stack and offload numerous workloads from host CPUs, while the DPU’s programmable VLIW engines can be targeted through Nvidia’s DOCA (data-center-on-a-chip architecture) software development kit to accelerate specific workloads.
While BlueField-3 looks to be a pretty straightforward SoC on paper, it is an extremely complex chip with formidable compute performance: The DPU contains 22 billion transistors and features performance of 42 SPECrate2017_int (which is comparable to the performance of a 12-core 2nd Generation Xeon Scalable CPU) as well as 1.5 TOPS. Its successor — the BlueField-4 — will integrate 64 billion transistors and feature 160 SPECrate2017_int as well as 1000 TOPS performance.
The BlueField-3 DPU will come on a card equipped with Mellanox’s 400Gbps Ethernet or InfiniBand NIC for connectivity. Its successor, BlueField-4, will feature 800Gbps connectivity, Nvidia revealed today.
From a technology perspective, Nvidia’s DPUs have nothing to do with the company’s GPUs as they are developed by engineers from Mellanox, a company that Nvidia now owns. But from a business perspective, they are just as promising as datacenter GPUs and SoCs, as demand for cloud computing is growing rapidly.
In recent years Nvidia put a lot of effort into the development of its datacenter and HPC GPUs, and presently it commands the lion’s share of those markets. Meanwhile, as datacenters are becoming more complex and various data manipulations by CPUs are getting costly in terms of performance and power, DPUs may actually face growing demand in the coming years. So it’s not surprising that Nvidia’s engineers are hard at work designing leading-edge DPUs as they are a natural way for the company to grow further in the datacenter.
DPUs can also benefit the company’s GPU business. As CPUs get freed from networking, security, and cryptography workloads, they will spend more clocks running applications, some of which also rely on GPUs. So more freed-up CPU cycles may lead to demand for more GPU cycles.
Nvidia says it will start sampling of its BlueField-3 DPU in the first quarter of 2022.
Nvidia today revealed its updated DGX Station A100 320G. As you might infer from the name, the new DGX Station sports four 80GB A100 GPUs, with more memory and memory bandwidth than the original DGX Station. The DGX Superpod has also been updated with 80GB A100 GPUs and Bluefield-2 DPUs.
Nvidia already revealed the 80GB A100 variants last year, with HBM2e clocked at higher speeds and delivering 2TB/s of bandwidth per GPU (compared to 1.6TB/s for the 40GB HBM2 model). We already knew as far back as November that the DGX Station would support the upgraded GPUs, but they’re now ‘available’ — or at least more available than any of the best graphics cards for gaming.
The DGX Station isn’t for gaming purposes, of course. It’s for AI research and content creation. It has refrigerated liquid cooling for the Epyc CPU and four A100 GPUs, and runs at a quiet 37 dB while consuming up to 1500W of power. It also delivers up to 2.5 petaFLOPS of floating-point performance and supports up to 7 MIGs (multi-instance GPU) per A100, giving it 28 MIGs total.
If you’re interested in getting a DGX Station, they run $149,000 per unit, or can be leased for $9,000 per month.
The DGX Superpod has received a few upgrades as well. Along with the 80GB A100 GPUs, it now supports Bluefield-2 DPUs (data processing units). These offer enhanced security and full isolation between virtual machines. Nvidia also now offers Base Command, the software the company uses internally to help share its Selene supercomputer among thousands of users.
The DGX Superpod spans a range of performance, depending on how many racks are filled. It starts with 20 DGX A100 systems and 100 PFLOPS of AI performance, while a fully equipped system has 140 DGX A100 systems and 700 PFLOPS of AI performance. Each DGX A100 system has dual AMD EPYC 7742 CPUs with 64 cores each, supports up to 2TB of memory, and has eight A100 GPUs. Each also delivers 5 PFLOPS of AI compute, though FP64 is also available.
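The Superpod figures are simply linear multiples of that per-system number, as a quick consistency check shows:

```python
# Consistency check: Superpod AI performance scales linearly with the
# quoted 5 PFLOPS of AI compute per DGX A100 system.

pflops_per_system = 5

for systems in (20, 140):   # entry config and fully equipped config
    print(f"{systems} DGX A100 systems -> {systems * pflops_per_system} PFLOPS")
# 20 systems -> 100 PFLOPS, 140 systems -> 700 PFLOPS, matching the figures above
```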
The DGX Superpod starts at $7 million and scales up to $60 million for a fully equipped system. They’ll be available in Q2. We’ll take two, thanks.
Corsair is a US-based peripherals and hardware company founded in 1994. It is now one of the leading manufacturers of gaming gear, with a portfolio spanning nearly every component you need: DRAM memory modules, flash SSDs, keyboards, mice, cases, cooling, and much more.
The Corsair MP400 SSD is a value-optimized M.2 NVMe SSD based on a Phison E12 controller paired with QLC flash from Micron. A Nanya DRAM chip provides 1 GB of memory for the SSD’s mapping tables.
The Corsair MP400 comes in capacities of 1 TB ($110), 2 TB ($230), 4 TB ($600), and 8 TB ($1330). Endurance for these models is set to 200 TBW, 400 TBW, 800 TBW, and 1600 TBW respectively. Corsair includes a five-year warranty with the MP400.
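Condensing those list prices and endurance ratings into cost per gigabyte and rated full drive writes makes the tiers easier to compare (figures as quoted above):

```python
# Cost per gigabyte and rated full drive writes for each MP400 capacity,
# using the list prices and TBW ratings quoted above.

models = {      # capacity in TB: (price in USD, endurance in TBW)
    1: (110, 200),
    2: (230, 400),
    4: (600, 800),
    8: (1330, 1600),
}

for cap_tb, (price, tbw) in models.items():
    usd_per_gb = price / (cap_tb * 1000)
    drive_writes = tbw / cap_tb
    print(f"{cap_tb} TB: ${usd_per_gb:.3f}/GB, ~{drive_writes:.0f} full drive writes")
```

Notably, every capacity works out to roughly 200 full drive writes, so the endurance rating scales in lockstep with capacity.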
Nvidia’s flagship A100 compute GPU introduced last year delivers leading-edge performance required by cloud datacenters and supercomputers, but the unit is way too powerful and expensive for more down-to-earth workloads. So today at GTC the company introduced two younger brothers for its flagship, the A30 for mainstream AI and analytics servers, and the A10 for mixed compute and graphics workloads.
Comparison of Nvidia’s A100-Series Datacenter GPUs
| | A100 for PCIe | A30 | A10 |
| --- | --- | --- | --- |
| FP64 | 9.7 TFLOPS | 5.2 TFLOPS | – |
| FP64 Tensor Core | 19.5 TFLOPS | 10.3 TFLOPS | – |
| FP32 | 19.5 TFLOPS | 10.3 TFLOPS | 31.2 TFLOPS |
| TF32 | 156 TFLOPS | 82 TFLOPS | 62.5 TFLOPS |
| Bfloat16 | 312 TFLOPS | 165 TFLOPS | 125 TFLOPS |
| FP16 Tensor Core | 312 TFLOPS | 165 TFLOPS | 125 TFLOPS |
| INT8 | 624 TOPS | 330 TOPS | 250 TOPS |
| INT4 | 1,248 TOPS | 661 TOPS | 500 TOPS |
| RT Cores | – | – | 72 |
| Memory | 40 GB HBM2 | 24 GB HBM2 | 24 GB GDDR6 |
| Memory Bandwidth | 1,555 GB/s | 933 GB/s | 600 GB/s |
| Interconnect | 12 NVLinks, 600 GB/s | ? NVLinks, 200 GB/s | – |
| Multi-Instance | 7 MIGs @ 5 GB | 4 MIGs @ 6 GB | – |
| Optical Flow Acceleration | – | 1 | – |
| NVJPEG | – | 1 decoder | ? |
| NVENC | – | ? | 1 encoder |
| NVDEC | – | 4 decoders | 1 decoder (+AV1) |
| Form Factor | FHFL | FHFL | FHFL |
| TDP | 250W | 165W | 150W |
The Nvidia A30: A Mainstream Compute GPU for AI Inference
Nvidia’s A30 compute GPU is indeed A100’s little brother and is based on the same compute-oriented Ampere architecture. It supports the same features, a broad range of math precisions for AI as well as HPC workloads (FP64, FP64TF, FP32, TF32, bfloat16, FP16, INT8, INT4), and even multi-instance GPU (MIG) capability with 6GB instances. From a performance point of view, the A30 GPU offers slightly more than 50% of A100’s performance, so we are talking about 10.3 FP32 TFLOPS, 5.2 FP64 TFLOPS, and 165 FP16/bfloat16 TFLOPS.
When it comes to memory, the unit is equipped with 24GB of DRAM featuring a 933GB/s bandwidth (we suspect Nvidia uses three HBM2 stacks at around 2.4 GT/s, but the company has not confirmed this). The memory subsystem seems to lack ECC support, which might be a limitation for those who need to work with large datasets. Effectively, Nvidia wants these customers to use its more expensive A100.
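That three-stacks-at-~2.4 GT/s guess is easy to sanity check, since each HBM2 stack presents a 1,024-bit interface; the arithmetic below also reproduces the 'slightly more than 50% of A100' FP32 ratio. The stack count and per-pin data rate remain speculation rather than Nvidia-confirmed specs:

```python
# Sanity check of the suspected A30 memory configuration: 3 HBM2 stacks,
# each with a 1024-bit interface, feeding the quoted 933 GB/s.
# (Stack count and data rate are speculation, not confirmed by Nvidia.)

stacks = 3
bus_bits_per_stack = 1024
quoted_bandwidth_gbs = 933

implied_rate_gts = quoted_bandwidth_gbs / (stacks * bus_bits_per_stack / 8)
print(f"Implied data rate: {implied_rate_gts:.2f} GT/s per pin")   # ~2.43 GT/s

# The "slightly more than 50% of A100" FP32 claim:
print(f"A30 / A100 FP32 ratio: {10.3 / 19.5:.1%}")                 # ~52.8%
```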
Nvidia traditionally does not disclose precise specifications of its compute GPU products at launch, yet we suspect that the A30 is exactly ‘half’ of the A100 with 3456 CUDA cores, though this is something that is unconfirmed at this point.
Nvidia’s A30 comes in a dual-slot, full-height, full-length (FHFL) form factor with a PCIe 4.0 x16 interface and a 165W TDP, down from the FHFL A100’s 250W. Meanwhile, the A30 supports one NVLink at 200 GB/s (down from the A100’s 600 GB/s).
The Nvidia A10: A GPU for AI, Graphics, and Video
Nvidia’s A10 is not derived from the compute-oriented A100 and A30, but is an entirely different product that can be used for graphics, AI inference, and video encoding/decoding workloads. The A10 supports FP32, TF32, bfloat16, FP16, INT8 and INT4 formats for graphics and AI, but does not support the FP64 precision required for HPC.
The A10 is a single-slot FHFL graphics card with a PCIe 4.0 x16 interface that will be installed into servers running the Nvidia RTX Virtual Workstation (vWS) software and remotely powering workstations that need both AI and graphics capabilities. To a large degree, the A10 is expected to be a remote workhorse for artists, designers, engineers, and scientists (who do not need FP64).
Nvidia’s A10 seems to be based on the GA102 silicon (or a derivative), but since it supports INT8 and INT4 precisions, we cannot be 100% sure that this is physically the same processor that powers Nvidia’s GeForce RTX 3080/3090 and RTX A6000 cards. Meanwhile, performance of the A10 (31.2 FP32 TFLOPS, 125 FP16 TFLOPS) sits in the range of the GeForce RTX 3080. The card comes equipped with 24GB of GDDR6 memory offering 600GB/s of bandwidth, which points to the same memory interface width as the RTX 3090, but without GDDR6X’s clock speeds (or its power and heat).
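If the A10 really does use a 384-bit bus like the RTX 3090 (an inference from the 24GB capacity and the comparison above, not a confirmed spec), the 600GB/s figure implies plain 12.5 Gbps GDDR6 rather than GDDR6X:

```python
# Implied per-pin data rate for the A10's memory, assuming an RTX 3090-like
# 384-bit bus (the bus width is our assumption, not a published spec).

bandwidth_gbs = 600
bus_width_bits = 384

pin_rate_gbps = bandwidth_gbs * 8 / bus_width_bits
print(f"Implied GDDR6 data rate: {pin_rate_gbps:.1f} Gbps per pin")   # 12.5 Gbps
# For reference, the RTX 3090's GDDR6X runs at 19.5 Gbps per pin.
```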
Pricing and Availability
Nvidia expects its partners to start offering machines with its A30 and A10 GPUs later this year.
We’ve barely heard a peep out of Nvidia on the CPU front for years, after the lackluster arrival of its Project Denver CPU and its associated Tegra K1 mobile processors in 2014. But now, the company’s getting back into CPUs in a big way with the new Nvidia Grace, an Arm-based processing chip specifically designed for AI data centers.
It’s a good time for Nvidia to be flexing its Arm: it’s currently trying to buy Arm itself for $40 billion, pitching it specifically as an attempt “to create the world’s premier computing company for the age of AI,” and this chip might be the first proof point. Arm is having a moment in the consumer computing space as well, where Apple’s M1 chips recently upended our concept of laptop performance. It’s also more competition for Intel, of course, whose shares dipped after the Nvidia announcement.
The new Grace is named after computing pioneer Grace Hopper, and it’s coming in 2023 to bring “10x the performance of today’s fastest servers on the most complex AI and high performance computing workloads,” according to Nvidia. That will make it attractive to organizations building supercomputers; the Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory have already signed up to build Grace-based systems in 2023.
A Grace Next is already on the roadmap for 2025, too, according to a slide from Nvidia’s GTC 2021 presentation where it announced the news.
I’d recommend reading what our friends at AnandTech have to say about where Grace might fit into the data center market and Nvidia’s ambitions. It’s worth noting that Nvidia isn’t releasing much in the way of specs just yet — but Nvidia does say it features a fourth-gen NVLink with a record 900 GB/s interconnect between the CPU and GPU. “Critically, this is greater than the memory bandwidth of the CPU, which means that NVIDIA’s GPUs will have a cache coherent link to the CPU that can access the system memory at full bandwidth, and also allowing the entire system to have a single shared memory address space,” writes AnandTech.
As DDR5 memory and supported platforms are approaching their launch, more makers of sophisticated DRAM modules are teasing their upcoming DDR5 products. Galax did exactly this today.
“DDR5 memory module is coming soon,” reads a statement posted by Galax OC Lab on Facebook.
The post also shows Micron DRAM chips marked as ICA45 D8BNJ R6KB, which are not currently listed on the company’s website, but which we understand are the devices that Galax OC Lab is playing with at the moment.
Galax OC Lab is known for its rather exotic Hall of Fame (HOF) components, with overengineered PCBs and cooling systems designed to enable great out-of-the-box performance along with some extra overclocking potential. With DDR5, Galax HOF engineers are going to have a lot of things to play with.
As we noted in our coverage of Team Group’s upcoming DDR5 modules for overclockers, one of the innovative features of DDR5 DIMMs is that they can be equipped with their own voltage regulating modules (VRMs) and power management ICs (PMICs) to lower voltage fluctuation ranges, decrease power consumption, potentially increase DRAM yields, and boost performance.
Memory modules with onboard VRMs and PMICs will be particularly important for servers that use up to 4TB of memory per socket (and with DDR5 this number might grow to 32TB in the coming years), where power consumption of the DRAM can surpass that of processors.
Meanwhile, makers of memory modules for client PCs can also take advantage of this capability and equip their DIMMs with sophisticated VRMs and PMICs to amplify performance, differentiate from rivals, and maximize overclocking potential.
Right now, makers of memory modules for enthusiasts improve performance by refining PCB designs, cherry-picking DRAM (after sourcing the ‘right’ devices from IC vendors), playing with voltages, and tweaking timings. With DDR5, the game will get both tougher and easier at the same time, as companies will also be able to choose and tune VRMs and PMICs.
That said, so far neither Galax nor Team Group has confirmed that they will use onboard VRMs and PMICs for their first overclockable DDR5 modules. Still, we do know that the specification supports this capability.
Intel’s Alder Lake-S, expected at some point later this year, promises to be the industry’s first desktop platform to support DDR5 memory.