AMD will reveal a new Radeon RX 6000 graphics card during Episode 3 of its “Where Gaming Begins” event on March 3 at 11 AM US Eastern. Although the chipmaker didn’t specify which model, it’s likely going to be the much-awaited Radeon RX 6700 XT. Following AMD’s Big Navi release pattern, the Radeon RX 6700 XT is the next SKU in line, after all.
AMD’s render of the RDNA 2 graphics card aligns with a previously leaked design of what the Radeon RX 6700 could look like. On an aesthetic level, the Radeon RX 6700 XT shares similar traits with the reference design for other Big Navi models, such as the Radeon RX 6900 XT, RX 6800 XT and RX 6800. However, the Radeon RX 6700 XT features a less robust cooling system with a dual-slot design.
The Radeon RX 6700 XT emerges with a shorter cooler that uses only two cooling fans. A quick glimpse at the front of the graphics card reveals three DisplayPort 1.4a outputs and a single HDMI 2.1 port. It would seem that AMD has removed the USB Type-C connector on the Radeon RX 6700 XT. While the USB Type-C port has its uses, it never really took off, so many consumers will be pleased to see it replaced with an extra DisplayPort 1.4a output instead.
On March 3rd, the journey continues for #RDNA2. Join us at 11AM US Eastern as we reveal the latest addition to the @AMD Radeon RX 6000 graphics family. https://t.co/5CFvT9D2SR pic.twitter.com/tUpUwRfpgk (February 24, 2021)
The Radeon RX 6700 XT will be gunning after Nvidia’s mid-range Ampere-based graphics cards, such as the GeForce RTX 3060 that launches tomorrow. The specifications for the new Big Navi (I guess this is really Medium Navi) graphics card are still blurry, but we expect to see a full Navi 22 (codename Navy Flounder) die, which houses 40 Compute Units (CUs). As AMD has done in the past, it’s reasonable to think that the chipmaker would also put out a Radeon RX 6700, which would probably leverage a cut-down version of the Navi 22 silicon.
The rumors are painting the Radeon RX 6700 XT and RX 6700 with 2,560 and 2,304 Stream Processors (SPs), respectively. Assuming that the SP count is accurate, the XT variant will have 40 ray accelerators at its disposal, while the non-XT variant should be equipped with 36 of them.
On the memory front, Gigabyte has registered multiple custom Radeon RX 6700 XT graphics cards with the EEC (Eurasian Economic Commission) featuring 12GB of GDDR6 memory. Similarly, ASRock has submitted a couple of Radeon RX 6700 SKUs with 6GB of GDDR6 memory.
Pricing and performance are important, but availability has ultimately taken up a bigger role nowadays given the graphics card shortages, crypto-mining boom and scalpers. AMD has made it clear that it’ll announce a Radeon RX 6000 graphics card on March 3. However, it’ll be interesting to see if it will be available for purchase sooner rather than later.
The RTX 3060 launch is almost upon us, but we won’t have to wait until tomorrow to see benchmarks. Videocardz.com today posted early numbers of how the 3060 runs in synthetic benchmarks like 3DMark and Unigine Superposition. The site attributes these tests to anonymous sources, and although synthetic benchmarks and pre-launch tests can be inaccurate, the results are quite underwhelming.
Nvidia’s GeForce RTX 3060 will be the company’s new mid-range card for the Ampere generation, featuring 3584 CUDA cores, 12GB of GDDR6 memory, and a $329 MSRP (good luck getting one at that price). The GPU will be launching on February 25th (tomorrow).
GPU               Fire Strike Extreme    Time Spy Extreme    Superposition 1080P Extreme
RTX 3060          10284                  4111                5073
RTX 2060          9050                   3810                4370
RTX 2060 Super    10560                  4070                5150
We’ve highlighted some of the benchmark figures Videocardz posted above; overall, the RTX 3060 managed to beat the RTX 2060 by just 10%. Some of the synthetic tests, like 3DMark Fire Strike, have the 3060 beating its Turing counterpart by 16%, but in others, like Time Spy, the results are nearly identical.
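For anyone who wants to sanity-check the relative gains, here’s a minimal Python sketch that recomputes the deltas straight from the Extreme-preset scores in the table above (the regular Fire Strike and Time Spy runs cited in the text aren’t reproduced here), on the assumption that the leaked figures are accurate:

# Recompute RTX 3060 vs. RTX 2060 deltas from the leaked Extreme-preset scores above.
scores = {
    "Fire Strike Extreme": {"RTX 3060": 10284, "RTX 2060": 9050},
    "Time Spy Extreme": {"RTX 3060": 4111, "RTX 2060": 3810},
    "Superposition 1080P Extreme": {"RTX 3060": 5073, "RTX 2060": 4370},
}

for test, result in scores.items():
    gain = result["RTX 3060"] / result["RTX 2060"] - 1
    print(f"{test}: RTX 3060 is {gain:+.1%} vs. the RTX 2060")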
However, as with most of these leaks, performance can be highly skewed by the use of synthetic benchmarks and a pre-release driver, both of which can significantly alter the card’s results. So take these numbers with a grain of salt, as it’s almost guaranteed that what we’re seeing here doesn’t indicate how the card will really perform on launch day, with launch-day drivers.
But if there’s a chance these results end up reflecting the 3060’s actual power, the card runs the risk of presenting a seriously unappealing value at $329, which is roughly 30% more expensive than the RTX 2060 despite giving you just 10% more performance.
Stay tuned for our review of the RTX 3060 tomorrow, where you’ll get a more detailed, more accurate overview of where the 3060 sits in performance against the RTX 2060 and the other best GPUs currently on the market.
The Gigabyte camp has added a new rendition of the GeForce RTX 3060 to its arsenal. Barring any subsequent models, the Aorus GeForce RTX 3060 Elite 12G (GV-N3060AORUS E-12GD) appears to carry the flagship tag of Gigabyte’s custom GeForce RTX 3060 models, and is also looking to be the fastest GeForce RTX 3060…for now.
The Aorus GeForce RTX 3060 Elite 12G features a 1,867 MHz boost clock, 5.1% higher than Nvidia’s reference specification. It comes with two vBIOS profiles: one for silent operation and another for performance. Prior to the Elite model, the GeForce RTX 3060 Gaming OC 12G and GeForce RTX 3060 Vision OC 12G were the fastest SKUs in Gigabyte’s lineup with a 1,837 MHz boost clock. Although the Elite version only has a 30 MHz higher boost clock, the graphics card apparently comes with a steeper power requirement.
As opposed to Gigabyte’s other GeForce RTX 3060 offerings that leverage a single 8-pin PCIe power connector, the Aorus GeForce RTX 3060 Elite 12G needs the same 8-pin plus an additional 6-pin PCIe power connector. This obviously bumps the recommended minimum power supply capacity from the previous 550W up to 650W.
The Aorus GeForce RTX 3060 Elite 12G is also armed with a triple-fan cooling system and has dimensions of 296 x 117 x 56mm. The cooler seems to be overkill for a GeForce RTX 3060 as the renders show that it extends far beyond the graphics card’s short PCB. On the aesthetics side, Gigabyte kept the design relatively low-key with a simple, black shroud with a tiny Aorus RGB-illuminated logo and cooling fans with ARGB rings.
The cooling system adheres to Gigabyte’s WindForce 3X design with five composite heat pipes transferring the heat away from the GA106 die and memory chips towards the bulky heatsink. Subsequently, a trio of 80mm semi-passive cooling fans dissipate the heat. The Aorus GeForce RTX 3060 Elite 12G comes equipped with a black, metal back plate that helps with cooling and improves the graphics card’s rigidity. According to Gigabyte, the extended heatsink allows hot air to exit through the large cut-out.
In terms of display outputs, the graphics card provides two HDMI 2.1 ports and two DisplayPort 1.4a outputs. The configuration is sufficient to support up to four monitors simultaneously.
Gigabyte hasn’t revealed pricing for the Aorus GeForce RTX 3060 Elite 12G, as the GeForce RTX 3060 isn’t officially available until tomorrow. Since it’ll likely be one of the best GPUs among custom GeForce RTX 3060 models, we don’t think it will be easy on the pockets.
Nvidia announced its RTX 3060 last month, later revealing that it was putting the brakes on Ethereum mining performance with this card. Simultaneously, Nvidia announced the professional ‘CMP’ mining GPUs.
However, within all this, CryptoLeo, a Russian YouTuber that specializes in cryptocurrency, showed that the GPU was still profitable to mine with — as long as you didn’t mine Ethereum. Now, the same guy is tearing down his sample of the Zotac RTX 3060 graphics card for your viewing pleasure.
The first thing that stands out to CryptoLeo in the teardown is that the GPU’s memory comes with thermal pads to aid with cooling. This is an absolute must in today’s market, as the memory can run very hot, especially when mining.
Not far into the teardown, after removing the cooler and cleaning the GPU die, it immediately becomes clear which GPU sits at the heart of this graphics card: the GA106-300-A1, again confirming that this is the same GPU as in the mobile RTX 3060, but cut down with slightly fewer cores for this desktop variant. Of course, its significantly higher clock rates enable desktop-class performance on the RTX 3060.
As spotted by @Harukaze5719, the PCB is identical to that of the Zotac RTX 3060 Ti Twin Edge. This isn’t a huge surprise, as the two GPUs don’t differ all that much in power requirements, so why make yet another PCB if the same design can be reused? Power delivery for the GPU is handled by a five-phase setup, with two further power phases dedicated to the vRAM.
No Big Surprises In This Teardown
All things considered, there are no huge surprises to be seen here: this is a midrange card with a midrange cooling solution, and PCB designs have shrunk enough that the board fits in the palm of a hand.
The RTX 3060 is set to hit shelves tomorrow at an MSRP of $329 – but we’ll be impressed if even one card trades hands for that price.
A good number of Alder Lake benchmarks have popped up on the radar. Yesterday’s Geekbench 5 submission (via Benchleaks), however, gives us a first peek into the hybrid processor’s big cores.
Alder Lake-S will go down in Intel’s history as the first hybrid x86 desktop processor, and from how it looks so far, it may be one of the most confusing processor launches. Alder Lake-S brings together a mixture of ‘Big’ Golden Cove and ‘Small’ Gracemont cores. As you would imagine, that gives life to numerous potential configurations. As of this moment, we’ve learned from a Linux driver update that Alder Lake-S could arrive in up to 12 different flavors, assuming that Intel doesn’t have more tricks up its sleeve.
The latest Alder Lake-S sample lacks a name, but given the details that we already know about Intel’s hybrid chips, we don’t doubt its veracity. The processor under test was operating from a motherboard or test platform based on the upcoming LGA1700 socket. Alder Lake-S is pegged to support both DDR4 and DDR5 memory. Although the submission itself doesn’t specify the type of memory, the detailed report revealed the memory running with timings configured to 36-34-34-63. Given the really sloppy timings, the processor was very likely paired with DDR5 memory.
The Alder Lake processor features eight cores and 16 threads, implying that it’s rolling with only the ‘Big’ Golden Cove cores, since the Gracemont cores lack Hyper-Threading support. The processor appears to feature a 3 GHz base and boost clock, but it may be an early engineering sample. A previous Alder Lake-S chip, however, emerged with a 4 GHz boost clock, although that was the 16-core model, alluding to eight Golden Cove cores and eight Gracemont cores. Will Intel clock the Golden Cove-exclusive SKUs higher than the hybrid SKUs, or vice versa? It’s still uncertain how Alder Lake-S will play out.
Unfortunately, the Geekbench 5 submission doesn’t provide us with any meaningful insight into the Alder Lake-S chip’s performance, so it’s unclear how it will stack up in our CPU Benchmarks hierarchy. The OpenCL benchmark only taxes the graphics card, which in this case was a GeForce RTX 2080. So, we can’t really pass judgment on the Alder Lake-S processor’s gaming performance or whether it bottlenecks the Turing-based graphics card or not.
Alder Lake, which is based on Intel’s 10nm Enhanced SuperFin process, will enter mass production in the second half of this year. Not surprisingly, the processors will command fresh LGA1700 motherboards with the 600-series chipset and, of course, DDR5 memory. Upgrading to the new platform certainly won’t be easy on the pockets, and pricing will be the ultimate determinant on whether or not Alder Lake makes an appearance on our list of Best CPUs for gaming.
Courtesy of database detective @Leakbench on Twitter, we now have our first decent look at how Intel’s next-gen Core i9-11900K Rocket Lake CPU will perform in our CPU Benchmark hierarchy. This test is the first clear result from Geekbench 4 for the 11900K, which is nice to see as it can be a more accurate gauge of raw CPU performance than the other benchmark results we’ve seen, like Passmark or Geekbench 5. The latest test results show that Rocket Lake will assuredly climb the gaming ranks, and if the price is right, the new chips could upset our list of Best CPUs.
In a nutshell, you shouldn’t trust Geekbench 5’s overall scores as an accurate measure of Rocket Lake’s performance, and there’s a technical reason why. We’ve encountered strange phenomena with Geekbench 5, where its use of AVX-512 can widely skew the results in the encryption subtest. In turn, this inflates Rocket Lake’s overall Geekbench 5 scores against all other processors that don’t support AVX-512. This can paint an inaccurate picture that makes Rocket Lake appear better in relation to AMD’s competing chips, not to mention Intel’s previous-gen models.
Geekbench 4 isn’t perfect either, but its lack of AVX-512 support makes the test much more accurate when gauging per-core performance without using an exotic SIMD instruction (AVX-512) that has no meaningful uptake in mainstream desktop PC software. In fact, Geekbench’s developer has stated that the AVX-512 testing disparity will be addressed in the Geekbench 6 benchmark that’s due out later this year. The big takeaway here — don’t look too deeply into the overall Geekbench 5 test results.
This particular test submission seems about as close as we’ll get to something solid before the launch, but as usual, we have to take the results with a grain of salt. However, the 11900K boosts to 5.3 GHz throughout this test sequence, signaling it’s running at stock core clocks, and the memory appears to be running at the stock DDR4-3200 for Rocket Lake.
This is important because Geekbench 4 is sensitive to memory frequency, especially when it comes to multi-core tests. To compare, we’re using test results that we generated in our own labs for the Core i9-10900K and Ryzen 7 5800X, with both operating at stock memory clocks (DDR4-2933 and DDR4-3200, respectively).
CPU                     Geekbench 4 Single-Core Score    Geekbench 4 Multi-Core Score
Intel Core i9-11900K    7562                             36326
Intel Core i9-10900K    6592                             38704
AMD Ryzen 7 5800X       7247                             42609
Intel claims a 19% increase in IPC for the Rocket Lake chips, and that appears to be roughly accurate in this test. The Core i9-11900K was ~15% faster than its predecessor, the 10900K, in the single-core tests.
However, looking at the multi-core results, the inverse happens and the 10900K is 6.5% faster due to its higher core count. That’s actually pretty impressive, though: The ten-core Core i9-10900K has two more cores than the eight-core Core i9-11900K, so we expected a much larger advantage in favor of the chip with two extra cores. Increased IPC truly floats all boats.
But against the 5800X, the single-core results are much closer, which is natural given Zen 3’s substantial IPC gains of its own. Here the 11900K pulls ahead of the 5800X by a mere 4.4%. Strangely, though, the 5800X pulls ahead of the 11900K in the multi-core department by 17%, which is a larger delta than we expected given that these are both eight-core chips.
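To make the comparison above easier to follow, here’s a small Python sketch (using only the Geekbench 4 scores from the table) that derives the cited percentage deltas plus a rough multi-core-points-per-core figure, which is where the 11900K’s IPC gain over the 10900K shows up most clearly:

# Derive the single-core and multi-core deltas, plus rough per-core multi-core
# throughput, from the Geekbench 4 scores listed in the table above.
chips = {
    # name: (cores, single-core score, multi-core score)
    "Core i9-11900K": (8, 7562, 36326),
    "Core i9-10900K": (10, 6592, 38704),
    "Ryzen 7 5800X": (8, 7247, 42609),
}

base = chips["Core i9-11900K"]
for name, (cores, single, multi) in chips.items():
    print(f"{name}: single-core {single / base[1] - 1:+.1%}, "
          f"multi-core {multi / base[2] - 1:+.1%} vs. the 11900K, "
          f"~{multi / cores:.0f} multi-core points per core")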
This is but one benchmark, though, and several factors could influence the score, including early firmware with the Core i9-11900K. We expect more mature BIOS revisions will be headed out before launch. In either case, these results paint a competitive picture for the desktop PC space soon, one in which price (and supply in light of the shortages) will be exceedingly important.
As tweeted by @Harukaze5719, Palit has added one more indication there may be additional (future) RTX 3060 variants. Palit filed listings for new RTX 3060 model names with the NRRA, and one model in particular showcases the RTX 3060 with 6GB of VRAM.
Even before the official RTX 3060 announcement from Nvidia, there were a number of rumors floating around. Considering the history of the -60 line of GPUs, many have assumed Nvidia planned to make a 6GB variant of the RTX 3060. We had GTX 1060 6GB and 3GB (and even 5GB in Asia), so this isn’t without precedent.
Of course, this is just a listing for a product name. We’ve seen listings like this numerous times in the past, where a graphics card manufacturer will pump out a model name, and it never gets used. It may come to fruition, or it may not. We’ll have to wait and see where the fate of the 3060 6GB will lie.
It’s worth noting that the mobile RTX 3060 already exists in a 6GB variant, so this isn’t exactly a difficult switch. Instead of 16Gb chips on each channel, all Nvidia needs to do is put in 8Gb chips. Six channels, 32-bits each, and you end up with either a 6GB or 12GB card. But how would a 6GB variant of the 3060 play out?
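Before getting to that, here’s a rough illustration of the channel math, assuming the desktop card keeps its 192-bit bus (six 32-bit channels):

# Capacity of a 192-bit GDDR6 configuration (six 32-bit channels), depending on
# whether 8Gb or 16Gb chips are populated on each channel.
channels = 6
for chip_density_gbit in (8, 16):
    total_gbit = channels * chip_density_gbit
    print(f"{chip_density_gbit}Gb chips x {channels} channels = {total_gbit // 8}GB of VRAM")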
From our own experiences testing 6GB graphics cards like the RTX 2060, we’ve found that 6GB is about as low as you want to go with a mid-range card. Especially at 4K resolutions, you might have to turn down a couple of settings (like 4K texture packs) in very graphically demanding games to prevent VRAM bottlenecking.
There are other incentives to buying an RTX card, of course, like DLSS support. Ray tracing often gets more hype, but the Tensor cores are potentially more useful for lower-spec PCs. There are already a few dozen games with DLSS (about 24 with DLSS 2.0), and the list continues to grow. Nvidia just announced that Nioh 2: Complete Edition and Mount & Blade II: Bannerlord now have DLSS 2.0, for example, and Unreal Engine also has a new free marketplace plugin for DLSS that should make it even easier to implement. So, even though a 6GB card might not make a lot of sense for ray tracing, it’s definitely not out of the question.
The card would have to be priced well to make it attractive over the more favorable 12GB offering. Right now, unfortunately, in the world of virtually impossible to buy graphics cards, it’s doubtful good prices will even be a thing. Everything from GTX 1050 Ti through RTX 3090 is basically sold out, with extreme scalper prices on eBay. However, from Nvidia’s standpoint, a 6GB model does make sense — especially if they are suffering from the video memory shortages that have reportedly plagued GDDR6 (and GDDR6X) production.
As usual, Nvidia won’t comment on the existence or potential for a desktop 3060 6GB. Our take is that, given where games are heading and current pricing, it will be a tough sell unless the price is really good (and actually something you can find). The GTX 1060 3GB was in a similar situation a few years back, and we never recommended it without concerns about the lack of VRAM. A 3060 6GB would be the modern equivalent, and at the right price, consumers probably would be okay with the reduced memory capacity. Or, you know, just buy a 3060 laptop.
The Atari arcade cabinet game Race Drivin’ was ported to the Atari ST in the summer of 1991, and then ported again to the SNES a year later. It was the sequel to 1989’s Hard Drivin’, and while it boasted numerous improvements over its predecessor — it could model a car with four wheels, as opposed to Hard Drivin’s two — it was still not particularly fast.
The SNES console port ran at a slideshow-y 4 frames per second. And when the Genesis port arrived in 1993, Electronic Gaming Monthly’s January 1994 issue gave the game a capsule review. It reads in full:
This is another so-so entry in the driving scene where the truly innovative titles (Chase H.Q. II and Rock & Roll Racing) tend to stand out, while others like this get lost in the crowd. The scrolling is very choppy.
It received mostly fours and fives (out of 10) from the magazine’s staff. (This was in an issue with an editor’s letter about the California attorney general threatening to do something about violence in video games! Night Trap was terrifying at the time!)
Anyway, Race Drivin’ ran at 4 frames per second on the original SNES hardware. Software engineer Vitor Vilela thought that wasn’t good enough and decided to do something about it using contemporaneous hardware: the Nintendo SA-1 coprocessor. As Kotaku reports, the results show exactly how much more powerful the SA-1 chip was; Vilela managed to get around 30 frames per second using a conversion they developed specifically for it. Here’s what that looks like in action.
In the description of the video on YouTube, Vilela writes a little about how they managed to get this frankly very impressive feat working. “Just like my other conversions, this one moves the entire memory to the SA-1 side and moves almost the whole processing to the SA-1 CPU side,” they write. “With all optimizations included, the game runs up to 1000% faster compared with original.”
All the code Vilela wrote for this hack is available on GitHub, along with the source code for the other hacks they’ve pulled off. It’s a shame that EGM couldn’t have gotten its hands on this version of the game; it looks like something ported directly from an alternate future.
(Pocket-lint) – The Inspire 2 is the cheapest member of the Fitbit family – and effectively replaces the Inspire HR that launched in 2019 – aimed at those wanting to keep to the tracking basics.
The Inspire 2 sticks largely to the same formula as the Inspire HR, making welcome improvements to the design, bolstering battery life to make it last longer than any other Fitbit device, and giving you a tracking experience that just feels very easy to get to grips with.
With the likes of Samsung, Huawei, Amazfit and Xiaomi also making the budget tracker space a more competitive place, does the Fitbit Inspire 2 do enough to pull away from its more affordable rivals?
Design
Large and small wristband options
Water resistant to 50 metres (5ATM)
Finishes: Black, Lunar White, Desert Rose
With the Inspire 2, Fitbit isn’t trying to reinvent the wheel. Put one side-by-side with an Inspire HR and you’d be hard pressed to tell the difference between the two. The colour silicone bands can be removed and come in small and large size options.
The greyscale touchscreen display – which has a slightly curvier-edged look – is now 20 per cent brighter than the previous Inspire, which is definitely a positive move. There’s now a dim mode for when you don’t need that extra hit of brightness, which can be disabled when you do. It certainly offers an improvement for visibility out in bright outdoor light, but it feels like it might be time to ditch the greyscale OLED screen and go colour like a lot of its competitors have done – Xiaomi, Amazfit and Samsung each offer great colour display options for less money.
To give the Inspire 2 a much cleaner look than its predecessor, Fitbit has also removed the physical button in favour of a setup where you squeeze the sides of the device to do things like turn on the display or get into the band’s settings. Overall, it works well and the button isn’t hugely missed.
Around the back is where you’ll find the PurePulse heart rate sensor, which means you have the ability to continuously monitor heart rate, exercise in personalised heart rate zones, and unlock new features like Active Zone Minutes.
The big appeal of wearing the Inspire 2 is that it’s a slim, light and comfortable band to wear all the time. As it’s water resistant up to 50 metres, it’s safe to swim and shower with.
Features
24/7 tracking
Connected GPS
Guided breathing
20+ exercise modes
Additional health insights in Fitbit Premium
Fitness tracking is what Fitbit does best – so it’s no surprise that’s where the Inspire 2’s key features lie.
The sensors making that happen haven’t changed from the last Inspire models. There’s an accelerometer to track steps and enable automatic sleep monitoring. You also have that optical heart rate monitor, which unlocks a range of features and is still best suited to daily monitoring as opposed to putting it to work during exercise. You still don’t get an altimeter to track elevation, like floor climbs, which you do get on Fitbit’s flagship Charge 4.
For daily tracking, you can monitor daily steps, distance covered, calories burned, and get reminders to keep moving during the day. Fitbit has also added additional reminders to wash your hands, get your heart pumping, or to stay hydrated.
When it’s time to go to bed, you’ll be able to capture the duration of sleep and get a breakdown of sleep stages. That includes the all-important REM sleep, which is a window into the type of sleep tied to memory and learning. You’ll also get a Sleep Score to give you a clear idea if you’ve had a good night’s sleep.
When you switch to exercise tracking, there are over 20 goal-based modes with core exercises like walking, running and pool swimming. There’s also Fitbit’s SmartTrack tech to automatically recognise when you start moving and working out.
There’s connected GPS support, which means you can lean on your phone’s GPS signal to more accurately track outdoor activities. That GPS support is also useful for the Workout Intensity Maps feature, which along with monitoring your heart rate can show you where you worked hardest during a session.
With that onboard heart rate monitor you’re getting to continuously monitor and capture resting heart rate – day and night. It’s also going to let you train in heart rate zones and generate a Cardio Fitness Score to give you a better sense of your current state of fitness based on your VO2 Max (maximal oxygen uptake). Fitbit is also introducing its new Active Zone Minutes feature, which will buzz you when you hit your personalised target heart rate zones. It’s a move to get users to think more about regularly raising heart rate as well as nailing those big daily step counts.
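Fitbit doesn’t publish the exact formula behind its personalised zones, but they’re built on the familiar heart-rate-reserve idea; the sketch below illustrates that general approach (the zone percentages and the 220-minus-age estimate are common rules of thumb, not Fitbit’s own numbers):

# Illustration only: personalised heart rate zones are typically derived from
# heart rate reserve (max HR minus resting HR). These splits are common
# rules of thumb, not Fitbit's published formula.
def hr_zones(age, resting_hr):
    max_hr = 220 - age              # rough estimate of maximum heart rate
    reserve = max_hr - resting_hr   # heart rate reserve
    at = lambda pct: round(resting_hr + pct * reserve)
    return {
        "Fat Burn": (at(0.40), at(0.59)),
        "Cardio": (at(0.60), at(0.84)),
        "Peak": (at(0.85), max_hr),
    }

print(hr_zones(age=35, resting_hr=60))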
For that time outside of getting sweaty, the Inspire 2 will perform some useful more smartwatch-like duties. There’s notification support for both Google Android and Apple iOS devices, letting you see native and third-party app notifications. There’s a dedicated notifications menu where you can find your latest incoming messages. In addition to notifications, there’s also a collection of different watch faces to choose from.
Beyond the basics, there’s also guided breathing exercises, menstrual health tracking, and app-based features like manually tracking your food intake. You also have access to Premium, Fitbit’s subscription service, which you’ll get a year to trial before deciding whether to continue at your own cost.
Performance and battery life
Continuous heart rate monitoring
Up to 10 days battery life
Sleep tracking
Those core fitness tracking features are what the Inspire 2 does best. Step counts are largely in line with the fitness tracking features on a Garmin Fenix 6 Pro, with similar distance covered and calories data too. While those inactivity alerts aren’t groundbreaking, they’re a small way to make sure you keep moving during the day.
When you switch to sleep, the slim, light design of the Inspire 2 makes it a comfortable tracker to take to bed first and foremost. Fitbit offers some of the best sleep tracking features in the business. Compared to the Fitbit Sense and the Withings Sleep Analyzer, we were pretty satisfied with the kind of data Fitbit gave us.
For exercise tracking – as long as you’re not hoping to run for miles on a regular basis and up the intensity in general – the Inspire 2 should just about cut it. The heart rate monitor is better suited to continuous monitoring than it is for strenuous workout time based on our experience. For running, and cardio blasting HIIT sessions on the Fiit home workout app, average readings could be as much as 10bpm out (compared to a Garmin HRM Pro chest strap monitor).
The connected GPS support is also better suited to shorter runs, which brings useful features like those Workout Intensity Maps into the mix.
As for battery life, the Inspire 2 offers the best battery numbers Fitbit has ever offered. It’s promising up to 10 days, which is double that of the Inspire HR. It lives up to that claim, too, as long as you’re not going too bright with that screen and not tracking exercise every day with it. The good news is that things like all-day heart rate monitoring don’t seem to have a tremendous drain, which isn’t the case on all fitness trackers.
When you do need to charge there’s one of Fitbit’s proprietary charging cables, which clips into the charging points on the back and the top and bottom of the rear case. That ensures it stays put and doesn’t budge when you stick the Inspire 2 onto charge.
Software
Fitbit’s companion app, which is available for Android, iOS and Windows 10 devices, remains one of its key strengths – and a strong reason you’d grab one of its trackers over cheaper alternatives.
It’s easy to use and if you want some added motivation to keep you on top of your goals, that’s available too. The main Today screen will give you a snapshot of your daily data and can be edited to show the data you actually care about.
Discover is where you’ll find guided programmes, challenges, virtual adventures and workouts to accompany daily and nightly tracking. If you’ve signed up to Fitbit Premium, you’ll have a dedicated tab for that too. You still have all your device settings hidden away whether you need to adjust step goals, heart rate zones or how you keep closer tabs on your nutrition and weight management.
The Inspire 2 experience is similar to owning a Fitbit Versa 3, a Charge 4, or a Sense. Which is key: that consistent feeling across all devices makes it a good ecosystem to be part of if you know other Fitbit owners. You can delve deeper into the data if you want to, but for most, what’s there when you first download the app and log in will be more than enough to get a sense of your progress.
Verdict
The Fitbit Inspire 2 sticks to a known formula, covering tracking basics, while wrapping it up in a design that’s comfortable to wear all of the time.
The screen changes for this model are welcomed – extra brightness, yay – and if you care about steps, sleep and monitoring heart rate during the day and night, it will serve you well.
All that’s supported by an app that’s one of the most user-friendly if you’re starting to think about monitoring your health and fitness for the first time.
The level of smartwatch features is dictated by the slenderness of the device, and while you can get more in the way of these features elsewhere for less money, what the Inspire 2 offers should be good enough for most. It’s still not quite a ready-made sportswatch replacement though.
Cheaper fitness trackers are now offering more features, arguably better displays and battery life. But if you’re looking for a fitness tracker that puts your health and tracking front and centre, then Fitbit is still one of the best.
Also consider
Fitbit Inspire HR
If you can live without that brighter display and some of the software extras, the Inspire HR will still offer a solid tracking experience for less cash.
Read our review
Huawei Band 3 Pro
Huawei’s fitness band offers one big feature you won’t find on the Inspire 2: built-in GPS. If you like the idea of a tracker a bit better built for sports, this is one worth looking at.
Gigabyte has announced its new server for AI, high-performance computing (HPC), and data analytics. The G262-ZR0 machine is one of the industry’s first servers with four Nvidia A100 compute GPUs. The 2U system will be cheaper when compared to Gigabyte’s and Nvidia’s servers with eight A100 processors, but will still provide formidable performance.
The Gigabyte G262-ZR0 is based on two AMD EPYC 7002-series ‘Rome’ processors with up to 64 cores per CPU, as well as four Nvidia A100 GPUs with either 40GB (1.6TB/s of bandwidth) or 80GB (2.0TB/s of bandwidth) of onboard HBM2 memory. Four Nvidia A100 processors feature 13,824 FP64 CUDA cores and 27,648 FP32 CUDA cores, as well as an aggregated performance of 38.8 FP64 TFLOPS and 78 FP32 TFLOPS.
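Those aggregate numbers follow directly from Nvidia’s published per-GPU A100 figures; a quick sketch:

# The aggregate figures above are simply Nvidia's per-GPU A100 specs times
# the four GPUs installed in the G262-ZR0.
a100_specs = {
    "FP64 CUDA cores": 3456,
    "FP32 CUDA cores": 6912,
    "FP64 TFLOPS": 9.7,
    "FP32 TFLOPS": 19.5,
}
gpu_count = 4
for spec, per_gpu in a100_specs.items():
    print(f"{gpu_count} x {per_gpu:g} {spec} = {gpu_count * per_gpu:g}")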
The machine can be equipped with 16 RDIMM or LRDIMM DDR4-3200 memory modules, three M.2 SSDs with a PCIe 4.0 x4 interface, and four 2.5-inch HDDs or SSDs with a SATA or a Gen4 U.2 interface.
The machine also has two GbE ports, six low-profile PCIe Gen4 x16 expansion slots, one OCP 3.0 Gen4 x16 mezzanine slot, an ASpeed AST2500 BMC, and two 3000W 80+ Platinum redundant PSUs.
Gigabyte says that its G262-ZR0 machine will provide the highest GPU compute performance possible in a 2U chassis, which will be its main competitive advantage.
Gigabyte did not disclose pricing of its G262-ZR0 server, but it will naturally be significantly cheaper than an 8-way NVIDIA DGX A100 system or a similar machine from Gigabyte.
News outlet IThome has reported that Chinese manufacturer Asgard, which is owned by Shenzhen Jiahe Jinwei Electronic Technology Co., Ltd., has launched the company’s first DDR5 memory module.
Asgard’s memory module, which carries the VMA5AUK-MMH224W3 part number, arrives with a capacity of 64GB. However, Asgard has confirmed that it will also be available in capacities of 32GB and 128GB. The memory can be a bit rough on the eyes if you’re accustomed to fancy heat spreaders and cheesy RGB lighting. However, it’s what’s inside that really counts.
Regardless of the density, the memory module operates at 4,800 MT/s, and that’s nowhere near the ceiling for DDR5; the new specification is expected to hit data rates of up to 8,400 MT/s. Predictably, Asgard’s memory module draws only 1.1V, which is the reference DRAM voltage for DDR5. It also adheres to JEDEC’s “B” standard, meaning its timings should be set to 40-40-40.
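For context on what CL40 means in absolute terms, here’s a small sketch converting CAS latency cycles into nanoseconds (the DDR4-3200 CL22 line is the common JEDEC spec, included purely for comparison):

# Convert CAS latency from cycles to nanoseconds. A DDR data rate of N MT/s
# corresponds to an I/O clock of N/2 MHz.
def cas_latency_ns(transfer_rate_mts, cas_cycles):
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000

print(f"DDR5-4800 CL40: {cas_latency_ns(4800, 40):.1f} ns")  # the Asgard module above
print(f"DDR4-3200 CL22: {cas_latency_ns(3200, 22):.1f} ns")  # typical JEDEC DDR4, for comparison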
According to the report, Asgard has not started mass producing the DDR5 memory module, and it makes sense since there aren’t any processors that leverage the standard yet. However, the vendor expects to put the wheels into motion once Intel’s 12th Generation Alder Lake processors and corresponding 600-series chipsets are ready.
The Asgard representative even went ahead and confirmed to IThome some of the future Intel and AMD processors that will support DDR5. On the Blue Team, we obviously have Alder Lake, as well as Sapphire Rapids and Tiger Lake-U. For the Red Team, the spokesperson mentioned the Van Gogh and Rembrandt APUs.
ADATA is Taiwan’s largest manufacturer of flash storage and DRAM memory for computers. They have been at the forefront of SSD development for many years, bringing us famous SSDs like the SX8200, SX900, and S510.
XPG is one of ADATA’s sub-brands and creates products optimized for the needs of gamers.
With the Gammix S70, ADATA XPG uses a controller vendor we rarely see. Innogrit has been building SSD chips with a focus on the value segment for a long time. Now, they’ve released the IG5236 “Rainier” controller with support for PCI-Express 4.0. Besides the controller, 3D TLC chips are used. They have been rebranded by ADATA; I suspect these are Intel/Micron B27 TLC. Two 1 GB DDR4-3200 DRAM chips provide 2 GB of storage for the mapping tables of the SSD.
The XPG Gammix S70 is available in capacities of 1 TB ($200) or 2 TB ($400). Endurance for these models is set to 740 TBW and 1480 TBW respectively. ADATA provides a five-year warranty for the S70.
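To put those endurance figures in perspective, here’s a quick sketch converting TBW into drive writes per day (DWPD), assuming writes are spread evenly across the five-year warranty:

# Translate rated endurance (TBW) into drive writes per day (DWPD),
# assuming writes are spread evenly over the warranty period.
def dwpd(tbw, capacity_tb, warranty_years=5):
    return tbw / (capacity_tb * 365 * warranty_years)

for capacity_tb, tbw in ((1, 740), (2, 1480)):
    print(f"{capacity_tb} TB model: {tbw} TBW -> {dwpd(tbw, capacity_tb):.2f} DWPD")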
Specifications: ADATA XPG Gammix S70 2 TB
Brand: ADATA XPG
Model: AGAMMIXS70-2T-C
Capacity: 2048 GB (1907 GB usable), no additional overprovisioning
The GTX 340 is perhaps one of the strangest and rarest graphics cards you can find online today. As covered by the Budget-Builds Official YouTube channel, the GTX 340 is, in fact, an unofficial GeForce 300 series card that Nvidia never actually produced. Instead, enterprising modders created the cards from standard GT 340 units by adding souped-up dual-slot coolers, modifying the BIOS, applying a hefty overclock, and then changing the identifier string to make the card appear as a “GTX” model in the system.
First, a quick backstory on Nvidia’s 300 series: the 300 series started as a rebrand of the 200 series and was based on the Tesla 2.0 architecture, which proved to be very efficient but very weak in terms of raw horsepower. Due to Tesla’s low performance, the 300 series cards were primarily geared toward 2D video playback and basic display-adapter duty, but they could run some 3D applications. Indeed, most of the 300 series cards were nothing more than straight rebrands of their 200 series counterparts. (The only exceptions were the GT 320 and GT 330, which had very slight improvements over the 200 series parts they were based on.)
So what about the GTX 340? Apparently, someone years and years ago got into the business of upgrading GT 340s, which were then re-sold to system builders. The mods included a dual-slot cooler, a modified BIOS with the GTX nomenclature, and a hefty core overclock. In fact, the card shows up as a GTX 340 in GPU-z.
This strategy does make sense — the GT 340 is already very power efficient, consuming just 51W, so getting a good overclock out of the GT 340 wasn’t hard at all with a cooler that’s twice the size of the original.
Unfortunately, we don’t know why this person (or people) decided to give the GT 340 the GTX 340 badge. Still, it was probably to help differentiate these heavily modified cards from the traditional GT 340.
The GTX 340 features a Tesla 2.0 GPU with 96 Shader Units, 32 TMUs, 1GB of GDDR5, and a 51W TDP. The GTX card has a core clock speed of 650 MHz and 1700 MHz memory clock, a decent increase from the GT’s original 500 MHz core clock and 850 MHz memory clock.
Of course, with how old this card is, performance is very underwhelming; the card can barely manage 30FPS in CSGO at 480P (yeah, it’s that bad), and the performance delta between the GT 340 and the “GTX 340” is barely 2%, at most.
That’s the story of the GTX 340. It’s perhaps one of the most boring GPUs ever released, but it’s cool that someone made a brand new SKU out of “thin air,” and it seems to have worked for them, at least from a business perspective. But good luck trying to buy one; they are super scarce on eBay and can only be bought overseas.
News outlet CRN reported that Gigabyte has retired the GeForce RTX 3090 Turbo 24G announced back in September of last year. The product page for the graphics card is no longer available, which confirms CRN’s report.
Gigabyte’s sudden plans to cancel the GeForce RTX 3090 Turbo 24G will certainly put its server partners in a tight situation. Although Nvidia has a formidable Ampere compute graphics card in the shape of the A100, many vendors preferred to roll with the GeForce RTX 3090 due to the latter’s better price-to-performance ratio. The A100 retails for close to $10,000 while the GeForce RTX 3090 can be found for $1,499 on a good day.
The GeForce RTX 3090 Turbo 24G was a great option for vendors putting together budget server offerings because the graphics card met all the requirements of a compute graphics card. It featured a blower design, occupied only two PCI slots, and came equipped with 24GB of high-speed GDDR6X memory, which is a big plus for deep learning workloads.
It’s plausible that Gigabyte might have received some sort of warning or recommendation from Nvidia to preemptively ax the GeForce RTX 3090 Turbo 24G. Word was probably getting around town that manufacturers were opting for the GeForce RTX 3090 instead of the more expensive A100 for their data center solutions. In fact, Nvidia discourages the deployment of GeForce and Titan graphics cards in a data center setting. The aforementioned products don’t come with the same level of features as Nvidia’s data center offerings, such as an extended warranty, enterprise support, certification for data center applications and a longer lifecycle. They also come with a smaller price tag.
Now that the GeForce RTX 3090 Turbo 24G has officially reached end-of-life (EOL) status, vendors will have to look for another viable solution. Luckily, Gigabyte’s GeForce RTX 3090 Turbo 24G wasn’t the only GeForce RTX 3090 with a blower design on the market. Asus and MSI also put out similar designs. Heck, even South Korean manufacturer Emtek’s GeForce RTX 3090 Blower Edition is a legit alternative if push comes to shove.
Flash partners Kioxia and Western Digital revealed this week a brand new generation of 3D NAND flash memory that promises to be much faster and far denser than anything they have produced before.
BiCS6 features 162 layers and 70% more bits per wafer than the preceding BiCS5 from a few years ago. That 70% allows for a 40% reduction in NAND chip size, helping to cut manufacturing costs.
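The 70% and 40% figures are consistent with each other; here’s a quick sketch of the arithmetic, assuming a chip of the same capacity:

# 70% more bits per wafer means each bit occupies 1/1.7 of the area, so a die
# of the same capacity shrinks by roughly the quoted ~40%.
bits_per_wafer_gain = 1.70
relative_area_per_bit = 1 / bits_per_wafer_gain
print(f"Relative die size at equal capacity: {relative_area_per_bit:.2f} "
      f"(about {1 - relative_area_per_bit:.0%} smaller)")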
All these enhancements have yielded excellent results in read and, especially, write performance, according to the vendors’ announcement. They claim that with both Circuit Under Array (CUA) CMOS placement and four-plane operation, we can expect BiCS6 NAND flash to have a write speed around 2.4x faster than its predecessor’s. Note that BiCS5 comes in both four-plane and dual-plane designs, while BiCS6 only uses a four-plane design.
There is also reportedly a significant uptick in I/O performance, with BiCS6 operating at 1,600 MT/s compared to BiCS5’s 1,066 MT/s. This helps keep the tech on par with competing vendors like Micron and SK Hynix, which use 1,600 MT/s speeds as well.
This is just an announcement of BiCS6’s capabilities, so we don’t yet know when it’ll enter production. But if it’s anything like BiCS5, it’ll take another year before we see the best SSDs featuring the new technology.