I would like to thank Lamptron for supplying the sample.
Lamptron has been around since the early 2000s and is well known for its slew of fan controllers. In recent years, with the disappearance of external 5.25″ slots, Lamptron has started to expand the line-up to internal components for both fan and RGB control, as well as several LCD monitors and RGB accessories. In this article, we will take a quick look at the Lamptron ATX201 RGB Frame, which, as the name implies, is a unique RGB lighting element to give your motherboard that extra visual appeal. Lamptron also offers mATX and ITX variants.
Packaging and A Closer Look
The ATX201 frame comes packaged in a brown cardboard box with a sticker of the frame in action on front, so you know exactly what you are receiving.
The frame itself is as simple as can be. In essence, it is a translucent plastic frame with a black cover on one side. There are cutouts around where the motherboard standoffs for an ATX form factor would be, and a 5 V 3-pin RGB cable comes out of the corner for you to connect to your controller or straight to your motherboard.
The connector is completely traditional, so you should run into no problems when interfacing it with any modern header on a board or generic controller. If you look closely at the translucent side of the frame, you can see the LEDs on a strip that essentially wraps all the way around inside the frame. The black layer is then glued on to keep everything in place.
Frame in Use
Installing the Lamptron ATX201, if you can even call it an installation, simply means placing the frame into the case before installing your motherboard. As you can see in the first picture, the ATX201 aligns nicely with the mounts. Once we dug up an open-frame case, we were also able to take a shot of the frame sandwiched between the chassis and the motherboard.
We connected the frame to the MSI Z390 and Zalman Z3 Iceberg’s generic 5 V RGB controller, which was in turn plugged straight into the motherboard. Using the MSI Dragon Center software, the board’s dim backlight was turned off to showcase the ATX201 as an add-on for those with a board without that feature in the first place. While I am not a fan of RGB, the indirect lighting is actually pretty nifty in my opinion. As you can see, the frame emits quite the potent glow all around the motherboard, with nicely diffused illumination—all while syncing up with the rest of the components just fine.
To showcase some basic colors, we went through the RGB set—red, green, and blue. Naturally, you have a lot of creative freedom when utilizing software, and the MSI Mystic Light app offers a variety of animations and multi-color settings as well, all of which come across very nicely on the Lamptron ATX201.
We also shot a quick 10-second video of a full-on RGB animation to give you a real-world sense of what to expect.
Conclusion
The Lamptron ATX201 is a really simple, yet pretty darn nifty RGB add-on to your system. It adds lighting to an area in your system you will have a hard time illuminating otherwise; that is, unless your motherboard already has rear-mounted LEDs. For those who do not have that luxury, the ATX201 is a potent and super simple element that just works and does so extremely well.
That is it. There really is not much else to be said about a product that is made out of a plastic frame and an embedded LED strip. All the ingenuity is in the shape of the frame itself and, thus, its placement. Quite honestly, I am fine with that since it is, in turn, affordable, and the end result is pretty stunning.
From a pricing perspective, things are currently not very clear. Lamptron mentioned an MSRP of $39, which seems quite high for a product that utilizes such a simple list of materials. We suggested a $20–$25 price tag, so hopefully, it will be more affordable once it hits retail.
Twitter Blue — the social network’s first subscription product that adds an undo button to tweets among other minor additions like changing the color of icons and adding folders for bookmarks — launched on Thursday. It’s limited to Canada and Australia for now but has already garnered attention for lacking the features people would be willing to pay Twitter for, like no ads, or better tools to handle harassment.
Which makes Twitter CEO Jack Dorsey’s thread today, the day after the product launched, somewhat humorous and frustrating. What can we say: the guy loves to talk about bitcoin, even when other more pressing matters are at hand!
Square is considering making a hardware wallet for #bitcoin. If we do it, we would build it entirely in the open, from software to hardware design, and in collaboration with the community. We want to kick off this thinking the right way: by sharing some of our guiding principles.
— jack (@jack) June 4, 2021
I’m not a Bitcoin expert, but sure, making a product in the open, with the goal of being inclusive and open source sounds fine by me, especially since Square is already heavily invested in the currency. As with most things people tweet, best to take this as off-the-cuff musing rather than an official product announcement. Dorsey’s made similar pronouncements via Twitter thread — like funding a decentralized version of Twitter — that have only made small amounts of public progress since they were tweeted into the ether.
What this might highlight, though, is how Dorsey’s attention is split acting as the CEO of both Twitter and payment company Square. The issue has been raised before by one of the company’s investors, Elliott Management. Running Twitter is a job he’s been increasingly checked out of, with the Wall Street Journal reporting in October that Dorsey is “hands-off to the extreme, delegating most major decisions to subordinates in part so he can pursue his personal passions.” He came out and said today at the Bitcoin 2021 conference in Miami that if he wasn’t running Square and Twitter, he’d be working on bitcoin. He seems pretty good at finding a way to work on bitcoin anyway.
Twitter’s recent sprint of new product announcements suggests someone wants to change things at Twitter. Social audio features like Spaces and creator subscription systems like Super Follows are legitimately interesting — just maybe not to Dorsey. But as Platformer’s Casey Newton notes, Twitter Blue, as an example of the company’s new focus on power users, doesn’t really offer many features that power users want. And if his silence on the subject is any indication, perhaps Dorsey and users are aligned in their disinterest towards the paid service.
Jack, if you’re listening, there’s absolutely nothing stopping you from becoming a wandering ascetic, living off fake money you minted from an overclocked GPU. Just please, if you hate it so much, let someone else run your website.
Alienware has acknowledged an issue with the vBIOS of its new m15 R5 laptop that limits CUDA cores on models with an Nvidia GeForce RTX 3070 graphics card.
“We have been made aware that an incorrect setting in Alienware’s vBIOS is limiting CUDA Cores on RTX 3070 configurations,” the company told Tom’s Hardware. “This is an error that we are working diligently to correct as soon as possible. We’re expediting a resolution through validation and expect to have this resolved as early as mid-June. In the interim, we do not recommend using a vBIOS from another Alienware platform to correct this issue. We apologize for any frustration this has caused.”
Forum threads on Reddit and Notebook Review showed people noticing that software like CPU-Z and HWInfo was reporting the wrong number of CUDA cores for an RTX 3070. Rather than 5,120 cores, it showed 4,608. HWInfo also reportedly showed fewer ray tracing and tensor cores. Some also had issues with the number of render output units (ROPs).
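For context on why 4,608 is a suspicious number: Nvidia's Ampere GPUs expose 128 FP32 CUDA cores per streaming multiprocessor (SM), so the reported figure corresponds to 36 active SMs rather than the 40 a full laptop RTX 3070 carries. A quick sanity check (the per-SM figure is Ampere's published spec; the SM counts are the only inputs):

```python
# Ampere GPUs expose 128 FP32 CUDA cores per streaming multiprocessor (SM).
CUDA_CORES_PER_SM = 128

def cuda_cores(active_sms: int) -> int:
    """CUDA core count for a GPU with the given number of active SMs."""
    return active_sms * CUDA_CORES_PER_SM

full_rtx_3070 = cuda_cores(40)   # a full laptop RTX 3070 has 40 SMs
reported      = cuda_cores(36)   # what CPU-Z and HWInfo showed

print(full_rtx_3070, reported)   # 5120 4608
```

In other words, the misconfigured vBIOS behaves as though four whole SMs are disabled, which would also explain the lower ray tracing and tensor core counts, since those units are provisioned per SM as well.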
Some people in those threads reported that switching to the BIOS for the Alienware m15 R4 with an RTX 3070 fixed these issues, but that could cause other problems, especially as the R4 is an Intel system and the R5 is AMD-based. Others speculated wildly about potential special-order cards or other potential software problems.
If the cores were indeed being limited, not just misreported, it’s possible that performance will improve once the fix is released.
Apple’s annual developer extravaganza, the Worldwide Developers Conference (WWDC), is coming up fast, kicking off with the keynote presentation on June 7th at 1PM ET. Like last year, WWDC will be an entirely digital and online-only event due to the COVID-19 pandemic, and for the keynote, that means we can likely expect another tightly produced video highlighting everything Apple has in store.
While we aren’t expecting any announcements on the level of Apple’s shift to custom silicon in its computers, which was WWDC 2020’s big news, Apple presumably has some notable changes in the works for iOS, iPadOS, macOS, and its other operating systems. And if the current rumors pan out, we could also see brand-new MacBook Pros with the return of some long-missed features, such as MagSafe charging.
Read on to learn everything we expect from the big show. And don’t be surprised if Apple has a few surprises in store, too.
iOS 15 may bring improvements to notifications and iMessage
We haven’t heard much about what may be coming to Apple’s next version of its mobile operating system, which will presumably be called iOS 15, but we could see big changes to notifications and possibly iMessage, according to Bloomberg.
For notifications, you may be able to have different notification settings for situations like driving, working, sleeping, or even a custom category, and you’ll be able to flip those on as you need to. You might also be able to set automatic replies based on which notification setting you’re currently using, like what you can do now with Do Not Disturb while driving mode. Personally, I’m hoping iOS 15 will let me allow notifications from a select few people while silencing just about everything else.
As for iMessages, Apple is apparently working on features to make it act like “more of a social network” to compete with Facebook’s WhatsApp, Bloomberg said, but those features are still “early in development” and could be announced at a later date.
Apple also plans to add a feature that shows you apps that are silently collecting data about you, continuing the company’s trend of adding privacy-focused updates to its operating systems.
For iPadOS 15, you can apparently expect a major update to the homescreen, including the ability to put widgets anywhere you want. And with Apple just introducing the new M1-powered iPad Pros, here’s hoping we see some new upgrades to take advantage of the new chip.
In May, Apple also announced a lot of new accessibility features coming to Apple’s operating systems, such as improvements in iOS to VoiceOver, support for bidirectional hearing aids, a built-in background sounds player, and new Memoji customizations like cochlear implants. Apple said these features would arrive “later this year,” which suggests they’ll be included in iOS 15.
We don’t know much about macOS, watchOS 8, and tvOS 15 — but we could see a new “homeOS”
We haven’t heard all that much about upcoming software updates for the Mac, Apple Watch, and Apple TV, so we’ll just have to wait and see what Apple is cooking up. One tidbit: macOS could be a “more minor” update, Bloomberg says. That wouldn’t be too much of a surprise, given that the macOS operating system got a big overhaul with Big Sur last year.
However, we could see the introduction of a brand-new operating system called “homeOS,” which was recently mentioned in and later removed from an Apple job listing. While it’s unclear exactly which devices this OS is for, perhaps it will work on Apple’s home-focused products like the Apple TV and HomePod Mini.
New, redesigned MacBook Pros and a new Apple CPU could be announced
Apple doesn’t always introduce new hardware at WWDC, but this year, new MacBook Pros seem like a possibility. In a May 18th report, Bloomberg said that new MacBook Pros might arrive “as soon as early this summer,” which could indicate an announcement at WWDC.
These new laptops would have new Apple-designed processors that would “greatly outpace the performance and capabilities of the current M1 chips,” according to Bloomberg. The M1 is already pretty dang good, so it sounds like these new chips could be even more impressive.
Apple is apparently planning on releasing two chips for the new Pros. Both should have eight high-performance cores and two energy-efficient cores, while leaving you with the option of either 16 or 32 graphics cores. (By comparison, the M1’s CPU has four high-performance and four energy-efficient cores, while its GPU is offered with either seven or eight cores.) You’ll probably also be able to spec the laptops with as much as 64GB of memory, up from a max of 16GB on M1-equipped computers.
The new laptops should be offered with either 14-inch or 16-inch screens and those screens could have “brighter, higher contrast” displays, according to a Bloomberg report from January. The laptops may also have a new design with flat edges as in the iPhone 12, analyst Ming-Chi Kuo said in January. I’m curious to see what that design might look like in practice — I worry that the hard edges could be uncomfortable if you have the laptop on your lap.
The best rumor is that the new design may also mark the return of some of the ports and features that were taken away with the now-infamous 2016 MacBook Pro redesign, including a MagSafe charger, an HDMI port, and an SD card slot, Bloomberg said in its May report. And, according to Kuo, the OLED Touch Bar currently found on Intel-based MacBook Pros will apparently be removed in favor of physical function keys.
We could see at least one other new Mac
While it seems like MacBook Pros are the only new hardware we’ll be seeing at WWDC this year, that hasn’t stopped some other Mac rumors from swirling lately, and there’s always the chance Apple could announce more at its big event. According to Bloomberg, Apple also has “a revamped MacBook Air, a new low-end MacBook Pro, and an all-new Mac Pro workstation” in the works as well as a “higher-end Mac Mini desktop and larger iMac,” all of which would be powered by Apple’s custom silicon.
The new Mac Mini may have the same chip as the new MacBook Pros. The new Mac Pro could be a beast, with processors that are “either twice or four times as powerful as the new high-end MacBook Pro chip.”
And the redesigned “higher-end” MacBook Air could arrive as early as the end of this year. Frankly, I hope that refreshed Air arrives even later. I just bought the M1-equipped Air and it’s one of the best computers I’ve ever used, but I have a bad feeling I’ll be first in line to buy a redesigned and more capable Air anyway. (Especially if it gets the MagSafe charger that’s rumored for the new Pros.)
Apple might have dropped a hint about its AR / VR headset
Apple has long been rumored to have a mixed reality headset in the works, and recently, we’ve learned a few more potential details about it. The headset might be very expensive — approximately $3,000, according to one report — though it could be packed with 8K displays, more than a dozen cameras to track hand movements and capture footage, and might weigh less than an iPhone, too.
While the headset could be a ways out, as it’s not expected to ship until 2022 at the earliest, a few suspicious details in Apple’s WWDC promotional images may be hinting toward some kind of reveal of Apple’s upcoming headset or the software on which it runs.
Check out this image below (that I also used at the top of this post), which Apple released alongside the announcement of WWDC in March. Notice the way the app icons are reflected in the glasses — I could imagine some sort of mixed reality headset showing icons in front of your eyes in a similar way.
Apple continued the reflection motif with new images released in May — you can see things from the laptop screens reflected in the eyes of all of the Memojis.
Now, these reflections may just be Apple’s artists flexing their design chops. And if I had to guess, given how far out a rumored mixed reality headset is, I don’t think we’re going to see anything about it at WWDC this year.
But Apple has surprised us in the past, and maybe these images are an indication of one more thing Apple has in store for WWDC.
Users are complaining that the Disney Plus and HBO Max Apple TV apps aren’t properly supporting the excellent new Siri Remote. Disney Plus has yet to be updated to make use of the remote’s helpful scroll wheel scrubbing feature, which works on rival Netflix and Apple TV Plus apps. The HBO Max app has more issues, Screen Times reports, and lacks support for several of the remote’s features and voice commands.
When we tried the HBO Max app for ourselves, we found it doesn’t seem to support the new remote’s D-pad. Instead, we could only use the remote’s touchpad circle, which is technically an element of the remote you’re supposed to be able to disable. Scrubbing using the touchpad works, just very badly. The cause of these problems appears to be a recent app update, which replaced the standard tvOS playback UI with HBO’s own (and apparently far less capable) version.
It just goes to show that a platform holder can build a powerful combination of hardware and software in support of third-party apps and services, but it doesn’t mean anything unless those same apps and services actually bother to add support. We’ve reached out to both Disney Plus and HBO Max to see if they’re aware of the issues, and to find out when users can expect a fix.
Google has designed its own new processors, the Argos video (trans)coding units (VCU), that have one solitary purpose: processing video. The highly efficient new chips have allowed the technology giant to replace tens of millions of Intel CPUs with its own silicon.
For many years Intel’s video decoding/encoding engines that come built into its CPUs have dominated the market both because they offered leading-edge performance and capabilities and because they were easy to use. But custom-built application-specific integrated circuits (ASICs) tend to outperform general-purpose hardware because they are designed for one workload only. As such, Google turned to developing its own specialized hardware for video processing tasks for YouTube, and to great effect.
However, Intel may have a trick up its sleeve with its latest tech that could win back Google’s specialized video processing business.
Loads of Videos Require New Hardware
Users upload more than 500 hours of video content in various formats every minute to YouTube. Google needs to quickly transcode that content to multiple resolutions (including 144p, 240p, 360p, 480p, 720p, 1080p, 1440p, 2160p, and 4320p) and data-efficient formats (e.g., H.264, VP9 or AV1), which requires formidable encoding horsepower.
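To make that fan-out concrete, here is a minimal sketch of a per-upload transcoding ladder. The encoder names are ffmpeg's (libx264, libvpx-vp9, libaom-av1) and serve purely as stand-ins; Google's actual pipeline is proprietary, and the output naming here is invented:

```python
import itertools

# Hypothetical rendition ladder built from the resolutions and codecs
# mentioned above; nothing here reflects Google's real pipeline.
HEIGHTS = [144, 240, 360, 480, 720, 1080, 1440, 2160, 4320]
CODECS = {"h264": "libx264", "vp9": "libvpx-vp9", "av1": "libaom-av1"}

def transcode_jobs(src: str) -> list[list[str]]:
    """One ffmpeg invocation per (height, codec) rendition of `src`."""
    jobs = []
    for h, (name, encoder) in itertools.product(HEIGHTS, CODECS.items()):
        jobs.append([
            "ffmpeg", "-i", src,
            "-vf", f"scale=-2:{h}",   # keep aspect ratio, force even width
            "-c:v", encoder,
            f"{src}.{h}p.{name}.mkv",
        ])
    return jobs

jobs = transcode_jobs("upload.mp4")
print(len(jobs))  # 9 heights x 3 codecs = 27 renditions per upload
```

Every upload multiplies into dozens of renditions like this, which is why the encoding horsepower requirement is so formidable at YouTube's scale.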
Historically, Google had two options for transcoding/encoding content. The first option was Intel’s Visual Computing Accelerator (VCA) that packed three Xeon E3 CPUs with built-in Iris Pro P6300/P580 GT4e integrated graphics cores with leading-edge hardware encoders. The second option was to use software encoding and general-purpose Intel Xeon processors.
Google decided that neither option was power-efficient enough for emerging YouTube workloads – the Visual Computing Accelerator was rather power hungry itself, whereas scaling the number of Xeon CPUs essentially meant increasing the number of servers, which means additional power and datacenter footprint. As a result, Google decided to go with custom in-house hardware.
Google’s first-generation Argos VCU does not replace Intel’s central processors completely as the servers still need to run the OS and manage storage drives and network connectivity. To a large degree, Google’s Argos VCU resembles a GPU that always needs an accompanying CPU.
Instead of stream processors like we see in GPUs, Google’s VCU integrates ten H.264/VP9 encoder engines, several decoder cores, four LPDDR4-3200 memory channels (featuring 4×32-bit interfaces), a PCIe interface, a DMA engine, and a small general-purpose core for scheduling purposes. Most of the IP, except the in-house designed encoders/transcoders, was licensed from third parties to cut down on development costs. Each VCU is also equipped with 8GB of usable ECC LPDDR4 memory.
The main idea behind Google’s VCU is to put as many high-performance encoders/transcoders into a single piece of silicon as possible (while remaining power efficient) and then scale the number of VCUs separately from the number of servers needed. Google places two VCUs on a board and then installs 10 cards per dual-socket Intel Xeon server, greatly increasing the company’s decoding/transcoding performance per rack.
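The scaling math in the paragraph above works out as follows (the encoder-engine count per VCU and the board/server layout are from the article; the rest is simple multiplication):

```python
ENCODERS_PER_VCU = 10    # H.264/VP9 encoder engines per Argos chip
VCUS_PER_BOARD = 2       # Google mounts two VCUs per card
CARDS_PER_SERVER = 10    # ten cards per dual-socket Xeon server

vcus_per_server = VCUS_PER_BOARD * CARDS_PER_SERVER
encoder_engines_per_server = vcus_per_server * ENCODERS_PER_VCU

print(vcus_per_server, encoder_engines_per_server)  # 20 200
```

So each pair of host CPUs fronts 20 VCUs (200 hardware encoder engines), which is exactly the decoupling of encode capacity from server count that the design is after.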
Increasing Efficiency Leads to Migration from Xeon
Google says that its VCU-based machines have seen up to 7x (H.264) and up to 33x (VP9) improvements in performance/TCO compute efficiency compared to Intel Skylake-powered server systems. This improvement accounts for the cost of the VCUs (vs. Intel’s CPUs) and three years of operational expenses, which makes VCUs an easy choice for video behemoth YouTube.
Offline Two-Pass Single Output (SOT) Throughput in CPU-, GPU-, and VCU-Equipped Systems

| System | Throughput, H.264 (MPix/s) | Throughput, VP9 (MPix/s) | Performance/TCO, H.264 | Performance/TCO, VP9 |
|---|---|---|---|---|
| 2-way Skylake | 714 | 154 | 1x | 1x |
| 4x Nvidia T4 | 2,484 | – | 1.5x | – |
| 8x Google Argos VCUs | 5,973 | 6,122 | 4.4x | 20.8x |
| 20x Google Argos VCUs | 14,932 | 15,306 | 7x | 33.3x |
From performance numbers shared by Google, it is evident that a single Argos VCU is barely faster than a 2-way Intel Skylake server in H.264. However, since 20 VCUs can be installed into such a server, VCU wins from an efficiency perspective. But when it comes to the more demanding VP9 codec, Google’s VCU appears to be five times faster than Intel’s dual-socket Xeon and therefore offers impressive efficiency advantages.
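The per-chip claims above can be checked directly against the table; dividing by eight (the smallest VCU configuration listed) and the resulting ratios are my own arithmetic, not Google's:

```python
# Throughput in MPix/s, taken from the table above.
skylake_2way = {"h264": 714, "vp9": 154}
argos_8x = {"h264": 5973, "vp9": 6122}

# Per-chip throughput for a single Argos VCU.
per_vcu = {codec: mpix / 8 for codec, mpix in argos_8x.items()}

# How a lone VCU compares to an entire 2-way Skylake server.
ratio = {codec: per_vcu[codec] / skylake_2way[codec] for codec in per_vcu}

print(per_vcu["h264"], round(ratio["h264"], 2))  # ~747 MPix/s, ~1.05x
print(per_vcu["vp9"], round(ratio["vp9"], 2))    # ~765 MPix/s, ~4.97x
```

This bears out both observations: in H.264 a single VCU only edges out the dual-socket server, while in VP9 it is roughly five times faster.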
Since Google has been using its Argos VCUs for several years now, it has clearly replaced many of its Xeon-based YouTube servers with machines running its own silicon. It is extremely hard to estimate how many Xeon systems Google actually replaced, but some analysts believe the technology giant could have swapped anywhere from four million to 33 million Intel CPUs for its own VCUs. Even if the higher number is an overestimate, we are still talking about millions of units.
Since Google needs loads of processors for its other services, it is likely that the number of CPUs that the company buys from AMD or Intel is still very high and is not going to decrease any time soon as it will be years before Google’s own datacenter-grade system-on-chips (SoCs) will be ready.
It is also noteworthy that in an attempt to use innovative encoding technologies (e.g., AV1) right now, Google needs to use general-purpose CPUs even for YouTube as the Argos does not support the codec. Furthermore, as more efficient codecs emerge (and these tend to be more demanding in terms of compute horsepower), Google will have to continue to use CPUs for initial deployments. Ironically, the advantage of dedicated hardware will only grow in the future.
Google is already working on its second-gen VCU, which supports AV1, H.264, and VP9 codecs, as it needs to further increase the efficiency of its encoding technologies. It is unclear when the new VCUs will be deployed, but it is clear that the company wants to use its own SoCs instead of general-purpose processors where possible.
Intel Isn’t Standing Still
Intel isn’t standing still, though. The company’s DG1 Xe-LP-based quad-chip SG1 server card can decode up to 28 4Kp60 streams as well as transcode up to 12 simultaneous streams. Essentially, Intel’s SG1 does exactly what Google’s Argos VCU does: scale video decoding and transcoding performance separately from the server count and thus reduce the number of general-purpose processors required in a data center used for video applications.
With its upcoming single-tile Xe-HP GPU, Intel will offer transcoding of 10 high-quality 4Kp60 streams simultaneously. Keeping in mind that some Xe-HP GPUs will scale to four tiles, and that more than one GPU can be installed per system, Intel’s market-leading media decoding and encoding capabilities will only become more solid.
Summary
Google has managed to build a remarkable H.264 and VP9-supporting video (trans)coding unit (VCU) that can offer significantly higher efficiency in video encoding/transcoding workloads than Intel’s existing CPUs. Furthermore, VCUs enable Google to scale its video encoding/transcoding performance independently from the number of servers.
Yet, Intel already has its Xe-LP GPUs and SG1 cards that offer some serious video decoding and encoding capabilities, too, so Intel will still be successful in datacenters with heavy video streaming workloads. Furthermore, with the emergence of Intel’s Xe-HP GPUs, the company promises to solidify its position in this market.
Founded in 2019 by several former Zowie employees, VAXEE is a peripherals company and shop platform. The NP-01S marks VAXEE’s second cooperation with Junya “noppo” Taniguchi and his brand ZYGEN. Whereas the ZYGEN lettering features prominently on the NP-01S (this time on the side, similarly to the Outset AX), the line “powered by VAXEE” is hidden on the back of the mouse, underlining that the NP-01S is the result of a joint creative process. For the sake of simplicity, I’ll nonetheless refer exclusively to VAXEE throughout the review.
The NP-01S retains the improvements first introduced with the Outset AX: Thicker feet, a refined scroll wheel, and a revised matte coating, which is more matte than the NP-01 coating. The biggest difference thus lies in the shape: Much like the NP-01, the NP-01S still blends ambidextrous and right-handed ergonomic design elements, with the top being reminiscent of the former, whereas the bottom is more akin to the latter. However, the NP-01S has a more centered, lower-sitting hump, which results in the back interfering less with one’s palm. Additionally, the NP-01S is narrower, and the left-side back flare is less pronounced. Other than that, the NP-01S isn’t much different: PixArt’s PMW3389 sensor, Huano switches for the main buttons, a paracord-like, braided cable, and no RGB or software. Variants in matte black and glossy white are available exclusively through VAXEE’s own shop.
Apple is distributing an AirTags software update that makes it harder for people to use them for surreptitious tracking, CNET reports. According to a statement sent to The Verge, Apple is also working on an app for Android users which will let them detect potentially unwanted trackers.
AirTags were released earlier this year as Apple’s solution for helping people keep track of everyday objects. In its marketing, Apple shows people finding their lost keys or bags that have the trackers attached. However, there have been concerns the tiny devices are being used to secretly track people without their knowledge, and we’ve seen numerous reports since the AirTags’ release indicating that some tweaks were needed to make the devices more privacy-conscious for everyone.
Before the update, if AirTags were away from their owner, they would chime after three days if they detected that they were moving. If you had a fully up-to-date iPhone, you might get an “AirTag Found Moving With You” notification before then — but un-updated iPhone users and Android users were out of luck until that chime. Apple is now updating the AirTags to chime at some point between eight and 24 hours of separation, significantly reducing how long an AirTag can travel before telling on itself. (Apple didn’t respond to a request for comment on the range of times it provided.)
Apple says the Android app will be coming “later this year” and will be able to detect both AirTags and other Find My network accessories, such as the Chipolo tracker.
Eight to 24 hours is still a long time to be tracked without knowing, but it’s good to see that Apple is at least starting to address some of people’s concerns. The update is out today and should happen automatically when an AirTag comes into proximity with an iPhone, similar to how AirPods silently receive updates, according to CNET.
Nanoleaf, the company best known for illuminated panels that you stick to your wall, has announced the new Nanoleaf Elements line, a set of wall panels that are designed to look as nice turned off as they do when on. The new Elements line accomplishes this through a wood-like veneer that allows the light to shine through when it’s on but doesn’t look blank when the light isn’t in use.
Aside from the new veneer, the Elements lights are very similar to Nanoleaf’s other hexagonal light panels. They glow from both the front and back of the panel, allowing for a double lighting effect, and can be programmed with different lighting patterns and effects. The Elements do not have a full color spectrum LED, but they are adjustable from warm to cool white. Nanoleaf includes 11 preset lighting effects, or you can use the app to program your own. The touch-sensitive panels can also be synced to music or adjust their white temperature automatically throughout the day based on circadian rhythms.
The base $299.99 Elements Smarter kit includes seven panels that can be arranged in whatever pattern you’d like. Add-on packs of three more light panels are available for $99.99 each.
With this new design, it appears that Nanoleaf is attempting to make its product more accessible in more homes, as the wood veneer and lack of full RGB lighting effects have less of a nightclub vibe than previous products. But the basic premise remains the same: unlike standard lighting that is mostly meant to provide illumination, the Elements are purely decorative and meant to change the mood of a room, not brighten it.
The new Nanoleaf Elements products are available through the company’s website starting today, June 3rd, and will be at Best Buy stores later this month.
In addition, Nanoleaf has announced that its light panels will become Thread border routers, allowing them to act as a hub on a Thread network and extend the signal throughout the home. The company already supports Thread in its Essentials line of smart lights and light strips and it says it will be possible to add Thread devices from other manufacturers to the Nanoleaf border routers. Thread is the main technology in the forthcoming Matter smart home standard, which aims to unite smart home devices and platforms and allow for more interoperability. Nanoleaf says the Thread compatibility will be added through a software update this month.
(Pocket-lint) – Google has announced a new version of the Pixel Buds, its true wireless headphones that originally launched in 2017 – the first-gen weren’t all that, though, while the second-gen Buds 2 stepped things up a little in 2019.
The third model belongs to the A-Series, picking up on the A series that we’ve seen in Google’s phones, presenting an affordable choice of true wireless headset.
What’s different to the previous Pixel Buds?
To look at, there isn’t a huge difference between the A-Series and Buds 2: both have the same overall styling and come in a case that’s smooth, much like a pebble.
Both have the same earbud design with a little promontory at the top to help keep them secure, and a round touch-control area on the outside.
The Pixel Buds 2 have wireless charging, however, and the inside of the case and the inner part of the ‘buds have a matte finish to the plastics, while the A-Series is glossy. That means the older version looks slightly higher quality.
The A-Series also lacks the option to change the volume via gestures – instead you have to use voice for that – and there are a few minor feature differences. Otherwise, the experience is much the same – but the A-Series is much cheaper.
Design & Build
Earbud: 20.7 x 29.3 x 17.5mm; 5.06g
Colours: Dark Olive / Clearly White
Case: 63 x 47 x 25mm; 52.9g
IPX4 water-resistant
Three ear tip sizes
The Buds A-Series’ case, for all intents and purposes, is the same as that of the Buds 2: it’s the same size, has the same feel, and that same satisfying action when you open and close the lid. Both have a USB-C charging port and a manual connection button on the rear, but the A-Series is slightly lighter.
There’s a satisfying magnetic action when you drop the ‘buds into the case to charge, and don’t worry about mixing these up if you happen to have the older version too – the A-Series has two charging contacts inside, while the Pixel Buds 2 have three.
There are two colours to the A-Series – Clearly White or Dark Olive – and opening the lid reveals the colour you’re looking at, as it’s the smooth, round, touch-sensitive end of these Buds, carrying the ‘G’ logo, that makes them really distinctive.
The A-Series ‘buds have the same design as the previous model, with the body of the earbud designed to sit in the concha of the ear, while sealing into the canal with a choice of three different ear tips. These are round – Google seemingly hasn’t been tempted to move to oval as seen on some rivals.
There’s an additional rubber arm that sticks out the top of the buds, designed to slot into one of the folds at the top of your ear to help keep things secure. We weren’t a fan of it on the previous version and we have the same reservations here: you can’t remove it from the ‘buds and we’re not convinced it’s necessary. For us, the Buds A-Series sit securely in the ear anyway – even when exercising.
Indeed, if we rotate the earbuds to get that blobby rubber arm to engage with our ears, the sound from the headphones gets worse because they then don’t sit in the best position for our ears. That’s one thing to consider: all ears are different, so this might work for some people and not for others.
The great thing about these earbuds’ design is that they don’t hang out of your ear, so you don’t need to worry about pulling a hat over the top or anything else – we think they look a lot better than the ear-dribble style of Apple’s AirPods and all those who copy them. We find the Google design more comfortable for wearing over long periods, too.
Connection, setup and control
Native Pixel support
Pixel Buds app
Touch controls
Google Fast Pair means you just have to lift the lid of the case and your nearby Android phone will detect the Pixel Buds A-Series and allow you to connect with one tap. It’s essentially the same as Apple’s system with the AirPods and iPhone, linking the Buds to the Google account you register them with so they are then available on other devices too.
If you’re using a Pixel phone then you’ll have native support for the Buds; if using another brand of Android device you’ll be prompted to download the Pixel Buds app, which will provide access to firmware updates and details on how to use all the features, as well as some options.
As far as setup is concerned, that’s all there is to it: you’ll be asked to walk through things like Google Assistant, and you’ll be prompted to allow notifications access, so you can unlock the potential of the Pixel Buds.
The touch-controls are fairly easy to master, too, with both left and right sides offering the same function: single-tap to play/pause; double-tap to skip forward; triple-tap to skip backwards; press-and-hold to get a notifications update.
The last of those is interesting, because you’ll get a report of the time and then you’ll be told about your notifications – with the option to reply, needing a press-and-hold to speak your reply, before it’s confirmed and then sent.
Missing from this selection of touch-controls is volume: unlike the Pixel Buds 2, you can’t swipe to change the volume, you have to ask Google Assistant to do it or you have to thumb the volume controller on your device instead.
This, we feel, is the biggest flaw of these headphones: volume control is pretty important when you’re listening to something, so having to ask Google using voice just isn’t appropriate in all situations.
Google Assistant and smart features
Google Assistant integration
Adaptive Sound
With a lack of volume control, Google pushes its Adaptive Sound option as a solution. This is designed to adapt the volume to the ambient sound levels. As the external noise goes up, so does the volume of the headphones. That’s fine in principle and works when you move from one area of consistent background noise to another – from a quiet library to a server room with whirring fans, for example – but it’s hopeless when you have varying noise levels.
Just walk along a busy street with Adaptive Sound on and you’ll find the volume of the headphones yo-yoing, because it’s not constant noise, it depends on what’s driving past at that moment. This could be corrected by a software update with Google reducing the frequency of volume changes. If you manually adjust the volume then it suspends the system for a bit and leaves the control to you, but in reality, it’s just too irritating to use in many situations and you might as well turn your phone volume up instead.
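The behaviour described above – volume tracking ambient noise, yo-yoing when that noise varies, and a temporary hand-off to the user after a manual adjustment – can be sketched as a simple control loop. The Python below is purely our own illustration: the class, thresholds, and the crude ambient-to-volume mapping are all assumptions, not Google's actual implementation.

```python
class AdaptiveVolume:
    """Toy ambient-tracking volume control with rate limiting and a
    manual-override hold. All names, units, and thresholds here are
    illustrative assumptions, not Google's implementation."""

    def __init__(self, volume=50, step=5, interval_s=10.0, manual_hold_s=60.0):
        self.volume = volume                # current output volume, 0-100
        self.step = step                    # max change per automatic adjustment
        self.interval_s = interval_s        # minimum seconds between auto changes
        self.manual_hold_s = manual_hold_s  # auto control pauses after manual input
        self._last_auto = float("-inf")
        self._last_manual = float("-inf")

    def set_manual(self, volume, now):
        """User turned the volume themselves: apply it and pause auto control."""
        self.volume = max(0, min(100, volume))
        self._last_manual = now

    def on_ambient_sample(self, ambient_level, now):
        """Feed in an ambient noise reading (crudely mapped onto 0-100)."""
        if now - self._last_manual < self.manual_hold_s:
            return self.volume  # leave control with the user for a while
        if now - self._last_auto < self.interval_s:
            return self.volume  # rate limit: stops the volume yo-yoing
        target = max(0, min(100, int(ambient_level)))
        delta = max(-self.step, min(self.step, target - self.volume))
        if delta:
            self.volume += delta
            self._last_auto = now
        return self.volume
```

Lengthening the interval between automatic changes, as the rate limit here does, is exactly the kind of software tweak that could stop the volume bouncing on a busy street.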
As we’ve said, Google Assistant is fully integrated into the headphones, so you can ask Google anything that you might on your phone or Nest Hub at home. For fans of the system, that’s a great addition, because you don’t need to fish your phone out of your pocket first. Sure, there are lots of headphones out there that offer Google Assistant, but naturally, Google puts Google first and the experience is nice and smooth.
It’s also a two-way experience, with Google Assistant notifying you of incoming messages and it’s able to read them out to you too – with the option to speak a reply. You can disable messages from any apps you don’t want in the Pixel Buds app, to maintain privacy (or, indeed, a barrage of non-stop voiced messaging). You can also trigger message sending through voice – and you’ll get to confirm the message that’s being sent.
Thanks to Voice Match, it will only respond to your voice – and that also means you can access things like your calendar and so on. It’s plain sailing all round.
Sound quality and performance
Buds: 5 hours battery life
Case: 19 hours extra
Spatial Vents
Bass Boost
When it comes to the performance, Google is taking a bit of a gamble. Rather than pursuing isolation from the outside world, it wants to provide an experience that lets some of the ambient sound in, so you don’t feel cut off.
Google uses what it calls Spatial Vents, claiming that the headphones provide a gentle seal rather than trying to block everything out. We’re not huge fans of this approach, and the rise in headphones offering active noise cancellation (ANC) suggests that, generally speaking, isolation is what people are buying.
Needless to say, there’s no ANC here and you’ll be able to hear what’s happening around you a lot of the time. At home that’s perhaps useful – you can hear the doorbell or the dog bark – but out on public transport, you’ll hear every announcement, door crash, clatter of the wheels on the tracks, and that’s not something we want. This is exactly the same experience as the previous Pixel Buds and whether that suits you will depend very much on where you wear your headphones. If that’s a busy place, the A-Series might not be the best for you.
Aside from that, in quiet conditions, the sound quality is actually very good. The Pixel Buds A-Series benefits from the Bass Boost option that Google added as a software update to the previous Buds in late 2020, so they offer better performance for tracks which want a driving bassline. In quiet conditions at home we have no complaints: the Pixel Buds A-Series is a great pair of headphones, especially at the asking price and given the smart options they offer.
When it comes to calling, there are two beam-forming mics on each ‘bud, but they still let noise through to the caller. It is reduced, but they’ll hear every car that drives past as a hiss. If you’re after a better calling experience, the Samsung Galaxy Buds Pro provide a far superior veil of silence when making calls.
The Pixel Buds A-Series provides battery life of 5 hours, which we’ve found to be accurate – although we found the left ‘bud drains slightly faster than the right one. The case supplies an extra 19 hours, for 24 hours in total, recharging the buds when they are back in it, and charges itself via USB-C. This isn’t the longest battery life on the market, but it matches the Apple AirPods.
Verdict
The Pixel Buds A-Series have a lot to offer considering the price: Google Assistant integration, comfortable design, a lovely case, plus great audio performance when in quieter conditions.
The biggest downsides are the lack of on-bud volume controls and the design decision to not strive for isolation from external noise. The Adaptive Sound – which auto-adjusts volume – is a good idea in principle to compensate for this, but it sees the headphones’ volume yo-yo unnaturally.
Compared to the older Pixel Buds 2, we’d pick the Pixel Buds A-Series every time: they do the important things just as well but the price is much more approachable, meaning you can forgive the omissions given the context of price.
Also consider
Samsung Galaxy Buds Pro
Samsung’s Galaxy Buds Pro offer great noise-cancelling – which is especially effective when making calls – while also offering a great set of features.
Read the full review
Jabra Elite Active 75t
These headphones are a little more bulky, but they offer noise-cancellation that will almost entirely eliminate external noise. If you want silence, Jabra delivers it.
Valve has released the results for the May 2021 installment of its monthly Steam Hardware & Software Survey. The biggest takeaway is that AMD took one step forward by claiming 30% of the CPU market, at least as represented by Steam users, but it also took one step back by ceding more of the GPU market to Nvidia.
The survey results showed that AMD’s share of the CPU market rose from 29.48% in April to 30.13% in May. That follows the same pattern as previous months: the popularity of AMD processors among Steam users has risen by roughly half a percentage point a month throughout 2021. Now those incremental gains are starting to add up.
It wouldn’t be a surprise if AMD continued its rise up the Steam survey’s results in the future. The company actually has more entries on our list of the best CPUs for 2021 than Intel, and several of them are more affordable than their Intel counterparts, which could help convince more gamers to give AMD CPUs a shot.
AMD has failed to make similar headway in the graphics market, however, and the survey results showed that its share fell to 16.18% in May. That isn’t a drastic drop—AMD graphics cards have powered roughly 16% of survey respondents’ systems since late 2019—but it does highlight the company’s struggle in that segment.
This isn’t for lack of trying. AMD’s latest Navi offerings are still found in some of the best graphics cards for gaming; at least part of the problem is that they are in short supply. The company said in April that it would increase GPU supply, but that’s going to take some time, so that promise probably didn’t affect these survey results much.
Of course, some of these results do need to be taken with a grain of salt. The Steam hardware survey may not be taken by every user and can’t be completely representative. Still, it provides an idea of trends.
Not that many Steam users are quick to adopt the latest hardware. The most popular GPU on the platform is the GeForce GTX 1060, according to the survey, and 1% of Steam users are somehow continuing to get by with CPUs featuring clock speeds lower than 1.4 GHz. It takes a while for the survey results to change much.
NZXT’s N7 Z590 is a full-featured Z590 motherboard that includes two M.2 sockets, Intel-based Wi-Fi 6E, capable power delivery, premium audio, and more. It’s a well-rounded mid-ranger for Intel’s Z590 platform, though for a similar price, there are other options with more and better parts.
For
+ Wi-Fi 6E/2.5 GbE Networking
+ Improved and capable power delivery
Against
– Last-gen audio codec
– Only two M.2 sockets and four SATA ports
Features and Specifications
NZXT has joined the Intel Rocket Lake motherboard party with the N7 Z590, bringing PCIe 4.0 capability (with a Rocket Lake CPU), capable power delivery, Intel Wi-Fi 6E, and a unique design aesthetic that easily matches most themes. With an MSRP of $279.99, the N7 positions itself as a strong competitor in the mid-range Z590 space.
NZXT worked with ASRock to bring you this board which appears to be based on the ASRock Z590 Steel Legend. We reviewed that board a couple of months back, and overall liked what ASRock offered at the price point. The N7 Z590, like previous NZXT motherboards, employs unique-looking shrouds/heatsinks that cover most of the board, giving it that signature NZXT appearance.
Performance-wise, the N7 Z590 did well overall. Its results traded punches with the other boards in most tests. Like the Steel Legend it’s modeled from, this board follows Intel specifications, and in a couple of tests (Handbrake, Cinebench/POV-Ray single thread), the times/scores were lower than the others. To bypass that, simply adjust the power limits up, as the other boards do from the factory. The N7 board set our DDR4 3600 sticks at 1:1 with the memory controller, and we saw solid results in our memory bandwidth and latency tests. Overclocking was a breeze too, as we set our CPU to 5.1 GHz while running the memory at DDR4 4000 (with a few tweaks for stability).
Although you can’t tell the difference by looking at the board, the Z590 version of NZXT’s board strives to improve upon the last generation and does so with aplomb. The N7 Z590 includes a PCIe 4.0 M.2 socket (two sockets in total; the other is PCIe 3.0) and a PCIe 4.0 x16 slot, additional USB ports including a USB 3.2 Gen2x2 Type-C on the rear panel, improved power delivery, and more. We’ll cover these features in detail below. But first, here are the full specs from NZXT.
Inside the box included with the motherboard are a few accessories. Our review board didn’t even have a driver disk (perhaps because this is a pre-launch sample), but what’s inside should allow you to get going without an extra trip to the store. For drivers, if a disk isn’t included in the retail packaging, you can get them from the NZXT website. Below is a complete list of the included parts.
Motherboard Guide
Wi-Fi Antenna
(4) SATA cables
(2) Screw package for M.2 sockets
After taking the board out of the box, we get a chance to see the nearly completely covered 6-layer matte-black PCB. Just about the only design cues are the punched-out circles above the chipset and left VRM bank heatsinks. Additionally, some NZXT branding resides on the top M.2 socket cover and over the rear IO area.
If you’re looking for integrated RGB lighting, you’ll have to look somewhere else, as the N7 Z590 doesn’t include any. That said, the board has two RGB headers, a 3-pin ARGB and 4-pin RGB, along with two NZXT RGB LED connectors. The NZXT CAM software handles all RGB lighting control. Overall, I like how this board looks. The mostly covered PCB gives the board a premium look and feel that matches most any build theme. If a stealthy all-black motherboard isn’t your thing, the N7 Z590 also comes in white.
Looking at the top half of the board, there isn’t much to see outside of shrouds. On the left, the IO cover reaches out over the rear IO chips and touches the left VRM bank. The heatsink doesn’t have a lot of surface area, but does an excellent job keeping the power bits cool, even while overclocked, as we’ll see later.
Above the heatsink on the top edge, we find the 8-pin EPS (required) and a 4-pin EPS connector (optional) to feed power to the CPU. To the right are two (of seven) 4-pin fan headers. Each fan/pump header on the board is capable of 2A/24W output, as well as auto-detecting what type of fan is connected (PWM or DC). Continuing right are two 4-pin NZXT RGB headers and two more fan headers.
To the right of the socket area are four unreinforced DRAM slots that support up to 128GB of RAM. NZXT lists supported speeds up to DDR4 4600(OC). This is on the lower side compared to other boards, but most users will run RAM well under that speed. Our DDR4 4000 kit worked with a minor bump (+0.10 V) to the VccIO Memory voltage, so we know it’s good to at least that point. Much beyond that and you pay a steep premium for little in the way of gains anyway.
On the right edge of the board, the only thing visible from the top is the 24-pin ATX for board power and a USB 3.2 Gen 2 Type-C front-panel port.
NZXT lists the N7 Z590 as a 12-phase Dr.MOS VRM, which breaks down to a 12+2 configuration for the Vcore and SOC. A Richtek RT3609BE (X+Y=8) 8-channel controller handles the CPU while a Renesas RAA229001 controls the SOC. The eight-channel controller feeds 12 Vishay Sic654 50A MOSFETs for CPU Vcore in a teamed/parallel configuration. In other words, NZXT does not use phase doublers on this board. This configuration is plenty for both 10th and 11th generation CPUs on this platform.
Focusing in on the bottom half of the board, we’re again greeted primarily by shrouds. Hidden underneath on the left-hand side is a Realtek ALC1220 codec, along with five audio caps and a Texas Instruments NE5532 OpAmp. While the ALC1220 is a solid audio solution that most users are happy with (it was the flagship codec of the Z490 generation), I would like to have seen the latest ALC4080/4082 here instead.
In the middle are a few PCIe slots and two M.2 sockets. Starting with PCIe, there are two full-length slots and three x1 size slots. The top slot is PCIe 4.0-capable with a Rocket Lake-based CPU, while the second full-length slot runs PCIe 3.0 x4 max and is fed from the chipset. The documentation doesn’t mention multi-GPU support, but by lane count, it should be able to run 2-Way CrossfireX. The three x1 slots are PCIe 3.0 and fed from the chipset as well. I like the x1 slot placement, as you can easily insert an add-in card (AIC) without it covering a full-length slot.
The top M.2 socket is located to the right of the top x1 slot. When using a Rocket Lake-based CPU, this socket runs at PCIe 4.0 x4 speeds and supports up to 80mm modules. The bottom M.2 socket runs at PCIe 3.0 and accepts SATA-based modules up to 80mm. The manual does not mention RAID functionality. It’s worth mentioning that although the M.2 covers are metal, they do not contact the M.2 modules to help with cooling. If you have hot-running M.2 modules, you may want to keep an eye on temperatures.
As we move further to the right, we pass over the chipset heatsink on the way to the right edge. Here, hidden under the shroud with horizontally oriented connectors, we spy a USB 3.2 Gen 1 header and the four SATA ports. Although most users would be happy with four SATA ports and two M.2 sockets, this is less than most other boards at this price point. Many have three M.2 sockets and six SATA ports, all of which can be active (though not always) in specific configurations.
Across the board’s bottom are several headers, including more USB ports, fan headers and more. Below is the complete list, from left to right:
Front-panel audio
UART header
RGB and ARGB headers
(3) USB 2.0 headers
(3) System Fan headers
Q-Code LEDs
Clear CMOS jumper
USB 3.2 Gen 1 connector
Power/Reset buttons
Front panel header
Flipping the board around to the rear IO area, we see the black pre-installed IO plate, which matches the colors and design of the rest of the board, along with the NZXT branding in white. In total, there are 10 USB ports: You get four USB 3.2 Gen 2 ports (3x Type-A, 2x Type-C), four USB 3.2 Gen 1 ports, and two USB 2.0 ports. A single HDMI port handles video output when using the integrated graphics on the CPU. You’ll also find the Wi-Fi antenna headers, a Clear CMOS button, the Realtek 2.5 GbE and the 5-plug plus SPDIF audio stack.
NortonLifeLock announced yesterday that it’s adding Ethereum mining to its Norton 360 antivirus software with an upcoming feature, Norton Crypto, that “select Norton 360 customers in Norton’s early adopter program” are invited to test.
Let’s make this clear from the start: Enthusiasts will probably be better off learning how to mine Ethereum themselves instead of relying on Norton Crypto. The feature is likely intended for the kind of person who’s never heard of a hash rate, shopped for the best mining GPU, or wondered how to optimize their GPU for Ethereum mining. It’s worth considering how Norton Crypto is presented to those people.
“For years, many coin miners have had to take risks in their quest for cryptocurrency, disabling their security in order to run coin mining and allowing unvetted code on their machines that could be skimming from their earnings or even planting ransomware,” the company said in a press release. “Earnings are commonly stored directly on miners’ hard drives, where their digital wallet could be lost should it fail.”
Naturally, Norton positions Norton Crypto—and the accompanying Norton Crypto Wallet—as the solution to those concerns. The former is a mining tool built into antivirus software people already trust; the latter is a cloud-based solution to which people can transfer their earnings ”so it cannot be lost due to hard drive failure.” It’s not hard to see how the feature could appeal to the (barely) crypto curious.
Unfortunately we don’t have many other details about the feature. NortonLifeLock said Norton Crypto “is expected to become available to all Norton 360 customers in the coming weeks.” But at the time of writing, the company hasn’t updated its website to provide more information about how it’s monetizing the feature, how it’s securing the cloud-based wallet, or how the underlying mining process operates.
Those are important questions to answer. It’s already hard to mine Ethereum at a profit even on dedicated hardware; doing so on a basic system through a feature built into antivirus software would probably be even more difficult. Clearly explaining the increased energy costs, potential impact on the system’s performance, and the cryptocurrency market volatility probably wouldn’t be trivial either.
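To see why profitability on a basic system is doubtful, a rough back-of-envelope calculation helps: revenue scales with hashrate, while electricity is a fixed daily cost. The figures and function below are placeholders of our own, not real market data or anything Norton has published.

```python
def daily_mining_profit(hashrate_mhs, usd_per_mhs_per_day,
                        power_watts, electricity_usd_per_kwh):
    """Rough daily profit: hashrate revenue minus electricity cost.
    Ignores pool fees, difficulty changes, and price volatility."""
    revenue = hashrate_mhs * usd_per_mhs_per_day
    energy_cost = (power_watts / 1000.0) * 24 * electricity_usd_per_kwh
    return revenue - energy_cost

# Hypothetical laptop-class figures (placeholders, not market data):
# 20 MH/s earning $0.05 per MH/s per day, drawing 100 W at $0.15/kWh.
profit = daily_mining_profit(20, 0.05, 100, 0.15)
```

Even under these generous assumptions the margin is well under a dollar a day, and a less efficient system (say, the same hashrate at 400 W) goes negative, which is why explaining energy costs clearly matters.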
NortonLifeLock might also have to re-teach the concept of cryptocurrency to some of its customers. Right now, there are 10 results on the Norton website for “crypto.” Three are basic explanations of cybersecurity, cryptocurrency, and ransomware; three are about specific vulnerabilities, a malvertising campaign, or Cryptolocker; three are basic support articles; one is just a link to the Emerging Threats page.
Most of those results (and the pages they lead to) don’t exactly portray cryptocurrency in a positive light. That’s fair—cryptojacking is a serious problem. But it’s not hard to imagine the cognitive dissonance when the antivirus app that’s been warning people about malicious crypto mining for years suddenly offers to help them mine crypto.
We should know more about how people will react to Norton Crypto when the feature reaches all Norton 360 customers in the coming weeks, assuming everything stays on schedule, and as the first testers gain access to it.
Microsoft has been teasing a “next generation” of Windows for months now, but new hints suggest the company isn’t just preparing an update to its existing Windows 10 software, but a new, numbered version of the operating system: Windows 11.
The software giant announced a new Windows event for June 24th yesterday, promising to show “what’s next for Windows.” The event invite included an image of what looks like a new Windows logo, with light shining through the window in only two vertical bars, creating an outline that looks very much like the number 11. Microsoft followed up with an animated version of this image, making it clear the company intentionally ignored the horizontal bars.
Microsoft’s Windows event also starts at 11AM ET, not the usual start time for typical Windows and Surface events. Following the event invite, Microsoft exec Yusuf Mehdi said he hasn’t “been this excited for a new version of Windows since Windows 95!” It’s the first time we’ve heard Microsoft specifically mention a “new version” of Windows is on the way.
The event invite also comes just a week after Microsoft CEO Satya Nadella teased a “next generation of Windows” announcement. Nadella promised that Microsoft would soon share “one of the most significant updates to Windows of the past decade.” Microsoft’s chief product officer, Panos Panay, also teased a “next generation” of Windows earlier this year.
If Microsoft is truly readying to move beyond Windows 10 and towards Windows 11, we’re expecting to see big visual changes to reflect that. Microsoft has been working on something codenamed Sun Valley, which the company has referred to as a “sweeping visual rejuvenation of Windows.”
A lot of these visual changes will come from the work Microsoft completed on Windows 10X, a lightweight version of Windows intended to rival Chrome OS, before it was scrapped. That includes a new Start menu, new system icons, File Explorer improvements, and the end of Windows 95-era icons that drag Windows users back to the past in dialog boxes. Rounded corners and updates to the built-in Windows apps are also planned.
Significant changes are also on the way for Windows beyond the user interface. Microsoft appears to be ready to address a lot of lingering problems, with fixes planned for a rearranging apps issue on multiple monitors, an upcoming Xbox Auto HDR feature, and improvements to Bluetooth audio support.
Perhaps the biggest lingering issue waiting to be fixed is the Windows store. Microsoft has been working on a new app store for Windows in recent months, and rumors suggest it will be a significant departure from what exists today. Nadella has promised to “unlock greater economic opportunity for developers and creators” with Windows, and the Windows store seems like the obvious way to do that.
Microsoft is reportedly overhauling its Windows app store to allow developers to submit any Windows application, including browsers like Chrome or Firefox. This would significantly improve the store alone, but Microsoft might also be considering allowing third-party commerce platforms in apps. That would mean Microsoft wouldn’t take a cut from developers who use their own in-app purchase systems.
So far, Microsoft has only announced a cut to 12 percent commission for PC games in the Windows store, but allowing developers to bypass Microsoft’s cut would be a significant change.
Moving to Windows 11 branding would also back up Microsoft’s reinvestment in Windows. The software maker signaled a renewed interest in Windows last year, during a pandemic that has demonstrated how important the operating system is. Windows usage jumped as workers and students across the world turned to laptops and PCs to work from home. PC shipments have also surged over the past year.
After slicing Windows into two parts back in 2018, Microsoft moved parts of Windows development back under Panos Panay’s control last year. The move was a clear admission that Microsoft’s Windows split didn’t work, after months of messy development experiences for Windows 10, delayed Windows updates, a lack of major new features, and lots of Windows update issues.
Moving to Windows 11 would still be a surprise move for Microsoft, though. The company previously referred to Windows 10 as “the last version of Windows” in its big push to position the OS as a service that’s continually updated. While there are monthly updates to Windows, the more significant changes are typically delivered twice a year.
Microsoft has struggled with naming these updates, though. We’ve seen the Windows 10 Anniversary Update, Fall Creators Update, and simple dates like the November 2019 Update. Microsoft has also adopted yet another naming scheme recently, referring to updates as 20H1 or 21H1 to signify both the release year and part of the year the update launched.
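The 20H1/21H1 scheme is at least machine-friendly: two digits for the release year and a half-of-year marker. A tiny helper sketches how that encoding unpacks; the function name and API are our own illustration, not anything Microsoft ships.

```python
import re

def parse_update_tag(tag):
    """Split a Microsoft-style YYHN tag (e.g. '21H1') into
    (release year, half of the year). Purely illustrative,
    not an official Microsoft API."""
    m = re.fullmatch(r"(\d{2})H([12])", tag)
    if not m:
        raise ValueError(f"not a YYHN update tag: {tag!r}")
    return 2000 + int(m.group(1)), int(m.group(2))
```

So "21H1" decodes to the first-half-of-2021 update, which is more legible to tooling than "Fall Creators Update" ever was.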
A move to Windows 11 wouldn’t necessarily clear up Microsoft’s update naming issues, but if the company also adopted point releases like Windows 11.1, that would certainly help both consumers and IT admins quickly understand which version is the latest.
OEMs would also be happy to see a Windows 11 release. A new version of Windows always drives new hardware sales and renewed interest in the operating system. If Microsoft backs that up with a new UI and a fresh look and feel for Windows, it will be the typical playbook we’ve seen for Windows for decades.
It’s not long until we find out whether Microsoft is ready to dial the version number of Windows up to 11. The Windows elevent (as I’m now calling it) will start at 11AM ET on June 24th, and The Verge will be covering all the news live as it happens.
Samsung has introduced its first Zoned Namespaces (ZNS) solid-state drives, which combine high performance, long endurance, a relatively low price, and improved quality of service (QoS) for datacenters. To use Samsung’s new PM1731a ZNS SSDs, datacenters will have to deploy new storage systems and software.
ZNS SSDs work differently than conventional block-based drives and have a number of advantages. ZNS SSDs write data sequentially into large zones and have better control over write amplification, which reduces over-provisioning requirements by an order of magnitude. For example, some enterprise drives rated for 3 DWPD (drive writes per day) reserve about a third of their raw capacity for over-provisioning, but for ZNS SSDs about 10% is enough. In addition, since ZNS uses large zones instead of many 4KB blocks, garbage collection is not needed as often as on traditional SSDs, which also improves real-world read and write performance.
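The zone model described above can be sketched in a few lines: the host always writes at a zone's write pointer and reclaims space by resetting an entire zone. This toy Python class is our own simplification for illustration; the sizes and method names are assumptions, not the actual NVMe ZNS command set.

```python
class Zone:
    """Minimal model of a ZNS zone: writes land only at the write
    pointer (strictly sequential), and space is reclaimed by resetting
    the whole zone at once. Illustrative only, not the NVMe ZNS spec."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.write_pointer = 0

    def append(self, n_blocks):
        """Sequential write: data always lands at the current write pointer."""
        if self.write_pointer + n_blocks > self.capacity:
            raise ValueError("zone full: the host must open a new zone")
        start = self.write_pointer
        self.write_pointer += n_blocks
        return start  # the host records where its data landed

    def reset(self):
        """Host-managed reclaim: invalidate the whole zone in one step,
        so the drive never garbage-collects scattered 4KB blocks."""
        self.write_pointer = 0
```

Because stale data is invalidated a whole zone at a time, the drive never has to shuffle live 4KB blocks around, which is where the reduced write amplification and over-provisioning come from.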
Samsung’s 2.5-inch PM1731a ZNS SSDs are based on the company’s proprietary dual-port controller as well as 6th-generation V-NAND memory. The drives will be available in 2TB and 4TB capacities.
Samsung says that its new PM1731a ZNS drives will last up to four times longer than conventional NVMe SSDs, which will reduce their total cost of ownership (TCO) and will simplify server infrastructure.
But enhanced endurance and performance come at a cost: the ZNS ecosystem requires new software infrastructure that is not yet widely available. In a bid to make ZNS more widespread, Samsung is participating in a number of open-source ZNS projects. The manufacturer also plans to make its ZNS technology available to xNVMe and to participate in the Storage Performance Development Kit (SPDK) community to enable NVMe and SPDK users to implement ZNS more easily.
Samsung will start mass production of its PM1731a ZNS SSDs in the second half of the year. The company is the second major maker of SSDs, the first being Western Digital, to unveil a ZNS SSD.
“Samsung’s ZNS SSD reflects our commitment to introducing differentiated storage solutions that can substantially enhance the reliability and lifetime of server SSDs,” said Sangyeun Cho, senior vice president of the Memory Software Development Team at Samsung Electronics. “We plan to leverage quad-level cell (QLC) NAND technology in our next-generation ZNS drives to enable higher thresholds for storage performance and capacity in the enterprise systems of tomorrow.”