Matthew Wilson, 1 day ago
Back when the Xbox One launched in 2013, one of Microsoft’s big exclusives, Ryse: Son of Rome, released to middling critical reception. As the years have gone on, though, the game has developed a cult following, and soon those long-time fans may have a sequel to look forward to.
Crytek went through a lengthy period of not releasing big-budget titles after Ryse: Son of Rome, but the studio may be ready to return to larger-scale games. According to Xbox insider “Shpeshal Ed”, Crytek is working on a new Ryse game, which was already in development as of July 2020.
There is reason to believe this, as Crytek was unfortunately hacked in late 2020. One of the leaked documents listed several upcoming projects, with a sequel to Ryse: Son of Rome among them.
Interestingly, this time around the game may release as a multi-platform title, rather than being exclusive to Xbox. That’s all the information we have for now, but hopefully we’ll hear something more official later this year.
KitGuru Says: While it was short, Ryse: Son of Rome was genuinely very good and still looks great even today. Hopefully Crytek continues working on a sequel that can shoot this IP back into the spotlight at some point during this console generation.
It looks like Respawn is cooking up another brand new IP. The studio recently put up some new job listings, revealing plans for a completely new game not connected to Titanfall or Star Wars.
We first caught wind of this via Vince Zampella himself, who tweeted about an “exciting new opportunity” for a “new Respawn project”. Diving into the job listings reveals that the team is planning a “new IP from scratch”, unlike Apex Legends, which was built on the Titanfall universe, and Star Wars Jedi: Fallen Order, which is a licensed game.
The job itself (via PCGamer) is for an Incubation Team Software Engineer, who will “pioneer new ways to enable adventuring until the heat death of the universe” … whatever that means.
Aside from this new IP, we know that Respawn is making a sequel to Jedi: Fallen Order, as well as continuing work on Apex Legends. The studio also recently brought back the Medal of Honor IP with a VR game.
KitGuru Says: What would you like to see from a brand new game from Respawn?
AMD’s Threadripper consumer HEDT processors continue to be praised strongly for their excellent compute performance and connectivity options. But what if you want more than 256GB of memory? What if you want your RAM to run in 8-channel mode? What if you want more than 64 PCIe Gen 4 lanes? Well… that’s where Threadripper Pro comes in.
Watch via our Vimeo channel (below) or over on YouTube at 2160p.
Video Timestamps:
00:00 Start
00:15 Some details/pricing
01:15 Star of the show – Threadripper Pro 3975WX
03:20 The CPU cooler
03:46 Memory setup / weird plastic shrouds with fans
05:27 AMD Radeon Pro W5700 GPU
07:00 Motherboard
08:55 Storage options
09:41 1000W PSU (Platinum) and custom setup
10:32 Luke’s thoughts and I/O panels
11:22 The Chassis
11:40 Cooling and tool less design
12:35 Summary so far
14:02 Performance tests
16:49 System temperatures, power and noise testing
19:05 System under idle conditions – ‘rumbling’ noise we experienced
19:22 Pros and Cons / Closing thoughts
Primary Specifications:
32-core AMD Threadripper Pro 3975WX processor
128GB of 3200MHz ECC DDR4 memory in 8-channel mode
AMD Radeon Pro W5700 graphics card with 8GB GDDR6 VRAM
WD SN730 256GB NVMe SSD
1kW 80Plus Platinum PSU
We are examining the Lenovo ThinkStation P620 workstation that is built around Threadripper Pro and its 8-channel memory support. Lenovo’s website offers a few choices for the base processor, including 12-, 16-, 32-, and 64-core options. Specifically, we are looking at the 32-core Threadripper Pro 3975WX chip, and we are hoping that Lenovo can keep it running at its rated 3.5-4.2GHz speeds beneath that modestly sized CPU cooler.
Partnering this 280W TDP monster with its 128 PCIe Gen 4 lanes is 128GB of 8-channel DDR4 3200MHz ECC memory. While a 128GB installation is merely small-fry for Threadripper Pro, the 3200MHz modules running in 8-channel mode should allow for some excellent results in bandwidth-intensive tasks. Plus, you get a 1600MHz Infinity Fabric link for the Zen 2 cores.
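For a rough sense of what that memory configuration offers, peak theoretical bandwidth is simply channels × transfer rate × bytes per transfer. Below is a minimal back-of-the-envelope sketch using this system’s rated figures; it is a paper ceiling, not a measured result:

```c
#include <stdio.h>

int main(void) {
    const double channels  = 8;       /* 8-channel mode                  */
    const double transfers = 3200e6;  /* DDR4-3200 = 3.2e9 transfers/sec */
    const double bytes     = 8;       /* 64-bit-wide channel = 8 bytes   */

    double gbs = channels * transfers * bytes / 1e9;
    printf("Peak theoretical bandwidth: %.1f GB/s\n", gbs); /* 204.8 GB/s */
    return 0;
}
```

That 204.8GB/s ceiling is double what a quad-channel consumer Threadripper offers at the same DDR4-3200 speed, which is precisely why 8-channel mode matters for bandwidth-intensive workloads.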
I will, however, emphasise my dislike for Lenovo’s decision to deploy a 40mm fan and shroud to cool each DIMM bank. This seems unnecessary for a 128GB installation and merely adds noise and points of failure. Metal heatspreaders on the DIMMs would have been the better choice if enhanced cooling was deemed necessary.
Graphics comes in the form of an 8GB Radeon Pro W5700 blower-style card which we have already reviewed on KitGuru. That makes this an all-AMD system as far as the key components go. Another key benefit is ISV certification for the Lenovo P620. That point will be music to the ears of system buyers in a business environment with users who run software on the guaranteed support list.
Another point that will garner particular attention from prospective buyers is the display output connectivity. On its ‘pro-grade’ card, AMD deploys five Mini-DisplayPort 1.4 connections and one USB-C port. That gives you convenient access to six total display outputs which is super. As highlighted in our review of the Radeon Pro W5700, you can power five 4K monitors or three 5K alternatives, making this an excellent workstation proposition.
Lenovo uses its own WRX80 motherboard to house the sWRX8 Threadripper Pro CPU. The power delivery solution looks competent and Lenovo’s use of proper finned VRM heatsinks with passive cooling is to be commended. Six total PCIe Gen 4 slots are provided by the motherboard – four x16 bandwidth and two x8. However, only two x16 slots remain usable due to the slot spacing, and the top one will likely interfere with the RAM fan’s header.
It is actually disappointing to see Lenovo offering up sub-par expansion slot capability; there is no clear way to use the full 128-lane capability of Threadripper Pro. That is especially disappointing for users who will want multiple graphics cards alongside high-bandwidth networking and storage devices. However, the limited expandability is a clear compromise stemming from Lenovo’s use of a compact chassis with just a couple of 80mm fans for intake and exhaust airflow.
At least you do get dual, cooled M.2 slots on the motherboard. One of those is occupied by a 256GB WD SN730 SSD in our install. Clearly, most users will want to adjust the storage configuration. But this is clearly a very subjective requirement, so I respect Lenovo for offering a basic, cheap drive for the baseline configuration.
Power is delivered by a 1kW 80Plus Platinum unit. Lenovo highlights 92% efficiency on the configurator page, but this is likely a mistake for 230/240V UK customers given the more stringent 80Plus Platinum requirements for those operating voltages. The PSU’s tool-less design is absolutely superb and works very well; a single connector port feeds power from the unit through the motherboard where it is then distributed accordingly, including via break-out cables for PCIe and SATA connectors.
Connectivity for the system is just ‘OK‘. You get 10GbE Aquantia AQC107 networking onboard, but a secondary network adapter is disappointingly omitted. I would have liked to see a few more USB ports on the rear IO, including some in Type-C form and preferably 20Gbps high-speed rated. However, the front IO is excellent with four 10Gbps USB connections, two of which are Type-C. I also appreciated the system’s included audio speaker when using the unit without a proper set of speakers.
Chassis build quality is good, and the system feels solid given its compact form. Manhandling the hefty system is easy thanks to the front handle, and the internal tool-less design is excellent. Lenovo’s configurator offers an optional key-locking side panel to prevent unauthorised access, which is good to see.
With that said, cooling certainly looks to be limited with just two 80mm intake fans on the chassis. The graphics card, CPU, PSU, and (annoyingly) RAM also have fans to take care of their own cooling. If you are thinking of adding a second high power GPU, though, the internals are likely to get very toasty.
Priced at around £5.5-6K inc. VAT in the UK (depending on the graphics card situation given current shortages), we are keen to see how Threadripper Pro performs in this reasonably compact workstation.
Detailed Specifications
Processor: AMD Threadripper Pro 3975WX (32 cores/64 threads, 3.5/4.2GHz, 280W TDP, 144MB L2+L3 cache, 128 PCIe Gen 4 lanes, up to 2TB 8-channel DDR4-3200 ECC memory support)
Motherboard: Lenovo WRX80 Threadripper Pro Motherboard
Memory: 128GB (8x16GB) SK Hynix 3200MHz C24 ECC DDR4, Octa-channel
Graphics Card: 8GB AMD Radeon Pro W5700 (RDNA/Navi GPU, 36 compute units, 2304 stream processors, 205W TDP, 1183MHz base clock, 1750MHz GDDR6 memory on a 256-bit bus for 448GBps bandwidth)
System Drive: 256GB WD SN730 PCIe NVMe SSD
CPU Cooler: Lenovo dual-tower heatsink with 2x 80mm fans
Matthew Wilson, 3 days ago
Earlier this week, as part of CD Projekt Red’s 2020 financial report, the company revealed that it sold 13.7 million copies of Cyberpunk 2077 in December and fulfilled 30,000 refund requests through its Help Me Refund program. As it turns out, the documentation was a little misleading, as the real cost of Cyberpunk 2077’s refunds was in the $50 million USD range.
As part of the Help Me Refund campaign, which ran for a limited time and ended before Christmas, CD Projekt Red facilitated 30,000 refunds, costing a total of $2.23 million. However, this campaign accounted for only a small fraction of the overall cost of Cyberpunk 2077 refunds. As pointed out by Ars Technica, digging through the documents reveals that refunds overall cost CD Projekt Red $51.2 million in 2020.
This figure covers all Cyberpunk 2077 refunds, whether digital or physical, and accounts for refunds granted outside of the limited-time Help Me Refund campaign. A chunk of these expenses were also “provisions for returns and expected adjustments of licensing reports related to sales of Cyberpunk 2077 in its release window”.
While CD Projekt Red did lose over $50 million to refunds, the company still had a record financial year in 2020, with Cyberpunk 2077’s launch propelling the company to a $300+ million profit.
KitGuru Says: Refunds certainly made a dent, albeit a small one. The real challenge now is getting Cyberpunk 2077 to a point where it can pick up a good number of post-launch sales and the only way to do that is by fixing the game and introducing new content.
Matthew Wilson, 3 days ago
This week, DICE and EA began teasing this year’s Battlefield game, promising bigger battles and more destruction than ever before. Battlefield 6 isn’t the only new project in the works, though; EA is also planning a Battlefield mobile game to compete with the likes of Call of Duty Mobile and other mobile shooters.
In a blog post, DICE GM, Oskar Gabrielson, writes: “It’s always been our vision to bring Battlefield to more platforms. So, after years of prototyping, I’m super happy to be able to let you know that our friends at Industrial Toys, working closely with all of us here at DICE, are developing a completely new Battlefield game bringing all-out warfare to smartphones and tablets in 2022.”
The post goes on to explain that this will be a standalone game and will be completely separate from the PC and console versions of Battlefield. As far as gameplay goes though, the team hasn’t revealed much. It could end up being a mobile-focused battle royale, or it could be something more akin to Call of Duty Mobile, which is a smaller scale version of COD’s multiplayer modes.
Currently, Battlefield Mobile is in the testing phase and isn’t planned to release until 2022, so it could be a while before we see it in action.
KitGuru Says: Since PUBG and Fortnite went mobile, it has become more common for larger IP to get dedicated mobile games. With that in mind, it shouldn’t be much of a surprise to see Battlefield following that trend. What do you all think of the idea of a Battlefield mobile game?
The researchers who got the University of Minnesota (UMN) banned from contributing to the Linux kernel are going to have to do more than apologize for their actions. ZDNet reported that the Linux Foundation’s Technical Advisory Board sent a list of demands the university will have to meet before it can seek forgiveness.
A quick recap: UMN researchers contributed intentionally flawed code to the Linux kernel in August 2020 for a paper on these so-called “hypocrite commits” that was published in February. Last week, a separate project meant to “automatically identify bugs introduced by other patches” drew the ire of Greg Kroah-Hartman, the developer who oversees the Linux kernel’s stable release channel.
Kroah-Hartman banned the entire UMN system from contributing to the Linux kernel as a result of the research projects. That decision was followed by an apology from the UMN Department of Computer Science and Engineering (CSE), a significant amount of discussion amongst the Linux community, and then a separate apology from the faculty and students who actually conducted the controversial research.
ZDNet reported that the Linux Foundation’s Technical Advisory Board contacted UMN Friday—a day before the researchers issued their apology—with a list of demands. (And, of course, additional criticism regarding the amount of work the research projects created for other developers.) The letter was officially penned by Linux Foundation senior VP and general manager of projects Mike Dolan.
To quote:
“Please provide to the public, in an expedited manner, all information necessary to identify all proposals of known-vulnerable code from any U of MN experiment. The information should include the name of each targeted software, the commit information, purported name of the proposer, email address, date/time, subject, and/or code, so that all software developers can quickly identify such proposals and potentially take remedial action for such experiments.”
Dolan also pushed for the UMN researchers to withdraw “from formal publication and formal presentation all research work based on this or similar research where people appear to have been experimented on without their prior consent.” He said “there should be no research credit” for information that’s already online, too.
ZDNet reported that the final condition was to “ensure that all future IRB reviews of proposed experiments on people will normally ensure the consent of those being experimented on, per usual research norms and laws,” as Dolan put it. He also made it clear that the Linux Foundation (and presumably the developer community as a whole) wants UMN to respond to this list of demands as quickly as possible.
Kroah-Hartman issued a response to the UMN researchers Sunday. “Thank you for your response. […] As you know, the Linux Foundation and the Linux Foundation’s Technical Advisory Board submitted a letter on Friday to your University outlining the specific actions which need to happen in order for your group, and your University, to be able to work to regain the trust of the Linux kernel community.
“Until those actions are taken, we do not have anything further to discuss about this issue.”
We doubt that will be the final word on the subject—the UMN CSE still has to complete its investigation into these projects, decide if the researchers are going to be punished, provide the information requested by the Linux Foundation’s Technical Advisory Board, and then publicly respond to the controversy as it continues to develop, all while the rest of the Linux community looks into the problem as well.
Ford announced the launch of a new battery development center in Michigan, the first step toward taking on some of the burden of building its own battery cells for electric cars in-house.
The new “global battery center of excellence” will be called Ford Ion Park and will be based in Southeast Michigan. Ford said the purpose is to conduct research on how to go about making its own electric vehicle batteries. A team of 150 experts will work on ways to build EV batteries that are long lasting, quick to charge, and sustainable for the environment. They will also develop a process for making batteries quickly, cheaply, and at scale.
But Ford Ion Park, which will open late next year, will not house any actual battery manufacturing at scale. There will be “lab scale and pilot scale” battery assembly at the new center, said Anand Sankaran, the center’s new director, but the automaker will have to build a new factory to build EV batteries at scale. Ford would not provide a timeline as to when it anticipates launching its own battery cell production line, nor whether it would build a new factory to house battery manufacturing.
“It’s really for us to develop that expertise and competency in house and give us that flexibility in the future,” said Hau Thai-Tang, Ford’s chief platform and operations officer. “So stay tuned.”
Ford, which is still in the early stages of its transition to electric vehicles, has said it plans to spend $22 billion on the shift, plus a further $7 billion on autonomous vehicles, through 2025. The majority of vehicles it plans on producing will be battery-electric vehicles, but the company also has hybrid and plug-in hybrid models with traditional internal combustion engines.
Ford just started delivering its first long-range EV, the Mustang Mach-E, despite some initial hiccups with software updates. It also introduced an all-electric Transit van last year and plans on unveiling an electric version of its bestselling F-150 pickup truck later this year.
It’s a more financially risky move with potentially lucrative rewards in the future if Ford can successfully supplement production from its own suppliers. Ford currently sources its batteries from South Korea’s SK Innovation, which recently lost a trade secret dispute with rival LG Chem that could hinder its imports to the US. (The companies recently reached an agreement that could avert a possible import ban.) By making its own batteries, Ford can avoid some of the conflicts that arise from sourcing batteries overseas.
Despite these plans, Ford will have to move more aggressively if it hopes to catch up to its competitors like Tesla, Volkswagen, and General Motors. GM is building two battery factories in the US with its partner LG Chem, while VW recently unveiled its own plan to have six “gigafactories” in Europe by 2030. Tesla, meanwhile, is in the early stages of making its own “tabless” battery cells in-house.
As recently as last July, then-CEO Jim Hackett said there was “no advantage” to Ford making its own battery cells. But Jim Farley, who replaced Hackett in October, took the opposite stance, describing the manufacturing of batteries in-house as a “natural” step as EV volumes grow.
According to Thai-Tang, the new center is a signal of how serious Ford is about building its business around the manufacturing and selling of electric vehicles. “We’re much more bullish and aggressive on how fast we think this transition is going to play out,” he said.
Legal services startup DoNotPay is best known for its army of “robot lawyers” — automated bots that tackle tedious online tasks like canceling TV subscriptions and requesting refunds from airlines. Now, the company has unveiled a new tool it says will help shield users’ photos from reverse image searches and facial recognition AI.
It’s called Photo Ninja and it’s one of dozens of DoNotPay widgets that subscribers can access for $36 a year. Photo Ninja operates like any image filter. Upload a picture you want to shield, and the software adds a layer of pixel-level perturbations that are barely noticeable to humans, but dramatically alter the image in the eyes of roving machines.
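DoNotPay hasn’t published how Photo Ninja computes its perturbations, so treat the following as a toy illustration of the general idea only: nudge each channel value by an amount too small for a human to notice, which still shifts the hashes and feature vectors that naive image matchers compare. Real shields of this kind are optimized adversarially against specific models rather than randomized, so this sketch would not fool a robust recognizer:

```c
#include <stdint.h>
#include <stdlib.h>

/* Toy pixel-level perturbation: shift every RGB byte by at most +/-2
 * out of 255. Imperceptible to the eye, but enough to change naive
 * perceptual hashes. NOT DoNotPay's algorithm; real adversarial
 * perturbations are optimized against target models, not random. */
void perturb(uint8_t *rgb, size_t n_bytes, unsigned seed) {
    srand(seed);                            /* deterministic per seed */
    for (size_t i = 0; i < n_bytes; i++) {
        int delta = (rand() % 5) - 2;       /* -2 .. +2 */
        int v = (int)rgb[i] + delta;
        rgb[i] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
    }
}
```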
The end result, DoNotPay CEO Joshua Browder tells The Verge, is that any image shielded with Photo Ninja yields zero results when run through search tools like Google image search or TinEye, something the company demonstrated using shielded pictures of Joe Biden.
The tool also fools popular facial recognition software from Microsoft and Amazon with a 99 percent success rate. This, combined with the anti-reverse-image search function, makes Photo Ninja handy in a range of scenarios. You might be uploading a selfie to social media, for example, or a dating app. Running the image through Photo Ninja first will prevent people from connecting this image to other information about you on the web.
Browder is careful to stress, though, that Photo Ninja isn’t guaranteed to beat every facial recognition tool out there. When it comes to Clearview AI, for example, a controversial facial recognition service that is widely used by US law enforcement, Browder says the company “anticipates” Photo Ninja will fool the company’s software but can’t guarantee it.
In part, this is because Clearview AI probably already has a picture of you in its databases, scraped from public sources long ago. As the company’s CEO Hoan Ton-That said in an interview with The New York Times last year: “There are billions of unmodified photos on the internet, all on different domain names. In practice, it’s almost certainly too late to perfect a technology [that hides you from facial recognition search] and deploy it at scale.”
Browder agrees: “In a perfect world, all images released to the public from Day 1 would be altered. As that is clearly not the case for most people, we recognize this as a significant limitation to the efficacy of our pixel-level changes. Hence, the focal point and intended use case of our tool was to avoid detection from Google Reverse Image Search and TinEye.”
DoNotPay isn’t the first to build this sort of tool. In August 2020, researchers from the University of Chicago’s SAND Lab created an open-source program named Fawkes that performs the same task. Indeed, Browder says DoNotPay’s engineers referenced this work in their own research. But while Fawkes is a low-profile piece of software, very unlikely to be used by the average internet consumer, DoNotPay has a slightly larger reach, albeit one that is still limited to tech-savvy users who are happy to let bots litigate on their behalf.
Tools like this don’t provide a silver bullet to modern privacy intrusions, but as facial recognition and reverse image search tools become more commonly used, it makes sense to deploy at least some protections. Photo Ninja won’t hide you from law enforcement or an authoritarian state government, but it might fool an opportune stalker or two.
Yesterday marked the 36th anniversary of the first power-on of an Arm processor. Today, the company announced the deep-dive details of its Neoverse V1 and N2 platforms that will power the future of its data center processor designs and span up to a whopping 192 cores and 350W TDP.
Naturally, all of this becomes much more interesting given Nvidia’s pending $40 billion Arm acquisition, but the company didn’t share further details during our briefings. Instead, we were given a deep dive look at the technology roadmap that Nvidia CEO Jensen Huang says makes the company such an enticing target.
Arm claims its new, more focused Neoverse platforms come with impressive performance and efficiency gains. The Neoverse V1 platform is the first Arm core to support Scalable Vector Extensions (SVE), bringing up to 50% more performance for HPC and ML workloads. Additionally, the company says that its Neoverse N2 platform, its first IP to support the newly-announced Armv9 extensions, like SVE2 and Memory Tagging, delivers up to 40% more performance in diverse workloads.
Additionally, the company shared further details about its Neoverse Coherent Mesh Network (CMN-700) that will tie together the latest V1 and N2 designs with intelligent high-bandwidth low-latency interfaces to other platform additives, such as DDR, HBM, and various accelerator technologies, using a combination of both industry-standard protocols, like CCIX and CXL, and Arm IP. This new mesh design serves as the backbone for the next generation of Arm processors based on both single-die and multi-chip designs.
If Arm’s performance projections pan out, the Neoverse V1 and N2 platforms could provide the company with a much faster rate of adoption in multiple applications spanning the data center to the edge, thus putting even more pressure on x86 stalwarts Intel and AMD, especially considering the full-featured connectivity options available for both single- and multi-die designs. Let’s start with the Arm Neoverse roadmap and objectives, then dive into the details of the new chip IP.
Arm Neoverse Platform Roadmap
(Slide gallery: 15 slides covering the Arm Neoverse roadmap.)
Arm’s roadmap remains unchanged from the version it shared last year, but it does help map out the steady cadence of improvements we’ll see over the next few years.
Arm’s server ambitions took flight with the Cortex-A72 in 2015, which was equivalent in performance and performance-per-watt to a traditional thread on a standard competing server architecture.
Arm says its current-gen Neoverse N1 cores, which power AWS Graviton 2 chips and Ampere’s Altra, equal or exceed a ‘traditional’ (read: x86) SMT thread. Additionally, Arm says that, given the N1’s energy efficiency, one N1 core can replace three x86 threads while using the same amount of power, providing an overall 40% better price-to-performance ratio. Arm chalks much of this design’s success up to the Coherent Mesh Network 600 (CMN-600), which enables linear performance scaling as core counts increase.
Arm has revised both its core architecture and the mesh for the new Neoverse V1 and N2 platforms that we’ll cover today. Now they support up to 192 cores and 350W TDPs. Arm says the N2 core will take the uncontested lead over an SMT thread on competing chips and offers superior performance-per-watt.
Additionally, the company says that the Neoverse V1 core will offer the same performance as competing cores, marking the first time the company has achieved parity with two threads running on an SMT-equipped core. Both chips utilize Arm’s new CMN-700 mesh that enables either single-die or multi-chip solutions, offering customers plenty of options, particularly when deployed with accelerators.
As one would expect, Arm’s Neoverse N2 and V1 target the hyperscale and cloud, HPC, 5G, and infrastructure edge markets. Customers include Tencent, Oracle Cloud with Ampere, Alibaba, and AWS with Graviton 2 (which is available in 70 out of 77 AWS regions). Arm also has two exascale-class supercomputer deployments planned with Neoverse V1 chips: SiPearl’s “Rhea” and the ETRI K-AB21.
Overall, Arm claims that its Neoverse N2 and V1 platforms will offer best-in-class compute, performance-per-watt, and scalability over competing x86 server designs.
Arm Neoverse V1 Platform ‘Zeus’
(Slide gallery: 3 slides.)
Arm’s existing Neoverse N1 platform scales from the cloud to the edge, encompassing everything from high-end servers to power-constrained edge devices. The next-gen Neoverse N2 platform preserves that scalability across a spate of usages. In contrast, Arm designed the Neoverse V1 ‘Zeus’ platform specifically to introduce a new performance tier as it looks to more fully penetrate HPC and machine learning (ML) applications.
The V1 platform comes with a wider and deeper architecture that supports Scalable Vector Extensions (SVE), a type of SIMD instruction. The V1’s SVE implementation runs across two lanes with a 256b vector width (2x256b), and the chip also supports the bFloat16 data type to provide enhanced SIMD parallelism.
With the same (ISO) process, Arm claims up to 1.5x IPC increase over the previous-gen N1 and a 70% to 100% improvement to power efficiency (varies by workload). Given the same L1 and L2 cache sizes, the V1 core is 70% larger than the N1 core.
The larger core makes sense, as the V-series is optimized for maximum performance at the cost of both power and area, while the N2 platform steps in as the design that’s optimized for power-per-watt and performance-per-area.
Per-core performance is the primary objective for the V1, as it helps to minimize the performance penalties for GPUs and accelerators that often end up waiting on thread-bound workloads, not to mention to minimize software licensing costs.
Arm also tuned the design to provide exceptional memory bandwidth, which impacts performance scalability, and next-gen interfaces, like PCIe 5.0 and CXL, provide I/O flexibility (much more on that in the mesh section). The company also focused on performance efficiency (a balance of power and performance).
Finally, Arm lists technical sovereignty as a key focus point. This means that Arm customers can own their own supply chain and build their entire SoC in-country, which has become increasingly important for key applications (particularly defense) among heightened global trade tensions.
(Slide gallery: 2 slides.)
The Neoverse V1 represents Arm’s highest-performance core yet, and much of that comes through a ‘wider’ design ethos. The front end has an 8-wide fetch, 5-8 wide decode/rename unit, and a 15-wide issue into the back end of the pipeline (the execution units).
As the slides show, the chip supports HBM, DDR5, and custom accelerators. It can also scale out to multi-die and multi-socket designs. The flexible I/O options include the PCIe 5 interface and CCIX and CXL interconnects. We’ll cover Arm’s mesh interconnect design a bit later in the article.
Additionally, Arm claims that, relative to the N1 platform, SVE contributes to a 2x increase in floating point performance, 1.8x increase in vectorized workloads, and 4x improvement in machine learning.
(Slide gallery: 7 slides.)
One of the V1’s biggest changes is the option to use either the 7nm or 5nm process, while the prior-gen N1 platform was limited to 7nm only. Arm also made a host of microarchitecture improvements spanning the front end, core, and back end to provide big speedups relative to prior-gen Arm chips, added support for SVE, and made accommodations to promote enhanced scalability.
Here’s a bullet list of the biggest changes to the architecture. You can also find additional details in the slides above.
Front End:
Net of 90% reduction in branch mispredicts (for BTB misses) and a 50% reduction in front-end stalls
V1 branch predictor decoupled from instruction fetch, so the prefetcher can run ahead and prefetch instructions into the instruction cache
Widened branch prediction bandwidth (to 2x32B per cycle) to enable faster run-ahead
Increased capacity of the Dual-level BTB (Branch Target Buffers) to capture more branches with larger instruction footprints and to lower the taken branch latency, improved branch accuracy to reduce mispredicts
Enhanced ability to redirect hard-to-predict branches earlier in the pipeline, at fetch time, for faster branch recovery, improving both performance and power
Mid-Core:
Net increase of 25% in integer performance
Micro-Op (MOP) Cache: L0 decoded instruction cache optimizes the performance of smaller kernels in the microarchitecture, 2x increase in fetch and dispatch bandwidth over N1, lower-latency decode pipeline by removing one stage
Added more instruction fusion capability, improving performance and power efficiency for the most commonly used instruction pairs
OoO (Out of Order) window increased by 2x to enhance parallelism; integer execution bandwidth also increased with a second branch execution unit and a fourth ALU
SIMD and FP units: added a new SVE implementation with 2x256b operations per cycle, doubling raw execute capability from the N1’s 2x128b pipelines to 4x128b in V1, good for a 4x improvement in ML performance (slide 10)
Back End:
45% increase in streaming bandwidth, achieved by increasing load/store address bandwidth by 50% through a third address generation unit (AGU)
To improve SIMD and integer/floating-point execution, added a third load data pipeline and improved load bandwidth for integer and vector work; doubled store bandwidth and split scheduling into two pipes
Increased load/store buffer window sizes and MMU capacity, allowing a larger number of cached translations
Reduced L2 cache latencies to improve single-threaded performance (slide 12)
This diagram shows the overall pipeline depth (left to right) and bandwidth (top to bottom), highlighting the impressive parallelism of the design.
(Slide gallery: 6 slides.)
Arm also instituted new power management and low-latency tools to extend beyond the typical capabilities of Dynamic Voltage Frequency Scaling (DVFS). These include the Max Power Mitigation Mechanism (MPMM) that provides a tunable power management system that allows customers to run high core-count processors at the highest possible frequencies, and Dispatch Throttling (DT), which reduces power during certain workloads with high IPC, like vectorized work (much like we see with Intel reducing frequency during AVX workloads).
At the end of the day, it’s all about Power, Performance, and Area (PPA), and here Arm shared some projections: the same ISO-process figures noted earlier, namely up to a 1.5x IPC increase over N1, a 70% to 100% improvement in power efficiency, and a core that is 70% larger than N1 at the same L1 and L2 cache sizes.
The Neoverse V1 supports Armv8.4, but the chip also borrows some features from future v8.5 and v8.6 revisions, as shown above.
Arm also added several features to manage system scalability, particularly as it pertains to partitioning shared resources and reducing contention, as you can see in the slides above.
(Slide gallery: 8 slides.)
Arm’s Scalable Vector Extensions (SVE) are a big draw of the new architecture. Firstly, Arm doubled compute bandwidth to 2x256b with SVE and provides backward support for Neon at 4x128b.
However, the key here is that SVE is vector length agnostic. Most vector ISAs have a fixed number of bits in the vector unit, but SVE lets the hardware set the vector length in bits. However, in software, the vectors have no length. This simplifies programming and enhances portability for binary code between architectures that support different bit widths — the instructions will automatically scale as necessary to fully utilize the available vector bandwidth (for instance, 128b or 256b).
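To make vector-length agnosticism concrete, here is a minimal sketch using the Arm C Language Extensions (ACLE) for SVE; it is an illustrative example of the programming model, not code from Arm’s presentation, and assumes an SVE-capable toolchain (e.g. compiling with -march=armv8-a+sve on GCC or Clang):

```c
#include <arm_sve.h>
#include <stdint.h>

/* a[i] += b[i] for i in [0, n), with no fixed vector width anywhere.
 * svcntw() reports how many 32-bit lanes this CPU's vectors hold, and
 * the svwhilelt predicate masks off the tail iteration automatically. */
void vec_add(float *a, const float *b, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32(i, n);       /* active-lane mask */
        svfloat32_t va = svld1_f32(pg, a + i);   /* predicated loads */
        svfloat32_t vb = svld1_f32(pg, b + i);
        svst1_f32(pg, a + i, svadd_f32_x(pg, va, vb));
    }
}
```

The same binary covers eight floats per iteration on a 256b implementation like V1’s and four on 128b hardware, with no recompilation, which is exactly the portability benefit described above.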
Arm shared information on several fine-grained instructions for the SVE instructions, but much of those details are beyond the scope of this article. Arm also shared some simulated V1 and N2 benchmarks with SVE, but bear in mind that these are vendor-provided and merely simulations.
Arm Neoverse N2 Platform ‘Perseus’
(Slide gallery: 17 slides.)
The slide deck for the N2 ‘Perseus’ platform lays out the key goal: a focus on scale-out implementations. Hence, the company optimized the design for performance-per-watt and performance-per-area, along with a healthy dose of cores and scalability. As with the previous-gen N1 platform, this design can scale from the cloud to the edge.
Neoverse N2 has a newer core than the V1 chips, but the company isn’t sharing many details yet. However, we do know that N2 is the first Arm platform to support Armv9 and SVE2, which is the second generation of the SVE instructions we covered above.
Arm claims a 40% increase in single-threaded performance over N1, but within the same power and area efficiency envelope. Most of the details about N2 mirror those we covered with V1 above, but we included the slides above for more details.
(Slide gallery: 20 slides, including benchmark results and test notes.)
Arm provided the above benchmarks, and as with all vendor-provided benchmarks, you should take them with a grain of salt. We have also included the test notes at the end of the album for further perusal of the test configurations.
Arm’s SPEC CPU 2017 single-core tests show a solid progression from N1 to N2, and then a higher jump in performance with the V1 platform. The company also provided a range of comparisons against the Intel Xeon 8268 and an unspecified 40-core Ice Lake Xeon system, and the EPYC Rome 7742 and EPYC Milan 7763.
Coherent Mesh Network (CMN-700)
(Slide gallery: 5 slides.)
Arm allows its partners to adjust core counts and cache sizes, use different types of memory, such as DDR5 and HBM, and select various interfaces, like PCIe 5.0, CXL, and CCIX, all of which requires a very flexible underlying design methodology. Add in the fact that Neoverse can span from the cloud and edge to 5G, and the interconnect also has to cover a full spectrum of power points and compute requirements. That’s where the Coherent Mesh Network 700 (CMN-700) steps in.
Arm focuses on security through compliance and standards, Arm open-source software, and Arm IP and architecture, all rolled under the SystemReady umbrella that serves as the underpinning of the Neoverse platform architecture.
Arm provides customers with reference designs based on its own internal work, with the designs pre-qualified in emulated benchmarks and workload analysis. Arm also provides a virtual model for software development.
Customers can then take the reference design, choose between core types (like V-, N- or E-Series) and alter core counts, core frequency targets, cache hierarchy, memory (DDR5, HBM, Flash, Storage Class Memory, etc.), and I/O accommodations, among other factors. Customers also dial in parameters around the system-level cache that can be shared among accelerators.
There’s also support for multi-chip integration. This hangs off the coherent mesh network and provides plumbing for I/O connectivity options and multi-chip communication accommodations through interfaces like PCIe, CXL, CCIX, etc.
The V-Series CPUs address the growth of heterogeneous workloads by providing enough bandwidth for accelerators, support for disaggregated designs, and multi-chip architectures that help offset the slowing of Moore’s Law.
These types of designs help address the fact that the power budget per SoC (and thus thermals) is increasing, and also allow scaling beyond the reticle limits of a single SoC.
Additionally, I/O interfaces aren’t scaling well to smaller nodes, so many chipmakers (like AMD) are keeping PHYs on older nodes. That requires robust chip-to-chip connectivity options.
Here we can see the gen-on-gen comparison with the current CMN-600 interface found on the N1 chips. The CMN-700 mesh interface supports four times more cores and system-level cache per die, 2.2x more nodes (cross points) per die, 2.5x memory device ports (like DRAM, HBM) per die, and 8x the number of CCIX device ports per die (up to 32), all of which supplies intense scalability.
(Slide gallery: 4 slides.)
Arm improved cross-sectional bandwidth by 3x, which is important for scaling core counts, scaling out with bandwidth-hungry GPUs, and feeding faster memories like DDR5 and HBM (the design accommodates 40 memory controllers across DDR and/or HBM). Arm also has options for double mesh channels for increased bandwidth. Additionally, a hot-spot reroute feature helps avoid areas of contention on the fabric.
The AMBA Coherent Hub Interface (CHI) serves as the high-performance interconnect for the SoC that connects processors and memory controllers. Arm improved the CHI design and added intelligent heuristics to detect and control congestion, combine operations to reduce transactions, and conduct data-less writes, all of which help reduce traffic on the mesh. These approaches also help with multi-chip scaling.
Memory partitioning and monitoring (MPAM) helps reduce the impact of noisy neighbors on system-level cache and isolates VMs to keep them from hogging system level cache (SLC). Arm also extends this software-controlled system to the memory controller as well. All this helps to manage shared resources and reduce contention. The CPU, accelerator, and PCIe interfaces all have to work together as well, so the design applies the same traffic management techniques between those units, too.
(Slide gallery: 4 slides.)
The mesh supports multi-chip designs through CXL or CCIX interfaces, and here we see a few of the use cases. CCIX is typically used inside the box or between the chips, be that heterogenous packages, chiplets, or multi-socket. In contrast, CXL steps in for memory expansion or pools of memory shared by multiple hosts. It’s also used for coherent accelerators like GPUs, NPUs, and SmartNICs, etc.
Slide 14 shows an example of a current connection topology — PCIe connects to the DPU (Data Plane Unit – SmartNic), which then provides the interconnection to the compute accelerator node. This allows multiple worker nodes to connect to shared resources.
Slide 15 shows us the next logical expansion of this approach — adding disaggregated memory pools that are shared between worker nodes. Unfortunately, as shown in slide 16, this creates plenty of bottlenecks and introduces other issues, such as spanning the home nodes and system-level cache across multiple dies. Arm has an answer for that, though.
(Slide gallery: 4 slides.)
Addressing those bottlenecks requires a re-thinking of the current approaches to sharing resources among worker nodes. Arm designed a multi-protocol gateway with a new AMBA CXS connection to reduce latency. This connection can transport CCIX 2.0 and CXL 2.0 protocols much faster than conventional interconnections. This system also provides the option of using a Flit data link layer that is optimized for the ultimate in low-latency connectivity.
This new design can be tailored for either socket-to-socket or multi-die compute SoCs. As you can see to the left on Slide 17, this multi-protocol gateway can be used either with or without a PCIe PHY. Removing the PCIe PHY creates an optimized die-to-die gateway for lower latency for critical die-to-die connections.
Arm has also devised a new Super Home Node concept to accommodate multi-chip designs. This implementation allows composing the system differently based on whether it is a homogeneous design (direct connections between dies) or a heterogeneous one (compute and accelerator chiplets connected to an I/O hub). The latter design is becoming more attractive because I/O doesn’t scale well to smaller nodes, so using older nodes can save quite a bit of investment and help reduce design complexity.
Thoughts
Arm’s plans for 30%+ gen-on-gen IPC growth stretch into the next three iterations of its existing platforms (V1, N2, Poseidon) and will conceivably continue into the future. We haven’t seen gen-on-gen gains in that range from Intel in recent history, and while AMD notched large gains with the first two Zen iterations, as we’ve seen with the EPYC Milan chips, it might not be able to execute such large generational leaps in the future.
If Arm’s projections play out in the real world, that puts the company not only on an intercept course with x86 (it’s arguably already there in some aspects), but on a path to performance superiority.
Wrapping in the well-thought-out coherent mesh design makes these platforms all the more formidable, especially in light of the ongoing shift to offloading key workloads to compute accelerators of various flavors. Additionally, bringing complex designs, like chiplet, multi-die, and hub-and-spoke arrangements, under one umbrella of pre-qualified reference designs could help spur a hastened migration to Arm architectures, at least for the cloud players. The attraction of licensable interconnects that democratize these complex interfaces is yet another arrow in Arm’s quiver.
Perhaps one of the most surprising tidbits of info that Arm shared in its presentations was one of the smallest — a third-party firm has measured that more than half of AWS’s newly-deployed instances run on Graviton 2 processors. Additionally, Graviton 2-powered instances are now available in 70 out of the 77 AWS regions. It’s natural to assume that those instances will soon have a newer N2 or V1 architecture under the hood.
This type of uptake, and the economies of scale and other savings AWS enjoys from using its own processors, will force other cloud giants to adapt with their own designs in kind, perhaps touching off the type of battle for superiority that can change the entire shape of the data center for years to come. There simply isn’t another vendor more well-positioned to compete in a world where the hyperscalers and cloud giants battle it out with custom silicon than Arm.
The recently released Ubuntu 21.04 is the latest version of the popular Linux distribution, and with it Wayland arrives as the default display server. But it seems that those wishing to upgrade from a previous release, such as 20.04 or 20.10, are currently unable to. According to OMG! Ubuntu!, a bug is preventing users from upgrading to the latest release.
The problem seems to be in shim, the bootloader component which handles the secure boot process for the OS. Users running the wrong version of their EFI firmware, an early one, can see their PC fail to boot after the upgrade. A new version of shim is on the way to fix the issue, but users who are sure their hardware is new enough to sidestep the problem can manually force an upgrade at their own risk from the command line (reportedly via do-release-upgrade -d).
“Due to the severity of the issue we shouldn’t be encouraging people to upgrade at this point in time,” wrote Canonical software engineer Brian Murray in a post to the Ubuntu Developer mailing list. “After we have a new version of shim signed [we] will make it available in Ubuntu 21.04 and then enable upgrades.”
The exact nature of the hardware likely to fail is still unclear. We reached out to Canonical software engineer Dave Jones on Twitter, who suggested modern machines would be unaffected, but that older machines such as a ThinkPad T420 from 2011 and a MacBook Air from 2012 were affected by the bug.
Codenamed “Hirsute Hippo”, Ubuntu 21.04 brings support for the Wayland display server, with Xorg still available for those who need it. Native Active Directory integration and a performance-optimized, certified Microsoft SQL Server are also new for this release.
The Patriot Viper Steel RGB DDR4-3600 C20 is only worthy of consideration if you’re willing to invest your time to optimize its timings and if you can find the memory on sale with a big discount.
For
+ Runs at C16 with fine-tuning
+ Balanced design with RGB lighting
+ RGB compatibility with most motherboards
Against
– Very loose timings
– Overpriced
– Low overclocking headroom
Patriot, which is no stranger to our list of Best RAM, has many interesting product lines in its broad repertoire. However, the memory specialist recently revamped one of its emblematic lineups to keep up with the current RGB trend. As the name conveys, the Viper Steel RGB series arrives with a redesigned heat spreader and RGB illumination.
The new series marks the second time that Patriot has incorporated RGB lighting onto its DDR4 offerings, with the first being the Viper RGB series that debuted as far back as 2018. While looks may be important, performance also plays a big role, and the Viper Steel RGB DDR4-3600 memory kit is here to show us what it is or isn’t made of.
(Image gallery: 3 product photos.)
Viper Steel RGB memory modules come with the standard black PCB and a matching matte-black heat spreader. It was nice on Patriot’s part to keep the aluminum heat spreader as clutter-free as possible. Only the golden Viper logo and the typical specification sticker are present on the heat spreader, and the latter is removable.
At 44mm (1.73 inches), the Viper Steel RGB isn’t excessively tall, so we expect it to fit under the majority of the CPU air coolers in the market. Nevertheless, we recommend you double-check that you have enough clearance space for the memory modules. The RGB light bar features five customizable lighting zones. Patriot doesn’t provide a program to control the illumination, so you’ll have to rely on your motherboard’s software. The compatibility list includes Asus Aura Sync, Gigabyte RGB Fusion, MSI Mystic Light Sync, and ASRock Polychrome Sync.
The Viper Steel RGB is a dual-channel 32GB memory kit, so you receive two 16GB memory modules with an eight-layer PCB and dual-rank design. Although Thaiphoon Burner picked up the integrated circuits (ICs) as Hynix chips, the software failed to identify the exact model. However, these should be AFR (A-die) ICs, more specifically H5AN8G8NAFR-VKC.
You’ll find the Viper Steel RGB defaulting to DDR4-2666 and 19-19-19-43 timings at stock operation. Enabling the XMP profile on the memory modules will get them to DDR4-3600 at 20-26-26-46. The DRAM voltage required for DDR4-3600 is 1.35V. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
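A quick rule of thumb helps when comparing profiles like these: first-word latency in nanoseconds is 2000 × CAS latency ÷ data rate in MT/s. Here is a small sketch using this kit’s stock, XMP, and (as we’ll see later) hand-tuned settings; the formula is the standard back-of-the-envelope estimate, not a benchmark:

```c
#include <stdio.h>

/* First-word latency in ns: DDR does two transfers per clock, so the
 * clock period is 2000 / (MT/s) ns; multiply by CAS latency in cycles. */
static double cas_ns(int cl, int mts) {
    return 2000.0 * cl / mts;
}

int main(void) {
    printf("DDR4-2666 CL19 (stock): %.2f ns\n", cas_ns(19, 2666)); /* ~14.25 */
    printf("DDR4-3600 CL20 (XMP):   %.2f ns\n", cas_ns(20, 3600)); /* ~11.11 */
    printf("DDR4-3600 CL16 (tuned): %.2f ns\n", cas_ns(16, 3600)); /* ~8.89  */
    return 0;
}
```

By this yardstick, the XMP profile’s CL20 lands well behind a CL16 kit at the same data rate, which is exactly the gap manual tuning closes.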
Comparison Hardware

Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage | Warranty
G.Skill Trident Z Royal | F4-4000C17D-32GTRGB | 2 x 16GB | DDR4-4000 (XMP) | 17-18-18-38 (2T) | 1.40 Volts | Lifetime
Crucial Ballistix Max RGB | BLM2K16G40C18U4BL | 2 x 16GB | DDR4-4000 (XMP) | 18-19-19-39 (2T) | 1.35 Volts | Lifetime
G.Skill Trident Z Neo | F4-3600C16D-32GTZN | 2 x 16GB | DDR4-3600 (XMP) | 16-16-16-36 (2T) | 1.35 Volts | Lifetime
Klevv Bolt XR | KD4AGU880-36A180C | 2 x 16GB | DDR4-3600 (XMP) | 18-22-22-42 (2T) | 1.35 Volts | Lifetime
Patriot Viper Steel RGB | PVSR432G360C0K | 2 x 16GB | DDR4-3600 (XMP) | 20-26-26-46 (2T) | 1.35 Volts | Lifetime
Our Intel test system consists of an Intel Core i9-10900K and Asus ROG Maximus XII Apex on the 0901 firmware. On the opposite side, the AMD testbed leverages an AMD Ryzen 5 3600 and ASRock B550 Taichi with the 1.30 firmware. The MSI GeForce RTX 2080 Ti Gaming Trio handles the graphical duties on both platforms.
Intel Performance
(Benchmark gallery: 19 charts.)
Things didn’t go well for the Viper Steel RGB on the Intel platform. The memory ranked at the bottom of our application RAM benchmarks and came in last place on the gaming tests. Our results didn’t reveal any particular workloads where the Viper Steel RGB stood out.
AMD Performance
(Benchmark gallery: 19 charts.)
The loose timings didn’t substantially hinder the Viper Steel RGB’s performance. Logically, it lagged behind its DDR4-3600 rivals that have tighter timings. The Viper Steel RGB’s data rate allowed it to run in a 1:1 ratio with our Ryzen 5 3600’s FCLK so it didn’t take any performance hits, unlike the DDR4-4000 offerings. With a capable Zen 3 processor that can operate with a 2,000 MHz FCLK, the Viper Steel RGB will probably not outperform the high-frequency kits.
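For readers unfamiliar with the coupling mentioned above: on Ryzen, the memory clock is half the data rate, and a 1:1 ratio requires the FCLK to match it. Here is a small sketch of that check; the ~1,800MHz (Zen 2) and ~2,000MHz (Zen 3) FCLK ceilings are the commonly cited limits and are used here as assumptions:

```c
#include <stdio.h>

/* On Ryzen, MCLK = data rate / 2. Running 1:1 requires FCLK == MCLK;
 * beyond the FCLK ceiling the platform drops to 2:1, adding a latency
 * penalty that usually outweighs the extra bandwidth. */
static void check_ratio(int data_rate_mts, int max_fclk_mhz) {
    int mclk = data_rate_mts / 2;
    printf("DDR4-%d: MCLK %d MHz -> %s\n", data_rate_mts, mclk,
           mclk <= max_fclk_mhz ? "1:1 (coupled)" : "2:1 (decoupled)");
}

int main(void) {
    /* Zen 2 (our Ryzen 5 3600), assumed ~1800 MHz FCLK ceiling: */
    check_ratio(3600, 1800); /* 1:1 -- this kit avoids the penalty   */
    check_ratio(4000, 1800); /* 2:1 -- the DDR4-4000 kits take a hit */
    return 0;
}
```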
Overclocking and Latency Tuning
(Image gallery: 3 screenshots.)
Overclocking potential isn’t the Viper Steel RGB’s strongest trait. Upping the DRAM voltage from 1.35V to 1.45V only got us to DDR4-3800. Although we had to maintain the tRCD, tRP, and tRAS at their XMP values, we could drop the CAS Latency down to 17.
Lowest Stable Timings

Memory Kit | DDR4-3600 (1.45V) | DDR4-3800 (1.45V) | DDR4-4000 (1.45V) | DDR4-4133 (1.45V) | DDR4-4200 (1.45V)
G.Skill Trident Z Neo DDR4-3600 C16 | 13-14-14-35 (2T) | N/A | N/A | N/A | 19-19-19-39 (2T)
Crucial Ballistix Max RGB DDR4-4000 C18 | N/A | N/A | 16-19-19-39 (2T) | N/A | 20-20-20-40 (2T)
G.Skill Trident Z Royal DDR4-4000 C17 | N/A | N/A | 15-16-16-36 (2T) | 18-19-19-39 (2T) | N/A
Klevv Bolt XR DDR4-3600 C18 | 16-19-19-39 (2T) | N/A | N/A | 18-22-22-42 (2T) | N/A
Patriot Viper Steel RGB DDR4-3600 C20 | 16-20-20-40 (2T) | 17-26-26-46 (2T) | N/A | N/A | N/A
As we’ve seen before, you won’t be able to run Hynix ICs at very tight timings. That’s not to say that the Viper Steel RGB doesn’t have any wiggle room though. With a 1.45V DRAM voltage, we optimized the memory to run at 16-20-20-40 as opposed to the XMP profile’s 20-26-26-46 timings.
Bottom Line
It comes as no surprise that the Viper Steel RGB DDR4-3600 C20 will not beat competing memory kits that have more optimized timings. The problem is that C20 is basically at the bottom of the barrel by DDR4-3600 standards.
The Viper Steel RGB won’t match or surpass the competition without serious manual tweaking. The memory kit’s hefty $199.99 price tag doesn’t do it any favors, either. To put it into perspective, the cheapest DDR4-3600 2x16GB memory kit on the market starts at $154.99, and it checks in with C18. Unless Patriot rethinks the pricing for the Viper Steel RGB DDR4-3600 C20, the memory kit will likely not be on anyone’s radar.
The Android version of Google and Apple’s COVID-19 exposure notification app had a privacy flaw that let other preinstalled apps potentially see sensitive data, including if someone had been in contact with a person who tested positive for COVID-19, privacy analysis firm AppCensus revealed on Tuesday. Google says it’s currently rolling out a fix to the bug.
The bug cuts against repeated promises from Google CEO Sundar Pichai, Apple CEO Tim Cook, and numerous public health officials that the data collected by the exposure notification program could not be shared outside of a person’s device.
AppCensus first reported the vulnerability to Google in February, but the company failed to address it, The Markup reported. Fixing the issue would be as simple as deleting a few nonessential lines of code, Joel Reardon, co-founder and forensics lead of AppCensus, told The Markup. “It’s such an obvious fix, and I was flabbergasted that it wasn’t seen as that,” Reardon said.
Updates to address the issue are “ongoing,” Google spokesperson José Castañeda said in an emailed statement to The Markup. “We were notified of an issue where the Bluetooth identifiers were temporarily accessible to specific system level applications for debugging purposes, and we immediately started rolling out a fix to address this,” he said.
The exposure notification system works by pinging anonymized Bluetooth signals between a user’s phone and other phones that have the system activated. Then, if someone using the app tests positive for COVID-19, they can work with health authorities to send an alert to any phones with corresponding signals logged in the phone’s memory.
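Under the hood, the Google/Apple framework derives short-lived rolling identifiers from on-device keys and later performs matching locally against identifiers published for positive cases. The heavily simplified sketch below shows only that matching step; real rolling proximity identifiers are derived cryptographically from rotating keys, not stored as raw random bytes like this:

```c
#include <stdio.h>
#include <string.h>

#define ID_LEN 16  /* rolling proximity identifiers are 16 bytes */

/* Count how many identifiers heard over Bluetooth appear in the set
 * published by confirmed-positive users. The comparison runs on the
 * phone itself; only random IDs, never identities, are exchanged. */
static int count_matches(const unsigned char heard[][ID_LEN], int n_heard,
                         const unsigned char positive[][ID_LEN], int n_pos) {
    int matches = 0;
    for (int i = 0; i < n_heard; i++)
        for (int j = 0; j < n_pos; j++)
            if (memcmp(heard[i], positive[j], ID_LEN) == 0)
                matches++;
    return matches;
}

int main(void) {
    unsigned char heard[2][ID_LEN]    = {{0xAA}, {0xBB}};
    unsigned char positive[1][ID_LEN] = {{0xBB}};
    printf("exposure matches: %d\n",
           count_matches(heard, 2, positive, 1)); /* prints 1 */
    return 0;
}
```

The Android bug did not break this cryptography; it exposed the logged Bluetooth identifiers to privileged preinstalled apps, which is what the next paragraph describes.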
On Android phones, the contact tracing data is logged in privileged system memory, where it’s inaccessible to most software running on the phone. But apps that are preinstalled by manufacturers get special system privileges that would let them access those logs, putting sensitive contact-tracing data at risk. There is no indication any apps have actually collected that data at this point, Reardon said.
Preinstalled apps have taken advantage of their special permissions before — other investigations show that they sometimes harvest data like geolocation information and phone contacts.
The analysis did not find any similar issues with the exposure notification system on iPhone.
The problem is an implementation issue and not inherent to the exposure notification framework, Serge Egelman, the chief technology officer at AppCensus, said in a statement posted on Twitter. It should not erode trust in public health technologies. “We hope the lesson here is that getting privacy right is really hard, vulnerabilities will always be discovered in systems, but that it’s in everyone’s interest to work together to remediate these issues,” Egelman said.
If I asked most people to name a laptop with a detachable keyboard, they'd probably say "Surface Pro." But there are a number of Windows alternatives out there, for business users and consumers alike. Dell's newly announced Latitude 7320 Detachable looks to be vying for the top spot in the enterprise market.
Like Dell’s previous Latitude detachable models, the Latitude 7320 Detachable is a powerful Windows tablet with a kickstand and detachable keyboard deck aimed at mobile business users. The 7320’s most significant update is that it includes Intel’s newest 11th Gen processors, up to a Core i7.
The new model also has a 3:2, 1920 x 1280 touch display with support for its own stylus (the 7320 Detachable Active Pen). Dell says this is “the world’s fastest charging stylus pen available on a commercial detachable tablet.” The company claims it will charge up to 100 percent in just 30 seconds and will last for 90 minutes of continuous use on one charge.
Like other modern Latitudes, the 7320 Detachable supports the AI-powered Dell Optimizer software, which improves the device’s performance and battery life over time based on your behavior. Dell claims it’s “the world’s most intelligent business PC.”
The device supports Intel’s vPro platform and can come with Windows 10 Pro and Windows 10 Enterprise as well as Windows 10 Home.
The Dell Latitude 7320 Detachable is available for purchase now. Models start at $1,549.
Washington, DC’s police department has confirmed its servers have been breached after hackers began leaking its data online, The New York Times reports. In a statement, the department confirmed it was aware of “unauthorized access on our server” and said it was working with the FBI to investigate the incident. The hacked data appears to include details on arrests and persons of interest.
The attack is believed to be the work of Babuk, a group known for its ransomware attacks. BleepingComputer reports that the gang has already released screenshots of the 250GB of data it’s allegedly stolen. One of the files is claimed to relate to arrests made following the January Capitol riots. The group warns it will start leaking information about police informants to criminal gangs if the police department doesn’t contact it within three days.
Washington’s police force, which is called the Metropolitan Police Department, is the third police department to be targeted in the last two months, according to the NYT, following attacks by separate groups against departments in Presque Isle, Maine and Azusa, California. The old software and systems used by many police forces are believed to make them more vulnerable to such attacks.
The targeting of police departments is believed to be part of a wider trend of attacks on government bodies. Twenty-six agencies are believed to have been hit by ransomware this year alone, with 16 of them seeing their data released online, according to Emsisoft ransomware analyst Brett Callow, Sky News notes. The Justice Department reports that the average ransom demand has grown to over $100,000 as attacks surged during the pandemic.
The Biden administration is attempting to improve the USA’s cybersecurity defenses, with an executive order expected soon. The Justice Department also recently formed a task force to help defend against ransomware attacks, The Wall Street Journal reports. “By any measure, 2020 was the worst year ever when it comes to ransomware and related extortion events,” acting Deputy Attorney General John Carlin, who’s overseeing the task force, told the WSJ. “And if we don’t break the back of this cycle, a problem that’s already bad is going to get worse.”
The Thermaltake Toughpower GF2 ARGB 850W has good performance, but an earlier model is a little better.
For
+ Full power at 47 degrees Celsius
+ Satisfactory overall performance
+ Efficient at normal loads
+ Effective APFC converter
+ Long hold-up time
+ Low inrush current
+ Adequate distance between the peripheral connectors
+ Compatible with the alternative sleep mode
+ Fully modular
+ 10-year warranty
Against
– The Toughpower GF1 ARGB 850W performs better
– Noisy at higher loads
– Low efficiency under light loads
– Poor transient response
Specifications and Part Analysis
In addition to Thermaltake's GF1 ARGB PSU line, which is made by Channel Well Technology (CWT), Thermaltake also decided to include another, similar line in its portfolio: the Toughpower GF2 ARGB. All GF2 units are based on a High Power platform and have specifications similar to the GF1 models, making us wonder why Thermaltake created internal competition. The only differences are the RGB side panels on the GF2 units and the fan's PWM control, since this High Power platform uses an MCU to adjust fan speed.
The 850W member of the GF2 ARGB line performs a little worse than the similar-capacity GF1 ARGB model, so if both are available at the same price, the GF1 looks like the better buy. Besides the GF1 ARGB 850W, which with a little more tuning could earn a place in our best power supplies article, other strong competitors to the GF2 ARGB 850W are the Corsair RM850x (2021), the XPG Core Reactor 850, and the Seasonic GX-850.
The GF2 ARGB line consists of three models with capacities ranging from 650W to 850W. All units are fully modular, and their RGB lighting is compatible with the lighting-control software of Asus, Gigabyte, MSI, and ASRock motherboards. Besides the 18-LED fan, the PSUs' side panels also feature RGB lighting, so you'll want to pair them with a chassis that doesn't hide the PSU in a separate compartment.
[Gallery: Product Photos]
The Toughpower GF2 ARGB 850W will be our test subject. This PSU is strong enough to support a potent gaming station equipped with an Nvidia RTX 3080/90 or an AMD RX 6800/6900 XT graphics card, along with a high-end CPU that will allow the GPU to deliver its full performance without any issues. It has to prove, though, that it is a better choice than the CWT-made Toughpower GF1 ARGB 850W, since both sell at about the same price.
Specifications of Thermaltake Toughpower GF2 ARGB

| Specification | Value |
|---|---|
| Manufacturer (OEM) | High Power |
| Max. DC Output | 850W |
| Efficiency | 80 PLUS Gold, Cybenetics Platinum (89-91%) |
| Noise | Cybenetics Standard++ (30-35 dB[A]) |
| Modular | ✓ (fully) |
| Intel C6/C7 Power State Support | ✓ |
| Operating Temperature (Continuous Full Load) | 0 – 40°C |
| Over Voltage Protection | ✓ |
| Under Voltage Protection | ✓ |
| Over Power Protection | ✓ |
| Over Current (+12V) Protection | ✓ |
| Over Temperature Protection | ✓ |
| Short Circuit Protection | ✓ |
| Surge Protection | ✓ |
| Inrush Current Protection | ✓ |
| Fan Failure Protection | ✗ |
| No Load Operation | ✓ |
| Cooling | 140mm Hydraulic Bearing Fan [TT-1425 (A1425S12S-2)] |
| Semi-Passive Operation | ✓ (selectable) |
| Dimensions (W x H x D) | 150 x 85 x 160mm |
| Weight | 1.64 kg (3.62 lb) |
| Form Factor | ATX12V v2.53, EPS 2.92 |
| Warranty | 10 Years |
Power Specifications of Thermaltake Toughpower GF2 ARGB

| Rail | 3.3V | 5V | 12V | 5VSB | -12V |
|---|---|---|---|---|---|
| Max. Amps | 22 | 22 | 70.9 | 3 | 0.3 |
| Max. Power | 120W (combined 3.3V + 5V) | | 850W | 15W | 3.6W |
| Total Max. Power (W) | 850 | | | | |
Cables & Connectors for Thermaltake Toughpower GF2 ARGB

Modular Cables

| Description | Cable Count | Connector Count (Total) | Gauge | In Cable Capacitors |
|---|---|---|---|---|
| ATX connector 20+4 pin (610mm) | 1 | 1 | 18AWG | No |
| 4+4 pin EPS12V (660mm) | 1 | 1 | 16AWG | No |
| 8 pin EPS12V (660mm) | 1 | 1 | 16AWG | No |
| 6+2 pin PCIe (500mm+160mm) | 3 | 6 | 16-18AWG | No |
| SATA (510mm+160mm+160mm+160mm) | 3 | 12 | 18AWG | No |
| 4-pin Molex (500mm+150mm+150mm+150mm) | 1 | 4 | 18AWG | No |
| FDD Adapter (+160mm) | 1 | 1 | 22AWG | No |
| ARGB Sync Cable (610mm+160mm) | 1 | 2 | 26AWG | No |
| AC Power Cord (1400mm) – C13 coupler | 1 | 1 | 18AWG | – |
There are plenty of cables and connectors, including two EPS, six PCIe, twelve SATA, and four 4-pin Molex connectors. On top of that, all cables are long, and the distance between the peripheral connectors is adequate.
[Gallery: Cable Photos]
Component Analysis of Thermaltake Toughpower GF2 ARGB
We strongly encourage you to have a look at our PSUs 101 article, which provides valuable information about PSUs and their operation, allowing you to better understand the components we’re about to discuss.
| Section / Part | Component(s) |
|---|---|
| General Data | – |
| Manufacturer (OEM) | High Power |
| PCB Type | Double Sided |
| Primary Side | – |
| Transient Filter | 4x Y caps, 2x X caps, 2x CM chokes, 1x MOV, 1x Champion CMD02X (Discharge IC) |
| 5VSB Circuit | – |
| Rectifier | 1x PFC P10V45SP SBR (45V, 10A), 1x UTC 2N70L FET (700V, 2A, 6.3Ohm) |
| Standby PWM Controller | SI8016HSP8 |
| -12V | – |
| Rectifier | 1x KEC KIA7912PI (-12V, 1A) |
[Gallery: Overall Photos]
This is a High Power platform, and the design looks good. The heat sinks are small, but this is an efficient PSU, so that won't be an issue, although larger heat sinks could allow for a more relaxed fan-speed profile and, hence, lower noise output.
On the primary side we find a half-bridge topology and an LLC resonant converter. On the secondary side, a synchronous rectification scheme handles the 12V rail, and a pair of DC-DC converters regulates the minor rails.
[Gallery: Transient filter]
The transient/EMI filter has all necessary components, including a Champion CMD02X discharge IC, which provides a small efficiency boost.
We didn't find an NTC thermistor. Nonetheless, the PSU has low inrush currents, so inrush current must be suppressed by another circuit, which is handled by the APFC controller.
[Gallery: Bridge rectifiers]
The pair of bridge rectifiers is installed on a dedicated heat sink, which is pretty small. Still, these rectifiers can easily handle the PSU’s full power even with low voltage input.
[Gallery: APFC converter]
The APFC converter uses two Infineon FETs and a single CREE C3D08060A boost diode. The bulk caps are provided by Rubycon and have enough capacity for a hold-up time longer than 17ms. The APFC controller is an ICE3PCS01G IC.
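For a sense of how bulk capacitance translates into hold-up time, here is a back-of-the-envelope sketch. The formula is the standard energy-balance estimate, but the capacitance and voltage figures are our assumptions for illustration; the exact bulk cap rating isn't listed here:

```python
# Hold-up estimate: the energy released as the bulk cap discharges from its
# charged voltage down to the minimum the DC-DC stage tolerates must cover
# the input power for the whole hold-up interval:
#   t = C * (V_start^2 - V_min^2) / (2 * P_in)
C = 680e-6       # F, assumed total bulk capacitance (illustrative value)
V_START = 380.0  # V, typical APFC output voltage
V_MIN = 300.0    # V, assumed minimum input the LLC stage tolerates
P_OUT = 850.0    # W, rated output
ETA = 0.90       # full-load efficiency (Cybenetics Platinum, 89-91%)

p_in = P_OUT / ETA
t_holdup = C * (V_START**2 - V_MIN**2) / (2 * p_in)
print(f"Estimated hold-up time: {t_holdup * 1e3:.1f} ms")  # ~19.6 ms
```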
[Gallery: Main FETs and primary transformer]
Two Infineon IPA60R180P7S installed into a half-bridge topology are the primary switching FETs. The LLC resonant controller is a Champion CM6901X IC.
[Gallery: 12V FETs and VRMs]
Six Infineon BSC027N04LS FETs regulate the 12V rail, while the minor rails are generated through a pair of DC-DC converters.
[Gallery: Filtering caps]
The electrolytic filtering caps are provided by Chemi-Con and Rubycon and are of good quality. A large number of polymer caps are also used for ripple filtering.
[Gallery: Supervisor ICs]
The supervisor IC is a WT7527RA, and right beside it, we find an STC STC15W401AS MCU, used to control the cooling fan’s speed.
[Gallery: RGB Board]
This is the RGB controller’s board.
[Gallery: 5VSB circuit]
The 5VSB circuit uses a UTC 2N70L FET on its primary side and a PFC P10V45SP SBR on its secondary side.
The -12V rail is regulated through a KEC KIA7912PI regulator IC.
[Gallery: Modular board front]
Several polymer caps are installed on the modular board, forming a secondary ripple filtering layer.
[Gallery: Soldering quality]
Soldering quality is satisfactory.
[Gallery: Cooling fan]
The cooling fan is quite strong, and it uses a hydraulic dynamic bearing, so it should last a long time.
MORE: Best Power Supplies
MORE: How We Test Power Supplies
MORE: All Power Supply Content