Intel CEO Pat Gelsinger appeared on 60 Minutes tonight to discuss the ongoing chip shortages in a wide-ranging interview with CBS News correspondent Leslie Stahl. Stahl noted during the program that Intel would announce a $3.5 billion upgrade to its Rio Rancho facility in New Mexico this week. Intel also posted a press release during the program announcing a live news conference tomorrow at 9 am PT about the investment (the press release doesn’t include the $3.5 billion figure).
The pending Rio Rancho announcement comes on the heels of Intel’s transition to its new IDM 2.0 model that will entail producing custom chips for third parties while it also vastly expands its own production capacity. Intel has already announced a $20 billion investment for two new fabs in Arizona, and it is also seeking subsidies from the US government for further US-based expansion. Intel recently confirmed a $10 billion investment in its Israel facilities, and is also reportedly seeking ~$10 billion in funding from the EU for a new fab in Europe.
Intel’s spending spree comes as its rival TSMC has announced its own $100 billion investment in fabs and R&D over the next three years.
Intel’s Rio Rancho investment could include funding for Optane (also called 3D XPoint) production. Intel’s sole source of the exotic tech, Micron, recently announced that it would cease production at the end of 2021. Micron plans to put its 3D XPoint fab in Lehi, Utah, up for sale at the end of the year after it exits the business.
Among other duties, Intel’s Rio Rancho facility currently serves as an R&D center for Optane media. Intel originally co-developed the technology in partnership with Micron and owns the associated IP, meaning Intel can produce the media, but the Rio Rancho fab currently isn’t used for volume production.
Optane media is a new type of memory that melds the speed and endurance of DRAM with the persistence of data storage devices, but it ran into quite a few hurdles in its path to market. Additionally, sluggish uptake of consumer-class storage devices based on the speedy material led Intel to kill off the entire range of Optane products for desktop PCs earlier this year. However, Intel has told Tom’s Hardware that it intends to continue to supply both the Optane storage drives and persistent memory DIMMs for its enterprise customers.
All of this means that Intel has to either procure Micron’s 3D XPoint fab or spin up its own production lines. Given that Rio Rancho is the site of its Optane development efforts, it wouldn’t be surprising to see Intel establish its own Optane production lines there. Intel might also be interested in purchasing Micron’s soon-to-be-retired fab tools from the Lehi, Utah facility so it can quickly ramp production.
We’ll learn more details about the Rio Rancho investment tomorrow during Intel’s press conference. Keyvan Esfarjani, Intel senior vice president and general manager of Manufacturing and Operations, will join New Mexico Gov. Michelle Lujan Grisham and several US Senators for the announcement at 9 am PT. We’ll update as we learn more.
I have some old telephones lying around – few of them fully functional anymore. I was going to throw out one of them when I realized I could replace the inner wiring with a Raspberry Pi, and have the Google Assistant running on it.
While it’s certainly easier to call “hey google” across the room, there’s something fun about picking up the phone, asking it a question, and having it immediately respond. This is how to install the Google Assistant on an old rotary telephone with a Raspberry Pi Zero.
What You’ll Need to Make an Old Phone into a Google Assistant
A Raspberry Pi Zero with soldered GPIO pins, a memory card (with Raspberry Pi OS on it), and power adapter
An old telephone with a functional receiver (speaker and microphone), and a functioning hook switch that you don’t mind destroying
A few female jumper cables, wire strippers, and electrical tape or solder
A few types of screwdrivers depending on your phone
1 USB audio adapter compatible with Linux
1 male-male 3.5mm audio cable
1 Raspberry Pi Zero micro USB to USB A female adapter
How to Install the Google Assistant on an Old Phone
This Raspberry Pi project is quite extensive and can take a bit of time, so I’ve split it up into four distinct parts:
Registering with Google
Authenticating with Google
Wiring your telephone
Setting up the assistant
Registering With Google
Before we use a Raspberry Pi as a Google Assistant, we must register the device with Google. This process can be a bit confusing if you’ve never used Google Cloud Platform before, but the steps should be easy enough to follow.
1. Clone this repository to your Raspberry Pi.
cd ~/
git clone https://github.com/rydercalmdown/google_assistant_telephone
2. Navigate to https://console.actions.google.com in your browser. This site allows us to manage Google Assistant Actions, as well as register custom Google Assistant devices.
3. Click “New Project” and fill in the required information. The name doesn’t matter – just pick something you can remember.
4. In a new tab, visit this link to Google Cloud Platform, and confirm that the name of the project you just created appears in the top bar – if not, select it. Then, click the “Enable” button to turn on the API.
5. In your original tab, scroll to the bottom of the page and click “Are you looking for device registration? Click here”
6. On the next page, click “Register Model”.
7. Fill in the required information and copy down the Model ID – you will need it later.
8. Click Download OAuth 2.0 credentials to download the credentials file your Raspberry Pi will use to make requests.
9. Rename the downloaded file to oauth_config_credentials.json and transfer it to your Pi. Place it into the repository folder you cloned in step 1.
# Rename your downloaded file
cd ~/Downloads
mv your_unique_secret_file_name.json oauth_config_credentials.json
# Move the file into your repository
scp oauth_config_credentials.json pi@your_pis_ip_address:/home/pi/google_assistant_telephone
10. Back in the browser, after downloading and renaming the credentials, click the “Next” button.
11. On the “Specify traits” tab, click “Save Traits” without adjusting any settings to complete the setup.
Authenticating With Google
We’ve now registered a device with Google. Next, it’s time to authenticate this device so it has access to our Google account and personalized assistant.
1. Navigate to https://console.cloud.google.com/apis/credentials/consent, ensuring that the project matches the name you chose in Step 3 of “Registering With Google”.
2. Select “External” and click the “Create” button.
3. Fill in an App name. Once again, this doesn’t really matter – but to keep it simple I went with the same name as before.
4. Select your email from the dropdown in User support email. This is in case any users of your app need to contact you, but since we won’t be making the app public, there’s no need to worry.
5. Add that same email under “Developer contact information” and click “Save and Continue”.
6. On the next page, click Add or Remove Scopes to bring up the scopes sidebar.
7. Search for “Google Assistant API” in the search bar, and check the /auth/assistant-sdk-prototype scope. Then click update, followed by “Save and Continue” at the bottom of the page.
8. On the next page, click “Save and Continue” to skip Optional Info.
9. With the setup complete, click “OAuth Consent Screen” in the sidebar to refresh the page – then “Add User” under Test Users.
10. Add your Google account’s email, and click save.
11. Next, on your Pi, navigate to the cloned repository and run the following command:
cd ~/google_assistant_telephone
make authenticate
12. Follow the link it gives you and complete the Authentication process in your browser. Once the process is complete, copy the code and paste it back in your terminal.
13. If successful, you’ll see a message indicating “credentials saved:” followed by a path to the credentials file. Use that path to copy the credentials into the root of the repository you cloned earlier.
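For example, if the tool saved the file to the default location used by Google’s OAuth tooling, the copy might look like the following – the source path here is an assumption, so substitute the path printed in your own terminal:
# Replace the source path with the one printed by the authenticate step
cp /home/pi/.config/google-oauthlib-tool/credentials.json ~/google_assistant_telephone/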
Wiring Your Telephone
Depending on your rotary phone, this process will vary widely. These are the steps I used, but it will likely require a fair bit of trial and error on your part. Make sure you use a phone you don’t care about, as it won’t be able to work normally after this.
1. Take the cover off the telephone. You may need to loosen a screw on the bottom.
2. Find and strip the wires connected to the hook switch. We’ll connect these wires to the Pi’s GPIO pins to determine if the receiver has been picked up or set down.
3. Connect the hook switch wires to GPIO Board pin 18 and ground. You may need to solder the wires from the hook switch to jumper wires to connect them more easily, or just attach them together with a bit of electrical tape. (A quick way to test this connection is shown after these steps.)
4. Connect your USB audio adapter to the Raspberry Pi Zero. You will need a USB-micro to female USB-A adapter to do this.
5. Solder the microphone and speaker in the handset to two separate 3.5mm cables. These will carry the signal from the Pi to the speaker, and from the microphone to the Pi. You should be able to connect these within the phone case and use the original handset cord. This will take a bit of trial and error to determine which wires belong to the speaker, and which to the microphone.
6. Connect the 3.5mm cables to the USB audio adapter, being mindful to connect them in the proper order.
7. Tuck the Pi into the phone, and close up the cover – or keep it open while you debug setting up the assistant. Run the power adapter connected to the Pi out the back of the case where the original telephone wire would go.
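Before you close up the case for good, it’s worth sanity-checking the hook switch wiring from step 3. The sketch below uses the raspi-gpio utility that ships with Raspberry Pi OS; board pin 18 corresponds to BCM GPIO 24, and whether the pin reads high or low on-hook depends on how your particular switch is wired, so treat the exact levels as something to verify rather than a given:
# Board pin 18 on the 40-pin header is BCM GPIO 24; set it as an input with the internal pull-up enabled
raspi-gpio set 24 ip pu
# Read the pin with the handset on the hook, then again with it lifted – the reported level should flip
raspi-gpio get 24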
Setting Up The Assistant
1. Run the installation script. It’ll take care of the base and Python requirements. If you have a Raspberry Pi Zero, the compilation process can take hours and will appear stuck on a step installing grpc (it just moves very slowly). I’d recommend leaving it running overnight.
cd google_assistant_telephone
make install
2. Configure your USB audio by running the following command. It will take care of editing your alsamixer config, setting your USB card as your default audio output, and setting volumes for the speaker and microphone.
make configure-audio
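If you want to confirm that the system actually sees the USB adapter before or after this step, the standard ALSA utilities can list the playback and capture devices; the card number they report is the one alsamixer and the generated config refer to:
# List playback (speaker) devices and capture (microphone) devices – the USB adapter should show up as its own card
aplay -l
arecord -l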
3. Test and adjust your volumes by running the following command, speaking, and listening through your phone’s handset. If your volume is not high enough on your microphone or speaker, set it with the alsamixer command.
# Run, speak something into the microphone, and listen
make test
# Set volumes
alsamixer
4. Export your project ID to an environment variable. You can retrieve your project ID by visiting this URL, selecting your project, clicking the three dots next to your profile photo in the top right, and clicking “Project Settings”
export PROJECT_ID=your-project-id
5. Export your model ID from the Registering With Google steps to an environment variable.
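Mirroring the project ID step above, this would look something like the following – note that the exact variable name the project expects is an assumption here, so check the repository’s documentation if make run complains:
# Substitute the Model ID you copied down during device registration
export MODEL_ID=your-model-id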
6. Run the make run command. It will take care of registering this device, and saving the configuration to disk so you won’t need the environment variables in the future.
make run
7. Test your assistant by picking up the phone, and asking it a simple question, like “What is the capital of Canada?” If all goes well, you’ll see some logs in the terminal, and the assistant will respond. To ask another question, hang up the receiver and pick it up again.
8. Finally, run the repository’s boot-configuration command so the assistant starts automatically every time the Pi boots.
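If you would rather wire this up by hand instead, one common approach on Raspberry Pi OS is a small systemd unit. Everything below (the service name, the paths, and using make run as the entry point) is an assumption for illustration rather than the project’s documented method:
# Create a systemd service that starts the assistant at boot
sudo tee /etc/systemd/system/phone-assistant.service > /dev/null <<'EOF'
[Unit]
Description=Google Assistant rotary phone
After=network-online.target sound.target

[Service]
User=pi
WorkingDirectory=/home/pi/google_assistant_telephone
ExecStart=/usr/bin/make run
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# Reload systemd and enable the service so it starts on every boot
sudo systemctl daemon-reload
sudo systemctl enable phone-assistant.service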
The first benchmark results of Intel’s yet-to-be-announced eight-core Core i9-11950H ‘Tiger Lake-H’ processor for gaming notebooks have been published in Primate Labs’ Geekbench 5 database. As expected, the new unit beats Intel’s own quad-core Core i7-1185G7 CPU in both single-thread and multi-thread workloads, but when it comes to comparisons with other rivals, the results are less clear-cut.
Intel’s Core i9-11950H processor has never been revealed in leaks, so it was surprising to see benchmark results of HP’s ZBook Studio 15.6-inch G8 laptop based on this CPU in Geekbench 5. The chip has eight cores based on the Willow Cove microarchitecture running at 2.60 GHz – 4.90 GHz. It is equipped with a 24MB cache, a dual-channel DDR4-3200 memory controller, and a basic UHD Graphics core featuring the Xe architecture.
In Geekbench 5, the ZBook Studio 15.6-inch G8 powered by the Core i9-11950H scored 1,365 points in the single-thread benchmark and 6,266 points in the multi-thread benchmark. The system operated in the ‘HP Optimized (Modern Standby)’ power plan, though we do not know the maximum TDP supported in this mode.
CPU | Single-Core | Multi-Core | Cores/Threads, uArch | Cache | Clocks | TDP | Link
AMD Ryzen 9 5980HS | 1,540 | 8,225 | 8C/16T, Zen 3 | 16MB | 3.30 ~ 4.53 GHz | 35W | https://browser.geekbench.com/v5/cpu/6027200
AMD Ryzen 9 4900H | 1,230 | 7,125 | 8C/16T, Zen 2 | 8MB | 3.30 ~ 4.44 GHz | 35~54W | https://browser.geekbench.com/v5/cpu/6028856
Intel Core i9-11900 | 1,715 | 10,565 | 8C/16T, Cypress Cove | 16MB | 2.50 ~ 5.20 GHz | 65W | https://browser.geekbench.com/v5/cpu/7485886
Intel Core i9-11950H | 1,365 | 6,266 | 8C/16T, Willow Cove | 24MB | 2.60 ~ 4.90 GHz | ? | https://browser.geekbench.com/v5/cpu/7670672
Intel Core i9-10885H | 1,335 | 7,900 | 8C/16T, Skylake | 16MB | 2.40 ~ 5.08 GHz | 45W | https://browser.geekbench.com/v5/cpu/6006773
Intel Core i7-1185G7 | 1,550 | 5,600 | 4C/8T, Willow Cove | 12MB | 3.0 ~ 4.80 GHz | 28W | https://browser.geekbench.com/v5/cpu/5644005
Apple M1 | 1,710 | 7,660 | 4C Firestorm + 4C Icestorm | 12MB + 4MB | 3.20 GHz | 20~24W | https://browser.geekbench.com/v5/cpu/6038094
The upcoming Core i9-11950H processor easily defeats its quad-core Core i7-1185G7 sibling for mainstream and thin-and-light laptops in both single-thread and multi-thread workloads. This is not particularly surprising, as the i7-1185G7 has a TDP of 28W. Meanwhile, the Core i9-11950H is behind AMD’s Ryzen 9 5980HS as well as Apple’s M1 in all kinds of workloads. Furthermore, its multi-thread score is behind that of its predecessor, the Core i9-10885H.
Perhaps the unimpressive results of the Core i9-11950H in Geekbench 5 are due to a preliminary BIOS, early drivers, wrong settings, or some other anomalies. In short, since the CPU does not officially exist, its test results should be taken with a grain of salt. Yet, at this point, the product does not look too good in this benchmark.
Saturday’s Newegg Shuffle targets the best and the worst, at least in performance, of Nvidia’s Ampere lineup. The RTX 3060 and RTX 3090 are at polar ends of the GPU spectrum, and priced accordingly. Either way, you’re getting one of the best graphics cards — assuming you win the chance to buy one of these GPUs.
A dozen bundles comprise today’s offerings, with power supplies and motherboards being the common option, though there’s also one monitor bundle. Newegg likes to bundle hard-to-find graphics cards with other components as a way to push stock, and while it makes for more money up front, you’re still generally paying less than what you’d have to fork over if you tried to buy just the graphics card on eBay. In fact, our eBay GPU pricing index suggests some people might just be putting their ‘winnings’ up for sale to earn a buck, not that we’d recommend that.
If you’re wondering how the cards perform, our GPU benchmarks hierarchy has the details. We’ve also recently covered how modern GPUs perform in ray tracing benchmarks, where Nvidia’s DLSS can make a huge difference. Basically, RTX 3060 cards are about as fast as the RTX 2070 from 2018, but with more memory (and less memory bandwidth). The RTX 3090 meanwhile reigns as the king of the GPU hill, with 24GB of VRAM for good measure. It’s about 25% faster than the old Titan RTX, and right now costs nearly as much.
For those unfamiliar with the process, Newegg Shuffle uses a lottery format. Just select the component(s) you’d like to potentially buy. Then Newegg will hold a drawing later today, after which the ‘winners’ will be notified by email with the chance to purchase an item (only one) within a several-hour period. Based on our experience, you won’t get selected most of the time. But hey, it’s free to try.
We noted recently that Newegg says about 100,000 people enter the Shuffle each time, which would make your odds of winning pretty poor. Except, we don’t know how many of each combo are available. With 12 options today, if we assume 10 of each, that gives 120 total winners. That would be a 0.12% chance of winning, though if there are more or fewer of the combos available, the odds obviously change. Our GPU editor has entered nearly every shuffle for the past month and been selected just once, while others say they’ve never been selected and some claim they’ve won multiple times. YMMV. Here’s the full list of today’s options:
EVGA GeForce RTX 3060 with EVGA 650W Power Supply for $469
EVGA GeForce RTX 3060 with EVGA 750W Power Supply for $510
EVGA GeForce RTX 3060 with EVGA 750W Fully Modular PSU for $512
Asus ROG Strix 3060 with Asus B450M motherboard for $635
Asus ROG Strix 3060 with Asus B450-F motherboard for $675
Asus ROG Strix 3060 with Asus PG259QN 1080p 360Hz monitor for $1,240
Gigabyte RTX 3090 Xtreme with Gigabyte X570 Aorus Master for $2,605
Gigabyte RTX 3090 Xtreme Waterforce with Gigabyte 850W PSU for $2,480
Gigabyte RTX 3090 Xtreme Waterforce with X570 Aorus Master for $2,705
Gigabyte RTX 3090 Gaming with Gigabyte 850W PSU for $2,860
Gigabyte RTX 3090 Gaming with Gigabyte X570 Aorus Master for $3,085
Gigabyte RTX 3090 Gaming with Z490 Aorus Master Waterforce for $3,335
The most enticing options seem to be the EVGA RTX 3060 combos, as we’re never sad about having a decent quality spare PSU around. Plus, the prices are downright reasonable. Assuming $70–$100 for the power supply, that means the RTX 3060 cards only cost around $400. You won’t find a lower price on a modern graphics card right now!
The Asus 3060 bundles by comparison cost over $100 more, and while they come with more expensive motherboards, having a potential spare board isn’t nearly as helpful in our experience. If you’re planning on building a PC using a B450 motherboard, though, they’re still worth a look. The pairing of a top-tier eSports gaming monitor with a modest graphics card on the other hand just feels a bit… off.
Gigabyte rounds out the list with six different RTX 3090 bundles, all obviously in the extremely expensive range. Oddly, the 3090 Gaming bundles cost quite a bit more than the 3090 Xtreme and 3090 Xtreme Waterforce bundles, even though the 3090 Xtreme models actually have better cooling and higher factory overclocks. In other words, the RTX 3090 Gaming bundles aren’t recommended — unless you really want to drop a bunch of money to try your hand with one of the best mining GPUs?
With component shortages plaguing the PC industry, not to mention the smartphone and automotive industries, the latest word is that prices aren’t likely to return to ‘normal’ throughout 2021. If you can keep chugging along with whatever your PC currently has, that’s the best option, as otherwise prices are painful for all of the Nvidia Ampere and AMD RDNA2 GPUs.
Today’s Newegg Shuffle starts at 1 pm EST/10 am PST. The Newegg Shuffle normally lasts for two hours, so if you’re interested in any of these components, act fast! For other ways to get hard-to-find graphics cards, check out our RTX 3080 stock tracker and our feature on where to buy RTX 30-series cards. And for more Newegg savings, visit our page of Newegg promo codes.
TSMC produces chips for AMD, but it also now uses AMD’s processors to control the equipment that it uses to make chips for AMD (and other clients too). Sounds like a weird circulation of silicon, but that’s exactly what happens behind the scenes at the world’s largest third-party foundry.
There are hundreds of companies that use AMD EPYC-based machines for their important, sometimes business-critical, workloads. Yet, when it comes to mission-critical work, Intel Xeon (and even Intel Itanium and mainframes) still rules the world. Luckily for AMD, things have begun to change, and TSMC has announced that it is now using EPYC-based servers for its mission-critical fab control operations.
“For automation with the machinery inside our fab, each machine needs to have one x86 server to control the operation speed and provision of water, electricity, and gas, or power consumption,” said Simon Wang, Director of Infrastructure and Communication Services Division at TSMC.
“These machines are very costly. They might cost billions of dollars, but the servers that control them are much cheaper. I need to make sure that we have high availability in case one rack is down, then we can use another rack to support the machine. With a standard building block, I can generate about 1,000 virtual machines, which can control 1,000 fab tools in our cleanroom. This will mean a huge cost saving without sacrificing failover redundancy or reliability.”
TSMC started to use AMD EPYC machines quite some time ago for its general data center workloads, such as compute, storage, and networking. AMD’s 64-core EPYC processors feature 128 PCIe lanes and support up to 4TB of memory, two crucial features for servers used to run virtual machines. But while the infrastructure that supports 50,000 of TSMC’s employees globally is very complex and important (some would call it business-critical), it isn’t as important as TSMC’s servers that control fab tools.
Fab tools cost tens or hundreds of millions of dollars and process wafers carrying hundreds of chips that could be used to build products worth tens of thousands of dollars. Each production tool uses one x86 server that controls its operating speed as well as its provision of water, electricity, gas, and power. Sometimes hardware fails, so TSMC runs its workloads in such a way that one server can quickly replace a failed one (naturally, TSMC does not disclose which operating systems and applications it runs at its fabs).
At present TSMC uses HPE’s DL325 G10 platform running AMD EPYC 7702P processors with 64 cores (at 2.0 GHz ~ 3.35 GHz) in datacenters. It also uses servers based on 24-core EPYC 7F72s featuring a 3.20 GHz frequency for its R&D operations. As for machines used in TSMC’s fabs, the foundry keeps their specifications secret.
It is noteworthy that AMD’s data center products are used not only to produce chips, but also to develop them. AMD’s own Radeon Technologies Group uses EPYC processors to design GPUs.
ASRock’s Z590 PG Velocita is a full-featured Z590 motherboard that includes three M.2 sockets, Killer-based networking (including Wi-Fi 6E), capable power delivery, premium audio, and more. It’s a well-rounded mid-ranger for Intel’s Z590 platform.
For
Killer-based 2.5 GbE and Wi-Fi 6E networking
10 USB ports on rear IO
Capable power delivery
Against
Last-gen audio codec
No USB 3.2 Gen2x2 Type-C on rear IO
Features and Specifications
Next up out of the ASRock stable is the Z590 PG Velocita. The Z590 version of this board comes with an improved appearance, enhanced power delivery, PCIe 4.0 capability for your GPU and M.2 device, fast Killer-based networking and more. Priced around $300, the PG Velocita lands as a feature-rich mid-range option in the Z590 landscape.
ASRock’s Z590 lineup is similar to the previous-generation Z490 product stack. At the time we wrote this, ASRock has 12 Z590 motherboards listed. At the top is the Z590 Taichi, followed by the PG Velocita we’re looking at here, and three Phantom Gaming boards, including a Micro-ATX option. Additionally, two professional boards (the Z590 Pro4 and Z590M Pro4), two Steel Legend boards, two Extreme boards (also more on the budget end), and a Mini-ITX board round out the product stack. Between price, size, looks, and features, ASRock should have a board that works for everyone looking to dive headlong into Rocket Lake.
Performance testing on the PG Velocita went well and produced scores that are as fast or faster than the other Z590 boards we’ve tested so far. The PG Velocita eschews Intel specifications, allowing the Intel Core i9-11900K to stretch its legs versus boards that more closely follow those specs. Overclocking went well, with the board able to run our CPU at both stock speeds and the 5.1 GHz overclock we’ve settled on. Memory overclocking also went well, with this board running our DDR4 3600 sticks at 1:1, and DDR4 4000 was nice and stable after a few tweaks to voltage to get it there.
The Z590 PG Velocita is an iterative update, just like most other Z590-based motherboards. The latest version uses a Killer-based 2.5 GbE and Wi-Fi 6E network stack, adds a front panel USB 3.2 Gen2x2 Type-C port, premium Realtek audio codec (though it is last generation’s flagship), three M.2 sockets and more. We’ll dig into these details and other features below. But first, here are the full specs from ASRock.
Along with the motherboard, the box includes several accessories ranging from cables to a graphics card holder and an additional VRM fan. The included accessories should get you started without a trip to the store. Below is a complete list of all included extras.
Support DVD / Quick installation Guide
Graphics card holder
Wi-Fi Antenna
(4) SATA cables
(3) Screw package for M.2 sockets
(3) Standoffs for M.2 sockets
Wireless dongle USB bracket
3010 Cooling Fan with bracket
4010 Cooling Fan bracket
Once you remove the Z590 PG Velocita from the box, one of the first things you’ll notice (if you’re familiar with the previous model) are the design changes. ASRock sticks with the black and red theme but forgoes the red stenciling on the black PCB from the last generation. The VRM heatsinks are large, connected via heatpipe and actively cooled out of the box by a small fan hidden in the left heatsink. ASRock includes an additional small fan and brackets for the top VRM heatsink (we did not use this in any test). The rear IO cover also sports the black and red Phantom Gaming design theme, along with the ASRock branding lit up with RGB lighting. The heatsinks on the bottom half of the board cover the three M.2 sockets and the chipset heatsink. The latter sports a PCB and chip under clear plastic for a unique look. Overall, I like the changes ASRock made to the appearance of this board, and it should fit in well with more build themes.
As we look closer at the top half of the board, we start by focusing on the VRM area. These aren’t the most robust parts below the heatsink, so additional cooling is welcomed. Just above the VRM heatsinks are two 8-pin EPS connectors (one required) to power the processor. To the right of the socket area are four unreinforced DRAM slots with latches on one side. ASRock lists supported speeds up to DDR4 4800(OC) with a maximum capacity of 128GB. As always, your mileage may vary as support depends on the CPU’s IMC and the kit you use to reach those speeds.
Located above the DRAM slots, we find the first two (of seven) 4-pin fan headers. The CPU/Water Pump and Chassis/Water Pump headers both support 24W/2A, with the remainder of the fan headers supporting 12W/1A. There are plenty of fan/pump headers on this board, so you can run all of your fans from the motherboard without a separate controller if you choose. A third 4-pin header is located in this area, while a fourth is in an odd spot, just below the left VRM heatsink. Outside of that, all headers auto-sense whether a 3- or 4-pin connector is attached.
Just to the right of the fan headers up top are an ARGB (3-pin) and RGB header (4-pin). You’ll find the other two on the bottom edge of the board. The Polychrome Sync application controls these LEDs and any attached to the headers.
On the right edge are power and reset buttons, while just below those is the 24-pin ATX header that powers the board. Below that are the first USB 3.2 Gen1 front-panel header and the USB 3.2 Gen2x2 Type-C front-panel header.
ASRock uses a 12-phase configuration for the CPU. Power comes in through the 8-pin EPS connector(s) and is managed by the Renesas ISL69269 (X+Y+Z=12) controller. The controller then feeds six Renesas ISL6617A phase doublers and, finally, the 12 Vishay 50A SIC654 DrMOS power stages. This provides 600A total to the CPU. While not the highest value we’ve seen, the VRMs easily handled our CPU at stock and overclocked settings, with some help from the active cooling fan. The board comes with another fan, but we chose not to use it and, after testing, found there wasn’t a need for it.
Moving down to the bottom half of the board, we’ll start on the left side with audio. Hidden below the plastic shroud is the premium Realtek ALC1220 codec. ASRock chose to go with last generation’s flagship solution instead of jumping up to the latest 4000-series Realtek codec, likely to cut costs. We also spy a few Nichicon Fine Gold audio capacitors poking through said shroud. This board doesn’t have a fancy DAC as more expensive boards tend to, but this solution will still be satisfactory for the overwhelming majority of users.
In the middle of the board, we see three full-length reinforced PCIe slots (and an x1 slot), as well as the heatsinks that cover the three M.2 sockets. Starting with the PCIe configuration, when using an 11th Gen CPU, the top two slots are PCIe 4.0 capable, with the slot breakdown as follows: x16/x0, x8/x8, or x8/x8/x4 (PCIe 3.0). ASRock says the PG Velocita supports Quad CrossFireX, 3-Way CrossFireX and CrossFireX. As is increasingly common, there’s no mention of SLI support. The x1 slot is connected via the chipset and runs at PCIe 3.0 x1 speeds.
Looking at M.2 storage, the top socket, M2_1, is connected directly to the CPU and offers the fastest speeds (PCIe 4.0 x4 @ 64 Gbps), supporting up to 80mm devices. The second slot down, M2_2, is chipset connected, supporting PCIe 3.0 x4 speeds and accepting SATA-based modules. The bottom socket, M2_3, is also fed from the chipset and runs both SATA-based drives and PCIe, at 3.0 x4 speeds. If M2_2 is occupied, SATA ports 0/1 will be disabled. If M2_3 has a SATA-type drive installed, SATA 3 will be disabled. In the worst-case scenario, when all M.2 sockets are populated (one with a SATA drive), you’ll still have three SATA ports available as well. The top two sockets hold up to 80 mm modules while the bottom supports up to 110 mm drives.
To the right of the PCIe socket sits the chipset heatsink and its PCB-under-plexi look. Continuing to the right edge, we spot another 4-pin fan/pump header, the second USB 3.2 Gen1 header and six SATA ports. Below that is another 4-pin fan header and finally a clear CMOS button to reset your BIOS. Around the SATA ports are the mounting holes for the included GPU support bar. Including this in the box is a great value add, especially with graphics cards seemingly getting larger and heavier as time goes on.
Across the board’s bottom are several headers, including more USB ports, fan headers and more. Below is the complete list, from left to right:
Front-panel audio
Thunderbolt header
UART header
RGB and ARGB headers
USB 2.0 header
TPM header
(2) Chassis/WP headers
Dr. Debug LED
Temperature sensor, water flow headers
Speaker
Front panel header
Flipping the board around to the rear IO area, there’s the pre-installed IO plate which matches the colors and design of the rest of the board. There are 10 USB ports: You get two USB 3.2 Gen 2 ports (Type-A and Type-C), six USB 3.2 Gen 1 ports, and two USB 2.0 ports, all of which have ESD protection. Two of these ports, outlined in red, are the Lightning ports. The ports are sourced from two different controller interfaces, allowing gamers to connect their mice/keyboard with the lowest jitter latency–according to ASRock. On the video front, the PG Velocita includes an HDMI port and DisplayPort for use with the integrated video on the processor.
Also here are the Intel (black) and Killer (blue) Ethernet ports on the networking front. The Killer LAN can communicate directly with the CPU, yielding lower latency than chipset-connected LAN–again according to ASRock. Next up are the antenna ports for Wi-Fi 6E and, finally, the gold-plated 5-plug audio stack plus SPDIF.
Crytek has just released an update for Crysis Remastered, version 2.1.2, with changes to the game’s ray tracing graphics. The new update features an experimental Boost mode that pushes all the game’s ray-traced reflections to even greater levels.
Boost mode enters Crysis Remastered as a purely experimental feature, but when enabled, it adds ray-traced reflections to almost every surface in-game, adds proper support for ray randomization on rough surfaces, and boosts specular reflectance on all surfaces by “about 5%,” according to the announcement of the update.
Crysis Remastered Boost Mode: Hands-on
I tested the game with an RTX 2060 Super, with all the game’s settings set to their maximum and DLSS set to balanced mode.
In testing the new Boost mode myself, I found the feature brings a very minor improvement to visual quality. As you can see in the screenshots below, surfaces like rocks and tree leaves look a bit brighter with Boost mode on. With Boost mode off, the game looks like it has higher contrast.
I also noticed that my framerate dropped by around 5 frames per second (fps) when using Boost mode, and that’s with my average framerate being 35 fps with Boost mode disabled. That’s a lot of lost performance for a minor bump in visual quality.
Overall, Boost mode seems like a cool idea, but I’d like to see a more noticeable visual improvement if it’s to graduate past the experimental feature phase. As it stands, the ray tracing boost is barely noticeable at all when actually playing the game.
Interestingly, Crytek has been working on this boost mode for quite a while. In their announcement, the game’s developers noted that Boost mode came about during the initial development of Crysis Remastered. Since the game is all about pushing PC graphics to the highest detail possible, the developers wanted to push the ray tracing envelope as high as they could too.
The devs added that the update is targeted toward “ray tracing enthusiasts” specifically. This is probably why Boost mode is an option all by itself and not part of the “Can it Run Crysis?” graphical settings in the menu.
Bug Fixes
On the bright side, this update isn’t only about enhancing ray tracing. There are a lot of bug fixes with this update too, as per the developer:
Motion blur has been reactivated: Motion blur was temporarily disabled with 2.1.1, due to some issues where the motion blur effect was far more intense than intended. A fix has been implemented and motion blur is now available once again.
Improvement made to the model for the SCAR.
Fixed a bug that allowed players to activate Anti-Aliasing (AA) from the options menu when DLSS is turned on.
Developer Note: By design, AA is not supposed to work while Nvidia’s DLSS is active. To increase the visibility on this, AA will now be greyed out once DLSS has been turned on.
Fixed a UI issue that resulted in the selected difficulty settings appearing as set to ‘Easy’ when another difficulty has been selected.
Fixed some black or random textures that could appear when ray tracing is enabled on some PCs.
Reduced and fixed several visible cracks that can appear in ray tracing geometry.
Fixed a rare crash that could occur when ray tracing has been enabled.
Fixed some incorrect texture tiling that could be visible when ray tracing is enabled.
Fixed an issue that could result in stalls when loading ray tracing textures.
Optimized the GPU memory usage in RTX mode (freed around 300+ MB on GPU).
Fixed an issue with some light clip volumes support when ray tracing is enabled which will now allow for more precise RT shading.
Fixed an issue that caused RayTracing Screen Space Reflections to not be visible on distant surfaces.
Optimized RayTracing Screen Space Reflections performance for 4K.
The University of Minnesota’s path to banishment was long, turbulent, and full of emotion
On the evening of April 6th, a student emailed a patch to a list of developers. Fifteen days later, the University of Minnesota was banned from contributing to the Linux kernel.
“I suggest you find a different community to do experiments on,” wrote Linux Foundation fellow Greg Kroah-Hartman in a livid email. “You are not welcome here.”
How did one email lead to a university-wide ban? I’ve spent the past week digging into this world — the players, the jargon, the university’s turbulent history with open-source software, the devoted and principled Linux kernel community. None of the University of Minnesota researchers would talk to me for this story. But among the other major characters — the Linux developers — there was no such hesitancy. This was a community eager to speak; it was a community betrayed.
The story begins in 2017, when a systems-security researcher named Kangjie Lu became an assistant professor at the University of Minnesota.
Lu’s research, per his website, concerns “the intersection of security, operating systems, program analysis, and compilers.” But Lu had his eye on Linux — most of his papers involve the Linux kernel in some way.
The Linux kernel is, at a basic level, the core of any Linux operating system. It’s the liaison between the OS and the device on which it’s running. A Linux user doesn’t interact with the kernel, but it’s essential to getting things done — it manages memory usage, writes things to the hard drive, and decides what tasks can use the CPU when. The kernel is open-source, meaning its millions of lines of code are publicly available for anyone to view and contribute to.
Well, “anyone.” Getting a patch onto people’s computers is no easy task. A submission needs to pass through a large web of developers and “maintainers” (thousands of volunteers, who are each responsible for the upkeep of different parts of the kernel) before it ultimately ends up in the mainline repository. Once there, it goes through a long testing period before eventually being incorporated into the “stable release,” which will go out to mainstream operating systems. It’s a rigorous system designed to weed out both malicious and incompetent actors. But — as is always the case with crowdsourced operations — there’s room for human error.
Some of Lu’s recent work has revolved around studying that potential for human error and reducing its influence. He’s proposed systems to automatically detect various types of bugs in open source, using the Linux kernel as a test case. These experiments tend to involve reporting bugs, submitting patches to Linux kernel maintainers, and reporting their acceptance rates. In a 2019 paper, for example, Lu and two of his PhD students, Aditya Pakki and Qiushi Wu, presented a system (“Crix”) for detecting a certain class of bugs in OS kernels. The trio found 278 of these bugs with Crix and submitted patches for all of them — the fact that maintainers accepted 151 meant the tool was promising.
On the whole, it was a useful body of work. Then, late last year, Lu took aim not at the kernel itself, but at its community.
In “On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits,” Lu and Wu explained that they’d been able to introduce vulnerabilities into the Linux kernel by submitting patches that appeared to fix real bugs but also introduced serious problems. The group called these submissions “hypocrite commits.” (Wu didn’t respond to a request for comment for this story; Lu referred me to Mats Heimdahl, the head of the university’s department of computer science and engineering, who referred me to the department’s website.)
The explicit goal of this experiment, as the researchers have since emphasized, was to improve the security of the Linux kernel by demonstrating to developers how a malicious actor might slip through their net. One could argue that their process was similar, in principle, to that of white-hat hacking: play around with software, find bugs, let the developers know.
But the loudest reaction the paper received, on Twitter and across the Linux community, wasn’t gratitude — it was outcry.
“That paper, it’s just a lot of crap,” says Greg Scott, an IT professional who has worked with open-source software for over 20 years.
“In my personal view, it was completely unethical,” says security researcher Kenneth White, who is co-director of the Open Crypto Audit Project.
The frustration had little to do with the hypocrite commits themselves. In their paper, Lu and Wu claimed that none of their bugs had actually made it to the Linux kernel — in all of their test cases, they’d eventually pulled their bad patches and provided real ones. Kroah-Hartman, of the Linux Foundation, contests this — he told The Verge that one patch from the study did make it into repositories, though he notes it didn’t end up causing any harm.
Still, the paper hit a number of nerves among a very passionate (and very online) community when Lu first shared its abstract on Twitter. Some developers were angry that the university had intentionally wasted the maintainers’ time — which is a key difference between Minnesota’s work and a white-hat hacker poking around the Starbucks app for a bug bounty. “The researchers crossed a line they shouldn’t have crossed,” Scott says. “Nobody hired this group. They just chose to do it. And a whole lot of people spent a whole lot of time evaluating their patches.”
“If I were a volunteer putting my personal time into commits and testing, and then I found out someone’s experimenting, I would be unhappy,” Scott adds.
Then, there’s the dicier issue of whether an experiment like this amounts to human experimentation. It doesn’t, according to the University of Minnesota’s Institutional Review Board. Lu and Wu applied for approval in response to the outcry, and they were granted a formal letter of exemption.
The community members I spoke to didn’t buy it. “The researchers attempted to get retroactive Institutional Review Board approval on their actions that were, at best, wildly ignorant of the tenets of basic human subjects’ protections, which are typically taught by senior year of undergraduate institutions,” says White.
“It is generally not considered a nice thing to try to do ‘research’ on people who do not know you are doing research,” says Kroah-Hartman. “No one asked us if it was acceptable.”
That thread ran through many of the responses I got from developers — that regardless of the harms or benefits that resulted from its research, the university was messing around not just with community members but with the community’s underlying philosophy. Anyone who uses an operating system places some degree of trust in the people who contribute to and maintain that system. That’s especially true for people who use open-source software, and it’s a principle that some Linux users take very seriously.
“By definition, open source depends on a lively community,” Scott says. “There have to be people in that community to submit stuff, people in the community to document stuff, and people to use it and to set up this whole feedback loop to constantly make it stronger. That loop depends on lots of people, and you have to have a level of trust in that system … If somebody violates that trust, that messes things up.”
After the paper’s release, it was clear to many Linux kernel developers that something needed to be done about the University of Minnesota — previous submissions from the university needed to be reviewed. “Many of us put an item on our to-do list that said, ‘Go and audit all umn.edu submissions,’” said Kroah-Hartman, who was, above all else, annoyed that the experiment had put another task on his plate. But many kernel maintainers are volunteers with day jobs, and a large-scale review process didn’t materialize. At least, not in 2020.
On April 6th, 2021, Aditya Pakki, using his own email address, submitted a patch.
There was some brief discussion from other developers on the email chain, which fizzled out within a few days. Then Kroah-Hartman took a look. He was already on high alert for bad code from the University of Minnesota, and Pakki’s email address set off alarm bells. What’s more, the patch Pakki submitted didn’t appear helpful. “It takes a lot of effort to create a change that looks correct, yet does something wrong,” Kroah-Hartman told me. “These submissions all fit that pattern.”
So on April 20th, Kroah-Hartman put his foot down.
“Please stop submitting known-invalid patches,” he wrote to Pakki. “Your professor is playing around with the review process in order to achieve a paper in some strange and bizarre way.”
Maintainer Leon Romanovsky then chimed in: he’d taken a look at four previously accepted patches from Pakki and found that three of them added “various severity” security vulnerabilities.
Kroah-Hartman hoped that his request would be the end of the affair. But then Pakki lashed back. “I respectfully ask you to cease and desist from making wild accusations that are bordering on slander,” he wrote to Kroah-Hartman in what appears to be a private message.
Kroah-Hartman responded. “You and your group have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work. Now you submit a series of obviously-incorrect patches again, so what am I supposed to think of such a thing?” he wrote back on the morning of April 21st.
Later that day, Kroah-Hartman made it official. “Future submissions from anyone with a umn.edu address should be default-rejected unless otherwise determined to actually be a valid fix,” he wrote in an email to a number of maintainers, as well as Lu, Pakki, and Wu. Kroah-Hartman reverted 190 submissions from Minnesota affiliates — 68 couldn’t be reverted but still needed manual review.
It’s not clear what experiment the new patch was part of, and Pakki declined to comment for this story. Lu’s website includes a brief reference to “superfluous patches from Aditya Pakki for a new bug-finding project.”
What is clear is that Pakki’s antics have finally set the delayed review process in motion; Linux developers began digging through all patches that university affiliates had submitted in the past. Jonathan Corbet, the founder and editor in chief of LWN.net, recently provided an update on that review process. Per his assessment, “Most of the suspect patches have turned out to be acceptable, if not great.” Of over 200 patches that were flagged, 42 are still set to be removed from the kernel.
Regardless of whether their reaction was justified, the Linux community gets to decide if the University of Minnesota affiliates can contribute to the kernel again. And that community has made its demands clear: the school needs to convince them its future patches won’t be a waste of anyone’s time.
What will it take to do that? In a statement released the same day as the ban, the university’s computer science department suspended its research into Linux-kernel security and announced that it would investigate Lu’s and Wu’s research method.
But that wasn’t enough for the Linux Foundation. Mike Dolan, Linux Foundation SVP and GM of projects, wrote a letter to the university on April 23rd, which The Verge has viewed. Dolan made four demands. He asked that the school release “all information necessary to identify all proposals of known-vulnerable code from any U of MN experiment” to help with the audit process. He asked that the paper on hypocrite commits be withdrawn from publication. He asked that the school ensure future experiments undergo IRB review before they begin, and that future IRB reviews ensure the subjects of experiments provide consent, “per usual research norms and laws.”
Two of those demands have since been met. Wu and Lu have retracted the paper and have released all the details of their study.
The university’s status on the third and fourth counts is unclear. In a letter sent to the Linux Foundation on April 27th, Heimdahl and Loren Terveen (the computer science and engineering department’s associate department head) maintain that the university’s IRB “acted properly,” and argue that human-subjects research “has a precise technical definition according to US federal regulations … and this technical definition may not accord with intuitive understanding of concepts like ‘experiments’ or even ‘experiments on people.’” They do, however, commit to providing more ethics training for department faculty. Reached for comment, university spokesperson Dan Gilchrist referred me to the computer science and engineering department’s website.
Meanwhile, Lu, Wu, and Pakki apologized to the Linux community this past Saturday in an open letter to the kernel mailing list, which contained some apology and some defense. “We made a mistake by not finding a way to consult with the community and obtain permission before running this study; we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for hypocrite patches,” the researchers wrote, before going on to reiterate that they hadn’t put any vulnerabilities into the Linux kernel, and that their other patches weren’t related to the hypocrite commits research.
Kroah-Hartman wasn’t having it. “The Linux Foundation and the Linux Foundation’s Technical Advisory Board submitted a letter on Friday to your university,” he responded. “Until those actions are taken, we do not have anything further to discuss.”
From the University of Minnesota researchers’ perspective, they didn’t set out to troll anyone — they were trying to point out a problem with the kernel maintainers’ review process. Now the Linux community has to reckon with the fallout of their experiment and what it means about the security of open-source software.
Some developers rejected the University of Minnesota researchers’ perspective outright, claiming the fact that it’s possible to fool maintainers should be obvious to anyone familiar with open-source software. “If a sufficiently motivated, unscrupulous person can put themselves into a trusted position of updating critical software, there’s honestly little that can be done to stop them,” says White, the security researcher.
On the other hand, it’s clearly important to be vigilant about potential vulnerabilities in any operating system. And for others in the Linux community, as much ire as the experiment drew, its point about hypocrite commits appears to have been somewhat well taken. The incident has ignited conversations about patch-acceptance policies and how maintainers should handle submissions from new contributors, across Twitter, email lists, and forums. “Demonstrating this kind of ‘attack’ has been long overdue, and kicked off a very important discussion,” wrote maintainer Christoph Hellwig in an email thread with other maintainers. “I think they deserve a medal of honor.”
“This research was clearly unethical, but it did make it plain that the OSS development model is vulnerable to bad-faith commits,” one user wrote in a discussion post. “It now seems likely that Linux has some devastating back doors.”
Corbet also called for more scrutiny around new changes in his post about the incident. “If we cannot institutionalize a more careful process, we will continue to see a lot of bugs, and it will not really matter whether they were inserted intentionally or not,” he wrote.
And even for some of the paper’s most ardent critics, the process did prove a point — albeit, perhaps, the opposite of the one Wu, Lu, and Pakki were trying to make. It demonstrated that the system worked.
Eric Mintz, who manages 25 Linux servers, says this ban has made him much more confident in the operating system’s security. “I have more trust in the process because this was caught,” he says. “There may be compromises we don’t know about. But because we caught this one, it’s less likely we don’t know about the other ones. Because we have something in place to catch it.”
To Scott, the fact that the researchers were caught and banned is an example of Linux’s system functioning exactly the way it’s supposed to. “This method worked,” he insists. “The SolarWinds method, where there’s a big corporation behind it, that system didn’t work. This system did work.”
“Kernel developers are happy to see new tools created and — if the tools give good results — use them. They will also help with the testing of these tools, but they are less pleased to be recipients of tool-inspired patches that lack proper review,” Corbet writes. The community seems to be open to the University of Minnesota’s feedback — but as the Foundation has made clear, it’s on the school to make amends.
“The university could repair that trust by sincerely apologizing, and not fake apologizing, and by maybe sending a lot of beer to the right people,” Scott says. “It’s gonna take some work to restore their trust. So hopefully they’re up to it.”
Minisforum has introduced its new ultra-compact form-factor (UCFF) desktop PC that combines miniature dimensions with decent performance, rich connectivity, and upgradeability. The TL50 system packs Intel’s 11th Generation Core ‘Tiger Lake’ processor with built-in Xe Graphics and features two 2.5GbE connectors, a Thunderbolt 4 port, and three display outputs.
The PC is based on Intel’s quad-core Core i5-1135G7 processor, paired with 12GB of LPDDR4-3200/3733 memory as well as a 512GB M.2-2280 SSD with a PCIe interface. The CPU is cooled by a heatsink and a fan, so the 28W chip should be able to spend a fair amount of time in Turbo mode.
While Intel’s Tiger Lake platform enables PC makers to build very feature-rich computers on a very small footprint, there are only a few UCFF desktops featuring these processors that take full advantage of their capabilities. Minisforum’s TL50, which measures 5.9 × 5.9 × 2.2 inches, is a prime example.
Normally, miniature desktops have constraints when it comes to graphics performance and storage capacity, but the Minisforum TL50 can be equipped with two 2.5-inch HDDs or SSDs as well as an external eGFX graphics solution using a Thunderbolt 4 port.
The TL50’s connectivity department looks quite solid, including a Wi-Fi 6 + Bluetooth module, two 2.5GbE ports, three display outputs (DisplayPort 1.4, HDMI 2.0, and Thunderbolt 4), six USB Type-A connectors (four USB 3.0, two USB 2.0), audio input and output, and one USB Type-C port for the power supply.
The Minisforum TL50 is currently available for pre-order through Japanese crowdfunding site Makuake starting from $651, reports Liliputing. The company plans to make the systems available by the end of July, though by then they will naturally be more expensive.
Samsung Unpacked just took place this week and this time around, Samsung introduced a new gaming laptop – the Galaxy Book Odyssey. While we have seen listings and rumours about laptops equipped with an RTX 3050 Ti GPU, the Galaxy Book Odyssey is the first one to be officially announced.
The Galaxy Book Odyssey will feature Nvidia RTX 3050 Ti mobile graphics, which is expected to feature 2560 CUDA cores and 4GB of GDDR6 memory. An RTX 3050 non-Ti option will also be available.
Samsung has equipped the Galaxy Book Odyssey laptop with Intel 11th Gen Core H-series processors. The official infographic further informs us it will feature both i5 and i7 models, but it’s unclear if it’s referring to the 4-core parts or the upcoming 6 and 8-core CPUs.
The rest of the specs include up to 2TB of NVMe storage, a maximum of 32GB of DDR4 memory, support for Dolby Atmos, and a 15.6-inch FullHD display. For connectivity, there’s an HDMI port, 2x USB-C ports, 3x USB-A 3.2 ports, a Micro SD card reader, a Gigabit Ethernet port, Wi-Fi 6, Bluetooth 5.1, and a 3.5mm audio-in/out jack. The laptop will only be available in black and comes with an 83Wh battery powered by a 135W USB-C charger.
The Samsung Galaxy Book Odyssey is scheduled to release in August, with a starting price of $1399.
KitGuru says: Are you looking for a new laptop for gaming-on-the-go? What do you think of the Samsung Galaxy Book Odyssey?
AMD Zen 5 ‘Strix Point’ APU to be based on 3nm node and hybrid core architecture
We’re still some ways off from seeing AMD launch its Zen 5 architecture, but nonetheless, the rumour mill is churning out some early information. Apparently, AMD’s Zen 5 APUs, codenamed ‘Strix Point’, will be based on 3nm process technology and feature the emerging hybrid core architecture.
According to MoePC (via @Avery78), the Zen 5 APUs will reportedly belong to the Ryzen 8000 series and feature a hybrid architecture with up to 8x big (high-performance) cores and 4x small (high-efficiency) cores, for a total of 20 threads (presumably 16 threads from SMT-enabled big cores plus four single-threaded efficiency cores).
The Strix Point APUs are scheduled to release in 2024, and their iGPU performance targets have reportedly already been set, though specific details on this were not shared. Besides the jump to a hybrid core architecture, the Zen 5-based APUs may also bring a new memory subsystem with significant changes. It’s unclear if these changes will also be seen in Zen 5-based CPUs.
The report also notes that AMD is no longer going forward with plans for the recently rumoured ‘Warhol’ series of CPUs, possibly due to the ongoing chip shortage. If Warhol really is out of the picture, then a Zen 3 refresh would be the Ryzen 6000 series, Zen 4 would become the Ryzen 7000 series, and Zen 5 the Ryzen 8000 series.
KitGuru says: With Strix Point APUs allegedly releasing in 2024, we are still far from seeing something official from AMD. In a 3-year span, much can change, especially with the current chip situation that we are facing. What do you expect from Zen 5-based chips?
Cyberpunk 2077 Patch 1.22 brings further optimisations and fixes
At this point in time, CD Projekt Red has picked up work on upcoming new content for Cyberpunk 2077, but there is still a team working to squash lingering bugs and improve performance. The latest patch does just that, with some more open-world and quest fixes as well as further optimisations.
Cyberpunk 2077 Patch 1.22 is now live across all platforms, addressing “the most frequently reported issues”. For quests and open world, we have the following fixes:
The Metro: Memorial Park dataterm should now properly count towards the Frequent Flyer achievement.
Fixed glitches in Johnny’s appearance occurring after buying the Nomad car from Lana.
Fixed an issue in Gig: Until Death Do Us Part where it was not possible to use the elevator.
Fixed an issue in Epistrophy where the player could get trapped in the garage if they didn’t follow the drone and ran into the control room instead.
Added a retrofix for the issue we fixed in 1.21, where Takemura could get stuck in Japantown Docks in Down on the Street – for players who already experienced it before update 1.21 and continued playing until 1.22, Takemura will now teleport to Wakako’s parlor.
Fixed an issue preventing the player from opening the phone in the apartment at the beginning of New Dawn Fades.
Fixed an issue where the player could become unable to use weapons and consumables after interacting with a maintenance panel in Riders on the Storm.
This patch also fixes instances of NPC clothing clipping and an issue with misaligned subtitles, and it brings memory-management improvements to the PC version, which should reduce the number of crashes. On the PC side, further optimisations are now in place for skin and cloth rendering, which should now incur less of a performance hit.
For consoles, the Xbox One version gets additional GPU and ESRAM optimisations and memory management has also been improved for the game on PlayStation 5.
KitGuru Says: We’ve had several months of bug fixing patches, hopefully soon, CD Projekt Red will be ready to start discussing plans for future Cyberpunk 2077 content. We know new story content is coming thanks to the efforts of dataminers, so it is just a matter of when we’ll start seeing announcements.
We might be just three weeks away from seeing the RTX 3080 Ti announced. The upcoming graphics card has been rumoured for a long time now and has reportedly been hit with multiple delays. However, sources now reckon we’ll finally see the GPU in action next month.
The rumour first appeared on the Expreview forums, which have since been privatised. Fortunately, ITHome (via Wccftech) spotted it. According to this source, Nvidia may announce the RTX 3080 Ti on the 18th of May, followed by reviews on the 25th of May and a retail launch a day later, on the 26th of May.
Apparently, the RTX 3080 Ti will ship with Nvidia’s mining limiter in place. This will be an updated version, as workarounds are now available for the first revision that shipped on the RTX 3060 in February.
From what we’ve gathered so far, the RTX 3080 Ti is expected to feature the GA102-225 GPU with 10240 CUDA cores, 80 RT cores, and 320 Tensor cores. As far as clock speeds go, rumour has it that we’ll see a 1365MHz base clock and a 1665MHz boost clock, as well as 12GB of GDDR6X memory clocked at 19Gbps across a 384-bit memory bus.
KitGuru says: We could have another exciting GPU launch on our hands within the next few weeks. Are any of you hoping to buy an RTX 3080 Ti when it launches?
Just when we thought that smart cities, smart factories, IoT devices, autonomous vehicles, and robots would be the main generators of data requiring storage space in the coming years, the Chia cryptocurrency has demonstrated that it, too, is a formidable generator of data, at least for the time being.
In about a month, the storage space allocated to the Chia network increased from 120PB all the way to 1,143PB, or 1.14 exabytes. 1.14EB equals 1,140,000TB, or roughly 63,333 18TB hard drives.
Chia is a proof-of-space-and-time cryptocurrency that uses storage space on farmers’ systems to hold collections of cryptographic numbers called ‘plots.’ When the blockchain broadcasts a challenge for the next block, farmers’ systems scan their plots to see if they hold the hash closest to the challenge. This approach eliminates the proof-of-work concept used by Bitcoin and Ethereum, drastically lowering the power needed for the process, which Chia’s developers call ‘farming’ rather than mining.
Meanwhile, the probability of winning a block corresponds to the percentage of the total network space that a farmer controls, which essentially means that someone with more available space has a better chance of winning. So, while accelerators and GPUs are not needed for Chia farming, someone with more storage space hosting more plots earns more.
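To make the proof-of-space idea a little more concrete, here is a deliberately simplified Python sketch. It is not Chia’s actual plot format, lookup, or difficulty mechanism; each ‘plot’ is reduced to a single random hash, and whichever farmer stores the value closest to the broadcast challenge wins. Even this toy version shows why the win rate tracks a farmer’s share of the total space.

```python
import hashlib
import os
from collections import Counter

def make_plots(num_plots):
    # Each toy "plot" is one precomputed random 256-bit value.
    return [hashlib.sha256(os.urandom(32)).digest() for _ in range(num_plots)]

# One farmer dedicates 10% of the toy network's space, the other 90%.
farmers = {"small_farmer": make_plots(10), "big_farmer": make_plots(90)}

def winner(challenge):
    # Whoever stores the value numerically closest to the challenge wins the block.
    def distance(plot_hash):
        return abs(int.from_bytes(plot_hash, "big") - int.from_bytes(challenge, "big"))
    return min(((name, min(distance(p) for p in plots)) for name, plots in farmers.items()),
               key=lambda pair: pair[1])[0]

wins = Counter(winner(os.urandom(32)) for _ in range(1000))
print(wins)  # big_farmer should win roughly 90% of the 1,000 challenges
```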
At present, creating each plot requires around 350GB of temporary storage space and 4GB of RAM, so plotting 100 plots at a time calls for roughly 35TB of scratch space and 400GB of RAM. While buying four 10TB HDDs is not cheap, 400GB of RAM (and the host CPUs to support it) costs considerably more.
Thousands of Chia farmers now build machines with tens of HDDs that can store hundreds of terabytes of data. While a single such drive does not consume much power, about 6.5W when operating and about 5.6W when idle, tens of such HDDs can draw hundreds of watts under load and usually more when spinning up. For example, a system with 32 Western Digital HC550 18TB HDDs powered by a monster motherboard with 32 SATA ports can consume around 180W when idling, and that does not count the power consumption of memory and compute modules.
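As a quick back-of-the-envelope check, the sketch below combines the per-drive power draw and per-plot plotting requirements quoted above; the figures come from this article, and the script is just illustrative arithmetic.

```python
# Rough arithmetic using the per-drive and per-plot figures quoted above.
NUM_DRIVES = 32
IDLE_W, ACTIVE_W = 5.6, 6.5          # per-drive power draw in watts
PLOT_TEMP_GB, PLOT_RAM_GB = 350, 4   # temporary space and RAM needed while creating one plot

print(f"{NUM_DRIVES} drives: ~{NUM_DRIVES * IDLE_W:.0f} W idle, ~{NUM_DRIVES * ACTIVE_W:.0f} W active")
print(f"Creating 100 plots in parallel: ~{100 * PLOT_TEMP_GB / 1000:.0f} TB of scratch space, "
      f"~{100 * PLOT_RAM_GB} GB of RAM")
```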
For obvious reasons, there are no consumer PC chassis or NAS boxes with 32 3.5-inch bays, while rack-mount chassis with backplanes designed for data centers are quite expensive. As a result, hardware used for Chia farming is either DIY or designed specifically for this purpose and nothing else. Essentially, in just a few months a new segment of the hardware market has developed around Chia farming.
It remains to be seen how Chia farming will develop going forward, but at this rate the amount of storage space used by the Chia network a year from now will be gargantuan.
Chinese motherboard manufacturer Onda (via ZOL) has launched the brand’s new Chia-D32H-D4 motherboard. The model name alone is enough to tell you that this motherboard is aimed at farming Chia cryptocurrency, which has already caused hard drive price spikes in Asia.
Designed for mining, rather than to compete with the best motherboards for gaming, the Chia-D32H-D4 is most likely a rebranded version of Onda’s existing B365 D32-D4 motherboard. It measures 530 x 310mm, so the Chia-D32H-D4 isn’t your typical motherboard. In fact, Onda has produced a special case with an included power supply for this specific model. The unspecified 800W power supply arrives with the 80Plus Gold certification, while the case features five cooling fans.
The Chia-D32H-D4’s selling point is obviously the motherboard’s 32 SATA ports, allowing you to attach up to 32 hard drives. The B365 chipset can only provide a limited number of SATA ports, so the Chia-D32H-D4 relies on third-party SATA controllers, such as those from Marvell, to get the count up to 32. We counted seven SATA controllers in the render of the motherboard. Assuming that each controller delivers up to four SATA ports, the remaining four should come from the B365 chipset itself.
At 18TB per drive, the motherboard can accommodate up to 576TB of storage for all your Chia farming activities, enough for roughly 5,300 finished 101.4GiB plots. Based on the current Chia network stats, that would amount to about 0.05% of the total Chia netspace, though that share is likely to shrink rapidly in the coming days if current trends continue, never mind the time required to actually generate that many plots.
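For reference, the arithmetic behind those capacity and netspace figures is shown below; the ~1,143PB netspace value is the one quoted in the earlier report and will already be out of date.

```python
# Capacity and netspace-share arithmetic for a 32 x 18TB configuration.
DRIVE_TB, NUM_DRIVES = 18, 32
FINAL_PLOT_GIB = 101.4     # size of a finished k=32 plot
NETSPACE_PB = 1143         # total Chia netspace quoted earlier; already outdated

capacity_bytes = DRIVE_TB * 1e12 * NUM_DRIVES
plots = capacity_bytes / (FINAL_PLOT_GIB * 2**30)
share = capacity_bytes / (NETSPACE_PB * 1e15)
print(f"{DRIVE_TB * NUM_DRIVES} TB raw -> ~{plots:.0f} plots, ~{share:.2%} of the quoted netspace")
```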
In terms of power connectors, the Chia-D32H-D4 comes equipped with a standard 24-pin power connector, one 8-pin EPS connector, and up to two 6-pin PCIe power connectors. The latter are designed exclusively to power the hard drives.
Based on the LGA1151 socket and B365 chipset, the Chia-D32H-D4 is very flexible when it comes to processor support. It’s listed as compatible with Intel’s Skylake, Kaby Lake, Coffee Lake, and Coffee Lake Refresh processors. The motherboard utilizes a modest six-phase power delivery subsystem, but that should be sufficient to handle processors up to the Core i9 tier.
Besides the deep storage requirements, Chia farming is reliant on memory as well. Creating a single Chia plot requires around 4GB of memory. The Chia-D32H-D4 offers four DDR4 memory slots, allowing up to 128GB of memory in the system, so on paper you can work on up to 32 plots in parallel.
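For a rough sense of what that parallelism means in practice, the sketch below estimates how long it could take to fill the board’s 32 drives. The RAM figures come from the article; the eight-hours-per-plot value is purely an assumption for illustration and is not something Onda or the Chia project has specified.

```python
# Plotting-throughput sketch. RAM figures are from the article; HOURS_PER_PLOT is assumed.
RAM_GB, RAM_PER_PLOT_GB = 128, 4
HOURS_PER_PLOT = 8            # assumed; varies widely with CPU and scratch SSD speed
TOTAL_PLOTS_NEEDED = 5300     # roughly what 32 x 18TB drives can hold (see above)

parallel_plots = RAM_GB // RAM_PER_PLOT_GB           # 32 plots in flight at once
plots_per_day = parallel_plots * 24 / HOURS_PER_PLOT
print(f"{parallel_plots} parallel plots -> ~{plots_per_day:.0f} plots/day, "
      f"~{TOTAL_PLOTS_NEEDED / plots_per_day:.0f} days to fill the drives")
```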
Expansion options on the Chia-D32H-D4 are limited to one PCIe x16 slot, one PCIe x1 slot, and one M.2 slot. Connectivity, however, is pretty generous. For connecting displays, you can choose between an HDMI port and a VGA port. There are also four USB 3.0 ports and two Gigabit Ethernet ports. Power buttons are located at both ends of the motherboard.
Onda hasn’t listed the Chia-D32H-D4 motherboard on its website, nor has it announced pricing. However, word on the street is that boards are already in the hands of Chia farmers.