Gigabyte has announced its new server for AI, high-performance computing (HPC), and data analytics. The G262-ZR0 machine is one of the industry’s first servers with four Nvidia A100 compute GPUs. The 2U system will be cheaper than Gigabyte’s and Nvidia’s servers with eight A100 processors, but will still provide formidable performance.
The Gigabyte G262-ZR0 is based on two AMD EPYC 7002-series ‘Rome’ processors with up to 64 cores per CPU, as well as four Nvidia A100 GPUs with either 40GB (1.6TB/s of bandwidth) or 80GB (2.0TB/s of bandwidth) of onboard HBM2 memory. Together, the four A100 processors offer 13,824 FP64 CUDA cores, 27,648 FP32 CUDA cores, and an aggregate 38.8 TFLOPS of FP64 and 78 TFLOPS of FP32 performance.
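As a quick sanity check, those totals line up with Nvidia’s published per-GPU A100 figures multiplied across the server’s four GPUs. The short sketch below is purely illustrative arithmetic, not vendor-supplied data:

```python
# Aggregate the per-GPU Nvidia A100 figures across the four GPUs in the G262-ZR0.
a100 = {
    "fp64_cores": 3456,    # FP64 CUDA cores per A100
    "fp32_cores": 6912,    # FP32 CUDA cores per A100
    "fp64_tflops": 9.7,    # peak FP64 TFLOPS per A100
    "fp32_tflops": 19.5,   # peak FP32 TFLOPS per A100
}
gpus = 4
print(gpus * a100["fp64_cores"], "FP64 CUDA cores")   # 13824
print(gpus * a100["fp32_cores"], "FP32 CUDA cores")   # 27648
print(gpus * a100["fp64_tflops"], "FP64 TFLOPS")      # 38.8
print(gpus * a100["fp32_tflops"], "FP32 TFLOPS")      # 78.0
```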
The machine can be equipped with 16 RDIMM or LRDIMM DDR4-3200 memory modules, three M.2 SSDs with a PCIe 4.0 x4 interface, and four 2.5-inch HDDs or SSDs with a SATA or Gen4 U.2 interface.
The machine also has two GbE ports, six low-profile PCIe Gen4 x16 expansion slots, one OCP 3.0 Gen4 x16 mezzanine slot, an ASpeed AST2500 BMC, and two 3000W 80+ Platinum redundant PSUs.
Gigabyte says that its G262-ZR0 machine will provide the highest GPU compute performance possible in a 2U chassis, which will be its main competitive advantage.
Gigabyte did not disclose pricing of its G262-ZR0 server, but it will naturally be significantly cheaper than an 8-way NVIDIA DGX A100 system or a similar machine from Gigabyte.
The high-end Asus ROG Rapture GT-AX11000 router brings oodles of highly configurable features to your network. But in our testing, the performance wasn’t quite in line with the high price.
For
Tri-band router
Trend Micro security included
High-end hardware specs
WTFast
Against
Manual firmware upgrade bug
Only five Ethernet ports
Expensive
For those who live their life by the “these go to eleven” philosophy, Asus has a high-end router for you. The Asus ROG Rapture GT-AX11000 ($450) includes nearly every feature you could reasonably ask for, then adds in even more for, as Nigel Tufnel would say, “…that extra push over the cliff.” If you’re after a router that gives you lots of software tweaks and gaming-friendly options to prioritize your gaming traffic, it’s a solid choice. But don’t buy it for performance alone, because despite all those antennae, we’ve seen similar speeds on routers that cost much less, some of them from Asus’ own product stack.
Design
If you’re after an unobtrusive router that can sit inconspicuously on a shelf, this is the polar opposite. The Asus ROG Rapture GT-AX11000 is a horizontal router with no less than eight antennas deployed circumferentially around its chunky, square body, two to each side. Even its 3.8-pound weight will preclude it from some shelves, and it is quite visually loud. Adorned with orange accents, it would look more at home on the spaceship set of an Avengers sequel than in most living rooms, so plan your placement accordingly. To complete the look, the ROG logo in the center of the router is lit by Aura RGB, which thankfully can be turned off for those times when, for some reason, you don’t want to draw attention to the large techno-crab-looking beast at the heart of your wireless world.
Specifications
Processor: 1.8GHz quad-core processor
Memory: 256MB NAND flash and 1GB DDR3 SDRAM
Ports: RJ45 Gigabit BaseT WAN x 1, RJ45 Gigabit BaseT LAN x 4, Multi-Gig Ethernet 2.5G/1G x 1, USB 3.1 Gen 1 x 2
Encryption: Open system, WPA/WPA2-Personal, WPA/WPA2-Enterprise
Wi-Fi Technology: IPv6, Universal beamforming, 2.4GHz x3, 5GHz-1 x3, 5GHz-2 x3
Dimensions: 11.3 x 4.74 x 14.86 inches
Weight: 4.1 pounds
Price: $449.99
The specs for the ROG Rapture GT-AX11000 are undoubtedly impressive. At the heart is a quad-core 1.8 GHz CPU with access to 256MB of NAND flash and 1GB of DDR3 SDRAM. The connections include a WAN port, four gigabit Ethernet ports, and a 2.5 Gb multi-gig Ethernet port. If we want to nitpick, that leaves a total of just five Ethernet ports aside from the WAN, and we would have liked to see a few more. There is also a pair of USB 3.1 ports for adding networked storage. Physical buttons are as follows:
WPS Button
Reset Button
Power Button
Wireless on/off Button
Boost Key
The wireless specs here also aim to impress, with the ability to send out three simultaneous signals, better known as tri-band, and support for the Wi-Fi 6 standard (also designated 802.11ax). The older 2.4 GHz band tops out at 1148Mbps, while each of the two 5 GHz bands can reach up to 4804Mbps. Peak theoretical throughput is achieved via 160 MHz-wide channels and OFDMA with beamforming.
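Incidentally, that is also where the “11000” in the model name comes from. Following the usual router-marketing convention (an assumption on our part, but one Asus and its rivals consistently use), the class rating is simply the per-band maximums added together and rounded up:

```python
# Sum the theoretical per-band link rates to reproduce the "AX11000" class rating.
bands_mbps = {"2.4 GHz": 1148, "5 GHz-1": 4804, "5 GHz-2": 4804}
print(sum(bands_mbps.values()))  # 10756, rounded up to the "11000" in GT-AX11000
```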
Setup
Setup of the GT-AX11000 starts with manually screwing in the eight antennas for the router. After attaching and plugging in the requisite wires, we next fired up our computer’s browser and followed the prompts for initial setup, including setting a wireless password.
A glitch we initially encountered was that the firmware the router shipped with could not be updated, even when we purposefully triggered an update; we just got a message that the router could not connect to the server. Thankfully, the workaround of manually searching for and downloading the firmware from the Asus website and then uploading it to the router was successful. After that, the router could connect to the Asus server automatically for further updates.
Features
The GT-AX11000 has bucketloads of features, and is sure to cover the needs of just about every reasonable use case for a gaming router. This includes integrated VPN, the ability to work with other Asus routers to create a mesh network, and a traffic analyzer.
Focusing on the gaming features, the GT-AX11000 starts with tri-band frequencies, with the recommendation of Asus to designate one of the two 5 GHz bands only for gaming to avoid congestion altogether.
Then there is Triple Level Acceleration, with prioritization of the Gaming Port; Game First V, which is client-side traffic shaping; Game Boost, Asus’ name for gaming-priority adaptive Quality of Service; and WTFast, a gamer’s private network. Yes, that’s four, and perhaps it should be renamed Quadruple Level Acceleration.
Finally, there is Game Radar, which can measure ping times to servers in various worldwide locations. We used it to check latency to several Overwatch servers and choose the best one to minimize lag.
Security
The GT-AX11000 has integrated security from Trend Micro, which supplies AiProtection Pro to the router for full network protection. Various functions are provided, which include a router security assessment to locate vulnerabilities and blocking of infected devices.
Performance
Using our Netperf software for throughput testing showed some solid results between this Asus GT-AX11000 router and our Wi-Fi 6 client. The near test gets run at 8 feet away with a direct line of sight, and far is 36 feet away on a different floor with ductwork intervening. This also demonstrates the significantly faster speeds on the 5 GHz frequency.
Bandwidth (Mbps)
2.4 GHz near: 396.46
2.4 GHz far: 143.3
5 GHz near: 1296.48
5 GHz far: 937.21
Testing Configuration | QoS | FRAPS avg | min | max | 8K dropped frames | Pingplotter spikes
Ethernet | No | 111.761 | 98 | 139 | n/a | 0
Ethernet + 10 8k videos | No | 110.549 | 96 | 137 | 38.54% | 1
Ethernet + 10 8k videos | adaptive, gaming priority | 106.933 | 94 | 137 | 35.80% | 1
Ethernet, 2.5G port | No | 110.883 | 95 | 137 | n/a | 0
Ethernet, 2.5G port, 10 8k videos | No | 24.283 | 9 | 41 | 62.20% | 10
Ethernet, 2.5G port, 10 8k videos | adaptive, gaming priority | 101.717 | 56 | 133 | 13.40% | 6
5 GHz | No | 105.683 | 92 | 132 | n/a | 0
5 GHz + 10 8k videos | No | 109.067 | 92 | 134 | 57.90% | 0
5 GHz + 10 8k videos | adaptive, gaming priority | 111.467 | 97 | 138 | 3.30% | 1
2.4 GHz + 10 8k videos | adaptive, game priority | 109.7 | 94 | 127 | 27.80% | 4
Next, we look at the network congestion testing of the GT-AX11000, where we came away wanting just a little more. It’s not that the results weren’t plenty solid (they were), but rather that the bar was set so high in our minds for such a top-end gaming router.
For example, the 5 GHz gaming test with the ten 8k videos playing and QoS set to adaptive/game priority shows us how well that staggering amount of network congestion is handled. Our Overwatch game played at 111.467 FPS, a rate that closely matches the same game on a wired connection, yet the dropped frame rate on our 8k video was low at 3.3%, much lower than the 35.8% rate that was seen when the same test was run on Ethernet.
The tests run on the 2.5G Ethernet port show no improvement compared to the 1G Ethernet. Given that our test laptop (an Asus G512LW-WS74) doesn’t have a 2.5GbE port, that’s not exactly surprising. But oddly, the 2.5G test with the ten 8k video streams produced the highest rate of dropped video frames with QoS disabled, at a sky-high 62.2%, worse than the 1G Ethernet port. The reasons for this aren’t entirely clear, but could be some combination of hardware and software issues with the 2.5Gb port; without a faster 2.5Gb device to test with, it’s hard to say. But if your laptop or desktop doesn’t have a 2.5Gb Ethernet port, the safe bet is to stick with one of the 1GbE ports instead.
We also found that compared to the Asus RT-AX82U midrange router (which costs more than $200 less than its big brother) the scores are pretty similar, making it hard to justify the price difference, at least from performance alone.
Pricing
At a list price of $449, the Asus GT-AX11000 is clearly priced for the high-end market. The problem whenever you compare the top end of any product, be it a CPU, a GPU, or this router, is that you often bump up against the law of diminishing returns: the price increases substantially at the top, while the features and performance are only a little better than the products beneath it. Analyzed as a pure value proposition, it is hard to argue in favor of the Asus GT-AX11000. But for those who want every possible bell and whistle in their wireless setup, this Asus option makes a case for its crab-like self.
Bottom Line
Overall, while the Asus GT-AX11000 doesn’t offer the best bang for the buck, it is still a solid piece of gear for those who can afford the higher price point. The pros include the integrated gaming features such as WTFast, adaptive QoS, and Game Radar. We also appreciate the included security to protect the network. Some shortcomings are the automatic firmware upgrade issue we encountered, benchmark results that did not significantly best Asus’ own midrange alternative, and the limit of five Ethernet ports. But for those who like its looks, and who want their router to go to eleven, the Asus GT-AX11000 is a feature-packed, aggressive-looking option.
Just note that the one cutting-edge feature this model lacks is Wi-Fi 6E, which makes use of the newly uncluttered 6GHz spectrum. For that, you’ll need to pay $100 or so more, at least on the Asus side, and opt for the ROG Rapture GT-AXE11000. You may have to wait a bit to find that model in stock, however, as availability when we wrote this was pretty spotty, not unlike some of the best graphics cards or best CPUs.
News outlet CRN reported that Gigabyte has retired the GeForce RTX 3090 Turbo 24G announced back in September of last year. The product page for the graphics card is no longer available, which confirms CRN’s report.
Gigabyte’s sudden plans to cancel the GeForce RTX 3090 Turbo 24G will certainly put its server partners in a tight situation. Although Nvidia has a formidable Ampere compute graphics card in the shape of the A100, many vendors preferred to roll with the GeForce RTX 3090 due to the latter’s better price-to-performance ratio. The A100 retails for close to $10,000 while the GeForce RTX 3090 can be found for $1,499 on a good day.
The GeForce RTX 3090 Turbo 24G was a great option for vendors putting together budget server offerings because the graphics card met all the requirements of a compute card: it featured a blower design, occupied only two expansion slots, and came equipped with 24GB of high-speed GDDR6X memory, which is a big plus for deep learning workloads.
It’s plausible that Gigabyte might have received some sort of warning or recommendation from Nvidia to preemptively ax the GeForce RTX 3090 Turbo 24G. Word was probably getting around town that manufacturers were opting for the GeForce RTX 3090 instead of the more expensive A100 for their data center solutions. In fact, Nvidia discourages the deployment of GeForce and Titan graphics cards in a data center setting. The aforementioned products don’t come with the same level of features as Nvidia’s data center offerings, such as an extended warranty, enterprise support, certification for data center applications and a longer lifecycle. They also come with a smaller price tag.
Now that the GeForce RTX 3090 Turbo 24G has officially reached the end-of-life (EOL) status, vendors will have to look for another viable solution. Luckily, Gigabyte’s GeForce RTX 3090 Turbo 24G wasn’t the only GeForce RTX 3090 with a blower-design on the market. Asus and MSI also put out similar designs. Heck, even South Korean manufacturer Emtek’s GeForce RTX 3090 Blower Edition is a legit alternative if push comes to shove.
All of those obnoxious marketing emails that crowd your inbox aren’t just pushing a product. They’re also tracking whether you’ve opened the email, when you opened it, and where you were at the time by using software like MailChimp to embed tracking software into the message.
How does it work? A single tracking pixel is embedded in the email, usually (but not always) hidden within an image or a link. When the email is opened and the pixel is loaded, the request sends information back to the company’s server.
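To make that mechanism concrete, here is a minimal, hypothetical sketch of the server side of a tracking pixel. The URL, the id parameter, and the logging format are all inventions for illustration; this is not any marketing vendor’s actual code:

```python
# Minimal sketch of a tracking-pixel endpoint using only the Python standard library.
# The sender embeds <img src="https://tracker.example/open.gif?id=RECIPIENT_ID"> in the
# email; when the mail client loads remote images, this request fires and the server
# learns when, and from which IP address, the message was opened.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        recipient = query.get("id", ["unknown"])[0]
        print(f"{datetime.now(timezone.utc).isoformat()} open: id={recipient} "
              f"ip={self.client_address[0]} ua={self.headers.get('User-Agent')}")
        # A real tracker would return a 1x1 transparent GIF; an empty
        # 204 response is enough for this illustration.
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), PixelHandler).serve_forever()
```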
There have been some attempts to restrict the amount of information that can be transmitted this way. For example, since 2014, Google has served all images through its own proxy servers, which could hide your location from at least some tracking applications. And extensions such as Ugly Email and PixelBlock have been developed to block trackers on Chrome and Firefox.
There is also a simple basic step you can take to avoid trackers: stop your email from automatically loading images since images are where the majority of these pixels hide. You won’t be able to avoid all of the trackers that can hide in your email this way, but you will stop many of them.
Here’s how to do it in the major desktop and mobile email apps:
Disable image autoloading in Gmail:
Click on the gear icon in the upper right corner to access your settings, and click on “See all settings.”
In the “General” tab (the first one), scroll down to “Images.”
Select “Ask before displaying external images.”
Scroll down to the bottom of the page and click on “Save Changes.”
Note that this will also turn off Gmail’s dynamic email feature, which makes emails more interactive.
Disable image autoloading in Microsoft Outlook (Office 365):
Click on “File” > “Options.”
In the “Outlook Options” window, select “Trust Center.”
Click on the “Trust Center Settings” button.
Check the boxes labeled “Don’t download pictures automatically in standard HTML messages or RSS items” and “Don’t download pictures in encrypted or signed HTML email messages.” You can make a number of exceptions to the first item if you want by checking the boxes underneath it.
Disable image autoloading in Apple Mail:
Select “Mail” > “Preferences.”
Click on the “Viewing” tab.
Uncheck “Load remote content in messages.”
Disable image autoloading in Android Gmail:
Tap on the three lines in the upper left corner.
Scroll down to and select “Settings.”
Tap on the email account that you want to work with.
Scroll down to and select “Images.”
Tap on “Ask before displaying external images.”
Disable image autoloading in iOS Gmail:
Open Gmail for iOS, tap the hamburger menu in the upper left, and scroll down to settings.
Tap the account you want to personalize, and tap into “Images.”
Switch from “Always display external images” to “Ask before displaying external images.”
Note that for those wishing to do this on Gmail’s mobile client, it appears it will only work for personal accounts and not enterprise ones managed through G Suite, for now.
Disable image autoloading on iOS Mail:
Tap on “Settings” > “Mail.”
Find the “Messages” section and toggle off “Load Remote Images.”
Another option is to use an email client such as Thunderbird, which blocks remote images by default; the application allows you to download embedded content on an individual basis or allow pictures from contacts that you trust not to send hidden code in their images.
Update July 3rd, 2019, 3:47PM ET: This article has been updated to include additional information about email clients.
Update September 3rd, 2019, 7:35PM ET: This article has been updated to include directions for disabling image autoloading on Gmail for iOS.
Update February 17th, 2021, 5:30PM ET: Instructions for Microsoft Mail have been removed, and a few instructions updated.
The shortage of silicon chips is more severe than ever, and fabs like TSMC simply don’t have enough production capacity to keep up with the incessant demand from the server, PC, and automotive markets. Per a report from CNBC, President Joe Biden plans to review several critically important industries severely hampered by high demand, including the semiconductor industry.
More specifically, Biden wants the U.S. to be competitive with China and lessen its dependency on Chinese production facilities. For the semiconductor industry, this would mean that billions of dollars would need to be spent on new fabs to both keep up with demand and reduce that reliance. Intel already has a number of fabs in operation in Arizona, and TSMC is currently planning to build fabs in the U.S., so it wouldn’t be far-fetched to see significantly more fabs built domestically.
Biden’s plan will consist of two phases: The first will be a 100-day review of a few high-priority supply chains, including the semiconductor, high-capacity car battery, rare-earth metal, and medical industries.
The second phase will begin after the 100-day review and will involve broader investigations into production for the U.S. military, public health, energy, and transportation.
Finally, a year after these two phases occur, the task force responsible for these investigations will submit recommendations to the president on potential strategies to “ensure supply chains are not monopolized.”
This plan will clearly take time, so for now, don’t expect any major changes with the current semiconductor supply struggle.
Russia’s MCST Elbrus microprocessors made a splash last year, but it takes a lot more than a microprocessor to build a completely self-sufficient computing platform. Among other things, such nationally oriented platforms need a domestic SSD controller, and server maker Kraftway has apparently developed one and demonstrated it at a conference this week. The chip will enable encrypted SSDs featuring a proprietary encryption technology.
The Kraftway K1942BK018 is an Arm Cortex-R5-based NVMe 1.2.1-compliant controller with eight NAND channels that supports up to 2TB of flash memory as well as up to 2GB of DDR3 SDRAM cache. The chip features an ONFI 200MHz interface and is compatible with NAND chips produced by Micron and Toshiba and then packaged in Russia by GS Nanotech. The controller connects to the host using a PCIe 2.0 x4 interface and enables building SSDs in an HHHL or U.2 form-factor. The chip is made using TSMC’s 40 nm process technology and comes in a BGA676 package. The developer claims that it consumes from 3.5W to 4W under load.
The manufacturer claims that drives powered by the new controller will deliver up to 828 MB/s read speeds and up to 659 MB/s write speeds.
Surprisingly, the Kraftway K1942BK018 supports a rather outdated BCH 96-bit/1K ECC technology, which means that it may not support all modern types of 3D NAND.
The key feature of the Kraftway K1942BK018 is that it was fully developed in Russia and uses proprietary management algorithms as well as cryptography standards. The primary customers that will use the controller are various government agencies, the ministry of defense, state-controlled companies and other entities interested in maximum security and proprietary algorithms.
Kraftway plans to produce 10,000 SSDs based on its K1942BK018 controller in the coming months in a bid to use them with its PCs aimed at those markets.
Interestingly, in addition to the Arm Cortex-R5-based K1942BK018 controller, there are also two RISC-V-based SSD and USB drive controllers designed in Russia and based on cores developed in Saint Petersburg.
Microsoft has started testing its xCloud game streaming through a web browser. Sources familiar with Microsoft’s Xbox plans tell The Verge that employees are now testing a web version of xCloud ahead of a public preview. The service allows Xbox players to access their games through a browser, and opens up xCloud to work on devices like iPhones and iPads.
Much like how xCloud currently works on Android tablets and phones, the web version includes a simple launcher with recommendations for games, the ability to resume recently played titles, and access to all the cloud games available through Xbox Game Pass Ultimate. Once you launch a game it will run fullscreen, and you’ll need a controller to play Xbox games streamed through the browser.
It’s not immediately clear what resolution Microsoft is streaming games at through this web version. The software maker is using Xbox One S server blades for its existing xCloud infrastructure, so full 4K streaming won’t be supported until the backend hardware is upgraded to Xbox Series X components this year.
Microsoft is planning to bundle this web version of xCloud into the PC version of the Xbox app on Windows 10, too. The web version appears to be currently limited to Chromium browsers like Google Chrome and Microsoft Edge, much like Google’s Stadia service. Microsoft is planning some form of public preview of xCloud via the web in the spring, and this wider internal testing signals that the preview is getting very close.
The big drive behind this web version is support for iOS and iPadOS hardware. Apple imposes limitations on iOS apps and cloud services, and Microsoft wasn’t able to support the iPhone and iPad when it launched xCloud in beta for Android last year. Apple said Microsoft would need to submit individual games for review, a process that Microsoft labeled a “bad experience for customers.”
Meme thieves rejoice! Discord’s latest update on iOS adds a new option to make it easier to save images from Twitter embeds, handy if you want to save memes from tweets dropped in the server. Or, as the changelog for version 60.0 of the iOS app puts it, “Meme-stealing powers are about to go super saiyan.” To use it, long press on the tweet in question, and then select “Save Image.” It’s a lot simpler than the workarounds people were previously forced to use.
There’s no mention of the feature for the service’s Windows or Android apps, which both also received updates this week. However, on Android it already seems to be possible to save images from tweets by tapping the image to make it fullscreen, and then using the download button on the top of the screen. Although the new iOS feature allows you to download static images, it doesn’t appear to work with gifs.
As well as the new feature for meme thieves, Discord now supports mobile screen share on iPad “so you can now do art together, play mobile games on your tablet, and watch Robert get his math homework wrong for the 5th time from the comfort of your bed.” You can read the full changelog in Discord’s settings menu on desktop and mobile.
Game studio CD Projekt Red has been under a lot of fire recently for the messy Cyberpunk 2077 launch, and now it seems the company isn’t getting a break. In a tweet, CDPR announced that it was subject to a targeted cyber-attack, compromising some of the company’s internal systems.
The attackers claim to have obtained the full source code for Cyberpunk 2077, The Witcher 3, Gwent, and an unreleased version of The Witcher 3, along with heaps of accounting, legal, admin, HR, and investor relations documents, and are threatening to send them to journalists if CDPR doesn’t pay a ransom.
The attackers also claim to have encrypted all of the server’s data, but CD Projekt Red is currently restoring it from a backup, something the attackers are already aware of.
However, curiously, CD Projekt Red claims that it will not give in to the attackers’ demands, even if that means the data will be released. CDPR adds that, to the best of its knowledge, no personal data of players has been compromised.
CDPR is currently working together with law enforcement agencies to shed further light on the breach.
Home security camera systems have exploded in popularity while decreasing in price over the past few years. For example, you could purchase a Ring Indoor Security Camera for around $60, but there are some drawbacks: first, vendors like Ring often charge a monthly fee to store your data and second, you might not want video and photos from inside your home being shared with a third party (in Ring’s case, Amazon) where strangers could potentially see them.
MotionEyeOS, a free open-source application, allows you to turn a Raspberry Pi with a camera into a home video monitoring system, where the photos and videos can either stay on your device (and home network) or, if you choose, be uploaded automatically to a cloud-storage service such as Google Drive or Dropbox.
In this tutorial, we will show you how to set up a Raspberry Pi security camera with MotionEyeOS. This software works with almost any Raspberry Pi (connected to the internet) and almost any webcam or Pi camera. There’s no fancy coding to be done in this project; it just works.
This Raspberry Pi security camera can be used to record porch pirates, monitor children or pets or to watch out for burglars.
Disclaimer: This article is provided with the intent for personal use. We expect our users to fully disclose and notify when they collect, use, and/or share data. We expect our users to fully comply with all national, state, and municipal laws applicable.
What You’ll Need
Raspberry Pi 4 or Raspberry Pi 3B+, or Raspberry Pi Zero W
8 GB (or larger) microSD card
Raspberry Pi Cam, HQ Camera, Infrared Camera, or webcam
Monitor, power supply, and HDMI cable (for your Raspberry Pi)
Your Windows or Mac computer.
Install MotionEyeOS
In this section, we will download MotionEyeOS, flash to a microSD card for our Raspberry Pi security camera, and set our WPA credentials.
1. Download the latest version of MotionEyeOS corresponding to the specific model of Raspberry Pi you are using from https://github.com/ccrisan/motioneyeos/releases
2. Insert your microSD card into your computer to be read as a storage device.
3. Launch Raspberry Pi Imager. You can download the imager here if you don’t already have it installed on your computer.
4. Select “Use custom” for the Operating System.
5. Select the motioneyeos version that you just downloaded. This should be a .img.xz file.
6. Select your microSD card under “SD Card.” Note that all data on your microSD card will be erased in the next step.
7. Click “Write” in the Raspberry Pi imager. The ‘write’ process could take 1 to 2 minutes.
8. When the process completes, physically remove and then reinsert your microSD card. We do this because the software automatically ejects the microSD card when the process completes, but we need to add one file before the next step.
9. Create a new file named wpa_supplicant.conf with the following text, replacing “YOUR_NETWORK_NAME” and “YOUR_NETWORK_PASSWORD” with your information. A source code editor such as Atom works great for this purpose. WordPad and Notepad are not recommended to create this file as extra characters are added in the formatting process.
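A typical wpa_supplicant.conf for this step looks like the example below. The country code and the scan_ssid line are common defaults we are assuming here; set the country to your own two-letter code:

```
country=US
update_config=1
ctrl_interface=/var/run/wpa_supplicant

network={
    scan_ssid=1
    ssid="YOUR_NETWORK_NAME"
    psk="YOUR_NETWORK_PASSWORD"
}
```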
10. Save wpa_supplicant.conf to your microSD card. Eject your microSD card.
11. Insert your microSD card into your Raspberry Pi.
12. Connect your camera, monitor and power supply to your Raspberry Pi. Power up your Pi.
13. Find your internal IP address on the Pi screen. In most cases your internal IP address will start with 192.168.x.x or 10.0.0.x. Alternatively, if you do not have access to a monitor, you can download Angry IP Scanner and find your IP address for your Motioneye Raspberry Pi. Look for “MEYE” to identify your MotionEye Pi.
14. Enter your internal IP address into a browser window of your Windows or Mac computer. Alternatively, you could use a Chromebook or a tablet. At this point your Motioneye should start streaming.
In most cases, the system will automatically stream from the attached camera. If no image comes up, the camera may be incompatible with the Raspberry Pi. For example, an HD webcam may be incompatible with the Raspberry Pi Zero, but will work with a Raspberry Pi 3. There may be some trial and error in this step. Interestingly, most older webcams (manufactured before the Pi) will work with Motioneye. Here’s an old Logitech Pro 9000 connected to a Pi Zero W with a 3D printed stand.
Configuring MotionEye for Raspberry Pi Security Camera
In this section, we will perform a basic configuration of Motioneye and view our Raspberry Pi security camera video stream.
1. Click on the Profile icon near the top left within your browser menu to pull up the Login screen.
2. Log in using the default credentials. The username is admin, and the password field should be blank.
3. Select your Time Zone from the dropdown menu in “Time Zone.” Click Apply. Motioneye will reboot which will take a few minutes. This step is important as each photo and video is timestamped.
4. Motioneye detects motion when a set percentage of the frame changes between captures. The intent is to set that percentage low enough to pick up the movement you are tracking, but high enough to avoid recording a passing cloud. In most cases, this is achieved through trial and error. Start with the default 4% Frame Change Threshold and then move up until you reach your optimal setting.
5. Click the down arrow to the right of “Still Images” to reveal the corresponding settings. Do the same for “Movies.” Set Capture Mode and Recording Mode to “Motion Triggered,” then choose how long to preserve pictures and movies.
I have chosen “For One Week” since I’m only working with an 8GB microSD card. The photos saved locally will serve as a backup. You’ll save all of the photos to Google in a later step. Click Apply to save your changes.
6. Set your Camera Name, Video Resolution, Frame Rate and other options in the “Video Device” section. Click Apply to save your changes.
Viewing Raspberry Pi Security Camera Images / Video Locally
If you don’t wish to upload images to a third-party service such as Google Drive, you can view the images and/or videos captured locally on your Raspberry Pi security camera. If you choose this method, the images never leave your local network.
1. Click on the live camera feed and new icons will appear.
2. Click on the image icon to view images.
3. Or click on the “Play” button icon to view movies.
Automatic Uploading to Google Drive (Optional)
In this step, we will configure our Raspberry Pi security camera to automatically upload all of the photos (and videos) taken to Google Drive. This method (with a couple of nuances) also works with Dropbox. Of course, you have to be comfortable with having your images in the cloud.
Most users create a separate Gmail account specifically for this purpose, to maximize free storage space from Google. Additionally, this will come in handy if you decide to enable email notifications in the next step.
1. Click the down arrow corresponding to “File Storage” in the main admin menu.
2. Toggle “Upload Media Files” to ON. This should automatically toggle “Upload Pictures” and “Upload Movies” to ON, but if not, switch them on manually.
3. Select Google Drive from the “Upload Service” dropdown menu.
4. In your Google Drive, create a new folder for storing your photos and videos. I chose “PorchCam” for the name of my folder.
5. Enter “/” followed by your folder name for ‘Location.’
6. Click “Obtain Key” and accept associated permissions by clicking “Allow.”
7. Copy and paste the authorization code into your “Authorization Key” in Motioneye.
8. Click the “Test Service” button. If you don’t get an error message in Motioneye, then it was a success.
9. Go to your Google Drive folder and test your setup by pointing the camera at yourself and waving to the camera.
Email Notifications (Optional)
In this optional step, we will configure MotionEye to automatically send us emails with attachments containing the photos our Raspberry Pi security camera has taken. It is highly recommended that you create a separate Gmail account specifically for this purpose. These instructions are specific to Gmail only.
1. Enable “Less Secure Apps” in your Gmail account.
2. Expand “Motion Notifications” in Motioneye.
3. Toggle ON “Send An Email”
4. Enter your email address and password, then fill in the remaining fields:
SMTP Server = smtp.gmail.com
SMTP Port = 587
Use TLS – Toggle to On
Enter a value for “Attached Pictures Time Span”
5. Click the “Test Email” button.
The first email is a text only email. Subsequent emails will contain attachments.
Mobile App Access to Raspberry Pi Security Camera
MotionEye also has a companion mobile app for iOS and Android, available in their respective app stores. Keep in mind that the app will only work while you are on the same network as your Raspberry Pi (unless you enable port forwarding, which is not encouraged for security reasons).
Just when you thought PCIe 4.0 x4 SSDs were fast with up to 8 GB/s of sequential read speed, PCIe 5.0 drives have emerged on the horizon that could reach up to 16 GB/s.
Silicon Motion said this week that it would start sampling its enterprise-grade SSD controllers with a PCIe Gen 5.0 interface in the second half of next year, which means that drives based on them should debut commercially in 2023. This is one of the first times that an SSD controller maker has mentioned a chip with a PCIe 5.0 interface, and even though the controller will debut in the server space, models for consumers will inevitably follow.
No Rush for PCIe 5.0 SSDs?
The PCIe 5.0 interface increases data transfer speeds to 32 GT/s per lane (double PCIe 4.0), bringing the total bandwidth provided by a PCIe 5.0 x16 slot to roughly 64 GB/s, while a PCIe 5.0 x4 slot, the typical SSD connection, can transfer up to roughly 16 GB/s.
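For reference, here is the rough arithmetic behind those figures, assuming PCIe 5.0’s 128b/130b line coding (real-world throughput is a little lower still):

```python
# Back-of-the-envelope PCIe 5.0 bandwidth per direction.
gt_per_s = 32                         # raw signaling rate per lane (GT/s)
encoding = 128 / 130                  # 128b/130b line-coding efficiency
lane_gb_s = gt_per_s * encoding / 8   # ~3.94 GB/s per lane
print(f"x4:  {4 * lane_gb_s:.1f} GB/s")    # ~15.8 GB/s
print(f"x16: {16 * lane_gb_s:.1f} GB/s")   # ~63.0 GB/s
```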
Increased transfer rates will be particularly beneficial for various bandwidth-hungry applications, like servers, high-end storage subsystems, and accelerators. Using the PCIe Gen 5 physical layer, various next-gen platforms will also support CXL and Gen-Z protocols designed specifically to connect CPUs with various accelerators and maintain memory and cache coherency at low latencies.
The first platforms to support a PCIe 5.0 interface are Intel’s 12th-Gen Alder Lake CPUs for client PCs, which are expected to debut in the second half of 2021, as well as the company’s 4th Generation Xeon Scalable ‘Sapphire Rapids’ for data centers and supercomputers that is projected to launch in early 2022. In addition to PCIe 5.0, Sapphire Rapids will also support the CXL 1.1 protocol.
So far, several companies have already announced the availability of PCIe 5.0 controllers and PHY IP, and some have demonstrated interoperability of their CXL-enabled PCIe 5.0 solutions with Intel’s Sapphire Rapids or verification equipment, whereas Microchip even announced its PCIe 5.0 retimers and switches.
However, as numerous developers of SSD controllers and platforms, including Kioxia, Microchip, Silicon Motion, and SK Hynix, only introduced their PCIe 4.0 platforms for servers in the second half of last year, it doesn’t seem like they will roll out new enterprise-grade PCIe 5.0 solutions in the foreseeable future. Of course, some companies tend to introduce next-gen SSDs ahead of competitors, but it does not look like there will be many PCIe 5.0-supporting enterprise drives available next year, so SMI will be on time with its PCIe Gen 5 controller.
The PCIe Gen 5 specification was finalized in mid-2019, around the same time the first PCIe 4.0-supporting platforms, SSDs, and GPUs were launched. Back then, some thought that the PCIe 4.0 interface would have a short lifespan (because PCIe 5.0 was ‘already there’) and would never become truly popular, particularly in the data center space, as the only server platform to feature PCIe 4.0 lanes at the time was AMD’s EPYC 7002-series ‘Rome,’ which had yet to gain much traction. As it turns out, while PCIe 5.0 will debut later this year, it does not look like it will immediately replace PCIe 4.0.
SMI Mentions First PCIe 5.0 SSD Controller
“With our new PCIe Gen5 enterprise SSD controllers sampling in the second half of next year, we are not expecting our enterprise SSD controller to be a material contributor to our $1 billion sales objective,” said Wallace Kuo, chief executive of Silicon Motion, during a conference call with analysts and investors (via The Motley Fool). “We are planning on material enterprise SSD controller sales contribution only after 2023.”
Silicon Motion is a relative newcomer to the enterprise SSD controller market. The company first entered China’s enterprise SSD market in 2015 after acquiring Shannon Systems, a supplier of enterprise-grade PCIe SSDs and storage arrays to Chinese hyperscalers. So far, SMI’s enterprise SSD business has not really taken off and represents a fraction of its revenue. Still, the company clearly wants to be a part of the datacenter megatrend, so it will continue to invest in enterprise storage solutions.
“We are excited about enterprise-grade PCIe Gen5 controller, which we will have taped out early next year and sample in the second half of 2022,” said Kuo. “We believe this will bring us a big momentum in coming to enterprise.”
The chief executive of Silicon Motion naturally did not touch upon technical specifications of the company’s upcoming PCIe 5.0 enterprise SSD controller, but its latest SM8266 SoC supports NVMe 1.4, three dual-core Arm Cortex-R5 complexes, 16 NAND channels, and configurable LDPC ECC.
I would like to thank ICY DOCK for supplying the sample.
ICY DOCK is known for its products geared towards workstation, enterprise, and government users. The ToughArmor series is their higher-end product tier, focusing on a metal material mix and interfaces usually niche to their target audiences. The MB840M2P-B is a PCIe adapter for M.2 NVMe drives and is unique in that the installed drive sits on a sled accessible through the rear of the system.
Packaging
This device ships in simple brown packaging that seems to have undergone a few last minute changes. To better illustrate some of its advantages, corresponding icons have been put on the front. On the opposite side, you will find the product name in several languages, alongside a specification table in English. The PCIe card inside an anti-static bag has been placed in a foam cutout within the box.
The unit may also be used in low-profile enclosures as ICY DOCK provides a smaller backplate for such a scenario. On top of that, there is a single screw to secure the card. A basic but effective manual has also been included.
A Closer Look
The MB840M2P-B is essentially a souped-up PCIe 3.0 x4 to M.2 NVMe adapter card. This means you get all the same advantages as plugging the SSD directly into an appropriate slot on the motherboard. That said, ICY DOCK has engineered a housing and sled that allow you to quickly pull out the drive and swap in another one. The PCB itself comes with a cutout, which is an interesting choice, as I see no immediate benefit to it besides a potential thermal angle.
Taking a closer look at the end of the device, you will find the classic M.2 PCIe connector and an LED that acts as an activity indicator. On the corner is also a dual-pin header for your case’s hard-drive activity LED, for example.
To unlock the sled, press the solid side of it down, which pops out the little handle—both of these parts are made out of steel. Once unlocked, you may just pull the whole thing out of the expansion card.
The sled also acts as a heatsink, and the whole contraption weighs just over 50 grams, most of which is the heatsink itself. That means you should see a tangible temperature drop for your drive.
Assembly and Use
To add a drive to the sled, you do not need any tools. Simply press down on the metal cover to pop it off, which reveals a similar mounting mechanism as with other ICY DOCK products: a sliding bar that secures the M.2 NVMe drive. A thermal pad along the whole interior comes pre-applied, and the sled is long enough to accommodate even the extra-long 110 mm drive formats, which is important as enterprise-level units go beyond the standard 80 mm for consumer drives.
Corsair was kind enough to provide us with one of their MP400 SSDs to use inside the MB840M2P-B, which matches the PCIe 3.0 x4 interface perfectly. The retail package of the drive is bright yellow and has an image on the front and additional details on the back.
The MP400 itself comes with a branding sticker and memory ICs on only one side of the PCB. This benefits our current usage scenario as the filled side will attach to the thermal pad.
Installing the SSD is easy once the housing has been opened. Simply align the interface gap with the metal pin in the housing and slide the lock into place to keep the drive firmly in place and touching the thermal pad. Once done, you may put the steel cover back in place, which leaves only the contact pins of the drive exposed.
Sliding the drive in works as expected, and the sled locking mechanism for the expansion card functions the same way as with other ICY DOCK products—by pushing the lever down until it snaps into place.
When turned on, the green LED on the end of the card lights up to denote read/write activity, with its glow funneled to the back of the system, next to your drive. There may be scenarios where you want a full tower with seven or eight of these in the PCIe slots, so having individual activity LEDs is certainly helpful.
Performance
As the card is essentially an interface of the same PCIe 3.0 x4 bandwidth between two physical formats, we expected the drive to perform just as it would when installed directly inside the system, and the numbers almost exactly match those advertised on the MP400 retail packaging.
For thermals, we ran DiskSpd on a 30-minute loop to generate sustained drive activity and were never able to push the drive beyond 48°C. As our test system was not mounted within a case, you should expect temperatures to be a few degrees higher if the unit is sandwiched between other expansion cards, but even so, it is far from the 80°C threshold where most SSDs tend to throttle.
Conclusion
As with most of the brand’s products, the ICY DOCK ToughArmor MB840M2P-B is not meant for the mass market. If you are just an enthusiast who wants good cooling, simply go for a heatsink on your bare drive, for example. This is further underlined by the MSRP of US$84, as the MB840M2P-B is not cheap compared to the many simple, bare PCIe 3.0 x4 to M.2 NVMe adapters that sell for around US$15.
The MB840M2P-B is meant for users who need quick access to their drives in a high-density environment while keeping them cool under heavy, sustained loads. Eight of these could be paired with a workstation/server motherboard and appropriate 22110-length NVMe drives, for example, all while allowing for portability between multiple compatible systems and easy access for maintenance or emergencies. For those types of scenarios, the price of each unit is a no-brainer in the grand scheme of things.
Along with news, features, opinions, and tech reviews, video has become an increasingly important part of The Verge’s content. But to make great, involving videos, you’ve got to have staff with the expertise to create that video — along with the tools that allow those staff members to let their imaginations soar.
Alix Diaconis is one of the directors who helps make video magic for The Verge. We talked to Alix about what she does and what tools she uses.
Alix, what do you do for The Verge?
I’m one of the video directors for The Verge. I get to work every day with my three co-workers (but really, friends) to create the videos on The Verge’s YouTube channel. Sometimes deadlines are fast because tech and news are fast, but our team has been working together for years, so even live events feel seamless and fun. We each shoot, take photos, and edit; then the video gets treated by our sound and graphics wizards. Then bam, on to the next one!
What hardware and software tools are needed to produce a video for a site like The Verge?
It really varies video to video. For some videos, we’ll pull out all the stops, while for others, we need to do quick and light. Heck, I think we’ve shot videos with just a GoPro.
When we go to a press event, we’ll keep it very light with a monopod, lavalier microphone, and a camera we feel most comfortable with. And then I’ll edit at the event on my MacBook Pro.
But most of the time when we’re shooting on location, we’ll bring a bigger kit with an HD monitor, a slider (which helps you do tracking shots), maybe a drone. And when we’re making the big stuff, like a flagship phone review, we like to bring out everything, including a probe lens like the Venus Optics Laowa to make intro shots like this.
The opening shot on this video was created using a probe lens.
Since we’re uploading videos for our job, good internet upload speeds make life a lot easier. We also have a shared server so we have access to our terabytes and terabytes of footage at all times.
Oh, and also teamwork. Lots and lots of teamwork.
What specific hardware tools do you use for your work?
For shooting, I prefer to use the Canon EOS C200 — I think it looks really cinematic — and my preferred lens is the Canon EF 70-200mm (for B-roll at least). Sometimes I’ll use the Sony A7S II or III, which looks extra crisp, but I’m not a big fan of Sony menus. For sound, I’ll typically use a Sennheiser G3 lavalier or a Zoom H6 recorder. For photos, I use the Canon 50D.
For post-production in The Verge offices, I would edit on a 27-inch iMac, which is due for an upgrade. At home, though, I have a more powerful editing PC that my producer built for me. It has an AMD Ryzen 7 3700X 8-core processor, 2TB NVMe drive, a Radeon RX 580 series video card, 32GB RAM, and an Asus 28-inch 4K display. Of course, there are always technical issues — it’s part of editing — but the PC is the best editing machine I’ve personally owned. (Thank you, Phil!) I do miss the beautiful iMac display though.
Also, since video takes up a lot of space, I’ll sometimes use an additional SSD for projects. And as for headphones, I use the Sony MDR-7506, which are the only headphones I can wear comfortably all day.
And then there’s the fun, random gear: a GoPro Hero 8, an Insta360 panoramic video camera (which we recently used for this e-bike video), a Zhiyun Crane, a DJI Mavic Pro drone… and whatever else we can get our hands on.
This video was created using an Insta360 panoramic video camera.
What software tools do you use for your work?
All Adobe everything. Premiere Pro for editing, After Effects for basic graphics, and Photoshop for the video thumbnails. You can do a lot in Premiere, but it does have its bugs, and it’s not always optimized for Apple’s hardware.
What tools do you use for your own projects?
I’ve been teaching myself DaVinci Resolve to color footage. I still barely understand the program, but it makes footage look 100x better than coloring it in Premiere. And purely for fun, I shoot 35mm film on my dad’s old Minolta camera.
What hardware and software tools would you recommend for somebody just starting out?
Premiere is very common for editing. But if you want to try something free and you have an iPhone or iPad, there’s the Splice app. It’s really intuitive, but you’re limited to clips you have on your device. There’s also DaVinci Resolve, which is free and as advanced as most paid editing software.
As for cameras, just get one that you feel comfortable using! And for a computer, invest in a good one if you see yourself editing for a long time; iMacs and Windows PCs are both good, and the specs will just depend on how big your projects will be. I haven’t had a chance to use Apple’s new M1 MacBook Air or Pro yet, but both seem like good choices if you’d prefer a laptop.
Intel’s 12th-Gen Alder Lake chip will bring the company’s hybrid architecture, which combines a mix of larger high-performance cores paired with smaller high-efficiency cores, to desktop x86 PCs for the first time. That represents a massive strategic shift as Intel looks to regain the uncontested performance lead against AMD’s Ryzen 5000 series processors. AMD’s Zen 3 architecture has taken the lead in our Best CPUs and CPU Benchmarks hierarchy, partly on the strength of its higher core counts. That’s not to mention Apple’s M1 processors, which feature a similar hybrid design and come with explosive performance improvements of their own.
Intel’s Alder Lake brings disruptive new architectures and reportedly supports features like PCIe 5.0 and DDR5 that leapfrog AMD and Apple in connectivity technology, but the new chips come with significant risks. It all starts with a new way of thinking, at least as far as x86 chips are concerned, of pairing high-performance and high-efficiency cores within a single chip. That well-traveled design philosophy powers billions of Arm chips, often referred to as Big.Little (Intel calls its implementation Big-Bigger), but it’s a first for x86 desktop PCs.
Intel has confirmed that its Golden Cove architecture powers Alder Lake’s ‘big’ high-performance cores, while the ‘small’ Atom efficiency cores come with the Gracemont architecture, making for a dizzying number of possible processor configurations. Intel will etch the cores on its 10nm Enhanced SuperFin process, marking the company’s first truly new node for the desktop since 14nm debuted six long years ago.
As with the launch of any new processor, Intel has a lot riding on Alder Lake. However, the move to a hybrid architecture is unquestionably riskier than prior technology transitions because it requires operating system and software optimizations to achieve maximum performance and efficiency. It’s unclear how unoptimized code will impact performance.
In either case, Intel is going all-in: Intel will reunify its desktop and mobile lines with Alder Lake, and we could even see the design come to the company’s high-end desktop (HEDT) lineup.
Intel might have a few tricks up its sleeve, though. Intel paved the way for hybrid x86 designs with its Lakefield chips, the first such chips to come to market, and established a beachhead in terms of both Windows and software support. Lakefield really wasn’t a performance stunner, though, due to a focus on lower-end mobile devices where power efficiency is key. In contrast, Intel says it will tune Alder Lake for high-performance, a must for desktop PCs and high-end notebooks. There are also signs that some models will come with only the big cores active, which should perform exceedingly well in gaming.
Meanwhile, Apple’s potent M1 processors with their Arm-based design have brought a step function improvement in both performance and power consumption over competing x86 chips. Much of that success comes from Arm’s long-standing support for hybrid architectures and the requisite software optimizations. Comparatively, Intel’s efforts to enable the same tightly-knit level of support are still in the opening stages.
Potent adversaries challenge Intel on both sides. Apple’s M1 processors have set a high bar for hybrid designs, outperforming all other processors in their class with the promise of more powerful designs to come. Meanwhile, AMD’s Ryzen 5000 chips have taken the lead in every metric that matters over Intel’s aging Skylake derivatives.
Intel certainly needs a come-from-behind design to thoroughly unseat its competitors, turning the tables back in its favor like the Conroe chips did back in 2006, when the Core architecture debuted with a ~40% performance advantage that cemented Intel’s dominance for a decade. Intel’s Raja Koduri has already likened the transition to Alder Lake to the debut of Core, suggesting that Alder Lake could indeed be a Conroe-esque moment.
In the meantime, Intel’s Rocket Lake will arrive later this month, and all signs point to the new chips overtaking AMD in single-threaded performance. However, they’ll still trail in multi-core workloads due to Rocket Lake’s maximum of eight cores, while AMD has 16-core models for the mainstream desktop. That makes Alder Lake exceedingly important as Intel looks to regain its performance lead in the desktop PC and laptop markets.
While Intel hasn’t shared many of the details on the new chip, plenty of unofficial details have come to light over the last few months, giving us a broad indication of Intel’s vision for the future. Let’s dive in.
Intel’s 12th-Gen Alder Lake At a Glance
Qualification and production in the second half of 2021
Hybrid x86 design with a mix of big and small cores (Golden Cove/Gracemont)
10nm Enhanced SuperFin process
LGA1700 socket requires new motherboards
PCIe 5.0 and DDR5 support rumored
Four variants: -S for desktop PCs, -P for mobile, -M for low-power devices, -L Atom replacement
Gen12 Xe integrated graphics
New hardware-guided operating system scheduler tuned for high performance
Intel Alder Lake Release Date
Intel hasn’t given a specific date for Alder Lake’s debut, but it has said that the chips will be validated for production for desktop PCs and notebooks with the volume production ramp beginning in the second half of the year. That means the first salvo of chips could land in late 2021, though it might also end up being early 2022. Given the slew of benchmark submissions and operating system patches we’ve seen, early silicon is obviously already in the hands of OEMs and various ecosystem partners.
Intel and its partners also have plenty of incentive to get the new platform and CPUs out as soon as possible, and we could have a situation similar to 2015’s short-lived Broadwell desktop CPUs that were almost immediately replaced by Skylake. Rocket Lake seems competitive on performance, but the existing Comet Lake chips (e.g., the i9-10900K) already use a lot of power, and the i9-11900K doesn’t look to change that. With Enhanced SuperFin, Intel could dramatically cut power requirements while improving performance.
Intel Alder Lake Specifications and Families
Intel hasn’t released the official specifications of the Alder Lake processors, but a recent update to the SiSoft Sandra benchmark software, along with listings to the open-source Coreboot (a lightweight motherboard firmware option), have given us plenty of clues to work with.
The Coreboot listing outlines various combinations of the big and little cores in different chip models, with some models even using only the larger cores (possibly for high-performance gaming models). The information suggests four configurations with -S, -P, and -M designators, and an -L variant has also emerged:
Alder Lake-S: Desktop PCs
Alder Lake-P: High-performance notebooks
Alder Lake-M: Low-power devices
Alder Lake-L: Listed as “Small Core” Processors (Atom)
Intel Alder Lake-S Desktop PC Specifications
Alder Lake-S*
Big + Small Cores | Cores / Threads | GPU
8 + 8 | 16 / 24 | GT1 – Gen12 32EU
8 + 6 | 14 / 22 | GT1 – Gen12 32EU
8 + 4 | 12 / 20 | GT1 – Gen12 32EU
8 + 2 | 10 / 18 | GT1 – Gen12 32EU
8 + 0 | 8 / 16 | GT1 – Gen12 32EU
6 + 8 | 14 / 20 | GT1 – Gen12 32EU
6 + 6 | 12 / 18 | GT1 – Gen12 32EU
6 + 4 | 10 / 16 | GT1 – Gen12 32EU
6 + 2 | 8 / 14 | GT1 – Gen12 32EU
6 + 0 | 6 / 12 | GT1 – Gen12 32EU
4 + 0 | 4 / 8 | GT1 – Gen12 32EU
2 + 0 | 2 / 4 | GT1 – Gen12 32EU
*Intel has not officially confirmed these configurations. Not all models may come to market. Listings assume all models have Hyper-Threading enabled on the large cores.
Intel’s 10nm Alder Lake combines large Golden Cove cores that support Hyper-Threading (Intel’s branded version of SMT, simultaneous multi-threading, which allows two threads to run on a single core) with smaller single-threaded Atom cores. That means some models could come with seemingly odd distributions of cores and threads. We’ll jump into the process technology a bit later.
As we can see above, a potential flagship model would come with eight Hyper-Threading enabled ‘big’ cores and eight single-threaded ‘small’ cores, for a total of 24 threads. Logically we could expect the 8 + 8 configuration to fall into the Core i9 classification, while 8 + 4 could land as Core i7, and 6 + 8 and 4 + 0 could fall into Core i5 and i3 families, respectively. Naturally, it’s impossible to know how Intel will carve up its product stack due to the completely new paradigm of the hybrid x86 design.
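For a quick sanity check on the core-and-thread math, here’s a minimal Python sketch (purely illustrative, assuming Hyper-Threading on the big cores only, per the leaked listings) that reproduces the counts in the table above:

```python
def hybrid_core_thread_count(big_cores: int, small_cores: int) -> tuple[int, int]:
    """Return (total cores, total threads) for a hybrid big/small config.

    Assumes the big (Golden Cove) cores are two-way Hyper-Threaded and the
    small (Gracemont) cores each run a single thread, per the leaked listings.
    """
    cores = big_cores + small_cores
    threads = big_cores * 2 + small_cores
    return cores, threads

# The rumored flagship: 8 big + 8 small = 16 cores / 24 threads
print(hybrid_core_thread_count(8, 8))  # (16, 24)
print(hybrid_core_thread_count(6, 8))  # (14, 20)
print(hybrid_core_thread_count(8, 0))  # (8, 16)
```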
We’re still quite far from knowing particular model names, as recent submissions to public-facing benchmark databases list the chips as “Intel Corporation Alder Lake Client Platform” but use ‘0000’ identifier strings in place of the model name and number. This indicates the silicon is still in the early phases of testing, and newer steppings will eventually progress to production-class processors with identifiable model names.
Given that these engineering samples (ES) chips are still in the qualification stage, we can expect drastic alterations to clock rates and overall performance as Intel dials in the silicon. It’s best to use the test submissions for general information only, as they rarely represent final performance.
The 16-core desktop model has been spotted in benchmarks with a 1.8 GHz base and 4.0 GHz boost clock speed, but we can expect that to increase in the future. For example, a 14-core 20-thread Alder Lake-P model was recently spotted at 4.7 GHz. We would expect clock rates to be even higher for the desktop models, possibly even reaching or exceeding 5.0 GHz on the ‘big’ cores due to a higher thermal budget.
Meanwhile, it’s widely thought that the smaller efficiency cores will come with lower clock rates, but current benchmarks and utilities don’t enumerate the second set of cores with a separate frequency domain, meaning we’ll have to wait for proper software support before we can learn clock rates for the efficiency cores.
We do know from Coreboot patches that Alder Lake-S supports two eight-lane PCIe 5.0 connections and two four-lane PCIe 4.0 connections, for a total of 24 lanes. Conversely, Alder Lake-P dials back connectivity due to its more mobile-centric nature and has a single eight-lane PCIe 5.0 connection along with two four-lane PCIe 4.0 interfaces. There have also been concrete signs of support for DDR5 memory. There are some caveats, though, which you can read about in the motherboard section.
Intel Alder Lake-P and Alder Lake-M Mobile Processor Specifications
Alder Lake-P* / Alder Lake-M*
Big + Small Cores | Cores / Threads | GPU
6 + 8 | 14 / 20 | GT2 Gen12 96EU
6 + 4 | 10 / 14 | GT2 Gen12 96EU
4 + 8 | 12 / 16 | GT2 Gen12 96EU
2 + 8 | 10 / 12 | GT2 Gen12 96EU
2 + 4 | 6 / 8 | GT2 Gen12 96EU
2 + 0 | 2 / 4 | GT2 Gen12 96EU
*Intel has not officially confirmed these configurations. Not all models may come to market. Listings assume all models have Hyper-Threading enabled on the large cores.
The Alder Lake-P processors are listed as laptop chips, so we’ll probably see those debut in a wide range of notebooks, from thin-and-light form factors up to high-end gaming machines. As you’ll notice above, all of these processors purportedly come armed with Intel’s Gen 12 Xe architecture in a GT2 configuration, imparting 96 EUs across the range of chips. That’s triple the execution unit count of the desktop chips and could indicate a focus on reducing the need for discrete graphics.
There is precious little information available for the -M variants, but they’re thought to be destined for lower-power devices and serve as a replacement for Lakefield chips. We do know from recent patches that Alder Lake-M comes with reduced I/O support, which we’ll cover below.
Finally, an Alder Lake-L version has been added to the Linux kernel, classifying the chips as ‘”Small Core” Processors (Atom),’ but we haven’t seen other mentions of this configuration elsewhere.
Intel Alder Lake 600-Series Motherboards, LGA 1700 Socket, DDR5 and PCIe 5.0
Intel’s incessant motherboard upgrades, which require new sockets or restrict support within existing sockets, have earned the company plenty of criticism from the enthusiast community – especially given AMD’s long line of AM4-compatible processors. That trend will continue with Alder Lake’s requirement for the new LGA 1700 socket and 600-series chipsets. Still, if rumors hold true, Intel will stick with the new socket for at least the next generation of processors (7nm Meteor Lake) and possibly for an additional generation beyond that, rivaling AMD’s AM4 longevity.
Last year, an Intel document revealed an LGA 1700 interposer for its Alder Lake-S test platform, confirming that the rumored socket will likely house the new chips. Months later, an image surfaced at VideoCardz, showing an Alder Lake-S chip and the 37.5 x 45.0mm socket dimensions. That’s noticeably larger than the current-gen LGA 1200’s 37.5 x 37.5mm.
Because the LGA 1700 socket is bigger than the sockets used in current LGA 1151/LGA 1200 motherboards, existing coolers will be incompatible, though we expect cooler conversion kits to accommodate the larger socket. Naturally, the larger socket is needed to house 500 more pins than LGA 1200. Those pins support newer interfaces, like PCIe 5.0 and DDR5, among other purposes, like power delivery.
PCIe 5.0 and DDR5 support are both listed in patch notes, possibly giving Intel a connectivity advantage over competing chips, but there are a lot of considerations involved with these big technology transitions. As we saw with the move from PCIe 3.0 to 4.0, a step up to a faster PCIe interface requires thicker motherboards (more layers) to accommodate wider lane spacing, more robust materials, and retimers due to stricter trace length requirements. All of these factors conspire to increase cost.
We recently spoke with Microchip, which develops PCIe 5.0 switches, and the company tells us that, as a general statement, we can expect those same PCIe 4.0 requirements to become more arduous for motherboards with a PCIe 5.0 interface, particularly because they will require retimers for even shorter lane lengths and even thicker motherboards. That means we could see yet another jump in motherboard pricing over what the industry already absorbed with the move to PCIe 4.0. Additionally, PCIe 5.0 also consumes more power, which will present challenges in mobile form factors.
Both Microchip and the PCI-SIG standards body tell us that PCIe 5.0 adoption is expected to come to the high-performance server market and workstations first, largely because of the increased cost and power consumption. That isn’t a good fit for consumer devices considering the slim performance advantages in lighter workloads. That means that while Alder Lake may support PCIe 5.0, it’s possible that we could see the first implementations run at standard PCIe 4.0 signaling rates.
Intel took a similar tactic with its Tiger Lake processors – while the chips’ internal pathways are designed to accommodate the increased throughput of the DDR5 interface via a dual ring bus, they came to market with DDR4 memory controllers, with the option of swapping in DDR5 controllers in the future. We could see a similar approach here, with the first devices using existing PCIe 4.0 controller tech, or with PCIe 5.0 controllers merely defaulting to PCIe 4.0 signaling rates.
Benchmarks have surfaced that indicate Alder Lake supports DDR5 memory, but as with the PCIe 5.0 interface, it remains to be seen whether Intel will enable it on the leading wave of processors. Notably, every transition to a newer memory interface has resulted in higher up-front DIMM pricing, which is concerning in the price-sensitive desktop PC market.
DDR5 is in the opening stages of its rollout; some vendors, like Adata, TeamGroup, and Micron, have already begun shipping modules. The inaugural modules are expected to run in the DDR5-4800 to DDR5-6400 range. The JEDEC spec tops out at DDR5-8400, but as with DDR4, it will take some time before we see those peak speeds. Notably, several of these vendors have said they don’t expect the transition to DDR5 to happen in earnest until early 2022.
While the details are hazy around the separation of the Alder Lake-S, -P, -M, and -L variants, some details have emerged about the I/O allocations via Coreboot patches:
Feature | Alder Lake-P | Alder Lake-M | Alder Lake-S
CPU PCIe | One PCIe 5.0 x8 / Two PCIe 4.0 x4 | Unknown | Two PCIe 5.0 x8 / Two PCIe 4.0 x4
PCH | ADP_P | ADP_M | ADP_S
PCH PCIe Ports | 12 | 10 | 28
SATA Ports | 6 | 3 | 6
We don’t have any information for the Alder Lake-L configuration, so it remains shrouded in mystery. However, as we can see above, the PCIe, PCH, and SATA allocations vary by the model, based on the target market. Notably, the Alder Lake-P configuration is destined for mobile devices.
Intel 12th-Gen Alder Lake Xe LP Integrated Graphics
A series of Geekbench test submissions have given us a rough outline of the graphics accommodations for a few of the Alder Lake chips. Recent Linux patches indicate the chips feature the same Gen12 Xe LP architecture as Tiger Lake, though there is a distinct possibility of a change to the sub-architecture (12.1, 12.2, etc.). Also, there are listings for a GT0.5 configuration in Intel’s media driver, but that is a new paradigm in Intel’s naming convention so we aren’t sure of the details yet.
The Alder Lake-S processors come armed with the 32 EUs (256 shaders) in a GT1 configuration, and the iGPU on early samples run at 1.5 GHz. We’ve also seen Alder Lake-P benchmarks with the GT2 configuration, which means they come with 96 EUs (768 shaders). The early Xe LP iGPU silicon on the -P model runs at 1.15GHz, but as with all engineering samples, that could change with shipping models.
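Those shader counts follow directly from the EU counts, since each Gen12 Xe LP execution unit is eight ALUs (“shaders”) wide. A quick illustrative check in Python:

```python
ALUS_PER_XE_LP_EU = 8  # each Gen12 Xe LP execution unit is eight ALUs wide

def shader_count(eu_count: int) -> int:
    """Convert an Xe LP EU count into the equivalent 'shader' (ALU) count."""
    return eu_count * ALUS_PER_XE_LP_EU

print(shader_count(32))  # 256 shaders -- the GT1 config listed for Alder Lake-S
print(shader_count(96))  # 768 shaders -- the GT2 config listed for Alder Lake-P
```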
Alder Lake’s integrated GPUs support up to five display outputs (eDP, dual HDMI, and Dual DP++), and support the same encoding/decoding features as both Rocket Lake and Tiger Lake, including AV1 8-bit and 10-bit decode, 12-bit VP9, and 12-bit HEVC.
Intel Alder Lake CPU Architecture and 10nm Enhanced SuperFin Process
Intel pioneered the x86 hybrid architecture with its Lakefield chips, with those inaugural models coming with one Sunny Cove core paired with four Atom Tremont cores.
Compared to Lakefield, both the high- and low-performance Alder Lake cores take a step forward to newer microarchitectures. Alder Lake actually jumps forward two ‘Cove’ generations compared to the ‘big’ Sunny Cove cores found in Lakefield. The big Golden Cove cores come with increased single-threaded performance, AI performance, network and 5G performance, and improved security features compared to the Willow Cove cores that debuted with Tiger Lake.
Alder Lake’s smaller Gracemont cores jump forward a single Atom generation and offer the benefit of being more power and area efficient (perf/mm^2) than the larger Golden Cove cores. Gracemont also comes with increased vector performance, a nod to an obvious addition of some level of AVX support (likely AVX2). Intel also lists improved single-threaded performance for the Gracemont cores.
It’s unclear whether Intel will use its Foveros 3D packaging for the chips. This 3D chip-stacking technique reduces the footprint of the chip package, as seen with the Lakefield chips. However, given the large LGA 1700 socket, that type of packaging seems unlikely for the desktop PC variants. We could see some Alder Lake-P, -M, or -L chips employ Foveros packaging, but that remains to be seen.
Lakefield served as a proving ground not only for Intel’s 3D Foveros packaging tech but also for the software and operating system ecosystem. At its Architecture Day, Intel outlined performance gains for the Lakefield chips to highlight the promise of the hybrid design. Still, the results come with an important caveat: These types of performance improvements are only available through both hardware and operating system optimizations.
Because the faster and slower cores are each optimized for different voltage/frequency profiles, unlocking the maximum performance and efficiency requires the operating system and applications to be aware of the chip topology, ensuring workloads (threads) land on the correct core based upon the type of application.
For instance, if a latency-sensitive workload like web browsing lands in a slower core, performance will suffer. Likewise, if a background task is scheduled into the fast core, some of the potential power efficiency gains are lost. There’s already work underway in both Windows and various applications to support that technique via a hardware-guided OS scheduler.
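As a rough conceptual sketch of that idea (this is not Intel’s actual scheduler logic, and the task names are hypothetical), the basic routing decision looks something like this in Python:

```python
from dataclasses import dataclass

# Toy model of hybrid-aware scheduling: latency-sensitive threads go to the
# big (performance) cores, background work goes to the small (efficiency)
# cores. A real hardware-guided scheduler uses runtime feedback from the CPU,
# not a static flag; this is purely illustrative.

@dataclass
class Task:
    name: str
    latency_sensitive: bool

def pick_core_class(task: Task) -> str:
    """Choose a core class for a task on a big/small hybrid CPU (illustrative)."""
    return "big core" if task.latency_sensitive else "small core"

tasks = [
    Task("web browser tab", latency_sensitive=True),
    Task("background file indexing", latency_sensitive=False),
    Task("game render thread", latency_sensitive=True),
]

for task in tasks:
    print(f"{task.name} -> {pick_core_class(task)}")
```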
The current hybrid approach, as implemented in Lakefield, relies upon both core types supporting the same instruction set. Alder Lake’s larger Golden Cove cores support AVX-512, but it appears those instructions will be disabled to accommodate the fact that the Atom Gracemont cores do not support them. There is a notable caveat that any SKUs that come with only big cores might still support the instructions.
Intel Chief Architect Raja Koduri mentioned that a new “next-generation” hardware-guided OS scheduler that’s optimized for performance would debut with Alder Lake, but didn’t provide further details. This next-gen OS scheduler could add in support for targeting cores with specific instruction sets to support a split implementation, but that remains to be seen.
Intel fabs Alder Lake on its 10nm Enhanced SuperFin process, the second generation of its SuperFin technology, which you can learn more about in our deep-dive coverage.
Intel says the first 10nm SuperFin process provides the largest intra-node performance improvement in the company’s history, unlocking higher frequencies and lower power consumption than the first version of its 10nm node. The net effect, Intel says, is the same amount of performance uplift the company would normally expect from a whole series of intra-node “+” revisions, but in a single step.
The 10nm SuperFin transistors have what Intel calls breakthrough technology that includes a new thin barrier that reduces interconnect resistance by 30%, improved gate pitch so the transistor can drive higher current, and enhanced source/drain elements that lower resistance and improve strain. Intel also added a Super MIM capacitor that drives a 5X increase in capacitance, reducing vDroop. That’s important, particularly to avoid localized brownouts during heavy vectorized workloads and also to maintain higher clock speeds.
During its Architecture Day, Intel teased the next-gen variant of SuperFin, dubbed ’10nm Enhanced SuperFin,’ saying that this new process was tweaked to increase interconnect and general performance, particularly for data center parts (technically, this is 10nm+++, but we won’t quibble over an arguably clearer naming convention). This is the process used for Alder Lake, but unfortunately, Intel’s descriptions were vague, so we’ll have to wait to learn more.
We know that the 16-core models come armed with 30MB of L3 cache, while the 14-core / 20-thread chip has 24MB of L3 cache and 2.5MB of L2 cache. However, it is unclear how this cache is partitioned between the two types of cores, which leaves many questions unanswered.
Alder Lake also supports new instructions, like Architectural LBRs, HLAT, and SERIALIZE commands, which you can read more about here. Alder Lake also purportedly supports AVX2 VNNI, which “replicates existing AVX512 computational SP (FP32) instructions using FP16 instead of FP32 for ~2X performance gain.” This rapid math support could be part of Intel’s solution for the lack of AVX-512 support for chips with both big and small cores, but it hasn’t been officially confirmed.
Intel 12th-Generation Alder Lake Price
Intel’s Alder Lake is at least ten months away, so pricing is the wild card. Intel has boosted its 10nm production capacity tremendously over the course of 2020 and hasn’t suffered any recent shortages of its 10nm processors, so it should have enough capacity to keep costs within reasonable expectations. That said, predicting Intel’s 10nm supply this far out is difficult given the lack of substantive information on the matter.
However, Intel has proven with its Comet Lake, Ice Lake, and Cooper Lake processors that it is willing to lose margin in order to preserve its market share, and surprisingly, Intel’s recent price adjustments have given Comet Lake a solid value proposition compared to AMD’s Ryzen 5000 chips.
We can only hope that trend continues, but if Alder Lake brings forth both PCIe 5.0 and DDR5 support as expected, we could be looking at exceptionally pricey memory and motherboard accommodations.
The Mercury Research CPU market share results are in for the fourth quarter of 2020, with the headline news being that Intel has clawed back share from AMD in the desktop PC market for the first time in three years. Intel also stopped its slide in notebook PCs, gaining share for the first time since we began collecting data for that segment in early 2018. AMD also lost share in the overall x86 market during the quarter but notched a solid gain for the year. Meanwhile, AMD continued to make slow but steady gains in the server market.
It’s noteworthy that the fourth quarter of 2020 was anything but typical: The PC market continued its pandemic-fueled surge, seeing its fastest growth in a decade. For example, while AMD lost share in the overall x86 market (less IoT) during the quarter, Mercury Research pegs the overall x86 market growth rate at an explosive 20.1%.
Intel obviously captured more of that growth than AMD, but it’s important to remember that a slight loss of share in the midst of an explosive growth environment doesn’t equate to declining sales – AMD grew its processor revenue by 50% last year and posted record financial results for the year.
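To illustrate the arithmetic behind that point, here’s a quick Python sketch; the share figures (22.4% to 21.7%) come from the overall x86 table later in this piece, while the market sizes are hypothetical placeholders and the 20.1% growth rate is applied purely for the sake of the example:

```python
# Quick illustration: a small share loss in a fast-growing market can still
# mean higher unit sales. Share figures are from the overall x86 table below;
# the market sizes are made-up placeholders.

old_market_units = 100.0                     # hypothetical units shipped in the prior period
new_market_units = old_market_units * 1.201  # market grows 20.1%

old_units = old_market_units * 0.224         # 22.4% share of the smaller market
new_units = new_market_units * 0.217         # 21.7% share of the larger market

print(f"Old unit sales: {old_units:.1f}")    # 22.4
print(f"New unit sales: {new_units:.1f}")    # ~26.1
print(f"Unit growth despite the share loss: {new_units / old_units - 1:.1%}")  # ~16.3%
```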
Shortages have plagued AMD due to ongoing supply chain issues. Given the lack of AMD products on shelves, the company is obviously selling all of the silicon it can punch out, signaling strong demand. AMD expects to see ‘tightness’ throughout the first half of 2021 until added production capacity comes online, meaning we could see a limited supply of AMD’s PC and console chips until the middle of the year (you can see AMD CEO Lisa Su’s take on the situation in our recent interview).
Those shortages led to a scarcity of AMD’s chips during the critical holiday shopping season in the fourth quarter, while Intel’s chips were widely available and often selling at a discount. That obviously helped Intel recoup some share. During its recent earnings call, Intel also cited improving supply of lower-end processors, like those destined for Chromebooks, as a contributing factor. Intel CEO Bob Swan noted the company increased its PC CPU units by 33% during the fourth quarter.
Intel has also expanded its chip production by leaps and bounds over the last several years as it recovered from its own shortage of production capacity. The advantages of its IDM model are on clear display during the pandemic – the company’s tight control of its supply chain and production facilities has allowed it to better weather disruptions. That’s an important consideration as the company comes under intense criticism that it should spin off its fabs while it weighs how much of its own production to outsource (you can see Bob Swan’s take on the situation in our recent interview).
That said, given the dynamic nature of the market, it’s hard to draw firm conclusions on several of the categories below without more information. Dean McCarron of Mercury Research will provide us with detailed breakdowns for each segment in the morning, and we’ll add his analysis as soon as it is available. For now, here’s our analysis of the raw numbers.
Quarter | AMD Desktop Unit Share | QoQ / YoY Change (pp)
4Q20 | 19.3% | -0.8 / +1.0
3Q20 | 20.1% | +0.9 / +2.1
2Q20 | 19.2% | +0.6 / +2.1
1Q20 | 18.6% | +0.3 / +1.5
4Q19 | 18.3% | +0.3 / +2.4
3Q19 | 18% | +0.9 / +5
2Q19 | 17.1% | Flat / +4.8
1Q19 | 17.1% | +1.3 / +4.9
4Q18 | 15.8% | +2.8 / +3.8
3Q18 | 13% | +0.7 / +2.1
2Q18 | 12.3% | +0.1 / +1.2
1Q18 | 12.2% | +0.2 / +0.8
4Q17 | 12.0% | +1.1 / +2.1
3Q17 | 10.9% | -0.2 / +1.8
2Q17 | 11.1% | -0.3 / –
1Q17 | 11.4% | +1.5 / –
4Q16 | 9.9% | +0.8 / –
3Q16 | 9.1% | –
AMD recently introduced its Ryzen 5000 processors that take the lead in every meaningful metric from Intel’s Comet Lake chips, but a lack of supply could have hindered the company’s gains in this fast-growing segment. Intel’s Rocket Lake lands in Q1 2021, which could present more competition for AMD’s Ryzen 5000.
While AMD lost some share here during the quarter, it gained 1 percentage point for the year. However, AMD recently noted that its Ryzen 5000 chips doubled the launch sales of any other previous Ryzen generation, and annual processor revenue grew 50% even though the PC market only grew 13%. It’s logical to expect that AMD will prioritize the production of these higher-margin desktop processors to maximize its profitability.
AMD has noted that its shortages are most acute in the lower end of the PC market, while Intel says it has improved its own shipments of small-core (lower-end) CPUs.
Quarter | AMD Mobile Unit Share | QoQ / YoY Change (pp)
4Q20 | 19% | -1.2 / +2.8
3Q20 | 20.2% | +0.3 / +5.5
2Q20 | 19.9% | +2.9 / +5.8
1Q20 | 17.1% | +0.9 / +3.2
4Q19 | 16.2% | +1.5 / +4.0
3Q19 | 14.7% | +0.7 / +3.8
2Q19 | 14.1% | +1.0 / +5.3
1Q19 | 13.1% | +0.9 / ?
4Q18 | 12.2% | –
3Q18 | 10.9% | –
2Q18 | 8.8% | –
Recently, this has been AMD’s fastest-growing market. The mobile segment comprises roughly 60% of the client processor market, meaning any gains are very important in terms of overall volume and revenue.
Intel has cited its increasing penetration into the lower-end of the market, like Chromebooks, which likely contributed to its strong gains here. Again, AMD has said that its shortages are most pressing in the lower-end of the market.
Notably, AMD remained in the black here for the year, with a 2.8 percentage point gain. AMD recently launched its Ryzen 5000 Mobile processors, which bring the powerful Zen 3 microarchitecture to laptops for the first time. AMD has 50% more designs coming to market than the previous-gen Ryzen 4000 lineup, but supply could be tight.
AMD bases its server share projections on IDC’s forecasts but only accounts for the single- and dual-socket market, which eliminates four-socket (and beyond) servers, networking infrastructure and Xeon D’s (edge). As such, Mercury’s numbers differ from the numbers cited by AMD, which predict a higher market share. Here is AMD’s comment on the matter: “Mercury Research captures all x86 server class processors in their server unit estimate, regardless of device (server, network or storage), whereas the estimated 1P [single-socket] and 2P [two-socket] TAM [Total Addressable Market] provided by IDC only includes traditional servers.”
Quarter | AMD Server Unit Share | QoQ / YoY Change (pp)
4Q20 | 7.1% | +0.5 / +2.6
3Q20 | 6.6% | +0.8 / +2.3
2Q20 | 5.8% | +0.7 / +2.4
1Q20 | 5.1% | +0.6 / +2.2
4Q19 | 4.5% | +0.2 / +1.4
3Q19 | 4.3% | +0.9 / +2.7
2Q19 | 3.4% | +0.5 / +2.0
1Q19 | 2.9% | -0.3 / –
4Q18 | 3.2% | +1.6 / +2.4
3Q18 | 1.6% | +0.2 / –
2Q18 | 1.4% | –
4Q17 | 0.8% | –
AMD continues to chip away at Intel’s server share at a steady rate. These gains come on the cusp of the company’s highly anticipated EPYC Milan launch in March. It’s logical to expect that some customers may have paused purchases of current-gen EPYC Rome processors in anticipation of the looming Milan launch, and the resultant pent-up demand could increase AMD’s server penetration next quarter. Also, given the importance of this lucrative segment, AMD will likely prioritize server chip production.
Quarter | AMD Overall x86 Share | QoQ / YoY Change (pp)
4Q20 | 21.7% | -0.7 / +6.2
3Q20 | 22.4% | +4.1 / +6.3
2Q20 | 18.3% | +3.5 / +1.2 (+3.7?)
1Q20 | 14.8% | -0.7 / ?
4Q19 | 15.5% | +0.9 / +3.2
3Q19 | 14.6% | +0.7 / +4
2Q19 | 13.9% | ?
4Q18 | 12.3% | ?
3Q18 | 10.6% | –
The overall x86 market grew at an explosive 20.1% rate during the quarter, reflecting that a growing TAM benefits both players. AMD lost a minor amount of overall share during the quarter, but gained 6.2 percentage points for the year.
We’ll add McCarron’s segment analysis as soon as it is available.