ASRock Rack has quietly unveiled its new 1U short-depth, low-power server based on AMD’s Ryzen 5000 processors and the X570 chipset. The 1U2-X570/2T can be used for light server workloads or as a remote desktop.
Both AMD and Intel offer a broad range of EPYC and Xeon processors for a variety of workloads. These CPUs support numerous server-grade features and are optimized for 24/7 operation, but many of the entry-level parts are essentially desktop processors sold at a premium for a handful of server-specific features. Meanwhile, entry-level servers are supposed to be inexpensive and do not require any advanced functionality, which is why some server makers offer machines based on desktop CPUs. The 1U2-X570/2T is a good example of such a product.
The ASRock 1U2-X570/2T server uses the company’s X570D4I-2T mini-ITX motherboard and supports AMD’s desktop Ryzen and Ryzen Pro processors with up to a 105W TDP, including the latest 5000-series CPUs with up to 16 cores. The motherboard has four slots for SO-DIMM modules supporting up to 128GB of DDR4-2400 (2R/2DR) or DDR4-2933 (1R) memory with or without ECC (ECC is only supported with AMD’s Pro CPUs). Storage comprises one M.2-2280 slot for a PCIe 4.0 x4 or SATA SSD, two bays for 2.5-inch/7mm drives, and two bays for 3.5-inch drives. The server comes with Intel’s X550-AT2 controller driving two 10GbE ports, as well as a 1GbE connector for remote management enabled by the ASPeed AST2500 BMC. The machine is fed by a 265W 80Plus Bronze PSU.
While the X570D4I-2T motherboard has a PCIe 4.0 x16 slot for graphics cards, the 1U2-X570/2T machine cannot accommodate any add-in cards since this is a short depth machine. Furthermore, its 265W power supply is not really designed to handle decent graphics cards or special-purpose accelerators that typically consume well over 100W.
The ASRock Rack 1U2-X570/2T is not the first server from the company powered by AMD’s Ryzen 4000/5000 processors and the X570 chipset, as the company has three more such machines in its lineup. All of these 1U machines are designed to operate as remote entry-level workstations or handle light server workloads; they support up to 128GB of memory, can be equipped with up to seven 3.5-inch hard drives and at least two M.2 SSDs, can accommodate an FHHL PCIe 4.0 x16 add-in board, and come with relatively low-wattage PSUs (up to 450W).
The ASRock 1U2-X570/2T is already listed on the company’s website, but there is no word about its price or availability timeframe.
Intel’s new W-1300 series of Xeon processors briefly emerged in a compatibility list showing all CPUs supported in Intel’s latest LGA1200 socket. However, a few hours later they were taken down. This could mean that these new Xeon processors will support the same socket as Intel’s consumer-grade Rocket Lake products.
Because these chips are compatible with the LGA 1200 socket, they should be nearly identical to Intel’s upcoming Core i5, i7, and i9 Rocket Lake parts, but with a few alterations, including support for vPro technologies and ECC memory.
Intel has been doing this for years with its lower-end Xeon processors, re-purposing lower-end Core i5, i7, and i9 parts and turning them into Xeon chips. The strategy makes a lot of sense, as not all servers and workstations require HEDT levels of processing power and connectivity.
However, since the Skylake generation, Intel has severely limited its entry-level Xeons’ motherboard compatibility, requiring them to run on chipsets designed for workstations and servers. So while these chips share a socket with Intel’s consumer desktop parts, the W-1300 Xeons will not work in a standard H-, B-, or Z-series motherboard.
The W-1300 CPUs, which appeared on an ASRock list, were the W-1390, W-1390T, W-1350P, W-1350, and W-1370.
The main differences between the Xeon chips are their TDPs and, for some, the amount of L3 cache. For instance, the W-1350, W-1390, and W-1370 all carry an 80W TDP. The W-1390T has the lowest TDP at just 35W, and the W-1350P has the highest at 125W.
Additionally, the W-1350 and W-1350P come with less L3 cache, at 12MB instead of 16MB. Presumably, this reduction is due to a lower core count than their siblings.
Unfortunately, that’s all we know for now (if it’s even accurate). We still don’t know these chips’ prices, core counts, or boost frequencies. (But expect a maximum of eight cores for W-1300 chips due to the Rocket Lake architecture.)
Hopefully, we should have more information on Intel’s new W-1300 chips right around or after the official Rocket Lake launch.
Samsung has announced that it has developed the industry’s first 512GB memory module using its latest DDR5 memory devices that use high-k dielectrics as insulators. The new DIMM is designed for next-generation servers that use DDR5 memory, including those powered by AMD’s Epyc ‘Genoa’ and Intel’s Xeon Scalable ‘Sapphire Rapids’ processors.
Samsung’s 512GB DDR5 registered DIMM (RDIMM) uses 32 16GB packages, each an 8-Hi stack of eight 16Gb DRAM devices. The stacks use through-silicon via (TSV) interconnects to ensure low power consumption and clean signaling. For some reason, Samsung does not disclose the maximum data transfer rate its RDIMM supports, which is not entirely unexpected, as the company cannot reveal the specifications of next-generation server platforms.
An interesting thing about Samsung’s 512GB RDIMM is that it uses the company’s latest 16Gb DDR5 memory devices, which replace the traditional insulator with a high-k material originally used for logic gates in order to lower leakage current. This is not the first time Samsung has used HKMG technology for memory: back in 2018, it started using it for high-speed GDDR6 devices. Theoretically, the use of HKMG could help Samsung’s DDR5 devices hit higher data transfer rates too.
Samsung says that because of DDR5’s reduced voltages, the HKMG insulating layer and other enhancements, its DDR5 devices consume 13% less power than predecessors, which will be particularly important for the 512GB RDIMM aimed at servers.
When used with server processors featuring eight memory channels and two DIMMs per channel, Samsung’s new 512GB memory modules allow each CPU to be equipped with up to 8TB of DDR5 memory, up from 4TB today.
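For the curious, the capacity math behind those figures is straightforward. Here’s a quick sketch in Python, using only the numbers from the article (the variable names are ours):

# Samsung 512GB DDR5 RDIMM capacity and per-socket maximum
die_gb = 16 / 8                       # a 16Gb DRAM die holds 2GB
package_gb = die_gb * 8               # 8-Hi TSV stack -> 16GB per package
dimm_gb = package_gb * 32             # 32 packages -> 512GB per RDIMM
socket_tb = dimm_gb * 8 * 2 / 1024    # 8 channels x 2 DIMMs per channel -> 8TB
print(dimm_gb, socket_tb)             # 512.0 8.0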
Samsung says it has already started sampling DDR5 modules with various partners in the server community. The company expects its next-generation DIMMs to be validated and certified by the time servers using DDR5 memory hit the market.
“Intel’s engineering teams closely partner with memory leaders like Samsung to deliver fast, power-efficient DDR5 memory that is performance-optimized and compatible with our upcoming Intel Xeon Scalable processors, code-named Sapphire Rapids,” said Carolyn Duran, Vice President and GM of Memory and IO Technology at Intel.
Large companies like Google have been building their own servers for many years now in a bid to get machines that best suit their needs. Most of these servers run Intel’s Xeon processors with or without customizations, but feature additional hardware that accelerates certain workloads. For Google, this approach is no longer good enough. This week the company announced that it had hired Intel veteran Uri Frank to lead a newly established division that will develop custom system-on-chips (SoCs) for the company’s datacenters.
Google is not a newbie when it comes to hardware development. The company introduced its own Tensor Processing Unit (TPU) back in 2015, and today it powers various services, including real-time voice search, photo object recognition, and interactive language translation. In 2018, the company unveiled its video processing units (VPUs) to broaden the number of formats it can distribute videos in. In 2019, it followed with OpenTitan, the first open-source silicon root-of-trust project. Today, Google installs its own and third-party hardware onto motherboards next to an Intel Xeon processor. Going forward, the company wants to pack as many capabilities as it can into SoCs to improve performance, reduce latencies, and cut the power consumption of its machines.
“To date, the motherboard has been our integration point, where we compose CPUs, networking, storage devices, custom accelerators, memory, all from different vendors, into an optimized system,” Amin Vahdat, Google Fellow and Vice President of Systems Infrastructure, wrote in a blog post. “Instead of integrating components on a motherboard where they are separated by inches of wires, we are turning to SoC designs where multiple functions sit on the same chip, or on multiple chips inside one package.”
These highly integrated system-on-chips (SoCs) and system-in-packages (SiPs) for datacenters will be developed at a new engineering center in Israel, headed by Uri Frank, vice president of engineering for server chip design at Google, who brings 24 years of custom CPU design and delivery experience to the company. The cloud giant plans to recruit several hundred world-class SoC engineers to design its SoCs and SiPs, so these products are not going to show up in Google’s servers in 2022; they will more likely reach datacenters by the middle of the decade.
Google has a vision of tightly integrated SoCs replacing relatively disintegrated motherboards. The company is eager to develop the building blocks of its SoCs and SiPs itself, but will have nothing against buying them from third parties if needed.
“Just like on a motherboard, individual functional units (such as CPUs, TPUs, video transcoding, encryption, compression, remote communication, secure data summarization, and more) come from different sources,” said Vahdat. “We buy where it makes sense, build it ourselves where we have to, and aim to build ecosystems that benefit the entire industry.”
Google’s foray into datacenter SoCs is consistent with what its rivals Amazon Web Services and Microsoft Azure are doing. AWS already offers instances powered by its own Arm-based Graviton processors, whereas Microsoft is reportedly developing its own datacenter chip too. Google has yet to disclose whether it intends to build its own CPU cores or license them from Arm or another party, but since the company is early in its journey, it is probably weighing different options at this point.
“I am excited to share that I have joined Google Cloud to lead infrastructure silicon design,” Uri Frank wrote in a blog post. “Google has designed and built some of the world’s largest and most efficient computing systems. For a long time, custom chips have been an important part of this strategy. I look forward to growing a team here in Israel while accelerating Google Cloud’s innovations in compute infrastructure. Want to join me? If you are a world class SOC designer, open roles will be posted to careers.google.com soon.”
Intel’s new CEO Pat Gelsinger made several big announcements about its 7nm tech at today’s Intel Unleashed: Engineering the Future event, revealing that the company has solved the primary issues that caused an untenable delay to its 7nm products. As a result, Intel will tape in its first 7nm compute chip for desktop PCs, Meteor Lake, in the second quarter of this year, with the first chips shipping to customers in 2023. Intel also divulged that it will ship its 7nm Granite Rapids data center CPUs in 2023.
Intel says that 7nm’s issues stemmed from difficulties with a series of steps in its manufacturing process, and that it has leaned more heavily on EUV manufacturing to rearchitect that series of steps and simplify the design flow.
However, even though Intel has addressed the issues with its 7nm process and says that the majority of its products in 2023 will be produced in house, the delayed 7nm production schedule will leave the company in direct competition with chips built on more advanced nodes from competing foundries. As a result, Intel will also outsource the production of the CPU cores to TSMC for some of its key CPU models that will land in 2023. You can read more about that here.
Intel hasn’t shared the details about its CPUs that will feature outsourced cores, but the company did share that its Meteor Lake processors are built on the 7nm process and feature the company’s Foveros design. This technology allows for die-on-die logic stacking to fabricate 3D processors, as we see with Intel’s Lakefield chips.
Meteor Lake chips are thought to come with a combination of Intel’s Ocean Cove and Gracemont cores, meaning they’ll follow the same hybrid arrangement found in Alder Lake, but in a 3D-stacked fashion. Early Linux hardware enablement code has already shown up for Meteor Lake, so it’s clear that Intel is deep in the design process.
The 3D-stacked design could allow Intel to either use its own 7nm cores or swap in cores based on a process node sourced from a third-party foundry, like TSMC or Samsung, but Intel hasn’t shared any details about its outsourcing strategy yet.
Intel’s first 7nm server CPUs (Granite Rapids) will arrive in 2023, which is later than listed in earlier roadmaps that projected a launch in 2022, but on track with Intel’s initial revised timeline.
In either case, that timeline is concerning in the face of AMD’s continued execution with its EPYC data center chips – AMD’s roadmaps outline its 5nm Genoa processors coming to market before the end of 2022. That’s the obvious rationale behind Intel also using an outsourced TSMC node for some of its data center products that will launch in 2023. Again, it’s hard to tell if those outsourced chips will come as chiplets/tiles that merely snap into the same Granite Rapids package, or if they will come as entirely different new models.
Google is expanding efforts to design its own chips with the hiring of Uri Frank, an Intel veteran with over two decades of experience in custom CPU design, the company has announced. Frank will head up a new Israel-based team for Google, and will serve as the company’s VP of Engineering for server chip design. “I look forward to growing a team here in Israel while accelerating Google Cloud’s innovations in compute infrastructure,” Frank wrote in a LinkedIn post announcing the move.
As Google and other tech giants have sought more performance and power efficiency, they’ve increasingly turned towards custom chip designs tailored towards specific use cases. Google has already introduced several custom chips including its Tensor Processing Unit (to help with tasks like voice search and photo object recognition), Video Processing Units, and OpenTitan, an open-source security-focused chip.
On the consumer side, Google already designs custom chips like the Titan M and Pixel Neural Core for its phones. There have also been reports that Google is designing processors that could eventually power its Pixel phones and Chromebooks.
Despite the hire, Google cautions that it’s not planning on building every server chip itself. “We buy where it makes sense, build it ourselves where we have to, and aim to build ecosystems that benefit the entire industry,” the company explains. But the big change will be trying to integrate these different pieces of hardware on a single system on chip (SoC), rather than via a motherboard where they’re separated by “inches of wires” that introduce latency and reduce bandwidth. “The SoC is the new motherboard,” Google says.
Other tech giants have similar custom chip ambitions. Amazon has its ARM-based Graviton server chips while Facebook has announced data center chip designs of its own. Microsoft is also thought to be working on designing its own server chips, as well as processors for its lineup of Surface PCs. Apple has several chip designs to its credit, and is currently in the process of transitioning its Mac lineup from Intel to its own ARM-based processors.
Intel today announced that it plans to reveal more information about its upcoming Ice Lake server chips at the “How Wonderful Gets Done 2021” event on April 6.
The company said it plans to launch the 3rd Gen Xeon Scalable processors as well as “the latest additions to Intel’s hardware and software portfolio targeting data centers, 5G networks, and intelligent edge infrastructure” during the event.
Intel’s revealed precious little about Ice Lake. It offered some details about the 10nm chips in August 2020, and said in November 2020 that its 32-core parts would deliver better performance than AMD’s 64-core EPYC processors, but that’s about it.
Luckily we have learned a bit more from leaks. A 36-core Ice Lake processor leaked via Geekbench in December 2020, and last week Hewlett Packard Enterprise accidentally revealed a 40-core member of the lineup on its support website.
Intel said in January that Ice Lake chips had finally entered production after a series of delays pushed the processors back from their intended launch in 2020. Now it seems the company is finally ready to share more about the next-gen Xeon lineup.
The “How Wonderful Gets Done 2021” Launch Event will be held on April 6 at 8am PT and streamed via Intel’s website. Folks who can’t watch the event live should be able to view a replay on the Intel Newsroom.
Instagram, WhatsApp, and Facebook Messenger are down for many right now. More than 123,000 users have reported issues with Instagram on DownDetector. More than 23,000 users have reported issues with WhatsApp on DownDetector, too, and the service is down for one Verge staffer’s family, who is based in Europe. Facebook Messenger seems to be affected as well, with more than 5,000 reports of problems on DownDetector.
When navigating to Instagram’s website, I saw a white page with the message, “5xx Server Error.” And when I re-downloaded Instagram to my phone and tried to log in, I hit an error there, too.
Facebook, which owns Instagram and WhatsApp, didn’t immediately reply to a request for comment. The Facebook Gaming Twitter account acknowledged that “there are a number of issues currently affecting Facebook products, including gaming streams.” The account said that multiple teams are working on the issue.
There are a number of issues currently affecting Facebook products, including gaming streams. Multiple teams are working on it, and we’ll update you when we can.
— Facebook Gaming (@FacebookGaming) March 19, 2021
As you might expect, the memes about the outage are strong on Twitter (which, fortunately, seems to be hanging on):
This isn’t the only recent blip in Facebook’s services — Facebook Messenger and Instagram DMs went down back in December. Facebook and Instagram also experienced big outages over Thanksgiving in 2019.
In a surprising move, Intel this week began its Xe-HPG graphics architecture promotion campaign. So far, the company has posted a teaser video that leads to a website which announces an Xe-HPG-dedicated scavenger hunt game that starts on March 26, next Friday. Also, the video may give a clue about Intel’s internal codename for the first Xe-HPG GPU.
For starters, Intel has posted an Xe-HPG microarchitecture teaser video on Twitter. The footage emphasizes that Xe-HPG is both an evolution and an extension of Intel’s Xe-LP architecture, and it also contains three cryptic messages. When decoded, the first one leads to https://xehpg.intel.com, a website dedicated to the Xe-HPG Scavenger Hunt. The other two messages are coordinates, 79.0731W and 43.0823N, which point to a spot just west of Goat Island, overlooking the Niagara River near Niagara Falls.
Intel tends to give unannounced products codenames derived from geographical locations (which cannot be trademarked), such as cities, islands, or rivers. Keeping in mind that Intel’s 4th and 5th Generation Xeon Scalable server processors are codenamed Sapphire Rapids and Granite Rapids (i.e., after sections of rivers) and that their platform is called Eagle Stream (i.e., a body of water), it is highly likely that the first Xe-HPG GPU is codenamed Niagara Falls (i.e., another stretch of a river).
Intel powered on its first GPU based on the Xe-HPG architecture in late October 2020. Silicon bring-up, driver development, extensive testing, and the other steps needed to bring a new chip to market usually take about a year, so the upcoming GPU is unlikely to arrive earlier than Q4 2021. Starting a promotion campaign for a product that will not be available for more than half a year is a bit strange. Then again, Intel possibly wants to attract maximum attention to its gaming GPU architecture, perhaps to show gamers its dedication to the Xe-HPG project.
Hewlett Packard Enterprise briefly listed Intel’s 3rd Generation Xeon Scalable ‘Ice Lake-SP’ processors and inadvertently revealed their specifications. As spotted by @9550pro, it turns out higher-end extreme core count (XCC) versions of these products will carry up to 40 cores, significantly more than expected a few months ago. For servers, such a core count may be considered moderate by today’s standards. But how about putting such a 10nm Intel CPU into an extreme desktop or workstation?
One of the things that Intel has not officially disclosed about its next-generation Xeon Scalable ‘Ice Lake-SP’ processors is their maximum number of cores. Meanwhile, since Intel has been shipping production release qualification (PRQ) versions of its latest yet-to-be-announced server CPUs for several months now, it is hard to keep their specifications under wraps.
Intel’s 3rd Generation Xeon Scalable ‘Ice Lake-SP’ processor family will include a number of processors with more than 28 cores (the maximum number of cores supported by Intel’s Cooper Lake CPUs), including the following:
Intel’s Ice Lake-SP Processors Listed by HPE

Model | Frequency | Core Count | TDP
Xeon Platinum 8352S | 2.20 GHz | 32 cores | 205W
Xeon Platinum 8352Y | 2.20 GHz | 32 cores | 205W
Xeon Platinum 8358P | 2.60 GHz | 32 cores | 240W
Xeon Platinum 8358 (XCC) | 2.65 GHz | 32 cores | 250W
Xeon Platinum 8352V | 2.10 GHz | 36 cores | 195W
Xeon Platinum 8351N | 2.40 GHz | 36 cores | 225W
Xeon Platinum 8360Y (XCC) | 2.40 GHz | 36 cores | 250W
Xeon Platinum 8368 | 2.40 GHz | 38 cores | 270W
Xeon Platinum 8380 (XCC) | 2.30 GHz | 40 cores | 270W
This lineup is certainly not the complete Intel Ice Lake-SP range, but it gives some basic idea about the family.
When compared to AMD’s 64-core EPYC 7002-series ‘Rome’ or 7003-series ‘Milan’ processors, the 40 cores of Intel’s Xeon Scalable ‘Ice Lake-SP’ look relatively modest. Meanwhile, Intel’s strength in single-threaded performance, along with the increased core count, will make the new CPUs considerably more competitive than today’s Xeon Scalable offerings.
But what will be particularly interesting is whether Intel intends to use these CPUs for its next-generation processors aimed at high-end desktops and extreme workstations. Intel’s existing Xeon W-series products are based on the outdated Skylake or Cascade Lake microarchitectures, feature up to 28 cores, and were launched in 2019. Almost any upgrade to this lineup will inevitably be welcomed by PC makers and end users.
So far, Intel has not confirmed any plans to use its Ice Lake-SP design for HEDTs or workstations, but at least this seems like a reasonable idea keeping in mind that Intel’s Sapphire Rapids is at least a year away.
Nvidia on Thursday announced a new subscription tier for its GeForce Now cloud gaming service called Priority that will replace the existing paid Founders tier and retain the same perks, like extended session lengths and RTX support. The catch: the change comes with a price increase, from what used to be a $4.99-per-month subscription to a $9.99-per-month one for new subscribers. Nvidia will also start offering a $99.99-per-year Priority subscription.
However, those who had active memberships as of yesterday, March 17th, will be eligible for the Founders pricing for life, Nvidia says, which comes out to a little less than $60 per year. The company still plans to offer a free tier of GeForce Now, too, but that tier restricts you to a one-hour session length. Nvidia says the price hike is meant to represent the platform’s evolution since it launched in beta way back in 2015 and entered what Nvidia has referred to as a public testing phase a year ago.
“As GeForce Now enters year two, and rapidly approaches 10 million members, the service is ready to kick things up a notch,” the company said in a statement. “GeForce Now launched out of beta last February with Founders memberships — a limited time, promotional plan. On Thursday, Founders memberships will close to new registrations and Priority memberships, the new premium offering, will be introduced.”
Those who have tried out GeForce Now with a Founders subscription but let that subscription lapse may be displeased to find out that Nvidia does not intend to give the $4.99-per-month pricing to anyone who may have been a paying subscriber in the recent past, even if you let your subscription lapse a few days ago. You’ll need to have been an active, paying Founders member as of yesterday, and you’ll also need to keep the membership active to continue paying the reduced price. If you cancel, you’ll lose the promotion for good.
“Members need to be subscribed to the Founders membership as of 3/17/2021, and keep their membership in good standing, to be eligible for the benefit. If you were previously a Founders member but downgraded, unfortunately you’re not eligible,” an Nvidia spokesperson clarified to The Verge.
To its credit, Nvidia hasn’t sold monthly memberships for some time now, instead selling a promotional six-month bundle for $24.99. That makes it less likely that someone who subscribed any time in the last few months will find themselves ineligible for this Founders pricing perk.
Nvidia intends to continue upping its investment in the platform as it’s proved quite successful, with close to 10 million members, in the otherwise struggling cloud gaming scene. Most recently, Google closed down its in-house game development studios creating titles for its Stadia service, while Amazon’s Luna platform remains in beta.
GeForce Now differs from those platforms by letting members stream games they’ve already purchased from Epic, Steam, and other digital distributors over the cloud. The service’s paid tier launch last year was a bumpy one after high-profile publishers like Activision Blizzard and 2K Games pulled their libraries, a dispute caused by Nvidia streaming those companies’ games without explicit permission.
Since then, Nvidia has switched to an opt-in model to court game makers to the platform on friendlier terms, a strategy that’s paid off as Nvidia has added roughly 10 new games to the platform every week. The company now has a full list of supported titles on its website, a welcome addition after the rocky licensing fallout of last spring.
Nvidia says the tech will keep improving over time, while its “GFN Thursday” new game onboarding will jump from 10 new titles added per week to 15 by the end of the year. GeForce Now will later this month get support for Adaptive Vsync, which “synchronizes frame rates at 60 or 59.94 Hz server-side to match the display client-side, reducing stutter and latency on supported games,” the company explains. Nvidia says it’s also releasing a “new adaptive de-jitter technology” to increase bit rates for games streamed over slower networks. (Nvidia could not, however, provide a timeline for when the platform will support 4K streaming when asked.)
Other benefits coming soon include account linking for games with cross-platform support and improvements to preloading to cut down load times by half, both coming in the next one to two months. Nvidia says it’s also adding data center capacity in Phoenix, Arizona, as well as bringing online its first Canadian data center in Montreal later this year, both of which will help reduce wait times.
As for the company’s iOS beta, which launched back in November, Nvidia didn’t have much new to share. But a company spokesperson did say that “all previously announced projects continue to be on the roadmap in collaboration with the team at Epic,” referencing the ongoing work to bring Epic’s Fortnite back to the iPhone and iPad via GeForce Now on the mobile web after it was banned by Apple and Google last summer.
AMD unveiled its EPYC 7003 ‘Milan’ processors today, claiming that the chips, which bring the company’s powerful Zen 3 architecture to the server market for the first time, take the lead as the world’s fastest server processor with its flagship 64-core 128-thread EPYC 7763. Like the rest of the Milan lineup, this chip comes fabbed on the 7nm process and is drop-in compatible with existing servers. AMD claims it brings up to twice the performance of Intel’s competing Xeon Cascade Lake Refresh chips in HPC, Cloud, and enterprise workloads, all while offering a vastly better price-to-performance ratio.
Milan’s agility lies in the Zen 3 architecture and its chiplet-based design. This microarchitecture brings many of the same benefits that we’ve seen with AMD’s Ryzen 5000 series chips that dominate the desktop PC market, like a 19% increase in IPC and a larger unified L3 cache. Those attributes, among others, help improve AMD’s standing against Intel’s venerable Xeon lineup in key areas, like single-threaded work, and offer a more refined performance profile across a broader range of applications.
The other attractive features of the EPYC lineup are still present, too, like enhanced security, leading memory bandwidth, and the PCIe 4.0 interface. AMD also continues its general approach of offering all features with all of its chips, as opposed to Intel’s strict de-featuring that it uses to segment its product stack. As before, AMD also offers single-socket P-series models, while its standard lineup is designed for dual-socket (2P) servers.
The Milan launch promises to reignite the heated data center competition once again. Today marks the EPYC Milan processors’ official launch, but AMD actually began shipping the chips to cloud service providers and hyperscale customers last year. Overall, the EPYC Milan processors look to be exceedingly competitive against Intel’s competing Xeon Cascade Lake Refresh chips.
Like AMD, Intel has also been shipping to its largest customers; the company recently told us that it has already shipped 115,000 Ice Lake chips since the end of last year. Intel also divulged a few details about its Ice Lake Xeons at Hot Chips last year; we know the company has a 32-core model in the works, and it’s rumored that the series tops out at 40 cores. As such, Ice Lake will obviously change the competitive landscape when it comes to market.
AMD has chewed away desktop PC and notebook market share at an amazingly fast pace, but the data center is a much tougher market to crack. While this segment represents the golden land of high-volume and high-margin sales, the company’s slow and steady gains there lag its radical advance in the desktop PC and notebook markets.
Much of that boils down to the staunchly risk-averse customers in the enterprise and data center; these customers prize a mix of factors beyond the standard measuring stick of performance and price-to-performance ratios, instead focusing on areas like compatibility, security, supply predictability, reliability, serviceability, engineering support, and deeply-integrated OEM-validated platforms. To cater to the broader set of enterprise customers, AMD’s Milan launch also carries a heavy focus on broadening AMD’s hardware and software ecosystems, including full-fledged enterprise-class solutions that capitalize on the performance and TCO benefits of the Milan processors.
AMD’s existing EPYC Rome processors already hold the lead in performance-per-socket and pricing, easily outstripping Intel’s Xeon at several key price points. Given AMD’s optimizations, Milan will obviously extend that lead, at least until the Ice Lake debut. Let’s see how the hardware stacks up.
AMD EPYC 7003 Series Milan Specifications and Pricing

Model | Cores / Threads | Base / Boost (GHz) | L3 Cache (MB) | TDP (W) | 1K Unit Price
EPYC Milan 7763 | 64 / 128 | 2.45 / 3.5 | 256 | 280 | $7,890
EPYC Milan 7713 | 64 / 128 | 2.0 / 3.675 | 256 | 225 | $7,060
EPYC Rome 7H12 | 64 / 128 | 2.6 / 3.3 | 256 | 280 | ?
EPYC Rome 7742 | 64 / 128 | 2.25 / 3.4 | 256 | 225 | $6,950
EPYC Milan 7663 | 56 / 112 | 2.0 / 3.5 | 256 | 240 | $6,366
EPYC Milan 7643 | 48 / 96 | 2.3 / 3.6 | 256 | 225 | $4,995
EPYC Milan 75F3 | 32 / 64 | 2.95 / 4.0 | 256 | 280 | $4,860
EPYC Milan 7453 | 28 / 56 | 2.75 / 3.45 | 64 | 225 | $1,570
Xeon Gold 6258R | 28 / 56 | 2.7 / 4.0 | 38.5 | 205 | $3,651
EPYC Milan 74F3 | 24 / 48 | 3.2 / 4.0 | 256 | 240 | $2,900
EPYC Rome 7F72 | 24 / 48 | 3.2 / ~3.7 | 192 | 240 | $2,450
Xeon Gold 6248R | 24 / 48 | 3.0 / 4.0 | 35.75 | 205 | $2,700
EPYC Milan 7443 | 24 / 48 | 2.85 / 4.0 | 128 | 200 | $2,010
EPYC Rome 7402 | 24 / 48 | 2.8 / 3.35 | 128 | 180 | $1,783
EPYC Milan 73F3 | 16 / 32 | 3.5 / 4.0 | 256 | 240 | $3,521
EPYC Rome 7F52 | 16 / 32 | 3.5 / ~3.9 | 256 | 240 | $3,100
Xeon Gold 6246R | 16 / 32 | 3.4 / 4.1 | 35.75 | 205 | $3,286
EPYC Milan 7343 | 16 / 32 | 3.2 / 3.9 | 128 | 190 | $1,565
EPYC Rome 7302 | 16 / 32 | 3.0 / 3.3 | 128 | 155 | $978
EPYC Milan 72F3 | 8 / 16 | 3.7 / 4.1 | 256 | 180 | $2,468
EPYC Rome 7F32 | 8 / 16 | 3.7 / ~3.9 | 128 | 180 | $2,100
Xeon Gold 6250 | 8 / 16 | 3.9 / 4.5 | 35.75 | 185 | $3,400
AMD released a total of 19 EPYC Milan SKUs today, but we’ve winnowed that down to key price bands in the table above. We have the full list of the new Milan SKUs later in the article.
As with the EPYC Rome generation, Milan spans from eight to 64 cores, while Intel’s Cascade Lake Refresh tops out at 28 cores. All Milan models come with simultaneous multithreading (SMT), support up to eight channels of DDR4-3200 memory and 4TB of memory capacity, and provide 128 lanes of PCIe 4.0 connectivity. AMD supports both standard single- and dual-socket platforms, with the P-series chips slotting in for single-socket servers (we have those models in the expanded list below). The chips are drop-in compatible with the existing Rome socket.
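For a rough sense of what eight channels of DDR4-3200 buys, here is the standard theoretical-peak-bandwidth arithmetic (a sketch only; sustained real-world throughput is lower):

# Theoretical peak memory bandwidth per socket, 8 x DDR4-3200
channels = 8
transfers_per_second = 3200e6    # DDR4-3200 = 3,200 MT/s
bytes_per_transfer = 8           # 64-bit data bus per channel
peak_gb_s = channels * transfers_per_second * bytes_per_transfer / 1e9
print(f"{peak_gb_s:.1f} GB/s")   # 204.8 GB/s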
AMD added frequency-optimized 16-, 24-, and 32-core F-series models to the Rome lineup last year, helping the company boost its performance in frequency-bound workloads, like databases, that Intel has typically dominated. Those models return with a heavy focus on higher clock speeds, cache capacities, and TDPs compared to the standard models. AMD also added a highly-clocked 64-core 7H12 model for HPC workloads to the Rome lineup, but simply worked that higher-end class of chip into its standard Milan stack.
As such, the 64-core 128-thread EPYC 7763 comes with a 2.45 / 3.5 GHz base/boost frequency paired with a 280W TDP. This flagship part also comes armed with 256MB of L3 cache and supports a configurable TDP anywhere from 225W to 280W.
The 7763 marks the peak TDP rating for the Milan series, but the company has a 225W 64-core 7713 model that supports a TDP range of 225W to 240W for more mainstream applications.
All Milan models come with a default TDP rating (listed above), but they can operate between a lower minimum (cTDP Min) and a higher maximum (cTDP Max) threshold, allowing quite a bit of configurability within the product stack. We have the full cTDP ranges for each model listed in the expanded spec list below.
Milan’s adjustable TDPs now allow customers to tailor for different thermal ranges, and Forrest Norrod, AMD’s SVP and GM of the data center and embedded solutions group, says that the shift in strategy comes from the lessons learned from the first F- and H-series processors. These 280W processors were designed for systems with robust liquid cooling, which tends to add quite a bit of cost to the platform, but OEMs were surprisingly adept at engineering air-cooled servers that could fully handle the heat output of those faster models. As such, AMD decided to add a 280W 64-core model to the standard lineup and expanded the ability to manipulate TDP ranges across its entire stack.
AMD also added new 28- and 56-core options with the EPYC 7453 and 7663, respectively. Norrod explained that AMD had noticed that many of its customers had optimized their applications for Intel’s top-of-the-stack servers that come with multiples of 28 cores. Hence, AMD added new models that mesh well with those optimizations, making it easier for customers to port over applications tuned for Xeon platforms. Naturally, AMD’s 28-core model’s $1,570 price tag looks plenty attractive next to Intel’s $3,651 asking price for its own 28-core part.
AMD made a few other adjustments to the product stack based on customer buying trends, like reducing three eight-core models to one F-series variant, and removing a 12-core option entirely. AMD also added support for six-way memory interleaving on all models to lower costs for workloads that aren’t sensitive to memory throughput.
Overall, Milan offers similar TDP ranges, memory, and PCIe support at any given core count as its predecessor, but comes with higher clock speeds, performance, and pricing.
Milan also comes with the performance uplift granted by the Zen 3 microarchitecture. Higher IPC and frequencies, not to mention more refined boost algorithms that extract the utmost performance within the thermal confines of the socket, help improve Milan’s performance in the lightly-threaded workloads where Xeon has long held an advantage. The higher per-core performance also translates to faster performance in threaded workloads, too.
Meanwhile, the larger unified L3 cache results in a simplified topology that ensures broader compatibility with standard applications, thus removing the lion’s share of the rare eccentricities that we’ve seen with prior-gen EPYC models.
The Zen 3 microarchitecture brings the same fundamental advantages that we’ve seen with the desktop PC and notebook models (you can read much more about the architecture here), like reduced memory latency, doubled INT8 and floating point performance, and higher integer throughput.
AMD also added support for memory protection keys and AVX2 support for VAES/VPCLMULQDQ instructions, bolstered security for hypervisors and VM memory/registers, added protection against return-oriented-programming attacks, and made a just-in-time update to the Zen 3 microarchitecture to provide in-silicon mitigation for the Spectre vulnerability (among other enhancements shown in AMD’s slides). As before, Milan remains unaffected by other major security vulnerabilities, like Meltdown, Foreshadow, and Spoiler.
The EPYC Milan SoC adheres to the same (up to) nine-chiplet design as the Rome models and is drop-in compatible with existing second-gen EPYC servers. Just like the consumer-oriented chips, Core Complex Dies (CCDs) based on the Zen 3 architecture feature eight cores tied to a single contiguous 32MB slice of L3 cache, which stands in contrast to Zen 2’s two four-core CCXes, each with its own 16MB cluster. The new arrangement gives all eight cores direct access to the full 32MB of L3 cache, reducing latency.
This design also increases the amount of cache available to a single core, thus boosting performance in multi-threaded applications and enabling lower-core count Milan models to have access to significantly more L3 cache than Rome models. The improved core-to-cache ratio boosts performance in HPC and relational database workloads, among others.
Second-gen EPYC models supported either 8- or 4-channel memory configurations, but Milan adds support for 6-channel interleaving, allowing customers that aren’t memory bound to use less system RAM to reduce costs. The 6-channel configuration supports the same DDR4-3200 specification for single DIMM per channel (1DPC) implementations. This feature is enabled across the full breadth of the Milan stack, but AMD sees it as most beneficial for models with lower core counts.
Milan also features the same 32-bit AMD Secure Processor in the I/O Die (IOD) that manages cryptographic functionality, like key generation and management for AMD’s hardware-based Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV) features. These are key advantages over Intel’s Cascade Lake processors, but Ice Lake will bring its own memory encryption features to bear. AMD’s Secure Processor also manages its hardware-validated boot feature.
AMD EPYC Milan Performance
[Slide gallery: AMD’s EPYC Milan performance slides]
AMD provided its own performance projections based on its internal testing. However, as with all vendor-provided benchmarks, we should view these with the appropriate level of caution. We’ve included the testing footnotes at the end of the article.
AMD claims the Milan chips are the fastest server processors for HPC, cloud, and enterprise workloads. The first slide outlines AMD’s progression compared to Intel in SPECrate2017_int_base over the last few years, highlighting its continued trajectory of significant generational performance improvements. The second slide outlines how SPECrate2017_int_base scales across the Milan product stack, with Intel’s best published scores for two key Intel models, the 28-core 6258R and 16-core 4216, added for comparison.
Moving on to a broader range of applications, AMD says existing two-socket 7H12 systems already hold an easy lead over Xeon in the SPEC2017 floating point tests, but the Milan 7763 widens the gap to a 106% advantage over the Xeon 6258R. AMD uses this comparison for the two top-of-the-stack chips, but be aware that this is a bit lopsided: The 6258R carries a tray price of $3,651 compared to the 7763’s $7,890 asking price. AMD also shared benchmarks comparing the two in SPEC2017 integer tests, claiming a similar 106% speedup. In SPECJBB 2015 tests, which AMD uses as a general litmus test for enterprise workloads, AMD claims 117% more performance than the 6258R.
The company also shared a few test results from the middle of its product stack, claiming that its 32-core part also outperforms the 6258R. All of this, AMD says, translates to improved TCO for customers: lower pricing and higher compute density mean fewer servers, less rack space, and lower overall power consumption.
Finally, AMD has a broad range of ecosystem partners with fully-validated platforms available from top-tier OEMs like Dell, HP, and Lenovo, among many others. These platforms are fed by a broad constellation of solutions providers as well. AMD also has an expansive list of instances available from leading cloud service providers like AWS, Azure, Google Cloud, and Oracle, to name a few.
Full AMD EPYC 7003 ‘Milan’ SKU list:

Model # | Cores / Threads | Base / Max Boost (GHz) | Default TDP (W) | cTDP Min / Max (W) | L3 Cache (MB) | DDR Channels | Max DDR Freq (1DPC) | PCIe 4.0 | 1Ku Pricing
7763 | 64 / 128 | 2.45 / 3.50 | 280 | 225 / 280 | 256 | 8 | 3200 | x128 | $7,890
7713 | 64 / 128 | 2.00 / 3.68 | 225 | 225 / 240 | 256 | 8 | 3200 | x128 | $7,060
7713P | 64 / 128 | 2.00 / 3.68 | 225 | 225 / 240 | 256 | 8 | 3200 | x128 | $5,010
7663 | 56 / 112 | 2.00 / 3.50 | 240 | 225 / 240 | 256 | 8 | 3200 | x128 | $6,366
7643 | 48 / 96 | 2.30 / 3.60 | 225 | 225 / 240 | 256 | 8 | 3200 | x128 | $4,995
75F3 | 32 / 64 | 2.95 / 4.00 | 280 | 225 / 280 | 256 | 8 | 3200 | x128 | $4,860
7543 | 32 / 64 | 2.80 / 3.70 | 225 | 225 / 240 | 256 | 8 | 3200 | x128 | $3,761
7543P | 32 / 64 | 2.80 / 3.70 | 225 | 225 / 240 | 256 | 8 | 3200 | x128 | $2,730
7513 | 32 / 64 | 2.60 / 3.65 | 200 | 165 / 200 | 128 | 8 | 3200 | x128 | $2,840
7453 | 28 / 56 | 2.75 / 3.45 | 225 | 225 / 240 | 64 | 8 | 3200 | x128 | $1,570
74F3 | 24 / 48 | 3.20 / 4.00 | 240 | 225 / 240 | 256 | 8 | 3200 | x128 | $2,900
7443 | 24 / 48 | 2.85 / 4.00 | 200 | 165 / 200 | 128 | 8 | 3200 | x128 | $2,010
7443P | 24 / 48 | 2.85 / 4.00 | 200 | 165 / 200 | 128 | 8 | 3200 | x128 | $1,337
7413 | 24 / 48 | 2.65 / 3.60 | 180 | 165 / 200 | 128 | 8 | 3200 | x128 | $1,825
73F3 | 16 / 32 | 3.50 / 4.00 | 240 | 225 / 240 | 256 | 8 | 3200 | x128 | $3,521
7343 | 16 / 32 | 3.20 / 3.90 | 190 | 165 / 200 | 128 | 8 | 3200 | x128 | $1,565
7313 | 16 / 32 | 3.00 / 3.70 | 155 | 155 / 180 | 128 | 8 | 3200 | x128 | $1,083
7313P | 16 / 32 | 3.00 / 3.70 | 155 | 155 / 180 | 128 | 8 | 3200 | x128 | $913
72F3 | 8 / 16 | 3.70 / 4.10 | 180 | 165 / 200 | 256 | 8 | 3200 | x128 | $2,468
Thoughts
AMD’s general launch today gives us a good picture of the company’s data center chips moving forward, but we won’t know the full story until Intel releases the formal details of its 10nm Ice Lake processors.
The volume ramp for both AMD’s EPYC Milan and Intel’s Ice Lake has been well underway for some time, and both lineups have been shipping to hyperscalers and CSPs for several months. The HPC and supercomputing space also tends to receive early silicon, so it serves as a solid general litmus test for the rest of the market. AMD’s EPYC Milan has already enjoyed brisk uptake in those segments, and given that Intel’s Ice Lake hasn’t been at the forefront of as many HPC wins, it’s easy to assume, by a purely subjective measure, that Milan could hold some advantages over Ice Lake.
Intel has already slashed its pricing on server chips to remain competitive with AMD’s EPYC onslaught. It’s easy to imagine that the company will lean on its incumbency and all the advantages that entails, like its robust Server Select platform offerings, wide software optimization capabilities, platform adjacencies like networking, FPGA, and Optane memory, along with aggressive pricing to hold the line.
AMD has obviously prioritized its supply of server processors during the pandemic-fueled supply chain disruptions and explosive demand that we’ve seen over the last several months. It’s natural to assume that the company has been busy building Milan inventory for the general launch. We spoke with AMD’s Forrest Norrod, and he tells us that the company is taking steps to ensure that it has an adequate supply for its customers with mission-critical applications.
One thing is clear, though. Both x86 server vendors benefit from a rapidly expanding market, but ARM-based servers have become more prevalent than we’ve seen in the recent past. For now, the bulk of the ARM uptake seems limited to cloud service providers, like AWS with its Graviton 2 chips. In contrast, uptake is slow in the general data center and enterprise due to the complexity of shifting applications to the ARM architecture. Continuing and broadening uptake of ARM-based platforms could begin to change that paradigm in the coming years, though, as x86 faces its most potent threat in recent history. Both x86 vendors will need a steady cadence of big performance improvements in the future to hold the ARM competition at bay.
Unfortunately, we’ll have to wait for Ice Lake to get a true view of the competitive x86 landscape over the next year. That means the jury is still out on just what the data center will look like as AMD works on its next-gen Genoa chips and Intel readies Sapphire Rapids.
AMD will unveil its EPYC 7003 Milan processors during a live webcast that you can watch here on March 15, 2021, at 11am ET (8am PT), marking the company’s first release of processors for the data center based on the Zen 3 architecture. The live stream will include presentations from AMD CEO Lisa Su, CTO Mark Papermaster, and SVP and GM of the data center group, Forrest Norrod.
Update: The NDA has expired. You can see our full breakdown and analysis here, which covers the finer details of the live stream below.
Beyond an accidentally-posted presentation in 2019, AMD hasn’t officially revealed many details around its Milan lineup. However, it recently teased a performance benchmark at CES 2021, and a vendor recently posted specifications and pricing for several models.
Early indications suggest that, as with the current-gen EPYC Rome processors, AMD fabs the EPYC Milan chips on the 7nm process, and they top out at 64 cores. The most significant change to the series comes with the infusion of the Zen 3 microarchitecture, which lends a 19% increase in instructions-per-cycle (IPC) throughput through several changes, like a unified L3 cache and better thermal management that lets the chip extract more performance within any given TDP range.
Even though we’ve seen shortages on the consumer side of AMD’s business, the company has obviously prioritized server chip production. As a result, it has continued to slowly whittle away at Intel’s commanding lead in the data center. Faced with unrelenting pressure from a surprisingly nimble competitor, Intel has significantly reduced gen-on-gen pricing with the debut of its Cascade Lake Refresh Xeon models, by as much as 60% in some cases, slightly adjusting the chips’ capabilities in a way that largely amounts to a price cut in the guise of new models.
To counter, AMD bulked up its EPYC Rome lineup with its workload-optimized 7F and 7H parts, which come with higher power consumption and thermals than the standard 7002 series chips but feature higher frequencies, allowing AMD to challenge Intel’s traditional lead in per-core performance.
But now the landscape will change once again. The Milan launch, not to mention Intel’s pending 10nm Ice Lake launch, promises to reignite the heated data center competition. You can watch the presentation here live, but be sure to check out our full analysis after the announcement.
Chances are that there are planes flying over your house right now. Using a Raspberry Pi and a device known as an ADS-B receiver, we can create our very own “radar” that shows the real-time location of these aircraft. If we have a projector, we can project it on the ceiling; otherwise, we can track it on a regular screen.
About ADS-B Technology
Lots of aircraft are outfitted with a device known as an ADS-B transmitter. ADS-B stands for “Automatic Dependent Surveillance-Broadcast,” a technology that allows aircraft to transmit positional information about themselves to other aircraft, ground-based stations, and even satellite-based stations. For smaller planes where it isn’t feasible to install more complicated collision-avoidance technologies, an ADS-B transmitter and receiver can do a lot to increase flight safety.
Aircraft outfitted with ADS-B transmitters (which are becoming mandatory in more and more countries) broadcast a variety of positional data, like altitude, GPS coordinates, and ground speed. Fortunately for us, all of this data is transmitted on a standard frequency, and it’s unencrypted. This means that with a small USB dongle and a Raspberry Pi, we can listen in to the positional information of aircraft nearby.
What You’ll Need For This Project
Raspberry Pi 4 or Raspberry Pi 3 with power adapter
8 GB (or larger) microSD card with Raspberry Pi OS. See our list of best microSD cards for Raspberry Pi.
ADS-B Receiver Kit (Antenna and USB dongle) like this one.
Monitor or Projector with HDMI and power cables. If you want to project on the ceiling, you’ll need a projector.
How to Track Local Airplanes with Raspberry Pi
Before you get started, make sure that you have your Raspberry Pi OS set up. If you haven’t done this before see our article on how to set up a Raspberry Pi for the first time or how to do a headless Raspberry Pi install (without the keyboard and screen).
1. Update Raspberry Pi OS by entering the commands below at the command prompt. This almost goes without saying, but is a good practice.
sudo apt-get update -y
sudo apt-get upgrade -y
2. Install the base components we’ll need to communicate with the ADS-B receiver and display aircraft positions using Python.
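(The exact package list is missing from the article as published. Based on dump1090’s documented build dependencies and the tools used in later steps, a reasonable set is below; libbladerf-dev may or may not be needed depending on your dump1090 version.)

sudo apt-get install -y git build-essential pkg-config libusb-1.0-0-dev librtlsdr-dev libncurses5-dev libbladerf-dev
sudo apt-get install -y python3-pip python3-dev
sudo pip3 install virtualenv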
3. Clone the dump1090 repository into your home directory. Dump1090 is a decoder that will let us decode ADS-B messages into readable JSON.
cd ~/
git clone https://github.com/flightaware/dump1090.git
4. Build dump1090. This may take a bit of time depending on your model of Raspberry Pi.
cd dump1090
make
5. Connect your ADS-B receiver to the Raspberry Pi’s USB port.
6. Run dump1090 from within its directory.
./dump1090 --interactive
You should see a table appear in your console with various rows filled with data for overhead airplanes, including their altitude and flight number.
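If you’d like to consume the decoded data programmatically rather than through the interactive table, dump1090 can also expose a plain-text BaseStation (SBS) feed over TCP when run with networking enabled (./dump1090 --interactive --net). Here’s a minimal Python sketch of our own, not part of this project’s code, that prints position reports from that feed; port 30003 is dump1090’s customary SBS port, but double-check your build’s documentation:

# Read dump1090's SBS (BaseStation) feed and print aircraft positions
import socket

HOST, PORT = "127.0.0.1", 30003  # dump1090's standard SBS output port

with socket.create_connection((HOST, PORT)) as sock:
    buffer = b""
    while True:
        buffer += sock.recv(4096)
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            fields = line.decode(errors="ignore").strip().split(",")
            # MSG type 3 carries airborne positions: callsign, altitude, lat, lon
            if len(fields) > 15 and fields[0] == "MSG" and fields[1] == "3":
                callsign, altitude = fields[10].strip(), fields[11]
                lat, lon = fields[14], fields[15]
                if lat and lon:
                    print(f"{callsign or 'unknown'}: {altitude} ft at ({lat}, {lon})")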
Now that we’ve got our ADS-B decoder installed, we can download the projection code. I wrote a simple program using Python and the pygame library that displays the real-time location of aircraft, along with their flight numbers and altitudes (all from dump1090). You’re more than welcome to modify it or build your own.
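To give a flavor of the core idea before we clone it, here’s a heavily simplified pygame sketch of how such a program can map latitude/longitude onto screen pixels. The variable names, example coordinates, and bounding-box convention here are our own assumptions for illustration; the real implementation lives in the repository cloned below.

# Minimal pygame sketch: linearly map lat/lon into window coordinates
import pygame

LAT_MIN, LAT_MAX = 43.62, 43.68    # south/north edges of the tracked area (example values)
LON_MIN, LON_MAX = -79.49, -79.30  # west/east edges of the tracked area (example values)
WIDTH, HEIGHT = 1280, 720

def to_screen(lat, lon):
    # Linear interpolation of coordinates into pixels; y is flipped because
    # screen coordinates grow downward while latitude grows northward.
    x = (lon - LON_MIN) / (LON_MAX - LON_MIN) * WIDTH
    y = (1 - (lat - LAT_MIN) / (LAT_MAX - LAT_MIN)) * HEIGHT
    return int(x), int(y)

pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
screen.fill((0, 0, 0))
pygame.draw.circle(screen, (0, 255, 0), (WIDTH // 2, HEIGHT // 2), 5)      # our position
pygame.draw.circle(screen, (255, 255, 255), to_screen(43.65, -79.40), 3)   # an example aircraft
pygame.display.flip()
pygame.time.wait(5000)  # keep the window up briefly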
7. Clone the Raspberry Pi Flight Tracker git.
cd ~/
git clone https://github.com/rydercalmdown/raspberry_pi_flight_tracker.git
8. Set up a virtual environment with python3 for the flight tracker.
cd raspberry_pi_flight_tracker
virtualenv -p python3 env
9. Activate the virtual environment and install the Python requirements, as shown below.
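(The commands for this step are missing from the article as published. Assuming the repository follows the usual Python conventions with a requirements.txt file, they would be:)

source env/bin/activate
pip install -r requirements.txt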
10. Rename the environment.sample.sh to environment.sh and open the new file for editing.
mv environment.sample.sh environment.sh
# edit the file with nano
nano environment.sh
11. Edit the file to set the values for your current latitude and longitude, along with the maximum latitude and longitude. The maximums determine how much of the area around your location to display. An easy way to get your latitude and longitude values is to use Google Maps. First, find your location and right-click it to display a menu, then click the latitude and longitude values to copy them to your clipboard.
Next, zoom out from your current location. Pick a spot north of your current location and copy its coordinates to your clipboard. Then paste the first value (latitude) into your environment.sh file as LAT_MAX (in our example, 43.680222). Do the same with a spot south of your current location, and fill in the first value in your environment.sh file as LAT_MIN. These values determine how far tracking extends north and south of your location.
Next, pick a spot west of your current location and copy its coordinates to the clipboard. Use the second value (in our example, -79.49174) to fill in LON_MAX. Do the same with a spot east of your location for LON_MIN. These values determine how far tracking extends east and west of your current location.
When completed, your environment.sh file should look something like this (with your own coordinates).
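(The example listing itself is missing from the article as published. Using the coordinates referenced above, an illustrative environment.sh for a Toronto-area location might look like the following. The LAT_MAX/LAT_MIN/LON_MAX/LON_MIN names come from the steps above, while the names for your current position are assumptions, so check environment.sample.sh for the exact variables the repository expects.)

# environment.sh - illustrative values only
export LAT=43.651000       # current latitude (variable name assumed)
export LON=-79.347000      # current longitude (variable name assumed)
export LAT_MAX=43.680222   # northern boundary (example value from the text)
export LAT_MIN=43.620000   # southern boundary
export LON_MAX=-79.49174   # western boundary (example value from the text)
export LON_MIN=-79.200000  # eastern boundary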
12. Start the dump1090 server and the projection code together with a single command:
bash entrypoint.sh
If all goes well, after a moment you’ll be greeted with a blank screen with a dot in the center indicating your current position, and aircraft around you will show up as moving dots across the screen as their signals appear.
If you’re having trouble getting a signal, try moving your antenna to where it has a clear view of the sky, like an upstairs window.
13. If you’re using a projector, point it at the ceiling and line up the top of the screen with your magnetic north.
And there you have it. Your own personal aircraft “radar” system.