Siri appears to have spilled the beans on the next Apple launch event. MacRumors was first to report that asking the voice assistant “When is the next Apple Event?” prompts it to say April 20th. We’ve managed to get the same response, though only on a device associated with a US Apple ID. Apple typically announces its events with invites sent out a week in advance, meaning the news should become official later today.
“The special event is on Tuesday, April 20th, at Apple Park in Cupertino, CA. You can get all the details on Apple.com,” says Siri. Tapping the link takes you to the standard Apple Event landing page, where the launch isn’t listed. It seems odd that the response claims the event is happening in Cupertino, since this will almost certainly be another of Apple’s pre-recorded online-only events.
The message itself doesn’t give any hints about what Apple might be planning to announce at the event. However, recent rumors point towards new iPad Pro models, at least. Bloomberg recently reported new iPad Pros will debut in April. The 12.9-inch iPad Pro will reportedly be Apple’s first device with a Mini LED screen, allowing it to offer a high contrast ratio without the risk of burn-in associated with OLED displays. Reports indicate that the new iPad Pros could be in short supply due to production issues Apple is facing with the Mini LED displays. Other rumored iPad Pro upgrades include new processors with a similar amount of power to the M1 chip found in Apple’s recent MacBooks, better cameras, and USB-C ports with faster transfer speeds.
There are rumors that Apple is also close to launching its long-rumored AirTags. The location tracking devices, which should allow users to keep track of items using Apple’s Find My software, were expected to launch last year but ultimately never appeared. With Apple recently opening up its Find My network to track items from third-party companies, the stage is now set for the launch of its own physical trackers.
Unless Siri is lying, the invitations should arrive imminently.
Nvidia introduced two new data processing units (DPUs) at its GTC event on Monday, the BlueField-3 and the BlueField-4, that it says will be available in 2022 and 2024, respectively. The new DPUs are designed to offload network, storage, security, and virtualization loads from datacenter CPUs.
Nvidia’s BlueField-3 DPU is a custom-built system-on-chip that integrates 16 general-purpose Arm Cortex-A78 cores along with multiple accelerators for software-defined storage, networking, security, streaming, and TLS/IPsec cryptography, to name a few. It also has its own DDR5 memory interface. The Cortex-A78 cores can run a virtualization software stack and offload numerous workloads from host CPUs. Meanwhile, the VLIW accelerator engines can be programmed through Nvidia’s DOCA (data-center-on-a-chip architecture) software development kit to accelerate their workloads.
While BlueField-3 looks to be a pretty straightforward SoC on paper, it is an extremely complex chip with formidable compute performance: The DPU contains 22 billion transistors and features performance of 42 SPECrate2017_int (which is comparable to the performance of a 12-core 2nd Generation Xeon Scalable CPU) as well as 1.5 TOPS. Its successor — the BlueField-4 — will integrate 64 billion transistors and feature 160 SPECrate2017_int as well as 1000 TOPS performance.
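For a rough sense of the generational jump Nvidia is promising, here is a quick back-of-envelope comparison using only the figures quoted above (our own arithmetic, not Nvidia's):

```python
# Generational scaling from BlueField-3 to BlueField-4, using only the
# figures Nvidia quoted: transistors, SPECrate2017_int and AI TOPS.
bluefield3 = {"transistors_b": 22, "specint": 42, "tops": 1.5}
bluefield4 = {"transistors_b": 64, "specint": 160, "tops": 1000}

for metric in bluefield3:
    ratio = bluefield4[metric] / bluefield3[metric]
    print(f"{metric}: {ratio:.1f}x")  # ~2.9x, ~3.8x and ~667x respectively
```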
The BlueField-3 DPU will come on a card equipped with Mellanox’s 400Gbps Ethernet or InfiniBand NIC for connectivity. Its descendant, the BlueField-4, will feature 800Gbps connectivity, Nvidia revealed today.
From a technology perspective, Nvidia’s DPUs have nothing to do with the company’s GPUs as they are developed by engineers from Mellanox, a company that Nvidia now owns. But from a business perspective, they are just as promising as datacenter GPUs and SoCs, as demand for cloud computing is growing rapidly.
In recent years Nvidia put a lot of effort into the development of its datacenter and HPC GPUs, and presently it commands the lion’s share of those markets. Meanwhile, as datacenters are becoming more complex and various data manipulations by CPUs are getting costly in terms of performance and power, DPUs may actually face growing demand in the coming years. So it’s not surprising that Nvidia’s engineers are hard at work designing leading-edge DPUs as they are a natural way for the company to grow further in the datacenter.
DPUs can also benefit the company’s GPU business. As CPUs get freed from networking, security, and cryptography workloads, they will spend more clocks running applications, some of which also rely on GPUs. So more freed-up CPU cycles may lead to demand for more GPU cycles.
Nvidia says it will start sampling of its BlueField-3 DPU in the first quarter of 2022.
Nvidia today revealed its updated DGX Station A100 320G. As you might infer from the name, the new DGX Station sports four 80GB A100 GPUs, with more memory and memory bandwidth than the original DGX Station. The DGX Superpod has also been updated with 80GB A100 GPUs and Bluefield-2 DPUs.
Nvidia already revealed the 80GB A100 variants last year, with HBM2e clocked at higher speeds and delivering 2TB/s of bandwidth per GPU (compared to 1.6TB/s for the 40GB HBM2 model). We already knew as far back as November that the DGX Station would support the upgraded GPUs, but they’re now ‘available’ — or at least more available than any of the best graphics cards for gaming.
The DGX Station isn’t for gaming purposes, of course. It’s for AI research and content creation. It has refrigerated liquid cooling for the Epyc CPU and four A100 GPUs, and runs at a quiet 37 dB while consuming up to 1500W of power. It also delivers up to 2.5 petaFLOPS of floating-point performance and supports up to 7 MIGs (multi-instance GPU) per A100, giving it 28 MIGs total.
If you’re interested in getting a DGX Station, they run $149,000 per unit, or can be leased for $9,000 per month.
The DGX Superpod has received a few upgrades as well. Along with the 80GB A100 GPUs, it now supports Bluefield-2 DPUs (data processing units). These offer enhanced security and full isolation between virtual machines. Nvidia also now offers Base Command, the software the company uses internally to help share its Selene supercomputer among thousands of users.
DGX Superpod spans a range of performance, depending on how many racks are filled. It starts with 20 DGX A100 systems and 100 PFLOPS of AI performance, while a fully equipped system has 140 DGX A100 systems and 700 PFLOPS of AI performance. Each A100 system has dual AMD EPYC 7742 CPUs with 64 cores each, supports up to 2TB of memory, and has eight A100 GPUs. Each system also delivers 5 PFLOPS of AI compute, though FP64 is also available.
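Since each DGX A100 node contributes 5 PFLOPS of AI compute, the quoted SuperPOD performance range follows directly from the node count; a trivial sanity check:

```python
# Sanity check on the DGX SuperPOD scaling quoted above:
# each DGX A100 node delivers 5 PFLOPS of AI compute.
PFLOPS_PER_NODE = 5

for nodes in (20, 140):  # entry-level and fully equipped configurations
    print(f"{nodes} nodes -> {nodes * PFLOPS_PER_NODE} PFLOPS of AI compute")
# 20 nodes -> 100 PFLOPS, 140 nodes -> 700 PFLOPS
```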
The DGX Superpod starts at $7 million and scales up to $60 million for a fully equipped system. They’ll be available in Q2. We’ll take two, thanks.
The first trailer for Exposure — the upcoming, Samsung-branded reality TV show from Hulu that’s one part photography competition, one part Galaxy S21 Ultra advertisement — is here. But for a show that’s been paid for by Samsung explicitly to highlight the power of the cameras on Samsung’s phones, it’s almost suspiciously light on Samsung branding.
If you just gave Exposure a cursory glance (or, more likely, stumbled across it on Hulu one night) there’s nothing about it that immediately betrays its status as Samsung-branded content. There’s no Samsung logo on the title card, no mention of the Galaxy S21 Ultra or its 108-megapixel camera system, and no mention of the various software add-ons that help Samsung’s smartphone stand out from the rest.
In fact, Exposure’s trailer just gives the impression of being a photography-centered competition series, like The Great British Baking Show or Chopped. Look more closely, of course, and the cracks start to show. A photography contest without any DSLRs or mirrorless cameras? Why doesn’t anyone have a bag full of lenses or a holster with a secondary shooter handy? And yes, I suppose all the contestants do appear to be using the same phone, now that you mention it.
Which is, of course, the point: Exposure is, after all, still meant to highlight the S21 Ultra’s camera — even if its trailer isn’t exactly shoving that fact in your face. This either makes it the subtlest and best piece of branded content ever, or the worst.
It’s possible that Exposure will lean more heavily on its Samsung pedigree in the actual series itself; reality TV isn’t exactly known for its subtlety even in the most ordinary of circumstances.
But there is the chance that the limitations of only using Samsung’s latest smartphone could add some interesting wrinkles to the show. Any trained photographer that’s good enough to make it to Exposure’s level can almost certainly take great photos with their own equipment.
But by introducing a common variable (the S21 Ultra’s hardware, for better and for worse), the show can theoretically be about who can use that specific tool the best with things like staging, lighting, and editing; similar to how cooking shows tend to make contestants work within the limitations of the dish or ingredients of the week, rather than just making the thing that they’re most comfortable preparing.
We’ll find out when Exposure arrives on Hulu on April 26th.
Nvidia’s flagship A100 compute GPU introduced last year delivers leading-edge performance required by cloud datacenters and supercomputers, but the unit is way too powerful and expensive for more down-to-earth workloads. So today at GTC the company introduced two younger brothers for its flagship, the A30 for mainstream AI and analytics servers, and the A10 for mixed compute and graphics workloads.
Comparison of Nvidia’s A100-Series Datacenter GPUs
| | A100 for PCIe | A30 | A10 |
| --- | --- | --- | --- |
| FP64 | 9.7 TFLOPS | 5.2 TFLOPS | – |
| FP64 Tensor Core | 19.5 TFLOPS | 10.3 TFLOPS | – |
| FP32 | 19.5 TFLOPS | 10.3 TFLOPS | 31.2 TFLOPS |
| TF32 | 156 TFLOPS | 82 TFLOPS | 62.5 TFLOPS |
| Bfloat16 | 312 TFLOPS | 165 TFLOPS | 125 TFLOPS |
| FP16 Tensor Core | 312 TFLOPS | 165 TFLOPS | 125 TFLOPS |
| INT8 | 624 TOPS | 330 TOPS | 250 TOPS |
| INT4 | 1,248 TOPS | 661 TOPS | 500 TOPS |
| RT Cores | – | – | 72 |
| Memory | 40 GB HBM2 | 24 GB HBM2 | 24 GB GDDR6 |
| Memory Bandwidth | 1,555 GB/s | 933 GB/s | 600 GB/s |
| Interconnect | 12 NVLinks, 600 GB/s | ? NVLinks, 200 GB/s | – |
| Multi-Instance | 7 MIGs @ 5 GB | 4 MIGs @ 6 GB | – |
| Optical Flow Acceleration | – | 1 | – |
| NVJPEG | – | 1 decoder | ? |
| NVENC | – | ? | 1 encoder |
| NVDEC | – | 4 decoders | 1 decoder (+AV1) |
| Form Factor | FHFL | FHFL | FHFL |
| TDP | 250W | 165W | 150W |
The Nvidia A30: A Mainstream Compute GPU for AI Inference
Nvidia’s A30 compute GPU is indeed the A100’s little brother and is based on the same compute-oriented Ampere architecture. It supports the same features, a broad range of math precisions for AI as well as HPC workloads (FP64, FP64TF, FP32, TF32, bfloat16, FP16, INT8, INT4), and even multi-instance GPU (MIG) capability with 6GB instances. From a performance point of view, the A30 GPU offers slightly more than 50% of the A100’s performance, so we are talking about 10.3 FP32 TFLOPS, 5.2 FP64 TFLOPS, and 165 FP16/bfloat16 TFLOPS.
When it comes to memory, the unit is equipped with 24GB of DRAM featuring a 933GB/s bandwidth (we suspect Nvidia uses three HBM2 stacks at around 2.4 GT/s, but the company has not confirmed this). The memory subsystem seems to lack ECC support, which might be a limitation for those who need to work with large datasets. Effectively, Nvidia wants these customers to use its more expensive A100.
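That guess about the memory configuration is easy to sanity-check: three 1024-bit HBM2 stacks running at roughly 2.43 GT/s land right at the quoted bandwidth. This is our own back-of-envelope estimate, not a confirmed spec:

```python
# Back-of-envelope check of the suspected A30 memory configuration
# (three HBM2 stacks; not confirmed by Nvidia).
stacks = 3
bits_per_stack = 1024            # standard HBM2 stack interface width
data_rate_gtps = 2.43            # assumed per-pin transfer rate (GT/s)

bandwidth_gb_s = stacks * bits_per_stack * data_rate_gtps / 8
print(f"{bandwidth_gb_s:.0f} GB/s")  # ~933 GB/s, matching the quoted figure
```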
Nvidia traditionally does not disclose precise specifications of its compute GPU products at launch, yet we suspect that the A30 is exactly ‘half’ of the A100 with 3456 CUDA cores, though this is something that is unconfirmed at this point.
Nvidia’s A30 comes in a dual-slot, full-height, full-length (FHFL) form factor with a PCIe 4.0 x16 interface and a 165W TDP, down from the FHFL A100’s 250W. Meanwhile, the A30 supports one NVLink at 200 GB/s (down from 600 GB/s on the A100).
The Nvidia A10: A GPU for AI, Graphics, and Video
Nvidia’s A10 does not derive from the compute-oriented A100 and A30, but is an entirely different product that can be used for graphics, AI inference, and video encoding/decoding workloads. The A10 supports FP32, TF32, bfloat16, FP16, INT8 and INT4 formats for graphics and AI, but does not support the FP64 required for HPC.
The A10 is a single-slot FHFL graphics card with a PCIe 4.0 x16 interface that will be installed into servers running the Nvidia RTX Virtual Workstation (vWS) software and remotely powering workstations that need both AI and graphics capabilities. To a large degree, the A10 is expected to be a remote workhorse for artists, designers, engineers, and scientists (who do not need FP64).
Nvidia’s A10 seems to be based on the GA102 silicon (or a derivative of it), but since it supports INT8 and INT4 precisions, we cannot be 100% sure that this is physically the same processor that powers Nvidia’s GeForce RTX 3080/3090 and RTX A6000 cards. Meanwhile, the performance of the A10 (31.2 FP32 TFLOPS, 125 FP16 TFLOPS) sits in the range of the GeForce RTX 3080. The card comes equipped with 24GB of GDDR6 memory offering 600GB/s of bandwidth, which suggests the same 384-bit memory interface as the RTX 3090, but without GDDR6X’s clock speeds (or its power and heat).
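Assuming a 384-bit GDDR6 bus (our inference from the capacity and the RTX 3090 comparison, not a confirmed spec), the quoted bandwidth implies a fairly relaxed per-pin data rate:

```python
# Implied GDDR6 data rate for the A10, assuming a 384-bit memory bus
# (an inference from the 24GB capacity and RTX 3090 comparison above).
bandwidth_gb_s = 600
bus_width_bits = 384

per_pin_gbps = bandwidth_gb_s * 8 / bus_width_bits
print(f"{per_pin_gbps:.1f} Gbps per pin")  # 12.5 Gbps, well below GDDR6X speeds
```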
Pricing and Availability
Nvidia expects its partners to start offering machines with its A30 and A10 GPUs later this year.
HP has been unintentionally exposing AMD’s Ryzen 5000 (Cezanne) APUs. Apparently, the tech giant (via momomo_us) has also shared the specifications for the Ryzen 5000 Pro lineup via one of the company’s support documents.
The Pro series, which is oriented toward business and professional users, utilizes the same formula as its mainstream counterparts. Ryzen 5000 Pro APUs come equipped with the same potent Zen 3 cores and an improved Vega graphics engine. The processors still stick to a monolithic die design and are based on TSMC’s 7nm manufacturing process.
The most attractive trait of the Ryzen 5000 Pro lineup is the Zen 3 cores, which have proven to offer an IPC uplift of up to 19% in comparison to Zen 2. Unlike AMD’s Ryzen 5000 (Vermeer) chips, these new Zen 3 APUs will not offer PCIe 4.0 support. Ryzen 5000 Pro will drop into the current AM4 socket, so a simple firmware upgrade should be more than sufficient to get the APUs working on existing AMD motherboards.
AMD Ryzen 5000 Pro Cezanne APU Specifications
| Processor | Cores / Threads | Base / Boost Clocks (GHz) | L3 Cache (MB) | TDP (W) |
| --- | --- | --- | --- | --- |
| Ryzen 7 Pro 5750G | 8 / 16 | 3.8 / ? | 16 | 65 |
| Ryzen 7 5700G | 8 / 16 | 3.8 / 4.6 | 16 | 65 |
| Ryzen 5 Pro 5650G | 6 / 12 | 3.9 / ? | 16 | 65 |
| Ryzen 5 5600G | 6 / 12 | 3.9 / 4.4 | 16 | 65 |
| Ryzen 3 Pro 5350G | 4 / 8 | 4.0 / ? | 8 | 65 |
| Ryzen 3 5300G | 4 / 8 | 4.0 / 4.2 | 8 | 65 |
The Ryzen 7 Pro 5750G, Ryzen 5 Pro 5650G and Ryzen 3 Pro 5350G are the Pro equivalents of the Ryzen 7 5700G, Ryzen 5 5600G and Ryzen 3 5300G, respectively. The Ryzen 7 and Ryzen 5 SKUs are equipped with eight and six cores, while the Ryzen 3 model sports four cores. All three processors leverage simultaneous multithreading (SMT) to tackle demanding workloads.
Frequency-wise, Ryzen 5000 Pro processors should be identical to their non-Pro versions. The biggest difference between the two product lines is the feature set. The Pro variants come with enhanced security features, 18 months of software stability, 24 months of availability and a 36-month limited warranty.
Ryzen 5000 Pro APUs operate within the 65W thermal limit so they aren’t choosy when it comes to power or cooling requirements. Unlike Ryzen 5000 processors, the Ryzen 5000 Pro APUs possess an integrated Vega engine that’s powerful enough for many daily workloads so a discrete graphics option isn’t mandatory. Many of these Zen 3 APUs will likely find their way into very compact, business-oriented systems.
Unfortunately, there hasn’t been any indication of whether AMD’s Zen 3 APUs will be available on the retail market. Ryzen 4000 (Renoir) desktop APUs were aimed at OEMs. However, AMD did promise that the next generation of APUs would arrive on the DIY market, although the chipmaker didn’t specifically refer to Ryzen 5000.
Gigabyte’s Aorus Z590 Master is a well-rounded upper mid-range motherboard with a VRM rivaled by boards that cost twice as much. Between the Wi-Fi 6E and 10 GbE, three M.2 sockets and six SATA ports for storage, plus its premium appearance, the Z590 Master is an excellent option to get into the Z590 platform if you’re willing to spend around $400.
For
+ Fast Networking, Wi-Fi 6E/10 GbE
+ Superior 18-phase 90A VRM
+ 10 USB ports
Against
– No PCIe x1 slot(s)
– Audible VRM fan
– Price
Features and Specifications
Editor’s Note: A version of this article appeared as a preview before we had a Rocket Lake CPU to test with Z590 motherboards. Now that we do (and Intel’s performance embargo has passed), we have completed testing (presented on page 3) with a Core i9-11900K and have added a score and other elements (as well as removing some now-redundant sentences and paragraphs) to make this a full review.
Gigabyte’s Z590 Aorus Master includes an incredibly robust VRM, ultra-fast Wi-Fi and wired networking, premium audio, and more. While its price of roughly $410 is substantial, it’s reasonable for the features you get, and far from the price of the most premium models in recent generations. If you don’t mind a bit of audible VRM fan noise and like lots of USB and fast wired and wireless networking, it’s well worth considering.
Gigabyte’s current Z590 product stack consists of 13 models. There are familiar SKUs and a couple of new ones. Starting with the Aorus line, we have the Aorus Xtreme (and potentially a Waterforce version), Aorus Master, Aorus Ultra, and the Aorus Elite. Gigabyte brings back the Vision boards (for creators) and their familiar white shrouds. The Z590 Gaming X and a couple of boards from the budget Ultra Durable (UD) series are also listed. New for Z590 is the Pro AX board, which looks to slot somewhere in the mid-range. Gigabyte will also release the Z590 Aorus Tachyon, an overbuilt motherboard designed for extreme overclocking.
On the performance front, the Gigabyte Z590 Aorus Master did well overall, performing in line with the other boards that have raised power limits. There wasn’t a test where it did particularly poorly, and its MS Office and PCMark results were, on average, slightly higher than most. Overall, there is nothing to worry about when it comes to stock performance on this board. Overclocking proceeded without issue as well, reaching our 5.1 GHz overclock along with the memory sitting at DDR4 4000.
The Z590 Aorus Master looks the part of a premium motherboard, with brushed aluminum shrouds covering the PCIe/M.2/chipset area. The VRM heatsink and its NanoCarbon Fin-Array II provide a nice contrast against the smooth finish on the board’s bottom. Along with Wi-Fi 6E integration, it also includes an Aquantia-based 10GbE controller, while most others use 2.5 GbE. The Aorus Master includes a premium Realtek ALC1220 audio solution with an integrated DAC, three M.2 sockets, reinforced PCIe and memory slots and 10 total USB ports, including a rear USB 3.2 Gen2x2 Type-C port. We’ll cover those features and much more in detail below. But first, here are the full specs from Gigabyte.
Specifications – Gigabyte Z590 Aorus Master
| Specification | Detail |
| --- | --- |
| Socket | LGA 1200 |
| Chipset | Z590 |
| Form Factor | ATX |
| Voltage Regulator | 19 Phase (18+1, 90A MOSFETs) |
| Video Ports | (1) DisplayPort v1.2 |
| USB Ports | (1) USB 3.2 Gen 2x2, Type-C (20 Gbps); (5) USB 3.2 Gen 2, Type-A (10 Gbps); (4) USB 3.2 Gen 1, Type-A (5 Gbps) |
| Network Jacks | (1) 10 GbE |
| Audio Jacks | (5) Analog + SPDIF |
| Legacy Ports/Jacks | ✗ |
| Other Ports/Jacks | ✗ |
| PCIe x16 | (2) v4.0 x16 (x16/x0 or x8/x8); (1) v3.0 x4 |
| PCIe x8 | ✗ |
| PCIe x4 | ✗ |
| PCIe x1 | ✗ |
| CrossFire/SLI | AMD Quad-GPU CrossFire and 2-Way CrossFire |
| DIMM Slots | (4) DDR4 5000+, 128GB capacity |
| M.2 Slots | (1) PCIe 4.0 x4 / PCIe (up to 110mm); (2) PCIe 3.0 x4 / PCIe + SATA (up to 110mm) |
| U.2 Ports | ✗ |
| SATA Ports | (6) SATA3 6 Gbps (RAID 0, 1, 5 and 10) |
| USB Headers | (1) USB v3.2 Gen 2 (front-panel Type-C); (2) USB v3.2 Gen 1; (2) USB v2.0 |
| Fan/Pump Headers | (10) 4-Pin |
| RGB Headers | (2) aRGB (3-pin); (2) RGB (4-pin) |
| Legacy Interfaces | ✗ |
| Other Interfaces | FP-Audio, TPM |
| Diagnostics Panel | Yes, 2-character debug LED and 4-LED ‘Status LED’ display |
As we open up the retail packaging, along with the board, we’re greeted by a slew of included accessories. The Aorus Master contains the basics (guides, driver CD, SATA cables, etc.) and a few other things that make this board complete. Below is a full list of all included accessories.
Installation Guide
User’s Manual
G-connector
Sticker sheet / Aorus badge
Wi-Fi Antenna
(4) SATA cables
(3) Screws for M.2 sockets
(2) Temperature probes
Microphone
RGB extension cable
After taking the Z590 Aorus Master out of the box, its weight was immediately apparent, with the shrouds, heatsinks and backplate making up the majority of that weight. The board sports a matte-black PCB, with black and grey shrouds covering the PCIe/M.2 area and two VRM heatsinks with fins connected by a heatpipe. The chipset heatsink has the Aorus Eagle branding lit up, while the rear IO shroud arches over the left VRM bank with more RGB LED lighting. The Gigabyte RGB Fusion 2.0 application handles RGB control. Overall, the Aorus Master has a premium appearance and shouldn’t have much issue fitting in with most build themes.
Looking at the board’s top half, we’ll first focus on the VRM heatsinks. They are physically small compared to most boards, but don’t let that fool you. The fin array uses a louvered stacked-fin design Gigabyte says increases surface area by 300% and improves thermal efficiency with better airflow and heat exchange. An 8mm heat pipe also connects them to share the load. Additionally, a small fan located under the rear IO shroud actively keeps the VRMs cool. The fan here wasn’t loud, but was undoubtedly audible at default settings.
We saw a similar configuration in the previous generation, which worked out well with an i9-10900K, so it should do well with the Rocket Lake flagship, too. We’ve already seen reports indicating the i9-11900K has a similar power profile to its predecessor. Feeding power to the VRMs is two reinforced 8-pin EPS connectors (one required).
To the right of the socket, things start to get busy. We see four reinforced DRAM slots supporting up to 128GB of RAM. Oddly enough, the specifications only list support up to DDR4 3200 MHz, the platform’s limit. But further down the webpage, it lists DDR4 5000. I find it odd it is listed this way, though it does set up an expectation that anything above 3200 MHz is overclocking and not guaranteed to work.
Above the DRAM slots are eight voltage read points covering various relevant voltages. This includes read points for the CPU Vcore, VccSA, VccIO, DRAM, and a few others. When you’re pushing the limits and using sub-ambient cooling methods, knowing exactly what voltage the component is getting (software can be inaccurate) is quite helpful.
Above those, along the top edge, are four of the board’s 10 fan headers (a fifth sits next to the EPS connectors). According to the manual, all CPU fan and pump headers support 2A/24W each, so you shouldn’t have any issues powering fans and a water-cooling pump. Gigabyte doesn’t say whether these headers auto-sense DC or PWM control, but they handled both when set to ‘auto’ in the BIOS: both a PWM-controlled and a DC-controlled fan worked without intervention.
The first two (of four) RGB LED headers live to the fan headers’ right. The Z590 Aorus Master includes two 3-pin ARGB headers and two 4-pin RGB headers. Since this board takes a minimal approach to RGB lighting, you’ll need to use these to add more bling to your rig.
We find the power button and 2-character debug LED for troubleshooting POST issues on the right edge. Below is a reinforced 24-pin ATX connector for power to the board, another fan header and a 2-pin temperature probe header. Just below all of that are two USB 3.2 Gen1 headers and a single USB 3.2 Gen2x2 Type-C front-panel header for additional USB ports.
Gigabyte chose to go with a 19-phase setup for the Vcore and SOC on the power delivery front. Controlling power is an Intersil ISL6929 buck controller that manages up to 12 discrete channels. The controller then sends the power to ISL6617A phase doublers and the 19 90A ISL99390B MOSFETs. This is one of the more robust VRMs we’ve seen on a mid-range board allowing for a whopping 1,620A available for the CPU. You won’t have any trouble running any compatible CPU, including using sub-ambient overclocking.
The bottom half of the board is mostly covered in shrouds hiding all the unsightly but necessary bits. On the far left side, under the shrouds, you’ll find the Realtek ALC1220-VB codec along with an ESS Sabre 9118 DAC and audiophile-grade WIMA and Nichicon Fine Gold capacitors. With the premium audio codec and DAC, an overwhelming majority of users will find the audio perfectly acceptable.
We’ll find the PCIe slots and M.2 sockets in the middle of the board. Starting with the PCIe slots, there are a total of three full-length slots (all reinforced). The first and second slots are wired for PCIe 4.0, with the primary (top) slot wired for x16 and the second maxing out at x8. Gigabyte says this configuration supports AMD Quad-GPU CrossFire and 2-Way CrossFire. We didn’t see a mention of SLI support even though the lane count supports it. The bottom full-length slot is fed from the chipset and runs at PCIe 3.0 x4 speeds. Since the board does without x1 slots, this is the only expansion slot available if you’re using a triple-slot video card. Anything less than that allows you to use the second slot.
Hidden under the shrouds around the PCIe slots are three M.2 sockets. Unique to this setup is the Aorus M.2 Thermal Guard II, which uses a double-sided heatsink design to help cool M.2 SSD devices with double-sided flash. With these devices’ capacities rising and more using flash on both sides, this is a good value-add.
The top socket (M2A_CPU) supports PCIe 4.0 x4 devices up to 110mm long. The second and third sockets, M2P_SB and M2M_SB, support both SATA and PCIe 3.0 x4 modules up to 110mm long. When using a SATA-based SSD in M2P_SB, SATA port 1 will be disabled. When M2M_SB (the bottom socket) is in use, SATA ports 4 and 5 get disabled.
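Those lane-sharing rules are easy to lose track of when planning a build, so here is a small helper that reports which SATA ports you lose for a given set of populated M.2 sockets. This is our own illustrative sketch of the rules described above, with hypothetical names, not Gigabyte software:

```python
# Illustrative sketch of the Z590 Aorus Master's M.2/SATA sharing rules
# as described above (names and structure are our own, not Gigabyte's).
def disabled_sata_ports(populated):
    """populated: dict of socket name -> drive type ('sata' or 'pcie')."""
    disabled = set()
    if populated.get("M2P_SB") == "sata":   # SATA SSD in M2P_SB disables SATA port 1
        disabled.add(1)
    if "M2M_SB" in populated:               # any drive in M2M_SB disables SATA ports 4/5
        disabled.update({4, 5})
    return sorted(disabled)

print(disabled_sata_ports({"M2A_CPU": "pcie", "M2P_SB": "sata", "M2M_SB": "pcie"}))
# -> [1, 4, 5]
```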
To the right of the PCIe area is the chipset heatsink with the Aorus falcon lit up with RGB LEDs from below. There’s a total of six SATA ports that support RAID0, 1, 5 and 10. Sitting on the right edge are two Thunderbolt headers (5-pin and 3-pin) to connect to a Gigabyte Thunderbolt add-in card. Finally, in the bottom-right corner is the Status LED display. The four LEDs labeled CPU, DRAM, BOOT and VGA light up during the POST process. If something hangs during that time, the LED where the problem resides stays lit, identifying the problem area. This is good to have, even with the debug LED at the top of the board.
Across the board’s bottom are several headers, including more USB ports, fan headers and more. Below is the full list, from left to right:
Front-panel audio
BIOS switch
Dual/Single BIOS switch
ARGB header
RGB header
TPM header
(2) USB 2.0 headers
Noise sensor header
Reset button
(3) Fan headers
Front panel header
Clear CMOS button
The Z590 Aorus Master comes with a pre-installed rear IO panel full of ports and buttons. To start, there are a total of 10 USB ports out back, which should be plenty for most users. You have a USB 3.2 Gen2x2 Type-C port, five USB 3.2 Gen2 Type-A ports and four USB 3.2 Gen1 Type-A ports. There is a single DisplayPort output for those who would like to use the CPU’s integrated graphics. The audio stack consists of five gold-plated analog jacks and a SPDIF out. On the networking side is the Aquantia 10 GbE port and the Wi-Fi antenna. Last but not least is a Clear CMOS button and a Q-Flash button, the latter designed for flashing the BIOS without a CPU.
Intel has initiated the end-of-life plan for all of its Optane DC P4800X SSDs with Memory Drive Technology (MDT). The same drives without the Memory Drive software will continue to be shipped as long as demand is there, but the SKUs bundled with the software will no longer be available from Intel come October.
The discontinued family of Optane SSD DC P4800X with MDT products includes models with 100GB, 375GB, 750GB, and 1.5TB capacities in U.2 and card form-factors with a PCIe 3.0 x4 interface.
Along with the drives, Intel has also EOL’d the Memory Drive Technology software that’s sold separately for its Optane DC P4800X and SSD 900/905P drives. Interested parties should place their orders for these products by June 30, 2021; Intel will ship the last drives with MDT on September 30, 2021.
Intel’s Memory Drive Technology software extends system memory to Optane SSDs transparently to the OS and essentially makes 3D XPoint-based drives appear like DRAM to the OS and applications. The software was introduced in 2018 alongside the Optane SSD DC P4800X/P4801X as well as Optane SSD 900P/905P drives and was designed primarily to expand system memory capacity on first-gen Intel Xeon Scalable (and older) machines in a very cost-efficient way, as 3D XPoint is significantly cheaper than DRAM.
Back in 2018, Intel sold its 1st Generation Xeon Scalable processors (and even their predecessors) that did not support yet-to-be-launched Optane Persistent Memory modules, so the Memory Drive Technology software made quite a lot of sense for the company and its customers that needed a cheap system memory expansion for their in-memory applications. In mid-2019 the company introduced its 2nd Generation Xeon Scalable ‘Cascade Lake’ CPUs that added support for Optane Persistent Memory Modules and it became even easier for its clients to expand system memory using 3D XPoint-based PMMs.
By now, the share of outdated Xeon CPUs in Intel’s shipments has probably dropped so significantly that it no longer needs either MDT or drives that come with it. To that end, it does not make sense to keep the SKUs in the catalog. Meanwhile, regular Optane DC P4800X SSDs will continue to be shipped as Intel has not announced any plans about them.
ASRock has quietly introduced one of the industry’s first Intel Z590-based Mini-ITX motherboards with a Thunderbolt 4 port. The manufacturer positions its Z590 Phantom Gaming-ITX/TB4 platform as its top-of-the-range offering for compact gaming builds for enthusiasts that want to have all the capabilities of large tower desktops and then some, so it is packed with advanced features.
The ASRock Z590 Phantom Gaming-ITX/TB4 motherboard supports all of Intel’s 10th and 11th Generation Comet Lake and Rocket Lake processors, including the top-of-the-range Core i9-11900K with a 125W TDP.
One of the main selling points of the Z590 Phantom Gaming-ITX/TB4 motherboard is of course its Thunderbolt 4 port, which supports 40 Gb/s of throughput when attached to appropriate TB3/TB4 devices (or 10 Gb/s when connected to a USB 3.2 Gen 2 device), such as high-end external storage subsystems (in case internal storage is not enough on a Mini-ITX build), and can handle two 4K displays or one 8K monitor (albeit with DSC). Furthermore, the motherboard has five USB 3.2 Gen 2 ports on the back as well as an internal header for a front-panel USB 3.2 Gen 2×2 port that supports transfer rates up to 20 Gb/s.
The platform relies on a 10-layer PCB and is equipped with a 10-phase VRM featuring 90A solid-state coils, 90A DrMOS power stage solutions, and solid-state Nichicon 12K capacitors to ensure maximum performance, reliable operation, and some additional overclocking potential. Interestingly, the motherboard’s CPU fan header provides a maximum 2A power to support water pumps.
The Z590 Phantom Gaming-ITX/TB4 also has a PCIe 4.0 x16 slot for graphics cards, two slots for up to 64 GB of DDR4-4266+ memory, two M.2-2280 slots for SSDs (with a PCIe 4.0 x4 as well as a PCIe 3.0 x4/SATA interface), and three SATA connectors. To guarantee the consistent performance and stable operation of high-end SSDs, ASRock supplies its own heat spreaders for M.2 drives that match its motherboard’s design.
Being a top-of-the-range product, the ASRock Z590 Phantom Gaming-ITX/TB4 naturally has support for addressable RGB lighting (using the ASRock Polychrome Sync/Polychrome RGB software) and has a very sophisticated input/output department that has a number of unique features, such as three display outputs and multi-gig networking.
In addition, the motherboard has a DisplayPort 1.4 as well as an HDMI 2.0b connector. Keeping in mind that Intel’s desktop UHD Graphics has three display pipelines, the motherboard can handle three monitors even without a discrete graphics card. Meanwhile, Rocket Lake’s integrated UHD Graphics 730, based on Intel’s Xe-LP architecture, has very advanced media playback capabilities (e.g., a hardware-accelerated 12-bit video pipeline for wide-color 8K60 HDR playback), so it can handle Ultra-HD Blu-ray, contemporary video services that use modern codecs, and next-generation 8Kp60 video formats.
Next up is networking. The Z590 Phantom Gaming-ITX/TB4 comes with an M.2-2230 Killer AX1675x WiFi 6E + Bluetooth 5.2 PCIe module that supports up to 2.4 Gbps throughput when connected to an appropriate router. Also, the motherboard is equipped with a Killer E3100G 2.5GbE adapter. The adapters can be used at the same time courtesy of Killer’s DoubleShot Pro technology that aggregates bandwidth and prioritizes high-priority traffic, so the maximum networking performance can be increased up to 4.9 Gbps.
The audio department of the Z590 Phantom Gaming-ITX/TB4 is managed by the Realtek ALC1220 audio codec with Nahimic Audio software enhancements and includes 7.1-channel analog outputs as well as an S/PDIF digital output.
ASRock’s Z590 Phantom Gaming-ITX/TB4 motherboard will be available starting from April 23 in Japan, reports Hermitage Akihabara. In the Land of the Rising Sun, the unit will cost ¥38,000 (around $345) without taxes and ¥41,800 with taxes.
GPUs are known for being significantly better than most CPUs when it comes to training AI deep neural networks (DNNs) simply because they have more execution units (or cores). But a new algorithm proposed by computer scientists from Rice University is claimed to actually turn the tables and make CPUs a whopping 15 times faster than some leading-edge GPUs.
The most complex compute challenges are usually solved using brute force methods, like either throwing more hardware at them or inventing special-purpose hardware that can solve the task. DNN training is without any doubt among the most compute-intensive workloads nowadays, so if programmers want maximum training performance, they use GPUs for their workloads. This happens to a large degree because it is easier to achieve high performance using compute GPUs as most algorithms are based on matrix multiplications.
Anshumali Shrivastava, an assistant professor of computer science at Rice’s Brown School of Engineering, and his colleagues have presented an algorithm that can greatly speed up DNN training on modern AVX512 and AVX512_BF16-enabled CPUs.
“Companies are spending millions of dollars a week just to train and fine-tune their AI workloads,” said Shrivastava in a conversation with TechXplore. “The whole industry is fixated on one kind of improvement — faster matrix multiplications. Everyone is looking at specialized hardware and architectures to push matrix multiplication. People are now even talking about having specialized hardware-software stacks for specific kinds of deep learning. Instead of taking a [computationally] expensive algorithm and throwing the whole world of system optimization at it, I’m saying, ‘Let’s revisit the algorithm.'”
To prove their point, the scientists took SLIDE (Sub-LInear Deep Learning Engine), a C++ OpenMP-based engine that combines smart hashing randomized algorithms with modest multi-core parallelism on CPU, and optimized it heavily for Intel’s AVX512 and AVX512-bfloat16-supporting processors.
The engine employs Locality Sensitive Hashing (LSH) to identify neurons during each update adaptively, which optimizes compute performance requirements. Even without modifications, it can be faster in training a 200-million-parameter neural network, in terms of wall clock time, than the optimized TensorFlow implementation on an Nvidia V100 GPU, according to the paper.
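To make the core idea concrete, here is a highly simplified Python sketch of locality-sensitive hashing being used to pick which neurons to compute. This is our own toy illustration of the principle; the real SLIDE engine is a multi-table, multi-threaded C++/OpenMP implementation:

```python
import numpy as np

# Toy sketch of LSH-based neuron selection (the general idea behind SLIDE),
# not the actual SLIDE codebase.
rng = np.random.default_rng(0)
d, n_neurons, n_bits = 128, 10_000, 16

W = rng.standard_normal((n_neurons, d))      # one weight vector per neuron
planes = rng.standard_normal((n_bits, d))    # random hyperplanes for SimHash

def simhash(v):
    return tuple((planes @ v) > 0)           # sign pattern = hash bucket

# Build the hash table once: bucket -> ids of neurons whose weights hash there.
table = {}
for i, w in enumerate(W):
    table.setdefault(simhash(w), []).append(i)

# At training/inference time, only the neurons in the input's bucket are
# computed instead of the full dense layer.
x = rng.standard_normal(d)
active = table.get(simhash(x), [])
activations = W[active] @ x
print(f"computed {len(active)} of {n_neurons} neurons")
```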
“Hash table-based acceleration already outperforms GPU, but CPUs are also evolving,” said study co-author Shabnam Daghaghi.
To make hashing faster, the researchers vectorized and quantized the algorithm so that it could be better handled by Intel’s AVX512 and AVX512_BF16 engines. They also implemented some memory optimizations.
“We leveraged [AVX512 and AVX512_BF16] CPU innovations to take SLIDE even further, showing that if you aren’t fixated on matrix multiplications, you can leverage the power in modern CPUs and train AI models four to 15 times faster than the best specialized hardware alternative.”
The results they obtained with Amazon-670K, WikiLSHTC-325K, and Text8 datasets are indeed very promising with the optimized SLIDE engine. Intel’s Cooper Lake (CPX) processor can outperform Nvidia’s Tesla V100 by about 7.8 times with Amazon-670K, by approximately 5.2 times with WikiLSHTC-325K, and by roughly 15.5 times with Text8. In fact, even an optimized Cascade Lake (CLX) processor can be 2.55–11.6 times faster than Nvidia’s Tesla V100.
Without any doubt, optimized DNN algorithms for AVX512 and AVX512_BF16-enabled CPUs make a lot of sense since processors are pervasive as they are used by client devices, data center servers, and HPC machines. To that end, it is very important to take advantage of all of their capabilities.
But there might be a catch when it comes to absolute performance, so let’s speculate for a moment. Nvidia’s A100 promises to be 3–6 times faster than Nvidia’s Tesla V100 used by researchers for comparison (perhaps because getting an A100 is hard) in training. Unfortunately, we do not have any A100 numbers with Amazon-670K, WikiLSHTC-325K, and Text8 datasets. Perhaps, an A100 cannot beat Intel’s Cooper Lake when it uses an optimized algorithm, but these AVX512_BF16-enabled CPUs are not exactly widely available (like the A100). So, the question is, how does Nvidia’s A100 stack up against Intel’s Cascade Lake and Ice Lake CPUs?
NASA engineers have decided to delay the Ingenuity helicopter’s debut flight on Mars to at least Wednesday, April 14th, after running into a minor computer glitch during a rotor spin test late Friday night, the agency said on Saturday. The tiny craft is healthy, but engineers need some more time to review telemetry data from the unexpected hiccup before proceeding.
Ingenuity, a mini four-pound helicopter that arrived on Mars February 18th attached to NASA’s Perseverance rover, was initially slated to carry out its first flight test late Sunday night (or, mid-day Mars time). The first bits of data on whether the flight attempt was successful were expected to come early Monday morning, around 4AM ET.
But data from a high-speed rotor test carried out on Friday showed the test sequence “ended early due to a ‘watchdog’ timer expiration,” NASA said. It happened as Ingenuity’s computer was trying to switch from pre-flight mode to flight mode.
Ingenuity’s “watchdog timer” is just that — a software-based watchdog that oversees the helicopter’s test sequences and alerts engineers if anything looks abnormal. “It helps the system stay safe by not proceeding if an issue is observed and worked as planned,” NASA said in a blog post.
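The concept is the same one used throughout embedded software: a timer that must be "fed" before it expires, or the system aborts the sequence and flags the anomaly. A minimal, generic illustration (our own sketch, unrelated to JPL's actual flight software):

```python
import time

# Generic illustration of a software watchdog timer (our own sketch, not
# NASA/JPL flight code): if the monitored step doesn't report progress
# before the deadline, the sequence is aborted and flagged for review.
class Watchdog:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.kick()

    def kick(self):                 # called whenever the sequence makes progress
        self.deadline = time.monotonic() + self.timeout_s

    def expired(self):
        return time.monotonic() > self.deadline

wd = Watchdog(timeout_s=2.0)
# ... the transition from pre-flight to flight mode would kick the watchdog here ...
if wd.expired():
    print("watchdog expired: aborting test sequence and flagging telemetry")
```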
NASA emphasized the craft is healthy, and Ingenuity is still in good contact with engineers at the agency’s Jet Propulsion Laboratory in California.
Ingenuity was deployed by Perseverance on the Martian surface on April 4th, kicking off a 31-day clock in which five flight tests are planned. For its first flight demonstration, the helicopter will ascend 10 feet above the surface and hover for about 30 seconds, aiming to achieve the first-ever powered flight on another world. Depending how the first test goes, subsequent tests will involve Ingenuity soaring to higher altitudes and buzzing around within its running track-shaped flight zone at Mars’ Jezero Crater.
A group of enthusiasts has unlocked vGPU (GPU virtualization) capability, which is only supported on select datacenter and professional boards, on standard consumer Nvidia GeForce gaming graphics cards. Since the vGPU capability is supported by the silicon but locked out by software, it was only a matter of time and effort before enthusiasts unlocked the feature. As it turns out, according to a Reddit post, that time has come, potentially saving some users the thousands of dollars they would otherwise have to shell out for a Quadro or Tesla GPU that supports the feature.
GPU virtualization, which allows more than one user to use a GPU simultaneously, is one of the differentiators between GPUs for data centers and those designed for consumer PCs. Nowadays, many workstations and even high-end desktops are located remotely so the users can share the GPUs. Modern hardware is so powerful that its performance is sometimes excessive for one user, so sharing one graphics card between multiple users makes sense.
From a GPU hardware perspective, virtualization is just another feature, so the silicon supports it. But this capability requires a lot of software to work properly (i.e., how companies that buy workstations expect it to) and validation with ISVs since virtualized GPUs are in many cases used for professional applications.
All of these things cost money, so vGPU support comes at a price, and Nvidia has a handful of expensive Tesla, Quadro, and some other GPUs it recommends for virtualization (partly because it does not make a lot of sense to validate a broad fleet of hardware with ISVs). Nvidia’s vGPU software does not support most client GPUs.
The code for the unlocker is available on GitHub, and the principle behind it is fairly simple: it replaces the device ID of a graphics card with the device ID of an officially supported GPU that has the same feature set. For now, GP102, GP104, TU102, TU104, and GA102 GPUs are supported, and the capability works on Linux with KVM virtual machine software.
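Conceptually, the trick boils down to a lookup table that maps a consumer GPU's PCI device ID to the ID of a vGPU-certified board built on the same silicon, which is then reported to Nvidia's vGPU software. Here is a toy illustration of that mapping idea; the hex IDs below are placeholders, not the real values the project uses:

```python
# Toy illustration of the device-ID substitution idea described above.
# The hex IDs here are placeholders, NOT the real values the unlocker uses.
SPOOF_MAP = {
    # consumer device id : vGPU-certified device id on the same silicon
    0x1AAA: 0x1BBB,   # hypothetical GP102 GeForce -> hypothetical GP102 Tesla
    0x2AAA: 0x2BBB,   # hypothetical TU102 GeForce -> hypothetical TU102 Quadro
}

def reported_device_id(real_id):
    """Return the ID the vGPU software stack should see for this card."""
    return SPOOF_MAP.get(real_id, real_id)

print(hex(reported_device_id(0x1AAA)))  # -> 0x1bbb
```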
While the new unlocking technique deserves some attention, the big question is whether your typical consumer actually needs GPU virtualization. Linux users can virtualize their high-end graphics cards and use them for gaming, video encoding, and cryptocurrency mining simultaneously on different virtual machines.
Some of those who happen to have servers with hundreds or thousands of consumer Nvidia GPUs could try to offer commercial remote desktop services to earn money, but the quality of such services would be something to worry about. Since the hack does not work with Windows or VMware, it is useless for most users.
Logitech said Friday it is discontinuing its line of Harmony universal remotes, ending years of speculation that the devices were on their way out. Models that are currently in stock at retailers will be available while supplies last, and the company says it will continue to provide support and service for the Harmony remote “as long as customers are using it.”
“While Harmony remotes are and continue to be available through various retailers, moving forward Logitech will no longer manufacture Harmony remotes,” according to a blog post on Logitech’s support page. “We expect no impact to our customers by this announcement. We plan to support our Harmony community and new Harmony customers, which includes access to our software and apps to set up and manage your remotes. We also plan to continue to update the platform and add devices to our Harmony database. Customer and warranty support will continue to be offered.”
Logitech acquired Intrigue Technologies, the original maker of the Harmony remote, back in 2004. Harmony universal remotes were popular among consumers seeking one remote to rule them all: cable box, gaming console, and streaming devices. Some newer models could even be used with smart home devices.
Logitech’s business has boomed during the coronavirus pandemic as people worked and schooled at home; in January, the company reported its third-quarter sales were up 85 percent year over year to $1.67 billion, enough that it could splurge on its first-ever Super Bowl commercial.
But with the rise of streaming services over the past few years, universal remotes are no longer as crucial as they once were. Logitech CEO Bracken Darrell told The Verge in 2019 that Harmony was a small business for the company; he said the remote business was only about 6 percent of the size of Logitech’s massive keyboard business, for instance.
“I think over time, you’ll have fewer and fewer people who feel like they really need that universal remote,” Darrell said at the time. He added that the company appreciated hardcore Harmony users who love the device: “it’s so rare to have users that love something as much as a lot of our Harmony users do. We’ll always take care of them because we really believe that that’s part of the responsibility of the brand,” Darrell told The Verge. “So we do love Harmony for that reason. How long it will be out there, I don’t know.”
Crowbits’ progressive STEM kits teach future engineers (ages 6-10 and up) the basics of electronics and programming, but nondurable paper elements and poorly translated documentation could lead to frustration and incomplete projects.
For
+ 80+ Lego-compatible electronic modules and sensors
+ Helpful programming software
+ Progressive learning kits
+ Examples are very helpful
+ Engaging projects for pre-teen and teen engineers
Against
– Inadequate and inaccurate project tutorial
– Cable modules are stiff and pop off easily
– Cardboard projects are flimsy and cumbersome
– Labels are hard to read
They say that the best method of teaching is to start with the basics. This is true for most subjects, but even more so for getting kids involved and interested in learning about electronics and programming. This is exactly Elecrow Crowbits’ approach to launching young inventors and creators into the world of technology.
Available via Kickstarter, the STEM kit series starts with building simple projects that make use of basic electronic concepts, then steps up kids’ skills by introducing projects that require some coding, and graduates to more advanced application development. The Crowbits lineup consists of five interactive STEM-based packages, each appropriately themed with projects that cater to kids from ages 6-10 and up. These are the Hello Kit, Explorer Kit, Inventor Kit, Creator Kit and Master Kit.
With the variety of engineering kits out in the market today, Crowbits’ pricing falls in the mid-range category. Ranging from $26 to $90, depending on which kit you prefer, it is money well spent. One of the key values that Crowbits brings is its focus on teaching kids the basics of electronics through the use of these programmable blocks and sensors and ties that learning to current practical uses, like turning the lights on or off. This simple circuit logic is used to program small home appliances like coffee machines, automatic dispensers or even smart home security systems.
Much like the company’s previous Kickstarter project, the CrowPi2 (a Raspberry Pi-powered laptop we reviewed last year), Crowbits also presented issues with documentation. Makers and creators know that clear and concise directions are very important for any project build. Unclear and inadequate instructions cause users, especially beginners, to feel that they may have done something wrong. They may be able to troubleshoot some issues themselves, but if problems are left unresolved, an air of defeat and frustration ensues.
Crowbits Setup
Setup for Crowbits starts with choosing which components to use depending on the project the child wants to try. The modules are designed to be plug-and-play so young makers can use them to build structures and experiment right away. Modules are also compatible with the entire series of learning kits, so if you purchased more than one, you can use them interchangeably.
If you want to try building from the suggested projects, of which there are plenty to choose from, note that they become more challenging as you move up in the series and may include some coding and firmware downloads.
How Crowbits Work
Every kit consists of a number of modules. Each module has magnetic pogo-pins on all sides that help connect them easily. Another way of connecting modules is with the magnetic cables. At the back of each module are Lego holes for seamless integration of Lego bricks into any structure.
There are four different types of modules, easily identified by color: blue for power/logic, yellow for input, green for output and orange for special modules. It’s important to keep in mind a few rules for creating a circuit sequence. There should be at least a power, an input and an output module in order to build a circuit, with the proper sequence placing the input block before the output.
A sequence can contain multiple input and output blocks, in which case each output is controlled by the nearest input block. Lastly, module names must be facing up to ensure the correct pins are being used.
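Those sequencing rules lend themselves to a quick check. Here is a small validator capturing them in code; it is our own illustrative sketch, not part of Elecrow's software:

```python
# Illustrative validator for the Crowbits sequencing rules described above
# (our own sketch, not part of Elecrow's Letscode software).
def validate_sequence(modules):
    """modules: ordered list of module types, e.g. ['power', 'input', 'output']."""
    errors = []
    for required in ("power", "input", "output"):
        if required not in modules:
            errors.append(f"missing a {required} module")
    if "input" in modules and "output" in modules:
        if modules.index("input") > modules.index("output"):
            errors.append("an input block must come before the output block")
    return errors or ["sequence looks valid"]

print(validate_sequence(["power", "input", "output"]))   # valid
print(validate_sequence(["power", "output", "input"]))   # input after output
```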
Crowbits Module and Sensor Breakdown
There are four different types of modules and sensors for Crowbits and each function is distinguished by color:
Power Modules (Blue) – the power source and a core module that’s required for every project build. A green light indicates when the power is on. Use the included micro-USB cable to recharge the power supply when needed.
Logic Modules (Blue) – for basic operations. Includes: 315 MHz controller, expansion module, etc.
Input Modules (Yellow) – accept input data like touch, vibration or object detection and pass it to the output modules. Includes: touch module, IR reflective sensor, light sensor, etc.
Output Modules (Green) – receive commands from an input module and execute the final action. Examples: the buzzer module (makes a sound), an LED that lights up, or a module that vibrates.
Special Modules (Orange) – used for advanced programming tasks. Examples: I2C or UART.
Crowbits Software and Hardware
Programming Languages Supported: Letscode (Elecrow’s visual programming software based on Scratch 3.0), which supports Python and Arduino IDE.
Open Source Hardware Compatibility: ESP32 TFT, Micro:bit board, Arduino UNO and Raspberry Pi (TBA).
OS Supported: Windows and Mac
Crowbits Learning Kits Use Cases
Hello Kit and Explorer Kit
The Hello Kit and Explorer Kit are learning tools for beginners and target children ages 6-8 and up. They introduce the concept of modules and their functionality. No coding is required for any of the suggested experiments and projects here. Building the projects with cardboard elements proved to be difficult for my seven-year-old, and she got easily frustrated trying to use the thin double-sided tape that came with the kit.
Once the structures were built (with my help), she did enjoy putting the modules together and making things happen, like sounding the buzzer on the anti-touching device or making the lights turn on in her window display project. Another annoyance worth noting involved the cable module that connects modules together. The cable is quite thick and not flexible, so it had a tendency to pop off and break the connection in multiple projects.
I would have to say that my daughter was most engaged with the Explorer Kit, perhaps because the projects had more integration with Lego blocks, and some projects were also very interactive like the Quadruped Robot and the Lift, which were her favorites. She enjoyed building the structures and seeing the creations come to life, especially when there was movement, sounds and lights.
Inventor Kit and Creator Kit
The Inventor Kit and Creator Kit are the intermediate learning tools of the Crowbits series and target children ages 10 and up. The Inventor Kit includes more advanced projects that incorporate the Micro:bit board in the builds. This requires some coding and the use of Letscode, Elecrow’s Scratch-based drag-and-drop visual programming software.
The software seemed a bit buggy (mainly in steps like downloading custom code) and there were inaccuracies in the project documentation that led to a lot of troubleshooting on our part. Hopefully, by the time Crowbits is ready for release in June, these kinks will have been resolved.
It is worth noting, though, that the list of projects suggested for the Inventor Kit seems to be age-appropriate. My tween worked on the Horizontal Bar and the Ultrasonic Guitar projects. She thoroughly enjoyed the experience and had no issues following the diagrams to build the Lego structures. There was a little hiccup in using the software, as I mentioned earlier, where we found ourselves wanting troubleshooting tips and clearer documentation.
Unfortunately, we were not able to try out the Creator Kit as it was not available when we received our evaluation samples. We may update this review when we receive the Kit after its June release.
Master Kit
The Master Kit definitely is the most challenging of the engineering kits in the Crowbits lineup, with the task of programming hardware and software to build real-life products like a mobile phone, a game console and a radar. I’ll set aside my comments for this kit as I was unsuccessful in trying to make the phone and console work due to a corrupted SD card.
Additionally, we had intermittent issues while uploading firmware. It is unfortunate because I was looking forward to this kit the most, but perhaps I can re-visit the Master Kit and post an update at a later time.
The one successful project build out of this kit, the radar, honestly left us scratching our heads. The expected results never materialized: we tried placing a variety of objects in the vicinity of the rotating radar dish and none of them seemed to be detected.
Crowbits Learning Kits Specs and Pricing
| Kit | Modules | Projects | Age | Price |
| --- | --- | --- | --- | --- |
| Hello Kit | 7 modules | 5 cardboard projects | 6+ | $26 |
| Explorer Kit | 13 modules | 12 projects | 8+ | $70 |
| Inventor Kit | 10 modules | 12 Lego and graphic programming projects, plus a Letscode introduction | 10+ | $80 |
| Creator Kit | TBD | TBD | 10+ | $90 |
| Master Kit | TBD | TBD | 10+ | $90 |
Crowbits Available Bundles and Special Pricing
| Bundle | Kits Included | Pricing |
| --- | --- | --- |
| Bundle #1 | Explorer Kit, Creator Kit, Master Kit | $239 |
| Bundle #2 | Explorer Kit, Inventor Kit, Master Kit | $249 |
| Bundle #3 | Hello Kit, Explorer Kit, Inventor Kit, Creator Kit, Master Kit | $354 |
Bottom Line
Despite all its kinks, the Crowbits STEM kit series appears to be another great educational tool from Elecrow, with an emphasis on teaching kids electrical engineering. Whether it’s building simple circuit projects or coding more complex applications for everyday use, the Crowbits series provides a complete learning platform for kids ages 6-10 and up.
With its mid-range pricing and the flexibility to pick and choose which kit to purchase, it is an attractive choice for someone looking to buy an educational STEM kit for their child or loved one. Of course, you can also buy the entire set as a bundle and enjoy helping your child build models and program as you go through the different stages of electronics learning, from basic to advanced concepts. It’s also worth noting that the Letscode software that comes with the packages is free and supports Python and Arduino programming, which is a welcome bonus.
Have a fear of being spied on? Then look out because the Eyecam is an eye-opening, open-source, Raspberry Pi-powered camera that eerily resembles a human eye. Half pet, half unused concept from David Cronenberg’s eXistenZ, this fleshy camera from Germany’s Saarland University is less of a Best Webcams contender and more of an art piece that highlights the surveillance we open ourselves up to when attaching sensors to personal devices.
“Imagine Eyecam waking up on its own,” the camera’s reveal video says. “Imagine bonding with Eyecam,” the video continues as a man pets the camera. “Imagine Eyecam becoming emotional,” the video eventually says, as the camera scowls.
The goal here, according to Eyecam creator Marc Teyssier, is to “broaden the discourse on sensing technologies and spark speculations on aestheticism and functions.”
That’s a lot of big words, but Teyssier’s website goes into deeper detail about the problems traditional cameras present by capturing data but not conveying emotion. It also discusses the societal consequences of surrounding ourselves with sensing devices “up to the point where we become unaware of their presence.”
Well, Eyecam certainly does a good job of making me feel aware of it.
Kidding aside, Eyecam has a good point here. The idea of being spied on by a real human eye terrifies me, but keeping my webcam plugged in at all times without even putting a cover over it doesn’t (hackers, forget you read that).
Still, despite being branded as a “design fiction prototype,” you can totally build an Eyecam of your own right now, if for some reason you want to turn your monitor into a cyclops. All the software and .stl files for the Eyecam are free on Teyssier’s GitHub. You’ll need a Raspberry Pi, an Arduino Nano, a small camera and a 3D printer. You’ll also need plenty of gumption, since Teyssier’s build tutorial isn’t quite finished yet.