Gotham Knights, the upcoming title by Warner Bros. Games Montreal, has been delayed into 2022, the company announced today. In a statement posted on Twitter, the development team explains that it wants to give the game “more time to deliver the best possible experience for players.”
While not directly stated, the delay is likely COVID-related, as studios continue to adapt to working from home. This is the second highly anticipated title from Warner Bros. Interactive Entertainment getting pushed back into next year, following the announcement in January that Avalanche Software’s Hogwarts Legacy will not hit its 2021 release window.
Announced during the DC FanDome event in late August, Gotham Knights is an open-world action RPG featuring Batman’s extended family of heroes, such as Batgirl and Robin. While no firm release date has been announced, Gotham Knights will release on PC, PS5, PS4, Xbox Series X / S, and Xbox One.
Any business laptop that comes out these days is entering a tough field full of very established players. The world is already stuffed full of ThinkPads and Latitudes, which have strong followings, cover price ranges across the board, and are highly attuned to what workers need.
So my question with lesser-known business laptops is usually: Where does this fit? What customer is it catering to who might be underserved by a ThinkPad?
With its TravelMate line (specifically the TravelMate P6), Acer seems to be going for two potential openings. The first is that the TravelMate is, as the name implies, specifically intended for frequent business travelers. It’s light, portable, and sturdy, at the expense of some other traits. And the second is its price. Starting at $1,199.99, the TravelMate line is targeting a more price-conscious demographic than many business laptops that would be considered “premium” are. I think the TravelMate succeeds in filling these two niches in particular. But it has some other drawbacks that make it tough to recommend for a general audience.
The aspect of the TravelMate that should be a big help to mobile business users is the port selection. Despite being quite thin, the laptop fits one USB Type-C port (supporting USB 3.1 Gen 2, DisplayPort, Thunderbolt 3, and USB charging), two USB 3.1 Gen 1 Type-A ports (one with power-off USB charging), one HDMI 2.0 port, one microSD reader, one combination audio jack, one Ethernet port (with a trap-door hinge), one DC-in jack for Acer’s adapter, one lock slot, and an optional SmartCard reader. The fewer dongles and docks you have to travel with, the better.
Portability is another priority here and is another one of the TravelMate’s highlight features. At just 2.57 pounds and 0.65 inches thick, the TravelMate should be a breeze to carry around in a backpack or briefcase. Acer says it’s put the product through a slew of durability tests for weight and pressure, drops, shocks, vibrations, and other hiccups you may encounter during the day.
Another area that’s likely important to some mobile professionals is videoconferencing capability. I found that to be a mixed bag here. The TravelMate’s four-microphone array had no trouble catching my voice, in both voice recognition and Zoom meeting use cases. Acer says they can pick up voices from up to 6.5 feet away. The webcam also produces a fine picture (though this unit doesn’t support Windows Hello for easy logins) and has a physical privacy shutter. The speakers are not great, though — music was tinny with thin percussion and nonexistent bass.
The TravelMate also includes some business-specific features including a TPM 2.0 chip and Acer’s ProShield security software.
In other, less business-y areas, though, the TravelMate has a few shortcomings. Shoppers looking for anything more than portability out of the chassis may be disappointed. While most of the TravelMate is made of magnesium-aluminum alloy, it has a bit of a plasticky feel — and while the keyboard is sturdy, there’s considerable flex in the screen. And then there’s the aesthetic: the P6 is far from the prettiest computer you can buy for $1,199.99. It’s almost entirely black, with very few accents (and the ones it has are a drab gray color). And the bezels around the 16:9 screen are quite chunky by modern standards. Plus, the 16:9 aspect ratio is falling out of fashion for a reason — it’s cramped for multitasking, especially on a 13- or 14-inch screen — and the panel maxed out at 274 nits in my testing, which is a bit too dim for outdoor use.
The TravelMate looks and feels like it was made a bit better than budget fare. But it also looks and feels closer to an Aspire 5 than it does to a top ThinkPad. For context, you can get an Aspire 5 with identical specs to this TravelMate model for just over $700. Another comparison: the Swift 5, a gorgeous consumer laptop that’s even lighter than the TravelMate, can be purchased with comparable specs for just $999.99. This is all to emphasize that you’re sacrificing a bit of build quality (as well as some extra money) for the TravelMate’s weight and business-specific offerings.
The touchpad is also not my favorite. For one, I had some palm-rejection issues. Those didn’t interfere with my work per se, but it was still unnerving to see my cursor jumping around the screen while I was typing. In addition, the touchpad on my unit had a bit of give before the actuation point, meaning one click required me to make (and hear) what felt like two clicks. And its off-center placement meant that I was constantly right-clicking when I meant to left-click, and I had to consciously reach over to the left side in order to click with my right hand. Finally, the click itself is shallow and far from the most comfortable.
I also didn’t love the power button. It contains a fingerprint sensor, which worked quite well. But the button itself is stiff and very shallow. I know this sounds like a small nitpick, but it was really irksome and made turning the TravelMate on in the morning more of a hassle than it could’ve been.
The TravelMate model that I received to review is sold out everywhere I’ve looked as of this writing. The closest model to it is listed at $1,199.99 (though it’s cheaper through some retailers) and comes with a Core i5-10310U, 8GB of RAM, and 256GB of SSD storage. My unit is the same, but it has a Core i5-10210U. Those processors don’t have a significant performance difference, so my testing here should give you a good idea of what to expect from that model. You can also buy a model with a Core i7-10610U, 16GB of memory, and a 512GB SSD for $1,399.99. Both configurations run Windows 10 Pro and include a 1920 x 1080 non-touch display.
For my office workload of emails, spreadsheets, Zoom calls, etc., the TravelMate did just fine. I sometimes heard the fans spinning when my load wasn’t super heavy, but the noise was never loud enough to be a problem. Note that this processor has Intel’s UHD graphics, rather than its upgraded Iris Xe graphics, which means the system wouldn’t be a good choice for gaming, video editing, or other graphics work.
But there’s one area where the TravelMate really impressed, and it’s one that’s quite useful for travelers: battery life. Running through my daily workload at 200 nits of brightness, my system averaged nine hours and 15 minutes of continuous use. That’s almost twice what the budget Aspire 5 got with my same workload. It also beats the Swift 5 and the pricier ThinkPad X1 Nano. If your workload is similar to (or lighter than) mine, you should be able to bring this device around an airport or conference for a full work day without being attached to a wall.
One performance complaint, though: this thing comes with bloatware. My unit was pre-installed with all kinds of junk, including games (Amazon was pinned to the taskbar) and other software like Dropbox. Most annoyingly, it came with Norton, which pestered me with constant pop-ups and also seemed to hurt battery life: the TravelMate consistently lasted around an hour longer after I uninstalled the program. It doesn’t take too long to uninstall everything, but I’m still morally put off by the idea of so much cheap crapware being loaded onto a laptop that costs over $1,000. And it’s especially troubling to see on a business laptop, because it can expose users to cybersecurity risk.
The TravelMate line is filling a pretty specific niche, and it fills it just fine. If you’re a frequent business traveler who needs a light device with plentiful ports and all-day battery life, you’re shopping in the $1,199 price range, and you’re willing to overlook a mediocre touchpad, dim 16:9 display, and other hiccups, then the P6 will be a better choice for you than something like a pricier and heavier Dell Latitude or the shorter-lived and port-starved ThinkPad X1 Nano.
That said, the P6 has enough drawbacks that I think the bulk of customers would be better served by other laptops. Those who like the Acer brand may like some of Acer’s other offerings — especially those who don’t need the business-specific security features. The Swift 5 is lighter, nicer-looking, and more affordable than the TravelMate, with a better touchpad, screen, and processor. And budget shoppers can find much of what the TravelMate offers in any number of cheaper laptops. The Aspire 5 and the Swift 3 don’t have the TravelMate’s battery or port selection, but they do improve upon its touchpad, audio (in the Aspire’s case), and looks (in the Swift’s case). And, of course, there’s a litany of other laptops in this price range — from HP’s Spectre x360 to Dell’s XPS 13 — that are excellent in almost every way and also offer 3:2 screens.
Ultimately, the TravelMate isn’t a bad laptop — but if it’s the best laptop for you, you probably know who you are.
(Pocket-lint) – GoPro put a colour screen on the front of the Hero 9 Black, bringing it more in line with the DJI Osmo Action, and while it was at it decided we needed a bigger battery too. That means you can finally see yourself when you’re filming, and you can shoot for longer.
With that said, its predecessor – the Hero 8 Black – was and still is a great action camera. So should you stump up the extra for the 9 or will the Hero 8 do everything you need it to?
Design and Displays
Hero 8: 66.3 x 48.6 x 28.4mm
Hero 9: 71.0 x 55.0 x 33.6mm
Hero 8: Monochrome status screen on the front
Hero 9: Colour live preview screen on the front
Both: Built-in mounting arms
Both: Colour touchscreen on the back, Hero 9 larger
Both: Waterproof to 10m
The Hero 8 Black was an important product for GoPro, freeing the company from the constraints of needing to fit its tech into a specific size body just so it would fit in the mounting accessories. Instead, it built mounting arms into the bottom of the camera, allowing you to attach it to all the usual accessories without a clip-on shell, and that design returns in the Hero 9.
That’s seen GoPro increase the size of its flagship action camera by a noticeable – but not huge – amount. It’s a few millimetres taller, wider and thicker than the 8 Black, but the trade-off should prove worth it for the bigger battery and more powerful internals, plus the larger rear touchscreen and the new colour screen on the front.
Speaking of those displays, the latest model’s front screen is full colour and can be used as a live preview display, while the 8 Black has the more traditional monochrome status display which only shows you status information.
Both cameras feature a similar design in terms of button and port placement. They both have the shutter button on the top and the mode/power button on the left edge. However, the mode/power button on the 9th gen protrudes more from the surface and is much easier to press and to feel without looking. The Hero 8’s button is flush with the surface, and so virtually impossible to find by touch.
Just underneath that, the Hero 9 also has a speaker designed to pump out water, similar to the feature Apple has used in its watches for a while. So if you do take it underwater to test its 10m depth resistance, it will expel any water that seeps into the speaker channels.
Video capture and streaming
Hero 8: Up to 4K/60, FHD/240 footage
Hero 9: Up to 5K/30, 4K/60, FHD/240
Both: 1080p live streaming
Both Hero cameras support a wide range of resolution and frame-rate combinations at various focal lengths, thanks to the ‘digital lenses’ that are built into the software.
As far as resolution goes, the Hero 9 is the champ here. It can shoot up to 5K resolution at a 16:9 ratio with wide, linear and narrow ‘lenses’. At 4K resolution it can go up to 60 frames per second, and up to 240 frames per second at 1080p. It can also shoot at 2.7K resolution, and at up to 4K in a 4:3 ratio. The Hero 8 is similar, except it maxes out at 4K resolution. It also lacks the horizon levelling feature available at certain settings on the Hero 9.
Both cameras can be used for live streaming and both can do so at 1080p resolution. Both also use a combination of EIS and algorithms to stabilise footage, a feature called HyperSmooth. With the Hero 9 that’s been boosted further, making it even smoother than before while also offering the horizon levelling feature. What’s more, if you buy the additional Max lens you get horizon levelling on everything, even when you rotate the camera through 360 degrees.
Stills and performance
Hero 8: 12MP stills
Hero 9: 20MP stills
Both: SuperPhoto + HDR
Both: RAW support
Hero 8: 1220mAh battery
Hero 9: 1720mAh battery
Both: GP1 chip
There are two big performance upgrades with the Hero 9: photo resolution and battery life. It has a 20-megapixel sensor versus the 12-megapixel sensor on the previous model. Similarly, it has a higher-capacity battery, with an additional 500mAh on top of the 8th gen’s 1220mAh to give a total of 1720mAh.
GoPro says you’ll get an extra 30% video capture time from that battery, and that is definitely useful when it comes to action cameras. There’s nothing worse than running the battery flat during a downhill biking session.
Both cameras have the same image/data processor – called the GP1 – and they both support RAW image capture as well as GoPro’s advanced HDR image processing.
Price
Hero 8: $299 with a subscription ($349 without)
Hero 9: $399 with a subscription ($499 without)
The most cost-effective way to buy a new Hero camera is with an annual GoPro subscription. If you buy the Hero 8 with the subscription, the camera will cost you $299/£279, while the Hero 9 is $399/£329. Without the subscription, the Hero 8 is $349/£329 and the Hero 9 is $499/£429.
Given the added value of the subscription – which gets you unlimited cloud storage, a replacement camera when yours breaks and accessory discounts – it makes complete sense to opt for that with the lower upfront outlay. You get 12 months subscription paid for in advance with that price. GoPro is obviously hoping users stick around for more than a year and keep subscribing afterwards.
Conclusion
Given the price difference, the Hero 8 Black is actually very good value for money. It’s $100/£100 cheaper than the Hero 9 but does a lot of the same stuff.
That said, with its new colour screen, higher-resolution sensor and longer battery life, the additional outlay is definitely worth it for the Hero 9. Especially when you consider that its price with the subscription is only a little higher than the price of the Hero 8 Black without one.
If you want the best action camera going, grab the Hero 9. If you’d rather save the cash, or if you’re coming from an older model like the Hero 5 or Hero 6, the Hero 8 will do you just fine and is still a major upgrade on those two.
AMD’s latest Radeon driver update, Adrenalin version 21.3.1, adds several new features to team red’s graphics cards, but the biggest update is a new stress testing utility that allows you to check the stability of your overclocked AMD graphics card right from the Adrenalin software.
When you install the 21.3.1 driver, a new stress test option called “Performance Tuning Stress Test” should be available. According to AMD, Adrenalin has also been updated to help novice overclockers with newer temperature gauges and easier-to-understand performance readouts, PC Gamer reported. AMD also said it added more indicators to show where performance is being limited on your graphics card.
We aren’t sure how much better this stress test is compared to stress testing your graphics card in popular applications like 3DMark and Superposition, or in your favorite graphically demanding game. But it is nice that you can now check whether your GPU overclock is stable right from the Adrenalin software, without using any other tools.
AMD didn’t say if this new stress test was limited to newer Radeon GPUs, so we assume that this new stress testing utility will work on any Radeon GPU that supports the 21.3.1 driver.
More Adrenalin 21.3.1 Updates
A few more highlights from 21.3.1 include added support for Doom Eternal: The Ancient Gods – Part Two, as well as major updates to Radeon Boost and Radeon Anti-Lag with both technologies now supporting the DirectX 12 API. Plus, there are a few more updates to Vulkan support.
Here’s the full list of issues the driver fixes, as per AMD:
Radeon Software may sometimes have higher than expected CPU utilization, even when a system is at idle.
A system hang or crash may be experienced when upgrading Radeon Software while an Oculus VR headset is connected to your system on Radeon GCN graphics products.
Minecraft DXR may exhibit corrupted or missing textures when ray tracing is enabled on Radeon RX 6000 series graphics products.
An application crash may occur in Call of Duty: Modern Warfare when ray tracing is enabled on Radeon RX 6000 series graphics products.
Lighting fails to render correctly on Radeon RX 6800 series graphics products in Star Citizen.
A black screen may occur when enabling and disabling Enhanced Sync while Vsync is enabled in some Vulkan API games.
A black screen or system hang may occur on Hybrid Graphics systems for some Vulkan API games when Enhanced Sync is enabled.
Bethesda launcher may experience an application crash on startup when launching some games.
Users may be unable to create a new scene in the Radeon Software Streaming tab on first launch or after a settings factory reset.
Game specific performance tuning profiles may fail to load when a global performance tuning profile has been created or set.
Disabling HDCP support and performing a factory reset and/or system restart may sometimes trigger a system crash or hang on boot.
Epic Games social overlay or launcher may exhibit color corruption.
Xuan-Yuan Sword VII may experience an application crash with DirectX12 ray tracing enabled on Radeon RX 6000 series graphics.
Color corruption may be experienced in Cyberpunk 2077 when Radeon Boost is enabled.
Display flicker or corruption may occur on high refresh rate/resolution multi-monitor system configurations on Radeon RX Vega series graphics.
Audio loss or cutout may intermittently occur on some TV displays when Windows audio is set to use 5.1 or 7.1 speaker configurations.
Writing an operating system to a Raspberry Pi involves microSD cards and a tool such as balenaEtcher. Around a year ago the official Raspberry Pi Imager tool was released; it offered a simple means to write an OS to a card and came with a great choice of operating systems for retro gaming, 3D printing and general computing. The latest update adds a hidden advanced menu offering more configuration options.
Raspberry Pi Imager v1.6 has an advanced menu that is hidden away from general users just looking to write an operating system for the Pi. To activate the menu, press CTRL + SHIFT + X; you then gain access to options that let advanced users customize the OS to meet their needs before writing the software to a microSD card.
In the Advanced Options menu we can change:
Overscan, to remove borders from the screen.
Hostname, to identify your Pi on a network.
SSH on boot, useful for headless and remote projects.
WiFi, to set up your wireless network without editing a config file.
Locale, to set your language and location.
These changes can be made for a single session – for example, writing a one-off OS to a microSD card – or we can set Raspberry Pi Imager to use these settings every time. For Raspberry Pi users these advanced features are a welcome addition to an already great application: they can now quickly and easily set these options and then write the OS to a microSD card, rather than tweak config files, which could be quite a task when working with multiple cards.
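For context, the Imager’s SSH and WiFi toggles automate what Pi users have long done by hand: dropping files onto the card’s FAT boot partition before first boot. A rough sketch of that manual route is below; the mount path is a stand-in for wherever your system mounts the card, and the network name, password and country code are placeholders you would replace with your own.

```shell
# Manual equivalents of two of Imager's advanced options, written to the
# card's boot partition. BOOT is a stand-in path here; on a real card it
# would be the mounted boot partition, e.g. /media/$USER/boot.
BOOT="${BOOT:-/tmp/pi-boot-demo}"
mkdir -p "$BOOT"

# Enable SSH on first boot: an empty file named "ssh" is all it takes.
touch "$BOOT/ssh"

# Pre-configure WiFi: Raspberry Pi OS picks this file up on first boot.
# ssid/psk/country below are placeholders.
cat > "$BOOT/wpa_supplicant.conf" <<'EOF'
country=GB
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="MyNetwork"
    psk="MyPassword"
}
EOF
```

The advanced menu simply spares you from repeating these file edits on every card, which is exactly why it appeals when flashing in bulk.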
It is well known that Taiwan Semiconductor Manufacturing Co. and Samsung Foundry dominate the market of contract chip production. They are the only companies to offer leading-edge process technologies and have the largest capacities. Meanwhile, TSMC and Samsung Foundry are on track to become the dominant manufacturers of advanced chips as nobody, including Intel, can match their capital expenditures.
TSMC: Big Can Only Get Bigger
Founded in 1987, TSMC was the world’s first pure play foundry that manufactured chips for others. In the 34 years of its history, the company has grown from a small entity with one fab into a multi-billion-dollar corporation with five 300mm fabs, seven 200mm fabs, and one 150mm production facility. Having developed dozens of process technologies throughout its history and having installed vast production capacities, TSMC can offer services to almost any fabless chip designer with almost any requirements. At present, TSMC serves over 460 customers.
As demand for leading-edge fabrication processes and volumes from its large customers (such as Apple, HiSilicon, Qualcomm, Nvidia, and AMD) has grown in recent years, TSMC intensified the building of new GigaFabs — production facilities with a capacity of more than 100,000 300mm wafer starts per month (WSPM) that each cost around $20 billion — and also increased its research and development (R&D) budgets. The strategy has paid off: today TSMC has not only left Intel and Samsung Foundry behind with its manufacturing technologies, but it also has more leading-edge capacity than any other maker of semiconductors. This is largely because it serves virtually all fabless designers that require advanced technologies.
This year the company decided to radically increase its capital expenditure (CapEx) to between $25 billion and $28 billion, an increase of 45% to 62% year-over-year from $17.2 billion in 2020. IC Insights believes that TSMC will “begin what is likely to be a huge multi-year ramp of spending,” and expects the company to boost its CapEx budget again in 2022 and 2023.
Being the leading maker of semiconductors both in terms of volumes and in terms of technology leadership has its advantages. First, it’s easier to get the fab tools when you buy them in high volumes. Second, it’s easier to set up your own production and supply chain standards, something that is tremendously important in an industry that is all about standardization.
Samsung Foundry: Closing the Gap with TSMC, Widening the Gap with Intel?
Samsung Electronics has been the world’s largest maker of dynamic random access memory (DRAM) and NAND flash for quite a while and has been in the semiconductor business for decades. Furthermore, it has produced various chips for its own needs. The company started to offer foundry services in the mid-2000s, as it realized that only the largest chipmakers would survive in the long term. Samsung Foundry has been trying to catch up with TSMC for years, and while the gap is closing, it is still not quite there yet.
Samsung Foundry’s largest customer is still its parent company Samsung, which strives to make the world’s best smartphones, televisions, PCs, displays and other electronics. To that end, SF’s design decisions at times resemble those of an integrated device manufacturer (IDM) that makes money on actual products rather than on manufacturing services.
Samsung realized early enough that demand for chips (all chips, including DRAM, 3D NAND, SoCs, etc.) will only grow, so its corporate semiconductor CapEx spending exceeded $10 billion for the first time in 2010. Having spent $93.2 billion on expanding production capacities over the 2017–2020 period, the company significantly closed the gap with TSMC from a capacity point of view.
Samsung Foundry is still about three times smaller than TSMC in terms of wafer starts per month (and also in terms of the number of nodes it offers), but the gap between the two has been closing. So far, Samsung has not unveiled its 2021 semiconductor CapEx budget, but analysts believe that it could spend at least as much as it spent last year — around $28.1 billion.
Cumulative CapEx of Samsung and TSMC will total approximately $55.5 billion this year, according to IC Insights. A significant part of Samsung’s funds will of course be used to buy equipment for its memory businesses, but together these two companies will be able to influence the development of fab production tools and supply chains.
Should Intel Worry?
Intel traditionally spends tens of billions of dollars on CapEx (it spent about $14.3 billion last year), so it will remain a leading maker of processors. Yet, its spending on fabs will be about half that of Samsung and TSMC this year. Furthermore, since Intel has yet to start production of chips using a node that relies on EUV, it will not have an immediate significant influence on the development of the industry and its supply chains.
Historically, Intel had several competitive advantages that set it apart from all of its direct and indirect rivals:
Intel’s CPUs were the fastest in the industry.
Intel’s microarchitectures and CPU designs were scalable for all market segments.
Intel had enough power to ensure that its architectural innovations were supported by software makers.
Intel had the best process technologies, which could offset certain imperfections of its microarchitectures or design.
Intel could produce CPUs in volumes unachievable by any of its competitors.
Since Intel was the de facto leader of the semiconductor market both financially and technologically, it set standards for the rest of the industry, which further ensured its leadership position.
While Intel competed against most companies in the semiconductor industry, it could build alliances or partnerships that strengthened it (e.g., with Microsoft, Dell, HP, Apple, and ATI Technologies) and helped it to better compete.
Intel spent hundreds of millions of dollars on marketing and advertising, usually more than all of its rivals combined.
So far, Intel has lost at least three of those eight advantages. These days Intel’s CPUs are not the undisputed leaders, and in many cases competing products from AMD lead outright. While Intel’s 2nd and 3rd generation 10nm fabrication technologies are competitive against TSMC’s N7, the company’s nodes cannot offer the same transistor density as TSMC’s N5. Finally, Intel no longer spends as much as its rivals on fabs and no longer has technological leadership.
If/when AMD becomes TSMC’s second largest customer, it could ask its production partner to customize the nodes it uses in a bid to gain performance and/or lower power consumption. Meanwhile, we still know nothing about Intel’s outsourcing plans other than the fact that some of its products will be made at TSMC in 2022.
Intel remains a driving force behind many industry initiatives, and no technology can become widespread in the PC world without Intel’s support. Yet, there are no more Wintel-like alliances, and Intel is no longer the exclusive CPU supplier for companies like Apple.
Meanwhile, Intel has extremely capable x86 CPU architectures that offer higher single-thread performance when compared to those from AMD. Intel also produces more processors than any other maker, and it can supply its partners with volumes of chips not available from anyone else. Given Intel’s market share and volume leadership, virtually all of its initiatives are supported by the software industry. Furthermore, the company knows how to advertise its products and promote its brand.
In general, Intel has many things to worry about, as it can no longer compete successfully against all of its rivals on all fronts. Hopefully, the company’s new CEO will shed some light on the chip giant’s future plans next week in a live chat.
Could Countries Compete Against Dominant Makers of Semiconductors?
Now that TSMC and Samsung each spend around $28 billion on manufacturing facilities and billions more on R&D, it is extremely hard for a commercial company to catch up with these chipmakers. Even Apple, with its massive earnings and cash reserves, is hardly willing to invest tens of billions in chip manufacturing. In recent years, the governments of the EU, US, and China have started to talk about local semiconductor production industries and expressed willingness to assist chipmakers.
IC Insights deems it close to impossible for newcomers to catch up with TSMC and Samsung. Keeping in mind that the two leading makers of semiconductors are way ahead of the industry both in terms of R&D and CapEx, analysts believe that “governments would need to spend at least $30 billion per year for a minimum of five years to have any reasonable chance of success.” The Chinese corporation SMIC has received a lot of help from both local authorities and the Chinese government over the years, but the company is still about five years behind GlobalFoundries, Samsung Foundry, and TSMC.
Summary
Both TSMC and Samsung Foundry started to use EUV tools to produce chips using their leading-edge process technologies several years before Intel, so they have been gaining experience with new tools and supply chains for quite a while now.
Both TSMC and Samsung will invest twice as much in their production facilities as Intel will in 2021. Arguably, Intel does not need to spend as much as TSMC and Samsung on CapEx since it only produces chips for itself, whereas its peers offer foundry services. Yet, previously Intel’s technological leadership was enabled by massive spending on fabs and R&D.
In theory, governments could stimulate development of the local semiconductor industry using direct help, tax breaks, and incentives. However, their total spending over the next five years would need to exceed $150 billion, and chances of success are not high.
Cricut has announced in a new blog post that its automated cutting and printing craft machine will no longer require a subscription for unlimited uploads next year. The company took a step back when it announced it would postpone the change until 2022. But now Cricut CEO Ashish Arora is reversing the company’s plans entirely, guaranteeing Cricut machines will work how they’ve always worked.
“We’ve made the decision to reverse our previously shared plans. Right now, every member can upload an unlimited number of images and patterns to Design Space for free, and we have no intention to change this policy. This is true whether you’re a current Cricut member or are thinking about joining the Cricut family before or after December 31, 2021,” Arora writes.
Limits on uploads to Cricut’s required Design Space app were a contentious issue for regular Cricut crafters. While Design Space can work as separate creation software, many users prefer to create their art in other applications and bring them into Design Space to finalize them, before their Cricut cuts them out of paper, fabric and other materials.
In Cricut’s controversial plan, the company offered 20 Design Space uploads per month for free, and locked unlimited uploads behind a paid Cricut Access subscription. The move did not go over well with users, who were not only used to having unlimited uploads for free, but also wanted to avoid additional costs, or being forced to use Design Space for the entirety of their projects.
Cricut’s original response seemed to satisfy a lot of its customers, but now the company has gone a step further and returned its subscription plan to what it started out as: an add-on rather than a requirement for normal use.
The National Highway Traffic Safety Administration is investigating another Tesla crash in which Autopilot was allegedly in use.
The crash took place outside of Lansing, Michigan, when the driver of a Tesla Model Y smashed into a state trooper’s cruiser. Michigan police said the driver was using Autopilot, Tesla’s advanced driver-assistance system (ADAS), at the time of the crash. No one was injured, but the government sent investigators to the scene to determine how Autopilot may have contributed to the crash.
“NHTSA is aware of the incident involving a Tesla vehicle near Lansing, Michigan,” a spokesperson said in a statement. “Consistent with NHTSA’s vigilant oversight and robust authority over the safety of all motor vehicles and equipment, including automated technologies, we have launched a Special Crash Investigation team to investigate the crash.”
This is the latest crash involving a Tesla to be scrutinized by federal investigators. NHTSA has sent teams to inspect similar crashes involving Teslas that took place in recent weeks in Houston and Detroit. Local law enforcement has said it doesn’t believe Autopilot was involved in the Detroit crash, but they have yet to make the same determination in Houston.
This is also the latest incident to involve a driver using Autopilot crashing into a stationary object. There have been at least two fatal crashes in which a Tesla owner has smashed into a stopped vehicle, and Tesla has yet to address it in any meaningful way.
Tesla didn’t respond to a request for comment, likely because the company has dissolved its press office and typically doesn’t respond to media requests anymore. In the past, Tesla has warned its customers that Autopilot is not an autonomous driving system and still requires constant attention to the road while in use.
At the same time, the company recently rolled out a beta version of Autopilot called “Full Self Driving” that has given many people the false impression that Tesla vehicles are autonomous and don’t require drivers to pay attention to the road. Tesla recently expanded the number of people who have access to the beta software. “Still be careful, but it’s getting mature,” CEO Elon Musk tweeted recently.
Tesla has a checkered history with the NHTSA, the federal agency that can issue recalls and investigate automobile crashes. The agency has investigated multiple fatal crashes involving Autopilot. Last year, the National Transportation Safety Board concluded that the ADAS was one of the probable causes of a fatal 2018 crash, in which a California man was killed after his Model X smashed into a concrete barrier. Later, the chair of the safety board said Tesla was ignoring its recommendations. And last year, a spokesperson for NHTSA said the agency was “monitoring” the rollout of Tesla’s Full Self Driving software.
Safety advocates decried Tesla’s decision to test its driver-assistance software on its customers as irresponsible. The executive director of the Center for Auto Safety accused Tesla of “intentionally misleading the public regarding the capabilities and shortcomings of their technology,” according to The Associated Press.
(Pocket-lint) – Sony Mobile revealed the second generation of its flagship Xperia 1 4K smartphone in February 2020. Following the naming structure of the Sony Alpha cameras, the Xperia 1 II offers a very similar design to its predecessor but with a few upgrades.
Here’s how the Sony Xperia 1 II and the Sony Xperia 1 compare to help you work out which to buy and whether to upgrade. Keep in mind that the Xperia 1 III is also expected to appear at some point in the next few months.
What’s the same?
Display
Triple rear camera
Fingerprint sensor
The Sony Xperia 1 II offers the same Omnibalance design we have come to expect from Sony Xperia devices, with a metal frame sandwiched between two glass panels like the Xperia 1. There are some differences, which we will go into in a minute, but it’s clear the Xperia 1 II and Xperia 1 are part of the same family.
Both devices are IP65/68 water and dust resistant and both have a 6.5-inch CinemaWide display with a 4K resolution and a 21:9 aspect ratio. The Xperia 1 II and Xperia 1 also both have a triple rear camera, single front camera, fingerprint sensor and a number of Sony technologies including Stamina Mode for the battery.
What’s different?
Despite looking similar and sharing many of the same technologies, there are a few differences between the Xperia 1 II and Xperia 1 which are worth considering if you are planning to upgrade or choosing between the two models.
Design
Xperia 1 II: 166 x 72 x 7.9mm, 181g
Xperia 1: 167 x 72 x 8.2mm, 178g
The Xperia 1 II has squarer edges than the Xperia 1, though the overall tall and slender look is shared between the two handsets, with both measuring 72mm in width.
The Xperia 1 II is ever so slightly shorter and slimmer than the Xperia 1, however, and a little heavier. It also sees the reintroduction of the 3.5mm headphone jack, and it repositions the rear camera housing from the centre to the top left of the handset, as it was on Xperia handsets before the Xperia 1.
Although the Xperia 1 II and the Xperia 1 have the same size and resolution display, the Xperia 1 II adds a couple of extras. Both devices have the Creator mode “powered by CineAlta”, which is designed to deliver a true representation of colours like a Master Monitor, and they are both HDR compatible.
The Xperia 1 II also has a feature called Motion Blur Reduction, which aims to deliver visuals like a 90Hz display.
Camera
Xperia 1 II: Triple rear + ToF sensor, single front
Xperia 1: Triple rear, single front
Both the Xperia 1 II and the Xperia 1 feature a triple rear camera made up of three 12-megapixel sensors, though the Xperia 1 II adds a Time of Flight sensor too. It also builds on the camera features offered by the Xperia 1, including a Photo Pro mode with a visual layout that reflects the user interface of the Sony Alpha 9.
There’s also a 20fps burst mode that can give you autofocus and auto exposure with subject tracking through the burst.
Both the Xperia 1 II and the Xperia 1 have an 8-megapixel front camera.
The Xperia 1 II runs on the Qualcomm Snapdragon 865 with the X55 modem, allowing for 5G connectivity. The Xperia 1 meanwhile, runs on the Qualcomm Snapdragon 855 and offers 4G LTE connectivity.
There is also a little extra RAM in the Xperia 1 II at 8GB compared to 6GB in the Xperia 1 and the 2020 device offers double the internal storage too at 256GB over 128GB. Both handsets have microSD support for storage expansion, but the Xperia 1 II supports up to 1TB cards, while the Xperia 1 supports up to 512GB cards.
Battery
Xperia 1 II: 4000mAh battery, wireless charging
Xperia 1: 3330mAh battery, no wireless charging
The Xperia 1 II comes with a larger battery capacity than the Xperia 1, offering a 4000mAh cell over the 3330mAh battery in the 2019 model. That’s not the only difference in the battery department, though.
The Xperia 1 II finally sees Sony offer wireless charging – a feature the Xperia 1 notably leaves off its spec sheet, despite many competitors offering the technology. USB Type-C charging is on board both devices and both have technology like Sony’s Stamina Mode as we mentioned previously.
Price
The Xperia 1 II costs £1099 in the UK, which is pricier than what the Xperia 1 first retailed at.
The Xperia 1 cost £849 when it first arrived. You’ll likely find it available cheaper now though.
Conclusion
The Sony Xperia 1 II builds upon the Xperia 1, offering a more advanced processor, more RAM, more storage and a larger battery – all of which you would expect from a succeeding flagship. Sony has also refined the design, reintroduced the 3.5mm headphone jack and enhanced the camera capabilities.
The display is pretty much the same between the two handsets and the software experience will be pretty much identical too, aside from a few extra features on the Xperia 1 II.
On paper, the Xperia 1 II is the device to go for, but consider that the Xperia 1 III will likely appear at some point this year. Depending on which features are most important to you, you might accept some compromises if you can find the Xperia 1 at a decent price now.
AMD Radeon CVP and GM Scott Herkelman said in a video interview with PCWorld that FidelityFX Super Resolution (FSR), the company’s response to Nvidia DLSS, is “progressing very well internally” and that he believes it could debut later this year.
Nvidia introduced DLSS in 2018 and released DLSS 2.0 in 2020, so in some ways, AMD’s response to the technology is coming later than some might have expected. But that can partly be attributed to AMD’s plans to make FSR a cross-platform tool.
DLSS is currently limited to Nvidia graphics cards. FSR is supposed to operate across AMD’s GPUs, including those used in the Xbox Series X|S and PlayStation 5, as well as graphics products made by Intel and Nvidia. That’s a much bigger undertaking.
“Our commitment to the gaming community is [FSR] needs to be open, it needs to work across all things, and our game developers need to adopt it and feel like it’s a good thing,” Herkelman said in the PCWorld interview, which you can watch here:
Herkelman also said that FSR is “probably one of the biggest software initiatives we have internally, because we know how important it is that if you want to turn on ray tracing, you don’t want to […] have your GPU get hit so hard.”
Unfortunately, it seems like foundational aspects of FSR still have to be figured out. Herkelman told PCWorld the tool wouldn’t necessarily be based on machine learning, and that AMD is working with game developers to find the best way to improve performance.
In the meantime, Nvidia said Wednesday that nearly 40 titles currently support DLSS and that “there are many more implementations of these technologies waiting in the wings to be announced and released in the coming weeks and months.”
This could turn out to be a tortoise-and-the-hare situation. An open source, cross-platform solution like FSR could easily appeal to developers more than a proprietary technology like DLSS. The problem is FSR hasn’t even shown up to the race track.
We could see new iPad Pros launch next month, according to Bloomberg, with one model due a significant upgrade in screen tech.
The site’s sources say that the 12.9in iPad Pro will get a Mini LED display. This would have better contrast ratios than the existing LCD panel, without OLED’s susceptibility to burn-in.
It’s not the first we’ve heard about the tech coming to Apple devices, either. Back in 2019, analyst Ming-Chi Kuo claimed the firm was working on Mini LED-equipped laptops and tablets. He also predicted that Apple would debut the tech in the 12.9in iPad Pro. So there’s a lot in this story that stacks up.
The new Pros are thought to look similar to the current models, but with speedier processors inside. In fact, performance should be “on a par” with Apple’s M1 MacBook Airs, MacBook Pros and Mac Mini, the report says.
Apple’s M1 devices launched last year, and are the first to feature Apple’s own silicon chips, marking a break from Intel’s processors. With Apple making the hardware and software, performance has increased significantly, with noted gains in battery life.
Both the new iPad Pros are also thought to feature new cameras. Both have reportedly been tested with Thunderbolt ports, which would transfer data quicker than the current USB-C. But there’s no word on whether they will launch with Thunderbolt or USB-C.
It’s not just new Pros that Apple has in the iPad pipeline. The company is reportedly working on an iPad Mini with a bigger screen than the current 7.9in, and a standard iPad that’s slimmer and lighter than the current model (that tallies with what we’ve already heard).
Both should launch later this year, possibly around September, a year on from the last iPad range refresh. Of course, that’s also when we’re expecting to see the iPhone 13 launch. Better start saving.
We wouldn’t have tech without science, and The Verge wouldn’t be what it is without its team of science reporters. In this time of pandemics, Mars landings, and climate controversies, our skilled science team is more important than ever. We talked to Nicole Wetsman, one of our top science and health reporters, to find out how she does her job and what tools she uses.
What is your job at The Verge?
I’ve always been interested in science and health, but I never wanted to work in a lab or be a doctor. Reporting on those subjects gave me a way to learn and work with those ideas. I write about science, health, and health technology for The Verge. For the past year, that’s primarily meant covering COVID-19 — everything from testing technology to the vaccine rollout to public health data systems. I also help our video team script health-related videos and sometimes jump in as an on-camera host.
What is the process you follow when you are writing a science article?
I usually start by reading through any research articles on a particular topic and then talking with scientists and other experts who work in that area. That might include people who did a study or built a new health app or people who work in fields that might apply the new innovation. Then, I organize my research, synthesize what I found, and write up a story.
What hardware tools do you use for your work?
I’m embarrassingly low-tech for a reporter at a technology website. For the most part, I just use my 13-inch MacBook Pro, AirPods, and iPhone 12 to do everything. Occasionally, I pull out a Zoom F1 Field Recorder to record voice-overs for video projects.
What software tools do you use for your work?
I do most of my writing and research organization in Google Docs. I use the recording and transcription service Otter for interviews. It matches audio with the transcript, so I can easily go back and find whatever part of the interview I need, even if the transcription isn’t perfect. (It usually isn’t.)
When I need to find scientific research on any topic vaguely medical, I turn to PubMed, a search engine housed at the National Institutes of Health. I also use Google Scholar to find academic research articles.
Are there any other tools that you use?
I write out my to-do lists and schedule in a Moleskine weekly planner, which is the only notebook I’ve found with a layout that works for me.
What advice do you have for people who are considering reporting as a profession?
Journalism can sometimes seem like a competitive field, with reporters jockeying for scoops, intel, and access. At the core, though, it’s inherently collaborative. Working with others means benefiting from their ideas, edits, and perspectives, and it makes the final product better.
If you’ve got a recent Canon DSLR or mirrorless camera and are interested in getting more use out of it, Canon announced today that it would soon begin selling webcam accessory kits, offering Canon owners an easier way to retool their cameras as high-quality webcams.
Each kit has different parts, but all three packages include a USB cable to connect your camera to your desktop computer or laptop, and a battery insert and power cord so that you can continually keep your Canon camera powered with a wall outlet. It is important to note that none of these kits include a tripod to mount your camera.
Canon has announced three versions of its accessory kit, which are available for preorder at Adorama, Amazon, Best Buy, and B&H Photo, with the retailers noting that preorders are expected to ship on March 25th. The two most affordable kits are compatible with EOS M cameras (M50, M50 Mark II & M200) and EOS Rebel cameras (Rebel T3, T5, T6 & T7), and cost $89.99 each. The more expensive kit, which works with EOS RP cameras, will retail for $159.99. Alternatively, you can buy a third-party charging kit and supply your own USB cable for less than the Canon kits cost; or if you don’t own a camera at all, you can buy a webcam starter kit for as low as $466 on Canon’s website.
Within the past year, webcams have been in high demand due to the pandemic as many people work from home or rely on videoconferencing tools to communicate. Canon, like its competitors, already released software that allowed owners to repurpose select Canon cameras as webcams to address the shortages.
Asus has listed the ROG Maximus XIII Apex on its website, implying that the successor to the ROG Maximus XII Apex may be closer than we think. The new iteration to the Apex series has been engineered to tame Intel’s 11th Generation Rocket Lake processors.
Built around the new Z590 chipset and existing LGA1200 socket, the ROG Maximus XIII Apex comes equipped with an 18-phase power delivery subsystem. Each power stage, which can manage up to 90 amps, is accompanied by a MicroFine Alloy choke that can do 45 amps. Asus revamped the power design on the ROG Maximus XIII Apex completely by getting rid of the phase doublers. The motherboard also employs 10K Japanese black metallic capacitors that can take a beating. The VRM area is properly cooled with thick, aluminum passive heatsinks. The ROG Maximus XIII Apex feeds Rocket Lake chips through a pair of 8-pin EPS power connectors.
The overclocking toolkit on the ROG Maximus XIII Apex includes a double-digit debug LED, voltage read points, and a plethora of buttons and switches to aid in overclocking. There are also three condensation sensors that are placed strategically across the motherboard to notify you when condensation occurs around the processor, memory or PCIe slot. In total, the ROG Maximus XIII Apex has five temperature sensors, five 4-pin fan headers, two full-speed fan headers, and an assortment of headers for watercooling setups.
Like previous Apex motherboards, the ROG Maximus XIII Apex only provides two DDR4 memory slots. While memory capacity is limited to 64GB, the motherboard supports memory frequencies above DDR4-5000 with ease. The ROG Maximus XIII Apex sports Asus’ OptiMem III technology, featuring an optimized memory tracing layout to improve memory overclocking.
The ROG Maximus XIII Apex offers numerous options for storage, providing eight SATA III ports and up to four M.2 slots. The two M.2 slots on the motherboard itself are PCIe 4.0-ready and come armed with an aluminium heatsink and embedded backplates to provide passive cooling. The other two M.2 slots reside on Asus’ ROG DIMM.2 module, which connects to the motherboard through a DDR4-type interface beside the memory slots. The DIMM.2 module accommodates M.2 drives with lengths up to 110mm.
The expansion slots on the ROG Maximus XIII Apex consist of two PCIe x16 slots and one PCIe x8 slot. Wired and wireless networking come in the shape of a 2.5 Gigabit Ethernet port and Wi-Fi 6E connectivity with support for up to 6GHz bands. The audio system on the ROG Maximus XIII Apex uses Realtek’s ALC4080 audio codec complemented with a Savitech SV3H712 amplifier and high-end Nichicon audio capacitors.
As for USB ports, the ROG Maximus XIII Apex has four USB 3.2 Gen 1 ports, five USB 3.2 Gen 2 ports and one USB 3.2 Gen 2×2 Type-C port at the rear panel. There’s an additional USB 3.2 Gen 2×2 header on the motherboard. The ROG Maximus XIII Apex doesn’t supply any display outputs, so it’s mandatory to pair it with a discrete graphics card.
ROG motherboards have a very rich software suite. On this iteration, Asus has directly implemented MemTest86 into the ROG Maximus XIII Apex’s firmware so overclockers can test memory stability without any hassles. Additionally, a one-year AIDA64 Extreme subscription is also included.
The pricing for the ROG Maximus XIII Apex is currently unknown. The previous Z490 version retailed for $356.99, so we can expect the Z590 followup to be priced around that figure, if not a little higher.
The Raspberry Pi HQ Camera module has appeared in some of the best Raspberry Pi projects we’ve seen, like this stellar astrophotography project. But this maker brings the HQ module into a professional environment with this custom, 3D-printed cinema-style housing project. It’s complete with a custom interface that also provides useful photography features and settings.
In addition to the Raspberry Pi 4 and HQ Camera module, it’s designed to use an adjustable LCD touchscreen and has plenty of room for mounting accessories externally. There is even a dedicated battery slot should the user need to go mobile.
Eat-sleep-code was kind enough to make the project open source. Anyone interested in recreating this project can download the STL files from Thingiverse, which include print notes with optimal print settings and suggestions for each piece of the camera housing.
The custom camera software is available for anyone to use on GitHub. It offers built-in features such as timelapse and additional camera settings like exposure adjustment. For a sleek final touch, it’s optimized to work with touchscreen interfaces.
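You don’t need the project’s software to reason about its timelapse feature; the arithmetic behind any timelapse is simple. As an illustrative sketch (the function name is hypothetical, not taken from the project’s GitHub repo), here is how the number of captures and the playback length fall out of the shoot duration, capture interval, and playback frame rate:

```python
def timelapse_plan(shoot_seconds: float, interval_seconds: float,
                   playback_fps: int = 30) -> dict:
    """Work out how many frames a timelapse needs and how long it plays back.

    shoot_seconds: total real-world time the camera keeps shooting
    interval_seconds: delay between captures
    playback_fps: frame rate of the rendered video
    """
    # Integer number of intervals that fit, plus the frame captured at t=0.
    frames = int(shoot_seconds // interval_seconds) + 1
    playback_seconds = frames / playback_fps
    # How much faster the playback runs than real time.
    speedup = shoot_seconds / playback_seconds if playback_seconds else 0.0
    return {
        "frames": frames,
        "playback_seconds": playback_seconds,
        "speedup": round(speedup, 1),
    }

# A one-hour shoot at a 10-second interval yields 361 frames,
# which plays back in roughly 12 seconds at 30 fps.
plan = timelapse_plan(shoot_seconds=3600, interval_seconds=10)
print(plan)
```

On the Pi itself, a loop like this would simply sleep for `interval_seconds` between captures; the frame count tells you in advance how much storage the run will consume.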
If you want to read more about the development of this professional HQ camera build, check out the original thread on Reddit.