Free is always nice, and when a free app is no longer free (or when the free version becomes so limited it is virtually useless), then you have to decide whether to pay up or move on. This happened with the Evernote note manager almost five years ago, and now it is time for users of the popular password manager LastPass to make the same decision. LastPass is changing its free version so that it will only work on one type of device — either your computer or your mobile device. If you, like most of us, use both a phone and a computer, then you will have to either start paying $3 a month or find an alternative.
If you’d rather not pay at all, there are other password managers out there that have free versions that may work better for you. And there are, of course, other alternatives. Most browsers, such as Chrome, Microsoft Edge, and Firefox, have their own password managers. In addition, many security apps such as Norton offer their own password managers, so if you already subscribe to one, you may have a password manager on hand.
But if you’d rather use an independent password manager, here are a few that are currently available. We have not yet tried them out; this is just a brief look until we have a chance to make recommendations.
Bitwarden
Bitwarden is a well-known open-source password manager that offers a solid selection of features, including saving unlimited items, syncing across devices, and generating passwords. For day-to-day password usage, Bitwarden could be a good alternative.
Other pricing: For $10 a year, you can add 1GB of encrypted file storage and two-step login, among other extras.
Zoho Vault
Zoho Vault, which is one of Zoho’s wide variety of productivity apps, has a free version that includes unlimited storage of passwords and notes, access from both computers and mobile devices, two-factor authentication, and password generation, among a fairly impressive number of other features.
Other pricing: Zoho’s paid plan, which starts at $1 per month per user, offers business options such as password sharing and expiration alerts.
KeePass
KeePass is another free open-source password manager, but judging from its website, it may be a little difficult for less technically adept users to adopt. Nothing is kept in the cloud, so while that can be more secure (your passwords live in an encrypted database locked with a master key), it is also less convenient. However, if you don’t mind manually transferring your password database from one device to another, this could be worth a try.
Other pricing: None
LogMeOnce
LogMeOnce’s free version provides unlimited passwords and use on unlimited devices, along with autofill, sync, password generation, and two-factor authentication. LogMeOnce uses ads to fund its free version, so that could be a drawback, depending on your tolerance for advertising.
Other pricing: Additional features start at $2.50 a month and include emergency access, additional password sharing, and priority technical support, among others.
NordPass
NordPass has a free version that includes unlimited passwords and syncing across devices. While there is no limit on the number of devices you can use, only one can be active at a time — so, for example, if you use it on your phone, you will be logged out of your computer’s version.
Other pricing: The premium version of NordPass lets you have up to six active accounts running at a time, and it includes secure item sharing and a data breach scanner, among other features.
RoboForm
RoboForm has been around for a while, although it’s never been as well-known as LastPass or 1Password. Its free version offers unlimited passwords, form filling, and emergency access, among other features. However, it does not sync across devices, which can be a definite inconvenience.
Other pricing: RoboForm Everywhere costs $18 for a one-year subscription, and it lets you sync across devices, perform cloud backup, and use two-factor authentication, among other features.
IBM plans to eliminate planet-heating carbon dioxide emissions from its operations by 2030, the company announced today. And unlike some other tech companies that have made splashy environmental commitments lately, IBM’s pledge emphasizes the need to prevent emissions rather than develop ways to capture carbon dioxide after it’s released.
The company committed to reaching net zero greenhouse gas emissions by the end of this decade, pledging to do “all it can across its operations” to stop polluting before it turns to emerging technologies that might be able to capture carbon dioxide after it’s emitted. It plans to rely on renewable energy for 90 percent of its electricity use by 2030. By 2025, it wants to slash its greenhouse gas emissions by 65 percent compared to 2010 levels.
“I am proud that IBM is leading the way by taking actions to significantly reduce emissions,” said IBM chairman and CEO Arvind Krishna in a press release.
IBM is putting more emphasis on its cloud computing and AI after announcing in October that it would split into two public companies and house its legacy IT services under a new name. That pivot puts IBM in more direct competition with giants like Amazon and Microsoft in the cloud market, which is notorious for guzzling up energy. Data centers accounted for about 1 percent of global electricity use in 2018, according to the International Energy Agency, and can strain local power grids. All three companies have now made big pledges to rein in pollution that drives climate change.
Microsoft’s climate pledge focuses on driving the development of technologies that suck carbon dioxide out of the atmosphere; it reached net zero emissions in 2012 but still relies heavily on investing in forests to offset its carbon pollution. Amazon committed to reaching net zero emissions by 2040. Amazon’s emissions, however, continue to grow as its business expands.
There is still room for more ambition in IBM’s new climate commitment since the company so far is not setting targets for reducing emissions coming from its supply chain or the use of its products by consumers. These kinds of indirect emissions often make up a majority of a company’s carbon footprint. IBM does not track all of the pollution from its supply chain, but other indirect emissions (like those from the products it sells) made up the biggest chunk of its carbon footprint in 2019. Microsoft and Amazon, on the other hand, consider all of these sources of emissions in their climate pledges.
Microsoft has started testing its xCloud game streaming through a web browser. Sources familiar with Microsoft’s Xbox plans tell The Verge that employees are now testing a web version of xCloud ahead of a public preview. The service allows Xbox players to access their games through a browser, and opens up xCloud to work on devices like iPhones and iPads.
Much like how xCloud currently works on Android tablets and phones, the web version includes a simple launcher with recommendations for games, the ability to resume recently played titles, and access to all the cloud games available through Xbox Game Pass Ultimate. Once you launch a game it will run fullscreen, and you’ll need a controller to play Xbox games streamed through the browser.
It’s not immediately clear what resolution Microsoft is streaming games at through this web version. The software maker is using Xbox One S server blades for its existing xCloud infrastructure, so full 4K streaming won’t be supported until the backend hardware is upgraded to Xbox Series X components this year.
Microsoft is planning to bundle this web version of xCloud into the PC version of the Xbox app on Windows 10, too. The web version appears to be currently limited to Chromium browsers like Google Chrome and Microsoft Edge, much like Google’s Stadia service. Microsoft is planning some form of public preview of xCloud via the web in the spring, and this wider internal testing signals that the preview is getting very close.
The big drive behind this web version is support for iOS and iPadOS hardware. Apple imposes limitations on iOS apps and cloud services, and Microsoft wasn’t able to support the iPhone and iPad when it launched xCloud in beta for Android last year. Apple said Microsoft would need to submit individual games for review, a process that Microsoft labeled a “bad experience for customers.”
In this tutorial, we will train our Raspberry Pi to identify other Raspberry Pis (or other objects of your choice) with Machine Learning (ML). Why is this important? An example of an industrial application for this type of ML is identifying defects in circuit boards. As circuit boards exit the assembly line, a machine can be trained to identify a defective circuit board for troubleshooting by a human.
We have discussed ML and Artificial Intelligence in previous articles, including facial recognition and face mask identification. In the facial recognition and face mask identification projects, all training images were stored locally on the Pi, and the model training took a long time because it was also performed on the Pi. In this article, we’ll use a web platform called Edge Impulse to create and train our model, offloading those processing cycles from our Pi. Another advantage of Edge Impulse is the ease of uploading training images, which can be done from a smartphone (without an app).
We will use BalenaCloudOS instead of the standard Raspberry Pi OS since the folks at Balena have pre-built an API call to Edge Impulse. The previous facial recognition and face mask identification tutorials also required tedious command-line package installs and Python code; this project eliminates all terminal commands in favor of an intuitive GUI.
What You’ll Need
Raspberry Pi 4, Raspberry Pi 400, or Raspberry Pi 3
8 GB (or larger) microSD card
Raspberry Pi Camera, HQ Camera, or USB webcam
Power Supply for your Raspberry Pi
Your smartphone for taking photos
Windows, Mac or Chromebook
Objects for classification
Notes:
If you are using a Raspberry Pi 400, you will need a USB webcam as the Pi 400 does not have a ribbon cable interface.
You do NOT need a monitor, mouse, or keyboard for your Raspberry Pi in this project.
Timing: Plan for a minimum of 1-2 hours to complete this project.
Create and Train the Model in Edge Impulse
1. Go to Edge Impulse and create a free account (or log in) from a browser window on your desktop or laptop (Windows, Mac, or Chromebook).
Data Acquisition
2. Select Data Acquisition from the menu bar on the left.
3. Upload photos from your desktop or scan a QR code with your smartphone and take photos. In this tutorial we’ll opt for taking photos with our smartphone.
4. Select “Show QR code” and a QR code should pop up on your screen.
5. Scan the QR code with your phone’s camera app.
6. Select Open in browser and you’ll be taken to a data collection website. You will not need to download an app to collect images.
7. Accept permissions on your smartphone and tap “Collecting images?” in your phone’s browser screen.
8. If prompted for permissions, tap the “Give access to the camera” button and allow access on your device.
9. Tap “Label” and enter a tag for the object you will take photos of.
10. Take 30-50 photos of your item at various angles. Some photos will be used for training and other photos will be used for testing the model. Edge Impulse automatically splits photos between training and testing.
11. Repeat the process of entering a label for the next object and taking 30-50 photos per object until you have at least 3 objects complete. We recommend 3 to 5 identified objects for your initial model. You will have an opportunity to re-train the model with more photos and/or types of objects later in this tutorial.
From your “Data Acquisition” tab in the Edge Impulse browser window, you should now see the total number of photos taken (or uploaded) and the number of labels (type of objects) you have classified. (You may need to refresh the tab to see the update.) Optional: You can click on any of the collected data samples to view the uploaded photo.
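If you’d prefer to script your uploads instead of using the QR-code flow, Edge Impulse also exposes an ingestion API. Here’s a rough Python sketch, assuming its documented ingestion endpoint; the file name, label, and API key are placeholders:

    import requests

    API_KEY = "ei_0123..."  # placeholder; copy yours from the Edge Impulse dashboard (Keys)

    # Upload one labeled training image to the Edge Impulse ingestion service
    with open("pi3_photo.jpg", "rb") as f:
        response = requests.post(
            "https://ingestion.edgeimpulse.com/api/training/files",
            headers={"x-api-key": API_KEY, "x-label": "raspberry-pi-3"},
            files={"data": ("pi3_photo.jpg", f, "image/jpeg")},
        )
    print(response.status_code, response.text)  # 200 means the sample was accepted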
Impulse Design
12. Click “Create impulse” from “Impulse design” in the left column menu.
13. Click “Add a processing block” and select “Image” to add Image to the 2nd column from the left.
14. Click “Add a learning block” and select “Transfer Learning.”
15. Click the “Save Impulse” button on the far right.
16. Click “Image” under “Impulse design” in the left menu column.
17. Select “Generate features” to the right of “Parameters” near the top of the page.
18. Click the “Generate features” button in the lower part of the “Training set” box. This could take 5 to 10 minutes (or longer) depending on how many images you have uploaded.
19. Select “Transfer learning” within “Impulse design,” set your Training settings (keep defaults, check “Data augmentation” box), and click “Start training.” This step will also take 5 minutes or more depending on your amount of data.
After running the training algorithm, you’ll be able to view the predicted accuracy of the model. For example, in this model, the algorithm correctly identifies a Raspberry Pi 3 only 64.3% of the time, and it misidentifies a Pi 3 as a Pi Zero 28.6% of the time.
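Those percentages come straight out of a confusion matrix: each row holds the true label, each column the predicted label, and per-class accuracy is the diagonal count divided by the row total. A small Python sketch with made-up counts that reproduce the example above:

    import numpy as np

    labels = ["Pi 3", "Pi 4", "Pi Zero"]
    # Rows are true labels, columns are predictions (hypothetical counts)
    confusion = np.array([
        [9, 1, 4],    # 14 Pi 3 photos: 9 correct, 4 mistaken for a Pi Zero
        [1, 12, 1],
        [0, 1, 13],
    ])

    per_class = confusion.diagonal() / confusion.sum(axis=1)
    for label, acc in zip(labels, per_class):
        print(f"{label}: {acc:.1%} correct")              # Pi 3 prints 64.3%

    misidentified = confusion[0, 2] / confusion[0].sum()
    print(f"Pi 3 mistaken for Pi Zero: {misidentified:.1%}")  # 28.6%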
Model Testing
20. Select “Model testing” in the left column menu.
21. Click the top check box to select all and press “Classify selected” to test your data. The output of this action will be a percent accuracy of your model.
If the level of accuracy is low, we suggest going back to the “Data Acquisition” step and adding more images or removing a set of images.
Deployment
22. Select “Deployment” in the left menu column.
23. Select “WebAssembly” for your library.
24. Scroll down (“Quantized” should be selected by default) and click the “Build” button. This step may also take 3 minutes or more depending on your amount of data.
Setting Up BalenaCloud
Instead of the standard Raspberry Pi OS, we will flash BalenaCloudOS to our microSD card. The BalenaCloudOS is pre-built with an API interface to Edge Impulse and eliminates the need for attaching a monitor, mouse, and keyboard to our Raspberry Pi.
25. Create a free BalenaCloud account here. If you already have a BalenaCloud account, log in to BalenaCloud.
26. Deploy a balena-cam-tinyml application here. Note: You must already be logged into your Balena account for this to automatically direct you to creating a balena-cam-tinyml application.
27. Click “Deploy to Application.”
After creating your balena-cam-tinyml application, you’ll land on the “Devices” page. Do not create a device yet!
28. In Balena Cloud, select “Service Variables” and add the following two variables.
Variable 1:
Service: edgeimpulse-inference
Name: EI_API_KEY
Value: [API key found from your Edge Impulse Dashboard].
To get your API key, go to your Edge Impulse Dashboard, select “Keys” and copy your API key.
Go back to Balena Cloud and paste your API key in the value field of your service variable.
Click “Add”.
Variable 2:
Service: edgeimpulse-inference
Name: EI_PROJECT_ID
Value: [Project ID from your Edge Impulse Dashboard].
To get your Project ID, go to your Edge Impulse Dashboard, select “Project Info,” scroll down, and copy your “Project ID.”
Go back to Balena Cloud and paste your Project ID in the value field.
Click Add.
29. Select “Devices” from the left column menu in your BalenaCloud, and click “Add device.”
30. Select your device type (Raspberry Pi 4, Raspberry Pi 400, or Raspberry Pi 3).
31. Select the radio button for Development.
32. If using Wi-Fi, select the radio button for “Wifi + Ethernet” and enter your Wi-Fi credentials.
33. Click “Download balenaOS” and a zip file will start downloading.
34. Download, install, and open the Balena Etcher app on your desktop (if you don’t already have it installed). Raspberry Pi Imager also works, but Balena Etcher is preferred since we are flashing BalenaCloudOS.
35. Insert your microSD card into your computer.
36. Select your recently downloaded BalenaCloudOS image and flash it to your microSD card. Please note that all data will be erased from your microSD card.
Connect the Hardware and Update BalenaCloud
37. Remove the microSD card from your computer and insert it into your Raspberry Pi.
38. Attach your webcam or Pi Camera to your Raspberry Pi.
39. Power up your Pi. Allow 15 to 30 minutes for your Pi to boot and for BalenaOS to update. Only the initial boot requires the long update. You can check your Pi’s status in the BalenaCloud dashboard.
Object Identification
40. Identify your Pi’s internal IP address from the device page in your BalenaCloud dashboard.
41. Enter this IP address in a new browser tab or window. It works well in Safari, Chrome, and Firefox.
42. Place an object in front of the camera.
You should start seeing a probability rating for your object in your browser window (with your internal IP address).
43. Try various objects that you entered into the model, and perhaps even objects you didn’t use to train the model.
Refining the Model
If you find that the identification is not very accurate, first check your model’s accuracy for that item in the Edge Impulse Model Testing tab.
You can add more photos by following the Data Acquisition steps and then selecting “Retrain model” in Edge Impulse.
You can also add more items by labeling and uploading in Data Acquisition and retraining the model.
After each retraining of the model, check for accuracy and then redeploy by rebuilding “WebAssembly” within Deployment.
Cloud computing company Salesforce is joining other Silicon Valley tech giants in announcing a substantial shift in how it allows its employees to work. In a blog post published Tuesday, the company says the “9-to-5 workday is dead” and that it will allow employees to choose one of three categories that dictate how often, if ever, they return to the office once it’s safe to do so.
Salesforce will also give employees more freedom to choose what their daily schedules look like. The company joins other tech firms like Facebook and Microsoft that have announced permanent work-from-home policies in response to the coronavirus pandemic.
“As we enter a new year, we must continue to go forward with agility, creativity and a beginner’s mind — and that includes how we cultivate our culture. An immersive workspace is no longer limited to a desk in our Towers; the 9-to-5 workday is dead; and the employee experience is about more than ping-pong tables and snacks,” writes Brent Hyder, Salesforce’s chief people officer.
“In our always-on, always-connected world, it no longer makes sense to expect employees to work an eight-hour shift and do their jobs successfully,” Hyder adds. “Whether you have a global team to manage across time zones, a project-based role that is busier or slower depending on the season, or simply have to balance personal and professional obligations throughout the day, workers need flexibility to be successful.”
Hyder cites picking up young kids from school or caring for sick family members as reasons why an employee should not be expected to report to work on a strict eight-hour shift every day. He also points to how the removal of strict in-office requirements will allow Salesforce to expand its recruitment of new employees beyond expensive urban centers like San Francisco and New York.
In his blog post, Hyder defines the three different categories of work as flex, fully remote, and office-based. Flex would mean coming into the office one to three days per week and typically only for “team collaboration, customer meetings, and presentations.” Fully remote is what it sounds like — never coming into the office except perhaps in very rare situations or for work-related events. Office-based employees will be “the smallest population of our workforce,” Hyder says, and constitute employees whose roles require them to be in the office four to five days per week.
“Our employees are the architects of this strategy, and flexibility will be key going forward,” Hyder writes. “It’s our responsibility as employers to empower our people to get the job done during the schedule that works best for them and their teams, and provide flexible options to help make them even more productive.”
Adobe is making it easier for multiple people to work on the same file in Photoshop, Illustrator, or Fresco. The three apps are getting a new feature called “invite to edit,” which will let you type in a collaborator’s email address to send them access to the file you’re working on.
Collaborators will not be able to work on the file live alongside you, but they will be able to open up your work, make changes of their own, save it, and have those changes sync back to your machine. If someone is already editing the file, the new user will be given the choice to either make a copy or wait until the current editor is finished. It’s not quite Google Docs-style editing for Photoshop, but it should be easier than emailing a file back and forth.
The feature works with .PSD and .AI files saved to Adobe’s cloud. (It’s already available inside of Adobe XD as well.) It also supports version history, so you’ll be able to reverse course if a collaborator messes something up.
Adobe announced that this feature was in the works back in October. The company has been steadily building more collaboration features into Creative Cloud — the service tying its suite of apps together — in the hopes of making the platform quick, simple, and reliable enough that teams can count on it to move their documents around. Adobe recently updated a related feature that allows documents to be sent to others for review.
At this point, Halo: The Master Chief Collection is complete on both Xbox consoles and PC. So what’s next for the MCC development team? It looks like we’ll be finding out quite soon, with 343 Industries teasing a ‘new place and way to play’.
In the latest Halo Waypoint developer blog, community manager ‘Postums’ discussed the future for MCC community flighting. Some of the additions are expected, like FOV slider support on Xbox consoles and improved keyboard/mouse support across platforms. One note on the list stands out from the rest though, teasing “a new place and way to play”.
Halo: The Master Chief Collection is already playable on xCloud, so this isn’t teasing a cloud launch for the game. It is also very unlikely to be related to a release on a rival console like the Nintendo Switch or PlayStation.
Currently, the leading theory is that Microsoft will be bringing Halo to the Epic Games Store on PC to widen the player base. The game is already available on PC via Xbox Game Pass, Microsoft Store and Steam.
KitGuru Says: We should hear more on this in the next few weeks. What do you think this tease means? Is this indicating an EGS launch, or could it be something bigger?
Last week, Square Enix officially announced ‘Endwalker’, the next expansion for Final Fantasy 14. Now, the development team’s pre-expansion patch plans are beginning to come to light, with the first arriving in April.
Final Fantasy XIV Patch 5.5 is coming on the 13th of April, coinciding with the game’s open beta on PlayStation 5. This update will be split into two parts, setting the world up for the events of Endwalker later this year.
The update is called ‘Death Unto Dawn’, with part one featuring the third chapter of YoRHa: Dark Apocalypse, a Nier-inspired alliance raid. Other features of this patch include:
New Main Scenario Quests – A two-part story paving the way for Endwalker.
New Alliance Raid – The third chapter of the NieR-inspired YoRHa: Dark Apocalypse alliance raid series.
“Sorrow of Werlyt” Questline Update – The conclusion of the Warrior of Light and Gaius’ quest to thwart the Empire’s warmachina development project.
New Trial: The Cloud Deck – Players can face off against the fearsome Diamond Weapon in this latest trial, which will be available in both Normal and Extreme difficulties.
New Dungeon: Paglth’an.
“Save the Queen” Questline Update – Alongside the addition of a new field area, “Zadnor,” players can further upgrade their Resistance Weapons to their final and most powerful stage.
New Unreal Trial – The next powered-up version of an existing primal will be unleashed upon level 80 heroes, providing players with a new challenge and a chance to collect unique prizes.
Crafter Updates.
Ishgard Restoration Update.
“Explorer Mode” Update – The Explorer Mode feature will be expanded to all Level 70 dungeons. Explorer Mode allows players to explore dungeons free from danger to capture striking and fun screenshots while enabling the use of mounts and minions. Players will also now be able to use performance actions while in dungeons, such as playing musical instruments.
Performance Action Updates – Players will now be able to change instruments at any time while performing, and a new instrument will be added.
Job Adjustments for PvE and PvP Actions, New Custom Deliveries, Ocean Fishing Update, New Mounts and more.
Currently, Final Fantasy XIV: Endwalker is scheduled to release in Autumn 2021, featuring the finale of the Hydaelyn and Zodiark story that began in A Realm Reborn.
KitGuru Says: Are many of you still playing Final Fantasy XIV? Are you looking forward to the new expansion later this year?
HyperX delivers a headset that’s meant to roll out of the box and into service. The Cloud Revolver offers 7.1 surround sound for gaming and a wide soundscape, and listening to music on it is a great experience. But the price tag is a stumbling block for what you get in the box.
For
Great audio clarity
Steel lends it fantastic build quality
Solid sound out-of-the-box
Against
Very few audio tweaking options
Can make ears a little warm
Expensive for the offering
The HyperX Cloud Revolver + 7.1 gets some things right in its quest to compete among the best gaming headsets. Compared to some of the company’s other offerings, like the HyperX Cloud II Wireless, the Cloud Revolver + 7.1 offers more, higher-quality memory foam, as well as firm steel. And despite the smaller drivers, HyperX promises a stronger, more robust soundscape on the Cloud Revolver + 7.1 than some of its other offerings.
But at $150, this is an odd product. Although it’s wired, it’s the same price as the Cloud II Wireless, which offers similar features, like virtual 7.1 surround sound and a detachable noise-cancelling microphone.
The Cloud Revolver + 7.1 comes with an audio-boosting digital signal processor (DSP) via a handy USB sound card that also provides audio controls and virtual 7.1 surround sound. But its surround sound, and its audio in general, isn’t tweak-friendly, keeping the package simple but hard to perfect.
HyperX Cloud Revolver + 7.1 Specs
Driver Type: 50mm neodymium
Impedance: 32 Ohms
Frequency Response: 10 Hz - 23.2 kHz
Microphone Type: Detachable condenser noise-cancelling
Connectivity: USB Type-A or 3.5mm
Weight: 0.83 pounds (375g) headset only; 1 pound (452g) with mic and cable
Cords: 6.67 feet (2.03m) USB-A cable with 7.1 dongle; 3.33 feet (1m) 3.5mm
Lighting: None
Software: HyperX Ngenuity (Beta)
Design and Comfort
The HyperX Cloud Revolver + 7.1 is an update of an older design: the original HyperX Cloud Revolver, released in 2016. The general build remains the same, though HyperX has removed all the color from the design. While the original was matte black plastic and steel with red HyperX highlights, the 2021 edition’s highlights are a simple, understated white. There’s no RGB on this headset, just crisp, clean black and white.
A single piece of steel runs across the entire headband from ear cup to ear cup. Not only is that the most striking part of the design, it also provides stability. Underneath that steel band is an adjustable smaller band that sits on top of your head. That band is made of leatherette and memory foam, providing a smooth cushion for the Cloud Revolver + 7.1 to rest upon.
The ear cups themselves are pretty hefty, with a design that looks like speakers on the outside, flanked by the steel fins of the headband. On the inside of the ear cups, you’ll find more leatherette and memory foam. There’s more foam here than in some of HyperX’s cheaper headset models. There are no controls on the ear cups, no volume roller or mute button, but there is a 3.5mm jack for the detachable microphone. The mic itself is flexible but can’t be slid into a position where it’s out of your face, and it lacks any indicator for when it’s muted.
All told, while it’s not the lightest headset I’ve tested, the Cloud Revolver + 7.1 feels pretty good. The headset itself is 0.83 pounds (375g), but the distribution of weight is fantastic. It sits light on the top of your head, and any clamping pressure around the ears is lessened by the memory foam pads. I have a pretty big head, though, and I get the feeling it might be too roomy for those with smaller heads: the metal band is around 9 inches across, and the gap between the earcup pads is around 6-6.5 inches. There’s also not a ton of twist in the ear cups, and for long sessions I could feel the insides getting a little warm.
The Cloud Revolver is a fully wired headset. A braided cable runs from the left ear cup and cannot be detached. It’s around 3.33 feet (1m) in length, ending in a 3.5mm jack. HyperX only specs the headset to work with PC and PS4, but with the 3.5mm connection it should work with a Nintendo Switch, Xbox One, Xbox Series X, PlayStation 4 (PS4), and PlayStation 5 (PS5).
Then there’s the USB sound card. It has a 3.5mm jack for plugging in the headset and ends in a USB Type-A connector for use with your PC. With it, you get boosted audio via digital signal processing, as well as virtual 7.1 surround sound. It plugs into your PC, PS4, or PS5. On the sound card dongle, you’ll find volume controls for the headset and microphone, a mute button on the side, and a big button for activating the 7.1 surround sound capabilities. The mute button and 7.1 button both light up, letting you know which mode you’re in for each feature. The dongle also has a clip on the back for attaching to your shirt or pants to keep it handy.
Cloud Revolver + 7.1 Audio Performance
HyperX markets the Cloud Revolver + 7.1 as a “studio-grade” headset. It has 50mm drivers, in line with most of its competition, but sports a larger frequency range than most. The cans stretch from 10 Hz to 23.2 kHz, giving them an edge on both ends against others in this price range. That means a relatively wide soundscape.
There is one problem though. This headset utilizes HyperX’s own version of virtual 7.1 surround sound. There’s no tweaking and no equalizer available in HyperX’s software suite. And there’s no support for something like DTS Headphone:X or Dolby surround. HyperX’s 7.1 utilizes Windows Sonic on PC for any tweaks; the problem I have is that while Windows Sonic is great for positioning, I find the overall audio quality and available settings are far better on DTS Headphone:X or Dolby. The company did have a version of this headset that had Dolby support, the Cloud Revolver S, but that product doesn’t look like it’s being produced anymore. The headset we’re reviewing is essentially a non-Dolby rebrand of the S.
I loaded up Hitman 3. One of the new levels in this entry in the series, Berlin, is an excellent test with 7.1 on. The level takes place in an underground club hidden in a derelict power plant. Voices came through on the headset clearly, from the correct virtual channels, with no distortion. The real test was below, though. As you round the stairs into the club proper, there’s loud, booming techno music playing, with a good meaty bass beat to it. Even amid the cacophony, Hitman 3 is still great about letting you hear dialogue that may point to future assassinations. It’s a pretty chaotic scene in terms of sound, especially with the ebb and flow of the techno as you move around the environment, and the Cloud Revolver + 7.1 handled it well.
The Cloud Revolver + 7.1 is only guaranteed to work with PC and PS4, as per HyperX. But my PS5 recognized it immediately in sound devices when I plugged it in via USB. I didn’t have any sound initially, leading me to assume it didn’t work, but the trick with the Cloud Revolver + 7.1 is the audio controls on the dongle work independently of the system volume. You can have the system volume up, but the dongle volume down, and hear nothing.
Playing Marvel’s Spider-Man: Miles Morales, I found the system’s 3D audio worked well with the Cloud Revolver + 7.1. Walking around the city to get a feel of the directional sound, I could walk around a running car and clearly hear the engine humming along from the correct direction.
In terms of clarity, I could hear every thwip of the web-shooters alongside the whipping winds, the low bass beat of the soundtrack and even J. Jonah Jameson’s annoying radio broadcast. However, I did notice a little loss of clarity in the highs, with strings in the ambient soundtrack blending a bit with some of the city’s sounds.
The first music track I tried on the headset was Jason Derulo’s “Lifestyle.” It works well as a test case because of the transition from the early parts of the song. You have the thrumming of the bass guitar contrasted with Derulo’s vocals, which are then joined by accompaniment and staccato claps. Once the chunky bass in the chorus comes in, the song is playing on nearly every level. It’s got a little bit of everything.
Listening to the track on the Cloud Revolver + 7.1 allowed me to test the difference in the standard stereo versus the 7.1 surround. In stereo, there was wonderful differentiation and clarity between the different parts of the song. The wider soundscape really showed up to play. Switching to surround sound, it was clear that HyperX’s solution pushes the mids back, really playing up the highs and lows.
Across a few other tracks, I found aspects of the music that were missing on my day-to-day headset. Gfriend’s “Labyrinth” has a sort of alternating high xylophone-style sound in the background of the chorus that I had never noticed before. And the understated low piano in the bridge of Clean Bandit’s “Higher” was suddenly apparent. There’s just an excellent amount of separation and clarity to the overall sound on this headset. It’s probably one of the better music-listening experiences at this price point.
Microphone
The microphone on the Cloud Revolver + 7.1 is a unidirectional condenser mic that you can detach from the headset. My recordings sounded pretty good, though they came across a little warm overall. Vocal clarity was pretty good, but there was still audible popping.
Noise cancellation, meanwhile, was decent. The headset took care of a good amount of environmental sound. There was someone mowing the lawn outside of my apartment, for example, and that wasn’t in the recording much. My local television noise also didn’t come through on recordings.
The boom mic is flexible, allowing for decent placement in front of your mouth. I also prefer having the mic mute on the dongle, because it means you’re not getting noise in your recording as you mute the mic.
HyperX specs the Cloud Revolver + 7.1’s mic for a frequency response of 50 Hz – 7.7 kHz.
Software
HyperX has beta software, NGenuity, that works with many of its gaming peripherals, including some headsets. The Cloud Revolver + 7.1, however, is not meant to work with any software. Instead, HyperX targets this at users who want a simple plug-and-play package. But those who like to tweak their audio or want to address any perceived weaknesses in the Cloud Revolver + 7.1’s performance are out of luck.
If you want to do any virtual speaker positioning regarding the 7.1 surround sound, you can use the standard Windows menus via Windows Sonic.
Bottom Line
With the Cloud Revolver + 7.1, HyperX has crafted cans with great build quality, effective virtual 7.1 surround sound support, a nice wide soundscape and versatility through its two connection options (3.5mm or USB Type-A). It also delivers one of the better music-listening experiences I’ve had in the $150 price range.
However, the virtual 7.1 surround sound here is a step down from the immersive feel and customization options premium competitors, like Dolby, offer. And HyperX’s lack of audio tweaking options means you’re essentially stuck with what you get out of the box. The company could gain some ground simply by fixing that.
There are more customizable options with advanced surround sound for less. As of writing, Logitech Pro X is about $20 cheaper than our review focus, and you get DTS Headphone X 2.0 support, an extensive audio equalizer and Blue microphone audio tweaks via Logitech software. The Razer BlackShark V2 offers THX Spatial Audio for a whopping $50 less. And that’s all before you even get into wireless headset options, which are pretty price-competitive these days.
Sure, I might love listening to music on the Cloud Revolver + 7.1, but a gaming headset is more than that. And frankly, HyperX is still behind the competition in terms of bells-and-whistles.
But if you’re not into tweaking and just want something that offers decent virtual surround sound and covers a wide range of frequencies out of the box while sitting comfortably on your noggin, the Cloud Revolver + 7.1 is worth a look.
An ex-employee of Intel allegedly walked away with 3,900 documents containing confidential data from the company. Intel has filed a lawsuit and is seeking $75,000 in damages for potential harm from the stolen documents.
The former employee in question is Dr. Varun Gupta, who left Intel last year to join Microsoft as a “Principal for Strategic Planning in Cloud and AI.” On his last day working for Intel, Dr. Gupta is alleged to have grabbed nearly 4,000 confidential files and stashed them on a couple of USB drives. The data pertains to Intel’s Xeon processors, pricing data, strategies, and Intel’s manufacturing capabilities of the chips.
Intel has filed a lawsuit, saying that Gupta used this confidential information to gain an unfair advantage with Microsoft in negotiations concerning (Xeon) product specifications and pricing.
Intel had a security team begin an investigation into Gupta, with help from Microsoft, to see how damaging the theft was. Ultimately they found Gupta had indeed taken thousands of Intel documents and had access to the thumb drives with the confidential data over a hundred times during his time with Microsoft.
Dr. Gupta has admitted to having one thumb drive, but he says he turned the drive over to Microsoft immediately for analysis. The other thumb drive has not been found so far. Besides admitting to having one thumb drive with some of Intel’s data, Gupta denies all of Intel’s other claims.
While Gupta’s motives are still unclear, it seems the matter will be settled in court. As a worst-case scenario (for Intel at least), Microsoft might get Xeon processors at a steeper discount than anyone else.
Home security camera systems have exploded in popularity while decreasing in price over the past few years. For example, you could purchase a Ring Indoor Security Camera for around $60, but there are some drawbacks: first, vendors like Ring often charge a monthly fee to store your data; second, you might not want video and photos from inside your home shared with a third party (in Ring’s case, Amazon), where strangers could potentially see them.
MotionEyeOS, a free open-source application, allows you to turn a Raspberry Pi with a camera into a home video monitoring system, where the photos and videos can either stay on your device (and home network) or, if you choose, be uploaded automatically to a cloud-storage service such as Google Drive or Dropbox.
In this tutorial, we will show you how to set up a Raspberry Pi security camera with MotionEyeOS. This software works with almost any Raspberry Pi (connected to the internet) and almost any webcam or Pi camera. There’s no fancy coding to be done in this project; it just works.
Here are a few of the cameras I’ve successfully used with MotionEye.
This Raspberry Pi security camera can be used to record porch pirates, monitor children or pets, or watch out for burglars.
Disclaimer: This article is provided with the intent for personal use. We expect our users to fully disclose and notify when they collect, use, and/or share data. We expect our users to fully comply with all applicable national, state, and municipal laws.
What You’ll Need
Raspberry Pi 4, Raspberry Pi 3B+, or Raspberry Pi Zero W
8 GB (or larger) microSD card
Raspberry Pi Cam, HQ Camera, Infrared Camera, or webcam
Monitor, power supply, and HDMI cable (for your Raspberry Pi)
Your Windows or Mac computer.
Install MotionEyeOS
In this section, we will download MotionEyeOS, flash it to a microSD card for our Raspberry Pi security camera, and set our WPA credentials.
1. Download the latest version of MotionEyeOS corresponding to the specific model of Raspberry Pi you are using from https://github.com/ccrisan/motioneyeos/releases
2. Insert your microSD card into your computer to be read as a storage device.
3. Launch Raspberry Pi Imager. You can download the imager here if you don’t already have it installed on your computer.
4. Select “Use custom” for the Operating System.
5. Select the motioneyeos version that you just downloaded. This should be a .img.xz file.
6. Select your microSD card under “SD Card.” Note that all data on your microSD card will be erased in the next step.
7. Click “Write” in the Raspberry Pi imager. The ‘write’ process could take 1 to 2 minutes.
8. When the process completes, physically remove and then reinsert your microSD card. We do this because the software automatically ejects the microSD card when the process completes, but we need to add one file before the next step.
9. Create a new file named wpa_supplicant.conf with the following text, replacing “YOUR_NETWORK_NAME” and “YOUR_NETWORK_PASSWORD” with your information. A source code editor such as Atom works great for this purpose. WordPad and Notepad are not recommended to create this file as extra characters are added in the formatting process.
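A typical wpa_supplicant.conf looks like the following; this is a minimal sketch (adjust the country code to your own two-letter code):

    country=US
    update_config=1
    ctrl_interface=/var/run/wpa_supplicant

    network {
        scan_ssid=1
        ssid="YOUR_NETWORK_NAME"
        psk="YOUR_NETWORK_PASSWORD"
    }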
10. Save wpa_supplicant.conf to your microSD card. Eject your microSD card.
11. Insert your microSD card into your Raspberry Pi.
12. Connect your camera, monitor and power supply to your Raspberry Pi. Power up your Pi.
13. Find your internal IP address on the Pi screen. In most cases your internal IP address will start with 192.168.x.x or 10.0.0.x. Alternatively, if you do not have access to a monitor, you can download Angry IP Scanner and find your IP address for your Motioneye Raspberry Pi. Look for “MEYE” to identify your MotionEye Pi.
14. Enter your internal IP address into a browser window of your Windows or Mac computer. Alternatively, you could use a Chromebook or a tablet. At this point your Motioneye should start streaming.
In most cases, the system will automatically stream from the attached camera. If no image comes up, the camera may be incompatible with the Raspberry Pi. For example, an HD webcam may be incompatible with the Raspberry Pi Zero, but will work with a Raspberry Pi 3. There may be some trial and error in this step. Interestingly, most older webcams (manufactured before the Pi) will work with Motioneye. Here’s an old Logitech Pro 9000 connected to a Pi Zero W with a 3D printed stand.
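If you’d like to sanity-check the stream from another machine, the underlying motion daemon typically serves MJPEG on port 8081 (the default; yours may differ). A quick Python check, with a placeholder IP address:

    import requests

    # Placeholder IP; substitute your MotionEye Pi's internal address
    r = requests.get("http://192.168.1.50:8081", stream=True, timeout=5)
    print(r.status_code, r.headers.get("Content-Type"))  # expect an MJPEG/multipart type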
Configuring MotionEye for Raspberry Pi Security Camera
In this section, we will perform a basic configuration of Motioneye and view our Raspberry Pi security camera video stream.
1. Click on the Profile icon near the top left within your browser menu to pull up the Login screen.
2. Log in using the default credentials. The username is admin, and the password field should be blank.
3. Select your Time Zone from the dropdown menu in “Time Zone.” Click Apply. Motioneye will reboot which will take a few minutes. This step is important as each photo and video is timestamped.
4. MotionEye detects motion when a certain percentage of the frame changes between captures. The intent is to set this percentage low enough to pick up the movement you are tracking, but high enough to avoid recording a passing cloud. In most cases, this is achieved through trial and error. Start with the default 4% Frame Change Threshold and then move up until you reach your optimal setting. (The sketch after this list illustrates the idea.)
5. Click the down arrow to the right of “Still Images” to reveal the corresponding settings. Do the same for “Movies.” Set Capture Mode and Recording Mode to “Motion Triggered,” and choose how long to preserve pictures and movies.
I have chosen “For One Week” since I’m only working with an 8GB microSD card. The photos saved locally will serve as a backup. You’ll save all of the photos to Google in a later step. Click Apply to save your changes.
6. Set your Camera Name, Video Resolution, Frame Rate and other options in the “Video Device” section. Click Apply to save your changes.
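To make the Frame Change Threshold from step 4 concrete, here is a minimal Python/OpenCV sketch of frame-difference motion detection. This is an illustration of the idea, not MotionEye’s actual code; the 4% threshold mirrors the default above:

    import cv2

    THRESHOLD_PERCENT = 4.0   # analogous to MotionEye's default Frame Change Threshold

    cap = cv2.VideoCapture(0)             # first attached camera
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)
        changed = (diff > 25).mean() * 100  # percent of pixels that changed noticeably
        if changed > THRESHOLD_PERCENT:
            print(f"Motion: {changed:.1f}% of the frame changed")
        prev_gray = gray

    cap.release()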
Viewing Raspberry Pi Security Camera Images / Video Locally
If you don’t wish to upload images to a third-party service such as Google Drive, you can view the images and/or videos captured locally on your Raspberry Pi security camera. If you choose this method, the images will never leave your local network.
1. Click on the live camera feed and new icons will appear.
2. Click on the image icon to view images.
3. Or click on the “Play” button icon to view movies.
Automatic Uploading to Google Drive (Optional)
In this step, we will configure our Raspberry Pi security camera to automatically upload all of the photos (and videos) taken to Google Drive. This method (with a couple of nuances) also works with Dropbox. Of course, you have to be comfortable with having your images in the cloud.
Most users create a separate Gmail account specifically for this purpose, to maximize free storage space from Google. Additionally, this will come in handy if you decide to enable email notifications in the next step.
1. Click the down arrow corresponding to “File Storage” in the main admin menu.
2. Toggle “Upload Media Files” to ON. This should automatically toggle “Upload Pictures” and “Upload Movies” to ON, but if not, hit ON.
3. Select Google Drive from the “Upload Service” dropdown menu.
4. In your Google Drive, create a new folder for storing your photos and videos. I chose “PorchCam” for the name of my folder.
5. Enter “/” followed by your folder name for ‘Location.’
6. Click “Obtain Key” and accept associated permissions by clicking “Allow.”
7. Copy and paste the authorization code into your “Authorization Key” in Motioneye.
8. Click the “Test Service” button. If you don’t get an error message in Motioneye, then it was a success.
9. Go to your Google Drive folder and test your setup by pointing the camera at yourself and waving to the camera.
Email Notifications (Optional)
In this optional step, we will configure MotionEye to automatically send us emails with attachments containing the photos our Raspberry Pi security camera has taken. It is highly recommended that you create a separate Gmail account specifically for this purpose. These instructions are specific to Gmail only.
1. Enable “Less Secure Apps” in your Gmail account.
2. Expand “Motion Notifications” in Motioneye.
3. Toggle “Send An Email” to ON.
4. Enter your email address and password, and set the following:
SMTP Server = smtp.gmail.com
SMTP Port = 587
Use TLS – Toggle to On
Enter a value for “Attached Pictures Time Span”
5. Click the “Test Email” button.
The first email is a text only email. Subsequent emails will contain attachments.
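For reference, those same settings map directly onto Python’s standard smtplib; a minimal sketch with placeholder credentials (this is not part of MotionEye itself):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "Motion detected"
    msg["From"] = "your.camera@gmail.com"   # placeholder address
    msg["To"] = "your.camera@gmail.com"
    msg.set_content("Test notification from the security camera.")

    # Same server, port, and TLS settings as in the MotionEye form above
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login("your.camera@gmail.com", "your-password")  # placeholder credentials
        server.send_message(msg)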
Mobile App Access to Raspberry Pi Security Camera
MotionEye also offers a mobile app on the iOS App Store and Google Play. Keep in mind that the app will only work while you are on the same network as your Raspberry Pi (unless you enable port forwarding, which is not encouraged for security reasons).
If you live in one of the twelve states where Comcast is planning to roll out 1.2TB data caps, we have some moderately good news: you won’t have to start monitoring your bill for extra charges until July. The ISP had planned to start charging customers $10-and-up fees for using more than 1.2TB of data starting this March, but the rollout has been delayed (via The Washington Post). This gives us a few more months until the scourge of Comcast home internet data caps is truly nationwide.
The areas affected are in Comcast’s Northeast region: Connecticut, Delaware, Massachusetts, Maryland, Maine, New Hampshire, New Jersey, New York, Pennsylvania, Virginia, Vermont, West Virginia, and the District of Columbia, as well as parts of North Carolina and Ohio. If you live in one of those areas, your bill in August could have up to $100 of overage fees for your July use. That’s a lot of extra money, but at least now you have a bit more time to see if you’ll be affected and to make a plan if you are.
The cap was scheduled to roll out this March, but it’s being delayed after Pennsylvania’s attorney general raised objections, saying that now, when we’re struggling with the pandemic and using the internet for work and school, is “not the time to change the rules when it comes to internet data usage and increase costs.” After negotiations, Comcast has agreed to not only push back the data cap start date, but to also waive early cancellation fees for customers who don’t want to be subjected to the caps, according to a press release from the attorney general’s office.
While Comcast customers in the region are probably happy for the delay, the ability to cancel your service with no fees is only useful if you have another ISP that will provide you service, which many across the US do not. The rest of the country has had data caps for a while, and people haven’t liked them. Yet they’ve rolled out anyway because the ISPs have basically no real competition.
Comcast is, however, giving its low-income customers a bit of a break. It announced that it was doubling the speeds of its Internet Essentials plan yesterday, and it’s apparently not going to be imposing data caps on that plan for the rest of 2021. Comcast confirmed to The Verge that this policy was nationwide.
The 1.2TB-plus overage fees will come to the Northeast in July, showing up on August bills. If you go over the 1.2TB limit, you’ll have to pay $10 for every additional 50GB, with the fees capped at $100. You do get one “courtesy” month, where if you go over you won’t be charged extra, but after that the fees will start rolling in. Of course, if you find yourself going over often, Comcast is happy to upgrade you to unlimited data for only $30 a month or as part of a $25-a-month xFi Complete bundle.
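To put the fee structure in concrete terms, here’s a quick sketch of the math as described, assuming (our assumption) that each partial 50GB block is billed as a full block:

    import math

    def overage_fee(usage_tb: float, cap_tb: float = 1.2) -> int:
        """$10 per 50GB block over the cap, with fees capped at $100."""
        over_gb = max(0.0, (usage_tb - cap_tb) * 1000)
        blocks = math.ceil(over_gb / 50)
        return min(10 * blocks, 100)

    print(overage_fee(1.3))  # 100GB over the cap -> $20
    print(overage_fee(2.5))  # far over the cap -> capped at $100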
If you live in the Northeast and are worried about your bill going up come July, Comcast has a tool to check how much data you use. Since that usage is what they’ll be billing you for, you can see if you’re typically over 1.2TB of usage or not — and now, you’ll have a few more months to figure out what to do if you’re consistently over. It is possible you won’t be, though. Back when I had a data cap, I generally stayed under, and I’m a pretty heavy internet user who backs up a lot of photo and video to the cloud.
Disclosure: Comcast is an investor in Vox Media, The Verge’s parent company.
The exec helped Amazon soar in the cloud, and now he’ll determine the company’s future
Amazon is getting a new CEO for the first time in its 27-year history: cloud computing chief Andy Jassy, who will be replacing co-founder Jeff Bezos later this year. Jassy, currently the CEO of Amazon Web Services (AWS), is a core believer in Bezos’ business philosophies and a longtime veteran of the company, having run the cloud division since its inception nearly two decades ago.
Jassy, who turned 53 last year, is now getting the opportunity to make his mark not just on Amazon, but also the world and the major ways the company shaped it, from Whole Foods to a million-person-plus warehouse workforce to massive logistics and AI divisions with far-reaching real-world effects.
Far from a household name, Jassy is still one of the most consequential executives in Amazon’s history. His promotion underscores the importance of cloud computing to the biggest tech titans that now play vital roles in powering the entire internet. In the case of AWS, that includes everything from Netflix and Spotify to the Central Intelligence Agency and the Democratic National Committee. When AWS goes down, huge chunks of the internet go with it.
The transition of power is reminiscent of Satya Nadella’s promotion to the CEO role at Microsoft in 2014, after Nadella spent three years running the company’s Azure cloud business. Nadella modernized many elements of Microsoft’s business and company culture with a focus on the cloud and mobile computing, as well as an excellent eye for major acquisitions. Jassy’s ascent to the top job at Amazon may similarly usher in an era of transformation for the e-commerce giant.
The big question Amazon insiders and those on the outside looking in will try to answer in the next six months, before he takes the job in the third quarter of the year, will be whether Jassy deviates from Bezos’ approach or sticks to business as usual. Yet if Jassy continues to see himself as an acolyte of Bezos and his famous “Day 1” mentality — which argues that companies start to decline and die the moment they rest on their laurels — it will mean plenty of change is on the horizon. For Amazon, change is both the most important survival instinct and its most successful business tool.
When Jassy joined Amazon in the late ‘90s, the company was years away from thinking about the cloud and still focused solely on e-commerce. Jassy graduated from Harvard Business School in 1997 and joined Amazon soon thereafter as part of a wave of fresh MBAs flocking to the tech industry before the dot-com boom. Jassy moved out West with the intention of one day returning to New York, according to an interview last year on The Disruptive Voice podcast, but he’s never held a job at another company.
Jassy went on to become Bezos’ first “shadow” adviser, something like a corporate chief of staff who followed the CEO every day and sat in on all of his meetings, according to a profile of Jassy published late last month by Insider. Jassy also made a peculiar first impression on his boss by accidentally hitting him in the head with a kayak paddle during a characteristically competitive game of company broomball, as recounted in Brad Stone’s 2013 book, The Everything Store: Jeff Bezos and the Age of Amazon.
Bezos and Jassy’s relationship deepened in the years after, with Bezos tasking his younger lieutenant with exploring the then-nascent technology of cloud computing around 2003. The goal was to see whether it made sense for Amazon to offer hosting services to other websites and businesses, back when many of the largest tech companies mainly relied on third-party data centers or had already begun looking into or actively building their own. The idea came from Amazon’s own struggles to build an external development platform for retailers three years earlier, so third-party companies could build their own e-commerce operations.
It was Jassy who helped identify the problem: Amazon’s development tools, frankly, sucked. The company set out to improve them by creating easier-to-use APIs and other technology that would let any one team at Amazon pull from a common pool of resources. “So very quietly around 2000, we became a services company with really no fanfare,” Jassy told a crowd at the re:Invent conference in 2018, according to TechCrunch.
It took Amazon another six years of exploring and experimenting — with the effort to formally develop AWS really taking off after a fateful 2003 executive retreat at Bezos’ house, Jassy recounted — before the company launched its first cloud product in 2006. “In retrospect it [AWS] seems fairly obvious, but at the time I don’t think we had ever really internalized that,” Jassy said at re:Invent. The company’s early investments paid off, as it took competitors years to realize the business opportunity and launch comparable cloud products.
“If you believe companies will build applications from scratch on top of the infrastructure services if the right selection [of services] existed, and we believed they would if the right selection existed, then the operating system becomes the internet, which is really different from what had been the case for the [previous] 30 years,” Jassy explained.
That belief about the future of the internet proved prescient. Today, AWS powers a huge share of the apps, services, and websites consumers and employees use every day, largely because Amazon offers unparalleled infrastructure and developer tools that make building on it as easy as calling a standard API. It’s why so many companies forgo building their own data center operations and instead choose AWS or one of its competitors. Unless you’re Facebook or Google, both of which built out their own global data center operations, it’s simply easier to use Amazon than to do it yourself.
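To make that “standard API” point concrete, here is a minimal sketch of storing and listing files on AWS using boto3, the official AWS SDK for Python. The bucket name is a hypothetical placeholder, and the snippet assumes AWS credentials are already configured on the machine.

import boto3

# Create a client for S3, Amazon's object storage service.
s3 = boto3.client("s3")

# Upload a local file; "example-bucket" stands in for a bucket you own.
s3.upload_file(Filename="report.pdf", Bucket="example-bucket", Key="reports/report.pdf")

# List what is stored in the bucket.
response = s3.list_objects_v2(Bucket="example-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

A dozen lines like these stand in for what would otherwise be a company’s own storage hardware, which is the “right selection of services” argument Jassy makes above.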
Jassy deserves credit for architecting the company’s cloud vision: he has run AWS since its creation and has served as its CEO since Bezos promoted him from a senior vice president role in 2016. Under his leadership, AWS has become the most profitable of Amazon’s divisions, accounting for roughly 63 percent of the company’s profits in 2020 and putting it on track to bring in more than $50 billion in revenue this year. Amazon now controls about a third of the entire cloud infrastructure market, more than its next closest competitors (Microsoft and Google) combined, according to Synergy Research.
Without AWS’s momentous growth, Amazon may not have had the resources to invest as much money back into its retail, logistics, streaming video, hardware, smart home, AI, and other divisions over the years. That makes AWS effectively the engine of Amazon’s continuous reinvention, and Jassy is the spark that helps drive it.
In recent years, Jassy has clearly fashioned himself as an heir apparent to Bezos, spinning tales of Amazon’s early days, the remarkable beginnings of AWS, and how those lessons can be applied to other businesses. He’s a keynote speaker at Amazon’s high-profile re:Invent conference, an industry gathering dedicated to cloud computing, and he’s become an increasingly public face of the company. Last summer, when longtime logistics executive Jeff Wilke, another potential Bezos successor, announced his retirement, the writing was on the wall: someone would eventually take over from Bezos, and it was looking more likely than ever to be Jassy.
Jassy’s management quirks and persona have also become somewhat legendary within the company, similar to Bezos’ infamous email style and meeting decorum. Jassy is known internally for his exhaustive attention to detail and hands-on approach, his penchant for back-to-back meetings, and his welcoming embrace of social justice issues, according to Insider.
In September, he tweeted publicly about accountability for the killing of Breonna Taylor, and he’s been outspoken in his support for the Black Lives Matter movement and LGBTQ issues. Jassy, however, is also known for having defended controversial decisions, like Amazon’s sale of its flawed facial recognition technology to police departments and the government. (Amazon announced a one-year ban on its sale of the tech to law enforcement starting in June of last year.)
Jassy’s approach is also characterized by a willingness to make tough, unprecedented calls, best exemplified by AWS’s decision to ban the social media platform Parler last month following the US Capitol riot. It was a move the company did not take lightly given its “religious” commitment to maintaining service for customers, Insider reported at the time. But it felt compelled to act after an outcry from employees and because Parler posed “a very real risk to public safety,” Amazon said in a statement.
Jassy will no doubt be in charge of making even tougher calls in the future. But that’s part of both the job and the Amazon culture he’s helped cultivate. “It’s really hard to build a business that sustains for a long period of time,” Jassy told a virtual crowd at the all-digital Amazon re:Invent last December. “To do it, you’re going to reinvent yourself, and often you’re going to have to reinvent yourself many times over.”
That’s precisely what Amazon has done over the years, transforming from an online bookseller into an e-commerce giant and onward into a hardware maker, a major Hollywood and entertainment industry player, and now the second largest employer in the country. All the while, Jassy has worked behind the scenes to ensure AWS was growing into the profit machine it is today.
Now, Jassy appears ready for a reinvention of his own. He takes over at a time when Amazon is still at the forefront of so many industries and continuing to explore new territory, all while facing increasing antitrust pressure in the US and overseas and mounting competition in the AI, cloud, and e-commerce industries.
“Typically, what you see is the desperate kind of reinvention — companies on the verge of falling apart or going bankrupt, deciding they have to reinvent themselves. When you wait until that point, it’s a crapshoot whether you’re going to be successful or not,” Jassy explained. “You want to be reinventing when you’re healthy. You want to be reinventing all the time.”
Microsoft is updating its OneDrive app for Android this week with a new home screen, Samsung Motion Photos support, and the ability to play 8K videos. The new home screen includes quick access to recent files, offline files, and OneDrive’s “On This Day” feature, which resurfaces photos you took on the same date in years past.
Samsung Motion Photos support is also included in this update, allowing owners of Samsung phones to play back photos captured with motion in the OneDrive app or online. Like Apple’s Live Photos, these capture a still image along with a few seconds of video and sound from around the moment the shot is taken. Microsoft says Samsung Motion Photos playback is rolling out worldwide and will require Android version 6 or above.
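Under the hood, these files typically embed the short video clip directly inside the JPEG container alongside the still. The Python sketch below shows a common extraction heuristic, not an official Samsung or Microsoft API: it scans the JPEG’s bytes for an embedded MP4 header (the “ftyp” box) and writes everything from there onward out as a standalone video. The file names are hypothetical.

# Rough heuristic for pulling the embedded clip out of a motion photo JPEG.
# Based on the common layout (still image first, MP4 data appended after it),
# not on an official format specification.
def extract_motion_video(jpeg_path: str, mp4_path: str) -> bool:
    with open(jpeg_path, "rb") as f:
        data = f.read()
    # MP4 files begin with an "ftyp" box preceded by a 4-byte size field;
    # look for that signature anywhere inside the JPEG.
    idx = data.find(b"ftyp")
    if idx < 4:
        return False  # no embedded video found
    with open(mp4_path, "wb") as f:
        f.write(data[idx - 4:])  # keep the 4-byte size field
    return True

# Hypothetical usage: extract_motion_video("motion_photo.jpg", "clip.mp4")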
The final addition is 8K video playback for compatible Samsung phones like the new Galaxy S21 or last year’s S20. While you’ve always been able to store 8K videos on OneDrive, the service now supports playing them back on compatible screens and devices. That could tempt more people to keep 8K video on Microsoft’s cloud storage service, particularly now that OneDrive supports files of up to 250GB.
This latest Android update for OneDrive focuses heavily on Samsung’s phones and is another example of the ongoing partnership between Microsoft and Samsung. The two companies are working on a variety of ways to integrate Microsoft’s software and services into Samsung’s Android phones, and there’s even a partnership for cloud gaming through xCloud.
Google parent company Alphabet weathered the tail end of 2020 to post better-than-expected earnings for the fourth quarter of the year. But the bigger story is that Alphabet broke out Google Cloud’s sales for the first time ever, revealing an eye-popping $5.6 billion operating loss for the year but a nearly 50 percent jump in revenue (to $13 billion) compared to 2019. And Google Cloud maintained that growth well into the fourth quarter, when the division generated $3.8 billion in sales. That’s a 46 percent jump from the fourth quarter of 2019.
Those numbers are notable for a few reasons. Google Cloud lags behind the competition, in particular Microsoft’s Azure platform and Amazon’s dominant Amazon Web Services, whose CEO was just promoted to run the entirety of Amazon now that founder Jeff Bezos is stepping back into an executive chairman role, a surprise announced this afternoon. But the division’s fast-growing revenue, second only to the search giant’s core ad business among Alphabet’s reported segments, suggests Google could become a major cloud player and a fiercer competitor to Azure and AWS in the years to come.
The message is now clear: cloud computing is the dominant business for these tech titans, and the execs who excel in the cloud industry are stepping in to take the reins of the whole company. Microsoft CEO Satya Nadella famously ran Azure before he took over for Steve Ballmer, and Google Cloud chief Thomas Kurian was a top exec at Oracle before replacing VMware co-founder Diane Greene in the top Google Cloud role. These are the people running the show at the most important divisions of the most important tech companies, at least until they get promoted to steer the entire ship.
In many ways, Google’s cloud business is going through the same growing pains its competitors once did; it took AWS nearly 10 years to turn its first profit, and it’s now a more than $45 billion annual business. But Google is facing an uphill battle, one that will demand considerable time and financial investment before it pays off.
Thankfully, Alphabet does not depend on Google’s cloud business for profit the way Amazon relies on AWS. Google’s services segment, dominated by its advertising business, brought in a staggering $52.9 billion in revenue last quarter alone and nearly $170 billion for the year.
Profit margins there are immense: the Google Services division posted more than $19 billion in operating income for the fourth quarter of 2020, a 41 percent increase from the fourth quarter of 2019. YouTube also continues to grow at a steady clip, with the video site posting more than $6.8 billion in revenue last quarter, a 47 percent increase over the same period a year earlier.
The always-fluctuating Other Bets division — which includes Alphabet’s X lab, Waymo, and other non-Google companies — took in $196 million in revenue last quarter and $657 million in all of 2020, but it also posted an operating loss of $4.48 billion for the year.