
Six free alternatives to the LastPass password manager

Free is always nice, and when a free app is no longer free (or when the free version becomes so limited it is virtually useless), then you have to decide whether to pay up or move on. This happened with the Evernote note manager almost five years ago, and now it is time for users of the popular password manager LastPass to make the same decision. LastPass is changing its free version so that it will only work on one type of device — either your computer or your mobile device. If you, like most of us, use both a phone and a computer, then you will have to either start paying $3 a month or find an alternative.

If you’d rather not pay at all, there are other password managers out there that have free versions that may work better for you. And there are, of course, other alternatives. Most browsers, such as Chrome, Microsoft Edge, and Firefox, have their own password managers. In addition, many security apps such as Norton offer their own password managers, so if you already subscribe to one, you may have a password manager on hand.

But if you’d rather use an independent password manager, here are a few that are currently available. We have not yet tried them out; this is just a brief look until we have a chance to make recommendations.

Image: Bitwarden

Bitwarden

Bitwarden is a well-known open-source password manager that offers a solid selection of features, including saving unlimited items, syncing across devices, and generating passwords. For day-to-day password usage, Bitwarden could be a good alternative.

Other pricing: For $10 a year, you can add 1GB of encrypted file storage and two-step login, among other extras.

Image: Zoho

Zoho Vault

Zoho Vault, which is one of Zoho’s wide variety of productivity apps, has a free version that includes unlimited storage of passwords and notes, access from both computers and mobile devices, two-factor authentication, and password generation, among a fairly impressive number of other features.

Other pricing: Zoho’s paid plan, which starts at $1/month per user, offers business options such as password sharing and expiration alerts.

Image: KeePass

KeePass

KeePass is another free open-source password manager, but judging from its website, it may be a little difficult for less technically adept users to adopt. Nothing is kept in the cloud, so while that can be more secure (you can store your passwords in a master key-locked encrypted database), it is also less convenient. However, if you don’t mind manually transferring your password database from one device to another, this could be worth a try.

Other pricing: None

Image: LogMeOnce

LogMeOnce

LogMeOnce’s free version provides unlimited passwords and use on unlimited devices, along with autofill, sync, password generation, and two-factor authentication. LogMeOnce uses ads to fund its free version, so that could be a drawback, depending on your tolerance for advertising.

Other pricing: Additional features start at $2.50 a month and include emergency access, additional password sharing, and priority technical support, among others.

Image: NordPass

NordPass

NordPass has a free version that includes unlimited passwords and syncing across devices. While there is no limit on the number of devices you can use, only one can be active at a time — so, for example, if you use it on your phone, you will be logged out of your computer’s version.

Other pricing: The premium version of NordPass lets you have up to six active accounts running at a time, and it includes secure item sharing and a data breach scanner, among other features.

Image: RoboForm

RoboForm

RoboForm has been around for a while, although it’s never been as well-known as LastPass or 1Password. Its free version offers unlimited passwords, form filling, and emergency access, among other features. However, it does not sync across devices, which can be a definite inconvenience.

Other pricing: RoboForm Everywhere costs $18 for a one-year subscription, and it lets you sync across devices, perform cloud backup, and use two-factor authentication, among other features.


IBM sets new climate goal for 2030

IBM plans to eliminate planet-heating carbon dioxide emissions from its operations by 2030, the company announced today. And unlike some other tech companies that have made splashy environmental commitments lately, IBM’s pledge emphasized preventing emissions rather than developing ways to capture carbon dioxide after it’s released.

The company committed to reaching net zero greenhouse gas emissions by the end of this decade, pledging to do “all it can across its operations” to stop polluting before it turns to emerging technologies that might be able to capture carbon dioxide after it’s emitted. It plans to rely on renewable energy for 90 percent of its electricity use by 2030. By 2025, it wants to slash its greenhouse gas emissions by 65 percent compared to 2010 levels.

“I am proud that IBM is leading the way by taking actions to significantly reduce emissions,” said IBM chairman and CEO Arvind Krishna in a press release.

IBM is putting more emphasis on its cloud computing and AI after announcing in October that it would split into two public companies and house its legacy IT services under a new name. That pivot puts IBM in more direct competition with giants like Amazon and Microsoft in the cloud market, which is notorious for guzzling up energy. Data centers accounted for about 1 percent of global electricity use in 2018, according to the International Energy Agency, and can strain local power grids. All three companies have now made big pledges to rein in pollution that drives climate change.

Microsoft’s climate pledge focuses on driving the development of technologies that suck carbon dioxide out of the atmosphere; it reached net zero emissions in 2012 but still relies heavily on investing in forests to offset its carbon pollution. Amazon committed to reaching net zero emissions by 2040. Amazon’s emissions, however, continue to grow as its business expands.

There is still room for more ambition in IBM’s new climate commitment since the company so far is not setting targets for reducing emissions coming from its supply chain or the use of its products by consumers. These kinds of indirect emissions often make up a majority of a company’s carbon footprint. IBM does not track all of the pollution from its supply chain, but other indirect emissions (like those from the products it sells) made up the biggest chunk of its carbon footprint in 2019. Microsoft and Amazon, on the other hand, consider all of these sources of emissions in their climate pledges.


Here’s a first look at Microsoft’s xCloud for the web

Microsoft has started testing its xCloud game streaming through a web browser. Sources familiar with Microsoft’s Xbox plans tell The Verge that employees are now testing a web version of xCloud ahead of a public preview. The service allows Xbox players to access their games through a browser, and opens up xCloud to work on devices like iPhones and iPads.

Much like how xCloud currently works on Android tablets and phones, the web version includes a simple launcher with recommendations for games, the ability to resume recently played titles, and access to all the cloud games available through Xbox Game Pass Ultimate. Once you launch a game it will run fullscreen, and you’ll need a controller to play Xbox games streamed through the browser.

Microsoft’s xCloud service on the web.

It’s not immediately clear what resolution Microsoft is streaming games at through this web version. The software maker is using Xbox One S server blades for its existing xCloud infrastructure, so full 4K streaming won’t be supported until the backend hardware is upgraded to Xbox Series X components this year.

Microsoft is planning to bundle this web version of xCloud into the PC version of the Xbox app on Windows 10, too. The web version appears to be currently limited to Chromium browsers like Google Chrome and Microsoft Edge, much like Google’s Stadia service. Microsoft is planning some form of public preview of xCloud via the web in the spring, and this wider internal testing signals that the preview is getting very close.

The big drive behind this web version is support for iOS and iPadOS hardware. Apple imposes limitations on iOS apps and cloud services, and Microsoft wasn’t able to support the iPhone and iPad when it launched xCloud in beta for Android last year. Apple said Microsoft would need to submit individual games for review, a process that Microsoft labeled a “bad experience for customers.”


How to Build a Raspberry Pi Object Identification Machine

In this tutorial, we will train our Raspberry Pi to identify other Raspberry Pis (or other objects of your choice) with Machine Learning (ML). Why is this important? An example of an industrial application for this type of ML is identifying defects in circuit boards. As circuit boards exit the assembly line, a machine can be trained to identify a defective circuit board for troubleshooting by a human.

We have discussed ML and Artificial Intelligence in previous articles, including facial recognition and face mask identification. In the facial recognition and face mask identification projects, all training images were stored locally on the Pi and the model training took a long time as it was also performed on the Pi. In this article, we’ll use a web platform called Edge Impulse to create and train our model to alleviate a few processing cycles from our Pi. Another advantage of Edge Impulse is the ease of uploading training images, which can be done from a smartphone (without an app).

We will use BalenaCloudOS instead of the standard Raspberry Pi OS since the folks at Balena have pre-built an API call to Edge Impulse. The previous facial recognition and face mask identification tutorials also required tedious command line package installs and Python code. This project eliminates all terminal commands and instead uses an intuitive GUI.

What You’ll Need

  • Raspberry Pi 4, Raspberry Pi 400, or Raspberry Pi 3
  • 8 GB (or larger) microSD card
  • Raspberry Pi Camera, HQ Camera, or USB webcam
  • Power Supply for your Raspberry Pi
  • Your smartphone for taking photos
  • Windows, Mac or Chromebook
  • Objects for classification

Notes: 

  • If you are using a Raspberry Pi 400, you will need a USB webcam as the Pi 400 does not have a ribbon cable interface. 
  • You do NOT need a monitor, mouse, or keyboard for your Raspberry Pi in this project.
  • Timing: Please plan for a minimum of 1-2 hours to complete this project.

Create and Train the Model in Edge Impulse  

1. Go to Edge Impulse and create a free account (or log in) from a browser window on your desktop or laptop (Windows, Mac, or Chromebook).

Data Acquisition

2. Select Data Acquisition from the menu bar on the left.

3. Upload photos from your desktop or scan a QR code with your smartphone and take photos. In this tutorial we’ll opt for taking photos with our smartphone.

4. Select “Show QR code” and a QR code should pop up on your screen.

(Image credit: Tom’s Hardware)

5. Scan the QR code with your phone’s camera app. 

(Image credit: Tom’s Hardware)

6. Select Open in browser and you’ll be taken to a data collection website. You will not need to download an app to collect images.

7. Accept permissions on your smartphone and tap “Collecting images?” in your phone’s browser screen. 

(Image credit: Tom’s Hardware)

8. If prompted for permissions, tap the “Give access to the camera” button and allow access on your device. 

(Image credit: Tom’s Hardware)

9. Tap “Label” and enter a tag for the object you will take photos of.

(Image credit: Tom’s Hardware)

10. Take 30-50 photos of your item at various angles. Some photos will be used for training and others for testing the model. Edge Impulse automatically splits photos between training and testing.

(Image credit: Tom’s Hardware)
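Edge Impulse handles this split for you, but the idea is easy to picture. Here is a minimal sketch of the kind of shuffled train/test split a platform like this typically performs — the filenames and the exact 80/20 ratio are illustrative assumptions, not Edge Impulse’s documented behavior:

```python
import random

def split_samples(filenames, train_fraction=0.8, seed=42):
    """Shuffle the sample list, then split it into training and test sets."""
    rng = random.Random(seed)      # fixed seed so the split is repeatable
    shuffled = list(filenames)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# 40 photos of one labeled object, as suggested in the step above
photos = [f"pi3_{i:02d}.jpg" for i in range(40)]
train, test = split_samples(photos)
print(len(train), len(test))  # 32 photos for training, 8 held out for testing
```

Holding photos out of training is what lets the “Model testing” step later give an honest accuracy number — the model is graded on images it has never seen.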

11. Repeat the process of entering a label for the next object and taking 30-50 photos per object until you have at least 3 objects complete. We recommend 3 to 5 identified objects for your initial model. You will have an opportunity to re-train the model with more photos and/or types of objects later in this tutorial.

(Image credit: Tom’s Hardware)

From your “Data Acquisition” tab in the Edge Impulse browser window, you should now see the total number of photos taken (or uploaded) and the number of labels (types of objects) you have classified. (You may need to refresh the tab to see the update.) Optional: You can click on any of the collected data samples to view the uploaded photo.

(Image credit: Tom’s Hardware)

Impulse Design

12. Click “Create impulse” from “Impulse design” in the left column menu. 

13. Click “Add a processing block” and select “Image” to add Image to the 2nd column from the left.

14. Click “Add a learning block” and select “Transfer Learning.”

(Image credit: Tom’s Hardware)

15. Click the “Save Impulse” button on the far right.

16. Click “Image” under “Impulse design” in the left menu column.

17. Select “Generate features” to the right of “Parameters” near the top of the page.

18. Click the “Generate features” button in the lower part of the “Training set” box. This could take 5 to 10 minutes (or longer) depending on how many images you have uploaded. 

(Image credit: Tom’s Hardware)

19. Select “Transfer learning” within “Impulse design,” set your Training settings (keep defaults, check “Data augmentation” box), and click “Start training.”  This step will also take 5 minutes or more depending on your amount of data. 

(Image credit: Tom’s Hardware)

After running the training algorithm, you’ll be able to view the predicted accuracy of the model. For example, in this model, the algorithm correctly identifies a Raspberry Pi 3 only 64.3% of the time and misidentifies a Pi 3 as a Pi Zero 28.6% of the time.

(Image credit: Tom’s Hardware)
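Those percentages are simply per-class rates read off a confusion matrix. A small sketch of the arithmetic, using hypothetical counts chosen to reproduce the figures above (14 Pi 3 test photos: 9 classified correctly, 4 as a Pi Zero, 1 as something else):

```python
# Hypothetical confusion-matrix row for the true class "Raspberry Pi 3":
# how the 14 Pi 3 test photos were actually labeled by the model.
pi3_row = {"pi3": 9, "pizero": 4, "pi4": 1}

def per_class_rates(row):
    """Convert raw classification counts for one true class into rates."""
    total = sum(row.values())
    return {label: count / total for label, count in row.items()}

rates = per_class_rates(pi3_row)
print(f"correct: {rates['pi3']:.1%}, confused with Pi Zero: {rates['pizero']:.1%}")
# correct: 64.3%, confused with Pi Zero: 28.6%
```

Reading the matrix this way tells you *which* classes the model mixes up, which is more actionable than a single overall accuracy number — it shows where to add more training photos.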

Model Testing

20. Select “Model testing” in the left column menu.

21. Click the top check box to select all and press “Classify selected” to test your data. The output of this action will be a percent accuracy of your model.

(Image credit: Tom’s Hardware)

If the level of accuracy is low, we suggest going back to the “Data Acquisition” step and adding more images or removing a set of images. 

(Image credit: Tom’s Hardware)

Deployment

22. Select “Deployment” in the left menu column. 

23. Select “WebAssembly” for your library. 

24. Scroll down (“Quantized” should be selected by default) and click the “Build” button. This step may also take 3 minutes or more depending on your amount of data.

(Image credit: Tom’s Hardware)

Setting Up BalenaCloud

Instead of the standard Raspberry Pi OS, we will flash BalenaCloudOS to our microSD card. The BalenaCloudOS is pre-built with an API interface to Edge Impulse and eliminates the need for attaching a monitor, mouse, and keyboard to our Raspberry Pi.

25. Create a free BalenaCloud account here. If you already have a BalenaCloud account, log in to BalenaCloud.

26. Deploy a balena-cam-tinyml application here. Note: You must already be logged into your Balena account for this to automatically direct you to creating a balena-cam-tinyml application.

27. Click “Deploy to Application.”

(Image credit: Tom’s Hardware)

After creating your balena-cam-tinyml application, you’ll land on the “Devices” page. Do not create a device yet!

28. In Balena Cloud, select “Service Variables” and add the following two variables.

Variable 1:

      Service: edgeimpulse-inference 

      Name: EI_API_KEY 

      Value: [API key found from your Edge Impulse Dashboard]. 

(Image credit: Tom’s Hardware)

To get your API key, go to your Edge Impulse Dashboard, select “Keys” and copy your API key.  

(Image credit: Tom’s Hardware)

Go back to Balena Cloud and paste your API key in the value field of your service variable.

Click “Add”.

(Image credit: Tom’s Hardware)

Variable 2:

      Service: edgeimpulse-inference 

      Name: EI_PROJECT_ID

      Value: [Project ID from your Edge Impulse Dashboard]. 

(Image credit: Tom’s Hardware)

To get your Project ID, go to your Edge Impulse Dashboard, select “Project Info,” scroll down, and copy your “Project ID.”  

(Image credit: Tom’s Hardware)

Go back to Balena Cloud and paste your Project ID in the value field.

Click Add.

(Image credit: Tom’s Hardware)

29. Select “Devices” from the left column menu in your BalenaCloud, and click “Add device.”

30. Select your Device type (Raspberry Pi 4, Raspberry Pi 400, or Raspberry Pi 3).

(Image credit: Tom’s Hardware)

31. Select the radio button for Development.

32. If using Wi-Fi, select the radio button for “Wifi + Ethernet” and enter your Wi-Fi credentials.

(Image credit: Tom’s Hardware)

33. Click “Download balenaOS” and a zip file will start downloading.

34. Download, install, and open the Balena Etcher app on your desktop (if you don’t already have it installed). Raspberry Pi Imager also works, but Balena Etcher is preferred since we are flashing the BalenaCloudOS.

35. Insert your microSD card into your computer.

36. Select your recently-downloaded BalenaCloudOS image and flash it to your microSD card. Please note that all data will be erased from your microSD card.

(Image credit: Tom’s Hardware)

Connect the Hardware and Update BalenaCloud 

37. Remove the microSD card from your computer and insert it into your Raspberry Pi.

38. Attach your webcam or Pi Camera to your Raspberry Pi.

(Image credit: Tom’s Hardware)

39. Power up your Pi. Allow 15 to 30 minutes for your Pi to boot up and BalenaOS to update. Only the initial boot requires the long update. You can check your Pi’s status in the BalenaCloud dashboard.

(Image credit: Tom’s Hardware)

(Image credit: Tom’s Hardware)

Object Identification

40. Find your Pi’s internal IP address on its device page in the BalenaCloud dashboard.

(Image credit: Tom’s Hardware)

41. Enter this IP address in a new browser tab or window. It works in Safari, Chrome, and Firefox.
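If you’d rather script this check than open a browser, here is a minimal sketch that simply confirms the Pi is serving its page over plain HTTP. The IP address is whatever your dashboard showed; this only tests reachability, not the inference output, and assumes the device serves on the default HTTP port as described above:

```python
import urllib.request

def pi_is_serving(host, timeout=5):
    """Return True if http://<host>/ answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(f"http://{host}/", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers refused connections, timeouts, and DNS errors
        return False

# Usage: pi_is_serving("192.168.1.42")  -> True once the Pi's page is up
```

A check like this is handy because the first boot can take up to 30 minutes; you can poll until the function returns True instead of repeatedly refreshing the browser.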

42. Place an object in front of the camera.

(Image credit: Tom’s Hardware)

You should start seeing a probability rating for your object in your browser window (with your internal IP address). 

(Image credit: Tom’s Hardware)

43. Try various objects that you entered into the model and perhaps even objects you didn’t use to train the model.

(Image credit: Tom’s Hardware)

(Image credit: Tom’s Hardware)

Refining the Model

  • If you find that the identification is not very accurate, first check your model’s accuracy for that item in the Edge Impulse Model Testing tab.
  • You can add more photos by following the Data Acquisition steps and then selecting “Retrain model” in Edge Impulse.
  • You can also add more items by labeling and uploading in Data Acquisition and retraining the model.
  • After each retraining of the model, check for accuracy and then redeploy by re-running “WebAssembly” within Deployment.

Salesforce declares the 9-to-5 workday dead, will let employees work remotely from now on

Cloud computing company Salesforce is joining other Silicon Valley tech giants in announcing a substantial shift in how it allows its employees to work. In a blog post published Tuesday, the company says the “9-to-5 workday is dead” and that it will allow employees to choose one of three categories that dictate how often, if ever, they return to the office once it’s safe to do so.

Salesforce will also give employees more freedom to choose what their daily schedules look like. The company joins other tech firms like Facebook and Microsoft that have announced permanent work-from-home policies in response to the coronavirus pandemic.

“As we enter a new year, we must continue to go forward with agility, creativity and a beginner’s mind — and that includes how we cultivate our culture. An immersive workspace is no longer limited to a desk in our Towers; the 9-to-5 workday is dead; and the employee experience is about more than ping-pong tables and snacks,” writes Brent Hyder, Salesforce’s chief people officer.

“In our always-on, always-connected world, it no longer makes sense to expect employees to work an eight-hour shift and do their jobs successfully,” Hyder adds. “Whether you have a global team to manage across time zones, a project-based role that is busier or slower depending on the season, or simply have to balance personal and professional obligations throughout the day, workers need flexibility to be successful.”

Hyder cites picking up young kids from school or caring for sick family members as reasons why an employee should not be expected to report to work on a strict eight-hour shift every day. He also points to how the removal of strict in-office requirements will allow Salesforce to expand its recruitment of new employees beyond expensive urban centers like San Francisco and New York.

In his blog post, Hyder defines the three different categories of work as flex, fully remote, and office-based. Flex would mean coming into the office one to three days per week and typically only for “team collaboration, customer meetings, and presentations.” Fully remote is what it sounds like — never coming into the office except perhaps in very rare situations or for work-related events. Office-based employees will be “the smallest population of our workforce,” Hyder says, and constitute employees whose roles require them to be in the office four to five days per week.

“Our employees are the architects of this strategy, and flexibility will be key going forward,” Hyder writes. “It’s our responsibility as employers to empower our people to get the job done during the schedule that works best for them and their teams, and provide flexible options to help make them even more productive.”


Adobe makes it easier to share Photoshop and Illustrator projects with collaborators

Adobe is making it easier for multiple people to work on the same file in Photoshop, Illustrator, or Fresco. The three apps are getting a new feature called “invite to edit,” which will let you type in a collaborator’s email address to send them access to the file you’re working on.

Collaborators will not be able to work on the file live alongside you, but they will be able to open up your work, make changes of their own, save it, and have those changes sync back to your machine. If someone is already editing the file, the new user will be given the choice to either make a copy or wait until the current editor is finished. It’s not quite Google Docs-style editing for Photoshop, but it should be easier than emailing a file back and forth.

The feature works with .PSD and .AI files saved to Adobe’s cloud. (It’s already available inside of Adobe XD as well.) It also supports version history, so you’ll be able to reverse course if a collaborator messes something up.

Adobe announced that this feature was in the works back in October. The company has been steadily building more collaboration features into Creative Cloud — the service tying its suite of apps together — in the hopes of making the platform quick, simple, and reliable enough that teams can count on it to move their documents around. Adobe recently updated a related feature that allows documents to be sent to others for review.


Halo Master Chief Collection devs tease ‘a new place and way to play’

Matthew Wilson
22 hours ago

At this point, Halo: The Master Chief Collection is complete on both Xbox consoles and PC. So what’s next for the MCC development team? It looks like we’ll be finding out quite soon, with 343 Industries teasing a ‘new place and way to play’. 

In the latest Halo Waypoint developer blog, community manager ‘Postums’ discussed the future for MCC community flighting. Some of the additions are expected, like FOV slider support on Xbox consoles and improved keyboard/mouse support across platforms. One note on the list stands out from the rest though, teasing “a new place and way to play”.

Halo: The Master Chief Collection is already playable on xCloud, so this isn’t teasing a cloud launch for the game. It is also very unlikely to be related to a release on a rival console like the Nintendo Switch or PlayStation.

Currently, the leading theory is that Microsoft will be bringing Halo to the Epic Games Store on PC to widen the player base. The game is already available on PC via Xbox Game Pass, Microsoft Store and Steam.

KitGuru Says: We should hear more on this in the next few weeks. What do you think this tease means? Is this indicating an EGS launch, or could it be something bigger? 
