How Google Pixel's Portrait Light mode benefits from AI

Source: HW Upgrade added 16th Dec 2020

Google Pixel 5 and Pixel 4a (5G), announced in September (an official launch in Italy is not planned), are the US company's latest smartphones. A post on Google's official Artificial Intelligence blog now gives us new information about the Portrait Light mode integrated into the Photos app. Behind what might seem a trivial feature lies a great deal of engineering work.

The Portrait Light mode of Google Pixel 5 and Pixel 4a (5G)

Among the innovations announced with the arrival of Google Pixel 5 and Pixel 4a (5G) is Portrait Light, which picks up a concept seen on the iPhone for some time: the ability to change the lighting of a portrait of a person. Instead of using "real" spotlights or lamps, algorithms and Artificial Intelligence are used to modify the image.

As Google explains, when shooting with the latest models (Pixel 4, Pixel 4a, Pixel 4a (5G) and Pixel 5), Portrait Light is applied automatically in the default and night modes whenever at least one person is in the scene. Thanks to machine learning and a new model, it was also possible to add a synthetic light that the user can reposition at will. The algorithms introduce a virtual light that illuminates the scene "as a professional photographer would have done," providing additional lighting in post-production to enhance the photograph.

The automatic lighting of the scene and the subject

To teach Google Pixel how to illuminate a photo automatically, the same techniques applied by photographers were used, evaluating the directionality of the light source and the impact it has on the subject.

First, the existing illumination of the scene, with its various light sources, is estimated. To do this, Google built ad hoc models that use a person's face as a reference to infer part of this information. The system then follows studio portrait conventions, where the key light is positioned about 30° above the subject's eye line and 30°–60° off-axis with respect to the camera; the same placement is applied to photographs taken with Portrait Light.
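As a rough illustration (not Google's actual implementation), the studio placement described above can be turned into a 3D light direction with basic trigonometry. The coordinate convention here is an assumption: +z points from the subject toward the camera, +y is up, +x is to the camera's right:

```python
import math

def key_light_direction(elevation_deg=30.0, azimuth_deg=45.0):
    """Unit vector pointing from the subject toward a studio-style key light.

    elevation_deg: angle above the subject's eye line (the article cites 30°).
    azimuth_deg: horizontal angle off the camera axis (the article cites 30°-60°;
                 45° is an arbitrary midpoint chosen for this sketch).
    """
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    # Spherical-to-Cartesian conversion under the assumed convention.
    x = math.cos(el) * math.sin(az)
    y = math.sin(el)
    z = math.cos(el) * math.cos(az)
    return (x, y, z)

direction = key_light_direction()
```

With the default 30°/45° placement, the light sits above and to the side of the camera axis, which is what gives studio portraits their characteristic facial shading.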

The directional light and the image processing

With the previous step complete, the system moves on to the directional light itself. The Artificial Intelligence was "trained" on pairs of photographs taken with and without the additional lighting. To obtain reliable training data, Google photographed 70 people with different physiognomies using a special rig.

The rig consisted of 64 cameras arranged in a sphere around the subject and 331 different LED light sources. A photograph was then captured for each light source, turned on one at a time. Digital portraits were subsequently generated by simulating different lighting conditions, yielding millions of sample images without actually needing millions of subjects under different lighting.
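Because light transport is linear, any new lighting environment can be synthesized as a weighted sum of the one-light-at-a-time captures. A minimal sketch of this image-based relighting idea (the array shapes and the `relight` helper are illustrative, not Google's code):

```python
import numpy as np

def relight(olat_images, weights):
    """Synthesize a new lighting condition from one-light-at-a-time (OLAT) captures.

    olat_images: array of shape (num_lights, H, W, 3), one photo per LED.
    weights: per-light intensities of shape (num_lights,) describing the
             target lighting environment.
    Returns the relit image as a weighted sum over the lights axis.
    """
    olat = np.asarray(olat_images, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    return np.tensordot(w, olat, axes=(0, 0))

# Toy example: 331 lights, tiny 4x4 "images" standing in for real captures.
rng = np.random.default_rng(0)
olat = rng.random((331, 4, 4, 3))
weights = np.zeros(331)
weights[5] = 1.0  # turning on only light 5 reproduces its OLAT photo exactly
relit = relight(olat, weights)
```

Blending many non-zero weights in this way is how a light stage can simulate arbitrary environments from a fixed set of captures.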

The result of individual photographs with only one light source at a time.

Processing is performed on a low-resolution version of the image: the light source is applied there, and once the calculations to determine its effect are done, the result is oversampled onto the high-resolution image. This optimizes the data analysis and reduces the performance impact. The analysis also accounts for differences in light reflection depending on whether the material is matte or glossy, making the artificial light more realistic. Optimization brought the complete model to within 10 MB, making it usable on devices such as the Google Pixel.
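The compute-at-low-resolution-then-upsample strategy can be sketched as follows. This is a generic illustration: the uniform `light_gain` stands in for the per-pixel lighting layer that Google's neural network would predict, and the nearest-neighbor resizing is an assumption chosen for simplicity:

```python
import numpy as np

def downsample(img, factor):
    """Naive box downsampling of an (H, W, C) image by an integer factor."""
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbor upsampling by an integer factor."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def apply_light_lowres(image, light_gain, factor=4):
    """Compute a lighting adjustment at low resolution, then upsample it.

    light_gain: stand-in for the predicted light layer (here a single
    multiplicative gain applied uniformly, which is an assumption).
    """
    low = downsample(image, factor)
    adjustment = low * (light_gain - 1.0)           # cheap low-res computation
    full_adjustment = upsample(adjustment, factor)  # oversample to full size
    return np.clip(image + full_adjustment, 0.0, 1.0)

img = np.full((8, 8, 3), 0.5)  # toy 8x8 mid-gray image with values in [0, 1]
relit_img = apply_light_lowres(img, light_gain=1.2)
```

Running the expensive step on a 4×-smaller image cuts the pixel count by 16×, which is the kind of saving that lets the pipeline fit on a phone.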