# Longread: Apple and its future in the world of mobile photography


When the first iPhone appeared ten years ago, it could only take photographs and could not record video. Since then, Apple has expanded the device's mobile photography capabilities so rapidly that the iPhone became known not only for its stylish design and performance, but also for producing images good enough for giant billboards and even feature films. With the release of iOS 11 and the introduction of the new Depth API, the photo and video capabilities of the device have reached a whole new level. Today we will try to imagine the future of photography through the lens of the new features that debuted in the iPhone 8 Plus.

Billboards created from photos shot on iPhone. Berlin, Germany

Throughout Apple's history, and particularly over the last ten years, the iPhone has introduced a whole range of new and emerging technologies to the consumer market. It should be noted that many of these ideas were invented and demonstrated before Apple adopted them: take multi-touch displays, electronic gyroscopes, accelerometers, and depth-sensing cameras.

Apple has rarely been the first company to launch a given technology, but it has often been the first to succeed with it in the mass market. This is largely because Apple has always aimed to make new technologies as efficient and practical to use as possible before actually bringing them to market.

Here's an example: Jeff Han, one of the first developers to demonstrate multi-touch display technology, could not bring it to market in millions of mobile devices, while Apple, with its iPhone, could. In the meantime, Han did develop the idea into huge screens for conference rooms. But again, the market reach turned out to be nothing like what Apple achieved, and this despite the fact that the idea was later bought up by Microsoft itself and eventually turned into the giant Surface Hub tablet computer priced at $9,000.

Even Han himself noted after the 2007 presentation: "The iPhone is simply incredible, and I have always said that if there is one company in the world that can bring this technology to the consumer market, it's Apple."

Similarly, Apple brought its latest addition to this market: depth-sensing camera technology, first with last year's iPhone 7 Plus, and now with the iPhone 8 Plus and the upcoming iPhone X. The industry had already run plenty of experiments with such products, ranging in size from the living-room-scale Xbox Kinect down to more portable devices like the Occipital Structure Sensor or Google's Project Tango.

But instead of putting this technology into yet another bizarre controller or some experimental niche product, Apple decided that the best home for depth-sensing cameras would be the devices that tens of millions of people use every day. In essence, Apple decided to once again change our idea of how to use mobile devices, from authentication to communication to taking photos and recording video.

Today we'll talk about how Apple has developed its depth-sensing camera technology over the past year, and what we can expect from it in the future.

## iPhone 7 Plus: two cameras, two levels of depth perception

Last year's iPhone 7 Plus offered us two cameras that could operate independently. In retrospect, this feature (an exclusive of the Plus model) not only attracted more buyers to the larger, more expensive Plus model (which earned the company more money), but also let Apple bring the technology to the masses much faster than would have been possible had the company shipped only one model per year, forced to compromise and drop features that would not fit into a single device.

The earlier iPhone 6/6s Plus models also offered some exclusive camera features that distinguished them from the standard iPhone models, such as optical image stabilization (OIS). OIS works by placing very precise micromotors around the camera lens; these shift the lens's position extremely quickly based on data coming from the motion sensors, compensating for hand shake while shooting.

OIS is especially useful for getting better pictures in low light, when the camera shutter needs to stay open as long as possible and the shot becomes very sensitive to movement. Nevertheless, OIS itself is invisible: it cannot be touched or demonstrated the way, for example, the iPhone 7 Plus's zoom can.

The subsequent move to a dual camera in the iPhone 7 Plus only increased the potential for high-quality shooting on mobile devices. It consists of two lenses: one with a standard viewing angle, the other operating at 2x zoom, which allows closer shots of a subject without loss of image quality, whether for ordinary photos, video, slow-motion video, or even panoramas. The two lenses have independent camera sensors, but they can work together: this allows smooth, gradual zooming that hands off from the standard sensor to the telephoto one when the user dials in the appropriate zoom level.
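For developers, iOS exposes this pairing through AVFoundation's dual-camera device type. Below is a minimal, illustrative sketch (assumed setup code, not Apple's internal implementation) of opting into the dual camera and ramping to the 2x zoom; the handoff between the two sensors is handled by the system.

```swift
import AVFoundation

// Illustrative sketch: select the iPhone 7/8 Plus dual camera and ramp the
// zoom to 2x. Up to that factor the telephoto lens covers the field of view
// optically, so the result is not just a digital crop. Error handling is
// reduced to returning nil for brevity.
func makeDualCameraSession() -> AVCaptureSession? {
    guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                               for: .video,
                                               position: .back),
          let input = try? AVCaptureDeviceInput(device: device) else {
        return nil // no dual camera on this device
    }

    let session = AVCaptureSession()
    session.sessionPreset = .photo
    guard session.canAddInput(input) else { return nil }
    session.addInput(input)

    do {
        // The system decides when to hand off from the wide-angle sensor to
        // the telephoto one as the zoom factor changes.
        try device.lockForConfiguration()
        device.ramp(toVideoZoomFactor: 2.0, withRate: 4.0)
        device.unlockForConfiguration()
    } catch {
        return nil
    }
    return session
}
```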

At the time of the announcement, Apple also casually mentioned a feature that would arrive in a future update: sharply focused subjects set against a bokeh effect. In other words, the feature blurs the background and directs all attention to the subject in the foreground. Naturally, this innovation was a real hit: the children, friends, and pets in the frame now became the true stars of every photo. And unlike programs with comical filters and effects such as Photo Booth and Snapchat, the Portrait mode in iOS 10 offered realistic, high-quality images with an added "dramatic focus", which, among other things, made the person taking these photos feel more confident in their skills.

The company did not discuss the implementation details of Portrait mode until this summer's WWDC 2017, where it stated that this feature is far from the only trick the iPhone 7 Plus's dual camera can do. At the presentation, Apple explained that Portrait mode works because both lenses are configured to produce images equivalent to a 2x zoom, with each lens simply seeing the subject from a slightly different angle.

Through differential processing of scene depth, that is, scanning for the difference in a point's position between the two images to determine whether an object is closer to the camera or farther back, the two images can be used to build a depth map: a kind of three-dimensional topographic layer of metadata that is bundled with the original photograph. Portrait mode, in turn, uses this data to determine which parts of the image are farther from the lens, and applies the blur effect to those areas.

A photo taken with the iPhone 7 Plus, and the depth map of the same image in grayscale
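To make the disparity idea concrete, here is the textbook stereo relation behind such a depth map as a toy Swift function. The focal length, baseline, and disparity figures are illustrative assumptions, not Apple's actual values.

```swift
// Toy version of the stereo geometry behind a depth map. For two parallel
// pinhole cameras, distance Z follows from disparity d (how far a point
// shifts between the two images): Z = f * B / d, where f is the focal length
// in pixels and B is the baseline between the lenses. The real pipeline also
// rectifies, matches, and fills holes; this is only the core relation.
func depthMeters(focalPixels f: Double,
                 baselineMeters b: Double,
                 disparityPixels d: Double) -> Double {
    precondition(d > 0, "zero disparity would mean the point is at infinity")
    return f * b / d
}

// Illustrative numbers only: a nearby subject shifts a lot between the two
// lenses, while the distant background barely moves.
let subject = depthMeters(focalPixels: 3000, baselineMeters: 0.01, disparityPixels: 30) // 1.0 m
let wall    = depthMeters(focalPixels: 3000, baselineMeters: 0.01, disparityPixels: 6)  // 5.0 m
```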

Now, with the release of iOS 11, anyone can enjoy the Depth API that ships with it and, after shooting in Portrait mode, manipulate individual layers of the image: for example, making the background completely colorless while leaving only the central subject in color.
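Here is a minimal sketch of that "color pop" effect using Core Image's depth support in iOS 11. The `photoURL` is a placeholder for a Portrait-mode photo that carries embedded disparity data, and a production version would normalize the disparity values before using them as a mask.

```swift
import Foundation
import CoreImage
import CoreGraphics

// Illustrative "color pop" sketch: keep the nearby subject in color and turn
// the background grayscale, using the disparity map that iOS 11 stores inside
// a Portrait-mode photo.
func colorPop(photoURL: URL) -> CIImage? {
    guard let image = CIImage(contentsOf: photoURL),
          let disparity = CIImage(contentsOf: photoURL,
                                  options: [.auxiliaryDisparity: true]) else {
        return nil // not a photo with embedded depth data
    }

    // The disparity map is far smaller than the photo; scale it to match.
    let sx = image.extent.width / disparity.extent.width
    let sy = image.extent.height / disparity.extent.height
    let mask = disparity.transformed(by: CGAffineTransform(scaleX: sx, y: sy))

    // A fully desaturated copy to use behind the subject.
    let gray = image.applyingFilter("CIColorControls",
                                    parameters: [kCIInputSaturationKey: 0.0])

    // Brighter disparity means closer to the lens: near areas keep their
    // color, far areas fall back to the grayscale copy.
    return image.applyingFilter("CIBlendWithMask",
                                parameters: [kCIInputBackgroundImageKey: gray,
                                             kCIInputMaskImageKey: mask])
}
```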

To be fair, almost any existing photo filter on the market can work with different layers. The bokeh effect of Portrait mode, then, may be only the tip of the iceberg of what the Depth technology has to offer.

Indeed, during the WWDC 2017 presentation, Apple's Etienne Guerard demonstrated how the depth map created with the iPhone's dual camera can be visualized as a 3D image, and how filters and other effects can be applied selectively to edit different parts of that image.

By developing and properly marketing one particular effect, a Portrait mode that works and looks genuinely good, so good that it gives owners the feeling of using a real SLR camera, the company was able to convince customers that this mode alone was a compelling reason to buy the iPhone 7 Plus.

Had Apple introduced the Plus's dual camera simply as an experimental tool offering a confusing pile of possibilities to figure out on your own, most potential owners would most likely have ignored it, dismissing it as yet another over-complicated innovation nobody needed.

## iPhone 8 Plus: Portrait Lighting and the A11 Bionic processor

With the release of iOS 11 on the iPhone 8 Plus and the iPhone X, work with scene depth has become even more detailed. The new A11 Bionic processor used in these devices has a dedicated image signal processor (ISP) that computes on the data coming off the camera sensors, while neural network technology lets the system recognize and analyze objects, faces, and bodies in motion within the scene being captured.
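Some of that scene analysis is available to third-party developers through the Vision framework introduced alongside iOS 11. The sketch below simply locates faces in a still image; which of these operations the A11's dedicated silicon accelerates is not publicly documented, so treat the performance story as an assumption.

```swift
import Foundation
import Vision
import CoreImage

// Illustrative sketch of iOS 11 scene analysis via the Vision framework:
// locate faces in a still image off the main thread.
func detectFaces(in photo: CIImage,
                 completion: @escaping ([VNFaceObservation]) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        // Each observation carries a normalized bounding box for one face.
        completion(request.results as? [VNFaceObservation] ?? [])
    }
    let handler = VNImageRequestHandler(ciImage: photo, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request]) // errors are simply ignored in this sketch
    }
}
```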

With this new computational power, the new Portrait Lighting feature not only applies bokeh to the image, but also intelligently relights the foreground of the photograph, imitating balanced studio lighting and separating individual elements of the human face even further, the line of the cheekbones, for instance, while again distancing them from the background.

With the new Portrait Lighting feature, even the most ordinary pictures turn into something more interesting

Again, as you can see, instead of offering an endless set of new tools for manipulating scene depth in photographs, Apple chose to focus on one of them, Portrait Lighting, and polish it to the point where the feature can actually prove useful in a wide variety of situations.

## Competitors

Dual cameras appeared in smartphones quite a while ago. Back in 2011 (when Apple had just introduced Siri with the iPhone 4s), HTC and LG showed stereoscopic camera phones that could shoot in 3D and play it back on screen without the need for special glasses. At the same time, the 3D HDTV boom had been underway for several months. But buyers soon realized they did not really need dizzying 3D effects, and 3D smartphones have not been produced since.

Years later, in 2014, HTC introduced the One M8. The device was equipped with two rear cameras that took paired pictures, combining them and calculating scene depth, and could add background blur or digitally refocus the image. Nevertheless, experts and the trade press noted that "the effect of having two cameras was more odd than an advantage over the competition."

Earlier this year, LG introduced a dual optical module in which one lens has a wide viewing angle and the second is ultra-wide, so that even more of the scene fits into the photo. However, their fixed focal depths do not allow using the two simultaneously. And while the wide-angle lens does have its advantages, zooming is still the more practical feature.

Huawei, meanwhile, recently partnered with Leica to develop a smartphone with a dual camera: one sensor shoots in color, the other captures monochrome detail. The manufacturer is thereby trying to imitate human vision in order to obtain more realistic images, though it is still unclear how much this approach really improves the resulting photographs.

## Pixel 2: one camera is better than two! At least in Google's view

Google's latest novelty, the Pixel 2 smartphone, also tries to mimic Apple's Portrait mode and create a bokeh effect, but using only one camera. CNET notes: "The camera uses Dual Pixel technology, splitting every pixel on the sensor in two, so each photo is effectively captured as two views, a right one and a left one, which are then merged into one."

Still, this design implies that one camera will produce a less accurate depth estimate than two cameras set a certain distance apart. Most likely, when shooting in portrait mode, the Google device detects the face and the parts of hair and body connected to it, and simply applies a blur effect to everything else in the foreground and background.
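To put that intuition into numbers: the strength of the stereo signal scales with the distance between the two viewpoints (the baseline). All figures in the sketch below are rough, assumed orders of magnitude, not published specifications for either device.

```swift
// Rough, assumed orders of magnitude: not published specs for either phone.
// The disparity available for depth estimation scales with the baseline B
// between viewpoints: d = f * B / Z.
let f = 3000.0               // assumed focal length, in pixels
let z = 2.0                  // subject two meters away

let dualCameraBaseline = 0.010   // two lenses roughly a centimeter apart
let dualPixelBaseline  = 0.0005  // two halves of a split pixel, sub-millimeter

let dualCameraDisparity = f * dualCameraBaseline / z // ~15 px: easy to measure
let dualPixelDisparity  = f * dualPixelBaseline  / z // ~0.75 px: sub-pixel signal

// A 15-pixel shift yields a usable depth map; a sub-pixel shift leaves far
// less information, which is why single-camera depth tends to be coarser.
```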

In the CNET article mentioned above, Mario Queiroz, who heads Google's hardware effort, says that "the camera has everything in order, and we didn't miss anything in it." However, as the journalists at AppleInsider point out, using only one camera and lens deprives the device of the optical zoom offered by the iPhone 7/8 Plus models. Nor is the lens wide-angle, as on the LG. And the novelty does not even appear to attempt more complex effects like the Portrait Lighting introduced in the iPhone 8 Plus.

Of course, we need to wait for head-to-head tests of the Pixel's and iPhone 7 Plus's portrait modes, but at the moment the only apparent reason Google chose a single rear camera is the desire to make the device cheaper to produce, sacrificing not only room for creative effects on still photos, but also the scene-depth processing that iOS 11 offers for video as well, along with zoom.

Beyond the dual cameras of the iPhone 7 Plus and iPhone 8 Plus, Apple has also embraced machine learning and computer vision technologies across the entire iOS 11 device line, as well as an even more advanced depth-sensing image capture technology that will debut in the iPhone X.
