Apple and its future in the world of mobile photography. Part Two


In the previous article, we covered the scene depth scanning technology and portrait features of the iPhone 7 Plus, as well as the Portrait Lighting mode built into the iPhone 8 Plus camera and the upcoming iPhone X. In this part, we'll take a look at the more exclusive TrueDepth technology, implemented only in the iPhone X, as well as other features, including the machine learning technology available on all devices running iOS 11.

TrueDepth takes selfies to a new level

At WWDC '17 this summer, Apple introduced a new developer toolkit, the Depth API for iOS 11, for accessing and processing the layers of image data captured by the iPhone 7 Plus's dual camera. The company also hinted that it was developing an alternative technology capable of building an even more accurate depth map than the one the iPhone 7 Plus dual camera offers.
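For developers, the Depth API surfaces through AVFoundation. Below is a minimal Swift sketch, assuming an already configured AVCaptureSession with a depth-capable camera, of how an app opts in to depth delivery; it is an illustration of the idea, not Apple's reference code.

```swift
import AVFoundation

// A minimal sketch of opting in to depth data with the iOS 11 Depth API.
// Assumes the session's input device supports depth delivery (e.g. the
// iPhone 7 Plus dual camera); error handling is elided for brevity.
func makeDepthEnabledPhotoOutput(for session: AVCaptureSession) -> AVCapturePhotoOutput {
    let photoOutput = AVCapturePhotoOutput()
    if session.canAddOutput(photoOutput) {
        session.addOutput(photoOutput)
    }

    // Depth delivery can only be enabled after the output joins a session,
    // and only when the current camera actually supports it.
    if photoOutput.isDepthDataDeliverySupported {
        photoOutput.isDepthDataDeliveryEnabled = true
    }
    return photoOutput
}

// Each individual shot must also request depth alongside the image.
func makeDepthPhotoSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
    return settings
}
```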

One of the first applications of the new TrueDepth technology was Portrait Lighting, this time for the front camera, so that selfies of your loved ones can be made even more vivid by simulating the studio lighting effects previously available only with the iPhone 7 Plus's dual rear camera. The difference is that while a dual camera calculates depth maps from the disparity between two images taken by two offset lenses, TrueDepth reads a pattern of reflected invisible light projected in front of the camera, which in theory yields an even more faithful depth map and, accordingly, a more detailed shot.
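Whichever camera produced it, the depth layer reaches the app the same way. Here is a hedged sketch, assuming the capture configuration from the previous example, of pulling the depth map out of a finished capture; the delegate wiring around the callback is omitted.

```swift
import AVFoundation

// Reading the depth layer from a capture, whether it came from the dual
// rear camera or the TrueDepth front camera.
func handleCapturedPhoto(_ photo: AVCapturePhoto) {
    guard let depthData = photo.depthData else { return }

    // Dual-camera captures typically deliver disparity; converting to
    // 32-bit depth gives a map that portrait-style effects can work with.
    let depth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let depthMap: CVPixelBuffer = depth.depthDataMap
    print("Depth map:", CVPixelBufferGetWidth(depthMap), "x", CVPixelBufferGetHeight(depthMap))
}
```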

Animoji and TrueDepth

In addition to capturing spatial depth for Portrait mode and Portrait Lighting, much like the dual cameras of the iPhone 7 Plus and iPhone 8 Plus, the TrueDepth sensors on the iPhone X can track the movement of more than 50 facial muscles and use that data to animate a user avatar in the form of a creature's head. Apple calls these avatars Animoji. As a starting point, the company chose twelve of the most popular emoji and turned them into full 3D masks that can be used, for example, in iMessage conversations.
In addition, with the release of iOS 11, TrueDepth technology is open to third-party developers, who can create their own effects: app and game creators can build their own avatars and synchronize them with the user's facial expressions.
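The tool third-party developers actually get for this is ARKit's face tracking. The following Swift sketch, with a hypothetical updateAvatar rendering hook, shows the general shape of driving an avatar from the blend shapes the TrueDepth camera reports.

```swift
import ARKit

// A minimal sketch of the face tracking ARKit exposes to third parties on
// iPhone X class hardware; the avatar-updating function is hypothetical.
class FaceTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires the TrueDepth camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // Blend shapes report muscle activations from 0 to 1, e.g. how
            // open the jaw is; an Animoji-style app maps them onto a 3D rig.
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            updateAvatar(jawOpen: jawOpen) // hypothetical rendering hook
        }
    }

    func updateAvatar(jawOpen: Float) { /* drive the 3D mask here */ }
}
```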

To the disappointment of Android users, adapting these animated avatars to that platform will be somewhat harder, since the standard emoji from Google, Samsung, and other manufacturers are, to put it mildly, not entirely suited for it. No wonder even Google decided to abandon its strange emoji in the latest Android 8 "Oreo" release in favor of a more iOS-like style. Still, it will take some time for the feature to reach the masses.

Android simply cannot ship features as quickly as Apple can and already has. There is too much variation in both the hardware and the software, and too many manufacturers building on the platform. Even Google's own Pixel smartphones lag behind Apple here by a wide margin.

The latest Pixel 2, in the same way, tried to "copy" the portrait mode Apple introduced last year, as well as Live Photos, which are already two years old. Of course, there is no question of any analogue of Portrait Lighting here, let alone Animoji or the real scene-depth computation needed to support those features.

TrueDepth and Face ID

Thanks to TrueDepth, Apple introduced a new authentication feature, Face ID, as an alternative to Touch ID. Interestingly, critics and skeptics began complaining about how poorly Face ID supposedly works without actually having tested it. The truth, however, is that the new system offers an even more reliable way to evaluate the user's biometric data than the tiny Touch ID sensor.

Face ID does not simply "use your face to log in." The system is far more sophisticated than it may seem at first glance. You can still sign in with a regular passcode and change it whenever you like, but criminals are unlikely to be able to obtain the three-dimensional image of your face needed for authentication. At the iPhone X demo stand, an unlock attempt even by a registered user failed because the frame also contained a bystander's face.

"The aspect of distance is important here. Unlock the phone when someone else holds it, it's almost impossible, "commented the man who showed the device and at the same instant easily unlocked his arm at the usual distance. And as noted by journalists of Western AppleInsider, the process of unlocking with Face ID had even to be removed in the slo-mo, because the system works so quickly that it worked almost instantly in the hands of an authorized user.

Face ID, like Touch ID before it, simply offers an easier yet reliable way to skip the thrilling exercise of typing in your passcode to unlock the device, while making it harder to break into. In addition, if the device is lost, the user can remotely disable the system. So anyone who decides to attack the biometric system on a stolen phone will have very little time and opportunity to do so.
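From an app's point of view, both sensors sit behind the same LocalAuthentication framework. A minimal sketch, assuming a hypothetical notes-style app, of deferring to the system prompt with the passcode kept as a fallback:

```swift
import LocalAuthentication

// A sketch of how an app defers to the system biometric prompt; the same
// call covers both Touch ID and Face ID.
func unlockSensitiveContent(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // .deviceOwnerAuthentication falls back to the passcode if biometrics
    // fail or are disabled, mirroring the behavior described above.
    guard context.canEvaluatePolicy(.deviceOwnerAuthentication, error: &error) else {
        completion(false)
        return
    }
    context.evaluatePolicy(.deviceOwnerAuthentication,
                           localizedReason: "Unlock your notes") { success, _ in
        completion(success)
    }
}
```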

Biometric identification can be disabled throughout iOS. But it is worth noting that despite early cries from analysts and pundits that the biometric identification introduced with the iPhone 5s would make users' personal data more vulnerable, the statistics instead showed a serious decline in thefts of devices equipped with such systems. At the same time, it has raised the attention and concern of law enforcement agencies, for whom access to personal data of interest has become much harder thanks to the improved protection.

Despite the apparent complexity of the system, its proper implementation on iOS has shown that it really can be effective. The same cannot be said of Android, where the leading licensees, Samsung and HTC, were the first to introduce such systems and the first to fumble them; they then hastily switched to an awkward facial-recognition system and miscalculated again, as its accuracy and reliability were highly questionable.

With the unique, specially developed and optimized TrueDepth camera system in the iPhone X, Apple obtained a more effective tool that reduces the risk of false positives and makes bypassing biometric identification significantly harder. Of course, the cost of such a sensor array most likely deters most Android manufacturers from adopting it. Just recently, for example, Google noted that it expects a third of all Android smartphones sold to be devices priced under $100.

Despite the serious difference in price, Apple expects that more than a third of its future customers (and perhaps half) will buy its smartphone priced at $1,000 and up. There are a number of reasons users will want such a device, but most importantly, all of them will get TrueDepth technology, which means third-party developers will gain access to a huge user base of tens of millions of potential customers.

In addition to the difficulty of fitting fully functional 3D hardware into their smartphones, Android licensees face another problem that Apple solved long ago: the lack of a complete ecosystem and genuinely high-quality tools to show that hardware off.

As recent reports indicate, Chinese manufacturers have struggled to adapt the 3D sensors built by Qualcomm and Himax. It was also noted that "smartphone manufacturers will need more time to create the necessary ecosystem of firmware, software and applications required for 3D sensor modules to perform efficiently in functions such as fingerprint or touch control," leading to the conclusion that "such difficulties are becoming the main barrier to integrating 3D-sensing technologies into smartphones."

Android and the lack of depth

Attempts by third-party manufacturers to fill the market with 3D camera sensors for mobile devices never achieved mass adoption. Google has worked with PrimeSense technologies for several years as part of its Tango project, but it has so far failed to convince price-sensitive Android licensees to adopt the hardware the system requires.

As soon as Apple demonstrated its ARKit augmented reality (AR) toolkit, Google promptly renamed part of the Tango platform "ARCore," apparently trying to revive interest in its technology by riding the buzz around a competitor. But again, the lack of an installed base of Android devices supporting augmented reality features, and an even smaller base capable of in-depth analysis of data collected by a dual camera or any kind of depth sensor, ultimately led to nothing concrete.

In addition, the decentralized nature of Android itself not only causes problems such as fragmentation and lack of optimization, but also steers the platform toward ultra-cheap devices rather than the production hardware and specialized cameras required to process complex AR data collected by depth-sensing cameras. And instead of building truly powerful hardware, Google spent years promoting the idea that Android, Chrome, and Pixel are products that can take the lower price segment and prove their worth by connecting to powerful cloud services. That, in turn, fed the public perception that the only thing the company really wants is data about its users, while offering customers the best implementations of advanced technology is a secondary concern.

The performance of the Apple A11 Bionic processor compared with competing chips

According to analysts, Apple continues to widen its lead in device performance, which in turn means iOS devices are increasingly capable of working efficiently without a high-speed connection to cloud services. For example, biometric authentication runs entirely on-device, drastically reducing the risk of user data being intercepted by third parties.

A new level of depth: Vision and CoreML

The combination of enormous computing power with the ability to recognize objects, positions, movement, and even the faces of specific people in a photo lets Apple bring so-called computer vision to its devices: a technology that has already been used to process still photos and is now available during live shooting as well.

Of course, Depth is not the only image-processing technology in iOS. Alongside the new dual-camera features and the TrueDepth technology in the iPhone X, the release of iOS 11 brought a real machine learning stack that camera-equipped iOS 11 devices will use.

The new Vision framework in iOS 11 provides high-performance image analysis: using computer vision techniques, it recognizes faces and their features in the frame, and also simplifies scene alignment in photos and videos. Vision sits on top of the CoreML framework, which supplies the machine learning; CoreML's results can already be seen in Siri and the QuickType keyboard. We won't go into the technical details of each, but here are some of the capabilities their combined use opens up in third-party applications (a small code sketch follows the list):

  • real-time image recognition;
  • predictive text input;
  • pattern recognition;
  • sentiment analysis;
  • handwriting recognition;
  • search ranking;
  • image stylization;
  • face recognition;
  • voice identification;
  • music identification;
  • text summarization;
  • and more.
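As one concrete example from the list above, here is a minimal Vision face detection sketch in Swift, assuming a still UIImage with a CGImage backing and ignoring image orientation for brevity:

```swift
import Vision
import UIKit

// A minimal sketch of Vision's face detection on a still image.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        for face in faces {
            // Bounding boxes arrive in normalized coordinates (0...1).
            print("Face at:", face.boundingBox)
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```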

This summer, Apple noted that CoreML already runs up to six times faster than existing Android counterparts; in that comparison it was running on an iPhone 7. Apple has promised to make all of the new framework's capabilities available to third-party developers who want to improve their applications.
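For those third-party developers, the typical entry point is running a CoreML model through Vision. A sketch follows, where the FlowerClassifier model class is purely hypothetical; Xcode generates such a class for any .mlmodel file added to a project:

```swift
import Vision
import CoreML

// A sketch of running a CoreML image classifier through Vision. The
// "FlowerClassifier" model name is a hypothetical placeholder.
func classify(_ cgImage: CGImage) {
    guard let model = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Take the top-ranked label from the classifier's output.
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```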
