So far, we have seen how we can leverage Core ML and, more generally, machine learning (ML) to better understand the physical world we live in (perceptual tasks). From a user interface design perspective, this allows us to reduce the friction between the user and the system. For example, if you can identify the user from a picture of their face, you can remove the explicit steps otherwise required for authentication, as demonstrated by Apple's Face ID feature, introduced on the iPhone X. With Core ML, devices can better serve us, rather than the other way around. This echoes a rule stated by developer Eric Raymond: a computer should never ask the user for any information that it can autodetect, copy, or deduce.
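To make the perceptual step concrete, here is a minimal sketch of detecting a face in a still image using Apple's Vision framework, which sits alongside Core ML on iOS. The `detectFaces` helper name is our own, and this illustrates face detection only; the secure identification that Face ID performs relies on dedicated depth-sensing hardware and OS services, not on code like this.

```swift
import Vision

// A minimal sketch: find face bounding boxes in a CGImage.
// Detection only -- this does not identify *whose* face it is.
func detectFaces(in image: CGImage) throws -> [VNFaceObservation] {
    // Request that locates face rectangles in the image.
    let request = VNDetectFaceRectanglesRequest()

    // Handler that runs Vision requests against a single image.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Results are normalized bounding boxes (0...1 coordinates).
    return request.results as? [VNFaceObservation] ?? []
}
```

A caller could use the returned observations to skip a manual sign-in step when a face is present, deferring the actual identity check to the system's authentication services.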
We can take this idea even further: given sufficient data, we can anticipate what the user is...