
Tech News - Mobile

204 Articles

Facebook released Hermes, an open source JavaScript engine to run React Native apps on Android

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday Facebook released a new JavaScript engine called Hermes under an open source MIT license. According to Facebook, the new engine speeds up start times for native Android apps built with the React Native framework.

https://twitter.com/reactnative/status/1149347916877901824

Facebook software engineer Marc Horowitz unveiled Hermes at the Chain React 2019 conference held yesterday in Portland, Oregon. Hermes is a new tool for developers, aimed primarily at improving app startup performance in the same way Facebook does for its own apps, and at making apps more efficient on low-end smartphones. The supposed advantage of Hermes is that developers can target all three mobile platforms with a single code base; but as with any cross-platform framework, there are trade-offs in terms of performance, security, and flexibility. Hermes is available on GitHub for all developers to use, and it also has its own Twitter account and home page.

In a demo, Horowitz showed that a React Native app with Hermes fully loaded in about half the time of the same app without Hermes, or about two seconds faster. Horowitz emphasized that Hermes cuts the APK size (the size of the app file) to half the 41MB of a stock React Native app, and removes a quarter of the app's memory usage. In other words, with Hermes developers can get users interacting with an app faster, with fewer obstacles like slow download times and the constraints caused by multiple apps sharing limited memory, especially on lower-end phones. And these are exactly the phones Facebook is aiming at with Hermes, rather than the high-end phones that well-paid developers typically carry themselves.

"As developers we tend to carry the latest flagship devices. Most users around the world don't," he said. "Commonly used Android devices have less memory and less storage than the newest phones and much less than a desktop. This is especially true outside of the United States. Mobile flash is also relatively slow, leading to high I/O latency."

It's not every day a new JavaScript engine is born, and while there are plenty of engines available for browsers, like Google's V8, Mozilla's SpiderMonkey, and Microsoft's Chakra, Horowitz notes that Hermes is not aimed at browsers or, for example, at server-side use the way Node.js is. "We're not trying to compete in the browser space or the server space. Hermes could in theory be for those kinds of use cases, that's never been our goal." The Register reports that Facebook has no plan to push Hermes beyond React Native to Node.js or to turn it into the foundation of a Facebook-branded browser, because it is optimized for mobile apps and wouldn't offer advantages over other engines in other usage scenarios.

Hermes tries to be efficient through bytecode precompilation rather than loading JavaScript and then parsing it. It employs ahead-of-time (AOT) compilation during the mobile app build process to allow for more extensive bytecode optimization; along similar lines, the Fuchsia Dart compiler for iOS is an AOT compiler. There are other ways to squeeze more performance out of JavaScript. The V8 engine, for example, offers a capability called custom snapshots, though this is more technically demanding than using Hermes. Hermes also abandons the just-in-time (JIT) compiler used by other JavaScript engines to compile frequently interpreted code into machine code; in the context of React Native, the JIT doesn't do much to ease mobile app workloads.
The reason Hermes exists, as per Facebook, is to make React Native better. "Hermes allows for more optimization on mobile since developers control the build stack," said a Facebook spokesperson in an email to The Register. "For example, we implemented bytecode precompilation to improve performance and developed more efficient garbage collection to reduce memory usage."

In a discussion on Hacker News, Microsoft developer Andrew Coates claims that internal testing of Hermes and React Native in conjunction with Microsoft Office for Android shows TTI (time to interactive) with Hermes at 1.1s, compared to 1.4s for V8, and a 21.5MB runtime memory impact, compared to 30MB with V8.

Hermes is mostly compatible with ES6 JavaScript. To keep the engine small, support for some language features is missing, such as with statements and local-mode eval(). Facebook's spokesperson also told The Register that the company plans to publish benchmark figures in the next week to support its performance claims.

Related News
- Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
- OpenID Foundation questions Apple's Sign In feature, says it has security and privacy risks
- Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more
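For developers who want to confirm at runtime which engine is executing their bundle, a commonly cited check (a minimal sketch, not part of Facebook's announcement) looks for the HermesInternal global that the Hermes engine exposes:

```tsx
// Minimal sketch: Hermes exposes a HermesInternal global object, so its
// presence (or absence) indicates which engine is running the JS bundle.
declare const global: { HermesInternal?: object };

const isHermes = (): boolean => global.HermesInternal != null;

console.log(isHermes() ? 'Running on Hermes' : 'Running on JSC or another engine');
```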


‘FaceTime Attention Correction’ in iOS 13 Beta 3 uses ARKit to fake eye contact

Bhagyashree R
04 Jul 2019
3 min read
On Tuesday, Apple released iOS 13 beta 3, which came with an interesting feature called FaceTime Attention Correction. The feature aims to fix a long-standing problem of maintaining eye contact on FaceTime calls with the help of augmented reality. Mike Rundle, an app designer, was the first to spot the feature while testing the latest iOS 13.

https://twitter.com/flyosity/status/1146136279647772673

Back in 2017, he predicted that this feature would become a reality in "years to come."

https://twitter.com/flyosity/status/1146136649883107328

While FaceTiming, users naturally tend to look at the person they are talking to instead of looking at the camera. As a result, to the person on the other side, it appears as if you are not maintaining eye contact. This feature, when enabled, adjusts your gaze so that it appears to be directed at the camera, helping you maintain eye contact while still letting you keep your eyes on the person you are talking to.

Many Twitter users speculated that the FaceTime Attention Correction feature is powered by Apple's ARKit framework. It creates a 3D face map and depth map of the user through the front-facing TrueDepth camera, determines where the eyes are, and adjusts them accordingly. The TrueDepth camera system is the same one used for Animoji, unlocking the phone, and the augmented reality features already seen in FaceTime.

https://twitter.com/schukin/status/1146359923158089728

To enable the feature, go to Settings > FaceTime after installing the iOS 13 developer beta 3. On Twitter, people also speculated that it is only available on iPhone XS, iPhone XS Max, and iPhone XR devices for now. It is unclear whether Apple plans to roll out the feature more broadly in the future, and it would be interesting to see whether it works when there are multiple people in the frame.

https://twitter.com/WSig/status/1146149222665900033

Users have mixed feelings about this feature. While some developers who tested it felt that it is a little creepy, others thought it is a remarkable solution to the eye contact problem. A Hacker News user expressed his concern: "I can't help but think all this image recognition/manipulation tech being silently applied is a tad creepy. IMHO going beyond things like automatic focus/white balance or colour adjustments, and identifying more specific things to modify, crosses the line from useful to creepy." Another Hacker News user said in support of the feature, "I fail to see how this is creepy (outside of potential uncanny valley issues in edge cases). There is a toggle to disable it, and this is something that most average non-savvy users would either want by default or wouldn't even notice happening (because the end result will look natural to most)."

Related News
- OpenID Foundation questions Apple's Sign In feature, says it has security and privacy risks
- Apple gets into chip development and self-driving autonomous tech business
- Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter


React Native 0.60 releases with accessibility improvements, AndroidX support, and more

Bhagyashree R
04 Jul 2019
4 min read
Yesterday, the team behind React Native announced the release of React Native 0.60. This release brings accessibility improvements, a new app screen, AndroidX support, CocoaPods in iOS by default, and more. Following are some of the updates introduced in React Native 0.60.

Accessibility improvements

This release ships with several improvements to accessibility APIs on both Android and iOS. As the new features directly use APIs provided by the underlying platform, they will integrate easily with native assistive technologies. Here are some of the accessibility updates in React Native 0.60:

- A number of missing roles have been added for various components.
- There's a new Accessibility States API for better web support in the future.
- AccessibilityInfo.announceForAccessibility is now supported on Android.
- Extended accessibility actions now include callbacks that deal with accessibility around user-defined actions.
- iOS accessibility flags and reduce motion are now supported on iOS.
- A clickable prop and an onClick callback have been added for invoking actions via keyboard navigation.

(A short sketch of these accessibility APIs appears at the end of this article.)

A new start screen

React Native 0.60 comes with a new app screen, which is more user-friendly. It shows useful instructions like editing App.js, links to the documentation, and how to start the debug menu, and it also aligns with the upcoming website redesign.

https://www.youtube.com/watch?v=ImlAqMZxveg

CocoaPods are now part of React Native's iOS project

React Native for iOS now comes with CocoaPods by default, an application-level dependency manager for Swift and Objective-C Cocoa projects. Developers are recommended to open the iOS platform code using the 'xcworkspace' file from now on. Additionally, the Pod specifications for the internal packages have been updated to make them compatible with the Xcode projects, which will help with troubleshooting and debugging.

Lean Core removals

In order to bring the React Native repository to a manageable state, the team started the Lean Core project. As part of this project, they extracted WebView and NetInfo into separate repositories, and with React Native 0.60 they have finished migrating them out of the React Native repository. Geolocation has also been extracted, based on community feedback about the new App Store policy.

Autolinking for iOS and Android

React Native libraries often consist of platform-specific or native code. The autolinking mechanism enables your project to discover and use this code. With this release, the React Native CLI team has made major improvements to autolinking. Developers using React Native before version 0.60 are advised to unlink native dependencies from a previous install.

Support for AndroidX (breaking change)

With this release, React Native has been migrated to AndroidX (the Android Extension library). As this is a breaking change, developers need to migrate all their native code and dependencies as well. The React Native community has come up with a temporary solution for this called "jetifier", an AndroidX transition tool in npm format with a react-native compatible style.

Many users are excited about the release and consider it the biggest React Native release yet.

https://twitter.com/cipriancaba/status/1146411606076792833

Other developers shared some tips for migrating to AndroidX, which is an open source project that maps the original support library API packages into the androidx namespace.
We can't use both AndroidX and the old support library together, which means "you are either all in or not in at all." Here's a piece of advice shared by a developer on Reddit: "Whilst you may be holding off on 0.60.0 until whatever dependency you need supports X you still need to make sure you have your dependency declarations pinned down good and proper, as dependencies around the react native world start switching over if you automatically grab a version with X when you are not ready your going to get fun errors when building, of course this should be a breaking change worthy of a major version number bump but you never know. Much safer to keep your versions pinned and have a googlePlayServicesVersion in your buildscript (and only use libraries that obey it)."

Considering this release has major breaking changes, others suggest waiting until 0.60.2 comes out. "After doing a few major updates, I would suggest waiting for this update to cool down. This has a lot of breaking changes, so I would wait for at least 0.60.2 to be sure that all the major requirements for third-party apps are fulfilled (AndroidX changes)," a developer commented on Reddit.

Along with these updates, the team and community have introduced a new tool named Upgrade Helper to make the upgrade process easier. To know more in detail, check out the official announcement.

Related News
- React Native VS Xamarin: Which is the better cross-platform mobile development framework?
- Keeping animations running at 60 FPS in a React Native app [Tutorial]
- React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]
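As a rough illustration of the accessibility APIs listed above (a hedged sketch, not code from the release notes), a React Native 0.60-era component might announce a change to screen readers and describe its role and state to assistive technologies like this:

```tsx
import React, { useState } from 'react';
import { AccessibilityInfo, Text, TouchableOpacity } from 'react-native';

// Sketch of the 0.60-era accessibility APIs: announceForAccessibility
// (newly supported on Android in 0.60), accessibilityRole, and the new
// Accessibility States API (accessibilityStates).
export function SaveButton() {
  const [saving, setSaving] = useState(false);

  const onPress = () => {
    setSaving(true);
    // Read a short announcement aloud via TalkBack/VoiceOver.
    AccessibilityInfo.announceForAccessibility('Saving document');
  };

  // accessibilityRole tells assistive tech this is a button; the
  // accessibilityStates array describes it as disabled while saving.
  return (
    <TouchableOpacity
      accessible={true}
      accessibilityRole="button"
      accessibilityStates={saving ? ['disabled'] : []}
      disabled={saving}
      onPress={onPress}
    >
      <Text>{saving ? 'Saving…' : 'Save'}</Text>
    </TouchableOpacity>
  );
}
```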


OpenID Foundation questions Apple’s Sign In feature, says it has security and privacy risks

Sugandha Lahoti
01 Jul 2019
3 min read
The OpenID Foundation has written an open letter to Apple arguing that the upcoming 'Sign in with Apple' feature bears strong similarities to OpenID Connect but falls short on privacy and security. 'Sign in with Apple' was launched at WWDC 2019 earlier this month. Users can simply use their Apple ID for authentication instead of using a social account, their email addresses, etc., and Apple protects users' privacy by providing developers with a unique random ID. However, the OpenID Foundation is questioning some of the decisions Apple made for Sign In with Apple.

The OpenID Foundation is a non-profit organization with members such as PayPal, Google, Microsoft, and more. It controls numerous universal sign-in platforms through its OpenID Connect platform. The letter states, "It appears Apple has largely adopted OpenID Connect for their Sign In with Apple implementation offering, or at least has intended to. However, there are differences between the two [that] are tracked in a document managed by the OIDF certification team. The current set of differences between OpenID Connect and Sign In with Apple reduces the places where users can use Sign In with Apple and exposes them to greater security and privacy risks. It also places an unnecessary burden on developers of both OpenID Connect and Sign In with Apple."

Issues with Sign in with Apple and differences with OpenID

The OpenID team has listed the differences between Apple's Sign in and OpenID Connect, identified by the OpenID Foundation's certification team and the identity community at large:

- No discovery document: developers have to read through Apple's docs to find out about endpoints, scopes, signing algorithms, authentication methods, etc.
- No UserInfo endpoint is provided, which means all of the claims about users have to be included in the (expiring and potentially large) id_token.
- It does not include different claims in the id_token based on the requested scopes.
- The token endpoint does not accept client_secret_basic as a client authentication method.
- Using unsupported or wrong parameters always results in the same message in the browser, "Your request could not be completed because of an error. Please try again later.", without any explanation of what happened, why it is an error, or how to fix it.
- Absence of PKCE (Proof Key for Code Exchange) in the Authorization Code grant type, which could leave people exposed to code injection and replay attacks.
- When using the sample app, adding openid as a scope leads to an error message; it works only with name and email as scope values.

The letter asks Apple to "address the gaps," use the OpenID Connect Self Certification Test Suite, state that Sign in with Apple is compatible with Relying Party software, and finally join the OpenID Foundation. You can read the full open letter here. Testing of Sign in with Apple will start later this summer, ahead of iOS 13's fall launch window.

Related News
- Apple showcases privacy innovations at WWDC 2019: Sign in with Apple, AdGuard Pro, and more
- WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and more
- Jony Ive, Apple's chief design officer departs after 27 years at Apple to form an independent design company
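To make the PKCE point concrete, here is a minimal sketch of the code_verifier/code_challenge pair that RFC 7636 adds to the Authorization Code flow and that the letter says Sign In with Apple omits. It uses Node's crypto module; the client_id, redirect_uri, and authorization endpoint are hypothetical placeholders, not Apple's actual values.

```tsx
import { createHash, randomBytes } from 'crypto';

// base64url encoding without padding, as required by RFC 7636.
const base64url = (buf: Buffer): string =>
  buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

// The verifier stays on the client; only its SHA-256 hash (the challenge)
// is sent with the authorization request.
const codeVerifier = base64url(randomBytes(32));
const codeChallenge = base64url(createHash('sha256').update(codeVerifier).digest());

// Hypothetical authorization request: an intercepted authorization code is
// useless without the verifier, which is only presented at the token endpoint.
const params = new URLSearchParams({
  response_type: 'code',
  client_id: 'example-client-id',
  redirect_uri: 'https://example.com/callback',
  scope: 'openid email',
  code_challenge: codeChallenge,
  code_challenge_method: 'S256',
});
console.log(`https://authorization-server.example/authorize?${params.toString()}`);
```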


Lyft announces Envoy Mobile, an iOS and Android client network library for mobile application networking

Sugandha Lahoti
19 Jun 2019
3 min read
Yesterday, Lyft released the initial OSS preview of Envoy Mobile, an iOS and Android client network library that brings Lyft's Envoy Proxy to mobile platforms.

https://twitter.com/mattklein123/status/1140998175722835974

Envoy Proxy was initially built at Lyft to solve the networking and observability issues inherent in large polyglot server-side microservice architectures. It soon gained wide public appreciation and is used by major public cloud providers, end-user companies, and infrastructure startups. Now, Envoy Proxy is coming to iOS and Android, providing an API and abstraction for mobile application networking. Envoy Mobile is currently at a very early stage of development. The initial release brings the following features:

- The ability to compile Envoy on both Android and iOS: Envoy Mobile uses intelligent protobuf code generation and an abstract transport to help both iOS and Android provide similar interfaces and ergonomics for consuming APIs.
- The ability to run Envoy on a thread within an application, utilizing it effectively as an in-process proxy server.
- Swift/Obj-C/Kotlin demo applications that use the exposed Swift/Obj-C/Kotlin "raw" APIs to interact with Envoy and make network calls.

Long term goals

Envoy Mobile initially provides Swift APIs for iOS and Kotlin APIs for Android, but depending on community interest the team will consider adding support for additional languages in the future. In the long term, they also plan to include the gRPC Server Reflection Protocol in a streaming reflection service API. This API will allow both Envoy and Envoy Mobile to fetch generic protobuf definitions from a central IDL service, which can then be used to implement annotation-driven networking via reflection. They also plan to bring xDS configuration to mobile clients, in the form of routing, authentication, failover, load balancing, and other policies driven by a global load balancing system. Envoy Mobile can also add cross-platform functionality when using strongly typed IDL APIs; some examples of annotations planned on the roadmap are caching, priority, streaming, marking an API as offline/deferred capable, and more.

Envoy Mobile is getting loads of appreciation from developers, with many happy that its development has been open sourced. A comment on Hacker News reads, "I really like how they're releasing this as effectively the first working proof of concept and committing to developing the rest entirely in the open - it's a great opportunity to see how a project of this scale plays out in real-time on GitHub."

https://twitter.com/omerlh/status/1141225499139682305
https://twitter.com/dinodaizovi/status/1141157828247347200

Currently the project is in a pre-release stage. Not all features are implemented, and it is not ready for production use. However, you can get started here. Also see the demo release and the roadmap, where Lyft plans to develop Envoy Mobile entirely in the open.

Related News
- Uber and Lyft drivers go on strike a day before Uber IPO roll-out
- Lyft introduces Amundsen, a data discovery and metadata engine for its researchers and data scientists
- Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race


Developers can now incorporate Unity features into native iOS and Android apps

Sugandha Lahoti
18 Jun 2019
2 min read
Yesterday, Unity announced that from Unity 2019.3.a2 onwards, Android and iOS developers will be able to incorporate Unity features into their apps and games. Developers will be able to integrate the Unity runtime components and their content (augmented reality, 3D/2D real-time rendering, 2D mini-games, and more) into a native platform project, using Unity as a library. "We know there are times when developers using native platform technologies (like Android/Java and iOS/Objective C) want to include features powered by Unity in their apps and games," said J.C. Cimetiere, senior technical product manager for mobile platforms, in a blog post.

How it works

The mobile app build process overall is still the same: Unity creates the iOS Xcode and Android Gradle projects. However, to enable this feature, the Unity team has modified the structure of the generated iOS Xcode and Android Gradle projects as follows:

- A library part – an iOS framework and Android Archive (AAR) file – that includes all source files and plugins
- A thin launcher part that includes app representation data and runs the library part

Unity has also released step-by-step instructions on how to integrate Unity as a library on iOS and Android, including basic sample projects. Currently, Unity as a Library supports full-screen rendering only; rendering on only a part of the screen is not supported. Loading more than one instance of the Unity runtime is also not supported, and developers need to adapt third-party plugins (native or managed) for them to work properly.

Unity hopes that this integration will boost AR marketing by helping brands and creative agencies easily insert AR directly into their native mobile apps.

Related News
- Unity Editor will now officially support Linux
- Unity has launched the 'Obstacle Tower Challenge' to test AI game players
- Obstacle Tower Environment 2.0: Unity announces Round 2 of its 'Obstacle Tower Challenge' to test AI game players

Android 8 forces FOSS apps to use Firebase for push notifications or label them as “using too much battery”

Vincy Davis
11 Jun 2019
6 min read
Recently, Google imposed background execution limits in Android 8.0 (API level 26) on what apps can do while running in the background. Under this change, Android 8 effectively forces developers to use Firebase for their push notifications, or otherwise tell the user that the app has misbehaved. Push notifications are needed by all messaging apps, such as Telegram-FOSS, riot.im, and other FOSS apps. The problem is that the Firebase Android client library is not open source, so FOSS apps cannot keep push notification features on Android 8 while also remaining 100% open source and avoiding being stigmatized as misbehaving.

Google's official reason for the limitation is to improve the user experience. It states that when many Android apps and services run simultaneously, they place a load on the system, and additional apps or services running in the background place further load on it, which can result in a poor user experience. For example, when a user is playing a game in one window while browsing the web in another and using a third app to play music, one of the apps could be shut down abruptly due to the load on the system.

What are the Background Service limitations?

Google has mentioned that under certain circumstances, a background app is placed on a temporary whitelist for several minutes. While an app is on the whitelist, it can launch services without limitation, and its background services are permitted to run. An app is placed on the whitelist when it handles a task that is visible to the user, such as:

- Handling a high-priority Firebase Cloud Messaging (FCM) message
- Receiving a broadcast, such as an SMS/MMS message
- Executing a PendingIntent from a notification
- Starting a VpnService before the VPN app promotes itself to the foreground

Prior to Android 8.0, the usual way to create a foreground service was to create a background service and then promote it to the foreground. From Android 8.0, the system no longer allows a background app to create a background service. This means that apps on Android are now effectively forced to use Google's proprietary service, Firebase, for push notifications. Since apps like Telegram-FOSS, riot.im, and other free and open source software apps cannot use the service, these apps are being reported to the user as 'using too much battery'.

Telegram-FOSS team has notified its users

The Telegram-FOSS team has notified its users that since they can't use "Google's push messaging in a FOSS app", the app will show a persistent notification to keep the background service running; otherwise users will not be notified about new messages. If the app set the notification to a lower priority (such as hiding it in the lower part of the notification screen), users would immediately get a system notification about Telegram "using battery", which is confusing and is the reason this is not the default. The Telegram-FOSS team has also claimed that "Despite Google's misleading warnings, there is no difference in battery usage between v4.6 in 'true background' and v4.9+ with notification."

This news has received varied reactions from users, with some being extremely critical of Google. A user on Reddit says, "Google is probably regretting that they made Android open source. They will fight tooth and nail to undo that." Another user on Hacker News adds, "Google is one of the most evil companies out there for a company that started out with don't be evil. The have some very smart people, some amazing tech, but unfortunately they have some very evil people working for them help bent on maintaining their advantage by any means necessary. Without using Google's push notifications, you are going to end up with something that works about 75% of the time. When this first started happening to me, I lost tons of time thinking it was a bug only to finally realize I needed to use Google's library to get reliability for what once worked."

Some users have pointed out that Apple has restricted push notifications for a long time, allowing apps to use nothing but APNS, to run nothing in the background, and not even to include GPL source code. Another user comments, "The difference is Apple has been the same from the beginning. There was no bait and switch. People who bought Apple products knew what Apple was and will be and what the terms were. With Google there is a bait and switch. They came to market defining themselves as the open alternative to Apple to get market share and developer interest, and now that they've achieved dominance the terms are changing. There's no surprise that there's going to be massive pushback (and probably antitrust implications)"

Another user suggested that it's better to opt for non-Android phones.

https://twitter.com/datenteiler/status/1137743892009406466

A few believe that Google is taking this measure to counter iOS phones in the market. A user on Hacker News says, "The competition in this case is Apples iOS, for which even HackerNews users love to harp over and over and over again how amazing it is and how little battery it uses because it doesn't allow apps to use anything but APNS, run anything in background or even include GPL source code. This is what's Android competing against - a completely locked down operating system which cannot deliver any kind of GPL code. And every time it allows more freedom to developers it's punished in the market by losing against iOS and mocked on this very website about how it allows their app developers to drain battery and access data. What exactly do you expect Google to do here?"

Seeing the backlash, Google may relax its Firebase licensing or change the rules about background apps in the future. For now though, FOSS apps will have to resort to guiding users to lower the priority of the resulting notification and the battery warning.

Related News
- SENSORID attack: Calibration fingerprinting that can easily trace your iOS and Android phones, study reveals
- Tor Browser 8.5, the first stable version for Android, is now available on Google Play Store!
- Introducing Minecraft Earth, Minecraft's AR-based game for Android and iOS users


Apple previews iOS 13: Sign in with Apple, dark mode, advanced photo and camera features, and an all-new Maps experience

Vincy Davis
04 Jun 2019
6 min read
Update: Five days after the release of iOS 13, Apple released its first update, iOS 13.1, on September 24th. It contains many features that were announced at the Worldwide Developers Conference (WWDC) but were not part of the iOS 13 release. The major updates include sharing ETA (estimated time of arrival) with contacts while using Maps, tweaked colors and designs for dynamic wallpapers and their availability on more devices. iOS 13.1 also has a new volume slider which shows icons for the type of device connected, such as AirPods or HomePod. It also brings many bug fixes to the iOS 13 release.

https://twitter.com/jaylyne0821/status/1176786286771724290

Update: More than three months after previewing iOS 13 at WWDC, Apple finally released iOS 13 on September 19th. Users can check out the many features of iOS 13 below. Many Apple users are excited about the release.

https://twitter.com/popootel/status/1174910893076836353
https://twitter.com/zerobbh/status/1174929742958477312

Update: On 30th July 2019, Apple released public betas of iOS 13 and macOS Catalina. However, these betas are extremely buggy; users may see severely reduced battery life and many broken apps, particularly on the iOS beta. On the upside, this is an invaluable preview, particularly for app developers and Apple enthusiasts.

At the ongoing Worldwide Developers Conference (WWDC) 2019, Apple has previewed iOS 13. It has distinct features like Sign In with Apple, Dark Mode, advanced photo and camera features, and an all-new Maps experience. iOS 13 will also support Xbox One and PS4 controllers, and a framework for iOS 13, BackgroundTasks, has also been released.

Sign In with Apple

This is a simple, fast, and more private way to sign into apps and websites. Instead of going through the lengthy process of using a social account or filling out forms, verifying email addresses, or choosing passwords, users can now use their Apple ID for authentication. Apple has also maintained that it will protect users' privacy by providing developers with a unique random ID. If developers ask for name and email address details, users will still have the option to keep their email addresses private and share a unique random email address instead. Sign In with Apple makes it easy for users to authenticate with Face ID or Touch ID and has two-factor authentication built in for an added layer of security. Also, Apple will not use Sign In with Apple to monitor users or their activity in apps.

Dark Mode

Providing a dramatic new look, iOS 13 has a Dark Mode: a dark color scheme that works system-wide and across all native apps. This gives users a better viewing experience, especially in low-light environments. It is also available to third-party developers to integrate into their own apps, and it can be scheduled to turn on automatically at a particular time or at sunset.

Advanced photo and camera features

In iOS 13, photos and videos are arranged more systematically, which makes browsing, discovering, and reliving favorite memories much easier. iOS 13 also supports auto-playing videos. New photo-editing tools make it easier to apply, adjust, and review edits at a glance, and most of them are also available for video editing, making it possible to rotate, crop, or apply filters within the Photos app. In the iOS 13 camera app, Portrait Lighting adjustments can be made: the light can be virtually moved closer to sharpen eyes and brighten and smooth facial features, or pushed farther away to create a subtle, refined look. A new High-Key Mono effect creates a beautiful monochromatic look for Portrait mode photos.

All-new Maps experience

Apple Maps in iOS 13 provides broader road coverage, better pedestrian data, more precise addresses, and more detailed land cover. The new map is available in select cities and states at present; it will roll out across the US by the end of 2019 and to more countries in 2020. A new Look Around feature, using the new base map and high-resolution 3D photography, delivers street-level imagery of a city with smooth and seamless transitions. iOS 13 also brings additional features to the Maps app, including collections to easily share favorite restaurants, travel destinations, or places to shop with friends, and more.

Support for Xbox One and PS4 controllers

iOS 13 and Apple TV will support Xbox One and PS4 controllers. This controller support arrives as Apple plans to launch the Apple Arcade game subscription service for iOS, Mac, and Apple TV.

Background Tasks

Apple has released a framework for background tasks. It supports iOS 13, along with UIKit for Mac 13.0 and tvOS 13.0, all in beta. The framework lets app content stay up to date and run tasks while the app is in the background.

Additional features in iOS 13

- AirPods: Siri will be able to read incoming messages as soon as they arrive, from Messages or any SiriKit-enabled messaging app.
- HomePod: It can distinguish the voices of anyone in the home to deliver personal requests, including messages, music, and more. Handoff lets users easily move music, podcasts, or a phone call to HomePod when they arrive home.
- Health: It offers ways to monitor hearing health and new ways to track, visualize, and predict a woman's menstrual cycle.
- Siri: Siri has a new, more natural voice. Siri Shortcuts now support suggested automations that provide personalized routines, like heading to work or going to the gym.
- Messages: It can automatically share a user's name and photo or a customized Memoji or Animoji, so that the user can be easily identified in the message thread.

These distinct features of iOS 13 have made users very excited about its release.

https://twitter.com/gregbarbosa/status/1135668685882974210
https://twitter.com/SuperSaf/status/1135606599362514947
https://twitter.com/sascha_p/status/1135600265741053952
https://twitter.com/ijustine/status/1135605052742152198

These are some of the features of iOS 13. For more details, head over to the Apple press release, and check out the WWDC 2019 highlights for all releases and updates announced during the conference.

Related News
- Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta
- Apple promotes app store principles & practices as good for developers and consumers following rising antitrust worthy allegations
- Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users


WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more

Sugandha Lahoti
04 Jun 2019
8 min read
Apple held its annual Worldwide Developers Conference, WWDC 2019, in San Jose on Monday, and to say that the keynote session was jammed with announcements would be an understatement. This time Apple really tried to innovate: it was not just the usual product hardware updates but also key features in the business and software space, along with a focus on privacy. New products and services showcased included the Mac Pro and Pro Display XDR, iOS 13 with a dark mode, macOS Catalina, and an operating system for the iPad called iPadOS. There was also a new privacy-focused sign-in, the SwiftUI framework, and Feedback Assistant. Needless to say, people were enthralled by the event and expressed their excitement on Twitter.

https://twitter.com/ShaneyBoy112/status/1135628489065996288
https://twitter.com/stroughtonsmith/status/1135653636145590273
https://twitter.com/nickchapsas/status/1135616253677191169

New enhancements to Apple's flagship operating systems

macOS Catalina in beta

Apple previewed a new version of the Mac operating system, macOS Catalina (macOS 10.15), a result of the expansion of the company's Marzipan program, now called Project Catalyst, which brings iOS apps to the Mac. With macOS Catalina, Apple is replacing iTunes with its popular entertainment apps: Apple Music, Apple Podcasts, and the Apple TV app. It also has a new Sidecar feature, which lets users extend their Mac desktop by using their iPad as a second display or as a high-precision input device across Mac apps. Catalina has new security features such as Find My and Approve to keep users better protected, and Voice Control lets users control their Mac entirely with their voice. It also has the Screen Time feature, giving users options such as monitoring usage, scheduling downtime, and setting limits for both apps and websites across all devices. macOS Catalina will be available this fall as a free software update for Macs introduced in mid-2012 or later. For more information, see our detailed coverage here.

iOS 13 beta

Apple previewed the latest update to its mobile operating system, iOS 13, at WWDC. It introduces dark mode, advanced photo and camera features, Sign in with Apple, and a new Maps experience. Sign in with Apple allows customers to simply use their Apple ID to authenticate instead of using a social account or filling out forms, verifying email addresses, or choosing passwords; it protects users' privacy by providing developers with a unique random ID. Siri has a new, more natural voice, and HomePod can now distinguish voices from anyone in the home to deliver personal requests, including messages, music, and more.

https://twitter.com/astralwave/status/1135602897821917184

The developer preview of iOS 13 is available to Apple Developer Program members starting yesterday, and a public beta program will be available to iOS users later this month. New software features will be available this fall as a free software update for iPhone 6s and later. Read more about iOS 13 features here.

Apple iPadOS

WWDC 2019 also saw the iPad get its own operating system, iPadOS. iPadOS builds on the same foundation as iOS, adding intuitive features specific to the iPad's large display. It has a Split View that allows iPad users to work with multiple files and documents from the same app simultaneously, and they can quickly view and switch between multiple apps in Slide Over. It also introduces mouse support for both USB and Bluetooth devices.

Source: Apple

Apple Pencil is now even more integrated into the iPad experience. Customers can mark up and send entire webpages, documents, or emails on iPad by swiping the Apple Pencil from the corner of the screen. The Files app also gains iCloud Drive support for folder sharing. Text editing on the iPad receives a major update with iPadOS: it is easier and faster to point with precision and speed, select text with just a swipe, and use new gestures to cut, copy, paste, and undo. iPadOS will be available this fall as a free software update for iPad Air 2 and later, all iPad Pro models, iPad 5th generation and later, and iPad mini 4 and later.

tvOS 13 and watchOS 6

Apple also announced tvOS 13 for Apple TV 4K. With tvOS 13, Apple TV 4K gains a new Home screen; multi-user support so customers can access their own TV shows, movies, music, and recommendations; support for Apple Arcade; expanded game controller support for Xbox One S and PlayStation DualShock 4; and new 4K HDR screen savers. tvOS 13 will be available this fall as a free software update for Apple TV 4K and Apple TV HD. Apple also previewed watchOS 6, offering users better health and fitness management such as cycle tracking and the Noise app, along with dynamic new watch faces, Activity Trends, and the App Store directly on Apple Watch. New software features will be available this fall as a free software update for Apple Watch Series 1 or later paired with iPhone 6s or later running iOS 13 or later.

Source: Apple

SwiftUI framework

Apple unveiled its SwiftUI framework at WWDC, offering a simple way for developers to build user interfaces across all Apple platforms using just one set of tools and APIs. It features a declarative Swift syntax that is easy to read and natural to write. Working with the new Xcode design tools, SwiftUI keeps code and design perfectly in sync. It also offers features such as automatic support for Dynamic Type, Dark Mode, localization, and accessibility. SwiftUI got developers quite excited, with many comparing it to React Native and Flutter.

https://twitter.com/chrismaddern/status/1135624920036184067
https://twitter.com/wilshipley/status/1135835228696600576
https://twitter.com/benjaminencz/status/1135797158807035904

Apple aims to protect user privacy with Apple Sign in

Apple has a new way to stop third-party sites and services from getting your information when you sign up for an app. The "Sign in with Apple" button introduced at WWDC can authenticate a user using Face ID on their iPhone without turning over any of their personal data to a third-party company. Often users depend on third-party sign-in that involves a social account, filling out forms, verifying email addresses, or choosing passwords; Sign in with Apple lets customers simply use their Apple ID to authenticate, and it protects users' privacy by providing developers with a unique random ID.

Other privacy updates to the App Store

- Apps intended for kids cannot include third-party advertising or analytics software and may not transmit data to third parties.
- HTML5 games distributed in apps may not provide access to real money gaming, lotteries, or charitable donations, and may not support digital commerce.
- VPN apps may not sell, use, or disclose to third parties any data for any purpose, and must commit to this in their privacy policy.
- Apps that compile information from any source that is not directly from the user or without the user's explicit consent, even public databases, are not permitted on the App Store.
- Apps must get consent for data collection, even if the data is considered anonymous at the time of or immediately following collection.

More privacy related announcements here.

New Mac Pro with Apple Pro Display XDR

At WWDC, Apple also announced its all-new redesigned Mac Pro, starting at $5,999. The design is a homage to Apple's classic "cheese grater" look, but it is far from a simple grater when it comes to features.

https://twitter.com/briantong/status/1135612966789820418

The Intel Xeon processor inside the new Mac Pro will have up to 28 cores, with up to 300W of power and heavy-duty cooling. System memory can be maxed out at 1.5TB, says Apple, with six-channel memory across 12 DIMM slots; the base configuration has 32GB of memory, Radeon Pro 580X graphics, and a 256GB SSD. With this Mac Pro, Apple is launching a custom expansion module, the MPX Module: a quad-wide PCIe card that fits two graphics cards, has its own dedicated heat sink, and has a custom Thunderbolt connector to hook into the Thunderbolt 3 backbone Apple built into the motherboard, delivering additional power and high-speed connectivity to components. The power supply of the new Mac Pro maxes out at 1.4kW, and three large fans sit at the front, just behind the new aluminum grille, blowing air across the system at a rate of 300 cubic feet per minute.

Alongside the new Mac Pro, Apple also introduced a matching 6K monitor, the 32-inch Pro Display XDR, whose starting price is $4,999. The Pro Display XDR, where XDR stands for extreme dynamic range, has P3 and 10-bit color with reference modes built in, as well as Apple's True Tone automatic color adjustment for ambient lighting. It is 40 percent larger than the iMac 5K display, has an anti-reflective coating, and comes in a matte option called nano-texture.

Source: Apple

Feedback Assistant for Developers

Bug Reporter has been replaced with Feedback Assistant, which is available on iPhone, iPad, Mac, and the web, making it easy for developers to submit effective bug reports and request enhancements to APIs and tools. When developers file a bug, they receive a Feedback ID to track it within the app or on the website. Other features include automatic on-device diagnostics, remote bug filing, detailed bug forms, and bug statuses.

For more coverage of Apple's special event keep watching this space. You can also stream the WWDC keynote from the San Jose Convention Center here.

Related News
- Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta
- Apple Pay will soon support NFC tags to trigger payments
- Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users
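Since several of the reactions above compare SwiftUI to React Native and Flutter, the analogy is easiest to see in code. The snippet below is deliberately not SwiftUI; it is a small, hypothetical React Native sketch of the declarative pattern those comparisons point to: the UI is described as a function of state, and updating the state re-renders that description.

```tsx
import React, { useState } from 'react';
import { Button, Text, View } from 'react-native';

// Declarative UI sketch: the view hierarchy below is a pure description of
// the current state. Calling setCount triggers a re-render, much like
// mutating an @State property invalidates a SwiftUI view's body.
export function Counter() {
  const [count, setCount] = useState(0);
  return (
    <View>
      <Text>Tapped {count} times</Text>
      <Button title="Tap me" onPress={() => setCount(count + 1)} />
    </View>
  );
}
```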


Samsung and AMD partner for low power and high performance mobile apps

Fatema Patrawala
04 Jun 2019
2 min read
Today, Samsung Electronics announced a multi-year strategic partnership with AMD for ultra-low-power, high-performance mobile graphics IP based on AMD Radeon graphics technologies. As part of the partnership, Samsung will license AMD graphics IP and will focus on advanced graphics technologies and solutions that are critical for enhancing innovation across mobile applications, including smartphones.

"As we prepare for disruptive changes in technology and discover new opportunities, our partnership with AMD will allow us to bring groundbreaking graphics products and solutions to market for tomorrow's mobile applications," said Inyup Kang, president of Samsung Electronics. "We look forward to working with AMD to accelerate innovations in mobile graphics technologies that will help take future mobile computing to the next level."

"Adoption of our Radeon graphics technologies across the PC, game console, cloud and HPC markets has grown significantly and we are thrilled to now partner with industry leader Samsung to accelerate graphics innovation in the mobile market," said Dr. Lisa Su, AMD president and CEO. "This strategic partnership will extend the reach of our high-performance Radeon graphics into the mobile market, significantly expanding the Radeon user base and development ecosystem."

Key terms of the partnership include:

- AMD will license custom graphics IP based on the recently announced, highly scalable RDNA graphics architecture to Samsung for use in mobile devices, including smartphones, and other products that complement AMD product offerings.
- Samsung will pay AMD technology license fees and royalties.

Read more on this news in Samsung's official announcement.

Related News
- Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies
- Samsung AI lab researchers present a system that can animate heads with one-shot learning
- Samsung opens its AI based Bixby voice assistant to third-party developers

As US-China tech cold war escalates, Google revokes Huawei’s Android support, allows only those covered under open source licensing

Sugandha Lahoti
20 May 2019
4 min read
Update: On Wednesday, according to a leaked memo received by the BBC, UK-based chip designer ARM told staff it must suspend business with Huawei. Also, BT Group Plc won't offer phones from Huawei when it starts Britain's first 5G mobile network next week, and a number of wireless operators are ditching Huawei's handsets.

On Monday, the U.S. Commerce Department granted a 90-day license for mobile phone companies and internet broadband providers to work with Huawei, allowing Google to send software updates to Huawei phones that use its Android operating system until August 19. As of 20th May, the U.S. government has temporarily eased some trade restrictions on Huawei to help the company's customers around the world; the Commerce Department will allow Huawei Technologies to purchase American-made goods in order to maintain existing networks and provide software updates to existing Huawei handsets.

According to a report by Reuters, Google has suspended all business with Huawei that requires the transfer of hardware, software, and technical services. Huawei will also be cut off from updates to Google's Android operating system: it will only be able to use the public version of Android, known as the Android Open Source Project (AOSP), and will have to create its own update mechanism for security patches. Future versions of Huawei smartphones that run on Android will also lose access to popular services, including the Google Play Store and the Gmail and YouTube apps, said Reuters. However, the impact is expected to be minimal in the Chinese market, considering most Google mobile apps are already banned in China. This also means that alternatives offered by domestic competitors such as Tencent and Baidu may see a rise in popularity.

https://twitter.com/asymco/status/1130397070181916672

Holders of current Huawei smartphones with Google apps, however, will continue to be able to use and download app updates provided by Google, a Google spokesperson told Reuters. They further added, "We are complying with the order and reviewing the implications. For users of our services, Google Play and the security protections from Google Play Protect will continue to function on existing Huawei devices."

https://twitter.com/Android/status/1130313848332988421

Per a Bloomberg report, chipmakers including Intel, Qualcomm, Xilinx, and Broadcom have told their employees they will not supply Huawei until further notice. This could also disrupt the businesses of American chip giants and slow down the rollout of critical 5G wireless networks worldwide, including in China. Last week the FCC voted unanimously to deny China Mobile's bid to provide US telecommunications services.

Huawei's suspension by Google comes after the Trump administration added the Chinese telecom giant to a trade blacklist last week. The Commerce Department said that adding Huawei Technologies and its 70 affiliates to this list means the company is banned from acquiring components and technology from US firms without government approval. President Donald Trump took this decision to "prevent American technology from being used by foreign-owned entities in ways that potentially undermine US national security or foreign policy interests", said US Secretary Wilbur Ross in a statement. The order signed by the President did not specify any country or company, but US officials have previously labeled Huawei a "threat" and actively lobbied allies not to use Huawei network equipment in next-generation 5G networks.
Huawei's ban has not been received well by the public, especially those with Huawei devices. This is a lose-lose situation for both companies: in the short term it hurts Huawei, in the long term it hurts Android. The news of the US ban did not sit well with Chinese citizens either. Per a report by Buzzfeed, people in China are calling for a boycott of Apple products. In February, Huawei was accused of stealing Apple's trade secrets. Per Buzzfeed, many people took to Weibo, China's popular social media platform, to speak out against Apple. "The functions in Huawei are comparable to Apple iPhones or even better. We have such a good smartphone alternative, why are we still using Apple?" commented one user. "I think Huawei's branding is amazing, it chops an apple into eight pieces," said another post, describing the company's spliced, red logo. On Twitter, people openly criticized Google's move as well as the US ban.

https://twitter.com/FearbySoftware/status/1130234526137966592
https://twitter.com/iainthomson/status/1130232015276535808

The U.S.-China cold war has escalated into a messy trade war. Now, China faces growing pressure to build its own smartphone operating system, design its own chips, develop its own semiconductor technology, and set its own technology standards.

https://twitter.com/tomwarren/status/1130229043272531968

Related News
- US blacklist China's telecom giant Huawei over threat to national security
- Elite US universities including MIT and Stanford break off partnerships with Huawei
- China's Huawei technologies accused of stealing Apple's trade secrets, reports The Information


Apple Pay will soon support NFC tags to trigger payments

Vincy Davis
14 May 2019
3 min read
At the beginning of this month, Apple's vice president of Apple Pay, Jennifer Bailey, announced a new NFC feature for Apple Pay: support for NFC stickers/tags that trigger a payment without needing an app installed. The announcement was made during the keynote address at the TRANSACT Conference in Las Vegas, which focused on global payment technology.

New iPhones will work with special NFC tags that trigger Apple Pay purchases when tapped. All you need to do is tap the NFC tag and confirm the purchase through Apple Pay (via Face ID or Touch ID), and the payment is done. This requires no separate app and is handled by Apple Pay along with the Wallet app. As per 9to5Mac, Apple is partnering with Bird scooters, the Bonobos clothing store, and PayByPhone parking meters in the initial round. Also, users will soon be able to sign up for loyalty cards within the Wallet app with a single tap, with no third party or setup required. According to NFC World, Dairy Queen, Panera Bread, Yogurtland, Jimmy John's, Dave & Busters, and Caribou Coffee are all planning to launch services later this year that will use NFC tags to let customers sign up for loyalty cards.

https://twitter.com/SteveMoser/status/1127949077432426496

This could be another step towards Apple's goal of replacing the wallet, and it should make instant, on-the-go purchases faster and easier. A user on Reddit commented, "From a user's point of view, this seems great. No need to wait for congested LTE to download an app in order to pay for a scooter or parking." Another user compared Apple Pay with QR codes, stating, "QR code requires at least one more step which is using the camera. Hopefully, Apple Pay will be just a single tap and confirm, which would be invoked automatically whenever the phone is near a point of sale. And since the NFC tags will have a predetermined, set payment amount associated with them, even biometrics shouldn't be necessary."

https://twitter.com/lgeffen/status/1128083948410744832

More details on this feature can be expected at the Apple Worldwide Developers Conference (WWDC) 2019 coming up in June.

Related News
- Apple's March Event: Apple changes gears to services, is now your bank, news source, gaming zone, and TV
- Spotify files an EU antitrust complaint against Apple; Apple says Spotify's aim is to make more money off others' work
- Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws


Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q, linguistically advanced Google lens and more

Fatema Patrawala
09 May 2019
11 min read
This year's Google I/O was meant to be big, and it didn't disappoint. There is a lot of news to talk about, as Google introduced and showcased new products, updates, features and functionality aimed at a better user experience. Google I/O kicked off yesterday and runs through Thursday, May 9 at the Shoreline Amphitheatre in Mountain View, California, with approximately 7,000 attendees from around the world.

"To organize the world's information and make it universally accessible and useful. We are moving from a company that helps you find answers to a company that helps you get things done. Our goal is to build a more helpful Google for everyone." Sundar Pichai, Google's CEO, opened his keynote with these statements. He listed a few recent tech advances and said, "We continue to believe that the biggest breakthroughs happen at the intersection of AI." He then went on to discuss how Google is confident it can do more AI without private data leaving your devices, and that the heart of the solution will be federated learning. Federated learning is a distributed machine learning approach that enables model training on a large corpus of decentralized data: mobile phones in different geographical locations collaboratively train a shared model without transferring any data that may contain personal information off the devices (a minimal sketch of the averaging step appears a little further down, just after the Android Q introduction).

While the keynote lasted nearly two hours, several notable announcements were made; they are covered in more detail below.

Google Search at Google I/O 2019

Google remains a search giant, and that is something it has not forgotten at Google I/O 2019. Search is about to become far more visually rich, however, thanks to an AR camera feature introduced directly into search results. An on-stage demonstration showed how a medical student could search for a muscle group and be presented, within mobile search results, with a 3D representation of the body part. Not only could it be manipulated within the search results, it could also be placed on the user's desk and viewed at real scale through the smartphone's screen.

Source: Google

Even larger objects, like an AR shark, can be dropped into the AR view straight from Search; the Google team showed this off as the shark virtually appeared live in front of the audience.

Google Lens bill splitting and food recommendations

Google Lens was another part of Google's app arsenal that caught the audience's interest. Lens uses image recognition to deliver information based on what your camera is looking at. A demo showed how a combination of mapping data and image recognition lets Google Lens make recommendations from a restaurant's menu, just by pointing your camera at it. And when the bill arrives, point your camera at the receipt and it will show tipping information and help split the bill. Google also announced a partnership with recipe providers to allow Lens to surface video tutorials when your phone is pointed at a written recipe.

Source: Google

Debut of Android Q beta 3 version

At Google I/O, Android Q beta 3 was introduced. Android Q is the 10th generation of the Android operating system, and it comes with new features for phone and tablet users.
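Before the Android Q feature rundown, here is the minimal federated-averaging sketch promised above: each client trains locally and reports only weight updates, and a server combines them, weighted by how many examples each client saw. This is a simplification under stated assumptions, not Google's implementation, and every name in it is hypothetical.

```kotlin
// Minimal sketch of the federated averaging step. Raw training data never leaves
// the clients; only model weights and an example count are reported back.
data class ClientUpdate(val weights: DoubleArray, val numExamples: Int)

fun federatedAverage(updates: List<ClientUpdate>): DoubleArray {
    require(updates.isNotEmpty()) { "need at least one client update" }
    val dim = updates.first().weights.size
    val totalExamples = updates.sumOf { it.numExamples }.toDouble()
    val global = DoubleArray(dim)
    for (update in updates) {
        val share = update.numExamples / totalExamples   // weight by local data size
        for (i in 0 until dim) {
            global[i] += share * update.weights[i]
        }
    }
    return global
}

fun main() {
    // Two hypothetical phones report locally trained weights.
    val phoneA = ClientUpdate(doubleArrayOf(0.2, 0.8), numExamples = 100)
    val phoneB = ClientUpdate(doubleArrayOf(0.4, 0.6), numExamples = 300)
    println(federatedAverage(listOf(phoneA, phoneB)).toList())  // [0.35, 0.65]
}
```

In a real deployment the updates would also be compressed and protected (for example with secure aggregation) and the process repeated over many rounds, but the weighted average above is the core idea.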
Google announced that there are now over 2.5 billion active Android devices, as the software extends to televisions, in-car systems and smart screens like the Google Home Hub. Google also discussed how Android will work with foldable devices, seamlessly tweaking its UI depending on the format and aspect ratio of the folding device.

Another new Android Q feature, live captioning, turns audio into on-screen text instantly. It is a system function triggered from the volume rocker menu. Captions can be tweaked for legibility, do not require an internet connection, and work on videos that have never been manually closed-captioned. Because it operates at the OS level, it works across all your apps.

Source: Google

The smart reply feature will now work across all messaging apps in Android Q, with the OS predicting your text. A Dark Theme, activated by battery saver or a quick settings tile, was introduced; lighting up fewer pixels on your phone saves battery life. Android Q will also double down on security and privacy features, such as a Maps incognito mode, reminders for location usage and sharing (for example, only while a delivery app is in use), and TLS 1.3 support for low-end devices. Security updates will roll out faster too, updating over the air without needing a device reboot. Alongside Android Q Beta 3, which launches today on 21 new devices, Google also announced that Android development is going Kotlin-first, making Kotlin, a statically typed programming language, the preferred language for writing Android apps.

Chrome to be more transparent in terms of cookie control

Google announced that it will update Chrome to give users more transparency about how sites use cookies, as well as simpler controls for cross-site cookies. Chrome will change how cookies work so that developers must explicitly specify which cookies are allowed to work across websites, and so could be used to track users. The mechanism is built on the web's SameSite cookie attribute, and the technical details are on web.dev (a small illustration of the attribute appears near the end of this article). In the coming months, Chrome will require developers to use this mechanism to access their cookies across sites. The change will enable users to clear all such cookies while leaving single-domain cookies unaffected, preserving user logins and settings. It will also let browsers provide clear information about which sites are setting these cookies, so users can make informed choices about how their data is used. The change also has a significant security benefit, protecting cookies from cross-site injection and data disclosure attacks like Spectre and CSRF by default. Google further announced that it will eventually limit cross-site cookies to HTTPS connections, adding an important privacy protection for users. Developers can start testing their sites to see how these changes affect behavior in the latest developer build of Chrome.

Google also announced Flutter for web, mobile and desktop, allowing web-based applications to be built with the Flutter framework. The core framework for mobile devices will be upgraded to Flutter 1.5, and desktop support remains an experimental project.

"We believe these changes will help improve user privacy and security on the web — but we know that it will take time.
We're committed to working with the web ecosystem to understand how Chrome can continue to support these positive use cases and to build a better web," say Ben Galbraith, Director of Chrome Product Management, and Justin Schuh, Director of Chrome Engineering.

Next generation Google Assistant

Google has been working hard to compress and streamline the AI that Google Assistant taps into from the cloud when it processes voice commands. Currently, every voice request has to run through three separate processing models to land on the correctly understood command, models which until now have taken up around 100GB of storage on Google's servers. That is about to change: Google has figured out how to shrink them down to roughly 500MB and put them on your device. This lowers the latency between your voice request and the task you want carried out; it is 10x faster, 'real time', according to Google.

Google also showed a demo in which a Google rep fired off a string of voice commands that required Google Assistant to access multiple apps, execute specific actions, and understand not only what the rep was saying, but what she actually meant. For example, she said, "Hey Google, what's the weather today? What about tomorrow? Show me John Legend on Twitter; get a Lyft ride to my hotel; turn the flashlight on; turn it off; take a selfie." Assistant executed the whole sequence flawlessly, in a span of about 15 seconds.

Source: Google

Further demos showed off its ability to compose texts and emails drawing on information about the user's travel plans, traffic conditions, and photos. And last but not least, it can silence your alarms and timers when you simply say 'Stop', to help you go back to your slumber.

Google Duplex gets smarter

Google Duplex is a Google Assistant service that previously made calls and bookings on your behalf, based on your requests. It is now getting smarter with the new 'Duplex on the web' feature: you can ask Google Duplex to plan a trip, and it will begin filling in website forms such as reservation details, hire-car bookings and more on your behalf, waiting only for you to confirm the details it has entered.

Google Home Hub is dead, long live the Nest Hub Max

At Google I/O, the company announced it is dropping the Google Home moniker and rebranding its devices under the Nest name, bringing them in line with its security systems. The Nest Hub Max was introduced, with a camera and a larger 10-inch display. With a built-in, wide-angle (127 degree) Nest Cam security camera, which the original Home Hub omitted due to privacy concerns, it is now a far more security-focused device. It also lets you make video calls using a wide range of video-calling apps. For the privacy-conscious, the cameras and mics can be physically switched off with a slider that cuts off the electronics.

Source: Google

Voice Match and Face Match features, allowing families to create voice and face models, will let the Hub Max show only an individual's own information and recommendations. It will also double as a kitchen TV if you have access to a YouTube TV plan, and lowering the volume is as simple as raising your hand in front of the display. It launches this summer for $229 in the US and AU$349 in Australia, while the original Hub gets a price cut to $129 / AU$199.

Other honorable mentions

Google Stadia: Google had introduced its new game-streaming service, called Stadia, in March.
The service uses Google's own servers to store and run games, which you can then connect to and play on practically any screen in your house, including your desktop, laptop, TV, phone and tablet; if it is internet-connected and has access to Chrome, it can run Stadia. At I/O, Google announced that Stadia will stream games from the cloud not only to the Chrome browser but also to the Chromecast and to Pixel and other Android devices. The launch is planned for later this year in the US, Canada, the UK, and Europe.

A cheaper Pixel phone: While other smartphones are getting more competitive on pricing, Google introduced the new Pixel 3a, which is less powerful than the existing Pixel 3 and starts at $399, roughly half the price of the Pixel 3. In 2017, Forbes analyzed why the Google Pixel struggled in the market, and one of the reasons was its exorbitantly high price: the analysis argued that Google's brand in the phone hardware business is simply not worth as much as Samsung's or Apple's, so it cannot command the same price premium.

Source: Google

"Focus mode": A new feature coming to Android P and Q devices this summer will let you turn off your most distracting apps to focus on a task, while still allowing text messages, calls, and other important notifications through.

Augmented reality in Google Maps: AR is one of those technologies that always seems to impress the tech companies that make it more than it impresses their actual users. But Google may finally be finding some practical uses for it, like overlaying walking directions when you hold up your phone's camera to the street in front of you.

Incognito mode for Google Maps: Google also announced a new "incognito" mode for Google Maps, which stops keeping records of your whereabouts while it is enabled. The feature will later be rolled out to Google Search and YouTube as well.

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop

You can now permanently delete your location history, and web and app activity data on Google

Google's Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says
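Following up on the Chrome cookie change described earlier in this article, here is a small, illustrative Kotlin sketch of what the SameSite requirement looks like at the HTTP header level. The header strings and cookie names are hypothetical; this is a sketch of the attribute's semantics, not Chrome's implementation.

```kotlin
// Sketch: a cookie meant to be sent in cross-site contexts must now say so
// explicitly (SameSite=None) and must be Secure; cookies that omit the attribute
// are treated by the new Chrome behavior as same-site (Lax) only.
fun setCookieHeader(name: String, value: String, crossSite: Boolean): String =
    if (crossSite) {
        "Set-Cookie: $name=$value; SameSite=None; Secure"
    } else {
        // First-party cookie, restricted to its own site.
        "Set-Cookie: $name=$value; SameSite=Lax"
    }

fun main() {
    println(setCookieHeader("session", "abc123", crossSite = false))
    println(setCookieHeader("ad_tracker", "xyz789", crossSite = true))
}
```

The practical consequence is that cross-site cookies have to opt in explicitly and be served over HTTPS, which is what lets the browser surface and clear them separately from ordinary single-domain cookies.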

You can now permanently delete your location history, and web and app activity data on Google

Sugandha Lahoti
03 May 2019
4 min read
Google keeps track of almost everything you do online, including the websites you visit, the ads you see, the videos you watch, and the things you search for. Soon, this is (partially) going to change. On Wednesday, Google launched a new feature allowing users to delete all or part of their location history and web and app activity data automatically, rather than only manually. This has been a long-requested feature among internet users, and Google says it has heard feedback that it needs to provide simpler ways for users to manage or delete their data.

In the Q1 earnings shared by Google's parent company Alphabet, the company said that the EU's USD 1.49 billion fine on Google is one of the reasons its profit sagged in the first three months of this year. This was Google's third antitrust fine from the EU since 2017. In the Monday report, Alphabet said that profit in the first quarter fell 29 percent to USD 6.7 billion on revenue that climbed 17 percent to USD 36.3 billion.

"Without identifying you personally to advertisers or other third parties, we might use data that includes your searches and location, websites and apps you've used, videos and ads you've seen, and basic information you've given us, such as your age range and gender," the company explains on its Safety Center web page.

Google already allows you to turn off location history and web and app activity, and you can manually delete data generated from searches and other Google services. The new feature lets such information be removed automatically, based on how long you want your activity data to be kept:

- Keep until I delete manually
- Keep for 18 months, then delete automatically
- Keep for 3 months, then delete automatically

Based on the option chosen, any data older than the selected window will be deleted from your account automatically, on an ongoing basis. Notably, Google still does not offer an option along the lines of 'don't track me at all' or 'delete as soon as I close the website', which would give users stronger privacy guarantees.

Source: Google Blog

Privacy has not been one of Google's strong points in recent times. Last year, Google was caught in its 'incognito' location tracking scandal, in which it was found to track a person's location history even when they had turned the setting off. In November last year, Google came under scrutiny from the European Consumer Organisation (BEUC), which published a report stating that Google uses various methods to encourage users to enable the 'location history' and 'web and app activity' settings integrated into all Google user accounts, and alleging that Google uses these features to facilitate targeted advertising. "These practices are not compliant with GDPR, as Google lacks a valid legal ground for processing the data in question. In particular, the report shows that users' consent provided under these circumstances is not freely given," BEUC, speaking on behalf of the countries' consumer groups, said. Google was also found to be helping the police use its location database to catch potential crime suspects, sometimes sweeping up innocent people in the process, per a recent New York Times investigation.

The new feature will be rolled out in the coming weeks for location history and for web and app activity data. It is likely to be extended to other data history as well, but that has not been officially confirmed. To enable this privacy feature, visit your Google account activity controls.
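As a purely illustrative sketch of how an auto-delete retention window like the options above behaves, here is a minimal Kotlin example that drops records older than a chosen number of months on each sweep. The data class, the month-to-day approximation and all sample values are assumptions for illustration; Google's actual implementation is not public.

```kotlin
import java.time.Instant
import java.time.temporal.ChronoUnit

// Hypothetical activity record; "retentionMonths = null" models "keep until I delete manually".
data class ActivityRecord(val description: String, val recordedAt: Instant)

fun sweep(records: List<ActivityRecord>, retentionMonths: Long?): List<ActivityRecord> {
    if (retentionMonths == null) return records
    // Instant does not support calendar months directly, so approximate a month as 30 days.
    val cutoff = Instant.now().minus(retentionMonths * 30, ChronoUnit.DAYS)
    return records.filter { it.recordedAt.isAfter(cutoff) }
}

fun main() {
    val records = listOf(
        ActivityRecord("searched for flights", Instant.now().minus(200, ChronoUnit.DAYS)),
        ActivityRecord("watched a video", Instant.now().minus(10, ChronoUnit.DAYS))
    )
    // With a 3-month window, only the recent record survives the sweep.
    println(sweep(records, retentionMonths = 3L).map { it.description })
}
```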
European Consumer groups accuse Google of tracking its users' location, calls it a breach of GDPR

Google's incognito location tracking scandal could be the first real test of GDPR

Google's Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says


Android Studio 3.4 releases with Android Q Beta emulator, a new resource manager and more

Sugandha Lahoti
18 Apr 2019
2 min read
Yesterday, Google released Android Studio 3.4, the latest version of its integrated development environment (IDE); version 3.3 was released earlier this year. This release is a continuation of 'Project Marble', Google's initiative to improve Android Studio's features. Android Studio 3.4 has an updated Project Structure Dialog (PSD), replaces ProGuard with R8 as the default code shrinker and obfuscator, and supports the Android Q Beta and IntelliJ 2018.3.4.

New features in Android Studio 3.4

Project Structure Dialog: a new user-interface front end for managing Gradle project files. PSD lets developers see and add dependencies to their project at the module level, and it also displays build variables, suggestions to improve build file configuration, and more.

New Resource Manager: a new tool to visualize the drawables, colors, and layouts across your app project in a consolidated view. In addition to visualization, the panel supports drag-and-drop bulk asset import and bulk SVG-to-VectorDrawable conversion.

R8 replaces ProGuard: R8 is now used as the default code shrinker for new projects created with Android Studio 3.4. R8 code shrinking helps reduce the size of your APK by getting rid of unused code and resources, as well as making your remaining code take less space. In comparison to ProGuard, R8 combines the shrinking, desugaring and dexing operations into one step (see the build-file sketch at the end of this article).

Import intentions: Android Studio 3.4 now recognizes common classes in Jetpack and Firebase libraries and will suggest, via code intentions, adding the required import statement and library dependency to your Gradle project files.

Android Emulator skin updates and Android Q Beta emulator system image: users can now download Android Q Beta emulator system images for app testing on Android Q. Android Studio 3.4 also includes the latest Google Pixel 3 and Google Pixel 3 XL device skins.

Read more about this release on the Android Developers Blog. You can download the latest version of Android Studio 3.4 from the Android download page.

Android Studio 3.3 released with support for Navigation Editor, C++ code lint inspections, and more

Google announces the stable release of Android Jetpack Navigation

Android Q will reportedly give network carriers more control over network devices
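As referenced above, R8 runs whenever code shrinking is enabled in a module's build file; with Android Gradle Plugin 3.4 no separate R8 flag is needed. Here is a minimal, illustrative fragment of a module-level build.gradle.kts showing a release build type with shrinking turned on. The rules file and defaults shown are the conventional ones, and the rest of the build file is omitted.

```kotlin
// Fragment of a module-level build.gradle.kts (illustrative, not a complete file)
android {
    buildTypes {
        getByName("release") {
            // With Android Gradle Plugin 3.4+, enabling minification uses R8 by default.
            isMinifyEnabled = true
            // Also strip unused resources after R8 has removed unused code.
            isShrinkResources = true
            // Keep rules: the bundled optimize defaults plus any project-specific rules.
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}
```

The Groovy DSL equivalent uses minifyEnabled true and shrinkResources true inside the same release block; R8 reads existing ProGuard keep rules, so most projects migrate without changes.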