
Tech News - Mobile

204 Articles
DARPA plans to develop a communication platform similar to WhatsApp

Savia Lobo
16 Apr 2019
2 min read
Yesterday, the Defense Advanced Research Projects Agency (DARPA) announced that it is developing a new, highly secure communication platform similar to WhatsApp. The program is called 'Resilient Anonymous Communication for Everyone (RACE)'. RACE aims to build a distributed messaging system that can exist completely within a network; provide confidentiality, integrity, and availability of messaging; and preserve the privacy of every participant in the system.

The program description states that "compromised system data and associated networked communications should not be helpful for compromising any additional parts of the system." The program will also explore approaches to preserving privacy, such as secure multiparty computation and obfuscated communication protocols.

According to the program description on DARPA's official website, "The goal of the RACE program is to create a system capable of avoiding large-scale compromise." It continues: "RACE research efforts will explore: 1) preventing compromised information from being useful for identifying any of the system nodes because all such information is encrypted on the nodes at all times, even during computation; and 2) preventing communications compromise by virtue of obfuscating communication protocols."

For now, the team has not revealed full details of the project, but it will share updates as the program progresses. Visit DARPA's official website for more.

DARPA's $2 Billion 'AI Next' campaign includes a Next-Generation Nonsurgical Neurotechnology (N3) program
Katie Bouman unveils the first ever black hole image with her brilliant algorithm
Obfuscating Command and Control (C2) servers securely with Redirectors [Tutorial]
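The description name-checks secure multiparty computation, in which data stays encrypted (or hidden) even during computation. As a rough illustration of one of its classic building blocks, here is a minimal additive secret-sharing sketch in Python; the modulus, share counts, and function names are illustrative only and are not part of RACE.

```python
import secrets

P = 2**61 - 1  # a large prime modulus, chosen arbitrarily for illustration

def share(value, n):
    """Split `value` into n additive shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo P."""
    return sum(shares) % P

# Each party holds one share; the secret is recoverable only with all of them.
shares = share(42, 3)
assert reconstruct(shares) == 42

# Parties can compute on hidden values: summing shares pointwise
# yields shares of the sum, without anyone seeing the inputs.
a, b = share(10, 3), share(20, 3)
summed = [(x + y) % P for x, y in zip(a, b)]
assert reconstruct(summed) == 30
```

Real MPC protocols add authenticated shares and multiplication, but the privacy property sketched here, individual shares carrying no information, is the core idea.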

Apple plans to make notarization a default requirement in all future macOS updates

Sugandha Lahoti
09 Apr 2019
4 min read
In updated developer documentation released yesterday, Apple announced its plan to make notarization a default requirement for all software in the future. Starting with macOS 10.14.5, all new software distributed with a new Developer ID must be notarized in order to run. "Beginning in macOS 10.14.5, all new or updated kernel extensions and all software from developers new to distributing with Developer ID must be notarized in order to run. In a future version of macOS, notarization will be required by default for all software," writes Apple in a blog post.

What is notarization?

First introduced in macOS Mojave for apps distributed outside of the Mac App Store, Apple's notary service is an automated system that scans software for malicious content and checks for code-signing issues. Based on these checks, notarization generates a ticket and publishes it online, where Gatekeeper (Apple's flagship security feature) can find it and deem the software notarized. Gatekeeper then places descriptive information in the initial launch dialog to help the user make an informed choice about whether to launch the app.

macOS 10.14.5 requires new developers to notarize

Apple has encouraged Mac app developers to submit their apps to be notarized, and the Gatekeeper dialog has been streamlined to reassure users that an app is not known malware. For non-Mac App Store developers, Apple provides a Developer ID, which allows Gatekeeper to install non-Mac App Store apps without extra warnings. From macOS 10.14.5 onwards, however, all new software distributed with a new Developer ID will need to go through the notarization process in order to run on the Mac.

Apple notes that some preexisting software might not run properly even after being successfully notarized; for example, "Gatekeeper might find code signing issues that a relaxed notarization process didn't enforce." Apple recommends that developers always review the notary log for warnings and test their software before distribution. Developers will not need to rebuild or re-sign their software before submitting it for notarization, but they must use Xcode 10 to perform the notarization steps. More information on notarization can be found on Apple's developer site.

Some Hacker News users were unsure of what Apple means by "by default":

"[It] kind of makes it sound like all software will have to be notarized, which implies that you have to be an Apple Developer to distribute at all. But saying 'by default' makes it seem like there's some kind of option given to the user, so maybe it just means that software that's distributed by a registered Apple Developer but isn't notarized just moves down into the third tier of software that has to be explicitly allowed to run by the user."

"I interpret the 'by default' as meaning the exact same thing as 'Developer ID is required by default for Mac apps' today. Or in other words, I would assume that getting around a non-notarized app in the future would have the exact same sequence of steps as getting around a non-Developer ID-signed app today."

"I'd read the 'by default' as it being turned on system-wide and up to the user to override on a per-case basis. Of course, Apple's ideal model is that they want everything going through them. They're going to enable it 'by default' and if customers don't scream too much, they'll likely make it mandatory a release or two later."

Final release for macOS Mojave is here with new features, security changes and a privacy flaw
macOS gets RPCS3 and Dolphin using Gfx-portability, the Vulkan portability implementation for non-Rust apps
Swift 5 for Xcode 10.2 is here!

AndroidHardening Project renamed to GrapheneOS to reflect progress and expansion of the project

Natasha Mathur
29 Mar 2019
2 min read
The AndroidHardening project team announced yesterday that it has changed the project's name to GrapheneOS. Daniel Micay, a security researcher, shared the details about GrapheneOS on Twitter yesterday. Micay states that the name change reflects the significant progress of the AndroidHardening project and its growth into a broader, more sustainable project, with more developers joining soon.

GrapheneOS is a security- and privacy-focused mobile operating system that will now focus more on developing privacy and security improvements for the Android Open Source Project. In addition, it will include more standalone sub-projects, such as a hardened malloc implementation that can be easily ported to other operating systems, states Micay.

Examples of standalone sub-projects within GrapheneOS include the Auditor app and attestation service. Auditor is currently released for only a few selected Android devices. It can perform local verification with another Android device using a QR code, or via scheduled server-based verification. These standalone projects will be MIT licensed, like the hardened malloc implementation, and the attestation work will be made MIT licensed soon. Changes to the other existing projects will use their upstream licenses (e.g., Apache 2).

Micay states that although GrapheneOS is currently supported by some companies, there will still be a strong focus on maintaining distance from corporations, governments, and other outside interests. "Lots of care will be taken to avoid dependence / coercion. There's already much more diverse sources of support and collaboration," states Micay.

Once the project has expanded, support for more devices will be added with the help of Treble. Support for QubesOS as a first-class target is also planned and currently under way.
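Auditor's device-to-device verification is, at heart, a challenge-response protocol: the verifying device issues a fresh challenge, and the audited device proves possession of a key by answering it. The sketch below illustrates only that general idea; the real Auditor uses hardware-backed asymmetric key attestation with certificate chains, not a shared HMAC key, and every name here is hypothetical.

```python
import hashlib
import hmac
import secrets

# At pairing time the two devices establish trust in a key. In the real app
# this is a hardware-backed asymmetric key; a shared HMAC key keeps the
# sketch short.
device_key = secrets.token_bytes(32)

def respond(key, challenge):
    """Auditee proves possession of the key by MACing the fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    """Auditor recomputes the MAC and compares in constant time."""
    return hmac.compare_digest(respond(key, challenge), response)

challenge = secrets.token_bytes(16)  # fresh per audit, which prevents replay
response = respond(device_key, challenge)
assert verify(device_key, challenge, response)

# A response to an old challenge fails against a new one (replay rejected).
assert not verify(device_key, secrets.token_bytes(16), response)
```

The freshness of the challenge is what makes a captured response useless to an attacker, which is the same property the QR-code exchange provides.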
HTTP-over-QUIC will be officially renamed to HTTP/3
NIPS finally sheds its 'sexist' name for NeurIPS
Alibaba launches an AI chip company named 'Ping-Tou-Ge' to boost China's semiconductor industry

Google Podcasts is transcribing full podcast episodes for improving search results

Bhagyashree R
28 Mar 2019
2 min read
On Tuesday, Android Police reported that Google Podcasts is automatically transcribing episodes. It uses these transcripts as metadata to help users find the podcasts they want to listen to, even if they don't know the title or when an episode was published.

Though this is only coming to light now, Google shared its plan to use transcripts for improving search results even before the app launched. In an interview with Pacific Content, Zack Reneau-Wedeen, Google Podcasts product manager, said that Google could "transcribe the podcast and use that to understand more details about the podcast, including when they are discussing different topics in the episode."

This is not a user-facing feature; it works in the background. You can see the transcriptions in the page source of the Google Podcasts web portal. After getting a hint from a user, Android Police searched for "Corbin dabbing port" instead of Corbin Davenport, a writer for Android Police. Sure enough, the app's search engine showed Episode 312 of the Android Police Podcast, his podcast, as the top result (source: Android Police).

The transcription is powered by Google's Cloud Speech-to-Text technology. With transcriptions of such a huge number of podcasts, Google can include timestamps, index the contents, and make the text easily searchable. It also allows Google to "understand" what is being discussed in a podcast without relying solely on the often sparse notes and descriptions provided by podcasters. This could prove quite helpful when users don't remember much about a show beyond a quote or an interesting subject, making search frictionless.

As a user-facing feature, this could benefit both listeners and creators. "It would be great if they would surface this as feature/benefit to both the creator and the listener. It would be amazing to be able to timestamp, tag, clip, collect and share all the amazing moments I've found in podcasts over the years," said a Twitter user.

Read the full story on Android Police.

Google announces the general availability of AMP for email, faces serious backlash from users
European Union fined Google 1.49 billion euros for antitrust violations in online advertising
Google announces Stadia, a cloud-based game streaming service, at GDC 2019
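The search behavior described above, timestamps plus searchable text, can be approximated with a simple inverted index over transcript segments. This is a hedged sketch of the idea, not Google's implementation; the episode data and function names are made up for illustration.

```python
from collections import defaultdict

def build_index(transcripts):
    """Map each word to the (episode, timestamp) positions where it occurs."""
    index = defaultdict(list)
    for episode, segments in transcripts.items():
        for timestamp, text in segments:
            for word in text.lower().split():
                index[word].append((episode, timestamp))
    return index

# Hypothetical transcript data: per-episode lists of (seconds, text) segments,
# as a speech-to-text service might emit them.
transcripts = {
    "ep312": [(15, "welcome back to the show"),
              (942, "corbin talks about launchers")],
    "ep313": [(10, "news roundup"),
              (400, "corbin reviews a phone")],
}

index = build_index(transcripts)
# A query word resolves to the episodes and offsets where it was spoken.
print(index["corbin"])  # → [('ep312', 942), ('ep313', 400)]
```

A production system would add stemming, fuzzy matching (which is how "Corbin dabbing port" could still surface Corbin Davenport's podcast), and ranking, but the timestamped index is the core mechanism.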

Replete 2.0 updated with ClojureScript 1.10.516!

Natasha Mathur
25 Mar 2019
2 min read
Mike Fikes, the creator of Replete, a ClojureScript REPL for iOS and Android, announced last month that Replete 2.0 has been updated with ClojureScript 1.10.516.

https://twitter.com/mfikes/status/1096224601124806656

Replete 2.0 offers a full-featured REPL environment, ideal for learning the ClojureScript language. It is a new flagship version that includes an Android application along with a handful of other additions, and it introduces file and network IO via the replete.core, replete.io, and replete.http namespaces.

Replete depends on ClojureScript's ability to self-host, and thus essentially always carries its compiler with it. Replete for iOS was one of the first applications to use self-hosted ClojureScript; it was only recently ported to Android, states Fikes on Hacker News. Fikes had also tweeted earlier in January this year, asking for people to test the beta version of Replete 2.0.

https://twitter.com/mfikes/status/1086743669825261569

Moreover, with Replete 2.0, users can send text to Replete via a URL from other apps for evaluation, as the 'generic hooks' still exist. There used to be an iOS app called Lisping that allowed you to edit text and then send it to Replete for evaluation. Fikes stated that he only uses Lisping and Replete for quick checks of things (observing what a particular form might evaluate to, checking a docstring, etc.). "I've never really used Replete for any heavy development," states Fikes.

Clojure 1.10 released with Prepl, improved error reporting and Java compatibility
ClojureCUDA 0.6.0 now supports CUDA 10
Clojure 1.10.0-beta1 is out!

Oculus Rift S: A new VR with inside-out tracking, improved resolution and more!

Savia Lobo
22 Mar 2019
4 min read
At GDC 2019, Oculus launched a brand-new addition to its VR collection, the Oculus Rift S, in partnership with Lenovo. The partnership is meant to speed up manufacturing and improve on the design of the original Rift. The Oculus Rift S will be priced at $399 when it launches this spring. Jason Rubin, Facebook and Oculus' vice president of VR partnerships, and Nate Mitchell, Oculus' head of VR product, said that every existing and future game on the Rift platform will be playable on the Rift S.

Features of the Oculus Rift S

Improved Resolution
The new headset has a 2560 x 1440 resolution (1280 x 1440 per eye), about 1.4 times the total pixel count of the original Rift. The new display is LCD instead of OLED, which brings benefits such as a better fill factor (less unlit space between pixels), though LCD often lacks the rich colors and contrast of OLED. The Rift S's LCD display seems up to the task, despite running at 80Hz compared to the Rift's 90Hz.

Clarity and Field of View
With the improved fill factor of LCD, the screen-door effect (visible unlit space between pixels) sees a solid reduction, which makes the Rift S's clarity seem better than the moderate change in resolution would suggest. With a slightly larger field of view and minimal mura, what you see inside the headset looks a lot like the original Rift, but with better clarity. The screen-door effect is less distracting, and it's easier to get lost in the content.

No Hardware IPD Adjustment
Because the Rift S uses a single display, it has no hardware IPD adjustment (unlike the original Rift) to change the distance between the lenses to match the distance between the user's eyes. A proper IPD setting is important for visual comfort (and makes it easier to achieve maximum panel utilization). While IPD on the Rift S can be adjusted in software, to an extent, users at the outer limits of the IPD range might be left wanting. Oculus hasn't specified what it considers the headset's acceptable IPD range.

A new 'Passthrough+'
The Rift S's new Passthrough+ feature allows users to 'see through' the headset by piping the video feed from the onboard cameras into the displays. The company says it has paid special attention to making the feed low-latency, high-frame-rate, and stereo-correct, which is why it's called 'Passthrough+'.

Inside-out 'Insight' tracking
The Rift S uses an 'inside-out' tracking system (called Insight) that places five cameras on the headset itself. The cameras look at the world around the user, and computer-vision algorithms use that information to determine the position of the headset. The onboard cameras also look for glowing lights on the controllers (infrared lights invisible to the naked eye) to determine their location relative to the headset. An inside-out system like this is vastly more complex than the outside-in system of the original Rift.

Room-scale Out of the Box
The Rift S is easy to set up and now has room-scale tracking out of the box, which means players can be more immersed in some games by walking around larger spaces and turning naturally instead of relying on stick-based turning. Depending on the game, full 360-degree room-scale tracking can really enhance immersion.

Other features of the Rift S include hidden audio, improved design and ergonomics, and much more. To learn more about the Oculus Rift S, head over to its official website.

Google announces Stadia, a cloud-based game streaming service, at GDC 2019
Epic Games announces: Epic MegaGrants, RTX-powered Ray tracing demo, and free online services for game developers
Microsoft announces Game stack with Xbox Live integration to Android and iOS
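The "1.4 times the total pixels" figure can be checked with a quick calculation; the original Rift's combined resolution of 2160 x 1200 (1080 x 1200 per eye) is assumed here, as the article only states the Rift S's numbers.

```python
# Total pixel counts of the two headsets' displays.
rift_s   = 2560 * 1440  # Rift S panel, shared across both eyes
rift_cv1 = 2160 * 1200  # original Rift (assumed: 1080 x 1200 per eye)

ratio = rift_s / rift_cv1
print(round(ratio, 2))  # → 1.42
```

The exact ratio is about 1.42, which the article rounds down to 1.4.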
Google is planning to bring Node.js support to Fuchsia

Natasha Mathur
20 Mar 2019
2 min read
Google is reportedly planning to bring Node.js to Fuchsia. Yang Guo, a staff software engineer at Google, posted a tweet yesterday saying he is looking for a full-time software engineer at Google Munich, Germany, to port Node.js to Fuchsia.

https://twitter.com/hashseed/status/1108016705920364544

"We are interested in bringing JS as a programming language to that platform to add to the list of supported languages," states Guo. Currently, Fuchsia supports languages such as C/C++, Dart, FIDL, Go, Rust, and Python, as well as Flutter modules.

Fuchsia is a new operating system that Google has been working on for over two years, in the hope that it may one day replace the dominant Android. Fuchsia is a capability-based operating system built on a new microkernel called "Zircon". Zircon is the core platform that powers Fuchsia. It is made up of a microkernel (source in kernel/...) and a small set of userspace services, drivers, and libraries (source in system/...) that are necessary for the system to boot, talk to hardware, and load and run userspace processes. The Zircon kernel provides the syscalls Fuchsia uses to manage processes, threads, virtual memory, inter-process communication, state changes, and locking.

Fuchsia can run on a variety of platforms, ranging from embedded systems to smartphones and tablets. Earlier this year, in January, 9to5Google published evidence that Fuchsia can also run Android applications: a change spotted by the 9to5 team in the Android Open Source Project makes use of a special version of ART (the Android Runtime) to run Android apps. This feature would allow devices such as computers and wearables to leverage Android apps in the Google Play Store.

Public reaction to the news is positive, with people supporting the announcement:

https://twitter.com/aksharpatel47/status/1108136513575882752
https://twitter.com/damarnez/status/1108090522508410885
https://twitter.com/risyasin/status/1108029764957294593

Google's Smart Display – A push towards the new OS, Fuchsia
Fuchsia's Xi editor is no longer a Google project
Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation

Google announces the stable release of Android Jetpack Navigation

Bhagyashree R
15 Mar 2019
2 min read
Yesterday, Google announced the stable release of the Android Jetpack Navigation component. This component is a suite of libraries and tooling that helps developers implement navigation in their apps, whether that means simple button clicks or more complex navigation patterns such as app bars and navigation drawers.

Some features of Android Jetpack Navigation

Handle basic user actions
You can make basic user actions like the Up and Back buttons work consistently across devices and screens for a better user experience.

Deep linking
Deep linking gets complicated as your app grows. With deep linking, you can let users land directly on any part of your app. In the Navigation component, deep linking is a first-class citizen, making your app's navigation more consistent and predictable.

Reducing the chances of runtime crashes
The component ensures type safety for arguments passed from one screen to another, which decreases the chance of runtime crashes as users navigate your app.

Adhering to Material Design guidelines
You can add navigation experiences like navigation drawers and navigation bottom bars to align your app's navigation with the Material Design guidelines.

Navigation Editor
You can use the Navigation Editor to easily visualize and manipulate your app's navigation graph, a resource file that contains all of your destinations and actions. The Navigation Editor is available in Android Studio 3.3 and above.

To know more in detail, check out the official announcement.

Android Q Beta is now available for developers on all Google Pixel devices
Android Studio 3.5 Canary 7 releases!
Android Things is now more inclined towards smart displays and speakers than general purpose IoT devices
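To make the deep-linking and type-safe-argument ideas concrete, here is a platform-neutral Python sketch of a tiny navigation graph. It illustrates the concept only and is not the Jetpack Navigation API (which is Kotlin/XML-based); the destination names, URL scheme, and helper are all hypothetical.

```python
import re

# A toy navigation graph: each destination declares a deep-link pattern and
# the typed arguments it expects, mirroring a navigation graph's
# destinations + actions + argument types.
NAV_GRAPH = {
    "profile": {"pattern": r"^app://profile/(?P<user_id>\d+)$",
                "args": {"user_id": int}},
    "home":    {"pattern": r"^app://home$", "args": {}},
}

def resolve(link):
    """Match a deep link to a destination and type-check its arguments."""
    for name, dest in NAV_GRAPH.items():
        m = re.match(dest["pattern"], link)
        if m:
            # Converting each captured string through its declared type is
            # the sketch's stand-in for compile-time argument safety.
            args = {k: dest["args"][k](v) for k, v in m.groupdict().items()}
            return name, args
    raise ValueError(f"no destination handles {link!r}")

print(resolve("app://profile/42"))  # → ('profile', {'user_id': 42})
```

In the real component this checking happens at build time via generated classes, so a screen can never receive an argument of the wrong type, which is what "reducing runtime crashes" refers to.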

Android Q Beta is now available for developers on all Google Pixel devices

Natasha Mathur
14 Mar 2019
6 min read
Google released the Android Q beta along with a preview SDK for developers yesterday. The Android Q beta is available for any Pixel device, including the first-gen Pixel and Pixel XL. So far, Google has given no clue about which Q-named snack the operating system will be named after.

The Android Q developer beta adds a number of privacy and security features for users, building on foundations such as Google Play Protect and runtime permissions. It also includes new APIs for connectivity, new media codecs, camera capabilities, NNAPI (Neural Networks API) extensions, Vulkan 1.1 support, and faster app startup, among others.

What's new in Android Q beta?

More control over apps
Android Q gives users more control over when apps can access their location: a user can deny an app location access entirely, or allow it only while the app is in use, rather than all the time, including in the background.

More privacy protections
Users can control apps' access to the Photos and Videos or Audio collections with the help of new runtime permissions. For Downloads, apps use the system file picker, which lets users decide which downloaded files an app can access. There are also changes for developers regarding how apps can use shared areas on external storage.

Foldables and innovative new screens
Android Q brings changes to onResume and onPause that enable support for multi-resume and notify your app when it has focus. The resizeableActivity manifest attribute has also been changed to help developers manage how an app is displayed on foldable and large screens.

Sharing shortcuts
Android Q comes with a new feature called Sharing Shortcuts that lets users jump directly into another app to share content. It allows developers to publish share targets that launch a specific activity in their apps; these share targets are shown to users in the share UI. Google has also expanded the ShortcutInfo API to make integrating both features easier.

Settings Panels
There's a new Settings Panel API, which makes use of the Slices feature introduced in Android 9 Pie. A settings panel is a floating UI that shows system settings users might need in context, such as internet connectivity, NFC, and audio volume.

Connectivity permissions, privacy, and security
Google has increased protection in Android Q for Bluetooth, Cellular, and Wi-Fi scanning by requiring the FINE location permission. Google has also added support for the new Wi-Fi standards WPA3 and Enhanced Open, which improve security for home and work networks as well as open/public networks.

Improved internet connectivity
The Wi-Fi stack has been refactored in Android Q to improve privacy, performance, and common use cases such as managing IoT devices and suggesting internet connections, without requiring the location permission. The network connection APIs make it easier to manage IoT devices over local Wi-Fi for peer-to-peer functions like configuring, downloading, or printing.

Wi-Fi performance mode
Apps can request adaptive Wi-Fi in Android Q by enabling high-performance and low-latency modes. These are highly beneficial where low latency matters to the user experience, including real-time gaming, active voice calls, and similar use cases.

Dynamic Depth format for photos
Apps can request a Dynamic Depth image in Android Q, which consists of a JPEG plus XMP metadata and depth-related elements. Requesting a JPEG + Dynamic Depth image makes it possible to offer specialized blur and bokeh options in your app. The data can also be used to create 3D images or to support AR photography use cases in the future.

New audio and video codecs
Android Q comes with support for the open-source video codec AV1, which allows media providers to stream high-quality video content to Android devices using less bandwidth. Android Q also supports audio encoding using Opus, a codec optimized for speech and music streaming, along with HDR10+ for high-dynamic-range video. There's also a MediaCodecInfo API that lets apps determine the video rendering capabilities of an Android device.

Native MIDI API
Android Q introduces a native MIDI API for apps that perform audio processing in C++, allowing them to communicate with MIDI devices through the NDK. The API also allows MIDI data to be retrieved inside an audio callback using a non-blocking read, enabling low-latency processing of MIDI messages.

ANGLE on Vulkan
Google's developers are working on experimental support for ANGLE on top of Vulkan in Android Q. ANGLE is a graphics abstraction layer designed for high-performance OpenGL compatibility across implementations. Google plans to support OpenGL ES 2.0 first, with ES 3.0 to follow.

Neural Networks API 1.2
Google has added 60 new ops in Android Q, including ARGMAX, ARGMIN, and quantized LSTM, alongside a range of performance optimizations. Google is also working with hardware vendors and popular machine learning frameworks such as TensorFlow to optimize and enable support for NNAPI 1.2.

ART performance
Android Q offers a range of improvements to the ART runtime that help apps start faster and consume less memory. Since Android Nougat, ART has provided Profile Guided Optimization (PGO), which speeds up app startup over time by identifying and precompiling the frequently executed parts of your code. To help with initial app startup, Google Play can now deliver cloud-based profiles along with APKs; these allow ART to pre-compile parts of your app even before it is first run, enhancing the overall optimization process.

Other features include extended support for passive authentication methods such as face authentication, with implicit and explicit authentication flows, and support for TLS 1.3, a major revision to the TLS standard that includes performance benefits and enhanced security.

For more information, check out the official Android Q blog post.

Android Q will reportedly give network carriers more control over network devices
Android Studio 3.5 Canary 7 releases!
Android Things is now more inclined towards smart displays and speakers than general purpose IoT devices
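As a point of comparison for the TLS 1.3 support mentioned above, here is how a client can require TLS 1.3 using Python's standard ssl module. Python is used because it makes the version floor explicit; an Android app would get TLS 1.3 transparently from the platform's TLS stack.

```python
import ssl

# Build a client context with sane defaults (certificate verification on),
# then raise the minimum protocol version to TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# HAS_TLSv1_3 reports whether the underlying OpenSSL build supports TLS 1.3.
print(ssl.HAS_TLSv1_3)
print(ctx.minimum_version.name)  # → TLSv1_3
```

With this context, any handshake that cannot negotiate TLS 1.3 fails outright, which is the same hard floor a platform gains when it ships TLS 1.3 support and apps opt into it.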

Android Studio 3.5 Canary 7 releases!

Natasha Mathur
12 Mar 2019
2 min read
The Android Studio team released version 3.5 Canary 7 of Android Studio, the official integrated development environment for Google's Android operating system, yesterday. Android Studio 3.5 Canary 7 is available in the Canary and Dev channels. The latest release focuses on bug fixes for publicly reported issues.

Improvements in Android Studio 3.5 Canary 7

The illegal character '-' in module names has been fixed.
The databinding annotation processor injecting an absolute path into KotlinCompile, which could defeat Gradle's remote build cache, has been fixed.
It was previously impossible to specify more than 255 file extensions for aaptOptions noCompress; this has been fixed.
AAPT2 crashing when plurals in XML contain an apostrophe has been fixed.
Renaming a method via refactoring didn't work; this has been fixed.
The layout preview used to re-render while typing in the XML editor; this has been fixed.
The DDMLIB process using a full CPU core when no device/emulator is connected has been fixed.
Kotlin main classes used to appear on the classpath before test classes when running unit tests; this has been fixed.

For more information, check out the official release notes for Android Studio 3.5 Canary 7.

Android Studio 3.2 releases with Android App Bundle, Energy Profiler, and more!
Android Studio 3.2 Beta 5 out, with updated Protobuf Gradle plugin
9 Most Important features in Android Studio 3.2
Ionic 4.1 named Hydrogen is out!

Bhagyashree R
08 Mar 2019
2 min read
After releasing Ionic 4.0 in January this year, the Ionic team announced the release of Ionic 4.1 on Wednesday. This release is named "Hydrogen", following the team's convention of naming releases after elements of the periodic table. Along with a few bug fixes, Ionic 4.1 comes with features like a skeleton text update, indeterminate checkboxes, and more.

Some of the new features in Ionic 4.1

Skeleton text update
Using the ion-skeleton-text component, developers can now show skeleton screens for list items more naturally. You can use ion-skeleton-text inside media controls like ion-avatar and ion-thumbnail; the size of skeletons placed inside avatars and thumbnails is automatically adjusted to their containers. You can also style the skeletons with a custom border-radius, width, height, or any other CSS styles for use outside of Ionic components.

Indeterminate checkboxes
A new property named indeterminate has been added to the ion-checkbox component. When indeterminate is true, the checkbox is shown in a half-on/half-off state. This property is handy when you have a "check all" checkbox but only some of the options in the group are selected.

CSS display utilities
Ionic 4.1 comes with a few new CSS classes for hiding elements and for responsive design: ion-hide and ion-hide-{breakpoint}-{dir}. To hide an element, use the ion-hide class. Use the ion-hide-{breakpoint}-{dir} classes to hide an element based on breakpoints for certain screen sizes.

To know more about the other features in detail, visit Ionic's official website.

Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
Ionic v4 RC released with improved performance, UI Library distribution and more
The Ionic team announces the release of Ionic React Beta
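The indeterminate use case reduces to simple tri-state logic: a "check all" box is checked when all children are checked, unchecked when none are, and indeterminate otherwise. Here is a minimal Python sketch of that logic; the function name is illustrative, and Ionic itself implements this in HTML/TypeScript via the indeterminate property.

```python
def parent_state(children):
    """Compute the 'check all' box state from its children's checked flags."""
    if all(children):
        return "checked"        # every option selected
    if not any(children):
        return "unchecked"      # no option selected
    return "indeterminate"      # partial selection → half-on/half-off state

print(parent_state([True, True, True]))   # → checked
print(parent_state([False, False]))       # → unchecked
print(parent_state([True, False, True]))  # → indeterminate
```

In an Ionic app the "indeterminate" result would be rendered by setting indeterminate=true on the parent ion-checkbox.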

React Native community announce March updates, post sharing the roadmap for Q4

Sugandha Lahoti
04 Mar 2019
3 min read
In November last year, the React Native team shared a roadmap for React Native to provide better support to its users and collaborators outside of Facebook. The team is planning to open source some of its internal tools and improve the tools widely used in the open source community. Yesterday, they shared updates on the progress made in the two months since the roadmap's release. Per the team, the goals were to "reduce outstanding pull requests, reduce the project's surface area, identify leading user problems, and establish guidelines for community management."

Updates to pull requests

The number of open pull requests was reduced to 65. The average number of pull requests opened per day increased from 3.5 to 7. Almost two-thirds of the pull requests were merged and one-third were closed. Of all the merged pull requests, only six caused issues; four affected only internal development and two were caught in the release candidate state.

Cleaning up for a leaner core

The developers are planning to reduce the surface area of React Native by removing non-core and unused components. The community response to the Lean Core project was massive: maintainers jumped in to fix long-standing issues, add tests, and support long-requested features. Examples include WebView, which has received many pull requests since its extraction, and the CLI, which is now maintained by members of the community and has received much-needed improvements and fixes.

Helping people upgrade to newer versions of React Native

One of the highest-voted problems was the developer experience of upgrading to newer versions of React Native. The team plans to recommend CocoaPods by default for iOS projects, which will reduce churn in project files when upgrading React Native and make it easier for people to install and link third-party modules. The team also acknowledged contributions from members of the community. One maintainer, Michał Pierzchała from Callstack, helped improve the react-native upgrade command by using rn-diff-purge under the hood.

Releasing React Native 0.59

For future releases, the team plans to:
• work with community members to create a blog post for each major release
• show breaking changes directly in the CLI when people upgrade to new versions
• reduce the time it takes to make a release by increasing automated testing and creating an improved manual test plan

These plans will also be incorporated in the upcoming React Native 0.59 release. It is currently published as a release candidate and is expected to be stable within the next two weeks.

What's next

The team will now focus on managing pull requests while also starting to reduce the number of outstanding GitHub issues. They will continue to reduce the surface area of React Native through the Lean Core project. They also plan to address five of the top community problems and work on the website and documentation.

React Native 0.59 RC0 is now out with React Hooks, and more
Changes made to React Native Community's GitHub organization in 2018 for driving better collaboration
The React Native team shares their open source roadmap, React Suite hits 3.4.0

Magic Leap announces selections for the Magic Leap Independent Creator Program

Sugandha Lahoti
28 Feb 2019
1 min read
Last year in November, Magic Leap introduced its Independent Creator Program. Yesterday, the company named its selections for the program. The Magic Leap team reviewed over 6,500 entries and selected projects in a wide range of categories, including education, entertainment, gaming, enterprise, and more.

The Magic Leap Independent Creator Program is a development fund to help individual developers and teams kick-start their Magic Leap One projects. Magic Leap is offering grants between $20,000 and $500,000 per project, along with developer, hardware, and marketing support.

The teams selected include:

Source: MagicLeap

The selected teams will now be paired with Magic Leap's Developer Relations team for guidance and support. Once the teams have built, submitted, and launched their projects, the best experiences will be showcased at the L.E.A.P. conference in 2019. Teams will receive dedicated marketing support, including planning, promotion, and social media amplification. The Developer Relations team, consisting of Magic Leap's subject matter experts and QA testers, will give developers one-on-one guidance.

Magic Leap acquires Computes Inc to enhance spatial computing
Magic Leap unveils Mica, a human-like AI in augmented reality
Magic Leap teams with Andy Serkis' Imaginarium Studios to enhance Augmented Reality

Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019

Bhagyashree R
27 Feb 2019
3 min read
At the ongoing Mobile World Congress, Google announced the release of Flutter 1.2 yesterday. This first feature update comes with support for Android App Bundles, improved Material and Cupertino widget sets, and more. Mobile World Congress is a four-day event that started on the 25th of this month; it is the mobile industry's largest annual gathering, where some of the world's leading companies talk about their latest innovations and technology.

Following are some of the updates Flutter 1.2 includes:

Improved Material and Cupertino widget sets

The team has been putting effort into improving the Material and Cupertino widget sets. Developers now have more flexibility when using Material widgets. For Cupertino widgets, the team has added support for floating cursor text editing on iOS, which can be triggered either by force pressing the keyboard or by long pressing the spacebar.

Support for Android App Bundles

Flutter 1.2 supports Android App Bundles, a new upload format that includes all of an app's compiled code and resources. This format helps reduce app size and enables new features like dynamic delivery for Android apps.

Support for the Dart 2.2 SDK

This release includes the Dart 2.2 SDK, which was also released yesterday. Dart 2.2 comes with significant performance improvements that make ahead-of-time compilation even faster, and a literal syntax for initializing sets. It also introduces the Dart Common Front End (CFE), which parses Dart code, performs type inference, and translates Dart into a lower-level intermediate language.

Other updates

Flutter 1.2 also supports a broader set of animation easing functions, inspired by Robert Penner's work. The team is already preparing Flutter for desktop-class operating systems by adding new keyboard events and mouse hover support. Flutter's plug-in team has also made changes in Flutter 1.2 to support the In App Purchases plugin, along with bug fixes for the video player, webview, and maps plugins.

Along with Flutter 1.2, the team has released a preview of Dart DevTools, a suite of performance tools for Dart and Flutter. Some of the tools from this suite, including the web inspector, timeline view, and others, are now available for installation.

Read the full set of updates in Flutter 1.2 on the Google Developers blog.

Google to make Flutter 1.0 "cross-platform"; introduces Hummingbird to bring Flutter apps to the web
Flutter challenges Electron, soon to release a desktop client to accelerate mobile development
Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!
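Easing functions in the Robert Penner style map normalized time t in [0, 1] to animation progress. As a rough illustration of the kind of curve mentioned above (sketched here in TypeScript rather than Dart, and not Flutter's actual Curves API):

```typescript
// Quadratic ease-in-out, Penner style: the animation accelerates through
// the first half of its duration and decelerates through the second half.
function easeInOutQuad(t: number): number {
  return t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;
}

easeInOutQuad(0);    // 0   (start)
easeInOutQuad(0.5);  // 0.5 (midpoint)
easeInOutQuad(1);    // 1   (end)
```

A renderer samples such a curve once per frame, feeding the eased value into an interpolation between the animation's start and end states.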

Microsoft workers protest the lethal use of HoloLens 2 in the $480m deal with US military

Sugandha Lahoti
25 Feb 2019
4 min read
Microsoft employees are outraged over the company's $480 million deal with the U.S. Army to provide it with HoloLens 2, Microsoft's latest augmented-reality headset, for use on the battlefield. Although Microsoft won the contract in November, it was only last Friday that Microsoft workers took to Twitter to express their concerns. In an open letter addressed to Microsoft CEO Satya Nadella and president and chief legal officer Brad Smith, employees wrote that the deal has "crossed the line" and "is designed to help people kill."

https://twitter.com/MsWorkers4/status/1099066343523930112

This is not the first time tech workers have stood up in solidarity against tech giants over controversial business decisions or policies. Last year, 'Employees of Microsoft' asked Microsoft not to bid on the US Military's Project JEDI in an open letter. Google employees also protested against the company's censored search engine for China, codenamed Project Dragonfly. And in October 2018, an Amazon employee spoke out against Amazon selling its facial recognition technology, Rekognition, to police departments across the world.

Yesterday, Microsoft unveiled the HoloLens 2 AR device at the Mobile World Congress (MWC) in Barcelona. The company has also signed a contract with the US military for a project called the Integrated Visual Augmentation System (IVAS). Per the terms of the deal, the AR headsets will be used to insert holographic images into the wearer's field of vision. The contract's stated objective is to "rapidly develop, test, and manufacture a single platform that Soldiers can use to Fight, Rehearse, and Train that provides increased lethality, mobility, and situational awareness necessary to achieve overmatch against our current and future adversaries," the letter said.

What are Microsoft employees saying?

The letter, which was signed by more than 100 Microsoft employees, was published on an internal message board and circulated via email to employees at the company on Friday. It condemned the IVAS contract, demanding its cancellation and calling for stricter ethical guidelines. "We are alarmed that Microsoft is working to provide weapons technology to the US Military, helping one country's government 'increase lethality' using tools we built. We did not sign up to develop weapons, and we demand a say in how our work is used," the letter said. Aligning HoloLens 2 with the military turns "warfare into a simulated 'video game,' further distancing soldiers from the grim stakes of war and the reality of bloodshed," the letter adds.

In October, Brad Smith defended Microsoft's work with the military in a blog post: "First, we believe that the people who defend our country need and deserve our support. And second, to withdraw from this market is to reduce our opportunity to engage in the public debate about how new technologies can best be used in a responsible way. We are not going to withdraw from the future." He also suggested that employees concerned about working on unethical projects "would be allowed to move to other work within the company." This statement ignores "the problem that workers are not properly informed of the use of their work," the letter stated.

Netizens have also expressed solidarity with Microsoft employees and criticized the military involvement.

https://twitter.com/tracy_karin/status/1099880041721352192
https://twitter.com/Durrtydoesit/status/1099840664978817024
https://twitter.com/cgallagher036/status/1099826879090118657

A comment on Hacker News reads, "Whether you agree with this sentiment or not, people waking up to ethical questions in our field is unquestionably a good thing. It's important to ask these questions."

Rights groups pressure Google, Amazon, and Microsoft to stop selling facial surveillance tech to the government
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
The new tech worker movement: How did we get here? And what comes next?