
Tech News - Mobile

204 Articles

Homebrew 1.9.0 released with periodic brew cleanup, beta support for Linux, Windows and much more!

Melisha Dsouza
10 Jan 2019
2 min read
Yesterday, Mike McQuaid, Homebrew's lead maintainer, announced the release of Homebrew 1.9.0. The release brings major updates such as Linux support, (optional) automatic brew cleanup, bottles (binary packages) for more Homebrew users, and much more. Homebrew is an open-source software package management system that simplifies the installation of software on Apple's macOS operating system. Homebrew automatically handles all dependencies and installs requested software into one common location, giving users easy access and quick updates.

Features of Homebrew 1.9.0

- Beta support for Linux, and for Windows 10 via the Windows Subsystem for Linux. Linuxbrew (Homebrew on Linux) does not require root access.
- If the HOMEBREW_INSTALL_CLEANUP environment variable is set, brew cleanup runs periodically on the system. The same variable also triggers cleanup of the individual formula on install, reinstall, or upgrade.
- brew prune has been deprecated; its functionality now runs as part of brew cleanup.
- Homebrew 1.9.0 will not run on 32-bit Intel CPUs.
- Incomplete downloads can now be resumed when the server rejects HEAD requests. This is particularly useful since some HTTP servers don't support HEAD.
- brew bottle allows relocation of more bottles, by ignoring source code and skipping matches to build dependencies.
- macOS Mojave bottles are optimized for the newer CPUs required by Mojave.
- ...and much more!

What to expect in Homebrew 2.0.0?

- Official support for Linux, and for Windows 10 via the Windows Subsystem for Linux.
- Homebrew 2.0.0 will stop running on macOS versions 10.8 and below.
- Homebrew 2.0.0 will stop migrating old installations from the legacy Homebrew/homebrew repository.

While most users are excited about the news, some are not satisfied with Homebrew's documentation (source: Hacker News).

You can head over to Homebrew's official blog to learn more about the additional features introduced in Homebrew 1.9.0.

- Homebrew's GitHub repo got hacked in 30 mins. How can open source projects fight supply chain attacks?
- An update on Bcachefs, the "next generation Linux filesystem"
- The Linux and RISC-V foundations team up to drive open source development and adoption of the RISC-V instruction set architecture (ISA)

HTC, Intel, Lenovo showcase their products at Day 2 of CES 2019

Sugandha Lahoti
08 Jan 2019
4 min read
CES 2019 kicks off in Las Vegas, Nevada today, January 8, and runs for three days. The conference unofficially kicked off on Sunday, January 6, and you can have a look at the announcements made that day. Yesterday was the main press day, when the majority of the announcements were made, with a lot of companies showcasing their latest projects and announcing new products, software, and services.

HTC

HTC announced their partnership with Mozilla, bringing Firefox's virtual reality web browser to the Vive headset. Mozilla first announced Firefox Reality as a dedicated VR web browser in April. In September, they announced that the browser was available on Viveport, Oculus, and Daydream. Now, it is available for the HTC Vive headset. As part of the deal, HTC is also teaming up with Amazon to make use of Amazon Sumerian.

HTC also announced the Vive Pro Eye virtual reality headset with native, built-in eye tracking. It uses "foveated rendering" to render sharp images wherever the human eye is looking in a virtual scene, while reducing the image quality of objects on the periphery.

Intel

Intel made a number of announcements at CES 2019. They showcased new processors and also released a press release with updates on Project Athena. With this project, they are getting PC makers ready for "a new class of advanced laptops": Ultrabooks part two, with 5G and artificial intelligence support.

New Intel processors:

- New 9th Gen Core processors for a limited number of desktops and laptops.
- A 10nm Ice Lake processor for thin laptops.
- A 10nm Lakefield processor using 3D stacking technology for very small computers and tablets.
- A Cascade Lake Xeon processor for data processing.
- 3D Athlete Tracking tech, which runs on the Cascade Lake chip and shows data about how fast and far athletes are traveling.
- Intel's 10nm Snow Ridge SoC for 5G base stations.

Lenovo

Lenovo has made minor updates to their ThinkPad X1 Carbon and X1 Yoga laptops with new designs for 2019. They are mostly getting a material change and are also going to be thinner and lighter this year. Lenovo has also released two large-display monitors. The first is Lenovo's ThinkVision P44w, which is aimed at business users, and the second is the Legion Y44w Gaming Monitor. Both have a 43.4-inch panel.

Uber

One of Uber's partners in the air taxi domain, Bell, has revealed the design of its vertical takeoff and landing air taxi at CES 2019. The flying taxi, dubbed the Bell Nexus, can accommodate up to five people and is a hybrid-electric powered vehicle.

CES 2019 also saw the release of the game Marvel's Avengers: Rocket's Rescue Run. This is the first demo product from startup Holoride, which has Audi as one of its stakeholders. It is the result of Audi and Disney's new media format, which aims to bring virtual reality to passengers in cars, specifically to Uber.

More announcements:

- Harley-Davidson gave a preview of their first all-electric motorcycle. It will launch in August 2019 and will cost $29,799.
- TCL announced its first soundbars and a 75-inch version of the excellent 6-Series 4K Roku TV.
- Elgato announced a professional $199 light rig for Twitch streamers and YouTube creators.
- Hisense announced its new 2019 4K TV lineup and the Sonic One TV.
- Griffin introduced new wireless chargers for the iPhone and Apple Watch.
- Amazon is planning to let people deliver packages inside your garage.
- Kodak released a new instant camera and printer line.
- GE announced a 27-inch smart display for the kitchen that streams Netflix.
- Google Assistant will soon be on a billion devices; its next stop is feature phones.
- Vizio announced "the most advanced 4K TV ever" and support for Apple's AirPlay 2.
- Toyota shared details of its Guardian driver-assist system, which mimics a technique used in fighter jets to serve as a smart intermediary between driver and car.

- CES 2019: Top announcements made so far
- HTC Vive Focus 2.0 update promises long battery life, among other things for the VR headset
- Intel unveils the first 3D Logic Chip packaging technology, 'Foveros', powering its new 10nm chips, 'Sunny Cove'

Changes made to React Native Community’s GitHub organization in 2018 for driving better collaboration

Bhagyashree R
08 Jan 2019
3 min read
Yesterday, Lorenzo Sciandra, a React Native developer, shared his experience of how the React Native Community took added ownership of the development of React Native and enhanced collaboration with Facebook in 2018. In 2019, the community will be sharing guidelines to ensure quality code that complies with community-agreed standards.

Here are the three channels the community created for better transparency and for sharing recent happenings in the React Native Community:

react-native-releases

As the name suggests, the react-native-releases repository was created to keep everyone up to date about new releases of React Native in a more collaborative manner and to give a clear idea of which features would be part of a certain release. This allowed the team to follow a long-term support approach instead of the monthly release cycle, which they are using for version 0.57.x.

discussions-and-proposals

The discussions-and-proposals repository is aimed at providing a more open environment for discussing new features or enhancements to React Native. It provides better transparency from the Core and Facebook teams and acts as a communication channel for all members of the community. The team wanted to adopt an RFC (request for comments) approach instead of having all the discussions and proposals on the main repository. This repository provides a consistent and controlled path for new features to be proposed. The Facebook team is also using the RFC process to discuss what could be improved in React and is coordinating its efforts around the Lean Core project.

@ReactNativeComm

The team has created this new Twitter account to give users regular updates on everything going on in the React Native Community, from releases to active discussions.

In addition to enhancing collaboration in the community, the team is also aiming to create a formal structure. For this, they are planning to enforce a set of standards for all the packages and repos. With these guidelines in place, they will be able to help each other and contribute quality code that conforms to community-agreed standards. In his blog post, Lorenzo also said, "This organization can set the example for everyone in the larger developer community by enforcing a set of standards for all the packages/repos hosted in it, providing a single place for maintainers to help each other and contribute quality code that conforms to community-agreed standards."

- JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript
- The React Native team shares their open source roadmap, React Suite hits 3.4.0
- React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!

Project Erasmus: Former Apple engineer builds a user interface that responds to environment light

Bhagyashree R
04 Jan 2019
3 min read
Bob Burrough, a former Apple software development manager, has developed an environmentally lit user interface. He demonstrated this unique UI concept, named Project Erasmus, on Tuesday in a YouTube video. The design changes the light, shade, and reflections on UI elements based on the lighting of the surroundings. With this project, Burrough tries to bring back the skeuomorphic design that Apple moved away from with iOS 7.

In the demonstration, he explained how the concept works. An Olloclip lens is attached to an iPhone's front camera to capture a wide-angle shot of the room. This captured data is then used to create a lighting map, which includes data about the reflections and shadows across the environment. According to this map, the lighting effects on the graphic elements of the screen change. For instance, based on the lighting in the room, toggle buttons and menu bars cast shadows and highlights. This UI design makes elements on the screen appear as real-life objects below the display. "It looks like the user-interface elements are physical objects that reside just beneath the surface of the screen, like you could reach in and touch them," said Burrough in the demo.

The project is still a work in progress but looks very promising. Though this environmentally lit UI concept wouldn't bring any performance improvements, it would surely make the user experience more immersive and open more possibilities for further invention. Burrough said that developers could create a backlight effect for UI elements when the device is in a dark room, similar to keyboards that light up in the dark. This UI design will make user interfaces more interactive, but developers will have to ensure it does not drain the battery when implemented in an app.

One YouTube user also pointed out an interesting benefit of this UI design: "As someone with color-deficient vision, I haven't been impressed by Apple's choice of colors to differentiate user interface elements. The presence of shadows would do a lot to define elements. This isn't useless at all. It's incredibly useful for people with issues like mine."

Watch the demo by Burrough here: https://www.youtube.com/watch?v=TIUMgiQ7rQs

- Thunderbird welcomes the new year with better UI, Gmail support and more
- Ionic v4 RC released with improved performance, UI Library distribution and more
- HashiCorp Vault 1.0 released with batch tokens, updated UI and more
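Burrough has not published any source, but the core idea, driving an element's shadow from an estimated light direction, can be sketched. The snippet below is a toy TypeScript illustration, not Project Erasmus code: the brightness-sampling input, the estimateLight heuristic, and the shadowFor helper are all hypothetical simplifications.

```typescript
// Toy illustration only (not Burrough's implementation). Assumes some vision
// step has already reduced the wide-angle camera frame to a few brightness
// samples taken at different angles around the screen.
interface BrightnessSample {
  angle: number;      // radians, direction of the sampled region
  brightness: number; // 0..1
}

interface LightEstimate {
  angle: number;      // direction of the dominant light source
  intensity: number;  // 0..1
}

// Hypothetical heuristic: treat the brightest sample as the dominant light.
function estimateLight(samples: BrightnessSample[]): LightEstimate {
  const brightest = samples.reduce((a, b) => (b.brightness > a.brightness ? b : a));
  return { angle: brightest.angle, intensity: Math.min(1, brightest.brightness) };
}

// Offset a CSS-style box shadow away from the light; stronger light, longer shadow.
function shadowFor(light: LightEstimate, elevationPx: number): string {
  const dx = -Math.cos(light.angle) * elevationPx * light.intensity;
  const dy = -Math.sin(light.angle) * elevationPx * light.intensity;
  const blur = elevationPx * 2;
  return `${dx.toFixed(1)}px ${dy.toFixed(1)}px ${blur}px rgba(0, 0, 0, 0.4)`;
}

// Example: one bright sample dominates, so the shadow is cast opposite to it.
const shadow = shadowFor(
  estimateLight([
    { angle: Math.PI / 4, brightness: 0.9 },
    { angle: (5 * Math.PI) / 4, brightness: 0.2 },
  ]),
  8
);
console.log(shadow); // "-5.1px -5.1px 16px rgba(0, 0, 0, 0.4)"
```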

Introducing Feelreal Sensory Mask, a VR mask that adds a sense of smell while viewing VR content

Prasad Ramesh
28 Dec 2018
2 min read
Feelreal introduced their Feelreal Sensory Mask earlier this week, a device that not only lets you see things in virtual reality (VR) but also gives you a sense of smell, among other senses. VR headsets have seen major progress in recent years, from higher resolutions to wider fields of view; being able to smell different odors, and receive other sensory input, in a VR headset is something entirely new. Feelreal Inc. brings a sensory mask that adds a sense of smell while you are viewing VR content.

Feelreal puts it as: "Imagine the depth of interaction when users can truly feel themselves on a racing track and actually smell burned rubber. Or being able to grasp the feeling of being on a battlefield complete with the intense gunpowder odor. This is what the multi-sensory virtual reality experience is all about."

The company tweeted on Wednesday announcing the new VR mask: https://twitter.com/feelreal_com/status/1077963480324587521

The smells are produced by a scent generator that holds a replaceable cartridge containing nine aroma capsules, with 255 scents to choose from in the Feelreal store. Along with various odors, the VR mask delivers other sensory input to the user:

- Water mist: an ultrasonic ionizing system makes you feel rain on your cheeks.
- Heat: safe micro-heaters let you sense the warmth of the desert.
- Wind: two micro-coolers let you experience a cool mountain breeze.
- Vibration: force-feedback haptic motors induce impactful vibrations.

There are many applications for the Feelreal multi-sensory mask. It can be used for 360° movies, Feelreal dreams, VR games, immersive meditation, and aromatherapy controlled by the mobile app. You can connect the mask to a Samsung Gear VR, Oculus Rift, Oculus Go, HTC Vive, or PlayStation VR via Bluetooth or Wi-Fi.

Feelreal is planning to bring the mask to Kickstarter for funding; it is currently in the crowdfunding stage. In 2015, the company attempted to crowdfund a version of the mask with seven cartridges but could not raise the necessary funding. The Feelreal mask comes in three colors: white, gray, and black.

For more details, visit the Feelreal website.

- Why mobile VR sucks
- Building custom views in vRealize Operations Manager [Tutorial]
- Google's Daydream VR SDK finally adds support for two controllers

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others

Natasha Mathur
21 Dec 2018
5 min read
IEEE Computer Society (IEEE-CS) released its annual predictions for the future of tech earlier this week, unveiling the ten technology trends most likely to be adopted in 2019. "The Computer Society's predictions are based on an in-depth analysis by a team of leading technology experts and identify top technologies that have substantial potential to disrupt the market in the year 2019," mentions Hironori Kasahara, IEEE Computer Society President. Let's have a look at their top ten technology trends predicted to reach wide adoption in 2019.

Top ten trends for 2019

Deep learning accelerators

According to the IEEE Computer Society, 2019 will see wide-scale adoption of companies designing their own deep learning accelerators, such as GPUs, FPGAs, and TPUs, which can be used in data centers. The development of these accelerators would further allow machine learning to be used in different IoT devices and appliances.

Assisted transportation

Another trend predicted for 2019 is the adoption of assisted transportation, which is already paving the way for fully autonomous vehicles. Although the future of fully autonomous vehicles is not entirely here, self-driving tech saw a booming year in 2018. For instance, AWS introduced DeepRacer, a self-driving race car; Tesla is building its own AI hardware for self-driving cars; and Alphabet's Waymo will be launching the world's first commercial self-driving cars in the upcoming months. Beyond self-driving, assisted transportation is also highly dependent on deep learning accelerators for video recognition.

The Internet of Bodies (IoB)

As per the IEEE Computer Society, consumers have become very comfortable with self-monitoring using external devices like fitness trackers and smart glasses. With digital pills now entering mainstream medicine, body-attached, implantable, and embedded IoB devices provide richer data that enable the development of unique applications. However, IEEE notes that this tech also brings concerns related to security, privacy, physical harm, and abuse.

Social credit algorithms

Facial recognition tech was in the spotlight in 2018. For instance, Microsoft President Brad Smith requested this month that governments regulate the evolution of facial recognition technology, and Google patented a new facial recognition system that uses your social network to identify you. According to the IEEE, social credit algorithms will now see a rise in adoption in 2019. Social credit algorithms make use of facial recognition and other advanced biometrics to identify a person and retrieve data about them from digital platforms. This helps determine the approval or denial of access to consumer products and services.

Advanced (smart) materials and devices

The IEEE Computer Society predicts that in 2019, advanced materials and devices for sensors, actuators, and wireless communications will see widespread adoption. These materials, which include tunable glass, smart paper, and ingestible transmitters, will lead to the development of applications in healthcare, packaging, and other appliances. "These technologies will also advance pervasive, ubiquitous, and immersive computing, such as the recent announcement of a cellular phone with a foldable screen. The use of such technologies will have a large impact on the way we perceive IoT devices and will lead to new usage models," mentions the IEEE Computer Society.

Active security protection

From data breaches (Facebook, Google, Quora, Cathay Pacific, etc.) to cyber attacks, 2018 saw many security-related incidents. 2019 will now see a new generation of security mechanisms that use an active approach to fight against such incidents. These would involve hooks that can be activated when new types of attacks are exposed, and machine-learning mechanisms that can help identify sophisticated attacks.

Virtual reality (VR) and augmented reality (AR)

Packt's 2018 Skill Up report highlighted what game developers feel about the VR world: a whopping 86% of respondents replied "Yes, VR is here to stay." The IEEE Computer Society echoes that thought, as it believes VR and AR technologies will see even greater wide-scale adoption and will prove very useful for education, engineering, and other fields in 2019. IEEE notes that, now that advertisements for VR headsets appear during prime-time television programs, VR/AR will see wide-scale adoption in 2019.

Chatbots

2019 will also see an expansion in the development of chatbot applications. Chatbots are used quite frequently for basic customer service on social networking hubs, and they are also used in operating systems as intelligent virtual assistants. Chatbots will additionally find applications in interacting with cognitively impaired children for therapeutic support. "We have recently witnessed the use of chatbots as personal assistants capable of machine-to-machine communications as well. In fact, chatbots mimic humans so well that some countries are considering requiring chatbots to disclose that they are not human," mentions IEEE.

Automated voice spam (robocall) prevention

IEEE predicts that automated voice spam prevention technology will see widespread adoption in 2019. It will be able to block spoofed caller IDs and, for questionable calls, have a computer ask the caller questions to determine whether the caller is legitimate.

Technology for humanity (specifically machine learning)

IEEE predicts an increase in the adoption of tech for humanity. Advances in IoT and edge computing are the leading factors driving the adoption of this technology. Events such as fires and bridge collapses are further creating urgency to adopt these monitoring technologies in forests and on smart roads.

"The technical community depends on the Computer Society as the source of technology IP, trends, and information. IEEE-CS predictions represent our commitment to keeping our community prepared for the technological landscape of the future," says the IEEE Computer Society.

For more information, check out the official IEEE Computer Society announcement.

- Key trends in software development in 2019: cloud native and the shrinking stack
- Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity
- Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019

Ionic v4 RC released with improved performance, UI Library distribution and more

Amrata Joshi
20 Dec 2018
3 min read
Yesterday, the team at Ionic released the final release candidate for Ionic v4; the final version of Ionic v4 is expected in early 2019. Ionic is an app platform for web developers, for building mobile, web, and desktop apps with one shared code base and open web standards. Ionic v4 RC has been nicknamed Ionic Neutronium, following the earlier pre-release versions nicknamed Hydrogen, Helium, Lithium, and so on. With the final release candidate, the API has now been stabilized, and remaining work is focused on fixes for patch and minor releases.

Improvements in Ionic v4 RC

Mobile performance

Ionic v4 comes with improvements in app startup times, especially on mobile devices. It features smaller file sizes that work well with apps on iOS, Android, Electron, and PWAs.

Ivy renderer for Angular users

Ionic v4 will work with Angular's Ivy renderer, Angular's fastest and smallest renderer so far, which will prove useful for Angular and Ionic developers. Interestingly, even a simple Ivy "Hello World" app reduces down to a size of 2.7 kB.

UI library distribution

Ionic Neutronium makes improvements to the UI library. It uses standard web APIs, like custom elements, that are capable of lazy-loading themselves on demand. Now both Angular and Ionic can iterate and improve independently, and developers can take advantage of these improvements with fewer restrictions.

Support for Angular tooling

The Angular CLI and Router have both become production-ready with this release and are capable of the native-style navigation required by Ionic apps. This release also adds support for Angular schematics, so Angular developers can run ng add @ionic/angular to add Ionic directly to their app.

Angular, React, and Vue

Ionic v4 RC allows users to continue using Ionic in projects based on React or Vue. The goal behind this release was to decouple Ionic from any specific version of a single framework's runtime and component model. Last month, the Ionic community and Modus Create released the alpha version of @ionic/vue; @ionic/react is in progress and is expected to be released soon.

Major bug fixes

- The issues with scrollable options have been fixed.
- The Cordova browser error has been fixed.
- Fixes have been made to sibling router-outlets and router-outlet memory.
- The progress bar has been improved and looks better now.
- Issues with virtual-scroll have been fixed.

Many users are happy with this release and are awaiting the final version in 2019. This might prove to be the perfect new year celebration for developers as well as the Ionic team. Read more about this release in Ionic's blog post.

- JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript
- Ionic framework announces Ionic 4 Beta
- Ionic Components
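To make the Angular integration concrete, here is a minimal sketch of an Angular module and component using Ionic v4 components through @ionic/angular, the package added by ng add @ionic/angular. The component and selector names are illustrative, not taken from the article.

```typescript
// Minimal sketch of an Angular app using Ionic v4 UI components.
// AppComponent/AppModule names are illustrative; IonicModule, ion-app,
// ion-content, and ion-button come from @ionic/angular.
import { Component, NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { IonicModule } from '@ionic/angular';

@Component({
  selector: 'app-root',
  // Ionic components are framework-agnostic custom elements that
  // lazy-load themselves; IonicModule exposes them to Angular templates.
  template: `
    <ion-app>
      <ion-content>
        <ion-button (click)="greet()">Hello Ionic 4</ion-button>
      </ion-content>
    </ion-app>
  `,
})
export class AppComponent {
  greet(): void {
    console.log('Ionic v4 component rendered via @ionic/angular');
  }
}

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, IonicModule.forRoot()],
  bootstrap: [AppComponent],
})
export class AppModule {}
```

Because the underlying UI library ships as lazy-loading custom elements, the same ion-button markup can also be used outside Angular, which is the point of the UI library distribution change described above.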

Facebook releases DeepFocus, an AI-powered rendering system to make virtual reality more real

Natasha Mathur
20 Dec 2018
3 min read
Facebook yesterday released a new "AI-powered rendering system" called DeepFocus, which works with Half Dome, a special prototype headset that Facebook's Reality Lab (FRL) team has been working on over the past three years. Half Dome is an example of a "varifocal" head-mounted display (HMD) comprising eye-tracking camera systems, wide-field-of-view optics, and adjustable display lenses that move forward and backward to match your eye movements. This makes the VR experience a lot more comfortable, natural, and immersive. However, Half Dome needs software to reach its full potential, and that is where DeepFocus comes into the picture.

"Our eyes are like tiny cameras: When they focus on a given object, the parts of the scene that are at a different depth look blurry. Those blurry regions help our visual system make sense of the three-dimensional structure of the world and help us decide where to focus our eyes next. While varifocal VR headsets can deliver a crisp image anywhere the viewer looks, DeepFocus allows us to render the rest of the scene just the way it looks in the real world: naturally blurry," mentions Marina Zannoli, a vision scientist at FRL.

Facebook is also open-sourcing DeepFocus, making the system's code and the data set used to train it available to help other VR researchers incorporate it into their work. "By making our DeepFocus source and training data available, we've provided a framework not just for engineers developing new VR systems, but also for vision scientists and other researchers studying long-standing perceptual questions," say the researchers.

https://www.youtube.com/watch?v=Xp6OlfJEmAo

DeepFocus

A research paper presented at SIGGRAPH Asia 2018 explains that DeepFocus is a unified rendering and optimization framework based on convolutional neural networks that solves a full range of computational tasks and enables real-time operation of accommodation-supporting HMDs. The CNN comprises "volume-preserving" interleaving layers that help it quickly figure out the high-level features within an image. For instance, the paper mentions that it accurately synthesizes defocus blur, focal stacks, multilayer decompositions, and multiview imagery. Moreover, it makes use of only commonly available RGB-D images, which enables real-time, near-correct depictions of retinal blur. The researchers explain that DeepFocus is "tailored to support real-time image synthesis..and ..includes volume-preserving interleaving layers..to reduce the spatial dimensions of the input, while fully preserving image details, allowing for significantly improved runtimes".

Unlike traditional AI systems used for deep learning based image analysis, DeepFocus can process the visuals while preserving the ultrasharp image resolutions necessary for delivering a high-quality VR experience, which makes the model more efficient. The researchers mention that DeepFocus can also grasp complex image effects and relations, including foreground and background defocusing.

DeepFocus isn't limited to Oculus HMDs. Since it supports high-quality image synthesis for multifocal and light-field displays, it is applicable to a complete range of next-gen head-mounted display technologies. "DeepFocus may have provided the last piece of the puzzle for rendering real-time blur, but the cutting-edge research that our system will power is only just beginning," say the researchers.

For more information, check out the official Oculus blog.

- Magic Leap unveils Mica, a human-like AI in augmented reality
- MagicLeap acquires Computes Inc to enhance spatial computing
- Oculus Connect 5 2018: Day 1 highlights include Oculus Quest, Vader Immortal and more!

Google to discontinue Allo; plans to power ‘Messages’ with Rich Communication Services (RCS) Chat

Bhagyashree R
06 Dec 2018
3 min read
Yesterday, Google announced that they are shutting down Allo, an instant messaging app for the Android and iOS platforms. The news does not come as a surprise, given that Google stopped investing in Allo earlier this year, in April. People will be able to use Allo until March 2019, and until then users can export all of their existing conversation history from the app.

Anil Sabharwal, head of the communications group at Google, shared that they are discontinuing further development of Allo because it was not able to attract many users. He says, "The product as a whole has not achieved the level of traction we'd hoped for. [...] We set out to build this thing, that it [would be] a product that we would get hundreds of millions of people to get excited about and use. And where we are, we're not feeling like we're on that trajectory."

The team working on Allo will now work primarily on the implementation of carrier-based Rich Communication Services (RCS), under the branding 'Chat'. This will be included within the Android Messages app used for SMS. RCS is a protocol that could potentially replace SMS and bring more advanced features, such as group chat, high-resolution photo sharing, and read receipts, to mobile messaging.

Google now wants to focus more on the development of 'Messages', which is described as "Google's official app for texting". It has brought some of Allo's most-liked features, such as Smart Reply, GIFs, and desktop support, into Messages. Since then, Messages has shown impressive adoption and is now used by nearly 175 million users.

Along with this announcement, Google has also shared details of its other two communication platforms, Duo and Hangouts. Duo is now supported on various devices such as the iPad, Android tablets, Chromebooks, and smart displays. A feature was recently added to let users leave video messages, and more quality improvements based on machine learning are planned. Google also pointed out that the expansion of Hangouts to the enterprise (Hangouts Chat and Meet) has been well received by users. In the coming months, Chat will allow customers to include people from outside of their organization, making it easy to stay aligned with clients, vendors, partners, and others, all from one place.

To know more, check Google's official announcement.

- Google bypassed its own security and privacy teams for Project Dragonfly reveals Intercept
- Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations
- Google employees join hands with Amnesty International urging Google to drop Project Dragonfly

Google to make Flutter 1.0 “cross-platform”; introduces Hummingbird to bring Flutter apps to the web

Bhagyashree R
05 Dec 2018
3 min read
Yesterday, Google announced the release of Flutter 1.0, the SDK's first stable release, at the Flutter Live event. They further shared that they are working on a project called Hummingbird, a way to bring Flutter apps to the "modern, standards-based web".

https://twitter.com/flutterio/status/1070021432934055936

Flutter Live was held yesterday at the Science Museum on Exhibition Rd, Kensington, London SW7 2DD, UK. At this event, Google shared the latest on Flutter, Google's free and open source SDK for building high-quality native iOS and Android apps from a single codebase.

Flutter 1.0 updates

The primary focus of Flutter 1.0 was bug fixes and stabilization. Some of the updates introduced in this release are:

- Support for nearly twenty different Firebase services has been added.
- Performance is improved, and work has been done on reducing the size of Flutter apps.
- The Dart platform has been updated to 2.1, which offers smaller code size, faster type checks, and better usability for type errors.
- Previews of two new major features, Add to App and platform views, which are estimated to ship in February 2019. Developers can try these features in preview mode.

Add to App

Add to App is introduced for developers who want to use Flutter to add new features to their existing applications, or to convert an existing application to Flutter in stages. This feature makes it easier to incrementally adopt Flutter by updating templates, tooling, and guidance for existing apps. The tooling has also been reworked to make it easy to attach to an existing Flutter process without launching the debugger with the application.

Platform views

The newly added platform view widgets, AndroidView and UiKitView, allow you to embed an Android or iOS platform view in a Flutter app. These platform view widgets participate in the composition model, which means you can integrate them with other Flutter content.

Hummingbird to bring Flutter to the web

Flutter has primarily focused on iOS and Android, but now Google is extending it to a broader set of platforms. To achieve this goal, they recently shared a project called Flutter Desktop Embedding, which aims to bring Flutter to desktop operating systems. To expand Flutter to the web, they introduced Hummingbird. It is a web-based implementation of the Flutter runtime that utilizes Dart's ability to compile not just to native ARM code but also to JavaScript.

Google's product manager for Flutter, Tim Sneath, told TechCrunch, "From the beginning, we designed Flutter to be a portable UI toolkit, not just a mobile UI toolkit. And so we've been experimenting with how we can bring Flutter to different places."

To explain what Hummingbird exactly is, Sneath added, "One of the great things about Flutter itself is that it compiles to machine code, to Arm code. But Hummingbird extends that further and says, okay, we'll also compile to JavaScript and we'll replace the Flutter engine on the web with the Hummingbird engine which then enables Flutter code to run without changes in web browsers. And that, of course, extends Flutter's perspective to a whole new ecosystem."

To read the official announcement about Flutter, check out Google's blog.

- Google Flutter moves out of beta with release preview 1
- Google Dart 2.1 released with improved performance and usability
- JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript

Flutter challenges Electron, soon to release a desktop client to accelerate mobile development

Bhagyashree R
03 Dec 2018
3 min read
On Saturday, the Flutter team announced that, as competition to Electron, they will soon be releasing a native desktop client to accelerate mobile development. The Flutter native desktop client will come with support for resizing the emulator during runtime, using assets from your PC, better RAM usage, and more.

Flutter is Google's open source mobile app SDK, which enables developers to write once and deploy natively on different platforms such as Android, iOS, Windows, Mac, and Linux. Additionally, developers can share the business logic with the web using AngularDart.

Here's what Flutter for desktop brings in:

Resizable emulator during runtime

To check how your layout looks on different screen sizes, you currently need to create different emulators, which is quite cumbersome. To solve this issue, the Flutter desktop client will provide an emulator that can be resized at runtime.

Use assets saved on your PC

When working with apps that interact with assets on the phone, developers first have to move all the testing files to the emulator or the device. With the desktop client, you can simply pick the file you want with your native file picker. Additionally, you don't have to make any changes to the code, as the desktop implementation uses the same method as the mobile implementation.

Hot reload and debugging

Hot reload and debugging allow quick experimenting, building UIs, adding new features, and fixing bugs. The desktop client supports these capabilities for better productivity.

Better RAM usage

The Android emulator consumes up to 1 GB of RAM, and RAM usage becomes worse when you are also running IntelliJ and the ever-RAM-hungry Chrome. Since the embedder runs natively, there will be no need for Android.

Universally usable widgets

You will be able to use most of the widgets you create, such as buttons and loading indicators, universally. Widgets that require a different look per platform can be encapsulated fairly easily by checking the TargetPlatform property.

Pages and plugins

Pages differ in layout, depending on the platform and screen size, but not in functionality. With the PageLayoutWidget, you will be able to easily create accurate layouts for each platform. As for plugins, you do not have to make any changes to the Flutter code when using a plugin that also supports the desktop embedder.

The Flutter desktop client is still in alpha, which means there will be more changes in the future. Read the official announcement on Medium.

- Google Flutter moves out of beta with release preview 1
- Google Dart 2.1 released with improved performance and usability
- JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript

Microsoft wins $480 million US Army contract for HoloLens

Natasha Mathur
30 Nov 2018
3 min read
Microsoft won a $480 million contract earlier this week to develop and supply prototypes of augmented reality systems for use in combat and military training by the US Army. The project, 'Integrated Visual Augmentation System' (IVAS), formerly identified as Heads Up Display (HUD) 3.0, aims to rapidly develop, test, and manufacture a single platform that soldiers can use to fight, rehearse, and train, and to offer increased lethality, mobility, and situational awareness.

The system will provide remote viewing of weapon sights to enable low-risk, rapid target acquisition, and will integrate both thermal and night-vision cameras. Moreover, it will be capable of tracking a soldier's heart and breathing rates and detecting concussions. Under this contract, the military will order an initial run of 2,550 prototypes, and may go on to buy more than 100,000 of these devices.

As per the FBO (Federal Business Opportunities) listing, the Close Combat Force suffers the highest casualty rate in combat. Current and future battles are going to be fought in urban and subterranean environments where current capabilities are not sufficient. IVAS will address this by providing increased sets and repetitions in complex environments, making use of its STE Squad capability integrated with HUD 3.0. "Soldier lethality will be vastly improved through cognitive training and advanced sensors, enabling squads to be first to detect, decide, and engage," reads the white paper.

HUD 3.0 will offer integration of head, body, and weapon and provide significant enhancement of detection, targeting, engagements, and AI to match the speed of war. The STE Squad capability provides global terrain replication of operational environments for close-combat training and rehearsals before soldiers actually engage in such environments. Dismounted training generally relies on computer or projector screens that severely restrict soldier movement. The new STE Squad capability brings together the live and virtual environments, thereby developing an enhanced live training capability using the operationally worn HUD 3.0.

The U.S. Army and the Israeli military have already been customers of Microsoft's HoloLens devices, which they have used in training. With this contract, the Army becomes one of Microsoft's top and most important HoloLens customers. Magic Leap was also trying to win the contract, which would have been part of a $500+ million Army program.

"Augmented reality technology will provide troops with more and better information to make decisions. This new work extends our longstanding, trusted relationship with the Department of Defense to this new area," mentioned a Microsoft spokesman in an email.

For more information, check out the official IVAS white paper.

- Microsoft's move towards ads on the Mail App in Windows 10 sparks privacy concerns
- Microsoft amplifies focus on conversational AI: Acquires XOXCO; shares guide to developing responsible bots
- Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool

Project Fi is now Google Fi, will support multiple Android based phones, offer beta service for iPhone

Melisha Dsouza
29 Nov 2018
3 min read
Google has officially announced that Project Fi will be rebranded as 'Google Fi'. They have also expanded Fi's support to multiple phones from Samsung, Moto, LG, and OnePlus, as well as the iPhone. The service for the iPhone will be in beta for the time being. Even though Google admits that the process for the iPhone will require "a few extra steps to get set up", there will be a new Google Fi iOS app to help customers get comfortable with the process.

What is Google Fi?

Google Fi is a "mobile virtual network operator" recognized for its unique approach compared to most other network carriers. It does not operate its own network, but piggybacks on those of T-Mobile, Sprint, and US Cellular, handing a customer's phone to whichever offers the strongest connection at any given time. Fi also offers simplified data plans, easy international use, and a slew of other perks. It has no long-term contracts; a customer pays on a month-to-month basis, and data costs the same internationally as it does at home in most countries. There is just a single payment "plan," which starts at $20 for access to a line, plus an additional $10 for every gigabyte consumed. If a user has only one line and uses more than 6 GB, they pay a maximum of $80 for that month.

The catch with Fi for iPhones

Fi operates as a virtual network operator, and only a few phones, including Google Pixels and those explicitly "designed for Fi", are able to dynamically switch between those carriers' networks. Android phones and iPhones that aren't built specifically for Google Fi will miss out on this functionality. In addition, since iPhone support is in beta, customers who choose to use Fi on their iPhones may have a less-than-smooth experience. Important secondary features like visual voicemail, calls and texts over Wi-Fi, automated spam detection, and international tethering will be left out because of the beta support. The Fi website cautions that iPhone users will have to do a bit of tweaking to get texting to work properly: the iMessage service will function "out of the box", but APN settings will need to be modified to enable MMS.

That being said, the real catch with Google Fi has always been its simplicity and affordability, both of which remain regardless of the device a customer chooses to use. Google Fi still has some catching up to do with other carriers when it comes to features like support for the RCS Universal Profile for texting and number sharing for things like LTE smartwatches. Even so, the announcement of extending Fi's support to multiple devices signals Google's effort to broaden its user base and boost device support.

Head over to Google's official blog for more information on this announcement.

- A year later, Google Project Zero still finds Safari vulnerable to DOM fuzzing using publicly available tools to write exploits
- BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"
- #GoogleWalkout demanded a 'truly equitable culture for everyone'; Pichai shares a "comprehensive" plan for employees to safely report sexual harassment
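To make the pricing arithmetic above concrete, here is a small sketch of the single-line bill calculation as the article describes it: $20 for the line, $10 per gigabyte, capped at $80 once usage passes 6 GB. The function name is illustrative, and charging fractional gigabytes proportionally is an assumption rather than Google's published billing logic.

```typescript
// Sketch of the single-line Google Fi bill as described in the article.
function monthlyFiBillUSD(gigabytesUsed: number): number {
  const BASE_FEE = 20;        // $20 for access to one line
  const PER_GB = 10;          // $10 for every gigabyte consumed
  const SINGLE_LINE_CAP = 80; // 20 + 6 * 10 — usage beyond 6 GB costs nothing extra

  const uncapped = BASE_FEE + PER_GB * gigabytesUsed;
  return Math.min(uncapped, SINGLE_LINE_CAP);
}

// Example: 3.5 GB comes to $55, while 10 GB is capped at $80.
console.log(monthlyFiBillUSD(3.5), monthlyFiBillUSD(10));
```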

Apple app store antitrust case to be heard by U.S. Supreme Court today

Sugandha Lahoti
26 Nov 2018
2 min read
An antitrust case that accuses Apple of breaking antitrust laws by monopolizing the market for iPhone apps will be heard by the U.S. Supreme Court today. According to a report by Reuters, Apple collects payments from iPhone users on its App Store, keeping a 30 percent commission on each purchase, which leads to inflated prices compared to what apps would cost if they were available from other sources. This results in customers paying more than they should.

The antitrust lawsuit dates to 2011 and alleges that Apple has created a monopoly by allowing apps to be sold only through its App Store and by charging excessive commissions. Apple is appealing a lower-court decision, maintaining that its practices are not monopolistic. It argues that it is only acting as an agent for developers who sell to consumers via the App Store, not as a distributor. If the Supreme Court rules in favor of the customers, it would "threaten the burgeoning field of e-commerce", says Apple.

In its defense, Apple has cited a 1977 Supreme Court ruling. Reuters reports: Apple has seized upon a 1977 Supreme Court ruling that limited damages for anti-competitive conduct to those directly overcharged instead of indirect victims who paid an overcharge passed on by others. Part of the concern, the court said in that case, was to free judges from having to make complex calculations of damages.

Apple is backed by the attorneys general of 30 states, including California, Texas, Florida, and New York. The U.S. Chamber of Commerce, a business group that is also backing Apple, says, "The increased risk and cost of litigation will chill innovation, discourage commerce, and hurt developers, retailers, and consumers alike."

The nine justices of the U.S. Supreme Court will hear arguments in Apple's bid to escape damages today. The justices will ultimately decide a broader question, writes Reuters: can consumers even sue for damages in an antitrust case like this one?

- Apple has quietly acquired privacy-minded AI startup Silk Labs, reports Information
- The White House is reportedly launching an antitrust investigation against social media companies
- Tim Cook criticizes Google for their user privacy scandals but admits to taking billions from Google Search

The ethical mobile OS, /e/-MVP beta2 ported to Android-Oreo, /e/ powered smartphone may be released soon!

Melisha Dsouza
26 Nov 2018
3 min read
Early last month, e.foundation announced the beta release of /e/, an OS from the creator of Mandrake Linux that is completely focused on user privacy. The team recently announced that they have finished porting /e/-MVP beta2 to Android Oreo, which means the OS can now support many more recent devices.

/e/-MVP beta2 is now supported on 49 different devices, including:

- Xiaomi Redmi Note 5 Pro
- Xiaomi Mi A1
- Xiaomi Mi 6
- Xiaomi Pocophone F1
- OnePlus 5T
- Google Pixel XL

"At /e/, we want to build an alternative mobile operating system that everybody can enjoy using, one that is a lot more respectful of user's data privacy while offering them real freedom of choice. We want to be known as the ethical mobile operating system, built in the public interest." - /e/ project leader, Gaël Duval

The OS is free and open source. Its ROM uses microG instead of Google's core apps. /e/ ships new default applications, including a mail app, an SMS app (Signal), a chat application (Telegram), and much more. The Mozilla NLP makes geolocation available even when a GPS signal is not.

The team has also been receiving requests to release a smartphone running /e/. They have finally taken the suggestion into consideration and have started talks with several hardware makers. They have also started a poll asking users which OS they would prefer on the next Fairphone. The Fairphone is a smartphone designed with ecological and ethical issues in mind. It is made from recycled, recyclable, and responsibly sourced materials, with minimal packaging. In a Fairphone, if any component breaks down or the user wants to upgrade it, only that element needs to be replaced.

Apart from this short announcement on the blog, there is not much documentation for users to refer to in order to clarify their doubts about the project. Users shared their reactions to the announcement on Hacker News.

If you are interested in "leaving" Apple and Google and "reconquering" your privacy, read Duval's Twitter thread, which answers common user queries on /e/. Head over to e.foundation's official blog to know more about this announcement.

- Gaël Duval, creator of the ethical mobile OS, /e/, calls out Tim Cook for being an 'opportunist' in the ongoing digital privacy debate
- Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
- 90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study