
Tech News - Application Development

279 Articles

Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more

Bhagyashree R
26 Jun 2019
3 min read
On Monday, Christian F.K. Schaller, Senior Manager for Desktop at Red Hat, shared a blog post outlining the improvements and features coming in Fedora Workstation 31. These include Wayland improvements, more PipeWire functionality, and continued work on Flatpak, Fleet Commander, and more. Here are some of the enhancements coming to Fedora Workstation 31:

Wayland transition nearing completion

Wayland is a display server protocol introduced to replace the X Window System with a modern, simpler windowing system on Linux and other Unix-like operating systems. The team is focusing on removing the X Window System dependency so that GNOME Shell can run without XWayland. Schaller shared that the work on removing the X dependency is done for the shell itself; however, some work remains in the GNOME Settings daemon. Once this work is complete, an X server (XWayland) will only start when an X application is run and will shut down when the application stops.

The team is also working on allowing X applications to run as root under XWayland. Running desktop applications as root is generally not considered safe, but a few applications only work when run as root, which is why the team decided to continue supporting this in XWayland. The team is also adding support for the NVIDIA binary driver, to allow running a native Wayland session on top of it.

PipeWire with an improved desktop sharing portal

PipeWire is a multimedia framework that aims to improve the handling of audio and video in Linux. This release will ship with improved core PipeWire features. The existing desktop sharing portal has been enhanced and will soon have Miracast support. The team's ultimate goal is to make the GNOME integration even more seamless than the standalone app.
Better infrastructure for building Flatpaks

Flatpak is a utility for software deployment and package management on Linux. The team is improving the infrastructure for building Flatpaks from RPMs. They will also be offering applications from flathub.io and quay.io out of the box, in accordance with Fedora's rules for third-party software. The team will also make a Red Hat UBI-based runtime available. Third-party developers can use this runtime to build their applications and be sure they will be supported by Red Hat for the lifetime of a given RHEL release.

Fedora Toolbox with improved GNOME Terminal

Fedora Toolbox is a tool that gives developers a seamless experience when using an immutable OS like Silverblue. Improvements are currently being made to GNOME Terminal to ensure more natural behavior inside the terminal when interacting with pet containers. The team is looking for ways to make the selection of containers more discoverable, so that developers can easily get access to, for instance, a Red Hat UBI container or a Red Hat TensorFlow container.

Along with these, the team is improving the infrastructure for Linux fingerprint reader support, securing GameMode, adding support for the Dell Totem, improving media codec support, and more. To know more, check out Schaller's blog post.

Fedora 30 releases with GCC 9.0, GNOME 3.32, performance improvements, and much more!
Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more
Fedora 31 will now come with Mono 5 to offer open-source .NET support

Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust

Bhagyashree R
26 Jun 2019
4 min read
On Monday, Gregory Szorc, a Developer Productivity Engineer at Airbnb, introduced PyOxidizer, a Python application packaging and distribution tool written in Rust. The tool is available for Windows, macOS, and Linux. Sharing his vision behind the tool, Szorc wrote in the announcement, “I want PyOxidizer to provide a Python application packaging and distribution experience that just works with a minimal cognitive effort from Python application maintainers.”

https://twitter.com/indygreg/status/1143187250743668736

PyOxidizer aims to solve complex packaging and distribution problems so that developers can put their efforts into building applications instead of juggling build systems and packaging tools. According to the GitHub README, “PyOxidizer is a collection of Rust crates that facilitate building libraries and binaries containing Python interpreters.” Its most visible component is the ‘pyoxidizer’ command-line tool. With it, you can create new projects, add PyOxidizer to existing projects, produce binaries containing a Python interpreter, and access various related functionality.

How PyOxidizer differs from other Python packaging/distribution tools

PyOxidizer provides the following benefits over other Python application packaging/distribution tools:

- It works across all popular platforms, unlike many other tools that only target Windows or macOS.
- It works even if the executing system does not have Python installed.
- It has no special system requirements such as SquashFS or container runtimes.
- Its startup performance is comparable to traditional Python execution.
- It supports single-file executables with minimal or no system dependencies.
Here are some of the features PyOxidizer comes with:

Generates a standalone single executable file

One of PyOxidizer's most important features is that it can produce a single executable file containing a fully-featured Python interpreter, its extensions, the standard library, and your application's modules and resources. PyOxidizer embeds self-contained Python interpreters as a tool and software library by exposing its lower-level functionality.

Serves as a bridge between Rust and Python

The ‘Oxidizer’ part of the name comes from Rust: internally, PyOxidizer uses Rust to produce executables and to manage the embedded Python interpreter and its operations. Beyond solving the packaging and distribution problem, PyOxidizer can also serve as a bridge between the two languages, making it possible to add a Python interpreter to any Rust project and vice versa. With PyOxidizer, you can bootstrap a new Rust project that contains an embedded version of Python and your application. “Initially, your project is a few lines of Rust that instantiates a Python interpreter and runs Python code. Over time, the functionality could be (re)written in Rust and your previously Python-only project could leverage Rust and its diverse ecosystem,” explained Szorc.

Szorc chose Rust for the run-time and build-time components because it is considered one of the superior systems programming languages and handles difficult problems like cross-compiling without requiring considerable effort. He believes that implementing the embedding component in Rust also opens more opportunities to embed Python in Rust programs. “This is largely an unexplored area in the Python ecosystem and the author hopes that PyOxidizer plays a part in more people embedding Python in Rust,” he added.
PyOxidizer executables are faster to start and import

At execution time, binaries built with PyOxidizer do not have to do anything special, such as creating a temporary directory to run the Python interpreter. Everything is loaded directly from memory without any explicit I/O operations: when a Python module is imported, its bytecode is loaded from a memory address in the executable using zero-copy. This makes executables produced by PyOxidizer faster to start and import.

PyOxidizer is still in its early stages. The initial release is good at producing executables embedding Python, but not much has been implemented yet to solve the distribution part of the problem. Missing features we can expect in the future include an official build environment, support for C extensions, more robust packaging support, easy distribution, and more. Szorc encourages Python developers to try the tool and share feedback with him or file an issue on GitHub. You can also contribute to the project via Patreon or PayPal.

Many users are excited to try the tool:

https://twitter.com/kevindcon/status/1143750501592211456
https://twitter.com/acemarke/status/1143389113871040517

Read Szorc's announcement to know more in detail.

Python 3.8 beta 1 is now ready for you to test
PyPI announces 2FA for securing Python package downloads
Matplotlib 3.1 releases with Python 3.6+ support, secondary axis support, and more
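PyOxidizer's loader internals aren't shown in the announcement, but CPython's import machinery makes the from-memory idea easy to sketch. The snippet below is a hypothetical illustration, not PyOxidizer code: a custom meta-path finder serves module source held in an in-memory dict, so importing the (invented) module `greetings` touches no files.

```python
import importlib.abc
import importlib.util
import sys

# Module source held entirely in memory -- standing in for the resources
# PyOxidizer embeds in the executable itself.
IN_MEMORY_MODULES = {
    "greetings": "def hello():\n    return 'hello from memory'\n",
}

class MemoryFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, name, path=None, target=None):
        if name in IN_MEMORY_MODULES:
            return importlib.util.spec_from_loader(name, self)
        return None

    def exec_module(self, module):
        # Compile and execute the in-memory source -- no filesystem I/O.
        source = IN_MEMORY_MODULES[module.__name__]
        exec(compile(source, f"<memory:{module.__name__}>", "exec"),
             module.__dict__)

sys.meta_path.insert(0, MemoryFinder())

import greetings
print(greetings.hello())  # -> hello from memory
```

A real PyOxidizer binary goes further (bytecode rather than source, zero-copy reads from the executable image), but the meta-path hook is the same extension point CPython exposes for custom import sources.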

A vulnerability discovered in Kubernetes kubectl cp command can allow malicious directory traversal attack on a targeted system

Amrata Joshi
25 Jun 2019
3 min read
Last week, the Kubernetes team announced that a security issue (CVE-2019-11246) had been discovered in the kubectl cp command. According to the team, the issue could lead to a directory traversal in such a way that a malicious container could replace or create files on a user's workstation.

The vulnerability impacts kubectl, the command-line interface used to run commands against Kubernetes clusters. It was discovered by Charles Holmes of Atredis Partners as part of the ongoing Kubernetes security audit sponsored by CNCF (Cloud Native Computing Foundation). The issue is a client-side defect and requires user interaction to exploit. According to the post, the issue is of high severity, and the Kubernetes team encourages users to upgrade kubectl to Kubernetes 1.12.9, 1.13.6, or 1.14.2 or later to fix it. To upgrade, users should follow the installation instructions in the docs. The announcement reads, “Thanks to Maciej Szulik for the fix, to Tim Allclair for the test cases and fix review, and to the patch release managers for including the fix in their releases.”

The kubectl cp command copies files between containers and the user's machine. To copy files from a container, Kubernetes runs tar inside the container to create a tar archive and copies it over the network, after which kubectl unpacks it on the user's machine. If the tar binary in the container is malicious, it could run arbitrary code and produce unexpected, malicious results. An attacker could use this to write files to any path on the user's machine when kubectl cp is called, limited only by the system permissions of the local user. The vulnerability is quite similar to CVE-2019-1002101, an earlier issue in the kubectl binary, specifically in the kubectl cp command, which an attacker could likewise exploit to write files to any path on the user's machine.
Wei Lien Dang, co-founder and vice president of product at StackRox, said, “This vulnerability stems from incomplete fixes for a previously disclosed vulnerability (CVE-2019-1002101). This vulnerability is concerning because it would allow an attacker to overwrite sensitive file paths or add files that are malicious programs, which could then be leveraged to compromise significant portions of Kubernetes environments.”

Users are advised to run kubectl version --client; if it does not report client version 1.12.9, 1.13.6, or 1.14.2 or newer, they are running a vulnerable version that needs to be upgraded. To know more about this news, check out the announcement.

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!
HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
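To see why a malicious container-side tar is dangerous, here is a minimal, self-contained Python sketch (not the kubectl code) of the traversal class behind CVE-2019-11246: an archive member named with `../` components would land outside the destination directory unless the client validates paths before unpacking. The helper name `is_within` is ours, for illustration.

```python
import io
import os
import tarfile

# Build an in-memory tar archive whose member name tries to escape the
# extraction directory -- the trick a malicious container-side `tar`
# could play on the client.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"owned"
    info = tarfile.TarInfo(name="../../evil.txt")  # path traversal attempt
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

def is_within(base, member_name):
    """True if extracting `member_name` under `base` stays inside `base`."""
    base = os.path.abspath(base)
    target = os.path.abspath(os.path.join(base, member_name))
    return os.path.commonpath([base, target]) == base

# Inspect (without extracting) and flag unsafe member paths.
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        verdict = "ok" if is_within("dest", member.name) else "BLOCKED"
        print(member.name, verdict)  # -> ../../evil.txt BLOCKED
```

The fixed kubectl performs an equivalent sanity check on member paths client-side, so a hostile archive can no longer write outside the destination directory.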

Qt and LG Electronics partner to make webOS as the platform of choice for embedded smart devices

Amrata Joshi
25 Jun 2019
3 min read
Qt and LG Electronics have partnered to provide webOS as the platform for embedded smart devices in the automotive, robotics, and smart home sectors. webOS, also known as LG webOS, is a Linux kernel-based multitasking operating system for smart devices. The platform powers smart home devices including LG Smart TVs and smart home appliances, and can also deliver greater consumer benefits in high-growth industries such as the automotive sector. The system UI of LG webOS is written mostly using Qt Quick 2 and Qt technology. In March last year, LG announced an open-source edition of webOS.

I.P. Park, president and CTO of LG Electronics, said in a statement, “Smart devices have the potential to deliver an unmatched customer experience wherever we may be – in our homes, cars, and anywhere in between.” Park added, “Our partnership with Qt enables us to dramatically enhance webOS, providing our customers with the most advanced platform for the creation of highly immersive devices and services. We look forward to continuing our long-standing collaboration with Qt to deliver memorable experiences in the exciting areas of automotive, smart homes and robotics.”

LG selected Qt as its business and technical partner for webOS to meet challenging requirements and to navigate the market dynamics of the automotive, smart home, and robotics industries. Through the partnership, Qt will provide LG with an end-to-end, integrated, hardware-agnostic development environment for engineers, developers, and designers to create innovative and immersive apps and devices. webOS will also officially become a reference operating system of Qt. The partnership will help customers leverage webOS' set of middleware-enabled functionality, saving them time and effort in their embedded development projects. Qt's feature-rich development tools, such as Qt Creator, Qt Design Studio, and Qt 3D Studio, will also support webOS.
Juha Varelius, CEO of Qt, said, “LG has been a technology leader for generations, which is one of the many reasons they’ve become such a trusted partner of Qt.” Varelius added, “With the company’s initiative to expand the reach of webOS into rapidly growing markets, LG is underscoring the massive potential of Qt-enabled connected experiences. By collaborating with LG on this initiative, we’re able to make it as easy as possible for our customers to build devices that bring a new definition to the word ‘smart’.”

Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more
Qt Creator 4.9.0 released with language support, QML support, profiling and much more
Qt installation on different platforms [Tutorial]

GNU APL 1.8 releases with bug fixes, FFT, GTK, RE and more

Vincy Davis
24 Jun 2019
2 min read
Yesterday, GNU APL version 1.8 was released with bug fixes, FFT, GTK, RE, user-defined APL commands, and more. GNU APL is a free interpreter for the programming language APL.

What's new in GNU APL 1.8?

- Bug fixes
- FFT (fast Fourier transforms: real, complex, and window functions)
- GTK (create GUI windows from APL)
- RE (regular expressions)
- User-defined APL commands
- An interface from Python into GNU APL. With this interface, one can use APL's vector capabilities in programs written in Python.

People are excited about the GNU APL 1.8 release. A user on Hacker News states, “Wow, each of ⎕FFT, ⎕GTK and ⎕RE are substantial and impressive additions! Thank you, and congratulations on the new release!” Another user says, “APL can do some pretty cool stuff.” Another comments, “I'd like to play with this as it is a free APL that I could use for work without paying a license (like Dyalog APL requires). J is another free array language, but it doesn't use the APL characters that I enjoy. I've had a little trouble in the past getting it to install (this was version 1.7) on Ubuntu. Granted I've never been an expert at installing from source, but a more in-depth installation guide or YouTube tutorial would help some. Thanks for doing this btw! I hope to eventually get to check this out!”

Introducing Luna, world’s first programming language with dual syntax representation, data flow modeling and much more!
Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study
Stack Overflow survey data further confirms Python’s popularity as it moves above Java in the most used programming language list
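The release announcement does not document the ⎕FFT or Python-interface APIs, so as a language-neutral illustration of what a Fourier transform system function computes, here is a naive discrete Fourier transform in plain Python (O(n²); a real FFT implementation would be O(n log n)):

```python
import cmath

def dft(samples):
    """Naive discrete Fourier transform -- a plain-Python stand-in for
    the kind of computation GNU APL's new FFT facility performs."""
    n = len(samples)
    return [
        sum(samples[k] * cmath.exp(-2j * cmath.pi * j * k / n)
            for k in range(n))
        for j in range(n)
    ]

# A pure 1 Hz cosine over 8 samples: all energy lands in bins 1 and n-1.
signal = [cmath.cos(2 * cmath.pi * k / 8).real for k in range(8)]
spectrum = dft(signal)
print([round(abs(x), 6) for x in spectrum])
# -> [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```

In APL the same vector-at-once style is the default, which is why a built-in ⎕FFT pairs naturally with the language.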

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!

Vincy Davis
20 Jun 2019
5 min read
Update: On July 23rd, Kenny Coleman, the Enhancements Lead of Kubernetes 1.15 at VMware, published a “What's New in Kubernetes 1.15” video with the Cloud Native Computing Foundation (CNCF). In the video, he explains in detail the three major new features in Kubernetes 1.15: dynamic HA clusters with kubeadm, volume cloning, and CustomResourceDefinitions (CRDs), highlighting each feature and its importance to users. Watch the video below for Kenny Coleman's full talk on Kubernetes 1.15.

https://www.youtube.com/watch?v=eq7dgHjPpzc

On June 19th, the Kubernetes team announced the release of Kubernetes 1.15, which consists of 25 enhancements: 2 moving to stable, 13 in beta, and 10 in alpha. The key themes of this release are extensibility around core Kubernetes APIs, cluster lifecycle stability, and usability improvements. This is Kubernetes' second release this year. The previous version, Kubernetes 1.14, released three months ago, had 10 stable enhancements, the most stable features delivered in a single release. In an interview with The New Stack, Claire Laurence, a team lead at Kubernetes, said of this release, “We’ve had a fair amount of features progress to beta. I think what we’ve been seeing a lot with these alpha and beta features as they progress is a lot of continued focus on stability and overall improvement before indicating that those features are stable.” Let's have a brief look at the new features and updates.

#1 Extensibility around core Kubernetes APIs

The theme of the new developments around CustomResourceDefinitions is data consistency and native behavior: a user should not notice whether they are interacting with a CustomResource or with a Golang-native resource. Hence, from v1.15 onwards, Kubernetes checks each schema against a restriction called a “structural schema”.
A structural schema enforces non-polymorphic and complete typing of each field in a CustomResource. Of the five enhancements in this area, ‘CustomResourceDefinition Defaulting’ is an alpha release: defaults are specified using the default keyword in the OpenAPI validation schema, and defaulting will be available as alpha in Kubernetes 1.15 for structural schemas. The other four enhancements are in beta:

CustomResourceDefinition Webhook Conversion
CustomResourceDefinitions gain the ability to convert between different versions on-the-fly, just as users are used to from native resources.

CustomResourceDefinition OpenAPI Publishing
OpenAPI publishing for CRDs will be available with Kubernetes 1.15 as beta, but only for structural schemas.

CustomResourceDefinitions Pruning
Pruning is the automatic removal of unknown fields in objects sent to a Kubernetes API; a field is unknown if it is not specified in the OpenAPI validation schema. Pruning enforces that only data structures specified by the CRD developer are persisted to etcd. This is the behavior of native resources, and will be available for CRDs as well, starting as beta in Kubernetes 1.15.

Admission Webhook Reinvocation & Improvements
In earlier versions, mutating webhooks were only called once, in alphabetical order, so a webhook that ran earlier could not react to the output of webhooks called later in the chain. With Kubernetes 1.15, mutating webhooks can opt in to re-invocation by specifying reinvocationPolicy: IfNeeded. If a later mutating webhook modifies the object, the earlier webhook gets a second chance.

#2 Cluster Lifecycle Stability and Usability Improvements

The cluster lifecycle building block, kubeadm, continues to receive feature and stability work, which is needed for bootstrapping production clusters efficiently.
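Pruning can be sketched in a few lines. The following Python snippet is an illustrative model, not the API server's implementation: the schema is reduced to a nested dict of declared property names, and any field the schema does not declare is dropped recursively before the object would be persisted.

```python
# Toy "structural schema": nested dict of the property names a CRD
# author declared. Leaf fields map to empty dicts.
SCHEMA = {
    "spec": {
        "replicas": {},
        "image": {},
        "resources": {"cpu": {}, "memory": {}},
    }
}

def prune(obj, schema):
    """Recursively remove fields that the schema does not declare."""
    if not isinstance(obj, dict):
        return obj  # scalar leaf: nothing to prune
    return {
        key: prune(value, schema[key])
        for key, value in obj.items()
        if key in schema
    }

submitted = {
    "spec": {
        "replicas": 3,
        "image": "nginx:1.17",
        "typo_field": "dropped",                    # unknown -> pruned
        "resources": {"cpu": "500m", "gpu": "1"},   # "gpu" unknown -> pruned
    }
}
print(prune(submitted, SCHEMA))
# -> {'spec': {'replicas': 3, 'image': 'nginx:1.17', 'resources': {'cpu': '500m'}}}
```

This is why pruning matters for data consistency: whatever a client sends, only the declared shape reaches etcd, exactly as with native resources.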
kubeadm has promoted high-availability (HA) capability to beta, allowing users to use the familiar kubeadm init and kubeadm join commands to configure and deploy an HA control plane. Certificate management has become more robust in 1.15, with kubeadm seamlessly rotating all certificates before expiry. The kubeadm configuration file API moves from v1beta1 to v1beta2 in 1.15, and kubeadm now has its own new logo.

Continued improvement of CSI

In Kubernetes 1.15, the Storage Special Interest Group (SIG) enables migration of in-tree volume plugins to the Container Storage Interface (CSI). SIG Storage worked on bringing CSI to feature parity with in-tree functionality, including resizing and inline volumes. It also introduces new alpha functionality in CSI that doesn't exist in the Kubernetes storage subsystem yet, like volume cloning. Volume cloning enables users to specify another PVC as a “DataSource” when provisioning a new volume. If the underlying storage system supports this functionality and implements the “CLONE_VOLUME” capability in its CSI driver, the new volume becomes a clone of the source volume.

Additional feature updates

- Support for Go modules in Kubernetes core
- Continued preparation for cloud provider extraction and code organization; the cloud provider code has been moved to kubernetes/legacy-cloud-providers for easier removal later and external consumption
- kubectl get and describe now work with extensions
- Nodes now support third-party monitoring plugins
- A new scheduling framework for scheduler plugins is now alpha
- The ExecutionHook API, designed to trigger hook commands in containers for different use cases, is now alpha
- The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs continue to be deprecated and will be retired in version 1.16

To know about the additional features in detail, check out the release notes.
https://twitter.com/markdeneve/status/1141135440336039936
https://twitter.com/IanColdwater/status/1141485648412651520

For more details on Kubernetes 1.15, check out the Kubernetes blog.

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes

Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”

Amrata Joshi
20 Jun 2019
4 min read
Yesterday, Daniel Stenberg, the lead developer of curl, announced that Google is planning to reimplement curl in “libcrurl”, to be renamed libcurl_on_cronet.

https://twitter.com/bagder/status/1141588339100934149

The official blog post reads, “The Chromium bug states that they will create a library of their own (named libcrurl) that will offer (parts of) the libcurl API and be implemented using Cronet.” Stenberg quotes the stated rationale for the reimplementation: “Implementing libcurl using Cronet would allow developers to take advantage of the utility of the Chrome Network Stack, without having to learn a new interface and its corresponding workflow. This would ideally increase ease of accessibility of Cronet, and overall improve adoption of Cronet by first-party or third-party applications.”

According to him, the team may also hope that third-party applications can switch to this library without having to switch to another API. If this works, the team might also create a “crurl” tool, their own version of the curl tool using their own library. Stenberg states in the post, “In itself is a pretty strong indication that their API will not be fully compatible, as if it was they could just use the existing curl tool…” He writes, “As the primary author and developer of the libcurl API and the libcurl code, I assume that Cronet works quite differently than libcurl so there’s going to be quite a lot of wrestling of data and code flow to make this API work on that code.”

The libcurl API is versatile and has developed over a period of almost 20 years. There is a lot of functionality, options, and subtle behavior that may or may not be easy to mimic, so even limiting the subset to a number of functions and libcurl options and making them work exactly as documented could be difficult and time-consuming.
He writes, “I don’t think applications will be able to arbitrarily use either library for a very long time, if ever. libcurl has 80 public functions and curl_easy_setopt alone takes 268 different options!”

Read Also: Cisco merely blacklisted a curl instead of actually fixing the vulnerable code for RV320 and RV325

According to Stenberg, there is still no clarity on API/ABI stability or on how Google plans to ship or version the library. He writes, “There’s this saying about imitation and flattery but getting competition from a giant like Google is a little intimidating. If they just put two paid engineers on their project they already have more dedicated man power than the original libcurl project does…”

On the upside, Google's team finding and fixing issues in the code and API would improve curl, make more users aware of libcurl and its API, and help the curl team make safe and solid Internet transfers easier for users and applications. Still, Stenberg cautions that applications will need to be aware of which API they work with to avoid confusion. He added, “Since I don’t think “libcrurl” will be able to offer a compatible API without a considerable effort, I think applications will need to be aware of which of the APIs they work with and then we have a “split world” to deal with for the foreseeable future and that will cause problems, documentation problems and users misunderstanding or just getting things wrong.” “Their naming will possibly also be the reason for confusion since “libcrurl” and “crurl” look so much like typos of the original names,” he said.

To know more about this news, check out the blog post by Daniel Stenberg.
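The "split world" risk can be made concrete with a small hypothetical sketch (class and option names invented for illustration, not from Chromium or curl): a reimplementation that offers only part of the setopt surface must reject or silently ignore the remaining options, so code written against full libcurl cannot be assumed to work on the subset.

```python
# Hypothetical partial reimplementation of an "easy handle" that
# honours only a tiny subset of setopt options. Real libcurl's
# curl_easy_setopt accepts 268 different options.
SUPPORTED_OPTIONS = {"URL", "FOLLOWLOCATION", "TIMEOUT"}

class PartialEasyHandle:
    def __init__(self):
        self.options = {}

    def setopt(self, option, value):
        if option not in SUPPORTED_OPTIONS:
            # A subset library must choose: reject (as here) or silently
            # ignore. Either way, full-libcurl applications break subtly.
            raise NotImplementedError(
                f"option {option!r} not offered by this subset"
            )
        self.options[option] = value

h = PartialEasyHandle()
h.setopt("URL", "https://example.com")   # in the subset: accepted
try:
    h.setopt("RESOLVE", "example.com:443:203.0.113.7")  # outside it
except NotImplementedError as err:
    print(err)
```

This is precisely why Stenberg expects applications to need awareness of which of the two APIs they are actually linked against.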
Google Calendar was down for nearly three hours after a major outage
How Genius used embedded hidden Morse code in lyrics to catch plagiarism in Google search results
Google, Facebook and Twitter submit reports to EU Commission on progress to fight disinformation

Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more

Bhagyashree R
20 Jun 2019
3 min read
Yesterday, the team behind Qt announced the release of Qt 5.13. This release comes with fully-supported Qt for WebAssembly, a Chromium 73-based Qt WebEngine, and many other updates. In this release, the Qt community and team focused on improving the tooling to make designing, developing, and deploying software with Qt more efficient.

https://twitter.com/qtproject/status/1141627444933398528

Following are some Qt 5.13 highlights:

Fully-supported Qt for WebAssembly

Qt for WebAssembly makes it possible to build Qt applications for web browsers. The team previewed the platform in Qt 5.12, and beginning with this release Qt for WebAssembly is fully supported. The module uses Emscripten, the LLVM-to-JavaScript compiler, to compile Qt applications for the web. This allows developers to run their native applications in any browser that supports WebAssembly.

Updates in the Qt QML module

The Qt QML module enables you to write applications and libraries in the QML language. Qt 5.13 comes with improved support for enums declared in C++. With this release, a JavaScript “null” used as a binding value is optimized at compile time, and QML now generates function tables on 64-bit Windows, making it possible to unwind the stack through JITed functions.

Updates in Qt Quick and Qt Quick Controls 2

Qt Quick is the standard library for writing QML applications, providing all the basic types required for creating user interfaces. This release adds support for hiding rows and columns in TableView. Qt Quick Controls 2 provides a set of UI controls for creating user interfaces. This release brings a new control named SplitView, with which you can lay out items horizontally or vertically with a draggable splitter between each item. Additionally, the team has added a cache property to the icon.
Qt WebEngine

Qt WebEngine provides a web browser engine that makes it easier to embed content from the web into applications on platforms that do not have a native web engine. The engine uses code from the open-source Chromium project and is now based on Chromium 73. This version supports PDF viewing via an internal Chromium extension, the Web Notifications API, and thread-safe, page-specific URL request interceptors. It also comes with an application-local client certificate store and client certificate support from QML.

Lars Knoll, Qt's CTO, and Tuukka Turunen, Qt's Head of R&D, will hold a webinar on July 2 to summarize all the news around Qt 5.13. Read the official announcement on Qt's website to know more in detail.

Qt Creator 4.9.0 released with language support, QML support, profiling and much more
Qt installation on different platforms [Tutorial]
Qt Creator 4.9 Beta released with QML support, programming language support and more!

Vincy Davis
19 Jun 2019
4 min read

Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards

Update: Five days after announcing the drop of the i386 architecture, Steve Langasek has changed his stance. Yesterday, 23rd June, Langasek apologised to users and clarified that this is not quite the case: Ubuntu is only dropping updates to the i386 libraries, which will be frozen at their 18.04 LTS versions. He also mentioned that they are planning to support i386 applications, including games, on versions of Ubuntu later than 19.10.

This update comes after Valve Linux developer Pierre-Loup Griffais tweeted on 21st June that Steam will not support Ubuntu 19.10 and its future releases, and recommended the same to its users. Griffais stated that Valve is planning to switch to a different distribution and is evaluating ways to minimize breakage for its users.

https://twitter.com/Plagman2/status/1142262103106973698

Amid all the uncertainty around i386, Wine developers have also raised concerns, because many 64-bit Windows applications still use a 32-bit installer or some 32-bit components. Rosanne DiMesio, one of the admins of Wine's Applications Database (AppDB) and Wiki, said in a mail archive that there are several possibilities, such as building pure 64-bit Wine packages for Ubuntu.

Yesterday the Ubuntu engineering team announced its decision to discontinue i386 (32-bit) as an architecture, from Ubuntu 19.10 onwards. In a post to the Ubuntu Developer Mailing List, Canonical's Steve Langasek explains that "i386 will not be included as an architecture for the 19.10 release, and we will shortly begin the process of disabling it for the eoan series across Ubuntu infrastructure." Langasek also mentions that existing builds and packages of 32-bit software, libraries, and tools will no longer work on newer versions of Ubuntu, and that the Ubuntu team will work out how to handle 32-bit support over the course of the 19.10 development cycle.
The topic of dropping i386 systems has been under discussion in the Ubuntu developer community since last year. One of the mails in the archive mentions that "Less and less non-amd64-compatible i386 hardware is available for consumers to buy today from anything but computer parts recycling centers. The last of these machines were manufactured over a decade ago, and support from an increasing number of upstream projects has ended."

Earlier this year, Langasek stated in one of his mail archives that running a 32-bit i386 kernel on recent 64-bit Intel chips carries a risk of weaker security than using a 64-bit kernel. Usage of i386 has also declined broadly across the ecosystem, and hence it is "increasingly going to be a challenge to keep software in the Ubuntu archive buildable for this target", he adds.

Langasek also informed users that automated upgrades to 18.10 are disabled on i386. This was done to let i386 users stay on the LTS, which will be supported until 2023, rather than be stranded on a non-LTS release that will only be supported until early 2021.

The general reaction to this news has been negative, with users expressing outrage at the discontinuation of the i386 architecture. A user on Reddit says, "Dropping support for 32-bit hosts is understandable. Dropping support for 32 bit packages is not. Why go out of your way to screw over your users?" Another user comments, "I really truly don't get it. I've been using ubuntu at least since 5.04 and I'm flabbergasted how dumb and out of sense of reality they have acted since the beginning, considering how big of a headstart they had compared to everyone else. Whether it's mir, gnome3, unity, wayland and whatever else that escapes my memory this given moment, they've shot themselves in the foot over and over again."

On Hacker News, a user commented, "I have a 64-bit machine but I'm running 32-bit Debian because there's no good upgrade path, and I really don't want to reinstall because that would take months to get everything set up again. I'm running Debian not Ubuntu, but the absolute minimum they owe their users is an automatic upgrade path."

A few think this step was needed. Another Redditor adds, "From a developer point of view, I say good riddance. I understand there is plenty of popular 32-bit software still being used in the wild, but each step closer to obsoleting 32-bit is one step in the right direction in my honest opinion."

Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users

Amrata Joshi
18 Jun 2019
5 min read

Docker and Microsoft collaborate over WSL 2, future of Docker Desktop for Windows is near

WSL was a great effort toward emulating a Linux kernel on top of Windows, but due to certain differences between Windows and Linux it was impossible to run the Docker Engine and Kubernetes directly inside WSL. So the Docker Desktop team developed an alternative solution, with the help of Hyper-V VMs and LinuxKit, to achieve seamless integration.

On 16th June, Docker announced its plans around WSL 2, which brings a major architecture change: instead of an emulation layer, it provides a real Linux kernel running inside a lightweight VM. This approach is architecturally similar to LinuxKit on Hyper-V, but WSL 2 has the additional benefit of being more lightweight and tightly integrated with Windows. Even the Docker daemon runs properly on it, with great performance.

The team further announced that they are working on a new version of Docker Desktop that will leverage WSL 2, with a public preview expected in July. The official blog reads, "We are very excited about this technology, and we are happy to announce that we are working on a new version of Docker Desktop leveraging WSL 2, with a public preview in July. It will make the Docker experience for developing with containers even greater, unlock new capabilities, and because WSL 2 works on Windows 10 Home edition, so will Docker Desktop."

On the collaboration with Microsoft, the blog reads, "As part of our shared effort to make Docker Desktop the best way to use Docker on Windows, Microsoft gave us early builds of WSL 2 so that we could evaluate the technology, see how it fits with our product, and share feedback about what is missing or broken. We started prototyping different approaches and we are now ready to share a little bit about what is coming in the next few months."

The future of Docker Desktop will be based on WSL 2
The team will replace the Hyper-V VM with a WSL 2 integration package.
The package will offer the same features as the current Docker Desktop VM: Kubernetes 1-click setup, automatic updates, transparent HTTP proxy configuration, access to the daemon from Windows, etc. This package will contain both the server-side components required to run Docker and Kubernetes and the CLI tools used to interact with those components within WSL.

WSL 2 will enable seamless integration with Linux
With WSL 2 integration, users will experience seamless integration with Windows, and Linux programs running inside WSL will be able to do the same. This has a huge impact for developers working on projects targeting a Linux environment, or with a build process for Linux, as there will no longer be a need to maintain both Linux and Windows build scripts. For example, a developer at Docker can now work on the Linux Docker daemon on Windows, using the same set of tools and scripts as a developer on a Linux machine.

Bind mounts from WSL will now support inotify events (inotify is the Linux kernel's file-change notification subsystem) and will have almost identical I/O performance as on a native Linux machine. This solves one of the major Docker Desktop issues with I/O-heavy toolchains, and will benefit NodeJS, PHP, and other web development tools.

Improved performance and reduced memory consumption
The VM has been set up to use dynamic memory allocation and to schedule work on all the host CPUs, while consuming as little memory as possible within the limits of what the host can provide. Docker Desktop will leverage this to improve its resource consumption, using CPU and memory according to its needs. CPU/memory-intensive tasks such as building a container will also run much faster.

Leveraging WSL 2, Docker Desktop will support reliable bind mounts
One of the major problems users have with Docker Desktop is the reliability of Windows file bind mounts.
The current implementation depends on the Samba Windows service, which can be deactivated, blocked by enterprise GPOs, or blocked by third-party firewalls. Docker Desktop with WSL 2 solves these issues by leveraging WSL features to implement the bind mounts of Windows files.

A few users seem unhappy with this news; one of them commented on HackerNews, "So, I think the main sticking point here is the lock-in of Hyper-V. By making a new popular feature completely dependent on a technology that explicitly disables the use of competitive hypervisors, they're giving with one hand and taking with the other. If I was on VM-Ware's executive team, I'd be seriously thinking about filing an anti-trust complaint and the open source community should be thinking about whether submarining virtualbox is worth what Microsoft is doing here."

Others note that WSL 2 is a full Linux kernel that runs in Hyper-V. Another comment reads, "WSL 2 is a full Linux kernel running in Hyper-V rather than an emulation layer on top of NT."

To know more about this news, check out the official post by Docker.

How to push Docker images to AWS' Elastic Container Registry (ECR) [Tutorial]
All Docker versions are now vulnerable to a symlink race attack
Docker announces collaboration with Microsoft's .NET at DockerCon 2019
Amrata Joshi
18 Jun 2019
4 min read

Pull Panda is now a part of GitHub; code review workflows now get better!

Yesterday, the team at GitHub announced that it has acquired Pull Panda, for an undisclosed amount, to help teams create more efficient and effective code review workflows on GitHub.

https://twitter.com/natfriedman/status/1140666428745342976

Pull Panda helps thousands of teams work together on code and improve their process by combining three apps: Pull Reminders, Pull Analytics, and Pull Assigner.

Pull Reminders: Users get a prompt in Slack whenever a collaborator needs a review. Automatic reminders ensure that pull requests aren't missed.
Pull Analytics: Users get real-time insight and can make data-driven improvements, creating a more transparent and accountable culture.
Pull Assigner: Users can automatically distribute code review across their team, so that no one gets overloaded and knowledge is spread around.

Pull Panda helps teams ship faster and gain insight into bottlenecks in their process. Abi Noda, the founder of Pull Panda, highlighted the major reasons for starting it. According to him, there were two major pain points. The first was that on fast-moving teams, pull requests are easily forgotten, which delays code reviews and, eventually, the shipping of new features to customers. As Noda stated in a video, "I started Pull Panda to solve two major pain points that I had as an engineer and manager at several different companies. The first problem was that on fast moving teams, pull requests easily are forgotten about and often slip through the cracks. This leads to frustrating delays in code reviews and also means it takes longer to actually ship new features to your customers."

https://youtu.be/RtZdbZiPeK8

To solve this problem, the team built Pull Reminders, a GitHub app that automatically notifies the team about pending code reviews.
The second problem was that it was difficult to measure and understand a team's development process in order to identify bottlenecks. To solve this, the team built Pull Analytics, which provides real-time insight into the software development process. It also highlights the current code review workload across the team, so the team knows who is overloaded and who might be available.

Many customers also discovered that the majority of their code reviews were being done by the same small set of people. To solve this problem, the team built Pull Assigner, which offers two algorithms for automatically assigning reviewers. The first, Load Balance, equalizes the number of reviews so everyone on the team does the same number. The second, round robin, randomly assigns additional reviewers so that knowledge is spread across the team.

Nat Friedman, CEO of GitHub, said, "We'll be integrating everything Abi showed you directly into GitHub over the coming months. But if you're impatient, and you want to get started now, I'm happy to announce that all three of the Pull Panda products are available for free in the GitHub marketplace starting today. So we hope you enjoy using Pull Panda and we look forward to your feedback. Goodbye. It's over."

Pull Panda will no longer offer an Enterprise plan, although existing Enterprise customers can continue to use the on-premises offering. All paid subscriptions have been converted to free subscriptions, and new users can install Pull Panda for their organizations for free from Pull Panda's website or the GitHub Marketplace.

The official GitHub blog post reads, "We plan to integrate these features into GitHub but hope you'll start benefiting from them right away. We'd love to hear what you think as we continue to improve how developers work together on GitHub."

To know more about this news, check out GitHub's post.
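The two assignment strategies described above are easy to sketch. The following Python is a hypothetical illustration of the idea only; the function names and data shapes are ours, not Pull Assigner's actual API:

```python
import random

def load_balance(candidates, review_counts):
    """Pick the reviewer with the fewest open reviews (ties broken by name)."""
    return min(candidates, key=lambda r: (review_counts.get(r, 0), r))

def assign_extra(candidates, n, rng=None):
    """Randomly pick n additional reviewers so knowledge spreads across the team."""
    rng = rng or random.Random()
    return rng.sample(list(candidates), min(n, len(candidates)))

counts = {"alice": 4, "bob": 1, "carol": 2}
# load_balance picks "bob", the least-loaded reviewer.
```

The load-balancing idea is just this: always route the next review to whoever has the fewest open ones, so no single reviewer accumulates the whole queue.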
GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
GitHub Satellite 2019 focuses on community, security, and enterprise

Amrata Joshi
17 Jun 2019
3 min read

Introducing Luna, world’s first programming language with dual syntax representation, data flow modeling and much more!

Luna, a data processing and visualization environment, provides a library of highly tailored, domain-specific components as well as a framework for building new components. Luna focuses on domains related to data processing, such as IoT, bioinformatics, data science, graphic design, and architecture.

What's so interesting about Luna?

Data flow modeling
Luna is a data-flow modeling whiteboard that allows users to draw components and the way data flows between them. Components in Luna are simply nested data-flow graphs, and users can enter any component or its subcomponents to move from high to low levels of abstraction. Luna is also designed as a general-purpose programming language with two equivalent representations, visual and textual.

Data processing and visualization
Luna components can visualize their results and use colors to indicate the type of data they exchange. Users can compare all the intermediate outcomes and understand the flow of data by looking at the graph. They can also adjust parameters and observe how these affect each step of the computation in real time.

Debugging
Luna can assist in analyzing network service outages and data corruption. If an error occurs, Luna tracks and displays its path through the graph, so that users can easily follow it and understand where it came from. It also records and visualizes information about performance and memory consumption.

Luna Explorer, the search engine
Luna comes with Explorer, a context-aware fuzzy search engine that lets users query libraries for desired components as well as browse their documentation. Since Explorer is context-aware, it can understand the flow of data, predict users' intentions, and adjust the search results accordingly.

Dual syntax representation
Luna is also the world's first programming language that features two equivalent syntax representations, visual and textual.
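Luna itself is visual, but the underlying data-flow idea, components as nodes whose outputs feed other nodes, can be sketched in a few lines of Python. This is a toy model for illustration, not Luna's implementation:

```python
class Node:
    """A toy data-flow node: applies a function to the values of its input nodes."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def value(self):
        # Pull-based evaluation: recursively compute upstream results first,
        # mirroring how data flows along the edges of the graph.
        return self.fn(*(node.value() for node in self.inputs))

# source -> double -> shifted: each node is itself a (trivially nested) graph
source = Node(lambda: 20)
double = Node(lambda x: x * 2, source)
shifted = Node(lambda x: x + 2, double)
```

Entering a component in Luna corresponds to descending into one of these nested graphs; every intermediate `value()` is observable, which is what makes the whiteboard-style debugging possible.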
Automatic parallelism
Luna also features automatic parallelism, built on Haskell's state-of-the-art GHC runtime system, which can run thousands of threads in a fraction of a second. Luna automatically partitions a program and schedules its execution over the available CPU cores.

Users seem to be happy with Luna. A user commented on HackerNews, "Luna looks great. I've been doing work in this area myself and hope to launch my own visual programming environment next month or so."

Others are pleased that Luna's text syntax supports building functional blocks. Another user commented, "I like that Luna has a text syntax. I also like that Luna supports building graph functional blocks that can be nested inside other graphs. That's a missing link in other tools of this type that limits the scale of what you can do with them."

To know more about this, check out the official Luna website.

Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
Polyglot programming allows developers to choose the right language to solve tough engineering problems
Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study

Amrata Joshi
13 Jun 2019
4 min read

.NET Core 3.0 Preview 6 is available, packed with updates to compiling assemblies, optimizing applications ASP.NET Core and Blazor

Yesterday, the team at Microsoft announced that .NET Core 3.0 Preview 6 is now available. It includes updates for compiling assemblies for improved startup, optimizing applications for size with the linker, and EventPipe improvements. The team has also released new Docker images for Alpine on ARM64. Additionally, they have made updates to ASP.NET Core and Blazor: the preview comes with new Razor and Blazor directive attributes, as well as authentication and authorization support for Blazor apps, and much more.

https://twitter.com/dotnet/status/1138862091987800064

What's new in .NET Core 3.0 Preview 6

Docker images
The .NET Core Docker images and repos, including microsoft/dotnet and microsoft/dotnet-samples, have been updated. Docker images are now available for .NET Core as well as ASP.NET Core on ARM64.

EventPipe enhancements
With Preview 6, EventPipe now supports multiple sessions: users can consume events with EventListener in-proc and have out-of-process EventPipe clients.

Assembly linking
The .NET Core 3.0 SDK offers a tool that can help reduce the size of apps by analyzing IL with the linker and cutting out unused assemblies.

Improving startup time
Users can improve the startup time of their .NET Core applications by compiling their application assemblies in the ReadyToRun (R2R) format. R2R, a form of ahead-of-time (AOT) compilation, is supported with .NET Core 3.0 but cannot be used with earlier versions of .NET Core.

Additional functionality
The Native Hosting sample posted by the team lately demonstrates an approach for hosting .NET Core in a native application. The team is now exposing general functionality to .NET Core native hosts as part of .NET Core 3.0. The functionality is mainly related to assembly loading, which makes it easier to produce native hosts.
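The linker's trimming can be thought of as a reachability analysis over the assembly dependency graph: start from the application's roots and drop everything never reached. A hypothetical sketch of that idea (the assembly names are invented, and the real IL linker also analyzes individual types and members, not just whole assemblies):

```python
def reachable(roots, deps):
    """Return the set of assemblies reachable from the given roots.

    deps maps each assembly name to the assemblies it references."""
    seen, stack = set(), list(roots)
    while stack:
        asm = stack.pop()
        if asm not in seen:
            seen.add(asm)
            stack.extend(deps.get(asm, []))
    return seen

deps = {"App": ["Json", "Http"], "Http": ["Sockets"], "Legacy": ["Json"]}
kept = reachable(["App"], deps)
# "Legacy" is never reached from "App", so a linker would trim it away.
```

Everything outside the reachable set can be dropped from the published output, which is how the size reduction is achieved.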
New Razor features
In this release, the team has added support for the following new Razor features:

@attribute: The new @attribute directive adds the specified attribute to the generated class.
@code: The new @code directive is used in .razor files to specify a code block to add to the generated class as additional members.
@key: In .razor files, the new @key directive attribute specifies a value that the Blazor diffing algorithm can use to preserve elements or components in a list.
@namespace: The @namespace directive now works in pages and views apps and is also supported with components (.razor).

Blazor directive attributes
In this release, the team has standardized a common syntax for directive attributes in Blazor, which makes the Razor syntax used by Blazor more consistent and predictable.

Event handlers
In Blazor, event handlers now use the new directive attribute syntax rather than the normal HTML syntax. The new syntax is similar to the HTML syntax, but the leading @ character makes C# event handlers distinct from JS event handlers.

Authentication and authorization support
With this release, Blazor has built-in support for handling authentication as well as authorization. The server-side Blazor template also supports the options used for enabling the standard authentication configurations with ASP.NET Core Identity, Azure AD, and Azure AD B2C.

Certificate and Kerberos authentication in ASP.NET Core
Preview 6 brings Certificate and Kerberos authentication to ASP.NET Core. Certificate authentication requires the server to be configured to accept certificates; you then add the authentication middleware in Startup.Configure and the certificate authentication service in Startup.ConfigureServices.

Users are happy with this news and think the updates will be useful.
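The effect of @key mentioned above can be illustrated abstractly: when a diffing algorithm matches list items by key rather than by position, reordering a list preserves the existing items instead of recreating them. A toy sketch of that idea in Python, not Blazor's actual algorithm:

```python
def diff_by_key(old, new):
    """old and new are lists of (key, value) pairs.

    Returns (reused, created): keys matched to an existing item vs. newly built."""
    old_keys = {key for key, _ in old}
    reused = [key for key, _ in new if key in old_keys]
    created = [key for key, _ in new if key not in old_keys]
    return reused, created

old = [("a", "row 1"), ("b", "row 2")]
new = [("b", "row 2"), ("c", "row 3"), ("a", "row 1")]
# Reordering preserves "a" and "b"; only "c" must be created.
```

Without keys, a positional diff would see every slot as changed after a reorder and rebuild all of them, losing component state along the way.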
https://twitter.com/gcaughey/status/1138889676192997380
https://twitter.com/dodyg/status/1138897171636531200
https://twitter.com/acemod13/status/1138907195523907584

To know more about this news, check out the official blog post.

.NET Core releases May 2019 updates
An introduction to TypeScript types for ASP.NET core [Tutorial]
What to expect in ASP.NET Core 3.0
Bhagyashree R
12 Jun 2019
2 min read

Scala 2.13 is here with overhauled collections, improved compiler performance, and more!

Last week, the Scala team announced the release of Scala 2.13. This release brings a number of improvements, including overhauled standard library collections, a 5-10% faster compiler, and more.

Overhauled standard library collections
The major highlight of Scala 2.13 is the standard library collections, which are now simpler, faster, and safer compared to previous versions. Some of the important changes made to collections include:

Simpler method signatures
The implicit CanBuildFrom parameter was one of the most powerful abstractions in the collections library, but it made method signatures difficult to understand. Beginning with this release, transformation methods no longer take an implicit CanBuildFrom parameter, making the resulting code simpler and easier to understand.

Simpler type hierarchy
The package scala.collection.parallel has been moved out into a separate Scala module, shipped as a separate JAR that you can omit from your project if it does not use parallel collections. Additionally, Traversable and TraversableOnce are now deprecated.

New concrete collections
The Stream collection is replaced by LazyList, which evaluates elements in order and only when needed. A new mutable.CollisionProofHashMap collection is introduced that implements mutable maps using a hashtable with red-black trees in the buckets; this provides good performance even in worst-case scenarios with hash collisions. The mutable.ArrayDeque collection is added, a double-ended queue that internally uses a resizable circular buffer.

Improved concurrency
In Scala 2.13, Futures are "internally redesigned" to ensure they provide the expected behavior across a broader set of failure cases. The updated Futures also provide a foundation for increased performance and support for more robust applications.
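The semantics of LazyList, elements computed in order, on demand, and memoized, can be sketched in Python for readers unfamiliar with lazy sequences. This is a conceptual analogue, not Scala code:

```python
class LazySeq:
    """Memoizing lazy sequence: each element is computed once, on first access."""
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._cache = []

    def __getitem__(self, i):
        while len(self._cache) <= i:   # force only as many elements as needed
            self._cache.append(next(self._it))
        return self._cache[i]

def naturals():
    n = 0
    while True:
        yield n
        n += 1

nums = LazySeq(naturals())   # conceptually infinite, like Scala's LazyList
# nums[5] forces elements 0..5 and caches them; nothing beyond is evaluated.
```

This is what distinguishes LazyList from the old Stream: laziness plus memoization lets you describe an infinite sequence safely, paying only for the prefix you actually read.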
Changes in the language
The language updates include the introduction of literal-based singleton types, partial unification on by default, and by-name method arguments extended to support both implicit and explicit parameters.

Compiler updates
The compiler can now perform deterministic, reproducible compilation; that is, it can generate identical output for identical input in more cases. Also, operations on collections and arrays are now optimized, making the compiler 5-10% faster compared to Scala 2.12.

These were some of the exciting updates in Scala 2.13. For a detailed list, check out the official release notes.

How to set up the Scala Plugin in IntelliJ IDE [Tutorial]
Understanding functional reactive programming in Scala [Tutorial]
Classifying flowers in Iris Dataset using Scala [Tutorial]

Amrata Joshi
11 Jun 2019
4 min read

GrapheneOS now comes with new device support for Auditor app, Hardened malloc and a new website

GrapheneOS, an open source privacy- and security-focused mobile OS, comes with Android app compatibility. GrapheneOS releases are supported by the Auditor app and an attestation service for hardware-based attestation. The GrapheneOS research and engineering project has been in progress for over five years; in March, the AndroidHardening project was renamed to GrapheneOS.

Two days ago, GrapheneOS released a new website, grapheneos.org, with additional documentation, tutorials, and coverage of topics related to software, firmware, and hardware, as well as privacy/security features expected in the future. The team has also released a new version, PQ3A.190605.003.2019.06.03.18, with new device support, an updated Auditor app, and hardened malloc, among other fixes.

Changes in the GrapheneOS project

Auditor: update to version 12
The Auditor app has added support for verifying CalyxOS on the Pixel 2, Pixel 2 XL, Pixel 3, and Pixel 3 XL, and a verified boot hash display has been added. Auditor uses hardware security features on supported devices to validate the integrity of the operating system from another Android device: the app verifies that the device is running the stock operating system with the bootloader locked, and checks that no tampering has occurred with the operating system.

The list of supported devices for the Auditor app includes the BlackBerry Key2, BQ Aquaris X2 Pro, Google Pixel, Google Pixel 2, Google Pixel 2 XL, Google Pixel 3, Google Pixel 3 XL, Google Pixel 3a, Google Pixel 3a XL, Huawei Honor 7A Pro, Huawei Honor 10, and more. Full list here.

https://twitter.com/GrapheneOS/status/1125928692671057920

Hardened malloc
Hardened malloc is a security-focused general-purpose memory allocator that provides the malloc API along with various extensions. Its security-focused design leads to less metadata overhead and less memory waste from fragmentation than a traditional allocator design.
https://twitter.com/GrapheneOS/status/1113556017768325120

It also offers substantial hardening against heap corruption vulnerabilities while aiming to provide decent overall performance, with a focus on long-term performance and memory usage. Hardened malloc currently supports Bionic (Android), musl, and glibc, and may also support other non-Linux operating systems in the future. Custom integration, along with other hardening features, is also planned for musl in the future. The GrapheneOS-only build of hardened_malloc has been further expanded with a workaround for Pixel 3 and Pixel 3 XL camera issues.

GrapheneOS now needs to move towards a microkernel-based model with a Linux compatibility layer, and it needs to adopt virtualization-based isolation. According to the team, the project will also have to move into the hardware space in the long term.

Restoration of past features
Restored features since the 2019.05.18.20 release include:

Exec spawning while using debugging options has been disabled; exec spawning is enabled by default.
Verizon visual voicemail support has been enabled.
A toggle for disabling newly added USB devices has been added to the Pixel, Pixel XL, Pixel 2, Pixel 2 XL, Pixel 3, and Pixel 3 XL.
Properties for controlling deny_new_usb have been added.
A dynamic deny_new_usb toggle mode has been implemented.
The deny_new_usb feature is set to dynamic by default.

Many are happy with this latest update. A user commented on HackerNews, "They're making good progress and I can't wait to be able to update my handheld device with mainline pieces for as long as anyone who still uses one cares to update it. Currently my Samsung Android device is at Dec 2018 patchlevel and nothing I can do about it."

A few others are skeptical about this news; another user commented, "There is security, and then there is freedom.
You can have the most secure system in the world -- but if there are state sponsored, or company back doors it means nothing."

To know more about this news, check out the official website.

AndroidHardening Project renamed to GrapheneOS to reflect progress and expansion of the project
GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution
Microsoft quietly deleted 10 million faces from MS Celeb, the world's largest facial recognition database