Tech News - Application Development

279 Articles

Introducing Abscissa, a security-oriented Rust application framework by iqlusion

Bhagyashree R
19 Jul 2019
2 min read
Earlier this month, iqlusion, an infrastructure provider for next-generation cryptocurrency technologies, announced the release of Abscissa 0.1, a security-oriented microframework for building Rust applications. Yesterday, the team announced the release of Abscissa 0.2. Tony Arcieri, the co-founder of iqlusion, wrote in a blog post, “After releasing v0.1, we’ve spent the past few weeks further polishing it up in tandem with this blog post, and just released a follow-up v0.2.”

After developing a number of Rust applications, ranging from CLIs to network services, and maintaining much of the same copy/paste boilerplate across them, iqlusion decided to create the Abscissa framework. It aims to maximize functionality while minimizing the number of dependencies.

What features does Abscissa come with?

Command-line option parsing: Abscissa comes with a simple declarative option parser based on the gumdrop crate. The parser includes several improvements for better UX and tighter integration with the other parts of the framework, for example, overriding configuration settings using command-line options.

Component architecture: Abscissa uses a component architecture for extensibility. The implementation is minimalist, yet it still offers features like calculating dependency ordering and providing hooks into the application lifecycle.

Configuration: Allows simple parsing of Tom's Obvious, Minimal Language (TOML) configurations into serde-parsed configuration types that can be dynamically updated at runtime.

Error handling: Abscissa has a generic ‘Error’ type based on the ‘failure’ crate and a unified error-handling subsystem.

Logging: It uses the ‘log’ crate to provide application-level logging.

Secrets management: The optional ‘secrets’ module contains a ‘Secret’ type that derives serde’s Deserialize and can be used to represent secret values parsed from configuration files or elsewhere.

Terminal interactions: It supports colored terminal output and provides easy-to-use macros for Cargo-like status messages.

Read the official announcement for more details on Abscissa. You can also check out its GitHub repository.

Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
Rust 1.36.0 releases with a stabilized ‘Future’ trait, NLL for Rust 2015, and more
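The idea behind a secrets-management ‘Secret’ type is easy to illustrate. Below is a minimal, standard-library-only sketch of the pattern, not Abscissa's actual implementation: the wrapped value is redacted from Debug output, so accidentally logging a config struct cannot leak a credential, and access requires an explicit, auditable call.

```rust
use std::fmt;

/// Sketch of a secret-wrapping type: Debug never shows the inner value.
pub struct Secret<T>(T);

impl<T> Secret<T> {
    pub fn new(value: T) -> Self {
        Secret(value)
    }

    /// The only way to read the secret, which makes every use easy to audit.
    pub fn expose(&self) -> &T {
        &self.0
    }
}

impl<T> fmt::Debug for Secret<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Deliberately ignore the inner value.
        f.write_str("Secret([REDACTED])")
    }
}

fn main() {
    let api_key = Secret::new(String::from("hunter2"));
    // Logging the wrapper does not leak the value:
    println!("{:?}", api_key); // prints: Secret([REDACTED])
    // Reading it back requires the explicit accessor:
    assert_eq!(api_key.expose(), "hunter2");
}
```

In the real crate the type additionally derives serde's Deserialize so secrets can be read straight from configuration files; that part is omitted here to keep the sketch dependency-free.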


Fedora announces the first preview release of Fedora CoreOS as an automatically updating Linux OS for containerized workloads

Vincy Davis
19 Jul 2019
3 min read
Three days ago, Fedora announced the first preview release of the open-source project Fedora CoreOS, a secure and reliable host for compute clusters. It is specifically designed for running containerized workloads, with automatic updates to the latest OS improvements, bug fixes, and security updates. It is secure, minimal, and monolithic, and is optimized for working with Kubernetes.

The main goal of Fedora CoreOS is to be a reliable container host for running containerized workloads securely and at scale. It integrates Ignition from Container Linux with rpm-ostree and SELinux hardening from Project Atomic Host.

Fedora CoreOS is expected to eventually become the successor to Container Linux. The Container Linux project will continue to be supported throughout 2019, leaving users with ample time to migrate and provide feedback. Fedora has also assured Container Linux users that continued support will be provided to them without any disruption. Fedora CoreOS will also become the successor to Fedora Atomic Host. The current plan is for Fedora Atomic Host to have at least a version 29 and six months of lifecycle.

Fedora CoreOS will support AWS, Azure, DigitalOcean, GCP, OpenStack, Packet, QEMU, VirtualBox, VMware, and bare-metal platforms. The initial release runs only on bare metal, Quick Emulator (QEMU), VMware, and AWS, and only on the 64-bit version of the x86 instruction set (x86_64). It supports provisioning via Ignition spec 3.0.0 and the Fedora CoreOS Config Transpiler, provides automatic updates with Zincati and rpm-ostree, and runs containers with Podman and Moby.

Benjamin Gilbert of Red Hat, the primary sponsor of Fedora CoreOS, announced the preview in a post to the project's mailing list. Per Gilbert, in the coming months more platforms will be added to Fedora CoreOS and new functionality will be explored.
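To give a sense of the provisioning flow mentioned above: a Fedora CoreOS Config is a small YAML file that the Config Transpiler converts into an Ignition spec 3.0.0 JSON config, which the machine consumes on first boot. A minimal sketch follows; the SSH key is a placeholder, and the exact schema for your release should be taken from the Fedora docs.

```yaml
# Minimal FCC sketch; the Config Transpiler turns this into Ignition JSON.
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@example.com   # placeholder key
```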
Gilbert has also notified users that the Fedora CoreOS preview should not be used for production workloads, as it may change before the stable release. Since Fedora CoreOS is freely available, it will embrace a variety of containerized use cases, while Red Hat CoreOS will continue to provide a focused immutable host for OpenShift, released and life-cycled at the same time as the platform.

Users are happy with the first preview of Fedora CoreOS.

https://twitter.com/datamattsson/status/1151963024175050758

A user on Reddit comments, “Wow looks awesome”.

For details on how to create Ignition configs, head over to the Fedora Project docs.

Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more
Fedora 30 releases with GCC 9.0, GNOME 3.32, performance improvements, and much more!
Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more


NativeScript 6.0 releases with NativeScript AppSync, TabView, Dark theme and much more!

Amrata Joshi
19 Jul 2019
2 min read
Yesterday, the team behind NativeScript announced the release of NativeScript 6.0. This release features faster delivery of patches with the help of NativeScript AppSync, and it comes with a NativeScript Core Theme that works for all NativeScript components. It also includes an improved TabView that enables common scenarios without custom development, as well as support for AndroidX and Angular 8.

https://twitter.com/ufsa/status/1151755519062958081

Introducing NativeScript AppSync

Yesterday, the team also introduced NativeScript AppSync, a beta service that enables users to deliver a new version of their application instantly. Users can have a look at the demo here: https://youtu.be/XG-ucFqjG6c

Core Theme v2 and Dark Theme

The NativeScript Core Theme provides common UI infrastructure for building consistent, good-looking user interfaces. The team is also introducing a Dark Theme to complement the skins of the Light Theme.

Kendo Themes

Users of Kendo components in their web applications can now reuse their Kendo theme in NativeScript. They can also use the Kendo Theme Builder to build a new theme for their NativeScript application.

Plug and play

With this release, the NativeScript Core Theme is completely plug and play. Users can manually set classes on their components and easily install the theme.

TabView

All the components of the TabView are now styleable, and font icons are now supported. Users can have multiple nested TabView components, similar to having tabs and bottom navigation on the same page. These new capabilities are still in beta.

Bundle Workflow

With NativeScript 6.0, the NativeScript CLI supports the Bundle Workflow, a single unified way of building applications. Hot Module Replacement (HMR) is also enabled by default; users can disable it by passing the `--no-hmr` flag to the executed command.

To know more about this news, check out the official blog post.

NativeScript 5.0 released with code sharing, hot module replacement, and more!
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript
Nativescript 4.1 has been released


Linux Mint 19.2 beta releases with Update Manager, improved menu and much more!

Amrata Joshi
18 Jul 2019
3 min read
This week the team behind Linux Mint announced the release of Linux Mint 19.2 beta, a desktop Linux distribution aimed at producing a modern operating system. The release is codenamed 'Tina' and comes with updated software, refinements, and new features that make the desktop more comfortable to use.

What’s new in Linux Mint 19.2 beta?

Update Manager

The Update Manager now shows how long kernels are supported, and users no longer need to install or remove kernels one by one: installations and removals can be queued, and multiple kernels can be installed or removed in one go. A new "Remove Kernels" button makes removing obsolete kernels easier. There is also support for kernel flavors now; the Update Manager shows a combobox for users to switch between flavors.

Improved menu

mintMenu, the main application menu, has received many bug fixes and performance improvements. The search bar position and the tooltips are now configurable, and the applet icon now supports both icon files and themed icons.

Software Manager

A loading screen now shows up while the cache is being refreshed in the Software Manager. The Software Manager can now share the same cache and can also list applications that were installed by other means. The cache used by the Software Manager has been moved to mint-common and turned into a Python module that can recognize manually installed software.

New buttons in the Maintenance section

Two new buttons are available in the "Maintenance" section of the "Software Sources" configuration tool:

Add Missing Keys: lets users scan their repositories and PPAs and download any key that might be missing.
Remove Duplicate Sources: lets users find and fix duplicated definitions in their sources configuration.

Read Also: Ubuntu free Linux Mint Project, LMDE 3 ‘Cindy’ Cinnamon, released

Announcing MATE 1.22

The Mint team also announced that Linux Mint 19.2 will ship with MATE 1.22, which comes with improved stability and bug fixes. MATE is the Linux desktop that started as a fork of GNOME 2 in 2011, following the poor reception of GNOME 3.

What’s new in MATE 1.22?

It comes with support for metacity-3 themes.
This release features better-looking window and desktop switchers.
MATE 1.22 features systemd support in the session manager.
It has support for new compression formats and can easily pause/resume compression/decompression.

It seems users are happy with this news. A user commented on the official post, “Hi Mint Team. Great job so far. Looks very smooth – even for a beta. Menu is crazy fast!!!” A few others are complaining about graphical glitches they faced. Another user commented, “Hi team and thanks for your latest offering, there is a LOT to like about this and I will provide as much useful feedback as I can, I have had an issue with graphical glitches from Linux Mint 19x Cinnamon.”

To know more about this news, check out the official blog post.

Ubuntu free Linux Mint Project, LMDE 3 ‘Cindy’ Cinnamon, released
Is Linux hard to learn?
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32


Introducing Ballista, a distributed compute platform based on Kubernetes and Rust

Amrata Joshi
18 Jul 2019
3 min read
Andy Grove, a software engineer, has introduced Ballista, a distributed compute platform, and in a recent blog post he explained his journey on this project. Roughly eighteen months ago, he started the DataFusion project, an in-memory query engine that uses Apache Arrow as the memory model. The aim was to build a distributed compute platform in Rust that could compete with Apache Spark, which turned out to be difficult for him.

Grove writes in the blog post, “Unsurprisingly, this turned out to be an overly ambitious goal at the time and I fell short of achieving that. However, some very good things came out of this effort. We now have a Rust implementation of Apache Arrow with a growing community of committers, and DataFusion was donated to the Apache Arrow project as an in-memory query execution engine and is now starting to see some early adoption.”

He then took a break from working on Arrow and DataFusion for a couple of months and focused on some deliverables at work. Later he started a new proof-of-concept (PoC) project, his second attempt at building a distributed platform with Rust, but this time with the advantage of already having Arrow and DataFusion at his disposal. The new project is called Ballista, a distributed compute platform based on Kubernetes and the Rust implementation of Apache Arrow.

A Ballista cluster currently comprises a number of individual pods within a Kubernetes cluster, and it can be created and destroyed via the Ballista CLI. Ballista applications can be deployed to Kubernetes with the Ballista CLI, and they use Kubernetes service discovery to connect to the cluster. Since there is no distributed query planner yet, Ballista applications must manually build the query plans to be executed on the cluster.

To make the project practical and push it beyond the limits of a PoC, Grove listed some of the items on the roadmap for v1.0.0:

Implement a distributed query planner.
Support all DataFusion logical plans and expressions.
Support user code as part of distributed query execution.
Support interactive SQL queries against a cluster via gRPC.
Support the Arrow Flight protocol and Java bindings.

This PoC project will help drive the requirements for DataFusion, and it has already led to three DataFusion PRs being merged into the Apache Arrow codebase.

It seems there are mixed reviews for this initiative. A user commented on Hacker News, “Hang in there mate :) I really don't think you deserve a lot of the crap you've been given in this thread. Someone has to try something new.” Another user commented, “The fact people opposed to your idea/work means it is valuable enough for people to say something against and not ignore it.”

To know more about this news, check out the official announcement.

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
Rust 1.36.0 releases with a stabilized ‘Future’ trait, NLL for Rust 2015, and more
Introducing Vector, a high-performance data router, written in Rust
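Because there is no distributed query planner yet, a Ballista application composes the logical plan tree itself. The following is a toy, dependency-free sketch of what "manually building a query plan" means, with invented type and field names; it is not Ballista's or DataFusion's actual API.

```rust
// Toy logical-plan tree (hypothetical names, for illustration only):
// each node wraps its input, so the application builds the plan bottom-up.
#[derive(Debug)]
enum LogicalPlan {
    Scan { path: String },
    Filter { predicate: String, input: Box<LogicalPlan> },
    Projection { columns: Vec<String>, input: Box<LogicalPlan> },
}

fn main() {
    // Roughly: SELECT city FROM 'data.csv' WHERE pop > 1000, built by hand.
    let plan = LogicalPlan::Projection {
        columns: vec!["city".into()],
        input: Box::new(LogicalPlan::Filter {
            predicate: "pop > 1000".into(),
            input: Box::new(LogicalPlan::Scan {
                path: "data.csv".into(),
            }),
        }),
    };
    // In a real engine this tree would be handed to executors; here we
    // just print its structure.
    println!("{:?}", plan);
}
```

A distributed planner, the first item on the roadmap above, would generate and partition such trees automatically instead of leaving it to the application.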


GraphQL API is now generally available

Amrata Joshi
17 Jul 2019
3 min read
Last month, the team at Fauna, provider of the cloud-first database FaunaDB, announced the general availability of its GraphQL API; GraphQL is a query language for APIs. With support for GraphQL, FaunaDB now lets developers use the API of their choice to manipulate all their data. GraphQL also helps developer productivity by enabling fast, easy development of serverless applications, and it makes FaunaDB the only serverless backend with support for universal database access.

Matt Biilmann, CEO at Netlify, a Fauna partner, said, “Fauna’s GraphQL support is being introduced at a perfect time as rich, serverless apps are disrupting traditional development models.” Biilmann added, “GraphQL is becoming increasingly important to the entire developer community as they continue to leverage JAMstack and serverless to simplify cloud application development. We applaud Fauna’s work as the first company to bring a serverless GraphQL database to market.”

GraphQL lets developers specify the shape of the data they need without requiring changes to the backend components that provide the data. The GraphQL API in FaunaDB helps teams collaborate smoothly: back-end teams can focus on security and business logic, while front-end teams concentrate on presentation and usability.

The global serverless architecture market was valued at $3.46 billion in 2017 and is expected to reach $18.04 billion by 2024, as per Zion Research. GraphQL brings growth and development to serverless development, so developers can look for back-end GraphQL support like that found in FaunaDB. GraphQL supports three general operations, Queries, Mutations, and Subscriptions; currently, FaunaDB natively supports Queries and Mutations.

FaunaDB's GraphQL API provides developers with uniform access to transactional consistency, quality of service (QoS), user authorization, data access, and temporal storage.

No limits on data history

FaunaDB is the only database that provides support without any limits on data history. Any API, such as SQL, in FaunaDB can return data as of any given time.

Consistency

FaunaDB provides the highest consistency levels for its transactions, which are automatically applied to all APIs.

Authorization

FaunaDB provides access control at the row level, applicable to all APIs, be it GraphQL or SQL.

Shared data access

It also features shared data access, so data written by one API (e.g., GraphQL) can be read and modified by another API such as FQL.

To know more about the news, check out the press release.

7 reasons to choose GraphQL APIs over REST for building your APIs
Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]
Implementing routing with React Router and GraphQL [Tutorial]
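To make the two operation types FaunaDB currently supports concrete, here is a hedged sketch of a GraphQL query and mutation against an imagined `Todo` type. The type, field, and operation names are invented for illustration; Fauna derives its actual operations from the GraphQL schema you import.

```graphql
# A query asks for exactly the shape of data the client needs:
query {
  allTodos {
    data { title }
  }
}

# A mutation writes data and can return the shape it chooses:
mutation {
  createTodo(data: { title: "Try the GraphQL API" }) {
    _id
    title
  }
}
```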

IntelliJ IDEA 2019.2 Beta 2 released with new Services tool window and profiling tools

Bhagyashree R
11 Jul 2019
4 min read
Yesterday, JetBrains announced the release of IntelliJ IDEA 2019.2 Beta 2, which marks the next step towards the stable release. The team has already implemented major features like profiling tools, better shell script support, and a new Services tool window, among others. With this release, the team has given a final polish to existing features, including the Terminal, which now soft-wraps long lines better; this solves the previous problem of links breaking when lines wrapped.

Shell script support

This release will come with rich editing features for shell scripts, including word and path completion, quick documentation preview, and textual rename. Additionally, it will allow integration with various external tools to provide enhanced shell script support. For instance, the IDE will prompt you to install ShellCheck to detect possible errors in your scripts, and will also suggest quick fixes for them.

A new Services tool window

IntelliJ IDEA 2019.2 will introduce a new Services tool window, a single stop to view all connections and run configurations that are configured to be reported to the Services view. The Services view incorporates windows for several tools such as Run Dashboard, Database Console, Docker, and Application Servers. You have the option of viewing all the service types as nodes or tabs. To view a service type on a separate tab, either use the Show in New tab action from the toolbar or simply drag and drop the needed node onto the edge of the Services tool window. You can also create a custom tab to group various services using the Group Services action from the context menu or the toolbar.

Profiling tools for IntelliJ IDEA Ultimate

You will be able to analyze the performance of your application right from the IDE using the new CPU Profiler integration and Memory Profiler integration on macOS, Linux, and Windows. The IDE will also come integrated with Java Flight Recorder and async-profiler, which will give you insight into how CPU and memory resources are allocated in your application. To run Java Flight Recorder or async-profiler, just click the icon on the main toolbar or the run icon in the gutter. These tools will only be available in the fully-featured commercial IDE, IntelliJ IDEA Ultimate.

Syntax highlighting for over 20 different programming languages

IntelliJ IDEA 2019.2 will provide syntax highlighting for more than 20 languages. To provide this support, the upcoming version comes integrated with the TextMate text editor's grammar support and a collection of built-in grammar files for various languages. You can find the full list of supported languages in Preferences / Settings | Editor | TextMate Bundles. If you require syntax highlighting for an additional language, you can download the TextMate bundle for that language and import it into IntelliJ IDEA.

Commit directly from the Local Changes

With this version, developers will be able to commit directly from the Local Changes tab without going through a separate Commit dialog. While working on a commit, you can browse the source code, view the file history, view the diff for the file in the same area as the commit, or use other features of the IDE. In previous versions, these actions were impossible because the modal Commit dialog blocked all other IDE functionality. Additionally, for projects using version control systems like Git or Mercurial, you just need to press the Commit shortcut (Ctrl+K on Windows/Linux, Cmd+K on macOS) and the IDE will select the modified files for the commit. You can then review the selected files and change the file or code chunk.

These were some of the features coming in IntelliJ IDEA 2019.2. You can read the entire release notes and follow the IntelliJ IDEA blog to know more in detail. Developers are excited about the profiling tools and other features bundled with this release:

https://twitter.com/Rahamat87523498/status/1149221123256492032
https://twitter.com/goKarumi/status/1148849477136146432
https://twitter.com/matsumana/status/1140659765518852097

What’s new in IntelliJ IDEA 2018.2
IntelliJ IDEA 2018.3 Early Access Program is now open!
Netbeans, Intellij IDEA and PyCharm come to Haiku OS


RISC-V Foundation officially validates RISC-V base ISA and privileged architecture specifications

Vincy Davis
11 Jul 2019
2 min read
Yesterday, the RISC-V Foundation announced that the RISC-V base Instruction Set Architecture (ISA) and privileged architecture specifications have been ratified. The RISC-V Foundation drives the adoption and implementation of the free and open RISC-V ISA. The RISC-V base architecture acts as the interface between application software and hardware.

Krste Asanović, chairman of the RISC-V Foundation Board of Directors, says, “The RISC-V ecosystem has already demonstrated a large degree of interoperability among various implementations. Now that the base architecture has been ratified, developers can be assured that their software written for RISC-V will run on all similar RISC-V cores forever.”

The RISC-V privileged architecture covers all aspects of RISC-V systems, including privileged instructions and the additional functionality required for running operating systems and attaching external devices. Privilege levels provide protection between different components of the software stack, and the privileged architecture defines a core set of privileged ISA extensions with optional extensions and variants, including the machine ISA, supervisor ISA, and hypervisor ISA.

“The RISC-V privileged architecture serves as a contract between RISC-V hardware and software such as Linux and FreeBSD. Ratifying these standards is a milestone for RISC-V,” said Andrew Waterman, chair of the RISC-V Privileged Architecture Task Group.

To know more about this announcement in detail, head over to the RISC-V blog.

Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
AdaCore joins the RISC-V Foundation, adds support for C and Ada compilation
Western Digital RISC-V SweRV Core is now on GitHub


Glitch hits 2.5 million apps, secures $30M in funding, and is now available in VS Code

Sugandha Lahoti
10 Jul 2019
5 min read
Glitch, the web app creation tool, made a series of major announcements yesterday. Glitch is a tool that lets you code full-stack apps right in the browser, where they’re instantly deployed. Glitch, formerly known as Fog Creek Software, is an online community where people can upload projects and enable others to remix them. Creating web apps with Glitch is as easy as working on Google Docs.

The Glitch community reached a milestone by hitting 2.5 million free and open apps, more than the number in Apple's App Store. Many apps on Glitch are decidedly smaller, simpler, and quicker to make, on average focused on single-use things. Since all apps are open source, others can then remix the projects into their own creations.

Glitch raises $30M with a vision of being a healthy, responsible company

Glitch has raised $30M in a Series A funding round from a single investor, Tiger Global. The round closed in November 2018, but Anil Dash, CEO of Glitch, said he wanted to be able to show people that the company did what it said it would do before disclosing the funding to the public; the company has doubled in size since.

Glitch is not your usual tech startup; its policies, culture, and creative freedom are unusual. Its motto is to be a simple tool for creating web apps for people and teams of all skill levels, while fostering a friendly and creative community, and to be a different kind of company that sets the standard for thoughtful and ethical practices in tech. The company is on track to build one of the friendliest, most inclusive, and most welcoming social platforms on the internet. It is built with sustainability in mind, is independent and privately held, and is transparent and open in its business model and processes.

https://twitter.com/firefox/status/1148716282696601601

They are building a healthy, responsible company and have shared their inclusion statistics, as well as benefits like salary transparency, paid climate leave (consisting of up to 5 consecutive work days, taken at the employee’s discretion, for extreme weather), full parental leave, and more, in a public handbook. This handbook is open-sourced, so anyone, anytime, anywhere can see how the company runs day to day. Because the handbook is made in Glitch, users can remix it to get their own customizable copy.

https://twitter.com/Pinboard/status/1148645635173670913

As the community and the company have grown, they have also invested significantly in diversity, inclusion, and tech ethics. On gender, 47% of the company identifies as cisgender women, 40% as cisgender men, 9% as non-binary/gender non-conforming/questioning, and 4% did not disclose. On race and ethnicity, the company is 65% white, 7% Asian, 11% Black, 4% Latinx, and 11% two or more races, while 2% did not disclose. Meanwhile, 29% of the company identifies as queer and 11% of people reported having a disability.

Their social platform, Anil notes, has no wide-scale abuse, systematic misinformation, or surveillance-based advertising. The company wants to “prove that a group of people can still create a healthy community, a successful business, and have a meaningful impact on society, all while being ethically sound.”

A lot of the credit for Glitch and its inclusion policies goes to Anil Dash, the CEO. As pointed out by Kimberly Bryant, founder of Black Girls Code, “A big reason for Glitch's success and vision though is Anil. This 'inclusion mindset' starts at the top and I think that is evidenced by the companies and founders who get it right.” Karla Monterroso, CEO of Code2040, says, “It becomes about operationalizing strategy. About creating actual inclusion. About how you intentionally build a diverse team and an org that is just.”

https://twitter.com/karlitaliliana/status/1148641017823764480
https://twitter.com/karlitaliliana/status/1148653580842196992

Dash notes, “It’s the entire team working together. Buy-in at every level of the organization, people being brave enough to be vulnerable, all doing the hard work of self-reflection & not being defensive. And knowing we’re only getting started.” Other community members and tech experts have also appreciated Dash’s resilience in building an open-source, sustainable, inclusive platform.

https://twitter.com/TheSamhita/status/1148706941432225792
https://twitter.com/LeeTomson/status/1148655031308210176

People have also used it for activist purposes and highly recommend it.

https://twitter.com/schep_/status/1148654037518168065

Glitch now on VS Code, offering real-time code collaboration

Glitch is also available in Visual Studio Code, allowing everyone from beginners to experts to code. The integration includes real-time collaboration, code rewind, and live previews. It is available in preview; users can download the Glitch VS Code extension from the Visual Studio Marketplace. Features include:

Rewind: look back through code history, roll back changes, and see files as they were in the past with a diff.
Console: open the console and run commands directly on the Glitch container.
Logs: see output in logs just like on Glitch.
Debugger: use the built-in Node debugger to inspect full-stack code.

Source: Medium

https://twitter.com/horrorcheck/status/1148635444218933250

For now the company is dedicated solely to building out Glitch, and will release specialized, more powerful features for businesses later this year.

How do AWS developers manage Web apps?
Introducing Voila that turns your Jupyter notebooks to standalone web applications
PayPal replaces Flow with TypeScript as their type checker for every new web app


Linux 5.2 releases with inclusion of Sound Open Firmware project, new mount API, improved pressure stall information and more

Vincy Davis
09 Jul 2019
5 min read
Two days ago, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.2 in his usual humorous way, describing it as a ‘Bobtail Squid’. The release has new additions like the inclusion of the Sound Open Firmware (SOF) project, improved pressure stall information, a new mount API, significant performance improvements in the BFQ I/O scheduler, new GPU drivers, optional support for case-insensitive names in ext4, and more. The earlier version, Linux 5.1, was released exactly two months ago.
Torvalds says, “there really doesn't seem to be any reason for another rc, since it's been very quiet. Yes, I had a few pull requests since rc7, but they were all small, and I had many more that are for the upcoming merge window. So despite a fairly late core revert, I don't see any real reason for another week of rc, and so we have a v5.2 with the normal release timing.” Linux 5.2 also kicks off the Linux 5.3 merge window.
What’s new in Linux 5.2?
Inclusion of the Sound Open Firmware (SOF) project
Linux 5.2 includes the Sound Open Firmware (SOF) project, which was created to reduce firmware issues by providing an open source platform for creating open source firmware for audio DSPs. The SOF project is backed by Intel and Google. This will enable users to have open source firmware, personalize it, and also use the power of the DSP processors in their sound cards in imaginative ways.
Improved pressure stall information
With this release, users can configure sensitive thresholds and use poll() and friends to be notified whenever a certain pressure threshold is breached within the user-defined time window. This allows Android to monitor and prevent mounting memory shortages before they cause problems for the user.
New mount API
With Linux 5.2, kernel developers have redesigned the entire mount API, resulting in the addition of six new syscalls: fsopen(2), fsconfig(2), fsmount(2), move_mount(2), fspick(2), and open_tree(2).
With the previous mount(2) interface, it was hard for applications to understand the errors returned, it was not suitable for specifying multiple sources (such as overlayfs needs), and it was not possible to mount a file system into another mount namespace.
Significant performance improvements in the BFQ I/O scheduler
BFQ is a proportional-share I/O scheduler, available for block devices since the 4.12 kernel release. It associates each process or group of processes with a weight, and grants a fraction of the available I/O bandwidth proportional to that weight. In Linux 5.2, performance tweaks to the BFQ I/O scheduler cut application start-up time under load by up to 80%, drastically improving the scheduler's responsiveness.
New GPU drivers for ARM Mali devices
In the past, the Linux community had to create open source drivers for the Mali GPUs, as ARM has never been open source friendly with its GPU drivers. Linux 5.2 has two new community drivers for ARM Mali accelerators: lima covers the older t4xx series, and panfrost the newer 6xx/7xx series.
More CPU bug protection, and a "mitigations" boot option
The Linux 5.2 release adds more bug infrastructure to deal with the Microarchitectural Data Sampling (MDS) hardware vulnerability, which allows access to data held in various CPU internal buffers. Also, to help users deal with the ever increasing number of CPU bugs across different architectures, the kernel boot option mitigations= has been added. It's a set of curated, arch-independent options to enable or disable protections, regardless of the system they are running on.
clone(2) to return pidfds
Due to the design of Unix, sending signals to processes or gathering /proc information is not always safe, because of the possibility of PID reuse.
With clone(2) now able to return pidfds, users can obtain them at process creation time and use them with the pidfd_send_signal(2) syscall. pidfds let Linux avoid the PID-reuse problem, and the new clone(2) flag makes it even easier to get them, providing a safe way to signal processes and read their metadata.
Optional support for case-insensitive names in ext4
This release implements support for case-insensitive file name lookups in ext4, based on the feature bit and the encoding stored in the superblock. It enables users to configure directories with the chattr +F (EXT4_CASEFOLD_FL) attribute. The attribute can only be enabled on empty directories, on filesystems that support the encoding feature, thus preventing collisions of file names that differ only by case.
Freezer controller for cgroups v2 added
A freezer controller provides the ability to stop the workload in a cgroup and temporarily free up some resources (cpu, io, network bandwidth and, potentially, memory) for other tasks. Cgroup v2 lacked this functionality until this release. The functionality is always available and is represented by the cgroup.freeze and cgroup.events cgroup control files.
Device mapper dust target added
Linux 5.2 adds a device mapper 'dust' target to simulate a device that has failing sectors and/or read failures, as well as the ability to enable the emulation of read failures at an arbitrary time. The 'dust' target aims to help storage developers and sysadmins who want to test their storage stack.
Users are quite happy with the Linux 5.2 release.
https://twitter.com/ejizhan/status/1148047044864557057
https://twitter.com/konigssohne/status/1148014299484512256
https://twitter.com/YuzuSoftMoe/status/1148419200228179968
Linux 5.2 has many other performance improvements in the file systems, memory management, block layer and more. Visit the kernelnewbies page for more details.
“Don’t break your users and create a community culture”, says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
Canonical, the company behind the Ubuntu Linux distribution, was hacked; Ubuntu source code unaffected
OpenWrt 18.06.4 released with updated Linux kernel, security fixes for Curl and the Linux kernel and much more!

“Don’t break your users and create a community culture”, says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019

Sugandha Lahoti
09 Jul 2019
5 min read
At the Cloud Native Computing Foundation’s flagship conference, KubeCon + CloudNativeCon + Open Source Summit China 2019, Linus Torvalds, creator of Linux and Git, was in conversation with Dirk Hohndel, VP and Chief Open Source Officer at VMware, on the past, present, and future of Linux. The conference gathers technologists from leading open source and cloud native communities; its next edition is scheduled to take place in San Diego, California, from November 18-21, 2019.
When he thinks about Linux, Linus says, he worries about the technology and does not care about the market. In a lot of areas of technology, being first is more important than being best, because if you get a huge community around yourself you have already won. Linus says he and the Linux community and maintainers don’t focus on individual features; what they focus on is the process of getting those features out and making releases. He doesn’t believe in long term planning; there are no plans that span more than roughly six months.
Top questions on security, gaming, and Linux’s future, learnings and expectations
Is the interest in Linux from people outside of the core Linux community declining?
Linus opposes this statement, stating that it’s still growing, albeit not at quite the same rate it used to be. He says that people outside the Linux kernel community should care about Linux’s consistency and the fact that there are people to make sure that when you move to a new kernel your processes will not break.
Where is the major focus for security in IT infrastructure? Is it in the kernel, or in the user space?
When it comes to security you should not focus on one particular area alone. You need to have secure hardware, software, kernels, and libraries at every stage. The true path to security is to have multiple layers of security, where even if one layer gets compromised there is another layer that picks up that problem.
The kernel, he says, is one of the more security conscious projects, because if the kernel has a security problem, it's a problem for everybody.
What are some learnings that other projects like Kubernetes and the whole cloud native world can take from the kernel?
Linus acknowledges that he is not sure how much the kernel development model really translates to other projects. Linux has a different approach to maintenance compared to other projects, as well as a unified picture of where it is headed. However, other projects can take two learnings from Linux:
Don't break your users: Linus says this has been a mantra for the kernel for a long time, and it's something that a lot of other projects seem not to have learned. If you want your project to flourish long term, you shouldn’t make your users worry about upgrades and versions; instead, make them aware of the fact that you are a stable platform.
Create a common culture: In order to have a long life for a platform or project, you should create a community and have a common culture, a common goal to work towards together over the long term.
Is gaming a platform where open source is going to be relevant?
When you take up a new technology, Linus states, you want to reuse as much existing infrastructure as possible to make it easy to get to your goals. Linux has obviously been a huge part of that in almost every setting. So the only places where Linux isn't completely taking over are those where there was a very strong established market and code base already. If you do something new, exciting and interesting, you will almost inevitably use Linux as the base, and that includes new platforms for gaming.
What can we expect for Linux for the second thirty years? Will it continue just as today, or where do you think we're going?
Realistically, if you look at what Linux does today, it's not that different from what operating systems did 50-60 years ago. What has changed is the hardware and the use.
Linux sits right in between those two things. What an operating system fundamentally does is act as a resource manager and as the interface between software and hardware. Linus says, “I don't know what software and hardware will look like in 30 years but I do know we'll still have an operating system and that will probably be called Linux. I may not be around in 30 years but I will be around in 2021 for the 30 year Linux anniversary.”
Go through the full conversation here.
Linus Torvalds is sorry for his ‘hurtful behavior’, is taking ‘a break (from the Linux community) to get help’
Linux 5.1 out with Io_uring IO interface, persistent memory, new patching improvements and more!
Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities

GitHub's 'Hub' command-line tool makes using git easier

Bhagyashree R
08 Jul 2019
3 min read
GitHub introduced ‘Hub’, a tool that extends the git command line with extra functionality, enabling developers to complete their everyday GitHub tasks right from the terminal. Hub does not have any dependencies, but as it is designed to wrap git, it is recommended to have at least git 1.7.3 or newer. Hub provides both new commands and extended versions of commands that already exist in git. Here are some of them:
hub-am: Used to replicate commits locally from a GitHub pull request.
hub-cherry-pick: Allows cherry-picking a commit from a fork on GitHub.
hub-alias: Used to show shell instructions for wrapping git.
hub-browse: Used to open a GitHub repository in a web browser.
hub-create: Used to create a new repository on GitHub and add a git remote for it.
hub-fork: Allows forking the current repository on GitHub and adding a git remote for it.
You can see the entire list of commands on the Hub man page. Most of these commands are expected to be run in the context of an existing local git repository.
What are the advantages of using Hub?
Contributing to open source: The tool makes contributing to open source much easier by providing features for fetching repositories, navigating project pages, forking repos, and even submitting pull requests, all from the command line.
Script your workflows: You can easily script your workflows and set priorities by listing and creating issues, pull requests, and GitHub releases.
Easily maintain projects: It allows you to easily fetch from other forks, review pull requests, and cherry-pick URLs.
Use GitHub for work: It saves time by allowing you to open pull requests for code reviews and push to multiple remotes at once. It also supports GitHub Enterprise; however, Enterprise hosts need to be whitelisted.
Hub is not the only tool of its kind; there are tools like Magit Forge and Lab. Though developers think it is convenient, some feel that it increases GitHub lock-in.
"While it is pretty cool, using such tool increases general lock-in to GitHub, in terms of both habits and potential use of it for automation of processes," one user expressed their opinion on Hacker News.
Another Hacker News user suggested, “I wish there was an open standard for operations that hub allows to do and all major Git forges, including open source ones, such as Gogs/Gitea and GitLab, supported it. In that case having a command-line tool that, like Git itself, is not tied to a particular vendor, but allows to do what hub does, could have been indispensable.”
To know more in detail, check out Hub’s GitHub repository.
Pull Panda is now a part of GitHub; code review workflows now get better!
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?

OpenWrt 18.06.4 released with updated Linux kernel, security fixes for Curl and the Linux kernel and much more!

Amrata Joshi
05 Jul 2019
3 min read
This month, the OpenWrt community announced the release of OpenWrt 18.06.4, the fourth service release of the stable OpenWrt 18.06 series. This release comes with a number of bug fixes in the network and system services, and brings updates to the kernel and base packages. The official page reads, “Note that the OpenWrt 18.06.3 release was skipped in favor to 18.06.4 due to a last-minute 4.14 kernel update fixing TCP connectivity problems which were introduced with the first iteration of the Linux SACK (Selective Acknowledgement) vulnerability patches.”
What is the OpenWrt project?
The OpenWrt Project is a Linux operating system targeting embedded devices; it is a replacement for the vendor-supplied firmware of a wide range of wireless routers and non-network devices. OpenWrt is an easily modifiable operating system for routers, powered by a Linux kernel. It offers a fully writable filesystem with optional package management instead of a single, static firmware. It is useful for developers, as OpenWrt provides a framework for building an application without having to create a complete firmware image and distribution around it. It also gives users the freedom of full customization, allowing them to use an embedded device in many ways.
What’s new in OpenWrt 18.06.4?
In this release, the Linux kernel has been updated to versions 4.9.184/4.14.131, from 4.9.152/4.14.95 in v18.06.2. It also comes with SACK (Selective Acknowledgement) security fixes for the Linux kernel and WPA3 security fixes in hostapd. It further offers security fixes for Curl and the Linux kernel, and comes with MT76 wireless driver updates. There are also many network and system service fixes.
Many users seem to be happy about this news, and some choose their routers based on whether or not they are supported by OpenWrt. A user commented on Hacker News, “I choose my routers based on if they are supported or not by OpenWrt. And for everybody that asks my opinion, too.
Because they might not need/want/know/have a desire to install OpenWrt now, but it's good to have the door open for the future.”
Users are also happy with OpenWrt’s interface. A user commented, “For people asking about the user interface of OpenWrt. I think it is very well dun. I get a long with it just fine and I am blind and have to use a screen reader. A11y in Luci is grate. All the pages make sence to most people you do not have to be a networking expert.”
To know more about this news, check out OpenWrt’s official page.
OpenWrt 18.06.2 released with major bug fixes, updated Linux kernel and more!
Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices
Linux use-after-free vulnerability found in Linux 2.6 through 4.20.11

GitLab faces backlash from users over performance degradation issues tied to redis latency

Vincy Davis
02 Jul 2019
4 min read
Yesterday, GitLab suffered major performance degradation, with a 5x increased error rate and site slowdown. The degradation was identified and rectified within a few hours of its discovery.
https://twitter.com/gabrielchuan/status/1145711954457088001
https://twitter.com/lordapo_/status/1145737533093027840
GitLab engineers promptly started investigating the slowdown on GitLab.com and notified users that the slowdown was in the redis and lru clusters, impacting all web requests serviced by the Rails front-end. What followed was very comprehensive detailing of the issue, its causes, who was handling which kind of issue, and more. GitLab’s step-by-step response looked like this:
First, they investigated slow response times on GitLab.
Next, they added more workers to alleviate the symptoms of the incident.
Then, they investigated jobs on shared runners that were being picked up at a low rate or appeared to be stuck.
Next, they tracked CI issues and observed the performance degradation as one incident.
Over time, they continued to investigate the degraded performance and CI pipeline delays.
After a few hours, all services were restored to normal operation, and the CI pipelines continued to catch up from the earlier delays, reaching nearly normal levels.
David Smith, the Production Engineering Manager at GitLab, also updated users that the performance degradation was due to a few issues tied to redis latency. Smith added, “We have been looking into the details of all of the network activity on redis and a few improvements are being worked on. GitLab.com has mostly recovered.”
Many users on Hacker News wrote about their unpleasant experience with GitLab.com. One user stated, “I recently started a new position at a company that is using Gitlab. In the last month I've seen a lot of degraded performance and service outages (especially in Gitlab CI).
If anyone at Gitlab is reading this - please, please slow down on chasing new markets + features and just make the stuff you already have work properly, and fill in the missing pieces.”
Another user commented, “Slow down, simplify things, and improve your user experience. Gitlab already has enough features to be competitive for a while, with the Github + marketplace model.”
Later, a GitLab employee with the username kennyGitLab commented that GitLab is not losing sight and is just following the company’s new strategy of ‘breadth over depth’. He added, “We believe that the company plowing ahead of other contributors is more valuable in the long run. It encourages others to contribute to the polish while we validate a future direction. As open-source software we want everyone to contribute to the ongoing improvement of GitLab.”
Users were indignant at this response. A user commented, “‘We're Open Source!’ isn't a valid defense when you have paying customers. That pitch sounds great for your VCs, but for someone who spends a portion of their budget on your cloud services - I'm appalled. Gitlab is a SaaS company who also provides an open source set of software. If you don't want to invest in supporting up time - then don't sell paid SaaS services.”
Another comment read, “I think I understand the perspective, but the messaging sounds a bit like, ‘Pay us full price while serving as our beta tester; sacrifice the needs of your company so you can fulfill the needs of ours’.”
A few users also praised GitLab for its prompt action and for providing everybody with in-depth detailing of the investigation. A user wrote, “This is EXACTLY what I want to see when there's a service disruption. A live, in-depth view of who is doing what, any new leads on the issue, multiple teams chiming in with various diagnostic stats, honestly it's really awesome.
I know this can't be expected from most businesses, especially non-open sourced ones, but it's so refreshing to see this instead of the typical ‘We're working on a potential service disruption’ that we normally get.”
GitLab goes multicloud using Crossplane with kubectl
Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with ‘compromised’ valid credentials leaving behind a ransom note

Google proposes a libc in LLVM, Rich Felker of musl libc thinks it’s a very bad idea

Vincy Davis
28 Jun 2019
4 min read
Earlier this week, Siva Chandra, a Google LLVM contributor, asked LLVM developers for their opinion on starting a libc in LLVM. He mentioned a list of high-level goals and guiding principles that they intend to pursue. Three days ago, Rich Felker, the creator of musl libc, made his thoughts about the proposal very clear by saying that “this is a very bad idea.”
In his post, Chandra said that he believes a libc in LLVM would be beneficial and usable for the broader LLVM community, and might serve as a starting point for others in the community to flesh out an increasingly complete set of libc functionality.
Read More: Introducing LLVM Intermediate Representation
One of the goals mentioned by Chandra states that the libc project would mesh with the “as a library” philosophy of LLVM and would help in making “the C Standard Library” more flexible. Another goal states that it will support both static non-PIE and static-PIE linking, meaning enabling the C runtime and the PIE loader for static non-PIE and static-PIE linked executables.
Rich Felker posted his thoughts on a libc in LLVM as follows:
Writing and maintaining a correct, compatible, high-quality libc is a monumental task. The amount of code needed is not that large, but the difficulty lies in “the subtleties of how it behaves and the difficulties of implementing various interfaces that have no capacity to fail or report failure, and the astronomical ‘compatibility surface’ of interfacing with all C and C++ software ever written as well as a large amount of software written in other languages whose runtimes ‘pass through’ the behavior of libc to the applications they host”. Felker believes the result would not even be a libc of decent quality.
A corporate-led project is not answerable to the community, and hence will leave whatever bugs it introduces in place, for the sake of compatibility with its own software, rather than fixing them.
This is the main reason Felker thinks that if a libc is created at all, it should not be a Google project.
Lastly, Felker states that avoiding monoculture preserves the motivation for consensus-based standards processes rather than single-party control. This motivates people writing software to write it according to proper standards, rather than according to a particular implementation.
Many users agree with Rich Felker’s views. A user on Hacker News states, “This speaks volumes very clearly. This highlights an immense hazard. Enterprise scale companies contributing to open-source is a fantastic thing, but enterprise scale companies thrusting their own proprietary libraries onto the open-source world is not. I'm already actively avoiding becoming beholden to Google in my work as it is already, let alone in the world where important software uses a libc written by Google. If you're not concerned by this, refer to the immense power that Google already wields over the extremely ubiquitous web-standards through the market dominance that Chrome has.”
Another user says, “In the beginning of Google's letter they let us understand they are going to create a simplified version for their own needs. It does mean they don't care about compatibility and bugs, if it doesn't affect their software. That's not how this kind of libraries should be implemented.”
Another comment reads, “If Google wants their own libc that’s their business. But LLVM should not be part of their ‘manifest destiny’. The corporatization of OSS is a scary prospect, and should be called out loud and clear like this every time it’s attempted.”
There are a few others, though, who think that Siva Chandra’s idea of a libc in LLVM might be a good thing. A user on Hacker News comments, “That is a good point, but I'm in no way disputing that Google could do a great job of creating their own libc.
I would never be foolish enough to challenge the merit of Google's engineers, the proof of this is clear in the tasting of the pudding that is Google's software. My concerns lie in the open-source community becoming further beholden to Google, or even worse with Google dictating the direction of development on what could become a cornerstone of the architecture of many critical pieces of software.”
For more details, head over to Rich Felker’s mailing list post.
Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed
LLVM 8.0.0 releases!
LLVM officially migrating to GitHub from Apache SVN