
Tech News - Web Development


Ghost 3.0, an open-source headless Node.js CMS, released with JAMStack integration, GitHub Actions, and more!

Savia Lobo
23 Oct 2019
4 min read
Yesterday, the team behind Ghost, an open-source headless Node.js CMS, announced its next major version, Ghost 3.0. The new version represents "a total of more than 15,000 commits across almost 300 releases". Ghost is now used by the likes of Apple, DuckDuckGo, OpenAI, The Stanford Review, Mozilla, Cloudflare, Digital Ocean, and many others. "To date, Ghost has made $5,000,000 in customer revenue whilst maintaining complete independence and giving away 0% of the business," the official website highlights.

https://twitter.com/Ghost/status/1186613938697338881

What's new in Ghost 3.0?

Ghost on the JAMStack

The team has revamped the traditional architecture using the JAMStack, which makes Ghost a completely decoupled headless CMS. In this way, users can generate a static site and later add dynamic features to make it powerful. The new architecture unlocks content management that is fundamentally built via APIs, webhooks, and frameworks to generate robust modern websites (see the code sketch at the end of this article).

Continuous theme deployments with GitHub Actions

Manually making a zip, navigating to Ghost Admin, and uploading an update in the browser can be tedious. To provide a better way of deploying Ghost themes to production, the team has integrated with GitHub Actions, making it easy to continuously sync custom Ghost themes to live production sites with every new commit.

New WordPress migration plugin

Earlier versions of Ghost included a very basic WordPress migrator plugin that made it extremely difficult to move data between the platforms or have a smooth experience. The new Ghost 3.0-compatible WordPress migration plugin provides a single-button download of the full WordPress content and image archive in a format that can be dragged and dropped into Ghost's importer.

Those who are new and want to explore Ghost 3.0 can create a new site in a few clicks with an unrestricted 14-day free trial, as all new sites on Ghost (Pro) are running Ghost 3.0. The team encourages users to try out Ghost 3.0 and share feedback on the Ghost forum, or to help build the next features alongside the Ghost team on GitHub.

Ben Thompson's Stratechery, a subscription-based newsletter featuring in-depth commentary on tech and media news, recently posted an interview with Ghost CEO John O'Nolan. The interview covers what Ghost is, where it came from, and much more.

Ghost 3.0 has received a positive response from many, in part for its move towards a static site JAMStack approach. A user on Hacker News commented, "In my experience, Ghost has been the no-nonsense blog CMS that has been stable and just worked with very little maintenance. I like that they are now moving towards static site JAMStack approach, driven by APIs rather than the current SSR model. This lets anybody to customise their themes with the language / framework of choice and generating static builds that can be cached for improved loading times."

Another user trying Ghost for the first time commented, "I've never tried Ghost, although their website always appealed to me (one of the best designed website I know). I've been using WordPress for the past 13 years, for personal and also professional projects, which means the familiarity I've built with building custom themes never drew me towards trying another CMS. But going through this blog post announcement, I saw that Ghost can be used as a headless CMS with frontend frameworks. And since I started using GatsbyJS extensively in the past year, it seems like something that would work _really_ well together. Gonna try it out! And congrats on remaining true to your initial philosophy."

To know more about the other features in detail, read the official blog post.

Google partners with WordPress and invests $1.2 million on "an opinionated CMS" called Newspack
Verizon sells Tumblr to WordPress parent, Automattic, for allegedly less than $3 million, a fraction of its acquisition cost
FaunaDB brings its serverless database to Netlify to help developers create apps
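As referenced above, Ghost's headless angle means content is fetched over HTTP rather than rendered by the CMS itself. Here is a minimal TypeScript sketch of pulling posts from a Ghost site's Content API; the site URL and API key are placeholders, and the v3 endpoint shape is stated as an assumption rather than taken from the article.

```ts
// Minimal sketch: fetching posts from a Ghost site's Content API so a
// static-site generator (or any front end) can render them. The URL and
// key are placeholders; the /ghost/api/v3/content/ path reflects Ghost
// 3.0's Content API as I understand it (treat it as an assumption).
// Assumes a runtime with a global fetch (browsers, or Node 18+).
const GHOST_URL = 'https://demo.ghost.io';          // placeholder site URL
const CONTENT_API_KEY = '<your-content-api-key>';   // placeholder key

interface GhostPost {
  title: string;
  slug: string;
  html: string;
}

async function fetchPosts(): Promise<GhostPost[]> {
  const res = await fetch(
    `${GHOST_URL}/ghost/api/v3/content/posts/?key=${CONTENT_API_KEY}&limit=5`
  );
  if (!res.ok) throw new Error(`Ghost Content API returned ${res.status}`);
  const body = await res.json();
  return body.posts as GhostPost[];
}

fetchPosts()
  .then((posts) => posts.forEach((p) => console.log(p.title)))
  .catch(console.error);
```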


Introducing Stateful Functions, an OS framework to easily build and orchestrate distributed stateful apps, by Apache Flink’s Ververica

Vincy Davis
19 Oct 2019
3 min read
Last week, Apache Flink's stream processing company Ververica announced the launch of Stateful Functions, an open source framework developed to reduce the complexity of building and orchestrating distributed stateful applications. It aims to bring together the benefits of stream processing with Apache Flink and Function-as-a-Service (FaaS). Ververica will propose the project, licensed under Apache 2.0, to the Apache Flink community as an open source contribution.

Read More: Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

The co-founder and CTO at Ververica, Stephan Ewen, says, "Orchestration for stateless compute has come a long way, driven by technologies like Kubernetes and FaaS — but most offerings still fall short for stateful distributed applications." He further adds, "Stateful Functions is a big step towards addressing those shortcomings, bringing the seamless state management and consistency from modern stream processing to space."

Stateful Functions is designed as a simple and powerful abstraction based on functions that can interact with each other asynchronously and be composed into complex networks of functionality. This approach eliminates the need for additional infrastructure for application state management, and reduces operational overhead and overall system complexity. Stateful functions are meant to help users define independent functions with a small footprint, enabling the resources to interact reliably with each other. Each function has persistent user-defined state in local variables and can arbitrarily message other functions (a rough illustration of this model appears at the end of this article).

The stateful function framework simplifies use cases such as:

Asynchronous application processes (checkout, payment, logistics)
Heterogeneous, load-varying event stream pipelines (IoT event rule pipelines)
Real-time context and statistics (ML feature assembly, recommenders)

The runtime of the Stateful Functions API is based on the stream processing capability of Apache Flink, and extends Flink's powerful model for state management and fault tolerance. The major advantage of this framework is that state and computation are co-located on the same side of the network. This means that "you don't need the round-trip per record to fetch state from an external storage system nor a specific state management pattern for consistency."

Though the Stateful Functions API is independent of Flink, its runtime is built on top of Flink's DataStream API and uses a lightweight version of process functions. "The core advantage here, compared to vanilla Flink, is that functions can arbitrarily send events to all other functions, rather than only downstream in a DAG," stated the official blog.

[Figure: multiple bundles of stateful functions multiplexed into a single Flink application. Image source: Ververica blog]

As shown in the figure above, a Stateful Functions application consists of multiple bundles of functions that are multiplexed into a single Flink application, enabling them to interact consistently and reliably with each other. This also lets many small jobs share the same pool of resources and draw on them as needed.

Many Twitterati are excited about this announcement.

https://twitter.com/sijieg/status/1181518992541933568
https://twitter.com/PasqualeVazzana/status/1182033530269949952
https://twitter.com/acmurthy/status/1181574696451620865

Head over to the Stateful Functions website to know more details.
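The sketch below is NOT the Stateful Functions API: the announced SDK is JVM-based and runs on Flink. It is only an illustrative TypeScript rendering of the programming model the article describes, small functions holding persistent local state and messaging each other arbitrarily; every name in it is invented.

```ts
// Illustrative only -- not the Stateful Functions API. Shows the model:
// a function with persistent per-function state that can message any
// other function, not just a downstream stage in a DAG.
type Message = { from: string; to: string; payload: string };

class GreeterFunction {
  // Persistent, per-function state; in the real framework such state is
  // checkpointed and restored by the Flink runtime, not kept in memory.
  private seenCount = 0;

  invoke(msg: Message, send: (m: Message) => void): void {
    this.seenCount += 1;
    // Reply directly to whichever function sent the message.
    send({ from: 'greeter', to: msg.from, payload: `hello #${this.seenCount}` });
  }
}

// Tiny driver to show the call shape.
const greeter = new GreeterFunction();
greeter.invoke({ from: 'user/42', to: 'greeter', payload: 'hi' }, (reply) =>
  console.log(`${reply.to} <- ${reply.payload}`)
);
```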
OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP Improvements, and more
Developers ask for an option to disable Docker Compose from automatically reading the .env file
Ubuntu 19.10 releases with MicroK8s add-ons, GNOME 3.34, ZFS on root, NVIDIA-specific improvements, and much more!
Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices


What to expect from D programming language in the near future

Fatema Patrawala
17 Oct 2019
3 min read
On Tuesday, Atila Neves, the deputy leader of the D programming language, posted about his vision for D and what he would like to do with the language in the near future.

Make D programming language default for web dev and mobile applications

D's static reflection and code generation capabilities make it an ideal candidate to implement a codebase that needs to be called from several different languages and environments (e.g. Python, Java, R). Traditionally this is done by specifying data structures and RPC calls in an Interface Definition Language (IDL), then translating that to the supported languages, with a wire protocol to go along with it. With D, none of that is necessary: one can write the production code in D and have libraries automatically make the code callable from other languages. It will therefore be easy to write D code that runs as fast as or faster than the alternatives, a win on all fronts.

Memory Safety for D lang

Atila notes that since D is a systems programming language with value types and pointers, it isn't memory safe. He says that DIP1000 is a step in the right direction, but the language still needs to become memory safe by default, with programmers opting out via a @trusted block or function. The DIP1000 proposal includes a scope mechanism that knows when the lifetime of a reference is over, by guaranteeing that a reference cannot escape its lexical scope. This makes it possible to safely implement memory management schemes other than tracing garbage collection.

Safe and easy concurrency in D programming language

As per Atila, safe and easy concurrency in D is mostly achieved through the actor model, but the team still needs to finalize shared and make everything @safe as well.

Centralizing all reflection needs with an API

Atila says that instead of disparate ways of getting things done with fragmented APIs (__traits, std.traits, custom code), he would like a library that centralizes all reflection needs behind a great API.

Easy interoperability for C++ developers

C++ has been successful so far in making the transition from C virtually seamless. Atila wants current C++ programmers with legacy codebases to be able to start writing D code just as easily.

Faster development times

D needs a fast interpreter so that developers can skip machine code generation and linking. This should be the default way of running unittest blocks for faster feedback, with programmers only compiling their code for runtime performance and/or to ship binaries to end users.

String interpolation in D programming language

Code generation is one of D's greatest strengths, and token strings enable visually pleasing blocks of code that are actually "just strings". String interpolation would make them vastly easier to use.

To know more about the D programming language, check out the official post by Atila Neves.

"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
The V programming language is now open source – is it too good to be true?
Rust's original creator, Graydon Hoare on the current state of system programming and safety


Severity issues raised for Python 2 Debian packages for not supporting Python 3

Fatema Patrawala
16 Oct 2019
5 min read
On Monday, Neil Williams, a software developer from Linux CodeHelp, raised severity issues for Python 2 leaf packages in Debian which do not support Python 3, urging Debian maintainers to remove Python 2 from all Debian packages.

He specifically mentions one of the packages, Calibre, an e-book management software which is completely open source and licensed under the GNU GPL v3. Calibre is written primarily in Python, with some C/C++ code for speed and system interfacing, but it is not yet compatible with Python 3 and requires at least Python 2.7.9. In 2017, an issue was raised on the Calibre platform by a user: "Python 2 is retiring in thirty months. Calibre needs to convert to Python 3." Kovid Goyal, author of the Calibre platform, responded, "No, it doesn't. I am perfectly capable of maintaining python 2 myself. Far less work than migrating the entire calibre codebase." Now the latest Calibre version requires Python modules which are no longer available for Python 2.

Gregor Riepl, a systems engineer, says in response to Neil, "As of now, calibre is not of sufficient quality to be part of a Debian release and until it drops all Python2 requirements, it must be considered RC buggy." This means that Calibre >= 4.0 will not be available in Debian for the foreseeable future; Calibre 3.48 will be the last version that can run on Debian until upstream Calibre switches to Python 3.

Riepl further asked Neil if his quality argument stems from the Calibre authors' resistance to migrating to Python 3. Neil responded:

"No, it is based just on the removal of Python2 from Debian and avoiding special cases. Right now, any and every package in Debian testing which requires Python2 and has no Python3 alternative in Debian or ready for upload is of poor quality for no other reason than that. All such packages are of such poor quality that the package should be removed from testing - in an orderly manner, leaf packages first. That is in the best interests of all users, despite what may or may not happen to any particular subset(s) of users. The decision flow is easy - if the answer in each case is "no", then move on to the next and if you get to the bottom, the bug should be RC.

* Has the package already been removed from testing?
* Is a Python3-only version already in Debian?
* Is a Python3-only version available upstream?
* Does the package have any reverse dependencies?
* If you get here, it is already too late, there have already been enough warnings. Upgrade the bug to RC and get the package auto-removed from testing."

Neil said he was aware of the history of Calibre and understood what would happen if it were no longer part of Debian, but that did not matter, as the removal of Python 2 is more important for the next Debian release. He also believes that Calibre has a relatively large user base that doesn't know much or care about the Python 2 deprecation; users will simply perceive dropping Calibre as a bad move on Debian's side and rush towards other packages of significantly lower quality.

He further concluded, "Calibre is nothing special - it's a Python2 leaf package like vland and tftpy and any one of far too many others. Calibre can stay in unstable - it will go FTBFS, of course, but that isn't a problem either, IMHO. It's calibre's problem - not Debian's problem. There's always the option of users installing the old Python2 stuff from Buster to keep calibre hobbling along. Debian is the higher priority here. Calibre would be nice to have but it does not deserve to cause delays on anybody else's voluntary effort. No package has that right."

Community feels Python 2 will result in unmaintained runtime and libraries in packages

On Hacker News, users are discussing how the Python foundation's push for packages to migrate to Python 3 will leave Python 2 with an entire set of unmaintained runtime and libraries in the package repository.

One user comments, "Historically, Debian hasn't particularly objected to packaging obsolete versions of programming languages without upstream support. I doubt anyone is checking for potential security problems in Algol 68 and Fortran 77 implementations that Debian ships, and I don't think the people using those packages are particularly inconvenienced by that. It seems a shame that the social pressure to persuade people to port their code to Python 3 means that Debian is going to have weaker support for 10-year-old Python than 40-year-old Fortran. In particular, there are ongoing efforts to try to make it the normal thing for scientists to make the programs they ran on their data available so that their results can be reproduced; aggressively dropping older programming language implementations rather gets in the way of that."

Another user responded, "This isn't about "languages". It's about software! Algol 68 and Fortran 77 may have stale (but maintained) compilers or interpreters in the package repository. Starting very soon - Python 2 will have an entire set of unmaintained runtime and libraries in the package repository. You know - actual, officially, unmaintained software! Unmaintained software that other packages, including Calibre in this example, further build on. Of course they're throwing this out."

Python 3.8 is now available with walrus operator, positional-only parameters support for Vectorcall, and more
Core Python team confirms sunsetting Python 2 on January 1, 2020
PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3


Ionic React released; Ionic Framework pivots from Angular to a native React version

Sugandha Lahoti
15 Oct 2019
3 min read
Yesterday the team behind the Ionic Framework announced the general availability of Ionic React, a native React version of Ionic Framework, pivoting from its traditional Angular-focused app framework. "Ionic React makes it easy to build apps for iOS, Android, Desktop, and the web as a Progressive Web App", states the team in a blog post. It uses TypeScript and combines the core Ionic experience with tooling and APIs that are tailored to developers. It is a fully-supported, enterprise-ready offering with services, advisory, tooling, and supported native functionality.

@ionic/react projects work like standard React projects, leveraging react-dom and the setup normally found in a Create React App (CRA) app. For routing and navigation, React Router is used under the hood. One difference is the use of TypeScript, which provides a more productive experience; to use plain JavaScript, you can rename files to use a .js extension and remove the type annotations from each file. (A minimal component is sketched at the end of this article.)

Explaining the reason behind choosing React, the team says, "With Ionic, we envisioned being able to build rich JavaScript-powered controls and distribute them as simple HTML tags any web developer could assemble into an awesome app. We realized that building a version of the Ionic Framework for React made perfect sense. Combined with the fact that we had several React fans join the Ionic team over the years, there was a strong desire internally to see Ionic Framework support React as well."

How is Ionic React different from React Native

The team realized that there was a gap in the React ecosystem that Ionic could fill as an easier mobile and Progressive Web App development solution. Developers were also interested in incorporating it into their existing React Native apps, by building more screens in their app out of a native WebView frame.

There were two major reasons why the Ionic team built @ionic/react. First, it is DOM-native and uses the standard react-dom library. In contrast, React Native builds an abstraction on top of iOS and Android native UI controls. The team states, "When we looked at installs for react-dom compared to react-native, it was clear to us that vastly more React development was happening in the browser and on top of the DOM than on top of the native iOS or Android UI systems".

Secondly, Ionic is one of the most popular frameworks for building PWAs, most notably through the Stencil project. React Native, on the other hand, does not officially support Progressive Web Apps; PWAs are, at best, an afterthought in the React Native ecosystem.

@ionic/react has been well appreciated by developers on Twitter.

https://twitter.com/dipakcreation/status/1183974237125693441
https://twitter.com/MichaelW_PWC/status/1183836080170323968
https://twitter.com/planetoftheweb/status/1183809368934043653

You can go through Ionic's blog for additional information and for getting started.

Ionic React RC is now out!
Ionic 4.1 named Hydrogen is out!
React Native Vs Ionic: Which one is the better mobile app development framework?
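For a feel of what this looks like in practice, here is a minimal @ionic/react page component written in TypeScript, as an Ionic React project would scaffold it. The component and file names are illustrative; it assumes a React project with the @ionic/react packages installed.

```tsx
// A minimal @ionic/react page component (HomePage is an invented name).
import React from 'react';
import {
  IonButton,
  IonContent,
  IonHeader,
  IonPage,
  IonTitle,
  IonToolbar,
} from '@ionic/react';

const HomePage: React.FC = () => (
  <IonPage>
    <IonHeader>
      <IonToolbar>
        <IonTitle>Home</IonTitle>
      </IonToolbar>
    </IonHeader>
    <IonContent>
      {/* Ionic components render through react-dom, so ordinary React
          patterns (props, state, event handlers) apply unchanged. */}
      <IonButton onClick={() => console.log('tapped')}>Tap me</IonButton>
    </IonContent>
  </IonPage>
);

export default HomePage;
```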


After backlash for rejecting a uBlock Origin update from the Chrome Web Store, Google accepts ad-blocking extension

Bhagyashree R
15 Oct 2019
6 min read
Last week, Raymond Hill, the developer behind uBlock Origin, shared that the extension's dev build 1.22.5rc1 was rejected by Google's Chrome Web Store (CWS). uBlock Origin is a free and open-source browser extension widely used for content filtering and ad blocking.

Google stated that the extension did not comply with its extension standards as it bundles up different purposes into a single extension. An email to Hill from Google reads, "Do not create an extension that requires users to accept bundles of unrelated functionality, such as an email notifier and a news headline aggregator."

Hill mentioned on a GitHub issue that this is basically "stonewalling" and that, in the future, users may have to switch to another browser to use uBlock Origin. He does plan to upload the stable version. He commented, "I will upload stable to the Chrome Web Store, but given 1.22.5rc2 is rejected, logic dictates that 1.23.0 will be rejected. Actually, logic dictates that 1.22.5rc0 should also be rejected and yet it's still available in the Chrome Web Store."

Users' reaction to Google rejecting the uBlock Origin dev build

This news sparked a discussion on Hacker News and Reddit. Users speculated that this outcome is probably the result of the "crippling" update Google has introduced in Chrome (currently in the beta and dev versions): deprecating the blocking ability of the webRequest API. The webRequest API permits extensions to intercept requests and modify, redirect, or block them. The basic flow of handling a request using this API is: Chrome receives the request, asks the extension, and then gets the result. In Manifest V3, the use of this API will be limited in its blocking form, while the non-blocking form, which permits extensions to observe network requests, will still be allowed.

In place of the blocking webRequest API, Google has introduced the declarativeNetRequest API. This API allows adding up to 30,000 rules, 5,000 dynamic rules, and 100 pages. Due to its limiting nature, many ad blocker developers and maintainers have expressed concern that this API will restrict the capabilities of modern content-blocking extensions. (A minimal sketch contrasting the two APIs appears at the end of this article.) Google's reasoning for introducing this change is that the new API is much more performant and provides better privacy guarantees; many developers think otherwise. Hill had previously shared his thoughts on deprecating the blocking ability of the webRequest API:

"Web pages load slow because of bloat, not because of the blocking ability of the webRequest API -- at least for well-crafted extensions. Furthermore, if performance concerns due to the blocking nature of the webRequest API was their real motive, they would just adopt Firefox's approach and give the ability to return a Promise on just the three methods which can be used in a blocking manner."

Many users also mentioned that Chrome is using its dominance in the browser market to dictate what type of extensions are developed and used. A user commented, "As Chrome is a dominant platform, our work is prevented from reaching users if it does not align with the business goals of Google, and extensions that users want on their devices are effectively censored out of existence."

Others expressed that it is better to avoid all the drama by simply switching to another browser, mainly Firefox. "Or you could cease contributing to the Blink monopoly on the web and join us of Firefox. Microsoft is no longer challenging Google in this space," a user added.

Some others were in support of Google, saying that Hill could have moved some of the functionality into a separate extension: "It's an older rule. It does technically apply here, but it's not a great look that they're only enforcing it now. If Gorhill needed to, some of that extra functionality could be moved out into a separate extension. uBlock has done this before with uBlock Origin Extra. Most of the extra features (eg. remote font blocking) aren't a huge deal, in my opinion."

How Google reacted to the public outcry

Simeon Vincent, a developer advocate for Chrome extensions, commented on a Reddit discussion that the updated extension was approved and published on the Chrome Web Store:

"This morning I heard from the review team; they've approved the current draft so next publish should go through. Unfortunately it's the weekend, so most folks are out, but I'm planning to follow up with u/gorhill4 with more details once I have them. EDIT: uBlock Origin development build was just successfully published. The latest version on the web store is 1.22.5.102."

He further said that this whole confusion arose from a "clunkier" developer communication process. When users asked him about the Manifest V3 change, he shared, "We've made progress on better supporting ad blockers and content blockers in general in Manifest V3. We've added rule modification at runtime, bumped the rule limits, added redirect support, header modification, etc. And more improvements are on the way."

He further added, "But Gorhill's core objection is to removing the blocking version of webRequest. We're trying to move the extension platform in a direction that's more respectful of end-user privacy, more secure, and less likely to accidentally expose data – things webRequest simply wasn't designed to do."

Chrome ignores the autocomplete=off property

In other Chrome-related news, it was reported that Chrome continues to autofill forms even if you disable it using the autocomplete=off property. A user commented, "I've had to write enhancements for Web apps several times this year with fields which are intended to be filled by the user with information *about other users*. Not respecting autocomplete="off" is a major oversight which has caused a lot of headache for those enhancements."

Chrome decides which field should be filled with what data based on a combination of form and field signatures; if these do not match, the browser falls back to checking only the field signatures. A developer from the Google Chrome team shared, "This causes some problems, e.g. in input type="text" name="name", the "name" can refer to different concepts (a person's name or the name of a spare part)." To solve this problem, the team is working on an experimental feature that gives users the choice to "(permanently) hide the autofill suggestions." Check out the reported issue to know more in detail.

Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
GitHub updates to Rails 6.0 with an incremental approach
React DevTools 4.0 releases with support for Hooks, experimental Suspense API, and more!
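As referenced earlier in this article, here is a minimal sketch contrasting the blocking webRequest style with declarativeNetRequest rules. The filter values and rule contents are illustrative only; it assumes an extension whose manifest declares the corresponding permissions, and the @types/chrome typings for TypeScript.

```ts
// Manifest V2 style: a blocking webRequest listener decides per request.
// The extension sees every matching request and can cancel it in code.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    return { cancel: details.url.includes('ads.example.com') }; // illustrative filter
  },
  { urls: ['<all_urls>'] },
  ['blocking']
);

// Manifest V3 style: declarative rules are registered up front and the
// browser evaluates them itself; the extension never sees the request.
chrome.declarativeNetRequest.updateDynamicRules({
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: 'block' },
      condition: { urlFilter: 'ads.example.com', resourceTypes: ['script'] },
    },
  ],
  removeRuleIds: [],
});
```

The design difference is visible in the shape of the code: the first form runs extension logic on every request, while the second hands the browser a fixed rule set, which is what the 30,000-rule and 5,000-dynamic-rule limits mentioned above apply to.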

Mozilla Thunderbird 78 will include OpenPGP support, expected to be released by Summer 2020

Savia Lobo
09 Oct 2019
3 min read
Yesterday, the Thunderbird developers announced that OpenPGP support will be implemented in Thunderbird 78, which is planned as a Summer 2020 release. This means that support for Thunderbird in Enigmail will be discontinued. Enigmail is a data encryption and decryption extension for Mozilla Thunderbird and the SeaMonkey internet suite that provides OpenPGP public key email encryption and signing.

Patrick Brunschwig, the lead developer of the Enigmail project, says "this is an inevitable step." The Mozilla developers have been, and still are, actively working on removing old code from their codebase. This affects not only Thunderbird but also add-ons. "While it was possible for Thunderbird to keep old "legacy" add-ons alive for a certain time, the time has come for Thunderbird to stop supporting them," Brunschwig added.

Thunderbird is unable to bundle the GnuPG software due to incompatible licenses (MPL version 2.0 vs. GPL version 3+). Instead of relying on users to obtain and install external software like GnuPG or GPG4Win, the developers intend to identify and use an alternative, compatible library and distribute it as part of Thunderbird on all supported platforms.

Will OpenPGP support in Thunderbird 78 mark an end to Enigmail?

Brunschwig, in an email thread, writes that he "will continue to support and maintain Enigmail for Thunderbird 68 until 6 months after Thunderbird 78 will have been released (i.e. a few months beyond Thunderbird 68 EOL)." He further mentioned that Enigmail will not run anymore on Thunderbird 72 beta and newer: Thunderbird 78 will no longer support the APIs that Enigmail requires, and will only allow new "WebExtensions". WebExtensions have a completely different API than classical add-ons, and a much-reduced set of capabilities for the user interface.

Enigmail will not end, however; Brunschwig will continue to maintain and support Enigmail for Postbox, which runs on a different release schedule than Thunderbird, for the foreseeable future. "The Thunderbird developers and I have therefore agreed that it's much better to implement OpenPGP support directly in Thunderbird. The set of functionalities will be different than what Enigmail offers, and at least initially likely be less feature-rich. But in my eyes, this is by far outweighed by the fact that OpenPGP will be part of Thunderbird and no add-on and no third-party tool will be required," Brunschwig writes.

To process OpenPGP messages, GnuPG stores secret keys, public keys of correspondents, and trust information for public keys in its own file format. Thunderbird 78 will not reuse the GnuPG file format, but will rather implement its own storage for keys and trust. Users who already own secret keys from their previous use of Enigmail and GnuPG, and who wish to reuse them, will be required to transfer their keys to Thunderbird 78. On systems that have GnuPG installed, the team may offer assisted importing.

Many users are awaiting the summer release next year.

https://twitter.com/robertjhansen/status/1181561188301320192
https://twitter.com/glynmoody/status/1181550756916334592

ZDNet writes, "What Mozilla devs will do remains to be seen, and they might end up creating a new OpenPGP library from scratch -- which might take up a lot of Mozilla's resources but will be a win for the open-source community as a whole."

To know more about this news in detail, read the Mozilla Wiki.

Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit
Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol
Mozilla proposes WebAssembly Interface Types to enable language interoperability


FCC can’t block states from passing their own net neutrality laws, states a U.S. court

Vincy Davis
07 Oct 2019
5 min read
In November last year, Mozilla filed a case against the FCC (Federal Communications Commission), opposing its decision to retract the net neutrality protection rules. The case challenged the FCC's decision to reverse the classification of ISPs as Title II (common carrier) service providers under the Communications Act of 1934. The Title II classification, unlike Title I, gave the FCC regulatory power over the ISPs, and Mozilla, along with other companies, trade groups, states, and organizations, protested its removal as an unethical move by the FCC.

Note: Net neutrality compels ISPs to treat all the data on the internet as equal. This means that all data on the internet should be delivered at the same rate, without any discrepancy. In a non-net-neutrality scenario, ISPs can put data in fast and slow lanes, block sites, or even charge companies more money to prioritize their content.

Read Also: The future of net neutrality is being decided in court right now, as Mozilla takes on the FCC

The latest development in this issue came last week, when the U.S. Court of Appeals for the D.C. Circuit upheld the FCC's decision to revoke the net neutrality rules. The chairman of the FCC, Ajit Pai, lauded the decision and asserted it as a victory for consumers and broadband deployment. He added, "The court also upheld our robust transparency rule so that consumers can be fully informed about their online options. Since we adopted the Restoring Internet Freedom Order, consumers have seen 40% faster speeds and millions more Americans have gained access to the Internet."

https://twitter.com/AjitPaiFCC/status/1179046254833262597

States can pass their own net neutrality laws

Notably, the court has also barred the FCC from preemptively stopping states from adopting their own, stricter rules. According to Mozilla, the three-judge panel disagreed with the FCC's argument for preempting state net neutrality legislation across the board. The judges affirmed, "States have already shown that they are ready to step in and enact net neutrality rules to protect consumers, with laws in California and Vermont among others."

https://twitter.com/mozilla/status/1179107413699497984

This ruling paves the way for the 34 U.S. states that have already introduced or passed net neutrality rules. Last year, California's legislature passed the Internet Consumer Protection and Net Neutrality Act of 2018, dubbed the strongest net neutrality law in the country. The bill bans internet providers from blocking and throttling legal content and from prioritizing some sites or services over others; these restrictions apply to both home and mobile connections. Soon after the bill was passed, the FCC and the Department of Justice (DoJ) filed lawsuits blocking its implementation. Along the same lines, Vermont passed a bill to establish consumer protection and net neutrality standards applicable to internet service providers. However, the Vermont Attorney General's Office has said that despite the federal appeals court giving states more power to regulate internet providers, the state will not be able to implement the law until the appeals process in the federal case is fully resolved.

The Verge reports that the judges on the panel criticized the FCC for exhibiting "disregard of its duty" by not evaluating how its rule would affect public safety. The court has also asked the FCC to consider the impact that reclassification will have on pole attachments, and added that the FCC has not sufficiently addressed concerns about how the change would affect the Lifeline internet access program for low-income Americans. Commenting on the allegations, Pai said that they are working on addressing "the narrow issues" that the court identified.

In a blog post, Mozilla said the court's decision clearly "underscores the frailty of the FCC's approach", as the judges questioned the FCC's reclassification of broadband internet access from a 'telecommunications service' to an 'information service.' Mozilla has maintained that it looks forward to continuing this fight with the FCC. If either party now decides to appeal, Congress may have to step in to settle the issue. With the Democrats vowing to restore the protections and the Republicans opposing the bill, this battle may end up in a bipartisan compromise. Many U.S. Senators have extended their support for net neutrality on Twitter.

https://twitter.com/SenatorBennet/status/1179070818267078656
https://twitter.com/SenMarkey/status/1179111199121772544

US Supreme Court ends the net neutrality debate by rejecting the 2015 net neutrality repeal allowing the internet to be free and open again
Spammy bots most likely influenced FCC's decision on net neutrality repeal, says a new Stanford study
Google Project Zero discloses a zero-day Android exploit in Pixel, Huawei, Xiaomi and Samsung devices
The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
An unpatched security issue in the Kubernetes API is vulnerable to a "billion laughs" attack


The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger

Bhagyashree R
04 Oct 2019
2 min read
Yesterday, the OpenJS Foundation announced that Node Version Manager (NVM) is joining the organization as an incubating project, the first new project to enter the OpenJS Foundation's incubation process since the Node.js Foundation and JSF merger. The merger happened in March this year to accelerate the development of JavaScript, combine governance structures, and more. "nvm is joining the OpenJS Foundation as an incubating project, and upon successful completion of onboarding, it will become an "At-Large" project", a category for "stable projects with minimal needs", the announcement reads.

Node Version Manager (NVM) and its functions

NVM is a tool that allows programmers to seamlessly switch between different versions of Node.js. It comes in handy when you are working on different Node.js projects or want to check your library for maximum backward compatibility. It is a POSIX-compliant bash script and supports multiple types of shells, including sh, zsh, dash, and ksh, though not fish. NVM also makes installing Node a very easy process by handling the compilation for systems that don't have prebuilt binaries available. You can install multiple versions of Node on a single system, each with its own node_modules directory for global package installs. Since NVM stores globally installed modules inside the user directory, it removes the need for sudo when used with npm.

NVM is an important part of the Node.js and JavaScript ecosystem, and joining the OpenJS Foundation will help its further development, stability, and governance. "By joining the OpenJS Foundation, there are multiple organizational and infrastructure areas that will be better supported, helping both current users and future users including ensuring no single point of failure for the nvm.sh domain, GitHub repo, and more," the OpenJS Foundation wrote in the announcement.

Check out the official announcement by the OpenJS Foundation to know more in detail.

Node.js and JS Foundation announce intent to merge; developers have mixed feelings
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Electron 5.0 ships with new versions of Chromium, V8, and Node.js
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more


Microsoft announces new dual-screen device Surface Neo and Windows 10X, to be launched next year

Vincy Davis
03 Oct 2019
3 min read
Yesterday, Microsoft made a series of announcements at its annual Surface hardware event in New York. While the event included interesting announcements such as the new Surface Pro 7, Surface Earbuds, and more, the main highlights were the new dual-screen Microsoft Surface Neo and Windows 10X. The dual-screen Surface Neo is expected to go on sale in 2020, before the holiday season.

To enable dual-screen devices to work smoothly, Microsoft has redesigned a version of Windows 10 to create Windows 10X, also known by the codename "Santorini". In a statement to The Verge, Joe Belfiore, head of Windows experiences, said, "We see people using laptops. We see people using tablets. We saw an opportunity both at Microsoft and with our partners to fill in some of the gaps in those experiences and offer something new."

At the event, Microsoft said that it is announcing the new hardware and software early to help developers come up with exclusive applications ahead of the launch. It also added that Windows 10X is not a new operating system, but a more adaptable format of Windows 10. This means that Windows 10X will only be available on dual-screen devices and not as a standalone copy.

Read Also: A Microsoft Windows bug deactivates Windows 10 Pro licenses and downgrades to Windows 10 Home, users report

The Surface Neo has two separate 9-inch displays that fold out to a full 13-inch workspace. An intricate hinge allows the device to switch between a variety of modes. The device also has a Bluetooth keyboard that flips, slides, and locks into place with magnets, and can be stored and secured on the rear of the device. Surface Neo also has a Surface Slim Pen, which attaches magnetically. Microsoft has maintained that the device is not completely ready and that more developments can be expected by the time of its launch.

Microsoft may modularize the Windows 10 core technology and use the Start menu to display that in HoloLens. Similarly, Windows 10X can put the taskbar or Start menu on either panel as needed. Users will also be able to use the Start menu on either panel, depending on the work they are doing on each.

Read Also: Microsoft Azure VP demonstrates Holoportation, a reconstructed transmittable 3D technology

Windows 10X could be used in a number of ways: note-taking, mobile presentation, portable all-in-one, laptop, and reading. The engineering lead on Windows 10X, Carmen Zlateff, says, "We're working to take the best of the applications that people need and use most — things like Mail, Calendar, and PowerPoint — and bring them over to dual screens in a way that creates flexible and rich experiences that are unique to this OS and devices." She further adds, "Our goal is that the vast majority of apps in the Windows Store will work with Windows 10X."

Users are excited about the new Surface Neo and Windows 10X announcements.

https://twitter.com/RoguePlanetoid/status/1179447805036941312
https://twitter.com/BenBajarin/status/1179414618021748737

TensorFlow 2.0 released with tighter Keras integration, eager execution enabled by default, and more!
How Chaos Engineering can help predict and prevent cyber-attacks preemptively
An unpatched vulnerability in NSA's Ghidra allows a remote attacker to compromise exposed systems

Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit

Bhagyashree R
30 Sep 2019
5 min read
Major web companies are adopting HTTP/3, the latest iteration of the HTTP protocol, in their experimental as well as production systems. Last week, Cloudflare announced that its edge network now supports HTTP/3. Earlier this month, Google's Chrome Canary added support for HTTP/3, and Mozilla Firefox will soon ship support in a nightly release this fall. The curl command-line client also supports HTTP/3.

In an announcement, Cloudflare shared that customers can turn on HTTP/3 support for their domains by enabling an option in their dashboards. "We've been steadily inviting customers on our HTTP/3 waiting list to turn on the feature (so keep an eye out for an email from us), and in the coming weeks we'll make the feature available to everyone," the company added.

Last year, Cloudflare announced preliminary support for QUIC and HTTP/3, and customers could join a waiting list to try them as soon as they became available. Customers who are on the waiting list and have received an email from Cloudflare can enable the support by flipping the switch on the "Network" tab of the Cloudflare dashboard. Cloudflare further added, "We expect to make the HTTP/3 feature available to all customers in the near future."

Cloudflare's HTTP/3 and QUIC support is backed by quiche, an implementation of the QUIC transport protocol and HTTP/3 written in Rust. It provides a low-level API for processing QUIC packets and handling connection state.

Why HTTP/3 was introduced

HTTP/1.0 required the creation of a new TCP connection for each request/response exchange between the client and the server, which resulted in latency and scalability issues. To resolve these issues, HTTP/1.1 was introduced, with critical performance improvements such as keep-alive connections, chunked encoding transfers, byte-range requests, additional caching mechanisms, and more. Keep-alive, or persistent, connections allowed clients to reuse TCP connections, eliminating the need to constantly perform the initial connection establishment step and reducing slow start across multiple requests. However, there were still limitations: multiple requests could share a single TCP connection, but they had to be serialized one after the other, so the client and server could execute only a single request/response exchange at a time per connection.

HTTP/2 tried to solve this problem by introducing the concept of HTTP streams, which allowed the transmission of multiple requests/responses over the same connection at the same time. The drawback is that in case of network congestion, all requests and responses are equally affected by packet loss, even if the data that is lost concerns only a single request.

HTTP/3 aims to address these problems of the previous HTTP versions. It uses a new transport protocol called Quick UDP Internet Connections (QUIC) instead of TCP. The QUIC transport protocol comes with features like stream multiplexing and per-stream flow control.

[Diagram: communication between client and server using QUIC and HTTP/3. Source: Cloudflare]

HTTP/3 provides reliability at the stream level and congestion control across the entire connection. QUIC streams share the same QUIC connection, so no additional handshakes are required, and because QUIC streams are delivered independently, packet loss affecting one stream does not affect the others. QUIC also combines the typical three-way TCP handshake with the TLS 1.3 handshake, which provides users encryption and authentication by default and enables faster connection establishment. "In other words, even when a new QUIC connection is required for the initial request in an HTTP session, the latency incurred before data starts flowing is lower than that of TCP with TLS," Cloudflare explains.

On Hacker News, a few users discussed the differences between HTTP/1, HTTP/2, and HTTP/3. Comparing the three, a user commented, "Not aware of benchmarks, but specification-wise I consider HTTP2 to be a regression...I'd rate them as follows: HTTP3 > HTTP1.1 > HTTP2 QUIC is an amazing protocol...However, the decision to make HTTP2 traffic go all through a single TCP socket is horrible and makes the protocol very brittle under even the slightest network delay or packet loss...Sure it CAN work better than HTTP1.1 under ideal network conditions, but any network degradation is severely amplified, to a point where even for traffic within a datacenter can amplify network disruption and cause an outage. HTTP3, however, is a refinement on those ideas and gets pretty much everything right afaik."

Some expressed that the creators of HTTP/3 should also focus on the "real" issues of HTTP, including proper session support and getting rid of cookies. Others appreciated the step, saying, "It's kind of amazing seeing positive things from monopolies and evergreen updates. These institutions can roll out things fast. It's possible in hardware too-- remember Bell Labs in its hay days?"

These were some of the advantages HTTP/3 and QUIC provide over HTTP/2. Read the official announcement by Cloudflare to know more in detail.

Cloudflare plans to go public; files S-1 with the SEC
Cloudflare finally launches Warp and Warp Plus after a delay of more than five months
Cloudflare RCA: Major outage was a lot more than "a regular expression went bad"


Google Chrome Keystone update can render your Mac system unbootable

Fatema Patrawala
25 Sep 2019
4 min read
Yesterday, the Mr. Macintosh website reported that a Google Chrome Keystone update removed the /var symlink on non-SIP-protected Mac computers, causing account and booting issues. A few MacAdmins started to report that their systems would not boot properly, with the following symptoms:

1. After rebooting, the affected system would kernel panic, and the system would reboot only to kernel panic again.
2. The user logs out and the system shows the Setup Assistant.
3. The system kernel panics into a boot loop.

macOS versions 10.9 through 10.14 Mojave were affected. The issue appears to affect all Macs that have SIP (System Integrity Protection) disabled or turned off.

Google Chrome Keystone update causes booting issues

AVID users were some of the first to report the issue. AVID media creators often use third-party graphics cards connected to their Mac Pro, so when the issue hit yesterday, AVID was initially thought to be the main cause, since all the users experiencing the issue had AVID software. Only after some of the top minds in the MacAdmins Slack #varsectomy channel dived deep into an investigation was it found that the Google Chrome Keystone updater was at the heart of the issue.

How to check if the /var symlink was modified

You can check whether the /var symlink was modified by running the following command:

```
ls -ldO /var
```

One of the following outputs will appear. The first one below means that your /var folder is SIP-protected (notice the restricted flag) and has the proper symlink /var -> private/var:

```
lrwxr-xr-x@ 1 root wheel restricted,hidden 11 Apr 1 2018 /var -> private/var
```

The next one means that your symlink is broken and the folder is NOT SIP-protected:

```
drwxr-xr-x 5 503 wheel - 170 Sep 24 14:37 /var
```

If you find /var in this condition, you are affected! If you LOGOUT, SHUTDOWN OR RESTART, your Mac will NOT boot! You will need to boot into recovery, repair the /var symlink, and reset the restricted flags. There are two ways to fix the issue: the first is the fix from MacAdmins user Juest, and the second, from Google Support, uses commands run after booting into macOS Recovery.

Community hates automatic updates from Google Chrome

On Hacker News, users are discussing the sudden Google Chrome Keystone updates that cause such issues, and say they prefer using Safari or Firefox instead of Chrome. One of them commented, "I've always hated that "service" (more like malware given this news) like everything else that installs itself into the autolaunch sequence without permission, and remove* it whenever I notice/remember it, but it keeps coming back whenever I touch Google Chrome, which I prefer not to use in favor of Safari/FireFox because of reasons like this. Things like these (including secretly signing you into Search when you sign into YouTube† or refusing to support PiP on iPadOS/macOS) just solidify Google's image in my mind as a forever scummy, intrusive company that I wish I could leave behind like I did Microsoft, but sadly Google Search and YouTube still don't have good enough alternatives yet."

Another user commented, "The trends that Google has spearheaded have had a real effect on me over the years. I feel alienated from my computer. Subtle things will just change. If I really dig I might be able to find out why, but I don't have the time, so I just accept it. Usually very small things that are barely noticeable. My Chromecast extension disappeared and was integrated into the browser. My brain could not help but notice this benign change, which caused a hard to place sense of unease. Or when Google decided to remove rotation from the home screen on Android 2.3 -- it wasn't a huge problem, but I could have sworn that something changed. Users were conflicted, many convincing themselves that the homescreen never rotated at all. It has made me not trust my computer. I second guess myself much more. If some option no longer exists, I wonder if it was just my imagination or if it was quietly deprecated while I wasn't looking. Does it even matter? I think that we are being trained to see devices as ephemeral, and not to get too attached to them."

To know more about the Google Chrome Keystone update and this issue, check out the Mr. Macintosh website.

Other interesting news in web development

Google Chrome 76 now supports native lazy-loading
Google Chrome to simplify URLs by hiding special-case subdomains
Google Chrome will soon support LazyLoad, a solution to lazily load below-the-fold images and iframes


Pi-hole 4.3.2 removes adblock style lists support and implements many core and web interface changes

Vincy Davis
25 Sep 2019
3 min read
Last week, Pi-hole, the open-source Linux network-level advertisement and internet tracker blocking application, released its latest version, Pi-hole 4.3.2. It includes many changes in its core and web interfaces. Users can update to this version by running pihole -up from a terminal session.

Adam Warner, one of the core contributors to Pi-hole, revealed that the major change in this release is the removal of support for adblock-style lists like EasyList/EasyPrivacy (illustrative examples of the two list formats appear at the end of this article). He alerted users that this may lead to a reduction in the number of domains blocked by Pi-hole. Warner also explained the motive behind the removal: "these lists were never designed to be parsed into a HOST formatted file, and while it may catch some domains, there are far too many false positives produced by using them in this way. If you have lists in this format, Pi-hole will now ignore them, and attempts to get around the detection will likely end up with a broken gravity list."

Pi-hole uses dnsmasq, cURL, lighttpd, PHP, and other tools to block Domain Name System (DNS) requests for known tracking and advertising domains. Intended for a private network, Pi-hole is implemented on embedded devices with network capabilities, like the Raspberry Pi. A Pi-hole can also block traditional website adverts on smart TVs, mobile operating systems, and more. If Pi-hole receives a request for an advert or tracking domain, it does not resolve the requested domain and responds to the requesting device with a blank webpage.

Users are happy with the Pi-hole 4.3.2 release and are all praises for it on Hacker News. One user said, "I'm a huge fan of this project! I have 3 set-up right now. One as a container on my Nuc at home for myself, and 2 other on old Pi's (one is a 1st gen B model) for family. A simple job to run every 2 months keeps everything up to date. For myself, I use Wireguard to only forward DNS packets to the PiHole when I'm outside the house. If you install a PiHole your help desk calls from family will drop by 90% (personal experience)."

Another user comments, "I have Pi-hole running on my LAN and it's amazing. It also helped me identify that my Amcrest PoE security cameras aggressively phone home, even when no cloud functionality is configured on them. All the reasons to keep them on their own VLAN and off the Internet."

Another comment read, "One unadvertised advantage of pi-hole is monitoring and blocking sites that you don't want kids to use, such as the thousands of io-games and whatnot."

Check out the Pi-hole 4.3.2 release notes for the full list of updates in this release.

Brave ad-blocker gives 69x better performance with its new engine written in Rust
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
Opera Touch browser lets you browse with one hand on your iPhone, comes with e2e encryption and built-in ad blockers too!
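As referenced above, the distinction driving this change is the list format itself: a HOSTS-formatted list maps one blocked domain per line to an address, while adblock-style lists use pattern syntax that a HOSTS parser cannot interpret. The entries below are invented examples, not taken from any real blocklist.

```
# HOSTS-format entries (what Pi-hole's gravity list expects):
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net

# Adblock-style entries (EasyList-type syntax, now ignored by Pi-hole):
||ads.example.com^
##.banner-ad
```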

Other interesting news in web development

Brave ad-blocker gives 69x better performance with its new engine written in Rust

Chromium developers propose an alternative to webRequest API that could result in existing ad blockers’ end

Opera Touch browser lets you browse with one hand on your iPhone, comes with e2e encryption and built-in ad blockers too!

Google’s V8 JavaScript engine adds support for top-level await

Fatema Patrawala
25 Sep 2019
3 min read
Yesterday, Joshua Litt from the Google Chromium team announced the addition of support for top-level await in V8. V8 is Google’s open source high-performance JavaScript and WebAssembly engine, written in C++. It is used in Chrome and in Node.js, among others. It implements ECMAScript and WebAssembly, and runs on Windows 7 or later, macOS 10.12+, and Linux systems that use x64, IA-32, ARM, or MIPS processors. V8 can run standalone, or can be embedded into any C++ application.

The official documentation page on Google Chromium reads, “Adds support for parsing top level await to V8, as well as many tests. This is the final cl in the series to add support for top level await to v8.”

Top-level await support will ease running JS scripts in V8

As per the latest ECMAScript proposal, top-level await allows the await keyword to be used at the top level of the module goal. Top-level await enables modules to act as big async functions: with top-level await, ECMAScript Modules (ESM) can await resources, causing other modules that import them to wait before they start evaluating their own bodies.

Earlier, developers used an IIFE for top-level awaits: a JavaScript function that runs as soon as it is defined. But this pattern has limitations. With await only available within async functions, a module can include await in the code that executes at startup only by factoring that code into an async function and immediately invoking it. The pattern is appropriate when loading a module is intended to schedule work that will happen some time later, but importing modules have no way to wait for that work to finish. Top-level await, by contrast, lets developers rely on the module system itself to handle all of this and make sure that things are well coordinated.

The community is happy that top-level await support has been added to V8. On Hacker News, one of the users commented, “This is huge! Finally no more need to use IIFE's for top level awaits”. Another user commented, “Top level await does more than remove a main function. If you import modules that use top level await, they will be resolved before the imports finish. To me this is most important in node where it's not uncommon to do async operations during initialization. Currently you either have to export a promise or an async function.”

To know more about this, read the official Google Chromium documentation page.
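
To make the contrast concrete, here is a minimal TypeScript sketch; the config URL is a placeholder, not something from the announcement. The commented-out IIFE version runs at startup but gives importing modules nothing to wait on, while the top-level await version pauses the module's own evaluation until the data arrives:

    // Old workaround: an async IIFE. It runs when the module loads, but
    // modules importing this file cannot wait for it to finish.
    //
    // (async () => {
    //   const res = await fetch("https://example.com/config.json");
    //   console.log(await res.json());
    // })();

    // With top-level await (ES modules only), the module itself awaits,
    // and importers wait until these promises settle before evaluating.
    const res = await fetch("https://example.com/config.json");
    export const config = await res.json();

Either way the network request happens at load time; the difference is that only the second version lets the module graph coordinate around it, so an importer of config never sees it half-initialized.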

Other interesting news in web development

New memory usage optimizations implemented in V8 Lite can also benefit V8

LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces

V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more


Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol

Fatema Patrawala
24 Sep 2019
3 min read
Two months ago, Mozilla introduced Neqo, code written in Rust that implements QUIC, a new protocol for the web built on top of UDP instead of TCP.

As per the GitHub page, developers who want to test HTTP 0.9 programs using neqo-client and neqo-server can run:

    cargo build
    ./target/debug/neqo-server 12345 -k key --db ./test-fixture/db
    ./target/debug/neqo-client http://127.0.0.1:12345/ -o --db ./test-fixture/db

Developers who want to test HTTP 3 programs using neqo-client and neqo-http3-server can run:

    cargo build
    ./target/debug/neqo-http3-server [::]:12345 --db ./test-fixture/db
    ./target/debug/neqo-client http://127.0.0.1:12345/ --db ./test-fixture/db

What is QUIC and why is it important for web developers

According to Wikipedia, QUIC is a next-generation, encrypted-by-default transport layer network protocol designed by Jim Roskind at Google. It is designed to secure and accelerate web traffic on the Internet. It was implemented and deployed in 2012, and announced publicly in 2013 as its experimentation broadened and it was described to the IETF. While still an Internet Draft, QUIC is used by more than half of all connections from the Chrome web browser to Google's servers. As per QUIC’s official website, “QUIC is an IETF Working Group that is chartered to deliver the next transport protocol for the Internet.”

One of the users on Hacker News commented, “QUIC is an entirely new protocol for the web developed on top of UDP instead of TCP. UDP has the advantage that it is not dependent on the order of the received packets, hence non-blocking unlike TCP. If QUIC is used, the TCP/TLS/HTTP2 stack is replaced to UDP/QUIC stack.” The user further comments, “If QUIC features prove effective, those features could migrate into a later version of TCP and TLS (which have a notably longer deployment cycle). So basically, QUIC wants to combine the speed of the UDP protocol, with the reliability of the TCP protocol.”

Additionally, the Rust community on Reddit was asked whether QUIC is royalty free, to which one Rust developer responded, “Yes, it is being developed and standardized by a working group (under the IETF) and the IETF respectively. So it will become an internet standard just like UDP, TCP, HTTP, etc.”

If you are interested to know more about Neqo and QUIC, check out the official GitHub page.
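
The UDP point in that comment can be made concrete with a short sketch. The TypeScript below uses Node's built-in dgram module, not Neqo's API, to show the datagram model QUIC builds on: each packet is delivered independently, so a lost or reordered datagram never stalls the ones behind it the way a hole in TCP's ordered byte stream does. Reliability, ordering, and encryption are QUIC's job, implemented in userspace on top of sockets like this one; the port number here is arbitrary.

    import dgram from "node:dgram";

    // A bare UDP echo server: every message is a self-contained datagram
    // with no connection or stream state attached to it.
    const socket = dgram.createSocket("udp4");

    socket.on("message", (msg, rinfo) => {
      // Echo the datagram straight back to whoever sent it. If another
      // datagram was lost or reordered in transit, this one is unaffected.
      socket.send(msg, rinfo.port, rinfo.address);
    });

    socket.bind(12345);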

Other interesting news in web development

Chrome 78 beta brings the CSS Properties and Values API, the native file system API, and more!

Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!

Inkscape 1.0 beta is available for testing