
Tech News - Server-Side Web Development

85 Articles

Django 3.0 released with built-in async functionality and support for MariaDB and Python 3.6, 3.7 and 3.8

Sugandha Lahoti
03 Dec 2019
2 min read
Yesterday, Django released its latest major update, Django 3.0. Django is a Python-based web framework designed to help developers build apps faster with less code. Django 3.0 comes with built-in async functionality, support for Python 3.6, 3.7, and 3.8, and third-party library support for the older versions of Django.

New features in Django 3.0

MariaDB support: Django now officially supports MariaDB 10.1 and higher. To use MariaDB, use the MySQL backend, which is shared between the two databases.

ASGI support for async programming: Django 3.0 can run as an ASGI application, making Django fully async-capable (existing WSGI support remains). However, async features will only be available to applications that run under ASGI. As a side effect of this change, Django is now aware of asynchronous event loops and will block you from calling code marked as "async unsafe", such as ORM operations, from an asynchronous context. This was one of the most eagerly awaited features.

https://twitter.com/jmcampbell72/status/1201502666431619072
https://twitter.com/arocks/status/1201711143103807490
https://twitter.com/gtcarvalh0/status/1201475826564382720

Exclusion constraints on PostgreSQL: Django 3.0 adds a new ExclusionConstraint class that adds exclusion constraints on PostgreSQL. Constraints are added to models using the Meta.constraints option.

Filter expressions: Expressions that output BooleanField may now be used directly in QuerySet filters, without having to first annotate and then filter against the annotation.

Enumerations for model field choices: Custom enumeration types TextChoices, IntegerChoices, and Choices are now available as a way to define Field.choices; TextChoices and IntegerChoices are provided for text and integer fields respectively.

Django 3.0 also drops support for PostgreSQL 9.4, whose upstream support ends in December 2019, and removes private Python 2 compatibility APIs. Support for Oracle 12.1, whose upstream support ends in July 2021, is dropped as well; Django 3.0 officially supports Oracle 12.2 and 18c. Django 2.2 will be supported until April 2022. The complete list of updates is available in the release notes.

Related articles:
- Django 3.0 is going async!
- Which Python framework is best for building RESTful APIs? Django or Flask?
- Django 2.2 is now out with classes for custom database constraints


Apple shares tentative goals for WebKit 2020

Sugandha Lahoti
11 Nov 2019
3 min read
Apple has released a list of tentative goals for WebKit in 2020, catering to WebKit users as well as web, native, and WebKit developers. These features are tentative, and there is no guarantee that they will ship at all. Before committing to new features, Apple weighs a number of factors in a systematic way: it looks at developer interest and at harmful aspects associated with a feature, and sometimes also takes feedback and suggestions from high-value websites.

WebKit 2020 enhancements for WebKit users

Primarily, WebKit is focused on improving performance as well as privacy and security. Suggested performance ideas include media query change handling, no sync IPC for cookies, fast for-of iteration, Turbo DFG, async gestures, fast scrolling on macOS, global GC, and Service Worker declarative routing. For privacy, Apple is focusing on addressing ITP bypasses, a logged-in API, in-app browser privacy, and PCM with fraud prevention. It is also working on improving authentication, network security, JavaScript hardening, WebCore hardening, and sandbox hardening.

Improvements in WebKit 2020 for web developers

For the web platform, the focus is on three qualities: catch-up, innovation, and quality. Apple is planning improvements in graphics and animations (CSS overscroll-behavior, WebGL 2, Web Animations), media (the Media Session standard, MediaStream Recording, the Picture-in-Picture API), and DOM, JavaScript, and text. It is also looking to improve CSS Shadow Parts, stylable pieces, JS built-in modules, and the Undo Web API, and to work on WPT (Web Platform Tests).

Changes suggested for native developers

For native developers on the obsolete legacy WebKit, the following changes are suggested:
- WKWebView API needed for migration
- Fixing cookie flakiness due to multiple process pools
- WKWebView APIs for media

Enhancements for WebKit developers

The focus is on improving architecture health and services and tools. Changes suggested are:
- Defining an "intent to implement" style process
- Faster builds (finishing unified builds)
- Next-gen layout for line layout
- Regression test debt repayment
- IOSurface in Simulator
- EWS (Early Warning System) improvements
- Buildbot 2.0
- WebKit on GitHub as a project (year 1 of a multi-year project)

On Hacker News, this topic was widely discussed, with people pointing out what they want to see in WebKit. "Two WebKit goals I'd like to see for 2020: (1) Allow non-WebKit browsers on iOS (start outperforming your competition instead of merely banning your competition), and (2) Make iOS the best platform for powerful web apps instead of the worst, the leader instead of the spoiler." Another pointed out, "It would be great if SVG rendering, used for diagrams, was of equal quality to Firefox." One said, "WebKit and the Safari browsers by extension should have full and proper support for Service Workers and PWAs on par with other browsers."

For a full list of updates, please see the WebKit wiki page.

Related articles:
- Apple introduces Swift Numerics to support numerical computing in Swift
- Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability
- Apple's macOS Catalina in major turmoil as it kills iTunes and drops support for 32-bit applications


Introducing Stateful Functions, an OS framework to easily build and orchestrate distributed stateful apps, by Apache Flink’s Ververica

Vincy Davis
19 Oct 2019
3 min read
Last week, Apache Flink's stream processing company Ververica announced the launch of Stateful Functions. It is an open source framework developed to reduce the complexity of building and orchestrating distributed stateful applications, built with the aim of bringing together the benefits of stream processing with Apache Flink and Function-as-a-Service (FaaS). Ververica will propose the project, licensed under Apache 2.0, to the Apache Flink community as an open source contribution.

Read more: Apache Flink 1.9.0 releases with fine-grained batch recovery, State Processor API and more

The co-founder and CTO at Ververica, Stephan Ewen, says, "Orchestration for stateless compute has come a long way, driven by technologies like Kubernetes and FaaS — but most offerings still fall short for stateful distributed applications." He further adds, "Stateful Functions is a big step towards addressing those shortcomings, bringing the seamless state management and consistency from modern stream processing to this space."

Stateful Functions is designed as a simple and powerful abstraction based on functions that can interact with each other asynchronously and be composed into complex networks of functionality. This approach eliminates the need for additional infrastructure for application state management and reduces operational overhead as well as overall system complexity. The stateful functions are meant to let users define independent functions with a small footprint, enabling them to interact reliably with each other. Each function has persistent, user-defined state in local variables and can arbitrarily message other functions.

The Stateful Functions framework simplifies use cases such as:
- Asynchronous application processes (checkout, payment, logistics)
- Heterogeneous, load-varying event stream pipelines (IoT event rule pipelines)
- Real-time context and statistics (ML feature assembly, recommenders)

The runtime of the Stateful Functions API is based on the stream processing capability of Apache Flink and extends its powerful model for state management and fault tolerance. The major advantage of this framework is that state and computation are co-located on the same side of the network, which means that "you don't need the round trip per record to fetch state from an external storage system, nor a specific state management pattern for consistency." Though the Stateful Functions API is independent of Flink, its runtime is built on top of Flink's DataStream API and uses a lightweight version of process functions. "The core advantage here, compared to vanilla Flink, is that functions can arbitrarily send events to all other functions, rather than only downstream in a DAG," stated the official blog.

Image source: Ververica blog

As shown in the figure, a Stateful Functions application consists of multiple bundles of functions that are multiplexed into a single Flink application. This lets the functions interact consistently and reliably with each other, and lets many small jobs share the same pool of resources, scaling as needed. Many Twitterati are excited about this announcement.

https://twitter.com/sijieg/status/1181518992541933568
https://twitter.com/PasqualeVazzana/status/1182033530269949952
https://twitter.com/acmurthy/status/1181574696451620865

Head over to the Stateful Functions website for more details.
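The programming model described above can be pictured as small functions, each owning a piece of persistent state, exchanging messages through a shared runtime. Below is a toy sketch of that idea in JavaScript; it only illustrates the concept, and the class and method names are invented here, not Ververica's actual (JVM-based) API:

```javascript
// Toy model of the stateful-functions idea: each registered function
// owns a piece of persistent state and can message other functions.
class ToyRuntime {
  constructor() {
    this.handlers = new Map(); // function name -> handler
    this.state = new Map();    // function name -> persisted state
  }
  register(name, handler) {
    this.handlers.set(name, handler);
    this.state.set(name, {});
  }
  send(name, message) {
    const context = {
      state: this.state.get(name), // survives across invocations
      send: (target, msg) => this.send(target, msg),
    };
    this.handlers.get(name)(context, message);
  }
}

// Example: a checkout function that counts orders and messages logistics,
// mirroring the "asynchronous application processes" use case above.
const runtime = new ToyRuntime();
runtime.register("checkout", (ctx, order) => {
  ctx.state.count = (ctx.state.count || 0) + 1;
  ctx.send("logistics", { orderId: order.id });
});
runtime.register("logistics", (ctx, msg) => {
  ctx.state.shipped = (ctx.state.shipped || 0) + 1;
});

runtime.send("checkout", { id: "a-1" });
runtime.send("checkout", { id: "a-2" });
```

Note how, unlike plain FaaS, each function's state lives with the runtime rather than in an external store, which is the co-location property the framework emphasizes.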
Related articles:
- OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP improvements, and more
- Developers ask for an option to disable Docker Compose from automatically reading the .env file
- Ubuntu 19.10 releases with MicroK8s add-ons, GNOME 3.34, ZFS on root, NVIDIA-specific improvements, and much more!
- Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
- Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices


Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit

Bhagyashree R
30 Sep 2019
5 min read
Major web companies are adopting HTTP/3, the latest iteration of the HTTP protocol, in their experimental as well as production systems. Last week, Cloudflare announced that its edge network now supports HTTP/3. Earlier this month, Google's Chrome Canary added support for HTTP/3, and Mozilla Firefox will ship support in a nightly release this fall. The curl command-line client also supports HTTP/3.

In its announcement, Cloudflare shared that customers can turn on HTTP/3 support for their domains by enabling an option in their dashboards. "We've been steadily inviting customers on our HTTP/3 waiting list to turn on the feature (so keep an eye out for an email from us), and in the coming weeks we'll make the feature available to everyone," the company added. Last year, Cloudflare announced preliminary support for QUIC and HTTP/3, and customers could join a waiting list to try them as soon as they became available. Customers who are on the waiting list and have received an email from Cloudflare can enable the support by flipping the switch on the "Network" tab of the Cloudflare dashboard. Cloudflare further added, "We expect to make the HTTP/3 feature available to all customers in the near future."

Cloudflare's HTTP/3 and QUIC support is backed by quiche, an implementation of the QUIC transport protocol and HTTP/3 written in Rust. It provides a low-level API for processing QUIC packets and handling connection state.

Why HTTP/3 was introduced

HTTP/1.0 required the creation of a new TCP connection for each request/response exchange between the client and the server, which resulted in latency and scalability issues. To resolve these issues, HTTP/1.1 introduced critical performance improvements such as keep-alive connections, chunked transfer encoding, byte-range requests, additional caching mechanisms, and more. Keep-alive, or persistent, connections allow clients to reuse TCP connections, eliminating the need to constantly perform the initial connection establishment step and reducing slow start across multiple requests. However, there were still limitations: multiple requests could share a single TCP connection, but they still had to be serialized one after the other, so the client and server could execute only a single request/response exchange at a time per connection.

HTTP/2 tried to solve this problem by introducing the concept of HTTP streams, allowing the transmission of multiple requests/responses over the same connection at the same time. The drawback is that, in case of network congestion, all requests and responses are equally affected by packet loss, even if the data that is lost concerns only a single request.

HTTP/3 aims to address the problems of the previous HTTP versions. It uses a new transport protocol called QUIC (Quick UDP Internet Connections) instead of TCP. QUIC comes with features like stream multiplexing and per-stream flow control. Here's a diagram depicting the communication between client and server using QUIC and HTTP/3 (source: Cloudflare).

HTTP/3 provides reliability at the stream level and congestion control across the entire connection. QUIC streams share the same QUIC connection, so no additional handshakes are required, and because streams are delivered independently, packet loss affecting one stream does not affect the others. QUIC also combines the typical TCP three-way handshake with the TLS 1.3 handshake, which gives users encryption and authentication by default and enables faster connection establishment. "In other words, even when a new QUIC connection is required for the initial request in an HTTP session, the latency incurred before data starts flowing is lower than that of TCP with TLS," Cloudflare explains.
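The head-of-line-blocking difference described above can be sketched with a toy delivery model. This is purely illustrative; real TCP and QUIC operate on bytes and packets with retransmission, not on arrays of objects:

```javascript
// Toy illustration of head-of-line blocking. Packets carry a stream id;
// one packet belonging to stream "b" is lost in transit.
const packets = [
  { stream: "a", seq: 0 },
  { stream: "b", seq: 0, lost: true }, // the lost packet
  { stream: "a", seq: 1 },
];

// HTTP/2 over TCP: all streams share one ordered byte stream, so
// everything behind the gap waits for retransmission.
function deliverOverTcp(pkts) {
  const delivered = [];
  for (const p of pkts) {
    if (p.lost) break; // the gap stalls all streams behind it
    delivered.push(p);
  }
  return delivered;
}

// HTTP/3 over QUIC: ordering is per stream, so the loss on "b" does
// not hold up the packets of stream "a".
function deliverOverQuic(pkts) {
  return pkts.filter((p) => !p.lost);
}

console.log(deliverOverTcp(packets));  // only the packet before the loss
console.log(deliverOverQuic(packets)); // both "a" packets get through
```

In the TCP model, losing one packet of stream "b" stalls stream "a" as well; in the QUIC model, only stream "b" waits for its retransmission.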
On Hacker News, a few users discussed the differences between HTTP/1, HTTP/2, and HTTP/3. Comparing the three, one user commented, "Not aware of benchmarks, but specification-wise I consider HTTP2 to be a regression...I'd rate them as follows: HTTP3 > HTTP1.1 > HTTP2 QUIC is an amazing protocol...However, the decision to make HTTP2 traffic go all through a single TCP socket is horrible and makes the protocol very brittle under even the slightest network delay or packet loss...Sure it CAN work better than HTTP1.1 under ideal network conditions, but any network degradation is severely amplified, to a point where even for traffic within a datacenter can amplify network disruption and cause an outage. HTTP3, however, is a refinement on those ideas and gets pretty much everything right afaik." Some expressed that the creators of HTTP/3 should also focus on the "real" issues of HTTP, including proper session support and getting rid of cookies. Others appreciated the step, saying, "It's kind of amazing seeing positive things from monopolies and evergreen updates. These institutions can roll out things fast. It's possible in hardware too-- remember Bell Labs in its hay days?"

These were some of the advantages HTTP/3 and QUIC provide over HTTP/2. Read the official announcement by Cloudflare to know more in detail.

Related articles:
- Cloudflare plans to go public; files S-1 with the SEC
- Cloudflare finally launches Warp and Warp Plus after a delay of more than five months
- Cloudflare RCA: Major outage was a lot more than "a regular expression went bad"


Google’s V8 JavaScript engine adds support for top-level await

Fatema Patrawala
25 Sep 2019
3 min read
Yesterday, Joshua Litt from the Google Chromium team announced support for top-level await in V8. V8 is Google's open source high-performance JavaScript and WebAssembly engine, written in C++. It is used in Chrome and in Node.js, among others. It implements ECMAScript and WebAssembly, and runs on Windows 7 or later, macOS 10.12+, and Linux systems that use x64, IA-32, ARM, or MIPS processors. V8 can run standalone, or can be embedded into any C++ application.

The official documentation page on Google Chromium reads, "Adds support for parsing top level await to V8, as well as many tests. This is the final cl in the series to add support for top level await to v8."

Top-level await support will ease running JS scripts in V8

As per the latest ECMAScript proposal, top-level await allows the await keyword to be used at the top level of the module goal. Top-level await enables modules to act as big async functions: ECMAScript modules (ESM) can await resources, causing other modules that import them to wait before they start evaluating their body.

Earlier, developers used an IIFE (a JavaScript function that runs as soon as it is defined) for top-level awaits. With await only available within async functions, a module could include await in code that executes at startup only by factoring that code into an async function and immediately invoking it. This pattern has limitations: it is only appropriate for situations where loading a module is intended to schedule work that will happen some time later. Top-level await instead lets developers rely on the module system itself to handle all of this and make sure that things are well coordinated.

The community is really happy that top-level await support has been added to V8. On Hacker News, one user commented, "This is huge! Finally no more need to use IIFE's for top level awaits". Another user commented, "Top level await does more than remove a main function. If you import modules that use top level await, they will be resolved before the imports finish. To me this is most important in node where it's not uncommon to do async operations during initialization. Currently you either have to export a promise or an async function."

To know more, read the official Google Chromium documentation page.

Other interesting news in web development:
- New memory usage optimizations implemented in V8 Lite can also benefit V8
- LLVM WebAssembly backend will soon become Emscripten's default backend, V8 announces
- V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more
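The difference between the IIFE workaround and top-level await can be sketched as follows. The module code and `loadConfig` are hypothetical stand-ins for any asynchronous startup task:

```javascript
// Stand-in for an asynchronous startup task (e.g. reading a config file).
async function loadConfig() {
  return { port: 8080 };
}

// Old pattern: wrap startup code in an async IIFE and export the promise,
// so every importer has to await it before using the value.
const configPromise = (async () => {
  return await loadConfig();
})();

configPromise.then((config) => console.log(config.port));

// With top-level await (inside an ES module), the wrapper disappears and
// the module system itself delays importers until the value is ready:
//
//   const config = await loadConfig();
//   export default config;
```

This is exactly the "export a promise or an async function" situation the Hacker News commenter describes: without top-level await, consumers of the module must know to await `configPromise` themselves.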


Mozilla introduces Neqo, a Rust implementation of QUIC, the new transport protocol for HTTP

Fatema Patrawala
24 Sep 2019
3 min read
Two months ago, Mozilla introduced Neqo, code written in Rust that implements QUIC, a new protocol for the web built on top of UDP instead of TCP.

As per the GitHub page, web developers who want to test HTTP 0.9 programs using neqo-client and neqo-server can run:

cargo build
./target/debug/neqo-server 12345 -k key --db ./test-fixture/db
./target/debug/neqo-client http://127.0.0.1:12345/ -o --db ./test-fixture/db

Developers who want to test HTTP/3 programs using neqo-client and neqo-http3-server should instead run:

cargo build
./target/debug/neqo-http3-server [::]:12345 --db ./test-fixture/db
./target/debug/neqo-client http://127.0.0.1:12345/ --db ./test-fixture/db

What is QUIC and why is it important for web developers

According to Wikipedia, QUIC is a next-generation, encrypted-by-default transport layer network protocol designed by Jim Roskind at Google. It is designed to secure and accelerate web traffic on the Internet. It was implemented and deployed in 2012, announced publicly in 2013 as experimentation broadened, and described to the IETF. While still an Internet Draft, QUIC is used by more than half of all connections from the Chrome web browser to Google's servers. As per QUIC's official website, "QUIC is an IETF Working Group that is chartered to deliver the next transport protocol for the Internet."

One of the users on Hacker News commented, "QUIC is an entirely new protocol for the web developed on top of UDP instead of TCP. UDP has the advantage that it is not dependent on the order of the received packets, hence non-blocking unlike TCP. If QUIC is used, the TCP/TLS/HTTP2 stack is replaced to UDP/QUIC stack." The user further commented, "If QUIC features prove effective, those features could migrate into a later version of TCP and TLS (which have a notably longer deployment cycle). So basically, QUIC wants to combine the speed of the UDP protocol, with the reliability of the TCP protocol."

Additionally, the Rust community on Reddit was asked whether QUIC is royalty-free. One Rust developer responded, "Yes, it is being developed and standardized by a working group (under the IETF) and the IETF respectively. So it will become an internet standard just like UDP, TCP, HTTP, etc."

If you are interested to know more about Neqo and QUIC, check out the official GitHub page.

Other interesting news in web development:
- Chrome 78 beta brings the CSS Properties and Values API, the native file system API, and more!
- Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!
- Inkscape 1.0 beta is available for testing

New memory usage optimizations implemented in V8 Lite can also benefit V8

Sugandha Lahoti
13 Sep 2019
4 min read
V8 Lite was released in late 2018 in V8 version 7.3 to dramatically reduce V8's memory usage. V8 is Google's open-source JavaScript and WebAssembly engine, written in C++. V8 Lite provides a 22% reduction in typical web page heap size compared to V8 version 7.1 by disabling code optimization, not allocating feedback vectors, and aging seldom-executed bytecode.

Initially, this project was envisioned as a separate Lite mode of V8. However, the team realized that many of the memory optimizations could be used in regular V8, thereby benefiting all users of V8. They found that most of the memory savings of Lite mode could be achieved, with none of the performance impact, by making V8 lazier. They implemented lazy feedback allocation, lazy source positions, and bytecode flushing to bring V8 Lite's memory optimizations to regular V8.

Read also: LLVM WebAssembly backend will soon become Emscripten's default backend, V8 announces

Lazy allocation of feedback vectors

The team now lazily allocates feedback vectors after a function executes a certain amount of bytecode (currently 1KB). Since most functions aren't executed very often, this avoids feedback vector allocation in most cases, while quickly allocating vectors where needed to avoid performance regressions and still allow code to be optimized. One hitch was that lazy allocation did not allow feedback vectors to form a tree. To address this, they created a new ClosureFeedbackCellArray to maintain this tree, then swap out a function's ClosureFeedbackCellArray for a full FeedbackVector when it becomes hot. The team says that they "have enabled lazy feedback allocation in all builds of V8, including Lite mode where the slight regression in memory compared to their original no-feedback allocation approach is more than compensated by the improvement in real-world performance."

Compiling bytecode without collecting source positions

Source position tables are generated when compiling bytecode from JavaScript. However, this information is only needed when symbolizing exceptions or performing developer tasks such as debugging. To avoid this waste, bytecode is now compiled without collecting source positions; they are only collected when a stack trace is actually generated. The team has also fixed bytecode mismatches and added checks and a stress mode to ensure that eager and lazy compilation of a function always produce consistent outputs.

Flushing compiled bytecode from functions not executed recently

Bytecode compiled from JavaScript source takes up a significant chunk of V8 heap space. Therefore, compiled bytecode is now flushed from functions during garbage collection if they haven't been executed recently, along with the feedback vectors associated with the flushed functions. To keep track of the age of a function's bytecode, the age is incremented after every major garbage collection and reset to zero when the function is executed.

Additional memory optimizations
- The size of FunctionTemplateInfo objects was reduced: the FunctionTemplateInfo object is split so that rare fields are stored in a side table that is only allocated on demand if required.
- TurboFan optimized code is now deoptimized such that deopt points in optimized code load the deopt id directly before calling into the runtime.

Read also: V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more

Result comparison for V8 Lite and V8 (source: V8 blog)

People on Hacker News appreciated the work done by the team behind V8. One comment reads, "Great engineering stuff. I am consistently amazed by the work of V8 team. I hope V8 v7.8 makes it to Node v12 before its LTS release in coming October." Another says, "At the beginning of the article, they are talking about building a 'v8 light' for embedded application purposes, which was pretty exciting to me, then they diverged and focused on memory optimization that's useful for all v8. This is great work, no doubt, but as the most popular and well-tested JavaScript engine, I'd love to see a focus on ease of building and embedding."

https://twitter.com/vpodk/status/1172320685634420737

More details are available on the V8 blog.

Other interesting news in Tech:
- Google releases Flutter 1.9 at GDD (Google Developer Days) conference
- Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
- Apple's September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, new iPad, and more
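The lazy feedback allocation described above can be modeled in a few lines of JavaScript. This is a toy analogy only; the threshold, names, and data below are illustrative and bear no relation to V8's internal representation:

```javascript
// Toy model of lazy allocation: skip creating the (expensive) per-function
// metadata until a function has actually been called "enough" times.
function withLazyFeedback(fn, threshold = 3) {
  let calls = 0;
  let feedback = null; // never allocated for rarely-run functions
  const wrapped = (...args) => {
    calls += 1;
    if (feedback === null && calls >= threshold) {
      feedback = { firstHotCall: calls }; // allocated only once "hot"
    }
    return fn(...args);
  };
  wrapped.hasFeedback = () => feedback !== null;
  return wrapped;
}

const square = withLazyFeedback((x) => x * x);
square(2);
square(3);
console.log(square.hasFeedback()); // still cold after two calls
square(4); // the third call crosses the threshold
console.log(square.hasFeedback()); // feedback now exists
```

The payoff mirrors V8's: functions that are never called often enough pay no metadata cost at all, while hot functions get their feedback structure shortly after they start mattering.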


The Tor Project on browser fingerprinting and how it is taking a stand against it

Bhagyashree R
06 Sep 2019
4 min read
In a blog post shared on Wednesday, Pierre Laperdrix, a postdoctoral researcher in the Secure Web Applications Group at CISPA, talked about browser fingerprinting, its risks, and the efforts taken by the Tor Project to prevent it. He also talked about his Fingerprint Central website, which has officially been a part of the Tor Project since 2017.

What is browser fingerprinting

Browser fingerprinting is the systematic collection of information about a remote computing device for the purposes of identification. There are several techniques through which a third party can get a "rich fingerprint," including the availability of JavaScript or other client-side scripting languages, the user-agent and accept headers, the HTML5 Canvas element, and more. Browser fingerprints may include information like browser and operating system type and version, active plugins, timezone, language, screen resolution, and various other active settings. Some users may think these attributes are too generic to identify a particular person. However, according to Panopticlick, a browser fingerprinting test website, only 1 in 286,777 other browsers will share a given browser's fingerprint.

Here's an example of a fingerprint Pierre Laperdrix shared in his post (source: The Tor Project).

As with any technology, browser fingerprinting can be used or misused. Fingerprints can enable a remote application to prevent potential fraud or online identity theft. On the other hand, they can also be used to track users across websites and collect information about their online behavior without their consent. Advertisers and marketers can use this data for targeted advertising.

Read also: All about browser fingerprinting, the privacy nightmare that keeps web developers awake at night

Steps taken by the Tor Project to prevent browser fingerprinting

Laperdrix said that Tor was the very first browser to understand and address the privacy threats that browser fingerprinting poses. The Tor browser, which goes by the tagline "anonymity online," is designed to reduce online tracking and identification of users. The browser takes a very simple approach to preventing the identification of users. "In the end, the approach chosen by Tor developers is simple: all Tor users should have the exact same fingerprint. No matter what device or operating system you are using, your browser fingerprint should be the same as any device running Tor Browser," Laperdrix wrote.

Many other changes have been made to the Tor browser over the years to prevent the unique identification of users. Tor warns users when they maximize their browser window, as window size is one attribute that can be used to identify them. It has introduced default fallback fonts to prevent font and canvas fingerprinting. It has all the JS clock sources and event timestamps set to a specific resolution to prevent JS from measuring the time intervals of things like typing to produce a fingerprint.

Talking about his contribution towards preventing browser fingerprinting, Laperdrix wrote, "As part of the effort to reduce fingerprinting, I also developed a fingerprinting website called FP Central to help Tor developers find fingerprint regressions between different Tor builds." As a part of Google Summer of Code 2016, Laperdrix proposed to develop a website called Fingerprint Central, which is now officially included in the Tor Project. Similar to AmIUnique.org or Panopticlick, FP Central was developed to study the diversity of browser fingerprints. It runs a fingerprinting test suite and collects data from Tor browsers to help developers design and test new fingerprinting protection. They can also use it to ensure that fingerprinting-related bugs are correctly fixed with specific regression tests. Explaining the long-term goal of the website, he said, "The expected long-term impact of this project is to reduce the differences between Tor users and reinforce their privacy and anonymity online."

There are a whole lot of modifications made under the hood to prevent browser fingerprinting, which you can check out using the "tbb-fingerprinting" tag in the bug tracker. These modifications will also make their way into future releases of Firefox under the Tor Uplift program.

Many organizations have taken a stand against browser fingerprinting, including the browser makers Mozilla and Brave. Earlier this week, Firefox 69 shipped with browser fingerprinting blocked by default. Brave also comes with a fingerprinting protection mode enabled by default. In 2018, Apple updated Safari to share only a simplified system profile, making it difficult to uniquely identify or track users.

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Check out Laperdrix's post on the Tor blog to know more about browser fingerprinting in detail.

Other news in web:
- JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
- Google Chrome 76 now supports native lazy-loading
- #Reactgate forces React leaders to confront the community's toxic culture head on


JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

Bhagyashree R
28 Aug 2019
3 min read
Last month, the ECMAScript proposal for the optional chaining operator reached stage 3 of the TC39 process. This essentially means that the feature is almost finalized and is awaiting feedback from users. The optional chaining operator aims to make accessing properties through connected objects easier when there are chances of a reference or function being undefined or null.

https://twitter.com/drosenwasser/status/1154456633642119168

Why the optional chaining operator is proposed in JavaScript

Developers often need to access properties that are deeply nested in a tree-like structure. To do this, they sometimes end up writing long chains of property accesses, which can make the code error-prone. If any of the intermediate references in these chains evaluates to null or undefined, JavaScript throws the “TypeError: Cannot read property 'name' of undefined” error. The optional chaining operator aims to provide a more elegant way of recovering from such instances. It allows you to check for the existence of deeply nested properties in objects. If the operand before the operator evaluates to undefined or null, the expression returns undefined. Otherwise, the property access, method, or function call is evaluated normally.

MDN compares this operator with the dot (.) chaining operator. “The ?. operator functions similarly to the . chaining operator, except that instead of causing an error if a reference is null or undefined, the expression short-circuits with a return value of undefined. When used with function calls, it returns undefined if the given function does not exist,” the document reads.

The concept of optional chaining is not new. Several other languages have support for a similar feature, including the null-conditional operator in C# 6 and later, the optional chaining operator in Swift, and the existential operator in CoffeeScript. The optional chaining operator is represented by ‘?.’.
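The recovery behavior can be sketched in a few lines. The nested `company` object below is made up for illustration, and running this requires an engine that already implements the stage 3 proposal (for example, a recent V8 or a Babel-transpiled build):

```javascript
// A made-up nested structure; `company.ceo` is deliberately absent.
const company = {
  name: 'Acme',
  address: { city: 'Berlin' },
};

// Plain chaining throws when an intermediate reference is undefined:
// company.ceo.name  ->  TypeError: Cannot read property 'name' of undefined

// Optional chaining short-circuits to undefined instead.
console.log(company.address?.city); // 'Berlin'
console.log(company.ceo?.name); // undefined

// Stacking: more than one ?. in a single chain of property accesses.
console.log(company.ceo?.assistant?.phone); // undefined

// Optional call: `greet` does not exist, so nothing is invoked.
console.log(company.greet?.('hi')); // undefined
```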
Here’s what its syntax looks like:

obj?.prop       // optional static property access
obj?.[expr]     // optional dynamic property access
func?.(...args) // optional function or method call

Some properties of optional chaining:

Short-circuiting: The rest of the expression is not evaluated if the optional chaining operator encounters undefined or null on its left-hand side.
Stacking: You can stack optional chaining operators, which means you can apply more than one of them on a sequence of property accesses.
Optional deletion: You can also combine the ‘delete’ operator with an optional chain.

Though it will be some time before the optional chaining operator lands in JavaScript, you can give it a try with a Babel plugin. To stay updated on its browser compatibility, check out the MDN web docs.

Many developers are appreciating this feature. A developer on Reddit wrote, “Considering how prevalent 'Cannot read property foo of undefined' errors are in JS development, this is much appreciated. Yes, you can rant that people should do null guards better and write less brittle code. True, but better language features help protect users from developer laziness.”

Yesterday, the team behind V8, Chrome’s JavaScript engine, also expressed their delight on Twitter: https://twitter.com/v8js/status/1166360971914481669

Read the Optional Chaining for JavaScript proposal to know more in detail.

ES2019: What’s new in ECMAScript, the JavaScript specification standard
Introducing QuickJS, a small and easily embeddable JavaScript engine
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more


Google Chrome 76 now supports native lazy-loading

Bhagyashree R
27 Aug 2019
4 min read
Earlier this month, Google Chrome 76 got native support for lazy loading. Web developers can now use the new ‘loading’ attribute to lazy-load resources without having to rely on a third-party library or writing custom lazy-loading code.

Why native lazy loading is introduced

Lazy loading aims to provide better web performance in terms of both speed and consumption of data. Generally, images are the most requested resources on any website. Some web pages end up using a lot of data to load images that are out of the viewport. Though this might not have much effect on a WiFi user, it can consume a lot of cellular data. Not only images, but out-of-viewport embedded iframes can also consume a lot of data and contribute to slow page speed.

Lazy loading addresses this problem by deferring the non-critical, below-the-fold image and iframe loads until the user scrolls closer to them. This results in faster web page loading, minimized bandwidth for users, and reduced memory usage.

Previously, there were a few ways to defer the loading of images and iframes that were out of the viewport. You could use the Intersection Observer API or the ‘data-src’ attribute on the ‘img’ tag. Many developers also built third-party libraries to provide abstractions that are even easier to use. Native support, however, eliminates the need for an external library. It also ensures that the deferred loading of images and iframes still works even if JavaScript is disabled on the client.

How you can use lazy loading

Without this feature, Chrome already loads images at different priorities depending on their location with respect to the device viewport. The new ‘loading’ attribute, however, allows developers to completely defer the loading of images and iframes until the user scrolls near them. The distance-from-viewport threshold is not fixed and depends on the type of resource being fetched, whether Lite mode is enabled, and the effective connection type.
There are default values assigned for effective connection type in the Chromium source code that might change in a future release. Also, since the images are lazy-loaded, the page content may reflow when they finally load. To prevent this, developers are advised to set a width and height for the images.

You can assign any one of the following three values to the ‘loading’ attribute:

‘auto’: This represents the default behavior of the browser and is equivalent to not including the attribute.
‘lazy’: This defers the loading of images and iframes until they reach a calculated distance from the viewport.
‘eager’: This loads the resource immediately.

Support for native lazy loading in Chrome 76 got mixed reactions from users. A user commented on Hacker News, “I'm happy to see this. So many websites with lazy loading never implemented a fallback for noscript. And most of the popular libraries didn't account for this accessibility.”

Another user expressed that it does hinder user experience. They commented, “I may be the odd one out here, but I hate lazy loading. I get why it's a big thing on cellular connections, but I do most of my browsing on WIFI. With lazy loading, I'll frequently be reading an article, reach an image that hasn't loaded in yet, and have to wait for it, even though I've been reading for several minutes. Sometimes I also have to refind my place as the whole darn page reflows. I wish there was a middle ground... detect I'm on WIFI and go ahead and load in the lazy stuff after the above the fold stuff.”

Right now, Chrome is the only browser to support native lazy loading. However, other browsers may follow suit, considering Firefox has an open bug for implementing lazy loading and Edge is based on Chromium.

Why should your e-commerce site opt for Headless Magento 2?
Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
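A page adopting the attribute would typically keep a fallback for browsers that don't support it yet. The sketch below shows one way to express that decision; `imgMarkup` and the `lazyload` class are made-up names, and in a real browser the feature test would be `'loading' in HTMLImageElement.prototype` rather than a plain parameter:

```javascript
// Sketch: emit markup that uses native lazy-loading when available,
// falling back to a hypothetical JS library hook otherwise.
function imgMarkup(src, { nativeLazy, width, height }) {
  // Explicit width/height reserve space so the page does not reflow
  // when the deferred image finally loads.
  const size = `width="${width}" height="${height}"`;
  return nativeLazy
    ? `<img src="${src}" ${size} loading="lazy">`
    : `<img data-src="${src}" ${size} class="lazyload">`;
}

console.log(imgMarkup('hero.jpg', { nativeLazy: true, width: 640, height: 360 }));
// <img src="hero.jpg" width="640" height="360" loading="lazy">
```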

Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

Fatema Patrawala
26 Aug 2019
5 min read
Last week, the Apache Flink community announced the release of Apache Flink 1.9.0. The Flink community defines the project goal as “to develop a stream processing system to unify and power many forms of real-time and offline data processing applications as well as event-driven applications.” In this release, they have made a huge step forward in that effort by integrating Flink’s stream and batch processing capabilities under a single, unified runtime.

There are significant features in this release, namely batch-style recovery for batch jobs and a preview of the new Blink-based query engine for Table API and SQL queries. The team also announced the availability of the State Processor API, one of the most frequently requested features, which enables users to read and write savepoints with Flink DataSet jobs. Additionally, Flink 1.9 includes a reworked WebUI, a preview of Flink’s new Python Table API, and integration with the Apache Hive ecosystem.

Let us take a look at the major new features and improvements:

New Features and Improvements in Apache Flink 1.9.0

Fine-grained Batch Recovery

The time to recover a batch (DataSet, Table API and SQL) job from a task failure is significantly reduced. Until Flink 1.9, task failures in batch jobs were recovered by canceling all tasks and restarting the whole job, i.e., the job was started from scratch and all progress was voided. With this release, Flink can be configured to limit the recovery to only those tasks that are in the same failover region. A failover region is the set of tasks that are connected via pipelined data exchanges. Hence, the batch-shuffle connections of a job define the boundaries of its failover regions.

State Processor API

Up to Flink 1.9, accessing the state of a job from the outside was limited to the experimental Queryable State. In this release, the team introduced a new, powerful library to read, write and modify state snapshots using the batch DataSet API.
In practice, this means:

Flink job state can be bootstrapped by reading data from external systems, such as external databases, and converting it into a savepoint.
State in savepoints can be queried using any of Flink’s batch APIs (DataSet, Table, SQL), for example to analyze relevant state patterns or check for discrepancies in state that can support application auditing or troubleshooting.
The schema of state in savepoints can be migrated offline, compared to the previous approach requiring online migration on schema access.
Invalid data in savepoints can be identified and corrected.

The new State Processor API covers all variations of snapshots: savepoints, full checkpoints and incremental checkpoints.

Stop-with-Savepoint

Cancelling with a savepoint is a common operation for stopping/restarting, forking or updating Flink jobs. However, the existing implementation did not guarantee output persistence to external storage systems for exactly-once sinks. To improve the end-to-end semantics when stopping a job, Flink 1.9 introduces a new SUSPEND mode to stop a job with a savepoint that is consistent with the emitted data. You can suspend a job with Flink’s CLI client as follows:

bin/flink stop -p [:targetDirectory] :jobId

The final job state is set to FINISHED on success, allowing users to detect failures of the requested operation.

Flink WebUI Rework

After a discussion about modernizing the internals of Flink’s WebUI, this component was reconstructed using the latest stable version of Angular — basically, a bump from Angular 1.x to 7.x. The redesigned version is the default in Apache Flink 1.9.0; however, there is a link to switch to the old WebUI.

Preview of the new Blink SQL Query Processor

After the donation of Blink to Apache Flink, the community worked on integrating Blink’s query optimizer and runtime for the Table API and SQL. The team refactored the monolithic flink-table module into smaller modules.
This resulted in a clear separation of well-defined interfaces between the Java and Scala API modules and the optimizer and runtime modules.

Other important changes in this release:

The Table API and SQL are now part of the default configuration of the Flink distribution. Previously, the Table API and SQL had to be enabled by moving the corresponding JAR file from ./opt to ./lib.
The machine learning library (flink-ml) has been removed in preparation for FLIP-39.
The old DataSet and DataStream Python APIs have been removed in favor of FLIP-38.
Flink can be compiled and run on Java 9. Note that certain components interacting with external systems (connectors, filesystems, reporters) may not work, since the respective projects may have skipped Java 9 support.

The binary distribution and source artifacts for this release are now available via the Downloads page of the Flink project, along with the updated documentation. Flink 1.9 is API-compatible with previous 1.x releases for APIs annotated with the @Public annotation. You can review the release notes for the detailed list of changes and new features before upgrading your Flink setup to 1.9.0.

Apache Flink 1.8.0 releases with finalized state schema evolution support
Apache Flink founders data Artisans could transform stream processing with patent-pending tool
Apache Flink version 1.6.0 released!


Amazon Transcribe Streaming announces support for WebSockets

Savia Lobo
29 Jul 2019
3 min read
Last week, Amazon announced that its automatic speech recognition (ASR) service, Amazon Transcribe, now supports WebSockets. According to Amazon, “WebSocket support opens Amazon Transcribe Streaming up to a wider audience and makes integrations easier for customers that might have existing WebSocket-based integrations or knowledge”.

Amazon Transcribe allows developers to easily add speech-to-text capability to their applications with its ASR service. Amazon announced the general availability of Amazon Transcribe at the AWS San Francisco Summit 2018. With the Amazon Transcribe API, users can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech. Real-time transcripts from a live audio stream are also possible with the Transcribe API.

Until now, the Amazon Transcribe Streaming API has been available using HTTP/2 streaming. The new WebSocket support adds another integration option for bringing real-time voice capabilities to projects built with Transcribe.

What are WebSockets?

WebSockets are a protocol built atop TCP, similar to HTTP. HTTP is excellent for short-lived requests; however, it does not handle persistent real-time communications well. Because of this, the first Amazon Transcribe Streaming API made available uses HTTP/2 streams, which solve a lot of the issues that HTTP had with real-time communications. Amazon points out that while “an HTTP connection is normally closed at the end of the message, a WebSocket connection remains open”. With this advantage, messages can be sent bi-directionally with no bandwidth or latency added by handshaking and negotiating a connection. WebSocket connections are full-duplex, which means that the server and client can both transmit data at the same time. WebSockets were also designed “for cross-domain usage, so there’s no messing around with cross-origin resource sharing (CORS) as there is with HTTP”.
Amazon Transcribe Streaming using WebSockets

While using the WebSocket protocol to stream audio, Amazon Transcribe transcribes the stream in real-time. When a user encodes the audio with event stream encoding, Amazon Transcribe responds with a JSON structure, which is also encoded using event stream encoding.

The key components of a WebSocket request to Amazon Transcribe are:

Creating a pre-signed URL to access Amazon Transcribe.
Creating binary WebSocket frames containing event stream encoded audio data.
Handling WebSocket frames in the response.

The languages that Amazon Transcribe currently supports during real-time transcription are British English (en-GB), US English (en-US), French (fr-FR), Canadian French (fr-CA), and US Spanish (es-US). To know more about the WebSockets API in detail, visit Amazon’s official post.

Understanding WebSockets and Server-sent Events in Detail
Implementing a non-blocking cross-service communication with WebClient [Tutorial]
Introducing Kweb: A Kotlin library for building rich web applications
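On the client side, each JSON structure decoded from a response frame carries the transcription results. The sketch below pulls the finalized text out of one such message; the Transcript/Results/Alternatives field names follow the shape of the service's streaming responses, but treat the exact fields as an assumption rather than a guaranteed contract, and the example message is made up:

```javascript
// Sketch: extract finalized text from a decoded Amazon Transcribe
// Streaming response. Partial results (IsPartial: true) are still
// being refined and are skipped here.
function finalizedText(message) {
  const results = (message.Transcript && message.Transcript.Results) || [];
  return results
    .filter((r) => !r.IsPartial) // keep only finalized results
    .map((r) => r.Alternatives[0].Transcript) // take the top-ranked alternative
    .join(' ');
}

// Example message as it might arrive over the WebSocket:
const message = {
  Transcript: {
    Results: [
      { IsPartial: false, Alternatives: [{ Transcript: 'hello world' }] },
      { IsPartial: true, Alternatives: [{ Transcript: 'hello world how' }] },
    ],
  },
};

console.log(finalizedText(message)); // 'hello world'
```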


IETF proposes JSON Meta Application Protocol (JMAP) as the next standard for email protocols

Bhagyashree R
22 Jul 2019
4 min read
Last week, the Internet Engineering Task Force (IETF) published the JSON Meta Application Protocol (JMAP) as RFC 8620, now marked as a “Proposed Standard”. The protocol is authored by Neil Jenkins, Director and UX Architect at Fastmail, and Chris Newman, Principal Engineer at Oracle.

https://twitter.com/Fastmail/status/1152281229083009025

What is JSON Meta Application Protocol (JMAP)?

Fastmail started working on JMAP in 2014 as an internal development project. It is an internet protocol that handles the submission and synchronization of emails, contacts, and calendars between a client and a server, providing a consistent interface to different data types. It is developed to be a possible successor to IMAP and a potential replacement for the CardDAV and CalDAV standards.

Why is it needed?

According to the developers, the current standards for email protocols, that is, IMAP and SMTP, for client-server communication are outdated and complicated. They are not well-suited for modern mobile networks and high-latency scenarios. These limitations in the current standards have led to stagnation in the development of good new email clients. Many have also started coming up with proprietary alternatives like Gmail, Outlook, Nylas, and Context.io.

Another drawback is that many mobile email clients proxy everything via their own server instead of talking directly to the user’s mail store, for example, Outlook and Newton. This is not only bad for client authors, who have to run server infrastructure in addition to just building their clients, but also raises security and privacy concerns.

Here’s a video by Fastmail explaining the purpose behind JMAP: https://www.youtube.com/watch?v=8qCSK-aGSBA

How JMAP solves the limitations in current standards

JMAP is designed to be easier for developers to work with and to enable efficient use of network resources.
Here are some of its properties that address the limitations in the current standards:

Stateless: It does not require a persistent connection, which fits mobile environments best.
Immutable ids: It is more like NFS or filesystems with inodes rather than a name-based hierarchy, which makes renaming easy to detect and cheap to sync.
Batchable API calls: It batches multiple API calls in a single request to the server, resulting in reduced round trips and better battery life for mobile users.
Flood control: The client can put limits on how much data the server is allowed to send. For instance, a command will return a ‘tooManyChanges’ error on exceeding the client’s limit, rather than returning a million “* 1 EXPUNGED” lines as can happen in IMAP.
No custom parser required: Support for JSON, a well understood and widely supported encoding format, makes it easier for developers to get started.
A backward-compatible data model: Its data model is backward compatible with both IMAP folders and Gmail-style labels.

Fastmail is already using JMAP in production for its Fastmail and Topicbox products. It is also seeing some adoption in organizations like the Apache Software Foundation, which added experimental support for JMAP to its free mail server Apache James in version 3.0.

Many developers are happy about this announcement. A user on Hacker News said, “JMAP client and the protocol impresses a lot. Just 1 to a few calls, you can re-sync entire emails state in all folders. With IMAP need to select each folder to inspect its state. Moreover, just a few IMAP servers support fast synchronization extensions like QRESYNC or CONDSTORE.”

However, its use of JSON did spark some debate on Hacker News. “JSON is an incredibly inefficient format for shareable data: it is annoying to write, unsafe to parse and it even comes with a lot of overhead (colons, quotes, brackets and the like). I'd prefer s-expressions,” a user commented.
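For a concrete picture of the batchable API calls described above, here is a sketch of a single JMAP request body. The method names and back-reference mechanism follow the JMAP specifications, while the account id and query details are made up for illustration:

```javascript
// Sketch of one JMAP request that batches several method calls into a
// single JSON body. The '#ids' back-reference lets call3 consume
// call2's result server-side, so the whole exchange is one round trip.
const request = {
  using: ['urn:ietf:params:jmap:core', 'urn:ietf:params:jmap:mail'],
  methodCalls: [
    // 1. Fetch all mailboxes.
    ['Mailbox/get', { accountId: 'a1' }, 'call1'],
    // 2. Find the ids of the 10 newest emails.
    ['Email/query', { accountId: 'a1', limit: 10 }, 'call2'],
    // 3. Fetch those emails, referencing call2's result.
    ['Email/get', {
      accountId: 'a1',
      '#ids': { resultOf: 'call2', name: 'Email/query', path: '/ids' },
    }, 'call3'],
  ],
};

// The entire batch serializes to one JSON document for a single POST.
console.log(JSON.stringify(request).length > 0); // true
```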
To stay updated with the current developments in JMAP, you can join its mailing list. To read more about its specification, check out the official website and the GitHub repository.

Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]
Google announces the general availability of AMP for email, faces serious backlash from users
Sublime Text 3.2 released with Git integration, improved themes, editor control and much more!

Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

Bhagyashree R
18 Jul 2019
2 min read
Yesterday, Syrus Akbary, the founder and CEO of Wasmer, introduced WebAssembly Interfaces. They provide a convenient s-expression (symbolic expression) text format that can be used to validate the imports and exports of a Wasm module.

Why WebAssembly Interfaces are needed

The Wasmer runtime initially supported only running Emscripten-generated modules and later added support for other ABIs, including WASI and Wascap. WebAssembly runtimes like Wasmer have to do a lot of checks before starting an instance, to ensure a WebAssembly module is compliant with a certain Application Binary Interface (Emscripten or WASI). They check whether the module imports and exports are what the runtime expects, namely that the function signatures and global types match. These checks are important for:

Making sure a module is going to work with a certain runtime.
Assuring a module is compatible with a certain ABI.
Creating a plugin ecosystem for any program that uses WebAssembly as part of its plugin system.

The team behind Wasmer introduced WebAssembly Interfaces to ease this process by providing a way to validate that imports and exports are as expected. This is what a WebAssembly Interface for WASI looks like:

Source: Wasmer

WebAssembly Interfaces allow you to run various programs with each ABI, such as Nginx (Emscripten) and Cowsay (WASI). When used together with WAPM (the WebAssembly Package Manager), you will also be able to make use of the entire WAPM ecosystem to create, verify, and distribute plugins. The team has also proposed WebAssembly Interfaces as a standard for defining a specific set of imports and exports that a module must have, in a way that is statically analyzable.

Read the official announcement by Wasmer to know more.

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces
Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more


Introducing QuickJS, a small and easily embeddable JavaScript engine

Bhagyashree R
12 Jul 2019
3 min read
On Tuesday, Fabrice Bellard, the creator of FFmpeg and QEMU, and Charlie Gordon, a C expert, announced the first public release of QuickJS. Released under the MIT license, it is a “small but complete JavaScript engine” that comes with support for the latest ES2019 language specification.

Features in the QuickJS JavaScript engine

Small and easily embeddable: The engine is formed by a few C files and does not have any external dependency.
Fast interpreter: The interpreter shows impressive speed by running 56,000 tests from the ECMAScript Test Suite in just 100 seconds, on a single-core CPU. A runtime instance completes its life cycle in less than 300 microseconds.
ES2019 support: Support for the ES2019 specification is almost complete, including modules, asynchronous generators, and full Annex B support (legacy web compatibility). It does not yet support realms and tail calls.
No external dependency: It can compile JavaScript source to executables without any external dependency.
Command-line interpreter: The command-line interpreter comes with contextual colorization and completion implemented in JavaScript.
Garbage collection: It uses reference counting with cycle removal to free objects automatically and deterministically. This reduces memory usage and ensures deterministic behavior of the engine.
Mathematical extensions: You can find all the mathematical extensions in the ‘qjsbn’ version; they are fully backward-compatible with standard JavaScript. It supports big integers (BigInt), big floating-point numbers (BigFloat), and operator overloading, and comes with ‘bigint’ and ‘math’ modes.

This news sparked a discussion on Hacker News, where developers were all praises for Bellard’s and Gordon’s outstanding work on this project. A developer commented, “Wow. The core is a single 1.5MB file that's very readable, it supports nearly all of the latest standard, and Bellard even added his own extensions on top of that.
It has compile-time options for either a NaN-boxing or traditional tagged union object representation, so he didn't just go for a single minimal implementation (unlike e.g. OTCC) but even had the time and energy to explore a bit. I like the fact that it's not C99 but appears to be basic C89, meaning very high portability. Despite my general distaste for JS largely due to websites tending to abuse it more than anything, this project is still immensely impressive and very inspiring, and one wonders whether there is still "space at the bottom" for even smaller but functionality competitive implementations.”

Another wrote, “I can't wait to mess around with this, it looks super cool. I love the minimalist approach. If it's truly spec compliant, I'll be using this to compile down a bunch of CLI scripts I've written that currently use node. I tend to stick with the ECMAScript core whenever I can and avoid using packages from NPM, especially ones with binary components. A lot of the time that slows me down a bit because I'm rewriting parts of libraries, but here everything should just work with a little bit of translation for the OS interaction layer which is very exciting.”

To know more about QuickJS, check out Fabrice Bellard’s official website.

Firefox 67 will come with faster and reliable JavaScript debugging tools
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!
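The ES2019 surface QuickJS targets is the same one other modern engines expose, so a few of the supported features can be sketched in ordinary JavaScript. The sample data below is made up, and the snippet runs in the qjs interpreter or any ES2019-capable engine:

```javascript
// A few ES2019 features QuickJS supports, shown with made-up data.

// Array.prototype.flat: flatten nested arrays to a given depth.
console.log([1, [2, [3]]].flat(2)); // [ 1, 2, 3 ]

// Object.fromEntries: the inverse of Object.entries.
console.log(Object.fromEntries([['a', 1], ['b', 2]])); // { a: 1, b: 2 }

// String.prototype.trimStart / trimEnd.
console.log('  qjs  '.trimStart()); // 'qjs  '

// Optional catch binding: no parameter required after `catch`.
function parses(json) {
  try { JSON.parse(json); return true; } catch { return false; }
}
console.log(parses('{')); // false
```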