Tech News - Server-Side Web Development

85 Articles

LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces

Bhagyashree R
02 Jul 2019
3 min read
Yesterday, the team behind V8, an open source JavaScript engine, shared the work they and the community have been doing to make LLVM WebAssembly the default backend for Emscripten. LLVM is a compiler framework and Emscripten is an LLVM-to-Web compiler.

https://twitter.com/v8js/status/1145704863377981445

The LLVM WebAssembly backend will be the third backend in Emscripten. The original compiler was written in JavaScript and parsed LLVM IR in text form. In 2013, a new backend called Fastcomp was written by forking LLVM; it was designed to emit asm.js and brought a big improvement in code quality and compile times. According to the announcement, the LLVM WebAssembly backend beats the old Fastcomp backend on most metrics. Here are the advantages the new backend comes with:

Much faster linking
The LLVM WebAssembly backend allows incremental compilation using WebAssembly object files. Fastcomp uses LLVM Intermediate Representation (IR) in bitcode files, which means that at link time the IR still has to be compiled by LLVM; this is why it shows slower link times. WebAssembly object files (.o), on the other hand, already contain compiled WebAssembly code, which accounts for much faster linking.

Faster and smaller code
The new backend shows a significant code size reduction compared to Fastcomp. "We see similar things on real-world codebases that are not in the test suite, for example, BananaBread, a port of the Cube 2 game engine to the Web, shrinks by over 6%, and Doom 3 shrinks by 15%!," shared the team in the announcement. The factors that account for the faster and smaller code are that LLVM has better IR optimizations and its backend codegen is smarter, as it can do things like global value numbering (GVN). Along with that, the team has put effort into tuning the Binaryen optimizer, which also helps make the code smaller and faster compared to Fastcomp.

Support for all LLVM IR
While Fastcomp could handle the LLVM IR generated by clang, it often failed on other sources. The LLVM WebAssembly backend, by contrast, can handle any IR because it uses the common LLVM backend infrastructure.

New WebAssembly features
Fastcomp generates asm.js before running asm2wasm, which makes it difficult to handle new WebAssembly features like tail calls, exceptions, SIMD, and so on. "The WebAssembly backend is the natural place to work on those, and we are in fact working on all of the features just mentioned!," the team added.

To test the WebAssembly backend, you just have to run the following commands:

emsdk install latest-upstream
emsdk activate latest-upstream

Read more in detail on V8's official website.

V8 7.2 Beta releases with support for public class fields, well-formed JSON.stringify, and more
V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more
Google's V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon
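For context on what it means to use the WebAssembly that Emscripten emits, below is a minimal, hedged TypeScript sketch of instantiating a compiled .wasm file in the browser with the standard WebAssembly JavaScript API. It is generic rather than specific to Emscripten's generated glue code, and the file name, import object, and exported function name are placeholders for this example.

// Minimal sketch: fetch and instantiate a compiled WebAssembly module.
// "app.wasm" and the exported function name "main" are hypothetical placeholders.
async function loadWasm(url: string): Promise<WebAssembly.Instance> {
  // instantiateStreaming compiles the module while the bytes are still downloading.
  const { instance } = await WebAssembly.instantiateStreaming(fetch(url), {
    env: {}, // imports expected by the module would go here
  });
  return instance;
}

loadWasm("app.wasm").then((instance) => {
  // Call an exported function, assuming the module exports one named "main".
  const main = instance.exports.main as () => number;
  console.log("module returned", main());
});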

Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard

Bhagyashree R
02 Jul 2019
3 min read
Yesterday, Google announced that it has teamed up with Martijn Koster, the creator of the Robots Exclusion Protocol (REP), and other webmasters to make the 25-year-old protocol an internet standard. The REP, better known as robots.txt, has now been submitted to the IETF (Internet Engineering Task Force). Google has also open sourced its robots.txt parser and matcher as a C++ library.

https://twitter.com/googlewmc/status/1145634145261051906

REP was created back in 1994 by Martijn Koster, a software engineer known for his contributions to internet searching. Since its inception, it has been widely adopted by websites to indicate whether web crawlers and other automatic clients are allowed to access the site or not. When an automatic client wants to visit a website, it first checks for a robots.txt file, which looks something like this:

User-agent: *
Disallow: /

The User-agent: * statement means that the rules apply to all robots, and Disallow: / means that a robot is not allowed to visit any page of the site.

Despite being used widely on the web, REP is still not an internet standard. With no set-in-stone rules, developers have interpreted this "ambiguous de-facto protocol" differently over the years, and it has not been updated since its creation to address modern corner cases. The proposed draft is a standardized and extended version of REP that gives publishers fine-grained control to decide what they would like to be crawled on their site and potentially shown to interested users. The following are some of the important updates in the proposed REP:

- It is no longer limited to HTTP and can be used by any URI-based transfer protocol, for instance, FTP or CoAP.
- Developers need to parse at least the first 500 kibibytes of a robots.txt file. This ensures that connections are not kept open for too long, avoiding unnecessary strain on servers.
- It defines a new maximum caching time of 24 hours, after which crawlers cannot use the cached robots.txt. This allows website owners to update their robots.txt whenever they want and also prevents servers from being overloaded by robots.txt requests from crawlers.
- It defines a provision for cases when a previously accessible robots.txt file becomes inaccessible because of server failures: in such cases, the disallowed pages will not be crawled for a reasonably long period of time.

The updated REP standard is currently in its draft stage and Google is now seeking feedback from developers. It wrote, "we uploaded the draft to IETF to get feedback from developers who care about the basic building blocks of the internet. As we work to give web creators the controls they need to tell us how much information they want to make available to Googlebot, and by extension, eligible to appear in Search, we have to make sure we get this right."

To know more in detail, check out the official announcement by Google. Also, check out the proposed REP draft.

Do Google Ads secretly track Stack Overflow users?
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers
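To make the crawler-side check described above concrete, here is a deliberately simplified TypeScript sketch of fetching a robots.txt file and testing a path against its User-agent and Disallow rules. It is not Google's open sourced C++ parser, it ignores most of the draft's rules (Allow lines, wildcards, the 500 KiB limit, caching), and the crawler name and site URL in the usage comment are placeholders.

// Simplified sketch of a robots.txt check: fetch the file, find the rules that
// apply to our user agent, and test a path against the Disallow prefixes.
async function isAllowed(site: string, path: string, agent: string): Promise<boolean> {
  const res = await fetch(new URL("/robots.txt", site));
  if (!res.ok) return true; // no robots.txt: assume crawling is allowed

  const text = await res.text();
  let applies = false;            // are we inside a group that matches our agent?
  const disallows: string[] = [];

  for (const rawLine of text.split("\n")) {
    const line = rawLine.split("#")[0].trim(); // strip comments
    const sep = line.indexOf(":");
    if (sep === -1) continue;
    const field = line.slice(0, sep).trim().toLowerCase();
    const value = line.slice(sep + 1).trim();

    if (field === "user-agent") {
      applies = value === "*" || value.toLowerCase() === agent.toLowerCase();
    } else if (applies && field === "disallow" && value !== "") {
      disallows.push(value);
    }
  }
  return !disallows.some((prefix) => path.startsWith(prefix));
}

// Usage: isAllowed("https://example.com", "/private/page", "examplebot").then(console.log);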

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more

Vincy Davis
17 Jun 2019
6 min read
Last week, HAProxy 2.0 was released with critical features for cloud-native and containerized environments. This is an LTS (long-term support) release, which includes a powerful set of core features such as Layer 7 retries, cloud-native threading and logging, polyglot extensibility, gRPC support and more, and will improve seamless integration into modern architectures. In conjunction with this release, the HAProxy team has also introduced the HAProxy Kubernetes Ingress Controller and the HAProxy Data Plane API. The founder of HAProxy Technologies, Willy Tarreau, has said that these developments will come with the HAProxy 2.1 version. The HAProxy project has also opened up issue submissions on its HAProxy GitHub account.

Some features of HAProxy 2.0

Cloud-Native Threading and Logging
HAProxy can now scale to accommodate any environment with less manual configuration. This enables the number of worker threads to match the machine's number of available CPU cores. The process setting is no longer required, thus simplifying the bind line. Two new build parameters have been added, MAX_THREADS and MAX_PROCS, which avoid allocating huge structs. Logging has also been made easier for containerized environments: direct logging to stdout and stderr, or to a file descriptor, is now possible.

Kubernetes Ingress Controller
The HAProxy Kubernetes Ingress Controller provides high-performance ingress for Kubernetes-hosted applications. It supports TLS offloading, Layer 7 routing, rate limiting, and whitelisting. Ingresses can be configured through either ConfigMap resources or annotations. The Ingress Controller gives users the ability to:
- Use only one IP address and port and direct requests to the correct pod based on the Host header and request path
- Secure communication with built-in SSL termination
- Apply rate limits for clients while optionally whitelisting IP addresses
- Select from among any of HAProxy's load-balancing algorithms
- Get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics
- Set maximum connection limits to backend servers to prevent overloading services

Layer 7 Retries
With HAProxy 2.0, it is possible to retry failed HTTP requests from another server at Layer 7. The new configuration directive, retry-on, can be used in a defaults, listen, or backend section, and the number of retry attempts can be specified using the retries directive. The full list of retry-on options is given on the HAProxy blog. HAProxy 2.0 also introduces a new http-request action called disable-l7-retry. It allows the user to disable any attempt to retry a request if it fails for any reason other than a connection failure. This can be useful to make sure that POST requests aren't retried.

Polyglot Extensibility
The Stream Processing Offload Engine (SPOE) and Stream Processing Offload Protocol (SPOP) were introduced in HAProxy 1.7, aiming to create the extension points necessary to build upon HAProxy using any programming language. From HAProxy 2.0, libraries and examples are available for the following languages and platforms:
- C
- .NET Core
- Golang
- Lua
- Python

gRPC
HAProxy 2.0 delivers full support for the open-source RPC framework gRPC. This allows bidirectional streaming of data, detection of gRPC messages, and logging of gRPC traffic. Two new converters, protobuf and ungrpc, have been introduced to extract raw Protocol Buffer messages. Using Protocol Buffers, gRPC enables users to serialize messages into a binary format that's compact and potentially more efficient than JSON. Users need to set up a standard end-to-end HTTP/2 configuration to start using gRPC in HAProxy.

HTTP Representation (HTX)
The Native HTTP Representation (HTX) was introduced with HAProxy 1.9. Starting from 2.0, it is enabled by default. HTX creates strongly typed, well-delineated header fields and allows for gaps and out-of-order fields. It also allows HAProxy to maintain consistent semantics from end to end and provides higher performance when translating HTTP/2 to HTTP/1.1 or vice versa.

LTS Support for 1.9 Features
HAProxy 2.0 brings LTS support for many features that were introduced or improved upon during the 1.9 release. Some of them are listed below:
- Small Object Cache with caching size increased up to 2GB, set with the max-object-size directive. The total-max-size setting determines the total size of the cache and can be increased up to 4095MB.
- New fetches like date_us, cpu_calls and more, which report either an internal state or information from layers 4, 5, 6, and 7.
- New converters like strcmp, concat and more that allow transforming data within HAProxy.
- Server Queue Priority Control, which lets users prioritize some queued connections over others. This is helpful to deliver JavaScript or CSS files before images.
- The resolvers section supports using resolv.conf by specifying parse-resolv-conf.

The HAProxy team has planned to build HAProxy 2.1 with features like UDP support, OpenTracing, and dynamic SSL certificate updates. The inaugural HAProxy community conference, HAProxyConf, is scheduled to take place in Amsterdam, Netherlands on November 12-13, 2019.

A user on Hacker News comments, "HAProxy is probably the best proxy server I had to deal with ever. It's performance is exceptional, it does not interfere with L7 data unless you tell it to and it's extremely straightforward to configure reading the manual."

Meanwhile, some are busy comparing HAProxy with the nginx web server. One user says, "In my previous company we used to use HAProxy, and it was a hassle. Yes, it is powerful. However, nginx is way easier to configure and set up, and performance wise is a contender for most usual applications people needed. nginx just fulfills most people's requirements for reverse proxy and has solid HTTP/2 support (and other features) for way longer."

Another user states, "Big difference is that haproxy did not used to support ssl without using something external like stunnel -- nginx basically did it all out of the box and I haven't had a need for haproxy in quite some time now."

Others suggest that HAProxy is trying hard to stay equipped with the latest features with this release.

https://twitter.com/garthk/status/1140366975819849728

A user on Hacker News agrees, saying, "These days I think HAProxy and nginx have grown a lot closer together on capabilities."

Visit the HAProxy blog for more details about HAProxy 2.0.

HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
MariaDB announces the release of MariaDB Enterprise Server 10.4
Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service

NgRx 8 released with NgRx Data, creator functions, mock selectors for isolated unit testing, and more!

Bhagyashree R
12 Jun 2019
2 min read
On Monday, the team behind NgRx, a platform that provides reactive libraries for Angular, announced the release of NgRx 8. This release includes the NgRx Data package, creator functions, four runtime checks, mock selectors, and much more. Following are some of the updates in NgRx 8:

NgRx Data integrated into the NgRx platform
In this release, the team has integrated the angular-ngrx-data library by John Papa and Ward Bell directly into the NgRx platform as a first-party package. Using NgRx in your Angular applications properly requires a deeper understanding and a lot of boilerplate code. This package gives you a "gentle introduction" to NgRx without the boilerplate code and simplifies entity data management.

Redesigned creator functions
NgRx 8 comes with two new creator functions:
- createAction: Previously, while creating an action you had to create an action type, create a class, and lastly, create an action union. The new createAction function allows you to create actions in a less verbose way.
- createReducer: With this function, you will be able to create a reducer without a switch statement. It takes the initial state as the first parameter and any number of 'on' functions.

Four new runtime checks
To help developers better follow the NgRx core concepts and best practices, this release comes with four runtime checks. These are introduced to "shorten the feedback loop of easy-to-make mistakes when you're starting to use NgRx, or even a well-seasoned developer might make." The four runtime checks that have been added are:
- The strictStateImmutability check verifies whether a developer is trying to modify the state object.
- The strictActionImmutability check verifies that actions are not modified.
- The strictStateSerializability check verifies if the state is serializable.
- The strictActionSerializability check verifies if the action is serializable.
All of these checks are opt-in and will be disabled automatically in production builds.

Mock selectors for isolated unit testing
NgRx 7 came with MockStore, a simpler way to condition NgRx state in unit tests, but it does not allow isolated unit testing on its own. NgRx 8 combines mock selectors and MockStore to make this possible. You can use these mock selectors by importing @ngrx/store/testing.

To know more in detail, check out the official announcement on Medium.

ng-conf 2018 highlights, the popular angular conference
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
5 useful Visual Studio Code extensions for Angular developers
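To illustrate the less verbose action and reducer definitions described in the article above, here is a small hedged sketch using NgRx's createAction and createReducer functions. The feature name, action types, and state shape are made up for the example.

import { createAction, createReducer, on, props } from '@ngrx/store';

// Actions defined with createAction: no classes, no action unions.
export const addBook = createAction(
  '[Books] Add Book',
  props<{ title: string }>()
);
export const clearBooks = createAction('[Books] Clear Books');

export interface BooksState {
  titles: string[];
}

export const initialState: BooksState = { titles: [] };

// Reducer defined with createReducer: initial state plus any number of on()
// handlers, no switch statement required.
export const booksReducer = createReducer(
  initialState,
  on(addBook, (state, { title }) => ({ ...state, titles: [...state.titles, title] })),
  on(clearBooks, (state) => ({ ...state, titles: [] }))
);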

Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!

Amrata Joshi
03 Jun 2019
3 min read
Yesterday, at JSConfEU '19, the team behind Entropic announced Entropic, a federated package registry with a new CLI that works smoothly with the network. Entropic is Apache 2 licensed and federated, and it mirrors all packages that users install from the legacy package manager. Entropic offers a new file-centric API and a content-addressable storage system that minimizes the amount of data that needs to be retrieved over the network. This file-centric approach also applies to the publication API.

https://www.youtube.com/watch?v=xdLMbvEc2zk

C J Silverio, Principal Engineer at Eaze, said during the announcement, "I actually believe in open source despite everything. I think it's good for us as human beings to give things away to each other [...] we would like to give something away to you all right now."

https://twitter.com/kosamari/status/1134876898604048384
https://twitter.com/i/moments/1135060936216272896
https://twitter.com/colestrode/status/1135320460072296449

Features of Entropic

Package specifications
All Entropic packages are namespaced, and a full Entropic package spec includes the hostname of its registry. Package specifications are fully qualified with a namespace, hostname, and package name, and take the form namespace@example.com/pkg-name. For example, the ds cli is specified by chris@entropic.dev/ds. If a user publishes a package to their local registry that depends on packages from other registries, the local instance will mirror all the packages on which the user's package depends. The team aims to keep each instance entirely self-sufficient, so installs aren't dependent on a resource that might vanish. Abandoned packages are moved to the abandonware namespace. Packages can be updated by any user in the package's namespace and can also have a list of maintainers.

The ds cli
Entropic requires a new command-line client known as ds, or "entropy delta". According to the Entropic team, the cli doesn't have a very sensible shell for running commands yet. Currently, users who want to install packages with ds can run ds build in a directory with a Package.toml to produce a ds/node_modules directory. The GitHub page reads, "This is a temporary situation!"

Entropic appears to be an alternative to npm that seeks to address the limitations of the ownership model of npm, Inc. It aims to shift from centralized ownership to federated ownership, to restore power back to the commons.

https://twitter.com/deluxee/status/1135489151627870209

To know more about this news, check out the GitHub page.

GitHub announces beta version of GitHub Package Registry, its new package management service
npm Inc. announces npm Enterprise, the first management code registry for organizations
Using the Registry and xlswriter modules
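As a small illustration of the fully qualified spec format described in the article above, here is a hedged TypeScript sketch that splits a spec such as chris@entropic.dev/ds into its namespace, registry host, and package name. This is not Entropic's own code, only a reading of the documented format.

interface PackageSpec {
  namespace: string; // e.g. "chris"
  host: string;      // e.g. "entropic.dev"
  name: string;      // e.g. "ds"
}

// Parse a fully qualified Entropic-style spec: namespace@host/pkg-name.
function parseSpec(spec: string): PackageSpec {
  const match = /^([^@/]+)@([^@/]+)\/(.+)$/.exec(spec);
  if (!match) {
    throw new Error(`not a fully qualified spec: ${spec}`);
  }
  const [, namespace, host, name] = match;
  return { namespace, host, name };
}

console.log(parseSpec("chris@entropic.dev/ds"));
// -> { namespace: "chris", host: "entropic.dev", name: "ds" }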

Salesforce open sources ‘Lightning Web Components framework’

Savia Lobo
30 May 2019
4 min read
Yesterday, developers at Salesforce open sourced the Lightning Web Components framework, a new JavaScript framework that leverages the web standards breakthroughs of the last five years. This will allow developers to contribute to the roadmap and also to use the framework irrespective of whether they are building applications on Salesforce or on any other platform. Lightning Web Components was first introduced in December 2018.

The developers mention in their official blog post, "The last five years have seen an unprecedented level of innovation in web standards, mostly driven by the W3C/WHATWG and the ECMAScript Technical Committee (TC39): ECMAScript 6, 7, 8, 9 and beyond, Web components, Custom elements, Templates and slots, Shadow DOM, etc." This wave of innovation has led to a dramatic transformation of the web stack: many features that once required frameworks are now standard. The framework was "born as a modern framework built on the modern web stack", the developers say.

The Lightning Web Components framework includes three key parts:
- The Lightning Web Components framework, the framework's engine.
- The Base Lightning Components, a set of over 70 UI components all built as custom elements.
- Salesforce Bindings, a set of specialized services that provide declarative and imperative access to Salesforce data and metadata, data caching, and data synchronization.

The Lightning Web Components framework doesn't have dependencies on the Salesforce platform; Salesforce-specific services are built on top of the framework. This layered architecture means that one can now use the framework to build web apps that run anywhere. The benefits of this include:
- You only need to learn a single framework.
- You can share code between apps.
- As Lightning Web Components is built on the latest web standards, you know you are using a cutting-edge framework based on the latest patterns and best practices.

Some users, however, said they are unhappy and find Salesforce's Lightning comparatively slow. One user wrote on Hacker News, "the Lightning Experience always felt non-performant compared to the traditional server-rendered pages. Things always took a noticeable amount of time to finish loading. Even though the traditional interface is, by appearance alone, quite traditional, as least it felt fast. I don't know if Lightning's problems were with poor performing front end code, or poor API performance. But I was always underwhelmed when testing the SPA version of Salesforce."

Another user wrote, "One of the bigger mistakes Salesforce made with Lightning is moving from purely transactional model to default-cached-no-way-to-purge model. Without letting a single developer to know that they did it, what are the pitfalls or how to disable it (you can't). WRT Lightning motivation, sounds like a much better option would've been supplement older server-rendered pages with some JS, update the stylesheets and make server language more useable. In fact server language is still there, still heavily used and still lacking expressiveness so badly that it's 10x slower to prototype on it rather than client side JS…"

In support of Salesforce, a user on Hacker News explained why the framework might be perceived as slow: "At its core, Salesforce is a platform. As such, our customers expect their code to work for the long run (and backwards compatibility forever). Not owning the framework fundamentally means jeopardizing our business and our customers, since we can't control our future. We believe the best way to future-proof our platform is to align with standards and help push the web platform forward, hence our sugar and take on top of Web Components."

He further added, "about using different frameworks, again as a platform, allowing our customers to trivially include their framework choice of the day, will mean that we might end up having to load seven versions of react, five of Vue, 2 Embers .... You get the idea :) Outside the platform we love all the other frameworks (hence other properties might choose what it fits their use cases) and we had a lot of good discussions with framework owners about how to keep improving things over the last two years. Our goal is to keep contributing to the standards and push all the things to be implemented natively on the platform so we all get faster and better."

To know more about this news, visit the Lightning Web Components framework's official website.

Applying styles to Material-UI components in React [Tutorial]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
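For readers who have not seen the framework, here is a minimal, hedged sketch of what a custom element built with Lightning Web Components looks like. The component name, public property, and getter are made up for the example, the sibling HTML template file is omitted, and the usual LWC build configuration (which compiles the decorator) is assumed.

// greeting.ts -- a minimal Lightning Web Component class.
// A sibling greeting.html template would render {message}; its markup is omitted here.
import { LightningElement, api } from 'lwc';

export default class Greeting extends LightningElement {
  // Public property, settable from a parent component's template.
  @api name = 'world';

  // Getter referenced from the template as {message}.
  get message(): string {
    return `Hello, ${this.name}!`;
  }
}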

V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more

Bhagyashree R
17 May 2019
3 min read
Yesterday, the team behind Google Chrome's JavaScript and WebAssembly engine, V8, announced the release of V8 7.5 beta. As per V8's release cycle, its stable version will be released in coordination with the Chrome 75 stable release, which is expected to come out in early June. This release comes with WebAssembly implicit caching, bulk memory operations, JavaScript numeric separators for better readability, and more.

A few updates in V8 7.5 Beta

WebAssembly implicit caching
The team is planning to introduce implicit caching of WebAssembly compilation artifacts in Chrome 75, similar to Chromium's JavaScript code cache. Code caching is an important way of optimizing browsers; it reduces the start-up time of commonly visited web pages by caching the result of parsing and compilation. This essentially means that if a user visits the same web page a second time, the already-seen WebAssembly modules will not be compiled again and will instead be loaded from the cache.

WebAssembly bulk memory operations
V8 7.5 comes with a few new WebAssembly instructions for updating large regions of memory or tables. The following are some of these instructions:
- memory.fill: fills a memory region with a given byte.
- memory.copy: copies data from a source memory region to a destination region, even if the regions overlap.
- table.copy: similar to memory.copy, it copies from one region of a table to another, even if the regions overlap.

JavaScript numeric separators for better readability
The human eye finds it difficult to quickly parse a large numeric literal, especially when it contains long digit repetitions, for instance, 10000000. To improve the readability of long numeric literals, a new feature allows using underscores as separators, creating a visual separation between groups of digits. This feature works with both integers and floating-point numbers.

Streaming script source data directly from the network
In previous Chrome versions, script source data coming in from the network always had to go to the Chrome main thread first before being forwarded to the streamer. This made the streaming parser wait for data that had already arrived from the network but hadn't yet been forwarded to the streaming task because it was blocked on the main thread. Starting from Chrome 75, V8 streams scripts directly from the network into the streaming parser, without waiting for the Chrome main thread.

To know more, check out the official announcement on the V8 blog.

Electron 5.0 ships with new versions of Chromium, V8, and Node.js
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
V8 7.2 Beta releases with support for public class fields, well-formed JSON.stringify, and more
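A quick illustration of the numeric separator syntax described in the article above; the variable names and values are arbitrary.

// Numeric separators: underscores group digits purely for readability;
// the parsed values are identical to the unseparated literals.
const unseparated = 1000000000;
const separated = 1_000_000_000;
console.log(unseparated === separated); // true

// Works with floating-point and other numeric literal forms too.
const pi = 3.141_592;
const mask = 0b1010_0001;
console.log(pi, mask);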

All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Bhagyashree R
08 May 2019
4 min read
Last week, researchers published a paper titled "Browser Fingerprinting: A survey", which gives a detailed insight into what browser fingerprinting is and how it is being used in research and industry. The paper further discusses the current state of browser fingerprinting and the challenges surrounding it.

What is browser fingerprinting?
Browser fingerprinting refers to the technique of collecting various pieces of device-specific information through a web browser to build a device fingerprint for better identification. The device-specific information may include details like your operating system, active plugins, timezone, language, screen resolution, and various other active settings. This information can be collected through a simple script running inside a browser. A server can also collect a wide variety of information from public interfaces and HTTP headers. This is a completely stateless technique, as it does not require storing any collected information inside the browser. The paper includes a table showing an example of a browser fingerprint (source: arXiv.org).

The history of browser fingerprinting
Back in 2009, Jonathan Mayer, who works as an Assistant Professor in the Computer Science Department at Princeton University, investigated whether differences in browsing environments can be exploited to deanonymize users. In his experiment, he collected the content of the navigator, screen, navigator.plugins, and navigator.mimeTypes objects of browsers. The results showed that of a total of 1328 clients, 1278 (96.23%) could be uniquely identified.

Following this experiment, in 2010, Peter Eckersley from the Electronic Frontier Foundation (EFF) performed the Panopticlick experiment, in which he investigated the real-world effectiveness of browser fingerprinting. For this experiment, he collected 470,161 fingerprints in the span of two weeks. This huge amount of data was collected from HTTP headers, JavaScript, and plugins like Flash or Java. He concluded that browser fingerprinting can uniquely identify 83.6% of the device fingerprints he collected. This percentage shot up to 94.2% if users had enabled Flash or Java, as these plugins provided additional device information. This is the study that proved that individuals can really be identified through these details, and it is where the term "browser fingerprinting" was coined.

Applications of browser fingerprinting
As is the case with any technology, browser fingerprinting can be used for both negative and positive applications. By collecting browser fingerprints, one can track users without their consent or attack their device by identifying a vulnerability. Since these tracking scripts are silent and executed in the background, users will have no clue that they are being tracked.

Talking about the positive applications, with browser fingerprinting, users can be warned beforehand if their device is out of date by recommending specific updates. The technique can also be used to fight online fraud by verifying the actual content of a fingerprint. "As there are many dependencies between collected attributes, it is possible to check if a fingerprint has been tampered with or if it matches the device it is supposedly belonging to," reads the paper. It can also be used for web authentication by verifying whether the device is genuine or not.
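To make the collection step concrete, here is a hedged TypeScript sketch of the kind of simple in-browser script the survey describes. It only gathers a handful of attributes from standard web APIs and prints them; a real fingerprinting library would collect far more attributes and combine them into a stable identifier.

// A few of the attributes a fingerprinting script can read from standard APIs.
// This only collects and prints them; real trackers would combine and hash them.
interface FingerprintSample {
  userAgent: string;
  language: string;
  platform: string;
  screenResolution: string;
  timezone: string;
  pluginCount: number;
}

function collectSample(): FingerprintSample {
  return {
    userAgent: navigator.userAgent,
    language: navigator.language,
    platform: navigator.platform,
    screenResolution: `${screen.width}x${screen.height}x${screen.colorDepth}`,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    pluginCount: navigator.plugins.length,
  };
}

console.log(collectSample());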
Preventing unwanted tracking by browser fingerprinting
- Modifying the content of fingerprints: To prevent third parties from identifying individuals through fingerprints, we can send random or pre-defined values instead of the real ones. As third parties rely on fingerprint stability to link fingerprints to a single device, these unstable fingerprints will make it difficult for them to identify devices on the web.
- Switching browsers: A device fingerprint is mainly composed of browser-specific information, so users can use two different browsers, which will result in two different device fingerprints. This makes it difficult for a third party to track the browsing pattern of a user.
- Presenting the same fingerprint for all users: If all devices on the web present the same fingerprint, there is no advantage to tracking them. This is the approach the Tor Browser uses, known as the Tor Browser Bundle (TBB).
- Reducing the surface of browser APIs: Another defense mechanism is decreasing the surface of browser APIs and reducing the quantity of information a tracking script can collect. This can be done by disabling plugins so that there are no additional fingerprinting vectors like Flash or Silverlight to leak extra device information.

Read the full paper to know more in detail.

DuckDuckGo proposes "Do-Not-Track Act of 2019" to require sites to respect DNT browser setting
Mozilla Firefox will soon support 'letterboxing', an anti-fingerprinting technique of the Tor Browser
Mozilla engineer shares the implications of rewriting browser internals in Rust

Firefox releases v66.0.4 and 60.6.2 to fix the expired certificate problem that ended up disabling add-ons

Bhagyashree R
06 May 2019
3 min read
Last Friday, Firefox users were left infuriated when all their extensions were abruptly disabled. Fortunately, Mozilla has fixed this issue in yesterday's releases, Firefox 66.0.4 and Firefox 60.6.2.

https://twitter.com/mozamo/status/1124484255159971840

This is not the first time Firefox users have encountered this type of problem. A similar issue was reported back in 2016, and it seems that proper steps were not taken to prevent the issue from recurring.

https://twitter.com/Theobromia/status/1124791924626313216

Multiple users reported that all add-ons were disabled on Firefox because of failed verification. Users were also unable to download any new add-ons and were shown a "Download failed. Please check your connection" error despite having a working connection. This happened because the certificate with which the add-ons were signed had expired. The timestamps mentioned in the certificate were:

Not Before: May 4 00:09:46 2017 GMT
Not After : May 4 00:09:46 2019 GMT

Mozilla did share a temporary hotfix ("hotfix-update-xpi-signing-intermediate-bug-1548973") before releasing a product with the issue permanently fixed.

https://twitter.com/mozamo/status/1124627930301255680

To apply this hotfix automatically, users need to enable Studies, a feature through which Mozilla tries out new features before releasing them to general users. The Studies feature is enabled by default, but if you have previously opted out of it, you can enable it by navigating to Options | Privacy & Security | Allow Firefox to install and run studies.

https://twitter.com/mozamo/status/1124731439809830912

Yesterday, Mozilla released Firefox 66.0.4 for desktop and Android users and Firefox 60.6.2 for ESR (Extended Support Release) users with a permanent fix for this issue. These releases repair the certificate to re-enable web extensions that were disabled because of the issue. There are still some issues that Mozilla is currently working to resolve:
- A few add-ons may appear unsupported or may not appear in 'about:addons'. Mozilla assures that the add-ons data will not be lost, as it is stored locally and can be recovered by re-installing the add-ons.
- Themes will not be re-enabled and will switch back to default.
- If a user's home page or search settings are customized by an add-on, they will be reset to default.
- Users might see that Multi-Account Containers and Facebook Container are reset to their default state. Containers is a functionality that allows you to segregate your browsing activities within different profiles. As an aftereffect of this certificate issue, data that might be lost includes the configuration data regarding which containers to enable or disable, container names, and icons.

Many users depend on Firefox's extensibility to get their work done, and it is obvious that this issue has left many of them sour. "This is pretty bad for Firefox. I wonder how much people straight up & left for Chrome as a result of it," a user commented on Hacker News.

Read the Mozilla Add-ons Blog for more details.

Mozilla's updated policies will ban extensions with obfuscated code
Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices
Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly

Can an Open Web Index break Google’s stranglehold over the search engine market?

Bhagyashree R
22 Apr 2019
4 min read
Earlier this month, Dirk Lewandowski, Professor of Information Research & Information Retrieval at Hamburg University of Applied Sciences, Germany, published a proposal for building an index of the Web. His proposal aims to separate the infrastructure part of the search engine from the services part.

Search engines are our way to the web, which makes them an integral part of the Web's infrastructure. While there are a significant number of search engines in the market, only a few relevant search engines have their own index, for example, Google, Bing, Yandex, and Baidu. Other search engines that pull results from these, for instance, Yahoo, cannot really be considered search engines in the true sense. The US search engine market is split between Google and Bing with roughly two-thirds to one-third, respectively. In most European countries, Google covers 90% of the market share.

Highlighting the implications of Google's dominance in the current search engine market, the report reads, "As this situation has been stable over at least the last few years, there have been discussions about how much power Google has over what users get to see from the Web, as well as about anti-competitive business practices, most notably in the context of the European Commission's competitive investigation into the search giant."

The proposal aims to bring plurality to the search engine market, not only in terms of the number of search engine providers but also in the number of search results users get to see when using search engines. The idea is to implement the "missing part of the Web's infrastructure", a searchable index, by separating the infrastructure part of the search engine from the services part. This would allow a multitude of services, whether existing search engines or otherwise, to be run on a shared infrastructure. A figure in the proposal shows how the public infrastructure crawls the web to index its content and provides an interface to the services that are built on top of the index (credits: arXiv).

The indexing stage is split into basic indexing and advanced indexing. Basic indexing is responsible for providing the data in a form that services built on top of the index can easily and rapidly process. Though services are allowed to do their own further indexing to prepare the documents, the open infrastructure also provides some advanced indexing, which adds information to the indexed documents, for example, semantic annotations. This advanced indexing requires an extensive infrastructure for data mining and processing. Services will be able to decide for themselves to what extent they want to rely on the pre-processing infrastructure provided by the Open Web Index. A common design principle that can be adopted is to allow services a maximum of flexibility.

Many users are supporting this idea. One Redditor said, "I have been wanting this for years...If you look at the original Yahoo Page when Yahoo first started out it attempted to solve this problem. I believe this index could be regionally or language based."

Some others do believe that implementing an open web index will come with its own challenges. "One of the challenges of creating a 'web index' is first creating indexes of each website. 'Crawling' to discover every page of a website, as well as all links to external sites, is labour-intensive and relatively inefficient. Part of that is because there is no 100% reliable way to know, before we begin accessing a website, each and every URL for each and every page of the site. There are inconsistent efforts such 'site index' pages or the 'sitemap' protocol (introduced by Google), but we cannot rely on all websites to create a comprehensive list of pages and to share it," adds another Redditor.

To read more in detail, check out the paper titled "The Web is missing an essential part of infrastructure: an Open Web Index".

Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
Google Cloud Next'19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more
Dark Web Phishing Kits: Cheap, plentiful and ready to trick you

Apache Flink 1.8.0 releases with finalized state schema evolution support

Bhagyashree R
15 Apr 2019
2 min read
Last week, the community behind Apache Flink announced the release of Apache Flink 1.8.0. This release comes with finalized state schema evolution support, lazy cleanup strategies for state TTL, improved pattern-matching support in SQL, and more.

Finalized state schema evolution support
This release marks the completion of the community-driven effort to provide a schema evolution story for user state managed by Flink. The following changes finalize the state schema evolution support:
- The list of data types that support state schema evolution is extended to include POJOs (Plain Old Java Objects).
- All Flink built-in serializers have been upgraded to use the new serialization compatibility abstractions.
- Implementing the abstractions in custom state serializers is now easier for advanced users.

Continuous cleanup of old state based on TTL
In Apache Flink 1.6, TTL (time-to-live) was introduced for keyed state. TTL enables cleanup and makes keyed state entries inaccessible after a given timeout. The state can also be cleaned when writing a savepoint or checkpoint. With this release, continuous cleanup of old entries is also allowed for both the RocksDB state backend and the heap backend.

Improved pattern-matching support in SQL
This release extends the MATCH_RECOGNIZE clause with two new updates: user-defined functions and aggregations. User-defined functions are added for custom logic during pattern detection, and aggregations are added for complex CEP definitions.

New KafkaDeserializationSchema for direct access to ConsumerRecord
A new KafkaDeserializationSchema is introduced to give direct access to the Kafka ConsumerRecord. This gives users access to all the data that Kafka provides for a record, including the headers.

Hadoop-specific distributions will not be released
Starting from this release, Hadoop-specific distributions will no longer be released. If a deployment relies on 'flink-shaded-hadoop2' being included in 'flink-dist', it must be manually downloaded and copied into the /lib directory.

Updates in the Maven modules of the Table API
Users who have a 'flink-table' dependency are required to update their dependencies to 'flink-table-planner'. If you want to implement a pure table program in Scala or Java, add 'flink-table-api-scala' or 'flink-table-api-java' respectively to your project.

To know more in detail, check out the official announcement by Apache Flink.

Apache Maven Javadoc Plugin version 3.1.0 released
LLVM officially migrating to GitHub from Apache SVN
Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!

Chrome, Safari, Opera, and Edge to make hyperlink auditing compulsorily enabled

Bhagyashree R
08 Apr 2019
3 min read
Last week, Bleeping Computer reported that the latest versions of Google Chrome, Safari, Opera, and Microsoft Edge will not allow users to disable hyperlink auditing, which was possible in previous versions.

What is hyperlink auditing?
The Web Applications 1.0 specification introduced a new feature in HTML5 called hyperlink auditing for tracking clicks on links. To track user clicks, the "a" and "area" elements support a "ping" attribute that takes one or more URIs as its value (a short sketch is included at the end of this article). When you click on such a hyperlink, the "href" link is loaded as expected, but additionally, the browser sends an HTTP POST request to the ping URL. The request headers can then be examined by the scripts that receive the ping POST request to find out where the ping came from.

Which browsers have made hyperlink auditing compulsory?
After finding this issue in Safari Technology Preview 72, Jeff Johnson, a professional Mac and iOS software engineer, reported it to Apple. Despite this, Apple released Safari 12.1 without any setting to disable hyperlink auditing. Prior to Safari 12.1, users were able to disable this feature with a hidden preference.

Similar to Safari, hyperlink auditing is enabled by default in Google Chrome. Users could previously disable it by going to "chrome://flags#disable-hyperlink-auditing" and setting the flag to "Disabled", but in the Chrome 74 Beta and Chrome 75 Canary builds, this flag has been completely removed. The Microsoft Edge and Opera 61 Developer builds also remove the option to disable or enable hyperlink auditing.

Firefox and Brave, on the other hand, have disabled hyperlink auditing by default. In Firefox 66, Firefox Beta 67, and Firefox Nightly 68, users can enable it using the browser.send_pings setting; the Brave browser, however, does not allow users to enable it at all.

How are people reacting to this development?
The hyperlink auditing feature has received mixed reactions from developers and users. While some were concerned about its privacy implications, others think it makes the user experience more transparent.

Sharing how this feature can be misused, Chris Weber, co-founder of Casaba Security, wrote in a blog post, "the URL could easily be appended with junk causing large HTTP requests to get sent to an inordinately large list of URIs. Information could be leaked in the usual sense of Referrer/Ping-From leaks."

One Reddit user said that this feature is privacy neutral, as this kind of tracking can be done with JavaScript or non-JavaScript redirects. Sharing other advantages of the ping attribute, another user said, "The ping attribute for hyperlinks aims to make this process more transparent, with additional benefits such as optimizing network traffic to the target page loads more quickly, as well as an option to disable sending the pings for more user-friendly privacy."

Though this feature brings some advantages, the Web Hypertext Application Technology Working Group (WHATWG) encourages user agents to put control in the hands of users by providing a setting to disable this behavior. "User agents should allow the user to adjust this behavior, for example in conjunction with a setting that disables the sending of HTTP `Referer` (sic) headers. Based on the user's preferences, UAs may either ignore the ping attribute altogether or selectively ignore URLs in the list," mentions WHATWG.

To read the full story, visit Bleeping Computer.
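Here is the promised sketch of the ping attribute in action, written against the standard DOM API in TypeScript; both URLs are placeholders for this example.

// Create a hyperlink that asks the browser to send a POST "ping" when it is clicked.
const link = document.createElement("a");
link.href = "https://example.com/article";
link.textContent = "Read the article";
// One or more space-separated URIs; the browser sends an HTTP POST to each when
// the link is followed, and the receiving script can inspect the request headers
// to see where the ping came from.
link.ping = "https://tracker.example.com/click";
document.body.appendChild(link);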
Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members
Microsoft's #MeToo reckoning: female employees speak out against workplace harassment and discrimination
Mozilla is exploring ways to reduce notification permission prompt spam in Firefox

Django 2.2 is now out with classes for custom database constraints

Bhagyashree R
02 Apr 2019
2 min read
Yesterday, the Django team announced the release of Django 2.2. This release comes with classes for custom database constraints, Watchman compatibility for runserver, and more, and it supports Python 3.5, 3.6, and 3.7. As this version is a long-term support (LTS) release, it will receive security and data loss updates for at least the next three years. This release also marks the end of mainstream support for Django 2.1, which will continue to receive security and data loss fixes until December 2019.

Following are some of the updates Django 2.2 comes with:

Classes for custom database constraints
Two new classes are introduced to create custom database constraints: CheckConstraint and UniqueConstraint. You can add constraints to models using the 'Meta.constraints' option.

Watchman compatibility for runserver
This release comes with Watchman compatibility for runserver, replacing Pyinotify. Watchman is a service used to watch files and record when they change, and also to trigger actions when matching files change.

Simple access to request headers
Django 2.2 adds HttpRequest.headers to allow simple access to a request's headers. It provides a case-insensitive, dict-like object for accessing all HTTP-prefixed headers from the request. Each header name is stylized with title-casing when it is displayed, for example, User-Agent.

Deserialization using natural keys and forward references
To perform deserialization, you can now use natural keys containing forward references by passing 'handle_forward_references=True' to 'serializers.deserialize()'. In addition, forward references are automatically handled by 'loaddata'.

Some backward incompatible changes and deprecations
- Starting from this release, admin actions are not collected from base ModelAdmin classes.
- Support is dropped for Geospatial Data Abstraction Library (GDAL) 1.9 and 1.10.
- The team has made sqlparse a required dependency to simplify Django's database handling.
- Permissions for proxy models are now created using the content type of the proxy model.
- Model Meta.ordering will no longer affect GROUP BY queries such as .annotate().values(). A deprecation warning is now shown with the advice to add an order_by() to retain the current query.

To read the entire list of updates, visit Django's official website.

Django 2.2 alpha 1.0 is now out with constraints classes, and more!
Django is revamping its governance model, plans to dissolve Django Core team
Django 2.1.2 fixes major security flaw that reveals password hash to "view only" admin users

F5 Networks is acquiring NGINX, a popular web server software for $670 million

Bhagyashree R
12 Mar 2019
3 min read
Yesterday, F5 Networks, the company that offers businesses cloud and security application services, announced that it is set to acquire NGINX, the company behind the popular open-source web server software, for approximately $670 million. The two companies are coming together to provide their customers with consistent application services across every environment.

F5 has seen its growth stall lately; its last quarterly earnings showed only 4% growth compared to the year before. NGINX, on the other hand, has shown 100 percent year-on-year growth since 2014. The company currently boasts 375 million users, with about 1,500 customers for its paid services like support, load balancing, and API gateway and analytics.

This acquisition will enable F5 to accelerate the 'time to market' of its services for customers building modern applications. F5 plans to enhance NGINX's current offerings using its security solutions and will also integrate its cloud-native innovations with NGINX's load balancing technology. Along with these advancements, F5 will help scale NGINX selling opportunities using its global sales force, channel infrastructure, and partner ecosystem.

François Locoh-Donou, President and CEO of F5, sharing his vision behind acquiring NGINX, said, "F5's acquisition of NGINX strengthens our growth trajectory by accelerating our software and multi-cloud transformation." He adds, "By bringing F5's world-class application security and rich application services portfolio for improving performance, availability, and management together with NGINX's leading software application delivery and API management solutions, unparalleled credibility and brand recognition in the DevOps community, and massive open source user base, we bridge the divide between NetOps and DevOps with consistent application services across an enterprise's multi-cloud environment."

NGINX's open source community was also a major factor behind this acquisition. F5 will continue investing in the NGINX open source project, as open source is a core part of its multi-cloud strategy. F5 expects this to help it accelerate product integrations with leading open source projects and open doors for more partnership opportunities.

Gus Robertson, CEO of NGINX, Inc., said, "NGINX and F5 share the same mission and vision. We both believe applications are at the heart of driving digital transformation. And we both believe that an end-to-end application infrastructure—one that spans from code to customer—is needed to deliver apps across a multi-cloud environment."

The acquisition has been approved by the boards of directors of both F5 and NGINX and is expected to close in the second calendar quarter of 2019. Once the acquisition is complete, the NGINX founders, Gus Robertson, Igor Sysoev, and Maxim Konovalov, will join F5 Networks.

To know more in detail, check out the announcement by F5 Networks.

Now you can run nginx on Wasmjit on all POSIX systems
Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack

Mozilla considers blocking DarkMatter after Reuters reported its link with a secret hacking operation, Project Raven

Bhagyashree R
07 Mar 2019
3 min read
Back in January this year, Reuters shared in an investigative piece that DarkMatter was providing staff for a secret hacking operation called Project Raven. After reading this report, Mozilla is now considering whether it should block DarkMatter from serving as one of its internet security providers.

The unit working for Project Raven consisted mostly of former US intelligence officials, who were allegedly conducting privacy-threatening operations for the UAE government. The team behind this project was working in a converted mansion in Abu Dhabi, which they called "the Villa". These operations included hacking the accounts of human rights activists, journalists, and officials from rival governments.

On February 25, in a letter addressed to Mozilla, DarkMatter CEO Karim Sabbagh denied all the allegations reported by Reuters and denied that the company has anything to do with Project Raven. Sabbagh wrote in the letter, "We have never, nor will we ever, operate or manage non-defensive cyber activities against any nationality."

Mozilla's response to the Reuters report
In an interview last week, Mozilla executives said that the Reuters report has raised concerns inside the company about DarkMatter misusing its authority to certify websites as safe. Mozilla is yet to decide whether it should deny DarkMatter this authority. Selena Deckelmann, a senior director of engineering for Mozilla, said, "We don't currently have technical evidence of misuse (by DarkMatter) but the reporting is strong evidence that misuse is likely to occur in the future if it hasn't already."

Deckelmann further shared that Mozilla is also concerned about the certifications DarkMatter has granted and may strip some or all of the 400 certifications that DarkMatter has granted to websites under a limited authority since 2017. Marshall Erwin, director of trust and security for Mozilla, said that DarkMatter could use its authority for "offensive cybersecurity purposes rather than the intended purpose of creating a more secure, trusted web."

A website is designated as secure if it is certified by an external authorized organization called a Certification Authority (CA). This certifying organization is also responsible for securing the connection between an approved website and its users. To get this authority, organizations need to apply to individual browser makers like Mozilla and Apple. DarkMatter has been pressing Mozilla since 2017 to gain full authority to grant certifications. Giving it full authority would allow it to issue certificates to hackers impersonating real websites, including banks.

https://twitter.com/GossiTheDog/status/1103596200891244545

To know more about this news in detail, read the full story on Reuters' official website.

Mozilla Firefox will soon support 'letterboxing', an anti-fingerprinting technique of the Tor Browser
Mozilla engineer shares the implications of rewriting browser internals in Rust
Mozilla shares key takeaways from the Design Tools survey