Tech News - Server-Side Web Development


Mozilla Firefox will soon support ‘letterboxing’, an anti-fingerprinting technique of the Tor Browser

Bhagyashree R
07 Mar 2019
2 min read
Yesterday, ZDNet reported that Mozilla will be adding a new anti-fingerprinting technique called letterboxing to Firefox 67, which is set to release in May this year. Letterboxing is part of the Tor Uplift project, which started back in 2016, and is currently available to Firefox Nightly users. As part of the Tor Uplift project, the team is slowly bringing the privacy-focused features of the Tor Browser to Firefox. For instance, Firefox 55 came with support for a Tor Browser feature called First-Party Isolation (FPI), which prevents ad trackers from using cookies to track user activity by separating cookies on a per-domain basis.

What is letterboxing and why is it needed?

The dimensions of a browser window are a big source of fingerprintable data for advertising networks, which can use window sizes to build user profiles and track users as they resize their browser and move across new URLs and browser tabs. To maintain users' online privacy, this window dimension data must be protected continuously, even when users resize or maximize their window or enter fullscreen.

Letterboxing masks the real dimensions of the browser window by keeping the reported width and height at multiples of 200px and 100px, respectively, during a resize operation, and then adding gray space at the top, bottom, left, or right of the current page. Advertising code tracking window resize events reads the flawed dimensions and sends them to its server; only then does Firefox remove the gray spaces. This is how the advertising code is tricked into reading incorrect window dimensions.

Here is a demo showing exactly how letterboxing works: https://www.youtube.com/watch?&v=TQxuuFTgz7M

The letterboxing feature is not enabled by default. To enable it, go to the 'about:config' page in the browser, enter "privacy.resistFingerprinting" in the search box, and toggle the browser's anti-fingerprinting features to "true."

To know more about letterboxing, check out ZDNet's website.
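Here is a minimal sketch of the rounding behavior described above (not Firefox's actual implementation; the function name and the margin split are illustrative assumptions):

```typescript
// Round the reported window size down to the nearest multiples of 200x100 px,
// as letterboxing does, and compute the gray margins that fill the difference.
function letterbox(realWidth: number, realHeight: number) {
  const reportedWidth = Math.floor(realWidth / 200) * 200;   // multiple of 200px
  const reportedHeight = Math.floor(realHeight / 100) * 100; // multiple of 100px
  return {
    reportedWidth,
    reportedHeight,
    grayX: realWidth - reportedWidth,   // distributed as gray space left/right
    grayY: realHeight - reportedHeight, // distributed as gray space top/bottom
  };
}

// Example: a 1366x768 window is reported as 1200x700, leaving 166x68 px of gray margin.
console.log(letterbox(1366, 768));
```

Tracking scripts that read the window size during the resize see only the rounded values.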


Introducing Zero Server, a zero-configuration server for React, Node.js, HTML, and Markdown

Bhagyashree R
27 Feb 2019
2 min read
Developers behind the CodeInterview.io and RemoteInterview.io websites have come up with Zero, a web framework that aims to simplify modern web development. Zero takes on the overhead of the usual project configuration for routing, bundling, and transpiling, making it easier to get started.

Zero applications consist of static files and code files. Static files are all non-code files, like images, documents, and media files. Code files are parsed, bundled, and served by a builder specific to that file type. Zero supports Node.js, React, HTML, and Markdown/MDX.

Features of the Zero server

Autoconfiguration: Zero eliminates the need for any configuration files in your project folder. Developers just place their code, and it is automatically compiled, bundled, and served.

File-system based routing: Routing is based on the file system. For example, if your code is placed in './api/login.js', it is exposed at 'http://domain.com/api/login' (see the sketch below).

Auto-dependency resolution: Dependencies are automatically installed and resolved. To install a specific version of a package, developers just have to create their own package.json.

Support for multiple languages: Zero supports code written in multiple languages, so you can do things like exposing a TensorFlow model as a Python API and writing user login code in Node.js, all under a single project folder.

Better error handling: Zero isolates endpoints from each other by running each in its own process, ensuring that if one endpoint crashes, no other component of the application is affected. For instance, if /api/login crashes, there is no effect on the /chatroom page or the /api/chat API. Crashed endpoints are also automatically restarted when the next user visits them.

To know more about the Zero server, check out its official website.
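A minimal sketch of the file-system routing convention described above (the Express-style (req, res) handler follows Zero's documented convention for Node.js endpoints; the file path and response body are illustrative):

```typescript
// ./api/login.js — with Zero's file-system routing, this file is served at /api/login.
// No router registration or server setup is needed; Zero compiles and serves it.
module.exports = (req, res) => {
  // Hypothetical handler: echo the supplied username back as JSON.
  res.send({ user: req.query.user || "anonymous", loggedIn: true });
};
```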


Introducing Mint, a new HTTP client for Elixir

Amrata Joshi
26 Feb 2019
2 min read
Yesterday, the Elixir team introduced Mint, a new low-level HTTP client that provides a small and functional core. It is connection-based: each connection is a single structure with an associated socket belonging to the process that started the connection.

Features of Mint

Connections: Mint's HTTP connections are managed directly in the process that starts the connection. No connection pool is used when a connection is opened, which lets users build a process structure that fits their application. Each connection is a single immutable data structure that the user manages. Mint uses "active mode" sockets, so data and events from the socket are sent as messages to the process that started the connection. The user then passes these messages to the stream/2 function, which returns the updated connection and a list of "responses". These responses are streamed back in partial response chunks.

Process-less: To many users, Mint may seem more cumbersome to use than other HTTP libraries. But by providing a low-level API without a predetermined process architecture, Mint gives flexibility to the user of the library. If a user writes GenStage pipelines, a pool of producers can fetch data from external sources via HTTP; with Mint, each GenStage producer can manage its own connection, reducing overhead and simplifying the code.

HTTP/1 and HTTP/2: The Mint.HTTP module provides a single interface for both HTTP/1 and HTTP/2 connections and performs version negotiation on HTTPS connections. Users can also pin an HTTP version by using the Mint.HTTP1 or Mint.HTTP2 modules directly.

Safe-by-default HTTPS: When connecting over HTTPS, Mint performs certificate verification by default. Mint also has an optional dependency on CAStore, which provides certificates from Mozilla's CA Certificate Store.

Users are happy about this news, with one commenting on Hacker News, "I like that Mint keeps dependencies to a minimum." Another user commented, "I'm liking the trend of designing runtime-behaviour agnostic libraries in Elixir."

To know more about this news, check out Mint's official blog post.


Google Chrome developers “clarify” the speculations around Manifest V3 after a study nullifies their performance hit argument

Bhagyashree R
18 Feb 2019
4 min read
On Friday, a study analyzing the performance of the most commonly used ad blockers was published on WhoTracks.me. The study was motivated by the recent Manifest V3 controversy, which revealed that Google developers are planning an update that could cripple all ad blockers.

What update are the Chrome developers introducing?

The developers are planning to introduce an alternative to the webRequest API named the declarativeNetRequest API, which restricts the blocking version of the webRequest API. According to Manifest V3, the declarativeNetRequest API will be treated as the primary content-blocking API in extensions. The Chrome developers listed two reasons for the change: performance, and a better privacy guarantee to users. The new API allows extensions to tell Chrome what to do with a given request, rather than having Chrome forward the request to the extension, which lets Chrome handle requests synchronously.

One ad blocker maintainer has reported an issue on the Chromium bug tracker for this feature: "If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin ("uBO") and uMatrix, can no longer exist."

What did the study by Ghostery reveal?

The study addresses the performance argument made by the developers. For it, the Ghostery team analyzed the network performance of the most commonly used content blockers: uBlock Origin, Adblock Plus, Brave, DuckDuckGo, and Cliqz's Ghostery. The study revealed that all of these, except DuckDuckGo, have only sub-millisecond median decision time per request — far too small an overhead to be noticeable by users. Additionally, the efficiency of content blockers is continuously being improved with innovative approaches and with the help of technologies like WebAssembly.

How did Google developers react to the study and the feedback surrounding Manifest V3?

Following the publication of the study, and after looking at the feedback, Devlin Cronin, a Software Engineer at Google, clarified that these changes are not really meant to prevent content blocking, adding that the changes listed in Manifest V3 are still in the draft and design stage. In the Google group Manifest V3: Web Request Changes, Cronin said, "We are committed to preserving that ecosystem and ensuring that users can continue to customize the Chrome browser to meet their needs. This includes continuing to support extensions, including content blockers, developer tools, accessibility features, and many others. It is not, nor has it ever been, our goal to prevent or break content blocking."

The team is not planning to remove the webRequest API. Cronin added, "In particular, there are currently no planned changes to the observational capabilities of webRequest (i.e., anything that does not modify the request)." Based on the feedback and concerns shared, the Chrome team did make some revisions, including adding support for dynamic rules to the declarativeNetRequest API. They are also planning to increase the ruleset size, previously capped at 30k rules.

Users are, however, not convinced by this clarification. One user commented on Hacker News, "Keep in mind that their story about performance has been shown to be a complete lie. There is no performance hit from using webRequest like this. This is about removing sophisticated ad blockers in order to defend Google's revenue stream, plain and simple." Coincidentally, a Chrome 72 upgrade seems to break ad blockers in such a way that they can no longer see or block analytics if the web page uses a service worker: https://twitter.com/jviide/status/1096947294920949760
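To make the model above concrete, here is a hedged sketch of what a declarative content-blocking rule looks like (the rule shape follows the declarativeNetRequest design as later documented; at the time of this article the API was still a draft, so field names may differ, and the blocked host is hypothetical):

```typescript
// Instead of intercepting each request in extension code (webRequest), the
// extension declares its intent up front and Chrome matches requests itself.
const blockAdsRule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: {
    urlFilter: "||ads.example.com", // hypothetical ad host to block
    resourceTypes: ["script", "image"],
  },
};
```

Because matching happens inside Chrome, requests are handled synchronously — but extensions lose the ability to run arbitrary per-request logic, which is the core of the ad blockers' objection.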


How Deliveroo migrated from Ruby to Rust without breaking production

Bhagyashree R
15 Feb 2019
3 min read
Yesterday, the Deliveroo engineering team shared their experience of migrating their Tier 1 service from Ruby to Rust without breaking production. Deliveroo is an online food delivery company based in the United Kingdom.

Why did Deliveroo part ways with Ruby for the Dispatcher service?

The Logistics team at Deliveroo uses a service called Dispatcher. This service optimally offers an order to a rider with the help of a timeline for each rider, which predicts where riders will be at a given point in time. Knowing this information allows the service to efficiently suggest a rider for an order. Building these timelines requires a lot of computation: the individual computations are quick, but there are a great many of them.

The Dispatcher service was first written in Ruby, as it was the company's preferred language in the beginning. It performed fine early on because the business was not as big as it is now. As Deliveroo grew and the number of orders increased, the Dispatcher service started taking much longer than before.

Why did they choose Rust as the replacement for Ruby?

Instead of rewriting the whole service in Rust, the team decided to identify the bottlenecks that were slowing down the Dispatcher and rewrite them in a different programming language. They concluded it would be easier to build a native extension written in Rust and make it work with the current code written in Ruby. The team chose Rust because it provides performance comparable to C while being memory safe. Rust also allowed them to build dynamic libraries, which can later be loaded into Ruby. Additionally, some of their team members already had experience with Rust, and one part of the Dispatcher was already written in it.

How did they migrate from Ruby to Rust?

There are two options for calling Rust from Ruby. The first is writing a dynamic library in Rust with an extern "C" interface and calling it using FFI. The second is writing a dynamic library that uses the Ruby API to register methods, so that they can be called from Ruby directly, just like any other Ruby code. The Deliveroo team chose the second approach, as there are many libraries that make it easier, for instance, ruru, rutie, and Helix. The team settled on Rutie, a recent fork of Ruru that is under active development.

The team planned to gradually replace all parts of the Ruby Dispatcher with Rust. They began the migration by replacing the classes that had no dependencies on other parts of the Dispatcher with Rust classes, behind feature flags. As the APIs of the Ruby and Rust implementations were quite similar, they were able to reuse the same tests.

With the help of Rust, the overall dispatch time was reduced significantly. In one of their larger zones, it dropped from ~4 seconds to 0.8 seconds, of which the Rust part consumed only 0.2 seconds.

Read the post by Andrii Dmytrenko, a Software Engineer at Deliveroo, for more details.


How you can replace a hot path in JavaScript with WebAssembly

Bhagyashree R
15 Feb 2019
5 min read
Yesterday, Das Surma, a Web Advocate at Google, shared how he and his team replaced a JavaScript hot path in the Squoosh app with WebAssembly. Squoosh is an image compression web app that allows you to compress images with a variety of codecs compiled from C++ to WebAssembly. Hot paths are code execution paths where most of the execution time is spent. With this update, the team aimed to achieve predictable performance across all browsers: WebAssembly's strict typing and low-level architecture enable more optimizations during compilation, and though JavaScript can achieve similar performance, it is often difficult to stay on the fast path.

What is WebAssembly?

WebAssembly, also known as Wasm, provides a way to execute code written in different languages at near-native speed on the web. It is a low-level language with a compact binary format that serves as a compilation target for C/C++/Rust so they can run on the web. When you compile C or Rust code to WebAssembly, you get a .wasm file containing a "module declaration". In addition to the binary instructions for the functions contained within, it lists all the imports the module needs from its environment and the exports the module provides to the host.

Comparing the generated file sizes

To compare the languages, Surma used the example of a JavaScript function that rotates an image by multiples of 90 degrees by iterating over every pixel and copying it to a different location. The function was written in three different languages — C/C++, Rust, and AssemblyScript — and compiled to WebAssembly.

C and Emscripten: Emscripten is a C compiler that allows you to easily compile C code to WebAssembly. After porting the entire JavaScript code to C and compiling it with emcc, Emscripten created a glue code file called c.js and a wasm module called c.wasm. The wasm module gzipped to almost 260 bytes, and the c.js glue file was 3.5 KB.

Rust: Rust is a programming language syntactically similar to C++, designed to provide better memory- and thread-safety. The Rust team has introduced various tools to the WebAssembly ecosystem, one of them being wasm-pack, which turns Rust code into modules that work out of the box with bundlers like webpack. Compiling the Rust code with wasm-pack produced a 7.6 KB wasm module with about 100 bytes of glue code.

AssemblyScript: AssemblyScript compiles a strictly-typed subset of TypeScript to WebAssembly ahead of time. It uses the same syntax as TypeScript but swaps the standard library for its own. This means you can't compile just any TypeScript to WebAssembly, but you also don't have to learn a new programming language to write it. Compiling with the AssemblyScript/assemblyscript npm package produced a wasm module of about 300 bytes and no glue code; the module works directly with vanilla WebAssembly APIs.

Comparing the sizes of the files generated from the three languages, Rust produced the biggest file.

Comparing the performance

To analyze the performance, the team did a speed comparison per language and a speed comparison per browser. (The results were shared as two graphs; source: Google Developers.) The graphs show that all the WebAssembly modules were executed in ~500ms or less, demonstrating that WebAssembly gives predictable performance: regardless of which language you choose, the variance between browsers and languages is minimal. The standard deviation of JavaScript across all browsers is ~400ms, while the standard deviation of all the WebAssembly modules across all browsers is ~80ms.

Which language should you choose if you have a JS hot path and want to make it faster with WebAssembly?

Looking at these results, the best choice seems to be C or AssemblyScript, but the team decided to go with Rust: all the codecs shipped in Squoosh so far are compiled using Emscripten, and the team wanted to broaden its knowledge of the WebAssembly ecosystem by using a different language. They did not choose AssemblyScript because it is relatively new and its compiler is not as mature as Rust's. The file size difference between Rust and the other languages looks large but matters little in practice. Going by runtime performance, Rust showed a faster average across browsers than AssemblyScript, and Rust is more likely to produce fast code without requiring manual code optimizations.

To read more in detail, check out Surma's post on Google Developers.
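As a rough illustration of the kind of hot path discussed above, here is a hypothetical AssemblyScript sketch — not Squoosh's actual code. AssemblyScript uses TypeScript syntax, and passing typed arrays across the JS/Wasm boundary requires its loader:

```typescript
// Rotate a w×h image 90° clockwise by copying every pixel to its new position.
// Compiled ahead of time to a Wasm module; i32 is AssemblyScript's 32-bit integer.
export function rotate90(src: Int32Array, dst: Int32Array, w: i32, h: i32): void {
  for (let y: i32 = 0; y < h; y++) {
    for (let x: i32 = 0; x < w; x++) {
      // Pixel (x, y) lands at (h - 1 - y, x) in the rotated image, whose row width is h.
      dst[x * h + (h - 1 - y)] = src[y * w + x];
    }
  }
}
```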

Chromium developers to introduce a “Never-Slow Mode”, which sets limits on resource usage

Bhagyashree R
06 Feb 2019
2 min read
Today, Alex Russell, a Google software engineer, submitted a patch called 'Never-Slow Mode' for Chromium. With this patch, various per-interaction and resource limits will be enforced to keep the main thread clean. Russell's patch is very similar to a bug Craig Hockenberry, a Partner at The Iconfactory, reported for WebKit last week, which suggested adding limits on how much JavaScript code a website can load to avoid resource abuse of user computers.

Here are some of the changes that will be made under this patch:

• Large scripts will be blocked.
• document.write() will be turned off.
• Client-Hints will be enabled pervasively.
• Resources without a 'Content-Length' header will be buffered.
• Budgets will be reset on interaction.
• Long script tasks (those taking more than 200ms) will pause all page execution until the next interaction.
• Budgets will be set for certain resource types, such as scripts, fonts, CSS, and images.

The suggested per-resource-type caps are listed in the patch; all sizes are wire sizes. (Source: Chromium)

Similar to Hockenberry's suggestion, this patch got both negative and positive feedback from developers. Some Hacker News users believe it will curb web bloat. One user commented, "It's probably in Google's interest to limit web bloat that degrades UX". Another user said, "I imagine they're trying to encourage code splitting."

According to another Hacker News user, hard-coded limits will probably not work: "Hardcoded limits are the first tool most people reach for, but they fall apart completely when you have multiple teams working on a product, and when real-world deadlines kick in. It's like the corporate IT approach to solving problems — people can't break things if you lock everything down. But you will make them miserable and stop them doing their job".

You can check out the patch submitted by Russell at Chromium Gerrit.
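The 200ms threshold above can be related to the browser's existing Long Tasks API. As a hedged aside (this is not part of Russell's patch), a page can observe its own long main-thread tasks to see which ones would trip the proposed limit:

```typescript
// The Long Tasks API reports main-thread tasks longer than 50 ms (Chromium-only).
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.duration > 200) {
      // Tasks this long would pause page execution under Never-Slow Mode.
      console.warn(`Script task of ${entry.duration.toFixed(0)} ms exceeds the proposed 200 ms budget`);
    }
  }
});
observer.observe({ entryTypes: ["longtask"] });
```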


Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?

Bhagyashree R
31 Jan 2019
3 min read
Yesterday, Craig Hockenberry, a Partner at The Iconfactory, reported a bug on WebKit that proposes adding a limit on how much JavaScript code a website can load, to avoid resource abuse of user computers.

Hockenberry feels that though content blocking has helped reduce resource abuse, and hence provides better performance and better battery life, content blockers have downsides. His bug report said, "it's hurting many smaller sites that rely on advertising to keep the lights on. More and more of these sites are pleading to disable content blockers." This results in collateral damage to smaller sites.

As a solution, he suggested finding a way to incentivize JavaScript developers who keep their codebases small and minimal. "Great code happens when developers are given resource constraints... Lack of computing resources inspires creativity", he adds. As an end result, he believes sites could be allowed to show as many advertisements as they want, as long as the overall size stays under a fixed amount. He also believes browsers could ask users for permission with a simple dialog box, for example: "The site example.com uses 5 MB of scripting. Allow it?"

The bug report triggered a discussion on Hacker News, and though a few users agreed with his suggestion, most were against it. Some developers mentioned that users usually do not read dialogs and blindly click OK to make the dialog go away. And even users who do read the dialog will not know how much JavaScript code is too much. "There's no context to tell her whether 5MB is a lot, or how it compares to payloads delivered by similar sites. It just expects her to have a strong opinion on a subject that nobody who isn't a coder themselves would have an opinion about," one commenter added.

Other ways to prevent JavaScript code from slowing down browsers

Despite the disagreement, developers do agree that there is a need for user-friendly resource limitations in browsers, and some suggested other ways to prevent JavaScript bloat. One of them proposed resource-limiting tabs by CPU usage, number of HTTP requests, and memory usage:

"CPU usage (allows an initial burst, but after a few seconds dial down to max ~0.5% of CPU, with additional bursts allowed after any user interaction like click or keyboard)

Number of HTTP requests (again, initial bursts allowed and in response to user interaction, but radically delay/queue requests for the sites that try to load a new ad every second even after the page has been loaded for 10 minutes)

Memory usage (probably the hardest one to get right though)"

Another user adds, "With that said, I do hope we're able to figure out how to treat web "sites" and web "apps" differently - for the former, I want as little JS as possible since that just gets in the way of content, but for the latter, the JS is necessary to get the app running, and I don't mind if its a few megabytes in size."

You can read the bug report on WebKit Bugzilla.


VLC’s updating mechanism still uses HTTP over HTTPS

Bhagyashree R
22 Jan 2019
3 min read
Last week, a bug was reported on the VLC bug tracker that all connections to the update server are still made over HTTP instead of HTTPS. One of the VLC developers replied asking the bug reporter for a threat model, and when he did not submit one, the developer closed the bug and marked it as "invalid".

This is not the first time this bug has been reported. In a bug reported in 2017, a user said, "It appears that VLC's updating mechanism downloads a new VLC executable over HTTP (ie, in clear-text). Please modify the update mechanism to happen over TLS (preferably with Forward Secrecy enabled)."

What are some of the implications of using HTTP instead of HTTPS?

One Hacker News user said, "As a trivial example, this is a privacy leak - anyone on the network path can see what version you're upgrading to. It doesn't sound like a huge deal but we are moving to a 100% encrypted world, and it is a one character change to fix the issue. If VLC wants to keep the update over plaintext then they should justify why they want to do that, not have users justify why it should be over https. Instead, it feels like the VLC devs are having a kneejerk defensive reaction."

Beyond this, there are several security threats to software that updates over HTTP:

• An attacker can see the contents of software update requests, and can modify those requests or responses to change the update behavior or outcome.
• An attacker can intercept and redirect software update requests to a malicious server.
• An attacker can respond to a client request with a huge amount of data that interferes with the client's system (an endless data attack).
• An attacker can prevent clients from noticing the interference by responding so slowly that automated updates never complete (a slow retrieval attack).
• An attacker can trick a client into installing older software that is known to have critical bugs.

Why does VideoLAN not see this as a big problem?

Jean-Baptiste Kempf, the President and lead VLC developer, said that some of the attacks described above apply to nearly all download systems: "I'm sorry, but some described attacks (Slow retrieval attacks, Endless data attacks) are issues that are the case for all download system like most Linux Distributions, and that will not be fixed. Mirrors are HTTP and will stay HTTP for a few obvious reasons. Moreover, they will install binaries, so there is no security issue. Moreover, downloads are never done automatically, without user intervention."

As Kempf said, this is not just the case with VLC. A Hacker News user noted, "it seems to be a common practice for highly-loaded services to outsource as many cryptographies to clients as possible." A general-purpose package manager like Pacman uses HTTP because there is not much value in transport-level security when the payload is cryptographically signed. Even Tesla's firmware updates are not encrypted in transit, as they are cryptographically signed. Oracle followed the same policy with VirtualBox distributions, and that has been fine because the packages are signed.

You can read more in detail on the VLC bug tracker website.
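The signed-payload argument above is easy to see in code. Here is a minimal sketch of verifying a detached signature on a downloaded update (the file names and the RSA/SHA-256 scheme are illustrative assumptions, not VLC's actual mechanism):

```typescript
import { createVerify } from "crypto";
import { readFileSync } from "fs";

// Even if the payload travels over plain HTTP, a tampered copy fails
// verification against the vendor's public key before installation.
const publicKey = readFileSync("vendor-public-key.pem", "utf8"); // shipped with the app
const payload = readFileSync("update.bin");                      // downloaded over HTTP
const signature = readFileSync("update.bin.sig");                // detached signature

const verifier = createVerify("RSA-SHA256");
verifier.update(payload);

if (verifier.verify(publicKey, signature)) {
  console.log("Signature valid: safe to install the update");
} else {
  console.error("Signature mismatch: discard the download");
}
```

Note that signing only protects integrity; it does not address the privacy leak commenters raised, since an on-path observer can still see which version is being fetched.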


Mastodon 2.7, a decentralized alternative to social media silos, is now out!

Bhagyashree R
21 Jan 2019
2 min read
Yesterday, the Mastodon team released Mastodon 2.7, which comes with major improvements to the admin interface, a new moderation warning system, and more. Mastodon is a free, open-source social network server based on open web protocols like ActivityPub and OStatus. The server aims to provide users with a decentralized alternative to commercial social media silos and to return control of the content distribution channels to the people.

Profile directory

The new profile directory allows users to see active posters on a given Mastodon server and filter them by the hashtags in their profile bio. With the profile directory, users can find people with common interests without having to read through public timelines.

A new moderation warning system

This version comes with a new moderation warning system for Mastodon. Moderators can now inform users when their account is suspended or disabled. They can also send official warnings via e-mail, which are reflected in the moderator interface to keep other moderators up to date.

Improvements to the administration interface

Mastodon 2.7 combines the administration interfaces for known servers and domain blocks into a common area. Admins can see information like the number of accounts known from a particular server, the number of accounts followed from your server, the number of individuals blocked or reported, and so on.

A registration API

A new registration API allows apps to directly accept new registrations from their users, instead of having to send them to a web browser. Users still receive a confirmation e-mail when they sign up through an app, containing an activation link that can open the app.

New commands for managing a Mastodon server

The tootctl command-line utility used for managing a Mastodon server has received two new commands:

• tootctl domains crawl: scans the Mastodon network to discover servers and aggregate statistics about Mastodon's usage.
• tootctl accounts follow: makes the users on your server follow a specified account, which comes in handy when an administrator needs to change their account.

You can read the full list of improvements in Mastodon 2.7 on its website.

Django 2.2 alpha 1.0 is now out with constraints classes, and more!

Bhagyashree R
18 Jan 2019
3 min read
Yesterday, the team behind Django released Django 2.2 alpha 1.0. Django 2.2 is designated as an LTS release, which means it will receive security updates for at least three years after its expected release in April 2019. This version comes with two new constraint classes and some minor features, and it deprecates Meta.ordering. It is compatible with Python 3.5, 3.6, and 3.7.

Here are some of the updates Django 2.2 comes with:

• Constraints: Two new constraint classes, CheckConstraint and UniqueConstraint, are defined in django.db.models.constraints for adding custom database constraints. These classes are also imported into django.db.models for convenience.
• django.contrib.auth: A request argument is passed to the RemoteUserBackend.configure_user() method as the first positional argument, if the method accepts it.
• django.contrib.gis: Oracle support is added for the Envelope function, and SpatiaLite support for the coveredby and covers lookups.
• django.contrib.postgres: A new ordering argument is added to the ArrayAgg and StringAgg classes for determining the ordering of aggregated elements. The new BTreeIndex, HashIndex, and SpGistIndex classes let you create B-Tree, hash, and SP-GiST indexes in the database.
• Internationalization: Support and translations are added for the Armenian language.

Backward-incompatible updates

Database backend API — changes needed in third-party database backends:

• They must support table check constraints, or set DatabaseFeatures.supports_table_check_constraints to False.
• They must support ignoring constraints or uniqueness errors while inserting, or set DatabaseFeatures.supports_ignore_conflicts to False.
• They must support partial indexes, or set DatabaseFeatures.supports_partial_indexes to False.
• DatabaseIntrospection.table_name_converter() and column_name_converter() are removed; third-party database backends may have to implement DatabaseIntrospection.identifier_converter() instead.

Other changes

• Admin actions: Admin actions now follow standard Python inheritance and are no longer collected from base ModelAdmin classes.
• TransactionTestCase serialized data loading: Initial data migrations are now loaded in TransactionTestCase at the end of the test, after the flush. Earlier, this data was loaded at the beginning of the test, which prevented the test --keepdb option from working properly.
• sqlparse: The sqlparse module is now a required dependency and is automatically installed with Django. This change simplifies a few parts of Django's database handling.
• Permissions for proxy models: Permissions for proxy models are now created using the content type of the proxy model rather than the content type of the concrete model.


GitHub plans to deprecate GitHub services and move to Webhooks in 2019

Savia Lobo
11 Dec 2018
3 min read
On April 25 this year, GitHub announced that it will be shutting down GitHub Services in order to focus on other areas of the API, such as strengthening GitHub Apps and GraphQL and improving webhooks. According to GitHub, webhooks are much easier for both users and GitHub staff to debug on the web because of improved logging. GitHub Services has not supported new features since April 25, 2016, and was officially deprecated on October 1st, 2018. GitHub stated that the functionality will be removed from GitHub.com on January 31st, 2019.

The main intention of GitHub Services was to allow third-party developers to submit code for integrating with their services, but this functionality has been superseded by GitHub Apps and webhooks. Since October 1st, 2018, users can no longer add GitHub Services to any repository on GitHub.com, via the UI or API. Users can, however, continue to edit or delete existing GitHub Services.

GitHub Services vs. webhooks

The key differences between GitHub Services and webhooks:

• Configuration: GitHub Services have service-specific configuration options, while webhooks are simply configured by specifying a URL and a set of events.
• Custom logic: GitHub Services can have custom logic to respond with multiple actions as part of processing a single event, while webhooks have no custom logic.
• Types of requests: GitHub Services can make HTTP and non-HTTP requests, while webhooks can make HTTP requests only.

Brownouts for GitHub Services

For the week of November 5th, 2018, GitHub had planned a week-long brownout in which any GitHub Service installed on a repository would not receive any payloads, with normal operations resuming at its conclusion. The motivation was to let GitHub users and integrators see where GitHub Services are still being used and begin migrating away from them. However, GitHub decided that a week-long brownout would be too disruptive for everyone. Instead, they plan a gradual increase in brownouts until the final blackout date of January 31st, 2019, when they will permanently stop delivering all installed services' events on GitHub.com. Per the updated deprecation timeline:

• On December 12th, 2018, GitHub Service deliveries will be suspended for a full 24 hours.
• On January 7th, 2019, GitHub Services will be suspended for a full 7 days, with regular deliveries resuming on January 14th, 2019.

Users should ensure that their repositories use the newer APIs available for handling events. The following changes have taken place since October 1st, 2018:

• The "Create a hook" endpoint used to accept a required argument called name, which could be set to web for webhooks or to the name of any valid service. Since October 1st, this endpoint no longer requires a name to be provided; if one is provided, web is the only accepted value.
• Stricter API validation has been enforced since November 1st: name is no longer required, and requests sending an invalid value for it are rejected.

To learn more about this deprecation, check out Replacing GitHub Services.
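For reference, here is a hedged sketch of configuring a webhook through the "Create a hook" REST endpoint discussed above (the owner, repository, token, and payload URL are placeholders; the endpoint and fields follow GitHub's v3 REST API):

```typescript
// Create a webhook on a repository: POST /repos/{owner}/{repo}/hooks.
// Unlike a GitHub Service, the webhook is just a URL plus a set of events.
async function createWebhook(): Promise<void> {
  const response = await fetch("https://api.github.com/repos/OWNER/REPO/hooks", {
    method: "POST",
    headers: {
      Authorization: "token YOUR_PERSONAL_ACCESS_TOKEN",
      Accept: "application/vnd.github.v3+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "web", // since October 1st, 2018, "web" is the only accepted value
      active: true,
      events: ["push", "pull_request"],
      config: { url: "https://example.com/webhook", content_type: "json" },
    }),
  });
  console.log(response.status === 201 ? "Webhook created" : `Failed: ${response.status}`);
}
```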


Now you can run nginx on Wasmjit on all POSIX systems

Natasha Mathur
10 Dec 2018
2 min read
The Wasmjit team announced last week that you can now run nginx 1.15.3, a free and open-source high-performance HTTP server and reverse proxy, in user-space on all POSIX systems. Wasmjit is a small embeddable WebAssembly runtime that can be easily ported to most environments. It primarily targets a Linux kernel module capable of hosting Emscripten-generated WebAssembly modules, and it comes equipped with a host environment for running in user-space on POSIX systems. This allows you to run WebAssembly modules without having to run an entire browser.

Getting nginx to run had been a major goal for the Wasmjit team ever since the project's first release in late July. "While it might be convenient to run the same binary on multiple systems without modification ('write once, run anywhere'), this goal was chosen because IO-bound / system call heavy servers stand to gain the most by running in kernel space. Running FUSE file systems in kernel space is another motivating use case that Wasmjit will soon support", the Wasmjit team explains.

Other future goals for Wasmjit include the introduction of an interpreter, a Rust runtime for Rust-generated wasm files, a Go runtime for Go-generated wasm files, an optimized x86_64 JIT, an arm64 JIT, and a macOS kernel module.

Wasmjit running nginx has been tested on Linux, OpenBSD, and macOS so far, using the completely compiled version of nginx without any modifications and with multi-process capability. All the complex parts of the POSIX API needed to properly implement nginx are supported, such as signal handling and forking.

That said, kernel-space support still needs work, as Emscripten delegates some large APIs, such as getaddrinfo() and strftime(), to the host implementation; these need to be re-implemented in the kernel, along with kernel-space versions of fork(), execve(), and signal handling. Also, Wasmjit is currently alpha-level software under development and may behave unpredictably when used in production.

Rocket 0.4 released with typed URIs, agnostic database support, request-local state and more

Amrata Joshi
10 Dec 2018
4 min read
Last week, the team at Rocket released Rocket 0.4, a web framework for Rust that focuses on usability, security, and performance. With Rocket, it is possible to write secure web applications quickly, without sacrificing flexibility or type safety.

Features of Rocket 0.4

Typed URIs: Rocket 0.4 comes with a uri! macro that helps build URIs to routes in the application in a robust, type-safe, and URI-safe manner. Mismatched types or route parameters are caught at compile time, and changes to route URIs are automatically reflected in the generated URIs.

ORM-agnostic database support: Rocket 0.4 comes with built-in, ORM-agnostic support for databases. It provides a procedural macro for connecting a Rocket application to databases through connection pools. Databases are configured individually through configuration mechanisms like the Rocket.toml file or environment variables.

Request-local state: Rocket 0.4 features request-local state, which is local to a given request, is carried along with the request, and is dropped once the request is completed. Request-local state can be used whenever a request is available, and it is cached, letting stored data be reused. It is particularly useful for request guards, which may be invoked multiple times during the routing and processing of a single request.

Live template reloading: In this version of Rocket, when an application is compiled in debug mode, templates are automatically reloaded when modified. To see template changes, simply refresh; there is no need to rebuild the application.

Major improvements

• Rocket 0.4 introduces SpaceHelmet, which provides a typed interface for HTTP security headers.
• This release features mountable static-file serving via StaticFiles.
• Cookies are now automatically tracked and propagated by Client.
• Revamped query string handling allows any number of dynamic query segments.
• Transforming data guards can transform incoming data before processing it, via an implementation of the FromData::transform() method.
• Template::custom() helps customize template engines, including registering filters and helpers.
• Applications can now be launched without a working directory.
• Log messages now refer to routes by name.
• A default catcher for 504: Gateway Timeout has been added.
• All derives, macros, and attributes are individually documented in rocket_codegen.
• Config::root_relative() has been added to retrieve paths relative to the configuration file.
• Private cookies are now set HttpOnly and are given an expiration date of 1 week by default.

What can be expected from Rocket 0.5?

• Support for stable Rust: Rocket 0.5 will compile and run on stable versions of the Rust compiler.
• Asynchronous request handling: Rocket 0.5 will support asynchronous request handling.
• Multipart form support: The lack of built-in multipart form support makes handling file uploads and other submissions difficult; Rocket 0.5 will make handling multipart forms easy.
• Stronger CSRF and XSS protection: Rocket 0.5 will protect against CSRF using effective, robust techniques, and will add support for automatic, browser-based XSS protection.

Users have been giving good feedback on the Rocket 0.4 release and are highly appreciative of the Rocket team's efforts. One user commented on Hacker News, "This release isn't just about new features or rewrites. It's about passion for one's work. It's about grit. It's about uncompromising commitment to excellence." Sergio Benitez, a computer science PhD student at Stanford, has been widely praised for his work on Rocket 0.4; as another Hacker News user put it, "While there will ever be only one Sergio in the world, there are many others in the broader Rust community who are signaling many of the same positive qualities."

The release has also been appreciated for its Flask-like feel and for its ability to use code generation and a function's type signature to automatically check incoming parameters. The fact that the next release will be asynchronous has created a lot of curiosity in the developer community, and users are now looking forward to Rocket 0.5.

Read more about Rocket 0.4 in the official release notes.


Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018

Amrata Joshi
27 Nov 2018
3 min read
On day 1 of AWS re:Invent 2018, the team at Amazon released AWS Amplify Console, a continuous deployment and hosting service for mobile web applications. The AWS Amplify Console helps avoid downtime during application deployment and simplifies the deployment of an application's frontend and backend.

Features of AWS Amplify Console

Simplified continuous workflows: By connecting AWS Amplify Console to a code repository, the frontend and backend are deployed in a single workflow on every code commit. The web application is updated only after the deployment has successfully completed, eliminating inconsistencies between the application's frontend and backend.

Easy access: AWS Amplify Console makes building, deploying, and hosting mobile web applications easier, and lets users access features faster.

Easy custom domain setup: One can set up custom domains managed in Amazon Route 53 with a single click and get a free HTTPS certificate. If the domain is managed in Amazon Route 53, the Amplify Console automatically connects the root domain, subdomains, and branch subdomains.

Globally available: Apps are served via Amazon's reliable content delivery network, with 144 points of presence globally.

Atomic deployments: In AWS Amplify Console, atomic deployments eliminate maintenance windows and scenarios where files fail to upload properly.

Password protection: The Amplify Console can password-protect the web app, so one can work on new features without making them publicly accessible.

Branch deployments: With Amplify Console, one can work on new features without impacting production; users can create branch deployments linked to each feature branch.

Other features

• When connected to a code repository, the Amplify Console automatically detects the frontend build settings, along with any backend functionality provisioned with the Amplify CLI.
• Users can easily manage production and staging environments for the frontend and backend by connecting new branches.
• The console provides screenshots of the app, rendered on different mobile devices, to highlight layout issues.
• Users can set up rewrites and redirects to maintain SEO rankings.
• Users can build web apps with static and dynamic functionality, and deploy static site generators (SSGs) with free SSL on the AWS Amplify Console.

Check out the official announcement to know more about AWS Amplify Console.