
Tech News - Server-Side Web Development

85 Articles

Symfony leaves PHP-FIG, the framework interoperability group

Amrata Joshi
21 Nov 2018
2 min read
Yesterday, Symfony, a community of 600,000 developers from more than 120 countries, announced that it will no longer be a member of PHP-FIG, the framework interoperability group. Prior to Symfony, other major members to leave this group include Laravel, Propel, Guzzle, and Doctrine. The main goal of the PHP-FIG group is to work together to maintain interoperability, discuss commonalities between projects, and work together to make them better.

Why Symfony is leaving PHP-FIG

PHP-FIG has been working on various PSRs (PHP Standard Recommendations). Kévin Dunglas, a core team member at Symfony, said, “It looks like it's not the goal anymore, 'cause most (but not all) new PSRs are things no major frameworks ask for, and that they can't implement without breaking their whole ecosystem.”

https://twitter.com/fabpot/status/1064946913596895232

The fact that the major contributors left the group could possibly be a major reason for Symfony to quit. But it seems many are disappointed by this move, as they are not satisfied with the reason given.

https://twitter.com/mickael_andrieu/status/1065001101160792064

The matter of concern for Symfony was that the major projects were not being implemented as a combined effort.

https://twitter.com/dunglas/status/1065004250005204998
https://twitter.com/dunglas/status/1065002600402247680

Something similar happened while working towards PSR-7, where commonalities between the projects were not given importance. Instead, it was treated as a new, separate framework.

https://twitter.com/dunglas/status/1065007290217058304
https://twitter.com/titouangalopin/status/1064968608646864897

People are still arguing over why Symfony quit.

https://twitter.com/gmponos/status/1064985428300914688

Will the PSRs die?

With this latest move by Symfony, various questions have been raised about the project's next step. Will it still support PSRs, or is this the end of the PSRs? Kévin Dunglas answered this question in one of his tweets, where he said, “Regarding PSRs, I think we'll implement them if relevant (such as PSR-11) but not the ones not in the spirit of a broad interop (as PSR-7/14).”

To know more about this news, check out Fabien Potencier's Twitter thread.

Perform CRUD operations on MongoDB with PHP
Introduction to Functional Programming in PHP
Building a Web Application with PHP and MariaDB – Introduction to caching

How to build a chatbot with Microsoft Bot framework

Kunal Chaudhari
27 Apr 2018
8 min read
The Microsoft Bot Framework is an incredible tool from Microsoft. It makes building chatbots easier and more accessible than ever. That means you can build awesome conversational chatbots for a range of platforms, including Facebook and Slack.

In this tutorial, you'll learn how to build an FAQ chatbot using Microsoft Bot Framework and ASP.NET Core. This tutorial has been taken from .NET Core 2.0 By Example. Let's get started.

You'll build a chatbot that can respond to simple queries such as:

How are you?
Hello!
Bye!

This should provide a good foundation for you to go further and build more complex chatbots with the Microsoft Bot Framework. The more you train the bot and the more questions you put in its knowledge base, the better it will be.

If you're a UK-based public sector organisation, then ICS AI offer conversational AI solutions built to your needs. Their Microsoft-based infrastructure runs chatbots augmented with AI to better serve general public enquiries.

Build a basic FAQ chatbot with Microsoft Bot Framework

First of all, we need to create a page that can be accessed anonymously, as this is a frequently asked questions (FAQ) page, and hence the user should not be required to be logged in to the system to access it. To do so, let's create a new controller called FaqController in our LetsChat.csproj. It will be a very simple class with just one action called Index, which will display the FAQ page. The code is as follows:

[AllowAnonymous]
public class FaqController : Controller
{
    // GET: Faq
    public ActionResult Index()
    {
        return this.View();
    }
}

Notice that we have used the [AllowAnonymous] attribute, so that this controller can be accessed even if the user is not logged in. The corresponding .cshtml is also very simple. In the solution explorer, right-click on the Views folder under the LetsChat project, create a folder named Faq, and then add an Index.cshtml file in that folder. The markup of the Index.cshtml would look like this:

@{
    ViewData["Title"] = "Let's Chat";
    ViewData["UserName"] = "Guest";
    if (User.Identity.IsAuthenticated)
    {
        ViewData["UserName"] = User.Identity.Name;
    }
}
<h1> Hello @ViewData["UserName"]! Welcome to FAQ page of Let's Chat </h1>
<br />

Nothing much here apart from the welcome message. The message displays the username if the user is authenticated, else it displays Guest.

Now, we need to integrate the chatbot on this page. To do so, let's browse http://qnamaker.ai. This is Microsoft's QnA (as in questions and answers) Maker site, a free, easy-to-use, REST API and web-based service that trains artificial intelligence (AI) to respond to user questions in a more natural, conversational way. Compatible across development platforms, hosting services, and channels, QnA Maker is the only question and answer service with a graphical user interface—meaning you don't need to be a developer to train, manage, and use it for a wide range of solutions. And that is what makes it incredibly easy to use.

You would need to log in to this site with your Microsoft account (@microsoft/@live/@outlook). If you don't have one, you should create one and log in. On the very first login, the site will display a dialog seeking permission to access your email address and profile information. Click Yes and grant permission. You will then be presented with the service terms. Accept that as well. Then navigate to the Create New Service tab.
A form will appear as shown here. The form is easy to fill in and provides the option to extract the question/answer pairs from a site or from .tsv, .docx, .pdf, and .xlsx files. We don't have questions handy, so we will type them; do not bother about these fields. Just enter the service name and click the Create button. The service should be created successfully and the knowledge base screen should be displayed.

We will enter probable questions and answers in this knowledge base. If the user types a question that resembles a question in the knowledge base, it will respond with the corresponding answer. Hence, the more questions and answers we type, the better it will perform. So, enter all the questions and answers that you wish to enter, test it in the local chatbot setup, and, once you are happy with it, click on Publish. This will publish the knowledge base and share the sample URL to make the HTTP request. Note it down in a notepad. It contains the knowledge base identifier GUID, hostname, and subscription key.

With this, our questions and answers are ready and deployed. We need to display a chat interface, pass the user-entered text to this service, and display the response from this service to the user in the chat user interface. To do so, we will make use of the Microsoft Bot Builder SDK for .NET and follow these steps:

Download the Bot Application project template from http://aka.ms/bf-bc-vstemplate.
Download the Bot Controller item template from http://aka.ms/bf-bc-vscontrollertemplate.
Download the Bot Dialog item template from http://aka.ms/bf-bc-vsdialogtemplate.
Next, identify the project template and item template directory for Visual Studio 2017. The project template directory is located at %USERPROFILE%\Documents\Visual Studio 2017\Templates\ProjectTemplates\Visual C# and the item template directory is located at %USERPROFILE%\Documents\Visual Studio 2017\Templates\ItemTemplates\Visual C#.
Copy the Bot Application project template to the project template directory.
Copy the Bot Controller ZIP and Bot Dialog ZIP to the item template directory.

In the solution explorer of the LetsChat project, right-click on the solution and add a new project. Under Visual C#, we should now start seeing a Bot Application template. Name the project FaqBot and click OK. A new project will be created in the solution, which looks similar to the MVC project template. Build the project, so that all the dependencies are resolved and packages are restored. If you run the project, it is already a working bot, which can be tested with the Microsoft Bot Framework emulator.

Download the BotFramework-Emulator setup executable from https://github.com/Microsoft/BotFramework-Emulator/releases/. Let's run the bot project by hitting F5. It will display a page pointing to the default URL of http://localhost:3979. Now, open the Bot Framework emulator and navigate to the preceding URL with api/messages appended to it, that is, browse to http://localhost:3979/api/messages, and click Connect. On successful connection to the bot, a chat-like interface will be displayed in which you can type messages.

We have a working bot in place which just returns the text along with its length. We need to modify this bot to pass the user input to our QnA Maker service and display the response returned from our service. To do so, we will need to check the code of MessagesController in the Controllers folder.
We notice that it has just one method called Post, which checks the activity type, does specific processing for the activity type, creates a response, and returns it. The calculation happens in the Dialogs.RootDialog class, which is where we need to make the modification to wire up our QnA service. The modified code is shown here:

private static string knowledgeBaseId = ConfigurationManager.AppSettings["KnowledgeBaseId"]; //// Knowledge base id of QnA Service.
private static string qnamakerSubscriptionKey = ConfigurationManager.AppSettings["SubscriptionKey"]; //// Subscription key.
private static string hostUrl = ConfigurationManager.AppSettings["HostUrl"];

private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
{
    var activity = await result as Activity;

    // return our reply to the user
    await context.PostAsync(this.GetAnswerFromService(activity.Text));
    context.Wait(MessageReceivedAsync);
}

private string GetAnswerFromService(string inputText)
{
    //// Build the QnA Service URI
    Uri qnamakerUriBase = new Uri(hostUrl);
    var builder = new UriBuilder($"{qnamakerUriBase}/knowledgebases/{knowledgeBaseId}/generateAnswer");
    var postBody = $"{{\"question\": \"{inputText}\"}}";

    // Add the subscription key header
    using (WebClient client = new WebClient())
    {
        client.Headers.Add("Ocp-Apim-Subscription-Key", qnamakerSubscriptionKey);
        client.Headers.Add("Content-Type", "application/json");
        try
        {
            var response = client.UploadString(builder.Uri, postBody);
            var json = JsonConvert.DeserializeObject<QnAResult>(response);
            return json?.answers?.FirstOrDefault()?.answer;
        }
        catch (Exception ex)
        {
            return ex.Message;
        }
    }
}

The code is pretty straightforward. First, we add the QnA Maker service subscription key, host URL, and knowledge base ID in the appSettings section of Web.config. Next, we read these app settings into static variables so that they are always available. Then we modify the MessageReceivedAsync method of the dialog to pass the user input to the QnA service and return the response of the service back to the user. The QnAResult class can be seen in the source code.

This can be tested in the emulator by typing in any of the questions that we have stored in our knowledge base, and we will get the appropriate response. Our simple FAQ bot using the Microsoft Bot Framework and ASP.NET Core 2.0 is now ready!

Read more about building chatbots:
How to build a basic server side chatbot using Go

The Tor Project on browser fingerprinting and how it is taking a stand against it

Bhagyashree R
06 Sep 2019
4 min read
In a blog post shared on Wednesday, Pierre Laperdrix, a postdoctoral researcher in the Secure Web Applications Group at CISPA, talked about browser fingerprinting, its risks, and the efforts taken by the Tor Project to prevent it. He also talked about his Fingerprint Central website, which has officially been a part of the Tor Project since 2017.

What is browser fingerprinting

Browser fingerprinting is the systematic collection of information about a remote computing device for the purposes of identification. There are several techniques through which a third party can get a “rich fingerprint.” These include the availability of JavaScript or other client-side scripting languages, the user-agent and accept headers, the HTML5 Canvas element, and more.

Browser fingerprints may include information like browser and operating system type and version, active plugins, timezone, language, screen resolution, and various other active settings. Some users may think that these are too generic to identify a particular person. However, a study by Panopticlick, a browser fingerprinting test website, says that only 1 in 286,777 other browsers will share a given browser's fingerprint. Here's an example of a fingerprint Pierre Laperdrix shared in his post (image source: The Tor Project).

As with any technology, browser fingerprinting can be used or misused. Fingerprints can enable a remote application to prevent potential fraud or online identity theft. On the other hand, they can also be used to track users across websites and collect information about their online behavior, without their consent. Advertisers and marketers can use this data for targeted advertising.

Read also: All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Steps taken by the Tor Project to prevent browser fingerprinting

Laperdrix said that Tor was the very first browser to understand and address the privacy threats browser fingerprinting poses. The Tor browser, which goes by the tagline “anonymity online”, is designed to reduce online tracking and identification of users. The browser takes a very simple approach to prevent the identification of users. “In the end, the approach chosen by Tor developers is simple: all Tor users should have the exact same fingerprint. No matter what device or operating system you are using, your browser fingerprint should be the same as any device running Tor Browser,” Laperdrix wrote.

There are many other changes that have been made to the Tor browser over the years to prevent the unique identification of users. Tor warns users when they maximize their browser window, as window size is also an attribute that can be used to identify them. It has introduced default fallback fonts to prevent font and canvas fingerprinting. It has all the JS clock sources and event timestamps set to a specific resolution to prevent JS from measuring the time intervals of things like typing to produce a fingerprint.

Talking about his contribution towards preventing browser fingerprinting, Laperdrix wrote, “As part of the effort to reduce fingerprinting, I also developed a fingerprinting website called FP Central to help Tor developers find fingerprint regressions between different Tor builds.” As a part of Google Summer of Code 2016, Laperdrix proposed to develop a website called Fingerprint Central, which is now officially included in the Tor Project. Similar to AmIUnique.org or Panopticlick, FP Central is developed to study the diversity of browser fingerprints.
It runs a fingerprinting test suite and collects data from Tor browsers to help developers design and test new fingerprinting protection. They can also use it to ensure that fingerprinting-related bugs are correctly fixed with specific regression tests. Explaining the long-term goal of the website, he said, “The expected long-term impact of this project is to reduce the differences between Tor users and reinforce their privacy and anonymity online.”

There are a whole lot of modifications made under the hood to prevent browser fingerprinting that you can check out using the “tbb-fingerprinting” tag in the bug tracker. These modifications will also make their way into future releases of Firefox under the Tor Uplift program.

Many organizations have taken a stand against browser fingerprinting, including the browser companies Mozilla and Brave. Earlier this week, Firefox 69 was shipped with browser fingerprinting blocked by default. Brave also comes with a Fingerprinting Protection Mode enabled by default. In 2018, Apple updated Safari to only share a simplified system profile, making it difficult to uniquely identify or track users.

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Check out Laperdrix's post on the Tor blog to know more in detail about browser fingerprinting.

Other news in Web
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
Google Chrome 76 now supports native lazy-loading
#Reactgate forces React leaders to confront the community's toxic culture head on
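To make the list of fingerprintable attributes above concrete, here is a small, illustrative JavaScript snippet showing the kind of values a fingerprinting script can read from an ordinary browser session. It is not code from the Tor Project or FP Central; it simply prints a few of the attributes the article mentions.

// Illustrative only: a few of the attributes a fingerprinting script can collect.
const fingerprint = {
  userAgent: navigator.userAgent,                              // browser and OS type/version
  language: navigator.language,                                // preferred language
  screen: `${screen.width}x${screen.height}`,                  // screen resolution
  timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,  // timezone
  plugins: Array.from(navigator.plugins).map(p => p.name),     // active plugins
};
console.log(JSON.stringify(fingerprint, null, 2));

The Tor Browser's approach of making every user look the same works precisely because it normalizes values like these across all installations.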

Node.js v10.12.0 (Current) released

Sugandha Lahoti
11 Oct 2018
4 min read
Node.js v10.12.0 was released yesterday, with notable changes to assert, cli, crypto, fs, and more. However, the Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others. Hence, throughout the v10.12.0 documentation there are indications of a section's stability. Let's look at the notable changes which are stable.

Assert module

Changes have been made to assert. The assert module provides a simple set of assertion tests that can be used to test invariants. It comprises a strict mode and a legacy mode, although it is recommended to only use strict mode. In Node.js v10.12.0, the diff output is now improved by sorting object properties when inspecting the values that are compared with each other.

Changes to cli

The command-line interface in Node.js v10.12.0 has two improvements: The options parser now normalizes _ to - in all multi-word command-line flags, e.g. --no_warnings has the same effect as --no-warnings. It also includes bash completion for the node binary. Users can generate a bash completion script by running node --completion-bash. The output can be saved to a file which can be sourced to enable completion.

Crypto module

The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. In Node.js v10.12.0, crypto adds support for PEM-level encryption. It also adds an API for asymmetric key pair generation. The new methods crypto.generateKeyPair and crypto.generateKeyPairSync can be used to generate public and private key pairs. The API supports RSA, DSA, and EC and a variety of key encodings (both PEM and DER).

Improvements to the file system

The fs module provides an API for interacting with the file system in a manner closely modeled around standard POSIX functions. Node.js v10.12.0 adds a recursive option to fs.mkdir and fs.mkdirSync. On setting this option to true, non-existing parent folders will be automatically created.

Updates to Http/2

The http2 module provides an implementation of the HTTP/2 protocol. The new Node.js version adds support for a 'ping' event to Http2Session that is emitted whenever a non-ack PING is received. Support is also added for the ORIGIN frame. Also, nghttp2 is updated to v1.34.0, which adds RFC 8441 extended connect protocol support to allow the use of WebSockets over HTTP/2.

Changes in module

In the Node.js module system, each file is treated as a separate module. Module has also been updated in v10.12.0. It adds module.createRequireFromPath(filename). This new method can be used to create a custom require function that will resolve modules relative to the filename path.

Improvements to process

The process object is a global that provides information about, and control over, the current Node.js process. Process adds a 'multipleResolves' process event that is emitted whenever a Promise is attempted to be resolved multiple times.

Updates to url

Node.js v10.12.0 adds url.fileURLToPath(url) and url.pathToFileURL(path). These methods can be used to correctly convert between file: URLs and absolute paths.

Changes in Utilities

The util module is primarily designed to support the needs of Node.js' own internal APIs. The changes in Node.js v10.12.0 include: A new sorted option is added to util.inspect(). If set to true, all properties of an object and Set and Map entries will be sorted in the returned string. If set to a function, it is used as a compare function.
The util.inspect.custom symbol is now defined in the global symbol registry as Symbol.for('nodejs.util.inspect.custom'). Support for BigInt numbers in util.format() is also added.

Improvements in the V8 API

The V8 module exposes APIs that are specific to the version of V8 built into the Node.js binary. A number of V8 C++ APIs in v10.12.0 have been marked as deprecated since they have been removed in the upstream repository. Replacement APIs are added where necessary.

Changes in Windows

The Windows msi installer now provides an option to automatically install the tools required to build native modules.

You can find the full list of changes on the Node.js blog.

Node.js and JS Foundation announce intent to merge; developers have mixed feelings.
Node.js announces security updates for all their active release lines for August 2018.
Deploying Node.js apps on Google App Engine is now easy.
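Several of the stable additions above are easy to try from a single small script. The following is an illustrative sketch (not taken from the release notes) that exercises the recursive mkdir option, synchronous key pair generation, the sorted inspect option, and the new url helpers; the paths and key parameters are arbitrary examples.

// sketch.js - illustrative only; requires Node.js v10.12.0 or later
const fs = require('fs');
const util = require('util');
const { generateKeyPairSync } = require('crypto');
const { pathToFileURL, fileURLToPath } = require('url');

// fs.mkdirSync with { recursive: true } creates missing parent folders.
fs.mkdirSync('./tmp/nested/dirs', { recursive: true });

// crypto.generateKeyPairSync generates a public/private key pair;
// RSA with PEM encodings is one of the supported combinations.
const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
});
console.log(publicKey.split('\n')[0]); // -> -----BEGIN PUBLIC KEY-----

// util.inspect with { sorted: true } prints object properties in sorted order.
console.log(util.inspect({ b: 2, a: 1, c: 3 }, { sorted: true }));

// url.pathToFileURL and url.fileURLToPath convert between paths and file: URLs.
const fileUrl = pathToFileURL('./tmp/nested/dirs');
console.log(fileUrl.href, '->', fileURLToPath(fileUrl));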

Django 3.0 released with built-in async functionality and support for MariaDB and Python 3.6, 3.7 and 3.8

Sugandha Lahoti
03 Dec 2019
2 min read
Yesterday, Django released its latest major update, Django 3.0. Django is a Python-based web framework designed to help developers build apps faster with less code. Django 3.0 now comes with built-in async functionality; Python 3.6, 3.7, and 3.8 support; and third-party library support for the older versions of Django.

New features in Django 3.0

MariaDB support

Django now officially supports MariaDB 10.1 and higher. To use MariaDB, you should use the MySQL backend, which is shared between the two.

ASGI support for async programming

Django 3.0 provides support for running as an ASGI application, making Django fully async-capable (Django already has existing WSGI support). However, async features will only be available to applications that run under ASGI. As a side effect of this change, Django is now aware of asynchronous event loops and will block you from calling code marked as “async unsafe” - such as ORM operations - from an asynchronous context. This was one of the most eagerly awaited features.

https://twitter.com/jmcampbell72/status/1201502666431619072
https://twitter.com/arocks/status/1201711143103807490
https://twitter.com/gtcarvalh0/status/1201475826564382720

Exclusion constraints on PostgreSQL

Django 3.0 adds a new ExclusionConstraint class which adds exclusion constraints on PostgreSQL. Constraints are added to models using the Meta.constraints option.

Filter expressions

Expressions that output BooleanField may now be used directly in QuerySet filters, without having to first annotate and then filter against the annotation.

Enumerations for model field choices

Custom enumeration types TextChoices, IntegerChoices, and Choices are now available as a way to define Field.choices. TextChoices and IntegerChoices types are provided for text and integer fields.

Django 3.0 has also dropped support for PostgreSQL 9.4, whose upstream support ends in December 2019. It also removes private Python 2 compatibility APIs. The upstream support for Oracle 12.1 also ends in July 2021. Django 2.2 will be supported until April 2022. Django 3.0 officially supports Oracle 12.2 and 18c.

The complete list of updates is available in the release notes.

Django 3.0 is going async!
Which Python framework is best for building RESTful APIs? Django or Flask?
Django 2.2 is now out with classes for custom database constraints

Google Chrome 76 now supports native lazy-loading

Bhagyashree R
27 Aug 2019
4 min read
Earlier this month, Google Chrome 76 got native support for lazy loading. Web developers can now use the new 'loading' attribute to lazy-load resources without having to rely on a third-party library or write custom lazy-loading code.

Why native lazy loading is introduced

Lazy loading aims to provide better web performance in terms of both speed and consumption of data. Generally, images are the most requested resources on any website. Some web pages end up using a lot of data to load images that are out of the viewport. Though this might not have much effect on a WiFi user, it could surely end up consuming a lot of cellular data. Not only images, but out-of-viewport embedded iframes can also consume a lot of data and contribute to slow page speed.

Lazy loading addresses this problem by deferring the non-critical, below-the-fold image and iframe loads until the user scrolls closer to them. This results in faster web page loading, minimized bandwidth for users, and reduced memory usage.

Previously, there were a few ways to defer the loading of images and iframes that were out of the viewport. You could use the Intersection Observer API or the 'data-src' attribute on the 'img' tag. Many developers also built third-party libraries to provide abstractions that are even easier to use. Bringing native support, however, eliminates the need for an external library. It also ensures that the deferred loading of images and iframes still works even if JavaScript is disabled on the client.

How you can use lazy loading

Without this feature, Chrome already loads images at different priorities depending on their location with respect to the device viewport. The new 'loading' attribute, however, allows developers to completely defer the loading of images and iframes until the user scrolls near them. The distance-from-viewport threshold is not fixed and depends on the type of resources being fetched, whether Lite mode is enabled, and the effective connection type. There are default values assigned for effective connection type in the Chromium source code that might change in a future release. Also, since the images are lazy-loaded, there are chances of content reflow. To prevent this, developers are advised to set width and height for the images.

You can assign any one of the following three values to the 'loading' attribute:

'auto': This represents the default behavior of the browser and is equivalent to not including the attribute.
'lazy': This will defer the loading of images and iframes until they reach a calculated distance from the viewport.
'eager': This will load the resource immediately.

Support for native lazy loading in Chrome 76 got mixed reactions from users. A user commented on Hacker News, “I'm happy to see this. So many websites with lazy loading never implemented a fallback for noscript. And most of the popular libraries didn't account for this accessibility.”

Another user expressed that it does hinder user experience. They commented, “I may be the odd one out here, but I hate lazy loading. I get why it's a big thing on cellular connections, but I do most of my browsing on WIFI. With lazy loading, I'll frequently be reading an article, reach an image that hasn't loaded in yet, and have to wait for it, even though I've been reading for several minutes. Sometimes I also have to refind my place as the whole darn page reflows. I wish there was a middle ground... detect I'm on WIFI and go ahead and load in the lazy stuff after the above the fold stuff.”

Right now, Chrome is the only browser to support native lazy loading. However, other browsers may follow suit, considering Firefox has an open bug for implementing lazy loading and Edge is based on Chromium.

Why should your e-commerce site opt for Headless Magento 2?
Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
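The first Hacker News comment above touches on fallbacks, and feature detection makes it straightforward to combine the native attribute with a script-based fallback in browsers that do not support it yet. The snippet below is an illustrative sketch; loadLazyLoadLibrary() is a hypothetical stand-in for whichever lazy-loading script you already use, not part of Chrome's implementation.

// Prefer native lazy loading, fall back to a library otherwise.
if ('loading' in HTMLImageElement.prototype) {
  // Native support: copy the real URL into src and let the browser defer the fetch.
  document.querySelectorAll('img[data-src]').forEach((img) => {
    img.loading = 'lazy';
    img.src = img.dataset.src;
  });
} else {
  // No native support: hand the same images to a lazy-loading library of your choice.
  loadLazyLoadLibrary('img[data-src]'); // hypothetical helper
}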

JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

Bhagyashree R
28 Aug 2019
3 min read
Last month, the ECMAScript proposal for the optional chaining operator reached stage 3 of the TC39 process. This essentially means that the feature is almost finalized and is awaiting feedback from users. The optional chaining operator aims to make accessing properties through connected objects easier when there are chances of a reference or function being undefined or null.

https://twitter.com/drosenwasser/status/1154456633642119168

Why the optional chaining operator is proposed in JavaScript

Developers often need to access properties that are deeply nested in a tree-like structure. To do this, they sometimes end up writing long chains of property accesses, which can be error-prone. If any of the intermediate references in these chains evaluates to null or undefined, JavaScript will throw the TypeError: Cannot read property 'name' of undefined error.

The optional chaining operator aims to provide a more elegant way of recovering from such instances. It allows you to check for the existence of deeply nested properties in objects. If the operand before the operator evaluates to undefined or null, the expression short-circuits and returns undefined. Otherwise, the property access, method, or function call is evaluated normally.

MDN compares this operator with the dot (.) chaining operator. “The ?. operator functions similarly to the . chaining operator, except that instead of causing an error if a reference is null or undefined, the expression short-circuits with a return value of undefined. When used with function calls, it returns undefined if the given function does not exist,” the document reads.

The concept of optional chaining is not new. Several other languages also have support for a similar feature, including the null-conditional operator in C# 6 and later, the optional chaining operator in Swift, and the existential operator in CoffeeScript.

The optional chaining operator is represented by '?.'. Here's what its syntax looks like:

obj?.prop       // optional static property access
obj?.[expr]     // optional dynamic property access
func?.(...args) // optional function or method call

Some properties of optional chaining

Short-circuiting: The rest of the expression is not evaluated if an optional chaining operator encounters undefined or null on its left-hand side.
Stacking: Another property of the optional chaining operator is that you can stack them. This means that you can apply more than one optional chaining operator on a sequence of property accesses.
Optional deletion: You can also combine the 'delete' operator with an optional chain.

Though there is still time for the optional chaining operator to land in JavaScript, you can give it a try with a Babel plugin. To stay updated with its browser compatibility, check out the MDN web docs.

Many developers are appreciating this feature. A developer on Reddit wrote, “Considering how prevalent 'Cannot read property foo of undefined' errors are in JS development, this is much appreciated. Yes, you can rant that people should do null guards better and write less brittle code. True, but better language features help protect users from developer laziness.”

Yesterday, the team behind V8, Chrome's JavaScript engine, also expressed their delight on Twitter:

https://twitter.com/v8js/status/1166360971914481669

Read the Optional Chaining for JavaScript proposal to know more in detail.
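A short, illustrative example (not taken from the proposal text) shows the three behaviours listed above, namely short-circuiting, stacking, and optional deletion, against a small nested object; getConfig is a hypothetical method used only to demonstrate short-circuiting.

// Illustrative example of the optional chaining behaviours described above.
const user = { profile: { name: 'Ada', social: null } };

// Stacking: several ?. in one chain; no TypeError even though social is null.
const handle = user.profile?.social?.handle;   // -> undefined

// Short-circuiting: settings is undefined, so getConfig() is never called.
const settings = undefined;
const theme = settings?.getConfig().theme;     // -> undefined (getConfig is hypothetical)

// Optional call and optional deletion.
user.notify?.();                               // no-op: notify does not exist
delete user.profile?.social;                   // deletes the property if the chain resolves

console.log(handle, theme);                    // -> undefined undefined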
ES2019: What's new in ECMAScript, the JavaScript specification standard
Introducing QuickJS, a small and easily embeddable JavaScript engine
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more

Vincy Davis
17 Jun 2019
6 min read
Last week, HAProxy 2.0 was released with critical features for cloud-native and containerized environments. This is an LTS (long-term support) release, which includes a powerful set of core features such as Layer 7 retries, cloud-native threading and logging, polyglot extensibility, gRPC support, and more, and will improve the seamless support for integration into modern architectures. In conjunction with this release, the HAProxy team has also introduced the HAProxy Kubernetes Ingress Controller and the HAProxy Data Plane API. The founder of HAProxy Technologies, Willy Tarreau, has said that these developments will come with the HAProxy 2.1 version. The HAProxy project has also opened up issue submissions on its HAProxy GitHub account.

Some features of HAProxy 2.0

Cloud-Native Threading and Logging

HAProxy can now scale to accommodate any environment with less manual configuration. The number of worker threads is matched to the machine's number of available CPU cores. The process setting is no longer required, thus simplifying the bind line. Two new build parameters have been added, MAX_THREADS and MAX_PROCS, which avoid allocating huge structs. Logging has been made easier for containerized environments: direct logging to stdout and stderr, or to a file descriptor, is now possible.

Kubernetes Ingress Controller

The HAProxy Kubernetes Ingress Controller provides a high-performance ingress for Kubernetes-hosted applications. It supports TLS offloading, Layer 7 routing, rate limiting, and whitelisting. Ingresses can be configured through either ConfigMap resources or annotations. The Ingress Controller gives users the ability to:

Use only one IP address and port and direct requests to the correct pod based on the Host header and request path
Secure communication with built-in SSL termination
Apply rate limits for clients while optionally whitelisting IP addresses
Select from among any of HAProxy's load-balancing algorithms
Get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics
Set maximum connection limits to backend servers to prevent overloading services

Layer 7 Retries

With HAProxy 2.0, it is possible to retry failed HTTP requests from another server at Layer 7. The new configuration directive, retry-on, can be used in a defaults, listen, or backend section. The number of attempts at retrying can be specified using the retries directive. The full list of retry-on options is given on the HAProxy blog. HAProxy 2.0 also introduces a new http-request action called disable-l7-retry. It allows the user to disable any attempt to retry the request if it fails for any reason other than a connection failure. This can be useful to make sure that POST requests aren't retried (see the configuration sketch at the end of this piece).

Polyglot Extensibility

The Stream Processing Offload Engine (SPOE) and Stream Processing Offload Protocol (SPOP) were introduced in HAProxy 1.7. They aimed to create the extension points necessary to build upon HAProxy using any programming language. From HAProxy 2.0, libraries and examples are available in the following languages and platforms: C, .NET Core, Golang, Lua, and Python.

gRPC

HAProxy 2.0 delivers full support for the open-source RPC framework, gRPC. This allows bidirectional streaming of data, detection of gRPC messages, and logging of gRPC traffic. Two new converters, protobuf and ungrpc, have been introduced to extract the raw Protocol Buffer messages.
Using Protocol Buffers, gRPC enables users to serialize messages into a binary format that's compact and potentially more efficient than JSON. Users need to set up a standard end-to-end HTTP/2 configuration to start using gRPC in HAProxy.

HTTP Representation (HTX)

The Native HTTP Representation (HTX) was introduced with HAProxy 1.9. Starting from 2.0, it is enabled by default. HTX creates strongly typed, well-delineated header fields and allows for gaps and out-of-order fields. It also allows HAProxy to maintain consistent semantics from end-to-end and provides higher performance when translating HTTP/2 to HTTP/1.1 or vice versa.

LTS Support for 1.9 Features

HAProxy 2.0 brings LTS support for many features that were introduced or improved upon during the 1.9 release. Some of them are listed below:

Small Object Cache with an increased caching size of up to 2GB, set with the max-object-size directive. The total-max-size setting determines the total size of the cache and can be increased up to 4095MB.
New fetches like date_us, cpu_calls, and more have been included, which report either an internal state or data from layers 4, 5, 6, and 7.
New converters like strcmp, concat, and more allow you to transform data within HAProxy.
Server Queue Priority Control lets users prioritize some queued connections over others. This is helpful to deliver JavaScript or CSS files before images.
The resolvers section supports using resolv.conf by specifying parse-resolv-conf.

The HAProxy team plans to build HAProxy 2.1 with features like UDP support, OpenTracing, and dynamic SSL certificate updates. The inaugural HAProxy community conference, HAProxyConf, is scheduled to take place in Amsterdam, Netherlands on November 12-13, 2019.

A user on Hacker News comments, “HAProxy is probably the best proxy server I had to deal with ever. It's performance is exceptional, it does not interfere with L7 data unless you tell it to and it's extremely straightforward to configure reading the manual.”

Some are busy comparing HAProxy with the nginx web server. A user says, “In my previous company we used to use HAProxy, and it was a hassle. Yes, it is powerful. However, nginx is way easier to configure and set up, and performance wise is a contender for most usual applications people needed. nginx just fulfills most people's requirements for reverse proxy and has solid HTTP/2 support (and other features) for way longer.”

Another user states, “Big difference is that haproxy did not used to support ssl without using something external like stunnel -- nginx basically did it all out of the box and I haven't had a need for haproxy in quite some time now.”

Others suggest that HAProxy is trying hard to stay equipped with the latest features in this release.

https://twitter.com/garthk/status/1140366975819849728

A user on Hacker News agrees by saying, “These days I think HAProxy and nginx have grown a lot closer together on capabilities.”

Visit the HAProxy blog for more details about HAProxy 2.0.

HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
MariaDB announces the release of MariaDB Enterprise Server 10.4
Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service
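As a rough illustration of the Layer 7 retry directives described earlier (retry-on, retries, and the disable-l7-retry action), a backend section could combine them roughly as in the sketch below. This is a hedged sketch based only on the directives named in this article, not a reference configuration; the backend name, server names, and addresses are placeholders.

backend web_servers
    mode http
    # Retry failed requests against another server on connection failures or 503 responses.
    retry-on conn-failure 503
    retries 3
    # Do not attempt Layer 7 retries for POST requests.
    http-request disable-l7-retry if METH_POST
    server web1 10.0.0.11:8080 check
    server web2 10.0.0.12:8080 check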

Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol

Fatema Patrawala
24 Sep 2019
3 min read
Two months ago, Mozilla introduced Neqo, code written in Rust to implement QUIC, a new protocol for the web developed on top of UDP instead of TCP.

As per the GitHub page, web developers who want to test HTTP 0.9 programs using neqo-client and neqo-server can use the following commands:

cargo build
./target/debug/neqo-server 12345 -k key --db ./test-fixture/db
./target/debug/neqo-client http://127.0.0.1:12345/ -o --db ./test-fixture/db

Developers who want to test HTTP 3 programs using neqo-client and neqo-http3-server should use the commands given below:

cargo build
./target/debug/neqo-http3-server [::]:12345 --db ./test-fixture/db
./target/debug/neqo-client http://127.0.0.1:12345/ --db ./test-fixture/db

What is QUIC and why is it important for web developers

According to Wikipedia, QUIC is the next-generation, encrypted-by-default transport layer network protocol designed by Jim Roskind at Google. It is designed to secure and accelerate web traffic on the Internet. It was implemented and deployed in 2012, announced publicly in 2013 as experimentation broadened, and described to the IETF. While still an Internet Draft, QUIC is used by more than half of all connections from the Chrome web browser to Google's servers.

As per QUIC's official website, “QUIC is an IETF Working Group that is chartered to deliver the next transport protocol for the Internet.”

One of the users on Hacker News commented, “QUIC is an entirely new protocol for the web developed on top of UDP instead of TCP. UDP has the advantage that it is not dependent on the order of the received packets, hence non-blocking unlike TCP. If QUIC is used, the TCP/TLS/HTTP2 stack is replaced to UDP/QUIC stack.”

The user further comments, “If QUIC features prove effective, those features could migrate into a later version of TCP and TLS (which have a notably longer deployment cycle). So basically, QUIC wants to combine the speed of the UDP protocol, with the reliability of the TCP protocol.”

Additionally, the Rust community on Reddit was asked if QUIC is royalty free. One of the Rust developers responded, “Yes, it is being developed and standardized by a working group (under the IETF) and the IETF respectively. So it will become an internet standard just like UDP, TCP, HTTP, etc.”

If you are interested in knowing more about Neqo and QUIC, check out the official GitHub page.

Other interesting news in web development
Chrome 78 beta brings the CSS Properties and Values API, the native file system API, and more!
Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!
Inkscape 1.0 beta is available for testing

Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux

Bhagyashree R
26 Sep 2018
2 min read
Written in C90, Wasmjit is a small embeddable WebAssembly runtime. It is portable to most environments, but it primarily targets a Linux kernel module that can host Emscripten-generated WebAssembly modules.

What are the benefits of Wasmjit?

Improved performance: Using Wasmjit, you will be able to run WebAssembly modules in kernel-space (ring 0). This provides access to system calls as normal function calls, which eliminates the user-kernel transition overhead. It also avoids the scheduling overheads from swapping page tables. This provides a boost in performance for syscall-bound programs like web servers or FUSE file systems.

No need to run an entire browser: Wasmjit also comes with a host environment for running in user-space on POSIX systems. This allows running WebAssembly modules without having to run an entire browser.

What tools do you need to get started?

The following are the tools you require to get started with Wasmjit:

A standard POSIX C development environment with cc and make
The Emscripten SDK
Optionally, kernel headers on Linux: the linux-headers-amd64 package on Debian, or kernel-devel on Fedora

What's in the future?

Wasmjit currently supports x86_64 and can run a subset of Emscripten-generated WebAssembly on Linux, macOS, and within the Linux kernel as a kernel module. In coming releases, we will see more implementations and improvements along the following lines:

Enough Emscripten host bindings to run nginx.wasm
Introduction of an interpreter
A Rust runtime for Rust-generated wasm files
A Go runtime for Go-generated wasm files
An optimized x86_64 JIT
An arm64 JIT
A macOS kernel module

What to consider when using this runtime?

Wasmjit uses vmalloc(), a function for allocating a contiguous memory region in the virtual address space, for code and data section allocations. This prevents those pages from ever being swapped to disk, so indiscriminate access to the /dev/wasm device can make a system vulnerable to denial-of-service attacks. To mitigate this risk, in future, a system-wide limit on the amount of memory used by the /dev/wasm device will be provided.

To get started with Wasmjit, check out its GitHub repository.

Why is everyone going crazy over WebAssembly?
Unity Benchmark report approves WebAssembly load times and performance in popular web browsers
Golang 1.11 is here with modules and experimental WebAssembly port among other updates

Introducing QuickJS, a small and easily embeddable JavaScript engine

Bhagyashree R
12 Jul 2019
3 min read
On Tuesday, Fabrice Bellard, the creator of FFmpeg and QEMU, and Charlie Gordon, a C expert, announced the first public release of QuickJS. Released under the MIT license, it is a “small but complete JavaScript engine” that comes with support for the latest ES2019 language specification.

Features of the QuickJS JavaScript engine

Small and easily embeddable: The engine is formed by a few C files and does not have any external dependency.
Fast interpreter: The interpreter shows impressive speed by running 56,000 tests from the ECMAScript Test Suite in just 100 seconds, and that too on a single-core CPU. A runtime instance completes its life cycle in less than 300 microseconds.
ES2019 support: The support for the ES2019 specification is almost complete, including modules, asynchronous generators, and full Annex B support (legacy web compatibility). Currently, it does not have support for realms and tail calls.
No external dependency: It can compile JavaScript source to executables without the need for any external dependency.
Command-line interpreter: The command-line interpreter comes with contextual colorization and completion implemented in JavaScript.
Garbage collection: It uses reference counting with cycle removal to free objects automatically and deterministically. This reduces memory usage and ensures deterministic behavior of the JavaScript engine.
Mathematical extensions: You can find all the mathematical extensions in the 'qjsbn' version, which are fully backward-compatible with standard JavaScript. It supports big integers (BigInt), big floating-point numbers (BigFloat), and operator overloading, and also comes with 'bigint' and 'math' modes.

This news struck a discussion on Hacker News, where developers were all praise for Bellard's and Gordon's outstanding work on this project.

A developer commented, “Wow. The core is a single 1.5MB file that's very readable, it supports nearly all of the latest standard, and Bellard even added his own extensions on top of that. It has compile-time options for either a NaN-boxing or traditional tagged union object representation, so he didn't just go for a single minimal implementation (unlike e.g. OTCC) but even had the time and energy to explore a bit. I like the fact that it's not C99 but appears to be basic C89, meaning very high portability. Despite my general distaste for JS largely due to websites tending to abuse it more than anything, this project is still immensely impressive and very inspiring, and one wonders whether there is still "space at the bottom" for even smaller but functionality competitive implementations.”

Another wrote, “I can't wait to mess around with this, it looks super cool. I love the minimalist approach. If it's truly spec compliant, I'll be using this to compile down a bunch of CLI scripts I've written that currently use node. I tend to stick with the ECMAScript core whenever I can and avoid using packages from NPM, especially ones with binary components. A lot of the time that slows me down a bit because I'm rewriting parts of libraries, but here everything should just work with a little bit of translation for the OS interaction layer which is very exciting.”

To know more about QuickJS, check out Fabrice Bellard's official website.

Firefox 67 will come with faster and reliable JavaScript debugging tools
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!

Warp: Rust's new web framework

Melisha Dsouza
06 Aug 2018
3 min read
Warp is a new Rust web framework. Built by Sean McArthur and Carl Lerche, it's a tool for building and managing web servers. More specifically, it was designed to give developers more control over how they configure routes within their services. It's worth pointing out that Rust's Warp shouldn't be confused with Haskell's Warp - in the Haskell world, Warp is a lightweight web server for WAI applications.

This article was amended 7.25.2019 to clarify that Rust's Warp framework and Haskell's Warp framework are different.

What's the thinking behind Rust's Warp framework?

In a blog post announcing the framework, McArthur explains that the inspiration for Warp came out of his experience working with many different frameworks and tools - most recently Node.js. He writes: "I found that I often times need to configure predicates, like certain headers required, query parameters needed, etc, and sometimes, I need to configure that a set of routes should be 'mounted' at a different path, and possibly want certain predicates there too. I noticed the concept of mounting or sub-routes or sub-resources or whatever the framework calls them didn't feel… natural, at least to me."

With this challenge setting the context for Warp, McArthur's love of Rust and the highly functional aspect of Scala tools like Finch and Akka helped to lay the technical foundations for the web framework. Central to the framework are filters.

Read next: Will Rust replace C++?

What are filters in the Warp web framework?

Filters are a feature that makes configuring endpoints easier. McArthur explains that a filter is "a function that can operate on some input... and returns some output, which could be some app-specific type you wish to pass around, or can be some reply to send back as an HTTP response." The advantage of this is that if you are trying to "piece together data from several different places of a request before you have your domain object", you can treat each source as a filter and combine them in a relatively straightforward manner. McArthur repeatedly uses the word 'natural' - to put it another way, filters make things easier and cleaner for the developer.

Read next: Rust 1.28 is here with global allocators, nonZero types and more

The Rust ecosystem is growing

It's not news that Rust is a hugely popular programming language. In this year's Stack Overflow survey, Rust was listed as the most loved language by respondents (3 years running!). However, it hasn't seen extensive and rapid growth despite its advantages. With a growing ecosystem of tools like Warp, that could well change over the next couple of years.

Mojolicious 8.0, a web framework for Perl, released with new Promises and Roles

Savia Lobo
18 Sep 2018
2 min read
Mojolicious, a next-generation web framework for the Perl programming language, announced its upgrade to the latest 8.0 version. Mojolicious 8.0 was announced at Mojoconf in Norway, held from 6th to 7th September 2018. This release is codenamed 'Supervillain' and is a major release for Mojolicious.

Mojolicious allows users to easily grow single-file prototypes into well-structured MVC web applications. It is a powerful web development toolkit that one can use for all kinds of applications, independently of the web framework. Many companies such as Alibaba Group, IBM, Logitech, Mozilla, and others rely on Mojolicious to develop new code bases. Even projects like Bugzilla are being ported to Mojolicious.

The Mojolicious community has decided to make a few organizational changes to support the continuous growth. These include:

All new development will be consolidated in a single GitHub organization.
Mojolicious' official IRC channel, which has almost 200 regulars (say hi!), will be moving to Freenode (#mojo on irc.freenode.net). This will make it easier for people not yet part of the Perl community to get involved.

Some highlights of Mojolicious 8.0

Promises/A+

Mojolicious 8.0 includes Promises/A+, a new module and pattern for working with event loops. A promise represents the eventual result of an asynchronous operation.

Roles and subprocess

Version 8.0 now includes roles, a new way to extend Mojo classes. Also, subprocesses can now mix event loops and computationally expensive tasks.

Placeholder types and Mojo::File

With placeholder types, one can avoid repetitive routes. Mojo::File is a brand new module for dealing with file systems.

Cpanel::JSON::XS and Mojo::PG

With Cpanel::JSON::XS, users can now process JSON at a much faster rate. Mojo::PG includes many new SQL::Abstract extensions for Postgres features.

To know more about Mojolicious 8.0 in detail, visit its GitHub page.

Warp: Rust's new web framework for implementing WAI (Web Application Interface)
What's new in Vapor 3, the popular Swift based web framework
Beating jQuery: Making a Web Framework Worth its Weight in Code

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

Bhagyashree R
02 Nov 2018
2 min read
Yesterday, the Google Chrome team introduced Carlo, a web rendering surface for Node applications. Carlo provides rich rendering capabilities powered by the Google Chrome browser to Node applications. Using Puppeteer, it is able to communicate with the locally installed browser instance. Puppeteer is also a Google Chrome project that comes with a high-level API to control Chrome or Chromium over the DevTools Protocol.

Why Carlo was introduced

Carlo aims to show how the locally installed browser can be used with Node out of the box. The advantage of using Carlo over Electron is that the Node v8 and Chrome v8 engines are decoupled in Carlo. This provides a maintainable model that allows independent updates of the underlying components. In short, Carlo gives you more control over bundling.

What you can do with Carlo

Carlo enables you to create hybrid applications that use the web stack for rendering and Node for capabilities. You can do the following with it:

Visualize the dynamic state of your Node applications using the web rendering stack.
Expose additional system capabilities accessible from Node to your web applications.
Package your application into a single executable using the command-line interface, pkg.

How does it work?

Its working involves three steps:

First, Carlo checks whether Google Chrome is installed locally or not.
It then launches Google Chrome and establishes a connection to it over the process pipe.
Finally, it exposes a high-level API for rendering in Chrome.

For users who do not have Chrome installed, Carlo prints an error message. It supports Chrome Stable channel versions 70.* and Node v7.6.0 onwards. You can install and get started with it by executing the following command:

npm i carlo

Read the full description on Carlo's GitHub repository.

Node v11.0.0 released
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
Node.js and JS Foundation announce intent to merge; developers have mixed feelings
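For a sense of what a hybrid Carlo application looks like in practice, here is a minimal sketch based on the capabilities described above (launching the local Chrome, serving a local folder, exposing a Node function to the page, and loading a web UI). The file name, the exposed getHostname function, and index.html are illustrative assumptions rather than code from the announcement.

// app.js - illustrative sketch of a hybrid Node + web app using Carlo.
const carlo = require('carlo');
const os = require('os');

(async () => {
  // Launch the locally installed Chrome and open an app window.
  const app = await carlo.launch();
  app.on('exit', () => process.exit());

  // Serve the current folder as the web root for the rendering surface.
  app.serveFolder(__dirname);

  // Expose a Node capability (here, system info) to the web page.
  await app.exposeFunction('getHostname', () => os.hostname());

  // Load the local UI; index.html can call window.getHostname() to reach Node.
  await app.load('index.html');
})();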

How Deliveroo migrated from Ruby to Rust without breaking production

Bhagyashree R
15 Feb 2019
3 min read
Yesterday, the Deliveroo engineering team shared their experience of how they migrated their Tier 1 service from Ruby to Rust without breaking production. Deliveroo is an online food delivery company based in the United Kingdom.

Why Deliveroo decided to part ways with Ruby for the Dispatcher service

The Logistics team at Deliveroo uses a service called Dispatcher. This service optimally offers an order to a rider, and it does this with the help of a timeline for each rider. This timeline helps in predicting where riders will be at a certain point in time. Knowing this information allows the service to efficiently suggest a rider for an order. Building these timelines requires a lot of computation. Though each computation is quick, there are a lot of them.

The Dispatcher service was first written in Ruby as it was the company's preferred language in the beginning. Earlier, it performed fine because the business was not as big as it is now. With time, as Deliveroo grew, the number of orders increased. This is why the Dispatcher service started taking much longer than before.

Why they chose Rust as the replacement for Ruby

Instead of rewriting the whole thing in Rust, the team decided to identify the bottlenecks that were slowing down the Dispatcher service and rewrite them in a different programming language (Rust). They concluded that it would be easier to build some sort of native extension written in Rust and make it work with the current code written in Ruby. The team chose Rust because it provides high performance comparable to C and is memory safe. Rust also allowed them to build dynamic libraries, which can later be loaded into Ruby. Additionally, some of their team members had experience with Rust, and one part of the Dispatcher was already in Rust.

How they migrated from Ruby to Rust

There are two options for calling Rust from Ruby. One is writing a dynamic library in Rust with an extern "C" interface and calling it using FFI. The second is writing a dynamic library, but using the Ruby API to register methods, so that they can be called from Ruby directly, just like any other Ruby code.

The Deliveroo team chose the second approach of using the Ruby API, as there are many libraries available to make it easier, for instance, ruru, rutie, and Helix. The team decided to use Rutie, which is a recent fork of Ruru and is under active development.

The team planned to gradually replace all parts of the Ruby Dispatcher with Rust. They began the migration by replacing the classes which did not have any dependencies on other parts of the Dispatcher with Rust implementations, and adding feature flags. As the APIs of the Ruby and Rust class implementations were quite similar, they were able to use the same tests.

With the help of Rust, the overall dispatch time was reduced significantly. For instance, in one of their larger zones, it dropped from ~4 sec to 0.8 sec. Out of these 0.8 seconds, the Rust part only consumed 0.2 seconds.

Read the post shared by Andrii Dmytrenko, a Software Engineer at Deliveroo, for more details.

Introducing RustPython, a Python 3 interpreter written in Rust
Rust 1.32 released with a print debugger and other changes
How has Rust and WebAssembly evolved in 2018