
Tech News - Web Development

354 Articles
Chrome 78 beta brings the CSS Properties and Values API, the native file system API, and more!

Bhagyashree R
23 Sep 2019
3 min read
Last week, Google announced the release of Chrome 78 beta. Its stable version is scheduled for release in October this year. Chrome 78 will ship with several new APIs, including the CSS Properties and Values API and the Native File System API.

Key updates in Chrome 78 beta

The CSS Properties and Values API

Houdini’s CSS Properties and Values API will be supported in Chrome 78. The Houdini task force consists of engineers from Mozilla, Apple, Opera, Microsoft, HP, Intel, and Google. In CSS, developers can define user-controlled properties using CSS custom properties, also known as CSS variables. However, CSS custom properties have a few limitations that make them difficult to work with. The CSS Properties and Values API addresses these limitations by allowing the registration of properties that have a value type, an initial value, and a defined inheritance behavior.

The Native File System API

Chrome 78 will support the Native File System API, which will enable web applications such as IDEs, photo and video editors, and text editors to interact with files on the user’s local device. After permission to access local files is granted, the API will allow web applications to read or save changes directly to files and folders on the user’s device.

The SMS Receiver API

Websites send a randomly generated one-time password (OTP) to verify a phone number. This way of verification is cumbersome, as it requires the user to manually enter or copy and paste the password into a form. Starting with Chrome 78, users will be able to skip this manual interaction completely with the help of the SMS Receiver API. It gives websites the ability to programmatically obtain OTPs from SMS as a solution “to ease the friction and failure points of manual user input of SMS codes, which is prone to error and phishing.”

Origin trials

Chrome 78 introduces new origin trials, which allow developers to try new features and share their feedback on “usability, practicality, and effectiveness to the web standards community.” Developers can register to enable an origin trial feature for all users on their origin for a fixed period of time. To see which features are available as an origin trial, check out the Origin Trials dashboard.

Among the deprecations are the disallowing of synchronous XHR during page dismissal and the removal of the XSS Auditor.

In a discussion on Hacker News, users were skeptical about the new Native File System API. A user commented, “I’m not sure about how to think about the file system API. On one hand, is great to see that secure file system access is possible in-browser, which allows most electron apps to be converted into PWAs. That’s great, I no longer need to run 5 different chromium instances. On the other hand, I’m really not sure if I like the future of editing Microsoft Office documents in the browser. I heavily believe that apps should have an integrated UX (with appropriate OS-specific widgets) because it allows coherency and familiarity.”

To know what else is coming in Chrome 78, check out the official announcement by Google.

Other news in Web Development

Safari Technology Preview 91 gets beta support for the WebGPU JavaScript API and WSL
New memory usage optimizations implemented in V8 Lite can also benefit V8
GitHub updates to Rails 6.0 with an incremental approach
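The property registration that the CSS Properties and Values API enables can be sketched as follows. This is a minimal sketch with feature detection, since the API only exists in supporting browsers (there is no `CSS` global in Node); the property name `--theme-color` and its values are hypothetical, chosen for illustration.

```javascript
// Descriptor for a typed custom property: a value type (syntax), an
// explicit inheritance behavior, and an initial value — the three things
// plain CSS custom properties lack.
const descriptor = {
  name: '--theme-color',   // hypothetical property name
  syntax: '<color>',       // only valid color values will be accepted
  inherits: false,
  initialValue: '#0066ff',
};

// Feature-detect before registering: CSS.registerProperty only exists in
// browsers that ship the CSS Properties and Values API (Chrome 78+).
if (typeof CSS !== 'undefined' && typeof CSS.registerProperty === 'function') {
  CSS.registerProperty(descriptor);
  console.log(`registered ${descriptor.name}`);
} else {
  console.log('CSS Properties and Values API not available');
}
```

Once registered, the browser can animate and validate `--theme-color` as a real color rather than an untyped string.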


Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!

Bhagyashree R
20 Sep 2019
3 min read
Yesterday, Apple released Safari 13 for iOS 13, macOS 10.15 (Catalina), macOS Mojave, and macOS High Sierra. This release comes with opt-in dark mode support, FIDO2-compliant USB security key support, updated Intelligent Tracking Prevention, and much more.

Key updates in Safari 13

Desktop-class browsing for iPad users

Starting with Safari 13, iPad users will have the same browsing experience as macOS users. In addition to displaying websites the same way as desktop Safari, it will also provide the same capabilities, including more keyboard shortcuts, a download manager with background downloads, and support for top productivity websites.

Updates related to authentication and passwords

Safari 13 will prompt users to strengthen their passwords when they sign in to a website. On macOS, users will be able to use FIDO2-compliant USB security keys in Safari. Support has also been added for “Sign in with Apple” in Safari and WKWebView.

Read also: W3C and FIDO Alliance declare WebAuthn as the web standard for password-free logins

Security and privacy updates

A new permission API has been added for DeviceMotionEvent and DeviceOrientationEvent on iOS. The DeviceMotionEvent class encapsulates details such as the measurement interval, rotation rate, and acceleration of a device, while the DeviceOrientationEvent class encapsulates the angles of rotation (alpha, beta, and gamma) in degrees and the heading. Third-party iframes have also been updated to prevent them from automatically navigating the page, and Intelligent Tracking Prevention has been updated to prevent cross-site tracking through referrers and link decoration.

Performance-specific updates

While using Safari 13, iOS users will find that the initial rendering time for web pages is reduced. Memory consumption by JavaScript, including for non-web clients, has also been reduced.

WebAPI updates

Safari 13 comes with a new Pointer Events API to enable consistent access to mouse, trackpad, touch, and Apple Pencil events. It also supports the Visual Viewport API, which adjusts web content to avoid overlays, such as the onscreen keyboard.

Deprecated features in Safari 13

WebSQL and Legacy Safari Extensions are no longer supported. To replace previously provided Legacy Safari Extensions, Apple offers two options. First, you can configure your Safari App Extension to provide an upgrade path that automatically removes the previous Legacy Safari Extension when it is installed. Second, you can manually convert your Legacy Safari Extension to a Safari App Extension.

In a discussion on Hacker News, users were pleased with the support for the Pointer Events API. A user commented, “The Pointer Events spec is a real joy. For example, if you want to roll your own "drag" event for a given element, the API allows you to do this without reference to document or a parent container element. You can just declare that the element currently receiving pointer events capture subsequent pointer events until you release it. Additionally, the API naturally lends itself to patterns that can easily be extended for multi-touch situations.”

Others expressed concern regarding the deprecation of Legacy Safari Extensions. A user added, “It really, really is a shame that they removed proper extensions. While Safari never had a good extension story, it was at least bearable, and in all other regards its simply the best Mac browser. Now I have to take a really hard look at switching back to Firefox, and that would be a downgrade in almost every regard I care about. Pity.”

Check out the official release notes of Safari 13 for more details.

Other news in web development

New memory usage optimizations implemented in V8 Lite can also benefit V8
5 pitfalls of React Hooks you should avoid – Kent C. Dodds
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
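The "roll your own drag" pattern that the Hacker News commenter describes can be sketched with the standard Pointer Events capture calls. This is a minimal sketch: `makeDraggable` is a hypothetical helper, and `el` stands in for a DOM element (in a browser you would pass a real element; outside one, any object with the same methods works).

```javascript
// Make an element draggable using pointer capture: after pointerdown, all
// pointer events for that pointer are routed to the element itself, so no
// document-level or parent-container listeners are needed.
function makeDraggable(el, onMove) {
  el.addEventListener('pointerdown', (e) => {
    // Capture this pointer: subsequent events go to `el` until released
    el.setPointerCapture(e.pointerId);
  });
  el.addEventListener('pointermove', (e) => {
    // Only report movement while the drag (capture) is active
    if (el.hasPointerCapture(e.pointerId)) {
      onMove(e.movementX, e.movementY);
    }
  });
  el.addEventListener('pointerup', (e) => {
    el.releasePointerCapture(e.pointerId); // end the drag
  });
}
```

Because capture is per-pointerId, the same pattern extends naturally to multi-touch: each finger's drag is tracked independently.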


Inkscape 1.0 beta is available for testing

Fatema Patrawala
19 Sep 2019
4 min read
Last week, the team behind the Inkscape project released the first beta version of the upcoming and much-awaited Inkscape 1.0. The team writes on the announcement page that, “after releasing two less visible alpha versions this year, in mid-January and mid-June (and one short-lived beta version), Inkscape is now ready for extensive testing and subsequent bug-fixing.”

Most notable changes in Inkscape 1.0

New theme selection: In 'Edit > Preferences > User Interface > Theme', users can set a custom GTK3 theme for Inkscape. If the theme comes with a dark variant, activating the 'Use dark theme' checkbox will result in the dark variant being used. The new theme is applied immediately.

Origin in top left corner: Another significant change is that the origin of the document is now set to the top left corner of the page. The coordinates that a user sees in the interface match the ones that are saved in the SVG data, which makes working in Inkscape more comfortable for people who are used to this standard behavior.

Canvas rotation and mirroring: With Ctrl+Shift+Scroll wheel, the drawing area can be rotated and viewed from different angles. The canvas can be flipped to ensure that the drawing does not lean to one side and looks good either way.

Canvas alignment: When the option "Enable on-canvas alignment" is active in the "Align and Distribute" dialog, a new set of handles appears on canvas. These handles can be used to align the selected objects relative to the area of the current selection.

HiDPI screens: Inkscape now supports HiDPI screens.

Controlling PowerStroke: The width of PowerStroke can be controlled with a pressure-sensitive graphics tablet.

Fillet/chamfer LPE and (non-destructive) Boolean Operation LPE: The new fillet/chamfer LPE adds fillets and chamfers to paths, and the Boolean Operations LPE finally makes non-destructive boolean operations available in Inkscape.

New PNG export options: The export dialog has received several new options, available when you expand the 'Advanced' section.

Centerline tracing: A new, unified dialog for vectorizing raster graphics is now available from Path > Trace Bitmap.

New Live Path Effect selection dialog: Live Path Effects received a major overhaul, with lots of improvements and new features.

Faster path operations and deselection of a large number of paths.

Variable fonts support: If Inkscape is compiled with Pango library version 1.41.1, it will come with support for variable fonts.

Complete extensions overhaul: Extensions can now have clickable links, images, a better layout with separators and indentation, multiline text fields, file chooser fields, and more.

Command line syntax changes: The Inkscape command line is now more powerful and flexible for the user and easier to enhance for the developer.

Native support for macOS with a signed and notarized .dmg file: Inkscape is now a first-rate native macOS application and no longer requires XQuartz to operate.

Other important changes for users

Custom icon sets: Icon sets no longer consist of a single file containing all icons. Instead, each icon is allocated its own file. The directory structure must follow the standard structure for Gnome icons. As a side effect of a bug fix to the icon preview dialog, custom UI icon SVG files need to be updated to have their background color alpha channel set to 0 so that they display correctly.

Third-party extensions: Third-party extensions need to be updated to work with this version of Inkscape.

Import/export via UniConvertor dropped: Extensions that previously used the UniConvertor library for saving/opening various file formats have been removed.

Import formats that have been removed:
Adobe Illustrator 8.0 and below (UC) (*.ai)
Corel DRAW Compressed Exchange files (UC) (*.ccx)
Corel DRAW 7-X4 files (UC) (*.cdr)
Corel DRAW 7-13 template files (UC) (*.cdt)
Computer Graphics Metafile files (UC) (*.cgm)
Corel DRAW Presentation Exchange files (UC) (*.cmx)
HP Graphics Language Plot file [AutoCAD] (UC) (*.plt)
sK1 vector graphics files (UC) (*.sk1)

Export formats that have been removed:
HP Graphics Language Plot file [AutoCAD] (UC) (*.plt)
sK1 vector graphics files (UC) (*.sk1)

Inline LaTeX formula conversion dropped: The EQTeXSVG extension, which could be used to convert an inline LaTeX equation into SVG paths using Python, was dropped due to its external dependencies.

The team has asked users to test the Inkscape 1.0 beta version and report their findings on the Inkscape report page. To know more about this news, check out the official Inkscape announcement page.

Other interesting news in web development this week!

Mozilla announces final four candidates that will replace its IRC network
Announcing Feathers 4, a framework for real-time apps and REST APIs with JavaScript or TypeScript
Google announces two new link attributes, Sponsored and UGC, and updates “nofollow”


Google announces two new link attributes, Sponsored and UGC, and updates “nofollow”

Amrata Joshi
16 Sep 2019
5 min read
Last week, the team at Google announced two new link attributes that provide webmasters with additional ways to identify the nature of particular links to Google Search. The team is also evolving the nofollow attribute for the same purpose.

How are the new link attributes useful?

rel="sponsored": The sponsored attribute is used to identify links on a site that were created as part of sponsorships, advertisements, or other compensation agreements.

rel="ugc": The UGC (User Generated Content) attribute value is used for links within user-generated content, such as forum posts and comments.

rel="nofollow": Spammers often try to improve their websites' search engine rankings by posting comments like "Visit my discount pharmaceuticals site" on other blogs; these are known as comment spam. Google took steps to solve this problem by introducing the nofollow attribute in 2005 for flagging advertising-related or sponsored links. When Google sees the attribute (rel="nofollow") on a hyperlink, it does not give the link any of the credit that is used for ranking websites in the search results. The attribute was introduced so that spammers don’t get any benefit from abusing public areas like blog comments, referrer lists, trackbacks, etc.

The nofollow attribute was originally used for combatting blog comment spam. It has since evolved to also cover advertising links and user-generated links that aren’t reliable, as well as cases where webmasters want to link to a page without implying any type of endorsement. The nofollow link attribute will be treated as a hint for crawling and indexing purposes by March 1, 2020.

Web analysis will be easier with these attributes

All of the above attributes will help in processing links for better analysis of the web, as they are now treated as hints that can be used to identify which links should be considered and which should be excluded within Search. It is important to identify links because they contain valuable information that can be used to improve search, can help in understanding how the words within these links describe the content they point at, and can be used to detect unnatural linking patterns.

The official post reads, “The link attributes of “ugc” and “nofollow” will continue to be a further deterrent. In most cases, the move to a hint model won’t change the nature of how we treat such links. We’ll generally treat them as we did with nofollow before and not consider them for ranking purposes. We will still continue to carefully assess how to use links within Search, just as we always have and as we’ve had to do for situations where no attributions were provided.”

How will this affect publishers and SEO experts?

Links that were arbitrarily nofollowed might now get counted under the new update, which might encourage spammers and hence lead to an increase in link spam. Also, if these nofollowed links get counted, a lot of sites would simply start implementing a nofollow link policy, and Google might count those links, which would impact rankings. For instance, if a website uses a lot of Wikipedia links and Google counts them, its ranking might improve.

SEO experts will now have to look into which link attributes need to be applied to a specific link and rework their strategies and CMS (Content Management Systems) based on the new change.

https://twitter.com/AlanBleiweiss/status/1171475313114533891?s=20

Most users on Hacker News seem skeptical about these new link attributes; according to them, the attributes won’t benefit them. A user commented on Hacker News, “I run large forums and mark my links "nofollow". I see no reason or benefit to me to change to or add "ugc". It's not clear that there's any benefits for me. And it's vague enough that I don't know that there are not downsides. Seems best to do nothing.”

A few others think that the purpose of the nofollow attribute has changed. Another user commented, “This means the meaning of 'nofollow' is changing? That seems a horrible idea. Previously 'nofollow' meant exactly that - "don't follow this link please googlebot", now it will mean "follow this link, but don't grant my site ranking onto the destination." - Thats a VERY different use case, I can't see all the millions of existing 'nofollow' tags being changed by site owners to any of these new tags. Surely a 'nogrant' or somesuch would be a better option, and leave 'nofollow' alone.”

Danny Sullivan, Google’s SearchLiaison, responded to the criticism around the newly updated nofollow attribute:

https://twitter.com/dannysullivan/status/1171488611918696449

To know more about this news, check out the official post.

Other interesting news in web development

GitHub updates to Rails 6.0 with an incremental approach
5 pitfalls of React Hooks you should avoid – Kent C. Dodds
The Tor Project on browser fingerprinting and how it is taking a stand against it
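The three attribute values can be combined in a single rel attribute, space-separated. The helper below is purely illustrative (the `linkHTML` function and the example URLs are made up; only the rel values themselves come from Google's announcement):

```javascript
// Hypothetical helper that renders an anchor tag with optional rel values.
// Multiple values may be combined, e.g. rel="ugc nofollow".
function linkHTML(href, text, relValues = []) {
  const rel = relValues.length ? ` rel="${relValues.join(' ')}"` : '';
  return `<a href="${href}"${rel}>${text}</a>`;
}

// A paid placement: mark it as sponsored
console.log(linkHTML('https://example.com/deal', 'Our sponsor', ['sponsored']));
// A forum comment link: mark it as user-generated, keeping the nofollow hint
console.log(linkHTML('https://example.com/post', 'A comment link', ['ugc', 'nofollow']));
```

Since all three values are now hints rather than directives, combining "ugc nofollow" is a reasonable belt-and-braces approach for user-generated links.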


Announcing Feathers 4, a framework for real-time apps and REST APIs with JavaScript or TypeScript

Bhagyashree R
16 Sep 2019
3 min read
Last month, the creator of the Feathers web framework, David Luecke, announced the release of Feathers 4. This release brings built-in TypeScript definitions, a framework-independent authentication mechanism, improved documentation, security updates in database adapters, and more.

Feathers is a web framework for building real-time applications and REST APIs with JavaScript or TypeScript. It supports various frontend technologies, including React, VueJS, and Angular, and works with any backend.

Read also: Getting started with React Hooks by building a counter with useState and useEffect

It basically serves as an API layer between any backend and frontend (diagram source: Feathers). Unlike traditional MVC and low-level HTTP frameworks that rely on routes, controllers, or HTTP request and response handlers, Feathers uses services and hooks. This makes the application easier to understand and test, and lets developers focus on their application logic regardless of how it is being accessed. It also enables developers to add new communication protocols without having to update their application code.

Key updates in Feathers 4

Built-in TypeScript definitions

The core libraries and database adapters in Feathers 4 now have built-in TypeScript definitions. With this update, you will be able to create a TypeScript Feathers application with the command-line interface (CLI).

Read also: TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

A new framework-independent authentication mechanism

Feathers 4 comes with a new framework-independent authentication mechanism that is both flexible and easier to use. It provides a collection of tools for managing username/password, JSON web token (JWT), OAuth, and custom authentication mechanisms. The authentication mechanism includes the following core modules:

A Feathers service named ‘AuthenticationService’ to register authentication mechanisms and create authentication tokens.
The ‘JWTStrategy’ authentication strategy for authenticating JSON web token service method calls and HTTP requests.
The ‘authenticate’ hook to limit service calls to an authentication strategy.

Security updates in database adapters

The database adapters in Feathers 4 have been updated to include crucial security and usability features, some of which are:

Querying by id: The database adapters now support additional query parameters for ‘get’, ‘remove’, ‘update’, and ‘patch’. In this release, a ‘NotFound’ error will be thrown if the record does not match the query, even if the id is valid.

Hook-less service methods: Starting from this release, you can call a service method without triggering any of its hooks by simply adding a ‘_’ in front of the method name. This is useful when you need the raw data from the service.

Multi updates: Multi updates let you create, update, or remove multiple records at once. Though convenient, they can also open your application to queries that you never intended. This is why, in Feathers 4, the team has made multi updates opt-in: they are disabled by default and can be enabled by explicitly setting the ‘multi’ option.

Along with these updates, the team has also worked on the website and documentation. “The Feathers guide is more concise while still teaching all the important things about Feathers. You get to create your first REST API and real-time web-application in less than 15 minutes and a complete chat application with a REST and websocket API, a web frontend, unit tests, user registration and GitHub login in under two hours,” Luecke writes.

Read Luecke’s official announcement to know what else has landed in Feathers 4.

Other news in web

5 pitfalls of React Hooks you should avoid – Kent C. Dodds
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
How to integrate a Medium editor in Angular 8
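The "services and hooks" model that Feathers is built around can be sketched framework-free as below. Note this is an illustrative sketch only, not the Feathers API: a real Feathers 4 app registers services with `app.use()` and attaches hooks with `service.hooks()` from `@feathersjs/feathers`, and the service and hook names here are made up.

```javascript
// A "service" is a plain object/class exposing CRUD-style methods; the
// framework can then expose the same service over REST and websockets.
class MessageService {
  constructor() { this.messages = []; }
  async find() { return this.messages; }
  async create(data) {
    const message = { id: this.messages.length, ...data };
    this.messages.push(message);
    return message;
  }
}

// A "hook" wraps a service method with reusable pre/post logic
// (authentication, validation, timestamps, ...) independent of transport.
function withCreatedAt(createFn) {
  return async (data) => createFn({ ...data, createdAt: Date.now() });
}
```

Because the service knows nothing about HTTP or websockets, the same logic works unchanged however the application is accessed, which is the point the article makes about Feathers.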


Mozilla announces final four candidates that will replace its IRC network

Bhagyashree R
13 Sep 2019
4 min read
In April this year, Mozilla announced that it would be shutting down its IRC network, stating that it creates “unnecessary barriers to participation in the Mozilla project.” Last week, Mike Hoye, the Engineering Community Manager at Mozilla, shared the four final candidates for Mozilla’s community-facing synchronous messaging system: Mattermost, Matrix/Riot.im, Rocket.Chat, and Slack.

Mattermost is a flexible, self-hostable, open-source messaging platform that enables secure team collaboration. Riot.im is an open-source instant messaging client that is based on the federated Matrix protocol. Rocket.Chat is also a free and open-source team chat collaboration platform. The only proprietary option on the shortlist is Slack, a widely used team collaboration hub.

Read also: Slack stocks surges 49% on the first trading day on the NYSE after direct public offering

Explaining how Mozilla shortlisted these messaging systems, Hoye wrote, “These candidates were assessed on a variety of axes, most importantly Community Participation Guideline enforcement and accessibility, but also including team requirements from engineering, organizational-values alignment, usability, utility and cost.” He said that though there were a whole lot of options to choose from, these were the ones that best suited Mozilla’s current institutional needs and organizational goals.

Mozilla will soon be launching official test instances of each of the candidates for open testing. After the one-month trial period, the team will take feedback in dedicated channels on each of those servers. You can also share your feedback in #synchronicity on IRC.mozilla.org and in a forum on Mozilla’s community Discourse instance that the team will be creating soon.

Mozilla's timeline for transitioning to the finalized messaging system

September 12th to October 9th: Mozilla will run the proof-of-concept trials and accept community feedback.
October 9th to 30th: It will discuss the feedback, draft a proposed post-IRC plan, and get approval from the stakeholders.
December 1st: The new messaging system will go live.
March 1st, 2020: There will be a transition period for support tooling and developers from the launch until March 1st, 2020. After this, Mozilla’s IRC network will be shut down.

Hoye shared that the internal Slack instance will keep running regardless of the result, to ensure smooth communication. He wrote, “Internal Slack is not going away; that has never been on the table. Whatever the outcome of this process, if you work at Mozilla your manager will still need to be able to find you on Slack, and that is where internal discussions and critical incident management will take place.”

In a discussion on Hacker News, many rooted for Matrix. A user commented, “I am hoping they go with Matrix, least then I will be able to have the choice of having a client appropriate to my needs.” Another user added, “Man, I sure hope they go the route of Matrix! Between the French government and Mozilla, both potentially using Matrix would send a great and strong signal to the world, that matrix can work for everyone! Fingers crossed!”

Many also appreciated that Mozilla chose three open-source messaging systems. A user commented, “It's great to see 3/4 of the options are open source! Whatever happens, I really hope the community gets behind the open-source options and don't let more things get eaten up by commercial silos cough slack cough.”

Some were not happy that Zulip, an open-source group chat application, was not selected. “I'm sad to see Zulip excluded from the list. It solves the #1 issue with large group chats - proper threading. Nothing worse than waking up to a 1000 message backlog you have to sort through to filter out the information relevant to you. Except for Slack, all of their other choices have very poor threading,” a user commented.

Check out Hoye’s official announcement to know more in detail.

Other news in web

Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Wasmer’s first Postgres extension to run WebAssembly is here!
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

Safari Technology Preview 91 gets beta support for the WebGPU JavaScript API and WSL

Bhagyashree R
13 Sep 2019
3 min read
Yesterday, Apple announced that Safari Technology Preview 91 now supports the beta version of the new WebGPU graphics API and its shading language, Web Shading Language (WSL). You can enable the WebGPU beta support by selecting Experimental Features > WebGPU in the Developer menu.

The WebGPU JavaScript API

WebGPU is a new graphics API for the web that aims to provide “modern 3D graphics and computation capabilities.” It is a successor to WebGL, a JavaScript API that enables 3D and 2D graphics rendering within any compatible browser without the need for a plug-in. It is being developed in the W3C GPU for the Web Community Group with engineers from Apple, Mozilla, Microsoft, Google, and others.

Read also: WebGL 2.0: What you need to know

Comparing WebGPU and WebGL

WebGPU differs from WebGL in that it is not a direct port of any existing native API, but a similarity between the two is that they are both accessed through JavaScript. However, the team does have plans to make it accessible through WebAssembly as well in the future.

In WebGL, rendering a single object requires writing a series of state-changing calls. WebGPU, on the other hand, combines all the state-changing calls into a single object called the pipeline state object. It validates the state after the pipeline is created, to prevent expensive state analysis inside the draw call. Also, wrapping an entire pipeline state in a single function call reduces the number of exchanges between JavaScript and WebKit’s C++ browser engine. Similarly, resources in WebGL are bound one by one, while WebGPU batches them up into bind groups. The team explains, “In both of these examples, multiple objects are gathered up together and baked into a hardware-dependent format, which is when the browser performs validation. Being able to separate object validation from object use means the application author has more control over when expensive operations occur in the lifecycle of their application.”

The main focus of WebGPU is to provide improved performance and ease of use compared to WebGL. The team compared the performance of the two using the 2D graphics benchmark MotionMark. The performance test they wrote measured how many triangles, each with different properties, could be rendered while maintaining 60 frames per second. Each triangle was rendered with a different draw call and bind group. WebGPU showed substantially better performance than WebGL (benchmark chart; source: Apple).

WHLSL is now renamed to WSL

In November last year, Apple proposed a new shading language for WebGPU named Web High-Level Shading Language (WHLSL), which was source-compatible with HLSL. After receiving community feedback, they updated the language to be compatible with OpenGL Shading Language (GLSL), a language commonly used among web developers. Apple renamed this version of the language to Web Shading Language (WSL) and describes it as “simple, low-level, and fast to compile.”

Read also: Introducing Web High Level Shading Language (WHLSL): A graphics shading language for WebGPU

“There are many Web developers using GLSL today in WebGL, so a potential browser accepting a different high-level language, like HLSL, wouldn’t suit their needs well. In addition, a high-level language such as HLSL can’t be executed faithfully on every platform and graphics API that WebGPU is designed to execute on,” the team wrote.

Check out the official announcement by Apple to know more in detail.

Other news in web

Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
New memory usage optimizations implemented in V8 Lite can also benefit V8
Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more
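The pipeline-state-object idea can be sketched schematically as below. To be clear, these are not real WebGL or WebGPU calls (the WebGPU API was still in flux at the time); the sketch only contrasts the two models: per-draw state mutation versus one immutable, pre-validated pipeline object.

```javascript
// WebGL-style (schematic): many state-changing calls precede each draw, and
// the driver must re-validate that state inside every draw call, e.g.:
//   gl.useProgram(prog); gl.enable(gl.DEPTH_TEST); gl.bindTexture(...); gl.drawArrays(...);

// WebGPU-style (schematic): bake all of that state into one pipeline object.
// Validation happens once, at creation time, not inside the draw loop.
function createRenderPipeline(descriptor) {
  if (!descriptor.vertexShader || !descriptor.fragmentShader) {
    throw new Error('pipeline descriptor is missing a shader stage');
  }
  return Object.freeze({ ...descriptor }); // immutable: safe to reuse per draw
}

const pipeline = createRenderPipeline({
  vertexShader: 'vs_main',   // placeholder names, not real shader code
  fragmentShader: 'fs_main',
  depthTest: true,
});
// A draw call would now just reference `pipeline`, with no state analysis.
```

The same batching idea applies to bind groups: resources are validated and baked together once, then referenced cheaply at draw time.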


New memory usage optimizations implemented in V8 Lite can also benefit V8

Sugandha Lahoti
13 Sep 2019
4 min read
V8 Lite was released in late 2018 in V8 version 7.3 to dramatically reduce V8's memory usage. V8 is Google's open-source JavaScript and WebAssembly engine, written in C++. V8 Lite provides a 22% reduction in typical web page heap size compared to V8 version 7.1 by disabling code optimization, not allocating feedback vectors, and aging seldom-executed bytecode.

Initially, this project was envisioned as a separate Lite mode of V8. However, the team realized that many of the memory optimizations could be used in regular V8, thereby benefiting all users of V8. They found that most of the memory savings of Lite mode could be achieved, with none of the performance impact, by making V8 lazier. They implemented lazy feedback allocation, lazy source positions, and bytecode flushing to bring the V8 Lite memory optimizations to regular V8.

Read also: LLVM WebAssembly backend will soon become Emscripten default backend, V8 announces

Lazy allocation of feedback vectors

The team now lazily allocates feedback vectors after a function executes a certain amount of bytecode (currently 1KB). Since most functions aren't executed very often, this avoids feedback vector allocation in most cases, but quickly allocates them where needed to avoid performance regressions and still allow code to be optimized.

One hitch was that lazy allocation of feedback vectors did not allow feedback vectors to form a tree. To address this, they created a new ClosureFeedbackCellArray to maintain this tree, then swap out a function's ClosureFeedbackCellArray with a full FeedbackVector when it becomes hot. The team says that they "have enabled lazy feedback allocation in all builds of V8, including Lite mode where the slight regression in memory compared to their original no-feedback allocation approach is more than compensated by the improvement in real-world performance."

Compiling bytecode without collecting source positions

Source position tables are generated when compiling bytecode from JavaScript. However, this information is only needed when symbolizing exceptions or performing developer tasks such as debugging. To avoid this waste, bytecode is now compiled without collecting source positions; the source positions are only collected when a stack trace is actually generated. The team has also fixed bytecode mismatches and added checks and a stress mode to ensure that eager and lazy compilation of a function always produce consistent outputs.

Flushing compiled bytecode from functions not executed recently

Bytecode compiled from JavaScript source takes up a significant chunk of V8 heap space. Therefore, compiled bytecode is now flushed from functions during garbage collection if they haven't been executed recently, along with the feedback vectors associated with the flushed functions. To keep track of the age of a function's bytecode, the age is incremented after every major garbage collection and reset to zero when the function is executed.

Additional memory optimizations

The size of FunctionTemplateInfo objects has been reduced: the object is split so that rare fields are stored in a side-table, which is only allocated on demand if required. TurboFan optimized code is now deoptimized such that deopt points in optimized code load the deopt id directly before calling into the runtime.

Read also: V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more.

Result comparison for V8 Lite and V8

Source: V8 blog

People on Hacker News appreciated the work done by the team behind V8. A comment reads, "Great engineering stuff. I am consistently amazed by the work of V8 team. I hope V8 v7.8 makes it to Node v12 before its LTS release in coming October." Another says, "At the beginning of the article, they are talking about building a 'v8 light' for embedded application purposes, which was pretty exciting to me, then they diverged and focused on memory optimization that's useful for all v8. This is great work, no doubt, but as the most popular and well-tested JavaScript engine, I'd love to see a focus on ease of building and embedding."

https://twitter.com/vpodk/status/1172320685634420737

More details are available on the V8 blog.

Other interesting news in Tech

Google releases Flutter 1.9 at GDD (Google Developer Days) conference

Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack

Apple's September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, new iPad, and more.
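The lazy feedback allocation idea described above — don't allocate per-function profiling metadata up front, allocate it only once a function has proven itself hot — can be sketched conceptually in plain JavaScript. This is an illustration of the pattern only, not V8's actual code (V8 implements it in C++ inside the engine, keyed on ~1KB of executed bytecode rather than a call count; the names below are invented):

```javascript
// Conceptual sketch of lazy metadata allocation, NOT actual V8 code.
// Metadata (standing in for a feedback vector) is allocated only once a
// function has executed a threshold number of times ("becomes hot").
function withLazyFeedback(fn, hotThreshold = 3) {
  let calls = 0;
  let feedback = null; // stands in for a feedback vector; not allocated yet

  const wrapped = (...args) => {
    calls += 1;
    if (feedback === null && calls >= hotThreshold) {
      feedback = { allocatedAtCall: calls }; // allocated lazily, on demand
    }
    return fn(...args);
  };
  wrapped.isHot = () => feedback !== null;
  return wrapped;
}

const square = withLazyFeedback((x) => x * x);
square(2);                   // cold: no metadata allocated yet
square(3);                   // still cold
square(4);                   // third call crosses the threshold
console.log(square.isHot()); // true
```

Since most functions never cross the threshold, most of them never pay for the metadata — which is exactly the memory saving the article describes, at the cost of a small bookkeeping check on each call.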
Mozilla brings back Firefox’s Test Pilot Program with the introduction of Firefox Private Network Beta

Bhagyashree R
11 Sep 2019
3 min read
Yesterday, Mozilla relaunched its Test Pilot Program for the second time, alongside the release of Firefox Private Network Beta. The Test Pilot Program provides Firefox users with a way to try out its newest features and share their feedback with Mozilla.

Mozilla first introduced the Test Pilot Program as an add-on for Firefox 3.5 in 2009 and relaunched it in 2016. However, in January this year, it decided to close the program in the process of evolving its "approach to experimentation even further." While the name is the same, the difference is that the features you will get to try now will be much more stable. Explaining the difference between this iteration of the Test Pilot Program and the previous ones, the team wrote in the announcement, "The difference with the newly relaunched Test Pilot program is that these products and services may be outside the Firefox browser, and will be far more polished, and just one step shy of general public release."

Firefox Private Network Beta

The first project available for beta testing under this iteration of the Test Pilot Program is Firefox Private Network. It is currently free and available to Firefox for desktop users in the United States only.

Firefox Private Network is an opt-in, privacy-focused feature that gives users access to a private network when they are connected to free and open Wi-Fi. It will encrypt the web addresses you visit and the data you share. Your data will be sent through a proxy service run by Mozilla's partner, Cloudflare. It will also mask your IP address to protect you from third-party trackers around the web.

Source: Mozilla

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Users have already started testing the feature. A user on Hacker News shared, "I just got done testing this, it assigns a U.S. IPv6 address and uses the Cloudflare Warp network. My tests showed a very stable download speed of 150.3 Mbps and an upload speed of 13.8 Mbps with a latency of 31ms."

Another user commented, "I quite like the fact that once this goes mainstream, it'd help limit surveillance and bypass censorship on the web in one fell swoop without having to install or trust 3p other than the implicit trust in Mozilla and its partners (in this case, Cloudflare). Knowing Cloudflare, I'm sure this proxy is as much abt speed and latency as privacy and security."

Some users were skeptical about the use of Cloudflare in this feature: "As much as I like the idea of baking better privacy tools into the browser, it's hard for me to get enthusiastic about the idea of making Cloudflare even more of an official man-in-the-middle for all network traffic than they already are," a user added.

Others recommended trying a Tor proxy instead: "I'd like to point out though, that, one could run a Tor proxy (it also has a VPN mode) on their phones [0] today to work around censorship and surveillance; anonymity is a bit tricky over tor-as-a-proxy. The speeds over Tor are decent and nothing you can't tolerate whilst casual web browsing. It is probably going to be free forever unlike Firefox's private network."

Read also: The Tor Project on browser fingerprinting and how it is taking a stand against it

Read Mozilla's official announcement to know more in detail.

Other news in web development

Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more

GitHub updates to Rails 6.0 with an incremental approach

Wasmer's first Postgres extension to run WebAssembly is here!
GitHub updates to Rails 6.0 with an incremental approach

Bhagyashree R
11 Sep 2019
3 min read
After running pre-release versions of Rails 6.0 in production for months, GitHub deployed the official release to production last month. Yesterday, GitHub shared how its upgrade team was able to make the transition from Rails 5.2 to 6.0 smoothly, just 1.5 weeks after the release.

Rails 6.0 was released last month with several notable features including Action Mailbox, multiple database support, parallel testing, and more. GitHub is not only using it but has also made significant contributions to this release. It submitted over 100 pull requests for documentation improvements, bug fixes, and performance improvements. Its contributions also included updates to the new features in the framework: parallel testing and multiple database support. "For many GitHub contributors, this was the first time sending changes to the Rails framework, demonstrating that upgrading Rails not only helps GitHub internally, but also improves our developer community as well," GitHub wrote in the announcement.

GitHub's approach to this update was incremental. Instead of waiting for the final release, it upgraded every week by pulling in the latest changes from Rails master and running its tests against that new version. This enabled the team to identify regressions quickly and early. The weekly updating process also made regressions easy to find, because each run dealt with only a week's worth of commits.

GitHub now plans to use this co-development approach for future releases as well: "Once our build for Rails 6.0 was green, we'd merge the pull request to master, and all new code that went into GitHub would need to pass in Rails 5.2 and the newest master build of Rails. Upgrading every week worked so well that we'll continue using this process for upgrading from 6.0 to 6.1."

Following this approach has not only improved the GitHub application in terms of security, performance, and new features, but has also improved its engineers' experience of working with the GitHub codebase.

This sparked a discussion on Hacker News, where developers also recommended taking an incremental approach to upgrading one's application. A user commented, "Incremental updates may require more time to complete, as an API may be refactored multiple times over many versions. However, the confidence in moving incrementally is well worth it IMHO. If you don't have an extensive enough test suite or poor/missing QA process (or both!), doing a big bang upgrade is going to both be extremely painful and very error-prone. It's worthwhile to keep up to date. It's probably not worthwhile to upgrade ASAP after a release, but you don't want to wait too long."

Another user added, "...they could have waited but if one has the developer resources, it's better to be proactive instead of waiting for an official release and all of a sudden try to upgrade and run into a lot of unforeseen issues."

Check out the official announcement to know more in detail.

Other news in web development

GitHub now supports two-factor authentication with security keys using the WebAuthn API

The first release candidate of Rails 6.0.0 is now out!

GitLab considers moving to a single Rails codebase by combining the two existing repositories
The Tor Project on browser fingerprinting and how it is taking a stand against it

Bhagyashree R
06 Sep 2019
4 min read
In a blog post shared on Wednesday, Pierre Laperdrix, a postdoctoral researcher in the Secure Web Applications Group at CISPA, talked about browser fingerprinting, its risks, and the efforts taken by the Tor Project to prevent it. He also talked about his Fingerprint Central website, which has officially been a part of the Tor Project since 2017.

What is browser fingerprinting?

Browser fingerprinting is the systematic collection of information about a remote computing device for the purposes of identification. There are several techniques through which a third party can build a "rich fingerprint." These include the availability of JavaScript or other client-side scripting languages, the user-agent and accept headers, the HTML5 Canvas element, and more.

Browser fingerprints may include information like browser and operating system type and version, active plugins, timezone, language, screen resolution, and various other active settings. Some users may think that these are too generic to identify a particular person. However, a study by Panopticlick, a browser fingerprinting test website, found that only 1 in 286,777 other browsers will share a given browser's fingerprint.

Here's an example of a fingerprint Pierre Laperdrix shared in his post:

Source: The Tor Project

As with any technology, browser fingerprinting can be used or misused. Fingerprints can enable a remote application to prevent potential fraud or online identity theft. On the other hand, they can also be used to track users across websites and collect information about their online behavior, without their consent. Advertisers and marketers can use this data for targeted advertising.

Read also: All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Steps taken by the Tor Project to prevent browser fingerprinting

Laperdrix said that Tor was the very first browser to understand and address the privacy threats browser fingerprinting poses.
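The Panopticlick figure quoted above can be restated in information-theoretic terms: a fingerprint shared by only 1 in 286,777 browsers carries about 18 bits of identifying information, while roughly 33 bits are enough to single out one person among the world's population. A back-of-the-envelope sketch of that arithmetic:

```javascript
// Surprisal of a fingerprint shared by 1 in N browsers: log2(N) bits.
const oneIn = 286777; // Panopticlick's figure quoted above
const bits = Math.log2(oneIn);
console.log(bits.toFixed(2)); // ~18.13 bits

// For comparison: bits needed to uniquely identify one of ~7.7 billion people.
console.log(Math.log2(7.7e9).toFixed(1)); // ~32.8 bits
```

So a single 18-bit fingerprint does not identify you on its own, but combined with even a coarse signal such as an IP-derived location it narrows the field dramatically — which is why Tor's strategy, described next, is to make every user's fingerprint identical rather than merely less detailed.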
The Tor Browser, which goes by the tagline "anonymity online," is designed to reduce online tracking and identification of users. The browser takes a very simple approach to prevent the identification of users. "In the end, the approach chosen by Tor developers is simple: all Tor users should have the exact same fingerprint. No matter what device or operating system you are using, your browser fingerprint should be the same as any device running Tor Browser," Laperdrix wrote.

Many other changes have been made to the Tor Browser over the years to prevent the unique identification of users. Tor warns users when they maximize their browser window, as window size is one more attribute that can be used to identify them. It has introduced default fallback fonts to prevent font and canvas fingerprinting. It has all the JS clock sources and event timestamps set to a specific resolution to prevent JS from measuring the time intervals of things like typing to produce a fingerprint.

Talking about his contribution towards preventing browser fingerprinting, Laperdrix wrote, "As part of the effort to reduce fingerprinting, I also developed a fingerprinting website called FP Central to help Tor developers find fingerprint regressions between different Tor builds." As part of Google Summer of Code 2016, Laperdrix proposed to develop a website called Fingerprint Central, which is now officially included in the Tor Project. Similar to AmIUnique.org or Panopticlick, FP Central was developed to study the diversity of browser fingerprints. It runs a fingerprinting test suite and collects data from Tor browsers to help developers design and test new fingerprinting protection. They can also use it to ensure that fingerprinting-related bugs are correctly fixed with specific regression tests. Explaining the long-term goal of the website, he said, "The expected long-term impact of this project is to reduce the differences between Tor users and reinforce their privacy and anonymity online."

There are a whole lot of modifications made under the hood to prevent browser fingerprinting, which you can check out using the "tbb-fingerprinting" tag in the bug tracker. These modifications will also make their way into future releases of Firefox under the Tor Uplift program.

Many organizations have taken a stand against browser fingerprinting, including the browser makers Mozilla and Brave. Earlier this week, Firefox 69 shipped with browser fingerprinting blocked by default. Brave also comes with a Fingerprinting Protection Mode enabled by default. In 2018, Apple updated Safari to only share a simplified system profile, making it difficult to uniquely identify or track users.

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Check out Laperdrix's post on the Tor blog to know more in detail about browser fingerprinting.

Other news in Web

JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

Google Chrome 76 now supports native lazy-loading

#Reactgate forces React leaders to confront the community's toxic culture head on
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Bhagyashree R
05 Sep 2019
6 min read
On Tuesday, Mozilla announced the release of Firefox 69. This release comes with default blocking of third-party tracking cookies and cryptomining for all users. The team has also worked on a patch to minimize power consumption by Firefox Nightly for macOS users, which will possibly land in Firefox 70. In another announcement, Mozilla shared its plans for implementing Chrome's Manifest V3 changes.

Key updates in Firefox 69

Enhanced Tracking Protection on by default for all

Browser cookies are used to store your login state and website preferences, provide personalized content, and more. However, they also facilitate third-party tracking. In addition to being a threat to user privacy, they can also end up slowing down your browser, consuming your data, and creating user profiles. The tracked information and profiles can also be sold and used for purposes that you did not consent to.

With the aim of preventing this, the Firefox team came up with the Enhanced Tracking Protection feature. In June this year, they made it available to new users by default. With Firefox 69, it is now on by default and set to the 'Standard' setting for all users. It blocks all known third-party tracking cookies that are listed by Disconnect.

Protection against cryptomining and browser fingerprinting

There are many other ways through which users are tracked or their resources are used without their consent. Unauthorized cryptominers run scripts to generate cryptocurrency, which requires a lot of computing power; this can end up slowing down your computer and draining your battery. There are also fingerprinting scripts that store a snapshot of your computer's configuration when you visit a website, which can be used to track your activities across the web.

To address these, the team introduced an option to block cryptominers and browser fingerprinting in Firefox Nightly 68 and Beta 67. Firefox 69 includes the option to block cryptominers in the "Standard Mode," which means it is on by default. To block fingerprinting, users need to turn on the "Strict Mode." We can expect the team to enable this by default in a future release.

Read also: Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta

A stricter Block Autoplay feature

Starting with Firefox 69, Block Autoplay will block all media with sound from playing automatically by default. This means that users will be able to block any video from autoplaying, not just those that autoplay with sound.

Updates for Windows 10 users

Firefox 69 brings support for the Web Authentication HMAC Secret extension via Windows Hello for Windows 10 users. The HMAC Secret extension will allow users to sign in to their device even when it is offline or in airplane mode. This release also comes with Windows hints to appropriately set content process priority levels, and a shortcut on the Windows 10 taskbar to help users easily find and launch Firefox.

Improved macOS battery life

Firefox 69 comes with improved battery life and a better download UI. To minimize battery consumption, Firefox will switch back to the low-power GPU on macOS systems that have dual graphics cards. Other updates include JIT support for ARM64, and Finder now shows download progress for files being downloaded.

It's not only the main releases: the team is also putting effort into making Firefox Nightly more power-efficient. On Monday, Henrik Skupin, a senior test engineer at Mozilla, shared that there is about a 3X decrease in power usage by Firefox Nightly on macOS. We can expect this change to possibly land in version 70, which is scheduled for October 22.

https://twitter.com/whimboo/status/1168437524357898240

Updates for developers

Debugger updates: With this release, debugging an application that has event handlers is easier. The debugger now includes the ability to automatically break when the code hits an event handler. Also, developers can now save the scripts shown in the debugger's source list pane via the "Download file" context menu option.

The Resize Observer API: Firefox 69 supports the Resize Observer API by default. This API provides a way to monitor changes to an element's size, notifying the observer each time the size changes.

Network panel updates: The network panel will now show the resources that were blocked because of CSP or Mixed Content. This will "allow developers to best understand the impact of content blocking and ad blocking extensions given our ongoing expansion of Enhanced Tracking Protection to all users with this release," the team writes.

Re-designed about:debugging: In Firefox 69, the team has migrated remote debugging from the old WebIDE into a re-designed about:debugging.

Check out the official release notes to know what else has landed in Firefox 69.

Mozilla on Google's Manifest V3

Chrome is proposing various changes to its extension platform under the name Manifest V3. In a blog post shared on Tuesday, Mozilla talked about its plans for implementing these changes and how they will affect extension developers.

One of the significant updates proposed in Manifest V3 is the deprecation of the blocking webRequest API, which allows extensions to intercept all inbound and outbound traffic from the browser and then block, redirect, or modify the intercepted traffic. In its place, Chrome is planning to introduce the declarativeNetRequest API, which limits the blocking version of the webRequest API. According to Manifest V3, the declarativeNetRequest API will be treated as the primary content-blocking API in extensions.

Read also: Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument

Explaining the impact of this proposed change if implemented, Mozilla wrote, "This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves."

Mozilla further shared that it does not have any immediate plans to remove the blocking webRequest API: "We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them," Mozilla wrote in the announcement. However, Mozilla is willing to consider other changes that are proposed in Manifest V3. It is planning to implement the proposal that requires content scripts to have the same permissions as the pages where they get injected.

Read the official announcement to know more in detail about Mozilla's plans regarding Manifest V3.

Other news in web

JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

#Reactgate forces React leaders to confront community's toxic culture head on

Google Chrome 76 now supports native lazy-loading
Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more

Fatema Patrawala
04 Sep 2019
2 min read
Laravel 6.0 has been released with improvements over Laravel 5.8. The release includes the introduction of semantic versioning, compatibility with Laravel Vapor, improved authorization responses, job middleware, lazy collections, subquery improvements, the extraction of frontend scaffolding to the laravel/ui Composer package, and a variety of other bug fixes and usability improvements.

Key features in Laravel 6.0

Semantic versioning

The Laravel framework package now follows the semantic versioning standard. This makes the framework consistent with the other first-party Laravel packages, which already followed this versioning standard.

Laravel Vapor compatibility

Laravel 6.0 provides compatibility with Laravel Vapor, an auto-scaling serverless deployment platform for Laravel. Vapor abstracts the complexity of managing Laravel applications on AWS Lambda, as well as interfacing those applications with SQS queues, databases, Redis clusters, networks, CloudFront CDN, and more.

Improved exceptions via Ignition

Laravel 6.0 ships with Ignition, a new open source exception detail page. Ignition offers many benefits over previous releases, such as improved Blade error file and line number handling, runnable solutions for common problems, code editing, exception sharing, and an improved UX.

Improved authorization responses

In previous releases of Laravel, it was difficult to retrieve and expose custom authorization messages to end users. This made it hard to explain to end users exactly why a particular request was denied. In Laravel 6.0, this is now easier using authorization response messages and the new Gate::inspect method.

Job middleware

Job middleware allows developers to wrap custom logic around the execution of queued jobs, reducing boilerplate in the jobs themselves.

Lazy collections

Many developers already enjoy Laravel's powerful Collection methods. To supplement the already powerful Collection class, Laravel 6.0 introduces a LazyCollection, which leverages PHP's generators to allow users to work with very large datasets while keeping memory usage low.

Eloquent subquery enhancements

Laravel 6.0 introduces several new enhancements and improvements to database subquery support.

To know more about this release, check out the official Laravel blog.

What's new in web development this week?

Wasmer's first Postgres extension to run WebAssembly is here!

JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

Google Chrome 76 now supports native lazy-loading
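LazyCollection itself is PHP, but the generator-backed idea it relies on — pull values through a pipeline on demand instead of materializing the whole dataset — translates directly to other languages. A minimal JavaScript sketch of the concept (illustrative only, not Laravel's API):

```javascript
// Generator-backed lazy iteration, the same idea Laravel's LazyCollection
// uses via PHP generators. Nothing is computed until a consumer asks for it.
function* range(start, end) {
  for (let i = start; i <= end; i++) yield i; // values produced on demand
}

function* map(iter, fn) {
  for (const v of iter) yield fn(v);
}

function take(iter, n) {
  const out = [];
  for (const v of iter) {
    if (out.length === n) break; // stop pulling once we have enough
    out.push(v);
  }
  return out;
}

// Only the first three elements of the "huge" range are ever computed,
// so memory stays flat regardless of the range size.
const firstSquares = take(map(range(1, 1_000_000_000), (x) => x * x), 3);
console.log(firstSquares); // [1, 4, 9]
```

An eager equivalent would first build a billion-element array and then square every entry; the lazy pipeline does three multiplications and stops, which is the memory win the Laravel release notes describe.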
Mozilla CEO Chris Beard to step down by the end of 2019 after five years in the role

Bhagyashree R
30 Aug 2019
3 min read
Yesterday, Chris Beard, the CEO of Mozilla, announced that he will be stepping down from his role by the end of this year. After an astounding tenure of more than fifteen years at Mozilla, Beard's immediate plans are to take a break and spend more time with his family.

https://twitter.com/cbeard/status/1167091991487729664

Chris Beard's journey at Mozilla started back in 2004, just before Firefox 1.0 was released. Since then he has been deeply involved in almost every part of the business, including product, marketing, innovation, communications, community, and user engagement. In 2013, Beard worked as an Executive-in-Residence at the venture capital firm Greylock Partners, gaining a deeper perspective on innovation and entrepreneurship. During this time he remained an advisor to Mozilla's Chair, Mitchell Baker.

Chris Beard's appointment as CEO came during a very "tumultuous time" for Mozilla. In 2013, when Gary Kovacs stepped down as Mozilla's CEO, the company searched extensively for a new CEO. In March 2014, the company appointed its CTO Brendan Eich, the creator of JavaScript, as CEO. Just a few weeks into the role, Eich had to resign from his position after it was revealed that he had donated $1,000 to California Proposition 8, which called for the banning of same-sex marriage in California. Then in April 2014, Chris Beard was appointed as the interim CEO at Mozilla, and he was confirmed in the position on July 28.

Throughout his tenure as a "Mozillian," Chris Beard has made countless contributions to the company. Listing his achievements, Mozilla's Chair, Mitchell Baker, wrote in a post of thanks, "This includes reinvigorating our flagship web browser Firefox to be once again a best-in-class product. It includes recharging our focus on meeting the online security and privacy needs facing people today. And it includes expanding our product offerings beyond the browser to include a suite of privacy and security-focused products and services from Facebook Container and Enhanced Tracking Protection to Firefox Monitor."

Read also: Firefox now comes with a Facebook Container extension to prevent Facebook from tracking user's web activity

Mozilla is now seeking a successor for Beard to lead the company. Mitchell Baker has agreed to step into an interim CEO role if the search continues beyond this year. Meanwhile, Chris Beard will continue to be an advisor to the board of directors and Baker: "And I will stay engaged for the long-term as an advisor to both Mitchell and the Board, as I've done before," he wrote.

Many of Beard's co-workers thanked him for his contributions to Mozilla:

https://twitter.com/kaykas/status/1167094792230076424
https://twitter.com/digitarald/status/1167107776734085120
https://twitter.com/hoosteeno/status/1167099338226429952

You can read Beard's announcement on Mozilla's blog.

What's new in web development this week

Mozilla proposes WebAssembly Interface Types to enable language interoperability

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70

Mozilla's MDN Web Docs gets new React-powered frontend, which is now in Beta
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

Bhagyashree R
28 Aug 2019
3 min read
Last month, the ECMAScript proposal for the optional chaining operator reached stage 3 of the TC39 process. This essentially means that the feature is almost finalized and is awaiting feedback from users. The optional chaining operator aims to make accessing properties through connected objects easier when there are chances of a reference or function being undefined or null.

https://twitter.com/drosenwasser/status/1154456633642119168

Why the optional chaining operator is proposed in JavaScript

Developers often need to access properties that are deeply nested in a tree-like structure. To do this, they sometimes end up writing long chains of property accesses, which can be error-prone. If any of the intermediate references in these chains evaluates to null or undefined, JavaScript will throw an error such as "TypeError: Cannot read property 'name' of undefined."

The optional chaining operator aims to provide a more elegant way of recovering from such instances. It allows you to check for the existence of deeply nested properties in objects. How it works is that if the operand before the operator evaluates to undefined or null, the whole expression returns undefined. Otherwise, the property access, method, or function call is evaluated normally.

MDN compares this operator with the dot (.) chaining operator: "The ?. operator functions similarly to the . chaining operator, except that instead of causing an error if a reference is null or undefined, the expression short-circuits with a return value of undefined. When used with function calls, it returns undefined if the given function does not exist."

The concept of optional chaining is not new. Several other languages have support for a similar feature, including the null-conditional operator in C# 6 and later, the optional chaining operator in Swift, and the existential operator in CoffeeScript.

The optional chaining operator is written as ?. and its syntax looks like this:

obj?.prop       // optional static property access
obj?.[expr]     // optional dynamic property access
func?.(...args) // optional function or method call

Some properties of optional chaining

Short-circuiting: The rest of the expression is not evaluated if an optional chaining operator encounters undefined or null on its left-hand side.

Stacking: You can stack optional chaining operators, applying more than one of them in a single sequence of property accesses.

Optional deletion: You can also combine the delete operator with an optional chain.

Though there is still time before the optional chaining operator lands in JavaScript, you can give it a try with a Babel plugin. To stay updated on its browser compatibility, check out the MDN web docs.

Many developers are appreciating this feature. A developer on Reddit wrote, "Considering how prevalent 'Cannot read property foo of undefined' errors are in JS development, this is much appreciated. Yes, you can rant that people should do null guards better and write less brittle code. True, but better language features help protect users from developer laziness."

Yesterday, the team behind V8, Chrome's JavaScript engine, also expressed their delight on Twitter:

https://twitter.com/v8js/status/1166360971914481669

Read the Optional Chaining for JavaScript proposal to know more in detail.

ES2019: What's new in ECMAScript, the JavaScript specification standard

Introducing QuickJS, a small and easily embeddable JavaScript engine

Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
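The behavior described above is easy to see in a few lines (runnable via the Babel plugin mentioned above, or natively in engines that have since shipped the stage-3 feature):

```javascript
const user = { profile: { name: 'Ada' } };
const anonymous = {};

// Property access short-circuits to undefined instead of throwing.
console.log(user.profile?.name);      // 'Ada'
console.log(anonymous.profile?.name); // undefined

// Stacking: multiple ?. operators in one chain of accesses.
console.log(anonymous.profile?.address?.city); // undefined

// Optional call: undefined if the function does not exist.
console.log(user.greet?.()); // undefined

// Without ?., the second access above would instead throw:
// TypeError: Cannot read property 'name' of undefined
```

Note that the short-circuit covers the rest of the chain: once `anonymous.profile` evaluates to undefined, neither `.address` nor `.city` is evaluated at all.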