
Tech News - Front-End Web Development


Bootstrap 5 to replace jQuery with vanilla JavaScript

Bhagyashree R
13 Feb 2019
2 min read
The upcoming major version of Bootstrap, version 5, will no longer have jQuery as a dependency; it will be replaced with vanilla JavaScript. In 2017, the Bootstrap team opened a pull request with the aim of removing jQuery entirely from the Bootstrap source, and it is now near completion. Under this pull request, the team has removed jQuery from 11 plugins including Util, Alert, Button, Carousel, and more. Using ‘Data’ and ‘EventHandler’ in unit tests is no longer supported. Additionally, Internet Explorer will not be compatible with this version. Despite these updates, developers will be able to use this version both with and without jQuery. Since this will be a major release, users can expect a few breaking changes.

Bootstrap is not alone; many other companies have been thinking of decoupling from jQuery. For example, last year GitHub incrementally removed jQuery from their frontend, mainly because of the rapid evolution of web standards and jQuery losing its relevance over time.

This news triggered a discussion on Hacker News, and many users were happy about this development. One user commented, “I think the reason is that many of the problems jQuery was designed to solve (DOM manipulation, cross-browser compatibility issues, AJAX, cool effects) have now been implemented as standards, either in Javascript or CSS and many developers consider the 55k minified download not worth it.” Another user added, “The general argument now is that 95%+ of jQuery is now native in browsers (with arguably the remaining 5% being odd overly backward compatible quirks worth ignoring), so adding a JS dependency for them is "silly" and/or a waste of bandwidth.”

To read more in detail, check out Bootstrap’s GitHub repository.

jQuery File Upload plugin exploited by hackers over 8 years, reports Akamai’s SIRT researcher
GitHub parts ways with JQuery, adopts Vanilla JS for its frontend
Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?
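The Hacker News comments above point at the heart of the change: most of what jQuery offered is now a web standard. As a hedged illustration (the selector and URL below are placeholders), here is how two common jQuery idioms map onto plain browser APIs:

// jQuery: $('.alert').addClass('show');
document.querySelectorAll('.alert').forEach(el => el.classList.add('show'));

// jQuery: $.getJSON('/api/items', callback);
fetch('/api/items')
  .then(response => response.json())
  .then(data => console.log(data));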


React Storybook UI: Logging user interactions with Actions add-on [Tutorial]

Packt Editorial Staff
17 Jul 2018
7 min read
Sometimes, you end up creating a whole new page, or a whole new app, just to see what your component can do on its own. This can be a painful process, which is why Storybook exists for React. With Storybook, you get an automated, sandboxed environment to work with. It also handles all the build steps, so you can write a story for your components and see the result.

In this article we are going to use Storybook add-ons, with which you can test any aspect of your component before worrying about integrating it into your application. To be specific, we are going to look at Actions, an add-on that is enabled in Storybook by default. This React tutorial is an extract from the book React 16 Tooling written by Adam Boduch. Adam Boduch has been involved with large-scale JavaScript development for nearly 10 years. He has practical experience with real-world software systems and the scaling challenges they pose.

Working with Actions in React Storybook

The Actions add-on is enabled in your Storybook by default. The idea with Actions is that once you select a story, you can interact with the rendered page elements in the main pane. Actions provide you with a mechanism that logs user interactions in the Storybook UI. Additionally, Actions can serve as a general-purpose tool to help you monitor data as it flows through your components.

Let's start with a simple button component:

import React from 'react';

const MyButton = ({ onClick }) => (
  <button onClick={onClick}>My Button</button>
);

export default MyButton;

The MyButton component renders a button element and assigns it an onClick event handler. The handler is actually defined outside of MyButton; it's passed in as a prop. So let's create a story for this component and pass it an onClick handler function:

import React from 'react';
import { storiesOf } from '@storybook/react';
import { action } from '@storybook/addon-actions';
import MyButton from '../MyButton';

storiesOf('MyButton', module).add('clicks', () => (
  <MyButton onClick={action('my component clicked')} />
));

Do you see the action() function that's imported from @storybook/addon-actions? This is a higher-order function: a function that returns another function. When you call action('my component clicked'), you get a new function in return. The new function behaves kind of like console.log(), in that you can assign it a label and log arbitrary values. The difference is that the output of a function created by the Storybook action() add-on is rendered right in the actions pane of the Storybook UI.

As usual, the button element is rendered in the main pane. The content that you see in the actions pane is the result of clicking on the button three times. The output is exactly the same with every click, so it is all grouped under the my component clicked label that you assigned to the handler function.

In the preceding example, the event handler functions that action() creates are useful as substitutes for the actual event handler functions that you would pass to your components. Other times, you actually need the event handling behavior to run. For example, you might have a controlled form field that maintains its own state, and you want to see what happens as the state changes. For cases like these, I find the simplest and most effective approach is to add event handler props, even if you're not using them for anything else.
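Before moving on, a quick aside to make the higher-order shape of action() concrete. This is an illustrative sketch only, not Storybook's actual implementation:

// action(label) returns a handler that logs its arguments under that label;
// a plain console.log stands in for the actions pane here.
const action = (label) => (...args) => {
  console.log(label, args);
};

const handleClick = action('my component clicked');
handleClick('event data'); // logs: my component clicked [ 'event data' ]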
Let's take a look at an example of the handler-props approach:

import React, { Component } from 'react';

class MyRangeInput extends Component {
  static defaultProps = {
    onChange() {},
    onRender() {}
  };

  state = { value: 25 };

  onChange = ({ target: { value } }) => {
    this.setState({ value });
    this.props.onChange(value);
  };

  render() {
    const { value } = this.state;
    this.props.onRender(value);
    return (
      <input
        type="range"
        min="1"
        max="100"
        value={value}
        onChange={this.onChange}
      />
    );
  }
}

export default MyRangeInput;

Let's start by taking a look at the defaultProps of this component. By default, this component has two default handler functions, for onChange and onRender; these do nothing, so that if they're not set, they can still be called and nothing will happen. As you might have guessed, we can now pass action() handlers to MyRangeInput components. Let's try this out. Here's what your stories/index.js looks like now:

import React from 'react';
import { storiesOf } from '@storybook/react';
import { action } from '@storybook/addon-actions';
import MyButton from '../MyButton';
import MyRangeInput from '../MyRangeInput';

storiesOf('MyButton', module).add('clicks', () => (
  <MyButton onClick={action('my component clicked')} />
));

storiesOf('MyRangeInput', module).add('slides', () => (
  <MyRangeInput
    onChange={action('range input changed')}
    onRender={action('range input rendered')}
  />
));

Now when you view this story in the Storybook UI, you should see lots of actions logged as you slide the range input slider. As the slider handle moves, you can see that the two event handler functions you've passed to the component log the value at different stages of the component rendering life cycle. The most recent action is logged at the top of the pane, unlike browser dev tools, which log the most recent value at the bottom.

Let's revisit the MyRangeInput code for a moment. The first function that's called when the slider handle moves is the change handler:

onChange = ({ target: { value } }) => {
  this.setState({ value });
  this.props.onChange(value);
};

This onChange() method is internal to MyRangeInput. It's needed because the input element that it renders uses the component state as the single source of truth. These are called controlled components in React terminology. First, it sets the state of the value using the target.value property from the event argument. Then, it calls this.props.onChange(), passing it the same value. This is how you can see the event value in the Storybook UI.

Note that this isn't the right place to log the updated state of the component. When you call setState(), you have to assume that you're done dealing with state in that function, because state doesn't always update synchronously. Calling setState() only schedules the state update and the subsequent re-render of your component.

Here's an example of how this can cause problems. Let's say that instead of logging the value from the event argument, you logged the value state after setting it. The onChange handler would log the old state while the onRender handler logs the updated state. This sort of logging output is super confusing if you're trying to trace an event value to rendered output; things don't line up! Never log state values after calling setState(). If the idea of calling noop functions makes you feel uncomfortable, then maybe this approach to displaying actions in Storybook isn't for you.
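If you do need to observe the state after an update, React's setState() accepts a completion callback as its second argument; here is a minimal sketch of the change handler rewritten that way (the logic is otherwise the same as above):

onChange = ({ target: { value } }) => {
  // The callback runs only after the state update has been applied,
  // so this.state.value is guaranteed to be the new value here.
  this.setState({ value }, () => {
    this.props.onChange(this.state.value);
  });
};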
On the other hand, you might find it valuable to have a utility that can log essentially anything at any point in the life cycle of your component, without the need to write a bunch of debugging code inside the component. For such cases, Actions are the way to go.

To summarize, we learned about the Storybook Actions add-on. We saw how it helps with logging, while links can provide a mechanism for navigation beyond the default. Grab the book React 16 Tooling today. This book covers the most important tools, utilities, and libraries that every React developer needs to know, in detail.

What is React.js and how does it work?
Is React Native really a Native framework?
React Native announces re-architecture of the framework for better performance


Mozilla and Google Chrome refuse to support Gab’s Dissenter extension for violating acceptable use policy

Bhagyashree R
12 Apr 2019
5 min read
Earlier this year, Gab, the “free speech” social network and a popular forum for far-right viewpoint holders and other fringe groups, launched a browser extension named Dissenter that creates an alternative comment section for any website. The plug-in has now been removed from the extension stores of both Mozilla and Google for violating their acceptable use policies. This decision came after the Columbia Journalism Review reported the extension to the tech giants.

https://twitter.com/nausjcaa/status/1116409587446484994

The Dissenter plug-in, which goes by the tagline “the comment section of the internet”, allows users to discuss any topic in real time without fearing that their posted comment will be removed by a moderator. The plug-in failed to pass Mozilla's review process and is now disabled for Firefox users, though users who have already installed it can continue to use it. The Gab team took to Twitter to complain about Mozilla’s Acceptable Use Policy.

https://twitter.com/getongab/status/1116036111296544768

When asked for more clarity on which policies Dissenter did not comply with, Mozilla said that they had received abuse reports for this extension. It further added that the platform is being used for promoting violence, hate speech, and discrimination, but it failed to show any examples to lend credibility to these claims.

https://twitter.com/getongab/status/1116088926559666181

The extension developers responded by saying that they do moderate any illegal conduct or posts on their platform as and when it is brought to their attention. “We do not display content containing words from a list of the most offensive racial epithets in the English language,” added the Gab developers.

Soon after this, Google Chrome also removed the extension from the Chrome Web Store, stating the same reason: the extension does not comply with their policies.

After getting deplatformed, the Dissenter team has concluded that the best way forward is to create their own browser. They are considering forking Chromium or the privacy-focused web browser Brave. “That’s it. We are going to fork Chromium and create a browser with Dissenter, ad blocking, and other privacy tools built in along with the guarantee of free speech that Silicon Valley does not provide.”

https://twitter.com/getongab/status/1116308126461046784

Gab does not moderate views posted by its users until they are flagged for violations, and says it “treats its users as adults”. So, unless people complain, the platform takes no action against the threats and hate speech posted in comments. Though it is known for its tolerance of fringe views and has drawn immense heat from the public, things took a turn for the worse after the recent Christchurch shooting. The far-right extremist who shot dead 20+ Muslims and left 30 others injured in two mosques in New Zealand had shared his extremist manifesto on social media sites like Gab and 8chan. He had also live-streamed the shooting on Facebook, YouTube, and others.

This is not the first time Gab has been involved in a controversy. Back in October last year, PayPal banned Gab following the anti-Semitic mass shooting in Pittsburgh. It was reported that the shooter was an active poster on the Gab website and had hinted at his intentions shortly before the attack. In the same month, hosting provider Joyent also suspended its services for Gab.
The platform has also been warned by Stripe for violations of its policies. Torba, the co-founder of Gab, said, “Payments companies like Paypal, Stripe, Square, Cash App, Coinbase, and Bitpay have all booted us off. Microsoft Azure, Joyent, GoDaddy, Apple, Google’s Android store, and other infrastructure providers, too, have denied us service, all because we refuse to censor user-generated content that is within the boundaries of the law.”

Looking at this move by Mozilla, many users felt that it actually contradicts Mozilla's goal of making the web free and open for all.

https://twitter.com/VerGreeneyes/status/1116216415734960134
https://twitter.com/ChicalinaT/status/1116101257494761473

A Hacker News user added, “While Facebook, Reddit, Twitter and now Mozilla may think they're doing a good thing by blocking what they consider hateful speech, it's just helping these people double down on thinking they're in the right. We should not be afraid of ideas. Speech != violence. Violence is violence. With platforms banning more and more offensive content and increasing the label of what is bannable, we're seeing a huge split in our world. People who could once agree to disagree now don't even want to share the same space with one another. It's all call out culture and it's terrible.”

Many people think this step is nothing but a move toward mass censorship. “I see it as an active endorsement of filter funneling comments sections online, given that despite the operators of Dissenter having tried to make efforts to comply with the terms of service Mozilla have imposed for being listed in their gallery, were given an unclear rationale as to how having "broken" these terms, and no clue as to what they were supposed to do to have avoided doing so,” adds a Reddit user.

Mozilla has not revoked the add-on’s signature, so Dissenter can still be distributed with the guarantee that the add-on is safe and can be updated automatically. Manual installation of the extension from Dissenter.com/download is also possible.

Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta
Mozilla is exploring ways to reduce notification permission prompt spam in Firefox


What's new in ECMAScript 2018 (ES9)?

Pravin Dhandre
20 Apr 2018
4 min read
ECMAScript 2018, also known as ES9, is now complete, with lots of new features. Since the major feature release of ECMAScript 2015, the language has matured with yearly update releases. After multiple draft submissions and completion of the 5-stage process, the TC39 committee has finalized the set of features that will be rolled out in June. The full list of proposals that were submitted to the TC39 committee can be viewed in its GitHub repository.

What are the key features of ECMAScript 2018?

Let’s have a quick look at the key ES9 features and how they will add value for web developers.

Lifting the template literal restriction

Template literals generally allow the embedding of languages such as DSLs. However, restrictions on escape sequences make this quite complicated, and simply removing the restriction would create difficulty in handling cooked template values containing illegal escape sequences. The proposal therefore redefines the cooked value for illegal escape sequences as undefined. This lifting of the restriction allows values like \xerxes and makes embedding of languages simpler. The detailed proposal, with templates, can be viewed on GitHub.

Asynchronous iterators

The newer version provides syntactic support for asynchronous iteration with both the AsyncIterable and AsyncIterator protocols. The syntactic support helps, for example, in reading lines of text from an HTTP connection easily. In my opinion, this is one of the most important and useful features, as it makes such code look much simpler. It introduces a new IterationStatement, for-await-of, and also adds syntax for creating async generator functions. The detailed proposal can be viewed on GitHub.

Promise.prototype.finally

As you are aware, promises make the execution of callback functions easy. Many promise libraries have a "finally" method through which you can run code no matter how the Promise resolves. It works by registering a callback which gets invoked when a promise is fulfilled or rejected. Bluebird, Q, and when are some examples. The detailed proposal can be viewed on GitHub.

Unicode property escapes in regular expressions

ECMAScript 2018 adds Unicode property escapes, `\p{…}` and `\P{…}`, to regular expressions. These are a new and unique type of escape sequence for patterns with the u flag set. With this feature, one can create Unicode-aware regular expressions with utmost ease. These escapes are easily readable, compact, and updated automatically by the ECMAScript engine. The detailed proposal can be viewed on GitHub.

RegExp lookbehind assertions

Assertions are regular expression constructs, consisting of anchors and lookarounds, that either succeed or fail based on whether a match is found. ECMAScript has extended the existing assertions, which look around in the forward direction, with lookbehind assertions that look in the backward direction. This assertion is helpful in instances like validating a dollar amount without capturing the dollar sign, wherever a pattern is or is not preceded by another. The detailed proposal can be viewed on GitHub.

Object rest/spread properties

An earlier version of ECMAScript introduced rest and spread elements for array literals. Likewise, the newer version introduces rest and spread properties for object literals. Both of these operations for objects help in extracting the properties we want while dropping the unwanted ones. The detailed proposal can be viewed on GitHub.
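To make a few of these concrete, here is a short sketch exercising asynchronous iteration, finally(), and object rest/spread; the async generator is a stand-in for something like a line reader over an HTTP connection:

async function* lines() {
  // A stand-in async generator; a real source might read a network stream.
  yield 'first line';
  yield 'second line';
}

async function readAll() {
  for await (const line of lines()) {
    console.log(line);
  }
}
readAll();

// finally() runs whether the promise is fulfilled or rejected:
Promise.resolve('ok')
  .then(value => console.log(value))
  .finally(() => console.log('settled either way'));

// Object rest/spread properties:
const { id, ...rest } = { id: 1, name: 'pi', value: 3.14 };
const clone = { ...rest, active: true }; // { name: 'pi', value: 3.14, active: true }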
RegExp named capture groups

Named capture groups are another RegExp feature, similar to the so-called "named groups" in Java and Python. With this, you can write a RegExp that provides names, in the format (?<name>...), for different parts of a group. This allows you to use a name to grab whichever group you need in a straightforward way. The detailed proposal can be viewed on GitHub.

s ('dotAll') flag for regular expressions

In regular expression patterns, the dot (.) previously matched almost any character, but not line terminator characters such as \n and \f (and astral characters needed special care too), which made some regexes complicated to write. The newer version adds a new s flag, with which the dot matches any character, including line terminators. The detailed proposal can be viewed on GitHub.

When will ECMAScript 2018 be available?

All of the features above are expected to be implemented and available in browsers this year; it's in the name, after all. But there are likely to be even more new features and capabilities in the 2019 release. Read this to get a clearer picture of what's likely to feature in the 2019 release.
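Here is a quick illustrative tour of the regex additions described above; each line runs as-is in an ES2018-capable engine:

// Named capture groups:
const date = '2018-06-01'.match(/(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/);
console.log(date.groups.year); // '2018'

// Lookbehind: match the amount only when preceded by a dollar sign:
console.log('$42.50'.match(/(?<=\$)[\d.]+/)[0]); // '42.50'

// s ('dotAll') flag: the dot now also matches line terminators:
console.log(/^.+$/s.test('line 1\nline 2')); // true

// Unicode property escapes (with the u flag):
console.log(/^\p{Script=Greek}+$/u.test('πλς')); // true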


Exploring Forms in Angular – types, benefits and differences

Expert Network
21 Jul 2021
11 min read
While developing a web application, or setting up dynamic pages and meta tags, we need to deal with multiple input elements and value types; handled poorly, these could seriously hinder our work in terms of data flow control, data validation, and user experience.

This article is an excerpt from the book ASP.NET Core 5 and Angular, Fourth Edition by Valerio De Sanctis, a revised edition of a bestseller that includes coverage of the Angular routing module, expanded discussion on the Angular CLI, and detailed instructions for deploying apps on Azure, as well as both Windows and Linux.

Sure, we could easily work around most of the issues by implementing some custom methods within our form-based components; we could throw some errors such as isValid(), isNumber(), and so on here and there, and then hook them up to our template syntax and show/hide the validation messages with the help of structural directives such as *ngIf, *ngFor, and the like. However, it would be a horrible way to address our problem; we didn't choose a feature-rich client-side framework such as Angular to work that way.

Luckily enough, we have no reason to do that, since Angular provides us with a couple of alternative strategies to deal with these common form-related scenarios:

Template-Driven Forms
Model-Driven Forms, also known as Reactive Forms

Both are highly coupled with the framework and thus extremely viable; they both belong to the @angular/forms library and share a common set of form control classes. However, they also have their own specific sets of features, along with their pros and cons, which could ultimately lead us to choose one of them. Let's try to quickly summarize these differences.

Template-Driven Forms

If you've come from AngularJS, there's a high chance that the Template-Driven approach will ring a bell or two. As the name implies, Template-Driven Forms host most of the logic in the template code; working with a Template-Driven Form means:

Building the form in the .html template file
Binding data to the various input fields using ngModel instances
Using a dedicated ngForm object related to the whole form and containing all the inputs, with each being accessible through its name

These things need to be done to perform the required validity checks. To understand this, here's what a Template-Driven Form looks like:

<form novalidate autocomplete="off" #form="ngForm" (ngSubmit)="onSubmit(form)">
  <input type="text" name="name" value="" required
    placeholder="Insert the city name..."
    [(ngModel)]="city.Name" #name="ngModel"
  />
  <span *ngIf="(name.touched || name.dirty) && name.errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>

Here, we can access any element, including the form itself, with some convenient aliases – the attributes with the # sign – and check for their current states to create our own validation workflow. These states are provided by the framework and will change in real time, depending on various things: touched, for example, becomes true when the control has been visited at least once; dirty, which is the opposite of pristine, means that the control value has changed, and so on.
We used both touched and dirty in the preceding example because we want our validation message to be shown only if the user moves their focus to the <input name="name"> and then goes away, leaving it blank by either deleting its value or not setting it.

These are Template-Driven Forms in a nutshell; now that we've had an overall look at them, let's try to summarize the pros and cons of this approach. Here are the main advantages of Template-Driven Forms:

Template-Driven Forms are very easy to write. We can recycle most of our HTML knowledge (assuming that we have any). On top of that, if we come from AngularJS, we already know how well we can make them work once we've mastered the technique.
They are rather easy to read and understand, at least from an HTML point of view; we have a plain, understandable HTML structure containing all the input fields and validators, one after another. Each element will have a name, a two-way binding with the underlying ngModel, and (possibly) Template-Driven logic built upon aliases that have been hooked to other elements that we can also see, or to the form itself.

Here are their weaknesses:

Template-Driven Forms require a lot of HTML code, which can be rather difficult to maintain and is generally more error-prone than pure TypeScript.
For the same reason, these forms cannot be unit tested. We have no way to test their validators or to ensure that the logic we implemented will work, other than running an end-to-end test with our browser, which is hardly ideal for complex forms.
Their readability will quickly drop as we add more and more validators and input tags. Keeping all their logic within the template might be fine for small forms, but it does not scale well when dealing with complex data items.

Ultimately, we can say that Template-Driven Forms might be the way to go when we need to build small forms with simple data validation rules, where we can benefit more from their simplicity. On top of that, they are quite like the typical HTML code we're already used to (assuming that we do have a plain HTML development background); we just need to learn how to decorate the standard <form> and <input> elements with aliases and throw in some validators handled by structural directives such as the ones we've already seen, and we'll be set in (almost) no time.

For additional information on Template-Driven Forms, we highly recommend that you read the official Angular documentation at: https://angular.io/guide/forms

That being said, the lack of unit testing, the HTML code bloat that they will eventually produce, and the scaling difficulties will eventually lead us toward an alternative approach for any non-trivial form.

Model-Driven/Reactive Forms

The Model-Driven approach was specifically added in Angular 2+ to address the known limitations of Template-Driven Forms. The forms implemented with this alternative method are known as Model-Driven Forms or Reactive Forms, which are the exact same thing. The main difference here is that (almost) nothing happens in the template, which acts as a mere reference to a more complex TypeScript object that gets defined, instantiated, and configured programmatically within the component class: the form model.

To understand the overall concept, let's try to rewrite the previous form in a Model-Driven/Reactive way.
The outcome of doing this is as follows:

<form [formGroup]="form" (ngSubmit)="onSubmit()">
  <input formControlName="name" required />
  <span *ngIf="(form.get('name').touched || form.get('name').dirty)
        && form.get('name').errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>

As we can see, the amount of required code is much lower. Here's the underlying form model that we will define in the component class file:

import { FormGroup, FormControl } from '@angular/forms';

class ModelFormComponent implements OnInit {
  form: FormGroup;

  ngOnInit() {
    this.form = new FormGroup({
      name: new FormControl()
    });
  }
}

Let's try to understand what's happening here:

The form property is an instance of FormGroup and represents the form itself.
FormGroup, as the name suggests, is a container of form controls sharing the same purpose. As we can see, the form itself acts as a FormGroup, which means that we can nest FormGroup objects inside other FormGroup objects (we didn't do that in our sample, though).
Each data input element in the form template – in the preceding code, name – is represented by an instance of FormControl.
Each FormControl instance encapsulates the related control's current state, such as valid, invalid, touched, and dirty, including its actual value.
Each FormGroup instance encapsulates the state of each child control, meaning that it will only be valid if/when all its children are also valid.

Also, note that we have no way of accessing the FormControls directly like we were doing in Template-Driven Forms; we have to retrieve them using the .get() method of the main FormGroup, which is the form itself.

At first glance, the Model-Driven template doesn't seem too different from the Template-Driven one; we still have a <form> element, an <input> element hooked to a <span> validator, and a submit button; on top of that, checking the state of the input elements takes a bigger amount of source code, since they have no aliases we can use. What's the real deal, then? To help us visualize the difference, let's look at the following diagrams. Here's a schema depicting how Template-Driven Forms work:

Fig 1: Template-Driven Forms schematic

By looking at the arrows, we can easily see that, in Template-Driven Forms, everything happens in the template; the HTML form elements are directly bound to the DataModel component represented by a property filled with an asynchronous HTML request to the Web Server, much like we did with our cities and country table. That DataModel will be updated as soon as the user changes something, unless a validator prevents them from doing so. If we think about it, we can easily understand how there isn't a single part of the whole workflow that happens to be under our control; Angular handles everything by itself, using the information in the data bindings defined within our template. This is what Template-Driven actually means: the template is calling the shots.
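Before looking at the Model-Driven diagram, here is a brief sketch of the programmatic control this form model already gives us; valueChanges, patchValue, and Validators are standard @angular/forms APIs, and the city name is a placeholder value:

import { FormGroup, FormControl, Validators } from '@angular/forms';

const form = new FormGroup({
  name: new FormControl('', Validators.required),
});

// Detect and react to user changes as a stream of values:
form.get('name')!.valueChanges.subscribe(value => {
  console.log('name changed to', value);
});

// Push data into the form programmatically:
form.patchValue({ name: 'Rome' });

console.log(form.valid); // true, since 'name' is no longer empty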
Now, let's take a look at the Model-Driven Forms (or Reactive Forms) approach:

Fig 2: Model-Driven/Reactive Forms schematic

As we can see, the arrows depicting the Model-Driven Forms workflow tell a whole different story. They show how the data flows between the DataModel component – which we get from the Web Server – and a UI-oriented form model that retains the states and the values of the HTML form (and its children input elements) that is presented to the user. This means that we'll be able to get in between the data and the form control objects and perform a number of tasks firsthand: push and pull data, detect and react to user changes, implement our own validation logic, perform unit tests, and so on.

Instead of being superseded by a template that's not under our control, we can track and influence the workflow programmatically, since the form model that calls the shots is also a TypeScript class; that's what Model-Driven Forms are about. This also explains why they are also called Reactive Forms – an explicit reference to the Reactive programming style that favors explicit data handling and change management throughout the workflow.

Summary

In this article, we focused on the Angular framework and the two form design models it offers: the Template-Driven approach, mostly inherited from AngularJS, and the Model-Driven or Reactive alternative. We took some valuable time to analyze the pros and cons provided by both, and then we made a detailed comparison of the underlying logic and workflow. At the end of the day, we chose the Reactive way, as it gives the developer more control and enforces a more consistent separation of duties between the Data Model and the Form Model.

About the author

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips on the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He is the founder and owner of Ryadel and the author of many best-selling books on back-end and front-end web development.


Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations

Natasha Mathur
29 Nov 2018
4 min read
The Google Chrome team finally announced the release date for its autoplay policy earlier this week. The policy had been delayed after originally shipping with the Chrome 66 stable release back in May this year; the change is now scheduled to roll out with Chrome 71 in the upcoming month.

The autoplay policy imposes restrictions that prevent videos and audio from autoplaying in the web browser. For websites that want to be able to autoplay their content, the new policy change will prevent playback by default. For most sites, playback will be resumed, but in other cases a small code adjustment will be required to resume the audio.

Additionally, Google has added a new approach to the policy that tracks users' past behavior with sites that have autoplay enabled. So if a user regularly lets audio play for more than 7 seconds on a website, autoplay gets enabled for that website. This is done with the help of a Media Engagement Index (MEI), an index stored locally per Chrome profile on a device. MEI tracks the number of visits to a site that include audio playback of more than 7 seconds. Each website gets a score between zero and one in the MEI, where a higher score indicates that the user doesn't mind audio playing on that website. For new user profiles, or if a user clears their browsing data, a pre-seeded list based on anonymized, aggregated MEI scores is used to decide which websites can autoplay. The pre-seeded site list is algorithmically generated, and only sites where enough users permit autoplay are added to the list.

“We believe by learning from the user – and anticipating their intention on a per website basis – we can create the best user experience. If users tend to let content play from a website, we will autoplay content from that site in the future. Conversely, if users tend to stop autoplay content from a given website, we will prevent autoplay for that content by default”, mentions the Google team.

The reason behind the delay

The autoplay policy had been delayed by Google after receiving feedback from the Web Audio developer community, especially web game developers and WebRTC developers. As per the feedback, the autoplay change was affecting many web games and audio experiences, especially on sites that had not been updated for the change. Delaying the policy rollout gave web game developers enough time to update their websites. Moreover, Google also explored ways to reduce the negative impact of the audio play policy on websites with audio enabled. Following this, Google made an adjustment to its implementation of Web Audio to reduce the number of websites originally impacted.

New adjustments made for developers

As per Google's new adjustments to the autoplay policy, audio will resume automatically when the user has interacted with a page and the start() method of a source node is called. A source node represents an individual audio snippet that most games play: for example, the sound that gets played when a player collects a coin, or the background music that plays within a particular stage of a game. Game developers call the start() function on source nodes whenever any of these sounds are necessary for the game. These changes will enable autoplay in most web games when the user starts playing the game.
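For sites that need to adapt, the usual pattern is to treat blocked playback as a recoverable case. A minimal sketch (the selector is a placeholder; play() returning a promise is standard HTMLMediaElement behavior):

const video = document.querySelector('video');

// play() returns a promise that rejects when autoplay is blocked:
video.play().catch(() => {
  // Fall back to starting playback on the first user gesture.
  document.addEventListener('click', () => video.play(), { once: true });
});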
The Google team has also introduced a mechanism that allows users to disable the autoplay policy for cases where the automatic learning doesn't work as expected. Along with the new autoplay policy update, Google will also stop showing existing annotations on YouTube videos starting from January 15, 2019; all existing annotations will be removed after that date.

“We always put our users first but we also don’t want to let down the web development community. We believe that with our adjustments to the implementation of the policy, and the additional time we provided for web audio developers to update their code, that we will achieve this balance with Chrome 71”, says the Google team.

For more information, check out Google’s official blog post.

“ChromeOS is ready for web development” – A talk by Dan Dascalescu at the Chrome Web Summit 2018
Day 1 of Chrome Dev Summit 2018: new announcements and Google’s initiative to close the gap between web and native
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

Microsoft Build 2019: Introducing WSL 2, the newest architecture for the Windows Subsystem for Linux

Amrata Joshi
07 May 2019
3 min read
Yesterday, on the first day of Microsoft Build 2019, the team at Microsoft introduced WSL 2, the newest architecture for the Windows Subsystem for Linux. With WSL 2, file system performance will increase and users will be able to run more Linux apps. The initial builds of WSL 2 will be available by the end of June this year.

https://twitter.com/windowsdev/status/1125484494616649728
https://twitter.com/poppastring/status/1125489352795201539

What's new in WSL 2?

Run Linux binaries

WSL 2 powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. This new architecture changes how these Linux binaries interact with Windows and the computer's hardware, but it still provides the same user experience as WSL 1.

Linux distros

With this release, individual Linux distros can be run either as a WSL 1 distro or as a WSL 2 distro, and can be upgraded or downgraded at any time. Users can also run WSL 1 and WSL 2 distros side by side. WSL 2 uses an entirely new architecture built on a real Linux kernel.

Increased speed

With this release, file-intensive operations like git clone, npm install, apt update, apt upgrade, and more will get faster. Initial tests the team has run show WSL 2 running up to 20x faster compared to WSL 1 when unpacking a zipped tarball, and around 2-5x faster when using git clone, npm install, and cmake on various projects.

A Linux kernel shipped with Windows

The team will ship an open source, real Linux kernel with Windows, which makes full system call compatibility possible. This is also the first time a Linux kernel will be shipped with Windows. The team is building the kernel in-house, and the initial builds will ship version 4.19 of the kernel. This kernel has been designed in tune with WSL 2 and has been optimized for size and performance. The team will service this Linux kernel through Windows updates, so users will get the latest security fixes and kernel improvements without needing to manage it themselves. The configuration for this kernel will be available on GitHub once WSL 2 is released; the WSL kernel source will consist of links to a set of patches in addition to the long-term stable source.

Full system call compatibility

Linux binaries use system calls to perform functions such as accessing files, requesting memory, creating processes, and more. For WSL 1 the team created a translation layer that interprets most of these system calls and allows them to work on the Windows NT kernel. It is challenging to implement all of these system calls, though, which is why some apps don't run properly in WSL 1. WSL 2 includes its own Linux kernel, which has full system call compatibility.

To know more about this news, check out Microsoft's blog post.

Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces collaboration with Microsoft’s .NET at DockerCon 2019
Microsoft and GitHub employees come together to stand with the 996.ICU repository


What to expect in Webpack 5?

Bhagyashree R
07 Feb 2019
3 min read
Yesterday, the team behind Webpack shared the updates we will see in its upcoming version, Webpack 5. This version improves build performance with persistent caching, introduces a new named chunk id algorithm, and more. For Webpack 5, the minimum supported Node.js version has been updated from 6 to 8. As this version is a major release, it will come with breaking changes, and users should expect some plugins to stop working.

Expected features in Webpack 5

Webpack 4 deprecated features removed

All the features that were deprecated in Webpack 4 have been removed in this version, so when migrating to Webpack 5, ensure that your Webpack build doesn't show any deprecation warnings. Additionally, IgnorePlugin and BannerPlugin must now be passed an options object.

Automatic Node.js polyfills removed

All versions before Webpack 4 provided polyfills for most of the Node.js core modules, which were automatically applied once a module used any of the core modules. Polyfills make it easy to use modules written for Node.js, but they also increase the bundle size, as huge modules get added to the bundle. To stop this, Webpack 5 removes the automatic polyfilling and focuses on frontend-compatible modules.

Algorithm for deterministic chunk and module IDs

Webpack 5 comes with new algorithms for long-term caching. These are enabled by default in production mode with the following configuration lines:

chunkIds: "deterministic", moduleIds: "deterministic"

These algorithms assign short numeric IDs to modules and chunks in a deterministic way. It is recommended that you use the default values for chunkIds and moduleIds. You can also choose to use the old defaults, chunkIds: "size", moduleIds: "size", which generate smaller bundles but invalidate them more often for caching.

Named chunk IDs algorithm

A named chunk id algorithm is introduced, enabled by default in development mode. It gives chunks and filenames human-readable names instead of the old numeric names. The algorithm determines the chunk ID from the chunk's content, so users no longer need to use import(/* webpackChunkName: "name" */ "module") for debugging. To opt out of this feature, you can change the configuration to chunkIds: "natural".

Compiler idle and close

Starting from Webpack 5, compilers need to be closed after use. Compilers now enter and leave an idle state and have hooks for these states. Once a compiler is closed, all remaining work should be finished as fast as possible; then a callback signals that the closing has been completed.

You can read the entire changelog from the Webpack repository.

Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!
How to create a desktop application with Electron [Tutorial]
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0
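Put together, the caching-related options above would sit in the optimization section of a config; a minimal sketch, assuming the documented option names ship unchanged in the final release:

// webpack.config.js
module.exports = {
  mode: 'production',
  optimization: {
    // Production defaults in Webpack 5, shown explicitly for clarity:
    chunkIds: 'deterministic',
    moduleIds: 'deterministic',
  },
};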


GitHub parts ways with JQuery, adopts Vanilla JS for its frontend

Bhagyashree R
07 Sep 2018
3 min read
GitHub has finally finished removing the jQuery dependency from its frontend code. This was the result of a gradual decoupling from jQuery which began at least two years ago. They have chosen not to replace jQuery with yet another framework. Instead, they were able to make this transition with the help of polyfills that allowed them to use standard browser features such as EventListener, fetch, Array.from, and more.

Why did GitHub choose jQuery in the beginning?

Simple: GitHub started using jQuery 1.2.1 as a dependency in 2007. This enabled its web developers to create a more modern and dynamic user experience. jQuery 1.2.1 allowed developers to simplify the process of DOM manipulation, define animations, and make AJAX requests. Its simple interface gave GitHub developers a base to craft extension libraries such as pjax and facebox, which later became the building blocks for the rest of the GitHub frontend.

Consistent: Unlike the XMLHttpRequest interface, jQuery was consistent across browsers. GitHub in its early days chose jQuery as it allowed their small development team to quickly prototype and release new features without having to adjust code specifically for each web browser.

Why did they decide to remove the jQuery dependency?

After comparing jQuery with the rapid evolution of supported web standards in modern browsers, they observed that:

CSS classname switching can be achieved using Element.classList.
Visual animations can be created using CSS stylesheets without writing any JavaScript code.
The addEventListener method, which is used to attach an event handler to the document, is now stable enough for cross-platform use.
$.ajax requests can be performed using the Fetch Standard.
With the evolution of JavaScript, some of the syntactic sugar that jQuery provides has become redundant.
The chaining syntax of jQuery didn't satisfy how GitHub wanted to write code going forward.

According to the announcement, this step of decoupling from jQuery will allow them to:

Rely on web standards more
Have MDN web docs as their default documentation to refer to
Maintain more resilient code in the future
Speed up page load times and JavaScript execution time

Which technology is it using now?

GitHub has moved from jQuery to vanilla JS (plain JavaScript). It is now using querySelectorAll, fetch for AJAX, and delegated-events for event handling; polyfills for standard DOM manipulations; and Custom Elements. The adoption of Custom Elements is on the rise. It is a component model native to the browser, which means that users do not have to download, parse, and compile additional bytes of a framework. With the release of Web Components v1 in 2017, GitHub started to adopt Custom Elements on a wider scale. In the future they are also planning to use Shadow DOM.

To read more about how GitHub made this transition to using standard browser features, check out their official announcement.

Github introduces Project Paper Cuts for developers to fix small workflow problems, iterate on UI/UX, and find other ways to make quick improvements
Why Golang is the fastest growing language on GitHub
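Custom Elements, the piece GitHub is betting on, need no framework at all. Here is a minimal sketch of one; the element name and rendering are invented for illustration and are not taken from GitHub's codebase:

class UserAvatar extends HTMLElement {
  connectedCallback() {
    // Runs when the element is attached to the document.
    const name = this.getAttribute('name') || 'anonymous';
    this.textContent = '@' + name;
  }
}

customElements.define('user-avatar', UserAvatar);

// Usable from markup (<user-avatar name="octocat"></user-avatar>) or from script:
const avatar = document.createElement('user-avatar');
avatar.setAttribute('name', 'octocat');
document.body.appendChild(avatar);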


Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications

Bhagyashree R
30 Oct 2018
2 min read
Yesterday, Google announced that Chrome 70 now supports WebAssembly threads. The WebAssembly Community Group has been working to bring support for threads to the web, and this is a step towards that effort. Google's open source JavaScript and WebAssembly engine, V8, has implemented all the necessary support for WebAssembly threads.

Why is support for WebAssembly threads needed?

Earlier, parallelism in browsers was supported with the help of web workers. The downside of web workers is that they do not share mutable data between them; instead, they rely on message passing for communication. On the other hand, WebAssembly threads can share the same Wasm memory. The underlying storage of shared memory is enabled by SharedArrayBuffer, a JavaScript primitive that allows sharing the contents of a single ArrayBuffer concurrently between workers. Each WebAssembly thread runs in a web worker, but their shared Wasm memory allows them to work as fast as they do on native platforms. This means that applications which use Wasm threads are responsible for managing access to the shared memory, as in any traditional threaded application.

How you can try this support

To test WebAssembly modules you need to turn on the experimental WebAssembly threads support in Chrome 70 onwards:

First, navigate to the chrome://flags URL in your browser.
Next, go to the experimental WebAssembly threads setting.
Now change the setting from Default to Enabled and then restart your browser.

The aforementioned steps are for development purposes. In case you are interested in testing your application out in the field, you can do that with an origin trial. Origin trials allow you to try experimental features with your users by obtaining a testing token that's tied to your domain.

You can read more in detail about the WebAssembly thread support in Chrome 70 on the Google Developers blog.

Chrome 70 releases with support for Desktop Progressive Web Apps on Windows and Linux
Testing WebAssembly modules with Jest [Tutorial]
Introducing Walt: A syntax for WebAssembly text format written 100% in JavaScript and needs no LLVM/binary toolkits
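On the JavaScript side, the shared memory underpinning Wasm threads looks roughly like this; a hedged sketch where 'worker.js' is a placeholder script, while WebAssembly.Memory, SharedArrayBuffer, and Atomics are standard APIs:

// A shared WebAssembly memory is backed by a SharedArrayBuffer:
const memory = new WebAssembly.Memory({ initial: 1, maximum: 16, shared: true });

// Hand the underlying buffer to a worker so both threads see the same bytes:
const worker = new Worker('worker.js');
worker.postMessage(memory.buffer);

// Coordinate concurrent access with Atomics:
const shared = new Int32Array(memory.buffer);
Atomics.store(shared, 0, 42);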

Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular

Richard Gall
23 Jan 2019
4 min read
Ionic today released Ionic Framework 4.0. The release is a complete rebuild of the popular JavaScript framework for developing mobile and desktop apps. Although, up until now, Ionic was built using Angular components, this new version has instead been built using Web Components. This is significant, as it changes the whole ball game for the project: it means Ionic Framework is now an app development framework that can be used alongside any front-end framework, not just Angular.

The shift away from Angular makes a lot of sense for the project. It now has the chance to grow adoption beyond the estimated five million developers around the world already using the framework. While in the past Ionic could only be used by Angular developers, it now opens up new options for development teams; rather than exacerbating a talent gap in many organizations, it could instead help ease it. However, although it looks like Ionic is taking a significant step away from Angular, it's important to note that, at the moment, Ionic Framework 4.0 is only out on general availability for Angular; it's still only in alpha for Vue.js and React.

Ionic Framework 4.0 and open web standards

Although the move to Web Components is the stand-out change in Ionic Framework 4.0, it's also worth noting that the release has been developed in accordance with open web standards. This has been done, according to the team, to help organizations develop Design Systems (something the Ionic team wrote about just a few days ago): essentially, using a set of guidelines and components that can be reused across multiple platforms and products to maintain consistency across various user experience touch points.

Why did the team make the changes in Ionic Framework 4.0 that they did?

According to Max Lynch, Ionic Framework co-founder and CEO, the changes present in Ionic Framework 4.0 should help organizations achieve brand consistency quickly, and give development teams the option of using Ionic with their JavaScript framework of choice. Lynch explains: "When we look at what’s happening in the world of front-end development, we see two major industry shifts... First, there’s a recognition that the proliferation of proprietary components has slowed down development and created design inconsistencies that hurt users and brands alike. More and more enterprises are recognizing the need to adopt a design system: a single design spec, or library of reusable components, that can be shared across a team or company. Second, with the constantly evolving development ecosystem, we recognized the need to make Ionic compatible with whatever framework developers wanted to use—now and in the future. Rebuilding our Framework on Web Components was a way to address both of these challenges and future-proof our technology in a truly unique way."

What does Ionic Framework 4.0 tell us about the future of web and app development?

Ionic Framework 4.0 is a really interesting release, as it tells us a lot about where web and app development is today. It confirms, for example, that Angular's popularity is waning. It also suggests that Web Components are going to be the building blocks of the web for years to come, regardless of how frameworks evolve. As Lynch writes in a blog post introducing Ionic Framework 4.0, "in our minds, it was clear Web Components would be the way UI libraries, like Ionic, would be distributed in the future. So, we took a big bet and started porting all 100 of our components over."
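Because the components now ship as standard Custom Elements, using one needs no framework at all. A hedged sketch, assuming Ionic's core bundle (for example, the @ionic/core package) has been loaded on the page:

customElements.whenDefined('ion-button').then(() => {
  // Once the element is registered, it behaves like any built-in tag:
  const button = document.createElement('ion-button');
  button.textContent = 'Continue';
  document.body.appendChild(button);
});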
Ionic Framework 4.0 also suggests that Progressive Web Apps are here to stay. Lynch writes in the blog post linked to above that "for Ionic to reach performance standards set by Google, new approaches for asynchronous loading and delivery were needed." To do this, he explains, the team "spent a year building out a web component pipeline using Stencil to generate Ionic’s components, ensuring they were tightly packed, lazy loaded, and delivered in smart collections consisting of components you’re actually using." The time taken to ensure that the framework could meet those standards - essentially, that it could support high-performance PWAs - underscores that this will be one of the key use cases for Ionic in the years to come.


Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!

Bhagyashree R
24 Sep 2018
3 min read
Last week, the Nuxt.js community announced the release of Nuxt.js 2.0 with major improvements. This release comes with a scaffolding tool, create-nuxt-app, to quickly get you started with Nuxt.js development. To provide faster boot-up and re-compilation, this release is upgraded to Webpack 4 (Legato) and Babel 7.

Nuxt.js is an open source web application framework for creating Vue.js applications. You can choose between universal, statically generated, or single-page applications.

What is new in Nuxt.js 2.0?

Introduced features and upgrades

create-nuxt-app: To get started quickly with Nuxt.js development, you can use the newly introduced create-nuxt-app tool. This tool includes all the Nuxt templates, such as the starter template, express templates, and so on. With create-nuxt-app you can choose between integrated server-side frameworks and UI frameworks, and add the axios module.

Introducing nuxt-start and nuxt-legacy: To start a Nuxt.js application in production mode, nuxt-start is introduced. To support legacy builds of Nuxt.js for Node.js < 8.0.0, nuxt-legacy is added.

Upgraded to Webpack 4 and Babel 7: To provide faster boot-up time and faster re-compilation, this release uses Webpack 4 (Legato) and Babel 7.

ESM supported everywhere: In this release, ESM is supported everywhere. You can now use export/import syntax in nuxt.config.js, serverMiddleware, and modules.

Replace postcss-next with postcss-preset-env: Due to the deprecation of cssnext, you have to use postcss-preset-env instead of postcss-cssnext.

Use ~assets instead of ~/assets: Due to the css-loader upgrade, use ~assets instead of ~/assets for aliases in the <url> CSS data type, for example, background: url("~assets/banner.svg").

Improvements

The HTML script tag in core/renderer.js is fixed to pass W3C validation.
The background-color property is now replaced with background in loadingIndicator, to allow the use of images and gradients for your background in SPA mode.
Due to server/client artifact isolation, external build.publicPath needs to upload built content to the .nuxt/dist/client directory instead of .nuxt/dist.
webpackbar and consola provide an improved CLI experience and better CI compatibility.
Template literals in lodash templates are disabled.
Better error handling if the specified plugin isn't found.

Deprecated features

The vendor array isn't supported anymore.
DLL support is removed because it was not stable enough.
AggressiveSplittingPlugin is obsolete; users can use optimization.splitChunks.maxSize instead.
The render.gzip option is deprecated; users can use render.compressor instead.

To read more about the updates, check out Nuxt's official announcement on Medium and also see the release notes on its GitHub repository.

Next.js 7, a framework for server-rendered React applications, releases with support for React context API and Webassembly
low.js, a Node.js port for embedded systems
Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js
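As a quick illustration of the two headline items (scaffolding and ESM config); the module listed is the optional axios module the scaffolder offers, and the exact prompts may differ:

// Scaffold a new project from a terminal:
//   npx create-nuxt-app my-project

// nuxt.config.js -- export/import syntax is now supported here:
export default {
  mode: 'universal', // or 'spa'
  modules: [
    '@nuxtjs/axios',
  ],
};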

Angular 7 is now stable

Bhagyashree R
19 Oct 2018
3 min read
After releasing Angular 7 beta in August, the Angular team released the stable version of Angular 7 yesterday. This version comes with a DoBootstrap interface, an update to XMB placeholders, and updated dependencies: TypeScript 3.1 and RxJS 6.3. Let's see what features this version brings in:

The DoBootstrap interface
Earlier, there was an interface available for each lifecycle hook, but ngDoBootstrap was missing its corresponding interface. This release adds the new lifecycle hook interface DoBootstrap. Here's an example using DoBootstrap:

class AppModule implements DoBootstrap {
  ngDoBootstrap(appRef: ApplicationRef) {
    appRef.bootstrap(AppComponent);
  }
}

An "original" placeholder value on extracted XMB
XMB placeholders (<ph>) are now updated to include the original value on top of an example. By definition, placeholders have one example tag (<ex>) and a text node; the text node is used as the original value from the placeholder, while the example represents a dummy value.

compiler-cli
An option is added to extend angularCompilerOptions in tsconfig. Currently, only TypeScript supports merging and extending of compilerOptions. This update allows extending and inheriting angularCompilerOptions from multiple files.

A new parameter for the CanLoad interface
In addition to Route, a UrlSegment[] is passed to implementations of CanLoad as a second parameter. It contains the array of path elements the user tried to navigate to before canLoad is evaluated. This helps users store the initial URL segments and refer to them later, for example, to go back to the original URL after authentication via router.navigate(urlSegments). Existing code still works as before because the second function parameter is not mandatory. A sketch of a guard using this parameter follows at the end of this article.

Dependency updates
The following dependencies are upgraded to their latest versions:
- @angular/core now depends on TypeScript 3.1 and RxJS 6.3
- @angular/platform-server now depends on Domino 2.1

Bug fixes
Along with these added features, Angular 7 comes with many bug fixes, some of which are:
- Mappings are added for ngfactory and ngsummary files to their module names in the AOT summary resolver.
- fileNameToModuleName lookups are now cached to save expensive reparsing of the file when not run as a Bazel worker.
- It is allowed to privately import compile_strategy.
- Earlier, a test used to fail when an attempt was made to bootstrap a component that includes a router config using AOT summaries. This is fixed in this release.
- The compiler is updated to flatten nested template fns and to generate new slot allocations.

To read the full list of changes, check out Angular's GitHub repository.

Angular 7 beta.0 is here!
Why switch to Angular for web development – Interview with Minko Gechev
ng-conf 2018 highlights, the popular angular conference
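To make the CanLoad change concrete, here is a sketch of a guard using the new second parameter. Only the canLoad(route, segments) signature comes from the release; AuthGuard, AuthService, and the redirect logic are hypothetical names invented for this example:

import { Injectable } from '@angular/core';
import { CanLoad, Route, Router, UrlSegment } from '@angular/router';

// AuthService is a hypothetical service with an isLoggedIn() check and a
// redirectUrl property for storing the attempted URL; its import is omitted.
@Injectable({ providedIn: 'root' })
export class AuthGuard implements CanLoad {
  constructor(private auth: AuthService, private router: Router) {}

  canLoad(route: Route, segments: UrlSegment[]): boolean {
    if (this.auth.isLoggedIn()) {
      return true;
    }
    // Rebuild the URL the user tried to reach from the new UrlSegment[]
    // parameter, so we can navigate back to it after authentication.
    this.auth.redirectUrl = '/' + segments.map(s => s.path).join('/');
    this.router.navigate(['/login']);
    return false;
  }
}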

Introducing Howler.js, a Javascript audio library with full cross-browser support

Bhagyashree R
01 Nov 2018
2 min read
Developed by GoldFire Studios, Howler.js is an audio library for the modern web that makes working with audio in JavaScript easy and reliable across all platforms. It defaults to the Web Audio API and falls back to HTML5 Audio to provide support for all browsers and platforms, including IE9 and Cordova. Originally developed for an HTML5 game engine, it can be used just as well for any other audio-related function in web applications.

Features of Howler.js
- Single API for all audio needs: It provides a simple and consistent API to make it easier to build audio experiences in your application.
- Audio sprites: For more precise playback and lower resource use, you can define and control segments of files with audio sprites.
- Supports all codecs: It supports all major codecs, such as MP3, MPEG, OPUS, OGG, OGA, WAV, AAC, CAF, M4A, MP4, WEBA, WEBM, DOLBY, and FLAC.
- Auto-caching for improved performance: It automatically caches loaded sounds so they can be reused on subsequent calls for better performance and bandwidth.
- Modular architecture: Its modular architecture helps you easily use and extend the library to add custom features.

Which browsers does it support?
Howler.js is compatible with the following:
- Google Chrome 7.0+
- Internet Explorer 9.0+
- Firefox 4.0+
- Safari 5.1.4+
- Mobile Safari 6.0+
- Opera 12.0+
- Microsoft Edge

Read more about Howler.js on its official website and also check out its GitHub repository.

npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
The Ember project releases version 3.5 of Ember.js, Ember Data, and Ember CLI
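As a quick illustration of the single API and audio sprites described above, here is a minimal sketch; the file names and sprite timings are made up for the example:

import { Howl } from 'howler';

// Howler picks the first source format the browser can play, using
// Web Audio where available and falling back to HTML5 Audio.
const sound = new Howl({
  src: ['sounds/effects.webm', 'sounds/effects.mp3'],
  sprite: {
    blast: [0, 3000],           // segment starting at 0ms, 3000ms long
    engine: [4000, 10000, true] // segment that loops while playing
  }
});

sound.play('blast'); // play a named sprite segment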

Pi-hole 4.3.2 removes adblock style lists support and implements many core and web interface changes

Vincy Davis
25 Sep 2019
3 min read
Last week, Pi-hole, the open-source Linux network-level advertisement and internet tracker blocking application, released its latest version, Pi-hole 4.3.2. It includes many changes to its core and web interfaces. Users can run pihole -up from a terminal session to update to this version.

Adam Warner, one of the core contributors to Pi-hole, revealed that the major change in this release is the removal of support for adblock-style lists like Easylist/Easyprivacy. He alerted users that this may lead to a reduction in the number of domains blocked by Pi-hole. Warner also specified the motive behind the removal of adblock support: "these lists were never designed to be parsed into a HOST formatted file, and while it may catch some domains, there are far too many false positives produced by using them in this way. If you have lists in this format, Pi-hole will now ignore them, and attempts to get around the detection will likely end up with a broken gravity list." A side-by-side illustration of the two list formats follows at the end of this article.

Pi-hole uses dnsmasq, cURL, lighttpd, PHP, and other tools to block Domain Name System (DNS) requests for known tracking and advertising domains. Intended for a private network, Pi-hole is implemented on embedded devices with network capabilities, like the Raspberry Pi. A Pi-hole can also block traditional website adverts on smart TVs, mobile operating systems, and more. If Pi-hole receives a request for an advertising or tracking domain, it does not resolve the requested domain and instead responds to the requesting device with a blank webpage.

Users are happy with the Pi-hole 4.3.2 release and are full of praise for it on Hacker News. One user said, "I'm a huge fan of this project! I have 3 set-up right now. One as a container on my Nuc at home for myself, and 2 other on old Pi's (one is a 1st gen B model) for family. A simple job to run every 2 months keeps everything up to date. For myself, I use Wireguard to only forward DNS packets to the PiHole when I'm outside the house. If you install a PiHole your help desk calls from family will drop by 90% (personal experience)."

Another user commented, "I have Pi-hole running on my LAN and it's amazing. It also helped me identify that my Amcrest PoE security cameras aggressively phone home, even when no cloud functionality is configured on them. All the reasons to keep them on their own VLAN and off the Internet."

Another comment read, "One unadvertised advantage of pi-hole is monitoring and blocking sites that you don't want kids to use, such as the thousands of io-games and whatnot."

Check out the Pi-hole 4.3.2 release notes for the full list of updates in this release.

Brave ad-blocker gives 69x better performance with its new engine written in Rust
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
Opera Touch browser lets you browse with one hand on your iPhone, comes with e2e encryption and built-in ad blockers too!
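For context on the list-format change described above, here is what the two formats look like side by side; the domain is illustrative:

# HOSTS-formatted entry: the format Pi-hole's gravity list expects
0.0.0.0 ads.example.com

# Adblock/Easylist-style rule: ignored by Pi-hole as of 4.3.2
||ads.example.com^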