
Tech News - Front-End Web Development

158 Articles

Exploring Forms in Angular – types, benefits and differences     

Expert Network
21 Jul 2021
11 min read
While developing a web application, or setting up dynamic pages and meta tags, we need to deal with multiple input elements and value types. Handling these by hand could seriously hinder our work in terms of data flow control, data validation, and user experience.

This article is an excerpt from the book ASP.NET Core 5 and Angular, Fourth Edition by Valerio De Sanctis – a revised edition of a bestseller that includes coverage of the Angular routing module, expanded discussion of the Angular CLI, and detailed instructions for deploying apps on Azure, as well as on both Windows and Linux.

Sure, we could easily work around most of the issues by implementing some custom methods within our form-based components; we could throw in some checks such as isValid(), isNumber(), and so on here and there, and then hook them up to our template syntax and show/hide the validation messages with the help of structural directives such as *ngIf, *ngFor, and the like. However, that would be a horrible way to address our problem; we didn't choose a feature-rich client-side framework such as Angular to work that way.

Luckily enough, we have no reason to do that, since Angular provides us with a couple of alternative strategies to deal with these common form-related scenarios:

- Template-Driven Forms
- Model-Driven Forms, also known as Reactive Forms

Both are tightly coupled with the framework and thus extremely viable; they both belong to the @angular/forms library and share a common set of form control classes. However, they also have their own specific sets of features, along with their pros and cons, which could ultimately lead us to choose one of them.

Let's try to quickly summarize these differences.

Template-Driven Forms

If you've come from AngularJS, there's a high chance that the Template-Driven approach will ring a bell or two. As the name implies, Template-Driven Forms host most of the logic in the template code; working with a Template-Driven Form means:

- Building the form in the .html template file
- Binding data to the various input fields using ngModel instances
- Using a dedicated ngForm object related to the whole form and containing all the inputs, with each being accessible through its name

These things need to be done to perform the required validity checks. To understand this, here's what a Template-Driven Form looks like:

```html
<form novalidate autocomplete="off" #form="ngForm" (ngSubmit)="onSubmit(form)">
  <input type="text" name="name" value="" required
    placeholder="Insert the city name..."
    [(ngModel)]="city.Name" #name="ngModel" />
  <span *ngIf="(name.touched || name.dirty) && name.errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>
```

Here, we can access any element, including the form itself, through some convenient aliases – the attributes with the # sign – and check their current states to create our own validation workflow. These states are provided by the framework and will change in real time, depending on various things: touched, for example, becomes true once the control has been visited at least once; dirty, which is the opposite of pristine, means that the control value has changed; and so on.
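For context, here's a minimal sketch of the component class such a template could bind to. This is our own illustration rather than code from the book: the class name and file path are assumptions inferred from the bindings above, and it presumes FormsModule is imported in the application module:

```ts
import { Component } from '@angular/core';
import { NgForm } from '@angular/forms';

@Component({
  selector: 'app-city-form',
  templateUrl: './city-form.component.html',
})
export class CityFormComponent {
  // Backing model for the [(ngModel)]="city.Name" two-way binding.
  city = { Name: '' };

  // Receives the #form="ngForm" reference via (ngSubmit)="onSubmit(form)".
  onSubmit(form: NgForm) {
    if (form.valid) {
      console.log('Submitting city:', this.city.Name);
    }
  }
}
```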
We used both touched and dirty in the form template above because we want our validation message to be shown only if the user moves their focus to the <input name="name"> element and then leaves, having left it blank by either deleting its value or never setting it.

These are Template-Driven Forms in a nutshell; now that we've had an overall look at them, let's try to summarize the pros and cons of this approach.

Here are the main advantages of Template-Driven Forms:

- They are very easy to write. We can recycle most of our HTML knowledge (assuming that we have any). On top of that, if we come from AngularJS, we already know how well we can make them work once we've mastered the technique.
- They are rather easy to read and understand, at least from an HTML point of view; we have a plain, understandable HTML structure containing all the input fields and validators, one after another. Each element will have a name, a two-way binding with the underlying ngModel, and (possibly) Template-Driven logic built upon aliases that are hooked to other elements we can also see, or to the form itself.

Here are their weaknesses:

- Template-Driven Forms require a lot of HTML code, which can be rather difficult to maintain and is generally more error-prone than pure TypeScript.
- For the same reason, these forms cannot be unit tested. We have no way to test their validators or to ensure that the logic we implemented will work, other than running an end-to-end test with our browser, which is hardly ideal for complex forms.
- Their readability will quickly drop as we add more and more validators and input tags. Keeping all the logic within the template might be fine for small forms, but it does not scale well when dealing with complex data items.

Ultimately, we can say that Template-Driven Forms might be the way to go when we need to build small forms with simple data validation rules, where we can benefit the most from their simplicity. On top of that, they are quite similar to the typical HTML code we're already used to (assuming that we have a plain HTML development background); we just need to learn how to decorate the standard <form> and <input> elements with aliases, throw in some validators handled by structural directives such as the ones we've already seen, and we'll be set in (almost) no time.

For additional information on Template-Driven Forms, we highly recommend reading the official Angular documentation at https://angular.io/guide/forms

That being said, the lack of unit testing, the HTML code bloat that they eventually produce, and the scaling difficulties will sooner or later lead us toward an alternative approach for any non-trivial form.

Model-Driven/Reactive Forms

The Model-Driven approach was specifically added in Angular 2+ to address the known limitations of Template-Driven Forms. The forms implemented with this alternative method are known as Model-Driven Forms or Reactive Forms – two names for the exact same thing.

The main difference here is that (almost) nothing happens in the template, which acts as a mere reference to a more complex TypeScript object that gets defined, instantiated, and configured programmatically within the component class: the form model.

To understand the overall concept, let's try to rewrite the previous form in a Model-Driven/Reactive way.
The outcome of doing this is as follows:

```html
<form [formGroup]="form" (ngSubmit)="onSubmit()">
  <input formControlName="name" required />
  <span *ngIf="(form.get('name').touched || form.get('name').dirty)
        && form.get('name').errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>
```

As we can see, the amount of required template code is much lower. Here's the underlying form model that we will define in the component class file:

```ts
import { OnInit } from '@angular/core';
import { FormGroup, FormControl } from '@angular/forms';

class ModelFormComponent implements OnInit {
  form: FormGroup;

  ngOnInit() {
    this.form = new FormGroup({
      name: new FormControl()
    });
  }
}
```

Let's try to understand what's happening here:

- The form property is an instance of FormGroup and represents the form itself.
- FormGroup, as the name suggests, is a container of form controls sharing the same purpose. As we can see, the form itself acts as a FormGroup, which means that we can nest FormGroup objects inside other FormGroup objects (we didn't do that in our sample, though).
- Each data input element in the form template – in the preceding code, name – is represented by an instance of FormControl.
- Each FormControl instance encapsulates the related control's current state, such as valid, invalid, touched, and dirty, as well as its actual value.
- Each FormGroup instance encapsulates the state of its child controls, meaning that it will only be valid if/when all of its children are also valid.

Also, note that we have no way of accessing the FormControls directly, like we did in Template-Driven Forms; we have to retrieve them using the .get() method of the main FormGroup, which is the form itself.
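As a brief aside (our own illustrative sketch, not an excerpt from the book), here's what that programmatic access enables – nested groups, control retrieval via .get(), and reacting to value changes; the location sub-group is an invented example:

```ts
import { FormControl, FormGroup, Validators } from '@angular/forms';

const form = new FormGroup({
  name: new FormControl('', Validators.required),
  // FormGroups can nest: "location" is a hypothetical sub-group.
  location: new FormGroup({
    lat: new FormControl(0),
    lon: new FormControl(0),
  }),
});

// Controls are retrieved through the parent group; dot paths reach nested ones.
const nameCtrl = form.get('name')!;
const latCtrl = form.get('location.lat')!;

// React to user changes programmatically, e.g. for live validation or autosave.
nameCtrl.valueChanges.subscribe(value => {
  console.log('name changed to:', value);
});
```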
At first glance, the Model-Driven template doesn't seem too different from the Template-Driven one; we still have a <form> element, an <input> element hooked to a <span> validator, and a submit button. On top of that, checking the state of the input elements takes a bigger amount of source code, since they have no aliases we can use. What's the real deal, then?

To help us visualize the difference, let's look at the following diagrams. Here's a schema depicting how Template-Driven Forms work:

Fig 1: Template-Driven Forms schematic

By looking at the arrows, we can easily see that, in Template-Driven Forms, everything happens in the template: the HTML form elements are directly bound to the DataModel component represented by a property filled with an asynchronous HTML request to the Web Server, much like we did with our cities and country table. That DataModel will be updated as soon as the user changes something, unless a validator prevents them from doing that. If we think about it, we can easily understand how there isn't a single part of the whole workflow that happens to be under our control; Angular handles everything by itself, using the information in the data bindings defined within our template. This is what Template-Driven actually means: the template is calling the shots.

Now, let's take a look at the Model-Driven Forms (or Reactive Forms) approach:

Fig 2: Model-Driven/Reactive Forms schematic

As we can see, the arrows depicting the Model-Driven Forms workflow tell a whole different story. They show how the data flows between the DataModel component – which we get from the Web Server – and a UI-oriented form model that retains the states and the values of the HTML form (and its child input elements) presented to the user. This means that we'll be able to get in between the data and the form control objects and perform a number of tasks firsthand: push and pull data, detect and react to user changes, implement our own validation logic, perform unit tests, and so on.

Instead of being superseded by a template that's not under our control, we can track and influence the workflow programmatically, since the form model that calls the shots is also a TypeScript class; that's what Model-Driven Forms are about. This also explains why they are also called Reactive Forms – an explicit reference to the Reactive programming style that favors explicit data handling and change management throughout the workflow.

Summary

In this article, we focused on the Angular framework and the two form design models it offers: the Template-Driven approach, mostly inherited from AngularJS, and the Model-Driven or Reactive alternative. We took some valuable time to analyze the pros and cons of both, and then made a detailed comparison of the underlying logic and workflow. At the end of the day, we chose the Reactive way, as it gives the developer more control and enforces a more consistent separation of duties between the Data Model and the Form Model.

About the author

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips on the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He's the founder and owner of Ryadel and the author of many best-selling books on back-end and front-end web development.


Firefox 71 released with new developer tools features

Savia Lobo
04 Dec 2019
5 min read
Yesterday, the Firefox team announced its latest version, Firefox 71. This version includes a plethora of new developer tools features, such as a WebSocket message inspector, a console multi-line editor mode, log on events, and network panel full-text search. Many of these features were first made available in the Firefox Developer Edition and later improved based on feedback. Other highlights in Firefox 71 include new web platform features such as CSS subgrid, column-span, Promise.allSettled, and the Media Session API.

What's new in Firefox 71?

Improvements in speed and reliability

In Firefox 71, the team took some help from the JavaScript team by improving the caching of scripts during startup, which makes both Firefox and DevTools start faster. "One Console test got an astonishing 40% improvement while times across every panel were boosted by 8-15%", the official blog post mentions. Also, links to scripts – for example, from the event handler tooltip in the Inspector or the stack traces in the Console – now reliably get you to the expected line, and debugging sources loaded through eval() now also works as expected.

WebSocket Message Inspector

In Firefox 71, the Network panel has a new Messages tab where you can observe all messages sent and received through a WebSocket connection (image: Mozilla Hacks). Sent frames have a green up-arrow icon, while received frames have a red down-arrow icon. You can click on an individual frame to view its formatted data. You can learn more about the WebSocket Message Inspector in the official post.

Console multi-line editor mode

Another developer tools feature in Firefox 71 is the new multi-line console. It combines the benefits of IDEs for authoring code with the workflow of repeatedly executing code in the context of the page. If you open the regular console, you'll see a new icon at the end of the prompt row; clicking it switches the console to multi-line mode (images: Mozilla Hacks). Here you can enter multiple lines of code, pressing Enter after each one, and then run the code using Ctrl + Enter. You can also move between statements using the next and previous arrows. The editor includes the regular IDE features you'd expect, such as open/close bracket pair highlighting and automatic indentation.

Inline variable preview in Debugger

The JavaScript Debugger now provides inline variable previewing, which is a useful timesaver when stepping through your code. In previous versions, you had to scroll through the scope panel to find variable values or hover over a variable in the source pane. Now, when execution pauses, you can view relevant variable and property values directly in the source (image: Mozilla Hacks). Using Babel-powered source mapping, the preview also works for variables that have been renamed or minified by build steps. Make sure to enable this power-feature by checking Map in the Scopes pane.

Log on Event Listeners

There have been a few updates to the event listener breakpoints in Firefox 71. Log on events lets you explore which event handlers are being fired, and in which order, without the need for pausing and stepping. If we choose to log keyboard events, for example, the code no longer pauses as each event is fired (image: Mozilla Hacks); instead, we can switch to the console, and whenever we press a key we are given a log of where the related events were fired.
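If you want frames to inspect in the new Messages tab, any page that opens a WebSocket will do. Here's a minimal, hedged sketch – the endpoint URL is a placeholder, not something from the Mozilla announcement:

```ts
// Open a connection; frames appear under Network > Messages in Firefox 71.
const socket = new WebSocket('wss://example.com/echo'); // placeholder endpoint

socket.addEventListener('open', () => {
  // Shows up as a sent frame (green up-arrow icon).
  socket.send(JSON.stringify({ type: 'ping', sentAt: Date.now() }));
});

socket.addEventListener('message', (event) => {
  // Shows up as a received frame (red down-arrow icon).
  console.log('received frame:', event.data);
});
```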
CSS improvements

The new CSS in Firefox 71 includes subgrid, multicol column-span, clip-path: path(), and aspect ratio mapping.

Subgrid

Enabled in 71 after being supported behind a pref for a while, the subgrid value of grid-template-columns and grid-template-rows allows you to create a nested grid inside a grid item that will use the main grid's tracks. This means that grid items inside the subgrid will line up with the parent's grid tracks, making various layout techniques much easier.

Multicol – column-span

CSS multicol support has moved forward in a big way with the inclusion of the column-span property in Firefox 71. This allows you to make an element span across all the columns in a multicol container (generated using column-width or column-count).

clip-path: path()

The path() value of the clip-path property is now enabled by default. This allows you to create a custom mask shape using a path() function, as opposed to a predefined shape like a circle or ellipse.

Aspect ratio mapping

Finally, the height and width HTML attributes on the <img> element are now mapped to an internal aspect-ratio property. This allows the browser to calculate the image's aspect ratio early on and correct its display size before the image has loaded, in cases where the applied CSS would otherwise cause problems with the display size.

There are also a few minor JavaScript changes in this release, including Promise.allSettled(), the Media Session API, and WebGL multiview.

A lot of users are excited about this release and are looking forward to trying it out.

https://twitter.com/IshSookun/status/1201897724943036417
https://twitter.com/awhite/status/1202163413021077504

To know more about this news in detail, read the Firefox 71 official announcement.

The new WebSocket Inspector will be released in Firefox 71
Firefox 70 released with better security, CSS, and JavaScript improvements
Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70


Google Chrome 'secret' experiment crashes browsers of thousands of IT admins worldwide

Sugandha Lahoti
18 Nov 2019
4 min read
On Thursday last week, thousands of IT admins were left aghast when their Google Chrome browsers went blank – the White Screen of Death – effectively crashing the browser. This happened because Google was silently experimenting with a new WebContents Occlusion feature.

The WebContents Occlusion feature is designed to suspend Chrome tabs when you move other apps on top of them, reducing resource usage when the browser isn't in use. The feature is expected to reduce battery usage (for Chrome and other apps running on the same machine). It had been under testing in the Chrome Canary and Chrome Beta releases. Last week, however, Google decided to test it in the main stable release so it could get more feedback on how it behaved.

"The experiment/flag has been on in beta for ~5 months," said David Bienvenu, a Google Chrome engineer, in a Chromium bug thread. "It was turned on for stable (e.g., M77, M78) via an experiment that was pushed to released Chrome Tuesday morning."

The main issue was that this experiment was released silently to the stable channel, without IT admins or users being warned about the change. Naturally, Chrome users were left confused and vented their anger and complaints on Google Chrome's support forum. Affected business users included those running Chrome in Windows Server "terminal server" environments and on Citrix servers. With the browser crashing, employees working in tightly controlled enterprise environments were unable to switch browsers, impacting business-critical functionality.

After multiple complaints from businesses and users, Google rolled back the change late on Thursday night. "I'll rollback the launch of this experiment and try to figure out how to deal with Citrix," noted Bienvenu in the bug thread. Later, a new Chrome configuration file was pushed out to users. "I believe it's more of a pull than a push thing," Bienvenu said, "so once the update is live on the Google servers, the next time you launch Chrome, you should get the new config."

Google's Chrome experiment left IT admins confused

Many IT admins were also angry that they wasted valuable resources and time trying to fix issues in their environment, thinking it was their own fault. "We spent the better part of yesterday trying to determine if an internal change had occurred in our environment without our knowledge. We did not realize this type of event could occur on Chrome unbeknownst to us. We are already discussing alternative options, none of them are great, but this is untenable," wrote one angry user.

Others urged Google to let them opt out of Chrome experiments: "Would like to be excluded from further experimental changes. We have had the sporadic white screen of deaths over the past few weeks. How would we have ever known it was a part of the 1%? We chalked it off as bad Chrome profiles. We still have fresh memories of the experimental Chrome sound issue. That was very disruptive too. Please test your changes in your internal rdsh/Citrix environment. Please give us the option to opt out of 'experimental' changes. Thank you for your consideration."

Another said, "We've been having random issues for quite some time, and our agents could be in this 1%. This last one was a huge impact on our customer-facing agents, not to mention working all day yesterday and today of troubleshooting. Is there a way to be excluded from these experimental changes?
If Chrome is going to be an enterprise browser, we need stability."

With Google Chrome's mishap, more people are advocating moving to browsers that give more control to their end users. Chrome also came under fire recently when it started experimenting with the Manifest V3 extension platform in the Chrome 80 Canary build. Chrome's ad-blocking changes received overwhelmingly negative feedback, as they can stop many popular ad blockers from working. Other browsers that offer better user privacy and ad-blocking features keep appearing, Brave 1.0 being the latest in the line.

Brave 1.0 releases with focus on user privacy, crypto currency-centric private ads and payment platform
Google starts experimenting with Manifest V3 extension in Chrome 80 Canary build
Expanding WebAssembly beyond the browser with Bytecode Alliance, a Mozilla, Fastly, Intel and Red Hat partnership


Brave 1.0 releases with focus on user privacy, crypto currency-centric private ads and payment platform

Fatema Patrawala
14 Nov 2019
5 min read
Yesterday, Brave, the company co-founded by ex-Mozilla CEO Brendan Eich, launched version 1.0 of its browser for Windows, macOS, Linux, Android, and iOS. In a browser market where users often have to compromise on their privacy, Brave is positioning itself as a fast option that preserves users' privacy with strong default settings, along with a crypto currency-centric private ads and payment platform that allows users to reward content creators.

"Surveillance capitalism has plagued the Web for far too long and we've reached a critical inflection point where privacy-by-default is no longer a nice-to-have, but a must-have. Users, advertisers, and publishers have finally had enough, and Brave is the answer. Brave 1.0 is the browser reimagined, transforming the Web to put users first with a private, browser-based ads and payment platform. With Brave, the Web can be a rewarding experience for all, without users paying with their privacy," said Brendan Eich, co-founder and CEO of Brave Software. "Either we all accept the $330 billion ad-tech industry treating us as their products, exploiting our data, piling on more data breaches and privacy scandals, and starving publishers of revenue; or we reject the surveillance economy and replace it with something better that works for everyone. That's the inspiration behind Brave," he added.

The company also announced last month that Brave has about 8 million monthly active users.

Brave offers a privacy-first approach that natively blocks trackers, invasive ads, and device fingerprinting, which leads to substantial improvements in speed, privacy, security, performance, and battery life. It has default settings to block phishing, malware, and malvertising. Embedded plugins, which have proven to be an ongoing security risk, are disabled by default. Browsing data always stays private and on the user's device, which means Brave will never see or store the data on its servers or sell user data to third parties.

Brave 1.0 key features

Additionally, Brave 1.0 offers some unique features to its users:

Brave Rewards program to fund the open web – By activating Brave Rewards, users can support their favorite publishers and content creators through the integrated Brave wallet on both desktop and mobile. This feature allows users to send Basic Attention Tokens (BAT) as tips for great content, either directly as they browse or by defaulting to recurring monthly payments to continuously support websites they visit frequently. Over 300,000 verified websites are on-boarded on Brave for this program, including The Washington Post, The Guardian, Wikipedia, YouTube, Twitch, Twitter, GitHub, and more.

Brave Ads compensate users for their attention – Brave has a new blockchain-based advertising model that preserves privacy and gives a 70% revenue share, in the form of Basic Attention Tokens (BAT), to users who view Brave ads. These ads are part of a private ad network and the Brave Rewards program, letting users opt in to view relevant privacy-preserving ads in exchange for earning BAT. When users opt into Brave Rewards, Brave ads are enabled by default. Ad matching happens directly on the user's device based on the content viewed, so user data is never sent to anyone, and users see rewarding ads without mass surveillance. Users can also transfer their earned BAT from the wallet and convert it into digital assets and fiat currencies, but they need to complete a verification process with Uphold, a digital money platform.
Brave Shields for automatic ad and tracker blocking – Brave Shields is enabled by default and is customizable from the address bar. It blocks invasive third-party ads, trackers, and autoplay videos immediately, without needing to install any additional programs.

On Hacker News, users have appreciated the way the Brave browser operates and rewards its content consumers as well as creators. One of them explained its functioning in detail:

"I've been using Brave rewards, both as a user and a content maker. It's really great, and I feel this may be a reasonable alternative to the invasive trackers+ads we have today. For the uninitiated, Brave lets users opt-in to Brave rewards:
- You set your browser to reward content creators with Basic Attention Token (BAT). You set a budget (e.g. 10 BAT/month), and Brave distributes it the sites you use most, e.g. if you watch a particular YouTube channel 30% of your browsing time, it will send 30% of 10 BAT each month to that content creator.
- As a user, you can get paid in BAT. You tell Brave if you're willing to see ads, and how often. If so, you get paid in BAT, which you can then distribute to content creators.
Brave ads are different: rather than intrusive in-page ads, Brave ads show up as a notification in your operating system outside of the page. This prevents slow downs of the page, keeping your browsing focused, while still allowing support of content creators. And of course, Brave ads are optional and opt-in."

You can download Brave for free by visiting the official Brave page, Google Play Store, or the App Store.

Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case
Brave ad-blocker gives 69x better performance with its new engine written in Rust
Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event
Brave launches its Brave Ads platform sharing 70% of the ad revenue with its users
Brave Privacy Browser has a 'backdoor' to remotely inject headers in HTTP requests: HackerNews


Snyk’s JavaScript frameworks security report 2019 shares the state of security for React, Angular, and other frontend projects

Bhagyashree R
04 Nov 2019
6 min read
Last week, Snyk, an open-source security platform, published its State of JavaScript frameworks security report 2019. The report mainly focuses on security vulnerabilities and risks in the React and Angular ecosystems. It further covers security practices in other common JavaScript frontend ecosystem projects, including Vue.js, Bootstrap, and jQuery.

https://twitter.com/snyksec/status/1189527376197246977

Key takeaways from the State of JavaScript frameworks security report

Security vulnerabilities in core Angular and React projects

In the report, the 'react', 'react-dom', and 'prop-types' libraries were considered the core modules of React, since they often form the foundation for React web applications. Snyk's research team was able to find three cross-site scripting (XSS) vulnerabilities in total: two in 'react' and one in 'react-dom'. The two vulnerabilities in the 'react' library were present in fairly old versions – the 0.5.x versions and versions prior to 0.14. However, the vulnerability in 'react-dom' was found in a recent release, version 16.x, and its occurrence depends on other preconditions as well, such as using the library within a server-rendering context. These vulnerabilities' Common Vulnerability Scoring System (CVSS) scores ranged from 6.5 to 7.1, which means they were all medium to high severity vulnerabilities.

Coming to Angular, Snyk found 19 vulnerabilities across six different release branches of Angular 1.x, or AngularJS, which is no longer maintained. Angular 1.5 has the highest number of vulnerabilities, with seven in total; out of those seven, three had high severity and four had medium severity. The good news is that with every new version, the vulnerabilities have decreased, both in severity and in count.

Security risks of indirect dependencies

React and Angular projects are often generated with a scaffolding tool that provides a boilerplate: in React we use the 'create-react-app' npm package, while in Angular we use the '@angular/cli' npm package. In sample React and Angular projects created using these scaffolding tools, it was found that both included development dependencies with vulnerabilities; the good news is that neither had any production dependency security issues. "It's worthy to note that Angular relies on 952 dependencies, which contain a total of two vulnerabilities; React relies on 1257 dependencies, containing three vulnerabilities and one potential license compatibility issue," the report states. The report includes the full list of security vulnerabilities found in these sample projects (table: Snyk).

Security vulnerabilities in the Angular module ecosystem

For the purposes of this study, the Snyk research team divided the Angular ecosystem into three areas: Angular ecosystem modules, malicious versions of modules, and developer tooling. The report lists the vulnerable modules found in the Angular module ecosystem (table: Snyk).

As for malicious versions of modules, the report lists three malicious versions of the 'angular-bmap', 'ng-ui-library', and 'ngx-pica' modules. 'angular-bmap' 0.0.9 included malicious code that collected sensitive password and credit card information from forms and sent it to an attacker-controlled URL. Thankfully, this version has now been taken down from the npm registry. 'ng-ui-library' 1.0.987 has the same malicious code as 'angular-bmap' 0.0.9, yet it is still maintained.
The third module, 'ngx-pica' (versions 1.1.4 to 1.1.6), also has the same malicious code as the above two modules. In developer tooling, the 'angular-http-server' module was found vulnerable to directory traversal twice.

Security vulnerabilities in the React module ecosystem

In React's case, Snyk found four malicious packages, namely 'react-datepicker-plus', 'react-dates-sc', 'awesome_react_utility', and 'reactserver-native'. These contain malicious code that harvests credit card and other sensitive information and attacks compromised modules in the React ecosystem.

Notable vulnerable modules found in React's ecosystem during this study:

- The 'react-marked-markdown' module has a high-severity XSS vulnerability with no fix available as of now.
- The 'preact-render-to-string' library is vulnerable to XSS in all versions prior to 3.7.2.
- The 'react-tooltip' library is vulnerable to XSS attacks in all versions prior to 3.8.1.
- The 'react-svg' library has a high-severity XSS vulnerability, disclosed by security researcher Ron Perris, affecting all versions prior to 2.2.18.
- The 'mui-datatables' library has a CSV injection vulnerability.

"When we track all the vulnerable React modules we found, we count eight security vulnerabilities over the last three years with two in 2017, six in 2018 and two up until August 2019. This calls for responsible usage of open source and making sure you find and fix vulnerabilities as quickly as possible," the report suggests.

Along with listing the security vulnerabilities in React and Angular, the report also assesses the overall security posture of the two projects, including secure coding conventions, built-in security capabilities, responsible disclosure policies, and dedicated security documentation.

Vue.js security

In total, four vulnerabilities were detected in the Vue.js core project, spanning December 2017 to August 2018: three medium-severity issues and one low-severity regular expression denial of service vulnerability. As for Vue's module ecosystem, the report lists the following vulnerable modules:

- The 'bootstrap-vue' library has a high-severity XSS vulnerability, disclosed in January 2019, that affects all versions prior to 2.0.0-rc.12.
- The 'vue-backbone' library had a malicious version published.

Bootstrap security

The Snyk research team was able to track a total of seven XSS vulnerabilities in Bootstrap. Out of those seven, three were disclosed in 2019 for recent Bootstrap v3 versions, and three were disclosed in 2018, one of which affects the newer 4.x Bootstrap release. All of these vulnerabilities have security fixes and an upgrade path for users to remediate the risks. Among the vulnerable modules in the Bootstrap ecosystem are:

- The 'bootstrap-markdown' library, which includes an unfixed XSS vulnerability affecting all versions.
- The 'bootstrap-vuejs' library, which has a high-severity XSS vulnerability affecting all versions prior to bootstrap-vue 2.0.0-rc.12.
- The 'bootstrap-select' library, which also includes a high-severity XSS vulnerability.

This article touched upon some of the key findings of the report. Check out the full report by Snyk to know more in detail.

React Native 0.61 introduces Fast Refresh for reliable hot reloading
React Conf 2019: Concurrent Mode preview out, CSS-in-JS, React docs in 40 languages, and more
Vue maintainers proposed, listened, and revised the RFC for hooks in Vue API


Ghost 3.0, an open-source headless Node.js CMS, released with JAMStack integration, GitHub Actions, and more!

Savia Lobo
23 Oct 2019
4 min read
Yesterday, the team behind Ghost, an open-source headless Node.js CMS, announced its new major version, Ghost 3.0. The new version represents "a total of more than 15,000 commits across almost 300 releases". Ghost is now used by the likes of Apple, DuckDuckGo, OpenAI, The Stanford Review, Mozilla, Cloudflare, DigitalOcean, and many others. "To date, Ghost has made $5,000,000 in customer revenue whilst maintaining complete independence and giving away 0% of the business," the official website highlights.

https://twitter.com/Ghost/status/1186613938697338881

What's new in Ghost 3.0?

Ghost on the JAMStack

The team has revamped the traditional architecture using the JAMStack approach, which makes Ghost a completely decoupled headless CMS. This way, users can generate a static site and later add dynamic features to make it more powerful. The new architecture unlocks content management that is fundamentally built via APIs, webhooks, and frameworks to generate robust, modern websites.

Continuous theme deployments with GitHub Actions

The process of manually making a zip, navigating to Ghost Admin, and uploading an update in the browser can be tedious. To deploy Ghost themes to production in a better way, the team decided to integrate with GitHub Actions, making it easy to continuously sync custom Ghost themes to live production sites with every new commit.

New WordPress migration plugin

Earlier versions of Ghost included a very basic WordPress migrator plugin that made it difficult to move data between the platforms or have a smooth experience. The new Ghost 3.0-compatible WordPress migration plugin provides a single-button download of the full WordPress content and image archive in a format that can be dragged and dropped into Ghost's importer.

Those who are new and want to explore Ghost 3.0 can create a new site in a few clicks with an unrestricted 14-day free trial, as all new sites on Ghost (Pro) are running Ghost 3.0. The team encourages users to try out Ghost 3.0 and share feedback on the Ghost forum, or to help out on GitHub with building the next features alongside the Ghost team. Ben Thompson's Stratechery, a subscription-based newsletter featuring in-depth commentary on tech and media news, recently posted an interview with Ghost CEO John O'Nolan, featuring questions on what Ghost is, where it came from, and much more.

Ghost 3.0 has received a positive response from many, not least for the fact that it is moving towards the static-site JAMStack approach. A user on Hacker News commented: "In my experience, Ghost has been the no-nonsense blog CMS that has been stable and just worked with very little maintenance. I like that they are now moving towards static site JAMStack approach, driven by APIs rather than the current SSR model. This lets anybody to customise their themes with the language / framework of choice and generating static builds that can be cached for improved loading times."

Another user, trying Ghost for the first time, commented: "I've never tried Ghost, although their website always appealed to me (one of the best designed website I know). I've been using WordPress for the past 13 years, for personal and also professional projects, which means the familiarity I've built with building custom themes never drew me towards trying another CMS. But going through this blog post announcement, I saw that Ghost can be used as a headless CMS with frontend frameworks. And since I started using GatsbyJS extensively in the past year, it seems like something that would work _really_ well together. Gonna try it out! And congrats on remaining true to your initial philosophy."
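For those wondering what using Ghost headlessly looks like in practice, here's a minimal, hedged sketch of fetching posts from Ghost's read-only Content API, e.g. at static-site build time. The site URL and key are placeholders, and the /ghost/api/v3/content/ path reflects our reading of the Ghost 3.0 docs rather than the announcement itself; it also assumes a runtime where fetch is available:

```ts
// Fetch published posts from a Ghost site's Content API (placeholder URL/key).
async function fetchPosts(): Promise<unknown[]> {
  const res = await fetch(
    'https://your-site.example/ghost/api/v3/content/posts/?key=YOUR_CONTENT_API_KEY'
  );
  if (!res.ok) {
    throw new Error(`Ghost Content API error: ${res.status}`);
  }
  const body = await res.json();
  return body.posts; // each post carries title, slug, html, and so on
}

// A static site generator would call this at build time to render pages.
fetchPosts().then((posts) => console.log('fetched', posts.length, 'posts'));
```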
To know more about the other features in detail, read the official blog post.

Google partners with WordPress and invests $1.2 million in "an opinionated CMS" called Newspack
Verizon sells Tumblr to WordPress parent Automattic for allegedly less than $3 million, a fraction of its acquisition cost
FaunaDB brings its serverless database to Netlify to help developers create apps

Pi-hole 4.3.2 removes adblock style lists support and implements many core and web interface changes

Vincy Davis
25 Sep 2019
3 min read
Last week, Pi-hole, the open-source Linux network-level advertisement and internet tracker blocking application, released its latest version, Pi-hole 4.3.2. It includes many changes to its core and web interfaces. Users can run pihole -up from a terminal session to update to this version.

One of the core contributors to Pi-hole, Adam Warner, revealed that the major change in this release is the removal of support for adblock-style lists like EasyList/EasyPrivacy. He alerted users that this may lead to a reduction in the number of domains blocked by Pi-hole. Warner explained the motive behind the removal of adblock support: "these lists were never designed to be parsed into a HOST formatted file, and while it may catch some domains, there are far too many false positives produced by using them in this way. If you have lists in this format, Pi-hole will now ignore them, and attempts to get around the detection will likely end up with a broken gravity list."

Pi-hole uses dnsmasq, cURL, lighttpd, PHP, and other tools to block Domain Name System (DNS) requests for known tracking and advertising domains. Intended for a private network, Pi-hole is implemented on embedded devices with network capabilities, such as the Raspberry Pi. A Pi-hole can also block traditional website adverts on smart TVs, mobile operating systems, and more. If Pi-hole receives a request for an advert or tracking domain, it does not resolve the requested domain and instead responds to the requesting device with a blank webpage.

Users are happy with the Pi-hole 4.3.2 release and are all praises for it on Hacker News. One user said, "I'm a huge fan of this project! I have 3 set-up right now. One as a container on my Nuc at home for myself, and 2 other on old Pi's (one is a 1st gen B model) for family. A simple job to run every 2 months keeps everything up to date. For myself, I use Wireguard to only forward DNS packets to the PiHole when I'm outside the house. If you install a PiHole your help desk calls from family will drop by 90% (personal experience)."

Another user commented, "I have Pi-hole running on my LAN and it's amazing. It also helped me identify that my Amcrest PoE security cameras aggressively phone home, even when no cloud functionality is configured on them. All the reasons to keep them on their own VLAN and off the Internet."

Another comment read, "One unadvertised advantage of pi-hole is monitoring and blocking sites that you don't want kids to use, such as the thousands of io-games and whatnot."

Check out the Pi-hole 4.3.2 release notes for the full list of updates in this release.

Brave ad-blocker gives 69x better performance with its new engine written in Rust
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
Opera Touch browser lets you browse with one hand on your iPhone, comes with e2e encryption and built-in ad blockers too!


Mozilla announces final four candidates that will replace its IRC network

Bhagyashree R
13 Sep 2019
4 min read
In April this year, Mozilla announced that it would be shutting down its IRC network, stating that it creates "unnecessary barriers to participation in the Mozilla project." Last week, Mike Hoye, the Engineering Community Manager at Mozilla, shared the four final candidates for Mozilla's community-facing synchronous messaging system: Mattermost, Matrix/Riot.im, Rocket.Chat, and Slack.

Mattermost is a flexible, self-hostable, open-source messaging platform that enables secure team collaboration. Riot.im is an open-source instant messaging client based on the federated Matrix protocol. Rocket.Chat is also a free and open-source team chat collaboration platform. The only proprietary option on the shortlist is Slack, a widely used team collaboration hub.

Read also: Slack stock surges 49% on its first trading day on the NYSE after a direct public offering

Explaining how Mozilla shortlisted these messaging systems, Hoye wrote, "These candidates were assessed on a variety of axes, most importantly Community Participation Guideline enforcement and accessibility, but also including team requirements from engineering, organizational-values alignment, usability, utility and cost." He said that though there were a whole lot of options to choose from, these were the ones that best suited Mozilla's current institutional needs and organizational goals.

Mozilla will soon be launching official test instances of each of the candidates for open testing. After the one-month trial period, the team will take feedback in dedicated channels on each of those servers. You can also share your feedback in #synchronicity on IRC.mozilla.org and on a forum that the team will soon create on Mozilla's community Discourse instance.

Mozilla's timeline for transitioning to the final messaging system:

- September 12th to October 9th: Mozilla will run the proof-of-concept trials and accept community feedback.
- October 9th to 30th: It will discuss the feedback, draft a proposed post-IRC plan, and get approval from the stakeholders.
- December 1st: The new messaging system will be started.
- March 1st, 2020: There will be a transition period for support tooling and developers, running from the launch until March 1st, 2020. After this, Mozilla's IRC network will be shut down.

Hoye shared that the internal Slack instance will keep running regardless of the result, to ensure smooth communication. He wrote, "Internal Slack is not going away; that has never been on the table. Whatever the outcome of this process, if you work at Mozilla your manager will still need to be able to find you on Slack, and that is where internal discussions and critical incident management will take place."

In a discussion on Hacker News, many rooted for Matrix. A user commented, "I am hoping they go with Matrix, least then I will be able to have the choice of having a client appropriate to my needs." Another user added, "Man, I sure hope they go the route of Matrix! Between the French government and Mozilla, both potentially using Matrix would send a great and strong signal to the world, that matrix can work for everyone! Fingers crossed!"

Many also appreciated that Mozilla chose three open-source messaging systems. A user commented, "It's great to see 3/4 of the options are open source!
Whatever happens, I really hope the community gets behind the open-source options and don't let more things get eaten up by commercial silos cough slack cough."

Some were not happy that Zulip, an open-source group chat application, was not selected. "I'm sad to see Zulip excluded from the list. It solves the #1 issue with large group chats - proper threading. Nothing worse than waking up to a 1000 message backlog you have to sort through to filter out the information relevant to you. Except for Slack, all of their other choices have very poor threading," a user commented.

Check out Hoye's official announcement to know more in detail.

Other news in web

Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Wasmer's first Postgres extension to run WebAssembly is here!
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3


Safari Technology Preview 91 gets beta support for the WebGPU JavaScript API and WSL

Bhagyashree R
13 Sep 2019
3 min read
Yesterday, Apple announced that Safari Technology Preview 91 now supports the beta version of the new WebGPU graphics API and its shading language, Web Shading Language (WSL). You can enable the WebGPU beta support by selecting Experimental Features > WebGPU in the Developer menu.

The WebGPU JavaScript API

WebGPU is a new graphics API for the web that aims to provide "modern 3D graphics and computation capabilities." It is a successor to WebGL, the JavaScript API that enables 3D and 2D graphics rendering within any compatible browser without the need for a plug-in. WebGPU is being developed in the W3C GPU for the Web Community Group with engineers from Apple, Mozilla, Microsoft, Google, and others.

Read also: WebGL 2.0: What you need to know

Comparing WebGPU and WebGL

WebGPU differs from WebGL in that it is not a direct port of any existing native API, though the two are similar in that both are accessed through JavaScript. The team does, however, plan to make WebGPU accessible through WebAssembly as well in the future.

In WebGL, rendering a single object requires writing a series of state-changing calls. WebGPU, by contrast, combines all the state-changing calls into a single object, named the pipeline state object. It validates the state after the pipeline is created, to prevent expensive state analysis inside the draw call. Also, wrapping an entire pipeline state in a single function call reduces the number of exchanges between JavaScript and WebKit's C++ browser engine. Similarly, while resources in WebGL are bound one by one, WebGPU batches them up into bind groups. The team explains, "In both of these examples, multiple objects are gathered up together and baked into a hardware-dependent format, which is when the browser performs validation. Being able to separate object validation from object use means the application author has more control over when expensive operations occur in the lifecycle of their application."

The main focus areas of WebGPU are improved performance and ease of use compared to WebGL. The team compared the performance of the two using MotionMark, a 2D graphics benchmark. The performance test they wrote measured how many triangles, each with different properties, could be rendered while maintaining 60 frames per second, with each triangle rendered in a separate draw call with its own bind group. WebGPU showed substantially better performance than WebGL (chart: Apple).
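To make the pipeline-state idea above concrete, here's a rough, hedged sketch of creating a single pipeline object up front. The WebGPU descriptor shapes were still in flux in 2019, so this follows the later standardized API (and needs WebGPU type definitions, e.g. @webgpu/types) rather than the exact surface of the Safari preview; the shader entry point names are placeholders:

```ts
// Hypothetical sketch: one pipeline object replaces many WebGL state calls.
async function initPipeline(shaderSource: string): Promise<GPURenderPipeline> {
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter!.requestDevice();

  // Shader text: WSL in this Safari preview, WGSL in the finished standard.
  const module = device.createShaderModule({ code: shaderSource });

  // All draw state is declared once and validated here, at creation time,
  // instead of being re-validated inside every draw call.
  return device.createRenderPipeline({
    layout: 'auto',
    vertex: { module, entryPoint: 'vertexMain' },
    fragment: {
      module,
      entryPoint: 'fragmentMain',
      targets: [{ format: 'bgra8unorm' }],
    },
    primitive: { topology: 'triangle-list' },
  });
}
```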
WHLSL is now renamed to WSL

In November last year, Apple proposed a new shading language for WebGPU named Web High-Level Shading Language (WHLSL), which was source-compatible with HLSL. After receiving community feedback, they updated the language to be compatible with the OpenGL Shading Language (GLSL), which is commonly used among web developers. Apple renamed this version of the language Web Shading Language (WSL) and describes it as "simple, low-level, and fast to compile."

Read also: Introducing Web High Level Shading Language (WHLSL): A graphics shading language for WebGPU

"There are many Web developers using GLSL today in WebGL, so a potential browser accepting a different high-level language, like HLSL, wouldn't suit their needs well. In addition, a high-level language such as HLSL can't be executed faithfully on every platform and graphics API that WebGPU is designed to execute on," the team wrote.

Check out the official announcement by Apple to know more in detail.

Other news in web

Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
New memory usage optimizations implemented in V8 Lite can also benefit V8
Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more


Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more

Fatema Patrawala
04 Sep 2019
2 min read
Laravel 6.0 has been released, building on the improvements in Laravel 5.8. The release introduces semantic versioning, compatibility with Laravel Vapor, improved authorization responses, job middleware, lazy collections, and subquery improvements, along with the extraction of frontend scaffolding to the laravel/ui Composer package and a variety of other bug fixes and usability improvements.

Key features in Laravel 6.0

Semantic versioning

The Laravel framework package now follows the semantic versioning standard. This makes the framework consistent with the other first-party Laravel packages, which already followed this versioning standard.

Laravel Vapor compatibility

Laravel 6.0 provides compatibility with Laravel Vapor, an auto-scaling serverless deployment platform for Laravel. Vapor abstracts away the complexity of managing Laravel applications on AWS Lambda, as well as interfacing those applications with SQS queues, databases, Redis clusters, networks, CloudFront CDN, and more.

Improved exceptions via Ignition

Laravel 6.0 ships with Ignition, a new open-source exception detail page. Ignition offers many benefits over previous releases, such as improved Blade error file and line number handling, runnable solutions for common problems, code editing, exception sharing, and an improved UX.

Improved authorization responses

In previous releases of Laravel, it was difficult to retrieve and expose custom authorization messages to end users, which made it hard to explain to them exactly why a particular request was denied. In Laravel 6.0, this is now much easier using authorization response messages and the new Gate::inspect method.

Job middleware

Job middleware allows developers to wrap custom logic around the execution of queued jobs, reducing boilerplate in the jobs themselves.

Lazy collections

Many developers already enjoy Laravel's powerful Collection methods. To supplement the already powerful Collection class, Laravel 6.0 introduces a LazyCollection, which leverages PHP's generators to let users work with very large datasets while keeping memory usage low.

Eloquent subquery enhancements

Laravel 6.0 introduces several new enhancements and improvements to database subquery support.

To know more about this release, check out the official Laravel blog page.

What's new in web development this week?

Wasmer's first Postgres extension to run WebAssembly is here!
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
Google Chrome 76 now supports native lazy-loading

Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more

Bhagyashree R
26 Aug 2019
3 min read
Last week, the Angular team announced the release of Angular CLI 8.3.0. Along with a redesigned website, this release comes with a new deploy command and improves the previously introduced differential loading.

https://twitter.com/angular/status/1164653064898277378

Key updates in Angular CLI 8.3.0

Deploy directly from the CLI to a cloud platform with the new deploy command

Starting with Angular CLI 8.3.0, you have a new deploy command that executes the deploy CLI builder associated with your project. It is essentially a simple alias for ng run MY_PROJECT:deploy. There are many third-party builders that implement deployment capabilities for different platforms, which you can add to your project with ng add [package name]. After a package with deployment capability is added, your project's angular.json file is automatically updated with a deploy section, and you can then deploy your project simply by executing the ng deploy command. Currently, the deploy command supports deployment to Firebase, Azure, Zeit, Netlify, and GitHub. You can also create a builder yourself to use the ng deploy command in case you are deploying to a self-managed server or there's no builder for the cloud platform you are using.

Improved differential loading

Angular CLI 8.0 introduced the concept of differential loading to maximize the browser compatibility of your web application. Most modern browsers today support ES2015, but there might be cases when your app's users have a browser that doesn't. To target a wide range of browsers, you can use polyfill scripts, shipping a single bundle containing all your compiled code and any polyfills that may be needed. However, this increased bundle size shouldn't affect users who have modern browsers. This is where differential loading comes in: the CLI builds two separate bundles as part of your deployed application. The first bundle targets modern browsers, while the second targets legacy browsers and carries all the necessary polyfills.

Though this increases your application's browser compatibility, the production build previously took twice the time. Angular CLI 8.3.0 fixes this by changing how the command runs: the build targeting ES2015 is produced first and then directly down-leveled to ES5, instead of the app being rebuilt from scratch. If you encounter any issue, you can fall back to the previous behavior with NG_BUILD_DIFFERENTIAL_FULL=true ng build --prod.

Many Angular developers are excited about the new updates in Angular CLI 8.3.0.

https://twitter.com/vikerman/status/1164655906262409216
https://twitter.com/Santosh19742211/status/1164791877356277761

Some did question the usefulness of the deploy command, though. A developer on Reddit shared their perspective: "Honestly, I think Angular and the CLI are already big and complex enough. Every feature possibly creates bugs and needs to be maintained. While the CLI is incredibly useful and powerful there have been also many issues in the past. On the other hand, I must admit that I can't judge the usefulness of this feature: I've never used Firebase. Is it really so hard to deploy on it? Can't this be done with a couple of lines of a shell script? As already said: One should use CI/CD anyway."

To know more in detail about the new features in Angular CLI 8.3.0, check out the official docs. Also, check out the @angular-schule/ngx-deploy-starter repository to create a new builder for utilizing the deploy command.
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0

Microsoft Edge Beta is now ready for you to try

Bhagyashree R
21 Aug 2019
3 min read
Yesterday, Microsoft announced the launch of the first beta builds of its Chromium-based Edge browser. Developers using the supported versions of Windows and macOS can now test it and share their feedback with the Microsoft Edge team.

The preview builds are made available to developers and early adopters through three channels on the Microsoft Edge Insider site: Beta, Canary, and Developer. Earlier this year, Microsoft opened the Canary and Developer channels: Canary builds receive updates every night, while Developer builds are updated weekly. Beta is the third and final preview channel and the most stable of the three; it receives updates every six weeks, along with periodic minor updates for bug fixes and security.

What's new in Microsoft Edge Beta

Microsoft Edge Beta comes with several options for personalizing your browsing experience. It supports a dark theme and offers 14 different languages to choose from. If you are not a fan of the default new tab page, you can customize it with the tab page customizations; there are currently three preset styles to switch between: Focused, Inspirational, and Informational. You can further customize and personalize Microsoft Edge through the various add-ons available on the Microsoft Edge Insider Addons store or other Chromium-based web stores.

On the user privacy front, this release brings support for tracking prevention. Enabling this feature protects you from being tracked by websites that you don't visit, and you can choose from three levels of privacy: Basic, Balanced, and Strict.

Microsoft Edge Beta also includes some of the commercial features announced at Build this year. Microsoft Search is now integrated into Bing, letting you search for OneDrive files directly from Bing search. There is support for Internet Explorer mode, which brings Internet Explorer 11 compatibility directly into Microsoft Edge, and for Windows Defender Application Guard for isolating enterprise-defined untrusted sites.

Along with this release, Microsoft launched the Microsoft Edge Insider Bounty Program, under which researchers who find high-impact vulnerabilities in the Dev and Beta channels can receive rewards of up to US$30,000.

Read Microsoft's official announcement to know more in detail.

Microsoft officially releases Microsoft Edge canary builds for macOS users
Microsoft Edge mobile browser now shows warnings against fake news using NewsGuard
Microsoft Edge Beta available on iOS with a breaking news alert, developer options and more

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70

Bhagyashree R
13 Aug 2019
4 min read
Last year, Apple removed the Extended Validation (EV) certificate indicators from Safari on both iOS 12 and Mojave. Now, Google and Mozilla are following suit by removing the EV visual indicators starting from Chrome 77 and Firefox 70.

What are Extended Validation certificates?

Introduced in 2007, Extended Validation certificates are issued to applicants after they are verified as a genuine legal entity by a certificate authority (CA). The baseline requirements for an EV certificate are outlined by the CA/Browser Forum. Web browsers show a green address bar when visiting a website that uses an EV certificate, with the company name displayed alongside the padlock symbol. These certificates can often be expensive: DigiCert charges $344 USD per year, Symantec prices its EV certificate at $995 USD a year, and Thawte at $299 USD a year.

Why Chrome and Firefox are removing EV indicators

In a survey conducted by Google, users of the Chrome and Safari browsers were asked how much they trusted a website with and without EV indicators. The results showed that browser identity indicators do not have much effect on users' secure choices. About 85 percent of users did not find anything strange about a Google login page with the fake URL "accounts.google.com.amp.tinyurl.com". Based on these results and prior academic work, the Chrome Security UX team concluded that positive security indicators are largely ineffective. "As part of a series of data-driven changes to Chrome's security indicators, the Chrome Security UX team is announcing a change to the Extended Validation (EV) certificate indicator on certain websites starting in Chrome 77," the team wrote in a Google group. Another reason behind this decision was that the EV indicator takes up valuable screen space. Starting with Chrome 77, the information related to EV certificates will be shown in the Page Info panel that appears when the lock icon is clicked, instead of in an EV badge.

Citing similar reasons, the team behind Firefox shared their intention to remove EV indicators from Firefox 70 for desktop yesterday. They also plan to move this information to the identity panel instead of showing it on the identity block. "The effectiveness of EV has been called into question numerous times over the last few years, there are serious doubts whether users notice the absence of positive security indicators and proof of concepts have been pitting EV against domains for phishing," the team wrote.

Many CAs market EV certificates as something that builds visitor confidence and protects against phishing and identity fraud. Looking at these developments, Troy Hunt, a web security expert and the creator of "Have I Been Pwned?", concluded that EV certificates are now dead. In a blog post, he asked the CAs, "how long will it take the CAs selling EV to adjust their marketing to align with reality?"

Users have mixed feelings about this change. "Good riddance, IMO. They never meant much, to begin with, the validation procedures were basically 'can you pay the fee?', and they only added to user confusion," a user said on Hacker News. Many users, though, believe that EV indicators are valuable for financial transactions. A user commented on Reddit, "As a financial institution it was always much easier to just say 'make sure it says <Bank name> in the URL bar and it's green' when having a customer on the phone than 'Please click on settings -> advanced settings -> security -> display certificate and check the value subject'."

To know more, check out the official announcements by the Chrome and Firefox teams.

Google Chrome to simplify URLs by hiding special-case subdomains
Flutter gets new set of lint rules to build better Chrome OS apps
Mozilla releases WebThings Gateway 0.9 experimental builds targeting Turris Omnia and Raspberry Pi 4

React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more

Bhagyashree R
09 Aug 2019
3 min read
Yesterday, the React team announced the release of React 16.9. This release comes with an asynchronous testing utility, a programmatic Profiler, and a few deprecations and bug fixes. Along with the release announcement, the team also shared an updated React roadmap.

Following are some of the updates in React 16.9:

Asynchronous act() method for testing

React 16.8 introduced a new API method called ReactTestUtils.act(). This method enables you to write tests involving rendering and component updates that run closer to how React works in the browser. The initial release only supported synchronous functions, whereas React 16.9 updates act() to accept asynchronous functions as well.

Performance measurements with <React.Profiler>

Introduced in React 16.5, the React Profiler API collects timing information about each rendered component to find performance bottlenecks in your application. This release adds a new "programmatic way" of gathering measurements: <React.Profiler>. It keeps track of the number of times a React application renders and how much each render costs. With these measurements, developers can identify the parts of an application that are slow and need optimization. A combined sketch of act() and <React.Profiler> appears at the end of this piece.

Deprecations in React 16.9

Unsafe lifecycle methods are now renamed

The legacy component lifecycle methods have been renamed with an "UNSAFE_" prefix, indicating that code using these lifecycle methods is more likely to have bugs in future releases of React. The following methods have been renamed:

componentWillMount is now UNSAFE_componentWillMount
componentWillReceiveProps is now UNSAFE_componentWillReceiveProps
componentWillUpdate is now UNSAFE_componentWillUpdate

This is not a breaking change; however, you will see a warning when using the old names. The team has also provided a codemod script for automatically renaming these methods.

javascript: URLs will now give a warning

URLs that start with "javascript:" are deprecated, as they can serve as "a dangerous attack surface". As with the renamed lifecycle methods, these URLs will continue to work but will show a warning. The team recommends that developers use React event handlers instead. "In a future major release, React will throw an error if it encounters a javascript: URL," the announcement reads.

What the React roadmap looks like

The React team has made some changes to the roadmap shared last November. React Hooks was released as planned; however, the team underestimated the follow-up work and ended up extending the timeline by a few months. After React Hooks shipped in React 16.8, the team focused on Concurrent Mode and Suspense for Data Fetching, which are already being used on Facebook's website. Previously, the team planned to split Concurrent Mode and Suspense for Data Fetching into two releases; now both features will arrive in a single version later this year. "We shipped Hooks on time, but we're regrouping Concurrent Mode and Suspense for Data Fetching into a single release that we intend to release later this year," the announcement read.

To know more in detail about React 16.9, you can check out the official announcement.

React 16.8 releases with the stable implementation of Hooks
React 16.x roadmap released with expected timeline for features like "Hooks", "Suspense", and "Concurrent Rendering"
React Conf 2018 highlights: Hooks, Concurrent React, and more
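As a rough, self-contained sketch of both additions (assuming a Jest-style test runner for it/expect; the UserGreeting component and fakeFetch helper are hypothetical):

    import React from 'react';
    import { render } from 'react-dom';
    import { act } from 'react-dom/test-utils';

    // A hypothetical component that fetches data in an effect after mounting.
    function UserGreeting({ fetchUser }) {
      const [name, setName] = React.useState(null);
      React.useEffect(() => {
        fetchUser().then(user => setName(user.name));
      }, [fetchUser]);
      return <p>{name ? `Hello, ${name}` : 'Loading...'}</p>;
    }

    // React 16.9: act() accepts an async function, so promise-based effects
    // can settle before assertions run.
    it('renders the fetched user', async () => {
      const container = document.createElement('div');
      const fakeFetch = () => Promise.resolve({ name: 'Ada' });
      await act(async () => {
        render(<UserGreeting fetchUser={fakeFetch} />, container);
      });
      expect(container.textContent).toBe('Hello, Ada');
    });

    // <React.Profiler> reports the cost of each render of the wrapped
    // subtree through its onRender callback.
    function App() {
      const onRender = (id, phase, actualDuration) => {
        // phase is "mount" or "update"; actualDuration is in milliseconds.
        console.log(`${id} (${phase}) took ${actualDuration.toFixed(1)}ms`);
      };
      return (
        <React.Profiler id="UserGreeting" onRender={onRender}>
          <UserGreeting fetchUser={() => Promise.resolve({ name: 'Ada' })} />
        </React.Profiler>
      );
    }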

Google's $5.5m ‘cookie’ privacy settlement that paid nothing to users is now voided by a U.S. appeals court

Bhagyashree R
07 Aug 2019
3 min read
Yesterday, the U.S. Court of Appeals for the Third Circuit voided Google's $5.5m 'cookie' privacy settlement that paid nothing to consumers. The settlement was meant to resolve the case against Google for violating user privacy by installing cookies in users' browsers. This comes after the decision was challenged by the Center for Class Action Fairness (CCAF), an institution representing class members against unfair class action procedures and settlements.

What this Google 'cookie' case was about

The class-action case accuses Google of creating a web browser cookie that tracks a user's data. It alleges that the cookie tracked the data of Safari and Internet Explorer users even when they had properly configured their privacy settings. The plaintiffs claim that Google invaded their privacy under the California constitution and the state tort of intrusion upon seclusion.

In February 2017, U.S. District Judge Sue Robinson in Delaware ruled that Google would stop using cookies for Safari browsers and pay a $5.5 million settlement. The settlement would cover the fees and costs of the class counsel, incentive awards for the named class representatives, and cy pres distributions; it did not include any direct compensation to class members. The six cy pres recipients were data privacy organizations who agreed to use the funds for researching and promoting browser privacy. A cy pres distribution ("as near as possible") allows the court to direct money from a class action settlement to a charitable organization when distributing it to the class is impossible, impracticable, or illegal.

Some of the cy pres recipients had pre-existing associations with Google and the class counsel, which raised concern. "Through the proposed class-action settlement, the purported wrongdoer promises to pay a couple million dollars to class counsel and make a cy pres contribution to organizations it was already donating to otherwise (at least one of which has an affiliation with class counsel)," Circuit Judge Thomas Ambro said. He noted that John Roberts, the U.S. Chief Justice, has previously expressed concerns about cy pres, and many federal courts are also quite skeptical about cy pres awards, as they could prompt class counsel to put their own interests ahead of their clients'. Ambro further found the District Court's fact-finding insufficient: "In this context, we believe the District Court's fact-finding and legal analysis were insufficient for us to review its order certifying the class and approving the fairness, reasonableness, and adequacy of the settlement. We thus vacate and remand for further proceedings in accord with this opinion."

CCAF's objection to the settlement was overruled by the U.S. District Court for the District of Delaware on February 2, 2017. Ted Frank, CCAF's director, who is also a class member in this case, filed a notice of appeal on March 1, 2017. Frank believes that the money awarded to the privacy groups should have been given to class members instead. The objection is also supported by 13 state attorneys general. "The state attorneys general agree with CCAF that the feasibility of distributing funds depends on whether it's impossible to distribute funds to some class members, not whether it's possible to distribute to all class members," wrote CCAF. The case now returns to the Delaware court.

You can read more about Google's Cookie Placement Consumer Privacy Litigation case on the Hamilton Lincoln Law Institute website.

Google discriminates against pregnant women, an employee memo alleges
Google Chrome to simplify URLs by hiding special-case subdomains
Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage