
Tech News - Web Development

354 Articles

Google Chrome 76 now supports native lazy-loading

Bhagyashree R
27 Aug 2019
4 min read
Earlier this month, Google Chrome 76 got native support for lazy loading. Web developers can now use the new 'loading' attribute to lazy-load resources without having to rely on a third-party library or write custom lazy-loading code.

Why native lazy loading was introduced

Lazy loading aims to improve web performance in terms of both speed and data consumption. Images are generally the most requested resources on any website, and some web pages end up using a lot of data to load images that are out of the viewport. Though this might not have much effect on a WiFi user, it can consume a lot of cellular data. Not only images, but out-of-viewport embedded iframes can also consume a lot of data and contribute to slow page speed.

Lazy loading addresses this problem by deferring the loading of non-critical, below-the-fold images and iframes until the user scrolls closer to them. This results in faster page loads, minimized bandwidth for users, and reduced memory usage.

Previously, there were a few ways to defer the loading of images and iframes outside the viewport: you could use the Intersection Observer API or a 'data-src' attribute on the 'img' tag, and many developers built third-party libraries to provide even easier abstractions. Native support, however, eliminates the need for an external library. It also ensures that the deferred loading of images and iframes still works even if JavaScript is disabled on the client.

How you can use lazy loading

Even without this feature, Chrome already loads images at different priorities depending on their location with respect to the device viewport. The new 'loading' attribute, however, allows developers to completely defer the loading of images and iframes until the user scrolls near them. The distance-from-viewport threshold is not fixed; it depends on the type of resource being fetched, whether Lite mode is enabled, and the effective connection type. There are default values assigned per effective connection type in the Chromium source code that might change in a future release. Also, since the images are lazy-loaded, there is a chance of content reflow; to prevent this, developers are advised to set an explicit width and height on their images.

You can assign any one of the following three values to the 'loading' attribute (a short feature-detection sketch follows at the end of this article):

'auto': The default behavior of the browser, equivalent to not including the attribute.
'lazy': Defers loading of the image or iframe until it reaches a calculated distance from the viewport.
'eager': Loads the resource immediately.

Support for native lazy loading in Chrome 76 got mixed reactions from users. A user commented on Hacker News, "I'm happy to see this. So many websites with lazy loading never implemented a fallback for noscript. And most of the popular libraries didn't account for this accessibility."

Another user expressed that it does hinder user experience. They commented, "I may be the odd one out here, but I hate lazy loading. I get why it's a big thing on cellular connections, but I do most of my browsing on WIFI. With lazy loading, I'll frequently be reading an article, reach an image that hasn't loaded in yet, and have to wait for it, even though I've been reading for several minutes. Sometimes I also have to refind my place as the whole darn page reflows. I wish there was a middle ground... detect I'm on WIFI and go ahead and load in the lazy stuff after the above the fold stuff."

Right now, Chrome is the only browser to support native lazy loading. However, other browsers may follow suit, considering Firefox has an open bug for implementing lazy loading and Edge is based on Chromium.

Why should your e-commerce site opt for Headless Magento 2?
Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
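As referenced above, native support can be feature-detected so that a third-party fallback is only loaded when the 'loading' attribute is unsupported. A minimal sketch, assuming markup that keeps the real URLs in 'data-src' attributes; the fallback script path is hypothetical:

```ts
// Feature-detect native lazy loading; fall back to a third-party
// library only when the `loading` attribute is unsupported.
if ('loading' in HTMLImageElement.prototype) {
  document.querySelectorAll<HTMLImageElement>('img[data-src]').forEach((img) => {
    img.loading = 'lazy';       // let the browser defer the fetch
    img.src = img.dataset.src!; // swap in the real URL
  });
} else {
  // No native support: load a lazy-loading library instead
  // (assumption: a script such as lazysizes is self-hosted).
  const script = document.createElement('script');
  script.src = '/vendor/lazysizes.min.js'; // hypothetical path
  document.body.appendChild(script);
}
```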


Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

Fatema Patrawala
26 Aug 2019
5 min read
Last week the Apache Flink community announced the release of Apache Flink 1.9.0. The Flink community defines the project goal as "to develop a stream processing system to unify and power many forms of real-time and offline data processing applications as well as event-driven applications." In this release, they have made a big step forward in that effort by integrating Flink's stream and batch processing capabilities under a single, unified runtime.

The most significant features in this release are batch-style recovery for batch jobs and a preview of the new Blink-based query engine for Table API and SQL queries. The team also announced the availability of the State Processor API, one of the most frequently requested features, which enables users to read and write savepoints with Flink DataSet jobs. Additionally, Flink 1.9 includes a reworked WebUI, a preview of Flink's new Python Table API, and integration with the Apache Hive ecosystem.

Let us take a look at the major new features and improvements.

New features and improvements in Apache Flink 1.9.0

Fine-grained batch recovery

The time to recover a batch (DataSet, Table API, and SQL) job from a task failure is significantly reduced. Until Flink 1.9, task failures in batch jobs were recovered by canceling all tasks and restarting the whole job, i.e., the job was started from scratch and all progress was voided. With this release, Flink can be configured to limit the recovery to only those tasks that are in the same failover region. A failover region is the set of tasks that are connected via pipelined data exchanges; hence, the batch-shuffle connections of a job define the boundaries of its failover regions. (A configuration sketch follows at the end of this article.)

State Processor API

Up to Flink 1.9, accessing the state of a job from the outside was limited to the experimental Queryable State. This release introduces a new, powerful library to read, write, and modify state snapshots using the batch DataSet API. In practice, this means:

Flink job state can be bootstrapped by reading data from external systems, such as external databases, and converting it into a savepoint.
State in savepoints can be queried using any of Flink's batch APIs (DataSet, Table, SQL), for example to analyze relevant state patterns or check for discrepancies in state that can support application auditing or troubleshooting.
The schema of state in savepoints can be migrated offline, compared to the previous approach requiring online migration on schema access.
Invalid data in savepoints can be identified and corrected.

The new State Processor API covers all variations of snapshots: savepoints, full checkpoints, and incremental checkpoints.

Stop-with-Savepoint

Cancelling with a savepoint is a common operation for stopping/restarting, forking, or updating Flink jobs. However, the existing implementation did not guarantee output persistence to external storage systems for exactly-once sinks. To improve the end-to-end semantics when stopping a job, Flink 1.9 introduces a new SUSPEND mode to stop a job with a savepoint that is consistent with the emitted data. You can suspend a job with Flink's CLI client as follows:

bin/flink stop -p [:targetDirectory] :jobId

The final job state is set to FINISHED on success, allowing users to detect failures of the requested operation.

Flink WebUI rework

After a discussion about modernizing the internals of Flink's WebUI, this component was rebuilt using the latest stable version of Angular (essentially a bump from Angular 1.x to 7.x). The redesigned version is the default in Apache Flink 1.9.0; however, there is a link to switch to the old WebUI.

Preview of the new Blink SQL query processor

After the donation of Blink to Apache Flink, the community worked on integrating Blink's query optimizer and runtime for the Table API and SQL. The team refactored the monolithic flink-table module into smaller modules, resulting in a clear separation of well-defined interfaces between the Java and Scala API modules and the optimizer and runtime modules.

Other important changes in this release:

The Table API and SQL are now part of the default configuration of the Flink distribution. Previously, the Table API and SQL had to be enabled by moving the corresponding JAR file from ./opt to ./lib.
The machine learning library (flink-ml) has been removed in preparation for FLIP-39.
The old DataSet and DataStream Python APIs have been removed in favor of FLIP-38.
Flink can be compiled and run on Java 9. Note that certain components interacting with external systems (connectors, filesystems, reporters) may not work, since the respective projects may have skipped Java 9 support.

The binary distribution and source artifacts for this release are now available via the Downloads page of the Flink project, along with the updated documentation. Flink 1.9 is API-compatible with previous 1.x releases for APIs annotated with the @Public annotation. You can review the release notes for a detailed list of changes and new features before upgrading your setup to Flink 1.9.0.

Apache Flink 1.8.0 releases with finalized state schema evolution support
Apache Flink founders data Artisans could transform stream processing with patent-pending tool
Apache Flink version 1.6.0 released!
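As referenced in the fine-grained recovery section above, limiting recovery to a failover region is a configuration change. A minimal sketch, assuming the Flink 1.9 flink-conf.yaml key for the failover strategy (verify against the release documentation for your setup):

```yaml
# flink-conf.yaml
# Restart only the tasks in the failing region instead of the
# whole job when a batch task fails (assumed Flink 1.9 key).
jobmanager.execution.failover-strategy: region
```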


Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more

Bhagyashree R
26 Aug 2019
3 min read
Last week, the Angular team announced the release of Angular CLI 8.3.0. Along with a redesigned website, this release comes with a new deploy command and improves the previously introduced differential loading.

https://twitter.com/angular/status/1164653064898277378

Key updates in Angular CLI 8.3.0

Deploy directly from the CLI to a cloud platform with the new deploy command

Starting from Angular CLI 8.3.0, you have a new deploy command to execute the deploy CLI builder associated with your project. It is essentially a simple alias to ng run MY_PROJECT:deploy. There are many third-party builders that implement deployment capabilities for different platforms, and you can add them to your project with ng add [package name]. After a package with deployment capability is added, your project's angular.json file is automatically updated with a deploy section. You can then deploy your project by executing the ng deploy command. (A command-line sketch follows below.)

Currently, the deploy command supports deployment to Firebase, Azure, Zeit, Netlify, and GitHub. You can also create a builder yourself to use the ng deploy command in case you are deploying to a self-managed server or there is no builder for the cloud platform you are using.

Improved differential loading

Angular CLI 8.0 introduced the concept of differential loading to maximize the browser compatibility of your web application. Most modern browsers support ES2015, but there might be cases when your app's users have a browser that doesn't. To target a wide range of browsers, you can ship a single bundle containing all your compiled code along with any polyfills that may be needed. However, this increased bundle size shouldn't penalize users who have modern browsers, which is where differential loading comes in: the CLI builds two separate bundles as part of your deployed application. The first bundle targets modern browsers, while the second targets legacy browsers and carries all the necessary polyfills.

Though this increases your application's browser compatibility, the production build used to take twice the time. Angular CLI 8.3.0 fixes this by changing how the command runs: the build targeting ES2015 is produced first and is then directly down-leveled to ES5, instead of rebuilding the app from scratch. In case you encounter any issue, you can fall back to the previous behavior with NG_BUILD_DIFFERENTIAL_FULL=true ng build --prod.

Many Angular developers are excited about the new updates in Angular CLI 8.3.0.

https://twitter.com/vikerman/status/1164655906262409216
https://twitter.com/Santosh19742211/status/1164791877356277761

Some, however, did question the usefulness of the deploy command. A developer on Reddit shared their perspective, "Honestly, I think Angular and the CLI are already big and complex enough. Every feature possibly creates bugs and needs to be maintained. While the CLI is incredibly useful and powerful there have been also many issues in the past. On the other hand, I must admit that I can't judge the usefulness of this feature: I've never used Firebase. Is it really so hard to deploy on it? Can't this be done with a couple of lines of a shell script? As already said: One should use CI/CD anyway."

To know more in detail about the new features in Angular CLI 8.3.0, check out the official docs. Also, check out the @angular-schule/ngx-deploy-starter repository to create a new builder for utilizing the deploy command.
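As referenced above, the end-to-end flow is two commands. A sketch assuming a Firebase deployment via the @angular/fire builder; substitute the builder package for your target platform:

```sh
# Add a deploy-capable builder; this updates angular.json
# with a "deploy" section for the project.
ng add @angular/fire

# Build and deploy the default project
# (alias for `ng run MY_PROJECT:deploy`).
ng deploy
```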
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0


Microsoft Edge Beta is now ready for you to try

Bhagyashree R
21 Aug 2019
3 min read
Yesterday, Microsoft announced the launch of the first beta builds of its Chromium-based Edge browser. Developers using the supported versions of Windows and macOS can now test it and share their feedback with the Microsoft Edge team.

https://twitter.com/MicrosoftEdge/status/1163874455690645504

The preview builds are made available to developers and early adopters through three channels on the Microsoft Edge Insider site: Beta, Canary, and Developer. Earlier this year, Microsoft opened the Canary and Developer channels. Canary builds receive updates every night, while Developer builds are updated weekly. Beta is the third and final preview channel and the most stable of the three; it receives updates every six weeks, along with periodic minor updates for bug fixes and security.

(Image source: Microsoft Edge Insider)

What's new in Microsoft Edge Beta

Microsoft Edge Beta comes with several options for personalizing your browsing experience. It supports a dark theme and offers 14 different languages to choose from. If you are not a fan of the default new tab page, you can customize it with its tab page customizations. There are currently three preset styles that you can switch between: Focused, Inspirational, and Informational. You can further customize and personalize Microsoft Edge through the different addons available on the Microsoft Edge Insider Addons store or other Chromium-based web stores.

On the privacy front, this release brings support for tracking prevention. Enabling this feature will protect you from being tracked by websites that you don't visit. You can choose from three levels of privacy: Basic, Balanced, and Strict.

Microsoft Edge Beta also comes with some of the commercial features that were announced at Build this year. Microsoft Search integration in Bing lets you search for OneDrive files directly from Bing search. There is support for Internet Explorer mode, which brings Internet Explorer 11 compatibility directly into Microsoft Edge. It also supports Windows Defender Application Guard for isolating enterprise-defined untrusted sites.

Along with this release, Microsoft also launched the Microsoft Edge Insider Bounty Program. Under this program, researchers who find high-impact vulnerabilities in the Dev and Beta channels will receive rewards of up to US$30,000.

Read Microsoft's official announcement to know more in detail.

Microsoft officially releases Microsoft Edge canary builds for macOS users
Microsoft Edge mobile browser now shows warnings against fake news using NewsGuard
Microsoft Edge Beta available on iOS with a breaking news alert, developer options and more


Mapbox introduces MARTINI, a client-side terrain mesh generation code

Vincy Davis
16 Aug 2019
3 min read
Two days ago, Vladimir Agafonkin, an engineer at Mapbox, introduced a client-side terrain mesh generation code called MARTINI, short for 'Mapbox's Awesome Right-Triangulated Irregular Networks Improved'. It uses a Right-Triangulated Irregular Networks (RTIN) mesh, which consists of big right-angle triangles, to render smooth and detailed terrain in 3D. RTIN has two main advantages:

The algorithm generates a hierarchy of all approximations of varying precision, enabling quick retrieval.
It is very fast, making client-side meshing from raster terrain tiles feasible.

In a blog post, Agafonkin demonstrates a drag-and-zoom terrain visualization that lets users adjust mesh precision in real time. The visualization also displays the number of triangles generated at each error rate.

(Image source: Observable)

How the RTIN hierarchy works

Mapbox's MARTINI uses the RTIN algorithm, which works on grids of (2^k + 1) x (2^k + 1), "that's why we add 1-pixel borders on the right and bottom", says Agafonkin. The RTIN algorithm first builds an error map, a grid of error values that guides the subsequent mesh retrieval. The error map indicates whether a certain triangle has to be split or not, by taking the height error value into account. The algorithm first calculates the error approximation of the smallest triangles, which is then propagated to the parent triangles. This process is repeated until the errors of the top two triangles are calculated and a full error map is produced. It results in zero T-junctions and thus no gaps in the mesh.

(Image source: Observable)

For retrieving a mesh, the RTIN hierarchy starts with two big triangles, which are then subdivided according to the error map. Agafonkin says, "This is essentially a depth-first search in an implicit binary tree of triangles, and takes O(numTriangles) steps, resulting in nearly instant mesh generation." (A usage sketch follows at the end of this article.)

Users have appreciated the MARTINI demo and animation presented by Agafonkin in the blog post. A user on Hacker News says, "This is a wonderful presentation of results and code, well done! Very nice to read." Another user comments, "Fantastic. Love the demo and the animation." Another comment on Hacker News reads, "This was a pleasure to play with on an iPad. Excellent work."

For more details on the code and algorithm used in Mapbox's MARTINI, check out Agafonkin's blog post.

Introducing Qwant Maps: an open source and privacy-preserving maps, with exclusive control over geolocated data
Top 7 libraries for geospatial analysis
Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data
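As referenced above, here is what using the hierarchy looks like in code. A sketch assuming the @mapbox/martini npm package and a 257x257 Float32Array of height samples named terrain; the error threshold is illustrative:

```js
import Martini from '@mapbox/martini';

// Set up an RTIN hierarchy for a (2^k + 1) x (2^k + 1) grid, here 257x257.
const martini = new Martini(257);

// Compute the error map for one terrain tile
// (`terrain` is a Float32Array of 257*257 height samples).
const tile = martini.createTile(terrain);

// Extract a mesh whose height error stays below 10 meters;
// lowering the threshold yields more triangles and more detail.
const { vertices, triangles } = tile.getMesh(10);
```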


React DevTools 4.0 releases with support for Hooks, experimental Suspense API, and more!

Bhagyashree R
16 Aug 2019
3 min read
Yesterday, the React team announced the release of React DevTools 4.0 for Chrome, Firefox, and Edge. In addition to better performance and a better navigation experience, this release fully supports React Hooks and provides a way to test the experimental Suspense API.

Key updates in React DevTools 4.0

Better performance by reducing the "bridge traffic"

The React DevTools extension is made up of two parts: frontend and backend. The frontend portion includes the components tree, the Profiler, and everything else that is visible to you. The backend portion is the invisible one; it is in charge of notifying the frontend by sending messages through a "bridge". In previous versions of React DevTools, the traffic caused by this notification process was one of the biggest performance bottlenecks. Starting with React DevTools 4.0, the team has reduced this bridge traffic by minimizing the number of messages the backend sends to render the Components tree; the frontend can request more information whenever required.

Automatic logging of React component stack warnings

React DevTools 4.0 now provides an option to automatically append component stack information to the console in the development phase. This enables developers to identify where exactly in the component tree a failure happened. To disable this feature, navigate to the General settings panel and uncheck "Append component stacks to warnings and errors."

(Image source: React)

Components tree updates

Improved Hooks support: Hooks allow you to use state and other React features without writing a class. In React DevTools 4.0, Hooks have the same level of support as props and state.
Component filters: Navigating through large component trees can often be tiresome. Now, you can quickly find the component you are looking for by applying component filters.
"Rendered by" list and an owners tree: React DevTools 4.0 has a new "rendered by" list in the right-hand pane that helps you quickly step through the list of owners. There is also an owners tree, the inverse of the "rendered by" list, which lists everything that has been rendered by a particular component.
Suspense toggle: The experimental Suspense API allows you to "suspend" the rendering of a component until a condition is met. In <Suspense> components you can specify the loading states shown while components below them are waiting to be rendered. This DevTools release comes with a toggle to let you test these loading states. (A minimal example follows below.)

(Image source: React)

Profiler changes

Import and export profiler data: Profiler data can now be exported and shared among other developers for better collaboration.
Reload and profile: The React Profiler collects performance information each time the application renders, helping you identify and fix possible performance bottlenecks in your applications. Previous versions of DevTools only allowed profiling a "profiling-capable version of React", so there was no way to profile the initial mount of an application. This is now supported with a "reload and profile" action.
Component renders list: The Profiler in React DevTools 4.0 displays a list of each time a selected component was rendered during a profiling session. You can use this list to quickly jump between commits when analyzing a component's performance.

(Image source: React)

You can check out the release notes of React DevTools 4.0 to know what other features have landed in this release.
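As referenced in the Suspense toggle note above, here is a minimal sketch of a declared loading state; the lazy-loaded ProfilePage component and its import path are hypothetical:

```tsx
import React, { Suspense, lazy } from 'react';

// Hypothetical code-split component; the path is illustrative.
const ProfilePage = lazy(() => import('./ProfilePage'));

export default function App() {
  return (
    // While ProfilePage is loading, React renders the fallback.
    // DevTools 4.0 can force this state via its Suspense toggle.
    <Suspense fallback={<p>Loading profile…</p>}>
      <ProfilePage />
    </Suspense>
  );
}
```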
React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more
React Native 0.60 releases with accessibility improvements, AndroidX support, and more
React Native VS Xamarin: Which is the better cross-platform mobile development framework?

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70

Bhagyashree R
13 Aug 2019
4 min read
Last year, Apple removed the Extended Validation (EV) certificate indicators from Safari on both iOS 12 and Mojave. Now, Google and Mozilla are following suit by removing the EV visual indicators starting from Chrome 77 and Firefox 70.

What are Extended Validation certificates

Introduced in 2007, Extended Validation certificates are issued to applicants after they are verified as a genuine legal entity by a certificate authority (CA). The baseline requirements for an EV certificate are outlined by the CA/Browser Forum. Web browsers show a green address bar when visiting a website that is using an EV certificate, with the company name displayed alongside the padlock symbol. These certificates can often be expensive: DigiCert charges $344 USD per year, Symantec prices its EV certificate at $995 USD a year, and Thawte at $299 USD a year.

Why Chrome and Firefox are removing EV indicators

In a survey conducted by Google, users of the Chrome and Safari browsers were asked how much they trusted a website with and without EV indicators. The results showed that browser identity indicators do not have much effect on users' secure choices. About 85 percent of users did not find anything strange about a Google login page with the fake URL "accounts.google.com.amp.tinyurl.com". Based on these results and prior academic work, the Google Security UX team concluded that positive security indicators are largely ineffective. "As part of a series of data-driven changes to Chrome's security indicators, the Chrome Security UX team is announcing a change to the Extended Validation (EV) certificate indicator on certain websites starting in Chrome 77," the team wrote in a Google group.

Another reason behind this decision was that the EV indicator takes up valuable screen space. Starting with Chrome 77, the information related to EV certificates will be shown in the Page Info bubble that appears when the lock icon is clicked, instead of in the EV badge:

(Image source: Google)

Citing similar reasons, the team behind Firefox shared their intention yesterday to remove EV indicators from Firefox 70 for desktop. They also plan to add this information to the identity panel instead of showing it on the identity block. "The effectiveness of EV has been called into question numerous times over the last few years, there are serious doubts whether users notice the absence of positive security indicators and proof of concepts have been pitting EV against domains for phishing," the team wrote.

Many CAs market EV certificates as something that builds visitor confidence and protects against phishing and identity fraud. Looking at these developments, Troy Hunt, a web security expert and the creator of "Have I Been Pwned?", concluded that EV certificates are now dead. In a blog post, he asked the CAs, "how long will it take the CAs selling EV to adjust their marketing to align with reality?"

Users have mixed feelings about this change. "Good riddance, IMO. They never meant much, to begin with, the validation procedures were basically "can you pay the fee?", and they only added to user confusion," a user said on Hacker News. Many users, though, believe that EV indicators are valuable for financial transactions. A user commented on Reddit, "As a financial institution it was always much easier to just say "make sure it says <Bank name> in the URL bar and it's green" when having a customer on the phone than "Please click on settings -> advanced settings -> security -> display certificate and check the value subject"."

To know more, check out the official announcements by the Chrome and Firefox teams.

Google Chrome to simplify URLs by hiding special-case subdomains
Flutter gets new set of lint rules to build better Chrome OS apps
Mozilla releases WebThings Gateway 0.9 experimental builds targeting Turris Omnia and Raspberry Pi 4


React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more

Bhagyashree R
09 Aug 2019
3 min read
Yesterday, the React team announced the release of React 16.9. This release comes with an asynchronous testing utility, a programmatic Profiler, and a few deprecations and bug fixes. Along with the release announcement, the team also shared an updated React roadmap.

https://twitter.com/reactjs/status/1159602037232791552

Following are some of the updates in React 16.9:

Asynchronous act() method for testing

React 16.8 introduced a new API method called ReactTestUtils.act(). This method enables you to write tests involving rendering and updating components that run closer to how React works in the browser. React 16.8 only supported synchronous functions, whereas React 16.9 updates act() to accept asynchronous functions as well.

Performance measurements with <React.Profiler>

Introduced in React 16.5, the React Profiler API collects timing information about each rendered component to find performance bottlenecks in your application. This release adds a new "programmatic way" of gathering measurements: <React.Profiler>. It keeps track of the number of times a React application renders and how much rendering costs. With these measurements, developers can identify the parts of an application that are slow and need optimization. (A minimal example follows at the end of this article.)

Deprecations in React 16.9

Unsafe lifecycle methods are now renamed

The legacy component lifecycle methods are renamed with an "UNSAFE_" prefix, indicating that code using these lifecycle methods is more likely to have bugs in future releases of React. The following functions have been renamed:

componentWillMount to UNSAFE_componentWillMount
componentWillReceiveProps to UNSAFE_componentWillReceiveProps
componentWillUpdate to UNSAFE_componentWillUpdate

This is not a breaking change; however, you will see a warning when using the old names. The team has also provided a codemod script for automatically renaming these methods.

javascript: URLs will now give a warning

URLs starting with "javascript:" are deprecated because they can serve as "a dangerous attack surface". As with the renamed lifecycle methods, these URLs continue to work but show a warning. The team recommends developers use React event handlers instead. "In a future major release, React will throw an error if it encounters a javascript: URL," the announcement read.

What the React roadmap looks like

The React team has made some changes to the roadmap that was shared in November. React Hooks was released as per the plan; however, the team underestimated the follow-up work and ended up extending the timeline by a few months. After React Hooks shipped in React 16.8, the team focused on Concurrent Mode and Suspense for Data Fetching, which are already being used on Facebook's website. Previously, the team planned to split Concurrent Mode and Suspense for Data Fetching into two releases; now, both features will arrive in a single version later this year. "We shipped Hooks on time, but we're regrouping Concurrent Mode and Suspense for Data Fetching into a single release that we intend to release later this year," the announcement read.

To know more in detail about React 16.9, you can check out the official announcement.

React 16.8 releases with the stable implementation of Hooks
React 16.x roadmap released with expected timeline for features like "Hooks", "Suspense", and "Concurrent Rendering"
React Conf 2018 highlights: Hooks, Concurrent React, and more
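As referenced above, a minimal <React.Profiler> sketch; the Navigation component is hypothetical, and the callback uses only the first few parameters of the documented onRender signature:

```tsx
import React from 'react';

// Hypothetical subtree we want to measure.
const Navigation = () => <nav>…</nav>;

// Called on every commit of the wrapped subtree.
function onRenderCallback(id: string, phase: string, actualDuration: number) {
  // id: the Profiler's "id" prop; phase: "mount" or "update";
  // actualDuration: time (ms) spent rendering the committed update.
  console.log(`${id} ${phase}: ${actualDuration.toFixed(1)}ms`);
}

export default function App() {
  return (
    <React.Profiler id="Navigation" onRender={onRenderCallback}>
      <Navigation />
    </React.Profiler>
  );
}
```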


Google's $5.5m ‘cookie’ privacy settlement that paid nothing to users is now voided by a U.S. appeals court

Bhagyashree R
07 Aug 2019
3 min read
Yesterday, the U.S. Court of Appeals for the Third Circuit voided Google's $5.5m 'cookie' privacy settlement that paid nothing to consumers. The settlement was meant to resolve a case against Google for violating user privacy by installing cookies in their browsers. This comes after the decision was challenged by the Center for Class Action Fairness (CCAF), an institution representing class members against unfair class action procedures and settlements.

What this Google 'cookie' case was about

The class-action case accuses Google of creating a web browser cookie that tracked users' data, including data of Safari and Internet Explorer users who had properly configured their privacy settings. The plaintiffs claim that Google invaded their privacy under the California constitution and the state tort of intrusion upon seclusion. In February 2017, U.S. District Judge Sue Robinson in Delaware ruled that Google would stop using cookies for Safari browsers and pay a $5.5 million settlement. The settlement would cover fees and costs of the class counsel, incentive awards for the named class representatives, and cy pres distributions; it did not include any direct compensation to class members. The six cy pres recipients were data privacy organizations who agreed to use the funds for researching and promoting browser privacy.

Cy pres, which means "as near as possible", allows the court to distribute the money from a class action settlement to a charitable organization when distributing it otherwise becomes impossible, impracticable, or illegal. Some of the cy pres recipients had pre-existing associations with Google and the class counsel, which raised concern. "Through the proposed class-action settlement, the purported wrongdoer promises to pay a couple million dollars to class counsel and make a cy pres contribution to organizations it was already donating to otherwise (at least one of which has an affiliation with class counsel)," Circuit Judge Thomas Ambro said. He noted that John Roberts, the U.S. Chief Justice, has previously expressed concerns about cy pres. Many federal courts are also quite skeptical about cy pres awards, as they could prompt class counsel to put their own interests ahead of their clients'.

Ambro further said that the District Court's fact-finding was insufficient: "In this context, we believe the District Court's fact-finding and legal analysis were insufficient for us to review its order certifying the class and approving the fairness, reasonableness, and adequacy of the settlement. We thus vacate and remand for further proceedings in accord with this opinion."

CCAF's objection to this settlement was overruled by the U.S. District Court for the District of Delaware on February 2, 2017. Ted Frank, CCAF's director, who is also a class member in this case, filed a notice of appeal on March 1, 2017. Frank believes that the money awarded to the privacy groups should instead have gone to class members. The objection is also supported by 13 state attorneys general. "The state attorneys general agree with CCAF that the feasibility of distributing funds depends on whether it's impossible to distribute funds to some class members, not whether it's possible to distribute to all class members," wrote CCAF.

Now, the case returns to the Delaware court. You can read more about Google's Cookie Placement Consumer Privacy Litigation case on the Hamilton Lincoln Law Institute site.

Google discriminates against pregnant women, an employee memo alleges
Google Chrome to simplify URLs by hiding special-case subdomains
Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage


FFmpeg 4.2 releases with AV1 decoding support through libdav1d, decoding of HEVC 4:4:4 content and more

Vincy Davis
07 Aug 2019
3 min read
Two days ago, the team behind FFmpeg released FFmpeg 4.2, nicknamed "Ada". This release comes with many new filters, decoders, and demuxers. FFmpeg is an open-source software suite of libraries and programs for handling multimedia files. It is a cross-platform multimedia framework used by various games and applications to record, convert, and stream audio and video. The previous version, FFmpeg 4.1, was released last November. The FFmpeg team has announced on Twitter that a follow-up point release (4.2.1) will arrive in a few weeks.

FFmpeg 4.2 adds AV1 decoding support through libdav1d. It also supports decoding of HEVC 4:4:4 content in nvdec, cuviddec, and vdpau, and brings many new filters such as tpad, dedot, freezedetect, truehd_core bitstream, anlmdn, maskfun, and more. (A command-line sketch follows at the end of this article.)

Read More: Presenting dav1d, a new lightweight AV1 decoder, by VideoLAN and FFmpeg

PCM-DVD has been included as an encoder in FFmpeg 4.2. The release also introduces numerous demuxers such as dhav, vividas, hcom, KUX, and IFV, and removes libndi-newtek. The new version allows the mov muxer to write tracks with an unspecified language; earlier, the mov muxer wrote tracks as English by default. The latest version also supports using clang to compile CUDA kernels.

Users are happy with the FFmpeg 4.2 release and seem excited to get hands-on with the new features in their applications.

https://twitter.com/Poddingue/status/1158840601267322886

A user on Hacker News comments, "Massive "thank you" to FFmpeg for being an amazing tool. My app pivotally depends on FFmpeg extracting screenshots of videos for users to browse through." Another comment on Hacker News reads, "Nice to see improved HEVC 4:4:4 support in Linux".

Users have also appreciated the growth of the FFmpeg project in general. A Redditor comments, "Man ffmpeg is so awesome. It's still mind blowing that something so powerful is FOSS. Outside of programming languages, operating systems, and essential tools like the GNU suite, I think an argument can be made that ffmpeg is one of the most important pieces of software ever created. We live in a digital world and virtually everything from security cameras to social media sites to news stations use ffmpeg on some level."

Check out the FFmpeg page to read about the updates in detail.

Firefox 67 enables AV1 video decoder 'dav1d', by default on all desktop platforms
Fabrice Bellard, the creator of FFmpeg and QEMU, introduces a lossless data compressor which uses neural networks
Introducing QuickJS, a small and easily embeddable JavaScript engine
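As referenced above, decoder selection happens on the command line. A sketch; the file names are illustrative, and this assumes libdav1d was enabled in your FFmpeg build:

```sh
# Decode an AV1 input with the dav1d-based decoder and
# transcode it to H.264 in an MP4 container.
ffmpeg -c:v libdav1d -i input-av1.mkv -c:v libx264 output.mp4
```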

Google Chrome to simplify URLs by hiding special-case subdomains

Bhagyashree R
02 Aug 2019
4 min read
Google's decision to hide the special-case subdomains "www" and "m" in Chrome M69 received a huge public backlash last year, and the Chrome team did roll back the change. This Wednesday, however, the team announced that it plans to hide "https" and "www" in the Chrome omnibox on desktop and Android in M76. In other news, the team is splitting the HTTP cache to prevent side-channel leakage.

Chrome to hide https and www from URLs

The Chrome team said the reason for this decision is to improve the "simplicity, usability, and security of UI surfaces." With this change, the team aims to remove distractions and make URLs easier to read and understand for users. Emily Schechter, Product Manager, Chrome Security at Google, wrote on the Chromium issue tracker, "In Sept 2018, we rolled out a change to hide special-case subdomains "www" and "m". Per my above message on this thread, we rolled back these changes, and announced our intent to re-ship an adjusted version: we will hide "www" but not "m"." She added, "For several months, we've had this version enabled in our Canary, Dev and Beta channels and are confident that it is ready to be enabled in the Stable channel as well."

The Chrome team, together with other browser representatives, has also added a "Simplify non-human-readable or irrelevant components" section to the web URL standard. The section recommends that browsers omit components that can "provide opportunities for spoofing or distract from security-relevant information." The team has also built an extension named Suspicious Site Reporter for Chrome, which you can use to identify suspicious sites and report them to Safe Browsing; with this extension installed, you will see the full URL with no scheme or subdomain hiding. You can also see the full URL by clicking twice in the URL bar on desktop, and once on mobile.

What the public thinks about this change

Users have mixed feelings about this update. While some think it is a step toward making Google a monopoly, others believe it does simplify URLs for non-technical users. Expressing their concern on Hacker News, a user said, "...these "improvements" in Chrome are meant to make google the defacto interface for using the web. Imagine a world where 99% of users do not have any concept of URLs or any other fundamental WWW concepts. Instead, they open Chrome type whatever they want and get the results."

Some users also highlighted that this update could raise security concerns. "Let's assume that you have a blog platform offering subdomains for each user and 'm.blogplatform.com' is available. Now, any user can get that subdomain and impersonate the homepage because Emily from Chromium decided that eliding parts of the URL without any spec is a reasonable decision," a user added.

Apple's browser, Safari, also shows only the domain and a lock icon to indicate the legitimacy of a website's certificate. Since Apple did not receive this amount of backlash, some felt the reaction is the result of people losing trust in big tech. A user commented, "...collective shrug when Apple hides the URL, but if Google does so we get huge outrage and assumptions that this must clearly be done primarily for malicious reasons." You can read the full conversation about this update on Chromium's bug tracker.

Chrome to split the HTTP cache to prevent cross-origin leakage

Currently, the HTTP cache stores the resource for each of its entries in a single bucket shared among origins, so when loading the same resource, different origins refer to the same cache entry. This can enable a side-channel attack in which a site detects whether another site has loaded a resource by probing the cache. To prevent this attack, the Chrome team is planning to partition the HTTP cache by the origin of the page's top frame. You can read more about this update on the Chrome Platform Status site.

Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event
Google plans to remove XSS Auditor used for detecting XSS vulnerabilities from its Chrome web browser
Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users


Scroll Snapping and other cool CSS features come to Firefox 68

Fatema Patrawala
01 Aug 2019
2 min read
Yesterday, the Firefox team shared details of the new CSS features added in Firefox 68. Earlier this month they had announced the release of Firefox 68 with a bunch of CSS additions and changes. Let us take a look at each.

CSS Scroll Snapping

The update in Firefox 68 brings the Firefox implementation in line with Scroll Snap as implemented in Chrome and Safari. In addition, it removes the old properties that were part of the earlier Scroll Snap Points specification.

The ::marker pseudo-element

The ::marker pseudo-element selects the marker box of a list item, which typically contains the list bullet or a number. If you have ever used an image as a list bullet, or wrapped the text of a list item in a span in order to have different bullet and text colors, this pseudo-element is for you! With it, you can target the bullet itself. Only a few CSS properties may be used on ::marker; these include all font properties, so you can give the marker a different font-size or font-family than the text. (A short sketch follows at the end of this article.)

Using ::marker on non-list items

A marker is only shown on list items; however, you can turn any element into a list item by using display: list-item. The official blog post covers a detailed, code-level example of this technique. The ::marker pseudo-element is standardized in CSS Lists Level 3 and CSS Pseudo-Elements Level 4, and is currently implemented in Firefox 68 and Safari.

CSS fixes in Firefox 68

Web developers suffer when a supported feature works differently in different browsers. These interoperability issues are often caused by the age of the web platform, and many have required clarifications to the CSS specifications; developers then depend on the browsers to update their implementations to match the clarified spec. The latest Firefox release ships fixes for the ch unit and for list numbering. In addition to changes to the implementation of CSS, Firefox 68 brings some great new additions to the Developer Tools for working with CSS. Take a look at the Firefox 68 release notes to get a full overview of all the changes and additions.
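As referenced above, a minimal CSS sketch of both techniques; the selectors and values are illustrative:

```css
/* Style the bullet independently of the list text.
   Only a handful of properties apply to ::marker,
   including the font properties and color. */
li::marker {
  color: rebeccapurple;
  font-size: 1.2em;
  font-family: monospace;
}

/* Markers normally appear only on list items, but any
   element can opt in by becoming a list item. */
h2.numbered {
  display: list-item;
  list-style-type: decimal;
}
```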


Is Dark an AWS Lambda challenger?

Fatema Patrawala
01 Aug 2019
4 min read
On Monday, the CEO and co-founder of Dark, Ellen Chisa, announced in a Medium post that the project had raised $3.5 million in funding. Dark is a holistic project that includes a programming language (Darklang), an editor, and an infrastructure. The value of this, according to Chisa, is simple: "developers can code without thinking about infrastructure, and have near-instant deployment, which we're calling deployless."

Along with Chisa, Dark is led by CTO Paul Biggar, who is also the founder of CircleCI, the CI/CD pioneering company. The seed funding is led by Cervin Ventures, in participation with Boldstart, Data Collective, Harrison Metal, Xfactor, Backstage, Nextview, Promus, Correlation, 122 West and Yubari.

What are the key features of the Dark programming language?

One of the most interesting features of Dark is that deployments take a mere 50 milliseconds. Chisa says that currently the best teams manage deployments in around 5-10 minutes, but many take considerably longer, sometimes hours. Dark was designed to change this; it is purpose-built, Chisa seems to suggest, for continuous delivery. "In Dark, you're getting the benefit of your editor knowing how the language works. So you get really great autocomplete, and your infrastructure is set up for you as soon as you've written any code because we know exactly what is required."

She says there are three main benefits to Dark's approach:

An automated infrastructure.
No need to worry about a deployment pipeline. ("As soon as you write any piece of backend code in Dark, it is already hosted for you," she explains.)
Tracing capabilities built into your code. ("Because you're using our infrastructure, you have traces available in your editor as soon as you've written any code.")

There is undoubtedly a clear sense, whatever users think of the end result, that everything has been engineered with an incredibly clear vision.

Dark has been used to ship SaaS products and project-tracking tools

Chisa highlights that some customers have already shipped entire products on Dark. Chase Olivieri, who built Altitude, a subscription SaaS providing personalized flight deals, using Dark, is cited by Chisa as saying that "as a bootstrapper, Dark has allowed me to move fast and build Altitude without having to worry about infrastructure, scaling, or server management."

The downside of Dark: programmers have to learn a new language

Speaking to TechCrunch, Chisa admitted there was a downside to Dark: you have to learn a new language. "I think the biggest downside of Dark is definitely that you're learning a new language, and using a different editor when you might be used to something else, but we think you get a lot more benefit out of having the three parts working together." Chisa acknowledged that it will require evangelizing the methodology to programmers, who may be used to a particular set of tools for writing their programs. But according to her, the biggest selling point is that it removes the complexity around deployment by bringing an integrated level of automation to the process.

Is Darklang basically like AWS Lambda?

The community on Hacker News compares Dark with AWS Lambda, with many pessimistic about its prospects; in particular, they are skeptical about the efficiency gains Chisa describes. "It only sounds maybe 1 step removed from where aws [sic] lambda's are now," said one user. "You fiddle with the code in the lambda IDE, and submit for deployment. Is this really that much different?"

Dark's co-founder, Paul Biggar, responded to this in the thread: "Dark founder here. Yes, completely agree with this. To a certain extent, Dark is aimed at being what lambda/serverless should have been." He continues: "The thing that frustrates me about Lambda (and really all of AWS) is that we're just dealing with a bit of code and bit of data. Even in 1999 when I had just started coding I could write something that runs every 10 minutes. But now it's super challenging. Why is it so hard to take a request, munge it, send it somewhere, and then respond to it. That should be trivial! (and in Dark, it is)"

The team plans to roll out the product publicly in September. To find out more about Dark, read the team's blog posts, including What is Dark, How Dark is a functional language, and How Dark allows deploys in 50ms.

The V programming language is now open source – is it too good to be true?
"Why was Rust chosen for Libra?", US Congressman questions Facebook on Libra security design choices
Rust's original creator, Graydon Hoare on the current state of system programming and safety

Stack Overflow faces backlash for removing the “Hot Meta Posts” section; community feels left out of decisions

Bhagyashree R
30 Jul 2019
4 min read
Last week, Stack Overflow users took to its Meta site to express their concern regarding the communication breakdown between the site and its community. The users highlighted that Stack Overflow has repeatedly failed to consult the community before rolling out a major change, the most recent being the removal of the "Hot Meta Posts" section. "It has become a trend that changes ("features") are pushed out without any prior consultation. Then, in the introductory meta post of said change, push-back is shown by the community and as a result, once again, everyone is left with a bad taste in their mouth from another sour experience," a user wrote.

The backlash comes after Stack Overflow announced last week that it is removing the "Hot Meta Posts" section from its right sidebar. This section listed questions picked semi-randomly every 20 minutes from all posts scoring 3 or more and posted within the past two weeks. As an alternative, moderators can highlight important posts with the "featured" tag. Among the reasons cited for removing the feature were that Meta hasn't scaled very well since its introduction and that the questions surfaced in "Hot Meta Posts" were not ideal for attracting new people.

Sara Chipps, an engineering manager at Stack Overflow, further said that the feature was affecting the well-being of Stack Overflow employees: "Stack Overflow Employees have panic attacks and nightmares when they know they will need to post something to Meta. They are real human beings that are affected by the way people speak to them. This is outside of the CM team, who have been heroes and who I constantly see abused here."

Earlier this month, Stack Overflow faced a similar backlash when it updated its home page. Many users were upset that the page gave more space to Stack Overflow's new proprietary products while hiding away the public Q&A feature that Stack Overflow is best known for. The company apologized and acted on the feedback. However, users think this has become a routine. A user added, "It's almost as though you (the company, not the individual) don't care about the users (or, from a cynic's perspective, are actively trying to push out the old folks to make way for the new direction SO is headed in) who have been participating for the best part of a decade."

Some users felt that Stack Overflow does consult with users, just not in the open: "I think they are consulting the community, they're just doing it non publicly and in a different forum, via external stakeholders and interested people on external channels, via data science, and via research interviews and surveys from the research list," a user commented.

Yaakov Ellis, a developer on the Community dev team at Stack Overflow, assured users that Stack Overflow is committed to making the community feel involved and has no intention of ceasing interaction between the company and the community. However, he did admit that there is "internal anxiety" about being more open with the community about different projects and initiatives, for the following reasons:

Plans can change, and that is more awkward to do under the magnifying glass of community discussion.
Functionality being worked on can change direction.
Some discussions and features may not be things that the Meta community will be big fans of. And even if we believe that these items are for the best, there will also be times when (with the best intentions), as these decisions have been made after research, data, and users have been consulted, the actual direction is not up for discussion.
We can't always share those for privacy purposes, and this causes the back and forth with objectors to be difficult.

He further said that there is a need to reset some expectations regarding what happens with the feedback provided by users: "We value it, and we absolutely promise to listen to all of it [...], but we can't always take the actions that folks here might prefer. We also can't commit to communicating everything in advance, especially when we know that we're simply not open to feedback about certain things, because that would be wasting people's time."

Do Google Ads secretly track Stack Overflow users?
Stack Overflow confirms production systems hacked
Stack Overflow faces backlash for its new homepage that made it look like it is no longer for the open community


Amazon Transcribe Streaming announces support for WebSockets

Savia Lobo
29 Jul 2019
3 min read
Last week, Amazon announced that its automatic speech recognition (ASR) service, Amazon Transcribe, now supports WebSockets. According to Amazon, "WebSocket support opens Amazon Transcribe Streaming up to a wider audience and makes integrations easier for customers that might have existing WebSocket-based integrations or knowledge".

Amazon Transcribe lets developers easily add speech-to-text capability to their applications through its ASR service. Amazon announced the general availability of Amazon Transcribe at the AWS San Francisco Summit in 2018. With the Amazon Transcribe API, users can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech; real-time transcripts from a live audio stream are also possible. Until now, the Amazon Transcribe Streaming API has been available using HTTP/2 streaming. The new WebSocket support adds another integration option for bringing real-time voice capabilities to projects built with Transcribe.

What are WebSockets?

WebSockets are a protocol built atop TCP, similar to HTTP. HTTP is excellent for short-lived requests, but it does not handle persistent real-time communications well. For this reason, the first Amazon Transcribe Streaming API used HTTP/2 streams, which solve many of HTTP's issues with real-time communication. Amazon states that while "an HTTP connection is normally closed at the end of the message, a WebSocket connection remains open". With this advantage, messages can be sent bi-directionally with no bandwidth or latency added by handshaking and negotiating a connection. WebSocket connections are full-duplex, which means that the server and client can both transmit data at the same time. WebSockets were also designed "for cross-domain usage, so there's no messing around with cross-origin resource sharing (CORS) as there is with HTTP".

Amazon Transcribe Streaming using WebSockets

While you stream audio using the WebSocket protocol, Amazon Transcribe transcribes the stream in real time. When you encode the audio with event stream encoding, Amazon Transcribe responds with a JSON structure that is also encoded using event stream encoding. The key components of a WebSocket request to Amazon Transcribe are (a client-side sketch follows at the end of this article):

Creating a pre-signed URL to access Amazon Transcribe.
Creating binary WebSocket frames containing event stream encoded audio data.
Handling WebSocket frames in the response.

The languages that Amazon Transcribe currently supports during real-time transcription are British English (en-GB), US English (en-US), French (fr-FR), Canadian French (fr-CA), and US Spanish (es-US). To know more about the WebSockets API in detail, visit Amazon's official post.

Understanding WebSockets and Server-sent Events in Detail
Implementing a non-blocking cross-service communication with WebClient [Tutorial]
Introducing Kweb: A Kotlin library for building rich web applications
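As referenced in the key-components list above, a minimal browser-side sketch. The pre-signed URL construction (SigV4 signing) and the event stream encoding/decoding are elided behind hypothetical helpers; only the WebSocket wiring is shown:

```ts
// Hypothetical helpers: createPresignedUrl performs SigV4 signing,
// encodeAudioEvent/decodeTranscriptEvent implement event stream encoding,
// and audioChunks would come from the microphone.
declare function createPresignedUrl(region: string): string;
declare function encodeAudioEvent(pcmChunk: ArrayBuffer): ArrayBuffer;
declare function decodeTranscriptEvent(frame: ArrayBuffer): unknown;
declare const audioChunks: ArrayBuffer[];

const socket = new WebSocket(createPresignedUrl('us-east-1'));
socket.binaryType = 'arraybuffer';

socket.onopen = () => {
  // Send event-stream-encoded audio chunks as binary frames.
  for (const chunk of audioChunks) {
    socket.send(encodeAudioEvent(chunk));
  }
};

socket.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  // Each response frame is an event-stream-encoded JSON transcript.
  console.log(decodeTranscriptEvent(event.data));
};
```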