How-To Tutorials - Front-End Web Development

341 Articles

React Conf 2019: Concurrent Mode preview out, CSS-in-JS, React docs in 40 languages, and more

Bhagyashree R
29 Oct 2019
9 min read
React Conf 2019 wrapped up last week. It kicked off with a keynote by Tom Occhino and Yuzhi Zheng from the React team, who talked about Concurrent Mode and Suspense. They were followed by Frank Yan, also from the React team, who explained how Facebook is building the "new Facebook" with React and Relay. One of the major highlights of his talk was the CSS-in-JS library that will be open-sourced once ready. Sophie Alpert, former manager of the React team, gave a talk on building a custom React renderer; to demonstrate, she implemented a small version of ReactDOM in just 30 minutes. There were many other lightning talks and presentations on translated React docs, building inclusive apps by improving their accessibility, and much more.

React Conf 2019 was a two-day event that took place from Oct 24-25 at Lake Las Vegas, Nevada. The conference brought together front-end and full-stack developers to "share knowledge, skills, to network, and just to have fun."

React's long-term goal: "Making it easier to build great user experiences"

Tom Occhino, Engineering Director of the React group, took to the stage to talk about the goals for React and the community. React's long-term goal, he says, is to make it easier for developers to build great user experiences. "Easier to build" means improving the developer experience. Three factors contribute to a great developer experience: a low barrier to entry, developer productivity, and the ability to scale. React is constantly working towards improving the developer experience by introducing new features. Two such features are Concurrent Mode and Suspense.

Concurrent Mode

Concurrent Mode is a set of features that makes React apps more responsive by rendering component trees without blocking the main thread. It gives React the ability to interrupt big blocks of low-priority work in order to focus on higher-priority work like responding to user input. This lets React work on several state updates concurrently and removes jarring, too-frequent DOM updates. The team also released the first early community preview of Concurrent Mode last week.

https://twitter.com/reactjs/status/1187411505001746432

Suspense

Suspense was introduced as an improvement to the developer experience when dealing with asynchronous data fetching within React apps. It suspends your component rendering and shows a fallback until some condition is met. Occhino describes Suspense as a "React system for orchestrating asynchronous loading of code, data, and resources." He adds, "Suspense lets the component wait for something before they render. This helps consolidate nested dependencies and nested spinners and things behind the single simple loading experience." (A minimal sketch of the pattern appears at the end of this section.)

Towards the end of his keynote, Occhino also touched upon how the team plans to make the React community more inclusive and diverse. He said, "Over the past 10 years, I have learned that diverse teams build better products and make better decisions. Everyone working on React shares my conviction about this." He added, "Up until recently we have taken a pretty passive stance to building and shaping the React community. We have a responsibility to you all and I feel like we let many of you down. We are committed to doing better!" As a first step, the team has now replaced the React code of conduct with the Contributor Covenant.
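To make the Suspense pattern concrete, here is a minimal sketch using React.lazy, which was the stable way to "suspend" on code loading at the time (suspending on data fetching was still experimental). The ProfilePage component and its module path are invented for illustration.

```jsx
import React, { Suspense, lazy } from 'react';

// Hypothetical component, loaded on demand; the chunk is only fetched
// the first time <ProfilePage /> actually renders.
const ProfilePage = lazy(() => import('./ProfilePage'));

function App() {
  return (
    // While the lazy chunk is loading, React renders the fallback,
    // consolidating nested spinners into one loading state.
    <Suspense fallback={<p>Loading profile…</p>}>
      <ProfilePage />
    </Suspense>
  );
}

export default App;
```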
Read also: #Reactgate forces React leaders to confront community's toxic culture head on

What the React team is working on

Yuzhi Zheng, Engineering Manager for the React and Relay teams at Facebook, gave an insight into the projects the core teams are working on. She started off by giving a recap of Hooks, one of the most-awaited React features announced at React Conf 2018. "Hooks are designed for the future of React in the way that it naturally encourages code that is compatible with all the plumbing features such as accessibility, server-side rendering, suspense, and concurrent mode. Since its release, the reception of Hooks has been really positive," she shared. If you want to understand the fundamentals of React Hooks and use them for implementing responsive design and more, check out our book, Learning Hooks.

Another long-term project the team is focusing on is giving developers a way to easily build accessibility features in React. Currently, developers can create accessible websites using standard HTML techniques, but that approach has limitations. To help build accessibility directly into React, the team is working on two areas: managing focus and input interfaces. For managing focus, the team plans to add primitives that provide "a more structured way of making sure component flows well" for cases like React portals and Suspense fallbacks, and that are accessible by default. For input interfaces, they plan to add support for rich gestures that work across platforms and are accessible by default.

The team is also focusing on improving initial render times. Server-side rendering reduces the amount of CPU usage on the client for the initial render to some extent, but it has limitations of its own. To address them, the team plans to add built-in support for server-side rendering. It will work with lazily loaded components to reduce the bytes needed on the client, support streaming down markup in chunks, and be fully compatible with Concurrent Mode and Suspense.

The CSS-in-JS library

Frank Yan, Engineering Manager in the React group at Facebook, talked about how the team has rebuilt and redesigned the Facebook website and the key lessons they learned along the way. The new Facebook website is a single-page app, with React organizing the HTML and JavaScript into components from the top down, and with GraphQL and Relay colocating the queries declaratively in the components. The only key part the team did not reorganize was CSS. Instead, they created a new CSS-in-JS library to embed styles in components. It aims to make styles easier to read, understand, and update, and its syntax is inspired by React Native and other frameworks. Since it lets you embed styles inside JavaScript files, you can also use JavaScript tooling like type checkers and linters.

React docs translated into 40 languages

Nat Alison is a freelance front-end developer who helped the React team coordinate translations of reactjs.org into 40 languages. She shared why and how they were able to translate the docs for this massively popular library: "More than 80% of the world's population does not know English. If we restrict React, one of the most popular JavaScript frameworks, we restrict who gets to create and shape the web." Providing officially translated docs will make it easier for many non-English-speaking React developers to understand React and use it in their projects.
It will also prevent users from creating unofficial translations, which can be incorrect, outdated, or difficult to find. Initially, the team thought of integrating a SaaS platform that lets users submit translations, but this was not a feasible solution. They then looked at the approach used by Vue, which maintains separate repositories for each language, forked from the original repo. Like Vue, the React team also created a bot that periodically checks for changes in the English repo and submits pull requests whenever there is a change. If you want to contribute to translating the React docs into your language, check out the IsReactTranslatedYet website.

Developing accessible apps

Brittany Feenstra, a developer at Formidable, took to the stage to talk about why accessibility is important and how to approach it. Accessibility, or a11y, means making your apps and websites usable for everyone, including people with disabilities. There are four types of disabilities that developers need to design for: visual, auditory, motor, and cognitive. Feenstra mentioned that though we are all aware of the importance of accessibility, we often "end up saving it for later" because of tight deadlines.

Feenstra, however, compares accessibility to running a marathon. It is not something you can achieve in just one sprint, she says. You should instead look at it as the training program you would follow when preparing for a marathon, taking a step-by-step approach to building an accessible app. If we do that, "we will be way less fatigued and well-equipped," she adds. Sharing some starting tips, she said we need to focus on three areas. First, learn to run: in an accessibility context, understand HTML semantics, then explore reference patterns, navigation, and focus traps. Second, improve nutritional habits: use environments and tools that help us write sturdier code. She recommends axe, an accessibility checker for WCAG 2 and Section 508 accessibility, as well as tools that simulate how people with visual impairments will see your UI, such as NoCoffee and I want to see like the colour blind. She also emphasizes linting and testing your code for accessibility with the help of eslint-plugin-jsx-a11y and accessibility assessment automation tools. Third, cross-train and stretch: learn to "interact with the UI in ways that let us understand the update we are making to our code."

"React is Fiction"

This was a talk by Jenn Creighton, a front-end architect at The Wing who comes from a creative writing background. "Writing React to me felt like coming home. It was really familiar in a way that I could not pinpoint," she said. Then she realized that writing React reminded her of fiction, and merging the two disciplines helped her write better components. Creighton drew similarities between developing in React and creative writing. One of the key principles of creative writing is "Show, don't tell," which advises authors to describe a situation rather than just state it, engaging readers by letting them picture the situation in their heads. According to Creighton, React has a similar principle: "Declarative, not imperative." React is declarative: it lets developers describe what the final state should be, instead of listing all the steps to reach that state. (A small sketch contrasting the two styles follows below.)

There were many other exciting talks about progressive web animations, building React-Select, and more.
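To make the "Declarative, not imperative" contrast concrete, here is a small sketch; the counter, element ID, and class names are invented for illustration.

```jsx
// Imperative: spell out each DOM step yourself.
function renderCountImperative(count) {
  const el = document.getElementById('count'); // assumes a #count element exists
  el.textContent = `Clicked ${count} times`;
  el.className = count > 10 ? 'warning' : 'normal';
}

// Declarative (React): describe what the UI should look like for a
// given state, and let React figure out the DOM updates.
function Counter({ count }) {
  return (
    <p className={count > 10 ? 'warning' : 'normal'}>
      Clicked {count} times
    </p>
  );
}
```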
Check out the live streams to watch the full talks:

Day 1: https://www.youtube.com/watch?v=RCiccdQObpo
Day 2: https://www.youtube.com/watch?v=JDDxR1a15Yo&t=2376s

Read next:

Ionic React released; Ionic Framework pivots from Angular to a native React version
ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more
React Native 0.61 introduces Fast Refresh for reliable hot reloading


Firefox 70 released with better security, CSS, and JavaScript improvements

Savia Lobo
23 Oct 2019
6 min read
The Mozilla team announced the much-awaited release of Firefox 70 yesterday, with new features like secure password generation with Lockwise and the new Firefox Privacy Protection Report. Firefox 70 also includes a plethora of additions for developers, such as DOM mutation breakpoints and inactive CSS rule indicators in the DevTools, several new CSS text properties, two-value display syntax, JS numeric separators, and much more.

Firefox 70 centers around enhanced privacy and security

Firefox 70 extends Enhanced Tracking Protection (ETP) with a Firefox Privacy Protection Report that gives additional details and more visibility into how you're being tracked online so you can better combat it. ETP was enabled by default in September this year. The report highlights how ETP prevents third-party trackers from building a user's profile based on their online activity, and it counts the cross-site and social media trackers, fingerprinters, and cryptominers Mozilla blocked.

The report also helps users keep up to date with Firefox Monitor and Firefox Lockwise. Firefox Monitor gives users a summary of the number of unsafe passwords that may have been used in a breach, so they can take action to update and change those passwords. Firefox Lockwise helps users manage passwords across synced devices. It includes a button where users can click to view their logins and updates, and it lets them quickly view and manage how many devices they are syncing and sharing passwords with. To know more about security in Firefox 70, read Mozilla's blog.

What's new in Firefox 70

Updated HTML forms and secure passwords

To generate secure passwords, the team has updated HTML input elements: any input element of type password will have an option to generate a secure password available in the context menu, which can then be stored in Lockwise. In addition, any type="password" field with autocomplete="new-password" set on it will show an autocomplete UI to generate a new password in context.

New CSS improvements

Firefox 70 includes CSS improvements like new options for styling underlines and a new set of two-keyword display values. The options for styling underlines comprise three new text-decoration (underline) properties:

- text-decoration-thickness: sets the thickness of lines added via text-decoration.
- text-underline-offset: sets the distance between a text decoration and the text it is set on. Bear in mind that this only works on underlines.
- text-decoration-skip-ink: sets whether underlines and overlines are drawn if they cross descenders and ascenders. The default value, auto, causes them to only be drawn where they do not cross over a glyph. To allow underlines to cross glyphs, set the value to none.

Two-keyword display values

Until now, the display property has taken a single value. However, as the team explains, "the boxes on a page have an outer display type, which determines how the box is laid out in relation to other boxes on the page, and an inner display type, which determines how the box's children will behave." The two-keyword values allow you to explicitly specify the outer and inner display values.
In supporting browsers (currently only Firefox), the single keyword values map to new two-keyword values, for example:

- display: flex; is equivalent to display: block flex;
- display: inline-flex; is equivalent to display: inline flex;

JavaScript improvements

Firefox 70 now supports numeric separators for JavaScript: underscores can be used as separators in large numbers to make them more readable (a short example appears at the end of this section). Other JavaScript improvements include:

Intl improvements

Firefox 70 improves JavaScript i18n (internationalization), starting with the implementation of the Intl.RelativeTimeFormat.formatToParts() method. This is a special version of Intl.RelativeTimeFormat.format() that returns an array of objects, each one representing a part of the value, rather than a string of the localized time value. Also, Intl.NumberFormat.format() and Intl.NumberFormat.formatToParts() now accept BigInt values.

Performance improvements

The inclusion of the new baseline interpreter has sped up JavaScript. The code for the new interpreter shares code with the existing Baseline JIT. You can read more about it in "The Baseline Interpreter: a faster JS interpreter in Firefox 70."

New developer tools

The Developer Tools Accessibility panel now includes an audit for keyboard accessibility and a color deficiency simulator for systems with WebRender enabled.

DOM mutation breakpoints in the Debugger

DOM mutation breakpoints (aka DOM change breakpoints) let you pause scripts that add, remove, or change specific elements. Once a DOM mutation breakpoint is set, you'll see it listed under "DOM Mutation Breakpoints" in the right-hand pane of the Debugger; this is also where breaks are reported. (Screenshot source: Mozilla Hacks)

Color contrast information in the color picker

In the CSS Rules view, you can click foreground colors with the color picker to determine if their contrast with the background color meets accessibility guidelines.

Accessibility inspector: keyboard checks

The Accessibility inspector's "Check for issues" dropdown now includes keyboard accessibility checks. Selecting this option causes Firefox to go through each node in the accessibility tree and highlight all those with a keyboard accessibility issue. Hovering over or clicking each one reveals information about the issue, along with a "Learn more" link for details on how to fix it.

WebSocket inspector

In Firefox DevEdition, the Network monitor now has a new "Messages" panel, which appears when you are monitoring a WebSocket connection (i.e., a 101 response). It can be used to inspect WebSocket frames sent and received through the connection. This functionality was originally supposed to be in the Firefox 70 general release, but the team had a few more bugs to resolve, so expect it in Firefox 71. For now, users can explore it in the DevEdition.

Fixed issues in Firefox 70

- Built-in Firefox pages now follow the system dark mode preference.
- Aliased theme properties have been removed, which may affect some themes.
- Passwords can now be imported from Chrome on macOS, in addition to the existing support on Windows.
- Readability is now greatly improved for underlined or overlined text, including links: the lines are interrupted instead of crossing over a glyph.
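As a quick illustration of the JavaScript additions above, here is what numeric separators and the new Intl methods look like; the output comments reflect typical results for the en locale.

```js
// Numeric separators: underscores make large literals readable
// and have no effect on the value.
const fileSizeLimit = 1_000_000_000;
console.log(fileSizeLimit === 1000000000); // true

// Intl.RelativeTimeFormat.formatToParts() returns the localized
// value broken into parts instead of a single string.
const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
console.log(rtf.format(-1, 'day'));
// "yesterday"
console.log(rtf.formatToParts(-2, 'day'));
// [{ type: 'integer', value: '2', unit: 'day' },
//  { type: 'literal', value: ' days ago' }]

// Intl.NumberFormat.format() now accepts BigInt values.
console.log(new Intl.NumberFormat('en-US').format(1_000_000n)); // "1,000,000"
```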
Improved privacy and security indicators

- A new crossed-out lock icon indicates sites delivered via insecure HTTP.
- The formerly green lock icon is now grey.
- The Extended Validation (EV) indicator has been moved to the identity popup that appears when clicking the lock icon.

To know more about the other improvements and bug fixes in Firefox 70, read Mozilla's official blog.

Read next:

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Mozilla Thunderbird 78 will include OpenPGP support, expected to be released by Summer 2020


5 pitfalls of React Hooks you should avoid - Kent C. Dodds

Sugandha Lahoti
09 Sep 2019
7 min read
The React community first introduced Hooks back in October 2018 as a way to use React state and other React features without classes. The idea was simple: with the help of Hooks, you can "hook into" React state and lifecycle features from function components. In February 2019, React 16.8 was released with the stable implementation of Hooks.

Popular as Hooks are, there are certain pitfalls developers should avoid when learning and adopting them. In his talk "React Hook Pitfalls" at React Rally 2019 (August 22-23, 2019), Kent C. Dodds covers five common pitfalls of React Hooks and how to avoid or fix them. Kent is a world-renowned speaker and a maintainer and contributor of hundreds of popular npm packages. He is actively involved in the open source communities of React and the general JavaScript ecosystem, and he is the creator of react-testing-library, which provides simple and complete React DOM testing utilities that encourage good testing practices.

Tl;dr

- Problem: Starting without a good foundation. Solution: Read the React Hooks docs and the FAQ.
- Problem: Not using (or ignoring) the ESLint plugin. Solution: Install, use, and follow the ESLint plugin.
- Problem: Thinking in lifecycles. Solution: Don't think about lifecycles; think about synchronizing side effects to state.
- Problem: Overthinking performance. Solution: React is fast by default, so research before applying performance optimizations prematurely.
- Problem: Overthinking the testing of React Hooks. Solution: Avoid testing the "implementation details" of the component.

Pitfall #1: Starting without a good foundation

Often React developers begin coding without reading the documentation, and that leads to a number of issues and small problems. Kent recommends that developers start by reading the React Hooks documentation and the FAQ section thoroughly. He jokingly adds, "Once you read the frequently asked questions, you can ask the infrequently asked questions. And then maybe those will get in the docs, too. In fact, you can make a pull request and put it in yourself."

Pitfall #2: Not using (or ignoring) the ESLint plugin

The ESLint plugin is the official plugin built by the React team. It has two rules: "rules of hooks" and "exhaustive deps." The recommended configuration sets "rules of hooks" to an error and "exhaustive deps" to a warning (a minimal configuration sketch follows below). The linter plugin enforces these rules automatically. The two "Rules of Hooks" are:

- Don't call Hooks inside loops, conditions, or nested functions. Instead, always use Hooks at the top level of your React function. By following this rule, you ensure that Hooks are called in the same order each time a component renders.
- Only call Hooks from React functions. Don't call Hooks from regular JavaScript functions; call them from React function components or from custom Hooks.

Kent concedes that the rule is sometimes incapable of properly performing static analysis on your code due to limitations of ESLint. "I believe," he says, "this is why it's recommended to set the exhaustive deps rule to 'warn' instead of 'error.'" When this happens, the plugin will tell you so in the warning, and he recommends that developers restructure their code to avoid the warning. The solution Kent offers for this pitfall: install, follow, and use the ESLint plugin. It will not only catch easily missable bugs, but also teach you things about your code and Hooks in the process.
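For reference, here is a minimal ESLint configuration matching the setup described above, assuming eslint-plugin-react-hooks is installed (for example, via npm install --save-dev eslint-plugin-react-hooks):

```js
// .eslintrc.js — minimal sketch of the recommended Hooks lint setup
module.exports = {
  plugins: ['react-hooks'],
  rules: {
    'react-hooks/rules-of-hooks': 'error', // enforce the Rules of Hooks
    'react-hooks/exhaustive-deps': 'warn', // flag missing effect dependencies
  },
};
```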
Pitfall #3: Thinking in lifecycles

With Hooks, components are declarative. Kent says this lets you stop thinking about "when things should happen in the lifecycle of the component" (which doesn't matter that much) and think more about "when things should happen in relation to state changes" (which matters much more). With React Hooks, he adds, you're not thinking about component lifecycles; you're thinking about synchronizing the state of the side effects with the state of the application. This idea is difficult for React developers to grasp initially, but once you do, he adds, you will naturally experience fewer bugs in your apps thanks to the design of the API.

https://twitter.com/ryanflorence/status/1125041041063665666

Solution: Think about synchronizing side effects to state, rather than lifecycle methods. (A small sketch of this idea follows at the end of this section.)

Pitfall #4: Overthinking performance

Kent says that even though it's really important to be considerate of performance, you should also think about your code complexity. If your code is complex, you can't give people the great features they're looking for, because you will spend all your time dealing with that complexity. He adds that "unnecessary re-renders" are not necessarily bad for performance: just because a component re-renders doesn't mean the DOM will get updated (and updating the DOM is what can be slow). React does a great job at optimizing itself; it's fast by default. On this, he notes: "If your app's unnecessary re-renders are causing your app to be slow, first investigate why renders are slow. If rendering your app is so slow that a few extra re-renders produces a noticeable slow-down, then you'll likely still have performance problems when you hit 'necessary re-renders.' Once you fix what's making the render slow, you may find that unnecessary re-renders aren't causing problems for you anymore." If unnecessary re-renders are still causing you performance problems, then you can reach for the built-in performance optimization APIs like React.memo, React.useMemo, and React.useCallback. There is more information on this in Kent's blog post on useMemo and useCallback.

Solution: React is fast by default, so research before applying performance optimizations prematurely; profile your app and then optimize it.

Pitfall #5: Overthinking the testing of React Hooks

Kent says that people are often concerned they will need to rewrite their tests along with all of their components when they refactor from class components to Hooks. He explains, "Whether your component is implemented via Hooks or as a class, it is an implementation detail of the component. Therefore, if your test is written in such a way that reveals that, then refactoring your component to hooks will naturally cause your test to break." He adds, "But the end-user doesn't care about whether your components are written with hooks or classes. They just care about being able to interact with what those components render to the screen. So if your tests interact with what's being rendered, then it doesn't matter how that stuff gets rendered to the screen, it'll all work whether you're using classes or hooks."

To avoid this pitfall, Kent recommends writing tests that work regardless of whether you're using classes or Hooks: before you upgrade to Hooks, start writing your tests free of implementation details, and your refactored Hooks can be validated by the tests you've written for your classes. As he puts it, "The more your tests resemble the way your software is used, the more confidence they can give you."
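Here is a minimal sketch of the "synchronize side effects with state" idea from Pitfall #3; the unreadCount prop and the document-title effect are invented for illustration:

```jsx
import React, { useEffect } from 'react';

function Inbox({ unreadCount }) {
  // Not "do this on mount and on update" (lifecycle thinking), but
  // "keep document.title in sync with unreadCount" (state thinking).
  useEffect(() => {
    document.title = `Inbox (${unreadCount})`;
  }, [unreadCount]); // re-synchronize only when unreadCount changes

  return <h1>You have {unreadCount} unread messages</h1>;
}

export default Inbox;
```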
In review:

- Read the docs and the FAQ.
- Install, use, and follow the ESLint plugin.
- Think about synchronizing side effects to state.
- Profile your app and then optimize it.
- Avoid testing implementation details.

Watch the full talk on YouTube: https://www.youtube.com/watch?v=VIRcX2X7EUk

Read more about React:

#Reactgate forces React leaders to confront community's toxic culture head on
React.js: why you should learn the front end JavaScript library and how to get started
Ionic React RC is now out!


OpenTracing and OpenCensus merge into OpenTelemetry project; Google introduces OpenCensus Web

Sugandha Lahoti
13 Aug 2019
4 min read
Google has introduced OpenCensus Web, an extension of OpenCensus that provides a library for collecting application performance and behavior monitoring data of web pages. The library focuses on front-end web application code that executes in the browser, allowing it to collect user-side performance data. It is still in alpha, with the API subject to change. This is great news for naturally heavy websites, such as media-driven pages like Instagram, Facebook, YouTube, and Amazon, and for web apps in general.

OpenCensus Web interacts with three application components: the front-end web server, the browser JS, and the OpenCensus Agent. The agent receives traces from the front-end web server proxy endpoint or directly from the browser JS, and exports them to a trace backend.

Features of OpenCensus Web

- OpenCensus Web traces spans for the initial load, including server-side HTML rendering.
- The initial-load spans also include detailed annotations for DOM load events as well as network events.
- It automatically traces all click events, as long as the click is done in a DOM element and it is not disabled.
- OC Web traces route transitions between the different sections of your page by monkey-patching the History API.
- It allows users to create custom spans for their web application for tasks or code involved in user interaction (a sketch follows below).
- It performs automatic spans for HTTP requests and browser performance data.
- OC Web relates user interactions back to the initial page load tracing.

Along with this release, the OpenCensus family of projects is merging with OpenTracing into OpenTelemetry. This means all of the OpenCensus community will be moving over to OpenTelemetry, Google and Omnition included. OpenCensus Web's functionality will be migrated into OpenTelemetry JS once that project is ready. Omnition's founder wrote on Hacker News, "Although Google will be heavily involved in both the client libraries and agent development, Omnition, Microsoft, and others will also be major contributors."

Another Hacker News comment explains the merger in more detail: "OpenCensus is a Google project to standardize metrics and distributed tracing. It's an API spec and libraries for various languages with varying backend support. OpenTracing is a CNCF project as an API for distributed tracing with a separate project called OpenMetrics for the metrics API. Neither include libraries and rely on the community to provide them. The industry decided for once that we don't need all this competing work and is consolidating everything into OpenTelemetry that combines an API for tracing and metrics along with libraries. Logs (the 3rd part of observability) are in the planning phase. OpenCensus Web is bringing the tracing/metrics part to your frontend JS so you can measure how your webapp works in addition to your backend apps and services."

By September 2019, OpenTelemetry plans to reach parity with the existing projects for C#, Golang, Java, NodeJS, and Python. When each language reaches parity, the corresponding OpenTracing and OpenCensus projects will be sunset (the old projects will be frozen, but the new project will continue to support existing instrumentation for two years via a backwards-compatibility bridge). Read more on the OpenTelemetry roadmap.
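As a rough sketch of what creating a custom span around a user interaction might look like, the snippet below follows the general shape of the tracer API in the OpenCensus JavaScript libraries at the time; the package name, the tracing export, and the span and function names are assumptions, so check the OpenCensus Web docs for the exact API.

```js
// Assumed import: the OpenCensus Web zone-instrumentation package
// exposed a `tracing` object with a tracer (treat this as illustrative).
import { tracing } from '@opencensus/web-instrumentation-zone';

// Placeholder for the app's actual work during the interaction.
function doCheckoutWork() { /* ... */ }

function handleCheckoutClick() {
  // Hypothetical custom span wrapping work done in a user interaction;
  // it is exported to the OpenCensus Agent with the rest of the trace.
  const span = tracing.tracer.startChildSpan({ name: 'checkout-click' });
  doCheckoutWork();
  span.end();
}
```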
Public reaction to OpenCensus Web has been positive, with people sharing their opinions on a Hacker News thread:

"This is great, as the title says, this means that web applications can now have tracing across the whole stack, all within the same platform."

"I am also glad to know that the merge between OpenTracing and OpenCensus is still going well. I started adding telemetry to the projects I maintain in my current job and so far it has been very helpful to detect not only bottlenecks in the operations but also sudden spikes in the network traffic since we depend on so many 3rd-party web API that we have no control over. Thank you OpenCensus team for providing me with the tools to learn more."

For more information about OpenCensus Web, visit Google's blog.

Read next:

CNCF Sandbox, the home for evolving cloud-native projects, accepts Google's OpenMetrics Project
Google open sources ClusterFuzz, a scalable fuzzing tool
Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard


Vue maintainers proposed, listened, and revised the RFC for hooks in Vue API

Bhagyashree R
28 Jun 2019
6 min read
The internet was ablaze when Evan You, creator of Vue, published an RFC to introduce a function-based component API earlier this month, sparking a huge discussion in the Vue community about whether such an API is really needed.

https://twitter.com/youyuxi/status/1137567675356291072

The proposal came after Evan You previewed an experimental Hooks API back in November at Vue Conf Toronto 2018.

Why Vue needs a function-based component API

Components let you abstract your code into smaller pieces. This gives your web applications a better structure, makes your code more readable and understandable, and, most importantly, lets you reuse logic across multiple components. According to the RFC, the component API in Vue 2.x has some drawbacks in terms of reusability. The three patterns generally used to achieve reusability in Vue are mixins, higher-order components (HOCs), and renderless components, and each comes with its share of drawbacks:

- Mixins bring implicit dependencies into code, cause name clashes, and make your code harder to understand.
- HOCs can be verbose, involve lots of passing props and hoisting statics, and can cause name conflicts.
- Renderless components require extra stateful component instances that come at the cost of performance.

The function-based component API aims to address all of these drawbacks. Inspired by React Hooks, its objective is to give developers a "clean and flexible way" to compose logic and share it between components. The team plans to achieve this by moving the logic into "composition functions" that return reactive state (a sketch of the idea appears after this section). Another motivation behind the proposed change is better built-in TypeScript type inference, as function-based APIs are naturally type-friendly. Code written with function-based APIs also compresses better than object- or class-based code.

What do Vue developers think about the RFC?

The Vue community was a little taken aback by a proposal that would essentially change the way they write Vue. They were concerned it would take away Vue's most desirable property: its simplicity. Vue's class-based API made it easy to understand and get started with; bringing a function-based API to Vue, they argued, would complicate things in exchange for comparatively few advantages. Some argued the change would make Vue just another React. "Like a lot of others here, I chose Vue vs React for the simplicity and readability of code. The class-based API was easy to understand and pick up. If I wanted React, I would have just chosen React from the beginning. I get that there are some technical advantages to doing this, but Vue 3 is starting to really turn me off of staying with Vue going forward," a developer shared on a Reddit thread.

Developers were concerned that the time they had invested in learning Vue would go to waste as everything was about to change. A Vue developer commented on Reddit, "You learn to do something one way and then they change it up on you. Might as well just switch to react at this point." Many compared the scenario to Angular 1 to 2, or Python 2 to 3, and suggested switching to Svelte to avoid the mess. Some, however, liked the syntax and looked forward to playing around with the API. A developer shared, "But I read through, checked out the new (simpler) example, read Evan's arguments about logical task grouping, and on a second read with a more open mind, I actually kind of like the new syntax and am now looking forward to trying it out. I'm glad they agreed to keep the object syntax around though."
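To give a flavor of the "composition function" idea, here is a minimal sketch. The names follow the shape the proposal eventually took in Vue 3's Composition API (the original RFC called ref() "value()"), and useCounter is an invented example:

```js
import { ref, computed, onMounted } from 'vue';

// A "composition function": reusable logic in a plain function,
// instead of a mixin, HOC, or renderless component.
function useCounter() {
  const count = ref(0);
  const double = computed(() => count.value * 2);
  const increment = () => { count.value++; };
  return { count, double, increment };
}

export default {
  setup() {
    const { count, double, increment } = useCounter();
    onMounted(() => console.log('mounted with count =', count.value));
    // Everything returned here becomes available to the template.
    return { count, double, increment };
  },
};
```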
How the Vue team responded

When the RFC was first published, it implied that the current API would be deprecated in a future major release. There was also a lot of confusion around the "compatibility" and "stable" builds. Many developers felt the RFC was already "set in stone" from the way it was communicated, as if the core team had decided to bring this API to Vue without community consultation. One of the reasons behind this confusion was how the change was communicated; the team acknowledged this and asked the community for suggestions to improve their communication.

https://twitter.com/N_Tepluhina/status/1142715703558103040

The core team clarified that the update will be additive and that there are no plans to remove the Object API in a future major release. Evan You, the creator of Vue, said in a thread, "feel free to stay with the current API for as long as you wish. As long as the community feels there's a need for the old API to stay, it will stay. The only one that can make the decision to switch to the new API is yourself." He also addressed the concerns on a Hacker News thread:

"There is a lot of FUD in this thread so we need to clarify a bit:
- This API is purely additive to 2.x and doesn't break anything.
- 3.0 will have a standard build which adds this API on top of 2.x API, and an opt-in 'lean build' which drops a number of 2.x APIs for a smaller and faster runtime.
- This is an open RFC, which means it's not set in stone. The whole point of having an RFC is so that users can voice their opinions. It's not like we are shipping this tomorrow."

After listening to the various perspectives shared by developers, the core team revised the RFC accordingly, finally putting everybody at ease. Guillaume Chau, a member of the Vue core team, put out a clear and concise plan of action on Twitter, to which people responded positively. The plan reassured developers that the Object API will not be deprecated until the community stops using it, and that the proposed API will first be offered as a standalone plugin for Vue 2.x.

https://twitter.com/Akryum/status/1143114880960126976

Some developers have also started to try out the new API:

https://twitter.com/igor_randj/status/1143302939496370177
https://twitter.com/cmsalvado/status/1143230023089786880

Closing thoughts

Open source programmers put their time and effort into building software that helps the community, and an RFC (request for comments) is a way for the community to get involved in building high-quality software at scale. Through an RFC, you can share constructive feedback on why a change is or is not necessary, and all of this can be done respectfully. This episode was a good example of how an RFC should really work: publish an RFC, discuss it with the community, listen, and decide collectively what to do next. Despite some hiccups in communication, the Vue core team did a good job of engaging with the community to develop the roadmap for the function-based component API in Vue.

Read the RFC for the function-based component API for more details.

Read next:

Vue 2.6 is now out with a new unified syntax for slots, and more
Learning Vue in 2019 with Anthony Gore's developer knowledge map
Evan You shares Vue 3.0 updates at VueConf Toronto 2018


Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users

Bhagyashree R
23 May 2019
5 min read
Yesterday, Apple announced a new ad attribution model that aims to strike the right balance between online user privacy and advertisers' ability to measure the effectiveness of their ad campaigns. The model, named Privacy Preserving Ad Click Attribution, is implemented in WebKit and is offered as an experimental feature in Safari Technology Preview 82+.

The ad attribution model and its privacy concerns

Online advertising is one of the most effective media for businesses to expand their reach and find new customers, and an ad click attribution model lets advertisers analyze which of their many advertising campaigns or marketing channels lead to actual conversions. Generally, ad attribution is done through cookies and something called "tracking pixels." Cookies are small data files stored by your browser to remember stateful information, for instance, items added to the shopping cart in an online store. A tracking pixel is a piece of HTML code that is loaded when a user visits a website or opens an email. If proper privacy protections are not employed, websites can use this data for user profiling. Worse, the data can also be sent to third parties like data brokers, affiliate networks, and advertising networks. This collection of browsing data across multiple websites is what is referred to as cross-site tracking.

How Apple's ad attribution aims to help

Apple's ad attribution model is built directly into the browser and runs on-device, ensuring that the browser vendor cannot see what advertisements are clicked or what purchases are made. The Privacy Preserving Ad Click Attribution model works in three steps:

Storing ad clicks

Under Apple's proposal, the page hosting the ad is responsible for storing ad clicks, via two optional attributes: adDestination, the domain the ad click navigates the user to, and adCampaignID, the identifier of the ad campaign. Neither the browser vendor nor the website is allowed to read the stored ad click data or detect that it exists. The data is stored for a limited time; in the case of WebKit, 7 days.

Matching conversions against stored ad clicks

The second step, matching conversions against stored ad clicks, lets advertisers understand which of their ad campaigns are the most effective. A conversion is getting the user to perform the desired action in response to your advertisement, for instance, adding an item to the shopping cart or signing up for a new service. In this model, tracking pixels are used to determine which actions taken by the user benefit the business. Data like the user's location, the time of day, the value of the conversion, or other relevant details are passed to the browser through different parameters. Apple ensures that no sensitive data like names or addresses is stored.

Sending out ad click attribution data

In the last step, the browser reports the conversion to the website or marketer. After the conversion is matched to an ad, the browser sets a timer at random between 24 and 48 hours to send a stateless POST request to the advertiser, passing along the ad campaign and other parameters within that window.

Apple is previewing this model in Safari Technology Preview 82+.
Apple is also proposing the model as a standard through the W3C Web Platform Incubator Community Group (WICG). The model has received mixed reactions from users. Some think it can help reduce online tracking. A Reddit user supporting the initiative said, "Ad companies are not having trouble attributing campaigns. The problem is that small, uncoordinated 'privacy' features cause Ad Tech companies to become far more aggressive in how they track users. It's not the companies that lose here, it's you. A standardized, privacy-centric method for companies to accomplish attribution will help end the arms race and move back to a more consumer-friendly model. Small edges are worth a fortune in Ads. This is like the war on drugs. Clamping down and assuming ad companies will walk away is way too optimistic. Instead, they will move deeper into the shadows at whatever the cost."

Others think that a browser should be on the user's side, not in the business of helping online advertisement. "I certainly have never wanted my browser to report ad click attribution," another Redditor remarked.

Read the full announcement by Apple for more details.

Read next:

Apple Pay will soon support NFC tags to trigger payments
U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case
Apple plans to make notarization a default requirement in all future macOS updates

Will Grant’s 10 commandments for effective UX Design

Will Grant
11 Mar 2019
8 min read
Somewhere along the journey of web maturity, we forgot something important: user experience is not art. It's the opposite of art. UX design should perform one primary function: serving users. Your UX design has to look great, but not at the expense of hampering the working of the website. This is an extract from 101 UX Principles by Will Grant. Read our interview with Will here.

#1 Empathy and objectivity are the primary skills of a UX professional

Empathy and objectivity are the primary skills you must possess to be good at UX. This is not to undermine those who have spent many years studying and working in the UX field; their insights and experience are valuable. Rather, it is to say that study and practice alone are not enough. You need empathy to understand your users' needs, goals, and frustrations, and objectivity to look at your product with fresh eyes, spot the flaws, and fix them. You can learn everything else.

Read More: Soft skills every data scientist should teach their child

#2 Don't use more than two typefaces

Too often, designers add too many typefaces to their products. You should aim to use two typefaces at most: one for headings and titles, and another for body copy that is intended to be read. Using too many typefaces creates too much visual "noise" and increases the effort the user has to put into understanding the view in front of them. What's more, many custom-designed brand typefaces are made with punchy visual impact in mind, not readability. Use weights and italics within one font family for emphasis, rather than switching to another family. Typically, this means using your corporate brand font for headings, while leaving the controls, dialogs, and in-app copy (which need to be clearly legible) in a more proven, readable typeface.

#3 Make your buttons look like buttons

There are parts of your UI that can be interacted with, but your user doesn't know which parts and doesn't want to spend time learning. Flat design is bad for usability: it's style over substance, and it forces your users to think harder about every interaction they make with your product. Stop making it hard for your customers to find the buttons. By drawing on real-world examples, we can make UI buttons that are obvious and instantly familiar; by using real-life inspiration to create affordances, a new user can identify the controls right away. Create the visual cues your user needs to know instantly that they're looking at a button that can be tapped or clicked.

#4 Make 'blank slates' more than just empty views

The default behavior of many apps is to simply show an empty view where the content would be. For a new user, this is a pretty poor experience and a massive missed opportunity to give them some extra orientation and guidance. The blank slate is only shown once, before the user has generated any content, which makes it an ideal way of orienting people to the functions of your product while staying out of the way of more established users who will hopefully "know the ropes" a little better. For that reason, offering users a useful blank slate should be considered mandatory for UX designers.

#5 Hide 'advanced' settings from most users

There's no need to include every possible option on your menu when you can hide advanced settings away. Group settings together, but separate the more obscure ones into their own section of "power user" settings.
These should also be grouped into sections if there are a lot of them (don't just throw all the advanced items in at random). Hiding advanced settings not only reduces the number of items a user has to mentally juggle, it also makes the app appear less daunting by hiding complex settings from most users. By picking good defaults, you can ensure that the vast majority of users will never need to alter advanced settings; for the ones that do, an advanced menu section is a well-used pattern.

#6 Use device-native input features where possible

If you're using a smartphone or tablet to dial a telephone number, the device's built-in phone app will have a large numeric keypad that won't force you to use a fiddly QWERTY keyboard for numeric entry. Sadly, too often we ask users to use the wrong input features in our products. By leveraging what's already there, we can turn painful form entry experiences into effortless interactions. No matter how good you are, you can't justify spending the time and money other companies have spent on making usable system controls. Even if you get it right, it's still yet another UI for your user to learn, when there's a perfectly good one already built into their device. Use that one.

#7 Always give icons a text label

Icons are used and misused so relentlessly, across so many products, that you can't rely on any single icon to convey a definitive meaning. For example, if you're offering a "history" feature, there's a wide range of pictogram clocks, arrows, clocks within arrows, hourglasses, and parchment scrolls to choose from. This can confuse the user, so you need to add a text label that makes clear what the icon means in this context within your product. Often, a designer will decide to sacrifice the icon label on mobile responsive views. Don't do this: mobile users still need the label for context. The icon and the label then work in tandem to provide context and instruction, and to offer recall to the user, whether they're new to your product or use it every day.

#8 Decide if an interaction should be obvious, easy, or possible

To help decide where (and how prominently) a control or interaction should be placed, it's useful to classify interactions into one of three types.

Obvious interactions: the core function of the app, for example, the shutter button on a camera app or the "new event" button on a calendar app.

Easy interactions: for example, switching between the front-facing and rear-facing lens in a camera app, or editing an existing event in a calendar app.

Possible interactions: rarely used, often advanced features. For example, it is possible to adjust the white balance or auto-focus in a camera app, or to make an event recurring in a calendar app.

#9 Don't join the dark side

So-called "dark patterns" are UI or UX patterns designed to trick the user into doing what the corporation or brand wants them to do. These are, in a way, exactly the same scams used by old-time fraudsters and rogue traders, now transplanted to the web and updated for the post-internet age.
Examples include:

- Shopping carts that add extra "add-on" items (like insurance, protection policies, and so on) to your cart before you check out, hoping you won't remove them
- Search results that begin their list with the item they'd like to sell you instead of the best result
- Ads that don't look like ads, so you accidentally tap them
- Changing a user's settings: edit your private profile, and if you don't explicitly make it private again, the company will switch it back to public
- Unsubscribe "confirmation screens," where you have to uncheck a ton of checkboxes just right to actually unsubscribe

In some fields, medicine for example, professionals have a code of conduct and ethics that forms the core of the work they do. Building software does not have such a code of conduct, but maybe it should.

#10 Test with real users

There's a myth that user testing is expensive and time-consuming, but the reality is that even very small test groups (fewer than 10 people) can provide fascinating insights. The nature of such tests is very qualitative and doesn't lend itself well to quantitative analysis, so you can learn a lot from working with a small sample set of fewer than 10 users.

Read More: A UX strategy is worthless without a solid usability test plan

You need to test with real users: not your colleagues, not your boss, and not your partner. You need to test with a diverse mix of people, from the widest section of society you can get access to. User testing is an essential step to understanding not just your product but also the users you're testing: what their goals really are, how they want to achieve them, and where your product delivers or falls short.

Summary

In the web development world, UX and UI professionals keep making UX mistakes, trying to reinvent the wheel and forgetting to put themselves in the place of a user. Following these 10 commandments and applying them to your software design will create more usable and successful products that look great without hindering functionality.

Read next:

Is your web design responsive?
What UX designers can teach Machine Learning Engineers? To start with: Model Interpretability


npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn

Bhagyashree R
10 Dec 2018
3 min read
Based on Laurie Voss's talk at Node+JS Interactive 2018 on Friday, npm has shared some insights and predictions about JavaScript for 2019. These predictions are aimed at helping developers make better technical choices in 2019. Here are the four predictions npm has made:

"You will abandon one of your current tools."

In JavaScript, frameworks and tools don't last; they generally enjoy a phase of peak popularity of 3-5 years, followed by a slow decline as developers maintain legacy applications but move to newer frameworks for new work. As Voss said in his talk, "Nothing lasts forever! ... Any framework that we see today will have its heyday and then it will have an after-life where it will slowly, slowly degrade." For developers, this essentially means it is better to keep learning new frameworks than to hold on to current tools too tightly.

"Despite a slowdown in growth, React will be the dominant framework in 2019."

Though React's growth slowed in 2018 compared to 2017, it still dominates the web scene: 60% of npm survey respondents said they are using React. In 2019, npm expects more people to use React for building web applications, and as its user base grows, so will the tutorials, advice, and bug fixes around it.

"You'll need to learn GraphQL."

GraphQL client libraries are showing tremendous popularity, and npm expects GraphQL to be a "technical force to reckon with in 2019." GraphQL was first publicly released in 2015, and while npm considers it still early to put into production, its growing popularity makes its concepts worth learning in 2019. npm also predicts that developers will find themselves using GraphQL in new projects later in the year and in 2020.

"Somebody on your team will bring in TypeScript."

npm's survey uncovered that 46% of respondents use Microsoft's TypeScript, a typed superset of JavaScript that compiles to plain JavaScript. One reason for this major adoption could be the extra safety TypeScript provides through type-checking. Adopting TypeScript in 2019 could prove really useful, especially if you're a member of a larger team.

Read the detailed report and predictions on npm's website.

Read next:

4 key findings from The State of JavaScript 2018 developer survey
TypeScript 3.2 released with configuration inheritance and more
7 reasons to choose GraphQL APIs over REST for building your APIs


4 key findings from The State of JavaScript 2018 developer survey

Prasad Ramesh
20 Nov 2018
4 min read
Three JavaScript developers surveyed over 20,000 JavaScript developers to find out what's happening within the language and its huge ecosystem. From usage to satisfaction to learning habits, the State of JavaScript 2018 report offers another valuable insight into a community that is still going strong, despite a landscape that keeps changing. You can check out the results of the State of JavaScript 2018 survey in detail here, but keep reading for four things we found interesting about it.

JavaScript developers love ES6 and TypeScript

ES6 and TypeScript were the most well received: 86.3% and 46.7% of developers, respectively, have used them and would use them again. ClojureScript, Elm, and Flow, however, don't seem to pique many developers' interest these days (unsurprisingly).

React rules the front-end frameworks; Angular's popularity may be dwindling

There has been a big battle between front-end frameworks, namely React, Vue, and Angular. The State of JavaScript 2018 survey suggests that React is winning out, with Vue in second position: 64.8% and 28.8% of developers said they would use React and Vue.js again, respectively. Vue is also growing in popularity, with 46.6% of respondents expressing an interest in learning it. The news wasn't great for Angular, though: 33.8% of respondents said they wouldn't use it again. Ember and Polymer were less well received, with more than 50% of responses for both indicating no interest in learning them, while Preact and Polymer are perhaps still a little new on the scene: 28.1% and 18.5% of respondents, respectively, had never even heard of these frameworks.

Vue.js 3.0 is ditching JavaScript for TypeScript. Learn more here.

Redux is the most used in the data layer, but JavaScript developers want to learn GraphQL

When it comes to data, Redux is the most popular library, with 47.2% of developers saying they would use it again. GraphQL is second, with 20.4% of respondents vouching for it. But Redux shouldn't be complacent: 62.5% of developers also want to learn GraphQL. It looks like the Redux and GraphQL debate will continue well into 2019; what the consensus will be in 12 months' time is anyone's guess.

Why do React developers love Redux? Find out here.

Express.js popularity confirms Node.js as JavaScript's quiet hero

There haven't been any major breakthroughs in this area in recent years, but that is, perhaps, a good thing when you consider the frantic pace of change in other areas of JavaScript. It probably also has a lot to do with the dominance of Node.js here. Express, a Node.js framework, is by far the most popular, with 64.7% of surveyed developers saying they would use it again. Sadly, it appears Meteor is languishing despite its meteoric hype just a few years ago: 49.4% of developers have heard of it but said they have no interest in learning it.

In conclusion: the landscape is becoming more clearly defined, but the JavaScript developer role is changing

A few years ago, the JavaScript ecosystem was chaotic and almost incoherent. Every week seemed to bring a new framework demanding your attention.
As we move towards the end of the decade, things look a lot different: React has established itself at the forefront of the front end, while TypeScript appears to have embedded itself within the ecosystem too. With GraphQL also generating interest, and competing with Redux, we're seeing a clear shift in what JavaScript developers are doing and what they're being asked to do. As the stack expands, managing data sources and building for speed and scalability is now a problem right at the heart of JavaScript development, not just on its fringes.

Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?

Bhagyashree R
01 Oct 2018
5 min read
Last week, Evan You, the creator of Vue.js, gave a summary of what to expect in the coming major release, Vue.js 3.0. To provide better support for TypeScript, the codebase is being written in TypeScript, leaving behind vanilla JS. The new codebase currently targets evergreen browsers such as Google Chrome and assumes baseline native ES2015 support. Here's what else is coming in this major iteration.

High-level API changes

Template syntax will not see many changes, except for some tweaks in the scoped slots syntax. Vue.js 3.0 will come with native support for class-based components. This will provide users with an API that is pleasant to use in native ES2015, without the need for any transpilation or stage-x features. The Vue.js 3.x codebase will be written in TypeScript, providing improved support for TypeScript. Support for the 2.x object-based component format will be provided by internally transforming the object to a corresponding class. Functional components can now be plain functions; however, async components will need to be explicitly created via a helper function. The virtual DOM format used in render functions will see major changes. Upgrading will be easier if you don't heavily rely on handwritten (non-JSX) render functions in your app. Mixins will still be supported.

Cleaner and more maintainable source code architecture

To make contributing to Vue.js easier, Vue.js 3.0 is being re-written from the ground up for a cleaner and more maintainable architecture. To do this, the developers are breaking some internal functionalities into individual packages to isolate the scope of complexity. For example, the observer module will be converted to its own package, with its own public API and tests. As mentioned earlier, the codebase is being re-written in TypeScript. This makes proficiency in TypeScript a primary prerequisite for contributing to the new codebase. However, the type information and IDE support will enable new contributors to easily make meaningful contributions.

Proxy-based observation mechanism

Vue.js 3.0 will come with a Proxy-based observer implementation that provides reactivity tracking with full language coverage. This aims to eliminate a number of limitations of the current implementation in Vue.js 2, which is based on Object.defineProperty: detection of property addition or deletion; detection of Array index mutation or .length mutation; and support for Map, Set, WeakMap, and WeakSet. Additionally, this new observer will have the following features:

Exposed API for creating observables: This provides a lightweight and simple cross-component state management solution for small to medium scale scenarios.
Lazy observation by default: In Vue.js 3.x, only the data used to render the initially visible part of an app will need to be observed. This eliminates the overhead on app startup if your dataset is huge.
Immutable observables: Immutable versions of a value can be created to prevent mutations even on nested properties, except when the system temporarily unlocks it internally.
Better debugging capabilities: Two new hooks, renderTracked and renderTriggered, are added. These will help you precisely trace when and why a component re-render is tracked or triggered.
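The Proxy-based observer hadn't shipped at the time of the announcement, but the idea is easy to illustrate. Here is a minimal, hypothetical sketch - not Vue's actual implementation - of why a Proxy can observe property additions and deletions that an Object.defineProperty-based observer cannot:

function observe(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      const isNew = !Reflect.has(obj, key);
      const ok = Reflect.set(obj, key, value);
      // Property *addition* is visible here; a defineProperty-based
      // observer only sees keys that existed when it was set up.
      onChange(isNew ? 'add' : 'set', key, value);
      return ok;
    },
    deleteProperty(obj, key) {
      const ok = Reflect.deleteProperty(obj, key);
      onChange('delete', key); // deletions are observable too
      return ok;
    }
  });
}

const state = observe({ count: 0 }, (op, key, value) => console.log(op, key, value));
state.count = 1;       // logs: set count 1
state.newProp = true;  // logs: add newProp true -- invisible to a defineProperty observer
delete state.count;    // logs: delete count

The same trap-based approach extends naturally to Array index and .length mutations, which is why the new observer can cover cases the 2.x implementation documents as caveats.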
Other runtime improvements

Smaller runtime: The new codebase is designed to be tree-shaking friendly. The built-in components and directive runtime helpers will be imported on-demand and are tree-shakable. As a result, the constant baseline size for the new runtime is <10kb gzipped.

Improved performance: On initial benchmarks, the developers are observing up to 100% performance improvement across the board. Vue.js 3.0 will reduce the time spent in JavaScript when your app boots up.

Built-in support for Fragments and Portals: Vue 3.0 will come with built-in support for Fragments and Portals. Fragments are components returning multiple root nodes. Portals are introduced to render a sub-tree in another part of the DOM, instead of inside the component.

Improved slots mechanism: All compiler-generated slots are now functions and invoked during the child component's render call. This ensures dependencies in slots are collected as dependencies of the child instead of the parent. This means that: when slot content changes, only the child re-renders; and when the parent re-renders, the child does not have to if its slot content did not change. This improvement provides even more precise change detection at the component tree level.

Custom Renderer API: Using this API, you will be able to create custom renderers. With it, it will be easier for render-to-native projects like Weex and NativeScript Vue to stay up-to-date with upstream changes. This API will also make creating custom renderers for various other purposes much easier.

Along with these, the team has announced a few compiler improvements and IE11 support. They haven't revealed a date yet, but we can expect Vue.js 3.0 to release in 2019. To know more, check out the official announcement on Medium.

Vue CLI 3.0 is here as the standard build toolchain behind Vue applications
Introducing Vue Native for building native mobile apps with Vue.js
Testing Single Page Applications (SPAs) using Vue.js developer tools

JavaScript async programming using Promises [Tutorial]

Pavan Ramchandani
23 Jul 2018
10 min read
JavaScript now has a new native pattern for writing asynchronous code called the Promise pattern. This new pattern removes the common code issues that the event and callback patterns had. It also makes the code look more like synchronous code. A promise (or a Promise object) represents an asynchronous operation. Existing asynchronous JavaScript APIs are usually wrapped with promises, and new JavaScript APIs are purely implemented using promises. Promises are new in JavaScript but are already present in many other programming languages; C# 5, C++ 11, Swift, and Scala are some examples that support promises. In this tutorial, we will see how to use promises in JavaScript. This article is an excerpt from the book, Learn ECMAScript - Second Edition, written by Mehul Mohan and Narayan Prusty.

Promise states

A promise is always in one of these states:

Fulfilled: If the resolve callback is invoked with a non-promise object as the argument or no argument, then we say that the promise is fulfilled
Rejected: If the reject callback is invoked or an exception occurs in the executor scope, then we say that the promise is rejected
Pending: If the resolve or reject callback is yet to be invoked, then we say that the promise is pending
Settled: A promise is said to be settled if it's either fulfilled or rejected, but not pending

Once a promise is fulfilled or rejected, it cannot be transitioned back. An attempt to transition it will have no effect.

Promises versus callbacks

Suppose you wanted to perform three AJAX requests one after another. Here's a dummy implementation of that in callback style:

ajaxCall('http://example.com/page1', response1 => {
  ajaxCall('http://example.com/page2'+response1, response2 => {
    ajaxCall('http://example.com/page3'+response2, response3 => {
      console.log(response3)
    })
  })
})

You can see how quickly you can enter into something known as callback hell. Multiple nesting makes code not only unreadable but also difficult to maintain. Furthermore, if you start processing data after every call, and the next call is based on a previous call's response data, the complexity of the code becomes unmanageable. Callback hell refers to multiple asynchronous functions nested inside each other's callback functions. This makes code harder to read and maintain. Promises can be used to flatten this code. Let's take a look:

ajaxCallPromise('http://example.com/page1')
  .then( response1 => ajaxCallPromise('http://example.com/page2'+response1) )
  .then( response2 => ajaxCallPromise('http://example.com/page3'+response2) )
  .then( response3 => console.log(response3) )

You can see the code complexity is suddenly reduced and the code looks much cleaner and more readable. Let's first see how ajaxCallPromise would've been implemented. Please read the following explanation for more clarity on the preceding code snippet.

Promise constructor and (resolve, reject) methods

To convert an existing callback-type function to a Promise, we have to use the Promise constructor. In the preceding example, ajaxCallPromise returns a Promise, which can be either resolved or rejected by the developer. Let's see how to implement ajaxCallPromise:

const ajaxCallPromise = url => {
  return new Promise((resolve, reject) => {
    // DO YOUR ASYNC STUFF HERE
    $.ajaxAsyncWithNativeAPI(url, function(data) {
      if(data.resCode === 200) {
        resolve(data.message)
      } else {
        reject(data.error)
      }
    })
  })
}

Hang on! What just happened there? First, we returned a Promise from the ajaxCallPromise function.
That means whatever we do now will be a Promise. A Promise accepts a function argument, with the function itself accepting two very special arguments, that is, resolve and reject. resolve and reject are themselves functions. When, inside a Promise constructor function body, you call resolve or reject, the promise acquires a resolved or rejected value that is unchangeable later on. We then make use of the native callback-based API and check if everything is OK. If everything is indeed OK, we resolve the Promise with the value being the message sent by the server (assuming a JSON response). If there was an error in the response, we reject the promise instead.

You can return a promise in a then call. When you do that, you can flatten the code instead of chaining promises again. For example, if foo() and bar() both return a Promise, then, instead of:

foo().then( res => {
  bar().then( res2 => {
    console.log('Both done')
  })
})

We can write it as follows:

foo()
  .then( res => bar() ) // bar() returns a Promise
  .then( res => {
    console.log('Both done')
  })

This flattens the code.

The then(onFulfilled, onRejected) method

The then() method of a Promise object lets us do a task after a Promise has been fulfilled or rejected. The task can also be another event-driven or callback-based asynchronous operation. The then() method of a Promise object takes two arguments, that is, the onFulfilled and onRejected callbacks. The onFulfilled callback is executed if the Promise object was fulfilled, and the onRejected callback is executed if the Promise was rejected. The onRejected callback is also executed if an exception is thrown in the scope of the executor. Therefore, it behaves like an exception handler, that is, it catches the exceptions. The onFulfilled callback takes a parameter, that is, the fulfillment value of the promise. Similarly, the onRejected callback takes a parameter, that is, the reason for rejection:

ajaxCallPromise('http://example.com/page1').then(
  successData => { console.log('Request was successful') },
  failData => { console.log('Request failed' + failData) }
)

When we reject the promise inside the ajaxCallPromise definition, the second function (the failData one) will execute instead of the first. Let's take one more example by converting setTimeout() from a callback to a promise. This is how setTimeout() looks:

setTimeout( () => {
  // code here executes after TIME_DURATION milliseconds
}, TIME_DURATION)

A promised version will look something like the following:

const PsetTimeout = duration => {
  return new Promise((resolve, reject) => {
    setTimeout( () => { resolve() }, duration);
  })
}

// usage:
PsetTimeout(1000)
  .then(() => {
    console.log('Executes after a second')
  })

Here we resolved the promise without a value. If you do that, it gets resolved with a value equal to undefined.

The catch(onRejected) method

The catch() method of a Promise object is used instead of the then() method when we use the then() method only to handle errors and exceptions. There is nothing special about how the catch() method works. It's just that it makes the code much easier to read, as the word catch makes it more meaningful. The catch() method just takes one argument, that is, the onRejected callback. The onRejected callback of the catch() method is invoked in the same way as the onRejected callback of the then() method. The catch() method always returns a promise.
Here is how a new Promise object is returned by the catch() method:

If there is no return statement in the onRejected callback, then a new fulfilled Promise is created internally and returned.
If we return a custom Promise, then it internally creates and returns a new Promise object. The new Promise object resolves the custom Promise object.
If we return something other than a custom Promise in the onRejected callback, then a new Promise object is created internally and returned. The new Promise object resolves the returned value.
If we pass null instead of the onRejected callback, or omit it, then a callback is created internally and used instead. The internally created onRejected callback returns a rejected Promise object. The reason for the rejection of the new Promise object is the same as the reason for the rejection of the parent Promise object.
If the Promise object on which catch() is called gets fulfilled, then the catch() method simply returns a new fulfilled Promise object and ignores the onRejected callback. The fulfillment value of the new Promise object is the same as the fulfillment value of the parent Promise.

To understand the catch() method, consider this code:

ajaxPromiseCall('http://invalidURL.com')
  .then(success => { console.log(success) },
        failed => { console.log(failed) });

This code can be rewritten in this way using the catch() method:

ajaxPromiseCall('http://invalidURL.com')
  .then(success => console.log(success))
  .catch(failed => console.log(failed));

These two code snippets work more or less in the same way.

The Promise.resolve(value) method

The resolve() method of the Promise object takes a value and returns a Promise object that resolves the passed value. The resolve() method is basically used to convert a value to a Promise object. It is useful when you find yourself with a value that may or may not be a Promise, but you want to use it as a Promise. For example, jQuery promises have different interfaces from ES6 promises. Therefore, you can use the resolve() method to convert jQuery promises into ES6 promises. Here is an example that demonstrates how to use the resolve() method:

const p1 = Promise.resolve(4);
p1.then(function(value){ console.log(value); });

//passed a promise object
Promise.resolve(p1).then(function(value){ console.log(value); });

Promise.resolve({name: "Eden"})
  .then(function(value){ console.log(value.name); });

The output is as follows:

4
4
Eden

The Promise.reject(value) method

The reject() method of the Promise object takes a value and returns a rejected Promise object with the passed value as the reason. Unlike the Promise.resolve() method, the reject() method is used for debugging purposes and not for converting values into promises. Here is an example that demonstrates how to use the reject() method:

const p1 = Promise.reject(4);
p1.then(null, function(value){ console.log(value); });

Promise.reject({name: "Eden"})
  .then(null, function(value){ console.log(value.name); });

The output is as follows:

4
Eden

The Promise.all(iterable) method

The all() method of the Promise object takes an iterable object as an argument and returns a Promise that fulfills when all of the promises in the iterable object have been fulfilled. This can be useful when we want to execute a task after some asynchronous operations have finished.
Here is a code example that demonstrates how to use the Promise.all() method:

const p1 = new Promise(function(resolve, reject){
  setTimeout(function(){ resolve(); }, 1000);
});
const p2 = new Promise(function(resolve, reject){
  setTimeout(function(){ resolve(); }, 2000);
});
const arr = [p1, p2];
Promise.all(arr).then(function(){
  console.log("Done"); //"Done" is logged after 2 seconds
});

If the iterable object contains a value that is not a Promise object, then it's converted to a Promise object using the Promise.resolve() method. If any of the passed promises gets rejected, then the Promise.all() method immediately returns a new rejected Promise, for the same reason as the rejected passed Promise. Here is an example to demonstrate this:

const p1 = new Promise(function(resolve, reject){
  setTimeout(function(){ reject("Error"); }, 1000);
});
const p2 = new Promise(function(resolve, reject){
  setTimeout(function(){ resolve(); }, 2000);
});
const arr = [p1, p2];
Promise.all(arr).then(null, function(reason){
  console.log(reason); //"Error" is logged after 1 second
});

The Promise.race(iterable) method

The race() method of the Promise object takes an iterable object as the argument and returns a Promise that fulfills or rejects as soon as one of the promises in the iterable object is fulfilled or rejected, with the fulfillment value or rejection reason from that Promise. As the name suggests, the race() method is used to race between promises and see which one settles first. Here is a code example that shows how to use the race() method:

var p1 = new Promise(function(resolve, reject){
  setTimeout(function(){ resolve("Fulfillment Value 1"); }, 1000);
});
var p2 = new Promise(function(resolve, reject){
  setTimeout(function(){ resolve("Fulfillment Value 2"); }, 2000);
});
var arr = [p1, p2];
Promise.race(arr).then(function(value){
  console.log(value); //Output: "Fulfillment Value 1"
}, function(reason){
  console.log(reason);
});

Now at this point, I assume you have a basic understanding of how promises work, what they are, and how to convert a callback-like API into a promised API. Let's take a look at async/await, the future of asynchronous programming.
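As a taste of where that discussion heads, here is how the three chained requests from the start of this tutorial might read with async/await. This is a sketch only: ajaxCallPromise is the same hypothetical promise-returning helper defined earlier in this tutorial, not a built-in API.

const getPages = async () => {
  // Each await pauses this function (not the main thread) until the promise settles.
  const response1 = await ajaxCallPromise('http://example.com/page1');
  const response2 = await ajaxCallPromise('http://example.com/page2' + response1);
  const response3 = await ajaxCallPromise('http://example.com/page3' + response2);
  console.log(response3);
};

// A rejected promise surfaces as an exception, so .catch on the returned
// promise (or try/catch inside the async function) replaces the onRejected callback:
getPages().catch(err => console.log('Request failed: ' + err));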
If you found this article useful, do check out the book Learn ECMAScript, Second Edition for learning the ECMAScript standards to design your web applications.

Implementing 5 Common Design Patterns in JavaScript (ES8)
What's new in ECMAScript 2018 (ES9)?
How to build a weather app using Kotlin for JavaScript

HTML5 and the rise of modern JavaScript browser APIs [Tutorial]

Pavan Ramchandani
20 Jul 2018
15 min read
HTML5 arrived in 2008. It was considered so technologically advanced at the time that it was predicted it would not be ready till at least 2022! That turned out to be incorrect, and here we are, with fully supported HTML5 and ES6/ES7/ES8-supported browsers. A lot of the APIs used by HTML5 go hand in hand with JavaScript. Before looking at those APIs, let us understand a little about how JavaScript sees the web. This will eventually put us in a strong position to understand various interesting, JavaScript-related things such as the Web Workers API. In this article, we will introduce you to the most popular web languages, HTML and JavaScript, and how they came together to become the default platform for building modern front-end web applications. This is an excerpt from the book, Learn ECMAScript - Second Edition, written by Mehul Mohan and Narayan Prusty.

The HTML DOM

The HTML DOM is a tree version of how the document looks. Here is a very simple example of an HTML document:

<!doctype HTML>
<html>
<head>
  <title>Cool Stuff!</title>
</head>
<body>
  <p>Awesome!</p>
</body>
</html>

Its tree version is a rough hierarchy of nodes: the <html> tag consists of <head> and <body>; furthermore, the <body> tag contains a <p> tag, whereas the <head> tag contains a <title> tag. Simple! JavaScript has access to the DOM directly, and can modify the connections between these nodes, add nodes, remove nodes, change contents, attach event listeners, and so on.

What is the Document Object Model (DOM)?

Simply put, the DOM is a way to represent HTML or XML documents as nodes. This makes it easier for other programming languages to connect to a DOM-following page and modify it accordingly. To be clear, the DOM is not a programming language. The DOM provides JavaScript with a way to interact with web pages. You can think of it as a standard. Every element is part of the DOM tree, which can be accessed and modified with APIs exposed to JavaScript. The DOM is not restricted to being accessed only by JavaScript. It is language-independent, and there are several modules available in various languages to parse the DOM (just like JavaScript), including PHP, Python, Java, and so on. As said previously, the DOM provides JavaScript with a way to interact with it. How? Well, accessing the DOM is as easy as accessing a predefined object in JavaScript: document. The DOM API specifies what you'll find inside the document object. The document object essentially gives JavaScript access to the DOM tree formed by your HTML document. If you notice, you cannot access any element at all without first accessing the document object.

DOM methods/properties

All HTML elements are objects in JavaScript. The most commonly used object is the document object. It has the whole DOM tree attached to it, and you can query for elements on it. Let's look at some very common examples of these methods:

getElementById method
getElementsByTagName method
getElementsByClassName method
querySelector method
querySelectorAll method

By no means is this an exhaustive list of all available methods; however, this list should at least get you started with DOM manipulation, and a quick sketch of these methods in use follows below. Use MDN as your reference for various other methods. Here's the link: https://developer.mozilla.org/en-US/docs/Web/API/Document#Methods.
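To make those methods concrete, here is a short sketch of each in use. The selectors here (hero, card, nav a) are made up for illustration; swap in IDs, classes, and tags that exist in your own markup.

const hero = document.getElementById('hero');           // one element, or null
const paragraphs = document.getElementsByTagName('p');  // live HTMLCollection
const cards = document.getElementsByClassName('card');  // live HTMLCollection
const firstLink = document.querySelector('nav a');      // first match, or null
const allLinks = document.querySelectorAll('nav a');    // static NodeList

// Once you hold a node, you can mutate the tree through it:
if (firstLink) {
  firstLink.textContent = 'Home';
}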
Modern JavaScript browser APIs

HTML5 brought a lot of support for some awesome APIs in JavaScript, right from the start. Although some APIs were released with HTML5 itself (such as the Canvas API), some were added later (such as the Fetch API). Let's see some of these APIs and how to use them with some code examples.

Page Visibility API - is the user still on the page?

The Page Visibility API allows developers to run specific code whenever the page the user is on gains or loses focus. Imagine you run a game-hosting site and want to pause the game whenever the user loses focus on your tab. This is the way to go!

function pageChanged() {
  if (document.hidden) {
    console.log('User is on some other tab/out of focus') // line #1
  } else {
    console.log('Hurray! User returned') // line #2
  }
}
document.addEventListener("visibilitychange", pageChanged);

We're adding an event listener to the document; it fires whenever the page's visibility changes. Sure, the pageChanged function gets an event object as well in the argument, but we can simply use the document.hidden property, which returns a Boolean value depending on the page's visibility at the time the code was called. You'll add your pause-game code at line #1 and your resume-game code at line #2.

navigator.onLine API - the user's network status

The navigator.onLine API tells you if the user is online or not. Imagine building a multiplayer game and you want the game to automatically pause if the user loses their internet connection. This is the way to go here!

function state(e) {
  if(navigator.onLine) {
    console.log('Cool we\'re up');
  } else {
    console.log('Uh! we\'re down!');
  }
}
window.addEventListener('offline', state);
window.addEventListener('online', state);

Here, we're attaching two event listeners to the window global. We want to call the state function whenever the user goes offline or online. The browser will call the state function every time the connection status changes, and we can check whether the user is offline or online with navigator.onLine, which returns true if there's an internet connection and false if there's not.

Clipboard API - programmatically manipulating the clipboard

The Clipboard API finally allows developers to copy to a user's clipboard without those nasty Adobe Flash plugin hacks that were not cross-browser/cross-device-friendly. Here's how you'll copy a selection to a user's clipboard:

<script>
function copy2Clipboard(text) {
  const textarea = document.createElement('textarea');
  textarea.value = text;
  document.body.appendChild(textarea);
  textarea.focus();
  textarea.setSelectionRange(0, text.length);
  document.execCommand('copy');
  document.body.removeChild(textarea);
}
</script>
<button onclick="copy2Clipboard('Something good!')">Click me!</button>

First of all, we need the user to actually click the button. Once the user clicks the button, we call a function that creates a textarea in the background using the document.createElement method. The script then sets the value of the textarea to the passed text (this is pretty good!). We then focus on that textarea and select all the contents inside it. Once the contents are selected, we execute a copy with document.execCommand('copy'); this copies the current selection in the document to the clipboard. Since, right now, the value inside the textarea is selected, it gets copied to the clipboard. Finally, we remove the textarea from the document so that it doesn't disrupt the document layout. You cannot trigger copy2Clipboard without user interaction.
I mean, obviously you can, but document.execCommand('copy') will not work if the event does not come from the user (click, double-click, and so on). This is a security implementation so that a user's clipboard is not messed around with by every website that they visit.

The Canvas API - the web's drawing board

HTML5 finally brought in support for <canvas>, a standard way to draw graphics on the web! Canvas can be used for pretty much everything related to graphics you can think of: from digitally signing with a pen to creating 3D games on the web (3D games require WebGL knowledge; interested? - visit http://bit.ly/webgl-101). Let's look at the basics of the Canvas API with a simple example:

<canvas id="canvas" width="100" height="100"></canvas>
<script>
  const canvas = document.getElementById("canvas");
  const ctx = canvas.getContext("2d");
  ctx.moveTo(0,0);
  ctx.lineTo(100, 100);
  ctx.stroke();
</script>

This renders a diagonal line across the square canvas. How does it do this? Firstly, document.getElementById('canvas') gives us the reference to the canvas on the document. Then we get the context of the canvas. This is a way to say what we want to do with the canvas. You could put a 3D value there, of course! That is indeed the case when you're doing 3D rendering with WebGL and canvas. Once we have a reference to our context, we can do a bunch of things with the methods provided by the API out of the box. Here we moved the cursor to the (0, 0) coordinates. Then we drew a line to (100, 100) (the diagonal of the square canvas). Then we called stroke to actually draw that on our canvas. Easy! Canvas is a wide topic and deserves a book of its own! If you're interested in developing awesome games and apps with Canvas, I recommend you start off with the MDN docs: http://bit.ly/canvas-html5.

The Fetch API - promise-based HTTP requests

One of the coolest async APIs introduced in browsers is the Fetch API, which is the modern replacement for the XMLHttpRequest API. Have you ever found yourself using jQuery just for simplifying AJAX requests with $.ajax? If you have, then this is surely a golden API for you, as it is natively easier to code and read - and since fetch comes natively, there are performance benefits. Let's see how it works:

fetch(link)
  .then(data => {
    // do something with data
  })
  .catch(err => {
    // do something with error
  });

Awesome! So fetch uses promises! If that's the case, we can combine it with async/await to make it look completely synchronous and easy to read!

<img id="img1" alt="Mozilla logo" />
<img id="img2" alt="Google logo" />

const get2Images = async () => {
  const image1 = await fetch('https://cdn.mdn.mozilla.net/static/img/web-docs-sprite.22a6a085cf14.svg');
  const image2 = await fetch('https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png');
  console.log(image1); // gives us the response as an object
  const blob1 = await image1.blob();
  const blob2 = await image2.blob();
  const url1 = URL.createObjectURL(blob1);
  const url2 = URL.createObjectURL(blob2);
  document.getElementById('img1').src = url1;
  document.getElementById('img2').src = url2;
  return 'complete';
}
get2Images().then(status => console.log(status));

The line console.log(image1) prints the Response object, which provides tons of information about the request. It has an interesting field, body, which is actually a ReadableStream - a byte stream of data that can be cast to a Binary Large Object (BLOB) in our case.
A blob object represents a file-like object of immutable, raw data. After getting the response, we convert it into a blob object so that we can actually use it as an image. Here, fetch is actually fetching us the image directly so we can serve it to the user as a blob (without hot-linking it to the main website). Thus, this could be done on the server side, and blob data could be passed down a WebSocket or something similar.

Fetch API customization

The Fetch API is highly customizable. You can even include your own headers in the request. Suppose you've got a site where only authenticated users with a valid token can access an image. Here's how you'll add a custom header to your request:

const headers = new Headers();
headers.append("Allow-Secret-Access", "yeah-because-my-token-is-1337");
const config = { method: 'POST', headers };
const req = new Request('http://myawesomewebsite.awesometld/secretimage.jpg', config);
fetch(req)
  .then(img => img.blob())
  .then(blob => myImageTag.src = URL.createObjectURL(blob));

Here, we added a custom header to our request and then created a Request object (an object that has information about our request). The first parameter, that is, http://myawesomewebsite.awesometld/secretimage.jpg, is the URL and the second is the configuration. Here are some other configuration options:

Credentials: Used to pass cookies to a Cross-Origin Resource Sharing (CORS)-enabled server on cross-domain requests.
Method: Specifies the request method (GET, POST, HEAD, and so on).
Headers: Headers associated with the request.
Integrity: A security feature that consists of a (possibly) SHA-256 representation of the file you're requesting, in order to verify whether the request has been tampered with (data is modified) or not. Probably not a lot to worry about unless you're building something on a very large scale and not on HTTPS.
Redirect: Redirect can have three values: follow (will follow URL redirects), error (will throw an error if the URL redirects), and manual (doesn't follow the redirect but returns a filtered response that wraps the redirect response).
Referrer: The URL that appears as the referrer header in the HTTP request.

Accessing and modifying history with the history API

You can access a user's history to some level and modify it according to your needs using the history API. It consists of the length and state properties:

console.log(history, history.length, history.state);

The output is as follows:

{length: 4, scrollRestoration: "auto", state: null} 4 null

In your case, the length could obviously be different depending on how many pages you've visited from that particular tab. history.state can contain anything you like (we'll come to its use case soon). Before looking at some handy history methods, let us take a look at the window.onpopstate event.

Handling window.onpopstate events

The window.onpopstate event is fired automatically by the browser when a user navigates between history states that a developer has set. This event is important to handle when you push to the history object and then later retrieve information whenever the user presses the back/forward button of the browser. Here's how we'll program a simple popstate event:

window.addEventListener('popstate', e => {
  console.log(e.state); // state data of history (remember history.state?)
})

Now we'll discuss some methods associated with the history object.

Modifying history - the history.go(distance) method

history.go(x) is equivalent to the user clicking the forward button x times in the browser.
For example, history.go(5) is equivalent to the user hitting the forward button in the browser five times. Similarly, you can specify negative values to make it move backward, and specifying 0 or no value will simply refresh the page:

history.go(5);  // forwards the browser 5 times
history.go(-1); // similar effect to clicking the back button
history.go(0);  // refreshes page
history.go();   // refreshes page

Jumping ahead - the history.forward() method

This method is simply the equivalent of history.go(1). This is handy when you want to just push the user forward to the page they came back from. One use case is when you create a full-screen immersive web application where minimal on-screen controls play with the history behind the scenes:

if(awesomeButtonClicked && userWantsToMoveForward()) {
  history.forward()
}

Going back - the history.back() method

This method is simply the equivalent of history.go(-1); a negative number makes the history go backwards. Again, this is just a simple (and numberless) way to go back to the page the user came from. Its application could be similar to the forward button, that is, creating a full-screen web app and providing the user with an interface to navigate by.

Pushing on the history - history.pushState()

This is really fun. You can change the browser URL without hitting the server with an HTTP request. If you run the following JS in your browser, your browser will change the path from whatever it is (domain.com/abc/egh) to /i_am_awesome (domain.com/i_am_awesome) without actually navigating to any page:

history.pushState({myName: "Mehul"}, "This is title of page", "/i_am_awesome");
history.pushState({page2: "Packt"}, "This is page2", "/page2_packt"); // <-- state is currently here

The History API doesn't care whether the page actually exists on the server or not. It'll just replace the URL as it is instructed. The popstate event, when triggered by the browser's back/forward buttons, will fire the function below, and we can program it like this:

window.onpopstate = e => {
  // when this is called, state is already updated.
  // e.state is the new state. It is null if it is the root state.
  if(e.state !== null) {
    console.log(e.state);
  } else {
    console.log("Root state");
  }
}

To run this code, register the onpopstate handler first, then run the two history.pushState lines above. Then press your browser's back button. You should see something like {myName: "Mehul"}, which is the information related to the parent state. Press the back button one more time and you'll see the message Root state. pushState does not fire the onpopstate event; only the browser's back/forward buttons do.

Pushing on the history stack - history.replaceState()

The history.replaceState() method is exactly like history.pushState(); the only difference is that it replaces the current entry instead of pushing a new one. If you use history.pushState() and press the back button, you'll be directed to the page you came from. However, when you use history.replaceState() and press the back button, you are not directed to the page you came from, because it has been replaced with the new one on the stack. Here's an example of working with the replaceState method:

history.replaceState({myName: "Mehul"}, "This is title of page", "/i_am_awesome");

This replaces (instead of pushing) the current state with the new state.
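Putting pushState and popstate together is exactly how single-page-application routers achieve reload-less navigation. Here is a minimal sketch of that idea - the routes and render logic are made up for illustration and are not taken from any real framework:

const routes = {
  '/': () => 'Home page',
  '/about': () => 'About page'
};

function render(path) {
  // Look up a view for the path; fall back for unknown URLs.
  const view = routes[path] || (() => 'Not found');
  document.body.textContent = view();
}

function navigate(path) {
  history.pushState({ path }, '', path); // URL changes, no HTTP request
  render(path);
}

// Back/forward buttons fire popstate; re-render from the saved state.
window.onpopstate = e => render(e.state ? e.state.path : location.pathname);

render(location.pathname); // initial render; call navigate('/about') to move around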
Although using the History API directly in your code may not be beneficial to you right now, many frameworks and libraries such as React use the History API under the hood to create a seamless, reload-less, smooth experience for the end user. If you found this article useful, do check out the book Learn ECMAScript, Second Edition to learn the ECMAScript standards for designing quality web applications.

What's new in ECMAScript 2018 (ES9)?
8 recipes to master Promises in ECMAScript 2018
Build a foodie bot with JavaScript

Build user directory app with Angular [Tutorial]

Sugandha Lahoti
05 Jul 2018
12 min read
In this article, we will learn how to build a user directory with Angular, backed by a REST API that we create along the way. The app will be a table with a list of users together with their email addresses and phone numbers. Each user in the table will have an active state whose value is a boolean. We will be able to change the active state of a particular user from false to true and vice versa, add new users, and delete users from the table. diskDB will be used as the database for this example.

We will have an Angular service which contains methods responsible for communicating with the REST endpoints. These methods will make get, post, put, and delete requests to the REST API. The first method in the service makes a get request, enabling us to retrieve all the users from the back end. The second makes a post request, enabling us to add new users to the array of existing users. The next makes a delete request to enable the deletion of a user. Finally, we have a method that makes a put request, giving us the ability to edit/modify the state of a user. In order to make these requests to the REST API, we will make use of the HttpModule.

The aim of this section is to solidify your knowledge of HTTP. As a JavaScript and, in fact, an Angular developer, you are bound to interact with APIs and web servers almost all the time. So much of the data used by developers today is in the form of APIs, and in order to interact with these APIs, we need to constantly make use of HTTP requests. As a matter of fact, HTTP is the foundation of data communication for the web. This article is an excerpt from the book, TypeScript 2.x for Angular Developers, written by Chris Nwamba.

Create a new Angular app

To start a new Angular app, run the following command:

ng new user

This creates the Angular user app. Next, install the Express, body-parser, and CORS dependencies:

npm install express body-parser cors --save

Create a Node server

Create a file called server.js at the root of the project directory. This will be our Node server. Populate server.js with the following block of code:

// Require dependencies
const express = require('express');
const path = require('path');
const http = require('http');
const cors = require('cors');
const bodyParser = require('body-parser');

// Get our API routes
const route = require('./route');

const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));

// Use CORS
app.use(cors());

// Set our api routes
app.use('/api', route);

// Get port from environment.
const port = process.env.PORT || '3000';

// Create HTTP server.
const server = http.createServer(app);

// Listen on provided port
app.listen(port);
console.log('server is listening');

What's going on here is pretty simple: we required and made use of the dependencies, we defined and set the API routes, and we set a port for our server to listen on. The API routes are being required from ./route, but this path does not exist yet. Let's quickly create it. At the root of the project directory, create a file called route.js. This is where the API routes will be made.
We need some form of database from which we can fetch, post, delete, and modify data. Just as in the previous example, we will make use of diskdb. The route will pretty much follow the same pattern as in the first example.

Install diskDB

Run the following in the project folder to install diskdb:

npm install diskdb

Create a users.json file at the root of the project directory to serve as our database collection, holding our users' details. Populate users.json with the following:

[{"name": "Marcel", "email": "test1@gmail.com", "phone_number":"08012345", "isOnline":false}]

Now, update route.js:

const express = require('express');
const router = express.Router();
const db = require('diskdb');
db.connect(__dirname, ['users']);

//save
router.post('/users', function(req, res, next) {
  var user = req.body;
  if (!user.name && !(user.email + '') && !(user.phone_number + '') && !(user.isActive + '')) {
    res.status(400);
    res.json({ error: 'error' });
  } else {
    db.users.save(user);
    res.json(user);
  }
});

//get
router.get('/users', function(req, res, next) {
  var foundUsers = db.users.find();
  console.log(foundUsers);
  res.json(foundUsers);
});

//update
router.put('/user/:id', function(req, res, next) {
  var updUser = req.body;
  console.log(updUser, req.params.id)
  db.users.update({_id: req.params.id}, updUser);
  res.json({ msg: req.params.id + ' updated' });
});

//delete
router.delete('/user/:id', function(req, res, next) {
  console.log(req.params);
  db.users.remove({ _id: req.params.id });
  res.json({ msg: req.params.id + ' deleted' });
});

module.exports = router;

We've created a REST API with the API routes, using diskDB as the database. Start the server using the following command:

node server.js

The server is running and listening on the assigned port. Now, open up the browser and go to http://localhost:3000/api/users. Here, we can see the data that we put into the users.json file. This shows that our routes are working and we are getting data from the database.

Create a new component

Run the following command to create a new component:

ng g component user

This creates the user.component.ts, user.component.html, user.component.css, and user.component.spec.ts files. user.component.spec.ts is used for testing, so we will not be making use of it in this chapter. The newly created component is automatically imported into app.module.ts. We have to tell the root component about the user component. We'll do this by importing the selector from user.component.ts into the root template component (app.component.html):

<div style="text-align:center">
  <app-user></app-user>
</div>

Create a service

The next step is to create a service that interacts with the API that we created earlier:

ng generate service user

This creates a user service called user.service.ts. Next, import the UserService class into app.module.ts and include it in the providers array. Then import rxjs/add/operator/map in the imports section and, within the UserService class, define a constructor and pass in the Angular HTTP service:
import { Injectable } from '@angular/core';
import { Http, Headers } from '@angular/http';
import 'rxjs/add/operator/map';

@Injectable()
export class UserService {
  constructor(private http: Http) {}
}

Within the service class, write a method that makes a get request to fetch all users and their details from the API:

getUser() {
  return this.http
    .get('http://localhost:3000/api/users')
    .map(res => res.json());
}

Write the method that makes a post request and creates a new user (note that the URL matches the /users route we defined in route.js):

addUser(newUser) {
  var headers = new Headers();
  headers.append('Content-Type', 'application/json');
  return this.http
    .post('http://localhost:3000/api/users', JSON.stringify(newUser), { headers: headers })
    .map(res => res.json());
}

Write another method that makes a delete request. This will enable us to delete a user from the collection of users:

deleteUser(id) {
  return this.http
    .delete('http://localhost:3000/api/user/' + id)
    .map(res => res.json());
}

Finally, write a method that makes a put request. This method will enable us to modify the state of a user:

updateUser(user) {
  var headers = new Headers();
  headers.append('Content-Type', 'application/json');
  return this.http
    .put('http://localhost:3000/api/user/' + user._id, JSON.stringify(user), { headers: headers })
    .map(res => res.json());
}

Update app.module.ts to import HttpModule and FormsModule and include them in the imports array:

import { HttpModule } from '@angular/http';
import { FormsModule } from '@angular/forms';
.....
imports: [
  .....
  HttpModule,
  FormsModule
]

The next thing to do is to teach the user component to use the service. Import UserService in user.component.ts:

import {UserService} from '../user.service';

Next, include the service class in the user component constructor: constructor(private userService: UserService) { }. Then, at the top of the UserComponent class body, add the following properties and define their data types:

users: any = [];
user: any;
name: any;
email: any;
phone_number: any;
isOnline: boolean;

Now, we can make use of the methods from the user service in the user component.

Updating user.component.ts

Within the ngOnInit method, make use of the user service to get all users from the API:

ngOnInit() {
  this.userService.getUser().subscribe(users => {
    console.log(users);
    this.users = users;
  });
}

Below the ngOnInit method, write a method that makes use of the post method in the user service to add new users:

addUser(event) {
  event.preventDefault();
  var newUser = {
    name: this.name,
    email: this.email,
    phone_number: this.phone_number,
    isOnline: false
  };
  this.userService.addUser(newUser).subscribe(user => {
    this.users.push(user);
    this.name = '';
    this.email = '';
    this.phone_number = '';
  });
}

Let's make use of the delete method from the user service to enable us to delete users:

deleteUser(id) {
  var users = this.users;
  this.userService.deleteUser(id).subscribe(data => {
    console.log(id);
    const index = this.users.findIndex(user => user._id == id);
    users.splice(index, 1)
  });
}

Finally, we'll make use of the user service to make put requests to the API:

updateUser(user) {
  var _user = {
    _id: user._id,
    name: user.name,
    email: user.email,
    phone_number: user.phone_number,
    isActive: !user.isActive
  };
  this.userService.updateUser(_user).subscribe(data => {
    const index = this.users.findIndex(user => user._id == _user._id)
    this.users[index] = _user;
  });
}

We have all our communication with the API, service, and component. We have to update user.component.html in order to illustrate all that we have done in the browser.
We'll be making use of Bootstrap for styling, so we have to import the Bootstrap CDN in index.html:

<!doctype html>
<html lang="en">
<head>
  //bootstrap CDN
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M" crossorigin="anonymous">
  <meta charset="utf-8">
  <title>User</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
</head>
<body>
  <app-root></app-root>
</body>
</html>

Updating user.component.html

Here is the component template for the user component:

<form class="form-inline" (submit) = "addUser($event)">
  <div class="form-row">
    <div class="col">
      <input type="text" class="form-control" [(ngModel)] ="name" name="name">
    </div>
    <div class="col">
      <input type="text" class="form-control" [(ngModel)] ="email" name="email">
    </div>
    <div class="col">
      <input type="text" class="form-control" [(ngModel)] ="phone_number" name="phone_number">
    </div>
  </div>
  <br>
  <button class="btn btn-primary" type="submit" (click) = "addUser($event)"><h4>Add User</h4></button>
</form>
<table class="table table-striped" >
  <thead>
    <tr>
      <th>Name</th>
      <th>Email</th>
      <th>Phone_Number</th>
      <th>Active</th>
    </tr>
  </thead>
  <tbody *ngFor="let user of users">
    <tr>
      <td>{{user.name}}</td>
      <td>{{user.email}}</td>
      <td>{{user.phone_number}}</td>
      <td>{{user.isActive}}</td>
      <td><input type="submit" class="btn btn-warning" value="Update Status" (click)="updateUser(user)" [ngStyle]="{ 'text-decoration-color': user.isActive ? 'blue' : ''}"></td>
      <td><button (click) ="deleteUser(user._id)" class="btn btn-danger">Delete</button></td>
    </tr>
  </tbody>
</table>

A lot is going on in the preceding code, so let's drill down into it:

We have a form which takes in three inputs, and a submit button which triggers the addUser() method when clicked
There is a delete button which triggers the delete method when it is clicked
There is also an update status input element that triggers the updateUser() method when clicked
We created a table in which our users' details will be displayed, utilizing Angular's *ngFor directive and Angular's interpolation binding syntax, {{}}

Some extra styling will be added to the project. Go to user.component.css and add the following:

form{ margin-top: 20px; margin-left: 20%; size: 50px; }
table{ margin-top:20px; height: 50%; width: 50%; margin-left: 20%; }
button{ margin-left: 20px; }

Running the app

Open up two command-line interfaces/terminals and navigate to the project directory in both. Run node server.js in one to start the server, and ng serve in the other to serve the Angular app. Open up the browser and go to localhost:4200. In this simple users app, we can perform all CRUD operations: we can create new users, get users, delete users, and update the state of users. By default, a newly added user's active state is false; that can be changed by clicking the Update Status button. We created an Angular app from scratch for building a user directory.
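As a quick closing aside: if you want to sanity-check the REST API itself without going through the browser or the Angular client, a few lines of Node against the running server will do. This snippet is illustrative only and assumes server.js is listening on port 3000:

// check.js -- run with `node check.js` while server.js is running.
const http = require('http');

http.get('http://localhost:3000/api/users', res => {
  let body = '';
  res.on('data', chunk => (body += chunk));            // accumulate the response
  res.on('end', () => console.log(JSON.parse(body)));  // the users collection
});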
To know more about how to write unit tests and perform debugging in Angular, check out our book, TypeScript 2.x for Angular Developers.

Everything new in Angular 6: Angular Elements, CLI commands and more
Why switch to Angular for web development - Interview with Minko Gechev
Building Components Using Angular

Create enterprise-grade Angular forms in TypeScript [Tutorial]

Sugandha Lahoti
04 Jul 2018
11 min read
TypeScript is an open-source programming language which adds optional static typing to JavaScript. To give you a flavor of the benefits of TypeScript, here is a very quick look at some of the things it brings to the table:

A compilation step
Strong or static typing
Type definitions for popular JavaScript libraries
Encapsulation
Private and public member variable decorators

In this article, we will learn how to build forms with TypeScript. We will cover as much as it takes to build business applications that collect user information. Here is a breakdown of what you should expect from this article:

Typed form input and output
Form controls
Validation
Form submission and handling

This article is an excerpt from the book, TypeScript 2.x for Angular Developers, written by Chris Nwamba.

Creating types for forms

We want to utilize TypeScript as much as possible, as it simplifies our development process and makes our app's behavior more predictable. For this reason, we will create a simple data class to serve as a type for the form values. First, create a new Angular project to follow along with the examples. Then, use the following command to create a new class:

ng g class flight

The class is generated in the app folder; replace its content with the following data class:

export class Flight {
  constructor(
    public fullName: string,
    public from: string,
    public to: string,
    public type: string,
    public adults: number,
    public departure: Date,
    public children?: number,
    public infants?: number,
    public arrival?: Date,
  ) {}
}

This class represents all the values our form (yet to be created) will have. The properties that are succeeded by a question mark (?) are optional, which means that TypeScript will throw no errors when the respective values are not supplied. Before jumping into creating the form, let's start with a clean slate. Replace the app.component.html file with the following:

<div class="container">
  <h3 class="text-center">Book a Flight</h3>
  <div class="col-md-offset-3 col-md-6">
    <!-- TODO: Form here -->
  </div>
</div>

Run the app and leave it running. You should see the page served at port 4200 of localhost (remember to include Bootstrap).

The form module

Now that we have a contract that we want the form to follow, let's generate the form's component:

ng g component flight-form

The command also adds the component as a declaration to our App module:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { FlightFormComponent } from './flight-form/flight-form.component';

@NgModule({
  declarations: [
    AppComponent,
    // Component added after
    // being generated
    FlightFormComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

What makes Angular forms special and easy to use are functionalities provided out of the box, such as the NgForm directive. Such functionalities are not available in the core browser module but in the forms module.
Hence, we need to import them:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
// Import the forms module
import { FormsModule } from '@angular/forms';
import { AppComponent } from './app.component';
import { FlightFormComponent } from './flight-form/flight-form.component';

@NgModule({
  declarations: [
    AppComponent,
    FlightFormComponent
  ],
  imports: [
    BrowserModule,
    // Add the forms module
    // to imports array
    FormsModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Simply importing FormsModule and adding it to the imports array is all we needed to do.

Two-way binding

The perfect time to start showing some form controls using the form component in the browser is right now. Keeping the state in sync between the data layer (model) and the view can be very challenging, but with Angular it's just a matter of using one directive exposed by FormsModule:

<!-- ./app/flight-form/flight-form.component.html -->
<form>
  <div class="form-group">
    <label for="fullName">Full Name</label>
    <input type="text" class="form-control" [(ngModel)]="flightModel.fullName" name="fullName" >
  </div>
</form>

Angular relies on the name attribute internally to carry out binding. For this reason, the name attribute is required. Pay attention to [(ngModel)]="flightModel.fullName"; it binds a property on the component class to the form. This model will be of the Flight type, which is the class we created earlier:

// ./app/flight-form/flight-form.component.ts
import { Component, OnInit } from '@angular/core';
import { Flight } from '../flight';

@Component({
  selector: 'app-flight-form',
  templateUrl: './flight-form.component.html',
  styleUrls: ['./flight-form.component.css']
})
export class FlightFormComponent implements OnInit {
  flightModel: Flight;
  constructor() {
    this.flightModel = new Flight('', '', '', '', 0, '', 0, 0, '');
  }
  ngOnInit() {}
}

The flightModel property is added to the component as a Flight type and initialized with some default values. Include the component in the app HTML, so it can be displayed in the browser:

<div class="container">
  <h3 class="text-center">Book a Flight</h3>
  <div class="col-md-offset-3 col-md-6">
    <app-flight-form></app-flight-form>
  </div>
</div>

To see two-way binding in action, use interpolation to display the value of flightModel.fullName. Then, enter a value and watch the live update:

<form>
  <div class="form-group">
    <label for="fullName">Full Name</label>
    <input type="text" class="form-control" [(ngModel)]="flightModel.fullName" name="fullName" >
    {{flightModel.fullName}}
  </div>
</form>

More form fields

Let's get hands-on and add the remaining form fields. After all, we can't book a flight by just supplying our names. The from and to fields will be select boxes with a list of cities we can fly into and out of. This list of cities will be stored right in our component class, and then we can iterate over it in the template and render it as a select box:

export class FlightFormComponent implements OnInit {
  flightModel: Flight;
  // Array of cities
  cities:Array<string> = [
    'Lagos', 'Mumbai', 'New York', 'London', 'Nairobi'
  ];
  constructor() {
    this.flightModel = new Flight('', '', '', '', 0, '', 0, 0, '');
  }
}

The array stores a few cities from around the world as strings.
Let's now use the ngFor directive to iterate over the cities and display them on the form using select boxes (note that the type attribute the book's snippet placed on the select elements is meaningless there and has been dropped):

```html
<div class="row">
  <div class="col-md-6">
    <label for="from">From</label>
    <select id="from" class="form-control" [(ngModel)]="flightModel.from" name="from">
      <option *ngFor="let city of cities" value="{{city}}">{{city}}</option>
    </select>
  </div>
  <div class="col-md-6">
    <label for="to">To</label>
    <select id="to" class="form-control" [(ngModel)]="flightModel.to" name="to">
      <option *ngFor="let city of cities" value="{{city}}">{{city}}</option>
    </select>
  </div>
</div>
```

Neat and clean! Open the browser and the select drop-downs, when clicked, show the list of cities, as expected.

Next, let's add the trip type field (radio buttons), the departure date field (date control), and the arrival date field (date control):

```html
<div class="row" style="margin-top: 15px">
  <div class="col-md-5">
    <label for="" style="display: block">Trip Type</label>
    <label class="radio-inline">
      <input type="radio" name="type" [(ngModel)]="flightModel.type" value="One Way"> One way
    </label>
    <label class="radio-inline">
      <input type="radio" name="type" [(ngModel)]="flightModel.type" value="Return"> Return
    </label>
  </div>
  <div class="col-md-4">
    <label for="departure">Departure</label>
    <input type="date" id="departure" class="form-control" [(ngModel)]="flightModel.departure" name="departure">
  </div>
  <div class="col-md-3">
    <label for="arrival">Arrival</label>
    <input type="date" id="arrival" class="form-control" [(ngModel)]="flightModel.arrival" name="arrival">
  </div>
</div>
```

How the data is bound to these controls is very similar to the text and select fields we created previously; the major difference is the type of control (radio buttons and dates).

Lastly, add the number of passengers (adults, children, and infants):

```html
<div class="row" style="margin-top: 15px">
  <div class="col-md-4">
    <label for="adults">Adults</label>
    <input type="number" id="adults" class="form-control" [(ngModel)]="flightModel.adults" name="adults">
  </div>
  <div class="col-md-4">
    <label for="children">Children</label>
    <input type="number" id="children" class="form-control" [(ngModel)]="flightModel.children" name="children">
  </div>
  <div class="col-md-4">
    <label for="infants">Infants</label>
    <input type="number" id="infants" class="form-control" [(ngModel)]="flightModel.infants" name="infants">
  </div>
</div>
```

The passenger fields are all of the number type because we only expect a count of passengers from each category.

Validating the form and form fields

Angular greatly simplifies form validation with its built-in directives and state properties. You can use the state properties to check whether a form field has been touched. If it has been touched but violates a validation rule, you can use the ngIf directive to display the associated errors. Let's see an example of validating the full name field:

```html
<div class="form-group">
  <label for="fullName">Full Name</label>
  <input
    type="text"
    id="fullName"
    class="form-control"
    [(ngModel)]="flightModel.fullName"
    name="fullName"
    #name="ngModel"
    required
    minlength="6">
</div>
```

We just added three significant attributes to our form's full name field: #name, required, and minlength. The #name attribute is completely different from the name attribute: the former is a template variable that holds information about this field via the ngModel value, while the latter is the usual form input name attribute.
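To make that distinction concrete, here is a small sketch of our own (not from the book) showing what the name template variable exposes. These are the standard control-state flags Angular maintains for every ngModel-bound field:

```html
<!-- #name exports the NgModel directive, so its state can be read in the template -->
<p>Valid: {{ name.valid }}</p>     <!-- true once required and minlength pass -->
<p>Touched: {{ name.touched }}</p> <!-- true after the field loses focus -->
<p>Dirty: {{ name.dirty }}</p>     <!-- true once the value has been changed -->
```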
In Angular, validation rules are passed as attributes, which is why required and minlength are there. Now the field is validated, but there is no feedback to the user on what went wrong. Let's add some error messages to be shown when validation rules are violated:

```html
<div *ngIf="name.invalid && (name.dirty || name.touched)" class="text-danger">
  <div *ngIf="name.errors.required">
    Name is required.
  </div>
  <div *ngIf="name.errors.minlength">
    Name must be at least 6 characters long.
  </div>
</div>
```

The ngIf directive shows these div elements conditionally:

- If the form field has been touched but there is no value in it, the "Name is required" error is shown.
- "Name must be at least 6 characters long" is shown when the field has been touched but the content is fewer than 6 characters.

Submitting forms

We need to consider a few factors before submitting a form:

- Is the form valid?
- Is there a handler for the form prior to submission?

To make sure that the form is valid, we can disable the Submit button:

```html
<form #flightForm="ngForm">
  <div class="form-group" style="margin-top: 15px">
    <button class="btn btn-primary btn-block" [disabled]="!flightForm.form.valid">
      Submit
    </button>
  </div>
</form>
```

First, we add a template variable called flightForm to the form and then use the variable to check whether the form is valid. If the form is invalid, we disable the button from being clicked.

To handle the submission, add an ngSubmit event to the form. This event is fired when the button is clicked:

```html
<form #flightForm="ngForm" (ngSubmit)="handleSubmit()">
  ...
</form>
```

You can now add a class method, handleSubmit, to handle the form submission. A simple log to the console is enough for this example (a sketch of a fuller handler follows the related reading list below):

```typescript
export class FlightFormComponent implements OnInit {
  flightModel: Flight;

  cities: Array<string> = [
    ...
  ];

  constructor() {
    this.flightModel = new Flight('', '', '', '', 0, new Date(), 0, 0, new Date());
  }

  // Handler for form submission
  handleSubmit() {
    console.log(this.flightModel);
  }
}
```

We discussed collecting user input via forms and covered their important features: typed inputs, validation, two-way binding, submission, and so on. These techniques will prepare you to get started with building business applications.

If you liked this article, you may want to read the book TypeScript 2.x for Angular Developers to learn about typed DOM events and event handling, among other interesting things to do with TypeScript.

- TypeScript 2.9 release candidate is here
- How to install and configure TypeScript
- How to work with classes in TypeScript
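As promised above, here is a hedged sketch of what handleSubmit might grow into once a backend exists. Everything specific here is an assumption for illustration: the /api/bookings URL is a placeholder, and HttpClientModule would need to be added to the AppModule imports array:

```typescript
// Hypothetical extension — not part of the original example.
// Assumes HttpClientModule has been added to AppModule's imports array,
// and that a backend exposes the placeholder endpoint /api/bookings.
import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Flight } from '../flight';

@Component({
  selector: 'app-flight-form',
  templateUrl: './flight-form.component.html',
  styleUrls: ['./flight-form.component.css']
})
export class FlightFormComponent implements OnInit {
  flightModel: Flight;

  constructor(private http: HttpClient) {
    this.flightModel = new Flight('', '', '', '', 0, new Date(), 0, 0, new Date());
  }

  ngOnInit() {}

  handleSubmit() {
    // POST the typed model to the placeholder endpoint
    this.http.post('/api/bookings', this.flightModel)
      .subscribe(() => console.log('Flight booked!', this.flightModel));
  }
}
```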
Why do React developers love Redux for state management?

Sugandha Lahoti
03 Jul 2018
3 min read
Redux is an implementation of Flux, a pattern for managing application state in React. Redux brings a clean and testable design to the table using a purely functional approach. It fills in a missing piece of React and sits at the core of most complex React projects. This video tutorial talks about why Redux is needed and touches upon the Redux flow.

Why Redux?

If you have written a large-scale application before, you will know that managing application state can become a pain as the app grows. Application state includes server responses, cached data, and data that has not yet been persisted to the server. Furthermore, the user interface (UI) state constantly increases in complexity.

Take the example of an e-commerce website. Any such site contains a lot of components — for instance, the product view, the menu section, and the filter panel. Whenever we have such a complex app, whether mobile or web, it becomes difficult for components to communicate and to know each other's updated state. For instance, when you interact with the price filter slider, the product view changes. This can work if a parent component calls a child component and they share properties; however, that only holds for simple apps. For complex apps, it becomes difficult to manage state and update history across multiple components. Redux comes to the rescue here. In order to understand how Redux functions, let's walk through its flow.

Redux Flow

Action

Whenever a state change occurs in a component, it triggers an action creator. An action creator is a function that returns an action. Actions are plain JavaScript objects of information that send data from your application to your store; they are the only source of information for the store.

Reducers

After the action creator returns this object, it is handled by reducers. Reducers specify how the application's state changes in response to actions sent to the store, depending on the action type.

Store

The store is the object that brings them together. It holds the application state, allows access to that state, and allows the state to be updated.

Provider

The provider distributes the data retrieved from the store to all the other components by wrapping a main base component.

This all seems highly theoretical and may be a bit difficult to digest at first, but once you apply it practically, you will get used to the terminology and to how Redux flows (see the sketch after the links below for the flow in code). Don't forget to watch the video tutorial from Learning React Native Development by Mifta Sintaha to know more about Redux. For a comprehensive guide to building React Native mobile apps, buy the full video course from the Packt store.

- Introduction to Redux
- Creating Reusable Generic Modals in React and Redux
- Minko Gechev: "Developers should learn all major front-end frameworks to go to the next level"
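To ground those four pieces, here is a minimal, hedged sketch of the action → reducer → store flow in TypeScript with plain Redux. The SET_PRICE_FILTER action and maxPrice state shape (echoing the price-filter example above) are our own illustrative choices, not from the video:

```typescript
// Minimal sketch of the action → reducer → store flow with plain Redux.
// The action type and state shape are invented for illustration.
import { createStore, AnyAction } from 'redux';

interface AppState {
  maxPrice: number;
}

// Action creator: a function that returns an action object
const setPriceFilter = (price: number) => ({
  type: 'SET_PRICE_FILTER',
  payload: price,
});

// Reducer: computes the next state from the current state and an action
const reducer = (
  state: AppState = { maxPrice: 1000 },
  action: AnyAction
): AppState => {
  switch (action.type) {
    case 'SET_PRICE_FILTER':
      return { ...state, maxPrice: action.payload };
    default:
      return state;
  }
};

// Store: holds the state and allows it to be read and updated
const store = createStore(reducer);
store.subscribe(() => console.log('New state:', store.getState()));
store.dispatch(setPriceFilter(500)); // logs: New state: { maxPrice: 500 }
```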