
How-To Tutorials - Web Development

1802 Articles

Introduction to WordPress Plugin

Packt
22 Feb 2018
13 min read
In this article, Yannick Lefebvre, author of WordPress Plugin Development Cookbook, Second Edition, covers the following recipes:

- Creating a new shortcode with parameters
- Managing multiple sets of user settings from a single admin page

WordPress shortcodes are a simple yet powerful tool that can be used to automate the insertion of code into web pages. For example, a shortcode could be used to automate the insertion of videos from a third-party platform that is not supported natively by WordPress, or to embed content from a popular web site. By following the two code samples found in this article, you will learn how to create a WordPress plugin that defines your own shortcode so that you can quickly embed Twitter feeds on a web site. You will also learn how to create an administration configuration panel so that you can build a set of configurations that can be referenced when using your newly created shortcode.

Creating a new shortcode with parameters

While simple shortcodes already provide a lot of potential to output complex content to a page by entering a few characters in the post editor, shortcodes become even more useful when they are coupled with parameters that are passed to their associated processing function. Using this technique, it becomes very easy to create a shortcode that accelerates the insertion of external content in WordPress posts or pages by only needing to specify the shortcode and the unique identifier of the source element to be displayed. We will illustrate this concept in this recipe by creating a shortcode that will be used to quickly add Twitter feeds to posts or pages.

How to do it...

1. Navigate to the WordPress plugin directory of your development installation.
2. Create a new directory called ch3-twitter-embed.
3. Navigate to this directory and create a new text file called ch3-twitter-embed.php.
4. Open the new file in a code editor and add an appropriate header at the top of the plugin file, naming the plugin Chapter 2 - Twitter Embed.
5. Add the following line of code to declare a new shortcode and specify the name of the function that should be called when the shortcode is found in posts or pages:

```php
add_shortcode( 'twitterfeed', 'ch3te_twitter_embed_shortcode' );
```

6. Add the following code section to provide an implementation for the ch3te_twitter_embed_shortcode function:

```php
function ch3te_twitter_embed_shortcode( $atts ) {
    extract( shortcode_atts( array(
        'user_name' => 'ylefebvre'
    ), $atts ) );

    if ( !empty( $user_name ) ) {
        $output  = '<a class="twitter-timeline" href="';
        $output .= esc_url( 'https://twitter.com/' . $user_name );
        $output .= '">Tweets by ' . esc_html( $user_name );
        $output .= '</a><script async ';
        $output .= 'src="//platform.twitter.com/widgets.js"';
        $output .= ' charset="utf-8"></script>';
    } else {
        $output = '';
    }

    return $output;
}
```

7. Save and close the plugin file.
8. Log in to the administration page of your development WordPress installation.
9. Click on Plugins in the left-hand navigation menu.
10. Activate your new plugin.
11. Create a new page and use the shortcode [twitterfeed user_name='WordPress'] in the page editor, where WordPress is the Twitter username of the feed to display.
12. Save and view the page to see that the shortcode was replaced by an embedded Twitter feed on your site.
13. Edit the page and remove the user_name parameter and its associated value, leaving only the core [twitterfeed] shortcode in the post, and save.
14. Refresh the page and see that the feed is still being displayed, but now shows tweets from another account.

How it works...
When shortcodes are used with parameters, these extra pieces of data are sent to the associated processing function in the $atts parameter variable. By using a combination of the standard PHP extract function and the WordPress-specific shortcode_atts function, our plugin is able to parse the data sent to the shortcode and create an array of identifiers and values that are subsequently transformed into PHP variables that we can use in the rest of our shortcode implementation function. In this specific example, we expect a single variable to be used, called user_name, which will be stored in a PHP variable called $user_name. If the user enters the shortcode without any parameter, a default value of ylefebvre will be assigned to the user_name variable to ensure that the plugin still works.

Since we are going to accept user input in this code, we also verify that the user did not provide an empty string, and we use the esc_html and esc_url functions to remove any potentially harmful HTML characters from the input string and to make sure that the link destination URL is valid. Once we have access to the Twitter username, we can put together the required HTML code that will embed a Twitter feed in our page and display the selected user's tweets. While this example only has one argument, it is possible to define multiple parameters for a shortcode.

Managing multiple sets of user settings from a single admin page

Throughout this article, you have learned how to create configuration pages to manage single sets of configuration options for our plugins. In some cases, only being able to specify a single set of options will not be enough. For example, looking back at the Twitter embed shortcode plugin that was created, a single configuration panel would only allow users to specify one set of options, such as the desired Twitter feed dimensions or the number of tweets to display. A more flexible solution would be to allow users to specify multiple sets of configuration options, which could then be called up by using an extra shortcode parameter (for example, [twitterfeed user_name="WordPress" option_id="2"]). While the first thought that might cross your mind to configure such a plugin is to create a multi-level menu item with submenus to store a number of different settings, this method would produce a very awkward interface for users to navigate. A better way is to use a single panel but give the user a way to select between multiple sets of options to be modified.

In this recipe, you will learn how to enhance the previously created Twitter feed shortcode plugin to be able to control the embedded feed size and the number of tweets to display from the plugin configuration panel, and to give users the ability to specify multiple display sizes.

Getting ready

You should have already followed the Creating a new shortcode with parameters recipe in this article to have a starting point for this recipe. Alternatively, you can get the resulting code (Chapter 2/ch3-twitter-embed/ch3-twitter-embed.php) from the downloaded code bundle.

How to do it...

1. Navigate to the ch3-twitter-embed folder of the WordPress plugin directory of your development installation.
2. Open the ch3-twitter-embed.php file in a text editor.
3. Add the following lines of code to implement an activation callback to initialize plugin options when the plugin is installed or upgraded:

```php
register_activation_hook( __FILE__, 'ch3te_set_default_options_array' );

function ch3te_set_default_options_array() {
    ch3te_get_options();
}

function ch3te_get_options( $id = 1 ) {
    $options = get_option( 'ch3te_options_' . $id, array() );

    $new_options['setting_name']     = 'Default';
    $new_options['width']            = 560;
    $new_options['number_of_tweets'] = 3;

    $merged_options = wp_parse_args( $options, $new_options );

    $compare_options = array_diff_key( $new_options, $options );
    if ( empty( $options ) || !empty( $compare_options ) ) {
        update_option( 'ch3te_options_' . $id, $merged_options );
    }

    return $merged_options;
}
```

4. Insert the following code segment to register a function to be called when the administration menu is put together. When this happens, the callback function adds an item to the Settings menu and specifies the function to be called to render the configuration page:

```php
// Assign function to be called when admin menu is constructed
add_action( 'admin_menu', 'ch3te_settings_menu' );

// Function to add item to Settings menu and
// specify function to display options page content
function ch3te_settings_menu() {
    add_options_page( 'Twitter Embed Configuration', 'Twitter Embed',
        'manage_options', 'ch3te-twitter-embed', 'ch3te_config_page' );
}
```

5. Add the following code to implement the configuration page rendering function:

```php
// Function to display options page content
function ch3te_config_page() {
    // Retrieve plugin configuration options from database
    if ( isset( $_GET['option_id'] ) ) {
        $option_id = intval( $_GET['option_id'] );
    } elseif ( isset( $_POST['option_id'] ) ) {
        $option_id = intval( $_POST['option_id'] );
    } else {
        $option_id = 1;
    }
    $options = ch3te_get_options( $option_id );
    ?>
    <div id="ch3te-general" class="wrap">
    <h3>Twitter Embed</h3>

    <!-- Display message when settings are saved -->
    <?php if ( isset( $_GET['message'] ) && $_GET['message'] == '1' ) { ?>
        <div id='message' class='updated fade'>
            <p><strong>Settings Saved</strong></p>
        </div>
    <?php } ?>

    <!-- Option selector -->
    <div id="icon-themes" class="icon32"><br></div>
    <h3 class="nav-tab-wrapper">
    <?php
    for ( $counter = 1; $counter <= 5; $counter++ ) {
        $temp_options = ch3te_get_options( $counter );
        $class = ( $counter == $option_id ) ? ' nav-tab-active' : '';
        ?>
        <a class="nav-tab<?php echo $class; ?>"
           href="<?php echo add_query_arg( array( 'page' => 'ch3te-twitter-embed',
               'option_id' => $counter ), admin_url( 'options-general.php' ) ); ?>">
            <?php echo $counter; ?>
            <?php if ( $temp_options !== false ) echo ' (' . $temp_options['setting_name'] . ')';
                  else echo ' (Empty)'; ?></a>
    <?php } ?>
    </h3><br />

    <!-- Main options form -->
    <form name="ch3te_options_form" method="post" action="admin-post.php">
        <input type="hidden" name="action" value="save_ch3te_options" />
        <input type="hidden" name="option_id" value="<?php echo $option_id; ?>" />
        <?php wp_nonce_field( 'ch3te' ); ?>
        <table>
            <tr><td>Setting name</td>
                <td><input type="text" name="setting_name"
                    value="<?php echo esc_html( $options['setting_name'] ); ?>"/></td>
            </tr>
            <tr><td>Feed width</td>
                <td><input type="text" name="width"
                    value="<?php echo esc_html( $options['width'] ); ?>"/></td>
            </tr>
            <tr><td>Number of Tweets to display</td>
                <td><input type="text" name="number_of_tweets"
                    value="<?php echo esc_html( $options['number_of_tweets'] ); ?>"/></td>
            </tr>
        </table><br />
        <input type="submit" value="Submit" class="button-primary" />
    </form>
    </div>
    <?php
}
```

6. Add the following block of code to register a function that will process user options when they are submitted to the site:

```php
add_action( 'admin_init', 'ch3te_admin_init' );

function ch3te_admin_init() {
    add_action( 'admin_post_save_ch3te_options', 'process_ch3te_options' );
}
```

7. Add the following code to implement the process_ch3te_options function, declared in the previous block of code, which saves the submitted values and builds a clean redirection address:

```php
// Function to process user data submission
function process_ch3te_options() {
    // Check that user has proper security level
    if ( !current_user_can( 'manage_options' ) ) {
        wp_die( 'Not allowed' );
    }

    // Check that nonce field is present
    check_admin_referer( 'ch3te' );

    // Check if option_id field was present
    if ( isset( $_POST['option_id'] ) ) {
        $option_id = intval( $_POST['option_id'] );
    } else {
        $option_id = 1;
    }

    // Build option name and retrieve options
    $options = ch3te_get_options( $option_id );

    // Cycle through all text fields and store their values
    foreach ( array( 'setting_name' ) as $param_name ) {
        if ( isset( $_POST[$param_name] ) ) {
            $options[$param_name] = sanitize_text_field( $_POST[$param_name] );
        }
    }

    // Cycle through all numeric fields, convert to int and store
    foreach ( array( 'width', 'number_of_tweets' ) as $param_name ) {
        if ( isset( $_POST[$param_name] ) ) {
            $options[$param_name] = intval( $_POST[$param_name] );
        }
    }

    // Store updated options array to database
    $options_name = 'ch3te_options_' . $option_id;
    update_option( $options_name, $options );

    // Redirect back to the configuration page
    $cleanaddress = add_query_arg( array( 'message' => 1, 'option_id' => $option_id,
        'page' => 'ch3te-twitter-embed' ), admin_url( 'options-general.php' ) );
    wp_redirect( $cleanaddress );
    exit;
}
```

8. Find the ch3te_twitter_embed_shortcode function and modify it as follows to accept the new option_id parameter and load the plugin options to produce the desired output.
The changes, identified in bold in the original recipe, add an option_id attribute with a default value, validate it, and load the corresponding set of options:

```php
function ch3te_twitter_embed_shortcode( $atts ) {
    extract( shortcode_atts( array(
        'user_name' => 'ylefebvre',
        'option_id' => '1'
    ), $atts ) );

    if ( intval( $option_id ) < 1 || intval( $option_id ) > 5 ) {
        $option_id = 1;
    }

    $options = ch3te_get_options( $option_id );

    if ( !empty( $user_name ) ) {
        $output  = '<a class="twitter-timeline" href="';
        $output .= esc_url( 'https://twitter.com/' . $user_name );
        $output .= '" data-width="' . $options['width'] .
        // ... (the listing is truncated at this point in the excerpt)
```

9. Save and close the plugin file.
10. Deactivate and then activate the Chapter 2 - Twitter Embed plugin from the administration interface to execute its activation function and create the default settings.
11. Navigate to the Settings menu and select the Twitter Embed submenu item to see the newly created configuration panel, with the first set of options displayed and more sets of options accessible through the option selector shown at the top of the page.
12. To select the set of options to be used, add the parameter option_id to the shortcode used to display a Twitter feed, as follows: [twitterfeed user_name="WordPress" option_id="1"]

How it works...

This recipe shows how we can leverage options arrays to create multiple sets of options simply by creating the name of the options array on the fly. Instead of having a specific option name in the first parameter of the get_option function call, we create a string with an option ID. This ID is sent through as a URL parameter on the configuration page and as a hidden text field when processing the form data. On initialization, the plugin only creates a single set of options, which is probably enough for most casual users of the plugin. Doing so avoids cluttering the site database with unused options. When the user requests to view one of the empty option sets, the plugin creates a new set of options right before rendering the options page. The rest of the code is very similar to the other examples that we saw in this article, since the way to access the array elements remains the same.

Summary

In this article, the author explained the entire process of creating a new shortcode with parameters and managing multiple sets of user settings from a single admin page.


API Gateway and its Need

Packt
21 Feb 2018
9 min read
In this article by Umesh R Sharma, author of the book Practical Microservices, we will cover the API Gateway and the need for it, with simple and short examples. (For more resources related to this topic, see here.)

Dynamic websites need to show a lot of information on a single page. A common example is an order summary page, which shows the cart details and the customer's address. To build it, the frontend has to fire separate queries to the customer detail service and the order detail service. This is a very simple example of a single page backed by multiple services. Because a single microservice deals with only one concern, showing a lot of information on one page results in many API calls for that page, so a web or mobile page can become very chatty in terms of the data it has to fetch. Another problem is that, sometimes, microservices talk over protocols other than HTTP, such as Thrift, and outside consumers can't deal with a microservice directly in that protocol. Also, as a mobile screen is smaller than a web page, the data required by a mobile API call is different from that of a desktop call. A developer may want to return less data to the mobile API, or maintain different versions of the API calls for mobile and desktop. So you can end up in a situation where each client calls different web services and has to keep track of them, and developers have to maintain backward compatibility because API URLs are embedded in clients such as mobile apps.

Why do we need the API Gateway?

All of the preceding problems can be addressed with an API Gateway in place. The API Gateway acts as a proxy between the API consumer and the API servers. To address the first problem in that scenario, there will be only one call, such as /successOrderSummary, to the API Gateway. The API Gateway, on behalf of the consumer, calls the order and user detail services, combines the results, and serves them to the client. So, basically, it acts as a facade for API calls, which may internally call many APIs. The API Gateway serves many purposes, some of which are as follows.

Authentication

An API Gateway can take on the overhead of authenticating an API call from outside. After that, the internal calls can skip the security check. If a request comes from inside the VPC, removing the security check decreases network latency a bit and lets developers focus more on business logic than on security.

Different protocols

Sometimes, microservices internally use different protocols to talk to each other; these can be Thrift, TCP, UDP, RMI, SOAP, and so on. For clients, there can be a single REST-based HTTP call. Clients hit the API Gateway over HTTP, and the API Gateway can make the internal calls in the required protocols and combine the results from all the web services in the end. It can then respond to the client in the required protocol; in most cases, that protocol will be HTTP.

Load balancing

The API Gateway can work as a load balancer to handle requests in the most efficient manner. It can keep track of the request load it has sent to different nodes of a particular service, and it should be intelligent enough to balance the load between the different nodes of that service. With NGINX Plus coming into the picture, NGINX can be a good candidate for the API Gateway; it has many of the features needed to address the problems usually handled by an API Gateway.
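To make the facade idea concrete, here is a minimal sketch of such an aggregating endpoint written in Node.js with Express. It is not taken from the book (the article's own example later uses Spring Boot with Zuul), the internal service URLs are hypothetical placeholders, and it assumes Node 18+ so that the built-in fetch is available:

```javascript
// Minimal API Gateway aggregation sketch (illustrative only).
// The internal service URLs are hypothetical placeholders.
const express = require('express');

const app = express();

app.get('/successOrderSummary', async (req, res) => {
  try {
    // Fan out to the internal services in parallel, on behalf of the client.
    const [order, customer] = await Promise.all([
      fetch('http://order-service.internal/orders/latest').then(r => r.json()),
      fetch('http://customer-service.internal/customers/me').then(r => r.json())
    ]);
    // Combine both results into the single response the client asked for.
    res.json({ order, customer });
  } catch (err) {
    res.status(502).json({ error: 'Upstream service failed' });
  }
});

app.listen(8080);
```

The client makes one HTTP call to the gateway, and the gateway hides how many internal services (and which protocols) sit behind it.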
Request dispatching (including service discovery)

One of the main features of the gateway is to reduce the chatter between the client and the microservices. The gateway initiates calls to the required microservices in parallel, so from the client's side there is only one hit. The gateway hits all the required services, waits for their results, then combines them and sends the response back to the client. Reactive microservice designs can help you achieve this. Working with service discovery adds many extra capabilities. It can tell you which node of a service is the master and which is the slave; the same goes for a database, where write requests can go to the master and read requests to a slave. This is the basic rule, but users can apply many more rules on the basis of the metadata provided by the API Gateway. The gateway can also record the basic response time from each node of a service instance, so higher-priority API calls can be routed to the fastest-responding node. Again, the rules that can be defined depend on the API Gateway you are using and how it is implemented.

Response transformation

Being the first and single point of entry for all API calls, the API Gateway knows which type of client is calling: a mobile client, a web client, or another external consumer. It can make the internal calls on the client's behalf and return the data shaped for each client, as per its needs and configuration.

Circuit breaker

To handle partial failure, the API Gateway uses a technique called the circuit breaker pattern. A failure in one service can cause cascading failures through all the service calls in the stack. The API Gateway can keep an eye on a threshold for any microservice; if a service passes that threshold, it marks that API as an open circuit and decides not to make the call for a configured time. Hystrix (by Netflix) serves this purpose efficiently; its default is to trip after 20 failed requests in 5 seconds. Developers can also specify a fallback for the open circuit, which can be a dummy service. Once the API starts giving results as expected again, the gateway marks it as a closed circuit. (A small sketch of this pattern in code follows the pros and cons below.)

Pros and cons of API Gateway

Using the API Gateway has its own pros and cons. The previous sections already described the advantages, but here they are again as a list of pros.

Pros:

- Microservices can focus on business logic
- Clients can get all the data in a single hit
- Authentication, logging, and monitoring can be handled by the API Gateway
- It gives the flexibility for clients and microservices to talk over completely independent protocols
- It can give tailor-made results, as per the client's needs
- It can handle partial failure

In addition to the preceding pros, there are also some trade-offs to using this pattern.

Cons:

- It can cause performance degradation because of everything that happens at the API Gateway
- A discovery service has to be implemented alongside it
- Sometimes it becomes a single point of failure
- Managing routing is an overhead of the pattern
- It adds an additional network hop to each call
- Overall, it increases the complexity of the system
- Too much logic implemented in the gateway will lead to another dependency problem

So, before using the API Gateway, both aspects should be considered. The decision to include the API Gateway in a system increases the cost as well.
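The circuit-breaker behaviour described above can be sketched in a few lines of plain JavaScript. This is only an illustration of the pattern, not the Hystrix implementation: it trips after a number of consecutive failures rather than within a time window, and the threshold and reset values are arbitrary:

```javascript
// Minimal circuit-breaker sketch (illustrative only; thresholds are arbitrary).
function circuitBreaker(callService, { maxFailures = 20, resetMs = 5000 } = {}) {
  let failures = 0;
  let openUntil = 0; // timestamp until which the circuit stays open

  return async function guardedCall(...args) {
    if (Date.now() < openUntil) {
      // Circuit is open: fail fast instead of calling the failing service.
      throw new Error('circuit open - use a fallback response');
    }
    try {
      const result = await callService(...args);
      failures = 0; // a success closes the circuit again
      return result;
    } catch (err) {
      failures += 1;
      if (failures >= maxFailures) {
        openUntil = Date.now() + resetMs; // stop calling for a configured time
        failures = 0;
      }
      throw err;
    }
  };
}
```

A gateway would wrap each downstream call in a guard like this and serve a fallback (for example, a cached or dummy response) while the circuit is open.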
Before putting effort, cost, and management into this pattern, it is recommended to analyze how much you can gain from it.

Example of API Gateway

In this example, we will show only a sample product page that fetches data from a product detail service to give information about the product. The example could be extended in many ways; our focus is only on showing how the API Gateway pattern works, so we will keep it simple and small. The example uses Zuul from Netflix as the API Gateway. Spring also has an implementation of Zuul, so we are creating this example with Spring Boot.

For the sample API Gateway implementation, we will use http://start.spring.io/ to generate an initial template of our code. Spring Initializr is a project from Spring that helps beginners generate basic Spring Boot code. A user sets a minimum configuration and hits the Generate Project button. If a user wants to set more specific details about the project, they can see all the configuration settings by clicking on the Switch to the full version button.

Let's create a controller in the same package as the main application class and put the following code in the file:

```java
@SpringBootApplication
@RestController
public class ProductDetailController {

    @Resource
    ProductDetailService pdService;

    @RequestMapping(value = "/product/{id}")
    public ProductDetail getAllProduct(@PathParam("id") String id) {
        return pdService.getProductDetailById(id);
    }
}
```

In the preceding code, we assume that the pdService bean interacts with a Spring Data repository for product details and returns the result for the required product ID. Another assumption is that this service is running on port 10000. Just to make sure everything is running, a hit on a URL such as http://localhost:10000/product/1 should return some JSON as a response.

For the API Gateway, we will create another Spring Boot application with Zuul support. Zuul can be activated by simply adding the @EnableZuulProxy annotation. The following is the minimal code to start a simple Zuul proxy:

```java
@SpringBootApplication
@EnableZuulProxy
public class ApiGatewayExampleInSpring {

    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayExampleInSpring.class, args);
    }
}
```

Everything else is managed in configuration. In the application.properties file of the API Gateway, the content will be something like the following:

```properties
zuul.routes.product.path=/product/**
zuul.routes.product.url=http://localhost:10000
ribbon.eureka.enabled=false
server.port=8080
```

With this configuration, we are defining a rule such as this: for any request to a URL such as /product/xxx, pass the request on to http://localhost:10000. To the outside world, the URL will look like http://localhost:8080/product/1, which is internally forwarded to port 10000. If we defined a spring.application.name variable as product in the product detail microservice, then we wouldn't need to define the route property here (zuul.routes.product.path=/product/**), as Zuul will, by default, map it to /product. The API Gateway example taken here is not very intelligent, but Zuul is a very capable API Gateway. Depending on the routes, filters, and caching defined in Zuul's properties, one can build a very powerful API Gateway.

Summary

In this article, you learned about the API Gateway, the need for it, and its pros and cons, with a code example.
Resources for Article:   Further resources on this subject: What are Microservices? [article] Microservices and Service Oriented Architecture [article] Breaking into Microservices Architecture [article]


Getting Started on the Raspberry Pi

Packt
21 Feb 2018
7 min read
In this article by Soham Chetan Kamani, author of the book Full Stack Web Development with Raspberry Pi 3, we will cover how to get started with the Raspberry Pi. The marvel of the Raspberry Pi, however, doesn't end with it being a small, affordable computer. Its extreme portability means we can now do things which were not previously possible with traditional desktop computers. The GPIO pins give us easy access to interface with external devices. This allows the Pi to act as a bridge between embedded electronics and sensors, and the power that Linux gives us. In essence, we can run any code in our favorite programming language (as long as it can run on Linux) and interface it directly with outside hardware quickly and easily. Once we couple this with the wireless networking capabilities introduced in the Raspberry Pi 3, we gain the ability to make applications that would not have been feasible before this device existed.

The Raspberry Pi has become hugely popular as a portable computer, and for good reason. When it comes to what you can do with this tiny piece of technology, the sky's the limit. Back in the day, computers used to be the size of entire neighborhood blocks, and only large corporations doing expensive research could afford them. After that we went on to embrace personal computers, which were still a bit expensive but, for the most part, could be bought by the common man. This brings us to where we are today, where we can buy a fully functioning Linux computer, which is as big as a credit card, for under $30. It is truly a huge leap in making computers available to anyone and everyone. (For more resources related to this topic, see here.)

Web development and portable computing have come a long way. A few years ago we couldn't dream of making a rich, interactive, and performant application which runs in the browser. Today, not only can we do that, but we can also do it all in the palm of our hands (quite literally). When we think of developing an application that uses databases, application servers, sockets, and cloud APIs, the picture that normally comes to mind is that of many server racks sitting in a huge room. In this book, however, we are going to implement all of that using only the Raspberry Pi.

In this article, we will go through the concept of the internet of things and discuss how web development on the Raspberry Pi can help us get there. Following this, we will also learn how to set up our Raspberry Pi and access it from our computer. We will cover the following topics:

- The internet of things
- Our application
- Setting up the Raspberry Pi
- Remote access

The Internet of Things (IoT)

The web has, until today, been a network of computers exchanging data. The limitation of this was that it was a closed loop: people could send and receive data from other people via their computers, but rarely much else. The internet of things, in contrast, is a network of devices or sensors that connect the outside world to the internet. Superficially, nothing is different: the internet is still a network of computers. What has changed is that now these computers are collecting and uploading data from things instead of people. This allows anyone who is connected to obtain information that is not collected by a human. The internet of things as a concept has been around for a long time, but it is only now that almost anyone can connect a sensor or device to the cloud, and this IoT revolution was hugely enabled by the advent of portable computing, which was led by the Raspberry Pi.
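As mentioned above, the GPIO pins are what let code running on the Pi talk directly to external hardware. As a small taste of what that bridge looks like from Node.js (the book's stack), here is an illustrative sketch using the widely used onoff npm package; it is not code from the book, and the pin number depends entirely on how your circuit is wired:

```javascript
// Illustrative sketch (not from the book): blink an LED wired to a GPIO pin.
const { Gpio } = require('onoff');

const led = new Gpio(17, 'out'); // GPIO17 configured as an output (example pin)

let on = false;
const timer = setInterval(() => {
  on = !on;
  led.writeSync(on ? 1 : 0); // drive the pin high or low every 500 ms
}, 500);

// Release the pin cleanly when the process is interrupted.
process.on('SIGINT', () => {
  clearInterval(timer);
  led.unexport();
  process.exit();
});
```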
A brief look at our application Throughout this book, we are going to go through different components and aspects of web development and embedded systems. These are all going to be held together by our central goal of making an entire web application capable of sensing and displaying the surrounding temperature and humidity. In order to make a properly functioning system, we have to first build out the individual parts. More difficult still, is making sure all the parts work well together. Keeping this in mind, let's take a look at the different components of our technology stack, and the problems that each of them solve : The sensor interface - Perception The sensor is what connects our otherwise isolated application to the outside world. The sensor will be connected to the GPIO pins of the Raspberry pi. We can interface with the sensor through various different native libraries. This is the starting point of our data. It is where all the data that is used by our application is created. If you think about it, every other component of our technology stack only exists to manage, manipulate, and display the data collected from the sensor. The database - Persistence "Data" is the term we give to raw information, which is information that we cannot easily aggregate or understand. Without a way to store and meaningfully process and retrieve this data, it will always remain "data" and never "information", which is what we actually want. If we just hook up a sensor and display whatever data it reads, we are missing out on a lot of additional information. Let's take the example of temperature: What if we wanted to find out how the temperature was changing over time? What if we wanted to find the maximum and minimum temperatures for a particular day, or a particular week, or even within a custom duration of time? What if we wanted to see temperature variation across locations? There is no way we could do any of this with only the sensor. We also need some sort of persistence and structure to our data, and this is exactly what the database provides for us. If we structure our data correctly, getting the answers to the above questions is just a matter of a simple database query. The user interface - Presentation The user interface is the layer which connects our application to the end user. One of the most challenging aspects of software development is to make information meaningful and understandable to regular users of our application. The UI layer serves exactly this purpose: it takes relevant information and shows it in such a way that it is easily understandable to humans. How do we achieve such a level of understandability with such a large amount of data? We use visual aids: like colors, charts and diagrams (just like how the diagrams in this book make its information easier to understand). An important thing for any developer to understand is that your end user actually doesn't care about any of the the back-end stuff. The only thing that matters to them is a good experience. Of course, all the 0ther components serve to make the users experience better, but it's really the user facing interface that leaves the first impression, and that's why it's so important to do it well. The application server - Middleware This layer consists of the actual server side code we are going to write to get the application running. It is also called "middleware". In addition to being in the exact center of the architecture diagram, this layer also acts as the controller and middle-man for the other layers. 
The HTML pages that form the UI are served through this layer. All the database queries that we were talking about earlier are made here. The code that runs in this layer is responsible for retrieving the sensor readings from our external pins and storing the data in our database. Summary We are just warming up! In this article we got a brief introduction to the concept of the internet of things. We then went on to look at an overview of what we were going to build throughout the rest of this book, and saw how the Raspberry Pi can help us get there. Resources for Article:   Further resources on this subject: Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article] Setting up your Raspberry Pi [article] Welcome to JavaScript in the full stack [article]


Why has Vue.js become so popular?

Amit Kothari
19 Jan 2018
5 min read
The JavaScript ecosystem is full of choices, with many good web development frameworks and libraries to choose from. One of these frameworks is Vue.js, which is gaining a lot of popularity these days. In this post, we’ll explore why you should use Vue.js, and what makes it an attractive option for your next web project. For the latest Vue.js eBooks and videos, visit our Vue.js page. What is Vue.js? Vue.js is a JavaScript framework for building web interfaces. Vue has been gaining a lot of popularity recently. It ranks number one among the 5 web development tools that will matter in 2018. If you take a look at its GitHub page you can see just how popular it has become – the community has grown at an impressive rate. As a modern web framework, Vue ticks a lot of boxes. It uses a virtual DOM for better performance. A virtual DOM is an abstraction of the real DOM; this means it is lightweight and faster to work with. Vue is also reactive and declarative. This is useful because declarative rendering allows you to create visual elements that update automatically based on the state/data changes. One of the most exciting things about Vue is that it supports the component-based approach of building web applications. Its single file components, which are independent and loosely coupled, allow better reuse and faster development. It’s a tool that can significantly impact how you do things. What are the benefits of using Vue.js? Every modern web framework has strong benefits – if they didn’t, no one would use them after all. But here are some of the reasons why Vue.js is a good web framework that can help you tackle many of today’s development challenges. Check out this post to know more on how to install and use Vue.js for web development Good documentation. One of the things that are important when starting with a new framework is its documentation. Vue.js documentation is very well maintained; it includes a simple but comprehensive guide and well-documented APIs. Learning curve. Another thing to look for when picking a new framework is the learning curve involved. Compared to many other frameworks, Vue's concepts and APIs are much simpler and easier to understand. Also, it is built on top of classic web technologies like JavaScript, HTML, and CSS. This results in a much gentler learning curve. Unlike other frameworks which require further knowledge of different technologies - Angular requires TypeScript for example, and React uses JSX, with Vue we can build a sophisticated app by using HTML-based templates, plain JavaScript, and CSS. Less opinionated, more flexible. Vue is also pretty flexible compared to other popular web frameworks. The core library focuses on the ‘view’ part, using a modular approach that allows you to pick your own solution for other issues. While we can use other libraries for things like state management and routing, Vue offers officially supported companion libraries, which are kept up to date with the core library. This includes Vuex, which is an Elm, Flux, and Redux inspired state management solution, and vue-router, Vue's official routing library, which is powerful and incredibly easy to use with Vue.js. But because Vue is so flexible if you wanted to use Redux instead of Vuex, you can do just that. Vue even supports JSX and TypeScript. And if you like taking a CSS-in-JS approach, many other popular libraries also support Vue. Performance. One of the main reasons many teams are using Vue is because of its performance. 
Vue is small and even with minimal optimization effort performs better than many other frameworks. This is largely due to its lightweight virtual DOM implementation. Check out the JavaScript frameworks performance benchmark for a useful performance comparison. Tools. Along with a number of companion libraries, Vue also offers really good tools that offer a great development experience. Vue-CLI is Vue’s command line tool. Simple yet powerful, it provides different templates, allows project customization and makes starting a new Vue project incredibly easy. Vue also provides its own dev tools for Chrome (vue-devtools), which allows you to inspect the component tree and Vuex state, view events and even time travel. This makes the debugging process pretty easy. Vue also supports hot reload. Hot reload is great because instead of needing to reload a whole page, it allows you to simply reload only the updated component while maintaining the app's current state. Community. No framework can succeed without community support and, as we’ve seen already, Vue has a very active and constantly growing community. The framework is already adopted by many big companies, and its growth is only going to continue. While it is a great option for web development, Vue is also collaborating with Weex, a platform for building cross-platform mobile apps. Weex is backed by the Alibaba group, which is one of the largest e-commerce businesses in the world. Although Weex is not as mature as other app frameworks like React native, it does allow you to build a UI with Vue, which can be rendered natively on iOS and Android. Vue.js offers plenty of benefits. It performs well and is very easy to learn. However, it is, of course important to pick the right tool for the job, and one framework may work better than the other based on the project requirements and personal preferences. With this in mind, it’s worth comparing Vue.js with other frameworks. Are you considering using Vue.js? Do you already use it? Tell us about your experience! You can get started with building your first Vue.js 2 web application from this post.
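For readers who have not seen Vue code before, here is a minimal Vue 2 component sketch showing the declarative, component-based style discussed above. It is illustrative only, not taken from this post, and it assumes Vue 2 is already loaded on the page:

```javascript
// A small, reusable component: a prop, local state, and a declarative template
// that re-renders automatically when `count` changes.
Vue.component('like-counter', {
  props: ['label'],
  data() {
    return { count: 0 };
  },
  template: `
    <button @click="count++">
      {{ label }}: {{ count }}
    </button>
  `
});

// Mount a root instance; any markup inside #app can now use <like-counter>.
new Vue({
  el: '#app'
});
```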


Bootstrap 4 Objects, Components, Flexbox, and Layout

Packt
21 Aug 2017
14 min read
In this article by Ajdin Imsirovic author of the book Bootstrap 4 Cookbook, we have three recipes from the book, in which we will be looking at using CSS to override Bootstrap 4 styling and create customized blockquotes. Next. we will look at how to utilize SCSS to control the number of card columns at different screen sizes. We will wrap it up with the third recipe, in which we will look at classes that Bootstrap 4 uses to implement flex-based layouts. Specifically, we will switch the flex direction of card components, based on the screen size. (For more resources related to this topic, see here.) Customizing the blockquote element with CSS In this recipe, we will examine how to use and modify Bootstrap's blockquote element. The technique we'll employ is using the :before and :after CSS pseudo-classes. We will add HTML entities to the CSS content property, and then style their position, size, and color. Getting ready Navigate to the recipe4 page of the chapter 3 website, and preview the final result that we are trying to achieve (its preview is available in chapter3-complete/app, after running harp server in the said folder). To get this look, we are using all the regular Bootstrap 4 CSS classes, with the addition of .bg-white, added in the preceding recipe. In this recipe, we will add custom styles to .blockquote. How to do it... In the empty chapter3/start/app/recipe4.ejs file, add the following code: <div class="container mt-5"> <h1>Chapter 3, Recipe 4:</h1> <p class="lead">Customize the Blockquote Element with CSS</p> </div> <!-- Customizing the blockquote element --> <div class="container"> <div class="row mt-5 pt-5"> <div class="col-lg-12"> <blockquote class="blockquote"> <p>Blockquotes can go left-to-right. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Repellat dolor pariatur, distinctio doloribus aliquid recusandae soluta tempore. Vero a, eum.</p> <footer class="blockquote-footer">Some Guy, <cite>A famous publication</cite> </footer> </blockquote> </div> <div class="col-lg-12"> <blockquote class="blockquote blockquote-reverse bg-white"> <p>Blockquotes can go right-to-left. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Quisquam repellendus sequi officia nulla quaerat quo.</p> <footer class="blockquote-footer">Another Guy, <cite>A famous movie quote</cite> </footer> </blockquote> </div> <div class="col-lg-12"> <blockquote class="blockquote card-blockquote"> <p>You can use the <code>.card-blockquote</code> class. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Aliquid accusamus veritatis quasi.</p> <footer class="blockquote-footer">Some Guy, <cite>A reliable source</cite> </footer> </blockquote> </div> <div class="col-12"> <blockquote class="blockquote bg-info"> <p>Blockquotes can go left-to-right. Lorem ipsum dolor sit amet. </p> <footer class="blockquote-footer">Some Guy, <cite>A famous publication</cite> </footer> </blockquote> </div> </div> </div> In main-03-04.scss, add the following code: blockquote.blockquote { padding: 2rem 2rem 2rem 4rem; margin: 2rem; quotes: "201C" "201D"; position: relative; } blockquote:before { content: open-quote; font-family: Georgia, serif; font-size: 12rem; opacity: .04; font-weight: bold; position:absolute; top:-6rem; left: 0; } blockquote:after { content: close-quote; font-size: 12rem; opacity: .04; font-family: Georgia, serif; font-weight: bold; position:absolute; bottom:-11.3rem; right: 0; } In main.scss, uncomment @include for main-03-04.scss. Run grunt sass and harp server. How it works... 
In this recipe, we are using the regular blockquote HTML element and Bootstrap's classes for styling it. To make it look different, we primarily use the following tweaks: Setting the blockquote.blockquote position to relative Setting the :before and :after pseudo-classes, position to absolute In blockquote.blockquote, setting the padding and margin. Also, assigning the values for opening and closing quotes, using CSS (ISO) encoding for the two HTML entities Using Georgia font to style the content property in pseudo-classes Setting the font-size of pseudo-classes to a very high value and giving the font a very high opacity, so as to make it become more background-like With absolute positioning in place, it is easy to place the quotes in the exact location, using negative rem values Controlling the number of card columns on different breakpoints with SCSS This recipe will involve some SCSS mixins, which will alter the behavior of the card-columns component. To be able to showcase the desired effect, we will have to have a few hundred lines of compiled HTML code. This poses an issue; how do we show all that code inside a recipe? Here, Harp partials come to the rescue! Since most of the code in this recipe is repetitive, we will make a separate file. The file will contain the code needed to make a single card. Then, we will have a div with the class of card-columns, and this div will hold 20 cards, which will, in fact, be 20 calls to the single card file in our source code before compilation. This will make it easy for us to showcase how the number of cards in this card-columns div will change, based on screen width. To see the final result, open the chapter4/complete code's app folder, and run the console (that is, bash) on it. Follow it up with the harp server command, and navigate to localhost:9000 in your browser to see the result we will achieve in this recipe.  Upon opening the web page as explained in the preceding paragraph, you should see 20 cards in a varying number of columns, depending on your screen size. Getting ready To get acquainted with how card-columns work, navigate to the card-columns section of the Bootstrap documentation at https://v4-alpha.getbootstrap.com/components/card/#card-columns. 
How to do it… Open the currently empty file located at chapter4start/app/recipe04-07.ejs, and add the following code: <div class="container-fluid"> <div class="mt-5"> <h1><%- title %></h1> <p><a href="https://v4-alpha.getbootstrap.com/components/card/#card-columns" target="_blank">Link to bootstrap card-columns docs</a></p> </div><!-- /.container-fluid --> <div class="container-fluid mt-5 mb-5"> <div class="card-columns"> <!-- cards 1 to 5 --> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <!-- cards 6 to 10 --> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <!-- cards 11 to 15 --> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <!-- cards 16 to 20 --> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> <%- partial("partial/_recipe04-07-samplecard.ejs") %> </div> </div> Open the main.scss file, and comment out all the other imports since some of them clash with this recipe: @import "recipe04-04.scss"; @import "./bower_components/bootstrap/scss/bootstrap.scss"; @import "./bower_components/bootstrap/scss/_mixins.scss"; @import "./bower_components/font-awesome/scss/font-awesome.scss"; @import "./bower_components/hover/scss/hover.scss"; // @import "recipe04-01.scss"; // @import "recipe04-02.scss"; // @import "recipe04-03.scss"; // @import "recipe04-05.scss"; // @import "recipe04-06.scss"; @import "recipe04-07.scss"; // @import "recipe04-08.scss"; // @import "recipe04-09.scss"; // @import "recipe04-10.scss"; // @import "recipe04-11.scss"; // @import "recipe04-12.scss"; Next, we will add the partial file with the single card code in app/partial/_recipe04-07-samplecard.ejs: <div class="card"> <img class="card-img-top img-fluid" src="http://placehold.it/300x250" alt="Card image description"> <div class="card-block"> <h4 class="card-title">Lorem ipsum dolor sit amet.</h4> <p class="card-text">Lorem ipsum dolor sit amet, consectetur adipisicing elit. Officia autem, placeat dolorem sed praesentium aliquid suscipit tenetur iure perspiciatis sint?</p> </div> </div> If you are serving the files on Cloud9 IDE, then reference the placehold.it images from HTTPS so you don't have the warnings appearing in the console. Open this recipe's SCSS file, titled recipe04-07.scss, and paste the following code: .card-columns { @include media-breakpoint-only(sm) { column-count: 2; } @include media-breakpoint-only(md) { column-count: 3; } @include media-breakpoint-only(lg) { column-count: 5; } @include media-breakpoint-only(xl) { column-count: 7; } } Recompile Sass and start the harp server command to view the result. How it works… In step 1, we added our recipe's structure in recipe04-07.ejs. 
The focus in this file is the div with the class of card-columns, which holds 20 calls to the sample card partial file. In step 2, we included the SCSS file for this recipe, and to make sure that it works, we comment out the imports for all the other recipes' SCSS files. In step 3, we made our single card, as per the Bootstrap documentation. Finally, we customized the .card-columns class in our SCSS by changing the value of the card-columns property using the media-breakpoint-only mixin. The media-breakpoint-only mixin takes the sm, md, lg, and xl values as its parameter. This allows us to easily change the value of the column-count property in our layouts.  Breakpoint-dependent switching of flex direction on card components In this recipe, we will ease into using the flexbox grid in Bootstrap 4 with a simple example of switching the flex-direction property. To achieve this effect, we will use a few helper classes to enable the use of Flexbox in our recipe. To get acquainted with the way Flexbox works in Bootstrap, check out the official documentation at https://v4-alpha.getbootstrap.com/utilities/flexbox/ . Getting ready To get started with the recipe, let's first get an idea of what we will make. Navigate to chapter8complete/app/ and run harp server. Then, preview the completed recipe at localhost:9000/recipe08-01 . You should see a simple layout with four card components lined up horizontally. Now, resize the browser, either by changing the browser's window width or by pressing F12 (which will open developer tools and allow you to narrow down the viewport by adjusting the size of developer tools). At a certain breakpoint (), you should see the cards stacked on top of one another. That is the effect that we will achieve in this recipe. How to do it… Open the folder titled chapter8/start inside source code. Open the currently empty file titled recipe08-01.ejs inside the app folder; copy the below code a it into recipe08-01.ejs: <div class="container"> <h2 class="mt-5">Recipe 08-01: Breakpoint-dependent Switching of Flex Direction on Card Components</h2> <p>In this recipe we'll switch DIRECTION, between a vertical (.flex- {breakpoint}column), and a horizontal (.flex-{breakpoint}-row) stacking of cards.</p> <p>This recipe will introduce us to the flexbox grid in Bootstrap 4.</p> </div><!-- /.container --> <div class="container"> <%- partial("partial/_card0") %> <%- partial("partial/_card0") %> <%- partial("partial/_card0") %> <%- partial("partial/_card0") %> </div> While still in the same file, find the second div with the class of container and add more classes to it, as follows: <div class="container d-flex flex-column flex-lg-row"> Now, open the app/partial folder and copy and paste the following code into the file titled _card0.ejs: <div class="p-3" id="card0"> <div class="card"> <div class="card-block"> <h3 class="card-title">Special title treatment</h3> <p class="card-text">With supporting text below as a natural lead-in to additional content.</p> <a href="#" class="btn btn-primary">Go somewhere</a> </div> </div> </div> Now, run the harp server command and preview the result at localhost:9000/recipe08-01, inside the chapter8start folder. Resize the browser window to see the stacking of card components on smaller resolutions.  How it works… To start discussing how this recipe works, let's first do a little exercise. In the file titled recipe08-01, inside the chapter8start folder, locate the first div with the container class. 
Add the class of d-flex to s div, so that this section of code now looks like this: <div class="container d-flex"> Save the file and refresh the page in your browser. You should see that adding the helper class of d-flex to our first container has completely changed the way that this container is displayed. What has happened is that our recipe's heading and the two paragraphs (which are all inside the first container div) are now sitting on the same flex row. The reason for this behavior is the addition of Bootstrap's utility class of d-flex, which sets our container to display: flex. With display: flex, the default behavior is to set the flex container to flex-direction: row. This flex direction is implicit, meaning that we don't have to specify it. However, if we want to specify a different value to the flex-direction property, we can use another Bootstrap 4 helper class, for example, flex-row-reverse. So, let's add it to the first div, like this: <div class="container d-flex flex-row-reverse"> Now, if we save and refresh our page, we will see that the heading and the two paragraphs still show on the flex row, but now the last paragraph comes first, on the left edge of the container. It is then followed by the first paragraph, and finally, by the heading itself. There are four ways to specify flex-direction in Bootstrap, that is, by adding one of the following four classes to our wrapping HTML element: flex-row, flex-row-reverse, flex-column, and flex-column-reverse. The first two classes align our flex items horizontally, and the last two classes align our flex items vertically. Back to our recipe, we can see that on the second container, we added the following three classes on the original div (that had only the class of container in step 1): d-flex, flex-column, and flex-lg-row.  Now we can understand what each of these classes does. The d-flex class sets our second container to display: flex. The flex-column class stacks our flex items (the four card components) vertically, with each card taking up the width of the container.  Since Bootstrap is a mobile first framework, the classes we provide also take effect mobile first. If we want to override a class, by convention, we need to provide a breakpoint at which the initial class behavior will be overridden. In this recipe, we want to specify a class, with a specific breakpoint, at which this class will make our cards line up horizontally, rather than stacking them vertically. Because of the number of cards inside our second container, and because of the minimum width that each of these cards takes up, the most obvious solution was to have the cards line up horizontally on resolutions of lg and up. That is why we provide the third class of flex-lg-row to our second container. We could have used any other helper class, such as flex-row, flex-sm-row, flex-md-row, or flex-xl-row, but the one that was actually used made the most sense. Summary In this article, we have covered Customizing the blockquote element with css, Controlling the number of card columns on different breakpoints with SCSS, and Breakpoint-dependent switching of flex direction on card components.  Resources for Article: Further resources on this subject: Web Development with React and Bootstrap [article] Gearing Up for Bootstrap 4 [article] Deep Customization of Bootstrap [article]


Writing Modules

Packt
14 Aug 2017
15 min read
In this article, David Mark Clements, the author of the book, Node.js Cookbook, we will be covering the following points to introduce you to using Node.js  for exploratory data analysis: Node's module system Initializing a module Writing a module Tooling around modules Publishing modules Setting up a private module repository Best practices (For more resources related to this topic, see here.) In idiomatic Node, the module is the fundamental unit of logic. Any typical application or system consists of generic code and application code. As a best practice, generic shareable code should be held in discrete modules, which can be composed together at the application level with minimal amounts of domain-specific logic. In this article, we'll learn how Node's module system works, how to create modules for various scenarios, and how we can reuse and share our code. Scaffolding a module Let's begin our exploration by setting up a typical file and directory structure for a Node module. At the same time, we'll be learning how to automatically generate a package.json file (we refer to this throughout as initializing a folder as a package) and to configure npm (Node's package managing tool) with some defaults, which can then be used as part of the package generation process. In this recipe, we'll create the initial scaffolding for a full Node module. Getting ready Installing Node If we don't already have Node installed, we can go to https://nodejs.org to pick up the latest version for our operating system. If Node is on our system, then so is the npm executable; npm is the default package manager for Node. It's useful for creating, managing, installing, and publishing modules. Before we run any commands, let's tweak the npm configuration a little: npm config set init.author.name "<name here>" This will speed up module creation and ensure that each package we create has a consistent author name, thus avoiding typos and variations of our name. npm stands for... Contrary to popular belief, npm is not an acronym for Node Package Manager; in fact, it stands for npm is Not An Acronym, which is why it's not called NINAA. How to do it… Let's say we want to create a module that converts HSL (hue, saturation, luminosity) values into a hex-based RGB representation, such as will be used in CSS (for example,  #fb4a45 ). The name hsl-to-hex seems good, so let's make a new folder for our module and cd into it: mkdir hsl-to-hex cd hsl-to-hex Every Node module must have a package.json file, which holds metadata about the module. Instead of manually creating a package.json file, we can simply execute the following command in our newly created module folder: npm init This will ask a series of questions. We can hit enter for every question without supplying an answer. Note how the default module name corresponds to the current working directory, and the default author is the init.author.name value we set earlier. An npm init should look like this: Upon completion, we should have a package.json file that looks something like the following: { "name": "hsl-to-hex", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo "Error: no test specified" && exit 1" }, "author": "David Mark Clements", "license": "MIT" } How it works… When Node is installed on our system, npm comes bundled with it. The npm executable is written in JavaScript and runs on Node. The npm config command can be used to permanently alter settings. 
In our case, we changed the init.author.name setting so that npm init would reference it for the default during a module's initialization. We can list all the current configuration settings with npm config ls . Config Docs Refer to https://docs.npmjs.com/misc/config for all possible npm configuration settings. When we run npm init, the answers to prompts are stored in an object, serialized as JSON and then saved to a newly created package.json file in the current directory. There's more… Let's find out some more ways to automatically manage the content of the package.json file via the npm command. Reinitializing Sometimes additional metadata can be available after we've created a module. A typical scenario can arise when we initialize our module as a git repository and add a remote endpoint after creating the module. Git and GitHub If we've not used the git tool and GitHub before, we can refer to http://help.github.com to get started. If we don't have a GitHub account, we can head to http://github.com to get a free account. To demonstrate, let's create a GitHub repository for our module. Head to GitHub and click on the plus symbol in the top-right, then select New repository: Select New repository. Specify the name as hsl-to-hex and click on Create Repository. Back in the Terminal, inside our module folder, we can now run this: echo -e "node_modulesn*.log" > .gitignore git init git add . git commit -m '1st' git remote add origin http://github.com/<username>/hsl-to-hex git push -u origin master Now here comes the magic part; let's initialize again (simply press enter for every question): npm init This time the Git remote we just added was detected and became the default answer for the git repository question. Accepting this default answer meant that the repository, bugs, and homepage fields were added to package.json . A repository field in package.json is an important addition when it comes to publishing open source modules since it will be rendered as a link on the modules information page at http://npmjs.com. A repository link enables potential users to peruse the code prior to installation. Modules that can't be viewed before use are far less likely to be considered viable. Versioning The npm tool supplies other functionalities to help with module creation and management workflow. For instance, the npm version command can allow us to manage our module's version number according to SemVer semantics. SemVer SemVer is a versioning standard. A version consists of three numbers separated by a dot, for example, 2.4.16. The position of a number denotes specific information about the version in comparison to the other versions. The three positions are known as MAJOR.MINOR.PATCH. The PATCH number is increased when changes have been made that don't break the existing functionality or add any new functionality. For instance, a bug fix will be considered a patch. The MINOR number should be increased when new backward compatible functionality is added. For instance, the adding of a method. The MAJOR number increases when backwards-incompatible changes are made. Refer to http://semver.org/ for more information. If we were to a fix a bug, we would want to increase the PATCH number. We can either manually edit the version field in package.json , setting it to 1.0.1, or we can execute the following: npm version patch This will increase the version field in one command. 
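If we ever need to reason about versions programmatically, the widely used semver helper package mirrors these rules. It is an extra package used here purely for illustration (install it with npm install semver); it is not part of the recipe:

// quick SemVer sanity checks using the 'semver' package
const semver = require('semver')
console.log(semver.gt('1.0.1', '1.0.0'))          // true: a patch release is newer
console.log(semver.diff('1.4.2', '2.0.0'))        // 'major'
console.log(semver.satisfies('1.4.2', '^1.1.0'))  // true: a caret range accepts minor and patch bumps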
Additionally, if our module is a Git repository, it will add a commit based on the version (in our case, v1.0.1), which we can then immediately push. When we ran the command, npm output the new version number. However, we can double-check the version number of our module without opening package.json: npm version This will output something similar to the following: { 'hsl-to-hex': '1.0.1', npm: '2.14.17', ares: '1.10.1-DEV', http_parser: '2.6.2', icu: '56.1', modules: '47', node: '5.7.0', openssl: '1.0.2f', uv: '1.8.0', v8: '4.6.85.31', zlib: '1.2.8' } The first field is our module along with its version number. If we added a new backwards-compatible functionality, we can run this: npm version minor Now our version is 1.1.0. Finally, we can run the following for a major version bump: npm version major This sets our modules version to 2.0.0. Since we're just experimenting and didn't make any changes, we should set our version back to 1.0.0. We can do this via the npm command as well: npm version 1.0.0 See also Refer to the following recipes: Writing module code Publishing a module Installing dependencies In most cases, it's most wise to compose a module out of other modules. In this recipe, we will install a dependency. Getting ready For this recipe, all we need is Command Prompt open in the hsl-to-hex folder from the Scaffolding a module recipe. How to do it… Our hsl-to-hex module can be implemented in two steps: Convert the hue degrees, saturation percentage, and luminosity percentage to corresponding red, green, and blue numbers between 0 and 255. Convert the RGB values to HEX. Before we tear into writing an HSL to the RGB algorithm, we should check whether this problem has already been solved. The easiest way to check is to head to http://npmjs.com and perform a search: Oh, look! Somebody already solved this. After some research, we decide that the hsl-to-rgb-for-reals module is the best fit. Ensuring that we are in the hsl-to-hex folder, we can now install our dependency with the following: npm install --save hsl-to-rgb-for-reals Now let's take a look at the bottom of package.json: tail package.json #linux/osx type package.json #windows Tail output should give us this: "bugs": { "url": "https://github.com/davidmarkclements/hsl-to-hex/issues" }, "homepage": "https://github.com/davidmarkclements/hsl-to-hex#readme", "description": "", "dependencies": { "hsl-to-rgb-for-reals": "^1.1.0" } } We can see that the dependency we installed has been added to a dependencies object in the package.json file. How it works… The top two results of the npm search are hsl-to-rgb and hsl-to-rgb-for-reals . The first result is unusable because the author of the package forgot to export it and is unresponsive to fixing it. The hsl-to-rgb-for-reals module is a fixed version of hsl-to-rgb . This situation serves to illustrate the nature of the npm ecosystem. On the one hand, there are over 200,000 modules and counting, and on the other many of these modules are of low value. Nevertheless, the system is also self-healing in that if a module is broken and not fixed by the original maintainer, a second developer often assumes responsibility and publishes a fixed version of the module. When we run npm install in a folder with a package.json file, a node_modules folder is created (if it doesn't already exist). Then, the package is downloaded from the npm registry and saved into a subdirectory of node_modules (for example, node_modules/hsl-to-rgb-for-reals ). 
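With the RGB conversion delegated to the new dependency, the second step of the plan (RGB to hex) is small enough to sketch straight away. The fragment below is only an illustration of the idea, not the book's final index.js:

// turn an [r, g, b] array (each value 0-255) into a CSS hex string such as #fb4a45
function rgbToHex (rgb) {
  return '#' + rgb.map(function (component) {
    // toString(16) drops leading zeros, so pad to two characters
    return ('0' + component.toString(16)).slice(-2)
  }).join('')
}

console.log(rgbToHex([251, 74, 69])) // prints #fb4a45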
npm 2 vs npm 3 Our installed module doesn't have any dependencies of its own. However, if it did, the sub-dependencies would be installed differently depending on whether we're using version 2 or version 3 of npm. Essentially, npm 2 installs dependencies in a tree structure, for instance, node_modules/dep/node_modules/sub-dep-of-dep/node_modules/sub-dep-of-sub-dep. Conversely, npm 3 follows a maximally flat strategy where sub-dependencies are installed in the top level node_modules folder when possible, for example, node_modules/dep, node_modules/sub-dep-of-dep, and node_modules/sub-dep-of-sub-dep. This results in fewer downloads and less disk space usage; npm 3 resorts to a tree structure in cases where there are two versions of a sub-dependency, which is why it's called a maximally flat strategy. Typically, if we've installed Node 4 or above, we'll be using npm version 3. There's more… Let's explore development dependencies, creating module management scripts and installing global modules without requiring root access. Installing development dependencies We usually need some tooling to assist with development and maintenance of a module or application. The ecosystem is full of programming support modules, from linting to testing to browser bundling to transpilation. In general, we don't want consumers of our module to download dependencies they don't need. Similarly, if we're deploying a system built-in node, we don't want to burden the continuous integration and deployment processes with superfluous, pointless work. So, we separate our dependencies into production and development categories. When we use npm --save install <dep>, we're installing a production module. To install a development dependency, we use --save-dev. Let's go ahead and install a linter. JavaScript Standard Style A standard is a JavaScript linter that enforces an unconfigurable ruleset. The premise of this approach is that we should stop using precious time up on bikeshedding about syntax. All the code in this article uses the standard linter, so we'll install that: npm install --save-dev standard semistandard If the absence of semicolons is abhorrent, we can choose to install semistandard instead of standard at this point. The lint rules match those of standard, with the obvious exception of requiring semicolons. Further, any code written using standard can be reformatted to semistandard using the semistandard-format command tool. Simply, run npm -g i semistandard-format to get started with it. Now, let's take a look at the package.json file: { "name": "hsl-to-hex", "version": "1.0.0", "main": "index.js", "scripts": { "test": "echo "Error: no test specified" && exit 1" }, "author": "David Mark Clements", "license": "MIT", "repository": { "type": "git", "url": "git+ssh://git@github.com/davidmarkclements/hsl-to-hex.git" }, "bugs": { "url": "https://github.com/davidmarkclements/hsl-to-hex/issues" }, "homepage": "https://github.com/davidmarkclements/hsl-to- hex#readme", "description": "", "dependencies": { "hsl-to-rgb-for-reals": "^1.1.0" }, "devDependencies": { "standard": "^6.0.8" } } We now have a devDependencies field alongside the dependencies field. When our module is installed as a sub-dependency of another package, only the hsl-to-rgb-for-reals module will be installed while the standard module will be ignored since it's irrelevant to our module's actual implementation. 
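To see where npm actually placed things, we can print the dependency tree at any time; the exact output shape will vary with the npm version in use:

npm ls --depth=0              # direct dependencies (and devDependencies) only
npm ls hsl-to-rgb-for-reals   # show where one particular package ended up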
If this package.json file represented a production system, we could run the install step with the --production flag, as shown: npm install --production Alternatively, this can be set in the production environment with the following command: npm config set production true Currently, we can run our linter using the executable installed in the node_modules/.bin folder. Consider this example: ./node_modules/.bin/standard This is ugly and not at all ideal. Refer to Using npm run scripts for a more elegant approach. Using npm run scripts Our package.json file currently has a scripts property that looks like this: "scripts": { "test": "echo "Error: no test specified" && exit 1" }, Let's edit the package.json file and add another field, called lint, as follows: "scripts": { "test": "echo "Error: no test specified" && exit 1", "lint": "standard" }, Now, as long as we have standard installed as a development dependency of our module (refer to Installing Development Dependencies), we can run the following command to run a lint check on our code: npm run-script lint This can be shortened to the following: npm run lint When we run an npm script, the current directory's node_modules/.bin folder is appended to the execution context's PATH environment variable. This means even if we don't have the standard executable in our usual system PATH, we can reference it in an npm script as if it was in our PATH. Some consider lint checks to be a precursor to tests. Let's alter the scripts.test field, as illustrated: "scripts": { "test": "npm run lint", "lint": "standard" }, Chaining commands Later, we can append other commands to the test script using the double ampersand (&&) to run a chain of checks. For instance, "test": "npm run lint && tap test". Now, let's run the test script: npm run test Since the test script is special, we can simply run this: npm test Eliminating the need for sudo The npm executable can install both the local and global modules. Global modules are mostly installed so to allow command line utilities to be used system wide. On OS X and Linux, the default npm setup requires sudo access to install a module. For example, the following will fail on a typical OS X or Linux system with the default npm setup: npm -g install cute-stack # <-- oh oh needs sudo This is unsuitable for several reasons. Forgetting to use sudo becomes frustrating; we're trusting npm with root access and accidentally using sudo for a local install causes permission problems (particularly with the npm local cache). The prefix setting stores the location for globally installed modules; we can view this with the following: npm config get prefix Usually, the output will be /usr/local . To avoid the use of sudo, all we have to do is set ownership permissions on any subfolders in /usr/local used by npm: sudo chown -R $(whoami) $(npm config get prefix)/{lib/node_modules,bin,share} Now we can install global modules without root access: npm -g install cute-stack # <-- now works without sudo If changing ownership of system folders isn't feasible, we can use a second approach, which involves changing the prefix setting to a folder in our home path: mkdir ~/npm-global npm config set prefix ~/npm-global We'll also need to set our PATH: export PATH=$PATH:~/npm-global/bin source ~/.profile The source essentially refreshes the Terminal environment to reflect the changes we've made. 
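Returning to npm scripts for a moment: npm also runs any script named pre<name> or post<name> automatically around <name>, so the lint-before-test chain could equally be expressed with a pretest hook. The tap test runner below is just an assumed example, not something installed in this recipe:

"scripts": {
  "lint": "standard",
  "pretest": "npm run lint",
  "test": "tap test"
}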
See also Scaffolding a module Writing module code Publishing a module Resources for Article: Further resources on this subject: Understanding and Developing Node Modules [article] Working with Pluginlib, Nodelets, and Gazebo Plugins [article] Basic Website using Node.js and MySQL database [article]

Introduction to Moodle 3

Packt
17 Jul 2017
13 min read
In this article, Ian Wild, the author of the book, Moodle 3.x Developer's Guide will be intoroducing you to Moodle 3.For any organization considering implementing an online learning environment, Moodle is often the number one choice. Key to its success is the free, open source ethos that underpins it. Not only is the Moodle source code fully available to developers, but Moodle itself has been developed to allow for the inclusion of third party plugins. Everything from how users access the platform, the kinds of teaching interactions that are available, through to how attendance and success can be reported – in fact all the key Moodle functionality – can be adapted and enhanced through plugins. (For more resources related to this topic, see here.) What is Moodle? There are three reasons why Moodle has become so important, and much talked about, in the world of online learning, one technical, one philosophical and the third educational. From a technical standpoint Moodle - an acronym for Modular Object Oriented Dynamic Learning Environment – is highly customizable. As a Moodle developer, always remember that the ‘M’ in Moodle stands for modular. If you are faced with a client feature request that demands a feature Moodle doesn’t support then don’t panic. The answer is simple: we create a new custom plugin to implement it. Check out the Moodle Plugins Directory (https://moodle.org/plugins/) for a comprehensive library of supported 3rd party plugins that Moodle developers have created and given back to the community. And this leads to the philosophical reason why Moodle dominates. Free open source software for education Moodle is grounded firmly in a community-based, open source philosophy (see https://en.wikipedia.org/wiki/Open-source_model). But what does this mean for developers? Fundamentally, it means that we have complete access to the source code and, within reason, unfettered access to the people who develop it. Access to the application itself is free – you don’t need to pay to download it and you don’t need to pay to run it. But be aware of what ‘free’ means in this context. Hosting and administration, for example, take time and resources and are very unlikely to be free. As an educational tool, Moodle was developed to support social constructionism (see https://docs.moodle.org/31/en/Pedagogy) – if you are not familiar with this concept then it is essentially suggesting that building an understanding of a concept or idea can be best achieved by interacting with a broad community. The impact on us as Moodle plugin developers is that there is a highly active group of users and developers. Before you begin developing any Moodle plugins come and join us at https://moodle.org. Plugin Development – Authentication In this article, we will be developing a novel plugin that will seamlessly integrate Moodle and the WordPress content management system. Our plugin will authorize users via WordPress when they click on a link to Moodle when on a WordPress page. The plugin discussed in this article has already been released to the Moodle community – check out the Moodle Plugins Directory at https://moodle.org/plugins/auth_wordpress for details. Let us start by learning what Moodle authentication is and how new user accounts are created. Authentication Moodle supports a range of different authentication methods out of the box, each one supported by its own plugin. 
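On disk, each authentication method is a folder under auth/ in the Moodle code base. The layout we will end up with for the plugin built in this article looks roughly like this (a simplified sketch based on the files created later on):

auth/
  none/                           <- the 'no login' plugin we will copy as a template
  wordpress/                      <- our new plugin
    auth.php                      <- the auth_plugin_wordpress class
    config.html                   <- the administrator configuration form
    callback.php                  <- the OAuth callback endpoint
    version.php                   <- version metadata
    lang/en/auth_wordpress.php    <- language strings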
To go to the list of available plugins, from the Administration block, click on Site administration, click Plugins, then click Authentication, and finally click on Manage authentication. The list of currently installed authentication plugins is displayed: Each plugin interfaces with an internal Application Programming Interface (API), the Access API – see the Moodle developer documentation for details here: https://docs.moodle.org/dev/Access_API Getting logged in There are two ways of prompting the Moodle authentication process: Attempting to log in from the log in page. Clicking on a link to a protected resource (i.e. a page or file that you can’t view or download without logging in). For an overview of the process, take a look in the developer documentation at https://docs.moodle.org/dev/Authentication_plugins#Overview_of_Moodle_authentication_process. After checks to determine if an upgrade is required (or if we are partway through the upgrade process), there is a short fragment of code that loads the configured authentication plugins and for each one calls a special method called loginpage_hook(): $authsequence = get_enabled_auth_plugins(true); // auths, in sequence foreach($authsequence as $authname) { $authplugin = get_auth_plugin($authname); $authplugin->loginpage_hook(); } The loginpage_hook() function gives each authentication plugin the chance to intercept the login. Assuming that the login has not been intercepted, the process then continues with a check to ensure the supplied username conforms to the configured standard before calling authenticate_user_login() which, if successful, returns a $user object. OAuth Overview The OAuth authentication mechanism provides secure delegated access. OAuth supports a number of scenarios, including: A client requests access from a server and the server responds with either a ‘confirm’ or ‘deny’. This is called two legged authentication A client requests access from a server and the server, the server then pops up a confirmation dialog so that the user can authorize the access, and then it finally responds with either a ‘confirm’ or ‘deny’. This is called three legged authentication In this article we will be implementing such a mechanism means, in practice, that: An authentication server will only talk to configured clients No passwords are exchanged between server and client – only tokens are exchanged, which are meaningless on their own By default, users need to give permission before resources are accessed Having given an overview, here is the process again described in a little more detail: A new client is configured in the authentication server. A client is allocated a unique client key, along with a secret token (referred to as client secret) The client POSTs an HTTP request to the server (identifying itself using the client key and client secret) and the server responds with a temporary access token. This token is used to request authorization to access protected resources from the server. In this case ‘protected resources’ mean the WordPress API. Access to the WordPress API will allow us to determine details of the currently logged in user. The server responds not with an HTTP response but by POSTing new permanent tokens back to the client via a callback URI (i.e. the server talks to the client directly in order to ensure security). The process ends with the client possessing permanent authorization tokens that can be used to access WP-API functions. 
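In code terms, the three legs map onto a handful of calls that we will meet later in this article. A compressed, comment-only sketch, using the library function names from the recipe, looks like this:

// leg 1: ask WordPress for temporary credentials
//    $tempCredentials = $connection->getRequestToken($callback);
// leg 2: redirect the user to WordPress so they can authorize the request
//    header('Location: ' . $connection->getAuthorizeURL($tempCredentials));
// leg 3: WordPress calls back into our callback.php, which exchanges the
//    temporary token for permanent ones and completes the Moodle login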
Obviously, the most effective way of learning about this process is to implement it so let’s go ahead and do that now. Installing the WordPress OAuth 1.0a server The first step will be to add the OAuth 1.0a server plugin to WordPress. Why not the more recent OAuth 2.0 server plugin? This is because 2.0 only supports https:// and not http://. Also, internally (at least at time of writing) WordPress will only authenticate internally using either OAuth 1.0a or cookies. Log into WordPress as an administrator and, from the Dashboard, hover the mouse over the Plugins menu item and click on Installed Plugins. The Plugins page is displayed. At the top of the page, press the Add New button: As described previously, ensure that you install version 1.0a and not 2.0: Once installed, we need to configure a new client. From the Dashboard menu, hover the mouse over Users and you will see a new Applications menu item has been added. Click on this to display the Registered Applications page. Click the Add New button to display the Add Application page: The Consumer Name is the title for our client that will appear in the Applications page, and Description is a brief explanation of that client to aid with the identification. The Callback is the URI that WordPress will talk to (refer to the outline of the OAuth authentication steps). As we have not yet developed the Moodle/OAuth client end yet you can specify oob in Callback (this stands for ‘out of band’). Once configured, WordPress will generate new OAuth credentials, a Client Key and a Client Secret: Having installed and configured the server end now it’s time to develop the client. Creating a new Moodle auth plugin Before we begin, download the finished plugin from https://github.com/iandavidwild/moodle-auth_wordpress and install it in your local development Moodle instance. The development of a new authentication plugin is described in the developer documentation at https://docs.moodle.org/dev/Authentication_plugins. As described there, let us start with copying the none plugin (the no login authentication method) and using this as a template for our new plugin. In Eclipse, I’m going to copy the none plugin to a new authentication method called wordpress: That done, we need to update the occurrences of auth_none to auth_wordpress. Firstly, rename /auth/wordpress/lang/en/auth_none.php to auth_wordpress.php. Then, in auth.php we need to rename the class auth_plugin_none to auth_plugin_wordpress. As described, the Eclipse Find/Replace function is great for updating scripts: Next, we need to update version information in version.php. Update all the relevant names, descriptions and dates. Finally, we can check that Moodle recognises our new plugin by navigating to the Site administration menu and clicking on Notifications. If installation is successful, our new plugin will be listed on the Available authentication plugins page: Configuration Let us start with considering the plugin configuration. We will need to allow a Moodle administrator to configure the following: The URL of the WordPress installation The client key and client secret provided by WordPress There is very little flexibility in the design of an authentication plugin configuration page so at this stage, rather than creating a wireframe drawing and having this agreed with the client, we can simply go ahead and write the code. The configuration page is defined in /config.html. Remember to start declaring the relevant language strings in /lang/en/auth_wordpress.php. 
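A minimal sketch of that language file might look like the following; apart from pluginname, which Moodle expects, the exact string keys are our own choice:

<?php
$string['pluginname'] = 'WordPress OAuth';
$string['auth_wordpressdescription'] = 'Authenticate against a WordPress site using OAuth 1.0a.';
$string['wordpress_host'] = 'WordPress URL';
$string['client_key'] = 'Client key';
$string['client_secret'] = 'Client secret';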
Configuration settings themselves will be managed by the Moodle framework by calling our plugin’s process_config() method. Here is the declaration: /** * Processes and stores configuration data for this authentication plugin. * * @return @bool */ function process_config($config) { // Set to defaults if undefined if (!isset($config->wordpress_host)) { $config->wordpress_host = ‘‘; } if (!isset($config->client_key)) { $config->client_key = ‘‘; } if (!isset($config->client_secret)) { $config->client_secret = ‘‘; } set_config(‘wordpress_host’, trim($config->wordpress_host), ‘auth/wordpress’); set_config(‘client_key’, trim($config->client_key), ‘auth/wordpress’); set_config(‘client_secret’, trim($config->client_secret), ‘auth/wordpress’); return true; } Having dealt with configuration, now let us start managing the actual OAuth process. Handling OAuth calls Rather than go into the details of how we can send HTTP requests to WordPress let’s use a third party library to do this work. The code I’m going to use is based on Abraham Williamstwitteroauth library (see https://github.com/abraham/twitteroauth). In Eclipse, take a look at the files OAuth.php and BasicOAuth.php for details. To use the library, you will need to add the following lines to the top of /wordpress/auth.php: require_once($CFG->dirroot . ‘/auth/wordpress/OAuth.php’); require_once($CFG->dirroot . ‘/auth/wordpress/BasicOAuth.php’); use OAuth1BasicOauth; Let’s now start work on handling the Moodle login event. Handling the Moodle login event When a user clicks on link to a protected resource Moodle calls loginpage_hook() in each enabled authentication plugin. To handle this, let us first implement loginpage_hook(). We need to add the following lines to auth.php: /** * Will get called before the login page is shown. * */ function loginpage_hook() { $client_key = $this->config->client_key; $client_secret = $this->config->client_secret; $wordpress_host = $this->config->wordpress_host; if( (strlen($wordpress_host) > 0) && (strlen($client_key) > 0) && (strlen($client_secret) > 0) ) { // kick ff the authentication process $connection = new BasicOAuth($client_key, $client_secret); // strip the trailing slashes from the end of the host URL to avoid any confusion (and to make the code easier to read) $wordpress_host = rtrim($wordpress_host, ‘/’); $connection->host = $wordpress_host . “/wp-json”; $connection->requestTokenURL = $wordpress_host . “/oauth1/request”; $callback = $CFG->wwwroot . ‘/auth/wordpress/callback.php’; $tempCredentials = $connection->getRequestToken($callback); // Store temporary credentials in the $_SESSION }// if } This implements the first leg of the authentication process and the variable $tempCredentials will now contain a temporary access token. We will need to store these temporary credentials and then call on the server to ask the user to authorize the connection (leg two). Add the following lines immediately after the // Store temporary credentials in the $_SESSION comment: $_SESSION[‘oauth_token’] = $tempCredentials[‘oauth_token’]; $_SESSION[‘oauth_token_secret’] = $tempCredentials[‘oauth_token_secret’]; $connection->authorizeURL = $wordpress_host . “/oauth1/authorize”; $redirect_url = $connection->getAuthorizeURL($tempCredentials); header(‘Location: ‘ . $redirect_url); die; Next, we need to implement the OAuth callback. 
Create a new script called callback.php. The callback.php script will need to: sanity check the data being passed back from WordPress and fail gracefully if there is an issue; get the wordpress authentication plugin instance (an instance of auth_plugin_wordpress); and call a handler method that will perform the authentication (which we will then need to implement). The script is simple, short, and available here: https://github.com/iandavidwild/moodle-auth_wordpress/blob/MOODLE_31_STABLE/callback.php Now, in the auth.php script, we need to add the callback_handler() method to auth_plugin_wordpress. You can check out the code on GitHub: visit https://github.com/iandavidwild/moodle-auth_wordpress/blob/MOODLE_31_STABLE/auth.php and scroll down to the callback_handler() method. Lastly, let us add a fragment of code to the loginpage_hook() method that allows us to turn off WordPress authentication in config.php. Add the following to the very beginning of the loginpage_hook() function: global $CFG; if(isset($CFG->disablewordpressauth) && ($CFG->disablewordpressauth == true)) { return; } Summary In this article, we introduced the Moodle learning platform, investigated the open source philosophy that underpins it, and saw how Moodle's functionality can be extended and enhanced through plugins. We took a pre-existing plugin and used it as the starting point for a new WordPress authentication module, which allows a user logged into WordPress to automatically log into Moodle. To do so we implemented three-legged OAuth 1.0a WordPress to Moodle authentication. Check out the complete code at https://github.com/iandavidwild/moodle-auth_wordpress/blob/MOODLE_31_STABLE/callback.php. More information on the plugin described in this article is available from the main Moodle website at https://moodle.org/plugins/auth_wordpress. Resources for Article: Further resources on this subject: Introduction to Moodle [article] Moodle for Online Communities [article] An Introduction to Moodle 3 and MoodleCloud [article]


Chart Model and Draggable and Droppable Directives

Packt
06 Jul 2017
9 min read
In this article by Sudheer Jonna and Oleg Varaksin, the author of the book Learning Angular UI Development with PrimeNG, we will see how to work with the chart model and learn about draggable and droppable directives. (For more resources related to this topic, see here.) Working with the chart model The chart component provides a visual representation of data using chart on a web page. PrimeNG chart components are based on charts.js 2.x library (as a dependency), which is a HTML5 open source library. The chart model is based on UIChart class name and it can be represented with element name as p-chart. The chart components will work efficiently by attaching the chart model file (chart.js) to the project root folder entry point. For example, in this case it would be index.html file. It can be configured as either CDN resource, local resource, or CLI configuration. CDN resource configuration: <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.bu ndle.min.js"></script> Angular CLI configuration: "scripts": [ "../node_modules/chart.js/dist/Chart.js", //..others ] More about the chart configuration and options will be available in the official documentation of the chartJS library (http://www.chartjs.org/). Chart types The chart type is defined through the type property. It supports six types of charts with an options such as pie, bar, line, doughnut, polarArea, and radar. Each type has it's own format of data and it can be supplied through the data property. For example, in doughnut chart, the type should refer to doughnut and the data property should bind to the data options as shown here: <p-chart type="doughnut" [data]="doughnutdata"></p-chart> The component class has to define data the options with labels and datasets as follows: this.doughnutdata = { labels: ['PrimeNG', 'PrimeUI', 'PrimeReact'], datasets: [ { data: [3000, 1000, 2000], backgroundColor: [ "#6544a9", "#51cc00", "#5d4361" ], hoverBackgroundColor: [ "#6544a9", "#51cc00", "#5d4361" ] } ] }; Along with labels and data options, other properties related to skinning can be applied too. The legends are closable by default (that is, if you want to visualize only particular data variants then it is possible by collapsing legends which are not required). The collapsed legend is represented with a strike line. The respective data component will be disappeared after click operation on legend. Customization Each series is customized on a dataset basis but you can customize the general or common options via the options attribute. For example, the line chart which customize the default options would be as follows: <p-chart type="line" [data]="linedata" [options]="options"></p-chart> The component class needs to define chart options with customized title and legend properties as follows: this.options = { title: { display: true, text: 'PrimeNG vs PrimeUI', fontSize: 16 }, legend: { position: 'bottom' } }; As per the preceding example, the title option is customized with a dynamic title, font size, and conditional display of the title. Where as legend attribute is used to place the legend in top, left, bottom, and right positions. The default legend position is top. In this example, the legend position is bottom. 
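For completeness, the linedata bound in the template above could be defined in the component class along these lines. The labels and figures are illustrative, and the styling keys come straight from the underlying charts.js dataset options:

this.linedata = {
  labels: ['2014', '2015', '2016', '2017'],
  datasets: [
    {
      label: 'PrimeNG',
      data: [12, 19, 27, 34],
      fill: false,
      borderColor: '#6544a9'
    },
    {
      label: 'PrimeUI',
      data: [10, 14, 16, 18],
      fill: false,
      borderColor: '#51cc00'
    }
  ]
};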
The line chart with preceding customized options would results as a snapshot shown here: The Chart API also supports the couple of utility methods as shown here: refresh Redraws the graph with new data reinit Destroys the existing graph and then creates it again generateLegend Returns an HTML string of a legend for that chart Events The chart component provides a click event on data sets to process the select data using onDataSelect event callback. Let us take a line chart example with onDataSelect event callback by passing an event object as follows: <p-chart type="line" [data]="linedata" (onDataSelect)="selectData($event)"></p-chart> In the component class, an event callback is used to display selected data information in a message format as shown: selectData(event: any) { this.msgs = []; this.msgs.push({ severity: 'info', summary: 'Data Selected', 'detail': this.linedata.datasets[event.element._datasetIndex] .data[event.element._index] }); } In the preceding event callback (onDataSelect), we used an index of the dataset to display information. There are also many other options from an event object: event.element = Selected element event.dataset = Selected dataset event.element._datasetIndex = Index of the dataset in data event.element._index = Index of the data in dataset Learning Draggable and Droppable directives Drag and drop is an action, which means grabbing an object and dragging it to a different location. The components capable of being dragged and dropped enrich the web and make a solid base for modern UI patterns. The drag and drop utilities in PrimeNG allow us to create draggable and droppable user interfaces efficiently. They make it abstract for the developers to deal with the implementation details at the browser level. In this section, we will learn about pDraggable and pDroppable directives. We will introduce a DataGrid component containing some imaginary documents and make these documents draggable in order to drop them onto a recycle bin. The recycle bin is implemented as DataTable component which shows properties of dropped documents. For the purpose of better understanding the developed code, a picture comes first: This picture shows what happens after dragging and dropping three documents. The complete demo application with instructions is available on GitHub at https://github.com/ova2/angular-development-with-primeng/tree/master/chapter9/dragdrop. Draggable pDraggable is attached to an element to add a drag behavior. The value of the pDraggable attribute is required--it defines the scope to match draggables with droppables. By default, the whole element is draggable. We can restrict the draggable area by applying the dragHandle attribute. The value of dragHandle can be any CSS selector. In the DataGrid with available documents, we only made the panel's header draggable: <p-dataGrid [value]="availableDocs"> <p-header> Available Documents </p-header> <ng-template let-doc pTemplate="item"> <div class="ui-g-12 ui-md-4" pDraggable="docs" dragHandle=".uipanel- titlebar" dragEffect="move" (onDragStart)="dragStart($event, doc)" (onDragEnd)="dragEnd($event)"> <p-panel [header]="doc.title" [style]="{'text-align':'center'}"> <img src="/assets/data/images/docs/{{doc.extension}}.png"> </p-panel> </div> </ng-template> </p-dataGrid> The draggable element can fire three events when dragging process begins, proceeds, and ends. These are onDragStart, onDrag, and onDragEnd respectively. 
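All three hooks can be wired up in the template like this; the dragging() handler is a hypothetical extra, as the demo application itself only uses the first and the last:

<div class="ui-g-12 ui-md-4" pDraggable="docs"
  (onDragStart)="dragStart($event, doc)"
  (onDrag)="dragging($event)"
  (onDragEnd)="dragEnd($event)">
  <!-- panel content goes here -->
</div>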
In the component class, we buffer the dragged document at the beginning and reset it at the end of the dragging process. This task is done in two callbacks: dragStart and dragEnd. class DragDropComponent { availableDocs: Document[]; deletedDocs: Document[]; draggedDoc: Document; constructor(private docService: DocumentService) { } ngOnInit() { this.deletedDocs = []; this.docService.getDocuments().subscribe((docs: Document[]) => this.availableDocs = docs); } dragStart(event: any, doc: Document) { this.draggedDoc = doc; } dragEnd(event: any) { this.draggedDoc = null; } ... } In the shown code, we used the Document interface with the following properties: interface Document { id: string; title: string; size: number; creator: string; creationDate: Date; extension: string; } In the demo application, we set the cursor to move when the mouse is moved over any panel's header. This trick provides a better visual feedback for draggable area: body .ui-panel .ui-panel-titlebar { cursor: move; } We can also set the dragEffect attribute to specifies the effect that is allowed for a drag operation. Possible values are none, copy, move, link, copyMove, copyLink, linkMove, and all. Refer the official documentation to read more details at https://developer.mozilla.org/en-US/docs/Web/API/DataTransfer/effectAllowed. Droppable pDroppable is attached to an element to add a drop behavior. The value of the pDroppable attribute should have the same scope as pDraggable. Droppable scope can also be an array to accept multiple droppables. The droppable element can fire four events. Event name Description onDragEnter Invoked when a draggable element enters the drop area onDragOver Invoked when a draggable element is being dragged over the drop area onDrop Invoked when a draggable is dropped onto the drop area onDragLeave Invoked when a draggable element leaves the drop area In the demo application, the whole code of the droppable area looks as follows: <div pDroppable="docs" (onDrop)="drop($event)" [ngClass]="{'dragged-doc': draggedDoc}"> <p-dataTable [value]="deletedDocs"> <p-header>Recycle Bin</p-header> <p-column field="title" header="Title"></p-column> <p-column field="size" header="Size (bytes)"></p-column> <p-column field="creator" header="Creator"></p-column> <p-column field="creationDate" header="Creation Date"> <ng-template let-col let-doc="rowData" pTemplate="body"> {{doc[col.field].toLocaleDateString()}} </ng-template> </p-column> </p-dataTable> </div> Whenever a document is being dragged and dropped into the recycle bin, the dropped document is removed from the list of all available documents and added to the list of deleted documents. This happens in the onDrop callback: drop(event: any) { if (this.draggedDoc) { // add draggable element to the deleted documents list this.deletedDocs = [...this.deletedDocs, this.draggedDoc]; // remove draggable element from the available documents list this.availableDocs = this.availableDocs.filter((e: Document) => e.id !== this.draggedDoc.id); this.draggedDoc = null; } } Both available and deleted documents are updated by creating new arrays instead of manipulating existing arrays. This is necessary in data iteration components to force Angular run change detection. Manipulating existing arrays would not run change detection and the UI would not be updated. The Recycle Bin area gets a red border while dragging any panel with document. We have achieved this highlighting by setting ngClass as follows: [ngClass]="{'dragged-doc': draggedDoc}". 
The style class dragged-doc is enabled when the draggedDoc object is set. The style class is defined as follows: .dragged-doc { border: solid 2px red; } Summary In this article we started with the chart components: first the chart model API, and then how to create charts programmatically using the various chart types such as pie, bar, line, doughnut, polar area, and radar charts. We then looked at the features of the Draggable and Droppable directives. Resources for Article: Further resources on this subject: Building Components Using Angular [article] Get Familiar with Angular [article] Writing a Blog Application with Node.js and AngularJS [article]


Tangled Web? Not At All!

Packt
22 Jun 2017
20 min read
In this article by Clif Flynt, the author of the book Linux Shell Scripting Cookbook - Third Edition, we can see a collection of shell-scripting recipes that talk to services on the Internet. This articleis intended to help readers understand how to interact with the Web using shell scripts to automate tasks such as collecting and parsing data from web pages. This is discussed using POST and GET to web pages, writing clients to web services. (For more resources related to this topic, see here.) In this article, we will cover the following recipes: Downloading a web page as plain text Parsing data from a website Image crawler and downloader Web photo album generator Twitter command-line client Tracking changes to a website Posting to a web page and reading response Downloading a video from the Internet The Web has become the face of technology and the central access point for data processing. The primary interface to the web is via a browser that's designed for interactive use. That's great for searching and reading articles on the web, but you can also do a lot to automate your interactions with shell scripts. For instance, instead of checking a website daily to see if your favorite blogger has added a new blog, you can automate the check and be informed when there's new information. Similarly, twitter is the current hot technology for getting up-to-the-minute information. But if I subscribe to my local newspaper's twitter account because I want the local news, twitter will send me all news, including high-school sports that I don't care about. With a shell script, I can grab the tweets and customize my filters to match my desires, not rely on their filters. Downloading a web page as plain text Web pages are simply text with HTML tags, JavaScript and CSS. The HTML tags define the content of a web page, which we can parse for specific content. Bash scripts can parse web pages. An HTML file can be viewed in a web browser to see it properly formatted. Parsing a text document is simpler than parsing HTML data because we aren't required to strip off the HTML tags. Lynx is a command-line web browser which download a web page as plaintext. Getting Ready Lynx is not installed in all distributions, but is available via the package manager. # yum install lynx or apt-get install lynx How to do it... Let's download the webpage view, in ASCII character representation, in a text file by using the -dump flag with the lynx command: $ lynx URL -dump > webpage_as_text.txt This command will list all the hyperlinks <a href="link"> separately under a heading References, as the footer of the text output. This lets us parse links separately with regular expressions. For example: $lynx -dump http://google.com > plain_text_page.txt You can see the plaintext version of text by using the cat command: $ cat plain_text_page.txt Search [1]Images [2]Maps [3]Play [4]YouTube [5]News [6]Gmail [7]Drive [8]More » [9]Web History | [10]Settings | [11]Sign in [12]St. Patrick's Day 2017 _______________________________________________________ Google Search I'm Feeling Lucky [13]Advanced search [14]Language tools [15]Advertising Programs [16]Business Solutions [17]+Google [18]About Google © 2017 - [19]Privacy - [20]Terms References Parsing data from a website The lynx, sed, and awk commands can be used to mine data from websites. How to do it... 
Let's go through the commands used to parse details of actresses from the website: $ lynx -dump -nolist http://www.johntorres.net/BoxOfficefemaleList.html | grep -o "Rank-.*" | sed -e 's/ *Rank-([0-9]*) *(.*)/1t2/' | sort -nk 1 > actresslist.txt The output is: # Only 3 entries shown. All others omitted due to space limits 1 Keira Knightley 2 Natalie Portman 3 Monica Bellucci How it works... Lynx is a command-line web browser—it can dump a text version of a website as we would see in a web browser, instead of returning the raw html as wget or cURL do. This saves the step of removing HTML tags. The -nolist option shows the links without numbers. Parsing and formatting the lines that contain Rank is done with sed: sed -e 's/ *Rank-([0-9]*) *(.*)/1t2/' These lines are then sorted according to the ranks. See also The Downloading a web page as plain text recipe in this article explains the lynx command. Image crawler and downloader Image crawlers download all the images that appear in a web page. Instead of going through the HTML page by hand to pick the images, we can use a script to identify the images and download them automatically. How to do it... This Bash script will identify and download the images from a web page: #!/bin/bash #Desc: Images downloader #Filename: img_downloader.sh if [ $# -ne 3 ]; then echo "Usage: $0 URL -d DIRECTORY" exit -1 fi while [ -n $1 ] do case $1 in -d) shift; directory=$1; shift ;; *) url=$1; shift;; esac done mkdir -p $directory; baseurl=$(echo $url | egrep -o "https?://[a-z.-]+") echo Downloading $url curl -s $url | egrep -o "<imgsrc=[^>]*>" | sed's/<imgsrc="([^"]*).*/1/g' | sed"s,^/,$baseurl/,"> /tmp/$$.list cd $directory; while read filename; do echo Downloading $filename curl -s -O "$filename" --silent done < /tmp/$$.list An example usage is: $ ./img_downloader.sh http://www.flickr.com/search/?q=linux -d images How it works... The image downloader script reads an HTML page, strips out all tags except <img>, parses src="URL" from the <img> tag, and downloads them to the specified directory. This script accepts a web page URL and the destination directory as command-line arguments. The [ $# -ne 3 ] statement checks whether the total number of arguments to the script is three, otherwise it exits and returns a usage example. Otherwise, this code parses the URL and destination directory: while [ -n "$1" ] do case $1 in -d) shift; directory=$1; shift ;; *) url=${url:-$1}; shift;; esac done The while loop runs until all the arguments are processed. The shift command shifts arguments to the left so that $1 will take the next argument's value; that is, $2, and so on. Hence, we can evaluate all arguments through $1 itself. The case statement checks the first argument ($1). If that matches -d, the next argument must be a directory name, so the arguments are shifted and the directory name is saved. If the argument is any other string it is a URL. The advantage of parsing arguments in this way is that we can place the -d argument anywhere in the command line: $ ./img_downloader.sh -d DIR URL Or: $ ./img_downloader.sh URL -d DIR The egrep -o "<imgsrc=[^>]*>"code will print only the matching strings, which are the <img> tags including their attributes. The phrase [^>]*matches all the characters except the closing >, that is, <imgsrc="image.jpg">. sed's/<imgsrc="([^"]*).*/1/g' extracts the url from the string src="url". There are two types of image source paths—relative and absolute. Absolute paths contain full URLs that start with http:// or https://. 
Relative URLs start with / or with the image name itself. An example of an absolute URL is http://example.com/image.jpg. An example of a relative URL is /image.jpg. For relative URLs, the starting / should be replaced with the base URL to transform it to http://example.com/image.jpg. The script initializes the baseurl by extracting it from the initial url with the command: baseurl=$(echo $url | egrep -o "https?://[a-z.-]+") The output of the previously described sed command is piped into another sed command to replace a leading / with the baseurl, and the results are saved in a file named for the script's PID: /tmp/$$.list. sed "s,^/,$baseurl/," > /tmp/$$.list The final while loop iterates through each line of the list and uses curl to download the images. The --silent argument is used with curl to avoid extra progress messages from being printed on the screen. Web photo album generator Web developers frequently create photo albums of full sized and thumbnail images. When a thumbnail is clicked, a large version of the picture is displayed. This requires resizing and placing many images. These actions can be automated with a simple bash script. The script creates thumbnails, places them in the correct directories, and generates the code fragment for the <img> tags automatically. Getting ready This script uses a for loop to iterate over every image in the current directory. The usual Bash utilities such as cat and convert (from the ImageMagick package) are used. These will generate an HTML album, using all the images, in index.html. How to do it... This Bash script will generate an HTML album page: #!/bin/bash #Filename: generate_album.sh #Description: Create a photo album using images in current directory echo "Creating album.." mkdir -p thumbs cat <<EOF1 > index.html <html> <head> <style> body { width:470px; margin:auto; border: 1px dashed grey; padding:10px; } img { margin:5px; border: 1px solid black; } </style> </head> <body> <center><h1> #Album title </h1></center> <p> EOF1 for img in *.jpg; do convert "$img" -resize "100x" "thumbs/$img" echo "<a href=\"$img\">" >> index.html echo "<img src=\"thumbs/$img\" title=\"$img\" /></a>" >> index.html done cat <<EOF2 >> index.html </p> </body> </html> EOF2 echo Album generated to index.html Run the script as follows: $ ./generate_album.sh Creating album.. Album generated to index.html How it works... The initial part of the script is used to write the header part of the HTML page. The following script redirects all the contents up to EOF1 to index.html: cat <<EOF1 > index.html contents... EOF1 The header includes the HTML and CSS styling. for img in *.jpg; do iterates over the file names and evaluates the body of the loop. convert "$img" -resize "100x" "thumbs/$img" creates images of 100 px width as thumbnails.
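ImageMagick accepts several other geometry forms that can be handy when generating thumbnails; the variants below are illustrative and not part of the recipe:

convert "$img" -resize "x100" "thumbs/$img"          # fix the height instead of the width
convert "$img" -resize "100x100" "thumbs/$img"       # fit inside a 100x100 box, keeping the aspect ratio
convert "$img" -thumbnail "100x100>" "thumbs/$img"   # only shrink larger images and strip metadata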
The following statements generate the required <img> tag and appends it to index.html: echo "<a href="$img">" echo "<imgsrc="thumbs/$img" title="$img" /></a>">> index.html Finally, the footer HTML tags are appended with cat as done in the first part of the script. Twitter command-line client Twitter is the hottest micro-blogging platform, as well as the latest buzz of the online social media now. We can use Twitter API to read tweets on our timeline from the command line! Twitter is the hottest micro-blogging platform, as well as the latest buzz of the online social media now. We can use Twitter API to read tweets on our timeline from the command line! Let's see how to do it. Getting ready Recently, Twitter stopped allowing people to log in by using plain HTTP Authentication, so we must use OAuth to authenticate ourselves.  Perform the following steps: Download the bash-oauth library from https://github.com/livibetter/bash-oauth/archive/master.zip, and unzip it to any directory. Go to that directory and then inside the subdirectory bash-oauth-master, run make install-all as root.Go to https://apps.twitter.com/ and register a new app. This will make it possible to use OAuth. After registering the new app, go to your app's settings and change Access type to Read and Write. Now, go to the Details section of the app and note two things—Consumer Key and Consumer Secret, so that you can substitute these in the script we are going to write. Great, now let's write the script that uses this. How to do it... This Bash script uses the OAuth library to read tweets or send your own updates. #!/bin/bash #Filename: twitter.sh #Description: Basic twitter client oauth_consumer_key=YOUR_CONSUMER_KEY oauth_consumer_scret=YOUR_CONSUMER_SECRET config_file=~/.$oauth_consumer_key-$oauth_consumer_secret-rc if [[ "$1" != "read" ]] && [[ "$1" != "tweet" ]]; then echo -e "Usage: $0 tweet status_messagen ORn $0 readn" exit -1; fi #source /usr/local/bin/TwitterOAuth.sh source bash-oauth-master/TwitterOAuth.sh TO_init if [ ! -e $config_file ]; then TO_access_token_helper if (( $? == 0 )); then echo oauth_token=${TO_ret[0]} > $config_file echo oauth_token_secret=${TO_ret[1]} >> $config_file fi fi source $config_file if [[ "$1" = "read" ]]; then TO_statuses_home_timeline'''YOUR_TWEET_NAME''10' echo $TO_ret | sed's/,"/n/g' | sed's/":/~/' | awk -F~ '{} {if ($1 == "text") {txt=$2;} else if ($1 == "screen_name") printf("From: %sn Tweet: %snn", $2, txt);} {}' | tr'"''' elif [[ "$1" = "tweet" ]]; then shift TO_statuses_update''"$@" echo 'Tweeted :)' fi Run the script as follows: $./twitter.sh read Please go to the following link to get the PIN: https://api.twitter.com/oauth/authorize?oauth_token=LONG_TOKEN_STRING PIN: PIN_FROM_WEBSITE Now you can create, edit and present Slides offline. - by A Googler $./twitter.sh tweet "I am reading Packt Shell Scripting Cookbook" Tweeted :) $./twitter.sh read | head -2 From: Clif Flynt Tweet: I am reading Packt Shell Scripting Cookbook How it works... First of all, we use the source command to include the TwitterOAuth.sh library, so we can use its functions to access Twitter. The TO_init function initializes the library. Every app needs to get an OAuth token and token secret the first time it is used. If these are not present, we use the library function TO_access_token_helper to acquire them. Once we have the tokens, we save them to a config file so we can simply source it the next time the script is run. The library function TO_statuses_home_timeline fetches the tweets from Twitter. 
This data is retuned as a single long string in JSON format, which starts like this: [{"created_at":"Thu Nov 10 14:45:20 +0000 "016","id":7...9,"id_str":"7...9","text":"Dining... Each tweet starts with the created_at tag and includes a text and a screen_nametag. The script will extract the text and screen name data and display only those fields. The script assigns the long string to the variable TO_ret. The JSON format uses quoted strings for the key and may or may not quote the value. The key/value pairs are separated by commas, and the key and value are separated by a colon :. The first sed to replaces each," character set with a newline, making each key/value a separate line. These lines are piped to another sed command to replace each occurrence of ": with a tilde ~ which creates a line like screen_name~"Clif_Flynt" The final awk script reads each line. The -F~ option splits the line into fields at the tilde, so $1 is the key and $2 is the value. The if command checks for text or screen_name. The text is first in the tweet, but it's easier to read if we report the sender first, so the script saves a text return until it sees a screen_name, then prints the current value of $2 and the saved value of the text. The TO_statuses_updatelibrary function generates a tweet. The empty first parameter defines our message as being in the default format, and the message is a part of the second parameter. Tracking changes to a website Tracking website changes is useful to both web developers and users. Checking a website manually impractical, but a change tracking script can be run at regular intervals. When a change occurs, it generate a notification. Getting ready Tracking changes in terms of Bash scripting means fetching websites at different times and taking the difference by using the diff command. We can use curl and diff to do this. How to do it... This bash script combines different commands, to track changes in a webpage: #!/bin/bash #Filename: change_track.sh #Desc: Script to track changes to webpage if [ $# -ne 1 ]; then echo -e "$Usage: $0 URLn" exit 1; fi first_time=0 # Not first time if [ ! -e "last.html" ]; then first_time=1 # Set it is first time run fi curl --silent $1 -o recent.html if [ $first_time -ne 1 ]; then changes=$(diff -u last.html recent.html) if [ -n "$changes" ]; then echo -e "Changes:n" echo "$changes" else echo -e "nWebsite has no changes" fi else echo "[First run] Archiving.." fi cp recent.html last.html Let's look at the output of the track_changes.sh script on a website you control. First we'll see the output when a web page is unchanged, and then after making changes. Note that you should change MyWebSite.org to your website name. First, run the following command: $ ./track_changes.sh http://www.MyWebSite.org [First run] Archiving.. Second, run the command again. $ ./track_changes.sh http://www.MyWebSite.org Website has no changes Third, run the following command after making changes to the web page: $ ./track_changes.sh http://www.MyWebSite.org Changes: --- last.html 2010-08-01 07:29:15.000000000 +0200 +++ recent.html 2010-08-01 07:29:43.000000000 +0200 @@ -1,3 +1,4 @@ +added line :) data How it works... The script checks whether the script is running for the first time by using [ ! -e "last.html" ];. If last.html doesn't exist, it means that it is the first time and, the webpage must be downloaded and saved as last.html. If it is not the first time, it downloads the new copy recent.html and checks the difference with the diff utility. 
Any changes will be displayed as diff output. Finally, recent.html is copied to last.html. Note that changing the website you're checking will generate a huge diff file the first time you examine it. If you need to track multiple pages, you can create a folder for each website you intend to watch.

Posting to a web page and reading the response

POST and GET are two types of requests in HTTP to send information to, or retrieve information from, a website. In a GET request, we send parameters (name-value pairs) through the webpage URL itself. The POST command places the key/value pairs in the message body instead of the URL. POST is commonly used when submitting long forms or to conceal the information submitted from a casual glance.

Getting ready

For this recipe, we will use the sample guestbook website included in the tclhttpd package. You can download tclhttpd from http://sourceforge.net/projects/tclhttpd and then run it on your local system to create a local webserver. The guestbook page requests a name and URL, which it adds to a guestbook to show who has visited a site when the user clicks the Add me to your guestbook button. This process can be automated with a single curl (or wget) command.

How to do it...

Download the tclhttpd package and cd to the bin folder. Start the tclhttpd daemon with this command:

tclsh httpd.tcl

The format to POST and read the HTML response from a generic website resembles this:

$ curl URL -d "postvar=postdata2&postvar2=postdata2"

Consider the following example:

$ curl http://127.0.0.1:8015/guestbook/newguest.html -d "name=Clif&url=www.noucorp.com&http=www.noucorp.com"

curl prints a response page like this:

<HTML>
<Head>
<title>Guestbook Registration Confirmed</title>
</Head>
<Body BGCOLOR=white TEXT=black>
<a href="www.noucorp.com">www.noucorp.com</a>
<DL>
<DT>Name
<DD>Clif
<DT>URL
<DD>
</DL>
www.noucorp.com
</Body>

-d is the argument used for posting. The string argument for -d is similar to the GET request semantics. var=value pairs are to be delimited by &. You can POST the data using wget by using --post-data "string". For example:

$ wget http://127.0.0.1:8015/guestbook/newguest.cgi --post-data "name=Clif&url=www.noucorp.com&http=www.noucorp.com" -O output.html

Use the same format as cURL for name-value pairs. The text in output.html is the same as that returned by the cURL command. The string to the post arguments (for example, to -d or --post-data) should always be given in quotes. If quotes are not used, & is interpreted by the shell to indicate that this should be a background process.

How it works...

If you look at the website source (use the View Source option from the web browser), you will see an HTML form defined, similar to the following code:

<form action="newguest.cgi" method="post">
<ul>
<li> Name: <input type="text" name="name" size="40">
<li> Url: <input type="text" name="url" size="40">
<input type="submit">
</ul>
</form>

Here, newguest.cgi is the target URL. When the user enters the details and clicks on the Submit button, the name and url inputs are sent to newguest.cgi as a POST request, and the response page is returned to the browser.

Downloading a video from the internet

There are many reasons for downloading a video. If you are on a metered service, you might want to download videos during off-hours when the rates are cheaper. You might want to watch videos where the bandwidth doesn't support streaming, or you might just want to make certain that you always have that video of cute cats to show your friends.
Getting ready

One program for downloading videos is youtube-dl. This is not included in most distributions and the repositories may not be up to date, so it's best to go to the youtube-dl main site at http://yt-dl.org. You'll find links and information on that page for downloading and installing youtube-dl.

How to do it…

Using youtube-dl is easy. Open your browser and find a video you like. Then copy/paste that URL to the youtube-dl command line:

youtube-dl https://www.youtube.com/watch?v=AJrsl3fHQ74

While youtube-dl is downloading the file, it will generate a status line on your terminal.

How it works…

The youtube-dl program works by sending a GET message to the server, just as a browser would do. It masquerades as a browser so that YouTube or other video providers will download a video as if the device were streaming. The --list-formats (-F) option will list the formats in which a video is available, and the --format (-f) option will specify which format to download. This is useful if you want to download a higher-resolution video than your internet connection can reliably stream.

Summary

In this article we learned how to download and parse website data, send data to forms, and automate website-usage tasks and similar activities. We can automate many activities that we perform interactively through a browser with a few lines of scripting.

Resources for Article: Further resources on this subject: Linux Shell Scripting – various recipes to help you [article] Linux Shell Script: Tips and Tricks [article] Linux Shell Script: Monitoring Activities [article]
What are Microservices?

Packt
20 Jun 2017
12 min read
In this article written by Gaurav Kumar Aroraa, Lalit Kale, Kanwar Manish, authors of the book Building Microservices with .NET Core, we will start with a brief introduction. Then, we will define its predecessors: monolithic architecture and service-oriented architecture (SOA). After this, we will see how microservices fare against both SOA and the monolithic architecture. We will then compare the advantages and disadvantages of each one of these architectural styles. This will enable us to identify the right scenario for these styles. We will understand the problems that arise from having a layered monolithic architecture. We will discuss the solutions available to these problems in the monolithic world. At the end, we will be able to break down a monolithic application into a microservice architecture. We will cover the following topics in this article: Origin of microservices Discussing microservices (For more resources related to this topic, see here.) Origin of microservices The term microservices was used for the first time in mid-2011 at a workshop of software architects. In March 2012, James Lewis presented some of his ideas about microservices. By the end of 2013, various groups from the IT industry started having discussions on microservices, and by 2014, it had become popular enough to be considered a serious contender for large enterprises. There is no official introduction available for microservices. The understanding of the term is purely based on the use cases and discussions held in the past. We will discuss this in detail, but before that, let's check out the definition of microservices as per Wikipedia (https://en.wikipedia.org/wiki/Microservices), which sums it up as: Microservices is a specialization of and implementation approach for SOA used to build flexible, independently deployable software systems. In 2014, James Lewis and Martin Fowler came together and provided a few real-world examples and presented microservices (refer to http://martinfowler.com/microservices/) in their own words and further detailed it as follows: The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies. It is very important that you see all the attributes James and Martin defined here. They defined it as an architectural style that developers could utilize to develop a single application with the business logic spread across a bunch of small services, each having their own persistent storage functionality. Also, note its attributes: it can be independently deployable, can run in its own process, is a lightweight communication mechanism, and can be written in different programming languages. We want to emphasize this specific definition since it is the crux of the whole concept. And as we move along, it will come together by the time we finish this book. Discussing microservices Until now, we have gone through a few definitions of microservices; now, let's discuss microservices in detail. In short, a microservice architecture removes most of the drawbacks of SOA architectures.  
Slicing your application into a number of services is neither SOA nor microservices. However, combining service design and best practices from the SOA world along with a few emerging practices, such as isolated deployment, semantic versioning, providing lightweight services, and service discovery in polyglot programming, is microservices. We implement microservices to satisfy business features and implement them with reduced time to market and greater flexibility. Before we move on to understand the architecture, let's discuss the two important architectures that have led to its existence: The monolithic architecture style SOA Most of us would be aware of the scenario where during the life cycle of an enterprise application development, a suitable architectural style is decided. Then, at various stages, the initial pattern is further improved and adapted with changes that cater to various challenges, such as deployment complexity, large code base, and scalability issues. This is exactly how the monolithic architecture style evolved into SOA, further leading up to microservices. Monolithic architecture The monolithic architectural style is a traditional architecture type and has been widely used in the industry. The term "monolithic" is not new and is borrowed from the Unix world. In Unix, most of the commands exist as a standalone program whose functionality is not dependent on any other program. As seen in the succeeding image, we can have different components in the application such as: User interface: This handles all of the user interaction while responding with HTML or JSON or any other preferred data interchange format (in the case of web services). Business logic: All the business rules applied to the input being received in the form of user input, events, and database exist here. Database access: This houses the complete functionality for accessing the database for the purpose of querying and persisting objects. A widely accepted rule is that it is utilized through business modules and never directly through user-facing components. Software built using this architecture is self-contained. We can imagine a single .NET assembly that contains various components, as described in the following image: As the software is self-contained here, its components are interconnected and interdependent. Even a simple code change in one of the modules may break a major functionality in other modules. This would result in a scenario where we'd need to test the whole application. With the business depending critically on its enterprise application frameworks, this amount of time could prove to be very critical. Having all the components tightly coupled poses another challenge: whenever we execute or compile such software, all the components should be available or the build will fail; refer to the preceding image that represents a monolithic architecture and is a self-contained or a single .NET assembly project. However, monolithic architectures might also have multiple assemblies. This means that even though a business layer (assembly, data access layer assembly, and so on) is separated, at run time, all of them will come together and run as one process.  A user interface depends on other components' direct sale and inventory in a manner similar to all other components that depend upon each other. In this scenario, we will not be able to execute this project in the absence of any one of these components. 
The process of upgrading any one of these components will be more complex as we may have to consider other components that require code changes too. This results in more development time than required for the actual change. Deploying such an application will become another challenge. During deployment, we will have to make sure that each and every component is deployed properly; otherwise, we may end up facing a lot of issues in our production environments. If we develop an application using the monolithic architecture style, as discussed previously, we might face the following challenges: Large code base: This is a scenario where the code lines outnumber the comments by a great margin. As components are interconnected, we will have to bear with a repetitive code base. Too many business modules: This is in regard to modules within the same system. Code base complexity: This results in a higher chance of code breaking due to the fix required in other modules or services. Complex code deployment: You may come across minor changes that would require whole system deployment. One module failure affecting the whole system: This is in regard to modules that depend on each other. Scalability: This is required for the entire system and not just the modules in it. Intermodule dependency: This is due to tight coupling. Spiraling development time: This is due to code complexity and interdependency. Inability to easily adapt to a new technology: In this case, the entire system would need to be upgraded. As discussed earlier, if we want to reduce development time, ease of deployment, and improve maintainability of software for enterprise applications, we should avoid the traditional or monolithic architecture. Service-oriented architecture In the previous section, we discussed the monolithic architecture and its limitations. We also discussed why it does not fit into our enterprise application requirements. To overcome these issues, we should go with some modular approach where we can separate the components such that they should come out of the self-contained or single .NET assembly. The main difference between SOA & monolithic is not one or multiple assembly. But as the service in SOA runs as separate process, SOA scales better compared to monolithic. Let's discuss the modular architecture, that is, SOA. This is a famous architectural style using which the enterprise applications are designed with a collection of services as its base. These services may be RESTful or ASMX Web services. To understand SOA in more detail, let's discuss "service" first. What is service? Service, in this case, is an essential concept of SOA. It can be a piece of code, program, or software that provides some functionality to other system components. This piece of code can interact directly with the database or indirectly through another service. Furthermore, it can be consumed by clients directly, where the client may either be a website, desktop app, mobile app, or any other device app. Refer to the following diagram: Service refers to a type of functionality exposed for consumption by other systems (generally referred to as clients/client applications). As mentioned earlier, it can be represented by a piece of code, program, or software. Such services are exposed over the HTTP transport protocol as a general practice. However, the HTTP protocol is not a limiting factor, and a protocol can be picked as deemed fit for the scenario. 
In the following image, Service – direct selling is directly interacting with Database, and three different clients, namely Web, Desktop, and Mobile, are consuming the service. On the other hand, we have clients consuming Service – partner selling, which is interacting with Service – channel partners for database access. A product selling service is a set of services that interacts with client applications and provides database access directly or through another service, in this case, Service – Channel partner.  In the case of Service – direct selling, shown in the preceding example, it is providing some functionality to a Web Store, a desktop application, and a mobile application. This service is further interacting with the database for various tasks, namely fetching data, persisting data, and so on. Normally, services interact with other systems via some communication channel, generally the HTTP protocol. These services may or may not be deployed on the same or single servers. In the preceding image, we have projected an SOA example scenario. There are many fine points to note here, so let's get started. Firstly, our services can be spread across different physical machines. Here, Service-direct selling is hosted on two separate machines. It is a possible scenario that instead of the entire business functionality, only a part of it will reside on Server 1 and the remaining on Server 2. Similarly, Service – partner selling appears to be having the same arrangement on Server 3 and Server 4. However, it doesn't stop Service – channel partners being hosted as a complete set on both the servers: Server 5 and Server 6. A system that uses a service or multiple services in a fashion mentioned in the preceding figure is called an SOA. We will discuss SOA in detail in the following sections. Let's recall the monolithic architecture. In this case, we did not use it because it restricts code reusability; it is a self-contained assembly, and all the components are interconnected and interdependent. For deployment, in this case, we will have to deploy our complete project after we select the SOA (refer to preceding image and subsequent discussion). Now, because of the use of this architectural style, we have the benefit of code reusability and easy deployment. Let's examine this in the wake of the preceding figure: Reusability: Multiple clients can consume the service. The service can also be simultaneously consumed by other services. For example, OrderService is consumed by web and mobile clients. Now, OrderService can also be used by the Reporting Dashboard UI. Stateless: Services do not persist any state between requests from the client, that is, the service doesn't know, nor care, that the subsequent request has come from the client that has/hasn't made the previous request. Contract-based: Interfaces make it technology-agnostic on both sides of implementation and consumption. It also serves to make it immune to the code updates in the underlying functionality. Scalability: A system can be scaled up; SOA can be individually clustered with appropriate load balancing. Upgradation: It is very easy to roll out new functionalities or introduce new versions of the existing functionality. The system doesn't stop you from keeping multiple versions of the same business functionality. Summary In this article, we discussed what the microservice architectural style is in detail, its history, and how it differs from its predecessors: monolithic and SOA. 
We further defined the various challenges that monolithic faces when dealing with large systems. Scalability and reusability are some definite advantages that SOA provides over monolithic. We also discussed the limitations of the monolithic architecture, including scaling problems, by implementing a real-life monolithic application. The microservice architecture style resolves all these issues by reducing code interdependency and isolating the dataset size that any one of the microservices works upon. We utilized dependency injection and database refactoring for this. We further explored automation, CI, and deployment. These easily allow the development team to let the business sponsor choose what industry trends to respond to first. This results in cost benefits, better business response, timely technology adoption, effective scaling, and removal of human dependency. Resources for Article: Further resources on this subject: Microservices and Service Oriented Architecture [article] Breaking into Microservices Architecture [article] Microservices – Brave New World [article]
CORS in Node.js

Packt
20 Jun 2017
14 min read
In this article by Randall Goya, and Rajesh Gunasundaram the author of the book CORS Essentials, Node.js is a cross-platform JavaScript runtime environment that executes JavaScript code at server side. This enables to have a unified language across the web application development. JavaScript becomes the unified language that runs both on client side and server side. (For more resources related to this topic, see here.) In this article we will learn about: Node.js is a JavaScript platform for developing server-side web applications. Node.js can provide the web server for other frameworks including Express.js, AngularJS, Backbone,js, Ember.js and others. Some other JavaScript frameworks such as ReactJS, Ember.js and Socket.IO may also use Node.js as the web server. Isomorphic JavaScript can add server-side functionality for client-side frameworks. JavaScript frameworks are evolving rapidly. This article reviews some of the current techniques, and syntax specific for some frameworks. Make sure to check the documentation for the project to discover the latest techniques. Understanding CORS concepts, you may create your own solution, because JavaScript is a loosely structured language. All the examples are based on the fundamentals of CORS, with allowed origin(s), methods, and headers such as Content-Type, or preflight, that may be required according to the CORS specification. JavaScript frameworks are very popular JavaScript is sometimes called the lingua franca of the Internet, because it is cross-platform and supported by many devices. It is also a loosely-structured language, which makes it possible to craft solutions for many types of applications. Sometimes an entire application is built in JavaScript. Frequently JavaScript provides a client-side front-end for applications built with Symfony, Content Management Systems such as Drupal, and other back-end frameworks. Node.js is server-side JavaScript and provides a web server as an alternative to Apache, IIS, Nginx and other traditional web servers. Introduction to Node.js Node.js is an open-source and cross-platform library that enables in developing server-side web applications. Applications will be written using JavaScript in Node.js can run on many operating systems, including OS X, Microsoft Windows, Linux, and many others. Node.js provides a non-blocking I/O and an event-driven architecture designed to optimize an application's performance and scalability for real-time web applications. The biggest difference between PHP and Node.js is that PHP is a blocking language, where commands execute only after the previous command has completed, while Node.js is a non-blocking language where commands execute in parallel, and use callbacks to signal completion. Node.js can move files, payloads from services, and data asynchronously, without waiting for some command to complete, which improves performance. Most JS frameworks that work with Node.js use the concept of routes to manage pages and other parts of the application. Each route may have its own set of configurations. For example, CORS may be enabled only for a specific page or route. Node.js loads modules for extending functionality via the npm package manager. The developer selects which packages to load with npm, which reduces bloat. The developer community creates a large number of npm packages created for specific functions. JXcore is a fork of Node.js targeting mobile devices and IoTs (Internet of Things devices). 
JXcore can use both Google V8 and Mozilla SpiderMonkey as its JavaScript engine. JXcore can run Node applications on iOS devices using Mozilla SpiderMonkey. MEAN is a popular JavaScript software stack with MongoDB (a NoSQL database), Express.js and AngularJS, all of which run on a Node.js server. JavaScript frameworks that work with Node.js Node.js provides a server for other popular JS frameworks, including AngularJS, Express.js. Backbone.js, Socket.IO, and Connect.js. ReactJS was designed to run in the client browser, but it is often combined with a Node.js server. As we shall see in the following descriptions, these frameworks are not necessarily exclusive, and are often combined in applications. Express.js is a Node.js server framework Express.js is a Node.js web application server framework, designed for building single-page, multi-page, and hybrid web applications. It is considered the "standard" server framework for Node.js. The package is installed with the command npm install express –save. AngularJS extends static HTML with dynamic views HTML was designed for static content, not for dynamic views. AngularJS extends HTML syntax with custom tag attributes. It provides model–view–controller (MVC) and model–view–viewmodel (MVVM) architectures in a front-end client-side framework.  AngularJS is often combined with a Node.js server and other JS frameworks. AngularJS runs client-side and Express.js runs on the server, therefore Express.js is considered more secure for functions such as validating user input, which can be tampered client-side. AngularJS applications can use the Express.js framework to connect to databases, for example in the MEAN stack. Connect.js provides middleware for Node.js requests Connect.js is a JavaScript framework providing middleware to handle requests in Node.js applications. Connect.js provides middleware to handle Express.js and cookie sessions, to provide parsers for the HTML body and cookies, and to create vhosts (virtual hosts) and error handlers, and to override methods. Backbone.js often uses a Node.js server Backbone.js is a JavaScript framework with a RESTful JSON interface and is based on the model–view–presenter (MVP) application design. It is designed for developing single-page web applications, and for keeping various parts of web applications (for example, multiple clients and the server) synchronized. Backbone depends on Underscore.js, plus jQuery for use of all the available fetures. Backbone often uses a Node.js server, for example to connect to data storage. ReactJS handles user interfaces ReactJS is a JavaScript library for creating user interfaces while addressing challenges encountered in developing single-page applications where data changes over time. React handles the user interface in model–view–controller (MVC) architecture. ReactJS typically runs client-side and can be combined with AngularJS. Although ReactJS was designed to run client-side, it can also be used server-side in conjunction with Node.js. PayPal and Netflix leverage the server-side rendering of ReactJS known as Isomorphic ReactJS. There are React-based add-ons that take care of the server-side parts of a web application. Socket.IO uses WebSockets for realtime event-driven applications Socket.IO is a JavaScript library for event-driven web applications using the WebSocket protocol ,with realtime, bi-directional communication between web clients and servers. It has two parts: a client-side library that runs in the browser, and a server-side library for Node.js. 
Although it can be used as simply a wrapper for WebSocket, it provides many more features, including broadcasting to multiple sockets, storing data associated with each client, and asynchronous I/O. Socket.IO provides better security than WebSocket alone, since allowed domains must be specified for its server. Ember.js can use Node.js Ember is another popular JavaScript framework with routing that uses Moustache templates. It can run on a Node.js server, or also with Express.js. Ember can also be combined with Rack, a component of Ruby On Rails (ROR). Ember Data is a library for  modeling data in Ember.js applications. CORS in Express.js The following code adds the Access-Control-Allow-Origin and Access-Control-Allow-Headers headers globally to all requests on all routes in an Express.js application. A route is a path in the Express.js application, for example /user for a user page. app.all sets the configuration for all routes in the application. Specific HTTP requests such as GET or POST are handled by app.get and app.post. app.all('*', function(req, res, next) { res.header("Access-Control-Allow-Origin", "*"); res.header("Access-Control-Allow-Headers", "X-Requested-With"); next(); }); app.get('/', function(req, res, next) { // Handle GET for this route }); app.post('/', function(req, res, next) { // Handle the POST for this route }); For better security, consider limiting the allowed origin to a single domain, or adding some additional code to validate or limit the domain(s) that are allowed. Also, consider limiting sending the headers only for routes that require CORS by replacing app.all with a more specific route and method. The following code only sends the CORS headers on a GET request on the route/user, and only allows the request from http://www.localdomain.com. app.get('/user', function(req, res, next) { res.header("Access-Control-Allow-Origin", "http://www.localdomain.com"); res.header("Access-Control-Allow-Headers", "X-Requested-With"); next(); }); Since this is JavaScript code, you may dynamically manage the values of routes, methods, and domains via variables, instead of hard-coding the values. CORS npm for Express.js using Connect.js middleware Connect.js provides middleware to handle requests in Express.js. You can use Node Package Manager (npm) to install a package that enables CORS in Express.js with Connect.js: npm install cors The package offers flexible options, which should be familiar from the CORS specification, including using credentials and preflight. It provides dynamic ways to validate an origin domain using a function or a regular expression, and handler functions to process preflight. Configuration options for CORS npm origin: Configures the Access-Control-Allow-Origin CORS header with a string containing the full URL and protocol making the request, for example http://localdomain.com. Possible values for origin: Default value TRUE uses req.header('Origin') to determine the origin and CORS is enabled. When set to FALSE CORS is disabled. It can be set to a function with the request origin as the first parameter and a callback function as the second parameter. It can be a regular expression, for example /localdomain.com$/, or an array of regular expressions and/or strings to match. methods: Sets the Access-Control-Allow-Methods CORS header. Possible values for methods: A comma-delimited string of HTTP methods, for example GET, POST An array of HTTP methods, for example ['GET', 'PUT', 'POST'] allowedHeaders: Sets the Access-Control-Allow-Headers CORS header. 
Possible values for allowedHeaders: A comma-delimited string of  allowed headers, for example "Content-Type, Authorization'' An array of allowed headers, for example ['Content-Type', 'Authorization'] If unspecified, it defaults to the value specified in the request's Access-Control-Request-Headers header exposedHeaders: Sets the Access-Control-Expose-Headers header. Possible values for exposedHeaders: A comma-delimited string of exposed headers, for example 'Content-Range, X-Content-Range' An array of exposed headers, for example ['Content-Range', 'X-Content-Range'] If unspecified, no custom headers are exposed credentials: Sets the Access-Control-Allow-Credentials CORS header. Possible values for credentials: TRUE—passes the header for preflight FALSE or unspecified—omit the header, no preflight maxAge: Sets the Access-Control-Allow-Max-Age header. Possible values for maxAge An integer value in milliseconds for TTL to cache the request If unspecified, the request is not cached preflightContinue: Passes the CORS preflight response to the next handler. The default configuration without setting any values allows all origins and methods without preflight. Keep in mind that complex CORS requests other than GET, HEAD, POST will fail without preflight, so make sure you enable preflight in the configuration when using them. Without setting any values, the configuration defaults to: { "origin": "*", "methods": "GET,HEAD,PUT,PATCH,POST,DELETE", "preflightContinue": false } Code examples for CORS npm These examples demonstrate the flexibility of CORS npm for specific configurations. Note that the express and cors packages are always required. Enable CORS globally for all origins and all routes The simplest implementation of CORS npm enables CORS for all origins and all requests. The following example enables CORS for an arbitrary route " /product/:id" for a GET request by telling the entire app to use CORS for all routes: var express = require('express') , cors = require('cors') , app = express(); app.use(cors()); // this tells the app to use CORS for all re-quests and all routes app.get('/product/:id', function(req, res, next){ res.json({msg: 'CORS is enabled for all origins'}); }); app.listen(80, function(){ console.log('CORS is enabled on the web server listening on port 80'); }); Allow CORS for dynamic origins for a specific route The following example uses corsOptions to check if the domain making the request is in the whitelisted array with a callback function, which returns null if it doesn't find a match. This CORS option is passed to the route "product/:id" which is the only route that has CORS enabled. The allowed origins can be dynamic by changing the value of the variable "whitelist." 
var express = require('express') , cors = require('cors') , app = express(); // define the whitelisted domains and set the CORS options to check them var whitelist = ['http://localdomain.com', 'http://localdomain-other.com']; var corsOptions = { origin: function(origin, callback){ var originWhitelisted = whitelist.indexOf(origin) !== -1; callback(null, originWhitelisted); } }; // add the CORS options to a specific route /product/:id for a GET request app.get('/product/:id', cors(corsOptions), function(req, res, next){ res.json({msg: 'A whitelisted domain matches and CORS is enabled for route product/:id'}); }); // log that CORS is enabled on the server app.listen(80, function(){ console.log(''CORS is enabled on the web server listening on port 80''); }); You may set different CORS options for specific routes, or sets of routes, by defining the options assigned to unique variable names, for example "corsUserOptions." Pass the specific configuration variable to each route that requires that set of options. Enabling CORS preflight CORS requests that use a HTTP method other than GET, HEAD, POST (for example DELETE), or that use custom headers, are considered complex and require a preflight request before proceeding with the CORS requests. Enable preflight by adding an OPTIONS handler for the route: var express = require('express') , cors = require('cors') , app = express(); // add the OPTIONS handler app.options('/products/:id', cors()); // options is added to the route /products/:id // use the OPTIONS handler for the DELETE method on the route /products/:id app.del('/products/:id', cors(), function(req, res, next){ res.json({msg: 'CORS is enabled with preflight on the route '/products/:id' for the DELETE method for all origins!'}); }); app.listen(80, function(){ console.log('CORS is enabled on the web server listening on port 80''); }); You can enable preflight globally on all routes with the wildcard: app.options('*', cors()); Configuring CORS asynchronously One of the reasons to use NodeJS frameworks is to take advantage of their asynchronous abilities, handling multiple tasks at the same time. Here we use a callback function corsDelegateOptions and add it to the cors parameter passed to the route /products/:id. The callback function can handle multiple requests asynchronously. var express = require('express') , cors = require('cors') , app = express(); // define the allowed origins stored in a variable var whitelist = ['http://example1.com', 'http://example2.com']; // create the callback function var corsDelegateOptions = function(req, callback){ var corsOptions; if(whitelist.indexOf(req.header('Origin')) !== -1){ corsOptions = { origin: true }; // the requested origin in the CORS response matches and is allowed }else{ corsOptions = { origin: false }; // the requested origin in the CORS response doesn't match, and CORS is disabled for this request } callback(null, corsOptions); // callback expects two parameters: error and options }; // add the callback function to the cors parameter for the route /products/:id for a GET request app.get('/products/:id', cors(corsDelegateOptions), function(req, res, next){ res.json({msg: ''A whitelisted domain matches and CORS is enabled for route product/:id'}); }); app.listen(80, function(){ console.log('CORS is enabled on the web server listening on port 80''); }); Summary We have learned important stuffs of applying CORS in Node.js. 
Let us have a quick recap of what we have learnt: Node.js provides a web server built with JavaScript, and can be combined with many other JS frameworks as the application server. Although some frameworks have specific syntax for implementing CORS, they all follow the CORS specification by specifying allowed origin(s) and method(s). More robust frameworks allow custom headers such as Content-Type, and preflight when required for complex CORS requests. JavaScript frameworks may depend on the jQuery XHR object, which must be configured properly to allow Cross-Origin requests. JavaScript frameworks are evolving rapidly. The examples here may become outdated. Always refer to the project documentation for up-to-date information. With knowledge of the CORS specification, you may create your own techniques using JavaScript based on these examples, depending on the specific needs of your application. https://en.wikipedia.org/wiki/Node.js Resources for Article: Further resources on this subject: An Introduction to Node.js Design Patterns [article] Five common questions for .NET/Java developers learning JavaScript and Node.js [article] API with MongoDB and Node.js [article]
Understanding the Basics of Gulp

Packt
19 Jun 2017
15 min read
In this article written by Travis Maynard, author of the book Getting Started with Gulp - Second Edition, we will take a look at the basics of gulp and how it works. Understanding some of the basic principles and philosophies behind the tool, it's plugin system will assist you as you begin writing your own gulpfiles. We'll start by taking a look at the engine behind gulp and then follow up by breaking down the inner workings of gulp itself. By the end of this article, you will be prepared to begin writing your own gulpfile. (For more resources related to this topic, see here.) Installing node.js and npm As you learned in the introduction, node.js and npm are the engines that work behind the scenes that allow us to operate gulp and keep track of any plugins we decide to use. Downloading and installing node.js For Mac and Windows, the installation is quite simple. All you need to do is navigate over to http://nodejs.org and click on the big green install button. Once the installer has finished downloading, run the application and it will install both node.js and npm. For Linux, there are a couple more steps, but don't worry; with your newly acquired command-line skills, it should be relatively simple. To install node.js and npm on Linux, you'll need to run the following three commands in Terminal: sudo add-apt-repository ppa:chris-lea/node.js sudo apt-get update sudo apt-get install nodejs The details of these commands are outside the scope of this book, but just for reference, they add a repository to the list of available packages, update the total list of packages, and then install the application from the repository we added. Verify the installation To confirm that our installation was successful, try the following command in your command line: node -v If node.js is successfully installed, node -v will output a version number on the next line of your command line. Now, let's do the same with npm: npm -v Like before, if your installation was successful, npm -v should output the version number of npm on the next line. The versions displayed in this screenshot reflect the latest Long Term Support (LTS) release currently available as of this writing. This may differ from the version that you have installed depending on when you're reading this. It's always suggested that you use the latest LTS release when possible. The -v  command is a common flag used by most command-line applications to quickly display their version number. This is very useful to debug version issues while using command-line applications. Creating a package.json file Having npm in our workflow will make installing packages incredibly easy; however, we should look ahead and establish a way to keep track of all the packages (or dependencies) that we use in our projects. Keeping track of dependencies is very important to keep your workflow consistent across development environments. Node.js uses a file named package.json to store information about your project, and npm uses this same file to manage all of the package dependencies your project requires to run properly. In any project using gulp, it is always a great practice to create this file ahead of time so that you can easily populate your dependency list as you are installing packages or plugins. 
To create the package.json file, we will need to run npm's built in init action using the following command: npm init Now, using the preceding command, the terminal will show the following output: Your command line will prompt you several times asking for basic information about the project, such as the project name, author, and the version number. You can accept the defaults for these fields by simply pressing the Enter key at each prompt. Most of this information is used primarily on the npm website if a developer decides to publish a node.js package. For our purposes, we will just use it to initialize the file so that we can properly add our dependencies as we move forward. The screenshot for the preceding command is as follows: Installing gulp With npm installed and our package.json file created, we are now ready to begin installing node.js packages. The first and most important package we will install is none other than gulp itself. Locating gulp Locating and gathering information about node.js packages is very simple, thanks to the npm registry. The npm registry is a companion website that keeps track of all the published node.js modules, including gulp and gulp plugins. You can find this registry at http://npmjs.org. Take a moment to visit the npm registry and do a quick search for gulp. The listing page for each node.js module will give you detailed information on each project, including the author, version number, and dependencies. Additionally, it also features a small snippet of command-line code that you can use to install the package along with readme information that will outline basic usage of the package and other useful information. Installing gulp locally Before we install gulp, make sure you are in your project's root directory, gulp-book, using the cd and ls commands you learned earlier. If you ever need to brush up on any of the standard commands, feel free to take a moment to step back and review as we progress through the book. To install packages with npm, we will follow a similar pattern to the ones we've used previously. Since we will be covering both versions 3.x and 4.x in this book, we'll demonstrate installing both: For installing gulp 3.x, you can use the following: npm install --save-dev gulp For installing gulp 4.x, you can use the following: npm install --save-dev gulpjs/gulp#4.0 This command is quite different from the 3.x command because this command is installing the latest development release directly from GitHub. Since the 4.x version is still being actively developed, this is the only way to install it at the time of writing this book. Once released, you will be able to run the previous command to without installing from GitHub. Upon executing the command, it will result in output similar to the following: To break this down, let's examine each piece of this command to better understand how npm works: npm: This is the application we are running install: This is the action that we want the program to run. In this case, we are instructing npm to install something in our local folder --save-dev: This is a flag that tells npm to add this module to the dev dependencies list in our package.json file gulp: This is the package we would like to install Additionally, npm has a –-save flag that saves the module to the list of dependencies instead of devDependencies. These dependency lists are used to separate the modules that a project depends on to run, and the modules a project depends on when in development. 
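To make the distinction concrete, here is a rough sketch of what a package.json file might look like after installing gulp with the --save-dev flag; the name, version number, and exact gulp version shown here are placeholder assumptions rather than values generated for your project:

{
  "name": "gulp-book",
  "version": "1.0.0",
  "devDependencies": {
    "gulp": "^3.9.1"
  }
}

If the same module had been installed with --save instead, it would appear under a dependencies section rather than devDependencies.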
Since we are using gulp to assist us in development, we will always use the --save-dev flag throughout the book. So, this command will use npm to contact the npm registry, and it will install gulp to our local gulp-book directory. After using this command, you will note that a new folder has been created that is named node_modules. It is where node.js and npm store all of the installed packages and dependencies of your project. Take a look at the following screenshot: Installing gulp-cli globally For many of the packages that we install, this will be all that is needed. With gulp, we must install a companion module gulp-cli globally so that we can use the gulp command from anywhere in our filesystem. To install gulp-cli globally, use the following command: npm install -g gulp-cli In this command, not much has changed compared to the original command where we installed the gulp package locally. We've only added a -g flag to the command, which instructs npm to install the package globally. On Windows, your console window should be opened under an administrator account in order to install an npm package globally. At first, this can be a little confusing, and for many packages it won't apply. Similar build systems actually separate these usages into two different packages that must be installed separately; once that is installed globally for command-line use and another installed locally in your project. Gulp was created so that both of these usages could be combined into a single package, and, based on where you install it, it could operate in different ways. Anatomy of a gulpfile Before we can begin writing tasks, we should take a look at the anatomy and structure of a gulpfile. Examining the code of a gulpfile will allow us to better understand what is happening as we run our tasks. Gulp started with four main methods:.task(), .src(), .watch(), and .dest(). The release of version 4.x introduced additional methods such as: .series() and .parallel(). In addition to the gulp API methods, each task will also make use of the node.js .pipe() method. This small list of methods is all that is needed to understand how to begin writing basic tasks. They each represent a specific purpose and will act as the building blocks of our gulpfile. The task() method The .task() method is the basic wrapper for which we create our tasks. Its syntax is .task(string, function). It takes two arguments—string value representing the name of the task and a function that will contain the code you wish to execute upon running that task. The src() method The .src() method is our input or how we gain access to the source files that we plan on modifying. It accepts either a single glob string or an array of glob strings as an argument. Globs are a pattern that we can use to make our paths more dynamic. When using globs, we can match an entire set of files with a single string using wildcard characters as opposed to listing them all separately. The syntax is for this method is .src(string || array).  The watch() method The .watch() method is used to specifically look for changes in our files. This will allow us to keep gulp running as we code so that we don't need to rerun gulp any time we need to process our tasks. This syntax is different between the 3.x and 4.x version. For version 3.x the syntax is—.watch(string || array, array) with the first argument being our paths/globs to watch and the second argument being the array of task names that need to be run when those files change. 
For version 4.x the syntax has changed a bit to allow for two new methods that provide more explicit control of the order in which tasks are executed. When using 4.x instead of passing in an array as the second argument, we will use either the .series() or .parallel() method like so—.watch(string || array, gulp.series() || gulp.parallel()). The dest() method The dest() method is used to set the output destination of your processed file. Most often, this will be used to output our data into a build or dist folder that will be either shared as a library or accessed by your application. The syntax for this method is .dest(string). The pipe() method The .pipe() method will allow us to pipe together smaller single-purpose plugins or applications into a pipechain. This is what gives us full control of the order in which we would need to process our files. The syntax for this method is .pipe(function). The parallel() and series() methods The parallel and series methods were added in version 4.x as a way to easily control whether your tasks are run together all at once or in a sequence one after the other. This is important if one of your tasks requires that other tasks complete before it can be ran successfully. When using these methods the arguments will be the string names of your tasks separated by a comma. The syntax for these methods is—.series(tasks) and .parallel(tasks); Understanding these methods will take you far, as these are the core elements of building your tasks. Next, we will need to put these methods together and explain how they all interact with one another to create a gulp task. Including modules/plugins When writing a gulpfile, you will always start by including the modules or plugins you are going to use in your tasks. These can be both gulp plugins or node.js modules, based on what your needs are. Gulp plugins are small node.js applications built for use inside of gulp to provide a single-purpose action that can be chained together to create complex operations for your data. Node.js modules serve a broader purpose and can be used with gulp or independently. Next, we can open our gulpfile.js file and add the following code: // Load Node Modules/Plugins var gulp = require('gulp'); var concat = require('gulp-concat'); var uglify = require('gulp-uglify'); The gulpfile.js file will look as shown in the following screenshot: In this code, we have included gulp and two gulp plugins: gulp-concat and gulp-uglify. As you can now see, including a plugin into your gulpfile is quite easy. After we install each module or plugin using npm, you simply use node.js' require() function and pass it in the name of the module. You then assign it to a new variable so that you can use it throughout your gulpfile. This is node.js' way of handling modularity, and because a gulpfile is essentially a small node.js application, it adopts this practice as well. Writing a task All tasks in gulp share a common structure. Having reviewed the five methods at the beginning of this section, you will already be familiar with most of it. Some tasks might end up being larger than others, but they still follow the same pattern. To better illustrate how they work, let's examine a bare skeleton of a task. This skeleton is the basic bone structure of each task we will be creating. Studying this structure will make it incredibly simple to understand how parts of gulp work together to create a task. 
An example of a sample task is as follows: gulp.task(name, function() { return gulp.src(path) .pipe(plugin) .pipe(plugin) .pipe(gulp.dest(path)); }); In the first line, we use the new gulp variable that we created a moment ago and access the .task() method. This creates a new task in our gulpfile. As you learned earlier, the task method accepts two arguments: a task name as a string and a callback function that will contain the actions we wish to run when this task is executed. Inside the callback function, we reference the gulp variable once more and then use the .src() method to provide the input to our task. As you learned earlier, the source method accepts a path or an array of paths to the files that we wish to process. Next, we have a series of three .pipe() methods. In each of these pipe methods, we will specify which plugin we would like to use. This grouping of pipes is what we call our pipechain. The data that we have provided gulp with in our source method will flow through our pipechain to be modified by each piped plugin that it passes through. The order of the pipe methods is entirely up to you. This gives you a great deal of control in how and when your data is modified. You may have noticed that the final pipe is a bit different. At the end of our pipechain, we have to tell gulp to move our modified file somewhere. This is where the .dest() method comes into play. As we mentioned earlier, the destination method accepts a path that sets the destination of the processed file as it reaches the end of our pipechain. If .src() is our input, then .dest() is our output. Reflection To wrap up, take a moment to look at a finished gulpfile and reflect on the information that we just covered. This is the completed gulpfile that we will be creating from scratch, so don't worry if you still feel lost. This is just an opportunity to recognize the patterns and syntaxes that we have been studying so far. We will begin creating this file step by step: // Load Node Modules/Plugins var gulp = require('gulp'); var concat = require('gulp-concat'); var uglify = require('gulp-uglify'); // Process Styles gulp.task('styles', function() {     return gulp.src('css/*.css')         .pipe(concat('all.css'))         .pipe(gulp.dest('dist/')); }); // Process Scripts gulp.task('scripts', function() {     return gulp.src('js/*.js')         .pipe(concat('all.js'))         .pipe(uglify())         .pipe(gulp.dest('dist/')); }); // Watch Files For Changes gulp.task('watch', function() {     gulp.watch('css/*.css', 'styles');     gulp.watch('js/*.js', 'scripts'); }); // Default Task gulp.task('default', gulp.parallel('styles', 'scripts', 'watch')); The gulpfile.js file will look as follows: Summary In this article, you installed node.js and learned the basics of how to use npm and understood how and why to install gulp both locally and globally. We also covered some of the core differences between the 3.x and 4.x versions of gulp and how they will affect your gulpfiles as we move forward. To wrap up the article, we took a small glimpse into the anatomy of a gulpfile to prepare us for writing our own gulpfiles from scratch. Resources for Article: Further resources on this subject: Performing Task with Gulp [article] Making a Web Server in Node.js [article] Developing Node.js Web Applications [article]
WordPress as a Web Application Framework

Packt
15 Jun 2017
20 min read
In this article written by Rakhitha Ratanayake, author of the book WordPress Web Application Development - Third Edition, you will learn that WordPress has matured from the most popular blogging platform to the most popular content management system. Thousands of developers around the world are making a living from WordPress design and development. As more and more people are interested in using WordPress, the dream of using this amazing framework for web application development is becoming possible. The future seems bright as WordPress has already got dozens of built-in features, which can be easily adapted to web application development using slight modifications. Since you are already reading this article, you have to be someone who is really excited to see how WordPress fits into web application development. Throughout this article, we will learn how we can inject the best practices of web development into the WordPress framework to build web applications in a rapid process. Basically, this article will be important for developers from two different perspectives. On one hand, beginner- to intermediate-level WordPress developers can get knowledge of cutting-edge web development technologies and techniques to build complex applications. On the other hand, web development experts who are already familiar with popular PHP frameworks can learn WordPress for rapid application development. So, let's get started! In this article, we will cover the following topics: WordPress as a CMS WordPress as a web application framework Simplifying development with built-in features Identifying the components of WordPress Making a development plan for forum management application Understanding limitations and sticking with guidelines Building a question-answer interface Enhancing features of the questions plugin (For more resources related to this topic, see here.) In order to work with this article, you should be familiar with WordPress themes, plugins, and its overall process. Developers who are experienced in PHP frameworks can work with this article while using the reference sources to learn WordPress. By the end of this article, you will have the ability to make the decision to choose WordPress for web development. WordPress as a CMS Way back in 2003, WordPress released its first version as a simple blogging platform and continued to improve until it became the most popular blogging tool. Later, it continued to improve as a CMS and now has a reputation for being the most popular CMS for over 5 years. These days everyone sees WordPress as a CMS rather than just a blogging tool. Now the question is, where will it go next? Recent versions of WordPress have included popular web development libraries such as Backbone.js and Underscore.js and developers are building different types of applications with WordPress. Also, the most recent introduction of the REST API is a major indication that WordPress is moving in the direction of building web applications. The combination of the REST API and modern JavaScript frameworks will enable developers to build complex web applications with WordPress. Before we consider the application development aspects of WordPress, it's ideal to figure out the reasons for it being such a popular CMS.
The following are some of the reasons behind the success of WordPress as a CMS:

Plugin-based architecture for adding independent features and the existence of over 40,000 open source plugins
Ability to create unlimited free websites at www.wordpress.com and use the basic WordPress features
A super simple and easy-to-access administration interface
A fast learning curve and comprehensive documentation for beginners
A rapid development process involving themes and plugins
An active development community with awesome support
Flexibility in building websites with its themes, plugins, widgets, and hooks
Availability of large premium theme and plugin marketplaces for developers to sell advanced plugins/themes and users to build advanced sites with those premium plugins/themes without needing a developer

These reasons prove why WordPress is the top CMS for website development. However, experienced developers who work with full stack web applications don't believe that WordPress has a future in web application development. While it's up for debate, we'll see what WordPress has to offer for web development. Once you complete reading this article, you will be able to decide whether WordPress has a future in web applications. I have been working with full stack frameworks for several years, and I certainly believe in the future of WordPress for web development.

WordPress as a web application framework

In practice, the decision to choose a development framework depends on the complexity of your application. Developers will tend to go for frameworks in most scenarios. It's important to figure out why we go with frameworks for web development. Here's a list of possible reasons why frameworks become a priority in web application development:

Frameworks provide stable foundations for building custom functionalities
Usually, stable frameworks have a large development community with active support
They have built-in features to address the common aspects of application development, such as routing, language support, form validation, user management, and more
They have a large number of utility functions to address repetitive tasks

Full stack development frameworks such as Zend, CodeIgniter, and CakePHP adhere to the points mentioned in the preceding section, which in turn makes them the frameworks of choice for most developers. However, we have to keep in mind that WordPress is an application, and we build applications on top of its existing features. On the other hand, traditional frameworks are foundations used for building applications such as WordPress. Now, let's take a look at how WordPress fits into the boots of a web application framework.

The MVC versus event-driven architecture

A vast majority of web development frameworks are built to work with the Model-View-Controller (MVC) architecture, where an application is separated into independent layers called models, views, and controllers. In MVC, we have a clear understanding of what goes where and when each of the layers will be integrated in the process. So, the first thing most developers will look at is the availability of MVC in WordPress. Unfortunately, WordPress is not built on top of the MVC architecture. This is one of the main reasons why developers refuse to choose it as a development framework. Even though it is not MVC, we can create a custom execution process to make it work like an MVC application.
Also, we can find frameworks such as WP MVC, which can be used to take advantage of both WordPress's native functionality and a vast plugin library, and all of the many advantages of an MVC framework. Unlike other frameworks, it won't have the full capabilities of MVC. However, the unavailability of the MVC architecture doesn't mean that we cannot develop quality applications with WordPress. There are many other ways to separate concerns in WordPress applications. WordPress, on the other hand, relies on a procedural event-driven architecture with its action hooks and filters system. Once a user makes a request, these actions will get executed in a certain order to provide the response to the user. You can find the complete execution procedure at http://codex.wordpress.org/Plugin_API/Action_Reference. In the event-driven architecture, both model and controller code gets scattered throughout the theme and plugin files.

Simplifying development with built-in features

As we discussed in the previous section, the quality of a framework depends on its core features. The better the quality of the core, the better it will be for developing quality and maintainable applications. It's surprising to see the number of WordPress features directly related to web development, even though it is meant to create websites. Let's get a brief introduction to the WordPress core features to see how they fit into web application development.

User management

Built-in user management features are quite advanced in order to cater to the most common requirements of any web application. Its user roles and capability handling makes it much easier to control the access to specific areas of your application. We can separate users into multiple levels using roles and then use capabilities to define the permitted functionality for each user level. Most full stack frameworks don't have built-in user management features, and hence, this can be considered an advantage of using WordPress.

Media management

File uploading and managing is a common and time-consuming task in web applications. The media uploader, which comes built-in with WordPress, can be effectively used to automate the file-related tasks without writing much source code. A super-simple interface makes it so easy for application users to handle file-related tasks. Also, WordPress offers built-in functions for directly uploading media files without the media uploader. These functions can be used effectively to handle advanced media uploading requirements without spending much time.

Template management

WordPress offers a simple template management system for its themes. It is not as complex or fully featured as a typical template engine. However, it offers a wide range of capabilities from a CMS development perspective, which we can extend to suit web applications.

Database management

In most scenarios, we will be using the existing database table structure for our application development. WordPress database management functionalities offer a quick and easy way of working with existing tables with its own style of functions. Unlike other frameworks, WordPress provides a built-in database structure, and hence most of the functionalities can be used to directly work with these tables without writing custom SQL queries.

Routing

Comprehensive support for routing is provided through permalinks. WordPress makes it simple to change the default routing and choose your own routing, in order to build search-engine-friendly URLs.
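To make the user management and database features described above a little more concrete, here is a minimal, hypothetical sketch of how a plugin might define a custom role, check a capability, and query the posts table through the built-in $wpdb object. The wpwa_ prefix, the 'forum_moderator' role, and the 'question' post type are illustrative assumptions for this article's forum scenario, not names provided by WordPress itself.

// Register a custom role with a limited set of capabilities
// (typically done once, on plugin activation).
function wpwa_add_forum_roles() {
    add_role( 'forum_moderator', 'Forum Moderator', array(
        'read'              => true,
        'edit_posts'        => true,
        'moderate_comments' => true,
    ) );
}
register_activation_hook( __FILE__, 'wpwa_add_forum_roles' );

// Restrict a piece of functionality to users who hold a capability.
function wpwa_can_moderate() {
    return current_user_can( 'moderate_comments' );
}

// Query an existing table through the built-in $wpdb abstraction layer.
function wpwa_count_questions() {
    global $wpdb;
    return (int) $wpdb->get_var( $wpdb->prepare(
        "SELECT COUNT(*) FROM {$wpdb->posts} WHERE post_type = %s AND post_status = %s",
        'question', 'publish'
    ) );
}

The point of the sketch is simply that roles, capability checks, and prepared database queries are available out of the box, without pulling in a separate user or ORM library.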
XML-RPC API

Building an API is essential for allowing third-party access to our application. WordPress provides a built-in API for accessing CMS-related functionality through its XML-RPC interface. Also, developers are allowed to create custom API functions through plugins, making it highly flexible for complex applications.

REST API

The REST API makes it possible to give third-party access to the application data, similar to the XML-RPC API. This API uses easy-to-understand HTTP requests and the JSON format, making it easier to communicate with WordPress applications. JavaScript is becoming the modern trend in developing applications. So the availability of JSON in the REST API will allow external users to access and manipulate WordPress data within their JavaScript-based applications.

Caching

Caching in WordPress can be categorized into two sections called persistent and nonpersistent cache. Nonpersistent caching is provided by the WordPress cache object, while persistent caching is provided through its Transient API. Caching techniques in WordPress are simple compared to other frameworks, but they are powerful enough to cater to complex web applications.

Scheduling

As developers, you might have worked with cron jobs for executing certain tasks at specified intervals. WordPress offers the same scheduling functionality through built-in functions, similar to a cron job. However, WordPress cron execution is slightly different from normal cron jobs. In WordPress, cron won't be executed unless someone visits the site. Typically, it's used for scheduling future posts. However, it can be extended to cater to complex scheduling functionality.

Plugins and widgets

The power of WordPress comes from its plugin mechanism, which allows us to dynamically add or remove functionality without interrupting other parts of the application. Widgets can be considered a part of the plugin architecture and will be discussed in detail further in this article.

Themes

The design of a WordPress site comes through the theme. A theme offers many built-in template files to cater to the default functionality. Themes can be easily extended for custom functionality. Also, the design of the site can be changed instantly by switching to a compatible theme.

Actions and filters

Actions and filters are part of the WordPress hook system. Actions are events that occur during a request. We can use WordPress actions to execute certain functionalities after a specific event is completed. On the other hand, filters are functions that are used to filter, modify, and return the data. Flexibility is one of the key reasons for the higher popularity of WordPress compared to other CMSs. WordPress has its own way of extending the functionality of custom features as well as core features through actions and filters. These actions and filters allow the developers to build advanced applications and plugins, which can be easily extended with minor code changes. As a WordPress developer, it's a must to know the perfect use of these actions and filters in order to build highly flexible systems.

The admin dashboard

WordPress offers a fully featured backend for administrators as well as normal users. These interfaces can be easily customized to adapt to custom applications. All the application-related lists, settings, and data can be handled through the admin section. The overall collection of features provided by WordPress can be effectively used to match the core functionalities provided by full stack PHP frameworks.
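As a small illustration of the caching and scheduling features mentioned above, the following hypothetical sketch caches an expensive query result with the Transient API and registers an hourly task with WP-Cron. The wpwa_ function and hook names, and the 'question' post type, are assumptions made for this example rather than part of any official design.

// Cache an expensive value for ten minutes using the Transient API.
function wpwa_get_popular_questions() {
    $questions = get_transient( 'wpwa_popular_questions' );
    if ( false === $questions ) {
        $questions = get_posts( array( 'post_type' => 'question', 'numberposts' => 5 ) );
        set_transient( 'wpwa_popular_questions', $questions, 10 * MINUTE_IN_SECONDS );
    }
    return $questions;
}

// Schedule a recurring task; WP-Cron runs it on the first visit after it is due.
if ( ! wp_next_scheduled( 'wpwa_cleanup_event' ) ) {
    wp_schedule_event( time(), 'hourly', 'wpwa_cleanup_event' );
}
add_action( 'wpwa_cleanup_event', function () {
    // Invalidate the cached list so the next request rebuilds it.
    delete_transient( 'wpwa_popular_questions' );
} );

Note how the scheduled task itself is wired through add_action, which is the same hook mechanism described in the Actions and filters section.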
Identifying the components of WordPress

WordPress comes with a set of prebuilt components, which are intended to provide different features and functionality for an application. A flexible theme and powerful admin features act as the core of WordPress websites, while plugins and widgets extend the core with application-specific features. As a CMS, we all have a pretty good understanding of how these components fit into a WordPress website. Here our goal is to develop web applications with WordPress, and hence it is important to identify the functionality of these components from the perspective of web applications. So, we will look at each of the following components, how they fit into web applications, and how we can take advantage of them to create flexible applications through a rapid development process:

The role of WordPress themes
The role of the admin dashboard
The role of plugins
The role of widgets

The role of WordPress themes

Most of us are used to seeing WordPress as a CMS. In its default view, a theme is a collection of files used to skin your web application layouts. In web applications, it's recommended to separate different components into layers such as models, views, and controllers. WordPress doesn't adhere to the MVC architecture. However, we can easily visualize themes or templates as the presentation layer of WordPress. In simple terms, views should contain the HTML needed to generate the layout, and all the data they need should be passed to the views. WordPress is built to create content management systems, and hence, it doesn't focus on separating views from its business logic. Themes contain views, also known as template files, as a mix of both HTML code and PHP logic. As web application developers, we need to alter the behavior of existing themes, in order to limit the logic inside templates and use plugins to pass the necessary model data to views.

Structure of a WordPress page layout

Typically, posts or pages created in WordPress consist of five common sections. Most of these components will be common across all the pages in the website. In web applications, we also separate the common layout content into separate views to be included inside other views. It's important for us to focus on how we can adapt the layout into a web application-specific structure. Let's visualize the common layout of WordPress using the following diagram:

Having looked at the structure, it's obvious that the Header, Footer, and Main Content area are mandatory even for web applications. However, the Footer and Comments sections will play a less important role in web applications, compared to web pages. The Sidebar is important in web applications, even though it won't be used with the same meaning. It can be quite useful as a dynamic widget area.

Customizing the application layout

Web applications can be categorized as projects and products. A project is something we develop targeting the specific requirements of a client. On the other hand, a product is an application created based on a common set of requirements for a wide range of users. Therefore, customizations will be required on the layouts of your product for different clients. WordPress themes make it simple to customize the layout and features using child themes. We can make the necessary modifications in the child theme while keeping the core layout in the parent theme. This will prevent any code duplication in customizing layouts. Also, the ability to switch themes is a powerful feature that eases the layout customization.
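To show how the Sidebar area described above can be turned into a dynamic widget area, here is a minimal sketch that registers a widgetized region and prints it from a template. The 'app-sidebar' identifier and the markup strings are hypothetical names used only for illustration.

// Register a dynamic widget area (usually in the theme's functions.php).
add_action( 'widgets_init', function () {
    register_sidebar( array(
        'name'          => 'Application Sidebar',
        'id'            => 'app-sidebar',
        'before_widget' => '<div class="app-widget">',
        'after_widget'  => '</div>',
    ) );
} );

// Print the widget area from a template file, only if it has widgets assigned.
if ( is_active_sidebar( 'app-sidebar' ) ) {
    dynamic_sidebar( 'app-sidebar' );
}

Keeping the registration in the parent theme and overriding only the templates in a child theme is one way to apply the customization approach discussed above without duplicating layout code.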
The role of the admin dashboard

The administration interface of an application plays one of the most important roles behind the scenes. WordPress offers one of the most powerful and easy-to-access admin areas amongst competitive frameworks. Most of you should be familiar with using the admin area for CMS functionalities. However, we will have to understand how each component in the admin area suits the development of web applications.

The admin dashboard

The dashboard is the location where all the users get redirected once logged into the admin area. Usually, it contains dynamic widget areas with the most important data of your application. The dashboard can play a major role in web applications, compared to blogging or CMS functionality. The dashboard contains a set of default widgets that are mainly focused on main WordPress features such as posts, pages, and comments. In web applications, we can remove the existing widgets related to the CMS and add application-specific widgets to create a powerful dashboard. WordPress offers a well-defined API to create custom admin dashboard widgets, and hence we can create a very powerful dashboard using custom widgets for custom requirements in web applications.

Posts and pages

Posts in WordPress are built for creating content such as articles and tutorials. In web applications, posts will be the most important section to create different types of data. Often, we will choose custom post types instead of normal posts for building advanced data creation sections. On the other hand, pages are typically used to provide the static content of the site. Usually, we have static pages such as About Us, Contact Us, Services, and so on.

Users

User management is a must-use section for any kind of web application. User roles, capabilities, and profiles will be managed in this section by the authorized users.

Appearance

Themes and application configurations will be managed in this section. Widgets and theme options will be the important sections related to web applications. Generally, widgets are used in sidebars of WordPress sites to display information such as recent members, comments, posts, and so on. However, in web applications, widgets can play a much bigger role, as we can use widgets to split the main template into multiple sections. Also, these types of widgetized areas become handy in applications where the majority of features are implemented with AJAX. The theme options panel can be used as the general settings panel of web applications, where we define the settings related to templates and generic site-specific configurations.

Settings

This section involves general application settings. Most of the prebuilt items in this section are suited for blogs and websites. We can customize this section to add new configuration areas related to the plugins used in web application development. There are some other sections, such as links, pages, and comments, which will not be used frequently in complex web application development. The ability to add new sections is one of the key reasons for WordPress's flexibility.

The role of plugins

In normal circumstances, WordPress developers use functions that involve application logic scattered across theme files and plugins. Some developers even change the core files of WordPress. Altering WordPress core files, third-party themes, or plugin files is considered a bad practice, since we lose all the modifications on version upgrades and it may break the compatibility of other parts of WordPress. In web applications, we need to be much more organized.
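Before looking at how themes and plugins divide those responsibilities, here is a minimal sketch of the admin dashboard widget API mentioned above. The widget id, title, and callback are hypothetical names chosen for a forum-style application, not part of WordPress core.

// Add a custom widget to the admin dashboard.
add_action( 'wp_dashboard_setup', function () {
    wp_add_dashboard_widget(
        'wpwa_forum_summary',          // Widget id (hypothetical).
        'Forum Activity Summary',      // Widget title.
        'wpwa_render_forum_summary'    // Display callback.
    );
} );

// Render the widget body; wp_count_posts() is a core function.
function wpwa_render_forum_summary() {
    $count = wp_count_posts( 'question' )->publish;
    echo '<p>Published questions: ' . esc_html( $count ) . '</p>';
}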
In the Role of WordPress themes section, we discussed the purpose of having a theme for web applications. Plugins will be, and should be, used to provide the main logic and content of your application. The plugin architecture is a powerful way to add or remove features without affecting the core. Also, we have the ability to separate independent modules into their own plugins, making them easier to maintain. On top of this, plugins have the ability to extend other plugins. Since there are over 40,000 free plugins and a large number of premium plugins, sometimes you don't have to develop anything for WordPress applications. You can just use a number of plugins and integrate them properly to build advanced applications.

The role of widgets

The official documentation of WordPress refers to widgets as components that add content and features to your sidebar. From a typical blogging or CMS user's perspective, it's a completely valid statement. Actually, widgets offer more in web applications by going beyond the content that populates sidebars. Modern WordPress themes provide a wide range of built-in widgets for advanced functionality, making it much easier to build applications. The following screenshot shows a typical widgetized sidebar of a website:

We can use dynamic widgetized areas to include complex components as widgets, making it easy to add or remove features without changing source code. The following screenshot shows a sample dynamic widgetized area. We can use the same technique for developing applications with WordPress. Throughout these sections, we covered the main components of WordPress and how they fit into actual web application development. Now, we have a good understanding of the components in order to plan the application developed throughout this article.

A development plan for the forum management application

In this article, our main goal is to learn how we can build full stack web applications using built-in WordPress features. Therefore, I thought of building a complete application, explaining each and every aspect of web development. We will develop an online forum management system for creating public forums or managing a support forum for a specific product or service. This application can be considered a mini version of a powerful forum system like bbPress. We will be starting the development of this application. Planning is a crucial task in web development, through which we will save a lot of time and avoid potential risks in the long run. First, we need to get a basic idea about the goal of this application, its features and functionalities, and the structure of its components to see how it fits into WordPress.

Application goals and target audience

Anyone who uses the Internet on a day-to-day basis knows the importance of online discussion boards, also known as forums. These forums allow us to participate in a large community and discuss common matters, either related to a specific subject or a product. The application developed throughout this article is intended to provide a simple and flexible forum management application using a WordPress plugin, with the goals of:

Learning to develop a forum application
Learning to use features of various online forums
Learning to manage a forum for your product or service

This application will be targeted towards all the people who have participated in an online forum or used a support system of a product they purchased.
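As a hypothetical first building block for the forum plugin planned above, the following sketch registers a 'question' custom post type, illustrating the custom post type approach mentioned earlier. The post type name, labels, and supported features are assumptions for illustration, not the final design of the application.

// Register a custom post type to store forum questions.
add_action( 'init', function () {
    register_post_type( 'question', array(
        'labels'      => array(
            'name'          => 'Questions',
            'singular_name' => 'Question',
        ),
        'public'      => true,
        'has_archive' => true,
        'supports'    => array( 'title', 'editor', 'author', 'comments' ),
    ) );
} );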
I believe that both the output of this application and its contents will be ideal for PHP developers who want to jump into WordPress application development.

Summary

Our main goal was to find out how WordPress fits into web application development. We started this article by identifying the CMS functionalities of WordPress. We explored the features and functionalities of popular full stack frameworks and compared them with the existing functionalities of WordPress. Then, we looked at the existing components and features of WordPress and how each of those components fits into a real-world web application. We also planned the forum management application requirements and identified the limitations in using WordPress for web applications. Finally, we converted the default interface into a question-answer interface in a rapid process using existing functionalities, without interrupting the default behavior of WordPress and themes. By now, you should be able to decide whether to choose WordPress for your web application, visualize how your requirements fit into the components of WordPress, and identify and minimize the limitations.

Resources for Article:

Further resources on this subject:

Creating Your Own Theme—A Wordpress Tutorial [article]
Introduction to a WordPress application's frontend [article]
Wordpress: Buddypress Courseware [article]

Building Components Using Angular

Packt
06 Apr 2017
11 min read
In this article by Shravan Kumar Kasagoni, the author of the book Angular UI Development, we will learn how to use the new features of the Angular framework to build web components. After going through this article you will understand the following:

What web components are
How to set up the project for Angular application development
Data binding in Angular

(For more resources related to this topic, see here.)

Web components

In today's web world, if we need to use any of the UI components provided by libraries like jQuery UI, the YUI library, and so on, we write a lot of imperative JavaScript code; we can't use them simply in a declarative fashion like HTML markup. There are fundamental problems with this approach:

There is no way to define custom HTML elements and use them in a declarative fashion.
The JavaScript and CSS code inside UI components can accidentally modify other parts of our web pages, and our code can also accidentally modify UI components, which is unintended.
There is no standard way to encapsulate code inside these UI components.

Web Components provide a solution to all these problems. Web Components are a set of specifications for building reusable UI components. The Web Components specification is comprised of four parts:

Templates: Allow us to declare fragments of HTML that can be cloned and inserted in the document by script
Shadow DOM: Solves the DOM tree encapsulation problem
Custom elements: Allow us to define custom HTML tags for UI components
HTML imports: Allow us to add UI components to a web page using an import statement

More information on web components can be found at: https://www.w3.org/TR/components-intro/.

Components are the fundamental building blocks of any Angular application. Components in Angular are built on top of the web components specification. The web components specification is still under development and might change in the future, and not all browsers support it. But Angular provides a very high abstraction so that we don't need to deal with the multiple technologies in web components. Even if the specification changes, Angular can take care of it internally; it provides a much simpler API to write web components.

Getting started with Angular

We know Angular is completely re-written from scratch, so everything is new in Angular. In this article we will discuss a few important features like data binding, the new templating syntax, and built-in directives. We are going to use a more practical approach to learn these new features. In the next section we are going to look at a partially implemented Angular application. We will incrementally use new Angular features to implement this application. Follow the instructions specified in the next section to set up the sample application.

Project Setup

Here is a sample application with the required Angular configuration and some sample code.

Application Structure

Create the directory structure and files as mentioned below and copy the code into the files from the next section.

Source Code

package.json

We are going to use npm as our package manager to download the libraries and packages required for our application development. Copy the following code to the package.json file.
{ "name": "display-data", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "tsc": "tsc", "tsc:w": "tsc -w", "lite": "lite-server", "start": "concurrent "npm run tsc:w" "npm run lite" " }, "author": "Shravan", "license": "ISC", "dependencies": { "angular2": "^2.0.0-beta.1", "es6-promise": "^3.0.2", "es6-shim": "^0.33.13", "reflect-metadata": "^0.1.2", "rxjs": "^5.0.0-beta.0", "systemjs": "^0.19.14", "zone.js": "^0.5.10" }, "devDependencies": { "concurrently": "^1.0.0", "lite-server": "^1.3.2", "typescript": "^1.7.5" } } The package.json file holds metadata for npm, in the preceding code snippet there are two important sections: dependencies: It holds all the packages required for an application to run devDependencies: It holds all the packages required only for development Once we add the preceding package.json file to our project we should run the following command at the root of our application. $ npm install The preceding command will create node_modules directory in the root of project and downloads all the packages mentioned in dependencies, devDependencies sections into node_modules directory. There is one more important section, that is scripts. We will discuss about scripts section, when we are ready to run our application. tsconfig.json Copy the below code to tsconfig.json file. { "compilerOptions": { "target": "es5", "module": "system", "moduleResolution": "node", "sourceMap": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "removeComments": false, "noImplicitAny": false }, "exclude": [ "node_modules" ] } We are going to use TypeScript for developing our Angular applications. The tsconfig.json file is the configuration file for TypeScript compiler. Options specified in this file are used while transpiling our code into JavaScript. This is totally optional, if we don't use it TypeScript compiler use are all default flags during compilation. But this is the best way to pass the flags to TypeScript compiler. Following is the expiation for each flag specified in tsconfig.json: target: Specifies ECMAScript target version: 'ES3' (default), 'ES5', or 'ES6' module: Specifies module code generation: 'commonjs', 'amd', 'system', 'umd' or 'es6' moduleResolution: Specifies module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6) sourceMap: If true generates corresponding '.map' file for .js file emitDecoratorMetadata: If true enables the output JavaScript to create the metadata for the decorators experimentalDecorators: If true enables experimental support for ES7 decorators removeComments: If true, removes comments from output JavaScript files noImplicitAny: If true raise error if we use 'any' type on expressions and declarations exclude: If specified, the compiler will not compile the TypeScript files in the containing directory and subdirectories index.html Copy the following code to index.html file. 
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Top 10 Fastest Cars in the World</title>
    <link rel="stylesheet" href="app/site.css">
    <script src="node_modules/angular2/bundles/angular2-polyfills.js"></script>
    <script src="node_modules/systemjs/dist/system.src.js"></script>
    <script src="node_modules/rxjs/bundles/Rx.js"></script>
    <script src="node_modules/angular2/bundles/angular2.dev.js"></script>
    <script>
        System.config({
            transpiler: 'typescript',
            typescriptOptions: {emitDecoratorMetadata: true},
            map: {typescript: 'node_modules/typescript/lib/typescript.js'},
            packages: { 'app' : { defaultExtension: 'ts' } }
        });
        System.import('app/boot').then(null, console.error.bind(console));
    </script>
</head>
<body>
    <cars-list>Loading...</cars-list>
</body>
</html>

This is the startup page of our application; it contains the required Angular scripts and the SystemJS configuration for module loading. The body tag contains the <cars-list> tag, which renders the root component of our application. However, I want to point out one specific statement: the System.import('app/boot') statement will import the boot module from the app package. Physically, it loads the boot.js file under the app folder.

car.ts

Copy the following code to the car.ts file.

export interface Car {
    make: string;
    model: string;
    speed: number;
}

We are defining a car model using a TypeScript interface; we are going to use this car model object in our components.

app.component.ts

Copy the following code to the app.component.ts file.

import {Component} from 'angular2/core';

@Component({
    selector: 'cars-list',
    template: ''
})
export class AppComponent {
    public heading = "Top 10 Fastest Cars in the World";
}

Important points about the AppComponent class:

The AppComponent class is our application root component; it has one public property named 'heading'
The AppComponent class is decorated with the @Component() function with selector and template properties in its configuration object
The @Component() function is imported using ES2015 module import syntax from the 'angular2/core' module in the Angular library
We are also exporting the AppComponent class as a module using the export keyword
Other modules in the application can also import the AppComponent class using the module name (app.component – file name without extension) using ES2015 module import syntax

boot.ts

Copy the following code to the boot.ts file.

import {bootstrap} from 'angular2/platform/browser'
import {AppComponent} from './app.component';

bootstrap(AppComponent);

In this file we are importing the bootstrap() function from the 'angular2/platform/browser' module and the AppComponent class from the 'app.component' module. Next we are invoking the bootstrap() function with the AppComponent class as a parameter; this will instantiate an Angular application with AppComponent as the root component.

site.css

Copy the following code to the site.css file.

* {
    font-family: 'Segoe UI Light', 'Helvetica Neue', 'Segoe UI', 'Segoe';
    color: rgb(51, 51, 51);
}

This file contains some basic styles for our application.

Working with data in Angular

In any typical web application, we need to display data on an HTML page and read data from input controls on an HTML page. In Angular everything is a component; an HTML page is represented as a template and it is always associated with a component class. Application data lives on the component class's properties. To either push values to the template or pull values from it, we need to bind the properties of the component class to the controls on the template. This mechanism is known as data binding.
Data binding in Angular allows us to use a simple syntax to push or pull data. When we bind the properties of the component class to the controls on the template, if the data on the properties changes, Angular will automatically update the template to display the latest data, and vice versa. We can also control the direction of data flow (from component to template, from template to component).

Displaying Data using Interpolation

If we go back to our AppComponent class in the sample application, we have a heading property. We need to display this heading property on the template. Here is the revised AppComponent class:

app/app.component.ts

import {Component} from 'angular2/core';

@Component({
    selector: 'cars-list',
    template: '<h1>{{heading}}</h1>'
})
export class AppComponent {
    public heading = "Top 10 Fastest Cars in the World";
}

In the @Component() function we updated the template property with the expression {{heading}} surrounded by an h1 tag. The double curly braces are the interpolation syntax in Angular. For any property on the class we need to display on the template, we use the property name surrounded by double curly braces. Angular will automatically render the value of the property on the browser screen. Let's run our application: go to the command line, navigate to the root of the application structure, then run the following command.

$ npm start

The preceding start command is part of the scripts section in the package.json file. It invokes two other commands: npm run tsc:w and npm run lite.

npm run tsc:w: This command performs the following actions:

It invokes the TypeScript compiler in watch mode
The TypeScript compiler will compile all our TypeScript files to JavaScript using the configuration mentioned in tsconfig.json
The TypeScript compiler will not exit after the compilation is over; it will wait for changes in the TypeScript files
Whenever we modify any TypeScript file, the compiler will compile it to JavaScript on the fly

npm run lite: This command will start a lightweight Node.js web server and launch our application in the browser.

Now we can continue to make changes in our application. Changes are detected and the browser will refresh automatically with the updates. Output in the browser:

Let's further extend this simple application: we are going to bind the heading property to a textbox. Here is the revised template:

template: `
    <h1>{{heading}}</h1>
    <input type="text" value="{{heading}}"/>
`

If we notice the template, it is a multiline string and it is surrounded by ` (backquote/backtick) symbols instead of single or double quotes. The backtick (``) symbols are the new multi-line string syntax in ECMAScript 2015. We don't need to start our application again; as mentioned earlier, it will automatically refresh the browser with the updated output until we stop the 'npm start' command at the command line. Output in the browser:

Now the textbox also displays the same value as the heading property. Let's change the value in the textbox by typing something, then hit the tab button. We don't see any changes happening in the browser. But as mentioned earlier, in data binding, whenever we change the value of any control on the template which is bound to a property of the component class, it should update the property value. Then any other controls bound to the same property should also display the updated value. In the browser, the h1 tag should also display the same text we type in the textbox, but it won't happen.
Then we discussed how to write components using new features in Angular like data binding and the new templating syntaxes, using a lot of examples. By the end of this article, you should have a good understanding of Angular's new concepts and should be able to write basic components.

Resources for Article:

Further resources on this subject:

Get Familiar with Angular [article]
Gearing Up for Bootstrap 4 [article]
Angular's component architecture [article]


Hands on with Service Fabric

Packt
06 Apr 2017
12 min read
In this article by Rahul Rai and Namit Tanasseri, authors of the book Microservices with Azure, you will learn that Service Fabric as a platform supports multiple programming models, each of which is best suited for specific scenarios. Each programming model offers a different level of integration with the underlying management framework. Better integration leads to more automation and lower overheads. Picking the right programming model for your application or services is the key to efficiently utilizing the capabilities of Service Fabric as a hosting platform. Let's take a deeper look into these programming models.

(For more resources related to this topic, see here.)

To start with, let's look at the least integrated hosting option: Guest Executables. Native Windows applications or application code using Node.js or Java can be hosted on Service Fabric as a guest executable. These executables can be packaged and pushed to a Service Fabric cluster like any other service. As the cluster manager has minimal knowledge about the executable, features like custom health monitoring, load reporting, state store, and endpoint registration cannot be leveraged by the hosted application. However, from a deployment standpoint, a guest executable is treated like any other service. This means that for a guest executable, the Service Fabric cluster manager takes care of high availability, application lifecycle management, rolling updates, automatic failover, high density deployment, and load balancing.

As an orchestration service, Service Fabric is responsible for deploying and activating an application or application services within a cluster. It is also capable of deploying services within a container image. This programming model is addressed as Guest Containers. The concept of containers is best explained as an implementation of operating system level virtualization. They are encapsulated deployable components running on isolated process boundaries sharing the same kernel. Deployed applications and their runtime dependencies are bundled within the container with an isolated view of all operating system constructs. This makes containers highly portable and secure. The guest container programming model is usually chosen when this level of isolation is required for the application. As containers don't have to boot an operating system, they have a fast boot-up time and are comparatively small in size.

A prime benefit of using Service Fabric as a platform is the fact that it supports heterogeneous operating environments. Service Fabric supports two types of containers to be deployed as guest containers: Docker containers on Linux and Windows Server containers. Container images for Docker containers are stored in Docker Hub, and Docker APIs are used to create and manage the containers deployed on the Linux kernel.

Service Fabric supports two different types of containers in Windows Server 2016 with different levels of isolation: Windows Server containers and Windows Hyper-V containers. Windows Server containers are similar to Docker containers in terms of the isolation they provide. Windows Hyper-V containers offer a higher degree of isolation and security by not sharing the operating system kernel across instances. They are ideally used when a higher level of security isolation is required, such as for systems hosting hostile multi-tenant workloads. The following figure illustrates the different isolation levels achieved by using these containers:
Container isolation levels

The Service Fabric application model treats containers as an application host which can in turn host service replicas. There are three ways of utilizing containers within the Service Fabric application model. Existing applications like Node.js or JavaScript applications, or other executables, can be hosted within a container and deployed on Service Fabric as a Guest Container. A Guest Container is treated similarly to a Guest Executable by the Service Fabric runtime. The second scenario supports deploying stateless services inside a container hosted on Service Fabric. Stateless services using Reliable Services and Reliable Actors can be deployed within a container. The third option is to deploy stateful services in containers hosted on Service Fabric. This model also supports Reliable Services and Reliable Actors. Service Fabric offers several features to manage containerized Microservices. These include container deployment and activation, resource governance, repository authentication, port mapping, container discovery and communication, and the ability to set environment variables.

While containers offer a good level of isolation, they are still heavy in terms of deployment footprint. Service Fabric offers a simpler, powerful programming model to develop your services, which it calls Reliable Services. Reliable Services let you develop stateful and stateless services which can be directly deployed on Service Fabric clusters. For stateful services, the state can be stored close to the compute by using Reliable Collections. High availability of the state store and replication of the state are taken care of by the Service Fabric cluster management services. This contributes substantially to the performance of the system by improving the latency of data access. Reliable Services come with a built-in pluggable communication model which supports HTTP with Web API, WebSockets, and custom TCP protocols out of the box.

A Reliable Service is addressed as stateless if it does not maintain any state within it, or if the scope of the state stored is limited to a service call and is entirely disposable. This means that a stateless service does not need to persist, synchronize, or replicate state. A good example of this kind of service is a weather service like the MSN weather service. A weather service can be queried to retrieve the weather conditions associated with a specific geographical location. The response is totally based on the parameters supplied to the service. This service does not store any state. Although stateless services are simpler to implement, most of the services in real life are not stateless. They either store state in an external state store or an internal one. Web front ends hosting APIs or web applications are good use cases to be hosted as stateless services.

A stateful service persists state. The outcome of a service call made to a stateful service is usually influenced by the state persisted by the service. A service exposed by a bank to return the balance on an account is a good example of a stateful service. The state may be stored in an external data store such as Azure SQL Database, Azure Blobs, or Azure Table storage. Most services prefer to store the state externally, considering the challenges around reliability, availability, scalability, and consistency of the data store. With Service Fabric, state can be stored close to the compute by using Reliable Collections. To make things more lightweight, Service Fabric also offers a programming model based on the Virtual Actor pattern.
This programming model is called Reliable Actors. The Reliable Actors programming model is built on top of Reliable Services. This guarantees the scalability and reliability of the services. An Actor can be defined as an isolated, independent unit of compute and state with single-threaded execution. Actors can be created, managed, and disposed of independently of each other. A large number of actors can coexist and execute at a time. Service Fabric Reliable Actors are a good fit for systems which are highly distributed and dynamic by nature. Every actor is defined as an instance of an actor type, the same way an object is an instance of a class. Each actor is uniquely identified by an actor ID. The lifetime of Service Fabric Actors is not tied to their in-memory state. As a result, Actors are automatically created the first time a request for them is made. The Reliable Actors garbage collector takes care of disposing of unused Actors in memory.

Now that we understand the programming models, let's take a look at how the services deployed on Service Fabric are discovered and how the communication between services takes place.

Service Fabric discovery and communication

An application built on top of Microservices is usually composed of multiple services, each of which runs multiple replicas. Each service is specialized in a specific task. To achieve an end-to-end business use case, multiple services will need to be stitched together. This requires services to communicate with each other. A simple example would be a web front end service communicating with the middle tier services, which in turn connect to the back end services to handle a single user request. Some of these middle tier services can also be invoked by external applications. Services deployed on Service Fabric are distributed across multiple nodes in a cluster of virtual machines. The services can move across nodes dynamically. This distribution of services can either be triggered by a manual action or be the result of the Service Fabric cluster manager re-balancing services to achieve optimal resource utilization. This makes communication a challenge, as services are not tied to a particular machine. Let's understand how Service Fabric solves this challenge for its consumers.

Service protocols

Service Fabric, as a hosting platform for Microservices, does not interfere in the implementation of the service. On top of this, it also lets services decide on the communication channels they want to open. These channels are addressed as service endpoints. During service initiation, Service Fabric provides the opportunity for the services to set up the endpoints for incoming requests on any protocol or communication stack. The endpoints are defined according to common industry standards, that is, IP:Port. It is possible that multiple service instances share a single host process, in which case they either have to use different ports or a port sharing mechanism. This will ensure that every service instance is uniquely addressable.

Service endpoints

Service discovery

Service Fabric can rebalance services deployed on a cluster as a part of orchestration activities. This can be caused by resource balancing activities, failovers, upgrades, scale outs, or scale ins. This will result in changes to service endpoint addresses as the services move across different virtual machines.

Service distribution

The Service Fabric Naming Service is responsible for abstracting this complexity from the consuming service or application.
The Naming Service takes care of service discovery and resolution. All service instances in Service Fabric are identified by a unique URL like fabric:/MyMicroServiceApp/AppService1. This name stays constant across the lifetime of the service, although the endpoint addresses which physically host the service may change. Internally, Service Fabric manages a map between the service names and the physical locations where the services are hosted. This is similar to the DNS service, which is used to resolve website URLs to IP addresses. The following figure illustrates the name resolution process for a service hosted on Service Fabric:

Name resolution

Connections from applications external to Service Fabric

Service communications to or between services hosted in Service Fabric can be categorized as internal or external. Internal communication among services hosted on Service Fabric is easily achieved using the Naming Service. External communication, originating from an application or a user outside the boundaries of Service Fabric, will need some extra work. To understand how this works, let's dive deeper into the logical network layout of a typical Service Fabric cluster. A Service Fabric cluster is always placed behind an Azure Load Balancer. The Load Balancer acts like a gateway for all traffic which needs to pass to the Service Fabric cluster. The Load Balancer is aware of every port open on every node of a cluster. When a request hits the Load Balancer, it identifies the port the request is looking for and randomly routes the request to one of the nodes which has the requested port open. The Load Balancer is not aware of the services running on the nodes or the ports associated with the services. The following figure illustrates request routing in action.

Request routing

Configuring ports and protocols

The protocol and the ports to be opened by a Service Fabric cluster can be easily configured through the portal. Let's take an example to understand the configuration in detail. If we need a web application to be hosted on a Service Fabric cluster which should have port 80 opened on HTTP to accept incoming traffic, the following steps should be performed.

Configuring the service manifest

Once a service listening to port 80 is authored, we need to configure port 80 in the service manifest to open a listener in the service. This can be done by editing the ServiceManifest.xml.

<Resources>
    <Endpoints>
        <Endpoint Name="WebEndpoint" Protocol="http" Port="80" />
    </Endpoints>
</Resources>

Configuring a custom endpoint

On the Service Fabric cluster, configure port 80 as a custom endpoint. This can be easily done through the Azure Management portal.

Configuring custom port

Configure Azure Load Balancer

Once the cluster is configured and created, the Azure Load Balancer can be instructed to forward the traffic to port 80. If the Service Fabric cluster is created through the portal, this step is automatically taken care of for every port which is configured in the cluster configuration.

Configuring Azure Load Balancer

Configure health check

The Azure Load Balancer probes the ports on the nodes for their availability to ensure reliability of the service. The probes can be configured on the Azure portal. This is an optional step, as a default probe configuration is applied for each endpoint when a cluster is created.

Configuring probe

Built-in Communication API

Service Fabric offers many built-in communication options to support inter-service communications. Service Remoting is one of them.
This option allows strongly typed remote procedure calls between Reliable Services and Reliable Actors. This option is very easy to set up and operate with, as Service Remoting handles the resolution of service addresses, connection, retry, and error handling. Service Fabric also supports HTTP for language-agnostic communication. The Service Fabric SDK exposes the ICommunicationClient and ServicePartitionClient classes for service resolution, HTTP connections, and retry loops. WCF is also supported by Service Fabric as a communication channel to enable legacy workloads to be hosted on it. The SDK exposes WcfCommunicationListener for the server side, and the WcfCommunicationClient and ServicePartitionClient classes for the client, to ease programming hurdles.

Resources for Article:

Further resources on this subject:

Installing Neutron [article]
Designing and Building a vRealize Automation 6.2 Infrastructure [article]
Insight into Hyper-V Storage [article]