
Tech Guides - Server-Side Web Development

15 Articles

7 reasons to choose GraphQL APIs over REST for building your APIs

Sugandha Lahoti
09 Aug 2018
4 min read
REST has long been the go-to web service style for front-end developers, but GraphQL has recently exploded in popularity. Developers now have another great choice for implementing APIs: the open source GraphQL specification, created by Facebook. Facebook has been using GraphQL APIs for almost six years in most components of the Facebook and Instagram apps and websites. Since the specification was open sourced in 2015, organizations of every size, from tech giants to lean startups, have adopted it for building web services. Here are seven reasons why you should also give GraphQL a try for building your APIs.

#1. GraphQL is protocol agnostic

Both REST and GraphQL are approaches to building and consuming APIs, and both can be operated over HTTP. However, GraphQL is protocol agnostic: it does not depend on HTTP. GraphQL does not use HTTP methods or HTTP response codes to convey meaning; HTTP is simply one possible channel for GraphQL communication.

#2. GraphQL exposes a single endpoint for data fetching

With a GraphQL API you have only one endpoint for accessing data on a server. In a typical REST API, by contrast, you may have to make requests to multiple endpoints to retrieve the data you need.

#3. GraphQL eliminates overfetching and underfetching

Because the GraphQL server is a single endpoint that handles all client requests, it can give clients the power to customize those requests at any time. Clients can ask for multiple resources in the same request, and they can specify exactly which fields they need from each of them. Clients thus stay in control of the data they fetch and can easily avoid the problems of over-fetching and under-fetching. With GraphQL, clients and servers are also independent of each other, which means either can be changed without affecting the other.

#4. Openness, flexibility, and power

GraphQL solves the data-loading problem through three attributes. First, GraphQL is an open specification rather than a piece of software, so you can use it to serve many different needs at once. Second, GraphQL is flexible enough not to be tied to any particular programming language, database, or hosting environment. Third, GraphQL brings power and performance and reduces code complexity by using declarative queries instead of imperative code.

#5. Request and response are directly related

In RESTful APIs, the language we use for the request is different from the language we use for the response. In GraphQL, the language used for the request directly mirrors the language used for the response. Because clients and servers communicate in a similar language, debugging becomes easier: since GraphQL queries mirror the shape of their responses, any deviation can be detected and traced to the exact query fields that are not resolving correctly.

#6. GraphQL features declarative data communication

GraphQL pays major attention to improving the developer experience (DX), which is as important as the user experience, maybe more. For data communication, developers need a declarative language for expressing an application's data requirements. GraphQL acts as a simple query language that lets developers ask for the data their applications require in a simple, natural, declarative way that mirrors how they use that data in their applications. That's why frontend application developers love GraphQL.
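To see points #2, #3, and #5 in practice, here is a minimal sketch of a client query sent over HTTP from JavaScript (Node 18+ or any browser, where fetch is global). The endpoint, the user field, and its subfields are hypothetical stand-ins for whatever schema your server exposes; the point is that the query names exactly the fields the client wants, and the response mirrors the query's shape.

```javascript
// A GraphQL query asking for one user and only the fields the client needs.
// Note how the query already looks like the JSON we expect back.
const query = `
  query GetUser($id: ID!) {
    user(id: $id) {
      name
      email
      friends { name }
    }
  }
`;

async function fetchUser(id) {
  // GraphQL over HTTP is conventionally a single POST endpoint.
  const res = await fetch('https://example.com/graphql', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors.map(e => e.message).join('; '));
  return data.user; // shape mirrors the query: { name, email, friends: [{ name }] }
}

fetchUser('42').then(user => console.log(user.name));
```

A single request fetches the user and their friends' names together, with no second round trip and no unused fields in the payload.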
#7. Open source ecosystem and a fabulous community

GraphQL has evolved in leaps and bounds since it was open sourced. When it first came out, the only tooling available to developers was the graphql-js reference implementation. Now, implementations of the GraphQL specification are available in various languages, along with multiple GraphQL clients. In addition, there are tools such as Prisma, GraphQL Faker, GraphQL Playground, and graphql-config for building GraphQL APIs. The GraphQL community is growing rapidly, and entire conferences are dedicated exclusively to GraphQL: GraphQL Europe, GraphQL Day, and GraphQL Summit, to name a few.

If you want to learn GraphQL, here are a few resources to help you get off the ground quickly:

Learning GraphQL and Relay
Hands-on GraphQL for Better RESTful Web Services [Video]
Learning GraphQL with React and Relay [Video]
5 web development tools that will matter in 2018
What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies


Top 5 automated testing frameworks

Sugandha Lahoti
11 Jul 2018
6 min read
The world is abuzz with automation. It is everywhere today, and it is becoming an integral part of organizations and processes. Software testing, an intrinsic part of website/app/software development, has also been taken over by test automation tools. However, as happens in many software markets, a surplus of tools complicates the selection process. We have identified the top 5 testing frameworks used by most developers for automating the testing process. These automation testing frameworks cover a broad range of devices and support different scripting languages. Each framework has its own unique pros, cons, and learning approach.

Selenium

Creator: Jason Huggins | Language: Java | Current version: 3.11.0 | Popularity: 11,031 stars on GitHub

Selenium is probably the most popular test automation framework, primarily used for testing web apps. However, Selenium can also be used in cloud-based services and load-testing services, and for monitoring, quality assurance, test architecture, regression testing, performance analysis, and mobile testing. It is open source, so the source code can be altered and modified if you want to customize it for your testing purposes. It is flexible enough for you to write your own scripts and add functionality to test scripts and the framework itself.

The Selenium suite consists of four different tools: Selenium IDE, Selenium Grid, Selenium RC, and Selenium WebDriver. It also supports a wide range of programming languages such as C#, Java, Python, PHP, Ruby, Groovy, and Perl. Selenium is portable, so it can be run anywhere, eliminating the need to configure it specifically for a particular machine. This becomes quite handy when you are working across varied environments and platforms: Windows, Mac, and Linux system environments, and Chrome, Firefox, IE, and headless browsers. Most importantly, Selenium has a great community, which implies more forums, more resources, examples, and solved problems.

Appium

Creator: Dan Cuellar | Language: C# | Current version: 1.8.1 | Popularity: 7,432 stars on GitHub

Appium is an open source test automation framework for testing native, hybrid, and mobile web applications. It allows you to run automated tests on actual devices, emulators (Android), and simulators (iOS). It provides cross-platform solutions for native and hybrid mobile apps, which means the same test cases will work on multiple platforms (iOS, Android, Windows, Mac). Appium also allows you to talk to other Android apps that are integrated with the App Under Test (AUT).

Appium has a client-server architecture. It extends the WebDriver client libraries, which are already written in most popular programming languages, so you are free to use any programming language to write your automation test scripts. With Appium, you can also run your test scripts in the cloud using services such as Sauce Labs and Testdroid. Appium is available on GitHub with documentation and tutorials covering all that is needed. The Appium team is alive, active, and highly responsive as far as solving issues is concerned; developers can expect a reply within 36 hours after an issue is opened. The community around Appium is also pretty large and growing every month.
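Since both Selenium and Appium speak the WebDriver protocol, a small script gives a feel for the API. Here is a minimal sketch using the selenium-webdriver package for Node.js (one of the language bindings mentioned above); it assumes Chrome and a matching chromedriver are installed, and the page and search box under test are hypothetical.

```javascript
// npm install selenium-webdriver  (also requires a local chromedriver)
const { Builder, By, Key, until } = require('selenium-webdriver');

(async function searchTest() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com');          // hypothetical page under test
    await driver.findElement(By.name('q'))            // assumes a search box named "q"
      .sendKeys('test automation', Key.RETURN);
    await driver.wait(until.titleContains('test automation'), 5000);
    console.log('Title:', await driver.getTitle());
  } finally {
    await driver.quit();                              // always release the browser
  }
})();
```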
Katalon Studio

Creator: Katalon LLC | Language: Groovy | Current version: 5.4.2

Katalon Studio is another test automation solution for web applications, mobile, and web services. Katalon Studio uses Groovy, a language built on top of Java. It is itself built on top of the Selenium and Appium frameworks, taking advantage of both for integrated web and mobile test automation. Unlike Appium and Selenium, which are more suitable for testers with good programming skills, Katalon Studio can be used by testers with limited technical knowledge. It has an interactive UI with drag-and-drop features, where you select keywords and test objects to form test steps. It offers a manual mode for technically strong users and a scripting mode that supports development facilities like syntax highlighting, code suggestion, and debugging. On the downside, Katalon has to load many extra libraries for parsing test data and test objects and for logging, so it may be a bit slower for long test cases compared to other testing frameworks that use Java.

Robot Framework

Creators: Pekka Klärck, Janne Härkönen, et al. | Language: Python | Current version: 3.0.4 | Popularity: 2,393 stars on GitHub

Robot Framework is a Python-based, keyword-driven acceptance test automation framework. It is a general-purpose test automation framework primarily used for acceptance testing, and it streamlines acceptance testing into mainstream development, giving rise to the concept of acceptance test driven development (ATDD). It was created by Pekka Klärck as part of his master's thesis and was developed within Nokia Siemens Networks in 2005. Its core framework is written in Python, but it also supports IronPython (.NET), Jython (JVM), and PyPy. The keyword-driven approach simplifies tests and makes them readable, and there is also provision for creating reusable higher-level keywords from existing ones. Robot Framework stands out from other testing tools by working on easy-to-use tabular test files that provide different approaches to test creation. Its extensible nature is what makes it so versatile: it can be adjusted to different scenarios and used with different software backends, such as Python and Java libraries, and also via different APIs.

Watir

Creators: Bret Pettichord, Charley Baker, and more | Language: Ruby | Current version: 6.7.2 | Popularity: 1,126 stars on GitHub

Watir is a powerful test automation tool based on a family of Ruby libraries. Its name stands for Web Application Testing In Ruby. Thanks to Ruby, Watir can connect to databases, export XML, structure code as reusable libraries, and read data files and spreadsheets. It supports cross-browser and data-driven testing, and the tests are easy to read and maintain. It also integrates with BDD tools such as Cucumber and Test/Unit, with BrowserStack or Sauce Labs for cross-browser testing, and with Applitools for visual testing. While Watir itself supports only Internet Explorer on Windows, Watir-WebDriver, the modern version of the Watir API based on Selenium, supports Chrome, Firefox, Internet Explorer, and Opera, and can also run in headless mode (HtmlUnit).

All the frameworks discussed above offer unique benefits based on their target platforms and respective audiences. One should avoid selecting a framework based solely on technical requirements. Instead, it is important to identify what is suitable to the developers, their team, and the project.
For instance, even though general-purpose frameworks cover a broad range of devices, they often lack hardware support, while device-specific frameworks often lack support for different scripting languages and approaches. Work with what suits your project and your team's requirements best.

Selenium and data-driven testing: An interview with Carl Cocchiaro
3 best practices to develop effective test automation with Selenium
Writing Your First Cucumber Appium Test


The top 5 reasons why Node.js could topple Java

Amarabha Banerjee
20 Jun 2018
4 min read
Last year Mikeal Rogers, community organizer of the Node.js Foundation, stated in an interview: "Node.js will take over Java within a year". No doubt Java has been the most popular programming language for a very long time, but Node is catching up quickly, thanks to its JavaScript connection. JavaScript is the most used language for front-end web development, and it has gained significant popularity for server-side web development too; that is where Node.js has the bigger role to play. JavaScript runs in the browser and can create sleek and beautiful websites with ease. Node.js extends JavaScript to the server side, allowing JavaScript code to utilize the resources of the system and perform more complex tasks than just running in the browser. Today we look at the top 5 reasons why Node.js has become so popular, with the potential to take over Java.

Asynchronous programming

Node.js brings asynchronous programming to the server side. Asynchronous request handling means that while one request is being addressed, newer requests do not have to wait in a queue to be completed. Requests are taken up in parallel and processed as and when they arrive. This saves a lot of time and helps make full use of the processor's power.

Event-driven architecture

Node.js is built entirely on the foundation of event-driven architecture. What do we mean by event-driven architecture in Node.js? Every request, be it access to a database or a simple redirect to a web address, is treated as an event and queued on a single thread. The events are completed in sequence, and any new request is added as an event on top of the previous ones. As each event completes, its output is delivered. This event-driven approach paved the way for present-day event-driven application architectures and the implementation of microservices.

Vibrant community

The Node.js developer community is large and active. This has propelled the creation of many third-party tools that make server-side development easier. One such tool is Socket.io, which enables push messaging between the server and the client. Tools like Socket.io, Express.js, and WebSockets have enabled faster message transfer, resulting in more efficient and better applications.

Better for scaling

When you are trying to build a large-scale, industrial-grade application, there are two techniques available: multithreading and event-driven architecture. Although the choice depends on the exact requirements of the application, Node can solve a multitude of your problems because it doesn't just scale up with the number of processors; it can scale up per processor, meaning the number of processes per processor can also be scaled up in Node.js.

Real-time applications

Are you developing real-time applications like Google Docs or Trello, where small messages need to travel back and forth between the server and the client? Node.js will be the best choice for building something similar, thanks to the event-driven architecture discussed above and the availability of fast messaging tools. The smaller and more frequent your messages, the better Node.js works for you.
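Socket.io, mentioned above, is a good illustration of how little code such push messaging takes. The following is a minimal sketch of a Socket.io relay server, assuming the socket.io package (v3/v4 API) is installed from npm; the 'chat message' event name is an arbitrary label chosen for the example.

```javascript
// npm install socket.io
const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on('connection', (socket) => {
  console.log('client connected:', socket.id);

  // Relay every 'chat message' event to all connected clients.
  socket.on('chat message', (msg) => {
    io.emit('chat message', msg);
  });

  socket.on('disconnect', () => console.log('client left:', socket.id));
});

httpServer.listen(3000, () => console.log('listening on :3000'));
```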
Although we've looked at some of the features in favor of Node.js, no technology is above limitations. For example, if you are building a CRUD application with no need for real-time data flow, then Node.js would not make your job any easier. If you are looking to build CPU-heavy applications, Node.js might disappoint you, because it runs your JavaScript on a single thread. But keeping in mind that it brings the flexibility of JavaScript to the server side and underpins architectures like microservices, it seems inevitable that Node.js will keep growing in the near future.

Server-Side Rendering
Implementing 5 Common Design Patterns in JavaScript (ES8)
Behavior Scripting in C# and Javascript for game developers


5 things you need to learn to become a server-side web developer

Amarabha Banerjee
19 Jun 2018
6 min read
The profession of back-end web development is calling out loud, and companies seek qualified server-side developers for their teams. The comprehensive set of knowledge and skills a back-end specialist carries helps them realize their potential in versatile web development projects. Before diving into what it takes to succeed at back-end development as a profession, let's look at what it's about. In simple words, the back end is the invisible part of any application that activates all its internal elements. If the front end answers the question "how does it look?", then the back end, or server-side web development, deals with "how does it work?". A back-end developer deals with the administrative part of the web application, the internal content of the system, and server-side technologies such as databases, architecture, and software logic. If you intend to become a professional server-side developer, a few basic steps will ease your journey. In this article we have listed five aspects of server-side development, servers, databases, networks, queues, and frameworks, which you must master to become a successful server-side web developer.

Servers and databases

At the heart of server-side development are servers, which are nothing but hardware and storage devices connected to the internet. Every time you ask your browser to load a web page, data stored on the servers is accessed and sent to the browser in a certain format. The bigger the application, the larger the amount of data stored on the server side; the larger the data, the higher the possibility of lag and slow performance. Databases are the systems in which this data is stored, and they come in two different types, relational and non-relational, each with its own pros and cons. Some popular databases you can learn to take your skills to the next level are SQL Server, MySQL, MongoDB, and DynamoDB.

Static and dynamic servers

Static servers are physical hard drives where application data, CSS and HTML files, pictures, and images are stored. Dynamic servers signify another layer between the server and the browser; they are often known as application servers. The primary function of these application servers is to process the data and format it for the web page when a data-fetching operation is initiated from the browser. This makes saving data much easier and data loading much faster. For example, Wikipedia's servers are filled with huge amounts of data, but it is not stored as HTML pages; it is stored as raw data. When queried by the browser, the application server processes the raw data and formats it into HTML before sending it to the browser. This makes the process a whole lot faster and saves space in physical data storage.
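As a rough illustration of that last point, here is a minimal sketch in Node.js of an "application server" that keeps raw data in a compact form and renders it into HTML only when the browser asks; the article records are hypothetical stand-in data.

```javascript
const http = require('http');

// Raw data lives on the server in a compact form, not as HTML pages.
const articles = [
  { title: 'GraphQL vs REST', readMinutes: 4 },      // hypothetical records
  { title: 'Top testing frameworks', readMinutes: 6 },
];

http.createServer((req, res) => {
  // The HTML is produced per request, at the moment the browser asks for it.
  const items = articles
    .map(a => `<li>${a.title} (${a.readMinutes} min read)</li>`)
    .join('\n');
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<html><body><ul>\n${items}\n</ul></body></html>`);
}).listen(3000, () => console.log('serving on :3000'));
```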
If you want to go a step further and think futuristically, the latest trend is moving your servers to the cloud, meaning server-side tasks are performed by cloud-based services like Amazon AWS and Microsoft Azure. This makes your task as a back-end developer much simpler: you decide which services you require to best run your application, and the rest is taken care of by the cloud service provider. Another aspect of server-side development that is generating a lot of interest among developers is serverless development. This is based on the concept that the cloud service provider allocates server space depending on your need, so you don't have to take care of back-end resources and requirements. In a way the name "serverless" is a misnomer, because the servers are still there; they are just in the cloud, and you don't have to bother about them. The primary role of a back-end developer in a serverless system is to figure out the best possible services, optimize the running cost on the cloud, and deploy and monitor the system for non-stop, robust performance.

The communication protocol

The protocol that defines data transfer between the client side and the server side is the HyperText Transfer Protocol (HTTP). When a search request is typed into the browser, an HTTP request with a URL is sent to the server, and the server sends back a response message indicating either that the request succeeded or that the web page was not found. When an HTML page is returned for a search query, it is rendered by the web browser. While processing the response, the browser may discover links to other resources (for example, an HTML page usually references JavaScript and CSS files) and send separate HTTP requests to download them. Both static and dynamic websites use exactly the same communication protocol and patterns. We have progressed a long way from the initial protocols, and newer technologies like SSL, TLS, and IPv6 now underpin web communication. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), now deprecated by the Internet Engineering Task Force (IETF), are cryptographic protocols that provide communications security over a computer network; they were introduced primarily to protect user data and provide increased security. Similarly, new addressing protocols had to be introduced in the late '90s to cater to the increasing number of internet users. Addressing protocols determine the IP address that identifies a server on the network. The initial protocol, IPv4, is currently being superseded by IPv6, which can provide 2^128 (about 3.4×10^38) addresses.

Message queuing

This is one of the most important aspects of creating fast and dynamic web applications. Message queuing is the stage where data is queued according to the different responses and then delivered to the browser. The process is asynchronous, which means the server and the browser need not interact with the message queue at the same time. Popular message queuing tools like RabbitMQ, MQTT, and ActiveMQ provide real-time message queuing functionality.
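To make the asynchronous producer/consumer idea concrete without committing to a particular broker, here is a toy in-memory sketch in Node.js. A real system would use a broker such as RabbitMQ; the point here is only the decoupling: the producer and consumer never interact at the same time.

```javascript
// A toy message queue: producers and consumers never talk directly.
class MessageQueue {
  constructor() { this.messages = []; this.waiting = []; }

  publish(msg) {
    const consumer = this.waiting.shift();
    if (consumer) consumer(msg);       // hand straight to a waiting consumer
    else this.messages.push(msg);      // otherwise buffer until one asks
  }

  consume() {
    return new Promise((resolve) => {
      const msg = this.messages.shift();
      if (msg !== undefined) resolve(msg);
      else this.waiting.push(resolve); // park the consumer until data arrives
    });
  }
}

const queue = new MessageQueue();

// Producer: emits a message every second, regardless of who is listening.
setInterval(() => queue.publish({ at: Date.now() }), 1000);

// Consumer: pulls messages at its own pace.
(async () => {
  for (let i = 0; i < 3; i++) {
    console.log('received', await queue.consume());
  }
})();
```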
Server-side frameworks and languages

Now comes the last, but one of the most important, pointers. If you have a particular language in mind, you can use a framework based on that language to add functionality to your application easily and efficiently. Some popular server-side frameworks are Node.js for JavaScript, Django for Python, Laravel for PHP, and Spring for Java. Using these frameworks, however, requires some amount of experience in the respective language. Now that you have a broad understanding of what server-side web development is and what its components are, you can jump right into server-side development, databases, and protocol management to progress into a successful professional back-end web developer.

The best backend tools in web development
Preparing the Spring Web Development Environment
Is novelty ruining web development?


The best backend tools in web development

Sugandha Lahoti
06 Jun 2018
5 min read
If you're a backend developer, it's easy to feel overwhelmed by the range of backend development tools available. It goes without saying that you should use what works for you, but sometimes it's not that easy to even work that out. With this in mind, this year's Skill Up report offers a useful insight into some of the most popular backend tools being used today. Let's take a look at the tools that came out on top; that should help you decide what to use, or maybe just what to learn. Read the Skill Up report in full: sign up to our weekly newsletter and download the PDF for free.

Node.js

More than 50% of respondents said they prefer Node.js, the popular server-side JavaScript framework. Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. It extends JavaScript, traditionally a front-end language, to do more than just create interactive websites, and it uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. The latest stable release, Node 10, will be the next candidate in line for Long Term Support (LTS) in October 2018. Node.js 10.0 comes with plenty of new features, like the OpenSSL 1.1.0 security toolkit, an upgraded npm, N-API, and much more. Get started with learning Node.js with the following books: Learning Node.js Development; Learn Node.js by Building 6 Projects; RESTful Web API Design with Node.js 10, Third Edition.

ASP.NET Core

The next popular alternative was ASP.NET Core, with over 25% of developers approving it as their choice of backend framework. ASP.NET Core is an open-source, cross-platform framework for building backends, web apps and services, and IoT apps. It provides a cloud-ready, environment-based configuration system, and it integrates seamlessly with popular client-side frameworks and libraries, including Angular, React, and Bootstrap. Get started with ASP.NET Core by reading: Learning ASP.NET Core 2.0; Mastering ASP.NET Core 2.0; ASP.NET Core 2 High Performance, Second Edition.

Express.js

Developers and tech pros also like to work with Express.js, which ranked no. 3 on our list. Express.js is a framework built on Node.js that helps developers build faster and smarter websites and web apps; it essentially extends Node.js for building complete web apps. It is the perfect framework to learn for developers who are fluent in Node.js but want to transition from plain server-side technologies to creating full applications. Express is lightweight and comes with extra built-in web application features and the Express API to support the already robust, feature-packed Node.js platform. It works seamlessly with other modules, offers HTTP utilities and middleware for creating APIs, and can help developers build single-page and multi-page websites as well as complex web apps. You can go through Projects in ExpressJS [Video], a complete course to learn professional web development using Express.js.
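For a sense of how little ceremony Express adds on top of Node, here is a minimal sketch of a JSON API with one piece of middleware and two routes, assuming Express 4.16+ (where express.json() is built in); the route paths and payloads are made up for illustration.

```javascript
// npm install express
const express = require('express');
const app = express();

app.use(express.json()); // built-in middleware: parse JSON request bodies

// Hypothetical route returning a hypothetical payload.
app.get('/api/books/:id', (req, res) => {
  res.json({ id: req.params.id, title: 'Learning Node.js Development' });
});

app.post('/api/books', (req, res) => {
  // req.body is available thanks to the express.json() middleware above.
  res.status(201).json({ created: req.body });
});

app.listen(3000, () => console.log('API listening on :3000'));
```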
Laravel

Next was Laravel, a prominent member of a new generation of web frameworks. It is one of the most popular PHP frameworks, and it is free and open source. It features a simple, fast routing engine; a powerful dependency injection container; multiple back ends for session and cache storage; database-agnostic schema migrations; robust background job processing; and real-time event broadcasting. The latest stable release, Laravel 5, is a substantial upgrade with a lot of new toys, while retaining the features that made Laravel wildly successful. It comes with plenty of architectural as well as design-based changes. Start building with Laravel with these videos: Beginning Laravel [Video]; Laravel Foundations: Basics to Every App [Video].

Java EE

The fifth most popular choice of backend tool is Java EE. The Enterprise Java standard, or Java EE, is a collection of technologies and APIs for the Java platform designed to support the enterprise: applications classified as large-scale, distributed, transactional, and highly available, designed to support mission-critical business requirements. Applications written to comply with the Java EE specification do not tie developers to a specific vendor; they can be deployed to any Java EE-compliant application server, which implements the Java EE platform APIs and provides the standard Java EE services. The latest stable release, Java EE 8, brings a load of features, mainly targeting newer architectures such as microservices, modernized security APIs, and cloud deployments. Our best picks for learning Java EE: Java EE 8 Application Development; Architecting Modern Java EE Applications; Java EE 8 High Performance.

The other backend tools among the top picks by developers included:

Spring, a programming and configuration model for building modern Java-based enterprise applications on any kind of deployment platform.
Django, a powerful Python web framework for creating RESTful web services. It reduces the amount of trivial code, which simplifies the creation of web applications and results in faster development.
Flask, a framework for building web servers in Python. It is a microframework, meaning it is not a full-stack web application development framework; it gives developers just the basics to get a web server running.
Firebase, Google's mobile platform that helps developers run mobile backend code without managing servers and develop high-quality apps.
Ruby on Rails, one of the oldest backend technologies. A certain percentage of people still prefer Ruby on Rails for their backend code. Rails is a flexible, IDE-friendly framework with easy functions and manipulations and the support of the powerful Ruby language.

The entire Skill Up survey report can be read on the Packt website, detailing what developers think about the changing tech landscape and the parameters driving that change. The survey report is launched at the start of the Skill Up campaign, where every eBook and video is available for $10. Go grab your free content now!


What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies

Pavan Ramchandani
01 Jun 2018
13 min read
Two decades ago, the IT industry saw tremendous opportunities with the dot-com boom. Similar to the dot-com bubble, the IT industry is transitioning through another period of innovation. The disruption is seen in major lines of business with the introduction of recent technology trends like cloud services, the Internet of Things (IoT), single-page applications, and social media. In this article, we cover the role and implications of RESTful web APIs in these emerging technologies. This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition.

Cloud services

We are in an era where business and IT are married together. For enterprises, a business model without an IT strategy is out of the question. While keeping the focus on the core business, the challenge ahead of the executive team is often optimizing the IT budget. Cloud computing has come to the rescue by bringing savings to the IT spending incurred in running a business. Cloud computing is an IT model for enabling anytime, anywhere, convenient, on-demand network access to a shared pool of configurable computing resources. In simple terms, cloud computing refers to the delivery of hosted services over the internet that can be quickly provisioned and decommissioned with minimal management effort and little intervention from the service provider.

Cloud characteristics

Five key characteristics are deemed essential for cloud computing:

On-demand self-service: the ability to automatically provision cloud-based IT resources as and when required by the cloud service consumer.
Broad network access: the ability to support seamless network access to cloud-based IT resources via different network elements such as devices, network protocols, and security layers.
Resource pooling: the ability to share IT resources among cloud service consumers using the multi-tenant model.
Rapid elasticity: the ability to dynamically scale IT resources at runtime and release them based on demand.
Measured service: the ability to meter service usage so that cloud service consumers are charged only for the services they utilize.

Cloud offering models

Cloud offerings can be broadly grouped into three major categories, IaaS, PaaS, and SaaS, based on where they sit in the technology stack:

Software as a Service (SaaS) delivers the applications required by an enterprise, saving the costs the enterprise would otherwise incur to procure, install, and maintain them; they are instead offered by a cloud service provider at competitive pricing.
Platform as a Service (PaaS) delivers the platforms required by an enterprise for building its applications, saving the cost of setting up and maintaining those platforms.
Infrastructure as a Service (IaaS) delivers the infrastructure required by an enterprise for running its platforms or applications, saving the cost of setting up and maintaining the infrastructure components.

RESTful APIs' role in cloud services

RESTful APIs can be looked on as the glue connecting cloud service providers and cloud service consumers. For example, an application developer who needs to display a weather forecast can consume the Google Weather API.
In this section, we will look at the applicability of RESTful APIs for provisioning resources in the cloud. For illustration, we will use the Oracle Cloud service platform. Users can set up a free trial account via https://Cloud.oracle.com/home and try out the examples discussed in the following sections. We will try to set up a test virtual machine instance using the REST APIs. The high-level steps are:

Locate the REST API endpoint
Generate an authentication cookie
Provision the virtual machine instance

Locating the REST API endpoint

Once users have signed up for an Oracle Cloud account, they can locate the REST API endpoint by navigating as follows:

Login screen: choose the relevant cloud account details and click the My Services button.
Home page: displays the cloud services dashboard for the user; click the Dashboard icon.
Dashboard screen: lists the various cloud offerings; click the Compute Classic offering.
Compute Classic screen: displays the details of the infrastructure resources utilized by the user.
Site Selector screen: displays the REST endpoint.

Generating an authentication cookie

Authentication is required for provisioning IT resources. For this purpose, we generate an authentication cookie using the Authenticate User REST API. The details of the API are as follows:

API function: authenticate the supplied user credentials and generate an authentication cookie for use in subsequent API calls.
Endpoint: <REST endpoint captured in the previous section>/authenticate/ (for example, https://compute.eucom-north-1.oracleCloud.com/authenticate/)
HTTP method: POST
Request header properties: Content-Type: application/oracle-compute-v3+json; Accept: application/oracle-compute-v3+json
Request body: user — two-part name of the user in the format /Compute-identity_domain/user; password — password for the specified user.
Sample request: { "password": "xxxxx", "user": "/Compute-586113456/test@gmail.com" }
Response header properties: set-cookie — the authentication cookie value.

The authentication cookie can be generated by invoking the Authenticate User REST API, for example via the Postman tool.
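The same call can be scripted instead of made through Postman. Here is a minimal sketch in Node.js (18+, where fetch is global) against the endpoint and body format described above; the identity domain, user, and password are placeholders to substitute with your own.

```javascript
// Endpoint and credentials are placeholders -- substitute your own.
const endpoint = 'https://compute.eucom-north-1.oracleCloud.com';

async function authenticate() {
  const res = await fetch(`${endpoint}/authenticate/`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/oracle-compute-v3+json',
      'Accept': 'application/oracle-compute-v3+json',
    },
    body: JSON.stringify({
      user: '/Compute-586113456/test@gmail.com', // /Compute-identity_domain/user
      password: 'xxxxx',
    }),
  });
  if (!res.ok) throw new Error(`authentication failed: ${res.status}`);
  // The cookie returned here must accompany subsequent provisioning calls.
  return res.headers.get('set-cookie');
}

authenticate().then(cookie => console.log('cookie:', cookie));
```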
Provisioning a virtual machine instance

Consumers can provision IT resources on the Oracle Compute Cloud infrastructure service using the LaunchPlans or Orchestration REST API. For this demonstration, we will use the LaunchPlans REST API. The details of the API are as follows:

API function: launch plan used to provision infrastructure resources in the Oracle Compute Cloud service.
Endpoint: <REST endpoint captured in the previous section>/launchplan/ (for example, https://compute.eucom-north-1.oracleCloud.com/launchplan/)
HTTP method: POST
Request header properties: Content-Type: application/oracle-compute-v3+json; Accept: application/oracle-compute-v3+json; Cookie: <authentication cookie>
Request body: instances — array of instances to be provisioned (for details of the properties required by each instance, refer to http://docs.oracle.com/en/Cloud/iaas/compute-iaas-Cloud/stcsa/op-launchplan--post.html); relationships — any relationships with other instances.
Sample request: { "instances": [ { "shape": "oc3", "imagelist": "/oracle/public/oel_6.4_2GB_v1", "name": "/Compute-586113742/test@gmail.com/test-vm-1", "label": "test-vm-1", "sshkeys": [] } ] }
Response body: the provisioned list of instances and their relationships.

Invoking the LaunchPlan REST API (again, via the Postman tool) creates the test virtual machine instance; an HTTP response status of 201 confirms that the provisioning request was successful. The provisioned instance status can then be checked via the cloud service instances page.

Internet of Things

The Internet of Things (IoT), as the name says, can be considered a technology enabler for things (including people) to connect to or disconnect from the internet. The term IoT was first coined by Kevin Ashton in 1999. With broadband Wi-Fi becoming widely available, it is becoming much easier to connect things to the internet. This has great potential to enable smart ways of living, and there are already many projects around smart homes, smart cities, and so on. A simple use case is predicting the arrival time of a bus, so that commuters benefit from knowing about any delays and can plan accordingly. In many developing countries, the transport system is equipped with smart devices that help commuters predict the arrival or departure time of a bus or train precisely. The analyst firm Gartner has predicted that more than 26 billion devices will be connected to the internet by 2020.

IoT platform

The IoT platform consists of four functional layers: device, data, integration, and service. The capabilities required at each layer are:

Device: device management capabilities supporting device registration, provisioning, and controlling access to devices; seamless connectivity to devices to send and receive data.
Data: management of the huge volume of data transmitted between devices; deriving intelligence from the data collected and triggering actions.
Integration: collaboration of information between devices.
Service: API gateways exposing the APIs.

IoT benefits

The IoT platform is seen as the latest evolution of the internet, and it is becoming widely used thanks to the lowering cost of technologies such as cheap sensors, cheap hardware, and low-cost, high-bandwidth networks. The connected human is the most visible outcome of the IoT revolution. People are connected to the IoT through various means, such as Wearables, Hearables, and Nearables, which can be used to improve the lifestyle, health, and wellbeing of human beings:

Wearables: any form of sophisticated, computer-like technology that can be worn or carried by a person, such as smart watches and fitness devices.
Hearables: wireless computing earpieces, such as headphones.
Nearables: smart objects with computing devices attached to them, such as door locks and car locks. Unlike Wearables or Hearables, Nearables are static.

In the healthcare industry, IoT-enabled devices can be used to monitor patients' heart rate or diabetes. Smart pills and nanobots could eventually replace surgery and reduce the risk of complications.

RESTful APIs' role in the IoT

The architectural pattern used to realize the majority of IoT use cases is the event-driven architecture pattern, a software pattern dealing with the creation, consumption, and identification of events. An event can be generalized as a change in the state of an entity. For example, a printer connected to the internet may emit an event when its cartridge is low on ink, so that the user can order a new one. The common capability required of devices connected to the internet is the ability to send and receive event data, and this can be easily accomplished with RESTful APIs.
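Node's built-in events module makes the printer example easy to sketch. The device below is simulated, and the 'low-ink' event name is made up for illustration; a real device would push the same event to a RESTful endpoint rather than log it.

```javascript
const { EventEmitter } = require('events');

// A simulated internet-connected printer.
class Printer extends EventEmitter {
  constructor() { super(); this.inkLevel = 100; }

  print(pages) {
    this.inkLevel -= pages;
    if (this.inkLevel < 20) {
      // State change -> event, as in the event-driven pattern described above.
      this.emit('low-ink', { inkLevel: this.inkLevel });
    }
  }
}

const printer = new Printer();

printer.on('low-ink', ({ inkLevel }) => {
  // In a real system this handler would POST the event to a RESTful API.
  console.log(`Ink at ${inkLevel}% -- time to order a cartridge`);
});

printer.print(50);
printer.print(35); // drops below the threshold and fires the event
```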
The following are some of the IoT APIs available on the market:

Hayo API: used by developers to build virtual remote controls for the IoT devices in a home. The API senses and transmits events between virtual remote controls and devices, making it easier for users to achieve desired actions in applications by simply manipulating a virtual remote control.
Mozilla Battery Status API: used to monitor the system battery level of mobile devices and stream notification events for changes in battery level and charging progress. Its integration allows users to retrieve real-time updates of device battery levels and status.
Caret API: allows status sharing across devices, with customizable statuses.

Modern web applications

Web-based applications have seen a drastic evolution from Web 1.0 to Web 2.0. Web 1.0 sites were designed mostly with static pages; Web 2.0 added more dynamism. Here is a quick snapshot of the evolution of web technologies over the years:

1993-1995: static HTML websites with embedded images and minimal JavaScript
1995-2000: dynamic web pages driven by JSP and ASP, with CSS for styling and JavaScript for client-side validation
2000-2008: content management systems like WordPress, Joomla, and Drupal
2009-2013: rich internet applications, portals, animations, Ajax, mobile web applications
2014 onwards: single-page apps, mashups, the social web

Single-page applications

Single-page applications are web applications designed to load the application in a single HTML page. Rather than refreshing the whole page to display new content, as traditional web applications do, they enhance the user experience by dynamically updating the current page, much like a desktop application. Some key features and benefits of single-page applications are:

Content loads in a single page
No refresh of the page
Responsive design
Better user experience
The ability to fetch data asynchronously using Ajax
The ability to bind data dynamically

RESTful APIs' role in single-page applications

In a traditional web application, the client requests a URI and the requested page is displayed in the browser. When the user subsequently submits a form, the submitted data is sent to the server, and the response is displayed by reloading the whole page. A single-page application instead retrieves just the data it needs, asynchronously, through RESTful API calls, and updates the current page in place, as sketched below.
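Here is a minimal sketch of that asynchronous pattern in browser JavaScript; the REST endpoint and the element ID are hypothetical.

```javascript
// Runs in the browser of a single-page application.
// Only data travels over the wire; the page itself is never reloaded.
async function showProfile(userId) {
  const res = await fetch(`/api/users/${userId}`, { // hypothetical REST endpoint
    headers: { 'Accept': 'application/json' },
  });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  const user = await res.json();

  // Update just the fragment of the page that changed.
  document.querySelector('#profile').textContent =
    `${user.name} <${user.email}>`;
}

showProfile(42).catch(console.error);
```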
Social media

Social media is the future of communication; it not only lets people interact but also enables the transfer of different content formats, such as audio, video, and images, between users. In Web 2.0 terms, social media is a channel that interacts with you as well as providing information. While regular media is one-way communication, social media is two-way communication that asks for your comments and lets you vote. Social media has seen tremendous usage via networking sites such as Facebook, LinkedIn, and so on.

Social media platforms

Social media platforms are based on Web 2.0 technology, which serves as an interactive medium for collaboration, communication, and sharing among users. We can classify social media platforms broadly by usage:

Social networking services: platforms where people manage their social circles and interact with each other, such as Facebook.
Social bookmarking services: allow one to save, organize, and manage links to various resources over the internet, such as StumbleUpon.
Social media news: platforms that allow people to post news or articles, such as Reddit.
Blogging services: platforms where users exchange comments and views, such as Twitter.
Document sharing services: platforms that let you share your documents, such as SlideShare.
Media sharing services: platforms that let you share media content, such as YouTube.
Crowdsourcing services: obtaining needed services, ideas, or content by soliciting contributions from a large group of people or an online community, such as Ushahidi.

Social media benefits

User engagement through social media has seen tremendous growth, and many companies use social media channels for campaigns and branding. Social media offers various benefits:

Customer relationship management: a company can use social media to promote its brand and potentially benefit from positive customer reviews.
Customer retention and expansion: customer reviews can become a valuable source of information for retention and can also help add new customers.
Market research: social media conversations can become useful insight for market research and planning.
Competitive advantage: visibility into competitors' messages enables a company to build strategies to handle its peers in the market.
Public relations: corporate news can be conveyed to an audience in real time.
Cost control: compared to traditional campaign methods, social media offers better advertising at a cheaper cost.

RESTful API role in social media

Many social networks provide RESTful APIs to expose their capabilities. Some RESTful APIs of popular social media services:

YouTube: add YouTube features to your application, including the ability to upload videos, create and manage playlists, and more (https://developers.google.com/youtube/v3/).
Facebook: the Graph API is the primary way to get data out of, and put data into, Facebook's platform. It is a low-level HTTP-based API that you can use to programmatically query data, post new stories, manage ads, upload photos, and perform a variety of other tasks that an app might implement (https://developers.facebook.com/docs/graph-api/overview).
Twitter: Twitter provides APIs to search, filter, and create ad campaigns (https://developer.twitter.com/en/docs).

To summarize, we discussed modern technology trends and the role of RESTful APIs in each of these areas, including their implications for the cloud, virtual machines, user experience across architectures, and building social media applications. To know more about designing and working with RESTful web services, do check out RESTful Java Web Services, Second Edition.
Getting started with Django and Django REST frameworks to build a RESTful app How to develop RESTful web services in Spring

Python web development: Django vs Flask in 2018

Aaron Lazar
28 May 2018
7 min read
A colleague of mine wrote an article over two years ago comparing the two top Python web frameworks, Django and Flask. It's 2018 now, and a lot has changed in the IT world. A couple of frameworks have emerged or gained popularity in the last three years, Bottle and CherryPy for example, but Django and Flask have stood their ground and remain the top two Python frameworks. Moreover, there have been major breakthroughs in web application architecture, like the rise of microservices, which has in turn pushed the growth of newer architectures like serverless and cloud-native. I thought it would be a good idea to present a more modern comparison of these two frameworks, to help you make an informed decision on which one to choose for your application development. Before we dive into ripping these frameworks apart, let's briefly go over the factors we'll consider while evaluating them. Here's what I have in mind, in no particular order: ease of use, popularity, community support, job market, performance, and modern architecture support.

Ease of use

I like to cover this first, because I know it's really important for developers who are just starting out to assess the learning curve before they attempt to scale it. By ease of use, I mean how easy it is to get started with the tool in your day-to-day projects. Flask, like its webpage, is a very simple tool to learn, simply because it's built to be simple. Moreover, the framework is unopinionated, meaning it will let you implement things the way you choose, without throwing a fuss. This is really important when you're starting out: you don't want to run into too many issues that break your confidence as a developer. Django is a great framework to learn too, but, while several Python developers will disagree with me, I would say Django is a pretty complex framework, especially for a newbie. That is not all bad: when you're building a large project, you want to be the one holding the reins. If you're starting out with basic projects, though, it may be wise not to choose Django. The way I see it, learning Flask first will allow you to learn Django much faster.

Popularity

Both frameworks are quite popular, with Django sitting at 34k stars on GitHub and Flask having a slight edge at 36k. If you take a look at Google Trends, both tools follow a pretty similar trend, with Django's search volume much higher, owing to its longer existence (source: SEMrush). As mentioned before, Flask is more popular among beginners and those who want to build basic websites easily, while Django is more popular among professionals with years of experience building robust websites.

Community support and documentation

In terms of community support, we're looking at how involved the community is in developing the tool and providing support to those who need it. This is quite important for someone starting out with a tool, or, for that matter, when a new version is released and you need to keep yourself up to date. Django features 170k tags on Stack Overflow, over seven times Flask's 21k. Although Django is the clear winner in terms of numbers, both mailing lists are quite active and you can get all the help you need quite easily.
When it comes to documentation, Django has solid documentation that can help you get up and running in no time. Flask has good documentation too, but you usually have to do some digging to find what you're looking for.

Job market

Jobs are really important, especially if you're looking for a corporate one. It's quite natural that the organization hiring you will already be working with a particular stack and will expect you to have those skills before you step in. Django records around 2k jobs on Indeed in the USA, while Flask records exactly half that amount. A couple of years ago the situation was much the same: Django was a prime requirement, while Flask had just started gaining popularity. You'll find comments stating that "picking up Flask might be a tad easier than Django, but for Django you will have more job openings". Itjobswatch.uk lists Django as the 2nd most needed skill for a Python developer, whereas Flask is way down at 20th (source: itjobswatch.uk). Clearly, Django is in more demand than Flask. However, if you are an independent developer, you're still free to choose the framework you wish to work with.

Performance

Honestly speaking, Flask is a microframework, which means it delivers much better performance in terms of speed. This is also because in Flask you could write 10k lines of code for something that would take 24k lines in Django. Comparing response times for data fetched from a remote server, both tools perform much the same, with Flask holding a slight edge over Django. When loading data from a database through an ORM, the gap is quite large, with Flask being much more efficient. When we talk about performance, though, we also need to consider the power each framework gives you for building large apps, and here Django is the clear winner: it allows you to build massive, enterprise-grade applications, serving as a full-stack framework that can easily be integrated with JavaScript to build great applications. Flask, on the other hand, is not suitable for large applications. The JetBrains Python Developer Survey 2017 revealed that Django was the more preferred option among respondents.

Modern architecture support

The monolith has been broken, and microservices have risen. What's interesting is that although applications are huge, they're now composed of smaller services working together to make up the actual application. While you might think Django would be a great framework for building microservices, it turns out that Flask serves much better, thanks to its lightweight architecture and simplicity. While working on a huge enterprise application, you might find Flask interwoven wherever a light framework works best. (Here's the story of one company that ditched Django for microservices.) I'm not going to score these tools, because they're both awesome in their own right. The difference arises when you need to choose one for your project, and it's quite evident that Flask should be your choice when you're working on a small project, or a smaller application built into a larger one: a blog, a small website, or a web service.
If, though, you're on the A-team, making a super awesome website for, say, Facebook or a billion-dollar enterprise, then instead of going the Django Unchained route, choose Django with a hint of Flask added in, for all the right reasons. :) Django hit version 2.0 last year, while Flask hit version 1.0 last month. There are some great resources to get you up and running with both Django and Flask. So what are you waiting for? Go build that website!

Why functional programming in Python matters
Should you move to Python 3.7
Why is Python so good for AI and Machine Learning?


How is Node.js Changing Web Development?

Antonio Cucciniello
05 Jul 2017
5 min read
If you have been paying even remote attention to what is going on in the web development space, you know that Node.js has become extremely popular and is many developers' choice of backend technology. It all started in 2009 with Ryan Dahl. Node.js is a JavaScript runtime built on Google Chrome's V8 JavaScript engine. Over the past couple of years, more and more engineers have moved towards Node.js in their web applications. With plenty of people using it now, how has Node.js changed web development?

Scalability

Scalability is the one thing that makes Node.js so popular. Node.js runs everything in a single thread. This single thread is event-driven (due to JavaScript being the language it is written in) and non-blocking. When you spin up a server in your Node web app, every new user connection launches an event, and that event gets handled concurrently with the other events or connections occurring at the same time. In web applications built with other technologies, a large number of users would slow down the server; in contrast, a Node application, with its non-blocking, event-driven nature, allows for highly scalable applications. Companies that are attempting to scale can build their apps with Node, preventing slowdowns they may otherwise have had. It also means they do not have to purchase as much server space as someone whose web app was not developed with Node.

Ease of use

As previously mentioned, Node.js is written with JavaScript. JavaScript was always used to add functionality to the front end of applications, but with the addition of Node.js, you can now write the entire application in JavaScript. This makes it so much easier to be a frontend developer who can edit some backend code, or a backend engineer who can play around with some frontend code. In turn, it is so much easier to become a full-stack engineer: you do not really need to learn anything new except the basic concepts of how things work in the backend. As a result, we have recently seen the rise of the full-stack JavaScript developer. This also reduces the complexity of working with multiple languages, minimizing any confusion that might arise when you have to switch from JavaScript on the front end to some other language on the backend.

Open source community

When Node was released, npm, the Node package manager, was also given to the public. The Node package manager does exactly what it says on the tin: it allows developers to quickly add and use third-party libraries and frameworks in their code. If you have used Node, then you can vouch for me here when I say there is almost always a package you can use in your application to make development easier or to automate a larger task. There are packages to help create HTTP servers, help with image processing, and help with unit testing. If you need it, it's probably been made. The even more awesome part about this community is that it's growing by the day, and people are extremely active, contributing to the many open source packages out there to help developers with various needs. This increases the productivity of all developers using Node in their applications, because they can shift their focus from peripheral concerns to the main purpose of their application.
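A tiny sketch makes the single-threaded, non-blocking model described under Scalability concrete, using only Node's built-in fs module; the file name is a placeholder.

```javascript
const fs = require('fs');

console.log('request 1: start reading a large file');

// Non-blocking: the read is handed to the system, and this callback
// is queued as an event to run when the data is ready.
fs.readFile('./big-file.txt', 'utf8', (err, data) => { // placeholder file
  if (err) return console.error('read failed:', err.message);
  console.log(`request 1: finished, ${data.length} characters`);
});

// The single thread is free immediately, so other "requests" proceed
// without waiting for the file read to complete.
console.log('request 2: handled while request 1 is still in flight');
console.log('request 3: handled too');
```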
Aid in Frontend Development

The release of Node did not only benefit the backend side of development; it also benefitted the frontend. New frameworks that can be used on the frontend, such as React.js or virtual-dom, are all installed using NPM. With packages like browserify, you can even use Node's require to use packages on the frontend that would normally be used on the backend. You can be even more productive and develop things faster on the frontend as well!

Conclusion

Node.js is definitely changing web development for the better. It is making engineers more productive through the use of one language across the entire stack. So, my question to you is: if you have not tried out Node in your application, what are you waiting for? Do you not like being more productive? If you enjoyed this post, tweet about your opinion of how Node.js has changed web development. If you dislike Node.js, I would love to hear your opinion as well!

About the author

Antonio Cucciniello is a Software Engineer with a background in C, C++, and JavaScript (Node.js) from New Jersey. His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files with their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.

MapReduce on Amazon EMR with Node.js

Pedro Narciso
14 Dec 2016
8 min read
In this post, you will learn how to write a Node.js MapReduce application and how to run it on Amazon EMR. You don't need to be familiar with Hadoop or the EMR APIs. In order to run the examples, you will need a GitHub account, an Amazon AWS account, some money to spend at AWS, and Bash or an equivalent installed on your computer.

EMR, BigData, and MapReduce

We define BigData as those data sets too large or too complex to be processed by traditional processing applications. BigData is also a relative term: a data set can be too big for your Raspberry Pi while being a piece of cake for your desktop.

What is MapReduce?

MapReduce is a programming model that allows data sets to be processed in a parallel and distributed fashion. How does it work? You create a cluster and feed it with the data set. Then, you define a mapper and a reducer. MapReduce involves the following three steps:

Mapping step: This breaks down the input data into KeyValue pairs
Shuffling step: KeyValue pairs are grouped by Key
Reducing step: KeyValue pairs are processed by Key in parallel

It's guaranteed that all data belonging to a single key will be processed by a single reducer instance.

Our processing job project directory setup

Today, we will implement a very simple processing job: counting unique words from a set of text files. The code for this article is hosted here. Let's set up a new directory for our project:

$ mkdir -p emr-node/bin
$ cd emr-node
$ npm init --yes
$ git init

We also need some input data. In our case, we will download some books from Project Gutenberg as follows:

$ mkdir data
$ curl -Lo data/tmohah.txt http://www.gutenberg.org/ebooks/45315.txt.utf-8
$ curl -Lo data/mad.txt http://www.gutenberg.org/ebooks/5616.txt.utf-8

Mapper and Reducer

As we stated before, the mapper will break down its input into KeyValue pairs. Since we use the streaming API, we will read the input from stdin. We will then split each line into words, and for each word, print "word<TAB>1" to stdout; the TAB character is the expected field separator. We will see later the reason for setting "1" as the value. In plain JavaScript, our ./bin/mapper can be expressed as:

#!/usr/bin/env node
const readline = require('readline');
const rl = readline.createInterface({ input: process.stdin });

rl.on('line', function(line){
  line.trim().split(' ').forEach(function(word){
    console.log(`${word}\t1`);
  });
});

As you can see, we have used the readline module (a Node built-in module) to parse stdin. Each line is broken down into words, and each word is printed to stdout as we stated before. Time to implement our reducer. The reducer expects a set of KeyValue pairs, sorted by key, as input, such as the following:

First<TAB>1
First<TAB>1
Second<TAB>1
Second<TAB>1
Second<TAB>1

We then expect the reducer to output the following:

First<TAB>2
Second<TAB>3

Reducer logic is very simple and can be expressed in pseudocode as:

IF !previous_key
  previous_key = current_key
  counter = value
ELSE IF previous_key equals current_key
  counter = counter + value
ELSE
  print previous_key<TAB>counter
  previous_key = current_key
  counter = value

The first branch is necessary to initialize the previous_key and counter variables.
Let's see the real JavaScript implementation of ./bin/reducer:

#!/usr/bin/env node
var previousKey, counter;
const readline = require('readline');
const rl = readline.createInterface({ input: process.stdin });

function print(){
  console.log(`${previousKey}\t${counter}`);
}

function countWord(line) {
  let [currentKey, value] = line.split('\t');
  value = +value;
  if(typeof previousKey === 'undefined'){
    previousKey = currentKey;
    counter = value;
    return;
  }
  if(previousKey === currentKey){
    counter = counter + value;
    return;
  }
  print();
  previousKey = currentKey;
  counter = value;
}

process.stdin.on('end', function(){
  print();
});

rl.on('line', countWord);

Again, we use the readline module to parse stdin line by line. The countWord function implements the reducer logic described before. The last thing we need to do is to set execution permissions on those files:

chmod +x ./bin/mapper
chmod +x ./bin/reducer

How do I test it locally?

You have two ways to test your code:

Install Hadoop and run a job
Use a simple shell script

The second one is my preferred one for its simplicity:

./bin/mapper <<EOF | sort | ./bin/reducer
first second first first second first
EOF

It should print the following:

first<TAB>4
second<TAB>2

We are now ready to run our job in EMR!

Amazon environment setup

Before we run any processing job, we need to perform some setup on the AWS side. If you do not have an S3 bucket, you should create one now. Under that bucket, create the following directory structure:

<your bucket>
├── EMR
│   └── logs
├── bootstrap
├── input
└── output

Upload our previously downloaded Project Gutenberg books to the input folder. We also need the AWS CLI installed on the computer; you can install it with the Python package manager. If you do not have the AWS CLI on your computer, run:

$ sudo pip install awscli

awscli requires some configuration, so run the following and provide the requested data:

$ aws configure

You can find this data in your Amazon AWS web console. Be aware that usability is not Amazon's strongest point. If you do not have your IAM EMR roles yet, it is time to create them:

aws emr create-default-roles

Good. You are now ready to deploy your first cluster. Check out this run-cluster.sh script:

#!/bin/bash
MACHINE_TYPE='c1.medium'
BUCKET='pngr-emr-demo'
REGION='eu-west-1'
KEY_NAME='pedro@triffid'

aws emr create-cluster --release-label 'emr-4.0.0' --enable-debugging --visible-to-all-users --name PNGRDemo \
  --instance-groups InstanceCount=1,InstanceGroupType=CORE,InstanceType=$MACHINE_TYPE InstanceCount=1,InstanceGroupType=MASTER,InstanceType=$MACHINE_TYPE \
  --no-auto-terminate --log-uri s3://$BUCKET/EMR/logs \
  --bootstrap-actions Path=s3://$BUCKET/bootstrap/bootstrap.sh,Name=Install \
  --ec2-attributes KeyName=$KEY_NAME,InstanceProfile=EMR_EC2_DefaultRole \
  --service-role EMR_DefaultRole --region $REGION

The previous script will create a cluster with 1 master and 1 core node, which is big enough for now. You will need to update this script with your own bucket, region, and key name. Remember that your keys are listed at "AWS EC2 console/Key pairs". Running this script will print something like the following:

{
  "ClusterId": "j-1HHM1B0U5DGUM"
}

That is your cluster ID, and you will need it later. Please visit your Amazon AWS EMR console and switch to your region. Your cluster should be listed there. It is possible to add the processing steps with either the UI or the AWS CLI.
Let's use a shell script (add-step.sh):

#!/bin/bash
CLUSTER_ID=$1
BUCKET='pngr-emr-demo'
OUTPUT='output/1'

aws emr add-steps --cluster-id $CLUSTER_ID --steps Name=CountWords,Type=Streaming,Args=[-input,s3://$BUCKET/input,-output,s3://$BUCKET/$OUTPUT,-mapper,mapper,-reducer,reducer]

It is important to point out that our OUTPUT directory must not already exist at S3; otherwise, the job will fail. Call ./add-step.sh plus the cluster ID to add our CountWords step:

./add-step.sh j-1HHM1B0U5DGUM

Done! So go back to the Amazon UI, reload the cluster page, and check the steps. The "CountWords" step should be listed there. You can track job progress from the UI (reload the page) or from the command line interface. Once the job is done, terminate the cluster. You will probably want to configure the cluster to terminate as soon as it finishes or when any step fails; termination behavior can be specified with the "aws emr create-cluster" command. Sometimes the bootstrap process can be difficult. You can SSH into the machines, but before that, you will need to modify their security groups, which are listed at "EC2 web console/security groups".

Where to go from here?

You can (and should) break down your processing jobs into smaller steps, because it will simplify your code and make your steps more composable. You can compose more complex processing jobs by using the output of one step as the input of the next step. Imagine that you have run the CountWords processing job several times and now you want to sum the outputs. Well, for that particular case, you just add a new step with an "identity mapper" and your already built reducer, and feed it with all of the previous outputs. Now you can see why we output "word<TAB>1" from the mapper.

About the author

Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.

Building Better Bundles: Why process.env.NODE_ENV Matters for Optimized Builds

Mark Erikson
14 Nov 2016
5 min read
JavaScript developers are keenly aware of the need to reduce the size of deployed assets, especially in today's world of single-page apps. This usually means running increasingly complex JavaScript codebases through build steps that produce a minified bundle for deployment. However, if you read a typical tutorial on setting up a build tool like Browserify or Webpack, you'll see numerous references to a variable called process.env.NODE_ENV. Tutorials always talk about how this needs to be set to a value like "production" in order to produce a properly optimized bundle, but most articles never really spell out why this value matters and how it relates to build optimization. Here's an explanation of why process.env.NODE_ENV is used and how it fits into the typical build process.

Operating system environment variables are widely used as a method of configuring applications, especially as a way to activate behavior based on different deployment environments (such as development vs testing vs production). Node.js exposes the current process's environment variables to the script as an object called process.env. From there, the Express web server framework popularized using an environment variable called NODE_ENV as a flag to indicate whether the server should be running in "development" mode vs "production" mode. At runtime, the script looks up that value by checking process.env.NODE_ENV.

Because it was used within the Node ecosystem, browser-focused libraries also started using it to determine what environment they were running in, and using it to control optimizations and debug mode behavior. For example, React uses it as the equivalent of a C preprocessor #ifdef to act as conditional checking for debug logging and perf tracking, roughly like this:

function someInternalReactFunction() {
  // do actual work part 1

  if(process.env.NODE_ENV === "development") {
    // do debug-only work, like recording perf stats
  }

  // do actual work part 2
}

If process.env.NODE_ENV is set to "production", all those if clauses will evaluate to false, and the potentially expensive debug code won't run. In addition, in conjunction with a tool like UglifyJS that does minification and removal of dead code blocks, a clause that is surrounded with if(process.env.NODE_ENV === "development") will become dead code in a production build and be stripped out, thus reducing bundled code size and execution time.

However, because the NODE_ENV environment variable and the corresponding process.env.NODE_ENV runtime field are normally server-only concepts, by default those values do not exist in client-side code. This is where build tools such as Webpack's DefinePlugin or the Browserify Envify transform come in, which perform search-and-replace operations on the original source code. Since these build tools are doing transformation of your code anyway, they can force the existence of global values such as process.env.NODE_ENV. (It's also important to note that because DefinePlugin in particular does a direct text replacement, the value given to DefinePlugin must include actual quotes inside of the string itself. Typically, this is done either with alternate quotes, such as '"production"', or by using JSON.stringify("production").)

Here's the key: the build tool could set that value to anything, based on any condition that you want, as you're defining your build configuration.
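As a concrete illustration, here is a minimal sketch of what such a config file might look like (illustrative, not taken from the article; it assumes Webpack's built-in DefinePlugin, and the entry and output names are placeholders):

// webpack.production.config.js: a minimal, illustrative sketch
const webpack = require('webpack');

module.exports = {
  entry: './src/index.js',            // placeholder entry point
  output: { filename: 'bundle.js' },  // placeholder output name
  plugins: [
    new webpack.DefinePlugin({
      // DefinePlugin performs direct text replacement, so the value must
      // carry its own quotes; JSON.stringify yields the string '"production"'.
      'process.env.NODE_ENV': JSON.stringify('production')
    })
  ]
};

With this in place, every occurrence of process.env.NODE_ENV in the bundled client code is textually replaced with "production" before minification runs.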
For example, I could have a webpack.production.config.js Webpack config file, like the sketch above, that always uses the DefinePlugin to set that value to "production" throughout the client-side bundle. It wouldn't have to check the actual current value of the "real" process.env.NODE_ENV variable while generating the Webpack config, because as the developer I would know that any time I'm doing a "production" build, I would want to set that value in the client code to "production".

This is where the "code I'm running as part of my build process" and "code I'm outputting from my build process" worlds come together. Because your build script is itself most likely JavaScript code running under Node, it's going to have process.env.NODE_ENV available to it as it runs. Because so many tools and libraries already share the convention of using that field's value to determine their dev-vs-production status, the common convention is to use the current value of that field inside the build script as it's running to also determine the value of that field as applied to the client code being transformed.

Ultimately, it all comes down to a few key points:

- NODE_ENV is a system environment variable that Node exposes into running scripts.
- It's used by convention to determine dev-vs-prod behavior, by server tools, build scripts, and client-side libraries alike.
- It's commonly used inside of build scripts (such as Webpack config generation) as both an input value and an output value, but the tie between the two is still just convention.
- Build tools generally do a transform step on the client-side code, replacing any references to process.env.NODE_ENV with the desired value; the resulting code then contains dead code blocks, as debug-only code now sits inside an if(false)-type condition, ensuring that code doesn't execute at runtime.
- Minifier tools such as UglifyJS will strip out the dead code blocks, leaving the production bundle smaller.

So, the next time you see process.env.NODE_ENV mentioned in a build script, hopefully you'll have a much better idea why it's there.

About the author

Mark Erikson is a software engineer living in southwest Ohio, USA, where he patiently awaits the annual heartbreak from the Reds and the Bengals. Mark is the author of the Redux FAQ, maintains the React/Redux Links list and Redux Addons Catalog, and occasionally tweets at @acemarke. He can usually be found in the Reactiflux chat channels, answering questions about React and Redux. He is also slightly disturbed by the number of third-person references he has written in this bio!

What is the API Economy?

Darrell Pratt
03 Nov 2016
5 min read
If you have pitched the idea of a set of APIs to your boss, you might have run across this question: "Why do we need an API, and what does it have to do with an economy?" The answer is the API economy, but it's more than likely that that answer will be met with more questions. So let's take some time to unpack the concept and get through some of the hyperbole surrounding the topic.

An economy (from Greek οίκος – "household" and νέμoμαι – "manage") is an area of the production, distribution, or trade, and consumption of goods and services by different agents in a given geographical location. - Wikipedia

If we take the definition of economy from Wikipedia and the definition of API as an Application Programming Interface, then what we should be striving to create is a platform (as the producer of the API) that will attract a set of agents that will use that platform to create, trade, or distribute goods and services to other agents over the Internet (our geography has expanded). The central tenet of this economy is that the APIs themselves need to provide the right set of goods (data, transactions, and so on) to attract other agents (developers and business partners) who can grow their businesses alongside ours and further expand the economy. This piece from Gartner explains the API economy very well, and sums it up neatly: "The API economy is an enabler for turning a business or organization into a platform." Let's explore a bit more about APIs and look at a few examples of companies that are doing a good job of running API platforms.

The evolution of the API economy

If you had asked someone what an API actually was 10 or more years ago, you might have received puzzled looks. The Application Programming Interface at that time was something that the professional software developer used to interface with more traditional enterprise software. That evolved into the popularity of the SDK (Software Development Kit) and a better mainstream understanding of what it meant to create integrations or applications on pre-existing platforms. Think of the iOS SDK or Android SDK, and how those kits and the distribution channels that Apple and Google created have led to the explosion of the apps marketplace. Jeff Bezos's mandate that all IT assets at Amazon have an API was a major event in the API economy timeline. Amazon continued to build APIs such as SNS, SQS, Dynamo, and many others. Each of these API components provides a well-defined service that any application can use, and together they have significantly reduced the barrier to entry for new software and service companies. With this foundation set, the list of companies providing deep API platforms has steadily increased.

How exactly does one profit in the API economy?

If we survey a small set of API frameworks, we can see that companies use their APIs in different ways: to add value to their underlying set of goods, or to create a completely new revenue stream for the company.

Amazon AWS

Amazon AWS is the clearest example of an API as a product unto itself. Amazon makes available a large set of services that provide defined functionality, and for which Amazon charges rates based upon usage of CPU and storage (it gets complicated). Each new service they launch addresses a new area of need and works to provide integrations between the various services.

Social APIs

Facebook, Twitter, and others in the social space run API platforms to increase the usage of their properties. Some of the inherent value in Facebook comes from sites and applications far afield from facebook.com, and their API platform enables this. Twitter has had a more complicated relationship with its API users over time, but the API does provide many methods that allow both apps and websites to tap into Twitter content and thus extend Twitter's reach and audience size.

Chat APIs

Slack has created a large economy of applications focused around its chat services and built up a large number of partners and smaller applications that add value to the platform. Slack's API approach is centered on providing a platform for others to integrate with and add content into the Slack data system. This approach is more open than the one taken by Twitter, and the fast adoption has added large sums to Slack's current valuation. Alongside the meteoric rise of Slack, the concept of the bot as an assistant has also taken off. Companies like api.ai are offering services that enable chat services with AI as a service. The service offerings that surround the bot space are growing rapidly and offer a good set of examples as to how a company can monetize its API.

Stripe

Stripe competes in the payments-as-a-service space along with PayPal, Square, and Braintree. Each of these companies offers API platforms that vastly simplify the integration of payments into web sites and applications. Anyone who built an e-commerce site before 2000 can and will appreciate the simplicity and power that the API economy brings to the payment industry (see the short sketch below). The pricing strategy in this space is generally per use case and is relatively straightforward.
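To illustrate that simplicity, here is a minimal sketch of creating a test charge with Stripe's Node library (illustrative only; the key shown is a placeholder and tok_visa is Stripe's standard test card token):

const stripe = require('stripe')('sk_test_your_key_here'); // placeholder test key

// Create a $20.00 USD charge against Stripe's built-in test card token.
stripe.charges.create({
  amount: 2000,          // amount in cents
  currency: 'usd',
  source: 'tok_visa',    // test token representing a Visa card
  description: 'Example charge'
})
  .then(charge => console.log('Charge succeeded:', charge.id))
  .catch(err => console.error('Charge failed:', err.message));

Compare that handful of lines with hand-rolling card processing and settlement logic, and the appeal of the API economy in payments is obvious.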
It takes a community to make the API economy work

There are very few companies that will succeed by building an API platform without growing an active community of developers and partners around it. While it is technically easy to create an API given the tooling available, without an active support mechanism, and detailed and easily consumable documentation, your developer community may never materialize. Facebook and AWS are great examples to follow here: they both actively engage with their developer communities and deliver rich sets of documentation and use-cases for their APIs.

The Future is Node

Owen Roberts
09 Sep 2016
2 min read
In just a few years, we've seen Node.js explode onto the tech scene and go from strength to strength. In fact, the rate of adoption has been so great that the Node Foundation has mentioned that in the last year alone, the number of developers using the server-side platform has grown by 100% to reach a staggering 3.5 million users. Early adopters of Node have included Netflix, PayPal, and even Walmart. The Node fanbase is constantly building new Node Package Manager (npm) packages to share among themselves. With React and Angular offering the perfect accompaniment to Node in modern web applications, along with a host of JavaScript tools like Gulp and Grunt able to use Node's best practices for easier development, Node has become an essential tool for the modern JavaScript developer, one that shows no signs of slowing down or being replaced.

Whether Node will be around a decade from now remains to be seen, but with a hungry user base, thousands of user-created npm packages, and full-stack JavaScript moving to become the cornerstone of most web applications, it's definitely not going away anytime soon. For now, the future really is Node.

Want to get started learning Node? Or perhaps you're looking to give your skills a boost to ensure you stay on top? We've just released Node.js Blueprints, and if you're looking to see the true breadth of possibilities that Node offers, then there's no better way to discover how to apply this framework in new and unexpected ways. But why wait? With a free Mapt account you can read the first 2 chapters for nothing at all! When you're ready to continue learning just what Node can do, sign up to Mapt to get unlimited access to every chapter in the book, along with the rest of our entire video and eBook range, at just $29.99 per month!

With Node.js, it’s easy to get things done

Packt Publishing
05 Sep 2016
4 min read
Luciano Mammino is the author (alongside Mario Casciaro) of the second edition of Node.js Design Patterns, released in July 2016. He was kind enough to speak to us about his life as a web developer and working with Node.js – as well as assessing Node's position within an exciting ecosystem of JavaScript libraries and frameworks. Follow Luciano on Twitter – he tweets from @loige.

1. Tell us about yourself – who are you and what do you do?

I'm an Italian software developer living in Dublin and working at Smartbox as a Senior Engineer in the Integration team. I'm a lover of JavaScript and Node.js, and I have a number of upcoming side projects that I am building with these amazing technologies.

2. Tell us what you do with Node.js. How does it fit into your wider development stack?

The Node.js platform is becoming ubiquitous; the range of problems that you can address with it is growing bigger and bigger. I've used Node.js on a Raspberry Pi, on desktop and laptop computers, and in the cloud quite successfully to build a variety of applications: command line scripts, automation tools, APIs, and websites. With Node.js it's really easy to get things done. Most of the time I don't need to switch to other development environments or languages. This is probably the main reason why Node.js fits very well in my development stack.

3. What other tools and frameworks are you working with? Do they complement Node.js?

Some of the tools I love to use are RabbitMQ, MongoDB, Redis, and Elasticsearch. Thanks to the npm repository, Node.js has an amazing variety of libraries which make integration with these technologies seamless. I was recently experimenting with ZeroMQ, and again I was surprised to see how easy it is to get started with a Node.js application.

4. Imagine life before you started using Node.js. What has its impact been on the way you work?

I started programming when I was very young, so I really lived "a life" as a programmer before having Node.js. Before Node.js came out, I was using JavaScript a lot to program the frontend of web applications, but I had to use other languages for the backend. The context-switching between two environments is something that ends up eating a lot of time and energy. Luckily, today with Node.js we have the opportunity to use the same language and even to share code across the whole web stack. I believe that this is something that makes my daily work much easier and more enjoyable.

5. How important are design patterns when you use Node.js? Do they change how you use the tool?

I would say that design patterns are important in every language, and in this case Node.js makes no difference. Furthermore, due to the intrinsically asynchronous nature of the language, having a good knowledge of design patterns becomes even more important in Node.js to avoid some of the most common pitfalls.

6. What does the future hold for Node.js? How can it remain a really relevant and valuable tool for developers?

I am sure Node.js has a pretty bright future ahead. Its popularity is growing dramatically and it is starting to gain a lot of traction in enterprise environments that have typically been bound to other famous and well-known languages like Java. At the same time, Node.js is trying to keep pace with the main innovations in the JavaScript world. For instance, in the latest releases Node.js added support for almost all the new language features defined in the ECMAScript 2015 standard.
This is something that makes programming with Node.js even more enjoyable and I believe it’s a strategy to follow to keep developers interested and the whole environment future-proof.  Thanks Luciano! Good luck for the future – we’re looking forward to seeing how dramatically Node.js grows over the next 12 months. Get to grips with Node.js – and the complete JavaScript development stack – by following our full-stack developer skill plan in Mapt. Simply sign up here.
Node 6: What to Expect

Darwin Corn
16 Jun 2016
5 min read
I have a confession to make—I'm an Archer. I've taken my fair share of grief for this, with everyone from the network admin that got by on Kubuntu to the C dev running Gentoo getting in on the fun. As I told the latter, Gentoo is for lazy people; lazy people that want to have their cake and eat it too with respect to the bleeding edge. Given all that, you'll understand that when I was asked to write this post, my inner Archer balked at the thought of digging through source and documentation to distill this information into something palatable for your average backend developer. I crossed my fingers and wandered over to their GitHub, hoping for a clear roadmap, or at least a bunch of issues milestoned to 6.0.0. I was disappointed, even though a lively discussion on dropping XP/Vista support briefly captured my attention.

I've also been primarily doing frontend work for the last few months. While I pride myself on being an IT guy who does web stuff every now and then (someone who cares more about the CI infrastructure than the product that ships), circumstances have dictated that I be in the 'mockup' phase for a couple of volunteer gigs as well as my own projects, all at once. Convenient? Sure. Not exactly conducive to my generalist, Renaissance Man self-image, though (as an aside, the annotations in Rap Genius read like a Joseph Ducreux meme generator). Backend framework and functionality have been relegated to the back of my mind. As such, I watched Node bring io.js back into the fold and go to semver this fall with all the interest of a house cat. A cat that cares more about the mouse in front of him than the one behind the wall.

By the time Node 6 drops in April, I'm going to be feasting on backend work, so it would behoove me to understand the new tools I'll be using. I ignored the Archer on my left shoulder, listened to my editor on the right, and dug around the source for a bit. Of note is that there's nothing near as groundbreaking as the 4.0.0 release, where Node adopted the ES6 support introduced in io.js 1.0.0. That's not so much a slight at Node development as a reflection of the timeline of JavaScript development. That said, here are some highlights from my foray into their issue tracker:

Features

Core support for Promises

I've been playing around with Ember for so long that I was frankly surprised that Node didn't include core support for Promises. A quick look at the history of Node shows that they originally existed (based on EventEmitters), but that support was removed in 0.2, and ever since, server-side promises have been the domain of various npm modules (a small sketch of the pattern those modules provide appears at the end of this Features list). The PR is still open, so it remains to be seen if the initial implementation of this makes it into 6.0.0.

Changing FIPS crypto support

In case you're not familiar with the Federal Information Processing Standard, MDN has a great article explaining it. Node's implementation left some functionality to be desired, namely that Node compiled with FIPS support couldn't use non-FIPS hashing algorithms (md5 for one, breaking npm). Node 6 compiled with FIPS support will likely both fix that and flip its functionality, such that, by default, FIPS support will be disabled (in Node compiled with it), requiring invocation in order to be used.

Replacing the C http-parser and DNS resolvers with JS implementations

These PRs are still open (and have been for almost a year), so I'd say seeing them come in 6.0.0 is unlikely, though the latest discussion on the latter seems centered around CI, so I might be wrong about it not making 6.0.0. That being said, while not fundamentally changing functionality too much, it will be cool to see more of the codebase written in JavaScript.
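To ground the Promises discussion above, here is a minimal sketch (illustrative, not taken from any Node core PR) of the kind of promise wrapper those npm modules have been providing, built with nothing but the Promise constructor and a built-in module:

const fs = require('fs');

// Wrap the callback-style fs.readFile in a native Promise.
function readFileAsync(path) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, 'utf8', (err, data) => {
      if (err) return reject(err);  // surface errors through rejection
      resolve(data);                // deliver file contents on success
    });
  });
}

readFileAsync('./package.json')
  .then(data => console.log(`read ${data.length} characters`))
  .catch(err => console.error(`read failed: ${err.message}`));

Core support would simply mean less of this boilerplate, or one fewer dependency, in everyday scripts.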
Deprecations

fs reevaluation

This primarily affects pre-v4 graceful-fs, on which many modules, including major ones such as npm itself and less, are dependent. The problem is not that these modules depend on graceful-fs as such, but that they don't require v4 or newer. This could lead to breakage for a lot of Node apps whose developers aren't on their game when it comes to keeping dependencies up to date. As a set-and-forget kind of guy, I'm thankful this is just getting deprecated and not outright removed.

Removals

sys

The sys module was deprecated in Node 4 and has always been deprecated in io.js. There was some discussion about its removal last summer, and it was milestoned to 6.0.0. It's still merely deprecated as of Node 5.7.0, so we'll likely have to wait until release to see if it's actually removed.

Nothing groundbreaking, but there is some maturation of the platform in both the codebase and the deprecations, as well as outright removals. If you live on the bleeding edge like I do, you'll hop on over to Node 6 as soon as it drops in April, but there's little incentive to make the leap for those taking a more conservative approach to Node development.

About the Author

Darwin Corn is a Systems Analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the Information Technology world.

Five Benefits of .NET Going Open Source

Ed Bowkett
12 Dec 2014
2 min read
By this point, I'm sure almost everyone has heard the news about Microsoft's decision to open source the .NET framework. This blog will cover what this decision means and what its benefits are for developers. Remember, this is just an opinion, and I'm sure there are differing views out there in the wider community.

More variety

People no longer have to stick with Windows to develop .NET applications. They can choose between operating systems, which doesn't lock developers down and makes the ecosystem more competitive. Ultimately, it opens .NET up to a wider audience. The primary advantage of this announcement is that .NET developers can build more apps to run in more places, on more platforms. It means a more competitive marketplace, and it opens developers up to one of the fastest-growing operating systems in the world, Linux.

Innovate .NET

Making .NET open source allows the code to be revised and rewritten. This will have dramatic outcomes for .NET, and it will be interesting to see what developers do with the code as they continually look for new functionality in .NET.

Cross-platform development

The ability to develop across different operating systems is now massive. Previously, this was only available through the Mono project, Xamarin. With Microsoft looking to add more Xamarin tech to Visual Studio, this will be an interesting development to watch moving into 2015.

A new direction for Microsoft

By opening up .NET as open source software, Microsoft seems to have adopted a more "developer-friendly" approach under the new CEO, Satya Nadella. That's not to say the previous CEO ignored developers, but being more open as a company, and changing its view on open source, has allowed Microsoft to reach out to communities more easily and quickly. Take the recent deal Microsoft made with Docker, and it looks like Microsoft is heading in the right direction in terms of closing the gap between the company and developers.

Acknowledgement of other operating systems

When .NET first came around, around 2002, the entire world ran on Windows—it was the dominant operating system, certainly in terms of the mass audience. Today, that simply isn't the case—you have Mac OS X, you have Linux—there is much more variety, and as a result .NET, by going open source, has acknowledged that Windows is no longer the number one option in workplaces.