
Tech Guides - Web Development

87 Articles

WebGL 2.0: What you need to know

Raka Mahesa
01 May 2017
5 min read
Earlier this year, Google and Mozilla released versions of Chrome and Firefox with full support for WebGL 2.0. While some previous versions of their browsers also supported WebGL 2.0, those versions disabled the feature by default. By enabling WebGL 2.0 in their latest releases, both Google and Mozilla seem confident that this bleeding-edge web technology can finally be used by most users without any problems.

So, what is WebGL 2.0? How does it differ from the previous version of WebGL? What, in fact, is WebGL?

To answer those questions, let's go back in time a little bit. In the early 1990s, graphics-intensive applications were expensive to create because the software had to be customized for each type of graphics processing hardware. Imagine having to write an app for each smartphone vendor separately; it would cost many man-hours. So, to mitigate this problem, a standard for graphics computing was introduced. This standard is called OpenGL (which stands for Open Graphics Library).

When mobile phones with display screens were introduced, people realized that mobile technology also needed a standard for graphics computing. However, OpenGL is a standard primarily for desktop-class hardware, so a different standard was needed that could work within the limited capability of mobile hardware. And thus OpenGL ES (Embedded Systems) was branched out from OpenGL, with the initial version released in the early 2000s.

The same progression happened to web technology. In 2009, web applications were becoming increasingly graphics-intensive, so a graphics standard called WebGL was introduced to help software developers. One consideration, however, was that users could access web applications from both desktop and mobile devices, so WebGL needed to work on both platforms. To accommodate that, WebGL was based on the OpenGL ES specification instead of the desktop-focused OpenGL.

Technology keeps advancing. As graphics hardware becomes more capable, additional features get added to the graphics standards. The latest version of OpenGL ES, version 3.0, was released in 2012 to keep up with advancements in mobile GPUs. WebGL 1.0, however, was still based on OpenGL ES 2.0. So in 2017, the specification for WebGL 2.0, based on OpenGL ES 3.0, was finally released.

As we can see from the timeline, WebGL 2.0 is really fresh out of the oven. In fact, it's so new that at the time of writing, the only browsers that support the standard are Google Chrome, Mozilla Firefox, and Opera. WebGL 2.0 support on Safari is still under development. It's also worth noting that no mobile browser supports WebGL 2.0 by default (WebGL 2.0 support on Chrome for Android can be enabled via a hidden menu).

Considering the limited number of compatible platforms, as developers we really can't rely on users having the necessary browser for our apps. So, with that limitation in mind, we should always check the browser's capability and prepare a fallback method in case the browser does not support WebGL 2.0.
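Since WebGL 2.0 contexts are requested through the same canvas API as WebGL 1.0, the capability check mentioned above is straightforward. Here is a minimal sketch of such a check with a fallback (the canvas ID and the log messages are illustrative assumptions):

// Prefer a WebGL 2.0 context, falling back to WebGL 1.0 if unavailable.
var canvas = document.getElementById('render-canvas'); // assumed element ID
var gl = canvas.getContext('webgl2');
var isWebGL2 = !!gl;

if (!gl) {
  // No WebGL 2.0 support; try the widely supported WebGL 1.0 context.
  gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
}

if (!gl) {
  console.error('WebGL is not supported by this browser.');
} else {
  console.log(isWebGL2 ? 'Using WebGL 2.0' : 'Falling back to WebGL 1.0');
}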
So, how does WebGL 2.0 differ from version 1.0? Fortunately, nothing major has changed in the way the library is used. This latest version of WebGL simply adds features and promotes some of the library's optional extensions to defaults.

One of the WebGL 1.0 extensions that has been made mandatory in WebGL 2.0 is the instancing extension, which enables developers to render multiple copies of the same mesh efficiently. This feature is very useful for drawing decorative objects, like grass. Another extension that has been folded into WebGL 2.0 is depth textures, which are used heavily for computing lighting and creating shadow maps.

Another major addition in WebGL 2.0 is support for GLSL ES 3.0, the latest programming language for OpenGL shaders. With this version of GLSL, a loop in a shader is no longer restricted to a constant integer bound. Not just that, GLSL ES 3.0 also brings additional matrix operations (like an inverse function) that make coding a shader much easier.

WebGL 2.0 also offers much better support for textures. With version 2.0, non-power-of-two textures are finally supported, which means the dimensions of your texture images are no longer limited to powers of two such as 32, 64, 128, and 256. 3D textures are also supported now, which is useful for volumetric effects such as light rays and smoke, as well as for storing medical scans.

WebGL 2.0 also adds support for more texture formats, such as RGBA32F, RGBA16F, R11F_G11F_B10F, SRGB8, and others. More compressed texture formats that are not platform-specific are supported as well, including COMPRESSED_RGB8_ETC2, COMPRESSED_RGBA8_ETC2_EAC, and more.

There are other additions to WebGL 2.0, such as multiple draw buffers, transform feedback, uniform buffer objects, and more. To check out all of those additions in detail, see the official WebGL 2.0 specification.

About this author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


React Native Performance

Pierre Monge
21 Feb 2017
7 min read
Since React Native[1] came out, the core group of developers, as well as the community, has kept improving the framework, including the performance and stability of the technology. In this article, we talk about React Native's performance. This blog post is aimed at people who want to learn more about React Native; it might be a bit too complex for beginners.

How does it work?

In order to understand how to optimize our application, we have to understand how React Native works. But don't worry; it's not too hard. Consider the following flow:

JavaScript -> Bridge -> Native

React Native is based on two environments: a JS (JavaScript) environment and a Native environment. These two entities communicate through a bridge. The JS side is our "organizer." This is where we run our algorithms, manage our views, run network calls, and so on. The Native side is there for the display and the physical link part. It senses physical events (as well as some virtual ones, if we ask it to) and sends them to the JS part. The bridge exists as the link between the two, as shown in the following code:

render(): ReactElement<*> {
  return (
    <TextInput
      value={''}
      onChangeText={() => { /* Here we handle the native event */ }}
    />
  );
}

Here, we simply have a TextInput, yet this one component involves all the branches of the React Native stack. The TextInput is called in JS, but is displayed on the device in Native. Every character typed into the Native component triggers a physical event, which is transformed into a letter or an action and then transmitted over the bridge to the JS component. In every transaction of data between JS and Native, the bridge intervenes so that the data is understood on both sides, which means the bridge has to serialize the data. The bridge is simple; it's a bit stupid and has only one job, but... it is the one that will bother us the most.

The bridge and other losses of performance

Imagine that you are in an airport. You can get your ticket online in five minutes, and once you are on the plane, the flight takes exactly as long as it is supposed to. Everything in between, however - finding the right gate, dropping off your luggage, going through security - takes horribly long. Well, that in-between part is our bridge. JS is fast, even though it runs on a single main thread. Native is also fast. The bridge, however, is slow. More precisely, it often has so much data to serialize that serialization eats all of its time - or it is slow simply because you made it slow! The bridge is optimized to batch the data[2]. Therefore, we can't send data through it too fast; and if we really have to, then we have to minimize that data as much as possible.

Let's take an animation as an example. We want to make a square go from left to right in 10 seconds.

The pure JS version:

/* at the top of the class */
let i = 0;

loadTimer = () => {
  if (i < 100) {
    i += 1;
    setTimeout(loadTimer, 100);
  }
};

...
componentDidMount(): void {
  loadTimer();
}
...
render(): ReactElement<*> {
  let animationStyle = {
    transform: [
      {
        translateX: i * Dimensions.get('window').width / 100,
      },
    ],
  };
  return (
    <View
      style={[animationStyle, { height: 50, width: 50, backgroundColor: 'red' }]}
    />
  );
}

Here is a pure JS implementation of a pseudo-animation. This version, where we push raw position data through the bridge on every tick, is dirty code and very slow - to be banned!

The Animated version:

...
componentDidMount(): void {
  this.state.animatedValue.setValue(0);
  Animated.spring(this.state.animatedValue, {
    toValue: 1,
  }).start();
}
...
render(): ReactElement<*> {
  let animationStyle = {
    transform: [
      {
        translateX: this.state.animatedValue.interpolate({
          inputRange: [0, 1],
          outputRange: [0, Dimensions.get('window').width],
        }),
      },
    ],
  };
  return (
    <Animated.View
      style={[animationStyle, { height: 50, width: 50, backgroundColor: 'red' }]}
    />
  );
}

This is already much more understandable. The Animated library was created to improve the performance of animations, and its objective is to lighten the use of the bridge by sending the native side a prediction of the animation's values before it starts. With the right library, the animation is much smoother and more reliable, and the general performance of the app improves automatically. However, animations are not the only culprit: you have to take the time to verify that you don't have too much unnecessary data going through the bridge.

Other factors

Thankfully, the bridge isn't the only thing at fault, and there are many other ways to optimize a React Native application. Here is a list of why and how you can optimize your application:

* Do not neglect your business logic; even if JS and Native are supposed to be fast, you still have to optimize them.
* Ban long-running while loops and synchronous functions. Blocking the JS thread is the same as blocking the application.
* Rendering a view is costly, and most of the time it is done without anything having changed! That's why you MUST use the shouldComponentUpdate method in your components (see the sketch after this list).
* If you cannot manage to optimize a JavaScript component, it may be worth rewriting it in Native.
* Transactions with the bridge should be minimized.
* There are two build states in a React Native application: a debug state and a release state. The release state greatly increases the performance of the app through compilation flags and by stripping out dev mode. On the other hand, it doesn't solve everything.
* The debugger mode will slow down your application, because the JS runs in your browser instead of on your phone.
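As a minimal illustration of the shouldComponentUpdate advice above (the component, its color prop, and the comparison logic are assumptions for the sake of the example, not code from this project):

import React from 'react';
import { View } from 'react-native';

class Square extends React.Component {
  // Skip the costly render pass unless the prop we depend on actually changed.
  shouldComponentUpdate(nextProps) {
    return nextProps.color !== this.props.color;
  }

  render() {
    return (
      <View style={{ height: 50, width: 50, backgroundColor: this.props.color }} />
    );
  }
}

export default Square;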
Tools

The React Native tooling is not yet very developed, but a great part of the toolset comes from the platform itself; a hundred percent of the functionality is native. Here is a short list of some of the important tools that should help you out:

* react-addons-perf (both platforms): Allows simple benchmarks of component rendering; it also reports wasted time (time spent re-rendering components that didn't change).
* Systrace (Android): Hard to use, but useful for detecting big bottlenecks.
* Xcode (iOS): A feature of Xcode that lets you understand how your application is rendered (great for finding unnecessary views).
* rn-snoopy (both platforms): Snoopy is a tool that allows you to spy on the bridge. Its main use is debugging, but it can also be used for optimization.

You now have some more tricks and tools to optimize your React Native application. However, there is no hidden recipe or magic potion... it will take some time and research. The performance of a React Native application is very important. The joy of creating a mobile application in JavaScript must at least be equal to the experience of the user testing it.

About the author

Pierre Monge is an IT student from Bordeaux, France. His interests include C, JS, Node, React, React Native, and more. He can be found on GitHub @azendoo.

[1] React Native allows you to build mobile apps using only JavaScript. It uses the same design as React, allowing you to compose a rich mobile UI from declarative components.

[2] Batch processing


The Web Development Tools Behind A Large Portion of the Modern Internet: Pornography

Erik Kappelman
20 Feb 2017
6 min read
Pornography is one of, if not the, most common forms of media on the Internet, whether you go by the number of websites or the amount of data transferred. Despite this fact, Internet pornography is rarely discussed or written about in positive terms. This is somewhat unexpected, given that pornography has spurred many technological advances throughout its history. Many of the advances in video capture and display were driven by the need to make and display pornography better. The desire to purchase pornography on the Internet with more anonymity was one of the ways PayPal drew, and continues to draw, customers to its services.

This blog will look into some of the tools used by some of the more popular Internet pornography sites today. We will be examining the HTML source for some of the largest websites in this industry. The content of this blog will not be explicit, and the intention is not titillation. YouPorn is one of the top 100 most-accessed websites on the Internet, so I believe it is relevant to have a serious conversation about the technologies used by these sites. This conversation does not have to be explicit in any way, and it will not be.

Much of what is in the <head> tag in the YouPorn HTML source is related to loading assets, such as stylesheets. After several <meta> tags, most designed to enhance the website's SEO, a very large chunk of JavaScript appears. It is hard to say, at this point, whether YouPorn is using a common frontend framework or whether this JavaScript was wholly written by a developer somewhere. It certainly was minified before it was sent to the frontend, which is the least you would expect. This script does a variety of things. It handles that lovely popup that appears as soon as a viewer clicks anywhere on the page; this is handled with vanilla JavaScript. The script also collects a large amount of information about the viewer's device, including the operating system, the browser, the device type, the device brand, and even some information about the CPU. This information is used to optimize the viewer's experience. The script also identifies whether the viewer is using AdBlock, and modifies the page accordingly.

Two third-party tools visible in this script are jQuery and AJAX. Both would be very necessary for a website whose main purpose is the display of pornographic content: AJAX helps expedite the movement of content from backend to frontend, and jQuery enhances DOM manipulation to improve the viewer's user interface. AJAX and jQuery can also be seen in the source code of the PornHub website. Again, this is really the least you would expect from a website that serves as much content as any of the currently popular porn websites.

The source code for these pages shows that YouPorn and PornHub both use Google Analytics tools, presumably to assist in their content targeting. This is part of how pornography websites grow more and more geared toward a specific viewer over time. PornHub and YouPorn spend a lot of lines of code building what could be considered a profile of their viewers. This way, viewers can see what they want immediately, which ought to enhance their experience and keep them online. xHamster follows a similar template, identifying information about the user's device and using Google Analytics to target the viewer with specific content.

Layout and navigation of any website are important. Although pornography is very desirable to some, websites that display it have so many competitors that they must all try very hard to satisfy their viewers. This makes every detail very important. YouPorn and PornHub appear to use Bootstrap as the foundation of their frontend design. There is quite a bit of customization performed by the sites, but Bootstrap is still at the foundation. Although it is somewhat less clear, it seems that xHamster also uses Bootstrap as its design foundation.

Now, let's choose a video and see what the source code tells us about what happens when viewers interact with the content. On PornHub, there is a series of video previews; when one is rolled over, a video sprite appears to give the viewer a preview of the video. Once the video is clicked, the viewer is sent to a new page to view that specific video. In the case of PornHub, this is done through the execution of a PHP script that uses the video's ID and an AJAX request to get the user onto the right page with the right video. Once we are on the video's page, we can see that PornHub, and probably xHamster and YouPorn as well, are using Flash Video. I am viewing these websites on a MacBook, so it is likely that the video type is different when viewed on a device that does not support Flash Video. This is part of the reason so much information about a viewer's device is collected upon visiting these websites.

This short investigation into the tools used by these websites has revealed that, although pornographers have been on the cutting edge of web technology in the past, some of the current pornography providers are using tools that are somewhat unimpressive, or at least run of the mill. That said, there is never a good reason to reinvent the wheel, and these websites are clearly doing fine in terms of viewership. For the aspiring developers out there, I would take it to heart that some of the most viewed websites on the Internet are using some of the most basic tools to provide their users with content. This confirms what I have often found to be true: getting it done right is far more important than getting it done in a fancy way. I have only scratched the surface of this topic. I hope that others will investigate more into the types of technologies used by this very large portion of the modern Internet.

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.


Why you should learn Isomorphic JavaScript in 2017

Sam Wood
26 Jan 2017
3 min read
One of the great challenges of JavaScript development has been wrangling your code for both the server side and the client side of your site or app. Full-stack JS devs have worked to master the skills needed to work on both the frontend and the backend, and numerous JS libraries and frameworks have been created to make your life easier. That's why Isomorphic JavaScript is your next big thing to learn for 2017.

What even is Isomorphic JavaScript?

Isomorphic JavaScript applications are JavaScript applications that run on both the client and the server. The term comes from a mathematical concept whereby a property remains constant even as its context changes. Isomorphic JavaScript therefore shares the same code, whether it's running in the context of the backend or the frontend. It's often called the 'holy grail' of web app development.

Why should I use Isomorphic JavaScript?

"[Isomorphic JavaScript] provides several advantages over how things were done 'traditionally'. Faster perceived load times and simplified code maintenance, to name just two," says Google engineer and Packt author Matt Frisbie in our 2017 Developer Talk Report. Netflix, Facebook, and Airbnb have all adopted isomorphic libraries for building their JS apps.

Isomorphic JS apps are *fast*. Because the first page view is rendered from the shared codebase on the server, the user doesn't have to wait for client-side JavaScript to load and parse before seeing content. It might only be a second, but that slow load time can be all it takes to frustrate and lose a user. Isomorphic apps deliver rendered HTML content directly to the browser, ensuring a better user experience overall.

Isomorphic JavaScript isn't just quick for your users; it's also quick for you. By using one framework that runs on both the client and the server, you'll open yourself up to a world of faster development times and easier code maintenance.

What tools should I learn for Isomorphic JavaScript?

The premier and most powerful tool for isomorphic JS is probably Meteor, the full-stack JavaScript platform. With 10 lines of JavaScript in Meteor, you can do what would take you thousands elsewhere. There's no need to worry about building your own stack of libraries and tools; Meteor does it all in one single package.

Other isomorphic-focused libraries include Rendr, created by Airbnb. Rendr allows you to build a Backbone.js + Handlebars.js single-page app that can also be fully rendered on the server side; it was used to build the Airbnb mobile web app, with drastically improved page load times. Rendr also strives to be a library rather than a framework, meaning that it can be slotted into your stack as you like and gives you a bit more flexibility than a complete solution such as Meteor.
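To make the "shared code" idea concrete, here is a minimal sketch of one component used on both server and client, with React's server renderer standing in as the isomorphic layer (the file names, port, and Express setup are illustrative assumptions, not a prescription from this article):

// Greeting.js - one component, required by both server and client
const React = require('react');
module.exports = function Greeting(props) {
  return React.createElement('h1', null, 'Hello, ' + props.name);
};

// server.js - renders the shared component to an HTML string
const express = require('express');
const React = require('react');
const ReactDOMServer = require('react-dom/server');
const Greeting = require('./Greeting');

const app = express();
app.get('/', function (req, res) {
  const html = ReactDOMServer.renderToString(
    React.createElement(Greeting, { name: 'world' })
  );
  // Ship the pre-rendered markup plus the client bundle that takes over.
  res.send('<div id="root">' + html + '</div><script src="/bundle.js"></script>');
});
app.listen(3000);

// client.js (compiled to bundle.js) - attaches to the server-rendered markup
const React = require('react');
const ReactDOM = require('react-dom');
const Greeting = require('./Greeting');
ReactDOM.render(
  React.createElement(Greeting, { name: 'world' }),
  document.getElementById('root')
);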


MapReduce on Amazon EMR with Node.js

Pedro Narciso
14 Dec 2016
8 min read
In this post, you will learn how to write a Node.js MapReduce application and how to run it on Amazon EMR. You don't need to be familiar with Hadoop or the EMR API. In order to run the examples, you will need a GitHub account, an Amazon AWS account, some money to spend at AWS, and Bash or an equivalent shell installed on your computer.

EMR, BigData, and MapReduce

We define BigData as those data sets too large or too complex to be processed by traditional processing applications. BigData is also a relative term: a data set can be too big for your Raspberry Pi while being a piece of cake for your desktop.

What is MapReduce? MapReduce is a programming model that allows data sets to be processed in a parallel and distributed fashion. How does it work? You create a cluster and feed it with the data set. Then, you define a mapper and a reducer. MapReduce involves the following three steps:

* Mapping step: This breaks down the input data into KeyValue pairs
* Shuffling step: KeyValue pairs are grouped by key
* Reducing step: KeyValue pairs are processed by key in parallel

It's guaranteed that all data belonging to a single key will be processed by a single reducer instance.

Our processing job project directory setup

Today, we will implement a very simple processing job: counting unique words from a set of text files. The code for this article is hosted here. Let's set up a new directory for our project:

$ mkdir -p emr-node/bin
$ cd emr-node
$ npm init --yes
$ git init

We also need some input data. In our case, we will download some books from Project Gutenberg as follows:

$ mkdir data
$ curl -Lo data/tmohah.txt http://www.gutenberg.org/ebooks/45315.txt.utf-8
$ curl -Lo data/mad.txt http://www.gutenberg.org/ebooks/5616.txt.utf-8

Mapper and Reducer

As we stated before, the mapper will break down its input into KeyValue pairs. Since we use the streaming API, we will read the input from stdin. We will then split each line into words, and for each word, we are going to print "word<TAB>1" to stdout; the TAB character is the expected field separator. We will see later the reason for setting "1" as the value. In plain JavaScript, our ./bin/mapper can be expressed as:

#!/usr/bin/env node
const readline = require('readline');
const rl = readline.createInterface({ input: process.stdin });

rl.on('line', function(line) {
  line.trim().split(' ').forEach(function(word) {
    console.log(`${word}\t1`);
  });
});

As you can see, we have used the readline module (a Node built-in) to parse stdin. Each line is broken down into words, and each word is printed to stdout as we stated before.

Time to implement our reducer. The reducer expects a set of KeyValue pairs, sorted by key, as input, such as the following:

First<TAB>1
First<TAB>1
Second<TAB>1
Second<TAB>1
Second<TAB>1

We then expect the reducer to output the following:

First<TAB>2
Second<TAB>3

Reducer logic is very simple and can be expressed in pseudocode as:

IF !previous_key
    previous_key = current_key
    counter = value
ELSE IF previous_key equals current_key
    counter = counter + value
ELSE
    print previous_key<TAB>counter
    previous_key = current_key
    counter = value

The first branch is necessary to initialize the previous_key and counter variables.
Let's see the real JavaScript implementation of ./bin/reducer:

#!/usr/bin/env node
var previousKey, counter;
const readline = require('readline');
const rl = readline.createInterface({ input: process.stdin });

function print() {
  console.log(`${previousKey}\t${counter}`);
}

function countWord(line) {
  let [currentKey, value] = line.split('\t');
  value = +value;
  if (typeof previousKey === 'undefined') {
    previousKey = currentKey;
    counter = value;
    return;
  }
  if (previousKey === currentKey) {
    counter = counter + value;
    return;
  }
  print();
  previousKey = currentKey;
  counter = value;
}

process.stdin.on('end', function() {
  print();
});
rl.on('line', countWord);

Again, we use the readline module to parse stdin line by line. The countWord function implements the reducer logic described before. The last thing we need to do is to set execution permissions on those files:

chmod +x ./bin/mapper
chmod +x ./bin/reducer

How do I test it locally? You have two ways to test your code:

* Install Hadoop and run a job
* Use a simple shell script

The second one is my preferred one for its simplicity:

./bin/mapper <<EOF | sort | ./bin/reducer
first second first first second first
EOF

It should print the following:

first<TAB>4
second<TAB>2

We are now ready to run our job on EMR!

Amazon environment setup

Before we run any processing job, we need to perform some setup on the AWS side. If you do not have an S3 bucket, you should create one now. Under that bucket, create the following directory structure:

<your bucket>
├── EMR
│   └── logs
├── bootstrap
├── input
└── output

Upload our previously downloaded books from Project Gutenberg to the input folder.

We also need the AWS CLI installed on the computer. You can install it with the Python package manager. If you do not have the AWS CLI installed on your computer, run:

$ sudo pip install awscli

awscli requires some configuration, so run the following and provide the requested data:

$ aws configure

You can find this data in your Amazon AWS web console. Be aware that usability is not Amazon's strongest point. If you do not have your IAM EMR roles yet, it is time to create them:

aws emr create-default-roles

Good. You are now ready to deploy your first cluster. Check out this run-cluster.sh script:

#!/bin/bash
MACHINE_TYPE='c1.medium'
BUCKET='pngr-emr-demo'
REGION='eu-west-1'
KEY_NAME='pedro@triffid'

aws emr create-cluster \
  --release-label 'emr-4.0.0' \
  --enable-debugging \
  --visible-to-all-users \
  --name PNGRDemo \
  --instance-groups \
    InstanceCount=1,InstanceGroupType=CORE,InstanceType=$MACHINE_TYPE \
    InstanceCount=1,InstanceGroupType=MASTER,InstanceType=$MACHINE_TYPE \
  --no-auto-terminate \
  --log-uri s3://$BUCKET/EMR/logs \
  --bootstrap-actions Path=s3://$BUCKET/bootstrap/bootstrap.sh,Name=Install \
  --ec2-attributes KeyName=$KEY_NAME,InstanceProfile=EMR_EC2_DefaultRole \
  --service-role EMR_DefaultRole \
  --region $REGION

The previous script will create a 1-master, 1-core cluster, which is big enough for now. You will need to update this script with your own bucket, region, and key name. Remember that your keys are listed at "AWS EC2 console / Key pairs". Running this script will print something like the following:

{
    "ClusterId": "j-1HHM1B0U5DGUM"
}

That is your cluster ID; you will need it later. Visit your Amazon AWS EMR console and switch to your region. Your cluster should be listed there. It is possible to add the processing steps with either the UI or the AWS CLI.
Let's use a shell script (add-step.sh):

#!/bin/bash
CLUSTER_ID=$1
BUCKET='pngr-emr-demo'
OUTPUT='output/1'

aws emr add-steps \
  --cluster-id $CLUSTER_ID \
  --steps Name=CountWords,Type=Streaming,Args=[-input,s3://$BUCKET/input,-output,s3://$BUCKET/$OUTPUT,-mapper,mapper,-reducer,reducer]

It is important to point out that our "OUTPUT" directory must not already exist at S3; otherwise, the job will fail. Call ./add-step.sh plus the cluster ID to add our CountWords step:

./add-step.sh j-1HHM1B0U5DGUM

Done! Now go back to the Amazon UI, reload the cluster page, and check the steps. The "CountWords" step should be listed there. You can track job progress from the UI (reload the page) or from the command-line interface. Once the job is done, terminate the cluster. You will probably want to configure the cluster to terminate as soon as it finishes or when any step fails; termination behavior can be specified with the "aws emr create-cluster" command. Sometimes the bootstrap process can be difficult. You can SSH into the machines, but before that, you will need to modify their security groups, which are listed at "EC2 web console / Security groups".

Where to go from here?

You can (and should) break down your processing jobs into smaller steps, because it will simplify your code and make your steps more composable. You can compose more complex processing jobs by using the output of one step as the input for the next step. Imagine that you have run the "CountWords" processing job several times and now you want to sum the outputs. Well, for that particular case, you just add a new step with an "identity mapper" and your already-built reducer, and feed it with all of the previous outputs. Now you can see why we output "word<TAB>1" from the mapper.

About the author

Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.


From Web Developer to App Developer

Oliver Blumanski
12 Dec 2016
4 min read
As a web developer, you have to adapt to new technologies every year. In the last four years, the JavaScript world has exploded, and its toolset is changing very fast. In this blog post, I will describe my experience of changing from a web developer to an app developer.

My start in the mobile app world

My first attempt at creating a mobile app was a simple JavaScript one-page app, which was just a website designed for mobile devices. It wasn't very impressive, but at the time, there was no React Native or Ionic Framework. It was nice, but it wasn't great.

Ionic Framework

Later, I developed apps using the Ionic/Angular framework, which uses Cordova as a wrapper. Ionic apps run in a web view on the device. Working with Ionic was pretty easy, and its performance increased over time, so I found it to be a good toolset. If you need an app that runs on a broad spectrum of devices, Ionic is a good choice.

React Native

A while ago, I made the change to React Native. React Native supported only iOS at the start, but it later added Android support as well, so I thought the time was right to switch. The React Native world is a bit different from the Ionic world. React Native is still new-ish, and many modules are a work in progress; React Native itself ships a new version every two weeks. Working with React Native is bleeding-edge development.

React Native and Firebase are what I use right now. When I was working with Ionic, I used a SQLite database for caching on the device, and Ajax to get data from a remote API. For notifications, I used Google GCM and Pushwoosh, and for uploads, AWS S3. With React Native, I chose the new Firebase v3, which came out earlier this year. Firebase offers a real-time database, authentication, cloud messaging, storage, analytics, offline data capability, and much more. Firebase can replace all of the third-party tools I have used before. For further information, check out the Firebase documentation.

Google Firebase supports three platforms: iOS, Android, and the Web. Unfortunately, the web platform does not support offline capabilities, notifications, and some other features. If you want to use all the features Firebase has to offer, there is a React Native module that wraps the iOS and Android native platforms. Its JavaScript API is identical to the Firebase web platform's JavaScript API, so you can rely on the Firebase web docs.

Developing with React Native, you come into contact with a lot of different technologies and programming languages. You have to deal with Xcode, and on Android, you have to add or change Java code and deal with Gradle, constant google-services upgrades, and many other things. It is fun to work with React Native, but it can also be frustrating when you run into unfinished modules or outdated documentation on the web. It pushes you into new areas, so you learn Java, Objective-C, or both. So, why not?

Firebase v3 features

Let's look at some of the Firebase v3 features.

Firebase Authentication

One of the great features that Firebase offers is authentication. It has, ready to go, Facebook login, Twitter login, Google login, GitHub login, anonymous login, and email/password sign-up. To get the Facebook and Google logins running, you will still need third-party modules; I have recently used one module for Facebook login and another for Google login.

Firebase Cloud Messaging

You can receive notifications on the device, but the behavior differs depending on the state of the app - for instance, whether the app is open or closed.

Firebase Cloud Messaging server

You may want to send messages to all or particular users/devices, and you can do this via an FCM server. I use a Node.js script as the FCM server, together with a module that handles the sending.

Firebase Real-Time Database

You can subscribe to database queries, so as soon as data changes, your app gets the new data without a reload. You can also fetch the data just once when you don't need live updates. The real-time database uses WebSockets to deliver data.
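Here is a minimal sketch of both patterns with the Firebase v3 web API (the config placeholders and the 'messages' path are illustrative assumptions):

// Subscribe to live updates, or read once, with Firebase v3.
const firebase = require('firebase');

firebase.initializeApp({
  apiKey: '<YOUR_API_KEY>', // placeholder values
  databaseURL: 'https://<YOUR_PROJECT>.firebaseio.com',
});

const ref = firebase.database().ref('messages');

// 'value' fires with the current data and again on every change.
ref.on('value', function (snapshot) {
  console.log('Live data:', snapshot.val());
});

// A one-off read, when you don't need a live subscription.
ref.once('value').then(function (snapshot) {
  console.log('One-time read:', snapshot.val());
});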
Conclusion

As a developer, you have to evolve with technology and keep up with upcoming development tools. I think that mobile development is more exciting than web development these days, and this is the reason why I would like to focus more on app development.

About the author

Oliver Blumanski is a developer based out of Townsville, Australia. He has been a software developer since 2000, and can be found on GitHub @blumanski.

What Does Brutalist Web Design Tell Us About 2016?

Erik Kappelman
02 Dec 2016
5 min read
Brutalist web design is a somewhat new design philosophy in web development. It is characterised by intentionally difficult navigation and use, a lack of smooth lines or templates, and hyper-individualization and uniqueness. Some examples of Brutalist websites include the websites for Adult Swim, The Drudge Report, and Hacker News. These and other websites like them are termed 'Brutalist' in reference to a mid-20th-century architectural movement of the same name. Brutalist buildings use exposed concrete, are modular in design, and choose function over form in most cases. Brutalist buildings are imposing and can loom menacingly. With this in mind, Brutalist web design is well named. Both styles are appreciated for their economically sound foundations, artistic 'honesty,' and anti-elitist undertones.

So, that's what Brutalist web design is, but what does it mean for the year 2016 and the years to come? In order to answer this question, it helps to look back at the origins of the Internet. At its core, the Internet is a set of protocols used to seamlessly connect networks with other networks, allowing individuals to instantly share information. One of the fundamental tools for this network of networks is a universal way of displaying information - enter HTML, and eventually CSS. HTML and CSS have held up the design end of the greatest technological renaissance in human history from 1993 through the present day. Information is displayed in new ways, in new places, and at new speeds. All this took was an incredible amount of work by designers and developers.

Today, despite the latest versions of HTML and CSS still being behind the frontend of basically every website on the planet, any web designer looking for a job knows that mastering tools that wrap around HTML and CSS, like WordPress, Bootstrap, and Sass, is more important than improving the ability to hand-code HTML and CSS. WordPress and Bootstrap and tools like them were born of necessity. People began to demand more and more ornate websites as the Internet proliferated, and certain schemas and templates became more popular than others. Designers created tools that allowed them to create flashy websites fast in order to meet demand. The end of this story is the modern Internet: extremely well-designed websites that are almost indistinguishable from one another.

Brutalist web design is a response to this evolution. Brutalism demands that what a website can do be the measure of the website's value, instead of how it looks. Also, the principles of Brutalist web design suggest that templates are the antithesis of creativity. As someone who has some experience with web design, I understand where Brutalism is coming from. It's difficult when a client is wrapped up in whether or not a menu bounces off the top of the pane as the user scrolls, but shrugs off a site's ability to change content based on a user's physical location.

That said, is 2016 the beginning of the Brutalist web design revolution? Well, I would ask: did the Brutalist architectural movement last? Given that Brutalist buildings were only built for about 20 years, and the age of the Internet moves faster and faster every day, I would suggest that Brutalist web design will likely be unheard of in less than five years. I believe this for two reasons. First, it just seems too much like a fad not to be a fad. Websites that look like they came from the early nineties, websites that are hard to navigate, named after a niche architectural movement from the seventies... this all screams fad to me. Second, people like aesthetics, because people are lazy. Aesthetics themselves are a reflection of our own laziness. We like to be told what to like and what is beautiful. Greco-Roman architecture is still seen around the world after thousands of years because the aesthetic is pleasing. The smooth lines of buildings like the Guggenheim in New York City, or the color and form of Saint Basil's Cathedral in Moscow, show that many people like the meticulous design that is seen in websites like Twitter or FiveThirtyEight, and many others.

But I still haven't answered the title question. I think that Brutalist web design means that in 2016 we are on the cusp of some real changes in the way that we view and share information. Ubiquitous wearable tech and the 'Internet of Things' are just two of the many big changes right around the corner. Brutalism feels like a step sideways rather than a step forward. We may be taking sidesteps because steps forward are currently indeterminate or difficult. In the simplest terms, Brutalism means that in 2016 some people are trying to break out of a design system that has gotten better, but hasn't fundamentally changed, in over 20 years. Brutalist web design suggests that web design is likely to experience tremendous changes in the near future as the Internet itself changes. Traditional aesthetics will likely endure, but Brutalist web design suggests people are bored and want more. What exactly that is, only time will tell.

Author: Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.


Building Better Bundles: Why process.env.NODE_ENV Matters for Optimized Builds

Mark Erikson
14 Nov 2016
5 min read
JavaScript developers are keenly aware of the need to reduce the size of deployed assets, especially in today's world of single-page apps. This usually means running increasingly complex JavaScript codebases through build steps that produce a minified bundle for deployment. However, if you read a typical tutorial on setting up a build tool like Browserify or Webpack, you'll see numerous references to a variable called process.env.NODE_ENV. Tutorials always talk about how this needs to be set to a value like "production" in order to produce a properly optimized bundle, but most articles never really spell out why this value matters and how it relates to build optimization. Here's an explanation of why process.env.NODE_ENV is used and how it fits into the typical build process.

Operating system environment variables are widely used as a method of configuring applications, especially as a way to activate behavior based on different deployment environments (such as development vs testing vs production). Node.js exposes the current process's environment variables to the script as an object called process.env. From there, the Express web server framework popularized using an environment variable called NODE_ENV as a flag to indicate whether the server should be running in "development" mode vs "production" mode. At runtime, the script looks up that value by checking process.env.NODE_ENV.

Because it was used within the Node ecosystem, browser-focused libraries also started using it to determine what environment they were running in, and using it to control optimizations and debug-mode behavior. For example, React uses it as the equivalent of a C preprocessor #ifdef to act as conditional checking for debug logging and perf tracking, roughly like this:

function someInternalReactFunction() {
  // do actual work part 1
  if (process.env.NODE_ENV === "development") {
    // do debug-only work, like recording perf stats
  }
  // do actual work part 2
}

If process.env.NODE_ENV is set to "production", all those if clauses will evaluate to false, and the potentially expensive debug code won't run. In addition, in conjunction with a tool like UglifyJS that does minification and removal of dead code blocks, a clause that is surrounded with if(process.env.NODE_ENV === "development") will become dead code in a production build and be stripped out, thus reducing bundled code size and execution time.

However, because the NODE_ENV environment variable and the corresponding process.env.NODE_ENV runtime field are normally server-only concepts, by default those values do not exist in client-side code. This is where build tools such as Webpack's DefinePlugin or the Browserify Envify transform come in, which perform search-and-replace operations on the original source code. Since these build tools are doing transformation of your code anyway, they can force the existence of global values such as process.env.NODE_ENV. (It's also important to note that because DefinePlugin in particular does a direct text replacement, the value given to DefinePlugin must include actual quotes inside of the string itself. Typically, this is done either with alternate quotes, such as '"production"', or by using JSON.stringify("production").)

Here's the key: the build tool could set that value to anything, based on any condition that you want, as you're defining your build configuration.
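A minimal sketch of such a config (the entry and output paths are placeholder assumptions; DefinePlugin and UglifyJsPlugin are the Webpack pieces described above):

// webpack.production.config.js
const webpack = require('webpack');

module.exports = {
  entry: './src/index.js', // placeholder paths
  output: { path: __dirname + '/dist', filename: 'bundle.js' },
  plugins: [
    // Replace every occurrence of process.env.NODE_ENV in the client code
    // with the literal string "production" (note the JSON.stringify trick).
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify('production'),
    }),
    // Strip out the resulting if(false)-style dead code blocks.
    new webpack.optimize.UglifyJsPlugin(),
  ],
};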
For example, I could have a webpack.production.config.js Webpack config file that always uses the DefinePlugin to set that value to "production" throughout the client-side bundle. It wouldn't have to check the actual current value of the "real" process.env.NODE_ENV variable while generating the Webpack config, because as the developer I would know that any time I'm doing a "production" build, I would want to set that value in the client code to "production".

This is where the "code I'm running as part of my build process" and "code I'm outputting from my build process" worlds come together. Because your build script is itself most likely to be JavaScript code running under Node, it's going to have process.env.NODE_ENV available to it as it runs. Because so many tools and libraries already share the convention of using that field's value to determine their dev-vs-production status, the common convention is to use the current value of that field inside the build script as it's running to also determine the value of that field as applied to the client code being transformed.

Ultimately, it all comes down to a few key points:

* NODE_ENV is a system environment variable that Node exposes into running scripts.
* It's used by convention to determine dev-vs-prod behavior, by server tools, build scripts, and client-side libraries alike.
* It's commonly used inside of build scripts (such as Webpack config generation) as both an input value and an output value, but the tie between the two is still just convention.
* Build tools generally do a transform step on the client-side code, replacing any references to process.env.NODE_ENV with the desired value; the resulting code will contain dead code blocks, as debug-only code is now inside an if(false)-type condition, ensuring that code doesn't execute at runtime.
* Minifier tools such as UglifyJS will strip out the dead code blocks, leaving the production bundle smaller.

So, the next time you see process.env.NODE_ENV mentioned in a build script, hopefully you'll have a much better idea why it's there.

About the author

Mark Erikson is a software engineer living in southwest Ohio, USA, where he patiently awaits the annual heartbreak from the Reds and the Bengals. Mark is author of the Redux FAQ, maintains the React/Redux Links list and Redux Addons Catalog, and occasionally tweets at @acemarke. He can usually be found in the Reactiflux chat channels, answering questions about React and Redux. He is also slightly disturbed by the number of third-person references he has written in this bio!


A Look into WebVR Development

Paul Dechov
08 Nov 2016
7 min read
Virtual reality technology is right in the middle of becoming massively available. But it has been considered farfetched—a distant holy grail largely confined to fiction—for long enough that it is still too easy to harbor certain myths: that working within this medium must require a long and difficult journey to pick up a lot of prerequisite expertise, and that it must exist beyond the power and scope of the web browser, which was not intended for such rich media experiences.

Does VR belong in a web browser?

The Internet is, of course, an astoundingly open and democratic communications medium, operating at an incredible scale. Browsers, considered as general software for delivering media content over the Internet, have steadily gained functionality and rendering power, and have been extraordinarily successful in combining media content with the values of the Internet. The native rendering mechanism at work in the browser environment consumes descriptions of:

* Structure and content in HTML: simple, static, and declarative
* Styling in CSS (optionally embedded in HTML)
* Behavior in JavaScript (optionally embedded in HTML): powerful but complex and loosely structured (oddly too powerful in some ways; abuses such as pop-up ads were originally its most noticeable applications)

The primary metaphor of the Web as people first came to know it focused on traveling from place to place along connected pathways, reinforced by terms like "navigate", "explore", and "site". In this light, it is apparent that the next logical step is right inside of the images, videos, and games that have proliferated around the world thanks to the web platform, and that this is no departure from the ongoing evolution of that platform. On social media, we share a lot of content, but interact using low-bandwidth channels (limited media snippets—mostly text). Going forward, these will increasingly blend into shared virtual interactions of unlimited sensory richness and creativity. This has existed to a small extent for some time in the realm of first-person online games, and will see its true potential with the scale and generality of the Web and the immersive power of VR.

A quick aside on compatibility and accessibility: the consequence of such a widely available Web is that your audience has a vast range of different devices, as well as a vast range of different sensorimotor capabilities. This is quite relevant when pushing the limits of the platform, and it is always wise to consider how the experience degrades when certain things are missing, and how best to fall back to an acceptable (not broken) variant of the content in these cases.

I believe that VR and the Web will accelerate each other's growth in this third decade of the Web, just as we saw with earlier media: images in its first decade and video in its second. Hyperlinks, essential to the experience of the Web, will teleport us from one site to another: this is one of many research avenues explored by eleVR, a pioneering research team led by Vi Hart and funded by Y Combinator Research. Adopted enthusiastically by the Chrome and Firefox browsers, and with Microsoft recently announcing support in its Edge browser, it is fair to say that widespread support for WebVR is imminent. For browsers that are not likely to support the technology natively in the near future (for example, Safari), there is a polyfill you can include so that you can use it anyway, and this is one of the things A-Frame takes care of for you.
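Until native support fully lands everywhere, detecting WebVR at runtime looks roughly like this (a hedged sketch against the WebVR 1.0 API of the time; the log messages are illustrative):

// Feature-detect WebVR: navigator.getVRDisplays is the WebVR 1.0 entry point.
if (navigator.getVRDisplays) {
  navigator.getVRDisplays().then(function (displays) {
    if (displays.length > 0) {
      console.log('VR display found: ' + displays[0].displayName);
    } else {
      console.log('WebVR is supported, but no headset is connected.');
    }
  });
} else {
  console.log('No native WebVR; include the WebVR polyfill as a fallback.');
}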
A-Frame

A-Frame is a framework from Mozilla that:

* Provides a bundle of conveniences that automatically prepares your client-side app environment for VR.
* Exposes a declarative interface for the composition of modules (aspects of appearance or functionality).
* Exposes a simple interface for writing your own modules, and encourages sharing and reusability of modules in the A-Frame community.

The only essential structural elements are:

* <a-scene>, the container
* <a-entity>, an empty shell with transform attributes like position, rotation, and scale, but with neither appearance nor behavior

HTML was designed to make web design accessible to non-programmers, and it opened up the ability for those without prior technical skills to learn to build a home page. A-Frame brings this power and simplicity to VR, making the creation of virtual environments accessible to all, regardless of their experience level with 3D graphics and programming. The A-Frame Inspector also provides a powerful UI for VR creation using A-Frame.

Example: Visualization

Some stars are in fact extremely bright, while others merely appear bright because they are so near (such as the brightest star in the sky, Sirius). I will describe just one possible use of VR to communicate these differences in star distance effectively. Setting the scene:

<a-scene>
  <a-sky color="black"></a-sky>
  <a-light type="ambient" color="white"></a-light>
</a-scene>

A default camera is automatically configured for us, but if we wanted a different perspective, we could add and position an <a-camera> element. Using a dataset of stars' coordinates and visual magnitudes, we can plot the brightest stars in the sky to create a simulated sky view (a virtual stellarium), but scaled so as to be able to perceive the distances with our eyes, immediately and intuitively. The role of VR in this example is to hook into our familiar experience of looking around at the sky and our hardwired depth perception. This approach has the potential to foster a deeper and more lasting appreciation of the data than numbers in a spreadsheet can - or than abstract one- or two-dimensional depictions can, for that matter.

Inside the scene, per star:

<a-sphere position="100 120 -60" radius="1" color="white"></a-sphere>

The position would be derived from the coordinates of the star, and the radius could reflect the absolute visual magnitude of the star. We could also have the color reflect the spectral range of the star.

A-Frame includes basic interaction handlers that work with a variety of rendering modes. While hovering over (or in VR, gazing at) a star, we could set up JavaScript handlers to view more information about it. By clicking on (in VR, pressing the button while gazing at) one of these stars, we could perhaps transform the view by shifting the camera and looking at the sky from that star's point of view. Or we could zoom in to a detailed view of that star, visualizing its planetary system, and so on.

Now, if we were to treat this visualization as a possible depiction of abstract data points or concepts, we could represent almost anything. For instance, the points in space could be people, and the distance could represent any combination of weighted criteria. You would see, with the full power of your vision, how near or far others are in terms of those criteria. This simple perspective enables immersive data storytelling, allowing you to examine entities in any given domain space.
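Those hover and click interactions might be wired up like this (a sketch only: the star's ID and the handler bodies are illustrative assumptions, while <a-cursor> and the mouseenter/click events are standard A-Frame):

<a-scene>
  <!-- A camera with a gaze cursor; the cursor fires events on the entity it intersects. -->
  <a-camera>
    <a-cursor></a-cursor>
  </a-camera>
  <a-sphere id="sirius" position="100 120 -60" radius="1" color="white"></a-sphere>
</a-scene>

<script>
  var star = document.querySelector('#sirius');
  // Fired when the cursor starts intersecting the star (hover/gaze).
  star.addEventListener('mouseenter', function () {
    console.log('Gazing at Sirius: show its details here');
  });
  // Fired on click, or on gaze plus a button press in VR.
  star.addEventListener('click', function () {
    console.log('Star selected: e.g. move the camera to its point of view');
  });
</script>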
My team at TWO-N makes heavy use of D3 and React in our work (among many other open source tools), both of which work seamlessly and automatically with A-Frame due to the nature of the interface that A-Frame provides. Whether you're writing or generating literal HTML, or relying on tools to help you manage the DOM dynamically, it's ultimately all about attaching elements to the document and setting their attributes; this is the browser's native content rendering engine, around which the client-side JavaScript ecosystem is built.

About the author

Paul Dechov is a visualization engineer at TWO-N.


Thinking outside the Skybox – Developing Cinematic VR Experiences for the Web

Paul Dechov
04 Nov 2016
6 min read
We as a society are on the road to mastering a powerful new medium. Even just by dipping your toes into the space of virtual reality, it is easy to find vast unexplored territory and a wealth of potential in this next stage of human communications. As with movies around the turn of the 20th century, there is a mix of elements borrowed from older media, combined with wild experimentation. Thanks to accelerating progress, it will not take too long to home in on the unique essence of immersive virtual worlds. Yet we know there are still so many untapped opportunities around creating, capturing, and delivering these worlds and shaping the experiences inside them.

Chris Milk is a pioneer in both the content and technology aspects of virtual reality filmmaking. He produced a short film, "Evolution of Verse", available (along with many others) through his company's app Within, that tells the story of the emergence of the VR medium in abstract form. It bravely tests the limits of immersive art, serves as an astounding introductory illustration of the potential for visual effects in virtual reality, and contains an exhilarating homage to humanity's initial experience of the movies.

On the Web—the platform that is the most open, connected, and accessible for both creators and audiences—we now have the opportunity to take our creations across the barrier of limitations established by the format of the screen and the norms of the platform. We can bring along those things that work well on a flat screen, and we will come to rethink them as we experiment with the newfound ability of the audience to convincingly perceive our work in the first person.

What is Virtual Reality?

First, a quick survey of consumer head-mounted displays, in rough order of increasing power and price:

* Mobile: Google Cardboard, Samsung Gear VR, Google Daydream (coming soon)
* Tethered: Sony PlayStation VR (coming soon), Oculus Rift, HTC Vive

It is helpful to analyze the medium of virtual reality in terms of its various immersive facets:

* Head-tracking: Most crucially, the angle of your head is mapped in real time to that of the virtual camera. A major hallmark of our visual experience of physical reality is turning your head in order to look around or to face something. This capability is leveraged by the category of 360° videos (note that the name evokes looking around in a circle, but generally you can look up and down as well). This is a radical departure in terms of cinematography, as directing attention via a rectangular frame is no longer an option.
* Stereoscopy: Seeing depth as a third spatial dimension is a major sensory advantage for perceiving complex objects and local surroundings. Though there is a trade-off between depth perception and perspective distortion, 3D undeniably contributes a sense of presence, and therefore a strong element of immersion. 3D can be compared with stereo sound, which was also looked upon for decades as a novelty and a gimmick before achieving ubiquity in the 1960s (on that note, positional audio is another significant factor in delivering an immersive experience).
* Isolation: Blocking out ambient light, noises, and distractions that would dilute the visual experience, akin to noise-blocking or noise-canceling headphones.
* Motion tracking: Enables so-called "room-scale VR", which allows you to move through virtual environments and around virtual objects. This can greatly heighten the fidelity of the experience, and comes with some interesting challenges. This capability is currently only available with the HTC Vive, but we will soon see it on mobile, put forward by Google's Project Tango.
* Button: Works as a mouse click in combination with a cursor in the center of your field of view.
* Motion-tracked hand controllers: Currently a feature of the HTC Vive only, but Oculus and Google's Daydream will be coming out with controllers, as will PlayStation VR using PlayStation Move controllers. Even fairly basic applications of these controllers, like Tilt Brush, have immense appeal.

Immersive Graphics on the Web

There is one sequence of "Evolution of Verse" that is reminiscent of one of my favorite THREE.js demos, of flocking birds. In pursuit of advanced hardware acceleration, this demo uses shaders in order to support the real-time navigation and animation of thousands of birds (that is, boids) at once.

A-Frame is a high-level wrapper around THREE.js that provides a simple, structured interface to immersive 3D graphics. An advanced feature of A-Frame materials allows you to register shaders (low-level drawing subroutines) and attach these materials to entities. Aside from the material, any number of other components could be added to the entity (lights, sounds, cameras, and so on), including perhaps one that encapsulates the boid navigation logic as a custom component (which is simple to write); see the sketch below.

A-Frame has great support for importing 3D objects and scenes (downloaded from Sketchfab or clara.io, for instance) using the obj-model and collada-model components. An Asset Management System is also included, for caching and preprocessing assets like models and textures or images and videos. In the future, it will also support the up-and-coming glTF standard runtime format for objects and scenes—comparable to the PNG format, but for 3D content (and with support for animation). This piece lives as an external component for now, one of many external resources available as part of the large A-Frame ecosystem.

From flocks of birds to the many other techniques explored and validated by the WebGL community, immersive cinematic storytelling on the web has a bright future ahead.
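Such a custom component might be sketched like this (the component name, schema, and straight-line motion are placeholder assumptions; AFRAME.registerComponent and its tick handler are the real A-Frame interface):

<script>
  // A minimal per-frame behavior component. Real boid logic would steer
  // toward neighbors; here we just drift along the x-axis for illustration.
  AFRAME.registerComponent('boid', {
    schema: {
      speed: { type: 'number', default: 1 }
    },
    tick: function (time, timeDelta) {
      var position = this.el.getAttribute('position');
      position.x += this.data.speed * (timeDelta / 1000);
      this.el.setAttribute('position', position);
    }
  });
</script>

<!-- Attach it to an entity alongside other components: -->
<a-entity geometry="primitive: cone" material="color: steelblue"
          boid="speed: 2"></a-entity>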
From flocks of birds to the many other techniques explored and validated by the WebGL community, immersive cinematic storytelling on the web has a bright future ahead. During the filming of "The Birds", Alfred Hitchcock found it necessary to insist on literally immersing his lead actress (and surrogate for the audience) in flocks of predatory birds. Perhaps in a more harmless way, yet motivated by similar dramatic ambition, creators of web experiences will insist on staging their work to take full advantage of the new paradigm of simulated reality, and it is no longer too early to get started.

Image credits: left, cutestockfootage.com; right, NYT VR app

About the author

Paul Dechov is a visualization engineer at TWO-N.

What is the API Economy?

Darrell Pratt
03 Nov 2016
5 min read
If you have pitched the idea of a set of APIs to your boss, you might have run across this question: "Why do we need an API, and what does it have to do with an economy?" The answer is the API economy - but that answer is more than likely to be met with more questions. So let's take some time to unpack the concept and get through some of the hyperbole surrounding the topic.

An economy (from Greek οίκος, "household", and νέμομαι, "manage") is an area of the production, distribution, or trade, and consumption of goods and services by different agents in a given geographical location. - Wikipedia

If we take the definition of economy from Wikipedia and the definition of API as an Application Programming Interface, then what we should be striving to create is a platform (as the producer of the API) that will attract a set of agents who will use that platform to create, trade, or distribute goods and services to other agents over the Internet (our geography has expanded). The central tenet of this economy is that the APIs themselves need to provide the right set of goods (data, transactions, and so on) to attract other agents (developers and business partners) who can grow their businesses alongside ours and further expand the economy.

This piece from Gartner explains the API economy very well, and sums it up neatly: "The API economy is an enabler for turning a business or organization into a platform." Let's explore a bit more about APIs and look at a few examples of companies that are doing a good job of running API platforms.

The evolution of the API economy

If you had asked someone what an API actually was 10 or more years ago, you might have received puzzled looks. The Application Programming Interface at that time was something the professional software developer used to interface with more traditional enterprise software. That evolved into the popularity of the SDK (Software Development Kit) and a better mainstream understanding of what it meant to create integrations or applications on pre-existing platforms. Think of the iOS SDK or Android SDK, and how those kits and the distribution channels that Apple and Google created have led to the explosion of the apps marketplace.

Jeff Bezos's mandate that every IT asset at Amazon have an API was a major event in the API economy timeline. Amazon continued to build APIs such as SNS, SQS, Dynamo, and many others. Each of these API components provides a well-defined service that any application can use, and together they have significantly reduced the barrier to entry for new software and service companies. With this foundation set, the list of companies providing deep API platforms has steadily increased.

How exactly does one profit in the API economy?

If we survey a small set of API platforms, we can see that companies use their APIs in different ways: to add value to their underlying set of goods, or to create a completely new revenue stream for the company.

Amazon AWS

Amazon AWS is the clearest example of an API as a product unto itself. Amazon makes available a large set of services that provide defined functionality, for which it charges based on usage of compute and storage (the pricing gets complicated). Each new service they launch addresses a new area of need and works to provide integrations between the various services.

Social APIs

Facebook, Twitter, and others in the social space run API platforms to increase the usage of their properties.
Some of the inherent value in Facebook comes from sites and applications far afield from facebook.com, and its API platform enables this. Twitter has had a more complicated relationship with its API users over time, but the API does provide many methods that allow both apps and websites to tap into Twitter content, and thus extend Twitter's reach and audience size.

Chat APIs

Slack has created a large economy of applications focused around its chat services, and has built up a large number of partners and smaller applications that add value to the platform. Slack's API approach is centered on providing a platform for others to integrate with and add content into the Slack data system. This approach is more open than the one taken by Twitter, and the fast adoption has added large sums to Slack's current valuation. Alongside the meteoric rise of Slack, the concept of the bot as an assistant has also taken off. Companies like api.ai are offering AI as a service to power these chat experiences. The service offerings that surround the bot space are growing rapidly and offer a good set of examples as to how a company can monetize its API.

Stripe

Stripe competes in the payments-as-a-service space along with PayPal, Square, and Braintree. Each of these companies offers API platforms that vastly simplify the integration of payments into websites and applications (see the sketch below). Anyone who built an e-commerce site before 2000 can and will appreciate the simplicity and power that the API economy brings to the payment industry. The pricing strategy in this space is generally usage-based and relatively straightforward.
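As a hedged sketch of what that simplicity looks like (the 2016-era Stripe Node.js API; the key and token below are Stripe's standard test values, and the handler shape is illustrative), creating a charge on the server is a single call:

```typescript
// A card token produced in the browser by Stripe.js is exchanged on the
// server for a charge; the secret key never leaves the server.
const stripe = require('stripe')('sk_test_yourKeyHere');

stripe.charges.create(
  { amount: 2000, currency: 'usd', source: 'tok_visa', description: 'Example charge' },
  (err: Error | null, charge: any) => {
    if (err) {
      console.error('Charge failed:', err.message);
      return;
    }
    console.log('Charge succeeded:', charge.id);
  }
);
```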
It takes a community to make the API economy work

There are very few companies that will succeed by building an API platform without growing an active community of developers and partners around it. While it is technically easy to create an API given the tooling available (see the sketch below), without an active support mechanism and detailed, easily consumable documentation, your developer community may never materialize. Facebook and AWS are great examples to follow here. They both actively engage with their developer communities and deliver rich sets of documentation and use cases for their APIs.
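To underline how low that technical barrier is, here is a minimal sketch of a JSON API endpoint (Express is just one of many options, and the route and data are illustrative):

```typescript
import express from 'express';

const app = express();

// A single illustrative resource. In a real platform, the value comes from
// exposing your core business data and transactions, not from the plumbing.
app.get('/v1/products/:id', (req, res) => {
  res.json({ id: req.params.id, name: 'Example product', priceCents: 4999 });
});

app.listen(3000, () => console.log('API listening on port 3000'));
```

The hard part, as this article argues, is everything around it: documentation, support, and a community that wants to build on what you expose.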

5 Mistakes Web Developers Make When Working with MongoDB

Charanjit Singh
21 Oct 2016
5 min read
MongoDB is a popular document-based NoSQL database. In this post, I am listing some mistakes I've seen developers make while working on MongoDB projects.

Database accessible from the Internet

Allowing your MongoDB database to be accessible from the Internet is the most common mistake I've found in the wild. MongoDB's default configuration used to expose the database to the Internet; that is, you could connect to the database using the URL of the server it was running on. That makes perfect sense for newcomers who might be deploying the database on a separate machine, given that it is the path of least resistance, but in the real world it is a bad default that often goes unnoticed. A database (whether MongoDB or any other) should be accessible only to your app. It should be hidden in a private local network that provides access to your app's server only. Although this default has been fixed in newer versions of MongoDB, make sure you change the config if you're upgrading from a previous version, and make sure nobody on your team deploys a database exposed to the Internet alongside the application server. If it is a requirement to have the database accessible from the open Internet, pay special attention to securing it; a whitelist of the IP addresses that alone may access the database is almost always a good idea.

Not having multiple database users with access roles

Another possible security risk is having a single MongoDB database user doing all of the work. This usually happens when developers with little knowledge, experience, or interest in databases handle the database management or setup, and when database management is treated as lesser work in smaller software shops (the kind I mostly get hired for). Well, it is not. A database is as important as the app itself; your app is most likely mainly providing an interface to the database. Having a single user to manage the database, and using that same user in the application for accessing the database, is almost never a good idea. Many times this exposes vulnerabilities that could have been avoided if the database user had limited access in the first place. NoSQL doesn't mean "secure" by default. Security should be considered when setting the database up, not left as something to be done "properly" after shipping.

Schema-less doesn't mean thoughtless

When someone asked Ronny why he chose MongoDB for his shiny new app, his response was that "it's schema-less, so it's more flexible". Schema-less can prove to be quite a useful feature, but with great power comes great responsibility. Often, I have found teams struggling with their apps because they didn't think through the structure for storing their data when they started. MongoDB doesn't require you to have a schema, but that doesn't mean you shouldn't think properly about your data structure. Rushing in without putting much thought into how you're going to structure your documents is a sure recipe for disaster. Your app might be small, simple, and easy right now, but simple apps become complicated very quickly. You owe your future self a proper, well-thought-out database schema. Most programming languages that provide an interface to MongoDB have libraries to impose some kind of schema on it; pick your favorite and use it religiously, as in the sketch below.
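In Node.js, for instance, Mongoose is one popular option. A minimal sketch (the model and its fields are illustrative, not from the original post):

```typescript
import mongoose from 'mongoose';

// Declaring the shape of a document up front forces you to think the
// structure through, even though MongoDB itself would not require it.
const userSchema = new mongoose.Schema({
  email: { type: String, required: true, unique: true },
  name: { type: String, required: true },
  createdAt: { type: Date, default: Date.now }
});

const User = mongoose.model('User', userSchema);

// Writes that do not match the schema are rejected instead of silently stored.
async function createUser(email: string, name: string) {
  return User.create({ email, name });
}
```

Documents that don't match the declared shape are rejected at write time, which catches structural drift long before it becomes a migration problem.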
Premature sharding

Sharding is an optimization, so doing it too soon is usually a bad idea. Many times a single replica set is enough to run a fast, smooth MongoDB deployment that meets all of your needs. Most of the time, a bad schema and bad indexing are the real performance bottlenecks that users try to solve with sharding; in such cases, sharding can do more harm than good, because you end up with poorly tuned shards that don't perform well either. Sharding should be considered when a specific resource, like RAM or concurrency, becomes a performance bottleneck on a particular machine. As a general rule, if your database fits on a single server, sharding provides little benefit anyway. Most MongoDB setups work successfully without ever needing sharding.

Replicas as backup

Replicas are not backups. You need to have a proper backup system in place for your database, and you should not treat replicas as a backup mechanism. Consider what would happen if you deployed code that ruins the database: the replicas will simply follow the master and replicate the same damage. There are a variety of ways to back up and restore your MongoDB, be it filesystem snapshots, mongodump, or a third-party service like MMS. Holding proper, timely fire drills is also very important: you should be confident that the backups you're making can actually be used in a real-life scenario. Practice restoring your backups before you actually need them, and verify that everything works as expected. A catastrophic failure in your production system should not be the first time you try to restore from backups (often only to find out that you've been backing up corrupt data).

About the author

Charanjit Singh is a freelance JavaScript (React/Express) developer. An avid fan of functional programming, he is on his way to adopting Haskell and PureScript as his main professional languages.

The Future is Node

Owen Roberts
09 Sep 2016
2 min read
In the past few years, we've seen Node.js explode onto the tech scene and go from strength to strength. In fact, the rate of adoption has been so great that the Node.js Foundation has noted that in the last year alone, the number of developers using the server-side platform has grown by 100%, reaching a staggering 3.5 million users. Early adopters of Node have included Netflix, PayPal, and even Walmart, and the Node community is constantly building new npm (Node Package Manager) packages to share among themselves.

With React and Angular offering the perfect accompaniment to Node in modern web applications, along with a host of JavaScript tools like Gulp and Grunt that build on Node for easier development, Node has become an essential tool for the modern JavaScript developer, one that shows no signs of slowing down or being replaced. Whether Node will still be around a decade from now remains to be seen, but with a hungry user base, thousands of user-created npm packages, and full-stack JavaScript becoming the cornerstone of most web applications, it's definitely not going away anytime soon. For now, the future really is Node.

Want to get started learning Node? Or perhaps you're looking to give your skills the boost that ensures you stay on top? We've just released Node.js Blueprints, and if you're looking to see the true breadth of possibilities that Node offers, there's no better way to discover how to apply this framework in new and unexpected ways. But why wait? With a free Mapt account you can read the first 2 chapters for nothing at all! When you're ready to continue learning just what Node can do, sign up to Mapt to get unlimited access to every chapter in the book, along with the rest of our entire video and eBook range, at just $29.99 per month!

With Node.js, it’s easy to get things done

Packt Publishing
05 Sep 2016
4 min read
Luciano Mammino is the author (alongside Mario Casciaro) of the second edition of Node.js Design Patterns, released in July 2016. He was kind enough to speak to us about his life as a web developer and working with Node.js, as well as assessing Node's position within an exciting ecosystem of JavaScript libraries and frameworks. Follow Luciano on Twitter, where he tweets from @loige.

1. Tell us about yourself – who are you and what do you do?

I'm an Italian software developer living in Dublin and working at Smartbox as a Senior Engineer on the Integration team. I'm a lover of JavaScript and Node.js, and I have a number of upcoming side projects that I am building with these amazing technologies.

2. Tell us what you do with Node.js. How does it fit into your wider development stack?

The Node.js platform is becoming ubiquitous; the range of problems that you can address with it is growing bigger and bigger. I've used Node.js on a Raspberry Pi, on desktop and laptop computers, and in the cloud, quite successfully, to build a variety of applications: command-line scripts, automation tools, APIs, and websites. With Node.js it's really easy to get things done. Most of the time I don't need to switch to other development environments or languages. This is probably the main reason why Node.js fits so well in my development stack.

3. What other tools and frameworks are you working with? Do they complement Node.js?

Some of the tools I love to use are RabbitMQ, MongoDB, Redis, and Elasticsearch. Thanks to the npm repository, Node.js has an amazing variety of libraries, which makes integration with these technologies seamless. I was recently experimenting with ZeroMQ, and again I was surprised to see how easy it is to get started with a Node.js application.

4. Imagine life before you started using Node.js. What has its impact been on the way you work?

I started programming when I was very young, so I really lived "a life" as a programmer before having Node.js. Before Node.js came out, I was using JavaScript a lot to program the frontend of web applications, but I had to use other languages for the backend. The context switching between two environments is something that ends up eating a lot of time and energy. Luckily, today with Node.js we have the opportunity to use the same language, and even to share code, across the whole web stack. I believe this is something that makes my daily work much easier and more enjoyable.

5. How important are design patterns when you use Node.js? Do they change how you use the tool?

I would say that design patterns are important in every language, and Node.js is no exception. Furthermore, due to the intrinsically asynchronous nature of the platform, a good knowledge of design patterns becomes even more important in Node.js, to avoid some of the most common pitfalls.

6. What does the future hold for Node.js? How can it remain a really relevant and valuable tool for developers?

I am sure Node.js has a pretty bright future ahead. Its popularity is growing dramatically, and it is starting to gain a lot of traction in enterprise environments that have typically been bound to famous, well-established languages like Java. At the same time, Node.js is keeping pace with the main innovations in the JavaScript world. For instance, in its latest releases Node.js added support for almost all the new language features defined in the ECMAScript 2015 standard.
This is something that makes programming with Node.js even more enjoyable, and I believe it's the strategy to follow to keep developers interested and the whole environment future-proof.

Thanks Luciano! Good luck for the future – we're looking forward to seeing how dramatically Node.js grows over the next 12 months.

Get to grips with Node.js – and the complete JavaScript development stack – by following our full-stack developer skill plan in Mapt. Simply sign up here.

Angular 2 Dependency Injection: A powerful design pattern

Mary Gualtieri
27 Jun 2016
5 min read
Dependency Injection is one of the biggest features of Angular. It allows you to inject dependencies into different components throughout your web application without needing to know how those dependencies are created. So what does this actually mean? If a component depends on a service, you do not create that service yourself. Instead, the constructor requests the service, and the framework provides it. You can view dependency injection as a design pattern or as a framework feature.

In Angular 1, you must tell the framework how to create a service. Consider a House class whose constructor builds everything the house needs: a couch, a table, and doors. There is nothing out of the ordinary about that; the class is set up to construct a house object when needed. The problem, however, is that the constructor assigns the needed dependencies itself, and so it knows how these objects are created. What is the big deal, you may ask? First, this makes the code very hard to maintain; second, it makes the code even harder to test.

Now let's rewrite the example so that the dependency creation is moved out of the constructor, and the constructor is extended to expect all of the needed dependencies instead (both versions appear in the sketch at the end of this section). This is significant because when you want to create a new house object, all you have to do is pass the needed dependencies to the constructor. This decouples the dependencies from your class, allowing you to pass mocked dependencies when you write a test.

Angular 2 has made a drastic, but welcome, change to dependency injection. It provides more control for maintainability, it is easier to test, and it focuses more on how these dependencies are obtained. Dependency injection in Angular 2 consists of three things:

- Injector: The injector object exposes the APIs for creating instances of dependencies.
- Provider: A provider tells the injector how to create the instance of a dependency, by taking a token and mapping it to a factory function that creates an object.
- Dependency: A dependency is the type of which an object should be created.

What does this look like in code? You import an injector from Angular 2, which exposes static APIs for creating injectors. The resolveAndCreate() function is a factory that creates an injector from a list of providers. However, you must ask yourself: how does the injector know which dependencies are needed in order to represent a house? Easy: you import Inject from the framework and apply the decorator to each of the parameters in the constructor. Attaching the Inject decorator to the House class's constructor parameters produces metadata that the dependency injection system reads. To put it simply, you tell the dependency injection system that the first constructor parameter should be of the Couch type, the second of the Table type, and the third of the Doors type. The class declares its dependencies, and the dependency injection system can read this information whenever the application needs to create a House object. The resolveAndCreate() method then creates an injector from an array of bindings; the passed-in bindings, in this case, are the types of the constructor parameters.
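The original post illustrated this with code screenshots that have not survived; as a stand-in, here is a minimal hedged sketch of the House example as it would look in 2016-era Angular 2 (the class names follow the article; note that resolveAndCreate() moved between Injector and ReflectiveInjector across pre-release versions):

```typescript
import { Inject, ReflectiveInjector } from '@angular/core';

class Couch {}
class Table {}
class Doors {}

// Before: the class creates its own dependencies, so it is hard to test.
class HardcodedHouse {
  couch = new Couch();
  table = new Table();
  doors = new Doors();
}

// After: the constructor declares what it needs; the injector provides it.
class House {
  constructor(
    @Inject(Couch) public couch: Couch,
    @Inject(Table) public table: Table,
    @Inject(Doors) public doors: Doors
  ) {}
}

// resolveAndCreate() builds an injector from a list of providers.
const injector = ReflectiveInjector.resolveAndCreate([House, Couch, Table, Doors]);
const house = injector.get(House); // all dependencies resolved for us
```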
You might be wondering how dependency injection works when you are building with the framework itself. Luckily, you do not have to create injectors manually when you build Angular 2 components: the developers behind Angular 2 have created an API that hides the injector machinery. Suppose we have a very basic component, and we then expand it to depend on a service class. Once you add such a class, you need to make it available in your application as an injectable. You do this by passing provider configurations to your application injector. bootstrap() actually takes care of creating the root injector for you; it takes a list of providers as its second argument and passes it straight to the injector when it is created.

One last thing to consider when using dependency injection: what do you do if you want a different binding configuration in a specific component? You simply add a providers property to the component, as in the sketch below.
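Here is a minimal hedged sketch of both ideas together (the HouseService name is illustrative, and bootstrap() from @angular/platform-browser-dynamic reflects the pre-NgModule API this post was written against):

```typescript
import { Component, Injectable } from '@angular/core';
import { bootstrap } from '@angular/platform-browser-dynamic';

@Injectable()
class HouseService {
  describe() { return 'A house with a couch, a table, and doors.'; }
}

@Component({
  selector: 'house-app',
  template: '<p>{{ description }}</p>',
  // A component-level providers list configures a child injector created
  // for this component, shadowing any binding from the root injector.
  providers: [HouseService]
})
class HouseApp {
  description: string;
  constructor(houseService: HouseService) {
    this.description = houseService.describe();
  }
}

// bootstrap() creates the root injector; the second argument is the list
// of application-wide providers.
bootstrap(HouseApp, [HouseService]);
```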
Remember that the providers property does not construct the instances that will be injected; it configures a child injector that is created for the component. To conclude, Angular 2 introduces a new dependency injection system, one that allows for more control to maintain your code, makes it easier to test, and lets you rely on interfaces. The new dependency injection system is built into Angular 2 and exposes a single API for injecting dependencies into components.

About the author

Mary Gualtieri is a full-stack web developer and web designer who enjoys all aspects of the Web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.