
Tech Guides - Front-End Web Development

54 Articles

Angular 2 Dependency Injection: A powerful design pattern

Mary Gualtieri
27 Jun 2016
5 min read
From 7th to 13th November we're celebrating two of the hottest tools in the JavaScript universe. Check out our best Angular and React content here - and save up to 80%!

Dependency Injection is one of the biggest features of Angular. It allows you to inject dependencies into different components throughout your web application without needing to know how these dependencies are created. So what does this actually mean? If a component depends on a service, you do not create this service yourself. Instead, you have the constructor request this service, and the framework then provides it to you. You can view dependency injection as a design pattern or framework.

In Angular 1, you must tell the framework how to create a service. Let's take a look at a code example: a House class whose constructor builds its own Couch, Table, and Doors objects. There is nothing out of the ordinary with this sample code. The class is set up to construct a house object when needed. However, the problem with this code example is that the constructor assigns the needed dependencies, and it knows how these objects are created. What is the big deal, you may ask? First, this makes the code very hard to maintain, and second, the code is even harder to test.

Let's rewrite the code example so the dependencies are passed in instead. What just happened? The dependency creation is moved out of the constructor, and the constructor is extended to expect all of the needed dependencies. This is significant because when you want to create a new house object, all you have to do is pass all of the needed dependencies to the constructor. This decouples the dependencies from your class, allowing you to pass mocked dependencies when you write a test.

Angular 2 has made a drastic, but welcome, change to dependency injection. Angular 2 provides more control for maintainability, and it is easier to test. The new version of Angular focuses more on how to get these dependencies. Dependency injection consists of three things: an injector, a provider, and a dependency. The injector object exposes the APIs that let you create instances of dependencies. A provider tells the injector how to create the instance of a dependency; it does this by taking a token and mapping it to a factory function that creates an object. A dependency is the type for which an object should be created.

What does this look like in code? You have to import an injector from Angular 2 in order to expose the static APIs that create injectors. The resolveAndCreate() function is a factory function that creates an injector and takes a list of providers. However, you must ask yourself, "How does the injector know which dependencies are needed in order to represent a house?" Easy: you import Inject from the framework and apply the decorator to all of the parameters in the constructor. By attaching the Inject decorator to the House class, the metadata is used by the dependency injection system. To put it simply, you tell the dependency injection system that the first constructor parameter should be of the Couch type, the second of the Table type, and the third of the Doors type. The class declares the dependencies, and the dependency injection system can read this information whenever the application needs to create a House object. If you take a look at the resolveAndCreate() method, it creates an injector from an array of bindings. The passed-in bindings, in this case, are the types from the constructor parameters. You might be wondering how dependency injection in Angular 2 works in the framework.
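As a rough illustration of the House example being described, here is a TypeScript sketch. The class shapes are assumptions, and the import path and injector class changed across Angular 2's pre-release versions (resolveAndCreate() moved from Injector to ReflectiveInjector around the release candidates), so treat it as a sketch rather than a definitive listing.

import { Inject, ReflectiveInjector } from '@angular/core'; // 'angular2/core' in earlier betas

class Couch {}
class Table {}
class Doors {}

// Instead of calling `new Couch()` inside the constructor, the class
// declares what it needs and lets the injector supply the instances.
class House {
  constructor(
    @Inject(Couch) public couch: Couch,
    @Inject(Table) public table: Table,
    @Inject(Doors) public doors: Doors
  ) {}
}

// The injector is created from a list of providers and asked for a House;
// the bindings tell it how to satisfy each constructor parameter.
const injector = ReflectiveInjector.resolveAndCreate([Couch, Table, Doors, House]);
const house = injector.get(House);

Swapping Couch for a mock provider in that array is all a unit test needs to do to replace the dependency.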
Luckily, you do not have to create injectors manually when you build Angular 2 components. The developers behind Angular 2 have created an API that hides the injector system from you when you build components. Let's explore how this actually works. You start with a very basic component; but what happens when you expand it? Once you add a class such as a service, you need to make it available to your application as an injectable. Do this by passing provider configurations to your application injector. The bootstrap() function actually takes care of creating the root injector for you: it takes a list of providers as a second argument and passes them straight to the injector when it is created. One last thing to consider when using dependency injection is: what do you do if you want a different binding configuration in a specific component? You simply add a providers property to the component. Remember that the providers property does not construct the instances that will be injected; rather, a child injector is created for the component. Both patterns are sketched below.

To conclude, Angular 2 introduces a new dependency injection system. The new dependency injection system allows for more control to maintain your code, to test it more easily, and to rely on interfaces. The new dependency injection system is built into Angular 2 and has only one API for dependency injection into components.
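A minimal sketch of those two patterns, again with assumed class names and an import path that varies between Angular 2 pre-release versions:

import { Component, Injectable } from '@angular/core';
import { bootstrap } from '@angular/platform-browser-dynamic';

@Injectable()
class HouseService {}

@Component({
  selector: 'my-app',
  template: '<h1>Houses</h1>'
})
class AppComponent {
  constructor(private houses: HouseService) {}
}

// Application-wide binding: the provider list is handed to bootstrap(),
// which registers it on the root injector it creates for you.
bootstrap(AppComponent, [HouseService]);

// Component-level binding: this component gets its own child injector,
// so it (and its children) receive a separate HouseService instance.
@Component({
  selector: 'house-list',
  template: '<p>...</p>',
  providers: [HouseService]
})
class HouseListComponent {
  constructor(private houses: HouseService) {}
}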
About the author Mary Gualtieri is a full stack web developer and web designer who enjoys all aspects of the Web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.


Beating jQuery: Making a Web Framework Worth its Weight in Code

Erik Kappelman
20 Apr 2016
5 min read
Let me give you a quick disclaimer. This is a bit of a manifesto. Last year I started a little technology company with some friends of mine. We were lucky enough to get a solid client for web development right away. He was an author in need of a blogging app to communicate with the fans of his upcoming book. In another post I have detailed how I used Angular.js, among other tools, to build this responsive, dynamic web app. Using Angular.js is a wonderful experience and I would recommend it to anyone. However, Angular.js really only looks good by comparison. By this I mean, if we allow any web framework to exist in a vacuum and not simply rank them against one another, they are all pretty bad. Before you gather your pitchforks and torches to defend your favorite flavor, let me explain myself.

What I am arguing in this post is that many of the frameworks we use are not worth their weight in code. In other words, we add a whole lot of code to our apps when we import the frameworks, and then in practice using the framework is only a little bit better than using jQuery, or even pure JavaScript. And yes, I know that using jQuery means including a whole bunch of code in your web app, but frameworks like Angular.js are often built on top of jQuery anyway. So, the weight of jQuery seems to be a necessary evil.

Let's start with a simple HTTP request for information from the backend. This is what it looks like in Angular.js: $http.get('/dataSource').success(function(data) { $scope.pageData = data; }); Here is a similar request using Ember.js: App.DataRoute = Ember.Route.extend({ model: function(params) { return this.store.find('data', params.data_id); } }); Here is a similar jQuery request: $.get( "ajax/stuff.html", function( data ) { $( ".result" ).html( data ); alert( "Load was performed." ); });

It's important for readers to remember that I am a front-end web developer. By this, I mean I am sure there are complicated, technical, and valid reasons why Ember.js and Angular.js are far superior to using jQuery. But, as a front-end developer, I am interested in speed and simplicity. When I look at these HTTP requests and see that they are overwhelmingly similar, I begin to wonder if these frameworks are actually getting any better.

One of the big draws to Angular.js and Ember.js is the use of handlebars to ease the creation of dynamic content. Angular.js using handlebars looks something like this: <h1> {{ dynamicStuff }} </h1> This is great because I can go into my controller and make changes to the dynamicStuff variable and it shows up on my page. However, the following accomplishes a similar task using jQuery: $(function () { var dynamicStuff = "This is dog"; $('h1').html( dynamicStuff ); });

I admit that there are many ways in which Angular.js or Ember.js make developing easier. DOM manipulation definitely takes less code and overall the development process is faster. However, there are many times that the limitations of the framework drive the development process. This means that developers sacrifice or change functionality simply to fit the framework. Of course, this is somewhat expected. What I am trying to say with this post is that if we are going to sacrifice load times and constrict our development methods in order to use the framework of our choice, can they at least be simpler to use? So, just for the sake of advancement, let's think about what the perfect web framework would be able to do. First of all, there needs to be less setup.
The brevity and simplicity of the HTTP request in Angular.js is great, but it requires injecting the correct dependencies in multiple files. This adds stress, opportunities to make mistakes, and development time. So, instead of requiring the developer to grab each specific tool for each specific implementation, what if the framework took care of that for you? By this I mean that if I were to make an HTTP request like this: http('targetURL', get, data) then, when the source is compiled or interpreted, the needed dependencies for this request would be dynamically brought into the mix. This way we can make a simpler HTTP request and we can avoid the hassle of setting up the dependencies.

As far as DOM manipulation goes, the handlebars seem to be about as good as it gets. However, there need to be better ways to target individual instances of repeated elements, such as the <p> tags holding the captions in a photo gallery. The current solutions for problems like this one are overly complex, especially when the issue involves one of the most common things on the internet: a photo gallery.

About the Author As you can see, I am more of a critic than a problem solver. I believe the issues I bring up here are valid. As we all become more and more entrenched in the Internet of Things, it would be nice if the development process caught up with the standards of ease that end users demand.


Adblocking and the Future of the Web

Sam Wood
11 Apr 2016
6 min read
Kicked into overdrive by Apple's iOS 9 infamously shipping with adblocking options for Safari, the content creators of the Internet have woken up to the serious challenge of ad-blocking tech. The AdBlock+ Chrome extension boasts over 50 million active users. I'm one of them. I'm willing to bet that you might be one too. AdBlock use is rising massively and globally and shows no sign of slowing down. Commentators have blamed the web-reading public, have declared that web publishers brought this on themselves, and have even made worryingly convincing arguments that adblocking is a conspiracy by corporate supergiants to kill the web as we know it. They all agree on one point, though - the way we present and consume web content is going to have to evolve or die. So how might adblocking change the web?

We All Go Native One of the most proposed and most popular solutions to the adblocking crisis is to embrace "native" advertising. Similar to sponsorship or product placement in other media, native advertising interweaves its sponsor into the body of the content piece. By doing so, an advert is made immune to the traditional scripts and methods that identify and block net ads. This might be a thank-you note to a sponsor at the end of a blog, an 'advertorial' upsell of a product or service, or corporate content marketing where a company produces and promotes its own content in a bid to garner your attention for its paid products. (Just like this blog. I'm afraid it's content marketing. Would you like to buy a quality tech eBook? How about the Web Developer's Reference guide - your Bible for everything you need to know about web dev! Help keep this Millennial creative in a Netflix account and pop culture tee-shirts.)

The Inevitable Downsides It turns out nobody wants to read sponsored content - only 24% of readers scroll down on a native ad. A 2014 survey by Contently revealed two-thirds of respondents saying they felt deceived by sponsored advertising. We may see this changing; as the practice becomes more mainstream, readers may come to realize it does not have to impact quality or journalistic integrity. But it's a worrying set of statistics for anyone who hoped direct advertising might save them from the scourge of adblock.

The Great App Exodus There's an increasingly popular prediction that adblocking may lead to a great exodus of content from browser-based websites to a scattered app-based ecosystem. We can already see the start of this movement. Every major content site bugs you to download its dedicated app, where ads can live free of fear. If you consume most of your mobile media through Snapchat Discover channels, through Facebook mobile sharing, or even through IM services like Telegram, you'll be reading your web content in that app's dedicated built-in reader. That reader is, of course, free of adblocking extensions.

The Inevitable Downsides The issue here is one of corporate monopoly. Some journalists have criticized Facebook Instant (the tech which has Facebook host articles from popular news sites so they load faster) for giving Facebook too much power over the news business. Vox's Matthew Yglesias predicts a restructuring where "instead of digital media brands being companies that build websites, they will operate more like television studios — bringing together teams that collaborate on the creation of content, which is then distributed through diverse channels that are not themselves controlled by the studio."
The control that these platforms could exert raises troubling concerns for the future of the Internet as a bastion of free and public speech. User Experience with Added Guilt Alongside adding advertising <script> tags, web developers are increasingly creating sites that detect if you're using AdBlocking software and punishing you accordingly. This can take many forms - from a simple plea to be put on your whitelist in order to keep the servers running, to the cruel and inhuman: Some sites are going as far as actively blocking content for users using adblockers. Trying accessing an article on the likes of Forbes or CityAM with an adblocker turned on. You'll find yourself greeted with an officious note and a scrambled page that refuses to show you the goods unless you switch off the blocker. The Inevitable Downsides No website wants to be in a position where it has to beg or bully their visitors. Whilst your committed readers might be happy to whitelist your URL, antagonizing new users is a surefire way to get them to bounce from the site. Sadly, sabotaging their own sites for ad blocking visitors might be one of the most effective ways for 'traditional' web content providers to survive. After all, most users block ads in order to improve their browsing experience. If the UX of a site on the whitelist is vastly superior to the UX under adblock, it might end up being more pleasant to browse with the extension off. An Uneasy Truce between Adblockers and Content In many ways adblocking was a war that web adverts started. From the pop-up to the autoplaying video, web ad software has been built to be aggressive. The response of adblockers is an indiscriminate and all-or-nothing approach. As Marco Arment, creator of the Peace adblocking app, notes "Today’s web readers [are so] fed up that they disable all ads, or even all Javascript. Web developers and standards bodies couldn’t be more out of touch with this issue, racing ahead to give browsers and Javascript even more capabilities without adequately addressing the fundamental problems that will drive many people to disable huge chunks of their browser’s functionality." Both sides need to learn to trust one another again. The AdBlock+ Chrome extension now comes with an automatic whitelist of sites; 'guilt' website UX works to remind us that a few banner ads might be the vital price we pay to keep our favorite mid-sized content site free and accessible. If content providers work to restore sanity to the web on their ends, then our need for adblocking software as users will diminish accordingly. It's a complex balance that will need a lot of good will from both 'sides' - but if we're going to save the web as we know it, then a truce might be necessary. Building a better web? How about checking out our Essential Web Dev? Get five titles for only $50!  


The Future as a Service

Edward Gordon
07 Apr 2016
5 min read
"As a Service" services (service2?) generally allow younger companies to scale quickly and efficiently. A lot of the hassle is abstracted away from the pain of implementation, and they allow start-ups to focus on the key drivers of any company – product quality and product availability. For less than the cost of proper infrastructure investment, you can have highly available, fully distributed, buzzword-enabled things at your fingertips to start running wild with. However, "as a Service" providers feel like they're filling a short-term void rather than building a long-term viable option for companies. Here's why.

1. Cost The main driver of SaaS is that there are lower upfront costs. But it's a bit like the debit card versus credit card debate; if you have the money you can pay for it upfront and never worry about it again. If you don't have the money but need it now, then credit is the answer – and the associated continued costs. For start-ups, a perceived low-cost model is ideal at first glance. With that, there's the downside that you'll be paying out of your aaS for the rest of your service with them, and moving out of the ecosystem that you thought looked so robust 4 years ago will give the sys admin that you have to hire in to fix it nightmares. Cost is a difficult thing to balance, but there are companies still happily running on SQL Server 2005 without any problems; a high upfront cost normally means that it's going to stick around for ages (you'll make it work!). To be honest, for most small businesses, investment in a developer who can stitch together open source technologies to suit your needs will be better than running to the closest spangly Service provider. However, aaS does mean you don't need System Administrators stressing about ORM-generated queries.

2. Ownership of data An under-discussed but vital issue that lies behind the aaS movement is the ownership of data, and what this means to companies. How secure are the bank details of your clients? How does the aaS provider secure against attacks? Where does this fit in terms of compliance? To me, the risks associated with giving your data to another company to keep are too high to justify, even if it's backed up by license agreements and all types of unhackable SSL things (#Heartbleed). After all, a bank is more appealing to thieves than a safe behind a picture in your living room. Probably*. As a company, regardless of size, your integrity is all. I think you should own that.

3. The Internet as kingmaker We once had an issue at the Packt office where, during a desk move, someone plugged an Internet cable (that's the correct term for them, right?) from one port to another, rather than into their computer. The Internet went down for half the day without anyone really knowing what was going on. Luckily, we still had local access to stuff – chapters, databases, schedules, and so on. If we were fully bought into the cloud, we would have lost a collective 240 man-hours from one office because of an honest mistake. Using the Internet as your only connection point to the data you work with can, and will, have consequences for businesses that work with time-critical pieces of data. This leaves an interesting space open that, as far as I'm aware, very few "as a Service" providers have explored: hybrid cloud.
If the issue, basically, is the Internet and what cloud storage means to you operationally and in terms of data compliance, then a world where you can keep sensitive and "critical" data local while keeping bulk data with your cloud provider lets you leverage the benefits of both worlds. The advantages of speed and lack of overheads would still be there, as well as the added security of knowing that you're still "owning" your data and your brand reputation. Hybrid clouds generally seem to be an emergent solution in the market at large. There are even solutions now on Kickstarter that provide you with a "cloud" where you own your data. Lovely. Hell, you can even make your own PaaS with Chef and Docker. I could go on.

The quite clear popularity of "as a Service" products means there's value in the services they're offering. At the moment, though, there are enough problems inherent in adoption to believe that they're a stop-gap to something more finite. The future, I think, lies away from the black and white of aaS and on-premises software. There are advantages in both, and as we continue to develop services and solutions that blend the two, I think we're going to end up at a more permanent solution to the argument.

*I don't actually advocate the safe-behind-a-picture method. More of a loose floorboard man myself.

From 4th-10th April, save 50% on 20 of our top cloud titles. From AWS to Azure and OpenStack - and even Docker for good measure - learn how to build the services of tomorrow. If one isn't enough, grab 5 for just $50! Find them here.


Angular.js, Node.js, and Firebase: the startup web developer's toolkit

Erik Kappelman
09 Mar 2016
7 min read
So, you've started a web company. You've even attracted one or two solid clients. But now you have to produce, and you have to produce fast. If you've been in this situation, then we have something in common. This is where I found myself a few months ago. A caveat: I am a self-taught web developer in an absolute sense. Self-taught or not, in August of 2015 I found myself charged with creating a fully functional blogging app for an author. Needless to say, I was in over my head. I was aware of Node.js, because that had been the backend for the very simple static content site my company had produced first. I was aware of database concepts and I did know a reasonable amount of JavaScript, but I felt ill-prepared to pull all of these tools together in a cohesive fashion. Luckily for me it was 2015 and not 1998. Today, web developers are blessed with tools that make the development of websites and web apps a breeze. After some research, I decided to use Angular.js to control the frontend behavior of the website, Node.js with Express.js as the backend, and Firebase to hold the data. Let me walk you through the steps I used to get started.

First of all, if you aren't using Express.js on top of Node.js for your backend in development, you should start. Node.js was written in C, C++, and JavaScript by Ryan Dahl in 2009. This multiplatform runtime environment for JavaScript is fast, open source, and easy to learn, because odds are you already know JavaScript. Using Express.js and the Express generator in concert with Node.js makes development quite simple. Express.js is Node middleware. In simple terms, Express.js makes your life a whole lot easier by doing most of the work for you.

So, let's build our backend. First, install Node.js and NPM on your system. There are a variety of online resources to complete this step. Then, using NPM, install the Express application generator. $ npm install express-generator -g Once we have Node.js and the Express generator installed, get to your development folder and execute the following command to build the skeleton of your web app's backend: $ express app-name -e I use the -e flag to set the middleware to use ejs files instead of the default jade files. I prefer ejs to jade, but you might not. This command will produce a subdirectory called app-name in your current directory. If you navigate into this directory and type the commands $ npm install $ npm start and then navigate in a browser to http://localhost:3000 you will see the basic welcome page auto-generated by Express. There are thousands of great things about Node.js and Express.js and I will leave them to be discovered by you as you continue to use these tools.

Right now, we are going to get Firebase connected to our server. This can serve as general instructions for installing and using Node modules as well. Head over to firebase.com and create a free account. If you end up using Firebase for a commercial app you will probably want to upgrade to a paid account, but for now the starter account should be fine. Once you get your Firebase account set up, create a Firebase instance using their online interface. Once this is done, get back to your backend code to connect the Firebase to your server. First install the Firebase Node module. $ npm install firebase --save Make sure to use the --save flag because this puts a new line in the package.json file located in the root of the web server.
This means that if you type npm install, as you did earlier, NPM will know that you added firebase to your web server and will install it if it is not already present. Now open the index.js file in the routes folder in the root of your Node app. At the top of this file, put in the line var Firebase = require('firebase'); This pulls the Firebase module you installed into your code. Then, to create a connection to the account you just created on Firebase, put in the following line of code: var FirebaseRef = new Firebase("https://APP-NAME.firebaseio.com/"); Now, to take a snapshot in JSON of your Firebase and store it in an object, include the following lines: var FirebaseContent = {}; FirebaseRef.on("value", function(snapshot) { FirebaseContent = snapshot.val(); }, function(errorObject) { console.log("The read failed: " + errorObject.code); }); FirebaseContent is now a JavaScript object containing your complete Firebase data.

Now let's get Angular.js hooked up to the frontend of the website, and then it's time for you to start developing. Head over to angularjs.org and download the source code or get the CDN. We will be using the CDN. Open the file index.ejs in the views directory in your Node app's root. Modify the <head> tag, adding the CDN. <head> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.0-rc.1/angular.min.js"></script> </head> This allows you to use the Angular.js tools. Angular.js uses controllers to control your app. Let's make your Angular app and connect a controller. Create a file called myApp.js in your public/javascripts directory. In myApp.js include the following: angular.module("myApp", []); This file will grow, but for now this is all you need. Now create a file in the same directory called myController.js and put this code into it: angular.module("myApp").controller('myController', ['$scope', function($scope) { $scope.jsVariable = 'Controller is working'; }]); Now modify the index.ejs file again. <html ng-app="myApp"> <head> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.0-rc.1/angular.min.js"></script> <script src="/javascripts/myApp.js"></script> <script src="/javascripts/myController.js"></script> </head> <body ng-controller="myController"> <h1> {{jsVariable}} </h1> </body> </html> If you start your app again and go back to http://localhost:3000 you should see that your controller is now controlling the contents of the first heading. This is just a basic setup and there is much more you will learn along the way. Speaking from experience, taking the time to learn these tools and put them into use will make your development faster and easier, and your results will be of much higher quality.
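To tie the two halves together, the route file can also hand the Firebase snapshot to the browser. This is only a sketch of one way to do it; the /pageData endpoint is an assumption and not part of the original walkthrough, but it shows how the pieces could connect:

// routes/index.js
var express = require('express');
var Firebase = require('firebase');
var router = express.Router();

var FirebaseRef = new Firebase("https://APP-NAME.firebaseio.com/");
var FirebaseContent = {};

// Keep a local, up-to-date copy of the Firebase data.
FirebaseRef.on("value", function(snapshot) {
  FirebaseContent = snapshot.val();
}, function(errorObject) {
  console.log("The read failed: " + errorObject.code);
});

// Expose the snapshot as JSON so the Angular side can request it,
// for example with $http.get('/pageData') in a controller.
router.get('/pageData', function(req, res) {
  res.json(FirebaseContent);
});

module.exports = router;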
About the Author Erik was born and raised in Missoula, Montana. He attended the University of Montana and received a degree in economics. Along the way, he discovered the value of technology in our modern world, especially to businesses. He is currently seeking a master's degree in economics from the University of Montana and his research focuses on the economics of gender empowerment in the developing world. During his professional life, he has worn many hats – from cashier, to barista, to community organizer. He feels that Montana offers a unique business environment that is often best understood by local businesses. He started his company, Duplovici, with his friends in an effort to meet the unique needs of Montana businesses, non-profits, and individuals. He believes technology is not simply an answer to profit maximization for businesses: by using internet technologies we can unleash human creativity through collective action and the arts, as well as business ventures. He is also the proud father of two girls and has a special place in his heart for getting girls involved with technology and business.


AngularJS: The Love Affair of the Decade

Richard Gall
05 Feb 2016
6 min read
AngularJS stands at the apex of the way we think about web development today. Even as we look ahead to Angular 2.0, the framework serves as a useful starting point for thinking about the formation of contemporary expectations about what a web developer actually does and the products and services they create. Notably (for me at least) Angular is closely tied up with Packt's development over the past few years. It's had an impact on our strategic focus, forcing us to think about our customers in new ways.

Let's think back to the world before AngularJS. This was back in the days when Backbone.js meant something, when Knockout was doing the rounds. As this article from October has it, AngularJS effectively took advantage of a world suffering from 'framework fatigue'. It's as if there was a 'framework bubble', and it's only when that bubble burst that the way forward became clearer. This was a period of experimentation and exploration; improvement and efficiency were paramount, but a symptom of this was the way in which trends – some might say fads – took hold of the collective imagination. This period was a 'framework' bubble which, I'd suggest, prefigures the startup bubble, the period in which we're living today. Developers were looking for new ways of doing things; they wanted to be more efficient, their projects more scalable, fast, and robust. All those words that are attached to development (in both senses of the word) took on particular urgency.

As you might expect, this unbelievable pace of growth and change was like catnip for Packt. This insatiable desire for new tools was something that we could tap into, delivering information and learning materials on even the most niche new tools. It was exciting. But it couldn't last. It was thanks to AngularJS that this changed. Ironically, if AngularJS burst the framework bubble, ending what seemed like an endless stream of potential topics to cover, it also supplied us with some of our most popular titles. AngularJS Web Application Development Cookbook, for example, was a huge success. Written by Matt Frisbie, it helped us to forge a stronger relationship with the AngularJS world. It was weird – its success also brought an end to a very exciting period of growth, where Packt was able to reach out to new customers, small communities that other publishers could not. But we had to grow up. AngularJS was like a friend's wedding; it made us realise that we needed to become more mature, more stable.

But why, we should ask, was AngularJS so popular? Everyone is likely to have their own different story, their own experience of adopting AngularJS, and that, perhaps, is precisely the point. Brian Rinaldi, in the piece to which I refer above, notes a couple of things that made Angular a framework to which people could commit. Its ties with Google, for example, gave it a mark of authority and reliability, while its ability to integrate with other frameworks means developers still have the flexibility to use the tools they want while still having a single place to which they could return. Brian writes: The point is, all these integrations not only made the choice of Angular easier, but make leaving harder. It's no longer just about the code I write, but Angular is tied into my entire development experience. Experience is fundamental here. If the framework bubble was all about different ways of doing the same thing faster and more effectively, today the reverse is true. Developers want to work in one way, but to be able to do lots of things.
It's a change in priorities; the focus of the modern web developer in 2016 has changed. The challenges are different, as mobile devices, SPAs, cloud, and personalization have become fundamental issues for web developers to reckon with. Good web developers look beyond the immediacy of their project and need to think carefully about users and about how they can deliver a great product or service. That's what we've found at Packt. The challenges faced by the customers we serve are no longer quite so transparent or simple. If, just a few years ago, we relied upon the simple need to access information about a new framework, today the situation is more nuanced. Many of the challenges are due to changing user behaviour, a fragmentation of needs and contexts. For example, maybe you want to learn responsive web design? Or need to build a mobile app? Of course, these problems haven't just appeared in the last 12 months, but they are no longer additional extras; they are central to success. It's these problems that have had a part in causing the startup bubble – businesses solving (or, if they're really good, disrupting) customer needs with software.

A framework such as React might be seen as challenging AngularJS. But despite its dedicated, almost evangelical core of support, it's nevertheless relatively small. And it would also be wrong to see the emergence of React (alongside other tools, including Meteor) as a return to the heady days of the framework bubble. Instead it has grown out of a world steeped in Angular – it is, remember, a tool designed to build a very specific type of application. The virtual DOM, after all, is an innovation that helps deliver a truly immediate and fast user experience. The very thing that makes React great is why it won't supplant Angular – why would it even want to? If you do one thing, and do it well, you're adding value that people couldn't get from anywhere else.

Fear of obsolescence – that's the world AngularJS entered, and the world in which Packt grew. But today, the greatest fear isn't so much obsolescence, it's 'Am I doing the right thing for my users? Are my customers going to like this website – this new app?' So, as we await Angular 2.0, don't forget what AngularJS does for you – don't forget the development experience and don't forget to think about your users. Packt will be ready when you want to learn 2.0 – but we'll also still have the insights and guidance you need to do something new with AngularJS. Progress and development aren't linear; it's never a straight line. So don't be scared to explore, rediscover what works. It's not always about what's new, it's about what's right for you. Save up to 70% on some of our very best web development titles from 11th to 17th April. From Flask to React to Angular 2, it's the perfect opportunity to push your web development skills forward. Find them here.

Angular 2 in the new world of web dev

Owen Roberts
04 Feb 2016
5 min read
This week at Packt we're all about Angular, and with the release of Angular 2 just on the horizon there's no better time to be an Angular user. Our first book on Angular was Mastering Web Application Development with AngularJS back in 2013, and it's amazing to see how the JS landscape has become a completely different place from what it was just 3 or 4 years ago. How so? Well, Backbone was expected to lord over other frameworks as The Top Dog, while others like Ember and Knockout were carving out their own respectable niches and fans. When Angular started to pick up steam it was seen as a breath of fresh air thanks to its simplicity and host of features. Compared to the more niche-driven frameworks at the time, the appeal of the Google-led powerhouse drew developers from all over to give it a go, and managed to keep them hooked.

Of course, web dev is a different world than it was in 2013. We've seen the growth of full-stack JS development, JS promises are coming into wider use, components are the latest step in building web apps, and a host of new frameworks and libraries have burst onto the scene as older ones begin to fade into the background. Libraries like React and Polymer are fantastic alternatives to frameworks for developers who want to pick and choose the best stuff for their apps, while Ember has gone from strength to strength in the last few years with a diehard fanbase. A different world means that rewriting Angular from the ground up for 2.0 makes sense, but it's not without its risks too. So, what does Angular need to avoid falling behind? Here are a few ideas (and hopes!).

Ease-of-use One of Angular's greatest strengths was how easy it was to use; not just in the actual coding, but also in integration. Angular has always had that bonus over the competition – one of the biggest reasons it became so popular was because so many other projects allowed for easy Angular integration. However, the other side of the coin was Angular's equally difficult learning curve; before the books and tutorials found their way onto the market, everyone was trying to find as much as they could about Angular in order to get the most out of the more complex or difficult parts of the framework. With 2.x being a complete rewrite, every developer is back in the same place again; what the Angular team needs to ensure is that Angular is just as welcoming as its new competition - React, Ember, and even Polymer offer a host of ways to get into their development mindsets. Angular needs to do the same.

Debugging Does anyone actually like debugging? My current attempts at Python usually grind to a halt when I reach the debugging phase, and for a lot of developers there's always that whisper of "Urgh" under their breath when they finally get around to bugs. Angular isn't any different, and you can find a lot of articles and Stack Overflow questions all about debugging in Angular. For what it's worth, the Angular team seems to have learnt from their experiences with 1.x. They've worked directly with the team at Rangle.io to create Batarangle, which is a Chrome plugin that inspects Angular 2 apps. Only time will tell how well debugging in Angular will work for every developer, but this is the sort of thing that the Angular team needs to give developers – work with other teams to build better tools that help developers breeze through the more difficult tasks.

The future devs vs the old With the release of Angular 2 in the coming months we're going to see React and Angular 2 fight for dominance as the de facto framework on the JS market.
The rewrite of Angular is arguably the biggest weakness and strength that Angular 2 offers. For previous Angular 1.x users there are two routes you can go down: take the jump to Angular 2 and learn everything again, or decide the clean slate is an opportunity to give React a try – maybe even stick with it. What does Angular need to do, after the release of 2, to get old users back on the Angular horse? A few of the writers that I've worked with in the past have talked about Angular as the Lego of the JS world – it's simpler to pick up and everything fits snugly together. There's a great simplicity in building good-looking Angular apps – the team needs to remind more jaded Angular 1.x fans that 2.x is the same Angular they love, rebuilt for the new challenges of 2016 onwards. It's still fun Lego, but shinier. If you're new to the framework and want to see why it's become such a beloved framework, then be sure to check out our Angular tech page; this page has all our best eBooks and videos, as well as the chance to preorder our upcoming Angular 2 titles and download the chapters as soon as they're finished.


AngularJS 2.0 is a tempest we should all embrace

Ed Gordon
19 Nov 2015
5 min read
2016 will be the year of AngularJS 2.0 and it’s going to be awesome. AngularJS has been a known quantity to Packt for about 4 years, and has been around for 6. In the last 24 months, we’ve really seen it gain massive adoption amongst our user base. Conferences are held in its name. It will come as no surprise that it’s one of our best-selling topics. Thousands of apps have been deployed and created with it. People, do in fact, love it. So the decision to rewrite the entire project seems odd. A lot has been written about this already from developers who know their stuff. Some are for it, some against it, and some are a little more balanced. For a technically reasoned article, Rob Eisenberg’s blog about AngularJS 2.0 is the best of many I’ve read. For one that quotes Shakespeare, read on. At Packt I’ve been the commissioning editor on a fair number of products. You may remember me from such hits as MEAN Web Development and Mastering D3.js. While I may not have the developer nous, creating a product is the same process whether it is a good framework or a good book. And part of this process understanding when you’ve got a good product, and when you had a good product that needs ripping up, and starting over. What’s past is prologue AngularJS’s design was emergent from increased adoption. It started life as a tool to aid designers throw up a quick online form. It was an internal tool at Google. They didn’t realise that every Joe Web Developer would be using it to power their client’s not-so-SEO-friendly bespoke applications. It’s the equivalent of what would happen if people started using this blog as a template for all future blogs. I’d enjoy it for the first few years, living the blogosphere high-life, then people would start moaning to me, and I would hate it. I’d have to start again, for my own health as much as for the health of the millions of bloggers who were using my formatting to try and contain their vastness. So we’re agreed that they need to change things. Good. Oh brave new world/That has such features in’t Many frameworks change things whilst maintaining backwards compatibility. WordPress is a famous example of doing everything possible to avoid introducing breaking-changes at any major update. The result is, by now, a pretty bloated application that much like Angular, started out serving a very different purpose to how it now finds itself being deployed. It’s what gave rise to smaller, lighter-weight platforms like Ghost, designed purely for blogging. AngularJS however is not an example of developers maintaining backwards compatibility. It takes pleasure in starting over. In fact, you can just about rip up your old Angular apps now. It’s for your own good. By starting from a clean slate, the Angular team have the chance to design AngularJS in to what it should be rather than what it ended up being. It may not make sense to the developers who are using Angular 1.x at the moment, but to be frank Google doesn’t care. It cares about good products. It’s planning a product that will endeavour to remain relevant in to the future, rather than spending its time trying to patch up something that was a result of rushed 2010 thinking. Part of this attempt at continued relevance is TypeScript. TypeScript extends the capabilities of ES6; moving to AngularJS 2.0 before ES7 is released means that it’s recommended that TypeScript is used to make the most of what Angular offers. This is a big move, but it’s an attempt at moving the capabilities forward. 
Doing something is always preferable to doing nothing. The other headline act, and related to the ES6 features is the move to make Angular compatible with Web Components. Web Components will redefine what web development means, in time, and making sure that their framework is on hand to help deliver them safely to developers is again a smart product decision. The temporary pain of the rewrite will be rewarded by increased ease of use and longevity for the developers and clients who build and consume AngularJS applications. There are a whole host more features; a move to mobile-first design, which I understand, and lots of technical and syntax improvements, which I don’t; increased performance, and plenty more too. Every decision is being made to make Angular a better product for everyone who uses it. Gentle breath of yours my sails/Must fill, or else my project fails AngularJS 2.0 has been a divisive figure in the web development world. I’ve been at Packt for three years and can’t remember a time when such a popular and well-used technology completely ripped up everything they had and started again. It will set a precedent in software that will shape the future, either way it ‘goes down’. What we should focus on is that this wholesale change is designed to make the product better – not just now, but in to the future - and that decision should be applauded. It’s not unheard of for Google to stop/start/abandon high-profile projects (cough Google Glass cough), but they should be recognised nonetheless for their dedication in trying to make this a more accessible and useful platform long term. Ultimately though, it will be the users who decide if they won or lost. The team are bringing a different project in the hope that people see its advantages, but no matter the intent a product is only useful if the consumers find it useful. Through our ‘gentle breath’, the Angular project will fly or fail. Let’s embrace it.


The One Second Website : 10x your site performance

Dave Barnes
20 Oct 2015
5 min read
Last year, Patrick Hamann gave a talk for Google Developers on Breaking News at 1000ms. It lays out how Patrick and his team built a 1-second web site for the Guardian, improving performance almost 10 times. I learned a lot from the talk, and I've summarized that below; the video and the slides are both worth a look.

Web speed has come to a head recently. Facebook's Instant Articles put speed on everyone's radar. A news page takes 8 seconds to load, and that puts people off clicking links. Like many others, I couldn't quite believe things had got this bad. We have fast broadband and wifi. How can a 1,000-word article take so long? So there's a lot of discussion around the problem, but Patrick's talk lays out many of the solutions. Here are the keys I took from it:

The problem Sites are slow, really slow. 8 seconds is normal. And yet, people really care about speed. It's a user's second most important feature, right after "easy to find content". In fact, if it takes more than a second for a page to respond, people start to assume the site is broken. If most pages take more than a second, people start to assume the web is broken. And we wonder why 91% of mobile impressions are in apps, not the web.

The budget Patrick set a hard budget for page loads of 1 second, and measured everything against that. This is his BHAG — make the web site nearly 10x faster. But once the goal is clear, people have a habit of finding solutions. The harder the goal, the more radical the solutions people will find. Modest goals lead to modest problem solving. Next time you want to improve something, set a 10x goal, get serious about it — and let everybody's ingenuity loose on the solution.

The solution Patrick and his team's radical solutions revolved around 4 key principles.

Deliver core content first There's a lot of stuff on a news article page, but what we really want to see is the article content. Patrick's team got serious about the really important stuff, creating a 'swim lane' system. The important stuff — the core article content — was put into a fast lane, loaded first, and then the rest built around it. This made the goal more doable. The whole page didn't need to load in 1000ms. If the core content loaded in 1s people could read it, and by the time they had read it, the rest of the page would be ready. (Even the flimsiest Guardian article will take more than 1s to read!)

Core content should render within 1000ms Here's the problem. To get content to the reader in 1000ms you have only 400ms to play with, because the basic network overhead takes 600ms over a good 3G connection. So to really supercharge speed, the Guardian inlined the critical CSS. For the Guardian, the critical CSS is the article formatting. The rest can come a bit later. The new site uses JavaScript to download, store, and load CSS on demand rather than leaving that decision to the browser. From: https://speakerdeck.com/patrickhamann/breaking-news-at-1000ms-front-trends-2014

Every feature must fail gracefully Fonts are a recognizable part of the Guardian brand, important despite the overhead. But not that important. The new design fails decisively and fast when it's right to do so: Decision tree — fallback vs. custom font. The really clever bit of the whole setup is the font JSON. Instead of downloading several font binaries, just one JSON request downloads all the fonts in base64 encoding.
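The font JSON idea can be sketched in a few lines of JavaScript. The URL, the shape of the JSON payload, and the localStorage caching are my assumptions based on the talk's description, not the Guardian's actual code:

// Fetch one JSON payload of @font-face rules (with base64-encoded sources),
// cache it locally, and inject it as a <style> tag on later visits.
function loadFonts(url) {
  var cached = localStorage.getItem('cached-fonts');
  if (cached) { return injectCss(cached); }
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    var css = JSON.parse(xhr.responseText).css; // assumed payload shape
    localStorage.setItem('cached-fonts', css);
    injectCss(css);
  };
  xhr.send();
}

function injectCss(css) {
  var style = document.createElement('style');
  style.textContent = css;
  document.head.appendChild(style);
}

loadFonts('/fonts.json'); // assumed endpoint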
This means some overhead in file size, but it replaces several requests with just one cacheable object. A nice trick, and one you can use yourself: they made webfontjson an open source project.

Every request should be measured The final pillar comes down to really knowing your shit. Graph and measure EVERYTHING that affects your performance and your time-to-render budget. In addition to the internal analytics platform Ophan, Patrick uses SpeedCurve to monitor and report on performance against a set of benchmarks over time.

Sum up For everyone… big improvements come from BIG GOALS and ingenious solutions. Be ambitious and set a budget/goal that gives great customer benefit, then work towards it. For web developers: Performance is a requirement. Everybody has to have it as a priority from day one. Take the one-second web site challenge. Make that your budget, and measure, optimize, repeat. Make the core content download first, render it in the fast lane. Then build the rest around the outside. Now if that whets your appetite, watch the video. Especially if you're more involved in web dev, I'm sure you'll learn a lot more from it than I did! What techniques do you use to 10x your site's performance? From 11th to 17th April save up to 70% on some of our very best web development products. It's the perfect opportunity to explore - and learn - the tools and frameworks that can help you unlock greater performance and build even better user experiences. Find them here.


Better Typography for the Web

Brian Hough
19 Aug 2015
8 min read
Despite living in a world dominated by streaming video and visual social networks, the web is still primarily a place for reading. This means there is a tremendous amount of value in having solid, readable typography on your site. With the advances in CSS over the past few years, we are finally able to tackle a variety of typographical issues that print has long solved. In addition, we are also able to address a lot of challenges that are unique to the web. Never have we had more control over web typography. Here are 6 quick snippets that can take yours to the next level.

Responsive Font Sizing Not every screen is created equal. With a vast array of screen sizes, resolutions, and pixel densities, it is crucial that our typography adjusts itself to fit the user's screen. While we've had access to relative font measurements for a while, they have been cumbersome to work with. Now with rem we can have relative font sizing without all the headaches. Let's take a look at how easy it is to scale typography for different screens. html { font-size: 62.5%; } h1 { font-size: 2.1rem; /* Equals 21px */ } p { font-size: 1.6rem; /* Equals 16px */ } Setting font-size to 62.5% lets us use base 10 when setting our font sizes in rem: the browser default is 16px, and 62.5% of 16px is 10px, so a font set to 1.6rem is the same as setting it to 16px. This makes it easy to tell what size our text is actually going to be, something that is often an issue when using em. Browser support for rem is really good at this stage, so you shouldn't need a fallback. However, if you need to support older IE it is as simple as creating a second font-size rule set in px after the line where you set it in rem. All that is left is to scale our text based on screen size or resolution. By using media queries, we can keep the relative sizing of type elements the same without having to manually adjust each element for every breakpoint. /* Scale font based on screen resolution */ @media only screen and (-webkit-min-device-pixel-ratio: 2), only screen and (min-resolution: 192dpi) { html { font-size: 125%; } } /* Scale font based on screen size */ @media only screen and (max-device-width: 667px) { html { font-size: 31.25%; } } This will scale all the type on the page with just one property per breakpoint; there is now no excuse not to adjust font-size based on your users' screens.

Relative Line Height Leading, or the space between baselines in a paragraph, is an important typographical attribute that directly affects the readability of your content. The right line height provides a distinction between lines of text, allowing a reader to scan a block of text quickly. An easy tip a lot of us miss is setting a unitless line-height. By setting line-height in this way, it acts as a ratio to the size of your type. This scales your leading with your font-size, making it a perfect complement to using rem: p { font-size: 1.6rem; line-height: 1.4; } This will set our line-height at a ratio of 1.4 times our font-size. Consequently, 1.4 is a good value to start with when tweaking your leading, but your ratio will ultimately depend on the font you are using.

Rendering Control The way type renders on a web page is affected not only by the properties of a user's screen, but also by their operating system and browser. Different font-rendering implementations can mean the difference between your web fonts loading quickly and clearly or chugging along and rendering into a pixelated mess.
Font services like TypeKit recognize this problem and provide you with a way to preview how a font will appear in different operating system/browser combinations. Luckily though, you don't have to leave it completely up to chance. Some browser engines, like WebKit, give us some extra control over how a font renders. text-rendering controls how a font is rendered by the browser. If your goal is optimal legibility, setting text-rendering to optimizeLegibility will make use of additional information provided in certain fonts to enhance kerning and make use of built-in ligatures. p.legibility { text-rendering: optimizeLegibility; } While this sounds perfect, there are scenarios where you don't want to use it. It can crush rendering times on less powerful machines, especially mobile browsers. It is best to use it sparingly on your content, and not just apply it to every piece of text on a page. All browsers support this property except Internet Explorer. This is not the only way you can optimize font rendering. WebKit browsers also allow you to adjust the type of anti-aliasing they use to render fonts. Chrome is notoriously polarizing in how fonts look, so this is a welcome addition. It is best to experiment with the different options, as it really comes down to the font you've chosen and your personal taste. p { -webkit-font-smoothing: none; -webkit-font-smoothing: antialiased; -webkit-font-smoothing: subpixel-antialiased; } Lastly, if you find that the font-smoothing options aren't enough, you can add a bit of boldness to your fonts in WebKit with the following snippet. The result isn't for everyone, but if you find your font is rendering a bit on the light side, it does the trick. p { -webkit-text-stroke: 0.35px; }

Hanging Punctuation Hanging punctuation is a typographical technique that keeps punctuation at the beginning of a paragraph from disrupting the flow of the text. By utilizing the left margin, punctuation like open quotes and list bullets can be displayed while still allowing text to be left justified. This makes it easier for the reader to scan a paragraph. We can achieve this effect by applying the following snippet to elements where we lead with punctuation or bullets. p:first-child { text-indent: -0.5rem; } ul { padding: 0px; } ul li { list-style-position: outside; } One note with bulleted lists is to make sure the container does not have overflow set to hidden, as this will hide the bullets when they are set to outside. If you want to be super forward-looking, work is being done on giving us even more control over hanging punctuation, including character detection and support for leading and trailing punctuation.

Proper Hyphenation One of the most frustrating things on the web for typography purists is the ragged right edge on blocks of text. Books solve this through carefully justified text. This, unfortunately, is not an option for us yet on the web, and while a lot of libraries attempt to solve this with JavaScript, there are some things you can do to handle this with CSS alone. .hyphenation { -ms-word-break: break-all; word-break: break-all; /* Non-standard, for WebKit */ word-break: break-word; -webkit-hyphens: auto; -moz-hyphens: auto; hyphens: auto; } .no-hyphenation { -ms-word-break: none; word-break: none; /* Non-standard, for WebKit */ word-break: none; -webkit-hyphens: none; -moz-hyphens: none; hyphens: none; } Browser support is pretty solid, with Chrome being the notable exception.
Descender-Aware Underlining

This is a newer trick I first noticed in iMessage on iOS. It makes underlined text a bit more readable by protecting descenders (the parts of a letter that drop below the baseline) from being obscured by the underline, which makes it an especially good fit for links.

.underline {
  text-decoration: none;
  text-shadow: .03rem 0 #FFFFFF, -.03rem 0 #FFFFFF, 0 .03rem #FFFFFF, 0 -.03rem #FFFFFF, .06rem 0 #FFFFFF, -.06rem 0 #FFFFFF, .09rem 0 #FFFFFF, -.09rem 0 #FFFFFF;
  color: #000000;
  background-image: linear-gradient(#FFFFFF, #FFFFFF), linear-gradient(#FFFFFF, #FFFFFF), linear-gradient(#000000, #000000);
  background-size: .05rem 1px, .05rem 1px, 1px 1px;
  background-repeat: no-repeat, no-repeat, repeat-x;
  background-position: 0 90%, 100% 90%, 0 90%;
}

First, we create a text-shadow the same color as our background (in this case white) around the content we want underlined. The key is that the shadow is thin enough to obscure things behind it without overlapping other letters. Next, we use a background gradient to recreate our underline. The text shadow alone is not enough, as a normal text-decoration: underline is placed over the type. The gradient appears just like a normal underline, but now the text-shadow obscures it wherever the descenders overlap. Because everything is sized in rem, this effect also scales with our font-size.

Conclusion

Users still spend a tremendous amount of time reading online. Making that as frictionless as possible should be one of the top priorities for any site with written content. With just a little bit of CSS, we can drastically improve the readability of our content with little to no overhead.

About the Author

Brian is a Front-End Architect, Designer, and Product Manager at Piqora. By day, he is working to prove that the days of bad enterprise user experiences are a thing of the past. By night, he obsesses about ways to bring designers and developers together using technology. He blogs about his early-stage startup experience at lostinpixelation.com, or you can read his general musings on Twitter @b_hough.

The biggest web developer salary and skills survey of 2015

Packt Publishing
27 Jul 2015
1 min read
The following infographic is taken from our comprehensive Skill Up IT industry salary reports, with data from over 20,000 developers. Download the full-size infographic here.

You're not a web developer if you don't know JavaScript

Mario Casciaro
01 Jul 2015
6 min read
Mario Casciaro is a software engineer and technical lead with a passion for open source. After the recent publication of his successful book Node.js Design Patterns, we caught up with him to discuss his views on today's most important web development skills, and what the future holds.

The best tool for the job may not be in your skillset yet

I remember working on a small side project, something I try to do as much as possible to put new skills into practice and try things outside of my job. It was a web application, something very similar to a social network, and I remember choosing Java with the Spring Framework as the main technology stack, with Backbone on the front end. At the time, around four years ago, I was an expert Java developer and considered it the technology with the most potential: it worked perfectly for enterprise web applications as well as mission-critical distributed applications and even mobile apps. While Java is still a popular and valuable tool in 2015, my experience on this small side project made me rethink my opinion, and I wouldn't use it today unless there was a particular need for it. At some point I realized I was spending a lot of my development time designing the object-oriented structure of the application and writing boilerplate code. Trying to find a solution, I migrated the project to Groovy and Grails, and on the front end I tried to implement a small homemade two-way binding framework. Things improved a little, but I was still feeling the need for something more agile on both ends, something more suited to the web.

The web moves fast, so always let your skills evolve

I wanted to try something radically different from the typical PHP, Ruby on Rails, or Python on the server and jQuery or Backbone on the client. Fortunately I discovered Node.js and AngularJS, and that changed everything. By using Node I noticed that my mindset shifted from "how to do things" to "just get things done". On the other hand, Angular revolutionized my approach to front-end development, allowing me to drastically cut the amount of boilerplate code I was writing. But most importantly, I realized that JavaScript and its ecosystem were becoming a seriously big thing. Today I would not even consider building a web application without having JavaScript as my primary player. The number of packages on npm is staggering, a clear indication that the web has shifted towards JavaScript. The most impressive part of this story is that I also realized the importance these new skills had in defining my career; if I wanted to build web applications, JavaScript and its amazing ecosystem had to be the focus of my learning efforts. This led me to find a job where Node, Angular, and other cutting-edge JavaScript technologies played a crucial role in the success of the project I was in charge of creating. The culmination of my renewed interest in JavaScript is the book I published six months ago, Node.js Design Patterns, which contains the best of the experience I have accumulated since I devoted myself to the full-stack JavaScript mantra.

The technologies and the skills that matter today for a web developer

Today, if I had to give advice to someone approaching web development for the first time, I would definitely recommend starting with JavaScript. I wouldn't have said that 5-6 years ago, but today it's the only language that allows you to get started both on the back end and the front end. Moreover, JavaScript in combination with other web technologies such as HTML and CSS gives you access to an even broader set of applications with the help of projects like nw.js and Apache Cordova. PHP, Ruby, and Python are still very popular languages for developing the server side of a web application, but for someone who already knows JavaScript, Node.js would be a natural choice. Not only does it save you the time it takes to learn a new language, it also offers a level of integration with the front end that is impossible with other platforms. I'm talking, of course, about sharing code between the server and the client, and even implementing isomorphic applications that can run on both Node.js and the browser.
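To make that idea concrete, here is a sketch of the kind of module that runs unchanged in Node.js and the browser; the file name and function are purely illustrative:

// greet.js - one module, shared by the server and the client
function greet(name) {
  return 'Hello, ' + name + '!';
}

// Export as a CommonJS module under Node.js, or attach to the global object in the browser
if (typeof module !== 'undefined' && module.exports) {
  module.exports = greet;
} else {
  window.greet = greet;
}

On the server you can require('./greet'); in the browser a plain script tag makes greet available globally, and tools like Browserify automate the same idea at scale.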
React is probably the framework that offers the most interesting features in the area of isomorphic application development, and it is definitely worth digging into. It's also likely that we'll see a lot more of PouchDB, an isomorphic JavaScript database that will help developers build offline-enabled or even offline-first web applications more easily than ever before.

Always stay ahead of the curve

Today, just as four years ago, the technologies that will play an important role in the web of tomorrow are already making an impact. WebRTC, for example, enables the creation of real-time, peer-to-peer applications in the browser without the need for any additional plugin. Developers are already using it to build fast and lightweight audio/video conferencing applications, or even complete BitTorrent clients in the browser! Another revolutionary technology is going to be Service Workers, which should dramatically improve the capabilities of offline applications. On the front end, Web Components are going to play a huge role, and the Polymer project has already demonstrated what this new set of standards will be able to create. With regard to JavaScript itself, web developers will have to become familiar with the ES6 standard sooner than expected, as cross-compilation tools such as Babel already allow us to use ES6 on almost any platform. We should also keep an eye on ES7, as it will contain very useful features for simplifying asynchronous programming. Finally, as the browser becomes the runtime environment of the future, the recently revealed WebAssembly promises to give the web its own "bytecode", allowing you to load code written in other languages from JavaScript. When WebAssembly becomes widely available, it will be common to see a complex 3D video game or a full-featured video editing tool running in the browser. JavaScript will probably remain the mainstream language for the web, but it will be complemented by the new possibilities introduced by WebAssembly. If you want to explore the JavaScript ecosystem in detail, start with our dedicated JavaScript page. You'll find our latest and most popular titles, along with free tutorials and insights.

The best Angular yet - New Features in AngularJS 1.3

Sebastian Müller
16 Apr 2015
5 min read
AngularJS 1.3 was released in October 2014, and it brings a lot of new and exciting features and performance improvements to the popular JavaScript framework. In this article, we will cover the new features and improvements that make AngularJS even more awesome.

Better Form Handling with ng-model-options

The ng-model-options directive, added in version 1.3, allows you to define how model updates are done. You use this directive in combination with ng-model.

Debounce for Delayed Model Updates

In AngularJS 1.2, the model value was updated with every key press. With version 1.3 and ng-model-options, you can define a debounce time in milliseconds, which delays the model update until the user has stopped typing for the configured time. This is mainly a performance feature that saves the $digest cycles that would otherwise occur after every key press:

<input type="text" ng-model="my.username" ng-model-options="{ debounce: 500 }" />

updateOn - Update the Model on a Defined Event

An alternative to the debounce option inside the ng-model-options directive is updateOn. This updates the model value when the given event is triggered, which is also useful for performance reasons.

<input type="text" ng-model="my.username" ng-model-options="{ updateOn: 'blur' }" />

In this example, we only update the model value when the user leaves the form field.
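The two options can also be combined. As a sketch, assuming the 1.3 behaviour where debounce accepts a per-event object when used together with updateOn, you can debounce while the user types but update immediately on blur:

<input type="text" ng-model="my.username"
       ng-model-options="{ updateOn: 'default blur', debounce: { 'default': 500, 'blur': 0 } }" />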
getterSetter - Use getter/setter Functions in ng-model

app.js:

angular.module('myApp', [])
  .controller('MyController', ['$scope', function($scope) {
    var myEmail = 'example@example.com';
    $scope.user = {
      email: function email(newEmail) {
        if (angular.isDefined(newEmail)) {
          myEmail = newEmail;
        }
        return myEmail;
      }
    };
  }]);

index.html:

<div ng-app="myApp" ng-controller="MyController">
  current user email: {{ user.email() }}
  <input type="email" ng-model="user.email" ng-model-options="{ getterSetter: true }" />
</div>

When you set getterSetter to true, Angular treats the referenced model attribute as a combined getter/setter function. When the function is called with no parameter, it is a getter call and AngularJS expects you to return the currently assigned value; AngularJS calls the function with one parameter when the model needs to be updated.

New Module - ngMessages

The new ngMessages module provides features for cleaner error message handling in forms. It is not contained in the core framework and must be loaded via a separate script file.

index.html:

…
<body>
  ...
  <script src="angular.js"></script>
  <script src="angular-messages.js"></script>
  <script src="app.js"></script>
</body>

app.js:

// load the ngMessages module as a dependency
angular.module('myApp', ['ngMessages']);

The first version contains only two directives for error message handling:

<form name="myForm">
  <input type="text" name="myField" ng-model="myModel.field" ng-maxlength="5" required />
  <div ng-messages="myForm.myField.$error" ng-messages-multiple>
    <div ng-message="maxlength">
      Your field is too long!
    </div>
    <div ng-message="required">
      This field is required!
    </div>
  </div>
</form>

First, you need a container element with an ng-messages directive that references the $error object of the field you want to show error messages for. The $error object contains all validation errors that currently exist. Inside the container element, you use the ng-message directive for every error type that can occur; elements with this directive are automatically hidden when no validation error of the given type exists. When you set the ng-messages-multiple attribute on the element carrying the ng-messages directive, all current validation error messages are displayed at the same time (without it, only the first matching message is shown).

Strict-DI Mode

AngularJS provides multiple ways to use the dependency injection mechanism in your application. One of them is not safe to use when you minify your JavaScript files. Let's take a look at this example:

angular.module('myApp', []).controller('MyController', function($scope) {
  $scope.username = 'JohnDoe';
});

This example works perfectly in the browser as long as you do not minify the code with a JavaScript minifier like UglifyJS or Google Closure Compiler. The minified code of this controller might look like this:

angular.module('myApp', []).controller('MyController', function(a) {
  a.username = 'JohnDoe';
});

When you run this code in your browser, you will see that your application is broken: Angular cannot inject the $scope service anymore, because the minifier changed the function parameter name. To prevent this type of bug, you have to use the array syntax:

angular.module('myApp', []).controller('MyController', ['$scope', function($scope) {
  $scope.username = 'JohnDoe';
}]);

When this code is minified by your tool of choice, AngularJS still knows what to inject, because the provided string '$scope' is not rewritten by the minifier:

angular.module('myApp', []).controller('MyController', ['$scope', function(a) {
  a.username = 'JohnDoe';
}]);

Using the new Strict-DI mode, developers are forced to use the array syntax; an exception is thrown when they don't. To enable Strict-DI mode, add the ng-strict-di directive to the element that carries the ng-app directive:

<html ng-app="myApp" ng-strict-di>
  <head>
  </head>
  <body>
    ...
  </body>
</html>

IE8 Browser Support

Angular 1.2 had built-in support for Internet Explorer 8 and up. Now that the global market share of IE8 has dropped, and because it takes a lot of time and extra code to support it, the team decided to drop support for the browser that was released back in 2009.

Summary

This article shows only a few of the new features added to Angular 1.3. To learn about all of them, read the changelog file on GitHub or check out the AngularJS 1.3 migration guide.

About the Author

Sebastian Müller is Senior Software Engineer at adesso AG in Dortmund, Germany. He spends his time building Single Page Applications and is interested in JavaScript architectures. He can be reached at @Sebamueller on Twitter and as SebastianM on GitHub.

WebGL in Games

Alvin Ourrad
05 Mar 2015
5 min read
In this post I am not going to show you any particular game engine, framework, or library. Instead, this is a more general write-up that aims to give you an overview of the technology that powers many of these frameworks: WebGL.

Introduction

Back in 2011, 3D in the browser was not really a thing outside of the realm of Flash, and websites didn't make much use of the canvas element the way they do today. That year, the Khronos Group released WebGL 1.0, a royalty-free, standard, cross-browser API based on OpenGL ES 2.0. Even though the canvas element can only draw 2D primitives, it is actually possible to render 3D graphics at a decent speed with it. By making clever use of perspective and a lot of optimizations, Mr.doob and the THREE.js contributors managed to create a 3D canvas renderer, which quite frankly offers stunning results, as you can see here and there. But even though canvas can do the job, its speed and level of hardware acceleration are nothing compared to what WebGL benefits from, especially once you take into account the browsers on lower-end devices such as our mobile phones. Fast-forward in time: when Apple officially announced support for WebGL in mobile Safari with iOS 8, the main goal was reached, since most recent browsers were now able to use this 3D technology natively.

Can I have 3D?

It's very likely that you can now; there are still some graphics cards that were not made to support WebGL, but overall support is very good. If you are interested in learning how to make 3D graphics in the browser, I recommend you do some research on a library called THREE.js. It has been around for a while and is usually what most people choose to get started with, as it is just a 3D library and nothing more. If you want to interact with the mouse, or create a bowling game, you will have to use some additional plugins and/or libraries.
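To give you a taste of what getting started looks like, here is a minimal THREE.js sketch: a single spinning cube rendered through WebGL. It assumes the library has been downloaded locally as three.min.js; everything else is illustrative boilerplate.

<script src="three.min.js"></script>
<script>
  // A minimal THREE.js scene: one spinning cube rendered through WebGL
  var scene = new THREE.Scene();
  var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
  camera.position.z = 3;

  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);

  var cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshBasicMaterial({ color: 0x00ff00 })
  );
  scene.add(cube);

  (function animate() {
    requestAnimationFrame(animate);
    cube.rotation.x += 0.01;
    cube.rotation.y += 0.01;
    renderer.render(scene, camera);
  })();
</script>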
3D in the gaming landscape

As support for and awareness of WebGL started rising, some entrepreneurs and companies saw it as a way to build a business, or simply wanted to take part in this 3D adventure. As a result, several products are available to you if you want to delve into 3D gaming.

PlayCanvas

This company likes saying that they have re-created "Unity in the browser", which is not far from the truth really. Their in-browser editor is very complete and mimics the entity-component system that exists in Unity. However, I think the best thing they have created among their products is their real-time collaboration feature: it allows you to work on a project with a team and instantly updates the editor and the visuals for everyone currently viewing it. The whole engine was also open sourced a few months ago, which has given us beautiful demos like this one: http://codepen.io/playcanvas/pen/ctxoD Feel free to check out their website and give their editor a try: https://playcanvas.com

Goo Technology

Goo Technology is an ecosystem that encompasses a 3D engine (the Goo engine), an editor, and a development environment. Goo Create is a very nicely designed 3D editor in the browser. What I really like about Goo is their cartoony mascot, "Goon", which you can see in a lot of their demos and branding and which adds a lot of fun and humanity to the product. Have fun watching this little dude in his adventures and learn more about the company at: http://www.goocreate.com

Babylon.js

I wasn't sure if this one was worth including. Babylon is a competitor to THREE.js created by Microsoft that doesn't want to be "just a rendering engine," but adds some useful components out of the box, such as camera controls, a physics engine, and some audio capabilities. Babylon is relatively new and definitely not as battle-tested as THREE.js, but the team has created a set of tools I like that help you get started with it, namely the playground and the shader editor.

2D?

Yes, there is a major point that I haven't mentioned yet: WebGL has been used in more 2D games than you might imagine. There is no reason why 2D games shouldn't have this level of hardware acceleration. The first big titles to use WebGL for their 2D needs were the JavaScript ports of Angry Birds and Cut the Rope, the multi-million-dollar hits from Rovio and ZeptoLab. When pixi.js came out, a lot of people started using it for their games, and the major HTML5 game framework, Phaser, also uses it.

Play!

This is the end of this post. I hope you enjoyed it and that you want to get started with these technologies. There is no time to waste -- it's all in your hands.

About the author

Alvin Ourrad is a web developer fond of the web and the power of open standards. A lover of open source, he likes experimenting with interactivity in the browser. He currently works as an HTML5 game developer.

Try Something New Today – ReactJS

Sarah C
28 Jan 2015
5 min read
Sometimes it seems like AngularJS is the only frontend game in town. There are reasons for that. It's sophisticated, it's a game-changer for web design that makes things better right now, and the phenomenal rate of adoption has also led to all kinds of innovative integrations. However, when all you have is a directive, every problem starts to look like an ng-, as they say. Now and again, we all like the novelty of change. As the first signs of spring emerge from the snow, our techie fancies lightly turn to thoughts of components. So, like a veritable Sam-I-Am, let me press you to try something you may not have tried before.* Today's the day to take ReactJS for a test drive.

So what's the deal with React, then?

ReactJS was developed by Facebook, then open sourced last year. Lately, it's been picking up speed. Integrations have improved, and now that Facebook have also open sourced Flux, we're hearing a lot of buzz about what React can do for your UI design. (Flux is an application pattern. You can read more about its controller-free philosophy at Facebook's GitHub page.) Like so many things, React isn't quite a framework and it isn't quite a library. Where React excels is in generating UI components that refresh with data changes. With disarming modesty, React gets the smallest changes in your data through to the browser quickly, without having to re-render anything except the part of the display that needs to change. Here's a quick run-through of React's most pleasing features. (ReactJS also has a good sense of humour, and enjoys long walks along the beach at sunset.)

Hierarchical components

ReactJS is built around components: the new black of web dev. Individual components bundle together the markup and logic as handy reusable treats. Everyone has their own style when developing their apps, but React's feel and rhythm encourages you to think in components. React's components are also hierarchical: you can nest them and have them inherit properties and state. There are those who are adamant that this is the future of all good web-app code.

Minimal re-rendering

Did you catch my mention of 'state' up there? React components can have state. Let the wars begin right now about that, but it brings me to the heart of React's power. React reacts. Any change triggers a refresh, but with minimal re-rendering. With its hierarchical components, React is smart enough to only ever refresh and supply new display data to the part of the component that needs it, not the entire thing. That's good news for speed and overhead.
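To see state and minimal re-rendering in one place, here is a sketch using the React.createClass API of the time; the component and element names are purely illustrative:

// A small stateful component; only the button's label changes on each click
var Counter = React.createClass({
  getInitialState: function() {
    return { clicks: 0 };
  },
  handleClick: function() {
    this.setState({ clicks: this.state.clicks + 1 });
  },
  render: function() {
    return (
      <button onClick={this.handleClick}>
        Clicked {this.state.clicks} times
      </button>
    );
  }
});

React.renderComponent(<Counter />, document.getElementById('content'));

Calling setState is what triggers the refresh; React diffs the result against its virtual DOM and only touches the text that changed.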
Speedy little virtual DOM

In fact, ReactJS is light in every sense, and it owes a lot of its power to its virtual DOM. Rather than plug into the DOM directly, React renders every change into a virtual DOM and then compares it against the current one. If it sees something that needs to change in the view, React gets to work on changing just that part, leaving everything else untouched.

Fun to write

React mixes HTML and JavaScript, so you can refer to HTML elements right there inside your <script>. Yes, okay, that's 'fun' for a given value of fun. The kind where dorks get a little giddy about pleasing syntax. But we're all in this together, so we might as well accept ourselves and each other. For example, here's a simple component rendering from an official tutorial:

// tutorial1.js
var CommentBox = React.createClass({
  render: function() {
    return (
      <div className="commentBox">
        Hello, world! I am a CommentBox.
      </div>
    );
  }
});

React.renderComponent(
  <CommentBox />,
  document.getElementById('content')
);

This is JSX syntax, which React uses instead of defining templates within a string. Pretty, right?

Reactive charts and pictures

With luck, at this point, your coffee has kicked in and you're beginning to think about possible use cases where React might be a shiny new part of your toolkit. Obviously, React is going to be useful for anything with lots of real-time activity. As a frontend for a chat client, streaming news, or a dashboard, it's got obvious powers. But think a little further and you'll see a world of other possibilities. React can also handle SVG for graphics and charts, with the potential to create dynamic and malleable visualisations even without D3.

SEO

One last-but-not-least selling point: web apps built with this framework don't scare the Google spiders. Because everything is passed to the client side and into the DOM having already had its shoes shined by the virtual DOM, it's easy to make apps legible for search engines as well as for people, allowing your stored data to be indexed and boosting your SEO by reflecting your actual content. Give it a shot and do some experimenting. Have you had any wins or unexpected problems with React? Or are you thinking of giving it a whirl for your next app? We're going to try it out for some in-house data viz, and may possibly even report back. What about you?

*Do not try ReactJS with a goat on a boat without taking proper safety precautions. (With a mouse in a house is fine and, indeed, encouraged.)