
6 JavaScript micro optimizations you need to know

  • 18 min read
  • 05 Apr 2018

JavaScript micro optimizations can improve the performance of your JavaScript code, letting it do more with the same resources. This matters especially at the scale of modern web applications, where small efficiencies in code add up to much stronger overall performance.

Let us have a look at micro optimizations in detail.

Truthy/falsy comparisons


We have all, at some point, written if conditions or assigned default values by relying on the truthy or falsy nature of JavaScript variables. As helpful as this is most of the time, we need to consider the impact that such an operation has on our application. Before we jump into the details, though, let's discuss how a condition is evaluated in JavaScript, specifically an if condition. As developers, we tend to do the following:

if (objOrNumber) {
  // do something
}


This works for most cases, unless the number is 0, in which case it evaluates to false. That is a very common edge case, and most of us catch it anyway. However, what does the JavaScript engine have to do to evaluate this condition? How does it know whether objOrNumber evaluates to true or false? Let's return to the ECMA-262 spec and pull out the if statement section (https://www.ecma-international.org/ecma-262/5.1/#sec-12.5). The following is an excerpt:

Semantics

The production IfStatement : if ( Expression ) Statement else Statement is evaluated as follows:

  1. Let exprRef be the result of evaluating Expression.
  2. If ToBoolean(GetValue(exprRef)) is true, then
     a. Return the result of evaluating the first Statement.
  3. Else,
     a. Return the result of evaluating the second Statement.

Now, we note that whatever expression we pass goes through the following three steps:

  1. Getting the exprRef from Expression.
  2. GetValue is called on exprRef.
  3. ToBoolean is called on the result of step 2.


Step 1 does not concern us much at this stage; think of it this way: an expression can be something like a == b or a call such as shouldIEvaluateTheIFCondition(), that is, something that evaluates your condition.

Step 2 extracts the value of the exprRef, that is, 10, true, undefined, and so on. In this step, how the value is extracted differs based on the type of the exprRef. You can refer to the details of GetValue in the spec (https://www.ecma-international.org/ecma-262/5.1/#sec-8.7.1).

Step 3 then converts the value extracted in Step 2 into a Boolean value using the ToBoolean abstract operation (https://www.ecma-international.org/ecma-262/5.1/#sec-9.2), which can be summarized as follows:

  • Undefined, Null: false
  • Boolean: the argument itself
  • Number: false for +0, -0, and NaN; otherwise true
  • String: false for the empty string; otherwise true
  • Object: true

At each step, you can see that it is always beneficial if we are able to provide a direct Boolean value instead of a truthy or falsy value.
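As a small illustration (the list and render names below are made up for the example), an explicit comparison hands the engine a Boolean directly rather than relying on 0 being falsy:

var list = ['a', 'b', 'c'];

function render(items) {
  console.log('rendering', items.length, 'items');
}

// relies on ToBoolean treating 0 as falsy
if (list.length) {
  render(list);
}

// provides a Boolean directly
if (list.length > 0) {
  render(list);
}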

Looping optimizations


We can do a deep dive into the for loop, similar to what we did with the if condition earlier (https://www.ecma-international.org/ecma-262/5.1/#sec-12.6.3), but there are easier and more obvious optimizations that can be applied when it comes to loops. Simple changes can drastically affect the quality and performance of the code; consider this for example:

for (var i = 0; i < arr.length; i++) {
  // logic
}


The preceding code can be changed as follows:

var len = arr.length;

for (var i = 0; i < len; i++) {
  // logic
}


What is sometimes even better is to run the loop in reverse, which is often cited as being faster still (note that we start at len - 1 to stay within the array bounds):

var len = arr.length;

for (var i = len - 1; i >= 0; i--) {
  // logic
}
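Claims like this are worth verifying on your own data and target engines; as a rough, unscientific sketch, you could compare the two forms with console.time (the array here is just a stand-in):

var arr = new Array(1000000).fill(1);

console.time('forward');
var total = 0;
for (var i = 0, len = arr.length; i < len; i++) {
  total += arr[i];
}
console.timeEnd('forward');

console.time('reverse');
total = 0;
for (var j = arr.length - 1; j >= 0; j--) {
  total += arr[j];
}
console.timeEnd('reverse');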

The conditional function call


Some of the features within our applications are conditional. For example, logging or analytics fall into this category. Some applications may have logging turned off for a while and then turned back on. The most obvious way of achieving this is to wrap the logging method within an if condition. However, since the method could be triggered a lot of times, there is another way in which we can optimize this case:

function someUserAction() {
  // logic

  if (analyticsEnabled) {
    trackUserAnalytics();
  }
}

// in some other class
function trackUserAnalytics() {
  // save analytics
}


Instead of the preceding approach, we can try something only slightly different that allows V8-based engines to optimize the way the code is executed:

function someUserAction() {
  // logic

  trackUserAnalytics();
}

// in some other class
var trackUserAnalytics = noOp; // start with the no-op until analytics is enabled

function toggleUserAnalytics(enabled) {
  if (enabled) {
    trackUserAnalytics = userAnalyticsMethod;
  } else {
    trackUserAnalytics = noOp;
  }
}

function userAnalyticsMethod() {
  // save analytics
}

// empty function
function noOp() {}


Now, the preceding implementation is a double-edged sword. The reason for that is very simple. JavaScript engines employ a technique called inline caching (IC): any previous lookup for a certain method performed by the JS engine is cached and reused the next time it is triggered. For example, if we have an object with a nested method, a.b.c, the method a.b.c is looked up only once and stored in the cache (IC); if a.b.c is called again, it is picked up from the IC, and the JS engine does not parse the whole chain again. If there are any changes to the a.b.c chain, the IC gets invalidated, and a new dynamic lookup is performed the next time instead of retrieving the value from the IC.

So, in our previous example, when noOp is assigned to trackUserAnalytics(), the method path gets tracked and saved within the IC, and the engine can effectively eliminate the call because it points to an empty method. However, when it is swapped for an actual function with some logic in it, the IC points directly to this new method. So, if we keep calling our toggleUserAnalytics() method multiple times, it keeps invalidating our IC, and a dynamic method lookup has to happen every time until the application state stabilizes (that is, toggleUserAnalytics() is no longer called).
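As a hypothetical usage pattern (userSettings is an illustrative name, not something from the original code), the idea is to flip the implementation once, outside of hot code paths, instead of branching on every call:

var userSettings = { analyticsEnabled: true }; // illustrative settings object

// flip the implementation once, for example at startup or on a settings change
toggleUserAnalytics(userSettings.analyticsEnabled);

// hot path: always a plain call, routed to either userAnalyticsMethod or noOp
someUserAction();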

Image and font optimizations


When it comes to image and font optimizations, there are no limits to the types and the scale of optimization that we can perform. However, we need to keep in mind our target audience, and we need to tailor our approach based on the problem at hand.

With both images and fonts, the first and most important thing is that we do not overserve; that is, we request and send only the data that is necessary, by determining the dimensions of the device that our application is running on.

The simplest way to do this is by adding a cookie for your device size and sending it to the server along with each request. Once the server receives a request for an image, it can then return the image in the dimensions indicated by the cookie. Most of the time, these images are something like a user avatar or a list of people who commented on a certain post. We can agree that thumbnail images do not need to be the same size as those on a profile page, and we can save bandwidth by transmitting a smaller image.
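A minimal client-side sketch of that idea might look as follows; the cookie name is a placeholder, and the server-side handling that reads it is assumed:

// record the current viewport size in a cookie so the server can pick
// an appropriately sized image variant for subsequent requests
document.cookie = 'viewport=' + window.innerWidth + 'x' + window.innerHeight + '; path=/';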

Since screens these days have very high Dots Per Inch (DPI), the media that we serve to them needs to match. Otherwise, the application looks bad and the images look pixelated. This can be avoided using vector images or SVGs, which can be gzipped over the wire, thus reducing the payload size.

Another not so obvious optimization is changing the image compression type. Have you ever loaded a page in which the image loads from top to bottom in small, incremental rectangles? By default, images are compressed using the baseline technique, which renders the image from top to bottom. We can change this to progressive compression using libraries such as imagemin. A progressive image first loads fully but blurred, then semi-blurred, and so on until the entire image is displayed at full quality on the screen. Decoding a progressive JPEG can take a little longer than a baseline one, so it is important to measure before making such optimizations.
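As a rough build-time sketch, assuming the imagemin and imagemin-mozjpeg npm packages (the exact call signature varies between imagemin versions, so treat this as illustrative):

var imagemin = require('imagemin');
var imageminMozjpeg = require('imagemin-mozjpeg');

// re-encode JPEGs as progressive JPEGs into build/images
imagemin(['images/*.jpg'], {
  destination: 'build/images',
  plugins: [imageminMozjpeg({ progressive: true })]
}).then(function (files) {
  console.log('converted', files.length, 'images to progressive JPEGs');
});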

Another extension based on this concept is an image format called WebP which, at the time of writing, was supported mainly by Chrome. It is a highly efficient way of serving images, used by a number of companies in production to save almost 30% of their image bandwidth. Using WebP is almost as simple as the progressive compression discussed previously. We can use the imagemin-webp node module, which can convert a JPEG into a WebP image, reducing the image size to a great extent.
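A similar, equally illustrative sketch for the WebP conversion, assuming imagemin with the imagemin-webp plugin (the quality value is arbitrary):

var imagemin = require('imagemin');
var imageminWebp = require('imagemin-webp');

// convert JPEGs into WebP images in build/images
imagemin(['images/*.jpg'], {
  destination: 'build/images',
  plugins: [imageminWebp({ quality: 75 })]
}).then(function (files) {
  console.log('created', files.length, 'WebP images');
});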

Web fonts are a little different from images. Images are downloaded and rendered onto the UI on demand, that is, when the browser encounters the image in the HTML or CSS. Fonts, on the other hand, are only requested once the Render Tree has been completely constructed. That means the CSSOM and DOM have to be ready by the time the request for the fonts is dispatched. Also, if the font files are served from a server rather than locally, there is a chance that we will first see the text without the font applied (or no text at all) and then see the font applied, which can cause a flashing effect.

There are multiple simple techniques to avoid this problem. First, download, serve, and preload the font files locally:

<link rel="preload" href="fonts/my-font.woff2" as="font" type="font/woff2" crossorigin>


Second, specify the unicode-range in the font-face so that the browser only downloads the character set and glyphs that are actually needed:

@font-face {
  ...
  unicode-range: U+000-5FF; /* latin */
  ...
}


So far, we have seen how to get the unstyled text loaded onto the UI and then styled as expected; this behavior can also be controlled using the Font Loading API, which allows us to load and render the font using JavaScript:

var font = new FontFace("myFont", "url(/my-fonts/my-font.woff2)", {
  unicodeRange: 'U+000-5FF'
});

// initiate a fetch without the Render Tree
font.load().then(function () {
  // apply the font
  document.fonts.add(font);
  document.body.style.fontFamily = "myFont";
});

Garbage collection in JavaScript


Let's take a quick look at what garbage collection (GC) is and how we can handle it in JavaScript. A lot of low-level languages provide explicit capabilities to developers to allocate and free memory in their code. However, unlike those languages, JavaScript automatically handles memory management, which is both a good and a bad thing. It is good because we no longer have to worry about how much memory we need to allocate, when to do so, and how to free the assigned memory. The bad part of the whole process is that, to an uninformed developer, it can be a recipe for disaster, ending in an application that hangs and crashes.

Luckily for us, the process of GC is quite easy to understand and can easily be incorporated into our coding style to make sure that we are writing optimal code when it comes to memory management. Memory management has three very obvious steps:

  1. Assign the memory to variables:

var a = 10; // we assign a number to a memory location referenced by variable a

  2. Use the variables to read or write from the memory:

a += 3; // we read the memory location referenced by a and write a new value to it

  3. Free the memory when it's no longer needed.


Now, this is the part that is not explicit. How does the browser know when we are done with the variable a and it is ready to be garbage collected? Let's wrap this inside a function before we continue this discussion:

function test() {
  var a = 10;
  a += 3;
  return a;
}


We have a very simple function that just adds to our variable a, returns the result, and finishes its execution. However, there is actually one more step that happens after the execution of this method, called mark and sweep (not immediately after; sometimes it happens after a batch of operations is completed on the main thread). When the browser performs mark and sweep depends on the total memory the application consumes and the speed at which memory is being consumed.

Mark and sweep algorithm


Since there is no accurate way to determine whether the data at a particular memory location is going to be used in the future, we need to depend on alternatives that can help us make this decision. In JavaScript, we use the concept of a reference to determine whether a variable is still being used; if not, it can be garbage collected.

The concept of mark and sweep is very straightforward: which memory locations are reachable from the known active memory locations? If something is not reachable, collect it, that is, free the memory. That's it, but what are the known active memory locations? We still need a starting point. In most browsers, the GC algorithm keeps a list of roots from which the mark and sweep process can be started. All the roots and their children are marked as active, and any variable that can be reached from these roots is also marked as active. Anything that cannot be reached is marked as unreachable and thus collected. In most cases, the roots consist of the window object.

So, we will go back to our previous example:

function test() {
  var a = 10;
  a += 3;
  return a;
}


Our variable a is local to the test() method. As soon as the method has executed, there is no way to access that variable anymore, that is, no one holds a reference to it; that is when it can be marked for garbage collection, so that the next time the GC runs, a will be swept and the memory allocated to it freed.

Garbage collection and V8


When it comes to V8, the process of garbage collection is extremely complex (as it should be). So, let's briefly discuss how V8 handles it.

In V8, the memory (heap) is divided into two main generations: the new-space and the old-space. Both the new-space and the old-space are assigned some memory (between 1 MB and 20 MB). Most programs and their variables are assigned within the new-space when created. Whenever we create a new variable or perform an operation that consumes memory, it is by default assigned from the new-space, which is optimized for fast memory allocation. Once the total memory allocated to the new-space is almost completely consumed, the browser triggers a Minor GC, which basically removes the variables that are no longer referenced and marks the variables that are still referenced and cannot be removed yet. Once a variable survives two or more Minor GCs, it becomes a candidate for the old-space, where the GC cycle is not run as frequently as in the new-space. A Major GC is triggered when the old-space reaches a certain size; all of this is driven by the heuristics of the application, which are very important to the whole process. So, well-written programs move fewer objects into the old-space and thus have fewer Major GC events triggered.
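If you want a rough feel for how your own program's heap grows and shrinks across these GC cycles, one coarse approach in Node.js (which embeds V8) is to sample the process heap statistics periodically:

// print heap usage (in bytes) once a second; watch heapUsed drop after GC runs
setInterval(function () {
  var mem = process.memoryUsage();
  console.log('heapUsed:', mem.heapUsed, 'heapTotal:', mem.heapTotal);
}, 1000);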

Needless to say, this is a very high-level overview of what V8 does for garbage collection, and since this process keeps changing over time, we will switch gears and move on to the next topic.

Avoiding memory leaks


Well, now that we know at a high level what garbage collection in JavaScript is and how it works, let's take a look at some common pitfalls that prevent the browser from marking our variables for GC.

Assigning variables to global scope


This should be pretty obvious by now; we discussed how the GC mechanism determines a root (which is the window object) and treats everything on the root and its children as active and never marks them for garbage collection.

So, the next time you forget to add a var to your variable declarations, remember that the global variable that you are creating will live forever and never get garbage collected:

function test() {
  a = 10; // created on the window object
  a += 3;
  return a;
}
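One way to catch this class of leak early is to opt in to strict mode, where assigning to an undeclared variable throws instead of silently creating a global; a minimal sketch:

'use strict';

function test() {
  a = 10; // throws ReferenceError: a is not defined, instead of creating window.a
  a += 3;
  return a;
}

test(); // the assignment above now fails loudly rather than leaking a global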

Removing DOM elements and references


It's imperative that we keep our DOM references to a minimum, so a well-known step that we like to perform is caching DOM elements in our JavaScript so that we do not have to query them over and over. However, once the DOM elements are removed, we need to make sure that they are removed from our cache as well; otherwise, they will never get GC'd:

var cache = {
  row: document.getElementById('row')
};

function removeTable() {
  document.body.removeChild(document.getElementById('row'));
}


The preceding code removes the row from the DOM, but the variable cache still refers to the DOM element, hence preventing it from being garbage collected. Another interesting thing to note here is that even when we remove the table that contained the row, the entire table would remain in memory and not get GC'd, because the row in cache internally refers to its parent table.
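A minimal sketch of the fix, building on the preceding snippet, is to drop the cached reference at the same time the element is removed:

function removeTable() {
  document.body.removeChild(document.getElementById('row'));
  cache.row = null; // drop our reference so the detached node (and its parent table) can be GC'd
}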

Closures edge case


Closures are amazing; they help us deal with a lot of problematic scenarios and also provide ways to simulate the concept of private variables. All that is good, but sometimes we tend to overlook the potential downsides associated with closures. Here is what we do know and use:

function myGoodFunc() {
  var a = new Array(10000000).join('*');
  // something big enough to cause a spike in memory usage

  function myGoodClosure() {
    return a + ' added from closure';
  }

  myGoodClosure();
}

setInterval(myGoodFunc, 1000);


When we run this script in the browser and profile it, we see, as expected, that the method consumes a constant amount of memory, which is then GC'd, restoring the baseline memory consumed by the script:

[Memory profile: periodic spikes that return to the baseline after each GC]

Now, let's zoom into one of these spikes and take a look at the call tree to determine which events are triggered around the time of the spike:

[Call tree around one of the memory spikes]

We can see that everything happens as per our expectations here; first, our setInterval() is triggered, which calls myGoodFunc(), and once the execution is done, there is a GC that collects the data, hence the spike, as we can see in the preceding screenshots.

Now, this was the expected flow, or the happy path, when dealing with closures. However, sometimes our code is not as simple, and we end up performing multiple things within one closure, sometimes even nesting closures:

function myComplexFunc() {
  var a = new Array(1000000).join('*');
  // something big enough to cause a spike in memory usage

  function closure1() {
    return a + ' added from closure';
  }

  closure1();

  function closure2() {
    console.log('closure2 called');
  }

  setInterval(closure2, 100);
}

setInterval(myComplexFunc, 1000);


We can note in the preceding code that we extended our method to contain two closures: closure1 and closure2. Although closure1 still performs the same operation as before, closure2 will run forever because its inner setInterval() is never cleared, and each call to myComplexFunc() registers another one, running at one-tenth of the parent's interval. Also, since both closure methods share the parent closure scope, in this case the variable a, that variable never gets GC'd, causing a huge memory leak, which can be seen in the following profile:

[Memory profile showing the memory leak]

On closer inspection, we can see that the GC is being triggered, but because of the frequency at which the methods are being called, the memory slowly leaks (less memory is collected than is being allocated):

[Zoomed-in memory profile: memory collected is less than memory allocated]

Well, that was an extreme edge case, right? It's far more theoretical than practical; why would anyone have two nested setInterval() calls with closures? Let's take a look at another example in which we no longer nest multiple setInterval() calls, but which is driven by the same logic.

Let's assume that we have a method that creates closures:

var something = null;

function replaceValue() {
  var previousValue = something;

  // the `unused` function loads `previousValue` into the closure scope
  function unused() {
    if (previousValue) console.log("hi");
  }

  // update something
  something = {
    str: new Array(1000000).join('*'),

    // all closures within replaceValue share the same closure scope, so
    // someMethod has access to previousValue, which is nothing but the
    // previous `something` object
    //
    // since someMethod keeps that scope alive, the previous object does not
    // get garbage collected when `something` is replaced by a new (identical)
    // object in the next setInterval iteration, and so on
    someMethod: function () {}
  };
}

setInterval(replaceValue, 1000);


A simple fix to this problem is obvious: as we noted, the previous value of the object something doesn't get garbage collected because the closure holds on to previousValue from the previous iteration. So the solution is to clear out previousValue at the end of each iteration, leaving nothing for the old object to be retained by once it is replaced; a sketch of this fix follows, and the memory profile changes accordingly:
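The sketch below keeps the structure of the earlier snippet and only adds the release of previousValue; it is illustrative rather than the book's exact code:

function replaceValue() {
  var previousValue = something;

  function unused() {
    if (previousValue) console.log("hi");
  }

  something = {
    str: new Array(1000000).join('*'),
    someMethod: function () {}
  };

  previousValue = null; // nothing holds the previous object now, so it can be GC'd
}

setInterval(replaceValue, 1000);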

[Screenshot: memory profile]

The preceding image changes as follows:

[Screenshot: updated memory profile]

To summarize, we introduced JavaScript micro optimizations and memory optimizations that ultimately lead to high-performance JavaScript.

If you have found this post useful, do check out the book Hands-On Data Structures and Algorithms with JavaScript for solutions to implementing complex data structures and algorithms in a practical way.
