Functional style programming

Even after all of this talk about functional concepts not being the best in terms of raw speed, they can still be quite helpful to use in JavaScript. Many languages are not purely functional, and they let us borrow the best ideas from several paradigms; F# and Scala come to mind. A few ideas from this style of programming are well worth having, and we can apply them in JavaScript with built-in language features.

Lazy evaluation

In JavaScript, we can perform what is called lazy evaluation. Lazy evaluation means that the program does not run what it does not need to. One way of thinking about this is someone being handed a list of possible answers to a problem and told to pick out the correct one. If they see that the correct answer is the second item they look at, they are not going to keep going through the rest of the list; they will stop at the second item. The way we use lazy evaluation in JavaScript is with generators.

Generators are functions that will pause execution until the next method is called on them. A simple example of this is shown as follows:

const simpleGenerator = function*() {
    let it = 0;
    for(;;) {
        yield it;
        it++;
    }
}

const sg = simpleGenerator();
for(let i = 0; i < 10; i++) {
    console.log(sg.next().value);
}
sg.return();
console.log(sg.next().value);

First, notice that the function has a star next to it; this marks it as a generator function. Next, we set up a simple variable to hold our value and then open an infinite loop. Some may think this will run continuously, but lazy evaluation means we only run up to the yield. The yield pauses execution at that point and lets us grab the value that we send back.

So, we start the function up. We have nothing to pass to it, so we simply start it. Next, we call next on the generator and grab the value. This runs a single iteration and returns whatever was on the yield statement. Finally, we call return to say that we are done with this generator. If we wanted to, we could pass a final value to return and grab it here.

Now, we will notice that when we call next again and try to grab the value, it returns undefined. If we look at the object that next returns, we will also see a property called done, which lets us check whether a finite generator has finished. So, how can this be helpful when we want to do something? A rather trivial example is a timing function. We will start the timer before the thing we want to time and then call the generator again to calculate how long it took to run (very similar to console.time and console.timeEnd, but it showcases what is available with generators).
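Before moving on, here is a minimal sketch of how done behaves; the finiteGenerator name is purely illustrative and not part of the book's examples:

const finiteGenerator = function*() {
    yield 1;
    yield 2;
}

const fg = finiteGenerator();
console.log(fg.next()); // { value: 1, done: false }
console.log(fg.next()); // { value: 2, done: false }
console.log(fg.next()); // { value: undefined, done: true }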

Returning to the timing example, the generator could look like the following:

const timing = function*(time) {
    yield Date.now() - time;
}
const time = timing(Date.now());
let sum = 0;
for(let i = 0; i < 1000000; i++) {
    sum = sum + i;
}
console.log(time.next().value);

We are now timing a simple summing loop. All this does is seed the timing generator with the current time. Once next is called, it runs the statements up to the yield and returns the value held in the yield. This gives us the elapsed time measured against the time that we passed in. We now have a simple function for timings. This can be especially useful in environments where we may not have access to the console and want to log this information somewhere else.
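As a rough sketch of that idea, we could collect each measurement into an array and ship it off later instead of printing it; the timings array and recordTiming helper here are hypothetical, not part of the book's code:

const timings = [];
const recordTiming = (label, timer) => {
    // calling next() runs the generator body and gives us the elapsed time
    timings.push({ label, elapsed: timer.next().value });
}

const timer = timing(Date.now());
let total = 0;
for(let i = 0; i < 1000000; i++) {
    total += i;
}
recordTiming('sum loop', timer);
// later, timings could be sent to a logging endpoint instead of the console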

Beyond timing, we can apply lazy evaluation to many different kinds of lazy loading. One of the best examples of this interface is streams. Streams have been available in Node.js for quite some time, but the stream interface for browsers has only a basic standardization, and certain parts are still under debate. A simple example of this kind of lazy loading, or lazy reading, can be seen in the following code:

const nums = function*(fn=null) {
    let i = 0;
    for(;;) {
        yield i;
        if( fn ) {
            i += fn(i);
        } else {
            i += 1;
        }
    }
}
const data = [];
const gen = nums();
for(let i of gen) {
    console.log(i);
    if( i > 100 ) {
        break;
    }
    data.push(i);
}

const fakestream = function*(data) {
    const chunkSize = 10;
    const dataLength = data.length;
    let i = 0;
    while( i < dataLength ) {
        const outData = [];
        for(let j = 0; j < chunkSize && i < dataLength; j++) {
            outData.push(data[i]);
            i += 1;
        }
        yield outData;
    }
}

for(let i of fakestream(data)) {
    console.log(i);
}

This example shows the concept of lazy evaluation along with a couple of streaming concepts that we will see in a later chapter. First, we create a generator that can take in a function and use it as the logic for producing numbers. In our case, we just use the default behavior and have it count up one number at a time. Next, we run it through a for/of loop, logging numbers up to 101 and collecting values up to 100 in our data array.

Next, we create a fakestream generator that chunks our data for us. This is similar to streams, which let us work on a chunk of data at a time. We could transform this data if we wanted to (this is known as a TransformStream) or just let it pass through (a special type of TransformStream called a PassThrough). We set a fake chunk size of 10. We then run another for/of loop over the data we had before and simply log each chunk, though we could decide to do something else with it.

This is not the exact interface that streams use, but it does show how we can get lazy evaluation in our code with generators, and how the same idea is built into concepts such as streaming. There are many other potential uses for generators and lazy evaluation that will not be covered here, but they are available to developers looking for a more functional-style approach to list and map comprehensions.
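As a hedged illustration of that last point, generators compose nicely into lazy, comprehension-style helpers; the lazyMap and take names below are our own and not a standard API:

const lazyMap = function*(iterable, fn) {
    for(const item of iterable) {
        yield fn(item);
    }
}
const take = function*(iterable, count) {
    let taken = 0;
    for(const item of iterable) {
        yield item;
        taken += 1;
        if( taken >= count ) {
            return;
        }
    }
}

// squares of the first five numbers from nums(); nothing beyond them is ever computed
for(const squared of take(lazyMap(nums(), x => x * x), 5)) {
    console.log(squared);
}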

Tail-end recursion optimization

This is another concept that many functional languages have, but most JavaScript engines do not (WebKit being the exception). Tail-end recursion optimization allows recursive functions that are written in a certain way to run just like a simple loop. In pure functional languages, there is no such thing as a loop, so the only way to work over a collection is to go through it recursively. Without this optimization, even a function written in tail-recursive form will blow our call stack. The following code illustrates this:

const _d = new Array(100000);
for(let i = 0; i < _d.length; i++) {
    _d[i] = i;
}
const recurseSummer = function(data, sum=0) {
    if(!data.length) {
        return sum;
    }
    return recurseSummer(data.slice(1), sum + data[0]);
}
console.log(recurseSummer(_d));

We create an array of 100,000 items and assign each one the value of its index. We then try to use a recursive function to sum all of the data in the array. Since the last call in the function is to the function itself, some compilers can make an optimization here. If they notice that the last call is to the same function, they know the current stack frame can be discarded (there is nothing left for the function to do). However, engines that do not perform this optimization (most JavaScript engines) keep adding frames to the call stack. This leads to exceeding the maximum call stack size and means we cannot use this purely functional concept directly.

There is hope for JavaScript, however. A concept called trampolining can be utilized to make tail-end recursion possible by modifying the function a bit and how we call it. The following is the modified code to utilize trampolining and give us what we want:

const trampoline = (fun) => {
    return (...args) => {
        let result = fun(...args);
        while( typeof result === 'function' ) {
            result = result();
        }
        return result;
    }
}

const _d = new Array(100000);
for(let i = 0; i < _d.length; i++) {
    _d[i] = i;
}
const recurseSummer = function(data, sum=0) {
    if(!data.length) {
        return sum;
    }
    return () => recurseSummer(data.slice(1), sum + data[0]);
}
const final = trampoline(recurseSummer);
console.log(final(_d));

What we are doing is wrapping our recursive function inside a function that runs it through a simple loop. The trampoline function works like this:

  • It takes in a function and returns a newly constructed function that will run our recursive function but loop through it, checking the return type.
  • Inside this inner function, it starts the loop up by executing a first run of the function.
  • While we still see a function as our return type, it will continue looping.
  • Once we finally do not get a function, we will return the results.

We are now able to use tail-end recursion to do some of the things that we would do in a purely functional world. One example was seen previously (the summing function, which is essentially a simple reduce). Another example is as follows:

const recurseFilter = function(data, con, filtered=[]) {
    if(!data.length) {
        return filtered;
    }
    return () => recurseFilter(data.slice(1), con, con(data[0]) ?
        filtered.length ? new Array(...filtered, data[0]) : [data[0]] : filtered);
}

const finalFilter = trampoline(recurseFilter);
console.log(finalFilter(_d, item => item % 2 === 0));

With this function, we are simulating what a filter-based operation might look like in a pure functional language. Again, if there is no length, we are at the end of the array and we return our filtered array. Otherwise, we return a new function that recursively calls itself with a new list, the function that we are filtering with, and the filtered list so far. There is a bit of odd syntax here: when the filtered list is empty, we have to pass back a literal single-item array ([data[0]]) rather than using new Array, otherwise the single number would be treated as a length and we would get an empty array of that many slots, as the following snippet shows.
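The quirk being worked around is the behavior of the Array constructor when it is given a single number, which this short reminder (not from the book's code) demonstrates:

// new Array with a single number creates an empty array of that length,
// which is why the code above special-cases the first filtered item
console.log(new Array(4));      // [ <4 empty items> ]
console.log([4]);               // [ 4 ]
console.log(new Array(4, 8));   // [ 4, 8 ]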

Both of these functions are written in tail-recursive form and could be expressed in a purely functional language. However, they also run a lot slower than simple for loops or even the built-in array methods for this kind of work. At the end of the day, if we wanted to write purely functional code using tail-end recursion, we could, but it is wise not to do so in JavaScript; a simple iterative version of the same filter is sketched below for comparison.
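For comparison, here is a minimal sketch of the same filter written as a plain loop (our own example, not the book's); it avoids the repeated slicing and array copying that make the recursive version slow:

const loopFilter = function(data, con) {
    const filtered = [];
    for(let i = 0; i < data.length; i++) {
        if( con(data[i]) ) {
            filtered.push(data[i]);
        }
    }
    return filtered;
}
console.log(loopFilter(_d, item => item % 2 === 0));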

Currying

The final concept that we will be looking at is currying. Currying is the technique of turning a function that takes multiple arguments into a series of functions that each take a single argument and return either another function or the final value. Let's take a look at a simple example to see this concept in action:

const add = function(a) {
    return function(b) {
        return a + b;
    }
}

What we are doing is taking what would normally be a two-argument function, add, and expressing it as a function that takes the first argument, a, and returns a function that takes the second argument, b, and adds the two together. This allows us to either use the function much as we normally would (except that we call the function that comes back to us with the second argument) or run it on a single argument and keep the returned function around to add whatever values come next. Both of these uses can be seen in the following code:

console.log(add(2)(5), 'this will be 7');
const add5 = add(5);
console.log(add5(5), 'this will be 10');

There are a couple of uses for currying, and it also shows off a concept that comes up quite frequently. First, it shows off the idea of partial application, which sets some of our arguments for us and returns a function. We can then pass this function along through a chain of statements and eventually use it to fill in the remaining arguments.

Just remember that all curried functions are partially applied functions, but not all partially applied functions are curried functions.

An example of partial application can be seen in the following code:

const fullFun = function(a, b, c) {
    console.log('a', a);
    console.log('b', b);
    console.log('c', c);
}
const tempFun = fullFun.bind(null, 2);
setTimeout(() => {
    const temp2Fun = tempFun.bind(null, 3);
    setTimeout(() => {
        const temp3Fun = temp2Fun.bind(null, 5);
        setTimeout(() => {
            console.log('temp3Fun');
            temp3Fun();
        }, 1000);
    }, 1000);
    console.log('temp2Fun');
    temp2Fun(5);
}, 1000);
console.log('tempFun');
tempFun(3, 5);

First, we create a function that takes three arguments. We then create a new temporary function that binds 2 to the first argument of that function. bind is an interesting function: it takes the value we want this to point to as its first argument and then an arbitrary number of arguments to fill in for the parameters of the function we are binding. In our case, we only bind the first parameter to the number 2. We then create a second temporary function that binds the first remaining parameter of the first temporary function to 3. Finally, we create a third and final temporary function that binds the first remaining parameter of the second function to the number 5.

At each step, we can still call the function, and it takes a different number of arguments depending on which version of the function we are using. bind is a very powerful tool and allows us to pass around functions that may get arguments filled in from other functions before the final function is used.

Currying builds on partial application, but composes a multi-argument function out of multiple nested single-argument functions. So what does currying give us that we cannot already do with other techniques? If we are in the pure functional world, quite a bit. Take, for example, the map function on arrays. It wants a callback that takes a single item (we will ignore the other parameters that we normally do not use) and returns a single item. What happens when we have a function that we would like to use inside map, but it needs multiple arguments? The following code showcases what currying lets us do in this case:

const calculateArbitraryValueWithPrecision = function(prec=0) {
    return function(val) {
        return parseFloat((val / 1000).toFixed(prec));
    }
}
const arr = new Array(50000);
for(let i = 0; i < arr.length; i++) {
    arr[i] = i + 1000;
}
console.log(arr.map(calculateArbitraryValueWithPrecision(2)));

What we are doing is taking a generic function (an arbitrary one at that) and using it inside map by making it more specific, in this case by fixing the precision to two decimal places. This allows us to write very generic functions that work over arbitrary data and derive specific functions from them.

We will use partial application a bit in our code, and we may use currying. In general, however, we will not use currying the way it is seen in purely functional languages, as this can lead to slowdowns and higher memory consumption. The main ideas to take away are partial application and the way variables from an outer scope can be used inside an inner scope.

These three concepts are central to pure functional programming, but we will not use most of them. In highly performant code, we need to squeeze out every ounce of speed and memory that we can, and most of these constructs cost more than we care to pay. Certain concepts, however, can still be put to good use in high-performance code; the following will come up in later chapters: partial application, streaming/lazy evaluation, and possibly some recursion. Being comfortable reading functional code will help when working with libraries that rely on these concepts, but as we have discussed at length, it is not as performant as our iterative methods.
