Hands-On JavaScript High Performance

Immutability versus Mutability - The Balance between Safety and Speed

In recent years, development practices have moved toward a more functional style of programming. This means less focus on mutable programming (changing existing variables instead of creating new ones when we want to modify something). Mutability happens when we change a variable from one thing to another. This could be updating a number, changing what a message says, or even changing an item from a string to a number. Mutable state leads to quite a few programming pitfalls, such as indeterminate state, deadlocking in multithreaded environments, and even data types changing when we did not mean them to (also known as side effects). We now have many libraries and languages that help us curtail this behavior.

All of this has caused a push toward the use of immutable data structures and functions that create new objects based on the input. While this leads to fewer errors in terms of mutable state, it presents a host of other issues, predominantly, higher memory usage and lower speeds. Most JavaScript runtimes do not have optimizations that allow for this style of programming. When we are concerned with memory and speed, we need to have as much of an advantage as possible, and this is the advantage mutable programming gives us.

In this chapter, we are going to focus on the following topics:

  • Current trends with immutability on the web
  • Writing safe mutable code
  • Functional-like programming on the web

Technical requirements

The current fascination with immutability

A look at current web trends shows the fascination with utilizing immutability. Libraries such as React can be used without their immutable state counterparts, but they are usually used along with Redux or Facebook's Flow library. Any of these libraries will showcase how immutability can lead to safer code and fewer bugs.

For those of you who do not know, immutability means that we cannot change a variable once it has been set with data; once we assign something to it, it stays that way. This helps prevent unwanted changes from happening, and can also lead to a concept called pure functions. We will not go into what pure functions are, but be aware that it is a concept many functional programmers have been bringing to JavaScript.

But do we need it, and does it lead to a faster system? In the case of JavaScript, it can depend. A well-managed project with documentation and testing can easily get by without these libraries. On top of this, we may need to actually mutate the state of an object. We may write to an object in one location, but have many other parts read from that object.

There are many patterns of development that can give us similar benefits to what immutability can without the overhead of creating a lot of temporary objects or even going into a fully pure functional style of programming. We can utilize systems such as Resource Acquisition Is Initialization (RAII). We may find that we want to use some immutability, and in this case, we can utilize built-in browser tools such as Object.freeze() or Object.seal().
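
As a quick taste of those built-ins, here is a minimal sketch of the difference between them (the object names are made up for illustration):

'use strict';
const config = Object.freeze({ retries : 3 });
// config.retries = 5;     // TypeError in strict mode: frozen objects reject all writes
const settings = Object.seal({ retries : 3 });
settings.retries = 5;      // allowed: sealing still permits updating existing properties
// settings.timeout = 100; // TypeError: sealed objects reject new properties
console.log(settings.retries); // 5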

However, we are getting ahead of ourselves. Let's take a look at a couple of the libraries mentioned and see how they handle immutable states and how it could potentially lead to problems when we are coding.

A dive into Redux

Redux is a great state management system. When we are developing complex systems such as Google Docs or a reporting system that lets us look up various user statistics in real time, it can manage the state of our application. However, it can lead to some overly complicated systems that may not need the state management that it represents.

Redux takes the philosophy that no one object should be able to mutate the state of an application. All of that state needs to be hosted in a single location and there should be functions that handle state changes. This would mean a single location for writes, and multiple locations able to read the data. This is similar to some concepts that we will want to utilize later.

However, it does take things a step further, and many articles will tell us to pass back brand-new objects. There is a reason for this. Many objects, especially those that have multiple layers, are not easy to copy. Simple copy operations, such as using Object.assign({}, obj) or utilizing the spread operator for arrays, will just copy the references that they hold inside. Let's take a look at an example of this before we write a Redux-based application.

If we open up not_deep_copy.html from our repository, we will see that the console prints the same thing. If we take a look at the code, we will see a very common case of copying objects and arrays:

const newObj = Object.assign({}, obj);
const newArr = [...arr];
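
To see why these copies are only one layer deep, consider the following small demonstration (the object shape is made up for illustration):

const obj = { item : { a : 1 } };
const newObj = Object.assign({}, obj);
newObj.item.a = 99;      // mutates the nested object...
console.log(obj.item.a); // 99 -- both copies share the same inner reference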

If we make this only a single layer deep, we will see that it actually executes a copy. The following code will showcase this:

const obj2 = {item : 'thing', another : 'what'};
const arr2 = ['yes', 'no', 'nope'];

const newObj2 = Object.assign({}, obj2);
const newArr2 = [...arr2];

We will go into more detail regarding this case and how to truly execute a deep copy, but we can begin to see how Redux may hide problems that are still in our system. Let's build out a simple Todo application to at least showcase Redux and what it is capable of. So, let's begin:

  1. First, we will need to pull down Redux. We can do this by utilizing Node Package Manager (npm) and installing it in our system. It is as simple as npm install redux.
  2. We will now go into the newly created folder and grab the redux.min.js file and put it into our working directory.
  3. We now will create a file called todo_redux.html. This will house all of our main logic.
  4. At the top of it, we will add the Redux library as a dependency.
  5. We will then add in the actions that we are going to perform on our store.
  6. We will then set up the reducers that we want to use for our application.
  7. We will then set up the store and prepare it for data changes.
  8. We will then subscribe to those data changes and make updates to the UI.

The example that we are working on is a slightly modified version of the Todo application from the Redux example. The one nice thing is that we will be utilizing the vanilla DOM and not utilizing another library such as React, so we can see how Redux can fit into any application if the need arises.

  1. So, our actions are going to be adding a todo element, toggling a todo element to complete or not complete, and setting the todo elements that we want to see. This code appears as follows:
const addTodo = function(text) {
    return { type : ACTIONS.ADD_TODO, text };
}
const toggleTodo = function(index) {
    return { type : ACTIONS.TOGGLE_TODO, index };
}
const setVisibilityFilter = function(filter) {
    return { type : ACTIONS.SET_VISIBILITY_FILTER, filter };
}
  2. Next, the reducers will be separated, with one for our visibility filter and another for the actual todo elements.

The visibility reducer is quite simple. It checks the type of the action and, if it is SET_VISIBILITY_FILTER, handles it; otherwise, it just passes the state object on. For our todo reducer, if we see an ADD_TODO action, we return a new list of items with our item at the end. If we toggle one of the items, we return a new list with that item set to the opposite of what it was. Otherwise, we just pass the state object on. All of this looks like the following:

const visibilityFilter = function(state = 'SHOW_ALL', action) {
    switch(action.type) {
        case 'SET_VISIBILITY_FILTER': {
            return action.filter;
        }
        default: {
            return state;
        }
    }
}

const todo = function(state = [], action) {
    switch(action.type) {
        case 'ADD_TODO': {
            return [
                ...state,
                {
                    text : action.text,
                    completed : false
                }
            ];
        }
        case 'TOGGLE_TODO': {
            return state.map((todo, index) => {
                if( index === action.index ) {
                    return Object.assign({}, todo, {
                        completed : !todo.completed
                    });
                }
                return todo;
            });
        }
        default: {
            return state;
        }
    }
}
  3. After this, we put both reducers into a single reducer and set up the state object, as sketched below.
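
A minimal sketch of that wiring, assuming the UMD build exposes the global Redux object and reusing the two reducers defined above, could look like this:

const reducer = Redux.combineReducers({ visibilityFilter, todo });
const store = Redux.createStore(reducer);
let prevState = store.getState(); // seed the previous state that our subscriber compares against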

The heart of our logic lies in the UI implementation. Notice that we set this up to work off the data. This means that data can be passed into our function and the UI will update accordingly. We could make it work the other way around, but making the UI be driven by data is a good paradigm to live by. We first keep a previous state store. We could utilize this further by only updating what actually changed, but we use it only for the first check. We grab the current state and check the differences between the two. If we see that the length has changed, we know that we should add a todo item. If we see that the visibility filter has changed, we update the UI accordingly. Finally, if neither of these is true, we go through and check which item was checked or unchecked. The code looks like the following:

store.subscribe(() => {
    const state = store.getState();
    // first type of action: ADD_TODO
    if( prevState.todo.length !== state.todo.length ) {
        container.appendChild(createTodo(state.todo[state.todo.length - 1].text));
    // second type of action: SET_VISIBILITY_FILTER
    } else if( prevState.visibilityFilter !== state.visibilityFilter ) {
        setVisibility(container.children, state);
    // final type of action: TOGGLE_TODO
    } else {
        const todos = container.children;
        for(let i = 0; i < todos.length; i++) {
            if( state.todo[i].completed ) {
                todos[i].classList.add('completed');
            } else {
                todos[i].classList.remove('completed');
            }
        }
    }
    prevState = state;
});

If we run this, we should get a simple UI that we can interact with in the following ways:

  • Add todo items.
  • Mark existing todo items as complete.

We are also able to have a different view of it by clicking on one of the three buttons at the bottom. If we only want to see our completed tasks, we can click the Completed button.

Now, we are able to save the state for offline storage if we wanted to, or we could send the state back to a server for constant updates. This is what makes Redux quite nice. However, there are some caveats when working with Redux that relate to what we stated previously:

  1. First, we are going to need to add something to our Todo application to be able to handle nested objects in our state. A piece of information that has been left out of this Todo application is setting a date by when we want to complete that item. So, let's add some fields for us to fill out to set a completion date. We will add in three new number inputs like so:
<input id="year" type="number" placeholder="Year" />
<input id="month" type="number" placeholder="Month" />
<input id="day" type="number" placeholder="Day" />
  2. Then, we will add in another filter type of Overdue:
<button id="SHOW_OVERDUE">Overdue</button>
  3. Make sure to add this to the visibilityFilters object. Now, we need to update our addTodo action so that it also passes along a date object. This also means we will need to update our ADD_TODO case to add the action.date to our new todo object. We will then update the onclick handler for our Add button and adjust it with the following:
const year = document.getElementById('year');
const month = document.getElementById('month');
const day = document.getElementById('day');
store.dispatch(addTodo(input.value, {year : year.value, month : month.value, day : day.value}));
year.value = "";
month.value = "";
day.value = "";
  4. We could hold the date as a Date object (this would make more sense), but to showcase the issue that can arise, we are just going to hold a new object with year, month, and day fields. We will then showcase this date on the Todo application by adding another span element and populating it with the values from these fields. Finally, we will need to update our setVisibility method with the logic to show our overdue items. It should look like the following:
case visibilityFilters.SHOW_OVERDUE: {
    const currTodo = state.todo[i];
    const tempTime = currTodo.date;
    const tempDate = new Date(`${tempTime.year}/${tempTime.month}/${tempTime.day}`);
    if( tempDate < currDay && !currTodo.completed ) {
        todos[i].classList.remove('hide');
    } else {
        todos[i].classList.add('hide');
    }
}

With all of this, we should now have a working Todo application, along with showcasing our overdue items. Now, this is where it can get messy working with state management systems such as Redux. What happens when we want to make modifications to an already created item and it is not a simple flat object? Well, we could just get that item and update it in the state system. Let's add the code for this:

  1. First, we are going to create a new button and input that will change the year of the last entry. We will add a click handler for the Update button:
document.getElementById('UPDATE_LAST_YEAR').onclick = function(e) {
    store.dispatch({ type : ACTIONS.UPDATE_LAST_YEAR, year : document.getElementById('updateYear').value });
}
  2. We will then add in this new action handler for the todo system:
case 'UPDATE_LAST_YEAR': {
    const prevState = state;
    const tempObj = Object.assign({}, state[state.length - 1].date);
    tempObj.year = action.year;
    state[state.length - 1].date = tempObj;
    return state;
}

Now, if we run our code with our system, we will notice something. Our code is not getting past the check object condition in our subscription:

if( prevState === state ) {
    return;
}

We updated the state directly, so Redux never created a new object because it did not detect a change (we updated the value of an object that no reducer handles directly). Now, we could create another reducer specifically for the date, but we can also just recreate the array and pass it through:

case 'UPDATE_LAST_YEAR': {
    const prevState = state;
    const tempObj = Object.assign({}, state[state.length - 1].date);
    tempObj.year = action.year;
    state[state.length - 1].date = tempObj;
    return [...state];
}

Now, our system detects that there was a change and we are able to go through our methods to update the code.

The better implementation would be to split our todo reducer into two separate reducers, as sketched below. But, since we are working on an example, it was made as simple as possible.
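
One hypothetical shape for that split (not the code from this example's repository) is a dedicated date reducer that the todo reducer delegates to:

const date = function(state = {}, action) {
    switch(action.type) {
        case 'UPDATE_LAST_YEAR': {
            // always hand back a new object so the subscriber's check sees a change
            return Object.assign({}, state, { year : action.year });
        }
        default: {
            return state;
        }
    }
}
// the todo reducer would then delegate the nested slice and rebuild the array
const todoWithDate = function(state = [], action) {
    switch(action.type) {
        case 'UPDATE_LAST_YEAR': {
            return state.map((item, index) => index === state.length - 1 ?
                Object.assign({}, item, { date : date(item.date, action) }) : item);
        }
        default: {
            return state;
        }
    }
}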

With all of this, we can see how we need to play by the rules that Redux has laid out for us. While this tool can be of great benefit for us in large-scale applications, for smaller state systems or even componentized systems, we may find it better to have a true mutable state and work on it directly. As long as we control access to that mutable state, then we are able to fully utilize a mutable state to our advantage.

This is not to take anything away from Redux. It is a wonderful library and it performs well even under heavier loads. But, there are times when we want to work directly with a dataset and mutate it directly. Redux can do this and gives us its event system, but we are able to build this ourselves without all of the other pieces that Redux gives us. Remember that we want to slim the codebase down as much as possible and make it as efficient as possible. Extra methods and extra calls can add up when we are working with tens to hundreds of thousands of data items.

With this introduction into Redux and state management systems complete, we should also take a look at a library that makes immutable systems a requirement: Immutable.js.

Immutable.js

Again, utilizing immutability, we can code in an easier-to-understand fashion. However, it will usually mean we can't scale to the levels that we need for truly high-performance applications.

First, Immutable.js takes a great stab at providing the functional-style data structures and methods needed to create a functional system in JavaScript. This usually leads to cleaner code and cleaner architecture. But what we gain in these advantages, we pay for in a decrease in speed and/or an increase in memory.

Remember, when we're working with JavaScript, we have a single-threaded environment. This means that we do not really have deadlocks, race conditions, or read/write access problems.

We can actually run into these issues when utilizing something like SharedArrayBuffers between workers or different tabs, but that is a discussion for later chapters. For now, we are working in a single-threaded environment where the issues of multi-core systems do not really crop up.

Let's take a real-world example of a use case that can come up. We want to turn a list of lists into a list of objects (think of a CSV). What might the code to build this data structure look like in plain old JavaScript, and what might it look like utilizing the Immutable.js library? Our vanilla JavaScript version may appear as follows:

const fArr = new Array(fillArr.length - 1);
const rowSize = fillArr[0].length;
const keys = new Array(rowSize);
for(let i = 0; i < rowSize; i++) {
    keys[i] = fillArr[0][i];
}
for(let i = 1; i < fillArr.length; i++) {
    const obj = {};
    for(let j = 0; j < rowSize; j++) {
        obj[keys[j]] = fillArr[i][j];
    }
    fArr[i - 1] = obj;
}

We construct a new array of the size of the input list minus one (the first row holds the keys). We then store the row size instead of computing it each time for the inner loop later. Then, we create another array to hold the keys, and we grab those from the first index of the input array. Next, we loop through the rest of the entries in the input and create an object for each. We then loop through each inner array, setting the object's property named by the key at location j to the input's value at i and j.

Reading in data through nested arrays and loops can be confusing, but results in fast read times. On a dual-core processor with 8 GB of RAM, this code took 83 ms.

Now, let's build something similar in Immutable.js. It should look like the following:

const l = Immutable.List(fillArr);
const _k = Immutable.List(fillArr[0]);
const tFinal = l.map((val, index) => {
    if(!index ) return;
    return Immutable.Map(_k.zip(val));
});
const final = tFinal.shift();

This is much easier to interpret if we understand functional concepts. First, we want to create a list based on our input. We then create another temporary list for the keys called _k. For our temporary final list, we utilize the map function. If we are at the 0 index, we just return from the function (since this is the keys). Otherwise, we return a new map that is created by zipping the keys list with the current value. Finally, we remove the front of the final list since it will be undefined.

This code is wonderful in terms of readability, but what are the performance characteristics of this? On a current machine, this ran in around 1 second. This is a big difference in terms of speed. Let's see how they compare in terms of memory usage.

Settled memory (what the memory goes back to after running the code) appears to be the same, settling back to around 1.2 MB. However, the peak memory for the immutable version is around 110 MB, whereas the Vanilla JavaScript version only gets to 48 MB, so a little under half the memory usage. Let's take a look at another example and see the results that transpire.
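
A rough way to reproduce these measurements yourself (convert here is a hypothetical wrapper around either version above) is to time the work directly and watch the browser's memory profiler for the peaks:

const start = performance.now();
const result = convert(fillArr); // either the vanilla or the Immutable.js version
console.log(`took ${performance.now() - start} ms`);
// settled and peak memory are easiest to read from the DevTools memory timeline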

We are going to create an array of values, except we want one of the values to be incorrect. So, we will set the 50,000th index to be wrong with the following code:

const tempArr = new Array(100000);
for(let i = 0; i < tempArr.length; i++) {
    if( i === 50000 ) { tempArr[i] = 'wrong'; }
    else { tempArr[i] = i; }
}

Then, we will loop over a new array with a simple for loop like so:

const mutArr = Array.apply([], tempArr);
const errs = [];
for(let i = 0; i < mutArr.length; i++) {
    if( mutArr[i] !== i ) {
        errs.push(`Error at loc ${i}. Value : ${mutArr[i]}`);
        mutArr[i] = i;
    }
}

We will also test the built-in map function:

const mut2Arr = Array.apply([], tempArr);
const errs2 = [];
const fArr = mut2Arr.map((val, index) => {
    if( val !== index ) {
        errs2.push(`Error at loc: ${index}. Value : ${val}`);
        return index;
    }
    return val;
});

Finally, here's the immutable version:

const immArr = Immutable.List(tempArr);
const ierrs = [];
const corrArr = immArr.map((item, index) => {
    if( item !== index ) {
        ierrs.push(`Error at loc ${index}. Value : ${item}`);
        return index;
    }
    return item;
});

If we run these instances, we will see that the fastest is a toss-up between the basic for loop and the built-in map function. The immutable version is still eight times slower than the others. What happens when we increase the number of incorrect values? Let's add a random number generator when building our temporary array to give a random number of errors and see how they perform. The code should appear as follows:

for(let i = 0; i < tempArr.length; i++) {
    if( Math.random() < 0.4 ) {
        tempArr[i] = 'wrong';
    } else {
        tempArr[i] = i;
    }
}

Running the same test, we get roughly an average of a tenfold slowdown with the immutable version. Now, this is not to say that the immutable version will not run faster in certain cases since we only touched on the map and list features of it, but it does bring up the point that immutability comes at a cost in terms of memory and speed when applying it to JavaScript libraries.

We will look in the next section at why mutability can lead to some issues, but also at how we can handle it by utilizing similar ideas to how Redux works with data.

There is always a time and a place for different libraries, and this is not to say that Immutable.js or libraries like it are bad. If we find that our datasets are small or other considerations come into play, Immutable.js might work for us. But when we are working on high-performance applications, this usually means one of two things: one, we will get a large amount of data in a single hit, or two, we will get a bunch of events that lead to a lot of data build-up. We need to use the most efficient means possible, and these are usually built into the runtime that we are utilizing.

Writing safe mutable code

Before we move on to writing safe mutable code, we need to discuss references and values. A value can be considered anything that is a primitive type. Primitive types, in JavaScript, are anything that is not considered an object. To put it simply, numbers, strings, Booleans, null, and undefined are values. This means that if you create a new variable and assign it from the original, it actually gets its own copy of the value. What does this mean for our code, then? We saw earlier with Redux that it was not able to see that we updated a property in our state system, so our previous state and current state appeared to be the same. This is due to a shallow equality test. This basic test checks whether the two variables that were passed in point to the same object. A simple example of this is seen with the following code:

let x = {};
let y = x;
console.log( x === y );
y = Object.assign({}, x);
console.log( x === y );

We will see that the first version says that the two items are equal. But, when we create a copy of the object, it states that they are not equal. y now has a brand-new object and this means that it points to a new location in memory. While a deeper understanding of pass by value and pass by reference can be good, this should be sufficient to move on to mutable code.
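
To contrast this with primitives, the following shows that values are copied rather than shared:

let a = 5;
let b = a;      // b receives its own copy of the value
b = 10;
console.log(a); // 5 -- primitives never share state between variables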

When writing safe mutable code, we want to give the illusion that we are writing immutable code. In other words, the interface should look like we are utilizing immutable systems, but we are instead utilizing mutable systems internally. Hence, there is a separation of the interface from the implementation.

We can make the implementation very fast by writing in a mutable way but give an interface that looks immutable. An example of this is as follows:

Array.prototype._map = function(fun) {
    if( typeof fun !== 'function' ) {
        return null;
    }
    const arr = new Array(this.length);
    for(let i = 0; i < this.length; i++) {
        arr[i] = fun(this[i]);
    }
    return arr;
}

We have written a _map function on the array prototype so that every array gets it, and it performs a simple map operation. If we now test run this code, we will see that some browsers perform better with this, while others perform better with the built-in option. As stated before, the built-ins will eventually get faster but, more often than not, a simple loop is going to be faster for now. Let's now look at another example of a mutable implementation, but with an immutable interface:

Array.prototype._reduce = function(fun, initial=null) {
    if( typeof fun !== 'function' ) {
        return null;
    }
    let val = initial ? initial : this[0];
    const startIndex = initial ? 0 : 1;
    for(let i = startIndex; i < this.length; i++) {
        val = fun(val, this[i], i, this);
    }
    return val;
}

We have written a reduce function that performs better in every browser. Now, it does not have the same amount of type checking as the built-in, which helps its performance, but it does showcase how we can write functions that perform better while presenting the same type of interface that a user of our system expects.
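
Usage of these prototype methods looks just like the built-ins, for example:

console.log([1, 2, 3, 4]._map(x => x * 2));         // [2, 4, 6, 8]
console.log([1, 2, 3, 4]._reduce((a, b) => a + b)); // 10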

What we have talked about so far is if we were writing a library for someone to use to make their lives easier. What happens if we are writing something that we or an internal team is going to utilize, as is the case for most application developers?

We have two options in this case. First, we may find that we are working on a legacy system and that we are going to have to try to program in a similar style to what has already been done, or we are developing something rather new and we are able to start off from scratch.

Working with legacy code is a hard job and most people will usually get it wrong. While we should be aiming to improve on the code base, we are also trying to match the style. It is especially difficult for developers to walk through the code and see 10 different code choices used because 10 different developers have worked on the project over its lifespan. If we are working on something that someone else has written, it is usually better to match the code style than to come up with something completely different.

With a new system, we are able to write how we want and, with proper documentation, we can write something that is quite fast but is also easy for someone else to pick up. In this case, we can write mutable code that may have side effects in the functions, but we are able to document these cases.

A side effect occurs when a function does not just return a new value or work on the references passed into it; updating a variable that lies outside the function's own scope constitutes a side effect. An example of this is as follows:

var glob = 'a single point system';
const implement = function(x) {
    glob = glob.concat(' more');
    return x += 2;
}

We have a global variable called glob that we are changing inside our function. Technically, this function has scope over glob, but we should try to define the scope of implement as being only what was passed into it and the temporary variables that implement has defined inside. Since we are mutating glob, we have introduced a side effect into our code base.

Now, in some situations, side effects are needed. We may need to update a single point, or we may need to store something in a single location, but we should try to implement an interface that does this for us instead of us directly affecting the global item (this should start to sound a lot like Redux). By writing a function or two to affect the out-of-scope items, we can now diagnose where an issue may come in because we have those single points of entry.

So what might this look like? We could create a state object just as a plain old object. Then, we could write a function on the global scope called updateState that would look like the following:

const updateState = function(update) {
    const x = Object.keys(update);
    for(let i = 0; i < x.length; i++) {
        state[x[i]] = update[x[i]];
    }
}

Now, while this may be good, we are still vulnerable to someone updating our state object through the actual global property. Luckily, by making our state object and our function const, we can make sure that erroneous code cannot touch these actual names. Let's update our code so our state is protected from being updated directly. There are two ways that we could do this. The first approach would be to code with modules, with our state object scoped to its module. We will look at modules and the import syntax further on in the book. Instead, on this occasion, we are going to use the second method: coding in the Immediately Invoked Function Expression (IIFE) style. The following showcases this implementation:

const state = {};
(function(scope) {
    const _state = {};
    scope.update = function(obj) {
        const x = Object.keys(obj);
        for(let i = 0; i < x.length; i++) {
            _state[x[i]] = obj[x[i]];
        }
    }
    scope.set = function(key, val) {
        _state[key] = val;
    }
    scope.get = function(key) {
        return _state[key];
    }
    scope.getAll = function() {
        return Object.assign({}, _state);
    }
})(state);
Object.freeze(state);

First, we create a constant state. We then run an IIFE, passing in the state object and setting a bunch of functions on it. It all works on an internally scoped _state variable. We then have all the basic functions that we would expect of an internal state system. We also freeze the external state object so it can no longer be messed with. One question that may arise is why getAll passes back a new object instead of a reference. If we are trying to make sure that no one is able to touch the internal state, then we cannot pass a reference out; we have to pass a new object.
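
Putting the interface through its paces might look like the following (the property names are made up for illustration):

state.set('username', 'tim');
console.log(state.get('username')); // 'tim'
state.update({ loggedIn : true, visits : 1 });
const snapshot = state.getAll();    // a fresh shallow copy, not the internal object
snapshot.visits = 99;
console.log(state.get('visits'));   // still 1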

We still have a problem. What happens if we want to update more than one layer deep? We will start running into reference issues again. That means that we will need to update our update function to perform a deep update. We can do this in a variety of ways, but one way would be to pass the key in as a string and split it on the period character.

This is not the best way to handle this, since a property could technically have a period in its name, but it allows us to write something quickly. Writing something that is functional and writing what is considered a complete solution are two different things, and they have to be balanced when writing high-performance code bases.

So, we will have a method that will now look like the following:

const getNestedProperty = function(key) {
    const tempArr = key.split('.');
    let temp = _state;
    while( tempArr.length > 1 ) {
        temp = temp[tempArr.shift()];
        if( temp === undefined ) {
            throw new Error('Unable to find key!');
        }
    }
    return { obj : temp, finalKey : tempArr[0] };
}
scope.set = function(key, val) {
    const { obj, finalKey } = getNestedProperty(key);
    obj[finalKey] = val;
}
scope.get = function(key) {
    const { obj, finalKey } = getNestedProperty(key);
    return obj[finalKey];
}

What we are doing is splitting the key on the period character. We also grab a reference to the internal state object. While we still have items in the list, we move one level down in the object. If we find undefined, we throw an error. Otherwise, once we are one level above where we want to be, we return an object with that reference and the final key. We then use this in the getter and setter to replace those values.
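
For example, assuming the intermediate levels already exist (the helper above throws otherwise), nested access looks like this:

state.update({ user : { address : { city : 'Seattle' } } });
state.set('user.address.city', 'Chicago');   // walks user -> address, then sets city
console.log(state.get('user.address.city')); // 'Chicago'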

Now, we still have a problem. What if we want to make a reference type be the property value for our internal state system? Well, we will run into the same issues that we saw before. We will have references outside the single state object. This means we will have to clone each step of the way to make sure that the external reference does not point to anything in the internal copy. We can create this system by adding a bunch of checks and making sure that when we get to a reference type, we clone it in a way that is efficient. This looks like the following code:

const _state = {},
checkPrimitives = function(item) {
    return item === null || typeof item === 'boolean' || typeof item === 'string' ||
        typeof item === 'number' || typeof item === 'undefined';
},
cloneFunction = function(fun, scope=null) {
    return fun.bind(scope);
},
cloneObject = function(obj) {
    const newObj = {};
    const keys = Object.keys(obj);
    for(let i = 0; i < keys.length; i++) {
        const key = keys[i];
        const item = obj[key];
        newObj[key] = runUpdate(item);
    }
    return newObj;
},
cloneArray = function(arr) {
    const newArr = new Array(arr.length);
    for(let i = 0; i < arr.length; i++) {
        newArr[i] = runUpdate(arr[i]);
    }
    return newArr;
},
runUpdate = function(item) {
    return checkPrimitives(item) ?
        item :
        typeof item === 'function' ?
            cloneFunction(item) :
            Array.isArray(item) ?
                cloneArray(item) :
                cloneObject(item);
};

scope.update = function(obj) {
    const x = Object.keys(obj);
    for(let i = 0; i < x.length; i++) {
        _state[x[i]] = runUpdate(obj[x[i]]);
    }
}

What we have done is write a simple clone system. Our update function goes through the keys and runs the update. We then check for various conditions, such as whether we are a primitive type. If we are, we just copy the value; otherwise, we need to figure out which complex type we are. We first check whether we are a function; if so, we just bind the value. If we are an array, we run through all of the values and make sure that none of them are complex types. Finally, if we are an object, we run through all of the keys and try to update them, running the same checks.
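
A quick demonstration of the payoff (the data is made up for illustration): mutating the caller's object after an update no longer reaches the internal copy:

const external = { list : [1, 2, 3] };
state.update({ data : external });
external.list.push(4);               // mutate the caller's version afterward
console.log(state.get('data').list); // [1, 2, 3] -- the internal clone is unaffected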

However, we have just done what we have been avoiding: we have created an immutable state system. We can add more bells and whistles to this centralized state system, such as eventing, or we can implement a coding standard that has been around for quite some time, called Resource Acquisition Is Initialization (RAII).

There is a really nice built-in API called Proxy. Proxies essentially let us run code when something happens on an object. At the time of writing, they are still quite slow and should not really be used unless it is on an object that we are not worried about for time-sensitive activities. We are not going to talk about them extensively, but they are available for those readers who want to check them out.

Resource Acquisition Is Initialization (RAII)

The idea of RAII comes from C++, where we have no such thing as an automatic memory manager. We encapsulate logic where we potentially want to share resources that need to be freed after their use. This makes sure that we do not have memory leaks, and that objects that utilize the item do so in a safe manner. Another name for this is Scope-Bound Resource Management (SBRM), and it is also utilized in another recent language called Rust.

We can apply the same types of ideas that C++ and Rust do in terms of RAII in our JavaScript code. There are a couple of ways that we can handle this and we are going to look at them. The first is the idea that when we pass an object into a function, we can then null out that object from our calling function.

Now, we will have to use let instead of const in most cases for this to work, but it is a useful paradigm to make sure that we are only holding on to objects that we need.

This concept can be seen in the following code:

const getData = function() {
    return document.getElementById('container').value;
};
const encodeData = function(data) {
    let te = new TextEncoder();
    return te.encode(data);
};
const hashData = function(algorithm) {
    let str = getData();
    let finData = encodeData(str);
    str = null;
    return crypto.subtle.digest(algorithm, finData);
};
{
    let but = document.getElementById('submit');
    but.onclick = function(ev) {
        let algos = ['SHA-1', 'SHA-256', 'SHA-384', 'SHA-512'];
        let out = document.getElementById('output');
        for(let i = 0; i < algos.length; i++) {
            const newEl = document.createElement('li');
            hashData(algos[i]).then((res) => {
                let te = new TextDecoder();
                newEl.textContent = te.decode(res);
                out.append(newEl);
            });
        }
        out = null;
    }
    but = null;
}

If we run the preceding code, we will notice that we are trying to append to null. This is where this design can get us into a bit of trouble. We have an asynchronous method, and we are trying to use a value that we have nullified even though we still need it. What is the best way to handle this situation? One way is to null it out once we are done using it. Hence, we can change the code to look like the following:

for(let i = 0; i < algos.length; i++) {
    let temp = out;
    const newEl = document.createElement('li');
    hashData(algos[i]).then((res) => {
        let te = new TextDecoder();
        newEl.textContent = te.decode(res);
        temp.append(newEl);
        temp = null;
    });
}

We still have a problem. Before the next part of the Promise (the then method) runs, we could still modify the value. One final good idea would be to wrap this input-to-output logic in a new function. This gives us the safety that we are looking for, while also making sure we are following the principle behind RAII. The following code is what comes out of this:

const showHashData = function(parent, algorithm) {
    const newEl = document.createElement('li');
    hashData(algorithm).then((res) => {
        let te = new TextDecoder();
        newEl.textContent = te.decode(res);
        parent.append(newEl);
    });
}

We can also get rid of some of the preceding nulls since the functions will take care of those temporary variables. While this example is rather trivial, it does showcase one way of handling RAII inside JavaScript.

On top of this paradigm, we can also add properties to the item that we are passing to say that it is a read-only version. This would ensure that we are not modifying the item, but we also do not need to null out the element on the calling function if we still want to read from it. This gives us the benefit of making sure our objects can be utilized and maintained without the worry that they will be modified.

We will take the previous code example and update it to utilize this read-only property. We first define a function that will add the property to any object that comes in, like so:

const addReadableProperty = function(item) {
    Object.defineProperty(item, 'readonly', {
        value : true,
        writable : false
    });
    return item;
}

Next, in our onclick method, we pass our output into this method. This has now attached the readonly property to it. Finally, in our showHashData function, when we try to access it, we have put a guard on the readonly property. If we notice that the object has it, we will not try to append to it, like so:

if(!parent.readonly ) {
    parent.append(newEl);
}

We have also set this property to not be writable, so if a nefarious actor tries to manipulate our object's readonly property, the change will not take, and we will still skip appending to the DOM. The defineProperty method is very powerful for writing APIs and libraries that cannot be easily manipulated. Another way of handling this is to freeze the object. With the freeze method, we are able to make an object read-only, but only shallowly. Remember that this applies only to the object itself, not to the reference types its properties hold.
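
The shallowness of freeze is easy to demonstrate:

const frozen = Object.freeze({ count : 1, inner : { count : 1 } });
frozen.count = 2;        // silently ignored (a TypeError in strict mode)
frozen.inner.count = 2;  // works -- freeze does not reach nested references
console.log(frozen.count, frozen.inner.count); // 1 2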

Finally, we can utilize a counter to see whether we can set the data. We are essentially creating a read-side lock. This means that while anyone is reading the data, we do not want to set it. It also means we have to take precautions to properly release the data once we have read what we want. This can look like the following:

const ReaderWriter = function() {
    let data = {};
    let readers = 0;
    let readyForSet = new CustomEvent('readydata');
    this.getData = function() {
        readers += 1;
        return data;
    }
    this.releaseData = function() {
        if( readers ) {
            readers -= 1;
            if(!readers ) {
                document.dispatchEvent(readyForSet);
            }
        }
        return readers;
    }
    this.setData = function(d) {
        return new Promise((resolve, reject) => {
            if(!readers ) {
                data = d;
                resolve(true);
            } else {
                document.addEventListener('readydata', function(e) {
                    data = d;
                    resolve(true);
                }, { once : true });
            }
        });
    }
}

What we have done is set up a constructor function. We hold the data, the number of readers, and a custom event as private variables. We then create three methods. First, getData grabs the data and increments the count of readers utilizing it. Next, we have the release method. This decrements the counter and, if we hit 0, dispatches an event to tell any pending setData call that it can finally write to the mutable state. Finally, we have the setData function. A promise is the return value. If no one is holding the data, we set it and resolve right away. Otherwise, we set up an event listener for our custom event. Once it fires, we set the data and resolve the promise.
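
A short run-through of the lock in action (the data shape is made up for illustration):

const rw = new ReaderWriter();
const view = rw.getData();     // take a read lock on the current data
rw.setData({ fresh : true }).then(() => console.log('write landed'));
console.log(rw.releaseData()); // 0 -- the last reader released, so the pending write now fires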

Now, this final method of locking mutable data should not be utilized in most contexts. There may only be a handful of times when you will want to utilize it, such as a hot cache where we need to make sure that we do not overwrite something while a reader is reading from it (this can happen on the Node.js side of things, especially).

All of these methods help create a safe mutable state. With each of these, we are able to mutate an object directly and share that memory space. Most of the time, good documentation and careful control over our data will make it so we do not need to go to the extremes that we have here, but it is good to have these methods of RAII in our back pocket when we find something crops up and we are mutating something that we should not be.

Most of the time, the immutable and highly functional code will be more readable in the end and, if something does not need to be highly optimized, it is suggested to go for being readable. But, in high optimization cases, such as encoding and decoding or decorating columns in a table, we will need to squeeze out as much performance as we can. This will be seen later in the book where we utilize a mixture of programming techniques.

Even though mutable programming can be fast, sometimes, we want to implement things in a functional manner. The following section will explore ways to implement programs in this functional manner.

Functional style programming

Even after all of this talk about functional concepts not being the best in terms of raw speed, it can still be quite helpful to utilize them in JavaScript. There are many languages out there that are not purely functional and all of these give us the ability to utilize the best ideas from many paradigms. Languages such as F# and Scala come to mind. There are a few ideas that are great when it comes to this style of programming and we can utilize them in JavaScript with built-in concepts.

Lazy evaluation

In JavaScript, we can perform what is called lazy evaluation. Lazy evaluation means that the program does not run what it does not need to. One way of thinking about this is when someone is given a list of answers to a problem and is told to find the correct one. If they see that the correct answer is the second item they look at, they are not going to keep going through the rest of the answers; they will stop at the second item. The way we use lazy evaluation in JavaScript is with generators.

Generators are functions that will pause execution until the next method is called on them. A simple example of this is shown as follows:

const simpleGenerator = function*() {
    let it = 0;
    for(;;) {
        yield it;
        it++;
    }
}

const sg = simpleGenerator();
for(let i = 0; i < 10; i++) {
    console.log(sg.next().value);
}
sg.return();
console.log(sg.next().value);

First, we notice that the function has a star next to it. This shows that it is a generator function. Next, we set up a simple variable to hold our value, and then we have an infinite loop. Some may think that this will run continuously, but lazy evaluation shows that we only run up to the yield. This yield means we pause execution here, and we can grab the value that is sent back.

So, we start the function up. We have nothing to pass to it so we just simply start it. Next, we call next on the generator and grab the value. This gives us a single iteration and returns whatever was on the yield statement. Finally, we call return to say that we are done with this generator. If we wanted to, we can grab the final value here.

Now, we will notice that when we call next and try to grab the value, it returns undefined. We can take a look at the generator and notice that it has a property called done. For finite generators, this allows us to see whether they are finished. So, how can this be helpful when we want to do something? A rather trivial example is a timing function. What we will do is start the timer before we want to time something, and then we will call it another time to calculate the time it took for something to run (very similar to console.time and console.timeEnd, but it showcases what is available with generators).
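
The done flag is easy to see with a small finite generator:

const finite = function*() { yield 1; yield 2; };
const it = finite();
console.log(it.next()); // { value : 1, done : false }
console.log(it.next()); // { value : 2, done : false }
console.log(it.next()); // { value : undefined, done : true }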

This generator could look like the following:

const timing = function*(time) {
    yield Date.now() - time;
}
const time = timing(Date.now());
let sum = 0;
for(let i = 0; i < 1000000; i++) {
    sum = sum + i;
}
console.log(time.next().value);

We are now timing a simple summing function. All this does is seed the timing generator with the current time. Once the next function is called, it runs the statements up to the yield and returns the value held in the yield. This will give us a new time against the time that we passed in. We now have a simple function for timings. This can be especially useful for environments where we may not have access to the console and we are going to log this information somewhere else.

Just as shown in the preceding code block, we can also work with many different types of lazy loading. One of the best types that utilize this interface is streams. Streams have been available inside Node.js for quite some time, but the stream interface for browsers has a basic standardization and certain parts are still under debate. A simple example of this type of lazy loading or lazy reading can be seen in the following code:

const nums = function*(fn=null) {
    let i = 0;
    for(;;) {
        yield i;
        if( fn ) {
            i += fn(i);
        } else {
            i += 1;
        }
    }
}
const data = [];
const gen = nums();
for(let i of gen) {
    console.log(i);
    if( i > 100 ) {
        break;
    }
    data.push(i);
}

const fakestream = function*(data) {
    const chunkSize = 10;
    const dataLength = data.length;
    let i = 0;
    while( i < dataLength ) {
        const outData = [];
        for(let j = 0; j < chunkSize && i < dataLength; j++) {
            outData.push(data[i]);
            i += 1;
        }
        yield outData;
    }
}

for(let i of fakestream(data)) {
    console.log(i);
}

This example shows the concept of lazy evaluation along with a couple of concepts for streaming that we will see in a later chapter. First, we create a generator that can take in a function and can utilize it for our logic function in creating numbers. In our case, we are just going to use the default case and have it generate one number at a time. Next, we are going to run this through a for/of loop to generate numbers up to 101.

Next, we create a fakestream generator that will chunk our data for us. This is similar to streams that allow us to work on a chunk of data at a time. We can transform this data if we want to (known as a TransformStream), or we can just let it pass through (a special type of TransformStream called a PassThrough). We set a chunk size of 10. We then run another for/of loop over the data we had before and simply log it. However, we could decide to do something with this data if we wanted to.

This is not the exact interface that streams utilize, but it does showcase how we can have lazy evaluation inside our code with generators and that it is also built into certain concepts such as streaming. There are many other potential uses for generators and lazy evaluation techniques that will not be covered here, but they are available to developers who are looking for a more functional-style approach to list and map comprehensions.

Tail-end recursion optimization

This is another concept that many functional languages have, but most JavaScript engines do not (WebKit being the exception). Tail-end recursion optimization allows recursive functions that are built in a certain way to run just like a simple loop. In pure functional languages, there is no such thing as a loop, so the only method of working over a collection is to go through it recursively. Without the optimization, though, a tail-recursive function will blow our stack. The following code illustrates this:

const _d = new Array(100000);
for(let i = 0; i < _d.length; i++) {
    _d[i] = i;
}
const recurseSummer = function(data, sum=0) {
    if(!data.length ) {
        return sum;
    }
    return recurseSummer(data.slice(1), sum + data[0]);
}
console.log(recurseSummer(_d));

We create an array of 100,000 items and assign each the value at its index. We then try using a recursive function to sum all of the data in the array. Since the last call of the function is the function itself, some compilers are able to make an optimization here. If they notice that the last call is to the same function, they know that the current stack frame can be destroyed (there is nothing left for the function to do). However, non-optimizing compilers (most JavaScript engines) will not make this optimization, so we keep adding frames to our call stack. This leads to a maximum call stack size exceeded error and makes it so we cannot utilize this purely functional concept.

There is hope for JavaScript, however. A concept called trampolining can be utilized to make tail-end recursion possible by modifying the function a bit and how we call it. The following is the modified code to utilize trampolining and give us what we want:

const trampoline = (fun) => {
    return (...args) => {
        let result = fun(...args);
        while( typeof result === 'function' ) {
            result = result();
        }
        return result;
    }
}

const _d = new Array(100000);
for(let i = 0; i < _d.length; i++) {
    _d[i] = i;
}
const recurseSummer = function(data, sum=0) {
    if(!data.length ) {
        return sum;
    }
    return () => recurseSummer(data.slice(1), sum + data[0]);
}
const final = trampoline(recurseSummer);
console.log(final(_d));

What we are doing is wrapping our recursive function inside one that we run through in a simple loop. The trampoline function works like this:

  • It takes in a function and returns a newly constructed function that will run our recursive function but loop through it, checking the return type.
  • Inside this inner function, it starts the loop up by executing a first run of the function.
  • While we still see a function as our return type, it will continue looping.
  • Once we finally do not get a function, we will return the results.

We are now able to utilize tail-end recursion to do some of the things that we would do in a purely functional world. An example of this was seen previously (which could be seen as a simple reduce function). Another example is as follows:

const recurseFilter = function(data, con, filtered=[]) {
    if(!data.length ) {
        return filtered;
    }
    return () => recurseFilter(data.slice(1), con, con(data[0]) ?
        filtered.length ? new Array(...filtered, data[0]) : [data[0]] : filtered);
}

const finalFilter = trampoline(recurseFilter);
console.log(finalFilter(_d, item => item % 2 === 0));

With this function, we are simulating what a filter-based operation might look like in a pure functional language. Again, if there is no length, we are at the end of the array and we return our filtered array. Otherwise, we return a new function that recursively calls itself with a new list, the function that we are going to filter with, and the filtered list. There is a bit of odd syntax here: when the filtered list is empty, we have to pass back a literal single-item array; otherwise, new Array with a lone number would give us an empty array of that length instead of an array containing that number.
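
The Array constructor quirk that forces this special case is easy to see:

console.log(new Array(5)); // [ <5 empty items> ] -- a lone number is treated as a length
console.log([5]);          // [ 5 ] -- the single-element array we actually want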

We can see that both of these functions pass what is known as tail-end recursion and are also functions that could be written in a purely functional language. But, we will also see that these run a lot slower than simple for loops or even the built-in array methods for these types of functions. At the end of the day, if we wanted to write purely functional programming using tail-end recursion, we could, but it is wise not to do this in JavaScript.

Currying

The final concept that we will be looking at is currying. Currying is the transformation of a function that takes multiple arguments into a series of functions that each take a single argument and return either another function or the final value. Let's take a look at a simple example to see this concept in action:

const add = function(a) {
    return function(b) {
        return a + b;
    }
}

What we are doing is taking a function that conceptually accepts multiple arguments, such as the add function. We then return a function that takes a single argument, in this case b. This function adds the numbers a and b together. What this allows us to do is either use the function as we normally would (except we run the function that comes back to us, passing in the second argument) or get the return value from running it on a single argument and then use that function to add whatever values come next. Each of these concepts can be seen in the following code:

console.log(add(2)(5), 'this will be 7');
const add5 = add(5);
console.log(add5(5), 'this will be 10');

There are a couple of uses for currying, and it also shows off a concept that can be used quite frequently. First, it shows off the idea of partial application. What partial application does is set some of our arguments for us and return a function. We can then pass this function along in the chain of statements and eventually use it to fill in the remaining arguments.

Just remember that all curried functions are partially applied functions, but not all partially applied functions are curried functions.

An example of partial application can be seen in the following code:

const fullFun = function(a, b, c) {
    console.log('a', a);
    console.log('b', b);
    console.log('c', c);
}
const tempFun = fullFun.bind(null, 2);
setTimeout(() => {
    const temp2Fun = tempFun.bind(null, 3);
    setTimeout(() => {
        const temp3Fun = temp2Fun.bind(null, 5);
        setTimeout(() => {
            console.log('temp3Fun');
            temp3Fun();
        }, 1000);
    }, 1000);
    console.log('temp2Fun');
    temp2Fun(5);
}, 1000);
console.log('tempFun');
tempFun(3, 5);

First, we create a function that takes three arguments. We then create a new temporary function that binds 2 to the first argument of that function. Bind is an interesting function. It takes the scope that we want as the first argument (what this points to) and then takes an arbitrary number of arguments to fill in for the arguments of the function we are working on. In our case, we only bind the first argument to the number 2. We then create a second temporary function, where we bind the first argument of the first temporary function to 3. Finally, we create a third and final temporary function, where we bind the first argument of the second function to the number 5.

We can see at each run that we are able to run each of these functions and that they take a different number of arguments depending on which version of the function we have used. bind is a very powerful tool and allows us to pass functions around that may get arguments filled in from other functions before the final function is used.

Currying is the idea that we will use partial application, but that a multi-argument function will be composed of multiple nested functions inside it. So what does currying give us that we cannot already do with other concepts? If we are in the pure functional world, we can actually get quite a bit. Take, for example, the map function on arrays. It wants a function that takes a single item (we are going to ignore the other parameters that we normally do not use) and returns a single item. What happens when we have a function, such as the following one, that could be used inside the map function, but has multiple arguments? The following code showcases what we are able to do with currying in this use case:

const calculateArbitraryValueWithPrecision = function(prec = 0) {
    return function(val) {
        return parseFloat((val / 1000).toFixed(prec));
    };
};
const arr = new Array(50000);
for(let i = 0; i < arr.length; i++) {
    arr[i] = i + 1000;
}
console.log(arr.map(calculateArbitraryValueWithPrecision(2)));

What we are doing is taking a generic function (an arbitrary one at that) and making it specific enough to use inside map, in this case by fixing the precision to two decimal places. This allows us to write very generic functions that work over arbitrary data and derive specific functions from them.
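
As a quick follow-up, the same generic function can be specialized more than once, and each specialization can be passed around independently; the toTwoPlaces and toWhole names are illustrative:

const toTwoPlaces = calculateArbitraryValueWithPrecision(2);
const toWhole = calculateArbitraryValueWithPrecision(0);
console.log(toTwoPlaces(2340), 'this will be 2.34');
console.log(toWhole(2340), 'this will be 2');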

We will utilize partial application a bit in our code, and we may use currying. In general, however, we will not utilize currying as it is seen in purely functional languages, as this can lead to slowdowns and higher memory consumption. The main ideas to take away are partial application and how variables in an outer scope can be captured and used in an inner scope.
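
That outer-to-inner scope capture is the closure mechanism that both partial application and currying rely on. The following is a minimal sketch; makeCounter is an illustrative name:

const makeCounter = function() {
    let count = 0; // outer-scope variable captured by the inner function
    return function() {
        count += 1; // the inner function can read and even mutate it
        return count;
    };
};
const counter = makeCounter();
console.log(counter(), counter(), 'this will be 1 2');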

These three concepts are quite crucial to the idea of pure functional programming, but we will not utilize most of them. In highly performant code, we need to squeeze out every ounce of speed and memory that we can, and most of these constructs take up more than we care for. Certain concepts can still be used to great effect in high-performance code, and the following will appear in later chapters: partial application, streaming/lazy evaluation, and possibly some recursion. Being comfortable with seeing functional code will help when working with libraries that utilize these concepts, but, as we have discussed at length, they are not as performant as our iterative methods.

Summary

In this chapter, we have looked at the ideas of mutability and immutability. We have seen how immutability can lead to slowdowns and higher memory consumption, which can be an issue when we are writing high-performance code. We have taken a look at mutability and how to write code that utilizes it but is still safe. On top of this, we have performed performance comparisons between mutable and immutable code and have seen where speed and memory consumption increase for the immutable types. Finally, we took a look at functional-style programming in JavaScript and how we can utilize its concepts. Functional programming can help with many issues, such as lock-free concurrency, but we also know that JavaScript runtimes are single-threaded, so this does not give us an advantage. Overall, there are many concepts that we can borrow from different programming paradigms, and having all of them in our toolkit can make us better programmers and help us write clean, safe, and high-performance code.

In the next chapter, we will take a look at how JavaScript has evolved as a language. We will also take a look at how browsers have changed to meet the demands of developers, with new APIs that cover everything from accessing the DOM to long-term storage.

Key benefits

  • Discover effective techniques for accessing the DOM, minimizing painting, and using the V8 engine to optimize JavaScript
  • Understand what makes the web tick and create apps that look and feel like native desktop applications
  • Explore modern JavaScript frameworks like Svelte.js for building next-gen web apps

Description

High-performance web development is all about cutting through the complexities in different layers of a web app and building services and APIs that improve the speed and performance of your apps on the browser. With emerging web technologies, building scalable websites and sustainable web apps is smoother than ever. This book starts by taking you through the web frontend, popular web development practices, and the latest version of ES and JavaScript. You'll work with Node.js and learn how to build web apps without a framework. The book consists of three hands-on examples that help you understand JavaScript applications at both the server-side and the client-side using Node.js and Svelte.js. Each chapter covers modern techniques such as DOM manipulation and V8 engine optimization to strengthen your understanding of the web. Finally, you’ll delve into advanced topics such as CI/CD and how you can harness their capabilities to speed up your web development dramatically. By the end of this web development book, you'll have understood how the JavaScript landscape has evolved, not just for the frontend but also for the backend, and be ready to use new tools and techniques to solve common web problems.

Who is this book for?

This JavaScript book is for web developers, C/C++ programmers, and anyone who wants to build robust web applications using advanced web technologies. This book assumes a good grasp of Vanilla JavaScript and an understanding of web development tools, such as Chrome Developer tools or Mozilla’s developer tools.

What you will learn

  • Explore Vanilla JavaScript for optimizing the DOM, classes, and modules, and querying with jQuery
  • Understand immutable and mutable code and develop faster web apps
  • Delve into Svelte.js and use it to build a complete real-time Todo app
  • Build apps to work offline by caching calls using service workers
  • Write C++ native code and call the WebAssembly module with JavaScript to run it on a browser
  • Implement CircleCI for continuous integration in deploying your web apps

Product Details

Publication date: Feb 28, 2020
Length: 376 pages
Edition: 1st
Language: English
ISBN-13: 9781838825867
Vendor: Node.js Developers


Table of Contents

14 Chapters
Tools for High Performance on the Web
Immutability versus Mutability - The Balance between Safety and Speed
Vanilla Land - Looking at the Modern Web
Practical Example - A Look at Svelte and Being Vanilla
Switching Contexts - No DOM, Different Vanilla
Message Passing - Learning about the Different Types
Streams - Understanding Streams and Non-Blocking I/O
Data Formats - Looking at Different Data Types Other Than JSON
Practical Example - Building a Static Server
Workers - Learning about Dedicated and Shared Workers
Service Workers - Caching and Making Things Faster
Building and Deploying a Full Web Application
WebAssembly - A Brief Look into Native Code on the Web
Other Books You May Enjoy

Customer reviews

Rating distribution: 2.3 out of 5 (3 ratings)
5 star: 33.3%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 66.7%

Batiklady, Mar 12, 2020 (5 stars)
As a professional programmer with over 30 years of experience, I was intrigued by the premise of using JavaScript for high-performance web applications. The industry is ripe with web programmers, and books on the subject seem to stem from either front-end or back-end programming. This book approaches the subject from both aspects. It takes the reader through the steps, in both logic and by example, for utilizing JavaScript in high-performance (i.e. high speed and high bandwidth) applications.

I call this book a journey, as it moves the reader from the requirements for high-performance tools through DOM and API calls, Streams, and Service Workers to WebAssembly. There are even brief glimpses into HTTP3 and database applications within the browser. It appears to be a well-thought-out process; although some subjects might be controversial amongst web programmers, the author expresses their view in a logical manner. This book is a great tool for those who may be interested in generating optimized code, something that in my view we need to concentrate on more. It is also for those who may be new to the field, as it compiles all of the necessary information into one succinct manual/story.

I purchased the hard copy of this book with great hopes that it would meet my expectations, and it certainly did. The formatting and examples were easy to read. I also want to note that the examples appear to be skeletal in nature, making them easy to enter, test, and modify for my own use. Well worth the money.
Amazon Verified review

Konstantin, Oct 03, 2023 (1 star)
A book about anything and nothing at the same time. The title is absolutely misleading.
Subscriber review, Packt

Charles Rector, Mar 08, 2020 (1 star)
Hundreds of pages are formatted with narrow one- or two-letter columns of text and code. The majority of this content can be found for free on the web.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download Adobe Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing

When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it, we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect the rights of us as publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the print book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment can be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book, go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats, such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in the Adobe eBook Reader format, which has greater security restrictions.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are priced lower than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.