When principles degrade performance — the memory problem of immutability

Sooner or later, every developer ends up following principles to help write good code. One principle that's popular in JavaScript is immutability. Unfortunately, it impacts performance in a way that can make it impractical to stick to the principle strictly.

What is immutability?

Immutability is the idea that objects shouldn't be mutated. Unlike primitive values such as numbers and strings, an object can be referenced by multiple variables at the same time.

If multiple variables can refer to the same data, modifying the object through one reference modifies it for all of them. Consider this example:

const myObject1 = { foo: 'bar' };
const myObject2 = myObject1;

myObject1.foo = 'Hello World';

console.log(myObject2.foo); // Prints: "Hello World"

This behaviour can lead to unexpected results. If we follow the immutability principle, we create new objects instead of modifying existing ones. Take a look at the examples below:

Mutation
function updateQuantity(product, quantity) {
	product.quantity = quantity;
}
Immutability through creation
function updateQuantity(product, quantity) {
	return {
		...product,
		quantity,
	};
}

The first example mutates the product object. The second example copies the properties of product into a new object via the spread syntax (...) and overwrites the quantity property in one fell swoop.

Regardless of our preference in terms of readability, immutability can avoid very confusing bugs. Now, how does this affect performance?
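
To make the difference concrete, here's a minimal sketch using the immutable version of updateQuantity (the product shape is made up):

const product = { name: 'Ice cream', quantity: 1 };
const updated = updateQuantity(product, 2);

console.log(product.quantity); // Still 1: the original object is untouched
console.log(updated.quantity); // 2

// With the mutating version, product.quantity would have become 2
// and no new object would have been returned.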

Memory allocation and garbage collection

When we assign a value to a variable, that information is stored in memory. In languages like C and C++, programmers must allocate memory before they can store values in it, and release that memory once there's no longer a reason to retain the information.

Conveniently, JavaScript, or rather the JavaScript engine, allocates memory for us. Whenever we concatenate strings or create a new object, more memory is allocated to store those values.
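
As a rough sketch (the exact allocations depend on the engine), each of the following lines causes memory to be allocated behind the scenes:

const greeting = 'Hello, ' + 'World'; // a new string is allocated
const user = { name: 'Ada' };         // a new object is allocated
const copy = { ...user };             // another object is allocated for the copy
const numbers = [1, 2, 3];            // an array is allocated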

Consider the code below:

function getRandomAverage() {
	const iterations = 10;
	let sum = 0;

	for (let i = 0; i < iterations; i++) {
		sum += Math.random();
	}

	return sum / iterations;
}

This function generates a bunch of random numbers and returns the average. With every Math.random() call, memory is allocated to hold that number. More specifically, memory is allocated for a number in double-precision 64-bit binary format (IEEE 754), often referred to as a double or a float. For the sake of simplicity, let's say memory isn't reused: this function allocates memory for 10 doubles to hold the random values, and another double for the return value.

When this function is done and the result has been returned, we're left with 10 doubles in memory that we no longer need, because nothing references those values anymore.

That excess memory stays allocated until the garbage collector finds these unreferenced values and releases the associated memory. The more memory we allocate, the more values the garbage collector has to check.
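
In other words, a value only becomes eligible for collection once nothing references it anymore. A minimal sketch using the function above:

const average = getRandomAverage();

// The intermediate random numbers summed inside getRandomAverage()
// are no longer reachable at this point, so the garbage collector
// is free to reclaim their memory whenever it runs.
console.log(average);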

Now that we know how a JavaScript engine manages memory, let’s get back to immutability.

Objects, objects, objects

Let’s revisit our examples from before and think about these in terms of memory allocation:

Mutation
function updateQuantity(product, quantity) {
	product.quantity = quantity;
}
Immutability through creation
function updateQuantity(product, quantity) {
	return {
		...product,
		quantity,
	};
}

One of these allocates much more memory than the other. Can you tell which? The first example only updates a property on the existing object, while the other creates a completely new object based on product. As a result, the latter allocates more memory.

By now, you must be wondering: how much impact does it have? A quick benchmark on jsbench.me shows that creating new objects is roughly 90% slower, regardless of whether the object was created via an object literal with spread (like the example above) or with Object.assign().
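
As a rough sketch, these are the kinds of cases such a benchmark compares (the product shape is made up):

const product = { id: 1, name: 'Ice cream', quantity: 1 };

// Mutation: no new object is allocated.
product.quantity = 2;

// Object literal with spread: a new object is allocated.
const viaSpread = { ...product, quantity: 2 };

// Object.assign(): a new object is allocated as well.
const viaAssign = Object.assign({}, product, { quantity: 2 });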

Benchmarks are an oversimplification of reality, though. Despite being ~90% slower, the immutable version still achieved 95 million operations per second on my machine. Although that's a massive number, low-end mobile devices could show noticeable lag when you're aggregating a dataset, for example.

Is immutability bad?

Immutability is still a perfectly fine principle to stick to, but we can apply it less strictly and think about the scopes in which mutation is allowed. For example, we can permit it within functions, as long as it doesn't produce side effects.

Let’s say we have a list of individual page visits, and we want to turn that into a list of distinct pages with a visit count. Let’s compare two implementations, one following a strictly immutable approach and one following a loosely immutable approach:

Strictly immutable
function getPageVisitCount(pageVisits) {
	const pagesById = pageVisits
		.reduce((pages, visit) => {
			if (visit.pageId in pages) {
				return {
					...pages,
					[visit.pageId]: {
						...pages[visit.pageId],
						count: pages[visit.pageId].count + 1,
					},
				};
			}

			return {
				...pages,
				[visit.pageId]: {
					pageId: visit.pageId,
					count: 1,
				},
			};
		}, {});

	return Object.values(pagesById);
}
Loosely immutable
function getPageVisitCount(pageVisits) {
	const pagesById = pageVisits
		.reduce((pages, visit) => {
			if (visit.pageId in pages) {
				pages[visit.pageId].count++;
				return pages;
			}

			pages[visit.pageId] = {
				pageId: visit.pageId,
				count: 1,
			};

			return pages;
		}, {});

	return Object.values(pagesById);
}

In the strictly immutable example, many objects are created simply for the sake of being able to call the code immutable. Realistically, the objects we’re creating aren’t exposed outside the reduce() callback function. In other words: it’s safe to mutate any object that doesn’t escape getPageVisitCount(). This is why the second implementation is just as safe and many times more performant.

If pageVisits is a list of two or three items, there’s virtually no difference in performance. However, if it’s a list of thousands or millions of items, the difference will definitely be noticeable.
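
Both implementations produce the same result; only the amount of allocation differs. A quick usage sketch, assuming each visit only needs a pageId property:

const pageVisits = [
	{ pageId: 'home' },
	{ pageId: 'about' },
	{ pageId: 'home' },
];

console.log(getPageVisitCount(pageVisits));
// [ { pageId: 'home', count: 2 }, { pageId: 'about', count: 1 } ]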

Note how this relates to other principles. Imagine we’ve decided to prefer many small functions over larger ones. Let’s rewrite the code from before to adhere to that principle:

function createPageVisitCountObject(pageId) {
	return {
		pageId,
		count: 1,
	};
}

function incrementPageVisitCount(pageVisitCount) {
	return {
		...pageVisitCount,
		count: pageVisitCount.count + 1,
	};
}

function getPageVisitCount(pageVisits) {
	const pagesById = pageVisits
		.reduce((pages, visit) => {
			if (visit.pageId in pages) {
				pages[visit.pageId] = incrementPageVisitCount(pages[visit.pageId]);
				return pages;
			}

			pages[visit.pageId] = createPageVisitCountObject(visit.pageId);

			return pages;
		}, {});

	return Object.values(pagesById);
}

By writing very small functions and using immutability to avoid side effects, we’re implicitly opting into creating objects again while still following the loose immutability principle. The larger function we had before, on the other hand, provided the overview to recognize that we can increment the count property without causing side effects. It’s all a balancing act, and it’s up to you to find that balance.
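
One possible middle ground, sketched here as a suggestion rather than a rule, is to keep the small helper but let it mutate its argument and make that explicit in its name:

// Deliberately mutates its argument; the name makes that explicit.
function incrementPageVisitCountInPlace(pageVisitCount) {
	pageVisitCount.count++;
}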

Conclusion

Immutability is a good principle that we should encourage. Dogmatically practising the principle without reaping the benefits, however, isn’t helpful and may significantly degrade performance.

By defining the boundaries in which immutability is useful, and therefore when it should be applied, we can avoid the performance pitfall we saw.