
New memory usage optimizations implemented in V8 Lite can also benefit V8

  • 4 min read
  • 13 Sep 2019


The V8 Lite project was started in late 2018 to dramatically reduce V8's memory usage, and its Lite mode shipped in V8 version 7.3. V8 is Google’s open-source JavaScript and WebAssembly engine, written in C++. V8 Lite provides a 22% reduction in typical web page heap size compared to V8 version 7.1 by disabling code optimization, not allocating feedback vectors, and aging seldom-executed bytecode.

Initially, this project was envisioned as a separate Lite mode of V8. However, the team realized that many of these memory optimizations could be brought over to regular V8, benefiting all of its users. They found that most of Lite mode's memory savings could be achieved, with none of the performance impact, by making V8 lazier. To bring the Lite mode optimizations to regular V8, they implemented lazy feedback allocation, lazy source position collection, and bytecode flushing.

Read also: LLVM WebAssembly backend will soon become Emscripten default backend, V8 announces

Lazy allocation of Feedback Vectors


Feedback vectors are now allocated lazily, only after a function has executed a certain amount of bytecode (currently 1 KB). Since most functions aren’t executed very often, this avoids feedback vector allocation in most cases, while still allocating vectors quickly where needed to avoid performance regressions and allow code to be optimized.
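A rough sketch of the idea in C++ (the names, the budget mechanism, and the 1 KB value are illustrative assumptions, not V8's actual implementation):

```cpp
#include <cstdint>
#include <memory>
#include <vector>

struct FeedbackVector {
  std::vector<uint64_t> slots;  // per-call-site type feedback used by the optimizer
};

struct FunctionInfo {
  int64_t allocation_budget = 1024;          // roughly 1 KB of executed bytecode
  std::unique_ptr<FeedbackVector> feedback;  // absent until the function warms up
};

// Called by the (hypothetical) interpreter after executing `bytes` of bytecode.
void AccountExecutedBytecode(FunctionInfo& fn, int64_t bytes, std::size_t slot_count) {
  if (fn.feedback) return;  // already allocated
  fn.allocation_budget -= bytes;
  if (fn.allocation_budget <= 0) {
    // The function has run enough bytecode: allocate feedback so it can collect
    // type information and eventually be optimized.
    fn.feedback = std::make_unique<FeedbackVector>();
    fn.feedback->slots.resize(slot_count);
  }
}
```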

One hitch is that feedback vectors form a tree, with the feedback vectors of inner functions held as entries in their outer function’s feedback vector; with lazy allocation there is no guarantee that an outer function’s vector exists by the time an inner function needs its own. To address this, the team created a new ClosureFeedbackCellArray to maintain this tree, then swap out a function’s ClosureFeedbackCellArray with a full FeedbackVector when it becomes hot.
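The swap-on-warm-up idea could be sketched as follows; the type names mirror the ones mentioned above, but the layout and fields are invented for illustration and are not V8's internal representation:

```cpp
#include <cstdint>
#include <memory>
#include <variant>
#include <vector>

struct FeedbackCell {};            // shared cell linking all closures of one function
struct ClosureFeedbackCellArray {  // lightweight: only the cells for inner functions
  std::vector<std::shared_ptr<FeedbackCell>> cells;
};
struct FeedbackVector {            // full vector: the same cells plus feedback slots
  std::vector<std::shared_ptr<FeedbackCell>> cells;
  std::vector<uint64_t> feedback_slots;
};

struct FunctionFeedback {
  // Starts out holding only the cheap cell array; upgraded once the function is hot.
  std::variant<ClosureFeedbackCellArray, FeedbackVector> data;
};

void OnFunctionBecameHot(FunctionFeedback& f, std::size_t slot_count) {
  if (std::holds_alternative<FeedbackVector>(f.data)) return;  // already upgraded
  auto& light = std::get<ClosureFeedbackCellArray>(f.data);
  FeedbackVector full;
  full.cells = std::move(light.cells);  // preserve the existing tree of cells
  full.feedback_slots.resize(slot_count);
  f.data = std::move(full);
}
```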

The team says that they “have enabled lazy feedback allocation in all builds of V8, including Lite mode where the slight regression in memory compared to their original no-feedback allocation approach is more than compensated by the improvement in real-world performance.”

Compiling bytecode without collecting source positions


Source position tables are generated when compiling bytecode from JavaScript, but this information is only needed when symbolizing exceptions or performing developer tasks such as debugging. To avoid this wasted work, bytecode is now compiled without collecting source positions; the source positions are only collected when a stack trace is actually generated.

For this to work, eager and lazy compilation of a function must produce exactly the same bytecode; the team has fixed a number of bytecode mismatches and added checks and a stress mode to ensure the two always produce consistent output.
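The lazy collection can be pictured as below; this is a minimal sketch assuming a hypothetical `Compile` entry point and position-table layout, not V8's real compiler API:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct SourcePositionTable {
  std::vector<std::pair<std::size_t, int>> bytecode_offset_to_line;
};

struct CompiledFunction {
  std::string source;                            // kept around so we can recompile
  std::vector<uint8_t> bytecode;
  std::optional<SourcePositionTable> positions;  // skipped on the fast path
};

// Stand-in for the real compiler; the important point is that recompiling the
// same source must reproduce the same bytecode byte-for-byte.
CompiledFunction Compile(const std::string& source, bool collect_positions) {
  CompiledFunction fn;
  fn.source = source;
  fn.bytecode = {0x00};  // placeholder bytecode
  if (collect_positions) {
    fn.positions = SourcePositionTable{{{0, 1}}};  // offset 0 -> line 1
  }
  return fn;
}

// Called only when symbolizing an exception or debugging.
const SourcePositionTable& EnsureSourcePositions(CompiledFunction& fn) {
  if (!fn.positions) {
    // Recompile with position collection enabled; offsets are only meaningful
    // because the regenerated bytecode matches the original exactly.
    CompiledFunction recompiled = Compile(fn.source, /*collect_positions=*/true);
    fn.positions = std::move(recompiled.positions);
  }
  return *fn.positions;
}
```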

Flushing compiled bytecode from functions not executed recently


Bytecode compiled from JavaScript source takes up a significant chunk of V8 heap space. Compiled bytecode is therefore now flushed from functions during garbage collection if they haven’t been executed recently, and the feedback vectors associated with flushed functions are flushed as well. To keep track of the age of a function’s bytecode, the age is incremented after every major garbage collection and reset to zero whenever the function is executed.
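The aging scheme could look roughly like this; the threshold, names, and GC hook are assumptions for illustration, not V8's actual heuristics:

```cpp
#include <cstdint>
#include <memory>
#include <vector>

struct Bytecode { std::vector<uint8_t> code; };
struct FeedbackVector { /* ... */ };

struct FunctionData {
  std::shared_ptr<Bytecode> bytecode;        // may be dropped and recompiled later
  std::shared_ptr<FeedbackVector> feedback;  // flushed together with the bytecode
  int age = 0;                               // bumped by the GC, reset on execution
};

constexpr int kFlushThreshold = 2;  // illustrative value, not V8's

void OnFunctionExecuted(FunctionData& fn) {
  fn.age = 0;  // recently used: keep its bytecode alive
}

void OnMajorGarbageCollection(std::vector<FunctionData>& functions) {
  for (auto& fn : functions) {
    if (!fn.bytecode) continue;        // already flushed
    if (++fn.age >= kFlushThreshold) {
      fn.bytecode.reset();             // flush the compiled bytecode...
      fn.feedback.reset();             // ...and its associated feedback vector
    }
  }
}
```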

Additional memory optimizations


The size of FunctionTemplateInfo objects has been reduced: the object is split so that rarely used fields are stored in a side table that is only allocated on demand, if required.
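The general pattern of splitting rare fields into an on-demand side table can be sketched like this (field names are hypothetical, not FunctionTemplateInfo's real layout):

```cpp
#include <memory>
#include <string>

// Rarely used fields live here; the side table is allocated only when first needed.
struct RareTemplateData {
  std::string class_name;
  bool needs_access_check = false;
  // ... other seldom-used fields ...
};

struct TemplateInfo {
  void* callback = nullptr;                // common fields stay inline in the object
  std::unique_ptr<RareTemplateData> rare;  // usually null, so most objects stay small

  RareTemplateData& EnsureRareData() {
    if (!rare) rare = std::make_unique<RareTemplateData>();
    return *rare;
  }
};
```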

The way TurboFan optimized code is deoptimized has also changed: deopt points in optimized code now load the deopt id directly before calling into the runtime.

Read also: V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more.


Result comparison for V8 Lite and V8


[Image: memory usage comparison between V8 Lite mode and regular V8]

Source: V8 blog


People on Hacker News appreciated the work done by the team behind V8.

A comment reads, “Great engineering stuff. I am consistently amazed by the work of V8 team. I hope V8 v7.8 makes it to Node v12 before its LTS release in coming October.”

Another says, “At the beginning of the article, they are talking about building a "v8 light" for embedded application purposes, which was pretty exciting to me, then they diverged and focused on memory optimization that's useful for all v8. This is great work, no doubt, but as the most popular and well-tested JavaScript engine, I'd love to see a focus on ease of building and embedding.”

https://twitter.com/vpodk/status/1172320685634420737

More details are available on the V8 blog.

Other interesting news in Tech


Google releases Flutter 1.9 at GDD (Google Developer Days) conference

Intel’s DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack

Apple’s September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, new iPad, and more.