Emerging web standards
A couple of problems make developing micro frontends that run on the client challenging. Among other considerations, we need to isolate code as much as possible to avoid conflicts. However, in contrast to backend microservices, we may also want to share resources such as a runtime framework; otherwise, we risk exhausting the available resources on the user's machine.
While this sounds exotic at first, performance is one of the bigger challenges for micro frontends. On the backend, we can give every service a suitable amount of hardware. On the frontend, we need to live with the browser running code on a machine that the user chooses. This could be a larger desktop machine, but it could also be a Raspberry Pi or a smartphone.
One area where recent web standards help quite a bit is with style isolation. Without style isolation, the Document Object Model (DOM) would treat every style as global, resulting in accidental style leaks or overwrites due to conflicts.
Isolation via Web Components
Styles may be isolated using the technique of a shadow DOM. A shadow DOM enables us to write components that are projected into the parent DOM without being part of the parent DOM. Instead, only a carrier element (referred to as the host) is directly mounted. Style rules from the parent do not leak into the shadow DOM; in fact, the parent's style definitions are not applied there at all.
Consequently, a shadow DOM requires the parent's style sheets to be loaded again if we want to use those styles. This isolation thus only makes sense if we want to be completely autonomous regarding styling in the shadow DOM.
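As a minimal sketch, a shadow DOM can be attached to a host element like this; the element and the style rule are only illustrative:

const host = document.createElement('div');
document.body.appendChild(host);
// attach an open shadow root; the parent's style sheets do not apply inside
const shadow = host.attachShadow({ mode: 'open' });
shadow.innerHTML = `
  <style>p { color: red; }</style>
  <p>Styled only by the shadow root's own rules</p>
`;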
The same isolation does not work for the JavaScript parts. Through global variables, we are still able to access the exports of the parent's scripts. Here, another drawback of the Web Components standard shines through. Usually, the way to transport shadow DOM definitions is by using custom elements. However, custom elements require a unique name that cannot be redefined.
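A quick sketch shows the naming constraint; the element name my-tile is purely hypothetical:

// works once for the whole document
class MyTile extends HTMLElement {}
customElements.define('my-tile', MyTile);

// defining the same name again throws a DOMException:
// customElements.define('my-tile', class extends HTMLElement {});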
A way around many of the JavaScript drawbacks is to fall back to the mentioned <iframe> element. An iframe comes with style and script isolation, but how can we communicate between the parent and the content living in the frame?
Frame communication
Originally introduced in HTML5, the window.postMessage function has proven to be quite useful when it comes to micro frontends. Reusable frontend pieces adopted it quite early. Today, we can find frame communication in most chatbot or cookie consent services, for instance.
While sending a message is one part of the story, the other part is to receive messages. For this, a message event was introduced on the window object. The difference from many other events is that this event also gives us an origin property, which identifies the origin of the sender.
When sending a message, we also need to specify the target origin of the frame that should receive it. Let's see an example of such a process.
Here is the HTML code of the parent document:
<!doctype html>
<iframe src="iframe.html" id="iframe"></iframe>
<script>
  // the id attribute makes the frame accessible as a global variable
  setTimeout(() => {
    iframe.contentWindow.postMessage('Hello!', '*');
  }, 1000);
</script>
After a second, we'll send a message containing the string Hello! to the document loaded from iframe.html. For the target origin, we use the * wildcard, which allows any origin to receive the message.
The iframe can be defined like so:
<!doctype html>
<script>
  window.addEventListener('message', event => {
    // show the received message together with the sender's origin
    const text = document.body.appendChild(document.createElement('div'));
    text.textContent = `Received "${event.data}" from ${event.origin}`;
  });
</script>
This will display the posted message in the frame.
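In practice, the receiver should verify the sender before trusting a message. The following sketch assumes the parent is served from the hypothetical origin https://parent.example.com:

window.addEventListener('message', event => {
  if (event.origin !== 'https://parent.example.com') {
    return; // ignore messages from unexpected origins
  }
  // only now is it safe to act on event.data
});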
Important note
Access to and manipulation of frames is only forbidden across different origins. The definition of an origin is one of the security fundamentals that the web is built upon. An origin consists of the protocol (for example, HTTP Secure (HTTPS)), the domain, and the port. Different subdomains also correspond to different origins. Many HTTP requests can only be performed if the cross-origin rules are positively evaluated by the browser. Technically, this is referred to as cross-origin resource sharing (CORS). See the following MDN Web Docs page for more on this: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS.
While these communication patterns are quite fruitful, they do not allow the sharing of any resources. Remember that only serializable data can be transported via the postMessage function. While a message may be a complex object, it can never be a true object with functions or live references to the sender's context inside.
An alternative is to give up on direct isolation and instead work on indirect isolation. A wonderful possibility is to leverage web workers to do this.
Web workers and proxies
A web worker represents an easy way to break out of the single-threaded model of the JavaScript engine, without all the usual hassle of multithreading.
As with iframes, the only method of communication between the main thread and the worker thread is to post messages. A key difference, however, is that the worker runs in a global context that is different from the current window object. While some APIs still exist there, many parts are either different or not available at all.
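A minimal sketch of this message-based communication, assuming the worker code lives in a file called worker.js:

// main thread
const worker = new Worker('worker.js');
worker.addEventListener('message', event => {
  console.log('Worker replied:', event.data);
});
worker.postMessage('Hello worker!');

// worker.js; self refers to the worker's global scope, not window
self.addEventListener('message', event => {
  self.postMessage(`Received "${event.data}"`);
});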
One example is the way additional scripts are loaded. While standard code may just append another <script> element, a web worker needs to use the importScripts function. This function is synchronous and accepts not just one URL, but multiple URLs, which are loaded and evaluated in order.
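Inside a worker, loading dependencies may therefore look like this; the two URLs are placeholders:

// loads and evaluates both scripts synchronously, in order
importScripts('/vendor/framework.js', '/app/widget.js');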
So far so good, but how can we use the web worker if it comes with a different global context? Any frontend-related code that tries to do DOM manipulation will fail here. This is where proxies come to the rescue.
A proxy can be used to capture desired object accesses and function calls. This allows us to forward certain behavior from the web worker to the host. The only drawback is that the postMessage interface is asynchronous in nature, which can be a challenge when synchronous APIs should be mimicked.
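One common way to tame the asynchronous channel is to correlate requests and responses via an ID, wrapping each call into a Promise. A sketch, reusing the worker instance from before and assuming the worker answers with { id, result } messages:

let nextId = 0;
const pending = new Map();

function remoteCall(name, ...args) {
  return new Promise(resolve => {
    const id = nextId++;
    pending.set(id, resolve);
    worker.postMessage({ id, name, args });
  });
}

worker.addEventListener('message', ({ data }) => {
  const resolve = pending.get(data.id);
  if (resolve) {
    pending.delete(data.id);
    resolve(data.result);
  }
});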
One of the simplest proxies is actually the "I can handle everything" proxy. This small amount of code, shown in the following snippet, provides a solid basis for 90% of all stubs:
const generalProxy = new Proxy(() => generalProxy, {
  get(target, name) {
    if (name === Symbol.toPrimitive) {
      // string conversions yield "[object Object]" instead of failing
      return () => ({}).toString();
    } else {
      // any other property access yields the proxy again
      return generalProxy();
    }
  },
});
The trick here is that the proxy can be used like generalProxy.foo.bar().qxz without any problems: there are no undefined accesses and no invalid function calls.
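For instance, all of the following expressions evaluate without throwing:

generalProxy.foo.bar().qxz;        // yields the proxy again
`${generalProxy.anything.at.all}`; // "[object Object]" via Symbol.toPrimitive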
Using a proxy, we can mock necessary DOM APIs, which are then forwarded to the parent document. Of course, we would only define and forward API calls that can be safely used. Ultimately, the trick is to filter against a safe list of acceptable APIs.
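A sketch of such forwarding on both sides of the channel; the safe list and the message shape are assumptions for illustration:

// worker side: describe the DOM call instead of performing it
const safeApis = ['setTitle', 'appendText'];

function forwardDomCall(name, ...args) {
  if (safeApis.includes(name)) {
    self.postMessage({ type: 'dom-call', name, args });
  }
}

// parent side (keeps its own copy of safeApis): execute only listed calls
worker.addEventListener('message', ({ data }) => {
  if (data.type !== 'dom-call' || !safeApis.includes(data.name)) {
    return;
  }
  if (data.name === 'setTitle') document.title = String(data.args[0]);
  if (data.name === 'appendText') document.body.append(String(data.args[0]));
});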
To transport object references such as callbacks from the web worker to the parent, we may fall back to wrapping them. A custom marker can be used to allow the parent to call back into the web worker and run the callback at a later point in time.
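A sketch of such callback wrapping on the worker side; the __callback marker and the invoke-callback message are made up for illustration:

let nextCallbackId = 0;
const callbacks = new Map();

// replace a function with a serializable marker before posting
function wrapCallback(fn) {
  const id = nextCallbackId++;
  callbacks.set(id, fn);
  return { __callback: id };
}

// the parent posts { type: 'invoke-callback', id, args } when it is time
self.addEventListener('message', ({ data }) => {
  if (data.type === 'invoke-callback') {
    const fn = callbacks.get(data.id);
    if (fn) fn(...data.args);
  }
});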
We will go into the details of such a solution later—for now, it's enough to know that the web offers a fair share of security and performance measures that can be utilized to allow strong modularization.
For the final section, let's close out the original question with a look at the business reasons for choosing micro frontends.