The evolution of web applications
Before looking for reasons to use micro frontends, we should look at why micro frontends came to exist at all. How did the web evolve from a small proof of concept (POC) running on a NeXT computer in an office at the Conseil Européen pour la Recherche Nucléaire (CERN), the European Organization for Nuclear Research, to become a central piece of the information age?
Programming the web
My first contact with web development was in the mid-1990s. Back then, the web was mostly composed of static web pages. While some people were experienced enough to build dynamic websites using the Common Gateway Interface (CGI) technology, as shown in Figure 1.1, most webmasters either lacked the knowledge or did not want to spend money on server-side rendering (SSR). The term webmaster was commonly used for somebody in charge of a website. Instead of doing SSR, everything was crafted by hand upfront.
Figure 1.1 – The web changes from static to dynamic pages
To avoid duplication and potential inconsistencies, a new technology was used – frames, declared in <frameset> tags. Frames allowed websites to be displayed within websites. Effectively, this enabled the reuse of things such as a menu, header, or footer on different pages. While frames have been removed from the HyperText Markup Language 5 (HTML5) specification, they are still available in all browsers. Their successor still lives on today – the inline frame (iframe) tag, <iframe>.
One of the downsides of frames was that link handling became increasingly difficult. To get the best performance, the right target frame had to be selected explicitly on every link. Another difficulty was correct Uniform Resource Locator (URL) handling: since navigation only happened inside a given frame, the address displayed in the browser did not change.
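As a rough sketch of how this looked (the file names, frame names, and links are made up for illustration), a frameset page could split the window into a reusable menu and a content area, with each menu link explicitly naming the frame it targets:

```html
<!-- layout.html (hypothetical): a frameset reusing one menu next to a content area -->
<frameset cols="200,*">
  <frame name="menu" src="menu.html">
  <frame name="content" src="home.html">
</frameset>

<!-- menu.html (hypothetical): every link names the target frame explicitly;
     without the target attribute, the linked page would replace the menu itself -->
<a href="products.html" target="content">Products</a>
<a href="about.html" target="content">About</a>
```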
Consequently, people started to look for alternatives. One way was to use SSR without all the complexity. By introducing special HTML comments as placeholders containing server instructions, a generic layout could be defined that the server resolved dynamically. The name for this technique was Server Side Includes, more commonly known as SSI.
The Apache web server was among the first to introduce SSI support through the mod_include module. Other popular web servers, such as Microsoft’s Internet Information Services (IIS), followed quickly afterward. The progressive nature and Turing completeness of the allowed instructions made SSI an instant success.
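For illustration, a page served with SSI enabled might look like the following sketch – the include path is hypothetical, while the #include and #echo directives are the classic SSI instructions hidden in HTML comments:

```html
<!-- page.shtml (hypothetical): the server resolves these comment directives
     before the page is sent, so the shared header lives in a single file -->
<!--#include virtual="/includes/header.html" -->
<h1>Welcome</h1>
<p>Last modified: <!--#echo var="LAST_MODIFIED" --></p>
```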
It was around that time that websites became more and more dynamic. To tame CGI, many solutions were implemented. However, only when a new programming language called PHP: Hypertext Preprocessor (PHP) was introduced did SSR become mainstream. The cost of running PHP was so low that SSI was almost forgotten.
The social web
With the rise of Web 2.0 and the capabilities of JavaScript – especially for dynamic data loading at runtime, known as asynchronous JavaScript and XML (AJAX) – the web community faced another challenge. SSR was no longer a silver bullet. Instead, the dynamic part of websites had to live on the client – at least partially. JavaScript became much more important, and the complexity of dividing an application into multiple areas (build, server, and client), along with their testing and development, skyrocketed.
As a result, frameworks for client-side rendering (CSR) emerged. First-generation frameworks such as Backbone.js, Knockout.js, and AngularJS all came with architecture choices similar to the popular frameworks for SSR. They were not intended to place the full application on the client, nor were they intended to grow indefinitely.
In practice, this looked quite different. Application sizes increased, and a lot of code was served to the client that the current user would never use. Images and other media were served without any optimization, and the web became slower and slower.
Of course, we’ve seen the rise of tools to mitigate this. While JavaScript minification is nearly as old as JavaScript itself, other tools for image optimization and Cascading Style Sheets (CSS) minification emerged to improve the situation, too.
The missing link was to combine these tools into a single pipeline. Thanks to Node.js, the web community received a truly magnificent gem. Now, we had a runtime that could not only bring JavaScript to the server side but also allow the use of cross-platform tooling. New task runners such as Grunt and Gulp made it easier to develop frontend code efficiently.
The Web 2.0 movement also popularized the reuse of web services directly from the UI running in the browser, as shown in the following figure.
Figure 1.2 – With the Web 2.0 movement, services and AJAX enter common architectures
Having dedicated backend services makes sense for multiple use cases. First, we can leverage AJAX in the frontend to do partial reloads, as outlined in Figure 1.2. Another use case is to allow other systems to access the information; this way, useful data can be monetized as well. Finally, the separation between representation (usually using HTML) and structure (using formats such as Extensible Markup Language (XML) or JavaScript Object Notation (JSON)) allows reuse across multiple applications.
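As a sketch of such a partial reload – shown here with the modern fetch API rather than the original XMLHttpRequest object, and with a made-up /api/messages endpoint and markup – a page could fetch JSON from a backend service and update only a single fragment of the document:

```html
<!-- A sketch of an AJAX-style partial reload; the endpoint and markup are hypothetical -->
<ul id="messages"></ul>
<button id="refresh">Refresh</button>
<script>
  document.getElementById('refresh').addEventListener('click', async () => {
    // Fetch structured data (JSON) from a backend service instead of a full page
    const response = await fetch('/api/messages');
    const messages = await response.json();
    // Update only the affected fragment of the page
    document.getElementById('messages').innerHTML = messages
      .map((m) => `<li>${m.text}</li>`)
      .join('');
  });
</script>
```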
The separation of the frontend and the backend
The enhanced rendering capabilities on the frontend also accelerated the separation between the backend and the frontend. Suddenly, there was no giant monolith that handled user activities, page generation, and database queries in one code base. Instead, the data handling was put into an Application Programming Interface (API) layer that could be used for page generation in SSR scenarios or directly from code running in the user’s browser. In particular, applications that handle all rendering in the client were labeled single-page applications (SPAs).
Such a separation did not bring only benefits – the design of these APIs became an art of its own. Providing suitable security settings and establishing a great performance baseline became more difficult, too. Ultimately, this also became a challenge from a deployment perspective.
From a user experience perspective, the capability of doing partial page updates is not without challenges either. Here, we rely on indicators such as loading spinners, skeleton styles, or other methods to convey the right signals to the user. Another thing to take care of is correct error handling. Should we retry? Inform the user? Use a fallback? Multiple possibilities exist; however, besides making a good decision here, we also need to spend some time implementing and testing it.
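A minimal sketch of these concerns might look as follows – the /api/content endpoint and the element IDs are hypothetical, and the chosen strategy (one retry, then a fallback message) is just one of the possibilities mentioned above:

```html
<!-- A sketch of a partial update with a loading indicator, one retry, and a fallback -->
<div id="status"></div>
<div id="content"></div>
<script>
  async function loadContent(retriesLeft = 1) {
    const status = document.getElementById('status');
    status.textContent = 'Loading…'; // loading indicator while the request is in flight
    try {
      const response = await fetch('/api/content');
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      document.getElementById('content').textContent = await response.text();
      status.textContent = '';
    } catch (err) {
      if (retriesLeft > 0) {
        return loadContent(retriesLeft - 1); // retry once before giving up
      }
      status.textContent = 'Could not load the latest data.'; // inform the user as a fallback
    }
  }
  loadContent();
</script>
```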
Nevertheless, for many applications, the split into a dedicated frontend and a dedicated backend is definitely a suitable choice. One reason is that dedicated teams can work on each side of the story, thus making the development of larger web applications more efficient.
The gain in development efficiency, as we will see, is a driving force behind the move toward micro frontends. Let’s see how the trend toward modularization got started in web development.