Mastering The Faster Web with PHP, MySQL, and JavaScript
Develop state-of-the-art web applications using the latest web technologies

Product type: Paperback
Published in: Jun 2018
Publisher: Packt
ISBN-13: 9781788392211
Length: 278 pages
Edition: 1st Edition
Author: Andrew Caya

Table of Contents

Preface
1. Faster Web – Getting Started
2. Continuous Profiling and Monitoring
3. Harnessing the Power of PHP 7 Data Structures and Functions
4. Envisioning the Future with Asynchronous PHP
5. Measuring and Optimizing Database Performance
6. Querying a Modern SQL Database Efficiently
7. JavaScript and Danger-Driven Development
8. Functional JavaScript
9. Boosting a Web Server's Performance
10. Going Beyond Performance
11. Other Books You May Enjoy

What is the Faster Web?

In 2009, Google announced its intention to make the web faster[1] and launched a corresponding initiative inviting the web community to think of ways to speed up the internet. The announcement stated that "people prefer faster, more responsive apps" and that this was the main reason behind Google's initiative. It also listed many challenges that Google identified as the initiative's first order of business. The main ones were:

  • Updating aging protocols
  • Fixing JavaScript's lack of performance
  • Finding new measurement, diagnostic, and optimization tools
  • Providing more access to broadband across the world

The Faster Web and performance

The Faster Web can be defined as a series of qualities to be developed in all spheres of web technology in order to speed up any transaction between a client and a server.

But how important is speed? In 2010, Google discovered that any slowdown had a direct impact on a company's website traffic and ad revenue. In fact, Google established a statistical correlation between traffic and ad revenue on one side, and the number of results delivered and the time needed to obtain them on the other: serving more results per page in 0.9 seconds, versus fewer results per page in only 0.4 seconds, produced a decrease of the order of 20% in traffic and ad revenue. Yahoo also confirmed that about 5% to 9% of its users would abandon a web page that took more than 400 milliseconds to load. Microsoft Bing saw a 4% decrease in revenue when search results were delivered with an additional delay of only 2 seconds. Clearly, speed not only ensures user engagement, but also has a major effect on a company's revenue and overall performance.

At first glance, it would seem that the Faster Web is exactly the same thing as web performance. But is this really the case?

Performance is defined as the manner in which a mechanism performs. According to André B. Bondi[2], "the performance of a computer-based system is often characterized by its ability to perform defined sets of activities at fast rates and with quick response time." And, as J. D. Meier et al. stated in their book on performance testing[3], "performance testing is a type of testing intended to determine the responsiveness, throughput, reliability, and/or scalability of a system under a given workload."

Thus, it is very clear that web performance is a core concept of the Faster Web. But are these characteristics the only ones we expect? If an application promises a thorough analysis of a hard drive and completes its task in less than five seconds, we will most certainly think that something went wrong. According to Denys Mishunov[4], performance is also about perception. As Stéphanie Walter[5] stated in one of her presentations on perceived performance, "time measurement depends on the moment of measurement and can vary depending on the complexity of the task to be performed, the psychological state of the user (stress), and the user's expectations as he has defined them according to what he considers to be the software of reference when executing a certain task." Therefore, an application that does what it has to do well is also one that meets the user's expectations of how that program ought to do things.

Even though the Faster Web initiative first concentrated its efforts on making the different web technologies go faster, the different studies led researchers back to the notion of subjective, or perceived, time versus objective, or clocked, time in order to fully measure how website performance influenced the user's habits and general behavior when it came to browsing the web.

Therefore, in this book, we will be covering the Faster Web as it applies to all the major web technologies—that is to say, those that run on 70 to 80% of web servers around the world and on all the major browsers, namely Apache, PHP, MySQL, and JavaScript. Moreover, we will not only cover these major web technologies from a developer's standpoint, but we will also discuss the Faster Web from the system administrator's viewpoint by covering HTTP/2 and reverse proxy caching in the last chapters. And, although the greater part of this book addresses the question of web performance only, the last chapter covers the other aspect of the Faster Web, which concerns satisfying the user's expectations through good user interface (UI) design.

Measuring the Faster Web

Now that we better understand how web performance is a very important part of the Faster Web as a whole, and that the Faster Web is concerned not only with achieving efficiency and speed, but also with fully satisfying the user's expectations, we can ask ourselves how to objectively measure the Faster Web and which tools are best suited to do so.

Before Measuring

When discussing speed measurement, it is always important to remember that speed ultimately depends on hardware, and that poorly performing software is not necessarily the software's fault if it is running on a poorly performing hardware infrastructure.

Of course, input and output (I/O) always accounts for the better part of a hardware infrastructure's aggregate latency. The network and the filesystem are the two main potential bottlenecks when it comes to speed: accessing data on disk, for example, can be up to a hundred times slower than accessing random-access memory (RAM), and very busy networks can make web services practically unreachable.

RAM limits also force us to make certain tradeoffs when it comes to speed, scalability and accuracy. It is always possible to get top-speed performance by caching the greater part of an application's data and loading everything into memory. But will this be the optimal solution in all circumstances? Will it still maintain speed in the context of a heavy workload? Will the data be refreshed adequately in the context of highly volatile data? The obvious answer to these questions is probably not. Thus, optimal speed is the balance between pure speed, reasonable memory consumption and acceptable data staleness.
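To illustrate the speed-versus-staleness tradeoff, here is a minimal sketch of a time-to-live (TTL) cache. The helper names are hypothetical, and the clock is injected so that the staleness behavior can be observed without real waiting:

```javascript
// A minimal time-to-live (TTL) cache: values are served from memory
// until they expire, trading data freshness for speed.
function createTtlCache(ttlMs, now = () => Date.now()) {
  const store = new Map();
  return {
    get(key, compute) {
      const entry = store.get(key);
      if (entry && now() - entry.time < ttlMs) {
        return entry.value; // fresh enough: skip the expensive computation
      }
      const value = compute(); // stale or missing: recompute and cache
      store.set(key, { value, time: now() });
      return value;
    },
  };
}

// Simulated clock so the example runs instantly.
let fakeTime = 0;
const cache = createTtlCache(1000, () => fakeTime);

let dbCalls = 0;
const loadFromDb = () => { dbCalls++; return 'row data'; };

cache.get('users:1', loadFromDb); // miss: hits the "database"
cache.get('users:1', loadFromDb); // hit: served from memory
fakeTime = 1500;                  // TTL elapsed: data is now stale
cache.get('users:1', loadFromDb); // miss again: refreshed

console.log(dbCalls); // 2
```

A longer TTL means fewer expensive recomputations but staler data; a shorter TTL means fresher data at a higher cost. That balance is exactly the tradeoff described above.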

Measuring performance in order to determine the optimal speed of a computer program is the art of finding the perfect balance in the context of particular business rules and available resources by implementing the appropriate tradeoffs and fine-tuning them afterwards.

The first step of assessing speed performance will therefore be to analyze available resources and determine the upper and lower limits of our hardware's speed performance. And since we are working on web performance, this first step will be accomplished by benchmarking the web server itself.

The second step will consist of profiling the web application in order to analyze the performance of each part of its inner workings and determine which parts of the application's code lack perfect balance and should be optimized.

Benchmark testing and profiling

Web server benchmarking is the process of evaluating a web server's performance under a certain workload. Software profiling is the process of analyzing a computer program's use of memory and execution time in order to optimize the program's inner structure.

In this part of the chapter, we will set up and test a few of the tools that will allow us to benchmark our web server and profile the source code that we will be analyzing in the next chapters of this book.

Practical prerequisites

In order to run the source code included in this book, we recommend that you start by installing Docker on your computer (https://docs.docker.com/engine/installation/). Docker is a software container platform that allows you to run software in an isolated, sophisticated chroot-like environment while still accessing your computer's devices. Unlike virtual machines, containers do not come bundled with full operating systems, but only with the binaries required to run some software. You can install Docker on Windows, Mac, or Linux. It should be noted, however, that some features, like full-featured networking, are still not available when running Docker on macOS (https://docs.docker.com/docker-for-mac/networking/#known-limitations-use-cases-and-workarounds).

The main Docker image that we will be using throughout this book is Linux for PHP 8.1 (https://linuxforphp.net/) with a non-thread-safe version of PHP 7.1.16 and MariaDB (MySQL) 10.2.8 (asclinux/linuxforphp-8.1:7.1.16-nts). Once Docker is installed on your computer, please run the following commands in a bash-like Terminal in order to get a copy of the book's code examples and start the appropriate Docker container:

# git clone https://github.com/andrewscaya/fasterweb 
# cd fasterweb  
# docker run --rm -it \ 
 -v ${PWD}/:/srv/fasterweb \ 
 -p 8181:80 \ 
 asclinux/linuxforphp-8.1:7.1.16-nts \ 
 /bin/bash 

After running these commands, you should get the following command prompt:

The Linux for PHP container’s command line interface (CLI)
Note to Windows users: please make sure to replace the '${PWD}' portion of the shared volumes option in the previous Docker command with the full path to your working directory (ex. '/c/Users/fasterweb'), because you will not be able to start the container otherwise. Also, you should make sure that volume sharing is enabled in your Docker settings. Moreover, if you are running Docker on Windows 7 or 8, you will only be able to access the container at the address http://192.168.99.100:8181 and not at 'localhost:8181'.

All the code examples given in this book can be found in the code repository, in a folder named after the chapter's number. You are therefore expected to change your working directory at the beginning of each chapter before running that chapter's code examples. For this chapter, enter the following commands on the container's CLI:

# mv /srv/www /srv/www.OLD
# ln -s /srv/fasterweb/chapter_1 /srv/www

And, for the next chapter, you are expected to enter these commands:

# rm /srv/www
# ln -s /srv/fasterweb/chapter_2 /srv/www

And, so on for the following chapters.

Also, if you prefer using multithreading technologies while optimizing your code, you can do so by running the thread-safe version of Linux for PHP (asclinux/linuxforphp-8.1:7.0.29-zts).

If you prefer running the container in detached mode (-d switch), please do so. This will allow you to docker exec many command shells against the same container while keeping it up and running at all times independently of whether you have a running Terminal or not.

Moreover, you should docker commit any changes you made to the container and create new images of it so that you can docker run it at a later time. If you are not familiar with the Docker command line and its run command, please find the documentation at the following address: https://docs.docker.com/engine/reference/run/.

Finally, many excellent books and videos on Docker have been published by Packt Publishing and I highly recommend that you read them in order to master this fine tool.

Now, enter the following commands in order to start all the services that will be needed throughout this book and to create a test script that will allow you to make sure everything is working as expected:

# cd /srv/www
# /etc/init.d/mysql start
# /etc/init.d/php-fpm start
# /etc/init.d/httpd start
# touch /srv/www/index.php
# echo -e "<?php phpinfo();" > /srv/www/index.php

Once you are done running these commands, you should point your favorite browser to http://localhost:8181/ and see the following result:

The phpinfo page

If you do not see this page, please try to troubleshoot your Docker installation.

Moreover, please note that, if you do not docker commit your changes and prefer to use an original Linux for PHP base image whenever you wish to start working with a code example contained in this book, the previous commands will have to be repeated each and every time.

We are now ready to benchmark our server.

Understanding Apache Bench (AB)

Many tools are available to benchmark a web server. The better-known ones are Apache Bench (AB), Siege, JMeter, and Tsung. Although JMeter (https://jmeter.apache.org/) and Tsung (http://tsung.erlang-projects.org/) are very interesting load-testing tools and should be explored when doing more advanced testing in the context of system administration, we will focus on AB and Siege for our development purposes.

AB is included with the Apache web server's development tools and is installed by default in Linux for PHP images that contain PHP binaries. Otherwise, AB can be found in a separate Apache development tools installation package on most Linux distributions. It is important to note that Apache Bench does not support multithreading, which can create problems when running high-concurrency tests.

Also, there are some common pitfalls to avoid when benchmarking. The main ones are:

  • Avoid running other resource-hungry applications simultaneously on the computer that is being benchmarked
  • Avoid benchmarking remote servers, as the network, especially in concurrency tests, might become the main cause of measured latency
  • Avoid testing on web pages that are cached through HTTP accelerators or proxies, as the result will be skewed and will not reveal actual server speed performance
  • Do not think that benchmarking and load testing will perfectly represent user interaction with your server, as the results are indicative in nature only
  • Be aware that benchmarking results are specific to the hardware architecture being tested and will vary from one computer to the other

For our tests, we will be using Apache Bench’s -k, -l, -c, and -n switches. Here are the definitions of these switches:

  • -k enables the KeepAlive feature in order to perform multiple requests in one single HTTP session
  • -l disables error reporting when the content lengths vary in size from one response to the other
  • -c enables concurrency in order to perform multiple requests at the same time
  • -n determines the number of requests to perform in the current benchmarking session

For more information on AB's options, please see the corresponding entry in Apache's documentation (https://httpd.apache.org/docs/2.4/programs/ab.html).

Before launching the benchmark tests, open a new Terminal window and docker exec a new bash Terminal to the container. This way, you will be able to see resource consumption through the top utility. To do so, start by getting the name of your container. It will appear in the list that will be returned by this command:

# docker ps 

You will then be able to tap into the container and start watching resource consumption with the following command:

# docker exec -it [name_of_your_container_here] /bin/bash 

And, on the container’s newly obtained command line, please run the top command:

# top 

Now, launch a benchmark test from within the first Terminal window:

# ab -k -l -c 2 -n 2000 localhost/index.html 

You will then get a benchmark test report containing the average number of requests per second that the server was able to serve (Requests per second), the average response time per request (Time per request), and the response time distribution (Percentage of the requests served within a certain time (ms)).

The report should be similar to the following:

The benchmark report shows that Apache is serving about 817 requests per second on average
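These summary figures are simple statistics over the individual response times. Here is a sketch that recomputes them from a made-up sample (the numbers are illustrative only, not actual AB output):

```javascript
// Recompute AB-style summary statistics from a list of per-request
// response times in milliseconds (sample values for illustration).
const timesMs = [2, 2, 3, 2, 4, 2, 3, 2, 2, 8];

const totalMs = timesMs.reduce((a, b) => a + b, 0);
const meanMs = totalMs / timesMs.length;   // "Time per request"
const requestsPerSecond = 1000 / meanMs;   // per concurrent client

// "Percentage of the requests served within a certain time (ms)":
const sorted = [...timesMs].sort((a, b) => a - b);
const percentile = (p) => sorted[Math.ceil((p / 100) * sorted.length) - 1];

console.log(meanMs);            // 3
console.log(percentile(50));    // 2  (half the requests finished within 2 ms)
console.log(percentile(100));   // 8  (the slowest request took 8 ms)
```

Note how a single slow outlier (8 ms) barely moves the mean but dominates the tail of the distribution, which is why AB reports percentiles alongside averages.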

Now, try a new benchmark test by requesting the index.php file:

# ab -k -l -c 2 -n 2000 localhost/index.php 

You will notice that the average number of requests per second has dropped and that the average response time and its distribution are higher. In my case, the average number of requests per second dropped from about 800 to around 300 on my computer, the average response time rose from 2 milliseconds to 6 milliseconds, and the response time distribution went from 100% of requests being served within 8 milliseconds to 100% within 24 milliseconds:

The benchmark report shows that Apache is now serving about 313 requests per second on average

These results allow us to have a general idea of our hardware's performance limits and to determine the different thresholds we would have to deal with when scaling performance of PHP scripts that are generating some dynamic content.

Now, let's dig a little deeper into our web server's performance with Siege, a tool of choice when benchmarking and load testing.

Understanding Siege

Siege is a load testing and benchmarking tool that allows us to further analyze our web server's performance. Let's begin by installing Siege inside our Docker container.

From the container's command line, please download and decompress version 4.0.2 of Siege:

# wget -O siege-4.0.2.tar.gz http://download.joedog.org/siege/siege-4.0.2.tar.gz 
# tar -xzvf siege-4.0.2.tar.gz 

Then, please enter Siege's source code directory to compile and install the software:

# cd siege-4.0.2 
# ./configure 
# make 
# make install 

For these tests with Siege, we will be using the -b, -c, and -r switches. Here are the definitions of these switches:

  • -b enables benchmark mode, which means that there are no delays between iterations
  • -c enables concurrency in order to perform multiple requests at the same time
  • -r determines the number of requests to perform with each concurrent user

Of course, you can get more information on Siege's command-line options by invoking the manual from the container's command line:

# man siege  

Now launch a Siege benchmark test:

# siege -b -c 3000 -r 100 localhost/index.html 

You will then get a benchmark test report like this one:

The Siege benchmark report confirms the results that were obtained from AB

As you can see, the results match those that we got from AB previously. Our test shows a transaction rate of almost 800 transactions per second.

Siege also comes with a handy tool named Bombard that can automate tests and help verify scalability. Bombard allows you to use Siege with an ever-increasing number of concurrent users. It takes a few optional arguments: the name of a file containing the URLs to use when performing the tests, the number of initial concurrent clients, the number of concurrent clients to add each time Siege is called, the number of times Bombard should call Siege, and the time delay, in seconds, between each request.

We can, therefore, try to confirm the results of our previous tests by issuing the following commands inside the container:

# cd /srv/www
# touch urlfile.txt
# for i in {1..4}; do echo "http://localhost/index.html" >> urlfile.txt ; done
# bombardment urlfile.txt 10 100 4 0

Once done, you should obtain a report similar to the following one:

The results show that the longest transaction is much higher when there are 210 or more concurrent users
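The concurrency levels reported here follow directly from Bombard's arguments. A small sketch of the schedule it derives from an initial client count, an increment, and a number of runs (the 10 100 4 values passed above) makes the 210-user threshold easy to see:

```javascript
// Reproduce the concurrency schedule that Bombard derives from its
// arguments: initial clients, clients added per run, number of runs.
function bombardSchedule(initial, step, runs) {
  const levels = [];
  for (let i = 0; i < runs; i++) {
    levels.push(initial + i * step); // each run adds `step` concurrent users
  }
  return levels;
}

// The command `bombardment urlfile.txt 10 100 4 0` ramps up like this:
console.log(bombardSchedule(10, 100, 4)); // [ 10, 110, 210, 310 ]
```

The third and fourth runs are therefore the ones exercising 210 and 310 concurrent users, which is where the longest transaction times degrade in the report above.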

Try again, but by requesting the PHP file:

# echo "http://localhost/index.php" > urlfile.txt 
# for i in {1..3}; do echo "http://localhost/index.php" >> urlfile.txt ;  done 
# bombardment urlfile.txt 10 100 4 0 

This test should provide results similar to these:

The efficiency of serving dynamic content is analogous to that of serving static content, but with a much lower transaction rate

The second Terminal window that is running top is now showing 50% usage of both of the available processors and almost 50% RAM usage on my computer:

The container’s usage of CPU and memory resources when it is submitted to benchmarking tests

We now know that, when there are not many concurrent requests, this hardware can allow for good performance on a small scale, with 800 transactions per second on static files and about 200 transactions per second on pages that have dynamically generated content.

Now that we have a better idea of our web server's baseline speed based solely on our hardware's resources, we can start to truly measure the speed and efficiency of the web server's dynamically generated content through profiling. We will now proceed to install and configure tools that will allow us to profile and optimize PHP code.

Installing and configuring useful tools

We will now install and configure MySQL benchmarking and JavaScript profiling tools. But first, let's start by installing and configuring xdebug, a PHP debugger and profiler.

Profiling PHP – xdebug Installation and Configuration

The first tool we will install and configure is xdebug, a debugging and profiling tool for PHP. This extension can be downloaded, decompressed, configured, compiled and installed in a very easy manner by using the PECL utility included with PHP (https://pecl.php.net/). To do this, inside the container's Terminal window, please enter the following commands:

# pecl install xdebug 
# echo -e "zend_extension=$( php -i | grep extensions | awk '{print $3}' )/xdebug.so\n" >> /etc/php.ini
# echo -e "xdebug.remote_enable = 1\n" >> /etc/php.ini 
# echo -e "xdebug.remote_enable_trigger = 1\n" >> /etc/php.ini 
# echo -e "xdebug.remote_connect_back = 1\n" >> /etc/php.ini 
# echo -e "xdebug.idekey = PHPSTORM\n" >> /etc/php.ini 
# echo -e "xdebug.profiler_enable = 1\n" >> /etc/php.ini 
# echo -e "xdebug.profiler_enable_trigger = 1\n" >> /etc/php.ini 
# /etc/init.d/php-fpm restart
# tail -50 /etc/php.ini

The last lines of your container's /etc/php.ini file should now look like this:

Newly added lines to the php.ini file

Once done, please reload the http://localhost:8181 page in your favorite browser. It should now read as follows:

Confirmation that the xdebug extension has been loaded

If you scroll towards the end of the page, you should now see an xdebug section:

The xdebug section of the phpinfo page

You should also notice that the profiler options are now enabled under the xdebug entry:

Confirmation that xdebug code profiling is enabled

We will now configure PHPStorm to be the debugging server. This will allow us to use our IDE as the control center for our debugging sessions.

Before we start, we will make the entire fasterweb folder available as the server’s web root directory by entering these commands inside the container:

# rm /srv/www
# ln -s /srv/fasterweb /srv/www
# cd /srv/www

Now, start PHPStorm and make our fasterweb directory the root of this project. To do so, select Create New Project from Existing Files, choose Source files are in a local directory, and designate our fasterweb directory as the Project root before clicking on Finish.

Once created, select Settings from the File menu. Under the Languages & Frameworks section, unfold the PHP menu entry and click on the Servers entry. Please enter all the appropriate information according to the specifics of your setup. The Host option must contain the IP address of the Linux for PHP container. If you are not sure what the IP address of your Docker container is, please run the following command on the container's command line in order to obtain it:

# ifconfig 

Once done, you can confirm by clicking on the Apply and OK buttons:

Configuring PHPStorm to connect to the web server and xdebug

Then, under the Run menu, you will find the Edit Configurations... entry. It can also be found on the right-hand side of the IDE's screen:

The ‘Edit configurations…’ setting

You can then add a PHP Remote Debug entry by clicking on the green plus sign in the upper-left corner of this window. Please select the server that we created in the previous step and please make sure that the Ide key(session id) is set to PHPSTORM:

Configuring the debugging session

You can now activate the PHPStorm debugging server by clicking on the Listen to debugger connections button in the upper-right menu of the main PHPStorm screen, set a breakpoint by clicking in the space to the right of any line number of the index.php file, and launch the debug tool corresponding to the index.php configuration that we created in the previous step.

If the top-right toolbar menu is not displayed on your screen, please click on the Toolbar entry of the View menu to make it appear. These buttons are also accessible as entries in the Run menu.

Activating the PHPStorm debugging server, setting a breakpoint and launching the debug tool

Now, open your favorite browser and request the same web page by entering the IP address of your Docker container:  http://[IP_ADDRESS]/?XDEBUG_SESSION_START=PHPSTORM.

You will then notice that the browser hangs, as the request is suspended at the breakpoint:

The browser is waiting for the debug session to resume or end

You will also notice that the debugging information is now showing inside the IDE. We can also control the session and determine when execution will resume from within the IDE. Please inspect the contents of the variables before allowing execution to resume by clicking on the green play button on the left-hand side of the screen. You can also end the debugging session by clicking on the pink stop button in the same icon menu:

The debugging session allows for detailed inspection of variables during runtime

Once the debugging session is over, we can inspect our container's /tmp directory, where we should find the profiler output in a file whose name begins with cachegrind.out. You can inspect this file directly in your favorite text editor, or install specialized software such as Kcachegrind with the package manager of your Linux distribution. Here is a sample of the output when using Kcachegrind:

Viewing the xdebug profiling report with Kcachegrind
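The cachegrind format itself is plain text and easy to read programmatically. Here is a toy sketch (the sample input is hypothetical, not real xdebug output) that sums the self cost recorded under each function, which is loosely the per-function aggregation that Kcachegrind displays:

```javascript
// A minimal reader for the cachegrind format: sum the self cost
// recorded under each `fn=` entry.
function selfCostPerFunction(cachegrindText) {
  const costs = {};
  let currentFn = null;
  for (const line of cachegrindText.split('\n')) {
    if (line.startsWith('fn=')) {
      currentFn = line.slice(3);
    } else if (currentFn && /^\d+ \d+$/.test(line)) {
      // cost lines are "<line number> <cost>"
      const cost = parseInt(line.split(' ')[1], 10);
      costs[currentFn] = (costs[currentFn] || 0) + cost;
    }
  }
  return costs;
}

// Hypothetical profiler output for a tiny script:
const sample = [
  'events: Time',
  'fl=/srv/www/index.php',
  'fn=phpinfo',
  '3 1200',
  'fn={main}',
  '1 15',
  '3 40',
].join('\n');

console.log(selfCostPerFunction(sample));
```

Real files also carry called-function (`cfn=`) entries and inclusive costs, which is why a dedicated viewer such as Kcachegrind remains the practical choice.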

Thus, xdebug’s profiling tool will be available to you if you wish to use it on top of those that we will be using to optimize our code examples in the next chapters. This being said, in the next chapter, we will be looking into more advanced profiling tools such as Blackfire.io.

Once you are done testing xdebug, you can restore the chapter_1 folder as the server's web root directory:

# rm /srv/www
# ln -s /srv/fasterweb/chapter_1 /srv/www
# cd /srv/www

Now, let's continue by having a look at SQL speed testing tools.

SQL – Speed Testing

Even though the PostgreSQL server is often considered to be the fastest RDBMS in the world after Oracle Database, the MariaDB server (a fork of MySQL) remains one of the fastest and most popular RDBMSs, especially when it comes to simple SQL queries. Thus, when discussing SQL optimizations in this book, we will mostly use MariaDB.

To benchmark our MariaDB server, we will be using the mysqlslap utility included with MySQL servers since version 5.1.4. In order to run the tests, we will start by loading the Sakila test database. On the container's command line, enter the following commands:

# wget -O sakila-db.tar.gz \ 
> https://downloads.mysql.com/docs/sakila-db.tar.gz 
# tar -xzvf sakila-db.tar.gz 
# mysql -uroot < sakila-db/sakila-schema.sql 
# mysql -uroot < sakila-db/sakila-data.sql 

Once the database is loaded, you can launch the first benchmarking test:

# mysqlslap --user=root --host=localhost --concurrency=20 --number-of-queries=1000 --create-schema=sakila --query="SELECT * FROM film;" --delimiter=";" --verbose --iterations=2 --debug-info  

You should then obtain a result similar to this:

Benchmarking the MariaDB server with the mysqlslap tool

You can then run a second benchmark test, but with a different level of concurrency in order to compare the results:

# mysqlslap --user=root --host=localhost --concurrency=50 --number-of-queries=1000 --create-schema=sakila --query="SELECT * FROM film;" --delimiter=";" --verbose --iterations=2 --debug-info 

Here are the results of the second test:

Benchmarking the MariaDB server with the mysqlslap tool using higher concurrency

The results of my tests show me that, with a full table scan query on a table with approximately 1,000 entries, performance degrades drastically when 50 or more concurrent queries are sent to the server.

We will see how these types of tests and many other more advanced ones will be particularly useful when discussing SQL query optimizations in the chapters dedicated to this topic.

JavaScript – Developer Tools

In order to measure performance and profile the JavaScript code contained in this book, we will use Google Chrome's built-in developer tools. Specifically, Chrome includes a timeline recorder and JavaScript CPU profiler that will allow you to identify bottlenecks in your JavaScript code. To activate these tools, please click on the three dots in the upper-right corner of the browser and click on the Developer Tools entry in the More Tools submenu, as shown:

Finding the ‘Developer Tools’ entry in the ‘More Tools’ section of Chrome’s main menu

Using the profiler is as easy as clicking the Record button and refreshing the page you wish to profile. You can then analyze the results in order to identify potential problems with the code:

Chrome’s timeline recorder and JavaScript CPU profiler

In Chapter 7, JavaScript and "Danger Driven Development", and Chapter 8, Functional JavaScript, we will be using this tool more extensively in order to measure and optimize JavaScript code performance in general.
