Performance Testing with JMeter 3: Enhance the performance of your web application, Third Edition


Performance Testing with JMeter 3

Performance Testing Fundamentals

Software performance testing is used to determine the speed or effectiveness of a computer, network, software program, or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions.
- Wikipedia

Let's consider a case study. Baysoft Training Inc. is an emerging start-up focused on redefining how software can help get more people trained in various fields of the IT industry. The company achieves this goal by providing a suite of products, including online courses and on-site and off-site training. One of its flagship products, TrainBot, is a web-based application focused solely on registering individuals for courses of interest that will aid them in attaining their career goals. Once registered, clients can then go on to take a series of interactive online courses.

The incident

Up until recently, traffic on TrainBot was light, as it was open to only a handful of clients while still in closed beta. Everything was fully operational, and the application as a whole was very responsive. Just a few weeks ago, TrainBot was opened to the public, and all was still fine and dandy. To celebrate the launch and promote its online training courses, Baysoft Training Inc. recently offered 75 percent off all training courses. However, this promotional offer caused a sudden influx of traffic to TrainBot, far beyond what the company had anticipated. Web traffic shot up by 300 percent and, suddenly, things took a turn for the worse.

Network resources weren't holding up well, server CPUs and memory were at 90-95 percent, and database servers weren't far behind, due to high I/O and contention. As a result, most web requests began to get slower response times, making TrainBot totally unresponsive for most of its first-time clients. It didn't take too long for the servers to crash and for the support lines to get flooded after that.

The aftermath

It was a long night at the Baysoft Training Inc. corporate office. How did this happen? Could this have been avoided? Why were the application and system not able to handle the load? Why weren't adequate performance and stress tests conducted on the system and application? Was it an application problem, a system resource issue, or a combination of both? All these were questions that the management demanded answers to from the group of engineers, which comprised software developers, network and system engineers, Quality Assurance (QA) testers, and database administrators gathered in the meeting room. There sure was a lot of finger-pointing and blame going around the room. After a little brainstorming, it wasn't long before the group had to decide what needed to be done. The application and its system resources needed to undergo extensive and rigorous testing. This included all facets of the application and all supporting system resources, including, but not limited to, infrastructure, network, database, servers, and load balancers. Such a test would help all involved parties to discover exactly where the bottlenecks were and address them accordingly.

Performance testing

Performance testing is a type of testing intended to determine the responsiveness, reliability, throughput, interoperability, and scalability of a system and/or application under a given workload. It can also be defined as a process of determining the speed or effectiveness of a computer, network, software application, or device. Testing can be conducted on software applications, system resources, targeted application components, databases, and a whole lot more. It normally involves an automated test suite, as this allows easy and repeatable simulations of a variety of normal, peak, and exceptional load conditions. Such forms of testing help verify whether a system or application meets the specifications claimed by its vendor. The process can compare applications in terms of parameters such as speed, data transfer rate, throughput, bandwidth, efficiency, or reliability. Performance testing can also aid as a diagnostic tool in determining bottlenecks and single points of failure. It is often conducted in a controlled environment and in conjunction with stress testing; a process of determining the ability of a system or application to maintain a certain level of effectiveness under unfavorable conditions.

Why bother? Baysoft's case study should make it obvious why companies go to great lengths to conduct performance testing. The disaster could have been minimized, if not totally prevented, if effective performance testing had been conducted on TrainBot prior to opening it up to the masses. As we proceed through this chapter, we will continue to explore the many benefits of effective performance testing.

At a very high level, performance testing is mostly conducted to address one or more risks related to expenses, opportunity costs, continuity, and/or corporate reputation. Conducting such tests helps give insights into software application release readiness, adequacy of network and system resources, infrastructure stability, and application scalability, to name a few. Gathering estimated performance characteristics of application and system resources prior to launch helps address issues early and provides valuable feedback to stakeholders, helping them make key and strategic decisions.

Performance testing covers a whole lot of ground, including areas such as the following:

  • Assessing application and system production readiness
  • Evaluating against performance criteria (for example, transactions per second, page views per day, and registrations per day)
  • Comparing performance characteristics of multiple systems or
    system configurations
  • Identifying the source of performance bottlenecks
  • Aiding with performance and system tuning
  • Helping identify system throughput levels
  • Acting as a testing tool

Most of these areas are intertwined with each other, each aspect contributing to attaining the overall objectives of stakeholders. However, before jumping right in, let's take a moment to understand the following core activities in conducting performance tests:

  • Identifying acceptance criteria: What is the acceptable performance of the various modules of the application under load? Specifically, we need to identify the response time, throughput, and resource utilization goals and constraints. How long should an end user have to wait for a particular page to render? How long should the user wait to perform an operation? Response time is usually a user concern, throughput a business concern, and resource utilization a system concern. As such, response time, throughput, and resource utilization are key aspects of performance testing. Acceptance criteria are usually driven by stakeholders, and it is important to continuously involve them as the testing progresses, as the criteria may need to be revised.
  • Identifying the test environment: Becoming familiar with the physical test and production environments is crucial for a successful test run. Knowing things such as the hardware, software, and network configurations of the environment helps derive an effective test plan and identify testing challenges from the outset. In most cases, these will be revisited and/or revised during the testing cycle.
  • Planning and designing tests: Know the usage pattern of the application
    (if any), and come up with realistic usage scenarios, including variability among the various scenarios. For example, if the application in question has a user registration module, how many users typically register for an account in a day? Do those registrations happen all at once, at the same time, or are they spaced out? How many people frequent the landing page of the application within an hour? Questions such as these help put things in perspective and design variations in the test plan. Having said that, there may be times when the application under test is new, and so, no usage pattern has been formed yet. At such times, stakeholders should be consulted to understand their business process and come up with as close to a realistic test plan as possible.
  • Preparing the test environment: Configure the test environment, tools, and resources necessary to conduct the planned test scenarios. It is important to ensure that the test environment is instrumented for resource monitoring to help analyze results more efficiently. Depending on the company, a separate team might be responsible for setting up the test tools, while another team may be responsible for configuring other aspects, such as resource monitoring. In other organizations, a single team may be responsible for setting up all aspects.
  • Preparing the test plan: Using a test tool, record the planned test scenarios. There are numerous testing tools available, both free and commercial, that do the job quite well, each having its pros and cons.
    • Such tools include HP LoadRunner, NeoLoad, LoadUI, Gatling, WebLOAD, WAPT, Loadster, Load Impact, Rational Performance Tester, Testing Anywhere, OpenSTA, LoadStorm, The Grinder, Apache Benchmark, httperf, and so on. Some of these are commercial, while others are free but not as mature, portable, or extensible as JMeter. HP LoadRunner, for example, is a bit pricey and limits the number of simulated threads to 250 without the purchase of additional licenses, although it does offer a much better graphical interface and monitoring capability. Gatling is the new kid on the block, is free, and looks rather promising. It is still in its infancy and aims to address some of the shortcomings of JMeter, including an easier-to-use domain-specific language (DSL) for writing tests versus JMeter's verbose XML, and better and more meaningful HTML reports, among others.

Having said that, it still has only a tiny user base as compared to JMeter, and not everyone may be comfortable with building test plans in Scala, its language of choice. Programmers may find it more appealing.

    • In this book, our tool of choice will be Apache JMeter to perform this step. This shouldn't be a surprise considering the title of the book.
  • Running the tests: Once recorded, execute the test plans under light load and verify the correctness of the test scripts and output results. In cases where test or input data is fed into the scripts to simulate more realistic data (more on this in subsequent chapters), also validate the test data. Another aspect to pay careful attention to during test plan execution is the server logs. This can be achieved through the resource monitoring agents set up to monitor the servers. It is paramount to watch for warnings and errors. A high rate of errors, for example, can be an indication that something is wrong with the test scripts, application under test, system resource, or a combination of all these.
  • Analyzing results, reporting, and retesting: Examine the results of each successive run and identify the bottleneck areas that need to be addressed. These can be related to the system, the database, or the application. System-related bottlenecks may lead to infrastructure changes, such as increasing the memory available to the application, reducing CPU consumption, increasing or decreasing thread pool sizes, revising database pool sizes, and reconfiguring network settings. Database-related bottlenecks may lead to analyzing database I/O operations, top queries from the application under test, profiling SQL queries, introducing additional indexes, running statistics gathering, changing table page sizes and locks, and a lot more. Finally, application-related changes might lead to activities such as refactoring application components, and reducing application memory consumption and database round trips. Once the identified bottlenecks are addressed, the test(s) should then be rerun and compared with the previous runs. To help better track which change or group of changes resolved a particular bottleneck, it is vital that changes are applied in an orderly fashion, preferably one at a time. In other words, once a change is applied, the same test plan is executed and the results are compared to the previous run to check whether the change improved or worsened the results. This process is repeated until the performance goals of the project have been met.

The performance testing core activities are displayed as follows:

Performance testing core activities

Performance testing is usually a collaborative effort between all parties involved. Parties include business stakeholders, enterprise architects, developers, testers, DBAs, system admins, and network admins. Such collaboration is necessary to effectively gather accurate and valuable results when conducting tests. Monitoring network utilization, database I/O and waits, top queries, and invocation counts helps the team find bottlenecks and areas that need further attention in ongoing tuning efforts.

Performance testing and tuning

There is a strong relationship between performance testing and tuning, in the
sense that one often leads to the other. Often, end-to-end testing unveils system
or application bottlenecks that are regarded as unacceptable for project target goals. Once those bottlenecks are discovered, the next step for most teams is a series of tuning efforts to make the application perform adequately.

Such efforts normally include, but are not limited to, the following:

  • Configuring changes in system resources
  • Optimizing database queries
  • Reducing round trips in application calls, sometimes leading to redesigning and re-architecting problematic modules
  • Scaling out application and database server capacity
  • Reducing application resource footprint
  • Optimizing and refactoring code, including eliminating redundancy and reducing execution time

Tuning efforts may also commence if the application has reached acceptable performance but the team wants to reduce the amount of system resources being
used, decrease the volume of hardware needed, or further increase system performance.

After each change (or series of changes), the test is re-executed to see whether the performance has improved or declined due to the changes. The process continues until the performance results reach acceptable goals. The outcome of these test-tuning cycles normally produces a baseline.

Baselines

Baselining is the process of capturing performance metric data for the sole purpose of evaluating the efficacy of successive changes to the system or application. It is important that all characteristics and configurations, except those specifically being varied for comparison, remain the same in order to make effective comparisons as to which change (or series of changes) is driving results toward the targeted goal. Armed with such baseline results, subsequent changes can be made to the system configuration or application, and testing results can be compared to see whether such changes were relevant or not. Some considerations when generating baselines include the following:

  • They are application-specific
  • They can be created for systems, applications, or modules
  • They are metrics/results
  • They should not be over-generalized
  • They evolve and may need to be redefined from time to time
  • They act as a shared frame of reference
  • They are reusable
  • They help identify changes in performance

Load and stress testing

Load testing is the process of putting demand on a system and measuring its response, that is, determining how much volume the system can handle. Stress testing is the process of subjecting the system to unusually high loads, far beyond its normal usage pattern, to determine its responsiveness. These are different from performance testing, whose sole purpose is to determine the response and effectiveness of a system, that is, how fast the system is. Since load ultimately affects how a system responds, performance testing is mostly done in conjunction with stress testing.

JMeter to the rescue

In the last section, we covered the fundamentals of conducting a performance test. One of the areas performance testing covers is testing tools. Which testing tool do you use to put the system and application under load? There are numerous testing tools available to perform this operation, from free to commercial solutions. However, our focus in this book will be on Apache JMeter, a free, open source, cross-platform desktop application from the Apache Software Foundation. JMeter has been around since 1998, according to the historic change logs on its official site, making it a mature, robust, and reliable testing tool. Cost may also have played a role in its wide adoption. Small companies usually don't want to foot the bill for commercial testing tools, which often place restrictions, for example, on how many concurrent users one can spin off. My first encounter with JMeter was exactly a result of this. I worked in a small shop that had paid for a commercial testing tool, but during the course of testing, we outgrew the license limits on how many concurrent users we needed to simulate for realistic test plans. Since JMeter was free, we explored it and were quite delighted with the offerings and the sheer amount of features we got for free.

Here are some of its features:

  • Performance tests of different server types, including web (HTTP and HTTPS), SOAP, database, LDAP, JMS, mail, and native commands or shell scripts
  • Complete portability across various operating systems
  • Full multithreading framework allowing concurrent sampling by many threads and simultaneous sampling of different functions by separate
    thread groups
  • Full featured Test IDE that allows fast test plan recording, building, and debugging
  • Dashboard report for detailed analysis of application performance indexes and key transactions
  • In-built integration with real-time reporting and analysis tools, such as Graphite, InfluxDB, and Grafana, to name a few
  • Complete dynamic HTML reports
  • Graphical User Interface (GUI)
  • HTTP proxy recording server
  • Caching and offline analysis/replaying of test results
  • High extensibility
  • Live view of results as testing is being conducted

JMeter allows multiple concurrent users to be simulated on the application, allowing you to work toward most of the target goals discussed earlier in this chapter, such as attaining baselines and identifying bottlenecks.

It will help answer questions, such as the following:

  • Will the application still be responsive if 50 users are accessing it concurrently?
  • How reliable will it be under a load of 200 users?
  • How much of the system resources will be consumed under a load of
    250 users?
  • What will the throughput look like with 1000 users active in the system?
  • What will be the response time for the various components in the application under load?

JMeter, however, should not be confused with a browser (more on this in Chapter 2, Recording Your First Test, and Chapter 3, Submitting Forms). It doesn't perform all the operations supported by browsers; in particular, JMeter does not execute the JavaScript found in HTML pages, nor does it render HTML pages the way a browser does. It does, however, give you the ability to view request responses as HTML through many of its listeners, but the timings are not included in any samples. Furthermore, there are limits to how many users can be spun up on a single machine. These vary depending on the machine specifications (for example, memory, processor speed, and so on) and the test scenarios being executed. In our experience, we have mostly been able to successfully spin up 250-450 users on a single machine with a 2.2 GHz processor and 8 GB of RAM.

Up and running with JMeter

Now, let's get up and running with JMeter, beginning with its installation.

Installation

JMeter comes as a bundled archive, so it is super easy to get started with it. Those working in corporate environments behind a firewall, or on machines with non-admin privileges, will appreciate this even more. To get started, grab the latest binary release by pointing your browser to http://jmeter.apache.org/download_jmeter.cgi. At the time of writing this, the current release version is 3.1. The download site offers the bundle as both a .zip file and a .tgz file. In this book, we go with the .zip file option, but feel free to download the .tgz file if that's your preferred way of grabbing archives.

Once downloaded, extract the archive to a location of your choice. Throughout this book, the location you extracted the archive to will be referred to as JMETER_HOME.

Provided you have a JDK/JRE correctly installed and a JAVA_HOME environment variable set, you are all set and ready to run!
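
On a Unix-like system, the whole installation boils down to a couple of commands. The following is a minimal sketch; the mirror URL, the version number (3.2 here), and the target directory are assumptions, so adjust them to match the release you actually download:

# download and extract the JMeter binary bundle (URL and version are illustrative)
wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-3.2.zip
unzip apache-jmeter-3.2.zip -d /opt/tools
# the extracted folder is what this book refers to as JMETER_HOME
export JMETER_HOME=/opt/tools/apache-jmeter-3.2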

The following screenshot shows a trimmed down directory structure of a vanilla JMeter install:

JMETER_HOME folder structure

The following are some of the folders in Apache-JMeter-3.2, as shown in the preceding screenshot:

  • bin: This folder contains executable scripts to run and perform other operations in JMeter
  • docs: This folder contains a well-documented user guide
  • extras: This folder contains miscellaneous items, including samples illustrating the usage of the Apache Ant build tool (http://ant.apache.org/) with JMeter, and BeanShell scripting
  • lib: This folder contains utility JAR files needed by JMeter (you may add additional JARs here to use from within JMeter; we will cover this in detail later)
  • printable_docs: This is the printable documentation

Installing Java JDK

Follow these steps to install Java JDK:

  1. Go to http://www.oracle.com/technetwork/java/javase/downloads/index.html.
  2. Download Java JDK (not JRE) compatible with the system that you will use to test. At the time of writing, JDK 1.8 (update 131) was the latest, and that's what we use throughout this book.
  3. Double-click on the executable and follow the onscreen instructions.
On Windows systems, the default location for the JDK is under Program Files. While there is nothing wrong with this, the issue is that the folder name contains a space, which can sometimes be problematic when setting PATH and running programs that depend on the JDK, such as JMeter, from the command line. With this in mind, it is advisable to change the default location to something like C:\tools\jdk.

Setting up JAVA_HOME

Here are the steps to set up the JAVA_HOME environment variable on Windows and Unix operating systems.

On Windows

For illustrative purposes, assume that you have installed Java JDK at C:\tools\jdk:

  1. Go to Control Panel.
  2. Click on System.
  3. Click on Advanced system settings.
  4. Click on Environment Variables and add a new system variable with the following values:
    • Variable name: JAVA_HOME
    • Variable value: C:\tools\jdk
  5. Locate Path (under system variables, bottom half of the screen).
  6. Click on Edit.
  7. Append %JAVA_HOME%\bin to the end of the existing Path value (if any), separating it from the previous entry with a semicolon.

On Unix

For illustrative purposes, assume that you have installed Java JDK at /opt/tools/jdk:

  1. Open up a Terminal window.
  2. Run export JAVA_HOME=/opt/tools/jdk.
  3. Run export PATH=$PATH:$JAVA_HOME/bin.

It is advisable to set this in your shell profile settings, such as .bash_profile
(for bash users) or .zshrc (for zsh users), so that you won't have to set it for each
new Terminal window you open.
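
For example, a bash user might append the following two lines to ~/.bash_profile (the JDK path below is the example location used above):

export JAVA_HOME=/opt/tools/jdk
export PATH=$PATH:$JAVA_HOME/bin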

Running JMeter

Once installed, the bin folder under the JMETER_HOME folder contains all the executable scripts that can be run. Based on the operating system that you installed JMeter on, you either execute the shell scripts (.sh file) for operating systems that are Unix/Linux flavored, or their batch (.bat file) counterparts on operating systems that are Windows flavored.

JMeter files are saved as XML files with a .jmx extension. We refer to them as test scripts or JMX files in this book.

These scripts include the following:

  • jmeter.sh: This script launches JMeter GUI (the default)
  • jmeter-n.sh: This script launches JMeter in non-GUI mode (takes a JMX file as input)
  • jmeter-n-r.sh: This script launches JMeter in non-GUI mode remotely
  • jmeter-t.sh: This opens a JMX file in the GUI
  • jmeter-server.sh: This script starts JMeter in server mode (this will be kicked off on the remote (slave) nodes when testing with multiple machines remotely; more on this in Chapter 6, Distributed Testing)
  • mirror-server.sh: This script runs the mirror server for JMeter
  • shutdown.sh: This script gracefully shuts down a running non-GUI instance
  • stoptest.sh: This script abruptly shuts down a running non-GUI instance

To start JMeter, open a Terminal shell, change to the JMETER_HOME/bin folder, and run the following command on Unix/Linux:

./jmeter.sh

Alternatively, run the following command on Windows:

jmeter.bat

A short moment later, you will see the JMeter GUI (shown in the Configuring a proxy server section later in this chapter). Take a moment to explore the GUI. Hover over each icon to see a short description of what it does. The Apache JMeter team has done an excellent job with the GUI. Most icons are very similar to what you are used to, which helps ease the learning curve for new adopters. Some of the icons, for example, stop and shutdown, are disabled until a scenario/test is being conducted. In the next chapter, we will explore the GUI in more detail as we record our first test script.

The JVM_ARGS environment variable can be used to override JVM settings in the jmeter.bat or jmeter.sh script. Consider the following example:
export JVM_ARGS="-Xms1024m -Xmx1024m -Dpropname=propvalue"

Command-line options

To see all the options available when starting JMeter, run the JMeter executable with the -? option. The options provided are as follows:

    ./jmeter.sh -?

-?
print command line options and exit
-h, --help
print usage information and exit
-v, --version
print the version information and exit
-p, --propfile <argument>
the jmeter property file to use
-q, --addprop <argument>
additional JMeter property file(s)
-t, --testfile <argument>
the jmeter test(.jmx) file to run
-l, --logfile <argument>
the file to log samples to
-j, --jmeterlogfile <argument>
jmeter run log file (jmeter.log)
-n, --nongui
run JMeter in nongui mode
...
-J, --jmeterproperty <argument>=<value>
Define additional JMeter properties
-G, --globalproperty <argument>=<value>
Define Global properties (sent to servers)
e.g. -Gport=123
or -Gglobal.properties
-D, --systemproperty <argument>=<value>
Define additional system properties
-S, --systemPropertyFile <argument>
additional system property file(s)

This is a snippet (non-exhaustive list) of what you might see if you did the same.
We will explore some, but not all, of these options as we go through the book.

JMeter's classpath

Since JMeter is 100 percent pure Java, it comes packed with functionalities to get most of the test cases scripted. However, there might come a time when you need to pull in a functionality provided by a third-party library, or one developed by yourself, which is not present by default. As such, JMeter provides two directories where such third-party libraries can be placed to be auto discovered on its classpath:

  • JMETER_HOME/lib: This is used for utility JARs.
  • JMETER_HOME/lib/ext: This is used for JMeter components and add-ons. All custom-developed JMeter components should be placed in the lib/ext folder, while third-party libraries (JAR files) should reside in the lib folder.
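
As a quick illustration, installing an add-on and a supporting utility library is simply a matter of copying the JAR files into the right folders; the file names below are hypothetical placeholders:

# a custom or third-party JMeter component (for example, a sampler or listener) goes into lib/ext
cp my-custom-sampler.jar $JMETER_HOME/lib/ext/
# a plain utility library that your components or scripts depend on goes into lib
cp some-utility-library.jar $JMETER_HOME/lib/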

Configuring a proxy server

If you are working from behind a corporate firewall, you may need to configure JMeter to work with it, providing it with the proxy server host and port number.
To do so, supply additional command-line parameters to JMeter when starting
it up. Some of them are as follows:

  • -H: This command-line parameter specifies the proxy server hostname or
    IP address
  • -P: This specifies the proxy server port
  • -u: This specifies the proxy server username if it is secure
  • -a: This specifies the proxy server password if it is secure; consider the following example:
    ./jmeter.sh -H proxy.server -P 7567 -u username -a password

On Windows, run the jmeter.bat file instead.

Do not confuse the proxy server mentioned here with JMeter's built-in HTTP(S) Test Script Recorder, which is used to record HTTP or HTTPS browser sessions. We will be exploring this in the next chapter when we record our first test scenario.

The screen is displayed as follows:

JMeter GUI

Running in non-GUI mode

As described earlier, JMeter can run in non-GUI mode. This is needed when you run remotely, or want to optimize your testing system by not taking the extra overhead cost of running the GUI. Normally, you will run the default (GUI) when preparing your test scripts and running light load, but run the non-GUI mode for higher loads.

To do so, use the following command-line options:

  • -n: This command-line option indicates running in non-GUI mode
  • -t: This command-line option specifies the name of the JMX test file
  • -l: This command-line option specifies the name of the JTL file to
    log results to
  • -j: This command-line option specifies the name of the JMeter run log file
  • -r: This command-line option runs the test servers specified by the
    remote_hosts JMeter property
  • -R: This command-line option runs the test in the specified remote servers (for example, -Rserver1,server2)

In addition, you can also use the -H and -P options to specify the proxy server host and port, as we saw earlier:

./jmeter.sh -n -t test_plan_01.jmx -l log.jtl
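
If you also want to drive remote servers from the same non-GUI run, the command simply grows the -R option described above; the server names here are placeholders:

./jmeter.sh -n -t test_plan_01.jmx -l log.jtl -Rserver1,server2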

Running in server mode

This is used when performing distributed testing, that is, using more testing servers to generate additional load on your system. JMeter will be kicked off in server mode on each remote server (slave), and then a GUI on the master server will be used to control the slave nodes. We will discuss this in detail when we dive into distributed testing in Chapter 6, Distributed Testing:

./jmeter-server.sh
Specify the server.exitaftertest=true JMeter property if you want the server to exit after a single test has completed. It is disabled by default.
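
One way to set that property, assuming your version of the server script forwards -J options to JMeter as current releases do, is to pass it on the command line when starting the server:

./jmeter-server.sh -Jserver.exitaftertest=true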

Overriding properties

JMeter provides two ways to override Java, JMeter, and logging properties. One way is to directly edit jmeter.properties, which resides in the JMETER_HOME/bin folder. I suggest that you take a peek into this file and see the vast number of properties you can override. This is one of the things that makes JMeter so powerful and flexible. On most occasions, you will not need to override the defaults, as they have sensible default values.

The other way to override these values is directly from the command line when starting JMeter.

The options available to you include the following ones:

  • Defining a Java system property value:
    -D<property name>=<value>
  • Defining a local JMeter property:
    -J<property name>=<value>
  • Defining a JMeter property to be sent to all remote servers:
    -G<property name>=<value>
  • Defining a file containing JMeter properties to be sent to all remote servers:
    -G<property file>
  • Overriding a logging setting, setting a category to a given priority level:
    -L<category>=<priority>
./jmeter.sh -Duser.dir=/home/bobbyflare/jmeter_stuff -Jremote_hosts=127.0.0.1 -Ljmeter.engine=DEBUG

Since command-line options are processed after the logging system has been set up, any attempt to use the -J flag to update the log_level or log_file properties will have no effect.

Tracking errors during test execution

JMeter keeps track of all errors that occur during a test in a log file named jmeter.log by default. The file resides in the folder from which JMeter was launched. The name of this log file, like most things, can be configured in jmeter.properties or via a command-line parameter, -j <name_of_log_file>. When running the GUI, the error count is indicated in the top-right corner, that is, to the left of the number of threads running for the test, as shown in the following screenshot. Clicking on it reveals the log file contents directly at the bottom of the GUI. The log file provides an insight into what exactly is going on in JMeter when your tests are being executed and helps determine the cause of error(s) when they occur:

JMeter GUI error count/indicator
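
As a small illustration, a non-GUI run that writes its run log to a custom file might look like the following; the file names are purely illustrative:

./jmeter.sh -n -t test_plan_01.jmx -l results.jtl -j my_test_run.log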

Configuring JMeter

Should you need to customize JMeter default values, you can do so by editing the jmeter.properties file in the JMETER_HOME/bin folder, or making a copy of that file, renaming it as something different (for example, my-jmeter.properties), and specifying that as a command-line option when starting JMeter.
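
Putting that together, the following sketch starts JMeter with a copied-and-renamed properties file and overrides a couple of the properties discussed in the following list; the file name and values are illustrative only:

./jmeter.sh -p my-jmeter.properties -Jremote_hosts=192.168.0.10,192.168.0.11 -Jsearch_paths=/opt/jmeter-plugins/lib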

Some options you can configure include the following:

  • xml.parser: This specifies a custom XML parser implementation. The default value is org.apache.xerces.parsers.SAXParser; it is not mandatory. If you find the provided SAX parser buggy for some of your use cases, this gives you the option to override it with another implementation. For example, you can use javax.xml.parsers.SAXParser, provided that the right JARs exist on your JMeter instance's classpath.
  • remote_hosts: This is a comma-delimited list of remote JMeter hosts (or host:port if required). When running JMeter in a distributed environment, list the machines where you have JMeter remote servers running. This will allow you to control those servers from this machine's GUI. This applies only to distributed testing and is not mandatory. More on this will be discussed in Chapter 6, Distributed Testing.
  • not_in_menu: This is a list of components you do not want to see in JMeter's menus. Since JMeter has quite a number of components, you may wish to restrict it to show only components you are interested in or those you use regularly. You may list their class name or their class label (the string that appears in JMeter's UI) here, and they will no longer appear in the menus. The defaults are fine, and in our experience, we have never had to customize them, but we list it here so that you are aware of its existence; it's not mandatory.
  • user.properties: This specifies the name of the file containing additional JMeter properties. These are added after the initial property file, but before the -q and -J options are processed. This is not mandatory. User properties can be used to provide additional classpath configurations, such as plugin paths via the search_paths attribute, and utility JAR paths via the user_classpath attribute. In addition, these properties files can be used to fine-tune JMeter components' log verbosity.
  • search_paths: This specifies a list of paths (separated by ;) that JMeter will search for JMeter add-on classes; for example, additional samplers. This is in addition to any of the JARs found in the lib/ext folder. This is not mandatory. This comes in handy, for example, when extending JMeter with additional plugins that you don't intend to install in the JMETER_HOME/lib/ext folder. You can use this to specify an alternate location on the machine to pick up the plugins. Refer to Chapter 4, Managing Sessions.
  • user.classpath: In addition to JARs in the lib folder, use this attribute to provide additional paths that JMeter will search for utility classes. It is not mandatory.
  • system.properties: This specifies the name of the file containing additional system properties for JMeter to use. These are added before the -S and -D options are processed. This is not mandatory; it typically provides you with the ability to fine-tune various SSL settings, key stores, and certificates.
  • ssl.provider: This specifies the custom SSL implementation if you don't want to use the built-in Java implementation; it is not mandatory. If, for some reason, the default built-in Java implementation of SSL, which is quite robust, doesn't meet your particular usage scenario, this allows you to provide a custom one.
    In our experience, the default has always been sufficient.

The command-line options are processed in the following order:

  • -p propfile: This specifies a custom jmeter properties file to be used. If present, it is loaded and processed. This is optional.
  • jmeter.properties file: This is the default configuration file for JMeter and is already populated with sensible default values. It is loaded and processed after any user-provided custom properties files.
  • -j logfile: This is optional; it specifies the jmeter logfile. It is loaded and processed after the jmeter.properties file that we discussed earlier.
  • Logging is initialized.
  • user.properties: It is loaded (if any).
  • system.properties: It is loaded (if any).

All other command-line options are processed.

Summary

In this chapter, we covered the fundamentals of performance testing. We also discussed key concepts and activities surrounding performance testing in general. In addition, we installed JMeter, and you learned how to get it fully running on a machine and explored some of the configurations available with it. We explored some of the qualities that make JMeter a great tool of choice for your next performance testing assignment. These include the fact that it is free, mature, and open source, is easily extensible and customizable, is completely portable across various operating systems, has a great plugin ecosystem and a large user community, offers a built-in GUI and test recording, and allows test scenarios to be validated, among others. In comparison with other tools for performance testing, JMeter holds its own.

In the next chapter, we will record our first test scenario and dive deeper into JMeter.


Key benefits

  • Use JMeter to create and run tests to improve the performance of your webpages and applications
  • Learn to build a test plan for your websites and analyze the results
  • Unleash the power of various features and changes introduced in Apache JMeter 3.0

Description

JMeter is a Java application designed to load test and measure the performance of web applications. It can also be used to test various other static and dynamic resources. This book is a great starting point to learn about JMeter. It covers the new features introduced with JMeter 3 and enables you to dive deep into the new techniques needed for measuring your website's performance. The book starts with the basics of performance testing and guides you through recording your first test scenario, before diving deeper into JMeter. You will also learn how to configure JMeter and browsers to help record test plans. Moving on, you will learn how to capture form submission in JMeter, dive into managing sessions with JMeter, and see how to leverage some of the components provided by JMeter to handle web application HTTP sessions. You will also learn how JMeter can help monitor tests in real time. Further, you will go in depth into distributed testing and see how to leverage the capabilities of JMeter to accomplish this. You will get acquainted with some tips and best practices with regard to performance testing. By the end of the book, you will have learned how to take full advantage of the real power behind Apache JMeter.

Who is this book for?

This book is for software professionals who want to understand and improve the performance of their applications with Apache JMeter.

What you will learn

  • See why performance testing is necessary and learn how to set up JMeter
  • Record and test with JMeter
  • Handle various form inputs in JMeter and parse results during testing
  • Manage user sessions in web applications in the context of a JMeter test
  • Monitor JMeter results in real time
  • Perform distributed testing with JMeter
  • Get acquainted with helpful tips and best practices for working with JMeter

Product Details

Publication date: Jul 21, 2017
Length: 166 pages
Edition: 3rd
Language: English
ISBN-13: 9781787285774
Vendor: Apache



Table of Contents

8 Chapters

1. Performance Testing Fundamentals
2. Recording Your First Test
3. Submitting Forms
4. Managing Sessions
5. Monitoring Tests in Real-Time
6. Distributed Testing
7. Helpful Tips - Part 1
8. Helpful Tips - Part 2

Customer reviews

Rating distribution
4.3 out of 5 (3 ratings)
5 star: 33.3%
4 star: 66.7%
3 star: 0%
2 star: 0%
1 star: 0%
Ramkrishna Bhandare, Jan 23, 2019 (5 stars, Amazon Verified review)
Value for money.
Traymane, Sep 26, 2017 (4 stars, Amazon Verified review)
Really informative book for those who are novices or who want to further their JMeter skill set.
とあるエンジニア, Feb 18, 2018 (4 stars, Amazon Verified review)
The book starts with which metrics to use for load testing and what to watch out for, and then teaches you how to use JMeter. It also covers what other testing tools exist, which I personally appreciate. This is just my personal impression, but for those with no load-testing experience or those using JMeter for the first time, it is a well-organized, good book. It may not be enough for those already running JMeter at a large scale.
