Performance testing

Performance testing is a type of testing intended to determine the responsiveness, reliability, throughput, interoperability, and scalability of a system and/or application under a given workload. It can also be defined as a process of determining the speed or effectiveness of a computer, network, software application, or device. Testing can be conducted on software applications, system resources, targeted application components, databases, and a whole lot more. It normally involves an automated test suite, as this allows for easy and repeatable simulation of a variety of normal, peak, and exceptional load conditions. Such forms of testing help verify whether a system or application meets the specifications claimed by its vendor. The process can compare applications in terms of parameters such as speed, data transfer rate, throughput, bandwidth, efficiency, or reliability. Performance testing can also serve as a diagnostic tool for identifying bottlenecks and single points of failure. It is often conducted in a controlled environment and in conjunction with stress testing, a process of determining the ability of a system or application to maintain a certain level of effectiveness under unfavorable conditions.

Why bother? Using Baysoft's case study, it should be obvious why companies go to great lengths to conduct performance testing. The disaster could have been minimized, if not prevented entirely, had effective performance testing been conducted on TrainBot prior to opening it up to the masses. As we proceed through this chapter, we will continue to explore the many benefits of effective performance testing.

At a very high level, performance testing is mostly conducted to address one or more risks related to expenses, opportunity costs, continuity, and/or corporate reputation. Conducting such tests gives insight into software application release readiness, the adequacy of network and system resources, infrastructure stability, and application scalability, among other things. Gathering estimated performance characteristics of application and system resources prior to launch helps address issues early and gives stakeholders valuable feedback for making key strategic decisions.

Performance testing covers a whole lot of ground, including areas such as the following:

  • Assessing application and system production readiness
  • Evaluating against performance criteria (for example, transactions per second, page views per day, and registrations per day); a short sizing sketch follows this list
  • Comparing performance characteristics of multiple systems or
    system configurations
  • Identifying the source of performance bottlenecks
  • Aiding with performance and system tuning
  • Helping identify system throughput levels
  • Acting as a testing tool
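To make the criteria-evaluation item above more concrete, here is a minimal sizing sketch in plain Python. The figures (one million page views per day, a 3x peak factor, eight seconds per page including think time) are invented purely for illustration, and the calculation is simply Little's Law; it is not tied to any particular tool.

```python
# Hypothetical sizing example: turning a business-level criterion
# ("one million page views per day") into numbers a load test can use.

PAGE_VIEWS_PER_DAY = 1_000_000  # stated performance criterion (invented figure)
PEAK_FACTOR = 3.0               # assume the peak hour carries 3x the average load
AVG_TIME_PER_PAGE_S = 8.0       # response time plus user think time per page (assumed)

SECONDS_PER_DAY = 24 * 60 * 60

average_rps = PAGE_VIEWS_PER_DAY / SECONDS_PER_DAY
peak_rps = average_rps * PEAK_FACTOR

# Little's Law: concurrency = arrival rate x time each user spends per page
concurrent_users = peak_rps * AVG_TIME_PER_PAGE_S

print(f"Average load: {average_rps:.1f} pages/second")
print(f"Peak load:    {peak_rps:.1f} pages/second")
print(f"Concurrent virtual users needed at peak: {concurrent_users:.0f}")
```

Numbers like these feed directly into the load profile of a test plan, for example the number of threads and the ramp-up period of a JMeter thread group.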

Most of these areas are intertwined with each other, each aspect contributing to attaining the overall objectives of stakeholders. However, before jumping right in, let's take a moment to understand the following core activities in conducting performance tests:

  • Identifying acceptance criteria: What is the acceptable performance of the various modules of the application under load? Specifically, we need to identify the response time, throughput, and resource utilization goals and constraints. How long should the end user have to wait for a particular page to render? How long should the user wait to perform an operation? Response time is usually a user concern, throughput a business concern, and resource utilization a system concern. As such, response time, throughput, and resource utilization are key aspects of performance testing. Acceptance criteria are usually driven by stakeholders, and it is important to involve them continuously as the testing progresses, as the criteria may need to be revised. A minimal sketch of checking such criteria against test results appears right after this list.
  • Identifying the test environment: Becoming familiar with the physical test and production environments is crucial for a successful test run. Knowing things such as the hardware, software, and network configurations of the environment helps derive an effective test plan and identify testing challenges from the outset. In most cases, these will be revisited and/or revised during the testing cycle.
  • Planning and designing tests: Know the usage pattern of the application (if any), and come up with realistic usage scenarios, including variability among the various scenarios. For example, if the application in question has a user registration module, how many users typically register for an account in a day? Do those registrations all happen around the same time, or are they spread out? How many people frequent the landing page of the application within an hour? Questions such as these help put things in perspective and inform the variations designed into the test plan. Having said that, the application under test may be new, in which case no usage pattern has formed yet. At such times, stakeholders should be consulted to understand their business processes and to come up with a test plan that is as realistic as possible.
  • Preparing the test environment: Configure the test environment, tools, and resources necessary to conduct the planned test scenarios. It is important to ensure that the test environment is instrumented for resource monitoring to help analyze results more efficiently. Depending on the company, a separate team might be responsible for setting up the test tools, while another team may be responsible for configuring other aspects, such as resource monitoring. In other organizations, a single team may be responsible for setting up all aspects.
  • Preparing the test plan: Using a test tool, record the planned test scenarios. There are numerous testing tools available, both free and commercial, that do the job quite well, with each having their pros and cons.
    • Such tools include HP LoadRunner, NeoLoad, LoadUI, Gatling, WebLOAD, WAPT, Loadster, Load Impact, Rational Performance Tester, Testing Anywhere, OpenSTA, LoadStorm, The Grinder, Apache Benchmark, httperf, and so on. Some of these are commercial, while others are not as mature, portable, or extensible as JMeter. HP LoadRunner, for example, is a bit pricey and limits the number of simulated threads to 250 unless additional licenses are purchased, although it does offer a much better graphical interface and monitoring capability. Gatling is the new kid on the block; it is free and looks rather promising. It is still in its infancy and aims to address some of the shortcomings of JMeter, including an easier-to-use domain-specific language (DSL) for writing test plans, versus JMeter's verbose XML, and better, more meaningful HTML reports, among others. Having said that, it still has only a tiny user base compared to JMeter, and not everyone may be comfortable building test plans in Scala, its language of choice, although programmers may find that all the more appealing.

    • In this book, our tool of choice for this step will be Apache JMeter. This shouldn't come as a surprise, considering the title of the book.
  • Running the tests: Once recorded, execute the test plans under light load and verify the correctness of the test scripts and output results. In cases where test or input data is fed into the scripts to make the simulation more realistic (more on this in subsequent chapters), also validate the test data. Another aspect to pay careful attention to during test plan execution is the server logs. This can be achieved through the resource monitoring agents set up to monitor the servers. It is paramount to watch for warnings and errors. A high rate of errors, for example, can be an indication that something is wrong with the test scripts, the application under test, the system resources, or a combination of these.
  • Analyzing results, reporting, and retesting: Examine the results of each successive run and identify bottlenecks that need to be addressed. These can be related to the system, the database, or the application. System-related bottlenecks may lead to infrastructure changes, such as increasing the memory available to the application, reducing CPU consumption, increasing or decreasing thread pool sizes, revising database pool sizes, and reconfiguring network settings. Database-related bottlenecks may lead to analyzing database I/O operations and the top queries from the application under test, profiling SQL queries, introducing additional indexes, running statistics gathering, changing table page sizes and locks, and a lot more. Finally, application-related bottlenecks might lead to activities such as refactoring application components and reducing application memory consumption and database round trips. Once the identified bottlenecks are addressed, the test(s) should be rerun and compared with the previous runs. To help track which change or group of changes resolved a particular bottleneck, it is vital that changes are applied in an orderly fashion, preferably one at a time. In other words, once a change is applied, the same test plan is executed and the results are compared with a previous run to check whether the change improved or worsened the results (a minimal sketch of such a run-to-run comparison follows the figure below). This process is repeated until the performance goals of the project have been met.
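As a concrete illustration of the acceptance-criteria check mentioned in the first activity above, the following is a minimal sketch that reads a JMeter results (.jtl) file and evaluates it against hypothetical criteria. It assumes the results were saved in JMeter's default CSV format, which includes elapsed (response time in milliseconds) and success columns; the thresholds and file name are invented for illustration.

```python
import csv
import statistics

# Hypothetical acceptance criteria -- adjust to whatever the stakeholders agreed on.
MAX_P95_RESPONSE_MS = 2000   # 95% of samples should complete within 2 seconds
MAX_ERROR_RATE = 0.01        # at most 1% of samples may fail

def check_results(jtl_path):
    """Read a JMeter CSV results (.jtl) file and evaluate the acceptance criteria."""
    elapsed, failures, total = [], 0, 0
    with open(jtl_path, newline="") as fh:
        for row in csv.DictReader(fh):
            total += 1
            elapsed.append(int(row["elapsed"]))       # response time in milliseconds
            if row["success"].lower() != "true":      # JMeter writes true/false per sample
                failures += 1

    p95 = statistics.quantiles(elapsed, n=100)[94]    # 95th percentile response time
    error_rate = failures / total

    print(f"samples={total}  p95={p95:.0f} ms  error rate={error_rate:.2%}")
    passed = p95 <= MAX_P95_RESPONSE_MS and error_rate <= MAX_ERROR_RATE
    print("PASS" if passed else "FAIL")
    return passed

if __name__ == "__main__":
    check_results("results.jtl")   # file name is illustrative
```

A mechanical check like this can be run after the light verification run as well as after every full load run, so that a failed criterion is flagged immediately rather than discovered while eyeballing graphs.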

The performance testing core activities are displayed in the following figure:

[Figure: Performance testing core activities]
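To support the retest-and-compare loop described in the last activity above, where each change is applied one at a time and measured against the previous run, a small comparison script along the following lines can help. It makes the same default-CSV assumptions as the previous sketch, and the file names are illustrative.

```python
import csv
import statistics

def summarize(jtl_path):
    """Return (p95 response time in ms, throughput in requests/second) for a CSV .jtl file."""
    elapsed, stamps = [], []
    with open(jtl_path, newline="") as fh:
        for row in csv.DictReader(fh):
            elapsed.append(int(row["elapsed"]))
            stamps.append(int(row["timeStamp"]))      # sample start time, epoch milliseconds
    duration_s = max((max(stamps) - min(stamps)) / 1000.0, 0.001)  # guard against tiny files
    return statistics.quantiles(elapsed, n=100)[94], len(elapsed) / duration_s

# Compare a baseline run against a rerun made after a single tuning change.
base_p95, base_tps = summarize("baseline.jtl")        # file names are illustrative
new_p95, new_tps = summarize("after_change.jtl")

print(f"p95 response time: {base_p95:.0f} ms -> {new_p95:.0f} ms "
      f"({(new_p95 - base_p95) / base_p95:+.1%})")
print(f"throughput:        {base_tps:.1f} req/s -> {new_tps:.1f} req/s "
      f"({(new_tps - base_tps) / base_tps:+.1%})")
```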

Performance testing is usually a collaborative effort between all parties involved. Parties include business stakeholders, enterprise architects, developers, testers, DBAs, system admins, and network admins. Such collaboration is necessary to effectively gather accurate and valuable results when conducting tests. Monitoring network utilization, database I/O and waits, top queries, and invocation counts helps the team find bottlenecks and areas that need further attention in ongoing tuning efforts.
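On the resource-monitoring side of that collaboration, the agents mentioned earlier can be as simple as a small script that samples host metrics while a test runs, so the samples can later be lined up against the JMeter results timeline. The sketch below assumes the third-party psutil package (an assumption for illustration); any monitoring tool your team already uses serves the same purpose.

```python
import csv
import time

import psutil  # third-party package (pip install psutil); assumed for this sketch

# Sample basic host metrics at a fixed interval while a load test runs, so the
# samples can later be correlated with the JMeter results timeline.
INTERVAL_S = 5
OUTPUT = "host_metrics.csv"

with open(OUTPUT, "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["epoch_ms", "cpu_percent", "mem_percent"])
    try:
        while True:
            cpu = psutil.cpu_percent(interval=INTERVAL_S)   # blocks for INTERVAL_S seconds
            mem = psutil.virtual_memory().percent
            writer.writerow([int(time.time() * 1000), cpu, mem])
            fh.flush()
    except KeyboardInterrupt:
        pass  # stop sampling with Ctrl+C once the test run finishes
```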

You have been reading a chapter from
Performance Testing with JMeter 3 - Third Edition
Published in: Jul 2017
Publisher: Packt
ISBN-13: 9781787285774