Mastering Selenium WebDriver 3.0

You're reading from Mastering Selenium WebDriver 3.0: Boost the performance and reliability of your automated checks by mastering Selenium WebDriver

Product type: Paperback
Published: Jun 2018
Publisher: Packt
ISBN-13: 9781788299671
Length: 376 pages
Edition: 2nd Edition
Author: Mark Collin
Table of Contents (15 chapters)

Preface
1. Creating a Fast Feedback Loop
2. Producing the Right Feedback When Failing
3. Exceptions Are Actually Oracles
4. The Waiting Game
5. Working with Effective Page Objects
6. Utilizing the Advanced User Interactions API
7. JavaScript Execution with Selenium
8. Keeping It Real
9. Hooking Docker into Selenium
10. Selenium – the Future
11. Other Books You May Enjoy
Appendix A: Contributing to Selenium
Appendix B: Working with JUnit
Appendix C: Introduction to Appium

Running your tests in parallel

Running your tests in parallel means different things to different people, as it can mean either of the following:

  • Run all of your tests against multiple browsers at the same time
  • Run your tests against multiple instances of the same browser
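The second interpretation above can be sketched with plain Java concurrency, independent of Selenium. This is a minimal, hypothetical sketch: the `runCheckAgainst` method just returns a string, but in a real suite it would create a WebDriver instance for the named browser and run the check in that thread.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: run the same check against several browsers at the same time,
// one worker thread per browser. The browser names are illustrative.
public class ParallelRunSketch {

    static String runCheckAgainst(String browser) {
        // Placeholder for creating a real driver (ChromeDriver,
        // FirefoxDriver, and so on) and executing the test against it.
        return "passed on " + browser;
    }

    public static void main(String[] args) throws Exception {
        List<String> browsers = List.of("chrome", "firefox", "edge");
        ExecutorService pool = Executors.newFixedThreadPool(browsers.size());

        List<Future<String>> results = new java.util.ArrayList<>();
        for (String browser : browsers) {
            results.add(pool.submit(() -> runCheckAgainst(browser)));
        }

        // Collect results in submission order, blocking until each finishes.
        for (Future<String> result : results) {
            System.out.println(result.get());
        }
        pool.shutdown();
    }
}
```

In practice, test frameworks such as TestNG manage this threading for you; the sketch only shows the underlying idea of multiple driver instances running concurrently.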

Should we run our tests in parallel to increase coverage?

I'm sure that when you are writing automated tests to check that the website you are testing works, you are initially told that your website has to work on all browsers. The reality is that this is just not true. There are many browsers out there, and it's just not feasible to support all of them. For example, will your AJAX-intensive site with the odd Flash object work in the Lynx browser?

Lynx is a text-based web browser that can be used in a Linux Terminal window and was still in active development in 2014.

The next thing you will hear is, "OK, well, we will support every browser supported by Selenium." Again, that's great, but we have problems. Something that most people don't realize is that the core Selenium team's official browser support covers only the current browser version and the previous version at the time a given version of Selenium is released. In practice, it may well work on older browsers, and the core team does a lot of work to try to make sure they don't break support for them. However, if you want to run a series of tests on Internet Explorer 6, Internet Explorer 7, or even Internet Explorer 8, you are actually running tests against browsers that are not officially supported by Selenium.

We then come to our next set of problems. Internet Explorer is only supported on Windows machines, and you can have only one version of Internet Explorer installed on a Windows machine at a time.

There are hacks to install multiple versions of Internet Explorer on the same machine, but you will not get accurate tests if you do this. It's much better to have multiple operating systems running with just one version of Internet Explorer.

Safari is only supported on OS X machines, and, again, you can have only one version installed at a time.

There is an old version of Safari for Windows hidden away in Apple's archives, but it is no longer actively supported and shouldn't be used.

It soon becomes apparent that even if we do want to run all of our tests against every browser supported by Selenium, we are not going to be able to do it on one machine.

At this point, people tend to modify their test framework so that it can accept a list of browsers to run against. They write some code that detects, or specifies, which browsers are available on a machine. Once they have done this, they start running all of their tests over a few machines in parallel and end up with a matrix mapping each machine to the browsers it can run.
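The "accept a list of browsers" modification can be sketched as follows. The `browsers` system property name here is an assumption for illustration, not an official Selenium setting; you would run the suite with something like `-Dbrowsers=chrome,firefox` and hand each entry to a factory that builds the matching driver.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: parse a comma-separated browser list from a (hypothetical)
// "browsers" system property, defaulting to a single browser.
public class BrowserListSketch {

    static List<String> requestedBrowsers() {
        String raw = System.getProperty("browsers", "chrome");
        return Arrays.stream(raw.split(","))
                     .map(String::trim)
                     .map(String::toLowerCase)
                     .filter(name -> !name.isEmpty())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Simulate what "-Dbrowsers=Chrome, Firefox,IE" would produce.
        System.setProperty("browsers", "Chrome, Firefox,IE");
        System.out.println(requestedBrowsers());
    }
}
```

Each normalized name would then be passed to whatever driver factory your framework uses.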

This is great, but it doesn't get around the problem that there are always going to be one or two browsers you can't run on your local machine, so you will never get full cross-browser coverage. Using multiple driver instances (potentially in multiple threads) to run against different browsers has given us slightly increased coverage. We still don't have full coverage, though.

We also suffer some side effects by doing this. Different browsers run tests at different speeds, because the JavaScript engines in different browsers are not equal. We have probably drastically slowed down the process of checking that the code works before pushing it to a source code repository.

Finally, by doing this we can make it much harder to diagnose issues. When a test fails, you now have to work out which browser it was running against, as well as why it failed. This may only take a minute of your time, but all those minutes do add up.

So, why don't we just run our tests against one type of browser for the moment? Let's make the tests run against that browser nice and quickly, and then worry about cross-browser compatibility later.

It's probably a good idea to just pick one browser to run our tests against on our development machines. We can then use a CI server to pick up the slack and worry about browser coverage as part of our build pipeline. It's probably also a good idea to pick a browser with a fast JavaScript engine for our local machines.
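The local-default, CI-override idea can be sketched in a few lines. Again, the `browser` property name is an assumption: developers get a fast default on their machines, and the CI server overrides it per build with `-Dbrowser=firefox` (or whichever browser that build covers).

```java
// Sketch: default to one fast browser locally; let CI override it.
public class DefaultBrowserSketch {

    static String browserForThisRun() {
        // "chrome" is an illustrative default, chosen for a fast
        // JavaScript engine on developer machines.
        return System.getProperty("browser", "chrome").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println("Running against: " + browserForThisRun());
    }
}
```

With this in place, the build pipeline can spin up one job per browser, each passing a different `-Dbrowser` value, while local runs stay quick.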