The tutorial in this article will help you test your web interfaces. We will build a simple but robust web UI testing solution using Python, pytest, and Selenium WebDriver. We'll look at strategies for building good tests and patterns for writing good automated tests. The resulting project can also serve as a solid base for your own test cases.
Which browser?
The DuckDuckGo search test from one of the previous chapters works just fine... but only in Chrome. Let's take another look at the browser fixture:
@pytest.fixture
def browser():
    # Initialize ChromeDriver
    driver = Chrome()
    # Wait implicitly for elements to be ready before attempting interactions
    driver.implicitly_wait(10)
    # Return the driver object at the end of setup
    yield driver
    # For cleanup, quit the driver
    driver.quit()
The driver type and timeout are hardcoded. That may be fine for a proof of concept, but production tests need to be configurable at runtime. Web UI tests should be able to run in any browser. The default timeout should be adjustable, since some environments run slower than others. Sensitive data such as usernames and passwords should never appear in source code either. So how do you handle such test data?
All of these values are configuration data for the automated test system: discrete values that systematically affect how the automation works. Configuration data should be supplied as input to each test run. Anything related to test or environment configuration should be treated as configuration data, so that the automation code can be reused.
Sources of input
In an automated testing system, there are several ways to read input data:
- Command line arguments;
- Environment variables;
- System properties;
- Configuration files;
- API requests.
Unfortunately, most testing frameworks do not support reading data from command line arguments. Environment variables and system properties are difficult to manage and potentially dangerous to handle. Service APIs are a great way to consume input, especially for fetching secrets (such as passwords) from a key management service like AWS KMS or Azure Key Vault. However, paying for such a service may not be an option, and writing your own is unreasonable. That leaves config files as the best option.
A config file is a regular file that contains configuration data. The automation can read it at the start of a test run and use the values to drive the tests. For example, the config file might specify the type of browser that the browser fixture in our sample project should launch. Typically, configuration files use a standard format such as JSON, YAML, or INI, and they should be flat files that are easy to read and manage.
Our config file
Let's write a configuration file for our testing project. We'll use the JSON format because it is simple, popular, and hierarchical. In addition, the json module in Python's standard library converts JSON files into dictionaries with ease. Create a new file named tests/config.json and add the following content:
{
"browser": "chrome",
"wait_time": 10
}
JSON uses key-value pairs. As mentioned, our project has two configuration values: the browser choice and the timeout. Here "browser" is a string and "wait_time" is an integer.
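If you are curious what the parsed data looks like, here is a quick standalone sketch (assuming the file above is saved as tests/config.json) showing how the json module turns it into a plain dictionary:
import json

# Quick sketch: json.load turns the config file into a plain Python dict
with open('tests/config.json') as config_file:
    data = json.load(config_file)

print(data)               # {'browser': 'chrome', 'wait_time': 10}
print(data['browser'])    # chrome
print(data['wait_time'])  # 10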
Reading a config file with pytest
Fixtures are the best way to read config files with pytest. They can read a config file before any tests start and then inject the values into tests or even other fixtures. Add the following fixture to tests/test_web.py:
import json

@pytest.fixture(scope='session')
def config():
    # Read the JSON config file and return it as a parsed dict
    with open('tests/config.json') as config_file:
        data = json.load(config_file)
    return data
The config fixture reads tests/config.json and parses it into a dictionary using the json module. Hard-coding the file path is a fairly common practice; in fact, many automation tools and systems look for files in well-known directories or by naming patterns. The fixture's scope is set to "session", so it runs only once per test session. There is no need to read the same config file again for every test - that would be inefficient!
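If you would rather not hard-code the path at all, one simple option is to let an environment variable override it. The following is only a sketch, and the CONFIG_PATH variable name is an assumption for illustration, not part of the project:
import json
import os

import pytest

@pytest.fixture(scope='session')
def config():
    # Sketch: fall back to the default path unless CONFIG_PATH is set
    path = os.environ.get('CONFIG_PATH', 'tests/config.json')
    with open(path) as config_file:
        return json.load(config_file)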
The configuration values are needed when initializing the WebDriver. Update the browser fixture as follows:
@pytest.fixture
def browser(config):
if config['browser'] == 'chrome':
driver = Chrome()
else:
raise Exception(f'"{config["browser"]}" is not a supported browser')
driver.implicitly_wait(config['wait_time'])
yield driver
driver.quit()
The browser fixture now depends on the config fixture. Even though config runs only once per test session, browser is still called before each test. browser now uses an if-else chain to determine which type of WebDriver to create. For now, only Chrome is supported, but we'll add more browsers shortly. If the browser choice is not recognized, an exception is raised. The implicit wait timeout also takes its value from the config file.
Since browser still yields a WebDriver instance, tests that use it don't need to be refactored! Let's run the tests to make sure the config file works:
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item
tests/test_web.py . [100%]
=========================== 1 passed in 5.00 seconds ===========================
Adding new browsers
Now that our project has a config file, we can use it to change the browser. Let's run the test in Mozilla Firefox instead of Google Chrome. To do this, download and install the latest Firefox, and then download the latest geckodriver (the Firefox WebDriver). Make sure geckodriver is also on the system path.
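A quick way to confirm that geckodriver is reachable from the system path is to ask it for its version from a terminal; if the command prints a version string instead of a "command not found" error, you are good to go:
$ geckodriver --version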
Update the browser fixture code to work with Firefox:
from selenium.webdriver import Chrome, Firefox
@pytest.fixture
def browser(config):
if config['browser'] == 'chrome':
driver = Chrome()
elif config['browser'] == 'firefox':
driver = Firefox()
else:
raise Exception(f'"{config["browser"]}" is not a supported browser')
driver.implicitly_wait(config['wait_time'])
yield driver
driver.quit()
Then change the "browser" value in the config file to "firefox":
{
"browser": "firefox",
"wait_time": 10
}
Now rerun the test and a Firefox window will open instead of Chrome!
Validation
Although the config file works, there is a significant flaw in how it is handled: the data is not validated before the tests run. The browser fixture will raise an exception if the browser choice is invalid, but it will do so for every single test. It would be much more efficient to raise this kind of exception once per test session. In addition, the tests will crash if the "browser" or "wait_time" keys are missing from the config file. Let's fix this.
Add a new fixture to validate browser selection:
@pytest.fixture(scope='session')
def config_browser(config):
if 'browser' not in config:
raise Exception('The config file does not contain "browser"')
elif config['browser'] not in ['chrome', 'firefox']:
raise Exception(f'"{config["browser"]}" is not a supported browser')
return config['browser']
The config_browser fixture depends on the config fixture and, like config, has session scope. It raises an exception if the config file has no "browser" key or if the selected browser is not supported. Finally, it returns the browser choice so that tests and other fixtures can access the value safely.
Next, add a fixture for timeout validation:
@pytest.fixture(scope='session')
def config_wait_time(config):
return config['wait_time'] if 'wait_time' in config else 10
If a timeout is specified in the config file, the config_wait_time fixture returns it. Otherwise, it returns the default of 10 seconds.
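If you also want to guard against bad timeout values (for example, a negative number or a string), here is one possible stricter sketch; this extra check is an assumption, not part of the original project:
@pytest.fixture(scope='session')
def config_wait_time(config):
    # Sketch: validate the type and range of the wait time as well
    wait_time = config.get('wait_time', 10)
    if not isinstance(wait_time, (int, float)) or wait_time < 0:
        raise Exception(f'"{wait_time}" is not a valid wait time')
    return wait_time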
Update the browser fixture again to use the new validation fixtures:
@pytest.fixture
def browser(config_browser, config_wait_time):
if config_browser == 'chrome':
driver = Chrome()
elif config_browser == 'firefox':
driver = Firefox()
else:
raise Exception(f'"{config_browser}" is not a supported browser')
driver.implicitly_wait(config_wait_time)
yield driver
driver.quit()
Writing a separate fixture function for each configuration value keeps them simple, clear, and specific. They also let tests and other fixtures declare only the values they actually need, as shown in the sketch below.
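For instance, a hypothetical test that only cares about the timeout could depend on config_wait_time alone and never touch the browser choice. This is purely an illustration and is not added to our project:
# Hypothetical illustration: this test declares only the value it needs
def test_wait_time_is_non_negative(config_wait_time):
    assert config_wait_time >= 0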
Run the test and make sure everything works:
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item
tests/test_web.py . [100%]
=========================== 1 passed in 4.58 seconds ===========================
Nice! However, we should also make sure the validation itself works. Let's change the value of "browser" to "safari", an unsupported browser.
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item
tests/test_web.py E [100%]
==================================== ERRORS ====================================
________________ ERROR at setup of test_basic_duckduckgo_search ________________
config = {'browser': 'safari', 'wait_time': 10}
@pytest.fixture(scope='session')
def config_browser(config):
# Validate and return the browser choice from the config data
if 'browser' not in config:
raise Exception('The config file does not contain "browser"')
elif config['browser'] not in SUPPORTED_BROWSERS:
> raise Exception(f'"{config["browser"]}" is not a supported browser')
E Exception: "safari" is not a supported browser
tests/conftest.py:30: Exception
=========================== 1 error in 0.09 seconds ============================
Wow! The error message clearly explains why setup failed. Now, what happens if we remove the browser selection from the config file entirely?
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item
tests/test_web.py E [100%]
==================================== ERRORS ====================================
________________ ERROR at setup of test_basic_duckduckgo_search ________________
config = {'wait_time': 10}
@pytest.fixture(scope='session')
def config_browser(config):
# Validate and return the browser choice from the config data
if 'browser' not in config:
> raise Exception('The config file does not contain "browser"')
E Exception: The config file does not contain "browser"
tests/conftest.py:28: Exception
=========================== 1 error in 0.10 seconds ============================
Excellent! Another helpful error message. For the last check, let's restore the browser selection but remove the timeout:
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item
tests/test_web.py . [100%]
=========================== 1 passed in 4.64 seconds ===========================
The test still runs because the timeout is optional. The changes we've made have clearly paid off! Remember that sometimes you need to test your tests as well.
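One way to make "testing your tests" repeatable is to pull the validation logic into a plain function and unit test it directly. The sketch below is only an illustration; the validate_browser helper is an assumption, not part of the project:
import pytest

SUPPORTED_BROWSERS = ['chrome', 'firefox']

def validate_browser(config):
    # Same validation logic as the config_browser fixture, as a plain function
    if 'browser' not in config:
        raise Exception('The config file does not contain "browser"')
    elif config['browser'] not in SUPPORTED_BROWSERS:
        raise Exception(f'"{config["browser"]}" is not a supported browser')
    return config['browser']

def test_unsupported_browser_is_rejected():
    # The unsupported "safari" choice should be rejected with a clear message
    with pytest.raises(Exception, match='not a supported browser'):
        validate_browser({'browser': 'safari'})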
Final touches
There are two more small things we can do to make the test code cleaner. First, let's move our web fixtures into a conftest.py file so that all tests can use them, not just those in tests/test_web.py. Second, let's pull a few literal values out into module-level variables.
Create a new file named tests/conftest.py with the following code:
import json
import pytest
from selenium.webdriver import Chrome, Firefox
CONFIG_PATH = 'tests/config.json'
DEFAULT_WAIT_TIME = 10
SUPPORTED_BROWSERS = ['chrome', 'firefox']
@pytest.fixture(scope='session')
def config():
# Read the JSON config file and return it as a parsed dict
with open(CONFIG_PATH) as config_file:
data = json.load(config_file)
return data
@pytest.fixture(scope='session')
def config_browser(config):
# Validate and return the browser choice from the config data
if 'browser' not in config:
raise Exception('The config file does not contain "browser"')
elif config['browser'] not in SUPPORTED_BROWSERS:
raise Exception(f'"{config["browser"]}" is not a supported browser')
return config['browser']
@pytest.fixture(scope='session')
def config_wait_time(config):
# Validate and return the wait time from the config data
return config['wait_time'] if 'wait_time' in config else DEFAULT_WAIT_TIME
@pytest.fixture
def browser(config_browser, config_wait_time):
# Initialize WebDriver
if config_browser == 'chrome':
driver = Chrome()
elif config_browser == 'firefox':
driver = Firefox()
else:
raise Exception(f'"{config_browser}" is not a supported browser')
# Wait implicitly for elements to be ready before attempting interactions
driver.implicitly_wait(config_wait_time)
# Return the driver object at the end of setup
yield driver
# For cleanup, quit the driver
driver.quit()
The complete content of tests/test_web.py should now be simpler and cleaner:
import pytest
from pages.result import DuckDuckGoResultPage
from pages.search import DuckDuckGoSearchPage
def test_basic_duckduckgo_search(browser):
# Set up test case data
PHRASE = 'panda'
# Search for the phrase
search_page = DuckDuckGoSearchPage(browser)
search_page.load()
search_page.search(PHRASE)
# Verify that results appear
result_page = DuckDuckGoResultPage(browser)
assert result_page.link_div_count() > 0
assert result_page.phrase_result_count(PHRASE) > 0
assert result_page.search_input_value() == PHRASE
Now that's clean, Pythonic test code!
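Because the fixtures now live in conftest.py, any other test module under tests/ can use them without imports; pytest discovers them automatically. For example, a hypothetical extra module (not part of the project) could simply declare the browser fixture as a parameter:
# Hypothetical tests/test_example.py - no fixture imports are needed
def test_duckduckgo_page_title(browser):
    browser.get('https://www.duckduckgo.com')
    assert 'DuckDuckGo' in browser.title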
What's next?
The sample code for our testing project is now complete. You can use it as a base for creating new tests. You can also find the finished example project on GitHub. However, finishing the code doesn't mean we're done learning. In future articles, we'll talk about how to take Python test automation to the next level!