Behave Rest API – Web tutorial links
- How to Setup Python's BEHAVE to Automate REST APIs
- Understanding Behave Feature Files and Step Implementations
- Quick Overview of HTTP Messages
- Using Python's Request Library to Make Rest API calls
- Writing Step Implementations of your API Tests
- Using Scenario Outline to Make Your Tests Data Driven
- Behave Test Hooks - The Setup() and Teardown() Equivalent
- Run Custom Implementations in Test Hooks for Specific Scenarios
If you're used to traditional test frameworks, you've probably encountered setup() and teardown() methods. These are called test hooks, and just like the usual testing frameworks, Behave has its own version, which is defined in environment.py. This file is usually created at the top level of your test directory, as shown in my previous write-up on the common structure of Behave tests. These test hooks are important because this is where you'll inject pre-test and post-test instructions, such as preparing test data (fixtures), initializing web drivers for UI tests, or kicking off a mock server for component-level testing.
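For orientation, a minimal project tree might look like the sketch below. The tests/ and transaction.feature names are illustrative; steps_folder mirrors the path used later in this article, though note that out of the box Behave looks for a directory literally named steps:

tests/
    environment.py          # test hooks live here
    transaction.feature     # the feature file shown below
    steps_folder/           # Behave's default discovery expects "steps"
        implementations.py  # step implementations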
Behave's test hooks are a little bit different but easy to understand. At the time of this writing, Behave has a total of 10 hooks, as you can see below:
before_all(context)
after_all(context)
before_feature(context, feature)
after_feature(context, feature)
before_tag(context, tag)
after_tag(context, tag)
before_scenario(context, scenario)
after_scenario(context, scenario)
before_step(context, step)
after_step(context, step)
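All of these hooks receive the context object, which persists across the whole run, so it's the natural place to share state between hooks and steps. Here's a minimal sketch; the base_url attribute is my own illustration and not part of Behave:

def before_all(context):
    # Runs once before everything; attributes set on context here
    # are visible to every feature, scenario and step that follows.
    context.base_url = "http://localhost:8000"

def before_scenario(context, scenario):
    # Runs before each scenario; scenario.name and scenario.tags
    # let you inspect what is about to execute.
    print("starting: {} with tags {}".format(scenario.name, scenario.tags))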
Test Hook Execution Order
It is easier to recall the order of execution if you've written a feature file before, because the execution order of the test hooks is patterned after how you structure the contents of the feature files. Take a look at the feature file below and guess which test hook will be triggered first.
Feature: Application can validate a transaction based on specific tags associated in the account profile

  @fraudulent
  Scenario: Fraudulent tagged account trades an asset
    Given an account with "2400" asset points
    But the account profile is tagged as "fraudulent"
    When the account owner trades his asset
    Then the application should prompt an "XXXX" error
To test your hypothesis about the test hook execution order, you can implement an environment.py and define all the listed test hooks so that each one simply prints its method call. Below is the snippet, which you can copy-paste and run as a behave test.
def before_feature(context, feature):
    print("before_feature activated")

def after_feature(context, feature):
    print("after_feature activated")

def before_tag(context, tag):
    print("before_tag activated")

def before_scenario(context, scenario):
    print("before_scenario activated")

def before_step(context, step):
    print("before_step activated")

def after_step(context, step):
    print("after_step activated")

def after_scenario(context, scenario):
    print("after_scenario activated")

def after_tag(context, tag):
    print("after_tag activated")

def before_all(context):
    print("before_all activated")

def after_all(context):
    print("after_all activated")
And of course, to properly run the test, let's implement the steps in the steps_folder/implementations.py of your project directory and let each function simply do nothing.
from behave import given, when, then, step

@given(u'an account with "2400" asset points')
def step_impl(context):
    pass

@step(u'the account profile is tagged as "fraudulent"')
def step_impl(context):
    pass

@when(u'the account owner trades his asset')
def step_impl(context):
    pass

@then(u'the application should prompt an "XXXX" error')
def step_impl(context):
    pass
The only remaining thing to do is to run behave --no-color --no-capture and spot the print() statements that you've defined in environment.py. But if you're too lazy to run the experiment, let me spill the beans by listing the execution order when you run a feature file with a single scenario:
before_all()
    before_feature()
        before_tag()
        before_scenario()
            before_step()
            after_step()
        after_scenario()
        after_tag()
    after_feature()
after_all()
Pay close attention to the indentations. The indentation shows the scope covered by each hook, meaning the partnered after_* hook will not run until all the hooks inside its scope are done executing.
If you've been automating tests for a while, then this might be a walk in the park. But to those who are new to automating tests, I suggest running experiments and letting the scoping rules sink in. My suggestion is to extend the snippets that I've provided in this article and see what happens when there are two scenarios defined in the feature file; a sketch of the expected order follows below.
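If you want to check your result afterwards, here is the order I'd expect with two tagged scenarios in the same feature; the scenario-level block simply repeats inside the feature scope:

before_all()
    before_feature()
        before_tag()
        before_scenario()       # first scenario
            before_step()
            after_step()
        after_scenario()
        after_tag()
        before_tag()
        before_scenario()       # second scenario
            before_step()
            after_step()
        after_scenario()
        after_tag()
    after_feature()
after_all()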
Now that you know the scoping and order of the test hooks, the next section is about how to use the test hook attributes if you want to run a special instruction for a specific scenario, step, feature, or tag.