Guidelines for Writing PTL Tests

Please follow these location conventions when adding new PTL tests:

PTL Test Directory Structure


Each category of tests lives in its own subdirectory of the tests directory:

  • functional: Feature-specific tests. Test suites under this directory should inherit base class TestFunctional.
  • interfaces: Tests related to PBS interfaces (IFL, TM, RM). Test suites under this directory should inherit base class TestInterfaces.
  • performance: Performance tests. Test suites under this directory should inherit base class TestPerformance.
  • resilience: Server & comm failover tests; stress, load, and endurance tests. Test suites under this directory should inherit base class TestResilience.
  • security: Security tests. Test suites under this directory should inherit base class TestSecurity.
  • selftest: Tests of PTL itself. Test suites under this directory should inherit base class TestSelf.
  • upgrades: Upgrade-related tests. Test suites under this directory should inherit base class TestUpgrades.

PTL Test Naming Conventions

Test File Conventions

Each test file contains one test suite.

Name: pbs_<feature name>.py

- Start the file name with pbs_, followed by the feature name

- Use only lower-case characters and the underscore (“_”); the underscore is the only special character allowed

- Start comments inside the file with a single # (no triple- or single-quoted comments)

- Do not use camel case in test file names

- File permissions should be 0644

Examples: pbs_reservations.py, pbs_preemption.py

Test Suite Conventions

Each test suite is a Python class made up of tests.  Each test is a method in the class.

Name: Test<Feature>

- Start the test suite name with the string “Test”

- Use a unique, explanatory English-language name

- Follow Python class naming conventions (CamelCase)

- A docstring is mandatory; it gives a broad summary of the tests in the suite

- Start comments with a single # (no triple- or single-quoted comments)

- Do not use ticket IDs in names

Examples: TestReservations, TestNodesQueues

Test Case Conventions

Each test is a Python method and is a member of a Python class defining a test suite.  A test is also called a test case.

Name: test_<test description>

- Start the test name with "test_", then use all lower-case alphanumeric characters for the test description

- Make the name unique, accurate, and explanatory, but concise; it can have multiple words if needed

- A docstring is mandatory; it summarizes the whole test case

- Tagging is optional; a tag can be based on the category to which the test belongs. Ex: @tags('smoke')

- The test case name need not include the feature name (it is already part of the test suite name)

- Start comments with a single # (no triple- or single-quoted comments)

Examples: test_create_routing_queue, test_finished_jobs

Inherited Python Classes

PTL is built on the Python unittest unit testing framework and inherits classes from it.

PTL test suites derive from the unittest TestCase class.
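
Putting these conventions together, here is a minimal sketch of a functional test file (the feature name and test body are hypothetical; the import follows the usual pattern for tests under the functional directory):

>>>>>
# pbs_myfeature.py  (hypothetical feature name; permissions 0644)
from tests.functional import *


class TestMyfeature(TestFunctional):
    """
    Broad summary of the tests in this suite
    """

    def test_basic_behavior(self):
        """
        Summary of what this test case verifies
        """
        # comments use a single #, never triple/single-quoted strings
        self.assertTrue(True)
>>>>>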

Writing Your PTL Test

Main Parts of a PTL Test

You can think of a PTL test as having 3 parts:

  1. Setting up your environment
  2. Running your test
  3. Checking the results

Using Attributes

Many PTL commands take an attribute dictionary. 

  • This is of the form {'attr_name1': 'val1', 'attr_name2': 'val2', …, 'attr_nameN': 'valN'}
  • To modify resources, use the 'attr_name.resource_name' form, like 'resources_available.ncpus' or 'Resource_List.ncpus'
  • Many of the 'attr_nameN' keys can be replaced with their formal names, like ATTR_o or ATTR_queue.  A list of these can be found in the pbs_ifl.h C header file.
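
For example, a sketch of typical attribute dictionaries (the values are placeholders):

>>>>>
# plain attribute names, using the 'attr_name.resource_name' form for resources
a = {'resources_available.ncpus': 4, 'resources_available.mem': '4gb'}

# the same idea using formal names from pbs_ifl.h
b = {ATTR_queue: 'workq2', 'Resource_List.select': '1:ncpus=2'}
>>>>>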

Setting Up Your Environment

The idea here is to create an environment in which the test can run no matter what machine it runs on.  You may need to create queues, nodes, or resources, set attributes, etc.

First you need to set up your vnode(s).  This step is required: if you skip it, the natural vnode is left as is, which means the vnode will have different resources depending on what machine the test is run on.  This can be done in one of two ways:

  1. If you only need one vnode, modify the natural vnode by setting resources/attributes with self.server.manager():  self.server.manager(MGR_CMD_SET, VNODE, {attribute dictionary}, id=self.mom.shortname)
  2. If you need more than one vnode, create them with self.server.create_vnodes(attrib={attribute dictionary}, num=N, mom=self.mom)

After you set up your vnodes, you might need to set attributes on servers or queues or even create new queues or resources.  This is all done via the self.server.manager() call.
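
A minimal sketch of both approaches (the resource values and vnode count are arbitrary):

>>>>>
# one vnode: reshape the natural vnode
a = {'resources_available.ncpus': 8}
self.server.manager(MGR_CMD_SET, VNODE, a, id=self.mom.shortname)

# several vnodes: create 4 uniform vnodes on the test mom
a = {'resources_available.ncpus': 2}
self.server.create_vnodes(attrib=a, num=4, mom=self.mom)
>>>>>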

Examples of Setting up Environment

  • To create a queue named workq2:
    • self.server.manager(MGR_CMD_CREATE, QUEUE, {attribute dictionary}, id=<name>)
  • Similarly, to create a resource:
    • self.server.manager(MGR_CMD_CREATE, RSC, {'type': <type>, 'flag': <flags>}, id=<name>)
  • To set a server attribute:
    • self.server.manager(MGR_CMD_SET, SERVER, {attribute dictionary})
  • To set a queue attribute:
    • self.server.manager(MGR_CMD_SET, QUEUE, {attribute dictionary}, id=<queue name>)
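
As a concrete sketch of these calls (the queue name, resource definition, and attribute values are illustrative):

>>>>>
# create and start an execution queue named workq2
a = {'queue_type': 'execution', 'enabled': 'True', 'started': 'True'}
self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')

# create a consumable host-level long resource named foo
self.server.manager(MGR_CMD_CREATE, RSC, {'type': 'long', 'flag': 'nh'}, id='foo')

# set a server attribute
self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})
>>>>>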

Creating Your Test Workload

Usually to run a test you need to submit jobs or reservations.  These are of the form:

  • j = Job(<user>, {attribute dictionary})

OR

  • r = Reservation(<user>, {attribute dictionary})

<user> can be one of the test users created for PTL.  A common user is TEST_USER.

The attribute dictionary usually consists of the resources (e.g. Resource_List.ncpus or Resource_List.select) and maybe other attributes like ATTR_o.  To submit a job to another queue, use ATTR_queue.

This just creates a PTL job or reservation object.  By default these jobs sleep for 100 seconds and exit.  To change a job's sleep time, call j.set_sleep_time(N).

Finally you submit your job/reservation.

  • job_id = self.server.submit(j)
  • resv_id = self.server.submit(r)

Many tests require more than one job or reservation.  Repeat the above steps for each.

Once you have submitted your job or reservation, you should check if it is in the correct state.

  • self.server.expect(JOB, {'job_state': 'R'}, id=job_id)
  • self.server.expect(RESV, {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}, id=resv_id) (don't worry about this funny match, just use it)
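
Putting the workload steps together, a sketch (the queue name and resource request are illustrative):

>>>>>
j = Job(TEST_USER, {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'workq2'})
j.set_sleep_time(20)
jid = self.server.submit(j)
self.server.expect(JOB, {'job_state': 'R'}, id=jid)
>>>>>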

Running Your Test

As your test runs, make sure each step has completed correctly.  This is mostly done through expect().  The expect() function queries PBS up to 60 times, once every half second (30 seconds total), to see whether the attributes are true.  If the attributes are still not true after 60 attempts, a PtlExpectError exception is raised.
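
If the default polling window does not fit your test, expect() accepts overrides; a sketch, assuming the max_attempts, interval, and extend parameters of PTL's expect() (check them against your PTL version):

>>>>>
# poll every 2 seconds for up to 120 attempts until the job finishes;
# finished jobs are only visible with the 'x' qstat extension
self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x',
                   max_attempts=120, interval=2)
>>>>>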

Checking Your Results

This is where you check whether your test passed.  To do this, use self.server.expect() as described above, log_match(), or the assert functions provided by unittest.  The most useful asserts are self.assertTrue(), self.assertFalse(), and self.assertEqual().  There are asserts for all the normal conditional operators (even the in operator); for example, self.assertGreater(a, b) tests a > b.  Each assert takes an optional final argument: a message printed if the assertion fails.  The log_match() function is available on each of the daemon objects (e.g. self.server, self.mom, self.scheduler, etc.).

Examples of Checking Results

  • self.server.expect(NODE, {'state': 'offline'}, id=<name>)
  • self.scheduler.log_match('Insufficient amount of resource')
  • self.assertTrue(a)
  • self.assertEqual(a, b)
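
For instance, a sketch combining these checks (the node id, log message, and expected job count are illustrative):

>>>>>
# verify the natural vnode went offline and the scheduler logged the reason
self.server.expect(NODE, {'state': 'offline'}, id=self.mom.shortname)
self.scheduler.log_match('Insufficient amount of resource')

# unittest-style assert with a failure message
self.assertEqual(len(self.server.status(JOB)), 1,
                 'expected exactly one job to remain')
>>>>>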

Adding PTL Test Tags

PTL test tags let you list or execute a category of similar or related test cases across test suites in test directories.  To include a test case in a category, tag it with the “@tags(<tag_name>)” decorator. Tag names are case-insensitive.

See the pbs_benchpress page for how you can use tags to select tests.


Pre-defined PTL Test Tags


  • smoke: Tests related to basic features of PBS, such as job or reservation submission/execution/tracking, etc.
  • server: Tests related exclusively to server features. Ex: server requests, receiving & sending jobs for execution, etc.
  • sched: Tests related exclusively to the scheduler. Ex: tests related to the scheduler daemon, placement of jobs, implementation of scheduling policy, etc.
  • mom: Tests related to mom, i.e. processing of jobs received from the server and reporting back. Ex: mom polling, etc.
  • comm: Tests related to communication between server, scheduler, and mom.
  • hooks: Tests related to server hooks or mom hooks.
  • reservations: Tests related to reservations.
  • configuration: Tests related to any PBS daemon configuration.
  • accounting: Tests related to accounting logs.
  • scheduling_policy: Tests related to the scheduler's job scheduling policy.
  • multi_node: Tests involving a complex with more than one node.
  • commands: Tests related to PBS commands and their outputs (client-related).
  • security: Tests related to authentication, authorisation, etc.
  • windows: Tests that can run only on the Windows platform.
  • cray: Tests that can run only on the Cray platform.
  • cpuset: Tests that can run only on a cpuset system.


Tagging Test Cases

Examples of tagging test cases:

All the test cases of pbs_smoketest.py are tagged with “smoke”.

>>>>>
@tags('smoke')
class SmokeTest(PBSTestSuite):
>>>>>

Multiple tags can be specified, as shown here:
>>>>>
@tags('smoke', 'mom', 'configuration')
class Mom_fail_requeue(TestFunctional):
>>>>>

Using Tags to List Tests

Use the --tags-info option to list the test cases with a specific tag.  For example, to find test cases tagged with "smoke":

    pbs_benchpress --tags-info --tags=smoke

Finding Existing Tests

To find a test case for a particular feature:

Ex: Find an ASAP reservations test case

  1. Look for the appropriate directory, which is functional in this case
  2. Look for a relevant feature test suite file in this directory, or run a command to list the test suites

ex. pbs_reservations.py

ex. pbs_benchpress -t TestFunctional -i

  3. Look at the test suite info to get the docstrings of test cases related to ASAP reservations

pbs_benchpress -t TestReservations -i --verbose

  4. All reservations tests can be listed as below:

pbs_benchpress --tags-info --tags=reservations --verbose

The same command can be used to list tests inside specific directories. Ex: all reservations tests inside the performance directory.

Placing New Tests in Correct Location

To add a new test case for a particular feature or bug fix:

Ex: A test case for a bug fix that updated accounting logs

  1. Look for the appropriate directory, which is functional in this case
  2. Look for a relevant feature test suite file, or run a command to list the test suites of the base class

ex. In the functional test directory, look for any test file / test suite associated with “log”.  If present, add the test case to that test suite

ex. pbs_benchpress -t TestFunctional -i

  3. If no such test suite exists, add a new test suite named Test<Featurename> in a file named pbs_<featurename>.py
  4. Tag the new test case if necessary

If the test case belongs to any of the features listed in the tag list, tag it accordingly.

Ex. @tags('accounting')

Using Tags to Run Desired Tests

Use the --tags option to execute the test cases tagged with a specific tag (including hierarchical tests).  For example, to execute the test cases tagged with "smoke":

    pbs_benchpress --tags=smoke

Ex: All scheduling_policy tests

  1. Look for the appropriate directory, which is functional in this case
  2. Look for that feature in the tag list.  If present, run with the tag name, as below:

pbs_benchpress --tags=scheduling_policy

  3. If no tag is present, look for the relevant test suite(s) and run those:

pbs_benchpress -t <suite names>




