Guidelines for Writing PTL Tests
This page appears in the PTL Developer's Guide.

PTL Test Directory Structure
Please follow these location conventions when adding new PTL tests:
Directory      Description of Contents
tests          Top-level test directory containing the subdirectories below
functional     Feature-specific tests. Test suites under this directory should inherit base class TestFunctional
interfaces     Tests related to PBS interfaces (IFL, TM, RM). Test suites under this directory should inherit base class TestInterfaces
performance    Performance tests. Test suites under this directory should inherit base class TestPerformance
resilience     Server & comm failover tests; stress, load, and endurance tests. Test suites under this directory should inherit base class TestResilience
security       Security tests. Test suites under this directory should inherit base class TestSecurity
selftest       Tests of PTL itself. Test suites under this directory should inherit base class TestSelf
upgrades       Upgrade-related tests. Test suites under this directory should inherit base class TestUpgrades

PTL Test Naming Conventions
Test File Conventions
Each test file contains one test suite.
Name: pbs_<feature name>.py
- Start the file name with "pbs_", then use the feature name
- Use only lower-case characters and the underscore ("_"); the underscore is the only special character allowed
- No camel case in test file names
- Start comments inside the file with a single # (no triple/single-quoted comments)
- File permissions should be 0644
Examples: pbs_reservations.py, pbs_preemption.py
Test Suite Conventions
Each test suite is a Python class made up of tests; each test is a method of the class.
Name: Test<Feature>
- Start the name of the test suite with the string "Test"
- Use a unique, English-language, explanatory name
- Follow the naming conventions for Python class names (camel case)
- A docstring is mandatory; it gives a broad summary of the tests in the suite
- Start comments with a single # (no triple/single-quoted comments)
- Do not use ticket IDs
Examples: TestReservations, TestNodesQueues
Test Case Conventions
Each test is a Python method and is a member of the Python class defining the test suite. A test is also called a test case.
Name: test_<test description>
- Start the test name with "test_", then use all lower-case alphanumeric characters for the test description
- Make the name unique, accurate, and explanatory, but concise; it can have multiple words if needed
- A docstring is mandatory; it gives a summary of the whole test case
- Tagging is optional; a tag can be based on the category the test belongs to, e.g. @tags('smoke')
- The test case name need not include the feature name, as that is already part of the test suite name
- Start comments with a single # (no triple/single-quoted comments)
Examples: test_create_routing_queue, test_finished_jobs
Inherited Python Classes
PTL is derived from, and inherits classes from, the Python unittest unit testing framework. PTL test suites are ultimately derived from the unittest TestCase class.
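Putting the conventions above together, here is a minimal sketch of a test file; the feature name, class name, and test name are hypothetical:

# Hypothetical file: tests/functional/pbs_myfeature.py
from tests.functional import *


class TestMyFeature(TestFunctional):
    """
    Broad summary of the tests in this suite.
    """

    def test_basic_behavior(self):
        """
        One-line summary of what this test case verifies.
        """
        # Test body goes here; see the sections below.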
Writing Your PTL Test
Main Parts of a PTL Test
You can think of a PTL test as having 3 parts: setting up your environment, creating and running your test workload, and checking your results. The sections below walk through each part.
Using Attributes
Many PTL commands take an attribute dictionary, which maps PBS attribute names to the values you want.
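For example, a dictionary like the following can be passed to the calls described below (the queue name is hypothetical):

# Request 2 CPUs via a select statement and target a specific queue.
a = {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'workq2'}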
Setting Up Your Environment
The idea here is to create an environment in which the test can run no matter what machine it is being run on. You may need to create queues, nodes, or resources, set attributes, etc.
First you need to set up your vnode(s). This is a required step: if you don't, the natural vnode is left as is, which means the vnode will have different resources depending on what machine the test is run on. This can be done in one of two ways (see the example below).
After you set up your vnodes, you might need to set attributes on the server or queues, or even create new queues or resources. This is all done via the self.server.manager() call.
Examples of Setting up Environment
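A minimal sketch of an environment setup, assuming a PTL version that provides server.create_vnodes(); the vnode prefix, resource name, and queue name are hypothetical, and the exact create_vnodes() signature varies between PTL versions:

# Replace the natural vnode with two known, identical vnodes so the
# test sees the same resources on every machine.
a = {'resources_available.ncpus': 4, 'resources_available.mem': '2gb'}
self.server.create_vnodes('vn', a, num=2, mom=self.mom)

# Create a custom resource and a new execution queue, then set a
# server attribute, all via self.server.manager().
self.server.manager(MGR_CMD_CREATE, RSC,
                    {'type': 'long', 'flag': 'nh'}, id='foo_res')
self.server.manager(MGR_CMD_CREATE, QUEUE,
                    {'queue_type': 'execution', 'enabled': 'True',
                     'started': 'True'}, id='workq2')
self.server.manager(MGR_CMD_SET, SERVER, {'job_history_enable': 'True'})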
Creating Your Test Workload
Usually, to run a test you need to submit jobs or reservations. You create these as PTL Job or Reservation objects, passing the attribute dictionary (see the sketch after this section). The attribute dictionary usually consists of the resources (e.g. Resource_List.ncpus or Resource_List.select) and maybe other attributes like ATTR_o. To submit a job to another queue, use ATTR_queue. This just creates a PTL job or reservation object. By default these jobs will sleep for 100 seconds and exit; to change the sleep time of a job, call j.set_sleep_time(N). Finally, you submit your job or reservation. Many tests require more than one job or reservation; follow the above steps multiple times for those.
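A minimal sketch of creating and submitting a workload; the attribute values are illustrative, and the standard time module (pulled in by the test suite's star import) is assumed to be available:

# Create a job requesting one CPU, shorten its default 100-second
# sleep, and submit it.
a = {'Resource_List.select': '1:ncpus=1'}
j = Job(TEST_USER, attrs=a)
j.set_sleep_time(20)
jid = self.server.submit(j)

# Reservations are created and submitted the same way.
now = int(time.time())
ra = {'Resource_List.select': '1:ncpus=1',
      'reserve_start': now + 30, 'reserve_end': now + 90}
r = Reservation(TEST_USER, attrs=ra)
rid = self.server.submit(r)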
Running Your Test
Once you have submitted your job or reservation, you should check that it is in the correct state. As your test runs, you should make sure each step has completed correctly. This is mostly done through expect(). The expect() function queries PBS up to 60 times, once every half second (30 seconds in total), to see whether the attributes are true. If the attribute is still not true after 60 attempts, a PtlExpectError exception is raised.
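For example, a couple of typical expect() calls, continuing the sketch above:

# Wait (up to the default 30 seconds) for the job to start running.
self.server.expect(JOB, {'job_state': 'R'}, id=jid)

# Wait for the reservation to be confirmed.
self.server.expect(RESV,
                   {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')},
                   id=rid)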
Checking Your Results
This is where you check whether your test has passed. To do this you use either self.server.expect() as described above, log_match(), or the series of assert functions provided by unittest. The most useful asserts are self.assertTrue(), self.assertFalse(), and self.assertEqual(). There are asserts for all the normal comparison operators (even the in operator); for example, self.assertGreater(a, b) tests a > b. Each of the asserts takes a final argument that is a message printed if the assertion fails. The log_match() function is available on each of the daemon objects (e.g. self.server, self.mom, self.scheduler, etc.).
Examples of Checking Results
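A sketch of result checking, continuing the example above; it assumes job_history_enable was set during setup (as in the setup sketch), and the log messages shown are typical but may vary by PBS version:

# expect() with extend='x' can see finished jobs when job history
# is enabled on the server.
self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')

# log_match() is available on each daemon object.
self.mom.log_match('%s;Started' % jid)
self.server.log_match('%s;Exit_status=0' % jid)

# unittest asserts; the final argument is printed if the assert fails.
status = self.server.status(JOB, id=jid, extend='x')
self.assertEqual(status[0]['job_state'], 'F',
                 'job did not finish as expected')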
Adding PTL Test Tags
PTL test tags let you list or execute a category of similar or related test cases across test suites in test directories. To include a test case in a category, tag it with the @tags(<tag_name>) decorator. Tag names are case-insensitive. See the pbs_benchpress page for how you can use tags to select tests.
Pre-defined PTL Test Tags
#    Tag Name           Description
1    smoke              Tests related to basic features of PBS, such as job or reservation submission/execution/tracking, etc.
2    server             Tests related exclusively to server features, e.g. server requests, receiving & sending jobs for execution, etc.
3    sched              Tests related exclusively to the scheduler, e.g. the scheduler daemon, placement of jobs, implementation of scheduling policy, etc.
4    mom                Tests related to mom, i.e. processing of jobs received from the server and reporting back, e.g. mom polling, etc.
5    comm               Tests related to communication between server, scheduler, and mom.
6    hooks              Tests related to server hooks or mom hooks.
7    reservations       Tests related to reservations.
8    configuration      Tests related to any PBS daemon configuration.
9    accounting         Tests related to accounting logs.
10   scheduling_policy  Tests related to the scheduler's job scheduling policy.
11   multi_node         Tests involving a complex with more than one node.
12   commands           Tests related to PBS commands and their output (client-related).
13   security           Tests related to authentication, authorisation, etc.
14   windows            Tests that can run only on the Windows platform.
15   cray               Tests that can run only on the Cray platform.
16   cpuset             Tests that can run only on a cpuset system.
Tagging Test Cases
Examples of tagging test cases:
All the test cases of pbs_smoketest.py are tagged with "smoke":
@tags('smoke')
class SmokeTest(PBSTestSuite)
Multiple tags can be specified, as shown here:
@tags('smoke', 'mom', 'configuration')
class Mom_fail_requeue(TestFunctional)
Using Tags to List Tests
Use the --tags-info option to list the test cases with a specific tag. For example, to find test cases tagged with "smoke":

pbs_benchpress --tags-info --tags=smoke
Finding Existing Tests
To find a test case for a particular feature, e.g. an ASAP reservations test case:
- Look for the feature's test file, e.g. pbs_reservations.py
- List the test cases in a directory, in a suite, or with a tag:

pbs_benchpress -t TestFunctional -i
pbs_benchpress -t TestReservations -i --verbose
pbs_benchpress --tags-info --tags=reservations --verbose

The same commands can be used to list tests inside the other test directories, e.g. all reservations tests inside the performance directory.
Placing New Tests in Correct Location
To add a new test case for a particular feature or bug fix, e.g. a test case for a bug fix that updated accounting logs:
- In the functional test directory, look for any test file or test suite associated with "log". If one is present, add the test case to that test suite:

pbs_benchpress -t TestFunctional -i

- If the test case seems to belong to any of the features listed in the tag list, tag it accordingly, e.g. @tags('accounting')
Using Tags to Run Desired Tests
Use the --tags option to execute the test cases, including hierarchical tests, tagged with a specific tag. For example, to execute the test cases tagged with "smoke":

pbs_benchpress --tags=smoke

To run all scheduling_policy tests:

pbs_benchpress --tags=scheduling_policy

Tests can also be run by suite name:

pbs_benchpress -t <suite names>