PTL test execution report in JSON format
Design for JSON format of PTL test execution report.
Forum Discussion: http://community.pbspro.org/t/design-for-ptl-test-execution-report-in-json-format/1157
1. New pbs_benchpress option value: “--db-type=json”:
Output file name: “ptl_test_results.json”
When pbs_benchpress is run with the option “--db-type=json”, the test execution output is written in JSON format to the file "ptl_test_results.json" in the execution directory. The file is overwritten if it already exists. Below are the details of the information that appears in the output JSON file:
Ex: pbs_benchpress -t SmokeTest --db-type=json
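The generated file is ordinary JSON, so it can be validated or pretty-printed with any JSON tool; for example, Python's built-in json.tool module can be used (shown here only as a convenience, not a PTL feature):
Ex: python -m json.tool ptl_test_results.json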
2. JSON output format:
{
"command":"<Complete pbs_benchpress command with arguments>",
"user":"<test runner username>",
"product_version":"<Product version number being tested upon>",
"run_id": <time stamp when test started taken as run identifier>,
"test_conf": { #dictionary of key value pairs given as test configuration parameters (specified in "-p" option or "--param-file
"<key>": "<value>",
"<key>": "<value>",
"<key>": "<value>"
},
"machine_info": { #dictionary of details of hostnames specifying dictionary
"<hostname>": {
"platform":"<uname command output of the PBS node>",
"os_info": "<Operating system information>",
"pbs_install_type":"<Type of pbs installation server/execution/client/communication/null>"
}
},
"testsuites": { #dictionary of details of test suites which inturn are dictionaries
"<testsuite1>": { #Name of the test suite
"docstring":"<Docstring of test suite>",
"module": "<test suite module>",
"file": "<test suite file name>",
"testcases": { #dictionary of details of test cases which are dictionaries
"<testcase1>": {
"docstring":"<Docstring of test case>",
"tags": [<List of tags of the test case>],
"requirements": { #dictionary of Test case requirements i.e. key value pairs mentioned in @requirements decorator
"<key>": "<value>",
"<key>": "<value>",
"<key>": "<value>"
},
"results": { #dictionary of test case results
"duration": "<time duration of the test case run>",
"status": "<test result status of test case>",
"status_data": "<Output or Error message string of the result>",
"start_time": "<start time of the test case>",
"end_time": "<end time of the test case>",
"measurements": [ #List of dictionaries containing data related to the test results, e.g. performance test results providing analytical data
{
},
{
}
]
}
},
"<testcase2>": {
"docstring":"<Docstring of test case>",
"tags": [<List of tags of the test case>],
"requirements": {
"<key>": "<value>",
"<key>": "<value>",
"<key>": "<value>"
},
"results": {
"duration": "<time duration of the test case run>",
"status": "<test result status of test case>",
"status_data": "",
"start_time": "",
"end_time": "",
"measurements": [
{
},
{
}
]
}
}
}
},
"<testsuite2>": {
"docstring":"<Docstring of test suite>",
"module": "<test module>",
"file": "<Path of the test file>",
"testcases": {
"<testcase1>": {
"docstring":"<Docstring of testcase1>",
"tags": [<List of tags of the test case>],
"requirements": {
"<key>": "<value>",
"<key>": "<value>",
"<key>": "<value>"
},
"results": {
"duration": "<time duration of the test case run>",
"status": "<test result status of test case>",
"status_data": "",
"start_time": "",
"end_time": "",
"measurements": [
{
},
{
}
]
}
},
"<testcase2>": {
"docstring":"<Docstring of testcase2>",
"tags": [<List of tags of the test case>],
"requirements": {
"<key>": "<value>",
"<key>": "<value>",
"<key>": "<value>"
},
"results": {
"duration": "<time duration of the test case run>",
"status": "<test result status of test case>",
"status_data": "",
"start_time": "",
"end_time": "",
"measurements": [
{
},
{
}
]
}
}
}
}
},
"test_summary": { #Total test run summary
"result_summary": {
"run": <Total count of tests run>,
"succeeded": <Total count of tests that succeeded>,
"failed": <Total count of tests that failed>,
"errors": <Total count of tests that errored out>,
"skipped": <Total count of tests that got skipped>,
"timedout": <Total count of tests that got timedout>
},
"test_start_time": <Time when test started>,
"test_end_time": <Time when test ended>,
"test_duration": "<Total test run duration>",
"tests_with_failures": "<List of test cases that failed and errored out>",
"test_suites_with_failures": "<List of test suites that has failed or errored test cases>"
}
}
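Since the report is plain JSON, any JSON parser can consume it. The following Python sketch (an illustration only, not part of PTL; it simply walks the fields defined above) prints the overall counts and a per-test status line:
import json

# load the report produced by "pbs_benchpress ... --db-type=json"
with open("ptl_test_results.json") as fp:
    report = json.load(fp)

# overall result counts from the "test_summary" section
summary = report["test_summary"]["result_summary"]
print("run: %d, failed: %d, errors: %d" %
      (summary["run"], summary["failed"], summary["errors"]))

# per-test status and duration from the "testsuites" section
for ts_name, ts in report["testsuites"].items():
    for tc_name, tc in ts["testcases"].items():
        res = tc["results"]
        print(ts_name, tc_name, res["status"], res["duration"])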
3. Interface: PBSTestSuite.set_test_measurements(dict{})
Visibility: Public
Change Control: Stable
Details:
This method needs to be called in the test code to set a dictionary of analytical results of the test (such as performance or load test measurements) that is populated as the test outcome in the "measurements" field of the JSON test report. When this method is called, the given dictionary is appended to an internal list of dictionaries; if the method is called more than once, each dictionary is appended to that list. The whole list is populated in the "measurements" field of the JSON test report.
This method can be called only from within a test case, since its data is placed under the test case section of the JSON report.
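The append behaviour described above can be pictured with the following minimal sketch; the class and attribute names are illustrative assumptions, not PTL's actual internals:
class MeasurementsSketch(object):
    def __init__(self):
        # hypothetical internal list; its contents end up in the
        # "measurements" field of the test case's JSON report entry
        self.measurements = []

    def set_test_measurements(self, mdic=None):
        # every call appends one dictionary to the internal list
        if mdic:
            self.measurements.append(mdic)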
4. Interface: PBSTestSuite.add_additional_data_to_report(dict{})
Visibility: Public
Change Control: Stable
Details:
This method needs to be called in order to set a dictionary that is merged with the JSON data and provided in the test report for the overall run. This may be required if the test wants to record additional non-analytical information about the test run, for example data in string format.
This method can be called from a test case method or from any custom setUpClass()/setUp()/tearDown()/tearDownClass() method. The data is added as a dictionary called "additional_data", parallel to the "test_summary" section of the test run. In case of repeated calls, the key-value pairs are merged into the same dictionary; if any key repeats, the latest value is retained in the merged report.
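The merge behaviour can be illustrated with plain dictionary updates; the keys and values below are made up for the example:
# repeated calls update one dictionary; a repeated key keeps its latest value
additional_data = {}
additional_data.update({"python_version": 2.7, "lab": "perf-lab-1"})
additional_data.update({"python_version": 3.6})
print(additional_data)  # {'python_version': 3.6, 'lab': 'perf-lab-1'}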
Below is a sample file that will be generated for a test case run from test suite SmokeTest:
Sample test:
def test_t1(self):
    # test steps
    # test steps
    d1 = {"job_submission": 3.299776315689087,
          "job_status_queued": x.y}
    self.set_test_measurements(d1)
    # some more steps
    d2 = {"job_status_running": p.q,
          "job_run_rate": m.n}
    self.set_test_measurements(d2)
    dx = {"python_version": 2.7,
          "Time_zone": "America/New_York (EDT, -0400)"}
    self.add_additional_data_to_report(dx)
json report:
{
"command":"pbs_benchpress -t SmokeTest.test_submit_job -p nomom=servertestnode1,moms=testnode2 -o out1.txt",
"user":"pbsuser1",
"pbs_version":"19.1.0",
"run_id": 083120181546,
"test_conf": {
"nomom": "serverhostname",
"moms": "testnode2"
},
"machine_info": {
"servertestnode1": {
"platform":"CentOS Linux release 7.5.1804 (Core)",
"os_info": "Linux",
"pbs_install_type":"server"
},
"testnode2": {
"platform":"CentOS Linux release 7.5.1804 (Core)",
"os_info": "Linux",
"pbs_install_type":"execution"
}
},
"testsuites": {
"SmokeTest": {
"docstring":"<Docstring of test suite>",
"module": "tests.functional.SmokeTest",
"file": "tests/functional/pbs_smoketest.py",
"testcases": {
"test_submit_job": {
"docstring":"<Docstring of test_submit_job>",
"tags": ['smoke'],
"requirements": {
"num_servers": 1,
"num_moms": 1
},
"results": {
"duration": "0:00:05.518129",
"status": "PASS",
"status_data": "",
"start_time": "Sat Jun 23 04:21:49 2018",
"end_time": "Sat Jun 23 04:27:49 2018",
"measurements": [
{"job_submission": 3.299776315689087,
"job_status_queued": x.y
},
{"job_status_running": p.q,
"job_run_rate": m.n
}
]
}
}
}
}
},
"test_summary": {
"result_summary": {
"run": 1,
"succeeded": 1,
"failed": 0,
"errors": 0,
"skipped": 0,
"timedout": 0
},
"test_start_time": 00:01:05,
"test_end_time": 0:06:05,
"test_duration": "0:00:05.518129",
"tests_with_failures": "",
"test_suites_with_failures": ""
},
"additional_data": {
"python_version":2.7,
"Time_zone": "America/New_York (EDT, -0400)"
}
}