Performance Test Automation

Community forum : http://community.pbspro.org/t/ptl-performance-test-automation/1127

Summary:

A performance test should be able to log its results in a JSON file containing the test details and measured values. This enables storing results in a database and fetching reports from it.

Interface: TestPerformance.perf_test_result(test_measure='str', value=float/[floats], unit='str')

Params:

test_measure : String (name of the value the test writer is measuring)

value : Float / List of Floats

unit : String (unit of the measured value)

Synopsis:

This API takes the value(s) passed by the test and stores them in a JSON file by calling the existing PTL API set_test_measurements(). The API will be available only for tests in the performance directory. The JSON output file will be generated in the format described in "PTL test execution report in JSON format".

The "measurements" key will hold a list of dictionaries, each storing its data in the format shown below:

"measurements": [{ 

                                "test_measure": 'str',

                                "unit": 'str'

                                "test_data": {  "trials": [{
                                                       "trial_no": <int>,
                                                        "value": <float> }],

                                                        "minimum": <float>,
                                                        "std_dev": <float>,
                                                        "maximum": <float>,
                                                        "mean": <float>
                                                        }

                                }]

"trials" key will hold a list of dicts which have value recorded for every trial .

mean, minimum, maximum, and std_dev will hold the corresponding values calculated over all the trials.
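As an illustration only (not the actual PTL implementation), the following sketch shows how perf_test_result() could compute these statistics and hand them to the existing set_test_measurements() API; everything other than set_test_measurements() is an assumption:

import statistics

def perf_test_result(self, test_measure, value, unit):
    # Normalize a single float into a list so both cases share one code path
    values = value if isinstance(value, list) else [value]
    test_data = {
        "minimum": min(values),
        "maximum": max(values),
        "mean": statistics.mean(values),
        # std_dev is 0 when only a single value was recorded
        "std_dev": statistics.stdev(values) if len(values) > 1 else 0,
    }
    if isinstance(value, list):
        # One dictionary per trial, numbered from 1, matching the format above
        test_data["trials"] = [{"trial_no": i + 1, "value": float(v)}
                               for i, v in enumerate(values)]
    self.set_test_measurements({"test_measure": test_measure,
                                "unit": unit,
                                "test_data": test_data})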

                                                         

For example:

Consider a test writer measuring the number of jobs submitted per second and the number of jobs run per second on the server while the scheduler is on.

If the test calls the API as follows, the JSON output shown below will be written to the JSON file for these measurements.

1st Case: Value is a list of floats

sub_rate = [255.32, 279.60, ............., 276.10, 214.03]  # the 10 values captured for submitting 1000 jobs in the test

self.perf_test_result("job_submission", sub_rate, 'jobs/sec')

2nd Case: Value is a float

run_rate = 192.31

self.perf_test_result("job_run_rate", run_rate ,'jobs/sec')


            "measurements": [

              {

                "test_data": {

                  "trials": [

                    {

                      "trial_no": 1,

                      "value": 255.32082230234357

                    },

                    {

                      "trial_no": 2,

                      "value": 279.60889590489984

                    },

                    {

                      "trial_no": 3,

                      "value": 213.03301884204984

                    },

                    {

                      "trial_no": 4,

                      "value": 201.6252296120904

                    },

                    {

                      "trial_no": 5,

                      "value": 220.6541168080534

                    },

                    {

                      "trial_no": 6,

                      "value": 229.02245731694467

                    },

                    {

                      "trial_no": 7,

                      "value": 244.59469735670262

                    },

                    {

                      "trial_no": 8,

                      "value": 162.3048622842061

                    },

                    {

                      "trial_no": 9,

                      "value": 276.10969752841146

                    },

                    {

                      "trial_no": 10,

                      "value": 214.03395575584588

                    }

                  ],

                  "minimum": 162.3048622842061,

                  "std_dev": 107.04571980925742,

                  "maximum": 279.60889590489984,

                  "mean": 229.63077537115478

                },

                "test_measure": "job_submission",

                "unit": "jobs/sec"

              },

              {

                "test_data": {

                  "minimum": 192.31,

                  "std_dev": 0,

                  "maximum": 192.31,

                  "mean": 192.31

                },

                "test_measure": "job_run_rate",

                "unit": "jobs/sec"

              }

            ]
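For context, here is a hedged sketch of a test that could produce the job_submission call above. Only TestPerformance and perf_test_result() come from this design; Job, TEST_USER, and self.server.submit() are standard PTL constructs, and the import path, class name, trial count, and batch size are illustrative assumptions:

import time
from tests.performance import *


class TestJobSubmissionPerf(TestPerformance):
    """Illustrative performance test; names and counts are assumptions."""

    def test_submission_rate(self):
        sub_rate = []
        for _ in range(10):            # 10 trials, as in the example above
            start = time.time()
            for _ in range(1000):      # submit 1000 jobs per trial
                self.server.submit(Job(TEST_USER))
            sub_rate.append(1000 / (time.time() - start))
        # Records all 10 trial values plus the derived statistics
        self.perf_test_result("job_submission", sub_rate, 'jobs/sec')

A single measurement such as the job_run_rate above would simply pass a float instead of a list.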

Note:

A test writer who wants to store per-test configuration can optionally add it directly via the set_test_measurements() API.

self.set_test_measurements({"test_config": <config>})

test_config - config can be a dictionary holding the configurable parameters of the test

Example:

self.set_test_measurements({"test_config": config})

        "measurements": [

              {

                "test_config": {

                  "No_of_users": 10,

                  "No_of_ncpus_per_node": 48,

                  "No_of_moms": 21,

                  "qsub_exec": "-- /bin/true",

                  "No_of_jobs_per_user": 100,

                  "svr_log_level": 511,

                  "qsub_exec_arg": null,

                  "No_of_tries": 1,

                  "No_of_iterations": 10

                }

              }]
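The dictionary passed by the test to produce the output above could be built as follows; the keys simply mirror the example JSON and should reflect the test's actual parameters:

config = {
    "No_of_users": 10,
    "No_of_ncpus_per_node": 48,
    "No_of_moms": 21,
    "qsub_exec": "-- /bin/true",
    "No_of_jobs_per_user": 100,
    "svr_log_level": 511,
    "qsub_exec_arg": None,    # serialized as null in the JSON output
    "No_of_tries": 1,
    "No_of_iterations": 10,
}
self.set_test_measurements({"test_config": config})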