Objective
...
- Visibility: Public
- Change Control: Stable
- Value: 'all', 'job_start', or 'none'
- Python type: str
- Synopsis:
- When set to 'all', tolerate all node failures resulting from communication problems (e.g. polling) between the primary mom and the sister moms assigned to the job, as well as from rejections by execjob_begin or execjob_prologue hooks executed by remote moms.
- When set to 'job_start', tolerate only node failures that occur during job start, such as an assigned sister mom failing to join the job or communication errors between the primary mom and sister moms, up until just before the job executes the execjob_launch hook and/or the top-level shell or executable.
- When set to 'none', or if the attribute is unset, no node failures are tolerated (the default behavior).
- It can be set via qsub, qalter, or in a Python hook such as a queuejob hook. If set via qalter while the job is already running, the new value is consulted the next time the job is rerun.
- It can also be specified in the server attribute 'default_qsub_arguments' so that all jobs are submitted with the tolerate_node_failures attribute set (see the qmgr example under Examples below).
- This option is best used when a job is assigned extra nodes via the pbs.event().job.select.increment_chunks() method (interface 7); a sketch combining the two appears after the Examples below.
- The 'tolerate_node_failures' job option is currently not supported on Cray systems. If specified, a Cray primary mom would ignore the setting.
- Privilege: user, admin, or operator can set it
- Examples:
- Via qsub:
...
qalter -W tolerate_node_failures="job_start" <jobid>
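- Via the server's default_qsub_arguments attribute (a sketch; exact shell quoting may vary):
# qmgr -c "set server default_qsub_arguments='-W tolerate_node_failures=job_start'"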
- Via a hook:
# cat qjob.py
import pbs
e = pbs.event()
e.job.tolerate_node_failures = "all"
# qmgr -c "create hook qjob event=queuejob"
# qmgr -c "import hook application/x-python default qjob.py"
% qsub job.scr
23.borg
% qstat -f 23
...
tolerate_node_failures = all
...
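- Example combining this attribute with interface 7 in a queuejob hook: a minimal sketch, assuming increment_chunks() accepts a simple per-chunk increment and that the job supplied a select specification:
import pbs

e = pbs.event()
j = e.job

# Tolerate node failures that occur during job start.
j.tolerate_node_failures = "job_start"

# Pad the requested select spec so spare chunks are assigned; the
# increment of 1 per chunk is illustrative only.
if j.Resource_List["select"] is not None:
    j.Resource_List["select"] = j.Resource_List["select"].increment_chunks(1)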
pbs.release_nodes(keep_select=...)
NOTE: On Windows, where PBS_NODEFILE always appears in pbs.event().env, the following must be placed at the top of the execjob_launch hook:
if any("mom_open_demux.exe" in s for s in e.argv):
    e.accept()
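- For illustration, a minimal sketch of a complete execjob_launch hook that prunes a tolerant job back to its originally requested chunks is shown below; invoking release_nodes() on the event's job object, reusing the job's 'select' request as keep_select, and requeueing on failure are assumptions made for this sketch:
import pbs

e = pbs.event()

# Windows guard from the NOTE above: skip the demux helper process.
if any("mom_open_demux.exe" in s for s in e.argv):
    e.accept()

j = e.job

# release_nodes() is meaningful only for node-failure-tolerant jobs
# (see the next bullet).
if j.tolerate_node_failures in ("all", "job_start"):
    # Prune the assignment down to the originally requested chunks.
    if j.release_nodes(keep_select=j.Resource_List["select"]) is None:
        # Pruning failed: requeue the job rather than launch on a bad node set.
        j.rerun()
        e.reject("unable to prune job")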
- This call makes sense only when the job is node failure tolerant (i.e. tolerate_node_failures=job_start or tolerate_node_failures=all), since that is when the lists of healthy and failed nodes are gathered for release_nodes() to consult when determining which chunks should be assigned or freed. If it is invoked while the job is not tolerant of node failures, the following message is displayed in mom_logs at DEBUG level:
...
Seeing this log message means that a job can momentarily receive an error when doing tm_spawn or pbsdsh to a node that has not yet completed its nodes table update.
- When the mother superior fails to prune the currently assigned chunk resources, the following detailed mom_logs messages are shown at DEBUG2 level:
"could not satisfy select chunk (<resc1>=<val1> <resc2>=<val2> ...<rescN>=valN)
- "NEED chunks for keep_select (<resc1>=<val1> <resc2>=<val2> ...<rescN>=valN)
- "HAVE chunks from job's exec_vnode (<exec_vnode value>
...