toil.utils.toilStatus

Tool for reporting on job status.

Attributes

logger

Exceptions

JobException

General job exception.

NoSuchFileException

Indicates that the specified file does not exist.

NoSuchJobStoreException

Indicates that the specified job store does not exist.

Classes

Config

Class to represent configuration operations for a toil workflow run.

Toil

A context manager that represents a Toil workflow.

JobDescription

Stores all the information that the Toil Leader ever needs to know about a Job.

ServiceJobDescription

A description of a job that hosts a service.

StatsAndLogging

A thread to aggregate statistics and logging.

ToilStatus

Tool for reporting on job status.

Functions

replay_message_bus(path)

Replay all the messages and work out what they mean for jobs.

parser_with_common_options([provisioner_options, ...])

set_logging_from_options(options)

main()

Reports the state of a Toil workflow.

Module Contents

toil.utils.toilStatus.replay_message_bus(path)[source]

Replay all the messages and work out what they mean for jobs.

We track the state and name of each job by ID. A two-item list would suffice, but MyPy cannot type a heterogeneous list, so a dedicated class is defined instead.

Returns a dictionary from the job_id to a dataclass, JobStatus. A JobStatus contains the information we have gathered about a job from the message bus, including the job store ID, the name of the job, the exit code, any associated annotations, the Toil batch ID, the external batch ID, and the batch system on which the job is running.

Parameters:

path (str)

Return type:

Dict[str, JobStatus]
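To illustrate the shape of the returned mapping, here is a hedged sketch that consumes a Dict[str, JobStatus]-like structure. ExampleJobStatus and failed_job_names are hypothetical stand-ins defined here for illustration only; the real JobStatus dataclass lives in this module and may carry additional fields.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ExampleJobStatus:
    """Hypothetical stand-in mirroring fields the docstring describes."""
    job_store_id: str
    name: str
    exit_code: int
    annotations: Dict[str, str] = field(default_factory=dict)

def failed_job_names(statuses: Dict[str, ExampleJobStatus]) -> List[str]:
    # Treat a nonzero exit code as failure.
    return [s.name for s in statuses.values() if s.exit_code != 0]
```

With the real module, the mapping returned by replay_message_bus(path) could be filtered the same way to pull out failed jobs for reporting.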

class toil.utils.toilStatus.Config[source]

Class to represent configuration operations for a toil workflow run.

logFile: str | None
logRotating: bool
cleanWorkDir: str
max_jobs: int
max_local_jobs: int
manualMemArgs: bool
run_local_jobs_on_workers: bool
coalesceStatusCalls: bool
mesos_endpoint: str | None
mesos_framework_id: str | None
mesos_role: str | None
mesos_name: str
kubernetes_host_path: str | None
kubernetes_owner: str | None
kubernetes_service_account: str | None
kubernetes_pod_timeout: float
kubernetes_privileged: bool
tes_endpoint: str
tes_user: str
tes_password: str
tes_bearer_token: str
aws_batch_region: str | None
aws_batch_queue: str | None
aws_batch_job_role_arn: str | None
scale: float
batchSystem: str
batch_logs_dir: str | None

The backing scheduler will be instructed, if possible, to save logs to this directory, where the leader can read them.

statePollingWait: int
state_polling_timeout: int
disableAutoDeployment: bool
workflowID: str | None

This attribute uniquely identifies the job store and therefore the workflow. It is necessary in order to distinguish between two consecutive workflows for which self.jobStore is the same, e.g. when a job store name is reused after a previous run has finished successfully and its job store has been cleaned up.

workflowAttemptNumber: int
jobStore: str
logLevel: str
colored_logs: bool
workDir: str | None
coordination_dir: str | None
noStdOutErr: bool
stats: bool
clean: str | None
clusterStats: str
restart: bool
caching: bool | None
symlinkImports: bool
moveOutputs: bool
provisioner: str | None
nodeTypes: List[Tuple[Set[str], float | None]]
minNodes: List[int]
maxNodes: List[int]
targetTime: float
betaInertia: float
scaleInterval: int
preemptibleCompensation: float
nodeStorage: int
nodeStorageOverrides: List[str]
metrics: bool
assume_zero_overhead: bool
maxPreemptibleServiceJobs: int
maxServiceJobs: int
deadlockWait: float | int
deadlockCheckInterval: float | int
defaultMemory: int
defaultCores: float | int
defaultDisk: int
defaultPreemptible: bool
defaultAccelerators: List[toil.job.AcceleratorRequirement]
maxCores: int
maxMemory: int
maxDisk: int
retryCount: int
enableUnlimitedPreemptibleRetries: bool
doubleMem: bool
maxJobDuration: int
rescueJobsFrequency: int
job_store_timeout: float
maxLogFileSize: int
writeLogs: str
writeLogsGzip: str
writeLogsFromAllJobs: bool
write_messages: str | None
realTimeLogging: bool
environment: Dict[str, str]
disableChaining: bool
disableJobStoreChecksumVerification: bool
sseKey: str | None
servicePollingInterval: int
useAsync: bool
forceDockerAppliance: bool
statusWait: int
disableProgress: bool
readGlobalFileMutableByDefault: bool
debugWorker: bool
disableWorkerOutputCapture: bool
badWorker: float
badWorkerFailInterval: float
kill_polling_interval: int
cwl: bool
set_from_default_config()[source]
Return type:

None

prepare_start()[source]

After options are set, prepare for initial start of workflow.

Return type:

None

prepare_restart()[source]

Before restart options are set, prepare for a restart of a workflow. Set up any execution-specific parameters and clear out any stale ones.

Return type:

None

setOptions(options)[source]

Creates a config object from the options object.

Parameters:

options (argparse.Namespace)

Return type:

None

check_configuration_consistency()[source]

Old consistency checks that cannot be expressed as an argparse action class.

Return type:

None

__eq__(other)[source]

Return self==value.

Parameters:

other (object)

Return type:

bool

__hash__()[source]

Return hash(self).

Return type:

int

class toil.utils.toilStatus.Toil(options)[source]

Bases: ContextManager[Toil]

A context manager that represents a Toil workflow.

Specifically the batch system, job store, and its configuration.

Parameters:

options (argparse.Namespace)

config: Config
__enter__()[source]

Derive configuration from the command line options.

Then load the job store and, on restart, consolidate the derived configuration with the one from the previous invocation of the workflow.

Return type:

Toil

__exit__(exc_type, exc_val, exc_tb)[source]

Clean up after a workflow invocation.

Depending on the configuration, delete the job store.

Parameters:
Return type:

Literal[False]

start(rootJob)[source]

Invoke a Toil workflow with the given job as the root for an initial run.

This method must be called in the body of a with Toil(...) as toil: statement. This method should not be called more than once for a workflow that has not finished.

Parameters:

rootJob (toil.job.Job) – The root job of the workflow

Returns:

The root job’s return value

Return type:

Any

restart()[source]

Restarts a workflow that has been interrupted.

Returns:

The root job’s return value

Return type:

Any
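The start()/restart() pair above is normally driven from a with-statement, as required by start(). The following is a minimal sketch of that driver pattern; the hello-world root job is illustrative, and the imports are placed inside the function so the sketch can be defined even where Toil is not installed.

```python
def run_hello_workflow(jobstore_locator: str):
    # Deferred imports: this sketch assumes Toil is installed at call time.
    from toil.common import Toil
    from toil.job import Job

    options = Job.Runner.getDefaultOptions(jobstore_locator)
    options.logLevel = "INFO"

    with Toil(options) as toil:
        if not toil.options.restart:
            # start() runs the workflow from this root job...
            return toil.start(Job.wrapFn(print, "Hello, Toil!"))
        # ...while restart() resumes an interrupted run of the same store.
        return toil.restart()
```

Calling run_hello_workflow("file:/tmp/my-store") would run the workflow against a file-based job store; on a second invocation with --restart set, the restart() branch resumes it instead.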

classmethod getJobStore(locator)[source]

Create an instance of the concrete job store implementation that matches the given locator.

Parameters:

locator (str) – The location of the job store to be represented by the instance

Returns:

an instance of a concrete subclass of AbstractJobStore

Return type:

toil.jobStores.abstractJobStore.AbstractJobStore

static parseLocator(locator)[source]
Parameters:

locator (str)

Return type:

Tuple[str, str]

static buildLocator(name, rest)[source]
Parameters:
Return type:

str
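To make the (name, rest) contract of parseLocator() and buildLocator() concrete, here is a minimal re-implementation for illustration. It assumes a locator of the form "name:rest" split at the first colon; the real static methods may handle additional cases (e.g. defaulting a bare path to the file job store).

```python
from typing import Tuple

def parse_locator(locator: str) -> Tuple[str, str]:
    # Split "<name>:<rest>" at the first colon, mirroring the
    # Tuple[str, str] return type documented above.
    name, sep, rest = locator.partition(":")
    if not sep:
        raise ValueError(f"invalid job store locator: {locator!r}")
    return name, rest

def build_locator(name: str, rest: str) -> str:
    # Inverse operation: rejoin the two components.
    return f"{name}:{rest}"
```

Note that only the first colon splits, so an AWS-style locator keeps its region and store name together in the rest component.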

classmethod resumeJobStore(locator)[source]
Parameters:

locator (str)

Return type:

toil.jobStores.abstractJobStore.AbstractJobStore

static createBatchSystem(config)[source]

Create an instance of the batch system specified in the given config.

Parameters:

config (Config) – the current configuration

Returns:

an instance of a concrete subclass of AbstractBatchSystem

Return type:

toil.batchSystems.abstractBatchSystem.AbstractBatchSystem

importFile(srcUrl: str, sharedFileName: str, symlink: bool = True) None[source]
importFile(srcUrl: str, sharedFileName: None = None, symlink: bool = True) toil.fileStores.FileID
import_file(src_uri: str, shared_file_name: str, symlink: bool = True, check_existence: bool = True) None[source]
import_file(src_uri: str, shared_file_name: None = None, symlink: bool = True, check_existence: bool = True) toil.fileStores.FileID

Import the file at the given URL into the job store.

By default, raises FileNotFoundError if the file does not exist.

Parameters:

check_existence – If true, raise FileNotFoundError if the file does not exist. If false, return None when the file does not exist.

See toil.jobStores.abstractJobStore.AbstractJobStore.importFile() for a full description

exportFile(jobStoreFileID, dstUrl)[source]
Parameters:
Return type:

None

export_file(file_id, dst_uri)[source]

Export file to destination pointed at by the destination URL.

See toil.jobStores.abstractJobStore.AbstractJobStore.exportFile() for a full description

Parameters:
Return type:

None

static normalize_uri(uri, check_existence=False)[source]

Given a URI, if it has no scheme, prepend “file:”.

Parameters:
  • check_existence (bool) – If set, raise FileNotFoundError if a URI points to a local file that does not exist.

  • uri (str)

Return type:

str
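A hedged sketch of the documented behavior: URIs without a scheme are treated as local paths and get "file:" prepended. Absolutizing the path first is an assumption of this sketch, not something the docstring guarantees.

```python
import os
import urllib.parse

def normalize_uri(uri: str, check_existence: bool = False) -> str:
    # No scheme means a local path (assumption: absolutize it first).
    if urllib.parse.urlparse(uri).scheme == "":
        if check_existence and not os.path.exists(uri):
            raise FileNotFoundError(uri)
        return "file:" + os.path.abspath(uri)
    # URIs that already carry a scheme pass through unchanged.
    return uri
```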

static getToilWorkDir(configWorkDir=None)[source]

Return a path to a writable directory under which per-workflow directories exist.

This directory is always required to exist on a machine, even if the Toil worker has not run yet. If your workers and leader have different temp directories, you may need to set TOIL_WORKDIR.

Parameters:

configWorkDir (Optional[str]) – Value passed to the program using the --workDir flag

Returns:

Path to the Toil work directory, constant across all machines

Return type:

str

classmethod get_toil_coordination_dir(config_work_dir, config_coordination_dir)[source]

Return a path to a writable directory, which will be in memory if convenient. Ought to be used for file locking and coordination.

Parameters:
  • config_work_dir (Optional[str]) – Value passed to the program using the --workDir flag

  • config_coordination_dir (Optional[str]) – Value passed to the program using the --coordinationDir flag

Returns:

Path to the Toil coordination directory. Ought to be on a POSIX filesystem that allows directories containing open files to be deleted.

Return type:

str

static get_workflow_path_component(workflow_id)[source]

Get a safe filesystem path component for a workflow.

Will be consistent for all processes on a given machine, but different across machines.

Parameters:

workflow_id (str) – The ID of the current Toil workflow.

Return type:

str
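One way to get a path component that is stable per machine but differs across machines is to hash the workflow ID together with a per-machine value. This is an illustrative sketch only; using the hostname as the per-machine value, and SHA-1 as the hash, are assumptions here, not a description of the real implementation.

```python
import hashlib
import socket

def workflow_path_component(workflow_id: str) -> str:
    # Combine the workflow ID with a per-machine value (hostname here,
    # as an assumption) and hash into a filesystem-safe hex component.
    material = workflow_id + "\0" + socket.gethostname()
    return hashlib.sha1(material.encode()).hexdigest()[:16]
```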

classmethod getLocalWorkflowDir(workflowID, configWorkDir=None)[source]

Return the directory where worker directories and the cache will be located for this workflow on this machine.

Parameters:
  • configWorkDir (Optional[str]) – Value passed to the program using the --workDir flag

  • workflowID (str)

Returns:

Path to the local workflow directory on this machine

Return type:

str

classmethod get_local_workflow_coordination_dir(workflow_id, config_work_dir, config_coordination_dir)[source]

Return the directory where coordination files should be located for this workflow on this machine. These include internal Toil databases and lock files for the machine.

If an in-memory filesystem is available, it is used. Otherwise, the local workflow directory, which may be on a shared network filesystem, is used.

Parameters:
  • workflow_id (str) – Unique ID of the current workflow.

  • config_work_dir (Optional[str]) – Value used for the work directory in the current Toil Config.

  • config_coordination_dir (Optional[str]) – Value used for the coordination directory in the current Toil Config.

Returns:

Path to the local workflow coordination directory on this machine.

Return type:

str

toil.utils.toilStatus.parser_with_common_options(provisioner_options=False, jobstore_option=True, prog=None, default_log_level=None)[source]
Parameters:
  • provisioner_options (bool)

  • jobstore_option (bool)

  • prog (Optional[str])

  • default_log_level (Optional[int])

Return type:

configargparse.ArgParser

class toil.utils.toilStatus.JobDescription(requirements, jobName, unitName='', displayName='', local=None)[source]

Bases: Requirer

Stores all the information that the Toil Leader ever needs to know about a Job.

This includes:
  • Resource requirements.

  • Which jobs are children or follow-ons or predecessors of this job.

  • A reference to the Job object in the job store.

Can be obtained from an actual (i.e. executable) Job object, and can be used to obtain the Job object from the JobStore.

Never contains other Jobs or JobDescriptions: all reference is by ID.

Subclassed into variants for checkpoint jobs and service jobs that have their specific parameters.

Parameters:
  • requirements (Mapping[str, Union[int, str, bool]])

  • jobName (str)

  • unitName (Optional[str])

  • displayName (Optional[str])

  • local (Optional[bool])

get_names()[source]

Get the names and ID of this job as a named tuple.

Return type:

toil.bus.Names

get_chain()[source]

Get all the jobs that executed in this job’s chain, in order.

For each job, produces a named tuple with its various names and its original job store ID. The jobs in the chain are in execution order.

If the job hasn’t run yet or it didn’t chain, produces a one-item list.

Return type:

List[toil.bus.Names]

serviceHostIDsInBatches()[source]

Find all batches of service host job IDs that can be started at the same time, in the order they need to start in.

Return type:

Iterator[List[str]]

successorsAndServiceHosts()[source]

Get an iterator over all child, follow-on, and service job IDs.

Return type:

Iterator[str]

allSuccessors()[source]

Get an iterator over all child, follow-on, and chained, inherited successor job IDs.

Follow-ons will come before children.

Return type:

Iterator[str]

successors_by_phase()[source]

Get an iterator over all child/follow-on/chained inherited successor job IDs, along with their phase number on the stack.

Phases execute from higher numbers to lower numbers.

Return type:

Iterator[Tuple[int, str]]

property services
Get a collection of the IDs of service host jobs for this job, in arbitrary order.

Will be empty if the job has no unfinished services.

has_body()[source]

Returns True if we have a job body associated, and False otherwise.

Return type:

bool

attach_body(file_store_id, user_script)[source]

Attach a job body to this JobDescription.

Takes the file store ID that the body is stored at, and the required user script module.

The file store ID can also be “firstJob” for the root job, stored as a shared file instead.

Parameters:
Return type:

None

detach_body()[source]

Drop the body reference from a JobDescription.

Return type:

None

get_body()[source]

Get the information needed to load the job body.

Returns:

a file store ID (or magic shared file name “firstJob”) and a user script module.

Return type:

Tuple[str, toil.resource.ModuleDescriptor]

Fails if no body is attached; check has_body() first.

nextSuccessors()[source]

Return the collection of job IDs for the successors of this job that are ready to run.

If those jobs have multiple predecessor relationships, they may still be blocked on other jobs.

Returns None when at the final phase (all successors done), and an empty collection if there are more phases but they can’t be entered yet (e.g. because we are waiting for the job itself to run).

Return type:

Optional[Set[str]]

filterSuccessors(predicate)[source]

Keep only successor jobs for which the given predicate function approves.

The predicate function is called with the job’s ID.

Treats all other successors as complete and forgets them.

Parameters:

predicate (Callable[[str], bool])

Return type:

None

filterServiceHosts(predicate)[source]

Keep only services for which the given predicate approves.

The predicate function is called with the service host job’s ID.

Treats all other services as complete and forgets them.

Parameters:

predicate (Callable[[str], bool])

Return type:

None

clear_nonexistent_dependents(job_store)[source]

Remove all references to child, follow-on, and associated service jobs that do not exist.

That is to say, all those that have been completed and removed.

Parameters:

job_store (toil.jobStores.abstractJobStore.AbstractJobStore)

Return type:

None

clear_dependents()[source]

Remove all references to successor and service jobs.

Return type:

None

is_subtree_done()[source]

Check if the subtree is done.

Returns:

True if the job appears to be done, and all related child, follow-on, and service jobs appear to be finished and removed.

Return type:

bool

replace(other)[source]

Take on the ID of another JobDescription, retaining our own state and type.

When updated in the JobStore, we will save over the other JobDescription.

Useful for chaining jobs: the chained-to job can replace the parent job.

Merges cleanup state and successors other than this job from the job being replaced into this one.

Parameters:

other (JobDescription) – Job description to replace.

Return type:

None

assert_is_not_newer_than(other)[source]

Make sure this JobDescription is not newer than a prospective new version of the JobDescription.

Parameters:

other (JobDescription)

Return type:

None

is_updated_by(other)[source]

Return True if the passed JobDescription is a distinct, newer version of this one.

Parameters:

other (JobDescription)

Return type:

bool

addChild(childID)[source]

Make the job with the given ID a child of the described job.

Parameters:

childID (str)

Return type:

None

addFollowOn(followOnID)[source]

Make the job with the given ID a follow-on of the described job.

Parameters:

followOnID (str)

Return type:

None

addServiceHostJob(serviceID, parentServiceID=None)[source]

Make the ServiceHostJob with the given ID a service of the described job.

If a parent ServiceHostJob ID is given, that parent service will be started first, and must have already been added.

hasChild(childID)[source]

Return True if the job with the given ID is a child of the described job.

Parameters:

childID (str)

Return type:

bool

hasFollowOn(followOnID)[source]

Test if the job with the given ID is a follow-on of the described job.

Parameters:

followOnID (str)

Return type:

bool

hasServiceHostJob(serviceID)[source]

Test if the ServiceHostJob is a service of the described job.

Return type:

bool

renameReferences(renames)[source]

Apply the given dict of ID renames to all references to jobs.

Does not modify our own ID or those of finished predecessors. IDs not present in the renames dict are left as-is.

Parameters:

renames (Dict[TemporaryID, str]) – Rename operations to apply.

Return type:

None

addPredecessor()[source]

Notify the JobDescription that a predecessor has been added to its Job.

Return type:

None

onRegistration(jobStore)[source]

Perform setup work that requires the JobStore.

Called by the Job saving logic when this JobDescription meets the JobStore and has its ID assigned.

Overridden to perform setup work (like hooking up flag files for service jobs) that requires the JobStore.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore) – The job store we are being placed into

Return type:

None

setupJobAfterFailure(exit_status=None, exit_reason=None)[source]

Configure job after a failure.

Reduce the remainingTryCount if it is greater than zero, and set the memory to be at least as big as the default memory (since memory exhaustion is a common cause of failure).

Requires a configuration to have been assigned (see toil.job.Requirer.assignConfig()).

Parameters:
Return type:

None

getLogFileHandle(jobStore)[source]

Create a context manager that yields a file handle to the log file.

Assumes logJobStoreFileID is set.

property remainingTryCount
Get the number of tries remaining.

The try count set on the JobDescription, or the default based on the retry count from the config if none is set.

clearRemainingTryCount()[source]

Clear remainingTryCount and set it back to its default value.

Returns:

True if a modification to the JobDescription was made, and False otherwise.

Return type:

bool

__str__()[source]

Produce a useful logging string identifying this job.

Return type:

str

__repr__()[source]

Return repr(self).

reserve_versions(count)[source]

Reserve a job version number for later, for journaling asynchronously.

Parameters:

count (int)

Return type:

None

pre_update_hook()[source]

Run before pickling and saving a created or updated version of this job.

Called by the job store.

Return type:

None

exception toil.utils.toilStatus.JobException(message)[source]

Bases: Exception

General job exception.

Parameters:

message (str)

class toil.utils.toilStatus.ServiceJobDescription(*args, **kwargs)[source]

Bases: JobDescription

A description of a job that hosts a service.

onRegistration(jobStore)[source]

Setup flag files.

When a ServiceJobDescription first meets the JobStore, it needs to set up its flag files.

exception toil.utils.toilStatus.NoSuchFileException(jobStoreFileID, customName=None, *extra)[source]

Bases: Exception

Indicates that the specified file does not exist.

Parameters:
exception toil.utils.toilStatus.NoSuchJobStoreException(locator, prefix)[source]

Bases: LocatorException

Indicates that the specified job store does not exist.

Parameters:
class toil.utils.toilStatus.StatsAndLogging(jobStore, config)[source]

A thread to aggregate statistics and logging.

Parameters:
start()[source]

Start the stats and logging thread.

Return type:

None

classmethod formatLogStream(stream, stream_name)[source]

Given a stream of text or bytes, and the job name, the job itself, or some other optional stringifiable identity info for the job, return one big text string with the formatted job log, suitable for printing for the user.

We don’t want to prefix every line of the job’s log with our own logging info, or we get prefixes wider than any reasonable terminal and longer than the messages.

Parameters:
  • stream (Union[IO[str], IO[bytes]]) – The stream of text or bytes to print for the user.

  • stream_name (str)

Return type:

str
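The idea described above (one header for the whole log instead of a logging prefix on every line) can be sketched as follows. The exact header text and indentation are assumptions of this illustration, not the real output format.

```python
import io
from typing import IO, Union

def format_log_stream(stream: Union[IO[str], IO[bytes]], stream_name: str) -> str:
    # Read the whole stream, decoding bytes if necessary.
    data = stream.read()
    if isinstance(data, bytes):
        data = data.decode("utf-8", errors="replace")
    # One header names the stream; log lines are indented, not prefixed.
    body = "".join("\t" + line + "\n" for line in data.splitlines())
    return f"Log from {stream_name} follows:\n{body}"
```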

classmethod logWithFormatting(stream_name, jobLogs, method=logger.debug, message=None)[source]
Parameters:
  • stream_name (str)

  • jobLogs (Union[IO[str], IO[bytes]])

  • method (Callable[[str], None])

  • message (Optional[str])

Return type:

None

classmethod writeLogFiles(jobNames, jobLogList, config, failed=False)[source]
Parameters:
Return type:

None

classmethod statsAndLoggingAggregator(jobStore, stop, config)[source]

Collates stats and log messages reported from the workers. Runs inside a thread and keeps collating as long as the stop flag is not True.

Parameters:
Return type:

None

check()[source]

Check on the stats and logging aggregator.

Raises:

RuntimeError – If the underlying thread has quit.

Return type:

None

shutdown()[source]

Finish up the stats/logging aggregation thread.

Return type:

None

toil.utils.toilStatus.set_logging_from_options(options)[source]
Parameters:

options (Union[toil.common.Config, argparse.Namespace])

Return type:

None

toil.utils.toilStatus.logger
class toil.utils.toilStatus.ToilStatus(jobStoreName, specifiedJobs=None)[source]

Tool for reporting on job status.

Parameters:
  • jobStoreName (str)

  • specifiedJobs (Optional[List[str]])

print_dot_chart()[source]

Print a dot output graph representing the workflow.

Return type:

None

printJobLog()[source]

Takes a list of jobs, finds their log files, and prints them to the terminal.

Return type:

None

printJobChildren()[source]

Takes a list of jobs, and prints their successors.

Return type:

None

printAggregateJobStats(properties, childNumber)[source]

Prints each job’s ID, log file, remaining tries, and other properties.

Parameters:
  • properties (List[Set[str]]) – A set of string flag names for each job in self.jobsToReport.

  • childNumber (List[int]) – A list of child counts for each job in self.jobsToReport.

Return type:

None

report_on_jobs()[source]

Gathers information about each job, such as its child jobs and status.

Returns jobStats:

Dict containing some lists of jobs by category, and some lists of job properties for each job in self.jobsToReport.

Return type:

Dict[str, Any]

static getPIDStatus(jobStoreName)[source]

Determine the status of a process with a particular local pid.

Checks to see if a process exists or not.

Returns:

A string indicating the status of the PID of the workflow as stored in the jobstore.

Return type:

str

Parameters:

jobStoreName (str)

static getStatus(jobStoreName)[source]

Determine the status of a workflow.

If the jobstore does not exist, this returns ‘QUEUED’, assuming it has not been created yet.

Checks for the existence of files created by toil.Leader.run(). There, if a workflow completes with failed jobs, ‘failed.log’ is created; otherwise ‘succeeded.log’ is written. If neither of these exists, the leader is still running jobs.

Returns:

A string indicating the status of the workflow. [‘COMPLETED’, ‘RUNNING’, ‘ERROR’, ‘QUEUED’]

Return type:

str

Parameters:

jobStoreName (str)
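The decision logic described above can be sketched for a file-based job store directory. This is an illustration of the documented rules only; the real getStatus() works through the job store abstraction, and the flag-file layout assumed here is a simplification.

```python
import os

def status_from_flag_files(jobstore_dir: str) -> str:
    # Sketch: map the documented flag files onto the four statuses.
    if not os.path.isdir(jobstore_dir):
        return "QUEUED"      # job store not created yet
    if os.path.exists(os.path.join(jobstore_dir, "failed.log")):
        return "ERROR"       # workflow completed with failed jobs
    if os.path.exists(os.path.join(jobstore_dir, "succeeded.log")):
        return "COMPLETED"   # workflow completed successfully
    return "RUNNING"         # leader is still running jobs
```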

print_bus_messages()[source]

Goes through the bus messages and prints a list of the currently running jobs.

Return type:

None

fetchRootJob()[source]

Fetches the root job from the jobStore that provides context for all other jobs.

Exactly the same as jobStore.loadRootJob(), but with a different exit message if the root job is not found. A missing root job indicates that the workflow ran successfully to completion, so certain stats, such as which jobs are left to run, cannot be gathered from it meaningfully.

Raises:

JobException – if the root job does not exist.

Return type:

toil.job.JobDescription

fetchUserJobs(jobs)[source]

Takes a user-input list of jobs, verifies that they are in the jobStore, and returns the list of jobsToReport.

Parameters:

jobs (list) – A list of jobs to be verified.

Returns jobsToReport:

A list of jobs which are verified to be in the jobStore.

Return type:

List[toil.job.JobDescription]

traverseJobGraph(rootJob, jobsToReport=None, foundJobStoreIDs=None)[source]

Find all current jobs in the jobStore and return them as a list.

Parameters:
  • rootJob (toil.job.JobDescription) – The root job of the workflow.

  • jobsToReport (list) – A list of jobNodes to be added to and returned.

  • foundJobStoreIDs (set) – A set of jobStoreIDs used to keep track of jobStoreIDs encountered in traversal.

Returns jobsToReport:

The list of jobs currently in the job graph.

Return type:

List[toil.job.JobDescription]

toil.utils.toilStatus.main()[source]

Reports the state of a Toil workflow.

Return type:

None