toil.leader

The leader script (of the leader/worker pair) for running jobs.

Attributes

EXIT_STATUS_UNAVAILABLE_VALUE

CWL_UNSUPPORTED_REQUIREMENT_EXIT_CODE

logger

Exceptions

DeadlockException

Exception thrown by the Leader or BatchSystem when a deadlock is encountered due to insufficient resources to run the workflow.

FailedJobsException

Raised when a workflow fails because one or more of its jobs failed.

NoSuchJobException

Indicates that the specified job does not exist.

Classes

AbstractBatchSystem

An abstract base class to represent the interface the batch system must provide to Toil.

BatchJobExitReason

Enum of the reasons a batch job can exit; members are also (and must be) ints.

JobCompletedMessage

Produced when a job is completed, whether successful or not.

JobFailedMessage

Produced when a job is completely failed, and will not be retried again.

JobIssuedMessage

Produced when a job is issued to run on the batch system.

JobMissingMessage

Produced when a job goes missing and should be in the batch system but isn't.

JobUpdatedMessage

Produced when a job is "updated" and ready to have something happen to it.

QueueSizeMessage

Produced to describe the size of the queue of jobs issued but not yet completed.

Config

Class to represent configuration operations for a toil workflow run.

ToilMetrics

CheckpointJobDescription

A description of a job that is a checkpoint.

JobDescription

Stores all the information that the Toil Leader ever needs to know about a Job.

ServiceJobDescription

A description of a job that hosts a service.

TemporaryID

Placeholder for an unregistered job ID used by a JobDescription.

AbstractJobStore

Represents the physical storage for the jobs and files in a Toil workflow.

LocalThrottle

A thread-safe rate limiter that throttles each thread independently. Can be used as a function or method decorator or as a simple object, via its .throttle() method.

AbstractProvisioner

Interface for provisioning worker nodes to use in a Toil cluster.

ScalerThread

A thread that automatically scales the number of either preemptible or non-preemptible worker nodes.

ServiceManager

Manages the scheduling of services.

StatsAndLogging

A thread to aggregate statistics and logging.

ToilState

Holds the leader's scheduling information.

Leader

Represents the Toil leader.

Functions

resolveEntryPoint(entryPoint)

Find the path to the given entry point that should work on a worker.

gen_message_bus_path()

Return a file path in tmp to store the message bus at.

get_job_kind(names)

Return an identifying string for the job.

Module Contents

toil.leader.resolveEntryPoint(entryPoint)[source]

Find the path to the given entry point that should work on a worker.

Returns:

The path found, which may be an absolute or a relative path.

Parameters:

entryPoint (str)

Return type:

str
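
For example, the leader uses this to locate the worker command to hand to the batch system. A minimal sketch (assuming a standard Toil installation, where the _toil_worker entry point exists):

    from toil.leader import resolveEntryPoint

    # Locate the standard Toil worker entry point so its path can be used
    # in commands issued to the batch system.
    worker_command = resolveEntryPoint("_toil_worker")
    print(worker_command)  # absolute or relative path, per the contract above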

exception toil.leader.DeadlockException(msg)[source]

Bases: Exception

Exception thrown by the Leader or BatchSystem when a deadlock is encountered due to insufficient resources to run the workflow.

__str__()[source]

Stringify the exception, including the message.

class toil.leader.AbstractBatchSystem[source]

Bases: abc.ABC

An abstract base class to represent the interface the batch system must provide to Toil.

abstract classmethod supportsAutoDeployment()[source]

Whether this batch system supports auto-deployment of the user script itself.

If it does, setUserScript() can be invoked to set the resource object representing the user script.

Note to implementors: If your implementation returns True here, it should also override setUserScript().

Return type:

bool

abstract classmethod supportsWorkerCleanup()[source]

Whether this batch system supports worker cleanup.

Indicates whether this batch system invokes BatchSystemSupport.workerCleanup() after the last job for a particular workflow invocation finishes. Note that the term worker refers to an entire node, not just a worker process. A worker process may run more than one job sequentially, and more than one concurrent worker process may exist on a worker node, for the same workflow. The batch system is said to shut down after the last worker process terminates.

Return type:

bool

abstract setUserScript(userScript)[source]

Set the user script for this workflow.

This method must be called before the first job is issued to this batch system, and only if supportsAutoDeployment() returns True, otherwise it will raise an exception.

Parameters:

userScript (toil.resource.Resource) – the resource object representing the user script or module and the modules it depends on.

Return type:

None

set_message_bus(message_bus)[source]

Give the batch system an opportunity to connect directly to the message bus, so that it can send informational messages about the jobs it is running to other Toil components.

Parameters:

message_bus (toil.bus.MessageBus)

Return type:

None

abstract issueBatchJob(command, job_desc, job_environment=None)[source]

Issues a job with the specified command to the batch system and returns a unique job ID number.

Parameters:
  • command (str) – the command to execute somewhere to run the Toil worker process

  • job_desc (toil.job.JobDescription) – the JobDescription for the job being run

  • job_environment (Optional[Dict[str, str]]) – a collection of job-specific environment variables to be set on the worker.

Returns:

a unique job ID number that can be used to reference the newly issued job

Return type:

int

abstract killBatchJobs(jobIDs)[source]

Kills the jobs with the given IDs. After this method returns, the killed jobs will not appear in the results of getRunningBatchJobIDs, and they will not be returned from getUpdatedBatchJob.

Parameters:

jobIDs (List[int]) – list of IDs of jobs to kill

Return type:

None

abstract getIssuedBatchJobIDs()[source]

Gets all currently issued jobs.

Returns:

A list of jobs (as job ID numbers) currently issued (may be running, or may be waiting to be run). Despite the result being a list, the ordering should not be depended upon.

Return type:

List[int]

abstract getRunningBatchJobIDs()[source]

Gets a map of jobs as job ID numbers that are currently running (not just waiting) and how long they have been running, in seconds.

Returns:

dictionary with currently running job ID number keys and how many seconds they have been running as the value

Return type:

Dict[int, float]

abstract getUpdatedBatchJob(maxWait)[source]

Returns information about a job that has updated its status (i.e. ceased running, either successfully or with an error). Each such job will be returned exactly once.

Does not return info for jobs killed by killBatchJobs, although they may cause None to be returned earlier than maxWait.

Parameters:

maxWait (int) – the number of seconds to block, waiting for a result

Returns:

If a result is available, returns UpdatedBatchJobInfo. Otherwise it returns None. wallTime is the number of seconds (a strictly positive float) in wall-clock time the job ran for, or None if this batch system does not support tracking wall time.

Return type:

Optional[UpdatedBatchJobInfo]
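
Together with issueBatchJob(), this supports the leader's issue-and-poll pattern. A hedged sketch against this interface (batch_system, command, and job_desc are assumed to be supplied by the caller; UpdatedBatchJobInfo's jobID and exitStatus fields are used):

    from toil.batchSystems.abstractBatchSystem import AbstractBatchSystem

    def run_one_job(batch_system: AbstractBatchSystem, command: str, job_desc) -> int:
        # Issue the job and remember the batch system's ID number for it.
        batch_id = batch_system.issueBatchJob(command, job_desc)
        while True:
            # Block for up to 10 seconds waiting for any job to change status.
            info = batch_system.getUpdatedBatchJob(maxWait=10)
            if info is None:
                continue  # nothing updated within maxWait; poll again
            if info.jobID == batch_id:
                return info.exitStatus  # exit status of our finished job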

getSchedulingStatusMessage()[source]

Get a log message fragment for the user about anything that might be going wrong in the batch system, if available.

If no useful message is available, return None.

This can be used to report what resource is the limiting factor when scheduling jobs, for example. If the leader thinks the workflow is stuck, the message can be displayed to the user to help them diagnose why it might be stuck.

Returns:

User-directed message about scheduling state.

Return type:

Optional[str]

abstract shutdown()[source]

Called at the completion of a toil invocation. Should cleanly terminate all worker threads.

Return type:

None

abstract setEnv(name, value=None)[source]

Set an environment variable for the worker process before it is launched.

The worker process will typically inherit the environment of the machine it is running on but this method makes it possible to override specific variables in that inherited environment before the worker is launched. Note that this mechanism is different to the one used by the worker internally to set up the environment of a job. A call to this method affects all jobs issued after this method returns. Note to implementors: This means that you would typically need to copy the variables before enqueuing a job.

If no value is provided it will be looked up from the current environment.

Parameters:
  • name (str)

  • value (Optional[str])

Return type:

None

classmethod add_options(parser)[source]

If this batch system provides any command line options, add them to the given parser.

Parameters:

parser (Union[argparse.ArgumentParser, argparse._ArgumentGroup])

Return type:

None

classmethod setOptions(setOption)[source]

Process command line or configuration options relevant to this batch system.

Parameters:

setOption (toil.batchSystems.options.OptionSetter) – A function with signature setOption(option_name, parsing_function=None, check_function=None, default=None, env=None) returning nothing, used to update run configuration as a side effect.

Return type:

None

getWorkerContexts()[source]

Get a list of picklable context manager objects to wrap worker work in, in order.

Can be used to ask the Toil worker to do things in-process (such as configuring environment variables, hot-deploying user scripts, or cleaning up a node) that would otherwise require a wrapping “executor” process.

Return type:

List[ContextManager[Any]]

class toil.leader.BatchJobExitReason[source]

Bases: enum.IntEnum

Enum of the reasons a batch job can exit; members are also (and must be) ints.

FINISHED: int = 1

Successfully finished.

FAILED: int = 2

Job finished, but failed.

LOST: int = 3

Preemptable failure (job’s executing host went away).

KILLED: int = 4

Job killed before finishing.

ERROR: int = 5

Internal error.

MEMLIMIT: int = 6

Job hit batch system imposed memory limit.

MISSING: int = 7

Job disappeared from the scheduler without actually stopping, so Toil killed it.

MAXJOBDURATION: int = 8

Job ran longer than --maxJobDuration, so Toil killed it.

PARTITION: int = 9

Job was not able to talk to the leader via the job store, so Toil declared it failed.

classmethod to_string(value)[source]

Convert to human-readable string.

Given an int that may be or may be equal to a value from the enum, produce the string value of its matching enum entry, or a stringified int.

Parameters:

value (int)

Return type:

str
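
A small sketch of the conversion in both directions:

    from toil.batchSystems.abstractBatchSystem import BatchJobExitReason

    print(BatchJobExitReason.to_string(BatchJobExitReason.FAILED))  # "FAILED"
    print(BatchJobExitReason.to_string(127))  # "127": no matching member, stringified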

toil.leader.EXIT_STATUS_UNAVAILABLE_VALUE = 255
class toil.leader.JobCompletedMessage[source]

Bases: NamedTuple

Produced when a job is completed, whether successful or not.

job_type: str
job_id: str
exit_code: int
class toil.leader.JobFailedMessage[source]

Bases: NamedTuple

Produced when a job is completely failed, and will not be retried again.

job_type: str
job_id: str
class toil.leader.JobIssuedMessage[source]

Bases: NamedTuple

Produced when a job is issued to run on the batch system.

job_type: str
job_id: str
toil_batch_id: int
class toil.leader.JobMissingMessage[source]

Bases: NamedTuple

Produced when a job goes missing and should be in the batch system but isn’t.

job_id: str
class toil.leader.JobUpdatedMessage[source]

Bases: NamedTuple

Produced when a job is “updated” and ready to have something happen to it.

job_id: str
result_status: int
class toil.leader.QueueSizeMessage[source]

Bases: NamedTuple

Produced to describe the size of the queue of jobs issued but not yet completed. Theoretically recoverable from other messages.

queue_size: int
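
These message classes are plain NamedTuples, so they are constructed and read positionally or by field name. A minimal sketch (the job names and IDs are made up for illustration):

    from toil.bus import JobCompletedMessage, JobIssuedMessage

    issued = JobIssuedMessage("align_reads", "job-123", 7)  # job_type, job_id, toil_batch_id
    done = JobCompletedMessage(job_type="align_reads", job_id=issued.job_id, exit_code=0)
    print(done.exit_code)  # fields read like any NamedTuple attribute
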
toil.leader.gen_message_bus_path()[source]

Return a file path in tmp to store the message bus at. The calling function is responsible for cleaning up the generated file.

Return type:

str

toil.leader.get_job_kind(names)[source]

Return an identifying string for the job.

The result may contain spaces.

Returns:

Either the unit name, job name, or display name, which identifies the kind of job it is to Toil; otherwise "Unknown Job" if no identifier is available.

Parameters:

names (Names)

Return type:

str

class toil.leader.Config[source]

Class to represent configuration operations for a toil workflow run.

logFile: str | None
logRotating: bool
cleanWorkDir: str
max_jobs: int
max_local_jobs: int
manualMemArgs: bool
run_local_jobs_on_workers: bool
coalesceStatusCalls: bool
mesos_endpoint: str | None
mesos_framework_id: str | None
mesos_role: str | None
mesos_name: str
kubernetes_host_path: str | None
kubernetes_owner: str | None
kubernetes_service_account: str | None
kubernetes_pod_timeout: float
kubernetes_privileged: bool
tes_endpoint: str
tes_user: str
tes_password: str
tes_bearer_token: str
aws_batch_region: str | None
aws_batch_queue: str | None
aws_batch_job_role_arn: str | None
scale: float
batchSystem: str
batch_logs_dir: str | None

The backing scheduler will be instructed, if possible, to save logs to this directory, where the leader can read them.

statePollingWait: int
state_polling_timeout: int
disableAutoDeployment: bool
workflowID: str | None

This attribute uniquely identifies the job store and therefore the workflow. It is necessary in order to distinguish between two consecutive workflows for which self.jobStore is the same, e.g. when a job store name is reused after a previous run has finished successfully and its job store has been cleaned up.

workflowAttemptNumber: int
jobStore: str
logLevel: str
colored_logs: bool
workDir: str | None
coordination_dir: str | None
noStdOutErr: bool
stats: bool
clean: str | None
clusterStats: str
restart: bool
caching: bool | None
symlinkImports: bool
moveOutputs: bool
provisioner: str | None
nodeTypes: List[Tuple[Set[str], float | None]]
minNodes: List[int]
maxNodes: List[int]
targetTime: float
betaInertia: float
scaleInterval: int
preemptibleCompensation: float
nodeStorage: int
nodeStorageOverrides: List[str]
metrics: bool
assume_zero_overhead: bool
maxPreemptibleServiceJobs: int
maxServiceJobs: int
deadlockWait: float | int
deadlockCheckInterval: float | int
defaultMemory: int
defaultCores: float | int
defaultDisk: int
defaultPreemptible: bool
defaultAccelerators: List[toil.job.AcceleratorRequirement]
maxCores: int
maxMemory: int
maxDisk: int
retryCount: int
enableUnlimitedPreemptibleRetries: bool
doubleMem: bool
maxJobDuration: int
rescueJobsFrequency: int
job_store_timeout: float
maxLogFileSize: int
writeLogs: str
writeLogsGzip: str
writeLogsFromAllJobs: bool
write_messages: str | None
realTimeLogging: bool
environment: Dict[str, str]
disableChaining: bool
disableJobStoreChecksumVerification: bool
sseKey: str | None
servicePollingInterval: int
useAsync: bool
forceDockerAppliance: bool
statusWait: int
disableProgress: bool
readGlobalFileMutableByDefault: bool
debugWorker: bool
disableWorkerOutputCapture: bool
badWorker: float
badWorkerFailInterval: float
kill_polling_interval: int
cwl: bool
set_from_default_config()[source]
Return type:

None

prepare_start()[source]

After options are set, prepare for initial start of workflow.

Return type:

None

prepare_restart()[source]

Before restart options are set, prepare for a restart of a workflow. Set up any execution-specific parameters and clear out any stale ones.

Return type:

None

setOptions(options)[source]

Creates a config object from the options object.

Parameters:

options (argparse.Namespace)

Return type:

None

check_configuration_consistency()[source]

Old checks that cannot fit into an argparse action class.

Return type:

None

__eq__(other)[source]

Return self==value.

Parameters:

other (object)

Return type:

bool

__hash__()[source]

Return hash(self).

Return type:

int
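
A minimal sketch of building a Config directly (Config is defined in toil.common and re-exported here; the job store locator and settings are arbitrary examples):

    from toil.common import Config

    config = Config()
    config.set_from_default_config()          # start from the documented defaults
    config.jobStore = "file:/tmp/demo-store"  # attribute from the list above
    config.logLevel = "INFO"
    config.retryCount = 2
    config.prepare_start()                    # documented above: prepare an initial run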

class toil.leader.ToilMetrics(bus, provisioner=None)[source]
Parameters:
  • bus (toil.bus.MessageBus)

  • provisioner (Optional[AbstractProvisioner])

startDashboard(clusterName, zone)[source]
Parameters:
  • clusterName (str)

  • zone (str)

Return type:

None

add_prometheus_data_source()[source]
Return type:

None

log(message)[source]
Parameters:

message (str)

Return type:

None

logClusterSize(m)[source]
Parameters:

m (toil.bus.ClusterSizeMessage)

Return type:

None

logClusterDesiredSize(m)[source]
Parameters:

m (toil.bus.ClusterDesiredSizeMessage)

Return type:

None

logQueueSize(m)[source]
Parameters:

m (toil.bus.QueueSizeMessage)

Return type:

None

logMissingJob(m)[source]
Parameters:

m (toil.bus.JobMissingMessage)

Return type:

None

logIssuedJob(m)[source]
Parameters:

m (toil.bus.JobIssuedMessage)

Return type:

None

logFailedJob(m)[source]
Parameters:

m (toil.bus.JobFailedMessage)

Return type:

None

logCompletedJob(m)[source]
Parameters:

m (toil.bus.JobCompletedMessage)

Return type:

None

shutdown()[source]
Return type:

None

toil.leader.CWL_UNSUPPORTED_REQUIREMENT_EXIT_CODE = 33
exception toil.leader.FailedJobsException(job_store, failed_jobs, exit_code=1)[source]

Bases: Exception

Raised when a workflow fails because one or more of its jobs failed.

Parameters:
  • job_store (AbstractJobStore)

  • failed_jobs (List[toil.job.JobDescription])

  • exit_code (int)

__str__()[source]

Stringify the exception, including the message.

Return type:

str

class toil.leader.CheckpointJobDescription(*args, **kwargs)[source]

Bases: JobDescription

A description of a job that is a checkpoint.

set_checkpoint()[source]

Save a body checkpoint into self.checkpoint

Return type:

str

restore_checkpoint()[source]

Restore the body checkpoint from self.checkpoint

Return type:

None

restartCheckpoint(jobStore)[source]

Restart a checkpoint after the total failure of jobs in its subtree.

Writes the changes to the jobStore immediately. All the checkpoint’s successors will be deleted, but its try count will not be decreased.

Returns a list with the IDs of any successors deleted.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore)

Return type:

List[str]

class toil.leader.JobDescription(requirements, jobName, unitName='', displayName='', local=None)[source]

Bases: Requirer

Stores all the information that the Toil Leader ever needs to know about a Job.

This includes:
  • Resource requirements.

  • Which jobs are children or follow-ons or predecessors of this job.

  • A reference to the Job object in the job store.

Can be obtained from an actual (i.e. executable) Job object, and can be used to obtain the Job object from the JobStore.

Never contains other Jobs or JobDescriptions: all reference is by ID.

Subclassed into variants for checkpoint jobs and service jobs that have their specific parameters.

Parameters:
  • requirements (Mapping[str, Union[int, str, bool]])

  • jobName (str)

  • unitName (Optional[str])

  • displayName (Optional[str])

  • local (Optional[bool])
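
A hedged sketch of constructing one directly (the requirement keys shown follow Toil's standard requirement names; the values are arbitrary):

    from toil.job import JobDescription

    desc = JobDescription(
        requirements={"memory": 2 * 1024**3, "cores": 1,
                      "disk": 4 * 1024**3, "preemptible": False},
        jobName="align_reads",  # the kind of job this is
        unitName="sample_1",    # which piece of work within that kind
    )
    print(desc.get_names())     # toil.bus.Names tuple, as documented below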

get_names()[source]

Get the names and ID of this job as a named tuple.

Return type:

toil.bus.Names

get_chain()[source]

Get all the jobs that executed in this job’s chain, in order.

For each job, produces a named tuple with its various names and its original job store ID. The jobs in the chain are in execution order.

If the job hasn’t run yet or it didn’t chain, produces a one-item list.

Return type:

List[toil.bus.Names]

serviceHostIDsInBatches()[source]

Find all batches of service host job IDs that can be started at the same time.

(in the order they need to start in)

Return type:

Iterator[List[str]]

successorsAndServiceHosts()[source]

Get an iterator over all child, follow-on, and service job IDs.

Return type:

Iterator[str]

allSuccessors()[source]

Get an iterator over all child, follow-on, and chained, inherited successor job IDs.

Follow-ons will come before children.

Return type:

Iterator[str]

successors_by_phase()[source]

Get an iterator over all child/follow-on/chained inherited successor job IDs, along with their phase number on the stack.

Phases execute from higher numbers to lower numbers.

Return type:

Iterator[Tuple[int, str]]

property services
Get a collection of the IDs of service host jobs for this job, in arbitrary order.

Will be empty if the job has no unfinished services.

has_body()[source]

Returns True if we have a job body associated, and False otherwise.

Return type:

bool

attach_body(file_store_id, user_script)[source]

Attach a job body to this JobDescription.

Takes the file store ID that the body is stored at, and the required user script module.

The file store ID can also be “firstJob” for the root job, stored as a shared file instead.

Parameters:
  • file_store_id (str)

  • user_script (toil.resource.ModuleDescriptor)

Return type:

None

detach_body()[source]

Drop the body reference from a JobDescription.

Return type:

None

get_body()[source]

Get the information needed to load the job body.

Returns:

a file store ID (or magic shared file name “firstJob”) and a user script module.

Return type:

Tuple[str, toil.resource.ModuleDescriptor]

Fails if no body is attached; check has_body() first.
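
Putting the body accessors together (a sketch; desc is a JobDescription as in the construction sketch above):

    if desc.has_body():
        file_store_id, user_module = desc.get_body()
        # file_store_id may be the magic shared file name "firstJob" for the root job
        print(file_store_id, user_module)
    else:
        print("no body attached; get_body() would fail here")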

nextSuccessors()[source]

Return the collection of job IDs for the successors of this job that are ready to run.

If those jobs have multiple predecessor relationships, they may still be blocked on other jobs.

Returns None when at the final phase (all successors done), and an empty collection if there are more phases but they can’t be entered yet (e.g. because we are waiting for the job itself to run).

Return type:

Optional[Set[str]]

filterSuccessors(predicate)[source]

Keep only successor jobs for which the given predicate function approves.

The predicate function is called with the job’s ID.

Treats all other successors as complete and forgets them.

Parameters:

predicate (Callable[[str], bool])

Return type:

None
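
For example, the leader can use this to forget successors whose descriptions no longer exist; a sketch assuming job_store is an AbstractJobStore (see below):

    # Keep only successors that still exist in the job store; all others are
    # treated as complete and forgotten, per the contract above.
    desc.filterSuccessors(lambda job_id: job_store.job_exists(job_id))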

filterServiceHosts(predicate)[source]

Keep only services for which the given predicate approves.

The predicate function is called with the service host job’s ID.

Treats all other services as complete and forgets them.

Parameters:

predicate (Callable[[str], bool])

Return type:

None

clear_nonexistent_dependents(job_store)[source]

Remove all references to child, follow-on, and associated service jobs that do not exist.

That is to say, all those that have been completed and removed.

Parameters:

job_store (toil.jobStores.abstractJobStore.AbstractJobStore)

Return type:

None

clear_dependents()[source]

Remove all references to successor and service jobs.

Return type:

None

is_subtree_done()[source]

Check if the subtree is done.

Returns:

True if the job appears to be done, and all related child, follow-on, and service jobs appear to be finished and removed.

Return type:

bool

replace(other)[source]

Take on the ID of another JobDescription, retaining our own state and type.

When updated in the JobStore, we will save over the other JobDescription.

Useful for chaining jobs: the chained-to job can replace the parent job.

Merges cleanup state and successors other than this job from the job being replaced into this one.

Parameters:

other (JobDescription) – Job description to replace.

Return type:

None

assert_is_not_newer_than(other)[source]

Make sure this JobDescription is not newer than a prospective new version of the JobDescription.

Parameters:

other (JobDescription)

Return type:

None

is_updated_by(other)[source]

Return True if the passed JobDescription is a distinct, newer version of this one.

Parameters:

other (JobDescription)

Return type:

bool

addChild(childID)[source]

Make the job with the given ID a child of the described job.

Parameters:

childID (str)

Return type:

None

addFollowOn(followOnID)[source]

Make the job with the given ID a follow-on of the described job.

Parameters:

followOnID (str)

Return type:

None
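
A short sketch of wiring the graph by ID (the IDs here are placeholders; real ones come from registered JobDescriptions):

    desc.addChild("child-job-id")         # runs after this job's body
    desc.addFollowOn("follow-on-job-id")  # runs after this job and all its children
    assert desc.hasChild("child-job-id")  # membership tests documented below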

addServiceHostJob(serviceID, parentServiceID=None)[source]

Make the ServiceHostJob with the given ID a service of the described job.

If a parent ServiceHostJob ID is given, that parent service will be started first, and must have already been added.

hasChild(childID)[source]

Return True if the job with the given ID is a child of the described job.

Parameters:

childID (str)

Return type:

bool

hasFollowOn(followOnID)[source]

Test if the job with the given ID is a follow-on of the described job.

Parameters:

followOnID (str)

Return type:

bool

hasServiceHostJob(serviceID)[source]

Test if the ServiceHostJob is a service of the described job.

Return type:

bool

renameReferences(renames)[source]

Apply the given dict of ID renames to all references to jobs.

Does not modify our own ID or those of finished predecessors. IDs not present in the renames dict are left as-is.

Parameters:

renames (Dict[TemporaryID, str]) – Rename operations to apply.

Return type:

None

addPredecessor()[source]

Notify the JobDescription that a predecessor has been added to its Job.

Return type:

None

onRegistration(jobStore)[source]

Perform setup work that requires the JobStore.

Called by the Job saving logic when this JobDescription meets the JobStore and has its ID assigned.

Overridden to perform setup work (like hooking up flag files for service jobs) that requires the JobStore.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore) – The job store we are being placed into

Return type:

None

setupJobAfterFailure(exit_status=None, exit_reason=None)[source]

Configure job after a failure.

Reduce the remainingTryCount if greater than zero and set the memory to be at least as big as the default memory (in case of exhaustion of memory, which is common).

Requires a configuration to have been assigned (see toil.job.Requirer.assignConfig()).

Parameters:
  • exit_status (Optional[int])

  • exit_reason (Optional[BatchJobExitReason])

Return type:

None

getLogFileHandle(jobStore)[source]

Create a context manager that yields a file handle to the log file.

Assumes logJobStoreFileID is set.

property remainingTryCount
Get the number of tries remaining.

The try count set on the JobDescription, or the default based on the retry count from the config if none is set.

clearRemainingTryCount()[source]

Clear remainingTryCount and set it back to its default value.

Returns:

True if a modification to the JobDescription was made, and False otherwise.

Return type:

bool

__str__()[source]

Produce a useful logging string identifying this job.

Return type:

str

__repr__()[source]

Return repr(self).

reserve_versions(count)[source]

Reserve a job version number for later, for journaling asynchronously.

Parameters:

count (int)

Return type:

None

pre_update_hook()[source]

Run before pickling and saving a created or updated version of this job.

Called by the job store.

Return type:

None

class toil.leader.ServiceJobDescription(*args, **kwargs)[source]

Bases: JobDescription

A description of a job that hosts a service.

onRegistration(jobStore)[source]

Setup flag files.

When a ServiceJobDescription first meets the JobStore, it needs to set up its flag files.

class toil.leader.TemporaryID[source]

Placeholder for an unregistered job ID used by a JobDescription.

Needs to be held:
  • By JobDescription objects to record normal relationships.

  • By Jobs to key their connected-component registries and to record predecessor relationships to facilitate EncapsulatedJob adding itself as a child.

  • By Services to tie back to their hosting jobs, so the service tree can be built up from Service objects.

__str__()[source]

Return str(self).

Return type:

str

__repr__()[source]

Return repr(self).

Return type:

str

__hash__()[source]

Return hash(self).

Return type:

int

__eq__(other)[source]

Return self==value.

Parameters:

other (Any)

Return type:

bool

__ne__(other)[source]

Return self!=value.

Parameters:

other (Any)

Return type:

bool

class toil.leader.AbstractJobStore(locator)[source]

Bases: abc.ABC

Represents the physical storage for the jobs and files in a Toil workflow.

JobStores are responsible for storing toil.job.JobDescription (which relate jobs to each other) and files.

Actual toil.job.Job objects are stored in files, referenced by JobDescriptions. All the non-file CRUD methods the JobStore provides deal in JobDescriptions and not full, executable Jobs.

To actually get ahold of a toil.job.Job, use toil.job.Job.loadJob() with a JobStore and the relevant JobDescription.

Parameters:

locator (str)

initialize(config)[source]

Initialize this job store.

Create the physical storage for this job store, allocate a workflow ID and persist the given Toil configuration to the store.

Parameters:

config (toil.common.Config) – the Toil configuration to initialize this job store with. The given configuration will be updated with the newly allocated workflow ID.

Raises:

JobStoreExistsException – if the physical storage for this job store already exists

Return type:

None

writeConfig()[source]
Return type:

None

write_config()[source]

Persists the value of the AbstractJobStore.config attribute to the job store, so that it can be retrieved later by other instances of this class.

Return type:

None

resume()[source]

Connect this instance to the physical storage it represents and load the Toil configuration into the AbstractJobStore.config attribute.

Raises:

NoSuchJobStoreException – if the physical storage for this job store doesn’t exist

Return type:

None
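
A sketch of the two entry paths, using the file-backed implementation (FileJobStore and the on-disk path are assumptions for the example):

    from toil.common import Config
    from toil.jobStores.abstractJobStore import JobStoreExistsException
    from toil.jobStores.fileJobStore import FileJobStore

    store = FileJobStore("/tmp/demo-store")
    config = Config()
    config.set_from_default_config()

    try:
        store.initialize(config)       # first run: create the physical storage
    except JobStoreExistsException:
        store.resume()                 # storage already exists: reconnect instead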

property config: toil.common.Config

Return the Toil configuration associated with this job store.

Return type:

toil.common.Config

property locator: str

Get the locator that defines the job store, which can be used to connect to it.

Return type:

str

rootJobStoreIDFileName = 'rootJobStoreID'
setRootJob(rootJobStoreID)[source]

Set the root job of the workflow backed by this job store.

Parameters:

rootJobStoreID (toil.fileStores.FileID)

Return type:

None

set_root_job(job_id)[source]

Set the root job of the workflow backed by this job store.

Parameters:

job_id (toil.fileStores.FileID) – The ID of the job to set as root

Return type:

None

loadRootJob()[source]
Return type:

toil.job.JobDescription

load_root_job()[source]

Loads the JobDescription for the root job in the current job store.

Raises:

toil.job.JobException – If no root job is set or if the root job doesn’t exist in this job store

Returns:

The root job.

Return type:

toil.job.JobDescription

createRootJob(desc)[source]
Parameters:

desc (toil.job.JobDescription)

Return type:

toil.job.JobDescription

create_root_job(job_description)[source]

Create the given JobDescription and set it as the root job in this job store.

Parameters:

job_description (toil.job.JobDescription) – JobDescription to save and make the root job.

Return type:

toil.job.JobDescription
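
A sketch of root-job bookkeeping, continuing with store and desc from the sketches above:

    root = store.create_root_job(desc)   # save desc and record it as the root
    reloaded = store.load_root_job()     # raises toil.job.JobException if unset
    print(reloaded.jobStoreID == root.jobStoreID)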

getRootJobReturnValue()[source]
Return type:

Any

get_root_job_return_value()[source]

Parse the return value from the root job.

Raises an exception if the root job hasn’t fulfilled its promise yet.

Return type:

Any

importFile(srcUrl: str, sharedFileName: str, hardlink: bool = False, symlink: bool = True) None[source]
importFile(srcUrl: str, sharedFileName: None = None, hardlink: bool = False, symlink: bool = True) toil.fileStores.FileID
import_file(src_uri: str, shared_file_name: str, hardlink: bool = False, symlink: bool = True) None[source]
import_file(src_uri: str, shared_file_name: None = None, hardlink: bool = False, symlink: bool = True) toil.fileStores.FileID

Imports the file at the given URL into the job store. The ID of the newly imported file is returned. If a shared file name is provided, the file will be imported under that name and None is returned. If an executable file on the local filesystem is uploaded, its executability will be preserved when it is downloaded.

Currently supported schemes are 's3' (e.g. s3://bucket/key), 'file' (e.g. file:///local/file/path), 'http' (e.g. http://someurl.com/path), and 'gs' (e.g. gs://bucket/file).

Raises FileNotFoundError if the file does not exist.

Parameters:
  • src_uri (str) – URL that points to a file or object in the storage mechanism of a supported URL scheme e.g. a blob in an AWS s3 bucket. It must be a file, not a directory or prefix.

  • shared_file_name (str) – Optional name to assign to the imported file within the job store

Returns:

The jobStoreFileID of the imported file or None if shared_file_name was given

Return type:

toil.fileStores.FileID or None

exportFile(jobStoreFileID, dstUrl)[source]
Parameters:
  • jobStoreFileID (str)

  • dstUrl (str)

Return type:

None

export_file(file_id, dst_uri)[source]

Exports file to destination pointed at by the destination URL. The exported file will be executable if and only if it was originally uploaded from an executable file on the local filesystem.

Refer to AbstractJobStore.import_file() documentation for currently supported URL schemes.

Note that the helper method _exportFile is used to read from the source and write to destination. To implement any optimizations that circumvent this, the _exportFile method should be overridden by subclasses of AbstractJobStore.

Parameters:
  • file_id (str) – The id of the file in the job store that should be exported.

  • dst_uri (str) – URL that points to a file or object in the storage mechanism of a supported URL scheme e.g. a blob in an AWS s3 bucket.

Return type:

None
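
Round-tripping a local file through the job store (the paths are illustrative):

    file_id = store.import_file("file:///tmp/input.txt")       # returns a FileID
    store.export_file(file_id, "file:///tmp/input-copy.txt")   # write it back out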

classmethod url_exists(src_uri)[source]

Return True if the file at the given URI exists, and False otherwise.

Parameters:

src_uri (str) – URL that points to a file or object in the storage mechanism of a supported URL scheme e.g. a blob in an AWS s3 bucket.

Return type:

bool

classmethod get_size(src_uri)[source]

Get the size in bytes of the file at the given URL, or None if it cannot be obtained.

Parameters:

src_uri (str) – URL that points to a file or object in the storage mechanism of a supported URL scheme e.g. a blob in an AWS s3 bucket.

Return type:

Optional[int]

classmethod get_is_directory(src_uri)[source]

Return True if the thing at the given URL is a directory, and False if it is a file. The URL may or may not end in ‘/’.

Parameters:

src_uri (str)

Return type:

bool

classmethod list_url(src_uri)[source]

List the directory at the given URL. Returned path components can be joined with ‘/’ onto the passed URL to form new URLs. Those that end in ‘/’ correspond to directories. The provided URL may or may not end with ‘/’.

Refer to the AbstractJobStore.import_file() documentation for currently supported URL schemes.

Parameters:

src_uri (str) – URL that points to a directory or prefix in the storage mechanism of a supported URL scheme e.g. a prefix in an AWS s3 bucket.

Returns:

A list of URL components in the given directory, already URL-encoded.

Return type:

List[str]

classmethod read_from_url(src_uri, writable)[source]

Read the given URL and write its content into the given writable stream.

Raises FileNotFoundError if the URL doesn’t exist.

Returns:

The size of the file in bytes and whether the executable permission bit is set

Parameters:
  • src_uri (str)

  • writable (IO[bytes])

Return type:

Tuple[int, bool]

classmethod open_url(src_uri)[source]

Read from the given URI.

Raises FileNotFoundError if the URL doesn’t exist.

Has a readable stream interface, unlike read_from_url() which takes a writable stream.

Parameters:

src_uri (str)

Return type:

IO[bytes]
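
Both URL readers are classmethods that dispatch on the URL scheme; a sketch using a file: URL (the path is illustrative):

    import sys
    from toil.jobStores.abstractJobStore import AbstractJobStore

    # Stream the URL's content into a writable; returns (size, is_executable).
    size, executable = AbstractJobStore.read_from_url(
        "file:///tmp/input.txt", sys.stdout.buffer)

    # Or obtain a readable byte stream directly.
    with AbstractJobStore.open_url("file:///tmp/input.txt") as stream:
        first_line = stream.readline()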

abstract destroy()[source]

The inverse of initialize(): this method deletes the physical storage represented by this instance. While not atomic, this method is at least idempotent, as a means to counteract potential issues with eventual consistency exhibited by the underlying storage mechanisms. This means that if the method fails (raises an exception), it may (and should) be invoked again. If the underlying storage mechanism is eventually consistent, even a successful invocation is not an ironclad guarantee that the physical storage vanished completely and immediately. A successful invocation only guarantees that the deletion will eventually happen. It is therefore recommended to not immediately reuse the same job store location for a new Toil workflow.

Return type:

None

getEnv()[source]
Return type:

Dict[str, str]

get_env()[source]

Returns a dictionary of environment variables that this job store requires to be set in order to function properly on a worker.

Return type:

Dict[str, str]

clean(jobCache=None)[source]

Function to cleanup the state of a job store after a restart.

Fixes jobs that might have been partially updated. Resets the try counts and removes jobs that are not successors of the current root job.

Parameters:

jobCache (Optional[Dict[Union[str, toil.job.TemporaryID], toil.job.JobDescription]]) – if a value it must be a dict from job ID keys to JobDescription object values. Jobs will be loaded from the cache (which can be downloaded from the job store in a batch) instead of piecemeal when recursed into.

Return type:

toil.job.JobDescription

assignID(jobDescription)[source]
Parameters:

jobDescription (toil.job.JobDescription)

Return type:

None

abstract assign_job_id(job_description)[source]

Get a new jobStoreID to be used by the described job, and assigns it to the JobDescription.

Files associated with the assigned ID will be accepted even if the JobDescription has never been created or updated.

Parameters:

job_description (toil.job.JobDescription) – The JobDescription to give an ID to

Return type:

None

batch()[source]

If supported by the job store, calls to create() with this context manager active will be performed in a batch after the context manager is released.

Return type:

Iterator[None]

create(jobDescription)[source]
Parameters:

jobDescription (toil.job.JobDescription)

Return type:

toil.job.JobDescription

abstract create_job(job_description)[source]

Writes the given JobDescription to the job store. The job must have an ID assigned already.

Must call jobDescription.pre_update_hook()

Returns:

The JobDescription passed.

Return type:

toil.job.JobDescription

Parameters:

job_description (toil.job.JobDescription)

exists(jobStoreID)[source]
Parameters:

jobStoreID (str)

Return type:

bool

abstract job_exists(job_id)[source]

Indicates whether a description of the job with the specified jobStoreID exists in the job store.

Return type:

bool

Parameters:

job_id (str)

publicUrlExpiration
getPublicUrl(fileName)[source]
Parameters:

fileName (str)

Return type:

str

abstract get_public_url(file_name)[source]

Returns a publicly accessible URL to the given file in the job store. The returned URL may expire as early as 1h after it has been returned. Throws an exception if the file does not exist.

Parameters:

file_name (str) – the jobStoreFileID of the file to generate a URL for

Raises:

NoSuchFileException – if the specified file does not exist in this job store

Return type:

str

getSharedPublicUrl(sharedFileName)[source]
Parameters:

sharedFileName (str)

Return type:

str

abstract get_shared_public_url(shared_file_name)[source]

Differs from getPublicUrl() in that this method is for generating URLs for shared files written by writeSharedFileStream().

Returns a publicly accessible URL to the given file in the job store. The returned URL starts with 'http:', 'https:' or 'file:'. The returned URL may expire as early as 1h after it has been returned. Throws an exception if the file does not exist.

Parameters:

shared_file_name (str) – The name of the shared file to generate a publicly accessible URL for.

Raises:

NoSuchFileException – raised if the specified file does not exist in the store

Return type:

str

load(jobStoreID)[source]
Parameters:

jobStoreID (str)

Return type:

toil.job.JobDescription

abstract load_job(job_id)[source]

Loads the description of the job referenced by the given ID, assigns it the job store’s config, and returns it.

May declare the job to have failed (see toil.job.JobDescription.setupJobAfterFailure()) if there is evidence of a failed update attempt.

Parameters:

job_id (str) – the ID of the job to load

Raises:

NoSuchJobException – if there is no job with the given ID

Return type:

toil.job.JobDescription

update(jobDescription)[source]
Parameters:

jobDescription (toil.job.JobDescription)

Return type:

None

abstract update_job(job_description)[source]

Persists changes to the state of the given JobDescription in this store atomically.

Must call jobDescription.pre_update_hook()

Parameters:
Return type:

None

delete(jobStoreID)[source]
Parameters:

jobStoreID (str)

Return type:

None

abstract delete_job(job_id)[source]

Removes the JobDescription from the store atomically. You may not then subsequently call load(), write(), update(), etc. with the same jobStoreID or any JobDescription bearing it.

This operation is idempotent, i.e. deleting a job twice or deleting a non-existent job will succeed silently.

Parameters:

job_id (str) – the ID of the job to delete from this job store

Return type:

None
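
The per-job CRUD methods compose into the usual lifecycle; a sketch continuing with store and desc from the sketches above:

    store.assign_job_id(desc)            # give the description a real jobStoreID
    desc = store.create_job(desc)        # persist it (calls pre_update_hook())
    loaded = store.load_job(desc.jobStoreID)
    store.update_job(loaded)             # persist any changes atomically
    store.delete_job(loaded.jobStoreID)  # idempotent, per the contract below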

abstract jobs()[source]

Best-effort attempt to return an iterator over the JobDescriptions for all jobs in the store. The iterator may not return all jobs, and may also contain orphaned jobs that have already finished successfully and should not be rerun. To guarantee you get any and all jobs that can be run, construct a more expensive ToilState object instead.

Returns:

Returns an iterator over jobs in the store. The iterator may or may not contain all jobs, and may contain invalid jobs.

Return type:

Iterator[toil.job.JobDescription]

writeFile(localFilePath, jobStoreID=None, cleanup=False)[source]
Parameters:
  • localFilePath (str)

  • jobStoreID (Optional[str])

  • cleanup (bool)

Return type:

str

abstract write_file(local_path, job_id=None, cleanup=False)[source]

Takes a file (as a path) and places it in this job store. Returns an ID that can be used to retrieve the file at a later time. The file is written in an atomic manner. It will not appear in the jobStore until the write has successfully completed.

Parameters:
  • local_path (str) – the path to the local file that will be uploaded to the job store. The last path component (basename of the file) will remain associated with the file in the file store, if supported, so that the file can be searched for by name or name glob.

  • job_id (str) – the id of a job, or None. If specified, the file may be associated with that job in a job-store-specific way. This may influence the returned ID.

  • cleanup (bool) – Whether to attempt to delete the file when the job whose jobStoreID was given as jobStoreID is deleted with jobStore.delete(job). If jobStoreID was not given, does nothing.

Raises:

ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method. FIXME: some implementations may not raise this.

Returns:

an ID that references the newly created file and can be used to read the file in the future.

Return type:

str

Parameters:
  • local_path (str)

  • job_id (Optional[str])

  • cleanup (bool)

writeFileStream(jobStoreID=None, cleanup=False, basename=None, encoding=None, errors=None)[source]
Parameters:
  • jobStoreID (Optional[str])

  • cleanup (bool)

  • basename (Optional[str])

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

ContextManager[Tuple[IO[bytes], str]]

abstract write_file_stream(job_id=None, cleanup=False, basename=None, encoding=None, errors=None)[source]

Similar to writeFile, but returns a context manager yielding a tuple of 1) a file handle which can be written to and 2) the ID of the resulting file in the job store. The yielded file handle does not need to and should not be closed explicitly. The file is written in an atomic manner. It will not appear in the jobStore until the write has successfully completed.

Parameters:
  • job_id (str) – the id of a job, or None. If specified, the file may be associated with that job in a job-store-specific way. This may influence the returned ID.

  • cleanup (bool) – Whether to attempt to delete the file when the job whose jobStoreID was given as jobStoreID is deleted with jobStore.delete(job). If jobStoreID was not given, does nothing.

  • basename (str) – If supported by the implementation, use the given file basename so that when searching the job store with a query matching that basename, the file will be detected.

  • encoding (str) – the name of the encoding used to encode the file. Encodings are the same as for encode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Raises:

ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method. FIXME: some implementations may not raise this.

Returns:

a context manager yielding a file handle which can be written to and an ID that references the newly created file and can be used to read the file in the future.

Return type:

Iterator[Tuple[IO[bytes], str]]

Parameters:
  • job_id (Optional[str])

  • cleanup (bool)

  • basename (Optional[str])

  • encoding (Optional[str])

  • errors (Optional[str])

getEmptyFileStoreID(jobStoreID=None, cleanup=False, basename=None)[source]
Parameters:
  • jobStoreID (Optional[str])

  • cleanup (bool)

  • basename (Optional[str])

Return type:

str

abstract get_empty_file_store_id(job_id=None, cleanup=False, basename=None)[source]

Creates an empty file in the job store and returns its ID. A call to fileExists(getEmptyFileStoreID(jobStoreID)) will return True.

Parameters:
  • job_id (str) – the id of a job, or None. If specified, the file may be associated with that job in a job-store-specific way. This may influence the returned ID.

  • cleanup (bool) – Whether to attempt to delete the file when the job whose jobStoreID was given as jobStoreID is deleted with jobStore.delete(job). If jobStoreID was not given, does nothing.

  • basename (str) – If supported by the implementation, use the given file basename so that when searching the job store with a query matching that basename, the file will be detected.

Returns:

a jobStoreFileID that references the newly created file and can be used to reference the file in the future.

Return type:

str

readFile(jobStoreFileID, localFilePath, symlink=False)[source]
Parameters:
  • jobStoreFileID (str)

  • localFilePath (str)

  • symlink (bool)

Return type:

None

abstract read_file(file_id, local_path, symlink=False)[source]

Copies or hard links the file referenced by jobStoreFileID to the given local file path. The version will be consistent with the last copy of the file written/updated. If the file in the job store is later modified via updateFile or updateFileStream, it is implementation-defined whether those writes will be visible at localFilePath. The file is copied in an atomic manner. It will not appear in the local file system until the copy has completed.

The file at the given local path may not be modified after this method returns!

Note! Implementations of readFile need to respect/provide the executable attribute on FileIDs.

Parameters:
  • file_id (str) – ID of the file to be copied

  • local_path (str) – the local path indicating where to place the contents of the given file in the job store

  • symlink (bool) – whether the reader can tolerate a symlink. If set to true, the job store may create a symlink instead of a full copy of the file or a hard link.

Return type:

None

readFileStream(jobStoreFileID, encoding=None, errors=None)[source]
Parameters:
  • jobStoreFileID (str)

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

Union[ContextManager[IO[bytes]], ContextManager[IO[str]]]

read_file_stream(file_id: toil.fileStores.FileID | str, encoding: Literal[None] = None, errors: str | None = None) ContextManager[IO[bytes]][source]
read_file_stream(file_id: toil.fileStores.FileID | str, encoding: str, errors: str | None = None) ContextManager[IO[str]]

Similar to readFile, but returns a context manager yielding a file handle which can be read from. The yielded file handle does not need to and should not be closed explicitly.

Parameters:
  • file_id (str) – ID of the file to get a readable file handle for

  • encoding (str) – the name of the encoding used to decode the file. Encodings are the same as for decode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Returns:

a context manager yielding a file handle which can be read from

Return type:

Iterator[Union[IO[bytes], IO[str]]]
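
The path-based and stream-based file APIs combine as in this sketch (cleanup=True ties the files' lifetimes to the owning job, as documented above; store and desc continue from earlier sketches):

    # Path-based write: upload an existing local file.
    file_id = store.write_file("/tmp/results.txt",
                               job_id=desc.jobStoreID, cleanup=True)

    # Stream-based write: no local copy needed.
    with store.write_file_stream(job_id=desc.jobStoreID, cleanup=True) as (out, new_id):
        out.write(b"some bytes")

    # Read back, either to a local path or as a decoded text stream.
    store.read_file(file_id, "/tmp/results-copy.txt")
    with store.read_file_stream(new_id, encoding="utf-8") as reader:
        text = reader.read()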

deleteFile(jobStoreFileID)[source]
Parameters:

jobStoreFileID (str)

Return type:

None

abstract delete_file(file_id)[source]

Deletes the file with the given ID from this job store. This operation is idempotent, i.e. deleting a file twice or deleting a non-existent file will succeed silently.

Parameters:

file_id (str) – ID of the file to delete

Return type:

None

fileExists(jobStoreFileID)[source]

Determine whether a file exists in this job store.

Parameters:

jobStoreFileID (str)

Return type:

bool

abstract file_exists(file_id)[source]

Determine whether a file exists in this job store.

Parameters:

file_id (str) – an ID referencing the file to be checked

Return type:

bool

getFileSize(jobStoreFileID)[source]

Get the size of the given file in bytes.

Parameters:

jobStoreFileID (str)

Return type:

int

abstract get_file_size(file_id)[source]

Get the size of the given file in bytes, or 0 if it does not exist when queried.

Note that job stores which encrypt files might return overestimates of file sizes, since the encrypted file may have been padded to the nearest block, augmented with an initialization vector, etc.

Parameters:

file_id (str) – an ID referencing the file to be checked

Return type:

int

updateFile(jobStoreFileID, localFilePath)[source]

Replaces the existing version of a file in the job store.

Parameters:
  • jobStoreFileID (str)

  • localFilePath (str)

Return type:

None

abstract update_file(file_id, local_path)[source]

Replaces the existing version of a file in the job store.

Throws an exception if the file does not exist.

Parameters:
  • file_id (str) – the ID of the file in the job store to be updated

  • local_path (str) – the local path to a file that will overwrite the current version in the job store

Return type:

None

updateFileStream(jobStoreFileID, encoding=None, errors=None)[source]
Parameters:
  • jobStoreFileID (str)

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

ContextManager[IO[Any]]

abstract update_file_stream(file_id, encoding=None, errors=None)[source]

Replaces the existing version of a file in the job store. Similar to writeFile, but returns a context manager yielding a file handle which can be written to. The yielded file handle does not need to and should not be closed explicitly.

Parameters:
  • file_id (str) – the ID of the file in the job store to be updated

  • encoding (str) – the name of the encoding used to encode the file. Encodings are the same as for encode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Return type:

Iterator[IO[Any]]

sharedFileNameRegex
writeSharedFileStream(sharedFileName, isProtected=None, encoding=None, errors=None)[source]
Parameters:
  • sharedFileName (str)

  • isProtected (Optional[bool])

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

ContextManager[IO[bytes]]

abstract write_shared_file_stream(shared_file_name, encrypted=None, encoding=None, errors=None)[source]

Returns a context manager yielding a writable file handle to the global file referenced by the given name. File will be created in an atomic manner.

Parameters:
  • shared_file_name (str) – A file name matching AbstractJobStore.fileNameRegex, unique within this job store

  • encrypted (bool) – True if the file must be encrypted, None if it may be encrypted or False if it must be stored in the clear.

  • encoding (str) – the name of the encoding used to encode the file. Encodings are the same as for encode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Raises:

ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method

Returns:

a context manager yielding a writable file handle

Return type:

Iterator[IO[bytes]]

readSharedFileStream(sharedFileName, encoding=None, errors=None)[source]
Parameters:
  • sharedFileName (str)

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

ContextManager[IO[bytes]]

abstract read_shared_file_stream(shared_file_name, encoding=None, errors=None)[source]

Returns a context manager yielding a readable file handle to the global file referenced by the given name.

Parameters:
  • shared_file_name (str) – A file name matching AbstractJobStore.fileNameRegex, unique within this job store

  • encoding (str) – the name of the encoding used to decode the file. Encodings are the same as for decode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Returns:

a context manager yielding a readable file handle

Return type:

Iterator[IO[bytes]]
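
Shared files are addressed by name rather than by ID; a sketch (the name is an example and must match the job store's shared file name pattern):

    with store.write_shared_file_stream("run.settings") as out:
        out.write(b"key=value\n")

    with store.read_shared_file_stream("run.settings", encoding="utf-8") as f:
        settings = f.read()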

writeStatsAndLogging(statsAndLoggingString)[source]
Parameters:

statsAndLoggingString (str)

Return type:

None

abstract write_logs(msg)[source]

Stores a message as a log in the jobstore.

Parameters:

msg (str) – the string to be written

Raises:

ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method

Return type:

None

readStatsAndLogging(callback, readAll=False)[source]
Parameters:
  • callback (Callable[Ellipsis, Any])

  • readAll (bool)

Return type:

int

abstract read_logs(callback, read_all=False)[source]

Reads logs accumulated by the write_logs() method. For each log this method calls the given callback function with the message as an argument (rather than returning logs directly, this method must be supplied with a callback which will process log messages).

Only unread logs will be read unless the read_all parameter is set.

Parameters:
  • callback (Callable) – a function to be applied to each of the stats file handles found

  • read_all (bool) – a boolean indicating whether to read the already processed stats files in addition to the unread stats files

Raises:

ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method

Returns:

the number of stats files processed

Return type:

int
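
write_logs() and read_logs() pair as producer and consumer; a sketch that collects each message with a callback, per the description above:

    store.write_logs("worker 1: job finished in 3.2s")

    collected = []
    processed = store.read_logs(collected.append)  # callback gets each message
    print(processed, collected)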

write_leader_pid()[source]

Write the pid of this process to a file in the job store.

Overwriting the current contents of pid.log is a feature, not a bug of this method. Other methods will rely on always having the most current pid available. So far there is no reason to store any old pids.

Return type:

None

read_leader_pid()[source]

Read the pid of the leader process from a file in the job store.

Raises:

NoSuchFileException – If the PID file doesn’t exist.

Return type:

int

write_leader_node_id()[source]

Write the leader node id to the job store. This should only be called by the leader.

Return type:

None

read_leader_node_id()[source]

Read the leader node id stored in the job store.

Raises:

NoSuchFileException – If the node ID file doesn’t exist.

Return type:

str

write_kill_flag(kill=False)[source]

Write a file inside the job store that serves as a kill flag.

The initialized file contains the characters “NO”. This should only be changed when the user runs the “toil kill” command.

Changing this file to a “YES” triggers a kill of the leader process. The workers are expected to be cleaned up by the leader.

Parameters:

kill (bool)

Return type:

None

read_kill_flag()[source]

Read the kill flag from the job store, and return True if the leader has been killed. False otherwise.

Return type:

bool
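
A sketch of the flag's lifecycle as described above (the leader initializes it; the toil kill command flips it):

    store.write_kill_flag()      # writes "NO"; only `toil kill` should set "YES"

    if store.read_kill_flag():   # True once the flag says the leader was killed
        raise SystemExit("leader killed via 'toil kill'")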

default_caching()[source]

The job store's preference as to whether it likes caching or doesn't care about it. Some job stores benefit from caching; however, on some local configurations caching can be flaky.

see https://github.com/DataBiosphere/toil/issues/4218

Return type:

bool

exception toil.leader.NoSuchJobException(jobStoreID)[source]

Bases: Exception

Indicates that the specified job does not exist.

Parameters:

jobStoreID (toil.fileStores.FileID)

class toil.leader.LocalThrottle(min_interval)[source]

A thread-safe rate limiter that throttles each thread independently. Can be used as a function or method decorator or as a simple object, via its .throttle() method.

The use as a decorator is deprecated in favor of throttle().

Parameters:

min_interval (int)

throttle(wait=True)[source]

If the wait parameter is True, this method returns True after suspending the current thread as necessary to ensure that no less than the configured minimum interval has passed since the last invocation of this method in the current thread returned True.

If the wait parameter is False, this method immediately returns True (if at least the configured minimum interval has passed since the last time this method returned True in the current thread) or False otherwise.

Parameters:

wait (bool)

Return type:

bool

__call__(function)[source]
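
The decorator form (__call__) is deprecated, so new code should use throttle() directly. A sketch of the non-blocking form (LocalThrottle also lives in toil.lib.throttle; the work inside the loop is hypothetical):

    import time
    from toil.lib.throttle import LocalThrottle

    throttle = LocalThrottle(min_interval=10)

    for _ in range(30):
        if throttle.throttle(wait=False):     # True at most once per 10s per thread
            print("doing rate-limited work")  # hypothetical work goes here
        time.sleep(1)
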
class toil.leader.AbstractProvisioner(clusterName=None, clusterType='mesos', zone=None, nodeStorage=50, nodeStorageOverrides=None, enable_fuse=False)[source]

Bases: abc.ABC

Interface for provisioning worker nodes to use in a Toil cluster.

Parameters:
  • clusterName (Optional[str])

  • clusterType (Optional[str])

  • zone (Optional[str])

  • nodeStorage (int)

  • nodeStorageOverrides (Optional[List[str]])

  • enable_fuse (bool)

LEADER_HOME_DIR = '/root/'
cloud: str = None
abstract supportedClusterTypes()[source]

Get all the cluster types that this provisioner implementation supports.

Return type:

Set[str]

abstract createClusterSettings()[source]

Initialize class for a new cluster, to be deployed, when running outside the cloud.

abstract readClusterSettings()[source]

Initialize class from an existing cluster. This method assumes that the instance we are running on is the leader.

Implementations must call _setLeaderWorkerAuthentication().

setAutoscaledNodeTypes(nodeTypes)[source]

Set node types, shapes and spot bids for Toil-managed autoscaling.

Parameters:

nodeTypes (List[Tuple[Set[str], Optional[float]]]) – A list of node types, as parsed with parse_node_types.
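
The expected structure is easiest to see with an example; the instance type names and spot bid below are illustrative, and provisioner stands for any concrete AbstractProvisioner:

node_types = [
    ({"t2.medium"}, None),              # one on-demand type, no spot bid
    ({"m5.large", "m5a.large"}, 0.05),  # interchangeable spot types, $0.05 bid
]
provisioner.setAutoscaledNodeTypes(node_types)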

hasAutoscaledNodeTypes()[source]

Check if node types have been configured on the provisioner (via setAutoscaledNodeTypes).

Returns:

True if node types are configured for autoscaling, and false otherwise.

Return type:

bool

getAutoscaledInstanceShapes()[source]

Get all the node shapes and their named instance types that the Toil autoscaler should manage.

Return type:

Dict[Shape, str]

static retryPredicate(e)[source]

Return true if the exception e should be retried by the cluster scaler. For example, should return true if the exception was due to exceeding an API rate limit. The error will be retried with exponential backoff.

Parameters:

e – exception raised during execution of setNodeCount

Returns:

boolean indicating whether the exception e should be retried
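
As a sketch of what an implementation might do (RateLimitError is a hypothetical stand-in for a cloud SDK's rate-limit exception; other abstract methods are omitted):

from toil.leader import AbstractProvisioner

class RateLimitError(Exception):
    """Hypothetical stand-in for a cloud SDK rate-limit exception."""

class ExampleProvisioner(AbstractProvisioner):  # other abstract methods omitted
    @staticmethod
    def retryPredicate(e):
        # The cluster scaler retries setNodeCount with exponential
        # backoff whenever this returns True.
        return isinstance(e, RateLimitError)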

abstract launchCluster(*args, **kwargs)[source]

Initialize a cluster and create a leader node.

Implementations must call _setLeaderWorkerAuthentication() with the leader so that workers can be launched.

Parameters:
  • leaderNodeType – The leader instance.

  • leaderStorage – The amount of disk to allocate to the leader in gigabytes.

  • owner – Tag identifying the owner of the instances.

abstract addNodes(nodeTypes, numNodes, preemptible, spotBid=None)[source]

Used to add worker nodes to the cluster.

Parameters:
  • numNodes (int) – The number of nodes to add

  • preemptible (bool) – whether or not the nodes will be preemptible

  • spotBid (Optional[float]) – The bid for preemptible nodes if applicable (this can be set in config, also).

  • nodeTypes (Set[str])

Returns:

number of nodes successfully added

Return type:

int

addManagedNodes(nodeTypes, minNodes, maxNodes, preemptible, spotBid=None)[source]

Add a group of managed nodes of the given type, up to the given maximum. The nodes will automatically be launched and terminated depending on cluster load.

Raises ManagedNodesNotSupportedException if the provisioner implementation or cluster configuration can’t have managed nodes.

Parameters:
  • minNodes – The minimum number of nodes to scale to

  • maxNodes – The maximum number of nodes to scale to

  • preemptible – whether or not the nodes will be preemptible

  • spotBid – The bid for preemptible nodes if applicable (this can be set in config, also).

  • nodeTypes (Set[str])

Return type:

None

abstract terminateNodes(nodes)[source]

Terminate the nodes represented by the given Node objects.

Parameters:

nodes (List[toil.provisioners.node.Node]) – list of Node objects

Return type:

None

abstract getLeader()[source]
Returns:

The leader node.

abstract getProvisionedWorkers(instance_type=None, preemptible=None)[source]

Gets all nodes, optionally of the given instance type or preemptibility, from the provisioner. Includes both static and autoscaled nodes.

Parameters:
  • preemptible (Optional[bool]) – Boolean value to restrict to preemptible nodes or non-preemptible nodes

  • instance_type (Optional[str])

Returns:

list of Node objects

Return type:

List[toil.provisioners.node.Node]

abstract getNodeShape(instance_type, preemptible=False)[source]

The shape of a preemptible or non-preemptible node managed by this provisioner. The node shape defines key properties of a machine, such as its number of cores or the time between billing intervals.

Parameters:

instance_type (str) – Instance type name to return the shape of.

Return type:

Shape

abstract destroyCluster()[source]

Terminates all nodes in the specified cluster and cleans up all resources associated with the cluster.

Parameters:

clusterName – identifier of the cluster to terminate.

Return type:

None

class InstanceConfiguration[source]

Allows defining the initial setup for an instance and then turning it into an Ignition configuration for instance user data.

addFile(path, filesystem='root', mode='0755', contents='', append=False)[source]

Make a file on the instance with the given filesystem, mode, and contents.

See the storage.files section: https://github.com/kinvolk/ignition/blob/flatcar-master/doc/configuration-v2_2.md

Parameters:
addUnit(name, enabled=True, contents='')[source]

Make a systemd unit on the instance with the given name (including .service) and content. Units will be enabled by default.

Unit logs can be investigated with:

systemctl status whatever.service

or:

journalctl -xe

Parameters:
addSSHRSAKey(keyData)[source]

Authorize the given bare, encoded RSA key (without “ssh-rsa”).

Parameters:

keyData (str)

toIgnitionConfig()[source]

Return an Ignition configuration describing the desired config.

Return type:

str

getBaseInstanceConfiguration()[source]

Get the base configuration for both leader and worker instances for all cluster types.

Return type:

InstanceConfiguration
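
A sketch of composing instance user data with these pieces, assuming provisioner is a concrete AbstractProvisioner; the file path and unit contents are purely illustrative:

config = provisioner.getBaseInstanceConfiguration()
config.addFile('/etc/example-marker', mode='0644',
               contents='provisioned by Toil\n')
config.addUnit('example.service', contents=(
    '[Unit]\n'
    'Description=Illustrative one-shot unit\n'
    '[Service]\n'
    'Type=oneshot\n'
    'ExecStart=/usr/bin/echo hello\n'
))
user_data = config.toIgnitionConfig()  # Ignition JSON for instance user data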

addVolumesService(config)[source]

Add a service to prepare and mount local scratch volumes.

Parameters:

config (InstanceConfiguration)

addNodeExporterService(config)[source]

Add the node exporter service for Prometheus to an instance configuration.

Parameters:

config (InstanceConfiguration)

toil_service_env_options()[source]
Return type:

str

add_toil_service(config, role, keyPath=None, preemptible=False)[source]

Add the Toil leader or worker service to an instance configuration.

Will run Mesos master or agent as appropriate in Mesos clusters. For Kubernetes clusters, will just sleep to provide a place to shell into on the leader, and shouldn’t run on the worker.

Parameters:
  • role (str) – Should be ‘leader’ or ‘worker’. Will not work for ‘worker’ until leader credentials have been collected.

  • keyPath (str) – path on the node to a server-side encryption key that will be added to the node after it starts. The service will wait until the key is present before starting.

  • preemptible (bool) – Whether a worker should identify itself as preemptible or not to the scheduler.

  • config (InstanceConfiguration)

getKubernetesValues(architecture='amd64')[source]

Returns a dict of Kubernetes component versions and paths for formatting into Kubernetes-related templates.

Parameters:

architecture (str)

addKubernetesServices(config, architecture='amd64')[source]

Add to an instance configuration the steps that install Kubernetes and Kubeadm and set up the Kubelet to run when configured. The same process applies to leaders and workers.

Parameters:
abstract getKubernetesAutoscalerSetupCommands(values)[source]

Return Bash commands that set up the Kubernetes cluster autoscaler for provisioning from the environment supported by this provisioner.

Should only be implemented if Kubernetes clusters are supported.

Parameters:

values (Dict[str, str]) – Contains definitions of cluster variables, like AUTOSCALER_VERSION and CLUSTER_NAME.

Returns:

Bash snippet

Return type:

str

getKubernetesCloudProvider()[source]

Return the Kubernetes cloud provider (for example, ‘aws’), to pass to the kubelets in a Kubernetes cluster provisioned using this provisioner.

Defaults to None if not overridden, in which case no cloud provider integration will be used.

Returns:

Cloud provider name, or None

Return type:

Optional[str]

addKubernetesLeader(config)[source]

Add services to configure as a Kubernetes leader, if Kubernetes is already set to be installed.

Parameters:

config (InstanceConfiguration)

addKubernetesWorker(config, authVars, preemptible=False)[source]

Add services to configure as a Kubernetes worker, if Kubernetes is already set to be installed.

Authenticate back to the leader using the JOIN_TOKEN, JOIN_CERT_HASH, and JOIN_ENDPOINT set in the given authentication data dict.

Parameters:
  • config (InstanceConfiguration) – The configuration to add services to

  • authVars (Dict[str, str]) – Dict with authentication info

  • preemptible (bool) – Whether the worker should be labeled as preemptible or not

class toil.leader.ScalerThread(provisioner, leader, config, stop_on_exception=False)[source]

Bases: toil.lib.threading.ExceptionalThread

A thread that automatically scales the number of either preemptible or non-preemptible worker nodes according to the resource requirements of the queued jobs.

The scaling calculation is essentially as follows: start with 0 estimated worker nodes. For each queued job, check whether we expect it can be scheduled into a worker node before a certain time (currently one hour); if not, attempt to add a single new node of the smallest type that can fit that job.

At each scaling decision point a comparison between the current, C, and newly estimated number of nodes is made. If the absolute difference is less than beta * C then no change is made, else the size of the cluster is adapted. The beta factor is an inertia parameter that prevents continual fluctuations in the number of nodes.
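
To make the inertia rule concrete, a small illustration (the beta value here is arbitrary):

def should_resize(current, estimate, beta=0.2):
    # Resize only when the new estimate differs from the current node
    # count by at least beta * current; otherwise hold steady.
    return abs(estimate - current) >= beta * current

print(should_resize(10, 11))  # False: |11 - 10| = 1 < 0.2 * 10 = 2
print(should_resize(10, 13))  # True:  |13 - 10| = 3 >= 2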

Parameters:
check()[source]

Attempt to join any existing scaler threads that may have died or finished.

This ensures any exceptions raised in the threads are propagated in a timely fashion.

Return type:

None

shutdown()[source]

Shutdown the cluster.

Return type:

None

addCompletedJob(job, wallTime)[source]
Parameters:
Return type:

None

tryRun()[source]
Return type:

None

class toil.leader.ServiceManager(job_store, toil_state)[source]

Manages the scheduling of services.

Parameters:
services_are_starting(job_id)[source]

Check if services are being started.

Returns:

True if the services for the given job are currently being started, and False otherwise.

Parameters:

job_id (str)

Return type:

bool

get_job_count()[source]

Get the total number of jobs we are working on.

(services and their parent non-service jobs)

Return type:

int

start()[source]

Start the service scheduling thread.

Return type:

None

put_client(client_id)[source]

Schedule the services of a job asynchronously.

When the job's services are running, the ID for the job will be returned by toil.leader.ServiceManager.get_ready_client.

Parameters:

client_id (str) – ID of job with services to schedule.

Return type:

None

get_ready_client(maxWait)[source]

Fetch a ready client, waiting as needed.

Parameters:

maxWait (float) – Time in seconds to wait to get a JobDescription before returning

Returns:

the ID of a client whose services are running, or None if no such job is available.

Return type:

Optional[str]

get_unservable_client(maxWait)[source]

Fetch a client whose services failed to start.

Parameters:

maxWait (float) – Time in seconds to wait to get a JobDescription before returning

Returns:

the ID of a client whose services failed to start, or None if no such job is available.

Return type:

Optional[str]

get_startable_service(maxWait)[source]

Fetch a service job that is ready to start.

Parameters:

maxWait (float) – Time in seconds to wait to get a job before returning.

Returns:

the ID of a service job that the leader can start, or None if no such job exists.

Return type:

Optional[str]
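
A simplified sketch of the leader-side handshake these methods support; manager is assumed to be a started ServiceManager, client_id the job store ID of a job with services, and issue_service_job() a hypothetical stand-in for the leader's real issuing logic:

manager.put_client(client_id)  # schedule the job's services asynchronously
while True:
    ready = manager.get_ready_client(maxWait=1.0)
    if ready is not None:
        print(f"services for {ready} are up; the client job can be issued")
        break
    failed = manager.get_unservable_client(maxWait=0)
    if failed is not None:
        print(f"services for {failed} failed to start")
        break
    service_id = manager.get_startable_service(maxWait=0)
    if service_id is not None:
        issue_service_job(service_id)  # hypothetical helper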

kill_services(service_ids, error=False)[source]

Stop all the given service jobs.

Parameters:
  • services – Service jobStoreIDs to kill

  • error (bool) – Whether to signal that the service failed with an error when stopping it.

  • service_ids (Iterable[str])

Return type:

None

is_active(service_id)[source]

Return true if the service job has not been told to terminate.

Parameters:

service_id (str) – Service to check on

Return type:

bool

is_running(service_id)[source]

Return true if the service job has started and is active.

Parameters:
  • service – Service to check on

  • service_id (str)

Return type:

bool

check()[source]

Check on the service manager thread.

Raises:

RuntimeError – If the underlying thread has quit.

Return type:

None

shutdown()[source]

Terminate worker threads cleanly; starting and killing all service threads.

Will block until all services are started and blocked.

Return type:

None

class toil.leader.StatsAndLogging(jobStore, config)[source]

A thread to aggregate statistics and logging.

Parameters:
start()[source]

Start the stats and logging thread.

Return type:

None

classmethod formatLogStream(stream, stream_name)[source]

Given a stream of text or bytes, and the job name, job itself, or some other optional stringifiable identity info for the job, return a big text string with the formatted job log, suitable for printing for the user.

We don’t want to prefix every line of the job’s log with our own logging info, or we get prefixes wider than any reasonable terminal and longer than the messages.

Parameters:
  • stream (Union[IO[str], IO[bytes]]) – The stream of text or bytes to print for the user.

  • stream_name (str)

Return type:

str
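
For example, formatting a captured byte log for display (the log contents and stream name are stand-ins):

from io import BytesIO
from toil.leader import StatsAndLogging

log_bytes = b"step one\nstep two\n"  # stand-in for a worker's captured log
formatted = StatsAndLogging.formatLogStream(BytesIO(log_bytes), "example_job")
print(formatted)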

classmethod logWithFormatting(stream_name, jobLogs, method=logger.debug, message=None)[source]
Parameters:
  • stream_name (str)

  • jobLogs (Union[IO[str], IO[bytes]])

  • method (Callable[[str], None])

  • message (Optional[str])

Return type:

None

classmethod writeLogFiles(jobNames, jobLogList, config, failed=False)[source]
Parameters:
Return type:

None

classmethod statsAndLoggingAggregator(jobStore, stop, config)[source]

Collate stats and log messages reported by the workers. This function works inside a thread and keeps collating for as long as the stop flag is not True.

Parameters:
Return type:

None

check()[source]

Check on the stats and logging aggregator.

Raises:

RuntimeError – If the underlying thread has quit.

Return type:

None

shutdown()[source]

Finish up the stats/logging aggregation thread.

Return type:

None

class toil.leader.ToilState(jobStore)[source]

Holds the leader’s scheduling information.

But only that which does not need to be persisted back to the JobStore (such as information on completed and outstanding predecessors).

Holds the true single copies of all JobDescription objects that the Leader and ServiceManager will use. The leader and service manager shouldn’t do their own load() and update() calls on the JobStore; they should go through this class.

Everything in the leader should reference JobDescriptions by ID.

Only holds JobDescription objects, not Job objects, and those JobDescription objects only exist in single copies.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore)

load_workflow(rootJob, jobCache=None)[source]

Load the workflow rooted at the given job.

If jobs are loaded that have updated and need to be dealt with by the leader, JobUpdatedMessage messages will be sent to the message bus.

The jobCache is a map from jobStoreID to JobDescription or None. It is used to speed up the building of the state when loading initially from the JobStore, and is not preserved.

Parameters:
Return type:

None

job_exists(job_id)[source]

Test if the given job exists now.

Returns True if the given job exists right now, and False if it hasn't been created or it has been deleted elsewhere.

Doesn’t guarantee that the job will or will not be gettable, if racing another process, or if it is still cached.

Parameters:

job_id (str)

Return type:

bool

get_job(job_id)[source]

Get the one true copy of the JobDescription with the given ID.

Parameters:

job_id (str)

Return type:

toil.job.JobDescription

commit_job(job_id)[source]

Save back any modifications made to a JobDescription.

(one retrieved from get_job())

Parameters:

job_id (str)

Return type:

None
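
A sketch of the load-modify-commit pattern, assuming state is a ToilState and job_id a known job store ID (the attribute touched is purely illustrative):

if state.job_exists(job_id):
    desc = state.get_job(job_id)  # the one true in-memory copy
    desc.jobName = "renamed-job"  # illustrative modification
    state.commit_job(job_id)      # persist the change to the job store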

delete_job(job_id)[source]

Destroy a JobDescription.

May raise an exception if the job could not be cleaned up (i.e. files belonging to it failed to delete).

Parameters:

job_id (str)

Return type:

None

reset_job(job_id)[source]

Discard any local modifications to a JobDescription.

Will make modifications from other hosts visible.

Parameters:

job_id (str)

Return type:

None

reset_job_expecting_change(job_id, timeout)[source]

Discard any local modifications to a JobDescription.

Will make modifications from other hosts visible.

Will wait for up to timeout seconds for a modification (or deletion) from another host to actually be visible.

Always replaces the JobDescription with what is stored in the job store, even if no modification ends up being visible.

Returns True if an update was detected in time, and False otherwise.

Parameters:
Return type:

bool

successors_pending(predecessor_id, count)[source]

Remember that the given job has the given number of additional pending successors.

(that have not yet succeeded or failed.)

Parameters:
  • predecessor_id (str)

  • count (int)

Return type:

None

successor_returned(predecessor_id)[source]

Remember that the given job has one fewer pending successor.

(because one has succeeded or failed.)

Parameters:

predecessor_id (str)

Return type:

None

count_pending_successors(predecessor_id)[source]

Count the number of pending successors of the given job.

Pending successors are those which have not yet succeeded or failed.

Parameters:

predecessor_id (str)

Return type:

int

toil.leader.logger
class toil.leader.Leader(config, batchSystem, provisioner, jobStore, rootJob, jobCache=None)[source]

Represents the Toil leader.

Responsible for determining what jobs are ready to be scheduled, by consulting the job store, and issuing them in the batch system.

Parameters:
run()[source]

Run the leader process to issue and manage jobs.

Raises:

toil.exceptions.FailedJobsException if failed jobs remain after running.

Returns:

The return value of the root job’s run function.

Return type:

Any
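
Most workflows go through toil.common.Toil rather than constructing a Leader directly, but as a sketch of the contract run() exposes (all constructor arguments are assumed to be already-built objects):

from toil.leader import Leader
from toil.exceptions import FailedJobsException

leader = Leader(config, batchSystem, provisioner, jobStore, rootJob)
try:
    result = leader.run()  # blocks until the workflow finishes
except FailedJobsException:
    print("workflow finished with failed jobs")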

create_status_sentinel_file(fail)[source]

Create a file in the jobstore indicating failure or success.

Parameters:

fail (bool)

Return type:

None

innerLoop()[source]

Process jobs.

This is the leader’s main loop.

checkForDeadlocks()[source]

Check if the system is deadlocked running service jobs.

feed_deadlock_watchdog()[source]

Note that progress has been made and any pending deadlock checks should be reset.

Return type:

None

issueJob(jobNode)[source]

Add a job to the queue of jobs currently trying to run.

Parameters:

jobNode (toil.job.JobDescription)

Return type:

None

issueJobs(jobs)[source]

Add a list of jobs, each represented as a jobNode object.

issueServiceJob(service_id)[source]

Issue a service job.

Put it on a queue if the maximum number of service jobs to be scheduled has been reached.

Parameters:

service_id (str)

Return type:

None

issueQueingServiceJobs()[source]

Issues any queuing service jobs, up to the maximum allowed.

getNumberOfJobsIssued(preemptible=None)[source]

Get number of jobs that have been added by issueJob(s) and not removed by removeJob.

Parameters:

preemptible (Optional[bool]) – If none, return all types of jobs. If true, return just the number of preemptible jobs. If false, return just the number of non-preemptible jobs.

Return type:

int

removeJob(jobBatchSystemID)[source]

Remove a job from the system by batch system ID.

Returns:

Job description as it was issued.

Parameters:

jobBatchSystemID (int)

Return type:

toil.job.JobDescription

getJobs(preemptible=None)[source]

Get all issued jobs.

Parameters:

preemptible (Optional[bool]) – If specified, select only preemptible or only non-preemptible jobs.

Return type:

List[toil.job.JobDescription]

killJobs(jobsToKill, exit_reason=BatchJobExitReason.KILLED)[source]

Kills the given set of jobs and then sends them for processing.

Returns the jobs that, upon processing, were reissued.

Parameters:

exit_reason (toil.batchSystems.abstractBatchSystem.BatchJobExitReason)

reissueOverLongJobs()[source]

Check each issued job.

If a job has been running for longer than desirable, issue a kill instruction and wait for the job to die; then pass the job to process_finished_job.

Return type:

None

reissueMissingJobs(killAfterNTimesMissing=3)[source]

Check that all the current job IDs are in the list of currently issued batch system jobs.

If a job is missing, we mark it as such. If it remains missing for a number of runs of this function (say 10), we try deleting the job (though it is probably lost), wait, and then pass the job to process_finished_job.

processRemovedJob(issuedJob, result_status)[source]

process_finished_job(batch_system_id, result_status, wall_time=None, exit_reason=None)[source]

Process finished jobs.

Called when an attempt to run a job finishes, either successfully or otherwise.

Takes the job out of the issued state, and then works out what to do about the fact that it succeeded or failed.

Returns:

True if the job is going to run again, and False if the job is fully done or completely failed.

Return type:

bool

process_finished_job_description(finished_job, result_status, wall_time=None, exit_reason=None, batch_system_id=None)[source]

Process a finished JobDescription based upon its success or failure.

If wall-clock time is available, informs the cluster scaler about the job finishing.

If the job failed and a batch system ID is available, checks for and reports batch system logs.

Checks if it succeeded and was removed, or if it failed and needs to be set up after failure, and dispatches to the appropriate function.

Returns:

True if the job is going to run again, and False if the job is fully done or completely failed.

Parameters:
Return type:

bool

getSuccessors(job_id, alreadySeenSuccessors)[source]

Get successors of the given job by walking the job graph recursively.

Parameters:
  • alreadySeenSuccessors (Set[str]) – any successor seen here is ignored and not traversed.

  • job_id (str)

Returns:

The set of found successors. This set is added to alreadySeenSuccessors.

Return type:

Set[str]

processTotallyFailedJob(job_id)[source]

Process a totally failed job.

Parameters:

job_id (str)

Return type:

None