toil.job

Module Contents

Classes

TemporaryID

Placeholder for an unregistered job ID used by a JobDescription.

AcceleratorRequirement

Requirement for one or more computational accelerators, like a GPU or FPGA.

RequirementsDict

Typed storage for requirements for a job.

Requirer

Base class implementing the storage and presentation of requirements.

JobDescription

Stores all the information that the Toil Leader ever needs to know about a Job.

ServiceJobDescription

A description of a job that hosts a service.

CheckpointJobDescription

A description of a job that is a checkpoint.

Job

Class represents a unit of work in toil.

FunctionWrappingJob

Job used to wrap a function. In its run method the wrapped function is called.

JobFunctionWrappingJob

A job function is a function whose first argument is a Job

PromisedRequirementFunctionWrappingJob

Handles dynamic resource allocation using toil.job.Promise instances.

PromisedRequirementJobFunctionWrappingJob

Handles dynamic resource allocation for job functions.

EncapsulatedJob

A convenience Job class used to make a job subgraph appear to be a single job.

ServiceHostJob

Job that runs a service. Used internally by Toil. Users should subclass Service instead of using this.

Promise

References a return value from a method as a promise before the method itself is run.

PromisedRequirement

Class for dynamically allocating job function resource requirements.

UnfulfilledPromiseSentinel

This should be overwritten by a proper promised value.

Functions

parse_accelerator(spec)

Parse an AcceleratorRequirement specified by user code.

accelerator_satisfies(candidate, requirement[, ignore])

Test if candidate partially satisfies the given requirement.

accelerators_fully_satisfy(candidates, requirement[, ...])

Determine if a set of accelerators satisfy a requirement.

unwrap(p)

Function for ensuring you actually have a promised value, and not just a promise.

unwrap_all(p)

Function for ensuring you actually have a collection of promised values.

Attributes

logger

REQUIREMENT_NAMES

ParsedRequirement

ParseableIndivisibleResource

ParseableDivisibleResource

ParseableFlag

ParseableAcceleratorRequirement

ParseableRequirement

T

Promised

toil.job.logger
exception toil.job.JobPromiseConstraintError(promisingJob, recipientJob=None)[source]

Bases: RuntimeError

Error for a job being asked to promise its return value when that value is not available.

(Because the promising job has not yet been reached in the topological order of the job graph.)

Parameters:
  • promisingJob (Job) –

  • recipientJob (Optional[Job]) –

exception toil.job.ConflictingPredecessorError(predecessor, successor)[source]

Bases: Exception

Common base class for all non-exit exceptions.

Parameters:
  • predecessor (Job) –

  • successor (Job) –

class toil.job.TemporaryID[source]

Placeholder for an unregistered job ID used by a JobDescription.

Needs to be held:
  • By JobDescription objects to record normal relationships.

  • By Jobs to key their connected-component registries and to record predecessor relationships to facilitate EncapsulatedJob adding itself as a child.

  • By Services to tie back to their hosting jobs, so the service tree can be built up from Service objects.

__str__()[source]

Return str(self).

Return type:

str

__repr__()[source]

Return repr(self).

Return type:

str

__hash__()[source]

Return hash(self).

Return type:

int

__eq__(other)[source]

Return self==value.

Parameters:

other (Any) –

Return type:

bool

__ne__(other)[source]

Return self!=value.

Parameters:

other (Any) –

Return type:

bool

class toil.job.AcceleratorRequirement[source]

Bases: TypedDict

Requirement for one or more computational accelerators, like a GPU or FPGA.

count: int

How many of the accelerator are needed to run the job.

kind: str

What kind of accelerator is required. Can be “gpu”. Other kinds defined in the future might be “fpga”, etc.

model: typing_extensions.NotRequired[str]

What model of accelerator is needed. The exact set of values available depends on what the backing scheduler calls its accelerators; strings like “nvidia-tesla-k80” might be expected to work. If a specific model of accelerator is not required, this should be absent.

brand: typing_extensions.NotRequired[str]

What brand or manufacturer of accelerator is required. The exact set of values available depends on what the backing scheduler calls the brands of its accelerators; strings like “nvidia” or “amd” might be expected to work. If a specific brand of accelerator is not required (for example, because the job can use multiple brands of accelerator that support a given API) this should be absent.

api: typing_extensions.NotRequired[str]

What API is to be used to communicate with the accelerator. This can be “cuda”. Other APIs supported in the future might be “rocm”, “opencl”, “metal”, etc. If the job does not need a particular API to talk to the accelerator, this should be absent.

toil.job.parse_accelerator(spec)[source]

Parse an AcceleratorRequirement specified by user code.

Supports formats like:

>>> parse_accelerator(8)
{'count': 8, 'kind': 'gpu'}
>>> parse_accelerator("1")
{'count': 1, 'kind': 'gpu'}
>>> parse_accelerator("nvidia-tesla-k80")
{'count': 1, 'kind': 'gpu', 'brand': 'nvidia', 'model': 'nvidia-tesla-k80'}
>>> parse_accelerator("nvidia-tesla-k80:2")
{'count': 2, 'kind': 'gpu', 'brand': 'nvidia', 'model': 'nvidia-tesla-k80'}
>>> parse_accelerator("gpu")
{'count': 1, 'kind': 'gpu'}
>>> parse_accelerator("cuda:1")
{'count': 1, 'kind': 'gpu', 'brand': 'nvidia', 'api': 'cuda'}
>>> parse_accelerator({"kind": "gpu"})
{'count': 1, 'kind': 'gpu'}
>>> parse_accelerator({"brand": "nvidia", "count": 5})
{'count': 5, 'kind': 'gpu', 'brand': 'nvidia'}

Assumes that if not specified, we are talking about GPUs, and about one of them. Knows that “gpu” is a kind, and “cuda” is an API, and “nvidia” is a brand.

Raises:
  • ValueError – if it gets something it can’t parse

  • TypeError – if it gets something it can’t parse because it’s the wrong type.

Parameters:

spec (Union[int, str, Dict[str, Union[str, int]]]) –

Return type:

AcceleratorRequirement

toil.job.accelerator_satisfies(candidate, requirement, ignore=[])[source]

Test if candidate partially satisfies the given requirement.

Returns:

True if the given candidate at least partially satisfies the given requirement (i.e. check all fields other than count).

Parameters:
Return type:

bool

toil.job.accelerators_fully_satisfy(candidates, requirement, ignore=[])[source]

Determine if a set of accelerators satisfy a requirement.

Ignores fields specified in ignore.

Returns:

True if the requirement AcceleratorRequirement is fully satisfied by the ones in the list, taken together (i.e. check all fields including count).

Parameters:
Return type:

bool
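The two checks above can be illustrated with a plain-Python sketch of the documented semantics: partial satisfaction compares every field except count, and full satisfaction additionally sums the counts of the matching candidates. This is an illustration only, not Toil’s implementation; the field names follow the AcceleratorRequirement TypedDict above.

```python
# Sketch of the documented accelerator-matching semantics (not Toil's code).
def accelerator_satisfies(candidate, requirement, ignore=()):
    """True if candidate matches every field of requirement except 'count'."""
    for field in ("kind", "model", "brand", "api"):
        if field in ignore:
            continue
        if field in requirement and candidate.get(field) != requirement[field]:
            return False
    return True

def accelerators_fully_satisfy(candidates, requirement, ignore=()):
    """True if the matching candidates together provide at least 'count' units."""
    matched = sum(
        c.get("count", 1)
        for c in candidates
        if accelerator_satisfies(c, requirement, ignore)
    )
    return matched >= requirement.get("count", 1)

gpu = {"count": 1, "kind": "gpu", "brand": "nvidia", "model": "nvidia-tesla-k80"}
need = {"count": 2, "kind": "gpu", "brand": "nvidia"}
print(accelerator_satisfies(gpu, need))             # True: matches, ignoring count
print(accelerators_fully_satisfy([gpu, gpu], need)) # True: two matching units cover count=2
```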

class toil.job.RequirementsDict[source]

Bases: TypedDict

Typed storage for requirements for a job.

Where requirement values are of different types depending on the requirement.

cores: typing_extensions.NotRequired[int | float]
memory: typing_extensions.NotRequired[int]
disk: typing_extensions.NotRequired[int]
accelerators: typing_extensions.NotRequired[List[AcceleratorRequirement]]
preemptible: typing_extensions.NotRequired[bool]
toil.job.REQUIREMENT_NAMES = ['disk', 'memory', 'cores', 'accelerators', 'preemptible']
toil.job.ParsedRequirement
toil.job.ParseableIndivisibleResource
toil.job.ParseableDivisibleResource
toil.job.ParseableFlag
toil.job.ParseableAcceleratorRequirement
toil.job.ParseableRequirement
class toil.job.Requirer(requirements)[source]

Base class implementing the storage and presentation of requirements.

Has cores, memory, disk, and preemptibility as properties.

Parameters:

requirements (Mapping[str, ParseableRequirement]) –

property requirements: RequirementsDict

Get dict containing all non-None, non-defaulted requirements.

Return type:

RequirementsDict

property disk: int

Get the maximum number of bytes of disk required.

Return type:

int

property memory: int

Get the maximum number of bytes of memory required.

Return type:

int

property cores: int | float

Get the number of CPU cores required.

Return type:

Union[int, float]

property preemptible: bool

Whether a preemptible node is permitted, or a nonpreemptible one is required.

Return type:

bool

property accelerators: List[AcceleratorRequirement]

Any accelerators, such as GPUs, that are needed.

Return type:

List[AcceleratorRequirement]

assignConfig(config)[source]

Assign the given config object to be used to provide default values.

Must be called exactly once on a loaded JobDescription before any requirements are queried.

Parameters:

config (toil.common.Config) – Config object to query

Return type:

None

__getstate__()[source]

Return the dict to use as the instance’s __dict__ when pickling.

Return type:

Dict[str, Any]

__copy__()[source]

Return a semantically-shallow copy of the object, for copy.copy().

Return type:

Requirer

__deepcopy__(memo)[source]

Return a semantically-deep copy of the object, for copy.deepcopy().

Parameters:

memo (Any) –

Return type:

Requirer

preemptable(val)[source]
Parameters:

val (ParseableFlag) –

Return type:

None

scale(requirement, factor)[source]

Return a copy of this object with the given requirement scaled up or down.

Only works on requirements where that makes sense.

Parameters:
  • requirement (str) –

  • factor (float) –

Return type:

Requirer
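Conceptually, scale() copies the object and multiplies one requirement by the factor. The sketch below operates on a plain requirements dict to show the idea; the ceiling step for byte-valued resources is an assumption for illustration, and the real method additionally validates which requirements can sensibly be scaled.

```python
# Plain-Python sketch of what scale() does conceptually (not Toil's code).
import copy
import math

def scale(requirements, name, factor):
    """Return a copy of the requirements dict with one value scaled."""
    scaled = copy.deepcopy(requirements)
    value = scaled[name] * factor
    if name in ("memory", "disk"):
        value = math.ceil(value)  # assumed: byte-valued resources stay whole
    scaled[name] = value
    return scaled

reqs = {"cores": 2, "memory": 4 * 1024**3, "disk": 8 * 1024**3}
print(scale(reqs, "memory", 0.5)["memory"])  # 2147483648 (2 GiB)
print(reqs["memory"])                        # original is left untouched
```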

requirements_string()[source]

Get a nice human-readable string of our requirements.

Return type:

str

class toil.job.JobDescription(requirements, jobName, unitName='', displayName='', command=None, local=None)[source]

Bases: Requirer

Stores all the information that the Toil Leader ever needs to know about a Job.

(requirements information, dependency information, commands to issue, etc.)

Can be obtained from an actual (i.e. executable) Job object, and can be used to obtain the Job object from the JobStore.

Never contains other Jobs or JobDescriptions: all reference is by ID.

Subclassed into variants for checkpoint jobs and service jobs that have their specific parameters.

Parameters:
  • requirements (Mapping[str, Union[int, str, bool]]) –

  • jobName (str) –

  • unitName (Optional[str]) –

  • displayName (Optional[str]) –

  • command (Optional[str]) –

  • local (Optional[bool]) –

property services

Get a collection of the IDs of service host jobs for this job, in arbitrary order.

Will be empty if the job has no unfinished services.

property remainingTryCount

Get the number of tries remaining.

The try count set on the JobDescription, or the default based on the retry count from the config if none is set.

get_names()[source]

Get the names and ID of this job as a named tuple.

Return type:

toil.bus.Names

get_chain()[source]

Get all the jobs that executed in this job’s chain, in order.

For each job, produces a named tuple with its various names and its original job store ID. The jobs in the chain are in execution order.

If the job hasn’t run yet or it didn’t chain, produces a one-item list.

Return type:

List[toil.bus.Names]

serviceHostIDsInBatches()[source]

Find all batches of service host job IDs that can be started at the same time.

(in the order they need to start in)

Return type:

Iterator[List[str]]

successorsAndServiceHosts()[source]

Get an iterator over all child, follow-on, and service job IDs.

Return type:

Iterator[str]

allSuccessors()[source]

Get an iterator over all child, follow-on, and chained, inherited successor job IDs.

Follow-ons will come before children.

Return type:

Iterator[str]

successors_by_phase()[source]

Get an iterator over all child/follow-on/chained inherited successor job IDs, along with their phase number on the stack.

Phases execute from higher numbers to lower numbers.

Return type:

Iterator[Tuple[int, str]]

nextSuccessors()[source]

Return the collection of job IDs for the successors of this job that are ready to run.

If those jobs have multiple predecessor relationships, they may still be blocked on other jobs.

Returns None when at the final phase (all successors done), and an empty collection if there are more phases but they can’t be entered yet (e.g. because we are waiting for the job itself to run).

Return type:

Set[str]

filterSuccessors(predicate)[source]

Keep only successor jobs for which the given predicate function approves.

The predicate function is called with the job’s ID.

Treats all other successors as complete and forgets them.

Parameters:

predicate (Callable[[str], bool]) –

Return type:

None

filterServiceHosts(predicate)[source]

Keep only services for which the given predicate approves.

The predicate function is called with the service host job’s ID.

Treats all other services as complete and forgets them.

Parameters:

predicate (Callable[[str], bool]) –

Return type:

None

clear_nonexistent_dependents(job_store)[source]

Remove all references to child, follow-on, and associated service jobs that do not exist.

That is to say, all those that have been completed and removed.

Parameters:

job_store (toil.jobStores.abstractJobStore.AbstractJobStore) –

Return type:

None

clear_dependents()[source]

Remove all references to successor and service jobs.

Return type:

None

is_subtree_done()[source]

Check if the subtree is done.

Returns:

True if the job appears to be done, and all related child, follow-on, and service jobs appear to be finished and removed.

Return type:

bool

replace(other)[source]

Take on the ID of another JobDescription, retaining our own state and type.

When updated in the JobStore, we will save over the other JobDescription.

Useful for chaining jobs: the chained-to job can replace the parent job.

Merges cleanup state and successors other than this job from the job being replaced into this one.

Parameters:

other (JobDescription) – Job description to replace.

Return type:

None

check_new_version(other)[source]

Make sure a prospective new version of the JobDescription is actually moving forward in time and not backward.

Parameters:

other (JobDescription) –

Return type:

None

addChild(childID)[source]

Make the job with the given ID a child of the described job.

Parameters:

childID (str) –

Return type:

None

addFollowOn(followOnID)[source]

Make the job with the given ID a follow-on of the described job.

Parameters:

followOnID (str) –

Return type:

None

addServiceHostJob(serviceID, parentServiceID=None)[source]

Make the ServiceHostJob with the given ID a service of the described job.

If a parent ServiceHostJob ID is given, that parent service will be started first, and must have already been added.

hasChild(childID)[source]

Return True if the job with the given ID is a child of the described job.

Parameters:

childID (str) –

Return type:

bool

hasFollowOn(followOnID)[source]

Test if the job with the given ID is a follow-on of the described job.

Parameters:

followOnID (str) –

Return type:

bool

hasServiceHostJob(serviceID)[source]

Test if the ServiceHostJob is a service of the described job.

Return type:

bool

renameReferences(renames)[source]

Apply the given dict of ID renames to all references to jobs.

Does not modify our own ID or those of finished predecessors. IDs not present in the renames dict are left as-is.

Parameters:

renames (Dict[TemporaryID, str]) – Rename operations to apply.

Return type:

None

addPredecessor()[source]

Notify the JobDescription that a predecessor has been added to its Job.

Return type:

None

onRegistration(jobStore)[source]

Perform setup work that requires the JobStore.

Called by the Job saving logic when this JobDescription meets the JobStore and has its ID assigned.

Overridden to perform setup work (like hooking up flag files for service jobs) that requires the JobStore.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore) – The job store we are being placed into

Return type:

None

setupJobAfterFailure(exit_status=None, exit_reason=None)[source]

Configure job after a failure.

Reduce the remainingTryCount if greater than zero and set the memory to be at least as big as the default memory (in case of exhaustion of memory, which is common).

Requires a configuration to have been assigned (see toil.job.Requirer.assignConfig()).

Parameters:
Return type:

None

getLogFileHandle(jobStore)[source]

Create a context manager that yields a file handle to the log file.

Assumes logJobStoreFileID is set.

clearRemainingTryCount()[source]

Clear remainingTryCount and set it back to its default value.

Returns:

True if a modification to the JobDescription was made, and False otherwise.

Return type:

bool

__str__()[source]

Produce a useful logging string identifying this job.

Return type:

str

__repr__()[source]

Return repr(self).

reserve_versions(count)[source]

Reserve a job version number for later, for journaling asynchronously.

Parameters:

count (int) –

Return type:

None

pre_update_hook()[source]

Run before pickling and saving a created or updated version of this job.

Called by the job store.

Return type:

None

class toil.job.ServiceJobDescription(*args, **kwargs)[source]

Bases: JobDescription

A description of a job that hosts a service.

onRegistration(jobStore)[source]

Setup flag files.

When a ServiceJobDescription first meets the JobStore, it needs to set up its flag files.

class toil.job.CheckpointJobDescription(*args, **kwargs)[source]

Bases: JobDescription

A description of a job that is a checkpoint.

restartCheckpoint(jobStore)[source]

Restart a checkpoint after the total failure of jobs in its subtree.

Writes the changes to the jobStore immediately. All the checkpoint’s successors will be deleted, but its try count will not be decreased.

Returns a list with the IDs of any successors deleted.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore) –

Return type:

List[str]

class toil.job.Job(memory=None, cores=None, disk=None, accelerators=None, preemptible=None, preemptable=None, unitName='', checkpoint=False, displayName='', descriptionClass=None, local=None)[source]

Class represents a unit of work in toil.

Parameters:
  • memory (Optional[ParseableIndivisibleResource]) –

  • cores (Optional[ParseableDivisibleResource]) –

  • disk (Optional[ParseableIndivisibleResource]) –

  • accelerators (Optional[ParseableAcceleratorRequirement]) –

  • preemptible (Optional[ParseableFlag]) –

  • preemptable (Optional[ParseableFlag]) –

  • unitName (Optional[str]) –

  • checkpoint (Optional[bool]) –

  • displayName (Optional[str]) –

  • descriptionClass (Optional[type]) –

  • local (Optional[bool]) –

class Runner[source]

Used to set up and run a Toil workflow.

static getDefaultArgumentParser(jobstore_as_flag=False)[source]

Get argument parser with added toil workflow options.

Parameters:

jobstore_as_flag (bool) – make the job store option a --jobStore flag instead of a required jobStore positional argument.

Returns:

The argument parser used by a toil workflow with added Toil options.

Return type:

argparse.ArgumentParser

static getDefaultOptions(jobStore=None, jobstore_as_flag=False)[source]

Get default options for a toil workflow.

Parameters:
  • jobStore (Optional[str]) – A string describing the jobStore for the workflow.

  • jobstore_as_flag (bool) – make the job store option a --jobStore flag instead of a required jobStore positional argument.

Returns:

The options used by a toil workflow.

Return type:

argparse.Namespace

static addToilOptions(parser, jobstore_as_flag=False)[source]

Adds the default toil options to an optparse or argparse parser object.

Parameters:
Return type:

None

static startToil(job, options)[source]

Run the toil workflow using the given options.

Deprecated by toil.common.Toil.start.

Starts the workflow with this job as the root (see Job.Runner.getDefaultOptions and Job.Runner.addToilOptions).

Raises:

toil.exceptions.FailedJobsException – if failed jobs remain at the end of the function.

Returns:

The return value of the root job’s run function.

Parameters:

job (Job) – root job of the workflow

Return type:

Any

class Service(memory=None, cores=None, disk=None, accelerators=None, preemptible=None, unitName=None)[source]

Bases: Requirer

Abstract class used to define the interface to a service.

Should be subclassed by the user to define services.

Is not executed as a job; runs within a ServiceHostJob.

abstract start(job)[source]

Start the service.

Parameters:

job (Job) – The underlying host job that the service is being run in. Can be used to register deferred functions, or to access the fileStore for creating temporary files.

Returns:

An object describing how to access the service. The object must be pickleable and will be used by jobs to access the service (see toil.job.Job.addService()).

Return type:

Any

abstract stop(job)[source]

Stop the service. This function can block until the stop is complete.

Parameters:

job (Job) – The underlying host job that the service is being run in. Can be used to register deferred functions, or to access the fileStore for creating temporary files.

Return type:

None

check()[source]

Check whether the service is still running.

Raises:

exceptions.RuntimeError – If the service failed, this will cause the service job to be labeled failed.

Returns:

True if the service is still running, else False. If False then the service job will be terminated, and considered a success. Important point: if the service job exits due to a failure, it should raise a RuntimeError, not return False!

Return type:

bool
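The start/check/stop contract above can be sketched with a plain class. This is a hedged illustration of the documented lifecycle, not a subclass of toil.job.Job.Service, and the returned handle is hypothetical; in real use you would subclass Service and return something jobs can use to reach the service.

```python
# Sketch of the Service lifecycle contract (not a real Toil Service subclass).
class SketchService:
    def __init__(self):
        self.running = False

    def start(self, job=None):
        """Start the service; return a pickleable handle for accessing it."""
        self.running = True
        return {"host": "localhost", "port": 5432}  # hypothetical access handle

    def check(self):
        """True while running; False means a clean, successful shutdown.

        On actual failure the contract says to raise RuntimeError instead
        of returning False, so the service job is labeled failed.
        """
        return self.running

    def stop(self, job=None):
        """Stop the service; may block until the shutdown is complete."""
        self.running = False

svc = SketchService()
handle = svc.start()
print(svc.check())  # True while the service runs
svc.stop()
print(svc.check())  # False after a clean shutdown
```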

property jobStoreID: str | TemporaryID

Get the ID of this Job.

Return type:

Union[str, TemporaryID]

property description: JobDescription

Expose the JobDescription that describes this job.

Return type:

JobDescription

property disk: int

The maximum number of bytes of disk the job will require to run.

Return type:

int

property memory

The maximum number of bytes of memory the job will require to run.

property cores: int | float

The number of CPU cores required.

Return type:

Union[int, float]

property accelerators: List[AcceleratorRequirement]

Any accelerators, such as GPUs, that are needed.

Return type:

List[AcceleratorRequirement]

property preemptible: bool

Whether the job can be run on a preemptible node.

Return type:

bool

property checkpoint: bool

Determine if the job is a checkpoint job or not.

Return type:

bool

property tempDir: str

Shortcut to calling job.fileStore.getLocalTempDir().

The temp dir is created on the first call, and the same path is returned on this and future calls.

Returns:

Path to the temp dir. See job.fileStore.getLocalTempDir.

Return type:

str

__str__()[source]

Produce a useful logging string to identify this Job and distinguish it from its JobDescription.

check_initialized()[source]

Ensure that Job.__init__() has been called by any subclass __init__().

This uses the fact that the self._description instance variable should always be set after __init__().

If __init__() has not been called, raise an error.

Return type:

None

preemptable()[source]
assignConfig(config)[source]

Assign the given config object.

It will be used by various actions implemented inside the Job class.

Parameters:

config (toil.common.Config) – Config object to query

Return type:

None

run(fileStore)[source]

Override this function to perform work and dynamically create successor jobs.

Parameters:

fileStore (toil.fileStores.abstractFileStore.AbstractFileStore) – Used to create local and globally sharable temporary files and to send log messages to the leader process.

Returns:

The return value of the function can be passed to other jobs by means of toil.job.Job.rv().

Return type:

Any

addChild(childJob)[source]

Add a childJob to be run as child of this job.

Child jobs will be run directly after this job’s toil.job.Job.run() method has completed.

Returns:

childJob: for call chaining

Parameters:

childJob (Job) –

Return type:

Job

hasChild(childJob)[source]

Check if childJob is already a child of this job.

Returns:

True if childJob is a child of the job, else False.

Parameters:

childJob (Job) –

Return type:

bool

addFollowOn(followOnJob)[source]

Add a follow-on job.

Follow-on jobs will be run after the child jobs and their successors have been run.

Returns:

followOnJob for call chaining

Parameters:

followOnJob (Job) –

Return type:

Job
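The ordering contract of addChild() and addFollowOn() can be sketched with a toy scheduler: a job runs first, then its children (and their successors), then its follow-ons. This plain-Python model is for illustration only; Toil’s leader performs the real scheduling, in parallel and across machines.

```python
# Toy model of child/follow-on execution order (not Toil's scheduler).
order = []

class ToyJob:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.follow_ons = []

    def addChild(self, job):
        self.children.append(job)
        return job  # returned for call chaining, as in Toil

    def addFollowOn(self, job):
        self.follow_ons.append(job)
        return job

    def run_subtree(self):
        order.append(self.name)          # the job itself runs first,
        for child in self.children:      # then its children and their successors,
            child.run_subtree()
        for follow_on in self.follow_ons:  # then its follow-ons.
            follow_on.run_subtree()

root = ToyJob("root")
root.addChild(ToyJob("child"))
root.addFollowOn(ToyJob("followOn"))
root.run_subtree()
print(order)  # ['root', 'child', 'followOn']
```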

hasPredecessor(job)[source]

Check if a given job is already a predecessor of this job.

Parameters:

job (Job) –

Return type:

bool

hasFollowOn(followOnJob)[source]

Check if given job is already a follow-on of this job.

Returns:

True if the followOnJob is a follow-on of this job, else False.

Parameters:

followOnJob (Job) –

Return type:

bool

addService(service, parentService=None)[source]

Add a service.

The toil.job.Job.Service.start() method of the service will be called after the run method has completed but before any successors are run. The service’s toil.job.Job.Service.stop() method will be called once the successors of the job have been run.

Services allow things like databases and servers to be started and accessed by jobs in a workflow.

Raises:

toil.job.JobException – If service has already been made the child of a job or another service.

Parameters:
  • service (Job) – Service to add.

  • parentService (Optional[Job]) – Service that will be started before ‘service’ is started. Allows trees of services to be established. parentService must be a service of this job.

Returns:

a promise that will be replaced with the return value from toil.job.Job.Service.start() of service in any successor of the job.

Return type:

Promise

hasService(service)[source]

Return True if the given Service is a service of this job, and False otherwise.

Parameters:

service (Job) –

Return type:

bool

addChildFn(fn, *args, **kwargs)[source]

Add a function as a child job.

Parameters:

fn (Callable) – Function to be run as a child job with *args and **kwargs as arguments to this function. See toil.job.FunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new child job that wraps fn.

Return type:

FunctionWrappingJob

addFollowOnFn(fn, *args, **kwargs)[source]

Add a function as a follow-on job.

Parameters:

fn (Callable) – Function to be run as a follow-on job with *args and **kwargs as arguments to this function. See toil.job.FunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new follow-on job that wraps fn.

Return type:

FunctionWrappingJob

addChildJobFn(fn, *args, **kwargs)[source]

Add a job function as a child job.

See toil.job.JobFunctionWrappingJob for a definition of a job function.

Parameters:

fn (Callable) – Job function to be run as a child job with *args and **kwargs as arguments to this function. See toil.job.JobFunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new child job that wraps fn.

Return type:

FunctionWrappingJob

addFollowOnJobFn(fn, *args, **kwargs)[source]

Add a follow-on job function.

See toil.job.JobFunctionWrappingJob for a definition of a job function.

Parameters:

fn (Callable) – Job function to be run as a follow-on job with *args and **kwargs as arguments to this function. See toil.job.JobFunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new follow-on job that wraps fn.

Return type:

FunctionWrappingJob

log(text, level=logging.INFO)[source]

Log using fileStore.log_to_leader().

Parameters:

text (str) –

Return type:

None

static wrapFn(fn, *args, **kwargs)[source]

Makes a Job out of a function.

Convenience function for constructor of toil.job.FunctionWrappingJob.

Parameters:

fn – Function to be run with *args and **kwargs as arguments. See toil.job.JobFunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new function that wraps fn.

Return type:

FunctionWrappingJob

static wrapJobFn(fn, *args, **kwargs)[source]

Makes a Job out of a job function.

Convenience function for constructor of toil.job.JobFunctionWrappingJob.

Parameters:

fn – Job function to be run with *args and **kwargs as arguments. See toil.job.JobFunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new job function that wraps fn.

Return type:

JobFunctionWrappingJob

encapsulate(name=None)[source]

Encapsulates the job, see toil.job.EncapsulatedJob. Convenience function for constructor of toil.job.EncapsulatedJob.

Parameters:

name (Optional[str]) – Human-readable name for the encapsulated job.

Returns:

an encapsulated version of this job.

Return type:

EncapsulatedJob

rv(*path)[source]

Create a promise (toil.job.Promise).

The “promise” representing a return value of the job’s run method, or, in case of a function-wrapping job, the wrapped function’s return value.

Parameters:

path ((Any)) – Optional path for selecting a component of the promised return value. If absent or empty, the entire return value will be used. Otherwise, the first element of the path is used to select an individual item of the return value. For that to work, the return value must be a list, dictionary, or any other type implementing the __getitem__() magic method. If the selected item is yet another composite value, the second element of the path can be used to select an item from it, and so on. For example, if the return value is [6, {'a': 42}], .rv(0) would select 6, .rv(1) would select {'a': 42}, and .rv(1, 'a') would select 42. To select a slice from a return value that is sliceable, e.g. a tuple or list, the path element should be a slice object. For example, assuming that the return value is [6, 7, 8, 9], .rv(slice(1, 3)) would select [7, 8]. Note that slicing really only makes sense at the end of the path.

Returns:

A promise representing the return value of this jobs toil.job.Job.run() method.

Return type:

Promise
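The path-selection semantics of rv(*path) amount to repeated __getitem__ lookups once the promise is fulfilled. The helper below is a plain-Python sketch of that resolution step, not Toil’s Promise implementation:

```python
# Sketch of how a fulfilled promise's path is resolved (not Toil's code).
def select(value, *path):
    """Apply each path element to the value via __getitem__."""
    for element in path:
        value = value[element]  # ints, keys, and slice objects all work
    return value

result = [6, {"a": 42}]
print(select(result))                      # empty path: the whole value
print(select(result, 0))                   # 6
print(select(result, 1, "a"))              # 42
print(select([6, 7, 8, 9], slice(1, 3)))   # [7, 8]
```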

registerPromise(path)[source]
prepareForPromiseRegistration(jobStore)[source]

Set up to allow this job’s promises to register themselves.

Prepare this job (the promisor) so that its promises can register themselves with it, when the jobs they are promised to (promisees) are serialized.

The promisee holds the reference to the promise (usually as part of the job arguments), and when it is pickled, so are the promises it refers to. Pickling a promise triggers it to be registered with the promisor.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore) –

Return type:

None

checkJobGraphForDeadlocks()[source]

Ensures that a graph of Jobs (that hasn’t yet been saved to the JobStore) doesn’t contain any pathological relationships between jobs that would result in deadlocks if we tried to run the jobs.

See toil.job.Job.checkJobGraphConnected(), toil.job.Job.checkJobGraphAcyclic() and toil.job.Job.checkNewCheckpointsAreLeafVertices() for more info.

Raises:

toil.job.JobGraphDeadlockException – if the job graph is cyclic, contains multiple roots, or contains checkpoint jobs that are not leaf vertices when defined (see toil.job.Job.checkNewCheckpointsAreLeafVertices()).

getRootJobs()[source]

Return the set of root job objects that contain this job.

A root job is a job with no predecessors (i.e. which are not children, follow-ons, or services).

Only deals with jobs created here, rather than loaded from the job store.

Return type:

Set[Job]

checkJobGraphConnected()[source]
Raises:

toil.job.JobGraphDeadlockException – if toil.job.Job.getRootJobs() does not contain exactly one root job.

As execution always starts from one root job, having multiple root jobs will cause a deadlock to occur.

Only deals with jobs created here, rather than loaded from the job store.

checkJobGraphAcylic()[source]
Raises:

toil.job.JobGraphDeadlockException – if the connected component of jobs containing this job contains any cycles of child/followOn dependencies in the augmented job graph (see below). Such cycles are not allowed in valid job graphs.

A follow-on edge (A, B) between two jobs A and B is equivalent to adding a child edge to B from (1) A, (2) each child of A, and (3) the successors of each child of A. We call each such edge an “implied” edge. The augmented job graph is a job graph including all the implied edges.

For a job graph G = (V, E) the algorithm is O(|V|^2). It is O(|V| + |E|) for a graph with no follow-ons. The former follow-on case could be improved!

Only deals with jobs created here, rather than loaded from the job store.

checkNewCheckpointsAreLeafVertices()[source]

A checkpoint job is a job that is restarted if either it fails, or if any of its successors completely fails, exhausting their retries.

A job is a leaf if it has no successors.

A checkpoint job must be a leaf when initially added to the job graph. When its run method is invoked it can then create direct successors. This restriction is made to simplify implementation.

Only works on connected components of jobs not yet added to the JobStore.

Raises:

toil.job.JobGraphDeadlockException – if there exists a job being added to the graph for which checkpoint=True and which is not a leaf.

Return type:

None

defer(function, *args, **kwargs)[source]

Register a deferred function, i.e. a callable that will be invoked after the current attempt at running this job concludes. A job attempt is said to conclude when the job function (or the toil.job.Job.run() method for class-based jobs) returns, raises an exception or after the process running it terminates abnormally. A deferred function will be called on the node that attempted to run the job, even if a subsequent attempt is made on another node. A deferred function should be idempotent because it may be called multiple times on the same node or even in the same process. More than one deferred function may be registered per job attempt by calling this method repeatedly with different arguments. If the same function is registered twice with the same or different arguments, it will be called twice per job attempt.

Examples for deferred functions are ones that handle cleanup of resources external to Toil, like Docker containers, files outside the work directory, etc.

Parameters:
  • function (callable) – The function to be called after this job concludes.

  • args (list) – The arguments to the function

  • kwargs (dict) – The keyword arguments to the function

Return type:

None
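
A callable suitable for defer() must be idempotent, since it may run more than once on the same node. A minimal sketch of such a cleanup function (remove_scratch is an illustrative name; inside a job one would register it with self.defer(remove_scratch, scratch)):

```python
# Hedged sketch of an idempotent cleanup callable of the kind defer() expects.
import os
import shutil
import tempfile

def remove_scratch(path):
    # Safe to call repeatedly: a second call finds nothing to remove.
    if os.path.isdir(path):
        shutil.rmtree(path)

scratch = tempfile.mkdtemp()
remove_scratch(scratch)  # removes the directory
remove_scratch(scratch)  # no-op on the second call
assert not os.path.isdir(scratch)
```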

getUserScript()[source]
Return type:

toil.resource.ModuleDescriptor

getTopologicalOrderingOfJobs()[source]
Returns:

a list of jobs such that for all pairs of indices i, j for which i < j, the job at index i can be run before the job at index j.

Return type:

List[Job]

Only considers jobs in this job’s subgraph that are newly added, not loaded from the job store.

Ignores service jobs.
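
The ordering property can be illustrated on a toy graph of plain strings (not toil Job objects): every job must appear after all of its predecessors.

```python
# Toy illustration of the guarantee getTopologicalOrderingOfJobs() provides:
# for indices i < j, the job at i can run before the job at j.
deps = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}  # job -> predecessors
order = ["A", "B", "C", "D"]  # one valid topological ordering
pos = {job: i for i, job in enumerate(order)}
assert all(pos[pred] < pos[job] for job, preds in deps.items() for pred in preds)
```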

saveBody(jobStore)[source]

Save the execution data for just this job to the JobStore, and fill in the JobDescription with the information needed to retrieve it.

The Job’s JobDescription must have already had a real jobStoreID assigned to it.

Does not save the JobDescription.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore) – The job store to save the job body into.

Return type:

None

saveAsRootJob(jobStore)[source]

Save this job to the given jobStore as the root job of the workflow.

Returns:

the JobDescription describing this job.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore) –

Return type:

JobDescription

classmethod loadJob(jobStore, jobDescription)[source]

Retrieve a toil.job.Job instance from a JobStore.

Parameters:
  • jobStore (toil.jobStores.abstractJobStore.AbstractJobStore) – The job store to load the job from.

  • jobDescription (JobDescription) – The JobDescription of the job to load.

Returns:

The job referenced by the JobDescription.

Return type:

Job

exception toil.job.JobException(message)[source]

Bases: Exception

General job exception.

Parameters:

message (str) –

exception toil.job.JobGraphDeadlockException(string)[source]

Bases: JobException

An exception raised in the event that a workflow contains an unresolvable dependency, such as a cycle. See toil.job.Job.checkJobGraphForDeadlocks().

class toil.job.FunctionWrappingJob(userFunction, *args, **kwargs)[source]

Bases: Job

Job used to wrap a function. In its run method the wrapped function is called.

run(fileStore)[source]

Override this function to perform work and dynamically create successor jobs.

Parameters:

fileStore – Used to create local and globally sharable temporary files and to send log messages to the leader process.

Returns:

The return value of the function can be passed to other jobs by means of toil.job.Job.rv().

getUserScript()[source]
class toil.job.JobFunctionWrappingJob(userFunction, *args, **kwargs)[source]

Bases: FunctionWrappingJob

A job function is a function whose first argument is a Job instance that is the wrapping job for the function. This can be used to add successor jobs for the function and perform all the functions the Job class provides.

To enable the job function to get access to the toil.fileStores.abstractFileStore.AbstractFileStore instance (see toil.job.Job.run()), it is made a variable of the wrapping job called fileStore.

To specify a job’s resource requirements the following default keyword arguments can be specified:

  • memory

  • disk

  • cores

  • accelerators

  • preemptible

For example to wrap a function into a job we would call:

Job.wrapJobFn(myJob, memory='100k', disk='1M', cores=0.1)
property fileStore
run(fileStore)[source]

Override this function to perform work and dynamically create successor jobs.

Parameters:

fileStore – Used to create local and globally sharable temporary files and to send log messages to the leader process.

Returns:

The return value of the function can be passed to other jobs by means of toil.job.Job.rv().

class toil.job.PromisedRequirementFunctionWrappingJob(userFunction, *args, **kwargs)[source]

Bases: FunctionWrappingJob

Handles dynamic resource allocation using toil.job.Promise instances. Spawns child function using parent function parameters and fulfilled promised resource requirements.

classmethod create(userFunction, *args, **kwargs)[source]

Creates an encapsulated Toil job function with unfulfilled promised resource requirements. After the promises are fulfilled, a child job function is created using updated resource values. The subgraph is encapsulated to ensure that this child job function is run before other children in the workflow. Otherwise, a different child may try to use an unresolved promise return value from the parent.

run(fileStore)[source]

Override this function to perform work and dynamically create successor jobs.

Parameters:

fileStore – Used to create local and globally sharable temporary files and to send log messages to the leader process.

Returns:

The return value of the function can be passed to other jobs by means of toil.job.Job.rv().

evaluatePromisedRequirements()[source]
class toil.job.PromisedRequirementJobFunctionWrappingJob(userFunction, *args, **kwargs)[source]

Bases: PromisedRequirementFunctionWrappingJob

Handles dynamic resource allocation for job functions. See toil.job.JobFunctionWrappingJob

run(fileStore)[source]

Override this function to perform work and dynamically create successor jobs.

Parameters:

fileStore – Used to create local and globally sharable temporary files and to send log messages to the leader process.

Returns:

The return value of the function can be passed to other jobs by means of toil.job.Job.rv().

class toil.job.EncapsulatedJob(job, unitName=None)[source]

Bases: Job

A convenience Job class used to make a job subgraph appear to be a single job.

Let A be the root job of a job subgraph and B be another job we’d like to run after A and all its successors have completed; for this, use encapsulate:

# Job A and its subgraph; job B
A, B = A(), B()
Aprime = A.encapsulate()
Aprime.addChild(B)
# B will run after A and all of A's successors have completed; A and its
# subgraph of successors in effect appear to be a single job.

If the job being encapsulated has predecessors (e.g. is not the root job), then the encapsulated job will inherit these predecessors. If predecessors are added to the job being encapsulated after the encapsulated job is created then the encapsulating job will NOT inherit these predecessors automatically. Care should be exercised to ensure the encapsulated job has the proper set of predecessors.

The return value of an encapsulated job (as accessed by the toil.job.Job.rv() function) is the return value of the root job, e.g. A().encapsulate().rv() and A().rv() will resolve to the same value after A or A.encapsulate() has been run.

addChild(childJob)[source]

Add a childJob to be run as child of this job.

Child jobs will be run directly after this job’s toil.job.Job.run() method has completed.

Returns:

childJob: for call chaining

addService(service, parentService=None)[source]

Add a service.

The toil.job.Job.Service.start() method of the service will be called after the run method has completed but before any successors are run. The service’s toil.job.Job.Service.stop() method will be called once the successors of the job have been run.

Services allow things like databases and servers to be started and accessed by jobs in a workflow.

Raises:

toil.job.JobException – If service has already been made the child of a job or another service.

Parameters:
  • service – Service to add.

  • parentService – Service that will be started before ‘service’ is started. Allows trees of services to be established. parentService must be a service of this job.

Returns:

a promise that will be replaced with the return value from toil.job.Job.Service.start() of service in any successor of the job.

addFollowOn(followOnJob)[source]

Add a follow-on job.

Follow-on jobs will be run after the child jobs and their successors have been run.

Returns:

followOnJob for call chaining

rv(*path)[source]

Create a promise (toil.job.Promise).

The promise represents a return value of the job’s run method or, in the case of a function-wrapping job, the wrapped function’s return value.

Parameters:

path ((Any)) – Optional path for selecting a component of the promised return value. If absent or empty, the entire return value will be used. Otherwise, the first element of the path is used to select an individual item of the return value. For that to work, the return value must be a list, dictionary or any other type implementing the __getitem__() magic method. If the selected item is yet another composite value, the second element of the path can be used to select an item from it, and so on. For example, if the return value is [6, {'a': 42}], .rv(0) would select 6, .rv(1) would select {'a': 42}, while .rv(1, 'a') would select 42. To select a slice from a return value that is sliceable, e.g. a tuple or list, the path element should be a slice object. For example, assuming that the return value is [6, 7, 8, 9], then .rv(slice(1, 3)) would select [7, 8]. Note that slicing really only makes sense at the end of a path.

Returns:

A promise representing the return value of this job’s toil.job.Job.run() method.

Return type:

Promise
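
The path-selection semantics described above can be sketched in plain Python (select is a hypothetical helper, not a Toil API): each path element indexes the previous value via __getitem__.

```python
# Sketch of how rv() path elements select into a return value.
def select(value, *path):
    for key in path:
        value = value[key]
    return value

ret = [6, {'a': 42}]
assert select(ret) == [6, {'a': 42}]      # empty path: whole return value
assert select(ret, 0) == 6
assert select(ret, 1) == {'a': 42}
assert select(ret, 1, 'a') == 42
assert select([6, 7, 8, 9], slice(1, 3)) == [7, 8]
```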

prepareForPromiseRegistration(jobStore)[source]

Set up to allow this job’s promises to register themselves.

Prepare this job (the promisor) so that its promises can register themselves with it, when the jobs they are promised to (promisees) are serialized.

The promisee holds the reference to the promise (usually as part of the job arguments), and when the promisee is pickled, so are the promises it refers to. Pickling a promise triggers it to be registered with the promisor.

__reduce__()[source]

Called during pickling to define the pickled representation of the job.

We don’t want to pickle our internal references to the job we encapsulate, so we elide them here. When actually run, we’re just a no-op job that can maybe chain.

getUserScript()[source]
class toil.job.ServiceHostJob(service)[source]

Bases: Job

Job that runs a service. Used internally by Toil. Users should subclass Service instead of using this.

property fileStore

Return the file store, which the Service may need.

addChild(child)[source]

Add a childJob to be run as child of this job.

Child jobs will be run directly after this job’s toil.job.Job.run() method has completed.

Returns:

childJob: for call chaining

addFollowOn(followOn)[source]

Add a follow-on job.

Follow-on jobs will be run after the child jobs and their successors have been run.

Returns:

followOnJob for call chaining

addService(service, parentService=None)[source]

Add a service.

The toil.job.Job.Service.start() method of the service will be called after the run method has completed but before any successors are run. The service’s toil.job.Job.Service.stop() method will be called once the successors of the job have been run.

Services allow things like databases and servers to be started and accessed by jobs in a workflow.

Raises:

toil.job.JobException – If service has already been made the child of a job or another service.

Parameters:
  • service – Service to add.

  • parentService – Service that will be started before ‘service’ is started. Allows trees of services to be established. parentService must be a service of this job.

Returns:

a promise that will be replaced with the return value from toil.job.Job.Service.start() of service in any successor of the job.

saveBody(jobStore)[source]

Serialize the service itself before saving the host job’s body.

run(fileStore)[source]

Override this function to perform work and dynamically create successor jobs.

Parameters:

fileStore – Used to create local and globally sharable temporary files and to send log messages to the leader process.

Returns:

The return value of the function can be passed to other jobs by means of toil.job.Job.rv().

getUserScript()[source]
class toil.job.Promise(job, path)[source]

References a return value from a method as a promise before the method itself is run.

References a return value from a toil.job.Job.run() or toil.job.Job.Service.start() method as a promise before the method itself is run.

Let T be a job. Instances of Promise (termed a promise) are returned by T.rv(), which is used to reference the return value of T’s run function. When the promise is passed to the constructor (or as an argument to a wrapped function) of a different, successor job, the promise will be replaced by the actual referenced return value. This mechanism allows return values from one job’s run method to be used as input arguments to another job before the former job’s run method has been executed.

Parameters:
  • job (Job) –

  • path (Any) –

filesToDelete

A set of IDs of files containing promised values when we know we won’t need them anymore

__reduce__()[source]

Return the Promise class and construction arguments.

Called during pickling when a promise (an instance of this class) is about to be pickled. Returns the Promise class and construction arguments that will be evaluated during unpickling, namely the job store coordinates of a file that will hold the promised return value. By the time the promise is about to be unpickled, that file should be populated.
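
The underlying pickle mechanism can be sketched without Toil (Handle and Loaded are hypothetical classes): __reduce__ redirects pickling to a (callable, args) pair that is evaluated at unpickling time, which is how a pickled promise turns into a lookup of the promised value.

```python
# Sketch of the __reduce__ pattern Promise relies on.
import pickle

class Loaded:
    # Stands in for "fetch the promised value from the job store".
    def __init__(self, coords):
        self.coords = coords

class Handle:
    def __init__(self, coords):
        self.coords = coords

    def __reduce__(self):
        # Unpickling will call Loaded(self.coords) instead of rebuilding Handle.
        return (Loaded, (self.coords,))

out = pickle.loads(pickle.dumps(Handle(("store", "file-1"))))
assert isinstance(out, Loaded)
assert out.coords == ("store", "file-1")
```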

toil.job.T
toil.job.Promised
toil.job.unwrap(p)[source]

Function for ensuring you actually have a promised value, and not just a promise. Mostly useful for satisfying type-checking.

The “unwrap” terminology is borrowed from Rust.

Parameters:

p (Promised[T]) –

Return type:

T

toil.job.unwrap_all(p)[source]

Function for ensuring you actually have a collection of promised values, and not any remaining promises. Mostly useful for satisfying type-checking.

The “unwrap” terminology is borrowed from Rust.

Parameters:

p (Sequence[Promised[T]]) –

Return type:

Sequence[T]

class toil.job.PromisedRequirement(valueOrCallable, *args)[source]

Class for dynamically allocating job function resource requirements.

(involving toil.job.Promise instances.)

Use when resource requirements depend on the return value of a parent function. PromisedRequirements can be modified by passing a function that takes the Promise as input.

For example, let f, g, and h be functions. Then a Toil workflow can be defined as follows:

A = Job.wrapFn(f)
B = A.addChildFn(g, cores=PromisedRequirement(A.rv()))
C = B.addChildFn(h, cores=PromisedRequirement(lambda x: 2 * x, B.rv()))

getValue()[source]

Return PromisedRequirement value.

static convertPromises(kwargs)[source]

Return True if a reserved resource keyword is a Promise or PromisedRequirement instance.

Converts Promise instance to PromisedRequirement.

Parameters:

kwargs (Dict[str, Any]) – function keyword arguments

Return type:

bool

class toil.job.UnfulfilledPromiseSentinel(fulfillingJobName, file_id, unpickled)[source]

This should be overwritten by a proper promised value.

Throws an exception when unpickled.

Parameters:
  • fulfillingJobName (str) –

  • file_id (str) –

  • unpickled (Any) –

static __setstate__(stateDict)[source]

Only called when unpickling.

This won’t be unpickled unless the promise wasn’t resolved, so we throw an exception.

Parameters:

stateDict (Dict[str, Any]) –

Return type:

None
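
The raise-on-unpickle pattern that __setstate__ implements can be sketched without Toil (Sentinel is a hypothetical class): because state is restored through __setstate__, unpickling an unfulfilled sentinel raises instead of silently yielding a placeholder.

```python
# Minimal sketch of raising from __setstate__ at unpickling time.
import pickle

class Sentinel:
    def __init__(self, fulfillingJobName):
        # A real instance carries enough state for a useful error message.
        self.fulfillingJobName = fulfillingJobName

    def __setstate__(self, state):
        raise RuntimeError(
            "Promise from job %s was never fulfilled" % state["fulfillingJobName"]
        )

blob = pickle.dumps(Sentinel("parentJob"))
try:
    pickle.loads(blob)
    raised = False
except RuntimeError:
    raised = True
assert raised
```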