toil.test.lib.dockerTest

Attributes

FORGO

RM

STOP

logger

Exceptions

FailedJobsException

Raised when a Toil workflow finishes with one or more failed jobs.

Classes

Toil

A context manager that represents a Toil workflow.

Job

Represents a unit of work in a Toil workflow.

ToilTest

A common base class for Toil tests.

DockerTest

Tests dockerCall and ensures no containers are left around.

Functions

apiDockerCall(job, image[, parameters, deferParam, ...])

A toil wrapper for the python docker API.

containerIsRunning(container_name[, timeout])

Checks whether the container is running or not.

dockerKill(container_name[, gentleKill, remove, timeout])

Immediately kills a container. Equivalent to "docker kill".

needs_docker(test_item)

Use as a decorator before test classes or methods to only run them if Docker is installed.

slow(test_item)

Use this decorator to identify tests that are slow and not critical.

Module Contents

class toil.test.lib.dockerTest.Toil(options)[source]

Bases: ContextManager[Toil]

A context manager that represents a Toil workflow.

Specifically the batch system, job store, and its configuration.

Parameters:

options (argparse.Namespace)

config: Config
__enter__()[source]

Derive configuration from the command line options.

Then load the job store and, on restart, consolidate the derived configuration with the one from the previous invocation of the workflow.

Return type:

Toil

__exit__(exc_type, exc_val, exc_tb)[source]

Clean up after a workflow invocation.

Depending on the configuration, delete the job store.

Parameters:
Return type:

Literal[False]

start(rootJob)[source]

Invoke a Toil workflow with the given job as the root for an initial run.

This method must be called in the body of a with Toil(...) as toil: statement. This method should not be called more than once for a workflow that has not finished.

Parameters:

rootJob (toil.job.Job) – The root job of the workflow

Returns:

The root job’s return value

Return type:

Any

restart()[source]

Restarts a workflow that has been interrupted.

Returns:

The root job’s return value

Return type:

Any
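The start() and restart() methods are designed to be used together in the canonical launch pattern sketched below. The hello job function is purely illustrative, and the Toil imports are deferred inside the function so the sketch parses without Toil installed.

```python
# A sketch of the canonical start()/restart() pattern. The hello() job
# function is a hypothetical example; the imports are deferred inside
# run_workflow() so this file parses without Toil installed.
def run_workflow():
    from toil.common import Toil
    from toil.job import Job

    def hello(job, name):
        job.log(f"hello, {name}")  # send a log message to the leader
        return name

    parser = Job.Runner.getDefaultArgumentParser()
    options = parser.parse_args()
    with Toil(options) as toil:
        if not options.restart:
            # First run: hand the root job to start().
            return toil.start(Job.wrapJobFn(hello, "world"))
        # Interrupted run: resume from the existing job store.
        return toil.restart()
```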

classmethod getJobStore(locator)[source]

Create an instance of the concrete job store implementation that matches the given locator.

Parameters:

locator (str) – The location of the job store to be represented by the instance

Returns:

an instance of a concrete subclass of AbstractJobStore

Return type:

toil.jobStores.abstractJobStore.AbstractJobStore

static parseLocator(locator)[source]
Parameters:

locator (str)

Return type:

Tuple[str, str]

static buildLocator(name, rest)[source]
Parameters:
Return type:

str
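parseLocator() and buildLocator() are inverses over the "name:rest" locator convention. The following is a minimal pure-Python sketch of that convention (assumption: a bare path with no scheme is treated as a "file" locator, mirroring normalize_uri's behavior); the real methods also validate locator names.

```python
# Minimal sketch of the "<name>:<rest>" job store locator convention.
# Assumption: a bare path defaults to the "file" scheme.
def parse_locator(locator: str) -> tuple:
    if ":" not in locator:
        return ("file", locator)
    name, rest = locator.split(":", 1)  # split on the first colon only
    return (name, rest)

def build_locator(name: str, rest: str) -> str:
    return f"{name}:{rest}"
```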

classmethod resumeJobStore(locator)[source]
Parameters:

locator (str)

Return type:

toil.jobStores.abstractJobStore.AbstractJobStore

static createBatchSystem(config)[source]

Create an instance of the batch system specified in the given config.

Parameters:

config (Config) – the current configuration

Returns:

an instance of a concrete subclass of AbstractBatchSystem

Return type:

toil.batchSystems.abstractBatchSystem.AbstractBatchSystem

importFile(srcUrl: str, sharedFileName: str, symlink: bool = True) None[source]
importFile(srcUrl: str, sharedFileName: None = None, symlink: bool = True) toil.fileStores.FileID
import_file(src_uri: str, shared_file_name: str, symlink: bool = True, check_existence: bool = True) None[source]
import_file(src_uri: str, shared_file_name: None = None, symlink: bool = True, check_existence: bool = True) toil.fileStores.FileID

Import the file at the given URL into the job store.

By default (check_existence=True), a missing file raises FileNotFoundError; with check_existence=False, None is returned instead.

Parameters:

check_existence – If true, raise FileNotFoundError if the file does not exist. If false, return None when the file does not exist.

See toil.jobStores.abstractJobStore.AbstractJobStore.importFile() for a full description

exportFile(jobStoreFileID, dstUrl)[source]
Parameters:
Return type:

None

export_file(file_id, dst_uri)[source]

Export file to destination pointed at by the destination URL.

See toil.jobStores.abstractJobStore.AbstractJobStore.exportFile() for a full description

Parameters:
Return type:

None

static normalize_uri(uri, check_existence=False)[source]

Given a URI, if it has no scheme, prepend “file:”.

Parameters:
  • check_existence (bool) – If set, raise FileNotFoundError if a URI points to a local file that does not exist.

  • uri (str)

Return type:

str
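The "prepend file: if no scheme" rule can be sketched in a few lines. This is an approximation of the behavior described above (the real static method also percent-quotes local paths, which is omitted here):

```python
# Sketch of normalize_uri: prepend "file:" when the URI has no scheme.
import os
from urllib.parse import urlparse

def normalize_uri_sketch(uri: str, check_existence: bool = False) -> str:
    if urlparse(uri).scheme:
        return uri  # already has a scheme (file:, s3:, http:, ...)
    if check_existence and not os.path.exists(uri):
        raise FileNotFoundError(uri)
    return "file:" + os.path.abspath(uri)
```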

static getToilWorkDir(configWorkDir=None)[source]

Return a path to a writable directory under which per-workflow directories exist.

This directory is always required to exist on a machine, even if the Toil worker has not run yet. If your workers and leader have different temp directories, you may need to set TOIL_WORKDIR.

Parameters:

configWorkDir (Optional[str]) – Value passed to the program using the --workDir flag

Returns:

Path to the Toil work directory, constant across all machines

Return type:

str

classmethod get_toil_coordination_dir(config_work_dir, config_coordination_dir)[source]

Return a path to a writable directory, which will be in memory if convenient. Ought to be used for file locking and coordination.

Parameters:
  • config_work_dir (Optional[str]) – Value passed to the program using the --workDir flag

  • config_coordination_dir (Optional[str]) – Value passed to the program using the --coordinationDir flag

Returns:

Path to the Toil coordination directory. Ought to be on a POSIX filesystem that allows directories containing open files to be deleted.

Return type:

str

static get_workflow_path_component(workflow_id)[source]

Get a safe filesystem path component for a workflow.

Will be consistent for all processes on a given machine, and different for all processes on different machines.

Parameters:

workflow_id (str) – The ID of the current Toil workflow.

Return type:

str

classmethod getLocalWorkflowDir(workflowID, configWorkDir=None)[source]

Return the directory where worker directories and the cache will be located for this workflow on this machine.

Parameters:
  • configWorkDir (Optional[str]) – Value passed to the program using the --workDir flag

  • workflowID (str)

Returns:

Path to the local workflow directory on this machine

Return type:

str

classmethod get_local_workflow_coordination_dir(workflow_id, config_work_dir, config_coordination_dir)[source]

Return the directory where coordination files should be located for this workflow on this machine. These include internal Toil databases and lock files for the machine.

If an in-memory filesystem is available, it is used. Otherwise, the local workflow directory, which may be on a shared network filesystem, is used.

Parameters:
  • workflow_id (str) – Unique ID of the current workflow.

  • config_work_dir (Optional[str]) – Value used for the work directory in the current Toil Config.

  • config_coordination_dir (Optional[str]) – Value used for the coordination directory in the current Toil Config.

Returns:

Path to the local workflow coordination directory on this machine.

Return type:

str

exception toil.test.lib.dockerTest.FailedJobsException(job_store, failed_jobs, exit_code=1)[source]

Bases: Exception

Raised when a Toil workflow finishes with one or more failed jobs.

Parameters:
__str__()[source]

Stringify the exception, including the message.

Return type:

str

class toil.test.lib.dockerTest.Job(memory=None, cores=None, disk=None, accelerators=None, preemptible=None, preemptable=None, unitName='', checkpoint=False, displayName='', descriptionClass=None, local=None)[source]

Represents a unit of work in a Toil workflow.

Parameters:
  • memory (Optional[ParseableIndivisibleResource])

  • cores (Optional[ParseableDivisibleResource])

  • disk (Optional[ParseableIndivisibleResource])

  • accelerators (Optional[ParseableAcceleratorRequirement])

  • preemptible (Optional[ParseableFlag])

  • preemptable (Optional[ParseableFlag])

  • unitName (Optional[str])

  • checkpoint (Optional[bool])

  • displayName (Optional[str])

  • descriptionClass (Optional[type])

  • local (Optional[bool])

__str__()[source]

Produce a useful logging string to identify this Job and distinguish it from its JobDescription.

check_initialized()[source]

Ensure that Job.__init__() has been called by any subclass __init__().

This uses the fact that the self._description instance variable should always be set after __init__().

If __init__() has not been called, raise an error.

Return type:

None

property jobStoreID: str | TemporaryID

Get the ID of this Job.

Return type:

Union[str, TemporaryID]

property description: JobDescription

Expose the JobDescription that describes this job.

Return type:

JobDescription

property disk: int

The maximum number of bytes of disk the job will require to run.

Return type:

int

property memory
The maximum number of bytes of memory the job will require to run.
property cores: int | float

The number of CPU cores required.

Return type:

Union[int, float]

property accelerators: List[AcceleratorRequirement]

Any accelerators, such as GPUs, that are needed.

Return type:

List[AcceleratorRequirement]

property preemptible: bool

Whether the job can be run on a preemptible node.

Return type:

bool

preemptable()[source]
property checkpoint: bool

Determine if the job is a checkpoint job or not.

Return type:

bool

assignConfig(config)[source]

Assign the given config object.

It will be used by various actions implemented inside the Job class.

Parameters:

config (toil.common.Config) – Config object to query

Return type:

None

run(fileStore)[source]

Override this function to perform work and dynamically create successor jobs.

Parameters:

fileStore (toil.fileStores.abstractFileStore.AbstractFileStore) – Used to create local and globally sharable temporary files and to send log messages to the leader process.

Returns:

The return value of the function can be passed to other jobs by means of toil.job.Job.rv().

Return type:

Any
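Overriding run() is the core of a class-based job. Below is a hypothetical Job subclass following that pattern; the resource strings are illustrative, and the toil import is deferred inside a factory so the sketch parses without Toil installed.

```python
# A hypothetical Job subclass showing the run() override. The import is
# deferred inside the factory so this parses without Toil installed.
def make_hello_job(name: str):
    from toil.job import Job

    class HelloJob(Job):
        def __init__(self, name):
            # Resource requirements are illustrative values.
            super().__init__(memory="100M", cores=1, disk="100M")
            self.name = name

        def run(self, fileStore):
            # Log via the file store and return a value that successor
            # jobs can receive through rv().
            fileStore.log_to_leader(f"hello, {self.name}")
            return self.name

    return HelloJob(name)
```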

addChild(childJob)[source]

Add a childJob to be run as child of this job.

Child jobs will be run directly after this job’s toil.job.Job.run() method has completed.

Returns:

childJob: for call chaining

Parameters:

childJob (Job)

Return type:

Job

hasChild(childJob)[source]

Check if childJob is already a child of this job.

Returns:

True if childJob is a child of the job, else False.

Parameters:

childJob (Job)

Return type:

bool

addFollowOn(followOnJob)[source]

Add a follow-on job.

Follow-on jobs will be run after the child jobs and their successors have been run.

Returns:

followOnJob for call chaining

Parameters:

followOnJob (Job)

Return type:

Job
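The scheduling contract of addChild() and addFollowOn() can be illustrated with a toy model: children run after the parent's run(), and follow-ons run after the children and their successors. This is a pure-Python sketch, not Toil's scheduler.

```python
# Toy model of child/follow-on ordering. Not Toil's scheduler: it only
# demonstrates the execution order contract described above.
class MiniJob:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.follow_ons = []

    def add_child(self, job):
        self.children.append(job)
        return job  # returned for call chaining, like Job.addChild()

    def add_follow_on(self, job):
        self.follow_ons.append(job)
        return job

    def run_all(self, order=None):
        order = [] if order is None else order
        order.append(self.name)           # this job's run()
        for child in self.children:       # then children and successors
            child.run_all(order)
        for follow_on in self.follow_ons: # then follow-ons
            follow_on.run_all(order)
        return order
```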

hasPredecessor(job)[source]

Check if a given job is already a predecessor of this job.

Parameters:

job (Job)

Return type:

bool

hasFollowOn(followOnJob)[source]

Check if given job is already a follow-on of this job.

Returns:

True if the followOnJob is a follow-on of this job, else False.

Parameters:

followOnJob (Job)

Return type:

bool

addService(service, parentService=None)[source]

Add a service.

The toil.job.Job.Service.start() method of the service will be called after the run method has completed but before any successors are run. The service’s toil.job.Job.Service.stop() method will be called once the successors of the job have been run.

Services allow things like databases and servers to be started and accessed by jobs in a workflow.

Raises:

toil.job.JobException – If service has already been made the child of a job or another service.

Parameters:
  • service (Job) – Service to add.

  • parentService (Optional[Job]) – Service that will be started before ‘service’ is started. Allows trees of services to be established. parentService must be a service of this job.

Returns:

a promise that will be replaced with the return value from toil.job.Job.Service.start() of service in any successor of the job.

Return type:

Promise

hasService(service)[source]

Return True if the given Service is a service of this job, and False otherwise.

Parameters:

service (Job)

Return type:

bool

addChildFn(fn, *args, **kwargs)[source]

Add a function as a child job.

Parameters:

fn (Callable) – Function to be run as a child job with *args and **kwargs as arguments to this function. See toil.job.FunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new child job that wraps fn.

Return type:

FunctionWrappingJob

addFollowOnFn(fn, *args, **kwargs)[source]

Add a function as a follow-on job.

Parameters:

fn (Callable) – Function to be run as a follow-on job with *args and **kwargs as arguments to this function. See toil.job.FunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new follow-on job that wraps fn.

Return type:

FunctionWrappingJob

addChildJobFn(fn, *args, **kwargs)[source]

Add a job function as a child job.

See toil.job.JobFunctionWrappingJob for a definition of a job function.

Parameters:

fn (Callable) – Job function to be run as a child job with *args and **kwargs as arguments to this function. See toil.job.JobFunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new child job that wraps fn.

Return type:

FunctionWrappingJob

addFollowOnJobFn(fn, *args, **kwargs)[source]

Add a follow-on job function.

See toil.job.JobFunctionWrappingJob for a definition of a job function.

Parameters:

fn (Callable) – Job function to be run as a follow-on job with *args and **kwargs as arguments to this function. See toil.job.JobFunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new follow-on job that wraps fn.

Return type:

FunctionWrappingJob

property tempDir: str

Shortcut to calling job.fileStore.getLocalTempDir().

The temp dir is created on the first call and the same path is returned on subsequent calls.

Returns:

Path to the temp dir. See job.fileStore.getLocalTempDir().

Return type:

str

log(text, level=logging.INFO)[source]

Log using fileStore.log_to_leader().

Parameters:

text (str)

Return type:

None

static wrapFn(fn, *args, **kwargs)[source]

Makes a Job out of a function.

Convenience function for constructor of toil.job.FunctionWrappingJob.

Parameters:

fn – Function to be run with *args and **kwargs as arguments. See toil.job.JobFunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new function that wraps fn.

Return type:

FunctionWrappingJob

static wrapJobFn(fn, *args, **kwargs)[source]

Makes a Job out of a job function.

Convenience function for constructor of toil.job.JobFunctionWrappingJob.

Parameters:

fn – Job function to be run with *args and **kwargs as arguments. See toil.job.JobFunctionWrappingJob for reserved keyword arguments used to specify resource requirements.

Returns:

The new job function that wraps fn.

Return type:

JobFunctionWrappingJob
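wrapJobFn() and the addChild/addFollowOn function variants support a functional style where plain functions serve as jobs. A sketch of that style follows; the parent/child functions are hypothetical, and imports are deferred so the example parses without Toil installed.

```python
# Sketch of the function-wrapping style. parent() and child() are
# hypothetical job functions; the import is deferred inside build_graph().
def build_graph():
    from toil.job import Job

    def parent(job):
        job.log("parent running")

    def child(job, value):
        return value * 2

    root = Job.wrapJobFn(parent)
    # Reserved kwargs such as memory/cores set resource requirements.
    doubled = root.addChildJobFn(child, 21, memory="100M", cores=1)
    return root, doubled.rv()  # rv() promises the child's return value
```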

encapsulate(name=None)[source]

Encapsulates the job, see toil.job.EncapsulatedJob. Convenience function for constructor of toil.job.EncapsulatedJob.

Parameters:

name (Optional[str]) – Human-readable name for the encapsulated job.

Returns:

an encapsulated version of this job.

Return type:

EncapsulatedJob

rv(*path)[source]

Create a promise (toil.job.Promise).

The “promise” representing a return value of the job’s run method, or, in case of a function-wrapping job, the wrapped function’s return value.

Parameters:

path ((Any)) – Optional path for selecting a component of the promised return value. If absent or empty, the entire return value will be used. Otherwise, the first element of the path is used to select an individual item of the return value. For that to work, the return value must be a list, dictionary or of any other type implementing the __getitem__() magic method. If the selected item is yet another composite value, the second element of the path can be used to select an item from it, and so on. For example, if the return value is [6,{‘a’:42}], .rv(0) would select 6, .rv(1) would select {‘a’:42}, while .rv(1,’a’) would select 42. To select a slice from a return value that is sliceable, e.g. tuple or list, the path element should be a slice object. For example, assuming that the return value is [6, 7, 8, 9] then .rv(slice(1, 3)) would select [7, 8]. Note that slicing really only makes sense at the end of a path.

Returns:

A promise representing the return value of this jobs toil.job.Job.run() method.

Return type:

Promise
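The path-selection rule is simple to express directly: each path element indexes into the value via __getitem__, so integers, dictionary keys, and slice objects all work. A pure-Python sketch, mirroring the rv() examples above:

```python
# Sketch of how an rv() path selects into a composite return value:
# each path element is applied via __getitem__ in order.
def select_path(value, *path):
    for element in path:
        value = value[element]
    return value
```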

registerPromise(path)[source]
prepareForPromiseRegistration(jobStore)[source]

Set up to allow this job’s promises to register themselves.

Prepare this job (the promisor) so that its promises can register themselves with it, when the jobs they are promised to (promisees) are serialized.

The promisee holds the reference to the promise (usually as part of the job arguments), and when it is pickled, so are the promises it refers to. Pickling a promise triggers it to be registered with the promisor.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore)

Return type:

None

checkJobGraphForDeadlocks()[source]

Ensures that a graph of Jobs (that hasn’t yet been saved to the JobStore) doesn’t contain any pathological relationships between jobs that would result in deadlocks if we tried to run the jobs.

See toil.job.Job.checkJobGraphConnected(), toil.job.Job.checkJobGraphAcyclic() and toil.job.Job.checkNewCheckpointsAreLeafVertices() for more info.

Raises:

toil.job.JobGraphDeadlockException – if the job graph is cyclic, contains multiple roots or contains checkpoint jobs that are not leaf vertices when defined (see toil.job.Job.checkNewCheckpointsAreLeaves()).

getRootJobs()[source]

Return the set of root job objects that contain this job.

A root job is a job with no predecessors (i.e. which are not children, follow-ons, or services).

Only deals with jobs created here, rather than loaded from the job store.

Return type:

Set[Job]

checkJobGraphConnected()[source]
Raises:

toil.job.JobGraphDeadlockException – if toil.job.Job.getRootJobs() does not contain exactly one root job.

As execution always starts from one root job, having multiple root jobs will cause a deadlock to occur.

Only deals with jobs created here, rather than loaded from the job store.

checkJobGraphAcylic()[source]
Raises:

toil.job.JobGraphDeadlockException – if the connected component of jobs containing this job contains any cycles of child/followOn dependencies in the augmented job graph (see below). Such cycles are not allowed in valid job graphs.

A follow-on edge (A, B) between two jobs A and B is equivalent to adding a child edge to B from (1) A, (2) from each child of A, and (3) from the successors of each child of A. We call each such edge an “implied” edge. The augmented job graph is a job graph including all the implied edges.

For a job graph G = (V, E) the algorithm is O(|V|^2). It is O(|V| + |E|) for a graph with no follow-ons. The former follow-on case could be improved!

Only deals with jobs created here, rather than loaded from the job store.
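The acyclicity check amounts to depth-first search over successor edges, flagging any back edge to a node still on the current DFS stack. A toy sketch of that check on a plain successor graph (the real method additionally augments the graph with the implied follow-on edges described above):

```python
# DFS cycle check over a plain successor graph. A "gray" node is on the
# current DFS stack; an edge back to a gray node is a cycle.
def has_cycle(successors):
    """successors: dict mapping node -> list of successor nodes."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in successors}

    def visit(node):
        color[node] = GRAY
        for succ in successors.get(node, []):
            if color.get(succ, WHITE) == GRAY:
                return True  # back edge: cycle found
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in successors)
```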

checkNewCheckpointsAreLeafVertices()[source]

A checkpoint job is a job that is restarted if either it fails, or if any of its successors completely fails, exhausting their retries.

A job is a leaf if it has no successors.

A checkpoint job must be a leaf when initially added to the job graph. When its run method is invoked it can then create direct successors. This restriction is made to simplify implementation.

Only works on connected components of jobs not yet added to the JobStore.

Raises:

toil.job.JobGraphDeadlockException – if there exists a job being added to the graph for which checkpoint=True and which is not a leaf.

Return type:

None

defer(function, *args, **kwargs)[source]

Register a deferred function, i.e. a callable that will be invoked after the current attempt at running this job concludes. A job attempt is said to conclude when the job function (or the toil.job.Job.run() method for class-based jobs) returns, raises an exception or after the process running it terminates abnormally. A deferred function will be called on the node that attempted to run the job, even if a subsequent attempt is made on another node. A deferred function should be idempotent because it may be called multiple times on the same node or even in the same process. More than one deferred function may be registered per job attempt by calling this method repeatedly with different arguments. If the same function is registered twice with the same or different arguments, it will be called twice per job attempt.

Examples for deferred functions are ones that handle cleanup of resources external to Toil, like Docker containers, files outside the work directory, etc.

Parameters:
  • function (callable) – The function to be called after this job concludes.

  • args (list) – The arguments to the function

  • kwargs (dict) – The keyword arguments to the function

Return type:

None
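The defer() semantics above can be modeled with a small registry: deferred callables run after the job attempt concludes, whether it returned or raised, in registration order, with duplicates called once per registration. A pure-Python sketch, not Toil's implementation:

```python
# Toy model of defer(): cleanup callables run after the attempt concludes,
# normally or with an exception, in registration order.
class DeferredRegistry:
    def __init__(self):
        self._deferred = []

    def defer(self, function, *args, **kwargs):
        self._deferred.append((function, args, kwargs))

    def run_attempt(self, job_body):
        try:
            return job_body()
        finally:
            # Cleanup runs whether the body returned or raised.
            for function, args, kwargs in self._deferred:
                function(*args, **kwargs)
```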

class Runner[source]

Used to setup and run Toil workflow.

static getDefaultArgumentParser(jobstore_as_flag=False)[source]

Get argument parser with added toil workflow options.

Parameters:

jobstore_as_flag (bool) – make the job store option a --jobStore flag instead of a required jobStore positional argument.

Returns:

The argument parser used by a toil workflow with added Toil options.

Return type:

argparse.ArgumentParser

static getDefaultOptions(jobStore=None, jobstore_as_flag=False)[source]

Get default options for a toil workflow.

Parameters:
  • jobStore (Optional[str]) – A string describing the jobStore for the workflow.

  • jobstore_as_flag (bool) – make the job store option a --jobStore flag instead of a required jobStore positional argument.

Returns:

The options used by a toil workflow.

Return type:

argparse.Namespace

static addToilOptions(parser, jobstore_as_flag=False)[source]

Adds the default toil options to an optparse or argparse parser object.

Parameters:
Return type:

None

static startToil(job, options)[source]

Run the toil workflow using the given options.

Deprecated by toil.common.Toil.start.

Runs the workflow (see Job.Runner.getDefaultOptions and Job.Runner.addToilOptions) starting with the given root job.

Raises:

toil.exceptions.FailedJobsException – if failed jobs remain at the end of the run.

Returns:

The return value of the root job’s run function.

Parameters:

job (Job)

Return type:

Any

class Service(memory=None, cores=None, disk=None, accelerators=None, preemptible=None, unitName=None)[source]

Bases: Requirer

Abstract class used to define the interface to a service.

Should be subclassed by the user to define services.

Is not executed as a job; runs within a ServiceHostJob.

abstract start(job)[source]

Start the service.

Parameters:

job (Job) – The underlying host job that the service is being run in. Can be used to register deferred functions, or to access the fileStore for creating temporary files.

Returns:

An object describing how to access the service. The object must be pickleable and will be used by jobs to access the service (see toil.job.Job.addService()).

Return type:

Any

abstract stop(job)[source]

Stops the service. Function can block until complete.

Parameters:

job (Job) – The underlying host job that the service is being run in. Can be used to register deferred functions, or to access the fileStore for creating temporary files.

Return type:

None

check()[source]

Checks that the service is still running.

Raises:

exceptions.RuntimeError – If the service failed, this will cause the service job to be labeled failed.

Returns:

True if the service is still running, else False. If False then the service job will be terminated, and considered a success. Important point: if the service job exits due to a failure, it should raise a RuntimeError, not return False!

Return type:

bool
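A hypothetical Service subclass following the start()/stop()/check() contract above is sketched below. The launch_database() helper and its connection object are inventions for illustration only, and the toil import is deferred so the sketch parses without Toil installed.

```python
# Hypothetical Service subclass. launch_database() and its return object
# are illustrative inventions, not real APIs; the import is deferred.
def make_db_service():
    from toil.job import Job

    class DatabaseService(Job.Service):
        def start(self, job):
            self.process = launch_database()        # hypothetical helper
            return self.process.connection_string   # pickleable access info

        def stop(self, job):
            self.process.terminate()

        def check(self):
            # Failure must raise, not return False; False means a clean stop.
            if self.process.crashed():
                raise RuntimeError("database service failed")
            return self.process.is_running()

    return DatabaseService(memory="500M", cores=1)
```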

getUserScript()[source]
Return type:

toil.resource.ModuleDescriptor

getTopologicalOrderingOfJobs()[source]
Returns:

a list of jobs such that for all pairs of indices i, j for which i < j, the job at index i can be run before the job at index j.

Return type:

List[Job]

Only considers jobs in this job’s subgraph that are newly added, not loaded from the job store.

Ignores service jobs.

saveBody(jobStore)[source]

Save the execution data for just this job to the JobStore, and fill in the JobDescription with the information needed to retrieve it.

The Job’s JobDescription must have already had a real jobStoreID assigned to it.

Does not save the JobDescription.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore) – The job store to save the job body into.

Return type:

None

saveAsRootJob(jobStore)[source]

Save this job to the given jobStore as the root job of the workflow.

Returns:

the JobDescription describing this job.

Parameters:

jobStore (toil.jobStores.abstractJobStore.AbstractJobStore)

Return type:

JobDescription

classmethod loadJob(job_store, job_description)[source]

Retrieves a toil.job.Job instance from a JobStore.

Parameters:
Returns:

The job referenced by the JobDescription.

Return type:

Job

set_debug_flag(flag)[source]

Enable the given debug option on the job.

Parameters:

flag (str)

Return type:

None

has_debug_flag(flag)[source]

Return true if the given debug flag is set.

Parameters:

flag (str)

Return type:

bool

files_downloaded_hook(host_and_job_paths=None)[source]

Function that subclasses can call when they have downloaded their input files.

Will abort the job if the “download_only” debug flag is set.

Can be hinted a list of file path pairs outside and inside the job container, in which case the container environment can be reconstructed.

Parameters:

host_and_job_paths (Optional[List[Tuple[str, str]]])

Return type:

None

toil.test.lib.dockerTest.FORGO = 0
toil.test.lib.dockerTest.RM = 2
toil.test.lib.dockerTest.STOP = 1
toil.test.lib.dockerTest.apiDockerCall(job, image, parameters=None, deferParam=None, volumes=None, working_dir=None, containerName=None, entrypoint=None, detach=False, log_config=None, auto_remove=None, remove=False, user=None, environment=None, stdout=None, stderr=False, stream=False, demux=False, streamfile=None, accelerators=None, timeout=365 * 24 * 60 * 60, **kwargs)[source]

A toil wrapper for the python docker API.

Docker API Docs: https://docker-py.readthedocs.io/en/stable/index.html Docker API Code: https://github.com/docker/docker-py

This implements docker’s python API within toil so that calls are run as jobs, with the intention that failed/orphaned docker jobs be handled appropriately.

Example of using apiDockerCall in toil to index a FASTA file with SAMtools:

def toil_job(job):
    working_dir = job.fileStore.getLocalTempDir()
    path = job.fileStore.readGlobalFile(ref_id,
                                        os.path.join(working_dir, 'ref.fasta'))
    parameters = ['faidx', path]
    apiDockerCall(job,
                  image='quay.io/ucsc_cgl/samtools:latest',
                  working_dir=working_dir,
                  parameters=parameters)

Note that when run with detach=False, or with detach=True and stdout=True or stderr=True, this is a blocking call. When run with detach=True and without output capture, the container is started and returned without waiting for it to finish.

Parameters:
  • job (toil.job.Job) – The Job instance for the calling function.

  • image (str) – Name of the Docker image to be used. (e.g. ‘quay.io/ucsc_cgl/samtools:latest’)

  • parameters (list[str]) – A list of string elements. If there are multiple elements, these will be joined with spaces. This handling of multiple elements provides backwards compatibility with previous versions which called docker using subprocess.check_call(). If list of lists: list[list[str]], then treat as successive commands chained with pipe.

  • working_dir (str) – The working directory.

  • deferParam (int) – Action to take on the container upon job completion. FORGO (0) leaves the container untouched and running. STOP (1) sends SIGTERM, then SIGKILL if necessary, to the container. RM (2) immediately sends SIGKILL to the container and removes it; this is the default behavior if deferParam is set to None.

  • containerName (str) – The name/ID of the container.

  • entrypoint (str) – Prepends commands sent to the container. See: https://docker-py.readthedocs.io/en/stable/containers.html

  • detach (bool) – Run the container in detached mode. (equivalent to ‘-d’)

  • stdout (bool) – Return logs from STDOUT when detach=False (default: True). Block and capture stdout to a file when detach=True (default: False). Output capture defaults to output.log, and can be specified with the “streamfile” kwarg.

  • stderr (bool) – Return logs from STDERR when detach=False (default: False). Block and capture stderr to a file when detach=True (default: False). Output capture defaults to output.log, and can be specified with the “streamfile” kwarg.

  • stream (bool) – If True and detach=False, return a log generator instead of a string. Ignored if detach=True. (default: False).

  • demux (bool) – Similar to demux in container.exec_run(). If True and detach=False, returns a tuple of (stdout, stderr). If stream=True, returns a log generator with tuples of (stdout, stderr). Ignored if detach=True. (default: False).

  • streamfile (str) – Collect container output to this file if detach=True and stderr and/or stdout are True. Defaults to “output.log”.

  • log_config (dict) – Specify the logs to return from the container. See: https://docker-py.readthedocs.io/en/stable/containers.html

  • remove (bool) – Remove the container on exit or not.

  • user (str) – The container will be run with the privileges of the user specified. Can be an actual name, such as ‘root’ or ‘lifeisaboutfishtacos’, or it can be the uid or gid of the user (‘0’ is root; ‘1000’ is an example of a less privileged uid or gid), or a complement of the uid:gid (RECOMMENDED), such as ‘0:0’ (root user : root group) or ‘1000:1000’ (some other user : some other user group).

  • environment – Allows one to set environment variables inside of the container, such as:

  • timeout (int) – Use the given timeout in seconds for interactions with the Docker daemon. Note that the underlying docker module is not always able to abort ongoing reads and writes in order to respect the timeout. Defaults to 1 year (i.e. wait essentially indefinitely).

  • accelerators (Optional[List[int]]) – Toil accelerator numbers (usually GPUs) to forward to the container. These are interpreted in the current Python process’s environment. See toil.lib.accelerators.get_individual_local_accelerators() for the menu of available accelerators.

  • kwargs – Additional keyword arguments supplied to the docker API’s run command. The list is 75 keywords total, for examples and full documentation see: https://docker-py.readthedocs.io/en/stable/containers.html

Returns:

Returns the standard output/standard error text, as requested, when detach=False. Returns the underlying docker.models.containers.Container object from the Docker API when detach=True.
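A sketch of the detached mode described above, using deferParam to guarantee cleanup. The image and command are illustrative, importing STOP alongside apiDockerCall from toil.lib.docker is assumed to work as the module attributes above suggest, and the import is deferred so the sketch parses without Toil installed.

```python
# Sketch of apiDockerCall with detach=True and STOP cleanup. Image and
# command are illustrative; imports are deferred inside the job function.
def detached_container_job(job):
    from toil.lib.docker import apiDockerCall, STOP

    # detach=True without stdout/stderr capture returns the running
    # Container object immediately; deferParam=STOP ensures the container
    # is stopped when the job concludes, even if the job fails.
    container = apiDockerCall(job,
                              image='ubuntu:22.04',
                              parameters=['sleep', 'infinity'],
                              detach=True,
                              deferParam=STOP)
    return container.name
```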

toil.test.lib.dockerTest.containerIsRunning(container_name, timeout=365 * 24 * 60 * 60)[source]

Checks whether the container is running or not.

Parameters:
  • container_name (str) – Name of the container being checked.

  • timeout (int) – Use the given timeout in seconds for interactions with the Docker daemon. Note that the underlying docker module is not always able to abort ongoing reads and writes in order to respect the timeout. Defaults to 1 year (i.e. wait essentially indefinitely).

Returns:

True if status is ‘running’, False if status is anything else, and None if the container does not exist.

toil.test.lib.dockerTest.dockerKill(container_name, gentleKill=False, remove=False, timeout=365 * 24 * 60 * 60)[source]

Immediately kills a container. Equivalent to “docker kill”: https://docs.docker.com/engine/reference/commandline/kill/

Parameters:
  • container_name (str) – Name of the container being killed.

  • gentleKill (bool) – If True, trigger a graceful shutdown.

  • remove (bool) – If True, remove the container after it exits.

  • timeout (int) – Use the given timeout in seconds for interactions with the Docker daemon. Note that the underlying docker module is not always able to abort ongoing reads and writes in order to respect the timeout. Defaults to 1 year (i.e. wait essentially indefinitely).

Return type:

None
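A hypothetical cleanup helper combining dockerKill with containerIsRunning; the helper name is an assumption, but this is the kind of sweep a teardown might perform to ensure no containers are left around:

```python
def remove_if_present(container_name):
    """Hypothetical cleanup: gracefully stop and remove a container if it
    still exists, leaving already-absent containers (a None status) alone."""
    from toil.lib.docker import containerIsRunning, dockerKill  # deferred import

    if containerIsRunning(container_name) is not None:
        # gentleKill requests a graceful shutdown first; remove deletes the
        # container after it exits, like `docker rm`.
        dockerKill(container_name, gentleKill=True, remove=True)
```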

class toil.test.lib.dockerTest.ToilTest(methodName='runTest')[source]

Bases: unittest.TestCase

A common base class for Toil tests.

Please have every test case directly or indirectly inherit this one.

When running tests you may optionally set the TOIL_TEST_TEMP environment variable to the path of a directory where you want temporary test files to be placed. The directory will be created if it doesn’t exist. The path may be relative, in which case it will be assumed to be relative to the project root. If TOIL_TEST_TEMP is not defined, temporary files and directories will be created in the system’s default location for such files, and any temporary files or directories left over from tests will be removed automatically during tear down. Otherwise, left-over files will not be removed.
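For example, to direct test temp files to a fixed directory (the path here is chosen for illustration; note that with TOIL_TEST_TEMP set, left-over files are not removed on tear down):

```shell
# Put temporary test files in a known place. Toil creates the directory if
# it is absent, but creating it up front also works. A relative path would
# be resolved against the project root.
export TOIL_TEST_TEMP=/tmp/toil-test-tmp
mkdir -p "$TOIL_TEST_TEMP"
```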

setup_method(method)[source]
Parameters:

method (Any)

Return type:

None

classmethod setUpClass()[source]

Hook method for setting up class fixture before running tests in the class.

Return type:

None

classmethod tearDownClass()[source]

Hook method for deconstructing the class fixture after running all tests in the class.

Return type:

None

setUp()[source]

Hook method for setting up the test fixture before exercising it.

Return type:

None

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

Return type:

None

classmethod awsRegion()[source]

Pick an appropriate AWS region.

Use us-west-2 unless running on EC2, in which case use the region in which the instance is located.

Return type:

str

toil.test.lib.dockerTest.needs_docker(test_item)[source]

Use as a decorator before test classes or methods to only run them if docker is installed and docker-based tests are enabled.

Parameters:

test_item (MT)

Return type:

MT
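The decorator’s effect resembles unittest’s skipUnless gating. A minimal sketch of the pattern, not Toil’s actual implementation (which also honors a setting that enables docker-based tests):

```python
import shutil
import unittest

def needs_docker_sketch(test_item):
    """Sketch of the needs_docker pattern: skip the test item unless a
    docker client binary is found on PATH. Toil's real decorator does more."""
    return unittest.skipUnless(
        shutil.which("docker"), "docker is not installed"
    )(test_item)

@needs_docker_sketch
class ExampleDockerTest(unittest.TestCase):
    def test_placeholder(self):
        pass
```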

toil.test.lib.dockerTest.slow(test_item)[source]

Use this decorator to identify tests that are slow and not critical. Skip if TOIL_TEST_QUICK is true.

Parameters:

test_item (MT)

Return type:

MT
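A sketch of how such a guard can work, keyed on the same TOIL_TEST_QUICK variable; this is illustrative, not Toil’s actual implementation:

```python
import os
import unittest

def slow_sketch(test_item):
    """Skip a test item when TOIL_TEST_QUICK is 'true' (case-insensitive),
    mirroring the documented behavior of toil's slow decorator."""
    quick = os.environ.get("TOIL_TEST_QUICK", "").lower() == "true"
    return unittest.skipIf(quick, "TOIL_TEST_QUICK is true")(test_item)

os.environ["TOIL_TEST_QUICK"] = "true"

@slow_sketch
def slow_test(self):
    pass

# With TOIL_TEST_QUICK=true at decoration time, unittest marks the item
# as skipped: slow_test.__unittest_skip__ is now True.
```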

toil.test.lib.dockerTest.logger
class toil.test.lib.dockerTest.DockerTest(methodName='runTest')[source]

Bases: toil.test.ToilTest

Tests dockerCall and ensures no containers are left around. When running tests you may optionally set the TOIL_TEST_TEMP environment variable to the path of a directory where you want temporary test files to be placed. The directory will be created if it doesn’t exist. The path may be relative, in which case it will be assumed to be relative to the project root. If TOIL_TEST_TEMP is not defined, temporary files and directories will be created in the system’s default location for such files, and any temporary files or directories left over from tests will be removed automatically during tear down. Otherwise, left-over files will not be removed.

setUp()[source]

Hook method for setting up the test fixture before exercising it.

testDockerClean(caching=False, detached=True, rm=True, deferParam=None)[source]

Run the test container that creates a file in the work dir, and sleeps for 5 minutes. Ensure that the calling job gets SIGKILLed after a minute, leaving behind the spooky/ghost/zombie container. Ensure that the container is killed on batch system shutdown (through the deferParam mechanism).

testDockerClean_CRx_FORGO()[source]
testDockerClean_CRx_STOP()[source]
testDockerClean_CRx_RM()[source]
testDockerClean_CRx_None()[source]
testDockerClean_CxD_FORGO()[source]
testDockerClean_CxD_STOP()[source]
testDockerClean_CxD_RM()[source]
testDockerClean_CxD_None()[source]
testDockerClean_Cxx_FORGO()[source]
testDockerClean_Cxx_STOP()[source]
testDockerClean_Cxx_RM()[source]
testDockerClean_Cxx_None()[source]
testDockerClean_xRx_FORGO()[source]
testDockerClean_xRx_STOP()[source]
testDockerClean_xRx_RM()[source]
testDockerClean_xRx_None()[source]
testDockerClean_xxD_FORGO()[source]
testDockerClean_xxD_STOP()[source]
testDockerClean_xxD_RM()[source]
testDockerClean_xxD_None()[source]
testDockerClean_xxx_FORGO()[source]
testDockerClean_xxx_STOP()[source]
testDockerClean_xxx_RM()[source]
testDockerClean_xxx_None()[source]
testDockerPipeChain(caching=False)[source]

Test for the piping API for dockerCall(). Using this API (activated when a list of argument lists is given as parameters), commands are piped together into a chain. For example, parameters=[ ['printf', 'x\n y\n'], ['wc', '-l'] ] should execute: printf 'x\n y\n' | wc -l
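A pure-Python illustration of how the list-of-lists parameters map onto a shell pipeline; no container is started here, and shell quoting of the printf argument is elided for readability:

```python
# Each inner list is one command; the piping API runs them chained in order.
parameters = [["printf", "x\\n y\\n"], ["wc", "-l"]]

# The pipeline this corresponds to inside the container
# (quoting of the printf argument elided):
pipeline = " | ".join(" ".join(cmd) for cmd in parameters)
# pipeline == "printf x\\n y\\n | wc -l"
```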

testDockerPipeChainErrorDetection(caching=False)[source]

By default, executing cmd1 | cmd2 | … | cmdN will only return an error if cmdN fails. This can lead to all manner of errors being silently missed. This test makes sure that the piping API for dockerCall() throws an exception if a non-last command in the chain fails.

testNonCachingDockerChain()[source]
testNonCachingDockerChainErrorDetection()[source]
testDockerLogs(stream=False, demux=False)[source]

Test for the different log outputs when detach=False.

testDockerLogs_Stream()[source]
testDockerLogs_Demux()[source]
testDockerLogs_Demux_Stream()[source]