toil.jobStores.abstractJobStore

Attributes

logger

Exceptions

ProxyConnectionError

Dummy class.

LocatorException

Base exception class for all locator exceptions.

InvalidImportExportUrlException

Indicates that the URL given for an import or export is invalid.

UnimplementedURLException

Indicates that no available job store implementation supports the requested operation on the given URL.

NoSuchJobException

Indicates that the specified job does not exist.

ConcurrentFileModificationException

Indicates that multiple processes attempted to modify the file at once.

NoSuchFileException

Indicates that the specified file does not exist.

NoSuchJobStoreException

Indicates that the specified job store does not exist.

JobStoreExistsException

Indicates that the specified job store already exists.

Classes

AbstractJobStore

Represents the physical storage for the jobs and files in a Toil workflow.

JobStoreSupport

A mostly fake JobStore to access URLs not really associated with real job stores.

Module Contents

toil.jobStores.abstractJobStore.logger
exception toil.jobStores.abstractJobStore.ProxyConnectionError[source]

Bases: BaseException

Dummy class.

exception toil.jobStores.abstractJobStore.LocatorException(error_msg, locator, prefix=None)[source]

Bases: Exception

Base exception class for all locator exceptions, for example job store or AWS bucket exceptions raised when the target already exists.

Parameters:
  • error_msg (str)

  • locator (str)

  • prefix (Optional[str])

exception toil.jobStores.abstractJobStore.InvalidImportExportUrlException(url)[source]

Bases: Exception

Indicates that the URL given for an import or export is invalid.

Parameters:

url (urllib.parse.ParseResult)

exception toil.jobStores.abstractJobStore.UnimplementedURLException(url, operation)[source]

Bases: RuntimeError

Indicates that no available job store implementation supports the requested operation on the given URL.

Parameters:
  • url (urllib.parse.ParseResult)

  • operation (str)

exception toil.jobStores.abstractJobStore.NoSuchJobException(jobStoreID)[source]

Bases: Exception

Indicates that the specified job does not exist.

Parameters:

jobStoreID (toil.fileStores.FileID)

exception toil.jobStores.abstractJobStore.ConcurrentFileModificationException(jobStoreFileID)[source]

Bases: Exception

Indicates that multiple processes attempted to modify the file at once.

Parameters:

jobStoreFileID (toil.fileStores.FileID)

exception toil.jobStores.abstractJobStore.NoSuchFileException(jobStoreFileID, customName=None, *extra)[source]

Bases: Exception

Indicates that the specified file does not exist.

Parameters:
  • jobStoreFileID (toil.fileStores.FileID)

  • customName (Optional[str])

  • extra (Any)

exception toil.jobStores.abstractJobStore.NoSuchJobStoreException(locator, prefix)[source]

Bases: LocatorException

Indicates that the specified job store does not exist.

Parameters:
  • locator (str)

  • prefix (str)

exception toil.jobStores.abstractJobStore.JobStoreExistsException(locator, prefix)[source]

Bases: LocatorException

Indicates that the specified job store already exists.

Parameters:
  • locator (str)

  • prefix (str)

class toil.jobStores.abstractJobStore.AbstractJobStore(locator)[source]

Bases: abc.ABC

Represents the physical storage for the jobs and files in a Toil workflow.

JobStores are responsible for storing toil.job.JobDescription objects (which relate jobs to each other) and files.

Actual toil.job.Job objects are stored in files, referenced by JobDescriptions. All the non-file CRUD methods the JobStore provides deal in JobDescriptions and not full, executable Jobs.

To actually get ahold of a toil.job.Job, use toil.job.Job.loadJob() with a JobStore and the relevant JobDescription.

Parameters:

locator (str)
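
For example, a minimal sketch of turning a JobDescription back into an executable Job. Here job_store is assumed to be some concrete AbstractJobStore implementation and job_desc a JobDescription obtained from it; the positional argument order follows the description above:

    from toil.job import Job

    def load_executable_job(job_store, job_desc):
        # JobDescriptions only describe jobs; the full Job body lives in a
        # file referenced by the description and is loaded on demand.
        return Job.loadJob(job_store, job_desc)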

initialize(config)[source]

Initialize this job store.

Create the physical storage for this job store, allocate a workflow ID and persist the given Toil configuration to the store.

Parameters:

config (toil.common.Config) – the Toil configuration to initialize this job store with. The given configuration will be updated with the newly allocated workflow ID.

Raises:

JobStoreExistsException – if the physical storage for this job store already exists

Return type:

None

writeConfig()[source]
Return type:

None

write_config()[source]

Persists the value of the AbstractJobStore.config attribute to the job store, so that it can be retrieved later by other instances of this class.

Return type:

None

resume()[source]

Connect this instance to the physical storage it represents and load the Toil configuration into the AbstractJobStore.config attribute.

Raises:

NoSuchJobStoreException – if the physical storage for this job store doesn’t exist

Return type:

None
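
A minimal sketch of the initialize/resume split, assuming job_store is a freshly constructed concrete job store and config is a toil.common.Config. This is not the canonical Toil start-up path, just an illustration of the two methods:

    from toil.jobStores.abstractJobStore import JobStoreExistsException

    def initialize_or_resume(job_store, config):
        try:
            # Creates the physical storage, allocates a workflow ID and
            # persists config into the store.
            job_store.initialize(config)
        except JobStoreExistsException:
            # The storage already exists: attach to it instead and load its
            # saved configuration into job_store.config.
            job_store.resume()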

property config: toil.common.Config

Return the Toil configuration associated with this job store.

Return type:

toil.common.Config

property locator: str

Get the locator that defines the job store, which can be used to connect to it.

Return type:

str

rootJobStoreIDFileName = 'rootJobStoreID'
setRootJob(rootJobStoreID)[source]

Set the root job of the workflow backed by this job store.

Parameters:

rootJobStoreID (toil.fileStores.FileID)

Return type:

None

set_root_job(job_id)[source]

Set the root job of the workflow backed by this job store.

Parameters:

job_id (toil.fileStores.FileID) – The ID of the job to set as root

Return type:

None

loadRootJob()[source]
Return type:

toil.job.JobDescription

load_root_job()[source]

Loads the JobDescription for the root job in the current job store.

Raises:

toil.job.JobException – If no root job is set or if the root job doesn’t exist in this job store

Returns:

The root job.

Return type:

toil.job.JobDescription

createRootJob(desc)[source]
Parameters:

desc (toil.job.JobDescription)

Return type:

toil.job.JobDescription

create_root_job(job_description)[source]

Create the given JobDescription and set it as the root job in this job store.

Parameters:

job_description (toil.job.JobDescription) – JobDescription to save and make the root job.

Return type:

toil.job.JobDescription

getRootJobReturnValue()[source]
Return type:

Any

get_root_job_return_value()[source]

Parse the return value from the root job.

Raises an exception if the root job hasn’t fulfilled its promise yet.

Return type:

Any
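
A minimal sketch of the root-job round trip using only the methods above; job_store is a concrete job store and root_desc a toil.job.JobDescription built elsewhere:

    def set_and_inspect_root(job_store, root_desc):
        # Persist the description and record it as the workflow's root job.
        job_store.create_root_job(root_desc)

        # Later (for example on restart), recover the root job's description.
        loaded = job_store.load_root_job()

        # Once the root job's promise has been fulfilled, its return value
        # can be read; this raises if the workflow has not finished yet.
        value = job_store.get_root_job_return_value()
        return loaded, value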

importFile(srcUrl: str, sharedFileName: str, hardlink: bool = False, symlink: bool = True) None[source]
importFile(srcUrl: str, sharedFileName: None = None, hardlink: bool = False, symlink: bool = True) toil.fileStores.FileID
import_file(src_uri: str, shared_file_name: str, hardlink: bool = False, symlink: bool = True) None[source]
import_file(src_uri: str, shared_file_name: None = None, hardlink: bool = False, symlink: bool = True) toil.fileStores.FileID

Imports the file at the given URL into the job store. The ID of the newly imported file is returned. If a shared file name is provided, the file will be imported under that name and None is returned. If an executable file on the local filesystem is uploaded, its executability will be preserved when it is downloaded.

Currently supported schemes are:

Raises FileNotFoundError if the file does not exist.

Parameters:
  • src_uri (str) – URL that points to a file or object in the storage mechanism of a supported URL scheme, e.g. a blob in an AWS S3 bucket. It must be a file, not a directory or prefix.

  • shared_file_name (str) – Optional name to assign to the imported file within the job store

Returns:

The jobStoreFileID of the imported file or None if shared_file_name was given

Return type:

toil.fileStores.FileID or None

exportFile(jobStoreFileID, dstUrl)[source]
Parameters:
  • jobStoreFileID (str)

  • dstUrl (str)

Return type:

None

export_file(file_id, dst_uri)[source]

Exports the file to the destination pointed at by the destination URL. The exported file will be executable if and only if it was originally uploaded from an executable file on the local filesystem.

Refer to AbstractJobStore.import_file() documentation for currently supported URL schemes.

Note that the helper method _exportFile is used to read from the source and write to destination. To implement any optimizations that circumvent this, the _exportFile method should be overridden by subclasses of AbstractJobStore.

Parameters:
  • file_id (str) – The id of the file in the job store that should be exported.

  • dst_uri (str) – URL that points to a file or object in the storage mechanism of a supported URL scheme, e.g. a blob in an AWS S3 bucket. May also be a local path.

Return type:

None
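
A minimal sketch that round-trips a local file through the job store with import_file() and export_file(); the file:// URLs are illustrative:

    def copy_through_job_store(job_store):
        # Import returns a FileID because no shared file name was requested.
        file_id = job_store.import_file("file:///tmp/input.txt")

        # Export writes the stored file to the destination URL; executability
        # is preserved from the original upload.
        job_store.export_file(file_id, "file:///tmp/output.txt")
        return file_id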

classmethod url_exists(src_uri)[source]

Return True if the file at the given URI exists, and False otherwise.

May raise an error if file existence cannot be determined.

Parameters:

src_uri (str) – URL that points to a file or object in the storage mechanism of a supported URL scheme, e.g. a blob in an AWS S3 bucket.

Return type:

bool

classmethod get_size(src_uri)[source]

Get the size in bytes of the file at the given URL, or None if it cannot be obtained.

Parameters:

src_uri (str) – URL that points to a file or object in the storage mechanism of a supported URL scheme, e.g. a blob in an AWS S3 bucket.

Return type:

Optional[int]

classmethod get_is_directory(src_uri)[source]

Return True if the thing at the given URL is a directory, and False if it is a file. The URL may or may not end in ‘/’.

Parameters:

src_uri (str)

Return type:

bool

classmethod list_url(src_uri)[source]

List the directory at the given URL. Returned path components can be joined with ‘/’ onto the passed URL to form new URLs. Those that end in ‘/’ correspond to directories. The provided URL may or may not end with ‘/’.

Currently supported schemes are:

Parameters:

src_uri (str) – URL that points to a directory or prefix in the storage mechanism of a supported URL scheme, e.g. a prefix in an AWS S3 bucket.

Returns:

A list of URL components in the given directory, already URL-encoded.

Return type:

list[str]

classmethod read_from_url(src_uri, writable)[source]

Read the given URL and write its content into the given writable stream.

Raises FileNotFoundError if the URL doesn’t exist.

Returns:

The size of the file in bytes and whether the executable permission bit is set

Parameters:
  • src_uri (str)

  • writable (IO[bytes])

Return type:

tuple[int, bool]

classmethod open_url(src_uri)[source]

Read from the given URI.

Raises FileNotFoundError if the URL doesn’t exist.

Has a readable stream interface, unlike read_from_url() which takes a writable stream.

Parameters:

src_uri (str)

Return type:

IO[bytes]
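
A minimal sketch that walks a URL tree with the classmethods above. It assumes that calling them on AbstractJobStore itself dispatches to whichever implementation handles the URL's scheme:

    from toil.jobStores.abstractJobStore import AbstractJobStore

    def walk(src_uri):
        # Yield (url, size) pairs for every file reachable under src_uri.
        if AbstractJobStore.get_is_directory(src_uri):
            for part in AbstractJobStore.list_url(src_uri):
                # Components ending in '/' are subdirectories; join with '/'
                # to form the child URL as described above.
                child = src_uri.rstrip("/") + "/" + part
                yield from walk(child)
        elif AbstractJobStore.url_exists(src_uri):
            yield src_uri, AbstractJobStore.get_size(src_uri)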

abstract destroy()[source]

The inverse of initialize(), this method deletes the physical storage represented by this instance. While not atomic, this method is at least idempotent, as a means to counteract potential issues with eventual consistency exhibited by the underlying storage mechanisms. This means that if the method fails (raises an exception), it may (and should) be invoked again. If the underlying storage mechanism is eventually consistent, even a successful invocation is not an ironclad guarantee that the physical storage vanished completely and immediately. A successful invocation only guarantees that the deletion will eventually happen. It is therefore recommended not to immediately reuse the same job store location for a new Toil workflow.

Return type:

None

getEnv()[source]
Return type:

dict[str, str]

get_env()[source]

Returns a dictionary of environment variables that this job store requires to be set in order to function properly on a worker.

Return type:

dict[str,str]

clean(jobCache=None)[source]

Function to clean up the state of a job store after a restart.

Fixes jobs that might have been partially updated. Resets the try counts and removes jobs that are not successors of the current root job.

Parameters:

jobCache (Optional[dict[Union[str, toil.job.TemporaryID], toil.job.JobDescription]]) – if given, it must be a dict from job ID keys to JobDescription object values. Jobs will be loaded from the cache (which can be downloaded from the job store in a batch) instead of piecemeal when recursed into.

Return type:

toil.job.JobDescription

assignID(jobDescription)[source]
Parameters:

jobDescription (toil.job.JobDescription)

Return type:

None

abstract assign_job_id(job_description)[source]

Get a new jobStoreID to be used by the described job, and assign it to the JobDescription.

Files associated with the assigned ID will be accepted even if the JobDescription has never been created or updated.

Parameters:

job_description (toil.job.JobDescription) – The JobDescription to give an ID to

Return type:

None

batch()[source]

If supported by the batch system, calls to create() with this context manager active will be performed in a batch after the context manager is released.

Return type:

collections.abc.Iterator[None]

create(jobDescription)[source]
Parameters:

jobDescription (toil.job.JobDescription)

Return type:

toil.job.JobDescription

abstract create_job(job_description)[source]

Writes the given JobDescription to the job store. The job must have an ID assigned already.

Must call jobDescription.pre_update_hook()

Returns:

The JobDescription passed.

Return type:

toil.job.JobDescription

Parameters:

job_description (toil.job.JobDescription)
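
A minimal sketch of creating several jobs with assign_job_id(), batch() and create_job(); descriptions is assumed to be an iterable of toil.job.JobDescription objects built elsewhere:

    def create_jobs(job_store, descriptions):
        descriptions = list(descriptions)
        for desc in descriptions:
            # An ID must be assigned before the description can be written.
            job_store.assign_job_id(desc)

        # Where the implementation supports it, creations made while this
        # context manager is active are flushed together when it exits.
        with job_store.batch():
            for desc in descriptions:
                job_store.create_job(desc)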

exists(jobStoreID)[source]
Parameters:

jobStoreID (str)

Return type:

bool

abstract job_exists(job_id)[source]

Indicates whether a description of the job with the specified jobStoreID exists in the job store.

Return type:

bool

Parameters:

job_id (str)

publicUrlExpiration
getPublicUrl(fileName)[source]
Parameters:

fileName (str)

Return type:

str

abstract get_public_url(file_name)[source]

Returns a publicly accessible URL to the given file in the job store. The returned URL may expire as early as 1h after it has been returned. Throws an exception if the file does not exist.

Parameters:

file_name (str) – the jobStoreFileID of the file to generate a URL for

Raises:

NoSuchFileException – if the specified file does not exist in this job store

Return type:

str

getSharedPublicUrl(sharedFileName)[source]
Parameters:

sharedFileName (str)

Return type:

str

abstract get_shared_public_url(shared_file_name)[source]

Differs from getPublicUrl() in that this method is for generating URLs for shared files written by writeSharedFileStream().

Returns a publicly accessible URL to the given file in the job store. The returned URL starts with ‘http:’, ‘https:’ or ‘file:’. The returned URL may expire as early as 1h after it has been returned. Throws an exception if the file does not exist.

Parameters:

shared_file_name (str) – The name of the shared file to generate a publicly accessible URL for.

Raises:

NoSuchFileException – raised if the specified file does not exist in the store

Return type:

str

load(jobStoreID)[source]
Parameters:

jobStoreID (str)

Return type:

toil.job.JobDescription

abstract load_job(job_id)[source]

Loads the description of the job referenced by the given ID, assigns it the job store’s config, and returns it.

May declare the job to have failed (see toil.job.JobDescription.setupJobAfterFailure()) if there is evidence of a failed update attempt.

Parameters:

job_id (str) – the ID of the job to load

Raises:

NoSuchJobException – if there is no job with the given ID

Return type:

toil.job.JobDescription

update(jobDescription)[source]
Parameters:

jobDescription (toil.job.JobDescription)

Return type:

None

abstract update_job(job_description)[source]

Persists changes to the state of the given JobDescription in this store atomically.

Must call jobDescription.pre_update_hook()

Parameters:

job_description (toil.job.JobDescription)

Return type:

None

delete(jobStoreID)[source]
Parameters:

jobStoreID (str)

Return type:

None

abstract delete_job(job_id)[source]

Removes the JobDescription from the store atomically. You may not subsequently call load(), write(), update(), etc. with the same jobStoreID or any JobDescription bearing it.

This operation is idempotent, i.e. deleting a job twice or deleting a non-existent job will succeed silently.

Parameters:

job_id (str) – the ID of the job to delete from this job store

Return type:

None

abstract jobs()[source]

Best-effort attempt to return an iterator over the JobDescriptions for all jobs in the store. The iterator may not return all jobs and may also contain orphaned jobs that have already finished successfully and should not be rerun. To guarantee you get any and all jobs that can be run, construct a more expensive ToilState object instead.

Returns:

Returns iterator on jobs in the store. The iterator may or may not contain all jobs and may contain invalid jobs

Return type:

Iterator[toil.job.JobDescription]
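
A minimal sketch of a best-effort scan over whatever JobDescriptions the store reports, subject to the caveats above (the iterator can miss jobs or include orphans that already finished):

    def count_reported_jobs(job_store):
        # Each yielded item is a toil.job.JobDescription, not a full Job.
        return sum(1 for _ in job_store.jobs())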

writeFile(localFilePath, jobStoreID=None, cleanup=False)[source]
Parameters:
  • localFilePath (str)

  • jobStoreID (Optional[str])

  • cleanup (bool)

Return type:

str

abstract write_file(local_path, job_id=None, cleanup=False)[source]

Takes a file (as a path) and places it in this job store. Returns an ID that can be used to retrieve the file at a later time. The file is written in an atomic manner. It will not appear in the jobStore until the write has successfully completed.

Parameters:
  • local_path (str) – the path to the local file that will be uploaded to the job store. The last path component (basename of the file) will remain associated with the file in the file store, if supported, so that the file can be searched for by name or name glob.

  • job_id (str) – the id of a job, or None. If specified, the file may be associated with that job in a job-store-specific way. This may influence the returned ID.

  • cleanup (bool) – Whether to attempt to delete the file when the job whose jobStoreID was given as jobStoreID is deleted with jobStore.delete(job). If jobStoreID was not given, does nothing.

Raises:
  • ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method

  • NoSuchJobException – if the job specified via job_id does not exist

Return type:

str

FIXME: some implementations may not raise this

Returns:

an ID referencing the newly created file, which can be used to read the file in the future.

Return type:

str

Parameters:
  • local_path (str)

  • job_id (Optional[str])

  • cleanup (bool)

writeFileStream(jobStoreID=None, cleanup=False, basename=None, encoding=None, errors=None)[source]
Parameters:
  • jobStoreID (Optional[str])

  • cleanup (bool)

  • basename (Optional[str])

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

ContextManager[tuple[IO[bytes], str]]

abstract write_file_stream(job_id=None, cleanup=False, basename=None, encoding=None, errors=None)[source]

Similar to writeFile, but returns a context manager yielding a tuple of 1) a file handle which can be written to and 2) the ID of the resulting file in the job store. The yielded file handle does not need to and should not be closed explicitly. The file is written in an atomic manner. It will not appear in the jobStore until the write has successfully completed.

Parameters:
  • job_id (str) – the id of a job, or None. If specified, the file may be associated with that job in a job-store-specific way. This may influence the returned ID.

  • cleanup (bool) – Whether to attempt to delete the file when the job whose jobStoreID was given as jobStoreID is deleted with jobStore.delete(job). If jobStoreID was not given, does nothing.

  • basename (str) – If supported by the implementation, use the given file basename so that when searching the job store with a query matching that basename, the file will be detected.

  • encoding (str) – the name of the encoding used to encode the file. Encodings are the same as for encode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Raises:
  • ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method

  • NoSuchJobException – if the job specified via job_id does not exist

Return type:

collections.abc.Iterator[tuple[IO[bytes], str]]

FIXME: some implementations may not raise this

Returns:

a context manager yielding a file handle which can be written to and an ID that references the newly created file and can be used to read the file in the future.

Return type:

Iterator[Tuple[IO[bytes], str]]

Parameters:
  • job_id (Optional[str])

  • cleanup (bool)

  • basename (Optional[str])

  • encoding (Optional[str])

  • errors (Optional[str])
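
A minimal sketch of the two write paths: uploading an existing local file with write_file(), and streaming bytes directly into a new stored file with write_file_stream(). The payload is illustrative:

    def store_outputs(job_store, local_path, job_id=None):
        # Upload a file that already exists on local disk.
        file_id = job_store.write_file(local_path, job_id=job_id, cleanup=True)

        # Stream content straight into the store; the new file's ID is
        # yielded alongside the writable handle.
        with job_store.write_file_stream(job_id=job_id, cleanup=True) as (handle, stream_id):
            handle.write(b"streamed payload\n")

        return file_id, stream_id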

getEmptyFileStoreID(jobStoreID=None, cleanup=False, basename=None)[source]
Parameters:
  • jobStoreID (Optional[str])

  • cleanup (bool)

  • basename (Optional[str])

Return type:

str

abstract get_empty_file_store_id(job_id=None, cleanup=False, basename=None)[source]

Creates an empty file in the job store and returns its ID. A call to fileExists(getEmptyFileStoreID(jobStoreID)) will return True.

Parameters:
  • job_id (str) – the id of a job, or None. If specified, the file may be associated with that job in a job-store-specific way. This may influence the returned ID.

  • cleanup (bool) – Whether to attempt to delete the file when the job whose jobStoreID was given as jobStoreID is deleted with jobStore.delete(job). If jobStoreID was not given, does nothing.

  • basename (str) – If supported by the implementation, use the given file basename so that when searching the job store with a query matching that basename, the file will be detected.

Returns:

a jobStoreFileID that references the newly created file and can be used to reference the file in the future.

Return type:

str

readFile(jobStoreFileID, localFilePath, symlink=False)[source]
Parameters:
  • jobStoreFileID (str)

  • localFilePath (str)

  • symlink (bool)

Return type:

None

abstract read_file(file_id, local_path, symlink=False)[source]

Copies or hard links the file referenced by jobStoreFileID to the given local file path. The version will be consistent with the last copy of the file written/updated. If the file in the job store is later modified via updateFile or updateFileStream, it is implementation-defined whether those writes will be visible at localFilePath. The file is copied in an atomic manner. It will not appear in the local file system until the copy has completed.

The file at the given local path may not be modified after this method returns!

Note! Implementations of readFile need to respect/provide the executable attribute on FileIDs.

Parameters:
  • file_id (str) – ID of the file to be copied

  • local_path (str) – the local path indicating where to place the contents of the given file in the job store

  • symlink (bool) – whether the reader can tolerate a symlink. If set to true, the job store may create a symlink instead of a full copy of the file or a hard link.

Return type:

None

readFileStream(jobStoreFileID, encoding=None, errors=None)[source]
Parameters:
  • jobStoreFileID (str)

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

Union[ContextManager[IO[bytes]], ContextManager[IO[str]]]

read_file_stream(file_id: toil.fileStores.FileID | str, encoding: Literal[None] = None, errors: str | None = None) ContextManager[IO[bytes]][source]
read_file_stream(file_id: toil.fileStores.FileID | str, encoding: str, errors: str | None = None) ContextManager[IO[str]]

Similar to readFile, but returns a context manager yielding a file handle which can be read from. The yielded file handle does not need to and should not be closed explicitly.

Parameters:
  • file_id (str) – ID of the file to get a readable file handle for

  • encoding (str) – the name of the encoding used to decode the file. Encodings are the same as for decode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Returns:

a context manager yielding a file handle which can be read from

Return type:

Iterator[Union[IO[bytes], IO[str]]]
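
A minimal sketch of the two read paths: materializing a stored file on local disk with read_file(), and reading it directly as a text stream with read_file_stream():

    def fetch_file(job_store, file_id, local_path):
        # Copy (or link) the file to local_path; symlink=True lets the
        # implementation avoid a full copy if the caller can tolerate it.
        job_store.read_file(file_id, local_path, symlink=True)

        # Or read it directly; passing an encoding yields a text-mode handle.
        with job_store.read_file_stream(file_id, encoding="utf-8") as handle:
            return handle.read()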

deleteFile(jobStoreFileID)[source]
Parameters:

jobStoreFileID (str)

Return type:

None

abstract delete_file(file_id)[source]

Deletes the file with the given ID from this job store. This operation is idempotent, i.e. deleting a file twice or deleting a non-existent file will succeed silently.

Parameters:

file_id (str) – ID of the file to delete

Return type:

None

fileExists(jobStoreFileID)[source]

Determine whether a file exists in this job store.

Parameters:

jobStoreFileID (str)

Return type:

bool

abstract file_exists(file_id)[source]

Determine whether a file exists in this job store.

Parameters:

file_id (str) – an ID referencing the file to be checked

Return type:

bool

getFileSize(jobStoreFileID)[source]

Get the size of the given file in bytes.

Parameters:

jobStoreFileID (str)

Return type:

int

abstract get_file_size(file_id)[source]

Get the size of the given file in bytes, or 0 if it does not exist when queried.

Note that job stores which encrypt files might return overestimates of file sizes, since the encrypted file may have been padded to the nearest block, augmented with an initialization vector, etc.

Parameters:

file_id (str) – an ID referencing the file to be checked

Return type:

int

updateFile(jobStoreFileID, localFilePath)[source]

Replaces the existing version of a file in the job store.

Parameters:
  • jobStoreFileID (str)

  • localFilePath (str)

Return type:

None

abstract update_file(file_id, local_path)[source]

Replaces the existing version of a file in the job store.

Throws an exception if the file does not exist.

Parameters:
  • file_id (str) – the ID of the file in the job store to be updated

  • local_path (str) – the local path to a file that will overwrite the current version in the job store

Raises:

NoSuchFileException – if the specified file does not exist

Return type:

None

updateFileStream(jobStoreFileID, encoding=None, errors=None)[source]
Parameters:
  • jobStoreFileID (str)

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

ContextManager[IO[Any]]

abstract update_file_stream(file_id, encoding=None, errors=None)[source]

Replaces the existing version of a file in the job store. Similar to writeFile, but returns a context manager yielding a file handle which can be written to. The yielded file handle does not need to and should not be closed explicitly.

Parameters:
  • file_id (str) – the ID of the file in the job store to be updated

  • encoding (str) – the name of the encoding used to encode the file. Encodings are the same as for encode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Raises:
Return type:

collections.abc.Iterator[IO[Any]]

sharedFileNameRegex
writeSharedFileStream(sharedFileName, isProtected=None, encoding=None, errors=None)[source]
Parameters:
  • sharedFileName (str)

  • isProtected (Optional[bool])

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

ContextManager[IO[bytes]]

abstract write_shared_file_stream(shared_file_name, encrypted=None, encoding=None, errors=None)[source]

Returns a context manager yielding a writable file handle to the global file referenced by the given name. The file will be created in an atomic manner.

Parameters:
  • shared_file_name (str) – A file name matching AbstractJobStore.fileNameRegex, unique within this job store

  • encrypted (bool) – True if the file must be encrypted, None if it may be encrypted or False if it must be stored in the clear.

  • encoding (str) – the name of the encoding used to encode the file. Encodings are the same as for encode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Raises:

ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method

Returns:

a context manager yielding a writable file handle

Return type:

Iterator[IO[bytes]]

readSharedFileStream(sharedFileName, encoding=None, errors=None)[source]
Parameters:
  • sharedFileName (str)

  • encoding (Optional[str])

  • errors (Optional[str])

Return type:

ContextManager[IO[bytes]]

abstract read_shared_file_stream(shared_file_name, encoding=None, errors=None)[source]

Returns a context manager yielding a readable file handle to the global file referenced by the given name.

Parameters:
  • shared_file_name (str) – A file name matching AbstractJobStore.fileNameRegex, unique within this job store

  • encoding (str) – the name of the encoding used to decode the file. Encodings are the same as for decode(). Defaults to None which represents binary mode.

  • errors (str) – an optional string that specifies how encoding errors are to be handled. Errors are the same as for open(). Defaults to ‘strict’ when an encoding is specified.

Returns:

a context manager yielding a readable file handle

Return type:

Iterator[IO[bytes]]
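
A minimal sketch of writing and reading back a named shared file, for example a small piece of workflow-wide metadata. The shared file name is illustrative and assumed to satisfy the job store's shared file name pattern:

    def save_and_load_marker(job_store):
        with job_store.write_shared_file_stream("marker.txt", encoding="utf-8") as handle:
            handle.write("workflow marker\n")

        with job_store.read_shared_file_stream("marker.txt", encoding="utf-8") as handle:
            return handle.read()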

writeStatsAndLogging(statsAndLoggingString)[source]
Parameters:

statsAndLoggingString (str)

Return type:

None

abstract write_logs(msg)[source]

Stores a message as a log in the job store.

Parameters:

msg (str) – the string to be written

Raises:

ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method

Return type:

None

readStatsAndLogging(callback, readAll=False)[source]
Parameters:
  • callback (Callable[Ellipsis, Any])

  • readAll (bool)

Return type:

int

abstract read_logs(callback, read_all=False)[source]

Reads logs accumulated by the write_logs() method. For each log this method calls the given callback function with the message as an argument (rather than returning logs directly, this method must be supplied with a callback which will process log messages).

Only unread logs will be read unless the read_all parameter is set.

Parameters:
  • callback (Callable) – a function to be applied to each of the stats file handles found

  • read_all (bool) – a boolean indicating whether to read the already processed stats files in addition to the unread stats files

Raises:

ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method

Returns:

the number of stats files processed

Return type:

int
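
A minimal sketch of storing a log message and then draining unread logs through the callback interface; the message text is illustrative:

    def collect_logs(job_store):
        job_store.write_logs("step finished")

        collected = []

        def callback(log):
            # The callback is handed each log in turn; depending on the
            # implementation this may be a message string or a readable
            # stats file handle, so handle both defensively here.
            collected.append(log.read() if hasattr(log, "read") else log)

        # Only unread logs are visited unless read_all=True is passed; the
        # return value is the number of stats/log files processed.
        processed = job_store.read_logs(callback, read_all=False)
        return processed, collected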

write_leader_pid()[source]

Write the pid of this process to a file in the job store.

Overwriting the current contents of pid.log is a feature, not a bug of this method. Other methods will rely on always having the most current pid available. So far there is no reason to store any old pids.

Return type:

None

read_leader_pid()[source]

Read the pid of the leader process from a file in the job store.

Raises:

NoSuchFileException – If the PID file doesn’t exist.

Return type:

int

write_leader_node_id()[source]

Write the leader node id to the job store. This should only be called by the leader.

Return type:

None

read_leader_node_id()[source]

Read the leader node id stored in the job store.

Raises:

NoSuchFileException – If the node ID file doesn’t exist.

Return type:

str

write_kill_flag(kill=False)[source]

Write a file inside the job store that serves as a kill flag.

The initialized file contains the characters “NO”. This should only be changed when the user runs the “toil kill” command.

Changing this file to a “YES” triggers a kill of the leader process. The workers are expected to be cleaned up by the leader.

Parameters:

kill (bool)

Return type:

None

read_kill_flag()[source]

Read the kill flag from the job store, and return True if the leader has been killed, or False otherwise.

Return type:

bool
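
A minimal sketch of the leader bookkeeping helpers above, roughly as leader-side status code might use them:

    def leader_status(job_store):
        # Record this process as the leader; overwriting any previous pid
        # and node id is intentional.
        job_store.write_leader_pid()
        job_store.write_leader_node_id()

        # The kill flag starts out as "NO"; write_kill_flag(kill=True) flips
        # it, which the leader notices via read_kill_flag().
        return {
            "pid": job_store.read_leader_pid(),
            "node": job_store.read_leader_node_id(),
            "killed": job_store.read_kill_flag(),
        }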

default_caching()[source]

The job store’s preference as to whether it likes caching or doesn’t care about it. Some job stores benefit from caching; however, on some local configurations it can be flaky.

see https://github.com/DataBiosphere/toil/issues/4218

Return type:

bool

class toil.jobStores.abstractJobStore.JobStoreSupport(locator)[source]

Bases: AbstractJobStore

A mostly fake JobStore to access URLs not really associated with real job stores.

Parameters:

locator (str)