The job store interface
The job store interface is an abstraction layer that hides the specific details of file storage, for example standard file systems, S3, etc. The API is implemented to support a given file store, e.g. S3. Implement this API to support a new file store.
Represents the physical storage for the jobs and files in a Toil workflow.
Create an instance of the job store. The instance will not be fully functional until either initialize() or resume() is invoked. Note that the destroy() method may be invoked on the object with or without prior invocation of either of these two methods.
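The lifecycle described above can be sketched with a minimal in-memory stand-in. Everything here (class name, the dict used to simulate physical storage, the plain RuntimeErrors in place of Toil's exception types) is hypothetical and only illustrates the initialize/resume/destroy contract:

```python
class InMemoryJobStore:
    """Toy sketch of the job store lifecycle; not part of Toil's API."""

    _existing_stores = {}  # simulates the physical storage shared by all instances

    def __init__(self, locator):
        self.locator = locator
        self._ready = False  # not functional until initialize() or resume()

    def initialize(self, config):
        # Create the physical storage; fail if it already exists
        # (Toil raises JobStoreExistsException here).
        if self.locator in self._existing_stores:
            raise RuntimeError("job store already exists")
        self._existing_stores[self.locator] = {"config": config}
        self.config = config
        self._ready = True

    def resume(self):
        # Connect to existing physical storage; fail if it doesn't exist
        # (Toil raises NoSuchJobStoreException here).
        if self.locator not in self._existing_stores:
            raise RuntimeError("no such job store")
        self.config = self._existing_stores[self.locator]["config"]
        self._ready = True

    def destroy(self):
        # May be called with or without prior initialize()/resume();
        # idempotent: destroying a non-existent store succeeds silently.
        self._existing_stores.pop(self.locator, None)
        self._ready = False
```

A second instance pointed at the same locator can resume() after the first has called initialize(), mirroring how workers attach to the store that the leader created.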
Create the physical storage for this job store, allocate a workflow ID and persist the given Toil configuration to the store.
Parameters: config (toil.common.Config) – the Toil configuration to initialize this job store with. The given configuration will be updated with the newly allocated workflow ID.
Raises: JobStoreExistsException – if the physical storage for this job store already exists
Persists the value of the AbstractJobStore.config attribute to the job store, so that it can be retrieved later by other instances of this class.
Connect this instance to the physical storage it represents and load the Toil configuration into the AbstractJobStore.config attribute.
Raises: NoSuchJobStoreException – if the physical storage for this job store doesn’t exist
The Toil configuration associated with this job store.
Return type: toil.common.Config
Set the root job of the workflow backed by this job store
Parameters: rootJobStoreID (str) – The ID of the job to set as root
Loads the root job in the current job store.
Raises: toil.job.JobException – if no root job is set or if the root job doesn’t exist in this job store
Returns: The root job.
Return type: toil.jobGraph.JobGraph
Create a new job and set it as the root job in this job store
Return type: toil.jobGraph.JobGraph
Imports the file at the given URL into the job store. The ID of the newly imported file is returned. If a shared file name is provided, the file is imported under that name and None is returned.
Currently supported schemes are:
Returns: the jobStoreFileID of the imported file, or None if sharedFileName was given
Return type: FileID or None
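The dual return convention (a file ID for regular imports, None for shared-file imports) can be illustrated with a toy store. The class, method names, and file:-only URL handling below are assumptions for the sketch, not Toil's implementation:

```python
import uuid


class ToyStore:
    """Illustrates importFile's return convention: an ID, or None for shared files."""

    def __init__(self):
        self.files = {}   # fileID -> bytes
        self.shared = {}  # sharedFileName -> bytes

    def import_file(self, src_url, shared_file_name=None):
        data = self._read_url(src_url)
        if shared_file_name is not None:
            self.shared[shared_file_name] = data
            return None                  # shared-file imports return None
        file_id = str(uuid.uuid4())
        self.files[file_id] = data
        return file_id                   # regular imports return the new file ID

    @staticmethod
    def _read_url(url):
        # Only the file: scheme is handled in this sketch.
        assert url.startswith("file:")
        with open(url[len("file:"):], "rb") as f:
            return f.read()
```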
Exports the file to the destination pointed at by the given URL. Refer to the importFile() documentation for currently supported URL schemes.
Note that the helper method _exportFile is used to read from the source and write to the destination. To implement any optimizations that circumvent this, _exportFile should be overridden by subclasses of AbstractJobStore.
Returns the size of the file at the given URL.
The inverse of initialize(), this method deletes the physical storage represented by this instance. While not atomic, this method is at least idempotent, as a means to counteract potential issues with eventual consistency exhibited by the underlying storage mechanisms. This means that if the method fails (raises an exception), it may (and should) be invoked again. If the underlying storage mechanism is eventually consistent, even a successful invocation is not an ironclad guarantee that the physical storage vanished completely and immediately; it only guarantees that the deletion will eventually happen. It is therefore recommended not to immediately reuse the same job store location for a new Toil workflow.
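Because destroy() is idempotent, a caller can safely retry it until it succeeds. A hedged sketch of that retry pattern follows; the helper name and its parameters are invented here, and `job_store` stands in for any object exposing destroy():

```python
import time


def destroy_with_retries(job_store, attempts=3, delay=1.0):
    """Retry destroy() until it succeeds.

    Safe precisely because destroy() is idempotent: invoking it again
    after a failure (or after a partial success) is always allowed.
    This helper is illustrative, not part of Toil.
    """
    for attempt in range(attempts):
        try:
            job_store.destroy()
            return
        except Exception:
            if attempt == attempts - 1:
                raise              # out of retries; propagate the failure
            time.sleep(delay)      # back off before trying again
```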
Returns a dictionary of environment variables that this job store requires to be set in order to function properly on a worker.
Return type: dict[str,str]
Cleans up the state of a job store after a restart: fixes jobs that might have been partially updated, resets the try counts, and removes jobs that are not successors of the current root job.
Parameters: jobCache (dict[str,toil.jobGraph.JobGraph]) – if set, it must be a dict from job ID keys to JobGraph object values. Jobs will be loaded from the cache (which can be downloaded from the job store in a batch) instead of piecemeal when recursed into.
Creates a job graph from the given job node & writes it to the job store.
Return type: toil.jobGraph.JobGraph
Indicates whether the job with the specified jobStoreID exists in the job store
Return type: bool
Returns a publicly accessible URL to the given file in the job store. The returned URL may expire as early as 1h after it has been returned. Throws an exception if the file does not exist.
Parameters: fileName (str) – the jobStoreFileID of the file to generate a URL for
Raises: NoSuchFileException – if the specified file does not exist in this job store
Return type: str
Differs from getPublicUrl() in that this method generates URLs for shared files, which are written under a shared file name rather than a jobStoreFileID.
Returns a publicly accessible URL to the given file in the job store. The returned URL starts with ‘http:’, ‘https:’ or ‘file:’. The returned URL may expire as early as 1h after it has been returned. Throws an exception if the file does not exist.
Parameters: sharedFileName (str) – The name of the shared file to generate a publicly accessible URL for.
Raises: NoSuchFileException – raised if the specified file does not exist in the store
Return type: str
Loads the job referenced by the given ID and returns it.
Parameters: jobStoreID (str) – the ID of the job to load Raises: NoSuchJobException – if there is no job with the given ID Return type: toil.jobGraph.JobGraph
Persists the job in this store atomically.
Parameters: job (toil.jobGraph.JobGraph) – the job to write to this job store
Removes the job from the store atomically. The job cannot subsequently be passed to load(), write(), update(), etc.
This operation is idempotent, i.e. deleting a job twice or deleting a non-existent job will succeed silently.
Parameters: jobStoreID (str) – the ID of the job to delete from this job store
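The idempotency described above (deleting twice, or deleting a non-existent job, succeeds silently) can be sketched with a toy job table. The class and method names are hypothetical and only demonstrate the contract:

```python
class JobTable:
    """Toy sketch of idempotent job deletion; not Toil's implementation."""

    def __init__(self):
        self.jobs = {}  # jobStoreID -> job payload

    def create(self, job_store_id, payload):
        self.jobs[job_store_id] = payload

    def exists(self, job_store_id):
        return job_store_id in self.jobs

    def delete(self, job_store_id):
        # dict.pop with a default never raises, which gives the
        # "succeed silently on a missing job" behavior for free.
        self.jobs.pop(job_store_id, None)
```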
Best-effort attempt to return an iterator over all jobs in the store. The iterator may not return all jobs and may also contain orphaned jobs that have already finished successfully and should not be rerun. To guarantee you get any and all jobs that can be run, instead construct a more expensive ToilState object.
Returns: an iterator over jobs in the store. The iterator may or may not contain all jobs and may contain invalid jobs
Return type: Iterator[toil.jobGraph.JobGraph]
Takes a file (as a path) and places it in this job store. Returns an ID that can be used to retrieve the file at a later time.
FIXME: some implementations may not raise this
Returns: an ID referencing the newly created file, which can be used to read the file in the future.
Return type: str
Similar to writeFile, but returns a context manager yielding a tuple of 1) a file handle which can be written to and 2) the ID of the resulting file in the job store. The yielded file handle does not need to and should not be closed explicitly.
jobStoreID (str) – the id of a job, or None. If specified, the file will be associated with that job and when jobStore.delete(job) is called all files written with the given job.jobStoreID will be removed from the job store.
FIXME: some implementations may not raise this
Returns: an ID that references the newly created file and can be used to read the file in the future.
Return type: str
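The context-manager pattern for writeFileStream (yield a handle plus the new file's ID, take care of closing and persisting on exit) can be sketched with contextlib. The in-memory backing, class name, and snake_case method names are assumptions for illustration, not Toil's code:

```python
import io
import uuid
from contextlib import contextmanager


class StreamStore:
    """Toy sketch of the (handle, fileID) write-stream contract."""

    def __init__(self):
        self.files = {}  # fileID -> bytes

    @contextmanager
    def write_file_stream(self, job_store_id=None):
        file_id = str(uuid.uuid4())
        buf = io.BytesIO()
        try:
            # The caller writes to the handle; it need not (and should
            # not) close it explicitly -- the context manager finalizes.
            yield buf, file_id
        finally:
            self.files[file_id] = buf.getvalue()

    @contextmanager
    def read_file_stream(self, file_id):
        # Counterpart to readFileStream: yield a readable handle.
        yield io.BytesIO(self.files[file_id])
```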
Creates an empty file in the job store and returns its ID. A call to fileExists(getEmptyFileStoreID(jobStoreID)) will return True.
Parameters: jobStoreID (str) – the id of a job, or None. If specified, the file will be associated with that job and when jobStore.delete(job) is called a best effort attempt is made to delete all files written with the given job.jobStoreID
Returns: a jobStoreFileID that references the newly created file and can be used to reference the file in the future.
Return type: str
Copies the file referenced by jobStoreFileID to the given local file path. The version will be consistent with the last copy of the file written/updated.
The file at the given local path may not be modified after this method returns!
Similar to readFile, but returns a context manager yielding a file handle which can be read from. The yielded file handle does not need to and should not be closed explicitly.
Parameters: jobStoreFileID (str) – ID of the file to get a readable file handle for
Deletes the file with the given ID from this job store. This operation is idempotent, i.e. deleting a file twice or deleting a non-existent file will succeed silently.
Parameters: jobStoreFileID (str) – ID of the file to delete
Determine whether a file exists in this job store.
Parameters: jobStoreFileID (str) – an ID referencing the file to be checked
Return type: bool
Replaces the existing version of a file in the job store. Throws an exception if the file does not exist.
Replaces the existing version of a file in the job store. Similar to writeFile, but returns a context manager yielding a file handle which can be written to. The yielded file handle does not need to and should not be closed explicitly.
jobStoreFileID (str) – the ID of the file in the job store to be updated
Returns a context manager yielding a writable file handle to the global file referenced by the given name.
ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method
Returns a context manager yielding a readable file handle to the global file referenced by the given name.
Parameters: sharedFileName (str) – A file name matching AbstractJobStore.fileNameRegex, unique within this job store
Adds the given statistics/logging string to the store of statistics info.
Parameters: statsAndLoggingString (str) – the string to be written to the stats file
Raises: ConcurrentFileModificationException – if the file was modified concurrently during an invocation of this method
Reads stats/logging strings accumulated by the writeStatsAndLogging() method. For each stats/logging string this method calls the given callback function with an open, readable file handle from which the stats string can be read. Returns the number of stats/logging strings processed. Each stats/logging string is only processed once unless the readAll parameter is set, in which case the given callback will be invoked for all existing stats/logging strings, including the ones from a previous invocation of this method.
- callback (Callable) – a function to be applied to each of the stats file handles found
- readAll (bool) – a boolean indicating whether to read the already processed stats files in addition to the unread stats files
Returns: the number of stats files processed
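The processed-once semantics, with readAll overriding it, can be modeled with a small in-memory sketch. The class and the two-list bookkeeping are a guess at the contract described above, not Toil's actual code:

```python
import io


class StatsStore:
    """Toy model of writeStatsAndLogging/readStatsAndLogging semantics."""

    def __init__(self):
        self.unread = []     # stats strings not yet handed to a callback
        self.processed = []  # stats strings already processed

    def write_stats_and_logging(self, s):
        self.unread.append(s)

    def read_stats_and_logging(self, callback, read_all=False):
        batch = list(self.unread)
        if read_all:
            # readAll re-delivers previously processed strings too.
            batch = self.processed + batch
        for s in batch:
            callback(io.StringIO(s))  # an open, readable file handle
        self.processed.extend(self.unread)
        self.unread = []
        return len(batch)             # number of stats strings processed
```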