toil.lib.ec2nodes¶
Attributes¶
Classes¶
Functions¶
- is_number — Determines if a unicode string (that may include commas) is a number.
- parse_storage — Parses EC2 JSON storage param string into a number.
- parse_memory — Returns EC2 'memory' string as a float.
- download_region_json — Downloads and writes the AWS Billing JSON to a file using the AWS pricing API.
- reduce_region_json_size — Deletes information in the JSON file that we don't need, and rewrites it. This makes the file smaller.
- Generates a new python file of fetchable EC2 Instances by region with current prices and specs.
Module Contents¶
- toil.lib.ec2nodes.logger¶
- toil.lib.ec2nodes.manager¶
- toil.lib.ec2nodes.dirname¶
- toil.lib.ec2nodes.region_json_dirname¶
- toil.lib.ec2nodes.EC2Regions¶
- class toil.lib.ec2nodes.InstanceType(name, cores, memory, disks, disk_capacity, architecture)[source]¶
- __slots__ = ('name', 'cores', 'memory', 'disks', 'disk_capacity', 'architecture')¶
- name¶
- cores¶
- memory¶
- disks¶
- disk_capacity¶
- architecture¶
- toil.lib.ec2nodes.is_number(s)[source]¶
Determines if a unicode string (that may include commas) is a number.
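A minimal sketch of such a check, assuming the actual implementation simply strips commas before attempting a numeric conversion (the real function may differ):

```python
def is_number(s: str) -> bool:
    """Return True if s (possibly containing commas, e.g. '1,952')
    parses as a number after comma removal."""
    try:
        float(s.replace(",", ""))
        return True
    except ValueError:
        return False
```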
- toil.lib.ec2nodes.parse_storage(storage_info)[source]¶
Parses EC2 JSON storage param string into a number.
- Examples:
- “2 x 160 SSD”
- “3 x 2000 HDD”
- “EBS only”
- “1 x 410”
- “8 x 1.9 NVMe SSD”
- “900 GB NVMe SSD”
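One plausible way to parse strings like “2 x 160 SSD” or “900 GB NVMe SSD”; the `(disk_count, capacity)` return shape and the treatment of “EBS only” are illustrative assumptions, not necessarily what the real `parse_storage` returns:

```python
import re


def parse_storage(storage_info: str):
    """Parse an EC2 storage description into (number_of_disks, capacity_per_disk).

    'EBS only' means no instance storage, so we return (0, 0.0).
    """
    if storage_info == "EBS only":
        return 0, 0.0
    # Formats like '2 x 160 SSD', '1 x 410', or '8 x 1.9 NVMe SSD'
    m = re.match(r"^\s*(\d+)\s*x\s*([\d,.]+)", storage_info)
    if m:
        return int(m.group(1)), float(m.group(2).replace(",", ""))
    # Formats like '900 GB NVMe SSD' (a single disk)
    m = re.match(r"^\s*([\d,.]+)\s*GB", storage_info)
    if m:
        return 1, float(m.group(1).replace(",", ""))
    raise ValueError(f"Unrecognized storage format: {storage_info!r}")
```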
- toil.lib.ec2nodes.parse_memory(mem_info)[source]¶
Returns EC2 ‘memory’ string as a float.
The format should always be ‘# GiB’ (for example, ‘244 GiB’ or ‘1,952 GiB’). Amazon loves to put commas in its numbers, so we have to accommodate that. If the syntax ever changes, this will raise.
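Given that fixed ‘# GiB’ format, the parsing can be sketched as below (a simplified illustration; the real function's error type and message may differ):

```python
def parse_memory(mem_info: str) -> float:
    """Parse an EC2 memory string such as '244 GiB' or '1,952 GiB'
    into a float number of GiB, tolerating Amazon's thousands commas."""
    number, unit = mem_info.split()
    if unit != "GiB":
        # The syntax changed (or the input is malformed): fail loudly.
        raise ValueError(f"Unexpected memory format: {mem_info!r}")
    return float(number.replace(",", ""))
```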
- toil.lib.ec2nodes.download_region_json(filename, region='us-east-1')[source]¶
Downloads and writes the AWS Billing JSON to a file using the AWS pricing API.
See: https://aws.amazon.com/blogs/aws/new-aws-price-list-api/
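A rough sketch of fetching one region's offer file. The URL pattern follows AWS's publicly documented price-list offer endpoints, but the exact path and download mechanism used by the real function are assumptions here:

```python
import urllib.request

# Assumed endpoint pattern, based on AWS's public price-list offer files;
# the path the real implementation uses may differ.
PRICING_BASE = "https://pricing.us-east-1.amazonaws.com"


def build_pricing_url(region: str = "us-east-1") -> str:
    """Build the URL of the current EC2 offer JSON for one region."""
    return f"{PRICING_BASE}/offers/v1.0/aws/AmazonEC2/current/{region}/index.json"


def download_region_json(filename: str, region: str = "us-east-1") -> None:
    """Download the region's EC2 pricing JSON and write it to filename."""
    with urllib.request.urlopen(build_pricing_url(region)) as resp:
        with open(filename, "wb") as out:
            out.write(resp.read())
```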
- toil.lib.ec2nodes.reduce_region_json_size(filename)[source]¶
Deletes information in the JSON file that we don't need and rewrites it, making the file smaller.
The reason: we used to download the unified AWS Bulk API JSON, which eventually crept up to 5.6 GB and could no longer be loaded on a machine with 32 GB of RAM. Now we download each region's JSON individually (with AWS's newer Query API), but even those files may one day grow unreasonably large, so we trim what we can to keep the file sizes down (and thus the amount loaded into memory) and keep this script working longer.
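The trimming step can be sketched as below. The `products`/`productFamily` keys follow the layout of AWS pricing offer files, but which entries are kept and the rewritten file layout (a flat list of product records) are illustrative assumptions:

```python
import json


def reduce_region_json_size(filename: str) -> list:
    """Load a region pricing JSON, keep only the compute-instance product
    entries, and rewrite the file with just those records to shrink it."""
    with open(filename) as f:
        data = json.load(f)
    # Keep only products describing compute instances; drop everything else
    # (pricing terms, other product families) to reduce file size and the
    # memory needed to load it later.
    kept = [
        product
        for product in data.get("products", {}).values()
        if product.get("productFamily") == "Compute Instance"
    ]
    with open(filename, "w") as f:
        json.dump(kept, f)
    return kept
```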