Is there a way to get AWS pricing programmatically (cost per hour of each instance type, cost per GB/month of storage on S3, etc.)?
Also, are there cost monitoring tools? For example, is there a tool that can report your EC2 instance usage on an hourly basis (versus a monthly basis, which is what Amazon does)?
Thanks in advance.
UPDATE:
There is now an AWS Price List API:
https://aws.amazon.com/blogs/aws/new-aws-price-list-api/
Original answer:
The price lists are available in the form of JSONP files (you need to strip off the function call) which are used by the AWS pricing pages. Each table (and each tab of each table) has a separate JSON file. It may not be an API, but it is definitely computer-digestible. Here is the list that backs the EC2 pricing page (as of 17 December 2014):
On-demand Linux: http://a0.awsstatic.com/pricing/1/ec2/linux-od.min.js
On-demand RedHat: http://a0.awsstatic.com/pricing/1/ec2/rhel-od.min.js
On-demand SUSE: http://a0.awsstatic.com/pricing/1/ec2/sles-od.min.js
On-demand Windows: http://a0.awsstatic.com/pricing/1/ec2/mswin-od.min.js
On-demand SQL Standard: http://a0.awsstatic.com/pricing/1/ec2/mswinSQL-od.min.js
On-demand SQL Web: http://a0.awsstatic.com/pricing/1/ec2/mswinSQLWeb-od.min.js
Reserved Linux: http://a0.awsstatic.com/pricing/1/ec2/ri-v2/linux-unix-shared.min.js
Reserved RedHat: http://a0.awsstatic.com/pricing/1/ec2/ri-v2/red-hat-enterprise-linux-shared.min.js
Reserved SUSE: http://a0.awsstatic.com/pricing/1/ec2/ri-v2/suse-linux-shared.min.js
Reserved Windows: http://a0.awsstatic.com/pricing/1/ec2/ri-v2/windows-shared.min.js
Reserved SQL Standard: http://a0.awsstatic.com/pricing/1/ec2/ri-v2/windows-with-sql-server-standard-shared.min.js
Reserved SQL Web: http://a0.awsstatic.com/pricing/1/ec2/ri-v2/windows-with-sql-server-web-shared.min.js
Spot instances: http://spot-price.s3.amazonaws.com/spot.js
Data transfer: http://a0.awsstatic.com/pricing/1/ec2/pricing-data-transfer-with-regions.min.js
EBS optimized: http://a0.awsstatic.com/pricing/1/ec2/pricing-ebs-optimized-instances.min.js
EBS: http://a0.awsstatic.com/pricing/1/ebs/pricing-ebs.min.js
Elastic IP: http://a0.awsstatic.com/pricing/1/ec2/pricing-elastic-ips.min.js
CloudWatch: http://a0.awsstatic.com/pricing/1/cloudwatch/pricing-cloudwatch.min.js
ELB: http://a0.awsstatic.com/pricing/1/ec2/pricing-elb.min.js
EMR: https://a0.awsstatic.com/pricing/1/emr/pricing-emr.min.js
WARNING: The endpoints change from time to time, and often an old URL is still live but serves stale values. It is best to check the current status rather than rely on the links provided in this thread.
So, here is a short command to get the current set of URLs from any AWS pricing page (the example is based on EC2; run it on Linux or Cygwin). This is actually the command that was used to create the list above.
curl http://aws.amazon.com/ec2/pricing/ 2>/dev/null | grep 'model:' | sed -e "s/.*'\(.*\)'.*/http:\\1/"
For those who don't like the command line, you can also open the browser's network console (press F12) and filter the requests down to JS files.
Just to let you know, they seem to have changed the JSON addresses; the new files include the C3 instance types.
Update 01/21/2014: addresses changed again. Please note that these are JS files with a callback function that should be removed so that it becomes a parsable JSON.
Update 09/21/2014: addresses changed once again and now include the new T2 instance types. To be treated as JSON files, the initial comments and the callback function should be removed and the keys should be wrapped in double quotes.
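For example, here is one way to do that conversion in the shell. This is a rough sketch only: the key-quoting regex is naive and could corrupt string values that happen to contain a brace or comma followed by "word:", and the jq path reflects the file layout at the time of writing.
# Flatten to one line, strip the comment header and callback wrapper,
# quote the bare keys, then query the resulting strict JSON with jq.
curl -s http://a0.awsstatic.com/pricing/1/ec2/linux-od.min.js \
  | tr -d '\n' \
  | sed -e 's/^[^(]*(//' -e 's/)[^)]*$//' \
  | sed -E 's/([{,])([a-zA-Z0-9_]+):/\1"\2":/g' \
  | jq '.config.regions[0].region'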
On Demand
Linux: http://a0.awsstatic.com/pricing/1/ec2/linux-od.min.js
Windows: http://a0.awsstatic.com/pricing/1/ec2/mswin-od.min.js
RHEL: http://a0.awsstatic.com/pricing/1/ec2/rhel-od.min.js
SLES: http://a0.awsstatic.com/pricing/1/ec2/sles-od.min.js
Windows w/ SQL Std: http://a0.awsstatic.com/pricing/1/ec2/mswinSQL-od.min.js
Windows w/ SQL Web: http://a0.awsstatic.com/pricing/1/ec2/mswinSQLWeb-od.min.js
Reserved Light
Linux: http://a0.awsstatic.com/pricing/1/ec2/linux-ri-light.min.js
Windows: http://a0.awsstatic.com/pricing/1/ec2/mswin-ri-light.min.js
RHEL: http://a0.awsstatic.com/pricing/1/ec2/rhel-ri-light.min.js
SLES: http://a0.awsstatic.com/pricing/1/ec2/sles-ri-light.min.js
Windows w/ SQL Std: http://a0.awsstatic.com/pricing/1/ec2/mswinSQL-ri-light.min.js
Windows w/ SQL Web: http://a0.awsstatic.com/pricing/1/ec2/mswinSQLWeb-ri-light.min.js
Reserved Medium
Linux: http://a0.awsstatic.com/pricing/1/ec2/linux-ri-medium.min.js
Windows: http://a0.awsstatic.com/pricing/1/ec2/mswin-ri-medium.min.js
RHEL: http://a0.awsstatic.com/pricing/1/ec2/rhel-ri-medium.min.js
SLES: http://a0.awsstatic.com/pricing/1/ec2/sles-ri-medium.min.js
Windows w/ SQL Std: http://a0.awsstatic.com/pricing/1/ec2/mswinSQL-ri-medium.min.js
Windows w/ SQL Web: http://a0.awsstatic.com/pricing/1/ec2/mswinSQLWeb-ri-medium.min.js
Reserved Heavy
Linux: http://a0.awsstatic.com/pricing/1/ec2/linux-ri-heavy.min.js
Windows: http://a0.awsstatic.com/pricing/1/ec2/mswin-ri-heavy.min.js
RHEL: http://a0.awsstatic.com/pricing/1/ec2/rhel-ri-heavy.min.js
SLES: http://a0.awsstatic.com/pricing/1/ec2/sles-ri-heavy.min.js
Windows w/ SQL Std: http://a0.awsstatic.com/pricing/1/ec2/mswinSQL-ri-heavy.min.js
Windows w/ SQL Web: http://a0.awsstatic.com/pricing/1/ec2/mswinSQLWeb-ri-heavy.min.js
Other
Spot Instances: http://spot-price.s3.amazonaws.com/spot.js
Data Transfer: http://a0.awsstatic.com/pricing/1/ec2/pricing-data-transfer-with-regions.min.js
EBS-Optimized Instances Surcharge: http://a0.awsstatic.com/pricing/1/ec2/pricing-ebs-optimized-instances.min.js
EBS: http://a0.awsstatic.com/pricing/1/ec2/pricing-ebs.min.js
Elastic IP: http://a0.awsstatic.com/pricing/1/ec2/pricing-elastic-ips.min.js
CloudWatch: http://a0.awsstatic.com/pricing/1/ec2/pricing-cloudwatch.min.js
ELB: http://a0.awsstatic.com/pricing/1/ec2/pricing-elb.min.js
Previous endpoint: http://aws-assets-pricing-prod.s3.amazonaws.com/pricing/ec2/linux-od.js
Using the AWS CLI (in the examples below, I have also included how the same thing can be executed using jq):
to get a list of service codes:
aws pricing describe-services --region us-east-1
to get a list of service codes (with jq):
aws pricing describe-services --region us-east-1 | jq -r '.Services[] | .ServiceCode'
which will return values like:
AmazonEC2
AmazonS3
AmazonRoute53
[...]
to get a list of attributes for a given service code:
aws pricing describe-services --service-code AmazonEC2 --region us-east-1
to get a list of attributes for a given service code (with jq):
aws pricing describe-services --service-code AmazonEC2 --region us-east-1 | jq -r '.Services[] | .AttributeNames[]'
which will return values like:
instancesku
location
memory
vcpu
volumeType
[...]
to get pricing info now that you have a service code and attribute:
(this will take a while, since it returns every SKU for the service code, so I will show examples using filtering further down)
aws pricing get-products --service-code AmazonEC2 --region us-east-1
to get pricing info now that you have a service code and attribute using a filter on instanceType and another for location:
aws pricing get-products --service-code AmazonEC2 --filters "Type=TERM_MATCH,Field=instanceType,Value=m5.xlarge" "Type=TERM_MATCH,Field=location,Value=US East (N. Virginia)" --region us-east-1
to get pricing info now that you have a service code and attribute using a filter on instanceType and another for location (with jq):
aws pricing get-products --service-code AmazonEC2 --filters "Type=TERM_MATCH,Field=instanceType,Value=m5.xlarge" "Type=TERM_MATCH,Field=location,Value=US East (N. Virginia)" --region us-east-1 | jq -rc '.PriceList[]' | jq -r '[ .product.attributes.servicecode, .product.attributes.location, .product.attributes.instancesku?, .product.attributes.instanceType, .product.attributes.usagetype, .product.attributes.operatingSystem, .product.attributes.memory, .product.attributes.physicalProcessor, .product.attributes.processorArchitecture, .product.attributes.vcpu, .product.attributes.currentGeneration, .terms.OnDemand[].priceDimensions[].unit, .terms.OnDemand[].priceDimensions[].pricePerUnit.USD, .terms.OnDemand[].priceDimensions[].description] | @csv'
which will return values like:
"AmazonEC2","US East (N. Virginia)","EWZRARGKPMTYQJFP","m5.xlarge","UnusedDed:m5.xlarge","Linux","16 GiB","Intel Xeon Platinum 8175 (Skylake)","64-bit","4","Yes","Hrs","0.6840000000","$0.684 per Dedicated Unused Reservation Linux with SQL Std m5.xlarge Instance Hour"```
[...]
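The same pattern answers the S3 part of the original question. A sketch; the filter values here are illustrative examples, and valid values for any attribute can be enumerated with the get-attribute-values subcommand:
aws pricing get-products --service-code AmazonS3 --region us-east-1 \
  --filters "Type=TERM_MATCH,Field=location,Value=US East (N. Virginia)" \
            "Type=TERM_MATCH,Field=storageClass,Value=General Purpose" \
  | jq -r '.PriceList[]' \
  | jq -r '.product.attributes.volumeType as $v
           | .terms.OnDemand[].priceDimensions[]
           | [$v, .pricePerUnit.USD, .unit] | @csv'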
In addition to #arturhoo's answer (which covers EC2 spot prices), you can obtain the historic spot prices with the CLI tool:
aws ec2 describe-spot-price-history \
--instance-types m1.xlarge \
--product-description "Linux/UNIX (Amazon VPC)" \
--start-time 2016-10-31T03:00:00 \
--end-time 2016-10-31T03:16:00 \
--query 'SpotPriceHistory[*].[Timestamp,SpotPrice]'
which fetches the spot prices between 3:00am and 3:16am on Monday 31 October 2016 (UTC):
[
  [
    "2016-10-31T03:06:12.000Z",
    "0.041500"
  ],
  [
    "2016-10-31T03:00:26.000Z",
    "0.041600"
  ],
  [
    "2016-10-31T02:59:14.000Z",
    "0.041500"
  ],
  [
    "2016-10-31T02:00:18.000Z",
    "0.040600"
  ],
  [
    "2016-10-30T23:55:06.000Z",
    "0.043200"
  ]
]
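If you just want a single number instead of the raw samples, the same query pipes straight into awk; a small sketch, assuming the text output format (one timestamp/price pair per line):
aws ec2 describe-spot-price-history \
    --instance-types m1.xlarge \
    --product-description "Linux/UNIX (Amazon VPC)" \
    --start-time 2016-10-31T03:00:00 \
    --end-time 2016-10-31T03:16:00 \
    --query 'SpotPriceHistory[*].[Timestamp,SpotPrice]' \
    --output text \
  | awk '{ sum += $2; n++ } END { if (n) printf "average: %.6f over %d samples\n", sum / n, n }'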
This Ruby gem wraps the JSON pricing data provided by Amazon and provides a simple interface, taking care of mapping the region and instance type names to the ones used in the EC2 API.
https://github.com/sonian/amazon-pricing
Besides the official AWS JSON endpoint, https://ec2.shop is also an option. It responds with both JSON and plain text (so you can use grep, awk, etc.).
curl -L 'ec2.shop?filter=.large'
Instance Type Memory vCPUs Storage Network Price Monthly Spot Price
m6g.large 8 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.0770 56.210 0.0357
t3.large 8 GiB 2 vCPUs EBS only Up to 5 Gigabit 0.0832 60.736 0.0250
m5a.large 8 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.0860 62.780 0.0345
i3en.large 16 GiB 2 vCPUs 1 x 1250 NVMe SSD Up to 25 Gigabit 0.2260 164.980 0.0678
r4.large 15.25 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.1330 97.090 0.0343
r5.large 16 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.1260 91.980 0.0356
r5a.large 16 GiB 2 vCPUs EBS only 10 Gigabit 0.1130 82.490 0.0356
r5dn.large 16 GiB 2 vCPUs 1 x 75 NVMe SSD Up to 25 Gigabit 0.1670 121.910 0.0356
t3a.large 8 GiB 2 vCPUs EBS only Up to 5 Gigabit 0.0752 54.896 0.0226
m4.large 8 GiB 2 vCPUs EBS only Moderate 0.1000 73.000 0.0362
i3.large 15.25 GiB 2 vCPUs 1 x 475 NVMe SSD Up to 10 Gigabit 0.1560 113.880 0.0468
m5dn.large 8 GiB 2 vCPUs 1 x 75 NVMe SSD Up to 25 Gigabit 0.1360 99.280 0.0340
m5d.large 8 GiB 2 vCPUs 1 x 75 NVMe SSD Up to 10 Gigabit 0.1130 82.490 0.0340
t2.large 8 GiB 2 vCPUs EBS only Low to Moderate 0.0928 67.744 0.0278
z1d.large 16 GiB 2 vCPUs 1 x 75 NVMe SSD Up to 10 Gigabit 0.1860 135.780 0.0558
c5.large 4 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.0850 62.050 0.0324
m5ad.large 8 GiB 2 vCPUs 1 x 75 NVMe SSD Up to 10 Gigabit 0.1030 75.190 0.0345
r6g.large 16 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.1008 73.584 0.0374
r5d.large 16 GiB 2 vCPUs 1 x 75 NVMe SSD 10 Gigabit 0.1440 105.120 0.0356
r5n.large 16 GiB 2 vCPUs EBS only Up to 25 Gigabit 0.1490 108.770 0.0356
r5ad.large 16 GiB 2 vCPUs 1 x 75 NVMe SSD 10 Gigabit 0.1310 95.630 0.0356
c6g.large 4 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.0680 49.640 0.0340
m5n.large 8 GiB 2 vCPUs EBS only Up to 25 Gigabit 0.1190 86.870 0.0340
c4.large 3.75 GiB 2 vCPUs EBS only Moderate 0.1000 73.000 0.0308
c5a.large 4 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.0770 56.210 0.0324
c5d.large 4 GiB 2 vCPUs 1 x 50 NVMe SSD Up to 10 Gigabit 0.0960 70.080 0.0324
m5.large 8 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.0960 70.080 0.0341
c5n.large 5.25 GiB 2 vCPUs EBS only Up to 25 Gigabit 0.1080 78.840 0.0326
a1.large 4 GiB 2 vCPUs EBS only Up to 10 Gigabit 0.0510 37.230 0.0227
c3.large 3.75 GiB 2 vCPUs 2 x 16 SSD Moderate 0.1050 76.650 0.0294
r3.large 15.25 GiB 2 vCPUs 1 x 32 SSD Moderate 0.1660 121.180 0.0323
m1.large 7.5 GiB 2 vCPUs 2 x 420 SSD Moderate 0.1750 127.750 0.0175
m3.large 7.5 GiB 2 vCPUs 1 x 32 SSD Moderate 0.1330 97.090 0.0308
It also returns JSON, like this:
curl -sL 'ec2.shop?filter=m4' -H 'accept: json' | jq .
{
"Prices": [
{
"InstanceType": "m4.16xlarge",
"Memory": "256 GiB",
"VCPUS": 64,
"Storage": "EBS only",
"Network": "20 Gigabit",
"Cost": 3.2,
"MonthlyPrice": 2336,
"SpotPrice": "0.9479"
},
{
"InstanceType": "m4.large",
"Memory": "8 GiB",
"VCPUS": 2,
"Storage": "EBS only",
"Network": "Moderate",
"Cost": 0.1,
"MonthlyPrice": 73,
"SpotPrice": "0.0362"
},
{
"InstanceType": "m4.2xlarge",
"Memory": "32 GiB",
"VCPUS": 8,
"Storage": "EBS only",
"Network": "High",
"Cost": 0.4,
"MonthlyPrice": 292,
"SpotPrice": "0.1504"
},
{
"InstanceType": "m4.4xlarge",
"Memory": "64 GiB",
"VCPUS": 16,
"Storage": "EBS only",
"Network": "High",
"Cost": 0.8,
"MonthlyPrice": 584,
"SpotPrice": "0.3168"
},
{
"InstanceType": "m4.xlarge",
"Memory": "16 GiB",
"VCPUS": 4,
"Storage": "EBS only",
"Network": "High",
"Cost": 0.2,
"MonthlyPrice": 146,
"SpotPrice": "0.0661"
},
{
"InstanceType": "m4.10xlarge",
"Memory": "160 GiB",
"VCPUS": 40,
"Storage": "EBS only",
"Network": "10 Gigabit",
"Cost": 2,
"MonthlyPrice": 1460,
"SpotPrice": "0.7050"
}
]
}
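Since the JSON is already structured, jq can answer questions directly; for example, picking the cheapest matching on-demand instance (field names taken from the output above):
curl -sL 'ec2.shop?filter=.large' -H 'accept: json' \
  | jq '.Prices | min_by(.Cost) | {InstanceType, Cost, SpotPrice}'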
If you are using Go, I wrote a library that can query the data using the
https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/{offer_code}/current/index.{format}
URL format:
https://github.com/Chronojam/aws-pricing-api
package main

import (
	"fmt"

	"github.com/chronojam/aws-pricing-api/types/schema"
)

func main() {
	ec2 := &schema.AmazonEC2{}
	// Populate this object with new pricing data
	if err := ec2.Refresh(); err != nil {
		panic(err)
	}

	// Get the price of all c4.large instances,
	// running Linux, on shared tenancy
	c4Large := []*schema.AmazonEC2_Product{}
	for _, p := range ec2.Products {
		if p.Attributes.InstanceType == "c4.large" &&
			p.Attributes.OperatingSystem == "Linux" &&
			p.Attributes.Tenancy == "Shared" {
			c4Large = append(c4Large, p)
		}
	}

	fmt.Printf("found %d matching c4.large products\n", len(c4Large))
}
AWS has launched the new Price List API for programmatic integration.
URL Syntax:
https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/{offer_code}/current/index.{format}
To get the list of supported services:
https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/index.json
AWS blog reference: https://aws.amazon.com/blogs/aws/new-aws-price-list-api/
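For example, to see which offer codes the index currently exposes (the index file keeps them under the offers key):
curl -s https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/index.json \
  | jq -r '.offers[].offerCode'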
I am the author of an open-source tool called ec2-cost-calculate that will "report your EC2 instance usage on an hourly basis" - the tool is available at awsmissingtools.com. Output can be hourly, daily, monthly. Two versions of the tool exist, one written in Ruby and another written in bash.
As Amazon has recently changed the pricing scheme for EC2 instances (no more Medium or Light, only Heavy, which has multiple payment options: allUpfront, partialUpfront, noUpfront) and some time ago separated the old-generation instances from the current ones, the list of undocumented pricing API links has changed, as has the structure of the JSON served by these links.
The full list of links to the undocumented EC2 pricing API, with descriptions, as well as a Python module for convenient access and structured output of the pricing in JSON, CSV or table formats, can be found in the following repository:
https://github.com/ilia-semenov/awspricingfull
If you're using Go, I wrote a package to decode the data into a struct, based on the files linked to in #okrasz's answer:
https://github.com/recursionpharma/ec2prices
Feel free to contribute with more pricing data.
Here's a bash implementation, maintained as a gist, that will give you pricing in a concise CSV file for on-demand, license-free instances.
It works by parsing the nearly 3 GB EC2 pricing file and filtering out all the reserved-instance and private-customer prices that are inexplicably included in the only free mechanism AWS provides for obtaining programmatically parseable pricing data. The result is a price.csv file of just over 7,000 lines.
#!/usr/bin/env bash
# csv filenames
raw_path=/tmp/ec2.csv
machine_path=/tmp/machine.csv
price_path=/tmp/price.csv
region_path=/tmp/region.csv
# dependencies
for package in coreutils curl datamash grep miller python3-csvkit; do
if ! dnf list installed ${package} &> /dev/null; then
sudo dnf install -y ${package}
fi
done
# fetch raw data from aws
if [ ! -s ${raw_path} ]; then
curl -Lo ${raw_path} https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.csv
fi
# build machine list
if [ ! -s ${machine_path} ]; then
echo '"id","current-generation","family","vcpu","processor","clock","memory","storage","network","architecture"' > ${machine_path}
tail -n +7 ${raw_path} | cut -d , -f 20-29 | grep -v '"AWS Region"' | grep -v '"NA","NA","NA"' | grep -v ',,,,,,,,,' | sort -u >> ${machine_path}
fi
# build price list, filtering out everything but license-free, on-demand instances in regions available to the public
if [ ! -s ${price_path} ]; then
echo machine,region,price > ${price_path}
cat ${raw_path} | grep '"Linux","No License required"' | grep -v 'SQL' | grep -v '"Reserved"' | grep -v '"0.0000000000"' | grep -v 'Unused Reservation' | grep -v 'Dedicated' | cut -d , -f 10,20,85 | egrep '"(af|ap|ca|eu|me|sa|us)-(north|south|east|west|central)(east|west)?-[1-3]"' | mlr --csv --implicit-csv-header reorder -f 2,3,1 | grep -v '2,3,1' | sort >> ${price_path}
fi
# build region list, filtering out private customer regions
if [ ! -s ${region_path} ]; then
echo '"id","name"' > ${region_path}
tail -n +7 ${raw_path} | cut -d , -f 18,85 | sort -u | datamash -t , reverse | egrep '"(af|ap|ca|eu|me|sa|us)-(north|south|east|west|central)(east|west)?-[1-3]"' | sort >> ${region_path}
fi
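Once the script has run, the generated CSV is easy to query with the same tools it installs; for example, a quick usage sketch listing the ten cheapest machine/region pairs:
# Numeric ascending sort on the price column; header plus ten rows
mlr --csv sort -nf price /tmp/price.csv | head -n 11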
Related
I have log files that are broken down into between 1 and 4 "Tasks". In each "Task" there are sections for "WU Name" and "estimated CPU time remaining". Ultimately, I want the bash script output to look like this 3-Task example:
Task 1 Mini_Protein_binds_COVID-19_boinc_ 0d:7h:44m:28s
Task 2 shapeshift_pair6_msd4X_4_f_e0_161_ 0d:4h:14m:22s
Task 3 rep730_0078_symC_reordered_0002_pr 1d:1h:38m:41s
So far, I can count the Tasks in the log, I can isolate the x number of characters I want from the "WU Name", I can convert the "estimated CPU time remaining" in seconds to days:hours:minutes:seconds, and I can output all of that into 'pretty' columns. The problem is that I can only process 1 Task, using:
# Initialize counter
counter=1
# Count how many iterations
cnt_wu=`grep -c "WU name:" /mnt/work/sec-conv/bnc-sample3.txt`
# Iterate the loop for cnt-wu times
while [ $counter -le ${cnt_wu} ]
do
core_cnt=$counter
wu=`cat /mnt/work/sec-conv/bnc-sample3.txt | grep -Po 'WU name: \K.*' | cut -c1-34`
sec=`cat /mnt/work/sec-conv/bnc-sample3.txt | grep -Po 'estimated CPU time remaining: \K.*' | cut -f1 -d"."`
dhms=`printf '%dd:%dh:%dm:%ds\n' $(($sec/86400)) $(($sec%86400/3600)) $(($sec%3600/60)) $(($sec%60))`
echo "Task ${core_cnt}" $'\t' $wu $'\t' $dhms | column -ts $'\t'
counter=$((counter + 1))
done
Note: /mnt/work/sec-conv/bnc-sample3.txt is a static one-Task sample only used for this script's dev.
What I can't figure out is the next step: how to process any number of Tasks. I can't figure out how to leverage the while/counter combination properly, or how to increment through the occurrences of Tasks.
Adding bnc-sample.txt (contains 3 Tasks)
1) -----------
name: Rosetta#home
master URL: https://boinc.bakerlab.org/rosetta/
user_name: XXXXXXX
team_name:
resource share: 100.000000
user_total_credit: 10266.993660
user_expavg_credit: 512.420495
host_total_credit: 10266.993660
host_expavg_credit: 512.603669
nrpc_failures: 0
master_fetch_failures: 0
master fetch pending: no
scheduler RPC pending: no
trickle upload pending: no
attached via Account Manager: no
ended: no
suspended via GUI: no
don't request more work: no
disk usage: 0.000000
last RPC: Wed Jun 10 15:55:29 2020
project files downloaded: 0.000000
GUI URL:
name: Message boards
description: Correspond with other users on the Rosetta#home message boards
URL: https://boinc.bakerlab.org/rosetta/forum_index.php
GUI URL:
name: Your account
description: View your account information
URL: https://boinc.bakerlab.org/rosetta/home.php
GUI URL:
name: Your tasks
description: View the last week or so of computational work
URL: https://boinc.bakerlab.org/rosetta/results.php?userid=XXXXXXX
jobs succeeded: 117
jobs failed: 0
elapsed time: 2892439.609931
cross-project ID: 3538b98e5f16a958a6bdd2XXXXXXXXX
======== Tasks ========
1) -----------
name: shapeshift_pair6_msd4X_4_f_e0_161_X_0001_0001_fragments_abinitio_SAVE_ALL_OUT_946179_730_0
WU name: shapeshift_pair6_msd4X_4_f_e0_161_X_0001_0001_fragments_abinitio_SAVE_ALL_OUT_946179_730
project URL: https://boinc.bakerlab.org/rosetta/
received: Mon Jun 8 09:58:08 2020
report deadline: Thu Jun 11 09:58:08 2020
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 420
resources: 1 CPU
estimated CPU time remaining: 26882.771040
slot: 1
PID: 28434
CPU time at last checkpoint: 3925.896000
current CPU time: 4314.761000
fraction done: 0.066570
swap size: 431 MB
working set size: 310 MB
2) -----------
name: rep730_0078_symC_reordered_0002_propagated_0001_0001_0001_A_v9_fold_SAVE_ALL_OUT_946618_54_0
WU name: rep730_0078_symC_reordered_0002_propagated_0001_0001_0001_A_v9_fold_SAVE_ALL_OUT_946618_54
project URL: https://boinc.bakerlab.org/rosetta/
received: Mon Jun 8 09:58:08 2020
report deadline: Thu Jun 11 09:58:08 2020
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 420
resources: 1 CPU
estimated CPU time remaining: 26412.937920
slot: 2
PID: 28804
CPU time at last checkpoint: 3829.626000
current CPU time: 3879.975000
fraction done: 0.082884
swap size: 628 MB
working set size: 513 MB
3) -----------
name: Mini_Protein_binds_COVID-19_boinc_site3_2_SAVE_ALL_OUT_IGNORE_THE_REST_0aw6cb3u_944116_2_0
WU name: Mini_Protein_binds_COVID-19_boinc_site3_2_SAVE_ALL_OUT_IGNORE_THE_REST_0aw6cb3u_944116_2
project URL: https://boinc.bakerlab.org/rosetta/
received: Mon Jun 8 09:58:47 2020
report deadline: Thu Jun 11 09:58:46 2020
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 420
resources: 1 CPU
estimated CPU time remaining: 27868.559616
slot: 0
PID: 30988
CPU time at last checkpoint: 1265.356000
current CPU time: 1327.603000
fraction done: 0.032342
swap size: 792 MB
working set size: 668 MB
Again, I appreciate any guidance!
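One possible approach (an untested sketch, not the asker's final solution): read the WU names and the remaining-seconds values in lockstep with paste and process substitution, so the loop advances through the Tasks instead of re-reading the first match each time:
#!/usr/bin/env bash
log=/mnt/work/sec-conv/bnc-sample3.txt
counter=1
# Column 1: first 34 chars of each WU name; column 2: whole seconds remaining
paste <(grep -Po 'WU name: \K.*' "$log" | cut -c1-34) \
      <(grep -Po 'estimated CPU time remaining: \K[0-9]+' "$log") |
while IFS=$'\t' read -r wu sec; do
    dhms=$(printf '%dd:%dh:%dm:%ds' $((sec/86400)) $((sec%86400/3600)) $((sec%3600/60)) $((sec%60)))
    printf 'Task %d\t%s\t%s\n' "$counter" "$wu" "$dhms"
    counter=$((counter + 1))
done | column -ts $'\t'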
Background:
I'm working on a PowerShell script to automate installation from a USB stick via WinPE. Because the target systems have several drives, each possibly having a couple of partitions, Windows quickly runs out of drive letters. Part of my script unassigns all drive letters, then reassigns only the necessary disks. Right now, I assign hard-coded letters to certain partitions, but I've run into a problem with one of the letters not being unassigned.
The issue is that I somehow have a volume with an assigned drive letter, yet there's apparently no underlying partition, and since Remove-PartitionAccessPath requires a partition object, there's no way to remove the letter from PowerShell (without resorting to diskpart).
Here's the output of diskpart - you can see the selected disk has no partitions, yet somehow has a volume:
Microsoft DiskPart version 10.0.15063.0
Copyright (C) Microsoft Corporation.
On computer: MININT-6GI0UNM
DISKPART> list disk
Disk ### Status Size Free Dyn Gpt
-------- ------------- ------- ------- --- ---
Disk 0 Online 5589 GB 0 B *
Disk 1 Online 5589 GB 0 B *
Disk 2 Online 5589 GB 0 B *
Disk 3 Online 5589 GB 0 B *
Disk 4 Online 5589 GB 0 B *
Disk 5 Online 5589 GB 0 B *
Disk 6 Online 5589 GB 0 B *
Disk 7 Online 5589 GB 0 B *
Disk 8 Online 5589 GB 0 B *
Disk 9 Online 5589 GB 0 B *
Disk 10 Online 5589 GB 0 B *
Disk 11 Online 5589 GB 0 B *
Disk 12 Online 447 GB 0 B *
Disk 13 Online 447 GB 0 B *
Disk 14 Online 232 GB 0 B *
Disk 15 Online 29 GB 29 GB
Disk 16 Online 28 GB 0 B *
DISKPART> sel disk 15
Disk 15 is now the selected disk.
DISKPART> list part
There are no partitions on this disk to show.
DISKPART> detail disk
ATA Hypervisor USB Device
Disk ID: E0623CE6
Type : USB
Status : Online
Path : 0
Target : 0
LUN ID : 0
Location Path : UNAVAILABLE
Current Read-only State : No
Read-only : No
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : No
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- --------
Volume 20 E Removable 0 B Unusable
DISKPART>
Here's what happens when I try to remove the letter from PowerShell:
PS X:\sources> Get-Volume -DriveLetter E | Remove-PartitionAccessPath -AccessPath "E:"
Remove-PartitionAccessPath : The input object cannot be bound to any parameters for the command either because the
command does not take pipeline input or the input and its properties do not match any of the parameters that take
pipeline input.
At line:1 char:29
+ ... t-Volume -DriveLetter E | Remove-PartitionAccessPath -AccessPath "E:"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (MSFT_Volume (Ob...rosoft/Wind...):PSObject) [Remove-PartitionAccessPat
h], ParameterBindingException
+ FullyQualifiedErrorId : InputObjectNotBound,Remove-PartitionAccessPath
PS X:\sources> Get-Volume -DriveLetter E | fl *
OperationalStatus : Unknown
HealthStatus : Healthy
DriveType : Removable
FileSystemType : Unknown
DedupMode : NotAvailable
ObjectId : {1}\\MININT-6GI0UNM\root/Microsoft/Windows/Storage/Providers_v2\WSP_Volume.ObjectId="{63585070-
3cd2-11e7-b877-806e6f6e6963}:VO:\\?\Volume{635850c4-3cd2-11e7-b877-806e6f6e6963}\"
PassThroughClass :
PassThroughIds :
PassThroughNamespace :
PassThroughServer :
UniqueId : \\?\Volume{635850c4-3cd2-11e7-b877-806e6f6e6963}\
AllocationUnitSize : 0
DriveLetter : E
FileSystem :
FileSystemLabel :
Path : \\?\Volume{635850c4-3cd2-11e7-b877-806e6f6e6963}\
Size : 0
SizeRemaining : 0
PSComputerName :
CimClass : ROOT/Microsoft/Windows/Storage:MSFT_Volume
CimInstanceProperties : {ObjectId, PassThroughClass, PassThroughIds, PassThroughNamespace...}
CimSystemProperties : Microsoft.Management.Infrastructure.CimSystemProperties
PS X:\sources> Get-Volume -DriveLetter E | Get-Partition
PS X:\sources> $null -eq (Get-Volume -DriveLetter E | Get-Partition)
True
PowerShell version table:
PS X:\sources> $PSVersionTable
Name Value
---- -----
PSVersion 5.1.15063.0
PSEdition Desktop
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.15063.0
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
I can try to get more details about the contents of the disk in question if necessary.
What could be causing this? Is there a PowerShell workaround?
Note: I realize it would probably be better to have Windows pick drive letters instead of hard-coding them, but I'm still curious about the mysterious volume.
Try this:
Get-Volume -Drive 'E' | Get-Partition | Remove-PartitionAccessPath -AccessPath 'E:\'
Reference: https://blogs.technet.microsoft.com/heyscriptingguy/2015/12/07/powertip-use-powershell-to-remove-drive-letter/
I want to get the various model names of my hard drives using bash.
I can do it if there's just one, using hwinfo like this:
hwinfo --ide|grep Model|sed -ne '/Model/s/.*Model: "\([^"]*\)".*/\1/p'
But this obviously fails when there's more than one. A typical hwinfo output with several hard drives looks like this:
[faidoc#Delorean ~]$ hwinfo --ide
11: IDE 200.0: 10600 Disk
[Created at block.245]
Unique ID: 3OOL.XFCtBh10jZ2
Parent ID: qnJ_.3_X41NtKT36
SysFS ID: /class/block/sda
SysFS BusID: 2:0:0:0
SysFS Device Link: /devices/pci0000:00/0000:00:0d.0/ata3/host2/target2:0:0/2:0:0:0
Hardware Class: disk
Model: "VBOX HARDDISK"
Vendor: "VBOX"
Device: "HARDDISK"
Revision: "1.0"
Serial ID: "VBfa9b1456-03d78f51"
Driver: "ahci", "sd"
Driver Modules: "ahci"
Device File: /dev/sda
Device Files: /dev/sda, /dev/disk/by-id/ata-VBOX_HARDDISK_VBfa9b1456-03d78f51
Device Number: block 8:0-8:15
BIOS id: 0x80
Geometry (Logical): CHS 1305/255/63
Size: 20971520 sectors a 512 bytes
Capacity: 10 GB (10737418240 bytes)
Config Status: cfg=new, avail=yes, need=no, active=unknown
Attached to: #10 (SATA controller)
12: IDE 300.0: 10600 Disk
[Created at block.245]
Unique ID: WZeP.0xN7VsONW+D
Parent ID: qnJ_.3_X41NtKT36
SysFS ID: /class/block/sdb
SysFS BusID: 3:0:0:0
SysFS Device Link: /devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0
Hardware Class: disk
Model: "VBOX HARDDISK"
Vendor: "VBOX"
Device: "HARDDISK"
Revision: "1.0"
Serial ID: "VB350f9911-48221ae2"
Driver: "ahci", "sd"
Driver Modules: "ahci"
Device File: /dev/sdb
Device Files: /dev/sdb, /dev/disk/by-id/ata-VBOX_HARDDISK_VB350f9911-48221ae2
Device Number: block 8:16-8:31
BIOS id: 0x81
Geometry (Logical): CHS 2349/255/63
Size: 37748736 sectors a 512 bytes
Capacity: 18 GB (19327352832 bytes)
Config Status: cfg=new, avail=yes, need=no, active=unknown
Attached to: #10 (SATA controller)
Every drive begins with, for example, "11:" or "12:", so if I could get one record at a time that would be the solution.
Any ideas?
Thanks
You can get the info with:
hdparm -i /dev/sda | grep -i model
or, if you want just the model name:
hdparm -i /dev/sda | perl -n -e 'print "$1\n" if (m/model=(.+?),/i);'
If you know which one you want, a very easy way would be grep -A8 -E '^11:':
hwinfo --ide|grep -A8 -E '^11:'|grep Model|sed -ne '/Model/s/.*Model: "\([^"]*\)".*/\1/p'
The -A flag on grep grabs that many lines "After" the match as well as the line with the match.
There's also -B for "Before" and -C for "Context"
Here's a quick and dirty awk statement that may help:
hwinfo --ide | awk '{ if($2=="IDE"){ide=$3} if($1=="Model:"){print "IDE " ide $0} }'
Basically, it searches for the pattern "IDE" in the second word of each line.
If it finds that, it stores the third word of the line in the variable named "ide".
Then it searches the first word of each line for "Model:".
If found, it prints the IDE location it stored earlier and the whole line containing the model name.
So you end up with the name and IDE location in your output:
IDE 200.0: Model: "VBOX HARDDISK"
IDE 300.0: Model: "VBOX HARDDISK"
And it should work no matter how many disks are attached.
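On a modern Linux system there is also a shorter route that sidesteps the parsing entirely; a small sketch, assuming util-linux's lsblk is available:
# One line per physical disk: kernel name plus the model string
lsblk -d -o NAME,MODEL --noheadings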
I have set up autoscaling using these steps:
$ elb-create-lb autoscalelb --headers \
    --listener "lb-port=80,instance-port=80,protocol=http" \
    --listener "lb-port=443,instance-port=443,protocol=tcp" \
    --availability-zones us-east-1d
$ elb-describe-lbs autoscalelb
$ elb-register-instances-with-lb autoscalelb --instances i-ee364697
$ elb-configure-healthcheck autoscalelb --headers --target "TCP:80" \
    --interval 5 --timeout 3 --unhealthy-threshold 2 --healthy-threshold 4
$ as-create-launch-config autoscalelc --image-id ami-baba68d3 \
    --instance-type t1.micro
$ as-create-auto-scaling-group autoscleasg --availability-zones us-east-1d \
    --launch-configuration autoscalelc --min-size 1 --max-size 5 \
    --desired-capacity 1 --load-balancers autoscalelb
$ as-describe-auto-scaling-groups autoscleasg
$ as-put-scaling-policy MyScaleUpPolicy --auto-scaling-group autoscleasg \
    --adjustment=1 --type ChangeInCapacity --cooldown 300
$ mon-put-metric-alarm MyHighCPUAlarm --comparison-operator GreaterThanThreshold \
    --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" \
    --period 600 --statistic Average --threshold 80 \
    --alarm-actions arn:aws:autoscaling:us-east-1:616259365041:scalingPolicy:46c2d3b3-7f29-42b6-ab64-548f45de334f:autoScalingGroupName/autoscleasg:policyName/MyScaleUpPolicy \
    --dimensions "AutoScalingGroupName=autoscleasg"
$ as-put-scaling-policy MyScaleDownPolicy --auto-scaling-group autoscleasg \
    --adjustment=-1 --type ChangeInCapacity --cooldown 300
$ mon-put-metric-alarm MyLowCPUAlarm --comparison-operator LessThanThreshold \
    --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" \
    --period 600 --statistic Average --threshold 50 \
    --alarm-actions arn:aws:autoscaling:us-east-1:616259365041:scalingPolicy:30ccd42c-06fe-401a-8b8f-a4e49bbb9c7d:autoScalingGroupName/autoscleasg:policyName/MyScaleDownPolicy \
    --dimensions "AutoScalingGroupName=autoscleasg"
After this I'm running this command:
$ as-describe-auto-scaling-groups autoscleasg --headers
Response:
AUTO-SCALING-GROUP  GROUP-NAME   LAUNCH-CONFIG  AVAILABILITY-ZONES  LOAD-BALANCERS  MIN-SIZE  MAX-SIZE  DESIRED-CAPACITY
AUTO-SCALING-GROUP  autoscleasg  autoscalelc    us-east-1d          autoscalelb     1         5         1
INSTANCE  INSTANCE-ID  AVAILABILITY-ZONE  STATE      STATUS   LAUNCH-CONFIG
INSTANCE  i-acf48bd5   us-east-1d         InService  Healthy  autoscalelc
And then:
$ elb-describe-instance-health autoscalelb --headers
It shows:
INSTANCE_ID  INSTANCE_ID  STATE         DESCRIPTION                                                                                  REASON-CODE
INSTANCE_ID  i-ee364697   InService     N/A                                                                                          N/A
INSTANCE_ID  i-acf48bd5   OutOfService  Instance has failed at least the UnhealthyThreshold number of health checks consecutively.  Instance
My first problem is that it automatically creates one extra instance when there is no load on the main instance.
Secondly, the newly created instance is always OutOfService.
If I change the minimum size to 0 using the following command:
$ as-update-auto-scaling-group autoscleasg --launch-configuration autoscalelc \
    --availability-zones us-east-1d --min-size 0 --max-size 5
and then try to put load on the instance by cloning the Xen repository:
hg clone http://xenbits.xensource.com/xen-unstable.hg
Auto Scaling does not create any instance. Even when I run the above command in up to 5 sessions and CPU utilization reaches 100%, still no instance is created.
Please help me...
I am not sure what you want to achieve, but if you want to use the autoscaling capabilities to add more instances as traffic increases or decreases, you need to use the load balancer metrics (e.g. Latency):
Change yours to:
--namespace='AWS/ELB'
--metric-name Latency
--period 60 (this is super quick)
--threshold 2.0 (this is very low)
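Putting those together with the alarm command from the question, the scale-up alarm would look something like this (a sketch; replace the placeholder with your own MyScaleUpPolicy ARN):
$ mon-put-metric-alarm MyHighLatencyAlarm --comparison-operator GreaterThanThreshold \
    --evaluation-periods 1 --metric-name Latency --namespace "AWS/ELB" \
    --period 60 --statistic Average --threshold 2.0 \
    --alarm-actions <your-MyScaleUpPolicy-ARN> \
    --dimensions "LoadBalancerName=autoscalelb"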
To test whether it works, I use Apache Bench; I run the command below from multiple micro instances:
$ ab -n 10000 -c 10 http://<your ELB>.us-east-1.elb.amazonaws.com/index.php
I'm on a Mac. In the terminal, how would you figure out each of the following values?
Word size (64 bit vs. 32 bit)
L1/L2 cache size
Determine how much memory is being used (like df, but for RAM)
Thanks! I know you can find these in Activity Monitor, System Profiler, etc., but I am trying to boost my knowledge of the terminal and UNIX.
System Profiler is a GUI wrapper around /usr/sbin/system_profiler.
mress:10008 Z$ system_profiler -listDataTypes
Available Datatypes:
SPHardwareDataType
SPNetworkDataType
SPSoftwareDataType
SPParallelATADataType
SPAudioDataType
SPBluetoothDataType
SPCardReaderDataType
SPDiagnosticsDataType
SPDiscBurningDataType
SPEthernetDataType
SPFibreChannelDataType
SPFireWireDataType
SPDisplaysDataType
SPHardwareRAIDDataType
SPMemoryDataType
SPPCIDataType
SPParallelSCSIDataType
SPPowerDataType
SPPrintersDataType
SPSASDataType
SPSerialATADataType
SPUSBDataType
SPAirPortDataType
SPFirewallDataType
SPNetworkLocationDataType
SPModemDataType
SPNetworkVolumeDataType
SPWWANDataType
SPApplicationsDataType
SPDeveloperToolsDataType
SPExtensionsDataType
SPFontsDataType
SPFrameworksDataType
SPLogsDataType
SPManagedClientDataType
SPPrefPaneDataType
SPStartupItemDataType
SPSyncServicesDataType
SPUniversalAccessDataType
mress:10009 Z$ system_profiler SPHardwareDataType
Hardware:
Hardware Overview:
Model Name: iMac
Model Identifier: iMac10,1
Processor Name: Intel Core 2 Duo
Processor Speed: 3.33 GHz
Number Of Processors: 1
Total Number Of Cores: 2
L2 Cache: 6 MB
Memory: 16 GB
Bus Speed: 1.33 GHz
Boot ROM Version: IM101.00CC.B00
SMC Version (system): 1.52f9
Serial Number (system): QP0241DXB9S
Hardware UUID: 01C6B9E9-B0CB-5249-8AC7-069A3E44A188
You can also get some useful information from /usr/sbin/sysctl (try sysctl -a).
mress:10014 Z$ sudo sysctl -a | grep cache
Password:
hw.cachelinesize = 64
hw.l1icachesize = 32768
hw.l1dcachesize = 32768
hw.l2cachesize = 6291456
kern.flush_cache_on_write: 0
vfs.generic.nfs.client.access_cache_timeout: 60
vfs.generic.nfs.server.reqcache_size: 64
net.inet.ip.rtmaxcache: 128
net.inet6.ip6.rtmaxcache: 128
hw.cacheconfig: 2 1 2 0 0 0 0 0 0 0
hw.cachesize: 17179869184 32768 6291456 0 0 0 0 0 0 0
hw.cachelinesize: 64
hw.l1icachesize: 32768
hw.l1dcachesize: 32768
hw.l2cachesize: 6291456
machdep.cpu.cache.linesize: 64
machdep.cpu.cache.L2_associativity: 8
machdep.cpu.cache.size: 6144
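For the other items in the question, sysctl and vm_stat cover them too; a quick sketch (key names as on recent macOS, so verify with sysctl -a on your machine):
sysctl hw.cpu64bit_capable   # 1 if the CPU is 64-bit capable (word size)
sysctl hw.memsize            # physical RAM in bytes
vm_stat                      # free/active/inactive/wired page counts (RAM usage, like df but for RAM)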