I'm trying to write a scalable, reusable script to provision EC2 instances using Ansible. As part of this, I would like to be able to determine which Route 53 hosted zone my machine is part of, so I can add it as a record set for a private zone. I don't want to have to enter the zone ... I want to be able to figure it out from the EC2 instance.
For a given EC2 instance, I can get the instance. From the instance, I can get the VPC ID. I know that VPC IDs are associated with Route 53 hosted zones, but I can't seem to find an AWS CLI command to figure out the hosted zone from the VPC ID.
I've found the route53 list-vpc-association-authorizations --hosted-zone-id= command, which has to be run on each individual zone, but the result is an empty array for a zone that I know for a fact is associated with a VPC.
Can anyone help me derive the correct private hosted zone, given that I know the VPC ID and the EC2 instance ID?
Thanks
Maybe too simple for people, but this works:
aws route53 list-hosted-zones --output text | grep 'MYDOMAIN' | awk '{print $3}' | cut -c13-
...Just lists the domains in AWS in column format, searches for your domain and then cuts out the zone id with awk and cut.
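If jq is available, the same lookup can be done without relying on the text-output column layout. A sketch, with `example.com.` standing in for your domain (note that Route 53 stores zone names with a trailing dot):

```shell
# Select the zone by name and strip the "/hostedzone/" prefix from its Id
aws route53 list-hosted-zones \
  | jq -r '.HostedZones[] | select(.Name == "example.com.") | .Id | sub("^/hostedzone/"; "")'
```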
Took me a while, but I figured it out:
getHostedZone(){
    # collect the ids of every hosted zone in the account
    ZONE_IDS=$(aws route53 list-hosted-zones --region "$2" | jq '.HostedZones | map(.Id)')
    while IFS= read -r; do
        ZONE=$(aws route53 get-hosted-zone --region "$2" --id "$REPLY")
        # only private zones carry a "VPCs" array
        hasVPCs=$(echo "$ZONE" | jq 'has("VPCs")')
        VPCs=$(echo "$ZONE" | jq '.VPCs')
        if [ "$hasVPCs" == true ]
        then
            VPC=$(echo "$VPCs" | jq ".[] | select(.VPCId == \"$1\")")
            if [ -n "$VPC" ]
            then
                HOSTED_ZONE=$(echo "$REPLY" | sed 's/^\/hostedzone\///')
            fi
        fi
    done < <(echo "$ZONE_IDS" | jq -r '.[]')
    echo "$HOSTED_ZONE"
}
Called with:
ZONE_ID=$(getHostedZone $VPC_ID $EC2_REGION)
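For what it's worth, newer AWS CLI releases have a direct call for this, `route53 list-hosted-zones-by-vpc`, which avoids looping over every zone. A sketch, assuming `$VPC_ID` and `$EC2_REGION` as above and a single associated zone:

```shell
# Ask Route 53 directly which private zones are associated with the VPC;
# .HostedZoneSummaries[0] assumes exactly one associated zone
ZONE_ID=$(aws route53 list-hosted-zones-by-vpc \
  --vpc-id "$VPC_ID" --vpc-region "$EC2_REGION" \
  | jq -r '.HostedZoneSummaries[0].HostedZoneId')
```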
Let me rephrase the whole question.
I already generated a config file for all VPCs using the script below:
#!/bin/bash
VPC="$(aws ec2 describe-vpcs | jq -r '.Vpcs[] | .VpcId')"
for x in $VPC
do
echo $x
aws ec2 describe-instances --filters "Name=vpc-id,Values=$x" | jq -r '.Reservations[].Instances[] | (.Tags[]//[]|select(.Key=="Name")|.Value) as $name | "Host \($name) \nHostname \(.PrivateIpAddress)"'
done
This gave me output per VPC, with Host and Hostname lines, like this:
vpc-agdh5j6j
Host remote-server1
Hostname 123.45.6.6
Host remote-server2
Hostname 456.4.56.7
vpc-guh5jk6y
Host remote-server3
Hostname 245.24.789.9
Host remote-server4
Hostname 457.87.1.7
.
.
.
so on
Now I save this output to a file and am trying to add one more line after each Hostname line in the file.
#!/bin/bash
echo "user : "$1
VPC="$(aws ec2 describe-vpcs | jq -r '.Vpcs[] | .VpcId')"
for x in $VPC
do
echo $x
aws ec2 describe-instances --filters "Name=vpc-id,Values=$x" | jq -r '.Reservations[].Instances[] | (.Tags[]//[]|select(.Key=="Name")|.Value) as $name | "Host \($name) \nHostname \(.PrivateIpAddress)"' > config.txt
sed -i '/^Hostname.*/a User\t$1' config.txt
cat config.txt
done
Here the user is passed as an argument to the script as mentioned above, but the script is not picking up the value given for "user" and simply displays $1 literally in the output.
It gives the output below.
vpc-agdh5j6j
Host remote-server1
Hostname 123.45.6.6
User $1
Host remote-server2
Hostname 456.4.56.7
User $1
vpc-guh5jk6y
Host remote-server3
Hostname 245.24.789.9
User $1
Host remote-server4
Hostname 457.87.1.7
User $1
.
.
so on
It is not taking the value given in the argument.
Probably not a 100% fit as your requirements are a bit unclear, but this should get you started.
sed "s/^ *\(HostName .*\)/ \1\n User ${username}\n ProxyJump ${jump}\n/" ~/.ssh/config
Check that the output is as you would like it, then add -i to sed to make the changes in-place.
The logic:
s/ -- search/replace
^ *\(HostName .*\) -- the line on which to act
^ * -- start of line (^ followed by 0..n spaces)
\( -- starts a group
HostName  -- the verbatim string, followed by a space
.* -- 0..n characters (to end of line)
\) -- ends group
/ -- end of search pattern, start replace pattern
\1 -- four spaces, followed by the first (and only) group from the search pattern
\n -- newline
User ${username}\n -- your User entry
ProxyJump ${jump}\n -- your ProxyJump entry
/ -- end of replace pattern
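To sanity-check the pattern on a throwaway snippet before touching `~/.ssh/config` (with placeholder values for `username` and `jump`):

```shell
username=alice
jump=bastion.example.com
# Feed a two-line sample config through the same substitution;
# the HostName line gains User and ProxyJump lines after it
printf 'Host web1\n    HostName 10.0.0.5\n' \
  | sed "s/^ *\(HostName .*\)/    \1\n    User ${username}\n    ProxyJump ${jump}\n/"
```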
The script is working: if I use double quotes for the sed command, it takes the variable value passed in the argument.
sed -i "/^Hostname.*/a User\t$1" config.txt
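The underlying rule: single quotes suppress parameter expansion, double quotes allow it. A minimal illustration, using `set --` to fake the script's first argument:

```shell
set -- ubuntu            # pretend the script was called with "ubuntu"
echo 'User $1'           # single quotes: prints the literal text $1
echo "User $1"           # double quotes: prints User ubuntu
```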
I am using this code to get my AWS regions. I want to work on them in a while loop.
awsRegionList=$(aws ec2 describe-regions | jq -r '.Regions[] | .RegionName')
while [I can't find the expression work with my variable]:
do
echo " working on : (I want here the regionName)"
done
In bash, the straightforward way to iterate over a word list like this is a for loop rather than a while loop:
awsRegionList=$(aws ec2 describe-regions | jq -r '.Regions[] | .RegionName')
for region in $awsRegionList
do
echo " working on : ${region}"
done
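If you specifically want a while loop, you can feed the list into `read` line by line instead; process substitution keeps the loop body in the current shell, so variables set inside it survive:

```shell
# Read one region name per line from the describe-regions output
while IFS= read -r region; do
    echo " working on : ${region}"
done < <(aws ec2 describe-regions | jq -r '.Regions[] | .RegionName')
```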
I need a list of all repository tags for a remote Docker registry, along with their dates, i.e. the date each image tag was pushed.
Or at least the tags sorted according to when they were created (without the date, but in chronological order).
What I have now (from the Official Docker docs) is:
curl -XGET -u my_user:my_pass 'https://my_registry.com/v2/my_repo/tags/list'
But that seems to return the tags in random order. I need to further filter / postprocess the tags programmatically, so I need to fetch additional per-tag metadata.
Any idea how to do that?
My two cents:
order_tags_by_date() {
    if [ -z "$1" ] || [ -z "$2" ]; then
        echo "Get tag list of a component and order by date. Please enter a component name and a tag."
        echo "For example: tags my-app 20200606"
        return 1
    fi
    # get the full tag list
    url=some_host
    result=$(curl -s "$url:5000/v2/$1/tags/list")
    # parse the response, take the "tags" array, sort by name, reverse,
    # and emit it as tab-separated values read into a bash array;
    # "reverse" puts the latest tag first; remove it if you want the oldest first
    IFS=$'\t' read -r -a tags <<< "$(echo "$result" | jq -r '.tags | sort | reverse | @tsv')"
    # for each tag, fetch the manifest from the docker API and parse the
    # created field of the first history entry; I assume all history
    # entries share the same created timestamp
    json="["
    for tag in "${tags[@]}"
    do
        host=another-docker-api-host
        date=$(curl -sk "$host/v2/$1/manifests/$tag" | jq -r '.history[0].v1Compatibility' | jq '.created')
        json+='{"tag":"'$tag'","date":'$date'},'
    done
    # drop the trailing comma and close the array
    valid_json=${json::-1}']'
    echo "$valid_json" | jq 'sort_by(.date, .tag) | reverse'
}
I constructed another JSON document for jq to sort on the date and tag fields, because in my case some images have timestamp-based tags but a creation date that is always 19700101, while other images have correct dates but number-based tags; so I order by both. You can also remove "reverse" at the end to sort ascending.
If you don't want the date, add this on the last line of the script, after reverse:
| .[] | .tag
So it will get all elements' tag value and they are already sorted.
Images created by jib, though, will have a date of 1970-01-01; this is by design.
In the past, I wrote a script to migrate images from a local Docker registry to ECR; maybe you want to use some of its lines, like:
tags=$(curl -s https://$username:$password@$privreg/v2/$repo/tags/list?n=2048 | jq '.tags[]' | tr -d '"')
creationdate=$(curl -s https://$username:$password@$privreg/v2/$repo/manifests/$tag | jq -r '.history[].v1Compatibility' | jq '.created' | sort | tail -n1)
#!/bin/bash
read -p 'Username: ' username
read -sp 'Password: ' password
privreg="privreg.example.com"
awsreg="<account_id>.dkr.ecr.<region_code>.amazonaws.com"
repos=$(curl -s https://$username:$password@$privreg/v2/_catalog?n=2048 | jq '.repositories[]' | tr -d '"')
for repo in $repos; do
    tags=$(curl -s https://$username:$password@$privreg/v2/$repo/tags/list?n=2048 | jq '.tags[]' | tr -d '"')
    project=${repo%/*}
    service=${repo#*/}
    awsrepo=$(aws ecr describe-repositories | grep -o \"$repo\" | tr -d '"')
    if [ "$awsrepo" != "$repo" ]; then aws ecr create-repository --repository-name $repo; fi
    for tag in $tags; do
        creationdate=$(curl -s https://$username:$password@$privreg/v2/$repo/manifests/$tag | jq -r '.history[].v1Compatibility' | jq '.created' | sort | tail -n1)
        echo "$repo:$tag $creationdate" >> $project-$service.txt
    done
    sort -k2 $project-$service.txt | tail -n3 | cut -d " " -f1 > $project-$service-new.txt
    cat $project-$service-new.txt
    while read -r repoandtags; do
        sudo docker pull $privreg/$repoandtags
        sudo docker tag $privreg/$repoandtags $awsreg/$repoandtags
        sudo docker push $awsreg/$repoandtags
    done < $project-$service-new.txt
done
The script might not work as-is and may need some changes, but you can use parts of it, so I will leave it in the post as an example.
The tag list should be returned using lexical ordering according to the OCI spec:
If the list is not empty, the tags MUST be in lexical order (i.e. case-insensitive alphanumeric order).
There's no API presently available in OCI to query when the image was pushed. However, images do include a configuration that you can query and parse to extract the image creation time as reported by the build tooling. This date may be a lie, I've worked on tooling that backdates this for purposes of reproducibility (some well known date like 1970-01-01, or the datestamp of the Git commit being built), some manifests pushed may have an empty config (especially when it's not actually an image), and a manifest list will have multiple manifests each with different configs.
So with that disclaimer that this is very error prone, you can fetch that config blob and extract the creation date. I prefer to use tooling for this beyond curl to handle:
different types of authentication, or anonymous logins, that registries require
different types of image manifests; I'm aware of at least 6: OCI and Docker variants for images and manifest lists, plus schema v1 manifests with and without a signature
selecting a platform from multi-platform manifests and parsing the json of the config blob
The tooling I work on for this is regclient, but there's also crane and skopeo. With regclient/regctl, this looks like:
repo=registry.example.com/your/repo
regctl tag ls "$repo" | while read -r tag; do
regctl image config "$repo:$tag" --format "{{ printf \"%s: $tag\\n\" .Created }}"
done
I've got a group of AWS instances that I'm parsing via aws ec2 describe-instances. I'm looking to trim out all the records whose IPs do not start with '10.10'.
aws ec2 describe-instances --no-paginate --filter "Name=instance-state-name,Values=running" --query 'Reservations[].Instances[].{Private:PrivateIpAddress,PublicDNS:PublicDnsName,PublicIP:PublicIpAddress}' | jq '.[] | select( .Private | contains("10.10"))'
This gets me the exact opposite of what I want. It seems logical that I should be able to negate the contains in some way - but I've not been able to glean it from the documentation, nor through experimentation. My jq proficiency is middling, so perhaps I'm using the wrong operator or function here.
While I WOULD like an answer to this specific jq question, I'll accept an answer that utilizes JMESPath through the --query switch to yield the same result.
Jeff Marcado's answer in the comments will be accepted if he writes it up as a full-fledged answer. In the meantime, since I had hit a wall trying to get jq to do it, I experimented with the --query syntax for AWS to get this.
It might even be a bit better, since this catches only addresses that start with 10.10., whereas the jq above catches any address that contains 10.10, so things like 10.100. or 110.100 would slip through. That's assuming there is no operator similar to starts_with in jq; probably there is. Regardless, I'm putting this here because it worked for my end goal and may be helpful to someone else at some point.
aws ec2 describe-instances \
--no-paginate --filter "Name=instance-state-name,Values=running" \
--query 'Reservations[].Instances[?starts_with(PrivateIpAddress, `10.10.`) == `false`]' |
jq '.[] | .[] | {PrivateIpAddress, PublicIpAddress, PublicDnsName}'
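For completeness, jq can express the negation directly: pipe the condition through `not`, and `startswith` gives the anchored match that `contains` lacks (this assumes every selected instance has a non-null Private field):

```shell
# Keep only instances whose private IP does NOT start with 10.10.
aws ec2 describe-instances --no-paginate \
  --filter "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].{Private:PrivateIpAddress,PublicDNS:PublicDnsName,PublicIP:PublicIpAddress}' \
  | jq '.[] | select(.Private | startswith("10.10.") | not)'
```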
The size property on the DescribeVolumes returns the allocated size, but I would like to know the current used size so that I can plan for requesting an EBS size increase.
If you're looking for the 'used' size, then you'll need to get it with an operating-system command run on the instance the EBS volume is attached to.
On Unix you can use df -Th and check the values against your mount point. On Windows, you can just use the 'My Computer' page to check this.
It sounds like the intent of this question is to determine how much allocated EBS space you are using so you can know when you're about to hit your limit. Since Amazon has a default limit of 20T, hitting it unexpectedly (as I did) is not pleasant.
If you have the command line tools, a little bash magic gets you the answer:
t=0; for i in `ec2-describe-volumes | grep VOLUME | awk '{print $3}'`; do t=`expr $t + $i`; done; echo $t
(get the ticks right!)
Though it would be nice if amazon told you in an easy-to-find spot what your current allocated and max allowed is.
EDIT: there are now standard and provisioned-IOPS EBS volume types. The command above shows you the cumulative allocation of both types.
For standard ebs only:
t=0; for i in `ec2-describe-volumes | grep VOLUME | grep standard | awk '{print $3}'`; do t=`expr $t + $i`; done; echo $t
for provisioned-iops ebs only:
t=0; for i in `ec2-describe-volumes | grep VOLUME | grep io1 | awk '{print $3}'`; do t=`expr $t + $i`; done; echo $t
Inspired by rdickeyvii's answer above, a simple solution summing using jq:
Get all sizes
aws ec2 describe-volumes | jq '.Volumes[] | .Size'
If you just care about certain volume types:
aws ec2 describe-volumes | jq '[.Volumes[] | select(.VolumeType=="gp2") | .Size] | add'
Printout with a sum using jq's add
echo "Total volume is $(aws ec2 describe-volumes | jq '.Volumes[] | select(.VolumeType=="gp2") | .Size' | jq --slurp 'add') GB"
Looks like:
Total volume is 89708 GB