I'm creating an image from a running instance in OpenStack
nova image-create <server-name>
and I'm just wondering: can this image be uploaded to EC2, or do I need to create an AMI from it?
Can someone guide me on how to go about this?
Glance, the OpenStack image service, is capable of storing a number of image types:
Raw
Machine (kernel/ramdisk outside of image, a.k.a. AMI)
VHD (Hyper-V)
VDI (VirtualBox)
qcow2 (Qemu/KVM)
VMDK (VMware)
OVF (VMware, others)
Ref: http://www.openstack.org/projects/image-service/
So, basically, you can upload AMIs to OpenStack directly.
Example:
KERNEL_ID=$(glance image-create --name="tty-linux-kernel" --disk-format=aki --container-format=aki < ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | awk '/ id / { print $4 }')
INITRD_ID=$(glance image-create --name="tty-linux-ramdisk" --disk-format=ari --container-format=ari < ttylinux-uec-amd64-12.1_2.6.35-22_1-loader | awk '/ id / { print $4 }')
glance image-create --name="tty-linux" --disk-format=ami --container-format=ami --property kernel_id=${KERNEL_ID} --property ramdisk_id=${INITRD_ID} < ttylinux-uec-amd64-12.1_2.6.35-22_1.img
When performing an image-create against a running instance, the admin guide notes:
Images can only be created from running instances if Compute is configured to use qcow2 images, which is the default setting. You can explicitly enable the use of qcow2 images by adding the following line to nova.conf:
use_cow_images=True
Assuming you are configured that way, then yes, the snapshot will be output in AMI format.
Ref:
http://docs.openstack.org/trunk/openstack-compute/admin/content/creating-images-from-running-instances.html
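As for the original direction (OpenStack to EC2), one rough sketch is to download the image from Glance and register it as an S3-backed AMI using the classic ec2-ami-tools workflow. The file names, bucket, and credentials below are placeholders:
# pull the image out of glance (ID and file name are placeholders)
glance image-download <image-id> --file myserver.img
# bundle, upload and register it as an S3-backed AMI
ec2-bundle-image -i myserver.img -k pk.pem -c cert.pem -u <aws-account-id>
ec2-upload-bundle -b my-ami-bucket -m /tmp/myserver.img.manifest.xml -a <access-key> -s <secret-key>
ec2-register my-ami-bucket/myserver.img.manifest.xml -n "my-server-ami"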
Using a CLI, I want to list the images in each repository in a Google Container Registry project but with the following conditions:
Lists the images with the latest tag only
Lists the human-readable size of the images
Lists the name of the images
The closest I've managed to get is through gsutil:
gsutil du -h gs://eu.artifacts.my-registry.appspot.com/containers/images
Resulting in:
33.77 MiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c1a2387ef6cb30a7428a46821f946d6a2c591a26cb2066891c55b2b6846ae2
1.27 MiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c1e7db6bf0140bd5fa34236a35453cb73cef01f6d89b98bc5995ae8ea07aaf
1.32 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c3c97495d60c68d37d04a7e6c9b3a48bb159ce5dde13d0d81b4e75e2a3f1d4
81.92 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c5483cb8ac9c9ae498507e15d68d909a11859a8e5238556b7188e0af4d9264
457.43 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c7f98faa1cfc05264e743e23ca2e118d24c57bfd67d5cb2e2c7a57e8124b6c
7.88 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c83b13d044844cd3f6b278382e408541f22029acaf55d9e7e5689b8d51eeea
But obviously this does not meet most of my criteria.
The information is available through the GUI on a per-image basis.
Any ideas?
I'm open to gsutil, gcloud, docker, anything really which can be installed on a docker container.
You can use the Google Cloud UI to accomplish this. There's a column selector right next to the filter bar and it has an option for the image size.
Once the column is displayed, you'll be able to order by size.
It seems, after reading your comment on Jason's answer, that your only outstanding issue is listing the container image sizes. That is not possible to retrieve with a gcloud command directly, but here are two workarounds I tested:
You can use the gcloud container images describe command to see the size of an image. Make sure you use the --log-http flag with it. The command should look like this:
$ gcloud container images describe gcr.io/myproject/myimage:tag --log-http
Another way to get the size of an image is the gsutil stat command.
So here's what I did:
a. Running the command below, I listed all my images from the GCS bucket and saved them to a file called images.txt:
$ gsutil ls "BUCKET URL" > images.txt
b. I ran gsutil stat in a loop, as below, reading each image name from images.txt and printing the size of each image in turn.
$ for x in $(cat images.txt); do gsutil stat "$x" | grep Content-Length | awk '{print $2}'; done
You can customize this little script according to your need.
I understand these are not efficient workarounds, but that seems to be all there is for now. However, GCR just implements the Docker Registry API, so you may want to read that API's documentation and build something of your own.
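For example, here is a minimal sketch against the Docker Registry v2 API that GCR exposes; the project and image names are placeholders, and it assumes curl and jq are available:
# fetch the v2 manifest and sum the layer sizes (in bytes)
curl -s -u "oauth2accesstoken:$(gcloud auth print-access-token)" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://eu.gcr.io/v2/myproject/myimage/manifests/latest" |
  jq '[.layers[].size] | add'
Note that this sums the compressed layer sizes, which is what the registry actually stores.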
Here is a rudimentary script which takes the first tag of each image, sums the sizes of all its layers, and writes the result to a report. It takes ages on a 3 TB repo, but at least I know which repos are big.
echo "REPO,SIZE" > repository-size-report.csv
for REPO in $(gcloud container images list --repository eu.gcr.io/comerge-comerge01-171833 --format="table[no-heading](NAME)") ; do
for TAGS in $(gcloud container images list-tags $REPO --format="table[no-heading](TAGS)"); do
TAG=$(echo $TAGS | cut -d, -f1)
SUM=0
for SIZE in $(gcloud container images describe $REPO:$TAG --log-http 2>&1 | grep size | grep -o '[0-9][0-9]*') ; do
SUM=$((SUM + SIZE))
done
HSUM=$(echo $SUM | numfmt --to iec --format "%8f")
echo "$REPO:$TAG,$HSUM"
echo "$REPO:$TAG,$HSUM" >> repository-size-report.csv
done
done
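To spot the biggest repos quickly, you can then sort the resulting CSV by the size column; GNU sort's -h flag understands the iec suffixes numfmt emits:
# skip the header, then sort by human-readable size, largest first
tail -n +2 repository-size-report.csv | sort -t, -k2 -hr | head -20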
You can use the gcloud container images list command to accomplish this task; however, you will need to set the appropriate flags for your use case. You can read more about the command and its flag options in the gcloud documentation.
When I'm hardcoding, i.e.
server1=ip-10.237.40.10-aws-n-myhost
ssh -i /home/<passwordfile> akhil@$server1
it works, but when I'm grepping the host from another file it does not.
E.g.:
server2=$(grep -e host /home/akhil/configuration.file | awk -F, '{print $2}')
echo $server2
ssh -i /home/<passwordfile> akhil@$server2
Printing server2 looks fine, but when it is used in ssh I get an error saying: host name not known.
I want to automate the script with a configuration file, so that whenever the cluster changes I can simply change the IP address in the config file instead of changing it in all my scripts.
I need help with this.
Thanks
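Without seeing the file this is a guess, but a common culprit here is stray whitespace or a carriage return (if the config file has DOS line endings) sneaking into the grepped value, which ssh then treats as part of the hostname. A minimal sketch that trims it, assuming a config line like host,10.237.40.10:
#!/bin/bash
# take the value after the comma and strip spaces and carriage returns
server2=$(grep -e host /home/akhil/configuration.file | awk -F, '{print $2}' | tr -d ' \r')
echo "connecting to [$server2]"   # brackets make any leftover characters visible
ssh -i /home/<passwordfile> akhil@"$server2"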
I am trying to find a way to perform a simultaneous copy of an AMI to all other regions.
I have searched near and far, but besides seeing in a blog post that it can be done, I haven't found a way using the aws cli ...
https://aws.amazon.com/blogs/aws/ec2-ami-copy-between-regions/
Currently I have written a bash script to do so, but I would like to find a better, easier way.
I have 8 AMIs that need to be copied to all regions.
Using an array:
declare -a DEST=('us-east-1' ...2....3)
aws ec2 copy-image --source-region $SRC --region ${DEST[@]} --source-image-id $ami
Do you guys have any other suggestion?
Thanks.
You can make it a single bash one-liner, which is especially useful if new regions appear in the future:
aws ec2 describe-regions --output text | \
  cut -f 3 | \
  xargs -I {} aws ec2 copy-image \
    --source-region $SRC \
    --region {} \
    --source-image-id $ami
Basically it goes like this:
aws ec2 describe-regions --output text returns the list of all available EC2 regions as a three-column table ("REGIONS", endpoint, region-name)
cut -f 3 takes the third column of that table (read as a list)
xargs -I {} stores each region name in {} so it can be passed as the --region parameter of the copy-image command
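One refinement to consider (a sketch, not tested here): copy-image fails for the destination that equals the source region, so it is worth filtering $SRC out, and xargs -P can run the copies in parallel; the --name value below is a placeholder:
aws ec2 describe-regions --output text | cut -f 3 | grep -v "$SRC" | \
  xargs -P 4 -I {} aws ec2 copy-image \
    --source-region "$SRC" --region {} --source-image-id "$ami" --name "my-copied-ami"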
Is there any way to run knife vsphere unattended? I have a deploy shell script which I am using to help me:
cat deploy-production-20-vm.sh
#!/bin/bash
##############################################
# These are machine dependent variables (need to change)
##############################################
HOST_NAME=$1
IP_ADDRESS="$2/24"
CHEF_BOOTSTRAP_IP_ADDRESS="$2"
RUNLIST=\"$3\"
CHEF_HOST=$HOST_NAME.my.lan
##############################################
# These are pseudo-environment independent variables (could change)
##############################################
DATASTORE="dcesxds04"
##############################################
# These are environment dependent variables (should not change per env)
##############################################
TEMPLATE="\"CentOS\""
NETWORK="\"VM Network\""
CLUSTER="ProdCluster01" #knife-vsphere calls this a resource pool
GATEWAY="10.7.20.1"
DNS="\"10.7.20.11,10.8.20.11,10.6.20.11\""
##############################################
# the magic
##############################################
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
--template $TEMPLATE \
--cips $IP_ADDRESS \
--vsdc MarkleyDC \
--datastore $DATASTORE \
--cvlan $NETWORK \
--resource-pool $CLUSTER \
--cgw $GATEWAY \
--cdnsips $DNS \
--start true \
--bootstrap true \
--fqdn $CHEF_BOOTSTRAP_IP_ADDRESS \
--chost $HOST_NAME \
--cdomain my.lan \
--run-list=$RUNLIST"
echo $VM_CLONE_CMD
eval $VM_CLONE_CMD
Which echoes (as a single line):
knife vsphere vm clone dcbsmtest --template "CentOS" --cips 10.7.20.84/24
--vsdc MarkleyDC --datastore dcesxds04 --cvlan "VM Network"
--resource-pool ProdCluster01 --cgw 10.7.20.1
--cdnsips "10.7.20.11,10.8.20.11,10.6.20.11" --start true
--bootstrap true --fqdn 10.7.20.84 --chost dcbsmtest --cdomain my.lan
--run-list="role[my-env-prod-server]"
When it runs it outputs:
Cloning template CentOS Template to new VM dcbsmtest
Finished creating virtual machine dcbsmtest
Powered on virtual machine dcbsmtest
Waiting for sshd...done
Doing old-style registration with the validation key at /home/me/chef-repo/.chef/our-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to 10.7.20.84
root@10.7.20.84's password:
If I step away from my desk and it prompts for a password, sometimes it times out, the connection is lost, and Chef doesn't bootstrap. I would also like to automate all of this so it can scale elastically based on system needs, which won't work with attended execution.
The idea I am going to run with, unless a better solution is offered, is to have a default password in the template and pass it to knife on the command line, then have Chef change the password once the build is complete, minimizing the exposure of a hard-coded password in the bash script controlling knife.
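A sketch of what that might look like; note that --ssh-user and --ssh-password are my assumption of the bootstrap options knife-vsphere forwards, so check knife vsphere vm clone --help for your version:
# DEFAULT_TEMPLATE_PW supplied at invocation time (e.g. from an
# environment variable) rather than hard coded in the script
knife vsphere vm clone $HOST_NAME \
  --template $TEMPLATE \
  --bootstrap true \
  --ssh-user root \
  --ssh-password "$DEFAULT_TEMPLATE_PW" \
  --run-list=$RUNLIST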
Update: I wanted to add that this is working like a charm. Ideally we would have changed the CentOS template we were deploying, but that wasn't possible here, so this is a fine alternative (we changed the root password after deploy anyhow).
I've created a snapshot of a volume within my AWS EC2 account using the ec2-api-tools. Currently I have:
>> ec2addsnap vol-xxxxxxxx -d 'My-first-Snapshot'
SNAPSHOT snap-12345678 vol-xxxxxxxx pending 2013-01-30T17:09:35+0000 086018780037 8 My-first-Snapshot
What I want to do is add a Name tag (--tag Name='Name Tag') to this newly created snapshot, using the snap-12345678 ID from the response.
This works:
>> ec2addtag snap-12345678 --tag Name='Name Tag'
How can I automate this? I've started writing a simple shell script, but I'm not sure how to query the response from the initial ec2addsnap to grab the newly created snapshot ID in order to apply ec2addtag. Cheers. (Thought I was posting this on Server Fault - my apologies.)
I managed to solve this via awk. My bash script:
today=$(date +"%d-%m-%Y")
tagname=$2
ec2addsnap vol-$1 -d "$2-$today"
ec2dsnap | grep "$2-$today" | awk -v tagname="$tagname" '{print "Adding tag to: " $2; system("ec2addtag " $2 " --tag Name=\"" tagname "\"")}'
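An alternative that avoids the ec2dsnap/grep round-trip (a sketch, assuming ec2addsnap prints the SNAPSHOT line shown in the question, with the snapshot ID in the second column):
#!/bin/bash
today=$(date +"%d-%m-%Y")
# capture the snapshot ID straight from ec2addsnap's output
snap_id=$(ec2addsnap vol-$1 -d "$2-$today" | awk '/^SNAPSHOT/ {print $2}')
echo "Tagging $snap_id"
ec2addtag "$snap_id" --tag Name="$2"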