Amazon Snapshot Query - amazon-ec2

I have a snapshot in my Amazon VPC account that should not be deleted by anyone. Is there any option in Amazon for making the snapshot read-only, or otherwise protecting it from deletion?

If a person has sufficient privileges on the account, they will be able to delete it. If you are really worried, put a copy on S3 and make sure no one has access to it except you.
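There is no read-only flag on a snapshot itself, but one hedged mitigation (assuming you administer IAM for the account; the policy name and snapshot ID below are placeholders) is an explicit IAM Deny on deleting that specific snapshot:

```shell
# Create a policy that denies deleting one specific snapshot; attach it
# to the users/groups/roles that should be blocked. An explicit Deny
# overrides any Allow, but it does not restrict the account root user.
aws iam create-policy \
  --policy-name DenyDeleteCriticalSnapshot \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Action": "ec2:DeleteSnapshot",
      "Resource": "arn:aws:ec2:*::snapshot/snap-0123456789abcdef0"
    }]
  }'
```

Attach it with `aws iam attach-group-policy` (or the user/role variants) to the principals you want restricted; anyone with rights to edit IAM policies can still remove it, which matches the caveat above.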


Protect root volume for AMIs with product code

We found that it is possible to take a snapshot of the root volume of an AMI with a Marketplace product code. From this snapshot it is easy to create a new AMI (this one keeps the product code), or to mount it on a new instance, copy the content to another volume, and create an AMI from that without the product code.
I would like to be able to protect any proprietary software installed on the AMI against reverse engineering (reading), and to prevent impersonating an instance ID carrying a product code from one that has been tampered with. We've read many different articles on the subject and have not found a way to prevent this without getting 'identity view' permission from the owner of the instance. Any suggestions are most appreciated.
Unfortunately, as a seller, you cannot prevent buyers in AWS Marketplace from accessing your AMI contents.
The AWS Marketplace policy requires the following:
AMIs must allow operating system (OS)-level administration capabilities to allow for compliance requirements, vulnerability updates, and log file access. Linux-based AMIs use SSH, and Windows-based AMIs use RDP.
https://docs.aws.amazon.com/marketplace/latest/userguide/product-and-ami-policies.html#accessibility

How to save/back up an Amazon instance locally

I would like to lower the costs I'm paying to Amazon.
There are stopped instances that I want to back up and keep on my local, on-premises server.
After creating an image from an instance, is there any way I can copy the AMI to my local server and remove it from Amazon? Then, on the day I need it back, I can transfer it from my local server to Amazon and use it again.
The instance was first created on Amazon. I would rather save the instance on-premises as a file, not as a virtual server.
The main issue is: how can I transfer and save the image of an instance that was created on Amazon as a file on my local server, and how can I return it to Amazon when I need to build the instance again?
Is there any way to do this?
Thanks a lot!
You can use some backup software (Duplicati, CloudBerry, or anything else):
1. Install the backup software on your EC2 instance.
2. Make an image backup to S3 cloud storage.
3. Install the backup software on your physical machine.
4. Restore the image from S3 cloud storage to the physical machine, or to your local storage to keep the backup locally.
And the last, but not least, thing:
Good luck!)))
You would need to use the VM Import/Export tool for that. Read the docs to make sure you know how to upload it again.
As for the cost, I am not sure exactly how Amazon calculates it; that is something you have to check in your account. Once you create the image, it is stored in your account, and I am not sure whether AWS keeps charging for it after you download it.
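For reference, a hedged sketch of the VM Import/Export round trip with the AWS CLI (the AMI ID, bucket name, and object key are placeholders, and the import path additionally requires the `vmimport` service role described in the docs):

```shell
# 1. Export the AMI to a VMDK file in your own S3 bucket:
aws ec2 export-image \
  --image-id ami-0123456789abcdef0 \
  --disk-image-format VMDK \
  --s3-export-location S3Bucket=my-export-bucket,S3Prefix=exports/

# 2. Download the exported file locally, then delete the cloud copies:
aws s3 cp s3://my-export-bucket/exports/ ./local-backup/ --recursive

# 3. Later, upload the file again and import it back as an AMI:
aws s3 cp ./local-backup/my-export.vmdk s3://my-export-bucket/imports/my-export.vmdk
aws ec2 import-image \
  --disk-containers "Format=VMDK,UserBucket={S3Bucket=my-export-bucket,S3Key=imports/my-export.vmdk}"
```

Both the export task and the later import run asynchronously; you can poll them with `aws ec2 describe-export-image-tasks` and `aws ec2 describe-import-image-tasks`.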
You can create an image file from your current drive, but it will be quite expensive:
1. Create another instance.
2. Attach your volume there as the second drive.
3. Use something like dd if=/dev/xvd0 of=drive.img ... to copy the volume to a file.
4. rsync / ftp / etc. the file to your local drive.
You will be billed for the second instance and for the transfer, and you'll be billed again when you want to restore the machine.
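As a sketch of those steps (the device name, file names, and host are assumptions; always confirm the attached device with lsblk first, since dd against the wrong device destroys data):

```shell
# Run on the helper instance after attaching the source volume.
lsblk                                                    # confirm the attached device, e.g. /dev/xvdf
sudo dd if=/dev/xvdf of=drive.img bs=1M status=progress  # raw copy of the volume to a file
gzip drive.img                                           # compress before transfer
rsync -avP drive.img.gz user@my-local-server:/backups/   # pull the file on-premises
```

Compressing before the rsync matters here because AWS bills for data transferred out of EC2, so a smaller file directly lowers the cost mentioned above.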
Have you checked the free tier? You get a year of free access to AWS for small instances and volumes.
You need a tool to get what you want. Take, for example, CloudBerry: create an image, store it at Amazon, and then restore it when you need it, and things are done. This is the best option for you; there are no other ways.

How to transfer an Amazon S3 bucket to another account?

I configure AWS instances for clients, and I need to transfer everything to them at the end, so that the billing for AWS and S3 usage also goes on their accounts.
I know there is a way to "transfer" an EC2 instance via AMI sharing, but is there a way to transfer ownership or share S3 buckets as well? (Preferably avoid making a copy but transfer the original bucket itself).
S3 buckets cannot be transferred between accounts, at least not in the simple sense of "here is my bucket, now it is your bucket"; everyone seems to use some form of copying. If you have permission on both your original bucket and their destination bucket, then you can use the AWS CLI and just run
aws s3 sync s3://bucket1 s3://bucket2
Have you tried adding their account as a user with full permissions on one of your buckets? http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-creatingrole-policyexamples.html
Then log in as their account and see if they can edit the bucket policy to remove your original account. I'm not sure how the billing would turn out, since you created the bucket.
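A sketch of that idea (the bucket name and the 12-digit account ID are placeholders): grant the client's account full access via a bucket policy. Note that storage charges stay with the account that owns the bucket, regardless of who is granted access.

```shell
# Allow everything in another AWS account (111122223333) to manage
# this bucket and all objects in it.
aws s3api put-bucket-policy \
  --bucket my-client-bucket \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-client-bucket",
        "arn:aws:s3:::my-client-bucket/*"
      ]
    }]
  }'
```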
If you are going to do this frequently, then you should create a new account per customer and transfer ownership of the whole account to the client at the end.
See consolidated billing and AWS Organizations.
You can transfer the contents of a bucket between accounts by making the destination bucket public (there are more secure ways to do this).
Then, using the AWS CLI authenticated as the other account, run the s3 cp command (the copy happens server-side, so it does not consume your local bandwidth):
aws s3 cp "s3://bucket-source/file.zip" "s3://bucket-you-dont-own/NewFolder/" --acl bucket-owner-full-control
If you do not add "--acl bucket-owner-full-control" to the command, you will get an access-denied error because the destination account will not have permissions on the copied objects:
A client error (AccessDenied) occurred when calling the ListObjects operation: Access Denied

backup aws ec2 data to a totally separate aws account

I want to back up my AWS snapshots to a completely separate AWS account for additional security (if my AWS credentials were compromised, someone could delete all my snapshots and volumes), but I'm a bit stumped on how to do this.
There doesn't seem to be a way to store a volume or snapshot in S3 such that another user could access that data in S3 and store it in a separate AWS account.
Does anyone have any suggestions on how to achieve this?
Thanks
1. Create an IAM user and an S3 bucket from your secret (backup) account.
2. Add a policy to the newly created bucket, allowing your newly created IAM user to put objects but denying them permission to delete objects.
3. Use the IAM user's credentials to upload your backups to S3.
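A sketch of step 2 as a bucket policy (the bucket name, account ID, and user name are placeholders): the backup user may write objects but is explicitly denied deletes.

```shell
# Run from the backup account that owns the bucket. The Deny statement
# wins over the Allow if both ever apply to the same request.
aws s3api put-bucket-policy \
  --bucket my-backup-bucket \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/backup-writer"},
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::my-backup-bucket/*"
      },
      {
        "Effect": "Deny",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/backup-writer"},
        "Action": ["s3:DeleteObject"],
        "Resource": "arn:aws:s3:::my-backup-bucket/*"
      }
    ]
  }'
```

With this in place, even if the backup-writer credentials leak, the attacker can add objects but cannot destroy the existing backups.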
You can share any EBS snapshot with another account by granting that account permission on the snapshot. Once the snapshot is shared, the other account can either copy the snapshot into its own account or create a volume from it.
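A sketch of that flow with the AWS CLI (the snapshot ID, account ID, and region are placeholders). The copy step matters: a merely shared snapshot can still be deleted by the source account, whereas a copy is owned outright by the backup account.

```shell
# From the primary account: grant the backup account (111122223333)
# permission to use the snapshot.
aws ec2 modify-snapshot-attribute \
  --snapshot-id snap-0123456789abcdef0 \
  --attribute createVolumePermission \
  --operation-type add \
  --user-ids 111122223333

# From the backup account: make an independent copy it fully owns.
aws ec2 copy-snapshot \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --description "cross-account backup copy"
```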

how to use a private yum repo on amazon-s3 to provision amazon-ec2 instances?

My fantasy is to be able to spin up a standard AMI, load a tiny script and end up with a properly configured server instance.
Part of this is that I would like to have a PRIVATE yum repo in S3 that would contain some proprietary code.
It seems that S3 wants you either to be public or to use Amazon's own special flavor of authentication.
Is there any way that I can use standard HTTPS + either Basic or Digest auth with S3? I'm talking about direct references to S3, not going through a web-server to get to S3.
If the answer is 'no', has anyone thought about adding AWS Auth support to yum?
The code in cgbystrom's git repo is an expression of intent rather than working code.
I've made a fork and gotten things working, at least for us, and would love for someone else to take over.
https://github.com/rmela/yum-s3-plugin
I'm not aware that you can use non-proprietary authentication with S3, however we accomplish a similar goal by mounting an EBS volume to our instances once they fire up. You can then access the EBS volume as if it were part of the local file system.
We can make changes to EBS as needed to keep it up to date (often updating it hourly). Each new instance that mounts the EBS volume gets the data current as of the mount time.
You can certainly use Amazon S3 to host a private Yum repository. Instead of fiddling with authentication, you could try a different route: limit access to your private S3 bucket by IP address. This is entirely supported, see the S3 documentation.
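A sketch of such an IP-restricted bucket policy (the bucket name and CIDR are placeholders; 203.0.113.0/24 is a documentation range, so substitute your instances' addresses):

```shell
# Allow anonymous reads of repo objects, but only from the listed
# address range; yum can then fetch over plain HTTPS with no
# S3-specific authentication.
aws s3api put-bucket-policy \
  --bucket my-private-yum-repo \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-private-yum-repo/*",
      "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
    }]
  }'
```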
A second option is to use a Yum plug-in that provides the necessary authentication. Seems like someone already started working on such a plug-in: https://github.com/cgbystrom/yum-s3-plugin.
