So I have access to a number of EC2 instances, some of which have been running for years. We have a special repository of the private keys to all of these; thus I can, for most of our instances, get into them as root (or the 'ubuntu' user in some cases) to administer them.
While playing with boto I noticed the EC2 .get_key_pair() and .get_all_key_pairs() methods and was wondering whether they could be used to recover any SSH keys which have slipped through the cracks of our procedures and been lost.
When I inspect the resulting boto.ec2.keypair.KeyPair objects, however, I see that the .material attribute seems to be empty and when I try to use the keypair's .save() method I get an exception complaining that the materials haven't been fetched.
(Other operations, such as .get_all_instances() and .run_instances(), work fine during the same session.)
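For reference, here's roughly the session I'm running (the region is a placeholder; credentials come from my boto config):

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    for kp in conn.get_all_key_pairs():
        print('%s %s material=%r' % (kp.name, kp.fingerprint, kp.material))
        # material is None; calling kp.save('/tmp') raises
        # boto.exception.BotoClientError: 'KeyPair contains no material'

    # meanwhile, other calls work fine in the same session:
    print(len(conn.get_all_instances()))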
So, what am I missing? Are there some other operations for which I have to provide the X.509 certificate in addition to my normal AWS key/secret pair?
(Note: I don't actually need this yet. I'm just familiarizing myself with the API and preparing for such eventualities).
It is not possible to recover SSH keys this way; the get_all_key_pairs() method name is a bit misleading in this regard, though the limitation is at least properly documented on the returned objects of class boto.ec2.keypair.KeyPair; see e.g. the save() method:
Save the material (the unencrypted PEM encoded RSA private key) of a
newly created KeyPair to a local file. [emphasis mine]
This is not a limitation of boto, but a result of the security architecture of Amazon EC2: you can only retrieve a complete key pair (i.e. including the private key) during its initial creation. The private key is never stored by EC2 and cannot be recovered if you ever lose it (but see below for a workaround).
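In other words, the key material is only ever present on the object returned by create_key_pair(), so that is the only point at which save() can work. A minimal sketch (key name, region, and path are just examples):

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    kp = conn.create_key_pair('my-new-key')  # kp.material is populated here,
    kp.save('~/.ssh')                        # and only here; this writes
                                             # ~/.ssh/my-new-key.pem

    # Fetching the same key pair later returns no material:
    kp2 = conn.get_all_key_pairs(['my-new-key'])[0]
    assert kp2.material is None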
Eric Hammond's recent answer to the related question consequences of deleted key pair on ec2 instance provides another angle on this topic, including a pointer to his article Fixing Files on the Root EBS Volume of an EC2 Instance, which explains how to regain access to such an instance regardless.
Given that some of your instances have been running for years, this might not work though, insofar as this process is only available for EBS boot instances (which weren't available back then); as Eric stresses as well, this is one of the many reasons why You Should Use EBS Boot Instances on Amazon EC2 nowadays.
Recently I've started seeing some instances with the metadata key http://169.254.169.254/latest/meta-data/events/maintenance/, but most of the instances I manage do not have it. I believe it's new, but I have no proof either way.
Anyone have more info about this?
I hit this issue as well; SaltStack is affected by it (while building grains).
The endpoint was previously available only when maintenance was scheduled.
I was able to confirm through support that this URI is now always available. Support was not sure whether this was intentional or a bug. For now this affects us-east-1 (for some use cases, like mine with Salt), and it will possibly affect other regions in the future if they decide to roll it out.
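If your code needs to cope with both behaviours, a short defensive probe along these lines helps (Python with the requests library is just one way to do it; the timeout value is an example):

    import requests

    URL = 'http://169.254.169.254/latest/meta-data/events/maintenance/'

    try:
        # The metadata service answers quickly or not at all,
        # so keep the timeout short.
        r = requests.get(URL, timeout=2)
        if r.status_code == 200:
            print('maintenance keys: %r' % r.text)  # child keys, if any
        else:
            print('endpoint not present (HTTP %d)' % r.status_code)
    except requests.exceptions.RequestException as exc:
        print('metadata service unreachable: %s' % exc)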
I want to get an Amazon EC2 instance (the first-year trial is free) for my tutorials, but I have found out that I need to complete a form about pen-testing on their website, as I will be using the EC2 instance only to perform such actions against my own systems, which I physically own. I was just wondering: can a normal person like me apply for this, or is it limited to companies, so normal users can't apply?
I will appreciate any help.
Kind Regards
You can apply just like anybody else; no special qualifications are needed. Mostly they want to make sure you are only pen-testing against your own instances, not somebody else's.
But also keep in mind, since it sounds like you are trying to stay within the free tier, that you will probably need to pay for a bigger instance to test against:
At this time, our policy does not permit testing small or micro RDS instance types. Testing of m1.small or t1.micro EC2 instance types is not permitted. This is to prevent potential adverse performance impacts on resources that may be shared with other customers.
Are there any examples of using encryption to encrypt the disk cache used by OkHttp's HttpResponseCache? Naively, I don't think this is a very hard thing to do, but I'd appreciate any advice or experience that helps avoid security pitfalls.
Without too many specifics, here's what I'm trying to achieve: a server that accepts users' API keys (typically a 40-character random string) for established service X and makes many API calls on the users' behalf. The server won't persist the API keys, but a likely use case is that users will periodically call the server, supplying the API key each time. Established service X uses reasonable rate limiting, but supports conditional (ETag, If-Modified-Since) requests, so server-side caching by my server makes sense. The information is private, though, and the server will be hosted on Heroku or the like, so I'd like to encrypt the files cached by HttpResponseCache so that, if the machine is compromised, they don't yield any information.
My plan would be to create a wrapper around HttpResponseCache that accepts a secret key, which would actually be a hash of half of the API-key string. This would be used to AES-encrypt the cached contents and the keys used by HttpResponseCache. Does that sound reasonable?
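To make the plan concrete, here is the scheme sketched in Python for brevity (the real implementation would be Java around OkHttp, and the key-derivation details are exactly the part I'm unsure about):

    import hashlib
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def derive_cache_key(api_key):
        # Hash half of the api-key string, as described above.
        # (A real KDF such as HKDF or PBKDF2 would be preferable.)
        half = api_key[:len(api_key) // 2].encode()
        return hashlib.sha256(half).digest()  # 32 bytes -> AES-256

    def encrypt_entry(key, plaintext):
        nonce = os.urandom(12)  # unique nonce per cache entry
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def decrypt_entry(key, blob):
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)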
Very difficult to do with the existing cache code. It's a journaled on-disk data structure that is not designed to support privacy, and privacy is not a feature you can add on top.
One option is to mount an encrypted disk image and put the cache in there. Similar to Mac OS X's FileVault for example. If you can figure out how to do that, you're golden.
Your other option is to implement your own cache, using the existing cache as a guide. Fair warning: the OkResponseCache is subject to change in the next release!
I'm starting to use EC2 with a lot of spot instances (>100), and I'm trying to find a way to retrieve all my EC2 instances' private IPs in order to use them later to deploy binaries and so on.
Can anyone help me to do it?
Thanks in advance.
Since you didn't list a framework or language, here are a few options (see the boto sketch after this list):
Use the AWS Console.
Use ElasticFox.
Use the commandline tools.
Use the .NET SDK.
Use the Java SDK.
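For example, a minimal sketch with boto (the region is a placeholder; credentials come from the environment or ~/.boto):

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    private_ips = []
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            # Restrict to running spot instances, per the question.
            if (instance.state == 'running'
                    and instance.spot_instance_request_id
                    and instance.private_ip_address):
                private_ips.append(instance.private_ip_address)

    print('\n'.join(private_ips))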
Amazon will start and stop spot instances without your involvement but based on your spot instance request parameters. Because of this, the list of spot instance IP addresses you query at time A might not be accurate at time B.
Problem 1: You think IP address A is one of your spot instances, but in the interim Amazon has terminated your spot instance and started somebody else's instance using the same private IP address. You'll want to make sure that an instance you are contacting is really yours before you pass it anything sensitive or trust any answers it gives you.
Problem 2: In the time since you got the query results, Amazon has started new spot instances for you based on the spot price. When you go to "deploy binaries and so on", you could miss some of the instances, leaving them in unstable or out-of-date states.
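To guard against Problem 1, you can re-verify an address with a fresh query immediately before deploying to it. A minimal check with boto might look like this (the 'private-ip-address' filter only matches instances in your own account):

    import boto.ec2

    def is_still_my_instance(conn, private_ip):
        # Re-query EC2 right before deploying anything sensitive.
        reservations = conn.get_all_instances(
            filters={'private-ip-address': private_ip})
        return any(instance.state == 'running'
                   for r in reservations
                   for instance in r.instances)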
You might consider having the spot instances configure and update themselves when they start up, and perhaps on regular intervals.
I'm building some AMIs from one of the basic ones on EC2. One of the instance types is running Tomcat and contains a lot of Lucene indexes; another instance will be running MySQL and have correspondingly large data requirements with it.
I'm trying to define the best way to include those in the AMIs that I'm authoring. If I mount /mnt/lucene and /mnt/mysql, those don't get included in the generated AMI. So it seems to me the preferred way to deal with those is to have an EBS volume for each one, take snapshots, and spin up instances that get their own EBS volumes based on the most recent snapshots. Is that the best way to proceed?
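For concreteness, the flow I'm picturing with boto would look something like this (region, volume, zone, and instance IDs are placeholders):

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # Periodically snapshot the data volume holding the Lucene indexes...
    snap = conn.create_snapshot('vol-1234abcd', 'lucene indexes, nightly')

    # ...and when spinning up a new instance, create a fresh volume from
    # the most recent snapshot and attach it.
    vol = conn.create_volume(size=100, zone='us-east-1a', snapshot=snap.id)
    conn.attach_volume(vol.id, 'i-0987fedc', '/dev/sdf')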
What is the point of instance storage? It seems like it will only work as a temporary storage area; what am I missing? Presumably there is a reason Amazon offers up to 800 GB of storage on standard large instances...
Instance storage is faster than EBS. You don't mention what you will be doing with your instances, but for some applications speed might be more important than durability. For an application that is primarily doing data mining on a large database, having a few hundred gigs of local, fast storage to host the DB might be beneficial. Worker nodes in a MapReduce cluster might also be great candidates for instance storage, depending on what type of job it was.
Another point of instance storage is that it's independent. There have been many EBS outages (google e.g. "site:aws.amazon.com ebs outage"). If the instance runs at all, it has the instance storage available. Obviously if you rely on instance storage, you need to run multiple instances (on multiple availability zones) and tolerate single failing instances.
I know this is late to the game, but here is one other little-considered factoid...
EBS-backed instances make it exceedingly easy to create AMIs, whereas instance-store-backed instances require that AMI creation be done locally on the machine itself, with a whole bunch of work to prep, bundle, upload, and register the AMI.