RightAws::Ec2.new, only per region?

This is a question about the Ruby RightAws API for Amazon EC2. When we create a new Ec2 instance with RightAws::Ec2.new, whether or not we pass a region, do all calls made through that instance apply only to things inside that region (or the default region)? Is this right, or are there calls that are universal, even when an explicit region is passed?
Thanks
Udaya
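For comparison (RightAws itself isn't shown here), the same one-client-per-region pattern is easy to demonstrate with boto3, the Python SDK used elsewhere on this page; the region names below are arbitrary examples:

    import boto3

    # One client per region: every call made through a client is scoped
    # to the region it was created with.
    ec2_east = boto3.client("ec2", region_name="us-east-1")
    ec2_west = boto3.client("ec2", region_name="eu-west-1")

    # The same call returns different results per client; neither client
    # sees the other region's instances.
    print(len(ec2_east.describe_instances()["Reservations"]))
    print(len(ec2_west.describe_instances()["Reservations"]))

    # One of the few genuinely region-spanning calls: listing the regions
    # themselves, which gives the same answer from any client.
    print([r["RegionName"] for r in ec2_east.describe_regions()["Regions"]])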

Related

dynamic ec2 resourcing in declarative cloud formation/terraform

We are moving our infrastructure to CloudFormation since it's much easier to describe the infrastructure in a clean, structured way. This works fantastically well for things like security groups, routing, VPCs, and transit gateways.
However, we have two issues that we are struggling with, and I don't think they fit the declarative, infrastructure-as-code paradigm that tools like Terraform and CloudFormation embody.
(1) We have a business requirement to run a scheduled batch at specific times of day. These jobs are very computationally intensive. To save costs, we run them on an EC2 instance that is brought up at that time, then torn down when the batch is finished. However, this seems to require a temporary change to the Terraform/CF files, then a change back. Is there a more native way of doing this?
(2) We store our clients' firewall rules for their load balancer (ALB) and let clients edit them dynamically. This information cannot be kept in the Terraform/CF files since clients can change it on demand.
Is there a way of doing these things properly in CF/Terraform?
(1) If you have to use EC2, you could create a Lambda function that starts your EC2 instances, then create a CloudWatch Events rule that triggers the Lambda at your specified date/time. For more details, see https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/. Once the job is done, have the EC2 instance shut itself down using the AWS SDK or AWS CLI.
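A minimal sketch of such a Lambda in Python with boto3; the region and instance ID are placeholders, not values from the question:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    def lambda_handler(event, context):
        # Entry point for a Lambda wired to a scheduled CloudWatch Events rule.
        # The instance ID is a placeholder; in practice you might look the
        # instances up by tag or read the IDs from an environment variable.
        ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])

    # At the end of the batch, the instance can stop itself via the same API:
    #     ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])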
Alternatively, you could use AWS Lambda to run your batch job itself; you only get charged while the Lambda runs. Likewise, create a CloudWatch Events rule that schedules the Lambda.
(2) You could store the firewall rules in your own DB and modify the actual ALB security group rules using the AWS SDK. I don't think it's a good idea to store these things in Terraform/CF. IMHO, Terraform/CF are great for declaring infrastructure, but they aren't a good fit for resources that change dynamically, especially when changed by third parties like your clients.
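As a rough sketch of that approach in Python/boto3, with a hypothetical helper name and placeholder security group ID and CIDR:

    import boto3

    ec2 = boto3.client("ec2")

    def allow_client_cidr(sg_id, cidr):
        # Hypothetical helper: push one client-edited allow rule from your
        # own DB onto the ALB's security group.
        ec2.authorize_security_group_ingress(
            GroupId=sg_id,
            IpPermissions=[{
                "IpProtocol": "tcp",
                "FromPort": 443,
                "ToPort": 443,
                "IpRanges": [{"CidrIp": cidr, "Description": "client rule"}],
            }],
        )

    allow_client_cidr("sg-0123456789abcdef0", "203.0.113.0/24")  # placeholders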

AWS Lambda trigger not showing CloudFront

I created a new Lambda function but do not see CloudFront as an option in the Triggers. Does anybody know why that might be? Thanks
As per AWS's current documentation:
Make sure that you're in the US-East-1 (N. Virginia) Region. You must be in this Region to create Lambda@Edge functions.
See: AWS Tutorial: Creating a Simple Lambda@Edge Function
You cannot add the trigger from the Lambda console. To add a trigger for a cache behavior, you need to do it from the CloudFront console.
It is explained in detail here: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-add-triggers-cf-console.html
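For scripted setups, the same association can also be made through the CloudFront API rather than the console. A rough boto3 sketch, with placeholder distribution ID and function ARN (note the ARN must point at a published function version, not $LATEST):

    import boto3

    cf = boto3.client("cloudfront")

    # Fetch the current config; the returned ETag is required for the update.
    resp = cf.get_distribution_config(Id="E2EXAMPLE")  # placeholder ID
    config, etag = resp["DistributionConfig"], resp["ETag"]

    # Attach a published function *version* ARN as a viewer-request trigger.
    config["DefaultCacheBehavior"]["LambdaFunctionAssociations"] = {
        "Quantity": 1,
        "Items": [{
            "LambdaFunctionARN": "arn:aws:lambda:us-east-1:123456789012:function:my-edge-fn:1",
            "EventType": "viewer-request",
        }],
    }

    cf.update_distribution(Id="E2EXAMPLE", DistributionConfig=config, IfMatch=etag)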
CloudFront's Lambda@Edge integration feature requires that the functions be written in Node.js. It isn't possible to trigger a function in another language directly from CloudFront.
You must create functions with the nodejs6.10 or nodejs8.10 runtime property.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-requirements-limits.html#lambda-requirements-lambda-function-configuration
Of course, in the Node.js runtime environment you have the AWS JavaScript SDK available, so if you had a really compelling case, you could use it from a JavaScript function to invoke another, different Lambda function written in a different language... but it's difficult to imagine a common case where that would make sense, because of the added latency and cost. I have, for example, used this solution to allow Lambda@Edge to reach inside a VPC -- which can only be done by invoking a second Lambda function (one that can be configured with VPC access) from inside the first (which can't, because Lambda@Edge functions run in the region nearest to the viewer rather than in a single region, so they will not run inside a VPC).
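The actual edge function must be Node.js using the AWS JavaScript SDK, but the invoke call it makes has the same shape across SDKs; here it is in Python/boto3 for consistency with the other sketches on this page (the target function name and payload are hypothetical):

    import json

    import boto3

    lam = boto3.client("lambda", region_name="us-east-1")

    # Hypothetical target: the second, VPC-enabled function doing the real work.
    resp = lam.invoke(
        FunctionName="vpc-worker",
        InvocationType="RequestResponse",       # synchronous request/response
        Payload=json.dumps({"path": "/lookup"}),
    )
    print(json.loads(resp["Payload"].read()))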

Amazon EC2 and Pen-Testing Form

I want to get an Amazon EC2 instance (the first year of the trial is free) for my tutorials, but I have found out that I need to complete a pen-testing form on Amazon's website, since I will be using the EC2 instance only to perform such actions against my own systems, which I physically own. I was wondering whether a normal person like me can apply, or whether it is limited to companies, so that normal users can't apply.
I would appreciate any help.
Kind Regards
You can apply just like anybody else - no special qualifications are needed. Mostly they want to make sure you are only pen testing against your own instances, not somebody else's.
But also keep in mind, since it sounds like you are trying to stay within the free tier, that you will probably need to pay for a bigger instance to test against:
At this time, our policy does not permit testing small or micro RDS instance types. Testing of m1.small or t1.micro EC2 instance types is not permitted. This is to prevent potential adverse performance impacts on resources that may be shared with other customers.

CloudFormation extras when compared with Chef

What can I do with CloudFormation that I cannot do with Chef? It looks like Chef supports spawning nodes in EC2, and I can read back the information about the spawned nodes (IP, ...), so what would I still need CloudFormation for?
Very little, but what it does makes it worth using.
The primary advantage of CloudFormation is that Amazon ties the created resources together. In other words, if your application comprises a DB, four webservers, an autoscaling group, a launch configuration, ingress rules, some VPC subnets, an internet gateway or two, and a VPN connection, you can manage them all in a single place, as a CloudFormation stack. Want to shoot them? That's easy: kill the stack and every resource dies with it.
Sure, you could technically do this with Chef and the Amazon API. CloudFormation is little more than a way of generating extremely sophisticated sets of AWS API calls + Magic (tm), so you could always roll your own. Netflix sort of did that with its open-source tool, Asgard, and RightScale and other services are more or less that. If those tools don't meet your needs, or if you can't afford them, CloudFormation is a nice supplement to AWS deployments. In fact, you don't even need to rely on it exclusively: it's quite simple to launch chef-solo from CloudFormation, which lets you leverage the advantages of both.
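To make the "single place" point concrete, a minimal boto3 sketch of the stack lifecycle, assuming a placeholder stack name and template URL:

    import boto3

    cfn = boto3.client("cloudformation")

    # Everything declared in the template lives and dies as one unit.
    cfn.create_stack(
        StackName="my-app",  # placeholder name
        TemplateURL="https://s3.amazonaws.com/my-bucket/app.template",  # placeholder
    )
    cfn.get_waiter("stack_create_complete").wait(StackName="my-app")

    # One call later tears down the DB, web servers, ASG, subnets, gateways, ...
    cfn.delete_stack(StackName="my-app")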

boto .get_all_keypairs() method and the .save() of its results

So I have access to a number of EC2 instances, some of which have been running for years. We have a special repository of the private keys to all of these; thus I can, for most of our instances, get into them as root (or the 'ubuntu' user in some cases) to administer them.
While playing with boto I noticed the EC2 .get_key_pair() and .get_all_key_pairs() methods and was wondering whether they could be used to recover any SSH keys that have slipped through the cracks of our procedures and been lost.
When I inspect the resulting boto.ec2.keypair.KeyPair objects, however, I see that the .material attribute seems to be empty, and when I try to use the key pair's .save() method I get an exception complaining that the material hasn't been fetched.
(Other operations, such as .get_all_instances() and .run_instances(), work during the same session.)
So, what am I missing? Are there some other operations for which I have to provide the X.509 cert. in addition to my normal AWS key/secret pair?
(Note: I don't actually need this yet. I'm just familiarizing myself with the API and preparing for such eventualities).
It is not possible to recover SSH keys this way; the get_all_key_pairs() method name is a bit misleading in this regard, though this is at least properly documented by means of the returned object of class boto.ec2.keypair.KeyPair; see e.g. the save() method:
Save the material (the unencrypted PEM encoded RSA private key) of a newly created KeyPair to a local file. [emphasis mine]
This is not a limitation of boto, but a result of the security architecture of Amazon EC2: you can only retrieve the complete key pair (i.e. including the private key) during its initial creation; the private key is never stored by EC2 and cannot be recovered if you ever lose it (but see below for a workaround).
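A minimal boto sketch of the difference, with placeholder region and key names:

    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")  # region is an assumption

    # An existing key pair comes back without private key material:
    kp = conn.get_key_pair("some-old-key")  # hypothetical key name
    print(kp.material)                      # None -- this is why .save() raises

    # Only a freshly created key pair carries the PEM material, exactly once:
    new_kp = conn.create_key_pair("fresh-key")
    new_kp.save("/tmp")                     # writes /tmp/fresh-key.pem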
Eric Hammond's recent answer to the related question consequences of deleted key pair on ec2 instance provides another angle on this topic, including a pointer to his article Fixing Files on the Root EBS Volume of an EC2 Instance, which explains how to regain access to the instance regardless.
Given that some of your instances have been running for years, this might not work, though, insofar as this process is only available for EBS-boot instances (which weren't available back then); as Eric stresses as well, this is one of the many reasons why You Should Use EBS Boot Instances on Amazon EC2 nowadays.
