Kubernetes on AWS ELB names

When running Kubernetes on AWS, exposing a Service with "type=LoadBalancer" works well. However, the name given to the Elastic Load Balancer is a rather long hash, so it is hard to tell from the AWS console which load balancer belongs to which Service.
Is it possible to specify the name of the ELB object at service creation time?
If not, I might create an issue that the service name be used when creating the ELB.
On a related note, is it possible to modify the security group (firewall) that the load balancer uses?

The tags of the ELB contain the information you're looking for.
$ aws elb describe-tags --load-balancer-names xxxxx
{
    "TagDescriptions": [
        {
            "LoadBalancerName": "xxxxx",
            "Tags": [
                {
                    "Value": "default/nginx",
                    "Key": "kubernetes.io/service-name"
                },
                {
                    "Value": "my-cluster",
                    "Key": "KubernetesCluster"
                }
            ]
        }
    ]
}
If you want to give the ELB a proper domain name, you can assign one using Route53. It can be automated with something like route53-kubernetes.
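If you want to do this lookup without clicking through the console, a small boto3 sketch along these lines should work (assuming classic ELBs, which is what type=LoadBalancer creates here, and default credentials; names are placeholders):

import boto3

elb = boto3.client('elb')

# Map each Kubernetes Service to the ELB that fronts it, using the tags shown above.
names = [lb['LoadBalancerName'] for lb in elb.describe_load_balancers()['LoadBalancerDescriptions']]
service_to_elb = {}
for i in range(0, len(names), 20):  # describe_tags accepts at most 20 names per call
    for desc in elb.describe_tags(LoadBalancerNames=names[i:i + 20])['TagDescriptions']:
        tags = {t['Key']: t['Value'] for t in desc['Tags']}
        if 'kubernetes.io/service-name' in tags:
            service_to_elb[tags['kubernetes.io/service-name']] = desc['LoadBalancerName']

print(service_to_elb)  # e.g. {'default/nginx': 'xxxxx'}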

Related

Laravel in VPC, access SQS without programmatic access

I have a Laravel app running on an EC2 instance inside a VPC. Now I want to connect to an SQS queue from the app. Using programmatic access seems to work, but I want to use the SQS endpoint without having to use the key and the secret.
Technically this should be possible since the AWS resources are linked together. Any idea how to set this up in Laravel?
Sounds like you need to use an IAM role (basically a set of policies) which you attach to your EC2 instance. The policies you include would have a section for access to your SQS queue (or at least certain actions on it in SQS). This effectively allows temporary credentials to be given to the instance without having to have them in code.
The role might look something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes"
                ...<any other actions>
            ],
            "Resource": <SQS Queue ARN>
        }
    ]
}
You attach this role to your EC2 instance in the EC2 console: select the instance, then go to Instance Settings > Attach/Replace IAM Role.
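Once the role is attached, the SDKs pick up temporary credentials from the instance metadata automatically; the AWS SDK for PHP that Laravel uses follows the same default credential chain, so leaving the key/secret out of the config should be enough. As a minimal illustration of the idea (sketched in Python/boto3, run on the instance itself; the queue URL is a placeholder):

import boto3

# No access key or secret anywhere: on an instance with the role attached,
# the SDK's default credential chain uses the instance profile credentials.
sqs = boto3.client('sqs', region_name='us-east-1')

messages = sqs.receive_message(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/my-queue',  # placeholder
    MaxNumberOfMessages=1,
)
print(messages.get('Messages', []))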

How to change RDS Security Group using boto3?

I am restoring (= creating a new instance from) an RDS MySQL snapshot using boto3. Unfortunately the Security Group does not get copied over; instead the new instance gets assigned the default Security Group, which has no restrictions on incoming traffic.
Looking at the source RDS instance I can see the correct Security Group (sg-a247eec5) attached to the RDS instance. This Security Group is visible under EC2 - Security Groups and VPC - Security Groups but not under RDS - Security Groups.
I am using restore_db_instance_from_db_snapshot but I can't see where I would attach that Security Group to the new instance.
I can easily attach the correct Security Group by using the AWS UI (modifying my RDS Instance).
There is modify_instance_attribute on the EC2 client which can change Security Groups, but it requires an InstanceId which I don't get from my RDS instance. The only thing I can find is DBInstanceIdentifier.
Trying to set the correct IAM permissions confuses me too. I have an RDS ARN: arn:aws:rds:ap-southeast-2:<account_id>:db:<db_instance_name> but ModifyInstanceAttribute is listed under Amazon EC2. Selecting both in the policy editor gives me an error saying the ARN is invalid (which makes sense).
Whenever you use the restore_db_instance_from_db_snapshot API, the default behavior is to apply the default security group and the default parameter group. This is documented in the RDS API reference:
The target database is created from the source database restore point
with most of the original configuration, but with the default security
group and the default DB parameter group.
The workaround is to use the modify_db_instance API once the restore is complete.
DBInstanceIdentifier is to an RDS instance what InstanceId is to an EC2 instance.
Pass the same DBInstanceIdentifier you used above as an input to this API:
import boto3

client = boto3.client('rds')

response = client.modify_db_instance(
    DBInstanceIdentifier='string',
    DBSecurityGroups=[
        'string',
    ],  # if you are using EC2 security groups, remove this and use VpcSecurityGroupIds instead
    VpcSecurityGroupIds=[
        'string',
    ],
    DBParameterGroupName='string',
    ApplyImmediately=True,
)
I believe you need to change both the security group and the parameter group (unless you are fine with the default one). If you are changing the parameter group, then you need to reboot the DB instance as well for the settings to take effect.
response = client.reboot_db_instance(
    DBInstanceIdentifier='string',
)
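Putting it together, a minimal end-to-end sketch could look like this (identifiers and group names are placeholders; the waiter simply blocks until the restored instance is available before modifying it):

import boto3

rds = boto3.client('rds')

# Restore the snapshot, wait for the new instance, then swap in the right groups.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier='my-restored-db',       # placeholder
    DBSnapshotIdentifier='my-snapshot',          # placeholder
)
rds.get_waiter('db_instance_available').wait(DBInstanceIdentifier='my-restored-db')

rds.modify_db_instance(
    DBInstanceIdentifier='my-restored-db',
    VpcSecurityGroupIds=['sg-a247eec5'],         # the security group from the source instance
    DBParameterGroupName='my-parameter-group',   # placeholder
    ApplyImmediately=True,
)
rds.reboot_db_instance(DBInstanceIdentifier='my-restored-db')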
Also, the role performing the above DB operations needs the policy permissions below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1507879517000",
            "Effect": "Allow",
            "Action": [
                "rds:CreateDBInstance",
                "rds:ModifyDBInstance",
                "rds:RebootDBInstance",
                "rds:RestoreDBInstanceFromDBSnapshot"
            ],
            "Resource": [
                "arn:aws:rds:*:XXXXXXXXXX:db:*"
            ]
        }
    ]
}

Is there a way to limit IAM access from within EC2 instance

First and foremost, I am aware of IAM roles and know that they would provide this feature. However, I have a requirement for a key attached to an IAM user.
Is there a way to limit access to resources from within EC2 instances (only allow requests whose origin is an EC2 instance)?
For example:
Using credentials from developer's laptop: denied
Using credentials from EC2 instance: allowed
We want to make sure that if these keys ever get leaked for whatever reason, no one will be able to control resources from outside our AWS environment.
Thanks
It's possible, but the level of granularity you want may result in more IAM management than you desire. It's possible to add conditions to IAM statements that restrict based on IP address, so you can create a statement like the one below that lists the IPs of your instances:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"NotIpAddress": {"aws:SourceIp": [
            "192.0.2.0/24",
            "203.0.113.0/24"
        ]}}
    }
}
However, unless you use Elastic IPs for all your instances, their IPs can change over time, so you'd need some way of keeping these IAM statements up to date.
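One way to keep the list current is a small script (or scheduled Lambda) that collects the public IPs of your running instances and publishes a new default version of the managed policy. A rough boto3 sketch, assuming the policy ARN is a placeholder and keeping in mind that managed policies keep at most five versions:

import boto3
import json

ec2 = boto3.client('ec2')
iam = boto3.client('iam')

POLICY_ARN = 'arn:aws:iam::123456789012:policy/deny-outside-instances'  # placeholder

# Collect the current public IPs of running instances.
ips = []
for page in ec2.get_paginator('describe_instances').paginate(
        Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]):
    for reservation in page['Reservations']:
        for instance in reservation['Instances']:
            if instance.get('PublicIpAddress'):
                ips.append(instance['PublicIpAddress'] + '/32')

document = {
    'Version': '2012-10-17',
    'Statement': {
        'Effect': 'Deny',
        'Action': '*',
        'Resource': '*',
        'Condition': {'NotIpAddress': {'aws:SourceIp': ips}},
    },
}

# Publish the refreshed IP list as the new default policy version.
iam.create_policy_version(PolicyArn=POLICY_ARN,
                          PolicyDocument=json.dumps(document),
                          SetAsDefault=True)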

How to start a private instance in EC2 using IAM

How can I start an EC2 instance as user A so that the instance started by user A cannot be seen by user B?
Can I do this with IAM?
I tried this policy:
{
    "Statement": [
        {
            "Sid": "Stmt1341824399883",
            "Action": [
                "ec2:DescribeInstanceAttribute",
                "ec2:DescribeInstanceStatus",
                "ec2:DescribeInstances"
            ],
            "Effect": "Deny",
            "Resource": [
                "*"
            ]
        }
    ]
}
but it hides everything, including the instances started by user A.
Unfortunately there are no Amazon Resource Names (ARNs) for Amazon EC2: you can't write a policy that applies only to certain EC2 instances.
If you require isolation between the two, the only way I know of is to create a separate AWS account. You can use consolidated billing so that billing is aggregated with your other account, and you can share some things like EBS snapshots between accounts. Most things however can't be shared between accounts.
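For example, sharing an EBS snapshot with the second account is a single API call; a boto3 sketch (the snapshot ID and target account ID are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Grant the second AWS account permission to create volumes from this snapshot.
ec2.modify_snapshot_attribute(
    SnapshotId='snap-0123456789abcdef0',   # placeholder
    Attribute='createVolumePermission',
    OperationType='add',
    UserIds=['210987654321'],              # placeholder account ID
)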

Elastic IP on application deployed using Elastic Beanstalk

I'm a bit confused about the use of the Elastic IP service offered by Amazon Web Services. I guess the main idea is that I can switch to a new version of the web application with no downtime by following this simple procedure:
Deploy the new version on a new EC2 instance
Configure the new version properly and test it using a staging DB
Once properly tested, make this new version use the live DB
Associate the Elastic IP to this instance
Terminate all the useless services (staging DB and old EC2 instance)
Is this the common way to deploy a new version of a web application?
Now, what if the application is scaled across more instances? I configured auto scaling in the Elastic Beanstalk settings and this created a load balancer (I can see it in the EC2 section of the AWS Management Console). The problem is that I apparently cannot associate the Elastic IP with the load balancer; I have to associate it with an existing instance. To which instance should I associate it? I'm confused...
Sorry if some questions may sound stupid but I'm only a programmer and this is the first time I set up a cloud system.
Thank you!
Elastic Load Balancing (ELB) does not work with Amazon EC2 Elastic IP addresses, in fact the two concepts do not go together at all.
Elasticity via Elastic Load Balancing
Rather, ELB is usually used via CNAME records (but see below), and this provides the first level of elasticity/availability by allowing the aliased DNS address to change the IP of the ELB(s) in use, if need be. The second level of elasticity/availability is performed by the load balancer when distributing the traffic between the EC2 instances you have registered.
Think of it this way: The CNAME never changes (just like the Elastic IP address) and the replacement of EC2 instances is handled via the load balancer, Auto Scaling, or yourself (by registering/unregistering instances).
This is explained in more detail within Shlomo Swidler's excellent analysis The “Elastic” in “Elastic Load Balancing”: ELB Elasticity and How to Test it, which in turn refers to the recently provided Best Practices in Evaluating Elastic Load Balancing by AWS, which confirm his analysis and provide a good overall read regarding the Architecture of the Elastic Load Balancing Service and How It Works in itself (but lacks the illustrative step by step samples Shlomo provides).
Domain Names
Please note that the former limitation requiring a CNAME has meanwhile been addressed by respective additions to Amazon Route 53 to allow the root domain (or Zone Apex) being used as well, see section Aliases and the Zone Apex within Moving Ahead With Amazon Route 53 for a quick overview and Using Domain Names with Elastic Load Balancing for details.
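If you manage the zone in Route 53, creating such an alias record programmatically is only a few lines; a boto3 sketch (zone IDs, domain, and ELB DNS name are placeholders, and note the alias HostedZoneId must be the ELB's canonical hosted zone ID, not your domain's):

import boto3

route53 = boto3.client('route53')

# Point the zone apex at the load balancer via an alias record.
route53.change_resource_record_sets(
    HostedZoneId='Z1EXAMPLEZONE',  # your domain's hosted zone (placeholder)
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'example.com.',
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': 'Z2ELBZONEID',  # the ELB's canonical hosted zone ID (placeholder)
                    'DNSName': 'my-elb-1234567890.us-east-1.elb.amazonaws.com.',
                    'EvaluateTargetHealth': False,
                },
            },
        }]
    },
)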
Elasticity via Elastic Beanstalk
First and foremost, AWS Elastic Beanstalk uses Elastic Load Balancing in turn as described above. On top of that, it adds application lifecycle management:
AWS Elastic Beanstalk is an even easier way for you to quickly deploy
and manage applications in the AWS cloud. You simply upload your
application, and Elastic Beanstalk automatically handles the
deployment details of capacity provisioning, load balancing,
auto-scaling, and application health monitoring. [...] [emphasis mine]
This is achieved by adding the concept of an Environment into the mix, which is explained in the Architectural Overview:
The environment is the heart of the application. [...] When you create
an environment, AWS Elastic Beanstalk provisions the resources
required to run your application. AWS resources created for an
environment include one elastic load balancer (ELB in the diagram), an
Auto Scaling group, and one or more Amazon EC2 instances.
Please note that every environment has a CNAME (URL) that points to a load balancer, i.e. just like using an ELB on its own.
All this comes together in Managing and Configuring Applications and Environments, which discusses some of the most important features of AWS Elastic Beanstalk in detail, including usage examples using the AWS Management Console, CLI, and the APIs.
Zero Downtime
It's hard to identify the most relevant part for illustration purposes, but Deploying Versions With Zero Downtime precisely addresses your use case and implies all required preceding steps (e.g. Creating New Application Versions and Launching New Environments), so reading the section AWS Management Console might give you the best overall picture of how this platform works.
Good luck!
In addition to the options described in Steffen's awesome answer, Elastic Beanstalk seems to have very recently enabled Elastic IP as an option if you don't need the full features of an Elastic Load Balancer (like auto-scaling beyond one instance).
I describe the option in my answer to a similar question. Elastic Beanstalk now allows you to choose between two Environment Types, and the Single-instance option creates an Elastic IP.
I think using an ELB will be the preferable option in most cases, but e.g. for a staging server it is nice to have an alternative that is less complex (and cheaper).
Apologies for answering a post a few years later, however for those that do actually need a set of static IP addresses on an ELB, it is possible to ask AWS nicely to add what they call 'Stable IP' addresses to an ELB, and thereby give it that static IP address feature.
They don't like doing this at all of course - but will if you can justify it (the main justification is when you have clients that have IP whitelist restrictions on outbound connections via their firewalls and are completely unwilling to budge on that stance).
Just be aware that the 'auto scaling' based on traffic option isn't straightforward anymore: AWS would be unable to dynamically add more ELB endpoints to your ELB as they do with the out-of-the-box solution, and you have to go through the pain of opening up new IP addresses with your customers over time.
For the original question though, EB using an ELB to front EC2 instances where static IP addresses are not actually required (no client outbound firewall issues) is the best way as per the accepted answer.
In the case that none of the above solutions works, one alternative is to put the instances in a private subnet, route their outbound traffic through a NAT gateway, and associate an EIP with the NAT gateway. In this case you're able to use the ELB, use auto-scaling, and have a reserved EIP.
This is a bit more expensive though, especially for large throughput use cases. Also, SSHing into the instance to debug becomes a bit more complex.
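For reference, the wiring is only a few API calls; a boto3 sketch (subnet and route table IDs are placeholders; the NAT gateway itself lives in a public subnet, and the private subnet's route table points at it):

import boto3

ec2 = boto3.client('ec2')

# Allocate an Elastic IP and attach it to a NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain='vpc')
nat = ec2.create_nat_gateway(
    SubnetId='subnet-0aaa1111',          # public subnet (placeholder)
    AllocationId=eip['AllocationId'],
)
nat_id = nat['NatGateway']['NatGatewayId']

# Wait until the NAT gateway is available before routing through it.
ec2.get_waiter('nat_gateway_available').wait(NatGatewayIds=[nat_id])

# Point the private subnet's route table at the NAT gateway for outbound traffic.
ec2.create_route(
    RouteTableId='rtb-0bbb2222',         # route table of the private subnet (placeholder)
    DestinationCidrBlock='0.0.0.0/0',
    NatGatewayId=nat_id,
)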
I wrote a post describing how to accomplish this using a CloudWatch rule that fires when a new instance is launched, plus a Lambda function. Here's the Lambda function code (a sketch of wiring up the rule follows it):
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();
const PROD_ENV_NAME = 'my-prod-env-name';
// Example Event
// {
// "version": "0",
// "id": "ee376907-2647-4179-9203-343cfb3017a4",
// "detail-type": "EC2 Instance State-change Notification",
// "source": "aws.ec2",
// "account": "123456789012",
// "time": "2015-11-11T21:30:34Z",
// "region": "us-east-1",
// "resources": [
// "arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
// ],
// "detail": {
// "instance-id": "i-abcd1111",
// "state": "running"
// }
// }
exports.handler = async (event) => {
  console.log("EVENT:", event);

  // The newly launched instance ID.
  const instanceId = event.detail['instance-id'];

  // Fetch info about the newly launched instance
  const result = await ec2.describeInstances({
    Filters: [{ Name: "instance-id", Values: [instanceId] }]
  }).promise();

  // The instance details are buried in this object
  const instance = result.Reservations[0].Instances[0];

  const isAttached = instance.NetworkInterfaces.find(
    ni => ni.Association && ni.Association.IpOwnerId !== 'amazon'
  );

  // Bail if the instance is already attached to another EIP
  if (isAttached) {
    console.log("This instance is already assigned to an elastic IP");
    return { statusCode: 200, body: '' };
  }

  // In Elastic Beanstalk, the instance's Name tag is set to the environment name.
  // There is also an environment name tag, which could be used here.
  const name = instance.Tags.find(t => t.Key === 'Name').Value;

  // Only assign EIPs to production instances
  if (name !== PROD_ENV_NAME) {
    console.log('Not a production instance. Not assigning. Instance name:', name);
    return { statusCode: 200, body: '' };
  }

  // Get a list of elastic IP addresses
  const addresses = await ec2.describeAddresses().promise();

  // Filter out addresses already assigned to instances
  const availableAddresses = addresses.Addresses.filter(a => !a.NetworkInterfaceId);

  // Raise an error if we have no more available IP addresses
  if (availableAddresses.length === 0) {
    console.log("ERROR: no available ip addresses");
    return { statusCode: 400, body: JSON.stringify("ERROR: no available ip addresses") };
  }

  const firstAvail = availableAddresses[0];

  try {
    // Associate the instance to the address
    const result = await ec2.associateAddress({
      AllocationId: firstAvail.AllocationId,
      InstanceId: instanceId
    }).promise();
    console.log('allocation result', result);
    return { statusCode: 200, body: JSON.stringify('Associated IP address.') };
  } catch (err) {
    console.log("ERROR: ", err);
  }
};
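And a rough boto3 sketch of wiring the CloudWatch rule to that function (the function name, ARNs, account ID, and region are placeholders):

import boto3
import json

events = boto3.client('events')
lam = boto3.client('lambda')

# Trigger the function whenever an EC2 instance enters the "running" state.
events.put_rule(
    Name='assign-eip-on-launch',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification'],
        'detail': {'state': ['running']},
    }),
)
events.put_targets(
    Rule='assign-eip-on-launch',
    Targets=[{'Id': 'assign-eip-lambda',
              'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:assign-eip'}],
)

# Allow CloudWatch Events to invoke the function.
lam.add_permission(
    FunctionName='assign-eip',
    StatementId='allow-cloudwatch-events',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn='arn:aws:events:us-east-1:123456789012:rule/assign-eip-on-launch',
)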
You can set the environment as a Single Instance as stated in the already accepted answer, or if you want to use an Elastic IP that you have already created, you can do the following.
Inside of the .ebextensions folder at the root of your project, make a file called setup.config and paste in the following:
container_commands:
  00_setup_elastic_ip:
    command: |
      export AWS_ACCESS_KEY_ID={YOUR-ACCESS-KEY-ID}
      export AWS_SECRET_ACCESS_KEY={YOUR-SECRET-ACCESS-KEY}
      export AWS_DEFAULT_REGION=us-east-1
      INSTANCE_ID=$(ec2-metadata -i)
      words=( $INSTANCE_ID )
      EC2_ID="${words[1]}"
      aws ec2 associate-address --instance-id $EC2_ID --allocation-id {eipalloc-ID-TO-THE-IP}
All you have to do is replace the 3 parts contained inside of the {} and you are good to go. This will replace the IP of your Elastic Beanstalk instance with the Elastic IP of your choosing.
The parts contained in {} are (get rid of the {} though, that is just there to show you which parts to replace with your info):
Your AWS Access Key ID
Your AWS Secret Access Key
the allocation ID of the Elastic IP you want to assign to your Elastic Beanstalk environment's instance.
