Edit EC2 security group from another AWS account - amazon-ec2

I have 2 accounts on AWS. In the first account I have created a permanent EC2 instance with a "dbSG" security group (which only allows connections from a specific IP on a specific port).
When I create an instance in the second account using a CloudFormation template, it should:
Add this instance's IP to the "dbSG" security group, allowing connections on the specific port.
Connect to the first instance on this port.
Can I use AssumeRole in UserData when creating the instance in the second account to modify "dbSG" so that it allows connections from this instance? If yes, how can it be done step by step?

For EC2-Classic
The CLI help for ec2 authorize-security-group-ingress has this example:
To add a rule that allows inbound HTTP traffic from a security group in another account
This example enables inbound traffic on TCP port 80 from a source security group (otheraccountgroup) in a different AWS account (123456789012). If the command succeeds, no output is returned.
Command:
aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 80 --source-group otheraccountgroup --group-owner 123456789012
So, provided that you know the security group name of the "appSG", with credentials from the "db" account:
aws ec2 authorize-security-group-ingress --group-name dbSG --protocol tcp --port 1234 --source-group appSG --group-owner XXX-APP-ACCOUNT-ID
Via CloudFormation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-security-group-rule.html#cfn-ec2-security-group-rule-sourcesecuritygroupownerid
Unfortunately, this seems to work only with EC2-Classic, not with instances in a VPC.
For EC2-VPC: The user-data way
In the "db" account, add a Role to your CF template, specifying a Trust Policy that allows such role to be assumed by a specific role in another AWS account:
(replace XXX-... with your own values)
'RoleForOtherAccount': {
    'Type': 'AWS::IAM::Role',
    'Properties': {
        'AssumeRolePolicyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Effect': 'Allow',
                'Principal': {
                    'AWS': 'arn:aws:iam::XXX-OTHER-AWS-ACCOUNT-ID:role/XXX-ROLE-NAME-GIVEN-TO-APP-INSTANCES'
                },
                'Action': ['sts:AssumeRole']
            }]
        },
        'Path': '/',
        'Policies': [{
            'PolicyName': 'manage-sg',
            'PolicyDocument': {
                'Version': '2012-10-17',
                'Statement': [{
                    'Effect': 'Allow',
                    'Action': ['ec2:AuthorizeSecurityGroupIngress'],
                    'Resource': '*'
                }]
            }
        }]
    }
}
Then, on the "app" instance you can add the following user-data script (via CloudFormation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-userdata):
#!/bin/bash
# get the current public IP address from the EC2 instance metadata
MY_IP=$(wget -qO- http://instance-data/latest/meta-data/public-ipv4)
# assume the "db" account role and get temporary credentials
CREDENTIALS_JSON=$(aws sts assume-role --role-arn XXX-ARN-OF-ROLE-IN-DB-ACCOUNT --role-session-name "AppSessionForSGIngress" --query 'Credentials' --output json)
# extract the temporary credentials (requires jq) and export them so the next call uses them
export AWS_ACCESS_KEY_ID=$(echo "$CREDENTIALS_JSON" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDENTIALS_JSON" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDENTIALS_JSON" | jq -r '.SessionToken')
# authorize the IP (the AWS CLI has no --access-key-id option; it reads the exported variables)
aws --region XXX-DB-REGION ec2 authorize-security-group-ingress --group-id sg-XXX --protocol tcp --port 1234 --cidr $MY_IP/32
The IAM role of the "app" instance must allow calling sts:AssumeRole.
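For completeness, a minimal sketch of attaching that permission to the app instance role via the CLI; the role name and policy name below are hypothetical placeholders:
aws iam put-role-policy --role-name XXX-ROLE-NAME-GIVEN-TO-APP-INSTANCES --policy-name allow-assume-db-role --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "XXX-ARN-OF-ROLE-IN-DB-ACCOUNT"
    }]
}'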
Caveat: if you stop and restart the instance, its public IP will change (unless you've assigned an Elastic IP). Since user-data scripts are executed only during the first launch, your dbSG wouldn't get updated.
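One way around this caveat is an Elastic IP, which survives stop/start cycles; a minimal sketch (the instance ID is a placeholder):
# allocate an Elastic IP and attach it to the instance
ALLOCATION_ID=$(aws ec2 allocate-address --domain vpc --query 'AllocationId' --output text)
aws ec2 associate-address --instance-id i-XXXXXXXX --allocation-id $ALLOCATION_ID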
via Lambda
You could also use a Lambda function triggered by CloudTrail or AWS Config, although this is a bit tricky: Run AWS Lambda code when creating a new AWS EC2 instance
This way, you can also track calls to StopInstances and StartInstances and update (revoke/authorize) the dbSG rules in a more robust way.
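As a rough sketch of the trigger wiring (assuming CloudTrail is enabled in the account; the rule name, region, account ID and function name are placeholders):
aws events put-rule --name watch-instance-state-calls --event-pattern '{
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventName": ["RunInstances", "StartInstances", "StopInstances"]}
}'
aws events put-targets --rule watch-instance-state-calls --targets 'Id=1,Arn=arn:aws:lambda:XXX-REGION:XXX-ACCOUNT-ID:function:XXX-FUNCTION-NAME'
# the function also needs a resource policy allowing events.amazonaws.com to invoke it
aws lambda add-permission --function-name XXX-FUNCTION-NAME --statement-id events-invoke --action lambda:InvokeFunction --principal events.amazonaws.com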
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal

It appears that your situation is:
Two VPCs, let's call them: VPC-A and VPC-B
Each VPC is owned by a different AWS account
Instance-A exists in VPC-A, with security group dbSG
When launching Instance-B in VPC-B, allow it to access Instance-A
The simplest method to achieve this is via VPC Peering, which permits direct communication between two VPCs in the same region. The VPCs can belong to different AWS accounts, but must have non-overlapping IP address ranges.
The process would be:
VPC-A invites VPC-B to peer
VPC-B accepts the invitation
Update routing tables in both VPCs to send traffic to each other
Create a security group in VPC-B, e.g. appSG
Modify dbSG to permit incoming connections from appSG
Associate Instance-B (and any other instances that should communicate with Instance-A) with the appSG security group
That's it! Security works the same way between peered VPCs as within a VPC. The only difference is that the instances are in separate VPCs that have been peered together.
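A minimal CLI sketch of those steps, assuming the first command runs with credentials from the account owning VPC-A and the accept command with credentials from the account owning VPC-B (all IDs and CIDRs below are placeholders):
# from the VPC-A account: request the peering connection
aws ec2 create-vpc-peering-connection --vpc-id vpc-AAAA --peer-vpc-id vpc-BBBB --peer-owner-id XXX-ACCOUNT-B-ID
# from the VPC-B account: accept it
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-XXXXXXXX
# in each VPC, route the other VPC's CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-AAAA --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-XXXXXXXX
aws ec2 create-route --route-table-id rtb-BBBB --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-XXXXXXXX
# in the VPC-A account: allow appSG to reach dbSG on the database port
aws ec2 authorize-security-group-ingress --group-id sg-dbSG --protocol tcp --port 1234 --source-group sg-appSG --group-owner XXX-ACCOUNT-B-ID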
See: Working with VPC Peering Connections

Related

Let a Lambda deployed in one VPC ("A") talk to another VPC ("B") peered with A

I'm having difficulty getting a Lambda function to consistently talk to a VPC peered with the VPC that the Lambda function is connected to. I believe my configuration is identical to https://aws.amazon.com/premiumsupport/knowledge-center/lambda-dedicated-vpc/ , so I think this is a supported situation, which I will describe.
I have a lambda function connected to VPC A (us-east-1).
VPC A and VPC B (us-west-2) are peered.
An RDS database resides in VPC B and I need the Lambda function to talk to it.
The current situation is that sometimes they talk (the port is open), and sometimes they cannot (the port is not open). I do not know what causes one situation or the other, but I have a reproducer and can freely reproduce either scenario by redeploying the Lambda, or simply by waiting after the deployment.
The reproducer lambda function:
import socket

def lambda_handler(event, context):
    host = "aurora-xxx.cluster-xxx.us-west-2.rds.amazonaws.com"
    ip = socket.gethostbyname(host)
    port = 5432
    a_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    location = (ip, port)
    a_socket.settimeout(10)
    result_of_check = a_socket.connect_ex(location)
    a_socket.settimeout(None)
    if result_of_check == 0:
        print(f'Host {host}({ip}) port {port} is open.')
    else:
        print(f'Host {host}({ip}) port {port} is NOT open.')
And the AWS CLI that deploys the lambda is:
aws --region us-east-1 lambda delete-function --function-name test-2
aws --region us-east-1 lambda create-function --function-name test-2 --zip-file fileb://../lambda/lambda_function.zip --handler lambda_function.lambda_handler --runtime "python3.7" --role <role_arn> --vpc-config SubnetIds=subnet1,subnet2,SecurityGroupIds=sg-xxx --timeout 120
PS: VPC A and VPC B are peered correctly and the port is always open, because I can use psql on an instance in VPC A to connect to the RDS in VPC B. I need the Lambda function to talk to the RDS outside of its own VPC, because the RDS is part of a global database which can fail over to either VPC.

My EC2 Linux Failing to connect to awscli.amazonaws.com:443

My Linux EC2 instance comes up in a VPC subnet whose route table has an Internet gateway route (destination 0.0.0.0/0).
It comes up with a private IPv4 address assigned to it, and no public IPv4 address.
Attached are the related security group and NACL screenshots.
Under the security group, I have opened:
HTTPS (443) to 0.0.0.0/0,
SSH (22) to my machine's IP and my VPC CIDR range.
After I SSH into my EC2 instance using the private IPv4 address and keys, I've been trying to install the AWS CLI on the instance.
My EC2 instance produces the error below after I enter this:
curl https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.0.30.zip -o awscliv2.zip
Error:
curl: (7) Failed connect to awscli.amazonaws.com:443;
Where is the problem?
If your instance is in a private subnet and has no public IP, you can't route through the Internet Gateway. You have to route through some NAT device. The simplest is a NAT Gateway, although you can also set up an EC2 instance to serve the same purpose.
When you set up a new VPC using the (recently added) wizard, it offers you an option to create public and private subnets and a NAT Gateway automatically. Or you can add one to an existing VPC following these instructions.
Note that, unlike an Internet Gateway, a NAT Gateway is not free.
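If you prefer the CLI to the wizard, a minimal sketch of adding a NAT Gateway to an existing VPC (the subnet and route table IDs are placeholders; the NAT Gateway itself must live in a public subnet):
# allocate an Elastic IP for the NAT Gateway
ALLOCATION_ID=$(aws ec2 allocate-address --domain vpc --query 'AllocationId' --output text)
# create the NAT Gateway in a public subnet
NAT_GW_ID=$(aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id $ALLOCATION_ID --query 'NatGateway.NatGatewayId' --output text)
# route the private subnet's internet-bound traffic through it
aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id $NAT_GW_ID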
PS. Sorry again for misunderstanding your question.
Could you check if your instance has any firewall running? You can disable the firewalls (if any) using these commands:
# For Uncomplicated Firewall
sudo ufw disable
# For firewalld
sudo systemctl disable firewalld --now
Also, the official documentation for AWS CLI installation has double quotes surrounding the HTTPS address (https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html):
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
This may or may not be the issue, but it's worth a try.

discovery.seed_hosts in elasticsearch AWS EC2 with ELB

I have EC2 instances behind an ELB. Whenever a new instance is started, an IP address is assigned dynamically.
I have added the ELB DNS name, but it resolves to the IP addresses of the network interfaces attached to the ELB, whereas I need the EC2 instance IP addresses.
So how do I add the new IP address to discovery.seed_hosts in Elasticsearch without manual intervention?
Note: I am looking for a way other than the EC2 discovery plugin.
I have used an AWS CLI command to fetch the IPs of the instances behind the ELB. I added the following script to my .sh file:
export ELASTIC_INSTANCE_IPS=$(aws ec2 describe-instances --filters file://filters.json --query "Reservations[*].Instances[*].PrivateIpAddress" --region ${aws_region} --output text | paste -sd,)
tee -a elasticsearch.yml << END
discovery.seed_hosts: [$ELASTIC_INSTANCE_IPS]
END
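The filters.json referenced above isn't shown; a hypothetical example, assuming the Elasticsearch nodes carry a Name tag such as elasticsearch-node (adjust to your own tagging scheme):
cat > filters.json << 'EOF'
[
    {"Name": "tag:Name", "Values": ["elasticsearch-node"]},
    {"Name": "instance-state-name", "Values": ["running"]}
]
EOF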

AWS: Start EC2 Instance with Cloudformation and encrypt BlockDevices with specific KMS Key

When starting EC2 instances via the AWS CLI, I can specify a KmsKeyId for block devices.
When starting an EC2 instance via Cloudformation (either directly or via ASG/LaunchConfiguration) this option does not exist.
How can I encrypt the block devices of my EC2 instances started via Cloudformation with a specific KMS Key?
It looks like the chain is:
Instance > [ BlockDeviceMapping ] > Ebs > KmsKeyId
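Putting that chain together, a minimal sketch of the corresponding template fragment, here written out via a shell heredoc (the AMI ID, volume size and KMS key ARN are placeholders):
cat > instance-fragment.yml << 'EOF'
MyInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-XXXXXXXX
    InstanceType: t3.micro
    BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: 20
          Encrypted: true
          KmsKeyId: arn:aws:kms:XXX-REGION:XXX-ACCOUNT-ID:key/XXX-KEY-ID
EOF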

Accessing AWS EC2 instances through ELB

I'm trying to set up two instances under an elastic load balancer, but cannot figure out how I'm supposed to access the instances through the load balancer.
I've set up the instances with a security group to allow access from anywhere to certain ports. I can access the instances directly using their "Public DNS" (publicdns) host name and the port PORT:
http://[publicdns]:PORT/
The load balancer contains the two instances and they are both "In Service" and it's forwarding the port (PORT) onto the same port on the instances.
However, if I request
http://[dnsname]:PORT (where dnsname is the A Record listed for the ELB)
it doesn't connect to the instance (connection times out).
Is this not the correct way to use the load balancer, or do I need to do anything to allow access to the load balancer? The only mention of security groups in relation to the load balancer is to restrict access to the instances to the load balancer only, but I don't want that. I want to be able to access them individually as well.
I'm sure there's something simple and silly that I've forgotten, not realised or done wrong :P
Cheers,
Svend.
Extra info added:
The port configuration for the load balancer looks like this (actually 3 ports):
10060 (HTTP) forwarding to 10060 (HTTP), Stickiness: Disabled
10061 (HTTP) forwarding to 10061 (HTTP), Stickiness: Disabled
10062 (HTTP) forwarding to 10062 (HTTP), Stickiness: Disabled
And it's using the standard/default elb security group (amazon-elb-sg).
The instances have two security groups. One external looking like this:
22 (SSH) 0.0.0.0/0
10060 - 10061 0.0.0.0/0
10062 0.0.0.0/0
and one internal, allowing anything within the internal group to communicate on all ports:
0 - 65535 sg-xxxxxxxx (security group ID)
Not sure it makes any difference, but the instances are m1.small instances of image ami-31814f58.
Something that might have relevance:
My health check used to be HTTP:PORT/ but the load balancer kept saying that the instances were "Out of Service", even though I seemed to get a 200 response on requests to that port.
I then changed it to TCP:PORT, and it changed to say they were "In Service".
Is there something very specific that should be returned for the HTTP check, or is it simply an HTTP 200 response that's required? ... and does the fact that it wasn't working hint at why the load balancing itself wasn't working either?
It sounds like you have everything set up correctly. Are the ports going into the load balancer the same as on the instance? Or are you forwarding the request to another port?
As a side note, when I configure my load balancers I generally don't like to open up my instances on any port to the general public. I only allow the load balancer to make requests to those instances. I've noticed in the past that many people will make malicious requests to the IP of an instance trying to find a security breach. I've even seen people trying to brute-force logins to my Windows machines....
To create a security rule only for the load balancers, run the following commands and remove any other rules you have in the security group for the port the load balancer is using. If you're not using the command line to run these commands, just let me know which interface you're trying to use and I can try to come up with a sample that will work for you.
elb-create-lb-listeners <load-balancer> --listener "protocol=http, lb-port=<port>, instance-port=<port>"
ec2-authorize <security-group> -o amazon-elb-sg -u amazon-elb
Back to your question. Like I said, the steps you explained are correct; opening the port on the instance and forwarding the port to the instance should be enough. Maybe you could post the full configuration of your instance's security groups and the load balancer so that I can see if there is something else affecting your situation.
I went ahead and created a script that will reproduce the exact steps that I'm using. This assumes you're using Linux as an operating system and that the AWS CLI tools are already installed. If you don't have this set up already, I recommend starting a new Amazon Linux micro instance and running the script from there, since it has everything already installed.
Download the X.509 certificate files from amazon https://aws-portal.amazon.com/gp/aws/securityCredentials
Copy the certificate files to the machine where you will run the commands
Save two variables that are required in the script
aws_account=<aws account id>
keypair="<key pair name>"
Export the certificates as environmental variables
export EC2_PRIVATE_KEY=<private_Key_file>
export EC2_CERT=<cert_file>
export EC2_URL=https://ec2.us-east-1.amazonaws.com
Create the security groups
ec2-create-group loadbalancer-sg -d "Loadbalancer Test group"
ec2-authorize loadbalancer-sg -o loadbalancer-sg -u $aws_account
ec2-authorize loadbalancer-sg -p 80 -s 0.0.0.0/0
Create the user-data file for the instance so that Apache is started and the index.html file is created
mkdir -p ~/temp/
echo '#! /bin/sh
yum -qy install httpd
touch /var/www/html/index.html
/etc/init.d/httpd start' > ~/temp/user-data.sh
Start the new instance and save the instanceid
instanceid=`ec2-run-instances ami-31814f58 -k "$keypair" -t t1.micro -g loadbalancer-sg -g default -z us-east-1a -f ~/temp/user-data.sh | grep INSTANCE | awk '{ print $2 }'`
Create the loadbalancer and attach the instance
elb-create-lb test-lb --availability-zones us-east-1a --listener "protocol=http, lb-port=80, instance-port=80"
elb-register-instances-with-lb test-lb --instances $instanceid
Wait until your instance state in the load balancer is "InService" and try to access the URLs
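To verify from the outside, something like this should return an HTTP 200 once the instance is InService (replace the placeholder host with the DNS name the console or elb-describe-lbs reports for test-lb):
curl -I http://test-lb-XXXXXXXXX.us-east-1.elb.amazonaws.com/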
