I'm using Cloud9 IDE and the AWS CLI to manage EC2 instances. I'm going through the AWS guidance on configuring security groups and trying to determine what CIDR I should use when creating a new security group (http://docs.aws.amazon.com/cli/latest/userguide/tutorial-ec2-ubuntu.html).
Here is the sample command from the guidance:
$ aws ec2 authorize-security-group-ingress --group-name devenv-sg --protocol tcp --port 22 --cidr 0.0.0.0/0
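For comparison, the same rule locked down to a single machine would use a /32 CIDR (203.0.113.25 below is a placeholder for your own public IP):
$ aws ec2 authorize-security-group-ingress --group-name devenv-sg --protocol tcp --port 22 --cidr 203.0.113.25/32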
Thanks.
I've followed the AWS DocumentDB docs for connecting from outside the VPC:
I created an EC2 instance in the same security group and VPC as the DocDB cluster
In the security group I opened port 22 for my IP, and also opened port 27017 for communication inside the security group so the EC2 instance can SSH tunnel to the DocDB cluster
I ran ssh -f -i "ssh-tunneling-access.pem" -L 27017:{doc-db-cluster}:27017 {ec2-instance-user}@{ec2-instance-dns} -N to open the SSH tunnel
In another terminal I tried to connect using the Mongo shell with mongosh "mongodb://{credentials}@localhost:27017/?tls=true&tlsAllowInvalidHostnames=true&tlsCAFile=rds-combined-ca-bundle.pem"
I got an error "MongoServerSelectionError: read ECONNRESET"
I'm running on Windows 11, and my terminal is PowerShell Core.
Any ideas what I missed and/or how to troubleshoot it?
First of all, make sure you can connect to DocumentDB from the EC2 instance. The security group attached to the DocumentDB cluster has to allow port 27017 with source the EC2 instance (or the security group of the EC2).
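For example, a self-referencing rule like the following (the group ID is a placeholder) lets members of the security group reach each other on 27017:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 27017 --source-group sg-0123456789abcdef0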
Second, it is not clear from where you are initiating the tunnel. Did you execute step 3 on the Windows 11 machine? Have you installed OpenSSH on Windows?
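A quick way to check from PowerShell (it prints the client version if OpenSSH is installed):
ssh -V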
How about using a GUI client like Robo 3T, which has SSH tunneling support? Instructions on how to connect can be found here.
I have a private VPC with private subnets, a private jumpbox in one private subnet, and my private RDS Aurora MySQL Serverless instance in another private subnet.
I ran these commands on my local laptop to try to connect to my RDS instance via port forwarding:
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["5901"],"localPortNumber"=["9000"] --profile myProfile
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["22"],"localPortNumber"=["9999"] --profile myProfile
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["3306"],"localPortNumber"=["3306"] --profile myProfile
The connection to the server hangs.
I had this error on my local laptop:
Starting session with SessionId: myuser-09e5cd0206cc89542
Port 3306 opened for sessionId myuser-09e5cd0206cc89542.
Waiting for connections...
Connection accepted for session [myuser-09e5cd0206cc89542]
Connection to destination port failed, check SSM Agent logs.
and those errors in /var/log/amazon/ssm/errors.log:
2021-11-29 00:50:35 ERROR [handleServerConnections @ port_mux.go.278] [ssm-session-worker] [myuser-017cfa9edxxxx] [DataBackend] [pluginName=Port] Unable to dial connection to server: dial tcp :3306: connect: connection refused
2021-11-29 14:13:07 ERROR [transferDataToMgs @ port_mux.go.230] [ssm-session-worker] [myuser-09e5cdxxxxxx] [DataBackend] [pluginName=Port] Unable to read from connection: read unix @->/var/lib/amazon/ssm/session/3366606757_mux.sock: use of closed network connection
I then tried to connect to RDS through the forwarded port, and even tried using the RDS endpoint directly over an SSH tunnel, but it doesn't work.
Are there any additional steps to perform on the remote EC2 instance?
It seems the connection is accepted but the connection to the destination port doesn't work.
Thank you for your help on this!!
The start-session command tunnels the port from the target EC2 instance to localhost. The RDS instance is on another host, so you must use SSH tunneling.
Send your public key to the EC2 instance. Fill in the region and availability zone parameters.
aws ec2-instance-connect send-ssh-public-key --region us-west-2 --instance-id i-0d5470040e7541ab9 --availability-zone us-west-2a --instance-os-user ec2-user --ssh-public-key file://~/.ssh/id_rsa.pub
Forward the SSH port 22 from the EC2 instance to 9999 locally.
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["22"],"localPortNumber"=["9999"] --profile myProfile
SSH into the instance with tunneling (in another terminal). Fill in rds-instance-dns with the DNS of your RDS instance.
ssh ec2-user#localhost -L 6606:rds-instance-dns:3306 -i ~/.ssh/id_rsa -p 9999
Access RDS. Note the uppercase -P for the port, and use 127.0.0.1 so the client makes a TCP connection through the tunnel rather than using a local socket (your-db-user is a placeholder):
mysql -h 127.0.0.1 -P 6606 -u your-db-user -p
You also need to ensure that your EC2 instance has the correct permissions to access the RDS instance by configuring the security group.
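As a sketch, that rule could be added with (both group IDs are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-XXX-RDS --protocol tcp --port 3306 --source-group sg-XXX-EC2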
I have EC2 instances behind an ELB. Whenever a new instance is started, an IP address is assigned dynamically.
I have added the ELB DNS name, but it resolves to the IP addresses of the network interfaces attached to the ELB, whereas I need the EC2 instances' own IP addresses.
So how do I add the new IP addresses to discovery.seed_hosts in Elasticsearch without manual intervention?
Note: I am looking for a way other than the EC2 discovery plugin.
I used an AWS CLI command to fetch the instances' private IPs, and added the following script to my .sh file:
export ELASTIC_INSTANCE_IPS=$(aws ec2 describe-instances --filters file://filters.json --query "Reservations[*].Instances[*].PrivateIpAddress" --region ${aws_region} --output text | paste -sd,)
tee -a elasticsearch.yml << END
discovery.seed_hosts: [$ELASTIC_INSTANCE_IPS]
END
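For reference, filters.json could look something like this (the tag key and value are placeholders for however your Elasticsearch nodes are tagged):
[
    {
        "Name": "tag:Role",
        "Values": ["elasticsearch"]
    },
    {
        "Name": "instance-state-name",
        "Values": ["running"]
    }
]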
I have a Memorystore instance:
gcloud redis instances list --region europe-west1
INSTANCE_NAME VERSION REGION TIER SIZE_GB HOST PORT NETWORK RESERVED_IP STATUS CREATE_TIME
sm-cache REDIS_4_0 europe-west1 BASIC 1 10.1.1.3 6379 default 10.1.1.0/28 READY 2019-05-30T19:03:29
and an App Engine standard application running in the same region.
A VPC connector is required to connect. I tried adding one, without luck. What CIDR should be used for such a connection? The same range as the Memorystore reservation does not work:
gcloud beta compute networks vpc-access connectors describe sm-01-vpc --region europe-west1
ipCidrRange: 10.1.1.0/28
maxThroughput: 1000
minThroughput: 200
name: projects/salesmanago-data-01/locations/europe-west1/connectors/sm-01-vpc
network: default
state: ERROR
What IP should I use in the Spring Boot configuration? Any suggestions? This is not clearly described in the docs and tutorials.
So far I am getting this error in the application:
Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to 10.1.1.3:6379
What CIDR should be used for such a connection? The same range as the Memorystore reservation does not work:
Use an IP range that does not exist in your VPC network and is different from the one Memorystore uses.
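For example, recreating the connector with an unused range (10.8.0.0/28 here is just an example; pick any /28 that overlaps neither your VPC subnets nor the Memorystore reserved range):
gcloud beta compute networks vpc-access connectors create sm-01-vpc --network default --region europe-west1 --range 10.8.0.0/28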
What IP should I use in the Spring Boot configuration?
The IP shown by gcloud redis instances list --region europe-west1 (the HOST column).
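In application.properties that would be something like (host taken from the listing above):
spring.redis.host=10.1.1.3
spring.redis.port=6379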
BTW, Serverless VPC Access seems to only work in us-central1 as of now; I am not sure whether it works in europe-west1.
I have 2 accounts on AWS. On the first account I have created a permanent EC2 instance with a "dbSG" security group (which only allows connections on a specific port from specific IPs).
When I create an instance in second account using CloudFormation template it should:
Add this instance's IP to the "dbSG" security group and allow connections on a specific port.
Connect to the first instance on this port.
Can I use AssumeRole in UserData when creating the instance in the second account and modify "dbSG" to allow connections from this instance? If yes, how can it be done step by step?
For EC2-Classic
The CLI help for ec2 authorize-security-group-ingress has this example:
To add a rule that allows inbound HTTP traffic from a security group in another account
This example enables inbound traffic on TCP port 80 from a source security group (otheraccountgroup) in a different AWS account (123456789012). If the command succeeds, no output is returned.
Command:
aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 80 --source-group otheraccountgroup --group-owner 123456789012
So, provided that you know the Security Group ID of the "appSG", with credentials from the "db" account:
aws ec2 authorize-security-group-ingress --group-name dbSG --protocol tcp --port 1234 --source-group appSG --group-owner XXX-APP-ACCOUNT-ID
Via CloudFormation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-security-group-rule.html#cfn-ec2-security-group-rule-sourcesecuritygroupownerid
Unfortunately, this seems not to work with Instances in a VPC, but only EC2-Classic.
For EC2-VPC: The user-data way
In the "db" account, add a Role to your CF template, specifying a Trust Policy that allows such role to be assumed by a specific role in another AWS account:
(replace XXX-... with your own values)
'RoleForOtherAccount': {
'Type': 'AWS::IAM::Role',
'Properties': {
'AssumeRolePolicyDocument': {
'Version': '2012-10-17',
'Statement': [{
'Effect': 'Allow',
'Principal': {
'AWS': "arn:aws:iam::XXX-OTHER-AWS-ACCOUNT-ID:role/XXX-ROLE-NAME-GIVEN-TO-APP-INSTANCES"
},
'Action': ['sts:AssumeRole']
}]
},
'Path': '/',
'Policies': [{
'PolicyName': 'manage-sg',
'PolicyDocument': {
'Version': '2012-10-17',
'Statement': [
{
'Effect': 'Allow',
'Action': [
'ec2:AuthorizeSecurityGroupIngress'
],
'Resource': '*'
}
]
}
}]
}
}
Then, on the "app" instance you can add the following User-data script (via CloudFormation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-userdata)
#!/bin/bash
# get current public IP address via EC2 meta-data
MY_IP=$(wget -qO- http://instance-data/latest/meta-data/public-ipv4)
# assume the "db" account role and get the credentials
CREDENTIALS_JSON=$(aws sts assume-role --role-arn XXX-ARN-OF-ROLE-IN-DB-ACCOUNT --role-session-name "AppSessionForSGIngress" --query '{"AWS_ACCESS_KEY_ID": Credentials.AccessKeyId, "AWS_SECRET_ACCESS_KEY": Credentials.SecretAccessKey, "AWS_SESSION_TOKEN": Credentials.SessionToken }')
# extract the temporary credentials from $CREDENTIALS_JSON (assumes jq is available and the CLI default output format is json)
export AWS_ACCESS_KEY_ID=$(echo "$CREDENTIALS_JSON" | jq -r .AWS_ACCESS_KEY_ID)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDENTIALS_JSON" | jq -r .AWS_SECRET_ACCESS_KEY)
export AWS_SESSION_TOKEN=$(echo "$CREDENTIALS_JSON" | jq -r .AWS_SESSION_TOKEN)
# authorize the IP (the AWS CLI picks up the exported temporary credentials)
aws --region XXX-DB-REGION ec2 authorize-security-group-ingress --group-id sg-XXX --protocol tcp --port 1234 --cidr $MY_IP/32
The IAM Role of the "app" instance must allow calls to sts:AssumeRole.
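A minimal policy statement for that, reusing the placeholder from above, would be:
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "XXX-ARN-OF-ROLE-IN-DB-ACCOUNT"
    }]
}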
Caveat: if you stop and restart the instance, its public IP will change (unless you've assigned an Elastic IP). Since user-data scripts are executed only during the first launch, your dbSG wouldn't get updated.
via Lambda
You could also use a Lambda function triggered by CloudTrail or AWS Config, although this is a bit tricky: Run AWS Lambda code when creating a new AWS EC2 instance
This way, you can also track calls to StopInstance and StartInstance and update (revoke/authorize) the dbSG rules in a more robust way.
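As a rough sketch (an assumption on my part, not something from the linked answer), a CloudWatch Events/EventBridge rule with a pattern like this could invoke such a function on instance state changes:
{
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {
        "state": ["running", "stopped"]
    }
}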
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal
It appears that your situation is:
Two VPCs, let's call them: VPC-A and VPC-B
Each VPC is owned by a different AWS account
Instance-A exists in VPC-A, with Security Group dbSG
When launching Instance-B in VPC-B, allow it to access Instance-A
The simplest method to achieve this is via VPC Peering, which permits direct communication between two VPCs in the same region. The VPCs can belong to different AWS accounts, but must have non-overlapping IP address ranges.
The process would be:
VPC-A invites VPC-B to peer
VPC-B accepts the invitation
Update routing tables in both VPCs to send traffic to each other
Create a security group in VPC-B, eg appSG
Modify dbSG to permit incoming connections from appSG
Associate Instance-B (and any other instances that should communicate with Instance-A) with the appSG security group
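As a CLI sketch of the first three steps above (all IDs and the CIDR are placeholders, and the accept command must run with credentials from the peer account):
aws ec2 create-vpc-peering-connection --vpc-id vpc-XXX-A --peer-vpc-id vpc-XXX-B --peer-owner-id XXX-ACCOUNT-B-ID
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-XXX
aws ec2 create-route --route-table-id rtb-XXX-A --destination-cidr-block 10.2.0.0/16 --vpc-peering-connection-id pcx-XXX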
That's it! Security works the same way between peered VPCs as within a VPC. The only difference is that the instances are in separate VPCs that have been peered together.
See: Working with VPC Peering Connections