My Redshift cluster is in a private VPC. I've written the following AWS Lambda function in Node.js which should connect to Redshift (stripped down for this question):
'use strict';
console.log('Loading function');

const pg = require('pg');

exports.handler = (event, context, callback) => {
    var client = new pg.Client({
        user: 'myuser',
        database: 'mydatabase',
        password: 'mypassword',
        port: 5439,
        host: 'myhost.eu-west-1.redshift.amazonaws.com'
    });

    // connect to our database
    console.log('Connecting...');
    client.connect(function (err) {
        if (err) throw err;
        console.log('CONNECTED!!!');
    });
};
Unfortunately, I keep getting "Task timed out after 60.00 seconds". I see "Connecting..." in the logs, but never "CONNECTED!!!".
Steps I've taken so far to get this to work:
As per Connect Lambda to Redshift in Different Availability Zones, I have the Redshift cluster and the Lambda function in the same VPC
The Redshift cluster and the Lambda function are also on the same subnet
The Redshift cluster and the Lambda function share the same security group
Added an inbound rule to the security group of the Redshift cluster as per the suggestion here (https://github.com/awslabs/aws-lambda-redshift-loader/issues/86)
The IAM role associated with the Lambda function has the following policies: AmazonDMSRedshiftS3Role, AmazonRedshiftFullAccess, AWSLambdaBasicExecutionRole, AWSLambdaVPCAccessExecutionRole, AWSLambdaENIManagementAccess, scrambled together from this source: http://docs.aws.amazon.com/lambda/latest/dg/vpc.html (I realize I have some overlap here, but figured it shouldn't matter)
Added the Elastic IP to the inbound rules of the Redshift cluster's security group as per an answer to a question listed above (even though I don't even have a NAT gateway configured in the subnet)
I don't have Enhanced VPC Routing enabled because I figured that I don't need it.
Even tried adding an inbound rule for 0.0.0.0/0, all types, all protocols, all ports to the security group (following this question: Accessing Redshift from Lambda - Avoiding the 0.0.0.0/0 Security Group). But same issue!
So, does anyone have any suggestions as to what I should check?
I should add that I am not a network expert, so perhaps I've made a mistake somewhere.
The timeout is probably because your Lambda in a VPC cannot access the Internet in order to connect to your cluster (you seem to be using the public hostname to connect). Your connection options depend on your cluster configuration. Since both your Lambda function and your cluster are in the same VPC, you should use the private IP of your cluster to connect to it. In your case, I think simply using the private IP should solve your problem.
Depending on whether your cluster is publicly accessible, there are some points to keep in mind.
If your cluster is configured to NOT be publicly accessible, you can use the private IP to connect to the cluster from your lambda running in a VPC and it should work.
If you have a publicly accessible cluster in a VPC, and you want to connect to it by using the private IP address from within the VPC, make sure the following VPC parameters are set to true/yes:
DNS resolution
DNS hostnames
The steps to verify/change these settings are given here.
If you do not set these parameters to true, connections from within the VPC will resolve to the EIP instead of the private IP, and your Lambda won't be able to connect without Internet access (which would require a NAT gateway or a NAT instance).
Also, an important note from the documentation here.
If you have an existing publicly accessible cluster in a VPC,
connections from within the VPC will continue to use the EIP to
connect to the cluster even with those parameters set until you resize
the cluster. Any new clusters will follow the new behavior of using
the private IP address when connecting to the publicly accessible
cluster from within the same VPC.
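To illustrate the private-IP approach from this answer, here is a minimal sketch of the handler from the question pointed at the cluster's private address, with an explicit connection timeout so errors surface in the logs instead of the function silently hitting the Lambda timeout. The host value is a placeholder, and connectionTimeoutMillis is just a suggested safeguard supported by node-postgres:

'use strict';
const pg = require('pg');

exports.handler = (event, context, callback) => {
    const client = new pg.Client({
        user: 'myuser',
        database: 'mydatabase',
        password: 'mypassword',
        port: 5439,
        host: '10.0.1.25',                 // private IP (or private endpoint) of the cluster; placeholder
        connectionTimeoutMillis: 10000     // fail fast instead of waiting for the Lambda timeout
    });

    console.log('Connecting...');
    client.connect(function (err) {
        if (err) {
            return callback(err);          // surface the real connection error in the logs
        }
        console.log('CONNECTED!!!');
        client.end(function () {
            callback(null, 'connected');
        });
    });
};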
My issue got resolved after adding the CIDR range of the VPC to the inbound rules of the Redshift cluster's security group.
For those who are trying to move to Redshift Serverless due to its recent release to the public... this may be a common issue, but at least for me the answer from @pcothenet worked:
For what it's worth, I had a similar issue. My problem was that I had
set the lambda to have access to my public subnets only. My public
subnet is routing all outbound traffic to an internet gateway, while
my private subnets are routing outbound traffic via an NAT Gateway.
But according to the doc "You cannot use an Internet gateway attached
to your VPC, since that requires the ENI to have public IP addresses."
Switching the lambda to the private subnets (and therefore using the
NAT Gateway) solved the problem. – pcothenet
You must use the cluster endpoint to connect.
I had this same issue and followed the steps above, and I found that in my case the problem was that the Lambda was in a subnet that did not have a route to the NAT gateway. So I moved the Lambda into a subnet with a route to the NAT gateway.
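If you'd rather make that switch with the AWS SDK for JavaScript v3 than through the console, here is a hedged sketch (the region, function name, subnet and security group IDs are placeholders):

const { LambdaClient, UpdateFunctionConfigurationCommand } = require('@aws-sdk/client-lambda');

const lambda = new LambdaClient({ region: 'eu-west-1' });

// Point the function at the private subnet(s) whose route table sends 0.0.0.0/0 to the NAT gateway.
async function moveToPrivateSubnet() {
    await lambda.send(new UpdateFunctionConfigurationCommand({
        FunctionName: 'my-function',
        VpcConfig: {
            SubnetIds: ['subnet-0privatePLACEHOLDER'],
            SecurityGroupIds: ['sg-0lambdaPLACEHOLDER']
        }
    }));
}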
Related
I'm trying to access an external MySQL database (not AWS RDS), and I need to have a static IP in order to open up the firewall to accept connections. Is it possible to set a static IP for a Lambda function? If not, what are some other options?
In order to do that, you need to deploy your Lambda function into a VPC and, within the VPC, provide a NAT Gateway. Then assign an Elastic IP (static IP) to the NAT Gateway. These two links describe it step by step:
AWS: How to Create a Static IP Address Using a NAT Gateway (Medium)
How do I give internet access to my Lambda function in a VPC? (AWS Knowledge Center)
I have to do this every year or two and always forget how to do it :) Fortunately, I've discovered that AWS now has a wizard that steps you through this process: https://ap-southeast-2.console.aws.amazon.com/vpc/home?region=ap-southeast-2#wizardFullpagePublicAndPrivate:
The wizard didn't pick up my Elastic IP Allocation ID, so I had to manually paste it in from the Elastic IP section of the VPC console, but after that everything works. https://ap-southeast-2.console.aws.amazon.com/vpc/home?region=ap-southeast-2#Addresses:sort=PublicIp
Then you just set up your lambda function to use that VPC. The only remaining gotcha is to select the Private Subnet that the wizard created rather than the public subnet (of course).
If you are deploying your Lambda functions using SAM rather than the console, you can direct your function to use the VPC by including Policies and VpcConfig sections in your SAM template, as shown below.
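Something along these lines (a rough sketch only; the logical name, handler, runtime, subnet and security group IDs are placeholders for your own values):

Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./src
      Policies:
        - AWSLambdaVPCAccessExecutionRole      # lets the function create and manage its ENIs in the VPC
      VpcConfig:
        SecurityGroupIds:
          - sg-0123456789abcdef0               # placeholder
        SubnetIds:
          - subnet-0123456789abcdef0           # the wizard's private subnet(s); placeholder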
In another year or two when I have to do this again, I'll hopefully find this answer :)
No, this is not possible.
What you should do instead (see the scripted sketch after this list):
deploy the Lambda function into the private subnet of a VPC
deploy a NAT Gateway (or NAT instance) into a public subnet of the VPC
deploy an Internet Gateway into the VPC
give the NAT an Elastic IP
make the NAT be the default route for the Lambda subnet
whitelist the NAT's Elastic IP at the remote firewall
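If you prefer to script those pieces, here is a hedged sketch using the AWS SDK for JavaScript v3 (@aws-sdk/client-ec2). The region, subnet and route table IDs are placeholders, and it assumes the VPC already has an Internet Gateway and a public subnet:

const {
    EC2Client,
    AllocateAddressCommand,
    CreateNatGatewayCommand,
    CreateRouteCommand
} = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'eu-west-1' });

async function addNatForLambdaSubnet(publicSubnetId, privateRouteTableId) {
    // 1. Allocate an Elastic IP; this becomes the static outbound address.
    const eip = await ec2.send(new AllocateAddressCommand({ Domain: 'vpc' }));

    // 2. Create the NAT Gateway in a public subnet and attach the Elastic IP.
    const nat = await ec2.send(new CreateNatGatewayCommand({
        SubnetId: publicSubnetId,
        AllocationId: eip.AllocationId
    }));

    // 3. Make the NAT Gateway the default route for the Lambda's private subnet.
    //    (The gateway takes a few minutes to become available before this route works.)
    await ec2.send(new CreateRouteCommand({
        RouteTableId: privateRouteTableId,
        DestinationCidrBlock: '0.0.0.0/0',
        NatGatewayId: nat.NatGateway.NatGatewayId
    }));

    return eip.PublicIp;   // whitelist this address at the remote firewall
}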
I have a single AWS lambda function that connects to a single AWS RDS Postgres db and simply returns a json list of all records in the db.
If I don't assign a VPC to the lambda function, it is able to access the AWS RDS db. However, if I assign a VPC to the lambda function it can no longer access the db.
The VPC is the same for both the Lambda function and the RDS db. I've also temporarily opened all inbound and outbound traffic to 0.0.0.0/0 on all ports to find the issue, but I am still unable to connect.
I believe it might be a role permission related to VPC for the lambda function, but I've already assigned the policy AmazonVPCFullAccess to the lambda role.
The fact that the Lambda can access the DB when it is not in a VPC is a bit troubling, in the sense that the DB is then probably public.
A common mistake is deploying the Lambda to a public subnet. Lambdas only get assigned private IP addresses in a VPC. When deployed to a public subnet, its only route to the Internet is the Internet gateway. That doesn't work when the Lambda itself only has a private IP address (the Internet couldn't route traffic back to you :P).
One part of the solution is to make sure your Lambda is deployed to a private subnet instead, with a route to a NAT gateway if it needs access to public resources.
However, the better part of the solution is to actually put the database in a private subnet WITHOUT a public IP address.
Because I've seen many mistakes with this among my customers, and because it can't be stressed enough: I'd strongly suggest you follow a three-tier networking model with your VPCs. This basically means:
Don't use the default VPC. Create your own.
Create 9 subnets (one per tier in each of 3 Availability Zones):
3 public
3 private. Put your Lambdas here.
3 isolated. Put your database here.
There are lots of articles/templates available that do this for you. A quick Google search gives me:
https://github.com/aws-samples/vpc-multi-tier
https://www.wellarchitectedlabs.com/reliability/100_labs/100_deploy_cloudformation/1_deploy_vpc/
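If you would rather define that layout yourself in code, a minimal CDK sketch (assuming aws-cdk-lib v2 in JavaScript; the construct names and NAT gateway count are assumptions) could look like this:

const { Stack } = require('aws-cdk-lib');
const ec2 = require('aws-cdk-lib/aws-ec2');

class ThreeTierVpcStack extends Stack {
    constructor(scope, id, props) {
        super(scope, id, props);

        // One VPC, three tiers, three Availability Zones (9 subnets total).
        new ec2.Vpc(this, 'AppVpc', {
            maxAzs: 3,
            natGateways: 1,   // one NAT gateway keeps the sketch cheap; use one per AZ for high availability
            subnetConfiguration: [
                { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
                { name: 'private', subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },   // Lambdas go here
                { name: 'isolated', subnetType: ec2.SubnetType.PRIVATE_ISOLATED }      // database goes here
            ]
        });
    }
}

module.exports = { ThreeTierVpcStack };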
I set up an Aurora database (provisioned) in a newly created VPC with no public accessibility. As I want to run a Lambda function in the VPC that can access both the RDS instances and the Internet, I changed the routing tables of the RDS instances to allow traffic from a NAT gateway, which I placed in a public subnet in the same VPC.
For the Lambda function itself, I created a separate private subnet, also only allowing traffic from the NAT gateway in the routing table. I assigned this subnet and VPC to the Lambda function in the Lambda settings. The Internet connection works fine with this configuration, but I cannot access the database. That's why I followed this post (https://serverfault.com/questions/941886/connect-an-aws-lambda-function-triggered-by-api-gateway-to-aurora-serverless-mys) and added the IP CIDR of the Lambda subnet to the security group of the RDS instances (called rds-launch-wizard).
Still, the Lambda function is able to interact with the public Internet but cannot connect to the RDS instances (timeout). I'm running out of ideas; what is wrong here?
The configuration should be:
A Public subnet with a NAT Gateway (and, by definition, an Internet Gateway)
A Private subnet with the Amazon RDS instance
The same, or a different, Private Subnet associated with the Lambda function
The Private Subnet(s) configured with a Route Table with a destination of 0.0.0.0/0 to the NAT Gateway
Then consider the Security Groups:
A security group for the Lambda function (Lambda-SG) that permits all outbound access
A security group for the RDS instance (RDS-SG) that should permit inbound access from Lambda-SG on the appropriate database port
That is, RDS-SG is allowing incoming traffic from Lambda-SG (by name). There is no need to use CIDRs in the security group.
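As a sketch of what that rule could look like via the AWS SDK for JavaScript v3 (the region and group IDs are placeholders, and port 5432 assumes a PostgreSQL engine):

const { EC2Client, AuthorizeSecurityGroupIngressCommand } = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'eu-west-1' });

// Allow inbound database traffic to RDS-SG from Lambda-SG, referencing the group rather than a CIDR.
async function allowLambdaToRds(rdsSgId, lambdaSgId) {
    await ec2.send(new AuthorizeSecurityGroupIngressCommand({
        GroupId: rdsSgId,
        IpPermissions: [{
            IpProtocol: 'tcp',
            FromPort: 5432,
            ToPort: 5432,
            UserIdGroupPairs: [{ GroupId: lambdaSgId }]
        }]
    }));
}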
The Lambda function will connect to a private subnet via an Elastic Network Interface (ENI) and will be able to communicate both with the RDS instance (directly) and with the Internet (via the NAT Gateway).
Please note that you are not directing "traffic from the NAT Gateway". Rather, you are directing Internet-bound traffic to the NAT Gateway. Nor is there such a thing as "routing tables of the RDS instances" because the Route Tables are associated with subnets, not RDS.
I'm running into a very weird problem with AWS lambdas. I've created two lambdas, A and B. Both are configured the exact same way. Both need to access the Internet and DynamoDB. I'm trying to move them into a new VPC. The lambdas are in subnets because I'm using DAX.
The old VPC has 2 private subnets split /24. The new one has 3 private subnets split /20. Both VPCs have a public subnet, split /24 and /20 respectively. The route table for the private subnets has a route to a NAT gateway, and the public subnets route through an IGW.
Security groups for both have inbound rules allowing all traffic from the source security group plus a custom TCP rule for port 8111, and outbound rules allowing 0.0.0.0/0.
Moving lambda A to the new VPC works fine. It can access both the Internet and DynamoDB. That tells me that the NAT and IGW are configured correctly (I think).
Moving lambda B to the new VPC fails. It can't access either the Internet or DynamoDB.
Moving either lambda back to the old VPC works. The cluster endpoint for DAX isn't hardcoded in either lambda, so there isn't an issue where I'm not changing the code correctly.
Moving a lambda entails changing the VPC, the subnets, and security groups to match the correct VPC. And changing the DAX cluster endpoint.
Both the old and new VPCs have endpoints set up for DynamoDB for both the public and private subnets.
Any thoughts on what I should be looking at?
How can I make an EC2 instance communicate with an RDS instance on AWS by internal IP address or DNS?
I only see a public DNS name like xxx.cehmrvc73g1g.eu-west-1.rds.amazonaws.com:3306
Will the internal IP address be faster than the public DNS?
Thanks
A note for posterity: ensure that you enable DNS on the VPC peering link!
Enabling DNS Resolution Support for a VPC Peering Connection
To enable a VPC to resolve public IPv4 DNS hostnames to private IPv4
addresses when queried from instances in the peer VPC, you must modify
the peering connection.
Both VPCs must be enabled for DNS hostnames and DNS resolution.
To enable DNS resolution support for the peering connection
Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
In the navigation pane, choose Peering Connections.
Select the VPC peering connection, and choose Actions, Edit DNS
Settings.
To ensure that queries from the peer VPC resolve to private IP
addresses in your local VPC, choose the option to enable DNS
resolution for queries from the peer VPC.
If the peer VPC is in the same AWS account, you can choose the option
to enable DNS resolution for queries from the local VPC. This ensures
that queries from the local VPC resolve to private IP addresses in the
peer VPC. This option is not available if the peer VPC is in a
different AWS account.
Choose Save.
If the peer VPC is in a different AWS account, the owner of the peer
VPC must sign into the VPC console, perform steps 2 through 4, and
choose Save.
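For reference, the same settings can be applied with the AWS SDK for JavaScript v3 (@aws-sdk/client-ec2). This is only a hedged sketch; the region, VPC ID and peering connection ID are placeholders:

const {
    EC2Client,
    ModifyVpcAttributeCommand,
    ModifyVpcPeeringConnectionOptionsCommand
} = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'eu-west-1' });

async function enablePeeringDns(vpcId, peeringConnectionId) {
    // Both VPCs must have DNS support and DNS hostnames enabled
    // (ModifyVpcAttribute accepts only one attribute per call).
    await ec2.send(new ModifyVpcAttributeCommand({ VpcId: vpcId, EnableDnsSupport: { Value: true } }));
    await ec2.send(new ModifyVpcAttributeCommand({ VpcId: vpcId, EnableDnsHostnames: { Value: true } }));

    // Then enable DNS resolution over the peering connection itself.
    await ec2.send(new ModifyVpcPeeringConnectionOptionsCommand({
        VpcPeeringConnectionId: peeringConnectionId,
        RequesterPeeringConnectionOptions: { AllowDnsResolutionFromRemoteVpc: true },
        // Only settable from the account that owns the accepter VPC:
        AccepterPeeringConnectionOptions: { AllowDnsResolutionFromRemoteVpc: true }
    }));
}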
You can use the "Endpoint" DNS name. It will resolve to the internal IP when used within the VPC and resolves to a public ip when used outside of your AWS network. You should never use the actual IP address because the way the RDS works it could possibly change in the future.
If you ping it from your EC2 (on the same VPC) server you can verify this.
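For example, a quick Node.js check run from an EC2 instance in the same VPC (the endpoint below is the placeholder from the question) should print a private address such as 10.x.x.x or 172.31.x.x:

const dns = require('dns').promises;

const endpoint = 'xxx.cehmrvc73g1g.eu-west-1.rds.amazonaws.com';   // RDS endpoint from the question

dns.lookup(endpoint).then(({ address }) => {
    console.log(`${endpoint} resolves to ${address}`);             // private IP inside the VPC, public IP outside
});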
It is amazing to see the amount of downvotes I've got, given that my answer is the only correct answer. Here are two other sources:
https://forums.aws.amazon.com/thread.jspa?threadID=70112
You can use the "Endpoint" DNS name. It will resolve to the internal IP when used within EC2.
https://serverfault.com/questions/601548/cant-find-the-private-ip-address-for-my-amazon-rds-instance2
The DNS endpoint provided in the AWS console will resolve to the internal IPs from within Amazon's network.
Check out the AWS EC2 docs: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-private-addresses.
It doesn't appear that this necessarily applies to RDS, however.
When resolving your RDS instance from within the same VPC the internal IP is returned by the Amazon DNS service.
If the RDS instance is externally accessible, you will see the external IP from outside the VPC. However, if the RDS instance is NOT publicly accessible, the internal IP address is returned for both external and internal lookups.
Will the internal IP address be faster than the external address supplied by public DNS?
Most likely, as the packets will need to be routed when using the external addresses, increasing latency.
It also requires that your EC2 instances have a public IP or NAT gateway, along with appropriate security groups and routes, increasing cost, increasing complexity and reducing security.
It's pretty easy: telnet your RDS endpoint using the command prompt on Windows or a Unix terminal.
For example: telnet "your RDS endpoint" "port"
The "Trying ..." line shown while it connects gives you your RDS internal IP.