AWS Lambda timeout when connecting to Redis ElastiCache in the same VPC

I'm trying to publish from a Lambda function to a Redis ElastiCache cluster, but I just keep getting 502 Bad Gateway responses with the Lambda function timing out.
I have successfully connected to the ElastiCache instance from an ECS task in the same VPC, which leads me to think that the VPC settings for my Lambda are not correct. I tried following this tutorial (https://docs.aws.amazon.com/lambda/latest/dg/services-elasticache-tutorial.html) and have looked at several StackOverflow threads to no avail.
The Lambda Function:
import * as redis from 'redis';
import { promisify } from 'util';
// ApiResponse, Logger, and InvalidRequestError are project-specific helpers.

export const publisher = redis.createClient({
  url: XXXXXX, // env var containing the URL which is also used in the ECS server to successfully connect
});

export const handler = async (
  event: AWSLambda.APIGatewayProxyWithCognitoAuthorizerEvent
): Promise<AWSLambda.APIGatewayProxyResult> => {
  try {
    if (!event.body || !event.pathParameters || !event.pathParameters.channelId)
      return ApiResponse.error(400, {}, new InvalidRequestError());

    const { action, payload } = JSON.parse(event.body) as {
      action: string;
      payload?: { [key: string]: string };
    };
    const { channelId } = event.pathParameters;

    const publishAsync = promisify(publisher.publish).bind(publisher);
    await publishAsync(
      channelId,
      JSON.stringify({
        action,
        payload: payload || {},
      })
    );

    return ApiResponse.success(204);
  } catch (e) {
    Logger.error(e);
    return ApiResponse.error();
  }
};
In my troubleshooting, I have verified the following in the Lambda functions console:
The correct role is showing in Configuration > Permissions
The lambda function has access to the VPC (Configuration > VPCs), Subnets, and the same SG as the Elasticache instance.
The SG is allowing all traffic from anywhere.
It is indeed the Redis connection: using console.log statements, I can see the code stops at this line: await publishAsync()
I am sure it is something small, but it is racking my brain!
Update 1:
Tried adding an error handler to log any issues with the publish in addition to the main try/catch block, but it's not logging a thing.
publisher.on('error', (e) => {
  Logger.error(e, 'evses-service', 'message-publisher');
});
I have also copied my ElastiCache setup, my ElastiCache subnet group, my Lambda VPC settings, and my Lambda's access (screenshots not reproduced here).
Update 2:
Tried to follow the tutorial here (https://docs.aws.amazon.com/lambda/latest/dg/services-elasticache-tutorial.html) word for word, but getting the same issue. No logs, just a timeout after 30 seconds. Here is the test code:
const crypto = require('crypto');
const redis = require('redis');
const util = require('util');

const client = redis.createClient({
  url: 'rediss://clusterforlambdatest.9nxhfd.0001.use1.cache.amazonaws.com',
});

client.on('error', (e) => {
  console.log(e);
});

exports.handler = async (event) => {
  try {
    const len = 24;
    const randomString = crypto
      .randomBytes(Math.ceil(len / 2))
      .toString('hex') // convert to hexadecimal format
      .slice(0, len)
      .toUpperCase();

    const setAsync = util.promisify(client.set).bind(client);
    const getAsync = util.promisify(client.get).bind(client);

    await setAsync(randomString, 'We set this string bruh!');
    const doc = await getAsync(randomString);

    console.log(`Successfully received document ${randomString} with contents: ${doc}`);
    return;
  } catch (e) {
    console.log(e);
    return {
      statusCode: 500,
    };
  }
};

If you get a timeout, assuming the Lambda networking is configured correctly, you should check the following:
Redis SSL configuration: check for mismatches between the rediss:// connection URL and the cluster configuration (in-transit encryption enabled on the cluster, and the client configured with tls: {})
Configure the client with a specific retry strategy so connection issues are caught and logged instead of silently hitting the Lambda timeout (see the sketch below)
Check the VPC network ACLs and security groups
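For example, a minimal sketch of such a client, assuming node-redis v3 (which the promisify-based code above implies) and a cluster with in-transit encryption enabled; REDIS_URL is a placeholder:

import { createClient } from 'redis';

const publisher = createClient({
  url: process.env.REDIS_URL, // e.g. rediss://my-cluster.xxxxxx.0001.use1.cache.amazonaws.com:6379
  tls: {}, // required by the client when in-transit encryption is enabled on the cluster
  retry_strategy: (options) => {
    // Fail fast so the connection error surfaces in the logs instead of the
    // Lambda silently running until its own timeout.
    if (options.attempt > 3) {
      return new Error('Could not connect to Redis after 3 attempts');
    }
    return Math.min(options.attempt * 200, 1000); // back off, capped at 1 second
  },
});

publisher.on('error', (e) => console.error('Redis connection error', e));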

I had the same issue with my ElastiCache cluster; here are a few findings:
Check the number of client connections to ElastiCache and the resources being used
Check the VPC subnet and CIDR ranges for the nodes' security group
Try increasing the timeout for the Lambda and see which service is taking more time to respond, the Lambda or ElastiCache

Related

Write an AWS Lambda to query a Neptune DB with openCypher using aws-sdk v3

I have a Lambda with Node.js 18 runtime, in which I would like to send an openCypher query to an AWS Neptune DB.
My Lambda is using an IAM role with these policies:
NeptuneFullAccess
AWSLambdaBasicExecutionRole
AmazonSSMReadOnlyAccess
(The last policy will be used to fetch the Neptune endpoint from the SSM parameters store later).
I'm trying to find the @aws-sdk/client-neptune method to submit the query, but I couldn't find any in the library on GitHub.
This is frustrating, as I've been struggling for days to find a simple way to use aws-sdk v3 from a Node.js 18 Lambda to do the simple task of querying the Neptune DB.
Here is my current skeleton of the Lambda:
import { NeptuneClient } from "@aws-sdk/client-neptune";

export async function handler() {
  const neptuneEndpoint = "https://<my-db-instance>.us-east-1.neptune.amazonaws.com";
  const neptune = new NeptuneClient({
    endpoint: neptuneEndpoint,
    region: "us-east-1",
  });
  const cypher = `MATCH (n) RETURN n`;
  const query = {
    Gremlin: cypher
  };
  const command = {
    GremlinCommand: query,
  };
  const result = await neptune.send(command).promise();
  console.log(result);
  return result;
}
Can anyone please help me turn this into a working Lambda?
The client you are using only exposes Control Plane actions, such as Creating/Modifying cluster instances, and is not meant to be used to query Neptune. For openCypher, the recommendation is to query Neptune using the HTTPS endpoint as described here.
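For illustration, a minimal sketch (not from the answer) of querying the openCypher HTTPS endpoint directly from a Node.js 18 Lambda using the built-in fetch; NEPTUNE_ENDPOINT is a placeholder, the Lambda must run in the Neptune VPC, and IAM database authentication (if enabled) would additionally require SigV4 signing:

export async function handler() {
  const endpoint = `https://${process.env.NEPTUNE_ENDPOINT}:8182/openCypher`;

  // Neptune's openCypher endpoint accepts the query as a form-encoded 'query' parameter.
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ query: 'MATCH (n) RETURN n LIMIT 10' }),
  });

  const result = await response.json();
  console.log(JSON.stringify(result));
  return result;
}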
I have created a sample Neptune db and sample lambda. I have also used Lambda layers to install necessary gremlin dependencies. Please find the lambda code and lambda response screenshots below
Lambda code
const gremlin = require('gremlin');

const DriverRemoteConnection = gremlin.driver.DriverRemoteConnection;
const Graph = gremlin.structure.Graph;

const dc = new DriverRemoteConnection('wss://<db cluster name>.ap-south-1.neptune.amazonaws.com:8182/gremlin', {});
const graph = new Graph();
const g = graph.traversal().withRemote(dc);

const { t: { id } } = gremlin.process;
const { cardinality: { single } } = gremlin.process;

const createVertex = async (vertexId, vlabel) => {
  const vertex = await g.addV(vlabel)
    .property(id, vertexId)
    .property(single, 'name', 'lambda')
    .property('lastname', 'Testing') // default database cardinality
    .next();
  return vertex.value;
};

createVertex('sampledata1', 'testing');

exports.handler = async (event) => {
  try {
    const results = await g.V().hasLabel("sampledata1").properties("name");
    console.log("--------", results);
    return results;
  } catch (error) {
    // error handling.
    console.log("---error---", error);
    return error;
  }
};
Note:
Replace the db endpoint with your own db endpoint.
Sample response from the Lambda (screenshot not reproduced here).

AWS Lambda function can't query elastic search running on ec2 instance

I have a Lambda function which is trying to read data from Elasticsearch running on an EC2 machine. Both of them are in the same VPC and subnet and have the same security group assigned. The Lambda can't seem to reach the Elasticsearch instance.
const AWS = require('aws-sdk');
const elasticsearch = require('elasticsearch');

exports.handler = async function (event, context, callback) {
  const client = new elasticsearch.Client({
    host: 'public_dns:9200',
    httpAuth: 'user:password'
  });
  let self = this;
  client.search({
    index: 'index_name',
    scroll: '30s',
    size: 10000,
    body: {
      query: {
        match_all: {}
      }
    }
  })
    .then(response => {
      self.responseString = response.hits.hits;
      console.log(response.hits.hits);
    })
    .catch(error => {
      console.error(error);
    });
  const responseData = {
    statusCode: 200,
    body: JSON.stringify({
      message: self.responseString
    })
  };
  callback(null, responseData);
};
The error I get from the Lambda is:
2023-02-06T23:19:54.890Z fcd62836-4fe3-4c6a-9871-ee70668ba07c ERROR StatusCodeError: Request Timeout after 30000ms
at /var/task/node_modules/elasticsearch/src/lib/transport.js:397:9
at Timeout.<anonymous> (/var/task/node_modules/elasticsearch/src/lib/transport.js:429:7)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7) {
status: undefined,
displayName: 'RequestTimeout',
body: undefined
Any tips on how to debug this would be highly appreciated.
Note: I do not wish to make the Elasticsearch endpoint accessible to the public.
Merely putting two resources "in the same Security Group" does not mean that they are able to communicate with each other. In fact, resources are not 'inside' a Security Group -- rather they are associated with a Security Group.
Security Group rules are applied to each resource individually. This means that if both resources are associated with the same security group, there needs to be a specific rule that allows incoming access from the security group to itself.
Instead of using one Security Group, a preferred configuration would be:
A security group on the AWS Lambda function (lambda-SG) with the default "Allow All" outbound rules, and
A security group on the Amazon EC2 instance running Elasticsearch (elastic-SG) that permits Inbound access on port 9200 from lambda-SG
That is, elastic-SG specifically references lambda-SG when permitting the inbound access. This means that traffic from the Lambda function will be allowed to communicate with the EC2 instance on that port.
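As an illustration, a minimal sketch (not part of the original answer) of creating that ingress rule with the AWS SDK; the security group IDs and region are placeholders:

import * as AWS from 'aws-sdk';

const ec2 = new AWS.EC2({ region: 'us-east-1' }); // assumed region

async function allowLambdaToElasticsearch(lambdaSgId: string, elasticSgId: string) {
  // Allow inbound TCP 9200 on elastic-SG only from lambda-SG, not from 0.0.0.0/0.
  await ec2
    .authorizeSecurityGroupIngress({
      GroupId: elasticSgId,
      IpPermissions: [
        {
          IpProtocol: 'tcp',
          FromPort: 9200,
          ToPort: 9200,
          UserIdGroupPairs: [{ GroupId: lambdaSgId, Description: 'Elasticsearch from Lambda' }],
        },
      ],
    })
    .promise();
}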

How do I sign API requests (AWS SigV4) to Lambda behind Proxy & API Gateway?

I'm working on a project where we currently use Cognito User pools for auth, but after some research we found that if we want more fine-grained access control we should use an Identity pool instead.
The theory is simple : first we create an Identity Pool that uses the Cognito user pool as Auth provider. Then in API Gateway we set up our Lambda to use Authorizer: AWS_IAM. To access it, User now has to :
Sign in to User pool, which gives user a JWT Token.
Exchange that JWT Token with the Identity pool for temporary AWS Credentials (a sketch of this exchange follows the list).
Use those new credentials to sign API request to the protected Lambda.
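A minimal sketch of that exchange (my own illustration, assuming the aws-sdk v2 client used later in the answer; the pool IDs and region are placeholders):

import * as AWS from 'aws-sdk';

async function getTemporaryCredentials(idToken: string) {
  const cognitoIdentity = new AWS.CognitoIdentity({ region: 'eu-west-3' });
  const logins = {
    // Key is the user pool provider name, value is the user pool ID token (JWT) from step 1.
    'cognito-idp.eu-west-3.amazonaws.com/eu-west-3_XXXXXXXXX': idToken,
  };

  const { IdentityId } = await cognitoIdentity
    .getId({ IdentityPoolId: 'eu-west-3:00000000-0000-0000-0000-000000000000', Logins: logins })
    .promise();

  const { Credentials } = await cognitoIdentity
    .getCredentialsForIdentity({ IdentityId: IdentityId!, Logins: logins })
    .promise();

  return Credentials; // AccessKeyId, SecretKey, SessionToken, Expiration
}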
Steps 1 and 2 work fine, with a test user we manage to get the JWT Token and successfully exchange it for AWS credentials. They look like this (modified for security reasons):
awsAccessKey: ASIAZFDXSW29NWI3QZ01
awsSecretKey: B+DrYdPMFGbDd1VRLSPV387uHT715zs7IsvdNnDk
awsSessionToken: IQoJb3JpZ2luX2VjEA8aCWV1LXdlc3QtMyJHMEUCIQC4kHasZrfnaMezJkcPtDD8YizZlKESas/a5N9juG/wIQIgShWaOIgIc4X9Xrtlc+wiGuSC1AQNncwoac2vFkpJ3gkqxAQIWBAAGgw2NTI5NTE0MDE0MDIiDDuTZ1aGOpVffl3+XCqhBDmjCS3+1vSsMqV1GxZ96WMoIoEC1DMffPrBhc+NnBf94eMOI4g03M5gAm3uKAVCBkKO713TsQMaf4GOqqNemFC8LcJpKNrEQb+c+kJqqf7VWeWxveuGuPdHl1dmD2/lIc8giY0+q4Wgtbgs6i0/gR5HzdPfantrElu+cRNrn/wIq4Akf+aARUm14XsIgq7/1fT9aKSHpTgrnTLHeXLKOyf/lZ947XdH71IHDZXBUdwdPikJP/Rikwill6RRTVw7kGNOoacagCmmK7CD6uh9h0OnoW3Qw5df+zX5Z8U7U55AyQfEyzeB7bW3KH65yJn6sopegxIIFfcG2CLIvtb5cZYImAz/4BdnppYpsrEgLPUTvRAXn6KUa5sXgc5Vd7tJeRo5qpYckrR2qfbebsU+0361BCYK2HxGJqsUyt1GVsEoAosxofpn/61mYJXqfeR0ifCAgL7OMOquvlaUVXhHmnhWnUSIOUQ+XtRc+DxUDjwn5RPD7QTwLHIat7d4BI4gZJPAcMT9gZrBVO/iN88lk5R0M5LBzFwd5jiUW46H/G755I4e5ZHaT1I37TY3tbcObIFGVVNz5iHDpK/NePTJevKTshe8cYxXczOQgos4J/RsNpqouO9qRgT9JDyXjU3Etyxqm9RzbLYgV3fl5WwZl5ofVmrBsy3adq+088qEz5b9cogPgDggA/nQaPv7nAZHT8u0ct/hw230pmXUDGCutjOML2G6ZYGOoUCy+BitAN0SZOYWlbZlYomIGKMNQuXjV4z+S9CEW8VunqW4Rgl7rTba6xbI0DdX9upYEczeln6pTl+2UPEDYf6usayFfMsGDvJXesqC5EOtWco1Z8tem/wDQIH7ZbioQHZ7UJDd5ntUAruFveY7sXmKsQbtah/RB5W5HLYy19hCmyGpYMnVXxR0FcNGImsweNcprtw9MmQqy2SUK9V6Rwn1yIE6svfAT3NVyzp9ILbP/qSQLGHNhm4CNd8+EJZZa9rcmCbQiQ+iBJ8FW+AmRSCC4LiB1dhuH1KsFo88DyNhYdVf3py8XV4CDR7l+UyuZMrIQsERwx9JzwVBjfv9COT948mvyGTY
The issue is the signing. Our Lambda is behind a CloudFront proxy + API Gateway. Requests to e.g. john.dev.project.io are forwarded to the 'real' API origin at api.dev.project.io.
Using Postman and setting AWS Signature, the request doesn't work and gives following error :
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been\n'................................................................................................................................................................................................................................................................'\n\nThe String-to-Sign should have been\n'............................................................................'\n
We found, however, that by overriding the Host header to the real origin of the API, the request now works fine:
So it seems that since the custom URL we use and the original API URL are different, signatures don't match. The problem is that by default browsers don't allow you to override Host header for security reasons, so our front-end signed requests always fail.
Maybe the proxy is also modifying other headers before forwarding to origin, which would also invalidate the signature from my understanding...
Any help appreciated in solving this issue!
I was facing a similar issue when trying to make a signed request to an API Gateway endpoint behind an Akamai proxy.
The trick to solve it was indeed to generate a request as if you were sending it directly to the API Gateway URL, sign that request using sigv4 and then send that signed request to the proxy endpoint instead.
I've put together a simple NodeJS code to exemplify how to do this:
const AWS = require("aws-sdk");
const { HttpRequest } = require("@aws-sdk/protocol-http");
const { SignatureV4 } = require("@aws-sdk/signature-v4");
const { NodeHttpHandler } = require("@aws-sdk/node-http-handler");
const { Sha256 } = require("@aws-crypto/sha256-browser");

const REGION = "ca-central-1";
const PROXY_DOMAIN = "proxy.domain.com";
const PROXY_PATH = "/proxypath";
const API_GATEWAY_DOMAIN = "API-ID.execute-api.ca-central-1.amazonaws.com";
const API_GATEWAY_PATH = "/apigateway/path";
const IDENTITY_ID = "{{identity-pool-region}}:{{identity-pool-id}}";
const POOL_REGION = "{{identity-pool-region}}";
const REQUEST_BODY = { test: "test" };
const METHOD = "POST";

const updatedSignedRequestExample = async () => {
  try {
    const BODY = JSON.stringify(REQUEST_BODY);
    // Build the request as if it were going directly to the API Gateway domain.
    const request = new HttpRequest({
      body: BODY,
      headers: {
        "Content-Type": "application/json",
        host: API_GATEWAY_DOMAIN,
      },
      hostname: API_GATEWAY_DOMAIN,
      port: 443,
      method: METHOD,
      path: API_GATEWAY_PATH,
    });
    console.log("request", request);

    const credentials = await getCredentials();
    console.log(credentials);

    const signedRequest = await signRequest(request, credentials);
    console.log("signedRequest", signedRequest);

    // Only after signing, point the request at the proxy endpoint.
    const updatedSignedRequest = updateRequest(signedRequest);
    console.log("updatedSignedRequest", updatedSignedRequest);

    const response = await makeSignedRequest(updatedSignedRequest);
    console.log(response.statusCode + " " + response.body.statusMessage);
  } catch (error) {
    console.log(error);
  }
};

const getCredentials = async () => {
  const cognitoidentity = new AWS.CognitoIdentity({ region: POOL_REGION });
  const params = {
    IdentityId: IDENTITY_ID,
  };
  const response = await cognitoidentity
    .getCredentialsForIdentity(params)
    .promise();
  return {
    accessKeyId: response.Credentials.AccessKeyId,
    secretAccessKey: response.Credentials.SecretKey,
    sessionToken: response.Credentials.SessionToken,
    expiration: response.Credentials.Expiration,
  };
};

const signRequest = async (request, credentials) => {
  const signer = new SignatureV4({
    credentials: credentials,
    region: REGION,
    service: "execute-api",
    sha256: Sha256,
  });
  const signedRequest = await signer.sign(request);
  return signedRequest;
};

const updateRequest = (httpRequest) => {
  httpRequest.hostname = PROXY_DOMAIN;
  httpRequest.path = PROXY_PATH;
  httpRequest.headers.host = PROXY_DOMAIN;
  return httpRequest;
};

const makeSignedRequest = async (httpRequest) => {
  const client = new NodeHttpHandler();
  const { response } = await client.handle(httpRequest);
  return response;
};

updatedSignedRequestExample();
Hope that helps.

How do you use a nodejs Lambda in AWS as a producer to send messages to MSK topic without creating EC2 client server?

I am trying to create a Lambda in AWS that serves as a producer to an MSK topic. All the AWS docs say to create a new EC2 instance, but as my Lambda is in the same VPC I feel like this should work. I am very new to this and I notice my log statement never hits in my producer.on function.
I am using nodejs and the kafka-node module. The code can be found below.
Essentially, I am just wondering if anyone knows how to do this, and why the producer.on function is never hit when I run a test through the Lambda? This is just some test code to see if I can get it to send, but if any more data is needed to help please let me know, and thanks in advance.
exports.handler = async (event, context, callback) => {
  const kafka = require('kafka-node');
  const bp = require('body-parser');
  const kafka_topic = 'MyTopic';
  const Producer = kafka.Producer;
  var KeyedMessage = kafka.KeyedMessage;
  const Client = kafka.Client;

  const client = new kafka.KafkaClient({ kafkaHost: 'myhost:9094' });
  console.log('client :: ' + JSON.stringify(client));

  const producer = new Producer(client);
  console.log('about to hit producer code');

  producer.on('ready', function () {
    console.log('Hello there!');
    let message = 'my message';
    let keyedMessage = new KeyedMessage('keyed', 'me keyed message');

    producer.send(
      [{ topic: kafka_topic, partition: 0, messages: [message, keyedMessage], attributes: 0 }],
      function (err, result) {
        console.log(err || result);
        process.exit();
      }
    );
  });

  producer.on('error', function (err) {
    console.log('error', err);
  });

  return "success";
};
What you need is to be able to produce messages to your MSK cluster using a REST API. Why not set up a REST proxy for MSK as detailed here, and then call that API to produce your messages to MSK?
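For illustration, a minimal sketch (my own, not from the answer) of producing a message through a Confluent-style Kafka REST Proxy; the proxy URL is a placeholder:

const REST_PROXY_URL = 'https://my-rest-proxy.internal:8082'; // placeholder

export async function produce(topic: string, value: Record<string, unknown>) {
  // POST /topics/<topic> with the REST Proxy v2 JSON content type.
  const response = await fetch(`${REST_PROXY_URL}/topics/${topic}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/vnd.kafka.json.v2+json' },
    body: JSON.stringify({ records: [{ value }] }),
  });

  if (!response.ok) {
    throw new Error(`REST proxy returned ${response.status}`);
  }
  return response.json(); // contains the partition/offset for each produced record
}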

AWS CDK passing API Gateway URL to static site in same Stack

I'm trying to deploy an S3 static website and API gateway/lambda in a single stack.
The javascript in the S3 static site calls the lambda to populate an HTML list but it needs to know the API Gateway URL for the lambda integration.
Currently, I generate a RestApi like so...
const handler = new lambda.Function(this, "TestHandler", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.asset("build/test-service"),
  handler: "index.handler",
  environment: {
  }
});
this.api = new apigateway.RestApi(this, "test-api", {
  restApiName: "Test Service"
});
const getIntegration = new apigateway.LambdaIntegration(handler, {
  requestTemplates: { "application/json": '{ "statusCode": "200" }' }
});
const apiUrl = this.api.url;
But on cdk deploy, apiUrl =
"https://${Token[TOKEN.39]}.execute-api.${Token[AWS::Region.4]}.${Token[AWS::URLSuffix.1]}/${Token[TOKEN.45]}/"
So the url is not parsed/generated until after the static site requires the value.
How can I calculate/find/fetch the API Gateway URL and update the javascript on cdk deploy?
Or is there a better way to do this? i.e. is there a graceful way for the static javascript to retrieve a lambda api gateway url?
Thanks.
You are creating a LambdaIntegration but it isn't connected to your API.
To add it to the root of the API do: this.api.root.addMethod(...) and use this to connect your LambdaIntegration and API.
This should give you an endpoint with a URL
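For example, using the getIntegration from the question's code, a minimal sketch might be:

// Attach the integration to the API so a concrete endpoint (and URL) is created.
this.api.root.addMethod("GET", getIntegration);

// Or on a sub-resource:
const items = this.api.root.addResource("items");
items.addMethod("GET", getIntegration);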
If you are using the s3-deployment module to deploy your website as well, I was able to hack together a solution using what is available currently (pending a better solution at https://github.com/aws/aws-cdk/issues/12903). The pieces below let you deploy a config.js to your bucket (containing attributes from your stack that will only be populated at deploy time) that you can then depend on elsewhere in your code at runtime.
In inline-source.ts:
// imports removed for brevity
export function inlineSource(path: string, content: string, options?: AssetOptions): ISource {
  return {
    bind: (scope: Construct, context?: DeploymentSourceContext): SourceConfig => {
      if (!context) {
        throw new Error('To use a inlineSource, context must be provided');
      }
      // Find available ID
      let id = 1;
      while (scope.node.tryFindChild(`InlineSource${id}`)) {
        id++;
      }
      const bucket = new Bucket(scope, `InlineSource${id}StagingBucket`, {
        removalPolicy: RemovalPolicy.DESTROY
      });
      const fn = new Function(scope, `InlineSource${id}Lambda`, {
        runtime: Runtime.NODEJS_12_X,
        handler: 'index.handler',
        code: Code.fromAsset('./inline-lambda')
      });
      bucket.grantReadWrite(fn);
      const myProvider = new Provider(scope, `InlineSource${id}Provider`, {
        onEventHandler: fn,
        logRetention: RetentionDays.ONE_DAY // default is INFINITE
      });
      const resource = new CustomResource(scope, `InlineSource${id}CustomResource`, {
        serviceToken: myProvider.serviceToken,
        properties: { bucket: bucket.bucketName, path, content }
      });
      context.handlerRole.node.addDependency(resource); // Sets the s3 deployment to depend on the deployed file
      bucket.grantRead(context.handlerRole);
      return {
        bucket: bucket,
        zipObjectKey: 'index.zip'
      };
    },
  };
}
In inline-lambda/index.js (also requires archiver installed into inline-lambda/node_modules):
const aws = require('aws-sdk');
const s3 = new aws.S3({ apiVersion: '2006-03-01' });
const fs = require('fs');
var archive = require('archiver')('zip');

exports.handler = async function(event, ctx) {
  await new Promise(resolve => fs.unlink('/tmp/index.zip', resolve));
  const output = fs.createWriteStream('/tmp/index.zip');
  const closed = new Promise((resolve, reject) => {
    output.on('close', resolve);
    output.on('error', reject);
  });
  archive.pipe(output);
  archive.append(event.ResourceProperties.content, { name: event.ResourceProperties.path });
  archive.finalize();
  await closed;
  await s3.upload({
    Bucket: event.ResourceProperties.bucket,
    Key: 'index.zip',
    Body: fs.createReadStream('/tmp/index.zip')
  }).promise();
  return;
}
In your construct, use inlineSource:
export class TestConstruct extends Construct {
  constructor(scope: Construct, id: string, props: any) {
    // set up other resources
    const source = inlineSource('config.js', `exports.config = { apiEndpoint: '${ api.attrApiEndpoint }' }`);
    // use in BucketDeployment
  }
}
You can move inline-lambda elsewhere but it needs to be able to be bundled as an asset for the lambda.
This works by creating a custom resource that depends on your other resources in the stack (thereby allowing for the attributes to be resolved) that writes your file into a zip that is then stored into a bucket, which is then picked up and unzipped into your deployment/destination bucket. Pretty complicated but gets the job done with what is currently available.
The pattern I've used successfully is to put a CloudFront distribution or an API Gateway in front of the S3 bucket.
So requests to https://[api-gw]/**/* are proxied to https://[s3-bucket]/**/*.
Then I will create a new proxy path in the same API Gateway for a route called /config, which is a standard Lambda-backed API endpoint where I can return all sorts of things like branding information or API keys to the frontend whenever it calls GET /config.
Also, this avoids issues like CORS, because both origins are the same (the API Gateway domain).
With CloudFront distribution instead of an API Gateway, it's pretty much the same, except you use the CloudFront distribution's "origin" configuration instead of paths and methods.
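For completeness, a minimal sketch (my own, not from the answer) of the Lambda behind GET /config; the environment variable names are placeholders populated by the stack at deploy time:

export async function handler() {
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      apiEndpoint: process.env.API_ENDPOINT, // assumed env var set by the stack
      brandName: process.env.BRAND_NAME,     // assumed env var set by the stack
    }),
  };
}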
