I have a Lambda function that is trying to read data from Elasticsearch running on an EC2 instance. Both are in the same VPC and subnet and have the same security group assigned to them. The Lambda can't seem to access the Elasticsearch instance.
const elasticsearch = require('elasticsearch');

exports.handler = async function (event) {
  const client = new elasticsearch.Client({
    host: 'public_dns:9200',
    httpAuth: 'user:password'
  });

  try {
    // Await the search so the handler doesn't return before it completes.
    const response = await client.search({
      index: 'index_name',
      scroll: '30s',
      size: 10000,
      body: {
        query: {
          match_all: {}
        }
      }
    });
    console.log(response.hits.hits);
    return {
      statusCode: 200,
      body: JSON.stringify({
        message: response.hits.hits
      })
    };
  } catch (error) {
    console.error(error);
    return {
      statusCode: 500,
      body: JSON.stringify({ message: error.message })
    };
  }
};
The error I get from Lambda is:
2023-02-06T23:19:54.890Z fcd62836-4fe3-4c6a-9871-ee70668ba07c ERROR StatusCodeError: Request Timeout after 30000ms
at /var/task/node_modules/elasticsearch/src/lib/transport.js:397:9
at Timeout.<anonymous> (/var/task/node_modules/elasticsearch/src/lib/transport.js:429:7)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7) {
status: undefined,
displayName: 'RequestTimeout',
body: undefined
}
Any tips on how to debug this would be highly appreciated.
Note: I do not wish to make the Elasticsearch endpoint publicly accessible.
Merely putting two resources "in the same Security Group" does not mean that they are able to communicate with each other. In fact, resources are not 'inside' a Security Group -- rather they are associated with a Security Group.
Security Group rules are applied to each resource individually. This means that if both resources are associated with the same security group, there needs to be a specific rule that allows incoming access from the security group to itself.
Instead of using one Security Group, a preferred configuration would be:
- A security group on the AWS Lambda function (lambda-SG) with the default "Allow All" outbound rules, and
- A security group on the Amazon EC2 instance running Elasticsearch (elastic-SG) that permits inbound access on port 9200 from lambda-SG
That is, elastic-SG specifically references lambda-SG when permitting the inbound access. This means that traffic from the Lambda function will be allowed to communicate with the EC2 instance on that port.
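To make this concrete, here is a minimal sketch of that configuration in CDK (assuming aws-cdk-lib v2; vpc, the construct IDs, and port 9200 are illustrative, and the same two rules can just as well be created in the console or CLI):
const ec2 = require('aws-cdk-lib/aws-ec2');

// Inside your Stack: a security group for the Lambda function,
// with the default "allow all" outbound behaviour.
const lambdaSg = new ec2.SecurityGroup(this, 'lambda-SG', {
  vpc,
  allowAllOutbound: true,
});

// A security group for the EC2 instance running Elasticsearch.
const elasticSg = new ec2.SecurityGroup(this, 'elastic-SG', { vpc });

// Allow inbound 9200 only from the Lambda's security group, so the
// Elasticsearch endpoint is never exposed publicly.
elasticSg.addIngressRule(lambdaSg, ec2.Port.tcp(9200), 'Elasticsearch from Lambda');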
Related
Trying to publish from a Lambda function to a Redis ElastiCache cluster, but I just continue to get 502 Bad Gateway responses, with the Lambda function timing out.
I have successfully connected to the ElastiCache instance from an ECS task in the same VPC, which leads me to think that the VPC settings for my Lambda are not correct. I tried following this tutorial (https://docs.aws.amazon.com/lambda/latest/dg/services-elasticache-tutorial.html) and have looked at several StackOverflow threads, to no avail.
The Lambda Function:
import * as redis from 'redis';
import { promisify } from 'util';
// ApiResponse, Logger and InvalidRequestError are project-local helpers.

export const publisher = redis.createClient({
  url: XXXXXX, // env var containing the URL which is also used in the ECS server to successfully connect
});

export const handler = async (
  event: AWSLambda.APIGatewayProxyWithCognitoAuthorizerEvent
): Promise<AWSLambda.APIGatewayProxyResult> => {
  try {
    if (!event.body || !event.pathParameters || !event.pathParameters.channelId)
      return ApiResponse.error(400, {}, new InvalidRequestError());

    const { action, payload } = JSON.parse(event.body) as {
      action: string;
      payload?: { [key: string]: string };
    };
    const { channelId } = event.pathParameters;

    const publishAsync = promisify(publisher.publish).bind(publisher);
    await publishAsync(
      channelId,
      JSON.stringify({
        action,
        payload: payload || {},
      })
    );

    return ApiResponse.success(204);
  } catch (e) {
    Logger.error(e);
    return ApiResponse.error();
  }
};
In my troubleshooting, I have verified the following in the Lambda functions console:
The correct role is showing in Configuration > Permissions
The Lambda function has access to the VPC (Configuration > VPC), subnets, and the same SG as the ElastiCache instance.
The SG is allowing all traffic from anywhere.
It is indeed the Redis connection: using console.log, I can see the code stops at this line: await publishAsync()
I am sure it is something small, but it is racking my brain!
Update 1:
Tried adding an error handler to log any issues with the publish in addition to the main try/catch block, but it's not logging a thing.
publisher.on('error', (e) => {
  Logger.error(e, 'evses-service', 'message-publisher');
});
Also, I have attached screenshots of my ElastiCache setup, my ElastiCache subnet group, my Lambda VPC settings, and confirmation that my Lambda has the right access.
Update 2:
Tried to follow the tutorial here (https://docs.aws.amazon.com/lambda/latest/dg/services-elasticache-tutorial.html) word for word, but I am getting the same issue: no logs, just a timeout after 30 seconds. Here is the test code:
const crypto = require('crypto');
const redis = require('redis');
const util = require('util');

const client = redis.createClient({
  url: 'rediss://clusterforlambdatest.9nxhfd.0001.use1.cache.amazonaws.com',
});

client.on('error', (e) => {
  console.log(e);
});

exports.handler = async (event) => {
  try {
    const len = 24;
    const randomString = crypto
      .randomBytes(Math.ceil(len / 2))
      .toString('hex') // convert to hexadecimal format
      .slice(0, len)
      .toUpperCase();

    const setAsync = util.promisify(client.set).bind(client);
    const getAsync = util.promisify(client.get).bind(client);

    await setAsync(randomString, 'We set this string bruh!');
    const doc = await getAsync(randomString);

    console.log(`Successfully received document ${randomString} with contents: ${doc}`);
    return;
  } catch (e) {
    console.log(e);
    return {
      statusCode: 500,
    };
  }
};
If you get a timeout, assuming the Lambda networking is well configured, you should check the following:
- Redis SSL configuration: check for mismatches between the rediss:// connection URL and the cluster configuration (in-transit encryption requires configuring the client with tls: {})
- configure the client with a specific retry strategy, to avoid the Lambda timeout and to surface connection issues
- check the VPC ACLs and security groups
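For instance, a minimal sketch of the first two points applied to the test client above (node_redis v3, to match the promisify-style calls; the endpoint is the one from the question, and the 5-second retry budget is an arbitrary choice):
const redis = require('redis');

const client = redis.createClient({
  url: 'rediss://clusterforlambdatest.9nxhfd.0001.use1.cache.amazonaws.com',
  tls: {}, // required when the cluster has in-transit encryption enabled
  retry_strategy: (options) => {
    // Fail fast so the Lambda logs a connection error instead of
    // silently timing out after 30 seconds.
    if (options.total_retry_time > 5000) {
      return new Error('Could not connect to Redis within 5s');
    }
    return Math.min(options.attempt * 100, 1000); // back off, up to 1s
  },
});

client.on('error', (e) => console.error('redis error', e));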
I had the same issue with my ElastiCache cluster; here are a few findings:
- Check the number of client connections to ElastiCache and the resources being used
- Check the VPC subnet and CIDR rules for the nodes' security group
- Try increasing the Lambda timeout and see which service is taking more time to respond, Lambda or ElastiCache
I created an AWS API Gateway route for WebSocket connections. I started with the AWS-provided Simple Web Chat template but have modified it to fit my needs. The API Gateway calls a Lambda function that writes to a DynamoDB table.
I am able to make a WebSocket connection, but when I make my next request to insert some data, the data appears successfully in my DynamoDB table while the response I get back is Internal Server Error.
I don't understand what is causing the Internal Server Error. When I look in the CloudWatch logs, I just see normal traffic with no errors.
I could use some help understanding what is going wrong or how I can troubleshoot this better.
Here is the Lambda function that is being called:
const AWS = require("aws-sdk");
const customId = require("custom-id");
const ddb = new AWS.DynamoDB.DocumentClient({
apiVersion: "2012-08-10",
region: process.env.AWS_REGION,
});
exports.handler = async (event) => {
const uniqueId = customId({
randomLength: 1,
});
const data = {
uniqueId: uniqueId,
members: [
{
connectionId: event.requestContext.connectionId,
},
],
events: [],
parameters: [],
};
const putParams = {
TableName: process.env.EVENT_TABLE_NAME,
Item: data,
};
try {
await ddb.put(putParams).promise();
} catch (err) {
return {
statusCode: 400,
body: "Failed to create: " + JSON.stringify(err),
};
}
return { statusCode: 200, body: putParams };
};
(Image of the AWS CloudWatch logs attached.)
The error returned by wscat looks like this:
{"message": "Internal server error", "connectionId":"NZxV_ddNIAMCJrw=", "requestId":"NZxafGiyoAMFoAA="}
I just had the same problem. The issue in my case was because API Gateway did not have permission to call the Lambda function in order to process a message arriving from the websocket. The 'internal server error' in this case is API Gateway saying it had some problem when it tried to invoke the Lambda function to handle the websocket message.
I was using CDK to deploy the infrastructure, and I created one WebSocketLambdaIntegration for the connect, disconnect, and default websocket handlers, but this doesn't work. You have to create separate WebSocketLambdaIntegration instances, even if you are calling the same Lambda function for all websocket events; otherwise CDK does not set the correct permissions.
I could see this was the problem because 1) I was using the same Lambda function for the connect, disconnect and default routes, and 2) in CloudWatch Logs I was only seeing log messages for one of these routes, in this case the 'connect' one. When I sent a message over the websocket, I was not seeing the expected log messages from the Lambda that was supposed to be handling incoming websocket messages. When I disconnected from the websocket, I did not see the expected log messages from the 'disconnect' handler.
This was because CDK had only given Lambda invoke permission to specific routes on the API Gateway websocket stage, and it had only authorised the 'connect' route, not the others.
Fixing the CDK stack so that it correctly assigned permissions, allowing API Gateway to invoke my Lambda for all websocket routes, fixed the problem.
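For illustration, a sketch of the corrected shape (assuming aws-cdk-lib v2 with the graduated apigatewayv2 modules; handlerFn and the construct IDs are illustrative):
const { WebSocketApi, WebSocketStage } = require('aws-cdk-lib/aws-apigatewayv2');
const { WebSocketLambdaIntegration } = require('aws-cdk-lib/aws-apigatewayv2-integrations');

// One integration instance per route, even though they share the same
// Lambda function, so CDK grants invoke permission for every route.
const api = new WebSocketApi(this, 'ChatApi', {
  connectRouteOptions: {
    integration: new WebSocketLambdaIntegration('ConnectIntegration', handlerFn),
  },
  disconnectRouteOptions: {
    integration: new WebSocketLambdaIntegration('DisconnectIntegration', handlerFn),
  },
  defaultRouteOptions: {
    integration: new WebSocketLambdaIntegration('DefaultIntegration', handlerFn),
  },
});

new WebSocketStage(this, 'DevStage', {
  webSocketApi: api,
  stageName: 'dev',
  autoDeploy: true,
});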
I see it now. It was the last line: with Lambda proxy integration, the response body must be a string, not an object. I changed it and now it works fine.
return { statusCode: 200, body: JSON.stringify(putParams) };
I'm using Swagger Petstore via the swagger-to-graphql npm module and am able to run the GraphQL Playground for it.
// Imports assumed from the usage below (export styles of the era).
const express = require('express');
const graphqlHTTP = require('express-graphql');
const graphQLSchema = require('swagger-to-graphql');

graphQLSchema('./swagger.json', 'http://petstore.swagger.io/v2', {
  Authorization: 'Basic YWRkOmJhc2ljQXV0aA=='
}).then(schema => {
  const app = express();

  app.use('/', graphqlHTTP(() => {
    return {
      schema,
      graphiql: true
    };
  }));

  app.listen(4001, 'localhost', () => {
    console.info('http://localhost:4001/');
  });
}).catch(e => {
  console.log(e);
});
However, when I tried to feed the service to Apollo Gateway, it throws Error: Apollo Server requires either an existing schema or typeDefs
// Imports assumed from the usage below.
const { ApolloServer } = require('apollo-server');
const { ApolloGateway } = require('@apollo/gateway');

const gateway = new ApolloGateway({
  serviceList: [
    { name: 'pet', url: 'http://localhost:4001' }
  ],
});

const server = new ApolloServer({
  gateway,
  // Currently, subscriptions are enabled by default with Apollo Server, however,
  // subscriptions are not compatible with the gateway. We hope to resolve this
  // limitation in future versions of Apollo Server. Please reach out to us on
  // https://spectrum.chat/apollo/apollo-server if this is critical to your adoption!
  subscriptions: false,
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
What am I missing?
From the docs:
Converting an existing schema into a federated service is the first step in building a federated graph. To do this, we'll use the buildFederatedSchema() function from the @apollo/federation package.
You cannot provide just any existing service to the gateway -- the service must meet the federation spec. The only way to do that currently is to use buildFederatedSchema to create the service's schema. At this time, buildFederatedSchema does not accept existing schemas, so federation is not compatible with tools like swagger-to-graphql that generate a schema for you. Hopefully that feature will be added in the near future.
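For reference, a minimal sketch of what a federation-compatible service looks like (an illustrative Pet type written by hand, not generated from the Swagger schema):
const { ApolloServer, gql } = require('apollo-server');
const { buildFederatedSchema } = require('@apollo/federation');

const typeDefs = gql`
  type Pet @key(fields: "id") {
    id: ID!
    name: String
  }

  type Query {
    pet(id: ID!): Pet
  }
`;

const resolvers = {
  Query: {
    pet: (_, { id }) => ({ id, name: 'Rex' }), // stub resolver
  },
};

const server = new ApolloServer({
  schema: buildFederatedSchema([{ typeDefs, resolvers }]),
});

// Listening on 4001 so the gateway's serviceList above can reach it.
server.listen(4001).then(({ url }) => console.log(`Service ready at ${url}`));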
I am trying to deploy a simple Slack Lambda API which uses the @slack/client library to remove members and pinned messages from a specific channel. The issue I am running into is that the function executes without a problem, and it is removing the channel members without a problem, but my Lambda function keeps returning:
HTTP/1.1 502 Bad Gateway
...
X-Cache: Error from cloudfront
...
{
"message": "Internal server error"
}
as the response body. When I check the logs using sls logs -f api, I don't see any errors there either; I see the console.log output of my function successfully executing.
My serverless.yml is as follows:
provider:
  name: aws
  runtime: nodejs10.x
  profile: serverless

functions:
  api:
    handler: handler.api
    timeout: 30
    events:
      - http:
          method: POST
          path: clean
And my API code (I have removed the unrelated function code, as it is doing its work) is:
module.exports.api = async (event, context, callback) => {
  let channel = JSON.parse(event.body).ctf;
  let id = await findChannelId(channel);

  removeMembersFromChannel(id[0]).then(() => {
    removePinsFromChannel(id[0]).then(() => {
      callback(null, {
        statusCode: 200,
        body: JSON.stringify({
          message: `Cleaned ${channel} ${id}`,
        }, null, 2),
      });
    });
  });
};
Things I have tried:
- returning the response instead of using the callback
- using promises and async/await
- testing the function locally using sls invoke local
Most of my searching suggests this could be a permission issue, but all the references are for S3 usage, which is something I am not using.
Questions
Why am I getting this error, and how can I resolve this?
After referencing this, in the handler function I am using JSON.stringify. Using the Serverless Framework, how can I avoid using Lambda proxy integration?
Please add console.log statements for detailed logging via CloudWatch, and use X-Ray. Some typical problems with CloudFront:
- it can take a long time to propagate to edge locations (you may need to recreate your CDN)
- logs from Lambda@Edge are located in the invoked region
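As a starting point for the X-Ray suggestion, tracing can be enabled from serverless.yml (a sketch; this assumes a Serverless Framework version that supports the provider.tracing keys):
provider:
  name: aws
  runtime: nodejs10.x
  tracing:
    lambda: true      # trace the Lambda function itself
    apiGateway: true  # trace the API Gateway stage in front of it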
I have been trying, without success, to use the http module in my Node.js endpoint to do a simple HTTP GET.
I have followed the various tutorials to execute the GET within my intent, but it keeps failing with getaddrinfo ENOTFOUND in the CloudWatch log.
It seems like I am preparing the URL correctly; if I just cut and paste the URL output into the browser, I get the expected response, and it's just a plain HTTP GET over port 80.
I suspect that maybe the Alexa-hosted Lambda doesn't have the permission necessary to make remote calls to non-Amazon web services, but I don't know this for sure.
Can anybody shed any light? FYI, this is the code in my Lambda:
var http = require('http');

function httpGet(address, zip, zillowid) {
  const pathval = 'www.zillow.com/webservice/GetSearchResults.htm' +
    `?zws-id=${zillowid}` +
    `&address=${encodeURIComponent(address)}&citystatezip=${zip}`;
  console.log("pathval = " + pathval);

  return new Promise(((resolve, reject) => {
    var options = {
      host: pathval,
      port: 80,
      method: 'GET',
    };

    const request = http.request(options, (response) => {
      response.setEncoding('utf8');
      console.log("options are " + options);
      let returnData = '';

      response.on('data', (chunk) => {
        returnData += chunk;
      });

      response.on('end', () => {
        resolve(JSON.parse(returnData));
      });

      response.on('error', (error) => {
        console.log("I see there was an error, which is " + error);
        reject(error);
      });
    });
    request.end();
  }));
}
host: pathval is incorrect usage of the Node.js http module. You need to provide the hostname and the path + query string as two different options.
An example of correct usage:
host: 'example.com',
path: '/webservice/GetSearchResults.htm?zws-id=...',
(Of course, these can be variables, they don't need to be literals as shown here for clarity.)
The error occurs because you're treating the whole URL as a hostname, and as such it doesn't exist.
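Put together, a corrected sketch of the request from the question (same endpoint and parameters; response handling kept as in the original):
var http = require('http');

function httpGet(address, zip, zillowid) {
  // Hostname and path + query string are separate options.
  const options = {
    host: 'www.zillow.com',
    path: '/webservice/GetSearchResults.htm' +
      `?zws-id=${zillowid}` +
      `&address=${encodeURIComponent(address)}&citystatezip=${zip}`,
    port: 80,
    method: 'GET',
  };

  return new Promise((resolve, reject) => {
    const request = http.request(options, (response) => {
      response.setEncoding('utf8');
      let returnData = '';
      response.on('data', (chunk) => { returnData += chunk; });
      response.on('end', () => resolve(JSON.parse(returnData))); // parsed as in the original
    });
    // Connection-level errors (DNS failures, connection refused, etc.)
    // are emitted on the request object, not the response.
    request.on('error', reject);
    request.end();
  });
}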
I suspect that maybe the Alexa hosted lambda doesn't have permission necessary to make remote calls to non-amazon web services
There is no restriction on what services you can contact from within a Lambda function (other than filters that protect against sending spam email directly to random mail servers).