AWS CDK passing API Gateway URL to static site in same Stack - aws-lambda

I'm trying to deploy an S3 static website and API gateway/lambda in a single stack.
The JavaScript in the S3 static site calls the Lambda to populate an HTML list, but it needs to know the API Gateway URL for the Lambda integration.
Currently, I generate a RestApi like so...
const handler = new lambda.Function(this, "TestHandler", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.asset("build/test-service"),
  handler: "index.handler",
  environment: {}
});
this.api = new apigateway.RestApi(this, "test-api", {
  restApiName: "Test Service"
});
const getIntegration = new apigateway.LambdaIntegration(handler, {
  requestTemplates: { "application/json": '{ "statusCode": "200" }' }
});
const apiUrl = this.api.url;
But on cdk deploy, apiUrl =
"https://${Token[TOKEN.39]}.execute-api.${Token[AWS::Region.4]}.${Token[AWS::URLSuffix.1]}/${Token[TOKEN.45]}/"
So the URL is a CloudFormation token that is only resolved at deployment, after the static site assets have already been built needing the value.
How can I calculate/find/fetch the API Gateway URL and update the javascript on cdk deploy?
Or is there a better way to do this? i.e. is there a graceful way for the static javascript to retrieve a lambda api gateway url?
Thanks.

You are creating a LambdaIntegration, but it isn't connected to your API.
To add it to the root of the API, call this.api.root.addMethod(...) with your LambdaIntegration; this connects the integration to the API.
This should give you an endpoint with a URL.
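For example, a minimal sketch wiring the integration from the question to a GET method on the API root:

this.api.root.addMethod("GET", getIntegration);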

If you are using the s3-deployment module to deploy your website as well, I was able to hack together a solution using what is currently available (pending a better solution at https://github.com/aws/aws-cdk/issues/12903). Together, the following pieces let you deploy a config.js to your bucket (containing attributes from your stack that are only populated at deploy time) that you can then depend on elsewhere in your code at runtime.
In inline-source.ts:
// imports removed for brevity
export function inlineSource(path: string, content: string, options?: AssetOptions): ISource {
  return {
    bind: (scope: Construct, context?: DeploymentSourceContext): SourceConfig => {
      if (!context) {
        throw new Error('To use an inlineSource, context must be provided');
      }
      // Find an available ID
      let id = 1;
      while (scope.node.tryFindChild(`InlineSource${id}`)) {
        id++;
      }
      const bucket = new Bucket(scope, `InlineSource${id}StagingBucket`, {
        removalPolicy: RemovalPolicy.DESTROY
      });
      const fn = new Function(scope, `InlineSource${id}Lambda`, {
        runtime: Runtime.NODEJS_12_X,
        handler: 'index.handler',
        code: Code.fromAsset('./inline-lambda')
      });
      bucket.grantReadWrite(fn);
      const myProvider = new Provider(scope, `InlineSource${id}Provider`, {
        onEventHandler: fn,
        logRetention: RetentionDays.ONE_DAY // default is INFINITE
      });
      const resource = new CustomResource(scope, `InlineSource${id}CustomResource`, {
        serviceToken: myProvider.serviceToken,
        properties: { bucket: bucket.bucketName, path, content }
      });
      context.handlerRole.node.addDependency(resource); // Make the s3 deployment depend on the deployed file
      bucket.grantRead(context.handlerRole);
      return {
        bucket: bucket,
        zipObjectKey: 'index.zip'
      };
    },
  };
}
In inline-lambda/index.js (also requires archiver installed into inline-lambda/node_modules):
const aws = require('aws-sdk');
const s3 = new aws.S3({ apiVersion: '2006-03-01' });
const fs = require('fs');

exports.handler = async function(event, ctx) {
  // Create the archive inside the handler so warm invocations get a fresh instance
  const archive = require('archiver')('zip');
  // Remove any zip left over from a previous (warm) invocation
  await new Promise(resolve => fs.unlink('/tmp/index.zip', resolve));
  const output = fs.createWriteStream('/tmp/index.zip');
  const closed = new Promise((resolve, reject) => {
    output.on('close', resolve);
    output.on('error', reject);
  });
  archive.pipe(output);
  archive.append(event.ResourceProperties.content, { name: event.ResourceProperties.path });
  archive.finalize();
  await closed;
  await s3.upload({
    Bucket: event.ResourceProperties.bucket,
    Key: 'index.zip',
    Body: fs.createReadStream('/tmp/index.zip')
  }).promise();
  return;
};
In your construct, use inlineSource:
export class TestConstruct extends Construct {
  constructor(scope: Construct, id: string, props: any) {
    super(scope, id);
    // set up other resources
    const source = inlineSource('config.js', `exports.config = { apiEndpoint: '${api.attrApiEndpoint}' }`);
    // use in a BucketDeployment, as sketched below
  }
}
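For completeness, a hypothetical wiring into the BucketDeployment (websiteBucket and the ./website-dist asset are assumed names):

new BucketDeployment(this, 'DeployWebsite', {
  sources: [Source.asset('./website-dist'), source],
  destinationBucket: websiteBucket,
});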
You can move inline-lambda elsewhere but it needs to be able to be bundled as an asset for the lambda.
This works by creating a custom resource that depends on your other resources in the stack (thereby allowing for the attributes to be resolved) that writes your file into a zip that is then stored into a bucket, which is then picked up and unzipped into your deployment/destination bucket. Pretty complicated but gets the job done with what is currently available.

The pattern I've used successfully is to put a CloudFront distribution or an API Gateway in front of the S3 bucket.
So requests to https://[api-gw]/**/* are proxied to https://[s3-bucket]/**/*.
Then I create a new proxy path in the same API Gateway for a route called /config, backed by a standard Lambda endpoint, where I can return all sorts of things like branding information or API keys to the frontend whenever it calls GET /config.
Also, this avoids issues like CORS, because both origins are the same (the API Gateway domain).
With a CloudFront distribution instead of an API Gateway, it's pretty much the same, except you use the CloudFront distribution's "origin" configuration instead of paths and methods.
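In CDK terms, a minimal sketch of that /config route might look like this (configHandler is an assumed Lambda function that returns the frontend configuration):

const config = this.api.root.addResource('config');
config.addMethod('GET', new apigateway.LambdaIntegration(configHandler));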

Related

Write an AWS Lambda to query a Neptune DB with openCypher using aws-sdk v3

I have a Lambda with Node.js 18 runtime, in which I would like to send an openCypher query to an AWS Neptune DB.
My Lambda is using an IAM role with these policies:
NeptuneFullAccess
AWSLambdaBasicExecutionRole
AmazonSSMReadOnlyAccess
(The last policy will be used later to fetch the Neptune endpoint from the SSM Parameter Store.)
I'm trying to find the @aws-sdk/client-neptune method to submit the query, but I couldn't find any in the library on GitHub.
This is frustrating, as I've been struggling for days to find a simple way to use aws-sdk v3 from a Node.js 18 Lambda for the simple task of querying the Neptune DB.
Here is my current skeleton of the Lambda:
import { NeptuneClient } from "@aws-sdk/client-neptune";

export async function handler() {
  const neptuneEndpoint = "https://<my-db-instance>.us-east-1.neptune.amazonaws.com";
  const neptune = new NeptuneClient({
    endpoint: neptuneEndpoint,
    region: "us-east-1",
  });
  const cypher = `MATCH (n) RETURN n`;
  const query = {
    Gremlin: cypher
  };
  const command = {
    GremlinCommand: query,
  };
  const result = await neptune.send(command).promise();
  console.log(result);
  return result;
}
Can anyone please help me turn this into a working Lambda?
The client you are using only exposes control-plane actions, such as creating/modifying cluster instances, and is not meant to be used to query Neptune. For openCypher, the recommendation is to query Neptune over its HTTPS endpoint, as described in the Neptune documentation.
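For illustration, a minimal sketch of that approach on Node.js 18 (the endpoint is a placeholder; this assumes IAM database authentication is disabled on the cluster, since with IAM auth enabled the request must also be SigV4-signed):

const NEPTUNE_ENDPOINT = "https://<my-db-instance>.us-east-1.neptune.amazonaws.com:8182";

export async function handler() {
  // POST the openCypher query to Neptune's HTTPS query endpoint
  const response = await fetch(`${NEPTUNE_ENDPOINT}/openCypher`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ query: "MATCH (n) RETURN n LIMIT 10" }),
  });
  const result = await response.json();
  console.log(result);
  return result;
}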
I have created a sample Neptune DB and a sample Lambda, and used Lambda layers to install the necessary gremlin dependencies. Please find the Lambda code and a response screenshot below.
Lambda code
const gremlin = require('gremlin');

const DriverRemoteConnection = gremlin.driver.DriverRemoteConnection;
const Graph = gremlin.structure.Graph;

const dc = new DriverRemoteConnection('wss://<db cluster name>.ap-south-1.neptune.amazonaws.com:8182/gremlin', {});
const graph = new Graph();
const g = graph.traversal().withRemote(dc);

const { t: { id } } = gremlin.process;
const { cardinality: { single } } = gremlin.process;

const createVertex = async (vertexId, vlabel) => {
  const vertex = await g.addV(vlabel)
    .property(id, vertexId)
    .property(single, 'name', 'lambda')
    .property('lastname', 'Testing') // default database cardinality
    .next();
  return vertex.value;
};

createVertex('sampledata1', 'testing');

exports.handler = async (event) => {
  try {
    // 'testing' is the label used by createVertex above; toList() materializes the results
    const results = await g.V().hasLabel('testing').properties('name').toList();
    console.log('--------', results);
    return results;
  } catch (error) {
    // error handling
    console.log('---error---', error);
    return error;
  }
};
Note: replace the db endpoint with your own db endpoint.
Sample response from the Lambda: [screenshot omitted]

How do I sign API requests (AWS SigV4) to Lambda behind Proxy & API Gateway?

I'm working on a project where we currently use Cognito User Pools for authentication, but after some research we found that if we want more fine-grained access control, we should use an Identity Pool instead.
The theory is simple: first we create an Identity Pool that uses the Cognito User Pool as an auth provider. Then in API Gateway we set up our Lambda to use Authorizer: AWS_IAM. To access it, the user now has to:
Sign in to User pool, which gives user a JWT Token.
Exchange that JWT Token with the Identity pool for temporary AWS Credentials.
Use those new credentials to sign API request to the protected Lambda.
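For reference, step 2 can look like this (a minimal sketch using the AWS SDK v2 CognitoIdentity client; the region, pool IDs, and idToken are placeholders):

const AWS = require('aws-sdk');

async function getTempCredentials(idToken) {
  const cognitoidentity = new AWS.CognitoIdentity({ region: 'eu-west-3' });
  // Logins key format: cognito-idp.<region>.amazonaws.com/<user-pool-id>
  const logins = { 'cognito-idp.eu-west-3.amazonaws.com/eu-west-3_EXAMPLE': idToken };
  const { IdentityId } = await cognitoidentity
    .getId({ IdentityPoolId: 'eu-west-3:00000000-0000-0000-0000-000000000000', Logins: logins })
    .promise();
  const { Credentials } = await cognitoidentity
    .getCredentialsForIdentity({ IdentityId, Logins: logins })
    .promise();
  return Credentials; // { AccessKeyId, SecretKey, SessionToken, Expiration }
}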
Steps 1 and 2 work fine, with a test user we manage to get the JWT Token and successfully exchange it for AWS credentials. They look like this (modified for security reasons):
awsAccessKey: ASIAZFDXSW29NWI3QZ01
awsSecretKey: B+DrYdPMFGbDd1VRLSPV387uHT715zs7IsvdNnDk
awsSessionToken: IQoJb3JpZ2luX2VjEA8aCWV1LXdlc3QtMyJHMEUCIQC4kHasZrfnaMezJkcPtDD8YizZlKESas/a5N9juG/wIQIgShWaOIgIc4X9Xrtlc+wiGuSC1AQNncwoac2vFkpJ3gkqxAQIWBAAGgw2NTI5NTE0MDE0MDIiDDuTZ1aGOpVffl3+XCqhBDmjCS3+1vSsMqV1GxZ96WMoIoEC1DMffPrBhc+NnBf94eMOI4g03M5gAm3uKAVCBkKO713TsQMaf4GOqqNemFC8LcJpKNrEQb+c+kJqqf7VWeWxveuGuPdHl1dmD2/lIc8giY0+q4Wgtbgs6i0/gR5HzdPfantrElu+cRNrn/wIq4Akf+aARUm14XsIgq7/1fT9aKSHpTgrnTLHeXLKOyf/lZ947XdH71IHDZXBUdwdPikJP/Rikwill6RRTVw7kGNOoacagCmmK7CD6uh9h0OnoW3Qw5df+zX5Z8U7U55AyQfEyzeB7bW3KH65yJn6sopegxIIFfcG2CLIvtb5cZYImAz/4BdnppYpsrEgLPUTvRAXn6KUa5sXgc5Vd7tJeRo5qpYckrR2qfbebsU+0361BCYK2HxGJqsUyt1GVsEoAosxofpn/61mYJXqfeR0ifCAgL7OMOquvlaUVXhHmnhWnUSIOUQ+XtRc+DxUDjwn5RPD7QTwLHIat7d4BI4gZJPAcMT9gZrBVO/iN88lk5R0M5LBzFwd5jiUW46H/G755I4e5ZHaT1I37TY3tbcObIFGVVNz5iHDpK/NePTJevKTshe8cYxXczOQgos4J/RsNpqouO9qRgT9JDyXjU3Etyxqm9RzbLYgV3fl5WwZl5ofVmrBsy3adq+088qEz5b9cogPgDggA/nQaPv7nAZHT8u0ct/hw230pmXUDGCutjOML2G6ZYGOoUCy+BitAN0SZOYWlbZlYomIGKMNQuXjV4z+S9CEW8VunqW4Rgl7rTba6xbI0DdX9upYEczeln6pTl+2UPEDYf6usayFfMsGDvJXesqC5EOtWco1Z8tem/wDQIH7ZbioQHZ7UJDd5ntUAruFveY7sXmKsQbtah/RB5W5HLYy19hCmyGpYMnVXxR0FcNGImsweNcprtw9MmQqy2SUK9V6Rwn1yIE6svfAT3NVyzp9ILbP/qSQLGHNhm4CNd8+EJZZa9rcmCbQiQ+iBJ8FW+AmRSCC4LiB1dhuH1KsFo88DyNhYdVf3py8XV4CDR7l+UyuZMrIQsERwx9JzwVBjfv9COT948mvyGTY
The issue is the signing. Our Lambda is behind a CloudFront proxy + API Gateway: requests to e.g. john.dev.project.io are forwarded to the 'real' API origin at api.dev.project.io.
Using Postman with AWS Signature auth set, the request doesn't work and gives the following error:
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been\n'................................................................................................................................................................................................................................................................'\n\nThe String-to-Sign should have been\n'............................................................................'\n
We found, however, that by overriding the Host header to the real origin of the API, the request works fine (screenshot omitted).
So it seems that since the custom URL we use and the original API URL are different, signatures don't match. The problem is that by default browsers don't allow you to override Host header for security reasons, so our front-end signed requests always fail.
Maybe the proxy is also modifying other headers before forwarding to origin, which would also invalidate the signature from my understanding...
Any help appreciated in solving this issue!
I was facing a similar issue when trying to make a signed request to an API Gateway endpoint behind an Akamai proxy.
The trick to solve it was indeed to generate a request as if you were sending it directly to the API Gateway URL, sign that request using sigv4 and then send that signed request to the proxy endpoint instead.
I've put together some simple Node.js code to demonstrate how to do this:
const AWS = require("aws-sdk");
const { HttpRequest } = require("@aws-sdk/protocol-http");
const { SignatureV4 } = require("@aws-sdk/signature-v4");
const { NodeHttpHandler } = require("@aws-sdk/node-http-handler");
const { Sha256 } = require("@aws-crypto/sha256-browser");

const REGION = "ca-central-1";
const PROXY_DOMAIN = "proxy.domain.com";
const PROXY_PATH = "/proxypath";
const API_GATEWAY_DOMAIN = "API-ID.execute-api.ca-central-1.amazonaws.com";
const API_GATEWAY_PATH = "/apigateway/path";
const IDENTITY_ID = "{{identity-pool-region}}:{{identity-pool-id}}";
const POOL_REGION = "{{identity-pool-region}}";
const REQUEST_BODY = { test: "test" };
const METHOD = "POST";

const updatedSignedRequestExample = async () => {
  try {
    const BODY = JSON.stringify(REQUEST_BODY);
    // Build the request as if it were going directly to API Gateway
    const request = new HttpRequest({
      body: BODY,
      headers: {
        "Content-Type": "application/json",
        host: API_GATEWAY_DOMAIN,
      },
      hostname: API_GATEWAY_DOMAIN,
      port: 443,
      method: METHOD,
      path: API_GATEWAY_PATH,
    });
    console.log("request", request);
    const credentials = await getCredentials();
    console.log(credentials);
    const signedRequest = await signRequest(request, credentials);
    console.log("signedRequest", signedRequest);
    // Only after signing, point the request at the proxy
    const updatedSignedRequest = updateRequest(signedRequest);
    console.log("updatedSignedRequest", updatedSignedRequest);
    const response = await makeSignedRequest(updatedSignedRequest);
    console.log("status:", response.statusCode);
  } catch (error) {
    console.log(error);
  }
};

const getCredentials = async () => {
  const cognitoidentity = new AWS.CognitoIdentity({ region: POOL_REGION });
  const params = {
    IdentityId: IDENTITY_ID,
  };
  const response = await cognitoidentity
    .getCredentialsForIdentity(params)
    .promise();
  return {
    accessKeyId: response.Credentials.AccessKeyId,
    secretAccessKey: response.Credentials.SecretKey,
    sessionToken: response.Credentials.SessionToken,
    expiration: response.Credentials.Expiration,
  };
};

const signRequest = async (request, credentials) => {
  const signer = new SignatureV4({
    credentials: credentials,
    region: REGION,
    service: "execute-api",
    sha256: Sha256,
  });
  const signedRequest = await signer.sign(request);
  return signedRequest;
};

const updateRequest = (httpRequest) => {
  httpRequest.hostname = PROXY_DOMAIN;
  httpRequest.path = PROXY_PATH;
  httpRequest.headers.host = PROXY_DOMAIN;
  return httpRequest;
};

const makeSignedRequest = async (httpRequest) => {
  const client = new NodeHttpHandler();
  const { response } = await client.handle(httpRequest);
  return response;
};

updatedSignedRequestExample();
Hope that helps.

AWS Lambda Timeout when connecting to Redis Elasticache in same VPC

I'm trying to publish from a Lambda function to a Redis Elasticache, but I just keep getting 502 Bad Gateway responses as the Lambda function times out.
I have successfully connected to the Elasticache instance using an ECS in the same VPC which leads me to think that the VPC settings for my Lambda are not correct. I tried following this tutorial (https://docs.aws.amazon.com/lambda/latest/dg/services-elasticache-tutorial.html) and have looked at several StackOverflow threads to no avail.
The Lambda Function:
export const publisher = redis.createClient({
  url: XXXXXX, // env var containing the URL which is also used in the ECS server to successfully connect
});

export const handler = async (
  event: AWSLambda.APIGatewayProxyWithCognitoAuthorizerEvent
): Promise<AWSLambda.APIGatewayProxyResult> => {
  try {
    if (!event.body || !event.pathParameters || !event.pathParameters.channelId)
      return ApiResponse.error(400, {}, new InvalidRequestError());
    const { action, payload } = JSON.parse(event.body) as {
      action: string;
      payload?: { [key: string]: string };
    };
    const { channelId } = event.pathParameters;
    const publishAsync = promisify(publisher.publish).bind(publisher);
    await publishAsync(
      channelId,
      JSON.stringify({
        action,
        payload: payload || {},
      })
    );
    return ApiResponse.success(204);
  } catch (e) {
    Logger.error(e);
    return ApiResponse.error();
  }
};
In my troubleshooting, I have verified the following in the Lambda functions console:
The correct role is showing in Configuration > Permissions
The lambda function has access to the VPC (Configuration > VPCs), Subnets, and the same SG as the Elasticache instance.
The SG is allowing all traffic from anywhere.
It is indeed the Redis connection: using console.log, I confirmed the code stops at the await publishAsync() line.
I am sure it is something small, but it is racking my brain!
Update 1:
Tried adding an error handler to log any issues with the publish in addition to the main try/catch block, but it's not logging a thing.
publisher.on('error', (e) => {
  Logger.error(e, 'evses-service', 'message-publisher');
});
I have also included my Elasticache setup, Elasticache subnet group, Lambda VPC settings, and the Lambda's access permissions (screenshots omitted).
Update 2:
Tried to follow the tutorial here (https://docs.aws.amazon.com/lambda/latest/dg/services-elasticache-tutorial.html) word for word, but getting the same issue. No logs, just a timeout after 30 seconds. Here is the test code:
const crypto = require('crypto');
const redis = require('redis');
const util = require('util');

const client = redis.createClient({
  url: 'rediss://clusterforlambdatest.9nxhfd.0001.use1.cache.amazonaws.com',
});

client.on('error', (e) => {
  console.log(e);
});

exports.handler = async (event) => {
  try {
    const len = 24;
    const randomString = crypto
      .randomBytes(Math.ceil(len / 2))
      .toString('hex') // convert to hexadecimal format
      .slice(0, len)
      .toUpperCase();
    const setAsync = util.promisify(client.set).bind(client);
    const getAsync = util.promisify(client.get).bind(client);
    await setAsync(randomString, 'We set this string bruh!');
    const doc = await getAsync(randomString);
    console.log(`Successfully received document ${randomString} with contents: ${doc}`);
    return;
  } catch (e) {
    console.log(e);
    return {
      statusCode: 500,
    };
  }
};
If you get a timeout, assuming the Lambda networking is configured correctly, you should check the following:
Redis SSL configuration: check for differences between the rediss:// connection URL and the cluster configuration (in-transit encryption requires configuring the client with tls: {}).
Configure the client with a specific retry strategy, so it fails fast instead of hitting the Lambda timeout and so connection issues surface.
Check VPC ACLs and security groups.
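For the retry-strategy point, a minimal sketch (assuming the node-redis v3 API that the question's code uses; REDIS_URL is a placeholder):

const redis = require('redis');

const client = redis.createClient({
  url: process.env.REDIS_URL, // rediss:// URL when in-transit encryption is enabled
  tls: {}, // required by the client when the cluster has in-transit encryption enabled
  retry_strategy: (options) => {
    if (options.attempt > 3) {
      // Fail fast with a visible error instead of silently hitting the Lambda timeout
      return new Error('Could not connect to Redis after 3 attempts');
    }
    return Math.min(options.attempt * 100, 3000); // ms to wait before the next attempt
  },
});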
I had the same issue with my Elasticache cluster; here are a few findings:
Check the number of client connections to Elasticache and the resources being used.
Check the VPC subnet and CIDR ranges for the nodes' security group.
Try increasing the Lambda timeout and see which service is taking more time to respond, Lambda or Elasticache.

How to restrict API Key when calling the Reverse Geocoding API from Browser?

I am using Google Reverse Geocoding API from Browser.
The API works fine when using API Key with no restriction.
For example: https://maps.googleapis.com/maps/api/geocode/json?key=API_KEY_WITH_NO_RESTRICTION&latlng=41.8857156,-87.64823779999999 - OK
But as I am calling the API from the browser, I would like to restrict the API Key, preferably request originating from certain domains.
Now, per the restriction guidelines, HTTP referer restrictions won't work for the Geocoding API (one of the Google Web Service APIs); in that case it returns the error "API keys with referer restrictions cannot be used with this API."
The other option is to use an IP address restriction, but that seems better suited to calls originating from a server, in which case the server's address could be added to the restriction.
How can I secure (restrict) the API Key if I want to continue to call the Geocoding API from the browser?
I figured out that I have to use the Maps JavaScript API to call Reverse Geocoding (Address Lookup) from the browser (client) with HTTP referer restrictions in place for the API key.
In my initial implementation I used fetch(requestUrl) from the browser, as it seemed very convenient, and ended up with the above problem.
Example (using TypeScript):
Enable Maps Javascript API
Install required packages
npm install @googlemaps/js-api-loader
npm i -D @types/google.maps
reverseGeo.ts
import { Loader } from '@googlemaps/js-api-loader';

const loadScript = async (): Promise<void> => {
  const loader = new Loader({
    apiKey: API_KEY_WITH_REFERRER_RESTRICTION,
    version: 'weekly',
  });
  await loader.load();
};

export async function reverseGeo(
  lat: number, long: number
): Promise<string> {
  await loadScript();
  const geocoder = new google.maps.Geocoder();
  const latlng = {
    lat: lat,
    lng: long,
  };
  try {
    const { results } = await geocoder.geocode({ location: latlng });
    if (results && results[0]) {
      return results[0].formatted_address;
    }
  } catch (e) {
    // handle exception
  }
  throw new TypeError('Zero result or reverse geo Failed'); // or handle other way
}
reverseGeo.spec.ts
import { reverseGeo } from './reverseGeo';

it('should test reverseGeo', async () => {
  console.log(await reverseGeo(22.5726, 88.3639));
});

Sending message from Cognito triggers

I want to restrict user sign-ins from the Cognito hosted UI. I can see there are triggers to which we can attach a Lambda, but whenever I change the event object inside the Lambda, instead of getting my custom message User exceeded limits, I get an unrecognizable lambda output error.
Can anyone help me with this, or is there another way to achieve this functionality?
Now I'm getting this (screenshot omitted) with this code:
exports.handler = (event, context, callback) => {
  if (true) { // replace with your real sign-in count check
    var error = new Error("Cannot signin because your signin count is 5");
    // Return error to Amazon Cognito (return so the callback isn't invoked twice)
    return callback(error, event);
  }
  // Return to Amazon Cognito
  callback(null, event);
};
But I don't want the PreAuthentication failed with error prefix; I just want to display my message.
Any help is appreciated.
Currently, there is no way to stop Cognito from adding the prefix, because the form is a hosted web UI.
If this is a hard requirement, the workaround is to create your own login form and use the Cognito SDK (amazon-cognito-identity-js).
Once you make the call to cognitoUser.authenticateUser in the code below, the Pre authentication trigger will fire the Lambda function, and you will need to handle the error and parse it to remove the unwanted prefix.
Hope this helps.
AWS example: using the JavaScript SDK
var authenticationData = {
  Username: 'username',
  Password: 'password',
};
var authenticationDetails = new AmazonCognitoIdentity.AuthenticationDetails(authenticationData);
var poolData = {
  UserPoolId: 'us-east-1_TcoKGbf7n',
  ClientId: '4pe2usejqcdmhi0a25jp4b5sh3'
};
var userPool = new AmazonCognitoIdentity.CognitoUserPool(poolData);
var userData = {
  Username: 'username',
  Pool: userPool
};
var cognitoUser = new AmazonCognitoIdentity.CognitoUser(userData);
cognitoUser.authenticateUser(authenticationDetails, {
  onSuccess: function (result) {
    var accessToken = result.getAccessToken().getJwtToken();
    /* Use the idToken for the Logins map when federating User Pools with Identity Pools, or when passing through an Authorization header to an API Gateway authorizer */
    var idToken = result.getIdToken().getJwtToken();
  },
  // Your message from the Lambda will return here; you will need to parse err to remove the unwanted prefix
  onFailure: function (err) {
    alert(err);
  },
});
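For the parsing step, a hypothetical sketch of the onFailure handler (it assumes the prefix is exactly "PreAuthentication failed with error "; verify the wording your pool actually returns):

onFailure: function (err) {
  // Strip the prefix Cognito prepends to errors thrown by the Pre authentication trigger
  var message = String(err.message || err).replace(/^PreAuthentication failed with error\s*/, '');
  alert(message);
},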
