Decrypt SSM stored parameter with KMS encryption, Error: InvalidCiphertextException

I am trying to decrypt a parameter stored on SSM that is encrypted with a user managed KMS key, which I just created.
This post uses outdated methods.
This post says that the context used at creation should also be used for decryption via the EncryptionContext option. But I did not use a context when I created the key or the parameter. I also checked CloudTrail and there's no information about a context, and I didn't find anywhere to declare a context when creating a new parameter.
There is no example in the examples repo.
This Lambda is being executed with the correct permissions to Decrypt with the key and to Read from the SSM parameter store.
I am sure the parameter is fetched correctly, because I am able to retrieve the stored parameter if I do not encrypt it with the KMS key.
I also tried using another library, base64-js, to decode the base64 string to a Uint8Array, but the result is the same.
This is the sample code:
import { DecryptCommand, KMSClient } from '@aws-sdk/client-kms';
import { GetParameterCommand, SSMClient } from '@aws-sdk/client-ssm';
const kmsClient = new KMSClient({ region: process.env.REGION });
const ssmClient = new SSMClient({ region: process.env.REGION });
try {
  const response = await ssmClient.send(new GetParameterCommand({
    Name: `/path/to/param`
  }));
  // Value below verified without KMS key
  const sureItIsValid = response.Parameter?.Value as string
  // Obtained the same result for buff using base64-js lib
  const buff: Uint8Array = Buffer.from(sureItIsValid, 'base64');
  const command = new DecryptCommand({
    CiphertextBlob: buff,
    // The KeyId was also verified using the alias
    KeyId: 'arn:aws:kms:<REGION>:...',
  });
  const secrets = await kmsClient.send(command);
  console.error('result');
  console.log(secrets.Plaintext?.toString());
} catch (error) {
  console.error('error');
  console.error(JSON.stringify(error));
}
And I get:
ERROR error
ERROR {"name":"InvalidCiphertextException","$fault":"client","$metadata":{"httpStatusCode":400,"requestId":"the-request-id","attempts":1,"totalRetryDelay":0},"__type":"InvalidCiphertextException","message":"UnknownError"}

Add WithDecryption: true to your GetParameterCommand. SSM will call KMS to decrypt* the SecureString parameter and return the plaintext to us in Parameter.Value:
const command = new GetParameterCommand({
  Name: '/path/to/param',
  WithDecryption: true,
});
* If you are using the CDK to handle your Lambda permissions, the following will work:
param.grantRead(func); // let your Lambda function read the SSM Parameter
key.grantDecrypt(func); // let your Lambda Function decrypt the SSM Parameter
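Under the hood, SSM encrypts SecureString values using an encryption context of its own (the parameter's ARN), which is why calling kms:Decrypt yourself on the raw Value fails with InvalidCiphertextException. With WithDecryption the KMS client drops out of your code entirely; a minimal sketch of the corrected handler (same parameter name as above):

import { GetParameterCommand, SSMClient } from '@aws-sdk/client-ssm';

const ssmClient = new SSMClient({ region: process.env.REGION });

try {
  // SSM calls KMS on our behalf; Parameter.Value arrives already decrypted
  const response = await ssmClient.send(new GetParameterCommand({
    Name: '/path/to/param',
    WithDecryption: true,
  }));
  console.log(response.Parameter?.Value);
} catch (error) {
  console.error(JSON.stringify(error));
}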

Related

500 error when caching AWS Lambda Authenticator response

I'm using Serverless Stack (SST), and I'm attempting to add a custom Lambda Authorizer to validate authorization tokens with Auth0 and add custom data to my request context when authentication passes.
Everything works mostly fine at this point, except for when I cache the Authenticator response for the same token.
I'm using a 5-second cache for development. The first request with a valid token goes through as it should. The next requests in the 5-second window fail with a mysterious 500 error without ever reaching my code.
Authorizer configuration
// MyStack.ts
const authorizer = new sst.Function(this, "AuthorizerFunction", {
  handler: "src/services/Auth/handler.handler",
});

const api = new sst.Api(this, "MarketplaceApi", {
  defaultAuthorizationType: sst.ApiAuthorizationType.CUSTOM,
  defaultAuthorizer: new HttpLambdaAuthorizer("Authorizer", authorizer, {
    authorizerName: "LambdaAuthorizer",
    resultsCacheTtl: Duration.seconds(5), // <-- this is the cache config
  }),
  routes: {
    "ANY /{proxy+}": "APIGateway.handler",
  },
});
Authorizer handler
const handler = async (event: APIGatewayAuthorizerEvent): Promise<APIGatewayAuthorizerResult> => {
  // Authenticates with Auth0 and serializes the context data I want to
  // forward to the underlying service
  const authentication = await authenticate(event);
  const context = packAuthorizerContext(authentication.value);
  const result: APIGatewayAuthorizerResult = {
    principalId: authentication.value?.id || "unknown",
    policyDocument: buildPolicy(authentication.isSuccess ? "Allow" : "Deny", event.methodArn),
    context, // context has the following shape:
    // {
    //   info: {
    //     id: string,
    //     marketplaceId: string,
    //     roles: string,
    //     permissions: string
    //   }
    // }
  };
  return result;
};
CloudWatch logs
Every uncached request succeeds, with status code 200, an integration ID and everything, as it's supposed to. Every other request during the 5-second cache window fails with a 500 error code and no integration ID, meaning it never reaches my code.
Any tips?
Update
I just found this in an api-gateway.d.ts @types file (attention to the comments, please):
// Poorly documented, but API Gateway will just fail internally if
// the context type does not match this.
// Note that although non-string types will be accepted, they will be
// coerced to strings on the other side.
export interface APIGatewayAuthorizerResultContext {
  [name: string]: string | number | boolean | null | undefined;
}
And I did have this problem before I could get the Authorizer to work in the first place. I had my roles and permissions properties as string arrays, and I had to transform them to plain strings. Then it worked.
Lo and behold, I just ran a test right now, removing the context information I was returning for successfully validated tokens and now the cache is working 😔 every request succeeds, but I do need my context information...
Maybe there's a max length for the context object? Please let me know of any restrictions on the context object. As the @types file states, that thing is poorly documented. These are the docs I know about.
The issue is that none of the context object values may contain "special" characters.
Your context object must be something like:
"context": {
"someString": "value",
"someNumber": 1,
"someBool": true
},
You cannot set a JSON object or array as a valid value of any key in the context map. The only valid value types are string, number and boolean.
In my case, though, I needed to send a string array.
I tried to get around the type restriction by JSON-serializing the array, which produced "[\"valueA\",\"valueB\"]" and, for some reason, AWS didn't like it.
TL;DR
What solved my problem was using myArray.join(",") instead of JSON.stringify(myArray)
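A minimal sketch of that workaround (a hypothetical user object; assuming an HTTP API with payload format 2.0 on the consuming side, where the authorizer context lands under requestContext.authorizer.lambda):

// In the authorizer: flatten arrays into plain strings
const context = {
  id: user.id,
  roles: user.roles.join(","), // ["admin","editor"] -> "admin,editor"
  permissions: user.permissions.join(","),
};

// In the downstream handler: rebuild the array
const roles = (event.requestContext.authorizer?.lambda?.roles ?? "").split(",");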

How do I invoke a SAM Lambda function locally with X-Ray statements?

I'm receiving the following error when invoking an AWS SAM Lambda function locally:
Missing AWS Lambda trace data for X-Ray. Ensure Active Tracing is enabled and no subsegments are created outside the function handler.
Below you can see my function:
/** Bootstrap */
require('dotenv').config()
const AWSXRay = require('aws-xray-sdk')

/** Libraries */
const se = require('serialize-error')

/** Internal */
const logger = require('./src/utils/logger')
const ExecuteService = require('./src/service')

exports.handler = async (event) => {
  const xraySegment = AWSXRay.getSegment()
  const message = process.env.NODE_ENV == 'production' ? JSON.parse(event.Records[0].body) : event
  try {
    await ExecuteService(message)
  } catch (err) {
    logger.error({
      error: se(err)
    })
    return err
  }
}
In addition, I have Tracing set to Active in my template.yml.
What part of the documentation am I clearly misreading, missing, or reading over?
For now you can't invoke a SAM Lambda locally with X-Ray, because it is not supported yet.
See
[Feature Request] Add support for X-Ray on SAM Local
#217
aws-lambda-runtime-interface-emulator#level-of-support
The component does not support X-ray and other Lambda integrations locally.
If you don't care about X-Ray and just want your code to work, you can check the env variable AWS_SAM_LOCAL to prevent X-Ray usage:
let AWSXRay
if (!process.env.AWS_SAM_LOCAL) {
  AWSXRay = require('aws-xray-sdk')
}

// ...

if (!process.env.AWS_SAM_LOCAL) {
  const xraySegment = AWSXRay.getSegment()
}
I am using SAM CLI, version 1.36.0
I noticed that merely importing aws-xray-sdk does not cause this issue, but invoking its functions does.
To avoid a million if statements, I created a middleman service that imports aws-xray-sdk and exports no-ops when the env var process.env.AWS_SAM_LOCAL is set.
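For reference, a sketch of what such a middleman module could look like (a hypothetical src/utils/xray.js; only the calls used above are shown, any others would get the same treatment):

// Loads the real SDK everywhere except under `sam local`, where it exports no-ops
let AWSXRay = null
if (!process.env.AWS_SAM_LOCAL) {
  AWSXRay = require('aws-xray-sdk')
}

module.exports = {
  getSegment: () => (AWSXRay ? AWSXRay.getSegment() : undefined),
  captureAWS: (sdk) => (AWSXRay ? AWSXRay.captureAWS(sdk) : sdk),
}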

Lambda Layers not installing with Serverless

Currently getting the following error with MongoDB:
no saslprep library specified. Passwords will not be sanitized
We are using Webpack, so simply installing the module doesn't work (Webpack just ignores it). I found this thread which talks about how to exclude it from Webpack compilations, but then I'd have to manually load it into every Lambda function, which led me to Lambda Layers.
Following the Serverless guide on using Lambda layers allowed me to get my layer published to AWS and included in all of my functions, but for some reason, it doesn't install the modules. If I download the layer using the AWS GUI, I get a folder with just the package.json and package-lock.json files.
My file structure is:
my-project
|_ layers
|_ saslprep
|_ package.json
and my serverless.yml is:
layers:
  saslprep:
    path: layers/saslprep
    compatibleRuntimes:
      - nodejs14.x
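One guess at the cause before the workaround below (an assumption about packaging, not something the thread confirms): Serverless zips the layer path as-is and does not run npm install inside it, and the Node.js runtime only resolves layer modules that sit under nodejs/node_modules. The layer folder would then need to look like this, with the install run inside layers/saslprep/nodejs before deploying:

my-project
|_ layers
   |_ saslprep
      |_ nodejs
         |_ package.json
         |_ node_modules   <- created by running `npm install` here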
This is not my preferred solution, as I'd like to use SCRAM-SHA-256, but the way I got around this error/warning was by changing the authMechanism from SCRAM-SHA-256 to SCRAM-SHA-1 in the connection string. The serverless-bundle package most likely needs to add this dependency to enable support for MongoDB 4.0's SHA-256 (my best guess!).
You can specify this authentication mechanism by setting the authMechanism parameter to the value SCRAM-SHA-1 in the connection string as shown in the following sample code.
const { MongoClient } = require("mongodb");

// Replace the following with values for your environment.
const username = encodeURIComponent("<username>");
const password = encodeURIComponent("<password>");
const clusterUrl = "<MongoDB cluster url>";
const authMechanism = "SCRAM-SHA-1";

// Replace the following with your MongoDB deployment's connection string.
const uri =
  `mongodb+srv://${username}:${password}@${clusterUrl}/?authMechanism=${authMechanism}`;

// Create a new MongoClient
const client = new MongoClient(uri);

// Function to connect to the server
async function run() {
  try {
    // Connect the client to the server
    await client.connect();
    // Establish and verify connection
    await client.db("admin").command({ ping: 1 });
    console.log("Connected successfully to server");
  } finally {
    // Ensures that the client will close when you finish/error
    await client.close();
  }
}
run().catch(console.dir);

Winston CloudWatch Transport not Creating Logs When Running on Lambda

I have an Express.js app that is set up to run from within an AWS Lambda function. When I deploy this app to Lambda, the console logs show up in the Lambda's CloudWatch log group (i.e. /aws/lambda/lambda-name), but it doesn't create the new CloudWatch log group specified in the configuration.
If I run the lambda function locally and generate logs it will create a CloudWatch Log Group for the local environment.
The Lambda Functions are connecting to an RDS instance so they are contained within a VPC.
The Lambda has been assigned the CloudWatchFullAccess policy so it should not be a permissions error.
I've looked at the Lambda logs and I'm not seeing any errors coming through related to this.
const env = process.env.NODE_ENV || 'local'
const config = require('../../config/env.json')[env]
const winston = require('winston')
const WinstonCloudwatch = require('winston-cloudwatch')
const crypto = require('crypto')

let startTime = new Date().toISOString()

const logger = winston.createLogger({
  exitOnError: false,
  level: 'info',
  transports: [
    new winston.transports.Console({
      json: true,
      colorize: true,
      level: 'info'
    }),
    new WinstonCloudwatch({
      awsAccessKeyId: config.aws.accessKeyId,
      awsSecretKey: config.aws.secretAccessKey,
      logGroupName: 'my-api-' + env,
      logStreamName: function () {
        // Spread log streams across dates as the server stays up
        let date = new Date().toISOString().split('T')[0]
        return 'my-requests-' + date + '-' +
          crypto.createHash('md5')
            .update(startTime)
            .digest('hex')
      },
      awsRegion: 'us-east-1',
      jsonMessage: true
    })
  ]
})

const winstonStream = {
  write: (message, encoding) => {
    // use the 'info' log level so the output will be picked up by both transports
    logger.info(message)
  }
}

module.exports.logger = logger
module.exports.winstonStream = winstonStream
Then within my Express app:
const morgan = require('morgan')
const { winstonStream } = require('./providers/loggers')

app.use(morgan('combined', { stream: winstonStream }))
Confirming that the problem was related to the Lambda function being in a VPC and not granted public access to the internet through subnets, route tables, NAT and internet gateways, as described in this post: https://gist.github.com/reggi/dc5f2620b7b4f515e68e46255ac042a7
I believe that to access external internet services you'd need what you described.
But to access an AWS service outside the VPC you can create a VPC endpoint.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html
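If the stack is defined with the CDK, for example, an interface endpoint for CloudWatch Logs is a one-liner (a sketch; `vpc` is assumed to be your existing VPC construct and the construct id is illustrative):

import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Lets Lambdas in private subnets reach CloudWatch Logs without a NAT gateway
vpc.addInterfaceEndpoint('CloudWatchLogsEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.CLOUDWATCH_LOGS,
});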

Lambda chaining - Invoke lambda from another lambda using terraform

I am trying to invoke one AWS Lambda from another to perform Lambda chaining. The rationale for doing this is that AWS does not allow multiple triggers from the same S3 bucket.
I have created one Lambda with an S3 trigger. The Java code of the first Lambda listens for the S3 event and contains the invocation of the second Lambda. Both Lambdas are created with Terraform.
Lambda A has the S3 trigger and will be invoked on an S3 event on a particular bucket. Lambda A does its processing and then invokes Lambda B with an InvokeRequest. The Lambda B invocation from Lambda A's Java code is:
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

public class EventHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) throws RuntimeException {
        AWSLambda lambdaClient = AWSLambdaClientBuilder.defaultClient();
        InvokeRequest req = new InvokeRequest()
                .withFunctionName("LambdaFunctionB")
                .withPayload(json); // json: payload built from the S3 event
        lambdaClient.invoke(req);
        return "Lambda B invoked";
    }
}
Both the lambdas are created using terraform. Scripts below:
Lambda A terraform:
module "lambda_function" {
source = "Git Path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionA"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "EventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_bucket" {
statement_id = "statementId"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "s3.bucket.arn"
}
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "bucketName"
lambda_function {
lambda_function_arn = "${module.lambda_function.lambda_arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "path/subPath"
}
}
Lambda B terraform:
module "lambda_function" {
source = "git path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionB"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "LambdaBEventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_lambda" {
statement_id = "AllowExecutionFromLambda"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionA"
}
lambda-iam-role has the below policies attached:
AmazonS3FullAccess
AWSLambdaBasicExecutionRole
AWSLambdaVPCAccessExecutionRole
AmazonSNSFullAccess
CloudWatchEventsFullAccess
The expectation was that Lambda A would successfully invoke Lambda B, but I am getting an AccessDeniedException in Lambda A's logs and it is not able to invoke Lambda B. The error is:
com.amazonaws.services.lambda.model.AWSLambdaException: User: arn:aws:sts::xxxxxxxxx:assumed-role/lambda-iam-role/LambdaFunctionA is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:eu-west-1:xxxxxxxxx:function:LambdaFunctionB (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: f495ede3-b3cb-47a1-b884-16996545233d)
Hope this helps you. It's not exactly the same scenario, but it shows invoking one Lambda from another Lambda: GitHub
I think the Lambda's execution role needs the "lambda:InvokeFunction" permission as well.
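In Terraform that could be an inline policy on the execution role, along these lines (a sketch reusing the role name and the account/region placeholders from the question):

resource "aws_iam_role_policy" "allow_invoke_lambda_b" {
  name = "allow-invoke-lambda-b"
  role = "lambda-iam-role"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "lambda:InvokeFunction"
      Resource = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionB"
    }]
  })
}

Note that the aws_lambda_permission with the s3.amazonaws.com principal on Lambda B doesn't apply here: for Lambda-to-Lambda calls, the caller's execution role needs the permission, not the callee's resource policy.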
I found an answer online, using the aws-sdk.
var aws = require('aws-sdk');

var lambda = new aws.Lambda({
  region: 'default'
});

lambda.invoke({
  FunctionName: 'name_of_your_lambda_function',
  Payload: JSON.stringify(event, null, 2) // pass params
}, function(error, data) {
  if (error) {
    context.done('error', error);
  }
  if (data.Payload) {
    context.succeed(data.Payload);
  }
});
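One small note on this sketch: Lambda.invoke is synchronous (RequestResponse) by default; for fire-and-forget chaining you can pass InvocationType: 'Event' (the callback name below is illustrative):

lambda.invoke({
  FunctionName: 'name_of_your_lambda_function',
  InvocationType: 'Event', // async: returns 202 without waiting for the callee
  Payload: JSON.stringify(event)
}, callback);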
You can find the doc here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
Hope it helps
:)
