Winston CloudWatch Transport not Creating Logs When Running on Lambda

I have an Express.js app that is set up to run inside an AWS Lambda function. When I deploy the app to Lambda, the console logs show up in the Lambda's own CloudWatch log group (i.e. /aws/lambda/lambda-name), but it doesn't create the new CloudWatch log group specified in the configuration.
If I run the Lambda function locally and generate logs, it does create a CloudWatch log group for the local environment.
The Lambda functions connect to an RDS instance, so they are contained within a VPC.
The Lambda has been assigned the CloudWatchFullAccess policy, so it should not be a permissions error.
I've looked at the Lambda logs and I'm not seeing any errors related to this.
const env = process.env.NODE_ENV || 'local'
const config = require('../../config/env.json')[env]
const winston = require('winston')
const WinstonCloudwatch = require('winston-cloudwatch')
const crypto = require('crypto')

let startTime = new Date().toISOString()

const logger = winston.createLogger({
  exitOnError: false,
  level: 'info',
  transports: [
    new winston.transports.Console({
      json: true,
      colorize: true,
      level: 'info'
    }),
    new WinstonCloudwatch({
      awsAccessKeyId: config.aws.accessKeyId,
      awsSecretKey: config.aws.secretAccessKey,
      logGroupName: 'my-api-' + env,
      logStreamName: function () {
        // Spread log streams across dates as the server stays up
        let date = new Date().toISOString().split('T')[0]
        return 'my-requests-' + date + '-' +
          crypto.createHash('md5')
            .update(startTime)
            .digest('hex')
      },
      awsRegion: 'us-east-1',
      jsonMessage: true
    })
  ]
})

const winstonStream = {
  write: (message, encoding) => {
    // use the 'info' log level so the output will be picked up by both transports
    logger.info(message)
  }
}

module.exports.logger = logger
module.exports.winstonStream = winstonStream
Then within my Express app:
const morgan = require('morgan')
const { winstonStream } = require('./providers/loggers')

app.use(morgan('combined', { stream: winstonStream }))

Confirming that the problem was related to the Lambda function being in a VPC without public internet access through subnets, route tables, NAT and internet gateways, as described in this post: https://gist.github.com/reggi/dc5f2620b7b4f515e68e46255ac042a7

I believe that to access external internet services you would need what you described.
But to reach an AWS service from inside the VPC you can create a VPC endpoint instead:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html
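For reference, the endpoint can also be created programmatically. Below is a minimal sketch with the AWS SDK for JavaScript (v2); the VPC, subnet and security-group IDs are placeholders, and creating it in the console or an IaC template works just as well.
const AWS = require('aws-sdk');

const ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.createVpcEndpoint({
  VpcEndpointType: 'Interface',
  VpcId: 'vpc-0123456789abcdef0',                // placeholder
  ServiceName: 'com.amazonaws.us-east-1.logs',   // CloudWatch Logs endpoint service
  SubnetIds: ['subnet-0123456789abcdef0'],       // placeholder
  SecurityGroupIds: ['sg-0123456789abcdef0'],    // must allow HTTPS (443) from the Lambda
  PrivateDnsEnabled: true                        // keeps logs.us-east-1.amazonaws.com resolving privately
}).promise()
  .then(res => console.log('Created endpoint', res.VpcEndpoint.VpcEndpointId))
  .catch(console.error);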

Related

How to run nextjs in AWS lambda with `experimental-edge` runtime

I'm trying to find a way to run Next.js (v13.0.6) with OG image generation logic (using @vercel/og) in AWS Lambda.
Everything works fine locally (in dev and prod mode), but when I try to execute the Lambda handler I get "statusCode": 500.
It only fails for APIs that involve ImageResponse (and runtime: 'experimental-edge', which is a requirement for @vercel/og).
I'm pretty sure the problem is caused by the Edge Runtime not being configured correctly.
Here is my handler code.
next build with next.config.js output: 'standalone' creates the folder .next/standalone;
inside standalone, handler.js:
const { parse } = require('url');
const NextServer = require('next/dist/server/next-server').default
const serverless = require('serverless-http');
const path = require('path');

process.env.NODE_ENV = 'production'
process.chdir(__dirname)

const currentPort = parseInt(process.env.PORT, 10) || 3000

const nextServer = new NextServer({
  hostname: 'localhost',
  port: currentPort,
  dir: path.join(__dirname),
  dev: false,
  customServer: false,
  conf: {...} // copied from `server.js` in the same folder
});

const requestHandler = nextServer.getRequestHandler();

// this is an AWS Lambda handler that converts the Lambda event
// to an http request that the Next server can process
const handler = serverless(async (req, res) => {
  // const parsedUrl = parse(req.url, true);
  try {
    await requestHandler(req, res);
  } catch (err) {
    console.error(err);
    res.statusCode = 500
    res.end('internal server error')
  }
});

module.exports = {
  handler
}
I'm testing it locally with local-lambda, but I get similar results when testing against the AWS-deployed Lambda.
What is confusing is that server.js (in .next/standalone) has a similar setup; it just adds an HTTP server on top of it.
Update: the AWS Lambda logs show
ERROR [Error [CompileError]: WebAssembly.compile(): Compiling function #64 failed: invalid value type 'Simd128', enable with --experimental-wasm-simd @+3457 ]
Update 2: the first error was fixed by selecting Node 16 for the AWS Lambda; now I'm getting this error:
{
  "errorType": "Error",
  "errorMessage": "write after end",
  "trace": [
    "Error [ERR_STREAM_WRITE_AFTER_END]: write after end",
    " at new NodeError (node:internal/errors:372:5)",
    " at ServerlessResponse.end (node:_http_outgoing:846:15)",
    " at ServerlessResponse.end (/var/task/node_modules/next/dist/compiled/compression/index.js:22:783)",
    " at NodeNextResponse.send (/var/task/node_modules/next/dist/server/base-http/node.js:93:19)",
    " at NextNodeServer.handleRequest (/var/task/node_modules/next/dist/server/base-server.js:332:47)",
    " at processTicksAndRejections (node:internal/process/task_queues:96:5)",
    " at async /var/task/index.js:34:5"
  ]
}
At the moment of writing, Vercel's runtime: 'experimental-edge' seems to be unstable (I ran into multiple issues with it).
I ended up recreating the @vercel/og lib without the wasm and Next.js dependencies (it can be found here), and I simply use it in AWS Lambda. It depends on @resvg/resvg-js instead of the wasm version, which uses native binaries, so there should not be much performance loss compared to wasm.
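For illustration, the general shape of that approach (satori for the layout, @resvg/resvg-js for rasterizing) looks roughly like the sketch below; the font path is an assumption, and this is not the linked library itself.
import fs from 'fs';
import satori from 'satori';
import { Resvg } from '@resvg/resvg-js';

export async function renderOgImage(title) {
  // satori turns an element tree into an SVG string
  const svg = await satori(
    { type: 'div', props: { style: { fontSize: 48, color: 'black' }, children: title } },
    {
      width: 1200,
      height: 630,
      // satori needs at least one font buffer; this path is hypothetical
      fonts: [{ name: 'Inter', data: fs.readFileSync('./fonts/Inter-Regular.ttf'), weight: 400, style: 'normal' }]
    }
  );

  // @resvg/resvg-js rasterizes the SVG to a PNG Buffer using a native binary, no wasm involved
  return new Resvg(svg).render().asPng();
}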

Unable to connect to Heroku Redis from Node Server

It works well when connecting to Redis locally and through the official Redis Docker image. But when I switch to the Heroku Redis values for the ENV variables, it is unable to connect.
I have tried the full URL option as well, but that doesn't seem to work for any Redis connection when I need to pass an options object as the 2nd parameter to new Redis(); the URL option only works locally and with the official Redis Docker image when I don't pass any options.
Passing only the Heroku Redis URI with no options to new Redis() looks like it works, but then I get "Redis Connection Failure" after 10 seconds.
Does Heroku Redis need some sort of extra preparation step?
import Redis, { RedisOptions } from 'ioredis';
import logger from '../logger';

const REDIS_HOST = process.env.REDIS_HOST || '127.0.0.1';
const REDIS_PORT = Number(process.env.REDIS_PORT) || 6379;
const REDIS_PASSWORD = process.env.REDIS_PASSWORD;
const REDIS_DB = Number(process.env.REDIS_DB) || 0;

const redisConfig: RedisOptions = {
  host: REDIS_HOST,
  port: Number(REDIS_PORT),
  password: REDIS_PASSWORD,
  db: Number(REDIS_DB),
  retryStrategy: function (times) {
    if (times % 4 == 0) {
      logger.error('Redis reconnect exhausted after 4 retries');
      return null;
    }
    return 200;
  },
};

const redis = new Redis(redisConfig);

redis.on('error', function () {
  logger.error('Redis Connection Failure');
});

export default redis;
I'm not sure where you got the idea to use environment variables called REDIS_HOST, REDIS_PORT, REDIS_PASSWORD, and REDIS_DB. Heroku Data for Redis provides a single environment variable that captures all of this:
After Heroku Data for Redis has been created, the new release is created and the application restarts. A REDIS_URL config var is available in the app configuration. It contains the URL you can use to access the newly provisioned Heroku Data for Redis instance.
Here is their example of how to connect from Node.js:
const redis = require("redis");
const client = redis.createClient({
url: process.env.REDIS_URL,
socket: {
tls: true,
rejectUnauthorized: false
}
});
So, change your configuration object accordingly:
const REDIS_URL = process.env.REDIS_URL;

const redisConfig: RedisOptions = {
  url: REDIS_URL,              // <--
  socket: {                    // <--
    tls: true,                 // <--
    rejectUnauthorized: false  // <--
  },                           // <--
  retryStrategy: function (times) {
    if (times % 4 == 0) {
      logger.error('Redis reconnect exhausted after 4 retries');
      return null;
    }
    return 200;
  },
};
You are already using an environment variable locally to set your Redis password. Replace that with an appropriate REDIS_URL that contains all of your defaults, e.g. something like this:
REDIS_URL=redis://user:password@host:port/database
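Note that the options above follow node-redis's createClient signature. If you stay on ioredis (as in the question), a rough equivalent is to pass the URL as the first argument and the TLS options separately; this is a sketch, assuming your plan exposes a TLS URL via REDIS_URL or REDIS_TLS_URL.
import Redis from 'ioredis';
import logger from '../logger';

// REDIS_URL (or REDIS_TLS_URL on some plans) already carries host, port, password and db
const redis = new Redis(process.env.REDIS_TLS_URL || process.env.REDIS_URL, {
  tls: { rejectUnauthorized: false }, // Heroku Redis presents self-signed certificates
  retryStrategy(times) {
    if (times % 4 === 0) {
      logger.error('Redis reconnect exhausted after 4 retries');
      return null;
    }
    return 200;
  },
});

export default redis;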

How do I invoke a SAM Lambda function locally with X-Ray statements?

I'm receiving the following error when invoking an AWS SAM Lambda function locally:
Missing AWS Lambda trace data for X-Ray. Ensure Active Tracing is enabled and no subsegments are created outside the function handler.
Below you can see my function:
/** Bootstrap */
require('dotenv').config()
const AWSXRay = require('aws-xray-sdk')

/** Libraries */
const se = require('serialize-error')

/** Internal */
const logger = require('./src/utils/logger')
const ExecuteService = require('./src/service')

/**
 *
 */
exports.handler = async (event) => {
  const xraySegement = AWSXRay.getSegment()

  const message = process.env.NODE_ENV == 'production' ? JSON.parse(event.Records[0].body) : event

  try {
    await ExecuteService(message)
  } catch (err) {
    logger.error({
      error: se(err)
    })

    return err
  }
}
In addition, I have Tracing set to Active in my template.yml.
What part of the documentation am I clearly misreading, missing, or reading over?
For now you can't invoke a SAM Lambda locally with X-Ray, because it is not supported yet.
See:
[Feature Request] Add support for X-Ray on SAM Local #217
aws-lambda-runtime-interface-emulator#level-of-support:
"The component does not support X-ray and other Lambda integrations locally."
If you don't care about X-Ray and just want your code to work, you can check the env variable AWS_SAM_LOCAL to prevent X-Ray usage:
let AWSXRay

if (!process.env.AWS_SAM_LOCAL) {
  AWSXRay = require('aws-xray-sdk')
}

// ...

if (!process.env.AWS_SAM_LOCAL) {
  const xraySegement = AWSXRay.getSegment()
}
I am using SAM CLI, version 1.36.0.
I noticed that importing aws-xray-sdk does not cause this issue, but invoking its functions does.
To avoid a million if statements I created a middle-man service that imports aws-xray-sdk and exports no-ops when the env var process.env.AWS_SAM_LOCAL is set; see the sketch below.
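A minimal sketch of such a middle-man module; the file name and the set of no-ops are assumptions, so stub out whichever aws-xray-sdk calls your code actually uses.
// xray.js (hypothetical file) – require this instead of 'aws-xray-sdk'
let AWSXRay;

if (!process.env.AWS_SAM_LOCAL) {
  AWSXRay = require('aws-xray-sdk');
} else {
  // No-op stand-ins so handlers don't need their own if statements when running locally
  AWSXRay = {
    getSegment: () => undefined,
    captureAWS: (sdk) => sdk,
    captureAsyncFunc: (name, fn) =>
      fn({ addAnnotation: () => {}, addError: () => {}, close: () => {} }),
  };
}

module.exports = AWSXRay;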

"Unsupported Media Type" using serverless offline

I'm working on a small serverless-offline assignment and I got the error "Unsupported Media Type" when I tried to invoke one Lambda function from another.
I found a solution, but when I tried to apply it to my project it did not work.
All the details are in the link below. Could anyone help me with that?
https://github.com/dherault/serverless-offline/issues/1005#issue-632401297
There are three possible solutions.
1. Make sure that lambda_A uses the same port and host where lambda_B is running (see the invoke sketch after this list).
Lambda_A:
const { Lambda } = require('aws-sdk');

const lambda = new Lambda({
  region: 'us-east-1',
  endpoint: 'http://localhost:3000',
});

module.exports.main = async (event, context) => {
  // invoke
}
Lambda_B is running on http://localhost:3000.
2. Configure serverless-offline in both functions:
https://www.serverless.com/plugins/serverless-offline#usage-with-invoke
3. Do lambda_A and lambda_B have the correct stage? Remember to use sls offline --stage local for both functions.
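For completeness, a sketch of what the elided // invoke part could look like; the function name follows the usual <service>-<stage>-<function> pattern and is an assumption.
const { Lambda } = require('aws-sdk');

const lambda = new Lambda({
  region: 'us-east-1',
  endpoint: 'http://localhost:3000', // must match the host/port serverless-offline reports for lambda_B
});

module.exports.main = async (event, context) => {
  const result = await lambda.invoke({
    FunctionName: 'my-service-local-lambda_B', // assumption: <service>-<stage>-<function>
    InvocationType: 'RequestResponse',
    Payload: JSON.stringify({ some: 'payload' }),
  }).promise();

  return JSON.parse(result.Payload);
};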

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256

I get the error AWS::S3::Errors::InvalidRequest The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.
Script:
backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'
s3 = AWS::S3.new(
access_key_id: AMAZONS3['access_key_id'],
secret_access_key: AMAZONS3['secret_access_key']
)
s3_bucket = s3.buckets['test-frankfurt']
# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"
file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)
aws-sdk (1.56.0)
How to fix it?
Thank you.
AWS4-HMAC-SHA256, also known as Signature Version 4, ("V4") is one of two authentication schemes supported by S3.
All regions support V4, but US-Standard¹, and many -- but not all -- other regions, also support the other, older scheme, Signature Version 2 ("V2").
According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html ... new S3 regions deployed after January, 2014 will only support V4.
Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.
I would speculate that some older versions of the SDKs might not support this option, so if the above doesn't help, you may need a newer release of the SDK you are using.
¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written,
"Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it's only a change in naming.
With node, try
var s3 = new AWS.S3({
  endpoint: 's3-eu-central-1.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-central-1'
});
You should set signatureVersion: 'v4' in the config to use the new signing version:
AWS.config.update({
  signatureVersion: 'v4'
});
This works for the JS SDK.
For people using boto3 (the Python SDK), use the code below:
import boto3
from botocore.client import Config

s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    config=Config(signature_version='s3v4')
)
I have been using Django, and I had to add these extra config variables to make this work. (in addition to settings mentioned in https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html).
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Similar issue with the PHP SDK, this works:
$s3Client = S3Client::factory(array('key'=>YOUR_AWS_KEY, 'secret'=>YOUR_AWS_SECRET, 'signature' => 'v4', 'region'=>'eu-central-1'));
The important bit is the signature and the region
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
this also saved my time after surfing for 24Hours..
Code for Flask (boto3)
Don't forget to import Config. Also, if you have your own config class, change its name to avoid a clash.
import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    config=Config(signature_version='s3v4'),
    region_name=app.config["AWS_REGION"],
    aws_access_key_id=app.config['AWS_ACCESS_KEY'],
    aws_secret_access_key=app.config['AWS_SECRET_KEY']
)
s3.upload_fileobj(file, app.config["AWS_BUCKET_NAME"], file.filename)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': app.config["AWS_BUCKET_NAME"], 'Key': file.filename},
    ExpiresIn=10000
)
In Java I had to set a property
System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true")
and add the region to the s3Client instance.
s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))
With boto3, this is the code :
s3_client = boto3.resource('s3', region_name='eu-central-1')
or
s3_client = boto3.client('s3', region_name='eu-central-1')
For thumbor-aws, which uses the boto config, I needed to put this in the $AWS_CONFIG_FILE:
[default]
aws_access_key_id = (your ID)
aws_secret_access_key = (your secret key)
s3 =
    signature_version = s3
So anything that uses boto directly without changes may find this useful.
Supernova's answer for django/boto3/django-storages worked for me:
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Just add them to your settings.py and change the region code accordingly.
You can look up the region codes in the AWS documentation on regions and endpoints.
For Android SDK, setEndpoint solves the problem, although it's been deprecated.
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
        context, "identityPoolId", Regions.US_EAST_1);
AmazonS3 s3 = new AmazonS3Client(credentialsProvider);
s3.setEndpoint("s3.us-east-2.amazonaws.com");
Basically, the error was because I was using an old version of the aws-sdk and I updated the version, so this error occurred.
In my case with Node.js, I was using signatureVersion in the params object like this:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    signatureVersion: 'v4',
    region: process.env.AWS_S3_REGION
  }
});
Then I moved signatureVersion out of the params object and it worked like a charm:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    region: process.env.AWS_S3_REGION
  },
  signatureVersion: 'v4'
});
Check your AWS S3 bucket region and pass the proper region in the connection request.
In my scenario I have set RegionEndpoint.APSouth1 for Asia Pacific (Mumbai):
using (var client = new AmazonS3Client(awsAccessKeyId, awsSecretAccessKey, RegionEndpoint.APSouth1))
{
    GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
    {
        BucketName = bucketName,
        Key = keyName,
        Expires = DateTime.Now.AddMinutes(50),
    };

    urlString = client.GetPreSignedURL(request1);
}
In my case, the request type was wrong: I was using GET when it must be PUT.
Here is the function I used with Python:
import os

import boto3

# settings and logger come from the surrounding project
def uploadFileToS3(filePath, s3FileName):
    s3 = boto3.client(
        's3',
        endpoint_url=settings.BUCKET_ENDPOINT_URL,
        aws_access_key_id=settings.BUCKET_ACCESS_KEY_ID,
        aws_secret_access_key=settings.BUCKET_SECRET_KEY,
        region_name=settings.BUCKET_REGION_NAME
    )
    try:
        s3.upload_file(
            filePath,
            settings.BUCKET_NAME,
            s3FileName
        )
        # remove file from local to free up space
        os.remove(filePath)
        return True
    except Exception as e:
        logger.error('uploadFileToS3#Error')
        logger.error(e)
        return False
Sometimes the default version will not update. Add this setting
AWS_S3_SIGNATURE_VERSION = "s3v4"
to settings.py.
For boto3, use this code:
import boto3
from botocore.client import Config

s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    region_name='us-south-1',
    config=Config(signature_version='s3v4')
)
Try this combination.
const s3 = new AWS.S3({
  endpoint: 's3-ap-south-1.amazonaws.com', // Bucket region
  accessKeyId: 'A-----------------U',
  secretAccessKey: 'k------ja----------------soGp',
  Bucket: 'bucket_name',
  useAccelerateEndpoint: true,
  signatureVersion: 'v4',
  region: 'ap-south-1' // Bucket region
});
I was stuck for 3 days and finally, after reading a ton of blogs and answers, I was able to configure an Amazon AWS S3 bucket.
On the AWS side
I am assuming you have already:
Created an S3 bucket
Created a user in IAM
Steps
1. Configure CORS settings
your bucket > Permissions > CORS configuration
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
2. Generate a bucket policy
your bucket > Permissions > Bucket policy
It should be similar to this one:
{
    "Version": "2012-10-17",
    "Id": "Policy1602480700663",
    "Statement": [
        {
            "Sid": "Stmt1602480694902",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::harshit-portfolio-bucket/*"
        }
    ]
}
PS: The bucket policy should say `public` after this.
3. Configure the Access Control List
your bucket > Permissions > Access control list
Give public access.
PS: The Access Control List should say public after this.
4. Unblock public access
your bucket > Permissions > Block public access
Edit and turn all options Off.
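If you prefer to script that last step instead of clicking through the console, here is a sketch with the AWS SDK for JavaScript (v2); the bucket name and region are placeholders.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'ap-south-1' });

// Turn all four "Block Public Access" options off for the bucket
s3.putPublicAccessBlock({
  Bucket: 'your-bucket-name', // placeholder
  PublicAccessBlockConfiguration: {
    BlockPublicAcls: false,
    IgnorePublicAcls: false,
    BlockPublicPolicy: false,
    RestrictPublicBuckets: false
  }
}).promise()
  .then(() => console.log('Block Public Access disabled'))
  .catch(console.error);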
On a side note, if you are working with Django, add the following lines to the settings.py file of your project:
#S3 BUCKETS CONFIG
AWS_ACCESS_KEY_ID = '****not to be shared*****'
AWS_SECRET_ACCESS_KEY = '*****not to be shared******'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# look for files first in aws
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# In India these settings work
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Also coming from: https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html
For me this was the solution:
AWS_S3_REGION_NAME = "eu-central-1"
AWS_S3_ADDRESSING_STYLE = 'virtual'
This needs to be added to settings.py in your Django project
Using the PHP SDK, follow the example below.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$client = S3Client::factory(
    array(
        'signature' => 'v4',
        'region' => 'me-south-1',
        'key' => YOUR_AWS_KEY,
        'secret' => YOUR_AWS_SECRET
    )
);
Node.js:
var aws = require("aws-sdk");

aws.config.update({
  region: process.env.AWS_REGION,
  secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
  accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
});

var s3 = new aws.S3({
  signatureVersion: "v4",
});

let data = await s3.getSignedUrl("putObject", {
  ContentType: mimeType, // image mime type from request
  Bucket: "MybucketName",
  Key: folder_name + "/" + uuidv4() + "." + mime.extension(mimeType),
  Expires: 300,
});

console.log(data);
AWS S3 bucket permission configuration:
Deselect Block All Public Access.
Add the below policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::MybucketName/*"]
        }
    ]
}
Then paste the returned URL and make a PUT request to it with the binary file of the image.
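For illustration, a sketch of that PUT request from Node.js; the file path is an assumption, and data and mimeType are the values from the snippet above.
const fs = require('fs');

async function uploadWithPresignedUrl(presignedUrl, filePath, mimeType) {
  const body = fs.readFileSync(filePath);
  // Node 18+ ships a global fetch; on older versions use node-fetch or axios instead
  const response = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': mimeType }, // must match the ContentType used when signing
    body,
  });
  console.log(response.status); // expect 200 on success
}

// e.g. uploadWithPresignedUrl(data, './photo.png', mimeType);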
Full working Node.js version:
const AWS = require('aws-sdk');

var s3 = new AWS.S3({
  endpoint: 's3.eu-west-2.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-west-2'
});

const getPreSignedUrl = async () => {
  const params = {
    Bucket: 'some-bucket-name/some-folder',
    Key: 'some-filename.json',
    Expires: 60 * 60 * 24 * 7
  };

  try {
    const presignedUrl = await new Promise((resolve, reject) => {
      s3.getSignedUrl('getObject', params, (err, url) => {
        err ? reject(err) : resolve(url);
      });
    });
    console.log(presignedUrl);
  } catch (err) {
    if (err) {
      console.log(err);
    }
  }
};

getPreSignedUrl();
