Nodemailer not working with EC2 instance - amazon-ec2

var transporter = nodemailer.createTransport({
  service: 'gmail',
  host: 'smtp.gmail.com',
  auth: {
    user: process.env.COMPANY_EMAIL_ID,
    pass: process.env.COMPANY_EMAIL_PASSWORD
  }
});

Related

Strapi Admin on Heroku not opening even after disabling brotli in /config/middleware.js

I followed the documentation for deploying on Heroku and added a middleware, but appname.herokuapp.com still takes forever to start.
I followed issue 8375 and the docs.
My middleware.js:
module.exports = {
  settings: {
    gzip: {
      enabled: true,
      options: {
        br: false
      }
    }
  },
};
/config/env/production/database.js
const parse = require('pg-connection-string').parse;
const config = parse(process.env.DATABASE_URL);

module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'postgres',
        host: config.host,
        port: config.port,
        database: config.database,
        username: config.user,
        password: config.password,
        ssl: {
          rejectUnauthorized: false,
        },
      },
      options: {
        ssl: true,
      },
    },
  },
});
/config/env/production/server.js
module.exports = ({ env }) => ({
  url: env('HEROKU_URL'),
});
The /admin has not opened even after 10 minutes on Heroku.
Logs from heroku logs --tail -a appname:
2021-04-27T08:01:39.461994+00:00 heroku[router]: at=info method=GET path="/admin/main.88d9c53d.chunk.js" host=appname.herokuapp.com request_id=d63f32b0-aed3-4d40-b0aa-36ed1e1c734c fwd="62.8.85.117" dyno=web.1 connect=0ms service=606ms status=200 bytes=2563045 protocol=https
Could not find a way around this, so I did a redeploy following Alex.
I left /config/middleware.js as is:
module.exports = {
  settings: {
    gzip: {
      enabled: true,
      options: {
        br: false
      }
    }
  },
};

How to manage a TypeORM connection to the Aurora Serverless Data API inside Lambda using the Serverless Framework

I'm using:
Aurora Serverless Data API (Postgres)
TypeORM with typeorm-aurora-data-api-driver
AWS Lambda with the Serverless Framework (TypeScript, Webpack)
I'm connecting to the DB as described on GitHub:
const connection = await createConnection({
  type: 'aurora-data-api-pg',
  database: 'test-db',
  secretArn: 'arn:aws:secretsmanager:eu-west-1:537011205135:secret:xxxxxx/xxxxxx/xxxxxx',
  resourceArn: 'arn:aws:rds:eu-west-1:xxxxx:xxxxxx:xxxxxx',
  region: 'eu-west-1'
})
And this is how I use it inside my Lambda function:
export const testConfiguration: APIGatewayProxyHandler = async (event, _context) => {
  let response;
  try {
    const connectionOptions: ConnectionOptions = await getConnectionOptions();
    const connection = await createConnection({
      ...connectionOptions,
      entities,
    });
    const userRepository = connection.getRepository(User);
    const users = await userRepository.find();
    response = {
      statusCode: 200,
      body: JSON.stringify({ users }),
    };
  } catch (e) {
    response = {
      statusCode: 500,
      body: JSON.stringify({ error: 'server side error' }),
    };
  }
  return response;
};
The first time I execute it, it works fine.
But on the second and subsequent invocations I get an error:
AlreadyHasActiveConnectionError: Cannot create a new connection named "default", because connection with such name already exist and it now has an active connection session.
So, what is the proper way to manage this connection?
Should it be somehow reused?
I've found some resolutions for plain RDS, but the whole point of the Aurora Serverless Data API is that you don't have to manage the connection.
When you try to establish a connection, you need to check whether there is already one you can reuse. This is my Database class used to handle connections:
export default class Database {
  private connectionManager: ConnectionManager;

  constructor() {
    this.connectionManager = getConnectionManager();
  }

  async getConnection(): Promise<Connection> {
    const CONNECTION_NAME = 'default';
    let connection: Connection;
    if (this.connectionManager.has(CONNECTION_NAME)) {
      logMessage(`Database.getConnection()-using existing connection::: ${CONNECTION_NAME}`);
      connection = await this.connectionManager.get(CONNECTION_NAME);
      if (!connection.isConnected) {
        connection = await connection.connect();
      }
    } else {
      logMessage('Database.getConnection()-creating connection ...');
      logMessage(`DB host::: ${process.env.DB_HOST}`);
      const connectionOptions: ConnectionOptions = {
        name: CONNECTION_NAME,
        type: 'postgres',
        port: 5432,
        logger: 'advanced-console',
        logging: ['error'],
        host: process.env.DB_HOST,
        username: process.env.DB_USERNAME,
        database: process.env.DB_DATABASE,
        password: process.env.DB_PASSWORD,
        namingStrategy: new SnakeNamingStrategy(),
        entities: Object.keys(entities).map((module) => entities[module]),
      };
      connection = await createConnection(connectionOptions);
    }
    return connection;
  }
}
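Stripped of the TypeORM specifics, the class above is just a named-connection cache that survives warm Lambda invocations. A plain-JS sketch of the pattern (createRealConnection is a hypothetical stand-in for TypeORM's createConnection, not its real API):

```javascript
// Plain-JS sketch of the "reuse if it exists" pattern from the Database class above.
// `createRealConnection` stands in for TypeORM's createConnection (hypothetical).
const connections = new Map(); // module scope: survives warm Lambda invocations

let created = 0; // counts real connection attempts, for demonstration
function createRealConnection(name) {
  created += 1;
  return Promise.resolve({ name, isConnected: true });
}

function getConnection(name = 'default') {
  // Cache the promise, not the resolved value: concurrent callers that arrive
  // before the first connect finishes still share a single connection attempt.
  if (!connections.has(name)) {
    connections.set(name, createRealConnection(name));
  }
  return connections.get(name);
}

getConnection().then((a) =>
  getConnection().then((b) => {
    console.log(a === b, created); // same connection object, created only once
  })
);
```

Caching the promise rather than the resolved connection also protects against two concurrent cold-start callers each opening their own connection, which the has/create sequence alone does not.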

MQTT and WebSocket at the same time with Aedes

I am trying to make Aedes work as an MQTT broker AND a WebSocket server, according to this doc: https://github.com/moscajs/aedes/blob/master/docs/Examples.md
I'm not sure what I'm supposed to understand from it. Ideally, I want the listener fired regardless of whether the client is a WebSocket client or an MQTT client.
Is it possible to do something like:
server.broadcast('foo/bar', {data:''})
and have all clients, WebSocket and MQTT, receive the message? The doc is not very clear, and I am surprised that websocket-stream is used. It is very low level, right?
Here is some server-side code:
const port = 1883

const aedes = require('aedes')({
  persistence: mongoPersistence({
    url: 'mongodb://127.0.0.1/aedes-test',
    // Optional ttl settings
    ttl: {
      packets: 300, // Number of seconds
      subscriptions: 300
    }
  }),
  authenticate: (client, username, password, callback) => {
  },
  authorizePublish: (client, packet, callback) => {
  },
  authorizeSubscribe: (client, packet, callback) => {
  }
});

//const server = require('net').createServer(aedes.handle);
const httpServer = require('http').createServer()
const ws = require('websocket-stream')
ws.createServer({ server: httpServer }, aedes.handle)

httpServer.listen(port, function () {
  Logger.debug('Aedes listening on port: ' + port)
  aedes.publish({ topic: 'aedes/hello', payload: "I'm broker " + aedes.id })
});
It should just be a case of starting both servers with the same aedes object, as follows. Because both listeners pass their streams to the same broker via aedes.handle, a message published with aedes.publish() is delivered to matching subscribers on either transport.
const port = 1883
const wsPort = 8883

const aedes = require('aedes')({
  persistence: mongoPersistence({
    url: 'mongodb://127.0.0.1/aedes-test',
    // Optional ttl settings
    ttl: {
      packets: 300, // Number of seconds
      subscriptions: 300
    }
  }),
  authenticate: (client, username, password, callback) => {
  },
  authorizePublish: (client, packet, callback) => {
  },
  authorizeSubscribe: (client, packet, callback) => {
  }
});

const server = require('net').createServer(aedes.handle);
const httpServer = require('http').createServer()
const ws = require('websocket-stream')
ws.createServer({ server: httpServer }, aedes.handle)

server.listen(port, function () {
  Logger.debug('Aedes MQTT listening on port: ' + port)
})

httpServer.listen(wsPort, function () {
  Logger.debug('Aedes MQTT-WS listening on port: ' + wsPort)
  aedes.publish({ topic: 'aedes/hello', payload: "I'm broker " + aedes.id })
});

ALB-triggered Lambda function missing permission (CDK)

Currently I've got a problem invoking a Lambda function from an ALB as a trigger. I am getting the error message:
elasticloadbalancing principal does not have permission to
invoke arn:aws:lambda:us-east-2:ACN:function
API: elasticloadbalancingv2:RegisterTargets elasticloadbalancing principal
does not have permission to invoke arn:aws:lambda:us-east-...function:Ddns
from target group arn:aws:elasticloadbalancing:us-east-2:...targetgroup/DdnsL
export class DdnsLamdaApiGateWayCdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = Vpc.fromLookup(this, 'global-vpc', {
      vpcId: 'vpc-a0b8bec8',
    });
    const sg = ec2.SecurityGroup.fromSecurityGroupId(this, 'SG', 'sg-0740900526b94fd8f')

    const fn = new lambda.Function(this, "API", {
      handler: 'index.handler',
      runtime: Runtime.NODEJS_12_X,
      role: Role.fromRoleArn(this, 'lambda-role', 'arn:aws:iam::.....:role/service-role/LamdaR'),
      code: Code.fromInline("test"),
    });
    fn.addToRolePolicy(new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: [
        "lambda:InvokeFunction"
      ],
      resources: [
        "*"
      ]
    }));

    const lb = new elbv2.ApplicationLoadBalancer(this, "LoadBalancer", {
      vpc,
      internetFacing: false,
      securityGroup: sg
    });
    const listener = lb.addListener("Listener", {
      port: 80,
    });
    listener.addTargets('Targets', {
      targets: [new LambdaALBTarget(fn)]
    });
  }
}
class LambdaALBTarget implements elbv2.IApplicationLoadBalancerTarget {
  private fn: lambda.IFunction;

  constructor(fn: lambda.IFunction) {
    this.fn = fn;
  }

  attachToApplicationTargetGroup(
    targetGroup: elbv2.ApplicationTargetGroup
  ): elbv2.LoadBalancerTargetProps {
    return {
      targetType: "lambda" as elbv2.TargetType,
      targetJson: {
        id: this.fn.functionArn
      }
    };
  }
}
I am assuming that I'm missing this particular permission:
LambdaFunctionPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt LambdaTargetFunction.Arn
    Principal: elasticloadbalancing.amazonaws.com
    SourceArn: !Ref TargetGroup
But I cannot figure out how to include this permission in the given source code. Has anyone had the same issue and knows how to solve it?
I've found a workaround where the permission is set automatically when deploying the stack. Instead of creating the LambdaALBTarget class and calling attachToApplicationTargetGroup yourself, just add a new LambdaTarget to the listener; attachToApplicationTargetGroup and attachToNetworkTargetGroup get called automatically when you add the target to a load balancer:
listener.addTargets('Targets', {
  targets: [new LambdaTarget(fn)]
});
...
Here is the section of the generated invoke-function permission (template.json):
"APIInvokeServicePrincipalelasticloadbalancingamazonawscom68C82386": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "Action": "lambda:InvokeFunction",
    "FunctionName": {
      "Fn::GetAtt": [
        "API62EA1CFF",
        "Arn"
      ]
    },
    "Principal": "elasticloadbalancing.amazonaws.com"
  },
  "Metadata": {
    "aws:cdk:path": "DdnsLamdaApiGateWayCdkStack/API/InvokeServicePrincipal(elasticloadbalancing.amazonaws.com)"
  }
}
Here is the finished source code:
export class DdnsLamdaApiGateWayCdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = Vpc.fromLookup(this, 'global-vpc', {
      vpcId: '....',
    });
    const code = fs.readFileSync('./code.js', 'utf8');

    const dnsRegistrationRole = new Role(this, 'DnsRegRole', {
      roleName: 'Lamda-DnsRegRole',
      managedPolicies: [
        ManagedPolicy.fromAwsManagedPolicyName('AmazonVPCFullAccess'),
        ManagedPolicy.fromAwsManagedPolicyName('AmazonRoute53AutoNamingRegistrantAccess'),
        ManagedPolicy.fromAwsManagedPolicyName('AWSLambdaBasicExecutionRole'),
      ],
      inlinePolicies: {
        Route53ListHostedZone: new PolicyDocument({
          statements: [
            new PolicyStatement({
              actions: ['route53:ListResourceRecordSets'],
              resources: ['arn:aws:route53:::hostedzone/*'],
            }),
          ],
        }),
      },
      assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
    });

    const dnsRegistrationLambda = new lambda.Function(this, "API", {
      handler: 'index.handler',
      runtime: Runtime.NODEJS_12_X,
      role: dnsRegistrationRole,
      code: Code.fromInline(code),
      memorySize: 256,
    });

    const loadBalancerSecurityGroup = new ec2.SecurityGroup(this, "loadBalancer-security-group", {
      vpc: vpc,
      allowAllOutbound: true,
      description: 'loadBalancerSecurityGroup'
    });
    loadBalancerSecurityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(80), "HTTP");
    loadBalancerSecurityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(443), "HTTPS")

    const lb = new elbv2.ApplicationLoadBalancer(this, "LoadBalancer", {
      vpc,
      internetFacing: true,
      securityGroup: loadBalancerSecurityGroup
    });
    const listener = lb.addListener("Listener", {
      port: 80,
    });
    listener.addTargets('Targets', {
      targets: [new LambdaTarget(dnsRegistrationLambda)]
    });
  }
}
Basically, I've built a serverless dynamic DNS system with ALB and Lambda.

Cognito trigger Lambda cannot connect to AppSync, responds {"size":0,"timeout":0}

I have a lambda whose purpose is to ingest Cognito Post Confirmation events and use some of that event data to invoke a createUser mutation via AppSync. The lambda is receiving the following response from AppSync: {"size":0,"timeout":0}. I cannot find docs on what this means and the mutation does not occur; additionally, the same mutation and same credentials work fine from the AppSync console. Have I missed something obvious?
Lambda
const URL = require("url");
const fetch = require("node-fetch");
const { CognitoIdentityServiceProvider } = require("aws-sdk");

const cognitoIdentityServiceProvider = new CognitoIdentityServiceProvider({
  apiVersion: "2016-04-18"
});

const initiateAuth = ({ clientId, userPoolId, username, password }) =>
  cognitoIdentityServiceProvider
    .adminInitiateAuth({
      AuthFlow: "ADMIN_NO_SRP_AUTH",
      AuthParameters: {
        USERNAME: username,
        PASSWORD: password
      },
      ClientId: clientId,
      UserPoolId: userPoolId
    })
    .promise();

exports.handler = async (event, context, callback) => {
  console.log(event);
  const clientId = process.env.COGNITO_CLIENT_ID;
  const userPoolId = process.env.COGNITO_USER_POOL_ID;
  const endPoint = process.env.APPSYNC_GRAPHQL_ENDPOINT;
  const username = process.env.COGNITO_USERNAME;
  const password = process.env.COGNITO_PASSWORD;

  const { AuthenticationResult } = await initiateAuth({
    clientId,
    userPoolId,
    username,
    password
  });
  const accessToken = AuthenticationResult && AuthenticationResult.AccessToken;
  console.log(`Access Token: ${accessToken}`);

  const postBody = {
    query: `mutation CreateUser($id: ID!, $username: String!) {
      createUser(input: {id: $id, username: $username}) {
        id,
        username
      }
    }`,
    operationName: "CreateUser",
    variables: {
      id: event.request.userAttributes.sub,
      username: event.username
    }
  };

  const uri = URL.parse(endPoint); // URL.parse is synchronous; no await needed
  console.log(uri);
  const options = {
    method: "POST",
    body: JSON.stringify(postBody),
    headers: {
      host: uri.host,
      "Content-Type": "application/json",
      Authorization: accessToken
    }
  };
  const response = await fetch(uri.href, options);
  console.log(`AppSync mutation response: ${JSON.stringify(response)}`);
  const { data } = await response.json();
  const result = data && data.createUser;
  callback(null, result);
};
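One plausible reading of the {"size":0,"timeout":0} log line, offered as an assumption rather than something the question confirms: a node-fetch Response keeps its body behind non-enumerable internals, and its only plain enumerable fields are size and timeout, so JSON.stringify(response) prints those two no matter what the server returned. The GraphQL payload only becomes visible after await response.json(). A stand-in class demonstrates the effect (FakeResponse is hypothetical, not the node-fetch implementation):

```javascript
// Stand-in for a node-fetch Response: the body lives behind a non-enumerable
// accessor, while `size` and `timeout` are plain enumerable fields — so
// JSON.stringify shows only those, regardless of the actual server reply.
class FakeResponse {
  constructor(payload) {
    this.size = 0;
    this.timeout = 0;
    Object.defineProperty(this, '_payload', { value: payload, enumerable: false });
  }
  async json() {
    return this._payload; // the real body only comes out through json()
  }
}

const response = new FakeResponse({ data: { createUser: { id: '1', username: 'alice' } } });
console.log(JSON.stringify(response)); // {"size":0,"timeout":0}

response.json().then((body) => {
  console.log(JSON.stringify(body)); // the actual GraphQL reply
});
```

If this is what is happening, the log line is not evidence that the mutation failed; logging JSON.stringify(await response.json()) instead would show the real AppSync response (data or errors).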
SAM Template
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Stack for using Cognito events to create Users database. stack-ingest-cognito-events
Resources:
IngestCognitoEventsLambdaRole:
Type: AWS::IAM::Role
Properties:
RoleName: role-ingest-cognito-events-lambda
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
Policies:
- PolicyName: policy-ingest-cognito-events-lambda
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- logs:CreateLogGroup
- logs:CreateLogStream
- logs:PutLogEvents
Resource: arn:aws:logs:*:*:*
- Effect: Allow
Action:
- cognito-idp:Admin*
Resource:
Fn::Sub: arn:aws:cognito-idp:${AWS::Region}:${AWS::AccountId}:userpool/us-east-1_mypool
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
IngestCognitoEventsLambda:
Type: AWS::Serverless::Function
Properties:
FunctionName: lambda-ingest-cognito-events
Description: Ingests Cognito events, propogates changes.
AutoPublishAlias: live
Runtime: nodejs8.10
Handler: index.handler
CodeUri: s3://mybucket
MemorySize: 128
Timeout: 10
Environment:
Variables:
COGNITO_CLIENT_ID: myclientid
COGNITO_USER_POOL_ID: us-east-1_mypool
APPSYNC_GRAPHQL_ENDPOINT: https://myhash.appsync-api.us-east-1.amazonaws.com/graphql
COGNITO_USERNAME: serviceAcctUsername
COGNITO_PASSWORD: serviceAcctPassword
Role:
Fn::GetAtt:
- IngestCognitoEventsLambdaRole
- Arn