I am trying to use AWS RDS Proxy in my Lambda to proxy our database (Aurora MySQL). I wasn't able to find any specific instructions for Sequelize, but it seemed like all I needed for RDS Proxy to work was to create a signer, use it to get a token, and then pass the token as my password to the Sequelize constructor:
const signer = new RDS.Signer({
  region: process.env.REGION,
  hostname: process.env.DB_PROXY_ENDPOINT,
  port: 3306,
  username: process.env.DB_PROXY_USERNAME,
});

const token = signer.getAuthToken({
  username: process.env.DB_PROXY_USERNAME,
});

const connection = new Sequelize(process.env.DB_DATABASE, process.env.DB_PROXY_USERNAME, token, {
  dialect: 'mysql',
  host: process.env.DB_HOSTNAME,
  port: process.env.DB_PORT,
  pool: {
    acquire: 15000,
    idle: 9000,
    max: 10
  },
});
The RDS Proxy is attached to my Lambda and I'm able to log the token, but as soon as I make a request against the database, my connection times out. Does anyone know if there is something I could be missing in this setup?
Here's how I connected from AWS Lambda to RDS Proxy using MySQL (in TypeScript):
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { Signer } from "@aws-sdk/rds-signer";
import { Sequelize } from "sequelize";

// other code

const signer = new Signer({
  hostname: host,
  port: port,
  region: region,
  username: username,
});
const sequelize = new Sequelize({
  username,
  host,
  port,
  dialect: "mysql",
  dialectOptions: {
    ssl: "Amazon RDS",
    authPlugins: {
      mysql_clear_password: () => () => signer.getAuthToken(),
    },
  },
});
// some more code
Your connection timing out may be due to an authentication error, perhaps in the way you're passing in the token. I would double-check that your RDS Proxy IAM role has the secretsmanager:GetSecretValue permission for the Secrets Manager resource holding the DB user credentials, as well as kms:Decrypt on the key used to encrypt the secret, and that your Lambda (or whatever context your code is running in) has the rds-db:connect permission.
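For illustration, a minimal sketch of a policy granting the Lambda's role rds-db:connect might look like the following; the region, account ID, proxy resource ID, and user name are placeholders to replace with your own:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:prx-EXAMPLEID/mydbuser"
    }
  ]
}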
NOTE:
This doesn't include the connection pooling options; I'm still trying to figure out how to optimize that. Check out the Using sequelize in AWS Lambda docs for a place to start.
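For reference, the pattern those docs describe is roughly the following sketch: keep the Sequelize instance outside the handler so warm invocations reuse it, and keep the pool small because a Lambda container only serves one request at a time.

const { Sequelize } = require("sequelize");

let sequelize = null;

async function loadSequelize() {
  const sequelize = new Sequelize(/* connection config as above, plus: */ {
    dialect: "mysql",
    pool: {
      max: 2,        // a container never runs queries concurrently
      min: 0,
      idle: 0,       // release idle connections immediately
      acquire: 3000,
      evict: 60000   // should roughly match the Lambda timeout
    },
  });
  await sequelize.authenticate();
  return sequelize;
}

module.exports.handler = async function (event) {
  if (!sequelize) {
    sequelize = await loadSequelize();
  } else {
    // restart the pool that was paused while the container was frozen
    sequelize.connectionManager.initPools();
    if (sequelize.connectionManager.hasOwnProperty("getConnection")) {
      delete sequelize.connectionManager.getConnection;
    }
  }
  // ... run queries with sequelize ...
};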
Why can't I connect to CockroachDB via PowerShell?
I use this command:
cockroach sql --url postgres://username@cloud-host:26257/defaultdb?sslmode=require&options=--cluster=clustername;
I get the following error: Invalid clustername 08004
but the cluster name is the right one.
Node.js:
// For a secure connection:
// const fs = require('fs');
const { Pool } = require("pg");

// Configure the database connection.
const config = {
  user: "xxxxx",
  password: "xxxx",
  cluster_name: "xxxx",
  host: "xxxx",
  database: "wxxx",
  port: 26257,
  ssl: {
    rejectUnauthorized: false,
  },
  // For a secure connection:
  /* ssl: {
    ca: fs.readFileSync('/certs/ca.crt').toString()
  } */
};

// Create a connection pool
const pool = new Pool(config);
router.get('/', async (req, res) => {
  const client = await pool.connect();
  const d = await client.query('CREATE TABLE test (id INT, name VARCHAR, desc VARCHAR);');
  console.log(d);
  return res.json({
    message: 'BOSY'
  });
});
I get this error:
CodeParamsRoutingFailed: rejected by BackendConfigFromParams: Invalid cluster name
Try specifying the cluster name before the database name, like this:
cockroach sql --url postgres://username@cloud-host:26257/clustername.defaultdb?sslmode=require
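Presumably the same fix applies to the Node.js config above: cluster_name is not a pg option, so the cluster name would be prefixed to the database name instead. A sketch with placeholder values:

const { Pool } = require("pg");

// Prefix the cluster name to the database name;
// `cluster_name` is not a recognized pg option.
const pool = new Pool({
  user: "username",
  password: "password",
  host: "cloud-host",
  port: 26257,
  database: "clustername.defaultdb",
  ssl: {
    rejectUnauthorized: false,
  },
});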
I wonder if there's an issue with special characters in the shell. Having never used PowerShell this is only a guess, but does it work if you put the URL string in quotes?
cockroach sql --url "postgres://username#cloud-host:26257/defaultdb?sslmode=require&options=--cluster=clustername";
I am using AWS services for a piece of functionality.
Summary: I have a Lambda that accesses a Postgres DB in RDS. Instead of connecting directly to the DB, the proxy endpoint is accessed, as is architecturally advised. I have no problem generating the IAM token, and it is used as the password when creating the Sequelize connection.
Problem: Initially I was not using RDS Proxy. In that scenario, I made use of the Lambda execution context to reuse connections, and I didn't close connections in the Lambda (this worked fine; I was connecting directly to the database). But with the proxy in place, without closing connections, there is a big spike in the number of connections the proxy makes to the database, and it is testing the limits under load: with 10 req/sec I'm seeing 90 connections.
On closing the connections in the Lambda, the count drops substantially, to fewer than 20.
But I have nested database queries during a single Lambda execution, and it would be difficult to rewrite those.
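For context, the execution-context reuse described above typically looks something like the sketch below; the config values and the getIamToken helper are placeholders, not the asker's actual code.

const { Sequelize } = require('sequelize');

// Declared outside the handler so warm invocations of the same
// Lambda container reuse the connection instead of opening a new one.
let sequelize = null;

exports.handler = async (event) => {
  if (!sequelize) {
    // getIamToken is a hypothetical helper that signs a fresh RDS IAM token
    sequelize = new Sequelize(process.env.DB_NAME, process.env.DB_USER, await getIamToken(), {
      host: process.env.PROXY_ENDPOINT,
      dialect: 'postgres',
    });
  }
  // ... run (possibly nested) queries without closing the connection ...
};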
Below is the code that creates the Sequelize connection:
const { Sequelize } = require('sequelize');

let proxyToken = '***latest IAM token with 15 min validity***';
let additionalConnectionDetails = {
  host: process.env.PROXY_ENDPOINT,
  schema: 'schemaname',
  searchPath: 'searchpath',
  dialect: 'postgres',
  dialectOptions: {
    prependSearchPath: true,
    ssl: {
      require: true,
      rejectUnauthorized: false
    }
  },
  // pool: {
  //   max: 2,
  //   min: 1,
  //   acquire: 3000,
  //   idle: 0,
  //   evict: 120000
  // },
  // maxConcurrentQueries: 100
};

sequelize_connection = new Sequelize(dbCreds.app, dbCreds.userName, proxyToken, additionalConnectionDetails);
console.log('sequelize', sequelize_connection);
return sequelize_connection;
I tried using the connection pool, but it didn't make much of a difference in Lambda.
How can I reduce the number of connections established without closing connections? Any suggestions are appreciated. Thanks in advance.
Without changing anything in my settings, I can no longer connect to my PostgreSQL database hosted on Heroku. I can't access it from my application, and am given the error
OperationalError: (psycopg2.OperationalError) FATAL: password authentication failed for user "<heroku user>" FATAL: no pg_hba.conf entry for host "<address>", user "<user>", database "<database>", SSL off
It says SSL off, but SSL is enabled, as I have confirmed in PgAdmin. When attempting to access the database through PgAdmin 4, I get the same problem: a fatal password authentication error for the user.
I have checked the credentials for the database on Heroku, but nothing has changed. Am I doing something wrong? Do I have to change something in pg_hba.conf?
Edit: I can see in the notifications on Heroku that the database was updated right around the time the database stopped working for me. I am not sure if I triggered the update, however.
Here's the notification center (screenshot not included):
In general, it isn't a good idea to hard-code credentials when connecting to Heroku Postgres:
Do not copy and paste database credentials to a separate environment or into your application’s code. The database URL is managed by Heroku and will change under some circumstances such as:
User-initiated database credential rotations using heroku pg:credentials:rotate.
Catastrophic hardware failures that require Heroku Postgres staff to recover your database on new hardware.
Security issues or threats that require Heroku Postgres staff to rotate database credentials.
Automated failover events on HA-enabled plans.
It is best practice to always fetch the database URL config var from the corresponding Heroku app when your application starts. For example, you may follow 12Factor application configuration principles by using the Heroku CLI and invoke your process like so:
DATABASE_URL=$(heroku config:get DATABASE_URL -a your-app) your_process
This way, you ensure your process or application always has correct database credentials.
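In Node, that might look like the following sketch (assuming Sequelize and a Postgres database; adjust the dialect to your setup):

const { Sequelize } = require('sequelize');

// Always read the Heroku-managed URL at startup; never hard-code it.
if (!process.env.DATABASE_URL) {
  console.error('Fatal error: DATABASE_URL not set');
  process.exit(1);
}

const sequelize = new Sequelize(process.env.DATABASE_URL, {
  dialect: 'postgres',
  dialectOptions: {
    ssl: { require: true, rejectUnauthorized: false },
  },
});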
Based on the messages in your screenshot, I suspect you were affected by the second bullet. Whatever the cause, one of those messages explicitly says
Once it has completed, your database URL will have changed
I had the same issue. Thanks to @Chris, I solved it this way.
This file is config/database.js (Strapi 3.1.3):
var parseDbUrl = require("parse-database-url");

if (process.env.NODE_ENV === 'production') {
  module.exports = ({ env }) => {
    var dbConfig = parseDbUrl(env('DATABASE_URL', ''));
    return {
      defaultConnection: 'default',
      connections: {
        default: {
          connector: 'bookshelf',
          settings: {
            client: dbConfig.driver,
            host: dbConfig.host,
            port: dbConfig.port,
            database: dbConfig.database,
            username: dbConfig.user,
            password: dbConfig.password,
          },
          options: {
            ssl: false,
          },
        },
      },
    };
  };
} else {
  // to use the default local provider you can return an empty configuration
  module.exports = ({ env }) => ({
    defaultConnection: 'default',
    connections: {
      default: {
        connector: 'bookshelf',
        settings: {
          client: 'sqlite',
          filename: env('DATABASE_FILENAME', '.tmp/data.db'),
        },
        options: {
          useNullAsDefault: true,
        },
      },
    },
  });
}
I have an Express app that is set up to run inside an AWS Lambda function. When I deploy this app to the Lambda, the console logs show up in the Lambda's own CloudWatch log group (i.e. /aws/lambda/lambda-name), but it doesn't create the new CloudWatch log group specified in the configuration.
If I run the lambda function locally and generate logs it will create a CloudWatch Log Group for the local environment.
The Lambda Functions are connecting to an RDS instance so they are contained within a VPC.
The Lambda has been assigned the CloudWatchFullAccess policy so it should not be a permissions error.
I've looked at the Lambda logs and I'm not seeing any errors coming through related to this.
const env = process.env.NODE_ENV || 'local'
const config = require('../../config/env.json')[env]
const winston = require('winston')
const WinstonCloudwatch = require('winston-cloudwatch')
const crypto = require('crypto')

let startTime = new Date().toISOString()

const logger = winston.createLogger({
  exitOnError: false,
  level: 'info',
  transports: [
    new winston.transports.Console({
      json: true,
      colorize: true,
      level: 'info'
    }),
    new WinstonCloudwatch({
      awsAccessKeyId: config.aws.accessKeyId,
      awsSecretKey: config.aws.secretAccessKey,
      logGroupName: 'my-api-' + env,
      logStreamName: function () {
        // Spread log streams across dates as the server stays up
        let date = new Date().toISOString().split('T')[0]
        return 'my-requests-' + date + '-' +
          crypto.createHash('md5')
            .update(startTime)
            .digest('hex')
      },
      awsRegion: 'us-east-1',
      jsonMessage: true
    })
  ]
})

const winstonStream = {
  write: (message, encoding) => {
    // use the 'info' log level so the output will be picked up by both transports
    logger.info(message)
  }
}

module.exports.logger = logger
module.exports.winstonStream = winstonStream
Then, within my Express app:
const morgan = require('morgan')
const { winstonStream } = require('./providers/loggers')

app.use(morgan('combined', { stream: winstonStream }))
Confirming that the problem was related to the Lambda function being in a VPC without public internet access through subnets, route tables, NAT, and internet gateways, as described in this post: https://gist.github.com/reggi/dc5f2620b7b4f515e68e46255ac042a7
I believe that to access external internet services you'd need what you described. But to access an AWS service from within the VPC, you can create a VPC endpoint:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html
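For example, an interface endpoint for CloudWatch Logs can be created roughly like this (a sketch; the IDs and region are placeholders for your own):

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.logs \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0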
I'm trying to deploy my Express server on Heroku, and it needs to connect to a remote MySQL database.
I used heroku config:add DATABASE_URL=mysql://dbusername:dbpassword@databasehostIP:databaseserverport/databasename with the correct information, but it still tries to connect through the wrong address.
I also used heroku config:add EXTERNAL_DATABASE_URL=mysql://dbusername:dbpassword@databasehostIP:databaseserverport/databasename with the correct information, but it still tries to connect through the wrong address.
In my Heroku app panel, under 'Settings' in the 'Config Vars' section, I see that DATABASE_URL and EXTERNAL_DATABASE_URL appear with the correct information, but in the Heroku log I still see the wrong information.
This is my Sequelize variable on the Express server:
const sequelize = new Sequelize('dbName', 'USER', 'Password', {
  host: "hostAddress",
  dialect: 'mysql'
});
But I see the following in the Heroku log:
2019-02-16T18:31:42.231390+00:00 app[web.1]: Unhandled rejection
SequelizeAccessDeniedError: Access denied for user
'USER'@'ec2-54-162-8-141.compute-1.amazonaws.com' (using
password: YES)
How can I change 'ec2-54-162-8-141.compute-1.amazonaws.com' to the remote MySQL host address?
Try setting your variable with something like this:
const { Sequelize } = require('sequelize');

let sequelize; // declared outside the block so it stays in scope after the check

if (process.env.DATABASE_URL) {
  sequelize = new Sequelize(process.env.DATABASE_URL, {
    define: {
      freezeTableName: true, // don't make plural table names
      underscored: true      // don't use camel case
    },
    dialect: 'mysql',
    dialectOptions: {
      ssl: true
    },
    logging: true,
    protocol: 'mysql',
    quoteIdentifiers: false // set case-insensitive
  });
} else {
  console.log('Fatal error: DATABASE_URL not set');
  process.exit(1);
}