Unable to connect to Heroku Redis from Node Server - heroku

It works well when connecting to Redis locally and through the official Redis Docker image, but when I switch the ENV variables to the Heroku Redis values, it is unable to connect.
I have tried the full URL option as well, but that doesn't work for any Redis connection where I need to pass an options object as the second parameter to new Redis(); the URL option only works locally and with the official Redis Docker image when I pass no options.
Passing only the Heroku Redis URI to new Redis() with no options looks like it works, but then I get "Redis Connection Failure" after 10 seconds.
Does Heroku Redis need some sort of extra preparation step?
import Redis, { RedisOptions } from 'ioredis';
import logger from '../logger';

const REDIS_HOST = process.env.REDIS_HOST || '127.0.0.1';
const REDIS_PORT = Number(process.env.REDIS_PORT) || 6379;
const REDIS_PASSWORD = process.env.REDIS_PASSWORD;
const REDIS_DB = Number(process.env.REDIS_DB) || 0;

const redisConfig: RedisOptions = {
  host: REDIS_HOST,
  port: Number(REDIS_PORT),
  password: REDIS_PASSWORD,
  db: Number(REDIS_DB),
  retryStrategy: function (times) {
    if (times % 4 == 0) {
      logger.error('Redis reconnect exhausted after 4 retries');
      return null;
    }
    return 200;
  },
};

const redis = new Redis(redisConfig);

redis.on('error', function () {
  logger.error('Redis Connection Failure');
});

export default redis;

I'm not sure where you got the idea to use environment variables called REDIS_HOST, REDIS_PORT, REDIS_PASSWORD, and REDIS_DB. Heroku Data for Redis provides a single environment variable that captures all of this:
After Heroku Data for Redis has been created, the new release is created and the application restarts. A REDIS_URL config var is available in the app configuration. It contains the URL you can use to access the newly provisioned Heroku Data for Redis instance.
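You can confirm that the variable is present on your app with the Heroku CLI:
heroku config:get REDIS_URL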
Here is their example of how to connect from Node.js:
const redis = require("redis");

const client = redis.createClient({
  url: process.env.REDIS_URL,
  socket: {
    tls: true,
    rejectUnauthorized: false
  }
});
So, change your configuration accordingly. Note that ioredis differs from node-redis here: the connection URL is passed as the first argument to new Redis() rather than as an option, and the TLS settings go under a tls property:
const REDIS_URL = process.env.REDIS_URL;

const redisConfig: RedisOptions = {
  tls: {                        // <--
    rejectUnauthorized: false,  // <--
  },                            // <--
  retryStrategy: function (times) {
    if (times % 4 == 0) {
      logger.error('Redis reconnect exhausted after 4 retries');
      return null;
    }
    return 200;
  },
};

const redis = new Redis(REDIS_URL, redisConfig); // <-- URL as the first argument
You are already using an environment variable locally to set your Redis password. Replace that with an appropriate REDIS_URL that contains all of your defaults, e.g. something like this:
REDIS_URL=redis://user:password@host:port/database

Related

TypeError[ERR_INVALID_ARG_TYPE]: The "key" argument must be of type string or an instance of Buffer, TypedArray, DataView, or KeyObject. Received null

I'm trying to get my Express app to connect to my local DB, but I am getting an error when running it:
TypeError[ERR_INVALID_ARG_TYPE]: The "key" argument must be of type string or an instance of Buffer, TypedArray, DataView, or KeyObject. Received null
My DB configuration settings are used like so:
const express = require('express');
const cors = require('cors');
const { Client } = require('pg');

const herokuSSLSetting = { rejectUnauthorized: false };
// If the LOCAL env variable is declared, turn SSL settings off
const sslSetting = process.env.LOCAL ? false : herokuSSLSetting;

const dbConfig = {
  connectionString: process.env.DATABASE_URL,
  ssl: sslSetting,
};

const app = express();
app.use(express.json()); // add body parser to each following route handler
app.use(cors()); // add CORS support to each following route handler

const client = new Client(dbConfig);
Now when I use my heroku DATABASE_URL, that works fine. I believe the issue is coming from when I declare LOCAL=true in my env file. If I remove that line when connecting to my local db, the error then becomes:
UnhandledPromiseRejectionWarning: Error: The server does not support SSL connections
This configuration has worked in my virtual workspace (for both the local and Heroku DB), so I think it may be a Windows issue...
Other details:
Running on Windows
Using Postgres for my DB
Can connect to my local DB via Beekeeper by providing the user, password and default DB

Heroku deploy chat bot

My code works both locally and as a Docker image, but when I deploy it to Heroku it seems to work for the first minute and then the app crashes (the Heroku logs are from just after the crash). What could the problem be? Any thoughts? Thank you.
Here is my code:
const { default: axios } = require('axios')
const telegramBot = require('node-telegram-bot-api')
const express = require('express')
const dotenv = require('dotenv').config()

const links = `
GitLab
Linkdin
Personal
`

const bot = new telegramBot(process.env.TOKEN, { polling: true })
const aboutText = 'Hello, I am learning NODE JS!'
const app = express()

bot.on('message', (message) => {
  const id = message.chat.id
  if (message.text === '/start' || message.text === '/help') {
    bot.sendMessage(message.chat.id, 'avalable commands', {
      reply_markup: {
        keyboard: [['/about', '/links']],
        resize_keyboard: true,
        one_time_keyboard: true,
        force_reply: true,
      },
    })
  } else if (message.text === '/about') {
    bot.sendMessage(id, aboutText)
  } else if (message.text === '/links') {
    bot.sendMessage(id, links, { parse_mode: 'HTML' })
  } else {
    bot.sendMessage(
      message.chat.id,
      'no such command! there are avalable commands',
      {
        reply_markup: {
          keyboard: [['/about', '/links']],
          resize_keyboard: true,
          one_time_keyboard: true,
          force_reply: true,
        },
      }
    )
  }
})
dockerfile:
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY ./app.js .
ENV port=8080
ENV TOKEN=21*****:AAEF******************fKiQ
EXPOSE 8080
CMD [ "node", "app.js" ]
Your app fails to bind to the port assigned by Heroku: you cannot choose the port yourself (i.e. 8080); you must instead bind to the port defined by the $PORT environment variable.
See Why is my Node.js app crashing with an R10 error? to understand the details.
In your specific case, since you run the bot in polling mode (i.e. it pulls updates from Telegram), you can use a worker dyno instead of a web dyno: that way you don't need to bind to a port at all, because your application does not process incoming HTTP requests.
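If you do want to keep a web dyno, a minimal sketch of the missing piece (your snippet never calls app.listen) would be to bind to the Heroku-assigned port:
// Heroku injects the port at runtime via the PORT env variable;
// fall back to 8080 for local/Docker runs.
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => console.log(`Listening on port ${PORT}`));
Otherwise, with a worker process declared for the bot, no port binding is needed; you can then scale it with something like heroku ps:scale web=0 worker=1.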

can not connect to cockroachdb invalid cluster name

Why can't I connect to CockroachDB via PowerShell?
I use this command:
cockroach sql --url postgres://username@cloud-host:26257/defaultdb?sslmode=require&options=--cluster=clustername;
I get the following error: Invalid cluster name (08004), but the cluster name is the right one.
EDIT:
Node.js:
//For secure connection:
// const fs = require('fs');
const { Pool } = require("pg");

// Configure the database connection.
const config = {
  user: "xxxxx",
  password: "xxxx",
  cluster_name: "xxxx",
  host: "xxxx",
  database: "wxxx",
  port: 26257,
  ssl: {
    rejectUnauthorized: false,
  },
  //For secure connection:
  /*ssl: {
    ca: fs.readFileSync('/certs/ca.crt')
      .toString()
  }*/
};

// Create a connection pool
const pool = new Pool(config);

router.get('/', async (req, res) => {
  const client = await pool.connect();
  const d = await client.query('CREATE TABLE test (id INT, name VARCHAR, desc VARCHAR);');
  console.log(d);
  return res.json({
    message: 'BOSY'
  });
});
I get this error:
CodeParamsRoutingFailed: rejected by BackendConfigFromParams: Invalid cluster name
Try specifying the cluster name before the dbname, like this:
cockroach sql --url postgres://username@cloud-host:26257/clustername.defaultdb?sslmode=require
I wonder if there's an issue with special characters in the shell. Having never used PowerShell this is only a guess, but does it work if you put the URL string in quotes?
cockroach sql --url "postgres://username#cloud-host:26257/defaultdb?sslmode=require&options=--cluster=clustername";

Heroku postgres node connection timeout

I'm trying to connect to a Postgres database from my Heroku Node app. It works when running locally, both through node and by running the heroku local web command, but when running on Heroku it times out while waiting for pool.connect.
I'm running the following code snippet through the Heroku console (I've also tried using this code in my app directly, but this is more efficient than redeploying each time):
node -e "
const { Pool } = require('pg');
const pool = new Pool({
  connectionTimeoutMillis: 15000,
  connectionString: process.env.DATABASE_URL + '?sslmode=require',
  ssl: {
    rejectUnauthorized: true
  }
});
console.log('pool created');
(async () => {
  try {
    console.log('connecting');
    const client = await pool.connect(); // this never resolves
    console.log('querying');
    const { rows } = await client.query('SELECT * FROM test_table LIMIT 1;');
    console.log('query success', rows);
    client.release()
  } catch (error) {
    console.log('query error', error);
  }
})()
"
Things I've tried so far:
Using the pg Client instead of Pool
Using ssl: true instead of ssl: { rejectUnauthorized: true }
Using client.query without using pool.connect
Increased and omitted connectionTimeoutMillis (it resolves quickly when running locally since I'm querying a database that has just one row)
I've also tried using callbacks and promises instead of async / await
I've tried setting the connectionString both with the ?sslmode=require parameter and without it
I have tried using pg versions ^7.4.1 and ^7.18.2 so far
My assumption is that there is something I'm missing with either the Heroku setup or SSL. Any help would be greatly appreciated, thanks!

Winston CloudWatch Transport not Creating Logs When Running on Lambda

I have an Express.js app that is set up to run from within an AWS Lambda function. When I deploy this app to the Lambda, the console logs show up in the Lambda's CloudWatch log group (i.e. /aws/lambda/lambda-name), but it doesn't create a new CloudWatch log group as specified in the configuration.
If I run the Lambda function locally and generate logs, it will create a CloudWatch log group for the local environment.
The Lambda functions are connecting to an RDS instance, so they are contained within a VPC.
The Lambda has been assigned the CloudWatchFullAccess policy, so it should not be a permissions error.
I've looked at the Lambda logs and I'm not seeing any errors coming through related to this.
const env = process.env.NODE_ENV || 'local'
const config = require('../../config/env.json')[env]
const winston = require('winston')
const WinstonCloudwatch = require('winston-cloudwatch')
const crypto = require('crypto')

let startTime = new Date().toISOString()

const logger = winston.createLogger({
  exitOnError: false,
  level: 'info',
  transports: [
    new winston.transports.Console({
      json: true,
      colorize: true,
      level: 'info'
    }),
    new WinstonCloudwatch({
      awsAccessKeyId: config.aws.accessKeyId,
      awsSecretKey: config.aws.secretAccessKey,
      logGroupName: 'my-api-' + env,
      logStreamName: function () {
        // Spread log streams across dates as the server stays up
        let date = new Date().toISOString().split('T')[0]
        return 'my-requests-' + date + '-' +
          crypto.createHash('md5')
            .update(startTime)
            .digest('hex')
      },
      awsRegion: 'us-east-1',
      jsonMessage: true
    })
  ]
})

const winstonStream = {
  write: (message, encoding) => {
    // use the 'info' log level so the output will be picked up by both transports
    logger.info(message)
  }
}

module.exports.logger = logger
module.exports.winstonStream = winstonStream
Then, within my Express app:
const morgan = require('morgan')
const { winstonStream } = require('./providers/loggers')

app.use(morgan('combined', { stream: winstonStream }))
Confirming that the problem was related to the Lambda function being in a VPC and not granted public access to the internet through subnets, route tables, NAT and internet gateways, as described in this post: https://gist.github.com/reggi/dc5f2620b7b4f515e68e46255ac042a7
I believe that to access external internet services you'd need what you described.
But to access an AWS service outside the VPC you can create a VPC endpoint.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html
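For example (a sketch; the VPC, subnet and security-group IDs are placeholders for your own values), an interface endpoint for CloudWatch Logs in the region used above can be created with the AWS CLI:
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.logs \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0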
