Cannot connect to CockroachDB: invalid cluster name - cockroachdb

Why can't I connect to CockroachDB via PowerShell?
I use this command:
cockroach sql --url postgres://username@cloud-host:26257/defaultdb?sslmode=require&options=--cluster=clustername;
I get the following error: Invalid cluster name (code 08004)
but the cluster name is the right one.
Edit: here is my Node.js code:
// For secure connection:
// const fs = require('fs');
const express = require("express");
const { Pool } = require("pg");
const router = express.Router();
// Configure the database connection.
const config = {
  user: "xxxxx",
  password: "xxxx",
  cluster_name: "xxxx",
  host: "xxxx",
  database: "wxxx",
  port: 26257,
  ssl: {
    rejectUnauthorized: false,
  },
  // For secure connection:
  /* ssl: {
    ca: fs.readFileSync('/certs/ca.crt').toString()
  } */
};
// Create a connection pool
const pool = new Pool(config);
router.get('/', async (req, res) => {
  const client = await pool.connect();
  const d = await client.query('CREATE TABLE test (id INT, name VARCHAR, desc VARCHAR);');
  console.log(d);
  return res.json({
    message: 'BOSY'
  });
});
I get this error:
CodeParamsRoutingFailed: rejected by BackendConfigFromParams: Invalid cluster name

Try specifying the cluster name before the database name, like this:
cockroach sql --url postgres://username@cloud-host:26257/clustername.defaultdb?sslmode=require
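The same idea should apply to the Node.js config in the question: pg has no cluster_name option (unknown keys are ignored), so the cluster name has to travel in the database field instead. A minimal sketch of that change, with placeholder credentials and untested values:

const { Pool } = require("pg");

// Placeholder values; substitute your own host and credentials.
const pool = new Pool({
  user: "username",
  password: "xxxx",
  host: "cloud-host",
  port: 26257,
  // Route by prefixing the database name with the cluster name,
  // instead of the cluster_name key from the question.
  database: "clustername.defaultdb",
  ssl: { rejectUnauthorized: false },
});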

I wonder if there's an issue with special characters in the shell. Having never used PowerShell this is only a guess, but does it work if you put the URL string in quotes?
cockroach sql --url "postgres://username#cloud-host:26257/defaultdb?sslmode=require&options=--cluster=clustername";

Related

Unable to connect to Heroku Redis from Node Server

Connecting to Redis works locally and through the official Redis Docker image, but when I switch the ENV variables to the Heroku Redis values, it is unable to connect.
I have tried the full URL option as well, but that doesn't seem to work for any Redis connection when I also need to pass an options object as the second parameter to new Redis(); the URL option only works without options, and only locally or against the official Redis Docker image.
Passing only the Heroku Redis URI with no options to new Redis() looks like it works, but then I get a Redis Connection Failure after 10 seconds.
Does Heroku-Redis need some sort of extra preparation step?
import Redis, { RedisOptions } from 'ioredis';
import logger from '../logger';

const REDIS_HOST = process.env.REDIS_HOST || '127.0.0.1';
const REDIS_PORT = Number(process.env.REDIS_PORT) || 6379;
const REDIS_PASSWORD = process.env.REDIS_PASSWORD;
const REDIS_DB = Number(process.env.REDIS_DB) || 0;

const redisConfig: RedisOptions = {
  host: REDIS_HOST,
  port: Number(REDIS_PORT),
  password: REDIS_PASSWORD,
  db: Number(REDIS_DB),
  retryStrategy: function (times) {
    if (times % 4 == 0) {
      logger.error('Redis reconnect exhausted after 4 retries');
      return null;
    }
    return 200;
  },
};

const redis = new Redis(redisConfig);

redis.on('error', function () {
  logger.error('Redis Connection Failure');
});

export default redis;
I'm not sure where you got the idea to use environment variables called REDIS_HOST, REDIS_PORT, REDIS_PASSWORD, and REDIS_DB. Heroku Data for Redis provides a single environment variable that captures all of this:
After Heroku Data for Redis has been created, the new release is created and the application restarts. A REDIS_URL config var is available in the app configuration. It contains the URL you can use to access the newly provisioned Heroku Data for Redis instance.
Here is their example of how to connect from Node.js:
const redis = require("redis");

const client = redis.createClient({
  url: process.env.REDIS_URL,
  socket: {
    tls: true,
    rejectUnauthorized: false
  }
});
So, change your configuration object accordingly:
const REDIS_URL = process.env.REDIS_URL;

const redisConfig: RedisOptions = {
  url: REDIS_URL,              // <--
  socket: {                    // <--
    tls: true,                 // <--
    rejectUnauthorized: false  // <--
  },                           // <--
  retryStrategy: function (times) {
    if (times % 4 == 0) {
      logger.error('Redis reconnect exhausted after 4 retries');
      return null;
    }
    return 200;
  },
};
You are already using an environment variable locally to set your Redis password. Replace that with an appropriate REDIS_URL that contains all of your defaults, e.g. something like this:
REDIS_URL=redis://user:password@host:port/database
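If you would rather stay on ioredis (as in the question) than switch to the node-redis client shown above, it also accepts a URL as its first constructor argument, with options second. A minimal sketch, assuming Heroku's self-signed certificates are why verification must be relaxed:

import Redis from 'ioredis';

// REDIS_URL is provided by the Heroku Data for Redis add-on.
const redis = new Redis(process.env.REDIS_URL, {
  // Relax certificate verification for Heroku's self-signed certs.
  tls: { rejectUnauthorized: false },
  // Mirrors the question's retryStrategy: stop on every 4th attempt,
  // otherwise retry after 200 ms.
  retryStrategy: (times) => (times % 4 === 0 ? null : 200),
});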

Sequelize with AWS RDS Proxy

I am trying to use AWS RDS Proxy in my lambda to proxy our database (Aurora MySQL). I wasn't able to find any specific instructions for Sequelize, but it seemed like all I needed for RDS Proxy to work was to create a signer, use it to get my token, and then pass the token as my password to the Sequelize constructor:
const signer = new RDS.Signer({
  region: process.env.REGION,
  hostname: process.env.DB_PROXY_ENDPOINT,
  port: 3306,
  username: process.env.DB_PROXY_USERNAME,
});

const token = signer.getAuthToken({
  username: process.env.DB_PROXY_USERNAME,
});

const connection = new Sequelize(process.env.DB_DATABASE, process.env.DB_PROXY_USERNAME, token, {
  dialect: 'mysql',
  host: process.env.DB_HOSTNAME,
  port: process.env.DB_PORT,
  pool: {
    acquire: 15000,
    idle: 9000,
    max: 10
  },
});
The RDS proxy is attached to my lambda and I'm able to log the token, but as soon as I make a request against the database, my connection times out. Does anyone know if there is something I could be missing in this setup?
Here's how I connected from AWS Lambda to RDS Proxy using MySQL (in TypeScript):
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { Signer } from "@aws-sdk/rds-signer";
import { Sequelize } from "sequelize";

// other code

const signer = new Signer({
  hostname: host,
  port: port,
  region: region,
  username: username,
});

const sequelize = new Sequelize({
  username,
  host,
  port,
  dialect: "mysql",
  dialectOptions: {
    ssl: "Amazon RDS",
    authPlugins: {
      mysql_clear_password: () => () => signer.getAuthToken(),
    },
  },
});

// some more code
// some more code
Your connection timing out may be due to an authentication error, perhaps in the way you're passing in the token. I would double-check that your RDS Proxy IAM role has the secretsmanager:GetSecretValue permission for the Secrets Manager resource holding the db user credentials, as well as kms:Decrypt on the key used to encrypt the secret, and that your lambda (or whatever context your code is running in) has the rds-db:connect permission.
NOTE:
This doesn't include the connection pooling options; I'm still trying to figure out how to optimize that. Check out the "Using sequelize in AWS Lambda" docs for a place to start; a rough sketch follows below.
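For what it's worth, those Sequelize docs suggest letting the pool drain completely between Lambda invocations. A sketch of their suggested shape (the numbers are the docs' starting points, not values tuned for any particular workload):

const { Sequelize } = require("sequelize");

const sequelize = new Sequelize(/* connection settings as above, plus: */ {
  dialect: "mysql",
  pool: {
    max: 2,   // at most a couple of connections per Lambda container
    min: 0,   // let the pool drain completely between invocations
    idle: 0,  // release connections as soon as they're unused
    acquire: 3000,
    // The docs also tie `evict` to the function timeout so eviction can
    // run while the container is frozen (a constant you'd define yourself).
  },
});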

Heroku postgres node connection timeout

I'm trying to connect to a Postgres database from my Heroku node app. This works when running locally, both through node and with the heroku local web command, but when running on Heroku it times out while waiting for pool.connect.
I'm running the following code snippet through the Heroku console (I've also tried using this code in my app directly, but this is more efficient than redeploying each time):
node -e "
const { Pool } = require('pg');

const pool = new Pool({
  connectionTimeoutMillis: 15000,
  connectionString: process.env.DATABASE_URL + '?sslmode=require',
  ssl: {
    rejectUnauthorized: true
  }
});
console.log('pool created');

(async () => {
  try {
    console.log('connecting');
    const client = await pool.connect(); // this never resolves
    console.log('querying');
    const { rows } = await client.query('SELECT * FROM test_table LIMIT 1;');
    console.log('query success', rows);
    client.release();
  } catch (error) {
    console.log('query error', error);
  }
})();
"
Things I've tried so far:
Using the pg Client instead of Pool
Using ssl: true instead of ssl: { rejectUnauthorized: true }
Using client.query without using pool.connect
Increased and omitted connectionTimeoutMillis (it resolves quickly when running locally since I'm querying a database that has just one row)
I've also tried using callbacks and promises instead of async / await
I've tried setting the connectionString both with the ?sslmode=require parameter and without it
I have tried using pg versions ^7.4.1 and ^7.18.2 so far
My assumption is that there is something I'm missing with either the Heroku setup or SSL. Any help would be greatly appreciated, thanks!
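One frequently reported gotcha with Heroku Postgres and pg is certificate verification: Heroku's certificates don't pass full verification, so connections often only succeed with rejectUnauthorized set to false rather than true. A sketch of that variant to try (an assumption, not a confirmed fix for this case):

const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  // Heroku Postgres typically needs TLS without full certificate
  // verification; note the security trade-off this implies.
  ssl: { rejectUnauthorized: false },
});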

Google Cloud - Connect Sql Server to Google Apps Script

I have problems connecting SQL Server on Google Cloud to Google Apps Script. I have tried many options for the connection URL, like: Jdbc.getCloudSqlConnection("jdbc:google:mysql://apis-para-pap:southamerica-east1:revistamarcasserver", "sqlserver", "*****"); but it is not connecting: Exception: Failed to establish a database connection. Check connection string.
Can you help me solve this problem and connect SQL Server to Google Apps Script?
Information about google cloud Sql Server:
DB Type: SQL Server 2017 Standard
Location: southamerica-east1-b
Instance name: apis-para-pap:southamerica-east1:revistamarcasserver
Public address: 34.95.157.142
White list: (72.14.192.0/18) (64.233.160.0/19) (209.85.128.0/17) (66.102.0.0/20) (74.125.0.0/16) (173.194.0.0/16) (66.249.80.0/20) (64.18.0.0/20) (216.239.32.0/19) (207.126.144.0/20)
(Observation: using SQL Server Management Studio, I have tested and connected successfully with this information.)
Thank you so much
I created a Cloud SQL instance, authorized the IP ranges listed above, and was able to connect to it using the public IP address. I could not connect using the instance connection name.
var db = 'mydatabase';
var instanceUrl = "jdbc:mysql://Public_IP_address_SQL";
var dbUrl = instanceUrl + '/' + db;
var user = 'new_user';
var userPwd = '****';

/**
 * Create a new database within a Cloud SQL instance.
 */
function createDatabase() {
  var conn = Jdbc.getConnection(instanceUrl, {user: 'root', password: '****'});
  conn.createStatement().execute('CREATE DATABASE ' + db);
}

/**
 * Create a new user for your database with full privileges.
 */
function createUser() {
  var conn = Jdbc.getConnection(dbUrl, {user: 'root', password: '****'});
  var stmt = conn.prepareStatement('CREATE USER ? IDENTIFIED BY ?');
  stmt.setString(1, user);
  stmt.setString(2, userPwd);
  stmt.execute();
  conn.createStatement().execute('GRANT ALL ON `%`.* TO ' + user);
}

/**
 * Create a new table in the database.
 */
function createTable() {
  var conn = Jdbc.getConnection(dbUrl, {user: 'new_user', password: '****'});
  conn.createStatement().execute('CREATE TABLE entries '
      + '(guestName VARCHAR(255), content VARCHAR(255), '
      + 'entryID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(entryID));');
}

createDatabase();
createUser();
createTable();
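Note that the snippet above uses the MySQL URL prefix, while the instance in the question is SQL Server 2017. Apps Script's Jdbc service also accepts jdbc:sqlserver:// URLs against a public IP, so a variant worth trying might look like this (the database name and credentials are placeholders, not a verified setup):

function connectToSqlServer() {
  // Placeholder values; use your instance's public IP and real credentials.
  var url = 'jdbc:sqlserver://34.95.157.142:1433;databaseName=mydatabase';
  var conn = Jdbc.getConnection(url, {user: 'sqlserver', password: '*****'});
  var stmt = conn.createStatement();
  var results = stmt.executeQuery('SELECT TOP 1 * FROM entries');
  while (results.next()) {
    Logger.log(results.getString(1));
  }
  results.close();
  stmt.close();
  conn.close();
}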

Winston CloudWatch Transport not Creating Logs When Running on Lambda

I have an Express.js app that is set up to run from within an AWS Lambda function. When I deploy this app to the lambda, the lambda's own CloudWatch logs show up (i.e. /aws/lambda/lambda-name), but it doesn't create the new CloudWatch log group specified in the configuration.
If I run the lambda function locally and generate logs it will create a CloudWatch Log Group for the local environment.
The Lambda Functions are connecting to an RDS instance so they are contained within a VPC.
The Lambda has been assigned the CloudWatchFullAccess policy so it should not be a permissions error.
I've looked at the Lambda logs and I'm not seeing any errors coming through related to this.
const env = process.env.NODE_ENV || 'local'
const config = require('../../config/env.json')[env]
const winston = require('winston')
const WinstonCloudwatch = require('winston-cloudwatch')
const crypto = require('crypto')

let startTime = new Date().toISOString()

const logger = winston.createLogger({
  exitOnError: false,
  level: 'info',
  transports: [
    new winston.transports.Console({
      json: true,
      colorize: true,
      level: 'info'
    }),
    new WinstonCloudwatch({
      awsAccessKeyId: config.aws.accessKeyId,
      awsSecretKey: config.aws.secretAccessKey,
      logGroupName: 'my-api-' + env,
      logStreamName: function () {
        // Spread log streams across dates as the server stays up
        let date = new Date().toISOString().split('T')[0]
        return 'my-requests-' + date + '-' +
          crypto.createHash('md5')
            .update(startTime)
            .digest('hex')
      },
      awsRegion: 'us-east-1',
      jsonMessage: true
    })
  ]
})

const winstonStream = {
  write: (message, encoding) => {
    // use the 'info' log level so the output will be picked up by both transports
    logger.info(message)
  }
}

module.exports.logger = logger
module.exports.winstonStream = winstonStream
Then within my express app.
const morgan = require('morgan')
const { winstonStream } = require('./providers/loggers')

app.use(morgan('combined', { stream: winstonStream }))
Confirming that the problem was related to the lambda function being in a VPC without public internet access through subnets, route tables, NAT and internet gateways, as described in this post: https://gist.github.com/reggi/dc5f2620b7b4f515e68e46255ac042a7
I believe that to access external internet services you'd need what you described.
But to access an AWS service outside the VPC you can create a VPC endpoint.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html
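For reference, such an interface endpoint for CloudWatch Logs can be created programmatically; a sketch using the AWS SDK v3 EC2 client, where every resource ID below is a placeholder:

const { EC2Client, CreateVpcEndpointCommand } = require("@aws-sdk/client-ec2");

const ec2 = new EC2Client({ region: "us-east-1" });

async function createLogsEndpoint() {
  // All resource IDs are placeholders; the service name is region-specific
  // and here matches the awsRegion used in the logger configuration above.
  return ec2.send(new CreateVpcEndpointCommand({
    VpcId: "vpc-0123456789abcdef0",
    VpcEndpointType: "Interface",
    ServiceName: "com.amazonaws.us-east-1.logs",
    SubnetIds: ["subnet-0123456789abcdef0"],
    SecurityGroupIds: ["sg-0123456789abcdef0"],
  }));
}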
