AWS RDS: error: password authentication failed for user "ubuntu" from EC2

I have a Postgres RDS instance that my Node.js web application, running on an EC2 instance, is unable to connect to. The error in my EC2 Node logs is: error: password authentication failed for user "ubuntu"
I can confirm that I have the right username, password, database name, etc., because the development build on my machine connects correctly. I copied all the .env parameters exactly onto my EC2 machine for the production build, but when the production application web page attempts to connect to RDS, it fails. I have restarted my Node.js server multiple times and rebooted the whole EC2 machine, and I have confirmed with printenv that the environment variables are there.
What would you recommend trying to fix this issue?
EDIT for more details: My Node.js setup should be correct, because my Node.js server calls some external APIs that do not require my Postgres database, and those calls work properly.
EDIT2: This is strange, because my username for RDS is postgres, while my username for EC2 is ubuntu. I wonder if somehow there's a clash between env variables; I checked printenv but didn't find one, though.
EDIT3: See comments for my workaround.

I would suggest testing the database credentials by connecting directly to the RDS database with the psql client on the EC2 instance.
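For example, from the EC2 instance (the endpoint and database name below are placeholders; substitute your own values):
psql --host=mydbinstance.abcdefghijkl.us-west-2.rds.amazonaws.com --port=5432 --username=postgres --dbname=yourdbname --password
If psql connects with the same credentials, the credentials themselves are fine and the problem is in how the Node.js process receives them. The fact that the error names the user "ubuntu" (the EC2 OS user) suggests the Postgres driver fell back to the operating-system username because its user setting resolved to empty.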

Related

Problem connecting to my Database when setting up my first node

When I try to execute this command:
cd ~/.chainlink-kovan && docker run -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink:<version> local n
(I entered it with my image version, of course.)
I get an error. The node and the database are both hosted on AWS.
The issue is related to the configuration of your PostgreSQL server.
To connect to the database, you need a dedicated USER with a PASSWORD; the Chainlink node then uses these credentials to lock the database when it starts. The default postgres USER and DATABASE will not work because they are used for administrative purposes. These credentials are then added to the environment file with the following syntax:
DATABASE_URL=postgresql://$USERNAME:$PASSWORD@$SERVER:5432/$DATABASE
You can follow these steps to create the USER with credentials:
Access the PostgreSQL server/host via the psql command-line interface:
psql --host=mypostgresql.c6c8mwvfdgv0.us-west-2.rds.amazonaws.com --port=5432 --username=your_master_username --password
Create the USER and grant all privileges:
CREATE USER youruser WITH PASSWORD 'yourpass';
GRANT ALL PRIVILEGES ON DATABASE yourdbname TO youruser;
Now you just need to change the DATABASE_URL configuration in your environment file (.env) and kill & restart the Chainlink node.
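For example, a filled-in line (a sketch using the placeholder values from the steps above; replace the host with your own RDS endpoint):
DATABASE_URL=postgresql://youruser:yourpass@mypostgresql.c6c8mwvfdgv0.us-west-2.rds.amazonaws.com:5432/yourdbname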
In addition and in order to access the postgresql server hosted on AWS, you can have a look at the official documentation: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html

unable to access aws instance through ssh

Whenever I try to access the AWS instance over SSH, I get the following error:
Connection blocked because server only allows public key authentication. Please contact your network administrator.
Connection to ec2-54-214-97-39.us-west-2.compute.amazonaws.com closed by remote host.
Connection to ec2-54-214-97-39.us-west-2.compute.amazonaws.com closed.
I am connecting from an SSH-enabled command prompt:
chmod 400 virtue.pem
ssh -i "file.pem" ubuntu#ec2-publicIp.us-west-2.compute.amazonaws.com
I am unable to access the AWS instance virtual machine.
The error is like the one mentioned here:
https://laracasts.com/discuss/channels/servers/ssh-key-no-longer-working
You need to confirm that file.pem is the correct key for accessing the instance, and use chmod 400 to set permissions on the .pem file on your computer. You can view the system logs in the AWS console to check whether there is any message about SSH access.
You can launch another instance with another .pem, or detach the root volume and attach it to another instance to validate the config files.
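To check whether file.pem really is the key pair the instance expects, you can print the public key derived from it and compare (ssh-keygen is standard OpenSSH; the file name comes from the question):
ssh-keygen -y -f file.pem
The output should match the ubuntu user's entry in ~/.ssh/authorized_keys on the instance.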
This may be a problem caused by a man-in-the-middle attack.
Change your network to a private one and retry!

How to run portworx backup to minio server

Trying to configure Portworx volume backups (pxctl cloudsnap) to a localhost Minio server (emulating S3).
The first step is to create cloud credentials using pxctl credentials create,
e.g.
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000
This results in:
Error configuring cloud provider.Make sure the credentials are correct: RequestError: send request failed caused by: Get https://10.0.0.1:9000/: EOF
Disabling SSL (which is not configured, as this is just a localhost test) gives me:
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
Which returns:
Not authenticated with the secrets endpoint
I've tried this with both the Minio gateway (NAS) and the Minio server, with the same result.
The Portworx container is running within Rancher.
Any thoughts appreciated.
Resolved via the instructions at https://docs.portworx.com/secrets/portworx-with-kvdb.html,
i.e. set the secret type to kvdb in /etc/pwx/config.json:
"secret": {
"cluster_secret_key": "",
"secret_type": "kvdb"
},
Then log in using ./pxctl secrets kvdb login
After this, credentials create succeeded, and so did a subsequent cloudsnap backup. The test used the --s3-disable-ssl switch.
Note - kvdb stores secrets in plain text, so it is obviously not suitable for production.
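Putting the workaround together (the same placeholder keys and endpoint as above):
./pxctl secrets kvdb login
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl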

How do I switch OS user in Datagrip to Postgres via SSH?

When I connect to my database remotely, I use ssh to connect to the remote machine and then run sudo -u postgres psql to access PostgreSQL. The postgres user is passwordless in my OS.
I can make an SSH tunnel connect in Datagrip, but I can't seem to find a way to switch to postgres user prior to attempting to access the database.
Is there a way to do this?
First, you need to configure the SSH tunnel on the data source's SSH/SSL tab (host/port/username/password).
Secondly, you need to specify the database credentials for your DB on the General tab.
Also, make sure you have configured the server correctly for non-local connections.
You should go to the ~/.ssh/config file, set up the tunnel with the user that is used on the server, and put 'postgres' as the user name in the connection properties, for example:
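(A sketch; the host alias, server address, and key path below are hypothetical.)
Host my-db-server
    HostName server.example.com
    User my-os-user
    IdentityFile ~/.ssh/id_rsa
Then point the DataGrip SSH tunnel at the my-db-server alias and enter postgres as the database user.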
Note: this works only in the 2017.3 EAP for now (the release will be available this week).

Trying to migrate local mysql server to AWS

Please advise on how to migrate my local mysql server to the cloud.
Currently I have a Fedora linux box and a NAS attached to it via ethernet.
I believe the best way to go about it is:
1. Take a mysqldump of all databases
2. Create an Amazon RDS instance and try to load it from the created mysqldump
3. Shift the local connection to this instance
Am I on the right track?
How should I go about doing (1)? I have username-and-password-based access to the MySQL server, and it has only one database. I tried to follow a few links on the net, but the commands did not seem to work.
Is (2) even possible?
The end goal is to connect from the local servers to the DB server on AWS and be able to query seamlessly.
I've done similar migrations and I think you're on the right track.
"How should I go about doing (1)?"
Just take a mysqldump of your DB and store it in a file, e.g.:
mysqldump -h [host] -u [user] -p[password] [dbname] > dumpfilename.sql
"Is (2) even possible ?"
Absolutely. You can connect to a MySQL RDS instance just like you would connect to any other MySQL instance. The host name is referred to as the "endpoint" in the AWS Management Console.
Once you've created the RDS instance and set up the security group, you're ready to load the dump:
mysql -h [endpoint] -u [user] -p[password] [dbname] < dumpfilename.sql
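If the target database does not exist yet on the RDS instance, create it first (a sketch; the placeholders match those above):
mysql -h [endpoint] -u [user] -p[password] -e "CREATE DATABASE [dbname];"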
