Issue with manually giving password every time with psql - bash

We are trying to migrate data from an Amazon RDS database to an Amazon Aurora Serverless database using psql's COPY command. The script works fine when I run it from an EC2 instance, but I have to enter the passwords for rdswizard and postgres manually on every iteration. I just want to supply the password along with my psql command. How can I pass the password with the psql command instead of typing it manually every time?
allSites=(3 5 9 11 29 30 31 32 33 34 37 38 39 40 41 45 46 47 48)
for i in "${allSites[@]}"
do
  psql \
    -X \
    -U rdswizard \
    -h my_rds_host_url_goes_here \
    -d wizard \
    -c "\\copy (select site_id,name,phone from client_${i} where date(create_date) > '2019-09-11' LIMIT 100) to stdout" \
  | \
  psql \
    -X \
    -U postgres \
    -h my_aurora_serverless_host_url_goes_here \
    -d wizard \
    -c "\\copy client_${i}(site_id,name,phone) from stdin"
done
Both database hosts are on remote servers, not on the local machine.

You can add entries to the ~/.pgpass file to avoid having to type in passwords every time. Make sure to give the file -rw------- (0600) permissions.
This file should contain lines of the following format:
hostname:port:database:username:password
The password field from the first line that matches the current connection parameters will be used. Refer to the official documentation.
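For example, with the two hosts from the question (the passwords shown are placeholders), ~/.pgpass could look like this:
my_rds_host_url_goes_here:5432:wizard:rdswizard:rds_password_here
my_aurora_serverless_host_url_goes_here:5432:wizard:postgres:aurora_password_here
Then restrict its permissions so that libpq will actually use the file:
chmod 0600 ~/.pgpass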

Related

Error connecting SQL Developer to an Oracle database running in Docker

I am trying to install an Oracle 18 database on my Ubuntu 22.04 virtual machine using Docker. I have managed to obtain the database image and instantiate the container. To do so, I executed the following command:
sudo docker run \
--name oracle18c \
-p 1521:1521 \
-p 5500:5500 \
-e ORACLE_PDB=orcl \
-e ORACLE_PWD=password \
-e ORACLE_MEM=4000 \
-v /opt/oracle/oradata \
-d \
oracle/database:18.4.0-xe
With SQL Developer running, I try to create my first connection, but the connection test fails with:
FailureTest: listener refused the connection with the following error: ORA-12514
Could someone help me understand the problem or give me a solution?

Shell script for creating user with password in postgres failing with quotes

I am trying to create a shell script to bootstrap new DBs.
I am able to create users, grant privileges, and do all other actions, except run queries that include passwords. The single quotes in the shell script produce statements that Postgres does not accept.
Because of this, we cannot completely automate the process.
Below is one of the psql lines used in the shell script:
PGPASSWORD=change123 psql -h $DB -p 5432 -d postgres -U root -c \"CREATE USER $(echo "$j" | cut -d "_" -f1)dbuser WITH PASSWORD \'$(echo $DBPASSWD|base64 --decode)\';\"
When the script is executed, the command is expanded to
psql -h testdb -p 5432 -d postgres -U root -c '"CREATE' USER admindbuser WITH PASSWORD ''\''ZnuLEmu72R'\'''
where I want the command to be like
psql -h testdb -p 5432 -d postgres -U root -c "CREATE USER admindbuser WITH PASSWORD 'ZnuLEmu72R';"
Any help is very much appreciated. I need some guidance on how to modify the line in the shell script so that it produces the required command.
Change
PGPASSWORD=change123 psql\
-h $DB \
-p 5432 \
-d postgres \
-U root \
-c \"CREATE USER $(echo "$j" | cut -d "_" -f1)dbuser WITH PASSWORD \'$(echo $DBPASSWD|base64 --decode)\';\"
to
PGPASSWORD=change123 psql \
-h "$DB" \
-p 5432 \
-d postgres \
-U root \
-c "CREATE USER ${j%%_*}dbuser WITH PASSWORD '$(printf '%s' "$DBPASSWD" | base64 --decode)';"

Is there a way to automate the Redshift VACUUM process through a UDF?

I have 300+ tables in Redshift.
The data is updated on a daily basis; I just want to know whether I can create a UDF in Redshift to automate the VACUUM process.
I found a link that automates this using Python, but I am not that great a Python coder, so I am looking for a solution as a SQL script.
Unfortunately, you can't use a UDF for something like this; UDFs are simple input/output functions meant to be used in queries.
Your best bet is to use this open source tool from AWS Labs: AnalyzeVacuumUtility. The great thing about using this tool is that it is very smart about only running VACUUM on tables that need it, and it will also run ANALYZE on tables that need it.
It's pretty easy to set up as a cron job. Here is an example of how it can be done:
Clone the amazon-redshift-utils repo from git:
git clone https://github.com/awslabs/amazon-redshift-utils
cd amazon-redshift-utils
Create a script that can be run by cron. In your text editor, create a file called run_vacuum_analyze.sh with the following content, and fill in the values for your environment:
export REDSHIFT_USER=<your db user name>
export REDSHIFT_PASSWORD=<your db password>
export REDSHIFT_DB=<your db>
export REDSHIFT_HOST=<your redshift host>
export REDSHIFT_PORT=<your redshift port>
export WORKSPACE=$PWD/src/AnalyzeVacuumUtility
#
# VIRTUALENV
#
rm -rf $WORKSPACE/ve1
virtualenv -p python2.6 "$WORKSPACE/ve1"
# enter the virtualenv
source $WORKSPACE/ve1/bin/activate
#
# DEPENDENCIES
#
pip install PyGreSQL
cd $WORKSPACE/run
#
# RUN IT
#
python analyze-vacuum-schema.py --db $REDSHIFT_DB --db-user $REDSHIFT_USER --db-pwd $REDSHIFT_PASSWORD --db-port $REDSHIFT_PORT --db-host $REDSHIFT_HOST
Then create a cron job that will run this script (in this example, it runs daily at 2:30 AM):
chmod +x run_vacuum_analyze.sh
crontab -e
Add the following entry:
30 2 * * * <path-to-the-cloned-repo>/run_vacuum_analyze.sh
You CANNOT use a UDF for this; UDFs cannot run commands that update data.
Yes, I have created an AWS Lambda function in Java and used a CloudWatch event to schedule it with a cron expression. An AWS Lambda function in Java expects a shaded JAR to be uploaded. I created environment variables in the Lambda function for the Redshift connection properties, which are passed into the Java handler.
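For illustration only (the rule and function names here are hypothetical, and the Lambda ARN placeholder must be filled in), the CloudWatch Events schedule can be wired up from the AWS CLI roughly like this:
aws events put-rule --name nightly-redshift-vacuum --schedule-expression "cron(30 2 * * ? *)"
aws lambda add-permission --function-name redshift-vacuum --statement-id nightly-redshift-vacuum --action lambda:InvokeFunction --principal events.amazonaws.com
aws events put-targets --rule nightly-redshift-vacuum --targets "Id"="1","Arn"="<lambda-function-arn>"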
Redshift now provides an auto vacuum option, so you can also use that.
Here is my shell script utility to automate this with better control over the table filters.
https://thedataguy.in/automate-redshift-vacuum-analyze-using-shell-script-utility/
Example Commands:
Run vacuum and Analyze on all the tables.
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev
Run vacuum and Analyze on the schemas sc1 and sc2:
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -s 'sc1,sc2'
Run VACUUM FULL on all the tables in every schema except the schema sc1, but don't run Analyze:
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -k sc1 -o FULL -a 0 -v 1
or
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -k sc1 -o FULL -a 0
Run Analyze only on all the tables except the tables tbl1 and tbl3:
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -b 'tbl1,tbl3' -a 1 -v 0
or
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -b 'tbl1,tbl3' -v 0
Use a password on the command line.
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -P bhuvipassword
Run vacuum and analyze on the tables where unsorted rows are greater than 10%.
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -v 1 -a 1 -x 10
or
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -x 10
Run the Analyze on all the tables in schema sc1 where stats_off is greater than 5.
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -v 0 -a 1 -f 5
Run vacuum only on the table tbl1 in the schema sc1, with a vacuum threshold of 90%:
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -s sc1 -t tbl1 -a 0 -c 90
Run analyze only on the schema sc1, but set analyze_threshold_percent=0.01:
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -s sc1 -t tbl1 -a 1 -v 0 -r 0.01
Do a dry run (generate SQL queries) for analyzing all the tables in the schema sc2:
./vacuum-analyze-utility.sh -h endpoint -u bhuvi -d dev -s sc2 -z 1

Facing issue in Shell while executing query on remote postgres database

I am running a shell script on my app server which connects to another machine where a Postgres database is installed. It executes a query, returns a couple of IDs, and stores them in variables. Please find my shell script below.
ssh root@<Remote_HOST> 'bash -s' << EOF
projectid=`/usr/pgsql-9.4/bin/psql $DB_NAME -U $DB_USER -h $DB_HOST -t -c "select projectid from projects where project_Name='$projectName';"`
scenarioid=`/usr/pgsql-9.4/bin/psql $DB_NAME -U $DB_USER -h $DB_HOST -t -c "select scenarioid from scenarios where scenario='$scenario' and projectid='$projectid';"`
EOF
echo $projectid
If I execute the shell script, I get the following error:
/root/test/data.sh: line 62: /usr/pgsql-9.4/bin/psql: No such file or directory
/root/test/data.sh: line 62: /usr/pgsql-9.4/bin/psql: No such file or directory
But on the machine where the database is installed, if I execute the same query, I get proper results. So I am not sure what is wrong; the query is fine and the directory is present. Even after SSH to the remote host, if I do ls or pwd, I get proper output. I have already exported the database password, so logging in to the database without a password already works fine.
Can someone please tell me what I am missing here?
Finally I was able to resolve my issue by restructuring the shell script so that each psql call runs inside a local $(ssh ...) command substitution:
projectid=$(ssh root@<Remote_HOST> << EOF
/usr/pgsql-9.4/bin/psql $DB_NAME -U $DB_USER -h $DB_HOST -t -c "select projectid from projects where project_Name='$projectName';"
EOF
)
scenarioid=$(ssh root@<Remote_HOST> << EOF
/usr/pgsql-9.4/bin/psql $DB_NAME -U $DB_USER -h $DB_HOST -t -c "select scenarioid from scenarios where scenario='$scenario' and projectid='$projectid';"
EOF
)
echo "$projectid : $scenarioid"

mongodump with date in query parameter using shell script

I am trying to take a mongodump of a collection for the last 24 hours using bash, but I am getting errors because I am unable to use a custom date in the query parameter of the mongodump command.
timeInMs=$(expr "$(date +'%s%3N')" - 86400000)
mongodump -u user -p password --authenticationDatabase admin --db dbname -c collection --query '{startTime:{$gte:new Date(${timeInMs})}}'
timeInMs is as expected (the time in milliseconds 24 hours ago), but the problem is getting the query right. Lots of trial and error, but no success yet. I have tried the following:
'{startTime:{$gte:{"$date":"${timeInMs}"}}}'
"{startTime:{$gte:new Date\"(${timeInMs})\"}}"
'{startTime:{$gte:new Date("${timeInMs}")}}'
You need to get your quoting right:
timeInMs=$(expr "$(date +'%s%3N')" - 86400000)
mongodump -u user -p password --authenticationDatabase admin --db dbname -c collection --query '{startTime:{$gte:new Date('"$timeInMs"')}}'
For better readability:
mongodump -u user \
-p password \
--authenticationDatabase admin \
--db dbname -c collection --query '{startTime:{$gte:new Date('"$timeInMs"')}}'
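A quick way to confirm that the query argument expands as intended (the timestamp shown in the comment is only illustrative) is to echo it before running the dump:
echo '{startTime:{$gte:new Date('"$timeInMs"')}}'
# {startTime:{$gte:new Date(1568185200000)}}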
