Performing queries on a Postgres docker container through a bash script

I'm working with a simple Postgres database and Docker. Currently, I have a docker-compose file which creates the container I need and loads in the SQL files. After loading in this data, I would like it to perform a simple query through a bash script that I'm going to use for some basic tests (i.e., confirm # of rows > 0, etc.). To start, I'm just trying to make a simple script that will run and print the number of rows (then I can worry about implementing actual testing). Here is what I have so far:
docker-compose.yml:

    services:
      postgres:
        image: postgres
        environment:
          POSTGRES_DB: test-db
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
        volumes:
          - ./database/create_table.sql:/docker-entrypoint-initdb.d/create_table.sql
          - ./database/data.sql:/docker-entrypoint-initdb.d/data.sql
          - ./script/test.sh:/docker-entrypoint-initdb.d/test.sh
test.sh:

    #!/bin/bash
    echo "Accessing bash terminal of DB container"
    docker exec -it postgres_1 bash
    echo "Accessing psql terminal"
    psql -U postgres
    echo "Connecting to database"
    \c database
    echo "Checking number of rows"
    numrows = $("SELECT count(*) FROM my_table")
    echo numrows + " found."
Currently, when I run docker-compose up, it creates the data from my SQL files and then stays idle. Is there something additional I need to do to run my script? I am able to do all of this myself through a separate terminal, but I would like it all to be automated so that I can just add tests to my test.sh and then run that, rather than having to do it manually each time. What am I missing here? Shouldn't my script work, since I really just recreated the commands I was executing manually? Thanks for any help!

By the time your bash script is executed, you are already inside the postgres container itself. So you can simply query the database from there, as @DavidMaze already pointed out. Your script might look like this:
    #!/bin/bash

    db_name='test-db'
    db_user='postgres'

    function execute_sql() {
      psql --tuples-only -U "$db_user" -d "$db_name" -c "$@"
    }

    function log() {
      printf "Log [%s]: %s\n" "$(date --iso-8601=seconds)" "$*"
    }

    numrows=$(execute_sql "SELECT count(*) FROM my_table")
    log "${numrows} rows found"
The output should look something like this:

    postgres_1 | /usr/local/bin/docker-entrypoint.sh: sourcing /docker-entrypoint-initdb.d/test.sh
    postgres_1 | Log [2020-03-31T23:02:14+00:00]: 4 rows found
Regarding the testing: If you only want to run SQL queries and don't do/need additional scripting you can simply put your SQL test queries into a .sql file (e.g. test.sql) as well.
One more important thing to mention, which I'm sure you already know: the files (*.sql, *.sh, etc.) that you mount into the postgres container are executed in alphabetical order, i.e.

    create_table.sql
    data.sql
    test.sh
So, you are good.
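To turn this into an actual test, the same script can simply exit non-zero when a check fails; since the entrypoint sources the *.sh init files, a failure should abort the container startup and show up in the docker-compose output. A minimal sketch building on the helpers above (the expected count of 4 is just an example value, not something from the question):

    # Compare the live row count against an expected value (example: 4).
    expected=4
    numrows=$(execute_sql "SELECT count(*) FROM my_table")
    if [ "${numrows}" -ne "${expected}" ]; then
      log "FAIL: expected ${expected} rows, found ${numrows}"
      exit 1
    fi
    log "PASS: ${numrows} rows found"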

Related

Calling a docker container by name - issuing commands (docker exec) to it specifically

I have a docker-compose.yaml file that spins up a couple of containers: nginx, php, mysql.
I'm trying to automate a setup process that imports a database into the mysql container as part of a make target.
The make target looks like this:

    startLocalEnv:
        docker-compose up -d # Build containers
        # A lot of other stuff happens here, that is omitted to keep it simple

    importDb:
        # THIS IS THE COMMAND I'M TRYING TO MAKE
        docker exec -i CONTAINER_ID mysql -usomeuser -psomepassword local_db_name < ./dumps/existingDbDump.sql
How can I run this command, docker exec -i CONTAINER_ID mysql -usomeuser -psomepassword local_db_name < ./dumps/existingDbDump.sql, so that it doesn't take several steps of copying and pasting?
Currently
This is how it's done today:
Step 1: Run docker ps and copy the container ID. Let's say: 41e8203b54ea.
Step 2: Insert that into the above command and run it. Example:

    docker exec -i 41e8203b54ea mysql -usomeuser -psomepassword local_db_name < ./dumps/existingDbDump.sql

It's not super painful, but it's something quite rudimentary that I'm assuming (and hoping) can be made into one step fairly easily.
Solution attempt 1: Pipe the shit out of it!
I found this SO question: Get Docker container id from container name, where I found this to output the container ID: docker container ls | grep mysql | awk '{print $1}'.
So I imagine that, fiddling around with this, I can maybe get a one-liner that runs this import.
But it seems excessive. And if I have another project that also has a container called mysql (fairly possible!) that I have forgotten to stop, then this solution will target that.
There is a docker-compose exec command that will automatically do this lookup for you.

    importDb: ./dumps/existingDbDump.sql
        # -T disables TTY allocation, which is needed when stdin is redirected
        docker-compose exec -T mysql mysql -usomeuser -psomepassword local_db_name < $<
This would probably be my third-choice way to do the database load, though. If you have the mysql CLI tool on your host and your database container has published ports: then you can just run that directly, without doing anything Docker-specific.
    importDb: ./dumps/existingDbDump.sql
        mysql -h127.0.0.1 -usomeuser -psomepassword local_db_name < $<
Or you can docker-compose run a temporary container to do the load:

    importDb: ./dumps/existingDbDump.sql
        docker-compose run mysql \
            mysql -hmysql -usomeuser -psomepassword local_db_name < $<
If your application framework has a database migration system, you could similarly docker-compose run the migrations.
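If you do prefer the plain docker exec route, docker-compose can also look up the container ID for you, scoped to the current project, which sidesteps the worry about an unrelated mysql container from another project. A sketch of that variant, assuming the service is named mysql in the compose file:

    # docker-compose ps -q prints the container ID of the named service,
    # limited to the current compose project, so other projects are ignored.
    docker exec -i "$(docker-compose ps -q mysql)" \
        mysql -usomeuser -psomepassword local_db_name < ./dumps/existingDbDump.sql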

How to capture docker-compose exec command output in a bash variable

I am writing a bash script to check the status of a mongodb instance running in a docker container. This code validates that I can successfully execute the mongo command inside the container:
    cat <<END | docker-compose exec -T mongodb1 mongo --username root --password passwd
    rs.status().myState
    END
However, I would like to be able to store the stdout of rs.status().myState in a variable. Something similar to this:

    MY_STATE=$(docker-compose exec -T mongodb1 mongo --username root --password passwd &&
    rs.status().myState)
But I get the exception: uncaught exception: ReferenceError: invalid assignment left-hand side
How do I capture the output from the mongo shell running inside the container and store it in a variable?
No matter what it looks like on your terminal, you can't write a shell script that first starts some program and then types some input into it, which is what your last invocation seems to be trying to do. If you try to run something like

    some-command && \
    input to some-command

then first the command runs to completion, with no input, and then the shell tries to run the input as a second command.
Your first command is probably closer to something that would actually work. If the input fits on a single line, then I might write

    echo 'input to some-command' | some-command

or, in the more specific case of your command,

    MY_STATE=$(echo 'rs.status().myState' | docker-compose exec -T mongodb1 mongo --username root --password passwd)
You might reconsider whether you actually need docker-compose exec here. You can't run that without also having the ability to docker run a container that can take over the entire host system. If you have the MongoDB command-line tools available on your host, and you've published a port with the Compose ports: option, then it might work to skip the docker-compose exec part:

    MY_STATE=$(echo 'rs.status().myState' | mongo --username root --password passwd)
If you're doing this for a health check, the other thing to consider is that, if a container's main process exits, the container will exit too. That's not a 100% guarantee, and it's very possible for a container to not exit but also not be functional, maybe waiting for something in its environment to reappear (Kubernetes has much richer health checks). But if you can rely on the database server exiting when it becomes unhealthy, then you don't need a check like this at all.
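If a script-based check is still wanted, a sketch along these lines builds on the captured variable; treating anything other than myState == 1 (PRIMARY) as unhealthy is an assumption, as is using --quiet to keep the shell banner out of the captured value:

    #!/bin/bash
    # Capture the replica set member state from inside the container.
    MY_STATE=$(echo 'rs.status().myState' | docker-compose exec -T mongodb1 \
        mongo --quiet --username root --password passwd)

    # Assumed policy: only state 1 (PRIMARY) counts as healthy.
    if [ "$MY_STATE" != "1" ]; then
        echo "mongodb unhealthy, myState=$MY_STATE" >&2
        exit 1
    fi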

PSQL \copy :variable substitution not working | PostgreSQL 11

I'm trying to read a CSV file and write it into a table. The CSV file is located on my local machine (the client). I used the \copy command and got this working, but the file path is hardcoded in my SQL script, and I want to parameterize it.
Based on my analysis, \copy does not support :variable substitution, but I'm not sure.
I believe we can achieve this using shell variables, but when I tried that, it didn't work as expected.
Following are my sample scripts.
Command:

    psql -U postgres -h localhost testdb -a -f '/tmp/psql.sql' -v path='"/tmp/userData.csv"'

psql script:

    \copy test_user_table('username','dob') from :path DELIMITER ',' CSV HEADER;
I'm executing this command from the shell and getting a "no such file or directory" error, but the same script works with a hardcoded path.
Can anyone advise me on this?
References:
Variable substitution in psql \copy
https://www.postgresql.org/docs/devel/app-psql.html
I am new to Bash, so your problem is quite hard for me. I can do it in one shell script; maybe later I can split it into two scripts.
The following is a simple single-file script:
    #!/bin/bash
    p=\'"/mnt/c/Users/JIAN HE/Desktop/test.csv"\'
    c="copy emp from ${p}"
    a=${c}
    echo "$a"
    psql -U postgres -d postgres -c "${a}"
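Note that the psql docs say \copy takes the rest of the line literally: neither variable interpolation nor backquote expansion is performed in its arguments, which is why :path never gets replaced. An alternative sketch is to build the entire \copy command in the shell and pass it with -c, so nothing needs substituting inside psql (table, columns, and path taken from the question; the column names are left unquoted here):

    #!/bin/bash
    # \copy reads the file client-side, so the CSV can stay on the local machine.
    csv_path="/tmp/userData.csv"
    psql -U postgres -h localhost -d testdb \
        -c "\copy test_user_table(username, dob) FROM '${csv_path}' DELIMITER ',' CSV HEADER"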

Unable to issue long sql query to postgres pod in kubernetes via bash script

I am trying to execute a query on a postgres pod in k8s via a bash script, but I cannot get results when I select a large number of columns. Here is my query:

    kubectl exec -it postgres-pod-dcd-wvd -- bash -c "psql -U postgres -c \"Select json_build_object('f_name',json_agg(f_name),'l_name',json_agg(l_name),'email',json_agg(email),'date_joined',json_agg(date_joined),'dep_name',json_agg(dep_name),'address',json_agg(address),'zip_code',json_agg(zip_code),'city',json_agg(city), 'country',json_agg(country)) from accounts WHERE last_name='ABC';\""
When I reduce the number of columns to be selected in the query, I get the results, but if I use all the column names, the query just hangs indefinitely. What could be wrong here?
Update:
I tried using the query as:

    kubectl exec -it postgres-pod-dcd-wvd -- bash -c "psql -U postgres -c \"Select last_name,first_name,...(other column names).. row_to_json(accounts) from register_account WHERE last_name='ABC';\""

But this also hangs. When I try from inside the pod, it works, but I need to execute it via a bash script.
That means it is almost certainly the result pagination: when you run exec -t, it sets up a TTY in the Pod, just as if you were connected interactively, so it is likely waiting for you to press space or "n" for the next page.
You can disable the pagination with env PAGER=cat psql -c "select ..." or use --pset pager=off, as in psql --pset pager=off -c "Select ..."
Also, there's no need to run bash -c unless your .bashrc is setting some variables or otherwise performing work in the Pod. Using exec -- psql should work just fine, all other things being equal. You will need the env command if you want to go with the PAGER=cat approach, because ENV=var some_command is shell syntax, and thus cannot be fed directly into exec.
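Putting those two points together, a sketch of the non-interactive form (pod name from the question; the query is shortened here for readability):

    # psql is exec'd directly with the pager off; no bash -c wrapper and no -t,
    # so there is no TTY for a pager to wait on.
    kubectl exec postgres-pod-dcd-wvd -- \
        psql -U postgres --pset pager=off -c "SELECT count(*) FROM accounts WHERE last_name='ABC';"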
As the resulting columns involve a lot of JSON processing, I think the time taken to execute these two queries is different.
Maybe you can log into the pod and execute the query there.

    kubectl exec -it postgres-pod-dcd-wvd -- bash

Now you are inside the pod, and we can execute the query (no quote escaping needed at this point):

    # psql -U postgres -c "Select json_build_object('f_name',json_agg(f_name),'l_name',json_agg(l_name),'email',json_agg(email),'date_joined',json_agg(date_joined),'dep_name',json_agg(dep_name),'address',json_agg(address),'zip_code',json_agg(zip_code),'city',json_agg(city), 'country',json_agg(country)) from accounts WHERE last_name='ABC';"
    # psql -U postgres -c "Select last_name,first_name,...(other column names).. row_to_json(accounts) from register_account WHERE last_name='ABC';"

Now we will be able to see whether one query takes longer to execute.
Also, the kubectl exec command can be run with a request timeout value (--request-timeout=5m) to see if there is slowness.
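For example, a sketch of that flag in use (pod name from the question; the simple query is a placeholder):

    # Gives up on the API request after five minutes instead of hanging forever.
    kubectl exec --request-timeout=5m postgres-pod-dcd-wvd -- \
        psql -U postgres -c "SELECT 1;"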

docker-compose script with prompt... better solution?

I have a bash script to start various docker-compose.yml files.
One of these compose instances is docker-compose.password.yml, which creates a password file for mysql. For that I need to prompt the user to input a user name and then run a service in docker (one that is not actually running).
Basically, the only way I can think of to accomplish this is to run the container in an idle state, exec the command, and shut the container down. Is there a better way?
(It would be easier to do it directly with docker run, but then I would have to check if the image is already available and keep image definitions in the various docker-compose.ymls plus now also in the bash script.)
My solution:
docker-compose.password.yml:

    version: '2'
    services:
      createpw:
        command:
          top -b -d 3600
Then

    docker-compose -f docker-compose.password.yml up -d

prompt the user for the credentials from my bash script, outside of docker:

    read -p $'Input user name.\n> ' username

send it to the running container:

    docker exec createpw /bin/bash -c "mysql_config_editor set --user=${username} --password"

and then docker-compose down.
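Collected into one sequence, the flow looks roughly like this (file, service, and variable names as above; addressing the container as createpw assumes container_name: createpw is set in the compose file, since compose otherwise generates a name like <project>_createpw_1):

    #!/bin/bash
    # Start the idle service, ask for credentials, run one command, tear down.
    docker-compose -f docker-compose.password.yml up -d
    read -p $'Input user name.\n> ' username
    # -it: mysql_config_editor prompts for the password interactively
    docker exec -it createpw /bin/bash -c "mysql_config_editor set --user=${username} --password"
    docker-compose -f docker-compose.password.yml down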
Tried and not working:
I tried to have just a small script prompting for the input, right under command:

    command:
      /bin/bash /somewhere/createpassword.sh

This did produce the file, but the user was an empty string, as the prompt didn't stop the docker execution. It didn't matter whether I used compose -d or not.
Any suggestions are welcome. Thanks.
