download the database dump to local using kubectl and pg_dump - shell

I am trying to download a PostgreSQL database dump to my local machine using the shell script below:
#!/bin/bash -x
namespace="staging"
current_date=`date +%d_%b_%Y`
file_name="${namespace}_${current_date}.dump"
kubectl config use-context <context_name> -n $namespace
pod_name=`kubectl get pods -n ${namespace} -l app=backend | grep backend | awk '{print $1}'`
kubectl exec -it $pod_name --container <cname> -n $namespace -- sh -c PGPASSWORD=$DATABASE_PASSWORD pg_dump --no-owner -x -Fc $DATABASE_NAME -h $DATABASE_HOST -U $DATABASE_USERNAME > $file_name
But this does not work: the resulting file is blank, and there is no error.
$DATABASE_PASSWORD, $DATABASE_NAME, $DATABASE_HOST & $DATABASE_USERNAME
are all environment variables.
Any help on how to get this working would be really helpful.
Thanks.
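One likely culprit, offered as an assumption rather than a confirmed answer: sh -c takes only the next word as its command string, so everything after PGPASSWORD=$DATABASE_PASSWORD is dropped, and the $DATABASE_* variables are expanded by the local shell (where they may be empty) instead of inside the pod. A minimal sketch, assuming those variables are defined in the container's environment:
# Sketch: single-quote the remote command so the pod's own environment expands the
# variables, and drop -t so TTY control characters cannot corrupt the binary dump
kubectl exec "$pod_name" --container <cname> -n "$namespace" -- \
  sh -c 'PGPASSWORD="$DATABASE_PASSWORD" pg_dump --no-owner -x -Fc "$DATABASE_NAME" -h "$DATABASE_HOST" -U "$DATABASE_USERNAME"' > "$file_name"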

Related

grep failing in GitLab CI

I am trying to finalize a script in GitLab CI but am struggling with a syntax error:
script: |
  echo "running jenkins job from user $EMAIL using following settings - $BRANCH / $TAGS in $ENV environment"
  lastbuildNumber=$(curl -s --user ${EMAIL}:${TOKEN} "$JENKINS_URL/$ENV-SmokeTests-UIOnly/lastBuild/api/json" | jq ".number")
  echo "last build was number ${lastbuildNumber}"
  currentBuild=$((lastbuildNumber + 1 ))
  echo "current build is ${currentBuild}"
  echo "view cucumber report here"
  baseurl="$JENKINS_URL/${ENV}-SmokeTests-UIOnly"
  echo $baseurl
  curl -s --user $EMAIL:$TOKEN $JENKINS_URL/$ENV-SmokeTests-UIOnly/ --output cucumber.txt
  cucumber_endpoint=$(cat cucumber.txt | grep -o -m 1 "[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*")
  full_cucumber=$baseurl$cucumber_endpoint
  echo $full_cucumber
The script works fine on my local terminal, but fails in the CI when running
cucumber_endpoint=$(cat cucumber.txt | grep -o -m 1 "[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*")
It is surely something related to quotes, but I cannot work out what the issue is.
Update:
I changed to:
after_script:
  - |
    echo "view cucumber report here"
    baseurl="$JENKINS_URL/job/${ENV}-SmokeTests-UIOnly"
    curl -s --user "$EMAIL":"$TOKEN" $JENKINS_URL/"$ENV"-SmokeTests-UIOnly/ --output cucumber.txt
    cat cucumber.txt
    cucumber_endpoint=$(cat cucumber.txt | grep -o -m 1 '[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*')
    full_cucumber="${baseurl}${cucumber_endpoint}"
    echo "${full_cucumber}"
I have also run the script through shellcheck.net.
It's the grep that is not working, but it does not return any error now.
The result of the cat command is correct, the same as on my local machine.
To prove it is not an issue with set -e:
#!/bin/bash
set -e
echo "view cucumber report here"
baseurl="https://example"
cucumber_endpoint=$(curl -s --user "$EMAIL":"$TOKEN" ${JENKINS_URL}/"$ENV"-SmokeTests-UIOnly/ | grep -o -m 1 '[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*')
# cat cucumber.txt
# cucumber_endpoint=$(cucumber.txt | grep -o -m 1 '[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*')
full_cucumber="${cucumber_endpoint}"
echo "${baseurl}${full_cucumber}"
which gets what I want:
➜ ./cucumber.sh [16/02/23|11:39:59|]
view cucumber report here
https://example/cucumber-html-reports_fb3a3468-c298-3fb5-ad9a-dacbc0323763/overview-features.html
Apparently GitLab CI did not like the -m 1 option in the grep call, so I changed it to:
cucumber_endpoint=$(curl -s --user "$EMAIL":"$TOKEN" ${JENKINS_URL}/"$ENV"-SmokeTests-UIOnly/ | grep -o '[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*'| sort -u)
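As a hedged aside, not verified against this exact Jenkins page: piping the grep output through head -n 1 is another way to keep only the first match without -m 1, although if the job enables pipefail the early exit of head can still surface as a SIGPIPE failure.
# Sketch: take only the first matching report path
cucumber_endpoint=$(curl -s --user "$EMAIL":"$TOKEN" "${JENKINS_URL}/${ENV}-SmokeTests-UIOnly/" \
  | grep -o '[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*' \
  | head -n 1)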

Bash Script fails with error: OCI runtime exec failed

I am running the below script and getting an error.
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
if [ -n "$webproxy" ] ; then
  sudo docker exec $webproxy sh -c "$webproxycheck"
fi
Here is my docker ps -a output
$sudo docker ps -a --format "{{.Names}}"|grep webproxy
webproxy-dev-01
webproxy-dev2-01
When I run the command individually, it works. For example:
$sudo docker exec webproxy-dev-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
$sudo docker exec webproxy-dev2-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
Here is the error I get:
$ sh healthcheck.sh
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"webproxy-dev-01\": executable file not found in $PATH": unknown
Could someone please help me with the error. Any help will be greatly appreciated.
Because the variable contains two tokens (on two separate lines), that's what it expands to. You are running
sudo docker exec webproxy-dev-01 webproxy-dev2-01 ...
which of course is an error.
It's not clear what you actually expect to happen, but if you want to loop over those values, that's
for host in $webproxy; do
  sudo docker exec "$host" sh -c "$webproxycheck"
done
which will conveniently loop zero times if the variable is empty.
If you just want one value, maybe add head -n 1 to the pipe, or pass a more specific regular expression to grep so it only matches one container. (If you have control over these containers, probably run them with --name so you can unambiguously identify them.)
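For illustration, a sketch of both suggestions; the exact container name in the second variant is taken from the question and is only an example:
# Option 1: keep only the first matching container name
webproxy=$(sudo docker ps -a --format "{{.Names}}" | grep webproxy | head -n 1)
# Option 2: match one exact name so grep cannot return several lines
webproxy=$(sudo docker ps -a --format "{{.Names}}" | grep -x 'webproxy-dev-01')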
Based on your given script, you are trying to "exec" the following:
sudo docker exec webproxy-dev2-01
webproxy-dev-01 sh -c "curl -k -s https://localhost:${nginx_https_port}/HealthCheckService"
As you can see, here is your error:
sudo docker exec webproxy-dev2-01
webproxy-dev-01 [...]
The problem is this line:
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
which results in the following (you also posted this):
webproxy-dev2-01
webproxy-dev-01
Now the issue is that your docker exec command takes both image names (coming from the variable assignment $webproxy) and interprets the second entry (webproxy-dev-01, separated by \n) as the command to exec. That command is not valid and cannot be found: that's what the error tells you.
A workaround would be the following:
webproxy=$(sudo docker ps -a --format "{{.Names}}"| grep webproxy | head -n 1)
It only grabs the first entry of your output. You can of course adapt this to run in a loop.
A small snippet:
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}"| grep webproxy )
echo ${webproxy}
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
while IFS= read -r line; do
  if [ -n "$line" ] ; then
    echo "sudo docker exec ${line} sh -c \"${webproxycheck}\""
  fi
done <<< "$webproxy"

Turn off the return message from the executed command

I'm developing a bash script in which I use the ssh command to run some commands on a remote server, and I need to get the result of the command that runs on the remote server, so I wrote this code:
db="$(ssh -t user@host 'mysql --user=username -ppassword -e \"SHOW DATABASES;\" | grep -Ev \"(Database|information_schema|performance_schema)\"' | grep -Ev \"(mysql)\")"
But each time I run my bash script, I get Connection to host closed. at the start of the db result; this is a default message from the ssh command.
Also, if I add > /dev/null 2>&1 to the end of my command, the db variable ends up empty.
How can I turn off the return message from the executed command?
Like this:
#!/bin/bash
db=$(
ssh -t user@host bash <<EOF
mysql --user=username -ppassword -e "SHOW DATABASES" |
grep -Ev "(Database|information_schema|performance_schema|mysql)" \
2> >(grep -v 'Connection to host closed')
EOF
)
or, if Connection to host closed comes from STDOUT:
...
mysql --user=username -ppassword -e "SHOW DATABASES" |
grep -Ev "(Database|information_schema|performance_schema|mysql|Connection to host closed)"
...
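Another angle, offered as an assumption: ssh prints Connection to host closed. only when a pseudo-terminal was requested with -t, so if the remote mysql call does not actually need a TTY, dropping it (or forcing -T) avoids the message, and any remaining remote warnings can be silenced on stderr without touching stdout:
# Sketch, assuming no TTY is needed on the remote side
db=$(ssh -T user@host 'mysql --user=username -ppassword -e "SHOW DATABASES" |
  grep -Ev "(Database|information_schema|performance_schema|mysql)"' 2>/dev/null)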

Environment variable overrides command

I set the EC2_IP_ADDRESS variable
$ export EC2_IP_ADDRESS="`docker run -it -v $PWD/infrastructure:/terraform -v $PWD/data:/data terraform sh -c "terraform init; terraform state show module.aws_ec2.aws_eip.aws_instance_eip" | grep public_ip | awk '{print $3}'`"
And then I'm trying to copy some files into the EC2 instance:
$ scp -i key.pem -r src/* ec2-user@$EC2_IP_ADDRESS:/home/ec2-user/src/
But the output is an error: : nodename nor servname provided, or not known
Output of $ echo "scp -i key.pem -r src/* ec2-user@$EC2_IP_ADDRESS:/home/ec2-user/src/"
:/home/ec2-user/src/c/* ec2-user@X.X.X.X
It seems that anything after the variable EC2_IP_ADDRESS goes to the beginning of the string, overriding the command.
Any ideas on how to fix this?
It seems the variable contains $'\r' at the end. Remove it with:
EC2_IP_ADDRESS=${EC2_IP_ADDRESS%$'\r'}
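A hedged guess at where the \r comes from: docker run -t allocates a TTY, and TTY output uses CRLF line endings, so the last awk field ends in a carriage return. Dropping -t from the docker run, or stripping the character at capture time, should both work:
# Sketch: no TTY (-i only) plus an explicit tr -d '\r' as a belt-and-braces fix
EC2_IP_ADDRESS="$(docker run -i -v "$PWD/infrastructure":/terraform -v "$PWD/data":/data terraform \
  sh -c "terraform init; terraform state show module.aws_ec2.aws_eip.aws_instance_eip" \
  | grep public_ip | awk '{print $3}' | tr -d '\r')"
export EC2_IP_ADDRESS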

How to docker-compose run in Windows?

How can I use this command on Windows 10 Family:
docker-compose run api composer install --no-interaction
Example:
docker-compose run api composer install --no-interaction
- Interactive mode is not yet supported on Windows.
Please pass the -d flag when using `docker-compose run`.
Is it possible?
Do you have an example?
The interactive mode support for docker-compose on Windows is tracked by issue 2836 which proposes some alternatives:
Use bash from within the container:
docker exec -it MY_CONTAINER bash
Use a docker-compose-run script by Rodrigo Baron:
Script (put the function in ~/.zshrc or ~/.bashrc, in a Windows Git Bash shell for instance):
#!/bin/bash
function docker-compose-run() {
  if [ "$1" = "-f" ] || [ "$1" = "--file" ] ; then
    docker exec -i $(docker-compose -f $2 ps $3 | grep -m 1 $3 | cut -d ' ' -f1) "${@:4}"
  else
    docker exec -i $(docker-compose ps $1 | grep -m 1 $1 | cut -d ' ' -f1) "${@:2}"
  fi
}
docker-compose-run "$@"
Usage:
docker-compose-run web rspec
# or:
docker-compose-run -f docker-compose.development.yml web rspec
A simpler alternative is to use the -d option and then get the logs:
docker-compose run --rm <service> <command>
is replaced by:
docker-compose-run <service> <command>
For this to work, add this snippet to your ~/.bashrc:
docker-compose-run() {
  CONTAINER_NAME=$(docker-compose run -d "$@")
  docker logs -f $CONTAINER_NAME
  docker rm $CONTAINER_NAME
}
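With that snippet sourced, the command from the original question would presumably be run as (untested here):
docker-compose-run api composer install --no-interaction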
