EDIT:
OK, so after the answer from Vasanta Koli, I looked deeper into my builds.
And actually, I found the full console output.
It's a bit confusing at first, because you have to go through Build History (or use the little arrow next to your build's name) to reach it... in the same place as the "basic" console output you get when you click directly on the build's name.
Anyway, I can finally access my full logs!
Original question:
This question may look dumb, but in my Jenkins configuration I can't see all the logs from my build's shell script.
I've looked for an option to activate it, but I can't find one.
In my script, I'm just restoring a database, with an echo before each command, like this:
#!/usr/bin/env bash
timestamp=$(date +%T)
echo $timestamp "- Delete"
dropdb -h localhost -U user database
echo $timestamp "- Creation"
createdb -h localhost -E unicode -U user database
echo $timestamp "- Restore"
pg_restore -h localhost -U user -O -d database database.tar
The whole script is executed, but no logs show up for my build in the web UI (Console Output).
I'm obviously missing something here.
Can someone help me? Thank you!
Don't store the timestamp in a variable if you need the actual time of each task (command) execution.
Also, if you want the logs redirected to a file, redirect them explicitly.
It should be something like this:
#!/usr/bin/env bash
logfile=/var/log/script.log
{
    echo "$(date +%T) - Delete"
    dropdb -h localhost -U user database
    echo "$(date +%T) - Creation"
    createdb -h localhost -E unicode -U user database
    echo "$(date +%T) - Restore"
    pg_restore -h localhost -U user -O -d database database.tar
} >> "$logfile"
Please check and update
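If you also want the messages to stay visible in Jenkins' Console Output (the original problem) while keeping a file copy, one option is to pipe the block through tee instead of redirecting it away from stdout. This is just a sketch using the same example log path:

#!/usr/bin/env bash
logfile=/var/log/script.log   # example path, adjust to your environment

{
    echo "$(date +%T) - Delete"
    dropdb -h localhost -U user database
    echo "$(date +%T) - Creation"
    createdb -h localhost -E unicode -U user database
    echo "$(date +%T) - Restore"
    pg_restore -h localhost -U user -O -d database database.tar
} 2>&1 | tee -a "$logfile"    # stdout stays visible to Jenkins, a copy is appended to the file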
Related
I am having issues on my Ubuntu server: I have two scripts that perform a pg_dump of two databases (a remote one and a local one). However, the backup file for the local one always ends up empty.
When I run the script manually, there is no problem.
The issue is when the script is run via crontab while I am NOT logged into the machine. If I'm in an SSH session it works with crontab, but when I'm not connected it does not work.
Check out my full scripts/setup below, and feel free to suggest any improvements. For now I just want it to work, but if my method is insecure/inefficient I would gladly hear about alternatives :)
So far I've tried:
Using the postgres user for the local database (instead of another user I use to access the DB with my applications)
Switching pg_dump for /usr/bin/pg_dump
Here's my setup:
Crontab entry:
0 2 * * * path/to/my/script/local_databasesBackup.sh ; path/to/my/script/remote_databasesBackup.sh
scriptInitialization.sh
set LOCAL_PWD "password_goes_here"
set REMOTE_PWD "password_goes_here"
Expect script, called by crontab (local/remote_databaseBackup.sh):
#!/usr/bin/expect -f
source path/to/my/script/scriptInitialization.sh
spawn path/to/my/script/localBackup.sh
expect "Password: "
send "${LOCAL_PWD}\r"
expect eof
exit
Actual backup script (local/remoteBackup.sh):
#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
delete_yesterday_backup_and_perform_backup () {
    /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
    YESTERDAY_2_AM=$(date --date="02:00 yesterday" +"%Y-%m-%d_%H%M")
    YESTERDAY_BACKUP_FILE=/path/to/local/backup/folder/$YESTERDAY_2_AM.tar
    if [ -f "$YESTERDAY_BACKUP_FILE" ]; then
        echo "$YESTERDAY_BACKUP_FILE exists. Deleting"
        rm $YESTERDAY_BACKUP_FILE
    else
        echo "$YESTERDAY_BACKUP_FILE does not exist."
    fi
}
CURRENT_DAY_NUMBER=$(date +"%d")
FIRST_DAY_OF_THE_MONTH="01"
if [ "$CURRENT_DAY_NUMBER" = "$FIRST_DAY_OF_THE_MONTH" ]; then
echo "First day of the month: Backup without deleting the previous backup"
/usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
else
echo "Not the first day of the month: Delete backup from yesterday and backup"
delete_yesterday_backup_and_perform_backup
fi
The only difference between my local and remote script is the pg_dump parameters:
Local looks like this /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
Remote looks like this: pg_dump -U remote_account -p 5432 -h remote.address.com -W -F t remoteDatabase > /path/to/local/backup/folder/$DATE.tar
I ended up making two separate scripts because I thought that might have been the cause of the issue; however, I'm now fairly sure it isn't.
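One improvement worth considering (not part of the setup above, just a common pattern): a ~/.pgpass file lets pg_dump authenticate without a prompt, which removes the expect layer entirely and tends to behave better under cron's minimal, non-login environment. A minimal sketch reusing the names from the question:

# ~/.pgpass in the home directory of the user running cron, permissions 0600
# format: hostname:port:database:username:password
localhost:5432:localDatabaseName:postgres:password_goes_here
remote.address.com:5432:remoteDatabase:remote_account:password_goes_here

# with the password picked up from ~/.pgpass, drop -W so nothing ever prompts,
# and crontab can call the backup script directly instead of the expect wrapper
/usr/bin/pg_dump -U postgres -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar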
I have the following Dockerfile
FROM confluentinc/cp-kafka-connect:5.3.1
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/KarthikDuggirala/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
# COPY script and make it executable
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN ["chmod", "+x", "/usr/share/kafka-connect-script/plugins-config.sh"]
#entrypoint
ENTRYPOINT [ "./usr/share/kafka-connect-script/plugins-config.sh" ]
and the following bash script
#!/bin/bash
#script to configure kafka connect with plugins
#export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
#export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=30
echo "Waiting for Kafka Connect to start listening on localhost"
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT"
while [[ $(eval $curl_command) -eq 000 ]]
do
echo "In"
echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter"
echo "Going to sleep for $sleep_second seconds"
# sleep $sleep_second
echo "Finished sleeping"
# ((sleep_second_counter+=$sleep_second))
echo "Finished counter"
done
echo "Out"
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
I try to run the container and use docker logs to see what's happening, and I expect the script to run and wait until Kafka Connect has started. But after a few seconds the script (or something, I don't know exactly what) hangs and I don't see any console prints anymore.
I am a bit lost about what is wrong, so I need some guidance on what I am missing, or whether this is simply not the correct way to do it.
What I am trying to do
I want to have logic that waits for Kafka Connect to start and then runs this curl command:
curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors
PS: I cannot use the docker-compose way of doing this, since in some places I have to use docker run.
The problem here is that the ENTRYPOINT runs when the container starts and prevents the default CMD (which launches the Kafka Connect server) from running: the script loops waiting for the server to be up, but the server never starts because the CMD never runs, so the script loops forever.
You need to do one of the following:
start the Kafka Connect server in your ENTRYPOINT and your script in CMD, or run your script outside the container ....
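For illustration only (not from the answer above, and /etc/confluent/docker/run is an assumption about the base image's stock launcher), a single wrapper used as the ENTRYPOINT could start the worker itself, wait for its REST API, and only then register the connector:

#!/bin/bash
# hypothetical wrapper: start the Connect worker in the background,
# poll its REST API, register the connector, then keep the worker in the foreground
/etc/confluent/docker/run &                 # assumed stock launcher of confluentinc/cp-kafka-connect
connect_pid=$!

url="http://${CONNECT_REST_ADVERTISED_HOST_NAME:-localhost}:${CONNECT_REST_PORT:-8083}/connectors"

until [ "$(curl -s -o /dev/null -w '%{http_code}' "$url")" = "200" ]; do
    echo "$(date) waiting for Kafka Connect REST API at $url"
    sleep 5
done

# placeholder connector name; config copied from the question
curl -X POST -H "Content-Type: application/json" \
    --data '{"name":"snmp-source","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' \
    "$url"

wait "$connect_pid"                         # keep the container alive on the worker process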
I have very little experience with Bash but here is what I am trying to accomplish.
I have two different text files with a bunch of server names in them. Before installing any Windows updates and rebooting the servers, I need to disable all the Nagios host/service alerts for them.
host=/Users/bob/WSUS/wsus_test.txt
password="my_password"
while read -r host
do
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1
This is a reduced version of my current code, which works as intended; however, we have servers in several regions. Each server name is prefixed with a 3-letter region code (e.g., LAX, NYC). Secondly, we have a Nagios server in each region, so the code above needs to connect to the correct regional Nagios server based on the server name being passed in.
I tried adding 4 test servers into a text file and just adding a line like this:
if grep lax1 /Users/bob/WSUS/wsus_text.txt; then
<same command as above but with the regional nagios server name>
fi
This doesn't work as intended, and nothing is actually disabled/enabled via the API calls. Again, I've done very little with Bash, so any pointers would be appreciated.
Extract the region from the host name and use it in the Nagios URL, like this:
while read -r host; do
region=$(cut -f1 -d- <<< "$host")
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios-$region.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1
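If the three-letter prefix does not translate directly into a resolvable Nagios host name, a small lookup table is an alternative; here is a minimal sketch with made-up Nagios server names:

#!/bin/bash
password="my_password"

# map each 3-letter region prefix to its regional Nagios server (names are placeholders)
declare -A NAGIOS_BY_REGION=(
    [lax]="nagios-lax.fqdn.here"
    [nyc]="nagios-nyc.fqdn.here"
)

while read -r host; do
    region=$(cut -c1-3 <<< "$host")          # first three letters of the server name, e.g. lax
    nagios=${NAGIOS_BY_REGION[$region]:?no Nagios server mapped for region $region}
    curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" \
        "https://$nagios/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1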
Hello, professionals!
There was a good and simple script idea to make a mysqldump of every database, taken from
dump all mysql tables into separate files automagically?
author - https://stackoverflow.com/users/1274838/elias-torres-arroyo
with the script as follows:
#!/bin/bash
# Optional variables for a backup script
MYSQL_USER="root"
MYSQL_PASS="PASSWORD"
BACKUP_DIR="/backup/01sql/";
# Get the database list, exclude information_schema
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -v information_schema)
do
# dump each database in a separate file
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"
done
but the problem is that this script does not "understand" arguments like
--add-drop-database
to perform
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" --add-drop-database | gzip > "$BACKUP_DIR/$db.sql.gz"
Is there any way to make this script understand the additional arguments listed under
mysqldump --help
because all my tests show that it doesn't.
Thank you in advance for any hint!
--add-drop-database works only with --all-databases or --databases.
Please see the reference in the docs.
So in your case the mysqldump utility ignores that option, because you are dumping one database at a time.
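A minimal sketch of the adjusted loop body (same variables as the script above): passing the name through --databases makes --add-drop-database take effect:

# inside the same for-db loop as above; --databases adds the CREATE DATABASE/USE
# statements that --add-drop-database needs to hook into
mysqldump -u "$MYSQL_USER" --password="$MYSQL_PASS" \
    --add-drop-database --databases "$db" \
    | gzip > "$BACKUP_DIR/$db.sql.gz"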
I need a shell script that will log in to a remote FTP server, get the list of files present in the root folder only, identify the XML files among them, and fetch those files to the local system.
Login credentials can be mentioned in the script itself. This script must be run only once a day.
Please help me with a UNIX Bash shell script.
Thanks
script:
#!/bin/bash
SERVER=ftp://myserver
USER=user
PASS=password
EXT=xml
DESTDIR=/destinationdir
listOfFiles=$(curl "$SERVER" --user "$USER:$PASS" 2> /dev/null | awk '{ print $9 }' | grep -E "\.$EXT$")
for file in $listOfFiles
do
    curl "$SERVER/$file" --user "$USER:$PASS" -o "$DESTDIR/$file"
done
For a scheduled run every day, use crontab:
crontab -e
to edit your current jobs, and add, for example:
0 0 * * * bash /path/to/script
which means: run the script every day at midnight.
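A small variation on the listing step (not from the answer above): curl's --list-only flag asks the FTP server for a name-only listing, which avoids parsing the long-listing columns with awk. A sketch under the same variables as the script above:

# --list-only returns bare file names for FTP directories, so no awk is needed
listOfFiles=$(curl --list-only "$SERVER/" --user "$USER:$PASS" 2> /dev/null | grep -E "\.$EXT$")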
If you can install ncftpget, this is a one-line operation:
ncftpget -u user -p password ftp.remote-host.com /my/local/dir '/*.xml'