I have a bash script that reads 10 lines from a file; each line contains the name of a server that I need to boot up. The boot process takes a long time and prints a lot of output. I need to send a Ctrl+C to the script so it can proceed to the next server; otherwise the loop spends the entire boot time on each server.
Here is the actual screen output during the boot process. It takes more than 30 minutes to complete on each server; the loop can continue if Ctrl+C is sent to the script/automation:
$ eb restore environment_id
INFO: restoreEnvironment is starting.
-- Events -- (safe to Ctrl+C) Use "eb abort" to cancel the command.
2017-01-01 12:00 restoreEnvironment is starting
2017-01-01 12:15 Environment health has transitioned to Pending. Initialization in progress (running for 28 seconds). There are no instance
2017-01-01 12:20 Created security group named: sg-123123123
2017-01-01 12:22 Created load balancer named: awseb-e-3-qweasd2-DSLFLSFJHLS
2017-01-01 12:24 Created security group named: sg-123123124
2017-01-01 12:26 Created Auto Scaling launch configuration named: awseb-e-DSLFLSFJHLS-stack-AWSEBAutoScalingLaunchConfiguration-DSLFLSFJHLS
2017-01-01 12:28 Added instance [i-01askjdkasjd123] to your environment.
2017-01-01 12:29 Created Auto Scaling group named: awseb-e-DSLFLSFJHLS-stack-AWSEBAutoScalingLaunchConfiguration-DSLFLSFJHLS
2017-01-01 12:30 Waiting for EC2 instances to launch. This may take a 30 minutes
2017-01-01 13:15 Successfully launched environment: pogi-server
Here is my working script
#!/bin/bash
DIR=/jenkins/workspace/restore-all
INSTANCE_IDS=/jenkins/workspace/environment-ids
EB_FILE=$DIR/server.txt
echo "PROCEEDING TO WORK DIRECTORY"
cd $DIR ; pwd
echo ""
echo "CREATING A CODE FOLDER"
mkdir $DIR/code ; pwd ; ls -ltrh $DIR/code
echo ""
for APP in $(awk '{print $NF}' $EB_FILE | grep -v 0)
do
    echo "#########################################"
    echo "RESTORING = $APP"
    echo ""
    echo "COPYING BEANSTALK FILES"
    mkdir $DIR/code/$APP ; cd $DIR/code/$APP
    cp -pr $DIR/beanstalk/$APP/dev/.e* $DIR/code/$APP
    echo ""
    echo ""
    echo "TRIGGERING EB RESTORE"
    cd $DIR/code/$APP
    eb restore $(tail -1 $INSTANCE_IDS/$APP-dev.txt)
    echo ""
    echo "REMOVE CODE FOLDER"
    cd $DIR/code ; rm -rf $DIR/code/*
    echo ""
done
echo "REMOVE WORKSPACE FOLDER"
rm -rf $DIR/*
echo ""
Have you tried putting the eb restore command in the background with &?
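For reference, a minimal sketch of that pattern (assuming, hypothetically, that each line of server.txt is an environment id you can pass straight to eb restore):

#!/bin/bash
while read -r env_id; do
    eb restore "$env_id" > "${env_id}.log" 2>&1 &   # each restore runs in the background, logging to its own file
done < server.txt
wait   # block here until every background restore has finished
echo "all restores done"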
I took a stab at rewriting what you posted. I don't know if it will work because of a few uncertainties (i.e., what do the contents of server.txt look like, do you copy it into the restore-all directory every time, do you put beanstalk/*/dev/.e* into restore-all each time, etc.), but here it is:
#!/bin/bash
restore_all_dir=/jenkins/workspace/restore-all   ## NB: hyphens are not valid in bash variable names
InstanceIDs=/jenkins/workspace/environment-ids
EBFile=${restore_all_dir}/server.txt
echo -ne " PROCEEDING TO ${restore_all_dir} ..."
cd ${restore_all_dir}
echo -e "\b\b\b - done\n"
pwd
echo -ne " CREATING FOLDER ${restore_all_dir}/code ..."
mkdir ${restore_all_dir}/code
echo -e "\b\b\b - done\n"
pwd
ls -ltrh ${restore_all_dir}/code
for i in $(awk '!/0/ {print $NF}' ${EBFile}); do
    echo "#########################################"
    echo -e "#### RESTORING: ${i}\n"
    echo -en " COPYING BEANSTALK FILES ..."
    mkdir ${restore_all_dir}/code/${i}
    cd ${restore_all_dir}/code/${i}
    cp -pr ${restore_all_dir}/beanstalk/${i}/dev/.e* ${restore_all_dir}/code/${i}
    echo -e "\b\b\b - done\n"
    echo -en " TRIGGERING EB RESTORE ..."
    cd ${restore_all_dir}/code/${i}
    ## redirect output to a file and put the process into the background
    eb restore $(tail -1 ${InstanceIDs}/${i}-dev.txt) > ${restore_all_dir}/${i}_output.out 2>&1 &
    echo -e "\b\b\b - put into background\n output saved in ${restore_all_dir}/${i}_output.out\n"
    ## Is this necessary in every iteration, or can it be excluded and just removed with the final rm -rf command?
    ## echo -en " REMOVE CODE FOLDER ..."
    ## cd ${restore_all_dir}/code
    ## rm -rf ${restore_all_dir}/code/*
    ## echo -e "\b\b\b - done\n"
done
## this will print "working..." until all background eb restore jobs are no longer "Running"
## (a plain `wait` would also do the job, just without the progress indicator)
echo -en "\n working..."
while jobs | awk '/Running/ && /eb restore/ {found=1} END {exit !found}'; do
    echo -en "\b\b\b\b\b\b\b\b\b\b\b working..."
    sleep 5
done
echo -e "\b\b\b\b\b\b\b\b\b\b\b - done \n"
## this will also remove the output files unless they are put somewhere else
echo -en " REMOVE WORKSPACE FOLDER ..."
rm -rf ${restore_all_dir}/*
echo -e "\b\b\b - done\n"
I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You can try the entr tool to run arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when the file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you have to use it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js"   # might want to change this
function block_for_change {
    inotifywait --recursive \
        --event modify,move,create,delete \
        "$DIRECTORY_TO_OBSERVE"
}
BUILD_SCRIPT=build.sh   # might want to change this too
function build {
    bash "$BUILD_SCRIPT"
}
build
while block_for_change; do
    build
done
This uses inotify-tools. Check the inotifywait man page for how to customize what triggers the build.
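For example, if you only want to rebuild once a file has been fully written rather than on every modification, you could narrow the event list (close_write is a standard inotify event):

function block_for_change {
    inotifywait --recursive --event close_write "$DIRECTORY_TO_OBSERVE"
}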
Use inotify-tools.
The linked GitHub page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
--timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
-e close_write /tmp/test |
while read -r date time dir file; do
changed_abs=${dir}${file}
changed_rel=${changed_abs#"$cwd"/}
rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
username@example.com:/backup/root/dir && \
echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to get the last-change time of a file and runs a command whenever that time changes (i.e., whenever the file is modified).
#!/bin/bash
while true
do
    ATIME=$(stat -c %Z /path/to/the/file.txt)
    if [[ "$ATIME" != "$LTIME" ]]
    then
        echo "RUN COMMAND"
        LTIME=$ATIME
    fi
    sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XP by judicious application of tail -f.
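A toy sketch of that idea, reacting to each line as it is appended to a (hypothetical) log file:

# run a command for every new line appended to app.log
tail -f app.log | while read -r line; do
    echo "saw: $line"   # replace with your command
done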
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1"          # name of file
command="${*:2}"   # command to run on change (takes rest of line)
t1="$(ls --full-time "$file" | awk '{ print $7 }')"    # get latest save time
while true
do
    t2="$(ls --full-time "$file" | awk '{ print $7 }')"   # compare to new save time
    if [ "$t1" != "$t2" ]; then t1="$t2"; $command; fi    # if different, run command
    sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: The above was tested on Ubuntu 12.04; for Mac OS, change the ls lines to:
"$(ls -lT "$file" | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
    if [ -z "$1" -o -z "$2" ]; then
        echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
    elif ! [ -r "$1" ]; then
        echo "Can't react to $1, permission denied"
    else
        TARGET="$1"; shift
        ACTION=("$@")   # note: "$@" is the remaining arguments; "$#" would be the argument *count*
        while sleep 1; do
            ATIME=$(stat -c %Z "$TARGET")
            if [[ "$ATIME" != "${LTIME:-}" ]]; then
                LTIME=$ATIME
                "${ACTION[@]}"
            fi
        done
    fi
}
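Then call it with the file to watch followed by the command to run, e.g. (hypothetical names):

react build.sh bash build.sh   # re-run build.sh whenever it is saved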
A quick solution for fish shell users who want to track a single file:
while true
    set old_hash $hash
    set hash (md5sum file_to_watch)
    if test "$hash" != "$old_hash"
        command_to_execute
    end
    sleep 1
end
Replace md5sum with md5 if on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do this for you.
Here is a common sample:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
while read path action file; do
if [[ "$file" =~ .*rst$ ]]; then # if suffix is '.rst'
echo ${path}${file} ': '${action} # execute your command
echo 'make html'
make html
fi
done
Suppose you want to run rake test every time you modify any Ruby file ("*.rb") in the app/ and test/ directories.
Just get the most recent modification time of the watched files and check every second whether that time has changed.
Script code
t_ref=0
while true; do
    t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1)
    if [ "$t_ref" != "$t_curr" ]; then
        t_ref=$t_curr
        rake test
    fi
    sleep 1
done
Benefits
You can run any command or script when the file changes.
It works across filesystems and between virtual machines (shared folders on VirtualBox using Vagrant), so you can use a text editor on your MacBook and run the tests on Ubuntu (VirtualBox), for example.
Warning
The -printf option works well on Ubuntu, but does not work on macOS.
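If you need the same thing on macOS, one possible workaround is to have BSD stat print the epoch mtimes instead of relying on -printf (a sketch, assuming stat -f "%m" as on macOS/BSD):

t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f "%m" {} + | sort -rn | head -n1)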
I've installed ClamAV on my web server.
I know that there are a lot of shell scripts that do a daily scan,
but unfortunately I can't understand their contents :)
I want to create a simple bash script that scans the /home directory and sends me an email if there are infected files:
#!/bin/bash
var=$(clamscan -i /home &> /dev/null)
if [[ $var != *Infected files: 0* ]]
then
echo "Subject: There are infected files" | sendmail -v root
fi
But the previous code doesn't work well.
Note: the problem is in the code, not in file permissions.
UPDATE
The final working code:
#!/bin/bash
scanoutput=$(clamscan -ri /home 2>&1)
if [ $? -gt 0 ]; then
echo -e "Subject: ClamScan: there are infected files\nTo: root\n\n$scanoutput" | /usr/sbin/sendmail -t
fi
Try using the exit code instead of the string output.
The clamscan manual documents the following return codes:
RETURN CODES
0 : No virus found.
1 : Virus(es) found.
2 : Some error(s) occurred.
So you can try something like:
#!/bin/bash
clamscan -i /home &> /dev/null
clamscan_exit_code="$?"
if [[ "${clamscan_exit_code}" == '1' ]]; then
echo "Subject: There are infected files" | sendmail -v root
fi
You can also send a specific email when something goes wrong during the scan by checking for exit code 2.
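For example, a sketch extending the snippet above:

if [[ "${clamscan_exit_code}" == '1' ]]; then
    echo "Subject: There are infected files" | sendmail -v root
elif [[ "${clamscan_exit_code}" == '2' ]]; then
    echo "Subject: clamscan reported errors" | sendmail -v root
fi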
You have to quote or escape the spaces in the string you want to compare against:
[[ $var != *"Infected files: 0"* ]]
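Without the quotes, bash stops parsing the pattern at the first space and reports a syntax error in the conditional expression:

var="Infected files: 0"
[[ $var != *Infected files: 0* ]]     # error: bash expects a binary operator after '*Infected'
[[ $var != *"Infected files: 0"* ]]   # correct: the quoted part is matched literally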
This [answer] suggests another approach to solving your problem; I've slightly modified it to the form below:
#!/bin/bash
clamscan --quiet /home 2>/dev/null || echo "System infected" | sendmail -v root
PS: Fixed wrong logical expression
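Note that || fires on any nonzero exit code, so this also sends mail when the scan merely errors out (exit code 2). If you want mail only for actual infections, a sketch that checks the exit code explicitly:

#!/bin/bash
clamscan --quiet /home 2>/dev/null
if [ "$?" -eq 1 ]; then   # 1 = virus(es) found, per the return codes above
    echo "System infected" | sendmail -v root
fi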
I have an FTP server with thousands of directories. What I want to do is download a specific number of them (for example, 500 directories) using a shell script. How can I do that? I tried wget with the -Q option, for example wget -Q25MB, which limits the download to 25MB of data. The problem is that each folder has a different size, so using this command can stop the download in the middle of a folder.
Assuming wget returns an error when the download gets interrupted:
#!/bin/bash
to_del=() # empty to_del array in case you want to copy-paste this to a terminal instead of using a file
username=blablabla
password=blablabla
server=blablabla
printf -v today '%(%Y_%m_%d)T'
# Get the first 500 directory names to download (. and .. are already filtered out by the grep)
ftp -n "$server" << EOF | grep -v '^\.\.\?$' | head -n 500 > "to_download_$today.txt"
user $username $password
ls
bye
EOF
# Then, you can download each folder one by one:
while read -r dir; do
if [[ -e $dir ]]; then
echo >&2 "WARNING: '$dir' already exists!"
continue # We don't download or remove it. Manual action needed
fi
if wget -r "ftp://$username:$password@$server/$dir"; then
to_del+=("$dir")
else
# A directory was not successfully downloaded, we delete the temporary files
echo >&2 "WARNING: '$dir' download failed, skipping..."
rm -rf "$dir"
fi
done < "to_download_$today.txt"
# Now, delete the successfully downloaded folders using a single FTP connection
{
printf 'user %s %s\n' "$username" "$password"
for dir in "${to_del[@]}"; do
printf 'del %s\n' "$dir"
done
printf 'bye\n'
} | ftp -i -n "$server"
I've recently been introduced to bash scripting... So I've used my advanced theft course to throw together the attached script. It runs... and exits with "/xxx/ not mounted. You are not root!" I have rdiff-backup and sshfs installed and working. The commands work fine on their own on the command line, but in the script, well... Can you guys take a look and let me know? PS: I copied a LOT of this from scripts I found here and a few other places.
#!/bin/bash
# Version 1.5
# Prior to running this, make sure you have run ssh-keygen -t rsa to generate a key, then
# ssh username@target "mkdir .ssh/; chmod 700 .ssh"
# scp .ssh/id_rsa.pub username@target:.ssh/authorized_keys
#
# then check you can login and accept the ssh key
# ssh username@target "ls -la"
#
# Key things to remember, no spaces in pathnames, and try to use full paths (beginning with / )
#
# variables determine backup criteria
DATESTAMP=$(date +%d%m%y)
USERNAME=username #remote site username here
TARGET=remote.ip.here #add the ip v4 address of the target
INCLUDES=/path/to/file/includes.txt #this is a txt file containing a list of directories you want backed up
EXCLUDES="**" #this is a list of files etc you want to skip
BACKUPLOG=/path/to/logfile/in/home/backuplog${DATESTAMP}.txt
OLDERTHAN=20W #change 20 to reflect how far back you want backups to exist
# to activate old backup expiry, uncomment the line below
#RMARGS=" --force --remove-older-than ${OLDERTHAN}"
TARGETMAIL="yourmailaddress@your.domain"
HOSTNAME=$(hostname) # don't change this!
TMPDIR=/backups # change this to the source folder
TARGETFOLDER=/backups # change this to the TARGET folder
ARGS=" -v0 --terminal-verbosity 0 --exclude-special-files --exclude-other-filesystems --no-compression"
# detecting distro and setting the correct path
if [ -e /etc/debian_version ];then
NICE=/usr/bin/nice
elif [ -e /etc/redhat-release ];then
NICE=/bin/nice
fi
if [ -e /tmp/backup.lock ];then
exit 0
fi
touch /tmp/backup.lock
touch -a ${BACKUPLOG}
cd /
/bin/mkdir -p ${TMPDIR}
/usr/bin/sshfs -o idmap=user ${USERNAME}@${TARGET}:/${TARGETFOLDER} ${TMPDIR} &>${BACKUPLOG}
# if you get errors mounting this then try
# mknod /dev/fuse -m 0666 c 10 229
for ITEMI in ${INCLUDES} ; do
ARGS="${ARGS} --include ${ITEMI} "
done
for ITEME in ${EXCLUDES} ; do
ARGS="${ARGS} --exclude-regexp '${ITEME}' "
done
# the --exclude '**' plus syncing / is a hack because rdiff-backup won't do multiple dirs by default, so use --include for all dirs, then exclude everything else and sync / - if you don't understand, don't worry
# ref: http://www.mail-archive.com/rdiff-backup-users@nongnu.org/msg00311.html
#echo /usr/bin/rdiff-backup ${ARGS} --exclude \'**\' / ${TMPDIR}/ &&
while read DIR; do
    ${NICE} -19 /usr/bin/rdiff-backup --exclude '**' ${DIR} ${TMPDIR}/ &>>${BACKUPLOG}
    if [ $? != 0 ]; then
        echo "System Backup Failed" | mutt -s "Backup Log: System Backup Failed, Log attached!" -a ${BACKUPLOG} ${TARGETMAIL}
        rm /tmp/backup.lock # release the lock so the next run isn't blocked
        exit 1
    fi
done < ${INCLUDES}
#${NICE} -19 /usr/bin/rdiff-backup ${ARGS} --exclude '**' / ${TMPDIR}/ &>${BACKUPLOG} &&
echo "Removing backups older than ${OLDERTHAN}"
${NICE} -19 /usr/bin/rdiff-backup -v0 --terminal-verbosity 0 ${RMARGS} ${TMPDIR}/ &>>${BACKUPLOG}
/bin/umount ${TMPDIR} && /bin/rm -rf ${TMPDIR}/ &>>${BACKUPLOG}
echo "System Backup Run" | mutt -s "Backup Log: System Backup Done!" -a ${BACKUPLOG} ${TARGETMAIL}
rm /tmp/backup.lock
rm ${BACKUPLOG}
Sorry, cannot paste, couldn't attach... BLIKSEM!
Thanks for ANY input... One HELL of a learning curve!!!
Regards,
B.