This is a follow-up to an earlier question about a terminal script for Minecraft that starts a server with 1 GB of RAM and then begins a 30-minute loop that makes regular backups of the server map.
This is the code I'm currently working with:
cd /Users/userme/Desktop/Minecraft
java -Xmx1024M -Xms1024M -jar minecraft_server.jar & bash -c 'while [ 0 ]; do cp -r /Users/userme/Desktop/Minecraft/world /Users/userme/Desktop/A ;sleep 1800;done'
Obviously, this loop saves the backups in directory "A" under the name "world". Is there a modification I can make so that the script counts the number of loops it has made and appends that count to each backup's name, for example world5 or world12? A modification that gets rid of old backups would be nice as well.
I broke it down into separate lines for better readability. If you want to put it all back on one line, add the ; back in where appropriate:
counter=0
while true
do
    counter=$((counter+1))
    # if a backup with this number is left over from a previous run,
    # remove it first (it is a directory, so it needs rm -rf, not rm -f)
    if [ -e /Users/userme/Desktop/A/world"$counter" ]; then
        rm -rf /Users/userme/Desktop/A/world"$counter"
    fi
    cp -r /Users/userme/Desktop/Minecraft/world /Users/userme/Desktop/A/world"$counter"
    sleep 1800
done
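If you would rather keep a handful of recent backups instead of just the latest one, here is a minimal sketch of that idea (the KEEP value of 5 is an arbitrary assumption, and the paths are the ones from your question): each new backup deletes the one made KEEP iterations earlier.
counter=0
KEEP=5    # assumption: how many backups to keep around
while true
do
    counter=$((counter+1))
    cp -r /Users/userme/Desktop/Minecraft/world /Users/userme/Desktop/A/world"$counter"
    # remove the backup made KEEP iterations ago, if any
    old=$((counter-KEEP))
    if [ "$old" -ge 1 ] && [ -e /Users/userme/Desktop/A/world"$old" ]; then
        rm -rf /Users/userme/Desktop/A/world"$old"
    fi
    sleep 1800
done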
First, sorry for the title; it's really difficult to summarize my problem in a catchy phrase.
I would like to build a bash script which makes a backup of a target directory; the backup is sent to a server over ssh.
My strategy is to use rsync, which does the copy properly and supports ssh connections.
BUT the problem is that when I use the rsync command to copy large amounts of data, it takes some time, and during this time I want to print a loader.
My question is: how can I print a loader while rsync runs, and stop the loader when the copy finishes? I tried to launch rsync in the background with &, but I don't know how to communicate with that process.
My script :
#!/bin/bash
#Strat : launch in local machine rsync
#$1 contain the source file ex : cmd my_dir
function loader(){
local i sp n
sp[0]="."
sp[1]=".."
sp[2]="..."
sp[3]=".."
sp[4]="."
for i in "${sp[@]}"
do
echo -ne "\b$i"
sleep 0.1
done
}
#main
if [ $# -ne 1 ]; then
help #a function not detailed here
exit 1
else
if [ $1 = "-h" ]; then
help
else
echo "==== SAVE PROGRAM ===="
echo "connection in progress ..."
sleep 1 #esthetic only
#I launch the copy, it works fine. rsync is launched in non-verbose mode
rsync -az "$1/" -e "ssh -p 22" serveradress:"SAVE/" &
w=$! #pid of the rsync command above
#some code here to "wait" for rsync to finish; during this time I would like to print some loading animation in my terminal (the loader function).
while [ #condition ??? ]
do
loader
done
wait $w
echo "Copy complete !"
fi
fi
Thanks for the help.
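One way to do the waiting part, sketched on the assumption that you keep the structure above: after backgrounding rsync and saving its PID in w, poll the PID with kill -0 (which only tests whether the process still exists) and run the loader between checks; wait then collects rsync's exit status.
rsync -az "$1/" -e "ssh -p 22" serveradress:"SAVE/" &
w=$!                                # pid of the background rsync
# keep animating while the rsync process is still alive
while kill -0 "$w" 2>/dev/null
do
    loader
done
wait "$w"                           # collect rsync's exit status
echo "Copy complete !"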
I am trying to make the script below execute a Restore binary between 17:00 and 07:00 for each folder whose name starts with EAR_* in /backup_local/ARCHIVES/, but for some reason it is not working as expected: the for loop does not break when the time condition becomes invalid.
Should I add the while loop inside the for loop?
#! /usr/bin/bash
#set -x
while :; do
currenttime=$(date +%H:%M)
if [[ "$currenttime" > "17:00" ]] || [[ "$currenttime" < "07:00" ]]; then
for path in /backup_local/ARCHIVES/EAR_*; do
[ -d "${path}" ] || continue # if not a directory, skip
dirname="$(basename "${path}")"
nohup /Restore -a /backup_local/ARCHIVES -c -I 0 -force -v > /backup_local/$dirname.txt &
wait $!
if [ $? -eq 0 ]; then
rm -rf $path
rm /backup_local/$dirname.txt
echo $dirname >> /backup_local/completed.txt
fi
done &
else
echo "Restore can be ran only outside working hours!"
break
fi
done &
Your script looks like this in pseudo-code:
START
IF inside working hours
    EXIT
ELSE
    RUN /Restore FOR EACH backupdir
    GOTO START
The script only checks the time once, before starting a restore run (which will call /Restore for each directory to restore in a for loop)
It will continue to start the for loop, until the working hours start. Then it will exit.
E.g. if you have 3 folders to restore, each taking 2 hours, and you start the script at midnight: the script checks whether it's outside working hours (it is) and starts the restore of the first folder (at 0:00); after two hours of work it starts the restore of the 2nd folder (at 2:00), and after another two hours the restore of the 3rd folder (at 4:00). Once the 3rd folder has been restored, it checks the working hours again. Since it's now only 6:00, i.e. still outside working hours, it starts the restore of the first folder again (at 6:00), then the 2nd folder (at 8:00), then the 3rd folder (at 10:00).
It's noon when it does the next check against the working hours; since 12:00 falls within 7:00..17:00, the script will now stop, with an error message.
You probably only want the restore to run once for each folder, and stop proceeding to the next folder if the working hours start.
#!/bin/bash
for path in /backup_local/ARCHIVES/EAR_*/; do
    currenttime=$(date +%H:%M)
    # note: both sides must be zero-padded ("07:00"), otherwise the
    # lexicographic comparison gives wrong results for times like 08:15
    if [[ "$currenttime" > "07:00" ]] && [[ "$currenttime" < "17:00" ]]; then
        echo "Not restoring inside working hours!" 1>&2
        break
    fi
    dirname="$(basename "${path}")"
    /Restore -a /backup_local/ARCHIVES -c -I 0 -force -v > "/backup_local/$dirname.txt"
    # handle exit code
done
update
I've just noticed your liberal spread of & for backgrounding jobs.
This is presumably to allow running the script from a remote shell. Don't do that.
What this will really do is:
It will run all the iterations over the restore directories in parallel. This might create a bottleneck on your storage (if the directories to restore to/from share the same hardware).
It will background the entire loop-to-restore and immediately return to the out-of-hours check. If the check succeeds, it will spawn another loop-to-restore (and background it). Then it will return to the out-of-hours check and spawn yet another backgrounded loop-to-restore.
Before dawn you will probably have a few thousand background processes trying to restore directories. More likely, you will have exceeded your resources and the processes get killed.
My example script above has omitted all the backgrounding (and the nohup).
If you want to run the script from a remote shell (and exit the shell after launching it), just run it with
nohup restore-script.sh &
Alternatively you could use
echo "restore-script.sh" | at now
or use a cron job (if applicable), for example along the lines of the sketch below.
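A minimal crontab sketch, under the assumption that the script lives at /path/to/restore-script.sh and that you want it launched shortly after working hours end at 17:00 (path and schedule are placeholders to adjust):
# m  h  dom mon dow   command
  5 17  *   *   *     /path/to/restore-script.sh >> /backup_local/restore-cron.log 2>&1
Since the script refuses to run inside working hours anyway, a mis-timed entry would only print the error message and exit.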
The shebang contains an unwanted space. On my Ubuntu, bash is found at /bin/bash.
Check where yours is located:
type bash
The while loop breaks in my test; replace the #!/bin/bash path with the result of the previous command:
#!/bin/bash --
#set -x
while : ; do
currenttime=$(date +%H:%M)
if [[ "$currenttime" > "17:00" ]] || [[ "$currenttime" < "07:00" ]]; then
for path in /backup_local/ARCHIVES/EAR_*; do
[ -d "${path}" ] || continue # if not a directory, skip
dirname="$(basename "${path}")"
nohup /Restore -a /backup_local/ARCHIVES -c -I 0 -force -v > /backup_local/$dirname.txt &
wait $!
if [ $? -eq 0 ]; then
rm -rf $path
rm /backup_local/$dirname.txt
echo $dirname >> /backup_local/completed.txt
fi
done &
else
echo "Restore can be ran only outside working hours!"
break
fi
done &
I have the following bash script, which I launch from the terminal.
dataset_dir='/home/super/datasets/Carpets_identification/data'
dest_dir='/home/super/datasets/Carpets_identification/augmented-data'
# if dest_dir does not exist -> create it
if [ ! -d ${dest_dir} ]; then
mkdir ${dest_dir}
fi
# for all folder of the dataset
for folder in ${dataset_dir}/*; do
curr_folder="${folder##*/}"
echo "Processing $curr_folder category"
# get all files
for item in ${folder}/*; do
# if the class dir in dest_dir does not exist -> create it
if [ ! -d ${dest_dir}/${curr_folder} ]; then
mkdir ${dest_dir}/${curr_folder}
fi
# for each file
if [ -f ${item} ]; then
# echo ${item}
filename=$(basename "$item")
extension="${filename##*.}"
filename=`readlink -e ${item}`
# get a certain number of patches
for i in {1..100}
do
python cropper.py ${filename} ${i} ${dest_dir}
done
fi
done
done
It needs at least an hour to process all the files. What happens if I change the '100' to '1000' in the last for loop and launch another instance of the same script?
Will the first process count to 1000, or will it continue to count to 100?
I think the file is effectively read-only while a bash process is executing it, but you can force the change. The already running process will keep counting to its original value, 100.
You have to be careful about the results: both instances write to the same output directory, so expect side effects.
"When you make changes to your script, you make the changes on the disk(hard disk- the permanent storage); when you execute the script, the script is loaded to your memory(RAM).
(see https://askubuntu.com/questions/484111/can-i-modify-a-bash-script-sh-file-while-it-is-running )
BUT "You'll notice that the file is being read in at 8KB increments, so Bash and other shells will likely not load a file in its entirety, rather they read them in in blocks."
(see https://unix.stackexchange.com/questions/121013/how-does-linux-deal-with-shell-scripts )
So, in your case, your whole script is loaded into RAM by the interpreter and then executed, meaning that if you change the value and launch it again, the first instance will still use the "old" value.
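A hedged aside, since shells may read a script file in blocks while it runs: a common defensive pattern (a sketch, not something required by the answer above) is to wrap the whole body in a braced group, so bash parses the entire block before executing it and later edits to the file cannot affect an instance that is already running:
#!/bin/bash
{
    # the real script body would go here; this loop is only a placeholder
    for i in {1..100}; do
        echo "iteration $i"
        sleep 1
    done
    exit   # exiting inside the block guarantees bash never reads the file past it
}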
I am trying to create a shell script, run from cron at 4am every day, that reads the size of squid's access.log file and rotates it if it is past a certain size (20 MB). Here is what I have so far:
#!/bin/sh
ymd=$(date '+%Y-%m-%d')
file=/var/squid/logs/access.log
minimumsize="20000000"
eval $(stat -s /var/squid/logs/access.log)
if [ $st_size > $minimumsize ]; then
cp /var/squid/logs/access.log /var/squid/logs/access_log_history/access.log.${ymd}
rm -fr /var/squid/logs/access.log
squid -k rotate
else
:
fi
The shell script runs but just rotates the log file regardless of size and creates a file named "20000000". That is it. Where am I going wrong here?
Instead of writing your own shell script, take a look at newsyslog(8), which does the same thing.
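As a hedged aside on why the original script misbehaves: inside single brackets, > is treated as output redirection rather than a comparison, which is why a file named "20000000" appears and the branch is always taken; a numeric test uses -gt. A minimal sketch of just the size check, reusing the BSD-style stat -s call and the paths from the question:
#!/bin/sh
ymd=$(date '+%Y-%m-%d')
minimumsize=20000000
eval $(stat -s /var/squid/logs/access.log)      # BSD stat: defines st_size
if [ "$st_size" -gt "$minimumsize" ]; then      # -gt, not >, for a numeric comparison
    cp /var/squid/logs/access.log /var/squid/logs/access_log_history/access.log.${ymd}
    squid -k rotate                             # let squid rotate/reopen its own log
fi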
I have created a bash script to record sound input from the line-in/mic port of the sound card; when sound is detected (the silence is broken), it records to a temp file, then moves it to a date-stamped file and adds it to a database.
What I need is a good way of making the script start at boot, keep it running over and over, and restart it if it fails to restart itself. Below is the code I have currently put together from various sources, and it works well so far, but I need it to be able to run 24/7 without any user interaction.
This is my first real bash script, so I would like more experienced input on the method I have used and whether it is right or wrong.
I did try to daemonize it via start-stop-daemon but ended up with multiple running scripts and sox commands. Currently I have it executed at boot from rc.local; personally I don't think re-invoking the script at the bottom of itself is the correct way to restart it... but I don't know of any other way.
Any sort of help is greatly appreciated.
#!/bin/bash
#Remove the temp file just in case
rm -rf temp.mp3
#Listen for audio and record
sox -d /home/user/temp.mp3 silence 1 5 8% 1 0:00:01 8%
#Check if temp.mp3 is greater than 800 bytes so we don't get blank recordings added to the
#database, if the file is below 800 bytes remove the file and restart.
for i in /home/user/temp.mp3 ; do
b=`stat -c %s "$i"`
if [ $b -ge 800 ] ; then
NAME=`date +%Y-%m-%d_%H-%M-%S`
TIME=`date +%H:%M:%S`
FILENAME=/var/www/Recordings/$NAME.mp3
FILEWWW=Recordings/$NAME.mp3
mv /home/user/temp.mp3 $FILENAME
rm -rf temp.mp3
mysql --host=localhost --user=root --password=pass database << EOF
insert into recordings (id,time,filename,active,status) values('NULL','$TIME','$FILEWWW','1','1');
EOF
else
rm -rf /home/user/temp.mp3
echo 'No sound detected, Restarting...'
fi
done
/home/user/vox
exit 0
To restart the script you can call it from crontab (see the Crontab Howto).
Did you try daemonizing your script?
Different operating systems have their own documentation on how to daemonize a script and add it to the system startup.
Looking at your question, I believe this is what you have to do. But be careful with system resources and include proper sleep times so as to minimize their use.
Alternatively, adding a cron job is also an option, as it would not be running in the background all the time; a minimal sketch of the cron route follows.
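Here is what that could look like, assuming the recording script lives at /home/user/vox and that your cron supports @reboot (the wrapper name, log path and sleep interval are placeholders): a tiny wrapper restarts the recorder whenever it exits, and a single @reboot crontab entry starts the wrapper at boot.
#!/bin/bash
# /home/user/vox-wrapper (hypothetical name): keep the recorder running
while true; do
    /home/user/vox
    sleep 5    # brief pause so a crash loop does not hammer the CPU
done
and in the crontab (crontab -e):
@reboot /home/user/vox-wrapper >> /home/user/vox.log 2>&1
With a wrapper like this, the self-call at the bottom of the recording script would no longer be needed.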
@Gurubaran Here is the new code. It used to fork off a new process, but now I loop via a function calling itself, so it doesn't fork; the sox command is a separate fork, but that is necessary, and I make sure I kill sox just in case. Does this all look OK? It seems to work very well. I need to test it properly when I receive the cable for what it will actually be used with, instead of my phone. The script is also daemonized via start-stop-daemon.
#!/bin/bash
pkill sox
function vox() {
rm -rf /home/user/temp.mp3
sox -d /home/user/temp.mp3 silence 1 5 4% 1 0:00:01 4%
wait
for i in /home/user/temp.mp3 ; do
b=`stat -c %s "$i"`
if [ $b -ge 800 ] ; then
NAME=`date +%Y-%m-%d_%H-%M-%S`
TIME=`date +%H:%M:%S`
FILENAME=/var/www/Recordings/$NAME.mp3
FILEWWW=Recordings/$NAME.mp3
mv /home/user/temp.mp3 $FILENAME
rm -rf /home/user/temp.mp3
mysql --host=localhost --user=root --password=pass database << EOF
insert into recordings (id,time,filename,active,status) values('NULL','$TIME','$FILEWWW','1','1');
EOF
else
rm -rf /home/user/temp.mp3
fi
done
vox
}
vox
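One hedged observation on the code above, offered as a sketch rather than a required change: calling vox from inside itself nests a new function call on every pass (bash does not optimize tail calls), so the same flow can be written as a plain while loop, which behaves identically but never deepens the call stack.
#!/bin/bash
pkill sox
while true; do
    rm -rf /home/user/temp.mp3
    sox -d /home/user/temp.mp3 silence 1 5 4% 1 0:00:01 4%
    # ... the existing size check / mysql insert block goes here unchanged ...
done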