I have a watchdog implemented in bash that restarts a service under certain conditions and moves the old logs to an old directory.
The problem is that I want to move the logs to old_1, old_2, ... if the previous one exists.
How can I implement this in bash?
You can search for the first non-existing log like this:
#!/bin/bash
# find the first suffix for which no log file exists yet
num=1
while [[ -f log_$num ]]; do
    ((num++))
done
echo "Fresh new: log_$num"
That is a pain to write and to make robust (missing folders, for instance, will break choroba's solution). This is why most systems that need log rotation simply suffix the file names with dates. I encourage you to do the same: it's easier to handle and also easier to retrieve a log afterwards.
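For instance, a minimal sketch of date-suffixed rotation (the log and old/ names are assumptions; adjust to your layout):
#!/bin/bash
# move the current log aside under a timestamped name;
# timestamps are unique per run, so no old_1/old_2 counting is needed
mkdir -p old
mv log "old/log_$(date +%Y%m%d_%H%M%S)"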
I have a line in my .zshrc file, source $SNIPPETS/*.zsh, and I know it's partially working because some functions defined in the directory work, but others do not unless I source their file explicitly.
What steps would I take to find where in the sourcing there is an early exit or break?
You can only source one file at a time. The first file is being sourced, but not the rest.
You'll need to use a loop in .zshrc:
for i in "$SNIPPETS"/*.zsh; do
[[ -e "$i" ]] && source "$i"
done
Today I learned that source only reads the first file it is given: you can pass it multiple names (and a wildcard expands into several), but only the first is sourced and the rest are treated as arguments to it.
None of my files were actually breaking.
Steps I took for debugging:
Started a new shell.
Enabled zprof (zmodload zsh/zprof).
Ran the source line: source $SNIPPETS/*.zsh
Realized that only one file was being sourced.
Re-scoped my search to 'zsh source wildcard'.
Found and implemented option one from this answer: https://stackoverflow.com/a/14680403/5724147
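A quick way to see the difference in a scratch shell (assuming $SNIPPETS holds more than one .zsh file):
source $SNIPPETS/*.zsh                             # only the first match is read;
                                                   # the rest become positional parameters
for f in "$SNIPPETS"/*.zsh; do source "$f"; done   # reads every file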
Hi, I am new to scripting, and I do mean a complete newbie. I am working on a script to automatically make a torrent with Nemo scripts.
#!/bin/bash
DIR="$NEMO_SCRIPT_SELECTED_FILE_PATHS"
BNAME=$(basename "$DIR")
TFILE="$BNAME.torrent"
TTRACKER="http://tracker.com/announce.php"
USER="USERNAME"
transmission-create -o "/home/$USER/Desktop/$TFILE" -t $TTRACKER "$DIR"
It does not work.
However if I replace
DIR="$NEMO_SCRIPT_SELECTED_FILE_PATHS"
with
DIR="absolutepath"
then it works like a charm. It creates the torrent on the desktop with the tracker I want. I think this would come in handy for many people. I don't really know what else to add. If you have questions, please ask. Again, complete newbie.
The $NEMO_SCRIPT_SELECTED_FILE_PATHS is the same as $NAUTILUS_SCRIPT_SELECTED_FILE_PATHS. It's populated by nemo/nautilus when you run the script and contains a newline-delimited (I think) list of the selected files/folders. Assuming you are selecting only one file or folder, I don't really see why it wouldn't work, unless the newline character is in there and causing problems. If that's the case, you may be able to strip it with sed. I'm not running nemo or nautilus, so I can't test it.
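If the trailing newline is indeed the problem, here is a minimal sketch of stripping it without sed (untested for the same reason):
# drop a single trailing newline from the variable, if present
DIR="${NEMO_SCRIPT_SELECTED_FILE_PATHS%$'\n'}"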
I finally found the solution to both your problem and mine: https://askubuntu.com/questions/243105/nautilus-scripts-nautilus-script-selected-file-paths-have-problems-with-spac
The variable $NEMO_SCRIPT_SELECTED_FILE_PATHS/$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS is a list of paths/filenames separated by newlines. This messes up anything that assumes it's just one filename, even if it is.
#!/bin/bash
# handle one selected path per line
echo "$NEMO_SCRIPT_SELECTED_FILE_PATHS" | while IFS= read -r DIR; do
    BNAME=$(basename "$DIR")
    TFILE="$BNAME.torrent"
    TTRACKER="http://tracker.com/announce.php"
    USER="USERNAME"
    transmission-create -o "/home/$USER/Desktop/$TFILE" -t "$TTRACKER" "$DIR"
done
Notice that it seems to do an extra pass for the trailing newline. You either need to filter that out or add a check that the file/folder exists, as in the sketch below.
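A guard at the top of the loop body does the trick (a sketch, assuming the empty entry comes from that trailing newline):
# skip the empty entry produced by the trailing newline
[ -e "$DIR" ] || continue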
I've written a script that backs up my financial spreadsheet document to another hard drive and another computer. I have also set up my server to send cron job messages to my email instead of system mail. In my script, I can't figure out how to use if/then to compare the date of the backed-up file with the current date. Here is the script:
#!/bin/bash
mount -t cifs //IP/Share /Drive/Directory -o username=username,password=password
cp /home/user/Desktop/finances10.ods /media/MediaConn/financesbackup/Daily\ Bac$
cp /home/user/Desktop/finances10.ods /Drive/Directory/FinancesBackup/Daily\ Backup/
umount /Drive/Directory
export i=`stat -c %y /media/MediaConn/financesbackup/Daily\ Backup/finances10.o$
export j=`date +%d`
if ["$i"="$j"]; then
echo Your backup has completed successfully. Please check the Daily Backup fo$
echo This message is automated by the system and mail is not checked.
else
echo Your backup has failed. Please manually backup the financial file to the$
echo drive. This message is automated by the system and mail is not checked.
fi
Pretty simple script. The output is sent by email because it's a cron job. If anyone could help, I would greatly appreciate it. Thanks in advance!
Your code is all messed up in the post, but anyway... you should probably compare the output of 'stat -c %Y' (not %y) to the output of 'date +%s' (not %d).
But, even better, use something like md5sum or sha1sum to make sure the backed up file really matches the original.
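A sketch of both ideas, reusing the paths from your script (the 24-hour freshness window is an assumption; adjust to taste):
# compare modification time with "now": fresh if modified within a day
mtime=$(stat -c %Y "/media/MediaConn/financesbackup/Daily Backup/finances10.ods")
now=$(date +%s)
if [ $(( now - mtime )) -lt 86400 ]; then
    echo "Backup file was modified within the last 24 hours."
fi
# or verify the copy matches the original byte for byte
src="/home/user/Desktop/finances10.ods"
dst="/media/MediaConn/financesbackup/Daily Backup/finances10.ods"
if [ "$(md5sum < "$src")" = "$(md5sum < "$dst")" ]; then
    echo "Backup matches the original."
fi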
I would strongly recommend checking that each command in your script has succeeded. Otherwise the script will carry on blindly and (at best) exit with a success code or (at worst) do something completely unexpected.
Here's a decent tutorial on capturing and acting on exit codes.
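For instance, a minimal fail-fast sketch using your mount and copy steps (the messages are illustrative):
# abort immediately if the mount fails, so nothing is copied to a missing target
mount -t cifs //IP/Share /Drive/Directory -o username=username,password=password || {
    echo "Mount failed; aborting backup."
    exit 1
}
cp /home/user/Desktop/finances10.ods "/Drive/Directory/FinancesBackup/Daily Backup/" || {
    echo "Copy to network drive failed."
    exit 1
}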
This line needs spaces around the brackets and equal sign:
if [ "$i" = "$j" ]; then
There's no need to export the variables in this context.
You should use the format codes that vanza suggested, since both yield the number of seconds since the Unix epoch and can therefore be compared directly.
You should put quotes around the messages that you echo:
echo "some message"
When you pasted your code (apparently copied from nano), it got truncated. It might work better if you list it using less and copy it from there, since less is likely to wrap lines rather than truncate them.
Thanks for all of the input. Sorry, I did copy and paste from nano, lol; I didn't realize it was truncated. All of your advice was very helpful. I was able to do what I wanted using the format I had, just by putting the spaces around the brackets and equal sign. I've never used md5sum or sha1sum but will check them out. Thanks again for all your help, it's working great now!
I have many directories and need to delete them periodically, in as little time as possible.
Additionally, for each directory I need to know the delete status, i.e. whether it was deleted successfully or not.
I need to write this in ksh. Could you please help me out?
The sample code which I am using, in which I tried launching rm -rf in the background, is not working:
for var1 in 1...10
rm -rf <DIR> &
pid[var1]=$!
done
my_var=1
for my_var in 1...var1
wait $pid[my_var]
if [ $? -eq 1 ]
then
echo falied
else
echo passed
fi
done
You are out of luck. The bottleneck here is the filesystem, and you are very unlikely to find a filesystem that performs atomic operations (like directory deletion) in parallel. No amount of fiddling with shell code is going to make the OS or the filesystem do its job faster. There is only one disk, and if every deletion requires a write to disk, it is going to be slow.
Your best bet is to switch to a journaling filesystem that does deletions quickly. I have had good luck with XFS deleting large files (10-40GB) quickly, but I have not tried deleting directories. In any case, your path to improved performance lies in finding the right filesystem, not the right shell script.
This is generally the form that your script should take, with corrections for the serious errors in syntax. However, as Norman noted, it's not going to do what you want. Also, wait isn't going to work in a loop like you seem to intend.
# this script still won't work (see Norman's answer: the disk is the bottleneck)
# note: in ksh the while loop runs in the current shell, so pid[] survives the
# pipe; in bash it would run in a subshell and the array would be lost
find . -type d | while IFS= read -r dir
# or: for dir in "${dirs[@]}"
do
    rm -rf "$dir" &
    pid[++var1]=$!
done
for (( my_var=1; my_var<=var1; my_var++ ))
do
    wait "${pid[my_var]}"
    if [ $? -ne 0 ]    # any non-zero status counts as failure
    then
        echo failed
    else
        echo passed
    fi
done
Today I saw first-hand the potential for a partial accidental deletion of a colleague's home directory (2 hours lost in a critical phase of a project).
I was worried enough about it to start thinking about the problem and a possible solution.
In his case a file named '~' somehow ended up in a test folder, which he later deleted with rm -rf... when rm arrived at that file, bash expanded it to his home folder (he managed to Ctrl-C almost in time).
A similar problem could happen if one has a file named '*'.
My first thought was to prevent the creation of files with "dangerous names", but that would still not solve the problem, as mv or other corner cases could lead to the risky situation as well.
My second thought was to create a listener (I don't know if this is even possible) or an alias of rm that checks which files it processes and, if it finds a dangerous one, skips it and emits a message.
Something similar to this:
take all non-option arguments (to get the files one wants to delete)
cycle over these items
check if the current item is equal to a dangerous item (say '~' or '*'); I don't know if this works: at this point, has the item already been expanded or not?
if so, echo a message and don't touch the file
otherwise proceed with the iteration
Third thought: has anyone already done or dealt with this? :]
There's actually pretty good justification for having critical files in your home directory checked into source control. As well as protecting against the situation you've just encountered it's nice being able to version control .bashrc, etc.
Since the shell probably expands the parameters before rm ever sees them, you can't really catch 'dangerous' names like that.
You could alias 'rm -rf' to 'rm -rfi' (interactive), but that can be pretty tedious if you actually mean 'rm -rf *'.
You could replace rm with something that moves its arguments to $HOME/.trash (a sketch follows below), and have a separate command to empty the trash, but that might cause problems if you really mean to remove the files, because of disk quotas or similar.
Or, you could just keep proper backups or use a file system that allows "undeletion".
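A minimal sketch of the trash idea (the function name and the .trash location are assumptions; a plain alias won't do, since aliases can't take arguments):
# move arguments into a trash directory instead of deleting them
trash() {
    mkdir -p "$HOME/.trash"
    mv -- "$@" "$HOME/.trash/"
}
alias rm=trash   # bypass with \rm or "command rm" when you really mean it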
Accidents do happen. You can only reduce their impact.
Both version control (regular check-ins) and backups are of vital importance here.
If I can't check in (because it does not work yet), I back up to a USB stick.
And as the deadline approaches, the backup frequency increases, because Murphy strikes at the most inappropriate moment.
One thing I do is always keep a file called "-i" in my $HOME: when a wildcard like '*' expands, the names are passed in sorted order, "-i" comes early, and rm reads it as its interactive option and prompts before deleting anything.
My other tip is to always use "./*" or find instead of a plain "*".
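Creating the guard file is a one-liner (the $HOME/ prefix also keeps touch from parsing "-i" as an option):
touch "$HOME/-i"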
The version control suggestion gets an upvote from me. I'd recommend that for everything, not just source.
Another thought is a shared drive on a server that's backed up and archived.
A third idea is buying everyone an individual external hard drive that lets them back up their local drive. This is a good thing to do because there are two kinds of hard drives: those that have failed and those that will in the future.
You could also create an alias for rm that runs through a simple script that escapes all characters, effectively stopping you from using wildcards. Then create another alias that runs the real rm without escaping. You would only use the second one if you are really sure; but then again, that's kind of the point of rm -rf.
Another option I personally like is to create an alias that redirects through a script and then passes everything on to rm. If the script finds any dangerous characters, it prompts you Y/N whether to continue: N cancels the operation, Y carries on as normal. A sketch of this appears below.
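A minimal sketch of that prompt idea (the name saferm and the "dangerous" list are assumptions; note that the shell has already expanded any wildcards by the time the function runs):
# prompt before removing anything that looks like a home dir, root, or a literal ~
saferm() {
    local arg reply
    for arg in "$@"; do
        if [ "$arg" = "$HOME" ] || [ "$arg" = "/" ] || [ "$arg" = "~" ]; then
            printf 'saferm: %s looks dangerous; continue? [y/N] ' "$arg" >&2
            read -r reply
            [ "$reply" = "y" ] || return 1
        fi
    done
    command rm "$@"
}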
At one company where I worked, we had a cron job which ran every half hour and copied all the source code files from everyone's home directory to a backup directory structure elsewhere on the system, just using find.
This wouldn't prevent the actual deletion, but it did minimise the work lost on a number of occasions.
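Something along these lines (the paths, schedule, and file pattern are illustrative assumptions; --parents is GNU cp):
# from a cron job every 30 minutes: copy recently modified scripts,
# recreating their directory structure under the backup root
find /home -name '*.sh' -mmin -30 -exec cp --parents {} /backup/snapshots \;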
This is pretty odd behaviour, really: why would bash expand twice?
Once * has expanded to
old~
this~
~
then no further substitution should happen!
I bravely tested this on my Mac, and it just deleted the file named ~, not my home directory.
Is it possible your colleague somehow wrote code that expanded it twice?
e.g.
ls | xargs rm -rf
You may disable file name generation (globbing):
set -f
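For example (a quick sketch; this changes behaviour shell-wide until you turn it back on):
set -f      # globbing off: * is passed to commands literally
rm -rf *    # would remove only a file literally named '*', if one exists
set +f      # globbing back on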
Escaping special chars in file paths could be done with Bash builtins:
# quote glob characters with printf %q, then escape any tildes by hand
filepath='/abc*?~def'
filepath="$(printf "%q" "${filepath}")"
filepath="${filepath//\~/\\~}"
printf "%s\n" "${filepath}"
I use this in my ~/.bashrc:
alias rm="rm -i"
rm then prompts before deleting anything, and the alias can be circumvented either with the -f flag or by escaping, e.g.
\rm file
Degrades the problem, yes; solves it, no.