I'm trying to redirect the screen output to a log file but I don't seem to be getting this right, see the code below:
DT=$(date +%Y-%m-%d-%H-%M-%S)
echo $DT > log_copy_$DT.txt
cat dirfiles.txt | while read f ; do
    dest=/mydir
    scp "${f}" $dest >> log_copy_$DT.txt 2>&1
done
All I get is a file with the date, but not the screen results (I need to see if the files copied correctly).
So basically, I'm appending the results of the scp command to the log and adding 2>&1 so that the screen output is also written to the file, but it doesn't seem to work.
I need to run this from a crontab, so I'm not sure whether the screen contents will even go to the log once I set it up.
Well, after investigating, it seems scp can't really write its normal screen output to a file: it cancels the standard output it uses to show the % progress, so I ended up doing this:
scp "${f}" $dest && echo $f successfully copied! >> log_copy_$DT.txt
Basically, if it can copy the file over, it writes a message to the log saying it was OK.
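For the cron case, a minimal sketch that puts the pieces above together might look like this (same file names as in the question; scp -q just silences the progress meter, and the exit status decides what gets logged):
#!/bin/bash
DT=$(date +%Y-%m-%d-%H-%M-%S)
log="log_copy_$DT.txt"
dest=/mydir
echo "$DT" > "$log"
while IFS= read -r f ; do
    if scp -q "$f" "$dest" >> "$log" 2>&1 ; then
        echo "$f successfully copied!" >> "$log"
    else
        echo "$f FAILED to copy (exit $?)" >> "$log"
    fi
done < dirfiles.txt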
I've been searching the web but couldn't find the answer for my situation. I believe it's a first?
Anyway, here's what I'm trying to do:
#!/bin/bash
if source <(curl -s -f http://example.com/$1.txt); then
    echo "Found the file"
else
    echo "Couldn't find it"
fi
This is supposed to source the bash script from http://example.com/$1.txt when I run the script like this: ./myscript.sh fileName, while hiding any success or error output because I don't want it to show up.
However, while it works fine for files that exist, it still says "Found the file" even if the file isn't there, and it sources an empty file because of the -f flag. If I remove the -f flag it works and says "Couldn't find it", but then it also prints an HTTP error since the file isn't there, and as I said, I want to hide the errors too.
How can I work around this?
The result code from source is simply that of the last command in the sourced file. If the file is empty (as it will be if curl fails), that's a success.
What you can do is guard against an error from curl separately.
if source <(curl -s -f "http://example.com/$1.txt" || echo "exit $?"); then
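If you prefer, you can also check curl's status explicitly before sourcing anything; a minimal sketch along those lines (same URL scheme as in the question):
#!/bin/bash
if body=$(curl -s -f "http://example.com/$1.txt"); then
    # curl succeeded, so it is safe to source what it returned
    source <(printf '%s\n' "$body")
    echo "Found the file"
else
    echo "Couldn't find it"
fi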
My requirement is to replace a file only when it is not being accessed. I have the following snippet:
if [ -f file ]
then
    while true
    do
        if [ -n "$(fuser "file")" ]
        then
            echo "file is in use..."
        else
            echo "file is free..."
            break
        fi
    done
fi
{
    flock -x 3
    mv newfile file
} 3>file
But I suspect that I am not handling concurrency properly. Please give some insights and a possible way to achieve this.
Thanks.
My requirement is to replace file only when it is not being accessed.
Getting requirements right can be hard. In case your actual requirement is the following, you can boil down the whole script to just one command.
My guess on the actual requirement (not as strict as the original):
Replace file without disturbing any programs reading/writing file.
If this is the case, you can use a very neat behavior: in Unix-like systems, file descriptors always point to the file (not the path) for which they were opened. You can move or even delete the corresponding path. See also How do the UNIX commands mv and rm work with open files?.
Example:
Open a terminal and enter
i=1; while true; do echo $((i++)); sleep 1; done > file &
tail -f file
The first command writes output to file and runs in the background. The second command reads the file and continues to print its changing content.
Open another terminal and move or delete file, for instance with
mv file file2
echo overwritten > otherFile
mv otherFile file2
rm file2
echo overwritten > file
echo overwritten > file2
While executing these commands have a look at the output of tail -f in the first terminal – it won't be affected by any of these commands. You will never see overwritten.
Solution For New Requirement:
Because of this behavior you can replace the whole script with just one mv command:
mv newfile file
Consider lsof.
mvWhenClear() {
    delay=${delay:-1}                 # polling interval in seconds; a default of 1 is assumed here
    while [[ -f "$1" ]] && lsof "$1"
    do sleep "$delay"
    done
    mv "$1" "$2"                      # still allows race condition
}
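With the file names from the question, the call would then simply be:
mvWhenClear newfile file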
I'm getting both of the right results, but I'm also getting an additional weird result: a file is being created for each file that I'm downloading, in the directory of the script being executed rather than in the log directory. When I comment the echo out, it goes away and the files are not created. Is there another way, or what is the correct way, for me to log the address that I'm downloading with curl?
echo $DLADDR$'\r' >> Downloads/LOGS/$LOGFILE 2>$1
curl -o Downloads/$FILECATNAME $DLADDR >> Downloads/LOGS/$LOGFILE 2>&1
You should change that 2>$1 into 2>&1. Otherwise stderr will be redirected into a file named "$1" (the first argument to the script).
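With that change applied, the two lines from the question become:
echo $DLADDR$'\r' >> Downloads/LOGS/$LOGFILE 2>&1
curl -o Downloads/$FILECATNAME $DLADDR >> Downloads/LOGS/$LOGFILE 2>&1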
I am using the following code to send stderr to a file.
.script >2 "errorlog.$(date)"
The problem is that a blank log file is created every time I run the script, even if an error doesn't exist. I have looked online and in a few books as well, and can't figure out how to create a log file only if errors exist.
Output redirection opens the file before the script is run, so there is no way to tell if the file will receive any output. What you can do, however, is immediately delete the file if it winds up being empty:
logfile="errorlog.$(date)"
# Note your typo; it's 2>, not >2
script 2> "$logfile"; [ -s "$logfile" ] || rm -f "$logfile"
I use -f just in case, as -s can fail if $logfile does not exist, not just if it's empty. I use ; to separate the commands because whether or not $logfile contains anything does not depend on whether or not script succeeds.
You can wrap this up in a function to make it easier to use.
save_log () {
    logfile=${1:-errorlog.$(date)}
    cat - > "$logfile"
    [ -s "$logfile" ] || rm -f "$logfile"
}
script 2> >( save_log )
script 2> >( save_log my_logfile.txt )
Not quite as simple as redirecting to a file, and depends on a non-standard feature (process substitution), but not too bad, either.
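If you want to avoid process substitution entirely (say, under a plain sh), a rough equivalent is to write to a temporary file first and only keep it if it ends up non-empty:
logfile="errorlog.$(date)"
tmp=$(mktemp) || exit 1
script 2> "$tmp"
# keep the log only if something was actually written to it
if [ -s "$tmp" ]; then mv "$tmp" "$logfile"; else rm -f "$tmp"; fi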
I know you can create a log of the output by typing in script nameOfLog.txt and exit in terminal before and after running the script, but I want to write it in the actual script so it creates a log automatically. There is a problem I'm having with the exec >>log_file 2>&1 line:
That line redirects all output to the log file, so the user can no longer interact with the script. How can I create a log that simply keeps a copy of what appears in the output?
And, is it possible to have it also automatically record which files were copied? For example, if a file at /home/user/Desktop/file.sh was copied to /home/bckup, is it possible to have that printed in the log too, or will I have to write that manually?
Is it also possible to record the amount of time it took to run the whole process and count the number of files and directories that were processed or am I going to have to write that manually too?
My future self appreciates all the help!
Here is my whole code:
#!/bin/bash
collect()
{
    find "$directory" -name "*.sh" -print0 | xargs -0 cp -t ~/bckup # xargs handles file names with spaces. Also gives "cp: will not overwrite just-created" errors even if the file didn't exist previously
}
echo "Starting log"
exec >>log_file 2>&1
timelimit=10
echo "Please enter the directory that you would like to collect.
If no input in 10 secs, default of /home will be selected"
read -t $timelimit directory
if [ ! -z "$directory" ] #if directory doesn't have a length of 0
then
    echo -e "\nYou want to copy $directory." #-e is so the \n will work and it won't show up as part of the string
else
    directory=/home/
    echo "Time's up. Backup will be in $directory"
fi
if [ ! -d ~/bckup ]
then
    echo "Directory does not exist, creating now"
    mkdir ~/bckup
fi
collect
echo "Finished collecting"
exit 0
To answer the "how to just copy the output" question: use a program called tee and then a bit of exec magic explained here:
redirect COPY of stdout to log file from within bash script itself
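In short, the idea from that link is to replace the exec >>log_file 2>&1 line with something along these lines (a sketch; log_file is the name your script already uses):
# duplicate stdout and stderr: output still reaches the terminal and is also appended to log_file
exec > >(tee -a log_file) 2>&1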
Regarding the analytics (time needed, files accessed, etc) -- this is a bit harder. Some programs that can help you are time(1):
time - run programs and summarize system resource usage
and strace(1):
strace - trace system calls and signals
Check the man pages for more info. If you have control over the script, it will probably be easier to do the logging yourself instead of parsing strace output.
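If you do go the do-it-yourself route, a rough sketch using bash's built-in SECONDS timer and a simple file count (the names here are only illustrative, reusing the collect function and $directory from your script) could be:
start=$SECONDS
count=$(find "$directory" -name "*.sh" | wc -l)   # number of matching files (approximate if names contain newlines)
collect
echo "Copied $count file(s) from $directory in $((SECONDS - start)) seconds"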