Bash echo output redirected to log file in specified directory - bash

I'm getting both of the right results, but I'm also getting an additional weird result: a file is being created for each file that I'm downloading, in the directory the script is executed from and not in the log directory. When I comment the echo out, it goes away and the files are not created. Is there another way, or what is the correct way, for me to log the address that I'm downloading with curl?
echo $DLADDR$'\r' >> Downloads/LOGS/$LOGFILE 2>$1
curl -o Downloads/$FILECATNAME $DLADDR >> Downloads/LOGS/$LOGFILE 2>&1

You should change that 2>$1 into 2>&1. Otherwise stderr will be redirected into a file named after "$1" (the first argument to the script).
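For reference, a minimal runnable sketch of the corrected redirection; $DLADDR, $LOGFILE, and the log directory are stand-ins for the question's setup, created in a temp dir so the example is self-contained:

```shell
#!/bin/bash
LOGDIR=$(mktemp -d)                        # plays the role of Downloads/LOGS
LOGFILE=download.log
DLADDR="http://example.com/file.tar.gz"    # hypothetical download address

# Corrected: 2>&1 duplicates stderr onto stdout (here, the log file).
# The original 2>$1 redirected stderr to a file named after the script's
# first argument -- those are the stray files the question describes.
echo "$DLADDR"$'\r' >> "$LOGDIR/$LOGFILE" 2>&1

cat "$LOGDIR/$LOGFILE"
```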

Related

Source bash from url only if file from the url exists and hide output

I've been searching the web but couldn't find the answer for my situation. I believe it's a first?
Anyway, here's what I'm trying to do;
#!/bin/bash
if source <(curl -s -f http://example.com/$1.txt); then
    echo "Found the file"
else
    echo "Couldn't find it"
fi
This is supposed to source the bash from http://example.com/$1.txt when I run the script like this: ./myscript.sh fileName, while hiding any success or error output, because I don't want it to show up.
However, while it works fine for files that exist, it still says "Found the file" even if the file isn't there, and sources the bash from an empty file because of the -f flag. If I remove the -f flag then it works and says "Couldn't find it", but it also prints an HTTP error since the file isn't there, and as I said, I want to hide the errors too.
How can I work around this?
The exit status of source is simply that of the last command in the sourced file. If the file is empty (as it will be if curl fails), that counts as a success.
What you can do is guard against an error from curl separately:
if source <(curl -s -f "http://example.com/$1.txt" || echo "exit $?"); then
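A network-free sketch of why the guard matters, with false standing in for a failing curl -s -f; the subshell parentheses keep the injected exit from leaving the demo shell:

```shell
#!/bin/bash
# Without the guard: sourcing the empty stream a failed `curl -f`
# produces reports success.
source <(printf '')
echo "empty source status: $?"           # 0 -- the misleading "Found the file"

# With the guard: the failing fetch appends "exit 1" to the stream, so
# sourcing it carries the failure out. `false` stands in for
# curl -s -f "http://example.com/$1.txt".
(source <(false || echo "exit $?"))
guarded=$?
echo "guarded source status: $guarded"   # 1 -- the else branch now fires
```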

redirect screen output to file

I'm trying to redirect the screen output to a log file but I don't seem to be getting this right, see the code below:
DT=$(date +%Y-%m-%d-%H-%M-%S)
echo $DT > log_copy_$DT.txt
cat dirfiles.txt | while read f ; do
dest=/mydir
scp "${f}" $dest >> log_copy_$DT.txt 2>&1
done
All I get is a file with the date, but not the screen results (I need to see if the files copied correctly).
So, basically, I'm appending the results of the scp command to the log and using 2>&1 so that the standard output is written to the file too, but it doesn't seem to work.
I need to run this on a crontab so I'm not sure if the screen contents will even go to the log once I set it up.
Well, after investigating, it seems scp doesn't write its usual screen output when stdout is not a terminal: it suppresses the % progress meter once the output is redirected, so I ended up doing this:
scp "${f}" "$dest" && echo "$f successfully copied!" >> "log_copy_$DT.txt"
Basically, if it can copy the file over, it then writes a message to the log saying it was OK.
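That only records successes; a sketch of the same idea with an else branch so failures are logged too. Here copy is a hypothetical stand-in for scp so the example runs locally, and the log lives in a temp dir:

```shell
#!/bin/bash
DT=$(date +%Y-%m-%d-%H-%M-%S)
LOG="$(mktemp -d)/log_copy_$DT.txt"
copy() { test -e "$1"; }           # stand-in for: scp "$1" "$dest"

existing=$(mktemp)                 # a file that will "copy" successfully
for f in "$existing" /no/such/file; do
  if copy "$f"; then
    echo "$f successfully copied!" >> "$LOG"
  else
    echo "$f FAILED to copy" >> "$LOG"
  fi
done
cat "$LOG"
```

With an if/else both outcomes land in the log, which matters once the script runs unattended from cron.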

Checking if a Variable File is in another directory

I'm looking to check if a variable file is in another directory, and if it is, stop the script from running any further. So far I have this:
#! /bin/bash
for file in /directory/of/variable/file/*.cp;
do
test -f /directory/to/be/checked/$file;
echo $?
done
I ran an echo of $file and see that it includes the full path, which would explain why my test doesn't see the file, but I am at a loss for how to move forward so that I can check.
Any help would be greatly appreciated!
Thanks
I think you want
#! /bin/bash
for file in /directory/of/variable/file/*.cp ; do
newFile="${file##*/}"
if test -f /directory/to/be/checked/"$newFile" ; then
echo "/directory/to/be/checked/$newFile already exists, updating ..."
else
echo "/directory/to/be/checked/$newFile not found, copying ..."
fi
cp -i "$file" /directory/to/be/checked/"$newFile"
done
Note that you can replace cp -i with mv -i and move the file, leaving no file left behind in /directory/of/variable/file/.
The -i option means interactive, meaning that if the file is already there, it will ask overwrite /directory/to/be/checked/"$newFile"? (or similar), to which you must reply y. This will only happen if the file already exists in the new location.
IHTH
The command basename will give you just the file (or directory) without the rest of the path.
#! /bin/bash
for file in /directory/of/variable/file/*.cp;
do
test -f /directory/to/be/checked/"$(basename "$file")";
echo $?
done
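The two answers are equivalent here; a quick side-by-side with a hypothetical path, showing that the ${file##*/} expansion gives the same result as basename without spawning an extra process:

```shell
#!/bin/bash
file="/directory/of/variable/file/example.cp"   # hypothetical path

with_basename=$(basename "$file")   # external command
with_expansion="${file##*/}"        # pure shell parameter expansion

echo "$with_basename"    # example.cp
echo "$with_expansion"   # example.cp
```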

storing error message of command output into a shell variable [duplicate]

This question already has answers here:
How to get error output and store it in a variable or file
(3 answers)
Closed 8 years ago.
I am trying to store the error message of a copy command in a variable, but it's not happening.
Unix Command
log=`cp log.txt`
cp: missing destination file operand after `log.txt'
Try `cp --help' for more information.
echo $log
<nothing displayed>
I want to store above error message into a variable so that i can echo it whenever i want
Just redirect stdout (the normal output) to /dev/null and keep stderr:
a=$(cp log.txt 2>&1 >/dev/null)
See an example:
$ a=$(cp log.txt 2>&1 >/dev/null)
$ echo "$a"
cp: missing destination file operand after ‘log.txt’
Try 'cp --help' for more information.
The >/dev/null is important to discard the normal output, which in this case we do not want:
$ ls a b
ls: cannot access a: No such file or directory
b
$ a=$(ls a b 2>&1)
$ echo "$a"
ls: cannot access a: No such file or directory
b
$ a=$(ls a b 2>&1 >/dev/null)
$ echo "$a"
ls: cannot access a: No such file or directory
Note the need to quote $a when echoing it, so that the formatting is kept. Also, it is better to use $() rather than backticks, as $() is easier to nest and backticks are deprecated.
What does 2>&1 mean?
1 is stdout. 2 is stderr.
Here is one way to remember this construct (although it is not entirely
accurate): at first, 2>1 may look like a good way to redirect stderr
to stdout. However, it will actually be interpreted as "redirect
stderr to a file named 1". & indicates that what follows is a file
descriptor and not a filename. So the construct becomes: 2>&1.
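A quick demonstration of that distinction in a throwaway directory:

```shell
#!/bin/bash
cd "$(mktemp -d)"                   # scratch directory

ls /no/such/path 2>1 || true        # no &: stderr goes to a file literally named "1"
test -f 1 && echo "a file named 1 now exists"

captured=$(ls /no/such/path 2>&1) || true   # with &: stderr follows stdout
echo "captured: $captured"
```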

Create a detailed self tracing log in bash

I know you can create a log of the output by typing in script nameOfLog.txt and exit in terminal before and after running the script, but I want to write it in the actual script so it creates a log automatically. There is a problem I'm having with the exec >>log_file 2>&1 line:
The code redirects the output to a log file, and a user can no longer interact with the script. How can I create a log that simply copies what appears on the screen?
And, is it possible to have it also automatically record which files were copied? For example, if a file at /home/user/Desktop/file.sh was copied to /home/bckup, is it possible to have that printed in the log too, or will I have to write that manually?
Is it also possible to record the amount of time it took to run the whole process and count the number of files and directories that were processed or am I going to have to write that manually too?
My future self appreciates all the help!
Here is my whole code:
#!/bin/bash
collect()
{
find "$directory" -name "*.sh" -print0 | xargs -0 cp -t ~/bckup #xargs handles files names with spaces. Also gives error of "cp: will not overwrite just-created" even if file didn't exist previously
}
echo "Starting log"
exec >>log_file 2>&1
timelimit=10
echo "Please enter the directory that you would like to collect.
If no input in 10 secs, default of /home will be selected"
read -t $timelimit directory
if [ ! -z "$directory" ] #if directory doesn't have a length of 0
then
echo -e "\nYou want to copy $directory." #-e is so the \n will work and it won't show up as part of the string
else
directory=/home/
echo "Time's up. Backup will be in $directory"
fi
if [ ! -d ~/bckup ]
then
echo "Directory does not exist, creating now"
mkdir ~/bckup
fi
collect
echo "Finished collecting"
exit 0
To answer the "how to just copy the output" question: use a program called tee and then a bit of exec magic explained here:
redirect COPY of stdout to log file from within bash script itself
Regarding the analytics (time needed, files accessed, etc) -- this is a bit harder. Some programs that can help you are time(1):
time - run programs and summarize system resource usage
and strace(1):
strace - trace system calls and signals
Check the man pages for more info. If you have control over the script it will be probably easier to do the logging yourself instead of parsing strace output.
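A minimal sketch of the tee approach from that link, combined with do-it-yourself counters for elapsed time and file count; the log path and file list are hypothetical, and SECONDS is bash's built-in elapsed-time variable:

```shell
#!/bin/bash
LOG=$(mktemp)                       # hypothetical log path
exec > >(tee -a "$LOG") 2>&1        # output goes to the log AND the terminal

SECONDS=0                           # reset bash's elapsed-time counter
count=0
for f in a.sh b.sh c.sh; do         # hypothetical files to "process"
  count=$((count + 1))
done
echo "Processed $count files in ${SECONDS}s"
sleep 1                             # give the tee process a moment to flush
```

Unlike exec >>log_file, the tee duplicates the stream, so prompts like the read -t in the script above remain visible to the user.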
