I'm trying to redirect and quiet the on-screen output of a cURL command. Also, what happens if cURL can't find a file on the site? I'm downloading from an FTP site using an FTP URL. I have many files, so the script iterates in a loop, and I don't want it to stop when a file isn't found; it should just continue past the error. Can I use the cURL command if I'm strictly using a bash script? It does download the files, but it outputs too much stuff, and I haven't been able to test a situation where it would throw an error, so I'm not sure whether it would continue.
How can I stop cURL from outputting to screen? This is what I have so far.
echo $DLADDR
curl -o Downloads/$FILECATNAME $DLADDR 2>&1 | tee $LOGFILE
I would like to stop this output and send the output of each run to a log file instead.
Example cURL output:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  574k  100  574k    0     0   622k      0 --:--:-- --:--:-- --:--:--  632k
As mentioned in the comment above, it's tough to figure out exactly what you're asking. However, one of the questions is:
How can I stop CURL from outputting to screen?
You are saying:
curl -o Downloads/$FILECATNAME $DLADDR 2>&1 | tee $LOGFILE
which would redirect the response output to the specified file but the STDERR of curl would be merged with STDOUT and teed to the logfile.
Redirect the output right away to stop CURL from outputting to screen, i.e. say:
curl -o Downloads/$FILECATNAME $DLADDR > $LOGFILE 2>&1
The other question appears to be:
I don't want it to stop if file isn't found so will continue if
exception isn't found?
The script shouldn't stop if curl encounters an error unless you've specified to do so, i.e. unless you've said:
set -e
in your script.
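If the goal is for the loop to keep going and just record failures, a minimal sketch of that pattern (the file list and paths here are assumptions, not from your script) could look like this:
#!/usr/bin/env bash
LOGFILE=download.log
# Hypothetical file list; substitute your own loop variables.
for FILECATNAME in file1.dat file2.dat file3.dat; do
    DLADDR="ftp://example.com/pub/$FILECATNAME"
    # -sS silences the progress meter but keeps real error messages;
    # --fail gives a nonzero exit status on server errors (a missing
    # FTP file already makes curl exit nonzero).
    if ! curl -sS --fail -o "Downloads/$FILECATNAME" "$DLADDR" >>"$LOGFILE" 2>&1; then
        echo "failed: $DLADDR" >>"$LOGFILE"
        continue    # move on to the next file instead of stopping
    fi
done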
You might want to investigate curl's -w (--write-out) option, to get precisely the log information you require, and avoid the unnecessary/unreported bits.
#!/usr/bin/env bash
DLADDR=ftp://example.com/somefile
LOGFILE=/path/to/downloader.log
declare -a bashopts=()
bashopts+=(-s)
bashopts+=(-o "Downloads/$FILECATNAME")
bashopts+=(-w '%{size_download} %{speed_download} %{time_total} %{filename_effective}')
curl "${bashopts[@]}" "$DLADDR" > "$LOGFILE"
We simplify the option management by putting it into an array. Note the options:
-s silences the progress meter.
-o sets an output file per your example.
-w writes output according to the format listed in curl's man page.
Since you're directing the download to a file with -o, the only thing curl writes to stdout is the log data generated by the -w format.
Note that the rule of thumb for quoting variables in bash is that if you're using a variable ... it should be quoted.
curl -o "Downloads/$DESCFILE" "$TEXT1" > "$TEXT" 2>&1
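To see why that matters, here is a quick sketch with a hypothetical filename containing a space:
DESCFILE="my file.txt"                    # hypothetical value with a space
curl -o Downloads/$DESCFILE "$TEXT1"      # word-splits: curl sees Downloads/my and file.txt
curl -o "Downloads/$DESCFILE" "$TEXT1"    # quoted: curl sees one intact path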
I'm building a bootstrap for a github project and would like it to be a simple one-liner. The script requires a password input.
This works and stops the script to wait for an input:
curl -s https://raw.github.com/willfarrell/.vhosts/master/setup.sh -o setup.sh
bash setup.sh
This does not, and just skips over the input request:
curl -s https://raw.github.com/willfarrell/.vhosts/master/setup.sh | bash
setup.sh contains code something like this:
# code before
read -p "Password:" -s password
# code after
Is it possible to have a clean one-liner? If so, how might one do it?
Workaround:
Use three commands instead of piping output.
curl -s https://raw.github.com/willfarrell/.vhosts/master/setup.sh -o vhosts.sh && bash vhosts.sh && rm vhosts.sh
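One wrinkle with the chained && form: if bash vhosts.sh exits nonzero (say, a wrong password), the rm never runs and the file is left behind. A small variant (my sketch) that cleans up either way:
curl -s https://raw.github.com/willfarrell/.vhosts/master/setup.sh -o vhosts.sh && bash vhosts.sh; rm -f vhosts.sh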
I had the exact same problem as the OP and was looking for an answer. This question was one of the first Google hits for me, and since it doesn't have a real answer yet, here's the command I eventually stumbled upon, which solved my need to use read in a remote script.
bash <(curl -s https://example.com/my-bash-script.sh)
With the pipe, read reads from standard input (the pipe), but the shell has already consumed all of standard input as the script text, so there is nothing left for read to read. With process substitution, bash reads the script from a separate file descriptor, so stdin still points at the terminal and read works.
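If you do need the script to survive being piped into bash, another common approach (my addition, not from the original answer) is to have the script read from the controlling terminal instead of stdin:
# code before
# Read from /dev/tty so the prompt still works when the script body
# arrives on stdin via a pipe (requires an attached terminal).
read -p "Password:" -s password < /dev/tty
# code after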
I found this piece of code in /etc/cron.daily/apf
#!/bin/bash
/etc/apf/apf -f >> /dev/null 2>&1
/etc/apf/apf -s >> /dev/null 2>&1
It's flushing and reloading the firewall.
I don't understand the >> /dev/null 2>&1 part.
What is the purpose of having this in the cron? It's overriding my firewall rules.
Can I safely remove this cron job?
>> /dev/null redirects standard output (stdout) to /dev/null, which discards it.
(The >> seems sort of superfluous, since >> means append while > means truncate and write, and either appending to or writing to /dev/null has the same net effect. I usually just use > for that reason.)
2>&1 redirects standard error (2) to standard output (1), which then discards it as well since standard output has already been redirected.
Let's break the >> /dev/null 2>&1 statement into parts:
Part 1: >> output redirection
This is used to redirect the program's output, appending it to the end of the target file.
Part 2: /dev/null special file
This is a pseudo-device special file.
Command ls -l /dev/null will give you details of this file:
crw-rw-rw-. 1 root root 1, 3 Mar 20 18:37 /dev/null
Did you notice the c in crw? It means this is a character special file, a pseudo-device that provides serial access.
/dev/null accepts and discards all input; produces no output (always returns an end-of-file indication on a read). Reference: Wikipedia
Part 3: 2>&1 (Merges output from stream 2 with stream 1)
Whenever you execute a program, the operating system opens three files for it: standard input, standard output, and standard error. Whenever a file is opened, the kernel returns a non-negative integer called a file descriptor; for these three files the descriptors are 0, 1, and 2, respectively.
So 2>&1 simply says redirect standard error to standard output.
& means whatever follows is a file descriptor, not a filename.
In short, by using this command you are telling your program not to shout while executing.
What is the importance of using 2>&1?
It ensures that no output reaches the terminal, even when the command produces an error. To explain more clearly, consider the following example:
$ ls -l > /dev/null
For the above command, no output was printed in the terminal, but what if this command produces an error:
$ ls -l file_doesnot_exists > /dev/null
ls: cannot access file_doesnot_exists: No such file or directory
Even though I'm redirecting standard output to /dev/null, the error is still printed in the terminal, because we are not redirecting the error output to /dev/null. To redirect the error output as well, we must add 2>&1:
$ ls -l file_doesnot_exists > /dev/null 2>&1
This is the way to execute a program quietly, and hide all its output.
/dev/null is a special filesystem object that discards everything written into it. Redirecting a stream into it means hiding your program's output.
The 2>&1 part means "redirect the error stream into the output stream", so when you redirect the output stream, error stream gets redirected as well. Even if your program writes to stderr now, that output would be discarded as well.
Let me explain it bit by bit.
0,1,2
0: standard input
1: standard output
2: standard error
>>
>> in command >> /dev/null 2>&1 appends the command output to /dev/null.
command >> /dev/null 2>&1
After command:
command
=> 1 output on the terminal screen
=> 2 output on the terminal screen
After redirect:
command >> /dev/null
=> 1 output to /dev/null
=> 2 output on the terminal screen
After adding 2>&1:
command >> /dev/null 2>&1
=> 1 output to /dev/null
=> 2 output is redirected to 1 which is now to /dev/null
/dev/null is a standard file that discards all you write to it, but reports that the write operation succeeded.
1 is standard output and 2 is standard error.
2>&1 redirects standard error to standard output. &1 indicates file descriptor (standard output), otherwise (if you use just 1) you will redirect standard error to a file named 1. [any command] >>/dev/null 2>&1 redirects all standard error to standard output, and writes all of that to /dev/null.
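A quick sketch of that difference (the ampersand is what marks a descriptor rather than a filename; the filename is hypothetical):
ls missing_file 2>1     # creates a regular file literally named "1" holding the error
ls missing_file 2>&1    # duplicates stderr onto wherever stdout currently points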
I use >> /dev/null 2>&1 for a silent cronjob: the cronjob still does its work, but does not send a report to my email.
As far as I know, you shouldn't remove /dev/null. It's useful, especially when you run cPanel, where it can be used to discard throw-away cronjob reports.
As described by the others, writing to /dev/null eliminates the output of a program. Usually cron sends an email for every bit of output from a process started by a cronjob, so by writing the output to /dev/null you prevent being spammed if you have specified your address in cron.
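For example, a crontab entry like this (the script path is hypothetical) runs nightly without ever generating mail:
# m h dom mon dow   command
0 3 * * * /usr/local/bin/nightly-job.sh >> /dev/null 2>&1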
Instead of using >/dev/null 2>&1, could you use:
wget -O /dev/null -o /dev/null example.com
From what I can see on another forum: "Here -O sends the downloaded file to /dev/null and -o logs to /dev/null instead of stderr. That way redirection is not needed at all."
The other solution is:
wget -q --spider mysite.com
https://serverfault.com/questions/619542/piping-wget-output-to-dev-null-in-cron/619546#619546
I normally use this in connection with log files; the purpose is to capture any errors for evaluating/troubleshooting issues when running scripts on multiple servers simultaneously:
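# -v: print each script line as it is read
# -x: trace each command with its expanded arguments
# -e: exit as soon as any command fails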
sh -vxe cmd > cmd.logfile 2>&1
Edit /etc/conf.apf. Set DEVEL_MODE="0". DEVEL_MODE set to 1 will add a cron job to stop apf after 5 minutes.
I am trying to source a script file from the internet using curl, like this: source <( curl url ); echo done, and what I see is that 'done' is echoed before curl even starts to download the file!
Here's the actual command and the output:
-bash-3.2# source <( curl --insecure https://raw.github.com/gurjeet/pg_dev_env/master/.bashrc ) ; echo done
done
-bash-3.2#   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                            Dload  Upload   Total   Spent    Left  Speed
100  2833  100  2833    0     0   6746      0 --:--:-- --:--:-- --:--:--     0
I am not too worried about 'done' being echoed before or after anything; I am particularly concerned about why the source command doesn't read and act on the script!
This command works as expected in bash on my Linux Mint machine, but not in the CentOS server's bash!
At first, I failed to notice that you're using Bash 3.2. That version won't source from a process substitution, but later versions such as Bash 4 do.
You can save the file and do a normal source of it:
source /tmp/del
(to use the file from your comment)
Or, you can use /dev/stdin with a here-string and a quoted command substitution:
source /dev/stdin <<< "$(curl --insecure https://raw.github.com/gurjeet/pg_dev_env/master/.bashrc)"; echo done
Try this:
exec 69<> >(:);
curl url 1>&69;
source /dev/fd/69;
exec 69>&-;
This should force your shell to wait for all the data from the pipe. If that doesn't work, this one will:
exec 69<> >(:);
{ curl url 1>&69 & } 2>/dev/null;
wait $!
source /dev/fd/69;
exec 69>&-;
Does the following work?
file=$(mktemp)
curl --insecure -o "$file" https://raw.github.com/gurjeet/pg_dev_env/master/.bashrc
source "$file"
rm "$file"
I have a bash script with some scp commands inside.
It works very well but, if I try to redirect my stdout with "./myscript.sh >log", only my explicit echoes are shown in the "log" file.
The scp output is missing.
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
Ok, what should I do now?
Thank you
scp uses the interactive terminal to print that fancy progress bar. Printing that output to a file makes no sense, so scp detects when its output is redirected somewhere other than a terminal and disables the progress meter.
What does make sense, however, is to redirect its error output into the file in case there are errors. You might also want to disable standard output if you don't need it.
There are two possible ways of doing this. First is to invoke your script with redirection of both stderr and stdout into the log file:
./myscript.sh >log 2>&1
Second, is to tell bash to do this right in your script:
#!/bin/sh
exec 2>&1
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
...
If you need to check for errors, just verify that $? is 0 after the scp command is executed:
if $C_SFTP; then
    scp -r "$C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE" "$C_TMPDIR"
    RET=$?
    if [ $RET -ne 0 ]; then
        echo SOS >&2
        exit $RET
    fi
fi
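As a side note, the same check can be written more compactly by testing scp's exit status directly; a sketch equivalent to the above:
if $C_SFTP; then
    if ! scp -r "$C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE" "$C_TMPDIR"; then
        echo SOS >&2
        exit 1    # note: unlike the version above, this loses scp's exact exit code
    fi
fi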
Another option is to use set -e in your script, which tells bash to abort the script as soon as one of its commands fails:
#!/bin/bash
set -e
...
Hope it helps. Good luck!
You can simply test your tty with:
[ ~]# echo "hello" >/dev/tty
hello
If that works, try:
[ ~]# scp <user>@<host>:<source> /dev/tty 2>/dev/null
This has worked for me...
Unfortunately, it seems SCP's output can't simply be redirected to stdout.
I wanted to get the average transfer speed of my SCP transfer, and the only way that I could manage to do that was to send stderr and stdout to a file, and then to echo the file to stdout again.
For example:
#!/bin/sh
echo "Starting with upload test at `date`:"
scp -v -i /root/.ssh/upload_test_rsa /root/uploadtest.tar.gz speedtest@myhost:/home/speedtest/uploadtest.tar.gz > /tmp/scp.log 2>&1
grep -i bytes /tmp/scp.log
rm -f /tmp/scp.log
echo "Done with upload test at `date`."
Which would result in the following output:
Starting with upload test at Thu Sep 20 13:04:44 SAST 2012:
Transferred: sent 10191920, received 5016 bytes, in 15.7 seconds
Bytes per second: sent 650371.2, received 320.1
Done with upload test at Thu Sep 20 13:05:04 SAST 2012.
I found a rough solution for scp:
$ scp -qv $USER@$HOST:$SRC $DEST
According to the scp man page, -q (quiet) disables the progress meter, as well as all other output. Add -v (verbose) as well and you get heaps of output... and the progress meter is still disabled! Disabling the progress meter is what allows you to redirect the output to a file.
If you don't need all the authentication debug output, redirect the output to stdout and grep out the bits you don't want:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug
Final output is something like this:
Executing: program /usr/bin/ssh host myhost, user (unspecified), command scp -v -f ~/file.txt
OpenSSH_6.0p1 Debian-4, OpenSSL 1.0.1e 11 Feb 2013
Warning: Permanently added 'myhost,10.0.0.1' (ECDSA) to the list of known hosts.
Authenticated to myhost ([10.0.0.1]:22).
Sending file modes: C0644 426 file.txt
Sink: C0644 426 file.txt
Transferred: sent 2744, received 2464 bytes, in 0.0 seconds
Bytes per second: sent 108772.7, received 97673.4
Plus, this can be redirected to a file:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug > scplog.txt
I am trying to create a script that will run wget to a few sites and check if we receive a 200 OK from the site.
My problem is that the result of the wget application is shown in stdout. Is there a way I can hide this?
My current script is:
RESULT=`wget -O wget.tmp http://mysite.com 2>&1`
Later I will use a regex to look for the 200 OK we receive in the stderr output that wget produces.
When I run the script, it works fine, but I get the result of the wget added between my echoes.
Any way around this?
You can use:
RESULT=`wget --spider http://mysite.com 2>&1`
And this does the trick too:
RESULT=`wget -O wget.tmp http://mysite.com >/dev/null 2>&1`
Played around a little and came up with that one:
RESULT=`curl -fSw "%{http_code}" http://example.com/ -o a.tmp 2>/dev/null`
This outputs nothing but "200", nothing else.
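Building on that, a sketch of how the status code could be used in a test (same placeholder URL as above):
STATUS=$(curl -fsw "%{http_code}" http://example.com/ -o /dev/null 2>/dev/null)
if [ "$STATUS" = "200" ]; then
    echo "site OK"
fi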
Jack's suggestions are good. I'd modify them just slightly.
If you only need to check the status code, use the --spider option that Jack referenced. From the docs:
When invoked with this option, Wget will behave as a Web spider, which means that it will not download the pages, just check that they are there.
And Jack's second suggestion shows the core ideas behind hiding output:
... >/dev/null 2>&1
The above redirects standard output to /dev/null. The 2>&1 then redirects standard error to the current standard output file descriptor, which has already been redirected to /dev/null, so it won't give you any output.
But, since you don't want output, you might be able to use the --quiet option. From the docs:
Turn off Wget's output.
So, I'd probably use the following command
wget --quiet --spider 'http://mysite.com/your/page'
if [[ $? != 0 ]] ; then
# error retrieving page, do something useful
fi
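Since the goal is to check a few sites, here is a minimal sketch of that test in a loop (the URLs are placeholders):
for url in http://mysite.com http://example.com; do
    if wget --quiet --spider "$url"; then
        echo "$url: OK"
    else
        echo "$url: FAILED" >&2
    fi
done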
TCP_HOST="mydomain.com"
TCP_PORT=80
exec 5<>/dev/tcp/"${TCP_HOST}"/"${TCP_PORT}"
echo -e "HEAD / HTTP/1.0\r\nHost: ${TCP_HOST}\r\n\r" >&5
# The first line of the reply is the status line, e.g. "HTTP/1.1 200 OK",
# so a single read is enough to decide.
read -r line <&5
case "$line" in
    *200*OK* )
        echo "site OK:$TCP_HOST"
        ;;
    * )
        echo "site:$TCP_HOST not ok"
        ;;
esac
exec 5>&-    # close the connection