Hide part of script's output - bash

I wrote a script which does some operations with SVN. Everything is working fine, but I want to suppress the output of certain commands executed by the script. The following code is a minor part of the script, and I want to hide all output when it executes the section containing "sudo svn add *":
ng1=$(svn stat 2>&1 | grep "?")
if [[ "$ng1" != "" ]]
then
    echo ' '
    echo '[NGINX]New files in work folder, add???[y/n]?'
    echo ' '
    read qyn
    case $qyn in
        [yY]* ) sudo svn add *;; # Add all to repo
        [nN]* ) echo ' ';;       # Proceed further
    esac
else
    echo ' '
    echo '[NGINX]No new files'
    echo ' '
fi
I tried to redirect output this way - {sudo svn add *} &>/dev/null - but it's not working.
Is there any way to hide this output, but still execute sudo svn add *?

To suppress both stdout and stderr, use this:
sudo svn add * >& /dev/null
OR:
sudo svn add * &> /dev/null
The two forms are equivalent in bash. Your brace-group attempt failed only because of syntax: bash requires spaces around the braces and a terminating semicolon, so { sudo svn add *; } &> /dev/null works as well.
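Applied to the script in the question, only the [yY]* branch of the case needs to change (a minimal sketch; the rest of the script stays as-is):
case $qyn in
    [yY]* ) sudo svn add * &> /dev/null;; # Add all to repo, silently
    [nN]* ) echo ' ';;                    # Proceed further
esac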

Related

Cron + nohup = script in cron cannot find command?

There is a simple cron job:
@reboot /home/user/scripts/run.sh > /dev/null 2>&1
run.sh starts a binary (simple web server):
#!/usr/bin/env bash
NPID=/home/user/server/websrv
if [ ! -f $NPID ]
then
    echo "Not started"
    echo "Starting"
    nohup home/user/server/websrv &> my_script.out &
else
    NUM=$(ps ax | grep $(cat $NPID) | grep -v grep | wc -l)
    if [ $NUM -lt 1 ]
    then
        echo "Not working"
        echo "Starting"
        nohup home/user/server/websrv &> my_script.out &
    else
        ps ax | grep $(cat $NPID) | grep -v grep
        echo "All Ok"
    fi
fi
websrv receives JSON from the user and runs the work.sh script itself.
The problem is that the sh script invoked by websrv "does not see" commands and stops with exit 1.
The script work.sh is like this:
#!/bin/sh -e
if [ "$#" -ne 1 ]; then
echo "Usage: $0 INPUT"
exit 1
fi
cd $(dirname $0) #good!
pwd #good!
IN="$1"
echo $IN #good!
KEYFORGIT="/some/path"
eval `ssh-agent -s` #good!
which ssh-add #good! (returns /usr/bin/ssh-add)
ssh-add $KEYFORGIT/openssh #error: exit 1!
git pull #error: exit 1!
cd $(dirname $0) #good!
rm -f somefile #error: exit 1!
#############==========Etc.==============
Using full paths does not help.
If the script is executed by itself, it works.
If I run run.sh manually, it also works.
If I run the command nohup home/user/server/websrv &, it works as well.
However, if this whole chain of tools is started by cron on boot, work.sh is not able to perform any command except cp, pwd, which, etc. Invoking ssh-add, git, rm, make, etc. forces an exit 1 status of the script. Why does it "not see" the commands? Unfortunately, I also cannot get any extended log which might explain the particular errors.
Try adding the PATH from a session that runs the script correctly to the cron entry (or inside the script).
Get the current path (where the script runs fine) with echo $PATH, then add it to the crontab, replacing the placeholder below with that output:
@reboot export PATH=$PATH:<REPLACE_WITH_OUTPUT_FROM_ABOVE>; /home/user/scripts/run.sh > /dev/null 2>&1
You can compare paths with a cron entry like this to see what cron's PATH is:
* * * * * echo $PATH > /tmp/crons_path
Then cat /tmp/crons_path to see what it says.
Example output:
$ crontab -l | grep -v \#
* * * * * echo $PATH >> /tmp/crons_path
# wait a minute or so...
$ cat /tmp/crons_path
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ echo $PATH
/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
As the commenter above mentioned, cron doesn't always use the same PATH as your user, so something is likely missing.
Be sure to remove the temp cron entry after testing (crontab -e, etc.)...
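Alternatively, set the PATH at the top of run.sh itself so the cron entry stays simple (a minimal sketch; the directories below are examples, substitute the output of echo $PATH from your working session):
#!/usr/bin/env bash
# Give cron's restricted environment the same directories as a login shell.
export PATH="/home/user/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH"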

How to change name of file if already present on remote machine?

I want to change the name of a file if it is already present on a remote server via SSH.
I tried this, from here (SuperUser):
ssh user@localhost -p 2222 'test -f /absolute/path/to/file' && echo 'YES' || echo 'NO'
This works well at an interactive prompt: it echoes YES when the file exists and NO when it doesn't. But I want this to be launched from a crontab, so it must be in a script.
Let's assume the file is called data.csv. A condition is set in a loop such that if there is already a data.csv file on the server, the file will be renamed data_1.csv, then data_2.csv, ... until the name is unique.
The renaming part works, but the detection part doesn't:
while [[ $fileIsPresent!='false' ]]
do
    ((appended+=1))
    newFileName=${fileName}_${appended}.csv
    remoteFilePathname=${remoteFolder}${newFileName}
    ssh pi@localhost -p 2222 'test -f $remoteFilePathname' && fileIsPresent='true' || fileIsPresent='false'
done
It always returns fileIsPresent='true' for any data_X.csv. All the paths are absolute.
Do you have any ideas to help me?
This works:
$ cat replace.sh
#!/usr/bin/env bash
if [[ "$1" == "" ]]
then
    echo "No filename passed."
    exit
fi
if [[ ! -e "$1" ]]
then
    echo "no such file"
    exit
fi
base=${1%%.*} # name up to the first dot
ext=${1#*.}   # extension after the first dot
for i in $(seq 1 100)
do
    new="${base}_${i}.${ext}"
    if [[ -e "$new" ]]
    then
        continue
    fi
    mv "$1" "$new"
    exit
done
$ ./replace.sh sample.csv
no such file
$ touch sample.csv
$ ./replace.sh sample.csv
$ ls
replace.sh
sample_1.csv
$ touch sample.csv
$ ./replace.sh sample.csv
$ ls
replace.sh
sample_1.csv
sample_2.csv
However, personally I'd prefer to use a timestamp instead of a number: this sample will run out of names after 100 attempts, while timestamps won't. Something like $(date +%Y%m%d_%H%M%S).
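With a timestamp, the numbered loop collapses to a single rename (a sketch; base and ext are computed as in replace.sh above):
new="${base}_$(date +%Y%m%d_%H%M%S).${ext}"
mv "$1" "$new"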
As you asked for ideas, it is worth mentioning that you probably don't want to start up to 100 ssh processes, each one logging into the remote machine, so you might do better with a construct like this, which establishes a single ssh session that runs till complete:
ssh USER@REMOTE <<'EOF'
for ((i=0; i<10; i++)); do
    echo $i
done
EOF
Alternatively, you can create and test a bash script locally and then run it remotely like this:
ssh USER@REMOTE 'bash -s' < LocallyTestedScript.bash
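Putting the two ideas together, the unique-name search from the question can run entirely on the remote side in a single session (a rough sketch under the question's assumptions; the remote path is hypothetical):
ssh pi@localhost -p 2222 'bash -s' <<'EOF'
file=/remote/folder/data.csv # hypothetical path to the uploaded file
base=${file%.csv}
i=0
new=$file
# Try data_1.csv, data_2.csv, ... until a free name is found.
while [ -e "$new" ]; do
    i=$((i+1))
    new=${base}_${i}.csv
done
[ "$new" != "$file" ] && mv "$file" "$new"
EOF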

Git Hooks. How to read variables from user input?

I'm trying to create a pre-commit hook, and I want it to interact with the user. I've found that I can use
read -p "Enter info: " info
Or just
read info
I created this pre-commit file:
#!/bin/sh
read -r input
echo $input
It should just read a variable and output it, but it doesn't work as a hook. If I run it from the terminal with ./.githooks/pre-commit, everything is okay. But when I use git commit -am "Hook", it echoes an empty string and doesn't read anything. Am I doing something wrong?
Git version is 2.28.0.windows.1
As in this gist, you might need to take stderr into account (it is used by Git commands, as for instance here):
#!/bin/sh
# Redirect output to stderr.
exec 1>&2
# enable user input
exec < /dev/tty
Example of a script following those lines:
consoleregexp='console.log'
# CHECK
if test $(git diff --cached | grep $consoleregexp | wc -l) != 0
then
    exec git diff --cached | grep -ne $consoleregexp
    read -p "There are some occurrences of console.log in your modification. Are you sure you want to continue? (y/n) " yn
    echo $yn | grep ^[Yy]$
    if [ $? -eq 0 ]
    then
        exit 0 # THE USER WANTS TO CONTINUE
    else
        exit 1 # THE USER DOESN'T WANT TO CONTINUE, SO ROLL BACK
    fi
fi
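Applied to the hook in the question, a minimal sketch looks like this (it assumes a terminal is attached, which is not the case for GUI clients):
#!/bin/sh
# Redirect output to stderr, where Git displays hook output.
exec 1>&2
# Reattach stdin to the terminal so read can see the keyboard.
exec < /dev/tty
read -r input
echo "$input"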

The bash script only reboots the router without echoing whether it is up or down

#!/bin/bash
ip route add 10.105.8.100 via 192.168.1.100
date
cat /home/xxx/Documents/list.txt | while read output
do
    ping="ping -c 3 -w 3 -q 'output'"
    if $ping | grep -E "min/avg/max/mdev" > /dev/null; then
        echo 'connection is ok'
    else
        echo "router $output is down"
        cat /home/xxx/Documents/roots.txt | while read outputs
        do
            cd /home/xxx/Documents/routers
            php rebootRouter.php "outputs" admin admin
        done
    fi
done
The other documents are:
lists.txt
10.105.8.100
roots.txt
192.168.1.100
When I run the script, the result is a reboot of the router I am trying to ping. It doesn't ping.
Is there a problem with the bash script?
If your files only contain a single line, there's no need for the while-loop, just use read:
read -r router_addr < /home/xxx/Documents/list.txt
# the grep is unnecessary, the return-code of the ping will be non-zero if the host is down
if ping -c 3 -w 3 -q "$router_addr" &> /dev/null; then
    echo "connection to $router_addr is ok"
else
    echo "router $router_addr is down"
    read -r outputs < /home/xxx/Documents/roots.txt
    cd /home/xxx/Documents/routers
    php rebootRouter.php "$outputs" admin admin
fi
If your files contain multiple lines, you should redirect the file from the right-side of the while-loop:
while read -r output; do
    ...
done < /foo/bar/baz
Also make sure your files contain a newline at the end, or use the following pattern in your while-loops:
while read -r output || [[ -n $output ]]; do
    ...
done < /foo/bar/baz
where || [[ -n $output ]] is true even if the file doesn't end in a newline.
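To see the difference, here is a quick demonstration (printf deliberately omits the final newline):
printf 'a\nb' | while read -r line; do echo "got: $line"; done
# prints only: got: a
printf 'a\nb' | while read -r line || [[ -n $line ]]; do echo "got: $line"; done
# prints both: got: a, got: b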
Note that the way you're checking your router's status is somewhat brittle: even a single missed ping will force a reboot (for example, if the checking computer returns from a sleep state just as the script runs, the ping fails while the network is still down, but the admin script succeeds as the network comes up just at that time).

Safe shell redirection when command not found

Let's say we have a text file named text (it doesn't matter what it contains) in the current directory. When I run the command (in Ubuntu 14.04, bash version 4.3.11):
nocommand > text # make sure nocommand doesn't exist on your system
it reports a 'command not found' error and it erases the text file! I just wonder if I can avoid the clobbering of the file if the command doesn't exist.
I tried set -o noclobber, but the same problem happens if I run:
nocommand >| text
It seems that bash redirects output before looking for the specific command to run. Can anyone give me some advice on how to avoid this?
Actually, the shell first processes the redirection and creates the file; it evaluates the command only after that.
Thus what happens exactly is: because it's a > redirection, bash first replaces the file with an empty one, then evaluates a command which does not exist. That produces an error message on stderr and nothing on stdout; the (empty) stdout is stored in the file, so the file remains empty.
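You can watch the ordering directly (a quick illustration; nocommand is assumed not to exist):
$ echo data > text
$ nocommand > text
bash: nocommand: command not found
$ wc -c text
0 text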
I agree with Nitesh that you simply need to check whether the command exists first, but according to this thread, you should avoid using which. I think a good starting point would be to check at the beginning of your script that you can run all the required commands (see the thread for three solutions), and abort the script otherwise.
Write to a temporary file first, and only move it into place over the desired file if the command succeeds.
nocommand > tmp.txt && mv tmp.txt text
This avoids errors not only when nocommand doesn't exist, but also when an existing command exits before it can finish writing its output, so you don't overwrite text with incomplete data.
With a little more work, you can clean up the temp file in the event of an error.
{ nocommand > tmp.txt || { rm tmp.txt; false; }; } && mv tmp.txt text
The inner command group ensures that the exit status of the outer command group is non-zero so that even if the rm succeeds, the mv command is not triggered.
A simpler command that carries the slight risk of removing the temp file when nocommand succeeds but the mv fails is
nocommand > tmp.txt && mv tmp.txt text || rm tmp.txt
This writes to the file only if the pipe delivers at least a single character:
nocommand | (
    IFS= read -d '' -n 1 || exit
    exec >myfile
    [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
    exec cat
)
Or using a function:
function protected_write {
    IFS= read -d '' -n 1 || exit
    exec >"$1"
    [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
    exec cat
}
nocommand | protected_write myfile
Note that if the lastpipe option is enabled, you'll have to call it in a subshell:
nocommand | ( protected_write myfile )
Optionally, you can also make the function use a subshell by default:
function protected_write {
    (
        IFS= read -d '' -n 1 || exit
        exec >"$1"
        [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
        exec cat
    )
}
() creates a subshell. A subshell is a fork and runs in a separate process space. In x | y, y also runs in a subshell by default, unless the lastpipe option (try shopt lastpipe) is enabled.
IFS= read -d '' -n 1 waits for a single character (see help read) and returns a zero exit code when it reads one, which bypasses the exit.
exec >"$1" redirects stdout to the file. This makes everything that prints to stdout print to the file instead.
Everything besides \x00 that is read is stored in REPLY, which is why we printf '\x00' when REPLY has a null (empty) value.
exec cat replaces the subshell's process with cat, which sends everything it receives to the file and finishes the remaining job. See help exec.
If you do:
set -o noclobber
then
invalidcmd > myfile
if myfile exists in the current directory, you will get:
-bash: myfile: cannot overwrite existing file
Check using "which" command
#!/usr/bin/env bash
command_name="npm2" # Add your command here
command=`which $command_name`
if [ -z "$command" ]; then #if command exists go ahead with your logic
echo "Command not found"
else # Else fallback
echo "$command"
fi
Hope this helps
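Since the earlier answer recommends avoiding which, here is a sketch of the same check using the POSIX shell builtin command -v instead (npm2 is just the example command from above):
#!/usr/bin/env bash
command_name="npm2" # example command to check
if ! command -v "$command_name" > /dev/null 2>&1; then
    echo "Command not found"
else
    command -v "$command_name" # prints the resolved path, much like which
fi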
