output of dns-sd browse command not redirected to a file in busybox shell (ash) - shell

To check if mdnsd is in probing mode, we use the command below to browse for a service and redirect its output to a file; if the hostname of the device is found in the output, we decide that mdnsd is in probing mode.
The command used for publishing the service:
dns-sd -R "Test status" "_mytest._tcp." "local." "22"
To browse for the service, the following command is used (running in the background):
dns-sd -lo -Z _mytest._tcp > /tmp/myfile &
To display the content of the file, cat is used:
cat /tmp/myfile
myfile is empty. If > is replaced with tee, I see output on the console, but myfile remains empty.
I am unable to understand what is going on. Any pointers or help would be appreciated.
EDIT
Just for completeness, adding the output, which I missed adding before.
# dns-sd -lo -Z _mytest._tcp local
Using LocalOnly
Using interface -1
Browsing for _mytest._tcp
DATE: ---Tue 25 Apr 2017---
11:09:24.775 ...STARTING...
; To direct clients to browse a different domain, substitute that domain in place of '@'
lb._dns-sd._udp PTR @
; In the list of services below, the SRV records will typically reference dot-local Multicast DNS names.
; When transferring this zone file data to your unicast DNS server, you'll need to replace those dot-local
; names with the correct fully-qualified (unicast) domain name of the target host offering the service.
_mytest._tcp PTR Test\032status._mytest._tcp
Test\032status._mytest._tcp SRV 0 0 22 DevBoard.local. ; Replace with unicast FQDN of target host
Test\032status._mytest._tcp TXT ""

You appear to have a program whose behavior differs depending on whether its output is a TTY. One workaround is to use a tool such as unbuffer or script to simulate a TTY.
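For example, a minimal sketch, assuming the expect package's unbuffer utility or util-linux script is installed (neither ships with busybox by default):
# unbuffer makes dns-sd think it is writing to a TTY, so output is not withheld
unbuffer dns-sd -lo -Z _mytest._tcp local > /tmp/myfile &
# or, with util-linux script, writing the typescript directly to the file:
script -q -f -c 'dns-sd -lo -Z _mytest._tcp local' /tmp/myfile &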
Moreover, since writing to a file is itself only a workaround, I suggest using a FIFO to capture the line you want without needing to write to a file and poll that file's contents.
#!/bin/sh
newline='
'

# Let's define some helpers...
cleanup() {
  [ -e /proc/self/fd/3 ] && exec 3<&-  ## close FD 3 if it's open
  rm -f "fifo.$$"                      ## delete the FIFO from disk
  if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then  ## if our pid is still running...
    kill "$pid"                        ## ...then shut it down.
  fi
}
die() { cleanup; echo "$*" >&2; exit 1; }

# Create a FIFO, and start `dns-sd` in the background *thinking* it's writing to a TTY,
# but actually writing to that FIFO
mkfifo "fifo.$$"
script -q -f -c 'dns-sd -lo -Z _mytest._tcp local' /proc/self/fd/1 |
  tr -d '\r' >"fifo.$$" & pid=$!
exec 3<"fifo.$$"

while read -t 1 -r line <&3; do
  case $line in
    "Script started on"*|";"*|"") continue;;
    "Using "*|DATE:*|[[:digit:]]*) continue;;
    *) result="${result}${line}${newline}"; break;;
  esac
done

if [ -z "$result" ]; then
  die "Timeout before receiving a non-boilerplate line"
fi

printf '%s\n' "Retrieved a result:" "$result"
cleanup

Related

monitoring and searching a file with inotify, and command line tools

Log files are written line by line by underwater drones to a server. When at the surface, the drones speak slowly to the server (say ~200 bytes/s on a phone line which is not stable) and only from time to time (say every ~6 h). Depending on the messages, I have to execute commands on the server while the drones are online, and other commands when they hang up. Other processes may be looking at the same files with similar tasks.
A lot can be found on this website on somewhat similar problems, but the solution I have built on is still unsatisfactory. Presently I'm doing this with bash:
while logfile_drone=`inotifywait -e create --format '%f' log_directory`; do
  logFile=log_directory/${logfile_drone}
  while action=`inotifywait -q -t 120 -e modify -e close --format '%e' ${logFile}`; do
    exitCode=$?
    lastLine=$( tail -n2 ${logFile} | head -n1 )  # because with tail -n1 I often get only part of the line
    match=$( ... )  # match set to true if lastLine matches some pattern
    if [[ $action == 'MODIFY' ]] && $match; then
      : # do something
    fi
    if [[ $( echo $action | cut -c1-5 ) == 'CLOSE' ]]; then
      # do something
      break
    fi
    if [[ $exitCode -eq 2 ]]; then break; fi
  done
  # do something after the drone has hung up
done  # wait for a new call from the same or another drone
The main problems are:
1. the second inotifywait misses lines, maybe because of the other processes looking at the same file;
2. the way I catch the timeout doesn't seem to work;
3. I can't monitor 2 drones simultaneously.
Basically the code works more or less but isn't very robust. I wonder if problem 3 can be managed by putting the second while loop in a function which is put in the background when called. Finally, I wonder whether a higher-level language (I'm familiar with php, which has a PECL extension for inotify) would not do this much better. However, I imagine that php will not solve problem 3 any better than bash.
Here is the code where I'm facing the problem of an abrupt exit from the while loop, implemented according to Philippe's answer, which otherwise works fine:
while read -r action; do
  ...
  resume=$( grep -e 'RESUMING MISSION' <<< "$lastLine" )
  if [ -n "$resume" ]; then
    ssh user@another_server "/usr/bin/php /path_to_matlab_command/matlabCmd.php --drone=${vehicle}" &
  fi
  if [ $( echo $action | cut -c1-5 ) == 'CLOSE' ]; then ...; sigKill=true; fi
  ...
  if $sigKill; then break; fi
done < <(inotifywait -q -m -e modify -e close_write --format '%e' ${logFile})
When I comment out the line with ssh, the script exits properly via the break triggered by CLOSE; otherwise the while loop finishes abruptly after the ssh command. The ssh is put in the background because the matlab code runs for a long time.
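(A likely culprit, assuming standard OpenSSH behavior: a background ssh inherits the loop's standard input and can drain the pipe that feeds read. The -n flag, which redirects ssh's stdin from /dev/null, avoids this:)
# -n keeps the background ssh from consuming the inotifywait stream
ssh -n user@another_server "/usr/bin/php /path_to_matlab_command/matlabCmd.php --drone=${vehicle}" &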
The monitor mode (-m) of inotifywait may serve better here:
inotifywait -m -q -e create -e modify -e close log_directory |
  while read -r dir action file; do
    ...
  done
Monitor mode (-m) does not buffer; it just prints all events to standard output.
To preserve variables set inside the loop (a pipeline runs the loop in a subshell), feed the loop with process substitution instead:
while read -r dir action file; do
  echo $dir $action $file
done < <(inotifywait -m -q -e create -e modify -e close log_directory)
echo "End of script"

the bash script only reboots the router without echoing whether it is up or down

#!/bin/bash
ip route add 10.105.8.100 via 192.168.1.100
date
cat /home/xxx/Documents/list.txt | while read output
do
  ping="ping -c 3 -w 3 -q 'output'"
  if $ping | grep -E "min/avg/max/mdev" > /dev/null; then
    echo 'connection is ok'
  else
    echo "router $output is down"
    cat /home/xxx/Documents/roots.txt | while read outputs
    do
      cd /home/xxx/Documents/routers
      php rebootRouter.php "outputs" admin admin
    done
  fi
done
The other documents are:
lists.txt
10.105.8.100
roots.txt
192.168.1.100
When I run the script, the result is a reboot of the router I am trying to ping. It doesn't ping.
Is there a problem with the bash script?
If your files only contain a single line, there's no need for the while loop; just use read:
read -r router_addr < /home/xxx/Documents/list.txt

# the grep is unnecessary; the return code of the ping will be non-zero if the host is down
if ping -c 3 -w 3 -q "$router_addr" &> /dev/null; then
  echo "connection to $router_addr is ok"
else
  echo "router $router_addr is down"
  read -r outputs < /home/xxx/Documents/roots.txt
  cd /home/xxx/Documents/routers
  php rebootRouter.php "$outputs" admin admin
fi
If your files contain multiple lines, you should redirect the file into the loop from the right side of the while loop:
while read -r output; do
...
done < /foo/bar/baz
Also make sure your files contain a newline at the end, or use the following pattern in your while-loops:
while read -r output || [[ -n $output ]]; do
...
done < /foo/bar/baz
where || [[ -n $output ]] is true even if the file doesn't end in a newline.
Note that the way you're checking your router's status is somewhat brittle, as even a single missed ping will force a reboot (for example, the checking computer returns from a sleep state just as the script is running; the ping fails because the network is still down, but the admin script succeeds because the network comes up just at that time).
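To make the check less trigger-happy, one option (a sketch reusing the paths above; the retry count and delay are arbitrary) is to require several consecutive failures before rebooting:
read -r router_addr < /home/xxx/Documents/list.txt
ok=
for attempt in 1 2 3; do
  if ping -c 3 -w 3 -q "$router_addr" &> /dev/null; then
    ok=1
    break
  fi
  sleep 10  # give a flaky link a chance to recover before the next probe
done
if [ -n "$ok" ]; then
  echo "connection to $router_addr is ok"
else
  echo "router $router_addr is still down after 3 checks"  # reboot it here
fi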

Pseudo-terminal will not be allocated because stdin is not a terminal ssh bash

Okay, here's part of my code, where I ssh to my servers from my servers.txt list.
while read server <&3; do  # read server names into the while loop
  serverName=$(uname -n)
  if [[ ! $server =~ [^[:space:]] ]]; then  # empty-line exception
    continue
  fi
  echo server on list = "$server"
  echo server signed on = "$serverName"
  if [ $serverName == $server ]; then  # makes sure a server doesn't try to ssh to itself
    continue
  fi
  echo "Connecting to - $server"
  ssh "$server"  # SSH login
  echo Connected to "$serverName"
  exec < filelist.txt
  while read updatedfile oldfile; do
    # echo updatedfile = $updatedfile  # use for troubleshooting
    # echo oldfile = $oldfile          # use for troubleshooting
    if [[ ! $updatedfile =~ [^[:space:]] ]]; then  # empty-line exception
      continue
    fi
    if [[ ! $oldfile =~ [^[:space:]] ]]; then  # empty-line exception
      continue
    fi
    echo Comparing $updatedfile with $oldfile
    if diff "$updatedfile" "$oldfile" >/dev/null; then
      echo The files compared are the same. No changes were made.
    else
      echo The files compared are different.
      cp -f -v $oldfile /infanass/dev/admin/backup/`uname -n`_${oldfile##*/}_$(date +%F-%T)
      cp -f -v $updatedfile $oldfile
    fi
  done
done 3</infanass/dev/admin/servers.txt
I keep getting this error, and the ssh doesn't actually connect and perform the code on the server it's supposed to be ssh'd into.
Pseudo-terminal will not be allocated because stdin is not a terminal
I feel like everything the guy above just said is so wrong.
Expect?
It's simple:
ssh -i ~/.ssh/bobskey bob@10.10.10.10 << EOF
echo I am creating a file called Apples in the /tmp folder
touch /tmp/apples
exit
EOF
Everything in between the two "EOF"s will be run on the remote server.
The tags need to be the same. If you decide to replace "EOF" with "WayneGretzky", you must change the 2nd EOF also.
You seem to assume that when you run ssh to connect to a server, the rest of the commands in the file are passed to the remote shell running in ssh. They are not; instead they will be processed by the local shell once ssh terminates and returns control to it.
To run remote commands through ssh there are a couple of things you can do:
Write the commands you want to execute to a file. Copy the file to the remote server using scp, and execute it with ssh user@remote command
Learn a bit of TCL and use expect
Write the commands in a heredoc, but be careful with variable substitution: substitution happens in the client, not on the server. For example this will output your local home directory, not the remote:
ssh remote <<EOF
echo $HOME
EOF
To make it print the remote home directory you have to use echo \$HOME.
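(Alternatively, quoting the heredoc delimiter disables local substitution for the entire block:)
# With a quoted delimiter nothing is expanded locally, so $HOME here is the remote one.
ssh remote <<'EOF'
echo $HOME
EOF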
Also, remember that data files such as filelist.txt have to be explicitly copied if you want to read them on the remote side.
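Putting the pieces together for the script above, one sketch (update_files.sh is a hypothetical local script holding the inner comparison loop):
# Copy the data file over, then run a local script remotely; `bash -s`
# reads the script from ssh's standard input, so nothing has to be
# pre-installed on the server.
scp filelist.txt "$server":/tmp/filelist.txt
ssh "$server" 'bash -s' < update_files.sh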

FTP File Transfers Using Piping Safely

I have a file forwarding system where a bunch of files are downloaded to a directory, de-multiplexed and copied to individual machines.
The files are forwarded when they are received by the master server. And files normally arrive in bursts. (Auth by ssh keys)
This script creates the sftp session and uses a pipeline to watch the head of a FIFO.
HOST=$1
pipe=/tmp/pipes/${HOST%%.*}

ps aux | grep -v grep | grep sftp | grep "user@$HOST" > /dev/null
if [[ $? == 0 ]]; then
  echo "FTP is Running on this Server"
  exit
else
  pid=`ps aux | grep -v grep | grep tail | tr -s ' ' | grep $pipe`
  [[ $? == 0 ]] && kill -KILL `echo $pid | cut -f2 -d' '`
fi

if [[ ! -p $pipe ]]; then
  mkfifo $pipe
fi

tail -n +1 -f $pipe | sftp -o 'ServerAliveInterval 60' user@$HOST > /dev/null &
echo cd /tmp/data >> $pipe  # sends a command to the host
echo "Started FTP to $HOST"
Update: I ended up changing the cleanup code to use ps aux to see if an ftp session is running, and subsequently whether the tail -f is still running, grepping by user@host and the name of the pipe respectively. This is done when the script is called, and the script is called whenever I try to upload a file.
I.e.:
FILENAME=`basename $1`

function transfer {
  echo cd /apps/data >> $2  # for safety
  echo put $1 .$FILENAME >> $2
  echo rename .$FILENAME $FILENAME >> $2
  echo chmod 0666 $FILENAME >> $2
}

./ftp.sh host
[ -p $pipedir/host ] && transfer $1 $pipedir/host
Files received on the master server are caught by incron, which writes a put command and the available file's location to the FIFO, to be sent by sftp (a rename is also performed).
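(For reference, a sketch of the incrontab entry this implies, assuming the incoming directory is /tmp/data and a hypothetical forward.sh wrapping the transfer function; in incron's syntax $@ is the watched path and $# the file name:)
/tmp/data IN_CLOSE_WRITE /usr/local/bin/forward.sh $@/$#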
My question is: is this safe? Could this crash on ftp errors/events? I'm not really worried about login errors.
The goal is to reduce the number of ftp logins to a single session per minute (or longer) interval, and to allow files to be forwarded as they're received (dynamic commands).
I'd prefer to use standard Ubuntu libraries, if possible.
EDIT: After testing and working through some issues, the server now simply runs with:
[[ -p $pipe ]] && echo FTP is Running on this Server
ln -s $pipe $lock &> /dev/null || (echo FTP is Running on this Server && exit)
[[ ! -p $pipe ]] && mkfifo $pipe
( tail -n +1 -F $pipe & echo $! > $pipe.pid ) |
  tee >( sed "/tail:/ q" >/dev/null && kill $(cat $pipe.pid) |& rm -f $pipe >/dev/null; ) |
  sftp -i ~/.ssh/$HOST.rsa -oServerAliveInterval=60 user@$HOST &
rm -f $lock
It's rather simple but works nicely.
You might be interested in setting up a simpler (and more robust) synchronization infrastructure:
if a given host is not connected when a file arrives, it never receives it (if I understand your code correctly).
I would do something like
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
on the client machines, either periodically or by event. rsync intelligently synchronizes the files by their timestamp and size (-a contains -t).
The event would be some process termination, like the following.
The client does (configure private-key usage in ~/.ssh/config for the host):
#!/bin/bash
while :; do
  ssh user@host /srv/bin/sleepListener 600
  rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
done
On the server, /srv/bin/sleepListener is a symbolic link to /bin/sleep.
After receiving a new file, the server runs:
killall sleepListener
Note: every 10 minutes a full check is performed; if nodes go offline/online it doesn't matter.

How can I detect if my shell script is running through a pipe?

How do I detect from within a shell script if its standard output is being sent to a terminal or if it's piped to another process?
The case in point: I'd like to add escape codes to colorize output, but only when running interactively, not when piped, similar to what ls --color does.
In a pure POSIX shell,
if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi
returns "terminal", because the output is sent to your terminal, whereas
(if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi) | cat
returns "not a terminal", because the output of the parenthetic element is piped to cat.
The -t flag is described in man pages as
-t fd True if file descriptor fd is open and refers to a terminal.
... where fd can be one of the usual file descriptor assignments:
0: standard input
1: standard output
2: standard error
There is no foolproof way to determine if STDIN, STDOUT, or STDERR are being piped to/from your script, primarily because of programs like ssh.
Things that "normally" work
For example, the following bash solution works correctly in an interactive shell:
[[ -t 1 ]] && \
  echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
  echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
  echo 'STDOUT is attached to a redirection'
But they don't always work
However, when executing this command as a non-TTY ssh command, the STD streams always look like they are being piped. To demonstrate this, using STDIN because it's easier:
# CORRECT: Forced-tty mode correctly reports '1', which represents
# no pipe.
ssh -t localhost '[[ -p /dev/stdin ]]; echo ${?}'
# CORRECT: Issuing a piped command in forced-tty mode correctly
# reports '0', which represents a pipe.
ssh -t localhost 'echo hi | [[ -p /dev/stdin ]]; echo ${?}'
# INCORRECT: Non-tty mode reports '0', which represents a pipe,
# even though one isn't specified here.
ssh -T localhost '[[ -p /dev/stdin ]]; echo ${?}'
Why it matters
This is a pretty big deal, because it implies that there is no way for a bash script to tell whether a non-tty ssh command is being piped or not. Note that this unfortunate behavior was introduced when recent versions of ssh started using pipes for non-TTY STDIO. Prior versions used sockets, which COULD be differentiated from within bash by using [[ -S ]].
When it matters
This limitation normally causes problems when you want to write a bash script that has behavior similar to a compiled utility, such as cat. For example, cat allows the following flexible behavior in handling various input sources simultaneously, and is smart enough to determine whether it is receiving piped input regardless of whether non-TTY or forced-TTY ssh is being used:
ssh -t localhost 'echo piped | cat - <( echo substituted )'
ssh -T localhost 'echo piped | cat - <( echo substituted )'
You can only do something like that if you can reliably determine if pipes are involved or not. Otherwise, executing a command that reads STDIN when no input is available from either pipes or redirection will result in the script hanging and waiting for STDIN input.
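One partial mitigation for the hang itself (not a pipe detector), assuming bash 4+, where read -t 0 reports whether input is already available without consuming any data; note that a pipe whose writer has not yet written anything still reports no input:
# read -t 0 returns success only if input is already readable on stdin.
if read -t 0; then
  cat -  # something is buffered; safe to consume stdin
else
  echo 'no input immediately available on stdin' >&2
fi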
Other things that don't work
In trying to solve this problem, I've looked at several techniques that fail to solve the problem, including ones that involve:
examining SSH environment variables
using stat on /dev/stdin file descriptors
examining interactive mode via [[ "${-}" =~ 'i' ]]
examining tty status via tty and tty -s
examining ssh status via [[ "$(ps -o comm= -p $PPID)" =~ 'sshd' ]]
Note that if you are using an OS that supports the /proc virtual filesystem, you might have luck following the symbolic links for STDIO to determine whether a pipe is being used or not. However, /proc is not a cross-platform, POSIX-compatible solution.
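For example, on Linux (not portable, as noted), the symlink target distinguishes the cases:
# Linux-only sketch: classify stdout by the target of its /proc symlink.
case "$(readlink /proc/$$/fd/1)" in
  /dev/pts/*|/dev/tty*) echo 'stdout: terminal' ;;
  pipe:*)               echo 'stdout: pipe' ;;
  socket:*)             echo 'stdout: socket' ;;
  *)                    echo 'stdout: regular file or other' ;;
esac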
I'm extremely interested in solving this problem, so please let me know if you think of any other technique that might work, preferably POSIX-based solutions that work on both Linux and BSD.
The command test (a builtin in Bash) has an option to check whether a file descriptor is a tty.
if [ -t 1 ]; then
  : # Standard output is a tty
fi
See "man test" or "man bash" and search for "-t".
You don't mention which shell you are using, but in Bash, you can do this:
#!/bin/bash
if [[ -t 1 ]]; then
  : # stdout is a terminal
else
  : # stdout is not a terminal
fi
On Solaris, the suggestion from Dejay Clayton mostly works; the -p test does not respond as desired.
File bash_redir_test.sh looks like:
[[ -t 1 ]] && \
  echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
  echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
  echo 'STDOUT is attached to a redirection'
On Linux, it works great:
:$ ./bash_redir_test.sh
STDOUT is attached to TTY
:$ ./bash_redir_test.sh | xargs echo
STDOUT is attached to a pipe
:$ rm bash_redir_test.log
:$ ./bash_redir_test.sh >> bash_redir_test.log
:$ tail bash_redir_test.log
STDOUT is attached to a redirection
On Solaris:
:# ./bash_redir_test.sh
STDOUT is attached to TTY
:# ./bash_redir_test.sh | xargs echo
STDOUT is attached to a redirection
:# rm bash_redir_test.log
bash_redir_test.log: No such file or directory
:# ./bash_redir_test.sh >> bash_redir_test.log
:# tail bash_redir_test.log
STDOUT is attached to a redirection
:#
The following code (tested only in Linux Bash 4.4) should not be considered portable nor recommended, but for the sake of completeness here it is:
ls /proc/$$/fdinfo/* >/dev/null 2>&1 || grep -q 'flags: 00$' /proc/$$/fdinfo/0 && echo "pipe detected"
I don't know why, but it seems that file descriptor "3" is somehow created when a Bash function has standard input piped.
