How to echo text and have output from a command be sent to a file in parallel - bash

I would like to echo text and also remotely ping a computer and have the output be sent to a log file. I also need to do this in parallel but I am having some trouble with how the output is being sent into the log file.
I would like the output to look like:
host | hostTo | result of ping command
but since I have this running as a background process it is outputting:
host hostTo host hostTo rtt
rtt
rtt
etc...
Is there a way to allow this to be a background process but make it so that the echo is part of that process, so the logfile isn't out of order?
Here's my script; thanks in advance!
for host in `cat data/ips.txt`; do
    echo -n "$host ";
    for hostTo in `cat data/ips.txt`; do
        {
            echo -n "$host $hostTo " >> logs/$host.log;
            (ssh -n -o StrictHostKeyChecking=no -o ConnectTimeout=1 -T username@$host ping -c 10 $hostTo | tail -1 >> logs/$host.log) &
        };
    done;
done

It's possible to do this with awk. What you're basically asking is how to print the hosts and the result of the ping at the same time.
i.e. remove the echo line and change the ssh line to the following:
ssh .... ping -c 10 $hostTo | awk -v from=$host -v to=$hostTo 'END {print from, to, $0}' >> logs/${host}.log
Note that the tail -1 is effectively done inside awk as well, since the END block only sees the last line of input. Embedding shell variables directly inside an awk program tends to be a PITA; assigning them with -v avoids most of the escaping and quoting. [Updated: assign vars in awk with -v]
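To see how the END block folds in the tail -1 step, you can feed awk some stand-in ping output locally (hostA/hostB and the rtt line are placeholders, not real ping results):

```shell
# simulate the last lines of `ping -c 10`; awk's END block only sees the
# final line read, so the separate `tail -1` step is folded into awk itself
printf '%s\n' '10 packets transmitted, 10 received, 0% packet loss' \
  'rtt min/avg/max/mdev = 0.041/0.062/0.085/0.012 ms' |
awk -v from=hostA -v to=hostB 'END {print from, to, $0}'
```

This prints `hostA hostB rtt min/avg/max/mdev = 0.041/0.062/0.085/0.012 ms`, i.e. the two names followed by the last line of the ping output.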
PS. The title of your question is a little unclear; it sounds like you want to pipe your program's output to the display and a file at the same time.

nslookup/dig/drill commands on a file that contains websites to add ip addresses

UPDATE: Still open to solutions using nslookup without parallel, or using dig or drill.
I need to write a script that scans a file containing web page addresses, one per line, and appends to each line the IP address corresponding to the name, using the nslookup command. The script currently looks like this:
#!/usr/bin/env bash
while read ip
do
    nslookup "$ip" |
    awk '/Name:/{val=$NF;flag=1;next} /Address:/ && flag{print val,$NF;val=""}' |
    sed -n 'p;n'
done < is8.input
The input file contains the following websites:
www.edu.ro
vega.unitbv.ro
www.wikipedia.org
The final output should look like:
www.edu.ro 193.169.21.181
vega.unitbv.ro 193.254.231.35
www.wikipedia.org 91.198.174.192
The main problem I have with the current state of the script is that it takes the canonical names from nslookup (which is fine for www.edu.ro) instead of the aliases when those are available. My output looks like this:
www.edu.ro 193.169.21.181
etc.unitbv.ro 193.254.231.35
dyna.wikimedia.org 91.198.174.192
I was thinking about implementing an if-else for the aliases, but I don't know how to add one to the current command. The script can also be changed entirely if anyone knows a better way to format nslookup's output to match the output given above.
Minimalist workaround quasi-answer. Here's a one-liner replacement for the script using GNU parallel, host (less work to parse than nslookup), and sed:
parallel "host {} 2> /dev/null |
sed -n '/ has address /{s/.* /'{}' /p;q}'" < is8.input
...or using nslookup at the cost of added GNU sed complexity.
parallel "nslookup {} 2> /dev/null |
sed -n '/^A/{s/.* /'{}' /;T;p;q;}'" < is8.input
...or using xargs:
xargs -I '{}' sh -c \
"nslookup {} 2> /dev/null |
sed -n '/^A/{s/.* /'{}' /;T;p;q;}'" < is8.input
Output of any of those:
www.edu.ro 193.169.21.181
vega.unitbv.ro 193.254.231.35
www.wikipedia.org 208.80.154.224
Replace your complete nslookup line with:
echo "$IP $(dig +short "$IP" | grep -m 1 -E '^[0-9.]{7,15}$')"
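The grep filter is what skips past alias (CNAME) lines when the queried name is not canonical; you can check it against canned dig +short output (the two lines below are a hand-written stand-in for what dig would print for an aliased name):

```shell
# `dig +short` prints any CNAME targets before the final address; keep only
# the first line that looks like a dotted-quad IPv4 address
printf '%s\n' 'dyna.wikimedia.org.' '91.198.174.192' |
grep -m 1 -E '^[0-9.]{7,15}$'
```

This prints only `91.198.174.192`, which is why the queried name (echoed separately) rather than the canonical name ends up in the output.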
This might work for you (GNU sed and host):
sed '/\S/{s#.*#host & | sed -n "/ has address/{s///p;q}"#e}' file
For all non-empty lines: invoke the host command on the supplied host name and pipe the results to another invocation of sed which strips out text and quits after the first result.

Unable to redirect output of a perl script to a file

Even though the question sounds annoyingly silly, I am stuck with this. The described issue occurs on both Ubuntu 14.04 and CentOS 6.3.
I am using a perl script called netbps as posted in the answer (by RedGrittyBrick): https://superuser.com/questions/356907/how-to-get-real-time-network-statistics-in-linux-with-kb-mb-bytes-format-and-for
The above script basically takes the output of tcpdump (a command whose details we don't need to know here) and represents it in a different format. Note that the script does this in streaming mode (i.e., the output is produced on the fly).
Hence, my command looks like this:
tcpdump -i eth0 -l -e -n "src portrange 22-233333 or dst portrange 22-23333" 2>&1 | ./netbps.prl
And the output produced on the shell/console looks like this:
13:52:09 47.86 Bps
13:52:20 517.54 Bps
13:52:30 222.59 Bps
13:52:41 4111.77 Bps
I am trying to capture this output to a file, however, I am unable to do so. I have tried the following:
Redirect to file:
tcpdump -i eth0 -l -e -n "src portrange 22-233333 or dst portrange 22-23333" 2>&1 | ./netbps.prl > out.out 2>&1
This creates an empty out.out file. No output appears on the shell/console.
Pipe and grep:
tcpdump -i eth0 -l -e -n "src portrange 22-233333 or dst portrange 22-23333" 2>&1 | ./netbps.prl 2>&1 | grep "Bps"
No output appears on the shell/console.
I don't know much about perl, but this seems to me like a buffering issue -- not sure though? Any help will be appreciated.
It is a buffering problem. Add the line STDOUT->autoflush(1) to netbps and it will work.
STDOUT is line-buffered only while it is attached to a terminal, in which case the newline at the end of printf triggers a flush; once it is redirected to a file it becomes block-buffered like any normal file. You can see this with...
$ perl -e 'while(1) { print "foo\n"; sleep 5; }'
vs
$ perl -e 'while(1) { print "foo\n"; sleep 5; }' > test.out

Processing the real-time last line in a currently being written text file

I have a text file which is open and logs the activities performed by process P1 on the system. I was wondering how, in a bash script, I can read the last line of this file in real time and echo a message, say "done was seen", when that line equals "done".
You could use something like this :
tail -f log.txt | sed -n '/^done$/q' && echo done was seen
Explanation:
tail -f will output appended data as the file grows
sed -n '/^done$/q' will exit when a line containing only done is encountered, ending the command pipeline.
This should work for you:
tail -f log.txt | grep -q -m 1 done && echo done was seen
The -m flag to grep means "exit after N matches", and the && ensures that the echo statement will only be done on a successful exit from grep.
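Both variants can be tried out against a log that is still being written; here a short background loop stands in for process P1 (the file and line contents are made up for the demo):

```shell
log=$(mktemp)
# stand-in for process P1: keep appending lines, with "done" near the end
( for line in working working done bye; do echo "$line" >> "$log"; sleep 0.2; done ) &
tail -f "$log" | grep -q -m 1 '^done$' && echo "done was seen"
```

One extra line is written after "done" so that tail's next write into the broken pipe raises SIGPIPE and the pipeline can finish; with a real log that keeps growing, this happens naturally.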

Bash script: How to remote to a computer run a command and have output pipe to another computer?

I need to create a bash script that can ssh into a computer (machine B), run a command, and have the output piped back to a .txt file on machine A. How do I go about doing this? Ultimately there will be a list of computers that I ssh into and run a command on, with all of the output appended to the same .txt file on machine A.
UPDATE: OK, so I followed what That Other Guy suggested, and this is what seems to work:
File=/library/logs/file.txt
ssh -n username@<ip> "$(< testscript.sh)" > $File
What I need to do now is, instead of manually entering an IP address, have the script read from a .txt file containing hostnames and place each one in a variable that substitutes for the IP address. For example: ssh username@Variable, where "Variable" changes each time a word is read from the file of hostnames. Any ideas how to go about this?
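A sketch of that substitution, assuming a hosts.txt with one name per line (hosts.txt and the hostnames are made up, and the ssh call is echoed as a dry run rather than executed):

```shell
# hypothetical input file, one hostname per line
printf '%s\n' hostA hostB > hosts.txt

File=file.txt
: > "$File"
while read -r host; do
  # the real call would be:  ssh -n "username@$host" "$(< testscript.sh)" >> "$File"
  echo "would run: ssh -n username@$host" >> "$File"
done < hosts.txt
cat "$File"
```

Note the append redirection (>>) inside the loop, so each host's output is added to the same file rather than overwriting it.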
This should do it
ssh userB@machineB "some command" | ssh userA@machineA "cat - >> file.txt"
With your commands:
ssh userB@machineB <<'END' | ssh userA@machineA "cat - >> file.txt"
echo Hostname=$(hostname) LastChecked=$(date)
ls -l /applications/utilities/Disk\ Utility.app/contents/Plugins/*Partition.dumodule* | awk '{printf "Username=%s DateModified=%s %s %s\n", $3, $6, $7, $8}'
END
You could replace the ls -l | awk pipeline with a single stat call; note that the BSD stat shipped with OSX uses a different syntax from GNU stat (e.g. stat -f '%Su' to print the owner's user name).

store command output in variable

I am working on a script that ssh-es into a few systems (listed in lab.txt), runs two commands, stores the output of the commands in two different variables, and prints them.
Here is the script used:
#!/bin/bash
while read host; do
    ssh -n root@$host "$(STATUS=$(awk 'NR==1{print $1}' /etc/*release) \
    OS=$(/opt/agent/bin/agent.sh status | awk 'NR==1{print $3 $4}'))"
    echo $STATUS
    echo $OS
done < lab.txt
The lab.txt file contains few Ips where I need to login, execute and print the command output.
~#] cat lab.txt
192.168.1.1
192.168.1.2
While executing the script, the ssh login prompt of 192.168.1.1 is shown, and after entering the password the output is blank. The same happens for the next IP, 192.168.1.2.
When I execute these commands manually on 192.168.1.1, the following is returned.
~]# awk 'NR==1{print $1}' /etc/*release
CentOS
~]# /opt/agent/bin/agent.sh status | awk 'NR==1{print $3 $4}'
isrunning
What could be wrong with the script? Is there a better way of doing this?
As the comment says, you are setting the variables inside the bash session on the server side and trying to read them from the client side.
If you want to assign the variables in the client script you need to put the assignment in front of the ssh command, and separate the two assignments. Something like the following.
STATUS=$(ssh -n root@$host "awk 'NR==1{print \$1}' /etc/*release")
OS=$(ssh -n root@$host "/opt/agent/bin/agent.sh status | awk 'NR==1{print \$3 \$4}'")
You need to do two ssh commands. It also simplifies things if you run awk on the client rather than the server, because quoting in the ssh command gets complicated.
while read host; do
    STATUS=$(ssh -n root@$host 'cat /etc/*release' | awk 'NR==1{print $1}')
    OS=$(ssh -n root@$host /opt/agent/bin/agent.sh status | awk 'NR==1{print $3 $4}')
    echo $STATUS
    echo $OS
done < lab.txt
with one ssh statement:
read STATUS OS < <(ssh -n root@$host "echo \
\$(awk 'NR==1{print \$1}' /etc/*release) \
\$(/opt/agent/bin/agent.sh status | awk 'NR==1{print \$3 \$4}')")
echo $STATUS
echo $OS
Explanation:
The <(command) syntax is called process substitution. You can use it anywhere a file name is expected.
Example:
sdiff <(echo -e "1\n2\n3") <(echo -e "1\n3")
The sdiff command expects two files as arguments. With the process substitution syntax you can pass commands as arguments instead (as fake files, so to speak).
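The same trick drives the read line above: read splits the single output line of the substituted command into two variables (the values here are stand-ins for the real STATUS and OS):

```shell
# read splits the command's one output line on whitespace into two variables
read STATUS OS < <(echo "CentOS isrunning")
echo "$STATUS"
echo "$OS"
```

This prints `CentOS` and `isrunning` on separate lines. Note that `< <(...)` is bash-specific, which matches the scripts above.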