Copying variables to local text file from multiple ssh awk output - bash

I'm reasonably new to shell scripting, so I've had difficulty applying the answers to similar questions to my problem.
I am trying to ssh to a remote server, perform multiple awk commands, and return the value of each to a local .txt file. I am also trying to ssh to other servers, perform similar commands, and return them to the same text file.
If I manually ssh into the remote server and run df -h | awk '$6 == "/" {print $5; exit}' I get the % used value for the root directory from the df -h command.
Thus far, I have the following, but while it enters and exits the remote server it doesn't save the value.
> testfile.txt
ssh {$CURRENT_ENV} << EOF
VAL=\$(df -h | awk '\$6 == "/" {print \$5; exit}')
echo "\$VAL > testfile.txt"
exit
EOF
I've looked at single line awk returns, but as I have multiple commands to run it doesn't seem optimal. I would appreciate any suggestions!

First of all, your ssh command. Using a here-document is a good idea. You can improve it in two ways by:
indenting with TABs owing to the <<- syntax. This is purely cosmetic and makes your code more readable.
avoiding the escaping of special characters like $ by quoting EOF. This is not only cosmetic but makes your code less error-prone.
This gives:
ssh {$CURRENT_ENV} <<- 'EOF'
VAL=$(df -h | awk '$6 == "/" {print $5; exit}')
echo "$VAL > testfile.txt"
exit
EOF
(we could even put a tab before the EOF)
Now, your code:
You don't tell us what CURRENT_ENV is. I assume it is something like user@server. To use that variable, write "$CURRENT_ENV", not {$CURRENT_ENV}. Unless you know what you are doing, when using a variable, always enclose it in double-quotes to avoid any undesirable side effects.
You put the result of df into the variable VAL and write its content to testfile.txt:
As a universal convention, use lowercase for your variable names (unless they are exported to the environment, which is not the case here); i.e. this should be val, not VAL.
echo "$val > testfile.txt" won't write anything into testfile.txt because your redirection is inside the double-quotes, and thus belongs to the text that is echo-ed. The proper command would be echo "$val" > testfile.txt
Now, think about it: all of this, including the echo, is executed on the remote server, so it would create the file testfile.txt there, not on your machine. This is not what you want, so let's remove that echo line. Let's also remove val= since val is not needed any longer.
The exit command is not needed. Once the last command has been read and executed, the ssh session ends anyway.
We are left with this:
ssh "$CURRENT_ENV" <<- 'EOF'
df -h | awk '$6 == "/" {print $5; exit}'
EOF
(remember, there is a tab before df, but single spaces wouldn't do any harm in this case)
As it is now, your code outputs everything to your terminal. Let's now redirect this to your local file testfile.txt:
ssh "$CURRENT_ENV" <<- 'EOF' > testfile.txt
df -h | awk '$6 == "/" {print $5; exit}'
EOF
OK, this works for one server. You told us there are actually several of them. You don't show us your code, so I will assume there is a loop somewhere:
for ssh_target in u1@server1 u2@server2 ...; do
ssh "$ssh_target" <<- 'EOF' > testfile.txt
df -h | awk '$6 == "/" {print $5; exit}'
EOF
done
Almost there! The problem with this command is that each iteration of the loop overwrites the content of testfile.txt. The solution is to redirect the output of the for loop, NOT the ssh command inside it:
for ssh_target in u1@server1 u2@server2 ...; do
ssh "$ssh_target" <<- 'EOF'
df -h | awk '$6 == "/" {print $5; exit}'
EOF
done > testfile.txt
(the redirection must be put after done)
Here it is!
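Since you mention running multiple awk commands, note that you can simply put several commands in the here-document, one per line. Here is a minimal sketch, with placeholder server names and a purely illustrative second df/awk line, that also labels each server's block so the values in testfile.txt stay distinguishable:
for ssh_target in u1@server1 u2@server2; do
echo "=== $ssh_target ==="
ssh "$ssh_target" <<- 'EOF'
df -h | awk '$6 == "/" {print $5; exit}'
df -h | awk '$6 == "/home" {print $5; exit}'
EOF
done > testfile.txt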

I think you can do this:
# Write all of this script's stdout to testfile.txt
exec > testfile.txt
# Run df -h on remote host, and pipe the stdout of df -h to awk
ssh "$host" df -h | awk '$6 == "/" {print $5; exit}'
Note that if you specify a filepath with df then you can get information for just that filepath. In your case you are only interested in /, so you can specify that.
If you are only interested in the percent used, you can use the --output=pcent option, and then use tail to get only the percentage part (leaving out the Use% header):
df --output=pcent / | tail -n 1
This will produce output with one or two leading spaces if the percent used is less than 100%. The leading spaces can be deleted with tr:
df --output=pcent / | tail -n 1 | tr -d ' '
If for some reason you want to avoid having two pipes, but you still want to remove leading spaces, you can use awk:
df --output=pcent / | awk 'NR == 2 {print $1}'
However, using awk might be slower; I'm not sure.
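Putting the pieces together for several servers, here is a minimal sketch; the hostnames are placeholders, and it assumes the remote df is GNU coreutils, since --output is a GNU extension:
for host in server1 server2; do
printf '%s ' "$host"
ssh "$host" "df --output=pcent / | tail -n 1 | tr -d ' '"
done > testfile.txt
Each line of testfile.txt then looks something like: server1 42%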

If you want to print anything to a local file, just put the redirection immediately after the EOF.
Here is an example of how to write VAL both to a remote file and to a local file. You can also use df --output=pcent to show only the percentage column. Unfortunately there is no option to hide the header, hence the tail -n1.
ssh "$CURRENT_ENV" << EOF >> localfile.txt
VAL=\$(df --output=pcent /|tail -n1)
echo "$VAL" > remotefile.txt
echo "$VAL" # for localfile
exit
EOF

Related

while-read loop broken on ssh-command

I have a bash script that moves backup files to a remote location. On a few occasions the temporary HDDs on the remote server had no space left, so I added an md5 check to compare the local and remote files.
However, the remote ssh breaks the while-loop (i.e. it runs only for the first item listed in the dir_list file).
# populate /tmp/dir_list
(while read dirName
do
# create archive files for sub-directories
# populate listA variable with archive-file names
...
for fileName in $listA; do
scp /PoolZ/__Prepared/${dirName}/$fileName me@server:/archiv/${dirName}/
md5_local=`md5sum /PoolZ/__Prepared/${dirName}/${fileName} | awk '{ print $1 }'`
tmpRemoteName=`printf "%q\n" "$fileName"` # some file-names have strange characters
md5_remote=`ssh me@server 'md5sum /archiv/'${dirName}'/'$tmpRemoteName | awk '{ print $1 }'`
if [[ $md5_local == $md5_remote ]]; then
echo "Checksum of ${fileName}: on local ${md5_local}, on remote ${md5_remote}."
mv -f /PoolZ/__Prepared/${dirName}/$fileName /PoolZ/__Backuped/${dirName}/
else
echo "Checksum of ${fileName}: on local ${md5_local}, on remote ${md5_remote}."
# write eMail
fi
done
done) < /tmp/dir_list
When started, the script gives matching md5-sums for the first directory listed in dir_list. The files are also copied to the expected directories, both locally and remotely, and then the script quits.
If I remove the line:
md5_remote=`ssh me@server 'md5sum /archiv/'${dirName}'/'$tmpRemoteName | awk '{ print $1 }'`
then the md5 comparison obviously no longer works, but the script goes through the whole list from dir_list.
I also tried to use double-quotes:
md5_remote=`ssh me@server "md5sum /archiv/${dirName}/${tmpRemoteName}" | awk '{ print $1 }'`
but there was no difference (broken dirName-loop).
I went so far as to replace the md5_remote... line with a remote ls command without any shell variables, and eventually I even tried a line that doesn't assign to the md5_remote variable at all, i.e.:
ssh me@server "ls /dir/dir/dir/ | head -n 1"
Every solution that contains an ssh command breaks the while-loop. I have no idea why ssh should break a bash loop. Any suggestions are welcome.
I'm plainly stupid. I found the answer on (what a surprise) stackoverflow.com.
ssh breaks out of while-loop in bash
As suggested, I added a redirection from /dev/null and it works now. The problem was that ssh was consuming the rest of dir_list from the loop's shared stdin, so giving ssh its own stdin keeps the loop intact:
md5_remote=`ssh me@server 'md5sum /archiv/'${dirName}'/'$tmpRemoteName < /dev/null | awk '{ print $1 }'`
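For what it's worth, ssh's -n option achieves the same thing without the explicit redirection, since it makes ssh read its stdin from /dev/null:
md5_remote=`ssh -n me@server 'md5sum /archiv/'${dirName}'/'$tmpRemoteName | awk '{ print $1 }'`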

Bash awk append to same line

There are numerous posts about removing leading white space and appending an entry to a single existing line in a file using awk. None of my attempts work; here are just three examples of the many I have tried.
Say I have a file called $log with a single line
a:b:c
and I want to add a fourth entry,
awk '{ print $4"d" }' $log | tee -a $log
output seems to be a newline
a:b:c:
d
whereas, I want all on the same line;
a:b:c:d
try
BEGIN { FS = ":" } ; awk '{ print $4"d" }' $log | tee -a $log
or this, to avoid a newline:
awk 'BEGIN { ORS=":" }; { print $4"d" }' $log | tee -a $log
no change
a:b:c:
d
awk is placing a space after c: and then writing d to the next line.
EDIT: | tee -a $log appears to be necessary to write the additional string to the file.
$log contains 39 variables and was generated using awk without | tee -a
odd...
The actual command to write $40 to the single line entries
awk '{ print $40"'$imagedir'" }' $log
output
+ awk '{ print $40"/home/geoland/Asterism-DEVEL/DSO" }'
/home/geoland/.asterism/log
but this does not write to the $log file.
How should I append d to the same line, without leading white space, using awk? I am also looking at sed, xargs and other alternatives.
Using awk:
awk '{ print $0":d" }' file
Using sed:
sed 's/$/:d/' file
Using only bash:
while IFS= read -r line; do
echo "$line:d"
done < file
Using sed:
$ echo a:b:c | sed 's,\(^.*$\),\1:d,'
a:b:c:d
Thanks all... This is the solution I went with. I also needed to write the entire line to a perpetual log file because the log file is overwritten at each new process instance.
I will further investigate an awk solution.
logname=$imagedir/log_$name
while IFS=: read -r line; do
echo "$line$imagedir"
done < $log | tee $logname
This places $imagedir directly behind the last IFS ':' separator.
There is probably room for refinement.
I too am not entirely sure what you're trying to do here.
Your command line, awk '{ print $4"d" }' $log | tee -a $log is problematic in a number of ways.
First, your awk script tries to print the 4th field, which is empty. Unless you say otherwise, fields are separated by whitespace, and the string a:b:c has no whitespace. So awk prints just "d". And tee -a appends to your existing logfile, so what you're seeing is the original data, along with the d printed by awk. That's totally expected.
Second, you have tee appending to the same file that awk is in the process of reading. This won't make an endless loop, as awk should stop reading the input file after whatever was the last byte when the file was opened, but it does mean you may end up with repeated data in there.
Your other attempts, aside from some syntactical errors, all suffer from the same assumption that $4 means something that it does not.
The following awk snippet sets the input and output field separators to :, then sets the 4th field to "d", then prints the line.
$ echo "a:b:c" | awk 'BEGIN{FS=OFS=":"} {$4="d"} 1'
a:b:c:d
Is that what you want?
If you really do need to append this data to an existing log file, you can do so with tee -a or simple >> redirection. Just bear in mind that awk will only see the content of the file as of the time it was run, and by appending, you are not replacing lines.
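If you actually want to replace the line in the file rather than append a modified copy, here is a minimal sketch of the usual temp-file idiom, reusing the awk program above:
tmp=$(mktemp)
awk 'BEGIN{FS=OFS=":"} {$4="d"} 1' "$log" > "$tmp" && mv "$tmp" "$log"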
One other thing. If you are actually hoping to use the content of the shell variable $imagedir inside awk, you should pass the variable in rather than exiting your quotes. For example:
$ echo "a:b:c" | awk -v d="foo/bar" 'BEGIN{FS=OFS=":"} {$4=d} 1'
a:b:c:foo/bar
sed "s|$|$imagedir|" file | tee newfile
This does the trick: read 'file' and write its contents, with the substitution applied, to 'newfile', so that the image directory can be read by a secondary standalone process.
Because the variable is a directory path with several /, these would need to be escaped so they are not interpreted as sed delimiters. I had difficulty with this when using a variable.
A neater option was to use an alternative delimiter, |, not to be confused with the shell pipe that follows.

Splitting and looping over live command output in Bash

I am archiving and using split to produce several parts, while also printing the output filenames (split prints them on STDERR with --verbose, and I am redirecting that to STDOUT). However, the loop over the output data doesn't happen until after the command returns.
Is there any way to actively loop over the STDOUT of a command before it returns?
The following is what I currently have, but it only prints the list of filenames after the command returns:
export IFS=$'\n'
for line in `data_producing_command | split -d -b $CHUNK_SIZE --verbose - $ARCHIVE_PREFIX 2>&1`; do
FILENAME=`echo $line | awk '{ print $3 }'`
echo " - $FILENAME"
done
Try this:
data_producing_command | split -d -b $CHUNK_SIZE --verbose - $ARCHIVE_PREFIX 2>&1 | while read -r line
do
FILENAME=`echo $line | awk '{ print $3 }'`
echo " - $FILENAME"
done
Note however that any variables set in the while loop will not preserve their values after the loop (the while loop runs in a subshell).
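If you do need those variables after the loop, one common workaround is bash process substitution, which keeps the loop in the current shell; a sketch based on your original snippet:
while read -r line
do
FILENAME=`echo $line | awk '{ print $3 }'`
echo " - $FILENAME"
done < <(data_producing_command | split -d -b $CHUNK_SIZE --verbose - $ARCHIVE_PREFIX 2>&1)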
There's no reason for the for loop or the read or the echo. Just pipe the stream to awk:
... | split -d -b $CHUNK_SIZE --verbose - test 2>&1 |
awk '{printf " - %s\n", $3 }'
You are going to see some delay from buffering, but unless your system is very slow or you are very perceptive, you're not likely to notice it.
The command substitution needs[1] to run before the for loop can start.
for item in $(command which produces items); do ...
whereas a while read -r can start consuming output as soon as the first line is produced (or, more realistically, as soon as the output buffer is full):
command which produces items |
while read -r item; do ...
[1] Well, it doesn't absolutely need to, from a design point of view, I suppose, but that's how it currently works.
As William Pursell already noted, there is no particular reason to run Awk inside a while read loop, because that's something Awk does quite well on its own, actually.
command which produces items |
awk '{ print " - " $3 }'
Of course, with a reasonably recent GNU Coreutils split, you could simply add your other options to
split --filter='printf " - %s\n" "$FILE"; cat > "$FILE"'
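Spelled out against the original command line, that would look something like this sketch (it assumes your split is new enough to have --filter; split runs the filter once per chunk, with the output name in $FILE):
data_producing_command |
split -d -b "$CHUNK_SIZE" --filter='printf " - %s\n" "$FILE"; cat > "$FILE"' - "$ARCHIVE_PREFIX"
Each filename is printed as soon as its chunk starts, so there is no --verbose output to parse at all.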

grep: compare string from file with another string

I have a list of file paths that I need to compare with a string:
git_root_path=$(git rev-parse --show-toplevel)
list_of_files=.git/ForGeneratingSBConfigAlert.txt
cd $git_root_path
echo "These files needs new OSB config:"
while read -r line
do
modfied="$line"
echo "File for compare: $modfied"
if grep -qf $list_of_files" $modfied"; then
echo "Found: $modfied"
fi
done < <(git status -s | grep -v " M" | awk '{if ($1 == "M") print $2}')
$modified is a string variable that stores the path to a file.
Pattern file example:
SVCS/resources/
SVCS/bus/projects/busCallout/
SVCS/bus/projects/busconverter/
SVCS/bus/projects/Resources/ (ignore .jar)
SVCS/bus/projects/Teema/
SVCS/common/
SVCS/domain/
SVCS/techutil/src/
SVCS/tech/mds/src/java/fi/vr/h/service/tech/mds/exception/
SVCS/tech/mds/src/java/fi/vr/h/service/tech/mds/interfaces/
SVCS/app/cashmgmt/src/java/fi/vr/h/service/app/cashmgmt/exception/
SVCS/app/cashmgmt/src/java/fi/vr/h/service/app/cashmgmt/interfaces/
SVCS/app/customer/src/java/fi/vr/h/service/app/customer/exception/
SVCS/app/customer/src/java/fi/vr/h/service/app/customer/interfaces/
SVCS/app/etravel/src/java/fi/vr/h/service/app/etravel/exception/
SVCS/app/etravel/src/java/fi/vr/h/service/app/etravel/interfaces/
SVCS/app/hermes/src/java/fi/vr/h/service/app/hermes/exception/
SVCS/app/hermes/src/java/fi/vr/h/service/app/hermes/interfaces/
SVCS/app/journey/src/java/fi/vr/h/service/app/journey/exception/
SVCS/app/journey/src/java/fi/vr/h/service/app/journey/interfaces/
SVCS/app/offline/src/java/fi/vr/h/service/app/offline/exception/
SVCS/app/offline/src/java/fi/vr/h/service/app/offline/interfaces/
SVCS/app/order/src/java/fi/vr/h/service/app/order/exception/
SVCS/app/order/src/java/fi/vr/h/service/app/order/interfaces/
SVCS/app/payment/src/java/fi/vr/h/service/app/payment/exception/
SVCS/app/payment/src/java/fi/vr/h/service/app/payment/interfaces/
SVCS/app/price/src/java/fi/vr/h/service/app/price/exception/
SVCS/app/price/src/java/fi/vr/h/service/app/price/interfaces/
SVCS/app/product/src/java/fi/vr/h/service/app/product/exception/
SVCS/app/product/src/java/fi/vr/h/service/app/product/interfaces/
SVCS/app/railcar/src/java/fi/vr/h/service/app/railcar/exception/
SVCS/app/railcar/src/java/fi/vr/h/service/app/railcar/interfaces/
SVCS/app/reservation/src/java/fi/vr/h/service/app/reservation/exception/
SVCS/app/reservation/src/java/fi/vr/h/service/app/reservation/interfaces/
kraken_test.txt
namaker_test.txt
shmaker_test.txt
I need to compare the file of search patterns with a string; is this possible using grep?
I'm not sure I understand the overall logic, but a few immediate suggestions come to mind.
You can avoid grep | awk in the vast majority of cases.
A while loop with a grep on a line at a time inside the loop is an antipattern. You probably just want to run one grep on the whole input.
Your question would still benefit from an explanation of what you are actually trying to accomplish.
cd "$(git rev-parse --show-toplevel)"
git status -s | awk '!/ M/ && $1 == "M" { print $2 }' |
grep -Fxf .git/ForGeneratingSBConfigAlert.txt
I was trying to think of a way to add back your human-readable babble, but on second thought, this program is probably better without it.
The -x option to grep might be wrong, depending on what you are really hoping to accomplish.
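To illustrate with a pattern from your file, assuming .git/ForGeneratingSBConfigAlert.txt contains the line SVCS/common/: with -x, only an exact whole-line match gets through, while without it any path that merely contains a pattern matches too:
$ printf 'SVCS/common/\nSVCS/common/sub/file.txt\n' | grep -Fxf .git/ForGeneratingSBConfigAlert.txt
SVCS/common/
$ printf 'SVCS/common/\nSVCS/common/sub/file.txt\n' | grep -Ff .git/ForGeneratingSBConfigAlert.txt
SVCS/common/
SVCS/common/sub/file.txt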
This should work:
git status -s | grep -v " M" | awk '{if ($1 == "M") print $2}' | \
grep --file=.git/ForGeneratingSBConfigAlert.txt --fixed-strings --line-regexp
Piping the awk output directly to grep avoids the while loop entirely. In most cases you'll find you don't really need to print debug messages and the like in it.
--file takes a file with one pattern to match per line.
--fixed-strings avoids treating any characters in the patterns as special.
--line-regexp anchors the patterns so that they only match if a full line of input matches one of the patterns.
All that said, could you clarify what you are actually trying to accomplish?

awk for different delimiters piped from xargs command

I run an xargs command invoking bash shell with multiple commands. I am unable to figure out how to print two columns with different delimiters.
The command I run is below:
cd /etc/yp
cat "$userlist" | xargs -I {} bash -c "echo -e 'For user {} \n'
grep -w {} auto_*home|sed 's/:/ /' | awk '{print \$1'\t'\$NF}'
grep -w {} passwd group netgroup |cut -f1 -d ':'|sort|uniq;echo -e '\n'"
the output I get is
For user xyz
auto_homeabc.jkl.com:/rtw2kop/xyz
group
netgroup
passwd
I need a tab after the auto_home (since it is a filename), as in
auto_home abc.jkl.com:/rtw2kop/xyz
The entry from the auto_home file is below:
xyz -rw,intr,hard,rsize=32768,wsize=32768 abc.jkl.com:/rtw2kop/xyz
How do I awk for the first field (auto_home) and the last field (abc.jkl.com:/rtw2kop/xyz)? Since I have piped the grep output to awk, '\t' isn't working in the above awk command.
If I understand what you are attempting correctly, then I suggest this approach:
while read user; do
echo "For user $user"
awk -v user="$user" '$1 == user { print FILENAME "\t" $NF }' auto_home
awk -F: -v user="$user" '$1 == user { print FILENAME; nextfile }' passwd group netgroup | sort -u
done < "$userlist"
The basic trick is the read loop, which will read a line into the variable $user from the file named in $userlist; after that, it's all straightforward awk.
I took the liberty of changing the selection criteria slightly; it looked as though you wanted to select for usernames, not strings anywhere in the line. This way, only lines will be selected in which the first token is equal to the currently inspected user, and lines in which other tokens are equal to the username but not the first are discarded. I believe this to be what you want; if it is not, please comment and we can work it out.
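With the sample auto_home line from the question, and assuming the map file is literally named auto_home as in the snippet above, the loop body would print something like:
For user xyz
auto_home	abc.jkl.com:/rtw2kop/xyz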
In the 1st awk command, double-escape the \t to \\t. (You may also need to double-escape the \n.)
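A quick way to sanity-check the double escaping is a stripped-down version of the bash -c string (a minimal sketch, not your full pipeline); the outer double quotes eat one level of backslashes, so \\t reaches awk as \t and prints a real tab:
$ bash -c "awk 'BEGIN{print \"a\\tb\"}'"
a	b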
