How to append the rest of the command from a variable? - ksh

In the sample below, I would like to print text on the screen and also append that text to a file when the variable cpstdout is set to 1; otherwise, only print the text on the screen. I need the echo part to adapt to the append variable. Is there any way to correct my code?
#!/bin/ksh
cpstdout=1
if [ $cpstdout -eq 1 ]; then
append="| tee somefile"
else
append=""
fi
echo "test string" $append
Now the result is just like this:
./test.sh
test string | tee somefile
(no file is created, of course)
Example of the print function:
print_output(){
printf "\t/-------------------------------------------------\\ \n"
for i in "$#"; do
printf "\t| %-14s %-32s |\n" "$(echo $i | awk -F, '{print $1}')" "$(echo $i | awk -F, '{print $2}')"
shift
done
printf "\t\-------------------------------------------------/\n"
}

Define your appending command as a function:
output_with_append() {
tee -a somefile <<<"$1"
}
Then, in the if, set a variable to the appropriate outputting function:
if [ $cpstdout -eq 1 ]; then
output=output_with_append
else
output=echo
fi
Finally, use variable expansion to run the command:
$output "test_string"
Note that I've used tee -a since you said you wanted to append to a file and not overwrite it.
Setting cpstdout to $1 so we can control it through a command-line parameter:
cpstdout="$1"
An example session then looks like this:
$ ./test.sh 1
test_string
$ ./test.sh 1
test_string
$ cat somefile
test_string
test_string
$ ./test.sh 0
test_string
$ cat somefile
test_string
test_string
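For reference, a complete test.sh assembled from the pieces above might look like this sketch (ksh93 syntax for the <<< here-string; the file name somefile and the 0/1 argument convention come from the examples above):
#!/bin/ksh
# Sketch assembled from the snippets in this answer.
cpstdout="$1"                 # 1 = also append to somefile, anything else = screen only

output_with_append() {
    tee -a somefile <<<"$1"   # print to the screen and append a copy to somefile
}

if [ "$cpstdout" -eq 1 ]; then
    output=output_with_append
else
    output=echo
fi

$output "test_string"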

Related

How can I add quotes around each word stored in a variable in shell script

I have a variable foo.
echo "print foo" "$foo" ---> abc,bc,cde
I want to put quotes around each value.
Expected result = 'abc','bc','cde'.
I have tried this way, but it's not working:
join_lines() {
local IFS=${1:-,}
set --
while IFS= read -r line; do set -- "$@" "$'line'"; done
echo "$*"
}
Could you please try the following; it is written and tested in GNU awk with the shown samples.
Without loop:
var="abc,bc,cde"
echo "$var" | awk -v s1="'" 'BEGIN{FS=",";OFS="\047,\047"} {$1=$1;$0=s1 $0 s1} 1'
With a loop, the usual way to go through all the (comma-separated) fields:
var="abc,bc,cde"
echo "$var" | awk -v s1="'" 'BEGIN{FS=OFS=","} {for(i=1;i<=NF;i++){$i=s1 $i s1}} 1'
Output will be 'abc','bc','cde'.
As an alternative, using sed: replace every , with ',' and add a ' at the beginning and end of the line to wrap the first/last tokens.
sed -e "s/^/'/" -e "s/$/'/" -e "s/,/','/g"
On the surface, the question is how to convert a comma-separated list of values (stored in a shell variable) into a comma-separated list of quoted tokens. Extending the logic provided by the OP, but using shell arrays:
foo="abc,bc,cde"
IFS=, read -a items <<< "$foo"
result=
for r in "${items[#]}" ; do
[ "$result" ] && result+=","
result+="'$r'"
done
echo "RESULT=$result"
If needed, the logic can be placed into a function/filter:
function join_lines {
local -a items
local input result
while IFS=, read -a items ; do
result=
for r in "${items[#]}" ; do
[ "$result" ] && result+=","
result+="'$r'"
done
echo "$result"
done
}
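A usage sketch of the filter above (bash, since it uses local -a):
$ foo="abc,bc,cde"
$ echo "$foo" | join_lines
'abc','bc','cde'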

Convert substring through command

Basically, how do I make a string substitution in which the substituted string is transformed by an external command?
For example, given the line 5aaecdab287c90c50da70455de03fd1e ./2015/01/26/GOPR0083.MP4, how to pipe the second part of the line (./2015/01/26/GOPR0083.MP4) to command xargs stat -c %.6Y and then replace it with the result so that we end up with 5aaecdab287c90c50da70455de03fd1e 1422296624.010000?
This can be done with a script; however, a one-liner would be nice.
#!/bin/bash
hashtime()
{
while read longhex fname; do
echo "$longhex $(stat -c %.6Y "$fname")"
done
}
if [ $# -ne 1 ]; then
echo "Usage: ${0##*/} infile" 1>&2
exit 1
fi
hashtime < "$1"
exit 0
# one liner
awk 'BEGIN { args="stat -c %.6Y " } { printf "%s ", $1; cmd=args $2; system(cmd); }' infile
A one-liner using GNU sed, which will process the whole file:
sed -E "s/([[:xdigit:]]+) +(.*)/stat -c '\1 %.6Y' '\2'/e" file
or, using plain bash
while read -r hash pathname; do stat -c "$hash %.6Y" "$pathname"; done < file
It's typical to use awk, sed, or cut to reformat input. For example:
line="5aaecdab287c90c50da70455de03fd1e ./2015/01/26/GOPR0083.MP4"
echo "$line" |
cut -d' ' -f2- |
xargs stat -c %.6Y
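The last pipeline only emits the converted timestamp; to keep the checksum in front of it, the line can be split in two passes (a sketch reusing the same $line and GNU stat as above):
hash=$(echo "$line" | cut -d' ' -f1)
mtime=$(echo "$line" | cut -d' ' -f2- | xargs stat -c %.6Y)
echo "$hash $mtime"   # e.g. 5aaecdab287c90c50da70455de03fd1e 1422296624.010000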

How to use variable with awk when being read from a file

I have a file with the following entries:
foop07_bar2_20190423152612.zip
foop07_bar1_20190423153115.zip
foop08_bar2_20190423152612.zip
foop08_bar1_20190423153115.zip
where
foop0* = host
bar* = fp
I would like to read the file and create 3 variables: the whole file name, host, and fp (which stands for file_path_differentiator).
I am using read to take the first line and get my whole-file-name variable. I thought I could then feed this into awk to grab the next two variables; however, the first method of variable insertion creates an error and the second gives me all the variables.
I would like to loop over each line, as I wish to use these variables to ssh to the host and grab the file.
#!/bin/bash
while read -r FILE
do
echo ${FILE}
host=`awk 'BEGIN { FS = "_" } ; { print $1 }'<<<<"$FILE"`
echo ${host}
path=`awk -v var="${FILE}" 'BEGIN { FS = "_" } ; { print $2 }'`
echo ${path}
done <zips_not_received.csv
Expected Result
foop07_bar2_20190423152612.zip
foop07
bar2
foop07_bar1_20190423153115.zip
foop07
bar1
Actual Result
foop07_bar2_20190423152612.zip
/ : No such file or directoryfoop07_bar2_20190423152612.zip
bar2 bar1 bar2 bar1
You can do this with bash alone, without using any external tool.
while read -r file; do
[[ $file =~ (.*)_(.*)_.*\.zip ]] || { echo "invalid file name"; exit 1; }
host="${BASH_REMATCH[1]}"
path="${BASH_REMATCH[2]}"
echo "$file"
echo "$host"
echo "$path"
done < zips_not_received.csv
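Since the question mentions using these variables to ssh to the host and grab the file, the loop body could be extended along these lines (a sketch; the remote directory /some/remote/dir is hypothetical):
scp "${host}:/some/remote/dir/${file}" .   # fetch the file from the matched host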
Typical...
I managed to work out a solution after posting:
#!/bin/bash
while read -r FILE
do
echo ${FILE}
host=`echo "$FILE" | awk -F"_" '{print $1}'`
echo $host
path=`echo "$FILE" | awk -F"_" '{print $2}'`
echo ${path}
done <zips_not_received.csv
Not sure about the elegance or correctness, as I am using echo to create the variables... but I have it working.
Assuming there is no space or _ in your file names that is part of the host or path:
Just split the line beforehand with sed, awk, ... using the default space separator (or use _ directly as the separator when reading; a sketch of that variant follows the snippet below). I also remove empty lines as a basic safeguard, given your sample.
sed 's/_/ /g;/[[:blank:]]\{1,\}/d' zips_not_received.csv \
| while read host path Ignored
do
echo "${host}"
echo "${path}"
done
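The variant hinted at above, reading with _ as the separator directly instead of pre-editing the line, could look like this (a sketch under the same assumption about underscores):
while IFS=_ read -r host path rest
do
    [ -n "${host}" ] || continue   # skip empty lines, like the sed version does
    echo "${host}"
    echo "${path}"
done < zips_not_received.csv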

Shell Script: Assign the outputs to different variables

In a shell script I need to assign the output of a few values to different variables; need help please.
cat file1.txt
uid: user1
cn: User One
employeenumber: 1234567
absJobAction: HIRED
I need to assign the value of each attribute to a different variable so that I can call them in the script. For example, uid should be assigned to a new variable named current_uid, and when $current_uid is referenced it should give user1, and so forth for all the other attributes.
And if the output does not contain one of the attributes, then that attribute's value should be considered "NULL". For example, if the output does not have absJobAction, then the value of $absJobAction should be "NULL".
This is what I did with my array
#!/bin/bash
IFS=$'\n'
array=($(cat /tmp/file1.txt | egrep -i '^uid:|^cn:|^employeenumber|^absJobAction'))
current_uid=`echo ${array[0]} | grep -w uid | awk -F ': ' '{print $2}'`
current_cn=`echo ${array[1]} | grep -w cn | awk -F ': ' '{print $2}'`
current_employeenumber=`echo ${array[2]} | grep -w employeenumber | awk -F ': ' '{print $2}'`
current_absJobAction=`echo ${array[3]} | grep -w absJobAction | awk -F ': ' '{print $2}'`
echo $current_uid
echo $current_cn
echo $current_employeenumber
echo $current_absJobAction
Output from sh /tmp/testscript.sh follows:
user1
User One
1234567
HIRED
#!/usr/bin/env bash
# assuming bash 4.0 or newer: create an associative array
declare -A vars=( )
while IFS= read -r line; do ## See http://mywiki.wooledge.org/BashFAQ/001
if [[ $line = *": "* ]]; then ## skip lines not containing ": "
key=${line%%": "*} ## strip everything after ": " for key
value=${line#*": "} ## strip everything before ": " for value
vars[$key]=$value
else
printf 'Skipping unrecognized line: <%s>\n' "$line" >&2
fi
done <file1.txt # or < <(ldapsearch ...)
# print all variables read, just to demonstrate
declare -p vars >&2
# extract and print a single variable by name
echo "Variable uid has value ${vars[uid]}"
Note that this must be run with bash yourscript, not sh yourscript.
By the way -- if you don't have bash 4.0, you might consider a different approach:
while IFS= read -r line; do
if [[ $line = *": "* ]]; then
key=${line%%": "*}
value=${line#*": "}
printf -v "ldap_$key" %s "$value"
fi
done <file1.txt # or < <(ldapsearch ...)
will create separate variables of the form "$ldap_cn" or "$ldap_uid", as opposed to putting everything in a single associative array.
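For the question's file1.txt, that loop defines variables such as $ldap_uid and $ldap_cn; the "NULL" fallback the question asks for can then be expressed with a default expansion (a small sketch):
echo "$ldap_uid"                      # user1
echo "${ldap_absJobAction:-NULL}"     # HIRED here; prints NULL whenever the attribute is absent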
Here's a simple example of what you are trying to do that should get you started. It assumes one set of data in the file. Although a tad brute-force, I believe it's easy to understand.
Given a file called file.txt in the current directory with the following contents (absJobAction intentionally left out):
$ cat file1.txt
uid: user1
cn: User One
employeenumber: 1234567
$
This script gets each value into a local variable and prints it out:
#!/bin/bash
# Use /bin/bash to run this script (the shebang must be the first line).
# Make SOURCEFILE a readonly variable. Make it uppercase to show it's a constant. This is the file the LDAP values come from.
typeset -r SOURCEFILE=./file1.txt
# Each line sets a variable using awk.
# -F is the field delimiter. It's a colon and a space.
# Next is the value to look for. ^ matches the start of the line.
# When the above is found, return the second field ($2)
current_uid="$(awk -F': ' '/^uid/ {print $2}' ${SOURCEFILE})"
current_cn="$(awk -F': ' '/^cn/ {print $2}' ${SOURCEFILE})"
current_enbr="$(awk -F': ' '/^employeenumber/ {print $2}' ${SOURCEFILE})"
current_absja="$(awk -F': ' '/^absJobAction/ {print $2}' ${SOURCEFILE})"
# Print the contents of the variables. Note since absJobAction was not in the file,
# its value is NULL.
echo "uid: ${current_uid}"
echo "cn: ${current_cn}"
echo "EmployeeNumber: ${current_enbr}"
echo "absJobAction: ${current_absja}"
When run:
$ ./test.sh
uid: user1
cn: User One
EmployeeNumber: 1234567
absJobAction:
$

Redirect output to a bash array

I have a file containing the string
ipAddress=10.78.90.137;10.78.90.149
I'd like to place these two IP addresses in a bash array. To achieve that I tried the following:
n=$(grep -i ipaddress /opt/ipfile | cut -d'=' -f2 | tr ';' ' ')
This extracts the values all right, but for some reason the size of the array is reported as 1, and I notice that both values end up in the first element of the array. That is,
echo ${n[0]}
returns
10.78.90.137 10.78.90.149
How do I fix this?
Thanks for the help!
Do you really need an array?
bash
$ ipAddress="10.78.90.137;10.78.90.149"
$ IFS=";"
$ set -- $ipAddress
$ echo $1
10.78.90.137
$ echo $2
10.78.90.149
$ unset IFS
$ echo $@ #this is "array"
if you want to put it into an array
$ a=( $@ )
$ echo ${a[0]}
10.78.90.137
$ echo ${a[1]}
10.78.90.149
#OP, regarding your method: set your IFS to a space
$ IFS=" "
$ n=( $(grep -i ipaddress file | cut -d'=' -f2 | tr ';' ' ' | sed 's/"//g' ) )
$ echo ${n[1]}
10.78.90.149
$ echo ${n[0]}
10.78.90.137
$ unset IFS
Also, there is no need to use so many tools. You can just use awk, or simply the bash shell:
#!/bin/bash
declare -a arr
while IFS="=" read -r caption addresses
do
case "$caption" in
ipAddress*)
addresses=${addresses//[\"]/}
arr=( ${arr[@]} ${addresses//;/ } )
esac
done < "file"
echo ${arr[@]}
output
$ more file
foo
bar
ipAddress="10.78.91.138;10.78.90.150;10.77.1.101"
foo1
ipAddress="10.78.90.137;10.78.90.149"
bar1
$./shell.sh
10.78.91.138 10.78.90.150 10.77.1.101 10.78.90.137 10.78.90.149
gawk
$ n=( $(gawk -F"=" '/ipAddress/{gsub(/\"/,"",$2);gsub(/;/," ",$2) ;printf $2" "}' file) )
$ echo ${n[@]}
10.78.91.138 10.78.90.150 10.77.1.101 10.78.90.137 10.78.90.149
This one works:
n=(`grep -i ipaddress filename | cut -d"=" -f2 | tr ';' ' '`)
EDIT: (improved, nestable version as per Dennis)
n=($(grep -i ipaddress filename | cut -d"=" -f2 | tr ';' ' '))
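A quick check against the original complaint (the array size coming back as 1), assuming /opt/ipfile contains the ipAddress= line from the question:
$ n=($(grep -i ipaddress /opt/ipfile | cut -d"=" -f2 | tr ';' ' '))
$ echo ${#n[@]}
2
$ echo ${n[1]}
10.78.90.149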
A variation on a theme:
$ line=$(grep -i ipaddress /opt/ipfile)
$ saveIFS="$IFS" # always save it and put it back to be safe
$ IFS="=;"
$ n=($line)
$ IFS="$saveIFS"
$ echo ${n[0]}
ipAddress
$ echo ${n[1]}
10.78.90.137
$ echo ${n[2]}
10.78.90.149
If the file has no other contents, you may not need the grep and you could read in the whole file.
$ saveIFS="$IFS"
$ IFS="=;"
$ n=($(</opt/ipfile))
$ IFS="$saveIFS"
A Perl solution:
n=($(perl -ne 's/ipAddress=(.*);/$1 / && print' filename))
which tests for and removes the unwanted characters in one operation.
You can do this by using IFS in bash.
First, read the first line from the file.
Second, convert that to an array with = as the delimiter.
Third, convert the value to an array with ; as the delimiter.
That's it!
#!/bin/bash
IFS=$'\n' read -r lstr < "a.txt"
IFS='=' read -r -a lstr_arr <<< $lstr
IFS=';' read -r -a ip_arr <<< ${lstr_arr[1]}
echo ${ip_arr[0]}
echo ${ip_arr[1]}
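As a sanity check, the resulting array can be inspected like this (assuming a.txt holds the ipAddress= line from the question):
echo ${#ip_arr[@]}   # 2 for the sample line
echo ${ip_arr[@]}    # 10.78.90.137 10.78.90.149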
