I have a file named file.yaml with the below content:
SystemType=Secondary-HA
Hostname America
I have a shell script filter.sh:
echo "Enter systemType:"
read systemType
SYSTYPE=`grep systemType /home/oracle/file.yaml | awk '{print $2}'`
if [ SYSTYPE=Secondary-HA ]
then
cat /home/oracle/file.yaml > /home/oracle/file2.txt
fi
hstname=`grep Hostname /home/oracle/file2.txt | awk '{print $2}'`
echo $hstname
Here I want the result America only when the systemType entered is exactly 'Secondary-HA'. But as of now, if I enter the systemType 'Secondary', I still get the same result America, which I don't want.
Please let me know. I am a bit new to shell scripting.
You need to understand that the shell is white-space sensitive in certain places, for example when splitting arguments. Thus,
if [ x=y ]
must be written as
if [ x = y ]
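A quick sketch of the difference, safe to run anywhere:

```shell
# [ x=y ] passes the single non-empty word "x=y" to test, so it is always true:
[ x=y ] && echo "one word: always true"
# [ x = y ] passes three words, comparing the strings x and y, which differ:
[ x = y ] || echo "three words: comparison is false"
```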
In addition, I have replaced the anti-pattern
grep xyz file | awk '{print $2}'
with the much less expensive pipe-less
awk '/xyz/ {print $2}' file
Next, awk splits at white-space by default, not at =. You would need to say awk -F= to split at =. I have also uppercased systemType to SystemType in the pattern, since that's what you tell us is in your yaml file. Mate, you need to be careful with details like these if you want to be in programming.
Here is the result:
echo "Enter systemType:"
read systemType
SYSTYPE=$(awk -F= '/SystemType/ {print $2}' /home/oracle/file.yaml)
if [ "$SYSTYPE" = "$systemType" ]; then
cp /home/oracle/file.yaml /home/oracle/file2.txt
fi
hstname=$(awk '/Hostname/ {print $2}' /home/oracle/file2.txt)
echo $hstname
text file:
Annemie;014588529
Stefaan;011802367
Jan;032569874
Hans;015253694
I am trying to find the phone number by name, but it doesn't display anything:
echo -n ""
read name
number=`grep '$name' numbers.txt | awk -F';' '{print $2}'`
echo "$number"
Single quotes prevent variable expansion, so you are searching for the literal string $name inside numbers.txt.
Try this instead:
echo -n ""
read name
number=$(grep "$name" numbers.txt | awk -F';' '{print $2}')
echo "$number"
or, even better, drop the pipe entirely (awk can do the matching itself):
echo -n ""
read name
number=$(awk -F';' -v name="$name" '$1 ~ name {print $2}' numbers.txt)
echo "$number"
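For instance, with the sample data from the question (inlined here via a here-document instead of reading numbers.txt):

```shell
name=Stefaan
# awk matches $1 against the pattern in the name variable and prints field 2:
number=$(awk -F';' -v name="$name" '$1 ~ name {print $2}' <<'EOF'
Annemie;014588529
Stefaan;011802367
Jan;032569874
EOF
)
echo "$number"    # prints 011802367
```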
Single quotes prevent the shell variable name from expanding.
Try like this:
#!/bin/bash
read -p "Enter name: " name
number=$(grep -F "$name" numbers.txt | awk -F';' '{print $2}')
echo "$name => $number"
Also, you should use -F for fixed-string grepping, and avoid old-style backtick command substitution in favour of $(..). If you wish, you can use -p for the read prompt (a bash/ksh extension; plain POSIX sh does not guarantee it, hence the bash shebang).
I have a text file with content like the following:
some random text
some random text
some random text
1000=169.254.1.1 169.254.1.2 169.254.1.3
1001=169.254.2.1 169.254.2.2
Using a shell script, I want to verify that ALL the IPs listed against the numbers 1000 and 1001 are valid IPs.
Your help is highly appreciated.
This might do the job. It checks that the IPs have the right format but not the range; for instance, an IP like 999.9.9.9 will be accepted as valid too. I'm trying to figure out a more precise regex, but in the meantime this might help you.
count_of_potential_IPs=$(grep -E '^100(0|1)=' text.txt | awk -F'=' '{ print $2 }' | tr ' ' '\n' | wc -l)
count_of_valid_IPs=$(grep -E '^100(0|1)=' text.txt | awk -F'=' '{ print $2 }' | tr ' ' '\n' | grep -E '^[0-9]{1,3}([.][0-9]{1,3}){3}$' | wc -l)
if [ "$count_of_potential_IPs" -eq "$count_of_valid_IPs" ]
then
echo "Awesome! All IPs seem to be valid. The world is regaining its balance! :)"
fi
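If you also want to enforce the 0-255 range, here is a small awk sketch; the sample line is inlined with a here-document, so point it at text.txt in real use:

```shell
# For each 1000=/1001= line, split the IP list and check every octet is 0-255:
awk -F'=' '/^100[01]=/ {
    n = split($2, ips, " ")
    for (i = 1; i <= n; i++) {
        m = split(ips[i], o, "[.]")
        ok = (m == 4)
        for (j = 1; j <= m; j++)
            if (o[j] !~ /^[0-9]+$/ || o[j] + 0 > 255) ok = 0
        printf "%s %s\n", ips[i], (ok ? "valid" : "invalid")
    }
}' <<'EOF'
1000=169.254.1.1 999.9.9.9
EOF
```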
Regards!
I'm trying to cut the below string starting on the single quote:
name1=O'Reilly
so it leaves:
name2=Reilly
That's easy from the command line with the following commands:
echo $name | cut -d\' -f2
echo $name | awk -F\' '{print $2}'
However when I run these commands from a script the string remains unaltered. I've been looking into problems with using single quotes as a delimiter but couldn't find anything. Any way to solve this issue?
That does not change the string the variable expands to; it just outputs the result of the string manipulation.
If you want to update the variable name itself, use command substitution to save the result of the cut/awk operation back into it:
% name="O'Reilly"
% echo "$name" | awk -F\' '{print $2}'
Reilly
% name=$(echo "$name" | awk -F\' '{print $2}')
% echo "$name"
Reilly
On the other hand, if you want to keep the input in one variable (name1) and save the output as a different variable (name2):
% name1="O'Reilly"
% name2=$(echo "$name1" | awk -F\' '{print $2}')
% echo "$name2"
Reilly
This might be easier to achieve with parameter expansion, though:
$ name="O'Reilly"
$ echo "${name#*\'}"
Reilly
$ name="${name#*\'}"
$ echo "$name"
Reilly
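The same parameter-expansion approach also works for the name1/name2 form from the question, with no external command at all:

```shell
name1="O'Reilly"
name2="${name1#*\'}"   # remove the shortest prefix ending with the first '
echo "$name2"          # prints Reilly
```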
I am trying to set several shell variables from awk fields at once.
Right now I can only set them one by one.
for line in `cat file.txt`;do
var1=`echo $line | awk -F, '{print $1}'`
var2=`echo $line | awk -F, '{print $2}'`
var3=`echo $line | awk -F, '{print $3}'`
#Some complex code....
done
I think this is costly because it invokes awk and parses the line several times. Is there a special syntax to set the variables at once? I know that awk has BEGIN and END blocks, but I am trying to avoid them so I don't end up with nested awk.
I plan to place another loop and awk code in the #Some complex code.... part.
for line in `cat file.txt`;do
var1=`echo $line | awk -F, '{print $1}'`
var2=`echo $line | awk -F, '{print $2}'`
var3=`echo $line | awk -F, '{print $3}'`
for line2 in `cat file_old.txt`;do
vara=`echo $line2 | awk -F, '{print $1}'`
varb=`echo $line2 | awk -F, '{print $2}'`
# Do comparison of $var1,var2 and $vara,$varb , then do something with either
done
done
You can set the internal field separator IFS to a comma (instead of whitespace) and do the assignments in a while loop:
SAVEIFS=$IFS;
IFS=',';
while read line; do
set -- $line;
var1=$1;
var2=$2;
var3=$3;
...
done < file.txt
IFS=$SAVEIFS;
This will save a copy of your current IFS, change it to a , character, and then iterate over each line in your file. The line set -- $line; will split each line at the commas into the positional parameters ($1, $2, etc.). You can either use these parameters directly, or assign them to other (more meaningful) variable names.
Alternatively, you could use IFS with the answer provided by William:
IFS=',';
while read var1 var2 var3; do
...
done < file.txt
They are functionally identical; it just comes down to whether you want to explicitly set var1=$1 or have the variables named in the while loop's head.
Why are you using awk at all?
while IFS=, read var1 var2 var3; do
...
done < file.txt
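A runnable sketch of that loop, with sample data inlined via a here-document instead of file.txt:

```shell
# IFS=, applies only to read, splitting each line at commas into three variables:
while IFS=, read -r var1 var2 var3; do
    echo "var1=$var1 var2=$var2 var3=$var3"
done <<'EOF'
a,b,c
1,2,3
EOF
```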
#!/bin/bash
FILE="/tmp/values.txt"
function parse_csv() {
local lines=$lines;
> $FILE
OLDIFS=$IFS;
IFS=","
i=0
for val in ${lines}
do
i=$((++i))
eval var${i}="${val}"
done
IFS=$OLDIFS;
for ((j=1;j<=i;++j))
do
name="var${j}"
echo ${!name} >> $FILE
done
}
for lines in `cat file_old.txt`;do
parse_csv;
done
The problem you described has only 3 values; is there a chance the number of values may differ and be 4, 5, or undefined?
If so, the above will parse the CSV line by line and output each value on a new line in a file called /tmp/values.txt.
Feel free to modify it to match your requirements; it is far more dynamic than hard-coding 3 variables.
Expecting this to print out abc - but I get nothing, every time, nothing.
echo abc=xyz | g="$(awk -F "=" '{print $1}')" | echo $g
A pipeline isn't a sequence of separate assignments: each segment runs in its own subshell, so the g assigned in the middle segment never reaches the final echo. However, you could rewrite your current code as follows:
result=$(
echo 'abc=xyz' | awk -F '=' '{print $1}'
)
echo "$result"
However, a more Bash-centric solution without intermediate assignments could take advantage of a here-string. For example:
awk -F '=' '{print $1}' <<< 'abc=xyz'
Other solutions are possible, too, but this should be enough to get you started in the right direction.
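To see why the original attempt prints nothing, here is a quick sketch showing that a variable assigned inside a pipeline segment vanishes with its subshell:

```shell
# The braces group the assignment and the echo into one pipeline segment:
echo abc=xyz | { g=$(awk -F'=' '{print $1}'); echo "inside the pipeline: g=$g"; }
# g was set only in that subshell, so it is gone out here:
echo "after the pipeline: g=${g:-unset}"
```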