Read values from a file, increment or change them and store them again in the same place - bash

So, I have a bash script which reads several variables from different external files, increments or changes these variables and then stores the new values in the files.
Something like this:
var1=$(< file1)
var2=$(< file2)
var3=$(< file3)
# then for example:
((var1=var1+1))
((var2=var2-1))
var3=foo
echo $var1 > file1
echo $var2 > file2
echo $var3 > file3
This works just fine, but I find it a bit bulky, especially when there are a lot of variables stored like this. I think it would be more elegant to store all the values in a single file which could look something like this:
#File containing values
var1=1
var2=2
var3=foo
Unfortunately, I can't figure out how to read the values from such a file and store the new values in the same place afterwards. I have looked into sed and awk, but so far I couldn't find a solution that works in this particular case.
Any suggestions?

An awk script can handle this, i.e. find all name=value lines, pick out the integer values and increment them:
awk 'BEGIN {FS=OFS="="} NF==2 && $2+0 == $2 {++$2} 1' file
#File containing values
var1=2
var2=3
var3=foo
If you want to save the changes in place, use this gnu-awk command:
awk -i inplace 'BEGIN {FS=OFS="="} NF==2 && $2+0 == $2 {++$2} 1' file
Explanation:
FS=OFS="=": set the input and output field separators to =
NF==2: the line has exactly 2 fields
&&: ANDed with
$2+0 == $2: the 2nd field is numeric
++$2: increment the 2nd field
1: print every line
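For the reading half of the question: since the name=value lines are valid shell syntax, a trusted file can simply be sourced; otherwise a read loop avoids executing anything. A minimal sketch, assuming values contain no = characters:
. file             # simplest, but runs the file as shell code; only for trusted input
# or, without executing the file:
while IFS== read -r name value; do
    [[ $name == \#* || -z $name ]] && continue   # skip the comment line and blanks
    declare "$name=$value"
done < file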

OK, since my question appears to have been imprecise, I accepted the answer by @anubhava as correct even though it didn't quite work for me. It seems to be the correct answer to my question and pointed me in the right direction. Based on that answer I found a solution that works for me:
I now have a file named 'storage' containing all the variable names and values like this:
var1 1
var2 1
var3 foo
In my script there are three scenarios:
Incrementing or decrementing silently
A value is read from the file (by searching for the variable name and reading the last field in that line), silently incremented or decremented and saved to the file again:
awk '/var1/{++$NF} {print > "storage" }' storage # incrementing
awk '/var1/{--$NF} {print > "storage" }' storage # decrementing
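If writing back into the same file that awk is still reading feels fragile, the same update can go through a temporary file instead (a sketch):
awk '/var1/{++$NF} {print}' storage > storage.tmp && mv storage.tmp storage   # incrementing, via a temp file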
Toggle between two values
Depending on user input a variable can be set to one of two values for example like this:
PS3="Please choose an option"
options=("Option 1" "Option 2")
select opt in "${options[@]}"
do
case $opt in
"Option 1")
awk '/var2/{$NF=0} {print > "storage" }' storage # this sets the value to 0
break
;;
"Option 2")
awk '/var2/{$NF=1} {print > "storage" }' storage # this sets the value to 1
break
;;
esac
done
Reading user input
The script reads a value from the file and prints it. Then it waits for user input and stores the input in the file:
var3=$(awk '/var3/{print $NF}' storage) # reading the current value from the file and storing it in the variable
echo "The current value is $var3"
read -p "Please enter the new value: " var3
awk -v var3="$var3" '/var3/{$NF=var3} {print > "storage" }' storage # writing the new value to the file
This does exactly what I was looking for. So, thank you @anubhava for pointing me in the right direction!

Related

Set variable equal to value based on match in bash

I have a script where I want to set a variable equal to a specific value based on a match from a file (in bash).
For example:
File in .csv contains:
Name,ID,Region
Device1,1,USA
Device2,2,UK
I want to declare variables at the beginning like this:
region1=USA
region2=UK
region3=Ireland
etc...
Then, whilst reading the csv, I need to match the Region column's name to the global variable set at the beginning of the file, to then use it in an API. So if a device in the csv has a region set to USA, I should be able to use region1 during the update call in the API. I want to use a while loop to iterate over the csv file line by line and update each device's region.
Does anyone maybe know how I can achieve this? Any help would be greatly appreciated.
PS: This is not a homework assignment before anyone asks :-)
Would you please try the following:
declare -A regions # an associative array
# declare your variables
region1=USA
region2=UK
region3=Ireland
# associate each region's name with region's code
for i in "${!region@}"; do # expands to region1, region2, region3
varname="$i"
regions[${!varname}]="$i" # maps "USA" to "region1", "UK" to "region2" ..
done
while IFS=, read -r name id region; do
((nr++)) || continue # skips the header line
region_group="${regions[$region]}"
echo "region=$region, region_group=$region_group" # for a test
# call API here
done < file.csv
Output:
region=USA, region_group=region1
region=UK, region_group=region2
region=Ireland, region_group=region3
BTW if the variable declaration at the beginning is under your control, it will be easier to say:
# declare an associative array
declare -A regions=(
[USA]="region1"
[UK]="region2"
[Ireland]="region3"
)
while IFS=, read -r name id region; do
((nr++)) || continue # skip the header line
region_group="${regions[$region]}"
echo "region=$region, region_group=$region_group" # for a test
# call API here
done < file.csv
Use awk to create a lookup table, e.g.:
$ cat a.sh
#!/bin/sh
cat << EOD |
region1=USA
region2=UK
region3=Ireland
EOD
{
awk 'NR==FNR { lut[$2] = $1; next}
{$3 = lut[$3]} 1
' FS== /dev/fd/3 FS=, OFS=, - << EOF
Name,ID,Region
Device1,1,USA
Device2,2,UK
EOF
} 3<&0
$ ./a.sh
Name,ID,
Device1,1,region1
Device2,2,region2
or, slightly less obscure (and more portable, since it just uses regular files; I'm not sure on which platforms /dev/fd/3 is actually valid):
$ cat input
Name,ID,Region
Device1,1,USA
Device2,2,UK
$ cat lut
region1=USA
region2=UK
region3=Ireland
$ awk 'NR==FNR{lut[$2] = $1; next} {$3 = lut[$3]} 1' FS== lut FS=, OFS=, input
Name,ID,
Device1,1,region1
Device2,2,region2

read and write same file bash/unix

I have a list of keys in a txt file
keys.txt
address
contact_id
created_at
creator_id
custom_fields
address
contact_id
phone
name
email
name
I have the following recursive script which can remove duplicates
#!/bin/bash
function keyChecker(){
if grep -q $i uniqueKeys.txt; then
echo "Duplicate Key: ${i}"
else
echo $i >> uniqueKeys.txt
echo "New key: ${i}"
fi
}
function recursiveDoer(){
for i in $(cat keys.txt); do
keyChecker $i
done
recursiveDoer
}
touch uniqueKeys.txt
counter=0
recursiveDoer
This code will return a unique list of keys to uniqueKeys.txt
These methods go into an infinite loop once there are no more duplicates. Every recursive method I have written runs into this problem. I usually cheat by adding a counter that forces an exit 1 after an arbitrarily large number of iterations, like 10,000.
What is the proper way to write this method using recursion and no infinite loop?
Can this be simplified and written as a non-recursive method in a single loop?
If you do not care about the order of the keys you can simply use
sort -u keys.txt > uniqueKeys.txt
awk to the rescue!
this awk magic will print only the unique entries, keeping their original order
awk '!a[$0]++' file
you can overwrite the input file with this idiom
awk '!a[$0]++' file > temp && mv temp file
this probably replicates your code
awk '!a[$0]++ {print "New key: " $0;
print > "uniqueKeys.txt"
next}
{print "Duplicate Key: " $0}' file
Explanation
a[$0] creates an entry in the associative map a with the line just read as the key. ++ forces the null value to be treated as 0 and increments it. ! forces the value to be treated as a Boolean and negates it. Taken together, the expression is true only the first time a key is seen, effectively de-duplicating the lines in the file.
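To answer the second question directly: the recursion is unnecessary, and a single non-recursive bash loop does the same job. A minimal sketch using an associative array (bash 4+):
#!/bin/bash
declare -A seen                       # keys we have already encountered
while IFS= read -r key; do
    if [[ -n ${seen[$key]} ]]; then
        echo "Duplicate Key: $key"
    else
        seen[$key]=1
        echo "New key: $key"
        echo "$key" >> uniqueKeys.txt
    fi
done < keys.txt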

Adding file information to an AWK comparison

I'm using awk to perform a file comparison against a file listing in found.txt
while read line; do
awk 'FNR==NR{a[$1]++;next}$1 in a' $line compare.txt >> $CHECKFILE
done < found.txt
found.txt contains full path information to a number of files that may contain the data. While I am able to determine that data exists in both files and output that data to $CHECKFILE, I want to be able to include the line from found.txt (the filename) in which the match was found.
In other words I end up with something like:
File " /xxxx/yyy/zzz/data.txt "contains the following lines in found.txt $line
just not sure how to get the /xxxx/yyy/zzz/data.txt information into the stream.
Appended for clarification:
The file found.txt contains the full path information to several files on the system
/path/to/data/directory1/file.txt
/path/to/data/directory2/file2.txt
/path/to/data/directory3/file3.txt
each of the files has a list of parameters that need to be checked for existence before appending additional information to them later in the script.
so for example, file.txt contains the following fields
parameter1 = true
parameter2 = false
...
parameter35 = true
the compare.txt file contains a number of parameters as well.
So if parameter35 (or any other parameter) shows up in one of the three files, I get its output dropped to the Checkfile.
Both of the scripts (yours and the one I posted) will give me that output, but I would also like to echo in the line that is being read at that point in the loop. It sounds like I would just be able to somehow pipe it in, but my awk expertise is limited.
It's not really clear what you want but try this (no shell loop required):
awk '
ARGIND==1 { ARGV[ARGC] = $0; ARGC++; next }
ARGIND==2 { keys[$1]; next }
$1 in keys { print FILENAME, $1 }
' found.txt compare.txt > "$CHECKFILE"
ARGIND is gawk-specific; if you don't have it, add FNR==1{ARGIND++}.
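For example, a portable (POSIX awk) version of the same script could look like this sketch:
awk '
FNR==1    { ARGIND++ }
ARGIND==1 { ARGV[ARGC] = $0; ARGC++; next }
ARGIND==2 { keys[$1]; next }
$1 in keys { print FILENAME, $1 }
' found.txt compare.txt > "$CHECKFILE"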
Pass the name into awk inside a variable like this:
awk -v file="$line" '{... print "File: " file }'
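Plugged into the loop from the question, that could look like the following sketch, which prints each match together with the file it came from:
while read -r line; do
    awk -v file="$line" 'FNR==NR{a[$1]++;next} $1 in a {print "File: " file ", parameter: " $1}' "$line" compare.txt >> "$CHECKFILE"
done < found.txt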

Using bash and awk to print to a specific column in a new document

I am trying to use bash and awk together with a nested for loop to print data out into columns beside each other.
so far this is what I have:
for k in {1..147..3}
do
for i in "52" "64" "60" "70" "74"
do
awk -v x="${i}" -F, 'match ($0,x) { print $k }' all.csv > final.csv
done
done
echo "script has run"
I need to print the information into column k of the new file; however, that does not work.
so in the csv file data is like this:
52,9/05,6109
52,9/06,6119
64,9/05,7382
64,9/06,7392
64,9/07,3382
60,9/06,3829
...
I want my output like this:
52,9/05,6109,64,9/05,7382,60,9/06,3829
52,9/06,6119,64,9/06,7392
,,,64,9/07,3382
basically, all the 52s in the first column, the 64s in the fourth column, the 60s in the seventh column.
Instead of print $k, use printf "%s,",$k.
printf is the print formatter function that is common to many languages. %s tells it to format the corresponding argument as a string.
Note that awk won't get the $k from the shell, so you'll need to add -v k=$k.
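Combined, the inner command might become something like this sketch (note the >> append, since > would overwrite final.csv on every pass):
awk -v x="${i}" -v k="$k" -F, 'match($0, x) { printf "%s,", $k }' all.csv >> final.csv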

How to iterate based on words in text? (Shell Scripting)

I have a file currently in the form
location1 attr attr ... attr
location2 attr attr ... attr
...
locationn attr atrr ... attr
What I want to do is go through each line, grab the location (first field), then iterate through the attributes. So far I know how to grab the first field, but not how to iterate through the attributes. The number of attributes also differs from line to line.
TEMP_LIST=$DIR/temp.list
while read LINE
do
x=`echo $LINE | awk '{print $1}'`
echo $x
done<$TEMP_LIST
Can someone tell me how to iterate through the attributes?
I want to get an effect like this:
while read LINE
do
location=`echo $LINE | awk '{print $1}'`
for attribute in attributes
do something involving the $location for the line and each individual $attribute
done<$TEMP_LIST
I am currently working in the ksh shell, but any other unix shell is fine; I will figure out how to translate. I would be really grateful if someone could help, as it would save me a lot of time.
Thank you.
Similar to DreadPirateShawn's solution, but a bit simpler:
while read -r location all_attrs; do
read -ra attrs <<< "$all_attrs"
for attr in "${attrs[@]}"; do
: # do something with $location and $attr
done
done < inputfile
The second read line makes use of bash's here-string feature.
This might work in other shells too, but here's an approach that works in Bash:
#!/bin/bash
TEMP_LIST=temp.list
while read LINE
do
# Split line into array using space as delimiter.
IFS=' ' read -a array <<< "$LINE"
# Use first element of array as location.
location=${array[0]}
echo "First param: $location"
# Remove first element from array.
unset array[0]
# Loop through remaining array elements.
for i in "${array[@]}"
do
echo " Value: $i"
done
done < $TEMP_LIST
As you're already using awk in your posted code, why not learn how to use awk? It is designed for exactly this sort of problem.
while read LINE
do
location=`echo $LINE | awk '{print $1}'`
for attribute in attributes
do something involving the $location for the line and each individual $attribute
done<$TEMP_LIST
is written in awk as
#!/bin/bash
tempList="MyTempList.txt"
awk '{ # implied while loop for input records by default
location=$1
print "location=" location # location as a "header"
for (i=2;i<=NF;i++) {
printf("attr%d=%s\t", i, $i) # print each attr with its number
}
printf("\n") # add new-line char to end of each line of attributes
}' ${tempList}
If you want to save your output, use awk '{.....}' ${tempList} > ${tempList}.new
Awk has numerous variables that it sets as it reads your files. NF means NumberOfFields for the current line, so the for loop starts at field 2 and prints all remaining fields on that line in the format provided (change to suit your needs); the i<=NF condition is what lets it reach the last element on the line.
Sometimes you'll want the 3rd-from-last element on a line, so you can do math on the value stored in NF, as in thirdFromLast=$(NF-3). For any variable holding a number, you can "dereference" it as a value and ask awk to print the value stored in the $N(th) field, i.e. try
print "thirdFromLast="(NF-3)
print "thirdFromLast="$(NF-3)
... to see the difference that the $ makes on a variable that holds a number.
(For large amounts of data, one awk process will be considerably more efficient than using subprocesses to gather parts of files.)
Also, work your way through grymoire's awk tutorial.
IHTH
