how to select arguments from text file in bash and loop over them - bash

I have a text file in the format below, and I want to write a bash script that stores the column names (adastatus, type, bodycomponent, ...) in a variable, say x1.
# col_name data_type comment
adastatus string None
type string None
bodycomponent string None
bodytextlanguage string None
copyishchar string None
Then, for each of the column names in x1, I want to run a loop:
alter table tabelname change x1(i) x1(i) DOUBLE;

How about:
#!/bin/sh
# Note: cut -f1 assumes tab-separated columns; the comment/header line is picked up too.
for i in $(cut -f1 yourfile.txt)
do
  SQL="alter table tablename change $i $i DOUBLE"
  sql_command "$SQL"
done

awk '$1 !~ /^#/ {if ($1) print $1}' in.txt | \
xargs -I % echo "alter table tabelname change % % DOUBLE"
Replace echo with the command needed to run the alter command (from @Severun's answer it sounds like sql_command).
Using awk, this matches only input lines that do not start with # (ignoring leading whitespace) and are non-empty, then prints the first whitespace-separated token, i.e., the 1st column value for each line.
xargs then invokes the target command once for each column name, substituting the column name for % - note that % as a placeholder was chosen arbitrarily via the -I option.

Try:
#!/bin/bash
while read col1 _ _
do
[[ "$col1" =~ \#.* ]] && continue # skip comments
[[ -z "$col1" ]] && continue # skip empty lines
echo alter table tabelname change ${col1}\(i\) ${col1}\(i\)
done < input.txt
Output:
$ ./c.sh
alter table tabelname change adastatus(i) adastatus(i)
alter table tabelname change type(i) type(i)
alter table tabelname change bodycomponent(i) bodycomponent(i)
alter table tabelname change bodytextlanguage(i) bodytextlanguage(i)
alter table tabelname change copyishchar(i) copyishchar(i)
Change echo to a more appropriate command.
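For example, a minimal sketch that appends the statements to a .sql file and hands them to a client in one go (sql_command and its -f option are placeholders for whatever tool you actually use):
#!/bin/bash
out=alter_columns.sql
: > "$out"                                    # start with an empty statement file
while read -r col1 _; do
    [[ "$col1" =~ ^# ]] && continue           # skip the "# col_name ..." header
    [[ -z "$col1" ]] && continue              # skip blank lines
    printf 'alter table tabelname change %s %s DOUBLE;\n' "$col1" "$col1" >> "$out"
done < input.txt
sql_command -f "$out"                         # hypothetical invocation; adjust to your client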

Related

Reading CSV file in Shell Scripting

I am trying to read values from a CSV file dynamically based on the header. Here's what my input files can look like.
File 1:
name,city,age
john,New York,20
jane,London,30
or
File 2:
name,age,city,country
john,20,New York,USA
jane,30,London,England
I may not be following the best way to accomplish this but I tried the following code.
#!/bin/bash
{
read -r line
line=`tr ',' ' ' <<< $line`
while IFS=, read -r `$line`
do
echo $name
echo $city
echo $age
done
} < file.txt
I am expecting the above code to read the values of the header as the variable names. I know that the order of columns can be different for the input file, but I expect the files to have name, city and age columns. Is this the right approach? If so, what is the fix for the above code, which fails with the error - "line7: name: command not found".
The issue is caused by the backticks. Bash will evaluate the contents and replace the backticks with the output from the command it just evaluated.
You can simply use the variable after the read command to achieve what you want:
#!/bin/bash
{
read -r line
line=`tr ',' ' ' <<< $line`
echo "$line"
while IFS=, read -r $line ; do
echo "person: $name -- $city -- $age"
done
} < file.txt
Some notes on your code:
The backtick syntax is legacy; it is now preferred to use $(...) to evaluate commands. The new syntax is more flexible.
You can enable automatic script failure with set -euo pipefail (see here). This will make your script stop if it encounters an error.
Your code is currently very sensitive to invalid header data:
with a file like
n ame,age,city,country
john,20,New York,USA
jane,30,London,England
your script (or rather the version in the beginning of my answer) will run without errors but with invalid output.
It is also good practice to quote variables to prevent unwanted splitting.
To make it much more robust, you can change it as follows:
#!/bin/bash
set -euo pipefail
# -e and -o pipefail will make the script exit
# in case of command failure (or piped command failure)
# -u will exit in case a variable is undefined
# (in your case, if the header is invalid)
{
read -r line
readarray -d, -t header < <(printf "%s" "$line")
# using an array allows to detect if one of the header entries
# contains an invalid character
# the printf is needed because bash would add a newline to the
# command input if using a here-string (<<<).
while IFS=, read -r "${header[@]}" ; do
echo "$name"
echo "$city"
echo "$age"
done
} < file.txt
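To see what the readarray line actually produces (note that readarray -d needs Bash ≥ 4.4), you can test it in isolation:
line='name,age,city,country'
readarray -d, -t header < <(printf "%s" "$line")
declare -p header
# prints something like: declare -a header=([0]="name" [1]="age" [2]="city" [3]="country")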
A slightly different approach can let awk handle the field separation and ordering of the desired output given either of the input files. Below awk stores the desired output order in the f[] (field) array set in the BEGIN rule. Then on the first line in a file (FNR==1) the array a[] is deleted and filled with the headings from the current file. At that point you just loop over the field names in-order in the f[] array and output the corresponding field from the current line, e.g.
awk -F, '
BEGIN { f[1]="name"; f[2]="city"; f[3]="age" } # desired order
FNR==1 { # on first line read header
delete a # clear a array
for (i=1; i<=NF; i++) # loop over headings
a[$i] = i # index by heading, val is field no.
next # skip to next record
}
{
print "" # optional newline between outputs
for (i=1; i<=3; i++) # loop over desired field order
if (f[i] in a) # validate field in a array
print $a[f[i]] # output fields value
}
' file1 file2
Example Use/Output
In your case with the content you show in file1 and file2, you would have:
$ awk -F, '
> BEGIN { f[1]="name"; f[2]="city"; f[3]="age" } # desired order
> FNR==1 { # on first line read header
> delete a # clear a array
> for (i=1; i<=NF; i++) # loop over headings
> a[$i] = i # index by heading, val is field no.
> next # skip to next record
> }
> {
> print "" # optional newline between outputs
> for (i=1; i<=3; i++) # loop over desired field order
> if (f[i] in a) # validate field in a array
> print $a[f[i]] # output fields value
> }
> ' file1 file2
john
New York
20
jane
London
30
john
New York
20
jane
London
30
Where both files are read and handled identically despite having different field orderings. Let me know if you have further questions.
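If you would rather get one comma-separated record per line (always in name,city,age order) instead of one field per line, the main block can build the row first; a sketch under the same assumptions as above:
awk -F, '
BEGIN { f[1]="name"; f[2]="city"; f[3]="age" }            # desired order
FNR==1 { delete a; for (i=1; i<=NF; i++) a[$i]=i; next }  # map headings to field numbers
{
    row = ""
    for (i=1; i<=3; i++)                                  # walk the desired order
        if (f[i] in a)
            row = (row == "" ? "" : row ",") $a[f[i]]
    print row                                             # e.g. john,New York,20
}
' file1 file2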
If using Bash version ≥ 4.2, it is possible to use an associative array to capture an arbitrary number of fields with their names as keys:
#!/usr/bin/env bash
# Associative array to store column names as keys and row values as values
declare -A fields
# Array to store column names with index
declare -a column_name
# Array to store row's values
declare -a line
# Commands block consuming CSV input
{
# Read first line to capture column names
IFS=, read -r -a column_name
# Process records
while IFS=, read -r -a line; do
# Store column values to corresponding field name
for ((i=0; i<${#column_name[@]}; i++)); do
# Fills fields' associative array
fields["${column_name[i]}"]="${line[i]}"
done
# Dump fields for debug|demo purpose
# Processing of each captured value could go there instead
declare -p fields
done
} < file.txt
Sample output with File 2
declare -A fields=([country]="USA" [city]="New York" [age]="20" [name]="john" )
declare -A fields=([country]="England" [city]="London" [age]="30" [name]="jane" )
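Instead of the declare -p debug dump, each captured field can be used by name inside the loop, for example to emit a normalized CSV row regardless of the input column order (a suggestion, replacing the declare -p line):
# e.g. print one name,city,age line per record, whatever the input column order
printf '%s,%s,%s\n' "${fields[name]}" "${fields[city]}" "${fields[age]}"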
For older Bash versions without associative arrays, use indexed column names instead:
#!/usr/bin/env bash
# Array to store column names with index
declare -a column_name
# Array to store values for a line
declare -a value
# Commands block consuming CSV input
{
# Read first line to capture column names
IFS=, read -r -a column_name
# Process records
while IFS=, read -r -a value; do
# Print record separator
printf -- '--------------------------------------------------\n'
# Print captured field name and value
for ((i=0; i<"${#column_name[#]}"; i++)); do
printf '%-18s: %s\n' "${column_name[i]}" "${value[i]}"
done
done
} < file.txt
Output:
--------------------------------------------------
name : john
age : 20
city : New York
country : USA
--------------------------------------------------
name : jane
age : 30
city : London
country : England
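If you need to look a value up by column name with this indexed-array version, a small helper that scans column_name for the matching index does the job (function name is mine, untested sketch):
# Echo the value whose column heading matches $1 for the current record
get_field() {
    local wanted=$1 i
    for ((i=0; i<${#column_name[@]}; i++)); do
        if [[ ${column_name[i]} == "$wanted" ]]; then
            printf '%s\n' "${value[i]}"
            return 0
        fi
    done
    return 1                                  # no such column in this file
}
# e.g., inside the while loop:  city=$(get_field city)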

Bash XSV auto populate empty values with CSV column

I have a CSV export that I need to map to new values in order to import into a different system. I am using ArangoDB to create this data migration mapping.
Below is the full script used:
#!/bin/bash
execute () {
filepath=$1
prefix=$2
keyField=$3
filename=`basename "${filepath%.csv}"`
collection="$prefix$filename"
filepath="/data-migration/$filepath"
# Check for "_key" column
if ! xsv headers "$1" | grep -q _key
# Add "_key" column using the keyfield provided
then
xsv select $keyField "$1" | sed -e "1s/$keyField/_key/" > "$1._key"
xsv cat columns "$1" "$1._key" > "$1.cat"
mv "$1.cat" "$1"
rm "$1._key"
fi
# Import CSV into Arango Collection
docker exec arango arangoimp --collection "$collection" --type csv "$filepath" --server.password ''
}
# This single line runs the execute() above
execute 'myDirectory/myFile.csv' prefix_ OLD_ORG_ID__C
So far I've deduced that the $keyField (OLD_ORG_ID__C) parameter passed to the execute() function is used in the loop of the script. This looks for the $keyField column and then migrates the values to a newly created _key column using the XSV toolkit.
OLD_ORG_ID__C     _key
A123           -> A123
B123           -> B123
(blank)        -> ##    <- auto populate
Unfortunately, not every row has a value for the OLD_ORG_ID__C column, and as a result the _key for that row is also empty, which then causes the import to Arango to fail.
Note: This _key field is necessary for my AQL scripts to work properly
How can I rewrite the loop to auto-index the blank values?
then
xsv select $keyField "$1" | sed -e "1s/$keyField/_key/" > "$1._key"
xsv cat columns "$1" "$1._key" > "$1.cat"
mv "$1.cat" "$1"
rm "$1._key"
fi
Is there a better way to solve this issue? Perhaps xsv sort by the keyField and then auto-populate from the blank rows to the end?
UPDATE: Per the comments/answer I tried something along these lines but so far still not working
#!/bin/bash
execute () {
filepath=$1
prefix=$2
keyField=$3
filename=`basename "${filepath%.csv}"`
collection="$prefix$filename"
filepath="/data-migration/$filepath"
# Check for "_key" column
if ! xsv headers "$1" | grep -q _key
# Add "_key" column using the keyfield provided
then
awk -F, 'NR==1 { for(i=1; i<=NF;++i) if ($i == "'$keyField'") field=i; print; next }
$field == "" { $field = "_generated_" ++n }1' $1 > $1-test.csv
fi
}
# import a single collection if needed
execute 'agas/Account.csv' agas_ OLD_ORG_ID__C
This creates an Account-test.csv file, but unfortunately it does not have the "_key" column or any changes to the OLD_ORG_ID__C values. Preferably I would only want the "_key" values populated with auto-numbered values when OLD_ORG_ID__C is blank; otherwise they should copy the provided value.
If your question is "how can I find from the first header line of a CSV file which field is named OLD_ORG_ID__C, then on subsequent lines put a unique value in this column if it is empty", try something like:
awk -F, 'NR==1 { for(i=1; i<=NF;++i) if ($i == "OLD_ORG_ID__C") field=i ; print; next }
$field == "" { $field = "_generated_" ++n }1' file >newfile
This has no provision for coping with complexities like quoted fields with embedded commas. (I have no idea what xsv is but maybe it would be better equipped for such scenarios?)
If I can guess what this code does
xsv select $keyField "$1" |
sed -e "1s/$keyField/_key/" > "$1._key"
then probably you could replace it with something like
xsv select "$keyField" "$1" |
awk -v field="$keyField" 'NR==1 { $0 = field }
/^$/ { $0 = NR } 1' >"$1._key"
to replace the first line with the value of $keyField and replace any subsequent empty lines with their line number.
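Folded back into your execute() function, the whole if block might then look something like this (untested sketch; it keeps your xsv cat / mv / rm steps and only swaps the sed for the awk filler, naming the header _key as your original sed did):
# Check for "_key" column
if ! xsv headers "$1" | grep -q _key
then
    # Build the _key column: rename the header, number any blank values
    xsv select "$keyField" "$1" |
        awk 'NR==1 { $0 = "_key" }
             /^$/  { $0 = "_generated_" NR } 1' > "$1._key"
    xsv cat columns "$1" "$1._key" > "$1.cat"
    mv "$1.cat" "$1"
    rm "$1._key"
fi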

Associative array pipe to Column command

I'm looking for a way to print out an associative array with the column command, and I feel like there is probably a way to do this, but I haven't had much luck.
declare -A list
list=(
[a]="x is in this one"
[b]="y is here"
[areallylongone]="z down here"
)
I'd like the outcome to be a simple table. I've used a loop with tabs, but in my case the key lengths vary enough that the second column no longer lines up.
The output should look like
a x is in this one
b y is here
areallylongone z down here
Are you looking for something like this?
declare -A assoc=(
[a]="x is in this one"
[b]="y is here"
[areallylongone]="z down here"
)
for i in "${!assoc[#]}" ; do
echo -e "${i}\t=\t${assoc[$i]}"
done | column -s$'\t' -t
Output:
areallylongone = z down here
a = x is in this one
b = y is here
I'm using a tab char to delimit key and value and use the column -t to tabulate the output and -s to set the input delimiter to the tab char. From man column:
-t      Determine the number of columns the input contains and create a table. Columns are delimited with whitespace, by default, or with the characters supplied using the -s option. Useful for pretty-printing displays.
-s      Specify a set of characters to be used to delimit columns for the -t option.
One (simple) way to do it is by pasting together keys column and values column:
paste -d $'\t' <(printf "%s\n" "${!list[@]}") <(printf "%s\n" "${list[@]}") | column -s $'\t' -t
For your input, it yields:
areallylongone z down here
a x is in this one
b y is here
To handle spaces in (both) keys and values, we used TAB (\t) as column delimiter, in both paste (-d option) and column (-s option) commands.
To obtain the desired output order, building on hek2mgl's answer:
declare -A assoc=(
[a]="x is in this one"
[b]="y is here"
[areallylongone]="z down here"
)
for i in "${!assoc[#]}" ; do
echo "${i}=${assoc[$i]}"
done | column -s= -t | sort -k 2
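Note that bash associative arrays do not preserve insertion order, so if the table must follow the order in which the keys were declared, keep that order in a separate indexed array; a sketch:
declare -A list=(
    [a]="x is in this one"
    [b]="y is here"
    [areallylongone]="z down here"
)
order=(a b areallylongone)                 # explicit key order
for k in "${order[@]}"; do
    printf '%s\t%s\n' "$k" "${list[$k]}"
done | column -s$'\t' -t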

Want to sort a file based on another file in unix shell

I have 2 files refer.txt and parse.txt
refer.txt contains the following
julie,remo,rob,whitney,james
parse.txt contains
remo/hello/1.0,remo/hello2/2.0,remo/hello3/3.0,whitney/hello/1.0,julie/hello/2.0,julie/hello/3.0,rob/hello/4.0,james/hello/6.0
Now my output.txt should list the files in parse.txt based on the order specified in refer.txt
ex of output.txt should be:
julie/hello/2.0,julie/hello/3.0,remo/hello/1.0,remo/hello2/2.0,remo/hello3/3.0,rob/hello/4.0,whitney/hello/1.0,james/hello/6.0
I have tried the following code:
sort -nru refer.txt parse.txt
but no luck.
Please assist me. TIA.
You can do that using gnu-awk:
awk -F/ -v RS=',|\n' 'FNR==NR{a[$1] = (a[$1])? a[$1] "," $0 : $0 ; next}
{s = (s)? s "," a[$1] : a[$1]} END{print s}' parse.txt refer.txt
Output:
julie/hello/2.0,julie/hello/3.0,remo/hello/1.0,remo/hello2/2.0,remo/hello3/3.0,rob/hello/4.0,whitney/hello/1.0,james/hello/6.0
Explanation:
-F/ # Use field separator as /
-v RS=',|\n' # Use record separator as comma or newline
NR == FNR { # While processing parse.txt
a[$1]=(a[$1])?a[$1] ","$0:$0 # create an array with 1st field as key and value as all the
# records with keys julie, remo, rob etc.
}
{ # while processing the second file refer.txt
s = (s)?s "," a[$1]:a[$1] # aggregate all values by reading key from 2nd file
}
END {print s } # print all the values
In pure native bash (4.x):
# read each file into an array
IFS=, read -r -a values <parse.txt
IFS=, read -r -a ordering <refer.txt
# create a map from content before "/" to comma-separated full values in preserved order
declare -A kv=( )
for value in "${values[#]}"; do
key=${value%%/*}
if [[ ${kv[$key]} ]]; then
kv[$key]+=",$value" # already exists, comma-separate
else
kv[$key]="$value"
fi
done
# go through refer list, putting full value into "out" array for each entry
out=( )
for value in "${ordering[#]}"; do
out+=( "${kv[$value]}" )
done
# print "out" array in comma-separated form
IFS=,
printf '%s\n' "${out[*]}" >output.txt
If you're getting more output fields than you have input fields, you're probably trying to run this with bash 3.x. Since associative array support is mandatory for correct operation, this won't work.
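For reference, saving the script above as, say, reorder.sh next to the two input files should reproduce exactly the ordering asked for:
$ bash reorder.sh
$ cat output.txt
julie/hello/2.0,julie/hello/3.0,remo/hello/1.0,remo/hello2/2.0,remo/hello3/3.0,rob/hello/4.0,whitney/hello/1.0,james/hello/6.0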
tr , "\n" refer.txt | cat -n >person_id.txt # 'cut -n' not posix, use sed and paste
cat person_id.txt | while read person_id person_key
do
print "$person_id" > $person_key
done
tr , "\n" parse.txt | sed 's/(^[^\/]*)(\/.*)$/\1 \1\2/' >person_data.txt
cat person_data.txt | while read foreign_key person_data
do
person_id="$(<$foreign_key)"
print "$person_id" " " "$person_data" >>merge.txt
done
sort merge.txt >output.txt
A text book data processing approach, a person id table, a person data table, merged on a common key field, which is the first name of the person:
[person_key] [person_id]
- person id table, a unique sortable 'id' for each person (line number in this instance, since that is the desired sort order), and key for each person (their first name)
[person_key] [person_data]
- person data table, the data for each person indexed by 'person_key'
[person_id] [person_data]
- a merge of the 'person_id' table and 'person_data' table on 'person_key', which can then be sorted on person_id, giving the output as requested
The trick is to implement an associative array using files, the file name being the key (in this instance 'person_key'), the content being the value. [Essentially a random access file implemented using the filesystem.]
This actually adds a step to the otherwise simple but not very efficient task of grepping parse.txt with each value in refer.txt - which is more efficient I'm not sure.
NB: The above code is very unlikely to work out of the box.
NBB: On reflection, probably a better way of doing this would be to use the file system to create a random access file from parse.txt (essentially an index), and then to treat refer.txt as a batch job: for each name read in from refer.txt, print the corresponding data out of the parse.txt random access file:
# 1) index data file on required field
mkdir -p ./person_data
cat person_data.txt | while read data
do
    key="$(printf '%s\n' "$data" | sed 's/^\([^\/]*\).*$/\1/')"    # alt. cut -d'/' -f1
    printf '%s\n' "$data" >> ./person_data/"$key"
done
# 2) run batch job
cat refer_data.txt | while read key
do
    cat ./person_data/"$key"
done
However, having said that, using egrep is probably just as rigorous a solution, at least for small datasets, and I would most certainly use that approach given the specific question posed. (Or maybe not! The above could well prove faster as well as more robust.)
Command
while read line; do
grep -w "^$line" <(tr , "\n" < parse.txt)
done < <(tr , "\n" < refer.txt) | paste -s -d , -
Key points
For both files, commas are translated to newlines using the tr command (without actually changing the files themselves). This is useful because while read and grep work under the assumption that your records are separated by newlines instead of commas.
while read will read in every name from refer.txt, (i.e julie, remo, etc.) and then use grep to retrieve lines from parse.txt containing that name.
The ^ in the regex ensures matching is only performed from the start of the string and not in the middle (thanks to @CharlesDuffy's comment below), and the -w option for grep allows whole-word matching only. For example, this ensures that "rob" only matches "rob/..." and not "robby/..." or "throb/...".
The paste command at the end will comma-separate the results. Removing this command will print each result on its own line.

Pass external variable to xidel in bash loop script

I am trying to parse an HTML page using XPath with xidel.
The page has a table with multiple rows and columns.
I need to get the values from columns 2 and 5 (IP and port) of each row and store them in a csv-like file.
Here is my script
#!/bin/bash
for (( i = 2; i <= 100; i++ ))
do
xidel http://www.vpngate.net/en/ -e '//*[@id="vg_hosts_table_id"]/tbody/tr["'$i'"]/td[2]/span[1]' >> "$i".txt #get value from first column
xidel http://www.vpngate.net/en/ -e '//*[@id="vg_hosts_table_id"]/tbody/tr["'$i'"]/td[5]' >> "$i".txt #get value from second column
sed -i ':a;N;$!ba;s/\n/^/g' "$i".txt #replace newline with custom delimiter
sed -i '/\s/d' "$i".txt #remove blanks
cat "$i".txt >> ip_port_list #create list
zip -m ips.zip "$i".txt #archive unneeded texts
done
Performance is not an issue.
When I manually increment each tr it looks perfect, but not with the variable from the loop.
I want to receive a pair of values from each row.
Now I get only partial data or even an empty file.
I need to get the values from columns 2 and 5 (IP and port) of each row and store them in a csv-like file.
xidel -s "https://www.vpngate.net/en/" -e '
(//table[@id="vg_hosts_table_id"])[3]//tr[not(td[@class="vg_table_header"])]/concat(
td[2]/span[@style="font-size: 10pt;"],
",",
extract(
td[5],
"TCP: (\d+)",
1
)
)
'
220.218.70.177,443
211.58.36.54,995
1.239.223.190,1351
[...]
153.207.18.229,1542
(//table[@id="vg_hosts_table_id"])[3]: Select the 3rd table of its kind. The one you want.
//tr[not(td[@class="vg_table_header"])]: Select all rows, except the headers.
td[2]/span[@style="font-size: 10pt;"]: Select the 2nd column and the <span> that contains just the IP-address.
extract(td[5],"TCP: (\d+)",1): Select the 5th column and extract (regex) the numerical value after "TCP ".
Maybe this xidel line will come in handy:
xidel -q http://www.vpngate.net/en/ -e '//*[@id="vg_hosts_table_id"]/tbody/tr[*]/concat(td[2]/span[1],",",substring-after(substring-before(td[5],"UDP:"),"TCP: "))'
This will only do one fetch (so the admins of vpngate won't block you) and it'll also create a CSV output (ip,port)... Hopefully that is what you were looking for?
