I have a YAML file that also contains lists.
YAML file:
configuration:
  account: account1
  warehouse: warehouse1
  database: database1
  object_type:
    schema: schema1
    functions: funtion1
    tables:
      - table: table1
        sql_file_loc: some_path/some_file.sql
      - table: table2
        sql_file_loc: some_path/some_file.sql
I want to store the key/value pairs in shell variables and loop through them. For example, the values for account/warehouse/database should go into variables which I can use later on. Also, the values for the tables (table1 and table2) and sql_file_loc should go into shell variables which I can use for looping, like below:
for i in $table; do
    echo $i
done
I have tried the code below:
function parse_yaml {
    local prefix=$2
    local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
    sed -ne "s|^\($s\):|\1|" \
        -e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
        -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $1 |
    awk -F$fs '{
        indent = length($1)/2;
        vname[indent] = $2;
        for (i in vname) {if (i > indent) {delete vname[i]}}
        if (length($3) > 0) {
            vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
            printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, $2, $3);
        }
    }'
}
And this is the output I get:
configuration_account="account1"
configuration_warehouse="warehouse1"
configuration_database="database1"
configuration_object_type_schema="schema1"
configuration_object_type_functions="funtion1"
configuration_object_type_tables__sql_file_loc="some_path/some_file.sql"
configuration_object_type_tables__sql_file_loc="some_path/some_file.sql"
It doesn't print:
configuration_object_type_tables__table="table1"
configuration_object_type_tables__table="table2"
Also, for a list it prints two underscores (__), unlike other objects.
And I want to loop over the values stored in configuration_object_type_tables__table and configuration_object_type_tables__sql_file_loc.
Any help would be appreciated!
Consider using the YAML processor mikefarah/yq.
It's a one-liner:
yq e '.. | select(type == "!!str") | (path | join("_")) + "=\"" + . + "\""' "$INPUT"
Output
configuration_account="account1"
configuration_warehouse="warehouse1"
configuration_database="database1"
configuration_object_type_schema="schema1"
configuration_object_type_functions="funtion1"
configuration_object_type_tables_0_table="table1"
configuration_object_type_tables_0_sql_file_loc="some_path/some_file.sql"
configuration_object_type_tables_1_table="table2"
configuration_object_type_tables_1_sql_file_loc="some_path/some_file.sql"
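If you want to get those assignments into the shell and loop over the table entries, here is one possible sketch (assuming bash, yq v4, and that the YAML above is saved as config.yaml; the file name is mine, not from the question):

#!/bin/bash
# Load the VAR="value" lines emitted by yq into the current shell.
# Only do this if you trust the YAML contents, since eval runs them as code.
eval "$(yq e '.. | select(type == "!!str") | (path | join("_")) + "=\"" + . + "\""' config.yaml)"

echo "$configuration_account"    # account1

# Loop over the table entries by index; yq's length gives the list size.
n=$(yq e '.configuration.object_type.tables | length' config.yaml)
for ((i = 0; i < n; i++)); do
    table_var="configuration_object_type_tables_${i}_table"
    sql_var="configuration_object_type_tables_${i}_sql_file_loc"
    echo "${!table_var}: ${!sql_var}"    # bash indirect expansion
done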
Also take a look at this cool built-in feature of yq:
yq e -o props "$INPUT"
Output
configuration.account = account1
configuration.warehouse = warehouse1
configuration.database = database1
configuration.object_type.schema = schema1
configuration.object_type.functions = funtion1
configuration.object_type.tables.0.table = table1
configuration.object_type.tables.0.sql_file_loc = some_path/some_file.sql
configuration.object_type.tables.1.table = table2
configuration.object_type.tables.1.sql_file_loc = some_path/some_file.sql
I suggest you try the yq YAML processor, as jpseng mentioned.
As for the code you have here: the regex does not match the "- table" lines because of the "- " prefix.
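If you want to stay with parse_yaml, a crude workaround is to strip the list markers first (just a sketch: the list level still shows up as a double underscore, and repeated list items produce identically named variables rather than indexed ones):

# Hypothetical pre-processing: turn "- key: value" into "  key: value"
# so the existing regexes see a plain mapping entry.
sed 's/^\([[:space:]]*\)- /\1  /' config.yaml > flat.yaml
parse_yaml flat.yaml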
Related
I have to create a script that, given a country and a sport, outputs the number of medalists and medals won after reading a CSV file.
The CSV is called "athletes.csv" and has this header:
id|name|nationality|sex|date_of_birth|height|weight|sport|gold|silver|bronze|info
When you call the script, you have to pass the nationality and sport as parameters.
The script I have created is this one:
#!/bin/bash
participants=0
medals=0
while IFS=, read -ra array
do
    if [[ "${array[2]}" == $1 && "${array[7]}" == $2 ]]
    then
        participants=$participants++
        medals=$(($medals+${array[8]}+${array[9]}+${array[10]))
    fi
done < athletes.csv
echo $participants
echo $medals
where array[3] is the nationality, array[8] is the sport, and array[9] to array[11] are the numbers of medals won.
When I run the script with the correct parameters, I get 0 participants and 0 medals.
Could you help me understand what I'm doing wrong?
Note: I cannot use awk or grep.
Thanks in advance
Try this:
#! /bin/bash -p

nation_arg=$1
sport_arg=$2

declare -i participants=0
declare -i medals=0
declare -i line_num=0

while IFS=, read -r _ _ nation _ _ _ _ sport ngold nsilver nbronze _; do
    (( ++line_num == 1 )) && continue   # Skip the header
    [[ $nation == "$nation_arg" && $sport == "$sport_arg" ]] || continue
    participants+=1
    medals+=ngold+nsilver+nbronze
done <athletes.csv

declare -p participants
declare -p medals
The code uses named variables instead of numbered positional parameters and array indexes to improve readability and maintainability.
Using declare -i means that strings assigned to the declared variables are treated as arithmetic expressions. That reduces clutter by avoiding the need for $(( ... )).
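As a quick illustration of what declare -i buys you:

declare -i n=0
n+=1+2      # the string "1+2" is evaluated as arithmetic
echo "$n"   # prints 3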
The code assumes that the field separator in the CSV file is a comma (,), not a pipe (|) as in the header. If the separator is really |, replace IFS=, with IFS='|'.
I'm assuming that the field delimiter of your CSV file is a comma but you can set it to whatever character you need.
Here's a fixed version of your code:
#!/bin/bash

participants=0
medals=0

{
    # skip the header
    read

    # process the records
    while IFS=',' read -ra array
    do
        if [[ "${array[2]}" == $1 && "${array[7]}" == $2 ]]
        then
            (( participants++ ))
            medals=$(( medals + array[8] + array[9] + array[10] ))
        fi
    done
} < athletes.csv

echo "$participants" "$medals"
Remark: as $1 and $2 are left unquoted, they are subject to glob matching (on the right-hand side of [[ ... == ... ]]). For example, you'll be able to show the total number of medals won by the US with:
./script.sh 'US' '*'
But I have to say, doing text processing in pure shell isn't considered good practice; there are dedicated tools for that. Here's an example with awk:
awk -v FS=',' -v country="$1" -v sport="$2" '
    BEGIN {
        participants = medals = 0
    }
    NR == 1 { next }
    $3 == country && $8 == sport {
        participants++
        medals += $9 + $10 + $11
    }
    END { print participants, medals }
' athletes.csv
There's also a potential problem remaining: the CSV format might need a real CSV parser to be read accurately. There are a few awk libraries for that, but IMHO it's simpler to use a CSV-aware tool that provides the functionality you need.
Here's an example with Miller:
mlr --icsv --ifs=',' filter -s country="$1" -s sport="$2" '
    begin {
        @participants = 0;
        @medals = 0;
    }
    $nationality == @country && $sport == @sport {
        @participants += 1;
        @medals += $gold + $silver + $bronze;
    }
    false;
    end { print @participants, @medals; }
' athletes.csv
I managed to parse a custom YAML file using the script below, from How can I parse a YAML file from a Linux shell script? by Stefan:
function parse_yaml {
    local prefix=$2
    local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
    sed -ne "s|^\($s\):|\1|" \
        -e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
        -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $1 |
    awk -F$fs '{
        indent = length($1)/2;
        vname[indent] = $2;
        for (i in vname) {if (i > indent) {delete vname[i]}}
        if (length($3) > 0) {
            vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
            printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, $2, $3);
        }
    }'
}
Output:
$ parse_yaml new_export.yaml
schemas_name="exports"
schemas_tables_name="TEST1"
schemas_tables_description="'"Tracks analysis"
schemas_tables_active_date="2019-01-07 00:00:00"
schemas_tables_columns_name="event_create_ts"
schemas_tables_columns_type="timestamp without time zone"
schemas_tables_columns_name="issue_id"
schemas_tables_columns_type="bigint"
schemas_tables_columns_description="conv id"
schemas_tables_columns_example="21352352"
schemas_tables_columns_name="company_id"
schemas_tables_columns_type="bigint"
schemas_tables_columns_description="'"Tracks analysis"
schemas_tables_columns_example="10001"
schemas_tables_name="TEST2"
schemas_tables_description="This table presents funny encounters"
schemas_tables_active_date="2018-12-18 00:00:00"
schemas_tables_columns_name="instance_ts"
schemas_tables_columns_type="datetime"
schemas_tables_columns_description="|-"
schemas_tables_columns_example="2018-03-03 12:30:00"
schemas_tables_columns_name="address_id"
schemas_tables_columns_type="bigint"
How can I generate a CSV file out of it, using the nested hierarchy for each table and its columns, etc., based on the keys?
Something like below:
exports.TEST1.event_create_ts,"timestamp without time zone"
exports.TEST1.issue_id,bigint,"conv id",21352352
exports.TEST1.company_id,bigint,"'"Tracks analysis",10001
exports.TEST2.instance_ts,datetime,"|-","2018-03-03 12:30:00"
exports.TEST2.address_id,bigint
Any help would be appreciated!
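Not a full answer, but one possible sketch: pipe the parse_yaml output through awk, start a new CSV row whenever a schemas_tables_columns_name key appears, and append the other columns_* values to the current row. This assumes the values contain no commas and does not try to repair the messy embedded quotes visible above:

parse_yaml new_export.yaml | awk -F'="' '
{
    key = $1
    val = substr($2, 1, length($2) - 1)               # strip the trailing quote
    if (key == "schemas_name")                         schema = val
    else if (key == "schemas_tables_name")             table = val
    else if (key == "schemas_tables_columns_name") {
        if (row != "") print row                       # flush the previous column
        row = schema "." table "." val
    }
    else if (key ~ /^schemas_tables_columns_/)         row = row "," val
}
END { if (row != "") print row }
'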
A server provides a list of asset IDs, separated by commas in square brackets, after the date and a colon:
20160420084726:-
20160420085418:[111783178, 111557953, 111646835, 111413356, 111412662, 105618372, 111413557]
20160420085418:[111413432, 111633904, 111783198, 111792767, 111557948, 111413225, 111413281]
20160420085418:[111413432, 111633904, 111783198, 111792767, 111557948, 111413225, 111413281]
20160420085522:[111344871, 111394583, 111295547, 111379566, 111352520]
20160420090022:[111344871, 111394583, 111295547, 111379566, 111352520]
The format of the input log is:
timestamp:ads
Where:
timestamp is in the format YYYYMMDDhhmmss, and ads is a comma-separated list of ad asset IDs surrounded by square brackets, or - if no ads were returned.
The first part of the task is to write a script that outputs, for each ten-minute slice of the day:
Count of IDs that were returned
Count of unique IDs that were returned
The script should support a command-line parameter to select whether unique or total IDs should be given.
Example output using the above log excerpt (in total mode):
20160420084:0
20160420085:26
20160420090:5
And in unique count mode it would give:
20160420084:0
20160420085:19
20160420090:5
I have tried this:
awk -F '[,:]' '
    {
        key = substr($1,1,11)"0"
        count[key] += ($2 == "-" ? 0 : NF-1)
    }
    END {
        PROCINFO["sorted_in"] = "@ind_num_asc"
        for (key in count) print key, count[key]
    }
' $LOGFILENAME | grep $DATE;
The scripts given so far fail in other scenarios, for example this one:
log file:
https://drive.google.com/file/d/1sXFvLyCH8gZrXiqf095MubyP7-sLVUXt/view?usp=sharing
The first few lines of the results should be:
nonunique:
20160420000:1
20160420001:11
20160420002:13
20160420003:16
20160420004:3
20160420005:3
20160420010:6
unique:
20160420000:1
20160420001:5
20160420002:5
20160420003:5
20160420004:3
20160420005:3
20160420010:4
$ cat tst.awk
BEGIN { FS="[]:[]+"; OFS=":" }
{
    tot = unq = 0
    time = substr($1,1,11)
    if ( /,/ ) {
        tot = split($2,tmp,/, ?/)
        for ( i in tmp ) {
            if ( !seen[time,tmp[i]]++ ) {
                unq++
            }
        }
    }
    tots[time] += tot
    unqs[time] += unq
}
END {
    for (time in tots) {
        print time, tots[time], unqs[time]
    }
}

$ awk -f tst.awk file
20160420084:0:0
20160420085:26:19
20160420090:5:5
Massage to suit...
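For example, to split that combined output into the two requested modes, you could just cut the columns you need:

awk -f tst.awk file | cut -d: -f1,2    # total mode:  timestamp:count
awk -f tst.awk file | cut -d: -f1,3    # unique mode: timestamp:count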
#!/bin/bash
while read; do
    dts=$( echo "$REPLY" | cut -d: -f1 )
    ids=$( echo "$REPLY" | grep -o '\[.*\]' )
    if [ $? -eq 0 ]; then
        ids=$( echo "$ids" | tr -d '[] ' | tr ',' '\n' | sort $1 )
        count=$( echo "$ids" | wc -l )
    else
        count=0
    fi
    echo $dts: $count
done
Run like this (the optional -u is forwarded to sort, which deduplicates the IDs before they are counted):
./script.sh [-u] <input.txt
I am trying to use bc in an awk script. In the code below, I am trying to convert a hexadecimal number to binary and store it in a variable.
#!/bin/awk -f
{
    binary_vector = $(bc <<< "ibase=16;obase=2;FF")
}
Where am I going wrong?
Not saying it's a good idea but:
$ awk 'BEGIN {
    cmd = "bc <<< \"ibase=16;obase=2;FF\""
    rslt = ((cmd | getline line) > 0 ? line : -1)
    close(cmd)
    print rslt
}'
11111111
Also see http://gnu.org/software/gawk/manual/gawk.html#Bitwise-Functions and http://gnu.org/software/gawk/manual/gawk.html#Nondecimal-Data
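For completeness, here's a pure-gawk sketch of the same conversion using strtonum() and the bitwise functions from those links, so no external bc is needed:

gawk 'BEGIN {
    n = strtonum("0xFF")                 # hex string -> number (gawk extension)
    b = ""
    while (n > 0) { b = and(n, 1) b; n = rshift(n, 1) }
    print (b == "" ? "0" : b)            # prints 11111111
}'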
The following one-liner Awk script should do what you want:
awk -vVAR=$(read -p "Enter number: " -u 0 num; echo $num) \
'BEGIN{system("echo \"ibase=16;obase=2;"VAR"\"|bc");}'
Explanation:

-vVAR                   passes the variable VAR into Awk.
-vVAR=$(read -p ... )   sets the variable VAR from the shell to the user input.
system("echo ... |bc")  uses the Awk system built-in command to execute the shell commands. Notice how the quoting stops at the variable VAR and then continues just after it; that's so that Awk interprets VAR as an Awk variable and not as part of the string put into the system call.
Update - to use it in an Awk variable:
awk -vVAR=$(read -p "Enter number: " -u 0 num; echo $num) \
'BEGIN{s="echo \"ibase=16;obase=2;"VAR"\"|bc"; s | getline awk_var;\
close(s); print awk_var}'
s | getline awk_var puts the output of the command s into the Awk variable awk_var. Note that the string is built before it is sent to getline; otherwise (unless you parenthesize the string concatenation) Awk will try to send it to getline in separate pieces, as %s VAR %s.
The close(s) call closes the pipe. For bc this doesn't matter, and Awk automatically closes pipes upon exit, but if you put this into a more elaborate Awk script it is best to close the pipe explicitly. According to the Awk documentation, some commands such as mail will wait on the pipe to close prior to completion.
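In other words, build the command string first or parenthesize the concatenation:

# Unportable: different awks parse the concatenation around the pipe differently
#   "echo \"ibase=16;obase=2;" VAR "\" | bc" | getline x
# Safe: either of these
s = "echo \"ibase=16;obase=2;" VAR "\" | bc"; s | getline x
("echo \"ibase=16;obase=2;" VAR "\" | bc") | getline x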
http://www.staff.science.uu.nl/~oostr102/docs/nawk/nawk_39.html
By the way you wrote your example, it looks like you want to convert an awk record (line) into an associative array. Here's an awk executable script that allows that, by running the bc command over the values in a split() type array:
#!/usr/bin/awk -f
{
    # initialize the a array
    cnt = split($0, a, FS)
    if( convertArrayBase(10, 2, a, cnt) > -1 ) {
        # use the array here
        for(i=1; i<=cnt; i++) {
            print a[i]
        }
    }
}

# Destructively updates input array, converting numbers from ibase to obase
#
# @ibase: ibase value for bc
# @obase: obase value for bc
# @a:     a split() type associative array where keys are numeric
# @cnt:   size of a ( number of fields )
#
# @return: -1 if there's a getline error, else cnt
#
function convertArrayBase(ibase, obase, a, cnt,    i, b, cmd) {
    cmd = sprintf("echo \"ibase=%d;obase=%d", ibase, obase)
    for(i=1; i<=cnt; i++ ) {
        cmd = cmd ";" a[i]
    }
    cmd = cmd "\" | bc"

    i = 0 # reset i
    while( (cmd | getline b) > 0 ) {
        a[++i] = b
    }
    close( cmd )

    return i==cnt ? cnt : -1
}
When used with an input of:
1 2 3
4 s 1234567
this script outputs the following:
1
10
11
100
0
100101101011010000111
The convertArrayBase function operates on split() type arrays, so you have to initialize the input array (a here) with the full row (as shown) or a field's subfields (not shown) before calling it. It destructively updates the array.
You could instead call bc directly, with some helper files, to get similar output. I didn't find that bc supports - (stdin as a file name), so it's a little more involved than I'd like.
Making a start_cmds file like this:
ibase=10;obase=2;
and a quit_cmd like:
;quit
Given an input file (called data.semi) where the data is separated by a ;, like this:
1;2;3
4;s;1234567
you can run bc like:
$ bc -q start_cmds data.semi quit_cmd
1
10
11
100
0
100101101011010000111
which is the same data that the awk script outputs, but calling bc only a single time with all of the inputs. Now, while that data isn't in an awk associative array in a script, the bc output can be written as stdin input to awk and reassembled into an array like:
bc -q start_cmds data.semi quit_cmd | awk 'FNR==NR {a[FNR]=$1; next} END { for( k in a ) print k, a[k] }' -
1 1
2 10
3 11
4 100
5 0
6 100101101011010000111
where the final dash tells awk to treat stdin as an input file and lets you add other files later for processing.
I have a caret delimited (key=value) input and would like to extract multiple tokens of interest from it.
For example: Given the following input
$ echo -e "1=A00^35=D^150=1^33=1\n1=B000^35=D^150=2^33=2"
1=A00^35=D^150=1^33=1
1=B000^35=D^150=2^33=2
I would like the following output
35=D^150=1^
35=D^150=2^
I have tried the following
$ echo -e "1=A00^35=D^150=1^33=1\n1=B000^35=D^150=2^33=2"|egrep -o "35=[^/^]*\^|150=[^/^]*\^"
35=D^
150=1^
35=D^
150=2^
My problem is that egrep returns each match on a separate line. Is it possible to get one line of output for one line of input? Please note that, due to the constraints of the larger script, I cannot simply do a blind replace of all the \n characters in the output.
Thank you for any suggestions. This script is for bash 3.2.25; any egrep alternatives are welcome. Please note that the tokens of interest (35 and 150) may change, and I am already generating the egrep pattern in the script, hence a one-liner (if possible) would be great.
You have two options. Option 1 is to change the "white space character" and use set --:
OFS=$IFS
IFS="^ "
set -- 1=A00^35=D^150=1^33=1 # No quotes here!!
IFS="$OFS"
Now you have your values in $1, $2, etc.
Or you can use an array:
tmp=$(echo "1=A00^35=D^150=1^33=1" | sed -e 's:\([0-9]\+\)=: [\1]=:g' -e 's:\^ : :g')
eval value=($tmp)
echo "35=${value[35]}^150=${value[150]}"
To get rid of the newline, you can just echo it again:
$ echo $(echo "1=A00^35=D^150=1^33=1"|egrep -o "35=[^/^]*\^|150=[^/^]*\^")
35=D^ 150=1^
If that's not satisfactory (I think it may give you one line for the whole input file), you can use awk:
pax> echo '
1=A00^35=D^150=1^33=1
1=a00^35=d^157=11^33=11
' | awk -vLIST=35,150 -F^ '{
    sep = "";
    split (LIST, srch, ",");
    for (i = 1; i <= NF; i++) {
        for (idx in srch) {
            split ($i, arr, "=");
            if (arr[1] == srch[idx]) {
                printf sep "" arr[1] "=" arr[2];
                sep = "^";
            }
        }
    }
    if (sep != "") {
        print sep;
    }
}'
35=D^150=1^
35=d^
pax> echo '
1=A00^35=D^150=1^33=1
1=a00^35=d^157=11^33=11
' | awk -vLIST=1,33 -F^ '{
    sep = "";
    split (LIST, srch, ",");
    for (i = 1; i <= NF; i++) {
        for (idx in srch) {
            split ($i, arr, "=");
            if (arr[1] == srch[idx]) {
                printf sep "" arr[1] "=" arr[2];
                sep = "^";
            }
        }
    }
    if (sep != "") {
        print sep;
    }
}'
1=A00^33=1^
1=a00^33=11^
This one allows you to use a single awk script and all you need to do is to provide a comma-separated list of keys to print out.
And here's the one-liner version :-)
echo '1=A00^35=D^150=1^33=1
1=a00^35=d^157=11^33=11
' | awk -vLST=1,33 -F^ '{s="";split(LST,k,",");for(i=1;i<=NF;i++){for(j in k){split($i,arr,"=");if(arr[1]==k[j]){printf s""arr[1]"="arr[2];s="^";}}}if(s!=""){print s;}}'
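If you'd rather keep the egrep pattern you are already generating, a small read loop also gives one output line per input line (a sketch; assumes the input lines are in a file named input.txt):

while IFS= read -r line; do
    echo "$line" | egrep -o "35=[^/^]*\^|150=[^/^]*\^" | tr -d '\n'
    echo    # terminate this input line's matches with a newline
done < input.txt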
Given a file 'in' containing your strings (note that this relies on the tokens of interest always being the 2nd and 3rd ^-separated fields):
$ for i in $(cut -d^ -f2,3 < in); do echo $i^; done
35=D^150=1^
35=D^150=2^