I am looking to update a config file using bash.
The config file has multiple sections, like:
[SECTION1]
action.email.useNSSubject = 1
dispatch.earliest_time = 1578927600
dispatch.latest_time = 1579016736
search = | inputlookup KPI_MASTER_LIST.csv | search TYPE="MTE_GENERIC" \
| table ALERT Order\
| map maxsearches=21 search="| savedsearch "$$ALERT$$" host_token=$host_token$ SERVICE_EARLIEST_TIME=$SERVICE_EARLIEST_TIME$ time_token.earliest=$time_token.earliest$ time_token.latest=$time_token.latest$ | appendcols [ | makeresults | eval Order="$$Order$$" | fillnull count ] | table ALERT count Order "\
| sort Order \
[SECTION2]
action.email.useNSSubject = 1
alert.track = 0
dispatch.earliest_time = 153437300
dispatch.latest_time = 1549013433
display.general.timeRangePicker.show = 0
search = | inputlookup KPI_MASTER_LIST.csv | search TYPE="MTE_GENERIC" \
| table ALERT Order\
| map maxsearches=21 search="| savedsearch "$$ALERT$$" host_token=$host_token$ SERVICE_EARLIEST_TIME=$SERVICE_EARLIEST_TIME$ time_token.earliest=$time_token.earliest$ time_token.latest=$time_token.latest$ | appendcols [ | makeresults | eval Order="$$Order$$" | fillnull count ] | table ALERT count Order "\
| sort Order \
I want to update the values of "dispatch.earliest_time" and "dispatch.latest_time" in one specific section only (not every occurrence in the file).
You could use an address range bounded by the section headers:
sed '/\[SECTION_NAME\]/,/^\[/ s/^\(dispatch\.earliest_time = \).*/\1new_value_here/'
You can find thorough documentation about sed in the GNU sed manual.
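For example, a sketch of updating both keys inside [SECTION1] only, editing the file in place with GNU sed; NEW_EARLIEST and NEW_LATEST stand in for your epoch values and your_config.conf for the actual file name:
sed -i '/\[SECTION1\]/,/^\[/ {
    s/^\(dispatch\.earliest_time = \).*/\1NEW_EARLIEST/
    s/^\(dispatch\.latest_time = \).*/\1NEW_LATEST/
}' your_config.conf
The substitutions stop at the [SECTION2] header, so the keys under it are untouched.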
I have a main csv file with records (file1). I then have a second "delta" csv file (file2). I would like to update the main file with the records from the delta file using bash. Existing records should get the new value (replace the row) and new records should be appended.
Example file1
unique_id|value
'1'|'old'
'2'|'old'
'3'|'old'
Example file2
unique_id|value
'1'|'new'
'4'|'new'
Desired outcome
unique_id|value
'1'|'new'
'2'|'old'
'3'|'old'
'4'|'new'
awk -F '|' '
    # first time we see this id (including the header line), remember its order
    ! ($1 in rows) { ids[id_count++] = $1 }
    # always keep the latest row for an id, so new.csv overwrites old.csv
    { rows[$1] = $0 }
    END {
        # print the rows back out in first-seen order
        for (i = 0; i < id_count; i++)
            print rows[ids[i]]
    }
' old.csv new.csv
Output:
unique_id|value
'1'|'new'
'2'|'old'
'3'|'old'
'4'|'new'
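Since the question asks to update the main file itself, the result can be written to a temporary file and then moved over the original; a sketch reusing the command above (here old.csv plays the role of file1):
awk -F '|' '
    ! ($1 in rows) { ids[id_count++] = $1 }
    { rows[$1] = $0 }
    END { for (i = 0; i < id_count; i++) print rows[ids[i]] }
' old.csv new.csv > merged.tmp && mv merged.tmp old.csv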
Similar approach using perl
perl -F'\|' -lane '
    $id = $F[0];
    push @ids, $id unless exists $rows{$id};
    $rows{$id} = $_;
    END { print $rows{$_} for @ids }
' old.csv new.csv
You could also use an actual database e.g. sqlite
sqlite> create table old (unique_id text primary key, value text);
sqlite> create table new (unique_id text primary key, value text);
# skip headers
sqlite> .sep '|'
sqlite> .import --skip 1 new.csv new
sqlite> .import --skip 1 old.csv old
sqlite> select * from old;
'1'|'old'
'2'|'old'
'3'|'old'
sqlite> insert into old
select * from new where true
on conflict(unique_id)
do update set value=excluded.value;
sqlite> select * from old;
'1'|'new'
'2'|'old'
'3'|'old'
'4'|'new'
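The same session can be scripted non-interactively from bash; a sketch, assuming sqlite3 3.32+ (for .import --skip) and using merged.db and merged.csv as illustrative names:
sqlite3 merged.db <<'EOF'
.separator '|'
create table old (unique_id text primary key, value text);
create table new (unique_id text primary key, value text);
.import --skip 1 old.csv old
.import --skip 1 new.csv new
insert into old select * from new where true
  on conflict(unique_id) do update set value=excluded.value;
.headers on
.once merged.csv
select * from old order by unique_id;
EOF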
I immediately thought of join, but you cannot specify "take this column if there's a match, otherwise use another column, and have either output end up in a single column".
For command-line processing of CSV files, I really like GoCSV. It has its own CSV-aware join command—which is also limited like join (above)—and it has other commands that we can chain together to produce the desired output.
GoCSV uses a streaming/buffered reader/writer for as many subcommands as it can. Every command but join operates in this buffered-in-buffered-out fashion, but join needs to read both sides in total to match. Still, GoCSV is compiled and just really, really fast.
All GoCSV commands read the delimiter to use from the GOCSV_DELIMITER environment variable, so your first order of business is to export that for your pipe delimiter:
export GOCSV_DELIMITER='|'
Joining is easy, just specify the columns from either file to use as the key. I'm also going to rename the columns now so that we're set up for the conditional logic in the next step. If your columns vary from file to file, you'll want to rename each set of columns first, before you join.
I'm telling gocsv join to pick the first columns from both files, -c 1,1 and use an outer join to keep both left and right sides, regardless of match:
gocsv join -c 1,1 -outer file1.csv file2.csv \
| gocsv rename -c 1,2,3,4 -names 'id_left','val_left','id_right','val_right'
| id_left | val_left | id_right | val_right |
|---------|----------|----------|-----------|
| 1 | old | 1 | new |
| 2 | old | | |
| 3 | old | | |
| | | 4 | new |
There's no way to change a value in an existing column based on another column's value, but we can add new columns and use a templating language to define the logic we need.
The following syntax creates two new columns, id_final and val_final. For each, if there's a value in the _right column that value is used, otherwise the _left value is used. This, combined with the outer join of "left then right" from before, gives us the effect of the right side updating/overwriting the left side if the IDs matched:
... \
| gocsv add -name 'id_final' -t '{{ if .id_right }}{{ .id_right }}{{ else }}{{ .id_left }}{{ end }}' \
| gocsv add -name 'val_final' -t '{{ if .val_right }}{{ .val_right }}{{ else }}{{ .val_left }}{{ end }}'
| id_left | val_left | id_right | val_right | id_final | val_final |
|---------|----------|----------|-----------|----------|-----------|
| 1 | old | 1 | new | 1 | new |
| 2 | old | | | 2 | old |
| 3 | old | | | 3 | old |
| | | 4 | new | 4 | new |
Finally, we can select just the "final" fields and rename them back to their original names:
... \
| gocsv select -c 'id_final','val_final' \
| gocsv rename -c 1,2 -names 'unique_id','value'
| unique_id | value |
|-----------|-------|
| 1 | new |
| 2 | old |
| 3 | old |
| 4 | new |
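Assembled end to end, the whole pipeline looks like this (merged.csv is just an illustrative output name):
export GOCSV_DELIMITER='|'
gocsv join -c 1,1 -outer file1.csv file2.csv \
  | gocsv rename -c 1,2,3,4 -names 'id_left','val_left','id_right','val_right' \
  | gocsv add -name 'id_final' -t '{{ if .id_right }}{{ .id_right }}{{ else }}{{ .id_left }}{{ end }}' \
  | gocsv add -name 'val_final' -t '{{ if .val_right }}{{ .val_right }}{{ else }}{{ .val_left }}{{ end }}' \
  | gocsv select -c 'id_final','val_final' \
  | gocsv rename -c 1,2 -names 'unique_id','value' \
  > merged.csv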
GoCSV has pre-built binaries for modern platforms.
I use Miller and run
mlr --csv --fs "|" join --ul --ur -j unique_id --lp "l#" --rp "r#" -f 01.csv \
then put 'if(is_not_null(${r#value})){$value=${r#value}}else{$value=$value}' \
then cut -x -r -f '#' 02.csv
and I have
unique_id|value
'1'|'new'
'4'|'new'
'2'|'old'
'3'|'old'
I run a full outer join and use an if condition to check whether there is a value on the right side; if there is, I use it.
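If you also want the rows back in unique_id order, a sort step can be appended before the input file name; a sketch (Miller's sort -f sorts lexically, which works here because the ids are quoted single digits):
mlr --csv --fs "|" join --ul --ur -j unique_id --lp "l#" --rp "r#" -f 01.csv \
    then put 'if(is_not_null(${r#value})){$value=${r#value}}else{$value=$value}' \
    then cut -x -r -f '#' \
    then sort -f unique_id 02.csv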
I have a sample output from a command
+--------------------------------------+------------------+---------------------+-------------------------------------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+-------------------------------------+
| 04584e8a-c210-430b-8028-79dbf741797c | | 99.99.99.91 | |
| 12d2257c-c02b-4295-b910-2069f583bee5 | 20.0.0.92 | 99.99.99.92 | 37ebfa4c-c0f9-459a-a63b-fb2e84ab7f92 |
| 98c5a929-e125-411d-8a18-89877d3c932b | | 99.99.99.93 | |
| f55e54fb-e50a-4800-9a6e-1d75004a2541 | 20.0.0.94 | 99.99.99.94 | fe996e76-ffdb-4687-91a0-9b4df2631b4e |
+--------------------------------------+------------------+---------------------+-------------------------------------+
Now I want to fetch all the "floating_ip_address" values for which the "port_id" and "fixed_ip_address" fields are blank/empty (in the above sample, 99.99.99.91 and 99.99.99.93).
How can I do it with shell scripting?
You can use sed:
fl_ips=($(sed -nE 's/\|.*\|.*\|(.*)\|\s*\|/\1/p' inputfile))
Here inputfile is the table provided in the question. The array fl_ips contains the output of sed:
>echo ${#fl_ips[@]}
2 # Array has two elements
>echo ${fl_ips[0]}
99.99.99.91
>echo ${fl_ips[1]}
99.99.99.93
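The sed above keys off the empty port_id column. If you want both conditions checked explicitly (empty fixed_ip_address and empty port_id), an awk sketch over the same inputfile:
awk -F '|' '
    # data rows have 6 pipe-separated fields; $3 is fixed_ip_address, $5 is port_id
    NF == 6 && $3 !~ /[^ ]/ && $5 !~ /[^ ]/ {
        gsub(/ /, "", $4)    # strip the padding around the address
        print $4             # floating_ip_address
    }
' inputfile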
I have 1000 tables and need to run describe <table name>; on each of them. Instead of running them one by one, can you please give me one command to fetch "N" tables in a single shot?
You can make a shell script and call it with a parameter. For example, the following script receives a schema name, builds the list of tables in that schema, calls the DESCRIBE EXTENDED command for each, extracts the location, and prints the table location for the first 1000 tables in the schema ordered by name. You can modify it and use it as a single command:
#!/bin/bash
# Create the table list for a schema (script parameter)
HIVE_SCHEMA=$1
echo "Processing Hive schema $HIVE_SCHEMA..."
tablelist=tables_$HIVE_SCHEMA
hive -e "set hive.cli.print.header=false; use $HIVE_SCHEMA; show tables;" 1> "$tablelist"

# Number of tables to process
tableNum_limit=1000

# For each table do:
for table in $(sort "$tablelist" | head -n "$tableNum_limit")   # adjust sorting as needed
do
    echo "Processing table $table ..."
    # Call DESCRIBE
    out=$(hive -S -e "use $HIVE_SCHEMA; DESCRIBE EXTENDED $table")
    # Get the location, for example
    table_location=$(echo "${out}" | grep -Eo 'location:[^,]+' | sed 's/location://')
    echo "Table location: $table_location"
    # Do something else here
done
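A sketch of how it could be invoked, assuming the script above is saved as describe_tables.sh (an illustrative name):
chmod +x describe_tables.sh
./describe_tables.sh my_schema    # prints the location of the first 1000 tables in my_schema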
Query the metastore
Demo
Hive
create database my_db_1;
create database my_db_2;
create database my_db_3;
create table my_db_1.my_tbl_1 (i int);
create table my_db_2.my_tbl_2 (c1 string,c2 date,c3 decimal(12,2));
create table my_db_3.my_tbl_3 (x array<int>,y struct<i:int,j:int,k:int>);
MySQL (Metastore)
use metastore;

select d.name as db_name
      ,t.tbl_name
      ,c.integer_idx + 1 as col_position
      ,c.column_name
      ,c.type_name
from DBS as d
join TBLS as t
  on t.db_id = d.db_id
join SDS as s
  on s.sd_id = t.sd_id
join COLUMNS_V2 as c
  on c.cd_id = s.cd_id
where d.name like 'my\_db\_%'
order by d.name
        ,t.tbl_name
        ,c.integer_idx
;
+---------+----------+--------------+-------------+---------------------------+
| db_name | tbl_name | col_position | column_name | type_name |
+---------+----------+--------------+-------------+---------------------------+
| my_db_1 | my_tbl_1 | 1 | i | int |
| my_db_2 | my_tbl_2 | 1 | c1 | string |
| my_db_2 | my_tbl_2 | 2 | c2 | date |
| my_db_2 | my_tbl_2 | 3 | c3 | decimal(12,2) |
| my_db_3 | my_tbl_3 | 1 | x | array<int> |
| my_db_3 | my_tbl_3 | 2 | y | struct<i:int,j:int,k:int> |
+---------+----------+--------------+-------------+---------------------------+
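To run the same query from a shell script instead of an interactive MySQL session, something like the following sketch could work (the host, user, and metastore database name are assumptions; -p prompts for the password, and --batch produces tab-separated output):
mysql -h metastore-host -u hiveuser -p --batch -e "
select d.name as db_name, t.tbl_name, c.integer_idx + 1 as col_position,
       c.column_name, c.type_name
from DBS d
join TBLS t on t.db_id = d.db_id
join SDS s on s.sd_id = t.sd_id
join COLUMNS_V2 c on c.cd_id = s.cd_id
where d.name like 'my\\_db\\_%'
order by d.name, t.tbl_name, c.integer_idx;" metastore > table_columns.tsv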
I would like to align my source into columns...
Source:
IP | ASN | Prefix | AS Name | CN | Domain | ISP
109.228.12.96 | 8560 | 109.228.0.0/18 | ONEANDONE | DE | fasthosts.com | Fast Hosts LTD
Goal:
IP            | ASN  | Prefix         | AS Name   | CN | Domain        | ISP
109.228.12.96 | 8560 | 109.228.0.0/18 | ONEANDONE | DE | fasthosts.com | Fast Hosts LTD
I tried different things with the column command... but I end up with double spaces inside:
cat Source.txt | sed 's/ *| */#| /g' | column -s '#' -t
IP             | ASN   | Prefix          | AS Name    | CN  | Domain         | ISP
109.228.12.96  | 8560  | 109.228.0.0/18  | ONEANDONE  | DE  | fasthosts.com  | Fast Hosts LTD
Is there a way to use column without removing the delimiter...or another solution?
Thanks in advance for your help!
You can also do everything in awk. Save the program to pr.awk and run
awk -f pr.awk input.dat
BEGIN {
    FS = "|"
    ARGV[2] = "pass=2"            # a trick to read the file two times
    ARGV[3] = ARGV[1]
    ARGC = 4
    pass = 1
}

function trim(s) {
    sub(/^[[:space:]]+/, "", s)   # remove leading
    sub(/[[:space:]]+$/, "", s)   # and trailing whitespace
    return s
}

pass == 1 {
    for (i = 1; i <= NF; i++) {
        field = trim($i)
        len = length(field)
        w[i] = len > w[i] ? len : w[i]   # find the maximum width per column
    }
}

pass == 2 {
    line = ""
    for (i = 1; i <= NF; i++) {
        field = trim($i)
        s = i == NF ? field : sprintf("%-" w[i] "s", field)
        sep = i == 1 ? "" : " | "
        line = line sep s
    }
    print line
}
column has an input separator -s and also an output separator -o,
so the call looks like
cat file | column -t -s '|' -o '|'
Currently I am facing the following problem, which I am trying to solve in Stata. I have added the algorithm tag, because it's mainly the steps that I'm interested in rather than the Stata code.
I have some variables, say, var1 - var20 that can possibly contain a string. I am only interested in some of these strings, let us call them A,B,C,D,E,F, but other strings can occur also (all of these will be denoted X). Also I have a unique identifier ID. A part of the data could look like this:
ID | var1 | var2 | var3 | .. | var20
1 | E | | | | X
1 | | A | | | C
2 | X | F | A | |
8 | | | | | E
Now I want to create an entry for every ID and for every occurrence of one of the strings A,B,C,D,E,F in any of the variables. The above data should look like this:
ID | var1 | var2 | var3 | .. | var20
1 | E | | | .. |
1 | | A | | |
1 | | | | | C
2 | | F | | |
2 | | | A | |
8 | | | | | E
Here we ignore every time there's a string X that is NOT A,B,C,D,E or F. My attempt so far was to create a variable that for each entry counts the number, N, of occurrences of A,B,C,D,E,F. In the original data above that variable would be N=1,2,2,1. Then for each entry I create N duplicates of this. This results in the data:
ID | var1 | var2 | var3 | .. | var20
1 | E | | | | X
1 | | A | | | C
1 | | A | | | C
2 | X | F | A | |
2 | X | F | A | |
8 | | | | | E
My problem is how do I attack this problem from here? And sorry for the poor title, but I couldn't word it any more specifically.
Sorry, I thought the final block was your desired output (now I understand that it's what you've accomplished so far). You can get the middle block with two calls to reshape (long, then wide).
First I'll generate data to match yours.
clear
set obs 4
* ids
generate n = _n
generate id = 1 in 1/2
replace id = 2 in 3
replace id = 8 in 4
* generate your variables
forvalues i = 1/20 {
generate var`i' = ""
}
replace var1 = "E" in 1
replace var1 = "X" in 3
replace var2 = "A" in 2
replace var2 = "F" in 3
replace var3 = "A" in 3
replace var20 = "X" in 1
replace var20 = "C" in 2
replace var20 = "E" in 4
Now the two calls to reshape.
* reshape to long, keep only desired obs, then reshape to wide
reshape long var, i(n id) string
keep if inlist(var, "A", "B", "C", "D", "E", "F")
tempvar long_id
generate int `long_id' = _n
reshape wide var, i(`long_id') string
The first reshape converts your data from wide to long. The var specifies that the variables you want to reshape to long all start with var. The i(n id) specifies that each unique combination of n and id is a unique observation. The reshape call provides one observation for each n-id combination for each of your var1 through var20 variables. So now there are 4*20=80 observations. Then I keep only the strings that you'd like to keep with inlist().
For the second reshape call var specifies that the values you're reshaping are in variable var and that you'll use this as the prefix. You wanted one row per remaining letter, so I made a new index (that has no real meaning in the end) that becomes the i index for the second reshape call (if I used n-id as the unique observation, then we'd end up back where we started, but with only the good strings). The j index remains from the first reshape call (variable _j) so the reshape already knows what suffix to give to each var.
These two reshape calls yield:
. list n id var1 var2 var3 var20
+-------------------------------------+
| n id var1 var2 var3 var20 |
|-------------------------------------|
1. | 1 1 E |
2. | 2 1 A |
3. | 2 1 C |
4. | 3 2 F |
5. | 3 2 A |
|-------------------------------------|
6. | 4 8 E |
+-------------------------------------+
You can easily add back variables that don't survive the two reshapes.
* if you need to add back dropped variables
forvalues i =1/20 {
capture confirm variable var`i'
if _rc {
generate var`i' = ""
}
}