Match string in file1 with string in file2 - bash

My data examples are:
1.txt
MTQZ3CODT0SQKGE3QE6B | j t | j | t | 22312 | stimpy | EST | 8 | 20 | text | list | 0 | | 2002-08-22 13:07:05
2.txt
MTQZ3CODT0SQKGE3QE6B | joe#example.com
Desired output:
joe#example.com | j t | j | t | 22312 | stimpy | EST | 8 | 20 | text | list | 0 | | 2002-08-22 13:07:05
I'm supposed to match on the 1st column of 1.txt and replace it with the 2nd column of 2.txt.
So far I tried:
awk 'BEGIN { while((getline < "file2.txt") > 0) a[$1]=$3 } { $1 = a[$1] } 1' file1.txt
It works well, but after 12 hours of running it had only processed about 1 GB, which looks very slow.
Info: file1.txt = 7 GB, file2.txt = 4 GB, and my machine has 16 GB of memory.
I'm not sure what causes the slowness, but I hope there is a faster way than the awk approach I'm using. Any help would be appreciated.
Thanks!!
Note: I'm also running out of memory. Is there another way to do this, one that doesn't keep an array in memory at all?
Also, in my case matching lines appear in random order, not on the same line numbers!

$ join <(sort 2.txt) <(sort 1.txt) | cut -d' ' -f3-
joe#example.com | j t | j | t | 22312 | stimpy | EST | 8 | 20 | text | list | 0 | | 2002-08-22 13:07:05
If that's not all you need then edit your question to provide more truly representative sample input/output including cases that this doesn't work for.
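Given the file sizes mentioned (7 GB and 4 GB), the sort steps will dominate the runtime. If you have GNU sort, a bigger memory buffer and the C locale can speed them up considerably; a sketch with illustrative sizes, not a recommendation:
LC_ALL=C sort -S 4G 2.txt > 2.sorted
LC_ALL=C sort -S 4G 1.txt > 1.sorted
join 2.sorted 1.sorted | cut -d' ' -f3-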

You may use this awk:
awk -F ' *\\| *' -v OFS=' | ' '
FNR == NR {
map[$1]=$2
next
}
$1 in map {
$1 = map[$1]
} 1' 2.txt 1.txt
joe#example.com | j t | j | t | 22312 | stimpy | EST | 8 | 20 | text | list | 0 | | 2002-08-22 13:07:05
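Note that the FNR == NR block still loads all of 2.txt into the map[] array, so with a 4 GB lookup file you need at least that much free memory. If that is the limit you are hitting, the sort/join approach above avoids keeping either file in memory.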

Related

Use AWK with delimiter to print specific columns

My file looks as follows:
+------------------------------------------+---------------+----------------+------------------+------------------+-----------------+
| Message | Status | Adress | Changes | Test | Calibration |
|------------------------------------------+---------------+----------------+------------------+------------------+-----------------|
| Hello World | Active | up | 1 | up | done |
| Hello Everyone Here | Passive | up | 2 | down | none |
| Hi there. My name is Eric. How are you? | Down | up | 3 | inactive | done |
+------------------------------------------+---------------+----------------+------------------+------------------+-----------------+
+----------------------------+---------------+----------------+------------------+------------------+-----------------+
| Message | Status | Adress | Changes | Test | Calibration |
|----------------------------+---------------+----------------+------------------+------------------+-----------------|
| What's up? | Active | up | 1 | up | done |
| Hi. I'm Otilia | Passive | up | 2 | down | none |
| Hi there. This is Marcus | Up | up | 3 | inactive | done |
+----------------------------+---------------+----------------+------------------+------------------+-----------------+
I want to extract a specific column using AWK.
I can use cut to do it; however, since the width of each table varies depending on how many characters are present in each column, I don't get the desired output.
cat File.txt | cut -c -44
+------------------------------------------+
| Message |
|------------------------------------------+
| Hello World |
| Hello Everyone Here |
| Hi there. My name is Eric. How are you? |
+------------------------------------------+
+----------------------------+--------------
| Message | Status
|----------------------------+--------------
| What's up? | Active
| Hi. I'm Otilia | Passive
| Hi there. This is Marcus | Up
+----------------------------+--------------
or
cat File.txt | cut -c 44-60
+---------------+
| Status |
+---------------+
| Active |
| Passive |
| Down |
+---------------+
--+--------------
| Adress
--+--------------
| up
| up
| up
--+--------------
I tried using awk, but I don't know how to specify two different delimiters so that all the lines are handled.
cat File.txt | awk 'BEGIN {FS="|";}{print $2,$3}'
Message Status
------------------------------------------+---------------+----------------+------------------+------------------+-----------------
Hello World Active
Hello Everyone Here Passive
Hi there. My name is Eric. How are you? Down
Message Status
----------------------------+---------------+----------------+------------------+------------------+-----------------
What's up? Active
Hi. I'm Otilia Passive
Hi there. This is Marcus Up
The output I'm looking for:
+------------------------------------------+
| Message |
|------------------------------------------+
| Hello World |
| Hello Everyone Here |
| Hi there. My name is Eric. How are you? |
+------------------------------------------+
+----------------------------+
| Message |
|----------------------------+
| What's up? |
| Hi. I'm Otilia |
| Hi there. This is Marcus |
+----------------------------+
or
+------------------------------------------+---------------+
| Message | Status |
|------------------------------------------+---------------+
| Hello World | Active |
| Hello Everyone Here | Passive |
| Hi there. My name is Eric. How are you? | Down |
+------------------------------------------+---------------+
+----------------------------+---------------+
| Message | Status |
|----------------------------+---------------+
| What's up? | Active |
| Hi. I'm Otilia | Passive |
| Hi there. This is Marcus | Up |
+----------------------------+---------------+
or random other columns
+------------------------------------------+----------------+------------------+
| Message | Adress | Test |
|------------------------------------------+----------------+------------------+
| Hello World | up | up |
| Hello Everyone Here | up | down |
| Hi there. My name is Eric. How are you? | up | inactive |
+------------------------------------------+----------------+------------------+
+----------------------------+---------------+------------------+
| Message |Adress | Test |
|----------------------------+---------------+------------------+
| What's up? |up | up |
| Hi. I'm Otilia |up | down |
| Hi there. This is Marcus |up | inactive |
+----------------------------+---------------+------------------+
Thanks in advance.
One idea using GNU awk:
awk -v fldlist="2,3" '
BEGIN { fldcnt=split(fldlist,fields,",") } # split fldlist into array fields[]
{ split($0,arr,/[|+]/,seps) # split current line on dual delimiters "|" and "+"
for (i=1;i<=fldcnt;i++) # loop through our array of fields (fldlist)
printf "%s%s", seps[fields[i]-1], arr[fields[i]] # print leading separator/delimiter and field
printf "%s\n", seps[fields[fldcnt]] # print trailing separator/delimiter and terminate line
}
' File.txt
NOTES:
requires GNU awk for the 4th argument to the split() function (seps == array of separators; see gawk string functions for details)
assumes our field delimiters (|, +) do not show up as part of the data
the input variable fldlist is a comma-delimited list of columns that mimics what would be passed to cut (eg, when a line starts with a delimiter then field #1 is blank)
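As a quick illustration of that 4th argument (a toy example, not from the original answer):
$ echo 'a|b+c' | gawk '{ n = split($0, arr, /[|+]/, seps); for (i = 1; i < n; i++) print arr[i], seps[i]; print arr[n] }'
a |
b +
c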
For fldlist="2,3" this generates:
+------------------------------------------+---------------+
| Message | Status |
|------------------------------------------+---------------+
| Hello World | Active |
| Hello Everyone Here | Passive |
| Hi there. My name is Eric. How are you? | Down |
+------------------------------------------+---------------+
+----------------------------+---------------+
| Message | Status |
|----------------------------+---------------+
| What's up? | Active |
| Hi. I'm Otilia | Passive |
| Hi there. This is Marcus | Up |
+----------------------------+---------------+
For fldlist="2,4,6" this generates:
+------------------------------------------+----------------+------------------+
| Message | Adress | Test |
|------------------------------------------+----------------+------------------+
| Hello World | up | up |
| Hello Everyone Here | up | down |
| Hi there. My name is Eric. How are you? | up | inactive |
+------------------------------------------+----------------+------------------+
+----------------------------+----------------+------------------+
| Message | Adress | Test |
|----------------------------+----------------+------------------+
| What's up? | up | up |
| Hi. I'm Otilia | up | down |
| Hi there. This is Marcus | up | inactive |
+----------------------------+----------------+------------------+
For fldlist="4,3,2" this generates:
+----------------+---------------+------------------------------------------+
| Adress | Status | Message |
+----------------+---------------|------------------------------------------+
| up | Active | Hello World |
| up | Passive | Hello Everyone Here |
| up | Down | Hi there. My name is Eric. How are you? |
+----------------+---------------+------------------------------------------+
+----------------+---------------+----------------------------+
| Adress | Status | Message |
+----------------+---------------|----------------------------+
| up | Active | What's up? |
| up | Passive | Hi. I'm Otilia |
| up | Up | Hi there. This is Marcus |
+----------------+---------------+----------------------------+
Say that again? (fldlist="3,3,3"):
+---------------+---------------+---------------+
| Status | Status | Status |
+---------------+---------------+---------------+
| Active | Active | Active |
| Passive | Passive | Passive |
| Down | Down | Down |
+---------------+---------------+---------------+
+---------------+---------------+---------------+
| Status | Status | Status |
+---------------+---------------+---------------+
| Active | Active | Active |
| Passive | Passive | Passive |
| Up | Up | Up |
+---------------+---------------+---------------+
And if you make the mistake of trying to print the '1st' column, ie, fldlist="1":
+
|
|
|
|
|
+
+
|
|
|
|
|
+
If GNU awk is available, please try markp-fuso's nice solution.
If not, here is a POSIX-compliant alternative:
#!/bin/bash
# define bash variables
cols=(2 3 6) # bash array of desired columns
col_list=$(IFS=,; echo "${cols[*]}") # create a csv string
awk -v cols="$col_list" '
NR==FNR {
if (match($0, /^[|+]/)) { # the record contains a table
if (match($0, /^[|+]-/)) # horizontally ruled line
n = split($0, a, /[|+]/) # split into columns
else # "cell" line
n = split($0, a, /\|/)
len = 0
for (i = 1; i < n; i++) {
len += length(a[i]) + 1 # accumulated column position
pos[FNR, i] = len
}
}
next
}
{
n = split(cols, a, /,/) # split the variable `cols` on comma into an array
for (i = 1; i <= n; i++) {
col = a[i]
if (pos[FNR, col] && pos[FNR, col+1]) {
printf("%s", substr($0, pos[FNR, col], pos[FNR, col + 1] - pos[FNR, col]))
}
}
print(substr($0, pos[FNR, col + 1], 1))
}
' file.txt file.txt
Result with cols=(2 3 6) as shown above:
+---------------+----------------+-----------------+
| Status | Adress | Calibration |
+---------------+----------------+-----------------|
| Active | up | done |
| Passive | up | none |
| Down | up | done |
+---------------+----------------+-----------------+
+---------------+----------------+-----------------+
| Status | Adress | Calibration |
+---------------+----------------+-----------------|
| Active | up | done |
| Passive | up | none |
| Up | up | done |
+---------------+----------------+-----------------+
It detects the column widths in the 1st pass, then splits each line on those column positions in the 2nd pass.
You can control the columns to print with the bash array cols, which is assigned at the beginning of the script. Please set the array to the list of desired column numbers in increasing order. If you want to use the bash variable in a different way, please let me know.
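For example, cols=(1 3 5) should print the Message, Adress and Test columns (an untested variation; note that column 1 in this script is the first cell, so the numbering is shifted by one compared to the fldlist values in the GNU awk answer above).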

Join two csv files if value is between interval in file 2

I have two CSV files that I need to join: F1 has millions of lines, F2 has thousands of lines. I need to join these files where the position in file F1 (F1.pos) falls between F2.start and F2.end. Is there any way to do this in bash? I have code for this in Python (pandas plus sqlite3) and I am looking for something quicker.
Table F1 looks like:
| name | pos |
|------ |------ |
| a | 1020 |
| b | 1200 |
| c | 1800 |
Table F2 looks like:
| interval_name | start | end |
|--------------- |------- |------ |
| int1 | 990 | 1090 |
| int2 | 1100 | 1150 |
| int3 | 500 | 2000 |
Result should look like:
| name | pos | interval_name | start | end |
|------ |------ |--------------- |------- |------ |
| a | 1020 | int1 | 990 | 1090 |
| a | 1020 | int3 | 500 | 2000 |
| b | 1200 | int1 | 990 | 1090 |
| b | 1200 | int3 | 500 | 2000 |
| c | 1800 | int3 | 500 | 2000 |
DISCLAIMER: Use dedicated/local tools if available, this is hacking:
There is an apparent error in your desired output: name b should not match int1.
$ tail -n+1 *.csv
==> f1.csv <==
name,pos
a,1020
b,1200
c,1800
==> f2.csv <==
interval_name,start,end
int1,990,1090
int2,1100,1150
int3,500,2000
$ awk -F, -vOFS=, '
BEGIN {
print "name,pos,interval_name,start,end"
PROCINFO["sorted_in"]="#ind_num_asc"
}
FNR==1 {next}
NR==FNR {Int[$1] = $2 "," $3; next}
{
for(i in Int) {
split(Int[i], I)
if($2 >= I[1] && $2 <= I[2]) print $0, i, Int[i]
}
}
' f2.csv f1.csv
Outputs:
name,pos,interval_name,start,end
a,1020,int1,990,1090
a,1020,int3,500,2000
b,1200,int3,500,2000
c,1800,int3,500,2000
This is not particularly efficient in any way; the only sorting used is to ensure that the Int array is parsed in the correct order, which changes if your sample data is not indicative of the actual schema. I would be very interested to know how my solution performs vs pandas.
Here's one in awk. It hashes the smaller file's records into arrays, and for each record of the bigger file it iterates through the hashes, so it is slow:
$ awk '
NR==FNR { # hash f2 records
start[NR]=$4
end[NR]=$6
data[NR]=substr($0,2)
next
}
FNR<=2 { # mind the front matter
print $0 data[FNR]
}
{ # check if in range and output
for(i in start)
if($4>start[i] && $4<end[i])
print $0 data[i]
}' f2 f1
Output:
| name | pos | interval_name | start | end |
|------ |------ |--------------- |------- |------ |
| a | 1020 | int1 | 990 | 1090 |
| a | 1020 | int3 | 500 | 2000 |
| b | 1200 | int3 | 500 | 2000 |
| c | 1800 | int3 | 500 | 2000 |
I doubt that a bash script would be faster than a Python script. Just don't import the files into a database – write a custom join function instead!
The best way to join depends on your input data. If nearly all F1.pos values lie inside nearly all intervals, then a naive approach would be the fastest. The naive approach in bash would look like this:
#! /bin/bash
join --header -t, -j99 F1 F2 |
sed 's/^,//' |
awk -F, 'NR>1 && $2 >= $4 && $2 <= $5'
# NR>1 is only there to skip the column headers
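The -j99 is the trick here: joining on a field that neither file has gives every line an empty join key, so join emits the full cross product of F1 and F2 (the sed strips the leading comma left by the empty key), and the awk filter then keeps only the rows where pos falls inside the interval.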
However, this will be very slow if there are only a few intersections, for instance, when the average F1.pos lies in only 5 intervals. In this case the following approach will be way faster (a rough awk sketch follows the steps below). Implement it in a programming language of your choice – bash is not appropriate for this:
Sort F1 by pos in ascending order.
Sort F2 by start and then by end in ascending order.
For each sorted file, keep a pointer to a line, starting at the first line.
Repeat until F1's pointer reaches the end:
For the current F1.pos advance F2's pointer until F1.pos ≥ F2.start.
Lock F2's pointer, but continue to read lines until F1.pos ≤ F2.end. Print the read lines in the output format name,pos,interval_name,start,end.
Advance F1's pointer by one line.
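A rough sketch of that sweep in awk (gawk in mind), assuming the headers have been stripped and the files pre-sorted as described; f1.sorted and f2.sorted are hypothetical names, and the output order within one pos is not guaranteed:
#!/bin/bash
# f2.sorted: interval_name,start,end sorted numerically by start
# f1.sorted: name,pos sorted numerically by pos
awk -F, -v OFS=, '
NR == FNR {                          # 1st file: load the (small) interval list
    iname[NR] = $1; lo[NR] = $2 + 0; hi[NR] = $3 + 0
    m = NR
    next
}
{
    pos = $2 + 0
    while (j < m && lo[j+1] <= pos)  # activate intervals whose start we have reached
        active[++j] = j
    for (i in active) {
        k = active[i]
        if (hi[k] < pos)             # interval ended before pos: retire it for good
            delete active[i]
        else                         # lo <= pos <= hi: print the joined row
            print $1, pos, iname[k], lo[k], hi[k]
    }
}
' f2.sorted f1.sorted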
Only sorting the files could actually be faster in bash. Here is a script to sort both files.
#! /bin/bash
sort -t, -n -k2 F1-without-headers > F1-sorted
sort -t, -n -k2,3 F2-without-headers > F2-sorted
Consider using LC_ALL=C, -S N%, and --parallel=N to speed up the sorting process.
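For example (illustrative values):
LC_ALL=C sort -t, -n -k2 -S 50% --parallel=4 F1-without-headers > F1-sorted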

Extract URLs (multiple lines) from texttable

My source:
+-----------+-------+----------------------+----------------------------------------------------------------------------------+
| positives | total | scan_date | url |
+===========+=======+======================+==================================================================================+
| 4 | 65 | 2015-09-21 23:29:33 | http://thebackpack.fr/wp-content/themes/salient/wpbakery/js_composer/assets/lib/ |
| | | | prettyphoto/images/prettyPhoto/light_rounded/66836487162.txt |
+-----------+-------+----------------------+----------------------------------------------------------------------------------+
| 1 | 64 | 2015-09-17 19:28:50 | http://thebackpack.fr/ |
+-----------+-------+----------------------+----------------------------------------------------------------------------------+
| 1 | 64 | 2015-09-17 08:44:16 | http://thebackpack.fr/wp-content/themes/salient/wpbakery/js_composer/assets/lib/ |
| | | | prettyphoto/images/prettyPhoto/light_rounded/ |
+-----------+-------+----------------------+----------------------------------------------------------------------------------+
I would like to extract the full URLs (Full URL in one line):
hxxp://thebackpack.fr/wp-content/themes/salient/wpbakery/js_composer/assets/lib/prettyphoto/images/prettyPhoto/light_rounded/66836487162.txt
hxxp://thebackpack.fr/
hxxp://thebackpack.fr/wp-content/themes/salient/wpbakery/js_composer/assets/lib/prettyphoto/images/prettyPhoto/light_rounded/
The multi-line URLs are my problem. I tried, for example: awk '{print $9}'
Thanks in advance for your help!
You can use this awk command:
awk -F '[[:blank:]]*\\|[[:blank:]]*' 'NR<3 || NF<5{next}
$2{if (url) print url; url=$5; next}
{url=url $5}
END{print url}' file
Output:
http://thebackpack.fr/wp-content/themes/salient/wpbakery/js_composer/assets/lib/prettyphoto/images/prettyPhoto/light_rounded/66836487162.txt
http://thebackpack.fr/
http://thebackpack.fr/wp-content/themes/salient/wpbakery/js_composer/assets/lib/prettyphoto/images/prettyPhoto/light_rounded/
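The logic: only the first physical line of each record has a non-empty positives column ($2), so that line flushes the previous URL and starts a new one, while continuation lines (empty $2) just append their $5 fragment; the NR<3 || NF<5 guard skips the header and the +---+ ruler lines, which contain no pipes.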

Can't iterate over array in Bash

I need to add a new column with an (ordinal) number after the last column in my table.
Both input and output files are .CSV tables.
The incoming table has more than 500,000 lines (rows) of data and 7 columns, e.g. https://www.dropbox.com/s/g2u68fxrkttv4gq/incoming_data.csv?dl=0
Incoming CSV table (this is just an example, so "|" and "-" are here for the sake of clarity):
| id | Name |
-----------------
| 1 | Foo |
| 1 | Foo |
| 1 | Foo |
| 4242 | Baz |
| 4242 | Baz |
| 4242 | Baz |
| 4242 | Baz |
| 702131 | Xyz |
| 702131 | Xyz |
| 702131 | Xyz |
| 702131 | Xyz |
Result CSV (this is just an example, so "|" and "-" are here for the sake of clarity):
| id | Name | |
--------------------------
| 1 | Foo | 1 |
| 1 | Foo | 2 |
| 1 | Foo | 3 |
| 4242 | Baz | 1 |
| 4242 | Baz | 2 |
| 4242 | Baz | 3 |
| 4242 | Baz | 4 |
| 702131 | Xyz | 1 |
| 702131 | Xyz | 2 |
| 702131 | Xyz | 3 |
| 702131 | Xyz | 4 |
The first column is the ID, so I've tried to group all lines with the same ID and iterate over them. My script (I don't know bash scripting, to be honest):
FILE=$PWD/$1
# Delete header and extract IDs and delete non-unique values. Also change \n to ♥, because awk doesn't properly work with it.
IDS_ARRAY=$(awk -v FS="|" '{for (i=1;i<=NF;i++) if ($i=="\"") inQ=!inQ; ORS=(inQ?"♥":"\n") }1' $FILE | awk -F'|' '{if (NR!=1) {print $1}}' | awk '!seen[$0]++')
for id in $IDS_ARRAY; do
# Group $FILE by $id from $IDS_ARRAY.
cat $FILE | grep $id >> temp_mail_group.csv
ROW_GROUP=$PWD/temp_mail_group.csv
# Add a number after each row.
# NF+1 — add a column after last existing.
awk -F'|' '{$(NF+1)=++i;}1' OFS="|", $ROW_GROUP >> "numbered_mails_$(date +%Y-%m-%d).csv"
rm -f $PWD/temp_mail_group.csv
done
Right now this script works almost like I want it to, except that it thinks that (for example) IDs 2834 and 772834 are the same.
UPD: Although I marked one answer as accepted, it does not assign correct values to some groups of records with the same ID (right now I don't see a pattern).
You can do everything in a single script:
gawk 'BEGIN { FS="|"; OFS="|";}
/^-/ {print; next;}
$2 ~ /\s*id\s*/ {print $0,""; next;}
{print "", $2, $3, ++a[$2];}
'
$1 is the empty field before the first | in the input. I use an empty output column "" to get the leading |.
The trick is ++a[$2] which takes the second field in each row (= the ID column) and looks for it in the associative array a. If there is no entry, the result is 0. By pre-incrementing, we start with 1 and add 1 every time the ID reappears.
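A minimal illustration of the idiom on toy input:
$ printf 'x\nx\ny\n' | awk '{ print $1, ++seen[$1] }'
x 1
x 2
y 1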
Every time you write a loop in shell just to manipulate text you have the wrong approach. The guys who invented shell also invented awk for shell to call to manipulate text - don't disappoint them :-).
$ awk '
BEGIN{ w = 8 }
{
if (NR==1) {
val = sprintf("%*s|",w,"")
}
else if (NR==2) {
val = sprintf("%*s",w+1,"")
gsub(/ /,"-",val)
}
else {
val = sprintf(" %-*s|",w-1,++cnt[$2])
}
print $0 val
}
' file
| id | Name | |
----------------------
| 1 | Foo | 1 |
| 1 | Foo | 2 |
| 1 | Foo | 3 |
| 42 | Baz | 1 |
| 42 | Baz | 2 |
| 42 | Baz | 3 |
| 42 | Baz | 4 |
| 70 | Xyz | 1 |
| 70 | Xyz | 2 |
| 70 | Xyz | 3 |
| 70 | Xyz | 4 |
An awk way, without extending the dashed separator line over the new column.
awk 'NR>2{$0=$0 (++a[$2])"|"}1' file
output
| id | Name |
-------------
| 1 | Foo |1|
| 1 | Foo |2|
| 1 | Foo |3|
| 42 | Baz |1|
| 42 | Baz |2|
| 42 | Baz |3|
| 42 | Baz |4|
| 70 | Xyz |1|
| 70 | Xyz |2|
| 70 | Xyz |3|
| 70 | Xyz |4|
Here's a way to do it with pure Bash:
inputfile=$1
prev_id=
while IFS= read -r line ; do
    printf '%s' "$line"                          # echo the original line, no newline yet
    IFS=$'| \t\n' read t1 id name t2 <<<"$line"  # pull the id field out of the line
    if [[ $line == -* ]] ; then                  # separator row: extend the dashes
        printf '%s\n' '---------'
    elif [[ $id == 'id' ]] ; then                # header row: add a title for the new column
        printf ' Number |\n'
    else                                         # data row: restart the counter on each new id
        if [[ $id != "$prev_id" ]] ; then
            id_count=0
            prev_id=$id
        fi
        printf '%2d |\n' "$(( ++id_count ))"
    fi
done <"$inputfile"

shell - grep - how to get only lines that have a certain number of chars

Good morning.
I have the following lines:
1 | blah | 2 | 1993 | 86 | 0 | NA | 123 | 123
1 | blah | TheBeatles | 0 | 3058 | NA | NA | 11
And I want to get only the lines that have 7 "|" characters and the same first field.
So the output for these two lines will be nothing, but for these two lines:
1 | blah | 2 | 1993 | 86 | 0 | NA | 123
1 | blah | TheBeatles | 0 | 3058 | NA | NA | 11
The output will be "error".
I'm reading the input from a file using the following command:
grep '.*|.*|.*|.*|.*|.*|.*|.*' < $1 | sort -nbsk1 | cut -d "|" -f1 | uniq -d |
while read line2; do
echo error
done
But this implementation still prints "error" even if a line has more than 7 "|" characters.
Any suggestions?
P.S. I can assume that there is a \n at the end of each line.
For printing lines containing exactly 7 |, try:
awk -F'|' 'NF == 8' filename
If you want to use bash to count the number of | in a given line, try:
line="1 | blah | 2 | 1993 | 86 | 0 | NA | 123 | 123";
count=${line//[^|]/};
echo ${#count};
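This works because ${line//[^|]/} deletes every character that is not a |, so ${#count} is the number of pipes remaining – 8 for the first sample line.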
With grep
grep '^\([^|]*|[^|]*\)\{7\}$'
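In this BRE the unescaped | is an ordinary character, so each \(...\)\{7\} repetition matches a run of non-pipes, exactly one pipe, and another run of non-pipes; anchored with ^ and $, the pattern matches exactly the lines containing 7 pipes.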
Assuming zz.txt is:
$ cat zz.txt
1 | blah | 2 | 1993 | 86 | 0 | NA | 123 | 123
1 | blah | TheBeatles | 0 | 3058 | NA | NA | 11
$ cut -d\| -f1-8 zz.txt
above cut will give you the output you need.
I would suggest that you use awk for this job:
BEGIN { FS = " *\\| *" }
NF == 8 && $1 == "1" { print $0 }
This splits each line on the pipes (ignoring the blanks around them) and prints the lines that have exactly 7 of them and whose first field is 1. Note that awk string literals take double quotes, and == is the comparison operator here (= would be an assignment).
