Linux: Search for coincidences in four different files - bash

Scenario: Four files with 300 lines each. I want to know which lines appear in all four files, using bash only (no perl/python/ruby, please).
Quick sample
$ cat bad_domains.urlvoid
a
b
c
d
e
$ cat bad_domains.alienvault
f
g
a
c
h
$ cat bad_domains.hphosts
i
j
k
a
h
$ cat bad_domains.malwaredomain
l
b
m
f
a
j
I only want to match the "a". I tried stuff like this, but it's slow as hell:
for void in $(cat bad_domains.urlvoid)
do
    for vault in $(cat bad_domains.alienvault)
    do
        for hphosts in $(cat bad_domains.hphosts)
        do
            for malwaredomain in $(cat bad_domains.malwaredomain)
            do
                if [ $void == $vault -a $void == $hphosts -a $void == $malwaredomain -a $vault == $hphosts -a $vault == $malwaredomain -a $hphosts == $malwaredomain ]
                then
                    echo $void
                fi
            done
        done
    done
done
Any good tips for optimizing my code? I read something about dichotomic (binary) search that might work here.

Using comm:
comm -12 <(awk 'FNR==NR{a[$0];next} $0 in a' f1 f2) <(awk 'FNR==NR{a[$0];next} $0 in a' f3 f4)
a
Which works using these 3 steps:
Get common strings from file1 and file2
Get common strings from file3 and file4
Get common strings from the above 2 steps, thus getting the intersection of all 4 sets
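With the question's sample files, the two intermediate steps produce:
$ awk 'FNR==NR{a[$0];next} $0 in a' bad_domains.urlvoid bad_domains.alienvault
a
c
$ awk 'FNR==NR{a[$0];next} $0 in a' bad_domains.hphosts bad_domains.malwaredomain
a
j
Both intermediate results happen to come out sorted here, which comm requires; comm -12 of the two then yields the single common line "a".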
EDIT: Pure awk solution:
awk 'FNR==NR{a[$0];next} $0 in a' <(awk 'FNR==NR{a[$0];next} $0 in a' f1 f2) <(awk 'FNR==NR{a[$0];next} $0 in a' f3 f4)

If the lines are unique within each file:
cat file1 file2 file3 file4 | sort | uniq -c | grep '^ *4 '
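If lines can repeat within a single file, the count can reach 4 without the line being in all four files; de-duplicating each file first avoids that (a sketch using the question's filenames):
cat <(sort -u bad_domains.urlvoid) <(sort -u bad_domains.alienvault) \
    <(sort -u bad_domains.hphosts) <(sort -u bad_domains.malwaredomain) |
    sort | uniq -c | grep '^ *4 '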

For bash 4.x (and ksh93):
Create an associative array indexed by the lines of one of the files (master).
For each of the remaining files, create a second array (work) indexed by the file's lines, then iterate over the master array, removing any entry with a key which does not also appear in the work array.
Any keys left in master[] after processing must have been in all files.
list=( bad_domains.* )
typeset -A master
while IFS= read -r key ; do master[$key]=1 ; done < "${list[0]}"
unset 'list[0]'
for file in "${list[@]}" ; do
    typeset -A work
    while IFS= read -r key ; do work[$key]=1 ; done < "$file"
    for key in "${!master[@]}" ; do [[ ${work[$key]+set} = set ]] || unset "master[$key]" ; done
    unset work
done
for key in "${!master[@]}" ; do printf '%s\n' "$key" ; done

Related

How to replace a match with an entire file in BASH?

I have a line like this:
INPUT file1
How can I get bash to read that line and directly copy in the contents of "file1.txt" in place of that line? Or, if it sees INPUT file2 on a line, put in "file2.txt", etc.
The best I can do is a lot of tr commands to paste the file together, but that seems an overly complicated solution.
'sed' also replaces lines with strings, but I don't know how to input the entire content of a file, which can be hundreds of lines, into the replacement.
Seems pretty straightforward with awk. You may want to handle errors differently/more gracefully, but:
$ cat file1
Line 1 of file 1
$ cat file2
Line 1 of file 2
$ cat input
This is some content
INPUT file1
This is more content
INPUT file2
This file does not exist
INPUT file3
$ awk '$1=="INPUT" {system("cat " $2); next}1' input
This is some content
Line 1 of file 1
This is more content
Line 1 of file 2
This file does not exist
cat: file3: No such file or directory
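If spawning a shell per INPUT line is a concern, the same idea can be done with awk's getline instead of system() (a sketch; note that a missing file is then skipped silently rather than producing cat's error message):
awk '$1=="INPUT" { while ((getline line < $2) > 0) print line; close($2); next } 1' input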
A perl one-liner, using the CPAN module Path::Tiny
perl -MPath::Tiny -pe 's/INPUT (\w+)/path("$1.txt")->slurp/e' input_file
use perl -i -M... to edit the file in-place.
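Putting the two together, the in-place variant is:
perl -i -MPath::Tiny -pe 's/INPUT (\w+)/path("$1.txt")->slurp/e' input_file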
Not the most efficient possible way, but as an exercise I made a file to edit named x and a couple of input sources named t1 & t2.
$: cat x
a
INPUT t2
b
INPUT t1
c
$: while read k f;do sed -ni "/$k $f/!p; /$k $f/r $f" x;done< <( grep INPUT x )
$: cat x
a
here's

==> t2
b
this
is

file ==> t1
c
Yes, the blank lines were in the INPUT files.
This will sed your base file repeatedly, though.
The awk solution given is better, as it only reads through it once.
If you want to do this in pure Bash, here's an example:
#!/usr/bin/env bash

if (( $# < 1 )); then
    echo "Usage: ${0##*/} FILE..."
    exit 2
fi

for file; do
    readarray -t lines < "${file}"
    for line in "${lines[@]}"; do
        if [[ "${line}" == "INPUT "* ]]; then
            cat "${line#"INPUT "}"
            continue
        fi
        echo "${line}"
    done > "${file}"
done
Save to file and run like this: ./script.sh input.txt (where input.txt is a file containing text mixed with INPUT <file> statements).
A sed solution similar to the awk one given earlier (this relies on GNU sed's e modifier, which executes the substitution result as a shell command):
$ cat f
test1
INPUT f1
test2
INPUT f2
test3
$ cat f1
new string 1
$ cat f2
new string 2
$ sed 's/INPUT \(.*\)/cat \1/e' f
test1
new string 1
test2
new string 2
test3
Bash variant:
while read -r line; do
    [[ $line =~ INPUT.* ]] && { tmp=($BASH_REMATCH); cat "${tmp[1]}"; } || echo "$line"
done < f

How to pull data from a VCF table

I have two files:
SCR_location - which has SNP locations in ascending order:
19687
36075
n...
modi_VCF - a VCF table that has information about every SNP:
19687 G A xxx:255,0,195 xxx:255,0,206
20398 G C 0/0:0,255,255 0/0:0,208,255
n...
I want to save just the lines with a matching SNP location into a new file.
I wrote the following script, but it doesn't work:
cat SCR_location | while read SCR_l; do
    cat modi_VCF | while read line; do
        if [ "$SCR_l" -eq "$line" ]; then
            echo "$line" >> file
        else :
        fi
    done
done
Would you please try a bash solution:
declare -A seen
while read -r line; do
    seen[$line]=1
done < SCR_location

while read -r line; do
    read -ra ary <<< "$line"
    if [[ ${seen[${ary[0]}]} ]]; then
        echo "$line"
    fi
done < modi_VCF > file
It first iterates over SCR_location and stores the SNP locations in an associative array, seen.
Next it scans modi_VCF and, if the 1st-column value is found in the associative array, prints the line.
If awk is your option, you can also say:
awk 'NR==FNR {seen[$1]++; next} {if (seen[$1]) print}' SCR_location modi_VCF > file
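Equivalently, and a bit more tersely, the membership test can serve as the pattern itself:
awk 'NR==FNR {seen[$1]; next} $1 in seen' SCR_location modi_VCF > file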
[Edit]
In order to filter out the unmatched lines, just negate the logic:
awk 'NR==FNR {seen[$1]++; next} {if (!seen[$1]) print}' SCR_location modi_VCF > file_unmatched
The code above outputs the unmatched lines only. If you want to separate the matched and unmatched lines at once, please try:
awk 'NR==FNR {seen[$1]++; next} {if (seen[$1]) {print >> "file_matched"} else {print >> "file_unmatched"} }' SCR_location modi_VCF
Hope this helps.

How to get values from one file that fall in a list of ranges from another file

I have a bunch of files with sorted numerical values, for example:
cat tag_1_file.val
234
551
626
cat tag_2_file.val
12
1023
1099
etc.
And one file with tags and value ranges that fit my needs. Values are sorted first by tag, then by 2nd column, then by 3rd. Ranges may overlap.
cat ranges.val
tag_1 200 300
tag_1 600 635
tag_2 421 443
and so on.
So I try to loop through the file with ranges and then, for every line, look for all values that fall in the range in the file with the appropriate tag:
cat ~/blahblah/ranges.val | while read -a line; do
    #read line as array
    cat ~/blahblah/${line[0]}_file.val | while read number; do
        #get tag name and cat the appropriate file
        if [[ "$number" -ge "${line[1]}" ]] && [[ "$number" -le "${line[2]}" ]]; then
            #check if the current value falls into the range
            echo $number >> ${line[0]}.output
            #toss the value that falls into the interval into another file
        elif [[ "$number" -gt "${line[2]}" ]]; then
            break
        fi
    done
done
But these two nested while loops are deadly slow with huge files containing 100M+ lines.
I think there must be a more efficient way of doing such things, and I'd be grateful for any hint.
UPD: The expected output based on this example is:
cat tag_1.output
234
626
Have you tried recoding the inner loop in something more efficient than Bash? Perl would probably be good enough:
while read tag low hi; do
    perl -nle "print if \$_ >= ${low} && \$_ <= ${hi}" \
        <${tag}_file.val >>${tag}.output
done <ranges.val
The behaviour of this version is slightly different in two ways: the loop doesn't bail out once the high point is reached, and the output file is created even if it is empty. Over to you if that isn't what you want!
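Since the question says the .val files are sorted, the early exit can be restored with last (a sketch in the same style):
while read tag low hi; do
    perl -nle "print if \$_ >= ${low} && \$_ <= ${hi}; last if \$_ > ${hi}" \
        <"${tag}_file.val" >>"${tag}.output"
done <ranges.val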
Another, not-so-efficient implementation with awk:
$ awk 'NR==FNR {t[NR]=$1; s[NR]=$2; e[NR]=$3; next}
{for(k in t)
if(t[k]==FILENAME) {
inout = t[k] "." ((s[k]<=$1 && $1<=e[k])?"in":"out");
print > inout;
next}}' ranges tag_1 tag_2
$ head tag_?.*
==> tag_1.in <==
234
==> tag_1.out <==
551
626
==> tag_2.out <==
12
1023
1099
Note that I renamed the files to match the tag names; otherwise you would have to add tag extraction from the filenames. The suffix ".in" marks values inside a range and ".out" those outside. This depends on the sorted order of the files. If you have thousands of tag files, adding another layer to filter the ranges per tag will speed it up, as sketched below; as written, it iterates over all ranges for every line.
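Here is a sketch of that per-tag layer, keeping the original <tag>_file.val names and deriving the tag from FILENAME (that naming is an assumption):
awk 'NR==FNR {n[$1]++; s[$1,n[$1]]=$2; e[$1,n[$1]]=$3; next}
     {tag=FILENAME; sub(/_file\.val$/, "", tag)      # assumes files are named <tag>_file.val
      for (k=1; k<=n[tag]; k++)
          if (s[tag,k] <= $1 && $1 <= e[tag,k]) { print > (tag ".output"); next }
     }' ranges.val tag_1_file.val tag_2_file.val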
I'd write:
while read -u3 -r tag start end; do
    f="${tag}_file.val"
    if [[ -r $f ]]; then
        while read -u4 -r num; do
            (( start <= num && num <= end )) && echo "$num"
        done 4< "$f"
    fi
done 3< ranges.val
I'm deliberately reading the files on separate file descriptors, otherwise the inner while-read loop will also slurp up the rest of "ranges.val".
bash while-read loops are very slow. I'll be back in a few minutes with an alternate solution.
Here's a GNU awk answer (requires, I believe, a fairly recent version):
gawk '
@load "filefuncs"
function read_file(tag, start, end,    file, number, statdata) {
    file = tag "_file.val"
    if (stat(file, statdata) != -1) {
        while ((getline number < file) > 0) {
            if (start <= number && number <= end) print number
        }
        close(file)
    }
}
{ read_file($1, $2, $3) }
' ranges.val
perl:
perl -Mautodie -ane '
    $file = $F[0] . "_file.val";
    next unless -r $file;
    open $fh, "<", $file;
    while ($num = <$fh>) {
        print $num if $F[1] <= $num and $num <= $F[2]
    }
    close $fh;
' ranges.val
I have a solution for you from bioinformatics:
We have a format and a tool for this kind of task.
The format, called .bed, is used to describe ranges on chromosomes, but it should work with your tags too.
The best toolset for this format is bedtools, which is lightning fast.
The specific tool that might help you is intersect.
With it installed, it becomes a task of formatting the data for the tool:
#!/bin/bash
#reformatting your positions to .bed format:
#1 adding the tag to each line
#2 repeating the position to make it a range
#3 converting to tab-separation
awk -F $'\t' 'BEGIN {OFS = FS} {print FILENAME, $0, $0}' *_file.val | sed 's/_file.val//g' >all_positions_in_one_range_file.bed

#making your range file tab-separated
sed 's/ /\t/g' ranges.val >ranges_with_tab.bed

#doing the real comparison of the ranges with bedtools
bedtools intersect -a all_positions_in_one_range_file.bed -b ranges_with_tab.bed >all_positions_intersected.bed

#splitting the one result file back into files named by your tag
awk -F $'\t' '{print $2 >$1".out"}' all_positions_intersected.bed
Or, if you prefer one-liners:
bedtools intersect -a <(awk -F $'\t' 'BEGIN {OFS = FS} {print FILENAME, $0, $0}' *_file.val | sed 's/_file.val//g') -b <(sed 's/ /\t/g' ranges.val) | awk -F $'\t' '{print $2 >$1".out"}'

Take two at a time in a bash "for file in $list" construct

I have a list of files where two subsequent ones always belong together. I would like a for loop to extract two files from this list per iteration and then work on those two files together (as an example, let's say I just want to concatenate, i.e. cat, the two files).
In a simple case, my list of files is this:
FILES="file1_mateA.txt file1_mateB.txt file2_mateA.txt file2_mateB.txt"
I could hack around it and say
FILES="file1 file2"
for file in $FILES
do
    actual_mateA=${file}_mateA.txt
    actual_mateB=${file}_mateB.txt
    cat $actual_mateA $actual_mateB
done
But I would like to be able to handle lists where mate A and mate B have arbitrary names, e.g.:
FILES="first_file_first_mate.txt first_file_second_mate.txt file2_mate1.txt file2_mate2.txt"
Is there a way to extract two values out of $FILES per iteration?
Use an array for the list:
files=(fileA1 fileA2 fileB1 fileB2)
for (( i=0; i<${#files[@]}; i+=2 )); do
    echo "${files[i]}" "${files[i+1]}"
done
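Applied to the question's arbitrary names (splitting the string into an array first; this assumes none of the names contain whitespace):
FILES="first_file_first_mate.txt first_file_second_mate.txt file2_mate1.txt file2_mate2.txt"
files=($FILES)                       # word-split the string into an array
for (( i=0; i<${#files[@]}; i+=2 )); do
    cat "${files[i]}" "${files[i+1]}"
done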
You could read the values in a while loop and use xargs to restrict each read operation to two tokens.
files="fileA1 fileA2 fileB1 fileB2"
while read -r a b; do
    echo $a $b
done < <(echo $files | xargs -n2)
You could use xargs(1), e.g.
ls -1 *.txt | xargs -n2 COMMAND
The switch -n2 lets xargs select 2 consecutive filenames from the pipe output, which are handed down to the COMMAND.
To concatenate the 10 files file01.txt ... file10.txt pairwise
one can use
ls *.txt | xargs -n2 sh -c 'cat $@ > $1.$2.joined' dummy
to get the 5 result files
file01.txt.file02.txt.joined
file03.txt.file04.txt.joined
file05.txt.file06.txt.joined
file07.txt.file08.txt.joined
file09.txt.file10.txt.joined
Please see 'info xargs' for an explanation.
How about this:
park=''
for file in $files # wherever you get them from, maybe $(ls) or whatever
do
    if [ "$park" = '' ]
    then
        park=$file
    else
        process "$park" "$file"
        park=''
    fi
done
In each odd iteration it just stores the value (in park) and in each even iteration it then uses the stored and the current value.
Seems like one of those things awk is suited for:
$ awk '{for (i = 1; i <= NF; i+=2) if( i+1 <= NF ) print $i " " $(i+1) }' <<< "$FILES"
file1_mateA.txt file1_mateB.txt
file2_mateA.txt file2_mateB.txt
You could then loop over it by setting IFS=$'\n'
e.g.
#!/bin/bash
FILES="file1_mateA.txt file1_mateB.txt file2_mateA.txt file2_mateB.txt file3_mat
input=$(awk '{for (i = 1; i <= NF; i+=2) if( i+1 <= NF ) print $i " " $(i+1) }'
IFS=$'\n'
for set in $input; do
    IFS=' ' read -r a b <<< "$set"   # split the pair on its internal space
    cat "$a" "$b" # or something
done
Which will try to do
$ cat file1_mateA.txt file1_mateB.txt
$ cat file2_mateA.txt file2_mateB.txt
And ignore the odd case without the match.
You can transform your string into an array and read the new array element by element:
#!/bin/bash
string="first_file_first_mate.txt first_file_second_mate.txt file2_mate1.txt file2_mate2.txt"
array=(${string})
size=${#array[*]}
idx=0
while [ "$idx" -lt "$size" ]
do
echo ${array[$idx]}
echo ${array[$(($idx+1))]}
let "idx=$idx+2"
done
If your string uses a delimiter other than space (e.g. ;), you can use the following transformation to an array:
array=(${string//;/ })
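For example, with a semicolon-delimited string (illustrative names):
string="a.txt;b.txt;c.txt;d.txt"
array=(${string//;/ })               # replace every ';' with a space, then word-split
echo "${array[0]}" "${array[1]}"     # -> a.txt b.txt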
You could try something like this:
echo file1 file2 file3 file4 | while read -d ' ' a; do read -d ' ' b; echo $a $b; done
file1 file2
file3 file4
Or this somewhat cumbersome technique:
echo file1 file2 file3 file4 |tr " " "\n" | while :;do read a || break; read b || break; echo $a $b; done
file1 file2
file3 file4

diff two batches of files

I would like to diff two batches of files. If I simply put them in two different directories and diff by directory, the comparisons will be alphabetical, which I do not want.
Another approach would be to list files in text1.txt and list files in text2.txt:
text1:
a1
b1
c1
text2:
c2
a2
b2
How can I approach this such that my loop will be:
diff a1 c2
diff b1 a2
diff b2 c1
You can use paste to join the two files, then a bash loop to process each pair:
paste text1 text2 | while read file1 file2; do diff "$file1" "$file2"; done
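If any of the listed filenames contain spaces, split on the tab that paste inserts between the two columns (a small hardening of the same idea):
paste text1 text2 | while IFS=$'\t' read -r file1 file2; do
    diff "$file1" "$file2"
done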
In bash, you can use the -u flag on read to read from a different fd. This allows you to read from two files in parallel:
while read -r -u3 file1 && read -r -u4 file2; do
    diff "$file1" "$file2"
done 3<file1.txt 4<file2.txt
Another solution:
#!/bin/bash

file1="..."
file2="..."

getSize(){
    wc -l "$1" | cut -d " " -f1
}

getValueFromLineNumber(){
    sed -n "$1p" "$2"
}

diffFromLineNumber(){
    f1=$(getValueFromLineNumber "$1" "$file1")
    f2=$(getValueFromLineNumber "$1" "$file2")
    diff "$f1" "$f2"
}

# get min size
s1=$(getSize "$file1")
s2=$(getSize "$file2")
[[ "$s1" -le "$s2" ]] && min=$s1 || min=$s2

for (( i=1 ; i <= min ; i++ )); do
    diffFromLineNumber "$i"
done
This solution takes care of the case where the two files don't have the same number of lines.
