I'm trying to figure out a Makefile problem I've got. I start with a lot of files in a telemetry/ directory, and for each one I need to create a corresponding file in a features/ directory.
The list of filenames in the telemetry/ directory is cached in a filelist file, and I define an allfeats target to encompass all the file-level targets. Except the allfeats target doesn't actually work.
My Makefile (heavily trimmed to just show this issue) looks like this:
MYSAMP := $(shell cat filelist)

allfeats: $(patsubst %,features/%-feat.rds,$(MYSAMP))
	@echo done

features/%-feat.rds: telemetry/%
	Rscript -e 'saveRDS(process("$<"), "$@")'

print-%:
	@echo $* = $($*)
But something about the timing of variable propagation, I think, isn't letting me chain rules the way I intend:
% make -n allfeats
make: *** No rule to make target `features/709731-feat.rds', needed by `allfeats'. Stop.
It does actually know how to create that target if I specify it explicitly:
% make -n features/709731-feat.rds
Rscript -e 'saveRDS(process("telemetry/709731"), "features/709731-feat.rds")'
Is there a different way to define my rules (or variables) that will work as intended?
I figured out the problem. I had generated the filelist file by doing this:
% ls telemetry > filelist
But I forgot that my ls is aliased like so:
% which ls
ls='ls -FG --color --hide="NTUSER.DAT*"'
So I had the coloration escape sequences in the file too, which were invisible to the naked eye, but reveal themselves in an octal dump:
% head filelist | od -a
0000000 esc [ 0 m esc [ 0 m 7 0 9 7 3 1 esc [
0000020 0 m nl esc [ 0 m 8 0 0 3 3 3 esc [ 0
0000040 m nl esc [ 0 m 8 0 0 3 3 4 esc [ 0 m
0000060 nl esc [ 0 m 8 0 0 3 3 5 esc [ 0 m nl
0000100 esc [ 0 m 8 0 0 8 3 2 esc [ 0 m nl esc
0000120 [ 0 m 8 0 0 8 3 3 esc [ 0 m nl esc [
0000140 0 m 8 0 0 8 3 4 esc [ 0 m nl esc [ 0
0000160 m 8 0 0 8 3 6 esc [ 0 m nl esc [ 0 m
0000200 8 0 0 8 4 8 esc [ 0 m nl esc [ 0 m 8
0000220 0 2 0 3 1 esc [ 0 m nl
Once I cleaned those out, the Makefile chaining works as expected.
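For the record, two ways to avoid this in the future (the sed variant assumes GNU sed, which understands \x1b): regenerate the list with the alias bypassed, or strip the color escape sequences from the existing file:
% command ls telemetry > filelist
% sed -i 's/\x1b\[[0-9;]*m//g' filelist
The command builtin skips alias expansion, so plain ls output lands in the file.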
I have a file that looks like this:
t # 3-7, 1
v 0 104
v 1 92
v 2 95
u 0 1 2
u 0 2 2
u 1 2 2
t # 3-8, 1
v 0 94
v 1 13
v 2 19
v 3 5
u 0 1 2
u 0 2 2
u 0 3 2
t # 3-9, 1
v 0 94
v 1 13
v 2 19
v 3 7
u 0 1 2
u 0 2 2
u 0 3 2
t corresponds to the header of each block.
I would like to extract multiple patterns from the file and output the transactions (blocks) that contain all of the required patterns together.
I tried the following code:
ps | grep -e 't\|u 0 1 2' file.txt
and it works well to extract the headers and the pattern 'u 0 1 2'. However, when I add one more pattern, the output lists only the headers starting with t #. My modified code looks like this:
ps | grep -e 't\|u 0 1 2 && u 0 2 2' file.txt
I tried sed and awk solutions, but they did not work for me either.
Thank you for your help!
Olha
Use | as the separator before the third alternative, just like the second alternative.
grep -E 't|u 0 1 2|u 0 2 2' file.txt
Also, it doesn't make sense to specify a filename and also pipe ps to grep. If you provide filename arguments, it doesn't read from the pipe (unless you use - as a filename).
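For example:
ps | grep -E 't|u 0 1 2|u 0 2 2' file.txt    # searches only file.txt; the pipe is ignored
ps | grep -E 't|u 0 1 2|u 0 2 2' file.txt -  # the extra - makes grep read the pipe too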
You can use grep with multiple -e expressions to grep for more than one thing at a time:
$ printf '%d\n' {0..10} | grep -e '0' -e '5'
0
5
10
Expanding on @kojiro's answer, you'll want to use an array to collect the arguments:
mapfile -t lines < file.txt
for line in "${lines[@]}"
do
    arguments+=(-e "$line")
done
grep "${arguments[@]}"
You'll probably need a condition within the loop to check whether the line is one you want to search for, but that's it.
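None of the grep invocations above keeps a transaction's lines together, though. If the goal is to print only the blocks that contain all of the required patterns, a minimal awk sketch (my own, assuming every block starts with a 't' header line as in the sample) could look like this:
awk '
    function flush() { if (seen1 && seen2) printf "%s", block }
    /^t / { flush(); block = ""; seen1 = seen2 = 0 }   # new block starts
    { block = block $0 ORS }                            # buffer every line
    $0 == "u 0 1 2" { seen1 = 1 }
    $0 == "u 0 2 2" { seen2 = 1 }
    END { flush() }
' file.txt
Each block is buffered and flushed only if both patterns were seen, so whole transactions are printed or suppressed as a unit.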
I have a large text file that I would like to filter by excluding lines that have a certain number of columns matching a certain character. I had previously removed lines where all columns from 2 onwards contained a 0 or a ., like so:
awk '{
    for (i=2; i<=NF; i++)
        if ($i !~ /^(\.|0)/) {
            print
            break
        }
}'
but now I would like to print only the lines that have fewer than a specific number of columns with this value (".").
For example with data:
A B C D E
0 1 . 0 0
1 ./. 0 1 1
1 1 0 0 0
0 0 . . 0
. ./. . . .
and a match value of 2 I would expect the bottom two lines to be excluded so that the output would be:
A B C D E
0 1 . 0 0
1 ./. 0 1 1
1 1 0 0 0
Any ideas?
With awk:
$ awk '{c=0;for(i=1;i<=NF;i++) c += ($i == ".")}c<2' file
A B C D E
0 1 . 0 0
1 ./. 0 1 1
1 1 0 0 0
Basically it iterates over each column and adds one to the counter if the column equals a period (.).
The c<2 part will only print the line if there are fewer than two columns with periods.
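For example, counting the lone periods on the last sample line ("./." does not compare equal to "."):
$ echo '. ./. . . .' | awk '{c=0;for(i=1;i<=NF;i++) c += ($i == ".")} {print c}'
4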
With sed one can use:
$ sed -r 'h;s/[^. ]+//g;s/\.\. *//g;/\. \./d;x' file
A B C D E
0 1 . 0 0
1 ./. 0 1 1
1 1 0 0 0
-r enables extended regular expressions (-E on *BSD).
Basically a copy of the pattern space is stored in the hold buffer, then everything except spaces and periods is removed.
Now, if the pattern space contains two separate periods, the line is deleted; if not, the pattern space is exchanged with the hold buffer so the original line is printed.
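Running just the two substitutions over one of the deleted lines shows the reduced pattern space that the /\. \./ address then matches:
$ echo '0 0 . . 0' | sed -r 'h;s/[^. ]+//g;s/\.\. *//g'
  . .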
$ awk '{delete a; for(i=1;i<=NF;i++) a[$i]++; if(a["."]>=2) next} 1' foo
A B C D E
0 1 . 0 0
1 ./. 0 1 1
1 1 0 0 0
It iterates over all the fields (for), counts the field values, and if a record has 2 or more . fields, refrains from printing it (next). If you want to count the periods only from field 3 onward, change the start value of i in the for loop: for(i=3; ...).
$ cat ip.txt
A B C D E
0 1 . 0 0
1 ./. 0 1 1
1 1 0 0 0
0 0 . . 0
. ./. . . .
$ perl -ne '(@c)=/\.\/\.|\./g; print if $#c < 1' ip.txt
A B C D E
0 1 . 0 0
1 ./. 0 1 1
1 1 0 0 0
(@c)=/\.\/\.|\./g builds an array of the ./. or . matches from the current line
$#c indicates the index of the last element, i.e. (size of array - 1)
So, to ignore lines containing 3 or more elements like ./. or ., use $#c < 2
Similar to @spasic's answer, but easier (for me) to read!
perl -ane 'print if (grep { /^\.$/} @F) < 2' file
A B C D E
0 1 . 0 0
1 ./. 0 1 1
1 1 0 0 0
The -a separates the space-separated fields into an array called @F for me. I then grep in the array @F looking for elements that consist of just a period, i.e. those that start with a period and end immediately after the period. That counts the lone periods in each line and I print the line if that number is less than 2.
Perhaps this is alright.
awk '$0 !~/\. \./' file
A B C D E
0 1 . 0 0
1 ./. 0 1 1
1 1 0 0 0
My data(tab separated):
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 0 0 0
...
How can I grep the lines with exactly, for example, five '1's?
ideal output:
1 0 0 1 0 1 1 0 1
Also, how can I grep lines with five or more (>=5) '1's?
ideal output:
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
I tried
grep 1$'\t'1$'\t'1$'\t'1$'\t'1
However, this only matches consecutive '1's, which is not all I want.
I wonder if there is any simple method to achieve this. Thank you!
John Bollinger's helpful answer and anishane's answer show that it can be done with grep, but, as has been noted, that is quite cumbersome, given that regular expressions aren't designed for counting.
awk, by contrast, is built for field-based parsing and counting (often combined with regular expressions to identify field separators, or, as below, the fields themselves).
Assuming you have GNU awk, you can use the following:
Exactly 5 1s:
awk -v FPAT='\\<1\\>' 'NF==5' file
5 or more 1s:
awk -v FPAT='\\<1\\>' 'NF>=5' file
Special variable FPAT is a GNU awk extension that allows you to identify fields via a regex that describes the fields themselves, in contrast with the standard approach of using a regex to define the separators between fields (via special variable FS or option -F):
'\\<1\\>' identifies any "isolated" 1 (surrounded by non-word characters) as a field, based on word-boundary assertions \< and \>; the \ must be doubled here so that the initial string parsing performed by awk doesn't "eat" single \s.
Standard variable NF contains the count of input fields in the line at hand, which allows easy numerical comparison. If the conditional evaluates to true, the input line at hand is implicitly printed (in other words: NF==5 is implicitly the same as NF==5 { print } and, more verbosely, NF==5 { print $0 }).
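For instance, applied to the first sample line, NF is exactly the number of isolated 1s:
$ printf '1\t0\t0\t1\t0\t1\t1\t0\t1\n' | gawk -v FPAT='\\<1\\>' '{ print NF }'
5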
A POSIX-compliant awk solution is a little more complicated:
Exactly 5 1s:
awk '{ l=$0; gsub("[\t0]", "") }; length($0)==5 { print l }' file
5 or more 1s:
awk '{ l=$0; gsub("[\t0]", "") }; length($0)>=5 { print l }' file
l=$0 saves the input line ($0) in its original form in variable l.
gsub("[\t0]", "") replaces all \t and 0 chars. in the input line with the empty string, i.e., effectively removes them, and only leaves (directly concatenated) 1 instances (if any).
length($0)==5 { print l } then prints the original input line (l) only if the length of the reduced line, i.e., the count of 1s on the original line, matches the specified count.
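Tracing the second sample line (which has six 1s) through those steps:
$ printf '1\t1\t0\t1\t0\t1\t0\t1\t1\n' | awk '{ l=$0; gsub("[\t0]", ""); print $0, length($0) }'
111111 6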
You can use grep. But that would be an abuse of regex.
$ cat countme
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 0 0 0
$ grep -P '^[0\t]*(1[0\t]*){5}[0\t]*$' countme # Match exactly 5
1 0 0 1 0 1 1 0 1
$ grep -P '^[0\t]*(1[0\t]*){5,}[0\t]*$' countme # Match >=5
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
You can do this to get lines with exactly five '1's:
grep '^[^1]*\(1[^1]*\)\{5,5\}[^1]*$'
You can simplify that to this for at least five '1's:
grep '\(1[^1]*\)\{5,\}'
The enumerated quantifier (\{n,m\}) enables you to conveniently specify a particular number or range of numbers of consecutive matches to a sub-pattern. To avoid matching lines with extra matches to such a pattern, however, you must also anchor it to the beginning and end of the line.
The other trick is to make sure the gaps before the first 1, between the 1s, and after the last 1 are matched. In your case, all of those gaps can be represented pretty simply as ranges of zero or more characters other than 1: [^1]*. Putting those pieces together gives you the above regular expressions.
Do
sed -nE '/^([^1]*1[^1]*){5}$/p' your_file
for exactly 5 matches and
sed -nE '/^([^1]*1[^1]*){5,}$/p' your_file
for 5 or more matches.
Note: In GNU sed you may not see the -E option in the manpage, but it is supported. Using -E is for portability to, say, Mac OSX.
with perl
$ perl -ane 'print if (grep {$_==1} @F) == 5' ip.txt
1 0 0 1 0 1 1 0 1
$ perl -ane 'print if (grep {$_==1} @F) >= 5' ip.txt
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
-a to automatically split the input line on whitespace and save the fields to the @F array
grep {$_==1} @F returns an array with the elements from @F which are exactly equal to 1
(grep {$_==1} @F) == 5 in scalar context, the comparison is done on the number of elements in the array
See http://perldoc.perl.org/perlrun.html#Command-Switches for details on -ane options
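To see the scalar-context count on its own (using the values from the first sample line):
$ perl -e '@F = (1,0,0,1,0,1,1,0,1); print scalar(grep { $_ == 1 } @F), "\n"'
5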
I wish to replace blank fields with zeros using awk, but when I use sed 's/ /0/' file I seem to replace all whitespace, when I only wish to fill in the missing data. Using awk '{print NF}' file returns different field counts (e.g. 9 and 4) because of the empty fields.
input
590073920 20120523 0 M $480746499 CM C 500081532 SP
501298333 0 M *BB
501666604 0 M *OO
90007162 7 M +178852
90007568 3 M +189182
output
590073920 20120523 0 M $480746499 CM C 500081532 SP
501298333 0 0 M *BB 0 0 0 0
501666604 0 0 M *OO 0 0 0 0
90007162 0 7 M +178852 0 0 0 0
90007568 0 3 M +189182 0 0 0 0
Using GNU awk FIELDWIDTHS feature for fixed width processing:
$ awk '{for(i=1;i<=NF;i++)if($i~/^ *$/)$i=0}1' FIELDWIDTHS="11 9 5 2 16 3 2 11 2" file | column -t
590073920 20120523 0 M $480746499 CM C 500081532 SP
501298333 0 0 M *BB 0 0 0 0
501666604 0 0 M *OO 0 0 0 0
90007162 0 7 M +178852 0 0 0 0
90007568 0 3 M +189182 0 0 0 0
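As a minimal illustration of how FIELDWIDTHS splits on fixed column widths instead of whitespace (demo.txt here is just a hypothetical one-line file):
$ echo 'abcde12345' > demo.txt
$ gawk '{ print $1 "|" $2 }' FIELDWIDTHS="5 5" demo.txt
abcde|12345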
I have two files:
file 1:
rs3094315 1 0 742429 G A
rs12124819 1 0 766409 G A
rs2272756 1 0 871896 A G
rs3128126 1 0 952073 G A
rs3934834 1 0 995669 A G
rs3766192 1 0 1007060 G A
file 2:
rs12565286 1 0 711153 C G
rs12138618 1 0 740098 A G
rs3094315 1 0 742429 G A
rs3131968 1 0 744055 A G
rs12562034 1 0 758311 A G
rs2905035 1 0 765522 A G
rs12124819 1 0 766409 G A
rs2980319 1 0 766985 A T
rs4040617 1 0 769185 G A
rs2980300 1 0 775852 T C
rs4951864 1 0 787889 C T
rs12132517 1 0 788664 A G
rs950122 1 0 836727 C G
rs2272756 1 0 871896 A G
rs3128126 1 0 952073 G A
rs3121561 1 0 980243 T C
rs3813193 1 0 988364 C G
rs4075116 1 0 993492 C T
rs3934834 1 0 995669 T C
rs3766193 1 0 1007033 C G
rs3766192 1 0 1007060 C T
rs3766191 1 0 1007450 T C
The files have many more matches in the first column after these shown here, there are about 500k lines in both files.
I'm trying to use the following command to find matches in the first column (rs####) and, if found, put the matching lines together on one line in a new file.
awk 'NF==FNR{s=$1; a[s]=$0; next} a[$1]{print $0" "a[$1]}' file1 file2 > mergedfiles
However, this command only gives 1 match (shown below) in mergedfiles and I just can't figure out what is going wrong. It's probably something really easy :s. Thanks in advance if you are able to clear this problem up.
rs3766192 1 0 1007060 C T rs3766192 1 0 1007060 G A
Use:
NR==FNR
Your condition, NF==FNR, is true only when the number of fields equals the line number, i.e., on the sixth line of each input (every line has 6 fields), so only a single key ever gets stored!
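With that single change, the original command works as intended:
awk 'NR==FNR{s=$1; a[s]=$0; next} a[$1]{print $0" "a[$1]}' file1 file2 > mergedfiles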