How to print a specific column while ignoring the first 10 and last 10 lines in a Unix shell

I want to print the 2nd column, but I don't want the first 10 and the last 10 lines.
awk 'NR>10' filename.txt | awk '{ print $2 }'| head --lines=-10
It didn't work for me.

What you want is:
tail -n+11 filename.txt | head -n-10 | awk '{print $2}'
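Here tail -n +11 starts output at line 11 (i.e. skips the first 10 lines), head -n -10 drops the last 10 lines (a negative count is a GNU head extension), and awk prints the second field of what remains.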
Input
$ cat lines_1-40.txt
line 1 in the file
line 2 in the file
line 3 in the file
line 4 in the file
...
line 38 in the file
line 39 in the file
line 40 in the file
Output
$ tail -n+11 lines_1-40.txt | head -n-10 | awk '{print $2}'
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
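Since head -n -10 is a GNU extension, on systems without it (e.g. BSD/macOS) a single awk can do the whole job by buffering the second column and trimming both ends itself. A minimal sketch, assuming the same filename.txt:
awk -v skip=10 '
NR>skip { buf[NR]=$2 }                                     # keep $2 for every line past the first 10
END     { for (i=skip+1; i<=NR-skip; i++) print buf[i] }   # in END, NR is the total line count
' filename.txt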

Related

distribute data in both increment and decrement order

I have a file with n rows, and I want its data distributed across 7 files in the order below.
(My input file has n rows; this is just an example.)
Input file
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
.
.
28
Output file
1 2 3 4 5 6 7
14 13 12 11 10 9 8
15 16 17 18 19 20 21
28 27 26 25 24 23 22
So if I open the first file, it should have the rows
1
14
15
28
Similarly, if I open the second file, it should have the rows
2
13
16
27
and similarly for the other files.
Can anybody please help? The code below does what is required, but not in the required order:
awk '{print > ("te1234"++c".txt");c=(NR%n)?c:0}' n=7 test6.txt
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
EDIT: Since the OP has completely changed the Input_file sample, adding this solution now; again, this is written and tested with the shown samples only.
With xargs + a single awk (the recommended one):
xargs -n7 < Input_file |
awk '
FNR%2!=0{                          # odd rows: write fields left to right
  for(i=1;i<=NF;i++){
    print $i >> (i".txt")
    close(i".txt")
  }
  next
}
FNR%2==0{                          # even rows: write fields right to left
  for(i=NF;i>0;i--){
    count++
    print $i >> (count".txt")
    close(count".txt")
  }
  count=""
}'
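If everything works as intended with the 28-line sample, the first output file should match the expected content from the question:
$ cat 1.txt
1
14
15
28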
Initial solution:
xargs -n7 < Input_file |
awk '
FNR%2==0{                          # reverse the fields of every even row
  for(i=NF;i>0;i--){
    val=(val?val OFS:"")$i
  }
  $0=val
  val=""
}
1' |
awk '
{                                  # send column i of every row to i.txt
  for(i=1;i<=NF;i++){
    print $i >> (i".txt")
    close(i".txt")
  }
}'
The above could also be done with a single awk, as in the recommended xargs + awk solution shown earlier.
Could you please try the following, written and tested with the shown samples in GNU awk.
awk '{for(i=1;i<=NF;i++){print $i >> (i".txt");close(i".txt")}}' Input_file
The output file counter could descend for each second group of seven:
awk 'FNR%n==1 {asc=!asc}
{
  out = "te1234" (asc ? ++c : c--) ".txt"
  print >> out
  close(out)
}' n=7 test6.txt
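Here asc flips at the first line of every group of n lines, so the file-name counter c climbs with ++c through odd-numbered groups and descends with c-- through even-numbered ones, producing the serpentine order the question asks for.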
$ ls
file tst.awk
$ cat tst.awk
{ rec = (cnt % 2 ? $1 sep rec : rec sep $1); sep=FS }   # prepend (reverse) on every 2nd group of n lines
!(NR%n) {
    ++cnt
    nf = split(rec,flds)
    for (i=1; i<=nf; i++) {
        out = "te1234" i ".txt"
        print flds[i] >> out
        close(out)
    }
    rec=sep=""
}
$ awk -v n=7 -f tst.awk file
$ ls
file te12342.txt te12344.txt te12346.txt tst.awk
te12341.txt te12343.txt te12345.txt te12347.txt
$ cat te12341.txt
1
14
15
28
$ cat te12342.txt
2
13
16
27
If you can have input that's not an exact multiple of n, then move the code that's currently in the !(NR%n) block into a function, and call that function both there and in an END section.
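A minimal sketch of that refactor (untested; prt is just a name chosen here for the function):
function prt(   i, nf, flds, out) {   # names after the extra spaces are locals
    ++cnt
    nf = split(rec,flds)
    for (i=1; i<=nf; i++) {
        out = "te1234" i ".txt"
        print flds[i] >> out
        close(out)
    }
    rec = sep = ""
}
{ rec = (cnt % 2 ? $1 sep rec : rec sep $1); sep=FS }
!(NR%n) { prt() }
END { if (rec != "") prt() }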
This might work for you (GNU sed & parallel):
parallel 'echo {1}~14w file{1}; echo {2}~14w file{1}' ::: {1..7} :::+ {14..8} |
sed -n -f - file &&
paste file{1..7}
Create a sed script that writes files named file1 through file7 (see the first set of parameters in the parallel command above, and also the paste command).
The sed script uses GNU sed's first~step address form, where first is the starting line number and step is the stride thereafter.
The distributed files are created first and the paste command then joins them all together to produce a single output file (tab separated by default, use paste -d option to get desired delimiter).
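For reference, the script that the parallel command generates and pipes into sed consists of pairs of write commands like these (the exact line order may vary with job completion, which doesn't matter for sed here):
1~14w file1
14~14w file1
2~14w file2
13~14w file2
...
7~14w file7
8~14w file7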
Alternative using Bash & sed:
for ((n=1,m=14;n<=7;n++,m--));do echo "$n~14w file$n";echo "$m~14w file$n";done |
sed -nf - file &&
paste file{1..7}

How to exclude lines in a file based on a range of values taken from a second file

I have a file with a list of value ranges:
2 4
6 9
13 14
and a second file that looks like this:
HiC_scaffold_1 1 26
HiC_scaffold_1 2 27
HiC_scaffold_1 3 27
HiC_scaffold_1 4 31
HiC_scaffold_1 5 34
HiC_scaffold_1 6 35
HiC_scaffold_1 7 37
HiC_scaffold_1 8 37
HiC_scaffold_1 9 38
HiC_scaffold_1 10 39
HiC_scaffold_1 11 39
HiC_scaffold_1 12 39
HiC_scaffold_1 13 39
HiC_scaffold_1 14 39
HiC_scaffold_1 15 42
and I would like to exclude rows from file 2 where the value of column 2 falls within a range defined by file 1. The ideal output would be:
HiC_scaffold_1 1 26
HiC_scaffold_1 5 34
HiC_scaffold_1 10 39
HiC_scaffold_1 11 39
HiC_scaffold_1 12 39
HiC_scaffold_1 15 42
I know how to extract a single range with awk:
awk '$2 == "2", $2 == "4"' file2.txt
but my file 1 has many range values (lines), and I need to exclude rather than extract the rows that correspond to these values.
This is one way:
$ awk '
NR==FNR {                     # first file
    min[NR]=$1                # store mins and maxes in pairs
    max[NR]=$2
    next
}
{                             # second file
    for(i in min)
        if($2>=min[i] && $2<=max[i])
            next
}1' ranges data
Output:
HiC_scaffold_1 1 26
HiC_scaffold_1 5 34
HiC_scaffold_1 10 39
HiC_scaffold_1 11 39
HiC_scaffold_1 12 39
HiC_scaffold_1 15 42
If the ranges are not huge and integer valued but the data is huge, you could make an exclude map of the values to speed up comparing:
$ awk '
NR==FNR {                         # ranges file
    for(i=$1;i<=$2;ex[i++]);      # each value in the range goes into the exclude hash
    next
}
!($2 in ex)' ranges data          # print only if $2 is not in the ex hash
If your ranges aren't huge:
$ cat tst.awk
NR==FNR {
    for (i=$1; i<=$2; i++) {
        bad[i]
    }
    next
}
!($2 in bad)
$ awk -f tst.awk file1 file2
HiC_scaffold_1 1 26
HiC_scaffold_1 5 34
HiC_scaffold_1 10 39
HiC_scaffold_1 11 39
HiC_scaffold_1 12 39
HiC_scaffold_1 15 42
sedception
If the second column of file2.txt always equals the index of its line, you can use sed to prune the lines. If this is not your case, please refer to the awkception paragraph.
sed $(sed 's/^\([0-9]*\)[[:space:]]*\([0-9]*\)/-e \1,\2d/' file1.txt) file2.txt
Where file1.txt contains your ranges and file2.txt is the data itself.
Basically it constructs a sed call that chains a list of -e i,jd expressions, meaning that it will delete lines between the ith line and the jth line.
In your example sed 's/^\([0-9]*\)[[:space:]]*\([0-9]*\)/-e \1,\2d/' file1.txt would output -e 2,4d -e 6,9d -e 13,14d which is the list of expressions for calling sed on file2.txt.
In the end it will call:
sed -e 2,4d -e 6,9d -e 13,14d file2.txt
This command deletes all lines between the 2nd and the 4th, and all lines between the 6th and the 9th, and all lines between the 13th and the 14th.
Obviously it does not work if the second column of file2.txt does not match the index of its own line.
awkception
awk "{$(awk '{printf "if ($2>=%d && $2<=%d) next\n", $1, $2}' file1.txt)}1" file2.txt
This solution works even if the second column does not match the index of its line.
The method uses awk to create an awk program, just like sed created sed expressions in the sedception solution.
In the end this will call:
awk '{
if ($2>=2 && $2<=4) next
if ($2>=6 && $2<=9) next
if ($2>=13 && $2<=14) next
}1' file2.txt
It should be noted that this solution is significantly slower than sed.

Is there a way to print lines from a file from n to m and then reverse their positions?

I'm trying to print the text from line 10 to line 20 and then reverse the order of those lines.
I've tried this:
sed '10!G;h;$!d' file.txt
But it only prints from line 10 to the end of the file. Is there any way to stop it at line 20 using only one sed command?
Almost there; you just need to replace $!d with the 'until' line number:
sed -n '10,20p' tst.txt
# prints lines 10 through 20
sed -n '10!G;h;20p' tst.txt
# prints lines 10 through 20 in reverse
output:
20
19
18
17
16
15
14
13
12
11
10
tst.txt:
1
2
3
4
...
19
20
Info
You can use this to print a range of lines:
sed -n -e 10,20p file.txt | tac
tac will reverse the order of the lines
And for those of you without tac (such as macOS users):
sed -n -e 10,20p file.txt | tail -r
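If you have neither tac nor tail -r, a single awk can do both the range selection and the reversal; a minimal sketch:
awk 'NR>=10 && NR<=20 { buf[++n]=$0 }              # collect lines 10 through 20
     NR>20 { exit }                                # no need to read further
     END   { for (i=n; i>0; i--) print buf[i] }    # print them in reverse
' file.txt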

shell script for extracting lines of a file using awk

I want the selected lines of the file to be printed in the output file side by side, separated by spaces. Here is what I have done so far:
for file in SAC*
do
awk 'FNR==2 {print $4}' $file >>exp
awk 'FNR==3 {print $4}' $file >>exp
awk 'FNR==4 {print $4}' $file >>exp
awk 'FNR==5 {print $4}' $file >>exp
awk 'FNR==7 {print $4}' $file >>exp
awk 'FNR==8 {print $4}' $file >>exp
awk 'FNR==24 {print $0}' $file >>exp
done
My output is:
XV
AMPY
BHZ
2012-08-15T08:00:00
2013-12-31T23:59:59
I want output should be
XV AMPY BHZ 2012-08-15T08:00:00 2013-12-31T23:59:59
First the test data (only 9 rows, though):
$ cat file
1 2 3 14
1 2 3 24
1 2 3 34
1 2 3 44
1 2 3 54
1 2 3 64
1 2 3 74
1 2 3 84
1 2 3 94
Then the awk. No need for that for loop in the shell; awk can handle multiple files:
$ awk '
BEGIN {
    ORS=" "
    a[2];a[3];a[4];a[5];a[7];a[8]   # list of records for which $4 should be output
}
FNR in a { print $4 }               # output the $4s
FNR==9   { printf "%s\n",$0 }       # replace 9 with 24
' file file                         # ... the files you want to process (SAC*)
24 34 44 54 74 84 1 2 3 94
24 34 44 54 74 84 1 2 3 94
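Adapted to the question's actual line numbers and files (replacing 9 with 24, per the comment above, and passing the SAC* glob), the call would look like:
awk '
BEGIN {
    ORS=" "
    a[2];a[3];a[4];a[5];a[7];a[8]
}
FNR in a { print $4 }
FNR==24  { printf "%s\n",$0 }
' SAC* > exp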

Read the number of columns using awk/sed

I have the following test file
Kmax Event File - Text Format
1 4 1000
65 4121 9426 12312
56 4118 8882 12307
1273 4188 8217 12309
1291 4204 8233 12308
1329 4170 8225 12303
1341 4135 8207 12306
63 4108 8904 12300
60 4106 8897 12307
731 4108 8192 12306
...
ÿÿÿÿÿÿÿÿ
In this file I want to delete the first two lines and apply some mathematical calculations. For instance, each column i will become $i-(i-1)*number. A script that does this is the following:
#!/bin/bash
if test $1 ; then
if [ -f $1.evnt ] ; then
rm -f $1.dat
sed -n '2p' $1.evnt | (read v1 v2 v3
for filename in $1*.evnt ; do
echo -e "Processing file $filename"
sed '$d' < $filename > $1_tmp
sed -i '/Kmax/d' $1_tmp
sed -i '/^'"$v1"' '"$v2"' /d' $1_tmp
cat $1_tmp >> $1.dat
done
v3=`wc -l $1.dat | awk '{print $1}' `
echo -e "$v1 $v2 $v3" > .$1.dat
rm -f $1_tmp)
else
echo -e "\a!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
echo -e " Event file $1.evnt doesn't exist !!!!!!"
echo -e "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
fi
else
echo -e "\a!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
echo -e "!!!!! Give name for event files !!!!!"
echo -e "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
fi
awk '{print $1, $2-4096, $3-(2*4096), $4-(3*4096)}' $1.dat >$1_Processed.dat
rm -f $1.dat
exit 0
The file won't always have 4 columns. Is there a way to read the number of columns, print this number and apply those calculations?
EDIT: The idea is to have an input file (*.evnt) and convert it to a *.dat or any other ASCII file (it doesn't really matter which) that includes only the numeric columns, and then apply the calculation $i=$i-(i-1)*number. In addition, it will keep the number of columns in a variable that will be used by another program. For instance, in the above file number=4096, and a sample output file is the following:
65 25 1234 24
56 22 690 19
1273 92 25 21
1291 108 41 20
1329 74 33 15
1341 39 15 18
63 12 712 12
60 10 705 19
731 12 0 18
while in the console I will get the message "There are 4 detectors".
Finally, a new file_processed.dat will be produced, where file is the original name of awk's input file.
The way it should be executed is the following
./myscript <filename>
where <filename> is the name without the format. For instance, the files will have the format filename.evnt so it should be executed using
./myscript filename
Let's start with this to see if it's close to what you're trying to do:
$ numdet=$( awk -v num=4096 '
    NR>2 && NF>1 {
        out = FILENAME "_processed.dat"
        for (i=1;i<=NF;i++) {
            $i = $i-(i-1)*num
        }
        nf = NF
        print > out
    }
    END {
        printf "There are %d detectors\n", nf | "cat>&2"
        print nf
    }
' file )
There are 4 detectors
$ cat file_processed.dat
65 25 1234 24
56 22 690 19
1273 92 25 21
1291 108 41 20
1329 74 33 15
1341 39 15 18
63 12 712 12
60 10 705 19
731 12 0 18
$ echo "$numdet"
4
Is that it?
Using awk
awk 'NR<=2{next}{for (i=1;i<=NF;i++) $i=$i-(i-1)*4096}1' file
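Note that, unlike the fuller answer above, this one-liner doesn't skip the trailing non-numeric line of the sample file; an NF guard (as the fuller answer uses) handles that, e.g.:
awk 'NR<=2 || NF<2 {next} {for (i=1;i<=NF;i++) $i=$i-(i-1)*4096}1' file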
