split a file based upon line number - bash

I have a large file that needs to be split based on line numbers.
For instance, my file looks like this:
aaaaaa
bbbbbb
cccccc
dddddd
****** //here blank line//
eeeeee
ffffff
gggggg
hhhhhh
*******//here blank line//
ıııııı
jjjjjj
kkkkkk
llllll
******
//And so on...
I need two separate files, such that one file contains the first 4 lines, the third 4 lines, the fifth 4 lines, and so on, while the other file contains the second 4 lines, the fourth 4 lines, the sixth 4 lines, and so on. How can I do that in a bash script?

You can play with the line number, NR:
$ awk 'NR%10>0 && NR%10<5' your_file > file1
$ awk 'NR%10>5' your_file > file2
If the line number is 10k + n with 0 < n < 5, the line goes to the first file.
If it is 10k + n with 5 < n < 10, it goes to the second file.
In one line:
$ awk 'NR%10>0 && NR%10<5 {print > "file1"} NR%10>5 {print > "file2"}' file
Test
$ cat a
1
2
3
4
6
7
8
9
11
12
13
14
16
17
18
19
21
22
23
24
26
27
28
29
31
32
33
34
36
37
38
39
41
42
43
44
46
47
48
49
51
$ awk 'NR%10>0 && NR%10<5 {print > "file1"} NR%10>5 {print > "file2"}' a
$ cat file1
1
2
3
4
11
12
13
14
21
22
23
24
31
32
33
34
41
42
43
44
51
$ cat file2
6
7
8
9
16
17
18
19
26
27
28
29
36
37
38
39
46
47
48
49

You can do this with head and tail (which are not part of bash itself):
head -n 20 <file> | tail -n 5
gives you lines 16 to 20.
This is inefficient, however, if you want to extract multiple sections of your file, since it has to be parsed again and again. In that case I'd prefer some real scripting.
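For example, a single sed invocation can pull several ranges in one pass (a sketch, assuming you want lines 1-4 and 11-14):
sed -n '1,4p;11,14p' file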

Another approach is to treat blank-line-separated paragraphs as the records, and print odd-numbered and even-numbered records to different files:
awk -v RS= -v ORS='\n\n' '{
    outfile = (NR % 2 == 1) ? "file1" : "file2"
    print > outfile
}' file

Maybe something like that:
#!/bin/bash
EVEN="even.log"
ODD="odd.log"
line_count=0
block_count=0
while IFS= read -r line
do
    # ignore blank lines
    if [ -n "$line" ]; then
        if [ $(( block_count % 2 )) -eq 0 ]; then
            # even block
            echo "$line" >> "$EVEN"
        else
            # odd block
            echo "$line" >> "$ODD"
        fi
        line_count=$(( line_count + 1 ))
        if [ "$line_count" -eq 4 ]; then
            block_count=$(( block_count + 1 ))
            line_count=0
        fi
    fi
done < "$1"
The first argument is the source file: ./split.sh split_input

This script prints lines from file 1.txt with indexes 0, 1, 2, 3, 8, 9, 10, 11, 16, 17, 18, 19, ...
i=0
while IFS= read -r p; do
    if [ $((i % 8)) -lt 4 ]
    then
        echo "$p"
    fi
    i=$((i + 1))
done < 1.txt
This script prints lines with indexes 4, 5, 6, 7, 12, 13, 14, 15, ...
i=0
while IFS= read -r p; do
    if [ $((i % 8)) -gt 3 ]
    then
        echo "$p"
    fi
    i=$((i + 1))
done < 1.txt
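Combined into a single pass that writes both files directly (a sketch; the output file names are illustrative):
i=0
while IFS= read -r p; do
    if [ $((i % 8)) -lt 4 ]; then
        echo "$p" >> file1   # indexes 0-3, 8-11, 16-19, ...
    else
        echo "$p" >> file2   # indexes 4-7, 12-15, ...
    fi
    i=$((i + 1))
done < 1.txt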

Related

distribute data in both increment and decrement order

I have a file with n rows, and I want its data distributed across 7 files in the order shown below.
(My input file has n rows; this is just an example.)
Input file
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
.
.
28
Output file
1 2 3 4 5 6 7
14 13 12 11 10 9 8
15 16 17 18 19 20 21
28 27 26 25 24 23 22
So if I open the first file, it should have rows
1
14
15
28
Similarly, if I open the second file, it should have rows
2
13
16
27
and similarly for the other files.
Can anybody please help? With the code below it does what is required, but not in the required order.
awk '{print > ("te1234"++c".txt");c=(NR%n)?c:0}' n=7 test6.txt
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
EDIT: Since the OP has completely changed the sample Input_file, I'm adding this solution now; again, it is written and tested with the shown samples only.
With xargs + a single awk (recommended):
xargs -n7 < Input_file |
awk '
FNR%2!=0{
    for(i=1;i<=NF;i++){
        print $i >> (i".txt")
        close(i".txt")
    }
    next
}
FNR%2==0{
    for(i=NF;i>0;i--){
        count++
        print $i >> (count".txt")
        close(count".txt")
    }
    count=""
}'
Initial solution:
xargs -n7 < Input_file |
awk '
FNR%2==0{
    for(i=NF;i>0;i--){
        val=(val?val OFS:"")$i
    }
    $0=val
    val=""
}
1' |
awk '
{
    for(i=1;i<=NF;i++){
        print $i >> (i".txt")
        close(i".txt")
    }
}'
The above could also be done with a single awk; I have added the xargs + single awk solution above.
Could you please try the following, written and tested in GNU awk with the shown samples:
awk '{for(i=1;i<=NF;i++){print $i >> (i".txt");close(i".txt")}}' Input_file
The output file counter could descend for each second group of seven:
awk 'FNR%n==1 {asc=!asc}
{
    out="te1234" (asc ? ++c : c--) ".txt"
    print >> out
    close(out)
}' n=7 test6.txt
$ ls
file tst.awk
$ cat tst.awk
{ rec = (cnt % 2 ? $1 sep rec : rec sep $1); sep=FS }
!(NR%n) {
    ++cnt
    nf = split(rec,flds)
    for (i=1; i<=nf; i++) {
        out = "te1234" i ".txt"
        print flds[i] >> out
        close(out)
    }
    rec=sep=""
}
$ awk -v n=7 -f tst.awk file
$ ls
file te12342.txt te12344.txt te12346.txt tst.awk
te12341.txt te12343.txt te12345.txt te12347.txt
$ cat te12341.txt
1
14
15
28
$ cat te12342.txt
2
13
16
27
If the input can have a line count that's not an exact multiple of n, then move the code that's currently in the !(NR%n) block into a function and call that function both there and in an END section, as sketched below.
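For instance, a sketch of that refactor (the function name flush_rec is mine, not from the original script):
function flush_rec(    i, nf, flds, out) {
    ++cnt
    nf = split(rec,flds)
    for (i=1; i<=nf; i++) {
        out = "te1234" i ".txt"
        print flds[i] >> out
        close(out)
    }
    rec = sep = ""
}
{ rec = (cnt % 2 ? $1 sep rec : rec sep $1); sep=FS }
!(NR%n) { flush_rec() }
END { if (rec != "") flush_rec() }   # flush any partial final group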
This might work for you (GNU sed & parallel):
parallel 'echo {1}~14w file{1}; echo {2}~14w file{1}' ::: {1..7} :::+ {14..8} |
sed -n -f - file &&
paste file{1..7}
Create a sed script that writes files named filen, where n is 1 through 7 (see the first set of parameters in the parallel command above, and also in the paste command).
The sed script uses the first~step address form, where first is the starting line number and step is the modulus applied thereafter.
The distributed files are created first, and the paste command then joins them all together to produce a single output file (tab-separated by default; use the paste -d option to get the desired delimiter).
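For reference, the sed script that parallel generates and pipes to sed -f - looks like this:
1~14w file1
14~14w file1
2~14w file2
13~14w file2
3~14w file3
12~14w file3
...
7~14w file7
8~14w file7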
Alternative using Bash & sed:
for ((n=1,m=14;n<=7;n++,m--));do echo "$n~14w file$n";echo "$m~14w file$n";done |
sed -nf - file &&
paste file{1..7}

Bash - Read lines from file with intervals

I need to read all the lines of a file, splitting them into batches at intervals. A function will execute a command on each batch of lines.
Lines range example:
1 - 20
21 - 50
51 - 70
...
I tried the sed command in a for loop, but the range does not reach the end of the file. For example, with a file of 125 lines it only reads up to 121, missing the lines at the end.
I commented out the sed line; in this loop the range goes up to 121 while COUNT is 125.
TEXT=`cat wordlist.txt`
COUNT=$( wc -l <<<$TEXT )
for i in $(seq 1 20 $COUNT)
do
    echo "$i"
    #sed -n "1","${i}p" <<<$TEXT
done
Output:
1
21
41
61
81
101
121
Thanks!
Quick fix - ensure the last line is processed by throwing $COUNT on the end of the list of values assigned to i:
for i in $(seq 1 20 $COUNT) $COUNT
do
    echo "$i"
done
1
21
41
61
81
101
121
125
If COUNT happens to be the same as the last value generated by seq, then we'll need some logic to skip the second occurrence; for example, if COUNT=121 then we'll want to skip the second iteration where i=121, e.g.:
# assume COUNT=121
lasti=0
for i in $(seq 1 20 $COUNT) $COUNT
do
    [ $lasti = $COUNT ] && break
    echo "$i"
    lasti=$i
done
1
21
41
61
81
101
121
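Putting it together, a sketch of the batch processing itself (my_function is a placeholder for whatever command consumes each batch):
COUNT=$( wc -l < wordlist.txt )
for ((start=1; start<=COUNT; start+=20)); do
    end=$(( start+19 < COUNT ? start+19 : COUNT ))   # clamp the last batch to COUNT
    sed -n "${start},${end}p" wordlist.txt | my_function   # placeholder command
done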

Read the number of columns using awk/sed

I have the following test file
Kmax Event File - Text Format
1 4 1000
65 4121 9426 12312
56 4118 8882 12307
1273 4188 8217 12309
1291 4204 8233 12308
1329 4170 8225 12303
1341 4135 8207 12306
63 4108 8904 12300
60 4106 8897 12307
731 4108 8192 12306
...
ÿÿÿÿÿÿÿÿ
In this file I want to delete the first two lines and apply some mathematical calculations. For instance, each column i becomes $i-(i-1)*number. A script that does this is the following:
#!/bin/bash
if test $1 ; then
    if [ -f $1.evnt ] ; then
        rm -f $1.dat
        sed -n '2p' $1.evnt | (read v1 v2 v3
        for filename in $1*.evnt ; do
            echo -e "Processing file $filename"
            sed '$d' < $filename > $1_tmp
            sed -i '/Kmax/d' $1_tmp
            sed -i '/^'"$v1"' '"$v2"' /d' $1_tmp
            cat $1_tmp >> $1.dat
        done
        v3=`wc -l $1.dat | awk '{print $1}' `
        echo -e "$v1 $v2 $v3" > .$1.dat
        rm -f $1_tmp)
    else
        echo -e "\a!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
        echo -e " Event file $1.evnt doesn't exist !!!!!!"
        echo -e "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
    fi
else
    echo -e "\a!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
    echo -e "!!!!! Give name for event files !!!!!"
    echo -e "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
fi
awk '{print $1, $2-4096, $3-(2*4096), $4-(3*4096)}' $1.dat >$1_Processed.dat
rm -f $1.dat
exit 0
The file won't always have 4 columns. Is there a way to read the number of columns, print this number and apply those calculations?
EDIT: The idea is to take an input file (*.evnt), convert it to a *.dat or any other ASCII file (it doesn't really matter) that contains only the numeric columns, and then apply the calculation $i=$i-(i-1)*number. In addition, it should keep the number of columns in a variable that will be used by another program. For instance, in the above file number=4096, and a sample output file is the following:
65 25 1234 24
56 22 690 19
1273 92 25 21
1291 108 41 20
1329 74 33 15
1341 39 15 18
63 12 712 12
60 10 705 19
731 12 0 18
while in the console I will get the message There are 4 detectors.
Finally a new file_processed.dat will be produced, where file is the initial name of awk's input file.
The way it should be executed is the following
./myscript <filename>
where <filename> is the name without the extension. For instance, the files will have the format filename.evnt, so it should be executed using
./myscript filename
Let's start with this to see if it's close to what you're trying to do:
$ numdet=$( awk -v num=4096 '
    NR>2 && NF>1 {
        out = FILENAME "_processed.dat"
        for (i=1;i<=NF;i++) {
            $i = $i-(i-1)*num
        }
        nf = NF
        print > out
    }
    END {
        printf "There are %d detectors\n", nf | "cat>&2"
        print nf
    }
' file )
There are 4 detectors
$ cat file_processed.dat
65 25 1234 24
56 22 690 19
1273 92 25 21
1291 108 41 20
1329 74 33 15
1341 39 15 18
63 12 712 12
60 10 705 19
731 12 0 18
$ echo "$numdet"
4
Is that it?
Using awk
awk 'NR<=2{next}{for (i=1;i<=NF;i++) $i=$i-(i-1)*4096}1' file

Bash script number generator

I need to generate random numbers in a specific format as test data. For example, given a number n, I need to produce n random numbers and write them to a file. The file must contain at most 3 numbers per line. Here is what I have:
#!/bin/bash
m=$1
output=$2
for ((i=1; i<=m; i++)); do
    echo $((RANDOM % 29+2)) >> $output
done
This outputs the numbers as:
1
2
24
21
10
14
and what I want is:
1 2 24
21 10 14
Thank you for your help!
Pure bash (written as a function rather than a script file)
randx3() {
    local d=$'  \n'   # cyclic separators: space, space, newline
    local i
    for ((i=0; i<$(($1 - 1)); ++i)); do
        printf "%d%c" $((RANDOM%29 + 2)) "${d:$((i%3)):1}"
    done
    printf "%d\n" $((RANDOM%29 + 2))
}
Note that it doesn't take a file argument; rather it outputs to stdout, so you would use it like this:
randx3 11 > /path/to/output
That style is often more flexible.
Here's a less hacky one which allows you to select how often you want a newline:
randx() {
    local i
    local m=$1
    local c=${2:-3}
    for ((i=1; i<=m; ++i)); do
        if ((i%c && i<m)); then
            printf "%d " $((RANDOM%29 + 2))
        else
            printf "%d\n" $((RANDOM%29 + 2))
        fi
    done
}
Call that one as randx 11 or randx 11 7 (second argument defaults to 3).
Pipe the output to a command that will read 3 lines at a time (here sed appends the next two lines with N, then replaces the embedded newlines with spaces):
for ((i=1; i<=m; i++)); do
    echo $((RANDOM % 29+2))
done | sed -e '$!N;$!N;s/\n/ /g' >> $output
This is what paste was designed for:
$ for i in {0..10}; do echo $RANDOM; done | paste -d' ' - - -
14567 3240 16354
17457 25616 12772
3912 7490 12206
7342 10554
Another approach would be to build up the values in an array, then use printf, which reuses its format string until all arguments are consumed.
m=$1
output=$2
vals=()
while (( m-- )); do
    vals+=( $((RANDOM % 29+2)) )
done
printf '%d %d %d\n' "${vals[@]}" > "$output"
Shortest!!!
I need to produce "n" random numbers and write them in a file. The file must contain at most 3 numbers per line.
pr -t -3 -s' ' <(for ((n=6;n--;)){ echo $((RANDOM % 29+2));}) >file
Then
cat file
11 29 27
14 21 22
YAS: Yet another bash solution
As a script:
#!/bin/bash
n=$1
file=$2
out=()
>$file
for ((i=1; i<=n; i++)); do
    out+=($((RANDOM%29+2)))
    [ $((i%3)) -eq 0 ] && echo ${out[*]} >>$file && out=()
done
[ "$out" ] && echo ${out[*]} >>$file
Usage:
script <quantity of random> <filename>
Important remark about RANDOM%29
This way of generating a random number between 2 and 30 is not equitable!
Since $RANDOM gives a number between 0 and 32767, consider:
for ((i=0; i<32768; i++)); do
    ((RL[$((i%29+2))]++))
done
for ((i=0; i<32; i++)); do
    printf "%3d %5d\n" $i ${RL[i]}
done | column
0 0 7 1130 14 1130 21 1130 28 1130
1 0 8 1130 15 1130 22 1130 29 1129
2 1130 9 1130 16 1130 23 1130 30 1129
3 1130 10 1130 17 1130 24 1130 31 0
4 1130 11 1130 18 1130 25 1130
5 1130 12 1130 19 1130 26 1130
6 1130 13 1130 20 1130 27 1130
...there are 1130 chances to obtain a number between 2 and 28, but only 1129 chances to obtain a 29 or a 30.
To prevent this, you have to drop unwanted results:
random2to30() {
    local _random=32769
    # 32741 = 29*1129: reject values above the largest multiple of 29
    while (( _random >= 32741 )); do
        _random=$RANDOM
    done
    printf -v $1 "%d" $((2 + _random % 29))
}
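Usage: pass the name of the variable that should receive the value:
random2to30 MyRandom
echo $MyRandom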
The proof:
tstr2to30() {
    unset $1
    local _random=32769
    while (( _random >= 32741 )); do
        read _random || break
    done
    [ "$_random" ] && printf -v $1 "%d" $((2 + _random % 29))
}
unset RL
while tstr2to30 MyRandom && [ "$MyRandom" ]; do
    ((RL[MyRandom]++))
done < <(seq 0 32767)
for ((i=0; i<32; i++)); do
    printf "%3d %5d\n" $i ${RL[i]}
done | column
This gives:
0 0 7 1129 14 1129 21 1129 28 1129
1 0 8 1129 15 1129 22 1129 29 1129
2 1129 9 1129 16 1129 23 1129 30 1129
3 1129 10 1129 17 1129 24 1129 31 0
4 1129 11 1129 18 1129 25 1129
5 1129 12 1129 19 1129 26 1129
6 1129 13 1129 20 1129 27 1129
where every value gets exactly the same number of chances (1129)!
Final usable script
So the script could become (don't forget bash's shebang!):
#!/bin/bash
n=${1:-11}             # default to 11 values
c=${2:-3}              # default to 3 values per line
minval=${3:-2}         # default random min: 2
maxval=${4:-30}        # default random max: 30
file=${5:-/dev/stdout} # default to STDOUT
rnum=$(( maxval - minval + 1 ))
rmax=$(( ( 32768 / rnum ) * rnum ))
randomGen() {
    local _random=33000
    while [ $_random -ge $rmax ]; do
        _random=$RANDOM
    done
    printf -v $1 "%d" $(( minval + _random % rnum ))
}
out=()
for ((i=1; i<=n; i++)); do
    randomGen MyRandom
    out+=($MyRandom)
    [ $((i%c)) -eq 0 ] && echo ${out[*]} >>"$file" && out=()
done
[ "$out" ] && echo ${out[*]} >>"$file"
This awk will print a newline after every 3rd number and a space otherwise:
for ((i=1; i<=m; i++)); do
    echo $((RANDOM % 29+2))
done | awk '{printf "%s%c", $1, (NR % 3) ? " " : "\n"}' >> $output
yet another way of doing it, using xargs -n3 to group three numbers per line:
for ((i=1; i<=m; i++)); do echo $((RANDOM % 29+2)); done | xargs -n3 > $output

Extracting lines 1, 11, 21, etc. from a text file

Is it possible to retrieve data from lines 1, 11, 21, 31, ... of a text file using Linux commands?
I need to do the same for 2, 12, 22, 32 and so on.
You can use awk for this:
awk '(NR % 10 == 1){ print }' your_input_file
For example:
$ seq 1 100|awk '(NR%10 == 2){print}'
2
12
22
32
42
52
62
72
82
92
As glenn jackman points out, you can parametrize the awk script to make it easier to use. And given that print is the default action, you can simply write:
$ seq 1 20|awk -v step=10 -v idx=3 'NR%step==idx'
3
13
