Extract lines from a text file, using a starting line number and a count of lines to extract, in bash?

I have seen How can I extract a predetermined range of lines from a text file on Unix? but I have a slightly different use case: I want to specify a starting line number, and a count/amount/number of lines to extract, from a text file.
So, I tried to generate a text file, and then compose an awk command to extract a count of 10 lines starting from line number 100 - but it does not work:
$ seq 1 500 > test_file.txt
$ awk 'BEGIN{s=100;e=$s+10;} NR>=$s&&NR<=$e' test_file.txt
$
So, what would be an easy approach to extract lines from a text file using a starting line number, and count of lines, in bash? (I'm ok with awk, sed, or any such tool, for instance in coreutils)

This gives you text that is inclusive of both end points
(eleven output lines, here).
$ START=100
$
$ sed -n "${START},$((START + 10))p" < test_file.txt
The -n says "no print by default".
And then the p says "print this line",
for lines within the example range of 100,110
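If you want an exact count of lines rather than a range inclusive of both endpoints, subtract one from the end address. A minimal sketch, assuming START and COUNT shell variables (COUNT is an assumed name, not from the answer above):
START=100
COUNT=10
# Lines 100 through 109: exactly COUNT lines, since the sed range is inclusive.
sed -n "${START},$((START + COUNT - 1))p" test_file.txt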

When you want to use awk, use something like
seq 1 500 | awk 'NR>=100 && NR<=110'
The advantage of awk is its flexibility when the requirements change.
When you want to use a variable start and exclude the endpoints, it becomes
start=100
seq 1 500 | awk -v start="${start}" 'NR > start && NR < start + 10'
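If instead you want the starting line included and an explicit count, a sketch along the same lines (count is an assumed variable name, not from the original answer):
start=100
count=10
# Prints exactly $count lines, starting at line $start (100..109 here).
seq 1 500 | awk -v s="$start" -v c="$count" 'NR >= s && NR < s + c'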

Another alternative with tail and head:
tail -n +$START test_file.txt | head -n $NUMBER
If test_file.txt is very large and $START and $NUMBER are small, the following variant should be the fastest:
head -n $((START+NUMBER)) test_file.txt | tail -n +$START
Anyway, I prefer the sed solution mentioned above for short input files:
sed -n "$START,$((START+NUMBER)) p" test_file.txt

sed -n "$Start,$End p" file
is likely a better way to get those lines.
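For example, to get 10 lines starting at line 100 with this form (variable names as in the answer):
Start=100
End=$((Start + 9))   # inclusive range, so 10 lines: 100..109
sed -n "$Start,$End p" file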

$ seq 1 500 > test_file.txt
$ awk 'BEGIN{s=100;e=$s+10;} NR>=$s&&NR<=$e' test_file.txt
$
In GNU AWK, $s means the value of the s-th field and $e means the value of the e-th field. There are no fields yet in the BEGIN clause, so $s is unset; because it is used in an arithmetic context it is treated as 0, and e is therefore set to 10. The output of seq is a single number per line, so there is no 10th field; GNU AWK treats that missing field as zero when asked to compare it with a number, and since NR is always strictly greater than 0, your condition never holds and the output is empty.
Use a range if you are able to prepare a condition which holds solely for the starting line and a condition which holds solely for the ending line; in this case
awk 'BEGIN{s=100}NR==s,NR==s+10' test_file.txt
gives output
100
101
102
103
104
105
106
107
108
109
110
Keep in mind that this will process the whole file. If you have a huge file and the area of interest is relatively near the beginning, you can reduce the processing time by ending processing at the end of the area of interest, as follows:
awk 'BEGIN{s=100}NR>=s{print}NR==s+10{exit}' test_file.txt
(tested in GNU Awk 5.0.1)

This command extracts 30 lines starting from line 100
sed -n '100,$p' test_file.txt | head -30
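With GNU sed you can also express the count directly in the address, using the addr,+N form, which avoids the extra head process; a sketch:
# GNU sed: print line 100 plus the next 29 lines (30 lines in total).
sed -n '100,+29p' test_file.txt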

Related

How to add 100 spaces at end of each line of a file in Unix

I have a file which is supposed to contain 200 characters in each line. I received a source file with only 100 characters in each line. I need to add 100 extra whitespace characters to each line now. If it were a few blank spaces, we could have used sed like:
sed 's/$/ /' filename > newfilename
Since it's 100 spaces, can anyone tell me whether it's possible to add them in Unix?
If you want a fixed n characters per line (and don't trust the input file to have exactly m characters per line), follow this. For an input file with a varying number of characters per line:
$ cat file
1
12
123
1234
12345
extend to 10 chars per line.
$ awk '{printf "%-10s\n", $0}' file | cat -e
1         $
12        $
123       $
1234      $
12345     $
Obviously, change 10 to 200 in your script. Here $ marks the end of line; it's not there as a character. You don't need cat -e; it's used here just to show that the line is extended.
With awk
awk '{printf "%s%100s\n", $0, ""}' file.dat
$0 refers to the entire line.
Updated after Glenn's suggestion
As Glenn suggests in the comments, the substitution is unnecessary; you can just add the spaces. Taking that logic further, you don't even need the addition: you can just say them after the original line.
perl -nlE 'say $_," "x100' file
Original Answer
With Perl:
perl -pe 's/$/" " x 100/e' file
That says... "Substitute (s) the end of each line ($) with the calculated expression (e) of 100 repetitions of a space".
If you wanted to pad all lines to, say, 200 characters even if the input file was ragged (all lines of differing length), you could use something like this:
perl -pe '$pad=200-length;s/$/" " x $pad/e'
which would make up lines of 83, 102 and 197 characters to 200 each.
If you use Bash, you can still use sed, but use some readline functionality to keep you from manually typing 100 spaces (see manual for "Readline arguments").
You start typing normally:
sed 's/$/
Now, you want to insert 100 spaces. You can do this by prefixing the space bar press with a readline argument indicating that you want it repeated 100 times, i.e., you enter what would look like this as a readline key sequence:
M-1 0 0 \040
Or, if your meta key is the Alt key: Alt+1 0 0 Space
This inserts 100 spaces, and you get
sed 's/$/ /' filename
after typing the rest of the command.
This is useful for working in an interactive shell, but not very pretty for scripts – use any of the other solutions for that.
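For scripts, a non-interactive way to get the same command without typing the spaces is to let printf generate them; a minimal sketch, assuming the same filename:
spaces=$(printf '%100s' '')          # a string of 100 spaces
sed "s/\$/$spaces/" filename > newfilename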
Just in case you are looking for a bash solution,
while IFS= read -r line
do
    printf "%s%100s\n" "$line"
done < file > newfile
Test
Say I have a file with 3 lines in it:
$ wc -c file
16 file
$ wc -c newfile
316 newfile
Original Answer
spaces=$(echo {1..101} | tr -d 0-9)   # 100 spaces (the separators between 101 numbers)
while read line
do
    echo "${line}${spaces}" >> newfile   # echo adds the trailing newline itself
done < file
You can use printf in awk:
awk '{printf "%s%*.s\n", $0, 100, " "}' filename > newfile
This printf will append 100 spaces at the end of each line.
Another way in GNU awk using string-manipulation function sprintf.
awk 'BEGIN{s=sprintf("%-100s", "");}{print $0 s}' input-file > file-with-spaces
A demonstration with an example:
$ cat input-file
1234jjj hdhyvb 1234jjj
6789mmm mddyss skjhude
khora77 koemm sado666
nn1004 nn1004 457fffy
$ wc -c input-file
92 input-file
$ awk 'BEGIN{s=sprintf("%-100s", "");}{print $0 s}' input-file > file-with-spaces
$ wc -c file-with-spaces
492 file-with-spaces

Searching a file (grep/awk) for 2 carriage return/line-feed characters

I'm trying to write a script that'll simply count the occurrences of \r\n\r\n in a file. (Opening the sample file in vim binary mode shows me the ^M character in the proper places, and the newline is still read as a newline).
Anyway, I know there are tons of solutions, but they don't seem to get me what I want.
e.g. awk -e '/\r/,/\r/!d' or using $'\n' as part of the grep statement.
However, none of these seem to produce what I need. I can't find the \r\n\r\n pattern with grep's "trick", since that just expands one variable. The awk solution is greedy, and so gets me way more lines than I want/need.
Switching grep to binary/Perl/no-newline mode seems to be closer to what I want,
e.g. grep -UPzo '\x0D', but really what I want then is grep -UPzo '\x0D\x00\x0D\x00', which doesn't produce the output I want.
It seems like such a simple task.
By default, awk treats \n as the record separator. That makes it very hard to count \r\n\r\n. If we choose some other record separator, say a letter, then we can easily count the appearance of this combination. Thus:
awk '{n+=gsub("\r\n\r\n", "")} END{print n}' RS='a' file
Here, gsub returns the number of substitutions made. These are summed and, after the end of the file has been reached, we print the total number.
Example
Here, we use bash's $'...' construct to explicitly add carriage returns and newlines:
$ echo -n $'\r\n\r\n\r\n\r\na' | awk '{n+=gsub("\r\n\r\n", "")} END{print n}' RS='a'
2
Alternate solution (GNU awk)
We can tell it to treat \r\n\r\n as the record separator and then return the count (minus 1) of the number of records:
cat file <(echo 1) | awk 'END{print NR-1;}' RS='\r\n\r\n'
In awk, RS is the record separator and NR is the count of the number of records. Since we are using a multiple-character record separator, this requires GNU awk.
If the file ends with \r\n\r\n, the above would be off by one. To avoid that, the echo 1 statement is used to ensure that there is always at least one character after the last \r\n\r\n in the file.
Examples
Here, we use bash's $'...' construct to explicitly add carriage returns and newlines:
$ echo -n $'abc\r\n\r\n' | cat - <(echo 1) | awk 'END{print NR-1;}' RS='\r\n\r\n'
1
$ echo -n $'abc\r\n\r\ndef' | cat - <(echo 1) | awk 'END{print NR-1;}' RS='\r\n\r\n'
1
$ echo -n $'\r\n\r\n\r\n\r\n' | cat - <(echo 1) | awk 'END{print NR-1;}' RS='\r\n\r\n'
2
$ echo -n $'1\r\n\r\n2\r\n\r\n3' | cat - <(echo 1) | awk 'END{print NR-1;}' RS='\r\n\r\n'
2

How can I delete every Xth line in a text file?

Consider a text file with scientific data, e.g.:
5.787037037037037063e-02 2.048402977658663748e-01
1.157407407407407413e-01 4.021264347118673754e-01
1.736111111111111049e-01 5.782032163406526371e-01
How can I easily delete, for instance, every second line, or every 9 out of 10 lines in the file? Is it for example possible with a bash script?
Background: the file is very large but I need much less data to plot. Note that I am using Ubuntu/Linux.
This is easy to accomplish with awk.
Remove every other line:
awk 'NR % 2 == 0' file > newfile
Remove every 10th line:
awk 'NR % 10 != 0' file > newfile
The NR variable in awk is the line number. Anything outside of { } in awk is a condition (pattern), and when it is true the default action is to print the line.
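Since the question mentions thinning data for plotting, it may help to make the step a variable; a small sketch, where n is an assumed name:
# Keep only every n-th line (here 1 line in 10), dropping the rest.
awk -v n=10 'NR % n == 0' file > newfile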
How about perl?
perl -n -e '$.%10==0&&print' # print every 10th line
You could possibly do it with sed, e.g.
sed -n -e 'p;N;d;' file # print every other line, starting with line 1
If you have GNU sed it's pretty easy
sed -n -e '0~10p' file # print every 10th line
sed -n -e '1~2p' file # print every other line starting with line 1
sed -n -e '0~2p' file # print every other line starting with line 2
Try something like:
awk 'NR%3==0{print $0}' file
This will print one line in three. Or:
awk 'NR%10<9{print $0}' file
will print 9 lines out of ten.
This might work for you (GNU sed):
seq 10 | sed '0~2d' # delete every 2nd line
1
3
5
7
9
seq 100 | sed '0~10!d' # delete 9 out of 10 lines
10
20
30
40
50
60
70
80
90
100
You can use awk and a shell script. Awk can be difficult, but...
This will delete specific lines you tell it to:
nawk -f awkfile.awk [filename]
awkfile.awk contents
BEGIN {
    if (!lines) lines="3 4 7 8"
    n=split(lines, lA, FS)
    for (i=1; i<=n; i++)
        linesA[lA[i]]
}
!(FNR in linesA)
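Because the script only sets lines when it is unset, you can override the defaults from the command line without editing the file; a usage sketch (filename is a placeholder):
# Delete lines 2, 5 and 9 instead of the defaults.
nawk -v lines="2 5 9" -f awkfile.awk filename > newfile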
Also, I can't remember whether Vim comes with standard Ubuntu or not. If not, install it.
Then open the file with vim
vim [filename]
Then type
:%!awk NR\%2
This will delete every other line. Just change the 2 to another integer for a different frequency.

Can I grep only the first n lines of a file?

I have very long log files, is it possible to ask grep to only search the first 10 lines?
The magic of pipes:
head -10 log.txt | grep <whatever>
For folks who find this on Google, I needed to search the first n lines of multiple files, but to only print the matching filenames. I used
gawk 'FNR>10 {nextfile} /pattern/ { print FILENAME ; nextfile }' filenames
The FNR..nextfile stops processing a file once 10 lines have been seen. The //..{} prints the filename and moves on whenever the first match in a given file shows up. To quote the filenames for the benefit of other programs, use
gawk 'FNR>10 {nextfile} /pattern/ { print "\"" FILENAME "\"" ; nextfile }' filenames
Or use awk for a single process without |:
awk '/your_regexp/ && NR < 11' INPUTFILE
On each line, if your_regexp matches, and the number of records (lines) is less than 11, it executes the default action (which is printing the input line).
Or use sed:
sed -n '/your_regexp/p;10q' INPUTFILE
Checks your regexp and prints the line (-n means don't print the input, which is otherwise the default), and quits right after the 10th line.
You have a few options using programs along with grep. The simplest in my opinion is to use head:
head -n10 filename | grep ...
head will output the first 10 lines (using the -n option), and then you can pipe that output to grep.
grep "pattern" <(head -n 10 filename)
head -10 log.txt | grep -A 2 -B 2 pattern_to_search
-A 2: print two lines after the match.
-B 2: print two lines before the match.
head -10 log.txt # read the first 10 lines of the file.
You can use the following line:
head -n 10 /path/to/file | grep [...]
The output of head -10 file can be piped to grep in order to accomplish this:
head -10 file | grep …
Using Perl:
perl -ne 'last if $. > 10; print if /pattern/' file
An extension to Joachim Isaksson's answer: Quite often I need something from the middle of a long file, e.g. lines 5001 to 5020, in which case you can combine head with tail:
head -5020 file.txt | tail -20 | grep x
This gets the first 5020 lines, then shows only the last 20 of those, then pipes everything to grep.
(Edited: fencepost error in my example numbers, added pipe to grep)
grep -A 10 <Pattern>
This is to grab the pattern and the next 10 lines after the pattern. This would work well only for a known pattern; if you don't have a known pattern, use the "head" suggestions.
grep -m6 "string" cov.txt
This stops after the first 6 matching lines; note that -m limits the number of matches, not the number of input lines searched.
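If the goal really is to restrict the search to the first 6 lines, combine it with head as in the earlier answers:
head -n 6 cov.txt | grep "string"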

Removing a line based in a criteria

I just want to delete the line which contains the number of selected rows in a query, i.e. the last line. Please help.
[root@machine-test scripts]# ./hit_ratio.sh
193830 432
185260 125
2 rows selected.
If you know you want to delete the last line, but not other lines which contain similar text, or you don't know what text it will contain, sed is uniquely suitable.
./hit_ratio.sh | sed '$d'
You don't need the power of sed or the super-powers of awk if all you want is to delete a line based on a pattern. You can use:
./hit_ratio.sh | grep -v ' rows selected.'
You can do it with awk and sed but it's a bit like trying to kill a fly with a thermo-nuclear warhead:
pax> ./hit_ratio.sh | sed '/ rows selected./d'
193830 432
185260 125
pax> ./hit_ratio.sh | awk '$2!="rows"{print}'
193830 432
185260 125
Alternatively, do something with your SQL script. Sometimes adding the set nocount on statement eliminates the "rows affected" line.
My first recommendation is not to have that line output at all; post hit_ratio.sh here, and maybe it can be modified not to output that line.
Anyway if you have to remove only the last line the easiest would be to use head:
./hit_ratio.sh | head -n -1
Using -n with a negative number makes head print all but the last N lines of the input.
Use head to get the first N - 1 lines of your file, where N is the number of lines in the file (calculated with wc -l):
head -n $(($(cat lipsum.log | wc -l) - 1)) lipsum.log
Pipe through
sed -e '/\w*[0-9]\+ rows\? selected/d'
