I need to parse a log file into multiple files based on delimiters - bash

I have a log file which I need to parse into multiple files.
############################################################################################
6610
############################################################################################
GTI02152 I gtirreqi 20130906 000034 TC SJ014825 GTT_E_REQ_INF テーブル挿入件数 16件
############################################################################################
Z5000
############################################################################################
GTP10000 I NIPS gtgZ5000 20130906 000054 TC SJ014825 シェル開始
############################################################################################
I need to create files like 6610.txt, which will hold all the values under 6610 (the GTI02152 line, etc.), and likewise Z5000.txt for Z5000 (the GTP10000 line). Any help will be greatly appreciated!

The script below should help you get at the information. You can modify it to create the data you require.
#!/bin/sh
# join each 5-line block into one comma-separated record, then keep
# field 2 (the name line) and field 4 (the data line)
paste -d, - - - - - < data.dat | cut -d',' -f2,4 > file.out
while read -r p; do
    fileName=$(echo "$p" | cut -d',' -f1)
    echo "$fileName"
    dataInfo=$(echo "$p" | cut -d',' -f2)
    echo "$dataInfo"
done < file.out
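For example, one minimal modification (an untested sketch, assuming each record spans five lines including its own trailing delimiter line) is to redirect the data field into a file named after the first field:
while read -r p; do
    fileName=$(echo "$p" | cut -d',' -f1)
    dataInfo=$(echo "$p" | cut -d',' -f2)
    echo "$dataInfo" > "$fileName.txt"   # e.g. 6610.txt, Z5000.txt
done < file.out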

Here's an awk styled answer:
I put the following into a file named awko and chmod +x it to use it:
#!/usr/bin/awk -f
BEGIN { p = 0 }   # look for filename flag - start at zero
/^#/  { p = !p }  # toggle it on each delimiter line to find the filename
# either capture a filename or write to the last filename based on the flag
$0 !~ /^#/ {
    if (p == 1) filename = $1 ".txt"
    else        print $0 > filename
}
Running awko data.txt produced two files, 6610.txt and Z5000.txt from your example data. It's capable of sending more data lines to the output files as well.
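Based on that logic and the sample data above, the two files should come out as:
$ cat 6610.txt
GTI02152 I gtirreqi 20130906 000034 TC SJ014825 GTT_E_REQ_INF テーブル挿入件数 16件
$ cat Z5000.txt
GTP10000 I NIPS gtgZ5000 20130906 000054 TC SJ014825 シェル開始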

You can do it with Ruby as well:
ruby -e 'File.read(ARGV.shift).scan(/^[^#].*?(?=^[#])/m).each{|e| name = e.split[0]; File.write("#{name}.txt", e)}' file
Example output:
> for A in *.txt; do echo "---- $A ----"; cat "$A"; done
---- 6610.txt ----
6610
---- GTI02152.txt ----
GTI02152 I gtirreqi 20130906 000034 TC SJ014825 GTT_E_REQ_INF テーブル挿入件数 16件
---- GTP10000.txt ----
GTP10000 I NIPS gtgZ5000 20130906 000054 TC SJ014825 シェル開始
---- Z5000.txt ----
Z5000

This script makes the following assumptions:
Each record is separated by an empty line
#### lines are purely comment/space filler and can be ignored during parsing
The first line of each record (ignoring ####) contains the basename for the filename
The name of the logfile is passed as the first argument to this script.
#!/bin/bash
# write records to this temporary file, rename later
tempfile=$(mktemp)
while read -r line; do
    if [[ $line == "" ]]; then
        # line is empty - separator - save existing record and start a new one
        mv "$tempfile" "$filename"
        filename=""
        tempfile=$(mktemp)
    else
        # output non-empty line to record file
        echo "$line" >> "$tempfile"
        if [[ $filename == "" ]]; then
            # we haven't yet figured out the filename for this record
            if [[ $line =~ ^#+$ ]]; then
                # ignore #### comment lines
                :
            else
                # 1st non-comment line in record is the filename
                filename=${line}.txt
            fi
        fi
    fi
done < "$1"
# end of input file might not have an explicit empty-line separator -
# make sure the last record file is moved correctly
if [[ -e $tempfile ]]; then
    mv "$tempfile" "$filename"
fi
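Usage would simply be the following (script name hypothetical), given input that matches the assumptions above:
$ ./split_records.sh input.log
$ ls
6610.txt  Z5000.txt  input.log  split_records.sh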


Grep -rl from a .txt list

I'm trying to locate a list of strings from a .txt file; the search target is a directory of multiple .csv files (locating which .csv files contain the string).
I already found how to do it manually:
grep -rl doggo C:\dirofcsv\
The next step is to do it from a list of hundreds of terms.
I tried grep -rl -f list.txt C:\dirofcsv < print.txt but I only get the last term printed. I want to have the results line by line.
I'm missing something but I don't know where.
I'm working on Windows with a terminal emulator.
EDIT: I've found how to list the terms from a file. Now I need to see which terms have which result, like "doggo => file2, file4". Do I need to write a loop?
Thanks community.
grep -rl -f list.txt C:\dirofcsv >> print.txt
You are looking to append lines to the print.txt file and so will need to use >> as opposed to > which will overwrite what is already in the file.
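A quick illustration of the difference (term names hypothetical):
grep -rl doggo C:\dirofcsv > print.txt    # truncates print.txt first, so only this run's results remain
grep -rl catto C:\dirofcsv >> print.txt   # appends, keeping whatever is already in print.txt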
To get the output in the format required by your edited question, you can redirect a loop back into awk (note that arr[fil][$0] uses GNU awk's arrays of arrays):
awk '/^FILE -/ { fil=$3;   # When the line starts with "FILE -", set fil to the third space-delimited field
    next                   # Skip to the next line
}
{ arr[fil][$0]=""          # Build a 2-dimensional array with the search term (fil) as the first index and the matching file name as the second
}
END { for (i in arr) {     # Loop through the array
        printf "%s => ",i; # First print the search term in the format required
        for (j in arr[i]) {
            printf "%s,",j # Print the file name followed by a comma
        }
        printf "\n"        # Print a new line
    }
}' <<< "$(while read -r line   # Read list.txt line by line
do
    echo "FILE - $line"            # Echo a marker for identification in awk
    grep -rl "$line" C:\dirofcsv   # Grep recursively for the term
done < list.txt)" >> print.txt
One liner:
awk '/^FILE -/ { fil=$3;next } { arr[fil][$0]="" } END { for (i in arr) { printf "%s => ",i;for (j in arr[i]) { printf "%s,",j } printf "\n" } }' <<< "$(while read -r line;do echo "FILE - $line";grep -rl "$line" C:\dirofcsv; done < list.txt)" >> print.txt
I think you meant to pass the command as:
grep -rl -f list.txt C:\dirofcsv >> print.txt
Give it a shot. It should take all patterns from list.txt line by line and search in the directory C:\dirofcsv for files with matching patterns and print their names to print.txt file.
Try this for printing without a loop (just like you asked in comments ;-)
One Line Answer
dir=C:\dirofcsv
listfile=list.txt
eval $(jq -Rsr 'split("\n") | map(select(length > 0)) | reduce .[] as $line ([]; . + ["echo \($line) :; grep -rl \($line) \($dir); echo"]) | (join("; "))' --arg dir "$dir" < "$listfile")
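To see the mechanism: for a list.txt containing the two hypothetical terms doggo and catto, the jq filter builds the string below and hands it to eval, printing each term as a header followed by the files that match it:
echo doggo :; grep -rl doggo C:\dirofcsv; echo; echo catto :; grep -rl catto C:\dirofcsv; echo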
Another solution, explained step by step below:
unset li
readarray li < "$listfile"
quoted_commands="$(jq -R 'reduce inputs as $line ([]; . + ["echo \($line) :; grep -rl \($line) \($dir); echo"]) | (join("; "))' \
--arg dir "$dir" \
<<< "$(echo; printf "%s" "${li[@]}")")"
quoted_commands=${quoted_commands%\"}
commands=${quoted_commands#\"}
eval $commands
Breaking down the command for a better explanation in the comments:
# read contents of listfile into li
unset li && readarray li < "$listfile"
# re-emit the content so the list elements print on their own lines
# also add a newline at the top as it will be discarded by jq (in this case only)
list="$(echo; printf "%s" "${li[@]}")"
# pass the jq command
quoted_commands="$(jq -R 'reduce inputs as $line
([]; . + ["echo \($line) :; grep -rl \($line) \($dir); echo"])
| (join("; "))' \
--arg dir "$dir" <<< "$list")"
# the elements are read with the reduce filter and converted to a JSON array of corresponding commands to execute
# the commands for all the list elements are joined with the join filter
# trim the surrounding quotes so the commands execute properly
commands=$(sed -e 's/^"//' -e 's/"$//' <<< "$quoted_commands")
# run the commands
eval "$commands"
You may want to print the above variables. Take care to use quotes in echo/printf while doing so, i.e., echo "$variable".
Replacement for the sed command:
single_quoted=${quoted_commands%\"}
commands=${single_quoted#\"}
echo "$commands"
I am now using the following implementations (the dictionary implementation uses a for loop, but the key : value implementation doesn't and is a single-line command):
# print an Associative bash array as a JSON dictionary
print_dict()
{
    # take the array name by nameref
    declare -n ref
    ref=$1
    for k in "${!ref[@]}"
    do
        printf '{"name":"%s", "value":"%s"}\n' "$k" "${ref[$k]}"
    done | jq -s 'reduce .[] as $i ({}; .[$i.name] = $i.value)'
}
#-------------------------------------------------------------------------
# print the grep output in key : value format
function list_grep()
{
    local listfile=$1
    local dir=$2
    eval $(jq -Rsr 'split("\n") | map(select(length > 0)) | reduce .[] as $line ([]; . + ["echo \($line) :; grep -rl \($line) \($dir); echo"]) | (join("; "))' --arg dir "$dir" < "$listfile")
}
#-------------------------------------------------------------------------
# print the grep output as JSON dictionary
function dict_grep()
{
    local listfile=$1
    local dir=$2
    eval declare -A Arr=\($(eval echo $(jq -Rrs 'split("\n") | map(select(length > 0)) | reduce .[] as $k ([]; . + ["[\($k)]=\\\"$(grep -rl \($k) \($dir))\\\""]) | (join(" "))' --arg dir "$dir" < "$listfile"))\)
    print_dict Arr
}
#-------------------------------------------------------------------------
# call:
list_grep $listfile $dir
dict_grep $listfile $dir

Add filename of each file as a separator row when merging into a single file Bash Script

I have a script which combines all the CSV files in a folder into a single CSV file, and it works great. I need to add functionality to insert the filename of each original CSV as a header row for its data block, so I know which section is which.
Can someone assist? This is not my strong point and I am in over my head.
#!/bin/bash
OutFileName="./Data/all/all.csv"    # Fix the output name
i=0                                 # Reset a counter
for filename in ./Data/all/*.csv; do
    if [ "$filename" != "$OutFileName" ];   # Avoid recursion
    then
        if [[ $i -eq 0 ]]; then
            head -1 "$filename" > "$OutFileName"   # Copy header if it is the first file
        fi
        tail -n +2 "$filename" >> "$OutFileName"   # Append from the 2nd line of each file
        i=$(( i + 1 ))                             # Increase the counter
    fi
done
I will be automating this and running the shell script in Apple Automator.
Thank you for any help.
(Screenshots of an example input file and the combined output were attached here.) Once combined, I need the filename where the headers are.
When you want to generate something like ...
Header1,Header2,Header3
file1.csv
a,b,c
x,y,z
file2.csv
1,2,3
9,9,9
file3.csv
...
... then you just have to insert an echo "$filename" >> "$OutFileName" in front of the tail command. Here is an updated version of your script with some minor improvements.
#!/bin/bash
out="./Data/all/all.csv"
i=0
rm -f "$out"
for file in ./Data/all/*.csv; do
    [ "$file" = "$out" ] && continue    # skip the output file itself
    (( i++ == 0 )) && head -1 "$file"   # header from the first input only
    echo "$file"                        # filename separator row
    tail -n +2 "$file"                  # data rows
done > "$out"
There is no concept of "header line" other than the first line of the CSV file. What you can do is add a new column.
I've switched to Awk because it simplifies the script considerably. Your original would be literally a one-liner.
awk -F , 'NR==1 { OFS=FS; $(NF+1) = "Filename" }
FNR>1{ $(NF+1) = FILENAME }1' all/*.csv >all.csv
Not saving the output in the same directory as the inputs removes the pesky corner case handling.
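Based on how the two awk rules fire, the file1.csv/file2.csv samples above would merge roughly as shown below. Note that FILENAME keeps the all/ path prefix, and each later file's own header row passes through unmodified (adding a rule FNR==1 && NR>1 { next } would drop those):
Header1,Header2,Header3,Filename
a,b,c,all/file1.csv
x,y,z,all/file1.csv
Header1,Header2,Header3
1,2,3,all/file2.csv
9,9,9,all/file2.csv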

Parsing .ini file in bash

I have the properties file below and would like to parse it as shown. Please help me do this.
The .ini file I created:
[Machine1]
app=version1
[Machine2]
app=version1
app=version2
[Machine3]
app=version1
app=version3
I am looking for a solution in which the ini file is parsed like this:
[Machine1]app = version1
[Machine2]app = version1
[Machine2]app = version2
[Machine3]app = version1
[Machine3]app = version3
Thanks.
Try:
$ awk '/\[/{prefix=$0; next} $1{print prefix $0}' file.ini
[Machine1]app=version1
[Machine2]app=version1
[Machine2]app=version2
[Machine3]app=version1
[Machine3]app=version3
How it works
/\[/{prefix=$0; next}
If any line begins with [, we save the line in the variable prefix and then we skip the rest of the commands and jump to the next line.
$1{print prefix $0}
If the current line is not empty, we print the prefix followed by the current line.
Adding spaces
To add spaces around any occurrence of =:
$ awk -F= '/\[/{prefix=$0; next} $1{$1=$1; print prefix $0}' OFS=' = ' file.ini
[Machine1]app = version1
[Machine2]app = version1
[Machine2]app = version2
[Machine3]app = version1
[Machine3]app = version3
This works by using = as the input field separator and ' = ' as the output field separator; the reassignment $1=$1 forces awk to rebuild the line with the new separator.
I love John1024's answer. I was looking for exactly that. I have created a bash function that allows me to look up sections or specific keys based on his idea:
function iniget() {
    if [[ $# -lt 2 || ! -f $1 ]]; then
        echo "usage: iniget <file> [--list|<section> [key]]"
        return 1
    fi
    local inifile=$1
    if [ "$2" == "--list" ]; then
        for section in $(cat $inifile | grep "\[" | sed -e "s#\[##g" | sed -e "s#\]##g"); do
            echo $section
        done
        return 0
    fi
    local section=$2
    local key
    [ $# -eq 3 ] && key=$3
    # https://stackoverflow.com/questions/49399984/parsing-ini-file-in-bash
    # This awk line turns ini sections => [section-name]key=value
    local lines=$(awk '/\[/{prefix=$0; next} $1{print prefix $0}' $inifile)
    for line in $lines; do
        if [[ "$line" = \[$section\]* ]]; then
            local keyval=$(echo $line | sed -e "s/^\[$section\]//")
            if [[ -z "$key" ]]; then
                echo $keyval
            else
                if [[ "$keyval" = $key=* ]]; then
                    echo $(echo $keyval | sed -e "s/^$key=//")
                fi
            fi
        fi
    done
}
So given this as file.ini
[Machine1]
app=version1
[Machine2]
app=version1
app=version2
[Machine3]
app=version1
app=version3
then the following results are produced
$ iniget file.ini --list
Machine1
Machine2
Machine3
$ iniget file.ini Machine3
app=version1
app=version3
$ iniget file.ini Machine1 app
version1
$ iniget file.ini Machine2 app
version1
version2
Again, thanks to @John1024 for his answer; I was pulling my hair out trying to create a simple bash ini parser that supported sections.
Tested on Mac using GNU bash, version 5.0.0(1)-release (x86_64-apple-darwin18.2.0)
You can try using awk:
awk '/\[[^]]*\]/{ # Match pattern like [...]
a=$1;next # store the pattern in a
}
NF{ # Match non empty line
gsub("=", " = ") # Add space around the = character
print a $0 # print the line
}' file
Excellent answers here. I made some modifications to @davfive's function to fit it better to my use case. This version is largely the same, except it allows for whitespace before and after = characters and allows values to have spaces in them.
# Get values from a .ini file
function iniget() {
    if [[ $# -lt 2 || ! -f $1 ]]; then
        echo "usage: iniget <file> [--list|<section> [key]]"
        return 1
    fi
    local inifile=$1
    if [ "$2" == "--list" ]; then
        for section in $(cat $inifile | grep "^\\s*\[" | sed -e "s#\[##g" | sed -e "s#\]##g"); do
            echo $section
        done
        return 0
    fi
    local section=$2
    local key
    [ $# -eq 3 ] && key=$3
    # This awk line turns ini sections => [section-name]key=value
    local lines=$(awk '/\[/{prefix=$0; next} $1{print prefix $0}' $inifile)
    lines=$(echo "$lines" | sed -e 's/[[:blank:]]*=[[:blank:]]*/=/g')
    while read -r line; do
        if [[ "$line" = \[$section\]* ]]; then
            local keyval=$(echo "$line" | sed -e "s/^\[$section\]//")
            if [[ -z "$key" ]]; then
                echo $keyval
            else
                if [[ "$keyval" = $key=* ]]; then
                    echo $(echo $keyval | sed -e "s/^$key=//")
                fi
            fi
        fi
    done <<<"$lines"
}
For taking disparate sections and tacking the section name onto each of its related keywords (along with the = and its keyvalue), this AWK one-liner will do the trick, coupled with a few clean-up regexes (the full function below also folds 'no-section' entries into a Default section).
ini_buffer="$(echo "$raw_buffer" | awk '/^\[.*\]$/{obj=$0}/=/{print obj $0}')"
It will take your lines and output them like you wanted:
+++ awk '/^\[.*\]$/{obj=$0}/=/{print obj $0}'
++ ini_buffer='[Machine1]app=version1
[Machine2]app=version1
[Machine2]app=version2
[Machine3]app=version1
[Machine3]app=version3'
A complete solution for the INI file format
Per Cloanto's INI-format specification (v1.4, 2009-10-23), there are several other tricky aspects to the INI file:
character set constraints for the section name
character set constraints for the keyword
And lastly, the keyvalue has to be able to handle pretty much anything that is not used in the section or keyword names; that includes nesting of quotes inside a pair of matching single/double quotes.
Except for the nesting of quotes, here is a complete GitHub solution to parsing an INI-format file with a default section:
# syntax: ini_file_read <raw_buffer>
# outputs: formatted bracket-nested "[section]keyword=keyvalue"
ini_file_read()
{
    local ini_buffer raw_buffer hidden_default
    raw_buffer="$1"
    # somebody has to remove the 'inline' comments
    # (a more complex sed solution for nested quotes
    # in inline comments is coming ... TBA)
    raw_buffer="$(echo "$raw_buffer" | sed '
        s|[[:blank:]]*//.*||; # remove //comments
        s|[[:blank:]]*#.*||;  # remove #comments
        t prune
        b
        :prune
        /./!d; # remove empty lines, but only those that
               # become empty as a result of comment stripping'
    )"
    # awk tacks the [section] prefix onto each keyword=keyvalue line
    ini_buffer="$(echo "$raw_buffer" | awk '/^\[.*\]$/{obj=$0}/=/{print obj $0}')"
    # sed removes leading and trailing spaces around the brackets
    ini_buffer="$(echo "$ini_buffer" | sed 's/^\s*\[\s*/\[/')"
    ini_buffer="$(echo "$ini_buffer" | sed 's/\s*\]\s*/\]/')"
    # finds all 'no-section' entries and inserts '[Default]'
    hidden_default="$(echo "$ini_buffer" \
        | egrep '^[-0-9A-Za-z_\$\.]+=' | sed 's/^/[Default]/')"
    if [ -n "$hidden_default" ]; then
        echo "$hidden_default"
    fi
    # finds sectional entries and outputs them as-is
    echo "$(echo "$ini_buffer" | egrep '^\[\s*[-0-9A-Za-z_\$\.]+\s*\]')"
}
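A minimal invocation sketch, assuming the file.ini from the question:
raw_buffer="$(cat file.ini)"
ini_file_read "$raw_buffer"
# prints lines such as [Machine1]app=version1, one entry per line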
The unit test for this StackOverflow post is included in this file:
https://github.com/egberts/bash-ini-file
Source:
https://github.com/egberts/easy-admin/blob/main/test/section-regex.sh
https://cloanto.com/specs/ini/#escapesequences

How to browse a line from a file?

I have a file that contains 10 lines with this sort of content:
aaaa,bbb,132,a.g.n.
I want to walk through every line, char by char, and put the data that appears before each "," into an output file.
if [ $# -eq 2 ] && [ -f $1 ]
then
echo "Read nr of fields to be saved or nr of commas."
read n
nrLines=$(wc -l < $1)
while $nrLines!="1" read -r line || [[ -n "$line" ]]; do
do
for (( i=1; i<=$n; ++i ))
do
while [ read -r -n1 temp ]
do
if [ temp != "," ]
then
echo $temp > $(result$i)
else
fi
done
paste -d"\n" $2 $(result$i)
done
nrLines=$($nrLines-1)
done
else
echo "File not found!"
fi
}
In parameter $2 I have an empty file in which I will store the data from file $1 after I extract it without the " , " and add a couple of comments.
Example:
My input_file contains:
a.b.c.d,aabb,comp,dddd
My output_file is empty.
I call my script: ./script.sh input_file output_file
After execution the output_file contains:
First line info: a.b.c.d
Second line info: aabb
Third line info: comp
(yes, without the 4th line info)
You can do what you want very simply with parameter-expansion and substring-removal using bash alone. For example, take an example file:
$ cat dat/10lines.txt
aaaa,bbb,132,a.g.n.
aaaa,bbb,133,a.g.n.
aaaa,bbb,134,a.g.n.
aaaa,bbb,135,a.g.n.
aaaa,bbb,136,a.g.n.
aaaa,bbb,137,a.g.n.
aaaa,bbb,138,a.g.n.
aaaa,bbb,139,a.g.n.
aaaa,bbb,140,a.g.n.
aaaa,bbb,141,a.g.n.
A simple one-liner using native bash string handling gives the following results:
$ while read -r line; do echo ${line%,*}; done <dat/10lines.txt
aaaa,bbb,132
aaaa,bbb,133
aaaa,bbb,134
aaaa,bbb,135
aaaa,bbb,136
aaaa,bbb,137
aaaa,bbb,138
aaaa,bbb,139
aaaa,bbb,140
aaaa,bbb,141
Parameter expansion with substring removal works as follows:
var=aaaa,bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the first ',' is:
${var#*,} # bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the last ',' is:
${var##*,} # a.g.n.
Beginning at the right and removing up to, and including, the first ',' is:
${var%,*} # aaaa,bbb,132
Beginning at the right and removing up to, and including, the last ',' is:
${var%%,*} # aaaa
Note: the text to remove above is represented with a wildcard '*', but wildcard use is not required. It can be any allowable text. For example, to only remove ,a.g.n where the preceding number is 136, you can do the following:
${var%,136*},136 # aaaa,bbb,136 (all others unchanged)
To print the 2016th line from a file named file.txt you have to run a command like this:
sed -n '2016p' < file.txt
More examples:
sed -n '2p' < file.txt
will print the 2nd line
sed -n '2011p' < file.txt
the 2011th line
sed -n '10,33p' < file.txt
lines 10 up to 33
sed -n '1p;3p' < file.txt
the 1st and 3rd lines
and so on...
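On large files it is also worth telling sed to quit once it has printed, so it does not scan the rest of the file:
sed -n '2016{p;q}' < file.txt   # print line 2016, then stop reading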
For more detail, please have a look in this tutorial and this answer.
In native bash the following should do what you want, assuming you replace the contents of your script.sh with the below:
#!/bin/bash
IN_FILE=${1}
OUT_FILE=${2}
IFS=\,
while read -r line; do
set -- ${line}
for ((i=1; i<=${#}; i++)); do
((${i}==4)) && continue
((n+=1))
printf '%s\n' "Line ${n} info: ${!i}"
done
done < ${IN_FILE} > ${OUT_FILE}
This prints every field except the 4th of each line of the input file, each on a new line in the output file (I assume this is your requirement, as per your comment?).
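With the sample line a.b.c.d,aabb,comp,dddd from the question, the output file should then contain:
Line 1 info: a.b.c.d
Line 2 info: aabb
Line 3 info: comp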
[wspace@wspace sandbox]$ awk -F"," 'BEGIN{OFS="\n"}{for(i=1; i<=NF-1; i++){print "line Info: "$i}}' data.txt
line Info: a.b.c.d
line Info: aabb
line Info: comp
This little snippet can ignore the last field.
updated:
#!/usr/bin/env bash
if [ ! -f "$1" -o $# -ne 2 ];then
echo "Usage: $(basename $0) input_file out_file"
exit 127
fi
input_file=$1
output_file=$2
: > $output_file
if [ "$(wc -l < $1)" -ne 0 ];then
while true
do
read -r -n1 char
if [ "$char" == "" ];then
break
elif [ $char != "," ];then
temp=$temp$char
else
echo "line info: $temp" >> $output_file
temp=""
fi
done < $input_file
else
echo "file $1 is empty"
fi
Maybe this is what you want
Did you try
sed "s|,|\n|g" $1 | head -n -1 > $2
I assume that only the last word would not have a comma on its right.
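With the question's sample line that would give the following (GNU sed and head assumed, since \n in the replacement and a negative head count are GNU extensions):
$ printf 'a.b.c.d,aabb,comp,dddd\n' | sed 's|,|\n|g' | head -n -1
a.b.c.d
aabb
comp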
Try this (tested with your sample line):
#!/bin/bash
# script.sh
echo "Number of fields to save ?"
read nf
while IFS=$',' read -r -a arr; do
    newarr=("${arr[@]:0:${nf}}")
done < "$1"
for i in "${newarr[@]}"; do
    printf "%s\n" "$i"
done > "$2"
Execute script with :
$ ./script.sh inputfile outputfile
Number of fields ?
3
$ cat outputfile
a.b.c.d
aabb
comp
All the comma-separated words are stored in the array $arr.
A temporary array $newarr keeps only the first $nf elements ($nf comes from the read command).
The loop over the new array prints the result into $2, the output file.

How do I use Head and Tail to print specific lines of a file

I want to output, say, lines 5 - 10 of a file, with the bounds passed in as arguments.
How could I use head and tail to do this?
where firstline = $2 and lastline = $3 and filename = $1.
Running it should look like this:
./lines.sh filename firstline lastline
head -n XX # <-- print first XX lines
tail -n YY # <-- print last YY lines
If you want lines from 20 to 30 that means you want 11 lines starting from 20 and finishing at 30:
head -n 30 file | tail -n 11
#
# first 30 lines
# last 11 lines from those previous 30
That is, you first get the first 30 lines and then you select the last 11 of those (that is, 30-20+1).
So in your code it would be:
head -n $3 $1 | tail -n $(( $3-$2 + 1 ))
Based on firstline = $2, lastline = $3, filename = $1
head -n $lastline $filename | tail -n $(( $lastline -$firstline + 1 ))
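Concretely, for lines 5 through 10 that works out to:
head -n 10 "$filename" | tail -n $(( 10 - 5 + 1 ))   # i.e. tail -n 6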
Aside from the answers given by fedorqui and Kent, you can also use a single sed command:
#!/bin/sh
filename=$1
firstline=$2
lastline=$3
# Basics of sed:
# 1. sed commands have a matching part and a command part.
# 2. The matching part matches lines, generally by number or regular expression.
# 3. The command part executes a command on that line, possibly changing its text.
#
# By default, sed will print everything in its buffer to standard output.
# The -n option turns this off, so it only prints what you tell it to.
#
# The -e option gives sed a command or set of commands (separated by semicolons).
# Below, we use two commands:
#
# ${firstline},${lastline}p
# This matches lines firstline to lastline, inclusive
# The command 'p' tells sed to print the line to standard output
#
# ${lastline}q
# This matches line ${lastline}. It tells sed to quit. This command
# is run after the print command, so sed quits after printing the last line.
#
sed -ne "${firstline},${lastline}p;${lastline}q" < ${filename}
Or, to avoid any external utilities, if you're using a recent version of bash (or zsh):
#!/bin/sh
filename=$1
firstline=$2
lastline=$3
i=0
exec <${filename} # redirect file into our stdin
while read -r ; do # read each line into REPLY variable
i=$(( $i + 1 )) # maintain line count
if [ "$i" -ge "${firstline}" ] ; then
if [ "$i" -gt "${lastline}" ] ; then
break
else
echo "${REPLY}"
fi
fi
done
try this one-liner:
awk -vs="$begin" -ve="$end" 'NR>=s&&NR<=e' "$f"
in above line:
$begin is your $2
$end is your $3
$f is your $1
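Wired into the requested calling convention, that could look like this (a sketch):
#!/bin/sh
f=$1
begin=$2
end=$3
awk -v s="$begin" -v e="$end" 'NR >= s && NR <= e' "$f"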
Save this as "script.sh":
#!/bin/sh
filename="$1"
firstline=$2
lastline=$3
linestoprint=$(($lastline-$firstline+1))
tail -n +$firstline "$filename" | head -n $linestoprint
There is NO ERROR HANDLING (for simplicity), so you have to call your script as follows:
./script.sh yourfile.txt firstline lastline
$ ./script.sh yourfile.txt 5 10
If you need only line "10" from yourfile.txt:
$ ./script.sh yourfile.txt 10 10
Please make sure that:
(firstline > 0) AND (lastline > 0) AND (firstline <= lastline)
