How to only read the last line from a text file [duplicate] - bash

This question already has answers here:
How to read the last line of a text file into a variable using Bash? [closed]
(2 answers)
Print the last line of a file, from the CLI
(5 answers)
Closed 4 years ago.
I am working on a tool project. I need to grab the last line from a file and assign it to a variable. This is what I have tried:
line=$(head -n $NF input_file)
echo $line
Maybe I could read the file in reverse and then use
line=$(head -n $1 input_file)
echo $line
Any ideas are welcome.

Use tail ;)
line=$(tail -n 1 input_file)
echo "$line"
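A minimal sketch of capturing the last line into a variable (the file name is illustrative):

```shell
# Create a throwaway sample file
printf 'first\nsecond\nthird\n' > input_file

# tail -n 1 prints only the final line
line=$(tail -n 1 input_file)

# Quote the expansion so whitespace in the line is preserved
echo "$line"
```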

A combination of tac and awk. The benefit of this approach is that we do not need to read the complete Input_file: tac starts from the end, and awk exits after printing the first line it sees.
tac Input_file | awk '{print;exit}'

With sed or awk :
sed -n '$p' file
sed '$!d' file
awk 'END{print}' file
However, tail is still the right tool for the job. (The awk version relies on the last record being preserved in the END block; GNU awk does this, but some older implementations did not.)
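A quick comparison of the variants above on a throwaway file; all should print the same line:

```shell
printf 'one\ntwo\nthree\n' > file

sed -n '$p' file       # suppress auto-print; print only the last line
sed '$!d' file         # delete every line that is not the last
awk 'END{print}' file  # GNU awk still has the last record in END
tail -n 1 file         # the idiomatic tool
```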


Delete last blank line from a file and save the file [duplicate]

This question already has answers here:
shell: delete the last line of a huge text log file [duplicate]
(4 answers)
Remove the last line from a file in Bash
(16 answers)
Closed 5 years ago.
How can I delete the last line of a file without reading the entire file or rewriting it in any temp file? I tried to use sed but it reads the entire file into memory which is not feasible.
I want to remove the blank line from the end and save the change to the same file.
Since the file is very big and reading through the complete file would be slow, I am looking for a better way.
A simple sed command to delete the last line:
sed '$d' <file>
In sed, $ addresses the last line.
You can try this awk command, which drops the final line only when it is blank:
awk 'NR > 1{print t} {t = $0}END{if (NF) print }' file
Using head (GNU, writes the result to a new file):
head -n -1 file.txt > new_file.txt
The easiest way I can think of is:
sed -i '${/^$/d}' filename
Edited to delete only a blank final line.
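A quick check of that in-place variant (GNU sed; the file name is illustrative):

```shell
# Build a file whose last line is blank
printf 'data1\ndata2\n\n' > filename

# Delete the last line only if it is empty; -i edits in place (GNU sed)
sed -i '${/^$/d}' filename

wc -l < filename   # now 2 lines
```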
Your only option that avoids reading the entire file, other than sed, is to stat the file to get its size and then use dd to skip to the end before you start reading. However, telling sed to operate only on the last line does essentially that.
Take a look at
Remove the last line from a file in Bash
Edit: I tested
dd if=/dev/null of=<filename> bs=1 seek=$(( $(stat --format=%s <filename>) - $(tail -n 1 <filename> | wc -c) ))
and it does what you want
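On GNU systems, truncate can do the same seek-and-cut without dd. A sketch (file name illustrative), assuming the file ends with a newline:

```shell
printf 'keep\nme\ndrop\n' > bigfile

# tail seeks to the end of the file rather than reading it all;
# truncate then shrinks the file in place by the byte length of
# the last line (including its trailing newline)
truncate -s -"$(tail -n 1 bigfile | wc -c)" bigfile

cat bigfile
```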

sed '$i d' // Deleting a line with linenumber in var [duplicate]

This question already has answers here:
Delete line from file at specified line number in bourne shell [duplicate]
(3 answers)
Closed 6 years ago.
Hi, I have a line number:
i=10
Now I want to delete that line with sed
sed '$i d' file
But it looks like this won't work.
Any ideas?
In awk. First test material:
$ cat > foo
1
2
3
Set the i:
$ i=2
Awk it:
$ awk -v line="$i" 'NR!=line' foo
1
3
sed -i.bak "${i}d" data.txt
is what you're looking for.
Notes
The -i option tells sed to edit the file in place; a backup with extension .bak is created.
The double quotes let the shell expand the variable inside the sed expression.
To delete second line and show result:
sed -e '2d' data.txt
So your answer is:
sed -e "${i}d" file.txt > tmp && mv tmp file.txt
(Do not redirect back to file.txt directly; the shell truncates it before sed gets to read it.)
Just break out of the single quotes around the variable so the shell can expand it:
i=10
sed ''"$i"' d' file
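A minimal sketch of the variable-expansion fix on a throwaway file:

```shell
printf 'a\nb\nc\n' > file
i=2

# Double quotes let the shell substitute $i before sed sees the script
sed "${i}d" file
```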

Unix Shell Script: Remove common prefix from a variable [duplicate]

This question already has answers here:
Remove a fixed prefix/suffix from a string in Bash
(9 answers)
Closed 2 years ago.
I have 2 variables one which is holding the prefix and other one the complete string.
For e.g
prefix="TEST_"
Str="TEST_FILENAME.xls"
I want Str to be compared against prefix, the common characters 'TEST_' removed, and the output to be FILENAME.xls. Please advise if this can be done with minimal code. Thanks a lot.
Using BASH you can do:
prefix="TEST_"
str="TEST_FILENAME.xls"
echo "${str#$prefix}"
FILENAME.xls
If not using BASH you can use sed:
echo "$str" | sed "s/^$prefix//"
FILENAME.xls
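One quoting detail worth noting: quoting the prefix inside the expansion makes any glob characters in it literal. A sketch:

```shell
prefix="TEST_"
str="TEST_FILENAME.xls"

# Quote $prefix inside the expansion so *, ? etc. in it are taken literally
echo "${str#"$prefix"}"
```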
Try this:
$ Str=$(echo $Str | sed "s/^${prefix}//")
$ echo $Str
FILENAME.xls
Or using awk:
$ echo "$Str" | awk -F "$prefix" '{print $2}'
FILENAME.xls

Grep the first number in a line [duplicate]

This question already has answers here:
Printing only the first field in a string
(3 answers)
Closed 6 years ago.
I have a file that has lines of numbers and a file:
2 20 3 file1.txt
93 21 42 file2.txt
52 10 12 file3.txt
How do I use grep, awk, or some other command to just give me the first numbers of each line so that it will only display:
2
93
52
Thanks.
So many ways to do this. Here are some (assuming the input file is gash.txt):
awk '{print $1}' gash.txt
or using pure bash:
while read -r num rest
do
echo "$num"
done < gash.txt
or using "classic" sed:
sed 's/[ \t]*\([0-9]\{1,\}\).*/\1/' gash.txt
or using ERE with sed:
sed -E 's/[ \t]*([0-9]+).*/\1/' gash.txt
Using cut is problematic because we don't know if the whitespace is spaces or tabs.
By the way, if you want to add the total number of lines:
awk '{total+=$1} END{print "Total:",total}' gash.txt
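A quick run of the awk one-liners against the sample data from the question (gash.txt is the answerer's file name):

```shell
printf '2 20 3 file1.txt\n93 21 42 file2.txt\n52 10 12 file3.txt\n' > gash.txt

# First field of every line
awk '{print $1}' gash.txt

# Sum of the first fields: 2 + 93 + 52 = 147
awk '{total+=$1} END{print "Total:",total}' gash.txt
```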
You can use this:
grep -oE '^\s*[0-9]+' filename
Note that the match includes any leading whitespace; pipe through tr -d '[:blank:]' to strip it, or just use the awk answer.
You can use awk
awk '{print $1}' file

How to do a script in BASH which takes random text from file? [duplicate]

This question already has answers here:
What's an easy way to read random line from a file?
(13 answers)
Closed 8 years ago.
I have file like:
aaa
bbb
ccc
ddd
eee
And I want a script in BASH that takes a random line of this text file and returns it to me as a variable or something.
I hear it can be done with some AWK.
Any ideas?
UPDATE: I am now using this:
shuf -n 1 text.txt
Thank you all for the help!
I used a script like this to generate a random line from my signature-quotes file:
#!/bin/bash
QUOTES_FILE=$HOME/.quotes/quotes.txt
numLines=$(wc -l < "$QUOTES_FILE")
random=$(date +%N)
selectedLineNumber=$(( 10#$random % numLines + 1 ))
selectedLine=$(head -n "$selectedLineNumber" "$QUOTES_FILE" | tail -n 1)
echo -e "$selectedLine"
I would use sed with the p command:
sed -n '43p'
where 43 could be a variable.
I don't know much about awk, but I guess you could do almost the same thing there.
here's a bash way, w/o any external tools
IFS=$'\n'
set -- $(<"myfile")
len=$#
rand=$((RANDOM%len+1))
linenum=0
while read -r myline
do
(( linenum++ ))
case "$linenum" in
$rand) echo "$myline";;
esac
done <"myfile"
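For comparison, a sketch of the shuf approach next to a portable $RANDOM one (text.txt as in the question):

```shell
printf 'aaa\nbbb\nccc\nddd\neee\n' > text.txt

# GNU coreutils: sample one line uniformly at random
line=$(shuf -n 1 text.txt)

# Portable alternative: pick a line number with $RANDOM, print it with sed.
# $RANDOM tops out at 32767, which is fine for small files.
n=$(wc -l < text.txt)
line2=$(sed -n "$((RANDOM % n + 1))p" text.txt)

echo "$line"
echo "$line2"
```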
