Print specific lines of a file in Terminal [duplicate] - macos

This question already has answers here:
How can I extract a predetermined range of lines from a text file on Unix?
(28 answers)
Closed 7 years ago.
This seems pretty silly, but I haven't found a tool that does this, so I figured I'd ask just to make sure one doesn't exist before trying to code it up myself:
Is there any easy way to cat or less specific lines of a file?
I'm hoping for behavior like this:
# -s == start && -f == finish
# we want to print lines 5 - 10 of file.txt
cat -s 5 -f 10 file.txt
Even something a little simpler would be appreciated, but I just don't know of any tool that does this:
# print from line 10 to the end of the file
cat -s 10 file.txt
I'm thinking that both of these functionalities could be easily created with a mixture of head, tail, and wc -l, but maybe there are builtins that do this of which I'm unaware?

Yes, awk and sed can both do this.
For lines 5 to 10:
awk 'NR>4 && NR<11' file.txt
sed -n '5,10p' file.txt
For lines 10 through the last line:
awk 'NR>9' file.txt
sed -n '10,$p' file.txt
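The head and tail combination from the question composes just as easily; a minimal sketch, using the same file.txt:
# lines 5-10: take the first 10 lines, then the last 6 of those
head -n 10 file.txt | tail -n 6
# line 10 to the end: +10 means "starting at line 10"
tail -n +10 file.txt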

How can I delete empty lines from my output by grep? [duplicate]

This question already has answers here:
Remove empty lines in a text file via grep
(11 answers)
Closed 4 years ago.
Is there a way to remove empty lines with cat myfile | grep -w #something?
I'm looking for a simple way to remove empty lines from my output, as in the example presented above.
This really belongs on the codegolfing stackexchange because it's not related to how anyone would ever write a script. However, you can do it like this:
cat myfile | grep -w '.*..*'
It's equivalent to the more canonical grep ., but adds explicit .*s on either side so that the match always covers the complete line, thereby satisfying the word-boundary conditions imposed by -w.
You can pipe your output to awk to easily remove empty lines
cat myfile | grep -w #something | awk NF
EDIT: so... you just want cat myfile | awk NF?
If you have to use grep, you can do grep -v '^[[:blank:]]*$' myfile
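A quick demonstration of the approaches above, using a made-up sample.txt:
$ printf 'a\n\nb\n' > sample.txt
$ grep . sample.txt
a
b
$ awk NF sample.txt
a
b
The two are not quite identical, though: awk NF also drops lines containing only spaces or tabs, while grep . keeps them.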

Add a new line of text at the top of a file in bash shell [duplicate]

This question already has answers here:
Unix command to prepend text to a file
(21 answers)
Closed 4 years ago.
I want to write a bash script that takes my file:
READ_ME.MD
two
three
four
and makes it
READ_ME.MD
one
two
three
four
There are a bunch of similar StackOverflow questions, but I tried their answers and haven't been successful.
These are the bash scripts that I have tried and failed with:
test.sh
sed '1s/^/one/' READ_ME.md > READ_ME.md
Result: Clears the contents of my file
test.sh
sed '1,1s/^/insert this /' READ_ME.md > READ_ME.md
Result: Clears the contents of my file
test.sh
sed -i '1s/^/one\n/' READ_ME.md
Result: sed: 1: "READ_ME.md": invalid command code R
Any help would be appreciated.
You can use this BSD sed command:
sed -i '' '1i\
one
' file
-i '' saves the changes in place to the file; BSD sed requires an explicit (possibly empty) backup suffix after -i.
If you want to add a line at the top if same line is not already there then use BSD sed command:
line='one'
sed -i '' '1{/'"$line"'/!i\
'"$line"'
}' file
Your last example works for me with GNU sed. Based on the error message you added, I'd guess you're working on a Mac system? According to this blog post, a (possibly empty) suffix argument may be required after -i on Mac versions of sed:
sed -i '' '1s/^/one\n/' READ_ME.md
If this is bash or zsh, you can use process substitution like so.
% cat x
one
two
three
% cat <(echo "zero") x
zero
one
two
three
Redirect this into a temp file and copy it back to the original.
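A minimal sketch of that temp-file dance (x.tmp is just a scratch name):
cat <(echo "zero") x > x.tmp && mv x.tmp x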
There is always ed:
printf '%s\n' H 1i "one" . w | ed -s READ_ME.MD
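The same ed script is easier to read as a heredoc: H turns on verbose error messages, 1i inserts text before line 1, the lone . ends the inserted text, w writes the file, and q quits:
ed -s READ_ME.MD <<'EOF'
H
1i
one
.
w
q
EOF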

Delete last blank line from a file and save the file [duplicate]

This question already has answers here:
shell: delete the last line of a huge text log file [duplicate]
(4 answers)
Remove the last line from a file in Bash
(16 answers)
Closed 5 years ago.
How can I delete the last line of a file without reading the entire file or rewriting it in any temp file? I tried to use sed but it reads the entire file into memory which is not feasible.
I want to remove the blank line from the end and save the change to the same file.
Since the file is very big and reading through the complete file would be slow, I am looking for a better way.
A simple sed command to delete the last line:
sed '$d' <file>
In sed, $ addresses the last line.
You can try this awk command, which buffers each line and prints the previous one, then at the end prints the buffered last line only if it is non-empty:
awk 'NR > 1 {print t} {t = $0} END {if (NF) print}' file
Using head (note that the negative count requires GNU head; it won't work with BSD/macOS head):
cat file.txt | head -n -1 > new_file.txt
The easiest way I can think of is:
sed -i '${/^$/d}' filename
Edited so that it deletes the last line only if it is blank.
Your only option not using sed that won't read the entire file is to stat the file to get the size and then use dd to skip to the end before you start reading. However, telling sed to operate only on the last line does essentially that.
Take a look at
Remove the last line from a file in Bash
Edit: I tested
dd if=/dev/null of=<filename> bs=1 seek=$(echo $(stat --format=%s <filename> ) - $( tail -n1 <filename> | wc -c) | bc )
and it does what you want
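If GNU coreutils are available, truncate expresses the same idea more directly; a sketch under that assumption, with file.txt as a placeholder:
# byte length of the last line, including its trailing newline
n=$(tail -n 1 file.txt | wc -c)
# shrink the file in place by exactly that many bytes
truncate -s -"$n" file.txt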

Weird behavior when concatenate string in bash shell [duplicate]

This question already has answers here:
Bash script prints "Command Not Found" on empty lines
(17 answers)
Closed 6 years ago.
I have a file that stores version information, and I wrote a shell script to read two fields and combine them. But when I concatenate those two fields, it shows me a weird result.
version file:
buildVer = 3
version = 1.0.0
script looks like:
#!/bin/bash
verFile='version'
sdk_ver=`cat $verFile | sed -nE 's/version = (.*)/\1/p'`
build_ver=`cat $verFile | sed -nE 's/buildVer = (.*)/\1/p'`
echo $sdk_ver
echo $build_ver
tagname="$sdk_ver.$build_ver"
echo $tagname
The output shows
1.0.0
3
.30.0
When I set sdk_ver directly instead of reading it from the file, the script works fine. So I think the problem may be related to sed, but I couldn't figure out how to fix it.
Does anyone know why it acts like that?
You're getting this problem because of DOS line endings, i.e. a trailing \r on each line of the version file. tagname ends up as 1.0.0\r.3\r, and when it is echoed the carriage return moves the cursor back to the start of the line, so .3 overwrites the leading 1. and you see .30.0.
Use dos2unix or this sed command to remove \r first:
sed -i 's/\r//' version
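To confirm that stray \r characters really are the problem, cat -v shows them as ^M; if the file has DOS line endings you would see something like:
$ cat -v version
buildVer = 3^M
version = 1.0.0^M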
btw you can also simplify your script using pure BASH constructs like this:
#!/bin/bash
while IFS='= ' read -r k v; do
    declare $k="$v"
done < <(sed $'s/\r//' version)
tagname="$version.$buildVer"
echo "$tagname"
This will give output:
1.0.0.3
Alternate solution, with awk:
awk '/version/{v=$3} /buildVer/{b=$3} END{print v "." b}' version.txt
Example:
$ cat file.txt
buildVer = 3
version = 1.0.0
$ awk '/version/{v=$3} /buildVer/{b=$3} END{print v "." b}' file.txt
1.0.0.3

Unix: Perform grep command for every line from inputfile and save output to file [duplicate]

This question already has answers here:
How to make "grep" read patterns from a file?
(2 answers)
Closed 6 years ago.
I'm trying to write a small Unix script that searches one file for lines containing certain terms, where the terms are listed in a different file. I would like to save the output of this command to a new file.
I have a file containing terms (terms.txt) which has a term on each line:
term1
term2
term3
term4
For each of these terms, I want to find the line that contains this term in another file (scores.txt) and append the output of this to a new file (output.txt).
The script I have come up with thus far:
#!/bin/bash
for f in `cat terms.txt`;
do
grep -i $f scores.txt >> output.txt;
done
Somehow this does not seem to work properly.
Running just the grep command with the term hard coded does indeed give me the right line I'm searching for:
grep -i "term1" scores.txt
Also, a simple echo does give me the right terms:
for f in `cat terms.txt`; do echo $f; done
However, when I use the $f variable to run the same grep for every term in my terms.txt, it does not work.
Could someone help me out on this one?
Can you try:
grep -if terms.txt scores.txt > output.txt
Basically, grep's -f option treats each line of terms.txt as a pattern to search for in scores.txt.
If your terms.txt has CRLF line endings, try this:
grep -if <(tr -d '\r' < terms.txt) scores.txt > output.txt
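If the entries in terms.txt are literal strings rather than regular expressions, adding -F tells grep to match them verbatim, which avoids surprises with characters like . or *:
grep -iFf terms.txt scores.txt > output.txt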
