I'm trying to pipe line numbers from grep to sed.
First I was extracting the start and end line of what I want to print with sed:
grep -n "Start" file1 | cut -d: -f 1 | head -n 1
grep -n "End" file1 | cut -d: -f 1 | head -n 1
Now I need to use these numbers to print everything from Start to End by line number. E.g.
sed -ne '1,30w output1' file1
I'm not sure how this can be done, since piping the line numbers to sed would just be treated as its input, right?
Example:
Start
some text
some more text
End
Start
some text
some more text
End
As there's more than one Start and End, I cut off the rest of the line numbers from grep. Am I supposed to combine grep and sed, or is this not possible?
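I was imagining something like capturing the two numbers in shell variables first and then interpolating them into the sed address, but I don't know if that's the right approach:
start=$(grep -n "Start" file1 | cut -d: -f 1 | head -n 1)
end=$(grep -n "End" file1 | cut -d: -f 1 | head -n 1)
sed -n "${start},${end}p" file1 > output1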
You can do it without grep
sed -n '/Start/,/End/w output1' file1
should work.
It looks like you want to print from the first occurrence of Start to the first subsequent occurrence of End, inclusive. That'd just be:
awk '/Start/{found=1} found{print; if (/End/) exit}' file
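Against the sample input in the question, that should print just the first block:
$ awk '/Start/{found=1} found{print; if (/End/) exit}' file
Start
some text
some more text
End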
This might work for you (GNU sed):
sed -ne '/Start/,/End/w outputfile' -e '/End/q' file
This will write the lines between the first Start and the first End to outputfile and then quit, which also removes the need for grep.
If you must use grep then perhaps:
sed -n "$(grep -n "Start" file | cut -d: -f 1 | head -n 1),$(grep -n "End" file | cut -d: -f 1 | head -n 1)"'p' file
I have multiple URLs as input:
https://drive.google.com/a/domain.com/file/d/1OR9QLGsxiLrJIz3JAdbQRACd-G9ZfL3O/view?usp=drivesdk
https://drive.google.com/a/domain.com/file/d/1sEWMFqGW9p2qT-8VIoBesPlVJ4xvOzXD/view?usp=drivesdk
How can I create a sed command that returns only the file ID?
Desired output:
1OR9QLGsxiLrJIz3JAdbQRACd-G9ZfL3O
1sEWMFqGW9p2qT-8VIoBesPlVJ4xvOzXD
It looks like I need to start after /d/ and stop at /view, but I'm not quite sure how to do that.
I've tried: sed -e 's/d\(.*\)\/view/\1/'
I was able to do this with cut -d '/' -f 8
also awk -F/ '{print $8}' file worked, thanks!
Your command was almost right:
# Wrong
sed -e 's/d\(.*\)\/view/\1/'
# Better: removing the unmatched stuff, including the / after the d
sed -e 's/.*d\/\(.*\)\/view.*/\1/'
# Better still: using # as the delimiter to make the command easier to read
sed -e 's#.*d/\(.*\)/view.*#\1#'
# Alternative: using grep and cut when you don't know which field /d/ is in
some_stream | grep -Eo '/d/.*/view' | cut -d/ -f3
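As a quick check, running the # version on the first URL above should print just the ID:
$ echo 'https://drive.google.com/a/domain.com/file/d/1OR9QLGsxiLrJIz3JAdbQRACd-G9ZfL3O/view?usp=drivesdk' | sed -e 's#.*d/\(.*\)/view.*#\1#'
1OR9QLGsxiLrJIz3JAdbQRACd-G9ZfL3O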
I have been trying to get the head utility to display all but the last line of standard input. The actual code that I needed is something along the lines of cat myfile.txt | head -n $(($(wc -l)-1)). But that didn't work. I'm doing this on Darwin/OS X which doesn't have the nice semantics of head -n -1 that would have gotten me similar output.
None of these variations work either.
cat myfile.txt | head -n $(wc -l | sed -E -e 's/\s//g')
echo "hello" | head -n $(wc -l | sed -E -e 's/\s//g')
I tested out more variations and in particular found this to work:
cat <<EOF | echo $(($(wc -l)-1))
>Hola
>Raul
>Como Esta
>Bueno?
>EOF
3
Here's something simpler that also works.
echo "hello world" | echo $(($(wc -w)+10))
This one understandably gives me an illegal line count error. But it at least tells me that the head program is not consuming the standard input before passing stuff on to the subshell/command substitution, a remote possibility, but one that I wanted to rule out anyway.
echo "hello" | head -n $(cat && echo 1)
What explains the behavior of head and wc and their interaction through subshells here? Thanks for your help.
With GNU head, head -n -1 will give you all except the last line of its input.
head is the wrong tool. If you want to see all but the last line, use:
sed \$d
The reason that
# Sample of incorrect code:
echo "hello" | head -n $(wc -l | sed -E -e 's/\s//g')
fails is that wc consumes all of the input and there is nothing left for head to see. wc inherits its stdin from the subshell in which it is running, which is reading from the output of the echo. Once it consumes the input, it returns and then head tries to read the data...but it is all gone. If you want to read the input twice, the data will have to be saved somewhere.
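A minimal sketch of "saving it somewhere", assuming a temporary file is acceptable:
tmp=$(mktemp)                              # buffer the data so it can be read twice
cat myfile.txt > "$tmp"
head -n $(($(wc -l < "$tmp") - 1)) "$tmp"  # wc reads the copy, then head reads it again
rm -f "$tmp"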
Using sed:
sed '$d' filename
will delete the last line of the file.
$ seq 1 10 | sed '$d'
1
2
3
4
5
6
7
8
9
For Mac OS X specifically, I found an answer from a comment to this Q&A.
Assuming you are using Homebrew, run brew install coreutils then use the ghead command:
cat myfile.txt | ghead -n -1
Or, equivalently:
ghead -n -1 myfile.txt
Lastly, see brew info coreutils if you'd like to use the commands without the g prefix (e.g., head instead of ghead).
cat myfile.txt | echo $(($(wc -l)-1))
This works. It's overly complicated: you could just write echo $(($(wc -l)-1)) <myfile.txt or echo $(($(wc -l <myfile.txt)-1)). The problem is the way you're using it.
cat myfile.txt | head -n $(wc -l | sed -E -e 's/\s//g')
wc consumes all the input as it's counting the lines. So there is no data left to read in the pipe by the time head is started.
If your input comes from a file, you can redirect both wc and head from that file.
head -n $(($(wc -l <myfile.txt) - 1)) <myfile.txt
If your data may come from a pipe, you need to duplicate it. The usual tool to duplicate a stream is tee, but that isn't enough here, because the two outputs from tee are produced at the same rate, whereas here wc needs to fully consume its input before head can start. So instead, you'll need to use a single tool that can detect the last line, which is a more efficient approach anyway.
Conveniently, sed offers a way of matching the last line. Either printing all lines but the last, or suppressing the last output line, will work:
sed -n '$! p'
sed '$ d'
Here is a one-liner that can get you the desired output, and it can be used more generally for getting all lines from a file except the last n lines.
grep -n "" myfile.txt \ # output the line number for each line
| sort -nr \ # reverse the file by using those line numbers
| sed '1,4d' \ # delete first 4 lines (last 4 of the original file)
| sort -n \ # reverse the reversed file (correct the line order)
| sed 's/^[0-9]*://' # remove the added line numbers
Here is the above command in an actual single line and runnable (can't execute the above due to the added comments):
grep -n "" myfile.txt | sort -nr | sed '1,4d' | sort -n | sed 's/^[0-9]*://'
It's a little cumbersome, and this problem can be solved with more comprehensive commands like ghead, but when you can't or don't want to download such tools, it's nice to be able to do this with the more basic options. I've been in situations where it's simply not an option to get better tools.
awk 'NR>1{print p}{p=$0}'
For this job, an awk one-liner is a bit longer than a sed one.
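A quick demonstration (it should print everything but the last line):
$ seq 1 5 | awk 'NR>1{print p}{p=$0}'
1
2
3
4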
I have a text file files.txt with the following entries:
"/home/dilawar/a.txt","/home/dilawar/b.txt"
"/home/dilawar/aa.txt","/home/dilawar/bb.txt"
Now I wish to see the diff of files on line 1. I tried the following
head -n 1 files.txt | cut -d, -f 2,3 | sed "s/,/\t/g" | xargs -I files vimdiff files
It is not working. I replaced vimdiff with diff; that did not work either. However, this works:
head -n 1 files.txt | cut -d, -f 1 | xargs -I file vim file
How to pass file as an argument to diff as two separate file paths rather than a single string?
PS: To make matters worse, I have spaces in some of the file paths.
First take the first line, then replace the quote and comma characters with spaces, and feed the result to vimdiff via command substitution.
vimdiff $(head -1 files.txt | tr '",' ' ')
The elegant method above will not work with names that contain a space. The dirty one below will.
awk -F, 'NR==1{print "vimdiff",$1,$2}' files.txt | bash
Try this and see if it helps:
sed '1{s/,/ /; s/^/diff /;q}' files.txt|sh
Here I also escaped the whitespace in the file paths (the first sed command in the pipeline):
head -n 1 files.txt | sed "s/ /\\\\ /g" | sed "s/[\",]/ /g" |xargs vimdiff
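If the escaping feels too fragile, a plain bash sketch that reads line 1 itself and copes with spaces (assuming the paths never contain commas) could be:
IFS=',' read -r f1 f2 < files.txt   # split the first line on the comma
f1=${f1%\"}; f1=${f1#\"}            # strip the surrounding double quotes
f2=${f2%\"}; f2=${f2#\"}
vimdiff "$f1" "$f2"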
I want to get the second last line from the ls -l output.
I know that
ls -l|tail -n 2| head -n 1
can do this; I'm just wondering if sed can do this in just one command.
ls -l|sed -n 'x;$p'
It can't do third-to-last though, because sed only has one hold space, so it can only remember one older line. And since it processes the lines one at a time, it does not know whether a line will be next-to-last while processing it. awk could return third-to-last, because you can have an arbitrary number of variables there, but the script would be much longer than tail -n X | head -n 1.
In an awk one-liner:
echo -e "aaa\nbbb\nccc\nddd" | awk '{v[c++]=$0}END{print v[c-2]}'
ccc
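The same idea generalizes to the Nth-from-last line; for instance, a sketch for third-to-last (n=3 is just for illustration):
echo -e "aaa\nbbb\nccc\nddd" | awk -v n=3 '{v[NR]=$0} END{if (NR>=n) print v[NR-n+1]}'
bbb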
Try this to print the second-to-last line of a file:
sed -e '$!{h;d;}' -e x filename
tac filename | sed -n 2p
-- but it involves a pipe, too.
I'm trying to turn a txt file containing a generated key into one line. Example:
<----- key start ----->
lkdjasdjskdjaskdjasdkj
skdhfjlkdfjlkdsfjsdlfk
kldshfjlsdhjfksdhfksdj
jdhsfkjsdhfksdjfhskdfh
jhdfkjsdhfkjsdhfkjsdhf
<----- key stop ----->
I want it to look like:
lkdjasdjskdjaskdjasdkjskdhfjlkdfjlkdsfjsdlfkkldshfjlsdhjfksdhfksdjjdhsfkjsdhfksdjfhskdfhjhdfkjsdhfkjsdhfkjsdhf
Notice I also want the lines <----- key start -----> and <----- key stop -----> removed. How can I do this? Would this be done with sed?
tr -d '\n' < key.txt
Found on http://linux.dsplabs.com.au/rmnl-remove-new-line-characters-tr-awk-perl-sed-c-cpp-bash-python-xargs-ghc-ghci-haskell-sam-ssam-p65/
To convert multi-line output to a single space-separated line, use
tr '\n' ' ' < key.txt
I know this does not answer the detailed question. But it is one possible answer to the title. I needed this answer and my google search found this question.
tail -n +2 key.txt | head -n -1 | tr -d '\n'
Tail to remove the first line, head to remove the last line and tr to remove newlines.
If you're looking for everything you asked for in one sed, I have this...
sed -n '1h;2,$H;${g;s/\n//g;s/<----- key \(start\|stop\) ----->//g;p}' key.txt
But it's not exactly easily readable :) If you don't mind piping a couple of commands, you could use the piped grep, tr, sed, etc. suggestions in the rest of the answers you got.
An easy way would be to use cat file.txt | tr -d '\n'
grep '^[^<]' test.txt | tr -d '\n'
This might work for you (GNU sed):
sed -r '/key start/{:a;N;/key stop/!ba;s/^[^\n]*\n(.*)\n.*/\1/;s/\n//g}' file
Gather up lines between key start and key stop. Then remove the first and last lines and delete any newlines.
In vim, it's just :%s/\n//
I use this all the time to generate comma separated lists from lines. For sed or awk, check out the many solutions at this link:
http://www.unix.com/shell-programming-scripting/35107-remove-line-break.html
Example:
paste -s -d',' tmpfile | sed 's/,/, /g'
grep -v -e "key start" -e "key stop" /PATH_TO/key | tr -d '\n'
awk '/ key (start|stop) / {next} {printf("%s", $0)} END {print ""}' filename
Every other answer mentioned here converts the key to a single line, but the result that we get is not a valid key and hence I was running into problems.
If you also have the same issue, please try (key.txt being your file name):
awk -v ORS='\\n' '1' key.txt
Credit: https://gist.github.com/bafxyz/de4c94c0912f59969bd27b47069eeac0
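For example, the line breaks should come out as literal \n sequences rather than real newlines:
$ printf 'line1\nline2\nline3\n' | awk -v ORS='\\n' '1'
line1\nline2\nline3\n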
You may use ed (see man 1 ed) to join lines as well:
str='
aaaaa
<----- key start ----->
lkdjasdjskdjaskdjasdkj
skdhfjlkdfjlkdsfjsdlfk
kldshfjlsdhjfksdhfksdj
jdhsfkjsdhfksdjfhskdfh
jhdfkjsdhfkjsdhfkjsdhf
<----- key stop ----->
bbbbb
'
# for in-place file editing use "ed -s file" and replace ",p" with "w"
# cf. http://wiki.bash-hackers.org/howto/edit-ed
cat <<-'EOF' | sed -e 's/^ *//' -e 's/ *$//' | ed -s <(echo "$str")
H
/<----- key start ----->/+1,/<----- key stop ----->/-1j
/<----- key start ----->/d
/<----- key stop ----->/d
,p
q
EOF
# print the joined lines to stdout only
cat <<-'EOF' | sed -e 's/^ *//' -e 's/ *$//' | ed -s <(echo "$str")
H
/<----- key start ----->/+1,/<----- key stop ----->/-1jp
q
EOF