Using the output of one command as the argument for another - bash

I am attempting to grep a file and pipe the line number out to
vim +{lineNumber} filetoedit
unfortunately Vim throws an error saying
Vim: Warning: Input is not from a terminal
An example:
grep -nF 'Im looking for this' testfile.txt | cut -f1 -d: | xargs vim +{} testfile.txt

The command run by xargs inherits stdin from xargs, so its input is connected to the pipe from cut, not the terminal.
Assign the result to a variable and use that.
line=$(grep -nF 'Im looking for this' testfile.txt | cut -f1 -d: )
vim "+$line" testfile.txt

Related

Why doesn't this sed command put a newline

I have a file, ciao.py, that has only one line in it: print("ciao")
I want to do this via a pipe stream, and also: if I do cat ciao.py | sed 's/.*/&\n&/' it would work, but I want to do it in two separate parts, simulating the case where I want to print it and then pass that to further commands.
If I do this:
cat ciao.py | sed 's/.*/&\n/' |tee >(xargs echo) | xargs echo
it does not work. It prints print("ciao") print("ciao") on the same line. I don't understand why, since I am adding \n with sed.
I'd guess print("ciao") is appearing twice on the same line because xargs is calling echo with multiple strings, since by default xargs calls the command you give it with groups of input lines at a time.
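A quick way to see that default batching in isolation (with a made-up two-line input):
$ printf 'a\nb\n' | xargs echo
a b
$ printf 'a\nb\n' | xargs -n 1 echo
a
b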
Is this what you're trying to do?
$ cat ciao.py | sed 's/.*/&\n/' |tee >(xargs -n 1 echo) | xargs -n 1 echo
print(ciao)
print(ciao)
or:
$ cat ciao.py | sed 's/.*/&\n/' |tee >(cat) | xargs -n 1 echo
print(ciao)
print(ciao)
There are, of course, better ways to get that output from that input, e.g.:
$ sed 'p' ciao.py
print("ciao")
print("ciao")

How to figure out why my shell crashes?

When I enter this command:
$ grep -n 'some search' $file | awk '{print 1}' | sed 's/://' | xargs -I{} vim +"{}" $file
It will open, but after quitting vim, the shell crashes. It does not react to any input, not even Ctrl-C. I have no idea why; how can I find out? I suspect there is some infinite loop, because after a reboot there is a lot of clearing in the terminal. But I really have no clue about the reason.
PS:
alias grep: alias grep='grep --color=auto -P'
alias sed: alias sed='sed -E'
No more aliases.
vi changes the terminal settings. Because its stdin is connected to the pipe rather than to the terminal here, it cannot restore those settings properly when it exits, which is why the shell seems to hang afterwards.
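If you end up with an unresponsive terminal like that, blindly typing one of the following and pressing Enter (or Ctrl-J) usually brings it back:
stty sane
reset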
When you only want to go to the first match, you can use the line number with
vi +$(grep -n 'some search' .bashrc | cut -d: -f1 | head -1) .bashrc
This is still too complicated; you can jump to the match with
vi '+/some search' "$file"
When you want to go to the second match, just use n.

grep: return the string in between words

I am trying to use grep to filter out the RDS snapshot identifier from the rds describe-db-snapshots command output below:
"arn:aws:rds:ap-southeast-1:123456789:snapshot:rds:apple-pie-2018-05-06-17-12",
"rds:apple-pie-2018-05-06-17-12",
how to return the exact output as in
rds:apple-pie-2018-05-06-17-12
I tried using
grep -Eo ",rds:"
but was not able to get it to work.
The following awk may also help you here:
awk 'match($0,/^"rds[^"]*/){print substr($0,RSTART+1,RLENGTH-1)}' Input_file
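With the two sample lines above saved in Input_file, that should print the identifier once, taken from the line that starts with "rds:
$ awk 'match($0,/^"rds[^"]*/){print substr($0,RSTART+1,RLENGTH-1)}' Input_file
rds:apple-pie-2018-05-06-17-12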
Your grep -Eo ",rds:" is failing for several reasons:
You did not add a " in the string to match
Between the comma and rds you need to match the " character.
You are trying to match the comma that can be on the previous line
Your sample input is 2 lines (with a newline in between), perhaps the real input is without the newline.
You want to match until the next double quote.
You can support both input-styles (with/without newline) with
grep -Eo '(,|^)"rds:[^"]*' rdsfile |cut -d'"' -f2
You can do this in one command with
sed -rn 's/.*(,|^)"(rds:[^"]*).*/\2/p' rdsfile
EDIT: Manipulating stdout rather than the file works with similar commands:
yourcommand | grep -Eo '(,|^)"rds:[^"]*' |cut -d'"' -f2
# or
yourcommand | sed -rn 's/.*(,|^)"(rds:[^"]*).*/\2/p'
You can also test the original commands with yourcommand > rdsfile.
You might notice that rdsfile is missing data that you have seen on the screen; in that case add 2>&1:
yourcommand 2>&1 | grep -Eo '(,|^)"rds:[^"]*' |cut -d'"' -f2
# or
yourcommand 2>&1 | sed -rn 's/.*(,|^)"(rds:[^"]*).*/\2/p'

Execute piped shell commands in Tcl

I want to execute these piped shell commands in Tcl:
grep -v "#" inputfile | grep -v ">" | sort -r -nk7 | head
I try:
exec grep -v "#" inputfile | grep -v ">" | sort -r -nk7 | head
and get an error:
Error: grep: invalid option -- 'k'
When I try to pipe only 2 of the commands:
exec grep -v "#" inputfile | grep -v ">"
I get:
Error: can't specify ">" as last word in command
Update: I also tried {} and {bash -c '...'}:
exec {bash -c 'grep -v "#" inputfile | grep -v ">"'}
Error: couldn't execute "bash -c 'grep -v "#" inputfile | grep -v ">"'": no such file or directory
My question: how can I execute the initial piped commands in a tcl script?
Thanks
The problem is that exec does “special things” when it sees a > on its own (or at the start of a word) as that indicates a redirection. Unfortunately, there's no practical way to avoid this directly; this is an area where Tcl's syntax system doesn't help. You end up having to do something like this:
exec grep -v "#" inputfile | sh -c {exec grep -v ">"} | sort -r -nk7 | head
You can also move the entire pipeline to the Unix shell side:
exec sh -c {grep -v "#" inputfile | grep -v ">" | sort -r -nk7 | head}
Though to be frank this is something that you can do in pure Tcl, which will then make it portable to Windows too…
The > is causing problems here.
You need to escape it from tcl and the shell to make it work here.
exec grep -v "#" inputfile | grep -v {\\>} | sort -r -nk7 | head
or (and this is better since you have one less grep)
exec grep -Ev {#|>} inputfile | sort -r -nk7 | head
If you look in the directory you were running this from (assuming tclsh or similar) you'll probably see that you created an oddly named file (i.e. |) before.
In pure Tcl:
package require fileutil

set lines {}
::fileutil::foreachLine line inputfile {
    # keep only lines that contain neither "#" nor ">"
    if {![regexp {#|>} $line]} {
        lappend lines $line
    }
}
# sort numerically, descending, on the 7th whitespace-separated field
# (list index 6), then keep the first ten lines
set lines [lsort -decreasing -integer -index 6 $lines]
set lines [lrange $lines 0 9]
puts [join $lines \n]\n
(-double might be more appropriate than -integer)
Edit: I mistranslated the (1-based) -k index for the command sort when writing the (0-based) -index option for lsort. It is now corrected.
Documentation: fileutil package, if, join, lappend, lrange, lsort, package, puts, regexp, set

What is the proper method to pipe the output of the cut command into a grep command?

I am currently learning a little more about using Bash shell on OSX terminal. I am trying to pipe the output of a cut command into a grep command, but the grep command is not giving any output even though I know there are matches. I am using the following command:
cut -d'|' -f2 <filename.txt> > <temp.txt> | grep -Ff <temp.txt> <searchfile.txt> > <filematches.txt>
I was thinking that this should work, but most of the examples I have seen normally pipe grep output into cut. My goal was to cut field 2 from the file and use that as the pattern to search for in <searchfile.txt>. However, using the command produced no output.
When I generated the temp.txt first with the cut command and then ran the grep on it manually with no pipe, the grep ran fine. I am not sure why this is.
You can use process substitution here:
grep -Ff <(cut -d'|' -f2 filename.txt) searchfile.txt > filematches.txt
<(cut -d'|' -f2 filename.txt) feeds the cut command's output to grep as if it were a file.
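If you would rather keep an explicit pipe, reading the pattern list from /dev/stdin should also work (assuming your system provides /dev/stdin, as Linux and macOS do):
cut -d'|' -f2 filename.txt | grep -Ff /dev/stdin searchfile.txt > filematches.txt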
Okay, a reason this line doesn't behave as you expect
cut -d'|' -f2 <filename.txt> > <temp.txt> | grep -Ff <temp.txt> <searchfile.txt> > <filematches.txt>
is that the output of your cut is going to temp.txt. You're not sending anything to the pipe. Now, conveniently, a pipe also starts a new command, so it doesn't matter much -- grep runs and reads searchfile.txt.
But what are you trying to do? Here's what your command line is trying to do:
take the second pipe-delimited field from filename.txt
write it to a file
run grep ...
... using the contents of the file from step 2 as a grep search string (which isn't going to do what you think either, as you're effectively asking grep to look for the pattern match1\nmatch2...)
You'd be closer with
cut ... && grep ...
as that runs grep assuming cut completes successfully. Or you could use
grep -f `cut ...`
which would put the results on the command line. You need to mess with quoting, but you're still going to be looking for a line containing ALL of your match fields from cut.
I'd recommend maybe you mean something like this:
for match in `cut ...`
do
    grep "$match" searchfile.txt >> filematches.txt
done
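Note that the backtick word-splitting will break any field that contains spaces; a while read loop (a common, more robust shell pattern, shown here with the filenames from the question) avoids that:
cut -d'|' -f2 filename.txt | while read -r match
do
    grep "$match" searchfile.txt >> filematches.txt
done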
