open vi with passed file name - bash

I usually do it like this:
$ find -name testname.c
./dir1/dir2/testname.c
$ vi ./dir1/dir2/testname.c
It's too annoying to type the file name with its location again.
How can I do this in only one step?
I've tried
$ find -name testname.c | xargs vi
but I failed.

Use the -exec action of find:
$ find -name testname.c -exec vi {} \;
If your find returns multiple matches though, the files will be opened sequentially. That is, when you close one, it will open the next. You won't get them all queued up in buffers.
To get them all open in buffers, use:
$ vi $(find -name testname.c)
Is this really vi, by the way, and not Vim, to which vi is often aliased nowadays?
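The difference between the two -exec terminators is easy to see without launching an editor at all. A small sketch, using echo as a stand-in for vi so the invocation counts are visible (the scratch directory and file names here are made up for the demo):

```shell
# Set up a couple of matches in a scratch directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/dir1" "$tmp/dir2"
touch "$tmp/dir1/testname.c" "$tmp/dir2/testname.c"

# With \; the command runs once per file: two echo invocations, two lines.
find "$tmp" -name testname.c -exec echo {} \;

# With + all matches go to a single invocation: one echo, one line.
find "$tmp" -name testname.c -exec echo {} +

rm -rf "$tmp"
```

So find -name testname.c -exec vi {} + should open every match in one vi session, with all the files queued as arguments, rather than one sequential session per file.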

You can do it with the following commands in bash:
Either use
vi `find -name testname.c`
Or use
vi $(!!)
if you have already typed find -name testname.c
Edit: possible duplication: bash - automatically capture output of last executed command into a variable

The problem is that xargs takes over vi's standard input there (having consumed the pipe itself, it attaches vi's stdin to /dev/null instead), leaving no way for you to interact with the editor. You probably want command substitution instead:
$ vi $(find -name testname.c)
Sadly there's no simple fc or r invocation that can do this for you easily after you've run the initial find, although it's easy enough to add the characters to both ends of the command after the fact.

My favorite solution is to use vim itself:
:args `find -name testname.c`
Incidentally, Vim has extended shell globbing built in, so you can just say
:args **/testname.c
which will search recursively through the subdirectory tree.
Note also that Vim has filename completion on the command line, so if you know you are really looking for a single file, try
:e **/test
and then press Tab (repeatedly) to cycle through any matching filenames in the subdirectory tree.

For something a bit more robust than vi $(find -name testname.c) and the like, the following will protect against file names with whitespace and other interpreted shell characters (if you have newlines embedded in your file names, god help you). Inject this function into your shell environment:
# Find a file (or files) by name and open with vi.
function findvi()
{
    declare -a fnames=()
    readarray -t fnames < <(find . -name "$1" -print)
    if [ "${#fnames[@]}" -gt 0 ]; then
        vi "${fnames[@]}"
    fi
}
Then use like
$ findvi Classname.java
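As a variant (a sketch, not from the answer above): NUL-delimiting the find output also protects against newlines in file names, assuming bash 4.4+ for readarray -d ''. The function name findvi0 is hypothetical, and it honors $EDITOR (falling back to vi) so it's easy to exercise non-interactively:

```shell
# Like findvi, but NUL-delimited, so even newlines in names survive.
# Requires bash 4.4+ for readarray -d ''.
findvi0() {
    local -a fnames=()
    readarray -d '' -t fnames < <(find . -name "$1" -print0)
    if [ "${#fnames[@]}" -gt 0 ]; then
        "${EDITOR:-vi}" "${fnames[@]}"
    fi
}
```

Setting EDITOR=echo lets you check which names the function would hand to the editor before trusting it with vi itself.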

Related

How to 'cd' to the output of the 'find' command in terminal

Pretty much I want to cd to the output of the find command:
find ~ -name work_project_linux
cd the_output
In general the best way to execute an arbitrary command on the results of find is with find -exec. Curly braces {} are placeholders for the file names it finds, and the entire command ends with + or \;. For example, this will run ls -l on all of the files found:
find ~ -name work_project_linux -exec ls -l {} +
It doesn't work with some special commands like cd, though. -exec runs binaries, such as those found in /usr/bin, and cd isn't a binary. It's a shell builtin, a special type of command that the shell executes directly instead of calling out to some executable on disk. For shell builtins you can use command substitution:
cd "$(find ~ -name work_project_linux)"
This wouldn't work if find finds multiple files. It's only good for a single file name. Command substitution also won't handle some unusual file names correctly, such as those with embedded newlines—unusual, but legal.
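When multiple matches are possible, one hedge (relying on the -quit action, a GNU find extension also present in FreeBSD find) is to stop at the first match, so the command substitution yields exactly one path. A sketch against a scratch tree, since the directory names here are invented for the demo:

```shell
# Scratch setup: two directories with the same name at different depths.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/work_project_linux" "$tmp/b/work_project_linux"

# -print -quit stops after the first match, so exactly one path comes back.
first=$(find "$tmp" -name work_project_linux -print -quit)
cd "$first"
pwd

cd /
rm -rf "$tmp"
```

Which of the two candidates counts as "first" depends on find's traversal order, so this is only appropriate when any match will do.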

Bash capture output of command as unfinished input for command line

Can't figure out if this is possible, but it sure would be convenient. I'd like to get the output of a bash command and use it, interactively, to construct the next command in a Bash shell. A simple example of this might be as follows:
> find . -name myfile.txt
/home/me/Documents/2015/myfile.txt
> cp /home/me/Documents/2015/myfile.txt /home/me/Documents/2015/myfile.txt.bak
Now, I could do:
find . -name myfile.txt -exec cp {} {}.bak \;
or
cp `find . -name myfile.txt` `find . -name myfile.txt`.bak
or
f=`find . -name myfile.txt`; cp $f $f.bak
I know that. But sometimes you need to do something more complicated than just adding an extension to a filename, and rather than getting involved with ${f%%txt}.text.bak and so on, it would be easier and faster (increasingly so as the complexity grows) to just pop the result of the last command onto your interactive command line and use emacs-style editing keys to do what you want.
So, is there some way to pipe the result of a command back into the interactive shell and leave it hanging there? Or alternatively to pipe it directly to the cut/paste buffer and recover it with a quick Ctrl-V?
Typing M-C-e expands the current command line, including command substitutions, in-place:
$ $(echo bar)
Typing M-C-e now will change your line to
$ bar
(M-C-e is the default binding for the Readline function shell-expand-line.)
For your specific example, you can start with
$ cp $(find . -name myfile.txt)
which expands with shell-expand-line to
$ cp /home/me/Documents/2015/myfile.txt
From here, you have lots of options for completing your command line. Two of the simpler are
You can use history expansion (!#:1.bak) to build the target file name.
You can use brace expansion (/home/me/Documents/2015/myfile.txt{,.bak}).
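The brace-expansion form deserves a closer look, since it avoids retyping the path entirely: path{,.bak} expands to the path twice, once bare and once with the suffix appended. A quick check with printf:

```shell
# {,.bak} expands to two words: the name as-is, then with .bak appended,
# so cp path{,.bak} is equivalent to cp path path.bak.
printf '%s\n' /home/me/Documents/2015/myfile.txt{,.bak}
```

The empty first alternative in the braces is what produces the unchanged name.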
If you are on a Mac, you can use pbcopy to put the output of a command into the clipboard so you can paste it into the next command line:
find . -name myfile.txt | pbcopy
On an X display, you can do the same thing with xsel --input --clipboard (or --primary, or possibly some other selection name depending on your window manager). You may also have xclip available, which works similarly.

find folders and cd into them

I wanted to write a short script with the following structure:
find the right folders
cd into them
replace an item
So my problem is that I get the right folders from find, but I don't know how to do the action for every line find is giving me. I tried it with a for loop like this:
for item in $(find command)
do magic for item
done
but the problem is that this command will print the relative pathnames, and if there is a space within my path it will split the path at this point.
I hope you understood my problem and can give me a hint.
You can run commands with the -exec option of find directly:
find . -name some_name -exec your_command {} \;
One way to do it is:
find command -print0 |
while IFS= read -r -d '' item ; do
... "$item" ...
done
-print0 and read ... -d '' cause the NUL character to be used to separate paths, and ensure that the code works for all paths, including ones that contain spaces and newlines. Setting IFS to empty and using the -r option to read prevents the paths from being modified by read.
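A concrete sketch of that NUL-delimited loop, run against a name that a plain for-over-$(find) would mangle (the file names are made up for the demo):

```shell
# Scratch files, one with a space in its name.
tmp=$(mktemp -d)
touch "$tmp/plain.txt" "$tmp/with space.txt"

# Each path arrives in $item intact, spaces and all.
find "$tmp" -name '*.txt' -print0 |
while IFS= read -r -d '' item ; do
    printf 'got: %s\n' "$item"
done

rm -rf "$tmp"
```

A word-splitting for loop would have seen three items here instead of two.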
Note that the while loop runs in a subshell, so variables set within it will not be visible after the loop completes. If that is a problem, one way to solve it is to use process substitution instead of a pipe:
while IFS= ...
...
done < <(find command -print0)
Another option, if you have got Bash 4.2 or later, is to use the lastpipe option (shopt -s lastpipe) to cause the last command in pipelines to be run in the current shell.
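A minimal sketch of the lastpipe behavior. Note it only takes effect when job control is off, which is the default in non-interactive shells (scripts); in an interactive shell you would also need set +m:

```shell
#!/bin/bash
shopt -s lastpipe   # bash 4.2+; effective here because job control is off

count=0
printf '%s\n' one two three | while IFS= read -r line; do
    count=$((count + 1))
done

# The while loop ran in the current shell, so count survived the pipeline.
echo "$count"   # 3; without lastpipe this would print 0
```

The same script with the shopt line removed prints 0, because the loop's increments happen in a subshell that is discarded when the pipeline ends.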
If the pattern you want to find is simple enough and you have bash 4 you may not need find. In that case, you could use globstar instead for recursive globbing:
#!/bin/bash
shopt -s globstar
for directory in **/*pattern*/; do
(
cd "$directory"
do stuff
)
done
The parentheses make each operation happen in a subshell. That may have a performance cost, but usually doesn't, and it means you don't have to remember to cd back each time.
If globstar isn't an option (because your find instructions are not a simple pattern, or because you don't have a shell that supports it) you can use find in a similar way:
find . -whatever -exec bash -c 'cd "$1" && do stuff' _ {} \;
You could use + instead of ; to pass multiple arguments to bash each time, but doing one directory per shell (which is what ; would do) has similar benefits and costs to using the subshell expression above.
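Here is the + batching variant sketched out, with echo standing in for the real "do stuff" so it stays side-effect free (the pattern* directory names are invented for the demo):

```shell
# Scratch tree with two matching directories.
tmp=$(mktemp -d)
mkdir -p "$tmp/pattern_one" "$tmp/other/pattern_two"

# One bash per batch of directories; the for loop visits each argument,
# and the subshell keeps the cd from leaking between iterations.
find "$tmp" -type d -name 'pattern*' -exec bash -c '
    for d; do
        ( cd "$d" && echo "in: ${PWD##*/}" )
    done
' _ {} +

rm -rf "$tmp"
```

The _ placeholder fills $0 inside the inline script, so the found directories land in $1 onward, which is exactly what the bare for d loop iterates over.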

how to use vim to open every .txt file under a directory (with Bash)

I am trying the following to use vim to open every .txt file under the current directory.
find . -name "*.txt" -print | while read aline; do
read -p "start spellchecking fine: $aline" sth
vim $aline
done
Running it in bash produces the complaint
Vim: Warning: Input is not from a terminal
Vim: Error reading input, exiting...
Vim: Finished.
Can anyone explain what could possibly go wrong? Also, I intended to use read -p for a prompt before running vim, without success.
Try:
vim $( find . -name "*.txt" )
To fix your solution, you can (probably) do:
find . -name "*.txt" -print | while read aline; do
read -p "start spellchecking fine: $aline" sth < /dev/tty
vim $aline < /dev/tty
done
The problem is that the entire while loop is taking its input from find, and vim inherits that pipe as its stdin. This is one technique for getting vim's input to come from your terminal. (Not all systems support /dev/tty, though.)
With shopt -s globstar you can drop find entirely, so bash no longer runs vim inside a loop whose input comes from find:
shopt -s globstar
shopt -s failglob
for file in **/*.txt ; do
read -p "Start spellchecking fine: $file" sth
vim "$file"
done
Another idea is using
for file in $(find . -name "*.txt") ; do
(in case there are no filenames with spaces or newlines.)
Often the simplest solution is the best, and I believe this is it:
vim -o `find . -name \*.txt -type f`
The -type f is to ensure only regular files are opened, since there may be subdirectories whose names end in ".txt".
This will open each file in a separate window/buffer in vim. If you don't require this and are happy using :next and :prev to navigate through the files, remove "-o" from the suggested command line above.
The proper way to open all files in one single vim instance is (provided the number of files doesn't exceed the maximal number of arguments):
find . -name '*.txt' -type f -exec vim {} +
Another possibility that fully answers the OP, with the benefit that it is safe with regard to file names containing spaces or funny symbols:
find . -name '*.txt' -type f -exec bash -c 'read -p "start spellchecking $0"; vim "$0"' {} \;

putting find in a bash_profile function

I want to make a bash function in my .bash_profile that basically does find ./ -name $1. A very simple idea, but it seems not to work: my attempts don't print things the right way, i.e.:
find_alias() {
`find ./ -name $1 -print`
}
alias ff='find_alias $1'
With the above, if I do something like ff *.xml, I get the following one-liner:
bash: .pom.xml: Permission denied
My next attempt:
find_alias() {
echo -e `find ./ -name $1 -print`
}
alias ff='find_alias $1'
does find them all, but puts the output onto one massive long line. What am I doing wrong here?
find_alias() {
find ./ -name $1 -print
}
You don't need, nor want, the backticks. That would try to execute what the find command returns.
Backticks make the shell treat the output of what's inside them as a command to be executed. If you tried `echo "ls"`, it would first execute echo "ls", take the output, which is the text ls, and then execute it, listing all files.
In your case you are executing textual result of find ./ -name *.xml -print which is a list of matched files. Of course it has no sense because matched file names (in most cases) are not commands.
The output you are getting means two things:
- you tried to execute a script from pom.xml (as if you had typed ./pom.xml), which makes no sense
- you don't have execution rights for that file
So the simple solution to your problem, as @Mat suggested, is to remove the backticks and let the output of find be displayed in your terminal.
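Separately (an addition beyond the answer above): even with the backticks removed, the pattern should be quoted both inside the function and at the call site. Otherwise the calling shell expands *.xml against the current directory before find ever sees it. A sketch:

```shell
find_alias() {
    # "$1" is quoted so find receives the pattern itself,
    # not a name the calling shell already expanded.
    find ./ -name "$1" -print
}

# At the call site, quote the glob so it reaches the function literally:
#   find_alias '*.xml'
```

Called as find_alias *.xml in a directory containing pom.xml, the shell would rewrite the command to find_alias pom.xml, silently searching for that one name instead of the pattern.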
