Bash capture output of command as unfinished input for command line - bash

Can't figure out if this is possible, but it sure would be convenient. I'd like to get the output of a bash command and use it, interactively, to construct the next command in a Bash shell. A simple example of this might be as follows:
> find . -name myfile.txt
/home/me/Documents/2015/myfile.txt
> cp /home/me/Documents/2015/myfile.txt /home/me/Documents/2015/myfile.txt.bak
Now, I could do:
find . -name myfile.txt -exec cp {} {}.bak \;
or
cp `find . -name myfile.txt` `find . -name myfile.txt`.bak
or
f=`find . -name myfile.txt`; cp $f $f.bak
I know that. But sometimes you need to do something more complicated than just adding an extension to a filename, and rather than getting involved with ${f%%txt}.text.bak etc. it would be easier and faster (increasingly so as the complexity grows) to just pop the result of the last command onto your interactive shell command line and use emacs-style editing keys to do what you want.
So, is there some way to pipe the result of a command back into the interactive shell and leave it hanging there? Or alternatively, to pipe it directly to the cut/paste buffer and recover it with a quick Ctrl-V?

Typing M-C-e expands the current command line, including command substitutions, in-place:
$ $(echo bar)
Typing M-C-e now will change your line to
$ bar
(M-C-e is the default binding for the Readline function shell-expand-line.)
For your specific example, you can start with
$ cp $(find . -name myfile.txt)
which expands with shell-expand-line to
$ cp /home/me/Documents/2015/myfile.txt
which you can then augment further.
From here, you have lots of options for completing your command line. Two of the simpler ones are:
You can use history expansion (!#:1.bak) to expand to the target file name.
You can use brace expansion (/home/me/Documents/2015/myfile.txt{,.bak}).
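As a quick sketch, either of these lines yields the same final command (word 1 of the line typed so far is the source file name):
# history expansion: !# is the current line so far, :1 selects word 1
$ cp /home/me/Documents/2015/myfile.txt !#:1.bak
cp /home/me/Documents/2015/myfile.txt /home/me/Documents/2015/myfile.txt.bak
# brace expansion: name{,.bak} expands to the name once bare, once with .bak
$ cp /home/me/Documents/2015/myfile.txt{,.bak}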

If you are on a Mac, you can use pbcopy to put the output of a command into the clipboard so you can paste it into the next command line:
find . -name myfile.txt | pbcopy
On an X display, you can do the same thing with xsel --input --clipboard (or --primary, or possibly some other selection name depending on your window manager). You may also have xclip available, which works similarly.
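For example (assuming an X11 session with xsel installed):
find . -name myfile.txt | xsel --input --clipboard
# then paste the path into your next command line with the terminal's paste shortcut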

Related

shell script to batch replace specific string in .csv file

I want to replace some strings in my raw CSV file for further use. I searched the internet and created the script below, but it doesn't seem to work. I hope someone can help me.
The CSV file looks like this, and I want to delete the "^M" and "# Columns: " strings so that I can read the file:
# Task: bending1^M
# Frequency (Hz): 20^M
# Clock (millisecond): 250^M
# Duration (seconds): 120^M
# Columns: time,avg_rss12,var_rss12,avg_rss13,var_rss13,avg_rss23,var_rss23^M
#!/usr/bin/env bash
function scandir(){
    cd `dirname $0`
    echo `pwd`
    local cur_dir parent_dir workir
    workdir=$1
    cd ${workdir}
    if [ ${workdir}="/" ]
    then
        cur_dir=""
    else
        cur_dir=$(pwd)
    fi
    for dirlist in $(ls ${cur_dir})
    do
        if test -d ${dirlist}
        then
            cd ${dirlist}
            scandir ${cur_dir}/${dirlist}
            cd ..
        else
            vi ${cur_dir}/${dirlist} << EOF
            :%s/\r//g
            :%s/\#\ Columns:\ //g
            :wq
            EOF
        fi
    done
}
Your whole script can be reduced to just:
find "$workdir" -type f | xargs -n1 sed -i -e 's/\r//g; s/^# Columns://'
Notes on your script:
Check your scripts for validity on https://www.shellcheck.net/
The << EOF here-document is invalid. The closing word EOF has to start at the beginning of the line inside the script:
vi ${cur_dir}/${dirlist} << EOF
    :%s/\r//g
    :%s/\#\ Columns:\ //g
    :wq
EOF
#^^ no spaces in front of EOF, also no spaces/tabs after EOF
# the whole line needs to be exactly 'EOF'
There cannot be any spaces or tabs in front of it. Also, I don't think vi is the best tool to run substitutions on a file, and I don't know how it acts with tabs or spaces in front of the : commands. You may want to try running it without whitespace characters in front of the colons:
vi ${cur_dir}/${dirlist} << EOF
:%s/\r//g
:%s/\#\ Columns:\ //g
:wq
EOF
Backticks ` are deprecated, less readable and don't allow for easy nesting. Use $( ... ) command substitution instead.
echo `pwd` is just a redundant use of echo; just use pwd.
for dirlist in $(ls ...): parsing ls output is bad. Use the find command instead or, if you have to, shell globbing, i.e. for dirlist in *.
if [ ${workdir}="/" ] is invalid. This tests whether the string "${workdir}=/" is non-empty. Bash is whitespace aware; it needs a space between = and its operands. It should be if [ "${workdir}" = "/" ].
Always quote your variables. Don't write cd ${dirlist}; write cd "${dirlist}", and so on.
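Putting these notes together, a minimal corrected sketch of the whole script (keeping the scandir name, and assuming the goal is just to clean every regular file under the directory passed as the first argument, with GNU sed's -i as used above) could be:
#!/usr/bin/env bash
# Sketch: strip carriage returns and the "# Columns: " prefix
# from every regular file under the given directory.
scandir() {
    local workdir=$1
    find "$workdir" -type f -exec sed -i -e 's/\r//g; s/^# Columns: //' {} +
}
scandir "${1:-.}"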
The posted answers are correct, but I would recommend this syntax:
find "$1" -type f -name '*.csv' -exec sed -e 's/\r$//;s/^# Columns: //' -i~ {} +
Using + instead of \; at the end of the find command lets sed work on many files at once, reducing forks and making the whole job quicker.
The ~ after the -i option makes sed keep the original files, renaming them with a trailing tilde, instead of discarding them.
Using -type f ensures that only regular files are processed (no symlinks, directories, sockets, FIFOs, devices...).
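A quick way to see the backup behavior (a sketch, assuming GNU sed):
$ printf 'x\r\n' > a.csv
$ sed -e 's/\r$//' -i~ a.csv
$ ls a.csv*
a.csv  a.csv~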
You can reduce the entire script to one command, and you do not have to use Vim to process the files:
find "${workdir}" -name '*.csv' -exec sed -i -e 's/\r$//; /^#/d' '{}' \;
Explanation:
find <dir> -name <pattern> -exec <command> \; will search <dir> for files matching <pattern> and execute <command> on each file. You want to search for CSV files and do something with them (run a command on them).
The command run on every (CSV) file that was found will be sed -i -e 's/\r$//; /^#/d'. This means to edit the files in-place (-i) and run two transformations on them. s/\r$// will remove the ^M from each line, and /^#/d will remove all lines that start with a #.
'{}' is replaced with the files found by find and \; marks the end of the command run by find (see the find manual page for this).
Most of your script emulates part of the find command. That is not a good idea.
Also, for simple text processing, it is easier and faster to use sed instead of invoking an editor such as Vim.
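As a quick sanity check (using printf to fabricate one comment line and one data line, each with a trailing carriage return), the sed program behaves as described:
$ printf '# Task: bending1\r\n1,2,3\r\n' | sed -e 's/\r$//; /^#/d'
1,2,3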

echo/printf incomplete command at bash terminal

At the Bash prompt I can find files, then append xargs to do further stuff, e.g. find ... | xargs rm {}
However, sometimes there is a manual intermediate step: I use fzf to refine the find results.
I would like to use this filtered list of files to create an incomplete xargs command at the terminal.
For example, if my find command produces
file1 file2 file3, and my fzf narrows this down to file2 file3, I would like the script to create an incomplete line at the terminal like this:
file2 file3 |xargs -0 --other-standard-options
but I don't want the command to flush (I don't know what the correct term is) as if I had pressed Enter. I want to be able to complete the command myself (e.g. rm {}) after seeing the list of files printed on the line.
The find command will need to use the -print0 option.
I suppose the script would look something like this:
find . | fzf -m | *echo incomplete xargs command*.
The echo -n command is not what I want: it still passes the command to the Bash shell.
Maybe there is a better way of using find, then manually checking and filtering, then executing a command like rm or mv, and if so, that would be an acceptable answer.
The number of files I need to be able to deal with after the filtering is small (<100).
So it looks like fzf has some environment variables that can be used to control the set of data that comes back; there are options variables as well. I would take a look at the following environment variables:
FZF_DEFAULT_COMMAND and FZF_DEFAULT_OPTS.
FZF_DEFAULT_COMMAND will allow you to use a different command to produce your file set. Check out the documentation here.
This means you may not need to use find and pipe it to the fzf command. You could just start with the fzf command itself and set the proper environment variable.
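For example (a sketch; pick whatever generator command you like):
export FZF_DEFAULT_COMMAND='find . -type f'
fzf -m   # with no input pipe, fzf runs FZF_DEFAULT_COMMAND to get its list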
To use interactive mode to review and remove files on the same line, you could do something like the following:
find . -type f -name '*test*' | fzf -m | xargs rm
for f in $(find . -type f -name '*test*' | fzf -m); do
    # show the selected file and ask which command to run on it
    read -p ".... ${f} Enter command to insert on the dots "
    echo "$REPLY ${f}"
    # run the entered command with the file as its argument
    $REPLY "${f}"
done
The following seems to work, leaving the user to complete the command at the prompt:
out=$(find . |fzf -m )
prefix="echo ";
suffix="| xargs ";
files="$(echo "${out}" | sed -e 's:^\.:"\.:g' -e 's:$:":g'|tr '\n' ' ' )";
cmd="${prefix}${files}${suffix}" ;
read -e -i "$cmd"; eval "$REPLY";
Explanation: fzf outputs filenames, which I wrap in double quotes for the sake of safety; the terminal command I want to create looks like this:
echo "file1" "file2" "file3"|xargs ....
All the real credit goes to @meuh here.
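For reference, the same idea can be wrapped in a reusable function (a sketch; pickrun is my own name, not a standard command):
# Pick files with fzf, pre-fill an editable command line with them,
# and run whatever the user finishes typing.
pickrun() {
    local files cmd
    files=$(find . -type f | fzf -m | sed 's/.*/"&"/' | tr '\n' ' ')
    [ -n "$files" ] || return
    # read -e enables readline editing; -i pre-loads the editable buffer
    read -e -p '> ' -i "echo ${files}| xargs " cmd && eval "$cmd"
}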

Understand pipe and redirection command

I want to understand the real power of pipes and redirection. As per my understanding, | takes the output of one command and uses it as the input of the next. And > helps with redirecting output. If that is so,
find . -name "*.swp" | rm
find . -name "*.swp" > rm
why are these commands not working as expected? To me, the above commands mean:
Find all files recursively in the current directory whose extension is .swp.
Take the output of step 1 and remove all of the resulting files.
FYI, yes, I know how to accomplish this task. It can be done by passing the -exec flag:
find . -name "*.swp"-exec rm -rf {} \;
But as I already mentioned, I want to accomplish it with > or |.
If I am wrong and going in the wrong direction, please correct me and explain redirection and pipes. Where do we use these commands? Please don't mention simple textbook examples; I have read all of those. Try to explain something more involved.
I'll break this down by the three methods you have shown:
> will redirect all output from find into a file named rm (will not work, because you're just writing to a file).
| will pipe output from find into the rm command (will not work, because rm does not read on stdin)
-exec rm -rf {} \; will run rm -rf on each item ({}) that find finds (will work, because it passes the files as argument to rm).
You will want to use the -exec flag, or pipe into the xargs command (man xargs), rather than | or >, to achieve the desired behavior.
EDIT: as @dmckee said, you can also use the $() operator for command substitution, i.e. rm -rf $(find . -name "*.swp") (this will fail if you have a large number of files, due to argument length limits).
> simply redirects to a file named rm.
Piping via | to rm doesn't work because rm doesn't expect filenames via STDIN.
So you have to use xargs, which passes values from STDIN as arguments:
find . -name "*.swp"|xargs rm
This is dangerous, because a filename may contain characters your shell considers field separators ($IFS).
So, you use:
find . -name "*.swp" -print0|xargs -0 rm
which causes find to print the filenames \0-separated to STDOUT, and xargs to read the filenames \0-separated and pass them as arguments to rm.
Of course, the easiest way to achieve this would have been:
rm **/*.swp
assuming you use bash with the globstar option enabled (shopt -s globstar).
You should take some time and read about the basics of shell redirection again :) I think this is a good document: http://wiki.bash-hackers.org/howto/redirection_tutorial
I'll try to explain what went wrong for you:
find . -name "*.swp" | rm
This command redirects the find results, i.e. the stdout of find, to the stdin of the program rm. However, rm does not read on stdin (this is something you can read in the documentation of rm). rm is controlled via command line arguments, not via stdin. I think there is no way to make rm read from stdin at all. That's why nothing is deleted.
find . -name "*.swp" > rm
This command redirects newline-delimited find results (stdout of find) to a file called 'rm'. Again, nothing is deleted :)
Basically the <, >, >>, &>, &>> operators perform redirection from/to a file that actually exists in the file system. The pipe | redirects the standard output of one command to the standard input of another command. Simply put, there are no files involved here. However, this approach only makes sense if the program to the left of the pipe actually writes something to stdout and the program to the right of the pipe reads from stdin, and both programs understand each other, i.e. the reading program (the consumer) understands the output of the feeding program.
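A small illustration of the distinction:
# redirection writes to a real file on disk
printf 'a\nb\n' > letters.txt
wc -l < letters.txt    # reads the file: prints 2
# a pipe connects the two processes directly; no file is created
printf 'a\nb\n' | wc -l    # also prints 2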
Redirection creates a file. So your >rm example just creates a file named ./rm into which the output of your command is saved.
Pipes are essentially a shorthand. one | two is like one >tmp; two <tmp except without the (explicit) temporary file.
Of course, rm doesn't read file names from standard input, so cmd | rm is basically useless (apart from situations where the pipeline continues with yet another command which does something with the input which rm didn't read). If you want that, there's xargs.
find . -name "*.swp" | xargs rm

putting find in a bash_profile function

I want to make a bash function in my .bash_profile that basically does find ./ -name $1. Very simple idea, but it seems not to work. My tries don't print things the right way, i.e.:
find_alias() {
    `find ./ -name $1 -print`
}
alias ff='find_alias $1'
With the above, if I do something like ff *.xml, I get the following one-liner:
bash: .pom.xml: Permission denied
The following variant:
find_alias() {
    echo -e `find ./ -name $1 -print`
}
alias ff='find_alias $1'
does find them all, but puts the output onto one massive long line. What am I doing wrong here?
find_alias() {
    find ./ -name $1 -print
}
You don't need, nor want, the backticks. That would try to execute what the find command returns.
Backticks make the shell treat the output of what's inside them as a command to be executed. If you typed `echo "ls"`, it would first execute echo "ls", take the output, which is the text ls, and then execute that, listing all files.
In your case you are executing the textual result of find ./ -name *.xml -print, which is a list of matched files. Of course this makes no sense, because matched file names are (in most cases) not commands.
The output you are getting means two things:
you tried to execute .pom.xml as a script (as if you had typed ./pom.xml), which makes no sense
you don't have execute permission for that file
So the simple solution for your problem, as @Mat suggested, is to remove the backticks and let the output of find be displayed in your terminal.
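Putting it together, a minimal working version might look like this (quoting the pattern when calling it keeps the shell from expanding the glob before find sees it):
find_alias() {
    find ./ -name "$1" -print
}
alias ff='find_alias'   # arguments pass through to the function

# usage
ff '*.xml'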

open vi with passed file name

I usually use it like this:
$ find -name testname.c
./dir1/dir2/testname.c
$ vi ./dir1/dir2/testname.c
It's too annoying to type the file name with its location again.
How can I do this in only one step?
I've tried
$ find -name testname.c | xargs vi
but I failed.
Use the -exec parameter to find.
$ find -name testname.c -exec vi {} \;
If your find returns multiple matches though, the files will be opened sequentially. That is, when you close one, it will open the next. You won't get them all queued up in buffers.
To get them all open in buffers, use:
$ vi $(find -name testname.c)
Is this really vi, by the way, and not Vim, to which vi is often aliased nowadays?
You can do it with the following commands in bash:
Either use
vi `find -name testname.c`
Or use
vi $(!!)
if you have already typed find -name testname.c
Edit: possible duplication: bash - automatically capture output of last executed command into a variable
The problem is that xargs takes over all of vi's input there (and, having no other recourse, passes /dev/null on to vi, because the alternative is passing the rest of the file list), leaving no way for you to interact with it. You probably want to use command substitution instead:
$ vi $(find -name testname.c)
Sadly there's no simple fc or r invocation that can do this for you easily after you've run the initial find, although it's easy enough to add the characters to both ends of the command after the fact.
My favorite solution is to use vim itself:
:args `find -name testname.c`
Incidentally, Vim has extended shell globbing built in, so you can just say
:args **/testname.c
which will find recursively in the sub directory tree.
Note also that Vim has filename completion on the command line, so if you know you are really looking for a single file, try
:e **/test
and then press Tab (repeatedly) to cycle through the matching filenames in the subdirectory tree.
For something a bit more robust than vi $(find -name testname.c) and the like, the following will protect against file names with whitespace and other shell-interpreted characters (if you have newlines embedded in your file names, God help you). Inject this function into your shell environment:
# Find a file (or files) by name and open with vi.
function findvi()
{
    declare -a fnames=()
    readarray -t fnames < <(find . -name "$1" -print)
    if [ "${#fnames[@]}" -gt 0 ]; then
        vi "${fnames[@]}"
    fi
}
Then use like
$ findvi Classname.java
