I want to understand the real power of the pipe and redirection operators. As per my understanding, | takes the output of one command and feeds it as input to another, and > redirects output to a file. If that is so, why do the following commands not work as expected?
find . -name "*.swp" | rm
find . -name "*.swp" > rm
To me, the commands above mean:
1. Find all files with the .swp extension, recursively, in the current directory.
2. Take the output of step 1 and remove the resulting files.
FYI, yes, I know how to accomplish this task. It can be done by passing the -exec flag:
find . -name "*.swp"-exec rm -rf {} \;
But as I already mentioned, I want to accomplish it with > or |.
If I am wrong and going in the wrong direction, please correct me and explain redirection and pipes. Where do we use which? Please don't mention the simple textbook examples; I have read all of those. Try to explain something more complicated.
I'll break this down by the three methods you have shown:
> will redirect all output from find into a file named rm (will not work, because you're just writing the file list to a file).
| will pipe output from find into the rm command (will not work, because rm does not read filenames from stdin).
-exec rm -rf {} \; will run rm -rf on each item ({}) that find finds (will work, because it passes the files as arguments to rm).
You will want to use the -exec flag, or pipe into the xargs command (see man xargs), rather than | or > alone, to achieve the desired behavior.
EDIT: as @dmckee said, you can also use the $() operator for command substitution, i.e. rm -rf $(find . -name "*.swp") (this will fail if you have a large number of files, due to argument length limits).
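As an aside: if your find supports the -delete action (GNU and BSD find both do, though it is not in POSIX), you can skip rm entirely:
find . -name "*.swp" -delete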
> simply redirects to a file named rm.
Piping via | to rm doesn't work because rm doesn't expect filenames via STDIN.
So you have to use xargs, which passes values from STDIN as arguments:
find . -name "*.swp"|xargs rm
This is dangerous because a filename may contain whitespace or quote characters, which xargs treats as field separators.
So, you use:
find . -name "*.swp" -print0|xargs -0 rm
This causes find to print the filenames \0-separated to stdout, and xargs to read the \0-separated filenames and pass them as arguments to rm.
Of course, the easiest way to achieve this would have been:
rm **/*.swp
assuming you use bash.
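A caveat, assuming bash: the recursive ** glob only works when the globstar shell option is enabled (bash 4 and later); without it, ** behaves like a plain *:
shopt -s globstar
rm **/*.swp
Also note that vim swap files typically start with a dot (.foo.swp), which a bare glob won't match unless dotglob is enabled as well.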
You should take some time and read about the basics of shell redirection again :) I think this is a good document: http://wiki.bash-hackers.org/howto/redirection_tutorial
I'll try to explain what went wrong for you:
find . -name "*.swp" | rm
This command redirects the find results, i.e. the stdout of find, to the stdin of the program rm. However, rm does not read on stdin (this is something you can read in the documentation of rm). rm is controlled via command line arguments, not via stdin. I think there is no way to make rm read from stdin at all. That's why nothing is deleted.
find . -name "*.swp" > rm
This command redirects newline-delimited find results (stdout of find) to a file called 'rm'. Again, nothing is deleted :)
Basically the <, >, >>, &>, &>> operators perform redirection from/to an actual file in the file system (creating or truncating it as needed). The pipe | redirects the standard output of one command to the standard input of another command; simply spoken, there are no files involved here. However, this approach only makes sense if the program to the left of the pipe actually writes something to stdout, the program to the right of the pipe reads from stdin, and both programs understand each other, i.e. the reading program (the consumer) understands the output of the feeding program.
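A quick illustration of the difference, using commands that do read stdin:
ls -l > listing.txt   # redirection: stdout is written to an actual file on disk
ls -l | wc -l         # pipe: stdout of ls feeds stdin of wc; no file is created
The pipe works here because wc, unlike rm, actually reads from stdin.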
Redirection creates a file. So your >rm example just creates a file named ./rm into which the output of your command is saved.
Pipes are essentially a shorthand. one | two is like one >tmp; two <tmp, except without the (explicit) temporary file, and with both commands running concurrently.
Of course, rm doesn't read file names from standard input, so cmd | rm is basically useless (apart from situations where the pipeline continues with yet another command which does something with the input which rm didn't read). If you want that, there's xargs.
find . -name "*.swp" | xargs rm
Related
First I asked a question here: Unzip a file and then display it in the console in one step
It works and helped me a lot. (please read)
Now I have a second issue. I do not have a single zipped log file; I have a lot of them in different folders, which I need to find first. The files have the same names. For example:
/somedir/server1/log.gz
/somedir/server2/log.gz
/somedir/server3/log.gz
and so on...
What I need is a way to:
find all the files like: find /somedir/server* -type f -name log.gz
unzip the files like: gunzip -c log.gz
use grep on the content of the files
Important! The whole thing should be done in one step.
I cannot first store the extracted files in the filesystem because it is a readonly filesystem. I need somehow to connect, with pipes, the output from one command to the input of the next.
Before, the log files were in text format (.txt), so I did not have to unzip them first. In that case it was easy, e.g.:
find /somedir/server* -type f -name log.txt | xargs grep "term"
Now I have to deal with zipped files. That means, after I find the files, I need to first somehow unzip them and then send the contents to grep.
With one file I do:
gunzip -c /somedir/server1/log.gz | grep term
But for multiple files I don't know how to do it. For example, how do I pass the output of find to gunzip and then to grep?
Also, if there is another way / best practice to do that, it is welcome :)
find lets you invoke a command on the files it finds:
find /somedir/server* -type f -name log.gz -exec gunzip -c '{}' + | grep ...
From the man page:
-exec command {} +
This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of
invocations of the command will be much less than the number
of matched files. The command line is built in much the same
way that xargs builds its command lines. Only one instance of
{} is allowed within the command, and (when find is being
invoked from a shell) it should be quoted (for example, '{}')
to protect it from interpretation by shells. The command is
executed in the starting directory. If any invocation with
the + form returns a non-zero value as exit status, then
find returns a non-zero exit status. If find encounters an
error, this can sometimes cause an immediate exit, so some
pending commands may not be run at all. This variant of -exec
always returns true.
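To sketch how this answers the question, here are two variants that should both work (zgrep ships with the gzip package, but verify it exists on your system):
find /somedir/server* -type f -name log.gz -print0 | xargs -0 gunzip -c | grep "term"
find /somedir/server* -type f -name log.gz -exec zgrep "term" '{}' +
Note that the first variant concatenates all the decompressed streams before grep sees them, so matches are not labeled with the file they came from.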
If I execute
find . -name \*\.txt | while read f; do /bin/rm -i "$f"; done
rm asks:
/bin/rm: remove regular empty file ‘./some file name with spaces.txt’?
but the command exits without waiting for the answer. Why is that, and how can I fix it?
The other question on this subject, https://unix.stackexchange.com/questions/398872/rm-ir-does-not-work-inside-a-loop loops through the ls output, but in my case STDIN is the output of find, with multiple files, each potentially with spaces in them, so I can't switch to non-loop approach.
while IFS= read -r -d '' f <&3; do
  rm -i -- "$f"
done 3< <(find . -name '*.txt' -print0)
Put the find output on a different file descriptor than the one rm -i uses for input. Here, we're using FD 3 (3< on the redirection, and <&3 on the read itself).
Clear IFS, or leading and trailing spaces in your filenames will be stripped.
Pass -r to read, or literal backslashes in your filenames will be consumed by read rather than placed in the populated variable.
Use NUL-delimited streams, or filenames containing newlines (yes, they can happen!) will break your code. To do so, use -print0 on the find side, and -d '' on the read side.
Instead of looping over the content produced by the default -print action of find, use find's -exec action:
find . -name \*\.txt -exec rm -i -- {} +
In this command, {} represents the elements iterated over by find, and the + both delimits the command executed by find -exec and states that it should replace {} with as many elements as it can at once (as an alternative, you can use \; for the command to be executed once per element). The -- after rm -i makes sure the files listed by find won't be interpreted as rm options if they start with a dash, but correctly as filenames.
Not only is this more concise (although not necessarily easier to understand), but it also avoids the problems with special characters that naïve solutions would have.
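Alternatively, if you want the per-file prompt without involving rm -i at all, find has an -ok action: it behaves like -exec but asks for confirmation before each invocation, and since find's stdin here is still the terminal, the prompt works:
find . -name '*.txt' -ok rm {} \;
Note that -ok only supports the \; form, not +.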
Because rm inherits its standard input from the loop, which is connected to the output of find. So when rm -i prompts for confirmation, it reads its answer from that pipe, not from your terminal; if there is nothing left to read there, it gets end-of-file and exits on its own.
Don't use a pipe if you need the rm -i prompt. There are many alternatives. One of them is
rm -i $(echo foo)
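If you do want to keep the pipe-and-loop structure, a common workaround is to point rm's stdin back at the terminal via /dev/tty so the prompt can be answered:
find . -name '*.txt' | while IFS= read -r f; do rm -i -- "$f" </dev/tty; done
This still breaks on filenames that contain newlines; the -print0 approach shown above is the more robust fix.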
Can't figure out if this is possible, but it sure would be convenient. I'd like to get the output of a bash command and use it, interactively, to construct the next command in a Bash shell. A simple example of this might be as follows:
> find . -name myfile.txt
/home/me/Documents/2015/myfile.txt
> cp /home/me/Documents/2015/myfile.txt /home/me/Documents/2015/myfile.txt.bak
Now, I could do:
find . -name myfile.txt -exec cp {} {}.bak \;
or
cp `find . -name myfile.txt` `find . -name myfile.txt`.bak
or
f=`find . -name myfile.txt`; cp $f $f.bak
I know that. But sometimes you need to do something more complicated than just adding an extension to a filename, and rather than getting involved with ${f%%txt}.text.bak and so on, it would be easier and faster (increasingly so as the complexity grows) to just pop the result of the last command onto your interactive command line and use emacs-style editing keys to do what you want.
So, is there some way to pipe the result of a command back into the interactive shell and leave it hanging there? Or, alternatively, to pipe it directly to the cut/paste buffer and recover it with a quick Ctrl-V?
Typing M-C-e expands the current command line, including command substitutions, in-place:
$ $(echo bar)
Typing M-C-e now will change your line to
$ bar
(M-C-e is the default binding for the Readline function shell-expand-line.)
For your specific example, you can start with
$ cp $(find . -name myfile.txt)
which expands with shell-expand-line to
$ cp /home/me/Documents/2015/myfile.txt
From here, you have lots of options for completing your command line. Two of the simpler ones, illustrated below, are:
You can use history expansion (!#:1.txt) to expand to the target file name.
You can use brace expansion (/home/me/Documents/2015/myfile.txt{,.bak}).
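For instance, after the expansion above, the brace-expansion route finishes the backup in a couple of keystrokes:
cp /home/me/Documents/2015/myfile.txt{,.bak}
# which the shell expands to:
# cp /home/me/Documents/2015/myfile.txt /home/me/Documents/2015/myfile.txt.bak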
If you are on a Mac, you can use pbcopy to put the output of a command into the clipboard so you can paste it into the next command line:
find . -name myfile.txt | pbcopy
On an X display, you can do the same thing with xsel --input --clipboard (or --primary, or possibly some other selection name depending on your window manager). You may also have xclip available, which works similarly.
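For example, assuming xclip is installed:
find . -name myfile.txt | xclip -selection clipboard
After this, the path is on the clipboard and can be pasted into the next command line.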
I found this command line when I was checking a Bash script! My question is: what does this command do, and is it correct?
find / -name "*.src" | xargs cp ~/Desktop/Log.txt
It finds the files or directories with a .src extension in / and then hands them to cp as extra arguments after the file ~/Desktop/Log.txt.
for example if the output of the find command is
file.src
directory1.src
file2.src
xargs command will execute cp ~/Desktop/Log.txt file.src directory1.src file2.src which does not make any sense.
What the command is
find / -name "*.src"
Explanation: Recursively find all regular files, directories, and symlinks in /, for which the filename ends in .src
|
Explanation: Redirect stdout from the command on the left side of the pipe to stdin of the command on the right side
xargs cp ~/Desktop/Log.txt
Explanation: Build a cp command, taking arguments from stdin and appending them to a space-delimited list at the end of the command. If the pre-defined buffer space of xargs (generally bounded by ARG_MAX) is exhausted, multiple cp commands will be executed,
e.g. xargs cp arg100 ... arg900 could be processed as cp arg100 ... arg500; cp arg501 ... arg900
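You can observe this batching behavior directly with the -n option of xargs, which caps the number of arguments per invocation:
printf '%s\n' a b c d e | xargs -n 2 echo
# prints:
# a b
# c d
# e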
Note: the behavior of cp varies a lot depending on its arguments:
If ~/Desktop/Log.txt is the only argument, cp will print an error to stderr.
If the last argument to cp is a directory:
All preceding arguments that are regular files will be copied into it.
Nothing will happen for preceding arguments that are directories, except that an error will be printed to stderr.
If the last argument is a regular file:
If there are 2 total arguments to cp and the first one is a regular file, then the contents of the second argument file will be overwritten by the contents of the first.
If there are more than 2 total arguments, cp will print an error to stderr.
So all in all, there are too many variables here for the behavior of your command to really ever be precisely defined. As such, I suspect, you wanted to do something else.
What it probably should have been
My guess is the author of the command probably wanted to redirect stdout to a log file (note the log file will be overwritten each time you run the command)
find / -name "*.src" > ~/Desktop/Log.txt
Additionally, if you are just looking for regular files with the .src extension, you should also add the -type f option to find
find / -type f -name "*.src" > ~/Desktop/Log.txt
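If the intent was to both watch the results scroll by and save them, tee writes to the file and to the screen in one step:
find / -type f -name "*.src" | tee ~/Desktop/Log.txt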
It finds every file or directory that matches the pattern *.src under /.
Then it tries to copy the file ~/Desktop/Log.txt to the results of the previous command (which only works if the last result is a directory).
Duplicate
Unable to remove everything else in a folder except FileA
I guess that it is slightly similar to this:
delete [^Music]
However, it does not work.
Put the following command in your ~/.bashrc:
shopt -s extglob
You can now delete everything in the folder except the Music folder with
rm -r !(Music)
Please be careful with this command.
It is powerful, but dangerous too.
I recommend always testing it first with
echo rm -r !(Music)
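Note that editing ~/.bashrc only affects new shells; to try the option in your current session, enable it directly first:
shopt -s extglob
echo rm -r !(Music)   # dry run: lists what would be removed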
The command
rm (ls | grep -v '^Music$')
should work. If some of your "files" are also subdirectories, then you want to recursively delete them, too:
rm -r (ls | grep -v '^Music$')
Warning: rm -r can be dangerous and you could accidentally delete a lot of files. If you would like to confirm what you will be deleting, try looking at the output of
ls | grep -v '^Music$'
Explanation:
The ls command lists directory contents; without an argument, it defaults to the current directory.
The pipe symbol | redirects output to another command; when the output of ls is redirected in this way, it prints filenames one-per-line, rather than in a column format as you would see if you type ls at an interactive terminal.
The grep command matches lines for patterns; the -v switch means to print lines that don't match the pattern.
The pattern ^Music$ means to match a line starting and ending with Music -- that is, only the string Music; the effect of the ^ (beginning of line) and $ (end of line) characters can also be achieved with the -x switch, as in grep -vx Music.
The syntax command (subcommand) is fish's way of taking the output of one command and passing it over as command-line arguments to another.
The rm command removes files. By default, it does not remove directories, but the -r ("recursive") option changes that.
You can learn about these commands and more by typing man command, where command is what you want to learn about.
So I was looking all over for a way to remove all files in a directory except for some directories and files I wanted to keep around. After much searching I devised a way to do it using find.
find -E . -regex './(dir1|dir2|dir3)' -and -type d -prune -o -print -exec rm -rf {} \;
Essentially it uses a regex to select the directories to exclude from the results, then removes the remaining files. Just wanted to put it out here in case someone else needs it.
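One portability note, in case you are on GNU find rather than BSD find: -E is a BSD flag, and the GNU spelling of the same idea uses -regextype:
find . -regextype posix-extended -regex './(dir1|dir2|dir3)' -and -type d -prune -o -print -exec rm -rf {} \;
Either way, run it once with the -exec part removed first; the -print action then shows exactly what would be deleted.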