Collect jars into a tar without directory - bash

I have a large project that creates a large number of jars in paths similar to project/subproject/target/subproject.jar. I want a command that collects all the jars into one compressed tar, but without the directories. The command I have come up with so far is:
find project -name \*.jar -exec tar -rvf Collectors.tar.gz -C $(dirname {}) $(basename {}) \;
but this isn't quite working as intended: the directories are still there.
Does anyone have any ideas for how to resolve this issue?

Your command is quite close, but the problem is that Bash executes $(dirname {}) and $(basename {}) before find ever runs; so your command expands to this:
find project -name \*.jar -exec tar -rvf Collectors.tar.gz -C . {} \;
where the -C . is a no-op and find substitutes {} with the full relative directory+filename.
One general-purpose way to fix this sort of thing is to wrap up the argument to -exec in a Bash one-liner, so you invoke Bash for each individual file, and let it execute the dirname and basename at the right time:
find project -name \*.jar -exec bash -c 'tar -rvf Collectors.tar.gz -C "$(dirname "$1")" "$(basename "$1")"' '' '{}' \;
(The lone '' matters: bash -c uses the first argument after the script as $0, so the empty string fills $0 and each found path arrives as $1.)
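If that argument layout looks odd, this tiny standalone example shows how bash -c assigns its arguments:
bash -c 'echo "$0 / $1"' zero one
# prints: zero / one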
In your specific case, however, I'd point you to find's -execdir action, which is the same as -exec except that it cd's into the file's directory first. So you can simply write:
find project -name '*.jar' -execdir tar -rvf "$PWD/Collectors.tar.gz" '{}' \;
(Note the $PWD part, which makes sure that you write to the Collectors.tar.gz in the directory you started from, rather than in the directory that find -execdir will cd into.)
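One caveat worth flagging: tar generally cannot append (-r) to a gzip-compressed archive, so despite the .gz name the commands above build a plain, uncompressed tar. A sketch of a two-step variant that compresses at the end:
find project -name '*.jar' -execdir tar -rvf "$PWD/Collectors.tar" '{}' \;
gzip Collectors.tar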

Related

How to use the pwd as a filename

I need to go through a whole set of subdirectories, and each time I find a subdirectory named 0, I have to go inside it. Once there, I need to execute a tar command to archive some files.
I tried the following
find . -type d -name "0" -exec sh -c '(cd {} && tar -cvf filename.tar ../PARFILE RS* && cp filename.tar ~/home/directoryForTransfer/)' ';'
which seems to work. However, because this is done in many directories named 0, it always overwrites the previous filename.tar (and I lose the information about where it was created).
One way to solve this would be to use the $PWD as the filename (+ .tar at the end).
I tried double quotes, backticks, etc., but I never managed to get the correct filename:
"$PWD"".tar", `$PWD.tar`, etc.
Any idea? Any other way is ok, as long as I can link the name of the file with the directory it was created.
I'd need this to transfer the directoryToTransfer easily from the cluster to my home computer.
You can try "${PWD//\//_}.tar". However, you have to use bash -c instead of sh -c, because the ${var//pattern/replacement} substitution is a bashism.
Edit:
So now your code should look like this:
find . -type d -name "0" -exec bash -c 'cd {} && tar -cvf filename.tar ../PARFILE RS* && cp filename.tar ~/home/directoryForTransfer/"${PWD//\//_}.tar"' ';'
I personally don't really like using the -exec flag of find, as it makes the code less readable and also forks a new process for each file. I would do it like this, which should work unless a filename somewhere contains a newline (which is very unlikely):
while IFS= read -r dir; do
  ( cd "$dir" && tar -cvf filename.tar ../PARFILE RS* && cp filename.tar ~/home/directoryForTransfer/"${PWD//\//_}.tar" )
done < <(find . -type d -name "0")
(Note the subshell around cd, so each iteration starts again from the original directory, and "$dir" in place of the stray {} left over from the -exec variant.)
But this is just my personal preference. The -exec variant should work too.
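If filenames with newlines ever do become a concern, a NUL-delimited variant of the loop (a sketch, assuming GNU find and bash) handles them too:
while IFS= read -r -d '' dir; do
  ( cd "$dir" && tar -cvf filename.tar ../PARFILE RS* && cp filename.tar ~/home/directoryForTransfer/"${PWD//\//_}.tar" )
done < <(find . -type d -name "0" -print0)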
You can use find's -execdir action, which runs the command from the directory containing each match, to simplify the tar command. Two details need care: -execdir runs in the parent of the matched 0 directory, and a glob like RS* needs a shell to expand it per directory, so wrap the tar invocation in sh -c:
find . -type d -name "0" -execdir sh -c 'cd "$1" && tar -cvf filename.tar RS*' _ {} \;
If you want the tar file to be created in ~/home/directoryForTransfer/ then use:
find . -type d -name "0" -execdir sh -c 'cd "$1" && tar -cvf ~/home/directoryForTransfer/filename.tar RS*' _ {} \;

Extract tarball into the same directory

I have a hierarchy of folders which contains a lot of tarballs. I need to write a script which recursively goes to each directory and extracts the tarball into the corresponding directory.
I tried
find ./ -name "*.tar.gz" -exec /bin/tar -zxvf {} \;
The command ran, but all the tarballs were extracted into the pwd, not into their corresponding directories.
Please assist me on this if possible. Thanks :)
You can use find like this:
find . -name "*.tar.gz" -exec bash -c 'd=$(dirname "{}") && b=$(basename "{}") && cd "$d" && tar zxvf "$b"' \;
EDIT: A shorter version of the above find command:
find . -name "*.tar.gz" -execdir tar zxvf "{}" \;

Bash '.' vs proper dir string

I have been running a number of find commands and have noticed something that seems odd about how bash handles . versus a directory given as an explicit path.
find . -type f -exec sh -c 'cd $(dirname "$0") && aunpack "$0"' {} \;
acts completely differently to
find [current dir] -type f -exec sh -c 'cd $(dirname "$0") && aunpack "$0"' {} \;
What gives?
Does bash treat '.' and a string specified directory path differently. Isn't '.' a substitute for the current dir?
What find does is append the rest of the path to the location passed as an argument.
I.e., if you are in the directory "/home/user/find":
find .
Prints:
.
./a
./b
But if you try:
find /home/user/find
It prints:
/home/user/find
/home/user/find/a
/home/user/find/b
So find appends the rest of the path (/a, /b...) to the argument (. or /home/user/find).
You could use the pwd command instead of the . and it will behave the same as the absolute-path version. This is why your two commands acted differently: after the cd, the relative path that find put in $0 (e.g. ./sub/file) no longer resolves from the new working directory, whereas an absolute path still does.
find "`pwd`" -type f -exec sh -c 'cd "$(dirname "$0")" && aunpack "$0"' {} \;
@arutaku has pinpointed the source of the problem; let me point out another possible solution. If your version of find supports it, the -execdir primary does what you want very simply: it cd's into each file's directory, then executes the command with {} expanded to just ./filename (no leading path):
find . -type f -execdir aunpack {} \;
Bash has nothing to do with it; it's the logic of find. It does not try to expand or normalize the path(s) you give; it uses them verbatim, not only for . but for any path specification (e.g. ../../my/other/project).
I find this reasonable, because any conversion would be more complicated than the current behavior: at a minimum we would have to decide whether symbolic links are resolved during conversion, and whenever we wanted a relative path back for some reason, we would have to relativize it again.
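For completeness, a sketch that keeps find . working by splitting each path, so the command always receives a name that resolves from its new working directory:
find . -type f -exec sh -c 'cd "$(dirname "$1")" && aunpack "$(basename "$1")"' _ {} \;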

bash script rm cannot delete folder created by php mkdir

I cannot delete a folder created by PHP's mkdir:
for I in "$@"
do
find "$I" -type f -name "sess_*" -exec rm -f {} \;
find "$I" -type f -name "*.bak" -exec rm -f {} \;
find "$I" -type f -name "Thumbs.db" -exec rm -f {} \;
find "$I" -type f -name "error.log" -exec sh -c ': > "$1"' _ {} \;
find "$I" -type f -path "*/cache/*" -name "*.*" -exec rm -f {} \;
find "$I" -path "*/uploads/*" -exec rm -rdf {} \;
done
I want to delete all files and folders under /uploads/. Please help me, thanks...
You should consider changing your find commands to use the -o operator to join the conditions together, since most of the -exec actions are identical. This will avoid recursing the file system repeatedly.
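For instance, a sketch that merges the plain rm -f rules into one traversal (the cache and uploads rules would still need their own -path tests):
find "$I" -type f \( -name 'sess_*' -o -name '*.bak' -o -name 'Thumbs.db' \) -exec rm -f {} +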
The other answers address your concern about php mkdir. I'll just add that the problem has nothing to do with the fact that the folder was created with php mkdir rather than any other code or command; it is down to ownership and permissions.
Most likely, php is running in apache or another http server under a different user from the one invoking the bash script, or the files uploaded into uploads/ are owned by the http server's user rather than the invoking user.
Make sure that you run the bash script under the same user as your http server.
To find out which user owns which file do:
ls -l
If you run your bash script as root, you should be able to delete it anyway, but that is not recommended.
Update
To run it as root from a Nautilus script, use the following as your Nautilus script:
gksudo runmydeletescript
Then put all the other code into another file with the same path as whatever you have put for runmydeletescript and run chmod +x on it. This is extremely dangerous!
You should probably add -depth to the command, so that sub-directories of uploads are deleted before the directory itself.
I worry about the -path but I'm not familiar with it.
Also consider using + instead of \; to reduce the number of commands executed.
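A sketch combining both suggestions (-depth makes find list a directory's contents before the directory itself, and + batches many paths into one rm invocation):
find "$I" -depth -path "*/uploads/*" -exec rm -rf {} +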

Bash script, run echo command with find, set a variable and use that variable

I want to run two commands, where the second command depends on the first.
Is there a way to do something like this?
find . -name '*.txt' -exec 'y=$(echo 1); echo $y' {} \;
...
And actually, I want to do this.
Run the find command, change to the directory that the file is in, and then run the command on the file in the current directory.
find . -name '*.txt' -exec 'cd basedir && /mycmd/' {} \;
How do I do that?
find actually has a primary that switches to each file's directory and executes a command from there:
find . -name '*.txt' -execdir /mycmd {} \;
Find's -exec option expects an executable with arguments, not a shell command string, but you can use bash -c to run an arbitrary shell command, like this:
find . -name '*.txt' -exec bash -c 'cd "$(dirname "$1")" && pwd && /mycmd "$(basename "$1")"' _ {} \;
I have added pwd to confirm that mycmd executes in the right directory; you can remove it. dirname gives you the directory of each file and basename gives you the filename. If you omit basename, your command will receive (as $1) the pathname of each file relative to the directory where you ran find, which, after the cd, is no longer mycmd's current directory, so mycmd will likely fail to find the file. If you want your command to receive an absolute pathname, you can try this:
find "$PWD" -name '*.txt' -exec bash -c 'cd "$(dirname "$1")" && pwd && /mycmd "$1"' _ {} \;
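For the first, more general question (a second command that depends on the first), the same bash -c pattern applies; a minimal sketch echoing the example from the question:
find . -name '*.txt' -exec bash -c 'y=$(echo 1); echo "$y: $1"' _ {} \;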
