I've got about 150 directories that I want to rename (and commit) in a git repo. The paths look something like this:
/src/app/testing/linux/testindexscreen_home/image.png
/src/app/testing/osx/testindexscreen_home/image.png
/src/app/testing/win/testindexscreen_home/image.png
So I'd like to run git mv and then commit on all paths that match indexscreen_, to remove that part of the string.
I'm on a Windows box, using Git Bash, and at the moment my find & git mv command is trying to move each folder into itself. I'm not sure how to remove the matched string:
find . -name '*indexscreen_*' -exec sh -c 'file={}; git mv $file ${file/"*indexscreen_*"/}' \;
With the commit included, I think it needs to be:
find . -name '*indexscreen_*' -exec sh -c 'file={}; git mv $file ${file/"*indexscreen_*"/}; git commit -m "Renamed $file"' \;
So I'd like to have that bash command turn those paths into:
/src/app/testing/linux/testhome/image.png
/src/app/testing/osx/testhome/image.png
/src/app/testing/win/testhome/image.png
And have commit messages like "Renamed testhome"
So my understanding of bash hasn't improved much here (I need to find some good docs), but I have figured out the problem with what I was running.
Changing my mv command to the following successfully renamed the directories as I wanted:
find . -name '*indexscreen_*' -exec sh -c 'file={}; git mv $file ${file/indexscreen_/}' \;
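For anyone who hits the same wall: ${file/indexscreen_/} is bash's pattern substitution, and quoting the pattern with asterisks (as in my original attempt) makes it a literal string that never matches, so the expansion returns the path unchanged and git mv tries to move the folder into itself. A quick sketch with a made-up path:
file=./src/app/testing/linux/testindexscreen_home
echo "${file/indexscreen_/}"       # -> ./src/app/testing/linux/testhome
echo "${file/"*indexscreen_*"/}"   # unchanged: the quoted asterisks are literal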
I think this does what you want:
find . -name "*indexscreen_*" -a -type d -exec sh -c 'n={}; git mv {} ${n/indexscreen_/}' \;
but my find complains when the directories disappear out from under it while it is still running.
The same solution using xargs does not complain:
find . -name "*indexscreen_*" -a -type d |xargs -i sh -c 'n={}; git mv {} ${n/indexscreen_/}'
After reading multiple answers on Stack Overflow I came up with the following solution to read directory paths from find's output:
find "$searchdir" -type d -execdir test -d {}/.git \; -prune -print0 | while read -r -d $'\0' dir; do
# do stuff
done
However, most sources recommend something like the following approach:
while IFS= read -r -d '' file; do
    some command "$file"
done < <(find . -type f -name '*.mp3' -print0)
Why are they using process substitution? Does this change anything about the whole process, or is it just another way to do the same thing?
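As far as I understand it, the practical difference is variable scope, since the pipe runs the loop in a subshell; a minimal sketch of what I mean:
count=0
find . -type f -print0 | while IFS= read -r -d '' f; do count=$((count+1)); done
echo "$count"    # still 0: the loop ran in a pipeline subshell

count=0
while IFS= read -r -d '' f; do count=$((count+1)); done < <(find . -type f -print0)
echo "$count"    # the real count: the loop ran in the current shell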
Is the read argument -d '' different from -d $'\0', or is that again the same thing? Does an empty string always contain at least \0, so the bash-specific $'' syntax is completely unnecessary?
I also tried doing it directly in find -exec/-execdir by passing it multiple times and failed. Maybe filtering and testing can be done in one command?
Non-working example:
find "$repositories_root_dir" -type d -execdir test -d {}/.git \; -prune -execdir sh -c "if git ls-remote --exit-code . \"origin/${target_branch_name}\" &> /dev/null; then echo \"Found branch '${target_branch_name}' in {}\"; git checkout \"${target_branch_name}\"; fi" \;
Sources:
https://github.com/koalaman/shellcheck/wiki/Sc2044
https://mywiki.wooledge.org/BashPitfalls#for_f_in_.24.28ls_.2A.mp3.29
In your non-working example, if you test for the existence of a .git sub-directory in order to process only git clones and discard the other directories, then you should probably not -prune, because it does the exact opposite: it skips only the git clones.
Moreover, when using -execdir sh -c SCRIPT, you should pass positional parameters to your script instead of trying to embed the current directory name in the script with {}, which is not portable. You could do the same for the branch name. Note that the directory name is not needed for what you are trying to accomplish in each git clone, because your script is executed from there.
Try this, maybe:
find "$repositories_root_dir" -type d -name '.git' -execdir sh -c '
if git ls-remote --exit-code . "origin/$1" &> /dev/null; then
printf "Found branch %s in " "$1"; pwd
echo git checkout "$1"
fi' _ "$target_branch_name" \;
(_ is assigned to positional parameter $0). Remove the echo if the result looks correct.
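To see how the parameters land, here is a tiny standalone check:
sh -c 'echo "\$0=$0 \$1=$1"' _ mybranch
# prints: $0=_ $1=mybranch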
I am searching a specific directory and its subdirectories for new files, which I would like to copy. I am using this:
find /home/foo/hint/ -type f -mtime -2 -exec cp '{}' ~/new/ \;
It is copying the files successfully, but some files have the same name in different subdirectories of /home/foo/hint/.
I would like to copy each file along with its base directory to the ~/new/ directory.
test#serv> find /home/foo/hint/ -type f -mtime -2 -exec ls '{}' \;
/home/foo/hint/do/pass/file.txt
/home/foo/hint/fit/file.txt
test#serv>
~/new/ should look like this after copy:
test#serv> ls -R ~/new/
/home/test/new/pass/:
file.txt
/home/test/new/fit/:
file.txt
test#serv>
platform: Solaris 10.
Since you can't use rsync or fancy GNU options, you need to roll your own using the shell.
The find command lets you run a full shell in your -exec, so you should be good to go with a one-liner to handle the names.
If I understand correctly, you only want the parent directory, not the full tree, copied to the target. The following might do:
#!/usr/bin/env bash
findopts=(
    -type f
    -mtime -2
    -exec bash -c 'd="${0%/*}"; d="${d##*/}"; mkdir -p "$1/$d"; cp -v "$0" "$1/$d/"' {} ./new \;
)
find /home/foo/hint/ "${findopts[@]}"
Results:
$ find ./hint -type f -print
./hint/foo/slurm/file.txt
./hint/foo/file.txt
./hint/bar/file.txt
$ ./doit
./hint/foo/slurm/file.txt -> ./new/slurm/file.txt
./hint/foo/file.txt -> ./new/foo/file.txt
./hint/bar/file.txt -> ./new/bar/file.txt
I've put the options to find into a bash array for easier reading and management. The script for the -exec option is still a little unwieldy, so here's a breakdown of what it does for each file. Bearing in mind that in this form the script's arguments are numbered from zero, the {} becomes $0 and the target directory becomes $1...
d="${0%/*}" # Store the source directory in a variable, then
d="${d##*/}" # strip everything up to the last slash, leaving the parent.
mkdir -p "$1/$d" # create the target directory if it doesn't already exist,
cp "$0" "$1/$d/" # then copy the file to it.
I used cp -v for verbose output, as shown in "Results" above, but IIRC the -v option is not supported by Solaris's cp, so it can be safely dropped.
The --parents flag should do the trick:
find /home/foo/hint/ -type f -mtime -2 -exec cp --parents '{}' ~/new/ \;
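One caveat, also noted in an answer below: with an absolute source path, GNU cp --parents recreates the full path under the target, not just the immediate parent. For example:
cp --parents /home/foo/hint/do/pass/file.txt ~/new/
# creates ~/new/home/foo/hint/do/pass/file.txt, not ~/new/pass/file.txt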
Try testing with rsync -R, for example:
find /your/path -type f -mtime -2 -exec rsync -R '{}' ~/new/ \;
From the rsync man:
-R, --relative
    Use relative paths. This means that the full path names specified on the
    command line are sent to the server rather than just the last parts of the
    filenames.
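In practice, the path exactly as typed is reproduced under the destination, and a /./ marker in the source path trims everything before it. A quick sketch:
rsync -R /home/foo/hint/do/pass/file.txt ~/new/
# -> ~/new/home/foo/hint/do/pass/file.txt

rsync -R /home/foo/hint/./do/pass/file.txt ~/new/
# -> ~/new/do/pass/file.txt (the part before /./ is dropped)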
The problem with the answers by @Mureinik and @nbari might be that the absolute path of the new files will be recreated inside the target directory. In this case you might want to switch to the base directory before the command and go back to your current directory afterwards:
path_current=$PWD; cd /home/foo/hint/; find . -type f -mtime -2 -exec cp --parents '{}' ~/new/ \; ; cd "$path_current"
or
path_current=$PWD; cd /home/foo/hint/; find . -type f -mtime -2 -exec rsync -R '{}' ~/new/ \; ; cd "$path_current"
Both ways work for me on a Linux platform. Let's hope that Solaris 10 knows about rsync's -R! ;)
I found a way around it:
cd ~/new/
find /home/foo/hint/ -type f -mtime -2 -exec nawk -v f={} '{n=split(FILENAME, a, "/");j= a[n-1];system("mkdir -p "j"");system("cp "f" "j""); exit}' {} \;
I have a large project that creates a large number of jars in paths similar to project/subproject/target/subproject.jar. I want to make a command to collect all the jars into one compressed tar, but without the directories. The command I have come up with so far is:
find project -name \*.jar -exec tar -rvf Collectors.tar.gz -C $(dirname {}) $(basename {}) \;
but this isn't quite working as I intended; the directories are still there.
Does anyone have any ideas for how to resolve this issue?
Your command is quite close, but the problem is that Bash is executing $(dirname {}) and $(basename {}) before executing find; so your command expands to this:
find project -name \*.jar -exec tar -rvf Collectors.tar.gz -C . {} \;
where the -C . is a no-op and the {} just expands to the full relative directory+filename.
One general-purpose way to fix this sort of thing is to wrap up the argument to -exec in a Bash one-liner, so you invoke Bash for each individual file, and let it execute the dirname and basename at the right time:
find project -name \*.jar -exec bash -c 'tar -rvf Collectors.tar.gz -C "$(dirname "$1")" "$(basename "$1")"' '' '{}' \;
In your specific case, however, I'd point you to find's -execdir action, which is the same as -exec except that it cd's into the file's directory first. So you can simply write:
find project -name '*.jar' -execdir tar -rvf "$PWD/Collectors.tar.gz" '{}' \;
(Note that $PWD part, which is to make sure that you write to the Collectors.tar.gz in the current directory, rather than in the directory that find -execdir will cd into.)
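You can then confirm that the archive contents are flat by listing it:
tar -tf Collectors.tar.gz
# expect bare names like subproject.jar, with no directory prefixes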
I need to go through a whole set of subdirectories, and each time find locates a subdirectory named 0, go inside it. Once there, I need to execute a tar command to archive some files.
I tried the following
find . -type d -name "0" -exec sh -c '(cd {} && tar -cvf filename.tar ../PARFILE RS* && cp filename.tar ~/home/directoryForTransfer/)' ';'
which seems to work. However, because this is done in many directories named 0, it always overwrites the previous filename.tar (and I lose the information about where each one was created).
One way to solve this would be to use the $PWD as the filename (+ .tar at the end).
I tried double quotes, backticks, etc., but I never managed to get the correct filename.
"$PWD"".tar", `$PWD.tar`, etc.
Any idea? Any other way is fine, as long as I can link the name of the file with the directory it was created in.
I need this to transfer the directoryForTransfer easily from the cluster to my home computer.
You can try "${PWD//\//_}.tar". However, you have to use bash -c instead of sh -c, because the ${var//pattern/replacement} expansion is a bashism.
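The substitution just flattens the working directory path by replacing every / with _, for example:
dir=/data/run42/0          # a hypothetical $PWD inside one of the 0 directories
echo "${dir//\//_}.tar"    # -> _data_run42_0.tar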
Edit:
So now your code should look like this:
find . -type d -name "0" -exec bash -c 'cd {} && tar -cvf filename.tar ../PARFILE RS* && cp filename.tar ~/home/directoryForTransfer/"${PWD//\//_}.tar"' ';'
I personally don't really like using the -exec flag for find, as it makes the code less readable and also forks a new process for each file. I would do it like this, which should work unless a filename somewhere contains a newline (which is very unlikely).
while read -r dir; do
    # subshell, so the cd does not affect the next iteration
    (cd "$dir" && tar -cvf filename.tar ../PARFILE RS* && cp filename.tar ~/home/directoryForTransfer/"${PWD//\//_}.tar")
done < <(find . -type d -name "0")
But this is just my personal preference. The -exec variant should work too.
You can use the -execdir option of find to descend into each found directory and then run the tar command there, which greatly simplifies it:
find . -type d -name "0" -execdir tar -cvf filename.tar RS* \;
If you want the tar file to be created in ~/home/directoryForTransfer/ then use:
find . -type d -name "0" -execdir sh -c 'cd "$1" && tar -cvf ~/home/directoryForTransfer/filename.tar RS*' _ {} \;
I cannot delete a folder created by php mkdir.
for I in "$@"
do
    find "$I" -type f -name "sess_*" -exec rm -f {} \;
    find "$I" -type f -name "*.bak" -exec rm -f {} \;
    find "$I" -type f -name "Thumbs.db" -exec rm -f {} \;
    find "$I" -type f -name "error.log" -exec sh -c 'echo -n > "$1"' _ {} \;
    find "$I" -type f -path "*/cache/*" -name "*.*" -exec rm -f {} \;
    find "$I" -path "*/uploads/*" -exec rm -rdf {} \;
done
I want to delete all files and folders under /uploads/. Please help me, thanks...
You should consider changing your find commands to use the -o operator to join your conditions together, as the final -exec is basically the same in each. This will avoid recursing through the file system repeatedly.
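A sketch of that consolidation (untested; the error.log truncation would still need its own -exec):
find "$I" -type f \( -name 'sess_*' -o -name '*.bak' -o -name 'Thumbs.db' -o -path '*/cache/*' \) -exec rm -f {} +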
The other answers address your concern about php mkdir. I'll just add that it has nothing to do with the fact that the folder was created with php mkdir rather than any other code or command; it is due to ownership and permissions.
I think this is most likely because php is running in apache or another http server under a different user than the one you invoke the bash script as. Or perhaps the files uploaded to uploads/ are owned by the http server's user and not the user invoking it.
Make sure that you run the bash script under the same user as your http server.
To find out which user owns which file do:
ls -l
If you run your bash script as root, you should be able to delete it anyway, but that is not recommended.
Update
To run it as root from a Nautilus script, use the following as your Nautilus script:
gksudo runmydeletescript
Then put all the other code into another file with the same path as whatever you have put for runmydeletescript and run chmod +x on it. This is extremely dangerous!
You should probably add -depth to the command, to delete the sub-directories of uploads before the directory itself.
I worry about the -path but I'm not familiar with it.
Also consider using + instead of \; to reduce the number of commands executed.
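Putting those two suggestions together for the uploads part, something like this sketch (rm -f tolerates entries already removed by an earlier batch):
find "$I" -depth -path '*/uploads/*' -exec rm -rf {} +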