In my makefile, I have a target called indent-fast --
indent-fast:
	# astyle --style=allman --indent=tab `find . -name "*.java"`
	files=`find . -name "*.java"` ; \
	for file in $$files ; do \
	echo formatting $$file ; \
	done ;\
	# to do a whitesmith at 4 spaces, uncomment this line --
	# astyle --style=whitesmith --indent=spaces=4 `find . -name "*.java"`
but when I execute it, I get this output --
ramgorur@ramgorur-pc:~$ make indent-fast
# astyle --style=allman --indent=tab `find . -name "*.java"`
files=`find . -name "*.java"` ; \
for file in $files ; do \
echo formatting $file ; \
done ;\
formatting ./File1.java
formatting ./File2.java
formatting ./File3.java
.
.
.
formatting ./FileN.java
# to do a whitesmith at 4 spaces, uncomment this line --
# astyle --style=whitesmith --indent=spaces=4 `find . -name "*.java"`
Why is it showing the script lines along with the comments on standard output? Please also note that indent-fast is the last target in the file.
Because your comments are indented in a recipe, they're not make comments, and they're being passed to the shell as part of the recipe. They're shell comments in that context. You could add a @ prefix to prevent them from being output, if that's your goal.
From the GNU make manual:
Comments within a recipe are passed to the shell, just as with any other recipe text. The shell decides how to interpret it: whether or not this is a comment is up to the shell.
Simple example makefile:
# make comment
target:
	# shell comment
	:
	@# output-suppressed shell comment
	@:
Execution:
$ make
# shell comment
:
Edit: since an example wasn't good enough, here's the solution for your exact problem:
indent-fast:
	@# astyle --style=allman --indent=tab `find . -name "*.java"`
	@files=`find . -name "*.java"` ; \
	for file in $$files ; do \
	echo formatting $$file ; \
	done
	@# to do a whitesmith at 4 spaces, uncomment this line --
	@# astyle --style=whitesmith --indent=spaces=4 `find . -name "*.java"`
for subj in `cat dti_list.txt`; do
echo $subj
find . -type f -iname '*306.nii' -execdir bash -c 'rename.ul "$subj" DTI_MAIN_AP.nii *.nii' \+
done
I have some trouble with a small bash script: when I use the rename.ul function, it adds the new name instead of replacing the old one.
Currently, the code prepends DTI_MAIN_AP.nii to the old name.
My goal is to take each name from the subj list, use find to locate any directory containing a *306.nii file, and then use -execdir to run rename.ul and rename that file according to dti_list.txt.
Any solution or correction that gets the code working will be appreciated.
If you just want to rename the first file matching *306.nii in each directory to DTI_MAIN_AP.nii, that might look like:
find . -type f -iname '*306.nii' \
-execdir sh -c '[ -e DTI_MAIN_AP.nii ] || mv "$1" DTI_MAIN_AP.nii' _ {} +
If instead of matching on *306.nii you want to iterate over names from dti_list.txt, that might instead look like:
while IFS= read -r filename <&3; do
find . -type f -name "$filename" \
-execdir sh -c '[ -e DTI_MAIN_AP.nii ] || mv "$1" DTI_MAIN_AP.nii' _ {} +
done 3<dti_list.txt
References of note:
BashFAQ #1 (on reading files line-by-line)
Using Find
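To make the list-driven variant concrete, here is a self-contained sketch; the directory names, file names, and dti_list.txt contents below are fabricated purely for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the list-driven rename. dti_list.txt is assumed to hold one
# basename per line; the sample tree below is fabricated for illustration.
set -eu

workdir=$(mktemp -d)
cd "$workdir"
mkdir sub1 sub2
touch sub1/abc306.nii sub2/xyz306.nii
printf 'abc306.nii\nxyz306.nii\n' > dti_list.txt

# fd 3 keeps the list separate from any stdin the inner commands might consume
while IFS= read -r filename <&3; do
  [ -n "$filename" ] || continue
  find . -type f -name "$filename" \
    -execdir sh -c '[ -e DTI_MAIN_AP.nii ] || mv "$1" DTI_MAIN_AP.nii' _ {} \;
done 3<dti_list.txt
```

After it runs, each subdirectory holds DTI_MAIN_AP.nii in place of its *306.nii file.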
I am writing a script that wraps the find command to search for specific source file types under a given directory. A sample invocation would be:
./find_them.sh --java --flex --xml dir1
The above command would search for .java, .as and .xml files under dir1.
To do this manually I came up with the following find command:
find dir1 -type f -a \( -name "*.java" -o -name "*.as" -o -name "*.xml" \)
As I am doing this in a script where I want to be able to specify different file sets to search for, you end up with the following structure:
find_cmd_file_sets=$(decode_file_sets) # Assume this creates a string with the file sets e.g. -name "*.java" -o -name "*.as" etc
dirs=$(get_search_dirs) # assume this gives you the list of dirs to search, defaulting to the current directory
for dir in $dirs
do
find $dir -type f -a \( $find_cmd_file_sets \)
done
The above script doesn't behave as expected: you execute the script and the find command churns for a while before returning no results.
I'm certain the equivalents of decode_file_sets and get_search_dirs I've created are generating the correct results.
A simpler example is to execute the following directly in a bash shell:
file_sets=' -name "*.java" -o -name "*.as" '
find dir -type f -a \( $file_sets \) # Returns no result
# Executing result of below command directly in the shell returns correct result
echo find dir -type f -a \\\( $file_sets \\\)
I don't understand why variable expansion in brackets of the find command would change the result. If it makes any difference I am using git-bash under Windows.
This is really frustrating. Any help would be much appreciated. Most importantly I would like to understand why the variable expansion of $file_sets is behaving as it is.
Hope this will work; it's tested on bash.
file_sets=' -name "*.java" -o -name "*.as" '
command=`echo "find $dir -type f -a \( $file_sets \)"`
eval $command
TLDR: Don't use quotes in find_cmd_file_sets variable and disable pathname expansion (set -f) before calling find.
When a variable's content includes "special" characters and you expand that variable without quotes, bash treats those characters as literal data rather than syntax; the set -x trace shows this by single-quoting each affected word, e.g.:
#!/usr/bin/env bash
set -x
VAR='abc "def"'
echo $VAR
The output is:
+ VAR='abc "def"'
+ echo abc '"def"'
abc "def"
As you can see, bash surrounded "def" with single quotes. In your case, the call to find command becomes:
find ... -name '"*.java"' ...
So it tries to find files whose names start with " and end with .java".
To prevent that behavior, the only thing you can do (that I'm aware of) is to use double quotes when expanding the variable, e.g.:
#!/usr/bin/env bash
set -x
VAR='abc "def"'
echo "$VAR"
The output is:
+ VAR='abc "def"'
+ echo 'abc "def"'
abc "def"
The only problem, as you probably noticed already, is that now the whole variable is in quotes and is treated as single argument. So this won't work in your find command.
The only option left is to not use quotes, neither in variable content nor when expanding the variable. But then, of course, you have a problem with pathname expansion:
#!/usr/bin/env bash
set -x
VAR='abc *.java'
echo $VAR
The output is:
+ VAR='abc *.java'
+ echo abc file1.java file2.java
abc file1.java file2.java
Fortunately you can disable pathname expansion using set -f:
#!/usr/bin/env bash
set -x
VAR='abc *.java'
set -f
echo $VAR
The output is:
+ VAR='abc *.java'
+ set -f
+ echo abc '*.java'
abc *.java
To sum up, the following should work:
#!/usr/bin/env bash
pattern='-name *.java'
dir="my_project"
set -f
find "$dir" -type f -a \( $pattern \)
Bash arrays were introduced to allow exactly this kind of nested quoting:
file_sets=( -name "*.java" -o -name "*.as" )
find dir -type f -a \( "${file_sets[@]}" \)
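A runnable sketch of the array approach; the temporary tree and file names are made up for illustration:

```shell
#!/usr/bin/env bash
# Each predicate is one array element, so "*.java" reaches find as a single
# word and is never glob-expanded or word-split by the shell.
set -eu

workdir=$(mktemp -d)
mkdir "$workdir/src"
touch "$workdir/src/Main.java" "$workdir/src/app.as" "$workdir/src/notes.txt"

file_sets=( -name '*.java' -o -name '*.as' )
found=$(find "$workdir" -type f -a \( "${file_sets[@]}" \) | sort)
printf '%s\n' "$found"
```

Only Main.java and app.as are listed; notes.txt fails both -name tests.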
In bash, when I want to iterate in a recursive list of pdf files, without the extension, I could do the following:
for file in `find mypath -type f -name '*.pdf' -printf "%f\n"`
do
echo "${file%.*}"
done
This works perfectly, and I get a list of the pdf files without the extension.
But if I try to do the same in a Makefile, I get empty output:
my_test:
	@for file in `find mypath -type f -name '*.pdf' -printf "%f\n"`; \
	do \
	echo "${file%.*}"; \
	done; \
Do you have an idea why this is happening?
Thanks in advance.
Just put in an extra $:
echo "$${file%.*}"; \
In your command, Make expands the single $, treats ${file%.*} as a reference to a (nonexistent) make variable, and things unravel fast. With $$, the first $ escapes the second and the ${...} gets passed to the shell.
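Putting that fix into the original target, the corrected recipe might look like this (recipe lines must still start with a tab; the leading @ only suppresses make's echoing of the command and is optional):

```make
my_test:
	@for file in `find mypath -type f -name '*.pdf' -printf "%f\n"`; \
	do \
	echo "$${file%.*}"; \
	done
```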
How do I write a bash script that goes through each directory inside a parent_directory and executes a command in each of them?
The directory structure is as follows:
parent_directory (name could be anything - doesn't follow a pattern)
    001 (directory names follow this pattern)
        0001.txt (filenames follow this pattern)
        0002.txt
        0003.txt
    002
        0001.txt
        0002.txt
        0003.txt
        0004.txt
    003
        0001.txt
The number of directories is unknown.
This answer posted by Todd helped me.
find . -maxdepth 1 -type d \( ! -name . \) -exec bash -c "cd '{}' && pwd" \;
The \( ! -name . \) avoids executing the command in the current directory.
You can do the following, when your current directory is parent_directory:
for d in [0-9][0-9][0-9]
do
( cd "$d" && your-command-here )
done
The ( and ) create a subshell, so the current directory isn't changed in the main script.
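A quick sketch showing that the subshell leaves the caller's directory alone (the 001/002 directory names are fabricated):

```shell
#!/usr/bin/env bash
# The ( ... ) runs in a child process, so its cd never affects this script.
set -eu

workdir=$(mktemp -d)
mkdir "$workdir/001" "$workdir/002"
cd "$workdir"
start=$PWD

for d in [0-9][0-9][0-9]; do
  ( cd "$d" && pwd )
done

# still in the parent directory afterwards
[ "$PWD" = "$start" ] && echo "still in $start"
```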
You can achieve this by piping to xargs. The catch is that you need the -I flag, which replaces the placeholder in your bash command with each item xargs receives.
ls -d */ | xargs -I {} bash -c "cd '{}' && pwd"
You may want to replace pwd with whatever command you want to execute in each directory.
If you're using GNU find, you can try -execdir parameter, e.g.:
find . -type d -execdir realpath "{}" ';'
or (as per @gniourf_gniourf's comment):
find . -type d -execdir sh -c 'printf "%s/%s\n" "$PWD" "$0"' {} \;
Note: You can use ${0#./} instead of $0 to fix ./ in the front.
or more practical example:
find . -name .git -type d -execdir git pull -v ';'
If you want to include the current directory, it's even simpler by using -exec:
find . -type d -exec sh -c 'cd -P -- "{}" && pwd -P' \;
or using xargs:
find . -type d -print0 | xargs -0 -L1 sh -c 'cd "$0" && pwd && echo Do stuff'
Or a similar example suggested by @gniourf_gniourf:
find . -type d -print0 | while IFS= read -r -d '' file; do
# ...
done
The above examples support directories with spaces in their name.
Or by assigning into bash array:
dirs=($(find . -type d))
for dir in "${dirs[@]}"; do
( cd "$dir" && pwd )
done
Change . to your specific folder name. If you don't need to run recursively, you can use: dirs=(*) instead. The above example doesn't support directories with spaces in the name.
So as @gniourf_gniourf suggested, the only proper way to put the output of find into an array without using an explicit loop is available since Bash 4.4:
mapfile -t -d '' dirs < <(find . -type d -print0)
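A sketch of the mapfile approach, assuming bash 4.4 or newer; the directory names, including one with a space, are fabricated:

```shell
#!/usr/bin/env bash
# mapfile -d '' splits on NUL bytes, so names with spaces or newlines stay
# intact. Requires bash >= 4.4 for the -d option.
set -eu

workdir=$(mktemp -d)
mkdir "$workdir/plain" "$workdir/with space"

mapfile -t -d '' dirs < <(find "$workdir" -mindepth 1 -type d -print0)
printf 'found %s dirs\n' "${#dirs[@]}"
```

Both subdirectories land in the array as single elements, space and all.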
Or a not-recommended way (which involves parsing ls):
ls -d */ | awk '{print $NF}' | xargs -n1 sh -c 'cd $0 && pwd && echo Do stuff'
The above example ignores the current dir (as requested by the OP), but it'll break on names containing spaces.
See also:
Bash: for each directory at SO
How to enter every directory in current path and execute script? at SE Ubuntu
If the toplevel folder is known you can just write something like this:
for dir in `ls $YOUR_TOP_LEVEL_FOLDER`;
do
for subdir in `ls $YOUR_TOP_LEVEL_FOLDER/$dir`;
do
$(PLAY AS MUCH AS YOU WANT);
done
done
On the $(PLAY AS MUCH AS YOU WANT); you can put as much code as you want.
Note that I didn't "cd" on any directory.
Cheers,
for dir in PARENT/*
do
test -d "$dir" || continue
# Do something with $dir...
done
While one-liners are good for quick and dirty usage, I prefer the more verbose version below for writing scripts. This is the template I use; it takes care of many edge cases and allows you to write more complex code to execute on a folder. You can put your bash code in the function dir_command. Below, dir_command implements tagging each repository in git as an example. The rest of the script calls dir_command for each folder in the directory. An example of iterating through only a given set of folders is also included.
#!/bin/bash
#Use set -x if you want to echo each command while getting executed
#set -x
#Save current directory so we can restore it later
cur=$PWD
#Save command line arguments so functions can access them
args=("$@")
#Put your code in this function
#To access command line arguments use syntax ${args[0]}, ${args[1]}, etc.
function dir_command {
#This example command implements doing git status for folder
cd "$1"
echo "$(tput setaf 2)$1$(tput sgr 0)"
git tag -a ${args[0]} -m "${args[1]}"
git push --tags
cd ..
}
#This loop will go to each immediate child and execute dir_command
find . -maxdepth 1 -type d \( ! -name . \) | while read -r dir; do
dir_command "$dir/"
done
#This example loop only loops through give set of folders
declare -a dirs=("dir1" "dir2" "dir3")
for dir in "${dirs[#]}"; do
dir_command "$dir/"
done
#Restore the folder
cd "$cur"
I don't get the point about the formatting of the files, since you only want to iterate through folders... Are you looking for something like this?
cd parent
find . -type d | while read -r d; do
ls "$d"/
done
You can use
find .
to search all files/dirs in the current directory recursively.
Then you can pipe the output to the xargs command like so:
find . | xargs your-command-here
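When file names may contain whitespace, the NUL-delimited variant of this pipe is safer. A small sketch (file names fabricated for illustration):

```shell
#!/usr/bin/env bash
# find -print0 and xargs -0 pass names separated by NUL bytes, so a name
# like "b c.txt" is not split into two arguments.
set -eu

workdir=$(mktemp -d)
touch "$workdir/a.txt" "$workdir/b c.txt"

listed=$(find "$workdir" -type f -print0 | xargs -0 -n1 basename | sort)
printf '%s\n' "$listed"
```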
#!/bin/bash
# -mindepth/-maxdepth 1 means one folder deep
# you can add a pattern instead of "*" to match specific folders
for folder_to_go in $(find . -mindepth 1 -maxdepth 1 -type d -name "*"); do
cd "$folder_to_go"
echo "$folder_to_go ##########################################"
# whatever you want to do goes here
cd ../ # if maxdepth/mindepth = 2, cd ../../
done
# you can add nested for loops with more patterns to reach any depth you want
You could run a sequence of commands in each folder in one line like:
for d in PARENT_FOLDER/*; do ( cd "$d" && tar -cvzf ../"${d##*/}.tar.gz" * ); done
for p in [0-9][0-9][0-9];do
(
cd "$p"
for f in [0-9][0-9][0-9][0-9]*.txt;do
ls "$f"; # Your operands
done
)
done
Can't seem to crack this one.
I have a bash script to search a folder and exclude certain file types.
list=`find . -type f ! \( -name "*data.php" -o -name "*.log" -o -iname "._*" -o -path "*patch" \)`
I want to exclude files which start with dot-dash ._ but the above just refuses to work.
Here's some more of the script, but I am still getting files copied with start with ._
O/S is CentOS 5.3
list=`find . -type f ! \( -name "*data.php" -o -name "*.log" -o -iname "._*" -o -path "*patch" \)`
for a in $list; do
if [ ! -f "$OLDFOL$a" ]; then
cp --preserve=all --parents $a $UPGFOL
continue
fi
diff $a "$OLDFOL$a" > /dev/null
if [[ "$?" == "1" ]]; then
# exists & different so copy
cp --preserve=all --parents $a $UPGFOL
fi
done
First -- don't do it that way.
files="`find ...`"
splits names on whitespace, meaning that Some File becomes two files, Some and File. Even splitting on newlines is unsafe, as valid UNIX filenames can contain $'\n' (any character other than / and null is valid in a UNIX filename). Instead...
getfiles() {
find . -type f '!' '(' \
-name '*data.php' -o \
-name '*.log' -o \
-iname "._*" -o \
-path "*patch" ')' \
-print0
}
while IFS= read -r -d '' file; do
if [[ ! -e $orig_dir/$file ]] ; then
cp --preserve=all --parents "$file" "$dest_dir"
continue
fi
if ! cmp -s "$file" "$orig_dir/$file" ; then
cp --preserve=all --parents "$file" "$dest_dir"
fi
done < <(getfiles)
The above does a number of things right:
It is safe against filenames containing spaces or newlines.
It uses cmp -s, not diff. cmp exits as soon as it finds a difference, rather than needing to calculate the delta between two files, and is thus far faster.
Read BashFAQ #1, UsingFind, and BashPitfalls #1 to understand some of the differences between this and the original.
Also -- I've validated that this correctly excludes filenames which start with ._ -- but the original version did too. Perhaps what you really want is to exclude filenames matching *._* rather than ._*?
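As a final check, the read -d '' / -print0 pairing can be exercised against a name containing a space; the files here are fabricated for the demonstration:

```shell
#!/usr/bin/env bash
# Counts files via a NUL-delimited stream; "Some File.txt" must be seen as
# one name, not two.
set -eu

workdir=$(mktemp -d)
touch "$workdir/Some File.txt" "$workdir/plain.txt"

count=0
while IFS= read -r -d '' file; do
  count=$((count + 1))
done < <(find "$workdir" -type f -print0)

echo "saw $count files"
```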