gbash 'git rm' multiple files that are found by a 'find' command - bash

I want to 'git rm' a bunch of files that are found by a 'find' command. The files should have a certain suffix. I got this:
TEST_PATH='/usr/src'
function main() {
for i in "$@"
do
echo "current i = ${i}"
COMMAND='find $TEST_PATH -maxdepth 20 -name '*_${i}.txt' -exec git rm {} \;'
# COMMAND="$(find $TEST_PATH -maxdepth 20 name '*_${i}.txt' -print0 | xargs -0 -I{} cp {} .)"
# COMMAND="find $TEST_PATH -maxdepth 20 -name '*_${i}.txt' -exec cp {} . \;"
# COMMAND="find . '*.BUILD' | while read file; do echo "$file"; done \;"
done
echo "Running Command: $COMMAND"
$COMMAND
}
gbash::main "$@"
Running it will throw an error like this:
$ sh abc.sh 123
current i = 123
Running Command: find ../../src/python/servers/innertube/tests/ -maxdepth 20 -name "*_9421870.txt" -exec rm {}\;
find: missing argument to `-exec'
I've read and tried all the solutions on stackoverflow (see the commented out code) but none works...

Update
The problem is that you should eval the contents of the variable containing the command:
eval $COMMAND
From man eval:
The eval utility shall construct a command by concatenating arguments together, separating each with a <space> character. The constructed command shall be read and executed by the shell.
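As a runnable sketch of the difference (the /tmp/evaldemo directory is hypothetical):

```shell
# Hypothetical demo directory.
rm -rf /tmp/evaldemo && mkdir -p /tmp/evaldemo
touch /tmp/evaldemo/a_123.txt

CMD="find /tmp/evaldemo -maxdepth 1 -name '*_123.txt' -print"

# Plain expansion: the single quotes are passed literally to find,
# so the pattern never matches and nothing is printed.
$CMD

# eval re-parses the string, performing quote removal, so find
# receives the pattern *_123.txt and prints the matching file.
eval "$CMD"
```

The quotes survive plain expansion because quote removal only happens when the shell parses a command line, which eval forces a second time.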
Original answer
Replace {}\; with {} \; or {} +.
Read the man page for find. The action used in your command is documented as:
-exec command ;
Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of ; is encountered. The string {} is replaced by the current file name being processed everywhere it occurs in the arguments to the command...
So the command failed because {}\; (with no space) is passed to find as a single argument: find never encounters the standalone ; it needs to terminate -exec.
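A runnable sketch of both valid forms (hypothetical /tmp/execdemo):

```shell
rm -rf /tmp/execdemo && mkdir -p /tmp/execdemo
touch /tmp/execdemo/x.txt /tmp/execdemo/y.txt

# {} \; runs the command once per file (two echo invocations here):
find /tmp/execdemo -name '*.txt' -exec echo found: {} \;

# {} + appends as many file names as fit into a single invocation:
find /tmp/execdemo -name '*.txt' -exec echo found: {} +
```

With \;, the semicolon must be escaped or quoted so the shell passes it through to find; with +, no escaping is needed.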

Related

Bash: How to use functions with parameters with find and ssh

I'm trying to search for files of a specific type on a remote ssh client, and want to call a function with the filename passed as a function parameter:
out=$(ssh operator@$IP << EOF
check_cert_date () {
echo "checking" $1
}
$(typeset -f)
find /opt -iname *.der -o -iname *.pem -exec bash -c 'for arg; do check_cert_date "$arg"; done' - {} \;
EOF
)
Files are found, but the filename itself is not passed to check_cert_date(), i.e. $1 is always empty.
Watch out for quoting with 'Here Documents'. Use << "EOF".
Also, find needs parens for your action to apply to both *.der and *.pem files:
find /opt \( -iname '*.der' -o -iname '*.pem' \) -print | while read -r file; do check_cert_date "$file"; done
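A local sketch of why the parentheses matter, with hypothetical files under /tmp/certdemo (in find expressions, the implicit -a binds tighter than -o):

```shell
rm -rf /tmp/certdemo && mkdir -p /tmp/certdemo
touch /tmp/certdemo/a.der /tmp/certdemo/b.pem /tmp/certdemo/c.txt

# Without parens this parses as: -iname '*.der' -o ( -iname '*.pem' -a -print ),
# so only the .pem file is printed:
find /tmp/certdemo -iname '*.der' -o -iname '*.pem' -print

# With parens, -print applies to both patterns:
find /tmp/certdemo \( -iname '*.der' -o -iname '*.pem' \) -print
```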

bash rename file/add integer in filename with various extensions in multiple subdirectories

I want to insert an integer into filenames with various extensions in multiple subdirectories using bash.
Examples:
./trial2/foo.hhh --> ./trial2/foo1.hhh
./trial2/trial3/foo.txt --> ./trial2/trial3/foo1.txt
I tried to separate the filename from the extension and insert the integer in between with:
i=123
find . -type f -exec sh -c ' echo mv "$0" "${0%.*}${i}.${0##*.}" ' {} \;
mv ./trial2/foo.hhh ./trial2/foo.hhh
But the variable output ${i} is not printed. Why?
If I change to:
find . -type f -exec sh -c ' mv "$0" "${0%.*}123.${0##*.}" ' {} \;
mv ./trial2/foo.hhh ./trial2/foo123.hhh
The number is printed. However, I need the variable ${i} as it will be defined in a wrapping for-loop.
Edit
You are almost there; you just hit a quoting subtlety. i is declared in the current shell, so needs to be passed into the sh run by find somehow. In your current command, ${i} is in single quotes, so is not interpreted by bash before the quoted material is passed to sh.
As suggested by Charles Duffy:
find . -type f -exec sh -c ' echo mv "$0" "${0%.*}${1}.${0##*.}" ' {} "$i" \;
Within the sh command, $0 is the {}, as you know. $1 is the second parameter to the sh, which is "$i" (i expanded as a single word). Instead of ${i}, the sh command uses ${1} to access that parameter (a copy of i).
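A self-contained sketch of this parameter passing, using a hypothetical /tmp/renamedemo directory:

```shell
rm -rf /tmp/renamedemo && mkdir -p /tmp/renamedemo
touch /tmp/renamedemo/foo.hhh
i=123

# {} becomes $0 and "$i" becomes $1 inside the sh -c command:
find /tmp/renamedemo -type f -exec sh -c 'echo mv "$0" "${0%.*}${1}.${0##*.}"' {} "$i" \;
```

This prints mv /tmp/renamedemo/foo.hhh /tmp/renamedemo/foo123.hhh; drop the echo to actually perform the rename.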
Original
In this example, i is interpolated in the current shell.
Before: find . -type f -exec sh -c ' echo mv "$0" "${0%.*}${i}.${0##*.}" ' {} \;
After: find . -type f -exec sh -c ' echo mv "$0" "${0%.*}'"${i}"'.${0##*.}" ' {} \;
The '"${i}"' drops you out of the single quotes, then expands i, then takes you back into the single quotes. That way the command you are passing to sh includes the value of i you want.
Pass $i in $0, and your filenames in $1 and onward, and you can change from -exec ... {} \; to -exec ... {} +, thus using only one sh instance to rename potentially several files.
The following requires bash, but generates mv commands that are guaranteed to be correct even in the presence of malicious or confusing filenames (with spaces, newlines, control characters, etc):
find . -type f -exec bash -c '
logcmd() { printf "%q " "$@"; printf "\n"; } # a more accurate replacement for echo
for f; do
logcmd mv "$f" "${f%.*}${0}.${f##*.}"
done' "$i" {} +

getting the output of a grep command in a loop

I have a shell script that includes this search:
find . -type f -exec grep -iPho "barh(li|mar|ag)" {} \;
I want to capture each string the grep command finds and send it to a function I will create named "parser":
parser(){
# do stuff with each single grep result found
}
How can that be done?
Is this right?
find . -type f -exec grep -iPho "barh(li|mar|ag)" {parser $1} \;
I do not want to output the entire find command result to the function
Only a shell can execute a function, so you need to use bash -c inside your find. That is also why you need to export your function: the child bash is a new process and only sees the function through the environment.
parser() {
while IFS= read -r line; do
echo "Processing line: $line"
done <<< "$1"
}
export -f parser
find . -type f -exec bash -c 'parser "$(grep -iPho "barh(li|mar|ag)" "$1")"' -- {} \;
The code above will send all occurrences from file1, then file2, etc., to your function to process. It does not send the lines one by one, which is why you need to loop over them inside your function. If there is no occurrence of your regex in a file, it will still call your function with empty input!
That might not be the best solution for you so let's try to add the loop inside the bash -c statement and really process the lines one by one:
parser() {
echo "Processing line: $1"
}
export -f parser
find . -type f -exec bash -c 'grep -iPho "barh(li|mar|ag)" "$@" | while IFS= read -r line; do parser "$line"; done' -- {} +
EDIT: Very nice and simple solution not using bash -c suggested by @gniourf_gniourf:
parser() {
echo "Processing line: $1"
}
find . -type f -exec grep -iPho "barh(li|mar|ag)" {} + | while IFS= read -r line; do parser "$line"; done
This approach works fine and it will process each line one by one. You also do not need to export your function with this approach. But you have to watch out for some things that might surprise you.
Each command in a pipeline is executed in its own subshell, so any variable assignment in your parser function, or in your while loop in general, will be lost when that subshell exits. If you are writing a script, a simple shopt -s lastpipe will suffice to run the last pipeline element in the current shell environment (it takes effect in scripts, where job control is off). Or you can use process substitution:
parser() {
echo "Processing line: $1"
}
while IFS= read -r line; do
parser "$line";
done < <(find . -type f -exec grep -iPho "barh(li|mar|ag)" {} +)
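A minimal bash sketch of the subshell pitfall and the lastpipe fix:

```shell
#!/bin/bash
count=0
printf 'a\nb\n' | while IFS= read -r line; do count=$((count+1)); done
echo "count=$count"   # still 0: the while loop ran in a subshell

shopt -s lastpipe     # bash-only; takes effect in scripts, where job control is off
count=0
printf 'a\nb\n' | while IFS= read -r line; do count=$((count+1)); done
echo "count=$count"   # 2: the loop ran in the current shell
```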
Note that in the previous bash -c examples, you will experience the same behavior and your variable assignments will be lost as well.
You need to export your function.
You also need to call bash to execute the function.
parser() {
echo "GOT: $1"
}
export -f parser
find Projects/ -type f -name '*rb' -exec bash -c 'parser "$0"' {} \;
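A minimal sketch of why both the export and the bash -c are needed (greet is a hypothetical stand-in for your parser):

```shell
#!/bin/bash
greet() { printf 'GOT: %s\n' "$1"; }
export -f greet   # without this, the child bash started by find cannot see greet

# -maxdepth 0 makes find print only the starting point itself:
find /tmp -maxdepth 0 -exec bash -c 'greet "$0"' {} \;
```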
I suggest you use sed; it is a more powerful tool for text processing.
For example, say I want to append the string "myparse" to every line that ends in "ha":
# echo "haha" > text1
# echo "hehe" > text2
# echo "heha" > text3
# find . -type f -exec sed '/ha$/s/ha$/ha myparse/' {} \;
haha myparse
heha myparse
hehe
If you really want to modify the files in place, not just print to stdout, add sed's -i option:
# find . -type f -exec sed -i '/ha$/s/ha$/ha myparse/' {} \;
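A runnable sketch of the same edit in a hypothetical /tmp/seddemo directory (note: bare -i is GNU sed syntax; BSD/macOS sed expects a backup suffix after -i):

```shell
rm -rf /tmp/seddemo && mkdir -p /tmp/seddemo
printf 'haha\n' > /tmp/seddemo/text1
printf 'hehe\n' > /tmp/seddemo/text2

# Append " myparse" to every line ending in "ha", editing the files in place:
find /tmp/seddemo -type f -exec sed -i 's/ha$/ha myparse/' {} \;
cat /tmp/seddemo/text1
```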

find statement in cygwin bash script

for i in `find . -type f -name "VF-Outlet*.edi" -exec basename \{} \;` ; do
if [ -n "${i}" ];
then
echo file "VF-Outlet found";
sed -e 's/\*UK\*00/\*UP\*/g;s/XQ.*$/XQ\*H\*20150104\*20150110/g' $i > ${i}_fix
else
echo file "VF-Outlet" not found;
fi
done
The above code works if the file is found. The 'echo' statement prints file found.
If the file is not found however, nothing prints. I tried all the various tests for empty string, and unset variables, nothing works.
Also if I try:
i=`find . -type f -name "VF-Outlet*.edi" -exec basename \{} \;`;
Then do the test:
if [ -n "${i}" ];
then
echo file ${i} found;
else
echo file "VF-Outlet" not found;
fi
done
It works correctly if the file is found or not.
Need help in figuring this out. I need the for loop to test multiple files.
The reason it is not working is that for never iterates when its word list is empty, so the variable i is never assigned.
For example:
for i in `echo > /dev/null`; do echo hi; done
The above command prints nothing, because the command substitution produces no words, so the loop body never runs.
In the case mentioned here, running the script in debug mode shows that it stops right after the find, which produced no output:
# sh -x script.sh
+ find . -type f -name VF-Outlet*.edi -exec basename {} ;
here, script.sh file contains the script you have provided.
If there is a file present in the directory, the script will successfully execute.
# sh -x tet
+ find . -type f -name VF-Outlet*.edi -exec basename {} ;
+ [ -n VF-Outlet1.edi ]
+ echo file VF-Outlet found
file VF-Outlet found
As @shellter mentioned, this is not how I would have done it. You can use -f instead of -n to check whether a file exists.
Hope this helps!
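The empty-list behaviour is easy to reproduce on its own:

```shell
ran=no
# The substitution yields no words, so the loop body never executes:
for i in $(find /tmp -maxdepth 0 -name 'no-such-file-*'); do ran=yes; done
echo "$ran"
```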

find -exec with multiple commands

I am trying to use find -exec with multiple commands without any success. Does anybody know if commands such as the following are possible?
find *.txt -exec echo "$(tail -1 '{}'),$(ls '{}')" \;
Basically, I am trying to print the last line of each txt file in the current directory and print at the end of the line, a comma followed by the filename.
find accepts multiple -exec portions to the command. For example:
find . -name "*.txt" -exec echo {} \; -exec grep banana {} \;
Note that in this case the second command will only run if the first one returns successfully, as mentioned by @Caleb. If you want both commands to run regardless of their success or failure, you could use this construct:
find . -name "*.txt" \( -exec echo {} \; -o -exec true \; \) -exec grep banana {} \;
find . -type d -exec sh -c "echo -n {}; echo -n ' x '; echo {}" \;
One of the following:
find *.txt -exec awk 'END {print $0 "," FILENAME}' {} \;
find *.txt -exec sh -c 'echo "$(tail -n 1 "$1"),$1"' _ {} \;
find *.txt -exec sh -c 'echo "$(sed -n "\$p" "$1"),$1"' _ {} \;
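A runnable sketch of the second variant, using a hypothetical /tmp/taildemo directory:

```shell
rm -rf /tmp/taildemo && mkdir -p /tmp/taildemo
printf 'first\nlast\n' > /tmp/taildemo/a.txt

# Prints "<last line>,<filename>" for each file; the _ placeholder fills $0:
find /tmp/taildemo -name '*.txt' -exec sh -c 'echo "$(tail -n 1 "$1"),$1"' _ {} \;
```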
Another way is like this:
multiple_cmd() {
tail -n1 $1;
ls $1
};
export -f multiple_cmd;
find *.txt -exec bash -c 'multiple_cmd "$0"' {} \;
in one line
multiple_cmd() { tail -1 $1; ls $1; }; export -f multiple_cmd; find *.txt -exec bash -c 'multiple_cmd "$0"' {} \;
"multiple_cmd()" - is a function
"export -f multiple_cmd" - will export it so any other subshell can see it
"find *.txt -exec bash -c 'multiple_cmd "$0"' {} \;" - find that will execute the function on your example
In this way multiple_cmd can be as long and as complex, as you need.
Hope this helps.
There's an easier way:
find ... | while read -r file; do
echo "look at my $file, my $file is amazing";
done
Alternatively:
while read -r file; do
echo "look at my $file, my $file is amazing";
done <<< "$(find ...)"
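A concrete sketch of the while-read pattern (hypothetical /tmp/loopdemo; bash, because of the <<< here-string):

```shell
#!/bin/bash
rm -rf /tmp/loopdemo && mkdir -p /tmp/loopdemo
touch /tmp/loopdemo/one.txt /tmp/loopdemo/two.txt

found=0
while IFS= read -r file; do
  found=$((found+1))
done <<< "$(find /tmp/loopdemo -type f)"
echo "$found files"
```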
Extending #Tinker's answer,
In my case, I needed to make a command | command | command inside the -exec to print both the filename and the found text in files containing a certain text.
I was able to do it with:
find . -name config -type f \( -exec grep "bitbucket" {} \; -a -exec echo {} \; \)
the result is:
url = git@bitbucket.org:a/a.git
./a/.git/config
url = git@bitbucket.org:b/b.git
./b/.git/config
url = git@bitbucket.org:c/c.git
./c/.git/config
I don't know if you can do this with find, but an alternate solution would be to create a shell script and to run this with find.
lastline.sh:
#!/bin/sh
echo "$(tail -1 "$1"),$1"
Make the script executable
chmod +x lastline.sh
Use find:
find . -name "*.txt" -exec ./lastline.sh {} \;
Thanks to Camilo Martin, I was able to answer a related question:
What I wanted to do was
find ... -exec zcat {} | wc -l \;
which didn't work. However,
find ... | while read -r file; do echo "$file: `zcat $file | wc -l`"; done
does work, so thank you!
Denis's first answer solves the problem, but in fact it is no longer a find with several commands in only one -exec, as the title suggests. To get one -exec running several commands, we have to look for something else. Here is an example:
Keep the last 10000 lines of the .log files modified in the last 7 days, using one -exec command with several {} references.
1) see what the command will do on which files:
find / -name "*.log" -a -type f -a -mtime -7 -exec sh -c "echo tail -10000 {} \> fictmp; echo cat fictmp \> {} " \;
2) Do it (note: no more "\>" but only ">"; this is intended):
find / -name "*.log" -a -type f -a -mtime -7 -exec sh -c "tail -10000 {} > fictmp; cat fictmp > {} ; rm fictmp" \;
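A smaller-scale sketch of the same technique, with hypothetical paths and keeping only the last 5 lines (passing {} as a positional parameter rather than pasting it into the script text is a bit safer with odd file names):

```shell
rm -rf /tmp/logdemo && mkdir -p /tmp/logdemo
seq 1 20 > /tmp/logdemo/app.log

# Truncate each .log file to its last 5 lines via a temporary file:
find /tmp/logdemo -name '*.log' -type f \
  -exec sh -c 'tail -5 "$1" > /tmp/fictmp; cat /tmp/fictmp > "$1"; rm /tmp/fictmp' _ {} \;
wc -l < /tmp/logdemo/app.log
```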
I usually embed the find in a small for loop one liner, where the find is executed in a subcommand with $().
Your command would look like this then:
for f in $(find *.txt); do echo "$(tail -1 $f), $(ls $f)"; done
The good thing is that instead of {} you just use $f and instead of the -exec … you write all your commands between do and ; done.
Not sure what you actually want to do, but maybe something like this?
for f in $(find *.txt); do echo $f; tail -1 $f; ls -l $f; echo; done
should use xargs :)
find *.txt -type f -exec tail -1 {} \; | xargs -ICONSTANT echo $(pwd),CONSTANT
another one (working on osx)
find *.txt -type f -exec echo ,$(pwd) {} + -exec tail -1 {} + | tr ' ' '/'
A find+xargs answer.
The example below finds all .html files and creates a copy with the .BAK extension appended (e.g. 1.html > 1.html.BAK).
Single command with multiple placeholders
find . -iname "*.html" -print0 | xargs -0 -I {} cp -- "{}" "{}.BAK"
Multiple commands with multiple placeholders
find . -iname "*.html" -print0 | xargs -0 -I {} echo "cp -- {} {}.BAK ; echo {} >> /tmp/log.txt" | sh
# if you need to do anything bash-specific then pipe to bash instead of sh
This command will also work with files that start with a hyphen or contain spaces such as -my file.html thanks to parameter quoting and the -- after cp which signals to cp the end of parameters and the beginning of the actual file names.
-print0 pipes the results with null-byte terminators.
for xargs the -I {} parameter defines {} as the placeholder; you can use whichever placeholder you like; -0 indicates that input items are null-separated.
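A runnable sketch with a deliberately awkward file name (hypothetical /tmp/xargsdemo):

```shell
rm -rf /tmp/xargsdemo && mkdir -p /tmp/xargsdemo
touch "/tmp/xargsdemo/-my file.html"

# The null-delimited stream plus quoting keeps the space and leading dash intact:
find /tmp/xargsdemo -iname '*.html' -print0 | xargs -0 -I {} cp -- "{}" "{}.BAK"
ls /tmp/xargsdemo
```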
I found this solution (maybe it was already said in a comment, but I could not find an answer with it): you can execute MULTIPLE COMMANDS in a row using bash -c:
find . <SOMETHING> -exec bash -c "EXECUTE 1 && EXECUTE 2 ; EXECUTE 3" \;
in your case
find . -name "*.txt" -exec bash -c "tail -1 '{}' && ls '{}'" \;
I tested it with a test file:
[gek@tuffoserver tmp]$ ls *.txt
casualfile.txt
[gek@tuffoserver tmp]$ find . -name "*.txt" -exec bash -c "tail -1 '{}' && ls '{}'" \;
testonline1=some TEXT
./casualfile.txt
Here is my bash script that you can use to find multiple files and then process them all using a command.
Example of usage. This command applies a file linux command to each found file:
./finder.sh file fb2 txt
Finder script:
# Find files and process them using an external command.
# Usage:
# ./finder.sh ./processing_script.sh txt fb2 fb2.zip doc docx
counter=0
find_results=()
for ext in "${@:2}"
do
# see https://stackoverflow.com/a/54561526/10452175
readarray -d '' ext_results < <(find . -type f -name "*.${ext}" -print0)
for file in "${ext_results[@]}"
do
counter=$((counter+1))
find_results+=("${file}")
echo ${counter}") ${file}"
done
done
countOfResults=$((counter))
echo -e "Found ${countOfResults} files.\n"
echo "Processing..."
counter=0
for file in "${find_results[@]}"
do
counter=$((counter+1))
echo -n ${counter}"/${countOfResults}) "
eval "$1 '${file}'"
done
echo "All files have been processed."
