How to get value of {} in shell? - shell

find . -name "*.network" -o -name "*.index" | xargs -n 1 -I {} -P 5 sh -c "ncols=3; filename={}; echo $filename"
I want to get the filename stored in a variable. By setting filename={} and echoing $filename, I get no output on the console.
Since I want to run several jobs in parallel, xargs is necessary in my script.
I used single quotes as suggested by aicastell. But I want to use awk inside the quotes. What should I do with single quotes inside single quotes? Escaping them with a backslash did not work.
Can anybody help me with this?

Since $filename is within double quotes, the shell substitutes it before it runs your pipeline. In other words: the $filename in the echo statement refers to the variable in the calling shell, not to one in the subshell which you open with sh -c.
Hence, use single quotes instead!
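A sketch of the corrected command from the question, additionally passing {} as a positional argument so the path is never spliced into the script itself (the temp files here are stand-ins for the question's *.network/*.index files):

```shell
# Create two sample files so the pipeline has something to find.
tmp=$(mktemp -d)
touch "$tmp/a.network" "$tmp/b.index"
# Single quotes: $filename is expanded by the inner sh, not the calling shell.
# -I {} already runs one command per input line; -P 5 keeps the parallelism.
out=$(find "$tmp" -name "*.network" -o -name "*.index" |
  xargs -I {} -P 5 sh -c 'ncols=3; filename=$1; echo "$filename"' sh {})
echo "$out"
rm -rf "$tmp"
```

Because of -P 5 the two lines may come out in either order, but each filename is now visible in the subshell.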

You can do something like this:
for FILENAME in $(find . -name "*.network" -o -name "*.index")
do
    # process $FILENAME
done

Related

Unsure how to use ' in a bash script

I am trying to remotely SSH into a server and remove x amount of backups depending how many we set to keep in the script.
#!/bin/bash
BKUSER=4582
BKSERVER=int-backup2.domain.com
DELETEMORETHAN=$(ssh "$BKUSER"@"$BKSERVER" 'find ~/backup/ -maxdepth 1 -type d | wc -l')
if [ "$DELETEMORETHAN" -gt 4 ] ; then
COUNT=$(echo "$DELETEMORETHAN - 4" | bc -l)
ssh "$BKUSER"@"$BKSERVER" 'echo rm -rvf "$(ssh "$BKUSER"@"$BKSERVER" 'find ~/backup/ -maxdepth 1 -type d | head -"${COUNT}"')'
fi
In this example, I am trying to keep 4 of the latest backups. I am failing at
ssh "$BKUSER"@"$BKSERVER" 'echo rm -rvf "$(ssh "$BKUSER"@"$BKSERVER" 'find ~/backup/ -maxdepth 1 -type d | head -"${COUNT}"')'
I was trying to use: https://github.com/koalaman/shellcheck/wiki/SC2026 but it doesn't help since the ' are not being grouped properly, I am stuck!
dennis-b:
[root@ngx /]# ./test
bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 3: syntax error: unexpected end of file
There's really no point repeatedly ssh'ing into your server in order to construct a command line which will be executed on that server. Just ssh once, and give it the script that should be run.
To simplify a bit, there is no real need to use find. find ~/backup/ -maxdepth 1 -type d doesn't even produce the listing in order, so a simple glob ~/backup/*/ is probably better.
Assuming you have bash on the server,
ssh "$bkuser@$bkserver" \
'dirs=(~/backup/*/);
if ((${#dirs[@]} > 4)); then
echo rm -rvf "${dirs[@]:4}";
fi'
will probably do what you want. (Split into lines for readability; it can be typed all on one line, leaving out the line continuation character.)
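The array slice that drives the deletion can be checked locally with placeholder names:

```shell
# Six stand-in backup directories; keep the first 4, select the rest.
dirs=(b1/ b2/ b3/ b4/ b5/ b6/)
extra="${dirs[@]:4}"    # everything after the first 4 entries, joined by spaces
echo "$extra"
```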
The main difference between ' and " is that variables in double quotes are replaced, whereas in single quotes no variables are replaced and $() is not executed.
So to build the command that is to be executed on the server, you need to put it into double quotes as you need the $() to work:
ssh "$BKUSER"@"$BKSERVER" "echo rm -rvf '$(ssh "$BKUSER"@"$BKSERVER" "find ~/backup/ -maxdepth 1 -type d | head -'${COUNT}'")"
You basically have to switch your single and double quotes: the outermost quotes must be double quotes so $() works and variables are replaced. The quotes that are required for the command to work on the server can be single quotes, as no variable replacement happens on the server; everything is done before the command is sent.
You may wonder why '$(ssh ...)' works in spite of being in single quotes. Actually, it is in double quotes! The single quotes are not quotes here, they are just plain text inside of double quotes. They only get interpreted as single quotes on the server.
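A minimal local experiment makes the difference visible:

```shell
name=outer
# Double quotes: the calling shell substitutes $name first,
# so the inner sh literally runs `echo outer`.
a=$(sh -c "echo $name")
# Single quotes: the inner sh receives the literal $name
# and resolves it against its own variable.
b=$(sh -c 'name=inner; echo $name')
echo "$a $b"
```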

Iterating over associative array in bash

I am renaming strings recursively using an associative array. The array part is working: when I echo $index and ${code_names[$index]}, they print correctly. However, the files are not modified. When I run the find | sed command in the shell it works, but inside a bash script it doesn't.
Update
Also script runs ok if I just hardcode the string to be renamed: find . -name $file_type -print0 | xargs -0 sed -i 's/TEST/BAT/g'
#!/usr/bin/env bash
dir=$(pwd)
base=$dir"/../../new/repo"
file_type="*Kconfig"
cd $base
declare -A code_names
code_names[TEST]=BAT
code_names[FUGT]=BLANK
for index in "${!code_names[@]}"
do
find . -name $file_type -print0 | xargs -0 sed -i 's/$index/${code_names[$index]}/g'
done
The bare variable $file_type gets expanded by the shell; double-quote it. Variables are not expanded in single quotes, so use double quotes for the sed script instead. Note that it can break if $index or ${code_names[$index]} contain characters with special meaning to sed (like /).
find . -name "$file_type" -print0 \
| xargs -0 sed -i "s/$index/${code_names[$index]}/g"
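A self-contained sketch of the fixed loop (bash is required for the associative array; GNU sed is assumed for -i without a suffix, and GNU xargs for -r):

```shell
tmp=$(mktemp -d)
printf 'TEST and FUGT\n' > "$tmp/Kconfig"
declare -A code_names=( [TEST]=BAT [FUGT]=BLANK )
for index in "${!code_names[@]}"; do
  # Double quotes: $index and ${code_names[$index]} are expanded
  # before sed parses the s/// expression.
  find "$tmp" -name "*Kconfig" -print0 |
    xargs -0 -r sed -i "s/$index/${code_names[$index]}/g"
done
result=$(cat "$tmp/Kconfig")
echo "$result"
rm -rf "$tmp"
```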

Edit a find -exec echo command to include a grep for a string

So I have the following command which looks for a series of files and appends three lines to the end of everything found. Works as expected.
find /directory/ -name "file.php" -type f -exec sh -c "echo -e 'string1\nstring2\nstring3\n' >> {}" \;
What I need to do is also look for any instance of string1, string2, or string3 in the find ouput of file.php prior to echoing/appending the lines so I don't append a file unnecessarily. (This is being run in a crontab)
Using | grep -v "string" after the find breaks the -exec command.
How would I go about accomplishing my goal?
Thanks in advance!
That -exec command isn't safe for strings with spaces.
You want something like this instead (assuming finding any of the strings is reason not to add any of the strings).
find /directory/ -name "file.php" -type f -exec sh -c "grep -qE 'string1|string2|string3' \"\$1\" || echo -e 'string1\nstring2\nstring3\n' >> \"\$1\"" - {} \;
To explain the safety issue.
find places {} in the command it runs as a single argument but when you splat that into a double-quoted string you lose that benefit.
So instead of doing that you pass the file as an argument to the shell and then use the positional arguments in the shell command with quotes.
The command above simply chains the echo to a failure from grep to accomplish the goal.
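A runnable sketch of the guarded append (printf is used instead of echo -e for portability across sh implementations; a temp file stands in for /directory/file.php):

```shell
tmp=$(mktemp -d)
printf '<?php\n' > "$tmp/file.php"
append_once() {
  # grep -qE succeeds if any of the three strings is already present,
  # which short-circuits the append via ||.
  find "$tmp" -name "file.php" -type f -exec sh -c \
    'grep -qE "string1|string2|string3" "$1" ||
       printf "string1\nstring2\nstring3\n" >> "$1"' sh {} \;
}
append_once
append_once   # no-op: the guard sees string1 from the first run
count=$(grep -c 'string1' "$tmp/file.php")
echo "$count"
rm -rf "$tmp"
```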

why isn't this variable in a find iteration changing its value

I was trying to rename all files using find but after i ran this...
find . -name '*tablet*' -exec sh -c "new=$(echo {} | sed 's/tablet/mobile/') && mv {} $new" \;
I found that my files were gone. I changed it to echo the value of $new and found that it always kept the name of the first file, so it basically renamed all files to the same name.
$ find . -name '*tablet*' -exec sh -c "new=$(echo {} | sed 's/tablet/mobile/') && echo $new" \;
_prev_page.tablet.erb
_prev_page.tablet.erb
_prev_page.tablet.erb
_prev_page.tablet.erb
_prev_page.tablet.erb
_prev_page.tablet.erb
_prev_page.tablet.erb
also tried to change to export new=..., same result
Why doesn't the value of new change?
The problem, I believe, is that the command substitution is expanded by bash once, and then find uses the result in each invocation. I could be wrong about the reason.
When I have similar stuff, I write out a shell script, e.g.
#! /bin/bash
old="$1"
new="${1/tablet/mobile}"
if [[ "${old}" != "${new}" ]]; then
mv "${old}" "${new}"
fi
that takes care of renaming the file then I can call that script from the find command
find . -name "*tablet*" -exec /path/to/script '{}' \;
makes things much simpler to sort out.
EDIT:
HAHA, after some messing around with the quoting, you can sort this out by changing the double quotes encapsulating the command to single quotes. As is, the $() is expanded by the calling shell. If done as below, the command substitution is done by the shell invoked by the exec.
find . -name "*tablet*" -exec sh -c 'new=$( echo {} | sed "s/tablet/mobile/" ) && mv {} $new' \;
So the issue has to do with when the command substitution is expanded; by putting it in single quotes we force the expansion in each invocation of sh.
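The same fix, but with the filename passed as a positional argument rather than splicing {} into the script, which also survives awkward filenames (temp files stand in for the .erb files from the question):

```shell
tmp=$(mktemp -d)
touch "$tmp/_prev_page.tablet.erb" "$tmp/index.tablet.erb"
# $1 is quoted everywhere, so each invocation of sh computes its own $new.
find "$tmp" -name '*tablet*' -exec sh -c \
  'new=$(printf "%s\n" "$1" | sed "s/tablet/mobile/") && mv -- "$1" "$new"' sh {} \;
result=$(ls "$tmp")
echo "$result"
rm -rf "$tmp"
```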

'find -exec' a shell function in Linux

Is there a way to get find to execute a function I define in the shell?
For example:
dosomething () {
echo "Doing something with $1"
}
find . -exec dosomething {} \;
The result of that is:
find: dosomething: No such file or directory
Is there a way to get find's -exec to see dosomething?
Since only the shell knows how to run shell functions, you have to run a shell to run a function. You also need to mark your function for export with export -f; otherwise the subshell won't inherit it:
export -f dosomething
find . -exec bash -c 'dosomething "$0"' {} \;
find . | while read file; do dosomething "$file"; done
Jac's answer is great, but it has a couple of pitfalls that are easily overcome:
find . -print0 | while IFS= read -r -d '' file; do dosomething "$file"; done
This uses null as a delimiter instead of a linefeed, so filenames with line feeds will work. It also uses the -r flag which disables backslash escaping, and without it backslashes in filenames won't work. It also clears IFS so that potential trailing white spaces in names are not discarded.
Add quotes in {} as shown below:
export -f dosomething
find . -exec bash -c 'dosomething "{}"' \;
This corrects any error due to special characters returned by find,
for example files with parentheses in their name.
Processing results in bulk
For increased efficiency, many people use xargs to process results in bulk, but it is very dangerous. Because of that there was an alternate method introduced into find that executes results in bulk.
Note though that this method might come with some caveats like for example a requirement in POSIX-find to have {} at the end of the command.
export -f dosomething
find . -exec bash -c 'for f; do dosomething "$f"; done' _ {} +
find will pass many results as arguments to a single call of bash and the for-loop iterates through those arguments, executing the function dosomething on each one of those.
The above solution starts arguments at $1, which is why there is a _ (which represents $0).
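The bulk form can be exercised with a throwaway function and temp directory:

```shell
dosomething() { printf 'bulk: %s\n' "$1"; }
export -f dosomething
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b" "$tmp/c"
# One bash is started for a whole batch of results;
# the for-loop then walks its positional arguments.
out=$(find "$tmp" -type f -exec bash -c 'for f; do dosomething "$f"; done' _ {} +)
n=$(echo "$out" | grep -c 'bulk:')
echo "$n"
rm -rf "$tmp"
```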
Processing results one by one
In the same way, I think that the accepted top answer should be corrected to be
export -f dosomething
find . -exec bash -c 'dosomething "$1"' _ {} \;
This is not only more sane, because arguments should always start at $1, but also using $0 could lead to unexpected behavior if the filename returned by find has special meaning to the shell.
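Putting the export and the corrected one-by-one invocation together (throwaway function, temp files with a space in the name to show the quoting holds):

```shell
dosomething() { printf 'processed: %s\n' "$1"; }
export -f dosomething
tmp=$(mktemp -d)
touch "$tmp/file one" "$tmp/file two"
# Each file is passed as $1; the _ fills $0 so arguments start at $1.
out=$(find "$tmp" -type f -exec bash -c 'dosomething "$1"' _ {} \;)
n=$(echo "$out" | grep -c 'processed: ')
echo "$n"
rm -rf "$tmp"
```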
Have the script call itself, passing each item found as an argument:
#!/bin/bash
if [ -n "$1" ] ; then
echo "doing something with $1"
exit 0
fi
find . -exec "$0" {} \;
exit 0
When you run the script by itself, it finds what you are looking for and calls itself passing each find result as the argument. When the script is run with an argument, it executes the commands on the argument and then exits.
Just a warning regarding the accepted answer's use of a shell:
although it answers the question well, it might not be the most efficient way to execute code on find results:
Here is a benchmark under bash of all kind of solutions,
including a simple for loop case:
(1465 directories, on a standard hard drive, armv7l GNU/Linux synology_armada38x_ds218j)
dosomething() { echo $1; }
export -f dosomething
time find . -type d -exec bash -c 'dosomething "$0"' {} \;
real 0m16.102s
time while read -d '' filename; do dosomething "$filename" </dev/null; done < <(find . -type d -print0)
real 0m0.364s
time find . -type d | while read file; do dosomething "$file"; done
real 0m0.340s
time for dir in $(find . -type d); do dosomething $dir; done
real 0m0.337s
"find | while" and "for loop" seems best and similar in speed.
For those of you looking for a Bash function that will execute a given command on all files in current directory, I have compiled one from the above answers:
toall(){
find . -type f | while read file; do "$1" "$file"; done
}
Note that it breaks with file names containing spaces (see below).
As an example, take this function:
world(){
sed -i 's_hello_world_g' "$1"
}
Say I wanted to change all instances of "hello" to "world" in all files in the current directory. I would do:
toall world
To be safe with any symbols in filenames, use:
toall(){
find . -type f -print0 | while IFS= read -r -d '' file; do "$1" "$file"; done
}
(but you need a find that handles -print0 e.g., GNU find).
It is not possible to execute a function that way.
To overcome this, you can place your function in a shell script and call that from find:
#!/bin/sh
# dosomething.sh
dosomething () {
    echo "doing something with $1"
}
dosomething "$1"
Now make it executable (chmod +x dosomething.sh) and use it in find as:
find . -exec ./dosomething.sh {} \;
To provide additions and clarifications to some of the other answers, if you are using the bulk option for exec or execdir (-exec command {} +), and want to retrieve all the positional arguments, you need to consider the handling of $0 with bash -c.
More concretely, consider the command below, which uses bash -c as suggested above, and simply echoes out file paths ending with '.wav' from each directory it finds:
find "$1" -name '*.wav' -execdir bash -c 'echo "$@"' _ {} +
The Bash manual says:
If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, they are assigned to positional parameters, starting with $0.
Here, 'echo "$@"' is the command string, and _ {} are the arguments after the command string. Note that $@ is a special parameter in Bash that expands to all the positional parameters starting from 1. Also note that with the -c option, the first argument is assigned to positional parameter $0.
This means that if you try to access all of the positional parameters with $@, you will only get parameters starting from $1 and up. That is the reason why Dominik's answer has the _, which is a dummy argument to fill parameter $0, so all of the arguments we want are available later if we use $@ parameter expansion for instance, or the for loop as in that answer.
Of course, similar to the accepted answer, bash -c 'shell_function "$0" "$@"' would also work by explicitly passing $0, but again, you would have to keep in mind that $@ won't include $0.
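A one-liner shows the assignment (the underscore is an arbitrary filler for $0):

```shell
# The first argument after the command string lands in $0;
# "$@" only covers the arguments from $1 onward.
out=$(bash -c 'echo "zero=$0 rest=$@"' _ a b c)
echo "$out"
```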
Put the function in a separate file and get find to execute that.
Shell functions are internal to the shell they're defined in; find will never be able to see them.
I find the easiest way is as follows, running two commands in a single loop:
func_one () {
echo "The first thing with $1"
}
func_two () {
echo "The second thing with $1"
}
find . -type f | while read file; do func_one "$file"; func_two "$file"; done
Not directly, no. find is executing in a separate process, not in your shell.
Create a shell script that does the same job as your function, and find can -exec that.
I would avoid using -exec altogether. Use xargs:
find . -name <script/command you're searching for> -print0 | xargs -0 bash -c 'for f; do dosomething "$f"; done' _
