Getting the error "xargs unterminated quote" when trying to print the number of lines in the terminal - macOS

I want to get the number of lines in my application. I am using this code:
find . "(" -name "*.m" -or -name "*.h" ")" -print | xargs wc -l
It works fine for my other applications, but for one of them it gives the error "xargs unterminated quote".

Does one of your filenames have a quote in it? Try something like this:
find . "(" -name "*.m" -or -name "*.h" ")" -print0 | xargs -0 wc -l
The -print0 argument tells find to use the NULL character to terminate each name that it prints out. The -0 argument tells xargs that its input tokens are NULL-terminated. This avoids issues with characters that otherwise would be treated as special, like quotes.
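To see the difference, here is a minimal reproduction with a hypothetical file name (the exact error text shown comes from the BSD xargs that ships with macOS; GNU xargs words it slightly differently):
mkdir -p /tmp/quotedemo && cd /tmp/quotedemo
touch "it's.m"
find . -name "*.m" -print | xargs wc -l        # xargs: unterminated quote
find . -name "*.m" -print0 | xargs -0 wc -l    # prints the line count for ./it's.m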

This can happen because you have a single quote in a filename somewhere...
i.e., -> '
To find the problem file, run the following in the terminal:
\find . | grep \'
You can also run xargs like this to work around the issue:
xargs -I
It can also happen if you have an alias set up for xargs that is causing the problem. To test whether this is the case, just run xargs with a \ in front of it, e.g.
\find . | \xargs ....
The \ simply means "run the command without any aliases"
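Putting those checks together, a quick sanity-check session might look like this (a sketch; type is a shell builtin that reports whether a name resolves to an alias, a function or a binary):
type xargs          # e.g. "xargs is /usr/bin/xargs"; an alias or function would show up here
\find . | grep \'   # prints any path that contains a single quote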

The canonical way to solve problems with quotes, spaces and special characters when using find is to use the -exec option instead of xargs.
For your case you can use:
find . "(" -name "*.m" -or -name "*.h" ")" -exec wc -l "{}" \;

After some tinkering, I found that this command worked for me (because I had spaces and unmatched quotations in my filenames):
find . -iname "*USA*" -exec cp "{}" /Directory/to/put/file/ \;
. refers to the location from which the search is run
-iname followed by the expression gives the match criteria
-exec cp "{}" /Directory/to/put/file/ \; tells find to execute the copy command for each file found via -iname, with "{}" replaced by the file's path
You need the \; to tell find where the command given to -exec ends.
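Before running something like this for real, it is easy to do a dry run by prefixing the command with echo, so find only prints what it would execute (same example paths as above):
find . -iname "*USA*" -exec echo cp "{}" /Directory/to/put/file/ \;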

Solved by escaping the double quotes in the source, i.e. replacing " with \".

Related

Escaping an apostrophe with xargs?

I'm trying to run a command on a bunch of files:
find . -name "*.ext" | xargs -0 cmd
The above mostly works, but it hangs because one of the folders stupidly has an ' in its name (others have parens and other nonsense).
How do I safely send escaped output to my command? e.g.:
cmd foo\ bar\(baz\)\'\!
[edit] I know I can run find ... -exec cmd {} \; but the actual command I'm running is more complicated and being piped through sed first
You can use a while loop with process substitution to process the NUL-terminated output of find:
while IFS= read -rd '' file; do
    # echo "$file"
    cmd "$file"
done < <(find -iname "*.ext" -print0)
This can handle filenames with all kind of whitespaces, glob characters, newlines or any other special characters.
Note that this requires bash, as process substitution is not supported in the plain Bourne shell.
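Since the question mentions piping through sed first, here is a sketch of how that step could sit inside the loop; cmd, the .ext suffix and the sed expression are placeholders carried over from the question, not real commands (and the sed step assumes the file name contains no newline):
while IFS= read -rd '' file; do
    # hypothetical: derive a second argument from the file name with sed
    out=$(printf '%s\n' "$file" | sed 's/\.ext$/.out/')
    cmd "$file" "$out"
done < <(find . -iname "*.ext" -print0)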
If you have GNU find you can use the -print0 option
find -name "*.ext" -print0 | xargs -0 cmd
Otherwise you would have to ditch xargs. If you have Bash you could use
find -name "*.ext" | while read -a list ; do cmd "${list[@]}" ; done
Note that you do not have to specify the current directory as the starting point: with GNU find, if no starting point is specified, . is assumed.
GNU Parallel was born precisely because of the way xargs deals with ", ' and space:
find . -name "*.ext" | parallel cmd

bash function not executing, returns "command not found"

In my .bash_profile, I have a function that returns all php files containing the parameter string passed in:
summon() {
"find . -name '*.php' -exec grep -ril '$1' '{}' \;"
}
When I am on my command line (mac) and I run summon foo, I get the error:
-bash: find . -name '*.php' -exec grep -ril 'foo' '{}' \;: command not found
But if I just copy/paste the find . -name '*.php' -exec grep -ril 'foo' '{}' \; into the command line, then it works properly, returning all of the php files that contain the string 'foo'.
Does anyone have any idea why the function is not being evaluated?
Just remove the quotes from your summon function. By quoting it, you are telling bash to look for a command called find . -name '*.php' -exec grep -ril '$1' '{}' \; rather than a command called find with the arguments . -name '*.php' -exec grep -ril '$1' '{}' \;. There is a good reason for this behaviour: consider an application whose name contains a space (let's call it foo bar). Without this quoting syntax the program would be harder to run from bash, because typing foo bar would try to run the command foo with the argument bar, as opposed to running foo bar (as a side note, you could also run it by escaping the space: foo\ bar). Of course, it is considered bad form to name an executable something containing a space, precisely because it adds this complexity to running the command.
Your function should look like this:
summon() {
find . -name '*.php' -exec grep -ril "$1" '{}' \;
}
Also see @gniourf_gniourf's comment on this answer for a few more suggestions, including using -type f on the find command to limit the search to files, and removing the unnecessary -r flag from grep, since everything passed to grep here will already be an individual file.
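Putting those suggestions together, the function might look like this (a sketch: -type f restricts the search to regular files, and -r is dropped because grep only ever receives individual files):
summon() {
    find . -type f -name '*.php' -exec grep -il "$1" '{}' \;
}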
Lose the double quotes around the find command within the function.
summon() {
find . -name '*.php' -exec grep -il "$1" '{}' +
}
Within double quotes, the shell performs its usual expansions on the string but then treats the entire result as a single word, i.e. one command name; see Shell-Expansion in the bash manual.
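A minimal illustration of the difference with an ordinary command (the exact error prefix depends on your shell):
"echo hello"   # -bash: echo hello: command not found  (the whole string is taken as one command name)
echo hello     # prints hello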
The argument inside single quotes is a problem. Try it like the below:
summon() {
find . -name '*.php' -exec grep -ril "$1" {} \;
}

issue with piping find into sed (find and replace)

Here is my current code, my goal is to find every file in a given directory (recursively) and replace "FIND" with "REPLACEWITH" and overwrite the files.
FIND='ALEX'
REPLACEWITH='<strong>ALEX</strong>'
DIRECTORY='/some/directory/'
find $DIRECTORY -type f -name "*.html" -print0 |
LANG=C xargs -0 sed -i "s|$FIND|$REPLACEWITH|g"
The error I am getting is:
sed: 1: "/some/directory ...": command a expects \ followed by text
As given in BashFAQ #21, you can use perl to perform search-and-replace operations with no potential for data being treated as code:
in="$FIND" out="$REPLACEWITH" find "$DIRECTORY" -type f -name '*.html' \
-exec perl -pi -e 's/\Q$ENV{"in"}/$ENV{"out"}/g' '{}' +
If you want to include only files matching the FIND string, find can be told to only pass files which grep flags on to perl:
in="$FIND" out="$REPLACEWITH" find "$DIRECTORY" -type f -name '*.html' \
-exec grep -F -q -e "$FIND" '{}' ';' \
-exec perl -pi -e 's/\Q$ENV{"in"}/$ENV{"out"}/g' '{}' +
Because grep is being used to evaluate individual files, it's necessary to use one grep call per file so its exit status can be evaluated on a per-file basis; thus the use of the less efficient -exec ... {} ';' action there. For perl, it's possible to pass multiple files to process on one command line, hence the use of -exec ... {} +.
Note that fgrep is line-oriented; if your FIND string contains multiple lines, then files with any one of those lines will be passed to perl for replacements.
You can have find invoke sed directly although I think all the modification times on your files will be affected (which might matter or not):
find "$DIRECTORY" -type f -name "*.html" -exec sed -i "s|$FIND|$REPLACEWITH|g" '{}' ';'
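As an aside, the sed: 1: ... error format in the question is characteristic of BSD sed, the variant shipped with macOS. BSD sed requires an argument after -i (the backup suffix, which may be empty), so the s|...|g script gets consumed as that suffix and the first file path is then parsed as the sed script, which produces complaints like the one above. If you do want to stay with sed on macOS, the command just above would need an explicit empty suffix (a sketch, with the same caveat that a FIND string containing | or other sed metacharacters will still be treated as code):
find "$DIRECTORY" -type f -name "*.html" -exec sed -i '' "s|$FIND|$REPLACEWITH|g" '{}' ';'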

Find files containing a given text

In bash I want to return file name (and the path to the file) for every file of type .php|.html|.js containing the case-insensitive string "document.cookie" | "setcookie"
How would I do that?
egrep -ir --include=*.{php,html,js} "(document.cookie|setcookie)" .
The r flag means to search recursively (search subdirectories). The i flag means case insensitive.
If you just want file names add the l (lowercase L) flag:
egrep -lir --include=*.{php,html,js} "(document.cookie|setcookie)" .
Try something like grep -r -n -i --include="*.html" --include="*.php" --include="*.js" searchstringhere .
the -i makes it case insensitive
the . at the end means you want to start from your current directory, this could be substituted with any directory.
the -r means do this recursively, right down the directory tree
the -n prints the line number for matches.
the --include lets you add file names, extensions. Wildcards accepted
For more info see: http://www.gnu.org/software/grep/
find them and grep for the string:
This will find all files of your 3 types in /starting/path and grep for the regular expression '(document\.cookie|setcookie)'. Split over 2 lines with the backslash just for readability...
find /starting/path -type f \( -name "*.php" -o -name "*.html" -o -name "*.js" \) | \
xargs egrep -i '(document\.cookie|setcookie)'
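Given that this whole page is about names with quotes and spaces in them, a NUL-delimited variant of the same pipeline is safer (a sketch):
find /starting/path -type f \( -name "*.php" -o -name "*.html" -o -name "*.js" \) -print0 | \
xargs -0 egrep -i '(document\.cookie|setcookie)'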
Sounds like a perfect job for grep or perhaps ack
Or this wonderful construction:
find . -type f \( -name '*.php' -o -name '*.html' -o -name '*.js' \) -exec grep "document.cookie\|setcookie" /dev/null {} \;
find . -type f \( -name '*php' -o -name '*js' -o -name '*html' \) |\
xargs grep -liE 'document\.cookie|setcookie'
Just to include one more alternative, you could also use this:
find "/starting/path" -type f -regextype posix-extended -regex "^.*\.(php|html|js)$" -exec grep -EH '(document\.cookie|setcookie)' {} \;
Where:
-regextype posix-extended tells find what kind of regex to expect
-regex "^.*\.(php|html|js)$" tells find the regex itself filenames must match
-exec grep -EH '(document\.cookie|setcookie)' {} \; tells find to run the command (with its options and arguments) specified between the -exec option and the \; for each file it finds, where {} represents where the file path goes in this command.
while
E option tells grep to use extended regex (to support the parentheses) and...
H option tells grep to print file paths before the matches.
And, given this, if you only want file paths, you may use:
find "/starting/path" -type f -regextype posix-extended -regex "^.*\.(php|html|js)$" -exec grep -EH '(document\.cookie|setcookie)' {} \; | sed -r 's/(^.*):.*$/\1/' | sort -u
Where
| [pipe] send the output of find to the next command after this (which is sed, then sort)
r option tells sed to use extended regex.
s/HI/BYE/ tells sed to replace the first occurrence (per line) of "HI" with "BYE" and...
s/(^.*):.*$/\1/ tells it to replace the regex (^.*):.*$ (meaning: a group [the part enclosed by ()] containing everything [.* = zero or more of any character] from the beginning of the line [^] up to the first ':', followed by anything up to the end of the line [$]) with the first group [\1] of the matched regex.
u tells sort to remove duplicate entries (take sort -u as optional).
...FAR from being the most elegant way. As I said, my intention is to increase the range of possibilities (and also to give more complete explanations on some tools you could use).
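If only the matching file paths are wanted, grep's -l option produces exactly that and makes the sed/sort post-processing unnecessary (a simpler sketch of the same search):
find "/starting/path" -type f -regextype posix-extended -regex "^.*\.(php|html|js)$" -exec grep -lE '(document\.cookie|setcookie)' {} +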

Shell Scripting: Using bash with xargs

I'm trying to write a bash command that will delete all files matching a specific pattern - in this case, it's all of the old vmware log files that have built up.
I've tried this command:
find . -name vmware-*.log | xargs rm
However, when I run the command, it chokes up on all of the folders that have spaces in their names. Is there a way to format the file path so that xargs passes it to rm quoted or properly escaped?
Try using:
find . -name "vmware-*.log" -print0 | xargs -0 rm
This causes find to output a null character after each filename and tells xargs to break up names based on null characters instead of whitespace or other tokens.
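If you are nervous about feeding rm from a pipeline, a dry run is easy: put echo in front of rm so xargs only prints the commands it would run (a sketch):
find . -name "vmware-*.log" -print0 | xargs -0 echo rm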
Do not use xargs. Find can do it without any help:
find . -name "vmware-*.log" -exec rm '{}' \;
Check out the -0 flag for xargs; combined with find's -print0 you should be set.
find . -name "vmware-*.log" -print0 | xargs -0 rm
GNU find
find . -name "vmware-*.log" -delete
find . -name "vmware-*.log" | xargs -i rm -rf {}
find -iname pattern
use -iname for a case-insensitive pattern search
To avoid the space issue with xargs, I'd use the newline character as the separator, via the -d option of GNU xargs:
find . -name "vmware-*.log" | xargs -d '\n' rm
