Using a variable in find - bash

I'm trying to look for a file from a database. I'm getting the data from a php file just fine. It's just this one line I'm having issues with:
directory=`find ./ -type f -name "*$thismodelnormal*" -exec ls -la {} \;`
$thismodelnormal is just a string, but it's dynamic, based on data from the database. Can anyone enlighten me on how to get this done? I've done a good bit of research already and couldn't find a solution; surely somebody has done this before, though.

Adding set -x at the top of my script allowed me to view the commands that were actually being run. In this case my command needed to be
directory=`find ./ -type f -name "*"$thismodelnormal"*" -exec ls -la {} \;`
NOTE the two sets of double quotes: one pair belongs to the find pattern itself and the other surrounds the variable expansion.
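An alternative that avoids the nested quotes entirely is to keep the whole pattern inside a single pair of double quotes, since parameter expansion already happens inside them. A minimal, self-contained sketch (the directory, file name, and model string below are made up for illustration):

```shell
#!/bin/sh
# Hypothetical demo: the variable expands inside one pair of double quotes
mkdir -p /tmp/findvar && touch /tmp/findvar/abc-model42-xyz.txt
thismodelnormal="model42"
directory=$(find /tmp/findvar -type f -name "*${thismodelnormal}*" -exec ls -la {} \;)
echo "$directory"
```

The `${...}` braces also keep the variable name from running into the surrounding `*` characters.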

Related

"find: missing argument to -exec" when executing a shell script

I am trying to execute a shell script which has the following within it:
find /hana/shared/directory -type d -mtime +2 -exec rm -rf {} \;
This works on the other SUSE Linux servers, but not on one of them. It keeps returning the following:
find: missing argument to -exec
If, however, I place the same syntax into a terminal and run it manually, it runs without issue.
I can see this is a common issue, but I believe I have tried many of the suggestions to no avail and I'm a bit stuck now.
Very carefully read find(1), proc(5), and the GNU Bash documentation.
You might want to run (this is dangerous; see below):
find / -type d -mtime +2 -exec /bin/rm -rf '{}' \;
(use at least -ok instead of -exec)
And you probably want to clean just your $HOME.
But you should avoid removing files from /proc/, /sys/, /dev/, /lib/, /usr/, /bin/, and /sbin/. See hier(7) and environ(7).
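One way to rehearse this sort of cleanup safely is to point the same find invocation at a throw-away directory first; the `+` terminator also sidesteps the easy-to-mistype `\;`. A sketch under made-up paths and ages (uses GNU touch's `-d` to backdate a directory):

```shell
#!/bin/sh
# Rehearse on a scratch directory before touching real data (hypothetical path)
mkdir -p /tmp/cleandemo/old /tmp/cleandemo/new
touch -d '3 days ago' /tmp/cleandemo/old   # GNU touch: make "old" look stale
# -mtime +2 matches directories last modified more than 2 days ago
find /tmp/cleandemo -mindepth 1 -maxdepth 1 -type d -mtime +2 -exec rm -rf {} +
```

After it runs, only /tmp/cleandemo/new is left; swapping `-exec` for `-ok` would additionally prompt before each removal.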

Using fish shell builtins with find exec

I'm trying to source a file that I can get from the output of find using these commands:
find ./ -iname activate.fish -exec source {} \;
and
find ./ -iname activate.fish -exec builtin source {} \;
But both of these commands give errors of the form find: ‘source’: No such file or directory or find: ‘builtin’: No such file or directory. It seems like find's -exec is not able to recognize fish's builtins?
What I basically want to achieve is a single command that will search for Python's virtualenv activate scripts in the current directory and execute them.
So doing something like -exec fish -c 'source {}' \; would not help either. I've tried it as well; it doesn't error out, but it does not make the changes either.
Any ideas what can be done for this ?
Thanks!
Perhaps you need:
for file in (find ./ -iname activate.fish)
source $file
end
# or
find ./ -iname activate.fish | while read file
source $file
end
Command substitution executes the command, splits on newlines, and returns that list.
As mentioned in the comments, it seems that -exec does not run in, or affect, the current shell environment. So find -exec is not going to work for my use case.
Instead, this will work:
source (find ./ -iname activate.fish)
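The same limitation exists in POSIX shells, and the same workaround applies: source the file in the current shell rather than through -exec. A minimal sketch with a made-up file name (`activate.env` is hypothetical, standing in for the real activate script):

```shell
#!/bin/sh
# -exec runs the command in a child process, so sourcing there cannot affect
# this shell; sourcing the result of a command substitution does.
mkdir -p /tmp/srcdemo
printf 'DEMO_VAR=hello\n' > /tmp/srcdemo/activate.env
. "$(find /tmp/srcdemo -name activate.env | head -n1)"
echo "$DEMO_VAR"
```

The `head -n1` guards against the substitution expanding to more than one path when several matches exist.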

Passing a command to 'find -exec' through a variable does not work

given a directory $HOME/foo/ with files in it.
the command:
find $HOME/foo -type f -exec md5deep -bre {} \;
works fine and hashes the files.
but, creating a variable for -exec does not seem to work:
md5="md5deep -bre"
find $HOME/foo -type f -exec "$md5" {} \;
returns: find: md5deep -bre: No such file or directory
why?
Since you are enclosing your variable in double quotes, the entire string gets sent to find as a single token following -exec and find treats it as the name of the command. To resolve the issue, simply remove the double quotes around your variable:
find "$HOME/foo" -type f -exec $md5 {} \;
In general, it is not good to store commands in shell variables. See BashFAQ/050.
Use an array.
md5Cmd=(md5deep -bre)
find "$HOME/foo" -type f -exec "${md5Cmd[@]}" {} \;
Better still, make the whole -exec statement optional:
md5Cmd=( -exec md5deep -bre {} \; )
find "$HOME/foo" -type f "${md5Cmd[@]}"
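A self-contained way to check the array approach (with sha1sum standing in for md5deep, since md5deep may not be installed, and a made-up scratch directory):

```shell
#!/bin/bash
# Store the command as an array; "${md5Cmd[@]}" expands to separate words,
# so find sees -exec, sha1sum, {}, and ; as distinct arguments.
mkdir -p /tmp/arrdemo && echo hi > /tmp/arrdemo/a.txt
md5Cmd=( -exec sha1sum {} \; )
find /tmp/arrdemo -type f "${md5Cmd[@]}"
```

Note the `\;` inside the array assignment: it becomes a literal `;` element, which is exactly the terminator find expects.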
I have found the syntax of find -exec a bit weird (with several pitfalls such as the ones @codeforester has mentioned).
So, as an alternative, I tend to separate the search part from the action part by piping the output of find (or grep) to an xargs process.
For example, I find this more readable (-n1 uses exactly one argument per command):
find "$HOME/foo" -type f | xargs -n1 md5deep -bre
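One pitfall of the plain pipe is file names containing spaces or newlines; GNU find and xargs can NUL-delimit the stream to handle those. A sketch on a throw-away directory, with sha1sum standing in for md5deep:

```shell
#!/bin/sh
# -print0 emits NUL-terminated names; xargs -0 reads them back, so a name
# with a space survives the pipeline intact.
mkdir -p /tmp/x0demo && echo data > '/tmp/x0demo/a file.txt'
find /tmp/x0demo -type f -print0 | xargs -0 -n1 sha1sum
```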

Shell script for copying files from one directory to another

I am trying to write a shell script to copy files with a specific name and creation/modification date from one folder to another, but I am finding it hard to work out how to do this.
Here is what I have tried so far:
srcdir="/media/ubuntu/CA52057F5205720D/Users/st4r8_000/Desktop/26 nov"
dstdir="/media/ubuntu/ubuntu"
find ./ -type f -name 'test*.csv' -mtime -1
Now my question is: is it possible to put that find command into an if condition so I can act on the files it finds?
I am very new to shell scripting. Any help would be really appreciated.
What I found useful for this is the following code. I am sharing it here so that someone who is new, like me, can take some help from it:
#!/bin/bash
srcdir="/media/ubuntu/CA52057F5205720D/Users/st4r8_000/Desktop/office work/26 nov"
dstdir="/media/ubuntu/ubuntu"
find "$srcdir" -type f -name 'test*.csv' -mtime -1 -exec cp -v {} "$dstdir" \;
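The script can be rehearsed end to end against disposable directories before pointing it at the real paths; everything below (paths and file names) is made up for the dry run:

```shell
#!/bin/sh
# Dry run of the same find/cp pipeline on throw-away directories
srcdir=/tmp/cpdemo/src; dstdir=/tmp/cpdemo/dst
mkdir -p "$srcdir" "$dstdir"
touch "$srcdir/test1.csv" "$srcdir/other.txt"
# -mtime -1 matches files modified less than a day ago, so the fresh
# test1.csv qualifies and other.txt is excluded by the -name pattern
find "$srcdir" -type f -name 'test*.csv' -mtime -1 -exec cp -v {} "$dstdir" \;
```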

Moving large number of files [duplicate]

This question already has answers here:
Argument list too long error for rm, cp, mv commands
(31 answers)
Closed 3 years ago.
If I run the command mv folder2/*.* folder, I get "argument list too long" error.
I found some examples for ls and rm that deal with this error using find folder2 -name "*.*", but I have trouble applying them to mv.
find folder2 -name '*.*' -exec mv {} folder \;
-exec runs any command, {} inserts the filename found, \; marks the end of the exec command.
The other find answers work, but they are horribly slow for a large number of files, since they execute one command for each file. A much more efficient approach is either to use + as the terminator of -exec, or to use xargs:
# Using find ... -exec +
find folder2 -name '*.*' -exec mv --target-directory=folder '{}' +
# Using xargs
find folder2 -name '*.*' | xargs mv --target-directory=folder
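A quick self-contained check of the + form on made-up paths (mv -t is GNU mv's short spelling of --target-directory):

```shell
#!/bin/sh
# + batches many file names into a single mv invocation instead of one per file
mkdir -p /tmp/mvdemo/src /tmp/mvdemo/dst
touch /tmp/mvdemo/src/a.txt /tmp/mvdemo/src/b.txt
find /tmp/mvdemo/src -name '*.*' -exec mv -t /tmp/mvdemo/dst {} +
```

The -t form is needed here because with + the file names must come last, so the destination cannot follow them as it does in the \; form.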
find folder2 -name '*.*' -exec mv \{\} /dest/directory/ \;
First, thanks to Karl's answer. I have only a minor correction to it.
My scenario:
Millions of folders inside /source/directory, each containing subfolders and files. The goal is to move them while keeping the same directory structure.
To do that I use such command:
find /source/directory -mindepth 1 -maxdepth 1 -name '*' -exec mv {} /target/directory \;
Here:
-mindepth 1 : makes sure you don't move the root folder itself
-maxdepth 1 : makes sure you search only for first-level children, so everything inside them is moved along with them without find having to descend into it
The commands suggested in the answers above flattened the resulting directory structure, which was not what I was looking for, so I decided to share my approach.
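A sketch of that behavior on throw-away paths: only the first-level entries are moved, and their internal structure travels with them (directory names here are invented):

```shell
#!/bin/sh
# Move first-level children only; nested content goes along untouched
mkdir -p /tmp/treedemo/src/sub1/inner /tmp/treedemo/dst
touch /tmp/treedemo/src/sub1/inner/f.txt
find /tmp/treedemo/src -mindepth 1 -maxdepth 1 -exec mv {} /tmp/treedemo/dst \;
```

Afterwards /tmp/treedemo/dst/sub1/inner/f.txt exists, i.e. the subtree arrived intact rather than flattened.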
This one-liner should work for you.
Yes, it is quite slow, but it works even with millions of files.
for i in /folder1/*; do mv "$i" /folder2; done
It will move all the files from folder /folder1 to /folder2.
find didn't work for me with really long lists of files; it gave me the same "Argument list too long" error. Using a combination of ls, grep, and xargs worked instead:
$ ls|grep RadF|xargs mv -t ../fd/
It did the trick moving about 50,000 files where mv and find alone failed.