What is this adddate function doing in this code? - bash

Can anyone help me out by explaining what the adddate() function is doing in this piece of code? Can anyone walk me through it line by line, especially the while IFS= read -r line part?
What are the 3 or more problems with this script?
What is a better/different way to solve this task?
Thanks a lot guys!
#!/bin/bash
adddate() {
    while IFS= read -r line; do
        echo "$(date) $line"
    done
}
for file in $( find /tmp/ -type f -mtime +5 -name '*.fish.temp' )
do
    ls -la $file | adddate >> /tmp/clean.log
done
find /tmp/ -type f -mtime +5 -name '*.fish.temp' | xargs rm
exit 0

adddate is a bash shell function, used below to pipe the output of ls through it so that the date is prepended to each line; it builds a new clean.log with the date included (the date at the time the script ran, not the time of the actual log entry - this may be your 1st issue):
ls -la $file | adddate >> /tmp/clean.log
2nd - the while IFS= read -r line construct has been explained on stackoverflow/6830735
3rd issue - I would say is the duplicated find command. I would execute the find command only once, since depending on how deep the directory tree is, it may take some time.
4th issue might be the fact that exit 0 is redundant: a script exits with the status of its last command anyway, and all successful processes exit with 0 by default. Worse, it forces a success status even when find or rm failed.
5th issue is an optimization that can be made to find:
find /tmp/ -type f -mtime +5 -name '*.fish.temp' | xargs rm
so that it executes in one line like:
find /tmp/ -type f -mtime +5 -name '*.fish.temp' -exec rm {} \;
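Putting these fixes together, a minimal sketch that runs find only once and copes with unusual filenames could look like this (the log format is an assumption, not part of the original script):
find /tmp/ -type f -mtime +5 -name '*.fish.temp' -print0 |
while IFS= read -r -d '' file; do
    # Log one timestamped line per file, then delete it in the same pass.
    printf '%s %s\n' "$(date)" "$(ls -la "$file")" >> /tmp/clean.log
    rm -- "$file"
done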
A bash alias, by the way, is nothing but a shortcut to commands.
UPDATE-1:
what is the "-r" argument for"
Also on man read (thanks to urbanespaceman), it means that if you have in your stream (string), something like \n to be interpreted like 2 characters (\ and n, and not the special character newline.
-r Do not treat a <backslash> character in any special way. Consider each <backslash> to be part of the input line.
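A quick demonstration of the difference, using printf to feed read a line containing a literal backslash:
printf 'a\\nb\n' | { read line; echo "$line"; }     # prints: anb  (backslash consumed)
printf 'a\\nb\n' | { read -r line; echo "$line"; }  # prints: a\nb (backslash kept)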
UPDATE-2:
are there any security issues with this script?
I guess it depends on how it is used and how often. You're appending to /tmp/clean.log, so you can easily run out of disk space if this is abused. Also, you're removing from your system whatever matches the pattern. And you exit 0 regardless of how find or any other command there exited. Is that what you want?
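One concrete hardening step: /tmp is world-writable, so a filename containing spaces or newlines can break the plain xargs rm pipeline. NUL-delimited handling, or GNU find's -delete, avoids that:
find /tmp/ -type f -mtime +5 -name '*.fish.temp' -print0 | xargs -0 rm --
# or, with GNU find:
find /tmp/ -type f -mtime +5 -name '*.fish.temp' -delete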

It's looking for a list of all files under /tmp/ that were last modified more than 5 days ago (-mtime +5) and whose names end in ".fish.temp".
For each of these files it writes the ls output to /tmp/clean.log, prepending the timestamp from the date command. (The -la really isn't needed here, I don't think.)
Then it runs the same find command and runs the results through rm to delete the files.
Finally it exits with a success code.
Step 3 is dodgy actually, as the second find could return different results from the first, depending on how often files in that dir are added/changed, how long the process takes to run, etc. The removal should be included in the for loop.
IFS defines the separator - when set to blank, read won't strip leading or trailing whitespace from the line, so the only delimiter left is the end of the line.
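A small illustration of what IFS= changes for read:
printf '  indented\n' | { IFS= read -r line; echo "[$line]"; }   # prints: [  indented]
printf '  indented\n' | { read -r line; echo "[$line]"; }        # prints: [indented]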

Related

Find command in for loop

I have a bash script that is intended to find all files older than "X" minutes and redirect the output into a file. The logic is: I have a for loop and I want to run find over all the directories, but for some reason it prints and redirects to the output file only the files from the last directory (TESTS[3]="/tmp/test/"). I want the files from all the directories to be redirected there. Thank you for the help :D
Here is the script:
#!/bin/bash
set -x
if [ ! -d $TEST ]
then
    echo "The directory does not exist (${TEST})!"
    echo "Aborted."
    exit 1
fi
TESTS[0]="/tmp/t1/"
TESTS[1]="/tmp/t2/"
TESTS[2]="/tmp/t3/"
TESTS[3]="/tmp/test/"
for TEST in "${TESTS[@]}"
do
    find $TEST -type f -mmin +1 -exec ls -ltrah {} \; > /root/alex/out
done
You are using > inside the loop to redirect the output of the latest command to the file each time, overwriting the previous contents of the file. If you used >> it would open the file in "append" mode each time instead, but...
A better way to fix your issue would be by moving the redirection to outside the loop:
done > /root/alex/out
And an even better way than that would be to avoid a loop entirely and just use:
find "${TESTS[#]}" -type f -mmin +1 -exec ls -ltrah {} \; > /root/alex/out
Since find accepts multiple paths.
I think you can use {} + instead of {} \; to invoke the minimum number of ls processes needed to handle all the arguments, and you might want to check -printf in man find, because you can probably get a similar output using built-in format specifiers without calling ls at all.
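For example, a rough equivalent without calling ls at all (the exact format fields here are an assumption about the output you want; see man find for the full list of specifiers):
find "${TESTS[@]}" -type f -mmin +1 -printf '%M %u %g %10s %TY-%Tm-%Td %TH:%TM %p\n' > /root/alex/out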

shell script does not find the directory

I'm starting out with shell scripting. I need to compute checksums for a lot of files, so I thought I'd automate the process with a shell script.
I made two scripts. The first script runs a recursive ls with an egrep -v, taking the path I enter as a parameter; the command's output is saved in a variable as a string, and then a for loop cuts that string into lines and passes each line as a parameter to the second script. The second script takes that parameter and passes it to the hashdeep command, whose output is again saved in a variable as a string and split using IFS; finally I take the field of interest and append it to a text file.
The output is:
/home/douglas/Trampo/shell_scripts/2016-10-27-001757.jpg: No such file
or directory
----Checksum FILE: 2016-10-27-001757.jpg
----Checksum HASH:
The issue is: I pass the directory ~/Pictures as the parameter, but the error shows a different directory, /home/douglas/Trampo/shell_scripts/ (the script's own directory). The file 2016-10-27-001757.jpg is actually in ~/Pictures, so why is the script looking in its own directory?
First script:
#!/bin/bash
arquivos=$(ls -R $1 | egrep -v '^d')
for linha in $arquivos
do
    bash ./task2.sh $linha
done
second script:
#!/bin/bash
checksum=$(hashdeep $1)
concatenado=''
for i in $checksum
do
    concatenado+=$i
done
IFS=',' read -ra ADDR <<< "$concatenado"
echo
echo '----Checksum FILE:' $1
echo '----Checksum HASH:' ${ADDR[4]}
echo
echo ${ADDR[4]} >> ~/Trampo/shell_scripts/txt2.txt
I think that's it... sorry about the English grammar errors.
I hope the question has become clear.
Thanks in advance!
There are several things wrong in the first script alone.
When running ls in recursive mode using -R, the output is listed per directory, and each file is listed relative to its parent directory instead of by its full pathname.
ls -R doesn't list in long format, so | egrep -v '^d' - which seems intended to keep files (non-directories) - doesn't do what you expect.
In your specific case, the missing file 2016-10-27-001757.jpg is in a subdirectory, but you lost its location by using ls -R.
Do not parse the output of ls. Use find and you won't have the same issue.
First script can be replaced by a single line.
Try this:
#!/bin/bash
find "$1" -type f -exec ./task2.sh "{}" \;
Or if you prefer using xargs, try this:
#!/bin/bash
find "$1" -type f -print0 | xargs -0 -n1 -I{} ./task2.sh "{}"
Note: with -exec (and with xargs -I{}), each filename reaches task2.sh as a single argument even if it contains spaces; the quotes around {} are harmless but not strictly required.
In task2.sh the parameter $1 should also be quoted "$1".
If task2.sh is executable, you are all set. If not, add bash in the line so it reads as:
find "$1" -type f -exec bash ./task2.sh "{}" \;
If task2.sh is missing its execute permission (its permissions weren't shown in the question), add one by running chmod like:
chmod a+x task2.sh
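For completeness, here is a sketch of how task2.sh itself could be simplified. This assumes hashdeep's default CSV output (size,md5,sha256,filename, preceded by header lines starting with %%%% or ##); double-check against your hashdeep version:
#!/bin/bash
file="$1"
# Drop hashdeep's header lines, keep the CSV record for this file.
record=$(hashdeep "$file" | grep -v '^[%#]')
# Split the record on commas into its fields.
IFS=',' read -r size md5 sha256 name <<< "$record"
echo
echo '----Checksum FILE:' "$file"
echo '----Checksum HASH:' "$sha256"
echo
echo "$sha256" >> ~/Trampo/shell_scripts/txt2.txt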
Good luck.

find folders and cd into them

I wanted to write a short script with the following structure:
find the right folders
cd into them
replace an item
So my problem is that I get the right folders from find, but I don't know how to run the action for every line find gives me. I tried it with a for loop like this:
for item in $(find command)
do magic for item
done
but the problem is that this command prints relative pathnames, and if there is a space within a path, it splits the path at that point.
I hope you understood my problem and can give me a hint.
You can run commands with -exec option of find directly:
find . -name some_name -exec your_command {} \;
One way to do it is:
find command -print0 |
while IFS= read -r -d '' item ; do
... "$item" ...
done
-print0 and read ... -d '' cause the NUL character to be used to separate paths, and ensure that the code works for all paths, including ones that contain spaces and newlines. Setting IFS to empty and using the -r option to read prevents the paths from being modified by read.
Note that the while loop runs in a subshell, so variables set within it will not be visible after the loop completes. If that is a problem, one way to solve it is to use process substitution instead of a pipe:
while IFS= ...
...
done < <(find command -print0)
Another option, if you have got Bash 4.2 or later, is to use the lastpipe option (shopt -s lastpipe) to cause the last command in pipelines to be run in the current shell.
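Putting that together for the original task (find the folders, cd into them, replace an item) - the directory pattern and the copy command are placeholders for whatever you actually need:
while IFS= read -r -d '' dir; do
    (
        cd "$dir" || exit
        cp /path/to/new_item ./item    # "replace an item" stands in for your real action
    )
done < <(find . -type d -name '*pattern*' -print0)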
If the pattern you want to find is simple enough and you have bash 4 you may not need find. In that case, you could use globstar instead for recursive globbing:
#!/bin/bash
shopt -s globstar
for directory in **/*pattern*/; do
    (
        cd "$directory"
        do stuff
    )
done
The parentheses make each operation happen in a subshell. That may have performance cost, but usually doesn't, and means you don't have to remember to cd back each time.
If globstar isn't an option (because your find instructions are not a simple pattern, or because you don't have a shell that supports it) you can use find in a similar way:
find . -whatever -exec bash -c 'cd "$1" && do stuff' _ {} \;
You could use + instead of ; to pass multiple arguments to bash each time, but doing one directory per shell (which is what ; would do) has similar benefits and costs to using the subshell expression above.
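With +, the shell receives several directories per invocation, so it needs its own loop - a sketch, with "do stuff" again standing in for the real commands:
find . -whatever -exec bash -c 'for d; do (cd "$d" && do stuff); done' _ {} +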

Loop over directories with whitespace in Bash

In a bash script, I want to iterate over all the directories in the present working directory and do stuff to them. They may contain special symbols, especially whitespace. How can I do that? I have:
for dir in $( ls -l ./)
do
if [ -d ./"$dir" ]
but this skips my directories with whitespace in their name. Any help is appreciated.
Give this a try:
for dir in */
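Globbing handles whitespace safely - each matching directory arrives as a single word:
for dir in */
do
    [ -d "$dir" ] || continue    # skip if nothing matched
    echo "Processing: $dir"
done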
Take your pick of solutions:
http://www.cyberciti.biz/tips/handling-filenames-with-spaces-in-bash.html
The general idea is to change the default separator (IFS).
#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for f in *
do
    echo "$f"
done
IFS=$SAVEIFS
There are multiple ways. Here is something that is very fast:
find /your/dir -type d -print0 | xargs -0 echo
This will scan /your/dir recursively for directories and pass all the paths to the command echo (exchange it for what you need). It may call echo multiple times, but it will try to pass as many directory names at once as the command line allows. This is extremely fast because few processes need to be started, but it only works with programs that can take an arbitrary number of paths as arguments.
-print0 tells find to separate file paths using a zero byte (and -0 tells xargs to read arguments separated by zero bytes)
If you don't have such a program (one that accepts an arbitrary number of arguments), you can do this:
find /your/dir -type d -print0 | xargs -0 -n 1 echo
or
find /your/dir -type d -exec echo '{}' ';'
The option -n 1 will tell xargs not to pass more arguments than one at the same time to your program.
If you don't want find to scan recursively, you can pass -maxdepth 1 to disable recursion.
Though if that's usable in your particular script is another question ;-).

How do I apply a shell command to many files in nested (and poorly escaped) subdirectories?

I'm trying to do something like the following:
for file in `find . *.foo`
do
    somecommand $file
done
But the command isn't working because $file is very odd. Because my directory tree has crappy file names (including spaces), I need to escape the find command. But none of the obvious escapes seem to work:
-ls gives me the space-delimited filename fragments
-fprint doesn't do any better.
I also tried: for file in "find . *.foo -ls"; do echo $file; done
- but that gives all of the responses from find in one long line.
Any hints? I'm happy for any workaround, but am frustrated that I can't figure this out.
Thanks,
Alex
(Hi Matt!)
You have plenty of answers that explain well how to do it; but for the sake of completion I'll repeat and add to it:
xargs is only ever useful for interactive use (when you know all your filenames are plain - no spaces or quotes) or when used with the -0 option. Otherwise, it'll break everything.
find is a very useful tool; but using it to pipe filenames into xargs (even with -0) is rather convoluted, as find can do it all itself with either -exec command {} \; or -exec command {} + depending on what you want:
find /path -name 'pattern' -exec somecommand {} \;
find /path -name 'pattern' -exec somecommand {} +
The former runs somecommand with one argument for each file recursively in /path that matches pattern.
The latter runs somecommand with as many arguments as fit on the command line at once for files recursively in /path that match pattern.
Which one to use depends on somecommand. If it can take multiple filename arguments (like rm, grep, etc.) then the latter option is faster (since you run somecommand far less often). If somecommand takes only one argument then you need the former solution. So look at somecommand's man page.
More on find: http://mywiki.wooledge.org/UsingFind
In bash, for is a statement that iterates over arguments. If you do something like this:
for foo in "$bar"
you're giving for one argument to iterate over (note the quotes!). If you do something like this:
for foo in $bar
you're asking bash to take the contents of bar and tear it apart wherever there are spaces, tabs or newlines (technically, whatever characters are in IFS), and use the pieces of that operation as arguments to for. Those pieces are NOT filenames. Assuming that tearing a long string of filenames apart wherever there is whitespace yields a pile of filenames is just wrong - as you have just noticed.
The answer is: Don't use for, it's obviously the wrong tool. The above find commands all assume that somecommand is an executable in PATH. If it's a bash statement, you'll need this construct instead (iterates over find's output, like you tried, but safely):
while read -r -d ''; do
    somebashstatement "$REPLY"
done < <(find /path -name 'pattern' -print0)
This uses a while-read loop that reads parts of the string find outputs until it reaches a NULL byte (which is what -print0 uses to separate the filenames). Since NULL bytes can't be part of filenames (unlike spaces, tabs and newlines) this is a safe operation.
If you don't need somebashstatement to be part of your script (eg. it doesn't change the script environment by keeping a counter or setting a variable or some such) then you can still use find's -exec to run your bash statement:
find /path -name 'pattern' -exec bash -c 'somebashstatement "$1"' -- {} \;
find /path -name 'pattern' -exec bash -c 'for file; do somebashstatement "$file"; done' -- {} +
Here, the -exec executes a bash command with three or more arguments:
1. The bash statement to execute.
2. A --. bash will put this in $0; you can put anything you like here, really.
3. Your filename or filenames (depending on whether you used {} \; or {} +, respectively). The filename(s) end up in $1 (and $2, $3, ... if there is more than one, of course).
The bash statement in the first find command here runs somebashstatement with the filename as argument.
The bash statement in the second find command here runs a for(!) loop that iterates over the positional parameters (that's what the reduced for syntax - for foo; do - does) and runs somebashstatement once per filename. The difference from the very first find statement I showed with -exec {} + is that we run only one bash process for lots of filenames, but still one somebashstatement for each of those filenames.
All this is also well explained in the UsingFind page linked above.
Instead of relying on the shell to do that work, rely on find to do it:
find . -name "*.foo" -exec somecommand "{}" \;
Then the file name will be properly escaped, and never interpreted by the shell.
find . -name '*.foo' -print0 | xargs -0 -n 1 somecommand
It does get messy if you need to run a number of shell commands on each item, though.
xargs is your friend. You will also want to investigate the -0 (zero) option with it. find (with -print0) will help to produce the list. The Wikipedia page has some good examples.
Another useful reason to use xargs is that if you have many files (dozens or more), it will split them up into separate calls to whatever command xargs is told to run (in the first Wikipedia example, rm).
find . -name '*.foo' -print0 | xargs -0 sh -c 'for F in "$@"; do ...; done' "${0}"
(The trailing "${0}" just fills sh's $0 slot, so the filenames land in $1 and onward.)
I had to do something similar some time ago, renaming files to allow them to live in Win32 environments:
#!/bin/bash
IFS=$'\n'
function RecurseDirs
{
    for f in "$@"
    do
        newf=$(echo "${f}" | sed -e 's/[\\/:\*\?#"\|<>]/_/g')
        if [ "${newf}" != "${f}" ]; then
            echo "${f}" "${newf}"
            mv "${f}" "${newf}"
            f="${newf}"
        fi
        if [[ -d "${f}" ]]; then
            cd "${f}"
            RecurseDirs $(ls -1 ".")
        fi
    done
    cd ..
}
RecurseDirs .
This is probably a little simplistic, doesn't avoid name collisions, and I'm sure it could be done better -- but this does remove the need to use basename on the find results (in my case) before performing my sed replacement.
I might ask, what are you doing to the found files, exactly?
