I'm running CentOS 7. I need a cron job that moves everything from /media/tmp to /media/tv except the .grab and Transcode folders. Yesterday I thought that the following worked, but today it moves the Transcode folder as well.
mv -n /media/tmp/*!(Transcode)!(.grab) /media/tv/
I've found that the above does not work as a cron job as the '(' causes an error. I learned that I needed to escape those, but now I get
mv: cannot stat ‘/media/tmp/!(Transcode)!(.grab)’: No such file or directory
My current attempt at a bash script is
#!/bin/bash
mv -n '/media/tmp'/*!\(Transcode\)!\(.grab\) '/media/tv/'
My understanding is that the * is the problem, but quoting the file path with either ' or " doesn't fix it the way the post I found said it would.
Any ideas on how to get this to work correctly?
You're trying to use extglob, which may not be enabled for cron. I would avoid that option entirely and instead iterate over the glob, skipping unwanted entries with a negated (!) regex match.
for file in /media/tmp/*; do
    [[ ! "$file" =~ Transcode|\.grab$ ]] && mv -n "$file" /media/tv/
done
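If you do want to keep extglob, note that the pattern itself also needs fixing: !(Transcode|.grab) is the extglob syntax for "anything except these names", whereas *!(Transcode)!(.grab) still matches everything. A minimal sketch (assuming cron runs this bash script rather than putting the glob in the crontab; the shopt line must come before the line that uses the pattern, since bash parses a script command by command):
#!/bin/bash
shopt -s extglob  # extglob is not enabled by default in scripts
mv -n /media/tmp/!(Transcode|.grab) /media/tv/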
I'd just do it as something simple like (untested):
mkdir -p /media/tv || exit 1
for i in /media/tmp/*; do
    case $(basename "$i") in
        Transcode|.grab ) ;;
        * ) mv -n -- "$i" /media/tv ;;
    esac
done
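With either script saved somewhere like /usr/local/bin/move-tv.sh (hypothetical path) and made executable, the cron entry is just a schedule plus the script path, e.g. to run every 10 minutes:
*/10 * * * * /usr/local/bin/move-tv.sh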
Related
I want to move all JSON files created within a Jenkins job to a different folder.
It is possible that the job does not create any JSON file.
In that case the mv command raises an error, and so the job fails.
How do I prevent the mv command from raising an error in case no file is found?
Welcome to SO.
Why do you not want the error?
If you just don't want to see the error, then you could always throw it away with 2>/dev/null, but PLEASE don't do that. Not every error is the one you expect, and silencing them all is a debugging nightmare. You could instead write stderr to a log with 2>$logpath, then build in logic to read the log and make certain the expected error is the only one there, ignoring or responding accordingly --
mv *.json /dir/ 2>$someLog
executeMyLogParsingFunction # verify expected err is the ONLY err
If it's because you have set -e or a trap in place, and you know it's ok for the mv to fail (which might not be because there is no file!), then you can use this trick -
mv *.json /dir/ || echo "(Error ok if no files found)"
or
mv *.json /dir/ ||: # : is a no-op synonym for "true" that returns 0
see https://www.gnu.org/software/bash/manual/html_node/Conditional-Constructs.html
(If it's failing simply because the mv is returning a nonzero as the last command, you could also add an explicit exit 0, but don't do that either - fix the actual problem rather than patching the symptom. Any of these other solutions should handle that, but I wanted to point out that unless there's a set -e or a trap that catches the error, it shouldn't cause the script to fail unless it's the very last command.)
Better would be to specifically handle the problem you expect without disabling error handling on other problems.
shopt -s nullglob # globs with no match do not eval to the glob as a string
for f in *.json; do mv "$f" /dir/; done # no match means no loop entry
c.f. https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html
or if you don't want to use shopt,
for f in *.json; do [[ -e "$f" ]] && mv "$f" /dir/; done
Note that I'm only testing existence, so that will include any match, including directories, symlinks, named pipes... you might want [[ -f "$f" ]] && mv "$f" /dir/ instead.
c.f. https://www.gnu.org/software/bash/manual/html_node/Bash-Conditional-Expressions.html
This is expected behavior: when there are no matches, the shell leaves *.json unexpanded precisely so that mv can show a useful error.
If you don't want that, though, you can always check the list of files yourself, before passing it to mv. As an approach that works with all POSIX-compliant shells, not just bash:
#!/bin/sh
# using a function here gives us our own private argument list.
# that's useful because minimal POSIX sh doesn't provide arrays.
move_if_any() {
    dest=$1; shift # shift makes the old $2 be $1, the old $3 be $2, etc.
    # so, we then check how many arguments were left after the shift;
    # if it's only one, we need to also check whether it refers to a filesystem
    # object that actually exists.
    if [ "$#" -gt 1 ] || [ -e "$1" ] || [ -L "$1" ]; then
        mv -- "$@" "$dest"
    fi
}
# put destination_directory/ in $1 where it'll be shifted off
# $2 will be either nonexistent (if we were really running in bash with nullglob set)
# ...or the name of a legitimate file or symlink, or the string '*.json'
move_if_any destination_directory/ *.json
...or, as a more bash-specific approach:
#!/bin/bash
files=( *.json )
if (( ${#files[@]} > 1 )) || [[ -e ${files[0]} || -L ${files[0]} ]]; then
    mv -- "${files[@]}" destination/
fi
Loop over all JSON files and move each one, if it exists, in a one-liner:
for X in *.json; do [[ -e $X ]] && mv "$X" /dir/; done
I am trying to copy all files in a directory starting with a certain prefix, using a wildcard. Here is my script
#!/bin/bash
path="/home/scoubidou/recovered/"
prefix="f"
for i in "$#"
do
    if [ ! -d "$path$prefix$i" ]; then
        mkdir $path$prefix$i
    fi
    echo $path$prefix$i* $path$prefix$i
    mv $path$prefix$i* $path$prefix$i
done
However, this is not working. The wildcard seems not to be expanded, and the expression is treated as a literal string. Note that the command works just fine in the terminal.
Try quoting the variable expansions but leaving the * outside the quotes so it still globs:
mv "$path$prefix$i"* "$path$prefix$i"
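That said, the more likely culprit is the loop header: for i in "$#" iterates once, over the number of arguments, rather than over the arguments themselves ("$@"). A corrected sketch of the same loop (the ?* instead of * is a small tweak so the glob cannot match the destination directory itself):
#!/bin/bash
path="/home/scoubidou/recovered/"
prefix="f"
for i in "$@"; do                          # "$@" expands to each argument in turn
    mkdir -p "$path$prefix$i"              # -p: no error if the directory already exists
    mv "$path$prefix$i"?* "$path$prefix$i" # ?* requires at least one character after the prefix
done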
I need to check if a file exists in a directory using bash. I have tried the method below, but it needs the complete path as input.
if [ -e /*/my_file.txt ] ;
then
    echo "file found"
else
    echo "not found"
fi
Is there any way that I can check if the file exists at any depth, dynamically?
NOTE: I don't want to use "find" as it takes a lot of time to execute.
If you are using bash 4, you can write patterns that recursively descend a hierarchy:
shopt -s globstar
found=0
for f in /**/my_file.txt; do
    if [[ -e $f ]]; then
        found=1
        echo "File found"
        break
    fi
done
if [[ $found -ne 1 ]]; then
    echo "File not found"
fi
Using find:
found=$( find / -name my_file.txt )
if [[ -n $found ]]; then
    echo "File found"
else
    echo "File not found"
fi
If speed really is your concern, file globbing like ls * */* */*/* is not going to help you much, and it has its limits, failing with the error Argument list too long. find is a useful and very flexible tool for finding stuff, but like file globbing it has to scan the directory tree on every invocation. For occasional searches, like maintenance, this is totally acceptable. But if this is part of a processing pipeline, the speed is not. You need an optimised database for that.
The simplistic way
Almost every UNIX I know ships with locate.
If it is preinstalled, you can search like this:
$ locate -b '\my_file.txt'
The backslash in front of my_file.txt is intended. It switches off wildcard search. Adding -i gives case insensitive search.
If the command is not available, it should be installable from your OS repository. For Debian/Ubuntu: apt install locate. For the first initialisation, run /etc/cron.daily/locate as root or with sudo.
The database is updated on a daily basis. For some applications this interval is probably too long; by moving the cron job from daily to, say, every 3 hours, you get more recent results.
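If you need fresher results right now, you can also rebuild the index by hand; the exact command depends on the locate implementation your distribution ships, but it is typically:
sudo updatedb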
The realtime way ...
This is a bit out of the scope of this answer, but you would need some kind of daemon that watches kernel inotify events for directory changes. These in turn would be reflected in a database that can be queried through some API, like Spotlight on macOS or Tracker on GNOME.
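As a rough sketch of the watching half only, the inotify-tools package provides inotifywait; this just prints the events, and feeding them into your own index is left as the hard part:
inotifywait -m -r -e create,delete,moved_to,moved_from /some/directory
-m keeps it running (monitor mode) and -r watches subdirectories recursively.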
find is the proper solution.
However, you can use the bash expansion feature:
if ls */* | grep -q my_file.txt
then echo file found
else echo file not found
fi
Note that:
the above solution will not find my_file.txt if it is at the top level.
if my_file.txt is part of a directory name, you might get a wrong result.
if there are many (thousands of) directories and many files, the expansion might exceed the shell's limit (Argument list too long).
You can use ls * */* */*/* | grep to search deeper, with the limitation stated above.
I am working on some stuff where I am storing data in a file.
But each time I run the script, the new data gets appended to the previous file.
I want help on how I can remove the file if it already exists.
Don't bother checking if the file exists, just try to remove it.
rm -f /p/a/t/h
# or
rm /p/a/t/h 2> /dev/null
Note that the second command will fail (return a non-zero exit status) if the file did not exist, but the first will succeed owing to the -f (short for --force) option. Depending on the situation, this may be an important detail.
But more likely, if you are appending to the file it is because your script is using >> to redirect something into the file. Just replace >> with >. It's hard to say since you've provided no code.
Note that you can do something like test -f /p/a/t/h && rm /p/a/t/h, but doing so is completely pointless. It is quite possible that the test will return true but /p/a/t/h will cease to exist before you try to remove it, or worse, the test will fail and /p/a/t/h will be created before you execute the next command, which expects it to not exist. Attempting this is a classic race condition. Don't do it.
Another one-line command I used is:
[ -e file ] && rm file
You can use this:
#!/bin/bash
file="file_you_want_to_delete"
if [ -f "$file" ] ; then
    rm "$file"
fi
If you want to skip the step of checking whether the file exists, you can use a fairly simple command, which will delete the file if it exists and does not throw an error if it is non-existent.
rm -f xyz.csv
A one-liner shell script to remove a file if it already exists (based on Jindra Helcl's answer):
[ -f file ] && rm file
or with a variable:
#!/bin/bash
file="/path/to/file.ext"
[ -f "$file" ] && rm "$file"
Something like this would work
#!/bin/sh
if [ -f FILE ]
then
    rm FILE
fi
-f checks if it's a regular file
-e checks if the file exists
See Introduction to if for more information.
EDIT: -e combined with -f is redundant (and [ -fe FILE ] is not valid test syntax anyway), so using -f alone works.
if [ "$( ls <file> 2>/dev/null )" ]; then rm <file>; fi
Also, if you redirect your output with > instead of >> it will overwrite the previous file
In my case I wanted to remove a FIFO file before creating it again, so this worked for me:
#!/bin/bash
file="/tmp/test"
rm -rf "$file" || true
mkfifo "$file"
|| true keeps the script going even if rm fails; note that rm -f already succeeds when the file simply does not exist.
I need to make a recycle bin script using bash. Here is what I have done so far. My problem is that when I move a file with the same name into the trash folder, it just overwrites the previous file. Can you give me any suggestions on how to approach this problem?
#!/bin/bash
mkdir -p "$HOME/Trash"
if [ $1 = -restore ]; then
    while read file; do
        mv $HOME/Trash/$2 /$file
    done < try.txt
else
    if [ $1 = -restoreall ]; then
        mv $HOME/Trash/* /$PWD
    else
        if [ $1 = -empty ]; then
            rm -rfv /$HOME/Trash/*
        else
            mv $PWD/"$1" /$HOME/Trash
            echo -n "$PWD" >> /$HOME/Bash/try
        fi
    fi
fi
You could append the deletion timestamp to the filename in your Trash folder. Upon restore, you could strip it off again.
To add a timestamp to your file, use something like this:
DT=$(date +'%Y%m%d-%H%M%S')
mv $PWD/"$1" "/$HOME/Trash/${1}.${DT}"
This will, e.g., create a file like initrd.img-2.6.28-11-generic.20110615-140159 when moving initrd.img-2.6.28-11-generic.
To get the original filename, strip everything starting from the last dot, like with:
NAME_WITHOUT_TIMESTAMP=${file%.*-*}
The pattern is on the right side after the percentage char. (.* would also be enough to match.)
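A hypothetical restore loop built on that stripping (it assumes every entry in the trash carries the timestamp suffix added above):
for file in "$HOME/Trash"/*; do
    mv -n "$file" "$PWD/$(basename "${file%.*-*}")"  # -n: don't clobber an existing file on restore
done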
Take a look at how trash-cli does it. It's written in Python and uses the same trash bin as the desktop environments. trash-cli is available in at least the big Linux distributions.
http://code.google.com/p/trash-cli/
Probably the easiest thing to do is simply add -i to the invocation of mv. That will prompt the user whether or not to replace. If you happen to have access to gnu cp (eg, on Linux), you could use cp --backup instead of mv.
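If you stay with mv, GNU mv also has backups built in, which maps directly onto the "don't overwrite in the Trash" requirement; --backup=numbered renames a colliding target to name.~1~, name.~2~, and so on:
mv --backup=numbered "$1" "$HOME/Trash/"
(GNU coreutils only; BSD/macOS mv does not support --backup.)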