Consider I have lots of shell scripts in a folder named test. I want to execute all the files in it except one particular file. What do I do? Relocating the file or executing the files one after another manually is not an option. Is there any way I could do this in a single line? Or perhaps by adding something to sh path/to/test/*.sh, which executes all files?
for file in test/*; do
    [ "$file" != "test/do-not-run.sh" ] && sh "$file"
done
If you are using bash, you can use extended patterns to skip the undesired script:
shopt -s extglob
for file in test/!(do-not-run).sh; do
    sh "$file"
done
for FILE in `ls "$YOURPATH"` ; do
    test "$FILE" != "do-not-run.sh" && sh "$YOURPATH/$FILE";
done
find path/to/test -name "*.sh" \! -name "$pattern_for_unwanted_scripts" -exec {} \;
find will recursively execute every entry under the directory that ends in .sh (-name "*.sh") and does not match the unwanted pattern (\! -name "$pattern_for_unwanted_scripts").
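For example, assuming the script to skip is called do-not-run.sh as in the other answers (substitute your real filename), the concrete command would be something like:
find path/to/test -name "*.sh" \! -name "do-not-run.sh" -exec sh {} \;
Invoking the scripts through sh here also avoids relying on them having the executable bit set.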
In bash, provided you run shopt -s extglob, you can use "extended globbing", which allows !(pattern-list) to match anything except one of the given patterns.
In your case:
shopt -s extglob
for f in !(do-not-run.sh); do if [ "${f##*.}" == "sh" ]; then sh "$f"; fi; done
I want to get all the instances of a file in my macosx file system and copy them in a single folder of an external hard disk.
I wrote a simple line of code in terminal but when I execute it, there is only a file in the target folder that is replaced at every occurrence it finds.
It seems that $RANDOM or $(uuidgen), used in a single command, returns only one value, which is then reused for every occurrence of {} in the find command.
Is there a way to get a new value for every result of the find command?
Thank you.
find . -iname test.txt -exec cp {} /Volumes/EXT/$(uuidgen) \;
or
find . -iname test.txt -exec cp {} /Volumes/EXT/$RANDOM \;
This should work:
find ... -exec bash -c 'cp "$1" /Volumes/somewhere/$(uuidgen)' _ {} \;
Thanks to dan and pjh for corrections in comments.
find . -iname test.txt -exec bash -c '
    for i do
        cp "$i" "/Volumes/EXT/$RANDOM"
    done' _ {} +
You can use -exec with + to pass multiple files to a single bash loop. You can't put a command substitution (or multiple commands at all) directly in -exec: the $(uuidgen) in the original command is expanded once by the calling shell, before find ever runs.
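As a rough sketch of that batching behaviour (the output depends on whatever find happens to match), you can make each inner bash report how many paths it received:
find . -iname test.txt -exec bash -c 'echo "this bash got $# files: $*"' _ {} +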
If you've got Bash 4.0 or later, another option is:
shopt -s dotglob
shopt -s globstar
shopt -s nocaseglob
shopt -s nullglob
for testfile in **/test.txt; do
    cp -- "$testfile" "/Volumes/EXT/$(uuidgen)"
done
shopt -s dotglob enables globs to match files and directories that begin with . (e.g. .dir/test.txt)
shopt -s globstar enables the use of ** to match paths recursively through directory trees
shopt -s nocaseglob causes globs to match in a case-insensitive fashion (like find option -iname versus -name)
shopt -s nullglob makes globs expand to nothing when nothing matches (otherwise they expand to the glob pattern itself, which is almost never useful in programs)
The -- in cp -- ... prevents paths that begin with hyphens (e.g. -dir/test.txt) from being (mis)treated as options to cp
Note that this code might fail on versions of Bash prior to 4.3 because symlinks are (stupidly) followed while expanding ** patterns
In an attempt to rename the files in one directory with numbers at the front, I made an error in my script so that this happened in the wrong directory. Therefore I now need to remove these numbers from the beginning of all of my filenames in that directory. These range from 1 to 3 digits. Examples of the filenames I am working with are:
706terrain_Slope1000m_Minimum_all_25PCs_bolt_all_25PCs_qq_bolt.png
680met_sfcWind_all_25PCs_bolt_number.txt
460greenness_NDVI_500m_min_all_25PCs_bolt_number.txt
I was thinking of using mv but I'm not really sure how to do it with varying numbers of digits at the beginning, so any advice would be appreciated!
A simple way in bash is to make use of a regular expression test:
for file in *; do
    [[ -f "${file}" ]] && [[ "${file}" =~ (^[0-9]+) ]] && mv -- "${file}" "${file/${BASH_REMATCH[1]}}"
done
This does the following:
[[ -f "${file}" ]]: test if file is a file, if so
[[ "${file}" =~ (^[0-9]+) ]]: check if file starts with a number
${file/${BASH_REMATCH[1]}}: remove the number from the string file by using BASH_REMATCH, an array variable that holds the groups captured by the regex match.
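A quick interactive illustration, using one of the filenames from the question:
$ file=706terrain_Slope1000m_Minimum_all_25PCs_bolt_all_25PCs_qq_bolt.png
$ [[ "$file" =~ (^[0-9]+) ]] && echo "${BASH_REMATCH[1]}"
706
$ echo "${file/${BASH_REMATCH[1]}}"
terrain_Slope1000m_Minimum_all_25PCs_bolt_all_25PCs_qq_bolt.png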
If you've got perl's rename installed, the following should work :
rename 's/^[0-9]{1,3}//' /path/to/files
/path/to/files can be a list of specific files, or probably in your case a glob (e.g. *.{png,txt}). You don't need to select only files starting with digits as rename won't modify those that do not.
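If you want to preview the result first, perl's rename also accepts -n, which only prints what would be renamed without touching anything:
rename -n 's/^[0-9]{1,3}//' *.{png,txt}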
Using bash parameter expansion:
shopt -s extglob
for i in +([0-9])*.{txt,png}; do
    mv -- "$i" "${i##+([0-9])}"
done
This will remove the leading digits (any number of them) from filenames that have a png or txt extension.
The ## is removing the longest matching prefix pattern.
The +(...) is extended pattern-matching syntax for one or more occurrences of the enclosed pattern.
And [0-9] is pattern matching digits.
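For instance, with one of the filenames from the question:
$ shopt -s extglob
$ i=680met_sfcWind_all_25PCs_bolt_number.txt
$ echo "${i##+([0-9])}"
met_sfcWind_all_25PCs_bolt_number.txt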
Alternate method using GNU find:
#!/usr/bin/env bash
find ./ \
  -maxdepth 1 \
  -type f \
  -name '[[:digit:]]*' \
  -exec bash -c 'shopt -s extglob; f="${1##*/}"; d="${1%%/*}"; mv -- "$1" "${d}/${f##+([[:digit:]])}"' _ {} \;
Find all regular files in the current directory whose names start with a digit.
For each found file, execute the Bash script below:
shopt -s extglob # need for extended pattern syntax
f="${1##*/}" # Get file name without directory path
d="${1%%/*}" # Get directory path without file name
mv -- "$1" "${d}/${f##+([[:digit:]])}" # Rename without the leading digits
Using basic features of a POSIX-compliant shell:
#!/bin/sh
for f in [[:digit:]]*; do
    if [ -f "$f" ]; then
        pf="${f%${f#???}}" pf="${pf##*[[:digit:]]}"
        mv "$f" "$pf${f#???}"
    fi
done
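To see why this handles 1 to 3 leading digits, here is how the expansions unfold for a hypothetical two-digit name, f=46image.png:
# ${f#???}               -> mage.png   (the name with its first three characters removed)
# pf=${f%${f#???}}       -> 46i        (just those first three characters)
# pf=${pf##*[[:digit:]]} -> i          (drop everything up to and including the last digit)
# mv "$f" "$pf${f#???}"  -> mv 46image.png image.png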
I'm having problems creating an if statement to check the files in my directory for a certain string in their names.
For example, I have the following files in a certain directory:
file_1_ok.txt
file_2_ok.txt
file_3_ok.txt
file_4_ok.txt
other_file_1_ok.py
other_file_2_ok.py
other_file_3_ok.py
other_file_4_ok.py
another_file_1_not_ok.sh
another_file_2_not_ok.sh
another_file_3_not_ok.sh
another_file_4_not_ok.sh
I want to copy all files that contain 1_ok to another directory:
#!/bin/bash
directory1=/FILES/user/directory1/
directory2=/FILES/user/directory2/
string="1_ok"
cd $directory
for every file in $directory1
do
if [$string = $file]; then
cp $file $directory2
fi
done
UPDATE:
The simpler answer was made by Faibbus, but refer to Inian if you want to remove or simply move files that don't have the specific string you want.
The other answers are valid as well.
cp directory1/*1_ok* directory2/
Use find for that:
find directory1 -maxdepth 1 -name '*1_ok*' -exec cp -v {} directory2 \;
The advantage of using find over the glob solution posted by Faibbus is that it can deal with an unlimited number of files containing 1_ok, whereas the glob solution will lead to an "argument list too long" error when cp is called with too many arguments.
Conclusion: for interactive use with a limited number of input files the glob will be fine; for a shell script, which has to be stable, I would use find.
With your script I suggest:
#!/bin/bash
source="/FILES/user/directory1"
target="/FILES/user/directory2"
regex="1_ok"
for file in "$source"/*; do
if [[ $file =~ $regex ]]; then
cp -v "$file" "$target"
fi
done
From help [[:
When the =~ operator is used, the string to the right of the operator
is matched as a regular expression.
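For example, with one of the filenames from the question:
$ [[ file_1_ok.txt =~ 1_ok ]] && echo matches
matches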
Please take a look: http://www.shellcheck.net/
Using extglob matching in bash with the below pattern,
+(pattern-list)
Matches one or more occurrences of the given patterns.
First enable extglob by
shopt -s extglob
cp -v directory1/+(*not_ok*) directory2/
An example,
$ ls *.sh
another_file_1_not_ok.sh another_file_3_not_ok.sh
another_file_2_not_ok.sh another_file_4_nnoot_ok.sh
$ shopt -s extglob
$ cp -v +(*not_ok*) somedir/
another_file_1_not_ok.sh -> somedir/another_file_1_not_ok.sh
another_file_2_not_ok.sh -> somedir/another_file_2_not_ok.sh
another_file_3_not_ok.sh -> somedir/another_file_3_not_ok.sh
To remove all files except the ones containing this pattern, do
$ rm -v !(*not_ok*) 2>/dev/null
This is a fairly simple issue that has been bothering me. A little backstory: I have a folder full of scripts. These scripts take data files (*.dat) and generate output in *.eps. The extension of my scripts is *.plt. I created a one-line shell script that runs all the *.plt files in that folder.
#!/bin/sh
find . -name "*.plt" -exec {} \;
I just want to make sure that all the *.pdf images I will use in my document are up to date. For a time, the one-line script was good. But when the number of files is over 50, it takes some time to run. I rarely change the data files, but I make changes to the *.plt scripts frequently. The scripts are written in such a way that a script named this_script_does_something.plt will create a file called this_script_does_something.eps.
Hence, here's my question.
Is there a way to write a refined shell script that executes only the *.plt files that are newer than the correspondingly named *.eps files?
I know I can do this in Python, but it seems like cheating. I also know that I can look for the newest *.eps and execute all the *.plt files that are newer than it. That will solve my problem for most practical cases; I only realised this option while typing the question, so thank you SX. However, as a didactic exercise, and to resolve my original doubt, I would like to handle the individual cases: compare the modification time of each *.plt with that of its *.eps, and execute the script only when it is more recent than its output. Is it possible? Can it be done in a single line?
EDIT: I forgot to add that the *.plt scripts should also be executed when there is no *.eps file of the same name, which normally means that the script is new and has not been executed yet.
I think I'd be using:
#!/bin/bash
for plt in *.plt
do
    eps=$(basename "$plt" .plt).eps
    if [ "$plt" -nt "$eps" ]
    then "$plt"
    fi
done
This uses the Bash/Korn shell operator -nt for 'newer than' (and there's the converse -ot operator for 'older than'). I'm assuming the files are all in a single directory so there's no need for a recursive search. If that's not correct, then use a separate:
find . -type d -exec sh -c "cd {}; new-script.sh" \;
(where new-script.sh is the script I just showed). Or use the Bash extension ** operator (enabled with shopt -s globstar):
for plt in *.plt **/*.plt
You might need to set the Bash nullglob option:
shopt -s nullglob
This generates nothing when an expansion does not match any files.
Also generate when the .eps file does not exist:
#!/bin/bash
for plt in *.plt
do
    eps=$(basename "$plt" .plt).eps
    if [ ! -f "$eps" ] || [ "$plt" -nt "$eps" ]
    then "$plt"
    fi
done
The only not-completely-generic shell feature in this is the -nt operator. If your /bin/sh doesn't support it, check the /bin/[ command — it might — or use Korn Shell or Bash instead of /bin/sh in the shebang line.
This script should do what you expect:
find . -name "*.eps" -exec sh -c \
'plt=$(basename "$1" eps)plt; [ "$plt" -nt "$1" ] && "$plt"' sh {} \;
It will recurse into subdirectories, if any. If you don't want that, and you use GNU find, a simple workaround is to run:
find . -maxdepth 1 -name "*.eps" -exec sh -c \
'plt=$(basename "$1" eps)plt; [ "$plt" -nt "$1" ] && "$plt"' sh {} \;
If you don't use GNU find, you might use that syntax instead:
find *.eps -type f -exec sh -c \
'plt=$(basename "$1" eps)plt; [ "$plt" -nt "$1" ] && "$plt"' sh {} \;
but the latter might fail with an "arg list too long" error if you have a very large number of files matching the *.eps pattern, since the whole expanded list is passed to the find binary. (A for file in *.extension loop expands the glob inside the shell itself, so it does not hit that limit, although it still has to hold the whole list in memory.)
Note also that -nt is not specified by POSIX so depending on your system, you might want to specifically state the shell to use instead of sh (mainstream shells like dash, bash, ksh, ksh93 or zsh do support -nt). For example on Solaris 10, you would use:
find . -name "*.eps" -exec ksh -c \
'plt=$(basename "$1" eps)plt; [ "$plt" -nt "$1" ] && "$plt"' ksh {} \;
Edit:
As the script should run if the .eps file does not exist, the command should loop on the .plt files instead, e.g.:
find *.plt -type f -exec bash -c \
'eps=$(basename "$0" plt)eps;
[ ! -f "$eps" -o "$0" -nt "$eps" ] && "$0"' "{}" \;
I know nothing about Linux commands or bash scripts, so help me please.
I have a lot of files in different directories and I want to rename all those files from "name" to "name.xml" using a bash script. Is it possible to do that? I just find code snippets on the internet like this:
shopt -s globstar # enable ** globstar/recursivity
for i in **/*.txt; do
    echo "$i" "${i/%.txt}.xml";
done
It does not even work.
For this purpose, the prename utility comes in handy; it is installed by default on many Linux distributions and is usually distributed with the Perl package. You can use it like this:
find . -iname '*.txt' -exec prename 's/.txt/.xml/' {} \;
or this much faster alternative:
find . -iname '*.txt' | xargs prename 's/.txt/.xml/'
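prename also accepts -n for a dry run that only prints what would be renamed, so you can preview the result before touching anything (anchoring the pattern is a small extra safety net):
find . -iname '*.txt' -exec prename -n 's/\.txt$/.xml/' {} \;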
Explanation
Move/rename all files, whatever their extension, in the current directory and below from name to name.xml. You should test using echo before running the real script.
shopt -s globstar # enable ** globstar/recursivity
for i in **; do                  # **/*.txt will look only for .txt files
    [[ -d "$i" ]] && continue    # skip directories
    echo "$i" "$i.xml";          # replace 'echo' by 'mv' when validated
    #echo "$i" "${i/%.txt}.xml"; # replace .txt by .xml
done
Showing **/*.txt **/*.xml means effectively that there are no files matching the given pattern, as by default bash will use the verbatim pattern if no matches are found.
To prevent this issue you'd have to additionally set shopt -s nullglob to have bash just return nothing when there is no match at all.
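A quick way to see the difference in a directory with no matching files (hypothetical transcript):
$ echo **/*.txt        # no match, nullglob unset: the pattern itself comes back
**/*.txt
$ shopt -s nullglob
$ echo **/*.txt        # no match, nullglob set: expands to an empty line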
After verifying the echoed lines look somewhat reasonable you'll have to replace
echo "$i" "${i/%.txt}.xml"
with
mv "$i" "${i/%.txt}.xml"
to rename the files.
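Putting the pieces together, a possible final version (still assuming your files end in .txt, as in the snippet you found) would be:
shopt -s globstar nullglob
for i in **/*.txt; do
    mv -- "$i" "${i/%.txt}.xml"
done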
You can use this bash script.
#!/bin/bash
DIRECTORY=/your/base/dir/here
for i in `find "$DIRECTORY" -type d -exec find {} -type f -name \*.txt \;`;
do mv "$i" "$i.xml"
done