Create a subdirectory for all directories in folder - bash

Suppose I have some directory which contains a number of subdirectories, and in each of those subdirectories I want to create a directory with the same name:
./dir-1
./dir-2
...
./dir-n
I want to do mkdir */new-dir
but this throws an error. What's the best way of going about this?

for dir in $(ls); do
mkdir "$dir/new-dir"
done

find . -type d | xargs -I "{x}" mkdir "{x}"/new-dir
If you want to reduce it to the first level of directories use
find . -maxdepth 1 -type d | xargs -I "{x}" mkdir "{x}"/new-dir
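Note that find . -type d also matches . itself, so the commands above will additionally create ./new-dir in the current directory; adding -mindepth 1 should avoid that for the first-level case:
find . -mindepth 1 -maxdepth 1 -type d | xargs -I "{x}" mkdir "{x}"/new-dir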

Amazing that this question never obtained a sane answer:
shopt -s nullglob
for i in */; do
mkdir -- "${i}newdir"
done
This is 100% safe regarding funny symbols in filenames (spaces, wildcards, etc.).
The shopt -s nullglob is there so that the glob expands to nothing if there are no matches.
The -- in mkdir marks the end of options, so a directory whose name starts with a hyphen doesn't confuse mkdir into interpreting it as an option.
This silently ignores hidden directories. If you need to perform this operation on hidden directories too, just replace the line shopt -s nullglob with the following:
shopt -s nullglob dotglob
The dotglob option makes the globs also consider hidden files/directories.
If you want only one invocation of mkdir:
shopt -s nullglob
dirs=( */ )
mkdir -- "${dirs[#]/%/newdir}"

Better to list only the directories, otherwise you may get a bunch of errors when the loop encounters a file instead of a directory:
for dir in $(ls -d */); do
mkdir "$dir/2015"
done

Related

uuidgen and $RANDOM don't change in find -exec argument

I want to get all the instances of a file in my macOS file system and copy them into a single folder on an external hard disk.
I wrote a simple line of code in the terminal, but when I execute it there is only one file in the target folder, which gets replaced at every occurrence it finds.
It seems that the $RANDOM or $(uuidgen) used in a single command returns only one value, which is used for every {} occurrence of the find command.
Is there a way to get a new value for every result of the find command?
Thank you.
find . -iname test.txt -exec cp {} /Volumes/EXT/$(uuidgen) \;
or
find . -iname test.txt -exec cp {} /Volumes/EXT/$RANDOM \;
This should work:
find ... -exec bash -c 'cp "$1" /Volumes/somewhere/$(uuidgen)' _ {} \;
Thanks to dan and pjh for corrections in comments.
find . -iname test.txt -exec bash -c '
for i do
cp "$i" "/Volumes/EXT/$RANDOM"
done' _ {} +
You can use -exec with + to pass multiple files to a single bash loop. You can't use command substitutions (or multiple commands at all) directly in a single -exec: the calling shell expands them once, before find runs, not once per file.
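If you prefer UUID-based names to $RANDOM (which only ranges over 0-32767 and can collide), the same batching pattern works with uuidgen inside the inner shell, where it is evaluated once per file; a sketch, assuming uuidgen is on your PATH:
find . -iname test.txt -exec bash -c '
for i do
  # uuidgen runs inside the inner shell, so each file gets a fresh name
  cp "$i" "/Volumes/EXT/$(uuidgen)"
done' _ {} +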
If you've got Bash 4.0 or later, another option is:
shopt -s dotglob
shopt -s globstar
shopt -s nocaseglob
shopt -s nullglob
for testfile in **/test.txt; do
cp -- "$testfile" "/Volumes/EXT/$(uuidgen)"
done
shopt -s dotglob enables globs to match files and directories that begin with . (e.g. .dir/test.txt)
shopt -s globstar enables the use of ** to match paths recursively through directory trees
shopt -s nocaseglob causes globs to match in a case-insensitive fashion (like find option -iname versus -name)
shopt -s nullglob makes globs expand to nothing when nothing matches (otherwise they expand to the glob pattern itself, which is almost never useful in programs)
The -- in cp -- ... prevents paths that begin with hyphens (e.g. -dir/test.txt) from being (mis)treated as options to cp.
Note that this code might fail on versions of Bash prior to 4.3, because symlinks are (stupidly) followed while expanding ** patterns.

Rename .txt files

Looking for help with the following code. I have a folder titled data, with 6 subfolders (folder1, folder2, etc). Each folder has a text file I want to rename to "homeworknotes" keeping the .txt extension.
(I'm using "notes" for short below.)
So far I have the following code:
for file in data/*/*.txt; do
mv $file "notes"
done
find
You can use the find command with -execdir, which will execute the command of your choice in the directory where each matching file is found:
find data -type f -name '*.txt' -execdir mv \{\} notes.txt \;
data is the path to the directory where find should look for matching files recursively
-type f looks only for files, not directories
-name '*.txt' matches anything that ends with .txt
-execdir mv \{\} notes.txt runs the command mv {} notes.txt in the directory where the file was found, where {} is the original filename found by find.
bash
EDIT1: To do this without find you need to handle recursive directory traversal (unless you have a fixed directory layout). In bash you can set the following shell options with the shopt -s command:
extglob - extended globbing support (allows writing extended patterns like +(...) and !(...); see "Pathname Expansion" in man bash)
globstar - allows ** in pathname expansion; **/ will match any directories and their subdirectories (see "Pathname Expansion" in man bash).
nullglob - makes patterns that match no files expand to nothing (in case there's a directory without any .txt file)
The following script will traverse directories under data/ and rename the .txt files to notes.txt:
#!/bin/bash
shopt -s extglob globstar nullglob
for f in data/**/*.txt ; do
mv "$f" "$(dirname "$f")/notes.txt"
done
mv "$f" "$(dirname "$f")/notes.txt" moves (renames) the file; $f contains the matched path, e.g. data/folder1/day4notes.txt, and $(dirname "$f") gets the directory that file is in - in this case data/folder1 - so we just append /notes.txt to that.
EDIT2: If you are absolutely positive that you want to do this only in the first level of subdirectories under data/, you can omit extglob and globstar (and, if you know there's at least one .txt file in each directory, also nullglob) and go ahead with the pattern you posted; but you still need to use mv "$f" "$(dirname "$f")/notes.txt" to rename each file.
NOTE: When experimenting with things like this, always make a backup beforehand. If you have multiple .txt files in any of the directories, they will all get renamed to notes.txt, so you might lose data in that case.
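Given that warning, a dry run is cheap: replacing mv with echo in the loop above prints every rename that would happen without touching anything. A sketch:
#!/bin/bash
shopt -s extglob globstar nullglob
for f in data/**/*.txt ; do
  # Dry run: print the rename instead of performing it
  echo mv "$f" "$(dirname "$f")/notes.txt"
done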

Check if a filename has a string in it

I'm having problems creating an if statement to check the files in my directory for a certain string in their names.
For example, I have the following files in a certain directory:
file_1_ok.txt
file_2_ok.txt
file_3_ok.txt
file_4_ok.txt
other_file_1_ok.py
other_file_2_ok.py
other_file_3_ok.py
other_file_4_ok.py
another_file_1_not_ok.sh
another_file_2_not_ok.sh
another_file_3_not_ok.sh
another_file_4_not_ok.sh
I want to copy all files that contain 1_ok to another directory:
#!/bin/bash
directory1=/FILES/user/directory1/
directory2=/FILES/user/directory2/
string="1_ok"
cd $directory
for every file in $directory1
do
if [$string = $file]; then
cp $file $directory2
fi
done
UPDATE:
The simpler answer was made by Faibbus, but refer to Inian if you want to remove or simply move files that don't have the specific string you want.
The other answers are valid as well.
cp directory1/*1_ok* directory2/
Use find for that:
find directory1 -maxdepth 1 -name '*1_ok*' -exec cp -v {} directory2 \;
The advantage of using find over the glob solution posted by Faibbus is that it can deal with an unlimited number of files which contain 1_ok, whereas the glob solution will lead to an "argument list too long" error when cp is called with too many arguments.
Conclusion: For interactive use with a limited number of input files the glob will be fine; for a shell script, which has to be stable, I would use find.
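If you want find to batch the copies rather than fork one cp per file, and your cp has GNU coreutils' -t (target directory) option, something like this should work; note that -t is a GNU extension, so this is a sketch for GNU systems only:
find directory1 -maxdepth 1 -name '*1_ok*' -exec cp -v -t directory2 {} +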
With your script I suggest:
#!/bin/bash
source="/FILES/user/directory1"
target="/FILES/user/directory2"
regex="1_ok"
for file in "$source"/*; do
if [[ $file =~ $regex ]]; then
cp -v "$file" "$target"
fi
done
From help [[:
When the =~ operator is used, the string to the right of the operator
is matched as a regular expression.
Please take a look: http://www.shellcheck.net/
Using extglob matching in bash with the below pattern,
+(pattern-list)
Matches one or more occurrences of the given patterns.
First enable extglob by
shopt -s extglob
cp -v directory1/+(*not_ok*) directory2/
An example,
$ ls *.sh
another_file_1_not_ok.sh another_file_3_not_ok.sh
another_file_2_not_ok.sh another_file_4_nnoot_ok.sh
$ shopt -s extglob
$ cp -v +(*not_ok*) somedir/
another_file_1_not_ok.sh -> somedir/another_file_1_not_ok.sh
another_file_2_not_ok.sh -> somedir/another_file_2_not_ok.sh
another_file_3_not_ok.sh -> somedir/another_file_3_not_ok.sh
To remove all files except the ones containing this pattern, do
$ rm -v !(*not_ok*) 2>/dev/null

How to retain folder structure in bash scripting?

I made a program to convert the bitrate of music. The program is as follows..
for f in *.mp3 ;
do lame --mp3input -b $bitrate "$f" $path_to_destination/"$f" ;
done;
But this works for only one folder; I have music in different folders. How to modify the code so that it can recursively make conversions happen yet retain the folder structure in the output?
If you have a new enough Bash (version 4.3 works; version 3.x does not), you can use:
shopt -s globstar nullglob
for file in *.mp3 **/*.mp3
do
lame --mp3input -b $bitrate "$file" "$path_to_destination/$file"
done
The globstar option means that the ** notation works recursively; the nullglob option means that if there are no .mp3 files in any of the subdirectories (or no sub-directories), you get nothing generated instead of a name **/*.mp3 which would happen by default.
Because this uses globbing, it is safe even with paths or file names that contain spaces, newlines or other awkward characters.
If the sub-directories don't necessarily exist under $path_to_destination, then you need to create them. Add:
mkdir -p "$(dirname "$path_to_destination/$file")"
before the invocation of lame. This creates all the missing directories on the path leading to the target file (no error if all the directories already exist), leaving lame to create the file in that directory.
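Putting the pieces together, a sketch of the full loop with the directory creation included (assuming $bitrate and $path_to_destination are set as in the original script):
shopt -s globstar nullglob
for file in *.mp3 **/*.mp3
do
  # Make sure the destination directory exists (no error if it already does)
  mkdir -p "$(dirname "$path_to_destination/$file")"
  lame --mp3input -b "$bitrate" "$file" "$path_to_destination/$file"
done
If your Bash is too old for globstar, an alternative is to drive the same loop with find: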
find . -type f -name '*.mp3' | while IFS= read -r src
do
dst="$path_to_destination/$src"
mkdir -p "$(dirname "$dst")"
lame --mp3input -b $bitrate "$src" "$dst"
done
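If any of the file names might contain newlines, a null-delimited variant of the same loop is a bit more robust; a sketch, assuming your find supports -print0 (both GNU and macOS find do):
find . -type f -name '*.mp3' -print0 | while IFS= read -r -d '' src
do
  dst="$path_to_destination/$src"
  # Create the destination directory if it doesn't exist yet
  mkdir -p "$(dirname "$dst")"
  lame --mp3input -b "$bitrate" "$src" "$dst"
done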

Delete all files and directories but certain ones using Bash

I'm writing a script that needs to erase everything from a directory except two directories, mysql and temp.
I've tried this:
ls * | grep -v mysql | grep -v temp | xargs rm -rf
but this also keeps all the files that have mysql in their name, which I don't need. It also doesn't delete any other directories.
Any ideas?
You may try:
rm -rf !(mysql|temp)
This is ksh-style extended globbing (enabled in bash with the extglob option; it is not plain POSIX sh), documented as follows:
Glob patterns can also contain pattern lists. A pattern list is a sequence
of one or more patterns separated by either | or &. ... The following list
describes valid sub-patterns.
...
!(pattern-list):
Matches any string that does not match the specified pattern-list.
...
Note: Please take time to test it first! Either create some test folder, or simply echo the pathname expansion, as duly noted by @mnagel:
echo !(mysql|temp)
Adding useful information: if the extended matching is not active, you can enable/disable it using:
shopt extglob # shows extglob status
shopt -s extglob # enables extglob
shopt -u extglob # disables extglob
This is usually a job for find. Try the following command (add -rf if you need a recursive delete):
find . -mindepth 1 -maxdepth 1 \! \( -name mysql -o -name temp \) -exec rm '{}' \;
(That is, find entries directly under . that are not named mysql or temp, without descending into subdirectories, and call rm on them; -mindepth 1 keeps . itself out of the results.)
You can use find restricted to the top level, ignore mysql and temp, and then rm -rf the rest.
find . -mindepth 1 -maxdepth 1 ! -iname mysql ! -iname temp -exec rm -rf {} \;
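Whichever variant you use, it's worth previewing what will be removed before running rm; replacing the -exec action with -print lists the matches without deleting anything:
find . -mindepth 1 -maxdepth 1 ! -iname mysql ! -iname temp -print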
