I have a bash script that looks like the following:
#!/bin/bash
FILES=public_html/*.php # */ stupid syntax highlighter!
for f in $FILES
do
echo "Processing $f file..."
# take action on each file.
done
Now I need it to go through all subdirectories in public_html, so it should run on:
/public_html/index.php
/public_html/forums/status.php
/public_html/really/deep/file/in/many/sub/dirs/here.php
What do I change FILES=public_html/*.php to in order to do that?
Also, I need to check that there is at least one matching file, or else it prints
Processing *.php file...
FILES=$(find public_html -type f -name '*.php')
IMPORTANT: Note the single quotes around the *.php to prevent shell expansion of the *.
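If you keep this find-based approach, you can also skip the variable and read the results in a loop; the body simply never runs when nothing matches, which covers the "at least one file" concern. A minimal sketch, using NUL delimiters so paths with spaces survive intact:
find public_html -type f -name '*.php' -print0 | while IFS= read -r -d '' f
do
echo "Processing $f file..."
# take action on each file
done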
FILES=$(find public_html -type d)
$FILES will now be a list of every single directory inside public_html.
Given a root path, I am trying to loop through the sub-directories, and within each one, loop through its files and print their names.
The directory structure is like this:
Root directory
  dir2
    file{1..10}
  dir3
    file{1..10}
  dir4
    file{1..10}
I want to loop through dir2 and print all the filenames in it. Then loop through dir3 and print all the file names...and so on
Here is what I have so far:
#!/bin/bash
#!/bin/sh
cd /the/root/directory
for dir in */
do
for FILE in dir
do
echo "$FILE"
done > /the/root/directory/filenames.txt
done
This is the output I get in filenames.txt:
dir
My expected output is supposed to be:
file{1..10}
file{1..10}
file{1..10}
I am a beginner to bash scripting... well, scripting in general.
Any help is greatly appreciated!
You didn't mention what your end goal is, so I'll speculate here.
If your end goal is to only see the files recursively in a list, you can run just a simple find command:
find . -type f
Or if you want to see the details:
find . -type f -ls
A nice way to view them with colors and nice-looking ANSI bars is to install the tree command. Example:
https://www.tecmint.com/linux-tree-command-examples/
If your needs are simple, for example running an action such as tail -n1 on each file, you can pipe the command to xargs like this:
find . -type f | xargs tail -n1
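Note that a bare pipe to xargs splits on whitespace, so filenames containing spaces would break it; the NUL-delimited form is the safe variant:
find . -type f -print0 | xargs -0 tail -n1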
But if your end goal is to use bash to process them in some way, then you can continue down the bash looping method as mentioned by @tjm3772.
You mentioned you were just looking for the filenames, so you can run:
find . -type f | sed 's/.*\///'
If you want to write that to a file, just redirect the output to a filename of your choice:
find . -type f | sed 's/.*\///' > filename.txt
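If your find is GNU find, the -printf action can print just the basename directly, with no sed needed (GNU-specific, so not portable to BSD/macOS find):
find . -type f -printf '%f\n' > filename.txt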
You can use the find command; it descends into the directories itself, so no for loop is needed.
my_bash_script.sh:
find * -type f > filenames.txt
Put this script at the same level as the directories, or point it at another location by changing the * to that path.
Note: if the terminal says permission denied, run this: chmod u+x the_script_name.sh
You forgot to expand $dir in your inner loop, so the loop is executing one time with FILE set to the literal string 'dir' instead of the directory name.
After that, you need a globbing pattern to expand to the filenames inside the directory.
Fixed example:
#!/bin/bash
cd /the/root/directory
for dir in */
do
for FILE in "$dir/"*
do
echo "$FILE"
done > /the/root/directory/filenames.txt
done
Bash's way to do this is the globstar option, which makes ** expand recursively into directories:
#!/usr/bin/env bash
shopt -s globstar # This enables recursively expanding files in directories
# This prints all the files in all the directories starting from /the/root/directory
printf '%s\n' /the/root/directory/**
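Note that ** matches directories as well as regular files; if you want only the files, filter inside a loop, for example:
shopt -s globstar
for f in /the/root/directory/**
do
[[ -f $f ]] && printf '%s\n' "$f"
done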
I have a folder containing up to 3 levels of subdirectories, with images in the deepest subdirectory (for most images, this is just one level). I want to rename these images to include the name of the directory they are in as a prefix. For this, I need to be able to run a single command for each subdirectory in this tree of subdirectories.
This is my attempt:
DIRS=/home/arjung2/data_256_nodir/a/*
for dir in $DIRS
do
for f in $dir
do
echo "$dir"
echo "$f"
echo "$dir_$f"
mv "$f" "$dir_$f"
done
done
However, the three echo statements print out the same thing for each 1-level-deep subdirectory (not for subdirectories up to 3 levels deep, as I desire), and mv gives me an error. An example output is the following:
/home/arjung2/data_256_nodir/a/airfield
/home/arjung2/data_256_nodir/a/airfield
/home/arjung2/data_256_nodir/a/airfield
mv: cannot move ‘/home/arjung2/data_256_nodir/a/airfield’ to a subdirectory of itself, ‘/home/arjung2/data_256_nodir/a/airfield/airfield’
Any ideas what I'm doing wrong? Any help will be much appreciated, thanks!!
Assuming all images can be identified with 'find' (e.g., by suffix), consider the following bash script:
#! /bin/bash
find . -type f -name '*.jpeg' -print0 | while read -r -d '' file ; do
d=${file%/*};
d1=${d##*/};
new_file="$d/${d1}_${file##*/}"
echo "Move: $file -> $new_file"
mv "$file" "$new_file"
done
It will move a/b/c.jpeg to a/b/b_c.jpeg, for every folder/file. Adjust (or remove) the -name as needed.
Say dir has the value /home/arjung2/data_256_nodir/a/airfield. In this case, the statement
for f in $dir
expands to
for f in /home/arjung2/data_256_nodir/a/airfield
which means that the inner loop will be executed exactly once, f taking the name /home/arjung2/data_256_nodir/a/airfield, which is the same as dir.
It would make more sense to iterate over the files within the directory:
for f in $dir/*
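Putting it together, a minimal sketch of the corrected script for the one-level case (deeper trees are better served by the find-based answer above). Note that the original mv had a second bug: in "$dir_$f", bash reads $dir_ as one (unset) variable name, so braces are needed:
#!/bin/bash
for dir in /home/arjung2/data_256_nodir/a/*/ # trailing slash matches directories only
do
prefix=$(basename "$dir") # e.g. "airfield"
for f in "$dir"*
do
mv "$f" "$dir${prefix}_$(basename "$f")" # ${prefix} braced so the _ is not read as part of the name
done
done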
From the current directory I have multiple sub directories:
subdir1/
  001myfile001A.txt
  002myfile002A.txt
subdir2/
  001myfile001B.txt
  002myfile002B.txt
where I want to strip every character from the filenames before myfile so I end up with
subdir1/
  myfile001A.txt
  myfile002A.txt
subdir2/
  myfile001B.txt
  myfile002B.txt
I have some code to do this...
#!/bin/bash
for d in `find . -type d -maxdepth 1`; do
cd "$d"
for f in `find . "*.txt"`; do
mv "$f" "$(echo "$f" | sed -r 's/^.*myfile/myfile/')"
done
done
However, the newly renamed files end up in the parent directory, i.e.:
myfile001A.txt
myfile002A.txt
myfile001B.txt
myfile002B.txt
subdir1/
subdir2/
In which the sub-directories are now empty.
How do I alter my script to rename the files and keep them in their respective sub-directories? As you can see, the first loop changes directory to the sub-directory, so I'm not sure why the files end up getting sent up a directory...
Your script has multiple problems. In the first place, your outer find command doesn't do quite what you expect: it outputs not only each of the subdirectories, but also the search root, ., which is itself a directory. You could have discovered this by running the command manually, among other ways. You don't really need to use find for this, but supposing that you do use it, this would be better:
for d in $(find * -maxdepth 0 -type d); do
Moreover, . is the first result of your original find command, and your problems continue there. Your initial cd is without meaningful effect, because you're just changing to the same directory you're already in. The find command in the inner loop is rooted there, and descends into both subdirectories. The path information for each file you choose to rename is therefore stripped by sed, which is why the results end up in the initial working directory (./subdir1/001myfile001A.txt --> myfile001A.txt). By the time you process the subdirectories, there are no files left in them to rename.
But that's not all: the find command in your inner loop is incorrect. Because you do not specify an option before it, find interprets "*.txt" as designating a second search root, in addition to .. You presumably wanted to use -name "*.txt" to filter the find results; without it, find outputs the name of every file in the tree. Presumably you're suppressing or ignoring the error messages that result.
But supposing that your subdirectories have no subdirectories of their own, as shown, and that you aren't concerned with dotfiles, even this corrected version ...
for f in `find . -name "*.txt"`;
... is an awfully heavyweight way of saying this ...
for f in *.txt;
... or even this ...
for f in *?myfile*.txt;
... the latter of which will avoid attempts to rename any files whose names do not, in fact, change.
Furthermore, launching a sed process for each file name is pretty wasteful and expensive when you could just use bash's built-in substitution feature:
mv "$f" "${f/#*myfile/myfile}"
And you will find also that your working directory gets messed up. The working directory is a characteristic of the overall shell environment, so it does not automatically reset on each loop iteration. You'll need to handle that manually in some way. pushd / popd would do that, as would running the outer loop's body in a subshell.
Overall, this will do the trick:
#!/bin/bash
for d in $(find * -maxdepth 0 -type d); do
pushd "$d"
for f in *.txt; do
mv "$f" "${f/#*myfile/myfile}"
done
popd
done
You can do it without find and sed:
$ for f in */*.txt; do echo mv "$f" "${f/\/*myfile/\/myfile}"; done
mv subdir1/001myfile001A.txt subdir1/myfile001A.txt
mv subdir1/002myfile002A.txt subdir1/myfile002A.txt
mv subdir2/001myfile001B.txt subdir2/myfile001B.txt
mv subdir2/002myfile002B.txt subdir2/myfile002B.txt
If you remove the echo, it'll actually rename the files.
This uses shell parameter expansion to replace a slash and anything up to myfile with just a slash and myfile.
Notice that this breaks if there is more than one level of subdirectories. In that case, you could use extended pattern matching (enabled with shopt -s extglob) and the globstar shell option (shopt -s globstar):
$ for f in **/*.txt; do echo mv "$f" "${f/\/*([!\/])myfile/\/myfile}"; done
mv subdir1/001myfile001A.txt subdir1/myfile001A.txt
mv subdir1/002myfile002A.txt subdir1/myfile002A.txt
mv subdir1/subdir3/001myfile001A.txt subdir1/subdir3/myfile001A.txt
mv subdir1/subdir3/002myfile002A.txt subdir1/subdir3/myfile002A.txt
mv subdir2/001myfile001B.txt subdir2/myfile001B.txt
mv subdir2/002myfile002B.txt subdir2/myfile002B.txt
This uses the *([!\/]) pattern ("zero or more characters that are not a forward slash"). The slash has to be escaped in the bracket expression because we're still inside of the pattern part of the ${parameter/pattern/string} expansion.
Maybe you want to use the following command instead:
rename 's#(.*/).*(myfile.*)#$1$2#' subdir*/*
You can use rename -n ... to check the outcome without actually renaming anything.
Regarding your actual question:
The find command from the outer loop returns 3 (!) directories:
.
./subdir1
./subdir2
The unwanted . is the reason why all files end up in the parent directory (that is .). You can exclude . by using the option -mindepth 1.
Unfortunately, this was only the reason for the files landing in the wrong place, not the only problem. Since you already accepted one of the answers, there is no need to list them all.
A slight modification should fix your problem:
#!/bin/bash
for f in `find . -maxdepth 2 -name "*.txt"`; do
mv "$f" "$(echo "$f" | sed -r 's,[^/]+(myfile),\1,')"
done
Note: this sed uses , instead of / as the delimiter.
However, there are much faster ways.
Here it is with the rename utility, available or easily installed wherever there are bash and perl:
find . -maxdepth 2 -name "*.txt" | rename 's,[^/]+(myfile),/$1,'
Here are tests on 1000 files:
for `find`; do mv    9.176s
rename               0.099s
That's 100x as fast.
John Bollinger's accepted answer is twice as fast as the OP's, but 50x as slow as this rename solution:
for|for|mv "$f" "${f//}"    4.316s
Also, it won't work if there is a directory with too many items for a shell glob; likewise any answer that uses for f in *.txt, for f in */*.txt, find *, or rename ... subdir*/*. Answers that begin with find ., on the other hand, will also work on directories with any number of items.
I am trying to create a simple bash script which will echo all the files from a folder, including subfolders. The following is my code. But the output I am getting is just ls $fromFolder
#! /bin/bash
fromFolder="~/proj/activex"
toFolder="~/proj/outgoing"
files='ls $fromFolder'
for file in $files
do
echo $file
done
Thanks
No need to use the ls command here. You can simply replace your for loop with:
for file in ~/proj/activex/*
do
echo "$file"
done
find "$fromFolder" -print
will print all of the files and subdirectories in $fromFolder.
This lists only regular files:
find "$fromFolder" -type f -print
This lists only directories:
find "$fromFolder" -type d -print
In your code, this line has a problem:
files='ls $fromFolder'
$fromFolder will never be expanded to its value by bash because of the single quotes.
You need command substitution rather than quotes, so that ls actually runs and its output is captured; double quotes alone would still leave files holding a literal string:
files=$(ls "$fromFolder")
Although anubhava's solution is better.
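For completeness, a sketch of the fully corrected script; note that the tilde must also be unquoted in the assignment, because a tilde inside quotes is not expanded:
fromFolder=~/proj/activex
files=$(ls "$fromFolder") # command substitution runs ls and captures its output
for file in $files # unquoted on purpose so the result splits into words
do
echo "$file"
done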
I'm looping through certain files (all files starting with MOVIE) in a folder with this bash script code:
for i in MY-FOLDER/MOVIE*
do
which works fine when there are files in the folder. But when there aren't any, it somehow goes on with one file which it thinks is named MY-FOLDER/MOVIE*.
How can I keep it from entering the body after
do
when there aren't any matching files in the folder?
With the nullglob option.
$ shopt -s nullglob
$ for i in zzz* ; do echo "$i" ; done
$
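Applied to your case, a sketch:
shopt -s nullglob
for i in MY-FOLDER/MOVIE*
do
echo "$i" # never runs when the glob matches nothing
done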
for i in $(find MY-FOLDER -name 'MOVIE*' -type f); do
echo "$i"
done
The find utility is one of the Swiss Army knives of Linux. It starts at the directory you give it and finds all files in all subdirectories, according to the options you give it.
-type f will find only regular files (not directories).
As I wrote it, the command will find files in subdirectories as well; you can prevent that by adding -maxdepth 1
Edit, 8 years later (thanks for the comment, @tadman!)
You can avoid the loop altogether with
find . -type f -exec echo "{}" \;
This tells find to echo the name of each file by substituting its name for {}. The escaped semicolon is necessary to terminate the command that's passed to -exec.
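Using a + terminator instead of the escaped semicolon makes find pass many filenames to each invocation, which is faster on large trees:
find . -type f -exec echo {} +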
for file in MY-FOLDER/MOVIE*
do
# Skip if not a file
test -f "$file" || continue
# Now you know it's a file.
...
done