Cd into subdirectories using Bash - shell

I am writing a shell script to cd into the subdirectories of a main directory, but I am getting an error that the directory does not exist. When I print out the value of the directory variable, I see that the directory names are listed in a strange way. Can someone help me correct this error?
Code:
for d in ./*
do
(cd $d && ./example.c)
done
Output when I do echo $d:
./Directory, Test
./Directory, Test1
Original directory names in the parent directory:
Test,Directory
Test1,Directory
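One likely culprit (a guess, not stated in the question): $d is unquoted, so directory names containing spaces are split into separate words when passed to cd. A minimal sketch with quoting, where pwd stands in for whatever you actually want to run in each subdirectory:
for d in ./*/ ; do
    # quote "$d" so names containing spaces or commas survive intact
    (cd "$d" && pwd)
done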

Related

Writing a loop for removing files from subdirectories

I'm trying to figure out how to write a loop to remove specific files from multiple subdirectories, which are all very similar in structure.
Within /main_directory/ there are 50 sub directories all with different names.
So /main_directory/sub1, /main_directory/testsub1, /main_directory/sub1_practice, and so on. Each subdirectory contains different file types, and I want to remove one specific type, call it .txt.
I want to write a loop something like this:
for file in ./main_directory/*; cd $file; rm *.txt; done
So that within each subdirectory in main_directory, all .txt files are removed.
Apologies for the basic question, but I can only find solutions for subdirectories that have the same name or prefix, not all differently named subdirectories.
I tried for file in ./main_directory/*; cd $file; rm *.txt; done
and received a "No such file or directory" error for all the subdirectories.
If you cd into the directory, you have to remember to return to your original working directory afterward; otherwise, every future command will be executed relative to whatever directory you last successfully changed to.
Some examples that should work:
# don't change directories at all
for dir in ./main_directory/* ; do
rm "$dir"/*.txt
done
# explicitly change back to the original directory
for dir in ./main_directory/* ; do
cd "$dir" && { rm *.txt ; cd - ; }
done
# change directories in a subshell, so that your
# original working directory is restored afterward
for dir in ./main_directory/* ; do
    (
        cd "$dir" && rm *.txt
    )
done
# without a loop, using the find command
find ./main_directory -type f -name '*.txt' -delete
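If you want to see what would be deleted before running any of these, a non-destructive preview (sketch):
# print the matching files without deleting them
find ./main_directory -type f -name '*.txt' -print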
After descending into the first directory, you don't return to the original location, so the second directory will be searched for inside the first one. An improvement to your code would be adding cd ..:
for directory in ./main_directory/*; do
cd "$directory" || continue
rm *.txt
cd ..
done
This code would be wrong without the || continue part: when cd "$directory" fails, it would start removing files from the wrong location and then cd .. would leave you one directory too high. That is why the cd is written as cd "$directory" || continue.
An alternative is running the loop body in a subshell:
for directory in ./main_directory/*; do
    (
        cd "$directory" || continue
        rm *.txt
    )
done
You still need the || continue, because you don't want to remove files from the wrong directory.
In this example, you don't actually need a loop. You can use
rm main_directory/*/*.txt
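If you are unsure, you can preview what that glob matches before deleting anything:
# non-destructive preview of exactly what the rm above would remove
ls main_directory/*/*.txt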

Exclude current directory from tar

I'm trying to exclude the current directory from the tarball without excluding its contents, because when I extract it using the -k flag I get an exit status of 1 and the message
./: Already exists
tar: Error exit delayed from previous errors.
How do I do this? I've tried the --exclude flag but that excludes the contents also (rightly so). I'm trying to code this for both the OSX/BSD and GNU versions of tar.
Test case:
# Setup
mkdir /tmp/stackoverflow
cd /tmp/stackoverflow
mkdir dir
touch dir/file
# Create
tar cCf dir dir.tar .
# List contents
tar tf dir.tar
gives
./
./file
showing that the current directory ./ is in the tar. This would be fine, but when I do the following:
mkdir dir2
tar xkfC dir.tar dir2
due to the -k flag, I get an exit code of 1 and the message
./: Already exists
tar: Error exit delayed from previous errors.
To exclude the current directory you can create your archive this way:
tar cf /path/to/dir.tar ./*
Use ./* instead of .; the glob will not match the current directory (.), so it will not be included in the archive.
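Applied to the test case from the question, this might look like the following (a sketch; the subshell stands in for the -C flag so that the glob expands inside dir):
# create the archive from inside dir so that ./* expands to its contents
(cd dir && tar cf ../dir.tar ./*)
tar tf dir.tar    # lists ./file, with no ./ entry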
This does the trick:
GLOBIGNORE=".:.."
cd dir
tar cf ../dir.tar *
The extra cd is to replace the use of the -C flag, so that we can use a glob. The GLOBIGNORE ignores the current directory, but also sets shopt -s dotglob implicitly to include hidden files. Using * instead of ./* means that the tar file listing doesn't list ./file, but instead lists it as file. The entire listing of the tar is:
file
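Continuing the question's test case, extraction with -k should now succeed (a quick check, assuming dir.tar was rebuilt as above):
mkdir -p dir2            # dir2 may already exist from the earlier test
tar xkfC dir.tar dir2    # exits 0; no "./: Already exists" error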

Bash script get one directory back

I can get my current working directory with
my_dir=$(pwd)
echo $my_dir
/files/work/test
How do I get /files/work? I don't want to change directories; I just need to get the /files/work directory.
Try one of these (assuming you didn't manually change $PWD):
echo "${PWD%/*}"
dirname "$PWD"
(cd .. && pwd)
echo `dirname \`pwd\``
This will print the parent directory of the present working directory.
You can go up one directory:
$ cd ..
or you can use dirname to print the directory one level up:
$ dirname "/files/work/test"
/files/work
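Putting it together in a script without changing the working directory (a sketch; parent is just an illustrative variable name):
my_dir=$(pwd)                # e.g. /files/work/test
parent=$(dirname "$my_dir")  # or: parent=${my_dir%/*}
echo "$parent"               # /files/work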

How to cd into multiple directories and zip each one (naming the archive after the directory)?

So I have a bash script that cds into a directory, executes a command and then exits and enters a new directory again:
for d in ./*/ ; do (cd "$d" && somecommand); done
From here.
Unfortunately I'm not sure how to zip up the directory it is in (maybe using something such as 7z). It was a long shot but I tried this command and it didn't work (I didn't expect the asterisk to take the name of the directory...but I hoped):
7z a -r *.zip
I don't suppose anyone has any suggestions?
The variable $d contains the name of the directory (among other things):
for d in ./*/ ; do (
cd "$d"
dirname=${d%/} # remove trailing /
dirname=${dirname##*/} # remove everything up to the last /
7z a -r "$dirname".zip
)
done
I'm assuming that your 7z command was correct.
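To see what the two parameter expansions do, here is a quick check with a hypothetical directory name:
d='./My Project/'
dirname=${d%/}          # ./My Project   (trailing / removed)
dirname=${dirname##*/}  # My Project     (everything up to the last / removed)
echo "$dirname".zip     # My Project.zip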
Perhaps you're looking for something like this:
for d in *; do
test -d "$d" && zip -r "$d.zip" "$d"
done
That examines all files in the working directory whose names do not begin with '.' (for d in *). For those that are directories (test -d "$d"), it zips the directory's contents recursively, stored under the directory name inside the archive. The zip files are left in the original working directory (the parent of all the directories that get zipped), but they could as easily be put into the subdirectories.

Shell Script to update the contents of a folder - 2

I wrote this piece of code this morning.
The idea is, a text file (new.txt) has the details about the directory structure and the files in the directory.
Read new.txt, create the same directory structure at a destination directory (here it is /tmp), copy the source files to the corresponding destination directory.
Script
clear
DEST_DIR=/tmp
for file in `cat new.txt`
do
    mkdir -p $file
    touch $file
    echo `ls -ltr $file`
    cp -rf $file $DEST_DIR
    find . -name $file -type f
    cp $file $DEST_DIR
done
Contents of new.txt
Test/test1/test1.txt
Test/test2/test2.txt
Test/test3/test3.txt
Test/test4/test4.txt
The issue is that it executes the code and creates the directory structure, but instead of creating the files at the end of each path, it creates directories named test1.txt, test2.txt, etc. I have no idea why this is happening.
Another question: for Turbo C and C++ there is an option to check the execution flow. Is there something similar available in Unix for Perl and shell scripting?
The script creates these directories because you tell it to on the line mkdir -p $file. You have to extract the directory path from your filename. The standard command for this is dirname:
dir=`dirname "$file"`
mkdir -p -- "$dir"
To check the execution flow, add set -x at the top of your script. This will cause every line that is executed to be printed to stderr with "+ " in front of it.
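A corrected sketch of the whole loop, assuming new.txt lists file paths relative to the current directory and the same layout should be recreated under /tmp:
DEST_DIR=/tmp
while IFS= read -r file; do
    dir=`dirname "$file"`          # e.g. Test/test1
    mkdir -p "$DEST_DIR/$dir"      # create only the directory part
    cp "$file" "$DEST_DIR/$dir/"   # then copy the file into it
done < new.txt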
You might want to try something like rsync.
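For example (a sketch, assuming the paths in new.txt are relative to the current directory):
# copy only the listed files, recreating their directories under /tmp
rsync -av --files-from=new.txt . /tmp/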
