Using Variables with grep, and an IF statement regarding this - bash

I am looking to search for strings within a file using variables.
I have a script that will accept 3 or 4 parameters: 3 are required; the 4th isn't mandatory.
I would like to search the text file for a line where all 3 parameters match, and if they do, remove that line and replace it with my new one - essentially updating the 4th parameter if it is set and avoiding duplicate entries.
Currently this is what I have:
input=$(egrep -e '$domain\s+$type\s+$item' ~/etc/security/limits.conf)
if [ "$input" == "" ]; then
    echo $domain $type $item $value >>~/etc/security/limits.conf
    echo \"$domain\" \"$type\" \"$item\" \"$value\" has been successfully added to your limits.conf file.
else
    cat ~/etc/security/limits.conf | egrep -v "$domain|$type|$item" >~/etc/security/limits.conf1
    rm -rf ~/etc/security/limits.conf
    mv ~/etc/security/limits.conf1 ~/etc/security/limits.conf
    echo $domain $type $item $value >>~/etc/security/limits.conf
    echo \"$domain\" \"$type\" \"$item\" \"$value\" has been successfully added to your limits.conf file.
    exit 0
fi
Now I already know that the input=egrep etc.. will not work; it works if I hard code some values, but it won't accept those variables. Basically I have domain=$1, type=$2 and so on.
I would like it so that if all 3 variables are not matched within one line, then it will just append the parameters to the end of the file, but if the parameters do match, then I want that line to be deleted and the new one appended to the file. I know I can use other things like sed and awk, but I have yet to learn them.
This is for a school assignment, and all help is very much appreciated, but I'd also like to learn why and how it works/doesn't, so if you can provide answers to that as well that would be great!

Three things:
To assign the output of a command, use var=$(cmd).
Don't put spaces around the = in assignments.
Variables don't expand inside single quotes: use double quotes.
To summarize:
input=$(egrep -e "$domain\s+$type\s+$item" ~/etc/security/limits.conf)
Also note that ~ is your home directory, so if you meant /etc/security/limits.conf and not /home/youruser/etc/security/limits.conf, leave off the ~
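As a quick illustration of the quoting point (the values here are invented), compare what the shell actually hands to the command:
domain=ftp; type=hard; item=nproc
echo '$domain\s+$type\s+$item'    # single quotes: prints $domain\s+$type\s+$item literally
echo "$domain\s+$type\s+$item"    # double quotes: prints ftp\s+hard\s+nproc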

You have several bugs in your script. Here's your script with some comments added
input=$(egrep -e '$domain\s+$type\s+$item' ~/etc/security/limits.conf)
# use " not ' in the string above or the shell can't expand your variables.
# some versions of egrep won't understand '\s'. The safer, POSIX character class is [[:blank:]].
if [ "$input" == "" ]; then
# the shell equality test operator is =, not ==. Some shells will also take == but don't count on it.
# the normal way to check for a variable being empty in shell is with `-z`
# you can have problems with tests in some shells if $input is empty, in which case you'd use [ "X$input" = "X" ].
echo $domain $type $item $value >>~/etc/security/limits.conf
# echo is unsafe and non-portable, you should use printf instead.
# the above calls echo with 4 args, one for each variable - you probably don't want that and should have double-quoted the whole thing.
# always double-quote your shell variables to avoid word splitting and file name expansion (google those - you don't want them happening here!)
echo \"$domain\" \"$type\" \"$item\" \"$value\" has been successfully added to your limits.conf file.
# the correct form would be:
# printf '"%s" "%s" "%s" "%s" has been successfully added to your limits.conf file.\n' "$domain" "$type" "$item" "$value"
else
cat ~/etc/security/limits.conf | egrep -v "$domain|$type|$item" >~/etc/security/limits.conf1
# Useless Use Of Cat (UUOC - google it). [e]grep can open files just as easily as cat can.
rm -rf ~/etc/security/limits.conf
# -r is for recursively removing files in a directory - inappropriate and misleading when used on a single file.
mv ~/etc/security/limits.conf1 ~/etc/security/limits.conf
# pointless to remove the file above when you're overwriting it here anyway
# If your egrep above failed to create your temp file (e.g. due to memory or permissions issues) then the "mv" above would zap your real file. the correct way to do this is:
# egrep regexp file > tmp && mv tmp file
# i.e. use && to only do the mv if creating the tmp file succeeded.
echo $domain $type $item $value >>~/etc/security/limits.conf
# see previous echo comments.
echo \"$domain\" \"$type\" \"$item\" \"$value\" has been successfully added to your limits.conf file.
# ditto
exit 0
# pointless and misleading having an explicit "exit <success>" when that's what the script will do by default anyway.
fi
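Putting those comments together, one way the corrected pieces could fit (a sketch only, untested; it assumes blank-separated fields and keeps the ~/etc/security path from the question - adjust per the note about ~ above):
limits=~/etc/security/limits.conf
# note: the parameters are used as regular expressions here; a domain like '*' would need escaping
pattern="^$domain[[:blank:]]+$type[[:blank:]]+$item([[:blank:]]|$)"
if grep -Eq "$pattern" "$limits"; then
    # drop the old entry; only replace the real file if writing the temp file succeeded
    # (grep -v exits non-zero if no lines remain, in which case the original file is left untouched)
    grep -Ev "$pattern" "$limits" > "$limits.tmp" && mv "$limits.tmp" "$limits"
fi
printf '%s %s %s %s\n' "$domain" "$type" "$item" "$value" >> "$limits"
printf '"%s" "%s" "%s" "%s" has been successfully added to your limits.conf file.\n' "$domain" "$type" "$item" "$value"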

This line:
input=$(egrep -e '$domain\s+$type\s+$item' ~/etc/security/limits.conf)
requires double quotes around the regex to allow the shell to interpolate the variable values.
input=$(egrep -e "$domain\s+$type\s+$item" ~/etc/security/limits.conf)
You need to be careful with backslashes; you probably don't have to double them up in this context, but you should be sure you know why.
You should be aware that your first egrep command is much more restrictive in what it selects than the second egrep which is used to delete data from the file. The first requires the entry with the three fields on a single line; the second only requires a match with any one of the words (and that could be part of a larger word) to delete the line.
Since ~/etc/security/limits.conf is a file, there is no need to use the -r option of rm; it is advisable not to use the -r unless you intend to remove directories.
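For example, the delete step could reuse the same three-field pattern as the search (a sketch; the parameters are still treated as regular expressions):
egrep -v "$domain[[:blank:]]+$type[[:blank:]]+$item" ~/etc/security/limits.conf > ~/etc/security/limits.conf1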

Related

Bash script MV is disappearing files

I've written a script to go through all the files in the directory the script is located in, identify if a file name contains a certain string and then modify the filename. When I run this script, the files that are supposed to be modified are disappearing. It appears my usage of the mv command is incorrect and the files are likely going to an unknown directory.
#!/bin/bash
string_contains="dummy_axial_y_position"
string_dontwant="dummy_axial_y_position_time"
file_extension=".csv"
for FILE in *
do
    if [[ "$FILE" == *"$string_contains"* ]];then
        if [[ "$FILE" != *"$string_dontwant"* ]];then
            filename= echo $FILE | head -c 15
            combined_name="$filename$file_extension"
            echo $combined_name
            mv $FILE $combined_name
            echo $FILE
        fi
    fi
done
I've done my best to go through the possible errors I've made in the MV command but I haven't had any success so far.
There are a couple of problems and several places where your script can be improved.
filename= echo $FILE | head -c 15
This pipeline runs echo $FILE with the variable filename set to the empty string in its environment. That value is visible only to the echo command; the variable is not set in the current shell (and echo does not care about it anyway).
You probably want to capture the output of echo $FILE | head -c 15 into the variable filename but this is not the way to do it.
You need to use command substitution for this purpose:
filename=$(echo $FILE | head -c 15)
head -c 15 outputs only the first 15 bytes of its input (they could span multiple lines, but that does not happen here). head is not the most appropriate tool for this; use cut -c-15 instead.
But for what you need (extract the first 15 characters of the value stored in the variable $FILE), there is a much simpler way; use a form of parameter expansion called "substring expansion":
filename=${FILE:0:15}
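As a quick check (the filename below is invented), the expansion gives the same prefix as the pipeline, without starting any extra processes:
FILE="dummy_axial_y_position_run3.csv"
echo "${FILE:0:15}"        # dummy_axial_y_p
echo "$FILE" | cut -c-15   # same output, via a pipeline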
mv $FILE $combined_name
Before running mv, the variables $FILE and $combined_name are expanded (this is called "parameter expansion"). This means that the variables are replaced by their values.
For example, if the value of FILE is abc def and the value of combined_name is mnp opq, the line above becomes:
mv abc def mnp opq
The mv command receives 4 arguments and it attempts to move the files denoted by the first three arguments into the directory denoted by the fourth argument (and it probably fails).
In order to keep the values of the variables as single words (if they contain spaces), always enclose them in double quotes. The correct command is:
mv "$FILE" "$combined_name"
This way, in the example above, the command becomes:
mv "abc def" "mnp opq"
... and mv is invoked with two arguments: abc def and mnp opq.
combined_name="$filename$file_extension"
There isn't any problem in this line. The quotes are simply not needed.
The variables filename and file_extension are expanded (replaced by their values) but on assignments word splitting is not applied. The value resulted after the replacement is the value assigned to variable combined_name, even if it contains spaces or other word separator characters (spaces, tabs, newlines).
The quotes are also not needed here because the values do not contain spaces or other characters that are special in the command line. They must be quoted if they contain such characters.
string_contains="dummy_axial_y_position"
string_dontwant="dummy_axial_y_position_time"
file_extension=".csv"
It is not incorrect to quote the values, though.
for FILE in *
do
if [[ "$FILE" == *"$string_contains"* ]];then
if [[ "$FILE" != *"$string_dontwant"* ]]; then
This is also not wrong but it is inefficient.
You can use the expression from the if condition directly in the for statement (and get rid of the if statement):
for FILE in *"$string_contains"*; do
if [[ "$FILE" != *"$string_dontwant"* ]]; then
...
If you have read and understood the above (and some of the linked documentation) you will be able to figure out for yourself where your files were moved :-)
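For reference, the corrected loop could look something like this (a sketch; it assumes the first 15 characters plus .csv really is the name you want, and note that several matching files can share the same 15-character prefix):
#!/bin/bash
string_contains="dummy_axial_y_position"
string_dontwant="dummy_axial_y_position_time"
file_extension=".csv"
for FILE in *"$string_contains"*; do
    if [[ "$FILE" != *"$string_dontwant"* ]]; then
        combined_name="${FILE:0:15}$file_extension"
        echo "renaming $FILE to $combined_name"
        mv -n -- "$FILE" "$combined_name"   # -n: do not overwrite an existing file
    fi
done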

Bash File names will not append to file from script

Hello, I am trying to get all files with Jane's name into a separate file called oldFiles.txt. In a directory called "data" I am reading from a list of file names in a file called list.txt, and I put all the file names containing the name Jane into the files variable. Then I try to test the files variable against the files in list.txt to ensure they are in the file system, and append all the files containing jane to the oldFiles.txt file (which will be in the scripts directory) once each item in the files variable passes the test.
#!/bin/bash
> oldFiles.txt
files= grep " jane " ../data/list.txt | cut -d' ' -f 3
if test -e ~data/$files; then
    for file in $files; do
        if test -e ~/scripts/$file; then
            echo $file>> oldFiles.txt
        else
            echo "no files"
        fi
    done
fi
The above code gets the desired files and displays them correctly, and it creates the oldFiles.txt file, but when I open the file after running the script I find that nothing was appended to it. I tried changing the assignment from files= grep " jane " ../data/list.txt | cut -d' ' -f 3 to command substitution, files=$(grep " jane " ../data/list.txt), to see if that would help by just capturing raw data to write to the file, but then the error "too many arguments on line 5" comes up, which is the first if test statement. The only way I get the script to work semi-properly is when I run ./findJane.sh > oldFiles.txt on the shell command line, which is essentially me creating the file manually. How would I go about creating oldFiles.txt and appending to it all within the script?
The biggest problem you have is matching names like "jane" or "Jane's", etc. while not matching "Janes". grep provides the options -i (case insensitive match) and -w (whole-word match) which can tailor your search to what you appear to want without having to use the kludge (" jane ") of appending spaces before and after your search term. (To do that properly you would use [[:space:]]jane[[:space:]].)
You also have the problem of what your "script dir" is if you call your script from a directory other than the one containing it, such as calling it from your $HOME directory with bash script/findJane.sh. In that case your script will attempt to append to $HOME/oldFiles.txt. The parameter $0 holds the path used to invoke the script being run, so you can capture the script directory no matter where you call the script from with:
dirname "$0"
You are using bash, so store all the filenames resulting from your grep command in an array, not some general variable (especially since your use of " jane " suggests that your filenames contain whitespace)
You can make your script much more flexible if you take the information about your input file (e.g. list.txt), the term to search for (e.g. "jane"), the location where to check for existence of the files (e.g. $HOME/data) and the output filename to append the names to (e.g. "oldFiles.txt") as command line [positional] parameters. You can give each a default value so the script behaves as you currently desire without providing any arguments.
Even with the additional flexibility of taking command line arguments, the script actually has fewer lines: simply fill an array using mapfile (synonymous with readarray) and then loop over the contents of the array. You could also avoid the extra subshell for dirname with a simple parameter expansion and a test for an empty path component (replacing it with '.'), but that part is up to you.
If I've understood your goal correctly, you can put all the pieces together with:
#!/bin/bash
# positional parameters
src="${1:-../data/list.txt}"    # 1st param - input (default: ../data/list.txt)
term="${2:-jane}"               # 2nd param - search term (default: jane)
data="${3:-$HOME/data}"         # 3rd param - file location (default: $HOME/data)
outfn="${4:-oldFiles.txt}"      # 4th param - output (default: oldFiles.txt)
# save the path to the current script in script
script="$(dirname "$0")"
# if outfn was not given, prepend the script path so the output lands
# in the script directory (if the script is called from elsewhere)
[ -z "$4" ] && outfn="$script/$outfn"
# split names w/term into array
# using the -iw option for case-insensitive whole-word match
mapfile -t files < <(grep -iw "$term" "$src" | cut -d' ' -f 3)
# loop over files array
for ((i=0; i<${#files[@]}; i++)); do
    # test existence of file in data directory, redirect name to outfn
    [ -e "$data/${files[i]}" ] && printf "%s\n" "${files[i]}" >> "$outfn"
done
(note: test expression and [ expression ] are synonymous, use what you like, though you may find [ expression ] a bit more readable)
(further note: "Janes" being plural is not considered the same as the singular -- adjust the grep expression as desired)
Example Use/Output
As was pointed out in the comment, without a sample of your input file, we cannot provide an exact test to confirm your desired behavior.
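Purely to illustrate the mechanics (the file names and list.txt contents below are invented, and the script is assumed to be saved as findJane.sh and run from its own directory), a run could look like:
$ cat ../data/list.txt
001 jane jane_profile_07272018.doc
002 kwood kwood_report_Q4.doc
003 Jane jane_pic_07282018.jpg
$ ls ~/data
jane_pic_07282018.jpg  jane_profile_07272018.doc
$ bash findJane.sh
$ cat oldFiles.txt
jane_profile_07272018.doc
jane_pic_07282018.jpg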
Let me know if you have questions.
As far as I can tell, this is what you're going for. This is totally a community effort based on the comments, catching your bugs. Obviously credit to Mark and Jetchisel for finding most of the issues. Notable changes:
Fixed $files to use command substitution
Fixed path to data/$file, assuming you have a directory at ~/data full of files
Fixed the test to not test for a string of files, but just the single file (also using -f to make sure it's a regular file)
Using double brackets — you could also use double quotes instead, but you explicitly have a Bash shebang so there's no harm in using Bash syntax
Adding a second message about not matching files, because there are two possible cases there; you may need to adapt depending on the output you're looking for
Removed the initial empty redirection — if you need to ensure that the file is clear before the rest of the script, then it should be added back, but if not, it's not doing any useful work
Changed the shebang to make sure you're using the user's preferred Bash, and added set -e because you should always add set -e
#!/usr/bin/env bash
set -e
files=$(grep " jane " ../data/list.txt | cut -d' ' -f 3)
for file in $files; do
    if [[ -f $HOME/data/$file ]]; then
        if [[ -f $HOME/scripts/$file ]]; then
            echo "$file" >> oldFiles.txt
        else
            echo "no matching file"
        fi
    else
        echo "no files"
    fi
done

Adding test_ in front of a file name with path

I have a list of files stored in a text file, and if a Python file is found in that list, I want to run the corresponding test file using Pytest.
My file looks like this:
/folder1/file1.txt
/folder1/file2.jpg
/folder1/file3.md
/folder1/file4.py
/folder1/folder2/file5.py
When the 4th/5th files are found, I want to run the pytest command like:
pytest /folder1/test_file4.py
pytest /folder1/folder2/test_file5.py
Currently, I am using this command:
cat /workspace/filelist.txt | while read line; do if [[ $$line == *.py ]]; then exec "pytest test_$${line}"; fi; done;
which is not working correctly, as I have the file path in the text as well. Any idea how to implement this?
Using Bash's variable substring removal to add the test_. One-liner:
$ while read line; do if [[ $line == *.py ]]; then echo "pytest ${line%/*}/test_${line##*/}"; fi; done < file
In more readable form:
while read line
do
    if [[ $line == *.py ]]
    then
        echo "pytest ${line%/*}/test_${line##*/}"
    fi
done < file
Output:
pytest /folder1/test_file4.py
pytest /folder1/folder2/test_file5.py
Don't know anything about the Google Cloudbuild so I'll let you experiment with the double dollar signs.
Update:
In case there are files already with test_ prefix, use this bash script that utilizes extglob in variable substring removal:
shopt -s extglob # notice
while read line
do
    if [[ $line == *.py ]]
    then
        echo "pytest ${line%/*}/test_${line##*/?(test_)}" # notice
    fi
done < file
You can easily refactor all your conditions into a simple sed script. This also gets rid of the useless cat and the similarly useless exec.
sed -n 's%[^/]*\.py$%test_&%p' /workspace/filelist.txt |
xargs -n 1 pytest
The regular expression matches anything after the last slash, which means the entire line if there is no slash; we include the .py suffix to make sure this only matches those files.
The pipe to xargs is a common way to convert standard input into command-line arguments. The -n 1 says to pass one argument at a time, rather than as many as possible. (Maybe pytest allows you to specify many tests; then, you can take out the -n 1 and let xargs pass in as many as it can fit.)
If you want to avoid adding the test_ prefix to files which already have it, one solution is to break up the sed script into two separate actions:
sed -n '/test_[^/]*\.py/p;t;s%[^/]*\.py$%test_&%p' /workspace/filelist.txt |
xargs -n 1 pytest
The first p simply prints the matches verbatim; the t says if that matched, skip the rest of the script for this input.
(MacOS / BSD sed will want a newline instead of a semicolon after the t command.)
sed is arguably a bit of a read-only language; this is already pressing towards the boundary where perhaps you would rewrite this in Awk instead.
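For what it's worth, a rough Awk equivalent of the same two rules might look like this (a sketch with the same assumptions about the file layout, not a drop-in from the question):
awk '/\.py$/ { if ($0 !~ /test_[^\/]*\.py$/) sub(/[^\/]*\.py$/, "test_&"); print }' /workspace/filelist.txt |
xargs -n 1 pytest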
You may want to focus on lines that end with the ".py" string.
You can achieve that using grep combined with a regex to figure out whether a line ends with .py - that eliminates the if statement.
IFS=$'\n'
for file in $(cat /workspace/filelist.txt|grep '\.py$');do pytest $file;done

MacOS shell script to move files based on tag

I am trying to write a shell script so that I can move school files from one destination to another based on the input. I download these files from a source like canvas and want to move them from my downloads based on the tag I assign, to the path for my course folder which is nested pretty deep thanks to how I stay organized. Unfortunately, since I store these files in my OneDrive school account, I am unable to eliminate some spacing issues but I believe I have accounted for these. Right now the script is the following:
if [ "$1" = "311" ];
then
course="'/path/to/311/folder/$2'"
elif [ "$1" = "411" ];
then
course="'/path/to/411/folder/$2'"
elif [ "$1" = "516" ];
then
course="'/path/to/516/folder/$2'"
elif [ "$1" = "530" ];
then
course="'/path/to/530/folder/$2'"
elif [ "$1" = "599" ];
then
course="'/path/to/599/folder/$2'"
fi
files=$(mdfind 'kMDItemUserTags='$1'' -onlyin /Users/user/Downloads)
#declare -a files=$(mdfind 'kMDItemUserTags='$1'' -onlyin /Users/user/Downloads)
#mv $files $course
#echo "mv $files $course"
#echo $course
for file in $files
#for file in "${files[#]}"
do
#echo $file
#echo $course
mv $file $course
done
Where $1 is the tag ID and first part of path selection, and $2 is what week number folder I want to move it to. The single quotation marks are there to take care of the spacing in the filepath. I could very easily do this in python but I'm trying to expand my capabilities some. Every time I run this script I get the following message:
usage: mv [-f | -i | -n] [-v] source target
mv [-f | -i | -n] [-v] source ... directory
I initially tried to just move them all at once (per the first mv command that's commented out) and got this error, then tried the for loop and the array, but got the same error each time. However, when I uncomment the echo statements in the for loop and manually move each one by copying and pasting the paths to the command line, it works perfectly. My best guess is that it is something to do with the formatting of the variable "files", since
echo "mv $files $course"
indicates the presence of a newline character or separator between each file it saves.
I'm sure it's something super simple that I'm missing since I just started trying to pick up shell scripting last week, but nothing I have been able to find online has helped me resolve this. Any help would be greatly appreciated. Thanks
You can replace the files variable assignment and the for loop with one command, making the script:
if [ "$1" = "311" ];
then
course="'/path/to/311/folder/$2'"
elif [ "$1" = "411" ];
then
course="'/path/to/411/folder/$2'"
elif [ "$1" = "516" ];
then
course="'/path/to/516/folder/$2'"
elif [ "$1" = "530" ];
then
course="'/path/to/530/folder/$2'"
elif [ "$1" = "599" ];
then
course="'/path/to/599/folder/$2'"
fi
mv -t $course $(mdfind 'kMDItemUserTags='$1'' -onlyin /Users/user/Downloads | sed ':a;N;$!ba;s/\n/ /g)
The sed ':a;N;$!ba;s/\n/ /g' command simply replaces the newline characters with spaces, and the -t option for mv makes mv take the destination directory as the first argument.
You're getting rather confused about how quoting works in the shell. First rule: quotes go around data, not in data. For example, you use:
course="'/path/to/311/folder/$2'"
...
mv $file $course
When you set course this way, the double-quotes are treated as shell syntax (i.e. they change how what's between them is parsed), but the single-quotes are stored as part of the variable's value, and will thereafter be treated as data. When you use this variable in the mv command, it's actually looking for a directory literally named single-quote, and under that a directory named "path", etc. Instead, just put the appropriate quotes for how you want it parsed at that point, and then double-quotes around the variable when you use it (to prevent probably-unwanted word splitting and wildcard expansion). Like this:
course="/path/to/311/folder/$2"
...
mv "$file" "$course" # This needs more work -- see below
Also, where you have:
mdfind 'kMDItemUserTags='$1'' -onlyin /Users/user/Downloads
that doesn't really make any sense. You've got a single-quoted section, 'kMDItemUserTags=' where the quotes have no effect at all (single-quotes suppress all special meanings that characters have, like $ introducing variable substitution, but there aren't any characters there with special meanings, so no reason for the quotes), followed by $ without double-quotes around it, meaning that some special characters (whitespace and wildcards) in its value will get special parsing (which you probably don't want), followed by a zero-length single-quoted string, '', which parses out to exactly nothing. You want the $1 part in double-quotes; some people also include the rest of the string in the double-quoted section, which has no effect at all. In fact, other than the $2 part (and the spaces between parameters), you can quote or not however you want. Thus, any of these would work equivalently:
mdfind kMDItemUserTags="$1" -onlyin /Users/user/Downloads
mdfind "kMDItemUserTags=$1" -onlyin /Users/user/Downloads
mdfind "kMDItemUserTags=$1" '-onlyin' '/Users/user/Downloads'
mdfind 'kMDItemUserTags'="$1" '-'"only"'in' /'Users'/'user'/'Down'loads
...etc
Ok, next problem: parsing the output from mdfind from a series of characters into separate filepaths. This is actually tricky. If you put double-quotes around the resulting string, it'll get treated as one long filepath that happens to contain some newlines in it (which is totally legal, but not what you want). If you don't double-quote it, it'll be split into separate filepaths based on whitespace (not just newlines, but also spaces and tabs -- and spaces are common within macOS filenames), and anything that looks like a wildcard will get expanded to a list of matching filenames. This tends to cause chaos.
The solution: there's one character that cannot occur in a filepath, the ASCII NULL (character code 0), and mdfind -0 will output its list delimited with null characters. You can't put the result in a shell variable (they can't hold nulls either), but you can pass it through a pipe to, say, xargs -0, which will (thanks to the -0 option) parse the nulls as delimiters, and build commands out of the results. There is one slightly tricky thing: you want xargs to put the filepaths it gets in the middle of the argument list to mv, not at the end like it usually does. The -J option lets you tell it where to add arguments. I'll also suggest two safety measures: the -p option to xargs makes it ask before actually executing the command (use this at least until you're sure it's doing the right thing), and the -n option to mv, which tells it not to overwrite existing files if there's a naming conflict. The result is something like this:
mdfind -0 kMDItemUserTags="$1" -onlyin /Users/user/Downloads | xargs -0 -p -J% mv -n % "$course"
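To make that concrete (the script name, tag, and week label here are invented), a call such as ./movecourse.sh 311 week05 would end up running:
mdfind -0 kMDItemUserTags=311 -onlyin /Users/user/Downloads | xargs -0 -p -J% mv -n % "/path/to/311/folder/week05"
and xargs will prompt before running the mv because of -p.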
It is a good point to consider filenames with whitespace in them.
However, the problem is that you are not quoting the filename in the mv command. Please take a look at the simple example below:
filename="with space.txt"
=> assign a variable to a filname with a space
touch "$filename"
=> create a file "with space.txt"
str="'$filename'"
=> wrap with single quotes (as you do)
echo $str
=> yields 'with space.txt' and may look good, which is a pitfall
mv $str "newname.txt"
=> causes an error
The mv command above causes an error because the command is invoked with
three arguments as: mv 'with space.txt' newname.txt. Unfortunately
the pre-quoting with single quotes is meaningless.
Instead, please try something like:
if [ "$1" = "311" ]; then
course="/path/to/311/folder/$2"
elif [ "$1" = "411" ]; then
course="/path/to/411/folder/$2"
elif [ "$1" = "516" ]; then
course="/path/to/516/folder/$2"
elif [ "$1" = "530" ]; then
course="/path/to/530/folder/$2"
elif [ "$1" = "599" ]; then
course="/path/to/599/folder/$2"
else
# illegal value in $1. do some error handling
fi
# the lines above may be simplified if /path/to/*folder/ have some regularity
mdfind "kMDItemUserTags=$1" -onlyin /Users/user/Downloads | while read -r file; do
mv "$file" "$course"
done
# the syntax above works as long as the filenames do not contain newline characters
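As a possible simplification of the if/elif chain (a sketch; it assumes the real folder paths embed the course number as regularly as the placeholders above do):
case "$1" in
    311|411|516|530|599) course="/path/to/$1/folder/$2" ;;
    *) echo "unknown course: $1" >&2; exit 1 ;;
esac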

Basename puts single quotes around variable

I am writing a simple shell script to make automated backups, and I am trying to use basename to create a list of directories and then parse this list to get the first and the last directory from the list.
The problem is: when I use basename in the terminal, all goes fine and it gives me the list exactly as I want it. For example:
basename -a /var/*/
gives me a list of all the directories inside /var without the / in the end of the name, one per line.
BUT, when I use it inside a script and pass a variable to basename, it puts single quotes around the variable:
while read line; do
    dir_name=$(echo $line)
    basename -a $dir_name/*/ > dir_list.tmp
done < file_with_list.txt
When running with +x:
+ basename -a '/Volumes/OUTROS/backup/test/*/'
and, therefore, the result is not what I need.
Now, I know there must be a thousand ways to go around the basename problem, but then I'd learn nothing, right? ;)
How to get rid of the single quotes?
And if my directory name has spaces in it?
If your directory name could include spaces, you need to quote the value of dir_name (which is a good idea for any variable expansion, whether you expect spaces or not).
while read line; do
    dir_name=$line
    basename -a "$dir_name"/*/ > dir_list.tmp
done < file_with_list.txt
(As jordanm points out, you don't need to quote the RHS of a variable assignment.)
Assuming your goal is to populate dir_list.tmp with a list of directories found under each directory listed in file_with_list.txt, this might do.
#!/bin/bash
inputfile=file_with_list.txt
outputfile=dir_list.tmp
rm -f "$outputfile" # the -f makes rm fail silently if file does not exist
while read line; do
    # basic syntax checking
    if [[ ! ${line} =~ ^/[a-z][a-z0-9/-]*$ ]]; then
        continue
    fi
    # collect targets using globbing
    for target in "$line"/*; do
        if [[ -d "$target" ]]; then
            printf "%s\n" "$target" >> $outputfile
        fi
    done
done < $inputfile
As you develop whatever tool will process your dir_list.tmp file, be careful of special characters (including spaces) in that file.
Note that I'm using printf instead of echo so that targets whose first character is a hyphen won't cause errors.
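In this script the targets always begin with a /, but as a general illustration of why printf is the safer choice (the name below is invented):
target="-n"                 # a file or directory literally named "-n"
echo "$target"              # bash's echo treats -n as its "no newline" option and prints nothing
printf "%s\n" "$target"     # prints: -n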
This might work
while read; do
    find "$REPLY" >> dir_list.tmp
done < file_with_list.txt
