Check whether multiple files exist in a directory - shell

I am new to UNIX, please help.
I have a file with many lines; each line is a filename.
I want to check whether each file exists in another directory, allowing for arbitrary prefixes and suffixes around the name.
For example, my text file content is
abc.def.ghi.jkl
mno.pqr.stu.vwx
I want to test if each file exists in a directory, like
cd <search directory>
ls -ltr *abc.def.ghi.jkl*
If the above command finds nothing, throw an error.
Note: The file content is DYNAMIC and I am generating this file through another script.

Assuming your file is
$ cat myList
abc.def.ghi.jkl
mno.pqr.stu.vwx
A quick solution is
cat myList | xargs ls -ltr
If a file is not found, ls will complain on stderr with
ls: xxx: No such file or directory
IHTH
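A more defensive sketch that tests each glob explicitly instead of parsing ls output (an assumption: bash, since `compgen -G` is a bash builtin; `myList`, `searchDir`, and the filenames below are demo placeholders created so the sketch runs as-is):

```shell
# Demo data in a throwaway directory so the sketch is self-contained.
work=$(mktemp -d)
printf '%s\n' abc.def.ghi.jkl mno.pqr.stu.vwx > "$work/myList"
touch "$work/x.abc.def.ghi.jkl.y"   # only the first listed name has a match

searchDir=$work
missing=0
while IFS= read -r name; do
    # compgen -G succeeds only if the glob matches at least one file
    if compgen -G "$searchDir/*$name*" > /dev/null; then
        echo "File found for $name"
    else
        echo "File not found for $name"
        missing=1
    fi
done < "$work/myList"
```

Because the loop reads from a redirection rather than a pipe, `$missing` survives the loop, so the script can exit non-zero afterwards to "throw an error" as the question asks.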

This is the expected answer:
for i in $(cat temp.txt)    # temp.txt is the generated list of names
do
    if ls -ltr *"$i"* > /dev/null 2>&1
    then
        echo "File found for $i"
    else
        echo "File not found for $i"
    fi
done
rm -f temp.txt

Related

Command line for loop with a nested if statement

I need to loop through all the files in the same directory, and if a specific line "File needs to be deleted" exists within any of the files in that directory, delete those files only. How would that work from the command line?
For example, the directory contains file1, file2, file3 and so on for 1000s of files. Each file has 10,000 rows of strings. If a file contains the string "File needs to be deleted", delete it, but don't delete the files that do not contain that string.
I was going along the lines of
for each file in the directory; do
    if [ row text == "File needs to be deleted" ]; then
        delete file
    fi
done
grep -d skip -lF 'File needs to be deleted' file* | xargs echo rm --
If you only have files, no directories, in your current directory then you can just remove -d skip. If your version of grep doesn't have -d but your directory does contain sub-directories then:
find . -maxdepth 1 -type f -exec grep -lF 'File needs to be deleted' {} + | xargs echo rm --
Remove the echo once you've tested and are happy that it's going to remove the files you expect.
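If any of the filenames can contain spaces, a NUL-delimited pipeline is safer than plain `xargs` (an assumption: your grep supports `-Z`/`--null` and your xargs supports `-0`, as the GNU versions do; the files below are demo data so the sketch runs as-is):

```shell
# Sandbox with one matching and one non-matching file.
work=$(mktemp -d) && cd "$work"
printf 'harmless content\n' > 'keep me.txt'
printf 'File needs to be deleted\n' > 'delete me.txt'

# -l lists matching files, -Z terminates each name with a NUL byte,
# and xargs -0 reads those NUL-separated names without word-splitting.
grep -lZF 'File needs to be deleted' ./* | xargs -0 rm --
```

Without `-Z`/`-0`, a filename like "delete me.txt" would be split into two words by xargs and the wrong paths passed to rm.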
Simple bash example:
#!/bin/bash
# Get the running script's name
running_script=$(realpath "$0" | awk -F '/' '{ print $NF }')
# Loop through all files in the current directory
for file in *; do
    # Skip the running script itself.
    [ "$file" == "$running_script" ] && continue
    # If "File needs to be deleted" occurs in the file, delete the file.
    grep -q "File needs to be deleted" "$file" && rm "$file"
done

Loop through directories, check if string exists in one file, move another file

I'm working on a bash script that should do the following: for every directory beginning with Event_* (listed in eventList), cd into the directory, and if the string "ZJ.ROT" exists in the file *.mcp, copy the file "ROT" to another directory. In simpler terms: loop through the directories; if the string "ZJ.ROT" exists in a file in a directory, copy another file from that directory to a separate directory.
#!/bin/bash
mkdir newdire
for dir in `cat eventList`; do
cd $dir
pwd
if grep "ZJ.KNYN" *.mcp; then
cp "ROT" "newdire"
fi
done
The error I get is:
./azim.sh: line 5: cd: Event_2014.11.21.10.10.19.630: No such file or directory
/Users/files/Event_2013.12.01.06.29.57.800
grep: *.mcp: No such file or directory
For some reason, this for loop isn't looping through each directory, but it's stuck in the first directory Event_2013.... Any ideas about how to implement this code?
After the first time you cd to a subdirectory you are in it for all future loop iterations, so your subsequent cds will fail, as you are experiencing. You also need to quote your variables, and there are other issues. Try this:
pwd="$PWD"
mkdir newdire
while IFS= read -r dir; do
    cd "$dir"
    grep -Fq "ZJ.KNYN" *.mcp &&
        cp "ROT" "${pwd}/newdire"
    cd "$pwd"
done < eventList
but of course you don't actually need to cd:
mkdir newdire
while IFS= read -r dir; do
    grep -Fq "ZJ.KNYN" "$dir"/*.mcp &&
        cp "${dir}/ROT" newdire
done < eventList
Problem seems to be here:
if grep "ZJ.KNYN" *.mcp; then
You should use -q option in grep to suppress the output and check the return status like this:
if grep -qF "ZJ.KNYN" *.mcp; then
-F is for fixed string search.
Also there is no need to change directory inside the loop.
Your full script can be better rewritten as:
#!/bin/bash
mkdir newdire
for dir in Event_*; do
    if [[ -d "$dir" ]] && grep -qF "ZJ.KNYN" "$dir"/*.mcp 2>/dev/null; then
        cp "$dir/ROT" "newdire/"
    fi
done

Unable to get the complete list of files matching a wildcard in a shell script

I have a directory with a list of files: abc_1.txt, abc_2.txt.
I have to parse each file name and do some processing on the file
#shellscript /tmp/log*
file_list=`ls $1`
for file_name in $file_list
do
# need to do some processing over the file name
echo $file_name
done
The script does not give the proper output, i.e. it does not print every filename matching the wildcard passed to it.
The script is invoked as shellscript /tmp/log*
Bash expands shellscript /tmp/log* to a list of file names before your script even runs, so the names arrive as positional parameters. ls is not needed; just iterate over the arguments:
for f in "$@"    # "$@" keeps each expanded filename intact as one word
do
    echo "Processing $f"
    # do something with "$f"
done
http://www.cyberciti.biz/faq/bash-loop-over-file/ even gives you some more examples.
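Quoting matters here: unquoted $* would split filenames containing spaces into separate words, while "$@" keeps each one intact. A runnable sketch that fakes the script's positional parameters with `set` (the filenames are made up for illustration):

```shell
# set -- replaces the positional parameters, standing in for the
# filenames the shell would pass after expanding /tmp/log*
set -- "abc 1.txt" "abc 2.txt"

count=0
for f in "$@"; do
    echo "Processing $f"
    count=$((count + 1))
done
```

With "$@" the loop runs exactly twice here; with unquoted $* it would run four times, once per space-separated word.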
When you want the filenames without the dir you can use:
if [ ! -d "$1" ]; then
    echo "Usage: $0 dirname"
    exit 1
fi
cd "$1"
find . -type f | cut -c3- | while read file_name; do
    echo "Found ${file_name}"
done
Alternatively, just remove the wildcard character "*" from the invocation. Passing only the directory path lets the ls inside your script expand to every file in it, so the script will print each file.

Trying to cat files - unrecognized wildcard

I am trying to create a file that contains all of the code of an app. I have created a file called catlist.txt so that the files are added in the order I need them.
A snippet of my catlist.txt:
app/controllers/application_controller.rb
app/views/layouts/*
app/models/account.rb
app/controllers/accounts_controller.rb
app/views/accounts/*
When I run the command the files that are explicitly listed get added but the wildcard files do not.
cat catlist.txt | xargs cat > fullcode
I get
cat: app/views/layouts/*: No such file or directory
cat: app/views/accounts/*: No such file or directory
Can someone help me with this. If there is an easier method I am open to all suggestions.
Barb
Your problem is that xargs is not the shell, so the wildcard is being interpreted literally as an star. You'll need to have a shell to do the expansion for you like this:
cat catlist.txt | xargs -I % sh -c "cat %" > fullcode
Note that the * is not recursive in your data file. I assume that was what you meant. If you want the entries to be recursive, that's a little trickier and would need something more like DevNull's script, but that will require that you change your data file a bit to not include the stars.
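A variant that avoids spawning a shell per entry by letting the reading shell expand each glob itself (the app paths below are stand-ins for your layout, created as demo data so the sketch runs as-is):

```shell
# Sandbox mimicking a slice of the catlist.txt layout.
work=$(mktemp -d) && cd "$work"
mkdir -p app/views/layouts
echo 'controller code' > app/application_controller.rb
echo 'layout code'     > app/views/layouts/main.erb
printf '%s\n' 'app/application_controller.rb' 'app/views/layouts/*' > catlist.txt

while IFS= read -r entry; do
    cat $entry    # deliberately unquoted so the shell expands the glob
done < catlist.txt > fullcode
```

This also sidesteps the injection risk of `sh -c "cat %"`: a list entry containing shell metacharacters is only ever glob-expanded, never re-parsed as a command.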
Are you positive those directories exist?
The problem with doing a cat on a list like that (where you're using wildcards) is that the cat isn't recursive. It will only list the contents of that directory; not any subdirectories.
Here's what I would do:
#!/bin/bash
output="out.txt"
if [ -f "$output" ]
then
    rm "$output"
fi
for file in $(cat catlist.txt)
do
    if [ -f "$file" ]
    then
        echo "$file is a file."
        cat "$file" >> "$output"
    elif [ -d "$file" ]
    then
        echo "$file is a directory."
        find "$file" -type f -exec cat {} >> "$output" \;
    else
        echo "huh?"
    fi
done
If the entry listed is a directory, it finds all files from that point on and cats them.
Use a while read loop to read your file:
while read -r file
do
    if [ -f "$file" ]; then
        yourcode "$file"
    fi
    # expand asterisk
    case "$file" in
        *"*" )
            for f in $file
            do
                yourcode "$f"
            done
            ;;
    esac
done < "catlist.txt"

Strip Characters Before Period If Filename Has Prefix in Bash

I have a directory that looks like this:
pages/
folder1/
folder1.filename1.txt
folder1.filename2.txt
folder2/
folder2.filename4.txt
folder2.filename5.txt
folder3/
filename6.txt
I want it to look like this:
pages/
folder1/
filename1.txt
filename2.txt
folder2/
filename4.txt
filename5.txt
folder3/
filename6.txt
With ls * | sed -e s/^[^.]*.// > /tmp/filenames.txt I get a file containing:
filename1.txt
filename2.txt
filename3.txt
filename4.txt
txt
How can I tell sed to ignore filenames of the form [filename].[suffix] and only look at filenames of the form [foldername].[filename].[suffix]?
The final script (as pointed out, the find command would simplify things, but this worked):
for folder in $(ls .)
do
    if test -d "$folder"
    then
        pushd "$folder"
        ls * | sed 's/.*\.\(.*\..*\)/\1/' > /tmp/filenames.txt
        ls * > /tmp/current.txt
        exec 3</tmp/current.txt
        exec 4</tmp/filenames.txt
        while read file <&3; read name <&4
        do
            mv "$file" "$name"
        done
        rm /tmp/current.txt
        rm /tmp/filenames.txt
        popd
    else
        echo "$folder is not a directory"
    fi
done
exit 0
Give this a try:
sed 's/.*\.\(.*\..*\)/\1/'
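A quick way to see what that substitution does, using the names from the question. The pattern needs two dots to match, so a bare name like filename6.txt passes through unchanged, which is exactly the behaviour the asker wanted:

```shell
# Prefixed name: everything up to the second-to-last dot is stripped.
echo 'folder1.filename1.txt' | sed 's/.*\.\(.*\..*\)/\1/'
# Unprefixed name: only one dot, so the pattern fails and the line
# is printed as-is.
echo 'filename6.txt' | sed 's/.*\.\(.*\..*\)/\1/'
```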
You should really use find; then you wouldn't need the -d check, the temp files, the execs, or the while loop.
You can avoid the temporary file by using process substitution:
while read line
do
    echo "$line"
done < <(ls)
Another item of interest: your system may already have a Perl script called rename or prename which will rename files using a regular expression.
You don't need to use sed:
ls * > /tmp/current.txt
exec 3</tmp/current.txt
while read file <&3
do
    replacement=${file#${folder}.}
    if [ "$replacement" != "txt" ] ; then
        mv "$file" "$replacement"
    fi
done
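The `${file#pattern}` expansion is doing the real work there: it strips the shortest leading match of the pattern. A self-contained sketch, with `folder` and the two filenames standing in for the values the loop would see:

```shell
folder=folder1

# A name carrying the folder prefix has it removed.
file=folder1.filename1.txt
stripped=${file#${folder}.}

# A name without the prefix doesn't match, so it is left unchanged.
file=filename6.txt
untouched=${file#${folder}.}

echo "$stripped $untouched"
```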
Use the following regex:
/\A(.*?\.){2,2}.+\Z/
