How can I compare filenames without getting a permission denied message in bash?

I want to look through all files in my directory and subdirectories, then delete files with a specific name.
Here is my code:
for filename in $1*;do
if("$filename" == "hello.txt");then
echo "WOW!"
fi
done
My test directory is TEST/, and it contains two files named "hello.txt" and "world.txt". However, when I run the script I receive:
noStrange.sh: line 2: TEST/hello.txt: Permission denied
noStrange.sh: line 2: TEST/world.txt: Permission denied
I tried the command chmod u+x scriptname, but it doesn't work.
This is how I invoke the script:
sh scriptname TEST/
Can anyone tell me what is wrong with the script?

Use the basename command to extract the filename from a file path variable:
for filename in "$1"*; do if [[ $(basename "$filename") == "hello.txt" ]]; then echo "wow"; fi; done
Or use the find command, which searches through all files in the current folder as well as its subfolders:
find . -name 'hello.txt'
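Since the stated goal is to delete files with a particular name, find can also perform the removal directly. A minimal sketch, assuming GNU find's -delete action (on other systems, -exec rm {} \; does the same job):
find TEST -name 'hello.txt' -delete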

The immediate answer is that your syntax for tests is wrong; you should have
if ["$filename" == "hello.txt"]; then
etc. However, there are a few issues with your code. Since $filename will match TEST/hello.txt instead of hello.txt, you probably won't get the behavior you want. Also, if you're looking to just delete files with certain names, you probably want a normal UNIX command like
rm TEST/hello.txt
If there are patterns you want to delete, you can use globs/wildcards, or a combination of find, xargs and rm. E.g.
find TEST -name 'hello*.txt' | xargs rm
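Note that a plain find | xargs pipeline breaks on filenames containing spaces or quotes. If that can happen, a null-delimited variant is safer (assuming GNU find and xargs, which support -print0 and -0):
find TEST -name 'hello*.txt' -print0 | xargs -0 rm --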

Related

Passing variables to cp / mv

I can use the cp or mv command to copy/move files to a new folder manually, but inside a for loop it fails.
I've tried various ways of doing this and none seem to work. The most frustrating part is it works when run locally.
A simple version of what I'm trying to do is shown below:
#!/bin/bash
#Define path variables
source_dir=/home/me/loop
destination_dir=/home/me/loop/new
#Change working dir
cd "$source_dir"
#Step through source_dir for each .txt. file
for f in *.txt
do
# If the txt file was modified within the last 300 minutes...
if [[ $(find "$f" -mmin -300) ]]
then
# Add breaks for any spaces in filenames
f="${f// /\\ }"
# Copy file to destination
cp "$source_dir/$f $destination_dir/"
fi
done
Error message is:
cp: missing destination file operand after '/home/me/loop/first\ second.txt /home/me/loop/new/'
Try 'cp --help' for more information.
However, I can manually run:
mv /home/me/loop/first\ second.txt /home/me/loop/new/
and it works fine.
I get the same error using cp and similar errors using rsync so I'm not sure what I'm doing wrong...
cp "$source_dir/$f $destination_dir/"
When you surround both arguments with double quotes you turn them into one argument with an embedded space. Quote them separately.
cp "$source_dir/$f" "$destination_dir/"
There's no need to do anything special for spaces beforehand. The quoting already ensures files with whitespace are handled correctly.
# Add breaks for any spaces in filenames
f="${f// /\\ }"
Let's take a step back, though. Looping over all *.txt files and then checking each one with find is overly complicated. find already loops over multiple files and does arbitrary things to those files. You can do everything in this script in a single find command.
#!/bin/bash
source_dir=/home/me/loop
destination_dir=/home/me/loop/new
find "$source_dir" -name '*.txt' -mmin -300 -exec cp -t "$destination_dir" {} +
You need to divide it into two strings, like this:
cp "$source_dir/$f" "$destination_dir/"
By passing it as one string you are basically telling cp that the entire line is the first parameter, when it is actually two (source and destination).
Edit: as @kamil-cuk and @aaron state, there are better ways of doing what you are trying to do. Please read their comments.

How to make a script independent of where it is executed

I am running into the problem of commands failing because I expect them to be executed in a particular directory, and that is not the case.
For example I want to do:
pdfcrop --margins '0 0 -390 0' $pag "$pag"_1stCol.pdf
to create a new pdf document, and then
mv `\ls /home/dir | grep '_1stCol'` /home/gmanglano/dir/columns
The problem is that the mv command is failing: it finds the document, but then tries to move the found file FROM the directory where I executed the script, not from where it was found.
This is happening to me somewhat often, and I feel there is a concept I am missing or I am thinking about this the wrong way around.
The error I get is:
mv: cannot stat '1stCol.pdf': No such file or directory
There is, in fact, such a file; it just is not in the directory I launched the script from.
Instead of monkeying with ls and backticks and all that, just use the find command. It's built to find files and then execute a command based on the results:
find /home/dir -name "*_1stCol.pdf" -exec mv {} /home/gmanglano/dir/columns \;
This is finding files in /home/dir that match the name *_1stCol.pdf and then moves them. The {} is the token for the found file.
Don't parse the output of ls: if you simplify the mv command to
mv /home/dir/*_1stCol.pdf /home/gmanglano/dir/columns
then you won't have an issue with being in the wrong directory.
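More generally, to make a script independent of where it is executed, either use absolute paths throughout (as above) or have the script resolve its own location first and work relative to that. A minimal sketch of the latter idiom:
#!/bin/bash
# Resolve the directory this script lives in, then work relative to it,
# so the script behaves the same no matter where it is invoked from.
script_dir=$(cd "$(dirname "$0")" && pwd)
cd "$script_dir" || exit 1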

Bash get specific file with name in all directories and run script

This is my scenario: I have a root folder called A, which contains folders B, C, D, and so on. Each of those folders contains a bash script. How can I find those bash scripts and run them from a single bash script?
A/B/run.sh
A/C/run.sh
A/D/run.sh
With find:
find . -name '*.sh' -type f -exec bash -c '[[ -x "$1" ]] || chmod u+x "$1"; "$1"' _ {} \;
(The quoted '*.sh' keeps the shell from expanding the glob, and passing the filename as a positional parameter instead of substituting {} into the bash -c string keeps unusual filenames from breaking the command.)
For each *.sh file found:
check if file has execute permissions
if not set execute permission for user
execute script
easy :)
eval "$(ls A/*/run.sh)"
ls will return a list of files with paths to your scripts. (You could use find too.)
The " " around the $() make sure the result keeps its newlines.
eval will execute the returned lines as a script.
Mind you, if there are spaces in the names of your scripts, this can be a brittle solution. But if it looks like what you have shown, it should work fine.
Here is the output for an example:
~/ eval "$(ls */run.sh)"
I am running and my name is: one/run.sh
I am running and my name is: two/run.sh
the run.sh scripts are:
echo "I am running and my name is: $0"

Shell Script to update the contents of a folder - 2

I wrote this piece of code this morning.
The idea is: a text file (new.txt) describes a directory structure and the files in it.
Read new.txt, create the same directory structure under a destination directory (here /tmp), and copy the source files to the corresponding destination directories.
Script
clear
DEST_DIR=/tmp
for file in `cat new.txt`
do
mkdir -p $file
touch $file
echo `ls -ltr $file`
cp -rf $file $DEST_DIR
find . -name $file -type f
cp $file $DEST_DIR
done
Contents of new.txt
Test/test1/test1.txt
Test/test2/test2.txt
Test/test3/test3.txt
Test/test4/test4.txt
The issue is, it executes the code and creates the directory structure, but instead of creating the final path components as files, it creates directories named test1.txt, test2.txt, etc. I have no idea why this is happening.
Another question: in Turbo C and C++ there is an option to trace the execution flow. Is there something similar available for Unix, Perl, and shell scripting?
The script creates these directories because you tell it to on the line mkdir -p $file. You have to extract the directory path from your filename. The standard command for this is dirname:
dir=`dirname "$file"`
mkdir -p -- "$dir"
To check the execution flow, add set -x at the top of your script. This causes every line that is executed to be printed to stderr with a "+ " prefix.
You might want to try something like rsync.
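For example, rsync can read the file list directly and recreate the directory structure at the destination (assuming the paths in new.txt are relative to the current directory):
rsync -a --files-from=new.txt . /tmp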

bash script for copying files between directories

I am writing a script to copy *.nzb files to a folder to queue them for download.
Here is what I have:
#!/bin/bash
#This script copies NZB files from Downloads folder to HellaNZB queue folder.
${DOWN}="/home/user/Downloads/"
${QUEUE}="/home/user/.hellanzb/nzb/daemon.queue/"
for a in $(find ${DOWN} -name *.nzb)
do
cp ${a} ${QUEUE}
rm *.nzb
done
It gives me the following error:
HellaNZB.sh: line 5: =/home/user/Downloads/: No such file or directory
HellaNZB.sh: line 6: =/home/user/.hellanzb/nzb/daemon.queue/: No such file or directory
The thing is that those directories exist, and I do have the right to access them.
Any help would be nice.
Please and thank you.
Variable names on the left side of an assignment should be bare.
foo="something"
echo "$foo"
Here are some more improvements to your script:
#!/bin/bash
#This script copies NZB files from Downloads folder to HellaNZB queue folder.
down="/home/myusuf3/Downloads/"
queue="/home/myusuf3/.hellanzb/nzb/daemon.queue/"
find "${down}" -name "*.nzb" | while read -r file
do
mv "${file}" "${queue}"
done
Using while instead of for, and quoting variables that contain filenames, keeps filenames that contain spaces from being interpreted as more than one filename. Removing the rm keeps the loop from repeatedly producing errors and failing to copy any but the first file. The file glob for -name needs to be quoted. Habitually using lowercase variable names reduces the chance of name collisions with shell variables.
If all your files are in one directory (and not in multiple subdirectories) your whole script could be reduced to the following, by the way:
mv /home/myusuf3/Downloads/*.nzb /home/myusuf3/.hellanzb/nzb/daemon.queue/
If you do have files in multiple subdirectories:
find /home/myusuf3/Downloads/ -name "*.nzb" -exec mv -t /home/myusuf3/.hellanzb/nzb/daemon.queue/ {} +
(With -exec ... +, the {} must be the last argument, so GNU mv's -t flag is used to name the destination first.)
As you can see, there's no need for a loop.
The correct syntax is:
DOWN="/home/myusuf3/Downloads/"
QUEUE="/home/myusuf3/.hellanzb/nzb/daemon.queue/"
for a in $(find ${DOWN} -name *.nzb)
# escape the * or it will be expanded in the current directory
# let's just hope no file has blanks in its name
do
cp ${a} ${QUEUE} # ok, although I'd normally add a -p
rm *.nzb # again, this is expanded in the current directory
# when you fix that, it will remove ${a}s before they are copied
done
Why don't you just use rm ${a}?
Why use a combination of cp and rm anyway, instead of mv?
Do you realize all files will end up in the same directory, and files with the same name from different directories will overwrite each other?
What if the cp fails? You'll lose your file.
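A sketch that addresses these points, assuming GNU find and mv (for the -t flag, and -n to avoid silently overwriting files that share a name):
find "${DOWN}" -name '*.nzb' -exec mv -n -t "${QUEUE}" {} +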
