File ownership not changing in bash script

I'm trying to run this script, which basically copies an uploaded file to another directory. When I run it, the file gets copied fine, but the ownership of the file does not get changed to sales1upload.dba as I expected, and the script produces the following error:
chown: cannot access `test1.txt': No such file or directory
#!/bin/bash
BASE_DIR="/home/sales1upload/upload"
NEW_BASE_DIR="/bbc/prod/today"
current_time=$(date "+%Y.%m.%d-%H.%M.%S")
for file in $(ls ${BASE_DIR});
do
    filename=${file}
    new_filename=$filename.$current_time
    # set user permissions as desired
    chown sales1upload.dba "$filename"
    cp -prf ${BASE_DIR}/${filename} ${NEW_BASE_DIR}/"moved_files"/$new_filename
    cp -prf ${BASE_DIR}/${filename} ${NEW_BASE_DIR}
    rm ${BASE_DIR}/${filename}
done
Where am I going wrong with the file ownership in the script?

My quick guess: you're not running this from your base directory, so you cannot reference the file without prefixing it with the base in the chown argument. Change it to:
chown sales1upload.dba "${BASE_DIR}/${filename}"
I'd like to add that though mine is the straightforward fix for your error, getting rid of that ls, as the other answers suggest, is the way to go here.

You are asking ls to return a list of files in a directory, but they exist relative to that directory, not relative to the current directory.
As pointed out in comments, you should not be using ls for this at all. Fixing the ls to a simple wildcard will also incidentally solve your problem, but now you need to refactor the body of the loop to cope with a full path instead of just a plain file name. (You were already doing the opposite in a couple of places, so this should have been a simple bug to troubleshoot yourself.)
for file in "$BASE_DIR"/*; do
    filename=$(basename "$file")
    new_filename=$filename.$current_time
    chown sales1upload.dba "$file"
    cp -prf "$file" "$NEW_BASE_DIR/moved_files/$new_filename"
    cp -prf "$file" "$NEW_BASE_DIR"
    rm "$file"
done

Find files with find and not with ls. If you use find, you get the correct path. In your example you iterate over relative file names, not absolute paths.
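A minimal sketch of that idea (the -maxdepth option assumes GNU find; it stops find from descending into subdirectories):
# chown every regular file directly under $BASE_DIR; find passes the
# full path to chown, so the current working directory no longer matters
find "$BASE_DIR" -maxdepth 1 -type f -exec chown sales1upload.dba {} \;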

Related

Deleting specific files in a directory using bash

I have a txt file with a list of files (approximately 500), for example:
file_0_hard.msOut
file_1_hard.msOut
file_10_hard.msOut
.
.
.
file_1000_hard.msOut
I want to delete all files whose names are not in the txt file. All of these files are in the same directory. How can I do this using bash, reading the text file and then deleting every file in the directory that is not listed in it? Help would be appreciated.
Along the lines of user1934428's suggestion:
There is something to be said for that solution. But since we have Linux at our disposal, with (I hope) a capable filesystem in use, we can make hardlinks; the only requirement is that the destination is on the same filesystem.
So along those lines:
make a directory to store the files you want to keep;
hardlink each file into it (ln {file} {target}): this costs no extra disk space, since it only stores the inode number in the new directory entry;
remove all files;
move the files back to their origin.
And actually this would be about the same as:
mv {files} {save spot}
remove all files
mv {save spot}/{files} back
which does pretty much the same thing. Then again, it is a nice way to learn about the power of a hardlink.
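A minimal sketch of the hardlink variant, assuming the list is in list.txt (one filename per line) and using keep/ as an illustrative name for the holding directory:
mkdir keep
# hardlinking adds a directory entry, not a second copy of the data
while IFS= read -r f; do
    ln "$f" keep/
done < list.txt
rm -- *.msOut        # delete everything matching the pattern
mv keep/* .          # restore the kept files
rmdir keep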
You may try this:
cd path/dir
for f in *; do
    if ! grep -Fxq "$f" pathToFile/file.txt; then
        rm -r "$f"
    else
        printf "exists -- %s\n" "$f"
    fi
done
In case you are wondering (as I did) what -Fxq means in plain English:
F: Affects how PATTERN is interpreted (fixed string instead of a regex)
x: Match whole line
q: Shhhhh... minimal printing
Assuming the directory in question is mydir:
set -e
cd mydir
tmpdir=/tmp/x$$    # adapt this to your taste
mkdir "$tmpdir"    # mv needs the target directory to exist
mv $(<list.txt) "$tmpdir"
cd ..
rm -r mydir
mkdir mydir
mv "$tmpdir"/* mydir
rm -r "$tmpdir"
Basically, instead of deleting the files you want to keep, you save them, then delete everything, and then restore them. For your case, this is probably faster than doing it the other way around.
UPDATE:
As Michiel commented, it is advisable that you place your tmpdir in the same file system as mydir.
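A sketch of one way to do that with GNU mktemp (the -d/-p options create a directory under a given parent; the name is illustrative):
# create the holding directory next to mydir, i.e. on the same
# filesystem, so each mv is a cheap rename instead of a copy
tmpdir=$(mktemp -d -p . keep.XXXXXX)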

How to make a script independent of where it is executed

I am running into the problem of commands failing because I expect them to be executed in a particular directory, and that is not the case.
For example I want to do:
pdfcrop --margins '0 0 -390 0' $pag "$pag"_1stCol.pdf
to create a new pdf document, and then
mv `\ls /home/dir | grep '_1stCol'` /home/gmanglano/dir/columns
The problem is that the mv command is failing: although it finds the document, it tries to move the found file FROM the directory where I executed the script, not from where it was found.
This is happening to me somewhat often, and I feel there is a concept I am missing, or I am thinking about this the wrong way around.
The error I get is:
mv: cannot stat '1stCol.pdf': No such file or directory
There is, in fact, said file; it just is not in the directory I launched the script from.
Instead of monkeying with ls and backticks and all that, just use the find command. It's built to find files and then execute a command based on the results of that find:
find /home/dir -name "*_1stCol.pdf" -exec mv {} /home/gmanglano/dir/columns \;
This is finding files in /home/dir that match the name *_1stCol.pdf and then moves them. The {} is the token for the found file.
Don't parse the output of ls: if you simplify the mv command to
mv /home/dir/*_1stCol.pdf /home/gmanglano/dir/columns
then you won't have an issue with being in the wrong directory.

How to refer to the files in subdirectory in shell program?

I have a script called idk.sh at the root of a folder called autograder.
I also have a subdirectory in autograder called hw1 which contains some .sh files. I tried to print out each file's name and contents, but I failed; I tried /hw1, /hw1/, and /hw1/* and failed each time. I don't really understand why I can't fetch the files, and I hope someone can answer me, since I looked on the web and found that the approach should be /hw1/*. Thank you.
#!/bin/sh
for file in /hw1/*
do
    echo $file
    if [ -f $file ]
    then
        cat $file
        echo $file
    fi
done
I would simply do a find to achieve this:
find /hw/ -type f -print -exec cat {} \;
A directory path starting with / means an absolute path, that is, a path from the root of the filesystem. Relative paths start with any character other than / (and \0, but that's a technicality). You'll also want to use a reference to the directory of the script, to be able to run the script from other directories.
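For example, a minimal sketch in POSIX sh, assuming hw1/ sits next to the script:
#!/bin/sh
# resolve the directory the script lives in, so the relative path
# hw1/ works no matter where the script is run from
script_dir=$(dirname -- "$0")
for file in "$script_dir"/hw1/*; do
    if [ -f "$file" ]; then
        echo "$file"
        cat "$file"
    fi
done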
See also:
How do I determine the location of my script?
Bash Pitfalls
Linux Filesystem Tree Overview

BASH: Copy all files and directories into another directory in the same parent directory

I'm trying to make a simple script that copies all of my $HOME into another folder in $HOME called Backup/. This includes all hidden files and folders, and excludes Backup/ itself. What I have right now for the copying part is the following:
shopt -s dotglob
for file in $HOME/*
do
    cp -r $file $HOME/Backup/
done
Bash tells me that it cannot copy Backup/ into itself. However, when I check the contents of $HOME/Backup/ I see that $HOME/Backup/Backup/ exists.
The copy of Backup/ inside itself is useless. How can I get bash to copy over all the folders except Backup/? I tried using extglob with cp -r $HOME/!(Backup)/, but it didn't copy over the hidden files that I need.
Try rsync; you can exclude files/directories.
This is a good reference:
http://www.maclife.com/article/columns/terminal_101_using_rsync_locally
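A minimal sketch of that approach (the leading / in the exclude pattern anchors it to the source directory, so only the top-level Backup/ is skipped):
# copy everything in $HOME, hidden files included, except Backup/ itself
rsync -a --exclude='/Backup' "$HOME/" "$HOME/Backup/"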
Hugo,
A script like this is good, but you could try this:
cp -r * Backup/
cp -r .* Backup/
Another tool used for backups is tar, which can compress your backup to save disk space.
Also note that * does not match hidden . files, and beware that .* also matches . and .., which the second cp will try to copy as well.
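A minimal sketch with tar (GNU tar assumed; the archive name is illustrative):
# archive $HOME into a compressed tarball inside Backup/, excluding
# Backup/ itself so the archive does not try to include itself
tar --exclude='./Backup' -czf "$HOME/Backup/home-backup.tar.gz" -C "$HOME" .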
I agree that using rsync would be a better solution, but there is an easy way to skip a directory in bash:
for file in "$HOME/"*
do
    [[ $file = $HOME/Backup ]] && continue
    cp -r "$file" "$HOME/Backup/"
done
This doesn't answer your question directly (the other answers already did that), but try cp -ua when you want to use cp to make a backup. This recurses into directories, copies symlinks rather than following them, preserves permissions, and only copies a file if it is newer than the copy at the destination.
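For instance, a sketch of an incremental home backup with GNU cp (the paths are illustrative):
# -u: skip files that are not newer than the destination copy
# -a: recurse, keep symlinks as symlinks, preserve mode/owner/timestamps
cp -ua "$HOME/Documents" "$HOME/Backup/"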

Remove File on One Level of Directories only in KSH

I have an rm command which clears all the files in a particular directory.
#!/usr/bin/ksh
cd /asd/ded/ses/ddd/rty/leg/
rm *.sas7bdat
rm p_bt*
Unfortunately it clears all the files under this directory, but I just want it to clear the files in the parent directory itself, i.e. /asd/ded/ses/ddd/rty/leg/, and not in /asd/ded/ses/ddd/rty/leg/21_11, which is a child directory inside it.
I know a single-level rm is possible in bash. Does it change for ksh, and if yes, how?
LonelySoul,
Chepner is correct. By default, rm in ksh only removes the files in the current directory. You can remove files from lower directories (recursively) by adding the -r option.
If you are observing different behavior, you may have an alias set up somewhere in your profile. Try entering whence rm to see if there is an alias causing the unexpected behavior.
Examples:
>pwd
/tmp
>touch abc.txt
>mkdir ced
>touch ced/abc.txt
>rm abc.txt    (removes abc.txt in /tmp, but leaves the file in directory ced)
>whence rm
rm -f
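If whence does reveal an alias, a sketch of how to bypass it in ksh (the patterns are taken from the question's script):
# prefixing with "command" keeps the rm alias from applying, so the
# real rm runs; it only removes files in the current directory,
# never in subdirectories like 21_11
command rm -- *.sas7bdat p_bt*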
