How to separate command-line arguments in bash scripting on Ubuntu? - bash

In my bash script I want to change the file permissions on a particular file called "test.txt", which is located at:
"/var/www/tomcat7/dir1/test.txt"
My question is: given this full path to the file, I want to change the permissions on all the directories along it, i.e. "var", "www", "tomcat7", "dir1", and finally "test.txt".
The file path is passed to the script as a command-line argument (taken from a separate text file), and here is my code:
setFilePermission(){
    ssh ppuser@10.101.5.91 "sudo chmod 777 $1"
}
setFilePermission "$1"
Can anyone help me? Thank You.... :)

#!/bin/bash
setFilePermission(){
    i=$(echo "$1" | awk -F '/' '{print NF}')
    y=$1
    while [[ $i -gt 1 ]]; do
        ssh ppuser@10.101.5.91 "sudo chmod 777 $y"
        y=${y%/*}
        (( i-- ))
    done
}
setFilePermission "your path goes here"
Check if this works for you.
I am still doubtful why one would need such permissions, though.
Please be careful while running something like this, because once you change the permissions it will be very difficult to restore the previous values unless you remember each and every file's original permissions.
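If the per-component SSH round-trips turn out to be slow, a possible variation (a sketch, assuming the same ppuser@10.101.5.91 host and passwordless sudo as above) is to build the list of chmod commands locally and send them in a single ssh call:
#!/bin/bash
# Sketch: change permissions on every component of a path with a single SSH call.
# Assumes the same ppuser@10.101.5.91 host and passwordless sudo as above.
setFilePermission(){
    local p=$1 cmds=""
    while [[ $p == */* ]]; do
        cmds+="sudo chmod 777 '$p'; "
        p=${p%/*}                     # strip the last path component
    done
    ssh ppuser@10.101.5.91 "$cmds"
}
setFilePermission "/var/www/tomcat7/dir1/test.txt"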

Related

How to change a file's permissions through a prompt using bash scripting

I just started to learn how to script and I'm trying to make a loop script which will automatically change the permissions of the requested file and then ask the user whether they want to change another file's permissions. So far this is what I have, although I'm not entirely sure how to make the loop work.
#!/bin/bash
until "$input"=no
do
echo "Enter the name of file to change permissions"
read filename
chmod 777 $filename
echo "$filename permissions has been changed"
echo "Would you like to change the permissions of another file?(yes or no)"
read $input
done
echo "You typed: $input"
Thanks to oguz's answer I solved my problem: use [ "$input" = 'no' ] instead of "$input"=no, quote $filename, and drop the $ from read $input. Or just go read a tutorial on loops and conditional constructs.
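Putting those fixes together, the corrected loop would look something like this (a sketch; the chmod 777 mode is kept from the question even though it is rarely what you actually want):
#!/bin/bash
input=""
until [ "$input" = no ]; do
    echo "Enter the name of the file to change permissions"
    read filename
    chmod 777 "$filename"    # quoted in case the name contains spaces
    echo "$filename permissions have been changed"
    echo "Would you like to change the permissions of another file? (yes or no)"
    read input               # no $ when naming the variable to read into
done
echo "You typed: $input"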

Checking if a Variable File is in another directory

I'm looking to check if a variable file is in another directory, and if it is, stop the script from running any farther. So far I have this:
#! /bin/bash
for file in /directory/of/variable/file/*.cp;
do
test -f /directory/to/be/checked/$file;
echo $?
done
I ran an echo of $file and saw that it includes the full path, which would explain why my test doesn't find the file, but I am at a loss for how to move forward so that I can check.
Any help would be greatly appreciated!
Thanks
I think you want
#! /bin/bash
for file in /directory/of/variable/file/*.cp ; do
    newFile="${file##*/}"
    if test -f /directory/to/be/checked/"$newFile" ; then
        echo "/directory/to/be/checked/$newFile already exists, updating ..."
    else
        echo "/directory/to/be/checked/$newFile not found, copying ..."
    fi
    cp -i "$file" /directory/to/be/checked/"$newFile"
done
Note that you can replace cp -i with mv -i and move the file, leaving no file left behind in /directory/of/variable/file/.
The -i option means interactive: if the file is already there, it will ask you something like overwrite /directory/to/be/checked/"$newFile"? (or similar), to which you must reply y. This will only happen if the file already exists in the new location.
IHTH
The command basename will give you just the file (or directory) without the rest of the path.
#! /bin/bash
for file in /directory/of/variable/file/*.cp; do
    test -f /directory/to/be/checked/"$(basename "$file")"
    echo $?
done

Shell Script that monitors a folder for new files

I'm not a pro in shell scripting, that's why I'm asking here :).
Let's say I have a folder. I need a script that monitors that folder for new files (the file names are not known in advance). When a new file gets copied into that folder, another script should start. Once the second script has processed the file successfully, the file should be deleted.
I hope you can give me some ideas on how to achieve such a script :)
Thank you very much in advance.
Thomas
Try this:
watcher.sh:
#!/bin/bash
if [ -z "$1" ]; then
    echo "You need to specify a dir as argument."
    echo "Usage:"
    echo "$0 <dir>"
    exit 1
fi
while true; do
    for a in "$1"/*; do
        [ -e "$a" ] || continue       # skip when the glob matches nothing
        otherscript "$a" && rm "$a"   # calls otherscript with the file as argument and removes it only if otherscript exited successfully (status 0)
    done
    sleep 2s
done
Don't forget to make it executable
chmod +x ./watcher.sh
call it with:
./watcher.sh <dirname>
Try inotify (http://man7.org/linux/man-pages/man7/inotify.7.html); you may need to install inotify-tools (http://www.ibm.com/developerworks/linux/library/l-ubuntu-inotify/) to use it from the shell.
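For example, a minimal sketch using inotifywait from inotify-tools might look like this (otherscript is the same placeholder processing script as in the polling version above):
#!/bin/bash
# Watch a directory with inotify-tools instead of polling.
# Requires inotify-tools (e.g. sudo apt-get install inotify-tools).
dir=$1
inotifywait -m -e close_write -e moved_to --format '%w%f' "$dir" |
while read -r file; do
    otherscript "$file" && rm "$file"   # delete only after successful processing
done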

Shell Script to load multiple FTP files

I am trying to upload multiple files from one folder to an FTP site and wrote this script:
#!/bin/bash
for i in '/dir/*'
do
if [-f /dir/$i]; then
HOST='x.x.x.x'
USER='username'
PASSWD='password'
DIR=archives
File=$i
ftp -n $HOST << END_SCRIPT
quote USER $USER
quote PASS $PASSWD
ascii
put $FILE
quit
END_SCRIPT
fi
It is giving me following error when I try to execute:
username#host:~/Documents/Python$ ./script.sh
./script.sh: line 22: syntax error: unexpected end of file
I can't seem to get this to work. Any help is much appreciated.
Thanks,
Mayank
It's complaining because your for loop does not have a done marker to indicate the end of the loop. You also need more spaces in your if:
if [ -f "$i" ]; then
Recall that [ is actually a command, and it won't be recognized if it doesn't appear as such.
And... if you single quote your glob (at the for) like that, it won't be expanded. No quotes there, but double quotes when using $i. You probably also don't want to include the /dir/ part when you use $i as it's included in your glob.
If I'm not mistaken, ncftp can take wildcard arguments:
ncftpput -u username -p password x.x.x.x archives /dir/*
If you don't already have it installed, it's likely available in the standard repo for your OS.
First, the literal, fixing-your-script answer:
#!/bin/bash
# no reason to set variables that don't change inside the loop
host='x.x.x.x'
user='username'
password='password'
dir=archives
for i in /dir/*; do # no quotes if you want the wildcard to be expanded!
if [ -f "$i" ]; then # need double quotes and whitespace here!
file=$i
ftp -n "$host" <<END_SCRIPT
quote USER $user
quote PASS $password
ascii
put $file $dir/$file
quit
END_SCRIPT
fi
done
Next, the easy way:
lftp -e 'mput -a *.i' -u "$user,$password" "ftp://$host/"
(yes, lftp expands the wildcard internally, rather than expecting this to be done by the outer shell).
First of all, my apologies for not making myself clear in the question. My actual task was to copy a file from a local folder to an SFTP site and then move the file to an archive folder. Since the SFTP site is hosted by a vendor I cannot use key sharing (vendor limitation). Also, scp would prompt for a password when used in a shell script, so I have to use sshpass. sshpass is in the Ubuntu repo; however, for CentOS it needs to be installed from here.
The current thread and How to run the sftp command with a password from Bash script? gave me a better understanding of how to write the script, and I will share my solution here:
#!/bin/bash
for i in /dir/*; do
    if [ -f "$i" ]; then
        file=$i
        export SSHPASS=password
        sshpass -e sftp -oBatchMode=no -b - user@ftp.com << !
cd foldername/foldername
put $file
bye
!
        mv "$file" /somedir/test
    fi
done
Thanks everyone for all the responses!
--Mayank

bash save last user input value permanently in the script itself

Is it possible to save the last value of a variable entered by the user in the bash script itself, so that I can reuse that value the next time the script is executed?
Eg:
#!/bin/bash
if [ -d "/opt/test" ]; then
echo "Enter path:"
read path
p=$path
else
.....
........
fi
The above script is just a sample example I wanted to give (which may be wrong). Is it possible to save the value of p permanently in the script itself, so that I can use it somewhere later in the script even when the script is re-executed?
EDIT:
I am already using sed to overwrite lines in the script while it executes. This method works, but as said it is not good practice at all. Replacing the lines in the same file, as described in the answer below, is much better than what I am using, which looks like this:
...
....
PATH=""; #This is line no 7
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )";
name="$(basename "$(test -L "$0" && readlink "$0" || echo "$0")")";
...
if [ condition ]
fi
path=$path
sed -i '7s|.*|PATH='$path';|' $DIR/$name;
Something like this should do what was asked:
#!/bin/bash
ENTERED_PATH=""
if [ "$ENTERED_PATH" = "" ]; then
    echo "Enter path"
    read path
    ENTERED_PATH=$path
    sed -i 's|ENTERED_PATH=""|ENTERED_PATH="'"$path"'"|' "$0"
fi
This script will ask the user for a path only if ENTERED_PATH was not previously defined, and it stores the value directly into the current file with the sed line.
Maybe a safer way to do this would be to write a config file somewhere with the data you want to save and source it (. data.saved) at the beginning of your script.
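For instance, a minimal sketch of that config-file approach (the file name ~/.data.saved is just an assumption) could look like this:
#!/bin/bash
# Persist the entered path in a small config file instead of editing the script.
config=~/.data.saved             # assumed location for the saved value

[ -f "$config" ] && . "$config"  # loads ENTERED_PATH if it was saved earlier

if [ -z "$ENTERED_PATH" ]; then
    read -p "Enter path: " ENTERED_PATH
    printf 'ENTERED_PATH=%q\n' "$ENTERED_PATH" > "$config"
fi

echo "Using path: $ENTERED_PATH"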
In the script itself? Yes with sed but it's not advisable.
#!/bin/bash
test='0'
echo "test currently is: $test";
test=`expr $test + 1`
echo "changing test to: $test"
sed -i "s/test='[0-9]*'/test='$test'/" $0
Preferable method:
Try saving the value in a separate file; you can easily do a
myvar=`cat varfile.txt`
And whatever was in the file is now in your variable.
I would suggest using the /tmp/ dir to store the file in.
Another option would be to save the value as an extended attribute attached to the script file. This has many of the same problems as editing the script's contents (permissions issues, weird for multiple users, etc) plus a few of its own (not supported on all filesystems...), but IMHO it's not quite as ugly as rewriting the script itself (a config file really is a better option).
I don't use Linux, but I think the relevant commands would be something like this:
path="$(getfattr --only-values -n "user.saved_path" "${BASH_SOURCE[0]}")"
if [[ -z "$path" ]]; then
read -p "Enter path:" path
setfattr -n "user.saved_path" -v "$path" "${BASH_SOURCE[0]}"
fi
