FTP Shell Script mkdir issue - bash

I am using the ftp command from a Bash script to transfer files, but I have a problem when I try to create a directory that is more than two folders deep. Two levels work, but three levels fail. For example:
mkdir foo/bar - this works
mkdir foo/bar/baz - this fails
I have also tried this:
mkdir -p foo/bar/baz - this didn't work either; it ended up creating a directory named '-p'
The shell script I am trying to run is actually quite simple, but as you can see the directory structure is three folders deep and it fails to create the required folders:
#!/bin/bash
DIRECTORY="foo/bar/baz"
FILE="test.pdf"
HOST="testserver"
USER="test"
PASS="test"
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASS
mkdir $DIRECTORY
cd $DIRECTORY
binary
put $FILE
quit
END_SCRIPT

mkdir under ftp is implemented by the FTP server, not by calling /bin/mkdir, so there is no -p option. What you should do is create and enter each level one at a time:
mkdir foo
cd foo
mkdir bar
cd bar
mkdir baz
cd baz
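Applied to the original script, the here document part would look something like this (a sketch reusing the same HOST/USER/PASS/FILE variables; the directory levels are written out by hand):
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASS
mkdir foo
cd foo
mkdir bar
cd bar
mkdir baz
cd baz
binary
put $FILE
quit
END_SCRIPT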
If you still want your original construct, you can also do it like this:
#!/bin/bash
foo() {
    local r a
    r="$1"                  # the path still to process, e.g. foo/bar/baz
    while [[ "$r" != "$a" ]] ; do
        a=${r%%/*}          # first path component
        echo "mkdir $a"
        echo "cd $a"
        r=${r#*/}           # drop the first component; unchanged on the last one
    done
}
DIRECTORY="foo/bar/baz"
FILE="test.pdf"
HOST="testserver"
USER="test"
PASS="test"
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASS
$(foo "$DIRECTORY")
binary
put $FILE
quit
END_SCRIPT
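For DIRECTORY="foo/bar/baz", the $(foo "$DIRECTORY") command substitution expands to these lines inside the here document:
mkdir foo
cd foo
mkdir bar
cd bar
mkdir baz
cd baz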

Try lftp instead:
[dong@idc1-server1 ~]$ lftp sftp://idc1-server2
lftp idc1-server2:~> ls
drwxr-xr-x 3 dong dong 4096 Jun 16 09:11 .
drwxr-xr-x 18 root root 4096 Apr 1 22:25 ..
-rw------- 1 dong dong 116 Jun 16 09:28 .bash_history
-rw-r--r-- 1 dong dong 18 Oct 16 2013 .bash_logout
-rw-r--r-- 1 dong dong 176 Oct 16 2013 .bash_profile
-rw-r--r-- 1 dong dong 124 Oct 16 2013 .bashrc
drwx------ 2 dong dong 4096 Jul 24 2014 .ssh
lftp idc1-server2:~> mkdir a/b/c/d
mkdir: Access failed: No such file (a/b/c/d)
lftp idc1-server2:~> mkdir -p a/b/c/d
mkdir ok, `a/b/c/d' created
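lftp also works non-interactively, so the whole upload can be scripted; a minimal sketch, reusing the HOST/USER/PASS/FILE variables from the question (plain FTP here, though lftp speaks sftp too):
lftp -u "$USER","$PASS" "$HOST" <<END_SCRIPT
mkdir -p foo/bar/baz
cd foo/bar/baz
put $FILE
bye
END_SCRIPT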

Related

Why isn't my docker container respecting my permissions?

Dockerfile:
FROM ubuntu:latest
RUN apt install -y bash
CMD []
build and run:
docker build -t test .
docker run -it test bash
minimal reproduction:
root@8807902e27b4:/# mkdir parent
root@8807902e27b4:/# cd parent
root@8807902e27b4:/parent# mkdir example
root@8807902e27b4:/parent# chmod 000 example
root@8807902e27b4:/parent# ls -la
total 12
drwxr-xr-x 3 root root 4096 Apr 28 19:33 .
drwxr-xr-x 1 root root 4096 Apr 28 19:32 ..
d--------- 2 root root 4096 Apr 28 19:33 example
root@8807902e27b4:/parent# cd example
root@8807902e27b4:/parent/example# echo "test" > test.txt
root@8807902e27b4:/parent/example# chmod 100 test.txt
root@8807902e27b4:/parent/example# cat test.txt
test
root@8807902e27b4:/parent/example# ls -la
total 12
d--------- 2 root root 4096 Apr 28 19:33 .
drwxr-xr-x 3 root root 4096 Apr 28 19:33 ..
---x------ 1 root root 5 Apr 28 19:33 test.txt
In the above example, the cd example command should fail, and even if it doesn't, running cat test.txt should fail. Anyone know what's up?
Here are the same (working) commands run in osx:
beaushinkle@Beaus-MBP ~/p/example-docker> mkdir parent
beaushinkle@Beaus-MBP ~/p/example-docker> cd parent
beaushinkle@Beaus-MBP ~/p/e/parent> mkdir example
beaushinkle@Beaus-MBP ~/p/e/parent> chmod 000 example
beaushinkle@Beaus-MBP ~/p/e/parent> cd example
cd: Permission denied: 'example'
beaushinkle@Beaus-MBP ~/p/e/parent [1]> chmod 777 example
beaushinkle@Beaus-MBP ~/p/e/parent> cd example
beaushinkle@Beaus-MBP ~/p/e/p/example> echo "test" > test.txt
beaushinkle@Beaus-MBP ~/p/e/p/example> chmod 100 test.txt
beaushinkle@Beaus-MBP ~/p/e/p/example> cat test.txt
cat: test.txt: Permission denied
If the prompt is anything to go by, we are logged in as root in the minimal reproduction. Thus we have root privileges and can read and write all files regardless of their mode bits, which is why neither cd nor cat fails. In the macOS session the shell runs as an ordinary user, so the permission checks apply.
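To see the checks actually apply inside the same container, switch to an unprivileged user first (a sketch; demo is a made-up user name):
# still inside the container, as root
useradd -m demo
su - demo -c 'cat /parent/example/test.txt'   # fails: Permission denied (demo cannot traverse mode-000 /parent)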

Unable to switch to root user after ssh into the instance using shell script

I have a scenario where I need to automate the manual build update process via a shell script on multiple VM nodes.
For that, I am trying the sample script below: it first SSHes into the instance and then switches to the root user to perform the further steps, like copying the build to the archives directory under /var, before proceeding with the later steps.
Below is the sample script,
#!/bin/sh
publicKey='/path/to/publickey'
buildVersion='deb9.deb build'
buildPathToStore='/var/cache/apt/archives/'
pathToHomedir='/home'
script="whoami && pwd && ls -la && whoami && mv ${buildVersion} ${buildPathToStore} && find ${buildPathToStore} | grep deb9"
for var in "$@"
do
copyBuildPath="${publicKey} ${buildVersion} ${var}:/home/admin/"
echo "copy build ==>" ${copyBuildPath}
scp -r -i ${copyBuildPath}
ssh -i $publicKey -t $var "sudo su - & ${script}; " # This shall execute all commands as root
done
The console output for the above script looks something like this:
admin //this is the user check
/home/admin
total 48
drwxr-xr-x 6 admin admin 4096 Dec 6 00:28 .
drwxr-xr-x 6 root root 4096 Nov 17 14:07 ..
drwxr-xr-x 3 admin admin 4096 Nov 17 14:00 .ansible
drwx------ 2 admin admin 4096 Nov 23 18:26 .appdata
-rw------- 1 admin admin 5002 Dec 6 17:47 .bash_history
-rw-r--r-- 1 admin admin 220 May 16 2017 .bash_logout
-rw-r--r-- 1 admin admin 3506 Jun 14 2019 .bashrc
-rw-r--r-- 1 admin admin 675 May 16 2017 .profile
drwx------ 4 admin admin 4096 Nov 23 18:26 .registry
drwx------ 2 admin admin 4096 Jun 21 2019 .ssh
-rw-r--r-- 1 admin admin 0 Dec 6 19:42 testFile.txt
-rw------- 1 admin admin 2236 Jun 21 2019 .viminfo
admin
If I use sudo su -c and remove the &, like this:
ssh -i $publicKey -t $var "sudo su -c ${script}; "
then whoami returns root once, but the working directory still prints as /home/admin instead of /root,
and the subsequent commands still run as the admin user rather than root. So the admin user does not have the privileges to move the build to the archives directory and install it.
With & I wanted to ensure that the further steps are done in the background.
Not sure how to proceed with this; good suggestions are most welcome right now :)
"sudo su - & ${script}; "
expands to:
sudo su - & whoami && pwd && ...
First, sudo su - is run in the background. Then the command chain whoami && pwd && ... is executed by your normal (non-root) shell.
sudo su -c ${script};
expands to:
sudo su -c whoami && pwd && ...
So first sudo su -c whoami is executed, which runs whoami as root. Then, if that command succeeds, pwd is executed, as the normal user, and so on for the rest of the chain.
It is genuinely hard to correctly pass commands to execute on the remote side using ssh, and harder still with sudo su: the command is word-split three times (or is it twice?), once by ssh, then by the remote shell, then by the shell run by sudo su.
If you do not need interactive communication, it's best to use a here document with the shell's -s option, something along these lines (untested):
# DO NOT store commands to use in a variable.
# or if you do and you know what you are doing, properly quote it (printf "%q ") and run it via eval
script() {
    set -euo pipefail
    whoami
    pwd
    ls -la
    whoami
    mv "$buildVersion" "$buildPathToStore"
    find "$buildPathToStore" | grep deb9
}

ssh ... "sudo bash -s" <<EOF
echo "Yay! anything here!"
echo "Note that the here document delimiter is not quoted!"
$(
    # safely import context to work with
    # note how the command substitution is executed on the host side
    declare -f script
    # pass variables too!
    declare -p buildVersion buildPathToStore
)
script
EOF
When you use su alone it keeps you in your current directory; if you use su - it simulates a full root login.
You should write: su - root -c "${script}"
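Whichever variant you use, the command string has to reach su as a single quoted argument; a minimal sketch, assuming the command contains no single quotes:
script='whoami && pwd && ls -la'
ssh -i "$publicKey" -t "$var" "sudo su - root -c '$script'"   # whoami prints root, pwd prints /root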

mkdir fails with directory exists after bash test if directory exists fails

I'm building a GitLab CI pipeline and trying to create a directory if it doesn't exist.
Can anybody tell me what I'm doing wrong here?
$ if [ ! -d aws ]
$ then
$ mkdir aws
mkdir: cannot create directory ‘aws’: File exists
ERROR: Job failed: exit code 1
The relevant part of the gitlab-ci.yml:
script:
  - export
  - ls -al
  - if [ ! -d aws ]
  - then
  - mkdir aws
  - fi
$ ls -al
total 128
drwxrwxrwx 16 root root 4096 Sep 17 12:07 .
drwxrwxrwx 6 root root 4096 Sep 17 12:07 ..
drwxrwxrwx 2 root root 4096 Sep 17 12:07 aws
I now just used mkdir -p and removed the test
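With mkdir -p the whole conditional can simply be dropped; a sketch of the same job section:
script:
  - export
  - ls -al
  - mkdir -p aws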
You have something with the name aws, which might be a symbolic link, a hard link, a regular file, etc.
First delete or move that file somewhere else, then try again.
You can also test with -e (returns true if the file exists regardless of type).

mkdir doesn't do path expansion

So I have folder aa
$ mkdir aa
and path expansion for ls command works like this:
$ ls -la a*
total 0
drwxr-xr-x 1 a a 0 Mar 29 08:41 ./
drwxr-xr-x 1 a a 0 Dec 31 1979 ../
$ ls -la a?
total 0
drwxr-xr-x 1 a a 0 Mar 29 08:41 ./
drwxr-xr-x 1 a a 0 Dec 31 1979 ../
But "the same" for mkdir shows an error:
$ mkdir a*/bb
mkdir: cannot create directory 'a*/bb': No such file or directory
$ mkdir a?/bb
mkdir: cannot create directory 'a?/bb': No such file or directory
Where can I read about why this difference in behavior happens, and is there a simple trick to make mkdir "smarter", so it behaves like ls does?
This does not work because wildcard expansion is done by the shell before the argument is passed to mkdir. bash tries to expand a*/bb, finds no existing path matching the whole pattern, and (by default) passes the pattern through literally, so mkdir is asked to create bb under a directory literally named a*, which does not exist. You can try e.g.
echo a*/bb
or, as you did before,
ls -la a*/bb
echo just prints the unexpanded pattern, and ls fails with a similar "No such file or directory" error.
Now I realize how stupid that question was. Probably I wanted something like this for expansion to work:
mkdir "$(ls -d a?)"/bb
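Another way to keep the expansion explicit is to let the shell expand the glob into an array first (a sketch, assuming the pattern matches exactly one directory):
dirs=(a*)                 # the shell expands the glob here
mkdir -- "${dirs[0]}/bb"  # mkdir then receives a literal, existing parent path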
Try:
mkdir -p a*/aa
mkdir -p a?/aa

Why is my shell script not working with cron?

I have two shell scripts.
The working one:
$ cat script_nas.sh
#!/bin/bash
for i in `cat nas_filers`
do echo $i
touch /mnt/config-backup/nas_backup/$i.auditlog.0.$(date '+%Y%m%d')
ssh -o ConnectTimeout=5 root@$i rdfile /etc/configs/config_saved > /mnt/config-backup/nas_backup/$i.auditlog.0.$(date '+%Y%m%d')
done
And the other, not working, one:
$ cat script_san.sh
#!/bin/bash
for i in `cat san_filers`
do echo $i
touch /mnt/config-backup/san_backup/$i.auditlog.0.$(date '+%Y%m%d')
ssh -o ConnectTimeout=5 root@$i rdfile /etc/configs/config_saved > /mnt/config-backup/san_backup/$i.auditlog.0.$(date '+%Y%m%d')
done
Cron entries are:
$ crontab -l
# Filers config save script
0 0 * * * /mnt/config-backup/script_san.sh
0 0 * * * /mnt/config-backup/script_nas.sh
0 0 * * * /mnt/config-backup/delete_file
Script script_san.sh is not working. The outputs look like this:
SAN backup directory:
san_backup]# ls -lart alln01-na-exch01a.cisco.com.auditlog*
-rw-r--r-- 1 root root 210083 Mar 1 22:24 alln01-na-exch01a.auditlog.0.20150301
[root@XXXXX san_backup]# pwd
/mnt/config-backup/san_backup
NAS backup directory:
nas_backup]# ls -lart rcdn9-25f-filer43b.cisco.com.auditlog*
-rw-r--r-- 1 root root 278730 Feb 26 00:06 rcdn9-25f-filer43b.cisco.com.auditlog.0.20150226
-rw-r--r-- 1 root root 281612 Feb 27 00:17 rcdn9-25f-filer43b.cisco.com.auditlog.0.20150227
-rw-r--r-- 1 root root 284105 Feb 28 00:02 rcdn9-25f-filer43b.cisco.com.auditlog.0.20150228
-rw-r--r-- 1 root root 284101 Mar 1 00:02 rcdn9-25f-filer43b.cisco.com.auditlog.0.20150301
[root@XXXXXXX nas_backup]#
From the cron logs I can see that cron is executing both scripts, but the output for script_san.sh is not appearing.
In my experience, the most common reason a script works manually but not from crontab is that login scripts are not run. Try adding something like source ~/.bash_profile at the beginning of the script, or on the first line of the cron file. Did you try (for debugging) running the script with the at command?
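A minimal sketch of that change, applied to the failing script (paths taken from the question; adjust as needed):
#!/bin/bash
source ~/.bash_profile   # cron starts with a minimal environment; pick up PATH, etc.
cd /mnt/config-backup || exit 1

for i in $(cat san_filers)
do echo "$i"
   touch "/mnt/config-backup/san_backup/$i.auditlog.0.$(date '+%Y%m%d')"
   ssh -o ConnectTimeout=5 "root@$i" rdfile /etc/configs/config_saved > "/mnt/config-backup/san_backup/$i.auditlog.0.$(date '+%Y%m%d')"
done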
