I am copying files from an FTP server to the file system of my cluster. The script is executed on the cluster.
#!/bin/sh
HOST='0.0.0.0' # Host IP
ftp -i -nv $HOST <<END_SCRIPT
quote USER $FTPUSER
quote PASS $FTPPASS
binary
cd /FTPDIR/path/to/data/
mkdir -p /home/admin/path/to/data/
lcd /home/admin/path/to/data/
I added the mkdir -p /home/admin/path/to/data/ command in order to create the directory on my cluster in case it does not exist. However, the script created a directory named -p in the FTP /FTPDIR/path/to/data/ directory.
What would be the command to create it in the cluster?
The mkdir you are calling here is not the mkdir that exists on your system (or on the target system either); it is the mkdir command provided by the FTP client, and that mkdir does not have a -p option. It also operates on the remote server, which is why a directory named -p showed up under /FTPDIR/path/to/data/.
You have to create each directory one by one, no shortcut.
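If the goal is only to make sure the local target directory exists on the cluster, one option is to run the shell's own mkdir before opening the FTP session. A minimal sketch, reusing the paths and variables from the question (untested):
#!/bin/sh
HOST='0.0.0.0' # Host IP

# Create the local directory with the shell's mkdir, where -p works,
# before the FTP session starts.
mkdir -p /home/admin/path/to/data/

ftp -i -nv $HOST <<END_SCRIPT
quote USER $FTPUSER
quote PASS $FTPPASS
binary
cd /FTPDIR/path/to/data/
lcd /home/admin/path/to/data/
END_SCRIPT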
I have a Jenkins job, which has its own set of build servers. The process I follow is building applications on the Jenkins build server and then using "Send files or execute commands over SSH" to copy my build and deploy it using a shell script.
As part of the deployment, I have quite a few steps to perform, like mkdir, tar -xzvf, etc. I want to execute these deployment steps as a specific user "K". But when I use the sudo su - K command, the Jenkins job fails because I am unable to feed the password to it.
#!/bin/bash
sudo su - K << \EOF
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR 3
tar -xzvf MY_APP.war
EOF
To handle that, I made the build parameterized with a PASSWORD parameter, so that I can use the same PASSWORD in the shell script.
I have tried to use Expect, but it looks like commands like cd and tar -xzvf do not work inside it, and even if they did work they would not be executed as user K since the terminal may expire (please correct me if I am wrong).
export PASSWORD
/usr/bin/expect << EOD
spawn sudo su - K
expect "password for K"
send -- "$PASSWORD\r"
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR 3
tar -xzvf MY_APP.war
EOD
Note: I do not have root access to the servers and hence cannot tweak the host key files. Is there a workaround for this problem?
Even if you get it working, having passwords in scripts or on the command line probably is not ideal from a security standpoint. Two things I would suggest:
1) Use a public SSH key owned by the user on your initiating system as an authorized key on the remote system, to allow logging in as the intended user on the remote system without a password. You should have all you need to do that (no root access required, only access to the users you already use on each system).
2) Set up the "sudoers" file on the remote system so that the user you log in as is allowed to perform the commands you need as the required user. You would need the system administrator's help for that.
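For 1), the key setup from the initiating system might look roughly like this (a sketch only; "kilroy" and "somehost" are placeholder names):
# Generate a key pair for the Jenkins build user, if one does not exist yet
ssh-keygen -t rsa
# Install the public key on the remote system so that login needs no password
ssh-copy-id kilroy@somehost
For 2), the entry an administrator would add with visudo could look something like this (again an assumption; adjust the user names and command paths to your case):
# /etc/sudoers.d/deploy: let kilroy run the deployment commands as K without a password
kilroy ALL=(K) NOPASSWD: /bin/mkdir, /bin/cp, /usr/bin/tar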
If you end up passing the sudo password anyway, it can be piped to sudo -S like so:
SUDO_PASSWORD=TheSudoPassword
...
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S some_root_command"
Later
How can I use this in the 1st snippet?
Write a file:
deploy.sh
#!/bin/sh
cd /DIR1/DIR2
cp ~/MY_APP.war .
mkdir DIR 3
tar -xzvf MY_APP.war
Then:
chmod +x deploy.sh
scp deploy.sh kilroy@somehost:~
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S ./deploy.sh"
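If the deployment steps must run as user K specifically, the same pattern can be combined with sudo's -u option (a sketch, reusing the placeholder names above):
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S -u K ./deploy.sh"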
I have a server that has files on /files/category/specific/*.
On my local machine I have a ~/files directory. What I want to do is pull /files/category/specific/* onto my local machine into ~/files/something/whatever. But something/whatever doesn't exist; I want rsync to create these local directories as well.
How can I do this with rsync?
Try this:
rsync -a --relative user@remote:/files/category/specific/* ~/files
Or you can use ssh to create the directory beforehand, or combine mkdir and rsync like this:
rsync -a --rsync-path="mkdir -p ~/files/whatever/whatever/ && rsync" user@remote:/files/category/specific/* $source
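Since the directories that are missing here are local ones, the simplest variant may be to create them first and then pull; a minimal sketch reusing the paths from the question:
# Create the local target directories, then pull the remote files into them
mkdir -p ~/files/something/whatever
rsync -a user@remote:/files/category/specific/ ~/files/something/whatever/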
I'd like to write a bash script to recursively list all files (with full paths) on an SFTP server and interact with the paths locally afterwards (so the only thing SFTP is needed for is getting the paths). Unfortunately "ls -R" doesn't work there.
Any idea how to do that, with some basic proof of concept, would be really appreciated.
Available commands:
bye Quit sftp
cd path Change remote directory to 'path'
chgrp grp path Change group of file 'path' to 'grp'
chmod mode path Change permissions of file 'path' to 'mode'
chown own path Change owner of file 'path' to 'own'
df [-hi] [path] Display statistics for current directory or
filesystem containing 'path'
exit Quit sftp
get [-Ppr] remote [local] Download file
help Display this help text
lcd path Change local directory to 'path'
lls [ls-options [path]] Display local directory listing
lmkdir path Create local directory
ln [-s] oldpath newpath Link remote file (-s for symlink)
lpwd Print local working directory
ls [-1afhlnrSt] [path] Display remote directory listing
lumask umask Set local umask to 'umask'
mkdir path Create remote directory
progress Toggle display of progress meter
put [-Ppr] local [remote] Upload file
pwd Display remote working directory
quit Quit sftp
rename oldpath newpath Rename remote file
rm path Delete remote file
rmdir path Remove remote directory
symlink oldpath newpath Symlink remote file
version Show SFTP version
!command Execute 'command' in local shell
! Escape to local shell
? Synonym for help
This recursive script does the job:
#!/bin/bash
#
# Recursively list an SFTP tree: run "ls -1l" in each remote directory
# and descend into every entry whose mode starts with "d".
URL=user@XXX.XXX.XXX.XXX
TMPFILE=/tmp/ls.sftp

# Batch file with the single command executed in each remote directory
echo 'ls -1l' > $TMPFILE

function handle_dir {
    echo "====== $1 ========="
    local dir=$1
    # Skip the echoed "sftp> ls -1l" line, then read each listing entry
    sftp -b $TMPFILE "$URL:$dir" | tail -n +2 | while read info; do
        echo "$info"
        # Directories have a leading "d" in the mode field: recurse into them
        if egrep -q '^d' <<< $info; then
            info=$(echo $info)                    # squeeze repeated blanks
            subdir=$(cut -d ' ' -f9- <<< $info)   # field 9 onward is the name
            handle_dir "$dir/$subdir"
        fi
    done
}
handle_dir "."
Fill in URL with your SFTP server's user and address.
I scanned the whole Internet and found a great tool: sshfs. Mount the remote directory tree through SSHFS. SSHFS is a remote filesystem that uses the SFTP protocol to access remote files.
Once you've mounted the filesystem, you can use all the usual commands without having to care that the files are actually remote.
sshfs helps me a lot; it may help you, too.
mkdir localdir
sshfs user#host:/dir localdir
cd localdir
find . -name '*'
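To get back to the original goal of a local list of full paths, you can capture the find output and then detach the mount; a small sketch, assuming a Linux host where fusermount is available:
# Save the paths for later local processing, then unmount
find localdir -type f > remote_paths.txt
fusermount -u localdir   # on Linux; use 'umount localdir' on macOS/BSD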
I'm writing a bash script that creates directories and copies files under Mac OS X. Some of these directories and files need to be placed in folders owned by the system, such as /Library/Audio/Plug-Ins, and so I run the script under sudo. Such a script might look like:
copy-plugins.sh:
#!/usr/bin/env bash
mkdir -p /Library/Audio/Plug-Ins/My-Plugins
cp plugin-A.dylib /Library/Audio/Plug-Ins/My-Plugins
cp plugin-B.dylib /Library/Audio/Plug-Ins/My-Plugins
and called:
$ sudo ./copy-plugins.sh
However, when running under sudo, all created directories and copied files are owned by root.
I would like to be able to run the script under sudo and have the files be owned by my user.
I could call chown after each file/directory is created or copied
copy-plugins-cumbersome.sh:
#!/usr/bin/env bash
mkdir -p /Library/Audio/Plug-Ins/My-Plugins
chown 501:501 /Library/Audio/Plug-Ins/My-Plugins
cp plugin-A.dylib /Library/Audio/Plug-Ins/My-Plugins
chown 501:501 /Library/Audio/Plug-Ins/My-Plugins/plugin-A.dylib
cp plugin-B.dylib /Library/Audio/Plug-Ins/My-Plugins
chown 501:501 /Library/Audio/Plug-Ins/My-Plugins/plugin-B.dylib
but I'm hoping for a more general solution.
As far as I can tell there is no setuid for bash.
Use the cp -p option to preserve file attributes.
Note: this will preserve the owner, group, permissions, and the modification and access times of the files.
As you need sudo to copy into the directories used in the script, you need to be root to copy anything into those directories.
When you use sudo you are root for that particular command or script, so whatever is created or executed will have root ownership unless you specify otherwise.
The possible ways to get around it without changing anything:
1) The one you are using (an explicit chown after each step), and
2) Using -p or -a with cp, or rsync:
rsync -go <source file> <destination file>
-g for preserving group and
-o for preserving ownership.
Note: if you do a chown outside the script, you will have to use sudo chown specifically, since the files you would be touching belong to root.
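If you want something more general than one chown per file, one option (a sketch, not taken from the answers above) is to keep running the script under sudo and hand ownership back in a single pass at the end, using the SUDO_UID/SUDO_GID variables that sudo sets; the 501 from the question is only a fallback here:
#!/usr/bin/env bash
# Runs under sudo as before, then returns ownership to the invoking user.
DEST=/Library/Audio/Plug-Ins/My-Plugins

mkdir -p "$DEST"
cp plugin-A.dylib plugin-B.dylib "$DEST"

# sudo exports the calling user's uid/gid; fall back to 501 if they are unset
chown -R "${SUDO_UID:-501}:${SUDO_GID:-501}" "$DEST"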
I want to write a script that will connect to my server through my id (two layers of authentication) after I run the script.
ssh id@server -> password
After this authentication there is one more authentication: superuser authentication.
username :
password :
My OS is MAC.
It's a lot trickier to get everything right so that this will "just work". The most poorly documented problem is the correct permissions on the login directory, the .ssh directory, and the files in the .ssh directory. This is the script that I use to set everything up correctly:
#!/bin/tcsh -x
#
# sshkeygen.sh
#
# make sure your login directory has the right permissions
chmod 755 ~
# make sure your .ssh dir exists and has the right permissions
mkdir -pv -m 700 ~/.ssh
chmod 0700 ~/.ssh
# remove any existing rsa/dsa keys
rm -f ~/.ssh/id_rsa* ~/.ssh/id_dsa*
# if your ssh keys don't exist
set keyname = "`whoami`_at_`hostname`-id_rsa"
echo "keyname: $keyname"
if( ! -e ~/.ssh/$keyname ) then
# generate them
ssh-keygen -b 1024 -t rsa -f ~/.ssh/$keyname -P ''
endif
cd ~/.ssh
# set the permissions
chmod 0700 *id_rsa
chmod 0644 *id_rsa.pub
# create symbolic links to them for the (default) id_rsa files
ln -sf $keyname id_rsa
ln -sf $keyname.pub id_rsa.pub
I have another script that copies the "`whoami`_at_`hostname`-id_rsa.pub" file onto a shared server (as admin) and then merges it into that system's .ssh/authorized_keys file, which it then copies back onto the local machine. The first time these scripts run, the user is prompted for the admin password to the shared server, but after that everything will "just work".
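The merge step of that second script might look roughly like this (a sketch only; the shared server name and the admin account are assumptions, and the key name matches the one generated above):
#!/bin/tcsh
# Push the public key, then append it to the shared account's authorized_keys
set keyname = "`whoami`_at_`hostname`-id_rsa"
scp ~/.ssh/$keyname.pub admin@sharedserver:/tmp/
ssh admin@sharedserver "cat /tmp/$keyname.pub >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys"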
Oh, and it's "Mac" (not "MAC"). [\pedantic] ;-)