script for terminal - macos

I want to write a script that connects to my server with my ID (two layers of authentication) after I run the script:
ssh id@server -> password
After that authentication there is one more, a superuser authentication:
username:
password:
My OS is MAC.

It's a lot trickier to get everything right so that this will "just work". The most poorly documented problem is getting the correct protections on the login directory, the .ssh directory, and the files in the .ssh directory. This is the script that I use to set everything up correctly:
#!/bin/tcsh -x
#
# sshkeygen.sh
#
# make sure your login directory has the right permissions
chmod 755 ~
# make sure your .ssh dir exists and has the right permissions
mkdir -pv -m 700 ~/.ssh
chmod 0700 ~/.ssh
# remove any existing rsa/dsa keys
rm -f ~/.ssh/id_rsa* ~/.ssh/id_dsa*
# if your ssh keys don't exist
set keyname = "`whoami`_at_`hostname`-id_rsa"
echo "keyname: $keyname"
if( ! -e ~/.ssh/$keyname ) then
# generate them
ssh-keygen -b 1024 -t rsa -f ~/.ssh/$keyname -P ''
endif
cd ~/.ssh
# set the permissions
chmod 0700 *id_rsa
chmod 0644 *id_rsa.pub
# create symbolic links to them for the (default) id_rsa files
ln -sf $keyname id_rsa
ln -sf $keyname.pub id_rsa.pub
I have another script that copies the "whoami_at_hostname-id_rsa.pub" file onto a shared server (as admin) and then merges it into that system's .ssh/authorized_keys file, which it then copies back onto the local machine. The first time these scripts run, the user is prompted for the admin password to the shared server, but after that everything will "just work".
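That second script is not reproduced here, but a minimal sketch of the idea (with "admin" and "sharedserver" as placeholder names, reusing the $keyname convention from the script above) might look like:
#!/bin/sh
keyname="$(whoami)_at_$(hostname)-id_rsa"
# push the public key over and merge it into authorized_keys, deduplicated
scp ~/.ssh/$keyname.pub admin@sharedserver:/tmp/
ssh admin@sharedserver "cat /tmp/$keyname.pub >> ~/.ssh/authorized_keys; sort -u -o ~/.ssh/authorized_keys ~/.ssh/authorized_keys"
# pull the merged file back onto the local machine
scp admin@sharedserver:.ssh/authorized_keys ~/.ssh/authorized_keys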
Oh, and it's "Mac" (not "MAC"). [/pedantic] ;-)

Related

Create local directory when dealing with remote servers

I am copying files from an FTP server to the file system of my cluster. The script is executed on the cluster.
#!/bin/sh
HOST='0.0.0.0' # Host IP
ftp -i -nv $HOST <<END_SCRIPT
quote USER $FTPUSER
quote PASS $FTPPASS
binary
cd /FTPDIR/path/to/data/
mkdir -p /home/admin/path/to/data/
lcd /home/admin/path/to/data/
I added the mkdir -p /home/admin/path/to/data/ command in order to create the directory on my cluster in case it does not exist. However, the script created a directory named -p in the FTP /FTPDIR/path/to/data/ dir.
What would be the command to create it in the cluster?
The mkdir you are calling here is not the mkdir that exists on your system (or on the target system either); it is the mkdir that FTP provides, and that mkdir does not have a -p option.
You have to create each remote directory one by one; there is no shortcut.
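For the local directory, one workable sketch (based on the question's script; the actual transfer commands are omitted here) is to run mkdir -p in the cluster's shell before the FTP session starts:
#!/bin/sh
HOST='0.0.0.0' # Host IP
# the system's mkdir, run locally, outside the FTP here-document
mkdir -p /home/admin/path/to/data/
ftp -i -nv $HOST <<END_SCRIPT
quote USER $FTPUSER
quote PASS $FTPPASS
binary
cd /FTPDIR/path/to/data/
lcd /home/admin/path/to/data/
quit
END_SCRIPT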

Setting up multiple GitHub accounts to use with Visual Studio in Windows

I tried following the steps in https://code.tutsplus.com/tutorials/quick-tip-how-to-work-with-github-and-multiple-accounts--net-22574 but it fails at the very first step. I am using Windows 10.
I ran the ssh-keygen command in Git Bash but got the following error:
My user name has a space in it, so how do I deal with this to set up my GitHub accounts? Thanks.
With the latest version of Git, I recommend adding -m PEM and, in your case, specifying the target file path:
cd /c/Users/Ab*
mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -t rsa -P "" -m PEM -f ./id_rsa
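If the glob feels fragile, quoting the path also copes with the space in the user name ("Ab Cd" below is a made-up example):
cd "/c/Users/Ab Cd"
mkdir .ssh && chmod 700 .ssh && cd .ssh
ssh-keygen -t rsa -P "" -m PEM -f ./id_rsa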

Providing password using a variable to become a sudo user in Jenkins

I have a Jenkins job which has its own set of build servers. The process I follow is to build the application on the Jenkins build server and then use "send files or execute commands over ssh" to copy the build and deploy it using a shell script.
As part of the deployment commands, I have quite a few steps to perform, like mkdir, tar -xzvf, etc. I want to execute these deployment steps as a specific user, "K". But when I type the sudo su - K command, the Jenkins job fails because I am unable to feed the password to it.
#!/bin/bash
sudo su - K << \EOF
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR3
tar -xzvf MY_APP.war
EOF
To handle that, I used a PASSWORD parameter and made the build parameterized, so that I can use the same PASSWORD in the shell script.
I have tried to use Expect, but it looks like commands like cd and tar -xzvf do not work inside it, and even if they did, they would not be executed as the user K since the terminal may expire (please correct me if I am wrong).
export PASSWORD
/usr/bin/expect << EOD
spawn sudo su - K
expect "password for K"
send -- "$PASSWORD"
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR3
tar -xzvf MY_APP.war
EOD
Note: I do not have root access to the servers and hence cannot tweak the host key files. Is there a workaround for this problem?
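For reference, the reason cd and tar do nothing in that attempt is that they sit in the Expect input as Expect statements instead of being sent to the spawned shell. A corrected sketch (the prompt patterns and the exported PASSWORD variable are assumptions) would send them explicitly:
#!/usr/bin/expect -f
# Sketch: drive the "sudo su - K" session by sending shell commands to it.
spawn sudo su - K
expect "assword"
send -- "$env(PASSWORD)\r"
expect "$ "
send -- "cd /DIR1/DIR2 && cp ~/MY_APP.war . && mkdir DIR3 && tar -xzvf MY_APP.war\r"
expect "$ "
send -- "exit\r"
expect eof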
Even if you get it working, having passwords in scripts or on the command line is probably not ideal from a security standpoint. Two things I would suggest (see the sketch after this list):
1) Use a public SSH key owned by the user on your initiating system as an authorized key on the remote system, to allow logging in as the intended user without a password. You should have all you need to do that (no root access required, only access to the users you already use on each system).
2) Set up the "sudoers" file on the remote system so that the user you log in as is allowed to perform the commands you need as the required user. You will need the system administrator's help for that.
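A sketch of both suggestions (host names and the sudoers line are assumptions; the sudoers entry must be added by an administrator with visudo):
# 1) One-time key setup on the initiating (Jenkins) machine, so you
#    can log in directly as K with no password afterwards:
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
ssh-copy-id K@somehost        # prompts for K's password once
# 2) Or an example sudoers entry letting your login user run the
#    deploy script as K without a password:
# jenkins ALL=(K) NOPASSWD: /home/jenkins/deploy.sh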
Like so:
SUDO_PASSWORD=TheSudoPassword
...
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S some_root_command"
Later:
How can I use this in the first snippet?
Write a file:
deploy.sh
#!/bin/sh
cd /DIR1/DIR2
cp ~/MY_APP.war .
mkdir DIR3
tar -xzvf MY_APP.war
Then:
chmod +x deploy.sh
scp deploy.sh kilroy@somehost:~
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S ./deploy.sh"
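For what it's worth, the -S flag makes sudo read the password from standard input instead of the terminal, which is what makes the echo pipe work here.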

Default user for files and directories created in bash under sudo

I'm writing a bash script that creates directories and copies files under Mac OS X. Some of these directories and files need to be placed in folders owned by the system, such as /Library/Audio/Plug-Ins, so I run the script under sudo. Such a script might look like:
copy-plugins.sh:
#!/usr/bin/env bash
mkdir -p /Library/Audio/Plug-Ins/My-Plugins
cp plugin-A.dylib /Library/Audio/Plug-Ins/My-Plugins
cp plugin-B.dylib /Library/Audio/Plug-Ins/My-Plugins
and called:
$ sudo ./copy-plugins.sh
However, when running under sudo, all created directories and copied files are owned by root.
I would like to be able to run the script under sudo and have the files be owned by my user.
I could call chown after each file/directory is created or copied:
copy-plugins-cumbersome.sh:
#!/usr/bin/env bash
mkdir -p /Library/Audio/Plug-Ins/My-Plugins
chown 501:501 /Library/Audio/Plug-Ins/My-Plugins
cp plugin-A.dylib /Library/Audio/Plug-Ins/My-Plugins
chown 501:501 /Library/Audio/Plug-Ins/My-Plugins/plugin-A.dylib
cp plugin-B.dylib /Library/Audio/Plug-Ins/My-Plugins
chown 501:501 /Library/Audio/Plug-Ins/My-Plugins/plugin-B.dylib
but I'm hoping for a more general solution.
As far as I can tell, there is no setuid for bash.
Use the cp -p option to preserve file attributes.
Note this will preserve the user, the group permissions, and the modification and access times of the files.
Since you need sudo to copy into the directories the script targets, you have to be root to copy anything into those directories.
When you do sudo you are root for that particular command or script, so whatever is created or executed will have root permissions, unless you specify otherwise.
The possible ways around it without changing anything else:
1) The one you are using,
2) Using -p or -a with cp, or
3) rsync -go <source file> <destination file>
where -g preserves the group and -o preserves ownership.
Note: if you do a chown outside the script, you will specifically have to do sudo chown, since the files you would be touching belong to root.
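A middle ground, assuming the script is always invoked via sudo (which sets $SUDO_USER to the name of the invoking user), is a single chown -R at the end:
#!/usr/bin/env bash
mkdir -p /Library/Audio/Plug-Ins/My-Plugins
cp plugin-A.dylib plugin-B.dylib /Library/Audio/Plug-Ins/My-Plugins
# sudo sets SUDO_USER; hand everything back to that user in one go
chown -R "$SUDO_USER" /Library/Audio/Plug-Ins/My-Plugins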

SCP File command as non-root user on the server

I have some files to upload. Usually, to edit anything while logged in on the server, I must precede the command with sudo. That is known.
How do I send a file, then, as "admin" instead of "root", given that I have disabled root login?
scp path\to\file admin@myaddress.com:/var/www/sitename/public/path/
PERMISSION DENIED
In my opinion, you should either give permissions to the admin user, or scp your file to /tmp/ and then sudo mv /tmp/yourfile /var/www/sitename/public/path/.
There is no sudo option when using the scp command from local to server.
Each user has upload permission to its own folder in the home directory, e.g. /home/xxxuser, so use it as below:
scp file_source_here xxxuser@yourserver:/home/xxxuser/
Now you can move the file from this folder to your destination.
I suggest these two commands, as they work in a bash script.
Move the file to /tmp as suggested:
scp path\to\file admin@myaddress.com:/tmp
Assuming the admin user can do sudo, the ssh option -t allows you to run sudo commands:
ssh -t admin@myaddress.com 'sudo chown root:root /tmp/file && sudo mv /tmp/file /var/www/sitename/public/path/'
