Permission error on modifying root-owned authorized_keys file - bash

I need to exchange public keys between two systems, A and B.
These are the steps I am following:
Copy the content of id_rsa.pub from the /root/.ssh directory and save it in the variable 'key'.
SSH to B as the ubuntu user: ssh -i key_file ubuntu@B
Switch to root with sudo su.
Append the variable $key to /root/.ssh/authorized_keys.
But the file authorized_keys is owned by root, hence I get the permission error.
I cannot directly connect to system B as root. The only way is to connect as ubuntu and change to root.
I tried the following shell script:
# Get all the Ips from the source file
sudo grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $1 | sort -u > /tmp/list_of_servers.txt
# Get the public key
pubkey=$(sudo cat /root/.ssh/id_rsa.pub)
# For each server
while read ip;
do
(echo "$ip"
# ssh to the server
ssh -i $2 $3@$ip
# append key to authorized_keys file
sudo -c "echo $pubkey >> /root/.ssh/authorized_keys" root
echo "done $ip" )
done < /tmp/list_of_servers.txt
But it didn't work; it's still giving me a permission error.
Can someone help me with the last step?

A fully paranoid approach to the mechanics of the SSH connection might be something like this:
# generate a shell-escaped version of the public key (spaces, wildcards, etc)
printf -v pubkey_q '%q' "$pubkey"
# generate a shell command using that quoted form
cmd="echo $pubkey_q >>/root/.ssh/authorized_keys"
# generate a shell-quoted sudo command invoking the above in a shell
printf -v cmd_q '%q ' sudo bash -c "$cmd"
# ...and execute it on the other end of a ssh connection.
ssh -i "$2" "$3#$ip" "$cmd_q"
printf %q is a bash extension which escapes a string in such a way that parsing it with a shell -- whether in a string that's eval'd, passed to ssh with bash as the remote shell, or passed to bash -c -- evaluates back to the original data. (For regular whitespace its output is safe for sh -c as well, but for any content where bash prefers $'' to escape nonprintable characters, the output may not be POSIX-compliant.)
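Wired into the original loop, a minimal sketch might look like this (assuming the arguments keep the meaning from the question: $1 is the source file of IPs, $2 the key file, $3 the remote user; -n is added so ssh does not swallow the rest of the IP list from the loop's stdin):
pubkey=$(sudo cat /root/.ssh/id_rsa.pub)
printf -v pubkey_q '%q' "$pubkey"
cmd="echo $pubkey_q >>/root/.ssh/authorized_keys"
printf -v cmd_q '%q ' sudo bash -c "$cmd"

while read -r ip; do
  echo "$ip"
  # -n: don't let ssh consume the while loop's stdin
  ssh -n -i "$2" "$3@$ip" "$cmd_q"
  echo "done $ip"
done < /tmp/list_of_servers.txt
This still relies on the remote ubuntu user being able to run sudo without a password prompt; see the security caveat in the next answer.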

This code doesn't do what you think it does:
# ssh to the server
ssh -i $2 $3@$ip
# append key to authorized_keys file
sudo -c "echo $pubkey >> /root/.ssh/authorized_keys" root
The ssh command there would normally open an interactive remote shell, but since we are in a script, an interactive shell is not possible, so the remote shell immediately exits without actually doing anything at all.
The sudo command that follows uses incorrect syntax; it cannot work that way with the -c flag (check the sudo man page). And since you are not actually in the remote shell as you may have believed, the command runs on your local system, not on the remote one where you want to append your key.
To run sudo remotely, use something like this:
ssh -i $2 $3@$ip sudo echo hello
The echo is just an example for testing of course.
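For the actual key-append step, a hedged sketch is to pipe the key over the connection and let sudo tee -a do the privileged write on the remote side (the output redirection itself then never needs root):
# pipe the key over ssh; tee -a appends as root on the remote host
echo "$pubkey" | ssh -i "$2" "$3@$ip" "sudo tee -a /root/.ssh/authorized_keys >/dev/null"
The >/dev/null only discards tee's echo of the key; the append itself happens as root. The caveat below about passwordless sudo still applies.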
However, this whole approach of appending a public key to root's authorized_keys is deeply flawed in terms of security. sudo should be configured to ask for the user's password, and there is no good way to do that in a script. And if the user can run sudo without entering a password, that is itself unacceptable from a security perspective.

Related

bash script to log as another user and keep the terminal open

I have set up an HTTP server at localhost, with several sites. I would like to connect to each site's root folder the same way I am used to doing on a remote server via ssh. So I tried to create a bash script, intended to log in as user "http", taking the site root folder as an argument and changing $HOME to the site root folder:
#!/bin/bash
echo "Connecting to $1 as http...";
read -p "ContraseƱa: " -s pass;
su - http << EOSU >/dev/null 2>&1
$pass
export HOME="/srv/http/$1";
echo $HOME;
source .bash_profile;
exec $SHELL;
EOSU
It does not work, basically because of:
echo $HOME keeps printing the home folder of the user launching the script.
when the script reaches the end, it exits (obviously), but I would like it to stay open, so that I could have a terminal session for user "http" and go on typing commands.
In other words, I am looking for a script that saves me 3 commands:
# su - http
# cd <site_root_folder>
# export HOME=<site_root_folder>
Edit:
Someone suggested the following:
#!/bin/bash
init_commands=(
"export HOME=/srv/http/$(printf '%q' "$1")"
'cd $HOME'
'. .bash_profile'
)
su http -- --init-file <(printf '%s\n' "${init_commands[@]}")
I am sorry, but their post is gone... In any case, this gives me bash: /dev/fd/63: permission denied. I am not skillful enough to understand the commands above and sort it out. Can someone help me?
Thanks.
Possible solution:
I have been playing around, based on what was posted and some googling, and finally I got it :-)
trap 'rm -f "$TMP"' EXIT
TMP=$(mktemp) || exit 1
chmod a+r "$TMP"
cat >$TMP <<EOF
export HOME=/srv/http/$(printf '%q' "$1")
cd \$HOME
. .bash_profile
EOF
su http -- --init-file "$TMP"
I admit it is not nice code, because:
the temporary file is created by the user executing the script, and later I have to chmod a+r so user "http" can access it... not so good.
I am sure this can be done on the fly, without creating a tmp file.
If someone can improve it, that will be welcome; in any case, it works!
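For the record, a temp-file-free variant (a sketch, not tested against your setup) is to hand the initialization commands to su -c and end with exec bash -i to keep the session open; su then prompts for http's password itself, so the read -p hack is not needed:
#!/bin/bash
# shell-escape the argument so spaces etc. survive re-parsing by the target shell
site_root="/srv/http/$(printf '%q' "$1")"
# \$HOME is escaped so it expands in the target shell, after the export
su http -s /bin/bash -c "export HOME=$site_root; cd \$HOME; . .bash_profile; exec bash -i"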
Your main problem is that $HOME is evaluated when the user runs the script, meaning that it gets that user's value of $HOME instead of being evaluated as the target user.
You can evaluate $HOME as the given user (using the eval command), but I won't recommend it; it is generally bad practice to use this kind of evaluation, especially where security is concerned.
I recommend getting the specific user's home directory with standard Linux tools like getent/passwd.
Example of the behavior you are seeing:
# expected output is /home/eliott
$ sudo -u eliott echo $HOME
/root
Working it around with passwd:
$ sudo -u eliott echo $(getent passwd eliott | cut -d: -f6)
/home/eliott

No such file or directory in Heredoc, Bash

I am deeply confused by Bash's Heredoc construct behaviour.
Here is what I am doing:
#!/bin/bash
user="some_user"
server="some_server"
address="$user"#"$server"
printf -v user_q '%q' "$user"
function run {
ssh "$address" /bin/bash "$#"
}
run << SSHCONNECTION1
sudo dpkg-query -W -f='${Status}' nano 2>/dev/null | grep -c "ok installed" > /home/$user_q/check.txt
softwareInstalled=$(cat /home/$user_q/check.txt)
SSHCONNECTION1
What I get is
cat: /home/some_user/check.txt: No such file or directory
This is very bizarre, because the file exists if I connect over SSH and check that path.
What am I doing wrong? The file is not executable, just a text file.
Thank you.
If you want the cat to run remotely, rather than locally during the heredoc's evaluation, escape the $ in the $(...):
softwareInstalled=\$(cat /home/$user_q/check.txt)
Of course, this only has meaning if some other part of your remote script then refers to "$softwareInstalled" (or, since it's in an unquoted heredoc, "\$softwareInstalled").
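Applied to the snippet above, a corrected heredoc might read as follows (note that ${Status} in the dpkg-query format string needs the same treatment, otherwise the local shell expands it to an empty string before ssh ever runs):
run << SSHCONNECTION1
sudo dpkg-query -W -f='\${Status}' nano 2>/dev/null | grep -c "ok installed" > /home/$user_q/check.txt
softwareInstalled=\$(cat /home/$user_q/check.txt)
echo "\$softwareInstalled"
SSHCONNECTION1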

Passing a variable into rsync

I have the following script:
#!/bin/sh
...
rsync -e 'ssh -i "$SSHKeyPath"'
The error is:
Warning: Identity file $SSHKeyPath not accessible: No such file or directory.
How can I get $SSHKeyPath evaluated before rsync gets called?
UPDATE:
fwiw, this is on OSX.
There are several solutions that I believe are easier:
Use ssh-agent(1) to unlock the private portion of the key for ssh(1) processes as they need it. This is by far the easiest mechanism to use.
Use ~/.ssh/config to select a different private key based on hostname:
Host backuphost
    IdentityFile ~/.ssh/different_key
Then there is no need to specify a key on the command line.
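With that stanza in place, a plain invocation picks the key up automatically (assuming backuphost matches the Host entry):
rsync -av /local/dir/ backuphost:/remote/dir/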
Update
Given that you're trying to separate the key from the individual user, this makes a lot more sense to me now. If you use another variable in sh you can make your original approach work:
$ cat foo.sh
#!/bin/sh
SSHKeyPath=/home/sarnold/.ssh/id_rsa
KEYARG="ssh -i $SSHKeyPath"
rsync -e "$KEYARG" /tmp/pointless localhost:/tmp/new_pointless
$ ./foo.sh
Enter passphrase for key '/home/sarnold/.ssh/id_rsa':
skipping directory pointless
You're using single quotes, which prevent the variable name from being substituted with its value. It will work with double quotes.
For example
$> path_to_file="/tmp/file"; rsync -e "ssh -i $path_to_file" /tmp/src localhost:/tmp/dst
Warning: Identity file /tmp/file not accessible: No such file or directory.
Try this:
rsync -e "ssh -i ""$SSHKeyPath"

Bash script to run over ssh cannot see remote file

The script uses scp to upload a file. That works.
Now I want to log in with ssh, cd to the directory that holds the uploaded file, and do an md5sum on the file. The script keeps telling me that md5sum cannot find $LOCAL_FILE. I tried escaping: \$LOCAL_FILE. I tried quoting the EOI: <<'EOI'. I partially understand this: no escaping means everything happens locally, and an unescaped echo pwd gives the local path. But why can I do "echo $MD5SUM > $LOCAL_FILE.md5sum" and have it create the file on the remote machine, yet "echo `md5sum $LOCAL_FILE` > md5sum2" does not work? And if it is the local md5sum, how do I tell it to work on the remote one?
scp "files/$LOCAL_FILE" "$i#$i.567.net":"$REMOTE_FILE_PATH"
ssh -T "$i#$i.567.net" <<EOI
touch I_just_logged_in
cd $REMOTE_DIRECTORY_PATH
echo `date` > I_just_changed_directories
echo `whoami` >> I_just_changed_directories
echo `pwd` >> I_just_changed_directories
echo "$MD5SUM" >> I_just_changed_directories
echo $MD5SUM > $LOCAL_FILE.md5sum
echo `md5sum $LOCAL_FILE` > md5sum2
EOI
You have to think about when $LOCAL_FILE is being interpreted. In this case, since the here-document delimiter is unquoted, it's being interpreted on the sending machine. You need instead to quote the string in such a way that $LOCAL_FILE ends up on the command line on the receiving machine. You also need to get your "here document" correct; what you show just sends everything from the touch line onward as input to ssh.
What you need will look something like
ssh -T address <<'EOF'
cd $REMOTE_DIRECTORY_PATH
...
EOF
The quoting rules in bash are somewhat arcane. You might want to read up on them in Mendel Cooper's Advanced Bash-Scripting Guide.
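For this particular script the variables hold local knowledge about remote paths, so local expansion is actually wanted; the real culprit is the backticks, which run date, pwd and md5sum locally before ssh ever sees the lines. A sketch without the command substitutions (same variable names as the question):
ssh -T "$i@$i.567.net" <<EOI
cd "$REMOTE_DIRECTORY_PATH"
date > I_just_changed_directories
whoami >> I_just_changed_directories
pwd >> I_just_changed_directories
md5sum "$LOCAL_FILE" > "$LOCAL_FILE.md5sum2"
EOI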

How can I upload (FTP) files to server in a Bash script?

I'm trying to write a Bash script that uploads a file to a server. How can I achieve this? Is a Bash script the right thing to use for this?
Below are two answers. The first is a suggestion to use a more secure and flexible solution like ssh/scp/sftp; the second is an explanation of how to run ftp in batch mode.
A secure solution:
You really should use SSH/SCP/SFTP for this rather than FTP. SSH/SCP have the benefits of being more secure and working with public/private keys which allows it to run without a username or password.
You can send a single file:
scp <file to upload> <username>@<hostname>:<destination path>
Or a whole directory:
scp -r <directory to upload> <username>@<hostname>:<destination path>
For more details on setting up keys and moving files to the server with RSYNC, which is useful if you have a lot of files to move, or if you sometimes get just one new file among a set of random files, take a look at:
http://troy.jdmz.net/rsync/index.html
You can also execute a single command after sshing into a server:
From man ssh:
ssh [...snipped...] hostname [command]
If command is specified, it is executed on the remote host instead of a login shell.
So, an example command is:
ssh username@hostname.example bunzip2 file_just_sent.bz2
If you can use SFTP with keys to gain the benefit of a secured connection, there are two tricks I've used to execute commands.
First, you can pass commands using echo and pipe
echo "put files*.xml" | sftp -p -i ~/.ssh/key_name username#hostname.example
You can also use a batchfile with the -b parameter:
sftp -b batchfile.txt -i ~/.ssh/key_name username@hostname.example
An FTP solution, if you really need it:
If you understand that FTP is insecure and more limited and you really really want to script it...
There's a great article on this at http://www.stratigery.com/scripting.ftp.html
#!/bin/sh
HOST='ftp.example.com'
USER='yourid'
PASSWD='yourpw'
FILE='file.txt'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
binary
put $FILE
quit
END_SCRIPT
exit 0
The -n option to ftp ensures that the command won't try to get the password from the current terminal. The other fancy part is the use of a heredoc: <<END_SCRIPT starts the heredoc, and that exact same END_SCRIPT at the beginning of a line by itself ends it. The binary command sets binary mode, which helps if you are transferring something other than a text file.
You can use a heredoc to do this, e.g.
# the -n option disables auto-logon
ftp -n $Server <<End-Of-Session
user anonymous "$Password"
binary
cd $Directory
put "$Filename.lsm"
put "$Filename.tar.gz"
bye
End-Of-Session
so the ftp process is fed on standard input with everything up to End-Of-Session. It is a useful tip for spawning any process, not just ftp! Note that this saves spawning a separate process (echo, cat, etc.). It is not a major resource saving, but it is worth bearing in mind.
The ftp command isn't designed for scripts, so controlling it is awkward, and getting its exit status is even more awkward.
Curl is made to be scriptable, and also has the merit that you can easily switch to other protocols later by just modifying the URL. If you put your FTP credentials in your .netrc, you can simply do:
# Download file
curl --netrc --remote-name ftp://ftp.example.com/file.bin
# Upload file
curl --netrc --upload-file file.bin ftp://ftp.example.com/
If you must, you can specify username and password directly on the command line using --user username:password instead of --netrc.
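The matching ~/.netrc entry for the curl commands above would look something like this (placeholder credentials; the file should be chmod 600):
machine ftp.example.com login username password secret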
Install ncftpput and ncftpget. They're usually part of the same package.
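A minimal ncftpput invocation (hypothetical host, credentials and paths) might look like:
ncftpput -u yourid -p yourpw ftp.example.com /remote/dir file.txt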
Use this to upload a file to a remote location:
#!/bin/bash
# $1 is the file name
# usage: this_script <filename>
HOST='your host'
USER="your user"
PASSWD="pass"
FILE="$1"
REMOTEPATH='/html'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd $REMOTEPATH
put $FILE
quit
END_SCRIPT
exit 0
The command in one line:
ftp -in -u ftp://username:password@servername/path/to/ localfile
#!/bin/bash
# $1 is the file name
# usage: this_script <filename>
IP_address="xx.xxx.xx.xx"
username="username"
domain=my.ftp.domain
password=password
echo "
verbose
open $IP_address
USER $username $password
put $1
bye
" | ftp -n > ftp_$$.log
A working example that puts your file in the server's root directory... see, it's very simple:
#!/bin/sh
HOST='ftp.users.qwest.net'
USER='yourid'
PASSWD='yourpw'
FILE='file.txt'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
put $FILE
quit
END_SCRIPT
exit 0
There isn't any need to complicate stuff. This should work:
#!/bin/bash
echo "
verbose
open ftp.mydomain.net
user myusername mypassword
ascii
put textfile1
put textfile2
bin
put binaryfile1
put binaryfile2
bye
" | ftp -n > ftp_$$.log
Or you can use mput if you have many files...
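For example, with -i added to suppress the per-file confirmation prompt that mput normally issues (placeholder credentials, same heredoc pattern as above):
ftp -in $HOST <<END_SCRIPT
user $USER $PASSWD
binary
mput *.tar.gz
quit
END_SCRIPT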
If you want to use ftp inside a 'for' loop to copy the last generated files for an everyday backup...
j=0
var="`find /backup/path/ -name 'something*' -type f -mtime -1`"
# We have some files in $var with last day change date
for i in $var
do
j=$(( $j + 1 ))
dirname="`dirname $i`"
filename="`basename $i`"
/usr/bin/ftp -in >> /tmp/ftp.good 2>> /tmp/ftp.bad << EOF
open 123.456.789.012
user user_name passwd
bin
lcd $dirname
put $filename
quit
EOF
done # End of 'for' iteration
echo -e "open <ftp.hostname>\nuser <username> <password>\nbinary\nmkdir New_Folder\nquit" | ftp -nv
