I am a newbie to bash scripting. I am trying to copy a gz file, then change permissions and untar it on remote servers (all centos machines).
#!/bin/bash
pwd=/home/sujatha/downloads
cd $pwd
logfile=$pwd/log/`echo $0|cut -f1 -d'.'`.log
rm $logfile
touch $logfile
server="10.1.0.22"
for a in $server
do
scp /home/user/downloads/prometheus-2.0.0.linux-amd64.tar.gz sujatha@10.1.0.22:/home/sujatha/downloads/titantest/
ssh -f sujatha@10.1.0.22 "tar -xvzf /home/sujatha/downloads/titantest/prometheus-2.0.0.linux-amd64.tar.gz"
sleep 2
echo
done
exit
The scp part is successful, but I am not able to do the remaining actions. After untarring I also want to add more actions, like appending a variable to the config files, all through the script. Any advice would be helpful.
Run a bash session in your ssh connection:
ssh 192.168.2.9 bash -c "ls; sleep 2; echo \"bye\""
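Applied to the script in the question, the remote untar can be sent the same way; this is only a sketch, reusing the user, host and paths from the question and assuming the archive has already been copied over:
ssh sujatha@10.1.0.22 'cd /home/sujatha/downloads/titantest && tar -xvzf prometheus-2.0.0.linux-amd64.tar.gz'
Dropping the -f flag keeps ssh in the foreground, so the script waits for the untar to finish before moving on.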
I have multiple repos on my remote server like this:
sourcecode/
repoA
repoB
repoC
Each time I have to work on the remote machine on a repo (say repoA), I do two things:
ssh to remote-server
cd sourcecode/repoA on remote
Instead of doing this every time, I would like an alias / function on my Mac such that when I run "ssh-with-fancy-function" with the argument repoA, it sshes into the remote server, changes my current directory on the remote server to sourcecode/repoA, and stays there.
How can I do this ?
ssh_cd() {
    if (( $# != 1 )); then
        echo "usage: ${FUNCNAME[0]} remote_dir" >&2
        return 1
    fi
    printf -v cmd 'cd %q && bash -i' "$1"
    ssh -t user@remote-host "$cmd"
}
The key here is ending the command with bash -i to start an interactive shell, and passing ssh the -t option to ensure you have a tty to interact with.
This assumes your remote shell is bash.
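With the function defined in your local shell (for example sourced from ~/.bashrc, and with user@remote-host filled in), usage looks like:
ssh_cd sourcecode/repoA
You land in an interactive shell on the remote host, already inside sourcecode/repoA, and leave it as usual with exit or Ctrl-D.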
I have a remote bash script, which goes through files using a for loop:
#!/bin/bash
for f in *.pdf;
do
echo $f
done
When executed locally:
server$ ./test.sh
1508.01585.pdf
1605.07683.pdf
When executed through ssh from a remote client:
client$ ssh user@server 'bash -c ~/papers/test.sh'
*.pdf
I have tried using ssh -T, bash -s, and nothing seems to work.
Please help!
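The usual culprit with this symptom is the working directory: invoked over ssh, the script starts in the remote home directory rather than ~/papers, so *.pdf matches nothing and bash echoes the unexpanded pattern. A minimal sketch of a workaround, assuming the PDFs sit alongside the script:
ssh user@server 'cd ~/papers && ./test.sh'
Alternatively, have test.sh cd into its own directory before the loop.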
Printing documents on the printer connected to the internet is really slow at my university. Therefore I'm writing a script that sends a file to a remote computer with SCP, sends a series of commands over SSH to print the document from the remote computer (which has a better connection to the printer), and then deletes the file on the remote computer.
It works like a charm, but the annoying part is that it prompts for the password two times: once when it sends the file with SCP and once when it sends the commands over SSH. How can this be solved? I read that you can use an identity file? The thing is, though, that multiple users will use it and many have very limited experience with bash programming, so the script must do everything, including creating the file.
Users will mostly use Mac and the remote computer uses Red Hat. Here's the code so far:
#!/bin/sh
FILENAME="$1"
PRINTER="$2"
# checks if second argument is set, else prompt for it
if [ -z ${PRINTER:+x} ]; then
printf "Printer: ";
read PRINTER;
fi
# prompt for username
printf "CID: "
read CID
scp $FILENAME $CID@adress:$FILENAME
ssh -t $CID@adress bash -c "'
lpr -P $PRINTER $FILENAME
rm $FILENAME
exit
'"
You don't need to copy the file at all; you can simply send it to lpr via standard input.
ssh -t $CID@adress lpr -P "$PRINTER" < "$FILENAME"
(ssh reads from $FILENAME and forwards it to the remote command.)
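Folded back into the script, the tail end becomes a single connection, so at most one password prompt is left. A sketch using the question's variables; -t is dropped here because stdin is redirected from the file rather than a terminal:
printf "CID: "
read CID
ssh "$CID@adress" lpr -P "$PRINTER" < "$FILENAME"
No file ever lands on the remote machine, so the rm step disappears as well.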
Start an ssh-agent and add your key to it:
eval $(ssh-agent -s)
ssh-add # here you will be prompted
scp "$FILENAME" "$CID@adress:$FILENAME"
ssh "$CID@adress" bash <<END
lpr -P "$PRINTER" "$FILENAME"
rm "$FILENAME"
END
ssh-agent -k # kill the agent
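This assumes a key pair already exists and its public half is installed on the remote machine; if not, that is a one-time step per user (a sketch; ssh-copy-id comes with OpenSSH on most systems, and yourCID/adress are placeholders):
ssh-keygen -t rsa            # accept the defaults; a passphrase is optional
ssh-copy-id yourCID@adress   # installs the public key in authorized_keys on the remote host
After that, ssh-add only asks for the key's passphrase once, and both the scp and the ssh lines go through without further prompts.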
I am trying to make a shell script that creates a mysql dump and then puts it on another computer. I have already set up keyless ssh and sftp. The script below creates the mysql dump file on the local computer when it is run and doesn't throw any errors, but the file "dbdump.db" is never put on the remote computer. If I execute the sftp connection and the put command by hand, it works.
contents of mysql_backup.sh
mysqldump --all-databases --master-data > dbdump.db
sftp -b /home/tim tim@100.10.10.1 <<EOF
put dbdump.db
exit
EOF
Try to use scp; that should be easier in your case.
scp dbdump.db tim@100.10.10.1:/home/tim/dbdump.db
Both sftp and scp use SSH.
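Dropped into the backup script, that keeps the whole thing to two lines (a sketch, reusing the paths from the question):
mysqldump --all-databases --master-data > dbdump.db
scp dbdump.db tim@100.10.10.1:/home/tim/dbdump.db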
Write the mput/put commands into a file (file_contains_put_command) and try the command below.
sftp2 -B file_contains_put_command /home/tim tim@100.10.10.1 >> log_file
Example:
echo binary > sample_file
echo mput dbdump.db >> sample_file
echo quit >> sample_file
sftp2 -B sample_file /home/tim tim@100.10.10.1 >> log_file
Your initial approach is only a few characters away from working, though. You're telling sftp to read its batch commands from /home/tim (-b /home/tim). If you change this to -b -, it will read its batch commands from stdin.
Something along these lines; and if -b /home/tim was intended to, say, change directory remotely, you can add cd /home/tim to your here-document.
mysqldump --all-databases --master-data > dbdump.db
sftp -b - tim@100.10.10.1 <<EOF
put dbdump.db
exit
EOF
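With that remote directory change included, the here-document would look like this (sketch):
sftp -b - tim@100.10.10.1 <<EOF
cd /home/tim
put dbdump.db
exit
EOF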
I have a public/private key pair set up so I can ssh to a remote server without having to enter a password. I'm trying to write a shell script that will list all the folders in a particular directory on the remote server. My question is: how do I specify the remote location? Here's what I've got:
#!/bin/bash
for file in myname@example.com:dir/*
do
if [ -d "$file" ]
then
echo $file;
fi
done
Try this:
for file in `ssh myname@example.com 'ls -d dir/*/'`
do
    echo $file;
done
Or simply:
ssh myname@example.com 'ls -d dir/*/'
Explanation:
The ssh command accepts an optional command after the hostname; if a command is provided, it executes that command on login instead of the login shell, and then passes on the command's stdout as its own stdout. Here we are simply passing the ls command.
ls -d dir/*/ is a trick to make ls skip regular files and list out only the directories.
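If any of the directory names can contain spaces, the word splitting in the for loop will break them apart; a small variation that reads the listing line by line avoids that (assuming bash on the local side):
ssh myname@example.com 'ls -d dir/*/' | while IFS= read -r dir; do
    echo "$dir"
done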