I'm ssh'ing to a remote server and running a command. This works fine.
sshpass -p password ssh user@192.168.0.1 /bin/bash << EOF
cd /usr
ls -lh
EOF
But I want to save the output of ls -lh to a file on the remote or local server.
I've tried the following but that doesn't seem to do anything locally or remotely.
sshpass -p password ssh user@192.168.0.1 /bin/bash << EOF
cd /usr
ls -lh > /home/user/filename
EOF
How do I do this?
Thanks
UPDATE.
Quick update: the live script will be running multiple commands, and I only want the results of one command sent to the file. Thanks
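For what it's worth, the redirection inside the heredoc runs on the remote host, so /home/user/filename should end up on the remote server rather than locally. A sketch matching the update, where only one command's output goes to the (remote) file; quoting the delimiter keeps the local shell from expanding anything in the body first:
sshpass -p password ssh user@192.168.0.1 /bin/bash << 'EOF'
cd /usr
ls -lh > /home/user/filename
whoami
EOF
Only the ls -lh output lands in the file; whoami (a stand-in for the script's other commands) still prints to the terminal. To capture the output on the local machine instead, redirect the ssh command itself: sshpass -p password ssh user@192.168.0.1 'cd /usr && ls -lh' > /tmp/filename (the local path is illustrative).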
I am trying to write a deployment script which, after copying the new release to the server, should perform a few sudo commands on the remote machine.
#!/bin/bash
app=$1
echo "Deploying application $app"
echo "Copy file to server"
scp -pr $app-0.1-SNAPSHOT-jar-with-dependencies.jar nuc:/tmp/
echo "Execute deployment script"
ssh -tt stefan@nuc ARG1=$app 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo Hello world
echo $ARG1
sudo ifconfig
exit
ENDSSH
The file gets copied correctly and the passed argument is printed as well. But the password prompt shows for two seconds, then it says "Sorry, try again", and the second prompt shows the text I enter in plain text (meaning not masked), but also does not work even if I enter the password correctly.
stefan@X220:~$ ./deploy.sh photos
Deploying application photos
Copy file to server
photos-0.1-SNAPSHOT-jar-with-dependencies.jar 100% 14MB 75.0MB/s 00:00
Execute deployment script
# commands to run on remote host
echo Hello world
echo $ARG1
sudo ifconfig
exit
stefan@nuc:~$ # commands to run on remote host
stefan@nuc:~$ echo Hello world
Hello world
stefan@nuc:~$ echo $ARG1
photos
stefan@nuc:~$ sudo ifconfig
[sudo] password for stefan:
Sorry, try again.
[sudo] password for stefan: ksdlgfdkgdfg
I tried leaving out the -t flags for ssh as well as using -S for sudo, neither of which helped. Any help is highly appreciated.
What I would do:
ssh stefan@nuc bash -s foobar <<'EOF'
echo "arg1 is $1"
echo "$HOSTNAME"
ifconfig
exit
EOF
Tested, works well.
Notes:
for this to work, use an ssh key pair instead of a password; it's also more secure
mind the placement of the bash -s argument; check how I pass it here
no need for -tt at all
no need for sudo to run ifconfig, and better to use ip a anyway
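Applied to the deployment script from the question, the whole thing might look like this (a sketch; it assumes a working key pair for stefan@nuc, and drops sudo per the note above):
#!/bin/bash
app=$1
echo "Deploying application $app"
scp -p "$app-0.1-SNAPSHOT-jar-with-dependencies.jar" nuc:/tmp/
# pass $app as a positional parameter; the quoted delimiter stops local expansion
ssh stefan@nuc bash -s "$app" <<'ENDSSH'
echo "deploying $1"
ip a
ENDSSH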
I came up with another solution: create a separate file with the script to execute on the remote server. Then copy it over using scp, and in the calling script do a
ssh -t remoteserver sudo /tmp/deploy_remote.sh parameter1
This works as expected. Of course the separate file is not the most elegant solution, but -t and -tt did not work when inlining the script to execute on the remote machine.
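Put together, the calling side might look like this (a sketch; deploy_remote.sh stands for whatever ends up in the separate file, and must be executable):
#!/bin/bash
# copy the helper script over, then run it with a TTY so sudo can prompt for a password
scp -p deploy_remote.sh remoteserver:/tmp/
ssh -t remoteserver sudo /tmp/deploy_remote.sh parameter1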
I've only got a little question for you.
I have made a little shell script that allows me to connect to a server and gather certain files and compress them to another location on another server, which works fine.
It is something in the vein of:
#!/bin/bash
ssh -T user@server1
mkdir /tmp/logs
cd /tmp/logs
tar -czvf ./log1.tgz /somefolder/docs
tar -czvf ./log2.tgz /somefolder/images
tar -czvf ./log3.tgz /somefolder/video
cd ..
tar -czvf logs_all.tgz /tmp/logs
What I would really like to do is:
Log in with the root password when connecting via ssh
Run the commands
Logout
Login to next server
Repeat until all logs have been compiled.
Also, it is not essential but, if I can display the progress (as a bar perhaps) then that would be cool!!
If anyone can help that would be awesome.
I am in between n00b and novice so please be gentle with me!!
ssh can take a command as argument to run on the remote machine:
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder"
This will run the tar command on the remote machine, writing tar's output to stdout, which ssh passes back to the local machine. So you can save it locally wherever you like (there's no need for that /tmp/logs/ on the remote machine):
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder" > /path/on/local/machine/log1.tgz
If you just want to collect them on the remote server (no wish to transfer them to the local machine), just do the straightforward version:
ssh -T user@server1 "mkdir /tmp/logs/"
ssh -T user@server1 "tar -cvzf /tmp/logs/log1.tgz /somefolder/anotherfolder"
ssh -T user@server1 "tar -cvzf /tmp/logs/log2.tgz /somefolder/anotherfolder"
…
ssh -T user@server1 "tar -czvf /tmp/logs_all.tgz /tmp/logs"
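You can also run all of those in one ssh session with a heredoc, the same pattern as in the first question above (a sketch, using the folders from your script):
ssh -T user@server1 << 'EOF'
mkdir /tmp/logs
tar -czvf /tmp/logs/log1.tgz /somefolder/docs
tar -czvf /tmp/logs/log2.tgz /somefolder/images
tar -czvf /tmp/logs/log3.tgz /somefolder/video
tar -czvf /tmp/logs_all.tgz /tmp/logs
EOF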
You could send a tar command that writes a compressed archive to standard out and save it locally:
ssh user@server1 'tar -C /somefolder -czvf - anotherfolder' > server1.tgz
ssh user@server2 'tar -C /somefolder -czvf - anotherfolder' > server2.tgz
...
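And to cover the "log in to the next server, repeat" part of the question, the same pattern loops naturally (a sketch; the host list is a placeholder):
#!/bin/bash
# pull one compressed archive from each server in turn
for host in server1 server2 server3; do
    ssh "user@$host" 'tar -C /somefolder -czvf - anotherfolder' > "$host.tgz"
done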
I have a script which starts an ssh-connection.
The variable $SSH starts the ssh connection, so $SSH hostname prints the hostname of the host I ssh to.
Now I try to echo something and copy the output of the echo to a file.
SSH="ssh -tt -i key.pem user#ec2-instance"
When I ssh to the host manually and run:
sudo sh -c "echo 'DEVS=/dev/xvdbb' >> /etc/sysconfig/docker-storage-setup"
it works.
But when I perform
${SSH} sudo sh -c "echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup"
it does not seem to work.
EDIT:
Using tee also works fine after sshing in manually, but does not seem to work after the ssh in script.sh.
The echo command after the ssh in the script is happening on my real host (the one I run the script from, not the host I'm sshing to). So the file on my real host is being changed, and not the file on the host I've ssh'd to.
The command passed to ssh will be executed by the remote shell, so you need to add one level of quoting:
${SSH} "sudo sh -c \"echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup\""
The only thing you really need on the server is the writing though, so if you don't have password prompts and such you can get rid of some of this nesting:
echo 'DEVS=/dev/xvdb' | $SSH 'sudo tee /etc/sysconfig/docker-storage-setup'
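And if you want to append rather than overwrite (as the manual command with >> above does), tee -a does that:
echo 'DEVS=/dev/xvdb' | $SSH 'sudo tee -a /etc/sysconfig/docker-storage-setup'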
I am trying to copy several files from a remote server to a local drive in Bash using scp.
Here's the part of the code
scp -r -q $USR@$IP:/home/file1.txt $PWD
scp -r -q $USR@$IP:/home/file2.txt $PWD
scp -r -q $USR@$IP:/root/file3.txt $PWD
However, the problem is that EVERY time it copies a file, it asks for the server's password, which is the same each time. I want it to ask only once and then copy all my files.
And please do not suggest rsync or setting up key-based authentication, since I do not want to do that.
Are there any other ways...?
Any help would be appreciated
You can use an expect script or sshpass:
sshpass -p 'password' scp ...
#!/usr/bin/expect -f
spawn scp ...
expect "password:"
send "ur_password\r"
# wait for scp to finish instead of exiting immediately
expect eof
A disadvantage is that your password is now stored in plain text.
I'm assuming that if you can scp files from the remote server, you can also ssh in and create a tarball of the remote files.
The -r flag is recursive, for copying entire directories, but you're listing distinct files in your command, so -r is superfluous.
Try this from the bash shell on the remote system (the final scp then runs on your local machine):
$ mkdir /home/file_mover
$ cp /home/file1.txt /home/file_mover/
$ cp /home/file2.txt /home/file_mover/
$ cp /root/file3.txt /home/file_mover/
$ tar -cvf /home/myTarball.tar /home/file_mover/
$ scp -q $USR@$IP:/home/myTarball.tar $PWD
Well, in this particular case, you can glob the two files that live under /home:
scp -q $USR@$IP:/home/file[1-2].txt $PWD
(Note that file3.txt lives in /root in your example, so a /home glob cannot match it.)
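Depending on your scp version (the classic scp protocol, not the newer SFTP-based mode), you can also fetch all three with a single invocation, and a single password prompt, by passing the remote paths as one quoted spec that the remote shell splits (a sketch, using the paths from the question):
scp -q "$USR@$IP:/home/file1.txt /home/file2.txt /root/file3.txt" $PWD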
I'm trying to write a bash script to automatically do stuff on client machines on the network. But after the code
ssh -i ~/.ssh/key root@machine
The bash program just stops
What can I do to send the command to the remote machine?
Thanks
Same way as if you were invoking bash directly.
ssh ... somescriptontheserver.sh
ssh ... ls -lR /
ssh ... << EOF
ls -lR /
EOF
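Applied to the invocation from the question, any of these forms should work (a sketch; the remote commands are placeholders):
ssh -i ~/.ssh/key root@machine 'uptime; df -h'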