Is there a way to securely append text to a file on a remote machine?
Without actually copying the file over.
At the moment I have to do a secure copy, then cat and append. Is there a way to append directly and securely?
Any help is great, thanks.
cat /local/file.txt | ssh user@remote.host 'cat >> /remote/file.txt'
ssh passes its stdin to remote commands, so the remote cat sees the piped output of the local cat. Nifty.
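If you'd rather skip the extra local cat, plain input redirection should do the same job; this is just a minor variant of the command above:
ssh user@remote.host 'cat >> /remote/file.txt' < /local/file.txt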
Related
I have a shell script (script.sh) on a local Linux machine that I want to run on a few remote servers. The script should take a local txt file (output.txt) as an argument and write to it from the remote server.
script.sh:
#!/bin/sh
file="$1"
echo "hello from remote server" >> $file
I have tried the following with no success:
ssh user@host "bash -s" < script.sh /path/to/output.txt
So if I'm reading this correctly...
script.sh is stored on the local machine, but you'd like to execute it on remote machines,
output.txt should get the output of the remotely executed script, but should live on the local machine.
As you've written things in your question, you're providing the name of the local output file to a script that won't be run locally. While $1 may be populated with a value, the file it refers to is nowhere to be seen from the perspective of the machine the script is actually running on.
In order to get a script to run on a remote machine, you have to get it to the remote machine. The standard way to do this would be to copy the script there:
$ scp script.sh user@remotehost:.
$ ssh user@remotehost sh ./script.sh > output.txt
Though, depending on the complexity of the script, you might be able to get away with embedding it:
$ ssh user@remotehost sh -c "$(cat script.sh)" > output.txt
The advantage of this is of course that it doesn't require any disk space on the remote machine. But it may be trickier to debug, and the script may behave a little differently when it's run in-line like this rather than from a file. (For example, the positional parameters will be different.)
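A related middle ground, if your script takes arguments, is to feed the script itself on stdin with bash -s and put the arguments after --, which keeps the positional parameters behaving as they would locally. A sketch (arg1 and arg2 are placeholders, and note they are still interpreted on the remote side, so a local path still won't be reachable this way; for that, see the host:path approach described next):
$ ssh user@remotehost 'bash -s -- arg1 arg2' < script.sh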
If you REALLY want to provide the local output file as an option to the script you're running remotely, then the script needs enough information to reach back to the machine that holds that file, i.e. a host as well as a path. For example:
script.sh:
#!/bin/sh
outputhost="${1%:*}" # trim off path, leaving host
outputpath="${1#*:}" # trim off host, leaving path
echo "Hello from $(hostname)." | ssh "$outputhost" "cat >> '$outputpath'"
Then after copying the script over, call the script with:
$ ssh user@remotehost sh ./script.sh localhostname:/path/to/output.txt
That way, script.sh running on the remote host will send its output independently, rather than through your existing SSH connection. Of course, you'll want to set up SSH keys and such to facilitate this extra connection.
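For completeness, the one-time key setup for that reverse connection usually looks roughly like this, run on the remote host and pointing back at your workstation (localhostname and the user names are placeholders for your own machines):
$ ssh user@remotehost
$ ssh-keygen -t ed25519              # generate a key pair on the remote host if it doesn't have one yet
$ ssh-copy-id user@localhostname     # install the public key on the machine that holds output.txt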
You can achieve it as below.
For saving the output locally on Unix/Linux:
ssh remote_host "bash -s" < script.sh > /tmp/output.txt
If the first line of your script file is #!/bin/bash, you don't need to use bash -s on the command line. Try it like below.
It's better to always give the full path to the script on the remote host, e.g.:
ssh remote_host "/tmp/scriptlocation/script.sh" > /tmp/output.txt
For testing, execute any Unix command first:
ssh remote_host "/usr/bin/ls -l" > /tmp/output.txt
For saving the output locally on Windows:
ssh remote_host "script.sh" > .\output.txt
It may be better to use plink.exe. Example:
plink remote_host "script.sh" > log.txt
I want to modify an existing variable on a remote machine. On a local machine I use 'sed' for that. But how can I achieve this on a remote machine?
comlink.sh:
The remote file containing the variable that should be changed.
#!/bin/bash
test=1
new=2
ready=0
How I change that variable locally:
sed 's/ready=.*/ready=1/' /home/pi/comlink.sh > tmp
mv tmp /home/pi/comlink.sh
My attempt at achieving this remotely:
sed 's/ready=.*/ready=1/' ssh pi#[myIP] /home/pi/comlink.sh > tmp
mv ssh pi#[myIP] tmp /home/pi/comlink.sh
But this is not the right way or syntax. I think I might need some help here. Thanks!
Do this: it's really just a quoting challenge -- execute the sed command remotely
ssh pi@ip "sed -i 's/ready=.*/ready=1/' /home/pi/comlink.sh"
The first word in a Bash expression is a command (a program, alias, function, builtin, etc.). ssh is a program to connect to remote machines, so you can't just drop the string "ssh" into other commands and have them do the work of ssh. The confusion is understandable because the opposite is somewhat true - you can pass arbitrary commands to ssh in order to execute them on the remote machine.
So while sed PATTERN ssh pi@[myIP] ... is meaningless (you're applying PATTERN to files in your current directory called ssh and pi@[myIP], which presumably don't exist), you can say something like:
ssh pi#[myIP] "cat /home/pi/comlink.sh"
To output the contents of the file /home/pi/comlink.sh on your remote machine.
You can also do more complex operations, like output redirection (>), over ssh, but you need to ensure the full command you want to run is being passed to ssh by quoting it - otherwise the output redirection will occur locally. Compare:
ssh pi@[myIP] echo foo > /tmp/foo
vs.
ssh pi@[myIP] 'echo foo > /tmp/foo'
The former will invoke echo foo on your remote machine and write the output to /tmp/foo on your local machine. The latter will invoke echo foo > /tmp/foo on the remote machine.
Take a look also at the -i flag for sed - it will apply the pattern in-place to the given file(s), so you don't need to write to a temporary file and then move it.
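If you'd like a safety net while editing in place, sed's -i flag can also take a backup suffix (with GNU sed, which a Pi will have, the suffix is attached directly to -i), so the previous version of the file is kept alongside the edited one:
ssh pi@ip "sed -i.bak 's/ready=.*/ready=1/' /home/pi/comlink.sh"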
These three lines of code require authentication twice. I don't yet have password-less authentication set up on this server. In fact, these lines of code are to copy my public key to the server and concatenate it with the existing file.
How can I re-write this process with a single ssh command that requires authentication only once?
scp ~/local.txt user@server.com:~/remote.txt
ssh -l user user@server.com
cat ~/remote.txt >> ~/otherRemote.txt
I've looked into the following possibilities:
command sed
operator ||
operator &&
shared session: Can I use an existing SSH connection and execute SCP over that tunnel without re-authenticating?
I also considered placing local.txt at an openly accessible location, for example behind a public Dropbox link. Then, if cat could accept that as input, the scp line wouldn't be necessary. But this would add an extra step and wouldn't work in cases where local.txt cannot be made public.
Other references:
Using a variable's value as password for scp, ssh etc. instead of prompting for user input every time
https://superuser.com/questions/400714/how-to-remotely-write-to-a-file-using-ssh
You can redirect the content to the remote, and then use commands on the remote to do something with it. Like this:
ssh user@server.com 'cat >> otherRemote.txt' < ~/local.txt
The remote cat command will receive as its input the content of ~/local.txt, passed to the ssh command by input redirection.
Btw, as @Barmar pointed out, specifying the username with both -l user and user@ was also redundant in your example.
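Since the concrete goal here is installing a public key, note that this exact pattern is what the stock ssh-copy-id tool automates, and the traditional manual one-liner looks something like the following (the key filename is an assumption; use whichever public key you actually have):
ssh-copy-id user@server.com
or, by hand:
cat ~/.ssh/id_ed25519.pub | ssh user@server.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
Either way you authenticate only once per connection, and ssh-copy-id also takes care of creating ~/.ssh on the server if it doesn't exist yet.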
I have a process (which I have put into an alias in .bash_profile) to get a log file from my remote ssh server, save it to my local machine and then empty the remote file.
At the moment, my process has these two commands:
scp admin@remote.co.za:public/proj/current/log/exceptions.log "exceptions $(date +"%d %b %Y").log"
to download the file to my local machine, and then
ssh admin#remote.co.za "> /public/proj/current/log/exceptions.log"
to clear the remote file. Doing it this way means that I'm logging in via ssh twice. I want this to be as efficient as possible, so I want a way to log in only once, do both operations, and then log out.
So if I can find a way to send the file to my local machine from the command-line of the server, I can do this:
ssh admin#remote.co.za "[GET FILE]; > /public/proj/current/log/exceptions.log"
Is there a way to do this? And if not, is there any other way to achieve my goal while logging in only once?
ssh admin#remote.co.za "cat /public/proj/current/log/exceptions.log &&
> /public/proj/current/log/exceptions.log" > "exceptions $(date +"%d %b %Y").log"
This works by catting the entire file to stdout, which flows through as the stdout of ssh, and then truncating the file remotely (assuming the cat succeeded).
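Since the question mentions an alias in .bash_profile, one way to package this up is a small shell function instead (fetch_exceptions is just an illustrative name; the host and paths are copied from above):
fetch_exceptions() {
    ssh admin@remote.co.za \
        "cat /public/proj/current/log/exceptions.log && > /public/proj/current/log/exceptions.log" \
        > "exceptions $(date +"%d %b %Y").log"
}
Because it's a function, the $(date ...) part is evaluated each time you run it, so every download gets the current date in its filename.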
I have a bash script that automatically connects to a remote location and runs another script stored at that remote location. In the script on the remote location, I want to have my autoconnect script (which is stored on my local PC) capture specific output that is echoed by the remote script, so I can store it in a separate log. I know that something will need to be added to the remote script that redirects the output so the local script can catch it; I'm just not sure where to go. Help is appreciated!
In your local script, on the ssh line, you can copy the output to a file with tee:
ssh ... | tee -a output.log
If you want to filter which lines go to the output.log file, you can use process substitution:
ssh .... | tee >(grep "Some things you want to filter." >> output.log)
Besides grep, you can use other commands as well, such as awk.
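For instance, an awk filter in the same process-substitution slot might look like this (the ERROR pattern, script name and log file are only placeholders for whatever your remote script actually prints):
ssh user@remote_host './remote_script.sh' | tee >(awk '/^ERROR/' >> output.log)
The full output still flows to your terminal, while only the lines starting with ERROR are appended to output.log.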