I have a bash script that automatically connects to a remote location and runs another script stored there. I want my autoconnect script (on my local PC) to capture specific output echoed by the remote script so I can store it in a separate log. I know something will need to be added to the remote script to redirect the output so the local script can catch it; I'm just not sure where to start. Help is appreciated!
In your local script, on the ssh line, you can duplicate the output to a file with tee:
ssh ... | tee -a output.log
If you want to filter which lines go to the output.log file, you can use process substitution:
ssh .... | tee >(grep "Some things you want to filter." >> output.log)
Besides grep you can use other commands as well, such as awk.
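Putting the two ideas together for your setup, here is a minimal sketch. It assumes the remote script prefixes the lines you want captured with a "LOGME:" marker; the marker, host and paths are placeholders, not anything from your actual scripts:
# remote script: tag the lines you want the local side to keep, e.g.
#   echo "LOGME: backup finished with status 0"
# local autoconnect script: show everything on the terminal,
# but append only the tagged lines to a separate log
ssh user@remotehost '/path/to/remote-script.sh' | tee >(grep '^LOGME:' >> special.log)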
I've got a script that uses ssh to log in to another machine and run a script there. My local script redirects all the output to a file. It works fine in most cases, but on certain remote machines I am capturing output that I don't want, and it looks like it's coming from stderr. Maybe it's because of the way bash processes entries in its start-up files, but that is just speculation.
Here is an example of some unwanted lines that end up in my file.
which: no node in (/usr/local/bin:/bin:/usr/bin)
stty: standard input: Invalid argument
My current method is to just strip out the predictable output that I don't want, but it feels like bad practice.
How can I capture output from only my script?
Here's the line that runs the remote script.
ssh -p 22 -tq user@centos-server "/path/to/script.sh" > capture
The ssh uses authorized_keys.
Edit: In the meantime, I'm going to work on directing the output from my script on machine B to a file, then copying it to A via scp and deleting it on B. But I would really like to be able to suppress the output completely, because when I run the script on machine A it makes the output difficult to read.
To build on your comment on Raman's answer: have you tried suppressing .bashrc and .bash_profile as shown below?
ssh -p 22 -tq user@centos-server "bash --norc --noprofile /path/to/script.sh" > capture
If rc files are the problem on some servers, you should try to fix the broken rc files rather than your script/invocation, since they affect all (non-interactive) logins.
Try running ssh user@host 'grep -ls "which node" .*' on all your servers to find out whether any of their dotfiles contain "which node", as indicated by your error message.
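For example, a quick loop over your hosts might look like this (servers.txt with one host per line is a made-up helper, and the dotfile names are just the usual suspects):
# check the common start-up files on each server for a stray "which node";
# -n stops ssh from swallowing the rest of servers.txt on stdin
while read -r host; do
  echo "== $host =="
  ssh -n "$host" 'grep -ls "which node" .bashrc .bash_profile .profile 2>/dev/null'
done < servers.txt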
Another thing to look out for is your shebang. You tagged this as bash and mention CentOS, but on a Debian/Ubuntu server #!/bin/sh gives you dash instead of (sh-compatible) bash.
You can redirect stderr (2) to /dev/null and send the rest to the log file as follows:
ssh -p 22 -tq user@centos-server bash -c "/path/to/script.sh" 2>/dev/null >> capture
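If the stray lines come from the remote stderr, you can also combine the two ideas above. A sketch (note that dropping -t matters here, because a pseudo-terminal merges the remote stderr into stdout before it reaches you):
# remote stdout goes to capture, everything arriving on stderr is discarded locally
ssh -p 22 -q user@centos-server "bash --norc --noprofile /path/to/script.sh" > capture 2>/dev/null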
I have a shell script (script.sh) on a local linux machine that I want to run on a few remote servers. The script should take a local txt file (output.txt) as an argument and write to it from the remote server.
script.sh:
#!/bin/sh
file="$1"
echo "hello from remote server" >> $file
I have tried the following with no success:
ssh user@host "bash -s" < script.sh /path/to/output.txt
So if I'm reading this correctly...
script.sh is stored on the local machine, but you'd like to execute it on remote machines,
output.txt should get the output of the remotely executed script, but should live on the local machine.
As you've written things in your question, you're providing the name of the local output file to a script that won't be run locally. While $1 may be populated with a value, the file it references is nowhere to be seen from the perspective of where the script is running.
In order to get a script to run on a remote machine, you have to get it to the remote machine. The standard way to do this would be to copy the script there:
$ scp script.sh user@remotehost:.
$ ssh user@remotehost sh ./script.sh > output.txt
Though, depending on the complexity of the script, you might be able to get away with embedding it:
$ ssh user@remotehost sh -c "$(cat script.sh)" > output.txt
The advantage of this is of course that it doesn't require disk space to be used on the remote machine. But it may be trickier to debug, and the script may function a little differently when run in-line like this rather than from a file. (For example, the positional parameters will be different.)
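As an aside, if you only reached for "bash -s" to keep script.sh on the local machine, that still works, and you can pass arguments to it too; just remember they are interpreted on the remote side. A sketch, assuming a variant of script.sh that writes to stdout instead of appending to "$1":
# feed the local script to a remote bash on stdin; "greeting" becomes $1 over there,
# and anything the script echoes is captured into the local output.txt
ssh user@remotehost 'bash -s -- greeting' < script.sh > output.txt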
If you REALLY want to provide the local output file as an argument to the script you're running remotely, then the argument has to tell the remote script how to reach that file over the network, e.g. as host:path. For example:
script.sh:
#!/bin/sh
outputhost="${1%:*}" # trim off path, leaving host
outputpath="${1#*:}" # trim off host, leaving path
echo "Hello from $(hostname)." | ssh "$outputhost" "cat >> '$outputpath'"
Then after copying the script over, call the script with:
$ ssh user@remotehost sh ./script.sh localhostname:/path/to/output.txt
That way, script.sh running on the remote host will send its output independently, rather than through your existing SSH connection. Of course, you'll want to set up SSH keys and such to facilitate this extra connection.
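The key setup for that extra connection is just the usual exchange, done on the remote host so it can reach back to your machine. A sketch (user and localhostname are placeholders for your local account and local machine):
# run these ON the remote host, once
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519     # skip if it already has a key
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@localhostname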
You can achieve it like below:
For saving output locally on UNIX/Linux:
ssh remote_host "bash -s script.sh" > /tmp/output.txt
Alternatively, if the script already lives on the remote machine and its first line is #!/bin/bash, you don't need bash -s in your command line at all. Try it like below.
It is better to always use the full path to your bash script, like:
ssh remote_host "/tmp/scriptlocation/script.sh" > /tmp/output.txt
For testing, execute any Unix command first:
ssh remote_host "/usr/bin/ls -l" > /tmp/output.txt
For saving output locally on Windows:
ssh remote_host "script.sh" > .\output.txt
Better to use plink.exe. For example:
plink remote_host "script.sh" > log.txt
I have 2 Linux boxes and I am trying to upload files from one machine to the other using sftp. I have put all the commands I use in the terminal into a shell script, like below.
#!/bin/bash
cd /home/tests/sftptest
sftp user1@192.168.0.1
cd sftp/sftptest
put test.txt
bye
But this is not working: it gives me an error saying the directory does not exist. Also, the terminal stays at the sftp> prompt, which means bye is not executed. How can I fix this?
I suggest using a here-document:
#!/bin/bash
cd /home/tests/sftptest
sftp user1@192.168.0.1 << EOF
cd sftp/sftptest
put test.txt
EOF
When you run the sftp command, it connects and waits for you to type commands. It effectively starts its own interactive "subshell".
The other commands in your script would execute only once the sftp command finishes. And they would obviously execute as local shell commands, so in particular the put will fail as a non-existent command.
You have to provide the sftp commands directly on the sftp command's input.
One way to do that is using an input redirection, e.g. the "here document" that the answer by @cyrus already shows:
sftp username@host << EOF
sftp_command_1
sftp_command_2
EOF
Another way is to use an external sftp batch file:
sftp -b sftp.txt username@host
where the sftp.txt batch file contains:
sftp_command_1
sftp_command_2
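Applied to the script in the question, the batch-file approach might look like this (paths and host are taken from the question; the batch file name is arbitrary):
#!/bin/bash
cd /home/tests/sftptest || exit 1

# write the sftp commands to a batch file...
cat > sftp_batch.txt <<'EOF'
cd sftp/sftptest
put test.txt
bye
EOF

# ...and run them non-interactively; -b aborts on the first failing command
sftp -b sftp_batch.txt user1@192.168.0.1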
I want to modify an existing variable on a remote machine. On a local machine I use 'sed' for that. But how can I achieve this on a remote machine?
comlink.sh:
The remote file containing the variable that should be changed.
#!/bin/bash
test=1
new=2
ready=0
How I change that variable locally:
sed 's/ready=.*/ready=1/' /home/pi/comlink.sh > tmp
mv tmp /home/pi/comlink.sh
My approach of what I want to achieve remotely:
sed 's/ready=.*/ready=1/' ssh pi@[myIP] /home/pi/comlink.sh > tmp
mv ssh pi@[myIP] tmp /home/pi/comlink.sh
But this is not the right way or syntax. I think I might need some help here. Thanks!
Do this: it's really just a quoting challenge. Execute the sed command remotely:
ssh pi@ip "sed -i 's/ready=.*/ready=1/' /home/pi/comlink.sh"
The first word in a Bash expression is a command (a program, alias, function, builtin, etc.). ssh is a program to connect to remote machines, so you can't just drop the string "ssh" into other commands and have them do the work of ssh. The confusion is understandable because the opposite is somewhat true - you can pass arbitrary commands to ssh in order to execute them on the remote machine.
So while sed PATTERN ssh pi@[myIP] ... is meaningless (you're applying PATTERN to files in your current directory called ssh and pi@[myIP], which presumably don't exist), you can say something like:
ssh pi@[myIP] "cat /home/pi/comlink.sh"
to output the contents of the file /home/pi/comlink.sh on your remote machine.
You can also do more complex operations, like output redirection (>), over ssh, but you need to ensure the full command you want to run is being passed to ssh by quoting it - otherwise the output redirection will occur locally. Compare:
ssh pi@[myIP] echo foo > /tmp/foo
vs.
ssh pi@[myIP] 'echo foo > /tmp/foo'
The former will invoke echo foo on your remote machine and write the output to /tmp/foo on your local machine. The latter will invoke echo foo > /tmp/foo on the remote machine.
Also take a look at the -i flag for sed: it applies the edit in place to the given file(s), so you don't need to write to a temporary file and then move it.
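For example, assuming GNU sed on the Pi, you can even keep a backup of the original while editing in place (the .bak suffix is just an example choice):
# edits comlink.sh in place on the remote machine and leaves comlink.sh.bak behind
ssh pi@[myIP] "sed -i.bak 's/ready=.*/ready=1/' /home/pi/comlink.sh"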
Is there a way to securely append text to a file on a remote machine?
Without actually copying over the file as such.
At the moment, I need to do a secure copy, then cat and append. Is there any way to append directly and securely?
Any help is great, thanks.
cat /local/file.txt | ssh user@remote.host 'cat >> /remote/file.txt'
ssh passes its stdin to remote commands, so the remote cat sees the piped output of the local cat. Nifty.
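The same pattern works without a local source file, e.g. to append a single generated line (host and remote path are the same placeholders as in the example above):
# printf runs locally; the remote cat appends its output to the remote file
printf 'backup finished at %s\n' "$(date)" | ssh user@remote.host 'cat >> /remote/file.txt'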