I have a function that is sourced through the .bashrc file on remote host A.
If I use "which" on remote host A, I get the function body as output.
I need to run it remotely through ssh from another host B.
Currently, all my attempts end with a "command not found" error.
I have already tried
ssh A "source /home/user/.bashrc && function"
but this does not help.
I also tried forcing ssh to assign a pseudo-tty with the -t flag. SHELL on both hosts is bash.
Running ssh localhost on host A still keeps the status function available.
Output:
[user@hostA ~]$ which status
status is a function
status ()
{
dos -s $*
}
[user@hostB ~]$ ssh hostA " source /home/user/deploy/bin/_bashrc && status all "
ls: : No such file or directory
bash: status: command not found
Basically, you can't. To do that you need to copy the sourced file to the remote host and source it there. Note that your file may be sourcing other files as well… This is almost like running a local program on the remote host.
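A minimal sketch of that approach, using a hypothetical functions file (my_functions.sh is a placeholder, not something from the question):
# copy the file that defines the function, then source it inside the remote command
scp my_functions.sh user@hostA:/home/user/my_functions.sh
ssh user@hostA 'source /home/user/my_functions.sh && status all'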
The trick is to get the remote end to properly load your file containing the function into the shell environment.
I found with bash that the following works...
Put your function into .bashrc on the remote:
foo_func()
{
echo Hello World
}
Then on the local side:
ssh user@remote bash -l -c foo_func
The bash -l instructs bash to run as a login shell (sourcing startup files) and then the -c tells the shell to execute the string foo_func.
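Applied to the question above, a sketch might look like this (assuming hostA's login shell ends up sourcing the file that defines status; the inner command is quoted so the argument reaches the function):
ssh user@hostA 'bash -l -c "status all"'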
Related
The bash script I'm trying to run on the K8S cluster node from a proxy server is as below:
#!/usr/bin/bash
cd /home/ec2-user/PVs/clear-nginx-deployment
for file in $(ls)
do
kubectl -n migration cp $file clear-nginx-deployment-d6f5bc55c-sc92s:/var/www/html
done
This script is not copying the data that is in the path /home/ec2-user/PVs/clear-nginx-deployment of the master node.
But it works fine when I try the same script manually on the destination cluster.
I am using python's paramiko.SSHClient() for executing the script remotely:
def ssh_connect(ip, user, password, command, port):
    try:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(ip, username=user, password=password, port=port)
        stdin, stdout, stderr = client.exec_command(command)
        lines = stdout.readlines()
        for line in lines:
            print(line)
    except Exception as error:
        filename = os.path.basename(__file__)
        error_handler.print_exception_message(error, filename)
    return
To make sure the above function is working fine, I tried another script:
#!/usr/bin/bash
cd /home/ec2-user/PVs/clear-nginx-deployment
mkdir kk
This one runs fine with the same Python function, and creates the directory 'kk' in the desired path.
Could you please suggest the reason behind this, or an alternative way to carry it out?
Thank you in advance.
The issue is now solved.
Actually, the issue was related to permissions, which I got to know later. So to resolve it, I first scp the script to the remote machine with:
scp script.sh user@ip:/path/on/remote
And then run the following command from the local machine to run the script remotely:
sshpass -p "password" ssh user@ip "cd /path/on/remote ; sudo su -c './script.sh'"
And as I mentioned in the question, I am using Python for this.
I used the system function in Python's os module to run the above commands from my local machine, for both steps:
scp the script to the remote:
import os
command = "scp script.sh user#ip:/path/on/remote"
os.system(command)
run the script remotely:
import os
command = "sshpass -p \"password\" ssh user@ip \"cd /path/on/remote ; sudo su -c './script.sh'\""
os.system(command)
If I write the below command inside a script (test.sh) and execute it directly on the specific machine, it works.
sshpass -p $HOST_PWD sftp testuser#host <<!
cd parent
mkdir test
bye
!
But when I try to run it in Jenkins with "Execute shell script on remote host using ssh" (either the script directly, or by invoking the test.sh file at the specific path), it fails with
sshpass: Failed to run command: No such file or directory
I have installed sshpass, lftp and rsync on the remote machine.
Issue:
I have added export $HOST_PWD to the .bashrc of the specific machine as well as in Jenkins, but it is not finding it.
The script is placed on the specific machine; if I execute it directly on that machine it works, even with $HOST_PWD. But it does not work when invoked from Jenkins, either as an inline script or by calling the script, using "Execute shell script on remote host using ssh".
Working with changes:
If I put the password in directly instead of $HOST_PWD, it works.
According to a document:
When an interactive shell that is not a login shell is started, Bash reads and executes commands from ~/.bashrc, if that file exists.
I did a quick test:
At my server,
[USER@MYSERVER ~]$ cat .bashrc
...
echo 'I am in a bashrc file of my server'
...
At a remote server,
# unquoted
[USER@REMOTESERVER ~]$ ssh MYSERVER echo $-
I am in a bashrc file of my server
himBH
# quoted
[USER@REMOTESERVER ~]$ ssh MYSERVER 'echo $-'
I am in a bashrc file of my server
hBc
When the command is unquoted, it seems to be run in an interactive shell, and when it is quoted, it seems to be run in a non-interactive shell.
Why is this so?
And both read the bashrc file of MYSERVER, which doesn't follow the rule in the document.
Any link or comment appreciated.
EDITED:
And it seems to be a non-login shell.
[USER@REMOTESERVER ~]$ ssh MYSERVER 'shopt -q login_shell && echo 1 || echo 2'
2
The bash documentation says:
Invoked by remote shell daemon
Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd, or the secure shell daemon sshd. If Bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc, if that file exists and is readable.
I missed this part...
Therefore, a call through ssh should read the .bashrc file.
And an ssh remote command runs in a non-interactive shell, as the comments to the question explain.
The remote bash is indeed not started as an interactive shell (as we can see from the output of $-), so something else must be sourcing your .bashrc. For sure, it is run as a login shell. Could it be that you have a ~/.bash_profile, ~/.bash_login or ~/.profile which explicitly sources .bashrc?
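For reference, such a snippet commonly looks like this (a sketch of what might be sitting in ~/.bash_profile on MYSERVER, not confirmed from the question):
# source ~/.bashrc from the login-shell startup file, if it exists
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi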
I have a local bash script which is used to invoke a bash script on the remote server and get some reports from the remote server.
The way I call this script currently in local_script.sh is:
ssh remoteuse@ip "/bin/bash remote_script.sh"
Now, I want to set a date variable in the local_script.sh file, and the variable needs to be available in remote_script.sh as well.
Please give some ideas.
EDIT:
Please see my test script:
[user@localserver]$ ssh remoteusr@ip "/bin/bash remote_script.sh $test_var"
And my remote script:
[user@remoteserver]$ cat remote_script.sh
#!/bin/bash
echo $test_var > test_var.log
But the test_var.log file on the remote server is empty after running the script.
The remote server doesn't know your local variables; you can only pass the value of the variable from local to remote with an extra argument on the ssh line:
ssh remoteuse@ip "/bin/bash remote_script.sh $variable"
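A sketch of how the two scripts fit together under that approach; note that the remote script then has to read the value as a positional argument ($1), not as $test_var:
# local_script.sh (sketch)
test_var=$(date +%F)
ssh remoteuse@ip "/bin/bash remote_script.sh $test_var"
# remote_script.sh (sketch)
#!/bin/bash
echo "$1" > test_var.log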
You have to add the variable to the environment of the executed command. That can be done with the var=value cmd syntax.
But since the line you pass to ssh will be evaluated on the remote server, you must ensure the variable is in a format that is reusable as shell input. Two ways come to mind depending on your version of bash:
With bash 4.4 or newer, you can use the Q operator in ${parameter@operator}:
local script:
foo="abc'def \"123\" *"
ssh remoteuse#ip "foo=${foo#Q} /bin/bash remote.sh"
remote script:
printf '<%s>\n' "$foo"
output:
$ ./local_script.sh
<abc'def "123" *>
If you don't have bash 4.4 or newer, you can use the %q directive to printf:
ssh remoteuse#ip "foo=$(printf '%q' "$foo") /bin/bash remote.sh"
There is the following bash script:
#!/bin/bash
set -o errexit
# General parameters
server="some_server"
login="admin"
default_path="/home/admin/web/"
html_folder="/public_html"
# Project parameters
project_folder="project_name"
go_to_folder() {
    ssh "$login@$server"
    cd "/home/admin/web/"
}
go_to_folder
I get the error "deploy.sh: line 16: cd: /home/admin/web/: No such file or directory", but if I connect manually and change the directory with "cd", it works. How can I change my script?
Yes, it is obvious, isn't it? You are trying to cd on the local machine and not on the target machine. The commands to be passed to ssh must be provided in-line as its arguments; on a separate line, it looks as if you are doing a no-op on the remote machine and running the cd locally.
go_to_folder() {
    ssh "$login@$server" "cd /home/admin/web/"
}
Or a cleaner way would be to use here-docs:
go_to_folder() {
    ssh "$login@$server" <<EOF
cd /home/admin/web/
EOF
}
Other ways to make ssh read the commands to run from standard input include here-strings (<<<).
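A sketch of the same function using a here-string:
go_to_folder() {
    ssh "$login@$server" <<< "cd /home/admin/web/"
}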