I've got two Windows machines running the latest version of Cygwin. I have OpenSSH configured on both, and password-less authentication has been set up for the remote machine. I can ssh into either machine without problems. All commands below are executed in cmd.exe.
System Specification (identical for both machines):
Cygwin version 1.7.32
Windows 7
ver from cmd returns "Microsoft Windows [Version 6.1.7601]"
R 2.14.1
The basic form of my problem is this: I have to start an executable on my remote machine, and I must start it via ssh through the Windows command line, not Cygwin. That executable takes a couple of parameters, one of which needs to be encapsulated in double quotes (because I am working with a third-party package in R, which makes a call to system(), and one parameter expects a string). The actual parameter is -e "parallel:::.slaveRSOCK()"
The script.exe called below is the file Rscript.exe, which (to my knowledge) comes with any installation of R. I did not create it, compile it, or anything; it is just utilized by the package I am trying to debug, as it allows you to execute R commands outside of the R console GUI. The package I am trying to debug is "parallel", which I am using to run parallel processes on remote machines. I did not have any hand in creating or compiling that code either.
Maybe needless additional info, but the portion of the package I'm trying to debug is the function that spins up a process on a remote machine. This function builds a command from some parameters and executes it in cmd.exe. I'm trying to replicate the command and execute it manually, since when running through the actual package the process simply hangs.
If I were starting the executable on my own machine, I would do the following in Windows cmd:
C:\Path\script.exe -e "parallel:::.slaveRSOCK()"
And this works fine. Establishing an ssh connection to the remote machine and subsequently running this command (changing C to c) also works.
But when I make the following call to start this script on the remote machine from my machine
ssh remoteHost c:/Path/script.exe -e "parallel:::.slaveRSOCK()"
I get the following error
bash: -c: line 0: syntax error near unexpected token '('
bash: -c: line 0: 'c:/Path/script.exe -e parallel:::.slaveRSOCK()'
So I've lost the double quotes; obviously I'm not escaping them correctly. I've tried the following call, which was close:
ssh remoteHost c:/Path/script.exe -e \"parallel:::.slaveRSOCK()\"
but the second line of the error gave me
bash: -c: line 0: 'c:/Path/script.exe -e \parallel:::.slaveRSOCK()"'
This doesn't make a whole lot of sense to me: I managed to escape the second quote, but the first disappears, and I'm left with a \ before parallel.
EDIT
This one, as suggested in one of the answers
ssh remoteHost "c:/Path/script.exe -e \"parallel:::.slaveRSOCK()\""
gave me the following error
bash: -c: line 0: 'c:/Path/script.exe -e \parallel:::.slaveRSOCK()\'
Also quite an odd result: we lose both double quotes but keep the backslashes.
I've also tried various combinations of double quotes (and single quotes) around the whole command after ssh remoteHost, and using ^ to escape, but by now it has pretty much turned into taking shots in the dark, so I thought it would be a good idea to turn to people more knowledgeable than me.
Any help or insight that can be provided is much appreciated. If there are any questions let me know.
EDIT 2
Here are some simple examples of the odd escaping that's going on.
Call:
ssh otherhost echo \"hello()\"
Returns:
bash: -c: line 0: unexpected EOF while looking for matching '"'
bash: -c: line 0: syntax error: unexpected end of file
Call:
ssh otherhost echo \"hello()"
Returns:
hello()
Call:
ssh otherhost echo '\"hello()\" '
Returns:
"hello()"
Call:
ssh otherhost echo "\"hello()\""
Returns:
hello\(\)
Alternatively, an explanation of this behavior would be appreciated.
I think you want:
ssh remoteHost "c:/Path/script.exe -e \"parallel:::.slaveRSOCK()\""
I honestly can't fully explain the reasoning behind the quoting (or what's screwing it up if it's a bug), but I think this will work for you:
ssh remoteHost "c:/Path/script.exe -e \"parallel:::.slaveRSOCK\(\)\""
The complication seems to be the following sequence of things parsing the command line(s):
Windows cmd.exe on the local machine gets and parses the whole thing
ssh sends the "c:/Path/script.exe -e \"parallel:::.slaveRSOCK()\"" portion to sshd on the remote Windows host. I'm not sure in what form sshd gets this command line, or exactly what transformations it might do to it.
sshd apparently invokes bash to run the command (through some Cygwin mechanism?). Again, I'm not sure of the exact form of the command line that gets to bash.
bash gets confused unless the parens are escaped, and the backslashes seem to get far enough through this assembly line of processes to make bash on the remote windows-with-cygwin machine happy.
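The layered parsing can be illustrated locally with nested bash invocations (this only approximates the bash layer; cmd.exe and sshd each add their own round of parsing on top):

```shell
#!/bin/bash
# Every shell that re-parses a command string consumes one level of
# quoting. Here the outer bash consumes the single quotes, so the
# second bash (standing in for the remote login shell) sees:
#   echo "hello()"
# and the surviving double quotes protect the parens:
bash -c 'echo "hello()"'              # prints: hello()

# Without inner protection the parens reach the second bash bare and
# reproduce the question's "syntax error near unexpected token '('":
bash -c 'echo hello()' 2>&1 | head -n 1

# Escaping with \" survives exactly one extra round of parsing:
bash -c "echo \"hello()\""            # prints: hello()
```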
Related
I have a bash script that runs another script in a screen on a remote computer. The environment variable GITLAB_CI_TOKEN is set on the host machine and is defined properly. However, the script configure.sh on the remote machine reports that this environment variable is empty when it is executed, even though it is defined on the same line as the script...
Here is the command I am using:
ssh -o "StrictHostKeyChecking=accept-new" "${COMPUTERS_IPS[i]}" \
screen -S "deploy_${COMPUTERS_IPS[i]}" -dm " \
GITLAB_CI_TOKEN=${GITLAB_CI_TOKEN} \
bash \"${REMOTE_FOLDER}/configure.sh\" \"${REMOTE_FOLDER}\" > ${LOG_FILE} 2>&1;
"
Additionally, the logs are not being written to LOG_FILE, but are being displayed on the console of the screen. I have been pulling my hair out over this for the past two days... Any help or guidance would be greatly appreciated :)
Why GITLAB_CI_TOKEN is "empty":
Passing a command to a remote host over ssh is very similar to running it through eval. For example in your case, escaped newlines on the first evaluation become unescaped newlines on a subsequent evaluation. Consider this very simple program named args (place it in bin or somewhere else on your path to demo):
#!/bin/bash
for arg ; do
echo "|$arg|"
done
And these two use cases:
args "\
Hello \
World"
# prints:
# |Hello World|
ssh host args "\
Hello \
World"
# prints:
# |Hello|
# |World|
As you can see, when we run this via ssh the newline we attempted to escape splits our data into two separate lines even though we tried to keep it all on one line. This means your assignment of GITLAB_CI_TOKEN is just a regular shell variable instead of a scoped environment variable for your bash command. A scoped environment variable requires the declaration and the command to happen on the same line.
The easiest thing to do is likely to just export the variable explicitly with export GITLAB_CI_TOKEN=${GITLAB_CI_TOKEN}.
For similar reasons, the output of your command is going to the screen and not the logfile because the outer quotes of screen -dm "commands >output" are getting stripped on the first evaluation, and then the remote host is parsing screen -dm commands >output and assigning the output redirection to screen instead of commands. That means your configure.sh is writing to the screen, and it's the screen program that's writing its own output to a logfile.
To send complex commands to a remote host, you may want to look into tools like printf %q which can produce escaped output suitable for being safely evaluated in an eval-like context. Take a look at BashFAQ/096 for an example.
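As a concrete sketch of the printf '%q' approach (the argument values below are made up for illustration):

```shell
#!/bin/bash
# printf %q requotes each word so that ONE extra round of shell
# evaluation (the remote side of ssh) reconstructs the original
# arguments exactly.
args=("two words" 'has "quotes"' '$(not expanded)')
quoted=$(printf '%q ' "${args[@]}")

# Emulate the remote shell re-parsing the received string:
bash -c "printf '|%s|\n' $quoted"
# |two words|
# |has "quotes"|
# |$(not expanded)|
```

The same pattern applies to the screen/ssh command in the question: quote the whole remote command line with printf '%q' before handing it to ssh, so the redirection and the variable assignment survive intact to the remote evaluation.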
Writing a script to retrieve various environment parameters back from a list of servers. My script returns no value when ran but the same command returns the desired value outside of a script.
I have tried using a couple of variations to retrieve the same data. One of the commands fails because of restrictions placed on the accounts I have access to. The second command works but only if executed in an elevated mode.
This fails with access denied (pwdx is restricted)
dzdo pgrep -f /some/path | xargs pwdx
This works outside of a script but returns no value within a script
dzdo /bin/readlink -e /proc/"$(pgrep -f /some/path)"/cwd
When using "bash -x" to execute my script, I see the "readlink" code is blank.
Ideally, I would like to return the PID and path of the process, as the "pgrep" command does; I can also work with the path alone, as the "readlink" version returns it. The end goal is to gather the information from several servers for audit purposes (version, etc.).
Am I using the wrong syntax for the "readlink" command? I'm fairly new to writing bash scripts, so I appreciate any guidance to help me understand what to do differently when using a command in a script vs. on the command line.
If pwdx is the restricted program, you need to run that with dzdo, not pgrep.
pgrep -f /some/path | dzdo xargs pwdx
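A runnable sketch of that pipeline (dzdo is a site-specific privilege tool, so it is stubbed out here as a pass-through; /some/path is the placeholder from the question):

```shell
#!/bin/bash
# Stub standing in for the real dzdo, which would elevate privileges:
dzdo() { "$@"; }

# pgrep runs unprivileged; only pwdx (the restricted step) is elevated.
# The [p] bracket keeps pgrep from matching this command line itself.
pgrep -f '/some/[p]ath' | dzdo xargs -r pwdx
```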
Inb4 anyone saying this is a bad idea, it's actually a reasonable approach for things like this.
I'm writing a Docker container that can execute user-provided commands through a contained OpenVPN connection, e.g. docker run vpntunnel curl example.com.
So the ENTRYPOINT of the image will fire up OpenVPN and, after the VPN tunnel is up, execute the user-provided CMD line.
Problem is, the standard way to run commands after OpenVPN is up is through the --up option of OpenVPN. Here is the man page description of this option:
--up cmd
Run command cmd after successful TUN/TAP device open (pre --user UID change).
cmd consists of a path to script (or executable program), optionally followed
by arguments. The path and arguments may be single- or double-quoted and/or
escaped using a backslash, and should be separated by one or more spaces.
So the reasonable approach here is for the ENTRYPOINT script to correctly escape the user-provided CMD line and pass the whole thing as one parameter to the --up option of OpenVPN.
In case my Docker image needs to perform some initializations after the tunnel is up and before the user command line is executed, I can prepend a script to the user-provided CMD line like this: --up 'tunnel-up.sh CMD...' and, in the last line of tunnel-up.sh, use "$@" to execute the user-provided arguments.
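A minimal sketch of what such a tunnel-up.sh might look like (hypothetical; note that a real OpenVPN --up script also receives arguments OpenVPN appends itself, such as the tun device name, which this sketch ignores):

```shell
#!/bin/bash
# Write a hypothetical tunnel-up.sh: do any image-specific
# initialization, then hand off to the user command carried in "$@".
cat > /tmp/tunnel-up.sh <<'EOF'
#!/bin/bash
# ...initialization after the tunnel is up would go here...
exec "$@"    # replace this script with the user-provided command line
EOF
chmod +x /tmp/tunnel-up.sh

# Demo invocation, standing in for: --up '/tmp/tunnel-up.sh curl example.com'
/tmp/tunnel-up.sh echo tunnel is up
```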
Now, as you may guess, the only problem left is how to correctly escape an entire command line so it can be passed as a single argument.
The naive approach is just --up "tunnel-up.sh $*", but that can't distinguish command lines like a b c and "a b" c.
In bash 4.4+ you can use parameter transformation with @Q to quote values:
--up "tunnel-up.sh ${*@Q}"
In prior versions you could use printf '%q' to achieve the same effect:
--up "tunnel-up.sh $( (($#)) && printf '%q ' "$@" )"
(The (($#)) check makes sure there are parameters to print before calling printf.)
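A quick demonstration of the @Q round trip (bash 4.4+; the two-word example is made up):

```shell
#!/bin/bash
# ${*@Q} requotes each positional parameter so it survives one further
# round of shell evaluation.
set -- "a b" c
echo "option string: tunnel-up.sh ${*@Q}"

# Round trip: a second shell re-parses the quoted string and recovers
# the original two words intact:
bash -c "printf '|%s|\n' ${*@Q}"
# |a b|
# |c|
```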
I am writing a simple bash script (checkServs.sh) that will ssh into a list of servers and perform a health check on them.
I keep getting errors on the following line:
SERVERS=(blah1.example.com blah2.example.com blah3.example.com blah4.example.com)
Error is:
checkServs.sh: 3: checkServs.sh: Syntax error: "(" unexpected
I've checked online examples and this seems correct, isn't it? Thanks in advance!
I don't know about the syntax error, but this should work:
SERVERS="blah1.example.com blah2.example.com blah3.example.com blah4.example.com"
for server in $SERVERS
do
echo $server
done
EDIT: As noted by Jonathan Leffler in a comment, maybe you are not running the script with bash. Other shells, such as dash, do not recognize the array syntax, which would explain the "(" unexpected error. If you make sure the script runs under bash itself (e.g. with a #!/bin/bash shebang, or by invoking bash checkServs.sh), the array version works:
SERVERS=(blah1.example.com blah2.example.com blah3.example.com blah4.example.com)
for i in $(seq 0 3)
do
echo ${SERVERS[$i]}
done
But if you just want to loop through the names and run an SSH command (ie if having an array won't provide useful functionality), the first method is more straightforward.
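If the script does run under bash, a more idiomatic loop avoids the hard-coded index range entirely: "${SERVERS[@]}" expands to one word per element, whatever the array length.

```shell
#!/bin/bash
# Iterate over every element without knowing the array length:
SERVERS=(blah1.example.com blah2.example.com blah3.example.com blah4.example.com)
for server in "${SERVERS[@]}"; do
    echo "$server"
done
```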
Your remote server probably invokes a different shell to execute commands. Try adding bash -c to your arguments:
ssh user@server bash -c "<your commands>"
Or:
ssh user@server bash < yourscript.sh ## nothing in yourscript.sh may read from stdin, though
An opening parenthesis starts a subshell, which is not a correct thing to have on the right-hand side of an equals sign: the shell expects a string expression there, not a command.
Quotation marks are used to keep a string expression together.
I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines, and prints out whether the machine is "reachable" or "unreachable". However, it is inefficient to login to each machine manually using ssh to execute this script.
Suppose I'm logged into node 1. Is there any way to for me to login to node 2-12 automatically using SSH, execute the ping script, pipe the results to a file, logout and proceed to the next machine? Some kind of bash shell script?
I'm afraid I'm at a loss here since I haven't had experience with shell-scripting before.
Since the script is on the other machines, you can just have ssh run the command for you there:
ssh $hostname my_script >> results_file
When you specify a command like that, it's executed instead of the login shell.
I'll leave it up to you to figure out how to loop over hostnames!
One trick you'll need to use is setting up pre-authorized keys for each host. Then you can run a script on one host, running something like 'ssh hostname command > log.hostname'
This script might be what you are looking for: It allows you to execute one command (which can be your script) on multiple remote machines via ssh. It's a simple script with bash source available, so you should be able to customize it to your needs:
http://www.heinzi.at/projects/upgradebest.sh/
Yes, you can.
You actually need two small scripts, as follows:
remote_ssh.sh (which takes the name of the machine as its first argument; the rest of the arguments are the script that you want to execute, with its own arguments)
Example: remote_ssh.sh node5 "echo hello world"
remote_ssh.sh as following:
#!/bin/bash
ALL_ARG="$*"
FST_ARG="$1"
REST_ARG="${ALL_ARG##"$FST_ARG"}"   # everything after the machine name
echo "Executing REMOTE COMMAND ON $FST_ARG"
# pass the current directory so the remote side can cd into it
/usr/bin/ssh "$FST_ARG" bash execute_ssh_command.sh "$FST_ARG" "$(pwd)" $REST_ARG
execute_ssh_command.sh as following :
#!/bin/bash
ALL_ARG="$*"
FST_ARG="$1"
DIR_ARG="$2"
REM_ARG="$1 $2"
REST_ARG="${ALL_ARG##"$REM_ARG"}"   # everything after machine name + directory
cd "$DIR_ARG" || exit 1
$REST_ARG
Of course, you have to put these two scripts somewhere on the path on all of your nodes (maybe ~/bin/).
Hope this helps.