I'm using Envoy to provision a remote server. Provisioning is done by pulling a bash script from a private repo and then executing it.
The bash script asks for confirmation (yes/no, using bash's read -p); it works as expected when I'm connected to the remote server: the script waits for user input.
Envoy, however, seems to ignore any prompt. Is this expected behavior?
Any workaround?
Yes, this is expected. There's nothing for read to read from, so it doesn't wait.
You have a few options.
Rewrite your script to use a config file when there's no terminal to prompt from.
Use something like [ -t 0 ] to test whether standard input is a terminal, and load a configuration file with defaults otherwise. The simplest way is to have a file containing the appropriate variable assignments and just source it (. defaults.sh or whatever). You don't even need the -t test if you source the defaults first, since anything the user inputs will then override the default values. See the sketch after this list.
Rewrite your script to have sane defaults.
Rewrite whatever runs the script to provide your script's input via a pipeline or file redirection (e.g. printf 'answer 1\nanswer 2\n' | ./script.sh or ./script.sh <answerfile).
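For instance, a minimal sketch of the terminal test combined with sourced defaults (defaults.sh and the variable name are placeholders):
#!/bin/bash
# Load the defaults first; anything read below overrides them.
. ./defaults.sh

if [ -t 0 ]; then
    # stdin is a terminal, so it is safe to prompt.
    read -p "Continue? [y/n] " confirm
fi

echo "confirm=$confirm"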
In one of my bash scripts I want to read and use a variable's value from another script which is on a remote machine.
How should I go about resolving this? Any related info would be helpful.
Thanks in advance!
How about this (which is code I cannot currently test myself):
text=$(ssh yourname@yourmachine 'grep uploadRate= /root/yourscript')
It assumes that the value of the variable is contained in one line. The variable text now contains your variable assignment, presumably something like
uploadRate=1MB/s
There are several ways to convert the text/code into a real variable assignment in your current script, like evaluating the string or using grep. I would recommend
uploadRate=${text#*=}
to just remove the part up to and including the =.
Edit: One more caveat to mention is that this only works if the original assignment does not itself contain variable references, like in
uploadRate=1000*${kB}/s
ssh user@machine 'command'
will print the standard output of the remote command.
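Putting it together, a sketch using the names from the example above (the hostname and script path are placeholders):
#!/bin/bash
# Fetch the assignment line from the remote script (assumed to be a single line).
text=$(ssh yourname@yourmachine 'grep uploadRate= /root/yourscript')

# Strip everything up to and including the '=' to keep only the value.
uploadRate=${text#*=}
echo "Remote upload rate: $uploadRate"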
I can suggest at least two ways:
1) You can redirect the output to a file on the remote server and copy it to your system with the scp command; your script on your machine should then read that file as an argument.
Script on your machine:
read -t 50 -p "Waiting for argument: " filename
It waits for the output from the remote machine. Then you can copy the file over:
sshpass -p<password> scp user@host:/Path/to/file /path/to/script/
What you need to do is tell the script on your machine that the file fetched by the scp command is its argument ($1).
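A rough sketch of that first approach (all paths and names are placeholders, and sshpass is assumed to be installed):
#!/bin/bash
# Fetch the output file produced on the remote server...
sshpass -p "$PASSWORD" scp user@host:/path/to/file /tmp/remote_output
# ...then hand it to the local script as its first argument ($1).
./myscript.sh /tmp/remote_output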
2) Run the script from your machine:
#!/bin/bash
script='
#Your commands
'
sshpass -p<password> ssh user@host "$script"
And there are also other ways to run a script that does something on a remote machine.
I am using a shell script in Jenkins that, at a certain point, uploads a file to a server using curl. I would like to see whatever output curl produces but also check whether it is the output I expect. If it isn't, then I want to set the shell error code to > 0 so that Jenkins knows the script failed.
I first tried using curl -f, but this causes the pipe to be cut as soon as the upload fails and the error output never gets to the client. Then I tried something like this:
curl ...params... | tee /dev/tty | \
xargs -I{} test "Expected output string" = '{}'
This works from a normal SSH shell but in the Jenkins console output I see:
tee: /dev/tty: No such device or address
I'm not sure why this is since I thought Jenkins was communicating with the slave using a normal SSH shell. In any case, the whole xargs + test thing strikes me as a bit of a hack.
Is there a way to accomplish this in Jenkins so that I can see the output and also test whether it matches a specific string?
When Jenkins communicates with a slave via SSH, no terminal is allocated, so there is no /dev/tty device for that process.
Maybe you can send it to /dev/stderr instead? That will be the terminal in an interactive session and a log file in a non-interactive session.
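Another way to sidestep /dev/tty entirely is to capture the output in a shell variable, echo it so it shows up in the Jenkins console, and do the comparison yourself (a sketch; the curl parameters and the expected string are placeholders from the question):
output=$(curl ...params...)        # capture curl's stdout
echo "$output"                     # still visible in the Jenkins console log
if [ "$output" != "Expected output string" ]; then
    exit 1                         # non-zero status marks the build as failed
fi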
Have you thought about using the Publish over SSH Plugin instead of using curl? Might save you some headache.
If you just copy the file from master to slave, there is also a plugin for that, the Copy To Slave Plugin.
Cannot write any comments yet, so I had to post it as an answer.
My host upgraded my version of FreeBSD and now one of my scripts is broken. The script simply uploads a data feed to google for their merchant service.
The script (that was working prior to the upgrade):
ftp ftp://myusername:mypassword@uploads.google.com/<<END_SCRIPT
ascii
put /usr/www/users/myname/feeds/mymerchantfile.txt mymerchantfile.txt
exit
END_SCRIPT
Now the script says "unknown host". The same script works on OSX.
I've tried removing the "ftp://"; no effect.
I can log in from the command line if I enter the username and password manually.
I've searched around for other solutions and have also tried the following:
HOST='uploads.google.com'
USER='myusername'
PASSWD='mypassword'
ftp -dni <<END_SCRIPT
open $HOST
quote USER $USER
quote PASS $PASSWD
ascii
put /usr/www/users/myname/feeds/mymerchantfile.txt mymerchantfile.txt
END_SCRIPT
And
HOST='uploads.google.com'
USER='myusername'
PASSWD='mypassword'
ftp -dni <<END_SCRIPT
open $HOST
user $USER $PASSWD
ascii
put /usr/www/users/myname/feeds/mymerchantfile.txt mymerchantfile.txt
END_SCRIPT
Nothing I can find online seems to be doing the trick. Does anyone have any other ideas? I don't want to use a .netrc file since it is executed by cron under a different user.
ftp(1) shows that there is a simple -u command line switch to upload a file; and since ascii is the default (shudder), maybe you can replace your whole script with one command line:
ftp -u ftp://username:password@uploads.google.com/mymerchantfile.txt\
/usr/www/users/myname/feeds/mymerchantfile.txt
(Long line wrapped with a backslash; feel free to remove it and place everything on one line.)
ftp $HOSTNAME <<EOFEOF
$USER
$PASS
ascii
put $LOCALFILE $REMOTETEMPFILE
rename $REMOTETEMPFILE $REMOTEFINALFILE
EOFEOF
Please note that the above code can easily be broken by, for example, spaces in the variables in question. Also, this method gives you virtually no way to detect and handle failure reliably.
Look into the expect tool if you haven't already. You may find that it solves problems you didn't know you had.
Some ideas:
Just a thought: since this is executed in a subshell, which should inherit correctly from the parent, does env show any difference when run from within the script compared to from the shell?
Do you use a correct "shebang"?
Any proxy that requires authentication?
Can you ping the host?
In BSD, you can create a netrc file that ftp can use for logging in. You can specify the netrc file in your ftp command with the -N parameter; otherwise, the default ($HOME/.netrc) is used.
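For example, a sketch with placeholder credentials (keep the file mode 600, since it stores a password). A netrc file such as /home/myuser/google.netrc containing:
machine uploads.google.com login myusername password mypassword
can then be passed to ftp explicitly, so the cron user's own $HOME/.netrc is never consulted:
ftp -N /home/myuser/google.netrc uploads.google.com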
Can you check if there's a difference in the environment between your shell login and the cron job? From your login, run env, and look out for ftp_proxy and http_proxy.
Next, include a line in the cron-job that will dump the environment, e.g. env >/tmp/your.env.
Maybe there's some difference... Also, did you double-check your usage of the -n switch?
Question
How can one specify a user's emacs init file to load with emacsclient?
emacsclient does not understand '-u user', and '-a ALTERNATE-EDITOR' does not allow me to quote it and provide '-u user'.
For example, running:
/usr/bin/emacsclient -n -a '/usr/bin/emacs -u <username>' ~<username>/notes.txt
returns
/usr/bin/emacsclient: error executing alternate editor "/usr/bin/emacs -u <username>"
Background
I'm using emacs version 23.1.1 and emacsclient version 23.1.
emacs itself supports '-u user' to load a specified user's init file.
I use the following bash function in my aliases file to invoke emacs
# a wrapper is needed to sandwich multiple command line arguments in bash
# 2>/dev/null hides
# "emacsclient: can't find socket; have you started the server?"
emacs_wrapper () {
if [ 0 -eq $# ]
then
/usr/bin/emacsclient -n -a /usr/bin/emacs ~<username>/notes.txt 2>/dev/null &
else
/usr/bin/emacsclient -n -a /usr/bin/emacs "$@" 2>/dev/null &
fi
}
alias x='emacs_wrapper'
Typing x followed by a list of files:
Connects to an existing emacs server if one is running, otherwise starts a new one
Executes as a background process
Opens the list of files or my notes file if no files are provided
This works great when I'm logged in as myself. However, many production boxes require me to log in as a production user. I've separated my aliases into a bash script, so I can get my aliases and bash functions by simply running:
. ~<username>/alias.sh
Unfortunately, this won't let me use my .emacs file (~<username>/.emacs) :(
This problem has been driving me crazy.
If you can't include command-line arguments in your alternate editor specification, then simply write a shell script that does that for you, and supply that as the alternate editor argument instead.
#!/bin/sh
# Load <username>'s init file, then pass through any arguments from emacsclient.
emacs -u <username> "$@"
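Then make the script executable and pass its path as the alternate editor (the script path here is hypothetical):
chmod +x ~/bin/emacs-as-user
/usr/bin/emacsclient -n -a ~/bin/emacs-as-user ~<username>/notes.txt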
The point of the server/client model is that you have an existing Emacs with its own set of configurations, and then you connect via one or more clients.
What should the client do if the server was already configured to show the menu bar (a global setting), and the client says not to show it? What if another client attaches saying to show it?
If you want to use different settings for Emacs, use different emacs sessions.
I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines and prints out whether each is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually using ssh to execute this script.
Suppose I'm logged into node 1. Is there any way for me to log in to nodes 2-12 automatically using SSH, execute the ping script, pipe the results to a file, log out, and proceed to the next machine? Some kind of bash shell script?
I'm afraid I'm at a loss here since I haven't had any experience with shell scripting before.
Since the script is on the other machines, you can just have ssh run the command for you there:
ssh $hostname my_script >> results_file
When you specify a command like that, it's executed instead of the login shell.
I'll leave it up to you to figure out how to loop over hostnames!
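For example, a minimal sketch of such a loop (the node names, script name, and results file are placeholders):
#!/bin/bash
# Run the ping script on nodes 2-12, appending each result locally.
for i in $(seq 2 12); do
    ssh "node$i" my_script >> "results_node$i"
done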
One trick you'll need is setting up pre-authorized keys for each host. Then you can run a script on one host, running something like 'ssh hostname command > log.hostname'.
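Setting up the keys usually looks something like this, run once for each remote host (user and hostname are placeholders):
ssh-keygen -t rsa            # accept the defaults; an empty passphrase allows unattended logins
ssh-copy-id user@hostname    # installs your public key in the remote authorized_keys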
This script might be what you are looking for: it allows you to execute one command (which can be your script) on multiple remote machines via ssh. It's a simple script with the bash source available, so you should be able to customize it to your needs:
http://www.heinzi.at/projects/upgradebest.sh/
Yes, you can.
You actually need 2 small scripts, as follows:
remote_ssh.sh (which takes the name of the machine as its first argument; the rest of the arguments are the script you want to execute, with its own arguments)
Example: remote_ssh.sh node5 "echo hello world"
remote_ssh.sh is as follows:
#!/bin/bash
ALL_ARG=$@                        # all arguments as one string
FST_ARG=$1                        # the target machine
REST_ARG=${ALL_ARG##$FST_ARG}     # everything after the machine name
echo "Executing REMOTE COMMAND ON $FST_ARG"
/usr/bin/ssh $FST_ARG bash execute_ssh_command.sh $FST_ARG $(pwd) $REST_ARG
execute_ssh_command.sh is as follows:
#!/bin/bash
ALL_ARG=$@                        # all arguments as one string
FST_ARG=$1                        # originating machine
DIR_ARG=$2                        # directory to run the command in
REM_ARG="$1 $2"
REST_ARG=${ALL_ARG##$REM_ARG}     # the command itself, with its arguments
cd $DIR_ARG
$REST_ARG
Of course, you have to have these 2 scripts in the path on all your nodes (maybe ~/bin/).
Hope that's helpful.