Ant sshexec task unable to execute remote script file separate from session - bash

I have an Ant script, called from Jenkins, that starts a JBoss server after the other deployment tasks. The deployment package already contains a startup script which wraps the JBoss run script:
[...]/bin/run.sh -b ip -c config >/dev/null 2>&1 &
The startup script runs fine when executed manually (i.e. ssh to the server and run sudo ./startup.sh).
Now I'm having trouble invoking this startup script from the sshexec task. The task does execute the startup script and JBoss does get spun up, but it terminates as soon as the task returns and moves on to the next one, much like running run.sh directly and then closing the session.
My task is pretty standard:
<sshexec host="host" username="username" password="password"
command="echo password | sudo -S sh ${JBOSS_HOME}/server/config/startup.sh" />
I'm confused. Shouldn't the startup script above already have covered starting JBoss separately from the session? Any idea how to solve this?
The remote system is Red Hat 6.

Never mind, I found it. I still needed to combine nohup and backgrounding with the startup script, plus the "dirty workaround" from here (which was actually brilliant):
https://unix.stackexchange.com/questions/91065/nohup-sudo-does-not-prompt-for-passwd-and-does-nothing
End result:
echo password | sudo -S env && sudo sh -c 'nohup startup.sh > /dev/null 2>&1 &'
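For anyone hitting the same thing, here is a rough way to sanity-check the behaviour from a plain shell before wiring it back into the sshexec command attribute; the host, password and startup.sh path are placeholders:
# what the original task amounted to: JBoss comes up but dies as soon as
# the ssh session (and with it the sshexec task) goes away
ssh user@host "echo password | sudo -S sh /path/to/startup.sh"
# the working form: the first sudo -S caches the credentials so the second
# sudo runs without prompting, and nohup plus the redirections detach the
# server from the session so it survives the disconnect
ssh user@host "echo password | sudo -S env && sudo sh -c 'nohup /path/to/startup.sh >/dev/null 2>&1 &'"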

Related

I'm having trouble getting a consistent PATH for my teamcity agent

I'm using Ansible to set up TeamCity agents. Part of that process is to add a directory to the PATH in .bashrc and .zshrc. We currently default to zsh. Unfortunately the PATH for the TeamCity agent seems inconsistent.
If I run agent.sh start manually, the PATH is correct. I also have an Ansible task that restarts the service as part of the setup. This gives inconsistent results: the TeamCity agent has the correct PATH on some hosts, but is missing the directory added in .zshrc on others.
Ansible Task:
- name: Stop kill agent.sh script
  command: sh /opt/teamcity-agent/buildAgent/bin/agent.sh stop kill -f
  become: yes
- name: Start agent.sh script
  command: sh /opt/teamcity-agent/buildAgent/bin/agent.sh start
agent.sh
...
if [ "$command_name" = "start" ]; then
NOHUP=""
nohup id >/dev/null 2>&1 && NOHUP=nohup
$NOHUP "$java_exec" $TEAMCITY_LAUNCHER_OPTS_ACTUAL -cp $TEAMCITY_LAUNCHER_CLASSPATH jetbrains.buildServer.agent.Launcher $TEAMCITY_AGENT_OPTS_ACTUAL jetbrains.buildServer.agent.AgentMain -file $CONFIG_FILE > "$LOG_DIR/output.log" 2> "$LOG_DIR/error.log" &
launcher_pid=$!
echo $launcher_pid > $PID_FILE
echo "Done [$launcher_pid], see log at `cd $LOG_DIR && pwd`/teamcity-agent.log"
...
I suspect one possibility is that agent.sh isn't run from an interactive shell, and that I should set the PATH in .zshenv instead. But the fact that this works on some of the agents is throwing me off.
How does a TeamCity agent get environment variables from the host machine? And how can I fix this problem so that I have a consistent environment?
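One quick way to illustrate the suspicion above is to compare what the different shell invocations actually see; this is just a diagnostic sketch, assuming zsh is the login shell and the extra directory was added in .zshrc:
# interactive zsh sources ~/.zshrc, so the directory added there shows up
zsh -i -c 'echo $PATH'
# non-interactive, non-login zsh sources only ~/.zshenv, not ~/.zshrc
zsh -c 'echo $PATH'
# agent.sh is launched with plain sh by the Ansible task, so it simply
# inherits whatever PATH the process that started it happened to have
sh -c 'echo $PATH'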

Why nohup on execute resource doesn't work - Chef recipe

I'm trying to deploy a Django app (dev mode) using Chef. The problem is that when the recipe executes, the server isn't kept alive. The command works when I log in and run it myself, but not from the recipe, presumably because the session is different. Any suggestions are helpful.
execute 'django_run' do
  user 'root'
  cwd '/var/www/my-app/'
  command 'source ./.venv/bin/activate && sudo -E nohup python2 ./manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &'
end
I suspect some weirdness with sudo and & is at play here. Try using sudo -b instead of the trailing ampersand. Also, a better way to do this may be to use the Chef service resource instead of execute:
https://docs.chef.io/resources/service/
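A minimal sketch of what the command string inside the execute resource could look like with sudo -b doing the backgrounding; the paths and the Python invocation are taken from the question, but the exact form is an untested assumption:
# sudo -b backgrounds the command itself, so nothing depends on a trailing &
# surviving the sudo boundary; nohup and the redirections keep the dev server
# alive after Chef's shell exits
sudo -b -E sh -c '. ./.venv/bin/activate && nohup python2 ./manage.py runserver 0.0.0.0:8000 >/dev/null 2>&1'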

changing user in upstart script

I have an Upstart script that does some logging tasks. The script testjob.conf looks like this:
description "Start Logging"
start on runlevel [2345]
script
  sudo -u user_name echo Test Job ran at `date` >> /home/user_name/Desktop/jobs.log
end script
Then I run the script with sudo service testjob start and get testjob stop/waiting as the result. The file jobs.log is created and the logging is done; however, the file is owned by root. I wanted to change this, and hence added the sudo -u user_name part in front of the command, as mentioned in a similar post.
However, this does not seem to do the trick. Is there another way to do this?
The log file is created by the >> redirection, which runs in the context of the root shell that also starts sudo.
Try making the process that sudo starts create the file, for instance with:
sudo -u user_name sh -c 'echo Test Job ran at `date` >> /home/user_name/Desktop/jobs.log'
In this case the sh running as user_name will "execute" the >> redirection.
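A quick way to see the difference, assuming it is run as root the way the Upstart script block is:
# the redirection is performed by the invoking root shell, so jobs.log ends up root-owned
sudo -u user_name echo test >> /home/user_name/Desktop/jobs.log
# here the sh started by sudo, running as user_name, performs the redirection itself
sudo -u user_name sh -c 'echo test >> /home/user_name/Desktop/jobs.log'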

Jenkins not able to execute ssh script

I have the below ssh script which I am trying to execute from Jenkins; it runs fine when I invoke it from a shell.
#ssh to remote machine
sshpass ssh 10.40.94.36 -l root -o StrictHostKeyChecking=no
#Remove old slave.jar
rm -f slave.jar
#download slave.jar to that machine
wget http://10.40.95.14:8080/jnlpJars/slave.jar
pwd
#make new dir to that machine
mkdir //var//Jenkins
# make slave online
java -jar slave.jar -jnlpUrl http://10.40.95.14:8080/computer/nodeV/slave-agent.jnlp
When I execute this script through a shell, it downloads the jar file to the remote machine and also makes the new directory there. But when I invoke it through the Jenkins shell plugin, every command runs separately, so the jar gets downloaded on the master and the directory also gets created on the master.
Also, I am using sshpass for automated password login, which sometimes fails. Is there any other way of doing this?
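One way to make the whole block run on the remote host rather than line by line on the master is to hand it to the remote shell in a single ssh invocation, for example via a heredoc; a rough sketch using the hosts and URLs from the question, where the sshpass -p usage and the cleaned-up mkdir path are assumptions:
# run the whole sequence on 10.40.94.36 in one ssh session instead of letting
# Jenkins execute each line locally on the master
sshpass -p 'password' ssh -o StrictHostKeyChecking=no root@10.40.94.36 'sh -s' <<'EOF'
rm -f slave.jar
wget http://10.40.95.14:8080/jnlpJars/slave.jar
pwd
mkdir -p /var/Jenkins
java -jar slave.jar -jnlpUrl http://10.40.95.14:8080/computer/nodeV/slave-agent.jnlp
EOF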

sending script over ssh using ruby

I'm attempting to write a Ruby script that will shell out over ssh and start a Resque worker for one of my apps.
The command that I generate from the params given in the console looks like this:
command = "ssh user##{#ip} 'cd /path/to/app; bundle exec rake resque:work QUEUE=#{#queue}&'"
`command`
The command is interpolated correctly and everything looks great. I'm asked to input the password for the ssh command and then nothing happens. I'm pretty sure my syntax is correct for making an ssh connection and running a line of code within that connection. ssh user#host 'execute command'
I've done a simpler command that only runs the mac say terminal command and that worked fine
command = "ssh user##{#ip} 'say #{#queue}'"
`command`
I'm running the rake task in the background because I have run that line manually inside an ssh session before, and it only keeps the worker alive if you run the process in the background.
Any thoughts? Thanks!
I figured it out.
It was an RVM thing. I needed to include . .bash_profile at the beginning of the scripts I wanted to run.
So...
"ssh -f hostname '. .bash_profile && cd /path/to/app && bundle exec rake resque:work QUEUE=queue'" is what I needed to make it work.
Thanks for the help @Casper
Ssh won't exit the session until all processes that were launched by the command argument have finished. It doesn't matter if you run them in the background with &.
To get around this problem just use the -f switch:
-f Requests ssh to go to background just before command execution. This is
useful if ssh is going to ask for passwords or passphrases, but the user
wants it in the background. This implies -n. The recommended way to start
X11 programs at a remote site is with something like ssh -f host xterm.
I.e.
"ssh -f user##{#ip} '... bundle exec rake resque:work QUEUE=#{#queue}'"
EDIT
In fact, looking more closely at the problem, it seems ssh is just waiting for the remote side to close stdin and stdout. You can test it easily like this:
This hangs:
ssh localhost 'sleep 10 &'
This does not hang:
ssh localhost 'sleep 10 </dev/null >/dev/null &'
So I assume the last version is actually pretty closely equivalent to running with -f.
