How do I make JMeter (running remotely on another machine) run in the background, so that when I turn off my PC (and thus kill the PuTTY shell) it will still continue running?
Use the following command to run JMeter in the background:
nohup ./jmeter.sh -n -t test.jmx &
When you run the above command, a file named nohup.out will be created in the same folder, and it will store the console output.
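To confirm it keeps running after you disconnect, something along these lines should work (file and process names assume the defaults from the command above):
tail -f nohup.out                # follow the console output JMeter writes
pgrep -f "jmeter.sh -n -t"       # check that the non-GUI run is still alive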
You can run in the background without a master; this is one of the reasons I wondered whether to choose a master-slave configuration at all.
./jmeter.sh -n -t "mytest.jmx" &
As @Arbaz Alam mentioned, you can use nohup; see the jmeter nohup answer.
nohup "./jmeter.sh -n -t /home/chamith/WSO2MB/new/apache-jmeter-2.13/bin/GamesSubscriber.jmx -l result.jtl" > /dev/null 2>&1 &
You can check a mailing-list answer:
ssh $jmeterserverone "setsid /home/chuesgen/jakarta-jmeter/bin/jmeter-server >> ~/jmeterServer.out 2>&1 &"
I have the following bash script, which runs two processes in parallel (two bash scripts internally). I need the two bash scripts to run in parallel, and once both are finished I need the total execution time. But the issue is that the first bash script, ./cloud.sh, doesn't run, although when I run it individually it runs successfully. I am running the main test bash script with sudo rights.
Test
#!/bin/bash
start=$(date +%s%3N)
./cloud.sh &
./client.sh &
end=$(date +%s%3N)
echo "Time: $((duration=end-start))ms."
Client.sh
#!/bin/bash
sudo docker build -t testing .
Cloud.sh
#!/bin/bash
start=$(date +%s%3N)
ssh kmaster@192.168.101.238 'docker build -t testing .'
end=$(date +%s%3N)
echo "cloud: $((duration=end-start)) ms"
A background process won't be able to get keyboard input from you. As soon as it tries to, it will receive a SIGTTIN signal, which will stop it (until it is brought back to the foreground).
I suspect that one or both of your scripts asks you to enter something, typically a password.
Solution 1: configure sudo and ssh in order to make them password-less. With ssh this is easy (ssh key), with sudo this is a security risk. If docker build needs you to enter something, you are doomed.
Solution 2: make only the ssh script (Cloud.sh) password-less and keep the sudo script (Client.sh) in foreground. Here again, if the remote docker build needs you to enter something, this won't work.
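For the ssh half of solution 1, a minimal sketch of making the hop key-based (user and host taken from Cloud.sh; run this as the user that launches the scripts):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""   # create a key without a passphrase (illustration only)
ssh-copy-id kmaster@192.168.101.238                # copy the public key to the remote docker host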
How to wait for your background processes? Just use the wait builtin (help wait).
An example with solution 2:
#!/bin/bash
start=$(date +%s%3N)
./cloud.sh &
./client.sh
wait
end=$(date +%s%3N)
echo "Time: $((duration=end-start))ms."
I'd like to have a Jenkins job which kills all processes on port 5000 (bash).
The easy solution
fuser -k 5000/tcp
works fine when I execute this command in a terminal, but on Jenkins ("Execute shell") it marks the build as a failure.
I have also tried
kill $(lsof -t -i:5000)
but again, while it works in a regular terminal, on Jenkins I get
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
Build step 'Execute shell' marked build as failure
Any ideas how to fix this?
I had the same problem. It did not work when the process was not running. Bash just did it, but Jenkins failed.
You can add an || true to your Jenkins job to tell Jenkins to proceed with the job even if the bash command fails.
So it's:
fuser -k 5000/tcp || true
See also: don't fail jenkins build if execute shell fails
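If you prefer the lsof variant, the same guard can be written so an empty result never reaches kill; a sketch for the "Execute shell" step (port number as in the question; finding nothing is treated as "nothing to do"):
pid=$(lsof -t -i:5000 || true)   # PIDs listening on port 5000, empty if none
if [ -n "$pid" ]; then
  kill $pid
fi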
Try putting the command with its full path:
/usr/bin/kill $(/usr/sbin/lsof -t -i:5000)
If the user running the Jenkins service is not the same as the user owning the process on port 5000, you won't be able to kill the process. Maybe you will need to run this with sudo.
Try this:
su - jenkins   # or whichever user runs Jenkins
/usr/bin/kill $(/usr/sbin/lsof -t -i:5000)
Maybe the Jenkins user can't see the processes because of privileges, so the expansion of $(lsof ..) is empty.
The error output may not be complete, because if the lsof call fails there will be a message on stderr.
The problem is that $ is a special character in Jenkins commands: it means you are referring to an environment variable.
You should try writing the command wrapped in single quotes.
I was facing the same issue, and in many cases standard input/output is disabled (especially when you ssh to the target machine). What you can do is create an executable shell script on the target server and execute that file.
So the steps would look something like this:
Step 1: create the shell script
cat > kill_process.sh << EOF
target_port_num=\`lsof -i:\${1} -t\`;
echo "Kill process at port is :: \${target_port_num}"
kill -9 \${target_port_num}
EOF
Step 2: make it executable
chmod +x kill_process.sh
Step 3: execute the script, passing the port number
./kill_process.sh 3005
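If the target machine is only reachable over ssh, as mentioned above, the call might look like this (user and host are placeholders):
ssh deploy@target-host './kill_process.sh 5000'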
Hope this helps.
So I've tried googling and reading a few questions on here as well as elsewhere and I can't seem to find an answer.
I'm using Jenkins and executing a shell script to scp a .jar file to a server and then sshing in, running the build, and then exiting out of the server. However, I cannot get out of that server for the life of me. This is what I'm running, minus the sensitive information:
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &; exit'
I've tried doing && exit and exit;, but none of it will get me out of the server, and Jenkins just spins forever. So the Jenkins build never actually finishes.
Any help would be sweet! I appreciate it.
So I just took off the exit and ran ssh -f root@x.x.x.x before the command, and it worked. The -f just runs the ssh command in the background so Jenkins isn't sitting around waiting.
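Put together, the working step might look roughly like this (IP and jar path as in the question; the output and stdin redirects are an addition beyond what the asker reported, so nothing keeps the channel open):
ssh -f root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod > /dev/null 2>&1 < /dev/null &'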
The usual way of starting a command and sending it to the background is nohup command &.
Try this. This is working for me. Read the source for more information.
nohup some-background-task &> /dev/null # No space between & and > !
Example :
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &> /dev/null'
There is no need for the exit keyword.
source : https://blog.jakubholy.net/2015/02/17/fix-shell-script-run-via-ssh-hanging-jenkins/
I ssh to another server and run a shell script like this: nohup ./script.sh 1>/dev/null 2>&1 &
Then I type exit to exit from the server. However, it just hangs. The server is Solaris.
How can I exit properly without it hanging?
Thanks.
I assume that this script is a long-running one. In that case you need to detach the process from the terminal that you wish to close when you terminate your ssh session.
Actually, you have already done most of the work by reassigning both stdout and stderr to /dev/null; however, you didn't do that for stdin.
I used the test case of:
ssh localhost
nohup sleep 10m &> /dev/null &
^D
# hangs
While
ssh localhost
nohup sleep 10m &> /dev/null < /dev/null &
^D
# exits
I second the recommendation to use the excellent GNU screen, which will do this service for you, among others.
Oh, and have you considered running the script directly rather than within a shell? I.e.:
ssh user@host script.sh
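Applied to the command from the question, adding the missing stdin redirect might look like this (host name is a placeholder):
ssh solaris-host 'nohup ./script.sh 1>/dev/null 2>&1 </dev/null &'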
If you're trying to leave a command running remotely after you close your SSH link, I strongly recommend you use screen and learn to detach the screen. That's much better than leaving background processes around; it also lets you reconnect and see what the process is up to.
Since you haven't provided us with script.sh, I don't think we can know for sure why the command is hanging.
You can use the ssh escape sequence:
~.
It closes the ssh session.
sh -c ./script.sh &
In a bash script I execute a command on a remote machine through ssh. If the user breaks the script by pressing Ctrl+C, it only stops the script, not even the ssh client. Moreover, even if I kill the ssh client, the remote command is still running...
How can I make bash kill the local ssh client and the remote command invocation on Ctrl+C?
A simple script:
#!/bin/bash
ssh -n -x root@db-host 'mysqldump db' -r file.sql
Eventually I found a solution like this:
#!/bin/bash
ssh -t -x root@db-host 'mysqldump db' -r file.sql
So I use '-t' instead of '-n'.
Removing '-n', or using a different user than root, does not help.
When your ssh session ends, your shell will get a SIGHUP (hang-up signal). You need to make sure it sends that on to all processes started from it. For bash, try shopt -s huponexit; your_command. That may not work, because the man page says huponexit only works for interactive shells.
I remember running into this with users running jobs on my cluster, wondering whether they had to use nohup or not (to get the opposite behaviour of what you want), but I can't find anything in the bash man page about whether child processes ignore SIGHUP by default. Hopefully huponexit will do the trick. (You could put that shopt in your .bashrc instead of on the command line, I think.)
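A sketch of the huponexit suggestion wrapped around the original command from the question (untested, and as noted it may only take effect in interactive shells):
#!/bin/bash
shopt -s huponexit    # ask bash to forward SIGHUP to its jobs when the shell exits
ssh -n -x root@db-host 'mysqldump db' -r file.sql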
Your ssh -t should work, though, since when the connection closes, reads from the terminal will get EOF or an error, and that makes most programs exit.
Do you know what the options you're passing to ssh do? I'm guessing not. The -n option redirects input from /dev/null, so the process you're running on the remote host probably isn't seeing SIGINT from Ctrl-C.
Now, let's talk about how bad an idea it is to allow remote root logins:
It's a really, really bad idea. Have a look at HOWTO: set up ssh keys for some suggestions on how to securely manage remote process execution over ssh. If you need to run something with privileges remotely, you'll probably want a solution that involves an ssh public key with an embedded command and a script that runs as root courtesy of sudo.
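A rough sketch of the "public key with an embedded command" idea (the key material, script path and restriction options shown are illustrative, not from the question):
# In ~/.ssh/authorized_keys on db-host, restrict the key to a single command:
command="/usr/local/bin/run-mysqldump.sh",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAAC3Nza... backup@client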
trap "some_command" SIGINT
will execute some_command locally when you press Ctrl+C. help trap will tell you about its other options.
Regarding the ssh issue, I don't know much about ssh. Maybe you can make it call ssh -n -x root@db-host 'killall mysqldump' instead of some_command to kill the remote command?
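A sketch of that combination, reusing the host and command from the question (whether killall hits exactly the right remote process is an assumption):
#!/bin/bash
# On Ctrl+C, try to stop the remote dump before this script dies.
trap 'ssh -n -x root@db-host "killall mysqldump"' SIGINT
ssh -n -x root@db-host 'mysqldump db' -r file.sql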
What if you don't want to require using "ssh -t" (for those as forgetful as I am)?
I stumbled upon looking at the parent PID, because Ctrl+C from the initiating session results in the ssh-launched process on the remote host exiting, although its child process continues. By way of example, here's the script that is on the remote server:
#!/bin/bash
Answer=(Alive Dead)
Index=0
while [ ${Index} -eq 0 ]; do
    # kill -0 only tests whether the parent process (the ssh-spawned shell) still exists
    if ! kill -0 ${PPID} 2> /dev/null ; then Index=1; fi
    echo "Parent PID ${PPID} is ${Answer[$Index]} at $(date +%Y%m%d%H%M%S%Z)" > ~/NowTime.txt
    sleep 1
done
I then invoke it with "ssh remote_server ./test_script.sh"
"watch cat ~/NowTime.txt" on the remote server shows the timestamp in the file increasing and declaring that the parent process is alive; once I hit CTRL/C in the launching process, the script on the remote server notes that its parent process has died, and the script exits.