Don't wait for a command to finish executing before running the next one - bash

I have a little script to run a search daemon, like:
run.sh:
cd ~/apache-solr
xterm -e java -jar start.jar
sleep 5
cd ~/anotherFolder
#make something else
The problem: after the xterm -e ... command, the script waits for that command to complete before running the next commands.
The question:
Can we run the next command without waiting for the xterm -e ... command to finish executing?
P.S.
Sorry for my English, and thanks for any help.

Or even better, you could use nohup.
Like:
nohup xterm -e java -jar start.jar &
With nohup, your command will not receive a hangup signal (SIGHUP) even if you close your PuTTY session, for example.
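A quick way to check that a nohup'ed job is detached and running (here sleep 30 is a stand-in for the real xterm command):

```shell
# `sleep 30` stands in for `xterm -e java -jar start.jar`
nohup sleep 30 >/dev/null 2>&1 &   # detach; redirecting avoids a nohup.out file
BG=$!
kill -0 "$BG" && echo "still running after launch"   # signal 0 only tests existence
kill "$BG"                          # clean up the demo process
```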

Yes, you can put an & after the line you want to run in the background; that will allow your script to continue while the command is running:
xterm -e java -jar start.jar &
An example:
date
sleep 5
date
> Thu Mar 22 11:57:17 CET 2012
> Thu Mar 22 11:57:22 CET 2012
date
sleep 5 &
date
> Thu Mar 22 11:57:25 CET 2012
> Thu Mar 22 11:57:25 CET 2012

Yes, put & at the end of the command; that starts it as a separate background process (not a thread) and the script continues immediately.

An ampersand at the end of a command will start the command in the background and let the script continue with the next line.
xterm -e java -jar start.jar &

How about
xterm -e java -jar start.jar &
Note the ending & that tells the shell to run the process in the background.
How to know when that command has finished, if you need its results in your script, is another question.
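If you do need the background job's result later, the shell's wait builtin lets you block at exactly the point where you need it. A minimal sketch (a sleep subshell stands in for the real xterm -e ... command):

```shell
# Stand-in for a long-running command such as `xterm -e java -jar start.jar`
(sleep 1; exit 3) &      # run it in the background
BG_PID=$!                # $! holds the PID of the last background job
echo "doing other work"  # the script continues immediately
wait "$BG_PID"           # later, block until that job finishes
echo "exit status: $?"   # wait returns the job's exit status, here 3
```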

Related

Windows bash script to run something from WSL

I am trying to write a Windows batch script to:
Start Windows Subsystem for Linux (WSL) and
cd within WSL to "../myfolder/"
run ./foo first_parameter second_parameter
Wait until finished and exit WSL
cd within Windows to "../myWinFolder/"
run Foo.exe parameter
wait until finished
This is my attempt:
bash -c "cd ../myFolder/ && ./foo first_parameter second_parameter"
cd ..
cd myWinFolder
START /WAIT Foo.exe parameter
But sadly CMD does not wait for WSL to finish before running the EXE.
Anything I can do?
I can confirm that the interop between CMD and WSL does in fact wait for commands to finish. Try the following:
In a file called runit.bat
echo bat01
bash -c "echo bat02; cd ./bash/; ./runit.sh; echo bat03"
echo bat04
In a sub-folder called ./bash/ paste the following in a file called runit.sh
echo sh01
sleep 2s
echo sh02
When you run runit.bat from CMD, you will see a wait of 2 seconds.
You have not specified what is inside your ./foo script. I suspect that it is running a task in the background, or running something that returns immediately. This can be simulated by putting & after the sleep so that it runs in the background within WSL: sleep 2s &. When you do this, you see that there is no pause in the execution of the script.
So I would check ./foo, perhaps adding some echo debug statements inside it, and run it from within WSL first to make sure that it does indeed wait until all its commands are finished before it exits.
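To illustrate the point, here is a sketch of a ./foo-style script (sleep stands in for the real work): if the inner task is backgrounded, the script must wait on it, otherwise it returns to CMD immediately.

```shell
#!/bin/sh
# Sketch of a ./foo-style script whose real work runs in the background
sleep 1 &    # inner task in the background (stand-in for the real work)
wait         # without this line, the script exits immediately
echo "foo done"
```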

Issue Starting up ColdFusion 2018 on Solaris 11.3 on non-root account

I have a Solaris system with 3 users (root, cfruntime, cfdev).
After a successful installation of ColdFusion 2018, the owner of the coldfusion2018 installation is cfruntime.
As cfdev I try starting ColdFusion using the following command
sudo /disktwo/coldfusion2018/cfusion/bin/coldfusion start
This, however, doesn't appear to start ColdFusion normally, but it also doesn't generate any abnormal errors or logs.
Looking at the startup script /disktwo/coldfusion2018/cfusion/bin/coldfusion, the following lines actually start ColdFusion:
CFSTART='su $RUNTIME_USER -c "LD_LIBRARY_PATH=$LD_LIBRARY_PATH;
export LD_LIBRARY_PATH;
cd $CF_DIR/bin;
$JAVA_EXECUTABLE -classpath $CLASSPATH $JVM_ARGS
com.adobe.coldfusion.bootstrap.Bootstrap -start &"'
eval $CFSTART > /dev/null
An interesting observation I made: if I removed the & at the end of CFSTART, ColdFusion would start normally (although I then need to put it in the background with Ctrl-Z and bg).
The ColdFusion process doesn't persist after the startup script exits when started as cfdev or cfruntime, but it starts normally if the script is run as root.
Any thoughts?
Adding nohup before the $JAVA_EXECUTABLE command and redirecting the output with > /dev/null 2>&1 did the trick for me:
CFSTART='su $RUNTIME_USER -c "LD_LIBRARY_PATH=$LD_LIBRARY_PATH;
export LD_LIBRARY_PATH;
cd $CF_DIR/bin;
nohup $JAVA_EXECUTABLE -classpath $CLASSPATH $JVM_ARGS
com.adobe.coldfusion.bootstrap.Bootstrap -start > /dev/null 2>&1 &"'
I found that switching to the runtime user with su $RUNTIME_USER and starting the process in the background caused all jobs started by that shell to be closed once the startup script completed, because a hangup signal (SIGHUP) is sent to all jobs started by that terminal.
The nohup prevents the $JAVA_EXECUTABLE process from being terminated when it receives the hangup signal (SIGHUP).
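This effect is easy to reproduce in any POSIX shell (sleep stands in for the Java process): a nohup'ed child keeps running even when you send it the hangup signal directly.

```shell
nohup sh -c 'sleep 30' >/dev/null 2>&1 &   # child started with SIGHUP ignored
PID=$!
sleep 1            # give nohup time to exec the child
kill -HUP "$PID"   # simulate the SIGHUP sent when the su session ends
sleep 1
kill -0 "$PID" 2>/dev/null && echo "survived SIGHUP"
kill "$PID" 2>/dev/null   # clean up
```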

Run a Terminal script that executes one command and, while that command is running, opens a new tab and runs another command

At the moment I am writing a Java application. To test it, I have to run a server and then a client.
So I want to run this using a bash script:
#!/bin/bash
clear
gradle runServer
osascript -e 'tell application "Terminal" to activate' -e 'tell application "System Events" to tell process "Terminal" to keystroke "t" using command down'
gradle runClient
Problem: the server, once started, does not exit until you close the game, so the next two commands never execute. How can I run them concurrently?
Run the server in the background, then kill it when the script is done.
Here’s an example with a simple HTTP server and client.
#!/bin/bash
date > foo.txt
python -m SimpleHTTPServer 1234 &
SERVER_PID="${!}"
# Automatically terminate server on script exit
trap 'kill "${SERVER_PID}"' 0 1 2 3 15
# Wait for server to start
while ! netstat -an -f inet | grep -q '\.1234 '; do
sleep 0.05
done
# Run client
curl -s http://localhost:1234/foo.txt
Running in another tab gets a lot trickier; this interleaves the output from the client and the server.
$ ./doit.sh
127.0.0.1 - - [19/Sep/2014 23:00:32] "GET /foo.txt HTTP/1.1" 200 -
Fri 19 Sep 2014 23:00:32 MDT
Note the log output from the HTTP server, and the output from the HTTP client. The server is automatically killed afterwards.

shell scripting, run in parallel processes

#!/bin/ksh
##########################################################################
$JAVA_HOME/bin/java -jar SocketListener.jar 8182
run_something_else
exit 0
SocketListener is started, and the shell waits until SocketListener dies.
How can I run run_something_else and SocketListener at the same time?
$JAVA_HOME/bin/java -jar SocketListener.jar 8182 &
Add an ampersand (&) at the end. This returns control of the terminal to the next line and makes your SocketListener run in the background.
nohup can be used to run the process in the background as daemon.
nohup run_something_else &
You could background something else:
nohup run_something_else &
nohup guarantees that run_something_else keeps running even if your terminal closes; it makes the process ignore SIGHUP.

Debugging monit

I find debugging monit to be a major pain. Monit's shell environment basically has nothing in it (no paths or other environment variables). Also, there is no log file that I can find.
The problem is, if the start or stop command in the monit script fails, it is difficult to discern what is wrong with it. Often times it is not as simple as just running the command on the shell because the shell environment is different from the monit shell environment.
What are some techniques that people use to debug monit configurations?
For example, I would be happy to have a monit shell, to test my scripts in, or a log file to see what went wrong.
I've had the same problem. Using monit's verbose command-line option helps a bit, but I found the best way was to create an environment as similar as possible to the monit environment and run the start/stop program from there.
# monit runs as superuser
$ sudo su
# the -i option ignores the inherited environment
# this PATH is what monit supplies by default
$ env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin /bin/sh
# try running start/stop program here
$
I've found the most common problems are environment variable related (especially PATH) or permission-related. You should remember that monit usually runs as root.
Also, if you use as uid myusername in your monit config, then you should change to user myusername before carrying out the test.
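For example, combining the two (the myusername part is a hypothetical placeholder; env -i is what strips the inherited environment):

```shell
# Reproduce monit's bare environment: nothing inherited except an explicit PATH
env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin /bin/sh -c 'env'
# To also match the service's user, prefix the same command with e.g.:
#   sudo -u myusername env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin /bin/sh
```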
Be sure to always double check your conf and monitor your processes by hand before letting monit handle everything. systat(1), top(1) and ps(1) are your friends to figure out resource usage and limits. Knowing the process you monitor is essential too.
Regarding the start and stop scripts, I use a wrapper script to redirect output and inspect the environment and other variables. Something like this:
$ cat monit-wrapper.sh
#!/bin/sh
{
echo "MONIT-WRAPPER date"
date
echo "MONIT-WRAPPER env"
env
echo "MONIT-WRAPPER $@"
"$@"
R=$?
echo "MONIT-WRAPPER exit code $R"
} >/tmp/monit.log 2>&1
Then in monit :
start program = "/home/billitch/bin/monit-wrapper.sh my-real-start-script and args"
stop program = "/home/billitch/bin/monit-wrapper.sh my-real-stop-script and args"
You still have to figure out what information you want in the wrapper, such as process info, IDs, system resource limits, etc.
You can start Monit in verbose/debug mode by adding MONIT_OPTS="-v" to /etc/default/monit (don't forget to restart; /etc/init.d/monit restart).
You can then capture the output using tail -f /var/log/monit.log
[CEST Jun 4 21:10:42] info : Starting Monit 5.17.1 daemon with http interface at [*]:2812
[CEST Jun 4 21:10:42] info : Starting Monit HTTP server at [*]:2812
[CEST Jun 4 21:10:42] info : Monit HTTP server started
[CEST Jun 4 21:10:42] info : 'ocean' Monit 5.17.1 started
[CEST Jun 4 21:10:42] debug : Sending Monit instance changed notification to monit@example.io
[CEST Jun 4 21:10:42] debug : Trying to send mail via smtp.sendgrid.net:587
[CEST Jun 4 21:10:43] debug : Processing postponed events queue
[CEST Jun 4 21:10:43] debug : 'rootfs' succeeded getting filesystem statistics for '/'
[CEST Jun 4 21:10:43] debug : 'rootfs' filesytem flags has not changed
[CEST Jun 4 21:10:43] debug : 'rootfs' inode usage test succeeded [current inode usage=8.5%]
[CEST Jun 4 21:10:43] debug : 'rootfs' space usage test succeeded [current space usage=59.6%]
[CEST Jun 4 21:10:43] debug : 'ws.example.com' succeeded testing protocol [WEBSOCKET] at [ws.example.com]:80/faye [TCP/IP] [response time 114.070 ms]
[CEST Jun 4 21:10:43] debug : 'ws.example.com' connection succeeded to [ws.example.com]:80/faye [TCP/IP]
monit -c /path/to/your/config -v
By default, monit logs to your system message log and you can check there to see what's happening.
Also, depending on your config, you might be logging to a different place
tail -f /var/log/monit
http://mmonit.com/monit/documentation/monit.html#LOGGING
Assuming defaults (as of whatever old version of monit I'm using), you can tail the logs as such:
CentOS:
tail -f /var/log/messages
Ubuntu:
tail -f /var/log/syslog
Mac OSX
tail -f /var/log/system.log
Windows
Here be Dragons
But there is a neato project I found while searching on how to do this out of morbid curiosity: https://github.com/derFunk/monit-windows-agent
Yeah, monit isn't too easy to debug.
Here are a few best practices.
Use a wrapper script that sets up your log file, and write your command arguments into it while you are at it:
shell:
#!/usr/bin/env bash
logfile=/var/log/myjob.log
touch ${logfile}
echo $$ ": ################# Starting " $(date) "########### pid " $$ >> ${logfile}
echo "Command: the-command $@" >> ${logfile} # log your command arguments
{
exec the-command "$@"
} >> ${logfile} 2>&1
That helps a lot.
The other thing I find that helps is to run monit with '-v', which gives you verbosity. So the workflow is
get your wrapper working from the shell "sudo my-wrapper"
then try and get it going from monit, run from the command line with "-v"
then try and get it going from monit, running in the background.
You can also try running monit validate once processes are running, to try and find out if any of them are having problems (and sometimes get more information than you would get in the log files if there are any problems). Beyond that, there's not much more you can do.
