I have a series of 7 processes required to run a complex web app that I develop on. I typically start these processes manually like this:
job &>/tmp/term.tail &
term.tail is a fifo (named pipe) that I keep tail running on, so I can see the output of these processes when I need to.
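In other words, the setup is roughly the following (job stands in for any of the seven processes):
mkfifo /tmp/term.tail          # the fifo, created once
tail -f /tmp/term.tail &       # left running so the pipe always has a reader
job &>/tmp/term.tail &         # each process sends its output into the pipe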
I'd like to find a way to start up all the processes within my current shell, but a typical script (shell or ruby) runs within its own shell. Are there any workarounds?
I'm using zsh in iTerm2 on OS X.
You can run commands in the current shell with:
source scriptfile
or
. scriptfile
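For example, a launcher script along these lines (the job names are placeholders for the seven real processes) could be sourced from the interactive shell, and the background jobs then belong to that shell:
# start_jobs.sh -- meant to be sourced, not executed
for job in job1 job2 job3; do      # placeholder names
    "$job" &>/tmp/term.tail &
done
After source start_jobs.sh, the processes show up under jobs and can be managed like any other job of the current shell.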
A side note: your processes will block if they generate much output and nothing is reading from the pipe (i.e. if the tail dies).
Simple question, but when I run something like "evince document.pdf" my program is launched, but my shell window stays blocked until it exits, so I have to use another one.
Is there a way to avoid tying up a shell each time I launch a program from it?
You can put the app into the background by adding &:
$ evince document.pdf &
This would return control to the shell and would keep the app running, provided it does not attempt to read/write on standard input/output. If it does, you may try using nohup or redirect stdio to/from /dev/null.
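For example (a sketch; the redirections keep the detached viewer from blocking on the terminal's stdio):
nohup evince document.pdf >/dev/null 2>&1 &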
I have the following code written in a script named test.csh to start a GUI-based application in the foreground on Solaris Unix. When I run the script and want to kill the GUI process with Ctrl + C, the process is not terminated. If I open the GUI application directly from the terminal, I am able to kill the process with Ctrl + C. Can someone help me understand why I am not able to kill the process when it is invoked from the script?
#! /usr/bin/csh
# some script to set env variables
# GUI Process
cast
Then I execute the script using the following command. I am not able to terminate the vcast process with Ctrl + C.
source test.csh
If the application is being launched into its own process group, the interrupt from Ctrl + C may never reach it. You could add a signal handler to forward the interrupt, or look at the process table to see what the process id of the app is and then kill it. This could also be scripted very easily.
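A minimal way to do that by hand (assuming the GUI process is still named cast, as in the script above):
ps -ef | grep '[c]ast'    # the [c] keeps the grep itself out of the listing
kill PID                  # replace PID with the number from the second column
# or, where pkill is available, simply: pkill cast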
It would be better to execute the script directly instead of sourcing it:
1) First, add #!/bin/csh at the beginning of your script.
2) Make it executable:
$ chmod u+x test.csh
3) Execute it directly:
$ ./test.csh
You should then be able to kill it with Ctrl + C. Also consider that the problem may be caused by something the script itself runs; try debugging it by copy-pasting it into a terminal line by line until you reach the point where it hangs.
Another possible culprit is an infinite while loop that never reaches its exit condition; check for that kind of error too.
On Windows you can use the following command in MATLAB to start a new instance of MATLAB which will run in the background (i.e. you can keep executing commands in your first instance of MATLAB).
system('matlab &')
The analogous call on OS X,
system([matlabroot '/bin/matlab &'])
however, results in the splash image being displayed and then nothing. If I take out the ampersand, the new instance opens as expected. Unfortunately, that won't work for me; I really need to be able to control the first instance of MATLAB while the second is running.
Does anyone know why this discrepancy between the operating systems exists? By the way, I'm using OS X 10.7 and Windows 7 64-bit, with MATLAB R2012a on the Mac and R2012b on the PC.
As some background, I'm trying to write a generic tester for an interactive command line interface that uses the input() function extensively.
Edit: I should have mentioned that the command
/Applications/MATLAB_R2012a.app/bin/matlab &
works as expected from the OS X Terminal: a new instance of MATLAB opens and new commands can be entered into the original terminal. So this problem seems to be specific to the system() function in MATLAB on OS X.
Also, I tried adding that command to a bash script and calling the script from matlab, but had the same problem that I did with putting the command into the system() function.
This is a long shot, but it might be happening because when you invoke the new instance of Matlab from Matlab with the system() command on Unix or OS X, the matlab_helper process forks and runs a shell process to run the new application. If you omit the ampersand, the shell blocks and waits for the program to finish, and system() waits for it, so the first Matlab locks up. And (here's the speculation part) if you add the ampersand, Matlab launches in the background, and then the forked shell exits, which then causes the new Matlab process to exit because its parent process (the shell) has exited. (Windows doesn't have the same parent/child process relationships, process launch mechanism, or shells, which would explain the different behavior.)
You could try prefixing the command with nohup, which protects processes from getting killed by SIGHUP, which might be what's happening here to your second Matlab process.
system(['nohup ' matlabroot '/bin/matlab &'])
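Typed directly in Terminal, the equivalent fully detached shell command would be something like this (a sketch combining nohup with the /dev/null redirection mentioned further down):
nohup /Applications/MATLAB_R2012a.app/bin/matlab >/dev/null 2>&1 &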
You could also try using the OS X open command to launch a new independent instance, something like the following. You may need to fiddle with the options and path, but -n should be what gives you a new instance. It should be pointing at /Applications/MATLAB_R2012a.app; I'm assuming that's what matlabroot returns on OS X.
system(['open -na ' matlabroot])
You could also try the Java process-launching features from within Matlab instead of system(). Runtime.exec() doesn't block the way system() does, and there may be other quirks to system(), like the matlab_helper architecture. Try launching it with java.lang.Runtime from Matlab:
jrt = java.lang.Runtime.getRuntime();
newMatlabProcess = jrt.exec([matlabroot '/bin/matlab']);
You can try the other command-line variants above using this mechanism too, and you may need to redirect stdout to /dev/null, since the new process's input and output are buffered into that newMatlabProcess object.
You can use AppleScript to do this. I do something like this:
! osascript -e "tell application \"Terminal\" to do script \"cd `pwd`;matlab -nojvm -nosplash -r 'why'\""
This example opens a new Matlab instance in the current directory and runs the command "why". You can remove "-nojvm" if you need Java in your background Matlab process.
I am trying to implement a terminal emulator in Java. It is supposed to be able to host both cmd.exe on Windows and bash on Unix-like systems (I would like to support at least Linux and Mac OS X). The problem I have is that both cmd.exe and bash repeat on their standard output whatever I send to their standard input.
For example, in bash, I type "ls", hit enter, at which point the terminal emulator sends the input line to bash's stdin and flushes the stream. The process then outputs the input line again "ls\n" and then the output of the ls command.
This is a problem, because programs other than bash and cmd.exe don't do that. If I run, inside either bash or cmd.exe, the command "python -i", the Python interactive shell does not repeat the input the way bash and cmd.exe do. This means a workaround would have to know which process the output actually came from. I doubt that's what real terminal emulators do.
Running "bash -i" doesn't change this behaviour. As far as I know, cmd.exe doesn't have distinct "interactive" and "noninteractive" modes.
EDIT: I am creating the host process using the ProcessBuilder class. I am reading the stdout and stderr and writing to the stdin of the process using a technique similar to the stream gobbler. I don't set any environment variables before I start the host process. The exact commands I use to start the processes are bash -i for bash and cmd for cmd.exe. I'll try to post minimal code example as soon as I manage to create one.
On Unix, run stty -echo to disable "local echo" (i.e. the terminal echoing back everything that you type). This is usually enabled so that a user can edit what she types.
In your case, bash must somehow be allocating a pseudo-TTY; otherwise it would not echo every command. set -x would have a similar effect, but then you'd see + ls instead of ls in the output.
With cmd.exe, the command @ECHO OFF should achieve the same effect.
Just execute those after the process has been created and it should work.
I'd like to type (at bash)
./start_screen.sh 3 some_cmd with parameters
and have it start up GNU screen with three separate, independent copies of some_cmd with parameters, each running in bash in its own vertically split window. What's the best way to do this? Does someone know how to put the pieces together?
(This is so I can run three worker daemons in the background and monitor them in one window.)
NOTE: alternatives to screen are just fine. In fact, at worst, it's ok if you can't interact with the windows apart from killing them all at once. (I mostly just want to see the outputs in parallel.)
screen executes commands from $HOME/.screenrc on startup by default.
You can override this with the -c option.
Create a temporary file with the commands you want, then run screen -c your-file.
This won't get the default settings you already have in $HOME/.screenrc unless you copy them to the temporary file.
(Disclaimer: I haven't tried this.)
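An equally untested sketch of such a wrapper, generating the temporary file and handing it to screen -c (the worker window titles and the plain split direction are assumptions; recent screen builds accept split -v for a vertical layout):
#!/bin/bash
# Usage: ./start_screen.sh 3 some_cmd with parameters
n=$1
shift
rc=$(mktemp)
for ((i = 1; i <= n; i++)); do
    echo "screen -t worker$i $*" >> "$rc"     # naive about quoting of the command
    if [ "$i" -lt "$n" ]; then
        echo "split" >> "$rc"                 # use "split -v" here if your screen supports it
        echo "focus" >> "$rc"                 # move into the newly created region
    fi
done
screen -c "$rc"
rm -f "$rc"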