Simple question, but when I run something like evince document.pdf, my program is launched, yet my shell window stays blocked until it exits, so I have to use another one.
Is there a way to avoid tying up the shell each time I launch some program from it?
You can put the app into the background by appending &:
# evince document.pdf &
This would return control to the shell and would keep the app running, provided it does not attempt to read/write on standard input/output. If it does, you may try using nohup or redirect stdio to/from /dev/null.
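If it does touch the terminal, a minimal sketch combining both ideas (nohup keeps the app alive, the redirection detaches its output):
nohup evince document.pdf >/dev/null 2>&1 &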
Related
I am trying to write a script that opens a command-line application (sagemath in this case), sends a certain command down the pipe on startup (attaching a script), and does not close the application at the end.
I tried something like:
#!/bin/bash
echo "load(\"script.sage\")" | sage
This, of course, opens sage, loads the script, prints the output of the script, and closes sage. Adding & at the end of the last line didn't work.
I know that technically I can add this script to the list of scripts that are always loaded on startup, but that is not what I want. I thought it might be done by making a symlink to my script in some directory, but I am not sure whether such a directory exists and where it is.
Any suggestions?
Edit:
I didn't know about Expect (I'm a youngster in Linux). Reading about it a bit, following Mark's suggestion, I managed to solve this. If this is of any interest to anyone in the future, this does the trick:
#!/usr/bin/expect
set timeout 20
spawn sage
expect "sage:"
send "load(\"script.sage\")\n"
interact
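To run it, save the script under some name (start_sage.exp is just an illustrative choice here), make it executable, and launch it directly:
chmod +x start_sage.exp
./start_sage.exp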
You could use screen, depending on how dynamically you need this script to run. See http://linux.die.net/man/1/screen for info on how to use it.
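A rough sketch of that approach (the session name sage is arbitrary): start sage detached in a named session, inject the load command, and reattach whenever you want the prompt:
screen -dmS sage sage
screen -S sage -X stuff $'load("script.sage")\n'
screen -r sage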
You can either:
Use nohup to start the program, e.g. echo "load(\"script.sage\")" | nohup sage &.
Or, you can use the disown command, as sketched below.
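A minimal sketch of the disown variant (assuming this is the shell's current background job):
echo "load(\"script.sage\")" | sage &
disown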
I have a shell script that uses echo to give continuous output (the progress of an rsync), which I am running via AppleScript so it has administrator privileges. Before, I was using NSTask to run the shell script, but I couldn't find a way to run it with the privileges it needed, so now I am using AppleScript to run it. When it was running via NSTask, I could use an output pipe and waitForDataInBackgroundAndNotify to get the continuous output and put it into a text field, but now that I am using AppleScript, I cannot seem to find a way to accomplish this.
The shell script is still using echo, but the output seems to get lost in the AppleScript "wrapper." How do I make sure that the AppleScript sees the output from the shell script and passes it on to the application? Remember, this isn't one single output, but continuous output.
Zero is correct. When you use do shell script, you can consider it similar to using backticks in Perl: the command will be executed, and everything sent to STDOUT will be returned as the result.
The only workaround would be to have your command write the output to a temporary file and then use do shell script "foo" without waiting. From there, you can read from the file sequentially using native AppleScript commands. It's clunky, but it'll work in a pinch.
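In shell terms, the workaround looks roughly like this (the paths are illustrative); the trailing & together with the redirection is what makes do shell script return immediately instead of waiting for the command to finish:
# launched once, e.g. via: do shell script "/path/to/backup.sh > /tmp/progress.log 2>&1 &"
/path/to/backup.sh > /tmp/progress.log 2>&1 &
# polled repeatedly, e.g. via: do shell script "tail -n 1 /tmp/progress.log"
tail -n 1 /tmp/progress.log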
If I create a process from a cmd prompt using the start command (opening a new cmd window), is it possible to redirect the stdout and stderr from that process back to the calling cmd?
If you want the output of the STARTed process to appear in the parent command console, then simply use the START /B option.
If you want to process the output of your command, then you should use FOR /F ... in ('someCommand') DO ... instead.
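A minimal sketch of both options (someCommand is a placeholder; in a batch file write %%L, at an interactive prompt write %L):
REM run the child in the same console, so its output appears here
start /B someCommand
REM or capture and process the output line by line
for /F "delims=" %%L in ('someCommand') do echo %%L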
OK. I have yet to find a straightforward answer to this question. I didn't want to bog down my question with what I thought was unnecessary detail, but seeing as I'm being criticized for the lack of it, I'll expand a bit here.
I want to automate the updating of firmware on our production line, so I have a Python app that fetches the firmware from FTP and then uses the processor's flash tool, invoked via Python's subprocess module, to upload it to the board's flash. This works for all but one of the tools.
That one tool seems to have a problem when it's not running in its own terminal, so I prepend start to the subprocess command string, which allows it to run fine in its own terminal. However, I need the output from that other terminal for logging reasons.
A possible solution was to log stdout and stderr to a file using >> or wintee, and then poll that file via a thread in my original app (perhaps a rather convoluted solution to the question). However, the tool also has a separate problem in that it doesn't like any std redirection, so this doesn't work for me.
I am trying to implement a terminal emulator in Java. It is supposed to be able to host both cmd.exe on Windows and bash on Unix-like systems (I would like to support at least Linux and Mac OS X). The problem I have is that both cmd.exe and bash repeat on their standard output whatever I send to their standard input.
For example, in bash, I type "ls" and hit enter, at which point the terminal emulator sends the input line to bash's stdin and flushes the stream. The process then outputs the input line again ("ls\n"), followed by the output of the ls command.
This is a problem, because other programs apart from bash and cmd.exe don't do that. If I run, inside either bash or cmd.exe, the command "python -i", the Python interactive shell does not repeat the input the way bash and cmd.exe do. This means a workaround would have to know which process the actual output came from. I doubt that's what actual terminal emulators do.
Running "bash -i" doesn't change this behaviour. As far as I know, cmd.exe doesn't have distinct "interactive" and "noninteractive" modes.
EDIT: I am creating the host process using the ProcessBuilder class. I am reading the stdout and stderr of the process, and writing to its stdin, using a technique similar to the stream gobbler. I don't set any environment variables before I start the host process. The exact commands I use to start the processes are bash -i for bash and cmd for cmd.exe. I'll try to post a minimal code example as soon as I manage to create one.
On Unix, run stty -echo to disable "local echo" (i.e. the shell repeating everything that you type). This is usually enabled so a user can edit what she types.
In your case, bash must somehow have allocated a pseudo-TTY; otherwise, it would not echo every command. set -x would have a similar effect, but then you'd see + ls instead of ls in the output.
With cmd.exe the command #ECHO OFF should achieve the same effect.
Just execute those after the process has been created and it should work.
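For example, the very first line the emulator writes to a freshly spawned bash could be the following (a sketch of the suggestion above):
stty -echo
ls
After the stty call, the ls line is no longer repeated in the output; for cmd.exe, send @ECHO OFF as the first line instead.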
I have a series of 7 processes required to run a complex web app that I develop on. I typically start these processes manually like this:
job &>/tmp/term.tail &
term.tail is a fifo pipe I leave tail running on to see the output of these processes when I need to.
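(For context, that setup looks roughly like this:)
mkfifo /tmp/term.tail    # create the named pipe once
tail -f /tmp/term.tail   # left running in another terminal to watch the output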
I'd like to find a way to start up all the processes within my current shell, but a typical script (shell or Ruby) runs within its own shell. Are there any workarounds?
I'm using zsh in iTerm2 on OS X.
You can run commands in the current shell with:
source scriptfile
or
. scriptfile
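So a start-all script could look like the following, with the jobs becoming children of your current shell when you source it (the job names are placeholders):
# start-all.sh -- load with: source start-all.sh (or . start-all.sh)
job1 &>/tmp/term.tail &
job2 &>/tmp/term.tail &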
As a side note, your processes will block if they generate much output and there isn't something reading from the pipe (i.e. if the tail dies).