I have a program that runs in the command line (i.e. $ run program starts up a prompt) and performs mathematical calculations. It has its own prompt that takes text input and responds through standard out/error (or creates a separate X window if needed, but this can be disabled). Sometimes I would like to send it small input, and other times I send in a large text file with a series of inputs, one per line. This program takes a lot of resources and also has a long startup time, so it would be best to have only one instance of it running at a time. I could keep the program's prompt open and supply the input that way, or I can pipe input into the process followed by an exit command (to leave the prompt), which just prints the output. The problem with sending the request with an exit command is that the program must start up each time (slow ...). Furthermore, the output of this program is sometimes cryptic and it would be helpful to filter it in some way (e.g. simplify the output, apply ANSI colors, etc.).
This all makes me want to put some two-way I/O filter (or is that a "pipe"? or a "wrapper"?) around the program so that it can run in the background as a single process. I would then communicate with it without having to restart it, while filtering the output to make it more user friendly. I have been looking all over for ideas and I am stumped at how to accomplish this in some simple, shell-accessible manner.
Some things I have tried were redirecting stdin and stdout to files, but the program hangs (doesn't quit) and only reads the file once, so I am unable to continue communicating with it. I think this is because the prompt is waiting for more user input after the EOF. I thought this could be set up as a local server, but I am uncertain how to begin accomplishing that.
I would love to find some simple way to accomplish this. Additionally, if you can think of a way to do this, do you think there is a way to also allow attaching to or detaching from the prompt on request? Any help and ideas would be greatly appreciated.
You could create two named pipes (man mkfifo) and redirect input and output:
myprog < fifoin > fifoout
Then you could open new terminal windows and do this in one:
cat > fifoin
And this in the other:
cat < fifoout
(Or use tee to save the input/output as well.)
To dump a large input file into the program, use:
cat myfile > fifoin
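Putting it together, a minimal end-to-end sketch might look like this (myprog, myfile, and the grep filter are placeholders; the exec 3> line holds a write end of the input FIFO open so the program never sees EOF between commands, which is what caused the "reads the file once and hangs" behaviour described in the question):

mkfifo fifoin fifoout

# start the single long-lived instance in the background; it blocks until
# both FIFOs have a peer on the other end (2>&1 also sends error output
# through the same FIFO so it can be filtered)
myprog < fifoin > fifoout 2>&1 &

# hold a spare write end of fifoin open for the whole session
exec 3> fifoin

# in another terminal: read the output as it arrives; replace cat with a
# filter of your choice, e.g. grep --color -E 'Error|$' to highlight errors
cat < fifoout

# send a one-off command ...
echo "2+2" > fifoin
# ... or dump a whole batch file
cat myfile > fifoin

# when finished, release the spare write end so myprog finally sees EOF
exec 3>&-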
I'm using Go's exec package (the Run method on a Cmd) to get a command's output, which works great when the command's Stdout field is set to os.Stdout and its Stderr to os.Stderr.
I want to display the output and the error output to the console, but I also want my program to see what the output was.
I then made my own Writer type that did just that: it wrote to a buffer and also printed to the terminal.
Here's the problem: some applications change their output to something much less readable by humans when they detect they're not writing to a tty. So the output I get changes to something ugly when I do it the latter way (cleaner for computers, uglier for humans).
I wanted to know if there is some way within Go to convince whatever command I'm running that it is writing to a tty, despite the writer not being os.Stdout/os.Stderr. I know it's possible to do with the script command, but that uses a different flag depending on Darwin/Linux, so I'm trying to avoid it.
Thanks in advance!
The only practical way to solve this is to allocate a pseudo terminal (PTY) and make your external process use it for its output: since a PTY is still a terminal, a process checking whether it's connected to a real terminal thinks it is.
You may start with a search for Go PTY packages; github.com/creack/pty is probably a good starting point.
The next step is to have a package implementing a PTY actually allocate it, and connect "the other end" of a PTY to your custom writer.
(By the way, there's no point in writing a custom "multi writer", as io.MultiWriter already exists.)
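For illustration, here is a minimal sketch of that approach using github.com/creack/pty together with io.MultiWriter (the "ls --color=auto" command is just a placeholder; note that on a PTY the child's stdout and stderr arrive interleaved on a single stream):

package main

import (
	"bytes"
	"io"
	"log"
	"os"
	"os/exec"

	"github.com/creack/pty"
)

func main() {
	// placeholder command; anything that changes its output off a tty will do
	cmd := exec.Command("ls", "--color=auto")

	// start the command attached to a freshly allocated pseudo terminal,
	// so it believes it is writing to a real terminal
	f, err := pty.Start(cmd)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// copy everything the command writes both to our terminal and to a
	// buffer the program can inspect later (no custom writer needed)
	var buf bytes.Buffer
	_, _ = io.Copy(io.MultiWriter(os.Stdout, &buf), f)
	// (reading from a pty commonly ends with an error such as EIO once the
	// child exits, so the copy error is usually ignored here)

	if err := cmd.Wait(); err != nil {
		log.Printf("command finished with error: %v", err)
	}
	log.Printf("captured %d bytes of output", buf.Len())
}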
I have a command line program that receives interactive input.
I want to have the program backgrounded after it has read the input.
After it has read the input, it execs another program. I want to be able to control that program using shell job control.
I am lazy, so I don't want to type C-Z and bg to achieve that.
I am in control of that program (I wrote it and can change it), and it 'knows' when it should be backgrounded.
I'm sure this is achievable (for example, I guess an expect script could start my program, which could then signal its parent (expect) when it's ready to be backgrounded).
What is the best (simplest, easiest, best-behaving) way to achieve this?
Have you tried the following (assuming bash)?
# Get necessary input
# ...
# then replace the current script with the follow-on program
exec /usr/bin/bash ./AnotherScript.sh
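If changing the calling side is acceptable, here is another hedged sketch of one way to get the follow-on program under job control without ever typing C-Z: when the wrapper is a shell function defined in the interactive shell (rather than a separate script), anything it starts with & becomes a job of that shell, so jobs/fg/bg work on it (another_program is a placeholder):

myprog() {
    # interactive phase: still in the foreground, so it can read the terminal
    read -r -p "input: " params

    # launch the follow-on program in the background; because this function
    # runs in the interactive shell itself, the job stays under its job control
    another_program "$params" &
}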
How can I capture the input/output from a script in real time (such as with tee), but line by line instead of character by character? My goal is to capture the input typed into the interactive prompts of a script only after backspaces and auto-completion have been processed (after the RETURN key is hit).
Specifically, I am trying to create a wrapper script for ssh that creates a timestamped log of commands used on remote servers. The script, which uses tee to redirect the output for filtering, works well, but the redirected output gets jumbled with unsubmitted characters whenever I use the backspace key or the up/down keys to scroll through my remote history. For example: service test stopexitservice test stopart or cd ..logs[1Pls -al.
Perhaps there is a way to capture the terminal's scrollback and redirect that like with tee?
Update: I have found a character-based cleanup solution that does what I want most of the time. However, I am still hoping for an answer to this question (which may well be msw's answer that it is very difficult to do).
In the Unix world there are two primary modes of handling keyboard input. The first is known as 'raw', in which characters are passed from the terminal to the reading program one at a time. This is the mode that editors (and the like) use, because the editor needs to respond immediately when you press a key.
The other terminal discipline is called 'cooked', which is the line-by-line behaviour you think of as normal shell input, where you get to backspace and the command is not executed until you press return. Ssh has to take your input in raw, character-by-character mode because it has no idea what is running on the other side: if you are running an editor on the far side, it can't wait for a return before sending each key-press. So, as some have suggested, grabbing the shell history on the far side is the only reasonable way to get a command-by-command record of the bash commands you typed.
I oversimplified for clarity; actually most installations of bash take input in raw mode because they allow editor-like command editing. For example, Ctrl-P scrolls up through the command history and Ctrl-A goes to the beginning of the line, and bash needs to see those keys the moment they are typed, not after a return.
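You can watch the terminal driver switch between the two modes with stty; a quick sketch, safe to try in a spare terminal:

# "icanon" in the output means cooked (canonical) mode, "-icanon" means raw-ish
stty -a | tr ' ' '\n' | grep icanon

# switch to non-canonical input: keys are delivered immediately and the
# driver no longer handles backspace for you
stty -icanon -echo
head -c 1          # returns as soon as you press a single key

# restore the usual cooked settings
stty sane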
This is another reason that capturing on the local side is obnoxiously difficult: if you capture on the local side, the stream will be filled with backspaces and all of bash's editing commands. To get a true transcript of what the remote shell actually executed, you would have to parse the character stream as if you were the remote shell. There is also a problem if you run something like
vi /some_file/which_is_on_the_remote/machine
the input stream to the local ssh will be filled with movement commands, snippets of text, backspaces and so on, and it would be bloody difficult to figure out what is part of a bash command and what is you talking to the editor.
Few things involving computers are impossible; getting clean input from the local side of an ssh invocation is really, really hard.
I question the actual utility of recording the commands that you execute on a local or remote machine. The reason is that there is so much state which is not visible from a command log. As a simple example here's a log of two commands:
17:00$ cp important_file important_file.bak
17:15$ rm important_file
and two days later you are trying to figure out whether important_file.bak should have the contents you intended or not. Given that log you can't answer that simple question. Even if you had the sequence
16:58$ cat important_file
17:00$ cp important_file important_file.bak
17:15$ rm important_file
If you aren't capturing the output, the cat in the log will not tell you anything. Give me almost any command sequence and I can envision a scenario in which it will not give you the information you need to make sense of what was done.
For a very similar purpose I use GNU screen, which offers the option to record everything you do in a shell session (input/output). The log it creates also contains undesirable characters, but I clean them up with perl:
perl -ne 's/\x1b[[()=][;?0-9]*[0-9A-Za-z]?//g;s/\r//g;s/\007//g;print' < screenlog.0
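To produce such a log in the first place, you can start screen with logging enabled (or toggle it inside a session with C-a H); a sketch, with the remote host name as a placeholder:

# start a session with logging on; everything goes to screenlog.0
screen -L

# inside the session, work as usual
ssh user@remote-host

# afterwards, strip the escape sequences with the one-liner above
perl -ne 's/\x1b[[()=][;?0-9]*[0-9A-Za-z]?//g;s/\r//g;s/\007//g;print' < screenlog.0 > session.log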
I hope this helps.
Some features of screen:
http://speaking-my-language.blogspot.com/2010/09/top-5-underused-gnu-screen-features.html
Site I found the perl-oneliner:
https://superuser.com/questions/99128/removing-the-escape-characters-from-gnu-screens-screenlog-n
I'm new to programming/development and I'm having trouble installing development tools. One of my biggest problems when installing something is understanding the shell or terminal (are they the same thing?) and how it relates to installing tools like uncrustify, for example. What do I need to read to understand the shell/terminal and $PATH?
Have you tried Googling?
Environment variable
PATH (variable)
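As a quick hands-on illustration of $PATH (uncrustify is just the example tool from the question, and the directory is a placeholder):

# show the directories the shell searches, in order, for the commands you type
echo "$PATH"

# see which of those directories a given tool is found in (if any)
which uncrustify

# make a freshly installed tool visible by adding its directory to PATH
export PATH="$PATH:$HOME/tools/bin"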
(I think you're getting good advice so far on PATH)
The most generic description of a shell is that it is a program that facilitates interaction with programs. Programs, in turn, 'communicate' with the OS to have work performed by the hardware.
There are two modes in which you will normally interact with a shell:
a command-line processor, where you type in commands, letter by letter, word by word, until you press the enter key. The shell then reads what you have typed, validates that it understands the general form of what you have asked for, and starts running the one (or more) programs specified in what you have typed.
a batch-script processor. In this case you have assembled all of the commands you want executed into a file, and then, through one of several mechanisms, you arrange to have the batch script run, so it will in turn run the commands you have specified and the computer does your work for you. Have you written a Windows .bat file? Same idea, but more powerful.
So, a terminal window is a program that is responsible for (a) getting input and (b) printing output. When you get to the C programming that underlies the Unix system, you are talking about features of the OS design called standard in and standard out. Normal Unix commands expect to read instructions from stdin and print output to stdout.
Of course, all good programs can get their input from files and write their output to files as well, and most programs will let you redirect stdin/stdout so they process files instead of reading input from the keyboard and/or writing to the screen.
To return to the shell: it is the program that lets you type commands while the terminal window is open. There are numerous versions of the shell that you may run into, and they have varying levels of features supporting (a) interactive mode and (b) batch-script mode.
To sum it up, here is a diagram of what is involved (very basically) for terminal and shell:
(run a) terminal-window (program)
    shell-command-prompt (program) (automatically started as a subprogram)
        1. enter commands one at a time, with input from
               a. typed at keyboard (std-in)
               b. inFile
           and output to
               a. screen (std-out)
               b. outFile
           program
               calls OS-level functions for
                   a. computation
                   b. I/O
OR
        2. (run the shell program without a terminal, usually from the cron sub-system)
           shell-batch-processor
               shell program reads the batch-script file, one 'statement' at a time
                   validates statements
                   runs the program, relying on the script or cfg to provide inFile data
                   and indicate where to put outFile data
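A tiny concrete illustration of the two modes (file names are placeholders):

# mode 1: a command typed interactively at the prompt, reading an inFile
# (std-in redirected) and writing an outFile (std-out redirected)
sort < names.txt > sorted.txt

# mode 2: the same work collected into a batch script ...
cat > mybatch.sh <<'EOF'
#!/bin/sh
sort < names.txt > sorted.txt
EOF
chmod +x mybatch.sh

# ... which the shell can then run without further typing (or cron can run it)
./mybatch.sh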
I hope this helps.
Is there a terminal program that shows the difference between input, standard output, error output, the prompt, and user-entered commands? It should also show when standard input is needed vs. running a command.
One way would be to highlight each differently. The cursor could change color depending on if it was waiting for a command, running a command, or waiting for standard input.
Another way would be to have three frames: a large frame at the top for output (including the prompt and running commands), a small frame near the bottom for standard input, and a one-line frame at the bottom for command-line input. That would possibly even allow running another command to provide input while the previous command is still waiting for standard input.
From http://jamesjava.blogspot.com/2007/09/terminal-window-with-3-frames.html
Hotwire could be a good candidate, but it doesn't do that out of the box, AFAIK.
For now it appears that there is no such program.
My program gush (Graphical User SHell) does part of this.
It uses different colours for commands and program stdin/stdout/stderr.
Note that the traditional separation of shell and terminal makes this impossible, because the interface between them models an old serial terminal connection and therefore only has a single input and single output channel. I get around this problem by combining shell and terminal into one program.
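(For comparison, here is a rough approximation of per-stream colouring in an ordinary shell, using bash process substitution; some_command is a placeholder, and the two streams may interleave out of order:)

# print the command's stderr in red while its stdout stays uncoloured
some_command 2> >(while IFS= read -r line; do
    printf '\033[31m%s\033[0m\n' "$line"
done)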
It would be nice to also indicate when a program is waiting for input, but I don't think there's any way to detect this, unless you traced the system calls of the child program to detect when it tries to read stdin.
For interactive programs, you can guess that if the last output did not end with a newline it is probably prompting for input, but this would not work for non-interactive programs, e.g. sed.