Is there a way to use input redirection after the program has started?
For example, I want to run a program, scrape some data from its output, then use that data, plus some static data (from a file), as its standard input:
1 ./Binary
2 Hello the open machine is: computer2
3 Which computer:command do you want to use:
4 <<< "computer2:RunWaterPlants"
I want to feed in line 4, built from some of the program output on line 2.
I've tried keeping a bash script running alongside the program it started and sending the program input that way, but the program just continues executing without waiting for my input.
I can't edit ./Binary.
I found Write to stdin of a running process using pipe, and it works for what I'm asking, but I can't see the stdout when I run it through a pipe.
I figured it out from Writing to stdin of a process. Essentially, I created a FIFO (named pipe), had the program listen to it for input, and wrote to it.
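A minimal sketch of that approach (the path and the scraped value are illustrative; tail -f keeps the pipe's write end open so ./Binary doesn't see EOF between writes):
mkfifo /tmp/binary_in                  # named pipe that will feed ./Binary's stdin
tail -f /tmp/binary_in | ./Binary &    # keep the write end open for the program's lifetime
# ... scrape "computer2" from ./Binary's output here ...
echo 'computer2:RunWaterPlants' > /tmp/binary_in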
Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside my_script, to detect the command that feeds the pipe (in the above example, cat file1)?
I've been digging into this and so far have found no way to do it.
I've been unable to find any environment variable, set in the second command's process space, that records the full command line. The command data my_script sees (via /proc etc.) is just ./my_script, with no information about it being run as part of a pipe. Checking the process list from inside the second command doesn't seem to help either, since the first process appears to exit before the second starts.
The best information I've found suggests that in bash you can sometimes get the exit codes of the processes in a pipe via PIPESTATUS; unfortunately, nothing similar seems to exist for the names of the commands/files in the pipe. My research says this is impossible to do in a generic way (I can't control how people run my_script, so I can't force them to use third-party pipe-replacement tools instead of the shell's built-in pipes), yet it doesn't seem like it should be impossible, since the shell has the full command line in hand as it runs the command.
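For reference, the PIPESTATUS behaviour mentioned above looks like this in bash; note that it records only exit codes, never command names:
$ false | true
$ echo "${PIPESTATUS[@]}"
1 0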
(Update: adding later information following on from the comments below.)
I am on Linux.
I've investigated the /proc/$$/fd data, and it almost does the job. If the first command keeps running for several seconds while piping data to the second, you can read /proc/$$/fd/0 to see the pipe:[PIPEID] value it symlinks to. That can then be used to search the /proc/*/fd/ data of the other running processes for another process holding a pipe open with the same PIPEID, which gives you the first process's pid.
However, in most real-world tests of piping I've done, you can't trust the first command to stay running long enough for the second to locate its pipe fd in /proc before it exits (which removes the /proc data, preventing it from being read). So I can't rely on this method returning any information.
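For what it's worth, a rough sketch of that /proc approach (Linux only, inherently racy as described above, and you generally need to own the other processes, or be root, to read their fd entries):
# run from inside my_script while the producer is still alive
my_pipe=$(readlink /proc/$$/fd/0)             # e.g. "pipe:[123456]"
for fd in /proc/[0-9]*/fd/*; do
    pid=${fd#/proc/}; pid=${pid%%/*}          # pid that owns this fd
    if [ "$pid" != "$$" ] && [ "$(readlink "$fd" 2>/dev/null)" = "$my_pipe" ]; then
        echo "pipe peer: $pid ($(tr '\0' ' ' < /proc/$pid/cmdline 2>/dev/null))"
    fi
done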
I want to run a command in the console and feed it all the user input it needs.
#!/bin/bash
program < data &
My code works, but after less than a second the program disappears (it only blinks).
How can I run the program, pass it data from a file, and stay in that program? (I have no need to continue the bash script after the app launches.)
Inasmuch as the program you are launching reads data from its standard input, it is reasonable to suppose that when you say that you want to "stay in that program" you mean that you want to be able to give it further input interactively. Moreover, I suppose that the program disappears / blinks either because it is disconnected from the terminal (by operation of the & operator) or because it terminates when it detects end-of-file on its standard input.
If the objective is simply to prepend some canned input before the interactive input, then you should be able to achieve that by piping input from cat:
cat data - | program
The - argument to cat designates the standard input. cat first reads file data and writes it to standard out, then it forwards data from its standard input to its standard output. All of that output is fed to program's standard input. There is no need to exec, and do not put either command into the background, as that disconnects it from the terminal (from which cat is obtaining input and to which program is, presumably, writing output).
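For example, if data holds two canned answers (contents illustrative), program first consumes those two lines and then reads whatever you type at the terminal:
$ printf 'answer1\nanswer2\n' > data
$ cat data - | program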
I'm running a process using popen3, and trying to read the stdout stream. I want to be able to detect each character as it's put into stdout, but even using stdout.getc, I'm only able to get any output once there's a newline.
Any tips? I've seen this answered with stdin.getc, but not stdout.
The problem is that most programs, when run through a pipe (which is what happens with popen), use buffered output.
But what you are looking for is unbuffered output. By default, output is only unbuffered (or line-buffered) when the process is attached to a PTY, i.e. an interactive terminal.
When you attach the process to a pipe, its output will be buffered unless the process explicitly calls flush during its output processing.
You have a few options:
1. If you have control over the source code of the program you run with popen, force a flush of stdout after each output sequence.
2. Try running the command under stdbuf -o0, which tries to force unbuffered output. See this answer: Force line-buffering of stdout when piping to tee. However, that is not guaranteed to work for all programs.
3. Use the Ruby PTY library instead of popen so the program runs under a pseudo-terminal, and hence unbuffered. See this answer: Continuously read from STDOUT of external process in Ruby.
Option 3 is the most likely to work with any program you want to run and monitor; the other options may or may not work, depending on the program you are running and whether you have access to its source code.
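As a shell-level sketch of option 2 (./my_program is a placeholder; stdbuf only helps programs that rely on stdio's default buffering):
stdbuf -o0 ./my_program | while IFS= read -r line; do
    printf 'got: %s\n' "$line"    # each line arrives as soon as the program writes it
done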
I've got two scripts: one takes a couple of filenames as input and writes data to the pipes (really, it passes the pipes as arguments to a program I wrote), and the other calls the first script with some named pipes as inputs, then calls some other programs to process the data from the pipes.
My problem is that my pipes are stalling. What I think is happening is that the first bash script is called in the background from the second script, which then immediately starts up the consumer processes, so the readers may be opened before the writers (in the child script). Can that cause a stall?
Is there a way to synchronize on a named pipe and wait for it to be opened in bash?
I don't think that's your problem.
If the producer starts later than the consumer, no big deal.
Example:
Window 1
$ mkfifo foo.pipe
$ cat foo.pipe
(hangs)
Window 2
$ echo 'something' > foo.pipe
Window 1
something
(exits)
Perhaps your problem is that one process is consuming the output of the fifo, then the producer quits, then you're trying to read from the fifo again.
In that case, it would hang indefinitely.
e.g. after the above sequence:
Window 1
$ cat foo.pipe
hangs until you run another echo something > foo.pipe.
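One way to avoid that second hang is to hold a write end open for the whole session; as a bonus, opening a fifo for writing blocks until a reader has opened it, which is effectively the synchronization the question asks about. A sketch:
$ exec 3> foo.pipe     # blocks until a reader opens the fifo, then holds the write end open
$ echo 'first' >&3
$ echo 'second' >&3
$ exec 3>&-            # closing fd 3 finally delivers EOF to the reader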
My Ruby script runs a shell command and parses its output. However, it seems the command is executed first and the output saved in an array; I would like to access the output lines in real time, as they are printed. I've played around with threads but haven't got it to work. Any suggestions?
You are looking for pipes. Here is an example:
# This example runs the netstat command via a pipe
# and processes the data in Ruby as it comes back
pipe = IO.popen("netstat 3")   # trailing 3 asks netstat to redisplay every 3 seconds
while (line = pipe.gets)
  print line
  print "and"                  # marker showing each line is handled as it arrives
end
When you call methods/functions that run system/shell commands, your interpreter spawns another process to run the command and waits for it to finish, then gives you the output.
Even if you use threads, the only thing you would accomplish is keeping your program from hanging while the command runs; you still won't get the output until it's done.
I think you can accomplish that with pipes, but I am not sure how.
@Marcel got it.