Bash, cygwin, run command with user input (disable process switch) - bash

I want to run a command in the console and feed it all the user data it needs.
#!/bin/bash
program < data &
My code works, but after less than a second the program disappears (it only blinks).
How can I run the program, pass it data from a file, and then stay in that program? (I have no need to continue the bash script after the app launches.)

Inasmuch as the program you are launching reads data from its standard input, it is reasonable to suppose that when you say that you want to "stay in that program" you mean that you want to be able to give it further input interactively. Moreover, I suppose that the program disappears / blinks either because it is disconnected from the terminal (by operation of the & operator) or because it terminates when it detects end-of-file on its standard input.
If the objective is simply to prepend some canned input before the interactive input, then you should be able to achieve that by piping input from cat:
cat data - | program
The - argument to cat designates the standard input. cat first reads file data and writes it to standard out, then it forwards data from its standard input to its standard output. All of that output is fed to program's standard input. There is no need to exec, and do not put either command into the background, as that disconnects it from the terminal (from which cat is obtaining input and to which program is, presumably, writing output).
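For example, here is a quick experiment you can run; bc is used only as a stand-in for your interactive program, and canned.txt for your data file, so both names are purely illustrative:
printf '1+1\n2*3\n' > canned.txt     # the "canned" input
cat canned.txt - | bc                # bc prints 2 and 6, then keeps reading what you type
# type further expressions interactively; end the session with Ctrl-D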


How to get error text in the iperf message? [duplicate]

I am rather confused about the purpose of these three files. If my understanding is correct, stdin is the file into which a program writes its requests to run a task in the process, stdout is the file into which the kernel writes its output and from which the requesting process accesses the information, and stderr is the file into which all the exceptions are entered. On opening these files to check whether this actually happens, I found nothing that seemed to suggest so!
What I want to know is what exactly the purpose of these files is; an absolutely dumbed-down answer with very little tech jargon, please!
Standard input - this is the file handle that your process reads to get information from you.
Standard output - your process writes conventional output to this file handle.
Standard error - your process writes diagnostic output to this file handle.
That's about as dumbed-down as I can make it :-)
Of course, that's mostly by convention. There's nothing stopping you from writing your diagnostic information to standard output if you wish. You can even close the three file handles totally and open your own files for I/O.
When your process starts, it should already have these handles open and it can just read from and/or write to them.
By default, they're probably connected to your terminal device (e.g., /dev/tty) but shells will allow you to set up connections between these handles and specific files and/or devices (or even pipelines to other processes) before your process starts (some of the manipulations possible are rather clever).
An example being:
my_prog <inputfile 2>errorfile | grep XYZ
which will:
create a process for my_prog.
open inputfile as your standard input (file handle 0).
open errorfile as your standard error (file handle 2).
create another process for grep.
attach the standard output of my_prog to the standard input of grep.
Re your comment:
When I open these files in /dev folder, how come I never get to see the output of a process running?
It's because they're not normal files. While UNIX presents everything as a file in a file system somewhere, that doesn't make it so at the lowest levels. Most files in the /dev hierarchy are either character or block devices, effectively a device driver. They don't have a size but they do have a major and minor device number.
When you open them, you're connected to the device driver rather than a physical file, and the device driver is smart enough to know that separate processes should be handled separately.
The same is true for the Linux /proc filesystem. Those aren't real files, just tightly controlled gateways to kernel information.
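You can see this for yourself from a shell; the exact device numbers and dates vary from system to system, so treat the comment below as indicative only:
ls -l /dev/tty /dev/null
# the leading 'c' marks a character device, and where a regular file would
# show its size you instead see the major,minor device numbers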
It would be more correct to say that stdin, stdout, and stderr are "I/O streams" rather
than files. As you've noticed, these entities do not live in the filesystem. But the
Unix philosophy, as far as I/O is concerned, is "everything is a file". In practice,
that really means that you can use the same library functions and interfaces (printf,
scanf, read, write, select, etc.) without worrying about whether the I/O stream
is connected to a keyboard, a disk file, a socket, a pipe, or some other I/O abstraction.
Most programs need to read input, write output, and log errors, so stdin, stdout,
and stderr are predefined for you, as a programming convenience. This is only
a convention, and is not enforced by the operating system.
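A small shell illustration of that point: the same program reads its standard input the same way regardless of what the stream is connected to.
wc -l                  # standard input is the keyboard; end it with Ctrl-D
wc -l < /etc/passwd    # standard input is a regular file
ls /etc | wc -l        # standard input is a pipe from another process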
As a complement to the answers above, here is a summary of redirections (shown in the original answer as a graphic, not reproduced here):
EDIT: This graphic is not entirely correct.
The first example does not use stdin at all; it passes "hello" as an argument to the echo command.
The graphic also says 2>&1 has the same effect as &>, however
ls Documents ABC > dirlist 2>&1
#does not give the same output as
ls Documents ABC > dirlist &>
This is because &> requires a file to redirect to (as written above, &> with no target is a syntax error), and 2>&1 simply sends stderr to wherever stdout is currently pointing.
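A quick experiment makes the corrected relationship clearer; here Documents is assumed to exist and ABC is assumed not to, purely for illustration:
ls Documents ABC > dirlist 2>&1    # both stdout and stderr end up in dirlist
ls Documents ABC &> dirlist        # bash shorthand for the same combination
ls Documents ABC 2>&1 > dirlist    # order matters: stderr was duplicated onto the
                                   # terminal before stdout was redirected, so the
                                   # error message still appears on the screen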
I'm afraid your understanding is completely backwards. :)
Think of "standard in", "standard out", and "standard error" from the program's perspective, not from the kernel's perspective.
When a program needs to print output, it normally prints to "standard out". A program typically prints output to standard out with printf, which prints ONLY to standard out.
When a program needs to print error information (not necessarily exceptions, those are a programming-language construct, imposed at a much higher level), it normally prints to "standard error". It normally does so with fprintf, which accepts a file stream to use when printing. The file stream could be any file opened for writing: standard out, standard error, or any other file that has been opened with fopen or fdopen.
"standard in" is used when the file needs to read input, using fread or fgets, or getchar.
Any of these files can be easily redirected from the shell, like this:
cat /etc/passwd > /tmp/out # redirect cat's standard out to /tmp/out
cat /nonexistent 2> /tmp/err # redirect cat's standard error to /tmp/err
cat < /etc/passwd # redirect cat's standard input to /etc/passwd
Or, the whole enchilada:
cat < /etc/passwd > /tmp/out 2> /tmp/err
There are two important caveats: First, "standard in", "standard out", and "standard error" are just a convention. They are a very strong convention, but it's all just an agreement that it is very nice to be able to run programs like this: grep echo /etc/services | awk '{print $2;}' | sort and have the standard outputs of each program hooked into the standard input of the next program in the pipeline.
Second, I've given the standard ISO C functions for working with file streams (FILE * objects) -- at the kernel level, it is all file descriptors (int references to the file table) and much lower-level operations like read and write, which do not do the happy buffering of the ISO C functions. I figured to keep it simple and use the easier functions, but I thought all the same you should know the alternatives. :)
I think saying that stderr should be used only for error messages is misleading.
It should also be used for informative messages that are meant for the user running the command and not for any potential downstream consumers of the data (i.e. if you run a shell pipe chaining several commands, you do not want informative messages like "getting item 30 of 42424" to appear on stdout, as they will confuse the consumer, but you might still want the user to see them).
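A sketch of what that looks like in practice; fetch_items is a made-up producer command used only for illustration:
fetch_items | sort | uniq -c          # the pipeline only ever sees the real data
# inside such a producer you would write, for example:
echo "getting item 30 of 42424" >&2   # progress: visible to the user, skips the pipe
echo "item-30-data"                   # actual output: goes to the next command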
See this for historical rationale:
"All programs placed diagnostics on the standard output. This had
always caused trouble when the output was redirected into a file, but
became intolerable when the output was sent to an unsuspecting
process. Nevertheless, unwilling to violate the simplicity of the
standard-input-standard-output model, people tolerated this state of
affairs through v6. Shortly thereafter Dennis Ritchie cut the Gordian
knot by introducing the standard error file. That was not quite enough.
With pipelines diagnostics could come from any of several programs
running simultaneously. Diagnostics needed to identify themselves."
stdin
Reads input through the console (e.g. Keyboard input).
Used in C with scanf
scanf(<formatstring>,<pointer to storage> ...);
stdout
Produces output to the console.
Used in C with printf
printf(<string>, <values to print> ...);
stderr
Produces 'error' output to the console.
Used in C with fprintf
fprintf(stderr, <string>, <values to print> ...);
Redirection
The source for stdin can be redirected. For example, instead of coming from keyboard input, it can come from a file (cat < file.txt) or from another program (ps | grep <userid>).
The destinations for stdout and stderr can also be redirected. For example, stdout can be redirected to a file: ls . > ls-output.txt; in this case the output is written to the file ls-output.txt. stderr can be redirected with 2>.
Using ps -aux reveals current processes, all of which are listed in /proc/ as /proc/(pid)/. Calling cat /proc/(pid)/fd/0 reads whatever that process's standard input is connected to, I think. So perhaps,
/proc/(pid)/fd/0 - Standard Input File
/proc/(pid)/fd/1 - Standard Output File
/proc/(pid)/fd/2 - Standard Error File
But this only worked well for /bin/bash; other processes generally had nothing in 0, but many had errors written to 2.
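You can check the direction of each descriptor for your own shell; $$ expands to the shell's PID, and each entry is a symlink to whatever that descriptor is currently connected to (usually the same terminal for all three):
ls -l /proc/$$/fd/0 /proc/$$/fd/1 /proc/$$/fd/2
# 0 is standard input, 1 is standard output, 2 is standard error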
For authoritative information about these files, check out the man pages; run this command in your terminal:
$ man stdout
But for a simple answer, each file is for:
stdout for the output stream
stdin for the input stream
stderr for printing errors or log messages.
Each Unix program has each one of those streams.
stderr does not do I/O cache buffering, so use it if your application needs to print critical message info (errors, exceptions) to the console or to a file. stdout, on the other hand, does use I/O cache buffering, so there is a chance the application closes before your messages are actually written to the file, which makes debugging complex; use stdout for general log info.
A file with associated buffering is called a stream and is declared to be a pointer to a defined type FILE. The fopen() function creates certain descriptive data for a stream and returns a pointer to designate the stream in all further transactions. Normally there are three open streams with constant pointers declared in the header and associated with the standard open files.
At program startup three streams are predefined and need not be opened explicitly: standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). When opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
https://www.mkssoftware.com/docs/man5/stdio.5.asp
Here is a lengthy article on stdin, stdout and stderr:
What Are stdin, stdout, and stderr on Linux?
To summarize:
Streams Are Handled Like Files
Streams in Linux—like almost everything else—are treated as though
they were files. You can read text from a file, and you can write text
into a file. Both of these actions involve a stream of data. So the
concept of handling a stream of data as a file isn’t that much of a
stretch.
Each file associated with a process is allocated a unique number to
identify it. This is known as the file descriptor. Whenever an action
is required to be performed on a file, the file descriptor is used to
identify the file.
These values are always used for stdin, stdout, and stderr:
0: stdin
1: stdout
2: stderr
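Because those numbers are fixed, you can refer to them directly in redirections. A classic (if slightly mind-bending) trick is swapping stdout and stderr through a temporary third descriptor; some_command here is just a placeholder:
some_command 3>&1 1>&2 2>&3 3>&-
# fd 3 keeps a copy of the old stdout, stdout is pointed at stderr,
# stderr is pointed at the saved copy, and fd 3 is then closed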
Ironically I found this question on stack overflow and the article above because I was searching for information on abnormal / non-standard streams. So my search continues.

Why do Go programs print output to the terminal screen and not to /dev/stderr?

As I can see in the Go source,
Go will print output to os.Stderr, which is
Stderr = NewFile(uintptr(syscall.Stderr), "/dev/stderr")
So why, when I run this program in my terminal with the command go run main.go,
is the output printed to the terminal screen and not to /dev/stderr?
// main.go
package main

import "log"

func main() {
    log.Println("this is my first log")
}
In standard Unix/Linux terminals, both stdout and stderr are connected to the terminal so the output goes there.
Here's a shell snippet to clarify this:
$ echo "joe" >> /dev/stderr
joe
Even though we echoed "joe" to something that looks like a file, it gets emitted to the screen. Replace /dev/stderr with /tmp/foo and you won't see the output on the screen (though it will be appended to the file /tmp/foo)
In Go you can specifically select which stream to output to by passing it to functions like fmt.Fprintf in its first argument.
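You can also confirm where log.Println is writing without touching the Go code at all: redirect only stderr and the line disappears from the screen.
go run main.go 2> err.txt   # nothing shows on the terminal any more
cat err.txt                 # the log line is here, so it was written to stderr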
Well, several things are going on here.
First, on a UNIX-like system (and you appear to be on a Linux-based one), the environment in which each user-space program runs includes the concept of the so-called "standard I/O streams": each program bootstrapped by the OS and given control automatically has three file descriptors opened and available, representing the standard input stream, the standard output stream and the standard error stream.
Second, typically (but not always) the spawned program inherits these streams from its parent program. For the case of an interactive shell running in a terminal (or a terminal emulator), that parent program is the shell, and so the standard I/O streams of the spawned program are inherited from the shell.
The shell's standard I/O streams, in turn, are naturally connected to the terminal it runs in: that's why it's possible to feed input to the shell and read what it prints back. You actually type into the terminal, not into the shell; it's the terminal which delivers that data to the shell, and the case for the shell's output is just the reverse.
Third, /dev/stderr is a Linux-specific "hack": a virtual device meaning "whatever my stderr is connected to".
That is, when a process opens that special file, it gets back a file descriptor connected to whatever the process' stderr is already connected to.
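On many Linux distributions you can see that indirection directly; the exact symlink targets vary, so the comments below are only indicative:
ls -l /dev/stderr        # typically a symlink, ultimately to /proc/self/fd/2
ls -l /proc/self/fd/2    # which in turn points at whatever fd 2 of the calling
                         # process is connected to, e.g. /dev/pts/0 in a terminal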
Fourth, let's grok the code example you cited:
NewFile(uintptr(syscall.Stderr), "/dev/stderr")
Here, a call to os.NewFile is made, receiving two arguments.
To cite its documentation:
$ go doc os.NewFile
func NewFile(fd uintptr, name string) *File
NewFile returns a new File with the given file descriptor and name. The returned value will be nil if fd is not a valid file descriptor.
<…>
OK, so this function takes a raw kernel-level file descriptor and the name of the file it is supposed to have been opened for.
That latter bit is crucial: the OS kernel itself is (almost) oblivious to what sort of stream a file descriptor actually represents, at least as far as its public API is concerned.
So, when NewFile is called to obtain an instance of *os.File for the program's standard error stream by the log package,
it does not open the file "/dev/stderr" (even though it exists);
it merely uses its name, because os.NewFile requires one.
It could have used "" there to much the same effect, except for a change in error reporting: if something failed when using the resulting *os.File, the error output would not have included the name "/dev/stderr".
The syscall.Stderr value is merely the number of the file descriptor connected to the standard error stream.
On UNIX-compatible kernels it's always 2; you can run go doc syscall.Stderr and see for yourself.
To recap,
The call NewFile(...) you referred to does not open any files;
it merely wraps an already open file descriptor connected to the standard error stream of the current process into a value of type os.File which is used throughout the os package for I/O on files.
On Linux, the special virtual device file /dev/stderr does really exist but it has nothing to do with what's happening here.
When you run a program in an interactive shell without using any I/O redirection, the standard streams of the created process are connected to the same "sinks and sources" as those of the shell. And they, in turn, are most of the time connected to the terminal which hosts the shell.
Now I urge you to fetch an introductory book on the design of UNIX-like operating systems and read it.

Input redirection after program has started running

Is there a way to use input redirection after the program has started?
For example I want to run a program, scrape some data from it, then use that data to push it + some static data (from a file) to std input:
1 ./Binary
2 Hello the open machine is: computer2
3 Which computer:command do you want to use:
4 <<< "computer2:RunWaterPlants"
I want to redirect line 4 in, using some of the program's output from line 2.
I've tried Keeping a bash script running along the program it has started, and sending the program input, but it will just continue with the program execution without waiting for my input.
I can't edit ./Binary.
I found Write to stdin of a running process using pipe and it works for what I'm asking, but I can't see the stdout when I run it with pipe.
I figured it out from Writing to stdin of a process. Pretty much, I created a FIFO (named pipe), had the program listen to it for its input, and wrote to it, as sketched below.
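A minimal sketch of that approach, with made-up paths (./Binary and the computer2:RunWaterPlants string come from the question; everything else is illustrative):
mkfifo /tmp/binary_in                 # create the named pipe (FIFO)
./Binary < /tmp/binary_in &           # program reads its stdin from the FIFO;
                                      # its stdout still goes to the terminal
exec 3> /tmp/binary_in                # keep a write end open so the program
                                      # never sees end-of-file between writes
# ...read the program's output, then feed it the answer it asked for:
echo "computer2:RunWaterPlants" >&3
exec 3>&-                             # closing the write end finally sends EOF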

When data is piped from one program via | is there a way to detect what that program was from the second program?

Say you have a shell command like
cat file1 | ./my_script
Is there any way from inside the 'my_script' command to detect the command run first as the pipe input (in the above example cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variables set in the process space of the second command recording the full command line; the command data the my_script command sees (via /proc etc.) is just ./my_script and doesn't include any information about it being run as part of a pipe. Even checking the process list from inside the second command doesn't seem to provide any data, since the first process seems to exit before the second starts.
The best information I've been able to find suggests that in bash, in some cases, you can get the exit codes of the processes in the pipe via PIPESTATUS; unfortunately nothing similar seems to exist for the names of the commands/files in the pipe. My research seems to say it's impossible to do in a generic manner (I can't control how people decide to run my_script, so I can't force third-party pipe replacement tools to be used over built-in shell pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line present as the command is run.
(update adding in later information following on from comments below)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the value pipe:[PIPEID] that it symlinks to. That can then be used to search through the /proc/*/fd/ data of the other running processes to find another process with the same PIPEID open, which gives you the first process's pid.
However, in most real-world tests of piping I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (which removes the proc data, preventing it from being read). So whether this method will return any information is not something I can rely on.
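For what it's worth, here is a rough sketch of that /proc walk; it is Linux-specific and, as noted above, only works while the writing process is still alive:
#!/bin/bash
# find the process on the other end of our stdin pipe
target=$(readlink "/proc/$$/fd/0")            # e.g. "pipe:[123456]"
case $target in pipe:*) ;; *) echo "stdin is not a pipe" >&2; exit 1;; esac

for fd in /proc/[0-9]*/fd/*; do
    pid=${fd#/proc/}; pid=${pid%%/*}
    [ "$pid" = "$$" ] && continue             # skip ourselves
    if [ "$(readlink "$fd" 2>/dev/null)" = "$target" ]; then
        printf 'pid %s: ' "$pid"
        tr '\0' ' ' < "/proc/$pid/cmdline"; echo
    fi
done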

What is a simple explanation for how pipes work in Bash?

I often use pipes in Bash, e.g.:
dmesg | less
Although I know what this outputs (it takes the output of dmesg and lets me scroll through it with less), I do not understand what the | is doing.
Is there a simple, or metaphorical explanation for what | does?
What goes on when several pipes are used in a single line?
Is the behavior of pipes consistent everywhere it appears in a Bash script?
A Unix pipe connects the STDOUT (standard output) file descriptor of the first process to the STDIN (standard input) of the second. What happens then is that when the first process writes to its STDOUT, that output can be immediately read (from STDIN) by the second process.
Using multiple pipes is no different than using a single pipe. Each pipe is independent, and simply links the STDOUT and STDIN of the adjacent processes.
Your third question is a little bit ambiguous. Yes, pipes, as such, are consistent everywhere in a bash script. However, the pipe character | can represent different things. A double pipe (||), for example, represents the "or" operator.
In Linux (and Unix in general) each process has three default file descriptors:
fd #0 Represents the standard input of the process
fd #1 Represents the standard output of the process
fd #2 Represents the standard error output of the process
Normally, when you run a simple program these file descriptors by default are configured as following:
default input is read from the keyboard
Standard output is configured to be the monitor
Standard error is configured to be the monitor also
Bash provides several operators to change this behavior (take a look at the >, >> and < operators, for example). Thus, you can redirect the output to something other than the standard output, or read your input from a stream other than the keyboard. Especially interesting is the case where two programs collaborate in such a way that one uses the output of the other as its input. To make this collaboration easy, Bash provides the pipe operator |. Please note the usage of collaboration instead of chaining. I avoided that term because a pipe is, in fact, not sequential. A normal command line with pipes looks like this:
> program_1 | program_2 | ... | program_n
The above command line is a little bit misleading: a user could think that program_2 gets its input once program_1 has finished its execution, which is not correct. In fact, what bash does is launch ALL the programs in parallel and configure the inputs and outputs so that every program gets its input from the previous one and delivers its output to the next one (in the order established on the command line).
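One quick way to convince yourself of this in bash:
time ( sleep 2 | sleep 2 )
# reports roughly 2 seconds, not 4, because both stages run at the same time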
Following is a simple example from Creating pipe in C of creating a pipe between a parent and a child process. The important part is the call to pipe() and how the parent closes fd[1] (the writing side) while the child closes fd[0] (the reading side). Please note that the pipe is a unidirectional communication channel, so data can only flow in one direction: from fd[1] towards fd[0]. For more information take a look at the manual page of pipe().
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    int fd[2], nbytes;
    pid_t childpid;
    char string[] = "Hello, world!\n";
    char readbuffer[80];

    pipe(fd);

    if ((childpid = fork()) == -1)
    {
        perror("fork");
        exit(1);
    }

    if (childpid == 0)
    {
        /* Child process closes up input side of pipe */
        close(fd[0]);

        /* Send "string" through the output side of pipe */
        write(fd[1], string, (strlen(string) + 1));
        exit(0);
    }
    else
    {
        /* Parent process closes up output side of pipe */
        close(fd[1]);

        /* Read in a string from the pipe */
        nbytes = read(fd[0], readbuffer, sizeof(readbuffer));
        printf("Received string: %s", readbuffer);
    }

    return 0;
}
Last but not least, when you have a command line in the form:
> program_1 | program_2 | program_3
The return code of the whole line is set to that of the last command, in this case program_3. If you would like to react to an intermediate failure, you have to set the pipefail option or read the individual codes from the PIPESTATUS array, as in the sketch below.
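For example, in bash (the values shown in the comments assume nothing else has run in between):
false | true
echo "exit: $?  stages: ${PIPESTATUS[*]}"   # exit: 0  stages: 1 0

set -o pipefail
false | true
echo "exit: $?"                             # exit: 1, the failure is no longer masked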
Every standard process in Unix has at least three file descriptors, which are sort of like interfaces:
Standard output, which is the place where the process prints its data (most of the time the console, that is, your screen or terminal).
Standard input, which is the place it gets its data from (most of the time it may be something akin to your keyboard).
Standard error, which is the place where errors and sometimes other out-of-band data goes. It's not interesting right now because pipes don't normally deal with it.
The pipe connects the standard output of the process to the left to the standard input of the process of the right. You can think of it as a dedicated program that takes care of copying everything that one program prints, and feeding it to the next program (the one after the pipe symbol). It's not exactly that, but it's an adequate enough analogy.
Each pipe operates on exactly two things: the standard output coming from its left and the input stream expected at its right. Each of those could be attached to a single process or another bit of the pipeline, which is the case in a multi-pipe command line. But that's not relevant to the actual operation of the pipe; each pipe does its own.
The redirection operator (>) does something related, but simpler: by default it sends the standard output of a process directly to a file. As you can see it's not the opposite of a pipe, but actually complementary. The opposite of > is unsurprisingly <, which takes the content of a file and sends it to the standard input of a process (think of it as a program that reads a file byte by byte and types it in a process for you).
In short, as described, there are three key 'special' file descriptors to be aware of. The shell by default sends the keyboard to stdin and sends stdout and stderr to the screen.
A pipeline is just a shell convenience which attaches the stdout of one process directly to the stdin of the next.
There are a lot of subtleties to how this works; for example, the stderr stream might not be piped as you would expect.
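A small demonstration of that last point (the file names are only illustrative): stderr is not part of the pipe unless you merge it in yourself.
ls /etc/passwd /no/such/file | grep passwd      # the error message bypasses grep
                                                # and still lands on the terminal
ls /etc/passwd /no/such/file 2>&1 | grep such   # merging stderr into stdout first
                                                # lets grep see the error line too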
I have spent quite some time trying to write a detailed but beginner friendly explanation of pipelines in Bash. The full content is at:
https://effective-shell.com/docs/part-2-core-skills/7-thinking-in-pipelines/
A pipe takes the output of a process, by output I mean the standard output (stdout on UNIX) and passes it on the standard input (stdin) of another process. It is not the opposite of the simple right redirection > which purpose is to redirect an output to another output.
For example, take the echo command on Linux which is simply printing a string passed in parameter on the standard output. If you use a simple redirect like :
echo "Hello world" > helloworld.txt
the shell will redirect the normal output initially intended to be on stdout and print it directly into the file helloworld.txt.
Now, take this example which involves the pipe :
ls -l | grep helloworld.txt
The standard output of the ls command will be fed to the standard input of grep, so how does this work?
Programs such as grep, when used without a file argument, simply read and wait for something to arrive on their standard input (stdin). When they catch something, like the output of the ls command, grep acts normally and finds an occurrence of what you're searching for.
Pipes are very simple like this.
You have the output of one command. You can provide this output as the input into another command using pipe. You can pipe as many commands as you want.
ex:
ls | grep my | grep files
This first lists the files in the working directory. That output is checked by the grep command for the word "my". The output of this is then passed to the second grep command, which finally searches for the word "files". That's it.
The pipe operator takes the output of the first command, and 'pipes' it to the second one by connecting stdin and stdout.
In your example, instead of the output of dmesg command going to stdout (and throwing it out on the console), it is going right into your next command.
| puts the STDOUT of the command on the left side into the STDIN of the command on the right side.
If you use multiple pipes, it's just a chain of pipes: the first command's output is set as the second command's input, the second command's output is set as the next command's input, and so on.
It's available in Linux- and Windows-based command interpreters.
All of these answers are great. Something that I would just like to mention is that a pipe in bash (which has the same concept as a Unix/Linux or Windows named pipe) is just like a pipe in real life.
If you think of the program before the pipe as a source of water, the pipe as a water pipe, and the program after the pipe as something that uses the water (with the program output as water), then you pretty much understand how pipes work.
And remember that all apps in a pipeline run in parallel.
Regarding the efficiency of pipes:
A command can access and process the data at its input before the previous command in the pipe has completed, which means better utilization of computing power when resources are available.
A pipe does not require saving the output of one command to a file before the next command can access its input (there is no I/O operation between the two commands), which means fewer costly I/O operations and less disk space used.
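A sketch of the difference, with big.log standing in for some large input file:
grep ERROR big.log > /tmp/step1.txt   # without a pipe: the intermediate result
sort /tmp/step1.txt                   # is written to disk and read back again

grep ERROR big.log | sort             # with a pipe: sort consumes lines while grep
                                      # is still producing them, nothing hits the disk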
If you treat each unix command as a standalone module,
but you need them to talk to each other using text as a consistent interface,
how can it be done?
cmd                      input                  output
echo "foobar"            string                 "foobar"
cat "somefile.txt"       file                   string inside the file
grep "pattern" "a.txt"   pattern, input file    matched string
You can say | is a metaphor for passing the baton in a relay marathon.
It's even shaped like one!
cat -> echo -> less -> awk -> perl is analogous to cat | echo | less | awk | perl.
cat "somefile.txt" | echo
cat passes its output for echo to use.
What happens when there is more than one input?
cat "somefile.txt" | grep "pattern"
There is an implicit rule that says "pass it as input file rather than pattern" for grep.
You will slowly develop the eye for knowing which parameter is which by experience.
