How to get error text in the iperf message? [duplicate] - shell

I am rather confused about the purpose of these three files. If my understanding is correct, stdin is the file into which a program writes its requests to run a task in the process, stdout is the file into which the kernel writes its output (and from which the process requesting it accesses the information), and stderr is the file into which all the exceptions are entered. On opening these files to check whether this actually occurs, I found nothing that seems to suggest so!
What I would like to know is what exactly the purpose of these files is: an absolutely dumbed-down answer with very little tech jargon, please!

Standard input - this is the file handle that your process reads to get information from you.
Standard output - your process writes conventional output to this file handle.
Standard error - your process writes diagnostic output to this file handle.
That's about as dumbed-down as I can make it :-)
Of course, that's mostly by convention. There's nothing stopping you from writing your diagnostic information to standard output if you wish. You can even close the three file handles totally and open your own files for I/O.
When your process starts, it should already have these handles open and it can just read from and/or write to them.
By default, they're probably connected to your terminal device (e.g., /dev/tty) but shells will allow you to set up connections between these handles and specific files and/or devices (or even pipelines to other processes) before your process starts (some of the manipulations possible are rather clever).
An example being:
my_prog <inputfile 2>errorfile | grep XYZ
which will:
create a process for my_prog.
open inputfile as your standard input (file handle 0).
open errorfile as your standard error (file handle 2).
create another process for grep.
attach the standard output of my_prog to the standard input of grep.
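If you want to see the same plumbing with standard tools, here is a hedged sketch (the file names are invented, and ls's exact error message varies by system):

ls /etc /nonexistent 2>errors.txt | grep passwd
# stdout of ls flows into grep; the complaint about /nonexistent
# went to file handle 2, so it lands in errors.txt instead:
cat errors.txt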
Re your comment:
When I open these files in /dev folder, how come I never get to see the output of a process running?
It's because they're not normal files. While UNIX presents everything as a file in a file system somewhere, that doesn't make it so at the lowest levels. Most files in the /dev hierarchy are either character or block devices, effectively a device driver. They don't have a size but they do have a major and minor device number.
When you open them, you're connected to the device driver rather than a physical file, and the device driver is smart enough to know that separate processes should be handled separately.
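You can see this for yourself. For example (a sketch: dates and exact permissions will differ on your machine):

$ ls -l /dev/tty /dev/null
crw-rw-rw- 1 root tty  5, 0 Jan  1 00:00 /dev/tty
crw-rw-rw- 1 root root 1, 3 Jan  1 00:00 /dev/null

The leading c marks a character device, and the pair "5, 0" is the major and minor device number, sitting where a regular file's size would normally be.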
The same is true for the Linux /proc filesystem. Those aren't real files, just tightly controlled gateways to kernel information.

It would be more correct to say that stdin, stdout, and stderr are "I/O streams" rather
than files. As you've noticed, these entities do not live in the filesystem. But the
Unix philosophy, as far as I/O is concerned, is "everything is a file". In practice,
that really means that you can use the same library functions and interfaces (printf,
scanf, read, write, select, etc.) without worrying about whether the I/O stream
is connected to a keyboard, a disk file, a socket, a pipe, or some other I/O abstraction.
Most programs need to read input, write output, and log errors, so stdin, stdout,
and stderr are predefined for you, as a programming convenience. This is only
a convention, and is not enforced by the operating system.
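A quick way to convince yourself of this, using nothing but cat (which simply copies stdin to stdout); notes.txt here is a made-up file name:

cat              # stdin is your keyboard/terminal
cat < notes.txt  # stdin is a disk file
echo hi | cat    # stdin is a pipe from another process

cat's code is identical in all three cases; only the thing behind file descriptor 0 changes.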

As a complement to the answers above, here is a summary of redirections:
EDIT: This graphic is not entirely correct.
The first example does not use stdin at all; it passes "hello" as an argument to the echo command.
The graphic also says that 2>&1 has the same effect as &>; however,
ls Documents ABC > dirlist 2>&1
#does not give the same output as
ls Documents ABC > dirlist &>
This is because &> requires a file to redirect to, while 2>&1 simply sends stderr to the same place as stdout.
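To use &> correctly, you give it a target file. A sketch of the two equivalent spellings in bash:

ls Documents ABC > dirlist 2>&1
ls Documents ABC &> dirlist     # same effect: both streams go to dirlist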

I'm afraid your understanding is completely backwards. :)
Think of "standard in", "standard out", and "standard error" from the program's perspective, not from the kernel's perspective.
When a program needs to print output, it normally prints to "standard out". A program typically prints output to standard out with printf, which prints ONLY to standard out.
When a program needs to print error information (not necessarily exceptions, those are a programming-language construct, imposed at a much higher level), it normally prints to "standard error". It normally does so with fprintf, which accepts a file stream to use when printing. The file stream could be any file opened for writing: standard out, standard error, or any other file that has been opened with fopen or fdopen.
"standard in" is used when the file needs to read input, using fread or fgets, or getchar.
Any of these files can be easily redirected from the shell, like this:
cat /etc/passwd > /tmp/out # redirect cat's standard out to /tmp/out
cat /nonexistent 2> /tmp/err # redirect cat's standard error to /tmp/err
cat < /etc/passwd # redirect cat's standard input to /etc/passwd
Or, the whole enchilada:
cat < /etc/passwd > /tmp/out 2> /tmp/err
There are two important caveats: First, "standard in", "standard out", and "standard error" are just a convention. They are a very strong convention, but it's all just an agreement that it is very nice to be able to run programs like this: grep echo /etc/services | awk '{print $2;}' | sort and have the standard outputs of each program hooked into the standard input of the next program in the pipeline.
Second, I've given the standard ISO C functions for working with file streams (FILE * objects) -- at the kernel level, it is all file descriptors (int references to the file table) and much lower-level operations like read and write, which do not do the happy buffering of the ISO C functions. I figured to keep it simple and use the easier functions, but I thought all the same you should know the alternatives. :)

I think people saying stderr should be used only for error messages is misleading.
It should also be used for informative messages that are meant for the user running the command and not for any potential downstream consumers of the data (i.e. if you run a shell pipeline chaining several commands, you do not want informative messages like "getting item 30 of 42424" to appear on stdout, as they will confuse the consumer, but you might still want the user to see them).
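A minimal sketch of the idea (the item counts are invented):

for i in 1 2 3; do
  echo "getting item $i of 3" >&2   # progress: for the user's eyes
  echo "item $i"                    # data: for the next program in the pipe
done | wc -l

wc -l counts only the three data lines; the progress messages bypass the pipe and appear on the terminal.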
See this for historical rationale:
"All programs placed diagnostics on the standard output. This had
always caused trouble when the output was redirected into a file, but
became intolerable when the output was sent to an unsuspecting
process. Nevertheless, unwilling to violate the simplicity of the
standard-input-standard-output model, people tolerated this state of
affairs through v6. Shortly thereafter Dennis Ritchie cut the Gordian
knot by introducing the standard error file. That was not quite enough.
With pipelines diagnostics could come from any of several programs
running simultaneously. Diagnostics needed to identify themselves."

stdin
Reads input through the console (e.g. Keyboard input).
Used in C with scanf
scanf(<formatstring>,<pointer to storage> ...);
stdout
Produces output to the console.
Used in C with printf
printf(<string>, <values to print> ...);
stderr
Produces 'error' output to the console.
Used in C with fprintf
fprintf(stderr, <string>, <values to print> ...);
Redirection
The source for stdin can be redirected. For example, instead of coming from keyboard input, it can come from a file (cat < file.txt), or from another program (ps | grep <userid>).
The destinations for stdout, stderr can also be redirected. For example stdout can be redirected to a file: ls . > ls-output.txt, in this case the output is written to the file ls-output.txt. Stderr can be redirected with 2>.
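For instance (a sketch; the nonexistent path is invented):

ls /nonexistent 2> errors.txt   # the complaint goes into errors.txt
ls . > out.txt 2> errors.txt    # listing and errors go to separate files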

Using ps aux reveals current processes, all of which are listed in /proc/ as /proc/(pid)/. Each process's open file descriptors appear under /proc/(pid)/fd/, and reading one of them connects you to whatever that descriptor is attached to:
/proc/(pid)/fd/0 - standard input
/proc/(pid)/fd/1 - standard output
/proc/(pid)/fd/2 - standard error
In practice this only worked well for /bin/bash; other processes generally had nothing readable on 0, but many had errors written on 2.
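You can inspect your own shell's descriptors this way ($$ is the shell's PID; the pts number and dates will differ):

$ ls -l /proc/$$/fd
lrwx------ 1 user user 64 Jan  1 00:00 0 -> /dev/pts/0
lrwx------ 1 user user 64 Jan  1 00:00 1 -> /dev/pts/0
lrwx------ 1 user user 64 Jan  1 00:00 2 -> /dev/pts/0

All three streams of an interactive shell typically point at the same terminal device.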

For authoritative information about these files, check out the man pages by running this command in your terminal:
$ man stdout
But for a simple answer, each file is for:
stdout for the output stream
stdin for the input stream
stderr for printing errors or log messages.
Every Unix program has each one of these streams.

stderr does not do I/O cache buffering, so if your application needs to print critical message info (errors, exceptions) to the console or to a file, use it. stdout, on the other hand, uses I/O cache buffering, so there is a chance the application closes before your messages are actually written to the file, which leaves debugging complex.

A file with associated buffering is called a stream and is declared to be a pointer to a defined type FILE. The fopen() function creates certain descriptive data for a stream and returns a pointer to designate the stream in all further transactions. Normally there are three open streams with constant pointers declared in the header and associated with the standard open files.
At program startup three streams are predefined and need not be opened explicitly: standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). When opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
https://www.mkssoftware.com/docs/man5/stdio.5.asp

Here is a lengthy article on stdin, stdout and stderr:
What Are stdin, stdout, and stderr on Linux?
To summarize:
Streams Are Handled Like Files
Streams in Linux—like almost everything else—are treated as though
they were files. You can read text from a file, and you can write text
into a file. Both of these actions involve a stream of data. So the
concept of handling a stream of data as a file isn’t that much of a
stretch.
Each file associated with a process is allocated a unique number to
identify it. This is known as the file descriptor. Whenever an action
is required to be performed on a file, the file descriptor is used to
identify the file.
These values are always used for stdin, stdout, and stderr:
0: stdin
1: stdout
2: stderr
Ironically I found this question on stack overflow and the article above because I was searching for information on abnormal / non-standard streams. So my search continues.

Related

Why go programs print output to terminal screen but not /dev/stderr?

As I see it in the Go source,
Go will print output to os.Stderr, which is
Stderr = NewFile(uintptr(syscall.Stderr), "/dev/stderr")
So why, when I run this program in my terminal with the command go run main.go,
is the output printed to the terminal screen and not to /dev/stderr?
// main.go
package main

import "log"

func main() {
	log.Println("this is my first log")
}
In standard Unix/Linux terminals, both stdout and stderr are connected to the terminal so the output goes there.
Here's a shell snippet to clarify this:
$ echo "joe" >> /dev/stderr
joe
Even though we echoed "joe" to something that looks like a file, it gets emitted to the screen. Replace /dev/stderr with /tmp/foo and you won't see the output on the screen (though it will be appended to the file /tmp/foo).
In Go you can specifically select which stream to output to by passing it to functions like fmt.Fprintf in its first argument.
Well, several things are going on here.
First, on a UNIX-like system (and you appear to be on a Linux-based one), the environment in which each user-space program runs includes the concept of the so-called "standard I/O streams": each program bootstrapped by the OS and given control automatically has three file descriptors opened and available, representing the standard input stream, the standard output stream, and the standard error stream.
Second, typically (but not always) the spawned program inherits these streams from its parent program. For the case of an interactive shell running in a terminal (or a terminal emulator), that parent program is the shell, and so the standard I/O streams of the spawned program are inherited from the shell.
The shell's standard I/O streams are, in turn, naturally connected to the terminal it runs in: that's why it's possible to input data to the shell and read what it prints back. You actually type into the terminal, not into the shell; it's the terminal which delivers that data to the shell, and the case for the shell's output is just the reverse.
Third, that /dev/stderr is a Linux-specific "hack" which is a virtual device meaning "whatever my stderr is connected to".
That is, when a process opens that special file, it gets back a file descriptor connected to whatever the process' stderr is already connected to.
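On many Linux systems you can see the indirection directly (a sketch; the exact link target can vary by distribution):

$ ls -l /dev/stderr
lrwxrwxrwx 1 root root 15 Jan  1 00:00 /dev/stderr -> /proc/self/fd/2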
Fourth, let's grok the code example you cited:
NewFile(uintptr(syscall.Stderr), "/dev/stderr")
Here, a call to os.NewFile is made, receiving two arguments.
To cite its documentation:
$ go doc os.NewFile
func NewFile(fd uintptr, name string) *File
NewFile returns a new File with the given file descriptor and name. The returned value will be nil if fd is not a valid file descriptor.
<…>
OK, so this function takes a raw kernel-level file descriptor and the name of the file it is supposed to have been opened to.
That latter bit is crucial: the OS kernel itself is (almost) oblivious to what sort of stream a file descriptor actually represents — at least as far as its public API is concerned.
So, when NewFile is called to obtain an instance of *os.File for the program's standard error stream by the log package,
it does not open the file "/dev/stderr" (even though it exists);
it merely uses its name, because os.NewFile requires one.
It could have used "" there to much the same effect, except for a change in error reporting: if something failed when using the resulting *os.File, the error output would not have included the name "/dev/stderr".
The syscall.Stderr value is merely the number of the file descriptor connected to the standard error stream.
On UNIX-compatible kernels it's always 2; you can run go doc syscall.Stderr and see for yourself.
To recap,
The call NewFile(...) you referred to does not open any files;
it merely wraps an already open file descriptor, connected to the standard error stream of the current process, into a value of type *os.File, which is used throughout the os package for I/O on files.
On Linux, the special virtual device file /dev/stderr does really exist but it has nothing to do with what's happening here.
When you run a program in an interactive shell without using any I/O redirection, the standard streams of the created process are connected to the same "sinks and sources" as those of the shell. And they, in turn, are most of the time connected to the terminal which hosts the shell.
Now I urge you to fetch an introductory book on the design of UNIX-like operating systems and read it.

Shell IO redirection order, pipe version

I have seen this question:
Shell redirection i/o order.
But I have another question. If this line fails to redirect stderr to file:
ls -xy 2>&1 1>file
Then why this line can redirect stderr to grep?
ls -xy 2>&1 | grep ls
I want to know how it is actually being run underneath.
It is said that 2>&1 redirects stderr to a copy of stdout. What does "a copy of stdout" mean? What is actually being copied?
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Does the other redirections go through the OS as well (I don't think so, as the terminal can handle this itself)?
The pipe redirection (connecting standard output of one command to the stdin of the next) happens before the redirection performed by the command.
That means that by the time 2>&1 happens, the stdout of ls is already set up to connect to the stdin of grep.
See the man page of bash:
Pipelines
The standard output of command is connected via a pipe to the standard input of command2. This connection is performed before any redirections specified by the command (see REDIRECTION below). If |& is used, command's standard error, in addition to its standard output, is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
(emphasis mine).
Whereas in the former case (ls -xy 2>&1 1>file) nothing like that happens: when 2>&1 is performed, the stdout of ls is still connected to the terminal (and hasn't yet been redirected to the file).
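Side by side, the order is everything here (a sketch, reusing the ls -xy example):

ls -xy 2>&1 1>file   # 2>&1 first: stderr now points at the terminal
                     # (stdout's current target); errors stay on screen
ls -xy 1>file 2>&1   # 1>file first: stderr then copies stdout's new
                     # target, so errors end up in the file too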
That answers my first question. What about the others?
Well, your second question has already been answered in the comments. (What is being duplicated is a file descriptor).
As to your last question(s),
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Does the other redirections go through the OS as well (I don't think so, as the terminal can handle this itself)?
it is the shell which attaches the standard streams of the processes it creates (pipes first, then <>’s, as you have just learned). In the default case, it attaches them to its own streams, which might be attached to a tty, with which you can interact in a number of ways, usually a terminal emulation window, or a serial console, whatever. Terminal is a very ambiguous word.

When would piping work - does application have to adhere to some standard format? What is stdin and stdout in Unix?

I am using a program that allows me to do
echo "Something" | app outputilfe
But a similar program doesnt do that (and its a bash script that runs Java -jar internally).
Both works with
app input output
This leads to me this question . And why some programs do it and some don't ?
I am basically trying to understand, in a larger sense, how programs interoperate so fluently in *nix, the idea behind it, and what stdin and stdout are in simple layman's terms.
A simple way of writing a program that takes an input file and writes an output file is:
Write the code in such a manner that the first 2 positional arguments get interpreted as input and output strings, where input should be a file that is available in the file system and output is the path where it is going to write back the binary data.
But this is not how it is. It seems I can stream it. That's a real paradigm shift for me. I believe it's the file descriptor abstraction that makes it possible? That is, you normally write code to expect an FD as a positional argument and not the real file string? Which in turn means the output file gets opened and the FD is sent to the program once I execute the command in bash?
It can read from the terminal and send its display to the screen or to an application. What makes this possible? I think there is some concept of file descriptors that I am missing here.
Do applications 'talk' in terms of file descriptors and not file names as strings? In Unix everything is a file, and that means an FD is used?
Few other related reads :
http://en.wikipedia.org/wiki/Pipeline_(Unix)
What is a simple explanation for how pipes work in BASH?
confused about stdin, stdout and stderr?
Here's a very non-technical description of a relatively technical topic:
A file descriptor, in Unix parlance, is a small number that identifies a given file or file-like thingy. So let's talk about file-like-thingies in the Unix sense.
What's a Unix file-like-thingy? It's something that you can read from and/or write to. So standard files that live in a directory on your hard disk certainly can qualify as files. So can your terminal session – you can type into it, so it can be read, and you can read output printed on it. So can, for that matter, network sockets. So can (and we'll talk about this more) pipes.
In many cases, an application will read its data from one (or more) file descriptors, and write its results to one (or more) file descriptors. From the point of view of the core code of the application, it doesn't really care which file descriptors it's using, or what they're "hooked up" to. (Caveat: Different file descriptors can be hooked up to file-like-thingies with different capabilities, like read-only-ness; I'm ignoring this deliberately for now.) So I might have a trivial program which looks like (ignoring error checking):
#include <unistd.h>   /* read, write */

void zcrew_up_zpelling(int in_fd, int out_fd) {
    char c;
    /* Copy bytes from in_fd to out_fd, turning every 's' into a 'z' */
    while (read(in_fd, &c, 1) > 0) {
        if (c == 's') c = 'z';
        write(out_fd, &c, 1);
    }
}
Don't worry too much about what this code does (please!); instead, just notice that it's copying-and-modifying from one file descriptor to another.
So, what file descriptors are actually used here? Well, that's up to the code that calls zcrew_up_zpelling(). There are, however, some vague conventions. Many programs that need a single source of input default to using stdin as the file descriptor they'll read from; many programs that need a single source of output default to using stdout as the file descriptor they'll write to. Many of these programs provide ways to use a different file descriptor instead, often one hooked up to a named file.
Let's write a program like this:
#include <fcntl.h>   /* open, O_RDONLY, O_WRONLY, O_CREAT */

int main(int argc, char **argv) {
    int in_fd = 0;  // Descriptor of standard input
    int out_fd = 1; // Descriptor of standard output
    if (argc >= 2) in_fd = open(argv[1], O_RDONLY);
    if (argc >= 3) out_fd = open(argv[2], O_WRONLY | O_CREAT, 0644); /* create the output file if needed */
    zcrew_up_zpelling(in_fd, out_fd);
    return 0;
}
So, let's run our program:
./our_program
Hmm, it's waiting for input. We didn't pass any arguments, so it's just using stdin and stdout. What if we type "Using stdin and stdout"?
Uzing ztdin and ztdout
Interesting. Let's try something different. First, we create a file containing "Hello worlds" named, let's say, hello.txt.
./our_program hello.txt
What do we get?
Hello worldz
And one more run:
./our_program hello.txt output.txt
Our program returns immediately, but creates a file called output.txt containing... our output!
Deep breath. At this point, I'm hoping that I've successfully explained how a program is able to have behavior independent of the type of file-like-thingy hooked up to a file descriptor, and also to choose what file-like-thingy gets hooked up.
What about that pipe thing I mentioned? What about streaming? Why does it work when I say:
echo Tessting | ./our_program | grep -o z | wc -l
Well, each of these programs follows some form of the conventions above. our_program, as we know, by default reads from stdin and writes to stdout. grep does the same thing. wc by default reads from stdin and writes to stdout -- it likes to live at the end of pipelines. And echo doesn't read from a file descriptor at all (it just reads arguments, like we did in main()), but it writes to stdout, so it likes to live at the front of pipelines.
How does this all work? Well, to get much deeper we have to talk about the shell. The shell is the program that starts other command line programs, and it gets to choose what file descriptors are already hooked up to when a program starts. Those magic numbers of 0 and 1 for stdin and stdout we used earlier? That's a Unix convention, and the shell hooks up a file-like-thingy to each of those file descriptors before starting your program. When the shell sees you asking for a pipeline by entering a command with | characters, it hooks the stdout of one program directly into the stdin of the next program, using a file-like-thingy called a pipe. A file-like-thingy pipe, just like a plumbing pipe, takes whatever is put in one end and puts it out the other.
So, we've combined three things:
Code that deals with file descriptors, without worrying about what they're hooked to
Conventions for default file descriptors to use for normal tasks
The shell's ability to set up a program's file descriptors to "pipe" to other programs'
Together, these give us the ability to write programs that "play nice" with streaming and pipelines, without each program having to understand where it sits in the pipeline and what's happening around it.

What is a simple explanation for how pipes work in Bash?

I often use pipes in Bash, e.g.:
dmesg | less
Although I know what this outputs, it takes dmesg and lets me scroll through it with less, I do not understand what the | is doing. Is it simply the opposite of >?
Is there a simple, or metaphorical explanation for what | does?
What goes on when several pipes are used in a single line?
Is the behavior of pipes consistent everywhere it appears in a Bash script?
A Unix pipe connects the STDOUT (standard output) file descriptor of the first process to the STDIN (standard input) of the second. What happens then is that when the first process writes to its STDOUT, that output can be immediately read (from STDIN) by the second process.
Using multiple pipes is no different than using a single pipe. Each pipe is independent, and simply links the STDOUT and STDIN of the adjacent processes.
Your third question is a little bit ambiguous. Yes, pipes, as such, are consistent everywhere in a bash script. However, the pipe character | can represent different things: the double pipe (||), for example, represents the "or" operator.
In Linux (and Unix in general) each process has three default file descriptors:
fd #0 Represents the standard input of the process
fd #1 Represents the standard output of the process
fd #2 Represents the standard error output of the process
Normally, when you run a simple program, these file descriptors are configured as follows by default:
default input is read from the keyboard
Standard output is configured to be the monitor
Standard error is configured to be the monitor also
Bash provides several operators to change this behavior (take a look at the >, >> and < operators, for example). Thus, you can redirect the output to something other than the standard output, or read your input from a stream other than the keyboard. Especially interesting is the case where two programs collaborate in such a way that one uses the output of the other as its input. To make this collaboration easy, Bash provides the pipe operator |. Please note the use of collaboration instead of chaining; I avoided that term because a pipe is in fact not sequential. A normal command line with pipes has the following aspect:
> program_1 | program_2 | ... | program_n
The above command line is a little bit misleading: the user could think that program_2 gets its input once program_1 has finished its execution, which is not correct. In fact, what bash does is launch ALL the programs in parallel, configuring the inputs and outputs so that every program gets its input from the previous one and delivers its output to the next one (in the order established on the command line).
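You can watch the parallelism yourself (a sketch: cat here is just a stand-in consumer):

( for i in 1 2 3; do date; sleep 1; done ) | cat
# the date lines appear one per second: cat is already running and
# prints each line as it arrives, not after the loop finishes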
Following is a simple example from Creating pipe in C of creating a pipe between a parent and a child process. The important part is the call to pipe(), and how the child closes fd[0] (the reading side) while the parent closes fd[1] (the writing side). Please note that the pipe is a unidirectional communication channel. Thus, data can only flow in one direction: from fd[1] towards fd[0]. For more information, take a look at the manual page of pipe().
#include <stdio.h>
#include <stdlib.h>    /* exit */
#include <string.h>    /* strlen */
#include <unistd.h>    /* pipe, fork, read, write, close */
#include <sys/types.h>

int main(void)
{
    int fd[2], nbytes;
    pid_t childpid;
    char string[] = "Hello, world!\n";
    char readbuffer[80];

    pipe(fd);

    if ((childpid = fork()) == -1)
    {
        perror("fork");
        exit(1);
    }

    if (childpid == 0)
    {
        /* Child process closes up the read side of the pipe */
        close(fd[0]);
        /* Send "string" (plus its terminating NUL) through the write side */
        write(fd[1], string, strlen(string) + 1);
        exit(0);
    }
    else
    {
        /* Parent process closes up the write side of the pipe */
        close(fd[1]);
        /* Read in a string from the pipe */
        nbytes = read(fd[0], readbuffer, sizeof(readbuffer));
        printf("Received string: %s", readbuffer);
    }
    return 0;
}
Last but not least, when you have a command line in the form:
> program_1 | program_2 | program_3
The return code of the whole line is set to that of the last command, in this case program_3. If you would like to know about an intermediate return code, you have to set the pipefail option or read it from the PIPESTATUS array.
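A short illustration at an interactive bash prompt:

$ false | true; echo "${PIPESTATUS[@]}"   # per-stage exit codes
1 0
$ set -o pipefail
$ false | true; echo $?                   # whole pipeline now reports failure
1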
Every standard process in Unix has at least three file descriptors, which are sort of like interfaces:
Standard output, which is the place where the process prints its data (most of the time the console, that is, your screen or terminal).
Standard input, which is the place it gets its data from (most of the time it may be something akin to your keyboard).
Standard error, which is the place where errors and sometimes other out-of-band data goes. It's not interesting right now because pipes don't normally deal with it.
The pipe connects the standard output of the process to the left to the standard input of the process of the right. You can think of it as a dedicated program that takes care of copying everything that one program prints, and feeding it to the next program (the one after the pipe symbol). It's not exactly that, but it's an adequate enough analogy.
Each pipe operates on exactly two things: the standard output coming from its left and the input stream expected at its right. Each of those could be attached to a single process or another bit of the pipeline, which is the case in a multi-pipe command line. But that's not relevant to the actual operation of the pipe; each pipe does its own.
The redirection operator (>) does something related, but simpler: by default it sends the standard output of a process directly to a file. As you can see it's not the opposite of a pipe, but actually complementary. The opposite of > is unsurprisingly <, which takes the content of a file and sends it to the standard input of a process (think of it as a program that reads a file byte by byte and types it in a process for you).
In short, as described, there are three key 'special' file descriptors to be aware of. The shell by default connects the keyboard to stdin and sends stdout and stderr to the screen.
A pipeline is just a shell convenience which attaches the stdout of one process directly to the stdin of the next.
There are a lot of subtleties to how this works; for example, the stderr stream is not piped along with stdout unless you ask for that explicitly (with 2>&1 | or its bash shorthand |&).
I have spent quite some time trying to write a detailed but beginner friendly explanation of pipelines in Bash. The full content is at:
https://effective-shell.com/docs/part-2-core-skills/7-thinking-in-pipelines/
A pipe takes the output of a process (by output I mean the standard output, stdout on UNIX) and passes it on to the standard input (stdin) of another process. It is not the opposite of the simple right redirection >, whose purpose is to redirect output to a file rather than to another process.
For example, take the echo command on Linux, which simply prints a string passed as a parameter on the standard output. If you use a simple redirect like:
echo "Hello world" > helloworld.txt
the shell will redirect the output initially intended for stdout and write it directly into the file helloworld.txt.
Now, take this example which involves the pipe :
ls -l | grep helloworld.txt
The standard output of the ls command is fed to the input of grep, so how does this work?
Programs such as grep, when used without file arguments, simply read and wait for something to arrive on their standard input (stdin). When they catch something, like the output of the ls command, grep acts normally, finding occurrences of whatever you're searching for.
Pipes are very simple like this.
You have the output of one command. You can provide this output as the input into another command using pipe. You can pipe as many commands as you want.
ex:
ls | grep my | grep files
This first lists the files in the working directory. This output is checked by the grep command for the word "my". The output of this now goes into the second grep command, which finally searches for the word "files". That's it.
The pipe operator takes the output of the first command, and 'pipes' it to the second one by connecting stdin and stdout.
In your example, instead of the output of the dmesg command going to stdout (and being thrown out on the console), it goes right into your next command.
| connects the STDOUT of the command on the left side to the STDIN of the command on the right side.
If you use multiple pipes, it's just a chain of pipes: the first command's output is set to the second command's input, the second command's output is set to the next command's input, and so on.
It's available in Linux and Windows command interpreters alike.
All of these answers are great. Something that I would just like to mention is that a pipe in bash (the same concept exists as a Unix/Linux pipe or a Windows named pipe) is just like a pipe in real life.
If you think of the program before the pipe as a source of water, the pipe as a water pipe, and the program after the pipe as something that uses the water (with the program output as water), then you pretty much understand how pipes work.
And remember that all apps in a pipeline run in parallel.
Regarding the efficiency of pipes:
A command can access and process the data at its input before the previous command in the pipe has completed, which means better utilization of computing power if the resources are available.
A pipe does not require saving the output of one command to a file before the next command reads it (there is no I/O operation between the two commands), which means less costly disk I/O and no extra disk space.
If you treat each unix command as a standalone module,
but you need them to talk to each other using text as a consistent interface,
how can it be done?
cmd                        input                  output
echo "foobar"              string                 "foobar"
cat "somefile.txt"         file                   *string inside the file*
grep "pattern" "a.txt"     pattern, input file    *matched string*
You can say | is a metaphor for passing the baton in a relay marathon.
It's even shaped like one!
cat -> less -> awk -> perl is analogous to cat | less | awk | perl.
cat "somefile.txt" | less
cat passes its output for less to use.
What happens when there is more than one possible input?
cat "somefile.txt" | grep "pattern"
There is an implicit rule for grep that says "treat the piped data as the input file rather than the pattern".
You will slowly develop an eye for knowing which parameter is which by experience.

Redirection Doesn't Work

I want to put my program's output into a file. I typed the following:
./prog > log 2>&1
But there is nothing in the file "log". I am using Ubuntu 11.10 and the default shell is bash.
Does anybody know the cause of this AND how I can debug it?
There are many possible causes:
The program reads the input from log file while you try to redirect into it with truncation (see Why doesn't "sort file1 > file1" work?)
The output is buffered, so you don't see data in the file until the output buffer is flushed. You can manually call fflush, or output std::flush if using C++ I/O streams, etc.
The program is smart enough and disables output if the output stream is not a terminal.
You look at the wrong file (i.e. in another directory).
You try to dump file's contents incorrectly.
Your program outputs '\0' as the first character, so the output appears to be empty even though there is some data.
Name your own.
Your best bet is to run this application under a debugger (like gdb), or use strace, and see what the program is actually doing. I mean, really, output redirection has worked for the last 40 years or so, so the problem must be somewhere else.
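A hedged sketch of the strace route (assuming strace is installed; ./prog stands in for your binary):

strace -o trace.txt -e trace=write ./prog > log 2>&1
grep 'write(1' trace.txt   # did the program ever write to fd 1 (stdout)?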
