I have, say, two terminal sessions, pts/10 and pts/11. From pts/10, I want to capture the stdout of any process running in pts/11 and redirect it to a file. I know the output can be redirected from pts/11 itself (using >/dev/pts/10), but I don't want to do that. As I said, I want to 'capture' whatever pts/11 is printing to stdout. Is there some utility to do that?
I don't think you can do that, UNLESS you start something on pts/11 (either an output redirection, tee /dev/pts/10, or the script command).
If it were possible, it could essentially be used for hacking/snooping.
Imagine capturing a password on pts/10 when a wget --user=someuser --password=plain_text_password command is run on pts/11. (EDIT: OK, that was stdin, not stdout.) Still, there would be a serious security issue if it were possible.
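If you can start something on pts/11, the script command is the usual choice: it copies everything shown on that terminal into a file, which pts/10 can then follow with tail -f. A minimal sketch, assuming the util-linux script; the log path is just an example:

```shell
# On pts/11: record a command's terminal output to a log file
# (drop -c '...' to record a whole interactive session instead)
script -q -c 'echo hello from pts/11' /tmp/pts11.log

# On pts/10 you could then follow it live with:
#   tail -f /tmp/pts11.log
grep -q 'hello from pts/11' /tmp/pts11.log && echo captured
```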
Related
I have seen this question:
Shell redirection i/o order.
But I have another question. If this line fails to redirect stderr to file:
ls -xy 2>&1 1>file
then why can this line redirect stderr to grep?
ls -xy 2>&1 | grep ls
I want to know how it is actually being run underneath.
It is said that 2>&1 redirects stderr to a copy of stdout. What does "a copy of stdout" mean? What is actually being copied?
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Do the other redirections go through the OS as well (I don't think so, since the terminal can handle this itself)?
The pipe redirection (connecting standard output of one command to the stdin of the next) happens before the redirection performed by the command.
That means by the time 2>&1 happens, the stdout of ls is already setup to connect to stdin of grep.
See the man page of bash:
Pipelines
The standard output of command is connected via a pipe to the standard input of command2. This connection is performed before any redirections specified by the command (see REDIRECTION below). If |& is used, command's standard error, in addition to its standard output, is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
(emphasis mine).
Whereas in the former case (ls -xy 2>&1 1>file), nothing like that happens: when 2>&1 is performed, the stdout of ls is still connected to the terminal (and hasn't yet been redirected to the file).
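You can watch the ordering directly. The helper function below is just a stand-in for a command (like ls -xy) that writes to stderr; file names are arbitrary:

```shell
err() { echo oops >&2; }   # stand-in for a failing command such as ls -xy

# 2>&1 runs first: stderr is duplicated onto whatever stdout points at NOW
# (the terminal); only then does 1>/tmp/out move stdout to the file.
err 2>&1 1>/tmp/out        # "oops" still appears on the terminal
# /tmp/out stays empty

# With a pipe, the pipe is wired up BEFORE 2>&1 runs, so stdout already
# points at grep's stdin, and stderr gets duplicated onto the pipe:
err 2>&1 | grep oops       # grep receives "oops"
```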
That answers my first question. What about the others?
Well, your second question has already been answered in the comments. (What is being duplicated is a file descriptor).
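"Duplicated" here is duplication in the dup2() sense: two file descriptors end up referring to the same open file. You can see it from the shell by copying a descriptor with exec (fd 3 is an arbitrary choice):

```shell
exec 3>&1        # fd 3 now refers to the same open file as stdout
echo hello >&3   # writing through fd 3 goes wherever stdout goes
exec 3>&-        # close fd 3 again
```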
As to your last question(s),
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Does the other redirections go through the OS as well (I don't think so, as the terminal can handle this itself)?
it is the shell which attaches the standard streams of the processes it creates (pipes first, then <>’s, as you have just learned). In the default case, it attaches them to its own streams, which might be attached to a tty, with which you can interact in a number of ways, usually a terminal emulation window, or a serial console, whatever. Terminal is a very ambiguous word.
I have a script I use for building a program; I redirect its output to sed to highlight errors and such during the build.
This works great, but the problem is that at the end this build script starts an application which usually writes to the terminal, and stdout and stderr redirection doesn't seem to capture that output. I'm not exactly sure how this output gets printed, and it's kind of complicated to figure out.
buildAndStartApp # everything outputs correctly
buildAndStartApp 2>&1 | colorize # Catches build output, but not server output
Is there any way to capture all terminal output? The "script" command catches everything, but I would like the output to still print to my terminal rather than redirecting to a file.
I found out script has a -c option which runs a command and all of the output is printed to stdout as well as to a file.
My command ended up being:
script -c "buildAndStartApp" /dev/null | colorize
First, when you use script, the output does still go to the terminal (as well as redirecting to the file). You could do something like this in a second window to see the colorized output live:
tail -f typescript | colorize
Second, if the output of a command is going to the terminal even though you have both stdout and stderr redirected, it's possible that the command is writing directly to /dev/tty, in which case something like script that uses a pseudo-terminal is the only thing that will work.
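The difference is easy to reproduce with a command that writes straight to /dev/tty (this sketch assumes the util-linux script; file names are illustrative):

```shell
# This write bypasses stdout and stderr, so >/dev/null 2>&1 cannot catch it
# (commented out because it errors when there is no controlling terminal):
#   sh -c 'echo straight-to-tty > /dev/tty' > /dev/null 2>&1

# script gives the command a pseudo-terminal, so /dev/tty IS the typescript,
# and even the direct tty write lands in the log:
script -q -c 'sh -c "echo straight-to-tty > /dev/tty"' /tmp/typescript
grep -c straight-to-tty /tmp/typescript
```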
I am capturing some terminal session output (using the "script" command) for testing.
The script command works great, gets everything no matter what, even sub-shells and remote login sessions. However, it would be really useful to know WHEN things happened, so I'd like to have a timestamp on each line of the output of script. Unfortunately, script doesn't use stdout (for obvious reasons), so I can't just pipe it to, say, ts. It also doesn't recognize any form of the special '-' file name.
I'd like to do something like this: > script |ts > foo
Where script would open "|ts > foo" as a file, but writes to it would go through the pipe to ts, which itself would redirect to file foo.
Is there any shell syntax or trickery to do this? (prefer ksh, can use bash.)
The only thing I could come up with was to use a named pipe, but that may have buffering issues, and seems really clumsy for this use.
BTW I used the script command because regular stdout capture doesn't get all the terminal interaction. As far as I can tell, it's the only command which does that.
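One piece of trickery that should do this in bash (and ksh93, though not plain ksh88) is process substitution: >(cmd) expands to a /dev/fd path that script opens like a regular file, while the bytes actually flow through a pipe into cmd. Normally you'd put moreutils' ts there; the stamp function below is a stand-in so the sketch runs even without moreutils:

```shell
# stand-in timestamper (moreutils' ts would normally go here)
stamp() { while IFS= read -r line; do printf '%s %s\n' "$(date +%T)" "$line"; done; }

# -f makes script flush after every write, so timestamps are not delayed.
# Drop -c '...' to timestamp a whole interactive session.
script -q -f -c 'echo build done' >(stamp > /tmp/foo)
sleep 1           # the >() process runs in the background; let it finish
cat /tmp/foo      # each typescript line now carries a time prefix
```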
Does anyone happen to know how to direct STDOUT in Terminal to Cache? Sometimes I would like to copy text from STDOUT somewhere else, e.g. my mail program, and it seems always a bit inconvenient to me to either copy the output manually or create a new temporary file.
Is there an easy way to do this?
Thanks a lot!
Alex
It's not clear exactly what you're asking. But if you're talking about capturing stdout to file whilst still being able to see it on the console, then you can use tee (assuming you're using *nix):
./myApp | tee stdout.txt
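With process substitution, tee can also feed a second program instead of a file, which is how you would hand the output to a clipboard tool (pbcopy on macOS, xclip or xsel on X11). Those tools need a GUI session, so the sketch below uses wc as a stand-in for the receiving program:

```shell
# duplicate stdout: one copy to the terminal, one into another program
echo 'some output' | tee >(wc -c > /tmp/count)
sleep 1            # the >() process runs in the background; let it finish
cat /tmp/count     # byte count of the duplicated text

# with a real clipboard tool instead:
#   ./myApp | tee >(pbcopy)
```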
I have some sort of learning block for cron, and no matter what I read, I can never get a good understanding of it. I asked for help from my webhost to create a cron job that runs a python script every two hours.
This is what he sent back:
0 */2 * * * python /path/to/file.py >> /dev/null 2>&1
I get that the first bit is saying every hour evenly divisible by two, and the second part is using python to execute my file, but the rest I don't really know.
The support guy sent me an email back saying
That means that stdout and stderr will be redirected nowhere to keep
you clean of garbled messages, and command outputs if any (useful and
common in cron).
To test script functionality, use the same without redirection.
Which makes sense, because I remember >> being used in the command prompt to write output to files. I still don't get two things though. First, what does 2>&1 do? And second, by redirection, is he talking about sending the output to /dev/null? If it didn't go there, and I did want to confirm it was working, where would it go?
2 represents the stderr stream, and it's saying to redirect it to same place that stream 1 (stdout) was directed, which is /dev/null (sometimes referred to as the "bit bucket").
If you didn't want the output to go to /dev/null, you could put, for example, a filename there, and the output of stderr and stdout would go there.
Ex:
0 */2 * * * python /path/to/file.py >> your_filename 2>&1
Finally, the >> (as opposed to >) means append, so in the case of a filename, the output would be appended instead of overwriting the file. With /dev/null, it doesn't matter though since you are throwing away the output anyway.
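Both effects, the append and the stderr merge, are easy to watch with a stand-in for the python script (file names here are arbitrary):

```shell
job() { echo normal-output; echo error-output >&2; }   # stand-in for file.py

rm -f /tmp/cron_demo.log
job >> /tmp/cron_demo.log 2>&1   # first run
job >> /tmp/cron_demo.log 2>&1   # second run appends rather than overwriting
cat /tmp/cron_demo.log           # all four lines: stdout and stderr together
```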
2>&1 redirects all error output to the same stream as the standard output (e.g. in your case to /dev/null = nowhere)
If you run python /path/to/file.py in a console window (e.g. removing the output redirection starting with >>) the output will be printed on your console (so you can read it visually)
Note: By default the output of cron jobs will be sent as an e-mail to the user owning the job. For that reason it is very common to always direct standard and error output to /dev/null.
>> is unnecessary there - /dev/null is a device that simply discards whatever is written to it, so it doesn't matter whether you use > or >>
2>&1 means send STDERR to the same place as STDOUT, i.e. /dev/null
The man page for cron explains what it does if you don't have the redirect; in general, it emails the admin with the output.
If you wanted to check it was working, you'd replace /dev/null with an actual file, say /tmp/log, and check that file. This is why there's a >> in the command: when logging to a real file, you want to append each time rather than overwrite it.
The >> appends standard output to /dev/null; the 2>&1 sends file descriptor 2 (standard error) to the same place that file descriptor 1 (standard output) is going.
The append is unusual but not actually harmful; you'd normally just write >. If you were dealing with a real file instead of /dev/null, appending is probably better, so using >> reduces the chances of running into problems when you change from /dev/null to /tmp/cron/job.log or whatever.
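The truncate-versus-append difference is quick to demonstrate (file names arbitrary):

```shell
echo one >  /tmp/trunc; echo two >  /tmp/trunc   # > truncates each time
echo one >  /tmp/app;   echo two >> /tmp/app     # >> appends to what is there

cat /tmp/trunc   # two
cat /tmp/app     # one, then two
```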
Throwing away errors is not necessarily a good idea, but if the command is 'chatty', that output will typically end up in an email to the user whose cron job it is.