I am capturing some terminal session output (using the "script" command) for testing.
The script command works great and gets everything no matter what, even sub-shells and remote login sessions. However, it would be really useful to know WHEN things happened, so I'd like to have a timestamp on each line of the output of script. Unfortunately, script doesn't use stdout (for obvious reasons), so I can't just pipe it to, say, ts. It also doesn't recognize any form of the special '-' file name.
I'd like to do something like this: > script |ts > foo
where script would open "|ts > foo" as a file, but writes to it would go through the pipe to ts, which itself would redirect to the file foo.
Is there any shell syntax or trickery to do this? (prefer ksh, can use bash.)
The only thing I could come up with was to use a named pipe, but that may have buffering issues, and seems really clumsy for this use.
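Roughly, that named-pipe version would look like this (a sketch only; it assumes ts from moreutils, and the paths are placeholders):

mkfifo /tmp/typescript.pipe           # throwaway fifo
ts < /tmp/typescript.pipe > foo &     # timestamp each line and write it to foo
script /tmp/typescript.pipe           # script treats the fifo as its output file
rm /tmp/typescript.pipe

Process substitution (bash and ksh93) is essentially the same trick without the explicit fifo, assuming script is willing to write to the /dev/fd path it gets handed:

script >(ts > foo)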
BTW I used the script command because regular stdout capture doesn't get all the terminal interaction. As far as I can tell, it's the only command which does that.
I have a shell script that is using echo to give continuous output (the progress of an rsync) that I am using AppleScript to run with administrator privileges. Before, I was using NSTask to run the shell script, but I couldn't find a way to run it with the privileges it needed, so now I am using AppleScript to run it. When it was running via NSTask, I could use an output pipe and waitForDataInBackgroundAndNotify to get the continuous output and put it into a text field, but now that I am using AppleScript, I cannot seem to find a way to accomplish this. The shell script is still using echo, but it seems to get lost in the AppleScript "wrapper." How do I make sure that the AppleScript sees the output from the shell script and passes it on to the application? Remember, this isn't one single output, but continuous output.
Zero is correct. When you use do shell script, you can consider it similar to using backticks in Perl. The command will be executed, and everything sent to STDOUT will be returned as the result.
The only workaround would be to have your command write its output to a temporary file and then use do shell script "foo" without waiting. From there, you can read from the file sequentially using native AppleScript commands. It's clunky, but it'll work in a pinch.
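A minimal sketch of the shell side of that workaround (the command string you would hand to do shell script; the script path and log file are placeholders):

/path/to/rsync_wrapper.sh > /tmp/rsync_progress.log 2>&1 &

Redirecting everything and backgrounding it with & lets do shell script return immediately; the AppleScript side can then poll /tmp/rsync_progress.log on a timer and push any new lines into the text field.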
I'm using a bash script to automatically run a simulation program. This program periodically prints the current status of the simulation in the console, like "Iteration step 42 ended normally".
Is it possible to abort the script, if the console output is something like "warning: parameter xyz outside range of validity"?
And what can I do, if the console output is piped to a text file?
Sorry if this sounds stupid, I'm new to this :-)
Thanks in advance
This isn't an ideal job for Bash. However, you can certainly capture and test STDOUT inside a Bash iteration loop using an admixture of conditionals, grep-like tools, and command substitution.
On the other hand, if Bash isn't doing the looping (e.g. it's just waiting for an external command to finish) then you need to use something like expect. Expect is purpose-built to monitor output streams for regular expressions, and perform branching based on expression matches.
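A rough sketch of that kind of Bash loop (untested; ./simulation, the log name, and the warning text are placeholders, and note that a simulator may buffer its output differently when writing into a pipe):

./simulation 2>&1 | tee simulation.log | while IFS= read -r line; do
    case $line in
        *"outside range of validity"*)
            echo "aborting on: $line" >&2
            pkill -f ./simulation   # stop the simulator; adjust the match to your program
            exit 1                  # leave the loop with a non-zero status
            ;;
    esac
done || exit 1                      # propagate the abort to the surrounding script

The tee keeps a full copy in simulation.log, which also covers the case where the output needs to end up in a text file.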
I have a program (that I did not write) which is not designed to read commands from a file. Entering commands on STDIN is pretty tedious, so I'd like to be able to automate it by writing the commands in a file for re-use. Trouble is, if the program hits EOF, it loops infinitely trying to read the next command, dumping an endless torrent of menu options on the screen.
What I'd like to be able to do is cat a file containing the commands into the program via a pipe, then use some sort of shell magic to have it switch from the file to STDIN when it hits the file's EOF.
Note: I've already considered using cat with the '-' for STDIN. Unfortunately (I didn't know this before), piped commands wait for the first program's output to terminate before starting the second program -- they do not run in parallel. If there's some way to get the programs to run in parallel with that kind of piping action, that would work!
Any thoughts? Thanks for any assistance!
EDIT:
I should note that my goal is not only to prevent the system from hitting the end of the commands file. I would like to be able to continue typing in commands from the keyboard when the file hits EOF.
I would do something like
(cat your_file_with_commands; cat) | sh your_script
That way, when the file with commands is done, the second cat will feed your script with whatever you type on stdin afterwards.
Same as Idelic's answer, with simpler syntax ;)
cat your_file_with_commands - | sh your_script
I would think expect would work for this.
Have you tried using something like tail -f commandfile | command? I think that should pipe the lines of the file to command without closing the file descriptor afterwards. Use -n to specify the number of lines to be piped if tail -f doesn't catch all of them.
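For example (a sketch, assuming GNU tail):

tail -n +1 -f commandfile | command    # -n +1 starts at the first line, -f keeps the pipe open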
I have a program that I want to automate runs for, since it takes a while to complete. For some reason it outputs everything to stderr instead of stdout, and I'd like to check on its progress, so I find myself needing to redirect stderr output within a start command.
I tried this:
start "My_Program" "C:\Users\Me\my_program.exe" --some --presets --for --my_program.exe --output "C:\Users\Me\output_file_for_my_program" "C:\Users\Me\input_file_for_my_program" 2>"C:\Users\Me\my_program_output.log"
But it turns out that the redirect is being picked up by start, so that I get a 0-byte file with the result of "start" - namely, nothing. Is there any way to make the output redirection attach in some way to the output of my_program?
I've experimented with escaping, and neither "^2>" nor "2^>" seems to work.
If "Workaround Oriented Programmming" is acceptable (it probably is, you are programming Windows Batch lol), you could put the problematic code line in another .BAT file, without any "start" and then "start" this other BAT.
I recently discovered 'comint-show-output' in emacs shell mode, which jumps to the first line of shell output, which I find incredibly handy when looking at shell output that exceeds a screen length. The advantages of this command over scrolling with 'page up' are A) you don't have to scan with your eyes for the first line of the output B) you only have to hit the key combo once (instead of 'page up' a number of times which probably is not known beforehand).
I thought about ending all my commands with '| more' but actually this is not what I want since most of the time, I want to retain all output in the terminal buffer, and I usually want to see the end of the shell output first.
I use OS X. Is there an equivalent terminal app (on OS X) and shell (on the remote Linux machine) combination (so I can do something similar without using emacs all the time - I know, crazy talk)? I normally use bash, but would be fine with switching shells just for this feature.
The way I do this sort of thing is by sending my output to a file and then watching the file as it is written. You still get the results of the command dumped to terminal history in real time and can still inspect the output's actual contents further after the fact (or in another terminal, etc...)
command > output &   # run the command in the background, sending its output to a file
tail -f output       # follow the file as it grows
head output          # jump back to the first lines of the output at any time
You could always do something in bash like this:
alias foo='!! | more'
which would make foo run the previous command with more. I'm not sure of any way to do exactly what you are suggesting.
If you're expecting a lot of output and don't want to run your command twice, you can use tee(1) to fork the output:
my-command | tee /tmp/my-command.log | less
This will pipe the output to a paginator (less), while simultaneously logging the output to a file (in this case, a file named /tmp/my-command.log). If you need to review the output after you've quit from less, you can just cat the log file instead of re-running the command.