Error piping before output - shell

Let's say you have a group of commands, and some give errors, as follows:
begin;
echo "Starting Test";
ls;
bad_command -xyz;
end
If you don't redirect the output or errors, the result is as expected, i.e.
Starting Test
foo.txt
someotherfile.png
someDir
Unknown command: 'bad_command'
However, if I pipe the whole block to TextEdit by changing the last line to end 2>&1 | open -f -a TextEdit, the error comes first in the file and the order is messed up. This also happens when piping to other commands. Why does that happen, and how can I prevent it?

You are having this problem because output to a pipe is buffered for stdout but not for stderr, so the stderr output reaches the pipe first. The only way to solve this is to not use pipes and instead redirect your output to a temporary file, then work with that file for whatever you need to do.
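A minimal sketch of that approach, assuming the fish-style begin/end block from the question; /tmp/output.txt is just a placeholder path:
begin;
echo "Starting Test";
ls;
bad_command -xyz;
end > /tmp/output.txt 2>&1    # each command finishes before the next starts, so the order is preserved
open -f -a TextEdit < /tmp/output.txt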

Related

Pipe command in Bash

The pipe command is showing its results properly. When I try to use it with cat or > it doesn't show the output.
I have tried running the command with different spacing but it didn't help:
sort spiderman.txt | cat > superman.txt
sort spiderman.txt | > superman.txt
In the first command above, cat is not showing its output (the cat command is not showing the contents of superman.txt); however, if I run the cat command separately it shows the contents.
In the second command nothing happens to superman.txt.
Ideally it should have replaced the contents of superman.txt with the sorted contents of spiderman.txt, but nothing happens.
If you're trying simple output redirection you shouldn't pipe (|), just redirect (>):
sort spiderman.txt > superman.txt
If you want to show the content as well as redirect to a file - perhaps what you're looking for is tee?
sort spiderman.txt | tee superman.txt
Description:
The tee utility copies standard input to standard output, making a copy in zero or more files. The output is unbuffered.
> superman.txt (with no command) is processed as follows:
superman.txt is opened for writing and truncated.
The output redirection is removed from the current command.
Since there is nothing left, the empty command is treated as having run and exited successfully. Nothing actually reads from the pipe or writes to superman.txt.
cat is necessary as a command which does read from standard input and writes to standard output.
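A quick way to see the difference, assuming a small test file named spiderman.txt as in the question:
printf 'b\na\n' > spiderman.txt
sort spiderman.txt | > superman.txt      # empty command: superman.txt is truncated and stays empty
wc -c superman.txt                       # 0 bytes
sort spiderman.txt | cat > superman.txt  # cat reads the pipe and writes the file
cat superman.txt                         # a, then b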
It sometimes seems a little odd to me that more shells don't provide a minimal built-in that simply copies input to output with no frills, to avoid otherwise having to fork and exec cat. (I should say "no" rather than "more", as I'm not aware of any shell that does. zsh might, if I bothered to search through the documentation to find it.)
(Some shells will optimize away an extra fork when processing a command line; bash is not one of them, though. It forks once to create a process for the write end of the pipe, then forks again to run cat. I believe ksh would simply exec cat directly instead of unnecessarily forking, in which case a built-in cat is less necessary.)

Capture output of last executed command into a variable without affecting Vim and line returns

From this question (bash - automatically capture output of last executed command into a variable) I used this command:
PROMPT_COMMAND='LAST="`cat /tmp/x`"; exec >/dev/tty; exec > >(tee /tmp/x)'
It works, but when I use Vim I get this:
# vim
Vim: Warning: Output is not to a terminal
Then Vim opens. But it takes a while. Is there a way to get rid of this message and the slowdown?
Also, when I list a dir and echo $LAST, it removes the newlines (\n). Is there a way to keep the newlines (\n)?
I think what you ask for is hard to achieve. Vim tests whether its output is a terminal. The command you've provided redirects the output to the tee command; tee saves its input (which is also the command's output) to the file and writes it to the terminal. But Vim knows nothing about that. It only knows its output is not a terminal, so it prints the warning. And from Vim's source code:
[...]
if (scriptin[0] == NULL)
    ui_delay(2000L, TRUE);
TIME_MSG("Warning delay");
which means this redirection will always get you a 2-second delay.
Also, for example, the man vim command will not work with such redirections, because terminal output has attributes (e.g. width and height) which a generic file doesn't have. So... it won't work.
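As for the second part of the question, the newlines are not actually removed from $LAST; they disappear because the unquoted expansion is word-split by the shell. Quoting the variable keeps them, for example:
echo "$LAST"    # quoted: the embedded newlines are preserved
echo $LAST      # unquoted: word splitting collapses them to single spaces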

Pipe direct tty output to sed

I have a script used for building a program, whose output I redirect to sed to highlight errors and such during the build.
This works great, but the problem is that at the end of the build script it starts an application which usually writes to the terminal, and stdout and stderr redirection doesn't seem to capture that output. I'm not exactly sure how this output gets printed, and it's kind of complicated to figure out.
buildAndStartApp # everything outputs correctly
buildAndStartApp 2>&1 | colorize # Catches build output, but not server output
Is there any way to capture all terminal output? The "script" command catches everything, but I would like the output to still print to my terminal rather than redirecting to a file.
I found out script has a -c option which runs a command and all of the output is printed to stdout as well as to a file.
My command ended up being:
script -c "buildAndStartApp" /dev/null | colorize
First, when you use script, the output does still go to the terminal (as well as redirecting to the file). You could do something like this in a second window to see the colorized output live:
tail -f typescript | colorize
Second, if the output of a command is going to the terminal even though you have both stdout and stderr redirected, it's possible that the command is writing directly to /dev/tty, in which case something like script that uses a pseudo-terminal is the only thing that will work.
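A small illustration of that last point, assuming the util-linux script (the version that takes -c, as used above); the echoed text and file name are placeholders:
sh -c 'echo hello > /dev/tty' > captured.txt 2>&1                     # the write to /dev/tty still lands on the terminal
script -q -c 'sh -c "echo hello > /dev/tty"' /dev/null | colorize     # under script, it goes through the pseudo-terminal and reaches the pipe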

How to get error output and store it in a variable or file

I'm having a little trouble figuring out how to get error output and store it in a variable or file in ksh. So in my script I have cp -p source.file destination inside a while loop.
When I get the below error
cp: source.file: The file access permissions do not allow the specified action.
I want to grab it and store it in a variable or file.
Thanks
You can redirect the error output of the command like so:
cp -p source.file destination 2>> my_log.txt
It will append the error message to the my_log.txt file.
In case you want a variable you can redirect stderr to stdout and assign the command output to a variable:
my_error_var=$(cp -p source.file destination 2>&1)
In ksh (as per the question), as in bash and other sh derivatives, you can get all or just the stderr output from cp using redirection, then grab it in a variable (using $(), which is better than backticks in any vaguely recent version):
output=$(cp -p source.file destination 2>&1)
cp doesn't normally output anything, though the above would capture both stdout and stderr; to capture just stderr this way, add 1>/dev/null as well. The other solutions redirecting to a file could use cat or various other commands to output or process the logfile.
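For example, to keep only the error text in the variable (a sketch; the file names are placeholders from the question):
err=$(cp -p source.file destination 2>&1 1>/dev/null)
if [ -n "$err" ]; then
    printf '%s\n' "$err"    # stderr only; cp's stdout (normally empty) went to /dev/null
fi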
Reasons why I don't suggest outputting to temporary files:
Redirecting to a file and then reading it back in (via the read command or, more inefficiently, via $(cat file)), particularly for just a single line, is less efficient and slower; though it's not so bad if you want to append to it across multiple operations before displaying the errors. You'll also leave the temporary file around unless you ALWAYS clean it up; don't forget the cases where someone interrupts (e.g. Ctrl-C) or kills the script.
Using temporary files can also be a problem if the script is run multiple times at once (e.g. via cron if filesystem or other delays cause massive overruns, or just from multiple users), unless the temporary filename is unique.
Generating temporary files is also a security risk unless done very carefully, especially if the file data is processed again or the contents could be rewritten before display by something else to confuse or phish the user or break the script. Don't get into the habit of doing it too casually; read up on temporary files (e.g. mktemp) first via other questions here or Google.
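If a temporary file really is needed, a safer pattern (just a sketch, not tied to the original script) is to let mktemp pick a unique name and remove it with a trap:
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT INT TERM    # clean up even on Ctrl-C or kill
cp -p source.file destination 2> "$tmpfile"
[ -s "$tmpfile" ] && cat "$tmpfile"      # show any captured error text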
You can redirect STDERR by doing:
command 2> /path/to/file.txt

Switch from file contents to STDIN in piped command? (Linux Shell)

I have a program (that I did not write) which is not designed to read in commands from a file. Entering commands on STDIN is pretty tedious, so I'd like to be able to automate it by writing the commands in a file for re-use. Trouble is, if the program hits EOF, it loops infinitely trying to read the next command, dumping an endless torrent of menu options on the screen.
What I'd like to be able to do is cat a file containing the commands into the program via a pipe, then use some sort of shell magic to have it switch from the file to STDIN when it hits the file's EOF.
Note: I've already considered using cat with the '-' for STDIN. Unfortunately (I didn't know this before), piped commands wait for the first program's output to terminate before starting the second program -- they do not run in parallel. If there's some way to get the programs to run in parallel with that kind of piping action, that would work!
Any thoughts? Thanks for any assistance!
EDIT:
I should note that my goal is not only to prevent the system from hitting the end of the commands file. I would like to be able to continue typing in commands from the keyboard when the file hits EOF.
I would do something like
(cat your_file_with_commands; cat) | sh your_script
That way, when the file with commands is done, the second cat will feed your script with whatever you type on stdin afterwards.
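For example, with a hypothetical interactive program menu_prog and a commands.txt holding the scripted answers:
(cat commands.txt; cat) | menu_prog
Once the file is exhausted, the second cat keeps the pipe open and forwards whatever you type until you press Ctrl-D.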
Same as Idelic's answer, with simpler syntax ;)
cat your_file_with_commands - | sh your_script
I would think expect would work for this.
Have you tried using something like tail -f commandfile | command? I think that should pipe the lines of the file to command without closing the file descriptor afterwards. Use -n to specify the number of lines to be piped if tail -f doesn't catch all of them.
