Assigning stdin to file also assigns stdout to that file - bash

I executed the following bash script:
#!/usr/bin/env bash
exec 0<log
My understanding is that it permanently reassigns the stdin for this bash process to the log file. So, when I ran
ls > log
I expected that I was passing "ls" to stdin. Since I have not reassigned stdout, I was also expecting to see the result of the "ls" command in the terminal (where I normally see stdout). However, I saw the output of "ls" in the log file. Why is stdout in the log file and not the terminal?

ls writes to whatever file it is given for standard output. Without a redirection, that is whatever file it inherits from its parent (your script). With the redirection, you are explicitly providing the file log for standard output.
This is independent of whatever else the file log might be used for.
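To make this concrete, here is roughly what happens in the question's scenario (a sketch, assuming a file named log already exists):
exec 0<log    # this shell's fd 0 (stdin) now reads from log
ls > log      # the shell opens log a second time, for writing, and hands
              # that descriptor to ls as its stdout; the exec'd stdin is a
              # separate, unrelated open of the same file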

Related

Can the pipe operator be used with the stdout redirection operator?

We know that:
The pipe operator | takes the standard output of the left-side command and uses it as the standard input of the right-side process.
The stdout redirection operator > redirects stdout to a file.
And the question is: why can't ls -la | > file redirect the output of ls -la to file? (I tried, and the file is empty.)
Is it because the stdout redirection operator > is not a process?
In short, yes.
In a bit more detail: stdout, stderr and stdin are special file descriptors (FDs), but these remarks hold for every FD. Each FD refers to exactly one resource. That resource can be a file, a directory, a pipe, a device (such as a terminal or a hard drive) and more. One FD, one resource. It is not possible for stdout to write to both a pipe and a file at the same time. What tee does is take its stdin (typically from a pipe, but not necessarily), open a new FD for the filename given as its argument, and write whatever it reads from stdin to both stdout and the new FD. This copying of content from one FD to two is not available in bash directly.
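For example, to get the output of ls -la into both a file and a pipe, route it through tee:
ls -la | tee file           # listing goes to file and to the terminal
ls -la | tee file | wc -l   # listing goes to file; wc counts its lines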
EDIT: I tried answering the question as originally posted. As it stands now, DevSolar's comment is actually more on point: why does > file, without a command, make an empty file in bash?
The answer is in the Shell Command Language specification, under 2.9.1 Simple Commands. In step 1, the redirection is detected. In step 2, no fields remain, so there is no command to execute. In step 3, the redirection is performed in a subshell; opening file for writing with > creates it (or truncates it) even though no command ever writes to it, and whatever arrives on standard input from the pipe is simply discarded, since nothing reads it.
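You can see this for yourself:
ls -la | > file   # the redirection creates (or truncates) file, but nothing writes to it
wc -c < file      # prints 0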

stdout and stderr are out-of-order when a bash script is run from Atom

I tried to write a little program, in a .sh file, that lists a nonexistent directory and then echoes done:
#!/bin/bash
ls notexist
echo 'done'
But my console prints done on the first line, before the error message about the nonexistent directory:
done
ls: notexist: No such file or directory
I don't think bash automatically creates a thread for each line of code, does it? I'm using the terminal on macOS Big Sur.
Edit: I'm running the script indirectly, through the script package of the Atom text editor on macOS Big Sur. The problem goes away if I run it directly in the console via ./file.sh.
If we look at the source code to the Atom script plugin, the problem becomes clear:
It creates a BufferedProcess with separate stdout and stderr callbacks (using them, among other things, to determine whether any output has been written to each of these streams).
Implementing this requires stdout and stderr to be directed to different FIFOs. Unlike a typical terminal, where both streams share a single file and every write lands in one absolute order, there is no strict guarantee that content arriving on two separate FIFOs will be processed by those callbacks in the order it was written.
As a workaround, you can add exec 2>&1 to your script to put all content on stdout, or exec >&2 to put all content on stderr. Ideally, if the script plugin doesn't need to track the two streams separately, it would do this itself and register a callback only on the single stream to which everything has been redirected.
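Applied to the script from the question, the workaround looks like this:
#!/bin/bash
exec 2>&1      # from here on, stderr goes wherever stdout goes, so Atom
               # receives everything through one callback, in order
ls notexist
echo 'done'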

How to get the output redirection file as a parameter in bash?

I would like to know if it's possible to get the output redirection file name as a parameter in bash.
For example :
./myscript.sh parameter1 > outputfile
Is there a way to get "outputfile" as a parameter, like $2? In my script I have to do a few operations on outputfile, but I don't know which file I have to update... The second problem is that this script is already deployed and used by several tasks, so I cannot change how it is invoked...
Best regards
Redirections are not parameters to the program. When a program's output is redirected, the shell opens the file and connects file descriptor 1 to it before running the program. The program then simply writes to fd 1 (aka stdout) and the data goes to the file.
On Linux and similar systems you can use /dev/stdout, which is a symbolic link that resolves to whatever the process's stdout is currently connected to.
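A Linux-specific sketch (an assumption, since it relies on procfs): /proc/$$/fd/1 names whatever the script's stdout is connected to, and readlink resolves it to a path. /proc/self is avoided because, inside a command substitution, fd 1 belongs to the substitution pipe rather than to the script.
#!/bin/bash
# Resolve the file this script's stdout was redirected to (Linux only).
outputfile=$(readlink -f "/proc/$$/fd/1")
echo "stdout is connected to: $outputfile" >&2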

How can I print the output of a command to a file in a batch script?

I am trying to call a command in a batch file using "call", and I want to write whatever that command outputs to a file.
I went through this link but could not find the answer.
I am using this command
call %confPath% GetIniString %datFile% Keyname name >%newFile% >&1
but it always creates an empty file. How can I write the output of the above command to the file?
Thanks in advance.
>%newFile% redirects standard output to a file. In >&1, the 1 stands for standard output, and when no stream is specified on the left, standard output is the default, so >&1 redirects standard output onto itself, even though the first part already redirected it. This is illegal and shouldn't produce a file at all; in my tests it simply aborts with an error message.
The usual idiom, 2>&1, on the other hand, redirects stream 2, which is standard error, to standard output, ensuring that both regular output and error messages end up in the file.
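Applied to the command from the question, the corrected line would presumably be:
call %confPath% GetIniString %datFile% Keyname name >%newFile% 2>&1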

How to redirect the standard output of all the content within a shell script to a log file?

Suppose a shell script contains many system commands, each of which writes some content to stdout or stderr.
Instead of adding a redirection to each and every command separately, is there a way to redirect all the stdout and stderr generated by the script to a log file?
The obviously simple solution would be to use
./scriptfile.sh > foo.log
so that all stdout generated within the script file goes to foo.log.
I guess, however, that your question is aimed at a solution that works from within the script.
You can (re)direct a file descriptor to a given file using the exec builtin (line 2 in the following snippet redirects stdout to foo.log):
#!/bin/sh
exec 1>>foo.log
echo blue
echo blart
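To capture stderr in the same log as well, redirect fd 2 too (a variant of the same snippet):
#!/bin/sh
exec 1>>foo.log 2>&1
echo blue             # goes to foo.log
ls /nonexistent       # the error message also goes to foo.log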
