Display exact output of Unix command from a Bash script

I am trying to print out the output displayed from a command passed into a bash script. The problem I am trying to solve is how to get the output to look exactly like it would if you ran the command from the shell. For example, when I run ls, I see different colors for directories vs. files.
Here is some sample code of what I have so far:
#!/bin/bash
command="$#"
output=`$command`
echo "$output"
So my shell script takes in a command, runs the command, then prints the output. I know that I can customize the color of the output using color codes and echo -e, but I want the output to look just as it does when I run the command from the shell. Any idea of how I can do this?

If all you need is to display the output, you can run the command inline within your script (just let it write to stdout directly, without storing its output in some variable).
That is, you can replace:
output=`$command`
echo "$output"
with:
$command
or
eval "$command"
If you also need that output for some kind of processing, that would be a bit tricky. You can (for instance) use | tee /var/tmp/some-temp-file.$$ and then read the output from the temporary file.
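A minimal sketch of that approach (the temporary file name is just illustrative):

tmp=/var/tmp/some-temp-file.$$   # illustrative temporary file
$command | tee "$tmp"            # display the output and keep a copy
output=$(cat "$tmp")             # read it back for further processing
rm -f "$tmp"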

Some programs, such as ls, check whether standard output isatty() and behave differently depending on that. If you are capturing the command's output in a variable, the shell redirects its standard output to a pipe which is not a TTY.
There is not much you can do about this except read the manual page of each individual command to find out whether it supports options that make its behavior independent of whether its standard output is piped. For ls in particular, you can use the dir program as an alternative, which always produces human-friendly, column-formatted output.
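For example, GNU coreutils ls accepts flags that force its interactive formatting even when the output is captured (a sketch, assuming GNU ls):

output=$(ls --color=always -C)   # force color codes and column layout despite the pipe
echo "$output"                   # the escape sequences survive the capture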
On a more general level: what you are trying to do seems rather strange anyway. I'm sure there is a more robust way to accomplish whatever you are ultimately trying to do.

Why not just have your script as:
#!/bin/bash
"$#"
It will run whatever command line is passed as arguments and print its output unmodified.
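For instance (run.sh is a hypothetical name for the script above):

./run.sh ls -l /tmp    # runs ls -l /tmp; stdout goes straight to the terminal, so ls still sees a TTY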

Related

How does Linux store the output of a command in a variable?

Can anyone help me understand how Linux stores terminal output in a variable?
files=`ls`
echo "$files"
a.txt
b.txt
I want to know how Linux stores this in a variable. Is whatever ls writes to stdout redirected into the variable files, or does some other operation take place?
The terminal isn't really relevant. Your question seems to imply that all commands write directly to the terminal, then command substitution somehow "copies" what was written to a variable.
It's the other way around: commands write to standard output, which is some file the command receives from whoever starts it. Standard output for an interactive shell is the terminal, but command substitution overrides that, causing the shell to capture the output in memory and then use it as the value of the variable.
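You can observe this from within the command itself (a small sketch; [ -t 1 ] tests whether stdout is a terminal):

[ -t 1 ] && echo "stdout is a TTY"            # prints when run at the prompt
where=$( [ -t 1 ] && echo TTY || echo pipe )
echo "$where"                                  # prints "pipe": inside $(...) stdout is not a terminal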

Why does this simple shell code give the output as "/dev/fd/11"?

I was trying out some redirection in my mac zsh shell.
I see that echo <(echo $c) outputs /dev/fd/11. I have no idea why this happens. Can someone explain?
Note: It doesn't matter if $c is initialised or not.
echo < '' returns zsh: no such file or directory: , so I am at a loss understanding what's going on.
Process substitution effectively "expands" to a file name. You are passing that file name as the argument to echo, which dutifully writes it back to standard output. If you had written
cat <(echo $c)
you would get as output, as I think you expected, the output of the command echo $c, because cat would open /dev/fd/11 for reading and output its contents.
Got the answer from here
https://unix.stackexchange.com/questions/17107/process-substitution-and-pipe
Looks like <(COMMAND) is slightly different from < (the stdin redirect):
Pipes and input redirects shove content onto the STDIN stream. Process substitution runs the commands, saves their output to a special temporary file and then passes that file name in place of the command. Whatever command you are using treats it as a file name. Note that the file created is not a regular file but a named pipe that gets removed automatically once it is no longer needed.
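A quick way to see the difference between the two (a.txt and b.txt are illustrative file names):

echo <(true)                       # prints a path like /dev/fd/63 (bash) or /dev/fd/11 (zsh)
diff <(sort a.txt) <(sort b.txt)   # diff receives two /dev/fd paths and opens them itself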

CMake's execute_process and arbitrary shell scripts

CMake's execute_process command seems to only let you, well, execute a process - not an arbitrary line you could feed a command shell. The thing is, I want to use pipes, file descriptor redirection, etc. - and that does not seem to be possible. The alternative would be very painful for me (I think)...
What should I do?
PS: Answers for both CMake 2.8 and 3.x are of interest.
You can execute any shell script, using your shell's support for taking in a script within a string argument.
Example:
execute_process(
    COMMAND bash "-c" "echo -n hello | sed 's/hello/world/;'"
    OUTPUT_VARIABLE FOO
)
will result in FOO containing world.
Of course, you would need to escape quotes and backslashes with care. Also remember that running bash would only work on platforms which have bash - i.e. it won't work on Windows.
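To sanity-check the captured value, you could add (a trivial sketch, not part of the original answer):

message(STATUS "FOO = ${FOO}")    # prints: -- FOO = world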
execute_process command seems to only let you, well, execute a process - not an arbitrary line you could feed a command shell.
Yes, exactly this is stated in the documentation for that command:
All arguments are passed VERBATIM to the child process. No intermediate shell is used, so shell operators such as > are treated as normal arguments.
I want to use pipes
Multiple COMMANDs within the same execute_process invocation are actually piped:
Runs the given sequence of one or more commands with the standard output of each process piped to the standard input of the next.
file descriptor redirection, etc. - and that does not seem to be possible.
For complex things, just prepare a separate shell script and run it using execute_process. You can pass variables from CMake to this script via its parameters, or with a preliminary configure_file.
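A minimal sketch of that approach (the script name do_stuff.sh and the variable MY_INPUT are hypothetical):

execute_process(
    COMMAND bash "${CMAKE_CURRENT_SOURCE_DIR}/do_stuff.sh" "${MY_INPUT}"
    RESULT_VARIABLE STUFF_RESULT
)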
I needed to pipe two commands one after the other and actually learned that each COMMAND of the execute_process is piped already. So at least that much is resolved by simply adding commands one after the other:
execute_process(
    COMMAND echo "Hello"
    COMMAND sed -e 's/H/h/'
    OUTPUT_VARIABLE GREETINGS
    OUTPUT_STRIP_TRAILING_WHITESPACE)
Now the variable GREETINGS is set to hello.
If you indeed need a lot of file redirection (as you stated), you probably want to write an external script and then execute that script from CMakeLists.txt. It's really difficult to get all the escaping right in CMake.
If you can simplify your scripts to one command generating a file, then another handling that file, etc. then you can always use the INPUT_FILE and OUTPUT_FILE options. Or pass a filename to your command for the input.
It's often much cleaner to handle one file at a time. Although I understand that some commands may need multiple sources and destinations.
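A sketch of that pattern with the earlier sed example (the file names are illustrative; INPUT_FILE and OUTPUT_FILE are real execute_process options):

execute_process(
    COMMAND sed -e 's/H/h/'
    INPUT_FILE ${CMAKE_CURRENT_SOURCE_DIR}/greeting.txt
    OUTPUT_FILE ${CMAKE_CURRENT_BINARY_DIR}/greeting_lower.txt
)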

Copy output of commands to a file but make them believe they are writing in a terminal

To copy the output of my commands launched from a shell I use
exec > >(tee myfile)
and then the next commands will be logged into the file.
The problem is that the commands then know the output is no longer a terminal, so they may change how they display it. For instance, with ls, when the redirection is active the output is displayed in only one column.
I know I can use unbuffer when I use a pipe, but it is not what I want. I want to be able to log all the outputs I have from my shell.
You can use script, which copies all output to a file (usually typescript). It does not interfere with the program, allowing it to think it is writing to the terminal.
The program is available "everywhere", though some options differ:
script(1) Linux
script(1) OSX
The main difference that I encounter is how to specify the output filename and the command. On Linux you can give the command as an option, while on OSX the command consists of the arguments after the filename. When using the -c option on Linux, keep in mind that script runs the command using the shell identified by the SHELL environment variable. That can actually be "any" program (I've used a text editor). Running a shell to execute a command means that it may set new environment variables (normally not a problem).
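Concretely, the two invocations look roughly like this (myfile is the transcript file; the ls flags are illustrative):

script -c 'ls --color=auto' myfile    # Linux (util-linux): command passed via -c
script myfile ls -G                   # macOS (BSD): command follows the filename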
If you do not use the -c option, script starts a new shell, writing everything to its output until you exit from that shell. To use it as you were doing for redirection, you could make an alias like
alias redir='script myfile'
to write to myfile, or
alias redir='script -a myfile'
to append to myfile. In either case, exiting the shell (press Control-D, or type exit) will end the "redirection".
Aside from ls (which ignores the terminal database), most programs use the TERM environment variable. It is possible that you do something unusual in initializing your shell, so that running script would reinitialize TERM to a different value than you are currently using. To see this, you could do something like
env >before.log
script -c "env >after.log"
diff before.log after.log

Difference between typing a shell command, and save it to a file and using `cat myfile` to execute it?

I have an rsync command that works as expected when I type it directly into a terminal. The command includes several --include='blah' and --exclude='foo' type arguments. However, if I save that command to a one-line file called "myfile" and I try `cat myfile` (or, equivalently $(cat myfile)), the rsync command behaves differently.
I'm sure it is the exact same command in both cases.
Is this behavior expected/explainable?
I've found the answer to this question. When the shell substitutes `cat myfile`, it performs word splitting and filename expansion on the result, but it does not re-parse quote or escape characters (like \). So an argument such as --include='blah' reaches rsync with the literal quote characters still in it, which is not what happens when you type the command directly.
As a solution, I've just made "myfile" a shell script that I can execute rather than trying to use cat.
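A tiny demonstration of the literal quotes (the file name and contents are illustrative):

echo "--exclude='foo bar'" > myfile
printf '[%s]\n' $(cat myfile)
# prints [--exclude='foo] and [bar']: the quotes stay literal, and word splitting still happens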
