Copy output of commands to a file but make them believe they are writing in a terminal

To copy the output of my commands launched from a shell I use
exec > >(tee myfile)
and then the next commands will be logged into the file.
The problem is that the commands then detect that their output is no longer a terminal, so they may change how they display it. For instance, with ls, when the redirection is active the output is printed in a single column.
I know I can use unbuffer with a pipe, but that is not what I want: I want to log all the output from my shell.
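You can see the behaviour the asker describes without any redirection: ls checks whether its standard output is a terminal and changes its formatting accordingly.
ls          # on a terminal: multi-column output
ls | cat    # stdout is a pipe: ls prints one name per line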

You can use script, which copies all output to a file (usually typescript). It does not interfere with the program, allowing it to think it is writing to the terminal.
The program is available "everywhere", though some options differ:
script(1) Linux
script(1) OSX
The main difference I encounter is in how the output filename and the command are specified. On Linux you can give the command as an option (-c), while on OSX the command consists of the argument(s) following the filename. When using the -c option on Linux, keep in mind that script runs the command using the shell identified by the SHELL environment variable, which can actually be "any" program (I've used a text editor). Running a shell to execute a command means it may introduce new environment variables (normally not a problem).
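For example, here is a sketch of the two calling conventions (ls /tmp is just a placeholder command):
script -c 'ls /tmp' myfile     # Linux (util-linux): command via -c, filename as an argument
script myfile ls /tmp          # OSX/BSD: filename first, command as the remaining arguments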
If you do not use the -c option, script starts a new shell, writing everything to its output until you exit from that shell. To use it as you were doing for redirection, you could make an alias like
alias redir='script myfile'
to write to myfile, or
alias redir='script -a myfile'
to append to myfile. In either case, exiting the shell (press Control-D, or type exit) will end the "redirection".
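A typical session with the alias might look like this (the exact start/end messages vary between script versions):
$ redir
Script started, file is myfile
$ ls
...
$ exit
Script done, file is myfile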
Aside from ls (which ignores the terminal database), most programs use the TERM environment variable. It is possible that you do something unusual in initializing your shell, so that running script would reinitialize TERM to a different value than you are currently using. To see this, you could do something like
env >before.log
script -c "env >after.log"
diff before.log after.log

Related

AppleScript do shell script returns error for "which" command

I'm writing an AppleScript that asks the user which remote cloud service, and then which bucket, they would like to mount in Mac OS using rclone. But in order to run the rclone command from an AppleScript, you need to include the entire path to the app. For me that is: /usr/local/bin/rclone
I want to include, as a variable, the location of rclone using the which command in a shell script like this:
set rcloneLOC to paragraphs of (do shell script "which rclone")
But I get a script error stating "The command exited with a non-zero status." This happens even if I just try to run do shell script "which rclone" by itself. If I type which rclone into terminal, I get the result I expect.
How do I get this to work?
As @GordonDavisson suggests, you can view your path using echo $PATH.
To change your applescript's path (and view the change) try this:
do shell script "export PATH=/usr/local/bin:$PATH ; echo $PATH"
The first part of the shell command (up to the semicolon) prepends /usr/local/bin to your default path. The second part returns your updated path. The semicolon makes the second part run only after the first part has finished.
It's important to note that this change is temporary: it is in effect only for this do shell script call, while it runs. This is why you need the combined commands in order to see the effect.
I'll use rsync as an example since I don't have rclone; substitute rclone to get its path. Combine the export command with which, like so:
do shell script "export PATH=/usr/local/bin:$PATH ; which rsync"
The result is /usr/local/bin/rsync.
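Putting it together for the original question, a minimal sketch (assuming rclone really does live in /usr/local/bin):
set rcloneLOC to do shell script "export PATH=/usr/local/bin:$PATH ; which rclone"
rcloneLOC then holds the full path, ready to be used when building the mount command.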
To clarify a couple of things… the environment is a set of conditions that apply for each user. You can get a basic rundown of it by running man 7 environ in Terminal. There is an env command which lists your settings and can run a command in a modified environment; man env will provide info on it. At the bottom of these man pages, you should see references to related commands which you can also look up. Meanwhile, from within Script Editor, you could run a 1-line script with do shell script "env" and see the corresponding environment for AppleScript's shell.
Based on Apple's documentation (or my interpretation of it), they chose this setup because it is relatively secure and portable. You know what you get every time you run a shell script. You don't need to use a more modern shell to run the which command. You can modify the environment as needed, the same way you would while using the terminal.
Finally, Apple's Technical Note 2065 covers using shell scripts with AppleScript. You can likely get more info here or on the Unix Stack Exchange.
NB All of the above is just my understanding, which is limited.

How does Linux store the output of a command in a variable?

Can anyone help me understand how Linux stores terminal output in a variable?
files=`ls`
echo $files
a.txt
b.txt
I want to know how Linux stores this in a variable. I mean, is whatever ls writes to stdout redirected into the variable files, or does some other operation take place?
The terminal isn't really relevant. Your question seems to imply that all commands write directly to the terminal, then command substitution somehow "copies" what was written to a variable.
It's the other way around: commands write to standard output, which is some file the command receives from whoever starts the command. Standard output for an interactive shell is the terminal, but the command substitution overrides that, causing the shell to capture the output in memory, then using that output to set the value of the variable.
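A short demonstration, using the modern $( ) form of command substitution (equivalent to the backticks in the question; list.txt is just an example name):
files=$(ls)       # ls writes to stdout, but the shell captures that output in memory
echo "$files"     # the captured text is now simply the variable's value
ls > list.txt     # same mechanism, except stdout is pointed at a file instead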

Use of a pipe prevents the left-hand process from exporting variables. Why?

I have the following one-line bash file foo.sh:
export PATH=<new path>
In another script, I use:
echo $PATH # --> old path
. foo.sh | grep bar
echo $PATH # --> old path!!!!
Depending on the machine I execute this second script on, PATH is or is not updated in the main script. On the machines where it does not work, it fails no matter what command is on the right of the pipe. Conversely, if I drop the pipe, it always works, whatever the machine.
My machines are supposed to have the exact same configuration (even though, considering this issue, it looks as if they don't). Bash version is 4.1.2.
Do you have any idea where/what to look to understand this behaviour?
In bash, all parts of a pipeline are executed in separate subshells, which is why sourcing the script doesn't change the path.
Some shells are able to run the last command in the current shell environment (ksh93, for example), but bash does not (unless job control is disabled and the lastpipe shell option is enabled, and the pipeline is not executed in the background).
The bash manual states, in the "Pipelines" section,
Each command in a pipeline is executed as a separate process (i.e., in a subshell).
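A minimal sketch of the lastpipe behaviour mentioned above (it requires bash 4.2 or later, so it will not help on the 4.1.2 installs from the question; job control must be off, as it is by default in scripts):
#!/bin/bash
shopt -s lastpipe
echo hello | read var    # read now runs in the current shell, not a subshell
echo "$var"              # prints "hello" instead of an empty line
Note that even with lastpipe, only the last element of the pipeline runs in the current shell, so . foo.sh | grep bar would still source foo.sh in a subshell.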

Bash interactive and non-interactive shell behaviour

I have a hard time with interactive and non-interactive shells. I don't understand which is which.
For example, I have read that non-interactive shells usually check the BASH_ENV variable at startup and execute whatever it points to.
So, what I did is set BASH_ENV to point to a script which only echoes OK. Then I typed bash in a terminal and the script echoed OK. But why? Didn't I start yet another INTERACTIVE shell by typing bash in the terminal, not a non-interactive one? Why did it execute BASH_ENV? I'm on Linux Mint Maya.
The only thing you can be certain of is what's shown in the manpage for bash (see INVOCATION) - that lists in detail which startup files are run in each instance.
However, there's nothing stopping (for example) one of those startup files running other files which would normally not be run.
By way of example, if .bash_profile had the following line:
. ~/.profile
it would also run the .profile script.
In fact the manpage states:
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
So, if you put that exact line in your startup scripts for an interactive shell like ~/.bash_profile, you'll also source the file pointed to by BASH_ENV.
Your best bet is to examine the INVOCATION section to find out which of the files will run, and then track through them (with something like set -x at the top of the script) to see what's getting called from where.
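You can watch BASH_ENV being honoured by a non-interactive shell directly (the /tmp paths are arbitrary):
echo 'echo OK from BASH_ENV' > /tmp/testenv.sh
echo 'echo running the script' > /tmp/test.sh
BASH_ENV=/tmp/testenv.sh bash /tmp/test.sh
This prints "OK from BASH_ENV" before "running the script", because the non-interactive shell sources the BASH_ENV file first.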
You can force Bash to be interactive with
bash -i
More info from the manpage:
-i
If the -i option is present, the shell is interactive.
Note, however, that a bash started from a terminal with no non-option arguments is interactive by default even without -i, so typing plain bash does give you an interactive shell. An interactive shell reading BASH_ENV suggests that one of your startup files sources it, as described in the answer above.
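A reliable way to check from inside a shell whether it is interactive is to look for the i flag in $-:
case $- in
  *i*) echo "interactive" ;;
  *)   echo "non-interactive" ;;
esac
Running that directly at a prompt prints interactive; putting it in a script and running the script prints non-interactive.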

Bash: What is the effect of "#!/bin/sh" in a bash script with curl

I put together a complex, long command line that successfully logs in to a site. If I execute it in the console it works, but if I copy and paste the same line into a bash script it does not work.
I tried a lot of things, but accidentally discovered that if I do NOT use the line
#!/bin/sh
it works! Why does this happen on my Mac OS X Lion? What does this line do in a bash script?
A bash script that is run via /bin/sh runs in sh compatibility mode, which means that many bash-specific features (herestrings, process substitution, etc.) will not work.
sh-4.2$ cat < <(echo 123)
sh: syntax error near unexpected token `<'
If you want to be able to use full bash syntax, use #!/bin/bash as your shebang line.
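You can also see the difference without editing the script: when you pass a script to an interpreter explicitly, the shebang line is ignored (myscript.sh is a placeholder name):
bash myscript.sh    # runs under bash, bash-specific syntax works
sh myscript.sh      # runs in sh mode, bashisms may fail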
"#!/bin/sh" is a common idiom to insure that the correct interpreter is used to run the script. Here, "sh" is the "Bourne Shell". A good, standard "least common denominator" for shell scripts.
In your case, however, "#!/bin/sh" seems to be the wrong interpreter.
Here's a bit more info:
http://www.unix.com/answers-frequently-asked-questions/7077-what-does-usr-bin-ksh-mean.html
Originally, we only had one shell on Unix. When you asked to run a command, the shell would attempt to invoke one of the exec() system calls on it. If the command was an executable, the exec would succeed and the command would run. If the exec() failed, the shell would not give up; instead it would try to interpret the command file as if it were a shell script.
Then Unix got more shells and the situation became confused. Most folks would write scripts in one shell and type commands in another. And each shell had differing rules for feeding scripts to an interpreter.
This is when the "#! /" trick was invented. The idea was to let the kernel's exec() system calls succeed with shell scripts. When the kernel tries to exec() a file, it looks at the first 4 bytes, which represent an integer called a magic number. This tells the kernel whether it should try to run the file or not. So "#! /" was added to the magic numbers that the kernel knows, and the kernel was extended to actually be able to run shell scripts by itself. But some people could not type "#! /"; they kept leaving the space out. So the kernel was extended a bit again to allow "#!/" to work as a special 3-byte magic number.
So #! /usr/bin/ksh and
#!/usr/bin/ksh now mean the same thing. I always use the former since at least some kernels might still exist that don't understand the latter.
And note that the first line is a signal to the kernel, and not to the shell. What happens now is that when shells try to run scripts via exec() they just succeed. And we never stumble on their various fallback schemes.
The very first line of the script can be used to select which script interpreter to use.
With
#!/bin/bash
You are telling the shell to invoke /bin/bash interpreter to execute your script.
Make sure there are no spaces or empty lines before #!/bin/bash, or it will not work.
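A minimal end-to-end example (hello.sh is a placeholder name):
$ cat hello.sh
#!/bin/bash
echo "hello from $BASH_VERSION"
$ chmod +x hello.sh
$ ./hello.sh      # the kernel reads the shebang and hands the file to /bin/bash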
