Is it possible to run WSL Bash in non-interactive mode? - bash

One may want to use Bash on Windows in Task Scheduler or maybe as version-control hook scripts. Is it possible or supported?
If not, why? Is it a bug or a measure to prevent some issues?

Use @3d1t0r's solution, but also pipe to cat
wsl bash -c "man bash | cat" # noninteractive; streams the entire manpage to the terminal
wsl bash -c "man bash" # shows me the first page, and lets me scroll around; need to hit `q` to exit
If interactive mode is fine, bash -c is often superfluous
wsl man bash # same behavior as `wsl bash -c "man bash"`
Context (what in the world is "interactive" vs "non-interactive" invocation?):
The example above might not make it entirely clear, but man is changing its behavior based on what it's connected to.
In "interactive mode", man lets me scroll around, so that I can read the page at a comfortable reading pace.
In noninteractive mode, man dumps the entire manpage to the console, giving me no time to read anything.
"But wait," I hear you ask, "isn't man catting the man page because you asked it to? I see it right there--man bash | cat"
No, man has no idea what cat is. It just gets hints about whether STDOUT is connected to an interactive terminal.
Here's a different example, that consistently cats:
wsl bash -c "echo hey | grep --color e" # colors 'e' red
wsl bash -c "echo hey | grep --color e | cat" # colors disappear, what gives?
Now both examples are streaming their output, but the second one is defiantly ignoring my --color flag.
The common thread here is that man and grep both behave appropriately depending on whether they think their output is going to be read by a human or piped away somewhere.
Other common commands that auto-detect interactivity include ls and git. Usually the behavior change will involve output paging or colors (other variations exist).
paging is nice for humans, because humans generally can't read at the speed of streamed output.
paging is bad for robots, because paging is a lot of protocol overhead when you can just consume buffered streams. I mean seriously, why are humans so slow and chatty?
colors are nice for humans, because we like additional visual cues to aid visual distinction.
colors are bad for streaming to a file, because your file will be full of ANSI color-code garbage that most text editors don't display nicely.
Automatic behavior switching based on whether STDOUT is connected to an interactive terminal makes all these use cases usually "just work".
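If you're curious how a tool makes this decision, your own scripts can do the same check. Here's a minimal sketch using bash's built-in -t test (the script name check.sh is just an example):
#!/bin/bash
# -t 1 asks whether file descriptor 1 (stdout) is attached to a terminal
if [ -t 1 ]; then
    echo "stdout is an interactive terminal: page and colorize"
else
    echo "stdout is a pipe or file: stream plain output"
fi
Running ./check.sh prints the first message; ./check.sh | cat prints the second, via the same mechanism that makes man and grep switch behavior above.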
Restating the Original Question
In my use case and @bahrep's use case, interactive mode can be especially bad for unsupervised scripts (e.g. as launched by Task Scheduler). I am guessing @bahrep's scheduled runs hung on less getting invoked and waiting for human input.
For some reason, wsl-driven scripts launched from Task Scheduler give underlying scripts the wrong hints--they hint that the final output is attached to an interactive terminal.
Ideally, wsl would know from the windows side of the execution environment whether it is getting invoked interactively or not, and pass along the proper hint. Then I could just run wsl [command]. Until that happens, I'll need to use wsl bash -c "[command] | cat" as a workaround.
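As a concrete sketch, a Task Scheduler action using that workaround might look like this (the script path is purely hypothetical):
wsl.exe bash -c "/home/me/nightly-job.sh | cat"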

If I'm understanding your question correctly, the -c option is what you're looking for. It allows you to directly invoke a Linux command.
For example, to open the man page for bash (perhaps in order to find out about the -c option):
bash -c "man bash"
Note: You can leave off the quotes if you escape any spaces (e.g. bash -c man\ bash), but it's often easier to just use the quotes, as everything after the first unescaped space will otherwise be dropped from your command.
e.g. bash -c man bash will be interpreted the same as bash -c man.
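A related trick: if part of the command comes from a variable, you can pass it as a positional argument instead of splicing it into the command string, which sidesteps the quoting problem entirely (the page name here is just an example):
page="bash"
bash -c 'man "$1"' _ "$page"  # _ fills in $0; the page name arrives safely as $1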

Related

Are tee and script essentially equivalent?

In the context where I want to capture the stdout of a process in a file but still want to have this output displayed in the terminal I can choose between script and tee. In this context, are these tools essentially equivalent or is there a – possibly subtle – reason to prefer one over the other?
The programs script and tee are designed for different purposes:
script -- make typescript of terminal session
tee -- pipe fitting
Important differences between script and tee are:
script transmits the exit status of the process it supervises, while tee, being a filter, does not even know about it.
script captures stdin, stdout, stderr of the process it supervises while tee only catches the stream it filters.
None of these differences are relevant in the given context.
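If you do want to see the exit-status difference for yourself, here's a quick sketch assuming the Linux (util-linux) version of script with its -e/--return flag:
script -q -e -c "false" /dev/null; echo $?  # prints 1: script propagates the child's exit status
false | tee /dev/null; echo $?              # prints 0: the pipeline reports tee's status, not false's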
They have a very different purpose and the usage is quite different as well.
Script is to record what you are doing in a shell session. Handy to show a professor what you did, to show co-workers how to do something, etc...
Tee is just an application to write to both your screen and a file. Very handy when installing something or running a command that generates a lot of output and wanting to see the output realtime while still saving it to disk.
A notable difference between the two is that you can use script to create an interactive shell to log everything (e.g. script commands.log zsh), including colors and such. tee won't register as a tty, so in that regard it's pretty different.
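For the everyday tee use case described above, a minimal sketch (the build command is just an example):
make 2>&1 | tee build.log  # watch stdout and stderr live while keeping a copy on disk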
I found script to be useful for making control sequences work when piping to tee:
script -q -c 'python -c "import pdb, sys; pdb.set_trace()"' /dev/null \
| tee -a /tmp/tmp.txt
With only the following, Ctrl-A would be displayed as ^A etc:
python -c "import pdb, sys; pdb.set_trace()" | tee -a /tmp/tmp.txt
This is a minimal example. I am using tee here to capture the output from a pytest test run, but sometimes there might be a debugger in there, and cursor keys etc should work then.
Via https://unix.stackexchange.com/a/61833/1920.

Ignore user-input when running a unix command from within Matlab

I have a Matlab program that runs different unix commands fairly often. For this question let's assume that what I'm doing is:
unix('ls test')
It happens to me quite frequently that I accidentally press a key (like Enter or the arrow keys), e.g. when I'm waking up my display from standby. In theory this shouldn't interfere with the unix command. Unfortunately, though, Matlab will take this input and forward it right into the execution of the command. The above command then becomes something like this:
unix('ls te^[0Ast')
(Side note: ^[0A is the hex representation of the linefeed character)
Obviously, this will produce an error.
Does anyone have an idea how to work around this issue?
I was thinking that there might be a way to start Matlab with my script in a way that doesn't forward any user input from within the unix shell.
#!/bin/bash
matlab -nodisplay -nosplash -r "runMyScript();"
Can I somehow pipe the user-input somewhere else and isolate Matlab from any sort of input?
That's not a very specific question, but let me try. I can see several options. I am assuming that Matlab is a text-terminal application.
There is the nohup(1) command. Since you use Linux, chances are that there is a non-POSIX version of it, which says in its man page: If standard input is a terminal, redirect it from /dev/null.
$ nohup matlab -nodisplay -nosplash -r "runMyScript();"
You can redirect from /dev/null yourself:
$ matlab -nodisplay -nosplash -r "runMyScript();" < /dev/null
But Matlab can actually re-open its stdin, ignoring what you piped into it (for example ssh does that; you can't use echo password | ssh somewhere).
if you are running in a graphical environment you may want to minimise the window, so that it does not receive any input. Probably not your case; you would figure that out yourself :)
you may try to wake the display by hitting Ctrl, a similar key, or the mouse
You may run Matlab inside the screen(1) command and disconnect from the screen session or switch to a different window. Screen is a program allowing you to create virtual terminals (similar to virtual desktops in a GUI). If you haven't heard of screen, I suggest you look at some tutorials; googling for gnu screen tutorial seems to offer quite a few.
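For the screen route, a minimal sketch (the session name matlabjob is just an example):
screen -dmS matlabjob matlab -nodisplay -nosplash -r "runMyScript();"  # start detached; no stray keystrokes reach it
screen -r matlabjob                                                    # reattach later to check on progress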

use "!" to execute commands with same parameter in a script

In a shell, I run following commands without problem,
ls -al
!ls
the second invocation of ls also lists files with the -al flag. However, when I put the above commands into a bash script, a complaint is thrown:
!ls: command not found
How do I achieve the same effect in a script?
You would need to turn on both command history and !-style history expansion in your script (both are off by default in non-interactive shells):
set -o history
set -o histexpand
The expanded command is also echoed to standard error, just like in an interactive shell. You can prevent that by turning on the histverify shell option (shopt -s histverify), but in a non-interactive shell, that seems to make the history expansion a no-op.
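Put together, the approach looks like the following minimal script (note the next answer's caveat about bash 4.x, though):
#!/bin/bash
set -o history      # record commands in the history list
set -o histexpand   # enable !-style history expansion
ls -al
!ls                 # expands to the previous ls -al; the expansion is echoed to stderr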
Well, I wanted to have this working as well, and I have to tell everybody that the set -o history ; set -o histexpand method will not work in bash 4.x. It's not meant to be used there, anyway, since there are better ways to accomplish this.
First of all, a rather trivial example, just wanting to execute history in a script:
(bash 4.x or higher ONLY)
#!/bin/bash -i
history
Short answer: it works!!
The spanking new -i option stands for interactive, and history will work. But for what purpose?
Quoting Michael H.'s comment from the OP:
"Although you can enable this, this is bad programming practice. It will make your scripts (...) hard to understand. There is a reason it is disabled by default. Why do you want to do this?"
Yes, why? What is the deeper sense of this?
Well, THERE IS, which I'm going to demonstrate in the follow-up section.
My history buffer has grown HUGE, while some of those lines are script one-liners, which I really would not want to retype every time. But sometimes, I also want to alter these lines a little, because I probably want to give a third parameter, whereas I had only needed two in total before.
So here's an ideal way of using the bash 4.0+ feature to invoke history:
$ history
(...)
<lots of lines>
(...)
1234 while IFS='whatever' read [[ $whatever -lt max ]]; do ... ; done < <(workfile.fil)
<25 more lines>
So 1234 from history is exactly the line we want. Surely, we could take the mouse and move there, chucking the whole line in the primary buffer? But we're on *NIX, so why can't we make our life a bit easier?
This is why I wrote the little script below. Again, this is for bash 4.0+ ONLY (but might be adapted for bash 3.x and older with the aforementioned set -o ... stuff...)
#!/bin/bash -i
# look up history line $1, strip the leading line number,
# and hand the rest to xsel, which puts it in the primary buffer
[[ $1 == "" ]] || history | grep "^\s*$1" |
awk '{for (i=2; i<=NF; i++) printf $i" "}' | tr '\n' '\0' | xsel
If you save this as xselauto.sh for example, you may invoke
$ ./xselauto.sh 1234
and the contents of history line #1234 will be in your primary buffer, ready for re-use!
Now if anyone still says "this has no purpose AFAICS" or "who'd ever be needing this feature?" - OK, I won't care. But I would no longer want to live without this feature, as I'm just too lazy to retype complex lines every time. And I wouldn't want to touch the mouse for each marked line from history either, TBH. This is what xsel was written for.
BTW, the tr part of the pipe is a dirty hack which will prevent the command from being executed. For "dangerous" commands, it is extremely important to always leave the user a way to look before he/she hits the Enter key to execute it. You may omit it, but ... you have been warned.
P.S. This scriptlet is in fact a workaround, simulating !1234 typed on a bash shell. As I could never make the ! work directly in a script (echo would never let me reveal the contents of history line 1234), I worked around the problem by simply grepping for the line I wanted to copy.
History expansion is part of the interactive command-line editing features of a shell, not part of the scripting language. It's not generally available in the context of a script, only when interacting with a (pseudo-)human operator. (pseudo meaning that it can be made to work with things like expect or other keystroke repeating automation tools that generally try to play act a human, not implying that any particular operator might be sub-human or anything).

Can colorized output be captured via shell redirect? [duplicate]

This question already has answers here:
How to trick an application into thinking its stdout is a terminal, not a pipe
Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file, and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
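Combining the two points above into a minimal sketch (the file name is arbitrary):
ls --color=always > listing.txt  # force color codes even though stdout is a file
less -R listing.txt              # render the stored color codes instead of printing them raw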
Inspired by the other answers, I started using script. I had to use -c to get it working though. None of the other answers, including tee and the various script examples, worked for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell commands during the tests with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed, because otherwise the output arrives in big chunks and is hard to observe live
--quiet suppresses script's own output
-c, --command directly provides the command to execute, piping from my command to script did not work for me (no colors)
--return to make script propagate the exit code of my command so I know if my command has failed
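To see --return doing its job, a quick sketch (the failing command is only an example):
script --flush --quiet --return /tmp/out.txt --command "exit 3"
echo $?  # prints 3, because --return propagated the command's exit status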
I found that using script to preserve colors when piping to less doesn't really work (less gets all messed up, and on exit bash is messed up too) because less is interactive. script seems to really mess up input coming from stdin, even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they realize the output is not a TTY (i.e. when you redirect them into another program). You can tell some of those to use color forcefully, and tell the pager to turn on colorization, for example with less -R.
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's running from a shell.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
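The usage is just a matter of prefixing the command; a sketch (the command is only an example):
unbuffer ls --color=auto /etc | tee listing.txt  # ls sees a pty, so --color=auto still emits color codes, which tee captures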
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time) then just send the output of tee to /dev/null:
command | tee filename > /dev/null

How can I flush the input buffer in an expect script?

I'm writing an Expect script and am having trouble dealing with the shell prompt (on Linux). My Expect script spawns rlogin and the remote system is using ksh. The prompt on the remote system contains the current directory followed by " > " (space greater-than space). A script snippet might be:
send "some command here\r"
expect " > "
This works for simple commands, but things start to go wrong when the command I'm sending exceeds the width of the terminal (or more precisely, what ksh thinks is the width of the terminal). In that case, ksh does some weird horizontal scrolling of the interactive command line, which seems to rewrite the prompt and stick an extra " > " in the output. Naturally this causes the Expect script to get confused and out of sync when there appears to be more than one prompt in the output after executing a command (my script contains several send/expect pairs).
I've tried changing PS1 on the remote system to something more distinctive like "prompt> " but a similar problem arises which indicates to me that's not the right way to solve this.
What I'm thinking might help is the ability for the script to tell Expect that "I know I'm properly synchronised with the remote system at this point, so flush the input buffer now." The expect statement has the -notransfer flag which doesn't discard the input buffer even if the pattern does match, so I think I need the opposite of that.
Are there any other useful techniques that I can use to make the remote shell behave more predictably? I understand that Expect goes through a lot of work to make sure that the spawned session appears to be interactive to the remote system, but I'd rather that some of the more annoying interactive features (such as the horizontal scrolling of ksh) be turned off.
If you want to throw away all output Expect has seen so far, try
expect -re $
This is a regexp match on $ which means the end of the input buffer, so it will just skip everything received so far. More details at the Expect man page.
You could try "set -o multiline" or COLUMNS=1000000 (or some other suitably large value).
I have had difficulty with ksh and Expect in the past. My solution was to use something other than ksh for a login shell.
If you can change the remote login to other than ksh (using the chsh command or editing /etc/passwd) then you might try this with /bin/sh as the shell.
Another alternative is to tell ksh that the terminal is a dumb terminal, disallowing it from doing any special processing.
$ export TERM=""
might do the trick.
