Elegant and efficient way to start GUI programs from the terminal without spamming it (Bash or any POSIX shell)

Every once in a while I have to fire up a GUI program from my terminal session to do something. Usually it's Chrome to display some HTML file, or some similar task.
These programs, however, throw warnings all over the place, and it can actually become ridiculous trying to type anything afterwards, so I always wanted to redirect stderr/stdout to /dev/null.
While ($PROGRAM &) &>/dev/null seems okay, I decided to create a simple Bash function for it so I don't have to repeat myself every time.
So for now my solution is something like this:
#
# silly little function
#
gui ()
{
    if [ $# -gt 0 ] ; then
        ("$@" &) &>/dev/null
    else
        echo "missing argument"
    fi
}
#
# silly little example
#
alias google-chrome='gui google-chrome'
So what I'm wondering about is:
Is there a way without an endless list of aliases that's still snappy?
Are there different strategies to accomplish this?
Do other shells offer different solutions?
In asking these questions I want to point out that your strategies and solutions might deviate substantially from mine. Redirecting output to /dev/null and aliasing it was the only way I knew, but there might be entirely different approaches that are more efficient.
Hence this question :-)

As others have pointed out in the comments, I think the real problem is finding a way to distinguish between GUI and non-GUI apps on the command line. As such, the cleanest way I could think of is to put this part of your script:
#!/bin/bash
if [ $# -gt 0 ] ; then
    ("$@" &) &>/dev/null
else
    echo "missing argument"
fi
into a file called gui, then chmod +x it and put it in your ~/bin/ (make sure ~/bin is in your $PATH). Now you can launch gui apps with:
gui google-chrome
on the prompt.
Alternatively, you can do the above, then make use of bind:
bind 'RETURN: "\e[1~gui \e[4~\n"'
This will allow you to just do:
google-chrome
on the prompt and it will automatically prepend gui to google-chrome.
Or, you can bind the above action to F12 instead of RETURN with
bind '"\e[24~": "\e[1~gui \e[4~\n"'
to separate what you want launched with gui from what you don't.
More discussion on binding here and here.
These alternatives offer you a way out of endless aliases; a mix of:
Putting gui in your ~/bin/, and
Binding use of gui to F12 as shown above
seems the most ideal (albeit hacky) solution.
Update - @Enno Weichert's Resultant Solution:
Rounding this solution out ...
This would take care of aliases (in a somewhat whacky way though) and different escape encodings (in a more pragmatic rather than exhaustive way).
Put this in $HOME/bin/quiet
#!/bin/bash -i
if [ $# -gt 0 ] ; then
    # Expand $1 if it is an alias
    if alias "$1" &>/dev/null ; then
        set -- $(alias "$1" | awk -F "'" '{print $2}') "${@:2}"
    fi
    ("$@" &) &>/dev/null
else
    echo "missing argument"
fi
And this in $HOME/.inputrc
#
# Bind prepend `quiet ` to [ALT][RETURN]
#
# The condition is of limited use actually but serves to separate
# TTY instances from Gnome Terminal instances for me.
# There might very well be other VT emulators that ID as `xterm`
# but use totally different escape codes!
#
$if term=xterm
"\e\C-j": "\eOHquiet \eOF\n"
$else
"\e\C-m": "\e[1~quiet \e[4~\n"
$endif
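To make an already-running shell pick up the new bindings, readline can re-read the file (assuming bash):
bind -f ~/.inputrc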

Although it is an ugly hack, I sometimes use nohup for similar needs. It has the side effect of redirecting the command's output, and it makes the program independent of the terminal session.
For the case of running GUI programs in a desktop environment there is only a small risk of resource leaks, as the program will end with the window manager session anyway. Nevertheless it should be taken into account.
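For example, a minimal sketch of that use:
nohup google-chrome >/dev/null 2>&1 &    # redirect explicitly, or nohup appends output to nohup.out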
There is also the option of opening another terminal session. In Gnome for instance, you can use gnome-terminal --command 'yourapp'.
But this will result in opening many useless terminal windows.

First, it's a great idea to launch GUI applications from the terminal, as this reduces mouse usage. It's faster, and more convenient in terms of options and arguments. For example, take the browser. Say you have the URL in the clipboard, ready to paste. Just type, say, ice (for Iceweasel), hit Shift-Insert (to paste) and Enter. Compare this to clicking an icon (possibly in a menu), waiting for the window to load (even worse if there is a start-up page), then clicking the URL bar (or hitting Ctrl-L), then Ctrl-V... So I understand your desire for this to work.
But, I don't see how this would require an "infinite" list of aliases and functions. Are you really using that many GUI applications? And even so, aliases are one line - functions, which may be more practical for handling arguments, are perhaps 1-5 lines of (sparse) code. And, don't feel you need to set them up once and for all - set them up one by one as you go along, when the need arises. Before long, you'll have them all.
Also, if you have a tabbed terminal, like urxvt (there is a Perl extension), you'd benefit from moving from "GUI:s" to "CLI:s": for downloading, there's rtorrent; for IRC, irssi; instead of XEmacs (or emacs), emacs -nw; there is a CLI interface to vlc for streaming music; for mail, Emacs' rmail; etc. etc.! Go hunt :)
(The only sad exception I've run across that I think is a lost cause is the browser. Lynx, W3M, etc., may be great from a technical perspective, but that won't always even matter, as modern web pages are simply not designed with those text-only browsers in mind. In all honesty, a lot of those pages look a lot less clear in those browsers.)
Hint: To get the most out of a tabbed terminal, you'll want the "change tab" shortcuts close at hand (e.g., Alt-J for previous tab and Alt-K for next tab, not the arrow keys that'll make you reach).
Last, one solution that circumvents this problem altogether is to launch the "GUI:s" as background processes (with &) in your ~/.xinitrc (including your terminal emulator). Not very flexible, but great for the stuff you always use, every time you use your computer.
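A minimal sketch of such a ~/.xinitrc (the window manager and the programs are just examples):
# ~/.xinitrc: launch the always-used programs in the background,
# then the window manager in the foreground
urxvt &
google-chrome >/dev/null 2>&1 &
exec i3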

Ok, so I kept thinking that the one function should be enough. No aliases, no bashisms, no nonsense. But it seemed to me that the only way to do that without possibly affecting regular use, such as expansions and completions, was to put the function at the end of the command. This is not as easy as one might first assume.
First I considered a function tacked onto the end in which I would call, say, printf "%b" "\u" in order to cut the current line, plug in another printf, paste it back in, and a quote or two at the beginning, then do what little is needed at the end. I don't know how to make this work though, I'm sorry to say. And even if I did, I couldn't hope for any real reliability/portability with this method due to the varying ways shells interpret escape sequences, not to mention the terminal emulators they run in. Perhaps stty could offer a way forward along these lines, but if so, you won't find it here... now, anyway.
I eventually instead resorted to actually copying the current command's /proc/{PID}/cmdline to a variable, then (shamefully) killing it entirely, and finally wrapping it as I pleased. On the plus side, this is very easily done, very quickly done (though I can imagine arguing its 'efficiency' either way), and seems mostly to work, regardless of the original input, whether that be an alias, a variable, a function, etc. I believe it is also POSIX portable (though I can't remember if I need to specify the kill SIGNALS by name for POSIX or not), and is definitely no nonsense.
On the other hand, its elegance certainly leaves much to be desired, and, though it's probably not worth worrying about, it does entirely waste a single PID. And it doesn't completely stop the shell spam; that is, in my shell I have enabled background job reporting with set, and so, when first run, the shell kindly informs me in two lines that I've just opened and wasted a PID. Also, because I copy the ../cmdline instead of interfacing directly with the 0 file descriptor, I expect pipes, ;, and the like to be problematic. This I can, and likely will, fix myself very soon.
I will fix that, that is, if I cannot find a way to instead make use of, as I suspect can be done, SIGTSTP + SIGCONT by first suspending the process outside a subshell, then continuing it within one after redirecting the subshell's output. It seems that this works unreliably for reasons I haven't yet discovered, but I think it's promising. Perhaps nohup and a trap (to effectively rehup it, as it were) is what is needed, but I'm not really sure how to put those together either...
Anyway, without further ado, my semi-aborted, backwards gui launcher:
% _G() { (
_gui_cmd="$(tr '\0' ' ' </proc/"$\!"/cmdline )" ;
kill -9 "$\!" ;
exec eval "${_gui_cmd} &" )
&>/dev/null 2&>1
}
% google-chrome-beta --disk-cache-dir="/tmp/cache" --disk-cache-size=100000000 &_G
[1] 2674
[1] + 2674 killed google-chrome-beta --disk-cache-dir="/tmp/cache" --disk-cache-
%
So another problem one might encounter if attempting to do similar is the order of expansion the shell assumes. Without some trick such as mine, you're sure to have some serious difficulty expanding your function before the prior command snags it as an argument. I believe I have guarded against this without unnecessary redundancy by simply tacking my _G function call onto the original command's & background instruction. Simply add &_G to the tail end of the command you wish to run and, well, good luck.
-Mike
P.S. Ok, so writing that last sentence makes me think of what might be done with tee.

Related

Makefile: store warning count into variable without using temp file

I would like to improve an existing Makefile, so it prints out the number of warnings and/or errors that were encountered during the build process.
My basic idea is that there must be a way to pipe the output to grep and have the number of occurrences of a certain string in either the stderr or stdout stream (i.e. "Warning:") stored into a variable that can then simply be echoed out at the end of the make command.
Requirements / Challenges:
Current console output and exit code must remain exactly the same
That also means without even changing control characters. Devs using the Makefile must not notice any difference from what the output was prior to my change (except for a nice, additional warning-count output at the end of the make process). Any approaches with tee I tried so far were not successful, as the color coding of stderr messages in the console is lost, changing them to all black & white.
Must be system-independent
The project is currently being built by Win/OSX/Linux devs and thus needs to work with standard tools available out-of-the-box in most *nix / Cygwin shells. Introducing another dependency such as unbuffer is not an option.
It must be stable and free of side-effects (see also 5.)
If the make process is interrupted (e.g. by the user pressing CTRL+C, or for any other reason), there should be no side-effects (such as an orphaned log file of the output left behind on disk).
(Bonus) It should be efficient
The amount of output may get to >1MB and more, so just piping to a file and grepping it would be a small performance hit, and there would also be the additional disk I/O (thus unnecessarily slowing down the build). I'm simply wondering if this cannot be done without a temp file, as I understand pipes as a sort of "stream" that just needs to be analysed as it flows through.
(Bonus) Make it locale-independent w/o changing the current output
Depending on the current locale, the string to grep for and count is localized differently, e.g. "Warning:" (en_US.utf8) or "Warnung:" (de_DE.utf8). Surely I could switch the locale to en_US in the Makefile, but that would change console output for users (hence breaking requirement 1), so I'd like to know if there's any (efficient) approach you could think of for this.
At the end of the day, I'd be able to make do with a solid solution that just fulfills requirements 1 and 2.
If 3 to 5 cannot be done, then I'd have to convince the project maintainers to make some changes to .gitignore, have the build process take up slightly more time and resources, and/or fix the make output to English only, but I assume they will agree it would be worth it.
Current solution
The best i have so far is:
script -eqc "make" make.log && WARNING_COUNT=$(grep -i --count "Warning:" make.log) && rm make.log || rm make.log
That fulfills my requirements 1, 2 and almost 3: still, if the machine has a power outage while running the command, make.log will remain as an unwanted artifact. Also the repetition of rm make.log looks ugly.
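One way to drop the repeated rm and also clean up on interruption (a sketch, assuming bash, where the EXIT trap also runs after CTRL+C; a hard power outage can of course still leave the file behind):
trap 'rm -f make.log' EXIT
script -eqc "make" make.log
WARNING_COUNT=$(grep -ic "Warning:" make.log)
echo "Warnings: ${WARNING_COUNT}"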
So I'm open to alternative approaches and improvements from anybody. Thanks in advance.

How to properly write an interactive shell program which can exploit bash's autocompletion mechanism

(Please, help me adjust title and tags.)
When I run connmanctl I get a different prompt,
enrico:~$ connmanctl
connmanctl>
and different commands are available, like services, technologies, connect, ...
I'd like to know how this thing works.
I know that, in general, changing the prompt can be just a matter of changing the variable PS1. However this thing alone (read: "the command connmanctl changes PS1 and returns") wouldn't have any effect at all on the functionality of the command line (I would still be in the same bash process).
Indeed, the fact that the available commands change looks to me like proof that connmanctl is running the whole time the prompt is connmanctl>, and that, upon running connmanctl, a while loop is entered with a read statement in it, followed by a bunch of commands which process the input.
In this latter scenario that I imagine, there's not even a need to change PS1, as the connmanctl> line could simply be obtained by echo -n "connmanctl> ".
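That imagined loop can be sketched like this (hypothetical code; read -e enables readline editing, though wiring bash's programmable completion into it is exactly the hard part of the question):
while IFS= read -r -e -p 'connmanctl> ' line; do
    case "$line" in
        exit|quit) break ;;              # leave the wrapper
        '') ;;                           # ignore empty input
        *) connmanctl $line ;;           # pass the command to the real tool
    esac
done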
The reason behind this curiosity is that I'm trying to write a wrapper to connmanctl. I've already written it, and it works as intended, except that I don't know how to properly setup the autocompletion feature, and I think that in order to do so I first need to understand what is the right way to write an interactive shell script.

Bash: how to duplicate input/output from interactive scripts only in complete lines?

How can I capture the input/output from a script in realtime (such as with tee), but line-by-line instead of character-by-character? My goal is to capture the input typed into the interactive prompts of a script only after backspaces and auto-completion have finished processing (after the RETURN key is hit).
Specifically, I am trying to create a wrapper script for ssh that creates a timestamped log of commands used on remote servers. The script, which uses tee to redirect the output for filtering, works well, but the redirected output gets jumbled with unsubmitted characters whenever I use the backspace key or the up/down keys to scroll through my remote history. For example: service test stopexitservice test stopart or cd ..logs[1Pls -al.
Perhaps there is a way to capture the terminal's scrollback and redirect that like with tee?
Update: I have found a character-based cleanup solution that does what I want most of the time. However, I am still hoping for an answer to this question (which may well be msw's answer that it is very difficult to do).
In the Unix world there are two primary modes of handling keyboard input. The first is known as 'raw', in which characters are passed from the terminal to the reading program one at a time. This is the mode that editors (and such) will use, because the editor needs to respond immediately when you press a key.
The other terminal discipline is called 'cooked' which is the line by line behavior that you think of as the bash line by line input where you get to backspace and the command is not executed until you press return. Ssh has to take your input in raw, character-by-character mode because it has no idea what is running on the other side. For example, if you are running an editor on the far side, it can't wait for a return before sending the key-press. So, as some have suggested, grabbing shell history on the far side is the only reasonable way to get a command-by-command record of the bash commands you typed.
I oversimplified for clarity; actually most installations of bash take input in raw mode because they allow editor like command modification. For example, Ctrl-P scrolls up the command history or Ctrl-A goes to the beginning of the line. And bash needs to be able to get those keys the moment they are typed not waiting for a return.
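You can see the difference yourself (a rough demonstration; flag spellings vary slightly between systems):
stty -icanon    # character-at-a-time input: roughly 'raw' for this purpose
cat             # each keystroke reaches cat immediately, with no line editing
# press Ctrl-C to stop cat, then restore line-buffered 'cooked' input:
stty icanon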
This is another reason that capturing on the local side is obnoxiously difficult: if you capture on the local side, the stream will be filled with backspaces and all of bash's editing commands. To get a true transcript of what the remote shell actually executed you have to parse the character stream as if you were the remote shell. There is also a problem if you run something like
vi /some_file/which_is_on_the_remote/machine
the input stream to the local ssh will be filled with movement commands, snippets of text including backspaces, and so on, and it would be bloody difficult to figure out what is part of a bash command and what is you talking to the editor.
Few things involving computers are impossible; getting clean input from the local side of an ssh invocation is really, really hard.
I question the actual utility of recording the commands that you execute on a local or remote machine. The reason is that there is so much state which is not visible from a command log. As a simple example here's a log of two commands:
17:00$ cp important_file important_file.bak
17:15$ rm important_file
and two days later you are trying to figure out whether important_file.bak should have the contents you intended or not. Given that log you can't answer that simple question. Even if you had the sequence
16:58$ cat important_file
17:00$ cp important_file important_file.bak
17:15$ rm important_file
If you aren't capturing the output, the cat in the log will not tell you anything. Give me almost any command sequence and I can envision a scenario in which it will not give you the information you need to make sense of what was done.
For a very similar purpose I use GNU screen, which offers the option to record everything you do in a shell session (input/output). The log it creates also comes with undesirable characters, but I clean them with perl:
perl -ne 's/\x1b[[()=][;?0-9]*[0-9A-Za-z]?//g;s/\r//g;s/\007//g;print' < screenlog.0
I hope this helps.
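For reference, the logging is typically switched on when the session starts; older versions of screen pick the name screenlog.0 automatically:
screen -L    # start a session with output logging enabled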
Some features of screen:
http://speaking-my-language.blogspot.com/2010/09/top-5-underused-gnu-screen-features.html
Site I found the perl-oneliner:
https://superuser.com/questions/99128/removing-the-escape-characters-from-gnu-screens-screenlog-n

Adding commands in shell scripts to history?

I notice that the commands I have in my shell scripts never get added to the history list. Understandably, most people wouldn't want it to be, but for someone who does, is there a way to do it?
Thanks.
Edit:
Sorry about the extremely late reply.
I have a script that conveniently combines some statements that ultimately result in lynx opening a document. That document is in a dir several directories below the current one.
Now, I usually end up closing lynx to open another document in the current dir and need to keep switching back and forth between the two. I could do this by having another window open, but since I'm mostly on telnet, and the switches aren't too frequent, I don't want to do that.
So, in order to get back to lynx from the other document, I end up having to re-type the lynx command, with the (long) path/filename. In this case, of course, lynx isn't stored in the command history.
This is what I want added to the history, so that I can get back to it easily.
Call it laziness, but hey, if it teaches me a new command....
Cheers.
As @tripleee pointed out, which problem are you actually trying to solve? It's perfectly possible to include any shell code in the history, but above some level of complexity it's much better to keep it in separate shell scripts.
If you want to keep multi-line commands in the history as they are, you might want to try out shopt -s lithist, but that means searching through history will only return one line at a time.
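For the concrete lynx case above, one option is a tiny helper that is sourced into the current shell (a sketch, assuming bash, whose history -s builtin appends its arguments to the history list without executing them):
# hypothetical helper; run it as `. ./opendoc` so it affects *this* shell's history
doc="some/deeply/nested/dir/document.html"   # hypothetical path
history -s lynx "$doc"                       # record the command in history
lynx "$doc"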

Pitfalls of using shell scripts to wrap a program?

Consider I have a program that needs an environment set. It is in Perl and I want to modify the environment (to search for libraries in a special spot).
Every time I mess with the standard way to do things in UNIX I pay a heavy price and I pay a penalty in flexibility.
I know that by using a simple shell script I will inject an additional process into the process tree. Any process accessing its own process tree might be thrown for a little bit of a loop.
Anything recursive in a nontrivial way would need to defend against multiple expansions of the environment.
Anything resembling being in a pipe of programs (or closing and opening STDIN, STDOUT, or STDERR) is my biggest area of concern.
What am I doing to myself?
What am I doing to myself?
Getting yourself all het up over nothing?
Wrapping a program in a shell script in order to set up the environment is actually quite standard and the risk is pretty minimal unless you're trying to do something really weird.
If you're really concerned about having one more process around — and UNIX processes are very cheap, by design — then use the exec keyword, which instead of forking a new process, simply exec's a new executable in place of the current one. So, where you might have had
#!/bin/bash -
export FOO=hello                      # export, so the perl process actually sees them
export PATH=/my/special/path:${PATH}
perl myprog.pl
You'd just say
#!/bin/bash -
export FOO=hello
export PATH=/my/special/path:${PATH}
exec perl myprog.pl
and the spare process goes away.
This trick, however, is almost never worth the bother; the one counter-example is that if you can't change your default shell, it's useful to say
$ exec zsh
in place of just running the shell, because then you get the expected behavior for process control and so forth.
