How to bundle bash completion with a program and have it work in the current shell? - bash

I sweated over the question above. The answer I'm going to supply took me a while to piece together, but it still seems hopelessly primitive and hacky compared to what one could do were completion redesigned to be less static. I'm almost afraid to ask if there's some good reason that completion logic seems to be completely divorced from the program it's completing for.

I wrote a command line library (can be seen in scala trunk) which lets you flip a switch to have a "--bash" option. If you run
./program --bash
it calculates the completion script, writes it out to a tempfile, and echoes
. /path/to/temp/file
to the console. The result is that you can use backticks like so:
`./program --bash`
and you will have completion for "program" in the current shell since it will source the tempfile.
For a concrete example: check out scala trunk and run test/partest.
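To make that mechanism concrete, here is a minimal sketch of the same pattern; myprog and its completion words are hypothetical placeholders, not the actual partest code:

if [ "$1" = "--bash" ]; then
  tmpfile=$(mktemp)
  cat > "$tmpfile" <<'EOF'
_myprog() {
  local cur=${COMP_WORDS[COMP_CWORD]}
  COMPREPLY=( $(compgen -W "run test clean --help" -- "$cur") )
}
complete -F _myprog myprog ./myprog
EOF
  # print the source command; the caller evaluates it in the current shell
  echo ". $tmpfile"
  exit 0
fi

Run inside backticks as above, the echoed ". /tmp/..." line is executed by the calling shell, which is what loads the completion function into the current session.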

Related

Is it possible to capture the bitstream of post-interpreted code? (pre-execution) e.g. speed up calls I make often

I've wondered about this many times and in many cases, and I like to learn, so general or close-but-not-exact answers are acceptable to me.
I'll get specific, to help explain the question. Please remember that this question is more about accelerating common interpreted-language calls (yes, with exactly the same arguments) than it is about the specific programs I'm calling in this case.
Here we go:
Using i3WM I use i3lock-fancy to lock my workspace with a key-combo mapped to the command:
i3lock-fancy -p -f /usr/share/fonts/fantasque_mono.ttf
So here is why I think this is possible, though my google-fu has failed me:
i3lock-fancy is a bash script, and bash is an interpreted language
each time I run the command I call it with the same arguments
Theoretically the interpreter is spitting out the same bitstream to be executed, right?
Please don't complain about portability; I understand that the captured bitstream would not be portable.
For visual people:
When I call the above command > bash interpreter converts bash-code to byte-code > CPU executes byte-code
I want to:
execute command > bash interpreter converts to byte-code > save to file
so that I can effectively skip interpretation (since it's EXACTLY the same every time):
call file > CPU executes byte-code
What I tried:
Looking around on SO before asking the question led me to shc, which is similar in some ways to what I'm asking for.
But this is not what shc is for (thanks #stefan)
Is there a way to do this which is more like what I've described?
Simply put, is there a way to interpret bash, and save the result without actually running it?
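For what it's worth, bash can be asked to parse a script without executing it via the -n (noexec) option, though it offers no supported way to save or reuse the parsed form; a quick check:

# -n (noexec) makes bash read and parse the script without running it;
# it only verifies that interpretation succeeds, and the parsed form is
# discarded rather than saved anywhere reusable
bash -n "$(command -v i3lock-fancy)" && echo "parses cleanly"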

How to properly write an interactive shell program which can exploit bash's autocompletion mechanism

(Please, help me adjust title and tags.)
When I run connmanctl I get a different prompt,
enrico:~$ connmanctl
connmanctl>
and different commands are available, like services, technologies, connect, ...
I'd like to know how this thing works.
I know that, in general, changing the prompt can be just a matter of changing the variable PS1. However, this alone (read: "the command connmanctl changes PS1 and returns") wouldn't have any effect at all on the functionality of the command line (I would still be in the same bash process).
Indeed, the fact that the available commands change looks to me like proof that connmanctl is running the whole time the prompt is connmanctl>, and that, upon running connmanctl, a while loop is entered with a read statement in it, followed by a bunch of commands which process the input.
In this latter scenario that I imagine, there's not even need to change PS1, as the connmanctl> line could simply be obtained by echo -n "connmanctl> ".
The reason behind this curiosity is that I'm trying to write a wrapper around connmanctl. I've already written it, and it works as intended, except that I don't know how to properly set up the autocompletion feature, and I think that in order to do so I first need to understand the right way to write an interactive shell script.
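For illustration, here is a minimal sketch of the read-loop structure imagined above; the command names are made up and this is not how connmanctl itself is implemented:

#!/bin/bash
# read -e gives readline line editing; -p prints the prompt string
while read -e -p "connmanctl> " cmd args; do
  case "$cmd" in
    services)     echo "listing services..." ;;
    technologies) echo "listing technologies..." ;;
    quit|exit)    break ;;
    "")           continue ;;
    *)            echo "unknown command: $cmd" ;;
  esac
done

Note that tab completion inside read -e is bash's default filename completion; making completion aware of the tool's own commands is exactly the part that needs extra machinery.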

Is it possible to create an event driven service in shells

Hi, I would like to create a small program that listens for copy commands and stores the copied content for later retrieval in bash. Is it possible to listen to keystrokes while still keeping the shell interactive? And how can this be done architecturally? I don't need the whole program, just a hint at how it can be done. I have no preference when it comes to language, except that it should be implemented in a scripting language or maybe C++.
Perhaps this needs to be written like a shell extension or something. Just a hint would be fine.
Consider the way that the script program works (see man script). I haven't done this in a while, but basically you write your pseudo-terminal in C and push that into the stream, then launch the shell.
See tcgetattr/tcsetattr, grantpt, unlockpt, and ptsname, with ptem, ldterm and possibly ttcompat to be pushed using ioctl.
A simpler, though less efficient, approach is to run script into a pipe and capture the output. You will probably need script -f to flush the buffer (I think the -f is only in the GNU version).
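A rough sketch of that simpler approach, assuming a GNU script(1) that supports -f; the log path and the pattern are placeholders:

# record the interactive session, flushing after each write so another
# process can react to the transcript as it grows
script -f /tmp/shell.log

# meanwhile, in another terminal, watch the transcript for whatever
# counts as a "copy command" in your setup
tail -f /tmp/shell.log | grep --line-buffered 'copy'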

Help in understanding this bash file

I am trying to understand the code in this page: https://github.com/corroded/git-achievements/blob/gh-pages/git-achievements
and I'm kinda at a loss on how it actually works. I do know some bash and shell scripting, but how does this script actually "store" how many times you've used a command (I'm guessing by saving into a text file?) and how does it "sense" that you actually typed in a git command? I have a feeling it's line 464 onwards that does it, but I don't seem to quite follow the logic.
Can anyone explain this in a bit more understandable context?
I plan to do some achievements for other commands and I hope to have an idea on HOW to go about it without randomly copying and pasting stuff and voodoo.
Yes, the script proper starts at line 464; everything before that is helper functions. I don't know how it gets installed, but I would assume you have to call this script instead of the normal git command. It just checks whether the first parameter is achievement, and if not, regular git is executed with the rest of the parameters. Afterwards it checks whether an error happened (and exits if so). Then it calls log_action and check_for_achievments. log_action just writes the issued command with a date into a text file, while check_for_achievments scans that log file for certain events. If you want to add another achievement you have to do it in check_for_achievments.
Just look at how the big case statement handles it (most of the achievements call count_function, which counts the number of usages of the command and matches when a power of 2 is reached).
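A hypothetical sketch of that log-and-count idea (not the actual git-achievements code; the log file name and the messages are made up):

LOG_FILE="$HOME/.command_log"

log_action() {
  # append the issued command with a timestamp, one line per invocation
  echo "$(date '+%Y-%m-%d %H:%M:%S') $*" >> "$LOG_FILE"
}

count_and_check() {
  local cmd="$1" count
  count=$(grep -c " git $cmd" "$LOG_FILE")
  # count & (count - 1) is zero exactly when count is a power of two
  if [ "$count" -gt 0 ] && [ $(( count & (count - 1) )) -eq 0 ]; then
    echo "Achievement unlocked: used 'git $cmd' $count times!"
  fi
}

log_action git commit -m "example"
count_and_check commit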

Pitfalls of using shell scripts to wrap a program?

Consider I have a program that needs an environment set. It is in Perl and I want to modify the environment (to search for libraries in a special spot).
Every time I mess with the standard way to do things in UNIX I pay a heavy price and a penalty in flexibility.
I know that by using a simple shell script I will inject an additional process into the process tree. Any process accessing its own process tree might be thrown for a little bit of a loop.
Anything recursive in a nontrivial way would need to defend against multiple expansions of the environment.
Anything resembling being in a pipe of programs (or closing and opening STDIN, STDOUT, or STDERR) is my biggest area of concern.
What am I doing to myself?
What am I doing to myself?
Getting yourself all het up over nothing?
Wrapping a program in a shell script in order to set up the environment is actually quite standard and the risk is pretty minimal unless you're trying to do something really weird.
If you're really concerned about having one more process around — and UNIX processes are very cheap, by design — then use the exec builtin, which, instead of forking a new process, simply replaces the current one with the new executable. So, where you might have had
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
perl myprog.pl
You'd just say
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
exec perl myprog.pl
and the spare process goes away.
This trick, however, is almost never worth the bother; the one counter-example is that if you can't change your default shell, it's useful to say
$ exec zsh
in place of just running the shell, because then you get the expected behavior for process control and so forth.
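If the wrapper also has to forward arguments or sit in a pipeline (the concerns raised in the question), a slightly fuller sketch might look like this; MYPROG_ENV_SET is a hypothetical guard variable, there only to keep a recursive invocation from prepending to PATH twice:

#!/bin/bash -
# set up the environment only once, even if the wrapper is re-entered
if [ -z "$MYPROG_ENV_SET" ]; then
  export MYPROG_ENV_SET=1
  export FOO=hello
  export PATH=/my/special/path:${PATH}
fi
# exec replaces this shell in place: the process keeps its STDIN/STDOUT/STDERR,
# so the wrapper stays transparent inside a pipeline; "$@" forwards arguments
exec perl myprog.pl "$@"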
