How to enable tab completion from the terminal specific to the executable - bash

In bash, I believe it is possible to enable tab completion on the terminal for terms that are specific to the executable being invoked.
For example, given an executable "eat" with valid arguments {cake, carrot, banana}, typing 'eat car' should complete to 'eat carrot'.
I believe this is possible because I have seen it with 'ant' tab-completing its targets (though how this was set up I don't know).
How can this behaviour be implemented?

This is done with scripts in /etc/bash_completion.d/ and if you want to write your own completion support for an executable, here's a tutorial to get you started.
If you only need to get the behaviour working for common executables, your Linux distro probably has a bash-completion package available with support for common commands.
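As a sketch of what such a completion script can look like (hypothetical, matching the "eat" example from the question), you can register a word list with bash's programmable completion:

_eat() {
  # compgen -W filters the word list against the word typed so far
  local cur=${COMP_WORDS[COMP_CWORD]}
  COMPREPLY=( $(compgen -W "cake carrot banana" -- "$cur") )
}
complete -F _eat eat

Source this (or drop it in /etc/bash_completion.d/), and typing 'eat car' followed by Tab completes to 'eat carrot'.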

This is quite similar to filename globbing, where the shell attempts to expand a pattern based on a wildcard. For instance,
echo foo*
will list all files in the current directory beginning with 'foo': the bash shell globs the wildcard and expands it into a list of matching files.
MSDOS had a similar concept, although it was not linked in at run-time by default. I'm talking about the old Turbo C stuff, where wildcard globbing was activated by linking with 'wildargs.obj' (if my memory serves me correctly); internally, that code iterated through the directory and expanded the argument list based on wildcard pattern matching.
In Linux/*nix land, globbing is standard. However, simply hitting the Tab key to trigger pattern matching or completion is not guaranteed everywhere, as different terminals may translate the Tab key differently and, of course, handle it differently.

Related

How can I generate a list of every valid syntactic operator in Bash including input and output?

According to the Bash Reference Manual, the Bash scripting language consists of 4 distinct subclasses of syntactic elements:
built-in commands (alias, cd)
reserved words (if, function)
parameters and variables ($, IFS)
functions (abort, end-of-file - activated with keybindings such as Ctrl-d)
Apart from reading the manual, I became curious whether there is a programmatic way to list or generate all such keywords, at least for one of the above categories. I think this could be useful in some contexts. Sometimes I wish I could see all the options available to me at any given moment, and having that information as data, instead of as a formatted manual, is convenient and focused, and can be edited, in case you want to strike out commands you know well, or ones that are too obscure for now.
My understanding is that Bash takes input on stdin and passes it to the running shell process. When code is distributed in a production-ready form it is compiled, so it runs faster; unlike with a Python REPL, you don't have access to the Bash source code from within Bash, so writing a program that searches through source files for defined commands is not a very direct route. What I mean is: to list all functions in Python, you can call dir(), which programmatically looks up the names in a namespace. I don't think Bash can do that, and I don't think its source files have a special syntax that makes it easy to find and identify all the keywords. Instead, they are found when you simply enter them - cd will "find" the program cd because a $PATH lookup returns the path to that command - but there's no special way to discover them.
Or am I wrong? Technically, you could run a "brute force" search by generating every combination of symbols of every length and recording when you did not get an "unknown command" error as a response.
Is there any other clever programmatic way to do this?
> I mean I want to see a list of every symbol or string that the bash compiler

Bash is not a compiler. It and every other shell I know are interpreters of various languages.
> recognises and knows what to do with, including commands like “ls” or just a symbol like “*”. I also want to see the inputs and outputs for each symbol, i.e., some commands are executed in the shell prompt by themselves, but what data type do they return?
All commands executed by the shell have an exit status, which is a number between 0 and 255. This is as close to a "return type" as you get. Many of them also produce idiosyncratic output to one or two streams (a standard output stream and a standard error stream) under some conditions, and many have other effects on the shell environment or operating environment.
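For example, the special parameter $? holds the exit status of the most recent command:

grep -q root /etc/passwd; echo $?        # 0: a match was found
grep -q nosuchuser /etc/passwd; echo $?  # 1: no match
cat /no/such/file; echo $?               # nonzero, plus a message on stderr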
> And some require a certain data type to standard input.
I can't think of a built-in utility whose expected input is well characterized as having a particular data type. That's not really a stream-oriented concept.
> I want to do this just as a rigorous way to study the language.
If you want to rigorously study the language, then you should study its manual, where everything you describe has already been compiled. You might also want to study the POSIX shell command language manual for a slightly different perspective, which is more thorough in some areas, though what it documents differs in a few details from Bash's default behavior.
If you want to compile your own summary of Bash syntax and behavior, then those are the best source materials for such an effort.
You can get a list of all reserved words and syntactic elements of bash using this trick:
help -s '*' | cut -d: -f1
Or more accurately:
help -s \* | awk -F ': ' 'NR>2&&!/variables/{print $1}'
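If you want these lists as plain data, the compgen builtin (the same machinery that powers programmable completion) can also enumerate each category directly:

compgen -k   # reserved words (if, then, while, ...)
compgen -b   # shell builtins (cd, alias, read, ...)
compgen -a   # aliases defined in the current shell
compgen -v   # names of currently set variables
compgen -c   # every command name resolvable right now, including $PATH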

Why don't makefiles behave more like shell scripts within recipes?

I find makefiles very useful, and the header of each recipe
<target> : [dependencies]
is helpful. Within a recipe, the prefixes @ and - are useful, as are the automatically-defined variables like $@ and $?. Besides that, however, I find the way of coding the actual recipe strange and unhelpful. There are so many questions on StackOverflow along the lines of "how to do this in a makefile" for something that's simple (or at least more familiar) to do in bash.
Is there a reason why the recipe contents are not just interpreted as a regular shell script? Reading the manual pages, there seem to be many constructs with functionality equivalent to a shell script's but with different syntax. I end up specifying .ONESHELL and escaping $ with $$, or sometimes just calling a script from the recipe when I can't figure out how to make it work in a makefile. My question is whether this is just unfortunate design, or whether there are important features of makefiles that force them to be designed this way?
I don't really know how to answer your question. Probably that means it's not really appropriate for StackOverflow.
The requirement for using $$ instead of $ is obvious. The reasoning for using a separate shell for each logical line of a makefile, instead of passing the entire recipe to a single shell, is less clear. It could have worked either way, and this is the way it was chosen.
There is one advantage to the way it works now, although maybe most people don't care about it: if you use backslash-newline to continue each line, you only have to indent the first recipe line with TAB. If you don't use backslash-newline, then every line has to be indented with TAB, since otherwise make doesn't know where the recipe ends.
If your question is, could Stuart Feldman have made very different syntax decisions that would have made it easier to write long/complex recipes in makefiles, then sure. Choosing a more obscure character than $ as a variable introducer would reduce the amount of escaping (although, shell scripting uses pretty much every special character somewhere so "reduce" is the best you can do). Choosing an explicit "start/stop" character sequence for recipes would make it simpler to write long recipes, possibly at the expense of some readability.
But that's not how it was done.
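To make the default behaviour concrete, here is a minimal sketch with hypothetical targets (recipe lines must be indented with a literal TAB):

# Each logical recipe line runs in its own shell, so state doesn't carry over:
demo:
	cd /tmp
	pwd                              # prints the starting directory, not /tmp

# $$ passes a literal $ through to the shell:
greet:
	name=world; echo "hello $$name"

# Declaring the special target .ONESHELL: (GNU make 3.82+) anywhere in the
# makefile sends every recipe to a single shell invocation instead.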

What is the rationale behind variable assignment without space in bash script

I am trying to write an automated process for AWS that requires some JSON processing and other things in a bash script. I am following a few blogs on bash scripting and I found this:
a=b
with the following note:
There is no space on either side of the equals (=) sign. We also leave off the $ sign from the beginning of the variable name when setting it.
This is ugly and very difficult to read, and compared to other scripting languages it is easy for a user to make a mistake by leaving a space in between when writing a bash script. I think everyone likes to write clean and readable code; this restriction is surely bad for readability.
Can you explain why? An explanation with examples would be highly appreciated.
It's because otherwise the syntax would be ambiguous. Consider this command line:
cat = foo
Is that an assignment to the variable cat, or running the command cat with the arguments "=" and "foo"? Note that "=" and "foo" are both perfectly legal filenames, and therefore reasonable things to run cat on. Shell syntax settles this in favor of the command interpretation, so to avoid this interpretation you need to leave out the spaces. cat =foo has the same problem.
On the other hand, consider:
var= cat
Is that the command cat run with the variable var set to the empty string (i.e. a shorthand for var='' cat), or an assignment to the shell variable var? Again, the shell syntax favors the command interpretation so you need to avoid the temptation to add spaces.
There are many places in shell syntax where spaces are important delimiters. Another commonly-messed-up place is in tests, where if you leave out any of the spaces in:
if [ "$foo" = "$bar" ]
...it will lead to a different meaning, which might cause an error, or might just silently do the wrong thing.
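A quick sketch of how that fails silently:

foo=a bar=b
if [ "$foo"="$bar" ]; then    # one word, "a=b": a non-empty string,
  echo "looks equal"          # so the test is always true
fi
if [ "$foo" = "$bar" ]; then  # three words: an actual comparison
  echo "really equal"
fi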
What I'm getting at is that shell syntax does not allow you to arbitrarily add or remove spaces to improve readability. Don't even try, you'll just break things.
What you need to understand is that the shell language and syntax is old. Really old. The first version of the UNIX shell with variables was the Bourne shell which was designed and implemented in 1977. Back then, there were few precedents. (AFAIK, just the Thompson shell, which didn't support variables according to the manual entry.)
The rationale for the design decisions of the 1970s is ... lost in the mists of time. The decisions were made by Steve Bourne and colleagues working at Bell Labs on v6 UNIX. They probably had no idea that their choices would still be relevant 40+ years later.
The Bourne shell was designed to be general purpose and simple to use ... compared with the alternative of writing programs in C. And small. It was an outstanding success in those terms.
However, any language that is successful has the "problem" that it gets widely adopted. And that makes it more difficult to fix any issues (real or perceived) that may arise. Any proposal to change a language needs to be balanced against the impact of that change on existing users / uses of the language. You don't want to break existing programs or scripts.
Irrespective of arguments about whether spaces around = should be allowed in a shell variable assignment, changing this would break millions of shell scripts. It is just not going to happen.
Of course, Linux (and UNIX before it) allow you to design and implement your own shell. You could (in theory) replace the default shell. It is just a lot of work.
And there is nothing stopping you from writing your scripts in another scripting language (e.g. Python, Ruby, Perl, etc) or designing and implementing your own scripting language.
In summary:
We cannot know for sure why they designed the shell with this syntax for variable assignment, but it is moot anyway.
Reference:
Evolution of shells in Linux: a history of shells.
It prevents ambiguity in a lot of cases. Otherwise, if you have a statement foo = bar, it could then either mean run the foo program with = and bar as arguments, or set the foo variable to bar. When you require that there are no spaces, now you've limited ambiguity to the case where a program name contains an equals sign, which is basically unheard of.
I agree with @StephenC, and here's some more context with sources:
Unix v6 from 1975 did not have an environment; there was just an exec syscall that took a program and a string array of arguments. The system sh, written by Thompson, did not support variables, only single-digit numbered arguments like $1 (probably why $12 to this day is interpreted as ${1}2).
Unix v7 from 1979, emboldened by advances in hardware, added a ton of features including a second string array to the exec call. The man page described it like this, which is still how it works to this day:
An array of strings called the environment is made available by exec(2) when a process begins. By convention these strings have the form name=value
The system sh, now written by Bourne, worked much like v6 shell, but now allowed you to specify these environment strings in the same format in front of commands (because which other format would you use?). The simplistic parser essentially split words by spaces, and flagged a word as destined for a variable if it contained a = and all preceding characters had been alphanumeric.
Thanks to Unix v7's incredible popularity, forks and clones copied a lot of things including this behavior, and that's what we're still seeing today.
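That prefix form still works the same way today:

greeting=hello printenv greeting   # prints "hello": set only in printenv's environment
echo "${greeting:-unset}"          # prints "unset": the shell itself was not modified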

Short/long options with option argument - is this some sort of convention? [duplicate]

It seems that a lot of commands implement option arguments like this:
if a short option requires an option argument, the option is separated by a space from the option argument, e.g.
$ head -n 10
if a long option requires an option argument, the option is separated by a = from the option argument, e.g.
$ head --lines=10
Is this some sort of convention, and if yes, where can I find it? Besides, what's the reasoning? Why, e.g., is it not
Why e.g. is it not
$ head --lines 10
?
The short option rationale is documented in the POSIX Utility Conventions. Most options parsers allow the value to be 'attached' to the letter (-n10), mainly because of extensive historical precedent.
The long option rationale is specified by GNU in their Coding Standards and in the manual page for getopt_long().
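As an illustration of the short-option convention, bash's getopts builtin (which follows the POSIX rules) accepts the option argument either attached or detached, so this hypothetical parser handles both "-n 10" and "-n10":

while getopts "n:q" opt; do         # "n:" means -n requires an argument
  case $opt in
    n) lines=$OPTARG ;;
    q) quiet=1 ;;
    *) echo "usage: $0 [-n lines] [-q] [file...]" >&2; exit 2 ;;
  esac
done
shift $((OPTIND - 1))               # what remains are the operands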
Once upon a long time ago, in a StackOverflow of long ago, there was a question about command option styles. Not perhaps a good question, but I think the answers rescued it (but I admit to bias). Anyway, it has since been deleted, so I'm going to resuscitate my answer here because (a) it was a painful process to rediscover the answer and (b) it has useful information in it related to options.
How many different types of options do you recognize? I can think of many, including:
Single-letter options preceded by single dash, groupable when there is no argument, argument can be attached to option letter or in next argument (many, many Unix commands; most POSIX commands).
Single-letter options preceded by single dash, grouping not allowed, arguments must be attached (RCS).
Single-letter options preceded by single dash, grouping not allowed, arguments must be separate (pre-POSIX SCCS, IIRC).
Multi-letter options preceded by single dash, arguments may be attached or in next argument (X11 programs).
Multi-letter options preceded by single dash, may be abbreviated (Atria Clearcase).
Multi-letter options preceded by single plus (obsolete).
Multi-letter options preceded by double dash; arguments may follow '=' or be separate (GNU utilities).
Options without prefix/suffix, some names have abbreviations or are implied, arguments must be separate. (AmigaOS Shell, added by porneL)
Options taking an optional argument sometimes must be attached, sometimes must follow an '=' sign. POSIX doesn't support optional arguments meaningfully (the POSIX getopt() only allows them for the last option on the command line).
All sensible option systems use an option consisting of double-dash ('--') alone to mean "end of options": the following arguments are "non-option arguments" (usually file names) even if they start with a dash. (I regard supporting this notation as an imperative.) Note that if you have a command cmd with an option -f that expects an argument, and you invoke it with -- in place of that argument (cmd -f -- -other), many versions of getopt() will treat the -- as the file name for -f and then parse -other as regular options. That is, -- does not terminate the options if it has to be interpreted as an argument to another option.
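A quick sketch of the plain "--" convention with a file whose name starts with a dash:

touch -- -n    # create a file literally named "-n"
cat -- -n      # without the --, cat would parse -n as its number-lines option
rm -- -n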
Many but not all programs accept single dash as a file name to mean standard input (usually) or standard output (occasionally). Sometimes, as with GNU 'tar', both can be used in a single command line:
tar -cf - -T - | ...
The first solo dash means 'write to stdout'; the second means 'read file names from stdin'.
Some programs use other conventions — that is, options not preceded by a dash. Many of these are from the oldest days of Unix. For example, 'tar' and 'ar' both accept options without a dash, so:
tar cvzf /tmp/somefile.tgz some/directory
The dd command uses opt=value exclusively:
dd if=/some/file of=/another/file bs=16k count=200
Some programs allow you to interleave options and other arguments completely; the C compiler, make and the GNU utilities run without POSIXLY_CORRECT in the environment are examples. Many programs expect the options to precede the other arguments.
Modern programs such as git increasingly seem to use a base command name (git) followed by a sub-command (commit) followed by options (-m "Commit message"). This was presaged by the sccs interface to the SCCS commands, and then by cvs, and is used by svn too (and they are all version control systems). However, other big suites of commands adopt similar styles when it seems appropriate.
I don't have strong preferences between the different systems. When there are few enough options, then single letters with mnemonic value are convenient. GNU supports this, but recommends backing it up with multi-letter options preceded by a double-dash.
There are some things I do object to. One of the worst is the same option letter being used with different meanings depending on what other option letters have preceded it. In my book, that's a no-no, but I know of software where it is done.
Another objectionable behaviour is inconsistency in style of handling arguments (especially for a single program, but also within a suite of programs). Either require attached arguments or require detached arguments (or allow either), but do not have some options requiring an attached argument and others requiring a detached argument. And be consistent about whether '=' may be used to separate the option and the argument.
As with many, many (software-related) things — consistency is more important than the individual decisions.
Whatever you do, please, read the TAOUP's Command-Line Options and consider Standards for Command Line Interfaces. (Added by J F Sebastian — thanks; I agree.)

How does bash tab completion work?

I have been spending a lot of time in the shell lately and I'm wondering how tab autocomplete works. What's the mechanism behind it? How does bash know the contents of every directory?
There are two parts to the autocompletion:
The readline library, as already mentioned by fixje, manages the command line editing, and calls back into bash when Tab is pressed, to enable completion. Bash then gives (see next point) a list of possible completions, and readline inserts as many characters as are identified unambiguously by the characters already typed. (You can configure the readline library quite extensively; see the section Command Line Editing of the Bash manual for details.)
Bash itself has the built-in complete to define a completion mechanism for individual commands. If nothing is defined for the current command, it uses completion by file name (using opendir/readdir, as Ignacio said).
The part about defining your own completions is described in the section Programmable Completion. In short, with
complete «options» «command»
you define the completion for some command. For example, complete -u su says: when completing an argument for the su command, search for users of the current system. If this is more complicated than the normal options can cover (e.g. different completions depending on argument index, or depending on previous arguments), you can use -F function, which will then invoke a shell function to generate the list of possible completions. (This is used, for example, for the git completion, which is very complicated, depending on the subcommand and sometimes on the options given, and sometimes using names of branches, which bash by itself knows nothing about.)
You can list the existing completions defined in your current bash environment using simply complete, to have an impression on what is possible. If you have the bash-completion package installed (or however it is named on your system), completions for a lot of commands are installed, and as Wrikken said, /etc/bash_completion contains a bash script which is then often executed at shell startup to configure this. Additional custom completion scripts may be placed in /etc/bash_completion.d; those are all sourced from /etc/bash_completion.
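As a small hypothetical sketch of such a -F function, here is a completion that offers different words depending on the previous argument, for an imaginary "deploy <env> <service>" command:

_deploy() {
  local cur=${COMP_WORDS[COMP_CWORD]}
  local prev=${COMP_WORDS[COMP_CWORD-1]}
  if [ "$prev" = deploy ]; then
    COMPREPLY=( $(compgen -W "staging production" -- "$cur") )   # first argument
  else
    COMPREPLY=( $(compgen -W "web api worker" -- "$cur") )       # second argument
  fi
}
complete -F _deploy deploy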
If you are interested in the basics:
Bash uses readline which features history and basic completion. You could inspect the source if you want to get a detailed understanding.
Furthermore, you can use readline to build your own CLI interfaces with completion.
