When you type something, you often use bash autocompletion: you start writing a command, for example, and you type TAB to get the rest of the word.
As you have probably noticed, when multiple choices match your command, bash displays them like this:
foobar#myserv:~$ admin-
admin-addrsync admin-adduser admin-delrsync admin-deluser admin-listsvn
admin-addsvn admin-chmod admin-delsvn admin-listrsync
I'm looking for a solution to display each possible completion on a new line, similar to the last column of ls -l. Even better, it would be perfect if I could apply a rule like this: "if there are fewer than 10 suggestions, display them one per line; if more, keep the current display".
bash prior to version 4.2 doesn't allow any control over the output format of completions, unfortunately.
Bash 4.2+ allows switching to 1-suggestion-per-line output globally, as explained in Grisha Levit's helpful answer, which also links to a clever workaround to achieve a per-completion-function solution.
The following is a tricky workaround for a custom completion.
Solving this problem generically, for all defined completions, would be much harder (if there were a way to invoke readline functions directly, it might be easier, but I haven't found a way to do that).
To test the proof of concept below:
Save to a file and source it (. file) in your interactive shell - this will:
define a command named foo (a shell function)
whose arguments complete based on matching filenames in the current directory.
(When foo is actually invoked, it simply prints its argument in diagnostic form.)
Invoke as:
foo [fileNamePrefix], then press tab:
If between 2 and 9 files in the current directory match, you'll see the desired line-by-line display.
Otherwise (1 match or 10 or more matches), normal completion will occur.
Limitations:
Completion only works properly when applied to the LAST argument on the command line being edited.
When a completion is actually inserted in the command line (once the match is unambiguous), NO space is appended to it (this behavior is required for the workaround).
Redrawing the prompt the first time after printing custom-formatted output may not work properly: redrawing the command line, including the prompt, must be simulated, and since there is no direct way to obtain an expanded version of the prompt-definition string stored in $PS1, a workaround (inspired by https://stackoverflow.com/a/24006864/45375) is used; it should work in typical cases, but it is not foolproof.
Approach:
Defines and assigns a custom completion shell function to the command of interest.
The custom function determines the matches and, if their count is in the desired range, bypasses the normal completion mechanism and creates custom-formatted output.
The custom-formatted output (each match on its own line) is sent directly to the terminal >/dev/tty, and then the prompt and command line are manually "redrawn" to mimic standard completion behavior.
See the comments in the source code for implementation details.
# Define the command (function) for which to establish custom command completion.
# The command simply prints out all its arguments in diagnostic form.
foo() { local a i=0; for a; do echo "\$$((i+=1))=[$a]"; done; }
# Define the completion function that will generate the set of completions
# when <tab> is pressed.
# CAVEAT:
# Only works properly if <tab> is pressed at the END of the command line,
# i.e., if completion is applied to the LAST argument.
_complete_foo() {
    local currToken="${COMP_WORDS[COMP_CWORD]}" matches matchCount expandedPrompt
    # Collect matches, providing the current command-line token as input.
    IFS=$'\n' read -d '' -ra matches <<<"$(compgen -A file "$currToken")"
    # Count matches.
    matchCount=${#matches[@]}
    # Output in custom format, depending on the number of matches.
    if (( matchCount > 1 && matchCount < 10 )); then
        # Output matches in CUSTOM format:
        # print the matches line by line, directly to the terminal.
        printf '\n%s' "${matches[@]}" >/dev/tty
        # !! We actually *must* pass out the current token as the result,
        # !! as it will otherwise be *removed* from the redrawn line,
        # !! even though $COMP_LINE *includes* that token.
        # !! Also, by passing out a nonempty result, we avoid the bell
        # !! signal that normally indicates a failed completion.
        # !! However, by passing out a single result, a *space* will
        # !! be appended to the last token - unless the compspec
        # !! (mapping established via `complete`) was defined with
        # !! `-o nospace`.
        COMPREPLY=( "$currToken" )
        # Finally, simulate redrawing the command line.
        # Obtain an *expanded version* of `$PS1` using a trick
        # inspired by https://stackoverflow.com/a/24006864/45375.
        # !! This is NOT foolproof, but hopefully works in most cases.
        expandedPrompt=$(PS1="$PS1" debian_chroot="$debian_chroot" "$BASH" --norc -i </dev/null 2>&1 | sed -n '${s/^\(.*\)exit$/\1/p;}')
        printf '\n%s%s' "$expandedPrompt" "$COMP_LINE" >/dev/tty
    else # Just 1 match or 10 or more matches?
        # Perform NORMAL completion: let bash handle it by
        # reporting matches via array variable `$COMPREPLY`.
        COMPREPLY=( "${matches[@]}" )
    fi
}
# Map the completion function (`_complete_foo`) to the command (`foo`).
# `-o nospace` ensures that no space is appended after a completion,
# which is needed for our workaround.
complete -o nospace -F _complete_foo -- foo
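For illustration only (these file names are invented), a session with the proof of concept sourced might look roughly like this, assuming bar.txt, bat.sh and baz.log are the only matches in the current directory:
foobar#myserv:~$ . ./foo-completion.bash
foobar#myserv:~$ foo ba<TAB>
bar.txt
bat.sh
baz.log
foobar#myserv:~$ foo ba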
bash 4.2+ (and, more generally, applications using readline 6.2+) support this with the use of the completion-display-width variable.
The number of screen columns used to display possible matches when performing completion. The value is ignored if it is less than 0 or greater than the terminal screen width. A value of 0 will cause matches to be displayed one per line. The default value is -1.
Run the following to set the behavior for all completions1 for your current session:
bind 'set completion-display-width 0'
Or modify your ~/.inputrc2 file to have:
set completion-display-width 0
to change the behavior for all new shells.
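To double-check what a given session is actually using, readline variables can be listed with bind -v, so something like this should show the active value:
bind -v | grep completion-display-width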
1 See here for a method for controlling this behavior for individual custom completion functions.
2 The search path for the readline init file is $INPUTRC, then ~/.inputrc, then /etc/inputrc, so modify whichever file is appropriate for your setup.
I am trying to write a simple Bash completion script for a program that runs its arguments as a command. A good example of this kind of program is the prime-run script provided by the nvidia-prime package:
#!/bin/bash
__NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only __GLX_VENDOR_LIBRARY_NAME=nvidia "$@"
This script sets a few environment variables, which instructs the prime driver to use the Nvidia dGPU on a hybrid system. The first argument is treated as the command, and all trailing arguments are passed through. So for example you can run prime-run code . and VSCode will start in the current directory using the dGPU.
Therefore from a completion-script POV, what we want is to basically try to complete as if the prime-run token isn't there (hence "transparent proxy"-like behaviour). To give a rather contrived example:
> prime-run journalc<TAB>
(completes journalctl)
> prime-run journalctl --us<TAB>
(completes --user)
However I am finding this surprisingly difficult in Bash (not that I know how in other shells). So the question is simple: is it possible and if so how?
Ideas I've (hopelessly) had
The simple complete -A command prime-run (spelled out just below): the first argument gets completed as a command as expected (let's call it foo), but the following arguments are also completed as command names rather than as arguments to foo
Use some combination of compgen and complete -p to invoke the completion function of foo, but AFAIK the completion function for any given foo is locally defined and thus uncallable
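Concretely, that first idea is just this single registration, which completes every word after prime-run as a command name:
complete -A command prime-run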
TL;DR
bash-completion provides a function named _command_offset (permalink), which is exactly what I need.
# A meta-command completion function for commands like sudo(8), which need to
# first complete on a command, then complete according to that command's own
# completion definition.
Keep reading if you are interested in how I got here.
So I was daydreaming the other day, when it hit me - doesn't sudo basically have the exact same behaviour I want? So the task became simple - reverse engineer the completion script for sudo. Source available here: permalink.
Turns out, most of the code has to do with completing the various options, so it's safe to simply throw most of it out:
L 8-11, 50-52: Related to sudo's edit mode. Safe to ditch.
L 19-24, 27-39, 43-49: These complete sudo's options. Safe to ditch.
So we're left with this:
_sudo()
{
    local cur prev words cword split
    _init_completion -s || return

    for ((i = 1; i <= cword; i++)); do
        if [[ ${words[i]} != -* ]]; then
            local PATH=$PATH:/sbin:/usr/sbin:/usr/local/sbin
            local root_command=${words[i]}
            _command_offset $i
            return
        fi
    done

    $split && return
} &&
    complete -F _sudo sudo sudoedit
The for and if blocks are there to deal with sudo's options that precede the "guest command". Safe to ditch (after replacing all $i with 1).
The variable $split is only referenced in _init_completion (permalink), and it seems to be used for handling different argument styles (--foo=bar vs. --foo bar). Same with the -s flag. Irrelevant.
Appending to $PATH and setting $root_command have to do with privilege escalation. Only relevant to sudo.
So after the dust has cleared, by process of elimination, I ended up with this simple chunk of code:
_my-script()
{
    local cur prev words cword
    _init_completion || return

    _command_offset 1
} && complete -F _my-script my-script
Declaring these four local variables and calling _init_completion is standard for all completion scripts, so really it's as simple as one command. Of course, someone had to write the massively complex _command_offset function, so lucky me, I guess?
Anyways, thank you for reading the story of me messing around and hopefully this will be helpful to some other person in the future.
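Applied to the prime-run script from the question, the same pattern presumably becomes the following (assuming the bash-completion package is installed and sourced, so that _init_completion and _command_offset exist):
_prime-run()
{
    local cur prev words cword
    _init_completion || return

    _command_offset 1
} && complete -F _prime-run prime-run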
Just like getting the function calls with ${FUNCNAME[@]}, is there a way to get the commands? BASH_COMMAND can only be used to get the last command (it's not an array, just a string).
I know I can achieve that by using BASH_SOURCE and BASH_LINENO to read the right line from the right file, but it does not work in the case of evals (see my other, less specific question Get the contents of an expanded expression given to eval through Bash internals)
Is there another way?
What is your intent? If you want to print a stack trace, you can use the Bash builtin command caller, like this:
dump_trace() {
    local frame=0 line func source n=0
    while caller "$frame"; do
        ((frame++))
    done | while read line func source; do
        ((n++ == 0)) && {
            printf 'Stack trace:\n'
        }
        printf '%4s at %s\n' " " "$func ($source:$line)"
    done
}
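As a quick usage sketch, assuming dump_trace is defined in the same script (demo.sh is a made-up name and the line numbers in the sample output are illustrative), calling it from nested functions prints one frame per caller:
inner() { dump_trace; }
outer() { inner; }
outer
# Stack trace:
#      at inner (./demo.sh:6)
#      at outer (./demo.sh:7)
#      at main (./demo.sh:8)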
From Bash manual:
caller [expr]
Returns the context of any active subroutine call (a shell function or
a script executed with the . or source builtins).
Without expr, caller displays the line number and source filename of
the current subroutine call. If a non-negative integer is supplied as
expr, caller displays the line number, subroutine name, and source
file corresponding to that position in the current execution call
stack. This extra information may be used, for example, to print a
stack trace. The current frame is frame 0.
The return value is 0 unless the shell is not executing a subroutine
call or expr does not correspond to a valid position in the call
stack.
See the full logging/error handling implementation here:
https://github.com/codeforester/base/blob/master/lib/stdlib.sh
Simple answer: there's no way to do that in Bash.
Related to the linked question and eval: Zsh seems to handle evals better, with variables and arrays such as EVAL_LINENO, zsh_eval_context and others.
funcstack
This array contains the names of the functions, sourced files, and (if EVAL_LINENO is set) eval commands currently being executed. The first element is the name of the function using the parameter.
The standard shell array zsh_eval_context can be used to determine the type of shell construct being executed at each depth: note, however, that it is in the opposite order, with the most recent item last, and it is more detailed, for example including an entry for toplevel, the main shell code being executed either interactively or from a script, which is not present in $funcstack.
See man zshall for more details.
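As a minimal zsh sketch (the probe function name is made up), something like this should show both arrays from inside an eval:
setopt EVAL_LINENO   # so eval frames also show up in funcstack
probe() {
  print -r -- "funcstack:        $funcstack"
  print -r -- "zsh_eval_context: $zsh_eval_context"
}
eval probe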
Old stuff:
Background:
- Ultimate goal is to put a script in my .bash_profile that warns me by changing text color if I'm typing a commit message and it gets too
long (yes I'm aware vim has something like this).
Progress:
- I found the read -n option which led me to write this:
while true; do
    # This hits at the 53rd character
    read -rn53 input
    # I have commit aliased to gc so the if is just checking if I'm writing a commit
    if [ "${input:0:2}" = "gc" ]; then
        printf "\nMessage getting long"
    fi
done
Question:
- However, running this takes the user out of the bash prompt. I need a way to do something like this while at a normal prompt. I can't find
information on anything like this. Does that mean it's not possible?
Or am I just going about it the wrong way?
New progress:
I found the bind -x option which led me to write this:
check_commit() {
    if [ "${READLINE_LINE:0:13}" == 'git commit -m' ] && [ ${#READLINE_LINE} -gt 87 ]; then
        echo "Commit is $((${#READLINE_LINE} - 87)) characters too long!"
    fi
    READLINE_LINE="$READLINE_LINE$1"
    READLINE_POINT=$((READLINE_POINT + 1))
}
bind -x '"\"": check_commit "\""'
It listens for a double quote and if I'm writing a long commit message tells me how many characters I am over the limit. Also puts the character I typed into the current line since it is eaten by the bind.
New question:
Now I just need a way to put in a regex, character list, or at least a variable instead of \" so I can listen on more keys (yes, I'm aware bind -x probably wasn't intended to be used this way; I can check performance/footprint/stability myself). I tried "$char", "${char}", "$(char)" and a few other things, but none seem to work. What is the correct approach here?
AFAIK, not possible in a sane way if you want this to happen during your normal prompt (when PROMPT_COMMAND and PS1 are evaluated). That would involve binding a custom compiled readline function for every insert-self and the like.
If you want this to happen in a script using the read builtin's prompt, this is crudely possible with a loop of
read -e -i "$(munge_buf "$buf")" -n "$(buf_warn_len "$buf")" -p "$(buf_warning "$buf")" buf
like commands. This will allow you to create munge_buf() to alter the currently typed text if needed, buf_warn_len() to calculate a new length to warn at (which may be very large if the warning was already displayed), and buf_warning() to derive a warning message based upon the buffer.
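A rough sketch of such a loop, with the three helpers left as hypothetical stubs to fill in:
munge_buf()    { printf '%s' "$1"; }   # optionally rewrite the typed text
buf_warn_len() { printf '%s' 53; }     # next length at which to interrupt
buf_warning()  { printf '%s' '$ '; }   # prompt, possibly carrying a warning

buf=
while IFS= read -e -i "$(munge_buf "$buf")" -n "$(buf_warn_len "$buf")" \
        -p "$(buf_warning "$buf")" buf; do
    : # inspect $buf here (e.g. warn about its length), then loop to keep editing
done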
I have a config file whose last line contains data; I want to assign everything to the RIGHT of the = sign to a variable that I can display and call later in the script.
Example: /path/to/magic.conf:
foo
bar
ThisOption=foo.bar.address:location.555
What would be the best method in a bash shell script to read the last line of the file and assign everything to the right of the equal sign? In this case, foo.bar.address:location.555.
The last line always has what I want to target and there will only ever be a single = sign in the file that happens to be the last line.
Google and searching here yielded many close but not quite relevant results using sed/awk, but I couldn't come up with exactly what I'm looking for.
Use sed:
variable=$(sed -n 's/^ThisOption=//p' /path/to/magic.conf)
echo "The option is: $variable")
This works by finding and removing the ThisOption= marker at the start of the line, and printing the result.
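With the example magic.conf above, that gives:
$ variable=$(sed -n 's/^ThisOption=//p' /path/to/magic.conf)
$ echo "The option is: $variable"
The option is: foo.bar.address:location.555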
IMPORTANT: This method absolutely requires that the file be trusted 100%. As mentioned in the comments, anytime you "eval" code without any sanitization there are grave risks (a la "rm -rf /" magnitude - don't run that...)
Pure, simple bash. (well...using the tail utility :-) )
The advantage of this method is that it only requires you to know that the assignment will be on the last line of the file; it does not require any information about that line (such as the variable name to the left of the = sign, which you'd need in order to use the sed option).
assignment_line=$(tail -n 1 /path/to/magic.conf)
eval ${assignment_line}
var_name=${assignment_line%%=*}
var_to_give_that_value=${!var_name}
Of course, if the var that you want to have the value is the one that is listed on the left side of the "=" in the file then you can skip the last assignment and just use "${!var_name}" wherever you need it.
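If you'd rather skip eval entirely and only care about the value, the same last-line idea works with plain parameter expansion (a sketch; value_right_of_equals is just an illustrative name):
assignment_line=$(tail -n 1 /path/to/magic.conf)
value_right_of_equals=${assignment_line#*=}   # everything after the first =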
I created an empty directory on zsh and added a file
called hello.rb by doing the following:
echo 'Hello, world.' >hello.rb
If I want to make changes to this file using the terminal, what's the proper way of doing it without opening the file itself in, let's say, TextEditor?
I want to be able to make changes in the file hello.rb strictly by using my zsh terminal. Is this at all possible?
Zsh is not a terminal but a shell. The terminal is the window in which the shell executes. The shell is the text program prompting you commands and executing them.
If you want to edit the file within the terminal, then using vim, nano, emacs -nw or any other text-mode text editor will do it. They are not Zsh commands, but external commands that you can call from Zsh or from any other shell.
If you want to edit the file within Zsh, then use zed. You will need to run once (in ~/.zshrc)
autoload zed
and then you can edit hello.rb using:
zed hello.rb
(exit and save with Control-j)
You have already created and edited the file.
To edit it again, you can use >> to append.
For example
echo "\nAnd you too!\n" >> hello.rb
This would edit the file by concatenating the additional string.
Edit: of course, by your use and definition of 'changing' a file, this is the simplest way to do so using the shell. Normally, though, you would probably want to use a terminal editor.
Zed is a great answer, but to be even more stripped down - for a level of editing that even a script can do - zsh can handle all 256 characters/byte-values (including null) in variables. This means you can edit line by line or chunk by chunk almost any kind of file data directly from the command line. This is approximately what zed/vared does. If you have a current version with all the standard modules included, it is a great benefit to have zsh/mapfile or zsh/system loaded so that you can capture any of the characters that are left out by command expansion (zed uses $(<$file) to read a file to memory). Here is an example of a way you could use this variable manipulation method:
% typeset -T Buffer buffer $'\n'
% typeset -T Edit edit $'\n'
It is most common to use newline to divide a text file one wishes to edit.
This handy feature will make zsh give you full access to one line or a range of lines at a time without unintentionally messing with the data.
% zmodload zsh/mapfile
% Buffer=$mapfile[path/to/file]
Here, I use the handy mapfile module because I can load the contents of a file byte-for-byte. Alternately you can use % Buffer="$(<path/to/file)", like zed does, but you will always have the trailing newlines removed and other word splitting is possible with a typo or environment variation, so the simplicity of the module's method is best. When finished, you save the changes by simply assigning the $Buffer value back to the $mapfile[file] or use a more classic command like printf '%s' $Buffer >path/to/file (this is exact string writing, byte-for-byte, so any newlines or formatting you added back will be written).
You transfer the lines between Buffer and Edit using the mapped arrays as follows, however, remember that in its simplest form assigning one array to another drops elements that are completely empty (one \n \n two \n three becomes one \n two \n three). You can suppress this empty-element removal by quoting the input array and adding an '#' symbol to its index "$buffer[#]", if using the whole array; and adding the '#' symbol to the flags if using a range of the array "${(#)buffer[2,50]}". Preserving empty lines can be a bit troublesome for typing, but these multiple arrays should only be used in a script or function, since you can just edit one line at a time from the command line with buffer[54]="echo This is a newly written line."
% edit=($buffer[50,70])
...
% buffer[50,70]=($edit)
This is standard Zsh syntax, that means in the ... area you can edit and manipulate the $edit array of lines or the $Edit scalar block of text all you want, including adding more lines or taking some away. When you add the lines back into $buffer it will replace the specified block of lines (50-70) with the new lines, automatically expanding or reducing the other array elements to accommodate the reintegrated lines. -- Because of the dynamic array accommodations, you can also just insert whatever you need as a new line like this buffer[40]=("new string as new line" "$buffer[40]"). This inserts it before the index given, while swapping the order of the elements ("$buffer[40]" "new string as new line") inserts the new line after the index given. Either will adjust all following elements, including totally empty elements, to their current index plus one.
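Putting the pieces above together, a compact end-to-end sketch of the same method (same placeholder path as above) could look like:
zmodload zsh/mapfile
typeset -T Buffer buffer $'\n'         # tie scalar Buffer to array buffer, split on newlines
Buffer=$mapfile[path/to/file]          # read the file byte-for-byte
buffer[3]="a replacement for line 3"   # edit a single line in place
printf '%s' $Buffer >path/to/file      # write the exact bytes back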
If you wanted to re-write the zed function to use this method in some complex way like: newzed /path/to/file [start-line] [end-line], that would be great and handy too.
Before I leave, I wanted to mention that using vared directly, once you have these commands typed on the interactive terminal, you may find it frustrating that you can't use "Enter" for inserting or appending new lines. I found that with my terminal and Zsh version, ESC-ENTER worked well, but I don't know about older versions (Mac usually comes stocked with a not-most-recent version, if my memory is right). If that doesn't work, you may have to do some documentation digging to learn how to set up your ZLE (Zsh Line Editor, a component of Zsh) or acquire a newer version of Zsh. Also, some other shells, when indexing a scalar variable, may count by the byte because in ASCII and C a byte is the same as a character, but Zsh supports UTF-8 and will index a scalar string by the UTF-8 character unless you turn off the shell option multibyte (on by default). This will help with manipulating each line if you need the old byte-character indexing. Also, if you have a version of Zsh that for whatever reason was not compiled with zsh/mapfile or zsh/system, then you can achieve a similar effect using a number of options to the read builtin, like <path/to/file |read -u 0 -k $[5 * 2**20] -r -s Buffer ||(($#Buffer)). As you can see here, you have to make the read length big enough to accommodate the file's size or it will leave off part of the file, and the read return code will nearly always be an error because of not being able to read the full length of the string. We fix this with ||(($#Buffer)), but this builtin was simply not meant to handle large-scale byte manipulation efficiently, so what you see is what you can get from it.