How to auto-complete a multi-level command aliased to a single command? - bash

Say I have two bash functions:
dock() { sudo docker "$@" ;}
and
dock-ip() { sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$@" ;}
How to get bash auto-completion working with the second function?
With the first one, it is as easy as adding:
_completion_loader docker; complete -F _docker dock
This will not work for the second one. The autocomplete source for Docker is in /usr/share/bash-completion/completions/docker on Debian Stretch. I have more functions like dock-run, dock-exec, etc. so I don't want to write a custom completion function for each of them.
Also, complete -F _docker_container_inspect dock-ip only partially works: pressing Tab lists containers but does not complete partial names.
Research:
How do I autocomplete nested, multi-level subcommands? <-- needs custom functions
https://superuser.com/questions/436314/how-can-i-get-bash-to-perform-tab-completion-for-my-aliases <-- automated for top commands only

With a combined hour of bash completion experience, I took apart the docker completion script (/usr/share/bash-completion/completions/docker) and the bash_completion.sh script to come up with a wrapper function:
# Usage:
#   docker_alias_completion_wrapper <completion function> <alias/function name>
#
# Example:
#   dock-ip() { docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$@" ;}
#   docker_alias_completion_wrapper __docker_complete_containers_running dock-ip
function docker_alias_completion_wrapper {
    local completion_function="$1";
    local alias_name="$2";

    local func=$(cat <<EOT
    # Generate a new completion function name
    function _$alias_name() {
        # Start off like _docker()
        local previous_extglob_setting=\$(shopt -p extglob);
        shopt -s extglob;

        # Populate \$cur, \$prev, \$words, \$cword
        _get_comp_words_by_ref -n : cur prev words cword;

        # Declare and execute
        declare -F $completion_function >/dev/null && $completion_function;

        eval "\$previous_extglob_setting";
        return 0;
    };
EOT
    );
    eval "$func";

    # Register the alias completion function
    complete -F _$alias_name $alias_name
}
export -f docker_alias_completion_wrapper
I then created my alias/functions like this:
# Get container IP
dock-ip() { docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$@" ;}
docker_alias_completion_wrapper __docker_complete_containers_running dock-ip
# Execute interactive container
dock-exec() { docker exec -i -t --privileged "$@" ;}
docker_alias_completion_wrapper __docker_complete_containers_all dock-exec
...
Be sure to call _completion_loader docker; near the top of your profile aliases script so that the main Docker completion functions are loaded first. I invite more skilled bash programmers to improve this answer.
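Putting the pieces in order, a profile snippet might look like the sketch below (the file name and layout are illustrative, not part of the original answer; the wrapper function itself is assumed to be defined earlier in the same file):
# ~/.bash_aliases (sketch) -- order matters: load Docker's completion file
# before registering anything that references its helper functions.
_completion_loader docker

# A plain pass-through alias can reuse the stock top-level completer.
dock() { sudo docker "$@" ;}
complete -F _docker dock

# Sub-command wrappers reuse the inner helpers via docker_alias_completion_wrapper.
dock-ip() { docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$@" ;}
docker_alias_completion_wrapper __docker_complete_containers_running dock-ip

dock-exec() { docker exec -i -t --privileged "$@" ;}
docker_alias_completion_wrapper __docker_complete_containers_all dock-exec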

Related

Bash how to add conditional quoted argument '--pull "always"' to docker command

I am trying to conditionally add arguments to a docker call in a bash script, but docker says the flag is unknown, even though I can run the command verbatim by hand and it works.
I have tried a few strategies to add the arguments, including using a string instead of an array, and I have tried a substitution like the solution here (using ${array[@]/#/'--pull '}): https://stackoverflow.com/a/68675860/10542275
docker run --name application --pull "always" -p 3000:3000 -d private.docker.repository/group/application:version
This bash script:
run() {
    getDockerImageName "/group" "$PROJECT_NAME:$VERSION" "latest";
    local imageName=${imageName};
    local additionalRunParameters=${additionalRunParameters};
    cd "$BASE_PATH/$PROJECT_NAME" || exit 1;
    stopAnyRunning "$PROJECT_NAME";

    echo docker run --name "$PROJECT_NAME" \
        "${additionalRunParameters[@]}" \
        -p 3000:3000 \
        -d "$imageName";
    # prints: docker run --name application --pull "always" -p 3000:3000 -d private.docker.repository/group/application:version

    docker run --name "$PROJECT_NAME" \
        "${additionalRunParameters[@]}" \
        -p 3000:3000 \
        -d "$imageName";
    # fails with: unknown flag: --pull "always"
}
The helper 'getDockerImageName'
# Gets the name of the docker image to use for deploy.
# $1 - The path to the image in the container registry
# $2 - The name of the image and the tag
# $3 - 'latest' if the deploy should use the container built by CI
export imageName="";
export additionalRunParameters=();
getDockerImageName() {
    imageName="group/$2";
    if [[ $3 == "latest" ]]; then
        echo "Using docker image from CI...";
        docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "https://$DOCKER_BASE_URL";
        imageName="${DOCKER_BASE_URL}${1}/$2";
        additionalRunParameters=('--pull "always"');
    fi
}
Don't put code (such as arguments) in a variable. Basically, using an array is good, and you are almost doing that. This line -
local additionalRunParameters=${additionalRunParameters};
is probably what's causing you trouble, along with
additionalRunParameters=('--pull "always"');
which is embedding the space between what you seem to have meant to be two operands (the option and its argument), turning them into a single string that docker cannot parse. The error unknown flag: --pull "always" is telling you that the flag being parsed is --pull "always" as one word, which is NOT the --pull flag docker DOES know, followed by an argument.
Also,
export additionalRunParameters=(); # nope
arrays don't really export to child processes. Take that out; it will only confuse someone.
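A quick way to see that for yourself (throwaway interactive session; the exact error wording varies with the bash version):
$: export myarr=(one two)        # bash accepts the export without complaint
$: declare -p myarr              # the -x attribute shows up in the current shell
declare -ax myarr=([0]="one" [1]="two")
$: bash -c 'declare -p myarr'    # but a child process never receives the array
bash: declare: myarr: not found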
A much simplified set of examples:
$: declare -f a b c
a ()
{
    foo=('--pull "always"')    # single value array
}
b ()
{
    echo "1: \${foo}='${foo}' <= scalar, returns first element of the array";
    echo "2: \"\${foo[@]}\"='${foo[@]}' <= returns entire array (be sure to put in quotes)";
    echo "3: \"\${foo[1]}\"='${foo[1]}' <= indexed, returns only second element of array"
}
c ()
{
    foo=(--pull "always")    # two values in this array
}
$: a    # sets ${foo[0]} to '--pull "always"'
$: b
1: ${foo}='--pull "always"' <= scalar, returns first element of the array
2: "${foo[@]}"='--pull "always"' <= returns entire array (be sure to put in quotes)
3: "${foo[1]}"='' <= indexed, returns only second element of array
$: c    # sets ${foo[0]} to '--pull' and ${foo[1]} to "always"
$: b
1: ${foo}='--pull' <= scalar, returns first element of the array
2: "${foo[@]}"='--pull always' <= returns entire array (be sure to put in quotes)
3: "${foo[1]}"='always' <= indexed, returns only second element of array
So what you need is:
getDockerImageName() {
    . . . # stuff
    additionalRunParameters=( --pull "always" );   # no single quotes
}
and just take OUT
local additionalRunParameters=${additionalRunParameters}; # it's global, no need
You have one more issue though - "${additionalRunParameters[@]}" \ is good as long as the array isn't empty. In your example it will apparently always be loaded with the same values, so I don't see why you are adding the extra complication of a global array that gets loaded as a side effect in another function... it seems like an antipattern. Just put the arguments you are universally enforcing anyway on the command itself.
However, on the possibility that you simplified some details out: if this array is ever empty, it's going to pass a literal quoted empty string as an argument on the command line, and you're likely to get something like the following error:
$: docker run --name foo "" -p 3000:3000 -d bar # docker doesn't understand the empty string
docker: invalid reference format.
Maybe, rather than
additionalRunParameters=( --pull "always" ); # no single quotes
and
"${additionalRunParameters[@]}" \
what you really want is
pull_always=true
and
${pull_always:+ --pull always } \
...with no quotes, so that if the var has been set (with anything) it evaluates to the desired result, but if it's unset and unquoted it evaluates to nothing and gets ignored, as nothing actually gets passed in - not even a null string.
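A quick illustration of the difference, using echo so the expansion is visible (foo and bar are placeholders, as in the earlier example):
$: pull_always=true
$: echo docker run --name foo ${pull_always:+ --pull always } -p 3000:3000 -d bar
docker run --name foo --pull always -p 3000:3000 -d bar
$: unset pull_always
$: echo docker run --name foo ${pull_always:+ --pull always } -p 3000:3000 -d bar
docker run --name foo -p 3000:3000 -d bar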
Good luck. Let us know if you need more help.

ksh remote function calling another remote function

I am having a problem running the simple code below:
#!/bin/ksh
set -x
function test_me
{
    set -x
    date
}
function check_me
{
    set -x
    ssh ${HST2} "$(typeset -f test_me); test_me"
}
ssh ${HST1} "$(typeset -f); check_me"
Fails with syntax error at line 5: `;;' unexpected
Though I can't explain why this gives you the particular error message you see, at least I see that your code can't work:
First you run in one process (on HST1) the commands
function check_me
{
    set -x
    ssh ${HST2} "$(typeset -f test_me); test_me"
}; check_me
On HST1 this defines only the function check_me, nothing else. Then you run this check_me.
Inside this function you refer to two things: a variable HST2, and a function test_me. There is nothing in your code which would define these entities on the remote side. They are only defined in the initial process, where you run the ssh, but not in the remote process on the host $HST1.
Moreover, you don't even run a ksh on $HST1. At least I don't see any ksh invocation in your code. Actually, you just pass a function .... to $HST1, and a function is not an executable.
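For what it's worth, here is a sketch of one way to do what this answer describes - ship the variable and the function definitions along with the command, and run them under an explicit ksh on the remote side (untested; it assumes ksh is installed on both remote hosts and that password-less ssh is set up):
#!/bin/ksh
set -x
function test_me
{
    set -x
    date
}
function check_me
{
    set -x
    ssh "$HST2" "$(typeset -f test_me); test_me"
}

# Ship the value of HST2 and every local function definition to HST1,
# then run them under an explicit ksh there instead of the login shell.
ssh "$HST1" ksh -s <<EOF
HST2=$HST2
$(typeset -f)
check_me
EOF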
#!/bin/ksh
set -x
function second_me
{
    set -x
    date
}
function first_me
{
    set -x
    ssh hostname2 "$(typeset -f second_me); second_me"
}
#from local_machine
ssh hostname1 "$(typeset -f); first_me"
The above fails with syntax error at line X: `;;' unexpected.
But if I add an extra dummy function third_me, things work fine. Looks like a ksh bug?
#!/bin/ksh
set -x
function third_me
{
    set -x
    date
}
function second_me
{
    set -x
    date
}
function first_me
{
    set -x
    ssh hostname2 "$(typeset -f second_me); second_me"
}
#from local_machine
ssh hostname1 "$(typeset -f); first_me"
Code works fine
Another workaround, using a sub-function:
#!/bin/ksh
set -x
function first_me
{
    set -x
    function second_me
    {
        set -x
        date
    }
    ssh hostname2 "$(typeset -f second_me); second_me"
}
#from local_machine
ssh hostname1 "$(typeset -f); first_me"

How to call a variable of string with spaces in a terraform provisioner?

I am trying to run a terraform provisioner which calls my ansible playbook, and I am passing a public key as a variable from the user. When passing the public key, it doesn't take the entire key; only ssh-rsa comes through, not the complete string.
I want to pass the complete string, such as "ssh-rsa Aghdgdhfghjfdh".
The provisioner in terraform which I am running is :
resource "null_resource" "bastion_user_provisioner" {
  provisioner "local-exec" {
    command = "sleep 30 && ansible-playbook ../../../../ansible/create-user.yml --private-key ${path.module}/${var.project_name}.pem -vvv -u ubuntu -e 'username=${var.username}' -e 'user_key=${var.user_key}' -i ${var.bastion_public_ip}, -e 'root_shell=/bin/rbash' -e 'raw_password=${random_string.bastion_password.result}'"
  }
}
If I run the playbook alone as:
ansible-playbook -i localhost create-user.yml --user=ubuntu --private-key=kkk000.pem -e "username=kkkkk" -e 'user_key='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+GWlljlLzW6DOEo"' -e root_shell="/bin/bash"
it works.
But I want the string to be in a terraform variable that is passed to the provisioner.
I want to have the key copied to a file as
ssh-rsa AWRDkj;jfdljdfldkf'sd.......
and not just
ssh-rsa
You are getting bitten by the -e key=value splitting that goes on with the command-line --extra-vars interpretation [citation]. What you really want is to feed -e some JSON text, to stop it from trying to split on whitespace. That will also come in handy for sufficiently complicated random-string passwords, which would otherwise produce a very bad outcome when passed on the command line.
Thankfully, there is a jsonencode() function that will help you with that problem:
resource "null_resource" "bastion_user_provisioner" {
  provisioner "local-exec" {
    command = <<SH
set -e
sleep 30
ansible -vvv -i localhost, -c local -e '${jsonencode({
  "username"="${var.username}",
  "user_key"="${var.user_key}",
  "raw_password"="${random_string.bastion_password.result}",
})}' -m debug -a var=vars all
SH
  }
}
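For reference, once Terraform renders that command the -e payload reaches ansible as one JSON document instead of whitespace-split key=value pairs; run by hand it would look roughly like this (illustrative values, not taken from the question):
ansible -vvv -i localhost, -c local \
  -e '{"username":"kkkkk","user_key":"ssh-rsa AAAAB3NzaC1yc2E...","raw_password":"s3cr3t"}' \
  -m debug -a var=vars all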

Jenkins pipeline undefined variable

I'm trying to build a Jenkins Pipeline for which a parameter is
optional:
parameters {
    string(
        name: 'foo',
        defaultValue: '',
        description: 'foo is foo'
    )
}
My purpose is to call a shell script and provide foo as an argument:
stages {
    stage('something') {
        sh "some-script.sh '${params.foo}'"
    }
}
The shell script will do the Right Thing™ if the provided value is the empty
string.
Unfortunately I can't just get an empty string. If the user does not provide
a value for foo, Jenkins will set it to null, and I will get null
(as string) inside my command.
I found this related question but the only answer is not really helpful.
Any suggestion?
OP here: I realized a wrapper script can be helpful. I ironically called it junkins-cmd, and I call it like this:
stages {
    stage('something') {
        sh "junkins-cmd some-script.sh '${params.foo}'"
    }
}
Code:
#!/bin/bash
helpme() {
    cat <<EOF
Usage: $0 <command> [parameters to command]
This command is a wrapper for jenkins pipeline. It tries to overcome jenkins
idiotic behaviour when calling programs without polluting the remaining part
of the toolkit.
The given command is executed with the fixed version of the given
parameters. Current fixes:
- 'null' is replaced with ''
EOF
} >&2

trap helpme EXIT
command="${1:?Missing command}"; shift
trap - EXIT

typeset -a params
for p in "$@"; do
    # Jenkins pipeline uses 'null' when the parameter is undefined.
    [[ "$p" = 'null' ]] && p=''
    params+=("$p")
done

exec $command "${params[@]}"
Beware: params+=("$p") seems not to be portable among shells; hence this ugly script runs under #!/bin/bash.
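A quick sanity check from an interactive shell, assuming junkins-cmd is on your PATH (some-script.sh here is a stand-in, not part of the original pipeline):
$ cat > some-script.sh <<'EOF'
#!/bin/bash
printf 'got %d argument(s): [%s]\n' "$#" "$*"
EOF
$ chmod +x some-script.sh
$ junkins-cmd ./some-script.sh null
got 1 argument(s): []
$ junkins-cmd ./some-script.sh hello
got 1 argument(s): [hello]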

bash complete from two sources, folders and keys from array

I am trying to define a bash function, mycd. This function uses an associative array, mycdar. If the key exists in the array, the function changes directory to the corresponding value of the key. If the key doesn't exist, it changes to the dir provided on the command line.
What I would like is to have completion for this function, drawing from both the keys of the associative array and the folders existing in the current directory.
Thank you.
Building my own cd function with completion
Using an associative array for storing some paths.
First the command:
mycd() { [ -v mycdar["$1"] ] && cd "${mycdar[$1]}" || cd "$1"; }
Second, the completion function:
_mycd() {
    local cur;
    _cd;
    _get_comp_words_by_ref cur;
    COMPREPLY=($(
        printf "%s\n" "${!mycdar[@]}" |
            grep "^$cur")
        ${COMPREPLY[@]});
}
One array:
declare -A mycdar='(
    ["docs"]="/usr/share/doc"
    ["home"]="$HOME"
    ["logs"]="/var/log"
    ["proc"]="/proc"
    ["root"]="/"
    ["tmp"]="/tmp"
)'
Then finally the bind:
complete -F _mycd -o nospace mycd
Or to permit standard path building behaviour:
complete -F _mycd -o nospace -o plusdirs mycd
It turns out that there is an option to the complete builtin that does exactly what is asked:
complete -o plusdirs -o nospace -F _mycd mycd
In this case _mycd just returns matching elements from the keys of the associative array.
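With -o plusdirs doing the directory half, _mycd can shrink to offering just the array keys. A minimal sketch (same mycdar array as above; it relies on compgen and on the bash-completion helper _get_comp_words_by_ref already being loaded):
_mycd() {
    local cur;
    _get_comp_words_by_ref cur;
    # Offer only the keys of the associative array; 'complete -o plusdirs'
    # appends the matching directory names on its own.
    COMPREPLY=($(compgen -W "${!mycdar[*]}" -- "$cur"));
}
complete -o plusdirs -o nospace -F _mycd mycd
Typing mycd do<Tab> then completes to docs from the array, while mycd ./<Tab> still completes local directories.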
