Using the Bash autocompletion of another command

When I create a command that wraps an existing command with some sugar, I'd like the new command to support the autocompletion of the original command. Is there a way to tell Bash to reuse the autocompletion script of another command?
Silly example:
cat > ~/ls-on-steroids.sh <<'EOF'
#!/bin/bash
echo "Here are some goodies!"
ls "$@"
EOF
chmod +x ~/ls-on-steroids.sh
Now, how do I configure my new script such that when I type:
~/ls-on-steroids.sh <TAB><TAB>
I'd like the same behavior as with:
ls <TAB><TAB>
Preferably in a portable, repeatable manner, without having to manually track down the location of ls's autocomplete script.

You have to configure it manually, but it's relatively simple to copy completions from an existing command. First, run complete -p ls to see what completion command, if any, is defined for ls. If nothing comes up, ls doesn't use any special completions. More likely, though, you'll see something like the following as the output:
complete -o default -F _longopt ls
which says that the function _longopt is called to generate completions for the command ls, and if that doesn't generate any results, the Bash default completion is used. You can apply the same function to your wrapper by simply running
complete -o default -F _longopt ls-on-steroids.sh
(i.e., replace ls with your script's name as the final argument in the command printed by complete -p).
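If you'd rather not copy the spec by hand, you can capture the output of complete -p and re-apply it to the new name. A minimal sketch (the helper name copy_completion is illustrative, not a standard command; it also assumes the source command's completion is already loaded, which may require triggering it once on systems that lazy-load completions):
copy_completion() {
    local spec
    spec=$(complete -p "$1" 2>/dev/null) || return 1
    # Re-run the printed spec, swapping in the target command name.
    eval "${spec% "$1"} $2"
}
copy_completion ls ls-on-steroids.sh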

Related

Use content of file as part of a Bash command

I want to use the content of a file.txt as part of a bash command.
Suppose that the bash command with its options that I want to execute is:
my_command -a first value --b_long_version_option="second value" -c third_value
but the first two options (-a and --b_long_version_option) are very verbose, so instead of inserting them directly on the command line (or in a bash script) I wrote them in a file.txt like this:
-a first value \
--b_long_version_option="second value"
Now I expect to call the command "my_command" with the following syntax (where "path_to/file.txt" is the path to file.txt, expressed in relative or absolute form):
my_command "$(cat path_to/file.txt)" -c third_value
This however is not the right syntax, as my command is breaking and complaining.
How should I write the new version of the command and/or the file.txt so that it is equivalent to its native bash usage?
Thanks in advance!
The quotes are preserving the newlines. Take them off.
You also don't need the cat unless you're running an old bash parser.
my_command $(<path_to/file.txt) -c third_value
You'll need to take the backslashes at the ends of lines out.
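So the file would need to contain bare arguments, with no quotes and no trailing backslashes. A sketch of what that leaves you with (the underscore is deliberate: quotes inside the file are not re-parsed by the shell, so a value containing a space cannot survive plain word splitting):
-a first value
--b_long_version_option=second_value
That limitation is exactly what the array approach below avoids.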
Be careful doing things like this, though. It's probably better to just put the whole command in the file, rather than just pieces of it. If you really just want arguments, maybe define them a little more carefully in an array, source the file and then apply them, like this:
in file:
myArgs=( "-a" "first value"
"--b_long_version_option=second value"
)
Note the quoting. Then run with
. file
my_command "${myArgs[#]" -c third_value
e.g.,
$: printf "[%s] " "${myArgs[@]}" -c=foo
[-a] [first value] [--b_long_version_option=second value] [-c=foo]
I haven't seen an example of exactly what you're trying, but there are simpler ways to achieve your goal.
Bash Alias
ll, for example, is a bash alias for ls -al. It is usually defined in .bash_profile or .bashrc as follows:
alias ll='ls -al'
So, what you can do is to set another alias for your shorthand command.
alias mycmd='mycommand -a first value --b_long_version_option="second value"'
Then you can use it as follows:
mycmd -c third_value
Config file
You can also define a mycommand.json or mycommand.ini file for default arguments. Your software then needs to check for the config file and assign arguments from it.
Using a config file is the more advanced solution. You can define multiple config files. For example, you can set a default config file in /etc/mycommand/config.ini; when running in different directories, you can check whether a local ${cwd}/mycommand.ini exists. You can even add a --config-file argument to your command.
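A minimal sketch of that lookup order in a wrapper script (the file names follow the paragraph above; the simple KEY=VALUE config format and the variable names are assumptions for illustration):
#!/bin/bash
# Pick a config file: a local one wins, else the system-wide default.
config=/etc/mycommand/config.ini
[ -f ./mycommand.ini ] && config=./mycommand.ini
# Optional explicit override: mycmd --config-file=path ...
case $1 in
    --config-file=*) config=${1#--config-file=}; shift ;;
esac
. "$config"    # defines e.g. A_OPT and B_OPT
mycommand -a "$A_OPT" --b_long_version_option="$B_OPT" "$@"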
Using an alias is more convenient for small tasks, or for things that won't change much. If your command's behavior should differ from project to project, then using a config file is the better solution.

Override command for make from outside

I have several dirs with files stamp1.txt and stamp0.txt, and I want to override the cat command. I need it, for example, to suppress 'stamp1' files from being archived into a library.
So I wrote a little filter program called 'realname' and a bash script to override the original cat command.
function cat() {
    local e=""
    for s in "$@"
    do
        if realname "$s"; then
            e="$e $s"
        fi
    done
    command cat $e
}
So command:
cat dir1/stamp1.txt dir2/stamp0.txt
will be converted to
cat dir2/stamp0.txt
And this example works just fine
ar cruv some_lib.a `cat dir1/stamp1.txt dir2/stamp0.txt`
But when I run some makefile to build some software, the original cat is used inside that process, not the overridden one.
How can I override cat (or any other command) in a way that works for the make process, without changing the makefile? (The makefile belongs to third-party software and I don't want to patch it every time an upgrade is needed.)
You can't do that with a shell function, because a shell function exists only in the local shell. It's not passed to programs like make. Also, GNU make always invokes /bin/sh by default, not /bin/bash, and your shell function above is written in bash syntax, so putting it in your ~/.bashrc will have no impact.
You could run:
$ make SHELL=/bin/bash
and add that shell function to your ~/.bashrc and that might work.
The only other thing you can do (assuming that your third party makefile invokes cat directly and doesn't use a variable like $(CAT) instead) is to create a cat shell script (not a function) and put it on your PATH before /bin and /usr/bin when you invoke make. Something like:
$ mkdir tmp
$ vi tmp/cat
...add commands...
$ chmod 755 tmp/cat
$ PATH=$(pwd)/tmp:$PATH make ...
Of course when you do this you can't use command cat ... in your script, you'll have to use a fully-qualified path like /bin/cat ...
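For example, the wrapper script might look like this (a sketch only; it assumes, as in the question's function, that realname succeeds for files that should be kept):
#!/bin/sh
# tmp/cat: pass only the files accepted by realname on to the real cat.
keep=
for f in "$@"; do
    if realname "$f"; then
        keep="$keep $f"
    fi
done
/bin/cat $keep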
Short version: you cannot (and you probably should not want to).
Longer version: You've defined a shell function. You could even export that shell function into the environment (export -f cat). But unless the Makefile said bash -c 'cat ...' instead of just cat (or some other reference to the same effect), it would not give you the behavior you want.
If you really insisted, and the Makefile did not use a hard-coded path (e.g. /bin/cat), you could write your own cat, place it somewhere, and make this location precede other possible hits for cat in PATH (just put it up front).
There is also some chance (look in the makefile) that it uses a variable (e.g. CAT) to decide what to call, in which case you could just provide your own definition of that variable.
In any case, though, I would discourage you from using workarounds like these, because they obfuscate the actual behavior of the machinery: there is something declared here... and something else in the environment giving it a different meaning. That is a very common source of mistakes and, eventually, of non-obvious (harder to resolve) bugs.
An example/clarification of the function bit. I have a Makefile:
all:
	@echo foo
And I define and export a function "overriding" echo: echo() { /bin/echo "$@" bar; }; export -f echo. I run make and get:
$ make
foo
Because make just looks for echo in PATH (it tries to exec it, and once it finds it, runs it). If I changed the recipe to have bash step in between, the exported function would kick in, but... that's an unusual way to use commands in make, and you'd have to edit the Makefile, which you did not want:
all:
	@bash -c 'echo foo'
This would yield you the result you wanted:
$ make
foo bar
The other option I've mentioned: I've put the behavior of that function into a script /tmp/bin/echo that reads:
#!/bin/bash
/bin/echo "$#" bar
And I've modified the PATH env var: export PATH=/tmp/bin:$PATH. Now, even with the first form of the Makefile:
all:
	@echo foo
I get:
$ make
foo bar
But if the given Makefile says /bin/echo instead, I'd have no such luck. You could still change the binary... or change its behavior by forcing a shared library preload... but that sounds quite extreme and fully exposes why this really might not be the best direction to take.

Making a bash script switch to interactive mode and give a prompt

I am writing a training tool. It is written in bash, to teach bash/unix.
I want a script to run to set things up, then to hand control to the user.
I want it to be easily runnable by typing ./script-name
How do I do this?
I.e.:
User types: tutorial/run
The run-tutorial script sets things up.
The user is presented with a task. (this bit works)
The command prompt is returned, with the shell still configured.
Currently it will work if I type . tutorial/bashrc
There are several options:
You start the script in the same shell, using source or .;
You start a new shell, but with your script as an initialization script.
The first is obvious; here is a little more detail about the second.
For that, you use the --init-file option:
bash --init-file my-init-script
You can even use this option in the shebang line:
#!/bin/bash --init-file
And then you start your script as always:
./script-name
Example:
$ cat ./script-name
#!/bin/bash --init-file
echo Setting session up
PS1='.\$ '
A=10
$ ./script-name
Setting session up
.$ echo $A
10
.$ exit
$ echo $A
$
As you can see, the script has set up the environment for the user and then handed them the prompt.
Try making it an alias in your ~/.bashrc file. Add this to the bottom of ~/.bashrc:
alias tutorial='. tutorial/bashrc'
Then close and re-open your terminal, or type . ~/.bashrc to re-source it.
To use this alias, simply call tutorial, and that will automatically get replaced with its alias, as though you had called . tutorial/bashrc.

Prevent a command in a shellscript from being executed inside another shellscript

I am invoking a shell script inside another shell script, and the invoked one has a command to delete a folder, which I don't want to be executed, like this:
$ rm ../temp -rf
Is there a way to prevent this command from being executed without changing the invoked script contents?
You can define an alias like this in your script:
alias rm='echo SAFE'
By default aliases work only in interactive shells, so you must change this behaviour with:
shopt -s expand_aliases
in your script and source the other script (the one you cannot change):
source the_other_script.sh
or
. the_other_script.sh
This will work unless the other script runs rm in a sub-shell.
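Putting it together, the calling script might look like this (a sketch; the_other_script.sh stands for the script you cannot change):
#!/bin/bash
shopt -s expand_aliases      # aliases are ignored in scripts unless enabled
alias rm='echo SAFE'         # any rm in the sourced script now just prints
. the_other_script.sh
unalias rm                   # restore normal rm for the rest of this script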
Another, safer method is to create your own rm command which does nothing (or just prints out its arguments so you know what's going on), put it into a directory, and put this directory as the first one in the PATH environment variable, like this (Korn shell syntax):
$ export PATH=/path/to/your/dummy/rm/replacement:$PATH
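A minimal sketch of such a replacement (the directory name and message are illustrative):
mkdir -p ~/dummy-bin
cat > ~/dummy-bin/rm <<'EOF'
#!/bin/sh
echo "rm suppressed: $@"
EOF
chmod +x ~/dummy-bin/rm
export PATH=~/dummy-bin:$PATH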

How to run multiple Unix commands at one time?

I'm still new to Unix. Is it possible to run multiple Unix commands at one time? For example, could I write all the commands that I want to run in a file, so that after I call that file, it runs all the commands inside it? Or is there some other (or better) way that I do not know of?
Thanks for all the comments and suggestions; I appreciate them.
Short answer is: yes. The concept is known as shell scripting, or bash scripting (bash being a common shell). In order to create a simple bash script, create a text file with this at the top:
#!/bin/bash
Then paste your commands inside of it, one to a line.
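For instance, the file might look like this (the commands themselves are placeholders for whatever you want to run):
#!/bin/bash
date      # each command runs in order, one after another
whoami
ls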
Save your file, usually with the .sh extension (but not required) and you can run it like:
sh foo.sh
Or you could change the permissions to make it executable:
chmod u+x foo.sh
Then run it like:
./foo.sh
Lots of resources available on this site and the web for more info, if needed.
echo 'hello' && echo 'world'
Just separate your commands with &&
We can run multiple commands in the shell by using ; as a separator between them.
For example:
ant clean;ant
If we use && as the separator, then the next command runs only if the previous command succeeded.
You can also use a semicolon ';' and run multiple commands, like:
$ ls ; who
Yep, just put all your commands in one file and then
bash filename
This will run the commands in sequence. If you want them all to run in parallel (i.e. don't wait for each command to finish before starting the next) then add an & to the end of each line in the file.
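For instance, a file like this runs both jobs at once (the sleeps are stand-ins for real commands; the wait is only needed if the script should block until they finish):
sleep 2 &
sleep 3 &
wait
echo "both jobs finished"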
If you want to combine multiple commands at the command line, you can use pipes to perform the operations:
grep "Hello" <file-name> | wc -l
This prints the number of lines in that file which contain "Hello".
Sure. It's called a "shell script". In bash, put all the commands in a file with the suffix ".sh". Then run this:
chmod +x myfile.sh
then type
. ./myfile.sh
or
source ./myfile.sh
or just
./myfile.sh
To have the commands actually run at the same time you can use the job ability of zsh
$ zsh -c "[command1] [command1 arguments] & ; [command2] [command2 arguments]"
Or if you are running zsh as your current shell:
$ ping google.com & ; ping 127.0.0.1
The ; is a token that lets you put another command on the same line that is run directly after the first command.
The & is a token placed after a command to run it in the background.
