I maintain dozens of bash scripts whose options I am likely to change. Changing the options involves three operations:
Changing the string you provide to getopts to parse options (:g:h:pt for example)
Writing the piece of code that assigns the arguments (opt1=$OPTARG)
Changing the usage function (the function which displays a description of the options)
This is a bit heavy, especially when you know that boost::program_options provides a nice interface for handling options in C++.
Is there something similar to boost::program_options in Bash?
Use argbash. You won't regret it.
Argbash documentation
Argbash uses getopts in the background, but manages most of the implementation for you and provides a more consistent parsing codebase across projects. I've used it successfully, and it's awesome, but it does have a learning curve. It's a code generator that creates a parser script supporting long and short options, and it creates help documentation automatically. It will even help with man pages.
The basic steps are:
Install argbash. You can use the install instructions on the site to compile it, but I recommend using the Docker container.
Create a template m4 file defining your options. You can do this manually or with a script.
If you are using the Docker container, it would be something like:
argbash-init-docker \
--opt myoption1 \
--opt-bool myoption2 \
--pos my_arg1 \
--pos my_arg2 \
parser.m4
Run argbash with the template you generated as an input.
Something like:
argbash-docker \
parser.m4 \
--strip user-content \
-o parser.sh
In your main script that will use the parser, source the parser.sh script produced by the previous command:
source ./parser.sh
Reference the options in your code
An example of how you'd reference a boolean option:
if test "$_arg_myoption2" = on; then
    echo 'myoption2 is on'
fi
Test
./my-script.sh --myoption2 arg1 arg2
Repeat.
As for your concern about manual steps, argbash lets you keep them to a minimum. You can get to the point where you are just updating the template and running a build script, as sketched below. My current implementation does have more manual steps than I'd like, but I'll be automating them out soon.
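A sketch of such a build script, assuming a locally installed argbash and the file names from the steps above (substitute argbash-docker if you run it in the container):

#!/bin/bash
# Regenerate the parser whenever the template changes.
set -euo pipefail
argbash parser.m4 --strip user-content -o parser.sh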
I have a more detailed outline of how I use it in my project here: README-Argbash, and you can look at my code to see how I use it in the main script.
Option 2 - Use docopts, the docopt implementation for Bash.
The downside of docopts is that it requires the docopt interpreter to be distributed separately to each user of your CLI, which is a no-go for me. As a side note, argbash can generate docopt-compliant help documentation to be used as the docopt template.
Use getoptions.
https://github.com/ko1nksm/getoptions
getoptions is a new option parser (generator) written in POSIX-compliant shell script, for those who want to support the POSIX and GNU standard option syntax in their shell scripts (dash, bash, ksh, zsh and all POSIX shells).
It is fast and small, while supporting abbreviated long options, subcommands, automatic help generation, etc. And it is very easy to use because it is implemented with pure shell functions and does not require any other tools.
It is also an option parser generator. Pre-generating the option parser code eliminates the need for including the libraries and makes it even faster.
Example (this is all):
#!/bin/sh
VERSION=1.0
parser_definition() {
  setup   REST help:usage abbr:true -- "Usage: example.sh [options] [arguments]" ''
  msg -- "Options:"
  flag    FLAG_A  -a
  flag    FLAG_B  -b
  flag    FLAG    -f +f --{no-}flag -- "takes no arguments"
  param   PARAM   -p --param -- "takes one argument"
  option  OPTION  -o --option on:"default" -- "takes no arguments or one argument"
  disp    :usage  -h --help
  disp    VERSION --version
}
eval "$(getoptions parser_definition) exit 1"
echo "FLAG: $FLAG, PARAM: $PARAM, OPTION: $OPTION"
printf ': %s\n' "$@" # rest arguments
script.sh -ab -f +f --flag --no-flag -p 1 -p2 --param 3 --param=4 --option=5 --op -- A B
Related
I was going through a shell script where set -m was used.
My understanding is that set is used to set the positional arguments.
For example: set -- SO community is very helpful. Now, if I do echo $1, I get SO, and so on for $2, $3...
After checking the command with the help flag, I got "-m Job control is enabled."
My question is, what is the purpose of set -m in the following code?
set -m
(
    current_hash="some_ha54_one"
    new_hash=$(cat file.txt)
    if [ "$current_hash" != "$new_hash" ]; then
        pip install -r requirement.txt
    fi
    tmp="temp variable"
    export tmp
    bash some_bash_file.sh &
    wait
    bash some_other_bash_file.sh &
)
I understand (to the best of my knowledge) what is going on inside ( ), but what is the use of set -m?
"Job control" enables features like bg and fg; signal-handling and file-descriptor routing changes intended for human operators who might use them to bring background tasks into the foreground to provide them with input; and the ability to refer to background tasks by job number instead of PID. The script segment you showed doesn't use these features, so the set -m call is presumably pointless.
These features are meant for human users, not scripts, so in scripts they're off by default. In general, code that attempts to use them in scripts is buggy and should be replaced with code that operates by PID. As an example, code that runs two scripts in parallel and then collects the exit status of each when they're finished, without needing job control, follows:
bash some_bash_file & some_pid=$!
bash some_other_file & some_other_pid=$!
wait "$some_pid"; some_rc=$?
wait "$some_other_pid"; some_other_rc=$?
I want to use several instances of a spider object of scrapy with several instances of polipo proxies.
For this, it is quite simple:
for i in `seq 1 100`
do
    numeroPortProxy=$(($i+30000))
    command="scrapy crawl courses_PT ... -s HTTP_PROXY=http://127.0.0.1:${numeroPortProxy}"
    eval $command&
done
My issue is that I need to automatically answer "o" to a prompt from the command.
I tried eval $command&<<<"o", but it does not work; the logfile indicates: EOFError: EOF when reading a line
I also tried eval $command<<<"o" &, but it does not work either, and gives this error in the terminal:
Usage
=====
scrapy crawl [options] <spider>
crawl: error: Invalid -s value, use -s NAME=VALUE
Well, it does not appreciate that.
So, how can I launch several instances of the command at once, as the & operator allows, and also answer the prompt of each instance, as <<< allows?
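A minimal sketch of one way to do this (an untested suggestion, not a verified answer): skip eval entirely, which sidesteps the quoting problems, and note that the here-string redirection simply has to be attached to the command itself, before the & terminator:

for i in $(seq 1 100)
do
    numeroPortProxy=$((i + 30000))
    # "..." stands for the arguments elided in the question; the here-string
    # feeds the "o" answer to this instance's stdin, and the trailing &
    # still puts the whole command in the background
    scrapy crawl courses_PT ... -s HTTP_PROXY="http://127.0.0.1:${numeroPortProxy}" <<< "o" &
done
wait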
I am after a bash script which I can use to trigger a delta import of XML files via CRON. After a bit of digging and modification I have this:
#!/bin/bash
# Bash to initiate Solr Delta Import Handler
# Setup Variables
urlCmd='http://localhost:8080/solr/dataimport?command=delta-import&clean=false'
statusCmd='http://localhost:8080/solr/dataimport?command=status'
outputDir=.
# Operations
wget -O $outputDir/check_status_update_index.txt ${statusCmd}
2>/dev/null
status=`fgrep idle $outputDir/check_status_update_index.txt`
if [[ ${status} == *idle* ]]
then
wget -O $outputDir/status_update_index.txt ${urlCmd}
2>/dev/null
fi
Can I get any feedback on this? Is there a better way of doing it? Any optimisations or improvements would be most welcome.
This certainly looks usable. Just to confirm: you intend to run this every X minutes from your crontab? That seems reasonable.
The only major quibble (IMHO) is discarding STDERR information with 2>/dev/null. Of course, it depends on your expectations for this system. If this is for a paying customer or employer, do you want to have to explain to the boss, "gosh, I didn't know I was getting the error message 'Can't connect to host X' for the last 3 months, because we redirect STDERR to /dev/null"? If this is for your own project and you're monitoring the work via other channels, then it's not so terrible; but why not capture STDERR to a file, and then check that there are no errors? As a general idea:
myStdErrLog=/tmp/myProject/myProg.stderr.$(/bin/date +%Y%m%d.%H%M)
wget -O $outputDir/check_status_update_index.txt ${statusCmd} 2> ${myStdErrLog}
if [[ -s ${myStdErrLog} ]] ; then
    mail -s "error on myProg" me@myself.org < ${myStdErrLog}
fi
rm ${myStdErrLog}
Depending on what wget includes in its STDERR output, you may need to filter what is in the StdErrLog to see if there are "real" error messages that need to be sent to you.
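For example (a sketch; the pattern list is just a guess at wget's routine chatter and would need tuning against your actual output):

# keep only lines that don't look like wget's normal progress output
if grep -vE '^(--|Resolving |Connecting |HTTP request |Length:|Saving to:)' ${myStdErrLog} | grep -q . ; then
    mail -s "error on myProg" me@myself.org < ${myStdErrLog}
fi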
A medium quibble is your use of backticks for command substitution. If you're using double-square-brackets for evaluations, why not embrace the complete ksh93/bash semantics? The only reason to use backticks is if you think you need to be ultra-backwards compatible and will be running this script under the Bourne shell (or possibly one of the stripped-down shells like dash). Backticks have been deprecated in ksh since at least 1993. Try
status=$(fgrep idle $outputDir/check_status_update_index.txt)
The $( ... ) form of command substitution makes it very easy to nest multiple cmd-substitutions, i.e. echo $(echo one $(echo two) ). (Bad example, as the need to nest cmd-sub is pretty rare; I can't think of a better one right now.)
Depending on your situation, but in a large production environment, where new software is installed to version-numbered directories, you might want to construct your paths from variables, i.e.
hostName=localhost
portNum=8080
SOLRPATH=/solr
SOLRCMD='delta-import&clean=false'
urlCmd="http://${hostName}:${portNum}${SOLRPATH}/dataimport?command=${SOLRCMD}"
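With those variables in place, the call site stays the same, e.g.:

wget -O ${outputDir}/status_update_index.txt "${urlCmd}" 2> ${myStdErrLog}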
The final, minor quibble ;-). Are you sure ${status} == *idle* does what you want?
Try using something like
case "${status}" in
*idle* ) .... ;;
* ) echo "unknown status = ${status} or similar" 1>&2 ;;
esac
Yes, your if ... fi certainly works, but if you want to start doing more refined processing of the information that you put in your ${status} variable, then case ... esac is the way to go.
EDIT
I agree with @alinsoar that 2>/dev/null on a line by itself will be a no-op. I assumed that it was a formatting issue, but looking at your code in edit mode I see that it really is on its own line. If you want to discard STDERR messages, then you need cmd ... 2>/dev/null all on one line, OR, as alinsoar advocates, the shell will accept redirections at the front of the line, but again, all on one line ;-)
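That is, both of these forms work, and both keep the redirection on the same line as the command:

wget -O ${outputDir}/check_status_update_index.txt ${statusCmd} 2>/dev/null
2>/dev/null wget -O ${outputDir}/check_status_update_index.txt ${statusCmd}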
IHTH
My application periodically produces error_logger reports.
These are displayed in the Erlang shell, which makes for quite a lot of output.
This makes typing into the shell quite a nuisance.
What is the usual way of dealing with this given that:
I really want to see this output
I don't want it all over the input line I'm typing
One way to deal with this is to always have distribution on and connect with a second shell for user input, but this is extra effort when starting the application, which I do often during development.
I'd prefer some automatic, easily startable setup where all logging and SASL messages go to one place, and my input and return values stay undisturbed in another.
For reference, this is how I start my sessions:
#!/bin/sh
erl +W w -boot start_sasl -config myapp -s myapp -extra "$@"
The kernel docs (http://erlang.org/doc/man/kernel_app.html) describe how to set the application's environment variables to redirect error_logger printouts to a file or disable them completely. Something like this should work for you:
erl +W w -boot start_sasl -kernel error_logger '{file,"/tmp/log"}' -config myapp -s myapp -extra "$@"
There are also similar options you can use for the sasl printouts: http://erlang.org/doc/man/sasl_app.html
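For example, a sketch that also sends the SASL reports to a file (sasl_error_logger and its {file,FileName} value are documented on that page):

erl +W w -boot start_sasl -kernel error_logger '{file,"/tmp/log"}' \
    -sasl sasl_error_logger '{file,"/tmp/sasl.log"}' \
    -config myapp -s myapp -extra "$@"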
I'm trying to write a small command launcher application, and would like to use bash's tab completions in my own completion system. I've been able to get a list of completions for general commands using compgen -abck.
However, I would also like to get completions for specific commands: for instance, the input git p should display completion for git's commands.
Is there any way I can use compgen to do this? If not, are there any other ways I can get a list of completions programmatically?
[EDIT: To clarify, I'm not trying to provide completion to bash - my app is a GUI command launcher. I'd simply like to use bash's existing completions in my own app.]
I don't really know how it works, but the awesome window manager uses the following Lua code for getting access to bash completion's result:
https://github.com/awesomeWM/awesome/blob/master/lib/awful/completion.lua#L119
Via complete -p we find complete -o bashdefault -o default -o nospace -F _git git. We remember "_git" for later.
The length of "git l" is 5, so we set COMP_COUNT=6. We are completing the first argument to "git", so COMP_CWORD=1.
All together we use the following script:
__print_completions() {
    printf '%s\n' "${COMPREPLY[@]}"
}
# load bash-completion functions
source /etc/bash_completion
# load git's completion function
_completion_loader git
COMP_WORDS=(git l)
COMP_LINE='git l'
COMP_POINT=5
COMP_CWORD=1
_git
__print_completions
Output: "log"
Check in the /etc/bash_completion.d/ directory. This is where the different command completion scripts stay.
Quite an old question, but in the meantime I've implemented a script that handles this, in order to reuse bash completions from zsh.
Here is a simple but working example in bash:
function foo_completion()
{
    local currentWord=${COMP_WORDS[COMP_CWORD]}
    local completionList=""
    case "${COMP_CWORD}" in
        "1")
            completionList="command1 command2 command3"
            ;;
        "2")
            completionList="param1 param2 param3"
            ;;
    esac
    COMPREPLY=( $( compgen -W "${completionList}" -- "${currentWord}" ) )
}
complete -F foo_completion foo
With this kind of code, you will get commandN completed when you type "foo c" + Tab, and paramN completed when you type "foo command1 p" + Tab.
You can compute the completion list from the command's help output, as sketched below.
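For instance, something like this (purely hypothetical: it assumes foo help prints one subcommand per line in its first column, which you would have to adapt to the real command's output):

# derive the completion word list from the command's own help text
completionList=$(foo help 2>/dev/null | awk '{ print $1 }')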
my2c