What is the best way to create a configuration file in bash? - bash

I use bash to configure many of my build tools.
I want to use variables to set certain things so that the build tools can be used in many environments.
Currently I do
export var=val
and then I do
$var
when I need to use it.
Is this the best way to go about it? I know there are many ways to do things in bash.
**Example**
#!/bin/bash
path_bash="$HOME/root/config/bash/"
source "${path_bash}_private.sh"
source "${path_bash}config.sh"
source "${path_bash}utility.sh"
source "${path_bash}workflow.sh"
source "${path_bash}net.sh"
source "${path_bash}makeHTM.sh"
#
#
#
# Divider - Commands
#
#
#
cd ~/root

Skip the export unless you really need it (that is, unless you need that variable to propagate to the separate, exec'ed processes that you run).
If you do export, it's usually a good idea to capitalize the variable (export VAR=val) to make it apparent that it will spread to executed binaries.
When you refer to a shell variable, you usually want to double quote it ("$var") unless you need glob expansion and whitespace-splitting (splitting on $IFS characters, to be exact) done on that variable expansion.
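A quick sketch of those three points together (the variable names here are made up for illustration):

```bash
#!/bin/bash
# Plain shell variable: visible in this script only.
build_dir="$HOME/my project/build"

# Exported and capitalized: it propagates to executed binaries.
export BUILD_MODE=release

# Double quotes keep the value as one word despite the space.
printf 'building in: %s\n' "$build_dir"

# A child process sees BUILD_MODE but not build_dir.
bash -c 'echo "mode=$BUILD_MODE dir=$build_dir"'
# → mode=release dir=
```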

The sparrow automation framework gives you several ways to configure bash scripts:
passing command line named parameters
configuration files in YAML format
configuration files in JSON format
configuration files in Config::General format
Of course, you need to pack your bash scripts into sparrow plugins first to get this behavior, but this is not a big deal. Follow https://github.com/melezhik/sparrow#configuring-tasks for details.
PS. Disclosure: I am the sparrow tool author.

Related

Simple configuration files template processor that preserves bash-style variables

I have to introduce some templating over text configuration files (YAML, XML, JSON) that already contain bash-like syntax variables. I need to preserve the existing bash-like variables untouched but substitute my own. The list is dynamic, and the variables should come from the environment. Something like a simple processor taking the "${{MY_VAR}}" pattern but ignoring $MY_VAR. Preferably pure Bash, or with as little extra tooling as possible.
The pattern could be $$(VAR) or anything else that can easily be separated from ${VAR} and $VAR. The key limitation: it is intended for a Docker container startup procedure that injects environment variables into the provided service configuration templates, building the configuration this way. So something like Java or even Perl processing is not an option.
Does anybody have a simple approach?
I was using the following bash processing for such variable substitution when the original files had no variables. But now I need something one step smarter.
# process input file ($1) placing output into ($2) with shell variable substitution
process_file() {
  set -e
  eval "cat <<EOF
$(<$1)
EOF
" > "$2"
}
An obvious, clean solution that is too heavy for a Dockerfile because of the number of packages needed:
perl -pe 's/\$\{\{([^}]+)\}\}/defined $ENV{$1} ? $ENV{$1} : $&/eg' < test.json
This substitutes the ${{VAR}} patterns, and, even better, only the ones that are set.
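For completeness, here is a pure-Bash sketch of such a processor (the function name render_template is my own, and it assumes bash 4.2+ for [[ -v ]]). It substitutes ${{VAR}} placeholders for set variables, and leaves unset placeholders as well as ordinary ${VAR} and $VAR untouched, reading stdin to stdout:

```bash
#!/bin/bash
# Sketch: substitute ${{VAR}} from the environment, leaving plain
# ${VAR} / $VAR and unset placeholders untouched.
render_template() {
  local line out match name prefix
  while IFS= read -r line || [[ -n $line ]]; do
    out=""
    # =~ finds the leftmost placeholder; consume the line left to right.
    while [[ $line =~ \$\{\{([A-Za-z_][A-Za-z_0-9]*)\}\} ]]; do
      match=${BASH_REMATCH[0]} name=${BASH_REMATCH[1]}
      prefix=${line%%"$match"*}     # text before the placeholder
      line=${line#*"$match"}        # text after it
      if [[ -v $name ]]; then
        out+=$prefix${!name}        # set variable: substitute its value
      else
        out+=$prefix$match          # unset: keep the placeholder verbatim
      fi
    done
    printf '%s\n' "$out$line"
  done
}

# Example:
MY_VAR=hello
printf '%s\n' 'path=${{MY_VAR}} keep=${OTHER} $PLAIN' | render_template
# → path=hello keep=${OTHER} $PLAIN
```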

Replace a variable every where in a shell script

I need to write a shell script that reads a .sh script, finds a particular variable (for example, Variable_Name="variable1") and extracts its value (variable1).
Then, wherever Variable_Name is used in another shell script, I need to replace it with that value (variable1).
A simple approach, to build on, might be:
assignment=$(echo 'Variable_Name="variable1"' | sed -r 's/Variable_Name=(.*)/\1/')
echo $assignment
"variable1"
Depending on the variable type, the value might be quoted or not, quoted with single or double quotes. That might be necessary (a string with or without blanks) or superfluous. Further code might follow the assignment:
pi=3.14;v=42;
or a comment:
user=janis # Janis Joplin
it might be complicated:
expr="foobar; O'Reilly " # trailing blank important
But only you know how complicated it might get. Maybe the simple case is already sufficient. If the new script looks similar, it might work, or it might not:
targetV=INSERT_HERE; secondV=23
# oops: secondV accidentally hidden:
targetV="foobar; O'Reilly " # trailing blank important; secondV=23
If the second script is under your control, you can prevent such problems easily. If source and target language are identical, what worked here should work there too.
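One way to sidestep the quoting problem entirely is to let the shell itself parse the assignment, by sourcing the defining script in a subshell. A sketch (the file names defs.sh and other.sh are my own examples):

```bash
#!/bin/bash
# Create two example scripts to work on.
cat > defs.sh <<'EOF'
Variable_Name="variable1"
EOF
cat > other.sh <<'EOF'
echo "using $Variable_Name here"
EOF

# Extract the value; the subshell keeps defs.sh from polluting this shell.
value=$(. ./defs.sh; printf '%s' "$Variable_Name")

# Replace every use of $Variable_Name in the second script.
sed "s/\$Variable_Name/$value/g" other.sh > other_resolved.sh
cat other_resolved.sh
# → echo "using variable1 here"
```

Note that sourcing executes everything in defs.sh, so this only makes sense when you trust that file; the sed step also assumes the value contains no characters special to sed.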

How to add arguments to $@ without splitting the space-containing arguments that are already there?

My goal is to encapsulate the svn command to provide additional features. Here is a minimal (not) working example of the function used to do that:
function custom_svn {
newargs="$@ my_additional_arg"
svn $newargs
}
But this does not work with arguments containing spaces. For instance when called like this:
message="my commit text"
custom_svn ci some_file.txt -m "$message"
SVN tries to commit some_file.txt, commit and text, with the message "my" instead of committing only some_file.txt with message "my commit text".
I guess the issue lies in the erroneous use of $@, but I'm not sure how to proceed to keep the message whole.
In standard sh, the positional arguments are the only sort of array you've got. You can append to them with set:
set -- "$@" my_additional_arg
svn "$@"
In bash, you can create your own custom array variables too:
newargs=("$@" my_additional_arg)
svn "${newargs[@]}"
Of course, as DigitalRoss answered, in your specific example you can avoid using a variable entirely, but I'll guess that your example is a bit oversimplified.
Do this:
svn "$@" my_additional_arg
The problem is that the construct "$@" is special only in that exact form.
It's interesting, this brings up the whole good-with-the-bad nature of shell programming. Because it's a macro processor, it's much easier to write simple things in bash than in full languages. But, it's harder to write complex things, because every time you try to go a level deeper in abstraction you need to change your code to properly expand the macros in the new level of evaluation.
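To see the difference concretely, here is the corrected wrapper with printf standing in for svn (a sketch, so the argument boundaries are visible without actually running svn):

```bash
#!/bin/bash
custom_svn() {
  # "$@" expands to the original arguments, each one kept as its own word.
  set -- "$@" my_additional_arg
  printf '[%s]' "$@"
  echo
}

message="my commit text"
custom_svn ci some_file.txt -m "$message"
# → [ci][some_file.txt][-m][my commit text][my_additional_arg]
```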

How to declare variable in a script that will belong to the caller

As the title says, I have many variables, around 200, and I'd like to have a script containing just the declarations, separate from my bash script that executes code. Would it be possible for the executing script to just call a script that creates the variables and then exits?
EDIT: The proposed duplicate describes the same situation, but the people who answered here gave really good additional details.
When the declarations are in /usr/local/lib/myvars, start your script (after the SHEBANG line) with sourcing that file using the dot notation:
. /usr/local/lib/myvars
When you have so many vars, you probably also have a lot of code and some general functions. Put those in one file and include that one:
. /usr/local/lib/my_utils
And now you might be wondering: two includes in every script file? No, you can source the myvars file from the my_utils file.
Be aware you are introducing global variables, they can be changed everywhere.
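A minimal sketch of that layout, using temp files in place of /usr/local/lib/myvars and /usr/local/lib/my_utils:

```bash
#!/bin/bash
dir=$(mktemp -d)

cat > "$dir/myvars" <<'EOF'
GREETING=hello
EOF

# my_utils sources myvars itself, so scripts need only one include.
cat > "$dir/my_utils" <<EOF
. "$dir/myvars"
say() { printf '%s\n' "\$GREETING"; }
EOF

. "$dir/my_utils"   # one dot-include pulls in both files
say
# → hello
rm -rf "$dir"
```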
You can export the variables to be available externally and source that file.
Example. Contents of variables.sh
export VARIABLE1=Value1
export VARIABLE2=Value2
.
.
export VARIABLE200=Value200
Contents of main script:
#!/bin/bash
source /Pathtosourcefile/variables.sh
echo $VARIABLE1
This would print out:
Value1

How to embed shell snippets in doxygen documentation

When installing my package, the user should at some point type
./wand-new "`cat wandcfg_install.spell`"
Or whatever the configuration file is called. If I put this line inside \code ... \endcode, doxygen thinks it is C++ or... anyway, the word "new" is treated as a keyword. How do I avoid this in a semantically correct way?
I think \verbatim is disqualified because it actually is code, right?
(I guess the answer is to poke Dimitri to add support for more languages inside a code block, like the LaTeX listings package, or at least to add a disableparse option to \code in the meantime.)
Doxygen, as of July 2017, does not officially support documenting the shell/Bash scripting language, not even as an extension. There is an unofficial filter called bash-doxygen. It is simple to set up: a single file to download and three Doxyfile adjustments:
Edit the Doxyfile to map shell files to C parser: EXTENSION_MAPPING = sh=C
Set your shell script file name pattern as a Doxygen input,
e.g.: FILE_PATTERNS = *.sh
Mention doxygen-bash.sed in either the INPUT_FILTER or the FILTER_PATTERNS directive of your Doxyfile. If doxygen-bash.sed is in your $PATH, you can invoke it as is; otherwise use sed -n -f /path/to/doxygen-bash.sed --.
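Put together, the three settings above look like this in a Doxyfile (a sketch; the sed path is an example):

```
# Map .sh files to the C parser and filter them through bash-doxygen
EXTENSION_MAPPING = sh=C
FILE_PATTERNS     = *.sh
INPUT_FILTER      = "sed -n -f /path/to/doxygen-bash.sed --"
```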
Please note that since it uses C-language parsing, some limitations apply, as stated in the main README page of bash-doxygen. One of them, at least in my tests: \code {.sh} blocks recognise shell syntax, but every line in the code block begins with an asterisk (*), apparently a side effect of requiring that all Doxygen doc sections have lines starting with double hashes (##).
