What does $VARIABLE$ stand for - bash

...in the following shell script?
$USER1$=/usr/lib/nagios/plugins
As far as I know, variables are defined like this:
export USER1=/usr/lib/nagios/plugins
Source:
Ok, the command works. Now I have to implement it into Nagios. Because
all my "local" commands not installed by the package manager shall live
in /usr/lib/nagios/plugins_local, I define a $USER2$ variable for this
path:
# vim resource.cfg
...
# Sets $USER1$ to be the path to the plugins
$USER1$=/usr/lib/nagios/plugins
# my own check-commands live here:
$USER2$=/usr/lib/nagios/plugins_local

$USERn$ (more specifically, $USER1$ to $USER255$) is the way to declare a user-defined macro in Nagios.
See also "Understanding Macros and how they work."

More interestingly, this is a good way to hide usernames/passwords needed within database/HTTP checks, for instance.
This means you can write something like the following directly in your configuration files, and thus you do not need to fear that you are committing or backing up usernames/passwords:
./nrpe -c check_http -H $IP -a $USER1$:$USER2$ -u $LINK
An aside: unfortunately, Nagios only supports up to 32 of the $USERn$ variables.
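In practice the two halves look something like this (a sketch, not from the original post: the macro numbers, credential values, and check command are illustrative). resource.cfg can be made readable by the nagios user only, while the command definition that lives in version control never contains the secret itself:

```cfg
# resource.cfg -- chmod 600, kept out of version control
$USER3$=dbuser
$USER4$=s3cr3t

# commands.cfg -- safe to commit; credentials appear only as $USERn$ macros
define command {
    command_name  check_mysql_auth
    command_line  $USER1$/check_mysql -H $HOSTADDRESS$ -u $USER3$ -p $USER4$
}
```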

How to use openssh-client in Cyberark environments with autocompletion and multiple servers?

I usually have what I need in my ~/.ssh/ folder (a config file and more) to connect to servers with ssh <tab><tab>. In an environment with Cyberark the configuration seems to be a bit more intricate due to the three # signs in the connection string.
I found this answer, but I struggled to find a way to enjoy autocompletion for many hosts because the User field does not support tokens like %h for host, so I'd have to create the same entry again for every server where I previously just added servers to the Host line. Is there a way this can be achieved?
After spending some time I came up with the following solution, which is more of a workaround. I'm not really proud of it, but it gets the job done with the least amount of new or difficult-to-understand code.
Create a wrapper script like this:
$ cat ~/bin/ssh-wrapper.sh
#!/bin/bash
# https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/latest/en/Content/PASIMP/PSSO-PMSP.htm#The
# Replace where appropriate:
# $1 = Server FQDN
# prefix = your administrative user may be different from your normal user
# internal.example.org = domain server (controller)
# pam.example.org = Cyberark jump host
ssh -t ${USERNAME,,}#prefix${USERNAME,,}#internal.example.org#$1#pam.example.org
Add the following to your bash startup file. Yours may be different from mine, because I'm hacking here in a customer environment with Tortoise Git-Bash. (Which works nicely, by the way, when you use it with Flux Terminal, k9s and jq.)
Create an alias for your wrapper script; I chose sshw here.
Create a variable with all the FQDNs of the servers you want in your autocompletion, or create a file which contains these FQDNs and read it into a variable.
Create a bash completion expression which applies this to your sshw alias.
$ cat ~/.bash_profile
alias sshw="$HOME/bin/ssh-wrapper.sh"
SSH_COMPLETE=$(cat "$HOME/.ssh/known_hosts_wrapper")
complete -o default -W "${SSH_COMPLETE[*]}" sshw
Now you can tab your way to servers.
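The setup above assumes ~/.ssh/known_hosts_wrapper already exists. One way to seed it is from your existing known_hosts (a sketch, not part of the original answer; it assumes the usual "host[,host2] keytype key" layout and unhashed host names):

```shell
# Collect unique host names from known_hosts into the wrapper list.
# Entries hashed with HashKnownHosts cannot be recovered this way.
kh="$HOME/.ssh/known_hosts"
if [ -f "$kh" ]; then
    awk '{print $1}' "$kh" | tr ',' '\n' | sort -u > "$HOME/.ssh/known_hosts_wrapper"
fi
```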

Source for fancy_echo usage and documentation

I was looking for a starting point for an install script that would allow me to automatically provision a new or re-imaged computer, and came across one by thoughtbot. However, it makes frequent use of a command that I'm not familiar with, fancy_echo.
fancy_echo() {
  local fmt="$1"; shift
  # shellcheck disable=SC2059
  printf "\n$fmt\n" "$@"
}
It's also used in this script by dockyard:
fancy_echo "This script will setup your laptop"
fancy_echo "If you want to reuse your old SSH key, copy your SSH config over before running this script"
fancy_echo "During installation, it will ask for your sudo password a few times"
Since this is run in the command line and looks like bash, I've tried the man pages, the standalone GNU info system and the --help option, all with no luck. I presume it prints to the screen but I don't know, so I'm asking. Here are my questions:
What is fancy_echo and how does it differ from echo?
What is a good source of documentation on it?
It's not a standard command and there's no documentation; it's just the function whose definition you copied, which is internal to that script.
fancy_echo format_string arg arg arg....
is equivalent to
printf format_string arg arg arg....
except that it adds newlines around the output.
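Concretely, with the thoughtbot definition quoted in the question (using "$@" so the remaining arguments are passed through to printf):

```shell
fancy_echo() {
  local fmt="$1"; shift
  # shellcheck disable=SC2059
  printf "\n$fmt\n" "$@"
}

fancy_echo "Installing %s ..." "git"
# prints a blank line, "Installing git ...", and a trailing newline
```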

How do I use SSFT (Shell Scripts Frontend Tool) on Ubuntu (or any Linux)?

I can't find a man page or any help for ssft. I want to use it in my bash scripts to select either kdialog (if on KDE) or zenity (if on gnome).
See Shell Scripts Frontend Tool
Surely the help pages are somewhere, but I must be overlooking them.
I am running Debian 6.0 Squeeze stable right now, and it has a manpage for ssft.sh. Try man ssft.sh. If that doesn't do what you want, let me know and you and I will figure out what does.
Update: All right. You have tried the manpage, which doesn't tell you what you want to know. There does not appear to exist any more thorough documentation for Ssft (maybe, when this is all over, you will write and contribute that very documentation). However, in Ssft's source appears to be a test script that makes the software do the various things it is designed to do. Sometimes, a good example is even better than a manual. That test script may be just what you need.
To extract the test script, issue a sequence of commands like the following sequence.
$ cd /tmp
$ apt-get source ssft
$ ls
$ cd ssft-0.9.13 # (Your version number may differ from 0.9.13.)
$ ls
$ cd tests
$ ls
When I do the above, the last ls listing reveals a shell script named ssft-test.sh. Inside that script appear to be several examples of how to use ssft.sh correctly.
http://man.devl.cz/man/1/ssft.sh
ssft.sh(1)
SSFT
Name
ssft.sh - library of shell script frontend functions
Synopsis
. ssft.sh
Description
ssft.sh is a library of shell functions that must be sourced from other scripts. If the script is executed without arguments it prints a usage message; it also supports the options --doc, --help and --version.
To get a list of available functions call the script with the --doc argument and to get a description of what a given function does call the script with --doc FUNCTION_NAME.
In the typical case the library must be sourced and the SSFT_FRONTEND variable must be set to the desired frontend (zenity, dialog or text); if the variable is not set the default frontend is noninteractive.
To choose the theoretically best-looking frontend use the function ssft_choose_frontend as follows:
. ssft.sh
[ -n "$SSFT_FRONTEND" ] || SSFT_FRONTEND="$( ssft_choose_frontend )"
Written by Sergio Talens-Oliag.
$ /usr/bin/ssft.sh
Shell Script Frontend Tool (version 0.9.13)
Usage: . ssft.sh
When called directly the program supports the following options:
-d,--doc [FUNCTIONS] Prints the list of available functions. If function names are given prints functions' documentation.
-h,--help This message
-v,--version File version
functions:
$ /usr/bin/ssft.sh -d
ssft_set_textdomain
ssft_reset_textdomain
ssft_choose_frontend
ssft_print_text_title
ssft_display_message
ssft_display_error
ssft_display_emsg
ssft_file_selection
ssft_directory_selection
ssft_progress_bar
ssft_read_string
ssft_read_password
ssft_select_multiple
ssft_select_single
ssft_yesno
ssft_show_file

Ruby equivalent of .irbrc?

While irb utilizes .irbrc to automatically perform certain actions upon start, I have not been able to find how to do the same automatically for invocations of ruby itself. Any suggestions where the documentation for such can be found would be greatly appreciated.
For environments where I need this (essentially never) I've used the -r [filename] option, and the RUBYOPT environment variable.
(You may want to specify include directories, which can be done a variety of ways, including the -I [directory] option).
This is essentially the same answer as Phrogz's, but without the shell script. The scripts are a bit more versatile, since you can have any number of them for trivial pre-execution environment rigging.
Just as you can use ruby -rfoo to require library foo for that run, so you can specify to always require a particular library for every Ruby run:
if [ -f "$HOME/.ruby/lib/mine.rb" ]; then
  RUBYLIB="$HOME/.ruby/lib"
  RUBYOPT="-rmine"
  export RUBYLIB RUBYOPT
fi
Put your own custom code in a file (like mine.rb above) and get your interpreter to always add its directory to your $LOAD_PATH (aka $:) and always require it (which runs the code therein).
Shell code above and background information here:
http://tbaggery.com/2007/02/11/auto-loading-ruby-code.html

Ruby, Unicorn, and environment variables

While playing with Heroku, I found their approach of using environment variables for server-local configuration brilliant. Now, while setting up an application server of my own, I find myself wondering how hard that would be to replicate.
I'm deploying a sinatra application, riding Unicorn and Nginx. I know nginx doesn't like to play with the environment, so that one's out. I can probably put the vars somewhere in the unicorn config file, but since that's under version control with the rest of the app, it sort of defeats the purpose of having the configuration sit in the server environment. There is no reason not to keep my app-specific configuration files together with the rest of the app, as far as I'm concerned.
The third, and last (to my knowledge), option is setting them in the spawning shell. That's where I got lost. I know that login and non-login shells use different rc files, and I'm not sure whether calling something with sudo -u http stuff spawns a login shell or not. I did some homework, and asked google and man, but I'm still not entirely sure how to approach it. Maybe I'm just being dumb... either way, I'd really appreciate it if someone could shed some light on the whole shell-environment deal.
I think your third possibility is on the right track. What you're missing is the idea of a wrapper script, whose only function is to set the environment and then call the main program with whatever options are required.
To make a wrapper script that can function as a control script (if prodEnv then use DB=ProdDB, etc.), there is one more piece that simplifies this problem. Bash and ksh both support a feature called sourcing files. This is an operation that the shell provides to open a file and execute what is in it, just as if it were in-lined in the main script, like #include in C and other languages.
ksh and bash will automatically source /etc/profile, /var/etc/profile.local (sometimes), and $HOME/.profile. There are other filenames that will also get picked up, but in this case you'll need to make your own env file and then explicitly load it.
As we're talking about wrapper-scripts, and you want to manage how your environment gets set up, you'll want to do the sourcing inside the wrapper script.
How do you source an environment file?
envFile=/path/to/my/envFile
. $envFile
where envFile will be filled with statements like
dbServer=DevDBServer
webServer=QAWebServer
....
You may discover that you need to export these variables for them to be visible to child processes:
export dbServer webServer
An alternate assignment-plus-export form is also supported:
export dbServer=DevDBServer
export webServer=QAWebServer
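If your env file contains plain name=value lines and you'd rather not maintain an export list at all, the shell's allexport flag is another option (a standard sh/bash feature, not something from the original answer):

```shell
envFile=/path/to/my/envFile
set -a                        # auto-export every variable assigned from here on
[ -f "$envFile" ] && . "$envFile"
set +a                        # back to normal assignment behavior
```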
Depending on how non-identical your different environments are, you can have your wrapper script figure out which environment file to load.
case $( /bin/hostname ) in
prodServerName )
envFile=/path/2/prod/envFile ;;
QAServerName )
envFile=/path/2/qa/envFile ;;
devServerName )
envFile=/path/2/dev/envFile ;;
esac
. ${envFile}
#NOW call your program
myProgram -v -f inFile -o outFile ......
As you develop more and more scripts in your data-processing environment, you can always source your envFile at the top. When you eventually change the physical location of a server (or its name), you have only one place where you need to make the change.
IHTH
There are also a couple of gems dealing with this. figaro works both with and without Heroku; it uses a YAML file (in config/ and git-ignored) to keep track of variables. Another option is dotenv, which reads variables from a .env file. There is also another article covering all these options.
To spawn a login shell you need to invoke sudo like this:
sudo -i -u <user> <command>
Also, you may use -E to preserve the environment. This will allow some variables to be passed from your current environment to the command invoked with sudo.
I solved a similar problem by explicitly telling Unicorn to read a variables file as part of startup in its init.d script. First I created a file in a directory above the application root called variables. In this script I call export on all my environment variables, e.g. export VAR=value. Then I defined a variable GET_VARS=source /path/to/variables in the /etc/init.d/unicorn file. Finally, I modified the start option to read su - $USER -c "$GET_VARS && $CMD" where $CMD is the startup command and $USER is the app user. Thus, the variables defined in the file are exported into the shell of Unicorn's app user on startup. Note that I used an init.d script almost identical to the one from this article.