How to use openssh-client in CyberArk environments with autocompletion and multiple servers?

I usually have what I need in my ~/.ssh/ folder (a config file and more) to connect to servers with ssh <tab><tab>. In an environment with CyberArk the configuration seems to be a bit more intricate due to the three # signs.
I found this answer, but I struggled to find a way to get autocompletion for many hosts: the User field does not support tokens like %h for the host, so I'd have to create the same entry again for every server, whereas previously I just added servers to the Host line. Is there a way this can be achieved?

After spending some time on it I came up with the following solution, which is more of a workaround. I'm not really proud of it, but it gets the job done with the least amount of new or hard-to-understand code.
Create a wrapper script like this:
$ cat ~/bin/ssh-wrapper.sh
#!/bin/bash
# https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/latest/en/Content/PASIMP/PSSO-PMSP.htm#The
# Replace where appropriate:
# $1 = Server FQDN
# prefix = your administrative user may be different from your normal user
# internal.example.org = domain server (controller)
# pam.example.org = Cyberark jump host
ssh -t "${USERNAME,,}#prefix${USERNAME,,}#internal.example.org#$1@pam.example.org"
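For example, invoking the wrapper with a target FQDN (host names and user invented for illustration) builds the full CyberArk connection string:
$ ~/bin/ssh-wrapper.sh server01.internal.example.org
# with USERNAME=JDoe, this effectively runs:
# ssh -t jdoe#prefixjdoe#internal.example.org#server01.internal.example.org@pam.example.org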
Add the following to your bash startup file. Yours may be different from mine, because I'm hacking here in a customer environment with Tortoise Git-Bash. (Which works nicely, by the way, when you use it with Flux Terminal, k9s and jq.)
Create an alias for your wrapper script; I chose sshw here.
Create a variable with all the FQDNs of the servers you want in your autocompletion, or create a file which contains these FQDNs and read it into a variable.
Create a bash completion expression which applies this to your sshw alias.
$ cat ~/.bash_profile
alias sshw="$HOME/bin/ssh-wrapper.sh"
SSH_COMPLETE=$(cat "$HOME/.ssh/known_hosts_wrapper")
complete -o default -W "${SSH_COMPLETE[*]}" sshw
Now you can tab your way to servers.
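For illustration, the host list file is simply one FQDN per line (names invented), and completion then matches any prefix:
$ cat ~/.ssh/known_hosts_wrapper
app01.internal.example.org
app02.internal.example.org
db01.internal.example.org
$ sshw app<tab><tab>
app01.internal.example.org   app02.internal.example.org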

Related

Where is the list of users returned from bash completion sourced from?

For example, typing compgen -u on an Ubuntu box returns a list of users. This includes more users than are listed in /etc/passwd. So the question is: when using bash completion to list users, where does the list come from?
/etc/passwd is the default source of user account information on UNIX-like systems, but it's often supplemented by other sources, particularly on machines that are part of some larger organization that needs to keep that information consistent.
NIS (formerly known as "YP") is one common system. LDAP is another. I'm sure there are others.
The getent passwd command should show you all the relevant account information. On my machine (which doesn't use NIS or LDAP), it's equivalent to cat /etc/passwd; on yours, it will probably show additional information.
The various getpw*() functions (getpwuid(), getpwnam(), getpwent()) retrieve user account information, equivalent to what's in /etc/passwd plus whatever supplement your system uses. Presumably both the getent command and bash use this mechanism to obtain the relevant information.
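A quick way to check this yourself is to compare the two listings (a minimal sketch; the temporary file paths are arbitrary):
$ compgen -u | sort > /tmp/users_compgen
$ getent passwd | cut -d: -f1 | sort > /tmp/users_getent
$ diff /tmp/users_compgen /tmp/users_getent && echo "identical"
If diff prints nothing, bash completion is drawing on the same account database that getent queries.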
You can run strace -o compgen.out bash -c 'compgen -u' and then look at the compgen.out file to try to find out what it is using.
On my machine that file ends with an open("/etc/passwd", O_RDONLY) followed by the writing of the output.

What does $VARIABLE$ stand for

..in the following shell script?
$USER1$=/usr/lib/nagios/plugins
As far as I know, variables are defined like this:
export USER1=/usr/lib/nagios/plugins
Source:
Ok, the command works. Now I have to implement it into Nagios. Because all my "local" commands not installed by the package manager shall be in /usr/lib/nagios/plugins_local, I define a $USER2$ variable for this path:
# vim resource.cfg
...
# Sets $USER1$ to be the path to the plugins
$USER1$=/usr/lib/nagios/plugins
# my own check-commands live here:
$USER2$=/usr/lib/nagios/plugins_local
$USERn$ (more specifically, $USER1$ to $USER255$) is the way to declare a user-defined macro in Nagios.
See also "Understanding Macros and how they work."
More specifically, and more interestingly, this is a good way to hide usernames/passwords needed within database/HTTP checks, for instance.
This means that you can use something like the following directly in your configuration files without fear of committing or backing up usernames/passwords.
./nrpe -c check_http -H $IP -a $USER1$:$USER2$ -u $LINK
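To make the credential hiding concrete, a sketch along these lines could work (macro numbers, plugin and values are invented for illustration):
# resource.cfg -- kept out of version control, readable only by the nagios user
$USER3$=dbuser
$USER4$=s3cr3t
# commands.cfg -- safe to commit, since it only references the macros
define command {
    command_name check_mysql_login
    command_line $USER1$/check_mysql -H $HOSTADDRESS$ -u $USER3$ -p $USER4$
}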
An aside: unfortunately, Nagios only supports up to 32 of the USER variables.

linux bash script: set date/time variable to auto-update (for inclusion in file names)

Essentially, I have a standard format for file naming conventions. It breaks down to this:
target_dateUTC_timeUTC_tool
So, for instance, if I run tcpdump on a target of 'foo', then the file would be foo_dateUTC_timeUTC_tcpdump. Simple enough, but a pain for everyone to constantly (and consistently) enter... so I've tried to create a bash script which sets system variables like so:
FILENAME=$TARGET\_$UTCTIME\_$TOOL
Then, I can just call the variable at runtime, like so:
tcpdump -w $FILENAME.lpc
All of this works like a champ. I've got a menu-driven .sh which gives the user the option of viewing the current variables as well as setting them... file generation is a breeze. Unfortunately, by setting the date/time variable, it is locked to the value at the time of creation (naturally). I set the variable like so:
UTCTIME=$(/bin/date --utc +"%Y%m%d_%H%M%Z")
What I really need is either a way to create a variable which updates at runtime, or (more likely) another way to skin this cat.
While scouring for solutions, I came across similar issues... like this one.
But, to be honest, I'm stumped on how to marry the two approaches and create a simple, distributable solution.
The .sh file is posted via pastebin, here.
Use a function:
generate_filename() { echo "${1}_$(/bin/date --utc +"%Y%m%d_%H%M%Z")_$2"; }
And use it like this:
tcpdump -w "$(generate_filename foo tcpdump).lpc"
It's hard to get the function to determine the calling command name automatically. You can use bash history to get it and save a couple of characters of typing:
tcpdump -w "$(generate_filename foo !#:0).lpc"

Ruby, Unicorn, and environment variables

While playing with Heroku, I found their approach of using environment variables for server-local configuration brilliant. Now, while setting up an application server of my own, I find myself wondering how hard that would be to replicate.
I'm deploying a sinatra application, riding Unicorn and Nginx. I know nginx doesn't like to play with the environment, so that one's out. I can probably put the vars somewhere in the unicorn config file, but since that's under version control with the rest of the app, it sort of defeats the purpose of having the configuration sit in the server environment. There is no reason not to keep my app-specific configuration files together with the rest of the app, as far as I'm concerned.
The third, and last (to my knowledge), option is setting them in the spawning shell. That's where I got lost. I know that login and non-login shells use different rc files, and I'm not sure whether calling something with sudo -u http stuff spawns a login shell or not. I did some homework, and asked google and man, but I'm still not entirely sure how to approach it. Maybe I'm just being dumb... either way, I'd really appreciate it if someone could shed some light on the whole shell environment deal.
I think your third possibility is on the right track. What you're missing is the idea of a wrapper script, whose only function is to set the environment and then call the main program with whatever options are required.
To make a wrapper script that can also function as a control script (if prodEnv, use DB=ProdDB, etc.), there is one more piece that simplifies this problem. Bash/ksh both support a feature called sourcing files. This is an operation that the shell provides to open a file and execute what is in it, just as if it were in-lined in the main script, like #include in C and other languages.
ksh and bash will automatically source /etc/profile, /var/etc/profile.local (sometimes), and $HOME/.profile. There are other filenames that will also get picked up, but in this case you'll need to make your own env file and then explicitly load it.
As we're talking about wrapper-scripts, and you want to manage how your environment gets set up, you'll want to do the sourcing inside the wrapper script.
How do you source an environment file?
envFile=/path/to/my/envFile
. $envFile
where envFile will be filled with statements like
dbServer=DevDBServer
webServer=QAWebServer
....
You may discover that you need to export these variables for them to be visible:
export dbServer webServer
An alternative combined assignment/export is also supported:
export dbServer=DevDBServer
export webServer=QAWebServer
Depending on how non-identical your different environments are, you can have your wrapper script figure out which environment file to load.
case $( /bin/hostname ) in
    prodServerName )
        envFile=/path/2/prod/envFile ;;
    QAServerName )
        envFile=/path/2/qa/envFile ;;
    devServerName )
        envFile=/path/2/dev/envFile ;;
esac
. ${envFile}
# Now call your program
myProgram -v -f inFile -o outFile ......
As you develop more and more scripts in your data processing environment, you can always source your envFile at the top. When you eventually change the physical location of a server (or its name), you have only one place where you need to make the change.
IHTH
There are also a couple of gems dealing with this. figaro works both with and without Heroku; it uses a YAML file (in config/ and git-ignored) to keep track of variables. Another option is dotenv, which reads variables from an .env file. And there is another article covering all these options.
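For reference, a dotenv .env file is just key=value lines (variable names invented for illustration):
# .env -- keep it git-ignored
DB_SERVER=DevDBServer
WEB_SERVER=QAWebServer
dotenv loads these into ENV when the app boots, so code can read ENV['DB_SERVER'] exactly as it would on Heroku.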
To spawn a login shell (rather than a plain non-login shell) you need to invoke sudo like this:
sudo -i -u <user> <command>
Also you may use -E to preserve the environment. This will allow some variables to be passed from your current environment to the command invoked with sudo.
I solved a similar problem by explicitly telling Unicorn to read a variables file as part of startup in its init.d script. First I created a file in a directory above the application root called variables. In this script I call export on all my environment variables, e.g. export VAR=value. Then I defined a variable GET_VARS=source /path/to/variables in the /etc/init.d/unicorn file. Finally, I modified the start option to read su - $USER -c "$GET_VARS && $CMD" where $CMD is the startup command and $USER is the app user. Thus, the variables defined in the file are exported into the shell of Unicorn's app user on startup. Note that I used an init.d script almost identical to the one from this article.
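Pieced together, the relevant fragments look roughly like this (paths, user name and command are illustrative, not the original script):
$ cat /path/to/variables
export RACK_ENV=production
export DB_SERVER=ProdDBServer
# excerpt from /etc/init.d/unicorn
USER=app
CMD="/usr/local/bin/unicorn -c /srv/app/config/unicorn.rb -E production -D"
GET_VARS="source /path/to/variables"
# the start option: load the exports, then launch the server, as the app user
su - $USER -c "$GET_VARS && $CMD"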

Passing parameters to a SSH client to execute a ForceCommand with parameters

I'm having trouble passing command parameters remotely to a "ForceCommand" program in ssh.
In my remote server I have this configuration in sshd_config:
Match User user_1
ForceCommand /home/user_1/user_1_shell
The user_1_shell program limits the commands the user can execute, in this case, I only allow the user to execute "set_new_mode param1 param2". Any other commands will be ignored.
So I expect that when a client logs in via ssh such as this one:
ssh user_1@remotehost "set_new_mode param1 param2"
The user_1_shell program seems to be executed, but the parameter string doesn't seem to be passed.
Maybe, I should be asking, does ForceCommand actually support this?
If yes, any suggestions on how I could make it work?
Thanks.
I found the answer: the remote server captures the parameter string and saves it in the "$SSH_ORIGINAL_COMMAND" environment variable.
As already answered, the command line sent from the ssh client is put into the SSH_ORIGINAL_COMMAND environment variable; only the ForcedCommand is executed.
If you use the information in SSH_ORIGINAL_COMMAND in your ForcedCommand, you must take care of the security implications. An attacker can augment your command with arbitrary additional commands by sending, e.g., ; rm -rf / at the end of the command line.
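For instance, a ForcedCommand in the spirit of user_1_shell could validate $SSH_ORIGINAL_COMMAND before running anything (a minimal sketch; the path to set_new_mode is invented):
#!/bin/bash
# Accept exactly: set_new_mode <alnum> <alnum>
re='^set_new_mode ([[:alnum:]]+) ([[:alnum:]]+)$'
if [[ $SSH_ORIGINAL_COMMAND =~ $re ]]; then
    # Arguments are strictly alphanumeric, so an appended
    # "; rm -rf /" can never match the pattern.
    exec /usr/local/bin/set_new_mode "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}"
else
    echo "command not allowed" >&2
    exit 1
fi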
This article shows a generic script which can be used to lock down allowed parameters. It also contains links to relevant information.
The described method (the 'only' script) works as follows:
Make the 'only' script the ForcedCommand, and give it the allowed command as its parameter. Note that more than one allowed command may be used.
Put a .onlyrules file into the home directory of user_1 and fill it with rules (regular expressions) which are matched against the command line sent by the ssh client.
Your example would look like:
Match User user_1
ForceCommand /usr/local/bin/only /home/user_1/user_1_shell
and if, for example, you want to allow as parameters only 'set_new_mode' with exactly two arbitrary alphanumeric parameters, the .onlyrules file would look like this:
\:^/home/user_1/user_1_shell set_new_mode [[:alnum:]]\{1,\} [[:alnum:]]\{1,\}$:{p;q}
Note that for sending the command to the server you must use the whole command line:
/home/user_1/user_1_shell set_new_mode param1 param2
'only' looks up the command on the server and uses its name for matching the rules. If any of these checks fail, the command is not run.
[Disclosure: I wrote sshdo which is described below]
There's a program called sshdo for doing this. It controls which commands may be executed via incoming ssh connections. It's available for download at:
http://raf.org/sshdo/ (read manual pages here)
https://github.com/raforg/sshdo/
It has a learning mode to allow all commands that are attempted, and a --learn option to produce the configuration needed to allow learned commands permanently. Then learning mode can be turned off and any other commands will not be executed.
It also has an --unlearn option to stop allowing commands that are no longer in use so as to maintain strict least privilege as requirements change over time.
It can also be configured manually.
It is very fussy about what it allows. It won't allow a command with arbitrary arguments; only complete shell commands can be allowed. But it does support simple patterns to represent similar commands that vary only in the digits that appear on the command line (e.g. sequence numbers or date/time stamps).
It's like a firewall or whitelisting control for ssh commands.
