Hiding secret from command line parameter on Unix - bash

I have a script that, inside itself, launches a command with a secret as a parameter. For example:
#!/bin/bash
command-name secret
While the command is running, anyone can read the secret via ps -ef | grep command-name.
Is there any way to hide the secret so that the command-line parameter is obfuscated in ps -ef?

First, you can NOT hide command line arguments. They will still be visible to other users via ps aux and cat /proc/$YOUR_PROCESS_PID/cmdline at the time the program is launched (before it has a chance to make run-time changes to its arguments). The good news is that you can still keep a secret by using alternatives:
Use standard input:
mySecret='hello-neo' printenv mySecret | myCommand
Use a dedicated file if you want to keep the secret detached from the main script (note that it's recommended to use full disk encryption and to make sure the file has restrictive chmod permissions):
cat /my/secret | myCommand
Use environment variables (with caveats). If your program can read them, do this:
mySecret='hello-neo' myCommand
Use a temporary file descriptor:
myCommand <( mySecret='hello-neo' printenv mySecret )
In the last case your program will be launched like myCommand /dev/fd/67, where the contents of /dev/fd/67 is your secret (hello-neo in this example).
In all of the above approaches, be wary of leaving the command in the bash history (~/.bash_history). You can avoid this either by running the command from a script (file), or by interactively prompting yourself for the password each time:
read -s secret
s=$secret printenv s | myCommand # stdin approach
myCommand <( s=$secret printenv s ) # file-descriptor approach
secret=$secret myCommand # environment-variable approach
export secret && myCommand # another variation of the environment-variable approach

If the secret doesn't change between executions, use a special configuration file, ".appsecrets". Set the permissions of the file so that it is readable only by its owner. Inside the file, set an environment variable to the secret. The file needs to be in the home directory of the user running the command.
#!/bin/bash
#filename: .appsecrets
export SECRET=polkalover
Load the config file so the environment variable gets set.
. ~/.appsecrets
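For instance, a minimal wrapper sketch (myCommand is a placeholder, and the assumption is that it can read the secret on stdin):
#!/bin/bash
. ~/.appsecrets                       # defines SECRET
printf '%s\n' "$SECRET" | myCommand   # the secret never appears in argv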
What I've seen done:
1)
echo $SECRET | command
works if the command prompts for the password on stdin AND if 'echo' is a builtin of your shell. We were using the Korn shell.
2)
$password = $ENV{"SECRET"};
works if you have control of the code (e.g. in perl or C++)
3)
. ./.app.config #sets the environment variables
isql -host [host] -user [user] -password <<SECRET
${SQLPASSWORD}
SECRET
works if the command can accept the secret on stdin. One limitation is that the <<string has to be the last argument given to the command, which might be troublesome if there is a non-optional argument that has to appear after -password.
The benefit of this approach is that you can arrange for the secret to be hidden in production. Use the same filename in production, but put it in the home directory of the account that runs the command there. You can then lock down access to the secret like you would access to the root account: only certain people can 'su' to the prod account to view or maintain the secret, while developers can still run the program because they use their own '.appsecrets' file in their home directory.
You can use this approach to store secured information for any number of applications, as long as they use different environment variable names for their secrets.
(WRONG WAY)
One old method I saw DBAs use was to set SYBASE to "/opt/././././././././././././././././././././././././././././././././././sybase/bin", so their command lines were so long that ps truncated them. But on Linux you can still sniff out the full command line from /proc.
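Indeed, on Linux the full command line is available regardless of ps truncation; the arguments in /proc are NUL-separated, so (with $PID standing in for the target process id):
tr '\0' ' ' < /proc/$PID/cmdline; echo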

I saw this in another post. It is the easiest way under Linux: it modifies the part of memory holding the command line that all other programs see.
strncpy(argv[1], "randomtrash", strlen(argv[1]));
You can also change the name of the process, but only as it appears on the command line; programs like top will still show the real process name:
strncpy(argv[0], "New process name", strlen(argv[0]));
Don't forget to copy at most strlen(argv[0]) bytes, because there's probably no more space allocated.
I think the arguments can only be found in the portion of memory that we modify, so I think this works like a charm. If someone knows something more precise about this, please comment.
VasyaNovikov note: the password can still be intercepted after the program has been invoked but before it starts making the changes described above.
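Putting the fragments together, here is a minimal self-contained sketch in C. It uses memset instead of strncpy so that a secret longer than the replacement text is still fully overwritten, and it keeps a private copy of the secret before scrubbing it; treat it as an illustration, not a portable guarantee:
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
    char *secret = NULL;
    if (argc > 1) {
        secret = strdup(argv[1]);               /* keep a private copy first */
        memset(argv[1], 'x', strlen(argv[1]));  /* then scrub what ps sees   */
    }
    /* ... use secret here ... */
    pause();  /* keep the process alive so you can inspect it with ps */
    return 0;
}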

The only way to conceal your secret argument from ps is not to provide the secret as an argument. One way of doing that is to place the secret in a file, and to redirect file descriptor 3 to read the file, and then remove the file:
echo secret > x.$$
command 3<x.$$
rm -f x.$$
It isn't entirely clear that this is a safe way to save the secret; the echo command is a shell built-in, so it shouldn't appear in the 'ps' output (and any appearance would be fleeting). Once upon a very long time ago, echo was not a built-in - indeed, on MacOS X, there is still a /bin/echo even though it is a built-in to all shells.
Of course, this assumes you have the source to command and can modify it to read the secret from a pre-opened file descriptor instead of from the command line argument. If you can't modify the command, you are completely stuck - the 'ps' listing will show the information.
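If the command happens to be a shell script you control, a minimal sketch of the reading side (assuming descriptor 3, as set up above) could be:
#!/bin/bash
IFS= read -r secret <&3    # read the secret from fd 3 instead of argv
# ... use "$secret" ...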
Another trick you could pull if you're the command owner: you could capture the argument (secret), write it to a pipe or file (which is immediately unlinked) for yourself, and then re-exec the command without the secret argument; the second invocation knows that since the secret is absent, it should look wherever the first invocation hid the secret. The second invocation (minus secret) is what appears in the 'ps' output after the minuscule interval it takes to deal with hiding the secret. Not as good as having the secret channel set up from the beginning. But these are indicative of the lengths to which you have to go.
Zapping an argument from inside the program - overwriting with zeroes, for example - does not hide the argument from 'ps'.

The expect library was created partly for this kind of thing, so you can still provide a password or other sensitive information to a process without having to pass it as an argument. This assumes, of course, that when 'secret' isn't given, the program asks for it.
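A minimal sketch (myCommand and its exact Password: prompt are assumptions):
#!/usr/bin/expect -f
spawn myCommand
expect "Password:"
send "hello-neo\r"
expect eof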

There's no easy way. Take a look at this question I asked a while ago:
Hide arguments from ps
Is command your own program? You could try encrypting the secret and have the command decrypt it before use.

You can use LD_PRELOAD to have a library manipulate the command line arguments of some binary within the process of that binary itself, where ps does not pick it up. See this answer of mine on Server Fault for details.
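As a rough illustration of the idea (a sketch, not the Server Fault answer verbatim, assuming glibc, which passes argc/argv/envp to ELF constructors; a real version would have to stash the values somewhere first, since the program can no longer read them afterwards):
/* scrub.c - build: gcc -shared -fPIC -o scrub.so scrub.c
 * run:     LD_PRELOAD=./scrub.so myCommand secret */
#include <string.h>

__attribute__((constructor))
static void scrub_args(int argc, char **argv, char **envp)
{
    (void)envp;
    for (int i = 1; i < argc; i++)
        memset(argv[i], 'x', strlen(argv[i]));  /* overwrite in place */
}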

Per the following article:
https://www.cyberciti.biz/faq/linux-hide-processes-from-other-users/
you can configure the OS to hide/separate processes from each other via the hidepid mount option for /proc (requires Linux kernel 3.2+).
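For example (run as root; hidepid=2 makes other users' processes invisible entirely):
mount -o remount,hidepid=2 /proc
# or persist it in /etc/fstab:
# proc  /proc  proc  defaults,hidepid=2  0  0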

Maybe you can do something like this:
#include <boost/algorithm/string/predicate.hpp>
#include <cstring>   // strlen
#include <strings.h> // bzero
#include <string>

// zero out every argument that starts with "--<arg>"
void hide(int argc, char** argv, std::string const& arg) {
    for (char** current = argv; current != argv + argc; ++current) {
        if (boost::algorithm::starts_with(*current, "--" + arg)) {
            bzero(*current, strlen(*current));
        }
    }
}

int main(int argc, char** argv) {
    hide(argc, argv, "password");
}
Note that this only clears arguments that themselves begin with --password (e.g. --password=value); a value passed as a separate argument would need its own handling.

Here is one way to hide a secret in an environment variable from ps:
#!/bin/bash
read -s -p "Enter your secret: " secret
umask 077 # nobody but the user can read the file x.$$
echo "export ES_PASSWORD=$secret" > x.$$
. x.$$ && your_awesome_command
rm -f x.$$ # Use shred, wipe or srm to securely delete the file
In the ps output you will see something like this:
$ ps -ef | grep your_awesome_command
root 23134 1 0 20:55 pts/1 00:00:00 . x.$$ && your_awesome_command
Elastalert and Logstash are examples of services that can access passwords via environment variables.

If the script is intended to be run manually, the best way is to read the secret from stdin:
#!/bin/bash
read -s -p "Enter your secret: " secret
command "$secret"

I always store sensitive data in files that I don't commit to git, and use the secrets like this:
$(cat path/to/secret)
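For example (path/to/secret is a placeholder; note that the expansion still ends up in the child's argv, so this protects the secret at rest rather than from ps):
chmod 600 path/to/secret
myCommand "$(cat path/to/secret)"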

Related

Using Expect to fill a password in a bash script

I am relatively new to working in bash, and one of the biggest pains with the script below is that I get prompted for passwords repeatedly when running it. I am unable to pass ssh keys or use any options except expect due to security restrictions, but I am struggling to understand how to use expect.
Does Expect require a file separate from the script that calls it? It seems that way from the tutorials, but they seem rather complex and confusing for a new user. Also, how do I tell my script to auto-fill any prompt that says Password:? Also, this script runs with 3 separate unique variables every time it is called. How do I make sure those are gathered while the password is still filled in automatically?
Any assistance is greatly appreciated.
#!/bin/bash
zero=`echo $2`
TMPIP=`python bin/dgip.py $zero`
IP=`echo $TMPIP`
folder1=`echo $zero | cut -c 1-6`
folder2=`echo $zero`
mkdir $folder1
cd $folder1
mkdir $folder2
cd $folder2
scp $1@`echo $IP`:$3 .
Embedding expect code in a shell script is not too difficult. We have to be careful to get the quoting correct. You'll do something like this:
#!/usr/bin/env bash
user=$1
zero=$2
files=$3
IP=$(python bin/dgip.py "$zero")
mkdir -p "${zero:0:6}/$zero"
cd "${zero:0:6}/$zero"
export user IP files
expect <<'END_EXPECT' # note the single quotes here!
set timeout -1
spawn scp $env(user)@$env(IP):$env(files) .
expect {assword:}
send "$env(my_password)\r"
expect eof
END_EXPECT
Before you run this, put your password into your shell's exported environment variables:
export my_password=abc123
bash script.sh joe zero bigfile1.tgz
bash script.sh joe zero bigfile2.tgz
...
Having said all that, public key authentication is much more secure. Use that, or get your sysadmins to enable it, if at all possible.

Bash - SSH to remote server and retrieve data

I have a bash script that needs to connect to another server for parts of its execution. I have tried many of the standard instructions and syntaxes for executing ssh commands, but with little progress.
On the remote server, I need to source a shell script that contains several env parameters for some software. One of these parameters is then used in a filepath to point to an executable, which supports an option ' -lprojects ' that can list the projects for the software on that server.
I have verified that running the commands on the server itself works multiple times. My issue is when I try to run the same commands over SSH. If I use the approach where I use the env variable for the filepath, it shows that the variable is null in the filepath, giving a file/directory not found error. If I hard-code the filepath to point to the executable, it gives me an error saying that the shell script is not sourced (which I assume it needs, for other functions and APIs, for the executable to reveal its -lprojects option).
Here is roughly what the code looks like:
ssh remote.server 'source /filepath/remotescript.sh'
filelist=$(ssh remote.server $REMOTEVARIABLE'/bin/executable -lprojects')
echo ${filelist[@]}
for file in $filelist
do
echo $file
ssh SERVER2 awk 'something' /filepath/"$file"/somefile.txt | sed 'something' >> filepath/values.csv;
done
As you can see, I then also need to loop through the contents of the -lprojects output on remote.server, do some awk and sed on the files to extract the wanted text (this works), but then I need to write that back to the client's (local server's) values.csv file. This is more generic, as there will be several servers I have to do this for, but all of them have to write to the same .csv file. For simplicity, you can just regard this as a one-remote-server case, since it is vital that I get it working for at least one server to begin with.
Note that I also tried something like:
ssh remote.server << EOF
'source /filepath/remotescript.sh'
filelist=$(ssh remote.server $REMOTEVARIABLE'/bin/executable -lprojects')
EOF
But with similar results. I also tried placing the single quotes in the filelist assignment both before and after the remote variable, etc.
How do I go about properly doing this?
To access the environment variable, you must source the script that defines it within the same SSH call as the one where you use it; otherwise you're running your commands in two different, unrelated shells:
filelist=$(ssh remote.server 'source /filepath/remotescript.sh; $REMOTEVARIABLE/bin/executable -lprojects')
Assuming executable outputs one file name per line, you can use readarray to achieve the effect:
readarray -t filelist < <(ssh remote.server '
source /filepath/remotescript.sh
$REMOTEVARIABLE/bin/executable -lprojects
'
)
echo "${filelist[@]}"
for file in "${filelist[@]}"
do
echo $file
ssh SERVER2 awk 'something' /filepath/"$file"/somefile.txt | sed 'something' >> filepath/values.csv;
done

Laravel Envoy and bash prompt

I'm using Envoy to provision a remote server. Provisioning is done by pulling a bash script from a private repo and then executing it.
The bash script asks for confirmation like yes/no (using bash "read -p"): it works as expected when I'm connected to the remote server... the script waits for user input.
Envoy, instead, seems to ignore any prompt. Is that expected behavior?
Any workaround?
Yes, this is expected. There's nothing for read to read from so it doesn't.
You have a few options.
Rewrite your script to use a config file when there's no terminal to prompt from.
Use something like [ -t 0 ] to test whether standard input is a terminal, and load a configuration file with defaults when it isn't (see the sketch after this list). The simplest way to do that is to have a file containing appropriate variable assignments and just source it (. defaults.sh or whatever). You don't even need the -t test if you source the defaults first, since anything the user inputs will then override the default value.
Rewrite your script to have sane defaults.
Rewrite whatever runs the script to provide your script input via pipeline/file via redirection (e.g. printf 'answer 1\nanswer 2\n' | ./script.sh or ./script.sh <answerfile).
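A minimal sketch of that terminal test (defaults.sh and the answer variable are hypothetical):
#!/bin/bash
if [ -t 0 ]; then
    read -r -p "Proceed? (yes/no) " answer   # interactive: prompt as usual
else
    . ./defaults.sh                          # non-interactive: e.g. contains answer=yes
fi
echo "answer=$answer"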

using grep in a script which prompt user for input

I have written a shell script which asks for a username and password on standard input.
Once the username and password are typed, there is output depending on the parameters passed to the script.
Say my script name is XYZ.ksh.
Now my problem is that users of this script want to use it in conjunction with other shell commands like grep, less, more, wc, etc.
Normally yes they can use
XYZ.ksh | grep abc
But in my case, since XYZ.ksh prompts for a username and password, we are not able to use "|" after it. It blocks forever.
I just wanted to know how I can implement this functionality.
What I tried
I tried taking input of "more commands" from the user, where the user types things like "| grep abc", but when I used this input in my script it did not work.
Use <<< like this:
XYZ.ksh <<< "your inputs" | grep abc
In your script you can test to see if stdout is connected to a terminal with:
if [[ -t 1 ]]
That way you can suppress the prompt if the output is not going to the console.
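A minimal sketch of that (variable names are placeholders):
# show prompts only when stdout goes to the console
if [[ -t 1 ]]; then
    read -r -p "Username: " user
    read -rs -p "Password: " pass && echo
else
    read -r user    # output is piped: skip the prompts, still read the input
    read -r pass
fi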
Alternatively, with your "more commands" solution, run the command connected to a named pipe.
There are multiple solutions commonly used for this kind of problem, but none of them is perfect:
Read password from standard input. It makes it really hard to use the script in pipes. This method is used by commands that deal with changing passwords : passwd, smbpasswd
Provide the username and password as command line parameters. This solution is good for using the script in pipes, but the command line can be viewed by anyone, using ps -ef for example. This is used by mysql, htpasswd, sqlplus, ...
Store the username and password unencrypted in a file in the user's home directory. This solution is good for using the script in pipes, but the script must check that the file is not visible or modifiable by other users. This is used by mysql
Store a private key in a local file and the public key in a distant file, as SSH does. You must have good encryption knowledge to do this correctly (or rely on SSH), but it's excellent for use in pipes, even for creating pipes across different machines!
Don't deal with passwords at all, and assume that if a user is logged in to the system, he has the right to run the program. You may give execute privilege only to one group to filter who can use the program. This is used by sqlplus from Oracle, VirtualBox, games on some Linux distributions, ...
My preferred solution would be the last, as the system is certainly better than any program I could write with regard to security.
If the password is used to login to some other service, then I would probably go for the private file containing the password.
One less-than-optimal possibility is to display the prompt to stderr instead of stdout.
echo -n "Username:" >/dev/stderr
A better solution would be to check the stdin of the shell; if it's a terminal, open it for writing and redirect the prompt to it. Unfortunately, I'm not sure how to do that in bash or ksh; perhaps something like
echo -n "Username:" >/dev/tty
You can use the following (I assume you are reading the username and password in your script with read):
(
read -p "user:" USER
read -p "pass:" PASS
) < /dev/tty > /dev/tty
and you'll be able to run
$ cmd | XYZ.ksh
However, I agree with other answers: just don't ask for user and password and give the correct permissions to the script to allow access.

FTP inside a shell script not working

My host upgraded my version of FreeBSD and now one of my scripts is broken. The script simply uploads a data feed to google for their merchant service.
The script (that was working prior to the upgrade):
ftp ftp://myusername:mypassword@uploads.google.com/ <<END_SCRIPT
ascii
put /usr/www/users/myname/feeds/mymerchantfile.txt mymerchantfile.txt
exit
END_SCRIPT
Now the script says "unknown host". The same script works on OSX.
I've tried removing the "ftp://" - no effect.
I can log in from the command line if I enter the username and password manually.
I've search around for other solutions and have also tried the following:
HOST='uploads.google.com'
USER='myusername'
PASSWD='mypassword'
ftp -dni <<END_SCRIPT
open $HOST
quote USER $USER
quote PASS $PASS
ascii
put /usr/www/users/myname/feeds/mymerchantfile.txt mymerchantfile.txt
END_SCRIPT
And
HOST='uploads.google.com'
USER='myusername'
PASSWD='mypassword'
ftp -dni <<END_SCRIPT
open $HOST
user $USER $PASS
ascii
put /usr/www/users/myname/feeds/mymerchantfile.txt mymerchantfile.txt
END_SCRIPT
Nothing I can find online seems to be doing the trick. Does anyone have any other ideas? I don't want to use a .netrc file since it is executed by cron under a different user.
ftp(1) shows that there is a simple -u command line switch to upload a file; and since ascii is the default (shudder), maybe you can replace your whole script with one command line:
ftp -u ftp://username:password@uploads.google.com/mymerchantfile.txt\
/usr/www/users/myname/feeds/mymerchantfile.txt
(Long line wrapped with a backslash and newline; feel free to remove the backslash and place it all on one line.)
ftp $HOSTNAME <<EOFEOF
$USER
$PASS
ascii
put $LOCALFILE $REMOTETEMPFILE
rename $REMOTETEMPFILE $REMOTEFINALFILE
EOFEOF
Please note that the above code can be easily broken by, for example, using spaces in the variables in question. Also, this method gives you virtually no way to detect and handle failure reliably.
Look into the expect tool if you haven't already. You may find that it solves problems you didn't know you had.
Some ideas:
Just a thought: since this is executed in a subshell, which should inherit correctly from the parent, does env show any difference when executed from within the script versus from the shell?
Do you use a correct "shebang"?
Any proxy that requires authentication?
Can you ping the host?
In BSD, you can create a NETRC file that ftp can use for logging in. You can even specify the NETRC file in your ftp command using the -N parameter; otherwise, the default NETRC ($HOME/.netrc) is used.
Can you check if there's a difference in the environment between your shell login and the cron job? From your login, run env, and look out for ftp_proxy and http_proxy.
Next, include a line in the cron job that dumps the environment, e.g. env >/tmp/your.env.
Maybe there's some difference... Also, did you double-check your usage of the -n switch?
