I have some functions in my .bashrc file which are used to issue backup commands on remote websites. Right now, the username and password fields are stored as function-local strings in plain text within the function definition. Is there a better way of doing this?
My idea so far was to put an encrypted version of the passwords in a file to which only my user account has read access, run a decryption command on it and store the plain-text result in memory, use it, then clear it. (Hashing wouldn't work here, since a hash can't be reversed back into the password.)
Is there a better/safer or even a de-facto common way of accomplishing this?
Thank you.
There are two ways I can think of to approach this problem safely.
1. GPG
Keep a GPG-encrypted file containing your passwords in key=value format (essentially shell-parsable), one per line. For example:
foo_pass='bar'
pop_pass='tart'
When you want to access them, just do:
eval "$(gpg -d /path/to/file | grep '^foo_pass=')"
SUPERSECRETPASSWORD="$foo_pass" somecmd
If the command needs the password as an argument (this is unsafe), just adjust that last line.
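If you'd rather not eval anything from the file, the grep line above can be wrapped in a small parsing helper instead (get_pass is a made-up name; the parsing assumes the key='value' layout shown above):

```shell
# Hypothetical helper: extract one key's value from decrypted key='value' lines.
# Normal use would feed it from gpg:
#   pass=$(gpg -d /path/to/file | get_pass foo_pass)
get_pass() {
    grep "^$1=" | head -n 1 | cut -d= -f2- | sed "s/^'//;s/'\$//"
}
```

Because the value is only ever treated as text, a hostile line in the file cannot execute code the way it could under eval.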
2. Keyring daemon
Depending on your OS, you might have access to a keyring in which you can store your passwords. On Linux, this might be the GNOME Keyring daemon. That keyring can then usually be accessed from the CLI or a script.
For example, there is gkeyring for use with the gnome keyring daemon.
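As a concrete sketch, the libsecret CLI secret-tool can talk to such a keyring; the attribute pairs (service/user) and their values here are arbitrary labels of my choosing:

```shell
# Requires the libsecret CLI (secret-tool) and a running keyring daemon.
if command -v secret-tool >/dev/null 2>&1; then
    # store: the password is read from stdin, so it never appears in argv
    secret-tool store --label='site backup' service mysite user alice
    # retrieve later, e.g. inside the backup function:
    pass=$(secret-tool lookup service mysite user alice)
fi
```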
Related
The openssl man page says the "-pass pass:[password]" option for the openssl command is not secure because it's visible to utilities like ps, but is it secure through bash?
Like this:
#!/bin/bash
read -s psword
openssl enc -pass pass:"$psword" -aes-256-cbc -in text.plaintext -out text.encrypted
I've run a program like this on my computer and all ps seems to see is "openssl". Will other utilities be able to see the password?
The command line of any process is normally easy to retrieve on any operating system; see this answer on getting the command line for a process. So it doesn't really matter what "starts" the process, be it bash or some custom application. That is the reason the advice is given.
With any of these things it comes down to risk. If you accept the risk that it's not that secure, then there is no reason not to use the command line (e.g. it's your machine and you are the only one using it). If many people can see your process sessions and possibly see a sensitive password, then the risk may not be worth it. It's up to you to determine whether the risk is acceptable.
If you want to secure the password, it's better to write it to a file that only your process has access to, and read the password from that file in your command. This keeps the plain-text password off the command line and makes it invisible to other processes.
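For openssl specifically, the enc man page documents -pass file:pathname (and -pass stdin) as sources that keep the passphrase out of argv. A sketch of the file variant, with stand-in file names and password, and -pbkdf2 added to avoid the weak legacy key derivation (OpenSSL 1.1.1+):

```shell
# Demo setup (stand-ins for the question's files):
printf 'hello\n' > text.plaintext
psword='s3cret'

umask 077                # the passphrase file is created readable only by us
pwfile=$(mktemp)         # ideally place this on a tmpfs such as /dev/shm
printf '%s' "$psword" > "$pwfile"
openssl enc -aes-256-cbc -pbkdf2 -pass "file:$pwfile" \
    -in text.plaintext -out text.encrypted
rm -f "$pwfile"          # remove the passphrase file as soon as possible
```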
You can check the following answer. It is related to generating openssl keys but is similar to this topic:
How to generate an openSSL key using a passphrase from the command line?
Is there any way .netrc or .fetchmailrc can
fetch passwords from environment variables, or
fetch them from a spawned/forked shell?
mutt can do it with something like this:
set my_pw_gmail=`some shell command that emits to stdout`
Is there any similar ability in either of these RC files?
Thanks!
It depends on how the config file is read and interpreted. The program reading it (C/C++/Java or whatever) would have to call the equivalent of system() on the backticked string.
BSD's ftp and GNU ftp use environment variables to locate .netrc. For GNU ftp (which you probably have), export NETRC=/path/to/different, where /path/to/different is the directory where one version of .netrc lives. It also falls back to HOME for the default .netrc.
Read here:
http://www.manpagez.com/man/1/ftp/
BSD ftp uses the HOME variable to look for .netrc.
fetchmail is different
HOME_ETC
If the HOME_ETC variable is set, fetchmail will read $HOME_ETC/.fetchmailrc instead of ~/.fetchmailrc.
If HOME_ETC and FETCHMAILHOME are both set, HOME_ETC will be ignored.
See : http://linux.die.net/man/1/fetchmail
So environment variables are one answer to your question, as your query implied. Note, though, that this approach means several different files, not just one.
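To feed a password from an environment variable into a throwaway .netrc, one approach is to materialize the file just before the transfer. This is a sketch; the variable name FTP_PASS, the host, and the helper name make_netrc are all made up, and whether NETRC should name the file or its directory varies by client, so check your ftp(1):

```shell
# Build a short-lived .netrc whose password comes from $FTP_PASS.
make_netrc() {
    # $1 = machine, $2 = login
    dir=$(mktemp -d)    # mktemp -d creates the directory mode 0700
    printf 'machine %s login %s password %s\n' "$1" "$2" "$FTP_PASS" > "$dir/.netrc"
    chmod 600 "$dir/.netrc"
    export NETRC="$dir/.netrc"
}
# usage:
#   export FTP_PASS=...; make_netrc example.com alice
#   ftp example.com                  # client picks up credentials via NETRC
#   rm -rf "$(dirname "$NETRC")"     # shred the file when done
```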
I have a shell script that produces sensitive content when run. It sits on a box that only a few users have permissions to access. However, I have also added layered obfuscation to prevent unauthorized usage, via the following:
1. the script must be run as root
2. the script must be passed specific command-line arguments to produce any output
3. the script has been encoded by the shell compiler "shc" to mask facts #1 and #2 from normal users (those who would not know to use strace or strings to still view the actual code).
To then add a layer of actual security to protect against more advanced users and system admins, I have also encrypted the script with gpg.
My question is -- Is there a gpg command (or other encryption method) that I could run which prompts for the decryption passphrase, and decrypts the script and runs it in memory only (without saving the decrypted version of the file to the file system)?
I realize that sensitive information may still exist in unprotected memory while being executed, I'll address that separately.
You can capture the output of decrypting by
decrypted=$(gpg -d ...)
You can then eval the result
eval "$decrypted"
Another simple option to contrast with choroba's answer:
Save the decrypted output to a file in /dev/shm/. (It's an in-RAM tmpfs filesystem there by default on virtually all Linux distros.) Set up a trap to delete the file when your script exits.
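A sketch of that approach (run_decrypted is a made-up wrapper name; its body runs in a subshell so the EXIT trap fires, and deletes the plaintext copy, as soon as the wrapper finishes; the gpg command in the usage line is the assumed decryptor):

```shell
run_decrypted() (
    # $1 = any command that writes the plaintext script to stdout;
    # remaining arguments are handed to the script itself.
    cmd=$1; shift
    tmp=$(mktemp /dev/shm/dec.XXXXXX 2>/dev/null) || tmp=$(mktemp)  # prefer tmpfs
    trap 'rm -f "$tmp"' EXIT
    eval "$cmd" > "$tmp" || exit 1
    bash "$tmp" "$@"
)
# usage: run_decrypted 'gpg -d script.sh.gpg' arg1 arg2
```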
It's very possible I could refine this, but here's another idea where you execute the script rather than evaluate it like in choroba's example. It allows you to pass arguments...
bash <( gpg -d ... ) arg1 arg2
...it 'overrides' the interpreter, though. I.e., I'd normally run my scripts with bash -ue. That may or may not be a problem, depending on the scripts and whether you are writing them yourself or not :)
Writing a bash script, and I want to get user input. Awesome,
read -p "What directory should we save in? " -e FOLDER
Except that what I'd like to do, ideally, is have the user see something like:
What directory should we save in? /home/user/default/
with the cursor at the end of the line, and the ability to delete backwards or append or whatever. Essentially, pre-filling the user's input, but giving them the ability to edit it.
Readline obviously has the capability, but it appears not to be exposed in the read command. Any alternatives? I'd prefer not to have to use perl or the like.
The constraint I'm working under is that I'm writing a single shell script that would be nice to disseminate widely, so should rely on as little pre-existing infrastructure as possible. rlwrap and read -i both work if their dependencies (rlwrap and bash version >> whatever I have, respectively) are available. Both good answers, choose whichever works for you.
$ read -p "What directory should we save in? " -i "/home/user/default/" -e FOLDER
What directory should we save in? /home/user/default/
that should work, right?
You can wrap the command in rlwrap, which provides instant readline capabilities: https://github.com/hanslub42/rlwrap
(rlwrap -P does what you want)
As far as a pure bash solution for the 3.2 line is concerned (which I am presuming you are using), I don't think it's possible.
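For shells whose read lacks -i (bash 3.2 included), a common fallback is to show the default in the prompt and substitute it when the user just presses Enter. It loses in-line editing of the default, but needs nothing beyond read itself:

```shell
# Fallback for read without -i: default applies on an empty answer.
default="/home/user/default/"
read -r -e -p "What directory should we save in? [$default] " FOLDER
FOLDER=${FOLDER:-$default}
```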
I have a bash script that uses bash's "read" builtin to obtain a username and password from the user and then uses them as part of an AFP URL. I'm encountering some problems, however, when passwords contain characters that affect URL parsing (';' for example).
I've looked around for a command-line utility that can do URL-filtering but haven't found one. Does anybody know of such a utility? I would like to be able to do something like the following:
mount_afp "afp://`urlfilter $USER`:`urlfilter $PASS`#server.com".
You can use a simple one-line Python script to do this:
printf '%s' "$USER" | python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read()))'
(printf '%s' avoids percent-encoding a trailing newline.) Note that, while acceptable for your usage, urllib.parse.quote is designed for use on URL components, not the entire URL. See #120951 for more.
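If you'd rather avoid the Python dependency entirely, a bash-only percent-encoder is straightforward. urlencode is a made-up name; this sketch encodes everything outside RFC 3986's unreserved set and assumes single-byte characters:

```shell
# Percent-encode a string for use as a URL component.
urlencode() {
    local s=$1 out='' c i
    for (( i = 0; i < ${#s}; i++ )); do
        c=${s:i:1}
        case $c in
            [a-zA-Z0-9._~-]) out+=$c ;;                # unreserved: copy through
            *) printf -v c '%%%02X' "'$c"; out+=$c ;;  # everything else: %XX
        esac
    done
    printf '%s\n' "$out"
}
# e.g. mount_afp "afp://$(urlencode "$USER"):$(urlencode "$PASS")@server.com"
```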