OS X Command-line URL filter

I have a bash script that uses bash's "read" builtin to obtain a username and password from the user and then uses them as part of an AFP URL. I'm encountering some problems, however, when passwords contain characters that affect URL parsing (';' for example).
I've looked around for a command-line utility that can do URL-filtering but haven't found one. Does anybody know of such a utility? I would like to be able to do something like the following:
mount_afp "afp://`urlfilter $USER`:`urlfilter $PASS`#server.com".

You can use a simple one-line Python script to do this:
echo "$USER" | python -c 'import urllib; print urllib.quote(raw_input())'
Note that, while acceptable for your usage, urllib.quote is designed for use on URL components, not the entire URL. See #120951 for more.
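On newer macOS releases where the system python is Python 3 (or missing entirely), the same idea works with urllib.parse. Here is a minimal sketch of a helper; the name urlencode and the availability of python3 are assumptions, not anything built in:
# hypothetical helper; assumes python3 is on the PATH
urlencode() { printf '%s' "$1" | python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read()))'; }
urlencode 'p;ss:w@rd'   # prints p%3Bss%3Aw%40rd
You could then drop urlencode in place of urlfilter in the mount_afp line from the question.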

Related

How can I pass answers from a shell script to another command that asks questions

I'm not sure how to write a shell script that runs another command which asks interactive questions.
For example, I have to connect with a VPN client and answer several prompts: accept the trust prompt (yes/no), choose the VPN option (VPN/VPN-1), then enter a login and password. I would like one script that supplies all of these parameters (excluding, of course, the password).
Any ideas? Thanks
If the answers file works, you can avoid placing the password in the file by using a replacement token. For example, write __PASSWORD__ in the file and then use sed (or another tool) to substitute the real value at run time.
You can use read -s (or another method) to get the password at run time:
read -s REAL_PASSWORD
sed -e "s/__PASSWORD__/$REAL_PASSWORD/" answers-file | command-to-setup
(Note the double quotes so the variable is expanded; answers-file is whatever file holds your prepared answers.)
If the number of items in the answers file is small and they do not change, you can inline them into your script:
read -s REAL_PASSWORD
command-to-setup <<__ANSWERS__
yes
VPN-1
login
$REAL_PASSWORD
__ANSWERS__

.netrc, .fetchmailrc, and non-hardcoded passwords

Is there any way .netrc or .fetchmailrc can
fetch passwords from environment variables, or
fetch them from a spawned/forked shell?
mutt can do it with something like this:
set my_pw_gmail=`some shell command that emits to stdout`
Is there any similar ability in either of these RC files?
Thanks!
It depends on how the config file is read and interpreted. The program reading it (C, C++, Java, or whatever) would have to call the equivalent of system() on the backticked string.
BSD's ftp and GNU ftp use environment variables to locate .netrc. For GNU ftp (which is what you probably have), export NETRC=/path/to/different, where /path/to/different is the directory where one version of .netrc lives. It also uses HOME to find the default .netrc.
Read here:
http://www.manpagez.com/man/1/ftp/
BSD ftp uses the HOME variable to look for .netrc.
fetchmail is different
HOME_ETC
If the HOME_ETC variable is set, fetchmail will read $HOME_ETC/.fetchmailrc instead of ~/.fetchmailrc.
If HOME_ETC and FETCHMAILHOME are both set, HOME_ETC will be ignored.
See : http://linux.die.net/man/1/fetchmail
So environment variables are an answer to your question, since your query already implied their use. But it means you need several different files, not just one.
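As a concrete illustration only (the directory layout and the get-password helper are made up, and the details follow the man-page behavior described above), you could generate the credential files into a private directory at run time and point the tools at it:
# sketch: build private rc files at run time; 'get-password' is a hypothetical command that prints the secret
export NETRC="$HOME/private"     # directory holding an alternate .netrc for GNU ftp
export HOME_ETC="$HOME/private"  # fetchmail will read $HOME_ETC/.fetchmailrc
mkdir -p "$HOME/private" && chmod 700 "$HOME/private"
printf 'machine ftp.example.com login me password %s\n' "$(get-password)" > "$NETRC/.netrc"
printf 'poll mail.example.com user me password "%s"\n' "$(get-password)" > "$HOME_ETC/.fetchmailrc"
chmod 600 "$NETRC/.netrc" "$HOME_ETC/.fetchmailrc"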

Automatically decrypt and run an encrypted bash script without saving decrypted file to file system

I have a shell script that produces sensitive content when run. It sits on a box that only a few users have permissions to access. However, I have also added layered obfuscation to prevent unauthorized usage, via the following:
script must be run as root
script must be passed specific command line arguments to produce any output
script has been encoded with the shell compiler "shc" to mask facts #1 and #2 from normal users (those who would not know to use strace or strings to view the actual code).
To then add a layer of actual security to protect against more advanced users and system admins, I have also encrypted the script with gpg.
My question is -- Is there a gpg command (or other encryption method) that I could run which prompts for the decryption passphrase, and decrypts the script and runs it in memory only (without saving the decrypted version of the file to the file system)?
I realize that sensitive information may still exist in unprotected memory while being executed, I'll address that separately.
You can capture the decrypted output with
decrypted=$(gpg -d ...)
You can then eval the result
eval "$decrypted"
Another simple option to contrast with choroba's answer:
Save the decrypted output to a file in /dev/shm/ (an in-RAM tmpfs filesystem that is there by default on virtually all Linux distros). Set up a trap to delete the file when your script exits.
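A minimal sketch of that idea (file names here are placeholders; adjust the gpg invocation to match how you encrypted the script):
#!/bin/bash
# decrypt into tmpfs, make sure the file is removed on exit, then run it
tmp=$(mktemp /dev/shm/script.XXXXXX)
trap 'rm -f "$tmp"' EXIT
gpg -d secret-script.sh.gpg > "$tmp"   # prompts for the passphrase; file name is a placeholder
chmod 700 "$tmp"
"$tmp" "$@"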
It's very possible I could refine this, but here's another idea where you execute the script rather than evaluating it as in choroba's example. It allows you to pass arguments...
bash <( gpg -d ... ) arg1 arg2
...it 'overrides' the interpreter, though. That is, I'd normally run my scripts with bash -ue, which may or may not be a problem depending on the scripts and whether you are writing them yourself or not :)
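If those options matter, a small variant of the same idea (still a sketch) is to pass them explicitly:
bash -ue <( gpg -d script.sh.gpg ) arg1 arg2   # script.sh.gpg is a placeholder name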

linux bash script: set date/time variable to auto-update (for inclusion in file names)

Essentially, I have a standard format for file naming conventions. It breaks down to this:
target_dateUTC_timeUTC_tool
So, for instance, if I run tcpdump on a target of 'foo', then the file would be foo_dateUTC_timeUTC_tcpdump. Simple enough, but a pain for everyone to constantly (and consistently) enter... so I've tried to create a bash script which sets system variables like so:
FILENAME=$TARGET\_$UTCTIME\_$TOOL
Then, I can just call the variable at runtime, like so:
tcpdump -w $FILENAME.lpc
All of this works like a champ. I've got a menu-driven .sh which gives the user the options of viewing the current variables as well as setting them... file generation is a breeze. Unfortunately, by setting the date/time variable, it is locked to the value at the time of creation (naturally). I set the variable like so:
UTCTIME=$(/bin/date --utc +"%Y%m%d_%H%M%Z")
What I really need is either a way to create a variable which updates at runtime, or (more likely) another way to skin this cat.
While scouring for solutions, I came across a similar issue... like this.
But, to be honest, I'm stumped on how to marry the two approaches and create a simple, distributable solution.
.sh file is posted via pastebin, here.
Use a function:
generate_filename() { echo "${1}_$(/bin/date --utc +"%Y%m%d_%H%M%Z")_$2"; }
And use it like this:
tcpdump -w "$(generate_filename foo tcpdump).lpc"
It's hard to get the function to automatically determine the command name. You can use bash history to get it and save a couple of characters of typing:
tcpdump -w "$(generate_filename foo !#:0).lpc"

Pre-filling a prompt in Bash

I'm writing a bash script, and I want to get user input. Awesome,
read -p "What directory should we save in? " -e FOLDER
Except that what I'd like to do, ideally, is have the user see something like:
What directory should we save in? /home/user/default/
with the cursor at the end of the line, and the ability to delete backwards or append or whatever. Essentially, pre-filling the user's input, but giving them the ability to edit it.
Readline obviously has the capability, but it appears not to be exposed in the read command. Any alternatives? I'd prefer not to have to use perl or the like.
The constraint I'm working under is that I'm writing a single shell script that would be nice to disseminate widely, so should rely on as little pre-existing infrastructure as possible. rlwrap and read -i both work if their dependencies (rlwrap and bash version >> whatever I have, respectively) are available. Both good answers, choose whichever works for you.
$ read -p "What directory should we save in? " -i "/home/user/default/" -e FOLDER
What directory should we save in? /home/user/default/
that should work, right?
You can wrap the command in rlwrap, which provides instant readline capabilities: https://github.com/hanslub42/rlwrap
(rlwrap -P does what you want)
As far as a pure bash solution is concerned for the 3.2 line (which I am presuming you are using), I don't think it's possible.
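If you are stuck on bash 3.2 without rlwrap, a common fallback, sketched here, is to show a default in the prompt and substitute it when the user just presses Enter. It's not a true pre-fill (the user can't edit the default), but it needs nothing beyond plain bash:
DEFAULT="/home/user/default/"
read -p "What directory should we save in? [$DEFAULT] " FOLDER
FOLDER=${FOLDER:-$DEFAULT}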
