Can I use command substitution in git bash? - bash

I love Git Bash because it puts all the powerful tools on a CLI for Windows. I'm trying to use it with openssl to generate a CSR and key at the same time, and I'd like to do it all in one command. Here's what I have so far:
openssl req -new -sha256 \
-newkey ec:<(openssl ecparam -name prime256v1) -keyout site.key \
-batch -out site.csr -utf8 \
-subj '//C=US\ST=State\L=City\O=organization\OU=org unit\CN=site.com\emailAddress=admin#site.com' \
-addext 'subjectAltName=DNS:site.com,DNS:www.site.com'
My problem is that Git Bash doesn't handle the process substitution <(...) properly (or I'm doing it wrong). When I run the above, I get
Can't open parameter file /dev/fd/63
15160:error:02001003:system library:fopen:No such process:../openssl-1.1.1a/crypto/bio/bss_file.c:72:fopen('/dev/fd/63','r')
15160:error:2006D080:BIO routines:BIO_new_file:no such file:../openssl-1.1.1a/crypto/bio/bss_file.c:79:
According to the manual,
All other algorithms support the -newkey alg:file form, where file may be an algorithm parameter file, created by the genpkey -genparam command...
so I think I need to feed it a file after ec:, I'm just not sure how. (By the way, I took direction from this answer; although it's written for standard bash, I was hoping it would work.)
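Since /dev/fd-style process substitution is unreliable under Git Bash/MSYS, one possible workaround (a sketch only; the file name ecparams.pem is illustrative) is to split the job into two steps and hand -newkey a real parameter file:
# write the EC parameters to a real file first
openssl ecparam -name prime256v1 -out ecparams.pem
# then reference that file after ec:
openssl req -new -sha256 \
-newkey ec:ecparams.pem -keyout site.key \
-batch -out site.csr -utf8 \
-subj '//C=US\ST=State\L=City\O=organization\OU=org unit\CN=site.com\emailAddress=admin#site.com' \
-addext 'subjectAltName=DNS:site.com,DNS:www.site.com'
rm -f ecparams.pem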

Related

Openssl passin throws "Bad password read"

I'm trying to encrypt a file and make it executable using openssl. I found an interesting link that suited my problem, with one issue: I have to pass the password used for encryption on the openssl command line, which results in an error.
Write your script (script-base.sh):
#!/bin/sh
echo "Hello World"
Encrypt your script (give a password): foobar is my password
openssl enc -e -aes-256-cbc -a -in script-base.sh > script-enc
Write the wrapper (script-final.sh):
#!/bin/sh
openssl enc -d -aes-256-cbc -a -in script-enc | sh -passin pass:foobar
Running "script-final.sh" I see following error in console
enter aes-256-cbc decryption password: bad password read
The following command works, but it is deprecated:
openssl enc -d -aes-256-cbc -a -in script-enc -k foobar | sh -
When it is used, the following error is thrown:
*** WARNING : deprecated key derivation used.
Using -iter or -pbkdf2 would be better.
bad decrypt
... works but [is] deprecated: openssl enc -d -aes-256-cbc -a -in script-enc -k foobar | sh -
In that case you give -k foobar as an option to the openssl enc -d command, so it is used as the password to decrypt, which succeeds since you did in fact encrypt using foobar as the password (and the same cipher and default KDF). See below about deprecation.
openssl enc -d -aes-256-cbc -a -in script-enc | sh -passin pass:foobar [gives]
enter aes-256-cbc decryption password: bad password read
Here you didn't give -passin pass:foobar as an option to openssl, you gave it as an option to the shell that is the second component of the pipeline. Since you didn't give the password as an argument to openssl and it is needed, openssl prompted you to input it, but you didn't give valid input (perhaps entering control-D or similar) so it failed. If you did instead
openssl enc -d -aes-256-cbc -a -in script-enc -passin pass:foobar | sh
it would work exactly the same as the -k version, except for taking more space in your script.
It is indeed true that the key derivation long used (and still the default) by openssl enc is weak and has been widely criticized for decades; OpenSSL 1.1.1 (released in 2018, after the date of the answer you link to) and later finally offer a better method with -pbkdf2 and warn about using the old one. However, you should pay attention to this warning on the encrypt side rather than the decrypt side; once you've encrypted with the poor method, you must use it to decrypt (and suffer the warning). Also note, as I commented at that link, that OpenSSL 1.1.x (and 3.0) are incompatible with earlier versions here, so if any system you (or your users, if any) need this to work on is running older software, it will fail.
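For completeness, a rough sketch of what the -pbkdf2 route looks like when you control both sides (assuming OpenSSL 1.1.1 or newer everywhere; -pbkdf2 has to be used for both encryption and decryption, and the file names reuse the ones from the question):
# re-encrypt with PBKDF2 so neither side warns (prompts for the password)
openssl enc -e -aes-256-cbc -a -pbkdf2 -in script-base.sh > script-enc
# matching wrapper (script-final.sh)
#!/bin/sh
openssl enc -d -aes-256-cbc -a -pbkdf2 -k foobar -in script-enc | sh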
Alternatively, consider using something that was properly designed in the first place, such as GPG, which was recommended in the answer by Gilles on that same question (well over a year earlier). GPG, depending on the version, makes it less convenient to provide the password on the command line, because doing so usually allows it to be compromised; but in your case you are already compromising it yourself, so GPG's attempt to protect you there is wasted.

Hiding sensitive ruby shell commands

I'm using fastlane and sh command to decrypt some credentials but seems ruby prints the output in logs. How do I hide the sensitive information from logs?
cmd_decrypt = "openssl enc -aes-256-cbc -d -a -k \"#{ENV["MATCH_PASSWORD"]}\" -in #{enc_file} -out #{dec_file[0]}"
sh(cmd_decrypt)
output:
[09:38:15]: --------------------------------------------------------------------
[09:38:15]: Step: openssl enc -aes-256-cbc -d -a -k "PASSWORD_SHOWN!" -in /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/zz-out /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/xx
[09:38:15]: --------------------------------------------------------------------
[09:38:15]: $ openssl enc -aes-256-cbc -d -a -k "PASSWORD_SHOWN!" -in /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/zz -out /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/xx
You can pass sh extra parameters. In this case, you would call it like this:
sh(cmd_decrypt, log: false)
The documentation for sh is here: https://docs.fastlane.tools/actions/sh/
You can get the docs for other built-in actions here:
https://docs.fastlane.tools/actions/
And the docs for other plugin's actions here: https://docs.fastlane.tools/plugins/available-plugins/
Since you have an environment variable, why not just run with that?
cmd_decrypt = "openssl enc -aes-256-cbc -d -a -k \"$MATCH_PASSWORD\" -in #{enc_file} -out #{dec_file[0]}"
sh(cmd_decrypt)
From there, shell interpolation should take over and make it work. One thing to note: your -in parameter isn't shell-escaped, which it usually should be (for example, with shellescape).
You really should be passing these as separate arguments whenever possible, though, to avoid injection issues. The problem is that you lose shell interpolation at that point.
The good news is you can always write a wrapper script to provide safety and ease of use, something like:
#!/bin/sh
# decrypt.sh -- decrypt $1 into $2 using the password from $MATCH_PASSWORD
openssl enc -aes-256-cbc -d -a -k "$MATCH_PASSWORD" -in "$1" -out "$2"
So then you can call it like this:
sh('decrypt.sh', enc_file, dec_file[0])
As a bonus, what gets logged is now a lot quieter. You can pick which arguments to pass through, or even pass them all through with $* (see the sketch below).
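If you do want the pass-everything-through variant, a sketch might look like this (the file name decrypt-any.sh is hypothetical; it uses "$@" rather than $* so that argument boundaries are preserved):
#!/bin/sh
# decrypt-any.sh -- hypothetical pass-through wrapper: forwards every argument to openssl
openssl enc -aes-256-cbc -d -a -k "$MATCH_PASSWORD" "$@"
It would then be called in the same style as above, e.g. sh('decrypt-any.sh', '-in', enc_file, '-out', dec_file[0]).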

Shell: Capturing output files in variables

Commands like openssl have arguments like -out <file> for output files. I'd like to capture the content of these output files in shell variables for use in other commands without creating temporary files. For example, to generate a self-signed certificate, one can use:
openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout KEYFILE -out CERTFILE 2>/dev/null
The closest I've got to capturing both output files is to echo them to stdout via process substitution, but this is not ideal because one would still have to parse them apart.
openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout >(key=$(cat -); echo $key) -out >(cert=$(cat -); echo $cert) 2>/dev/null
Is there a clean way to capture the content of output files in shell variables?
Most modern shells now support /dev/stdout as a file name to redirect to stdout. This is good enough for single file solutions but for two output files you need to go with "process substitution".
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout >(echo "keyout='$( cat )'" ) -out >(echo out="'$( cat )'" ) )"
This uses process substitution to direct each "file" to a separate process that prints to stdout an assignment of the calculated values. The whole thing is then passed to an eval to do the actual assignments.
Keep stderr unredirected so that any error messages show up; it is useful to have them logged when troubleshooting.
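For the single-file case mentioned at the top of this answer, no eval is needed at all; a minimal sketch that captures just the key (the certificate is thrown away here purely for illustration):
key=$(openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout /dev/stdout -out /dev/null 2>/dev/null)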
Edit: incorporating Charles Duffy's good paranoia:
flockf="$(mktemp -t tmp.lock.$$.XXXXXX )" || exit $?;
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >( set -x; 99>"$flockf" && \
flock -x "$flockf" printf "keyout=%q " "$( cat )"; ) \
-out >( set -x; 99>"$flockf" && \
flock -x "$flockf" printf "out=%q " "$( cat )"; ) \
)" ;
rm -f "$flockf"
An extension to Gilbert's answer providing additional paranoia:
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >(printf 'keyout=%q\n' "$(</dev/stdin)") \
-out >(printf 'out=%q\n' "$(</dev/stdin)") )"
(Note that this is not suitable if your data contains NULs, which bash cannot store in a native shell variable; in that case, you'll want to assign the contents to your variables in base64-encoded form).
Unlike echo "keyout='$(cat)'", printf 'keyout=%q\n' "$(cat)" ensures that even malicious contents cannot be evaluated by the shell as a command substitution, redirection, or otherwise content other than literal data.
To explain why this is necessary, let's take a simplified case:
write_to_two_files() { printf 'one\n' >"$1"; printf 'two\n' >"$2"; }
write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'")
...we get output akin to (but with no particular ordering):
two='two'
one='one'
...which, when evaled, sets two variables:
$ eval "$(write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'"))"
$ declare -p one two
declare -- one="one"
declare -- two="two"
However, let's say that our program behaves a bit differently:
## Demonstrate why eval'ing content created by echoing data is dangerous
write_to_two_files() {
printf "'%s'\n" '$(touch /tmp/i-pwned-your-box)' >"$1"
echo "two" >"$2"
}
eval "$(write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'"))"
ls -l /tmp/i-pwned-your-box
Instead of merely assigning the output to a variable, we evaluated it as code.
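By contrast, running the same malicious writer through the printf %q form keeps the payload inert; a quick sketch of what to expect:
eval "$(write_to_two_files >(printf 'one=%q\n' "$(cat)") >(printf 'two=%q\n' "$(cat)"))"
declare -p one
# one now holds the literal text '$(touch /tmp/i-pwned-your-box)'; nothing was executed,
# and /tmp/i-pwned-your-box is never created.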
If you're also interested in ensuring that the two print operations happen at different times (preventing their output from being intermingled), it's useful to add locking. This does involve a temporary file, but does not write your keying material to disk (avoidance of which is the most compelling reason to avoid temporary file usage):
lockfile=$(mktemp -t output.lck.XXXXXX)
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >(in=$(cat); exec 99>"$lockfile" && flock -x 99 && printf 'keyout=%q\n' "$in") \
-out >(in=$(cat); exec 99>"$lockfile" && flock -x 99 && printf 'out=%q\n' "$in") )"
Note that we're only blocking for the write, not the read, so we can't deadlock (i.e. a situation where openssl never finishes writing file A because it's blocked on a write to file B, which can never complete because the subshell on the read side of the file-A write holds the lock).
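Once captured, the variables can be fed straight back into other commands without ever touching disk, for example by using process substitution again (a sketch, assuming the keyout and out variables set above):
# inspect the captured certificate without writing it to a file
openssl x509 -noout -subject -dates -in <(printf '%s\n' "$out")
# likewise for the captured private key
openssl pkey -noout -text -in <(printf '%s\n' "$keyout")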

Openssl: cat: /dev/fd/63: No such file or directory

I try to create a Certificate Signing Request (CSR) using
openssl req -new -sha256 -key domain.key -subj "/" \ -reqexts SAN -config <(cat /System/Library/OpenSSL/openssl.cnf \ <(printf "[SAN]\nsubjectAltName=DNS:foo.com,DNS:www.foo.com"))
but getting the following error message on my macbook
cat: /dev/fd/63: No such file or directory
unknown option -reqexts
Any ideas?
Your command is probably a copy-paste of a multi-line command in which backslashes are used to join lines, but the newlines somehow got converted to spaces.
Remove all occurrences of "backslash + space".
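With the stray "backslash + space" sequences removed (or the command laid out on properly continued lines), it would read something like:
openssl req -new -sha256 -key domain.key -subj "/" \
-reqexts SAN -config <(cat /System/Library/OpenSSL/openssl.cnf \
<(printf "[SAN]\nsubjectAltName=DNS:foo.com,DNS:www.foo.com"))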
I came across this question, and in my case I did not have any stray \, but I was using sudo, so I had to use sudo su -c:
$ sudo su -c 'openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-blah.blah.com.key -new -out /etc/ssl/certs/nginx-blah.blah.com.crt -subj "/O=blah.com/OU=blah/CN=blah.blah.com" -reqexts SAN -extensions SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:blah.blah.com,IP:192.168.174.128")) -sha256'
In some cases an unmounted devpts can lead to the same error, in which case you need to do:
mount -t devpts devpts /dev/pts

OpenSSL CommandLine Windows Fully Updated

openssl enc -e -bf -in X:\a.jpg -out X:\a -kfile Y:\password.txt
or
openssl enc -e -bf -in X:\a.jpg -out X:\a -k password
I get:
### is some number, always different
###:error:20074002:BIO routines:FILE_CTRL:system lib:.\crypto\bio\bss_file.c:400:
It would seem it does not like writing to drives. It used to work until I updated; even then it was sort of iffy.
I have tried every Windows admin-rights approach I can think of: http://www.mydigitallife.info/how-to-open-elevated-command-prompt-with-administrator-privileges-in-windows-vista/
