Shell: Capturing output files in variables - bash

Commands like openssl have arguments like -out <file> for output files. I'd like to capture the content of these output files in shell variables for use in other commands without creating temporary files. For example, to generate a self-signed certificate, one can use:
openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout KEYFILE -out CERTFILE 2>/dev/null
The closest I've got to capturing both output files is to echo them to stdout via process substitution, but this is not ideal because one would still have to parse them apart.
openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout >(key=$(cat -); echo $key) -out >(cert=$(cat -); echo $cert) 2>/dev/null
Is there a clean way to capture the content of output files in shell variables?

Most modern shells now support /dev/stdout as a file name that redirects to stdout. This is good enough for single-file solutions, but for two output files you need to go with process substitution.
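For a single output file, a minimal sketch (assuming a Linux-style /dev/stdout, and throwing the private key away purely for illustration):
# Capture only the certificate: send -out to /dev/stdout inside a command substitution
cert=$(openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
    -keyout /dev/null -out /dev/stdout 2>/dev/null)
For two output files, process substitution plus eval does the job: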
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout >(echo "keyout='$( cat )'" ) -out >(echo out="'$( cat )'" ) )"
This uses process substitution to direct each "file" to a separate process that prints to stdout an assignment of the calculated values. The whole thing is then passed to an eval to do the actual assignments.
Stderr is left alone so that any error messages still show up; useful for logging when troubleshooting.
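After the eval, the captured contents are in ordinary shell variables; a quick sanity check might look like this:
# keyout and out are the names assigned by the generated code above;
# print the first line of each PEM blob to confirm the capture worked
printf '%s\n' "$keyout" | head -n 1
printf '%s\n' "$out" | head -n 1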
Edit: incorporating Charles Duffy's good paranoia:
flockf="$(mktemp -t tmp.lock.$$.XXXXXX )" || exit $?;
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >( set -x; 99>"$flockf" && \
flock -x "$flockf" printf "keyout=%q " "$( cat )"; ) \
-out >( set -x; 99>"$flockf" && \
flock -x "$flockf" printf "out=%q " "$( cat )"; ) \
)" ;
rm -f "$flockf"

An extension to Gilbert's answer providing additional paranoia:
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >(printf 'keyout=%q\n' "$(</dev/stdin)") \
-out >(printf 'out=%q\n' "$(</dev/stdin)") )"
(Note that this is not suitable if your data contains NULs, which bash cannot store in a native shell variable; in that case, you'll want to assign the contents to your variables in base64-encoded form).
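A minimal sketch of that base64 variant (the _b64 variable names are illustrative, and a base64 utility is assumed to be on PATH):
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
    -keyout >(printf 'keyout_b64=%q\n' "$(base64)") \
    -out >(printf 'out_b64=%q\n' "$(base64)") )"
# decode on use, piping the raw bytes wherever they're needed
base64 -d <<<"$keyout_b64"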
Unlike echo "keyout='$(cat)'", printf 'keyout=%q\n' "$(cat)" ensures that even malicious contents cannot be evaluated by the shell as a command substitution, a redirection, or anything other than literal data.
To explain why this is necessary, let's take a simplified case:
write_to_two_files() { printf 'one\n' >"$1"; printf 'two\n' >"$2"; }
write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'")
...we get output akin to (but with no particular ordering):
two='two'
one='one'
...which, when evaled, sets two variables:
$ eval "$(write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'"))"
$ declare -p one two
declare -- one="one"
declare -- two="two"
However, let's say that our program behaves a bit differently:
## Demonstrate why eval'ing content created by echoing data is dangerous
write_to_two_files() {
  printf "'%s'\n" '$(touch /tmp/i-pwned-your-box)' >"$1"
  echo "two" >"$2"
}
eval "$(write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'"))"
ls -l /tmp/i-pwned-your-box
Instead of merely assigning the output to a variable, we evaluated it as code.
If you also want to ensure that the two print operations happen at different times (preventing their output from being intermingled), it's useful to add locking. This does involve a temporary file, but it does not write your keying material to disk (avoiding that is the most compelling reason to avoid temporary files in the first place):
lockfile=$(mktemp -t output.lck.XXXXXX)
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >(in=$(cat); exec 99>"$lockfile" && flock -x 99 && printf 'keyout=%q\n' "$in") \
-out >(in=$(cat); exec 99>"$lockfile" && flock -x 99 && printf 'out=%q\n' "$in") )"
Note that we only lock around the write, not the read, so we can't get into race conditions (i.e. where openssl hasn't finished writing to file A because it's blocked on a write to file B, which can never complete because the subshell on the read side of the file-A write holds the lock).

Related

Shell-script to decrypt file

I have to create a shell script that decrypts an AES key file (encrypted with RSA) using a specific .pem file, and then decrypts a zip file with the AES key I get once the first file is decrypted into a file named keyaes (or whatever you want).
Here are the two commands I have to use
openssl rsautl -decrypt -in AES_KEY -inkey CERTIFICATE.pem -out keyaes
openssl enc -d -aes-256-cbc -in zipfile.zip -out extraction.zip -nosalt -p -K RSA_KEY_from_key_aes_output -iv 0
The commands work perfectly; the problem is that in my script I don't know how to automate this and feed the key from the keyaes output into the next command properly.
How can I do it?
You could just use bash command substitution in the second command, using backticks:
openssl enc -d -aes-256-cbc -in zipfile.zip -out extraction.zip -nosalt -p -K `cat output_filename_with_aes_key` -iv 0
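The same command with the $(...) form of command substitution (usually easier to read and nest than backticks), quoted and using the keyaes file name from the question:
openssl enc -d -aes-256-cbc -in zipfile.zip -out extraction.zip -nosalt -p \
    -K "$(cat keyaes)" -iv 0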

Hiding sensitive ruby shell commands

I'm using fastlane and the sh command to decrypt some credentials, but it seems Ruby prints the command in the logs. How do I hide the sensitive information from the logs?
cmd_decrypt = "openssl enc -aes-256-cbc -d -a -k \"#{ENV["MATCH_PASSWORD"]}\" -in #{enc_file} -out #{dec_file[0]}"
sh(cmd_decrypt)
output:
[09:38:15]: --------------------------------------------------------------------
[09:38:15]: Step: openssl enc -aes-256-cbc -d -a -k "PASSWORD_SHOWN!" -in /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/zz -out /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/xx
[09:38:15]: --------------------------------------------------------------------
[09:38:15]: $ openssl enc -aes-256-cbc -d -a -k "PASSWORD_SHOWN!" -in /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/zz -out /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/xx
You can pass sh extra parameters. In this case, you would call it like this:
sh(cmd_decrypt, log: false)
The documentation for sh is here: https://docs.fastlane.tools/actions/sh/
You can get the docs for other built-in actions here:
https://docs.fastlane.tools/actions/
And the docs for other plugins' actions here: https://docs.fastlane.tools/plugins/available-plugins/
Since you have an environment variable, why not just run with that?
cmd_decrypt = "openssl enc -aes-256-cbc -d -a -k \"$MATCH_PASSWORD\" -in #{enc_file} -out #{dec_file[0]}"
sh(cmd_decrypt)
From there, shell interpolation should take over and make it work. One thing to note is that your -in parameter isn't shell-escaped, which it usually should be; that can be done with shellescape.
You really should be specifying these as separate arguments, though, whenever possible to avoid injection issues. The problem is you lose shell interpolation at that point.
The good news is you can always write a wrapper script to provide safety and ease of use, something like:
#!/bin/sh
# descrypt.sh
openssl enc -aes-256-cbc -d -a -k "$MATCH_PASSWORD" -in "$1" -out "$2"
So then you can call it like this:
sh('descrypt.sh', enc_file, dec_file[0])
As a bonus, what gets logged is now a lot quieter. You can pick which arguments to pass through, or even pass them all through with $*.
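A sketch of that pass-everything-through variant (using "$@" rather than $* so that arguments containing spaces survive intact; the script name is just an example):
#!/bin/sh
# decrypt_all.sh - forwards every argument straight to openssl,
# keeping the password out of the logged command line
openssl enc -aes-256-cbc -d -a -k "$MATCH_PASSWORD" "$@"
It could then be called in the same style as above, e.g. sh('decrypt_all.sh', '-in', enc_file, '-out', dec_file[0]).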

Can I use command substitution in git bash?

I love Git Bash because it puts all the powerful tools on a CLI for Windows. I'm trying to use it with openssl to generate a CSR and key at the same time, and I'd like to do it all in one command. Here's what I have so far:
openssl req -new -sha256 \
-newkey ec:<(openssl ecparam -name prime256v1) -keyout site.key \
-batch -out site.csr -utf8 \
-subj '//C=US\ST=State\L=City\O=organization\OU=org unit\CN=site.com\emailAddress=admin#site.com' \
-addext 'subjectAltName=DNS:site.com,DNS:www.site.com'
My problem is that git bash doesn't handle the command substitution properly (or I'm doing it wrong). When I run the above, I get
Can't open parameter file /dev/fd/63
15160:error:02001003:system library:fopen:No such process:../openssl-1.1.1a/crypto/bio/bss_file.c:72:fopen('/dev/fd/63','r')
15160:error:2006D080:BIO routines:BIO_new_file:no such file:../openssl-1.1.1a/crypto/bio/bss_file.c:79:
According to the manual,
All other algorithms support the -newkey alg:file form, where file may be an algorithm parameter file, created by the genpkey -genparam command...
so I think I'm looking to feed it a file after ec:; I'm just not sure how. (BTW, I took direction from this answer; although it's written for standard bash, I was hoping it would work.)
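One possible workaround, sketched under the assumption that a temporary parameter file is acceptable: create the EC parameter file explicitly (the -newkey alg:file form the manual describes) and delete it afterwards.
# Write the EC parameters to a real file instead of relying on /dev/fd
ecparams=$(mktemp) || exit 1
openssl ecparam -name prime256v1 -out "$ecparams"
openssl req -new -sha256 \
    -newkey ec:"$ecparams" -keyout site.key \
    -batch -out site.csr -utf8 \
    -subj '//C=US\ST=State\L=City\O=organization\OU=org unit\CN=site.com\emailAddress=admin#site.com' \
    -addext 'subjectAltName=DNS:site.com,DNS:www.site.com'
rm -f "$ecparams"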

Syntax error: "(" unexpected error while creating an openssl certificate using a bash command

Using the instructions over here to define the SAN field inside an openssl certificate, I am using the following commands to generate my own self-signed certificate:
openssl genrsa -out domain.org.key
openssl req -newkey rsa:2048 -nodes -keyout domain.org.key -subj "/C=CN/ST=GD/L=SZ/O=Acme, Inc./CN=*.domain.org" -out domain.org.csr
openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain.org,DNS:www.domain.org") -days 365 -in domain.org.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out domain.org.crt
However, I am getting the following error:
Syntax error: "(" unexpected
I don't see anything specifically wrong with the bash syntax used, could anyone help?
That error message doesn't look like Bash to me; rather, Bash error messages look like this:
bash: syntax error near unexpected token `('
I recommend double-checking that you're running these commands in Bash, and not a different shell. (Process substitution isn't specified by POSIX, so not all shells support it.)
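One quick way to check which shell is actually interpreting your commands (a sketch; output format varies by system):
# Show the name of the current shell process
ps -p $$ -o comm=
# In an interactive session this usually works too
echo "$0"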
If it turns out that Bash is not available, you can use a temporary file:
printf "subjectAltName=DNS:domain.org,DNS:www.domain.org" > tmp-ext-file
openssl x509 -req -extfile tmp-ext-file -days 365 -in domain.org.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out domain.org.crt
or standard input:
printf "subjectAltName=DNS:domain.org,DNS:www.domain.org" \
| openssl x509 -req -extfile /dev/stdin -days 365 -in domain.org.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out domain.org.crt

Openssl: cat: /dev/fd/63: No such file or directory

I'm trying to create a Certificate Signing Request (CSR) using
openssl req -new -sha256 -key domain.key -subj "/" \ -reqexts SAN -config <(cat /System/Library/OpenSSL/openssl.cnf \ <(printf "[SAN]\nsubjectAltName=DNS:foo.com,DNS:www.foo.com"))
but I'm getting the following error message on my MacBook:
cat: /dev/fd/63: No such file or directory
unknown option -reqexts
Any ideas?
Your command is probably a copy-and-paste of a multi-line command where backslashes are used to join lines; however, you somehow managed to get the newlines converted to spaces.
Remove all occurrences of "backslash + space".
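For reference, the same command with the stray "backslash + space" sequences removed (or, equivalently, restored as a genuine multi-line command) looks like this:
openssl req -new -sha256 -key domain.key -subj "/" \
    -reqexts SAN -config <(cat /System/Library/OpenSSL/openssl.cnf \
    <(printf "[SAN]\nsubjectAltName=DNS:foo.com,DNS:www.foo.com"))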
I came across this question, and in my case I did not have any stray \, but I was using sudo, so I had to use sudo su -c:
$ sudo su -c 'openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-blah.blah.com.key -new -out /etc/ssl/certs/nginx-blah.blah.com.crt -subj "/O=blah.com/OU=blah/CN=blah.blah.com" -reqexts SAN -extensions SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:blah.blah.com,IP:192.168.174.128")) -sha256'
In some cases, devpts not being mounted can lead to the same error. If so, you need to:
mount -t devpts devpts /dev/pts
