How to use GNU parallel on Multiple Computers - parallel-processing

I wanted to use GNU parallel on my two computers. I was successful at running parallel on one computer, but I was unable to run it on the remote computer.
Version: Local: parallel-20140222, Remote: parallel-20130522. I had enabled passwordless SSH login.
parallel -j+0 --eta 'muscle -in {} -out {.}.aln -quiet' < list
But when I tried to run on the remote computer in parallel using the following commands,
1) time parallel -j+0 --eta -Svaramesh@10.117.173.5,: -transfer, --return {.}.aln --cleanup 'muscle -in {} -out {.}.aln -quiet' < list
2) time parallel -j+0 --eta -S10.117.173.5,: -transfer, --return {.}.aln --cleanup 'muscle -in {} -out {.}.aln -quiet' < list
3) time parallel -j+0 --eta -S :,10.117.10.5 -transfer, --return {.}.aln --cleanup 'muscle -in {} -out {.}.aln -quiet' < list
All of them give the following error:
parallel: Error: Cannot open input file `nsfer,': No such file or directory.

Transfer has a double dash and no comma: --transfer
You may want to use the shorthand for --transfer --return --cleanup: --trc {.}.aln
And since you do not have special shell characters, you do not need ' around muscle -in {} -out {.}.aln -quiet.
If you like --eta you might want to try out --bar too.
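Putting those points together (host address and username taken from the question), the corrected call would look something like this:
time parallel -j+0 --eta -S varamesh@10.117.173.5,: --trc {.}.aln muscle -in {} -out {.}.aln -quiet < list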

Related

Shell-script to decrypt file

I have to create a shell script that can decrypt an RSA key file that is encrypted with a specific .pem file, and then decrypt a zip file with the AES key I get once the RSA file is decrypted into a file named keyaes (or whatever you want).
Here are the two commands I have to use
openssl rsautl -decrypt -in AES_KEY -inkey CERTIFICATE.pem -out keyaes
openssl enc -d -aes-256-cbc -in zipfile.zip -out extraction.zip -nosalt -p -K RSA_KEY_from_key_aes_output -iv 0
The commands work perfectly; the problem is that in my script I don't know how to automate this and get the key from the keyaes output into the next command properly.
How can I do it?
You could just use bash command substitution in the second command, using backticks:
openssl enc -d -aes-256-cbc -in zipfile.zip -out extraction.zip -nosalt -p -K `cat output_filename_with_aes_key` -iv 0
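If you prefer the $( ... ) form of command substitution, a minimal script tying the two commands together (file names taken from the question) could look like this:
#!/bin/sh
# 1) decrypt the AES key with the RSA key material
openssl rsautl -decrypt -in AES_KEY -inkey CERTIFICATE.pem -out keyaes
# 2) feed the recovered key straight into the second command
openssl enc -d -aes-256-cbc -in zipfile.zip -out extraction.zip -nosalt -p -K "$(cat keyaes)" -iv 0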

Hiding sensitive ruby shell commands

I'm using fastlane and the sh command to decrypt some credentials, but it seems Ruby prints the command in the logs. How do I hide the sensitive information from the logs?
cmd_decrypt = "openssl enc -aes-256-cbc -d -a -k \"#{ENV["MATCH_PASSWORD"]}\" -in #{enc_file} -out #{dec_file[0]}"
sh(cmd_decrypt)
output:
[09:38:15]: --------------------------------------------------------------------
[09:38:15]: Step: openssl enc -aes-256-cbc -d -a -k "PASSWORD_SHOWN!" -in /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/zz-out /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/xx
[09:38:15]: --------------------------------------------------------------------
[09:38:15]: $ openssl enc -aes-256-cbc -d -a -k "PASSWORD_SHOWN!" -in /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/zz -out /var/folders/7g/yy/T/d20190925-1304-1qv6cj1/vault/xx
You can pass sh extra parameters. In this case, you would call it like this:
sh(cmd_decrypt, log: false)
The documentation for sh is here: https://docs.fastlane.tools/actions/sh/
You can get the docs for other built-in actions here:
https://docs.fastlane.tools/actions/
And the docs for other plugins' actions here: https://docs.fastlane.tools/plugins/available-plugins/
Since you have an environment variable, why not just run with that?
cmd_decrypt = "openssl enc -aes-256-cbc -d -a -k \"$MATCH_PASSWORD\" -in #{enc_file} -out #{dec_file[0]}"
sh(cmd_decrypt)
From there, shell interpolation should take over and make it work. One thing to note is that your -in parameter doesn't have shell escaping, which it usually should have; that can be done with shellescape.
You really should be specifying these as separate arguments whenever possible, though, to avoid injection issues. The problem is that you lose shell interpolation at that point.
The good news is you can always write a wrapper script to provide safety and ease of use, something like:
#!/bin/sh
# descrypt.sh
openssl enc -aes-256-cbc -d -a -k "$MATCH_PASSWORD" -in "$1" -out "$2"
So then you can call it like this:
sh('descrypt.sh', enc_file, dec_file[0])
As a bonus, what gets logged is now a lot quieter too. You can pick which arguments to pass through, or even throw them all through with $*.
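If you do want to forward everything, a variant of the wrapper (just a sketch, with a hypothetical name) could pass all of its arguments straight through; quoting "$@" keeps each argument intact, which is a little safer than an unquoted $*:
#!/bin/sh
# descrypt-all.sh (hypothetical variant): forward every argument to openssl unchanged
openssl enc -aes-256-cbc -d -a -k "$MATCH_PASSWORD" "$@"
You would then call it with each flag and path as a separate argument, e.g. sh('descrypt-all.sh', '-in', enc_file, '-out', dec_file[0]).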

Shell: Capturing output files in variables

Commands like openssl have arguments like -out <file> for output files. I'd like to capture the content of these output files in shell variables for use in other commands without creating temporary files. For example, to generate a self-signed certificate, one can use:
openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout KEYFILE -out CERTFILE 2>/dev/null
The closest I've got to capturing both output files is to echo them to stdout via process substitution, but this is not ideal because one would still have to parse them apart.
openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout >(key=$(cat -); echo $key) -out >(cert=$(cat -); echo $cert) 2>/dev/null
Is there a clean way to capture the content of output files in shell variables?
Most modern shells now support /dev/stdout as a file name to redirect to stdout. This is good enough for single file solutions but for two output files you need to go with "process substitution".
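For the single-file case a sketch like this is enough (the key is sent to /dev/null here purely to keep the example down to one output):
cert=$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
        -keyout /dev/null -out /dev/stdout 2>/dev/null )
For two output files, the process-substitution version looks like this: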
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout >(echo "keyout='$( cat )'" ) -out >(echo out="'$( cat )'" ) )"
This uses process substitution to direct each "file" to a separate process that prints to stdout an assignment of the calculated values. The whole thing is then passed to an eval to do the actual assignments.
I keep the stderr output so that any error messages that pop up still show; it's useful to have them logged when things go wrong.
Edit: incorporating Charles Duffy's good paranoia:
flockf="$(mktemp -t tmp.lock.$$.XXXXXX )" || exit $?;
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >( set -x; 99>"$flockf" && \
flock -x "$flockf" printf "keyout=%q " "$( cat )"; ) \
-out >( set -x; 99>"$flockf" && \
flock -x "$flockf" printf "out=%q " "$( cat )"; ) \
)" ;
rm -f "$flockf"
An extension to Gilbert's answer providing additional paranoia:
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >(printf 'keyout=%q\n' "$(</dev/stdin)") \
-out >(printf 'out=%q\n' "$(</dev/stdin)") )"
(Note that this is not suitable if your data contains NULs, which bash cannot store in a native shell variable; in that case, you'll want to assign the contents to your variables in base64-encoded form).
Unlike echo "keyout='$(cat)'", printf 'keyout=%q\n' "$(cat)" ensures that even malicious contents cannot be evaluated by the shell as a command substitution, a redirection, or anything other than literal data.
To explain why this is necessary, let's take a simplified case:
write_to_two_files() { printf 'one\n' >"$1"; printf 'two\n' >"$2"; }
write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'")
...we get output akin to (but with no particular ordering):
two='two'
one='one'
...which, when evaled, sets two variables:
$ eval "$(write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'"))"
$ declare -p one two
declare -- one="one"
declare -- two="two"
However, let's say that our program behaves a bit differently:
## Demonstrate why eval'ing content created by echoing data is dangerous
write_to_two_files() {
printf "'%s'\n" '$(touch /tmp/i-pwned-your-box)' >"$1"
echo "two" >"$2"
}
eval "$(write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'"))"
ls -l /tmp/i-pwned-your-box
Instead of merely assigning the output to a variable, we evaluated it as code.
If you also want to ensure that the two print operations happen at different times (preventing their output from being intermingled), it's useful to add locking. This does involve a temporary file, but it does not write your keying material to disk (avoiding that is the most compelling reason to avoid temporary files):
lockfile=$(mktemp -t output.lck.XXXXXX)
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >(in=$(cat); exec 99>"$lockfile" && flock -x 99 && printf 'keyout=%q\n' "$in") \
-out >(in=$(cat); exec 99>"$lockfile" && flock -x 99 && printf 'out=%q\n' "$in") )"
Note that we're only blocking for the write, not the read, so we can't get into a deadlock (i.e. where openssl never finishes writing to file A because it's blocked on a write to file B, which in turn can never complete because the subshell on the reading side of the file-A write holds the lock).

Verify SSL certificate against various CRL files

I am given multiple certificate files, e.g. "cert1.crt", "cert2.crt", etc., and multiple CRL lists, "list1.crl", "list2.crl", etc. No root CA or any other type of file is provided. My task is to find out which certificates have NOT been revoked. Despite an extensive search for a "verification" command, I failed to find any command or procedure that would give me even a clue. In the end, I managed some bash script aerobatics that let me manually test a certificate's serial number against each .crl file:
for((i=1;i<9;i++))
do
echo $i
fileIn="crl"$i".crl"
#serial is manually c/p from each .crt file
serial="1319447396"
OUTPUT="$(openssl crl -in $fileIn -noout -text | grep $serial)"
echo $OUTPUT
done
This way I can do it manually, one certificate at a time, but it only works for a small number of files (9 at present). With tens of files it would get tiresome and inefficient, and with 100+ it would be impossible to do it like this.
I was wondering: is there a "smart" way to validate a .crt against a .crl? Or at least, is there a way to script the job so I wouldn't have to check each .crt manually? Right now it's way beyond my scripting knowledge.
So, in pseudo, I would be thrilled if something like this existed:
openssl x509 -verify cert1.cert -crl_list list8.crl
In general, yes, each certificate is checked against a CRL, as is detailed in this guide.
Actually, each CRL is simply a list of revoked certificate serial numbers.
The list contained in a CRL can be expanded with:
openssl crl -inform DER -text -noout -in mycrl.crl
Assuming the CRL is in DER form (adapt as needed).
Expand each CRL to a text file, like:
openssl crl -inform DER -text -noout -in mycrl.crl > mycrl.crl.txt
The output file can be reduced to only the Serial Number: lines.
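A sketch of that reduction, keeping just the revoked serial numbers (the full script below does the same job with awk):
openssl crl -inform DER -text -noout -in mycrl.crl | grep 'Serial Number:' > mycrl.crl.txt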
Get the Serial Number from the text expansion of a cert:
mycrt=$(openssl x509 -in mycrt.com.crt -serial -noout)
mycrt=${mycrt#*=}
grep for the serial number in all the text files from step one (if one matches, the cert is revoked) in a single call to grep:
if grep -rl "$mycrt" *.crl.txt 2>/dev/null; then
echo "the certificate has been revoked"
fi
Full script:
#!/bin/bash
# Create (if they don't exist) files for all the crl given.
for crl in *.crl; do
if [[ ! -e "$crl.txt" ]]; then
openssl crl -inform DER -text -noout -in "$crl" |
awk -F ': ' '/Serial Number:/{print $2}'> "$crl.txt"
fi
done
# Process all certificates
for crt in *.crt; do
mycrt=$(openssl x509 -in "$crt" -serial -noout)
mycrt=${mycrt#*=}
if grep -rl "$mycrt" *.crl.txt; then
echo "Certificate $crt has been revoked"
fi
done
I finally managed to solve this in a way that's maybe not optimal, but requires much less bash knowledge. Here is my script:
#!/bin/bash
for((j=1;j<10;j++))
do
indicator=0
cert="cert"$j".crt"
for((i=1;i<9;i++))
do
infile="crl"$i".crl"
SERIAL="$(openssl x509 -noout -text -in "$cert" | grep Serial | cut -d 'x' -f 2 | cut -d ')' -f 1)"
OUTPUT="$(openssl crl -inform DER -in "$infile" -noout -text | grep "$SERIAL")"
if [ -n "$OUTPUT" ]
then ((indicator++))
fi
done
echo $cert
if [ $indicator == 0 ]
then echo "not revoked"
else
echo "revoked"
fi
done

OpenSSL CommandLine Windows Fully Updated

openssl enc -e -bf -in X:\a.jpg -out X:\a -kfile Y:\password.txt
or
openssl enc -e -bf -in X:\a.jpg -out X:\a -k password
I get:
### is some number that is always different
###:error20074002:BIO routines:FILE_CTRL:system lib:.\crypto\bio\bss_file.C:400:
It would seem it does not like writing to drives. It used to work until I updated; even then it was sort of iffy.
I have tried every Windows admin-rights trick I can think of: http://www.mydigitallife.info/how-to-open-elevated-command-prompt-with-administrator-privileges-in-windows-vista/
