How to transfer tokens using a multisig authority on Solana v1.10.25

I am trying to transfer tokens using a multisig authority on Solana v1.10.25. I have written a script which creates all the necessary dependencies for this process and runs them against a local validator. Unfortunately, when I attempt to run the spl-token transfer command, I get the error error: invalid account data.
The sequence of commands:
#!/bin/bash
#
# # Usage
# "./multisig-transfer.sh"
#
set -e
function gen_kp {
# Generates a keypair and returns its path to stdout
local name="${1}"
local path="${dir}/${name}-kp.json"
solana-keygen new --no-passphrase -o "${path}" &> /dev/null
pubkey=$(solana-keygen pubkey "${path}")
echo "${name}: ${pubkey}" 1>&2
echo "${path}"
}
# random prefix for all files
dir=$(tr -dc '[:alpha:]' < /dev/urandom | fold -w "${1:-5}" | head -n 1)
dir="runs/${dir}"
mkdir -p "${dir}"
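As an aside, the urandom/fold pipeline above works, but mktemp can create a unique run directory in one step; a minimal sketch:

```shell
# Create runs/ and a unique subdirectory beneath it in one step;
# mktemp fills the XXXXXX template with random characters and
# guarantees the directory is freshly created.
mkdir -p runs
dir=$(mktemp -d runs/XXXXXX)
echo "run directory: ${dir}"
```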
printf "\n\n----- pin solana -----\n\n"
SOLANA_VERSION="1.10.25"
solana --version >/dev/null 2>&1 || sh -c "$(curl -sSfL https://release.solana.com/${SOLANA_VERSION}/install)"
solana --version | grep "${SOLANA_VERSION}" || solana-install init "${SOLANA_VERSION}"
printf "\n\n----- point CLI to local validator -----\n\n"
solana config set --url http://127.0.0.1:8899
printf "\n\n----- create a fee payer -----\n\n"
fp=$(gen_kp "fee-payer")
solana airdrop 100 -k "${fp}"
printf "\n\n----- prepare multisig authorities -----\n\n"
auth1_kp=$(gen_kp "auth1")
auth2_kp=$(gen_kp "auth2")
auth3_kp=$(gen_kp "auth3")
printf "\n\n----- create multisig -----\n\n"
multisig_kp=$(gen_kp "multisig")
spl-token create-multisig 2 "${auth1_kp}" "${auth2_kp}" "${auth3_kp}" \
--fee-payer "${fp}" \
--address-keypair "${multisig_kp}"
printf "\n\n----- create mint -----\n\n"
mint_kp=$(gen_kp "mint")
mint_auth_kp=$(gen_kp "mint-auth")
spl-token create-token "${mint_kp}" \
--mint-authority "${mint_auth_kp}" \
--fee-payer "${fp}"
printf "\n\n----- create source token account -----\n\n"
source_acc_kp=$(gen_kp "source-acc")
spl-token create-account "${mint_kp}" "${source_acc_kp}" \
--owner "${multisig_kp}" \
--fee-payer "${fp}"
printf "\n\n----- create target token account -----\n\n"
target_acc_kp=$(gen_kp "target-acc")
target_acc_owner=$(gen_kp "target-acc-owner")
spl-token create-account "${mint_kp}" "${target_acc_kp}" \
--owner "${target_acc_owner}" \
--fee-payer "${fp}"
printf "\n\n----- mint to source token account -----\n\n"
spl-token mint "${mint_kp}" 10 "${source_acc_kp}" \
--mint-authority "${mint_auth_kp}" \
--fee-payer "${fp}"
printf "\n\n----- create a nonce account -----\n\n"
nonce_kp=$(gen_kp "nonce")
nonce_auth_kp=$(gen_kp "nonce-auth")
solana create-nonce-account "${nonce_kp}" 1 \
--nonce-authority "${nonce_auth_kp}" \
-k "${fp}"
blockhash=$(solana nonce "${nonce_kp}")
printf "\n\n----- spl accounts info -----\n\n"
spl-token multisig-info "${multisig_kp}"
spl-token account-info --address "${source_acc_kp}"
spl-token account-info --address "${target_acc_kp}"
printf "\n\n----- multisig transfer -----\n\n"
transfer_cmd="spl-token transfer ${mint_kp} 10 ${target_acc_kp} \
--from ${source_acc_kp} \
--owner ${multisig_kp} \
--multisig-signer ${auth1_kp} \
--multisig-signer ${auth2_kp} \
--multisig-signer ${auth3_kp} \
--blockhash ${blockhash} \
--fee-payer ${fp} \
--nonce ${nonce_kp} \
--nonce-authority ${nonce_auth_kp}"
# skip first 3 lines to get the list of signers in format
# pubkey1=signhash1
# pubkey2=signhash2
# ...
transfer_signers_lines=$( eval "${transfer_cmd} --mint-decimals 9 --sign-only" | tail -n +4 )
signers_flags=""
while IFS= read -r line; do
signers_flags="${signers_flags} --signer ${line}"
done <<< "$transfer_signers_lines"
eval "${transfer_cmd} ${signers_flags}"
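The --sign-only output parsing can be exercised in isolation; a sketch with a mocked signer list (the pubkey1=sighash1 lines are fabricated placeholders for real pubkey=signature pairs):

```shell
# Mocked output of `spl-token transfer ... --sign-only`, already
# stripped of its leading summary lines (tail -n +4 in the script).
transfer_signers_lines='pubkey1=sighash1
pubkey2=sighash2'

# Turn each "pubkey=signature" line into a --signer flag,
# exactly as the script above does.
signers_flags=""
while IFS= read -r line; do
  signers_flags="${signers_flags} --signer ${line}"
done <<< "$transfer_signers_lines"

echo "$signers_flags"
# prints (note the leading space):
#  --signer pubkey1=sighash1 --signer pubkey2=sighash2
```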
And here's the output, which ends with the error mentioned above:
----- pin solana -----
solana-cli 1.10.25 (src:d64f6808; feat:965221688)
----- point CLI to local validator -----
Config File: /home/xxx/.config/solana/cli/config.yml
RPC URL: http://127.0.0.1:8899
WebSocket URL: ws://127.0.0.1:8900/ (computed)
Keypair Path: /home/xxx/.config/solana/id.json
Commitment: confirmed
----- create a fee payer -----
fee-payer: 4UihfiAJFJbkzcvqbCxhf2UyDCKYKDaEPYsmQMHW6UoK
Requesting airdrop of 100 SOL
Signature: 5QtszEFNNcAqTToPbS3HaVXoPtTgjBKNaWY7oaoYDZRmVof31euzAhZvhYQ2TveZ7EzBD4B4Zmb4ue1qgX1yfytm
100 SOL
----- prepare multisig authorities -----
auth1: 1WdKMVYxeSc3bFQyUaC726EHiEHkT8xdxK6Tju9WyrK
auth2: HiH2SGB8SsTAACLj8hLkXeLzXqVWRUCxWMo4XGVT5H3w
auth3: 46VhbvYvNaFZ6QyGaB8asvAZw2WxxmrZb8Fvx9ibL5ir
----- create multisig -----
multisig: FxJi77UjbeQH6wMqBgpJr93Y6Y8Kq5WMom9svHupBGHV
Creating 2/3 multisig FxJi77UjbeQH6wMqBgpJr93Y6Y8Kq5WMom9svHupBGHV
Signature: CjyFBA7Yh7eoxksvFr3UwAonjsBj6EtpRkbbKYeSvZUi5cb3u8ukCJTFDkJFdRCQRj9fy1TCXVBekd7TnDXHa6Z
----- create mint -----
mint: DukyrQsNTPED4mhoxwAoiegCaknqzM4x7hB1UPnuAnYG
mint-auth: J4xzof5UjNGD2VvwjbdTNLwBWsvHpJCMH3VNTNKX2Bpw
Creating token DukyrQsNTPED4mhoxwAoiegCaknqzM4x7hB1UPnuAnYG
Signature: 3WMLSHVGGDounFFgGR6C21qdKjezaiavnzfaHRiK9hdSVczCDgThZ2jkyHVSwjEDgHhCzC2MYvCTYV2DxRcx2bL2
----- create source token account -----
source-acc: 56TeLuYYyevRDEBkYgFZg3BaSU7ovoo7B9E2QrZtro4Z
Creating account 56TeLuYYyevRDEBkYgFZg3BaSU7ovoo7B9E2QrZtro4Z
Signature: 3qtgQgeJCPy7N9xYq5iTafVMWYfoBX5PyY5AWAEC5ybX7auyPmTzRNACJBCib7qyoN5BjWgwhRURB6hfoFD22i5i
----- create target token account -----
target-acc: G2moZysTkpEtwPjQGbUFTXMc8N7BEWcjhh6yYpvuhnCW
target-acc-owner: 49ethEAs8j3Lssgx9kfEDEXZd5KabyABzxHYBEWxSwpS
Creating account G2moZysTkpEtwPjQGbUFTXMc8N7BEWcjhh6yYpvuhnCW
Signature: 63vS5AdX6HtTRxWZD5nxbsVqNwGCoATDYLeLKUoEjygZEr7fEpFzKW4bva6utpAA4psZuyuvDwsj2hTK29SsvHAq
----- mint to source token account -----
Minting 10 tokens
Token: DukyrQsNTPED4mhoxwAoiegCaknqzM4x7hB1UPnuAnYG
Recipient: 56TeLuYYyevRDEBkYgFZg3BaSU7ovoo7B9E2QrZtro4Z
Signature: 2c2ttnZxe2Qi6T6s123L5MPdEovbwCxf9VCECQ9v84CzMvbcRyPRfGvoVAVAZenR2VpuzAgc2LHRPds5yVq2itBe
----- create a nonce account -----
nonce: AjVABrmzMtnzwFiggfNLGbuXpAfUWV9opAXrXpTfcseN
nonce-auth: GYNQE8RQyo5C3Ejj945FQhPaqUGxytRt7yVGbGyn2qZN
Signature: CPnr8nnrRRk93wtciQ2LboHL8EnFGYqsn2znprZoTWFAgfFzBN68jQTzr7hxR8Y9WbwdLb9E9aYTTyyMVeQF1Ay
----- spl accounts info -----
Address: FxJi77UjbeQH6wMqBgpJr93Y6Y8Kq5WMom9svHupBGHV
M/N: 2/3
Signers:
1: 1WdKMVYxeSc3bFQyUaC726EHiEHkT8xdxK6Tju9WyrK
2: HiH2SGB8SsTAACLj8hLkXeLzXqVWRUCxWMo4XGVT5H3w
3: 46VhbvYvNaFZ6QyGaB8asvAZw2WxxmrZb8Fvx9ibL5ir
Address: 56TeLuYYyevRDEBkYgFZg3BaSU7ovoo7B9E2QrZtro4Z (Aux*)
Balance: 10
Mint: DukyrQsNTPED4mhoxwAoiegCaknqzM4x7hB1UPnuAnYG
Owner: FxJi77UjbeQH6wMqBgpJr93Y6Y8Kq5WMom9svHupBGHV
State: Initialized
Delegation: (not set)
Close authority: (not set)
* Please run `spl-token gc` to clean up Aux accounts
Address: G2moZysTkpEtwPjQGbUFTXMc8N7BEWcjhh6yYpvuhnCW (Aux*)
Balance: 0
Mint: DukyrQsNTPED4mhoxwAoiegCaknqzM4x7hB1UPnuAnYG
Owner: 49ethEAs8j3Lssgx9kfEDEXZd5KabyABzxHYBEWxSwpS
State: Initialized
Delegation: (not set)
Close authority: (not set)
* Please run `spl-token gc` to clean up Aux accounts
----- multisig transfer -----
Transfer 10 tokens
Sender: 56TeLuYYyevRDEBkYgFZg3BaSU7ovoo7B9E2QrZtro4Z
Recipient: G2moZysTkpEtwPjQGbUFTXMc8N7BEWcjhh6yYpvuhnCW
error: invalid account data

Related

My .sh script for PGO client setup shows an "invalid value" error

When I run the .sh script I see these errors:
error: error executing template "{{.data.username | base64decode }}:{{.data.password | base64decode}}": template: output:1:19: executing "output" at <base64decode>: invalid value; expected string
error: error executing template "{{ index .data \"tls.crt\" | base64decode }}": template: output:1:27: executing "output" at <base64deco de>: invalid value; expected string
error: error executing template "{{ index .data \"tls.key\" | base64decode }}": template: output:1:27: executing "output" at <base64deco de>: invalid value; expected string
This is the script
# Use the pgouser-admin secret to generate pgouser file
kubectl get secret -n "${PGO_OPERATOR_NAMESPACE}" "${PGO_USER_ADMIN}" \
-o 'go-template={{.data.username | base64decode }}:{{.data.password | base64decode }}' > $OUTPUT_DIR/pgouser
# ensure this file is locked down to the specific user running this
chmod a-rwx,u+rw "${OUTPUT_DIR}/pgouser"
# Use the pgo.tls secret to generate the client cert files
kubectl get secret -n "${PGO_OPERATOR_NAMESPACE}" pgo.tls \
-o 'go-template={{ index .data "tls.crt" | base64decode }}' > $OUTPUT_DIR/client.crt
kubectl get secret -n "${PGO_OPERATOR_NAMESPACE}" pgo.tls \
-o 'go-template={{ index .data "tls.key" | base64decode }}' > $OUTPUT_DIR/client.key
# ensure the files are locked down to the specific user running this
chmod a-rwx,u+rw "${OUTPUT_DIR}/client.crt" "${OUTPUT_DIR}/client.key"
echo "pgo client files have been generated, please add the following to your bashrc"
echo "export PATH=${OUTPUT_DIR}:\$PATH"
echo "export PGOUSER=${OUTPUT_DIR}/pgouser"
echo "export PGO_CA_CERT=${OUTPUT_DIR}/client.crt"
echo "export PGO_CLIENT_CERT=${OUTPUT_DIR}/client.crt"
echo "export PGO_CLIENT_KEY=${OUTPUT_DIR}/client.key"
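The chmod a-rwx,u+rw pattern used twice above locks each generated file down to mode 600 (owner read/write only); a quick self-contained check (assumes GNU stat, as on Linux):

```shell
# Create a scratch file, then apply the same permission change
# the script uses: remove all permissions, grant owner read/write.
tmpfile=$(mktemp)
chmod a-rwx,u+rw "$tmpfile"

# GNU stat prints the octal mode; owner-only read/write is 600.
stat -c '%a' "$tmpfile"   # prints: 600
rm -f "$tmpfile"
```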
I don't see any error in the script itself; any suggestions, please?
What I want it to do:
It should create the PGO client files and not show any errors.
Edited question:
This is how I created the secret:
kubectl create secret docker-registry pgo.tls -n pgo --docker-server='https://index.docker.io/v1/' --docker-username='tauqeerdocker' --docker-email='myeamil@gmail.com' --docker-password='Letstest'
If you create a secret like this:
kubectl create secret docker-registry pgo.tls \
-n pgo \
--docker-server='https://index.docker.io/v1/' \
--docker-username='tauqeerdocker' \
--docker-email='myeamil@gmail.com' \
--docker-password='Letstest'
Then you end up with a resource that looks like this:
apiVersion: v1
kind: Secret
metadata:
name: pgo.tls
namespace: pgo
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: eyJhdXRocyI6eyJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOnsidXNlcm5hbWUiOiJ0YXVxZWVyZG9ja2VyIiwicGFzc3dvcmQiOiJMZXRzdGVzdCIsImVtYWlsIjoibXllYW1pbEBnbWFpbC5jb20iLCJhdXRoIjoiZEdGMWNXVmxjbVJ2WTJ0bGNqcE1aWFJ6ZEdWemRBPT0ifX19
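As a side note, the .dockerconfigjson payload above is only base64-encoded JSON; for example, decoding the inner auth field taken from that secret shows it holds registry credentials, not certificate material:

```shell
# "auth" inside .dockerconfigjson is base64 of "username:password";
# this value is copied out of the decoded secret shown above.
echo 'dGF1cWVlcmRvY2tlcjpMZXRzdGVzdA==' | base64 -d
# prints: tauqeerdocker:Letstest
```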
When you run:
kubectl get secret -n pgo pgo.tls \
-o 'go-template={{ index .data "tls.crt" | base64decode }}'
You're asking for the key tls.crt from the data attribute, but there is no such attribute. You've created a docker registry secret, not a TLS secret.
If you have a certificate and key available locally, you can create a TLS secret like this:
kubectl -n pgo create secret tls pgo.tls \
--cert=tls.crt --key=tls.key
This gets you:
apiVersion: v1
data:
tls.crt: ...
tls.key: ...
kind: Secret
metadata:
name: pgo.tls
namespace: pgo
type: kubernetes.io/tls
And when we try your command using that secret, it works as expected:
$ kubectl get secret -n pgo pgo.tls \
-o 'go-template={{ index .data "tls.crt" | base64decode }}'
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
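If you don't have a certificate and key at hand, a throwaway self-signed pair is enough for testing; a sketch (the /CN=pgo-test subject is an arbitrary placeholder):

```shell
# Generate a throwaway self-signed certificate and key (no passphrase).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.crt \
  -days 1 -subj "/CN=pgo-test"

# Then create the TLS secret from those files:
# kubectl -n pgo create secret tls pgo.tls --cert=tls.crt --key=tls.key
```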

Tutorial about starting a private Substrate network: where does the suri come from?

I'm following the tutorial about starting a private Substrate network.
It says:
This example uses the secret seed generated from the key subcommand
into the keystore. In this tutorial, the secret seed generated was
0x563d22ef5f00e589e07445a3ad88bb92efaa897d7f73a4543d9ac87476434e65, so
the --suri command-line option specifies that string to insert the key
into the keystore:
My question is: where does the suri come from? The article doesn't demonstrate this very clearly.
Here is a log of what I did:
$ ./target/release/node-template key generate --scheme Sr25519 --password-interactive
Key password: 123456
Secret phrase `raw glory squeeze allow demand erase ensure car hair dry tobacco mule` is account:
Secret seed: 0xa80c9a2c2c96ac61a548a358c81aa07a519af00e7b3fc25f06761e2a5af42044 # use the string as next step's input, import to node01
Public key (hex): 0x780a4cd1e018e5433c061da3c28ad1ff33a59da6cd8b750a5a37f3e7fb69fc62
Public key (SS58): 5En6fQsu3ju9zo2PvwptfnZZWrrWWs9zsBt1WuF9U8TGNWFj
Account ID: 0x780a4cd1e018e5433c061da3c28ad1ff33a59da6cd8b750a5a37f3e7fb69fc62
SS58 Address: 5En6fQsu3ju9zo2PvwptfnZZWrrWWs9zsBt1WuF9U8TGNWFj #put this in the chain-spec file, aura.authorities
$ ./target/release/node-template key inspect --password-interactive --scheme Ed25519 0xa80c9a2c2c96ac61a548a358c81aa07a519af00e7b3fc25f06761e2a5af42044
Key password: 123456
Secret Key URI `0xa80c9a2c2c96ac61a548a358c81aa07a519af00e7b3fc25f06761e2a5af42044` is account:
Secret seed: 0xa80c9a2c2c96ac61a548a358c81aa07a519af00e7b3fc25f06761e2a5af42044
Public key (hex): 0x9c1726a7a0cca51dc506a06789b0781260e999ccafd687799c275a52916b1b01
Public key (SS58): 5FbNCp3ZHWzFGQkS1PRt9SPUs16zAHk1WhC2CWTQ97nsE2yk
Account ID: 0x9c1726a7a0cca51dc506a06789b0781260e999ccafd687799c275a52916b1b01
SS58 Address: 5FbNCp3ZHWzFGQkS1PRt9SPUs16zAHk1WhC2CWTQ97nsE2yk #put this in the chain-spec file, grandpa.authorities
$ ./target/release/node-template key generate --scheme Sr25519 --password-interactive
Key password: 123456
Secret phrase `caution evil word live concert suit cousin crisp tobacco lizard wheat banner` is account:
Secret seed: 0x52e547fc68fed1d7e97be6232434ccc51d9cfe1cc237820d9cf3a559dd2be6e8 # use the string as next step's input, import to node02
Public key (hex): 0xced1d44c697e75fd3c51096e869d204f9aec8620ab3422d3e81ec6870fe81c41
Public key (SS58): 5Gjt44znWzR8eu7fDH7cRey8KavbHQuoraD1a3ttYPsVpn75
Account ID: 0xced1d44c697e75fd3c51096e869d204f9aec8620ab3422d3e81ec6870fe81c41
SS58 Address: 5Gjt44znWzR8eu7fDH7cRey8KavbHQuoraD1a3ttYPsVpn75 #put this in the chain-spec file, aura.authorities
$ ./target/release/node-template key inspect --password-interactive --scheme Ed25519 0x52e547fc68fed1d7e97be6232434ccc51d9cfe1cc237820d9cf3a559dd2be6e8
Key password: 123456
Secret Key URI `0x52e547fc68fed1d7e97be6232434ccc51d9cfe1cc237820d9cf3a559dd2be6e8` is account:
Secret seed: 0x52e547fc68fed1d7e97be6232434ccc51d9cfe1cc237820d9cf3a559dd2be6e8
Public key (hex): 0x1d2259132f8ad2d6cb92ce397c97dfe86226708130c94ca3fa10651276de514f
Public key (SS58): 5CiuT1fKfVZGeok2T68g4zx1RCMCmZbHD7zFUrguLeiuCZ1g
Account ID: 0x1d2259132f8ad2d6cb92ce397c97dfe86226708130c94ca3fa10651276de514f
SS58 Address: 5CiuT1fKfVZGeok2T68g4zx1RCMCmZbHD7zFUrguLeiuCZ1g #put this in the chain-spec file, grandpa.authorities
# the --suri value is the secret seed generated above
./target/release/node-template key insert --base-path /tmp/node01 \
--chain customSpecRaw.json \
--suri 0xa80c9a2c2c96ac61a548a358c81aa07a519af00e7b3fc25f06761e2a5af42044 \
--password-interactive \
--key-type aura
./target/release/node-template key insert --base-path /tmp/node01 \
--chain customSpecRaw.json \
--suri 0xa80c9a2c2c96ac61a548a358c81aa07a519af00e7b3fc25f06761e2a5af42044 \
--password-interactive \
--key-type gran
./target/release/node-template key insert --base-path /tmp/node02 \
--chain customSpecRaw.json \
--suri 0x52e547fc68fed1d7e97be6232434ccc51d9cfe1cc237820d9cf3a559dd2be6e8 \
--password-interactive \
--key-type aura
./target/release/node-template key insert --base-path /tmp/node02 \
--chain customSpecRaw.json \
--suri 0x52e547fc68fed1d7e97be6232434ccc51d9cfe1cc237820d9cf3a559dd2be6e8 \
--password-interactive \
--key-type gran
After importing the keys to the keystore:
$ ls /tmp/node01/chains/local_testnet/keystore
61757261780a4cd1e018e5433c061da3c28ad1ff33a59da6cd8b750a5a37f3e7fb69fc62 6772616e780a4cd1e018e5433c061da3c28ad1ff33a59da6cd8b750a5a37f3e7fb69fc62
$ ls /tmp/node02/chains/local_testnet/keystore
61757261ced1d44c697e75fd3c51096e869d204f9aec8620ab3422d3e81ec6870fe81c41 6772616eced1d44c697e75fd3c51096e869d204f9aec8620ab3422d3e81ec6870fe81c41
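Incidentally, those keystore filenames are the key type as hex-encoded ASCII followed by the public key hex; decoding the 4-byte prefix with plain bash confirms it:

```shell
# "61757261" is hex for the ASCII bytes of "aura":
printf '\x61\x75\x72\x61\n'   # prints: aura
# "6772616e" is hex for "gran":
printf '\x67\x72\x61\x6e\n'   # prints: gran
```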
I restarted node01 and node02, but the result is:
Idle (1 peers), best: #94 (0x8634…b5c9), finalized #0 (0x4f9a…68f0), ⬇ 40 B/s ⬆ 0.1kiB/s
The finalized block number is always 0.
I tried the old version of this topic, Start a Private Network, which uses subkey to generate the keys, and that worked; but I failed when following the new tutorial.
Where am I wrong?
Let me post the correct answer:
$./target/release/node-template key insert --base-path /tmp/node01 \
--chain customSpecRaw.json \
--scheme ed25519 \
--suri 0xa80c9a2c2c96ac61a548a358c81aa07a519af00e7b3fc25f06761e2a5af42044 \
--password-interactive \
--key-type gran
When importing the grandpa key, add --scheme ed25519.
After doing this, it works:
Idle (1 peers), best: #95 (0xf51f…65a8), finalized #93 (0x072b…192a), ⬇ 0.5kiB/s ⬆ 0.6kiB/s
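The four insert commands can also be generated from a small loop; a dry-run sketch (it only echoes the commands) that picks sr25519 for aura and ed25519 for gran, using the node paths and seeds from above:

```shell
# Map each node to its secret seed (values from the log above).
declare -A seeds=(
  [node01]=0xa80c9a2c2c96ac61a548a358c81aa07a519af00e7b3fc25f06761e2a5af42044
  [node02]=0x52e547fc68fed1d7e97be6232434ccc51d9cfe1cc237820d9cf3a559dd2be6e8
)

for node in node01 node02; do
  for key_type in aura gran; do
    # aura keys are sr25519; grandpa keys must be ed25519.
    if [ "$key_type" = "gran" ]; then scheme=ed25519; else scheme=sr25519; fi
    echo ./target/release/node-template key insert \
      --base-path "/tmp/${node}" \
      --chain customSpecRaw.json \
      --scheme "$scheme" \
      --suri "${seeds[$node]}" \
      --key-type "$key_type"
  done
done
```

Drop the leading echo (and add --password-interactive back) to actually run the inserts.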
Please see: https://core.tetcoin.org/docs/en/knowledgebase/integrate/subkey#inserting-keys-to-a-nodes-keystore
There is also a section called Generate your own keys:
https://docs.substrate.io/tutorials/v3/private-network/#generate-your-own-keys
The secret seed is your suri.
And here is a handy script to insert the keys easily:
https://github.com/substrate-developer-hub/substrate-node-template/blob/tutorials/solutions/private-chain-v3/key-insert/insert-keys.sh

Hashicorp Vault RSASSA-PSS Prehashed cannot be verified with OpenSSL

I am trying to use Hashicorp Vault to sign a file with RSASSA-PSS-4096. The file is too big for sending it to the server directly, so I want to prehash it locally and then send the digest via POST request to the Vault transit engine.
While the Vault signature verification works, the OpenSSL verification fails.
Please see my draft script:
# Calculate SHA256 hash and convert to base64
sha256sum_base64=$(openssl dgst -sha256 -binary $1 | base64)
# Sign Hash Value with Vault
json_response=$(curl -s \
--header "X-Vault-Token: $(cat token)" \
--request POST \
--data-binary '{"input": "'"$sha256sum_base64"'", "prehashed": true, "signature_algorithm": "pss", "hash_algorithm": "sha2-256"}' \
http://127.0.0.1:8200/v1/transit/sign/rsa_4096)
# Extract base64 signature from the json response.
signature_base64=$(echo $json_response | python3 -c "import sys, json; print(json.load(sys.stdin)['data']['signature'])" | cut -d ":" -f 3)
# Convert signature from base64 to binary and write to file
sigfile=$1__signature.bin
echo $signature_base64 | openssl base64 -d -A -in - -out $sigfile
# Check whether signature is valid via OpenSSL
echo "OpenSSL --> " $(openssl dgst -sha256 -sigopt rsa_padding_mode:pss -sigopt rsa_pss_saltlen:32 -verify rsa_4096_pub.pem -signature $sigfile $1)
# Check whether signature is valid via Vault
signature_vaultformat="vault:v1:$signature_base64"
verify_response=$(curl -s \
--header "X-Vault-Token: $(cat token)" \
--request POST \
--data-binary '{"input": "'"$sha256sum_base64"'", "signature": "'"$signature_vaultformat"'", "prehashed": true, "signature_algorithm": "pss", "hash_algorithm": "sha2-256"}' \
http://127.0.0.1:8200/v1/transit/verify/rsa_4096)
echo "Vault Verify --> " $(echo $verify_response | python3 -c "import sys, json; print(json.load(sys.stdin)['data']['valid'])")
What could be the problem here? I have played with the rsa_pss_saltlen parameter (e.g. -1) without success. Is there another OpenSSL parameter I am missing? Do I need to consider something for EMSA-PSS?
Here is a proof-of-concept where you can sign a piece of text using the Transit secrets engine and then verify the signature using openssl rather than using the Transit secrets engine again.
# Define our plaintext
TEXT="abc123"
# Encode our plaintext with base64
B64_ENCODED_TEXT=$(echo $TEXT | base64)
# Reset the transit secrets engine
vault secrets disable transit
vault secrets enable transit
# Create a key called 'test' using 'rsa-2048'
vault write -f transit/keys/test \
type='rsa-2048'
# Export the public key from the transit secret engine key named 'test'
PUBLIC_KEY=$(vault read -format=json transit/keys/test | \
jq -r '.data.keys."1".public_key')
# Sign our base64 encoded text using our transit key named 'test' and
# capture the signature
SIGNATURE=$(vault write -format=json transit/sign/test/sha2-256 \
input="$B64_ENCODED_TEXT" \
signature_algorithm="pss" | \
jq -r '.data.signature')
# Demonstrate that we can use transit to verify our signature
printf "\nVerifying signature using Vault Transit...\n"
vault write transit/verify/test/sha2-256 \
signature_algorithm="pss" \
input=$B64_ENCODED_TEXT \
signature=$SIGNATURE
# Write out public key to a file
echo $PUBLIC_KEY > publickey.pem
# Remove the metadata from the Vault supplied signature and decode the
# signature using base64, writing the raw signature to a file
echo $SIGNATURE | cut -d':' -f3 | base64 -d > sig
# Write the non-encoded plaintext to a file
echo "$TEXT" > mytext
# Use openssl to verify the signature using the base64 decoded raw signature
# along with the public key and the non-encoded plaintext
printf "\nVerifying signature using openssl...\n"
openssl dgst \
-sha256 \
-verify publickey.pem \
-signature sig \
-sigopt rsa_padding_mode:pss \
mytext
Some important notes below:
Note that ALL data that is signed by Vault Transit secret engine must first be base64 encoded.
When using openssl to verify a signature, you must make sure that you are using the correct signature algorithm.
When Vault provides a signature, it's in the following format: vault:v1:8SDd3WHDOjf7mq69... where vault denotes that it was signed by Vault, v1 denotes the version of the key and the final part is the actual signature that is encoded using base64. The openssl utility requires that the signature is binary and not base64. In order to verify this signature with openssl, you must remove the first 2 parts of the Vault provided signature. You must then decode the base64 encoded signature and use the resultant binary signature when verifying with openssl.
When verifying with openssl you cannot use the base64 encoded version of the text; you must use the non-base64 encoded plaintext.
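The unwrapping described in the note above is plain string surgery; a self-contained sketch using a fabricated "signature" (just the word hello base64-encoded) in place of real signature bytes:

```shell
# A stand-in for a Vault-formatted signature: vault:v1:<base64>.
sig="vault:v1:$(printf 'hello' | base64)"

# Strip the "vault" and "v1" prefixes, then base64-decode the rest.
# -f3- (rather than -f3) is safest in case the payload had a colon.
echo "$sig" | cut -d':' -f3- | base64 -d
# prints: hello
```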

Can't reply to email via mutt with GnuPG - asks for "keyID"

I'm using mutt with GnuPG on Ubuntu. I have these general settings for GnuPG:
set pgp_decode_command = "gpg %?p?--passphrase-fd 0? --no-verbose --batch --output - %f"
set pgp_verify_command = "gpg --no-verbose --batch --output - --verify %s %f"
set pgp_decrypt_command = "gpg --passphrase-fd 0 --no-verbose --batch --output - %f"
set pgp_sign_command = "gpg --no-verbose --batch --output - --passphrase-fd 0 --armor --detach-sign --textmode %?a?-u %a? %f"
set pgp_clearsign_command = "gpg --no-verbose --batch --output - --passphrase-fd 0 --armor --textmode --clearsign %?a?-u %a? %f"
set pgp_import_command = "gpg --no-verbose --import -v %f"
set pgp_export_command = "gpg --no-verbose --export --armor %r"
set pgp_verify_key_command = "gpg --no-verbose --batch --fingerprint --check-sigs %r"
set pgp_list_pubring_command = "gpg --no-verbose --batch --with-colons --list-keys %r"
set pgp_list_secring_command = "gpg --no-verbose --batch --with-colons --list-secret-keys %r"
unset pgp_retainable_sigs
# set pgp_ignore_subkeys
# set pgp_verify_sig=yes
# set pgp_create_traditional = no
# set pgp_autosign = no
# set pgp_autoencrypt = no
# set pgp_replysignencrypted
# set pgp_replyencrypt = yes
# set pgp_replysign = yes
set crypt_autosign # automatically sign all outgoing messages
# set crypt_replysign # sign only replies to signed messages
# set crypt_autoencrypt=yes # automatically encrypt outgoing msgs
# set crypt_replyencrypt=yes # encryp only replies to signed messages
# set crypt_replysignencrypted=yes # encrypt & sign replies to encrypted msgs
set crypt_verify_sig=yes # auto verify msg signature when opened
set pgp_create_traditional = yes # http://www.rdrop.com/docs/mutt/manual236.html#pgp_create_traditional
set pgp_timeout = 3600
set pgp_good_sign = "^gpg: Good signature from"
And I have these settings for the specific account in question:
send-hook mark.nichols@gmail.com 'set pgp_autosign'
pgp-hook mark.nichols@gmail.com 53445200
set pgp_encrypt_only_command="/usr/lib/mutt/pgpewrap gpg --batch --quiet --no-verbose --output - --encrypt --textmode --armor --always-trust --encrypt-to 53445200 -- -r %r -- %f"
set pgp_encrypt_sign_command="/usr/lib/mutt/pgpewrap gpg --passphrase-fd 0 --batch --quiet --no-verbose --textmode --output - --encrypt --sign %?a?-u %a? --armor --always-trust --encrypt-to 53445200 -- -r %r -- %f"
set pgp_sign_as=53445200
If I send an email to someone who does not have a GPG key, the outgoing email is digitally signed using my key and is sent. When I get a reply to that email and reply in turn, however, I cannot send it, because I am asked:
Enter keyID for <recipient email>
I can get out of that dialog with Ctrl-G, but I cannot get past it. At first I thought it was one of these two settings:
# set crypt_replyencrypt=yes # encryp only replies to signed messages
# set crypt_replysignencrypted=yes # encrypt & sign replies to encrypted msgs
But even with them commented out I am asked for a key that doesn't exist. Where is the setting I am missing that requires me to specify a key for the recipient on a reply?
Thanks
The answer is to use GPGME. The GPGME library encapsulates most, if not all, commonly used GnuPG functions. With just a few configuration lines, everything GnuPG-related "just works." This post is a good guide: https://sanctum.geek.nz/arabesque/gnu-linux-crypto-email/
In the end my setup now looks like:
# Use GPGME
set crypt_use_gpgme = yes
# Sign replies to signed emails
set crypt_replysign = yes
# Encrypt replies to encrypted emails
set crypt_replyencrypt = yes
# Encrypt and sign replies to encrypted and signed email
set crypt_replysignencrypted = yes
# Attempt to verify signatures automatically
set crypt_verify_sig = yes
# Use my key for signing and encrypting
set pgp_sign_as = 0x53445200
# Automatically sign all out-going email
set crypt_autosign = yes
Far simpler, and it works.

Jenkins Reload Configuration Removes Changes

I'm currently using Jenkins at my company. I set up a server for all of our engineers to plug in to, and in doing so I made some server-management jobs to make my life a little easier. One of them is a config editor that edits the $JENKINS_HOME/config.xml file and triggers a configuration reload to reflect the new changes.
However, today when I went to use that job, the changes were no longer taking effect, nor were they shown when SSHing into the server and cat-ing the config.xml file.
I did some debugging and made sure the file contents were being replaced correctly. I even put checks into the build executor, double-checking md5 sums, to be sure everything was correct before running the reload-configuration command, since my script replaces the entire content. I even added a sleep 15 before the reload so I could cat the config.xml file and confirm my changes were there, and they always were.
However, as soon as the reload command runs, all of my changes are replaced with the config contents from just before I made my changes (I also confirmed this from md5 sums of the file in my debugging).
Here's the executor of my job if that helps at all:
$CONFIG_FILE is always $JENKINS_HOME/config.xml
#!/bin/bash
set -o pipefail -e -u -x
cp "$CONFIG_FILE" "$WORKSPACE/config_backup.xml"
printf "Creating an AMI profile with these parameters: \n\n\
Config File: | $CONFIG_FILE \n\
AMI ID: | $AMI_ID \n\
Description: | $DESCRIPTION \n\
Instance Type: | $INSTANCE_TYPE \n\
Security Groups: | $SECURITY_GROUPS \n\
Remote Workspace: | $REMOTE_WORKSPACE \n\
Label(s): | $LABELS \n\
Subnet ID: | $SUBNET_ID \n\
IAM Profile: | $IAM_INSTANCE_PROFILE \n\
Instance Tags: | $TAGS \n\
Executors: | $EXECUTORS \
\n\n\
"
new_xml="$(python "$WORKSPACE/<scriptname removed for security reasons>" \
--file $CONFIG_FILE \
--ami $AMI_ID \
--description $DESCRIPTION \
--type $INSTANCE_TYPE \
--security-groups $SECURITY_GROUPS \
--remote-workspace $REMOTE_WORKSPACE \
--labels $LABELS \
--iam-instance-profile $IAM_INSTANCE_PROFILE \
--subnet-id $SUBNET_ID \
--tags $TAGS \
--executors $EXECUTORS)" || true
if [ -z "$new_xml" ]; then
echo "Ran into an error..."
cat "xml_ami_profile_parser.log"
exit 1
fi
echo "setting new config file content..."
echo "$new_xml" > "$CONFIG_FILE"
echo "config file set!"
CONFIG_MD5="$(md5sum "$CONFIG_FILE" | awk '{print $1}')"
NEW_MD5="$(echo "$new_xml" | md5sum | awk '{print $1}')"
printf "comparing MD5 Sums: \n\
[ $CONFIG_MD5 ] \n\
[ $NEW_MD5 ]\n\n"
if [[ "$CONFIG_MD5" != "$NEW_MD5" ]]; then
echo "Config File ($CONFIG_FILE) was not overwritten successfully. Restoring backup..."
cp "$WORKSPACE/config_backup.xml" "$CONFIG_FILE"
exit 1
fi
# use jenkins api user info
USERNAME="$(cat <scriptname removed for security reasons> | awk '{print $8}')"
PASSWORD="$(cat <scriptname removed for security reasons> | awk '{print $9}')"
curl -X POST -u "$USERNAME:$PASSWORD" "<url removed for security reasons>"
sleep 10
NEW_MD5="$(md5sum "$CONFIG_FILE" | awk '{print $1}')"
printf "comparing MD5 Sums: \n\
[ $CONFIG_MD5 ] \n\
[ $NEW_MD5 ]\n\n"
if [[ "$CONFIG_MD5" != "$NEW_MD5" ]]; then
echo "Config file reverted after reload, marking build as error."
exit 1
fi
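The checksum bookkeeping in the script can be factored into small helpers, which also makes the echo-appends-a-newline subtlety explicit; a sketch:

```shell
# Return the md5 of a file's contents.
md5_of_file() { md5sum "$1" | awk '{print $1}'; }

# Return the md5 of a string. echo appends the same trailing
# newline that `echo "$new_xml" > "$CONFIG_FILE"` writes, so the
# two helpers agree for files written that way.
md5_of_string() { echo "$1" | md5sum | awk '{print $1}'; }

# Example: the two agree when the file was written with echo.
tmp=$(mktemp)
echo "<config/>" > "$tmp"
[ "$(md5_of_file "$tmp")" = "$(md5_of_string "<config/>")" ] && echo "match"
rm -f "$tmp"
# prints: match
```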
Any help at all is greatly appreciated!
EDIT:
Here's the typical output now; I can't get past it:
setting new config file content...
config file set!
comparing MD5 Sums:
[ 58473de6acbb48b2e273e3395e64ed0f ]
[ 58473de6acbb48b2e273e3395e64ed0f ]
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
comparing MD5 Sums:
[ 58473de6acbb48b2e273e3395e64ed0f ]
[ f521cec2a2e376921995f773522f78e1 ]
Config file reverted after reload, marking build as error.
Build step 'Execute shell' marked build as failure
Finished: FAILURE
For everyone coming to this later: I solved my own problem. Jenkins has its own failsafe to keep uptime, but it doesn't give you any notice when it kicks in. If you replace config.xml with something that a plugin can't parse correctly (in my case the Amazon EC2 Plugin), the plugin tells Jenkins that the config file is bad, and Jenkins reverts to the last correct XML it was using (usually the one it has in memory).
If this happens to you, double-check that you aren't using special characters.
The offending code in mine was the output of the tags section containing HTML-entity-converted quotation marks (&quot; rendered as literal " characters), which the plugin couldn't parse. It was solely the difference between:
<tags>
<hudson.plugins.ec2.EC2Tag>
<name>"Email</name>
<value><removed for security reasons>"</value>
</hudson.plugins.ec2.EC2Tag>
<hudson.plugins.ec2.EC2Tag>
<name>"Name</name>
<value><removed for security reasons>"</value>
</hudson.plugins.ec2.EC2Tag>
</tags>
and
<tags>
<hudson.plugins.ec2.EC2Tag>
<name>Email</name>
<value><removed for security reasons></value>
</hudson.plugins.ec2.EC2Tag>
<hudson.plugins.ec2.EC2Tag>
<name>Name</name>
<value><removed for security reasons></value>
</hudson.plugins.ec2.EC2Tag>
</tags>
