How to get value of encrypted data bag secret within Test Kitchen - ruby

I have added data_bags_path and encrypted_data_bag_secret_key_path within kitchen.yml as follows:
provisioner:
  name: chef_zero
  chef_omnibus_url: omni-url/chef/install.sh
  roles_path: 'test/integration/default/roles'
  data_bags_path: "test/integration/default/data_bags"
  encrypted_data_bag_secret_key_path: "test/integration/default/encrypted_data_bag_secret"
I believe the above copies the encrypted_data_bag_secret to a file named encrypted_data_bag_secret under /tmp/kitchen/.
That is why, in my recipe, I am loading the secret as follows:
secret = Chef::EncryptedDataBagItem.load_secret("/tmp/kitchen/encrypted_data_bag_secret")
encryptkey = Chef::EncryptedDataBagItem.load("tokens", "encryptkey", secret)
However, Test Kitchen is failing with the following error:
No such file or directory - file not found '/tmp/kitchen/encrypted_data_bag_secret'

In general you probably don't want to use encrypted data bags in your tests. If you do want to use the encryption for some reason (really, don't), use the normal data_bag_item() API, which does the key loading for you.
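A minimal recipe sketch of that approach, assuming the same tokens/encryptkey data bag item as in the question (this is a fragment of a Chef recipe, not a standalone script):

```ruby
# In a recipe, data_bag_item decrypts an encrypted item automatically
# when a secret file is configured for the node (here via
# encrypted_data_bag_secret_key_path in kitchen.yml), so there is no
# need to call Chef::EncryptedDataBagItem.load_secret yourself.
encryptkey = data_bag_item("tokens", "encryptkey")
```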

Related

kubebuilder debug web-hooks locally

We have a kubebuilder controller which is working as expected; now we need to create webhooks.
I followed the tutorial
https://book.kubebuilder.io/reference/markers/webhook.html
and now I want to run & debug it locally.
However, I'm not sure what to do regarding the certificate. Is there a simple way to create it? Any example would be very helpful.
BTW, I've installed cert-manager and applied the following sample YAML, but I'm not sure what to do next...
I need the simplest solution that lets me run and debug the webhooks locally, as I'm already doing with the controller (before using webhooks):
https://book.kubebuilder.io/cronjob-tutorial/running.html
Cert-manager
I've created the following inside my cluster
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: test
spec:
  # Secret names are always required.
  secretName: example-com-tls
  # secretTemplate is optional. If set, these annotations and labels will be
  # copied to the Secret named example-com-tls. These labels and annotations will
  # be re-reconciled if the Certificate's secretTemplate changes. secretTemplate
  # is also enforced, so relevant label and annotation changes on the Secret by a
  # third party will be overwritten by cert-manager to match the secretTemplate.
  secretTemplate:
    annotations:
      my-secret-annotation-1: "foo"
      my-secret-annotation-2: "bar"
    labels:
      my-secret-label: foo
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
      - jetstack
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: example.com
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
    - client auth
  # At least one of a DNS Name, URI, or IP address is required.
  dnsNames:
    - example.com
    - www.example.com
  uris:
    - spiffe://cluster.local/ns/sandbox/sa/example
  ipAddresses:
    - 192.168.0.5
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer
    # This is optional since cert-manager will default to this value however
    # if you are using an external issuer, change this to that issuer group.
    group: cert-manager.io
I'm still not sure how to sync it with kubebuilder to work locally,
as when I run the operator in debug mode I get the following error:
setup problem running manager {"error": "open /var/folders/vh/_418c55133sgjrwr7n0d7bl40000gn/T/k8s-webhook-server/serving-certs/tls.crt: no such file or directory"}
What I need is the simplest way to run webhooks locally
Let me walk you through the process from the start.
1. Create the webhook as described in the CronJob tutorial: kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation. This will create webhooks for implementing defaulting and validating logic.
2. Implement the logic as instructed in "Implementing defaulting/validating webhooks".
3. Install cert-manager. I find the easiest way to install it is via this command: kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml
4. Edit the config/default/kustomization.yaml file by uncommenting everything that has [WEBHOOK] or [CERTMANAGER] in its comments. Do the same for the config/crd/kustomization.yaml file.
5. Build your image locally using make docker-build IMG=<some-registry>/<project-name>:tag. You don't need to docker-push your image to a remote repository. If you are using a kind cluster, you can directly load your local image into your specified kind cluster:
kind load docker-image <your-image-name>:tag --name <your-kind-cluster-name>
6. Now you can deploy it to your cluster with make deploy IMG=<some-registry>/<project-name>:tag.
You can also run the operator locally using the make run command, but that's a little tricky if you have enabled webhooks. I would suggest running it in a kind cluster as described above. That way you don't need to worry about injecting certificates; cert-manager will do that for you. You can check out the config/certmanager folder to figure out how this works.

Is there a way to access Rails 6 secrets.yml hash as dot notation object

I would like to access Rails.application.secrets as an object all the way down through its nested keys.
Example: ../config/secrets.yml
development:
  secret_key_base: ""
  my_data:
    user_email: "abc#example.io"
    external_service:
      remote:
        password: ""
      local:
        password: ""
Presently, the remote password of the external service is fetched using:
Rails.application.secrets.my_data[:external_service][:remote][:password]
Instead I would like to access it as below:
Rails.application.secrets.my_data.external_service.remote.password
Is there a way that I can configure my application to behave in above format?
Note:
Only config/secrets.yml must be affected
Please specify the file path/name where the configurations must be changed/added
Also suggest if there is an alternate way(gem, etc)
You might be able to put this in an initializer and achieve what you're looking for:
Rails.application.secrets = JSON.parse(Rails.application.secrets.to_json, object_class: OpenStruct)
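As a quick standalone check of that trick outside Rails, the same JSON/OpenStruct round-trip can be exercised on a plain nested hash (made-up values, mirroring the secrets.yml structure from the question):

```ruby
require "json"
require "ostruct"

# A plain nested hash standing in for the parsed secrets.yml
# (made-up values, mirroring the structure in the question).
secrets = {
  my_data: {
    external_service: {
      remote: { password: "remote-pass" },
      local:  { password: "local-pass" }
    }
  }
}

# Round-trip through JSON with object_class: OpenStruct so that every
# nested hash becomes an OpenStruct, enabling dot notation at any depth.
config = JSON.parse(secrets.to_json, object_class: OpenStruct)

puts config.my_data.external_service.remote.password # => "remote-pass"
```

This round-trip is exactly what the initializer above performs on Rails.application.secrets.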

Confd ignores the role set on my aws config?

I'm currently trying to set up a confd POC using SSM as the provider for the keys. We currently have one AWS account, which is the root account, and multiple roles to separate the environments.
Currently my AWS config looks like this:
[default]
region=eu-west-1
output=json
role_arn=arn:aws:iam::*:role/OrganizationAccountAccessRole
This works quite fine for me, given that the command
aws ssm get-parameters --names /eric
gives me back the key I created for this POC:
PARAMETERS arn:aws:ssm:eu-west-1:*:parameter/eric * /eric String test 1
For confd, though, it does not:
confd -onetime -backend ssm --log-level debug
2019-04-07T18:25:08Z 3ce95f057568 confd[359]: DEBUG Processing key=/eric
2019-04-07T18:25:08Z 3ce95f057568 confd[359]: DEBUG Got the following map from store: map[]
2019-04-07T18:25:08Z 3ce95f057568 confd[359]: DEBUG Using source template /etc/confd/templates/myconfig.conf.tmpl
2019-04-07T18:25:08Z 3ce95f057568 confd[359]: DEBUG Compiling source template /etc/confd/templates/myconfig.conf.tmpl
2019-04-07T18:25:08Z 3ce95f057568 confd[359]: ERROR template: myconfig.conf.tmpl:2:17: executing "myconfig.conf.tmpl" at <getv "/eric">: error calling getv: key does not exist: /eric
2019-04-07T18:25:08Z 3ce95f057568 confd[359]: FATAL template: myconfig.conf.tmpl:2:17: executing "myconfig.conf.tmpl" at <getv "/eric">: error calling getv: key does not exist: /eric
I did one short test and created the key /eric in the root account instead of the role account; after doing that it worked as I expected. Which makes me wonder: is there any hidden configuration for confd to make it use the role? Currently it seems like it does not take the role into consideration.
My confd template resource config looks like
[template]
src = "myconfig.conf.tmpl"
dest = "/tmp/myconfig.conf"
keys = [
  "/eric"
]
and my template (myconfig.conf.tmpl) looks like
database_url = {{getv "/eric"}}
Can someone give me any direction regarding this specific problem?
OK, I found the issue: my AWS config was being completely ignored. After looking at the currently open pull requests for this project I found this one:
https://github.com/kelseyhightower/confd/pull/736. The author mentions:
Existing session creation was ignoring AWS config options unless the
env var AWS_SDK_LOAD_CONFIG was exported. The SharedConfigState option
removes that need.
So yes, setting the variable AWS_SDK_LOAD_CONFIG to true fixed this. I assume that when this PR gets merged, this "workaround" will no longer be necessary.

Check if config file was set successfully for GnuPG in GPGME

For specifying the preferred order of encryption algorithms in GPG, I use
gpgme_set_engine_info(GPGME_PROTOCOL_OpenPGP, NULL, CONFIG_DIR);
to set a custom config file. However, how can I check whether this operation was successful? home_dir is set to the given value, but this also happens if I pass a directory without a config file. I can't see any function or call in the documentation to evaluate whether the config file was loaded, or what the current preference order is.
The function returns an error value if a problem occurred. From the documentation:
This function returns the error code GPG_ERR_NO_ERROR if successful, or an error code on failure.
You observed unexpected behavior with setting a home directory without a configuration file:
home_dir is set to the given value, but this also happens if I pass a directory without a config file.
This is expected behavior in GnuPG. An empty configuration file is not an error; it simply means no configuration other than the defaults is in place. Something similar happens if you pass --homedir to GnuPG with a reference to an empty folder: GnuPG will try to initialize this folder as a home directory, but prints an informational message:
$ LANG=C gpg --homedir /tmp
gpg: keyring `/tmp/secring.gpg' created
gpg: keyring `/tmp/pubring.gpg' created
gpg: Go ahead and type your message ...
If you want to verify that the folder is already set up, I'd propose verifying some options you'd expect, or testing for a configuration file (or whatever you expect to be available) on your own.

How does EC2 install the public key from your keypair?

I am debugging creation of a custom AMI, and it's not clear to me how EC2 actually installs the public key of your keypair onto your AMI... I presume it goes into ~someuser/.ssh/authorized_keys, but I cannot figure out whether this is done exactly once or on every boot, or how the target user is determined.
More specifically, cloud-init is a Python module that gets run every time an instance starts.
You can browse through the code here:
/usr/lib/python2.7/dist-packages/cloudinit
The parts that get the key are the DataSource.py and DataSourceEc2.py files. They query the metadata using the URL http://169.254.169.254/2011-01-01/meta-data/public-keys/.
They find the list of keys using that URL and then pick them up one by one (it's usually just one). Ultimately they query http://169.254.169.254/2011-01-01/meta-data/public-keys/0/openssh-key/, then they copy that key to the default cloud-init user's ~/.ssh/authorized_keys file.
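That lookup flow can be sketched roughly like this (a simplified Ruby sketch rather than cloud-init's actual Python; metadata_get and fetch_public_keys are illustrative names):

```ruby
require "net/http"
require "uri"

METADATA_BASE = "http://169.254.169.254/2011-01-01/meta-data/public-keys".freeze

# Fetch a metadata sub-path; on a real EC2 instance this hits the
# link-local metadata service.
def metadata_get(path)
  Net::HTTP.get(URI("#{METADATA_BASE}#{path}"))
end

# The index lists entries like "0=my-keypair"; each index then exposes
# the key material under <index>/openssh-key/.
def fetch_public_keys(fetch = method(:metadata_get))
  fetch.call("/").lines.map do |entry|
    index = entry.split("=").first.strip
    fetch.call("/#{index}/openssh-key/").strip
  end
end
```

Off-instance, a canned lookup can stand in for the metadata service, e.g. fetch_public_keys(->(path) { canned.fetch(path) }).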
The default cloud-init user (as well as all the cloud-init config) is defined in the /etc/cloud/cloud.cfg file. This is an excerpt of a cloud.cfg file:
user: ubuntu
disable_root: 1
preserve_hostname: False
# datasource_list: ["NoCloud", "ConfigDrive", "OVF", "MAAS", "Ec2", "CloudStack"]
cloud_init_modules:
- bootcmd
- resizefs
- set_hostname
- update_hostname
- update_etc_hosts
- ca-certs
- rsyslog
- ssh
cloud_config_modules:
- disk-setup
- mounts
- ssh-import-id
- locale
- set-passwords
- grub-dpkg
...
It's basically a YAML-format config file.
For more information on cloud-init you can read their public docs here:
http://cloudinit.readthedocs.org/en/latest/index.html
Hope this helps.
