Spring boot log secret.yaml from helm - spring-boot

I am getting started with Helm. I have defined the deployment, service, configMap and secret YAML files.
I have a simple Spring Boot application with basic HTTP authentication; the username and password are defined in the secret file.
My application deploys correctly, but when I test it in the browser it tells me that the username and password are wrong.
Is there a way to see which values Spring Boot actually receives from Helm?
Or is there a way to decrypt the secret.yaml file?
values.yaml
image:
  repository: myrepo.azurecr.io
  name: my-service
  tag: latest
replicaCount: 1
users:
  - name: "admin"
    password: "admintest"
    authority: "admin"
  - name: "user-test"
    password: "usertest"
    authority: "user"
spring:
  datasource:
    url: someurl
    username: someusername
    password: somepassword
    platform: postgresql
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-secret
stringData:
  spring.datasource.url: "{{ .Values.spring.datasource.url }}"
  spring.datasource.username: "{{ .Values.spring.datasource.username }}"
  spring.datasource.password: "{{ .Values.spring.datasource.password }}"
  spring.datasource.platform: "{{ .Values.spring.datasource.platform }}"
  {{- range $idx, $user := .Values.users }}
  users_{{ $idx }}_.name: "{{ $user.name }}"
  users_{{ $idx }}_.password: "{{ printf $user.password }}"
  users_{{ $idx }}_.authority: "{{ printf $user.authority }}"
  {{- end }}

Normally the secret in secret.yaml isn't encrypted, just base64-encoded, so you can decode its content with a tool like https://www.base64decode.org/. If you have access to the Kubernetes dashboard, that also provides a way to see the value of the secret.
If you're injecting the secret as environment variables, you can find the pod with kubectl get pods, and kubectl describe pod <pod_name> will then show which environment variables are injected.
With Helm I find it very useful to run helm install --dry-run --debug, as you can then see in the console exactly which Kubernetes resources will be created from the templates for that install.
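For example, based on the templates above (the angle-bracket placeholders are yours to fill in), the same checks can be run from the command line:

# Render the chart locally and print the manifests that would be applied
helm install --dry-run --debug <release-name> ./<chart-dir>

# Dump the deployed Secret; values given as stringData are stored base64-encoded under data
kubectl get secret <release-name>-<chart-name>-secret -o yaml
kubectl get secret <release-name>-<chart-name>-secret \
  -o jsonpath='{.data.spring\.datasource\.username}' | base64 -d

# Check which environment variables are injected into the running pod
kubectl get pods
kubectl describe pod <pod-name>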

Related

How to include dynamic configmap and secrets in helmfile

I am using helmfile to deploy multiple sub-charts with the helmfile sync -e env command.
I have a configMap and secrets that I need to load based on the environment.
Is there a way to load the configMap and secrets per environment in the helmfile?
This is what I tried:
dev-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend
  namespace: {{ .Values.namespace }}
data:
  NODE_ENV: {{ .Values.env }}
In helmfile.yaml
environments:
  envDefaults: &envDefaults
    values:
      - ./values/{{ .Environment.Name }}.yaml
      - kubeContext: ''
        namespace: '{{ .Environment.Name }}'
  dev:
    <<: *envDefaults
    secrets:
      - ./config/secrets/dev.yaml
      - ./config/configmap/dev.yaml
Is there a way to import configMap and secrets (not encrypted) YAML files dynamically based on the environment in helmfile?

How to deploy an openstack instance with ansible in a specific project

I've been trying to deploy an instance in OpenStack to a project other than my user's default project. The only way to do this appears to be by passing the project_name within the auth: setting. This works fine, but it is not really compatible with using a clouds.yaml config with the clouds: setting, or even with using the admin-openrc.sh file that OpenStack provides. (The admin-openrc.sh appears to take precedence over any settings in auth:.)
I'm using the current openstack.cloud collection 1.3.0 (https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html). Some of the modules have the option to specify a project:, like the network module, but the server module does not.
So this deploys in a named project:
- name: Create instances
  server:
    state: present
    auth:
      auth_url: "{{ auth_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      project_name: "{{ project }}"
      project_domain_name: "{{ domain_name }}"
      user_domain_name: "{{ domain_name }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
When admin-openrc.sh has been sourced (which sets OS_PROJECT_NAME=<project_name>), this deploys only to your default project:
- name: Create instances
  server:
    state: present
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
image: "{{ image_name }}"
key_name: "{{ key_name }}"
timeout: 200
flavor: "{{ flavor }}"
network: "{{ network }}"
When I unset OS_PROJECT_NAME but set all the other values from admin-openrc.sh, I can do the following, but this requires working with a non-default setup (unsetting that one environment variable):
- name: Create instances
  server:
    state: present
    auth:
      project_name: "{{ project }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
I'm looking for the most useful way to use a single authorization model (be it clouds.yaml or environment variables) for all my OpenStack modules, while still being able to deploy to a specific project.
(You should upgrade to the latest collection (1.5.3), though this may also work with 1.3.0.)
You can use the cloud property of the server task (openstack.cloud.server). Here is how you can proceed:
All project definitions are stored in clouds.yml (here is part of its content):
clouds:
  tarantula:
    auth:
      auth_url: https://auth.cloud.myprovider.net/v3/
      project_name: tarantula
      project_id: the-id-of-my-project
      user_domain_name: Default
      project_domain_name: Default
      username: my-username
      password: my-password
    regions:
      - US
      - EU1
From a task you can then refer to the appropriate cloud like this:
- name: Create instances
  server:
    state: present
    cloud: tarantula
    region_name: EU1
    name: "test-instance-1"
We now refrain from using any environment variables and make sure the project ID is set in the configuration or in the group_vars. This works because it does not depend on a local clouds.yml file: we basically build an auth object in Ansible that can be used throughout the deployment, along the lines of the sketch below.
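For illustration, a minimal sketch of that approach (the os_auth and vault_* variable names and the group_vars/all.yml location are assumptions, not from the original answer):

# group_vars/all.yml (hypothetical variable names; the credentials would typically live in Ansible Vault)
os_auth:
  auth_url: "https://auth.cloud.myprovider.net/v3/"
  username: "{{ vault_os_username }}"
  password: "{{ vault_os_password }}"
  project_name: tarantula
  project_domain_name: Default
  user_domain_name: Default

A task can then pass the whole dict to the module's auth parameter:

- name: Create instances
  openstack.cloud.server:
    state: present
    auth: "{{ os_auth }}"
    region_name: EU1
    name: "test-instance-1"
    image: "{{ image_name }}"
    flavor: "{{ flavor }}"
    network: "{{ network }}"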

Using Ansible environment and assume a role with boto3

I've run into an issue assuming a role when using the environment setting to set proxies on a task.
For example, if I use a custom module with proxy_env set:
- name: compare values from api
  my_custom_module:
    module_data: "{{ some_var }}"
  register: cpmared_vals
  environment: "{{ proxy_env }}"
I get this error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
However, if I remove environment: "{{ proxy_env }}", it works.
This is what proxy_env looks like:
proxy_env:
  https_proxy: "http://corp-proxy.com:80"
  http_proxy: "http://corp-proxy.com:80"
  no_proxy: "internal-apps.com"
Thanks

Ansible for Openshift Deployment

I want to deploy pods on OpenShift using an Ansible playbook.
For this, I have written the following play:
- name: Create Deployment Config for the usecase
  with_dict: "{{ apps }}"
  openshift_v1_deployment_config:
    name: "{{ item.key }}"
    namespace: "{{ usecaseId }}"
    labels:
      app: "{{ item.key }}"
      service: "{{ item.key }}"
    replicas: 1
    selector:
      app: "{{ item.key }}"
      service: "{{ item.key }}"
    spec_template_metadata_labels:
      app: "{{ item.key }}"
      service: "{{ item.key }}"
    containers:
      - env:
        image: "{{ openshift_registry_svc_url }}/{{ usecaseId }}/{{ item.key }}"
        name: "{{ item.key }}"
        ports:
          - container_port: 8080
            protocol: TCP
Does anyone have an idea how I can get the IP address of the deployed pod using Ansible itself? TIA.
Shagun, I do not think you would be able to get the IP address of the pods from outside the cluster, as the IPs are managed by the OpenShift SDN. The ways the outside world can connect to pods are routes, port forwarding, or manually assigning an external IP to a service.
Hope this URL helps with the methods to connect to pods: https://docs.openshift.com/container-platform/3.5/dev_guide/expose_service/index.html
Finally I found it.
This can be done using the k8s lookup plugin provided by Ansible, e.g.:
- name: Fetch all pods which are running
  set_fact:
    deployments_pod: "{{ lookup('k8s', kind='Pod', namespace='test') }}"
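As a follow-up sketch (the pod_ips fact name is an assumption, and the lookup above is expected to return a list of Pod objects), the IP address can then be read from each object's status.podIP field:

- name: Extract the pod IP addresses from the objects fetched above
  set_fact:
    pod_ips: "{{ deployments_pod | map(attribute='status.podIP') | list }}"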

How to create users per environment using the Ansible inventory and the "htpasswd" module

I'm a newbie in Ansible. I wrote an Ansible role for creating a user and password in "/etc/httpd/.htpasswd", like this:
- name: htpasswd
  htpasswd:
    path: /etc/httpd/.htpasswd
    name: dev
    password: dev
    group: apache
    mode: 0640
  become: true
Now I'm trying to understand how I can set the user and password per environment via placeholder variables, using the inventory (or any other way). For example, if I ran "ansible-playbook -i inventories/dev", the role could be set up like this:
- name: htpasswd
  htpasswd:
    path: /etc/httpd/.htpasswd
    name: "{{ inventory.htpasswd.name }}"
    password: "{{ inventory.htpasswd.password }}"
    group: apache
    mode: 0640
  become: true
And in the inventory folder for each environment there would be an "htpasswd" file with name and password content like this:
name: dev
password: dev
Does Ansible have something like that? Or can someone explain to me what the best practice is?
By default, Ansible assigns each host to the all group. With the following structure you can define group vars based on the inventory:
inventories/dev/hosts
inventories/dev/group_vars/all.yml
inventories/staging/hosts
inventories/staging/group_vars/all.yml
In inventories/dev/group_vars/all.yml:
name: dev
password: dev
In inventories/staging/group_vars/all.yml:
name: staging
password: staging
And then in your tasks, reference the vars with their names:
- name: htpasswd
  htpasswd:
    path: /etc/httpd/.htpasswd
    name: "{{ name }}"
    password: "{{ password }}"
    group: apache
    mode: 0640
  become: true
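Switching environments is then just a matter of pointing at the right inventory when running the playbook (site.yml is a hypothetical playbook name here):

# Picks up inventories/dev/group_vars/all.yml (name=dev, password=dev)
ansible-playbook -i inventories/dev site.yml

# Picks up inventories/staging/group_vars/all.yml (name=staging, password=staging)
ansible-playbook -i inventories/staging site.yml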
