How to Configure Spring Boot on Kubernetes With Secrets

I have encrypted two database passwords with kubeseal, but I am not sure how exactly to reference them from my configuration file, given that I am using Spring Boot.
The application keeps complaining about a missing password placeholder:
Could not resolve placeholder 'datasources.eco.password'
Here is the generated secret:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
  creationTimestamp: null
  name: database-keys
  namespace: eco-test
spec:
  encryptedData:
    ecoadmin: AgBPqs07GicbU4eyYXfQrVoRHCkfPHH8jxN8...sefwfs4fse
    ecodb: AgAHYRYpk5j+ZCyIDpYr89d8pYLJ6E8S...sr3245sefsf
  template:
    data: null
    metadata:
      annotations:
        sealedsecrets.bitnami.com/namespace-wide: "true"
      creationTimestamp: null
      name: database-keys
      namespace: eco-test
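For reference, the sealed-secrets controller unseals this into an ordinary Secret with the same name and the same data keys, roughly like the following sketch (values shown as placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: database-keys
  namespace: eco-test
type: Opaque
data:
  ecoadmin: <base64-encoded password>
  ecodb: <base64-encoded password>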
Here is where I try to mount the secret in my properties file:
datasources:
  eco:
    #url: jdbc:oracle:thin:#10.246...
    url: jdbc:oracle:thin:#12.234...
    username: ECO
    password:
      secretKeyRef:
        name: database-keys
        key: ecodb
    minPoolSize: 5
    maxPoolSize: 20
    edition: 'REL_2021_12_06'
  ecoadmin:
    #url: jdbc:oracle:thin:#10.246...
    url: jdbc:oracle:thin:#21.32...
    username: ECOADM
    password:
      secretKeyRef:
        name: database-keys
        key: ecoadmin

I am not sure if you are confusing platform (k8s) features with service (Spring Boot) features here.
When you configure your Spring Boot app to expect a value at "datasources.eco.password", you cannot use the Kubernetes secretKeyRef mechanism for mounting values from secrets there, because Spring Boot expects a plain value, something like
datasources:
  eco:
    password: password123
Assuming you can reference environment variables in your properties file, one way to go would be to expose the secret values as environment variables and reference those in your properties file.
properties file:
datasources:
  eco:
    #url: jdbc:oracle:thin:#10.246...
    url: jdbc:oracle:thin:#12.234...
    username: ECO
    password: ${DB_ECO_KEY_PW}
    minPoolSize: 5
    maxPoolSize: 20
    edition: 'REL_2021_12_06'
  ecoadmin:
    #url: jdbc:oracle:thin:#10.246...
    url: jdbc:oracle:thin:#21.32...
    username: ECOADM
    password: ${DB_ADMIN_KEY_PW}
deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: <app>
          image: <image>
          env:
            - name: DB_ECO_KEY_PW
              valueFrom:
                secretKeyRef:
                  name: database-keys
                  key: ecodb
            - name: DB_ADMIN_KEY_PW
              valueFrom:
                secretKeyRef:
                  name: database-keys
                  key: ecoadmin
          ...
references:
https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Using env variable in Spring Boot's application.properties
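If you would rather not list each key individually, Kubernetes can also inject every key of a secret as environment variables via envFrom. This is only a sketch, assuming the unsealed Secret is named database-keys:
containers:
  - name: <app>
    image: <image>
    envFrom:
      - secretRef:
          name: database-keys
The variable names then match the secret keys, so the properties file would reference ${ecodb} and ${ecoadmin} directly.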

Related

how to access go templated kubernetes secret in manifest

I'm running this tutorial https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html and found that the elasticsearch operator comes included with a pre-defined secret which is accessed through kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'. I was wondering how I can access it in a manifest file for a pod that will make use of this as an env var. The pod's manifest is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
    spec:
      containers:
        - name: user
          image: reactor/user
          env:
            - name: PORT
              value: "3000"
            - name: ES_SECRET
              valueFrom:
                secretKeyRef:
                  name: quickstart-es-elastic-user
                  key: { { .data.elastic } }
---
apiVersion: v1
kind: Service
metadata:
  name: user-svc
spec:
  selector:
    app: user
  ports:
    - name: user
      protocol: TCP
      port: 3000
      targetPort: 3000
When trying to define ES_SECRET as I did in this manifest, I get this error message: invalid map key: map[interface {}]interface {}{".data.elastic":interface {}(nil)}. Any help on resolving this would be much appreciated.
The secret returned via the API (kubectl get secret ...) is a JSON structure that looks like this:
{
  "data": {
    "elastic": "base64 encoded string"
  }
}
So you just need to replace
key: { { .data.elastic } }
with
key: elastic
since it is a secretKeyRef (i.e. you are referring to a value at some key in the data (= contents) of the secret whose name you specified above). No need to worry about base64 decoding; Kubernetes does it for you.
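Putting that together, the corrected env entry in the Deployment would look like this:
env:
  - name: PORT
    value: "3000"
  - name: ES_SECRET
    valueFrom:
      secretKeyRef:
        name: quickstart-es-elastic-user
        key: elastic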

Validating Error on deployment in Kubernetes

I have tried to deploy the producer-service app with a MySQL database in the Kubernetes cluster. When I try to deploy the producer app, the following validation error is thrown:
error: error validating "producer-deployment.yml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
producer-deployment.yml
apiVerion: v1
kind: Service
metadata:
  name: producer-app
  labels:
    name: producer-app
spec:
  ports:
    -nodePort: 30163
    port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: producer-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: producer-app
spec:
  selector:
    matchLabels:
      app: producer-app
  replicas: 3
  template:
    metadata:
      labels:
        app: producer-app
    spec:
      containers:
        - name: producer
          image: producer:1.0
          ports:
            - containerPort: 9090
          env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: host
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
I have tried to find the error or typo within the config file but still couldn't. What is wrong with the producer-deployment.yml file?
There are multiple issues:
It should be apiVersion: v1, not apiVerion: v1, in the Service.
The spec.ports section of the Service is formed incorrectly: nodePort, port, targetPort and protocol belong to a single list entry under ports, but the indentation is wrong.
Your Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  name: producer-app
  labels:
    name: producer-app
spec:
  ports:
    - nodePort: 30163
      port: 9090
      targetPort: 9090
      protocol: TCP
  selector:
    app: producer-app
  type: NodePort
So your overall yaml should be:
apiVersion: v1
kind: Service
metadata:
  name: producer-app
  labels:
    name: producer-app
spec:
  ports:
    - nodePort: 30163
      port: 9090
      targetPort: 9090
      protocol: TCP
  selector:
    app: producer-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: producer-app
spec:
  selector:
    matchLabels:
      app: producer-app
  replicas: 3
  template:
    metadata:
      labels:
        app: producer-app
    spec:
      containers:
        - name: producer
          image: producer:1.0
          ports:
            - containerPort: 9090
          env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: host
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
Please change the first line in producer-deployment.yml. The letter s is missing.
From
apiVerion: v1
To
apiVersion: v1
There is a typo in the first line: "apiVerion" should be "apiVersion".
Your first error (there is more than one) points you to the place where you should start your investigation:
error validating data: apiVersion not set;
As you know, each object in Kubernetes has its own apiVersion.
Check Understanding Kubernetes Objects, especially the Required Fields part:
In the .yaml file for the Kubernetes object you want to create, you'll need to set values for the following fields:
apiVersion - Which version of the Kubernetes API you're using to create this object
kind - What kind of object you want to create
metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
spec - What state you desire for the object
The precise format of the object spec is different for every Kubernetes object, and contains nested fields specific to that object. The Kubernetes API Reference can help you find the spec format for all of the objects you can create using Kubernetes.
You can check Latest 1.20 API here
These values are mandatory and you won't be able to create an object without them. So please, next time, read the errors you receive more carefully.
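As a minimal illustration of those four required fields (with a hypothetical object name):
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 80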

DB Credentials Exposed as part of job parameters when executing a task from SCDF

I have a custom-built SCDF which is built as a Docker image in OpenShift and referenced in server-deployment.yaml as a Docker image. I use an Oracle DB to store the task metadata; it is an external datasource here. I pass all the DB properties in a ConfigMap, and the DB password is base64 encoded and added as a secret. These DB details are used by SCDF to store task metadata.
SCDF passes these datasource properties, including the DB password from the ConfigMap, to the executing job as job parameters, so they end up printed in the logs and stored in the batch_job_execution_params table.
I thought referencing the password as a secret in the ConfigMap would resolve this, but it does not. Below are the log and table snippets showing the job parameters being printed.
How can I avoid passing these DB properties as job parameters to the executing job, so the credentials are not exposed?
12-06-2020 18:12:38.540 [main] INFO org.springframework.batch.core.launch.support.SimpleJobLauncher.run - Job:
[FlowJob: [name=Job]] launched with the following parameters: [{
-spring.cloud.task.executionid=8010,
-spring.cloud.data.flow.platformname=default,
-spring.datasource.username=ACTUAL_USERNAME,
-spring.cloud.task.name=Alljobs,
Job.ID=1591985558466,
-spring.datasource.password=ACTUAL_PASSWORD,
-spring.datasource.driverClassName=oracle.jdbc.OracleDriver,
-spring.datasource.url=DATASOURCE_URL,
-spring.batch.job.names=Job_1}]
Pod Created for the Job execution - openshift screenshot
Database Table
Custom SCDF Dockerfile.yaml
===========================
FROM maven:3.5.2-jdk-8-alpine AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn package
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/BatchAdmin-0.0.1-SNAPSHOT.jar /app/
ENTRYPOINT ["java", "-jar", "BatchAdmin-0.0.1-SNAPSHOT.jar"]
Deployment.yaml
===============
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server
  labels:
    app: scdf-server
spec:
  selector:
    matchLabels:
      app: scdf-server
  replicas: 1
  template:
    metadata:
      labels:
        app: scdf-server
    spec:
      containers:
        - name: scdf-server
          image: docker-registry.default.svc:5000/batchadmin/scdf-server #DockerImage
          imagePullPolicy: Always
          volumeMounts:
            - name: config
              mountPath: /config
              readOnly: true
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /management/health
              port: 80
            initialDelaySeconds: 45
          readinessProbe:
            httpGet:
              path: /management/info
              port: 80
            initialDelaySeconds: 45
          resources:
            limits:
              cpu: 1.0
              memory: 2048Mi
            requests:
              cpu: 0.5
              memory: 1024Mi
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.namespace"
            - name: SERVER_PORT
              value: '80'
            - name: SPRING_CLOUD_CONFIG_ENABLED
              value: 'false'
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_ANALYTICS_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_TASK_COMPOSED_TASK_RUNNER_URI
              value: 'docker://springcloud/spring-cloud-dataflow-composed-task-runner:2.6.0.BUILD-SNAPSHOT'
            - name: SPRING_CLOUD_KUBERNETES_CONFIG_ENABLE_API
              value: 'true'
            - name: SPRING_CLOUD_KUBERNETES_SECRETS_ENABLE_API
              value: 'true'
            - name: SPRING_CLOUD_KUBERNETES_SECRETS_PATHS
              value: /etc/secrets
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_TASKS_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_KUBERNETES_CONFIG_NAME
              value: scdf-server
            - name: SPRING_CLOUD_DATAFLOW_SERVER_URI
              value: 'http://${SCDF_SERVER_SERVICE_HOST}:${SCDF_SERVER_SERVICE_PORT}'
            # Add Maven repo for metadata artifact resolution for all stream apps
            - name: SPRING_APPLICATION_JSON
              value: "{ \"maven\": { \"local-repository\": null, \"remote-repositories\": { \"repo1\": { \"url\": \"https://repo.spring.io/libs-snapshot\"} } } }"
      serviceAccountName: scdf-sa
      volumes:
        - name: config
          configMap:
            name: scdf-server
            items:
              - key: application.yaml
                path: application.yaml
      #- name: SPRING_CLOUD_DATAFLOW_FEATURES_TASKS_ENABLED
      #value : 'true'
server-config.yaml
==================
apiVersion: v1
kind: ConfigMap
metadata:
  name: scdf-server
  labels:
    app: scdf-server
data:
  application.yaml: |-
    spring:
      cloud:
        dataflow:
          task:
            platform:
              kubernetes:
                accounts:
                  default:
                    limits:
                      memory: 1024Mi
                      cpu: 2
                    entry-point-style: exec
                    image-pull-policy: always
      datasource:
        url: jdbc:oracle:thin:#db_url
        username: BATCH_APP
        password: ${oracle-root-password}
        driver-class-name: oracle.jdbc.OracleDriver
        testOnBorrow: true
        validationQuery: "SELECT 1"
      flyway:
        enabled: false
      jpa:
        hibernate:
          use-new-id-generator-mappings: true
oracle-secrets.yaml
===================
apiVersion: v1
kind: Secret
metadata:
  name: oracle
  labels:
    app: oracle
data:
  oracle-root-password: a2xldT3ederhgyzFCajE4YQ==
Any help would be much appreciated. Thanks.
In SCDF v2.6.2 the team fixed this issue: the DB credentials are no longer exposed in the logs, the pod description page or the database. The fix is not enabled by default (the credentials would still be visible), so anyone who has this issue has to add the following environment variable to the Deployment configuration and set its value to true.
SPRING_CLOUD_DATAFLOW_TASK_USE_KUBERNETES_SECRETS_FOR_DB_CREDENTIALS = true
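In the server Deployment shown above, that is simply one more entry in the scdf-server container's env list, along these lines:
env:
  - name: SPRING_CLOUD_DATAFLOW_TASK_USE_KUBERNETES_SECRETS_FOR_DB_CREDENTIALS
    value: 'true'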
This is not a perfect solution, but you can mask the password in the log by using features of Logback (the default logging library used by Spring Boot).
Put the configuration below into your Logback config; it replaces the password with ****.
<springProfile name="local">
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>
                %d{dd-MM-yyyy HH:mm:ss.SSS} [%thread] %-5level %logger{36}.%M - %replace(%msg){'password=\S*', 'password=****'}%n
            </pattern>
        </encoder>
    </appender>
    <root level="info">
        <appender-ref ref="console"/>
    </root>
</springProfile>
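With that pattern in place, a logged job parameter such as -spring.datasource.password=ACTUAL_PASSWORD, would come out roughly as -spring.datasource.password=**** (the \S* in the regex also swallows the trailing comma). Note the springProfile above is scoped to the local profile; adjust it to whichever profile your cluster deployment actually runs with.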
For the password logged in the database, it looks like we are out of luck. The best we can do is put it in a separate schema and restrict the permissions for accessing those SCDF tables.

server-deployment.yml not reading values from server-config.yml in Spring Cloud Data flow server

I have deployed the custom-built SCDF 2.52 in an OpenShift environment, and it is up and running successfully. I followed the guide 2.5.0.RELEASE_Guide. The issue is that the properties given in server-config are not being picked up by server-deployment.yaml when I mount them. Although the mapping for application.yaml is visible in the deployment configuration, the properties are not read while the server is starting.
So when I build the custom SCDF I have to add all the server properties, including Kubernetes memory limits and the Oracle (external) datasource properties, to the SCDF project's application.properties file. Only then are the Kubernetes properties read, the platform set up and the external Oracle datasource connected. Below are the files that I'm using. I'm new to SCDF and Kubernetes, so please let me know if I'm missing anything anywhere.
Also, for why I added the Kubernetes properties to the application.properties of the custom SCDF project, see the reason here in this question.
server-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scdf-server
  labels:
    app: scdf-server
data:
  application.yaml: |-
    spring:
      cloud:
        dataflow:
          task:
            platform:
              kubernetes:
                accounts:
                  default:
                    limits:
                      memory: 1024Mi
      datasource:
        url: jdbc:oracle:thin:#hostname:port/db
        username: root
        password: oracle-root-password
        driver-class-name: oracle.jdbc.OracleDriver
        testOnBorrow: true
        validationQuery: "SELECT 1"
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server
  labels:
    app: scdf-server
spec:
  selector:
    matchLabels:
      app: scdf-server
  replicas: 1
  template:
    metadata:
      labels:
        app: scdf-server
    spec:
      containers:
        - name: scdf-server
          image: docker-registry.default.svc:5000/batchadmin/scdf-server
          imagePullPolicy: Always
          volumeMounts:
            - name: config
              mountPath: /config
              readOnly: true
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /management/health
              port: 80
            initialDelaySeconds: 45
          readinessProbe:
            httpGet:
              path: /management/info
              port: 80
            initialDelaySeconds: 45
          resources:
            limits:
              cpu: 1.0
              memory: 2048Mi
            requests:
              cpu: 0.5
              memory: 1024Mi
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.namespace"
            - name: SERVER_PORT
              value: '80'
            - name: SPRING_CLOUD_CONFIG_ENABLED
              value: 'false'
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_ANALYTICS_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_TASK_COMPOSED_TASK_RUNNER_URI
              value: 'docker://springcloud/spring-cloud-dataflow-composed-task-runner:2.6.0.BUILD-SNAPSHOT'
            - name: SPRING_CLOUD_KUBERNETES_CONFIG_ENABLE_API
              value: 'false'
            - name: SPRING_CLOUD_KUBERNETES_SECRETS_ENABLE_API
              value: 'false'
            - name: SPRING_CLOUD_KUBERNETES_SECRETS_PATHS
              value: /etc/secrets
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_TASKS_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_SERVER_URI
              value: 'http://${SCDF_SERVER_SERVICE_HOST}:${SCDF_SERVER_SERVICE_PORT}'
            # Add Maven repo for metadata artifact resolution for all stream apps
            - name: SPRING_APPLICATION_JSON
              value: "{ \"maven\": { \"local-repository\": null, \"remote-repositories\": { \"repo1\": { \"url\": \"https://repo.spring.io/libs-snapshot\"} } } }"
      serviceAccountName: scdf-sa
      volumes:
        - name: config
          configMap:
            name: scdf-server
            items:
              - key: application.yaml
                path: application.yaml
application.properties - the only thing that makes the SCDF run right now.
spring.application.name=batchadmin
spring.datasource.url=jdbc:oracle:thin:#hostname:port/db
spring.datasource.username=root
spring.datasource.password=oracle_root_password
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.image-pull-policy= always
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.entry-point-style= exec
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.limits.cpu=2
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.limits.memory=1024Mi
spring.flyway.enabled=false
spring.jpa.show-sql=true
spring.jpa.hibernate.use-new-id-generator-mappings=true
logging.level.root=info
logging.file.max-size=5GB
logging.file.max-history=30
logging.pattern.console=%d{dd-MM-yyyy HH:mm:ss.SSS} [%thread] %-5level %logger.%M - %msg%n
My main concern here, apart from the above issue, is the DB password. Since SCDF passes all the datasource and Kubernetes related application.properties entries as job parameters, including the DB password, the password is printed in the logs and visible in the running pod config and in batch_job_execution_params.
Application.properties as Job params
To summarize the issues here as questions:
Why are the server-config.yaml properties not being used by server-deployment.yaml? What went wrong?
Since I pass the server properties from the application.properties file, all the properties are visible in the logs as well as the DB. Is there a way I could hide them?
Thanks in advance.
server-role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: scdf-role
rules:
  - apiGroups: [""]
    resources: ["services", "pods", "replicationcontrollers", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
  - apiGroups: ["extensions"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
  - apiGroups: ["batch"]
    resources: ["cronjobs", "jobs"]
    verbs: ["create", "delete", "get", "list", "watch", "update", "patch"]

pod spring boot(jhipster) not connect cloud SQL

I have tried to connect from a pod (JHipster) to a Google Cloud SQL instance, but I have not been successful.
My pod is left in CrashLoopBackOff because it cannot connect to Cloud SQL. Error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
  at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:280)
  at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
  ...
ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [cl/databin/invoicing/folio/config/LiquibaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.DatabaseException: org.postgresql.util.PSQLException: Connection to localhost:5432 refused.
my folio-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: folio
  namespace: jhipster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: folio
      version: "v1"
  template:
    metadata:
      labels:
        app: folio
        version: "v1"
    spec:
      containers:
        - name: folio-app
          image: skilledboy/folio:v1
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: JHIPSTER_SECURITY_AUTHENTICATION_JWT_BASE64_SECRET
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: secret
            - name: SPRING_DATASOURCE_URL
              value: jdbc:postgresql://localhost:5432/folio
            - name: POSTGRES_DB_USER
              value: user
            - name: POSTGRES_DB_PASSWORD
              value: password1
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=invo-project-233618:us-central1:folios=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2 # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-oauth-credential
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: SPRING_SLEUTH_PROPAGATION_KEYS
              value: "x-request-id,x-ot-span-context"
            - name: JAVA_OPTS
              value: " -Xmx256m -Xms256m"
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"
          ports:
            - name: http
              containerPort: 8081
          readinessProbe:
            httpGet:
              path: /folio/management/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 15
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /folio/management/health
              port: http
            initialDelaySeconds: 120
      volumes:
        - name: cloudsql-oauth-credential
          secret:
            secretName: cloudsql-oauth-credential
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
and in the configuration of my application-prod.yml
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  url: jdbc:postgresql://127.0.0.1:5432/folio
  username: ${POSTGRES_DB_USER}
  password: ${POSTGRES_DB_PASSWORD}
What do I have wrong? Can someone give me an idea of what might be off? Thanks.
Your problem is that you are telling the Cloud SQL proxy to run with -credential_file=/secrets/cloudsql/credentials.json, but you haven't actually provided a file at /secrets/cloudsql/ for it to use. (The volume in your config is at /etc/ssl/certs).
It's also worth pointing out that the -credential_file flag is for using a service account key, while the -token flag is used for an OAuth token (it's unclear which you are trying to use).
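As a sketch of the first point, assuming the cloudsql-oauth-credential secret really does contain a service account key under the key credentials.json, the proxy container needs that secret mounted at the path the flag points to:
spec:
  containers:
    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.11
      command: ["/cloud_sql_proxy",
                "-instances=invo-project-233618:us-central1:folios=tcp:5432",
                "-credential_file=/secrets/cloudsql/credentials.json"]
      volumeMounts:
        - name: cloudsql-oauth-credential
          mountPath: /secrets/cloudsql
          readOnly: true
  volumes:
    - name: cloudsql-oauth-credential
      secret:
        secretName: cloudsql-oauth-credential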
