I have tried to connect from a pod (jhipster) to a Google Cloud SQL instance, but I have not been successful.
My pod is stuck in CrashLoopBackOff because it cannot connect to Cloud SQL. Error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:280)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    ...
ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [cl/databin/invoicing/folio/config/LiquibaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.DatabaseException: org.postgresql.util.PSQLException: Connection to localhost:5432 refused.
my folio-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: folio
namespace: jhipster
spec:
replicas: 2
selector:
matchLabels:
app: folio
version: "v1"
template:
metadata:
labels:
app: folio
version: "v1"
spec:
containers:
- name: folio-app
image: skilledboy/folio:v1
env:
- name: SPRING_PROFILES_ACTIVE
value: prod
- name: JHIPSTER_SECURITY_AUTHENTICATION_JWT_BASE64_SECRET
valueFrom:
secretKeyRef:
name: jwt-secret
key: secret
- name: SPRING_DATASOURCE_URL
value: jdbc:postgresql://localhost:5432/folio
- name: POSTGRES_DB_USER
value: user
- name: POSTGRES_DB_PASSWORD
value: password1
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=invo-project-233618:us-central1:folios=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
securityContext:
runAsUser: 2 # non-root user
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-oauth-credential
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: SPRING_SLEUTH_PROPAGATION_KEYS
value: "x-request-id,x-ot-span-context"
- name: JAVA_OPTS
value: " -Xmx256m -Xms256m"
resources:
requests:
memory: "256Mi"
cpu: "500m"
limits:
memory: "512Mi"
cpu: "1"
ports:
- name: http
containerPort: 8081
readinessProbe:
httpGet:
path: /folio/management/health
port: http
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6
livenessProbe:
httpGet:
path: /folio/management/health
port: http
initialDelaySeconds: 120
volumes:
- name: cloudsql-oauth-credential
secret:
secretName: cloudsql-oauth-credential
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
and in the configuration of my application-prod.yml
datasource:
type: com.zaxxer.hikari.HikariDataSource
url: jdbc:postgresql://127.0.0.1:5432/folio
username: ${POSTGRES_DB_USER}
password: ${POSTGRES_DB_PASSWORD}
What am I doing wrong? Can someone give me an idea of what might be misconfigured? Thanks.
Your problem is that you are telling the Cloud SQL proxy to run with -credential_file=/secrets/cloudsql/credentials.json, but you haven't actually provided a file at /secrets/cloudsql/ for it to use. (The volume in your config is at /etc/ssl/certs).
It's also worth pointing out that the -credential_file flag is for using a service account key, while the -token flag is used for an OAuth token (it's unclear which you are trying to use).
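For reference, here is a minimal sketch, reusing the names already in the manifest above, of how the proxy sidecar, the secret volume, and the datasource URL are normally expected to line up; it illustrates the pattern and is not a drop-in replacement for your deployment:

containers:
  - name: folio-app
    image: skilledboy/folio:v1
    env:
      - name: SPRING_DATASOURCE_URL
        value: jdbc:postgresql://127.0.0.1:5432/folio   # the proxy listens on localhost inside the same pod
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.11
    command: ["/cloud_sql_proxy",
              "-instances=invo-project-233618:us-central1:folios=tcp:5432",
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
      - name: cloudsql-oauth-credential
        mountPath: /secrets/cloudsql    # must match the directory used in -credential_file
        readOnly: true
volumes:
  - name: cloudsql-oauth-credential
    secret:
      secretName: cloudsql-oauth-credential   # the secret must contain a key named credentials.json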
Related
I'm creating microservices that are deployed in a docker-desktop Kubernetes cluster for development. I'm using Spring Security with Auth0, and the pods are using Kubernetes-native service discovery coupled with Spring Cloud Gateway. When I log in using Auth0, it authenticates just fine, but the token that is received appears to be empty, based on the error given.
I'm new to Kubernetes, and this error only seems to occur when running the application on the Kubernetes cluster. If I use Eureka for local testing, Auth0 works completely fine. I've tried to research whether the issue is that the token cannot be retrieved inside the Kubernetes cluster, and the only solution I've been able to find is to implement istioctl within the cluster.
FRONTEND deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-interface-app
labels:
app: user-interface-app
spec:
replicas: 1
selector:
matchLabels:
app: user-interface-app
template:
metadata:
labels:
app: user-interface-app
spec:
containers:
- name: user-interface-app
image: imageName:tag
imagePullPolicy: Always
ports:
- containerPort: 8084
env:
- name: GATEWAY_URL
value: api-gateway-svc.default.svc.cluster.local
- name: ZIPKIN_SERVER_URL
valueFrom:
configMapKeyRef:
name: gateway-cm
key: zipkin_service_url
- name: STRIPE_API_KEY
valueFrom:
secretKeyRef:
name: secret
key: stripe-api-key
- name: STRIPE_PUBLIC_KEY
valueFrom:
secretKeyRef:
name: secret
key: stripe-public-key
- name: STRIPE_WEBHOOK_SECRET
valueFrom:
secretKeyRef:
name: secret
key: stripe-webhook-secret
- name: AUTH_CLIENT_ID
valueFrom:
secretKeyRef:
name: secret
key: auth-client-id
- name: AUTH_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: secret
key: auth-client-secret
---
apiVersion: v1
kind: Service
metadata:
name: user-interface-svc
spec:
selector:
app: user-interface-app
type: ClusterIP
ports:
- port: 8084
targetPort: 8084
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: user-interface-lb
spec:
selector:
app: user-interface-app
type: LoadBalancer
ports:
- name: frontend
port: 8084
targetPort: 8084
protocol: TCP
- name: request
port: 80
targetPort: 8084
protocol: TCP
API-GATEWAY deployment.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: gateway-cm
data:
cart_service_url: http://cart-service-svc.default.svc.cluster.local
customer_profile_service_url: http://customer-profile-service-svc.default.svc.cluster.local
order_service_url: http://order-service-svc.default.svc.cluster.local
product_service_url: lb://product-service-svc.default.svc.cluster.local
zipkin_service_url: http://zipkin-svc.default.svc.cluster.local:9411
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-gateway-app
labels:
app: api-gateway-app
spec:
replicas: 1
selector:
matchLabels:
app: api-gateway-app
template:
metadata:
labels:
app: api-gateway-app
spec:
containers:
- name: api-gateway-app
image: imageName:imageTag
imagePullPolicy: Always
ports:
- containerPort: 8090
env:
- name: PRODUCT_SERVICE_URL
valueFrom:
configMapKeyRef:
name: gateway-cm
key: product_service_url
---
apiVersion: v1
kind: Service
metadata:
name: api-gateway-np
spec:
selector:
app: api-gateway-app
type: NodePort
ports:
- port: 80
targetPort: 8090
protocol: TCP
nodePort: 30499
---
apiVersion: v1
kind: Service
metadata:
name: api-gateway-svc
spec:
selector:
app: api-gateway-app
type: ClusterIP
ports:
- port: 80
targetPort: 8090
protocol: TCP
I have tried to deploy the producer-service app with a MySQL database in the Kubernetes cluster. When I try to deploy the producer app, the following validation error is thrown:
error: error validating "producer-deployment.yml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
producer-deployment.yml
apiVerion: v1
kind: Service
metadata:
name: producer-app
labels:
name: producer-app
spec:
ports:
-nodePort: 30163
port: 9090
targetPort: 9090
protocol: TCP
selector:
app: producer-app
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: producer-app
spec:
selector:
matchLabels:
app: producer-app
replicas: 3
template:
metadata:
labels:
app: producer-app
spec:
containers:
- name: producer
image: producer:1.0
ports:
- containerPort: 9090
env:
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: db-config
key: host
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: db-config
key: name
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-user
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-user
key: password
I have tried to find the error or typo within the config file but still couldn't. What is wrong with the producer-deployment.yml file?
Multiple issues:
It would be apiVersion: v1, not apiVerion: v1, in the Service.
The spec.ports section of the Service is malformed: nodePort, port, targetPort and protocol must be list items under ports, and the dash needs a space after it (- nodePort: ..., not -nodePort: ...), otherwise ports is not parsed as a list.
Your Service yaml should be like below:
apiVersion: v1
kind: Service
metadata:
name: producer-app
labels:
name: producer-app
spec:
ports:
- nodePort: 30163
port: 9090
targetPort: 9090
protocol: TCP
selector:
app: producer-app
type: NodePort
So your overall yaml should be:
apiVersion: v1
kind: Service
metadata:
name: producer-app
labels:
name: producer-app
spec:
ports:
- nodePort: 30163
port: 9090
targetPort: 9090
protocol: TCP
selector:
app: producer-app
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: producer-app
spec:
selector:
matchLabels:
app: producer-app
replicas: 3
template:
metadata:
labels:
app: producer-app
spec:
containers:
- name: producer
image: producer:1.0
ports:
- containerPort: 9090
env:
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: db-config
key: host
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: db-config
key: name
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-user
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-user
key: password
Please change the first line in producer-deployment.yml. The letter "s" is missing.
From
apiVerion: v1
To
apiVersion: v1
There is a typo in the first line: "apiVerion" should be "apiVersion".
Your first error (there is more than one) points you to the place where you should start your investigation:
error validating data: apiVersion not set;
As you know, each object in Kubernetes has its own apiVersion.
Check Understanding Kubernetes Objects, especially the Required Fields part:
In the .yaml file for the Kubernetes object you want to create, you'll need to set values for the following fields:
apiVersion - Which version of the Kubernetes API you're using to create this object
kind - What kind of object you want to create
metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
spec - What state you desire for the object
The precise format of the object spec is different for every Kubernetes object, and contains nested fields specific to that object. The Kubernetes API Reference can help you find the spec format for all of the objects you can create using Kubernetes.
You can check Latest 1.20 API here
These values are mandatory, and you won't be able to create an object without them. So please, next time, read the errors you receive more carefully.
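For illustration only, a minimal manifest carrying those four required top-level fields could look like this (the names and image are placeholders, not anything from your setup):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # placeholder name
spec:                        # desired state: one container running nginx
  containers:
    - name: example
      image: nginx:1.21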
I'm running Traefik v2.3.0 in an AKS (Azure Kubernetes Service) cluster and I'm currently trying to set up Basic Authentication on my Traefik UI.
The dashboard (Traefik UI) works fine without any authentication, but I get the "server not found" page when I try to access it with Basic Authentication.
Here is my configuration.
IngressRoute, Middleware for BasicAuth, Secret and Service:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: traefik-ui
namespace: ingress-basic
spec:
entryPoints:
- websecure
routes:
- kind: Rule
match: Host(`traefik-ui.domain.com`) && PathPrefix(`/`) || PathPrefix(`/dashboard`)
services:
- name: traefik-ui
port: 80
middlewares:
- name: traefik-ui-auth
namespace: ingress-basic
tls:
secretName: traefik-ui-cert
---
apiVersion: v1
kind: Secret
metadata:
name: traefik-secret
namespace: ingress-basic
data:
users: |2
dWlhZG06JGFwcjEkanJMZGtEb1okaS9BckJmZzFMVkNIMW80bGtKWFN6LwoK
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: traefik-ui-auth
namespace: ingress-basic
spec:
basicAuth:
secret: traefik-secret
---
apiVersion: v1
kind: Service
metadata:
name: traefik-ui
namespace: ingress-basic
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: traefik-ingress-lb
sessionAffinity: None
type: ClusterIP
DaemonSet and Service:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: traefik-ingress
namespace: ingress-basic
spec:
selector:
matchLabels:
app: traefik-ingress-lb
template:
metadata:
labels:
app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: size
operator: In
values:
- small
containers:
- args:
- --api.dashboard=true
- --accesslog
- --accesslog.fields.defaultmode=keep
- --accesslog.fields.headers.defaultmode=keep
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --entrypoints.metrics.address=:8082
- --providers.kubernetesIngress.ingressClass=traefik-cert-manager
- --certificatesresolvers.default.acme.email=info@domain.com
- --certificatesresolvers.default.acme.storage=acme.json
- --certificatesresolvers.default.acme.tlschallenge
- --providers.kubernetescrd
- --ping=true
- --pilot.token=xxxxxx-xxxx-xxxx-xxxxx-xxxxx-xx
- --metrics.statsd=true
- --metrics.statsd.address=localhost:8125
- --metrics.statsd.addEntryPointsLabels=true
- --metrics.statsd.addServicesLabels=true
image: traefik:v2.3.0
imagePullPolicy: IfNotPresent
name: traefik-ingress-lb
ports:
- containerPort: 80
name: web
protocol: TCP
- containerPort: 8080
name: admin
protocol: TCP
- containerPort: 443
name: websecure
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /acme/acme.json
name: acme
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: traefik-ingress
serviceAccountName: traefik-ingress
terminationGracePeriodSeconds: 60
tolerations:
- effect: NoSchedule
key: size
operator: Equal
value: small
volumes:
- hostPath:
path: /srv/configs/acme.json
type: ""
name: acme
With this configuration:
kubectl exec -it -n ingress-basic traefik-ingress-2m88q -- curl http://localhost:8080/dashboard/
404 page not found
When removing the Middleware and adding "--api.insecure" in the DaemonSet config:
kubectl exec -it -n ingress-basic traefik-ingress-1hf4q -- curl http://localhost:8080/dashboard/
<!DOCTYPE html><html><head><title>Traefik</title><meta charset=utf-8><meta name=description content="Traefik UI"><meta name=format-detection content="telephone=no"><meta name=msapplication-tap-highlight content=no><meta name=viewport content="user-scalable=no,initial-scale=1,maximum-scale=1,minimum-scale=1,width=device-width"><link rel=icon type=image/png href=statics/app-logo-128x128.png><link rel=icon type=image/png sizes=16x16 href=statics/icons/favicon-16x16.png><link rel=icon[...]</body></html>
Please let me know what I am doing wrong here. Is there any other way of doing it?
Regards,
Here's another take on the IngressRoute, adapted to your environment.
I think 99% of the issue is the actual route matching, especially since you say --api.insecure works.
Also, as a rule of thumb, enabling logging and the access log in the DaemonSet definition helps a lot:
- --log
- --log.level=DEBUG
- --accesslog
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: traefik-ui
namespace: ingress-basic
spec:
entryPoints:
- websecure
routes:
- match: Host(`traefik-ui.domain.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
kind: Rule
services:
- name: api#internal
kind: TraefikService
middlewares:
- name: traefik-basic-auth
tls:
secretName: traefik-ui-cert
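Note that the middleware name above (traefik-basic-auth) is this answer's own; for the route to work, a Middleware with that name has to exist in the same namespace and reference the basic-auth secret. A rough sketch, reusing the traefik-secret from the question:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: traefik-basic-auth
  namespace: ingress-basic
spec:
  basicAuth:
    secret: traefik-secret   # Secret containing an htpasswd-formatted "users" entry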
I'm trying to deploy an application with a MariaDB database on my k8s cluster. This is the deployment I use:
apiVersion: v1
kind: Service
metadata:
name: app-back
labels:
app: app-back
namespace: dev
spec:
type: ClusterIP
ports:
- port: 8080
name: app-back
selector:
app: app-back
---
apiVersion: v1
kind: Service
metadata:
name: app-db
labels:
app: app-db
namespace: dev
spec:
type: ClusterIP
clusterIP: None
ports:
- port: 3306
name: app-db
selector:
app: app-db
---
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql
labels:
app: mysql
data:
60-server.cnf: |
[mysqld]
bind-address = 0.0.0.0
skip-name-resolve
connect_timeout = 600
net_read_timeout = 600
net_write_timeout = 600
max_allowed_packet = 256M
default-time-zone = +00:00
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-db
namespace: dev
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: app-db
template:
metadata:
labels:
app: app-db
spec:
containers:
- name: app-db
image: mariadb:10.5.8
env:
- name: MYSQL_DATABASE
value: app
- name: MYSQL_USER
value: app
- name: MYSQL_PASSWORD
value: app
- name: MYSQL_RANDOM_ROOT_PASSWORD
value: "true"
ports:
- containerPort: 3306
name: app-db
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "400Mi"
cpu: "200m"
volumeMounts:
- name: config-volume
mountPath: /etc/mysql/conf.d
volumes:
- name: config-volume
configMap:
name: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-back
namespace: dev
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: app-back
template:
metadata:
labels:
app: app-back
spec:
containers:
- name: app-back
image: private-repository/app/app-back:latest
env:
- name: spring.profiles.active
value: dev
- name: DB_HOST
value: app-db
- name: DB_PORT
value: "3306"
- name: DB_NAME
value: app
- name: DB_USER
value: app
- name: DB_PASSWORD
value: app
ports:
- containerPort: 8080
name: app-back
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "200Mi"
cpu: "400m"
imagePullSecrets:
- name: docker-private-credentials
When I run this, the MariaDB container logs the following warnings:
2020-12-03 8:23:41 28 [Warning] Aborted connection 28 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 25 [Warning] Aborted connection 25 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 31 [Warning] Aborted connection 31 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 29 [Warning] Aborted connection 29 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
...
My app is stuck trying to connect to the database. The application is a Spring Boot application built with this Dockerfile:
FROM maven:3-adoptopenjdk-8 AS builder
WORKDIR /usr/src/mymaven/
COPY . .
RUN mvn clean package -e -s settings.xml -DskipTests
FROM tomcat:9-jdk8-adoptopenjdk-hotspot
ENV spring.profiles.active=dev
ENV DB_HOST=localhost
ENV DB_PORT=3306
ENV DB_NAME=app
ENV DB_USER=app
ENV DB_PASSWORD=app
COPY --from=builder /usr/src/mymaven/target/app.war /usr/local/tomcat/webapps/
Any idea?
OK, I found the solution. This was not a MariaDB error. It is due to Apache breaking the connection when running inside a container with too little memory. Setting the memory limit to 1500Mi solved the problem.
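For example, the resources block of the app-back container in the deployment above would then look roughly like this; only the memory limit is changed from the original values, and the exact figure may need tuning for your workload:

resources:
  requests:
    memory: "200Mi"
    cpu: "100m"
  limits:
    memory: "1500Mi"   # raised from 200Mi so the application server is not starved of memory
    cpu: "400m"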
I have deployed a custom-built SCDF 2.52 in an OpenShift environment, and it is up and running successfully. I followed the 2.5.0.RELEASE_Guide. The issue is that the properties given in server-config are not being picked up by the server-deployment.yaml file when I mount them. Although I can see the mapping for application.yaml in the deployment configuration, the properties are not read while the server is starting.
So when I build the custom SCDF, I have to add all the server properties, including the Kubernetes memory limits and the Oracle (external) datasource properties, to the SCDF project's application.properties file. Only then are the Kubernetes properties read, the platform set up, and the external Oracle datasource connected. Below are the files that I'm using. I'm new to SCDF and Kubernetes, so please let me know if I'm missing anything anywhere.
Also, the reason why I added the Kubernetes properties to the application.properties of the custom SCDF project is explained in this question.
server-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: scdf-server
labels:
app: scdf-server
data:
application.yaml: |-
spring:
cloud:
dataflow:
task:
platform:
kubernetes:
accounts:
default:
limits:
memory: 1024Mi
datasource:
url: jdbc:oracle:thin:@hostname:port/db
username: root
password: oracle-root-password
driver-class-name: oracle.jdbc.OracleDriver
testOnBorrow: true
validationQuery: "SELECT 1"
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: scdf-server
labels:
app: scdf-server
spec:
selector:
matchLabels:
app: scdf-server
replicas: 1
template:
metadata:
labels:
app: scdf-server
spec:
containers:
- name: scdf-server
image: docker-registry.default.svc:5000/batchadmin/scdf-server
imagePullPolicy: Always
volumeMounts:
- name: config
mountPath: /config
readOnly: true
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /management/health
port: 80
initialDelaySeconds: 45
readinessProbe:
httpGet:
path: /management/info
port: 80
initialDelaySeconds: 45
resources:
limits:
cpu: 1.0
memory: 2048Mi
requests:
cpu: 0.5
memory: 1024Mi
env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: "metadata.namespace"
- name: SERVER_PORT
value: '80'
- name: SPRING_CLOUD_CONFIG_ENABLED
value: 'false'
- name: SPRING_CLOUD_DATAFLOW_FEATURES_ANALYTICS_ENABLED
value: 'true'
- name: SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED
value: 'true'
- name: SPRING_CLOUD_DATAFLOW_TASK_COMPOSED_TASK_RUNNER_URI
value: 'docker://springcloud/spring-cloud-dataflow-composed-task-runner:2.6.0.BUILD-SNAPSHOT'
- name: SPRING_CLOUD_KUBERNETES_CONFIG_ENABLE_API
value: 'false'
- name: SPRING_CLOUD_KUBERNETES_SECRETS_ENABLE_API
value: 'false'
- name: SPRING_CLOUD_KUBERNETES_SECRETS_PATHS
value: /etc/secrets
- name: SPRING_CLOUD_DATAFLOW_FEATURES_TASKS_ENABLED
value : 'true'
- name: SPRING_CLOUD_DATAFLOW_SERVER_URI
value: 'http://${SCDF_SERVER_SERVICE_HOST}:${SCDF_SERVER_SERVICE_PORT}'
# Add Maven repo for metadata artifact resolution for all stream apps
- name: SPRING_APPLICATION_JSON
value: "{ \"maven\": { \"local-repository\": null, \"remote-repositories\": { \"repo1\": { \"url\": \"https://repo.spring.io/libs-snapshot\"} } } }"
serviceAccountName: scdf-sa
volumes:
- name: config
configMap:
name: scdf-server
items:
- key: application.yaml
path: application.yaml
application.properties - the only thing that makes the SCDF server run right now.
spring.application.name=batchadmin
spring.datasource.url=jdbc:oracle:thin:@hostname:port/db
spring.datasource.username=root
spring.datasource.password=oracle_root_password
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.image-pull-policy= always
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.entry-point-style= exec
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.limits.cpu=2
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.limits.memory=1024Mi
spring.flyway.enabled=false
spring.jpa.show-sql=true
spring.jpa.hibernate.use-new-id-generator-mappings=true
logging.level.root=info
logging.file.max-size=5GB
logging.file.max-history=30
logging.pattern.console=%d{dd-MM-yyyy HH:mm:ss.SSS} [%thread] %-5level %logger.%M - %msg%n
My main concern here, apart from the above issue, is the DB password. Since SCDF passes all the application.properties related to the datasource and Kubernetes as job parameters, including the DB password, the password is printed in the logs and is visible in the running pod config and in batch_job_execution_params.
[Screenshot: application.properties values showing up as job parameters]
To summarize the issues here as questions:
Why are the server-config.yaml properties not being used by server-deployment.yaml? What went wrong?
Since I pass the server properties from the application.properties file, all the properties are visible in the logs as well as in the DB. Is there a way I could hide them?
Thanks in advance.
server-role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: scdf-role
rules:
- apiGroups: [""]
resources: ["services", "pods", "replicationcontrollers", "persistentvolumeclaims"]
verbs: ["get", "list", "watch", "create", "delete", "update"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets", "deployments", "replicasets"]
verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
- apiGroups: ["extensions"]
resources: ["deployments", "replicasets"]
verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
- apiGroups: ["batch"]
resources: ["cronjobs", "jobs"]
verbs: ["create", "delete", "get", "list", "watch", "update", "patch"]