I have a local Tekton Pipeline setup with tasks to clone and build an application. The code base is in Java, so the pipeline clones the code from Bitbucket and runs a Gradle build.
build.gradle
repositories {
    mavenCentral()
    maven {
        url "artifactregistry://${LOCATION}-maven.pkg.dev/${PROJECT}/${REPOSITORY}"
    }
}
gradle.properties
LOCATION=australia-southeast2
PROJECT=fetebird-350310
REPOSITORY=common
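For context (not part of the original question): the artifactregistry:// URL scheme is handled by Google's Artifact Registry Gradle plugin, which resolves credentials from GOOGLE_APPLICATION_CREDENTIALS or gcloud application-default credentials. A minimal sketch of the plugin wiring in build.gradle, with an illustrative version number:
plugins {
    // Google Artifact Registry support for Maven repositories in Gradle; the version shown is illustrative
    id "com.google.cloud.artifactregistry.gradle-plugin" version "2.2.1"
}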
I have set up a service account and passed it to the pipeline run
apiVersion: v1
kind: Secret
metadata:
  name: gcp-secret
  namespace: tekton-pipelines
type: kubernetes.io/opaque
stringData:
  gcs-config: |
    {
      "type": "service_account",
      "project_id": "xxxxx-350310",
      "private_key_id": "28e8xxxxx2642a8a0cd9cd5c2696",
      "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC3zOuPTogiZ2kU\nEYsGMCl4lUO48GSLOjOH1lwkQ76zxL\n0F6cfpV/8iwao/9IOqsmKoRPUZcQjqXFMEuCYJNhoScsn4TAYMNBeATVq+2JJ/5T\n2e7YfbbPVue9R36MfTwqDeI=\n-----END PRIVATE KEY-----\n",
      "client_email": "xxxxxx@xxxx-xxxx.iam.gserviceaccount.com",
      "client_id": "xxxxxxxxxxxxxxxxx",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/fetebird%40fetebird-350310.iam.gserviceaccount.com"
    }
service-account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
secrets:
  - name: git-ssh-auth
  - name: gcp-secret
During the Gradle build, I get the following exception:
> Could not resolve fete.bird:common:1.0.1.
2022-06-22T12:43:41.599990295Z > Could not get resource 'https://australia-southeast2-maven.pkg.dev/fetebird-350310/common/fete/bird/common/1.0.1/common-1.0.1.pom'.
2022-06-22T12:43:41.606630129Z > Could not GET 'https://australia-southeast2-maven.pkg.dev/fetebird-350310/common/fete/bird/common/1.0.1/common-1.0.1.pom'. Received status code 403 from server: Forbidden
implementation("fete.bird:common:1.0.1") is published on GCP artifact registry
In local development the Gradle build works fine, because the service account key is exported as an environment variable: export GOOGLE_APPLICATION_CREDENTIALS="file-location.json"
Pipeline Run
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: run-pipeline
  namespace: tekton-pipelines
spec:
  serviceAccountNames:
    - taskName: clone-repository
      serviceAccountName: git-service-account
    - taskName: build
      serviceAccountName: gcp-service-account
  pipelineRef:
    name: fetebird-discount
  workspaces:
    - name: shared-workspace
      persistentVolumeClaim:
        claimName: fetebird-discount-pvc
  params:
    - name: repo-url
      value: git@bitbucket.org:anandjaisy/discount.git
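For reference, a minimal sketch of how the same credential mechanism could be wired into the build Task (the Task itself is not shown in this question, so the Task, step, and volume names below are illustrative): mount gcp-secret into the Task and point GOOGLE_APPLICATION_CREDENTIALS at the mounted key file.
# Hypothetical excerpt of the build Task; only the credential wiring is shown.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build
spec:
  volumes:
    - name: gcp-credentials
      secret:
        secretName: gcp-secret
  steps:
    - name: gradle-build
      image: gradle:7-jdk17          # illustrative image
      volumeMounts:
        - name: gcp-credentials
          mountPath: /secrets/gcp
          readOnly: true
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /secrets/gcp/gcs-config   # key name taken from the Secret above
      script: |
        gradle build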
Related
I have encrypted two database passwords with kubeseal, but I am not sure how exactly to mount them in my configuration file, given that I am using Spring Boot.
The application keeps complaining about a missing password placeholder:
Could not resolve placeholder 'datasources.eco.password'
Here is the generated secret:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
  creationTimestamp: null
  name: database-keys
  namespace: eco-test
spec:
  encryptedData:
    ecoadmin: AgBPqs07GicbU4eyYXfQrVoRHCkfPHH8jxN8...sefwfs4fse
    ecodb: AgAHYRYpk5j+ZCyIDpYr89d8pYLJ6E8S...sr3245sefsf
  template:
    data: null
    metadata:
      annotations:
        sealedsecrets.bitnami.com/namespace-wide: "true"
      creationTimestamp: null
      name: database-keys
      namespace: eco-test
Here is where I try to mount the secret in my properties file:
datasources:
  eco:
    #url: jdbc:oracle:thin:@10.246...
    url: jdbc:oracle:thin:@12.234...
    username: ECO
    password:
      secretKeyRef:
        name: database-keys
        key: ecodb
    minPoolSize: 5
    maxPoolSize: 20
    edition: 'REL_2021_12_06'
  ecoadmin:
    #url: jdbc:oracle:thin:@10.246...
    url: jdbc:oracle:thin:@21.32...
    username: ECOADM
    password:
      secretKeyRef:
        name: database-keys
        key: ecoadmin
Not sure if you are confusing platform (Kubernetes) features with service (Spring Boot) features here.
When you configure your Spring Boot app to expect a value at "datasources.eco.password", you cannot use the Kubernetes way of mounting values from secrets there, because Spring expects something like
datasources:
  eco:
    password: password123
I assume that you can reference environment variables in your properties file, so one way to go would be to expose the secret's value as an environment variable and reference that in your properties file.
properties file:
datasources:
  eco:
    #url: jdbc:oracle:thin:@10.246...
    url: jdbc:oracle:thin:@12.234...
    username: ECO
    password: ${DB_ECO_KEY_PW}     # env var backed by the ecodb secret key
    minPoolSize: 5
    maxPoolSize: 20
    edition: 'REL_2021_12_06'
  ecoadmin:
    #url: jdbc:oracle:thin:@10.246...
    url: jdbc:oracle:thin:@21.32...
    username: ECOADM
    password: ${DB_ADMIN_KEY_PW}   # env var backed by the ecoadmin secret key
deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: <app>
          image: <image>
          env:
            # one env var per secret key; variable names are illustrative
            - name: DB_ECO_KEY_PW
              valueFrom:
                secretKeyRef:
                  name: database-keys
                  key: ecodb
            - name: DB_ADMIN_KEY_PW
              valueFrom:
                secretKeyRef:
                  name: database-keys
                  key: ecoadmin
          ...
references:
https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Using env variable in Spring Boot's application.properties
I'm running this tutorial https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html and found that the elasticsearch operator comes included with a pre-defined secret which is accessed through kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'. I was wondering how I can access it in a manifest file for a pod that will make use of this as an env var. The pod's manifest is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
    spec:
      containers:
        - name: user
          image: reactor/user
          env:
            - name: PORT
              value: "3000"
            - name: ES_SECRET
              valueFrom:
                secretKeyRef:
                  name: quickstart-es-elastic-user
                  key: { { .data.elastic } }
---
apiVersion: v1
kind: Service
metadata:
  name: user-svc
spec:
  selector:
    app: user
  ports:
    - name: user
      protocol: TCP
      port: 3000
      targetPort: 3000
When trying to define ES_SECRET as I did in this manifest, I get this error message: invalid map key: map[interface {}]interface {}{\".data.elastic\":interface {}(nil)}\n. Any help on resolving this would be much appreciated.
The secret returned by the API (kubectl get secret ...) is a JSON structure which looks like this:
{
  "data": {
    "elastic": "base64 encoded string"
  }
}
So you just need to replace
key: { { .data.elastic } }
with
key: elastic
since it is a secretKeyRef (i.e. you refer to a value under some key in the data (= contents) of the secret whose name you specified above). No need to worry about base64 decoding; Kubernetes does it for you.
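Applied to the Deployment above, the env entry becomes:
env:
  - name: ES_SECRET
    valueFrom:
      secretKeyRef:
        name: quickstart-es-elastic-user
        key: elastic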
I have deployed a custom-built SCDF 2.5.2 in an OpenShift environment, and it is up and running successfully. I followed the 2.5.0.RELEASE guide. The issue is that the properties given in server-config are not being picked up by server-deployment.yaml when I mount them. Although the mapping for application.yaml is visible in the deployment configuration, the properties are not read while the server is starting.
So when I build the custom SCDF I have to add all the server properties, including the Kubernetes memory limits and the Oracle (external) datasource properties, to the SCDF project's application.properties file. Only then are the Kubernetes properties read, the platform set up, and the external Oracle datasource connected. Below are the files that I'm using. I'm new to SCDF and Kubernetes, so please let me know if I'm missing anything anywhere.
The reason why I added the Kubernetes properties to the custom SCDF project's application.properties is explained in this question.
server-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scdf-server
  labels:
    app: scdf-server
data:
  application.yaml: |-
    spring:
      cloud:
        dataflow:
          task:
            platform:
              kubernetes:
                accounts:
                  default:
                    limits:
                      memory: 1024Mi
      datasource:
        url: jdbc:oracle:thin:@hostname:port/db
        username: root
        password: oracle-root-password
        driver-class-name: oracle.jdbc.OracleDriver
        testOnBorrow: true
        validationQuery: "SELECT 1"
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server
  labels:
    app: scdf-server
spec:
  selector:
    matchLabels:
      app: scdf-server
  replicas: 1
  template:
    metadata:
      labels:
        app: scdf-server
    spec:
      containers:
        - name: scdf-server
          image: docker-registry.default.svc:5000/batchadmin/scdf-server
          imagePullPolicy: Always
          volumeMounts:
            - name: config
              mountPath: /config
              readOnly: true
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /management/health
              port: 80
            initialDelaySeconds: 45
          readinessProbe:
            httpGet:
              path: /management/info
              port: 80
            initialDelaySeconds: 45
          resources:
            limits:
              cpu: 1.0
              memory: 2048Mi
            requests:
              cpu: 0.5
              memory: 1024Mi
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.namespace"
            - name: SERVER_PORT
              value: '80'
            - name: SPRING_CLOUD_CONFIG_ENABLED
              value: 'false'
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_ANALYTICS_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_TASK_COMPOSED_TASK_RUNNER_URI
              value: 'docker://springcloud/spring-cloud-dataflow-composed-task-runner:2.6.0.BUILD-SNAPSHOT'
            - name: SPRING_CLOUD_KUBERNETES_CONFIG_ENABLE_API
              value: 'false'
            - name: SPRING_CLOUD_KUBERNETES_SECRETS_ENABLE_API
              value: 'false'
            - name: SPRING_CLOUD_KUBERNETES_SECRETS_PATHS
              value: /etc/secrets
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_TASKS_ENABLED
              value: 'true'
            - name: SPRING_CLOUD_DATAFLOW_SERVER_URI
              value: 'http://${SCDF_SERVER_SERVICE_HOST}:${SCDF_SERVER_SERVICE_PORT}'
            # Add Maven repo for metadata artifact resolution for all stream apps
            - name: SPRING_APPLICATION_JSON
              value: "{ \"maven\": { \"local-repository\": null, \"remote-repositories\": { \"repo1\": { \"url\": \"https://repo.spring.io/libs-snapshot\"} } } }"
      serviceAccountName: scdf-sa
      volumes:
        - name: config
          configMap:
            name: scdf-server
            items:
              - key: application.yaml
                path: application.yaml
application.properties - the only thing that makes SCDF run right now.
spring.application.name=batchadmin
spring.datasource.url=jdbc:oracle:thin:@hostname:port/db
spring.datasource.username=root
spring.datasource.password=oracle_root_password
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.image-pull-policy= always
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.entry-point-style= exec
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.limits.cpu=2
spring.cloud.dataflow.task.platform.kubernetes.accounts.default.limits.memory=1024Mi
spring.flyway.enabled=false
spring.jpa.show-sql=true
spring.jpa.hibernate.use-new-id-generator-mappings=true
logging.level.root=info
logging.file.max-size=5GB
logging.file.max-history=30
logging.pattern.console=%d{dd-MM-yyyy HH:mm:ss.SSS} [%thread] %-5level %logger.%M - %msg%n
My main concern here, apart from the above issue, is the DB password. Since SCDF passes all the application.properties related to the datasource and Kubernetes as job parameters, including the DB password, the password is printed in the logs, visible in the running pod config, and stored in batch_job_execution_params.
Application.properties as Job params
To summarize the issues here as questions:
Why are the server-config.yaml properties not being used by server-deployment.yaml? What went wrong?
Since I pass the server properties from the application.properties file, all the properties are visible in the logs as well as in the DB. Is there a way I could hide them?
Thanks in advance.
server-role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: scdf-role
rules:
  - apiGroups: [""]
    resources: ["services", "pods", "replicationcontrollers", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
  - apiGroups: ["extensions"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
  - apiGroups: ["batch"]
    resources: ["cronjobs", "jobs"]
    verbs: ["create", "delete", "get", "list", "watch", "update", "patch"]
I am using the spring-cloud-starter-kubernetes-all dependency to read a config map from my Spring Boot microservices, and it is working fine.
After modifying the config map, I call the refresh endpoint:
minikube service list # to get the service URL
curl http://192.168.99.100:30824/actuator/refresh -d {} -H "Content-Type: application/json"
It works as expected and the application picks up the config map changes.
Issue
The above works fine if I have only 1 pod of my application, but when I use more than 1 pod, only 1 pod picks up the changes, not all of them.
In the example below only one pod picks up the changes:
[message-producer-5dc4b8b456-tbbjn message-producer] Say Hello to the World12431
[message-producer-5dc4b8b456-qzmgb message-producer] Say Hello to the World
minikube deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: message-producer
  labels:
    app: message-producer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: message-producer
  template:
    metadata:
      labels:
        app: message-producer
    spec:
      containers:
        - name: message-producer
          image: sandeepbhardwaj/message-producer
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: message-producer
spec:
  selector:
    app: message-producer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
configmap.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: message-producer
data:
  application.yml: |-
    message: Say Hello to the World
bootstrap.yml
spring:
  cloud:
    kubernetes:
      config:
        enabled: true
        name: message-producer
        namespace: default
      reload:
        enabled: true
        mode: EVENT
        strategy: REFRESH
        period: 3000
configuration
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

import lombok.Getter;
import lombok.Setter;

@ConfigurationProperties(prefix = "")
@Configuration
@Getter
@Setter
public class MessageConfiguration {
    private String message = "Default message";
}
rbac
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  namespace: default # "namespace" can be omitted since ClusterRoles are not namespaced
  name: service-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["services"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
  name: service-reader
subjects:
  - kind: User
    name: default # Name is case sensitive
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: service-reader
  apiGroup: rbac.authorization.k8s.io
This is happening because when you hit curl http://192.168.99.100:30824/actuator/refresh -d {} -H "Content-Type: application/json", Kubernetes sends that request to only one of the pods behind the service (round-robin load balancing).
You should use the property source reload feature by setting spring.cloud.kubernetes.reload.enabled=true. This reloads the properties whenever the config map changes, so you don't need to call the refresh endpoint yourself.
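If you do still want to drive it through the refresh endpoint with multiple replicas, the POST has to reach every pod rather than the Service. A rough sketch (assuming curl is available inside the message-producer image and the actuator is exposed on the container port 8080):
for pod in $(kubectl get pods -l app=message-producer -o name); do
  kubectl exec "$pod" -- curl -s -X POST -H "Content-Type: application/json" -d '{}' http://localhost:8080/actuator/refresh
done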
I would like to configure a default Index Lifecycle Management (ILM) policy and an index template during the installation of ES in a Kubernetes cluster, in the YAML installation file, instead of calling the ES API after installation. How can I do that?
I have Elasticsearch installed in a Kubernetes cluster based on a YAML file.
The following queries work:
PUT _ilm/policy/logstash_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

PUT _template/logstash_template
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.lifecycle.name": "logstash_policy"
  }
}
I would like to have the above setup in place right after installation, without making any curl queries.
I'll try to answer both of your questions.
index template
You can pass the index template with this configuration in your YAML file (in my setup, the Beats configuration; see the note further down). For instance:
setup.template:
  name: "<chosen template name>-%{[agent.version]}"
  pattern: "<chosen pattern name>-%{[agent.version]}-*"
Check out the ES documentation to see where exactly this setup.template belongs and you're good to go.
ilm policy
The way to make this work is to get the ilm-policy.json file, which holds your ILM configuration, into the pod's /usr/share/filebeat/ directory. In your YAML installation file, you can then use the following lines in your config to make it work (I've added my whole ILM config):
setup.ilm:
  enabled: true
  policy_name: "<policy name>"
  rollover_alias: "<rollover alias name>"
  policy_file: "ilm-policy.json"
  pattern: "{now/d}-000001"
So, how do you get the file there? The ingredients are: a ConfigMap containing your ilm-policy.json, and a volume and volumeMount in your DaemonSet configuration to mount the ConfigMap's contents into the pod's directory.
Note: I used Helm for deploying Filebeat to an AKS cluster (v1.15), which connects to Elastic Cloud. In your case, the application folder to store your JSON will probably be /usr/share/elasticsearch/ilm-policy.json.
Below, you'll see a line like {{ .Files.Get <...> }}, which is a Helm templating function that reads the contents of a file. Alternatively, you can copy the file contents directly into the ConfigMap YAML, but keeping the file separate makes it more manageable in my opinion.
The configMap
Make sure your ilm-policy.json is somewhere reachable by your deployments. This is how the configmap can look:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ilmpolicy-config
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  ilm-policy.json: |-
{{ .Files.Get "ilm-policy.json" | indent 4 }}
The Daemonset
At the DaemonSet's volumeMounts section, append this:
- name: ilm-configmap-volume
  mountPath: /usr/share/filebeat/ilm-policy.json
  subPath: ilm-policy.json
  readOnly: true
and at the volumes section, append this:
- name: ilm-configmap-volume
  configMap:
    name: ilmpolicy-config
I'm not exactly sure the spacing is correct in the browser, but this should give a pretty good idea.
I hope this works for your setup! good luck.
I've used the answer above to get a custom policy in place for Packetbeat running with ECK.
The ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: packetbeat-ilmpolicy
  labels:
    k8s-app: packetbeat
data:
  ilm-policy.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "min_age": "0ms",
            "actions": {
              "rollover": {
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "1d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
The Beat config:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: packetbeat
spec:
  type: packetbeat
  elasticsearchRef:
    name: demo
  kibanaRef:
    name: demo
  config:
    pipeline: geoip-info
    packetbeat.interfaces.device: any
    packetbeat.protocols:
      - type: dns
        ports: [53]
        include_authorities: true
        include_additionals: true
      - type: http
        ports: [80, 8000, 8080, 9200, 9300]
      - type: tls
        ports: [443, 993, 995, 5223, 8443, 8883, 9243]
    packetbeat.flows:
      timeout: 30s
      period: 30s
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
    setup.ilm:
      enabled: true
      overwrite: true
      policy_name: "packetbeat"
      policy_file: /usr/share/packetbeat/ilm-policy.json
      pattern: "{now/d}-000001"
  daemonSet:
    podTemplate:
      spec:
        terminationGracePeriodSeconds: 30
        hostNetwork: true
        automountServiceAccountToken: true # some older Beat versions are depending on this settings presence in k8s context
        dnsPolicy: ClusterFirstWithHostNet
        tolerations:
          - operator: Exists
        containers:
          - name: packetbeat
            securityContext:
              runAsUser: 0
              capabilities:
                add:
                  - NET_ADMIN
            volumeMounts:
              - name: ilmpolicy-config
                mountPath: /usr/share/packetbeat/ilm-policy.json
                subPath: ilm-policy.json
                readOnly: true
        volumes:
          - name: ilmpolicy-config
            configMap:
              name: packetbeat-ilmpolicy
The important parts in the Beat config are the volume and volume mount, where we mount the ConfigMap into the container.
After this, we can reference the file in the config with setup.ilm.policy_file.
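Once the Beat has started, the resulting policy can be checked with the same Dev Tools style of request used earlier in this thread:
GET _ilm/policy/packetbeat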