Passing annotations to scheduled tasks in Spring Cloud Data Flow

I am running SCDF on Kubernetes and have scheduled some tasks. I need to pass annotations to my task pods, so I set the following environment variables:
- name: SPRING_CLOUD_DEPLOYER_KUBERNETES_POD_ANNOTATIONS
  value:
- name: SPRING_CLOUD_DEPLOYER_KUBERNETES_JOB_ANNOTATIONS
  value:
These work when I manually trigger the task. I also want these annotations to be added to the pods created by the scheduled cron jobs. How can I do that?
I tried the following environment variables:
- name: SPRING_CLOUD_SCHEDULER_KUBERNETES_POD_ANNOTATIONS
  value:
- name: SPRING_CLOUD_SCHEDULER_KUBERNETES_JOB_ANNOTATIONS
  value:
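For reference, the deployer-side variables (which do take effect for manually launched tasks) expect a comma-separated list of key:value pairs. The annotation keys and values below are placeholders, shown only to illustrate the format:

```yaml
# Placeholder annotations -- comma-separate the pairs to set more than one.
- name: SPRING_CLOUD_DEPLOYER_KUBERNETES_POD_ANNOTATIONS
  value: "iam.amazonaws.com/role:role-arn"
- name: SPRING_CLOUD_DEPLOYER_KUBERNETES_JOB_ANNOTATIONS
  value: "example.com/owner:team-a"
```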

Support for podAnnotations and jobAnnotations is not implemented yet in the spring-cloud/spring-cloud-scheduler-kubernetes project. It would be a port of exactly the same functionality that is already available in the spring-cloud/spring-cloud-deployer-kubernetes project.
As discussed at spring-cloud/spring-cloud-dataflow-server-kubernetes/issues/428#issuecomment-488670392, it can be implemented and released when it is ready.
Contributions welcome!

Related

Kubernetes autogenerated environment variables set wrong?

During pod startup, Kubernetes creates some environment variables based on the Services I created (via the downward API?). The problem is that one of them, MY_APPLICATION_PORT, seems to be initialized incorrectly. It looks like:
MY_APPLICATION_PORT=tcp://192.168.0.5:7777
whereas I expect it to hold only the value 7777. The problem is that I have a Spring Boot application with this property in application.properties:
my.application.port=7777
When Spring resolves its properties, it prefers the value from the environment variable over the one from the .properties file, thus overwriting it with the incorrect value.
My question is: do you know how to control the creation of these Kubernetes env variables? I can override it in my deployment.yaml, but I wonder if there's another way.
EDIT:
I've found this as a closest description of my issue I've seen online:
https://github.com/kubernetes/kubernetes/issues/65130
This environment variable comes from compatibility with a very old Docker feature. You can disable it in Kubernetes by setting enableServiceLinks: false on the Pod spec, anywhere one may appear. For example:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      enableServiceLinks: false
      containers:
        - name: ...
          env: [...]
In particular the syntax is intended to be compatible with the environment variables generated by container links in first-generation Docker networking. Since then Docker has also introduced a DNS system into its core, and in pure Docker using links at all is now considered obsolete. It should be safe to always set this Kubernetes property, especially if it causes conflicts like what you describe here.

Do SpringBoot Configuration Trees support refresh

Do SpringBoot Configuration Trees support refresh?
I have the following configuration. If the /mnt/secrets/ volume changes, does Spring automatically refresh beans annotated with @ConfigurationProperties?
spring:
  application:
    name: "foo"
  # Read Secrets
  config:
    import:
      - configtree:/mnt/secrets/
    activate:
      on-cloud-platform: kubernetes
Currently, if a Kubernetes ConfigMap or Secret is modified, the pod is not redeployed automatically; a manual deployment is needed to pick up the changes.
A feature to facilitate this is in progress: https://github.com/kubernetes/kubernetes/issues/22368
Check whether your case falls along the same lines. If so, see if the following helps.
From a Kubernetes perspective, you can use Reloader (https://github.com/stakater/Reloader) to watch for changes and trigger an automatic redeployment.
It watches for changes in ConfigMaps and/or Secrets, then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet, or Rollout.
How to use it: https://github.com/stakater/Reloader#how-to-use-reloader
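With the default Reloader setup, opting a workload in is a single annotation on the Deployment. This is a minimal sketch; the Deployment name is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo                          # placeholder name
  annotations:
    # Roll the Deployment whenever a ConfigMap or Secret it references changes
    reloader.stakater.com/auto: "true"
spec:
  # ... rest of the Deployment spec unchanged
```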

GCP Cloud Build - Buildpack -> gradle -> testcontainers

We are having an issue after switching to Cloud Build. On our previous platform we just started the Gradle build directly. We use Spring Boot and Testcontainers for tests. In Cloud Build the Gradle project is built by a buildpack; Gradle builds the project and runs the tests. The integration tests fail because Testcontainers cannot start the required containers. What can be enabled in cloudbuild.yml to make this possible?
steps:
  - name: gcr.io/k8s-skaffold/pack
    args:
      - build
      - '$_GCR_HOSTNAME/$PROJECT_ID/$_SERVICE_NAME:$COMMIT_SHA'
      - '--env'
      - 'BP_GRADLE_BUILD_ARGUMENTS=$_GRADLE_ARGS'
      - '--tag=$_GCR_HOSTNAME/$PROJECT_ID/$_SERVICE_NAME:$_TAG_2'
      - '--builder=paketobuildpacks/builder:base'
      - '--path=.'
    id: Buildpack
    entrypoint: pack
Thank you in advance.
To keep this question from being completely unanswered, I recommend that anyone who wishes to perform multi-container integration tests use the following GitHub repository as a reference:
https://github.com/GoogleCloudPlatform/cloudbuild-integration-testing
And to answer the question specifically:
There is nothing that needs to be enabled in the buildpack step itself. Integration testing for containers is best performed by separate build steps (containers) that wait for the images to be built before running the tests, as can be seen in that repository.
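A hedged sketch of that pattern, assuming a compose file that wires the service under test to its dependencies (the compose file name, service name, and step id below are assumptions, not from the original setup):

```yaml
steps:
  # ... the pack/Buildpack step from above goes here ...
  - name: 'docker/compose:1.29.2'
    id: IntegrationTest
    waitFor: ['Buildpack']    # run only after the image has been built
    args: ['-f', 'docker-compose.test.yml', 'up',
           '--abort-on-container-exit', '--exit-code-from', 'tests']
```

Each Cloud Build step shares the same Docker daemon, so a compose-based step can start the freshly built image alongside its dependencies and propagate the test container's exit code to the build.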

Deploy and Upgrade pods in Namespace

I am working on a complex Java Spring Boot microservice-based application that comprises 30 services.
All are containerized, and the services are deployed from ECR into a Kubernetes namespace in AWS.
Every time, the namespace is purged and all services are re-deployed.
How can I update only one service inside a namespace? Is it possible to do that kind of deployment?
Can someone please share sample configurations using Helm, or any useful links?
How can I update only one service inside a namespace? Is it possible to do that kind of deployment?
If you execute helm upgrade, it should only update the resources that have changed.
You also need to understand how your chart packs its resources: if it hashes ConfigMaps or Secrets into resource names or into a checksum annotation on the pod template (the way Kustomize's name-suffix hashes do), then updating a Secret or ConfigMap changes the Deployment's pod template.
As a side effect, that "changes" the Deployment, and the result is a full rollout of the affected resources.
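If each service is packaged as its own chart (or its own release of a shared chart), a single service can be upgraded without purging the namespace. A minimal sketch, where the release name, chart path, namespace, and values key are all placeholders for your setup:

```shell
# Upgrade (or install on first run) only the account-service release,
# keeping previously supplied values for anything not overridden here.
helm upgrade --install account-service ./charts/account-service \
  --namespace my-namespace \
  --reuse-values \
  --set image.tag=1.4.2   # hypothetical values key; depends on your chart
```

Other releases in the namespace are untouched, so this replaces the purge-and-redeploy cycle with a per-service rollout.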

How to achieve that a task just run on one instance with spring-boot microservice architecture

We have one service named AccountService. Five instances are needed to support the traffic, and all of them run in Docker containers. Now I need to run a scheduled task, and this task should run on only a single AccountService instance, not on all five (which one does not matter).
My question is how to configure this. Can Eureka do it? ZooKeeper also seems to have the ability to manage a cluster; do I need to register AccountService in ZooKeeper?
I hope someone can share their experience with me.
Consider using a shared data store like Redis or, if you're already using a DB, a table in the DB, to have a task lock. The first instance to come up can grab the lock, run the task, and release the lock.
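The shared-store idea above boils down to a try-lock pattern. The sketch below uses an in-memory ConcurrentHashMap as a stand-in for the shared store (in production this would be, e.g., Redis SET NX with a TTL, or a uniquely-keyed DB row), so it only illustrates the atomicity contract, not real cross-instance coordination:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Try-lock pattern for "run on exactly one instance": every instance attempts
// an atomic insert of the lock key; only the winner runs the task.
class TaskLock {
    private final ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    /** Returns true if this instance acquired the lock and should run the task. */
    boolean tryAcquire(String taskName, String instanceId) {
        // putIfAbsent is atomic: exactly one caller sees null (the previous absence)
        return store.putIfAbsent(taskName, instanceId) == null;
    }

    /** Releases the lock, but only if this instance still holds it. */
    void release(String taskName, String instanceId) {
        store.remove(taskName, instanceId);
    }
}
```

In a real deployment the atomic insert becomes the store's conditional write (Redis SET NX, a unique-constraint INSERT, etc.), and a TTL or expiry column guards against an instance dying while holding the lock.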
Include spring-cloud-task in your dependencies (it is suitable for scheduled tasks).
Then enable this property: spring.cloud.task.single-instance-enabled=true
Add the Spring Integration dependencies; copy/paste from here: https://docs.spring.io/spring-cloud-task/docs/current/reference/#features-single-instance-enabled
Note:
Locks are created and stored in the TASK_LOCK table. Make sure it is clean, otherwise you will have problems restarting.
Use Spring-based task scheduling with @Scheduled.
