How to override Spring Boot Consul config properties?

I use a distributed Consul config in my application, and it works well. But I also want to override some config properties when I start the application with a different Spring profile. For example, I define the Kafka server address in the main Consul configuration:
spring:
  kafka:
    bootstrap-servers: some_kafka:9092
and I want to use a different address when I switch to the Spring profile named "local":
spring:
  kafka:
    bootstrap-servers: localhost:9092
How can I do this? I tried using a bootstrap file with a profile suffix (bootstrap-local.yml) and Consul profile-dependent settings (stored in the folder "application,local/"), but it doesn't work for me.

In order to be able to override properties coming from Consul, you first need to add two properties to the configuration stored in Consul to enable the override. The properties are:
"spring.cloud.config.allowOverride": "true",
"spring.cloud.config.overrideSystemProperties": "false"
See the Spring Cloud documentation on overriding the values of remote properties for more details.
Once that is done, you can override the properties with a JVM parameter, which for your example would be -Dspring.kafka.bootstrap-servers=localhost:9092, or you can do the same with a bootstrap-local.yml and set the active profile to local.
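A minimal sketch of what the two pieces could look like, assuming the override flags live alongside the rest of the configuration stored in Consul and the local override sits in a profile-specific bootstrap file (the Consul KV path and the override-none flag below are assumptions, not taken from the question):

Stored in Consul (e.g. as YAML under config/application/data), together with the other properties:
spring:
  cloud:
    config:
      allow-override: true
      override-system-properties: false
  kafka:
    bootstrap-servers: some_kafka:9092

bootstrap-local.yml, picked up only when the "local" profile is active:
spring:
  kafka:
    bootstrap-servers: localhost:9092
  cloud:
    config:
      override-none: true   # may be needed so that local files, not only -D system properties, win over the remote source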

Related

Is there any way to add `application` tag to standard prometheus micrometer metrics?

I have a few Spring Boot microservices with Actuator and exposed Prometheus metrics. For example:
# HELP process_uptime_seconds The uptime of the Java virtual machine
# TYPE process_uptime_seconds gauge
process_uptime_seconds 3074.971
But there is no application tag, so I'm not able to bind the metrics to a certain application within a Grafana dashboard.
Also, I expect to run several instances of some microservices, so in general it would be great to add an instance tag as well.
Is there any way to customize the standard metrics with these tags?
The best way to add tags is to use Prometheus service discovery. This keeps these tags out of your application code and keeps the application from being concerned about where it runs.
However, sometimes, if you absolutely need those extra tags (because the service has extra insight that Prometheus service discovery isn't surfacing), you can't do this with the Java simple client (the Go client does support it, though).
It turns out this is offered via a Micrometer feature called 'Common Tags', which wraps the Prometheus Java client. You set up your registry so the tags are applied via a config() call:
registry.config().commonTags("stack", "prod", "region", "us-east-1");
What I usually do is use Maven filtering on the resource files (e.g. application.yml), which replaces Maven-known properties like project.artifactId. Spring's configuration then takes care of interpolating management.metrics.tags.application.
An application.yml example:
spring:
  application:
    name: ${project.artifactId}
management:
  metrics:
    tags:
      application: ${spring.application.name}
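If you are not using Maven resource filtering, the same effect can be achieved by setting the application name directly; the service name below is just a placeholder:

spring:
  application:
    name: my-service                      # placeholder; use your real application name
management:
  metrics:
    tags:
      application: ${spring.application.name}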

How to enable spring-actuator to collect metrics from multiple applications

I have a pod which consists of multiple containers, each running an application. How do I enable Actuator to fetch metrics from these applications? I couldn't find a way to do this.
There are four microservices running in the pod on different ports, say 8082, 8080, 8081 and 8083, but Actuator is scraping the metrics only from the microservice running on 8080 (the default port).
I tried adding the application properties shown in the code section to all services, but it didn't work. Here is the application.properties content:
management.endpoint.metrics.enabled=true
management.endpoints.web.exposure.include=*
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true
management.metrics.use-global-registry=true
management.server.port=8888
Expected output: I should be able to see the metrics from each application using the /metrics endpoint.
I was able to solve this problem. Here are the steps:
Configure a separate Actuator port for each service in its application.properties.
Expose the configured ports in the deployment YAML (in the case of Kubernetes).
That's it.
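A rough sketch of both steps, with placeholder port numbers and container names (YAML form shown; the properties form works the same):

Each service's application configuration gets its own, unique management port:
management:
  server:
    port: 9081          # e.g. 9082, 9083, ... for the other containers
  endpoints:
    web:
      exposure:
        include: "*"

Kubernetes deployment fragment exposing those ports on the containers:
containers:
  - name: service-a             # placeholder container names
    ports:
      - containerPort: 9081     # service-a's management port
  - name: service-b
    ports:
      - containerPort: 9082     # service-b's management port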

Access specific JMX metric information on JConsole for a Spring Boot Application

I have a Spring Boot application packaged as a WAR and deployed on a Tomcat 9 server.
It's been configured to expose the following metrics through JMX:
spring.jmx.default-domain: my-app
management.endpoints.jmx.exposure.include: health,info,metrics
I can connect to Tomcat through JConsole and see the my-app MBean that offers those 3 endpoints.
By selecting Metrics -> Operations -> listNames and invoking the listNames method, I can get the whole list of exposed metrics.
Now I would like to see a specific metric (e.g. jvm.memory.used) by going to Metrics -> Operations -> metrics.
However the metric(requiredMetricName, tag) method is disabled.
How can I get the value of a specific metric from the mbean in JConsole?
The reason it's disabled is that JConsole doesn't allow entering parameters of complex types. See https://stackoverflow.com/a/12025340/62667
But if you use an alternative JMX interface (e.g. add Hawtio to your application), you can use that to invoke the operations.

How to connect to Kafka Mesos Framework from an application using Spring Cloud Stream?

Having a Mesos-Marathon cluster in place and a Spring Boot application with Spring Cloud Stream that consumes a topic from Kafka, we now want to integrate Kafka with the Mesos cluster. For this we want to install the Kafka Mesos Framework.
Right now we have the application.yml configuration like this:
---
spring:
  profiles: local-docker
  cloud:
    stream:
      kafka:
        binder:
          zk-nodes: 192.168.88.188
          brokers: 192.168.88.188
....
Once we have installed the Kafka Mesos Framework:
How can we connect to Kafka from Spring Cloud Stream?
Or, more specifically, what will the configuration look like?
The configuration properties look good. Do you have the host addresses correct?
For more info on the Kafka binder config properties, you can refer to:
https://github.com/spring-cloud/spring-cloud-stream/blob/master/spring-cloud-stream-docs/src/main/asciidoc/spring-cloud-stream-overview.adoc#kafka-specific-settings
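The shape of the binder configuration stays the same once Kafka runs on Mesos; only the ZooKeeper and broker addresses change to wherever the framework makes them reachable. A sketch with purely hypothetical Mesos-DNS style addresses (your actual host names and ports will differ):

spring:
  cloud:
    stream:
      kafka:
        binder:
          zk-nodes: master.mesos:2181            # hypothetical ZooKeeper address used by the Kafka framework
          brokers: broker-0.kafka.mesos:31000    # hypothetical broker address advertised by Kafka on Mesos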

Strange behavior with log4j2 configuration

I am working with Spring Cloud and Log4j2 with "all" as the level.
I will describe two situations with the same config file; I want to write to a syslog over TCP.
First test: I put my Log4j2 config file in the resources folder, then I start my app and it starts logging to the syslog.
But I need my configuration in a Git repository, so I exposed it at a URL.
So here comes the second test:
I changed my bootstrap.yml and added the following line:
logging:
  config: http://xxx.xx.xx.75:3000/admin123/config-repository/raw/master/log4j2.xml
Then I started my app and it started writing the Spring Boot logging lines to my syslog, but when I add:
LOGGER.info("printing lalala");
nothing is written to the syslog, and I can see a [FIN, ACK] between client and server in my TCP connections.
So I understand that the config file is read from the repository, because I can see it in my connection capture and because the app starts to log some lines to the syslog, but something happens after that which closes the connection and nothing more is written.
I can't understand what is happening.
You must add the logging.config path to the application.yml, not the bootstrap.yml, and then it works.
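For example, moving the same line from bootstrap.yml into application.yml (same URL as in the question):

application.yml:
logging:
  config: http://xxx.xx.xx.75:3000/admin123/config-repository/raw/master/log4j2.xml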
