How to use the Spring Boot Actuator over JMX - spring-boot

I have an existing Spring Boot application and I want to monitor it through the Actuator. I tried the HTTP endpoints and they work fine for me, but instead of HTTP endpoints I need JMX endpoints for my existing running application.

If you add the spring-boot-starter-actuator dependency to your build.gradle or pom.xml file, you will have the JMX beans enabled by default, as well as the HTTP endpoints.
You can use JConsole to view your JMX-exposed beans. You'll find more info about this here.
More details about how to access the JMX endpoints are here.
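If you want to control the exposure explicitly, here is a minimal application.properties sketch (the property names are from the Spring Boot reference documentation; note that since Spring Boot 2.2, spring.jmx.enabled defaults to false, so it has to be switched on):

# Enable Spring's JMX support (required on Spring Boot 2.2+, where it is off by default)
spring.jmx.enabled=true
# Expose all actuator endpoints over JMX, or list specific ones such as health,info
management.endpoints.jmx.exposure.include=*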

Assuming you're using a Docker image where the entry point is the Spring Boot app run with java, the PID is "1", and so is the Attach API's virtual machine ID. You can implement a health probe as follows.
import com.sun.tools.attach.spi.AttachProvider;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HealthProbe {
    public static void main(String[] args) throws Exception {
        final var attachProvider = AttachProvider.providers().get(0);
        final var virtualMachine = attachProvider.attachVirtualMachine("1");
        final var jmxServiceUrl = virtualMachine.startLocalManagementAgent();
        try (final var jmxConnection = JMXConnectorFactory.connect(new JMXServiceURL(jmxServiceUrl))) {
            final MBeanServerConnection serverConnection = jmxConnection.getMBeanServerConnection();
            @SuppressWarnings("unchecked")
            final var healthResult =
                (Map<String, ?>)
                    serverConnection.invoke(
                        new ObjectName("org.springframework.boot:type=Endpoint,name=Health"),
                        "health",
                        new Object[0],
                        new String[0]);
            if ("UP".equals(healthResult.get("status"))) {
                System.exit(0);
            } else {
                System.exit(1);
            }
        }
    }
}
This uses the Attach API to make the original process start a local management agent.
The org.springframework.boot:type=Endpoint,name=Health object instance then has its health method invoked, which provides a Map version of the /actuator/health output. From there, the value of status should be UP if things are OK.
The probe then exits with 0 if OK, or 1 otherwise.
This can be embedded in an existing Spring Boot app as long as loader.main is set. The following is the HEALTHCHECK probe I used:
HEALTHCHECK --interval=5s --start-period=60s \
  CMD ["java", \
       "-Dloader.main=net.trajano.swarm.gateway.healthcheck.HealthProbe", \
       "org.springframework.boot.loader.PropertiesLauncher" ]
This is the technique I used in a distroless Docker image.
Side note: Don't try to put this in a CommandLineRunner, because it will try to pull the configuration from the main app, and you likely won't need the whole web stack.

Related

How to redirect Prometheus metrics to the default Spring Boot server

I am trying to expose a custom Gauge metric from my Spring Boot application. I am using Micrometer with the Prometheus registry to do so. I have set up the PrometheusRegistry and configs as per Micrometer Samples - Github, but it creates one more HTTP server for exposing the Prometheus metrics. I need to redirect or expose all the metrics on Spring Boot's default context path, /actuator/prometheus, instead of on a new context path on a new port. I have implemented the following code so far -
PrometheusRegistry.java -
package com.xyz.abc.prometheus;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.time.Duration;
import com.sun.net.httpserver.HttpServer;
import io.micrometer.core.lang.Nullable;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class PrometheusRegistry {
    public static PrometheusMeterRegistry prometheus() {
        PrometheusMeterRegistry prometheusRegistry = new PrometheusMeterRegistry(new PrometheusConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            @Nullable
            public String get(String k) {
                return null;
            }
        });
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
            server.createContext("/sample-data/prometheus", httpExchange -> {
                String response = prometheusRegistry.scrape();
                httpExchange.sendResponseHeaders(200, response.length());
                OutputStream os = httpExchange.getResponseBody();
                os.write(response.getBytes());
                os.close();
            });
            new Thread(server::start).run();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return prometheusRegistry;
    }
}
MicrometerConfig.java -
package com.xyz.abc.prometheus;

import io.micrometer.core.instrument.MeterRegistry;

public class MicrometerConfig {
    public static MeterRegistry carMonitoringSystem() {
        // Pick a monitoring system here to use in your samples.
        return PrometheusRegistry.prometheus();
    }
}
Here is the code snippet where I am creating a custom Gauge metric. As of now, it's a simple REST API to test (please read the comments in between) -
@SuppressWarnings({ "unchecked", "rawtypes" })
@RequestMapping(value = "/sampleApi", method = RequestMethod.GET)
@ResponseBody
// This Timed annotation is working fine and this metric comes in /actuator/prometheus by default
@Timed(value = "car.healthcheck", description = "Time taken to return healthcheck")
public ResponseEntity healthCheck() {
    MeterRegistry registry = MicrometerConfig.carMonitoringSystem();
    AtomicLong n = new AtomicLong();
    // Starting from here, none of the Gauge metrics shows up in /actuator/prometheus;
    // instead they go to /sample-data/prometheus on port 8081 as configured.
    registry.gauge("car.gauge.one", Tags.of("k", "v"), n);
    registry.gauge("car.gauge.two", Tags.of("k", "v1"), n, n2 -> n2.get() - 1);
    registry.gauge("car.help.gauge", 89);
    // This thing never works! This gauge metric never shows up in any URI configured.
    Gauge.builder("car.gauge.test", cpu)
        .description("car.device.cpu")
        .tags("customer", "demo")
        .register(registry);
    return new ResponseEntity("Car is working fine.", HttpStatus.OK);
}
I need all the metrics to show up under /actuator/prometheus instead of a new HTTP server getting created. I know that I am explicitly creating a new HTTP server, so the metrics are popping up there. Please let me know how to avoid creating a new HTTP server and redirect all the Prometheus metrics to the default path, /actuator/prometheus. Also, when I use Gauge.builder to define a custom gauge metric, it never works. Please explain how I can make that work as well, and let me know where I am going wrong.
Thank you.
Every time you call MicrometerConfig.carMonitoringSystem(), it creates a new Prometheus registry (and tries to start a new server).
You need to inject the MeterRegistry into the class that is creating the gauge and use that injected MeterRegistry instead.
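A minimal sketch of what that could look like (the controller name here is illustrative, not from the original code): Spring Boot auto-configures a MeterRegistry bound to /actuator/prometheus when micrometer-registry-prometheus is on the classpath, so you only need to ask for it in the constructor.

import java.util.concurrent.atomic.AtomicLong;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CarController {

    // Keep a strong reference to the gauge's state object: Micrometer only holds
    // it weakly, so a local variable would be garbage-collected and the gauge
    // would report NaN. This is likely why the Gauge.builder variant "never works".
    private final AtomicLong n = new AtomicLong();

    public CarController(MeterRegistry registry) {
        // Register gauges once, against the injected registry.
        registry.gauge("car.gauge.one", Tags.of("k", "v"), n);
        Gauge.builder("car.gauge.test", n, AtomicLong::get)
            .description("car.device.cpu")
            .tags("customer", "demo")
            .register(registry);
    }

    @GetMapping("/sampleApi")
    public ResponseEntity<String> healthCheck() {
        n.incrementAndGet(); // the new value shows up on the next /actuator/prometheus scrape
        return ResponseEntity.ok("Car is working fine.");
    }
}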

Spring Sleuth baggage key not getting propagated

I have a filter (OncePerRequestFilter) which intercepts incoming requests and logs the traceId, spanId, etc., and it works well.
This filter lives in a common module which is included in the other projects, to avoid including the Spring Sleuth dependency in each of my micro-services; I created it as a library because any change to the library is then common to all modules.
Now I have to add a new propagation key which needs to be propagated to all services via HTTP headers, like the traceId and spanId. For that I extracted the current span from HttpTracing and added a baggage key to it (as shown below):
Span span = httpTracing.tracing().tracer().currentSpan();
String corelationId =
    StringUtils.isEmpty(request.getHeader(CORELATION_ID))
        ? "n/a"
        : request.getHeader(CORELATION_ID);
ExtraFieldPropagation.set(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
span.annotate("baggage_set");
span.tag(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
I've added propagation-keys and whitelisted-mdc-keys to my application.yml file (within my library) like below:
spring:
  sleuth:
    propagation-keys:
      - x-corelationId
    log:
      slf4j:
        whitelisted-mdc-keys:
          - x-corelationId
After making this change in the filter, the corelationId is not available when I make an HTTP call to another service within the same app; basically, the keys are not getting propagated.
In your library you can implement an ApplicationEnvironmentPreparedEvent listener and add the configuration you need there.
Ex:
@Component
public class CustomApplicationListener implements ApplicationListener<ApplicationEvent> {
    private static final Logger log = LoggerFactory.getLogger(CustomApplicationListener.class);

    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ApplicationEnvironmentPreparedEvent) {
            log.debug("Custom ApplicationEnvironmentPreparedEvent Listener");
            ApplicationEnvironmentPreparedEvent envEvent = (ApplicationEnvironmentPreparedEvent) event;
            ConfigurableEnvironment env = envEvent.getEnvironment();
            Properties props = new Properties();
            props.put("spring.sleuth.propagation-keys", "x-corelationId");
            props.put("spring.sleuth.log.slf4j.whitelisted-mdc-keys", "x-corelationId");
            env.getPropertySources().addFirst(new PropertiesPropertySource("custom", props));
        }
    }
}
Then in your microservice you register this custom listener:

public static void main(String[] args) {
    ConfigurableApplicationContext context = new SpringApplicationBuilder(MyApplication.class)
        .listeners(new CustomApplicationListener()).run();
}
I've gone through the documentation, and it seems like I need to add spring.sleuth.propagation-keys and whitelist them using spring.sleuth.log.slf4j.whitelisted-mdc-keys.
Yes, you need to do this.
Is there another way to add these properties in the common module, so that I do not need to include them in each and every micro-service?
Yes, you can use a Spring Cloud Config server and a properties file called application.yml / application.properties that would set those properties for all microservices.
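Alternatively (a sketch of a standard Spring Boot mechanism, not something stated in the original answer), the common library can register the listener itself by declaring it in its own META-INF/spring.factories, so every service that includes the library picks it up without changing its main method; the package name below is hypothetical:

# src/main/resources/META-INF/spring.factories in the common library
org.springframework.context.ApplicationListener=\
  com.mycompany.common.CustomApplicationListener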
The answer from Mahmoud works great when you want to register the whitelisted-mdc-keys programmatically.
An extra tip: when you need these properties in a test as well, you can find the answer in this post: How to register a ApplicationEnvironmentPreparedEvent in Spring Test

Problem with connection to Neo4j test container using Spring Boot 2 and JUnit 5

I have a problem with the connection to a Neo4j test container using Spring Boot 2 and JUnit 5 in the test context. The container starts successfully, but the spring.data.neo4j.uri property has the wrong default port, 7687; I guess this URI must be the same as what neo4jContainer.getBoltUrl() returns.
Everything works fine in this case:
@Testcontainers
public class ExampleTest {

    @Container
    private static Neo4jContainer neo4jContainer = new Neo4jContainer()
        .withAdminPassword(null); // Disable password

    @Test
    void testSomethingUsingBolt() {
        // Retrieve the Bolt URL from the container
        String boltUrl = neo4jContainer.getBoltUrl();
        try (
            Driver driver = GraphDatabase.driver(boltUrl, AuthTokens.none());
            Session session = driver.session()
        ) {
            long one = session.run("RETURN 1", Collections.emptyMap()).next().get(0).asLong();
            assertThat(one, is(1L));
        } catch (Exception e) {
            fail(e.getMessage());
        }
    }
}
But the SessionFactory is not created for the application using auto-configuration, following these recommendations - https://www.testcontainers.org/modules/databases/neo4j/
When I try to create my own primary SessionFactory bean in the test context, I get a message like this - "URI cannot be returned before the container is not loaded".
The application itself runs and works perfectly using auto-configuration with Neo4j started in a container; the same cannot be said about the test context.
You cannot rely 100% on Spring Boot's auto-configuration (for production) in this case, because it will read the application.properties or use the default values for the connection.
To achieve what you want, the key part is to create a custom (Neo4j-OGM) Configuration bean. The @DataNeo4jTest annotation is provided by the spring-boot-test-autoconfigure module.
@Testcontainers
@DataNeo4jTest
public class TestClass {

    // The container declaration mirrors the question's setup; the original answer
    // referenced databaseServer without showing its declaration.
    @Container
    private static Neo4jContainer databaseServer = new Neo4jContainer();

    @TestConfiguration
    static class Config {

        @Bean
        public org.neo4j.ogm.config.Configuration configuration() {
            return new Configuration.Builder()
                .uri(databaseServer.getBoltUrl())
                .credentials("neo4j", databaseServer.getAdminPassword())
                .build();
        }
    }

    // your tests
}
For a broader explanation, have a look at this blog post, especially the section Using with Neo4j-OGM and SDN.

Feign with RibbonClient and Consul discovery without Spring Cloud

I was trying to set up Feign to work with RibbonClient, something like MyService api = Feign.builder().client(RibbonClient.create()).target(MyService.class, "https://myAppProd");, where myAppProd is an application which I can see in Consul. Now, if I use the Spring annotations for the Feign client (@FeignClient("myAppProd"), @RequestMapping), everything works, as the Spring Cloud module takes care of everything.
If I want to use Feign.builder() and @RequestLine, I get the error:
com.netflix.client.ClientException: Load balancer does not have available server for client: myAppProd.
My first thought was that Feign was built to work with Eureka and only Spring Cloud makes the integration with Consul, but I am unsure about this.
So, is there a way to make Feign work with Consul without Spring Cloud?
Thanks in advance.
In my opinion, it's not Feign working with Consul directly; it's Feign -> Ribbon -> Consul.
RibbonClient needs to find myAppProd's server list from its load balancer.
Without a ServerList, you get the error 'does not have available server for client'.
This job is done by the SpringCloudConsul and SpringCloudRibbon projects; of course you can write another adaptor, it's just some glue code. IMHO, you can import this Spring dependency into your project but use it in a non-Spring way. Demo code:
Just write a new feign.ribbon.LBClientFactory that generates an LBClient with ConsulServerList (Spring's class):
public class ConsulLBFactory implements LBClientFactory {

    private ConsulClient client;
    private ConsulDiscoveryProperties properties;

    public ConsulLBFactory(ConsulClient client, ConsulDiscoveryProperties consulDiscoveryProperties) {
        this.client = client;
        this.properties = consulDiscoveryProperties;
    }

    @Override
    public LBClient create(String clientName) {
        IClientConfig config =
            ClientFactory.getNamedConfig(clientName, DisableAutoRetriesByDefaultClientConfig.class);
        ConsulServerList consulServerList = new ConsulServerList(this.client, properties);
        consulServerList.initWithNiwsConfig(config);
        ZoneAwareLoadBalancer<ConsulServer> lb = new ZoneAwareLoadBalancer<>(config);
        lb.setServersList(consulServerList.getInitialListOfServers());
        lb.setServerListImpl(consulServerList);
        return LBClient.create(lb, config);
    }
}
and then use it in feign:
public class Demo {
    public static void main(String[] args) {
        ConsulLBFactory consulLBFactory = new ConsulLBFactory(
            new ConsulClient(),
            new ConsulDiscoveryProperties(new InetUtils(new InetUtilsProperties()))
        );
        RibbonClient ribbonClient = RibbonClient.builder()
            .lbClientFactory(consulLBFactory)
            .build();
        GitHub github = Feign.builder()
            .client(ribbonClient)
            .decoder(new GsonDecoder())
            .target(GitHub.class, "https://api.github.com");
        List<Contributor> contributors = github.contributors("OpenFeign", "feign");
        for (Contributor contributor : contributors) {
            System.out.println(contributor.login + " (" + contributor.contributions + ")");
        }
    }

    interface GitHub {
        @RequestLine("GET /repos/{owner}/{repo}/contributors")
        List<Contributor> contributors(@Param("owner") String owner, @Param("repo") String repo);
    }

    public static class Contributor {
        String login;
        int contributions;
    }
}
You can find this demo code here; add api.github.com to your local Consul before running the demo.

Invoking a remote service using JMS

I have two projects: a service project and a consumer project.
The consumer project consumes the services of the other project, and the call should be async using JMS.
I installed the JMS plugin in both of the projects.
I have defined the JMS connection factory in both of the projects as below in resources.groovy:
import org.springframework.jms.connection.SingleConnectionFactory
import org.apache.activemq.ActiveMQConnectionFactory

beans = {
    jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) { brokerURL = 'vm://localhost' }
}
Note: both of the projects are, for now, on the same machine (i.e. localhost).
Now from the consumer's controller I am making a call to the service in the ServiceProvider project:
jmsService.send(service: 'serviceProvider', params.body)
In ServiceProvider the service is defined as follows:

import grails.plugin.jms.*

class ServiceProviderService {

    def jmsService

    static transactional = true
    static exposes = ['jms1']

    def createMessage(msg) {
        print "Called1"
        sleep(2000) // slow it down
        return null
    }
}
Now when the controller submits the call to the service, it gets submitted successfully but doesn't reach the actual service.
I also tried:
jmsService.send(app: "ServiceProvider", service: "serviceProvider", method: "createMessage", msg, "standard", null)
Update
Now I have installed the ActiveMQ plugin in the service provider to make it an embedded broker (JMS is already there),
and created a service:
package serviceprovider

class HelloService {

    boolean transactional = false

    static exposes = ['jms']
    static destination = "queue.notification"

    def onMessage(it) {
        println "GOT MESSAGE: $it"
    }

    def sayHello(String message) {
        println "hello" + message
    }
}
resources.groovy in both of the projects is now:

import org.springframework.jms.connection.SingleConnectionFactory
import org.apache.activemq.ActiveMQConnectionFactory

beans = {
    jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) { brokerURL = 'tcp://127.0.0.1:61616' }
}
From the consumer's controller I am calling this service like below:
jmsService.send(app: 'queue.notification', service: 'hello', method: 'sayHello', params.body)
The call to the method gets submitted, but the method is not actually getting called!
The in-VM ActiveMQ config (vm://localhost) works only within a single VM. If your two projects run in separate VMs, try setting up an external AMQ broker.
If you are using separate processes, then you need to use a different transport than VM (it's for a single VM only). Also, is one of your processes starting a broker? If not, then one of them should embed the broker (or run it externally) and expose it over a transport (like TCP)...
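As a quick sketch of that last suggestion, one of the processes could start an embedded broker and expose it over TCP using the plain ActiveMQ broker API (this Java snippet is an illustration under that assumption, not code from the original question):

import org.apache.activemq.broker.BrokerService;

public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);                // in-memory only, for demo purposes
        broker.addConnector("tcp://0.0.0.0:61616"); // expose the broker over TCP
        broker.start();
        // Both projects can now point their jmsConnectionFactory at tcp://127.0.0.1:61616
        broker.waitUntilStopped();
    }
}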
