How to redirect Prometheus metrics to the default Spring Boot server - spring-boot

I am trying to expose a custom Gauge metric from my Spring Boot application. I am using Micrometer with the Prometheus registry to do so. I have set up the PrometheusRegistry and configs as per Micrometer Samples - Github, but it creates one more HTTP server for exposing the Prometheus metrics. I need to redirect or expose all the metrics on Spring Boot's default context path - /actuator/prometheus - instead of a new context path on a new port. I have implemented the following code so far -
PrometheusRegistry.java -
package com.xyz.abc.prometheus;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.time.Duration;

import com.sun.net.httpserver.HttpServer;

import io.micrometer.core.lang.Nullable;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class PrometheusRegistry {

    public static PrometheusMeterRegistry prometheus() {
        PrometheusMeterRegistry prometheusRegistry = new PrometheusMeterRegistry(new PrometheusConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            @Nullable
            public String get(String k) {
                return null;
            }
        });
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
            server.createContext("/sample-data/prometheus", httpExchange -> {
                String response = prometheusRegistry.scrape();
                httpExchange.sendResponseHeaders(200, response.length());
                OutputStream os = httpExchange.getResponseBody();
                os.write(response.getBytes());
                os.close();
            });
            new Thread(server::start).run();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return prometheusRegistry;
    }
}
MicrometerConfig.java -
package com.xyz.abc.prometheus;

import io.micrometer.core.instrument.MeterRegistry;

public class MicrometerConfig {
    public static MeterRegistry carMonitoringSystem() {
        // Pick a monitoring system here to use in your samples.
        return PrometheusRegistry.prometheus();
    }
}
Code snippet where I am creating a custom Gauge metric. As of now, it's a simple REST API to test - (Please read the comments in between)
@SuppressWarnings({ "unchecked", "rawtypes" })
@RequestMapping(value = "/sampleApi", method = RequestMethod.GET)
@ResponseBody
//This Timed annotation is working fine and this metric comes in /actuator/prometheus by default
@Timed(value = "car.healthcheck", description = "Time taken to return healthcheck")
public ResponseEntity healthCheck() {
    MeterRegistry registry = MicrometerConfig.carMonitoringSystem();
    AtomicLong n = new AtomicLong();
    //Starting from here none of the Gauge metrics shows up in /actuator/prometheus; instead they go to /sample-data/prometheus on port 8081 as configured.
    registry.gauge("car.gauge.one", Tags.of("k", "v"), n);
    registry.gauge("car.gauge.two", Tags.of("k", "v1"), n, n2 -> n2.get() - 1);
    registry.gauge("car.help.gauge", 89);
    //This thing never works! This gauge metric never shows up in any URI configured
    Gauge.builder("car.gauge.test", cpu)
        .description("car.device.cpu")
        .tags("customer", "demo")
        .register(registry);
    return new ResponseEntity("Car is working fine.", HttpStatus.OK);
}
I need all the metrics to show up under /actuator/prometheus instead of a new HTTP server getting created. I know that I am explicitly creating a new HTTP server, which is why the metrics pop up there. Please let me know how to avoid creating a new HTTP server and redirect all the Prometheus metrics to the default path - /actuator/prometheus. Also, if I use Gauge.builder to define a custom gauge metric, it never works. Please explain how I can make that work as well. Let me know where I am going wrong.
Thank you.

Every time you call MicrometerConfig.carMonitoringSystem() it creates a new Prometheus registry (and tries to start a new server).
You need to inject the MeterRegistry into the class that is creating the gauge and use that injected MeterRegistry instead.
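A minimal sketch of that approach, assuming micrometer-registry-prometheus is on the classpath so Spring Boot auto-configures a PrometheusMeterRegistry behind /actuator/prometheus (class and field names here are illustrative):

import java.util.concurrent.atomic.AtomicLong;

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CarHealthController {

    private final AtomicLong cpu = new AtomicLong();

    // Spring injects the auto-configured MeterRegistry; no extra HttpServer is created.
    public CarHealthController(MeterRegistry registry) {
        // Register gauges once, at startup, against the injected registry.
        Gauge.builder("car.gauge.test", cpu, AtomicLong::get)
                .description("car.device.cpu")
                .tags("customer", "demo")
                .register(registry);
    }

    @GetMapping("/sampleApi")
    public ResponseEntity<String> healthCheck() {
        cpu.incrementAndGet(); // update the state that the gauge reports
        return new ResponseEntity<>("Car is working fine.", HttpStatus.OK);
    }
}

Because the gauge is registered on the auto-configured registry, it appears at /actuator/prometheus next to the @Timed metric, and keeping a strong reference to the AtomicLong field also avoids the gauge being garbage-collected.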

Related

How to use SpringBoot actuator over JMX

I have an existing Spring Boot application and I want to monitor it through Actuator. I tried the HTTP endpoints and they work fine for me. Instead of HTTP endpoints I need JMX endpoints for my existing running application.
If you add the spring-boot-starter-actuator dependency in your build.gradle or pom.xml file, you will have JMX beans enabled by default as well as the HTTP endpoints.
You can use JConsole in order to view your JMX exposed beans. You'll find more info about this here.
More details about how to access JMX endpoints here.
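For reference, on newer Spring Boot versions (2.2+) JMX is no longer enabled by default, so you may also need to switch it on and choose which endpoints to expose over JMX; a minimal application.properties sketch (property names from Spring Boot 2.x):

spring.jmx.enabled=true
management.endpoints.jmx.exposure.include=health,metrics

On older versions, as noted above, the actuator endpoints are exposed over JMX out of the box.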
Assuming you're using a Docker image where the entry point is the Spring Boot app run with java, the PID is "1" and so is the Attach API's virtual machine ID. You can implement a health probe as follows.
import com.sun.tools.attach.spi.AttachProvider;

import java.util.Map;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HealthProbe {
    public static void main(String[] args) throws Exception {
        final var attachProvider = AttachProvider.providers().get(0);
        final var virtualMachine = attachProvider.attachVirtualMachine("1");
        final var jmxServiceUrl = virtualMachine.startLocalManagementAgent();
        try (final var jmxConnection = JMXConnectorFactory.connect(new JMXServiceURL(jmxServiceUrl))) {
            final MBeanServerConnection serverConnection = jmxConnection.getMBeanServerConnection();
            @SuppressWarnings("unchecked")
            final var healthResult =
                (Map<String, ?>)
                    serverConnection.invoke(
                        new ObjectName("org.springframework.boot:type=Endpoint,name=Health"),
                        "health",
                        new Object[0],
                        new String[0]);
            if ("UP".equals(healthResult.get("status"))) {
                System.exit(0);
            } else {
                System.exit(1);
            }
        }
    }
}
This will use the Attach API and make the original process start a local management agent.
The org.springframework.boot:type=Endpoint,name=Health object instance would have its health method invoked, which provides a Map version of the /actuator/health output. From there the value of status should be UP if things are ok.
Then exit with 0 if ok, or 1 otherwise.
This can be embedded in an existing Spring Boot app so long as loader.main is set. The following is the HEALTHCHECK probe I used
HEALTHCHECK --interval=5s --start-period=60s \
CMD ["java", \
"-Dloader.main=net.trajano.swarm.gateway.healthcheck.HealthProbe", \
"org.springframework.boot.loader.PropertiesLauncher" ]
This is the technique I used in a distroless Docker image.
Side note: Don't try to put this in a CommandLineRunner interface because it will try to pull the configuration from the main app and you likely won't need the whole web stack.

Spring Boot auto-configured metrics not arriving to Librato

I am using Spring Boot with auto-configuration enabled (@EnableAutoConfiguration) and trying to send my Spring MVC metrics to Librato. Right now only my own created metrics are arriving at Librato, but auto-configured metrics (CPU, file descriptors, etc.) are not sent to my reporter.
If I access a metric endpoint I can see the info generated there, for instance http://localhost:8081/actuator/metrics/system.cpu.count
I based my code on this post for ConsoleReporter, so I have this:
public static MeterRegistry libratoRegistry() {
    MetricRegistry dropwizardRegistry = new MetricRegistry();
    String libratoApiAccount = "xx";
    String libratoApiKey = "yy";
    String libratoPrefix = "zz";
    LibratoReporter reporter = Librato
        .reporter(dropwizardRegistry, libratoApiAccount, libratoApiKey)
        .setPrefix(libratoPrefix)
        .build();
    reporter.start(60, TimeUnit.SECONDS);
    DropwizardConfig dropwizardConfig = new DropwizardConfig() {
        @Override
        public String prefix() {
            return "myprefix";
        }

        @Override
        public String get(String key) {
            return null;
        }
    };
    return new DropwizardMeterRegistry(dropwizardConfig, dropwizardRegistry, HierarchicalNameMapper.DEFAULT, Clock.SYSTEM) {
        @Override
        protected Double nullGaugeValue() {
            return null;
        }
    };
}
In my main function I added Metrics.addRegistry(SpringReporter.libratoRegistry());
For the Librato library I am using compile("com.librato.metrics:metrics-librato:5.1.2") in my build.gradle. Documentation here. I used this library before without any problem.
If I use the ConsoleReporter as in this post, the same thing happens: only my own created metrics are printed to the console.
Any thoughts on what I am doing wrong, or what I am missing?
Also, I enabled debug mode to see the "CONDITIONS EVALUATION REPORT" printed in the console, but I am not sure what to look for in there.
Try making your MeterRegistry for the Librato reporter a Spring @Bean and let me know whether it works.
UPDATED:
I tested with the ConsoleReporter you mentioned and confirmed it works with a sample. Note that the sample is on the console-reporter branch, not the master branch. See the sample for details.
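A minimal sketch of the suggestion, reusing the libratoRegistry() factory from the question (the configuration class name is illustrative):

import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LibratoMetricsConfiguration {

    // Declaring the registry as a bean lets Spring Boot's metrics auto-configuration
    // find it and bind the built-in JVM/system meters (CPU, file descriptors, ...) to it,
    // instead of only the meters you register manually.
    @Bean
    public MeterRegistry libratoMeterRegistry() {
        return SpringReporter.libratoRegistry(); // the factory method shown in the question
    }
}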

Spring Cloud - HystrixCommand - How to properly enable with shared libraries

Using Spring Boot 1.5.x, Spring Cloud, and JAX-RS:
I could use a second pair of eyes, since it is not clear to me whether the Spring-configured Javanica @HystrixCommand works for all use cases or whether I may have an error in my code. Below is an approximation of what I'm doing; the code below will not actually compile.
In the code below, WebService lives in a library with a separate package path from the main application(s), while MyWebService lives in the application, in the same context path as the Spring Boot application. MyWebService is functional, no issues there. This is only about the visibility of the @HystrixCommand annotation with respect to the Spring Boot based configuration.
At runtime, what I notice is that when code like the one below runs, I do see "commandKey=A" in my response. This one I did not quite expect, since it is still running while the data is obtained. And since we log the HystrixRequestLog, I also see this command key in my logs.
But all the other command keys are not visible at all, regardless of where I place them in the file. If I remove commandKey "A" then no commands are visible whatsoever.
Thoughts?
// Example WebService that we use as a shared component for performing a backend call that is the same across different resources
@RequiredArgsConstructor
@Accessors(fluent = true)
@Setter
public abstract class WebService {
    private final @Nonnull Supplier<X> backendFactory;

    @Setter(AccessLevel.PACKAGE)
    private @Nonnull Supplier<BackendComponent> backendComponentSupplier = () -> new BackendComponent();

    @GET
    @Produces("application/json")
    @HystrixCommand(commandKey = "A")
    public Response mainCall() {
        Object obj = new Object();
        try {
            otherCommandMethod();
        } catch (Exception commandException) {
            // do nothing (for this example)
        }
        // get the hystrix request information so that we can determine what was executed
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = hystrixExecutedCommands();
        // set the hystrix data, viewable in the response
        obj.setData("hystrix", executedCommands.orElse(Collections.emptyList()));
        if (hasError(obj)) {
            return Response.serverError()
                .entity(obj)
                .build();
        }
        return Response.ok()
            .entity(obj)
            .build();
    }

    @HystrixCommand(commandKey = "B")
    private void otherCommandMethod() {
        backendComponentSupplier
            .get()
            .observe()
            .toBlocking()
            .subscribe();
    }

    Optional<Collection<HystrixInvokableInfo<?>>> hystrixExecutedCommands() {
        Optional<HystrixRequestLog> hystrixRequest = Optional
            .ofNullable(HystrixRequestLog.getCurrentRequest());
        // get the hystrix executed commands
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = Optional.empty();
        if (hystrixRequest.isPresent()) {
            executedCommands = Optional.of(hystrixRequest.get()
                .getAllExecutedCommands());
        }
        return executedCommands;
    }

    @Setter
    @RequiredArgsConstructor
    public class BackendComponent implements ObservableCommand<Void> {
        @Override
        @HystrixCommand(commandKey = "Y")
        public Observable<Void> observe() {
            // make some backend call
            return backendFactory.get()
                .observe();
        }
    }
}

// then later this component gets configured in the specific applications with a sample configuration that looks like this:
@SuppressWarnings({ "unchecked", "rawtypes" })
@Path("resource/somepath")
@Component
public class MyWebService extends WebService {

    @Inject
    public MyWebService(Supplier<X> backendSupplier) {
        super((Supplier) backendSupplier);
    }
}
There is an issue with mainCall() calling otherCommandMethod(). Methods with @HystrixCommand cannot be called from within the same class.
As discussed in the answers to this question, this is a limitation of Spring's AOP.
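One common workaround, sketched below with illustrative names, is to move the inner command into its own Spring bean so the call crosses a proxy boundary and the annotation is applied:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Component;

@Component
public class OtherCommand {

    // Invocations coming from another bean pass through the Spring AOP proxy,
    // so this @HystrixCommand is actually wrapped in a Hystrix command.
    @HystrixCommand(commandKey = "B")
    public void run() {
        // the backend call previously made in otherCommandMethod()
    }
}

The web service would then inject OtherCommand and call otherCommand.run() instead of its own private otherCommandMethod().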

How to use BasicAuth with ElasticSearch Connector on Flink

I want to use the Elasticsearch producer on Flink but I have some trouble with authentication:
I have Nginx in front of my Elasticsearch cluster, and I use basic auth in Nginx.
But with the Elasticsearch connector I can't add the basic auth to my URL (because of InetSocketAddress).
Do you have an idea how to use the Elasticsearch connector with basic auth?
Thanks for your time.
Here is my code:
val configur = new java.util.HashMap[String, String]
configur.put("cluster.name", "cluster")
configur.put("bulk.flush.max.actions", "1000")

val transportAddresses = new java.util.ArrayList[InetSocketAddress]
transportAddresses.add(new InetSocketAddress(InetAddress.getByName("cluster.com"), 9300))

jsonOutput.filter(_.nonEmpty).addSink(new ElasticsearchSink(configur,
  transportAddresses,
  new ElasticsearchSinkFunction[String] {
    def createIndexRequest(element: String): IndexRequest = {
      val jsonMap = parse(element).values.asInstanceOf[java.util.HashMap[String, String]]
      return Requests.indexRequest()
        .index("flinkTest")
        .source(jsonMap);
    }

    override def process(element: String, ctx: RuntimeContext, indexer: RequestIndexer) {
      indexer.add(createIndexRequest(element))
    }
  }))
Flink uses the Elasticsearch Transport Client which connects using a binary protocol on port 9300.
Your nginx proxy is sitting in front of the HTTP interface on port 9200.
Flink isn't going to use your proxy, so there's no need to provide authentication.
If you need to use an HTTP client to connect Flink with Elasticsearch, one solution is to use the Jest library.
You have to create a custom SinkFunction, like this basic Java class:
package fr.gfi.keenai.streaming.io.sinks.elasticsearch5;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;
import io.searchbox.core.Index;

public class ElasticsearchJestSinkFunction<T> extends RichSinkFunction<T> {
    private static final long serialVersionUID = -7831614642918134232L;

    private JestClient client;

    @Override
    public void invoke(T value) throws Exception {
        String document = convertToJsonDocument(value);
        Index index = new Index.Builder(document).index("YOUR_INDEX_NAME").type("YOUR_DOCUMENT_TYPE").build();
        client.execute(index);
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // Construct a new Jest client according to configuration via factory
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(new HttpClientConfig.Builder("http://localhost:9200")
            .multiThreaded(true)
            // Per default this implementation will create no more than 2 concurrent
            // connections per given route
            .defaultMaxTotalConnectionPerRoute(2)
            // and no more than 20 connections in total
            .maxTotalConnection(20)
            // Basic username and password authentication
            .defaultCredentials("YOUR_USER", "YOUR_PASSWORD")
            .build());
        client = factory.getObject();
    }

    private String convertToJsonDocument(T value) {
        //TODO
        return "{}";
    }
}
Note that you can also use bulk operations for more speed.
An example of a Jest implementation for Flink is described in the part "Connecting Flink to Amazon RS" of this post.
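For completeness, a minimal sketch of wiring this sink into a Java pipeline (the stream name mirrors the question; adapt to your own types):

// jsonOutput is assumed to be a DataStream<String>
jsonOutput
    .filter(s -> !s.isEmpty())
    .addSink(new ElasticsearchJestSinkFunction<String>());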

Feign with RibbonClient and Consul discovery without Spring Cloud

I was trying to set up Feign to work with RibbonClient, something like MyService api = Feign.builder().client(RibbonClient.create()).target(MyService.class, "https://myAppProd");, where myAppProd is an application which I can see in Consul. Now, if I use Spring annotations for the Feign client (@FeignClient("myAppProd"), @RequestMapping), everything works, as the Spring Cloud module takes care of everything.
If I want to use Feign.builder() and @RequestLine, I get the error:
com.netflix.client.ClientException: Load balancer does not have available server for client: myAppProd.
My first initial thought was that Feign was built to work with Eureka and only Spring Cloud makes the integration with Consul, but I am unsure about this.
So, is there a way to make Feign work with Consul without Spring Cloud?
Thanks in advance.
In my opinion, it's not Feign working with Consul directly; it's Feign -> Ribbon -> Consul.
RibbonClient needs to find myAppProd's server list from its LoadBalancer.
Without a ServerList, you get the error 'does not have available server for client'.
This job is done by the SpringCloudConsul and SpringCloudRibbon projects; of course you can write another adapter, it's just some glue code. IMHO, you can import this Spring dependency into your project but use it in a non-Spring way. Demo code:
Just write a new feign.ribbon.LBClientFactory that generates an LBClient with ConsulServerList (Spring's class).
public class ConsulLBFactory implements LBClientFactory {
    private ConsulClient client;
    private ConsulDiscoveryProperties properties;

    public ConsulLBFactory(ConsulClient client, ConsulDiscoveryProperties consulDiscoveryProperties) {
        this.client = client;
        this.properties = consulDiscoveryProperties;
    }

    @Override
    public LBClient create(String clientName) {
        IClientConfig config =
            ClientFactory.getNamedConfig(clientName, DisableAutoRetriesByDefaultClientConfig.class);
        ConsulServerList consulServerList = new ConsulServerList(this.client, properties);
        consulServerList.initWithNiwsConfig(config);
        ZoneAwareLoadBalancer<ConsulServer> lb = new ZoneAwareLoadBalancer<>(config);
        lb.setServersList(consulServerList.getInitialListOfServers());
        lb.setServerListImpl(consulServerList);
        return LBClient.create(lb, config);
    }
}

and then use it in Feign:

public class Demo {
    public static void main(String[] args) {
        ConsulLBFactory consulLBFactory = new ConsulLBFactory(
            new ConsulClient(),
            new ConsulDiscoveryProperties(new InetUtils(new InetUtilsProperties()))
        );
        RibbonClient ribbonClient = RibbonClient.builder()
            .lbClientFactory(consulLBFactory)
            .build();
        GitHub github = Feign.builder()
            .client(ribbonClient)
            .decoder(new GsonDecoder())
            .target(GitHub.class, "https://api.github.com");
        List<Contributor> contributors = github.contributors("OpenFeign", "feign");
        for (Contributor contributor : contributors) {
            System.out.println(contributor.login + " (" + contributor.contributions + ")");
        }
    }

    interface GitHub {
        @RequestLine("GET /repos/{owner}/{repo}/contributors")
        List<Contributor> contributors(@Param("owner") String owner, @Param("repo") String repo);
    }

    public static class Contributor {
        String login;
        int contributions;
    }
}
You can find this demo code here; add api.github.com to your local Consul before running this demo.
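If you go this route, dependencies roughly along these lines would be needed on the classpath (a sketch; coordinates from memory, versions omitted):

dependencies {
    // pulls in ConsulClient, ConsulDiscoveryProperties and ConsulServerList (directly or transitively)
    implementation 'org.springframework.cloud:spring-cloud-starter-consul-discovery'
    implementation 'io.github.openfeign:feign-core'
    implementation 'io.github.openfeign:feign-ribbon' // RibbonClient, LBClientFactory, LBClient
    implementation 'io.github.openfeign:feign-gson'   // GsonDecoder used in the demo
}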
