I'm getting started with Micrometer in Quarkus to collect metrics. I've followed the examples, but I don't think I understand how to create a Gauge that is backed by a method. My use case is that I'm converting PDFs using a persistent volume, and I want to monitor the disk usage.
I have a PdfConverterService with a method public Integer getConversionDiskUsage() that returns the disk usage in percent.
But how can I incorporate it into my REST resource?
@Inject
PdfConverterService service;

MeterRegistry registry;

private List<Integer> list = List.of(1, 2, 3, 4, 5, 6, 7);

PdfConverterResource(final MeterRegistry registry) {
    this.registry = registry;
    this.registry.gauge("foo.bar", 42);
    this.registry.gauge("foo.foo.bar", Tags.of("A", "B"), this.list, List::size);
    this.registry.gauge("foo.baz", Tags.of("A", "B"), this.service, s -> s.getConversionDiskUsage().doubleValue());
    Gauge.builder("conversion.volume.usage",
            this.service, PdfConverterService::getConversionDiskUsage)
        .description("Shows the disk usage of the attached volume used for the PDF conversion")
        .register(this.registry);
}
@POST
....
@Timed(value = "pdf.conversion.time", description = "A measure of how long it takes to perform the PDF conversion")
public Response convertPdf(
....
}
After calling the REST endpoint and then querying the metrics endpoint, some things work, but others don't.
# HELP foo_bar
# TYPE foo_bar gauge
foo_bar 42.0
# HELP foo_foo_bar
# TYPE foo_foo_bar gauge
foo_foo_bar{A="B",} 7.0
# HELP pdf_conversion_time_seconds A measure of how long it takes to perform the PDF conversion
# TYPE pdf_conversion_time_seconds summary
pdf_conversion_time_seconds_count{class="PdfConverterResource",exception="none",method="convertPdf",} 1.0
pdf_conversion_time_seconds_sum{class="PdfConverterResource",exception="none",method="convertPdf",} 0.2069797
# HELP pdf_conversion_time_seconds_max A measure of how long it takes to perform the PDF conversion
# TYPE pdf_conversion_time_seconds_max gauge
pdf_conversion_time_seconds_max{class="PdfConverterResource",exception="none",method="convertPdf",} 0.2069797
I wonder whether it's a problem that the PdfConverterService is injected with an annotation, but MeterRegistry is injected in the constructor.
What am I doing wrong?
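That suspicion points at a real difference: with CDI, @Inject fields are populated only after the constructor has run, so this.service would still be null while the gauges are being registered in the constructor above. Below is a minimal sketch of a constructor-injected variant; the class skeleton and imports are assumed from context, and this is the usual CDI reasoning rather than a verified fix for this exact setup.

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;

public class PdfConverterResource {

    private final PdfConverterService service;
    private final MeterRegistry registry;

    // Injecting both beans through the constructor guarantees that `service`
    // is non-null at the moment the gauge is registered.
    PdfConverterResource(final PdfConverterService service, final MeterRegistry registry) {
        this.service = service;
        this.registry = registry;
        Gauge.builder("conversion.volume.usage", service,
                        s -> s.getConversionDiskUsage().doubleValue())
                .description("Shows the disk usage of the attached volume used for the PDF conversion")
                .register(registry);
    }
}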
Related
I am exploring Micrometer and AWS CloudWatch. I think there is some understanding gap:
I've created a gauge which is supposed to return the number of connections being used in a connection pool.
public MetricService(CloudWatchConfig config) {
    this.cloudwatchMeterRegistry = new CloudWatchMeterRegistry(config, Clock.SYSTEM, CloudWatchAsyncClient.create());
    gauge = Gauge.builder("ConnectionPoolGauge", this.connectionPool, value -> {
                Double usedConnections = 0.0;
                for (Map.Entry<String, Boolean> entry : value.entrySet()) {
                    if (entry.getValue().equals(Boolean.FALSE)) {
                        usedConnections++;
                    }
                }
                return usedConnections;
            })
            .tag("GaugeName", "Bhushan's Gauge")
            .strongReference(true)
            .baseUnit("UsedConnections")
            .description("Gauge to monitor connection pool")
            .register(Metrics.globalRegistry);
    Metrics.addRegistry(cloudwatchMeterRegistry);
}
As you can see, I am currently initializing this gauge in a constructor, passing the connectionPool instance in from outside.
The following is a controller method which consumes a connection:
@GetMapping("/hello")
public String hello() {
    // connectionPool.consumeConnection();
    // finally { connectionPool.releaseConnection(); }
}
The step interval is set to 10 seconds. My understanding is that every 10 seconds, Micrometer should automatically execute the double function passed to the gauge.
Obviously, that is not happening. I've seen some code samples here which explicitly set the gauge value (in a separate thread or scheduled logic).
I also tried a counter, which is instantiated only once, while I explicitly invoke its increment method on each call to the hello method. My expectation was that this counter would keep on incrementing, but after a while it drops to 0 and starts counting again.
I am totally confused. I'd appreciate it if someone could shed some light on this concept.
Edit:
I tried the following approach for creating the gauge - still no luck.
cloudwatchMeterRegistry.gauge("ConnectionPoolGauge", this.connectionPool, value -> {
    Double usedConnections = 0.0;
    System.out.println("Inside Gauge value function: " + value.entrySet());
    for (Map.Entry<String, Boolean> entry : value.entrySet()) {
        if (entry.getValue().equals(Boolean.FALSE)) {
            usedConnections++;
        }
    }
    return usedConnections;
});
This doesn't return the instance of Gauge, so I cannot call value() on it. Also, the gauge is not visible in AWS CloudWatch. I can see the counter I created in the same program in CloudWatch.
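As an aside on the value() point: the registry.gauge(name, stateObject, fn) shorthand deliberately returns the state object rather than the Gauge, so that the state can be assigned in one line. The meter itself can still be looked up afterwards; a small sketch:

Gauge g = cloudwatchMeterRegistry.find("ConnectionPoolGauge").gauge();
if (g != null) {
    System.out.println(g.value()); // invokes the sampling function right now
}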
Micrometer takes the stance that gauges should be sampled and not be set, so there is no information about what might have occurred between samples. After all, any intermediate values set on a gauge are lost by the time the gauge value is reported to a metrics backend anyway, so there seems to be little value in setting those intermediate values in the first place.
If it helps, think of a Gauge as a "heisen-gauge" - a meter that only changes when it is observed. Every other meter type provided out-of-the-box accumulates intermediate counts toward the point where the data is sent to the metrics backend.
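To make that concrete, here is a small self-contained sketch (the names are mine, and it uses a plain SimpleMeterRegistry instead of CloudWatch): the function handed to the gauge runs only when the gauge is observed, not when the underlying state changes.

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import java.util.concurrent.atomic.AtomicInteger;

public class GaugeSamplingDemo {
    public static void main(String[] args) {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();
        AtomicInteger state = new AtomicInteger(0);

        Gauge gauge = Gauge.builder("demo.gauge", state, s -> {
            System.out.println("sampling function invoked");
            return s.doubleValue();
        }).register(registry);

        state.set(42);                     // prints nothing: nobody is observing the gauge
        System.out.println(gauge.value()); // "sampling function invoked", then 42.0
    }
}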
So the gauge is updated when the metrics are published. Here are a few tips for troubleshooting this:
Put a breakpoint in the publish method of your CloudWatchMeterRegistry and see whether it is called or not.
You are using the global registry (Metrics.addRegistry) as well as keeping a reference to the CloudWatchMeterRegistry (this.cloudwatchMeterRegistry = new CloudWatchMeterRegistry(...)). You don't need both; I would suggest not using the global registry and instead injecting the registry you have wherever you need it.
I'm not sure what you are doing with the connection pool (did you implement your own?), but there is out-of-the-box support for HikariCP, and DBCP publishes JMX counters that you can bind to Micrometer.
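For completeness, a rough sketch of what the HikariCP binding could look like. This is hedged: it assumes HikariCP's bundled Micrometer support via MicrometerMetricsTrackerFactory, and the JDBC URL is just a placeholder.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.metrics.micrometer.MicrometerMetricsTrackerFactory;
import io.micrometer.core.instrument.MeterRegistry;

public class PoolMetricsConfig {
    public HikariDataSource pooledDataSource(MeterRegistry registry) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:h2:mem:test"); // placeholder URL
        // HikariCP then registers its own pool meters (active/idle/pending connections) with the registry
        config.setMetricsTrackerFactory(new MicrometerMetricsTrackerFactory(registry));
        return new HikariDataSource(config);
    }
}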
I need to define a Duration value (spring.redis.timeout) in application.properties.
I was trying to use an approach described in the Spring Boot documentation:
Spring Boot has dedicated support for expressing durations. If you expose a java.time.Duration property, the following formats in application properties are available:
A regular long representation (using milliseconds as the default unit unless a @DurationUnit has been specified)
The standard ISO-8601 format used by java.time.Duration
A more readable format where the value and the unit are coupled (e.g. 10s means 10 seconds)
When I use spring.redis.timeout=3s, the Spring Boot application throws this exception:
Cannot convert value of type 'java.lang.String' to required type
'java.time.Duration': no matching editors or conversion strategy found
What would be the best way to set a correct value for a Duration property in application.properties with the latest Spring Boot 2 release?
Any property of type Duration can be injected via .properties or .yml files.
All you need to do is use proper formatting.
If you want to inject a duration of 5 seconds, it should be defined as PT5S, pt5s, or PT5s.
The case of the letters doesn't matter, so use any combination which is readable for you;
generally everyone uses all capital letters.
Other examples:
PT1.5S = 1.5 seconds
PT60S = 60 seconds
PT3M = 3 minutes
PT2H = 2 hours
P3DT5H40M30S = 3 days, 5 hours, 40 minutes and 30 seconds
You can also use plus and minus signs to denote positive and negative periods of time.
You can negate just one component, for example: PT-3H30M = -3 hours, +30 minutes, i.e. -2.5 hours.
Or you can negate the whole value: -PT3H30M = -3 hours, -30 minutes, i.e. -3.5 hours.
A double negative works here too: -PT-3H+30M = +3 hours, -30 minutes, i.e. +2.5 hours.
Note:
Durations can only be expressed in HOURS or a lower ChronoUnit (NANOS, MICROS, MILLIS, SECONDS, MINUTES, HOURS), since these represent exact durations.
Higher ChronoUnits (DAYS, WEEKS, MONTHS, YEARS, DECADES, CENTURIES, MILLENNIA, ERAS, FOREVER) are not allowed, since they don't represent exact durations: their lengths are estimates, because days vary with daylight saving time, months have different lengths, and so on.
The exception: Java automatically converts DAYS into HOURS, but it doesn't do this for any other higher ChronoUnit (MONTHS, YEARS, etc.).
If we parse "P1D", Java automatically converts it into "PT24H". So if we want a duration of 1 month, we have to use PT720H or P30D; in the case of P30D, Java's automatic conversion kicks in and gives us PT720H.
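A quick way to verify these rules is to call Duration.parse directly; a small self-contained sketch:

import java.time.Duration;

public class DurationParseDemo {
    public static void main(String[] args) {
        System.out.println(Duration.parse("PT1.5S"));     // PT1.5S
        System.out.println(Duration.parse("P1D"));        // PT24H (days are converted to hours)
        System.out.println(Duration.parse("-PT-3H+30M")); // PT2H30M (+3 hours, -30 minutes)
        // Duration.parse("P1M") throws DateTimeParseException: a month is not an exact duration
    }
}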
It's possible to use the @Value annotation with Spring Expression Language:
#Value("#{T(java.time.Duration).parse('${spring.redis.timeout}')}")
private Duration timeout;
At the moment (Spring Boot 2.0.4.RELEASE) it is not possible to use Duration together with the @Value annotation, but it is possible with @ConfigurationProperties.
For Redis, you have RedisProperties, and you can use the configuration:
spring.redis.timeout=5s
And:
@SpringBootApplication
public class DemoApplication {

    @Autowired
    RedisProperties redisProperties;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @PostConstruct
    void init() {
        System.out.println(redisProperties.getTimeout());
    }
}
It printed (parsed from 5s):
PT5S
https://docs.oracle.com/javase/8/docs/api//java/time/Duration.html#parse-java.lang.CharSequence-
Update for Spring Boot 2.5.5
We can use the @Value annotation together with application.properties values.
For example, you have the following property in your application.properties file:
your.amazing.duration=100ms
Then you can use it in the @Value annotation:
@Value("${your.amazing.duration}")
private Duration duration;
That is all.
Supported units:
ns for nanoseconds
us for microseconds
ms for milliseconds
s for seconds
m for minutes
h for hours
d for days
Docs: link
If your Spring Boot version or its dependencies don't put an ApplicationConversionService into the context (and Spring Boot doesn't until 2.1), you can expose it explicitly:
@Bean
public ConversionService conversionService() {
    return ApplicationConversionService.getSharedInstance();
}
It invokes Duration.parse, so you may use PT3S, PT1H30M, etc. in properties files.
Spring Boot attempts to coerce the external application properties to the right type when it binds to the @ConfigurationProperties beans.
If you need custom type conversion, you can provide a ConversionService bean (with a bean named conversionService).
See: https://docs.spring.io/spring-boot/docs/2.0.4.RELEASE/reference/htmlsingle/#boot-features-external-config-conversion
Create a new ApplicationConversionService bean (it must be named conversionService). Here is my code, tested with Spring Boot 2.0.4:
@Configuration
public class Conversion {

    @Bean
    public ApplicationConversionService conversionService() {
        final ApplicationConversionService applicationConversionService = new ApplicationConversionService();
        return applicationConversionService;
    }
}
Here is an example project using this approach:
https://github.com/cristianprofile/spring-data-redis-lettuce
I was getting this error, but only during testing; the bean using a @Value-annotated Duration was otherwise working. It turned out that I was missing the @SpringBootTest annotation on the test class (and the spring-boot-test dependency that provides it), which caused only a subset of the standard converters to be available.
I have a performance problem in my Spring Boot application when it's communicating with Redis that I was hoping someone with expertise on the topic could shed some light on.
Explanation of what I'm trying to do
In short, my application has 2 nested maps and 3 maps of lists which I want to save to Redis and load back into the application when the data is needed. The data in the first nested map is fairly big, with several levels of non-primitive data types (and lists of these). At the moment I have structured the data in Redis using repositories and Redis hashes, with repositories A, B, and C, and two different ways of lookup by id for the primary datatype (MyClass) in A. B and C hold data that is referenced from a value in A (with the @Reference annotation).
Performance analysis
Using JProfiler, I have found that the bottleneck is somewhere between my call to a.findOne() and the end of reading the response from Redis (before any conversion from byte[] to MyClass has taken place). I have looked at the slowlog on my Redis server to check for any slow, blocking actions and found none. Each HGETALL command in Redis takes 400μs on average (for a complete hash in A, including finding the referenced hashes in B and C). What strikes me as weird is that timing the a.findOne() call gives 5-20ms for one single instance of MyClass, depending on how big the hashes in B and C are. A single instance has on average ~2500 hash fields in total when references to B and C are included. When this is done ~900 times for the first nested map, I have to wait 10s to get all my data, which is way too long. In comparison, the other nested map, which has no references to C (the biggest part of the data), is timed at ~10μs in Redis and <1ms in Java.
Does this analysis seem like normal behavior when the Redis instance is run locally on the same 2015 MacBook Pro as the Spring Boot application? I understand that it will take longer for the complete findOne() method to finish than the actual HGETALL command in Redis, but I don't get why the difference is this big. If anyone could shed some light on the performance of the stuff going on under the hood in the Jedis connection code, I'd appreciate it.
Examples of my data structure in Java
#RedisHash("myClass")
public class MyClass {
#Id
private String id;
private Date date;
private Integer someValue;
#Reference
private Set<C> cs;
private someClass someObject;
private int somePrimitive;
private anotherClass anotherObject;
#Reference
private B b;
Excerpt of class C (a few primitives removed for clarity):
#RedisHash("c")
public class C implements Comparable<BasketValue>, Serializable {
#Id
private String id;
private EnumClass someEnum;
private aClass anObject;
private int aNumber;
private int anotherNumber;
private Date someDate;
private List<CounterClass> counterObjects;
Excerpt of class B:
#RedisHash("b")
public class B implements Serializable {
#Id
private int code;
private String productCodes;
private List<ListClass> listObject;
private yetAnotherClass yetAnotherObject;
private Integer someInteger;
Since protocol buffers are a wonderful alternative to Java serialization, we have used them extensively. We have also used the generated builders as general data objects. On examining construction speed, comparing an object built through a message builder with a plain Java object made of primitives, we found that for an object containing 6 primitive fields, constructing it via the builder took 1.1ms, whereas using plain Java primitives took only 0.3ms! And that was for a list of 50 such objects! Are builders so heavy that using them as general data objects affects construction speed to this extent?
Below is the sample design I used for the analysis:
message PersonList
{
    repeated Person person = 1;

    message Person
    {
        optional string name = 1;
        optional int32 age = 2;
        optional string place = 3;
        optional bool alive = 4;
        optional string profession = 5;
    }
}
The Java equivalent:
class PersonList {
    List<Person> personList;

    class Person {
        String name;
        int age;
        String place;
        boolean alive;
        String profession;
    }
    /* getters and setters */
}
I have a hard time imagining anything that contains only "6 primitive values" could take 7ms to construct. That's perhaps 100,000 times as long as it should take. So I'm not sure I understand what you're doing.
That said, protobuf builders are indeed more complicated than a typical POJO for a number of reasons. For example, protobuf objects keep track of which fields are currently set. Also, repeated primitives are boxed, which makes them pretty inefficient compared to a Java primitive array. So if you measure construction time alone you may see a significant difference. However, these effects are typically irrelevant compared to the time spent in the rest of your app's code.
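As a rough way to reproduce such numbers, here is a naive timing sketch. It assumes the PersonList/Person classes generated from the .proto above are on the classpath; for trustworthy numbers, a harness like JMH with proper warm-up should be used instead.

public class BuilderTimingSketch {
    public static void main(String[] args) {
        long start = System.nanoTime();
        PersonList.Builder list = PersonList.newBuilder();
        for (int i = 0; i < 50; i++) {
            // each element goes through a nested builder, field-presence bookkeeping included
            list.addPerson(PersonList.Person.newBuilder()
                    .setName("name" + i)
                    .setAge(i)
                    .setPlace("place" + i)
                    .setAlive(true)
                    .setProfession("profession" + i));
        }
        PersonList built = list.build();
        System.out.printf("50 persons via builders: %.3f ms (%d entries)%n",
                (System.nanoTime() - start) / 1_000_000.0, built.getPersonCount());
    }
}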
I am using Solr 4.4.0, and I found a (possible) issue related to the internal caching mechanism.
The JVM ran with -Xmx15g, but 12 GB of the heap were never free.
I created a heap dump and analyzed it using MemoryAnalyzer: I found 2 x 6 GB used as cache data.
The second time I did the same with -Xmx12g and found 1 x 3.5 GB.
It was always the same cache.
I checked the source code and found:
/** Expert: The cache used internally by sorting and range query classes. */
public static FieldCache DEFAULT = new FieldCacheImpl();
see http://grepcode.com/file/repo1.maven.org/maven2/org.apache.lucene/lucene-core/4.4.0/org/apache/lucene/search/FieldCache.java#FieldCache.0DEFAULT
This is very bad news, because it is a public static field and it is used in about 160 places in the source code.
MemoryAnalyzer says:
One instance of "org.apache.lucene.search.FieldCacheImpl" loaded by
"org.apache.catalina.loader.WebappClassLoader # 0x58c3a9848" occupies
4,103,248,240 (80.37%) bytes. The memory is accumulated in one
instance of "java.util.HashMap$Entry[]" loaded by "".
Keywords java.util.HashMap$Entry[]
org.apache.catalina.loader.WebappClassLoader # 0x58c3a9848
org.apache.lucene.search.FieldCacheImpl
I do not know how to manage this kind of cache - any advice?
In the end I got an OutOfMemoryError, with 12 GB of memory blocked.
I implemented a workaround of sorts. I created this class:
public class InternalApplicationCacheManager implements InternalApplicationCacheManagerMBean {

    public synchronized int getInternalCacheSize() {
        return FieldCache.DEFAULT.getCacheEntries().length;
    }

    public synchronized void purgeInternalCaches() {
        FieldCache.DEFAULT.purgeAllCaches();
    }
}
and registered it in JMX via org.apache.lucene.search.FieldCacheImpl
...
private synchronized void init() {
    ...
    initBeans();
}

private void initBeans() {
    try {
        InternalApplicationCacheManager cacheManagerMBean = new InternalApplicationCacheManager();
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("org.apache.lucene.search.jmx:type=InternalApplicationCacheManager");
        mbs.registerMBean(cacheManagerMBean, name);
    } catch (InstanceAlreadyExistsException e) {
        ...
    }
}
...
This solution lets you invalidate the internal caches, which partially solves the issue.
Unfortunately, there are other places (mostly caches) where data is stored and not removed as fast as I would expect.
If you use FieldCacheRangeFilter, you may want to try range filters which work without the field cache. If sorting is an issue, you may try using fewer sort fields, or fields with a data type that uses less memory.
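For illustration, a sketch of the two alternatives in the Lucene 4.4 API ("price" is a made-up field name that would have to be indexed as a numeric field): NumericRangeQuery walks the indexed trie terms and never touches FieldCache, while FieldCacheRangeFilter un-inverts the whole field into memory on first use.

import org.apache.lucene.search.FieldCacheRangeFilter;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;

public class RangeWithoutFieldCache {
    public static void main(String[] args) {
        // term-based: no FieldCache involvement
        Query trieBased = NumericRangeQuery.newIntRange("price", 10, 100, true, true);
        // FieldCache-based: loads all values of "price" into memory on first use and keeps them
        Filter cacheBased = FieldCacheRangeFilter.newIntRange("price", 10, 100, true, true);
        System.out.println(trieBased + " / " + cacheBased);
    }
}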
The field cache for each reader/atomic reader is thrown away when the reader is garbage collected, so a re-initialization of the reader should clear the cache; that also means the first operation using the cache will be a lot slower.
The fact is: FieldCache-based range filters and sorting rely on the cache. There is no getting around that when you really need them; you can only adapt your usage to minimize memory consumption.