How to use Micrometer Gauge the correct way? - spring

Say I have an application where a REST API updates the price of a product. I want to use a Micrometer Gauge to expose the new price as a metric. I'm having trouble understanding from the Micrometer documentation how this should be accomplished.
The only ToDoubleFunction that worked for me was to create a new method in my ProductService that returns the product's price. This seems like overhead for every piece of data I want to expose as a metric.
What am I missing here? Why isn't product.getPrice() enough to update the Gauge?

Micrometer's Gauge holds a reference to whatever it has to pull the value from, and that reference is a WeakReference by default.
For example, see the StatsdGauge and DefaultGauge implementations.
This means that if your provided value gets garbage collected, Micrometer has nothing left to poll the value from.
I assume that when you call product.getPrice() you never hold on to that value; you just pass it to something like meterRegistry.gauge("product.price", tags, value). Since nothing holds a strong reference to that specific value once this block of code completes, it gets garbage collected (GC-ed).
You have a couple of solutions here: either build the Gauge with its builder and specify strongReference(true), or (better) make sure you hold on to your references and manage their values yourself.
Both are somewhat awkward, as you'll end up holding a lot of "gauge sources" in memory.
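A minimal sketch of both options, assuming a simple Product class with a getPrice() method (the class and field names are illustrative, not taken from the question):

```java
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class PriceMetrics {

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();
        Product product = new Product(9.99);

        // Option 1: keep a strong reference to 'product' yourself (e.g. as a field
        // in a Spring bean) and let the gauge read through it on every scrape.
        Gauge.builder("product.price", product, Product::getPrice)
             .register(registry);

        // Option 2: ask Micrometer to hold the reference strongly on your behalf.
        Gauge.builder("product.price.strong", product, Product::getPrice)
             .strongReference(true)
             .register(registry);

        product.setPrice(12.49); // the next scrape of either gauge reports 12.49
    }

    // Illustrative stand-in for the product managed by ProductService.
    static class Product {
        private volatile double price;
        Product(double price) { this.price = price; }
        double getPrice() { return price; }
        void setPrice(double price) { this.price = price; }
    }
}
```

Either way, the gauge is registered once and sampled on demand; calling product.getPrice() yourself and handing the resulting double to the registry only records a value that nothing references afterwards.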

Related

Micrometer tracing for Batch DataFetchers

I am implementing Micrometer for our GraphQL service. One thing I am noticing is that for @BatchMapping methods we are getting a DataFetcherObservationContext for each index in the incoming list.
Example: I am looking up a group of SKUs, and for each of those SKUs I am looking up the brand information using a @BatchMapping so that I make only one web service call to our Brand microservice. However, when I look at the observability trace metrics in Grafana, I see an entry for each index (SKU) in the list that I pass to the @BatchMapping. Is there a way to combine these into a single DataFetcherObservationContext so I am not getting one for each SKU that I ultimately return?
See the attached screenshot for what I see in Grafana.
I am using all of the out-of-the-box Observation contexts for GraphQL and have just started to dabble in creating my own custom implementation, but I am hoping there is an easier way.
I am expecting this to be one single observation for the entire @BatchMapping, not an individual one for each index of the incoming parent list.
Edit: One other thing I am seeing is StackOverflowErrors if I try to look at the graphQLContext of the Observation object for all of the parent observations. It seems to traverse so many that it overflows the stack.

GA3 Event Push: Necessary fields in the Request

I am trying to push an event to GA3, mimicking an event sent by a browser to GA. From this event I want to fill custom dimensions (visible in the User Explorer) and relate them to a GA client ID that has visited the website earlier. Could this be done without influencing the website data too much? I want to enrich someone's data from an external source.
So far I can't seem to find the minimum set of fields that has to be in the event call for this to work. I've got these so far:
v=1&
_v=j96d&
a=1620641575&
t=event&
_s=1&
sd=24-bit&
sr=2560x1440&
vp=510x1287&
je=0&_u=QACAAEAB~&
jid=&
gjid=&
_u=QACAAEAB~&
cid=GAID&
tid=UA-x&
_gid=GAID&
gtm=gtm&
z=355736517&
uip=1.2.3.4&
ea=x&
el=x&
ec=x&
ni=1&
cd1=GAID&
cd2=Companyx&
dl=https%3A%2F%2Fexample.nl%2F&
ul=nl-nl&
de=UTF-8&
dt=example&
cd3=CEO
So far the custom dimension fields don't get overwritten with new values. Does anyone know what is missing, or can anyone share a list of the necessary fields with example values?
OK, a few things:
A CD value will be overwritten only if that CD's scope in GA is set to the user level. Make sure it is.
You need to know the client ID of the user. You can confirm that you have the right CID by using the User Explorer in the GA interface, unless you track it in a CD. It allows filtering by client ID.
You want to make this hit non-interactional, otherwise you're inflating the session count, since GA generates sessions for normal hits. A non-interactional hit has ni=1 among the params.
Wait. Scope calculations don't happen immediately in real time; they happen later on. Give it two days, then check the results and re-run your experiment.
Use a throwaway/test/lower GA property to experiment. You don't want to affect production data while you're not sure exactly what you're doing.
A good use case for such an activity would be something like updating the lifetime value of existing users and enriching the data with it without waiting for all of them to come back. That's useful for targeting, attribution, and more. A minimal sketch of such a hit follows below.
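For illustration, here is a minimal non-interaction event hit against the Measurement Protocol, in Java; the property ID, client ID, event category/action, and custom dimension values are placeholders, not values from the question:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class GaEventPush {

    public static void main(String[] args) throws Exception {
        // Bare minimum for an event hit, plus the user-scoped custom dimensions to update.
        Map<String, String> params = new LinkedHashMap<>();
        params.put("v", "1");               // protocol version
        params.put("tid", "UA-XXXXX-Y");    // test property, not production
        params.put("cid", "GAID");          // client ID of the user to enrich
        params.put("t", "event");           // hit type
        params.put("ec", "enrichment");     // event category (required for event hits)
        params.put("ea", "profile-update"); // event action (required for event hits)
        params.put("ni", "1");              // non-interaction: don't inflate sessions
        params.put("cd1", "GAID");          // user-scoped custom dimension 1
        params.put("cd2", "Companyx");      // user-scoped custom dimension 2

        String body = params.entrySet().stream()
                .map(e -> e.getKey() + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.google-analytics.com/collect"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("GA responded with HTTP " + response.statusCode());
    }
}
```

To check whether GA will accept the payload, the same body can be posted to the /debug/collect endpoint, which returns a validation result instead of recording the hit.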
Thank you.
This is the case; all CDs are user-scoped.
This is the case; we are collecting them.
ni=1 is within the parameters of each event call.
There are so many parameters; which ones are necessary?
We are using a test property for this.
We also have bot filtering checked.
It's hard to test when the User Explorer has a delay of two days and we are still not sure which parameters to use and which not. Who could help on the parameter part? My only goal is to update the CDs on the person. Who knows which parameters need to be part of the event call?

Chronicle Queue Proxy Method Value is Same Object Instance Each Time

I'm using the Chronicle v4 proxy API to convert a message into a method call.
When myMethod(Thing a) is invoked after a readOne() call, the 'a' object is the same instance each time, but its content has the latest state.
Imagine:
readOne();
readOne();
Methods fired:
myMethod(Thing a)
myMethod(Thing a)
The second call, with param 'a' now holding different state, overwrites any previously cached version of 'a' (in, say, a HashMap in memory), because it is the same Java object instance that was passed when myMethod was invoked initially.
I'm hoping this is something odd in my setup; it would be good to know whether this is by design or just an issue on my end.
This is by design to provide implicit recycling of the object.
If you want a new object you can use Marshallable.deepCopy(), or use Marshallable.copyTo() to copy into an existing one. Unless you retain the object, there shouldn't be an issue; if you write it out to another queue, for example, it is written immediately rather than in the background.
It is implemented this way so you can process millions of events while creating very few objects, i.e. less than one byte of garbage per message.
I highly recommend using the latest version of Chronicle Queue, https://search.maven.org/search?q=g:net.openhft%20AND%20a:chronicle-queue, currently v5.17.4.
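A minimal sketch of the deep-copy approach, assuming Thing extends SelfDescribingMarshallable and has a getId() accessor (both are illustrative, not from the original post):

```java
import net.openhft.chronicle.wire.SelfDescribingMarshallable;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative listener sitting behind the proxy/method reader.
public class ThingListener {

    private final Map<String, Thing> cache = new ConcurrentHashMap<>();

    public void myMethod(Thing a) {
        // 'a' is recycled by the reader and will hold the next message's state,
        // so take a deep copy before retaining it anywhere (e.g. in a cache).
        Thing copy = a.deepCopy();
        cache.put(copy.getId(), copy);
    }

    // Illustrative DTO; a real one would carry the message fields.
    public static class Thing extends SelfDescribingMarshallable {
        private String id;
        public String getId() { return id; }
    }
}
```

If you only read fields and pass the object straight on (for example, writing it to another queue within the same call), no copy is needed, which is what keeps garbage per message so low.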

Realm: Do we need to write each and every new RLMObject we create

I started using Realm as the storage layer for my app. This is the scenario I am trying to solve.
Scenario: I get a whole bunch of data from the server and convert each piece of data into an RLMObject. I want to just "save" to persistent storage at the end. In between, I want the RLMObjects I create to be reflected when I do a query.
I don't see a solution for this in Realm. It looks like the only way is to write each object into the Realm DB after it is created, and the documentation also says that writes are expensive. Is there any way around this?
To reduce the overhead, I guess I could maintain a list of the created objects and write all of them in one transaction. That still seems like a lot of work. Is that how it is intended to be used?
You can create the objects as standalone objects, without adding them to the Realm, and then add them all in a single transaction (which is very efficient) at the end.
Check out the documentation about creating objects here: https://realm.io/docs/objc/latest/#creating-objects
There is also an example of adding objects in bulk here, where they are added in chunks so that other threads can observe the changes as they happen: https://realm.io/docs/objc/latest/#using-a-realm-across-threads
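For illustration, here is the same standalone-then-add pattern sketched with Realm's Java SDK (the question links to the Objective-C docs above; the Product model and its fields here are hypothetical):

```java
import io.realm.Realm;
import io.realm.RealmObject;

import java.util.List;

public class BulkInsertExample {

    // Hypothetical model; the fields mirror whatever the server returns.
    public static class Product extends RealmObject {
        private String name;
        private double price;

        public void setName(String name) { this.name = name; }
        public void setPrice(double price) { this.price = price; }
    }

    // 'unmanagedProducts' are standalone objects built from the server response;
    // nothing touches the Realm file until the single transaction below.
    public static void saveAll(List<Product> unmanagedProducts) {
        try (Realm realm = Realm.getDefaultInstance()) {
            realm.executeTransaction(r -> r.insert(unmanagedProducts));
        }
    }
}
```

The important part is that constructing the objects costs no write overhead; only the one transaction at the end pays the cost of persisting the whole batch.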

Parse.com. Execute backend code before response

I need to know the relative position of an object in a list. Let's say I need to know the position of a certain wine among all wines added to the database, based on the votes received from users. The app should be able to receive the ranking position as an object property when retrieving a "wine" class object.
This should be easy to do on the backend side, but I've looked at Cloud Code and it seems it is only able to execute code before or after saving or deleting, not before reading and returning the response.
Is there any way to do this? Any workaround?
Thanks.
I think you would have to write a Cloud function to perform this calculation for a particular wine.
https://www.parse.com/docs/cloud_code_guide#functions
This would be a function you call manually. You would provide the "wine" object or objectId as a parameter and have your Cloud function return the value you need. Keep in mind that there are limitations on Cloud functions; read the documentation about time limits. You also don't want to make too many API calls every time you run this. It sounds like your computation could be fairly heavy if your dataset is large and you aren't caching at least some of the information.
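For illustration, this is roughly how such a call might look from a Java (Android) client; the "wineRank" function name and "wineId" parameter are hypothetical, and the ranking computation itself would live in the Cloud Code function on the server:

```java
import com.parse.FunctionCallback;
import com.parse.ParseCloud;
import com.parse.ParseException;

import java.util.HashMap;
import java.util.Map;

public class WineRankClient {

    // Calls a hypothetical "wineRank" Cloud function that computes the wine's
    // ranking position server-side from the stored votes.
    public static void fetchRank(String wineObjectId) {
        Map<String, Object> params = new HashMap<>();
        params.put("wineId", wineObjectId);

        ParseCloud.callFunctionInBackground("wineRank", params, new FunctionCallback<Integer>() {
            @Override
            public void done(Integer rank, ParseException e) {
                if (e == null) {
                    System.out.println("Ranking position: " + rank);
                } else {
                    System.err.println("Cloud function failed: " + e.getMessage());
                }
            }
        });
    }
}
```

Whether the function recomputes the rank on every call or reads a periodically refreshed cached value is the trade-off the time limits above force you to make.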
