Duplicate ActivityIds in TPL ETW Semantic Logging

I am running a Service Fabric application whose events are logged to Elasticsearch. I am using out-of-process semantic logging with the Elasticsearch sink.
To correlate the events, I use the Activity Id and Related Activity Id.
The problem is that some events logged from different partitions have the same Activity Id, and I am having difficulty correlating those events.
The Activity Id does not appear to be generated as a full GUID; most of the Activity Ids have zeros in the middle part of the GUID.
Is it possible to generate full GUIDs for the Activity Id without explicitly setting the current Activity Id for the thread?

Related

Reliable HTTP Response Handling for Record Linkage

I have a central system that publishes new records to a message bus topic.
Multiple agents subscribe to these messages and create new records in their respective systems using REST APIs.
These downstream systems cannot accommodate my central system's record Ids.
So I need to link records across all systems using a central record linkage repository, e.g.:

Central System Id | System A Id | System B Id
------------------|-------------|------------
1                 | 3231        | 767
2                 | 3232        | 768
When each agent creates a new record, there is an opportunity to grab the new downstream system Id from the HTTP response message and use it to populate the above repository.
But the agents have only one chance to take note of this Id and either update the central record linkage repository directly or place the Id on a message bus.
If there is a system failure before the agent can persist the Id, there is no way of getting the Id back from the downstream system without a human performing record matching.
For these lost records, an agent cannot consult the central record linkage repository to determine whether the record already exists, and it therefore creates duplicate records in the downstream system.
How can I implement a reliable record linkage strategy?
Alternatively, I could look towards implementing idempotent consumers, but the attributes used for matching existing records could change between source and target systems.
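For concreteness, here is a minimal sketch of the agent flow described above, with the vulnerable window marked. Every name in it (RecordLinkageAgent, DownstreamApi, and so on) is a hypothetical stand-in, not an actual implementation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RecordLinkageAgent {

    /** Stands in for a downstream system's REST API. */
    interface DownstreamApi {
        // Returns the downstream record Id; it is available only in this response.
        String createRecord(String payload);
    }

    // Stands in for the central record linkage repository.
    private final Map<String, String> linkageRepository = new ConcurrentHashMap<>();
    private final DownstreamApi downstreamApi;

    RecordLinkageAgent(DownstreamApi downstreamApi) {
        this.downstreamApi = downstreamApi;
    }

    void handle(String centralId, String payload) {
        // 1. Create the record in the downstream system; the new Id comes
        //    back exactly once, in the HTTP response.
        String downstreamId = downstreamApi.createRecord(payload);

        // *** Vulnerable window: a crash between the call above and the write
        //     below loses the downstream Id, forcing manual record matching. ***

        // 2. Persist the mapping centralId -> downstreamId.
        linkageRepository.put(centralId, downstreamId);
    }
}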

How is CQRS implemented, and where is the Read DB created?

I have a discovery service: https://github.com/Naresh-Chaurasia/API-MicroServices-Kafka/tree/master/Microservices-CQRS-SAGA-Kafka/DiscoveryService
I have a Product Service: https://github.com/Naresh-Chaurasia/API-MicroServices-Kafka/tree/master/Microservices-CQRS-SAGA-Kafka/ProductsService
I have an API gateway: https://github.com/Naresh-Chaurasia/API-MicroServices-Kafka/tree/master/Microservices-CQRS-SAGA-Kafka/ApiGateway
The Product Service and API Gateway are registered with the discovery service. I use the API Gateway to access the Product Service.
I am following a course to implement CQRS for the Products Service.
Under ProductService, I have src/main/java/com/appsdeveloperblog/estore/ProductsService/command/ProductAggregate.java
Here, ProductAggregate is the command side of CQRS.
It has the following methods (please refer to GitHub for more details):

@CommandHandler
public ProductAggregate(CreateProductCommand createProductCommand) throws Exception {
    ...
}

@EventSourcingHandler
public void on(ProductCreatedEvent productCreatedEvent) {
    ...
}
It also has src/main/java/com/appsdeveloperblog/estore/ProductsService/query/ProductEventsHandler.java, which persists the product to the H2 db.
I have also implemented src/main/java/com/appsdeveloperblog/estore/ProductsService/query/ProductsQueryHandler.java, which is used to query the db.
Here, ProductsQueryHandler is the query side of CQRS.
My questions are as follows:
What I am failing to understand is how and when the publish event is generated, and when the message is put on the messaging queue.
Also, is it possible that after the data is persisted to the Event Store, it is not stored in the Read DB? If yes, then how can we synchronize the Read DB?

What I am failing to understand is how and when the publish event is generated, and when the message is put on the messaging queue.
It happens after the events are published into the event store.
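For context, in an Axon-based service like the one in the linked repository, the aggregate's command handler typically stages the event with AggregateLifecycle.apply(...). A simplified sketch, assuming standard Axon 4 APIs and the command/event classes from the repo (the getProductId() accessor is my assumption):

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

@Aggregate
public class ProductAggregateSketch {

    @AggregateIdentifier
    private String productId;

    protected ProductAggregateSketch() {
        // Required by Axon to reconstruct the aggregate from past events.
    }

    @CommandHandler
    public ProductAggregateSketch(CreateProductCommand command) {
        // Validation would go here. apply(...) stages the event; when command
        // handling completes, Axon appends it to the event store and then
        // publishes it to event handlers (including the query side).
        AggregateLifecycle.apply(new ProductCreatedEvent(command.getProductId()));
    }

    @EventSourcingHandler
    public void on(ProductCreatedEvent event) {
        this.productId = event.getProductId();
    }
}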
There are lots of possible designs that you might use to copy events from the event store to the event handler on the query side. These include:

- Having the application code copy the event onto the message queue, which the event handler subscribes to.
- Having the event handler pull batches of events from the event store on a schedule.
- Having the event handler pull events from the event store, but using the message queue to announce that there are new messages to pull.
Is it possible that after the data is persisted to the Event Store, it is not stored in the Read DB?
Yes. How common that is will depend on... well, really it mostly depends on how much you invest in reliability.
This is why the pull model tends to be popular - the read process can keep track of which events it has seen, and ask for the next batch of messages after X - where X is a time stamp, or a sequence number, or something.
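A minimal sketch of that pull model (EventStore, Event, and ReadDb are hypothetical stand-ins, not the API of Axon or any particular event store):

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ReadModelProjector {

    interface Event {
        long sequenceNumber();
    }

    interface EventStore {
        // Events strictly after the given sequence number, oldest first.
        List<Event> readAfter(long sequenceNumber, int batchSize);
    }

    interface ReadDb {
        void apply(Event event);           // update the read model
        long lastSeenSequenceNumber();     // checkpoint stored with the read model
        void saveCheckpoint(long sequenceNumber);
    }

    private final EventStore eventStore;
    private final ReadDb readDb;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    ReadModelProjector(EventStore eventStore, ReadDb readDb) {
        this.eventStore = eventStore;
        this.readDb = readDb;
    }

    void start() {
        // Poll on a schedule; a message-queue notification could equally well
        // trigger the same pull instead of a timer.
        scheduler.scheduleWithFixedDelay(this::pullNextBatch, 0, 1, TimeUnit.SECONDS);
    }

    private void pullNextBatch() {
        long checkpoint = readDb.lastSeenSequenceNumber();
        for (Event event : eventStore.readAfter(checkpoint, 100)) {
            readDb.apply(event);
            readDb.saveCheckpoint(event.sequenceNumber());
        }
    }
}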
Warning: if you are trying to roll your own event store, getting these details right can be tricky. Unless the details of the event store are part of your competitive advantage, you really want to buy reliability rather than trying to build it.

How to find the data volume for a specific process group in NiFi?

I am new to NiFi. We have a complex NiFi data flow in our organization, and we have segregated the different projects' data flows into Process Groups. I was asked to find the data volume for a specific project (a Process Group in NiFi) over a given period of time. How can I find this in the NiFi Web UI?
The Apache NiFi User Guide has a section on monitoring components and data flow. By default, a processor/process group will display the total amount of data processed by that component over the last 5 minutes. There are also Reporting Tasks which allow the transmission of such monitoring data to external destinations.
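If you need the numbers programmatically rather than from the UI, the same statistics are exposed through NiFi's REST API under /nifi-api/flow/process-groups/{id}/status. A minimal sketch in Java; the host, port, and process group id are placeholders, and it assumes an instance that accepts unauthenticated requests:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProcessGroupStatusCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your NiFi host, port, and process group id.
        String url = "http://nifi-host:8080/nifi-api/flow/process-groups/<process-group-id>/status";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // The JSON response carries the same rolling statistics the UI shows,
        // e.g. bytes in/out for the group over the recent window.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}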

Custom user events using micrometer

I’m new to micrometer. I want to record events such as user registrations, user logins etc. Can I do this with micrometer and Spring Boot and show the data in Prometheus/Grafana?
Yes. However, you would be recording the events in aggregate. For example, you might have a counter for logins. Over time you would be able to see how many logins had occurred, and at what rate.
You wouldn't be tracking how many times user1234 had logged in. (You could force it to do that, but it isn't a good fit for that use case.)
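A minimal sketch of such an aggregate counter, assuming Spring Boot auto-configures a MeterRegistry (e.g. via the micrometer-registry-prometheus dependency); the metric names here are my own choice:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class UserEventMetrics {

    private final Counter logins;
    private final Counter registrations;

    public UserEventMetrics(MeterRegistry registry) {
        // Registered once at startup; increments are cheap and thread-safe.
        this.logins = Counter.builder("user.logins")
                .description("Total number of user logins")
                .register(registry);
        this.registrations = Counter.builder("user.registrations")
                .description("Total number of user registrations")
                .register(registry);
    }

    public void recordLogin() {
        logins.increment();
    }

    public void recordRegistration() {
        registrations.increment();
    }
}

Call recordLogin() wherever a successful login is detected; with the Prometheus registry, the counter is scraped as user_logins_total and can be graphed in Grafana.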

Distribute reports when a certain field is used

We are using a code, and if that code is used we want a report to be sent out automatically.
Sales Code (if this sales code is used, send out a report).
This is used as a check to ensure that the sales code is not used improperly.
Not sure how to do this in Cognos.
Thanks in advance,
Nathan
Event Studio might be the way to go here.
Use IBM® Cognos® Event Studio to notify decision-makers in your organization of events as they happen, so that they can make timely and effective decisions.
You create agents that monitor your organization's data to detect occurrences of business events. An event is a situation that can affect the success of your business. An event is identified when specific items in your data achieve significant values. Specify the event condition, or a change in data, that is important to you. When an agent detects an event, it can perform tasks, such as sending an e-mail, adding information to the portal, and running reports.
https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ug_cr_es.doc/c_event_studio.html
