I am sending multiple traps to OpenNMS as events. But those events are getting merged in the OpenNMS portal, and in the database only the latest event is displayed. Is this a bug or intended functionality? And how can we segregate the traps on the portal (OpenNMS) end?
An event definition can have a reduction key. This key is used to de-duplicate recurring events, especially SNMP traps. To help you further, the event definition itself would be useful.
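As a sketch, the reduction key is set in the `<alarm-data>` element of the event definition; the UEI, labels, and varbind index below are placeholders, not your actual definition. Adding a distinguishing field such as a varbind (`%parm[#1]%`) to the reduction key keeps traps with different values from being reduced into a single alarm:

```xml
<event>
  <!-- Placeholder UEI and labels; adapt to your trap's definition -->
  <uei>uei.opennms.org/vendor/example/myTrap</uei>
  <event-label>Example trap</event-label>
  <descr>Trap received: %parm[#1]%</descr>
  <logmsg dest="logndisplay">Trap received: %parm[#1]%</logmsg>
  <severity>Warning</severity>
  <!-- Including %parm[#1]% in the reduction key segregates traps that
       differ only in that varbind; without it, recurring traps with the
       same UEI/node are merged into one alarm -->
  <alarm-data reduction-key="%uei%:%dpname%:%nodeid%:%parm[#1]%"
              alarm-type="1" auto-clean="false"/>
</event>
```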
We are using Google Analytics event tracking. Some key events need to be acted on immediately. How could we send these events to Pub/Sub and then do the real-time analytics job?
All events can be synchronized to BigQuery for offline analytics, but for some specific events we want to trigger some logical operation immediately, and we have found nothing that achieves this.
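One common approach is to tee the event stream yourself: everything continues to flow to the batch sink (BigQuery), while a configurable set of "key" events is also forwarded to a real-time channel such as a Pub/Sub topic. This is a minimal sketch; `KEY_EVENTS` and the injected publish callables are illustrative assumptions (in production the real-time sink would be something like the Pub/Sub publisher client):

```python
# Tee the analytics stream: all events go to the batch sink, and a
# configurable subset is also forwarded for immediate processing.
KEY_EVENTS = {"purchase", "signup"}  # events needing immediate action

def route_event(event, publish_realtime, publish_batch):
    """Send every event to the batch sink; fan out key events immediately."""
    publish_batch(event)
    if event.get("name") in KEY_EVENTS:
        publish_realtime(event)

# Usage with simple list-backed sinks standing in for Pub/Sub / BigQuery:
realtime, batch = [], []
route_event({"name": "purchase", "value": 42}, realtime.append, batch.append)
route_event({"name": "page_view"}, realtime.append, batch.append)
```

The key design point is that the routing decision lives in your collection path, not in GA itself.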
We have several services that publish and subscribe to domain events. What we usually do is log events whenever we publish them and whenever we process them. We basically use this to apply the choreography pattern.
We are not doing event sourcing in these systems, and there's no programmatic use for the events after publishing/processing. That's the main reason we opted not to store them in a durable container, like a database or event store.
Question is, are we missing some fundamental thing by doing this?
Is storing Events a must?
I consider queued messages as system messages, even if they represent some domain event in an event-driven architecture (pub/sub messaging).
There is absolutely no hard-and-fast rule about their storage. If you would like to keep them around you could have your messaging mechanism forward them to some auditing endpoint for storage and then remove them after some time (if necessary).
You are not missing anything fundamental by not storing them.
You're definitely not missing out on anything (but there is a catch), especially if the business has no need for it. An event-sourced system would definitely store all the events generated by the system in a database (or any other event store).
The main use of an event store is to be able to restore the system to its current state after a failure by replaying the stored events. To make this recovery process faster, we take snapshots.
In your case, since these events are only relevant until the process completes, it would not make sense to store them unless you have a failure (this is the catch), especially in a distributed-transaction scenario.
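For context, the replay-plus-snapshot mechanism can be sketched in a few lines; the event shapes and the account-balance fold here are illustrative assumptions, not the API of any particular event store:

```python
# Rebuild state by replaying events; a snapshot lets us skip the prefix.
def apply(state, event):
    """Fold one event into the current state."""
    if event["type"] == "Deposited":
        return state + event["amount"]
    if event["type"] == "Withdrawn":
        return state - event["amount"]
    return state

def rebuild(events, snapshot_state=0, snapshot_version=0):
    """Start from the latest snapshot and replay only the newer events."""
    state = snapshot_state
    for event in events[snapshot_version:]:
        state = apply(state, event)
    return state

events = [{"type": "Deposited", "amount": 100},
          {"type": "Withdrawn", "amount": 30},
          {"type": "Deposited", "amount": 5}]
balance = rebuild(events)                                       # full replay
fast = rebuild(events, snapshot_state=70, snapshot_version=2)   # from snapshot
```

Both paths arrive at the same state; the snapshot just shortens the replay.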
What would I suggest?
Don't store the events themselves, but log the relevant details about them, and maybe use an ELK stack or Grafana to store and explore these logs.
Use either the Saga Pattern or the Routing Slip pattern in case of a Distributed Transaction and log them as well.
In case a failure occurs while processing an event, put that event into an exception queue and handle it there. If it's part of a distributed transaction, make sure the events either all share the same TransactionId or carry a CorrelationId, so you can look up the logs and save your system.
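The exception-queue-plus-correlation-id idea from the list above can be sketched as follows; the event shapes and the in-process `queue.Queue` standing in for a broker's exception queue are assumptions for illustration:

```python
import logging
import queue

log = logging.getLogger("events")

dead_letter = queue.Queue()  # stands in for a broker-side exception queue

def handle(event, processor):
    """Process an event; on failure, park it with its correlation id logged."""
    cid = event.get("correlation_id", "unknown")
    try:
        processor(event)
        log.info("processed %s (correlation_id=%s)", event.get("name"), cid)
    except Exception:
        log.exception("failed %s (correlation_id=%s)", event.get("name"), cid)
        dead_letter.put(event)  # handle or replay later

def billing_down(event):
    raise RuntimeError("billing service unavailable")

handle({"name": "ship_order", "correlation_id": "tx-42"}, lambda e: None)
handle({"name": "bill_order", "correlation_id": "tx-42"}, billing_down)
```

Because both events carry `tx-42`, the failed one can be tied back to its transaction in the logs.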
For reliably performing your business transactions in a distributed architecture, you somehow need to make sure that your events are published at least once.
So a service that publishes events needs to persist each event within the same transaction as the change that creates it.
Considering you publish events via infrastructure services (e.g. a messaging service), you cannot rely on that infrastructure being available all the time.
Also, your own service instance could go down after persisting your newly created or changed aggregate, but before it had the chance to publish the event via, for instance, a messaging service.
Question is, are we missing some fundamental thing by doing this? Is storing Events a must?
It doesn't matter that you are not doing event sourcing. Unless it is acceptable from the business perspective to sometimes lose an event forever, you need to temporarily persist your event within your local transaction until it has been published.
You can look into the Transactional Outbox Pattern to achieve reliable event publishing.
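A minimal sketch of the Transactional Outbox pattern, using SQLite so it is self-contained: the domain row and the event row commit in the same local transaction, and a separate relay later publishes pending rows. Table and column names are illustrative assumptions:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         payload TEXT, published INTEGER DEFAULT 0);
""")

def place_order(order_id):
    # One local transaction: the order and its event commit together,
    # or neither does -- so the event can never be silently lost.
    with db:
        db.execute("INSERT INTO orders VALUES (?, 'placed')", (order_id,))
        db.execute(
            "INSERT INTO outbox (payload) VALUES (?)",
            (json.dumps({"event": "OrderPlaced", "order_id": order_id}),))

def relay(publish):
    """Publish pending outbox rows; mark each only after publish succeeds."""
    rows = db.execute(
        "SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        publish(json.loads(payload))
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()

place_order(1)
sent = []
relay(sent.append)
```

If the relay crashes before marking a row, the row stays pending and is retried, which is exactly the at-least-once guarantee described above.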
Note: Logging/tracking your events somehow for monitoring or later analyzing/reporting purpose is a different thing and has another motivation.
Do smart contracts have events now that I can set up listeners for or do I need to poll the chain manually to get data about them?
There are no events right now on NEAR, but you could do the following:
https://github.com/near-examples/erc-20-token/blob/master/contract/events.ts
and in Rust
https://github.com/near/docs/issues/362
Instead of native events, we have a way to poll for changes in the contract's state. For example, the fungible-token events above are implemented using that.
Polling for state changes can be done via RPC (https://docs.near.org/docs/api/rpc-experimental#example-of-data-changes), and we are also finishing the indexing infrastructure, so later you can just run an indexing node that will provide all these events (https://github.com/nearprotocol/nearcore/pull/2651).
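A hedged sketch of the polling approach: `fetch_changes` stands in for an RPC call such as the experimental data-changes endpoint linked above; its exact request/response schema is an assumption here, so check the RPC docs for the real shape. Injecting the fetch function keeps the loop testable:

```python
# Poll for contract state changes in lieu of native events.
def poll_changes(fetch_changes, last_block, on_change):
    """Fetch state changes since `last_block` and hand each to a callback.

    Returns the newest block height seen, so the caller can resume there
    on the next polling tick."""
    newest = last_block
    for change in fetch_changes(since_block=last_block):
        on_change(change)
        newest = max(newest, change["block_height"])
    return newest

# Usage with a canned response standing in for the RPC:
def fake_rpc(since_block):
    return [{"block_height": since_block + 1, "key": "counter", "value": "7"}]

seen = []
next_block = poll_changes(fake_rpc, 100, seen.append)
```

Persisting `next_block` between ticks is what prevents processing the same change twice.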
We are using a code, and when that code is used we want a report to be sent out automatically.
Sales code (if this sales code is used, send out the report).
This is used as a check to ensure that the sales code is not used improperly.
Not sure how to do this in Cognos.
Thanks in advance,
Nathan
Event Studio might be the way to go here.
Use IBM® Cognos® Event Studio to notify decision-makers in your organization of events as they happen, so that they can make timely and effective decisions.
You create agents that monitor your organization's data to detect occurrences of business events. An event is a situation that can affect the success of your business. An event is identified when specific items in your data achieve significant values. Specify the event condition, or a change in data, that is important to you. When an agent detects an event, it can perform tasks, such as sending an e-mail, adding information to the portal, and running reports.
https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ug_cr_es.doc/c_event_studio.html
I am using a global event listener in my application. The event capture works perfectly in my code. The problem is that the event seems to fire multiple times. I have followed this tutorial link.
If your app is listening to global events generated by the system, then these events may fire several times under conditions you do not know. In my experience, to get an unambiguous signal from the system I had to analyze a sequence of global events, and the signal was treated as received only when this sequence of events occurred one after another in the expected order.
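The "expected sequence" idea can be sketched as a small state machine that only reports the signal once the full sequence has arrived in order; the event names here are hypothetical:

```python
# Recognize a signal only when global events arrive in the expected order.
class SequenceDetector:
    def __init__(self, expected):
        self.expected = expected
        self.pos = 0  # how much of the sequence has matched so far

    def feed(self, event):
        """Advance on the expected event; reset progress on anything else.

        Returns True exactly when the full sequence has been seen in order."""
        if event == self.expected[self.pos]:
            self.pos += 1
        else:
            # A stray event restarts matching (it may itself start a run).
            self.pos = 1 if event == self.expected[0] else 0
        if self.pos == len(self.expected):
            self.pos = 0
            return True
        return False

detector = SequenceDetector(["focus", "resize", "paint"])
results = [detector.feed(e) for e in ["focus", "focus", "resize", "paint"]]
```

Duplicate firings of a single event (the repeated `"focus"` above) no longer trigger the signal; only the complete ordered sequence does.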
If your app is listening to global events generated by your own application, then it is fully under your control. Check the code that fires the global events. Use EventLogger to log every moment when you fire an event and when you receive a fired event, then inspect the log to find out what is going on. It seems that your app fires the global event more times than expected.