Distribute reports when a certain field is used - business-intelligence

We are using a sales code, and if that code is used we want a report to be sent out automatically.
Sales Code (if this sales code is used, send out the report).
This is used as a check to ensure that the sales code is not used improperly.
Not sure how to do this in Cognos.
Thanks in advance,
Nathan

Event Studio might be the way to go here.
Use IBM® Cognos® Event Studio to notify decision-makers in your organization of events as they happen, so that they can make timely and effective decisions.
You create agents that monitor your organization's data to detect occurrences of business events. An event is a situation that can affect the success of your business. An event is identified when specific items in your data achieve significant values. Specify the event condition, or a change in data, that is important to you. When an agent detects an event, it can perform tasks, such as sending an e-mail, adding information to the portal, and running reports.
https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ug_cr_es.doc/c_event_studio.html

Related

Should we store Events in a database? (Event Driven Design)

We have several services that publish and subscribe to Domain Events. What we usually do is log events whenever we publish and whenever we process them. We basically use this to apply the choreography pattern.
We are not doing Event Sourcing in these systems, and there's no programmatic use for the events after publishing/processing. That's the main reason we opted not to store them in a durable container, like a database or event store.
Question is, are we missing some fundamental thing by doing this?
Is storing Events a must?
I consider queued messages as system messages, even if they represent some domain event in an event-driven architecture (pub/sub messaging).
There is absolutely no hard-and-fast rule about their storage. If you would like to keep them around you could have your messaging mechanism forward them to some auditing endpoint for storage and then remove them after some time (if necessary).
You are not missing anything fundamental by not storing them.
You're definitely not missing out on anything (but there is a catch), especially if it's not a business need. An event-sourced system would definitely store all the events generated by the system in a database (or any other event store).
The main use of an event store is to be able to restore the system to its current state after a failure by replaying messages. To make this recovery process faster, we have snapshots.
In your case, since these events are only relevant until the process is completed, it would not make sense to store them unless you have a failure (this is the catch), especially in a distributed transaction scenario.
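For context, here is a toy sketch (not from the question's system) of what an event store enables in an event-sourced setup: current state is rebuilt by replaying stored events, and a snapshot shortens the replay. All type and event names below are made up for illustration.

```python
from dataclasses import dataclass

# Toy event-sourcing illustration: state is rebuilt by replaying stored events,
# and a snapshot lets you replay only the tail. Names are illustrative only.
@dataclass
class Account:
    balance: float = 0.0

def apply(state: Account, event: dict) -> Account:
    """Fold a single stored event into the state."""
    if event["type"] == "Deposited":
        state.balance += event["amount"]
    elif event["type"] == "Withdrawn":
        state.balance -= event["amount"]
    return state

def rebuild(snapshot: Account, events_after_snapshot: list[dict]) -> Account:
    """Replay only the events recorded after the snapshot was taken."""
    state = Account(balance=snapshot.balance)
    for event in events_after_snapshot:
        state = apply(state, event)
    return state

snapshot = Account(balance=100.0)                      # periodic snapshot
tail = [{"type": "Deposited", "amount": 25.0},
        {"type": "Withdrawn", "amount": 10.0}]         # events since the snapshot
print(rebuild(snapshot, tail).balance)                 # 115.0
```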
What would I suggest?
Don't store the events themselves, but log the relevant details about them, and maybe use an ELK stack or Grafana to store those logs.
Use either the Saga pattern or the Routing Slip pattern in the case of a distributed transaction, and log those as well.
If a failure occurs while processing an event, put that event in an exception queue and handle it. If it's part of a distributed transaction, make sure the events either all share the same TransactionId or carry a CorrelationId, so you can look up the logs and recover your system.
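A rough sketch of that logging suggestion, assuming plain structured JSON log lines shipped to something like an ELK stack; the event names, fields, and correlation id handling are illustrative only:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical sketch: emit one structured log line per published/processed event
# so a log store (e.g. ELK) can later be searched by correlation_id.
logger = logging.getLogger("domain-events")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_event(event_name: str, correlation_id: str, payload_summary: dict, stage: str) -> None:
    """Log the relevant details of a domain event instead of persisting the event itself."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_name,
        "stage": stage,                    # "published" or "processed"
        "correlation_id": correlation_id,  # shared across the whole distributed transaction
        "summary": payload_summary,        # only the fields useful for troubleshooting
    }))

# The publishing side and the consuming side log the same correlation id,
# so a failure can be traced end to end without storing the events themselves.
correlation_id = str(uuid.uuid4())
log_event("OrderPlaced", correlation_id, {"order_id": "A-1001"}, stage="published")
log_event("OrderPlaced", correlation_id, {"order_id": "A-1001"}, stage="processed")
```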
For reliably performing your business transactions in a distributed architecture, you somehow need to make sure that your events are published at least once.
So a service that publishes events needs to persist such an event within the same transaction that causes it to get created.
Considering you are publishing an event via infrastructure services (e.g. a messaging service) you can not rely on it being available all the time.
Also, your own service instance could go down after persisting your newly created or changed aggregate but before it had the chance to publish the event via, for instance, a messaging service.
Question is, are we missing some fundamental thing by doing this? Is storing Events a must?
It doesn't matter that you are not doing event sourcing. Unless it is okay from the business perspective to sometimes lose an event forever, you need to temporarily persist the event within your local transaction until it has been published.
You can look into the Transactional Outbox Pattern to achieve reliable event publishing.
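As a rough illustration of the Transactional Outbox idea, here is a minimal sketch using SQLite; the table layout, event names, and relay loop are assumptions made for the example, not a prescribed implementation:

```python
import json
import sqlite3
import uuid

# Minimal outbox sketch: the aggregate and its event are written in the same
# local transaction; a separate relay publishes unpublished rows later.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE outbox (id TEXT PRIMARY KEY, type TEXT, payload TEXT, published INTEGER DEFAULT 0)")

def place_order(order_id: str, total: float) -> None:
    """Persist the aggregate and its event in one transaction: both commit or neither does."""
    with conn:
        conn.execute("INSERT INTO orders (id, total) VALUES (?, ?)", (order_id, total))
        conn.execute(
            "INSERT INTO outbox (id, type, payload) VALUES (?, ?, ?)",
            (str(uuid.uuid4()), "OrderPlaced", json.dumps({"order_id": order_id, "total": total})),
        )

def relay_outbox(publish) -> None:
    """Background relay: read unpublished rows, publish them, then mark them as sent."""
    rows = conn.execute("SELECT id, type, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, event_type, payload in rows:
        publish(event_type, json.loads(payload))  # at-least-once: mark only after success
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()

place_order("A-1001", 99.50)
relay_outbox(lambda event_type, payload: print("published", event_type, payload))
```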
Note: Logging/tracking your events somehow for monitoring or later analyzing/reporting purpose is a different thing and has another motivation.

How can I subscribe others to get email notifications from TeamCity builds?

We are a large group of developers, working on a project. We recently switched to TeamCity for managing the build process. I would like every developer to get a notification of the build success or failure. Since most of the developers are unfamiliar with TeamCity, and are super busy anyway, I'm concerned that asking them to subscribe to the build result may take a long time.
My question: Is there a way to subscribe on behalf of others to get build notifications via email?
Create a group (Administration - User Management - Groups), assign users to the group, and create a new notification rule (email, IDE notifications, Windows Tray - whatever you want) for the group (Group detail - Notification rules).
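If clicking through the UI for every developer is impractical, the group membership part can also be scripted against TeamCity's REST API and the notification rule set once for the group. The sketch below is hypothetical: the endpoint path, payload shape, and token handling are assumptions and should be checked against the REST API documentation for your TeamCity version.

```python
import requests

# Hypothetical sketch: add each developer to an existing TeamCity group via the
# REST API, then configure the group's notification rule once in the UI
# (Group detail -> Notification rules). Verify endpoints against your version's docs.
TEAMCITY_URL = "https://teamcity.example.com"  # placeholder server URL
TOKEN = "your-access-token"                    # placeholder access token
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
    "Accept": "application/json",
}

def add_user_to_group(username: str, group_key: str) -> None:
    """Attach an existing user to an existing group so the group-level rule applies to them."""
    response = requests.post(
        f"{TEAMCITY_URL}/app/rest/users/username:{username}/groups",  # assumed endpoint
        headers=HEADERS,
        json={"key": group_key},                                      # assumed payload shape
    )
    response.raise_for_status()

for developer in ["alice", "bob", "carol"]:    # example usernames
    add_user_to_group(developer, "BUILD_NOTIFICATIONS")
```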

Exchange EWS MessageId -> Available in ActiveSync too?

Is there any way to get the same "MessageId" you can get in Exchange EWS when using ActiveSync?
I thought this was an Exchange way to identify each message uniquely, but I can't seem to find a way to retrieve it using ActiveSync.
EDIT: I've got 2 applications, one that stores info using ActiveSync, and one that stores info using EWS, and I want them to be able to work separately on the same message.... To do this, I was hoping to use the EWS MessageId, which seems to be a GUID type identifier for each individual message. (Note: This doesn't appear to be the same Message-ID as is found in email headers).
Sadly, you're mostly out of luck.
ActiveSync is not an integration protocol, it's a mobile synchronization protocol designed for low-bandwidth communication devices like smart phones. A lot of capabilities in EWS will not exist in EAS.
Long-term message identification and correlation isn't as important for mobile devices. They simply get told what messages are in each folder, and allow the user to manipulate them. At any time the Exchange server may tell its EAS-connected clients to "re-sync" which causes them to forget the messages they have on the device and pull them cleanly from the server. That happens a lot with EAS, sometimes a couple of times an hour, depending on what is happening with that mailbox. For example, deleting a folder via Outlook causes a FolderSync to happen, and that forces connected devices to cleanly re-sync again.
Therefore EAS appears to have left behind the notion of GUIDs or other long-term IDs for messages. Instead, the server will assign ephemeral IDs that are valid only until the next big resync is forced (which could happen at any time). You'll probably see Exchange give very simple IDs like 7:45 (which means message ID 45 within folder 7, IIRC). However, after a resync that same message might have the ID 7:32 (if the user deleted other messages in that folder) or something like 4:22 (if the message was moved to another folder entirely).
Other EAS servers like Zimbra, Kerio or Notes Traveler might assign GUIDs, but from memory this is how Exchange behaves. Your only option might be to put a hidden correlation ID of your own into the body or subject of messages you're interested in. That will allow you to track the lifecycle of the items you're interested in, at the expense of some odd stuff being visible to users in their message contents.
@Brian is correct - there are no globally unique identifiers for ActiveSync items that can be used to correlate with EWS (with some exceptions: for instance, a meeting invite has a UID, as do calendar events, which can be used with some hackery to retrieve the EWS ID of the related EWS calendar event), and there are no fields hidden from the user that can be hijacked for adding your own correlation data. This is most apparent in email, contacts, tasks, notes, etc.
However, if you are syncing both, it is possible to use the metadata in the objects to match them. For instance, for contacts, write a hashing algorithm that combines the data from the first name, last name, company name, etc. fields and produces a result. This can be run on the data from both sides and will produce relatively few collisions when matching (and the items that do collide will have exactly the same user-visible data anyway, so in most cases it won't matter that you didn't get an exact alignment).
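A minimal sketch of that matching approach, assuming you have already pulled the contact fields from each side; the field choice and normalization here are illustrative, not an Exchange schema:

```python
import hashlib

# Sketch of the matching idea above: hash a few user-visible contact fields on
# both the EWS side and the ActiveSync side, then correlate items whose hashes match.
def contact_fingerprint(first_name: str, last_name: str, company: str, email: str = "") -> str:
    """Build a stable fingerprint from metadata that both protocols expose."""
    normalized = "|".join(
        part.strip().lower() for part in (first_name, last_name, company, email)
    )
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same contact seen via EWS and via ActiveSync produces the same fingerprint,
# even with minor differences in casing or whitespace.
ews_side = contact_fingerprint("Ada", "Lovelace", "Analytical Engines Ltd", "ada@example.com")
eas_side = contact_fingerprint("ada", "Lovelace ", "Analytical Engines Ltd", "ada@example.com")
assert ews_side == eas_side
```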

Event Aggregator Error Handling With Rollback

I've been studying a lot of the common ways that developers design/architect an application using domain-driven design (still trying to understand the concept as a whole). Some of the examples that I saw included the use of events via an event aggregator. I liked the concept because it truly keeps the different elements/domains of an application decoupled.
A concern that I have is: how do you rollback an operation in the case of an error?
For example:
Say I have an order application that has to save an order to the database and also save a copy of the order as a PDF to a CMS. The application fires an event that a new order has been created, and the PDF service that subscribes to this event saves the PDF. Meanwhile, when committing the order changes to the database, an exception is thrown. The problem is that the PDF has been saved but there isn't a matching database record.
Should I cache the previously handled events and fire a new error event that looks to the cache for "undo" operations? Use something like the command pattern for this?
Or... is the event aggregator not a good pattern for this.
Edit
I'm starting to think that maybe events should be used for less "mission critical" items, such as emailing and logging.
My initial thought was to limit dependencies by using the event aggregator pattern.
You want the event to be committed in the same transaction as the operation on your database.
In this particular scenario, you can push the event onto a queue that enlists in your transaction, so that the event will never go out unless the aggregate is persisted. This makes creating the PDF eventually consistent; if creating the PDF fails, you can fix the problem and have it automatically retried.
Maybe you can get more inspiration in one of my previous posts on eventual consistent domain events with RavenDB and IronMQ.
Handling an event before it actually happened (committed) only works if the event handler participates in the transaction. Make the event handler transactional (for instance by storing the PDF in a database), or publish and handle events after the transaction committed.
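Building on both answers: once the event is committed together with the order, the PDF side can simply be retried until it succeeds instead of being rolled back. Below is a small sketch of an idempotent, retried handler; the CMS helper functions are hypothetical placeholders for your own client code.

```python
import time

# Sketch of the consumer side: after the "order created" event is committed with
# the order, the PDF subscriber can be idempotent and retried until it succeeds.
# pdf_exists_in_cms and save_pdf_to_cms are hypothetical placeholders.
def handle_order_created(event: dict, pdf_exists_in_cms, save_pdf_to_cms,
                         max_attempts: int = 5) -> None:
    """Idempotent, retried handler: safe to run more than once for the same event."""
    order_id = event["order_id"]
    if pdf_exists_in_cms(order_id):           # idempotency check: already handled
        return
    for attempt in range(1, max_attempts + 1):
        try:
            save_pdf_to_cms(order_id, event)  # the PDF lags the order row (eventual consistency)
            return
        except Exception:
            if attempt == max_attempts:
                raise                         # dead-letter / alert after repeated failures
            time.sleep(2 ** attempt)          # simple exponential backoff between retries

# Example wiring with in-memory stubs standing in for the CMS:
_store: set = set()
handle_order_created({"order_id": "A-1001"},
                     pdf_exists_in_cms=lambda oid: oid in _store,
                     save_pdf_to_cms=lambda oid, ev: _store.add(oid))
```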

Can I raise custom SCOM alerts in BizTalk?

I need to check that all three of the required files in a set have been FTP'd before doing further processing on them. If all three files for the previous day have not arrived by 08:00, I need to raise an alert to a support person to contact the supplier. (Event 1)
If all three files are there, I then check that the totals equal the transactions, and if not I discard all three files and raise an alert to a support person to contact the supplier. (Event 2)
I have the BizTalk (2010) SCOM Management Pack loaded and need to know if I can create a custom rule that will raise an event for SCOM to detect, upon which I need SCOM to send a scenario-specific email to a recipient (e.g. Event 1 - All three required files not available! // or // Event 2 - Sum of transactions cannot be reconciled with the totals file).
We have looked at creating a SQL table for the message event log and defining custom processes for each connection point, but it is clumsy, and I was advised that it could be achieved in the manner above, letting SCOM handle sending the alerts based on events raised by the BizTalk process, even though they are more BUSINESS events than SERVICE events.
Please advise - we have been going round in circles with this for two or three weeks now.
SCOM doesn't have anything out of the box like you are describing.
I do think you are looking at custom work here as Nick has suggested.
Another option is BizTalk360, which offers this feature, called "Process Monitoring", but it requires a software license.
One option would be to use Business Activity Monitoring (BAM) and raise alerts in there.
