Information recording for charging - oneM2M

TS-0001 clause 12, "Information Element Recording", describes the triggers (e.g. a request on Mcc/Mca or any other interface) that cause information to be recorded.
Clause 12.2.2, "Filtering of Recorded Information for Offline Charging", describes how to derive charging information from recorded information, which means that charging data is derived from Information Element Records (IERs).
Clause 10.2.11.14 describes the "Service Statistics Collection Record".
There are three questions:
First, is there any correlation between the Service Statistics Collection Record and the IER? It looks like the Service Statistics Collection Record is a subset of the IER, derived on the basis of the eventConfig and statsCollect resources. If it is a subset, then there is no field in the IER which maps to "collectingEntityID", even though Service Statistics Collection Records are derived per "collectingEntityID".
Second, there is no description of charging data records (CDRs) beyond their being a subset of the IER. Service Statistics Collection Records are generated as a result of statsCollect; when will the CDRs be generated?
Third, there is no link between the Service Statistics Collection Record and the CDR, yet both need to be transferred over the Mch interface.

For your first and third questions, I understand the confusion. The Service Statistics Collection Record and an M2M Event Record probably should be combined or consolidated. In fact, based on your question we will shortly bring in contributions to the oneM2M standard to make this change.
For the second question, TS-0001 clause 12.2.4 describes CDRs. This clause defines Accounting-Request and Accounting-Answer messages that flow between an IN and a billing system over Mch. Within the Accounting-Request there is an M2M Information element defined in which M2M Event Record information is stored. This is effectively the CDR. Depending on the requirements of the billing system, the charging function of the IN will filter the required information from the M2M Event Record and store this information in the M2M Information element of the Accounting-Request message for transfer to the billing system.
In addition, TS-0004 A.2 "Diameter Commands on Mch" defines how to bind the Mch Accounting-Request and Accounting-Answer messages to the Diameter protocol for deployments which use Diameter.
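To make the filtering step above concrete, here is a minimal C# sketch of how a charging function might map an M2M Event Record into the M2M Information element of an Accounting-Request. The type and field names are illustrative stand-ins of mine, not the names defined in TS-0001 or TS-0004.

using System;

// Illustrative stand-ins; names do not come from TS-0001/TS-0004.
public record M2mEventRecord(
    string RequestId, string Originator, string Receiver,
    DateTime Timestamp, long RequestBodySize);

public record M2mInformation(string Originator, string Receiver, DateTime Timestamp);

public record AccountingRequest(M2mInformation M2mInformation);

public static class ChargingFunction
{
    // The IN's charging function filters only the fields the billing
    // system requires out of the M2M Event Record and stores them in
    // the M2M Information element (effectively producing the CDR).
    public static AccountingRequest ToAccountingRequest(M2mEventRecord record) =>
        new(new M2mInformation(record.Originator, record.Receiver, record.Timestamp));
}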

Related

Appropriate usage of FHIR MedicationRequest paired with MedicationStatement basedOn

I've seen the relationship between a MedicationRequest and a MedicationStatement used in different ways at different healthcare entities - specifically linking the two with MedicationStatement.basedOn - and I'm curious whether there is a best practice or recommendation for a couple of clinical workflows. The FHIR documentation could be interpreted in multiple ways.
Consider the two following basic workflows:
A primary care clinic prescribes a non-acute medication (e.g. lisinopril) and creates a MedicationRequest, which gets ePrescribed to a pharmacy. This prescription could be repeatedly refilled and remain active for the patient's life, or could be stopped later if the patient no longer needs it. This prescription will span multiple encounters (events) in the treatment plan for this patient.
A patient arrives for an inpatient stay at a hospital. That patient notes they take lisinopril during their intake, and the hospital stores this as a MedicationStatement. That patient's primary care physician (who prescribed the lisinopril) and the hospital are hooked up to the same HIE, and the hospital sees the MedicationRequest for that patient when ingesting data.
For scenario 2 (more straightforward), I could see the hospital setting a reference on their MedicationStatement.basedOn to the ingested MedicationRequest. This aligns with the following from the MedicationStatement documentation:
This is a record of a medication being taken by a patient or that a medication has been given to a patient, where the record is the result of a report from the patient or another clinician, or derived from supporting information (for example, Claim, Observation or MedicationRequest). A medication statement is not a part of the prescribe->dispense->administer sequence but is a report that such a sequence (or at least a part of it) did take place, resulting in a belief that the patient has received a particular medication.
This resource is distinct from MedicationRequest, MedicationDispense and MedicationAdministration. Each of those resources refers to specific events - an individual order, an individual provisioning of medication or an individual dosing. MedicationStatement is a broader assertion covering a wider timespan and is independent of specific events. The existence of resource instances of any of the preceding three types may be used to infer a medication statement. However, medication statements can also be captured on the basis of other information, including an assertion by the patient or a care-giver, the results of a lab test, etc.
For scenario 1, however, I have seen two possible ways to use these resources based on the above. Depending on how the above is interpreted, both could be argued to be appropriate usage:
When the clinic prescribes lisinopril, a MedicationRequest is created. If treatment is stopped, that resource's status is updated. MedicationStatement resources are reserved for when patients self-report a medication, or for data ingested from an HIE.
When the clinic prescribes lisinopril, both a MedicationRequest and a MedicationStatement are created. The MedicationRequest refers only to that specific encounter of prescribing the medication, and the MedicationStatement is used throughout the overall treatment of the patient, spanning multiple encounters or events. If the treatment no longer requires lisinopril, the MedicationStatement status is updated.
From a purely technical perspective, both are using the resources correctly (MedicationStatement is basedOn the MedicationRequest). However from a FHIR usage and clinical data perspective, which usage is correct?
MedicationStatement.basedOn has been dropped in R5 and shouldn't really be there in R4. It would mean "This MedicationStatement was created under the authorization of order XYZ", and that almost never makes sense. If you're going to link a statement to an order, use the 'derivedFrom' element.
Both scenarios are fine. MedicationStatement can be used just for self-reported meds, but it's also fine to have an instance for each active medication therapy. It sort of depends on how the system likes to represent information.
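For what it's worth, here is a minimal C# sketch of the derivedFrom linkage using hypothetical, simplified POCO types (not an official FHIR library), assuming R4-style literal references:

using System.Collections.Generic;

// Hypothetical, simplified POCOs; not an official FHIR SDK.
public record Reference(string Value);   // e.g. "MedicationRequest/123"

public class MedicationStatement
{
    public string Status { get; set; } = "active";
    // Link the statement to the order it was inferred from via
    // derivedFrom rather than basedOn (which was dropped in R5).
    public List<Reference> DerivedFrom { get; } = new();
}

// Usage: the hospital ingests the PCP's MedicationRequest and records
// a statement derived from it.
// var statement = new MedicationStatement();
// statement.DerivedFrom.Add(new Reference("MedicationRequest/123"));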

DDD: dealing with distributed status across domains

Let's say we have a simple food delivery app, where the client orders the food, the restaurant prepares it and hands it to a courier, who delivers it to the client.
So here we have three different domains, and each of them has its own view of the order:
client - the client orders the food and sees its status: in preparation | in delivery | delivered
restaurant - the restaurant receives its order and has its own statuses: in queue | in preparation | ready to pick up
courier - the courier has only two statuses: delivering | delivered
Moreover, each of these domains has its own price and other attributes of the order:
client - total price (food price + delivery cost + fee)
restaurant - price of the food, and production time, to give the client a hint of when the food will be delivered
courier - cost of delivery
All I want to highlight is that each of the domains has its own order aggregate, so according to DDD we have to keep them in different aggregates, even in different microservices:
client - /orders/:id provides the general status of the order and total price to the client.
restaurant - /restaurants/:restaurantId/orders/:id provides the status of the food in the restaurant domain and its cost.
courier - /couriers/:courierId/orders/:id provides information on how much the courier earns from this order and how long the delivery took.
But now I've met another problem: the client order combines information from the other domains (is the food still in the restaurant, or is it being delivered?), so I have to gather this information whenever the client asks about their order. That would mean the client doesn't have its own domain (its own aggregate, total price, discount, etc.). But if I create an order aggregate for the client, then I no longer keep all information about the order in one place (when the restaurant hands the food to the courier, it should also change the status of the order in the client domain), which doesn't really fit microservices, because we keep information about the same order in different microservices.
Should I just create one order domain, or should I split it into different domains and make these domains communicate with each other when something changes in one of them?
One useful approach is to leverage domain events. When the restaurant's view of the state of the order changes, an event describing that change is published. The other services can then update their own model of the order (assuming that the change is relevant to that service).
So for instance, we might have:
user creates order via the client service => OrderCreated event emitted
restaurant service consumes OrderCreated event, translates the order for the restaurant (e.g. uses the prices which the delivery app pays the restaurant vs. the prices the delivery app charges the user) => OrderSentToRestaurant event emitted
courier service consumes OrderCreated and begins trying to figure out which courier will be assigned the order and the approximate transport time from pickup to delivery => DeliveryLatencyEstimateMade event emitted
client service consumes OrderSentToRestaurant and updates its order status (for presentation to the user) to in preparation
courier service ignores OrderSentToRestaurant
restaurant service ignores DeliveryLatencyEstimateMade event
client service consumes DeliveryLatencyEstimateMade and updates its model (delivery time remains unknown)
restaurant informs restaurant service of expected completion time => OrderReadyForPickupAt event emitted
courier service consumes OrderReadyForPickupAt, refines courier assignment decisions
client service consumes OrderReadyForPickupAt event, combines with the latest latency estimate to present a predicted delivery time to the user
and so forth. Each service is autonomous and in control of its data representation and free to ignore or interpret the events as it sees fit. Note that this implies eventual consistency (the restaurant service will know about when the order is expected to be ready for pickup before the courier or client services know about that), though microservice autonomy already effectively ruled out strong consistency.
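A minimal C# sketch of this choreography, with hypothetical event types and the client service's model shown in-process (a real system would publish the events to a message bus):

using System;

// Hypothetical event types carried on the bus.
public record OrderCreated(Guid OrderId, decimal TotalPrice);
public record OrderSentToRestaurant(Guid OrderId);
public record DeliveryLatencyEstimateMade(Guid OrderId, TimeSpan Estimate);
public record OrderReadyForPickupAt(Guid OrderId, DateTime ReadyAt);

// The client service keeps its own representation of the order and
// interprets only the events it cares about; the restaurant and
// courier services do the same with their own models.
public class ClientOrderModel
{
    public Guid OrderId { get; init; }
    public string Status { get; private set; } = "created";
    public TimeSpan? LatencyEstimate { get; private set; }
    public DateTime? PredictedDelivery { get; private set; }

    public void Apply(OrderSentToRestaurant _) => Status = "in preparation";
    public void Apply(DeliveryLatencyEstimateMade e) => LatencyEstimate = e.Estimate;
    public void Apply(OrderReadyForPickupAt e)
    {
        // Combine the pickup time with the latest latency estimate to
        // present a predicted delivery time to the user.
        if (LatencyEstimate is { } latency) PredictedDelivery = e.ReadyAt + latency;
    }
}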
When looking at aggregate design in each bounded context (BC), you have to include only the data required to provide the functionality that belongs to that BC. The fact that the restaurant endpoint needs to return some extra data is not a good enough reason to add that data to the order aggregate in that BC.
You can resolve the need for more data in different ways:
The API client can call multiple endpoints to fetch all the data it needs
The API can implement Data Aggregation, by internally querying multiple BCs/microservices and combining them to produce a single more complete response object
Create Read models, which store data from multiple sources into a single "table" in a way that simplifies querying and returning this data. This approach is more complex, but it's very useful when you need to filter and sort by fields belonging to multiple BCs, which is not possible with the previous two approaches.
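As a sketch of the second option (Data Aggregation), assuming hypothetical client interfaces for the restaurant and courier services:

using System;
using System.Threading.Tasks;

// Hypothetical service clients; in practice these would wrap HTTP
// calls to the restaurant and courier microservices.
public interface IRestaurantOrders { Task<string> GetFoodStatusAsync(Guid orderId); }
public interface ICourierOrders { Task<string> GetDeliveryStatusAsync(Guid orderId); }

public record ClientOrderView(Guid OrderId, string FoodStatus, string DeliveryStatus);

public class OrderAggregator
{
    private readonly IRestaurantOrders _restaurant;
    private readonly ICourierOrders _courier;

    public OrderAggregator(IRestaurantOrders restaurant, ICourierOrders courier)
        => (_restaurant, _courier) = (restaurant, courier);

    // Internally queries multiple BCs and combines the results into a
    // single, more complete response object for the API client.
    public async Task<ClientOrderView> GetOrderAsync(Guid orderId) =>
        new(orderId,
            await _restaurant.GetFoodStatusAsync(orderId),
            await _courier.GetDeliveryStatusAsync(orderId));
}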
Another consideration to make is double-checking if your boundaries are correct. Do you really need a Client BC? What business logic does it implement? Maybe Orders are created directly into Restaurant and there is no Client order? Client order could just be a "façade" providing all Restaurant orders belonging to a single client Id?
As a final note, I completely agree with Levi Ramsey's answer that events are the right way to coordinate the different aggregates. They would also be used to create the read models I mentioned above.

Does an event store store state?

Theoretically when using event sourcing you don't store "state" but events. But I saw in many implementations that you store snapshots of state in a column in a format like JSON or just a BLOB. For example:
Using an RDBMS as event sourcing storage
The events table has a Data column which stores the entire object. To me, it's like storing the state at the time when some event occurred.
Also this picture (taken from Streamstone):
It has a Data column with serialized state. So it stores state too, but inside an event?
So how do I replay from the initial state, if I can simply pick some event and access Data to get the state directly?
What exactly is stored inside Data: the state of the entire object, or a serialized event?
Let's say I have a person object (in C#):
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
What should my event store contain when I create a person, or when I change properties like name or age?
When I create a Person, I most likely will send something like a PersonCreatedEvent with the initial state, i.e. the entire object.
But what if I change Name or Age: should they be two separate events or just one? A PersonChangedEvent, or a PersonChangedAgeEvent and a PersonChangedNameEvent?
What should be stored in the event in this case?
What exactly is stored inside Data: the state of the entire object, or a serialized event?
That will usually be a serialized representation of the event.
One way of thinking about it: a stream of events is analogous to a stream of patch documents. Current state is calculated by starting from some known default state and then applying each patch in turn -- aka a "fold". Previous states can be recovered by choosing a point somewhere in the stream, and applying the patches up to that point.
The semantics of events, unlike patches, tend to be domain-specific. So Checked-In, Checked-Out rather than Patched, Patched.
We normally keep the events compact - you won't normally record state that hasn't changed.
Within your domain specific event language, the semantics of a field in an event may be "replace" -- for example, when recording a change of a Name, we probably just store the entire new name. In other cases, it may make sense to "aggregate" instead -- with something like an account balance, you might record credits and debits leaving the balance to be derived, or you might update the total balance (like a gauge).
In most mature domains (banking, accounting), the domain language has semantics for recording changes, rather than levels. We write new entries into the ledger, we write new entries into the checkbook register, we read completed and pending transactions in our account statement.
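A minimal C# sketch of that fold, reusing the Person example from the question (the event and type names are mine):

using System;
using System.Collections.Generic;
using System.Linq;

// Domain-specific events; each records only what changed.
public abstract record PersonEvent;
public record PersonCreated(string Name, int Age) : PersonEvent;
public record NameChanged(string NewName) : PersonEvent;   // "replace" semantics
public record BirthdayReached : PersonEvent;               // "aggregate" semantics

public record PersonState(string Name, int Age)
{
    public static readonly PersonState Initial = new("", 0);

    // Applying an event is like applying a patch document.
    public PersonState Apply(PersonEvent e) => e switch
    {
        PersonCreated c => new PersonState(c.Name, c.Age),
        NameChanged n   => this with { Name = n.NewName },
        BirthdayReached => this with { Age = Age + 1 },
        _ => this
    };
}

public static class Replay
{
    // Current state is a fold over the stream; replaying only a prefix
    // of the stream recovers any previous state.
    public static PersonState Fold(IEnumerable<PersonEvent> stream) =>
        stream.Aggregate(PersonState.Initial, (state, e) => state.Apply(e));
}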
But what if I change Name or Age: should they be two separate events or just one? A PersonChangedEvent, or a PersonChangedAgeEvent and a PersonChangedNameEvent?
It depends.
There's nothing wrong with having more than one event produced by a transaction.
There's nothing wrong with having a single event schema, that can be re-used in a number of ways.
There's nothing wrong with having more than one kind of event that changes the same field(s). NameLegallyChanged and SpellingErrorCorrected may be an interesting distinction to the business.
Many of the same concerns that motivate task based UIs apply to the design of your event schema.
It still seems to me like a PersonChangedEvent will contain all person properties that can change, even if they didn't change.
In messaging (and event design takes a lot of lessons from message design), we often design our schema with optional fields. So the event schema can be super flexible, while any individual representation of an event would be compact (limited to only the information of interest).
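For example, a single reusable event schema with optional fields might look like this in C# (the names are mine):

using System;

// One flexible schema: any individual event instance populates only
// the fields of interest and leaves the rest null.
public record PersonChanged(
    Guid AggregateId,
    long Version,
    string? Name = null,   // present only if the name changed
    int? Age = null);      // present only if the age changed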
To answer your question: an event that is stored should contain the event data only, not the object's state.
When you need to work on your entity, you read all the events and apply them to get the latest state every time. So events should be stored with the event data only (of course together with AggregateId, Version, etc.).
The "Objects State" will be the computation of all events, but if you have an Eventlistener that listens to all your published events you can populates a separate ReadModel for you. To query against and use as read only from a users perspective.
Hope it helps!
Updated answer to updated question:
It really depends on your model: if you update Age and Name at the same time, then yes, the new age and name values should be stored in a single new event.
The event should contain only this data: name and age, together with AggregateId, Version, etc.
The event listener will listen for each specific event (created, updated, etc.), find the aggregate's read model that you have stored, and update only these two properties (in this example).
For the create event, you create the object for the read model.
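A sketch of such a listener in C#, with hypothetical event types and an in-memory dictionary standing in for the read-model store:

using System;
using System.Collections.Generic;

// Hypothetical events; a stored event carries its data plus
// AggregateId, Version, etc.
public record PersonCreated(Guid AggregateId, string Name, int Age);
public record PersonUpdated(Guid AggregateId, string Name, int Age);

public class PersonReadModel
{
    public Guid Id { get; init; }
    public string Name { get; set; } = "";
    public int Age { get; set; }
}

public class PersonReadModelListener
{
    private readonly Dictionary<Guid, PersonReadModel> _store = new();

    // For the create event, create the read-model object.
    public void On(PersonCreated e) =>
        _store[e.AggregateId] = new PersonReadModel { Id = e.AggregateId, Name = e.Name, Age = e.Age };

    // For the update event, find the aggregate's read model and update
    // only these two properties.
    public void On(PersonUpdated e)
    {
        var model = _store[e.AggregateId];
        model.Name = e.Name;
        model.Age = e.Age;
    }
}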

OBIEE 11g: Scheduling reports based on a filter

I need my OBIEE analysis report to be sent to 200 people (all from different departments) through an Actionable Intelligence agent.
I need to filter the data based on department and send it. I was unable to put the condition in the agent.
Can I filter the data with a dashboard prompt and link the agent with the dashboard?
Will this work, or is there any other suggestion for this case?
For those "filter by X and send it to Y" scenarios, the best way (in my opinion) is to use the BI Publisher bursting options. It's just the textbook case for that.
If you have to stick to the agents in OBIEE, consider enabling row level security for that data following your requirements. Then just configure the agent to send the analysis to the required people and row level sec should do the rest.
If row level security is too much effort, I guess you could play with some auxiliary analysis to filter your main report based on the department of the user. The idea would be the following:
Create a report with the department column in the criteria and a filter by user, where the user id is equal to the presentation variable #{user.id} (a meta variable that is always available and contains the logged-in user).
Filter your main report with a condition where department is based on the results of another analysis (the previous one), so it will return the right department for each user.
Configure your agent to be sent as the recipient (not as a specific user) and use the analysis in point 2 as the content to be delivered.
Set your 200 recipients manually, or use a condition report to get them.
Make sure that both analyses in points 1 and 2 are saved in a place where all the users can read them.
I'm quite sure that it will work too :)
Though to be clear, my first option would be BIP bursting, followed by proper row-level security.
Hope it helps!

Event Sourcing - Aggregate modeling

Two questions:
1) How to model aggregates and the references between them
2) How to organise/store events so that they can be retrieved efficiently
Take this typical use case as an example: we have Order and LineItem (together they form an aggregate, with Order as the aggregate root), and a Product aggregate.
As a LineItem needs to know which Product it refers to, there are two options: 1) the LineItem holds a direct reference to the Product aggregate (which seems to be bad practice, as it violates the idea of an aggregate being a consistency boundary, because we could update the Product aggregate directly from the Order aggregate); 2) the LineItem holds only a ProductId.
It looks like the second option is the way to go... What do you think?
However, another problem arises when building an Order read/view model. This Order view model needs to know which Products are in the Order (i.e. ProductId, Type, etc.). The typical use case is reporting, and a CommandHandler can also use this Product object to perform logic such as checking whether there are too many of a particular product. Given that those data live in two separate aggregates, we need more than one database roundtrip. As we are using events to build the model, the pseudo code looks like this:
1) for a given order id (guid, order aggregate id), we load all the events for it; -- 1st database access
2) then we build the Order aggregate, so we know which ProductIds are referenced in the Order;
3) for the list of ProductIds, we load all events for them; -- 2nd database access
If we build a really big graph of objects (a lot of different aggregates), this may end up requiring several more database accesses (each of which is slow)... What's your take here?
Thanks
Take this typical use case as an example: we have Order and LineItem (together they form an aggregate, with Order as the aggregate root), and a Product aggregate.
The Order aggregate makes sense the way you have described it. "Product aggregate" is more suspicious; do you ask the model if the product is allowed to change, or are you telling the model that the product has changed?
If Product can change without first consulting with the order, then the LineItem must not include the product. A reference to the product (aka the ProductId) is ok.
If we build a really big graph of objects (a lot of different aggregates), this may end up requiring several more database accesses (each of which is slow)... What's your take here?
For reads, reports, and the like -- where you aren't going to be adding new events to the history -- one possible answer is to do the slow work in advance. An asynchronous process listens for writes in the event store, and then publishes those events to a bus. Subscribers build new versions of the reports when new events are observed, and cache the results. (search keyword: cqrs)
When a client asks for a report, you give them one out of the cache. All the work is done, so it's very quick.
For command handlers, the answer is more complicated. Business rules are supposed to be in the domain model, so having the command handler try to validate the command (as opposed to the domain model) is a bit broken.
The command handler can load the products to see what the state might look like, and pass that information to the aggregate along with the command data, but it's not clear that's a good idea -- if the client is going to send a command to be run, and you need to flesh out the Order command with Product data, why not instead have the client add the product data to the command directly, and skip the middle man?
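For example, a hypothetical AddLineItem command that the client has already fleshed out with the product data, so the command handler needs no extra lookup:

using System;

// Hypothetical command shape: the client copies the product data it
// already has into the command, skipping the middle man.
public record AddLineItem(
    Guid OrderId,
    Guid ProductId,
    string ProductType,   // copied from the Product by the client
    decimal UnitPrice,    // the price the client saw when ordering
    int Quantity);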
A CommandHandler can also use this Product object to perform logic such as checking whether there are too many of a particular product, etc.
This example is a bit vague, but taking a guess: you are thinking about cases where you prevent an order from being placed if the available inventory is insufficient to fulfill the order.
For real world inventory - a physical book in a warehouse - that's probably the wrong approach to take. First, the model itself is wrong; if you want to know how much product is in the warehouse, you should be querying the warehouse, not the product. Second, a physical warehouse is not constrained by your model -- calling the addProduct method on a warehouse aggregate doesn't cause the product to magically appear there.
Third, it probably doesn't match very well with what your domain experts want anyway. If the model says that the warehouse doesn't have enough product, do you think the stakeholders want the system to:
tell the shopper to buy the product somewhere else, or...
accept the order, and contact the supplier for a new delivery.
Hint: when in doubt, carefully review how amazon.com does it.
