AX 2012 introduced delegates on classes. I've reviewed a number of documents on the web, and all of them illustrate usage on custom classes; they serve to demonstrate the technology rather than the real-life scenarios we have to deal with.
I'm looking for an example that captures changes in AX, such as adding or changing a worker, customer, or vendor, to start with. I want to capture that information and pass it to a .NET application. I'm having a hard time finding any examples.
See this answer for use of static event delegates to capture changes: Table Update Event Handler
Please be aware that events may be bypassed by using doUpdate etc. and by calling record.skipEvents(true).
Also consider using the SQL Server Change Data Capture feature.
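If it helps to picture the hand-off, here is a minimal sketch of what the receiving .NET side could look like; the class, properties and interface are hypothetical (not part of any AX API) and only illustrate the shape of the change information the table event handler would forward.

using System;

// Hypothetical contract on the .NET application side; the AX table event
// handler (or a service layer in front of it) would forward one of these
// per captured change.
public class AxRecordChange
{
    public string TableName { get; set; }    // e.g. "CustTable", "VendTable", "HcmWorker"
    public long RecId { get; set; }          // AX record id of the changed row
    public string Action { get; set; }       // "insert", "update" or "delete"
    public DateTime ChangedAtUtc { get; set; }
}

public interface IAxChangeReceiver
{
    // Called once per captured insert/update/delete.
    void OnRecordChanged(AxRecordChange change);
}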
I finally got my sample dmn-quarkus example running. Is there a property that enables tracing, so that it prints the sequence of decisions executed?
I noticed that when I provide an incorrect JSON payload for my DMN model, Kogito responds with a detailed response telling me which decision failed.
This is awesome! Is there a property I can turn on to get these details in every response?
Kogito is based on a domain-model-first approach to code generation:
Kogito ergo domain
Kogito adapts to your business domain rather than the other way around [...]
This means the automatically generated API will always take the "shape" of the input/output context of the DMN model, rather than the generic API of the v7.x kie-server approach.
The information you get on error is meant to provide an analogue of a stack trace.
You can always leverage the Kogito API programming model to build the REST service yourself, in whatever way best fits your specific business requirement -- should that be returning a list of DMNDecisionResult(s). For instance, a pragmatic approach could be to inspect the automatically generated code and then write a bespoke service based on it.
We are looking into audit functional requirements, but that is not provided out of the box yet. We always welcome community feedback, especially in these very early versions! Don't hesitate to join the community on our mailing list or raise a JIRA ticket to take part in the conversation; the team will be glad to look into it further based on community feedback and suggestions!
Can someone please tell me which of the following has more advantages: plugins or workflows?
As the post Custom WorkFlows vs Plug-ins in MS CRM seems to be a little outdated, I can share my experiences with you.
Workflows:
Contain logic you provide simply by "clicking" together the actions you want performed (like Update, Create, etc.)
Can be run "on demand"
Can often be handled by key users and do not require a dedicated developer
Should not be used for complicated logic, as the interface often does not let you add additional logic afterwards
If used for complicated logic anyway (as stated above), refactoring or changes are often very hard to integrate!
In current cloud organisations you are told that you SHOULD not use these anymore and should switch to MS Flow instead. (VERY IMPORTANT!!)
Plugins:
Custom code, so you can implement anything from simple to very complicated server-side logic
You need a(n experienced) developer
Can perform faster than workflows!
Nearly everything you can do with a workflow can be done by a plugin (or job), but not vice versa
You can trigger a plugin and hand in data (parameters!) by creating your own "messages" (by this I mean you are not limited to Update, Delete, Create, etc. as messages for plugins; you can define your own message steps by creating "Actions" in the Processes section of your Dynamics organization, where you can define input AND output parameters. These custom messages can also be triggered on demand, for instance from JavaScript. Guide on how to use/create custom messages (Actions)) -- a minimal plugin skeleton is sketched right after this list
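For reference, a minimal plugin skeleton is sketched below; it is assumed to be registered on, say, the Update message of the contact entity (registration details are up to you), and it only traces the target record, which is where your server-side logic would go.

using System;
using Microsoft.Xrm.Sdk;

public class ContactUpdatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // Standard plugin plumbing: execution context, tracing and organization service.
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // "Target" holds the record the message (e.g. Update) was executed on.
        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity target)
        {
            tracing.Trace("Plugin fired for {0} ({1})", target.LogicalName, target.Id);
            // Server-side logic that a workflow could not easily express goes here,
            // using "service" to query or update other records as needed.
        }
    }
}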
In my experience, plugins are usually the better-suited solution if the matter is even a little complicated, as workflows are far less maintainable. Simple "one-liners" can often be handled by workflows.
Nevertheless, each developer/consultant has to choose their own way to improve and develop their organization.
@Community: feel free to correct me if I am wrong anywhere or if you have different experiences.
In the current plan, incoming commands are handled by Function Apps, which send events to an Event Hub, and the views are then materialized from those events.
Someone is arguing that instead of storing events in something like Table Storage and materializing views based on events and snapshots, we should:
Just stream events to a log in Azure Monitor to have auditing
Make changes to a domain object immediately in response to a command and use the change feed as our source of events for materialized views
He doesn't see the advantage of even having a materialized view: why not just use a query? The argument is that we don't expect a lot of traffic.
He wants to satisfy the whole audit requirement by saving events to the Azure Monitor log -- just an application log. Commands would instead directly modify the representation of an entity in Cosmos DB, and we'd use the Cosmos DB change feed as our domain object events, or we'd create new events off of that via subscribers to that stream.
Is this actually an advantageous approach? Can y'all think of any reasons why we wouldn't want to do that? It seems like we'd be losing something here.
He's saying we'd no longer need to be concerned with eventual consistency, as we'd have immediate consistency.
Every reference implementation I've evaluated does NOT do it the way he's suggesting. I'm not deeply versed in the advantages/disadvantages of the event sourcing / CQRS paradigm, so I'm at a loss at the moment. Currently researching furiously.
This is a conceptual issue, so there isn't much of a code example. However, here are some references that seem to back up the approach I'm taking:
https://medium.com/#thomasweiss_io/planet-scale-event-sourcing-with-azure-cosmos-db-48a557757c8d
https://sajeetharan.com/2019/02/03/event-sourcing-with-azure-eventhub-and-cosmosdb/
https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
If your goal is only to have the audit log, state-based persistence could be a good choice. Event sourcing adds some complexity to the implementation side and unless you can identify more advantages of using it, you might not convince your team to bring this complexity to the system. There are numerous questions and answers on SO, as well as in some blog posts, about pros and cons of event sourcing, so I won't get into that discussion here.
I can warn you, though, that the second article in your list is very weak and would most probably lead you into many difficulties. The role of Event Hub there is completely unclear, and it doesn't explain anything about projections and read-models (what you call "materialised views"). Only a very limited number of use cases can live with only getting one entity by id and never executing a query across multiple entities. That also probably answers your concern about having read-models at all: you will need them very soon, the first time you have to figure out how to get a list of entities based on some condition (a query).
Using Cosmos DB as the event store is completely feasible, as described in the first article, if you can manage the costs involved. Just remember to set the change feed TTL to -1; otherwise you won't be able to replay your projections when you need to.
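As a rough illustration of that first-article setup, here is a minimal sketch of appending an event document to a Cosmos DB container with the .NET SDK (Microsoft.Azure.Cosmos); the event shape, the partition-by-stream choice and the helper names are assumptions, not a prescription.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class DomainEvent
{
    public string id { get; set; }           // Cosmos DB requires a lowercase "id" property
    public string streamId { get; set; }     // aggregate/entity id, used here as the partition key
    public string type { get; set; }         // e.g. "CustomerRenamed"
    public string payload { get; set; }      // serialized event body
    public DateTime occurredAtUtc { get; set; }
}

public static class EventStore
{
    public static async Task AppendAsync(Container events, DomainEvent evt)
    {
        // Append-only: we only ever create new documents, never update them,
        // so the change feed sees every event exactly once.
        await events.CreateItemAsync(evt, new PartitionKey(evt.streamId));
    }
}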
To summarise:
Keeping the audit log can be done without event sourcing, but you need to ensure that events are published reliably, preferably in the same transaction as the entity state update. That is often hard or impossible, but you might accept the risk if your audit requirement is not strict. You can also base your audit log on the Cosmos DB change feed, just collecting document changes and logging them somewhere.
Event sourcing is a powerful technique but it has both pros and cons. The most common prejudice against using event sourcing is its implementation complexity. It might not be a big issue if you have a team that is somewhat experienced in building event-sourced systems. If you don't have such a team, you might want to build a small-scale spike to get some experience.
If you don't get full buy-in from the team to use event sourcing, you will later get all the blame if anything goes wrong. And it will go wrong at some point, especially with little experience in this area.
Spend some time reading books and trying out things yourself, before going wild in production.
Don't use Event Hub for anything it is not designed for. Event Hub is a powerful event ingestion transport with a limited TTL, and it should be used for that purpose.
Don't use Table Storage as the event store, unless you only read entities by id. I used it in production for such a scenario and it worked (to some extent) but you can't project read-models from there.
A simple rule of thumb is to not use products for tasks they weren't designed for.
Azure Monitor was not designed to store application domain data. It is designed to store telemetry data from your applications and services, and it provides features such as alerts and other integrations with DevOps tools for managing the operation and health of your apps.
There is a simple reason why you were able to find articles on event sourcing with Cosmos DB and why our own docs talk about it: it was designed to be used this way. It is simple to set up Cosmos DB as an append-only event store for your applications and to use the change feed to fire off messages to other apps or services or, in your case, to maintain the materialized view state of domain objects within your app.
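To make that concrete, here is a minimal sketch of wiring up the Cosmos DB change feed processor from the .NET SDK (Microsoft.Azure.Cosmos) to keep a read model up to date; the processor name, container arguments and the UpdateReadModelAsync helper are assumptions for illustration, and DomainEvent is the event shape from the earlier sketch.

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class MaterializedViewBuilder
{
    public static async Task<ChangeFeedProcessor> StartAsync(Container events, Container leases)
    {
        // The processor calls the handler with batches of newly appended documents
        // and checkpoints its progress in the lease container.
        ChangeFeedProcessor processor = events
            .GetChangeFeedProcessorBuilder<DomainEvent>("materialized-views", HandleChangesAsync)
            .WithInstanceName("worker-1")
            .WithLeaseContainer(leases)
            .Build();
        await processor.StartAsync();
        return processor;
    }

    private static async Task HandleChangesAsync(
        IReadOnlyCollection<DomainEvent> changes, CancellationToken cancellationToken)
    {
        foreach (DomainEvent evt in changes)
        {
            // Hypothetical projection step: apply each event to the read model.
            await UpdateReadModelAsync(evt);
        }
    }

    private static Task UpdateReadModelAsync(DomainEvent evt) => Task.CompletedTask;
}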
Something doesn't feel right to me here, so I would like the community's input - perhaps I am approaching this in the wrong way...
Q: Is it appropriate to use traditional infrastructure logging frameworks (like log4net) to log business events?
When I say business events, I mean I want a global log like this:
xx:xx Customer A purchased widget B.
xx:xx Widget B was dispatched from warehouse.
xx:xx Customer B payment declined.
Most traditional infrastructure logging frameworks have event levels something like this:
FATAL
ERROR
WARN
INFO
DEBUG
And of course these messages don't fit well into that. The best match would be INFO, but these are important events, and INFO is of very low importance.
I would still like this to be a 'log' (i.e. I don't want to have to extract this from my business objects each time I want to see it).
Seems to me I have two options:
1) Use a framework like log4net and just define a special logger for this (and live with the fact that it doesn't feel right).
2) Provide a service for this that doesn't rely on a traditional logging framework.
I'm leaning towards 2. What has anyone else done in similar situations?
Thanks!
What you're wanting sounds like an auditing service, not a logging service. If I'm right, your goals are to keep track of these business events for historical and maybe even reporting purposes. You can use the details in the audit to, for lack of a better phrase, place blame for events that happen in the system.
I probably wouldn't use a logging system, like log4j, for this purpose. In our system, auditing is a first class citizen as a full service.
--
HTH,
Dusty
Leave the logger for things having to do with the program, not the business. It is just a tool to help the developers.
Write your own system to log business events. If it is a business requirement to have a record, you will want something you have control over and you will need to use the logger above to keep track of how it works.
Basically, #2 in your question.
To me the idea of a business event is that it plays a role in some future business processing, anything from actually triggering business actions to simply being available for analytics.
Hence it has completely different QoS requirements and needs its own API.
Conceivably that initially maps down to logging, but in future it could go to reliable messaging or a DB.
These sound like the sorts of things that your customers might potentially want to query or report on from within your application - the obvious choice would be the database.
In particular, in this case I feel traditional logging frameworks wouldn't be suitable, because when it comes to data that you might later want to access from within your application, logging frameworks let you do things that don't really make sense - for example, you might be able to change where the log output is sent based on the app.config file (which is unhelpful if you then try to read it from a different location).
That said, if a logging framework already lets you do exactly what you want, then there isn't any shame in just using it as your implementation and saving yourself the effort:
class TransactionLogger
{
    // Underlying infrastructure logger (log4net here), hidden behind a
    // business-facing API so it can be swapped out later.
    private static readonly log4net.ILog Logger =
        log4net.LogManager.GetLogger(typeof(TransactionLogger));

    public void Log(Message message)
    {
        Logger.Info(message.ToString());
    }
}
In my experience, a business event comprises a large or huge number of technical operations behind the scenes, and only certain business events are important to the business.
This creates problems when trying to use a generic logging methodology, so in general, in the systems I've worked on, both are used.
Logging for the technical aspects, and business event logging for the business events.
The business event logging doesn't use the same technology as the technical logging; instead it logs to a custom-designed history/audit table (sometimes these are split, depending on the required detail), which is designed specifically for each application. (This keeps the auditors and users nice and happy.)
This allows easy reporting and management of the information, while obviously expanding the scope of each specification slightly.
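Purely as an illustration, such a history/audit row often ends up with a shape like the sketch below; the field names are hypothetical, and every real application adds its own domain-specific columns.

using System;

// Hypothetical shape of one row in a custom history/audit table.
public class BusinessAuditEntry
{
    public long Id { get; set; }
    public DateTime OccurredAtUtc { get; set; }
    public string EventType { get; set; }    // e.g. "WidgetDispatched"
    public string EntityType { get; set; }   // e.g. "Order", "Customer"
    public string EntityId { get; set; }
    public string PerformedBy { get; set; }  // user or system that caused the event
    public string Details { get; set; }      // free-form or serialized extra detail
}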
You could use it, but what you really need is business activity monitoring and event processing software. Off the top of my head, IBM WebSphere Business Monitor provides this capability. It processes Common Base Events (an IBM implementation of the Web Services Distributed Management Web Event Format standard) and then takes that data and creates business activity dashboards.
Check out DiALog: A Distributed Model for Capturing Provenance and Auditing Information. Apart from the distributed aspect, you can use its subject-predicate-object principle to record the business events and afterwards reconstruct certain trails.
Here is a related post of mine: Audit logging and exception management framework.
Pattern of pub-sub events is that the publisher should not know or care if there are any subscribers out there, nor should it care what the subscribers do if they are there (from Brian Noyes' blog)
What are the best practices for using the EventAggregator in Prism? Currently I have a few modules which are loosely coupled and work independently. These modules use the EventAggregator to communicate with other modules. As the application grows, I'm confused about how to document my code. There could be many modules publishing events and many others subscribing to them, and, as Brian puts it, neither knows exactly what the other does. When creating a new module, how do I make sure it is subscribed to some XYZ event without breaking the loosely coupled structure?
How do I represent a module that uses the EventAggregator visually (in some kind of diagram)?
You have a lot of questions in your post that can be answered "it depends on your application," but I'll try to answer some of them.
One thing that I see most often with EventAggregator is abuse. Many people use EventAggregator in a way that makes both the publisher and subscriber dependent on each other. This brings me to my first bit of advice:
Never assume there are any subscribers to an event.
EventAggregator is useful for publishing events other views might be interested in. For example, in our application we allow a user to change someone's name. This name might be displayed on other views already open in the application (we have a tabbed UI). Our use case was that we wanted those UIs to update when the name was changed, so we published a "UserDataChanged" event so that open views could subscribe and refresh their data appropriately; if no open views were interested in this data, no subscribers were notified.
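For context, publishing and subscribing to such an event with Prism's EventAggregator looks roughly like the sketch below (written against Prism versions where events derive from PubSubEvent<T>; older releases used CompositePresentationEvent<T>, and the UserData payload and view model names are placeholders).

using Prism.Events;

// Payload type; a placeholder for whatever user data you carry.
public class UserData
{
    public string UserId { get; set; }
    public string DisplayName { get; set; }
}

// The event itself is just an empty marker class.
public class UserDataChangedEvent : PubSubEvent<UserData> { }

public class EditUserViewModel
{
    private readonly IEventAggregator eventAggregator;

    public EditUserViewModel(IEventAggregator eventAggregator)
    {
        this.eventAggregator = eventAggregator;
    }

    public void OnNameChanged(UserData user)
    {
        // The publisher does not know or care whether anyone is listening.
        eventAggregator.GetEvent<UserDataChangedEvent>().Publish(user);
    }
}

public class UserListViewModel
{
    public UserListViewModel(IEventAggregator eventAggregator)
    {
        // Any open view that cares simply subscribes and refreshes itself.
        eventAggregator.GetEvent<UserDataChangedEvent>().Subscribe(Refresh);
    }

    private void Refresh(UserData user)
    {
        // Update the displayed name for the matching user, if it is shown.
    }
}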
Favor .NET Events over EventAggregator events where appropriate
Another mistake I see frequently is a business process implemented using EventAggregator, where data is sent to a central party and then that party replies, all via EventAggregator. This leads to some side effects you'd likely want to avoid.
A variation on that I see a lot is communication from a parent view to a sub-view, or vice versa. Something like "TreeItemChecked" or "ListViewItemSelected". This is a situation where traditional .NET events should be used, but an author decided that if they have a hammer (EventAggregator), everything (events) looks like a nail.
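By way of contrast, a plain .NET event is usually all that such parent/child communication needs; the tree-item types in this sketch are hypothetical and just show the shape.

using System;

public class TreeItemViewModel
{
    // Plain .NET event: the parent already holds a direct reference to its
    // child, so there is no need to go through the EventAggregator.
    public event EventHandler<EventArgs> Checked;

    public void Check()
    {
        Checked?.Invoke(this, EventArgs.Empty);
    }
}

public class TreeViewModel
{
    public void Add(TreeItemViewModel item)
    {
        item.Checked += (sender, args) =>
        {
            // React to the child directly, e.g. update a selection count.
        };
    }
}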
You asked about modeling the EventAggregator and I would say this: the EventAggregator is only special in that it allows for decoupling and doesn't create strong references to events (avoiding memory leaks, etc). Other than that, it's really just a very slight variation of the Observer Pattern. However you are modeling Observers is how you would model the EventAggregator in whatever type of diagram you are trying to create.
As to your question about making sure some module or another is subscribed to an event: you don't. If you need to ensure there are subscribers, you should not use the EventAggregator. In these cases I would recommend a service running in your application that modules can grab from your container and use, or something similar.
The thing to keep in mind about your modules is that you should be able to completely remove one and the rest of your application functions normally. If this is not the case, you either have a module dependency (best to be avoided, but understandable), or dependent modules should be combined into one.