Detecting different types of Events using Drools Fusion?

I'm a bit new to Drools, although I have been reading the docs on Drools Fusion (Complex Event Processing), which state in Section 2.3:
"The streams can be provided to the application in various forms, from JMS queues to flat text files, from database tables to raw
sockets or even through web service calls.”
My questions are: could Fusion be used to detect the following types of events, and if so, could you supply a small example of what the syntax would look like?
Detecting events from RSS/Atom feeds (so, any time a new feed entry arrives, it can fire some rule);
It mentions "web service calls"; could Fusion perhaps be used to poll some web service to check for changes? For example, say a certain web service offers a GET for reading comments. I would want a kind of simulated PuSH, where if there is a new comment, I could treat it as an event.
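From my reading, I assume the polling itself happens outside the engine: a small poller (an RSS client, or an HTTP GET for the comments) would insert each new item into the session as an event. Here is a minimal sketch of what I have in mind, with a made-up FeedEntry event declared in DRL and the session running in STREAM mode (all names are my own invention):

    import org.kie.api.KieBase;
    import org.kie.api.conf.EventProcessingOption;
    import org.kie.api.definition.type.FactType;
    import org.kie.api.io.ResourceType;
    import org.kie.api.runtime.KieSession;
    import org.kie.api.runtime.rule.EntryPoint;
    import org.kie.internal.utils.KieHelper;

    public class FeedRuleSketch {

        // The event type and the rule live in DRL; FeedEntry is invented for this sketch.
        private static final String DRL =
            "package com.example.feeds\n" +
            "declare FeedEntry\n" +
            "    @role( event )\n" +
            "    title : String\n" +
            "end\n" +
            "rule \"New feed entry arrived\"\n" +
            "when\n" +
            "    $e : FeedEntry() from entry-point \"FeedStream\"\n" +
            "then\n" +
            "    System.out.println(\"New entry: \" + $e.getTitle());\n" +
            "end\n";

        public static void main(String[] args) throws Exception {
            KieBase kieBase = new KieHelper()
                    .addContent(DRL, ResourceType.DRL)
                    .build(EventProcessingOption.STREAM);   // Fusion needs STREAM mode
            KieSession session = kieBase.newKieSession();
            EntryPoint feedStream = session.getEntryPoint("FeedStream");

            new Thread(session::fireUntilHalt).start();     // keep the engine running

            // The poller (RSS client or comments GET) would do this for each new item:
            FactType feedEntry = kieBase.getFactType("com.example.feeds", "FeedEntry");
            Object entry = feedEntry.newInstance();
            feedEntry.set(entry, "title", "Hello, Fusion");
            feedStream.insert(entry);
        }
    }

Is that roughly the right approach, or does Fusion offer something more direct for feeds?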

Related

Design guides for Event Sourced microservices

I am thinking about the best way to structure microservices. In the past, the team I was working with used Axon Framework and PostgreSQL; each microservice had its own event store in the PostgreSQL database, and we built the communication between them using REST.
I am thinking that it would be smarter to have all microservices talk to the same event store, as we would be able to share events faster instead of rewriting the communication lines using REST.
The questions that follow from this backstory are:
What is the best practice for having an event store?
Would each service have its own? Would they share the same event store?
Where would I find information to inspire me and gather more answers? Searching the internet for best practices on how to structure an event store feels like searching for a needle in a haystack.
Bear in mind, the question is in no way aimed at Axon Framework, but at the general idea of building scalable, well-structured systems in which each application works with its own event store for the write model and read models.
Thank you for reading and I wish you all the best
-- Me
I'd add a slightly different notion to Tore's response, although the main line is identical to what I'm sharing here. I don't aim to overrule Tore, just to provide additional insight.
If the (micro)services belong to the same Bounded Context, then they're allowed to "learn about each other's language."
This language thus includes the events these applications publish and store.
Whenever there's communication required between different Bounded Contexts, you'd separate the stores, as one context shouldn't be bothered by the specifics of another context.
Hence it is beneficial to deduce which services belong to which Bounded Context, since that dictates the required separation.
Axon aims to support this by allowing multiple contexts with the Axon Server, as you can read here.
It simply allows the registration of applications to specific contexts, within which it will completely separate all message streams (so commands, events, and queries) and the Event Store.
You can also set this up from scratch yourself, of course. Tore's recommendation of Kafka is what's used quite broadly for Event Streaming needs between applications. Honestly, any broadcast type of infrastructure suits event distribution, as that's how events are typically propagated.
You want to have one event store per service, just as you would want one relational database per service in a non-event-sourced system.
Sharing a database/event store between services creates coupling, and we have all learned the hard way that this is an anti-pattern.
If you want to use an event log to share events across services, then Kafka is a popular choice.
It is important to remember that you only do event sourcing within a service's bounded context.
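As a minimal sketch of that split (topic name, key, and payload are invented here): each service appends events to its own store, and additionally publishes the ones other services care about to a Kafka topic. Keying by aggregate id keeps all events of one aggregate on the same partition, preserving their order for consumers.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class IntegrationEventPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // After the event is committed to the service's own event store,
                // publish a copy for other bounded contexts to consume.
                producer.send(new ProducerRecord<>("order-events",
                        "order-42", "{\"type\":\"OrderShipped\",\"orderId\":\"42\"}"));
            }
        }
    }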

Does EventStore require event sourcing?

Lately, I've come across the term 'EventStore'. I've read many articles, which state that an event store is used for storing events and supports querying them, but I can't find any usage of it outside the context of Event Sourcing. Is Event Sourcing a must when using an EventStore? If not, can you give me some context in which I can use one without Event Sourcing?
EventStoreDB and other databases & libraries out there are purpose-built for doing event sourcing (as in using events as the state of the system), so yes, in theory, you would use them when doing event sourcing, because they provide the fundamental structure & indexes needed to build event-sourced systems.
You can use them in other ways, with the same caveats you would have when using a relational database as a document store, for instance.
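Concretely, appending and reading a stream with the EventStoreDB Java client looks roughly like this (connection string, stream, and event names are placeholders). Nothing in the API forces the events to be your source of truth; they could just as well be an audit trail next to a conventional database:

    import com.eventstore.dbclient.EventData;
    import com.eventstore.dbclient.EventStoreDBClient;
    import com.eventstore.dbclient.EventStoreDBConnectionString;
    import com.eventstore.dbclient.ReadResult;
    import com.eventstore.dbclient.ReadStreamOptions;
    import com.eventstore.dbclient.ResolvedEvent;

    public class EsdbSketch {
        public static void main(String[] args) throws Exception {
            EventStoreDBClient client = EventStoreDBClient.create(
                    EventStoreDBConnectionString.parseOrThrow("esdb://localhost:2113?tls=false"));

            // Append one event; the payload is serialized to JSON for us.
            EventData event = EventData
                    .builderAsJson("comment-added", java.util.Map.of("text", "hello"))
                    .build();
            client.appendToStream("article-42", event).get();

            // Read the stream back, oldest event first.
            ReadResult result = client
                    .readStream("article-42", ReadStreamOptions.get().fromStart())
                    .get();
            for (ResolvedEvent re : result.getEvents()) {
                System.out.println(re.getOriginalEvent().getEventType());
            }
        }
    }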

Spring Cloud Stream multiple destination bindings

As described in the Spring Cloud documentation (https://cloud.spring.io/spring-cloud-stream/multi/multi__configuration_options.html), it is possible to bind a channel to multiple destinations.
However, it is not described how messages from each destination will be processed. Are they processed in parallel, round-robin, ...?
Well, round-robin doesn't even apply to your question, since load balancing implies multiple consumers on a single destination. You are simply asking about binding multiple destinations to a channel, which is nothing more than a bridge between an external destination and an internal one.
Now, if you have multiple listeners on the internal destination (such as a channel), then round-robin applies as the default load-balancing policy, but by that time the message has already been pushed down the stack to the spring-integration framework, which handles it. So you can read more on the different load-balancing policies if that is what you were asking about.
That said, you're also looking at rather old documentation. We are at 3.0.0.RELEASE now and promoting a different programming model which is much simpler. You can read our release announcement, which contains links to 4 different posts (in the Quick highlights section) providing more details.
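For instance, under the functional model a single handler can be bound to several destinations purely through configuration; a sketch (binding and destination names are invented):

    import java.util.function.Consumer;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;

    @SpringBootApplication
    public class SinkApplication {

        public static void main(String[] args) {
            SpringApplication.run(SinkApplication.class, args);
        }

        // Bound by convention to the binding name "sink-in-0".
        @Bean
        public Consumer<String> sink() {
            return payload -> System.out.println("received: " + payload);
        }
    }

    // application.properties: one input binding bridging two external destinations.
    //   spring.cloud.stream.function.definition=sink
    //   spring.cloud.stream.bindings.sink-in-0.destination=orders,payments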

Need defense against wacky challenge to Event Sourcing architecture w/CosmosDB

In the current plan, incoming commands are handled via Function Apps, resulting in events being sent to an Event Hub and then used to materialize the views.
Someone is arguing that instead of storing events in something like table storage, and materializing views based on events and snapshots, we should:
Just stream events to a log in Azure Monitor to have auditing;
Make changes to a domain object immediately in response to a command, and use the change feed as our source of events for materialized views.
He doesn't see the advantage of even having a materialized view. Why not just use a query? The argument is that we don't expect a lot of traffic.
He wants to fulfill the whole audit-log requirement by saving events to the Azure Monitor log, i.e. just an application log. Commands would instead directly modify the representation of an entity in Cosmos, and we'd use the change feed from CosmosDB as our domain-object events, or create new events off of it via subscribers to that stream.
Is this actually an advantageous approach? Can y'all think of any reasons why we wouldn't want to do that? It seems like we'd be losing something here.
He's saying we'd no longer need to be concerned with eventual consistency, as we'd have immediate consistency.
Every reference implementation I've evaluated does NOT do it the way he's suggesting. I'm not deeply versed in the advantages/disadvantages of the event sourcing/CQRS paradigm, so I'm at a loss at the moment. Currently researching furiously.
This is a conceptual issue, so there's not so much a code example. However, here are some references that seem to back up the approach I'm taking:
https://medium.com/@thomasweiss_io/planet-scale-event-sourcing-with-azure-cosmos-db-48a557757c8d
https://sajeetharan.com/2019/02/03/event-sourcing-with-azure-eventhub-and-cosmosdb/
https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
If your goal is only to have the audit log, state-based persistence could be a good choice. Event sourcing adds some complexity to the implementation side and unless you can identify more advantages of using it, you might not convince your team to bring this complexity to the system. There are numerous questions and answers on SO, as well as in some blog posts, about pros and cons of event sourcing, so I won't get into that discussion here.
I can warn you, though, that the second article in your list is very weak and would most probably lead you into many difficulties. The role of Event Hub there is completely unclear, and it doesn't explain anything about projections and read models (what you call "materialised views"). Only a very limited number of use cases can live with only getting one entity by id, without being able to execute a query across multiple entities. That also probably answers your concern about having read models at all. You will need them very soon, the first time you have to get a list of entities based on some condition (a query).
Using CosmosDb as the event store is completely feasible, as described in the first article, if you can manage the costs involved. Just remember to set the change feed TTL to -1; otherwise, you won't be able to replay your projections when you need to.
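As a rough sketch with the Cosmos Java SDK (account, container, and event shape are invented): events become plain documents, keyed by stream id plus version, so a duplicate append fails with a conflict instead of silently overwriting history.

    import com.azure.cosmos.CosmosClient;
    import com.azure.cosmos.CosmosClientBuilder;
    import com.azure.cosmos.CosmosContainer;

    public class CosmosEventStoreSketch {

        // Hypothetical event document; id must be unique, so streamId + version works.
        public static class EventDoc {
            public String id;
            public String streamId;   // also the partition key, keeps a stream together
            public long version;
            public String type;
            public String payload;
        }

        public static void main(String[] args) {
            CosmosClient client = new CosmosClientBuilder()
                    .endpoint("https://<account>.documents.azure.com:443/")
                    .key("<key>")
                    .buildClient();
            CosmosContainer events = client
                    .getDatabase("shop")
                    .getContainer("events");   // assumed partitioned by /streamId

            EventDoc e = new EventDoc();
            e.streamId = "order-42";
            e.version = 1;
            e.id = e.streamId + ":" + e.version;   // doubles as optimistic concurrency
            e.type = "OrderPlaced";
            e.payload = "{\"total\":100}";
            events.createItem(e);   // 409 Conflict if the id exists: append-only
        }
    }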
To summarise:
Keeping the audit log can be done without event sourcing, but you need to ensure that events are published reliably, preferably in the same transaction as the entity state update. This is often hard or impossible, but you might accept the risk if your audit requirement is not strict. You can also base your audit log on the CosmosDb change feed, just collecting document changes and logging them somewhere.
Event sourcing is a powerful technique but it has both pros and cons. The most common prejudice against using event sourcing is its implementation complexity. It might not be a big issue if you have a team that is somewhat experienced in building event-sourced systems. If you don't have such a team, you might want to build a small-scale spike to get some experience.
If you don't get full buy-in from the team to use event sourcing, you will later get all the blame if anything goes wrong. And it will go wrong at some point, especially with little experience in this area.
Spend some time reading books and trying out things yourself, before going wild in production.
Don't use Event Hub for anything it is not designed for. Event Hub is a powerful event-ingestion transport with a limited TTL, and it should be used for that purpose.
Don't use Table Storage as the event store unless you only read entities by id. I used it in production for such a scenario and it worked (to some extent), but you can't project read models from there.
A simple rule of thumb is to not use products for tasks they weren't designed for.
Azure Monitor was not designed to store application domain data. It is designed to store telemetry data from your applications and services, and it provides features such as alerts and other types of integration with DevOps tools for managing the operation and health of your apps.
There is a simple reason why you were able to find articles on event sourcing with Cosmos DB, and why our own docs talk about it: it was designed to be used this way. It is simple to set up Cosmos DB as an append-only event store for your applications and to use Change Feed to fire off messages to other apps or services or, in your case, to maintain a materialized-view state of domain objects within your app.
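For illustration, a minimal Change Feed consumer using the Java SDK's ChangeFeedProcessor, the kind of component that could keep such a materialized view up to date (database, container, and host names are invented; the lease container tracks the consumer's progress):

    import com.azure.cosmos.ChangeFeedProcessor;
    import com.azure.cosmos.ChangeFeedProcessorBuilder;
    import com.azure.cosmos.CosmosAsyncClient;
    import com.azure.cosmos.CosmosAsyncContainer;
    import com.azure.cosmos.CosmosClientBuilder;

    public class ViewProjector {
        public static void main(String[] args) {
            CosmosAsyncClient client = new CosmosClientBuilder()
                    .endpoint("https://<account>.documents.azure.com:443/")
                    .key("<key>")
                    .buildAsyncClient();
            CosmosAsyncContainer feed = client.getDatabase("shop").getContainer("events");
            CosmosAsyncContainer leases = client.getDatabase("shop").getContainer("leases");

            ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
                    .hostName("view-projector-1")   // any stable instance name
                    .feedContainer(feed)
                    .leaseContainer(leases)
                    .handleChanges(docs -> docs.forEach(doc ->
                            // apply each changed document to the materialized view here
                            System.out.println("projecting: " + doc)))
                    .buildChangeFeedProcessor();

            processor.start().block();   // starts background processing; stop() ends it
        }
    }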

What is the development environment for TIBCO Business Works?

I see all these job posts for TIBCO developers, but from tibco.com I couldn't really work out what a developer does code-wise on this platform, because the site is geared more towards end users. Is it a Java-based platform?
I'll assume that you are talking about TIBCO Business Works as this is where the majority of the development is done.
TIBCO Business Works is a Java-based platform; however, normally very little development is done in Java. At its heart, TIBCO Business Works is an XSLT processing engine with lots (and I mean lots) of connectivity components (called Starters and Activities in the TIBCO world).
Development is done graphically by linking the Starter to Activities and eventually to an End Activity, very much like a traditional process diagram. You can see what I mean in the top right of this screenshot:
Each of these diagrams is called a Process Definition, and the closest equivalent in Java is a method; however, they are more closely related to C functions, as there is no concept of a class for Process Definitions.
Looking closely, you'll notice that the StorePO Publish To Adapter Activity is selected. In the bottom right you can see that the input to this activity is "mapped" from other process data (which can be either the output from the Start or the output from other activities). This mapping is actually XSLT, just represented visually. So much so that copying the root node of the mapping ("body" in this case) into a text document pastes as XSLT (you can even edit it there and copy it back if you are so inclined; good for when you need to do a search and replace).
Looking back at the Process Definition, there is a CheckInventory Call Process Activity. This is how you invoke another Process Definition from the one you are working on. In fact, this Process Definition has a plain Start Activity, which indicates that it is invoked from another Process Definition.
Starter processes are Process Definitions that have a Process Starter instead of a Start Activity. The Process Starter triggers the invocation of the Process Definition based on some event. For instance, a JMS Queue Receiver Process Starter, will trigger when it receives a specific JMS message. There are many such Process Starters, including SOAP, HTTP, SMTP and even plain old TCP.
Likewise, there are many Activities, including the ones above as well as JDBC and FTP.
Without actually having access to TIBCO Designer, the best way to beef up your skills for a TIBCO role is to focus on XPath and XSLT as that's mostly what you'll be working with.
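To get a feel for what that means in practice, here is the kind of XSLT a visual mapping boils down to, runnable with Java's built-in transformer (the mapping itself is invented for illustration):

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class MappingSketch {
        public static void main(String[] args) throws Exception {
            // Invented mapping: copies an order id into a "body" element, the kind
            // of thing the input tab of an Activity generates for you.
            String xslt =
                "<xsl:stylesheet version=\"1.0\"" +
                "    xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">" +
                "  <xsl:template match=\"/\">" +
                "    <body><orderId><xsl:value-of select=\"/order/@id\"/></orderId></body>" +
                "  </xsl:template>" +
                "</xsl:stylesheet>";

            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xslt)));
            t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader("<order id=\"42\"/>")),
                        new StreamResult(out));
            System.out.println(out);   // <body><orderId>42</orderId></body>
        }
    }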
TIBCO AMX BusinessWorks is a Java platform used for integration and automation purposes. It uses a plug-in based architecture, which means that you can extend its functionality. The product has evolved from the 5.x version to the current 6.4.x version to include microservices capabilities, containerization, cloud enablement, etc.
It uses a model-driven development approach to reduce the amount of coding; that is why it is so powerful.
You can find more information on the official documentation site: Documentation TIBCO AMX BW
If you know Spanish and want to learn about the 5.x version, I have a set of video tutorials at TIBCO AMX BW Tutorials.
