ActiveMQ create hierarchical topics with wildcards - jms

I have read in the ActiveMQ documentation that subtopics can be created by using wildcards. So for instance I could create the topics:
physicalEnvironment.Conditions
physicalEnvironment.Infrastructure
physicalEnvironment.Location
I could then subscribe to any one of the topics, or to all of them (physicalEnvironment.>).
But how does this work for more complex structures, like this:
Would the topic for Flickering be called:
physicalEnvironment.Conditions.Light.Flickering
And could I still make a precise selection, like subscribing only to topics concerned with light:
physicalEnvironment.Conditions.Light.>
So basically I am asking if there is a limit on the nesting depth of subtopics, and if there is perhaps an easier way to create hierarchical topic structures.

In my 10+ years of messaging, every hierarchical topic structure ends up being replaced, because the taxonomy never works out. Your overall message pattern suggests a moderate total volume, so I suggest a flexible event model where you use message fields, rather than topic names, to capture the variance, e.g. eventType="Environmental", sensorType="Light". This allows you to add new types later and gives you the option of filtering what clients do and do not want without having to change anything on the broker.
Another option is to use JMS headers (message properties) to carry the same information. This would allow you to use selectors to do broker-side filtering.
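A minimal sketch of what that could look like with the JMS API against ActiveMQ; the topic name, payload, and the eventType/sensorType properties are just the illustrative fields from above, not anything prescribed:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class EnvironmentEvents {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // One flat topic; the variance lives in message properties, not in the topic name.
        Topic topic = session.createTopic("environment.events");

        // Producer side: tag each message with the fields that describe it.
        MessageProducer producer = session.createProducer(topic);
        TextMessage message = session.createTextMessage("{\"lux\": 312, \"state\": \"FLICKERING\"}");
        message.setStringProperty("eventType", "Environmental");
        message.setStringProperty("sensorType", "Light");
        producer.send(message);

        // Consumer side: a selector does the broker-side filtering, e.g. "only light-related events".
        MessageConsumer lightConsumer =
                session.createConsumer(topic, "eventType = 'Environmental' AND sensorType = 'Light'");
        Message received = lightConsumer.receive(1000);

        connection.close();
    }
}
```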

Related

Event driven microservice - how to init old data?

I already have microservices running and would like to add eventing (Kafka).
For example, I have a customer service with 10000 customers in the DB. I will be adding an event to the customer service so that whenever a new user is created, it publishes an event which will be consumed by consumers (like recommendation-service, statistics-service, etc.).
I think the above is clear to me. However, I am not sure how to handle the already-registered customers (10000 customers), as the event will only be triggered when a NEW customer registers.
I can 'hack' the service to sync the data manually, but what do most people do in this case?
Thank you
I tried to search for this topic but couldn't find what I was looking for.
There are basically two strategies that you can follow here. The first is a bulk load of fake "new customer" events into the Kafka topic, as you also suggested. The second approach would be to use the change data capture (CDC) pattern, where there is an initial snapshot of all the observed data and then constant streaming of new data change events, directly from the database's internal log (WAL).
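A rough sketch of the first (bulk load) strategy, assuming a hypothetical customer table and a CustomerCreated-style JSON payload on a topic named customer-events; all names here are illustrative:

```java
import java.sql.*;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CustomerBackfill {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/app", "app", "secret");
             Statement stmt = db.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name, email FROM customer")) {

            // Replay every existing customer as if it had just been created.
            while (rs.next()) {
                String key = rs.getString("id");
                String event = String.format(
                        "{\"type\":\"CustomerCreated\",\"id\":\"%s\",\"name\":\"%s\",\"email\":\"%s\"}",
                        key, rs.getString("name"), rs.getString("email"));
                producer.send(new ProducerRecord<>("customer-events", key, event));
            }
            producer.flush();
        }
    }
}
```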
To handle your entire use case, you could use a tool like the Debezium source connector for the Kafka Connect platform, but note that you will also need to map its change events into your message format. There are plugins to do that with a configuration-driven approach, but you can also create your own logic using single message transformations (SMTs).

Composite queue in ActiveMQ in front of a topic

I use ActiveMQ's composite queue in front of physical queues because of the ability to set permissions differently on the producer and consumer side. And this works as designed.
I also want to use a composite queue in front of topics. In this way I can use the same permission mechanism as with the above-mentioned queuing concept.
Is there a disadvantage to using a composite queue in front of a topic, for example a potential decrease in performance? Are there other disadvantages which I have to take into account when working with constructs like composite queue -> topic?
The performance impact would be negligible for most workloads. Anything short of hundreds of client connections and hundreds of millions of messages per day is usually a blip for modern hardware and ActiveMQ.
This sounds like a policy along the lines of an 'alias naming' for destinations. This pattern exists in other products, and is definitely a valid use case for Composite Destinations in ActiveMQ-- you are well within the lines of intended use for that feature.
Disadvantage-wise, nothing jumps out; you should be good.
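For reference, a composite queue forwarding to a topic is configured in the broker's activemq.xml roughly like this (a sketch; the destination names are placeholders):

```xml
<destinationInterceptors>
  <virtualDestinationInterceptor>
    <virtualDestinations>
      <!-- Producers write to the queue name; the broker fans the message out to the topic. -->
      <compositeQueue name="APP.EVENTS.IN" forwardOnly="true">
        <forwardTo>
          <topic physicalName="APP.EVENTS" />
        </forwardTo>
      </compositeQueue>
    </virtualDestinations>
  </virtualDestinationInterceptor>
</destinationInterceptors>
```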

Event sourcing, hold read side consistent

I'm new to ES and just trying to sort everything out in my head. I have heard that ES actually solves the consistency issue between the write and read databases (with some delay, for sure). But I still do not fully understand how.
If a command comes to the domain and the aggregate root fires an event to update the event store, is the same event sent to update the read side? But what if the message is lost? Then we will have an outdated read side.
Are projections the only solution? So instead of updating from the event, the read side walks through the event store and reproduces the aggregate (from the beginning or from some snapshot). But in that case it probably breaks some rules, as the read side should be simple and should not know about the domain. Also, the read side is usually a separate application, so it can't know about the aggregate.
For sure we can also use RabbitMQ or some other message broker so that messages aren't lost, and I actually think we need to. But I also read that to make it consistent "you can use Rabbit or ES", and again, how can ES make it consistent on its own?
Benjamin is completely right about the purpose of Event Sourcing.
My answer aims to add some more details.
First:
Read models and projections aren't supposed to represent the aggregate state.
Projections are the way for event-sourced systems to build the read model for CQRS. CQRS in essence postulates that write and read models usually serve different purposes and therefore it makes perfect sense to use another model for the read side.
Therefore, you often find multiple projections building different, narrowly purposed models, targeting specific needs for queries.
Second:
By "solving consistency issues" you probably mean that in event-sourced systems each state transition is represented as an event (or multiple events). Therefore, writes are always transactional. The database you choose as your event store should support (could using some library or additional tool) real-time subscription that would allow you to receive new events in your projection, in order. For new projections, it will start reading from the start and eventually come real-time. Subscriptions usually need to keep the current processing position in the global stream of events so when the projection restarts, it starts receiving events from the point which is last known to it.
By doing this, you will guarantee that every state transition in the write model will be reflected in the read model. This is probably what you mean in your original question.
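A sketch of that checkpointed, catch-up subscription idea; the EventStoreClient and CheckpointStore types here are hypothetical placeholders, not any particular library's API:

```java
import java.util.List;

// Hypothetical abstractions over whatever event store and checkpoint storage you use.
interface EventStoreClient {
    List<RecordedEvent> readAll(long fromPosition, int batchSize);
}
interface CheckpointStore {
    long load();            // last processed global position; 0 for a brand-new projection
    void save(long position);
}
record RecordedEvent(long globalPosition, String type, String payload) {}

class CustomerListProjection {
    private final EventStoreClient store;
    private final CheckpointStore checkpoint;

    CustomerListProjection(EventStoreClient store, CheckpointStore checkpoint) {
        this.store = store;
        this.checkpoint = checkpoint;
    }

    // Catch-up subscription: start from the last known position and keep polling.
    void run() throws InterruptedException {
        long position = checkpoint.load();
        while (true) {
            List<RecordedEvent> batch = store.readAll(position, 100);
            for (RecordedEvent event : batch) {
                apply(event);                        // update the read model
                position = event.globalPosition();
                checkpoint.save(position);           // a restart resumes here instead of starting over
            }
            if (batch.isEmpty()) Thread.sleep(200);  // caught up; wait for new events
        }
    }

    private void apply(RecordedEvent event) {
        // e.g. update a "customer list" table or document from the event payload
    }
}
```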
Third:
Now, all those things above imply that you cannot use a message bus (only) to deliver events to projections. Brokers give no ordering guarantees and can deliver one message more than once. Also, message brokers don't keep history so you cannot build new projections at will.
However, it doesn't mean that you can't use brokers at all. Some projections don't require ordering and are idempotent. And since the feed of events you publish via the broker comes from that same subscription, you still get guaranteed delivery and can read past events if necessary.
Fourth:
CQRS doesn't imply separate databases. Sometimes, using CQRS just means that you use some persistence layer for your domain objects, so you read and write aggregates. But for queries, you just query at will, whatever you want. A database view is a technical example of CQRS.
Almost there:
It is true that projections need to have little to no logic. The main point here is to ensure idempotency, if possible, so projections usually should not calculate new values based on old values plus information from events.
But projections will know about your domain. Everything in your system should know about your domain.
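To illustrate the idempotency point: rather than incrementing a counter or a balance per event (which double-counts on redelivery), let the projection set values taken directly from the event. A tiny example with invented types:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical event and read model.
record PriceChanged(String productId, double newPrice) {}

class ProductPriceProjection {
    private final Map<String, Double> prices = new ConcurrentHashMap<>();

    // Idempotent: applying the same event twice leaves the read model in the same state.
    void on(PriceChanged event) {
        prices.put(event.productId(), event.newPrice());
    }

    // A non-idempotent alternative such as prices.merge(id, delta, Double::sum)
    // would produce a different result if the event were delivered twice.
}
```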
And last:
You can definitely use different databases for write and read models without getting to Event Sourcing. You just need to choose a database that supports a change feed. SQL Server, Postgres, CosmosDb and other databases have such functionality.
P.S. I'd suggest spending some time studying those concepts. I can point to the book repository, it has CQRS and Event Sourcing examples: https://github.com/PacktPublishing/Hands-On-Domain-Driven-Design-with-.NET-Core
I have heard that ES actually solves the consistency issue between the write and read databases
To the best of my knowledge, Event Sourcing has NOTHING to do with consistency between reads and writes to your DB. Read/write consistency actually has more to do with the type of database you are using, such as relational databases, which are mostly ACID, versus non-relational databases, which often offer only eventual consistency.
ES is not meant for that; instead, ES is to "capture all changes to an application state as a sequence of events" (Martin Fowler).
ES works like a time machine: it allows you to rebuild the state of your application as of a specific date and time in the past.
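That "time machine" is really just replaying the stored events up to the desired point in time. A minimal sketch with invented types:

```java
import java.time.Instant;
import java.util.List;

record Event(Instant occurredAt, String type, String payload) {}

class AccountState {
    double balance;

    // Rebuild state as of a given moment by replaying only the events recorded before it.
    static AccountState asOf(List<Event> history, Instant pointInTime) {
        AccountState state = new AccountState();
        for (Event event : history) {
            if (event.occurredAt().isAfter(pointInTime)) break; // events are stored in order
            state.apply(event);
        }
        return state;
    }

    private void apply(Event event) {
        // e.g. switch on event.type() and adjust the balance accordingly
    }
}
```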

Is Event sourcing using Database CDC considered good architecture?

When we talk about sourcing events, we have a simple dual-write architecture where we write to the database and then write the events to a queue like Kafka. Other downstream systems can read those events and act on/use them accordingly.
But the problem occurs when trying to keep both the DB and the events in sync, as the ordering of these events is required to make sense of them.
To solve this problem people encourage using database commit logs as a source of events, and there are tools built around it like Airbnb's SpinalTap, Red Hat's Debezium, Oracle's GoldenGate, etc. It solves the problems of consistency, ordering guarantees and so on.
But the problem with using the database commit log as an event source is that we are tightly coupled to the DB schema. The DB schema of a microservice is exposed, and any breaking change in the DB schema, like a datatype change or a column rename, can actually break the downstream systems.
So is using the DB CDC as an event source a good idea?
A talk on this problem and using Debezium for event sourcing
Extending Constantin's answer:
TLDR;
Transaction log tailing/mining should be hidden from others.
It is not strictly an event stream, as you should not access it directly from other services. It is generally used when gradually transitioning a legacy system to a microservices-based architecture. The flow could look like this:
Service A commits a transaction to the DB
A framework or service polls the commit log and maps new commits to Kafka as events
Service B is subscribed to a Kafka stream and consumes events from there, not from the DB
Longer story:
Service B doesn't see that your event originated in the DB, nor does it access the DB directly. The commit data should be projected into an event. If you change the DB, you should only modify your projection rule to map commits in the new schema to the "old" event format, so consumers do not have to be changed. (I am not familiar with Debezium, or whether it can do this projection.)
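A sketch of such a projection rule, assuming a CDC-style change record exposed as before/after field maps; the event class and column names are made up for illustration:

```java
import java.util.Map;

// Stable, published event format that consumers depend on.
record CustomerCreated(String customerId, String email) {}

class CustomerChangeProjection {
    // Maps a raw change record (row before/after) to the published event.
    // If the table schema changes, only this mapping changes; consumers keep seeing the old format.
    CustomerCreated toEvent(Map<String, Object> before, Map<String, Object> after) {
        if (before == null && after != null) { // an INSERT into the customer table
            return new CustomerCreated(
                    String.valueOf(after.get("id")),
                    String.valueOf(after.get("email")));
        }
        return null; // updates/deletes would map to other event types
    }
}
```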
Your event handling should be idempotent, as publishing an event and committing a transaction atomically is a problem in a distributed scenario; tools will guarantee at-least-once delivery with exactly-once processing semantics at best, and the exactly-once part is the rarer one. This is because the event origin (the transaction log) is not the same as the stream that will be accessed by other services, i.e. it is distributed. And this is still only the producer side; the same problem exists on the Kafka -> consumer channel, but for a different reason. Also, Kafka will not behave like an event store, so what you have achieved is a message queue.
I recommend using a dedicated event-store instead if possible, like Greg Young's: https://eventstore.org/. This solves the problem by integrating an event-store and message-broker into a single solution. By storing an event (in JSON) to a stream, you also "publish" it, as consumers are subscribed to this stream. If you want to further decouple the services, you can write projections that map events from one stream to another stream. Your event consuming should be idempotent with this too, but you get an event store that is partitioned by aggregates and is pretty fast to read.
If you want to store the data in the SQL DB too, then listen to these events and insert/update the tables based on them; just do not use your SQL DB as your event store, because it will be hard to implement correctly (failure-proof).
For the ordering part: reading events from one stream will be ordered. Projections that aggregate multiple event streams can only guarantee ordering between events originating from the same stream. That is usually more than enough. (By the way, you could reorder the messages based on some field on the consumer side if necessary.)
If you are using Event sourcing:
Then the coupling should not exist. The Event store is generic; it doesn't care about the internal state of your Aggregates. At worst you are coupled to the internal structure of the Event store itself, but this is not specific to a particular Microservice.
If you are not using Event sourcing:
In this case there is a coupling between the internal structure of the Aggregates and the CDC component (the one that captures the data change and publishes the event to a message queue or similar). In order to limit the effects of this coupling to the Microservice itself, the CDC component should be part of it. In this way, when the internal structure of the Aggregates in the Microservice changes, the CDC component is also changed and the outside world doesn't notice. Both changes are deployed at the same time.
So is using the DB CDC as an event source a good idea?
"Is it a good idea?" is a question that is going to depend on your context, the costs and benefits of the different trade offs that you need to make.
That said, it's not an idea that is consistent with the heritage of event sourcing as I learned it.
Event sourcing - the idea that our book of record is a ledger of state changes - has been around a long long time. After all, when we talk about "ledger", we are in fact alluding to those documents written centuries ago that kept track of commerce.
But a lot of the discussion of event sourcing in software is heavily influenced by domain driven design; DDD advocates (among other things) aligning your code concepts with the concepts in the domain you are modeling.
So here's the problem: unless you are in some extreme edge case, your database is probably some general purpose application that you are customizing/configuring to meet your needs. Change data capture is going to be limited by the fact that it is implemented using general purpose mechanisms. So the events that are produced are going to look like general purpose patch documents (here's the diff between before and after).
But if we are trying to align our events with our domain concepts (i.e., what does this change to our persisted state mean), then patch documents are a step in the wrong direction.
For example, our domain might have multiple "events" that make changes to the same, or very similar, sets of fields in our model. Trying to rediscover the motivation for a change by reverse engineering the diff is kind of a dumb problem to have; especially when we have already fought with the same sort of problem learning user interface design.
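To make that concrete, compare a generic patch-style change with intent-revealing domain events; the types below are invented purely for illustration:

```java
import java.util.Map;

// What CDC typically gives you: a diff with no motivation attached.
record CustomerRowChanged(Map<String, Object> before, Map<String, Object> after) {}

// What the domain actually meant: two different intents that may touch the very same columns.
record CustomerEmailCorrected(String customerId, String newEmail) {}
record CustomerContactTransferred(String customerId, String newEmail, String successorName) {}
```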
In some domains, a general purpose change is good enough. In some contexts, a general purpose change is good enough for now. Horses for courses.
But it's not really the sort of implementation that the "event sourcing" community is talking about.
Besides the CDC-component side that Constantin Galbenu mentioned, you can also handle this on the event-storage side, for example with the Kafka Streams API.
What is the Kafka Streams API? It reads input from one or more topics and generates output to one or more topics, effectively transforming the input streams into output streams.
Once you have transformed the detailed (table-level) data into abstract (domain-level) data, your DB schema is bound only to that transformation, which removes the tight coupling between the DB and the subscribers.
If your data schema needs to change a lot, maybe you should add a new topic for it.
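A minimal Kafka Streams sketch of that idea, turning a raw CDC topic into an abstracted event topic; the topic names and the mapping function are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class CdcToDomainEvents {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Read the raw, table-shaped change events and republish them in a stable domain format.
        builder.<String, String>stream("db.public.customer")
               .mapValues(CdcToDomainEvents::toDomainEvent)
               .to("customer-events");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cdc-to-domain-events");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }

    private static String toDomainEvent(String rawChange) {
        // Placeholder: extract only the fields you want to expose and build the abstract event here.
        return rawChange;
    }
}
```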

How do I write message queue handling in an object-oriented way?

If you had to write code that takes messages from a message queue and updates a table in a database, how would you structure it in a good OO way? The messages are XML data, one node per row in the table. The rows in the table could be updated, deleted or inserted.
I don't believe you've provided enough information for a good answer. What do the messages look like? Do they vary in contents/type, or are they all just "messages"? Do they interact with each other, or is this just a data format conversion? One of the keys to OO development is to realize that the "find the nouns-n-verbs" game (which is as much as you've described) rarely leads to the best solution. It certainly won't be the worst, but you'll end up with data aggregation and a bunch of procedural code.
Procedural code isn't bad, though. Why does it need to be OO? Does the problem itself require polymorphism and data hiding? Is there any complex behavior that you are trying to model? There's no shame in using a non-OO solution, when the problem is simple.
Normally with OO implementations of message queues you write the classes that represent the individual types of messages yourself. To the extent that the different message types you expect to receive are derivatives of each other, this provides your class hierarchy for the messages.
With configuration-based persistence frameworks you can just set up persistence for these classes directly.
Then there's one or more classes that listen to the message queue and just persist the messages, probably just one. It doesn't have to be more elaborate than that.
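A bare-bones sketch of that shape; the class and method names are invented for illustration:

```java
// Message type hierarchy: shared fields in the base class, specifics in the derivatives.
abstract class RowMessage {
    String rowKey;
}
class RowInserted extends RowMessage { String xmlPayload; }
class RowUpdated  extends RowMessage { String xmlPayload; }
class RowDeleted  extends RowMessage { }

// Persistence abstraction; a configuration-based framework could stand in for this.
interface RowRepository {
    void insert(String key, String xml);
    void update(String key, String xml);
    void delete(String key);
}

// The one class that listens to the queue and simply persists what it receives.
class RowMessageListener {
    private final RowRepository repository;

    RowMessageListener(RowRepository repository) { this.repository = repository; }

    void onMessage(RowMessage message) {
        if (message instanceof RowInserted inserted) repository.insert(inserted.rowKey, inserted.xmlPayload);
        else if (message instanceof RowUpdated updated) repository.update(updated.rowKey, updated.xmlPayload);
        else if (message instanceof RowDeleted deleted) repository.delete(deleted.rowKey);
    }
}
```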
The best way of building OO code when doing messaging or dealing with any kind of middleware is to hide the middleware APIs from your code and just deal with business logic.
e.g. see these examples
POJO Consuming which is pretty much the use case you describe and
POJO Producing if ever you need to send messages to a message queue.
Then you just need to define what your Data Transfer Objects look like; how you want to encode things on the wire in XML / JSON / whatever.
The great thing about this approach is your code is now totally middleware agnostic - you could swap out your message queue and use a database or JavaSpace or in-memory SEDA or files or any other communication protocol or middleware API.
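One way to picture "hide the middleware APIs": the business code implements a plain interface, and a thin adapter owns the JMS specifics, so the middleware could be swapped without touching the business logic. All names here are illustrative:

```java
import javax.jms.*;

// The business code only sees this; no middleware types leak in.
interface OrderHandler {
    void handle(String orderXml);
}

// Thin adapter that owns the JMS details and delegates to the POJO.
class JmsOrderAdapter implements MessageListener {
    private final OrderHandler handler;

    JmsOrderAdapter(OrderHandler handler) { this.handler = handler; }

    @Override
    public void onMessage(Message message) {
        try {
            handler.handle(((TextMessage) message).getText());
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
```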
