Our use case is that multiple assets send different types of data (let's say status, temp, and data) to an MQTT broker. Because message brokers are very good at routing, handling topics, etc., the assets publish each type of data to a dedicated topic:
status messages > status topic > e.g. /asset/123/status
temp messages > temp topic > e.g. /asset/123/temp
data messages > data topic > e.g. /asset/123/data
Our question is how the subscriber should handle the different topics. We use the default Paho client via Spring Integration. In our minds, there are two possible solutions:
Solution 1
One Paho client subscribes to all the respective topics. The actual routing (which callback for which type of data) must then be done in the backend itself.
Solution 2
One Paho client for each topic. The actual routing is then done at the message broker, and no routing logic is needed in the backend anymore. Each client simply calls its callback, and the backend just focuses on its domain logic (not on the routing of topics).
Best Practice?
Now our question is: are there any best practices here? From our perspective, routing is the job of the message broker, because that is what it is designed for, so the routing logic should not live in the backend. That's good, because the backend can then concentrate on its own domain logic. But for this we would need n clients, with n being the number of different data types. The number of connections could explode once we have more and more message types and therefore more and more topics.
Are there any best practices, benchmarks, or (anti-)patterns covering this topic?
I've implemented Solution 2 more often than Solution 1 in similar cases. As you rightly point out, the backends can just focus on their singular use cases and not have to worry about the others... if that is really the case. If at some point down the road you need /status along with /data, that becomes a problem you would not have with Solution 1. Solution 2 takes more compute resources but is usually faster (depending on your MQTT broker and what your solution is running on). Solution 2 is also better for things like troubleshooting and bugs that cause crashes: a crash only affects the one domain, not all of them.
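For what it's worth, the Paho clients also offer a middle ground between the two solutions: a single client (one connection) can register a dedicated callback per topic filter, so the library does the dispatching and the domain code never inspects topic strings itself. The question uses the Java client via Spring Integration; the following is only a rough sketch of the same idea using the Eclipse Paho Go client, with an assumed local broker:

    package main

    import (
        "fmt"

        mqtt "github.com/eclipse/paho.mqtt.golang"
    )

    func main() {
        // Assumed broker address and client ID; adjust for your setup.
        opts := mqtt.NewClientOptions().
            AddBroker("tcp://localhost:1883").
            SetClientID("backend-1")
        client := mqtt.NewClient(opts)
        if token := client.Connect(); token.Wait() && token.Error() != nil {
            panic(token.Error())
        }

        // One client (one connection), but a dedicated handler per topic
        // filter: the library dispatches each message to the matching
        // callback, so domain code never switches on topic strings.
        client.Subscribe("/asset/+/status", 0, func(c mqtt.Client, m mqtt.Message) {
            fmt.Printf("status %s: %s\n", m.Topic(), m.Payload())
        })
        client.Subscribe("/asset/+/temp", 0, func(c mqtt.Client, m mqtt.Message) {
            fmt.Printf("temp %s: %s\n", m.Topic(), m.Payload())
        })
        client.Subscribe("/asset/+/data", 0, func(c mqtt.Client, m mqtt.Message) {
            fmt.Printf("data %s: %s\n", m.Topic(), m.Payload())
        })

        select {} // block forever; a real service would handle shutdown
    }

If I recall correctly, the Java client offers the same per-subscription callbacks (subscribe overloads taking an IMqttMessageListener), so Solution 1 need not imply a hand-written topic switch in the backend.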
I was interviewed today, and one of the interesting questions I came across from the interviewer was this: we know that channels are the medium through which goroutines can communicate with each other.
But what if there are no channels, or we do not want to use them? Is there any alternative way to send and read messages between goroutines in Go?
If yes, how?
Channels are neither the first nor the only communication tool between concurrent or parallel entities. There are numerous others.
As a sarcastic example, one goroutine may upload a file holding the message to Amazon S3, and the other goroutine can fetch that file. This is a communication.
A more efficient one would be to create and write the message to a local file which the other goroutine can read.
Another could be opening a server socket by one goroutine, and connecting to that socket from the other goroutine. And you have a full duplex "channel".
To stay on Earth, a much simpler and more efficient solution is to send messages via a shared variable, although access to it must of course be synchronized, using synchronization primitives such as those in the sync and sync/atomic packages.
A slice (of messages) may be shared by the two goroutines. When the first goroutine wants to send a message to the other, it may acquire a write lock (sync.RWMutex), append the message to the slice, then release the lock. The second goroutine may use a read lock to check whether the slice has messages (length > 0), then use a write lock to take a message from the slice (and delete it).
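A minimal runnable sketch of that last approach, with an invented mailbox type (the polling loop is exactly the kind of work channels would otherwise do for you):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // mailbox is the shared slice of messages guarded by a sync.RWMutex,
    // as described above.
    type mailbox struct {
        mu       sync.RWMutex
        messages []string
    }

    func (b *mailbox) send(msg string) {
        b.mu.Lock() // write lock to append
        defer b.mu.Unlock()
        b.messages = append(b.messages, msg)
    }

    func (b *mailbox) receive() (string, bool) {
        b.mu.RLock() // read lock just to check for messages
        empty := len(b.messages) == 0
        b.mu.RUnlock()
        if empty {
            return "", false
        }
        b.mu.Lock() // write lock to take (and delete) a message
        defer b.mu.Unlock()
        if len(b.messages) == 0 { // re-check: another reader may have raced us
            return "", false
        }
        msg := b.messages[0]
        b.messages = b.messages[1:]
        return msg, true
    }

    func main() {
        var box mailbox
        go func() {
            box.send("hello without channels")
        }()
        for {
            if msg, ok := box.receive(); ok {
                fmt.Println(msg)
                return
            }
            time.Sleep(10 * time.Millisecond) // poll; channels would avoid this
        }
    }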
What is event-driven programming, and does event-driven programming have anything to do with threading? I came to this question reading about servers and how they handle user requests and manage data. If a user sends a request, the server begins to process data and writes the state to a table. Why is that so? Does the server stop processing data for that user and start to process data for another user, or is processing for every user run in a different thread (a multithreaded server)?
Event driven programming != Threaded programming, but they can (and should) overlap.
Threaded programming is used when multiple actions need to be handled by a system "simultaneously." I use "simultaneously" loosely, as most OSes use a time-sharing model for threaded activity, at least when there are more threads than processors available. Either way, it's not germane to your question.
I would use threaded programming when I need an application to do two or more things - like receiving user input from a keyboard (thread 1) and running calculations based upon the received input (thread 2).
Event-driven programming is a little different, but in order for it to scale, it must utilize threaded programming. I could have a single thread that waits for an event/interrupt and then processes things on the event's occurrence. If it were truly single-threaded, any additional events coming in would be blocked or lost while the first event was being processed. If I had a multi-threaded event-processing model, then additional threads would be spun up as events came in. I'm glossing over the producer/worker mechanisms required, but again, not germane to the level of your question.
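As a rough illustration in Go (goroutines standing in for threads), a single dispatcher waits for events and hands each one to a fresh worker, so a slow handler does not block the events behind it:

    package main

    import (
        "fmt"
        "sync"
    )

    type event struct{ id int }

    func handle(e event) { fmt.Println("processed event", e.id) }

    func main() {
        events := make(chan event)
        var wg sync.WaitGroup

        // The "event loop": one goroutine waits for events and hands each
        // to a fresh worker, so a slow handler never blocks the loop.
        go func() {
            for e := range events {
                go func(e event) {
                    defer wg.Done()
                    handle(e)
                }(e)
            }
        }()

        for i := 0; i < 5; i++ {
            wg.Add(1) // count before sending so Wait can't return early
            events <- event{id: i}
        }
        close(events)
        wg.Wait()
    }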
Why does a server start processing/storing state information when an event is received? Well, because it was programmed to. :-) State handling may or may not be related to the event processing. State handling is a separate subject from event processing, just as events are different from threads.
That should answer all of the questions you raised. Jonny's first comment / point is worth heeding - being more specific about what you don't understand will get you better answers.
I'm refactoring a monolith into microservices. I am not clear on data responsibility and access with microservices. From what I've read, we should take vertical slices.
So each service should be responsible for its own UI/WebAPI/DB, with distinct responsibility.
For example if I had a monolith shopping cart app, I could break it into the following services:
CustomerAccount
ProductSearch
ProductMaintenance
ShoppingCart
Ordering
What do I do with shared data, how do I determine what part of the system is responsible for it?
e.g. In my shopping cart example...
CustomerAccount, ShoppingCart, and Ordering need to know about the customer data.
ProductSearch, ProductMaintenance, ShoppingCart, and Ordering need to know about the product data.
Ordering will update the number of products available, but so should ProductMaintenance.
So should the services send messages back and forth to get data from one another,
or should there be a master service, which handles the communication/workflow between services
or should they read/write from a common database
or something else?
This may be a little late to answer, but it may be good for future reference.
One microservice calling another microservice is totally fine; what you should be aware of is that if the communication between microservices becomes too chatty, you should look at a different solution (maybe duplicating data across services, or keeping it within the same service).
In your case, I would build a separate service for each entity that you consider common, and reevaluate the situation afterwards.
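To make the "send messages back and forth" option from the question concrete, here is a rough sketch (every name in it is invented for illustration) of the Ordering service announcing a stock change as an event rather than reaching into another service's database:

    package main

    import "fmt"

    // Hypothetical event emitted by the Ordering service whenever an order
    // changes stock; ProductMaintenance (or any other interested service)
    // subscribes and updates its own copy of the product data.
    type ProductQuantityChanged struct {
        ProductID string
        Delta     int // negative when an order consumes stock
    }

    // Bus abstracts whichever broker you pick (Kafka, RabbitMQ, ...);
    // the interface is invented here just to show the shape of the idea.
    type Bus interface {
        Publish(topic string, event interface{}) error
    }

    // placeOrder: Ordering owns the write to its own database, then
    // announces the change instead of reaching into another service's DB.
    func placeOrder(bus Bus, productID string, qty int) error {
        // ... persist the order in Ordering's own database here ...
        return bus.Publish("product.quantity.changed",
            ProductQuantityChanged{ProductID: productID, Delta: -qty})
    }

    func main() {
        fmt.Println("sketch only; wire Bus to a real broker client")
    }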
Hope this helps
Best regards,
Burim
I am currently building an app, and I would like to use microservices as the pattern and GraphQL for communication. I am thinking about using Kafka/RabbitMQ + AuthZ + Auth0 + Apollo + Prisma, all of it running on Docker.
I found many resources on event sourcing and its advantages/disadvantages, but I am stuck on how it works in the real world. So far, this is how I would do it:
Apollo Engine to monitor requests/responses
Auth0 for authentication management
AuthZ for authorization
A GraphQL gateway. Sadly I did not find a reliable solution; I guess I have to do it myself using Apollo + graphql-tools to merge schemas.
And ideally:
Prisma for the read side of the bill MS
Node.js for the write side of the bill MS
Now, if I understand correctly, using Apache Kafka + ZooKeeper:
Kafka as the message broker
ZooKeeper as an event store.
If I am right, can I assume the following?
There would be two ways to validate whether the request is valid:
The write side only gets events (from the event store, a.k.a. ZooKeeper) to validate whether the requested mutation is possible.
The write side gets a snapshot from a traditional database to validate the requested mutation.
It then publishes an event to Kafka (I assume Kafka updates ZooKeeper automatically), and the message can then be used by the read side to update a private snapshot of the entity. Of course, this message can also be used by other MSs.
I do not know Apache Kafka + ZooKeeper very well; in the past I have only used messaging services such as RabbitMQ. They seem similar in shape but very different in usage.
Is the main difference between event sourcing and basic messaging the use of the event store instead of an entity snapshot? In that case, can we assume that not all MSs need an event-store tactic (I mean, validating via the event store and not via a "private" database)? If yes, can anyone explain when you need an event store and when you don't?
I'll try to answer your major concerns at a conceptual level without getting tied up with the specifics of frameworks and implementations. Hope this will help.
There would be two ways to validate whether the request is valid:
The write side only gets events (from the event store, a.k.a. ZooKeeper) to validate whether the requested mutation is possible.
The write side gets a snapshot from a traditional database to validate the requested mutation.
I'd go by the first option. To execute a command, you should rely on the current event stream as the authority to determine your model's current state.
The read model of your architecture is only eventually consistent, which means there is an arbitrary delay between a command happening and its being reflected in the read model. Although you can work on your architecture to keep this delay as small as possible (even ignoring the costs of doing so), you will always have a window where your read model is not yet up to date.
That being said, your commands should be run against your command model, based on your current event store.
Is the main difference between event sourcing and basic messaging the use of the event store instead of an entity snapshot? In that case, can we assume that not all MSs need an event-store tactic (I mean, validating via the event store and not via a "private" database)? If yes, can anyone explain when you need an event store and when you don't?
The whole concept of Event Sourcing is this: instead of storing your state as an "updatable" piece of data which only reflects its latest stage, you store your state as a series of actions (events) that can be interpreted to reach that state.
So, imagine you have a piece of your domain which reads (in a free-form notation):
Entity A = { Id: 1; Name: "Something"; }
And something happens and a command arrives to change the name of that entity to "Other Thing".
In a traditional storage, you would reach for such record and update it to:
{ Id: 1; Name: "Other Thing"; }
But in an event-sourced storage, you wouldn't have such a record; you would have an event stream, with data such as:
{Entity Created with Id = 1} > {Entity with Id = 1 renamed to "Something"} > {Entity with Id = 1 renamed to "Other Thing"}
Now if you "replay" these events in order, you will reach the same state as with the traditional storage; only you will "know" how you got to that state, while the traditional storage forgets that history.
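Here is a minimal sketch of that replay idea in Go; the event types are invented to mirror the stream above:

    package main

    import "fmt"

    // State is rebuilt by replaying events, never stored as a single
    // updatable record.
    type Event interface{ isEvent() }

    type EntityCreated struct{ ID int }
    type EntityRenamed struct {
        ID   int
        Name string
    }

    func (EntityCreated) isEvent() {}
    func (EntityRenamed) isEvent() {}

    type Entity struct {
        ID   int
        Name string
    }

    // replay folds the event stream into the current state.
    func replay(events []Event) Entity {
        var e Entity
        for _, ev := range events {
            switch ev := ev.(type) {
            case EntityCreated:
                e.ID = ev.ID
            case EntityRenamed:
                e.Name = ev.Name
            }
        }
        return e
    }

    func main() {
        stream := []Event{
            EntityCreated{ID: 1},
            EntityRenamed{ID: 1, Name: "Something"},
            EntityRenamed{ID: 1, Name: "Other Thing"},
        }
        fmt.Printf("%+v\n", replay(stream)) // {ID:1 Name:Other Thing}
    }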
Now, to answer your question: you're absolutely right. Not all microservices should use an event store, and that's not even recommended. In fact, in a microservices architecture each microservice should have its own persistence mechanism (often each a different technology), and no microservice should have direct access to another's persistence (as your diagram implies with "Another MS" reaching into the "Event Store" of your "bill MS").
So, the basic decision factors for you should be:
Is your microservice one where you gain more from actively storing the evolution of state inside the domain (rather than reactively logging it)?
Is your microservice's domain one where you are interested in analyzing old computations? (That is, being able to restore the domain to a given point in time so you can understand its state's evolution pattern; think of something such as complex auditing, where you want to understand past computations.)
Even if you answer "yes" to both of these questions: will the added complexity of such an architecture be worth it?
Just as a closing remark on this topic, note there are multiple patterns intertwined in your model:
Event Sourcing is just the act of storing state as a series of actions instead of as an updatable central data hub.
The pattern that deals with having a read model vs. a command model is called CQRS (Command-Query Responsibility Segregation).
These two patterns are frequently used together because they match up so nicely, but that is not a prerequisite. You can store your data as events and not use CQRS to split into two models, AND you can organize your domain into two models (commands and queries) without storing any of them primarily as events.
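And to make the CQRS side concrete, a small sketch (again with invented names) of a command model and a read model kept separate and synchronized by an event:

    package main

    import "fmt"

    // Command model: expresses intent to change state on the write side.
    type RenameEntity struct {
        ID   int
        Name string
    }

    // Event produced by the write side once the command is accepted.
    type EntityRenamed struct {
        ID   int
        Name string
    }

    // Read model: shaped for querying, possibly denormalized.
    type EntityView struct {
        ID          int
        DisplayName string
    }

    // handleCommand validates the command against the command model and
    // returns the resulting event.
    func handleCommand(cmd RenameEntity) EntityRenamed {
        // ... validation against current state would go here ...
        return EntityRenamed{ID: cmd.ID, Name: cmd.Name}
    }

    // project updates the read model from the event, typically
    // asynchronously (hence the eventual consistency mentioned above).
    func project(view *EntityView, ev EntityRenamed) {
        view.DisplayName = ev.Name
    }

    func main() {
        view := EntityView{ID: 1, DisplayName: "Something"}
        project(&view, handleCommand(RenameEntity{ID: 1, Name: "Other Thing"}))
        fmt.Printf("%+v\n", view) // {ID:1 DisplayName:Other Thing}
    }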
I am the only person working on the project, so I am a developer without a PM above me. I finished the portal; however, the client from time to time bombards me with requests such as "make the font bigger", change a margin in the CSS, or make a button which does "xxx and yyy".
These are simple tasks, sometimes only a few clicks, but they take my time and I hate doing them. On the other hand, I understand these people, since sometimes a small fix helps them a lot in their work. They send their requests over instant messengers, and it's hard to ignore them there. Is disabling the messengers the best solution? But I need them to communicate with my co-workers.
What do you do in such situations?
Create an established queue where your users can submit requests, in a manner that doesn't disrupt your day-to-day workflow.
From the sounds of this, you are getting requests via a communication channel that you check regularly; you might try to move that channel off to the side.
Cutting off communication is NEVER a good solution. Also, I would formalize a process and time schedule for when you get to those types of requests. I've found great success with this simple approach.
If you're working for yourself, your clients are the single most important reason you're there. They are your business! Thus, it's always good practice to keep them happy.
That being said...
You should always, always, always have a clearly defined contract when working on any sort of software project for a client. You need to ensure that your deliverables are clearly expressed and defined, both to you and to your customer. Once you've got that taken care of, you also need to ensure that there is a section that covers "future maintenance requests", and you can then work with your client to ensure expectations are acceptable on both ends and that your time spent on such requests is both accounted for and part of the original plan moving forward.
The fewer open ends, the better.
Afterwards, implementing a system to manage/handle customer requests for each of the projects/websites you've delivered can also be a great help. Tools like FogBugz, from one of this site's founders, do a great job of handling customer interaction and bug/feature requests. Check it out.
Although the small things are not technical "bugs", usability is the most important issue to the client. If you want to continue doing business with the client, the small things need to be worked on.
fixing small bugs == client happiness == more work == more $$
Deploy a system for tracking bugs and tracking change requests (at my office we use MKS, which is also used for source integrity). Then when a user has a request, they go into the tracking system and enter the request as the appropriate type. Ideally they should also be able to attach a severity/priority indicator to it so that the outstanding requests can be ranked. You can then go in and see all outstanding requests, and prioritize them. Since they are not being directly sent to you, you won't feel inundated with requests, and the users will find that they can track the status of their requests more easily than by calling you and asking "when will my fix be done?"
For yourself, you can check the list a few times a day and see if there are any high-priority issues to work on. Then schedule some time on a regular basis (one day a week, or an hour a day, whatever feels reasonable) to work on the lower-priority issues.
I think you have to consider your ongoing relationship with your customer. If a customer spends a few minutes of your time occasionally you may consider that the cost to you is minimal and the benefits of the contact may outweigh the cost anyway.
If the requests are coming in thick and fast, you may need to talk to your customer about an hourly rate for changes, or cover them in a chargeable support contract.
Do not change your path on each feature request that you get. Collect feature requests for a while, then prioritize the requests, then select the ones that make sense, and then work on the next release.
In my opinion it is good to follow some fixed release schedule: it makes the development process more controllable, improves software quality, and your customers know what to expect.