Java design for periodic update - model-view-controller

I am using a WebSocket-based MVC architecture. Its MVC life cycle is similar to Spring and Struts, and now I have to support periodic updates for my views.
Each controller is responsible for one view present in the client
The controller has to update the view at a periodic interval
I am not maintaining any session for each client, so the controller has to push the update to all clients.
I don't want to create a thread inside each controller, because we have many controllers, one for each view.
So I want to create a separate module that handles this periodic update through the controllers (by calling controller methods).
Finally, I want to keep the client just for display, so I don't want to keep a timer on the client side.
I want to follow a standard way to design this; please help me apply standard design patterns (like MVC) here.

Two cents from a Spring/Struts programmer to a WebSocket programmer.
1. Each controller is responsible for one view present in the client
[Good design; hope you are not over-engineering.]
Points 2 and 3:
Approach 1 - Since there is no session, ask your UI/view to send a request to the controller for an update after a specific interval (say 2 minutes) or after a specific event (say a page refresh/reload), whichever suits your requirement.
Cons of the approach - Too-frequent updates from the UI/view-side code can bring your application to its knees, since the client code can be changed and used to attack the application.
To avoid this you will need to ensure that each request is authentic, complete in itself, and can be executed independently.
Pros of the approach - When it comes to scaling the application, this approach scales extremely well.
Points 4 and 5 are similar:
controller {
    doSomething();
}
Or
controller {
    HelperClassReference.doSomething();
}
Either way you will always have a thread executing inside the controller [in terms of execution speed], but the second approach is more decoupled.
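The question also asks for a separate module that drives these periodic updates by calling controller methods. A minimal sketch of such a module, assuming a hypothetical ViewController interface with a pushUpdate() method (the interface, the method names, and the choice of ScheduledExecutorService are illustrative assumptions, not something prescribed by the question):

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical contract: each view's controller knows how to push its latest state to all clients.
interface ViewController {
    void pushUpdate();
}

// Separate scheduling module: owns the single timer thread so no controller spawns its own.
class PeriodicUpdateScheduler {
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    void start(List<ViewController> controllers, long intervalSeconds) {
        for (ViewController controller : controllers) {
            executor.scheduleAtFixedRate(
                    controller::pushUpdate, 0, intervalSeconds, TimeUnit.SECONDS);
        }
    }

    void stop() {
        executor.shutdown();
    }
}

The controllers stay free of threading concerns; whether one shared executor is enough depends on how expensive each pushUpdate() call is.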
Point 6 and point 3:
"No timer on the client side" - well, then maintain a session on the server side;
but "I am not maintaining any session for each client" - you have to keep the state somewhere, so take your pick.
Say you don't maintain a session on the server but still have to keep track of the clients and send them a new update every 2 minutes: how will you know when to stop, or that a client has gone offline? If you decide to stop after, say, 5 updates from the server you are fine, but your client UI may still be online and waiting for updates.
The point is to have a logical break point in your program, but I can't find one in the above requirements.
Probable Approaches -
Use REST; it keeps an application scalable and stateless.
[https://capgemini.github.io/architecture/is-rest-best-microservices/ - a must-read, especially the section "Design / Implementation / Configuration Difficulty"]
Use the Hollywood design principle ("don't call us, we'll call you"), but decide who your Hollywood is: the UI/view or the server.
An event-driven approach can help: let the user event decide when to ask for an update
[watch out for a client asking for 3 updates in 1 second via multiple clicks; see the throttling sketch below.]
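As a rough guard against that last point, a server-side throttle per client could look like the sketch below (a hedged illustration; the clientId key, the allow() method, and the interval are assumptions, not part of the answer above):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Ignores update requests that arrive sooner than a minimum interval per client.
class UpdateThrottle {
    private final long minIntervalMillis;
    private final Map<String, Long> lastRequest = new ConcurrentHashMap<>();

    UpdateThrottle(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    // Returns true if this request should be processed, false if it came too soon.
    // Not strictly atomic under heavy concurrency, which is acceptable for a sketch.
    boolean allow(String clientId) {
        long now = System.currentTimeMillis();
        Long previous = lastRequest.get(clientId);
        if (previous != null && now - previous < minIntervalMillis) {
            return false;
        }
        lastRequest.put(clientId, now);
        return true;
    }
}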
Happy designing :)

Related

How to handle events processing time between services

Let's say we have two services A and B. B has a relation to A so it needs to know about the existing entities of A.
Service A publishes events every time an entity is created or updated. Service B subscribes to the events published by A and therefore knows about the entities existing in service A.
Problem: The client (UI or other microservices) creates a new entity 'a' and right away creates a new entity 'b' with a reference to 'a'. This is done without much delay, so what happens if service B has not yet received/handled the event from A before getting the create request for 'b' with its reference to 'a'?
How should this be handled?
Service B must fail, and the client should handle this and possibly retry.
Service B accepts the entity and over time expects the relation to be fulfilled when the expected event is received. Service B gives the entity a state that ensures it cannot be trusted before the relation has been verified.
It is poor design that the client can/has to do these two calls in the same transaction. The design should be different. How?
Other ways?
I know that event platforms like Kafka ensure very fast event transmission, but there will always be a delay, and since this is an asynchronous process there will be a kind of race condition.
What you're asking about falls under the general category of bridging the gap between Eventual Consistency and good User Experience which is a well-documented challenge with a distributed architecture. You have to choose between availability and consistency; typically you cannot have both.
Your example raises the question as to whether service boundaries are appropriate. It's a common mistake to define microservice boundaries around Entities, but that's an anti-pattern. Microservice boundaries should be consistent with domain boundaries related to the business use case, not how entities are modeled within those boundaries. Here's a good article that discusses decomposition, but the TL;DR; is:
Microservices should be verbs, not nouns.
So, for example, you could have a CreateNewBusinessThing microservice that handles this specific case. But, for now, we'll assume you have good and valid reasons to have the services divided as they are.
The "right" solution in your case depends on the needs of the consuming service/application. If the consumer is an application or User Interface of some sort, responsiveness is required and that becomes your overriding need. If the consumer is another microservice, it may well be that it cares more about getting good "finalized" data rather than being responsive.
In either of those cases, one good option is a facade (aka gateway) service that lives between your client and the highly-dependent services. This service can receive and persist the request, then respond however you'd like. It can give the consumer a 200 - OK response with an endpoint to call back to check status of the request - very responsive. Or, it could receive a URL to use as a webhook when the response is completed from both back-end services, so it could notify the client directly. Or it could publish events of its own (it likely should). Essentially, you can tailor the facade service to provide to as many consumers as needed in the way each consumer wants to talk.
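A minimal sketch of the request-tracking part of such a facade, assuming an in-memory store and made-up names (the HTTP layer, persistence, and the actual calls to the downstream services are deliberately left out):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Facade/gateway: accepts the request immediately and lets the client poll a status endpoint.
class CreateEntityFacade {
    enum Status { PENDING, COMPLETED, FAILED }

    private final Map<String, Status> requests = new ConcurrentHashMap<>();

    // Accept and record the request, then hand back a tracking id the client
    // can use against a status endpoint (or that a webhook can reference later).
    String submit(String payload) {
        String requestId = UUID.randomUUID().toString();
        requests.put(requestId, Status.PENDING);
        // ... forward the work to services A and B asynchronously ...
        return requestId;
    }

    // Called once events from both back-end services confirm the outcome.
    void complete(String requestId, boolean success) {
        requests.replace(requestId, success ? Status.COMPLETED : Status.FAILED);
    }

    Status status(String requestId) {
        return requests.get(requestId); // null if the id is unknown
    }
}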
There are other options too. You can look into Task-Based UI, the Saga pattern, or even just Faking It.
I think you would like to leverage the flexibility of a broker and the confirmation of a synchronous call. Both of them can be achieved with this:
https://www.rabbitmq.com/tutorials/tutorial-six-dotnet.html

What is MVC in the context of backend?

MVC on the frontend makes perfect sense. But why do we need MVC on the backend as well? Where is the "view" in this case, given that the backend doesn't provide anything visual?
MVC is a design pattern that promotes separation of concerns, where the three participating concerns are:
• The Model - a data structure which holds business data and is transferred from one layer to the other.
• The View - responsible for showing the data present in the application; think of it as a data structure (completely decoupled from the model) used solely for presentation purposes (not necessarily a presentation output itself), e.g. the view template in the diagram below.
• The Controller - acts as a mediator and is responsible for accepting a request from the user, modifying the model (if required), and converting it into the view.
MVC as a pattern can exist completely on the backend, completely on the frontend, or in its common form with backend and frontend combined.
One has to think relatively and see how to keep all three of these concerns separate for better application design.
The whole idea behind the MVC pattern is a very clear separation between domain objects, which represent real-world entities, and the presentation-layer data structures. Domain objects should be completely independent and should work without a view (data representation) as well. Another way to think of it is that, in the MVC context, views are isolated from the model. This allows the same model to be used with different views.
Spring MVC is a good example of a backend MVC framework.
The diagram below depicts how all three components exist on the server side only (inside the application container); it is taken from the official blog.
MVC is about separating concerns in applications that accept user input, perform business logic, and render output. It does not say where that logic resides. Nor does it specify that all the logic must live in a single process.
A fairly traditional use of MVC is with a spreadsheet. Let's look at both a single process application and a multi-process application to see how they might implement this simple spreadsheet:
        A       B       C
      -----   -----   ---------
1 |     1       2     =A1+B1
Let's say the user enters the number 4 into cell A1. What happens?
SINGLE PROCESS APPLICATION (e.g. Microsoft Excel): The user input is handled by the view logic, up until the user leaves the cell. Once that happens, the controller receives a message to update the model with the new value. The model accepts the new value, but also runs some business logic to update the values of other cells affected by the change. Once complete, the model notifies the view that its state has changed, and the view renders the new state. That notification can happen via pub/sub as #jaco0646 suggests, but it could also be handled with a callback.
MULTI-PROCESS APPLICATION (e.g. Google Sheets): The user input is handled by the view logic (in the client), up until the user leaves the cell. Once that happens, the controller (on the server) receives a message (via HTTP, or a socket) to update the model (also on the server) with the new value. The model accepts the new value, but also runs some business logic to update the values of other cells affected by the change. Once complete, the model notifies the view that its state has changed, and the view renders the new state (in the client). That notification can happen via the controller's HTTP response, or via a socket.
In other words, the MVC pattern is applicable to both scenarios.
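For the single-process case, the model-to-view notification described above might look roughly like this (a sketch only; the class and method names are illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Model: holds cell values, runs the business logic (recalculating C1),
// and notifies registered views when its state changes.
class SpreadsheetModel {
    private final List<Consumer<SpreadsheetModel>> listeners = new ArrayList<>();
    double a1 = 1, b1 = 2, c1 = a1 + b1;

    void addListener(Consumer<SpreadsheetModel> listener) { listeners.add(listener); }

    void setA1(double value) {
        a1 = value;
        c1 = a1 + b1;                            // business logic: dependent cell updates
        listeners.forEach(l -> l.accept(this));  // notify the view via callback
    }
}

// Controller: receives the "user left the cell" message and updates the model.
class SpreadsheetController {
    private final SpreadsheetModel model;
    SpreadsheetController(SpreadsheetModel model) { this.model = model; }
    void onCellEdited(double newValue) { model.setA1(newValue); }
}

public class SpreadsheetDemo {
    public static void main(String[] args) {
        SpreadsheetModel model = new SpreadsheetModel();
        model.addListener(m -> System.out.println("View renders C1 = " + m.c1)); // the "view"
        new SpreadsheetController(model).onCellEdited(4); // prints: View renders C1 = 6.0
    }
}

In the multi-process variant the same roles hold; only the arrows become HTTP or socket messages instead of in-process calls.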
Furthermore, it is also perfectly valid to treat the client and the server as two entirely separate applications, both of which might implement MVC. In this scenario, both the client's model, and the server's view are less traditional. The client's model is likely making AJAX requests to the server, as opposed to running complex business logic itself or persisting the data locally. And, the server's view is likely a serializer that produces some form of structured output that the client understands, like JSON, XML, or even a CSV.
MVC is a perfectly valid pattern anytime an application needs to accept user input, perform some business logic, and render some output -- regardless of whether that application lives on one or more processes -- and regardless of whether the view is something a human will consume.

Explanations about Event Sourcing, Microservices, CQRS [closed]

I am currently building an app, and I would like to use microservices as the pattern and GraphQL for communication. I am thinking about using Kafka/RabbitMQ + AuthZ + Auth0 + Apollo + Prisma, with all of this running on Docker.
I found many resources on event sourcing and its advantages/disadvantages, and I am stuck on how it works in the real world. So far, this is how I plan to do it:
Apollo Engine to monitor requests/responses.
Auth0 for authentication management
AuthZ for authorization
A GraphQL gateway. Sadly I did not find a reliable solution; I guess I have to do it myself using Apollo + graphql-tools to merge schemas.
And ideally:
Prisma for the read side of bill's MS
Node.js for the write side of bill's MS
Now, if I understand correctly, using Apache Kafka + ZooKeeper:
Kafka as the message broker
ZooKeeper as an event store.
If I am right, can I assume:
There would be two ways to validate whether the request is valid:
The write side only gets events (from the event store, AKA ZooKeeper) to validate whether the requested mutation is possible.
The write side gets a snapshot from a traditional database to validate the requested mutation.
Then it publishes an event to Kafka (I assume Kafka updates ZooKeeper automatically), and the message can then be used by the read side to update a private snapshot of the entity. Of course, this message can also be used by other MSs.
I do not know Apache Kafka + ZooKeeper very well; in the past I have only used messaging services such as RabbitMQ. They seem similar in shape but very different in usage.
Is the main difference between event sourcing and basic messaging the use of the event store instead of an entity snapshot? In this case, can we assume that not all MSs need an event-store tactic (I mean, validating via the event store rather than via a "private" database)? If yes, can anyone explain when you need an event store and when not?
I'll try to answer your major concerns on a concept level without getting tied up with the specifics of frameworks and implementations. Hope this will help.
There would be two ways to validate whether the request is valid:
. The write side only gets events (from the event store, AKA ZooKeeper) to validate whether the requested mutation is possible.
. The write side gets a snapshot from a traditional database to validate the requested mutation.
I'd go with the first option. To execute a command, you should rely on the current event stream as the authority for determining your model's current state.
The read model of your architecture is only eventually consistent, which means there is an arbitrary delay between a command happening and it being reflected in the read model. Although you can work on your architecture to keep this delay as small as possible (even ignoring the costs of doing so), you will always have a window where your read model is not yet up to date.
That being said, your commands should be run against your command model, based on your current event store.
Is the main difference between event sourcing and basic messaging the use of the event store instead of an entity snapshot? In this case, can we assume that not all MSs need an event-store tactic (I mean, validating via the event store rather than via a "private" database)? If yes, can anyone explain when you need an event store and when not?
The whole concept of Event Sourcing is: instead of storing your state as an "updatable" piece of data which only reflects the latest state of that data, you store your state as a series of actions (events) that can be interpreted to reach that state.
So, imagine you have a piece of your domain which reads (in free-form notation):
Entity A = { Id: 1; Name: "Something"; }
And something happens and a command arrives to change the name of such entity to "Other Thing".
In a traditional storage, you would reach for such record and update it to:
{ Id: 1; Name: "Other Thing"; }
But in an event-sourced storage, you wouldn't have such a record, you would have an event stream, with data such as:
{Entity Created with Id = 1} > {Entity with Id = 1 renamed to "Something"} > {Entity with Id = 1 renamed to "Other Thing"}
Now if you "replay" these events in order, you will reach the same state as the traditional storage, only you will "know" how your got to that state and the traditional storage will forget such history.
Now, to answer your question, you're absolutely right. Not all microservices should use an event store, and that's not even recommended. In fact, in a microservices architecture each microservice should have its own persistence mechanism (often each one a different technology), and no microservice should have direct access to another's persistence (as your diagram implies with "Another MS" reaching into the "Event Store" of your "Bill's MS").
So, the basic decision factors for you should be:
Is your microservice one where you gain more from actively storing the evolution of state inside the domain (rather than reactively logging it)?
Is your microservice's domain one where you are interested in analyzing old computations? (That is, being able to restore the domain to a given point in time so you can understand its state's evolution pattern - consider something like complex auditing where you want to understand past computations.)
Even if you answer "yes" to both of these questions... will the added complexity of such an architecture be worth it?
Just as a closing remark on this topic, note there are multiple patterns intertwined in your model:
Event Sourcing is just the act of storing state as a series of actions instead of an updatable central data-hub.
The pattern that deals with having Read Model vs Command Model is called CQRS (Command-Query Responsibility Segregation)
These 2 patterns are frequently used together because they match up so nicely but this is not a prerequisite. You can store your data with events and not use CQRS to split into two models AND you can organize your domain in two models (commands and queries) without storing any of them primarily as events.

MVC - where to put communication code

I want to try to write (amateur here!) a multiplayer game, and while designing it I decided to use the MVC pattern.
Now my question is: where should I put my networking code? In the Model or the Controller? (Obviously not the View)
EDIT:
Sorry, for the hundredth time my question was unclear.
The game itself will be MVC; it will first communicate with a server (to find a player), and later with that player (to send your turn and receive the other player's turn). So where should I do that?
The MVC design pattern is actually a combination of two layers: the presentation layer and the model layer. The presentation layer usually deals with the user interface (it updates it and reacts to the user's interaction). The model layer deals with domain business logic and persistence.
The networking code should go in the model layer.
To be exact, it belongs in the part of the model layer that deals with persistence, because there, from the standpoint of business logic, it makes no difference where the data comes from. It can come from an SQL database, from an open network socket, or from a detector on the Mars rover. These are all just data sources which, often implemented as data mappers, are part of the model layer.
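To make that concrete, here is a hedged sketch of how a model layer might hide the network behind a data-mapper-style gateway (the interface and class names are made up for illustration; the socket plumbing is stubbed out):

// Part of the model layer: the game logic only ever sees this abstraction.
interface TurnGateway {
    void sendTurn(String turn);
    String receiveOpponentTurn();
}

// One implementation talks to the game server over a socket;
// the java.net plumbing would live here, invisible to the rest of the model.
class SocketTurnGateway implements TurnGateway {
    public void sendTurn(String turn) { /* write to the socket */ }
    public String receiveOpponentTurn() { /* read from the socket */ return ""; }
}

// The domain logic in the model layer stays network-agnostic.
class GameSession {
    private final TurnGateway gateway;

    GameSession(TurnGateway gateway) { this.gateway = gateway; }

    void playTurn(String turn) {
        gateway.sendTurn(turn);
        String opponentTurn = gateway.receiveOpponentTurn();
        // ... apply opponentTurn to the game state ...
    }
}

Swapping SocketTurnGateway for a fake implementation also makes the game logic testable without a server.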
You could put the actual game itself in a new project and reference it from your MVC application; that way your game is entirely separated from your web application. This could be useful if you ever wanted to port it to WPF, for instance. Another alternative is to expose the game as a web service which the MVC application requests information from; this would also provide scalability and let clients in additional languages plug in.
However, if you decide to keep everything as a whole in MVC then I would suggest the Model.
As a breakdown:
The controller takes care of all the web requests, i.e. GET and POST. It can also populate a model and return the appropriate view for that request.
The model contains the domain objects and the logic performed on them (e.g. extracting information from the repository and manipulating the data to be passed to the view).
The view returns the markup, which is based upon the data stored within the model.
In certain implementations additional logic, such as checking conditions and repository calls, also takes place at the controller level, a technique known as Fat Controller, Thin Model.
Edit:
You should be sending a request to the controller, i.e. in your game's controller have an HttpPost method that connects to the server, sends the player's turn info, and gets the new information. For example:
[HttpPost]
public ActionResult SendPlayerTurnInformation(PlayerObject player)
{
    // logic to connect to the Game Network
    // connection.UpdatePlayerTurn(player);
    // return success/fail, for example:
    return Json(new { success = true });
}
You could then do the same to get a specific player's turn information and then update your model to be passed to the view, which would contain the new information.

Multiple RemoteObjects - Best Practices

I have an application with about 20 models and controllers and am not using any particular framework. What is the best practice for using multiple remote objects in Flex performance-wise?
1) Method 1 - One per Component - Each component instantiates a RemoteObject for itself
2) Method 2 - Multiple in Application Root - Each controller is handled by a RemoteObject in the root
3) Method 3 - One in Application Root - Combine all controllers into one class and handle them with one RemoteObject
I'm guessing 3 will have the best performance but will be too messy to maintain and 1 would be the cleanest but would take a performance hit. What do you think?
Best practice would be "none of the above." Your Views should dispatch events that a controller or Command component would use to call your service(s) and then update your model on return of the data. Your Views would be bound to the data, and then the Views would automatically be updated with the new data.
My preference is to have one service Class per different piece or type of data I am retrieving--this makes it easier to build mock services that can be swapped for real services as needed depending on what you're doing (for instance if you have a complicated server setup, a developer who is working on skinning would use the mocks). But really, how you do that is a matter of personal preference.
So, where do your services live, so that a controller or command can reach them? If you use a Dependency Injection framework such as Robotlegs or Swiz, it will have a separate object that handles instantiating, storing, and returning instances of model and service objects (in the case of Robotlegs, it will also create your Command objects for you and can create view management objects called Mediators). If you don't use one of these frameworks, you'll need to "roll your own," which can be a bit difficult if you're not architecturally minded.
One thing people who don't know how to roll their own (such as the people who wrote the older versions of Cairngorm) tend to fall back on is Singletons. These are not considered good practice in this day and age, especially if you are at all interested in unit testing your work. http://misko.hevery.com/code-reviewers-guide/flaw-brittle-global-state-singletons/
A lot depends on how much data you have, how many times it gets refreshed from the server, and whether you have to support update as well as query.
Numbers 3 (and 2) are basically singletons, which tend to work best for large applications and large datasets. Yes, it would be complex to maintain yourself, but that's why people tend to use frameworks (PureMVC, Cairngorm, etc.); much of the complexity is handled for you. Caching data within the frameworks also enhances performance and response time.
The problem with 1 is that if you have to coordinate data updates per component, you basically need to write a stateless UI, always retrieving the data from the server each time a component becomes visible.
Edit: I'm using Cairngorm - I have ~30 domain models (200 or so remote calls) and also use view models. Some of my models (remote objects) have tens of thousands of object instances (records); I keep a cache with write-back. All of the complexity is encapsulated in the controllers/commands. Performance is acceptable.
In terms of pure performance, all three of those should perform roughly the same. You'll of course use slightly more memory by having more instances of RemoteObject and there are a couple of extra bytes that get sent along with the first request that you've made with a given RemoteObject instance to your server (part of the AMF protocol). However, the effect of these things is negligible. As such, Amy is right that you should make a choice based on ease of maintainability and not performance.
