I have an application with about 20 models and controllers and am not using any particular framework. What is the best practice for using multiple remote objects in Flex performance-wise?
1) One per component - each component instantiates a RemoteObject for itself.
2) Multiple in the application root - each controller is handled by its own RemoteObject in the root.
3) One in the application root - combine all controllers into one class and handle them with a single RemoteObject.
I'm guessing 3 will have the best performance but will be too messy to maintain, and 1 would be the cleanest but would take a performance hit. What do you think?
Best practice would be "none of the above." Your Views should dispatch events that a controller or Command component would use to call your service(s) and then update your model on return of the data. Your Views would be bound to the data, and then the Views would automatically be updated with the new data.
My preference is to have one service class per type of data I'm retrieving. This makes it easier to build mock services that can be swapped for the real services as needed, depending on what you're doing (for instance, if you have a complicated server setup, a developer working on skinning would use the mocks). But really, how you do that is a matter of personal preference.
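As a rough illustration of that mock-swapping idea (sketched in C# since the pattern is language-agnostic; all the names here are hypothetical), the consumer depends only on an interface, so the mock and the real remote service are interchangeable:

using System.Collections.Generic;
using System.Threading.Tasks;

public record Post(string Title);

// The consuming controller/command depends only on this interface.
public interface IPostService
{
    Task<IList<Post>> GetPostsAsync();
}

// Canned data for skinning/UI work; a RemotePostService implementing
// the same interface would call the real backend instead.
public class MockPostService : IPostService
{
    public Task<IList<Post>> GetPostsAsync() =>
        Task.FromResult<IList<Post>>(new List<Post> { new Post("Sample post") });
}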
So, where do your services live, so that a controller or command can reach them? If you use a dependency injection framework such as Robotlegs or Swiz, it will have a separate object that handles instantiating, storing, and returning instances of model and service objects (in the case of Robotlegs, it will also create your Command objects for you, and can create view management objects called Mediators). If you don't use one of these frameworks, you'll need to roll your own, which can be a bit difficult if you're not architecturally minded.
One thing people who don't know how to roll their own (such as the people who wrote the older versions of Cairngorm) tend to fall back on is Singletons. These are no longer considered good practice, especially if you are at all interested in unit testing your work: http://misko.hevery.com/code-reviewers-guide/flaw-brittle-global-state-singletons/
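To make the testability point concrete, here is a tiny sketch (C#, hypothetical names): the singleton version reaches through global state and cannot be substituted in a unit test, while the injected version can.

public interface IUserService
{
    string GetName(int id);
}

public class UserServiceSingleton : IUserService
{
    public static readonly UserServiceSingleton Instance = new UserServiceSingleton();
    public string GetName(int id) => "real user " + id;
}

// Hard to test: the dependency is hidden global state.
public class BadController
{
    public string Greet(int id) => UserServiceSingleton.Instance.GetName(id);
}

// Easy to test: a mock IUserService can be passed in.
public class GoodController
{
    private readonly IUserService _users;
    public GoodController(IUserService users) { _users = users; }
    public string Greet(int id) => _users.GetName(id);
}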
A lot depends on how much data you have, how many times it gets refreshed from the server, and whether you have to support updates as well as queries.
Numbers 3 and 2 are basically singletons, which tend to work best for large applications and large datasets. Yes, it would be complex to maintain yourself, but that's why people tend to use frameworks (PureMVC, Cairngorm, etc.); much of the complexity is handled for you. Caching data within the framework also enhances performance and response time.
The problem with 1 is that if you have to coordinate data updates per component, you basically need to write a stateless UI, always retrieving the data from the server each time a component becomes visible.
Edit: I'm using Cairngorm. I have ~30 domain models (200 or so remote calls) and also use view models. Some of my models (remote objects) have tens of thousands of object instances (records); I keep a cache with write-back. All of the complexity is encapsulated in the controller/commands. Performance is acceptable.
In terms of pure performance, all three of those should perform roughly the same. You'll of course use slightly more memory by having more instances of RemoteObject, and a few extra bytes get sent along with the first request made by a given RemoteObject instance (part of the AMF protocol). However, the effect of these things is negligible. As such, Amy is right that you should make your choice based on ease of maintainability, not performance.
I wonder what the best practice for this business model is:
I have a User object and a Post object; the user has one or many Posts. I want to know which is the best design:
1 - Create separate endpoints GET /users/id and GET /posts?userId=id
2 - Create only one endpoint GET /users/id, and in my service layer call getPostsByUserId(), then add the posts to my user object and return the whole thing from the API.
Could you tell me which is the right approach, and the pros and cons of each?
If you go with the first approach, you are using lazy fetching, which is common in REST APIs and is used for one-to-many relationships where you want to get the instances related to one entity. It simply has better performance, because your client fetches exactly the data it needs, when it needs it. The second approach uses eager fetching, which is good for loading a single related instance. In the end, your choice between the two depends heavily on your usage. In REST APIs the first approach is the more common and better solution, but in traditional web applications like Spring MVC there are situations where you have to use the second approach to load everything the page needs.
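To make the difference concrete, with hypothetical data the two designs return shapes like these:

Approach 1 (lazy, two calls):
GET /users/42        -> { "id": 42, "name": "Alice" }
GET /posts?userId=42 -> [ { "id": 1, "title": "First post" }, ... ]

Approach 2 (eager, one call):
GET /users/42 -> { "id": 42, "name": "Alice",
                   "posts": [ { "id": 1, "title": "First post" }, ... ] }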
I want to ask: if we write a query in the controller instead of the model, does it have any effect, such as making data load more slowly, or anything else?
The data I want to use is over 1000 records.
If not, what makes loading data slow in a web application?
For example, some AJAX requests need 4-5 seconds, and some take up to a minute.
There is virtually no difference between running a query in a controller or a model except for the small (negligible) amount of overhead a model adds (if it were me, I wouldn't worry about it). If you want to adhere to the MVC design pattern, database queries are typically done in the model, not the controller. But this is more of a stylistic thing in CodeIgniter, since it is not strictly enforced the way it is in frameworks that use an ORM.
With that being said, if you are experiencing long execution times I would recommend not getting all the data at once, and instead using load more, pagination, datatables, or a similar system to reduce the amount of records selected at once to something more manageable depending on your acceptable execution/response times.
You can test what works in your situation best by setting benchmark points: https://www.codeigniter.com/user_guide/libraries/benchmark.html
I have been working with CodeIgniter for a few years, and what I have realized is that you should properly do your queries in models.
The idea of MVC is code separation, where the M (Model) does the heavy lifting of the application, which mostly relates to databases; the V (View) handles the presentation, which is where users of the system interact with the application; and the C (Controller) acts as a messenger or intermediary between the Model and the View.
This frees the controller from lots of processing, so that it doesn't have to query the database and load lots of data before showing it in a view.
I always use (I call it the little magic) {elapsed_time} and {memory_usage} to check how my application performs with whatever logic I implement; you can give it a try.
I hope this helps.
I want to plan a solution that manages enriched data in my architecture.
To be more clear, I have dozens of microservices.
Let's say: Country, Building, Floor, Worker.
Each runs over its own separate NoSQL data store.
When I get the data from the Worker service, I also want to present the name of the floor the worker is working on, along with the building name and country name.
Solution 1.
The client will query all the microservices.
Problem - multiple requests, and it makes the client aware of the structure.
I know multiple requests shouldn't bother me but I believe that returning a json describing the entity in one single call is better.
Solution 2.
Create an orchestration that retrieves the data from multiple services.
Problem - if the data (entity names, for example) is not stored in the same document in the DB, it is very hard to sort and filter by those fields.
Solution 3.
Before saving the entity, e.g. a worker, call all the other services and fill in the related data (building name, country name).
Problem - when the building name is changed, the change isn't reflected in the worker service.
Solution 4.
(This is the best one I can come up with).
Create a process that subscribes to a broker and receives all entity changes.
For each change, it updates all the relevant entities.
When an entity changes, say a building name, the process updates all the documents that hold the building name.
Problem:
Each service has to know what can be updated.
When a cascading update happens, it shouldn't publish the change to the broker again (a recursive update), so this adds complication to the microservices.
Solution 5.
Keep everything normalized; filter and sort in Elasticsearch.
Problem: keeping normalized data in ES is too expensive performance-wise.
One thing I saw Netflix do (which I like) is create intermediary services for situations like this. So maybe a new intermediary service could call the other services to gather all the data, then create the unified output with the Country, Building, Floor, and Worker.
You can even go one step further and try to come up with a scheme for providing as input which resources you want to include in the output.
So I guess this closely matches your Solution 2. I notice you mention for Solution 2 that there are concerns with sorting/filtering in the DBs. I think that if you are using NoSQL then it has to be for a reason, and more often than not the reason is performance. If this were done wrong then, yes, you would have problems, but if all the searchable fields are properly keyed and indexed (as @Roman Susi mentioned in his points 1 and 2) then I don't see this being a problem. This service will only be as fast as the services and data stores behind it, so they have to be fast.
Now you keep your individual microservices as they are, keep the client calling one service, and encapsulate the complexity of merging the data into this new service.
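A minimal sketch of such an intermediary service (C# for illustration; the service URLs and type shapes are hypothetical, only the floor is enriched for brevity, and the independent calls could be issued in parallel):

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record Worker(int Id, string Name, string FloorId);
public record Floor(string Id, string Name, string BuildingId);
public record EnrichedWorker(Worker Worker, Floor Floor);

public class WorkerAggregator
{
    private readonly HttpClient _http = new HttpClient();

    // One call per underlying microservice; the client only ever sees this service.
    public async Task<EnrichedWorker> GetEnrichedWorkerAsync(int id)
    {
        var worker = await _http.GetFromJsonAsync<Worker>($"http://worker-service/workers/{id}");
        var floor = await _http.GetFromJsonAsync<Floor>($"http://floor-service/floors/{worker!.FloorId}");
        return new EnrichedWorker(worker, floor!);
    }
}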
This is the video where I saw this (https://www.youtube.com/watch?v=StCrm572aEs); it's a long video but very informative.
It is hard to advise at the level of "Solution N", but certain problems can be avoided with the following advice:
Use globally unique identifiers for entities, for example by using some kind of URI as key values.
Global ids also simplify updates, because you can track what has actually changed: the name, or the entity itself (an entity has a one-to-one relationship with its global URI).
The CAP theorem says you can choose only two of consistency, availability, and partition tolerance. Do you want a CA architecture? Or CP? Or maybe AP? This will strongly affect the way you distribute data.
For "sort and filter" there is the MapReduce approach, which can distribute the load of computing those things.
Think carefully about the balance of normalization/denormalization. If your services operate on URIs, you can have a service which turns URIs into labels (names, descriptions, etc.), but you do not need to keep the redundant information everywhere and update it. Do not optimize prematurely; try to keep data normalized as long as possible. This way, a worker may not even need the building name, just its global id, and the microservice looks up the metadata from another microservice.
In other words, minimize the number of keys, shared between services, as part of separation of concerns.
Focus on the underlying model, not on the JSON going back and forth. Modelling the data in your system(s) correctly gains you more than saving JSON calls.
As for NoSQL, take a look at the Riak database: it has adjustable CAP properties, IIRC. Even if you do not use it as such, reading its documentation may help you come up with a suitable architecture for your distributed microservices system. (Of course, this applies if you have an essentially parallel system.)
First of all, thanks for your question. It is similar to the main problem of document DBs: how do you sort a collection by a field from another collection? I have my own answer for that, so I'll try to comment on all your solutions:
Solution 1: It is good if the client wants to work with Countries/Buildings/Floors independently. But it does not solve the problem you mentioned in Solution 2: sorting 10k workers by building is going to be slow.
Solution 2: Similar to Solution 1, if all the client wants is a list of enriched workers without knowing how to combine it from multiple pieces.
Solution 3: As you said, unacceptable because of inconsistent data.
Solution 4: It is going to work, most of the time. But:
Huge data duplication. If you have 20 entities, you are going to have 20x the data.
Large complexity. 20 entities -> 20 different procedures to update related data.
High coupling. All your services must know about each other, and a data model change will propagate to every service because of the update procedures.
Questionable eventual consistency. It can be done so that data is consistent after failures, but it is not going to be easy.
Solution 5: Kind of the answer :-)
But you do not want everything in there. Keep separate services that serve separate entities, and build other services on top of them.
If the client wants enriched data, build a service that returns enriched data, as in Solution 2.
If the client wants to display a list of enriched data with filtering and sorting, build a service that provides enriched data with filtering and sorting capability! Likely, the implementation of such a service will contain an ES instance holding cached, indexed data from the lower-level services. The point here is that ES does not have to contain everything or be shared between every service; it is up to you to find the right balance between performance and infrastructure resources.
This is a case where Linked Data can help you.
Basically, the Floor attribute of the worker would be a URI (a link) to the floor itself, and any other linked data would be expressed as URIs as well.
Modeled with some JSON-LD it would look like this:
worker = {
  '@id': '/workers/87373',
  name: 'John',
  floor: {
    '@id': '/floors/123'
  }
}

floor = {
  '@id': '/floors/123',
  level: 12,
  building: { '@id': '/buildings/87' }
}

building = {
  '@id': '/buildings/87',
  name: "John's home",
  city: { '@id': '/cities/908' }
}
This way, all the client has to do is append the base URL (like api.example.com) to the '@id' and make a simple GET call.
To remove the burden of the extra calls from the client (in case it's a slow mobile device, say), we use the gateway pattern with microservices. The gateway can expand those links with very little effort and augment the returned object. It can also make multiple calls in parallel.
So the gateway will make a GET /floors/123 call and replace the floor reference on the worker with the reply.
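A rough sketch of that expansion step (C# for illustration; the base URL and the JSON handling are assumptions):

using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json.Nodes;
using System.Threading.Tasks;

public class LinkExpander
{
    private readonly HttpClient _http = new HttpClient();
    private const string BaseUrl = "https://api.example.com"; // hypothetical gateway target

    // Replace a reference like { "@id": "/floors/123" } on the parent
    // object with the full resource fetched from the owning service.
    public async Task ExpandAsync(JsonObject parent, string property)
    {
        var id = (string?)parent[property]?["@id"];
        if (id == null) return; // nothing to expand
        parent[property] = await _http.GetFromJsonAsync<JsonNode>(BaseUrl + id);
    }
}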
1) What are BLL services? What's the difference between them and Service Layer services? What goes into Domain Services and what goes into the Service Layer?
2) How come I refactor the BLL model to give it behavior? The Post entity holds a collection of feedbacks, which already makes it possible to add another Feedback through feedbacks.Add(feedback). Obviously there are no calculations in a plain blog application. Should I define a method to add a Feedback inside the Post entity, or should that behavior be maintained by a corresponding service?
3) Should I use the Unit of Work (and Unit of Work repositories) pattern as described in http://www.amazon.com/Professional-ASP-NET-Design-Patterns-Millett/dp/0470292784, or would it be enough to use the NHibernate ISession?
1) Business Layer and Service Layer are actually synonyms. The 'official' DDD term is an Application Layer.
The role of an Application Layer is to coordinate work between Domain Services and the Domain Model. This could mean, for example, that an Application function first loads an entity through a Repository and then calls a method on the entity that does the actual work.
2) Sometimes when your application is mostly data-driven, building a full featured Domain Model can seem like overkill. However, in my opinion, when you get used to a Domain Model it's the only way you want to go.
In the Post and Feedback case, you want an AddFeedback(Feedback) method from the beginning, because it leads to less coupling (you don't have to know whether the Feedback items are stored in a List or a Hashtable, for example) and it offers you a nice extension point. What if you ever want to add a check that no more than 10 Feedback items are allowed? If you have an AddFeedback method, you can easily add the check in one single point.
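As a sketch of that shape (C#; the 10-item rule is just the hypothetical check mentioned above):

using System;
using System.Collections.Generic;

public class Feedback
{
    public string Text { get; set; } = "";
}

public class Post
{
    // The storage stays private; callers never learn whether it's a List.
    private readonly List<Feedback> _feedbacks = new List<Feedback>();

    public IReadOnlyList<Feedback> Feedbacks => _feedbacks;

    public void AddFeedback(Feedback feedback)
    {
        // One single point for invariants such as the 10-item limit.
        if (_feedbacks.Count >= 10)
            throw new InvalidOperationException("No more than 10 feedback items are allowed.");
        _feedbacks.Add(feedback);
    }
}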
3) The Unit of Work and Repository patterns are a fundamental part of DDD. I'm no NHibernate expert, but it's always a good idea to hide infrastructure-specific details behind an interface. This reduces coupling and improves testability.
I suggest you first read the DDD book or its short version to get a basic comprehension of the building blocks of DDD. There's no such thing as a BLL-Service or a Service layer Service. In DDD you've got
the Domain layer (the heart of your software where the domain objects reside)
the Application layer (orchestrates your application)
the Infrastructure layer (for persistence, message sending...)
the Presentation layer.
There can be Services in all these layers. A Service is just there to provide behaviour to a number of other objects; it has no state. For instance, a Domain layer Service is where you'd put cohesive business behaviour that does not belong in any particular domain entity and/or is required by many other objects. The inputs and outputs of the operations it provides would typically be domain objects.
Anyway, whenever an operation seems to fit perfectly into an entity from a domain perspective (such as adding feedback to a post, which translates into Post.AddFeedback() or Post.Feedbacks.Add()), I always go for that rather than adding a Service that would only scatter the behaviour in different places and gradually lead to an anemic domain model. There can be exceptions, like when adding feedback to a post requires making connections between many different objects, but that is obviously not the case here.
You don't need a unit-of-work pattern on top of the NHibernate session:
Why would I use the Unit of Work pattern on top of an NHibernate session?
Using Unit of Work design pattern / NHibernate Sessions in an MVVM WPF
If I expose IQueryable from my service layer, would there be fewer database calls when I need to grab information from multiple services?
For example, I'd like to display two separate lists on a page, Posts and Users, and I have two separate services that provide them. If both return IQueryable, will they be joined into one database call? Each repository creates a context for itself.
It's best to think of an IQueryable<T> as a single query waiting to be run. So if you return two IQueryable<T> instances and run them in the controller, it isn't any different from running them separately in their own service methods. Each time you execute an IQueryable<T> to get results, it runs its query by itself, independently of other IQueryable<T> objects.
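A small sketch of that deferred execution (LINQ-to-Objects stand-ins here, so no real SQL is issued, but the shape against a real provider is the same):

using System.Collections.Generic;
using System.Linq;

public static class Demo
{
    // Stand-ins for service methods that each return a pending query.
    static IQueryable<int> GetPosts() => new[] { 1, 2, 3 }.AsQueryable();
    static IQueryable<int> GetUsers() => new[] { 4, 5 }.AsQueryable();

    public static void Main()
    {
        IQueryable<int> posts = GetPosts(); // nothing has executed yet
        IQueryable<int> users = GetUsers(); // still nothing

        // Each enumeration executes its own query independently; against
        // a real provider these would be two separate database calls.
        List<int> postList = posts.ToList();
        List<int> userList = users.ToList();
    }
}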
The only time (as far as I know) it would make an impact is if there is enough time between the two service calls that the database connection might close, but you would need a considerable gap between the calls for that to be the case.
Returning IQueryable<T> to the controller still has some usefulness, such as easier handling of paging and sorting (so sorting is done in the controller and not in the service layer, which doesn't necessarily care about how data is sorted or paged). This isn't a performance concern, though, and people will disagree about whether it's best to do this in the controller (I've seen reputable developers do it and give well-thought-out reasons why).
No. The best an IQueryable can do is reduce the number of calls within a single database context. An IQueryable will not cross contexts.
Personally, I don't use IQueryables past the repositories for a number of reasons:
1) I don't use the same domain objects as database objects, and seeing "no translation to SQL" pisses me off ;)
2) I don't like the necessary structure for IQueryables in views: foreach (var item in collection){var tempItem = item; code on tempItem}
3) I've come up with a method of passing generic filters to the data layer (LinqKit and PredicateBuilder are gods)
If these reasons don't apply to you, of course you should feel free to use IQueryables to whichever layer you desire.
Not with two different contexts.
Definitely NO. It's a leaky abstraction.
It allows abominations like this:
q.Where(x => { Console.WriteLine("fail"); return true; });
The thing is, when exposing IQueryable, you are saying that your data layer fully supports LINQ to Objects.
If you make two method calls you will make two queries.
You can combine the methods into a single method which gets all the data at once.
If you are implementing the repository pattern you will have an easier time if you instantiate one database context per request.
Your service layer is exactly that: a layer which serves up what you need. Often my service layers are named things like SearchService, with methods for returning every packaged collection I will ever need (the actual view models themselves). If I ever need a new search, my service layer gets a new method. The backing for your service layer can then contain any data backing or persistence model you like, be it a repository, an Entity Framework provider, etc.
To answer your question, though: the line needs to be drawn at the service layer. All queries need to be contained within it, and only data returned.
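A sketch of that shape (hypothetical names; the IQueryable source behind the service could be Entity Framework, a repository, or anything else):

using System.Collections.Generic;
using System.Linq;

public class PostSummary
{
    public string Title { get; set; } = "";
}

public class SearchService
{
    private readonly IQueryable<PostSummary> _posts; // supplied by EF, a repository, etc.

    public SearchService(IQueryable<PostSummary> posts) { _posts = posts; }

    // The query is composed and executed inside the service;
    // only materialized view-model data crosses the boundary.
    public List<PostSummary> SearchPosts(string term, int page, int pageSize) =>
        _posts.Where(p => p.Title.Contains(term))
              .OrderBy(p => p.Title)
              .Skip((page - 1) * pageSize)
              .Take(pageSize)
              .ToList();
}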