Does Kendo perform sorting and grouping client-side or server-side? How does it work when all the records are fetched at once versus when the data is loaded in parts? I tried to verify this but could not reach a conclusion.
You can do it either client-side or server-side.
For server side (i.e. data loaded in parts):
http://docs.telerik.com/kendo-ui/aspnet-mvc/helpers/grid/faq#how-do-i-implement-paging-sorting-filtering-and-grouping
For client side (i.e. all data fetched at once):
http://docs.telerik.com/kendo-ui/aspnet-mvc/helpers/grid/faq#how-can-i-configure-the-grid-to-perform-paging-sorting-filtering-and-grouping-in-memory
Customised server-side, e.g. where you don't have an IQueryable model for your grid data:
http://docs.telerik.com/kendo-ui/aspnet-mvc/helpers/grid/custom-binding
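To make the distinction concrete: with the MVC wrappers, the switch between the two modes is essentially a single DataSource setting. A minimal sketch (the action and controller names here are placeholders):

// Server-side operations (the default for Ajax binding): the grid requests
// one page at a time and sends sort/filter/group descriptors to the server.
.DataSource(ds => ds.Ajax().PageSize(20).Read(read => read.Action("GetTasks", "Tasks")))

// Client-side operations: all data is fetched once, then paged, sorted
// and grouped in memory in the browser.
.DataSource(ds => ds.Ajax().ServerOperation(false).Read(read => read.Action("GetTasks", "Tasks")))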
If you want to do client-side (AJAX) binding, the grid will retrieve the data in parts as it needs them; just make sure not to put .ServerFiltering(false) (or other similar methods) in your DataSource configuration.
Also, in your controller, use the .ToDataSourceResult() extension method when returning the data, like this:
public ActionResult GetTasks([DataSourceRequest] DataSourceRequest request)
{
    var tasks = _db.Tasks; // however you want to get your data
    return Json(tasks.ToDataSourceResult(request), JsonRequestBehavior.AllowGet);
}
Note that, as iandayman mentioned, you'll want to use an IQueryable<T> data source for this to work properly and to defer the paging/sorting/filtering operations to the database.
If you want to do server-side binding instead, you can do that as well; using an IQueryable data source there will likewise defer execution to the database.
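For completeness, a grid declaration that pairs with an action like the one above might look roughly like this (a sketch; the model type, column names and controller name are hypothetical):

@(Html.Kendo().Grid<TaskViewModel>()
    .Name("tasksGrid")
    .Columns(columns =>
    {
        columns.Bound(t => t.Title);   // hypothetical columns
        columns.Bound(t => t.DueDate);
    })
    .Pageable()
    .Sortable()
    .Groupable()
    .DataSource(ds => ds
        .Ajax()                        // server operations stay enabled by default
        .PageSize(20)
        .Read(read => read.Action("GetTasks", "Tasks"))
    )
)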
As pointed out by @iandayman, it is possible to fetch all the records at once and perform client-side processing, but for performance reasons on larger data sets I recommend fetching data from the server one page at a time.
The sorting, filtering, paging and grouping can all be performed server-side, as outlined in my answer here, which uses the Kendo.Mvc library from Telerik and an IQueryable server-side data source such as Entity Framework or Telerik Data Access.
Bear in mind that my answer also uses a pure-JavaScript Kendo DataSource declaration rather than the MVC wrappers, because it gave me more granular control over the DataSource options when I first wrote my solution. Telerik may have improved the wrappers since then (see Telerik's MVC approach here, or their WebApi approach using the wrappers here).
How can I make a filter accent-insensitive? In OData, the "eq" operator is both case- and accent-sensitive. The case part is easy to fix with "tolower", but for accents I have not found a simple solution. I know contains is supposed to be accent-insensitive, but if I filter with contains on "São José" I only get "São José" and "São José dos Campos" back; it misses "Sao Jose".
The following example, filtering by "Florianopolis", is expected to return "Florianópolis", but it does not:
url: api/cidades/get?$filter=contains(nome, 'Florianopolis')
[HttpGet]
[EnableQuery]
public ActionResult<IQueryable<CidadeDTO>> Get()
{
    try
    {
        return Ok(_mapper.Map<IEnumerable<CidadeDTO>>(_db.Cidades));
    }
    catch (System.Exception e)
    {
        return BadRequest(e.GetBaseException().Message);
    }
}
It should match those as well, just as Entity Framework does.
If your OData model is mapped directly to your EF models and an IQueryable<T> expression is passed into Ok(), then the query is passed through to the database engine as SQL:
SELECT * FROM Cidades WHERE nome LIKE '%Florianopolis%'
When that occurs, the Collation settings in the database connection will determine the comparison matching logic.
When your database collation is case- and accent-insensitive but your data is still filtered as if it were not, that is an indication that an IEnumerable<T> has been passed into Ok() and the comparison logic is being evaluated in C#, which by default is sensitive to both case and accent. Unfortunately, this also means it is very likely that the entire data table has been loaded into memory first so that the filter can be applied.
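To illustrate the difference with the types from the question, a minimal sketch:

// Composable: the $filter is appended to the EF query and translated to SQL,
// so the database collation governs the comparison.
return Ok(_db.Cidades);

// Not composable: the whole table is materialized first and the $filter is
// then evaluated in C# with its default (case- and accent-sensitive) comparison.
return Ok(_db.Cidades.AsEnumerable());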
In your case the OData model is mapped to DTO expressions that are in turn mapped to the EF models via AutoMapper, and that is where the generated query breaks down: by calling Map() you are loading ALL records from the EF table and leaving the $filter criteria to be applied later by the EnableQueryAttribute.
For the OData query conventions to be applied automatically, you must return an IQueryable<T> from your method, or at least pass an IQueryable<T> into the Ok() response handler. With AutoMapper, you can use the Queryable Extensions to satisfy the IQueryable<T> requirement:
Queryable Extensions
When using an ORM such as NHibernate or Entity Framework with AutoMapper’s standard mapper.Map functions, you may notice that the ORM will query all the fields of all the objects within a graph when AutoMapper is attempting to map the results to a destination type.
...
ProjectTo must be the last call in the chain. ORMs work with entities, not DTOs. So apply any filtering and sorting on entities and, as the last step, project to DTOs.
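Following that guidance, the action would use ProjectTo (from AutoMapper.QueryableExtensions) to keep the response queryable; a sketch using the names from the question, with the caveat explained in the next paragraph:

[HttpGet]
[EnableQuery]
public ActionResult<IQueryable<CidadeDTO>> Get()
{
    // ProjectTo composes the DTO projection into the EF query
    // instead of materializing the entities first.
    return Ok(_db.Cidades.ProjectTo<CidadeDTO>(_mapper.ConfigurationProvider));
}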
In OData, the last requirement (about ProjectTo) is still problematic, because the EnableQueryAttribute will append the query options to the IQueryable<T> response, which can still end up materializing the entire table into memory first (as an IEnumerable<T>) and then applying the filter, which is incredibly inefficient. This is the behaviour generally observed when someone complains about poor performance from an OData implementation: it is not always AutoMapper, but it is usually this pattern of loading the data source into memory in its entirety and then filtering it. Following the default guidance for AutoMapper will lead you in this direction.
Instead, we need an additional package, AutoMapper.Extensions.ExpressionMapping, which gives us access to the UseAsDataSource extension method.
UseAsDataSource
Mapping expressions to one another is tedious and produces long, ugly code.
UseAsDataSource().For<DTO>() makes this translation clean by not having to explicitly map expressions. It also calls ProjectTo<TDto>() for you, where applicable.
This changes your implementation to the following:
[HttpGet]
[EnableQuery]
public ActionResult<IQueryable<CidadeDTO>> Get()
{
    return Ok(_db.Cidades.UseAsDataSource().For<CidadeDTO>());
}
Don't fall into the trap of assuming that AutoMapper is necessary or best practice for an OData API implementation. If you are not using the unique features that AutoMapper provides, then adding an additional abstraction layer can end up over-complicating your solution.
I'm not against AutoMapper, I use it a lot for Integrations, ETL, GraphQL and non-DDD style data schemas where the DTO models are significantly different to the underlying EF/data storage models. But it is a maintenance and performance overhead that a simple DDD data model and OData API based solution can easily do without.
Don't hire an excavator when a simple shovel will do the job.
AutoMapper is a convention-based object-to-object mapper that can be useful when you want to change the structure between implementation layers in your code; traditionally you might map business domain models, which may represent aggregates or have flattened structures, to highly normalised database models.
OData's Edm mapping is likewise convention-based. It was designed to facilitate many of the same operations that AutoMapper provides, with the exception of flattening and unflattening models; those operations are deferred to the EF engine. The types exposed via the OData mapping are DTOs.
If your DTO models have the same relational structure as your EF models, then you would generally not use AutoMapper at all. The OData Edm mapping is optimised specifically to manage this type of workload; it is designed to be, and has been, integrated directly into the serialization layer, making the Edm types truly Data Transfer Objects that only exist over the wire and in the client.
This did the job:
[HttpGet]
public ActionResult<IQueryable<PessoaDTO>> Get(ODataQueryOptions<Pessoa> options)
{
    try
    {
        var queryResult = options.ApplyTo(_db.Pessoas);
        return Ok(queryResult);
    }
    catch (System.Exception e)
    {
        return BadRequest(e.GetBaseException().Message);
    }
}
This is NOT about client-side paging in a browser!
My problem is that I plan to generate a rather huge XML file using FreeMarker as the template engine. As far as I know, this means I need to feed the entire data set into the model at once, which requires a lot of RAM on the machine.
To avoid that, I plan to read paged data from a database using spring-data. Something like Page<T> findAll(Pageable pageable); should solve the part of getting the source data in smaller chunks. But what about generating the file?
Is there a way to use some sort of paging or to stream data to Freemarker?
You could implement some sort of FreeMarker TemplateModel. For instance, you could implement TemplateMethodModelEx to take a page as an argument and return your data. In theory this would work, and the FreeMarker renderer would invoke the method only when needed, but I haven't tested this kind of setup, so I can't be certain.
I have multiple input values; there may be 1, 10, 20, 100 or more rows and 3 or more columns, as shown here.
Questions:
1) In the Spring controller, how do I get the values for each row? I need the exact values for all the corresponding fields, e.g. row 1's Expense Activities value matched with its account and corresponding description. Here, Expense Activities and Accounts are drop-down boxes and the description is a text field.
2) I have another tab called Expense, which is similar to the Earnings salary tab shown in the image. How can I get the values for multiple blocks?
I am using Spring MVC with Ext JS as the front-end technologies.
Your questions are fairly hard to answer given the lack of detail as to what you are doing.
In similar Spring/Ext apps I have written, I use a store backed by a REST proxy to back my Ext grid. This gives you certain client-server communication for free. For more reading on the Ext JS side, try:
Ext.data.Store
Ext.data.proxy.Rest
It's not too hard to expand the grid example to use a REST proxy, which will give you a good grounding in how to set your grid up for client-server communication.
Then, in your Spring controller, you will need to implement and annotate the methods for the CRUD operations that the Ext proxy will send requests to.
A handy guide for getting started with writing a restful web service with spring:
Spring REST Service
In your specific case (and I'm guessing here because I'm not 100% sure what you're asking) you would need a Spring controller method annotated with something like:
@RequestMapping(value="/expense", method=RequestMethod.GET)
You would back this with whatever business logic is used to load the grid, and pass the data structure back, most likely as a JSON object.
For updates from the client you would then have another Spring controller method, only this time annotated with something like:
@RequestMapping(value="/expense", method=RequestMethod.POST) for create, or
@RequestMapping(value="/expense/{id}", method=RequestMethod.PUT) for updates.
Obviously, if your data structures are more complicated, then the JSON objects sent over the wire will be correspondingly complex.
Hopefully this will set you on the path to getting your app working.
I am planning to use knockout.js and the MVVM pattern on the client side for a single-page application, so models and view models would be defined on the client side. I am confused about how we should structure the server side.
Now, would controllers just return the domain model itself? Would all the mapping from domain model to view model happen only on the client side?
In my solution there is a wide gap between the domain model and the view model, so the above approach would result in a lot of data being returned to the client unnecessarily. Though it seems like overkill, I am thinking about repeating ViewModel and InputViewModel definitions (the former represents the data rendered, the latter the data posted back to controller actions) on the server side, and also having a mapping layer (based on AutoMapper) to map domain models to view models on the server side. Does this make sense? Or is there a better approach?
I would suggest that you work out what data your view models actually need, then have the controllers build up a server-side view model that contains that data and send it to the client in a JSON format.
This way you are not sending unnecessary data over to the client (or back), you are still able to do much of the heavy lifting on the server and the knockout view models can do what they are meant for: presenting the data to be used by the view.
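As a rough sketch (the model and property names here are made up), the controller shapes a slim view model and returns it as JSON:

// Hypothetical trimmed-down view model: only the fields the view binds to.
public class CustomerViewModel
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

public ActionResult GetCustomer(int id)
{
    var customer = _repository.GetCustomer(id); // domain model, however you load it
    var viewModel = new CustomerViewModel
    {
        Id = customer.Id,
        DisplayName = customer.FirstName + " " + customer.LastName
    };
    return Json(viewModel, JsonRequestBehavior.AllowGet); // consumed by the knockout view model
}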
What you described in point 2 is actually the solution I use the most and it makes sense to me:
I use AutoMapper on the server side to map between domain models and ViewModels (.NET objects) that are view-specific and contain only the data the view needs.
The controller action that is responsible for loading the view the first time will data-bind the view to the ViewModel, so that the page is initialized quickly without needing to make an Ajax call.
In the view itself I create the Knockout view model, assigning any initial values (if needed) by JSON-encoding the bound ViewModel. For example, using ASP.NET MVC I would do something like:
var boundedViewModel = @Html.Raw(Json.Encode(Model));
That is exactly how I would approach this problem. If this were a straight MVC app, you would still be creating viewmodels.
Sometimes, for complicated data sets, I can see a use case for something like Knockback, which takes the rich data model of Backbone.js and combines it with Knockout.js:
http://kmalakoff.github.com/knockback/
As an MVC newb, I keep getting caught up in the details. One in particular is making me pause longer than I'd expect: pagination. Should pagination go in the model or in the controller?
In case it matters, I'm using ZF, and would like to paginate the results of some SQL data.
Pagination splits records across pages, so while it only gathers data from the model, it deals with presentation. Unless splitting the output into multiple pages is intrinsic to the model (which is rarely the case), my suggestion is to put the pagination logic (i.e. dealing with the page number) in the controller.
You might also want to consider taking advantage of a view helper, to minimize the code you put into your controller (fat controllers aren't a good thing).
The thing is, though, that Zend_Paginator has a very convenient set of adapters. One of these is Zend_Paginator_Adapter_DbSelect, which allows you to wrap a Zend_Db_Select query and paginate a SQL query efficiently (e.g. by limiting the results). So I can see why you are wondering where to construct these. Although you can indeed debate whether the model is the best place, I personally don't have a problem with creating a method in my model like:
public function fetchEntitiesAsPaginator();
... which would return a Zend_Paginator instance injected with a Zend_Paginator_Adapter_DbSelect.
BTW:
I personally don't consider a paginator to be just for presentation; I consider it a glorified LimitIterator. When you look at it from that perspective, things start to look a bit different. And as a side note: the presentation concerns of Zend_Paginator are already separated out into the Zend_View_Helper_PaginationControl view helper anyway.
Logic to paginate goes in the controller. Any data you need, for example the current page number, should go in the model.
When in doubt, data goes in the model and everything that acts on the data goes in the controller.
The controller should handle the parameter for the page number, and then pass it to the model, which will then know which records to retrieve.
For example...
$userModel->getAll((int) $_GET['page']);
(I don't know Zend Framework, but the idea should be clear)