I am working on an MVC3 and Razor website. The user has to select their way through a few choices before finally working on the data.
For example:
Client List -> Version List (Filtered by client) -> Etc (Filtered by version)
Once a user selects a client, they select a version for that client, so I'm passing the client id on the querystring. For each action of the Version controller I'm passing around the client id. On views where I want to show the client name, I'm querying the database for the client and stuffing it into the ViewBag. This seems very inefficient. I feel like I could use a cookie to hold the client id and name.
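Concretely, each action in my Version controller ends up looking something like this (the names here are just illustrative):

```csharp
using System.Linq;
using System.Web.Mvc;

public class VersionController : Controller
{
    private readonly MyDbContext _db = new MyDbContext(); // hypothetical EF context

    // GET /Version/Index?clientId=5
    public ActionResult Index(int clientId)
    {
        // Re-queried on every action, just to show the name in the view
        ViewBag.ClientName = _db.Clients.Find(clientId).Name;

        var versions = _db.Versions
            .Where(v => v.ClientId == clientId)
            .ToList();

        return View(versions);
    }
}
```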
Now that I've got my version controller done, I'm facing the same pattern again with each subsequent controller, but now I need to persist both client and version...
What is a preferred approach for persisting information like this across requests?
This seems very inefficient
That's what databases are made and optimized for => querying data based on fields, and if you put indexes on those fields it will be screamingly fast. Of course Session, cookies, and cache are common techniques you could employ to limit the number of queries to the database, but you will have to accept the possible staleness of the data you get this way (if some other thread/process modifies the data in the database, you no longer get correct results).
So before doing any premature optimization, here's what I would recommend: hammer your database until you discover that it is actually a bottleneck for your application. Databases might become a bottleneck in some very high traffic applications, where you should resort to one of the aforementioned techniques (or in some poorly written applications of course, but let's exclude that possibility for the moment).
You should use TempData, which allows you to pass data between the current and next HTTP requests. Be sure to keep in mind that it uses the session.
Greg Shackles has a great article all about TempData here
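A minimal sketch of the TempData handoff (controller and key names here are hypothetical):

```csharp
using System.Web.Mvc;

public class ClientController : Controller
{
    public ActionResult Select(int id)
    {
        // TempData lives in session until it is read on a later request
        TempData["ClientId"] = id;
        return RedirectToAction("Index", "Version");
    }
}

public class VersionController : Controller
{
    public ActionResult Index()
    {
        var clientId = TempData["ClientId"] as int?;
        if (clientId == null)
            return RedirectToAction("Index", "Client");

        // Keep() retains the value for one more request; otherwise
        // TempData is discarded as soon as it has been read
        TempData.Keep("ClientId");

        // ... load versions for clientId.Value ...
        return View();
    }
}
```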
See this similar question: MVC3 multi step form - How to persist model object
I am using the DataTables plugin in Laravel. I have about 3000 records in a table.
But when I load that page it loads all 3000 records in the browser and then creates the pagination, which slows down the page load.
How do I fix this, or what is the correct way to do it?
Use server-side processing.
Get help from a Laravel package such as Yajra's: https://yajrabox.com/docs/laravel-datatables/
Generally you can solve pagination either on the front end, the back end (server or database side), or a combination of both.
Server-side processing, without a package, would mean setting up TOP/FETCH paging on the rows being returned from your server.
You could also load a small amount (say 20) and then when the user scrolls to the bottom of the list, load another 20 or so. I mention the inclusion of front end processing as well because I’m not sure what your use cases are, but I imagine it’s pretty rare any given user actually needs to see 3000 rows at a time.
Given that Data Tables seems to have built-in functionality for paginating data, I think that #tersakyan is essentially correct — what you want is some form of back-end filtering or paginating of rows of data to limit what’s being sent to the front end.
I don’t know if that package works for you or not or what your setup looks like, but pagination can also be achieved directly from the database via SQL (using TOP/FETCH, for example) or could be implemented in a controller or service by tracking pages of data and "loading a page at a time", both from the server and then into the table. All you would need is a unique key to associate each "set of pages" with a specific request.
But for performance, you want to avoid both large data requests and operations on large sets of data. So the more you limit how much data is being grabbed or processed at any stage of your application using it, the more performant your application will be in principle.
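The thread is Laravel-specific, but the paging idea is stack-agnostic. As a minimal sketch of the TOP/FETCH approach mentioned above, here is a paged query written against SQL Server from C# (the table, columns, and connection string are all illustrative):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public class RecordRepository
{
    private readonly string _connectionString;
    public RecordRepository(string connectionString) { _connectionString = connectionString; }

    // Returns a single page of rows instead of all 3000 at once
    public List<string> GetPage(int pageNumber, int pageSize)
    {
        var names = new List<string>();
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            @"SELECT Name
              FROM Records
              ORDER BY Id
              OFFSET @Skip ROWS FETCH NEXT @Take ROWS ONLY;", conn))
        {
            cmd.Parameters.AddWithValue("@Skip", (pageNumber - 1) * pageSize);
            cmd.Parameters.AddWithValue("@Take", pageSize);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    names.Add(reader.GetString(0));
        }
        return names;
    }
}
```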
I have created news website in MVC.
I have search functionality on it.
When the Index action of the Search controller is called, it fetches records from the database and returns the Search view.
This Search view has an AJAX pager for paging; when the Next or Previous button of the pager is clicked, an AJAX request is made to the Paging action of the Search controller.
Now I don't want to hit my database again. I want to reuse the results that were fetched during the Index action of the Search controller.
For now I have used the Session[""] object.
I want to know what is better to use for state management in this scenario.
The results fetched from the database can number around 1000-5000, each with an ArticleName and ArticleShortDescription (~200 characters).
ViewBag and ViewData only persist for the current request. As such, they are not usable here.
TempData persists until the next request, but that request could be anything, so there's no guarantee it persists long enough for your Ajax call (or subsequent Ajax calls).
Realistically, Session is your only decent option in this case, though it's still not optimal.
You'll be storing a lot of information, which may not even be requested by the client. Even then, cleaning it up after it's no longer needed might prove hard as well.
Your best bet would be to make calls to the database which take paging into account, so you only ever return a subset of the data each request, rather than just pulling out all the data.
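For example, the Paging action could take the page number and only ever query one page; a minimal sketch, assuming an Entity Framework context and the article fields from the question:

```csharp
using System.Linq;
using System.Web.Mvc;

public class SearchController : Controller
{
    private readonly NewsContext _db = new NewsContext(); // hypothetical EF context

    // Called by the AJAX pager; returns one page of results per request
    public ActionResult Paging(string query, int page = 1, int pageSize = 20)
    {
        var results = _db.Articles
            .Where(a => a.ArticleName.Contains(query))
            .OrderBy(a => a.ArticleName)          // Skip/Take need a stable order
            .Skip((page - 1) * pageSize)
            .Take(pageSize)
            .Select(a => new { a.ArticleName, a.ArticleShortDescription })
            .ToList();

        return Json(results, JsonRequestBehavior.AllowGet);
    }
}
```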
You should not use any of those. Sessions are created per user; if you are storing 1000-5000 articles for each user using your search, you are going to have a bad time. TempData is fundamentally the Session object with a nice wrapper, so it's also bad for your use case.
Let's say you decide to use HttpRuntime.Cache instead, so that you are not storing all the results on a per-user basis; then you have to worry about how long to store the objects in the cache.
The logical approach would be to query the database with pagination.
To avoid hitting your database so frequently, you should cache the paged result, with the search term + page number + page size (optional) as your cache key and your result objects as the cache value, ideally with a cache expiration set. (You wouldn't want to serve stale search results until your cache gets evicted, right?)
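A minimal sketch of that caching pattern (GetPageFromDatabase stands in for your paged query, and the five-minute expiration is just an example):

```csharp
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public class Article
{
    public string ArticleName { get; set; }
    public string ArticleShortDescription { get; set; }
}

public class SearchService
{
    public IList<Article> GetCachedPage(string term, int page, int pageSize)
    {
        // Composite key: search term + page number + page size
        string key = string.Format("search:{0}:{1}:{2}", term, page, pageSize);

        var cached = HttpRuntime.Cache[key] as IList<Article>;
        if (cached != null)
            return cached;

        IList<Article> results = GetPageFromDatabase(term, page, pageSize);

        // Absolute expiration keeps stale results from living forever
        HttpRuntime.Cache.Insert(key, results, null,
            DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);

        return results;
    }

    private IList<Article> GetPageFromDatabase(string term, int page, int pageSize)
    {
        // Placeholder for the paginated database query
        throw new NotImplementedException();
    }
}
```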
I avoid using session state as it affects how your application scales in a load balanced environment. You have to ensure a user always has requests served from the same server because that is where session state is stored (unless you put it in the database, but that defeats the point in your situation).
I would try to use application caching. It does mean, if the user clicks Next or Prev and that request is served from another server, you'll have to go to the database again - but personally I would prefer to take that hit.
Have a look at this page, in particular scroll down to the Application Caching section.
Hope this helps.
This is a new project we are doing using Spring MVC 2.5, JSP, Java 7, Ajax, and HTML5. For my part I am going to have 7-10 JSP pages, each containing one form. These pages are sequential, i.e. one has to complete the first page successfully to get to the second, complete the second to get to the third, and so on.
For the data to be persisted, one has to get to the last page (after passing the rest successfully) and confirm that the information is correct. Once the user confirms, I have to persist all the data stored in a bean or the session (all or none). No incomplete data should be persisted. Let's call our database table "employee".
I am new to Spring MVC but got the idea and implemented the page flow using a controller.
My question is: should I have one model class or bean to store all the data, or use the session to store each page's information and keep it in the session until it gets persisted?
Or is it better to have one model class but multiple controllers/beans to control the data flow from each page? Which one do you recommend? Is there a design pattern that already addresses my question? If you have a better idea, please feel free to share it.
There are two approaches, as you have already mentioned. Which one to use depends on the data size and other requirements, for example whether the user can come back later and continue from where they left off. The model and controller need not be just one; they can be designed appropriately.
a) Store data from each screen in the session:
Pros: Unnecessary data is not persisted to the db. Data can be manipulated within the session as the user traverses back and forth between the screens, which is faster.
Cons: Too much information in the session can cause memory issues. It may not be very helpful during session failover. The user cannot log back in and continue from where they left off, if that functionality is required.
b) Persist each screen data as the user moves on:
Pros: The session is lighter, so only the minimum relevant information is stored in it. The user can log back in and continue from where they left off.
Separate in-progress db tables can be used to store this information, and only on final submit is the data inserted/updated into the actual tables; otherwise the db would contain a lot of unsubmitted data. This way the in-progress tables can be cleaned up periodically.
Cons: You need to make db calls to persist and retrieve data for every screen, even though it may never be finally submitted by the user.
You are correct about your use of the HTTP session for storing the state of the forms.
or use the session to store each page's information and keep it in the session until it gets persisted?
because of this requirement:
No incomplete data should be persisted
As for
should I have one model class or bean to store all the data
You can model this as you see fit. Perhaps a model to represent the flow and then an object for each page. Depends on how the data is split across the pages.
Although, as noted in a comment above, you might be able to make use of Spring Web Flow to achieve this. However, that is ultimately just a lightweight framework on top of Spring MVC.
I need to synchronize my relational database (Oracle or MySQL) to CouchDB. Does anyone have any idea how this is possible? If it is possible, how can we notify CouchDB of any changes that happen in the relational DB?
Thanks in advance.
First of all, you need to change the way you think about database modeling. Synchronizing to CouchDB is not just creating documents of all your tables, and pushing them to Couch.
I'm using CouchDB for a site in production, I'll describe what I did, maybe it will help you:
From the start, we have been using MySQL as our primary database. I had entities mapped out, including their relations. In an attempt to speed up the front-end I decided to use CouchDB as a content repository. The benefit was to have fully prepared documents, that contained all the relational data, so data could be fetched with much less overhead.
Because the documents can contain related entities - say a question document that contains all answers - I first decided what top-level entities I wanted to push to Couch. In my example, only questions would be pushed to Couch, and those documents would contain the answers, and possibly some metadata, such as tags, user info, etc. When requesting a question on the frontend, I would only need to fetch one document to have all the information I need at that point.
Now for your second question: how to notify CouchDB of changes. In our case, all the changes in our data are made through a CMS. I have a single point in my code which all edit actions call. That's where I hooked in a function that persists the object being saved to CouchDB. The function determines if this object needs persisting (i.e. is it a top-level entity), then creates a document from this object (think of some sort of toArray function), and fetches all its relations, recursively. The complete document is then pushed to CouchDB.
Now, in your case, the variables may be completely different, but the basic idea is the same: figure out what documents you want saved, and what they look like. Then write a function that composes these documents, and make sure it is called whenever changes are made to your relational database.
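To make that concrete, here is a rough sketch of such a persist hook in C#; the document shape, serializer, entity class, database name, and URL are all assumptions, since CouchDB simply accepts JSON documents over HTTP:

```csharp
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class CouchPublisher
{
    // Compose a denormalized question document and push it to CouchDB
    public async Task PushQuestionAsync(Question q)
    {
        var doc = new
        {
            title = q.Title,
            tags = q.Tags,
            answers = q.Answers.Select(a => new { a.Body, a.Author }).ToList()
        };

        var json = JsonConvert.SerializeObject(doc); // Json.NET, as an example serializer
        using (var http = new HttpClient())
        {
            // PUT creates the document at a known id; updating an existing
            // document additionally requires its current _rev (see below)
            var response = await http.PutAsync(
                "http://localhost:5984/content/question-" + q.Id,
                new StringContent(json, Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();
        }
    }
}
```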
Notifying CouchDB of a change
CouchDB is very simple. Probably the easiest thing is directly updating an existing document. Two ways to implement this come to mind:
The easiest way is a normal CouchDB update: fetch the current document by id; modify it; then send it back to Couch with HTTP PUT or POST (sketched below).
If you have clear application-specific changes (e.g. "the views value was incremented") then writing an _update function seems prudent. Update functions are very simple: they receive an HTTP request and a document; they modify the document; and then CouchDB stores the new version. You write update functions in JavaScript and they run on the server. They are a great way to "compress" common actions into simpler (and fewer) HTTP requests.
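A minimal sketch of the first option, the fetch/modify/PUT cycle (the URL and field name are illustrative; the key detail is round-tripping _rev, which is how CouchDB detects conflicting updates):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public class CouchUpdater
{
    // Normal CouchDB update: GET the document, change it, PUT it back
    public async Task IncrementViewsAsync(string docId)
    {
        using (var http = new HttpClient())
        {
            var url = "http://localhost:5984/content/" + docId; // illustrative URL

            // The fetched document includes its current _rev
            var doc = JObject.Parse(await http.GetStringAsync(url));
            doc["views"] = ((int?)doc["views"] ?? 0) + 1;

            // If _rev is missing or stale, CouchDB answers 409 Conflict
            var response = await http.PutAsync(url,
                new StringContent(doc.ToString(), Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();
        }
    }
}
```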
We have a need coming up in an application where the following is true:
A web page uses AJAX to request data from a server.
The specification of the data (e.g. table name) requested from the server will not be known until run-time.
The configuration of the data view is itself data-driven, and configurable by an administrator.
Data updates and inserts must be supported, not just views.
Prototyping this was very easy - we could pass in the appropriate information (table name, changeset, whatever) to a generic data service that just did what it was told (using JSON as the data storage mechanism). The data service could do basic validation on the parameters to ensure the current user can perform the requested operation (read the data, insert a row, update a row).
The issue, now that we are looking to do this in a secure production manner, is that the idea of passing table names and column names around is frightening. Everything we think of to deal with this devolves into trusting the client in some significant way, or seems to involve substantial bookkeeping on the server. For example:
User requests a viewing page.
The server notes the table name and saves it server side with a request ID
The server notes the column names and saves them, replacing them with "col1, col2", etc., and stores the mapping with the request ID data.
The client page sends the request ID to the service, which looks up the server storage by ID
The service returns col1, col2, etc.
This would work, we think, but feels very messy.
Does anyone have experience with this kind of problem and can offer a solution?
Do you need to give them access to raw tables?
Perhaps you can go meta, and make a meta-table that stores the tabular data in a secure manner (i.e., only the system knows the table/schema, while the user's concept of schema/table is just an abstraction that maps back to the same schema/table)...
Again, more information is needed as to what can be abstracted. Allowing DDL operations by the end-users is asking for trouble, as you rightfully assessed, and I would just abstract that so that "DDL" becomes DML.
However, mapping actual SQL that is written against this data would be much more difficult to abstract, if that is a requirement.
If I had to expose back-end information to end customers, I'd probably hide the actual physical representation using metadata that remaps table names and columns to more user-friendly text. That would also enable me to provide views on the tables that are a bit more advanced than plain table/column names, as well as properly modeling associations between tables and so on...
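For what it's worth, here is one way that metadata layer might be sketched: the client only ever sees an opaque view key and friendly column keys, and the server resolves both against admin-configured metadata before building any SQL (all names here are invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The client never sends real table/column names, only a view key;
// the server resolves the key against admin-configured metadata.
public class ViewDefinition
{
    public string PhysicalTable { get; set; }
    // Maps client-visible column keys ("col1") to physical column names
    public Dictionary<string, string> Columns { get; set; }
}

public class DataService
{
    // Loaded from the admin-configured metadata tables, never from the client
    private readonly Dictionary<string, ViewDefinition> _views;

    public DataService(Dictionary<string, ViewDefinition> views) { _views = views; }

    public string BuildSelect(string viewKey, IEnumerable<string> requestedCols)
    {
        ViewDefinition view;
        if (!_views.TryGetValue(viewKey, out view))
            throw new UnauthorizedAccessException("Unknown view: " + viewKey);

        // Only whitelisted columns survive; anything else is silently rejected
        var cols = requestedCols
            .Select(c =>
            {
                string phys;
                return view.Columns.TryGetValue(c, out phys) ? phys : null;
            })
            .Where(c => c != null)
            .ToList();

        if (cols.Count == 0)
            throw new ArgumentException("No valid columns requested.");

        // Identifiers come from trusted metadata; row values would still be parameterized
        return "SELECT " + string.Join(", ", cols) + " FROM " + view.PhysicalTable;
    }
}
```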