Reduce Menu Generation Time - asp.net-mvc-3

Warning
I wanted to add a disclaimer to warn against this database structure. I would highly advise a more streamlined approach.
My personal preference is to use a hierarchical one- or two-table structure to store navigation menus. For guidance on building a menu from a flat structure, see https://stackoverflow.com/a/444303/1778606
Answer
In my case, the answer was to cache the menu on the client. Read on if you want.
I have a three tier menu that is generated using several queries. In the application I have inherited, there are many user groups. Each user has access to different pages/parts of the web application via user groups. A single user may be in many user groups.
In their menu, each user should only be able to see and access pages their groups have access to, i.e. only certain Menu Items, Menu Submenu Sections, and Menu Submenu Section Items should be visible for each user. The pivot tables control visibility and access.
To generate the menu, I need to make several queries. My sample proposed menu design is shown below.
This menu looks drastically different depending on which tier1, tier2, or tier3 items a user has access to.
Would it be appropriate to cache the entire menu in session for each user instead of generating it on each page load? Would this potentially be too much data to cache? My main concern is that it would effectively cache all three tables (Tier1, Tier2, Tier3) in session for each active user. I need to access the data anyway to check whether the user has permissions, so some of that cost is unavoidable.
Are there any architectural designs that would help reduce the number of queries required to generate the menu? (assuming the menu is generated on each page load, which is what I assumed when I started writing this question)
Is this kind of menu setup normal in applications? We have around 20 groups, and users are in anywhere from 2 to all 20. Any advice or comments are welcome.
(I want to minimize page load and processing time.)
Context:
I'm using C#, ASP.NET MVC 3, and Oracle. Max user base estimated at ~20,000. Max active user base close to 1,000. Likely active user base is 100.

When a user logs in, load the menus permitted for that user into the cache. Then read the menus/submenus from the cache for that user, and clear the cache when the user logs out. In this scenario, each time a new user logs in, the cache is loaded only for that user (see the sketch below).
Thanks
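A rough sketch of that login-time approach in ASP.NET MVC 3 might look like the following. MenuModel, MenuRepository and LoginModel are hypothetical placeholders for whatever types build and hold the three-tier menu; treat this as an outline rather than a drop-in implementation.

```csharp
using System.Web.Mvc;

// Sketch only: MenuModel, MenuRepository and LoginModel are placeholder types.
public class AccountController : Controller
{
    [HttpPost]
    public ActionResult Login(LoginModel model)
    {
        // ...authenticate the user first...

        // Run the expensive menu queries once, at login, and keep the
        // result in session for this user only.
        Session["UserMenu"] = new MenuRepository().BuildMenuFor(model.UserName);
        return RedirectToAction("Index", "Home");
    }

    public ActionResult Logout()
    {
        Session.Abandon();   // the cached menu goes away with the session
        return RedirectToAction("Login");
    }
}

public class MenuController : Controller
{
    // Rendered from the layout via Html.Action("Render", "Menu").
    [ChildActionOnly]
    public ActionResult Render()
    {
        var menu = Session["UserMenu"] as MenuModel;
        if (menu == null)    // e.g. the session expired mid-visit
        {
            menu = new MenuRepository().BuildMenuFor(User.Identity.Name);
            Session["UserMenu"] = menu;
        }
        return PartialView("_Menu", menu);
    }
}
```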

Instead of using session to cache this information, you could add something like AppFabric or memcached.
This has the advantage of not having to handle session in a load balanced application.
Space should not be much of an issue.
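For illustration only, one way to keep the menu code agnostic about where that cache lives is to put it behind a small interface; the names below (IMenuCache, MenuModel) are made up. The in-process implementation uses System.Runtime.Caching.MemoryCache, and a distributed implementation would wrap the AppFabric or memcached client instead.

```csharp
using System;
using System.Runtime.Caching;

// Hypothetical abstraction so the backing store can be swapped later
// (session, in-proc cache, AppFabric, memcached) without touching callers.
public interface IMenuCache
{
    MenuModel Get(string userName);
    void Set(string userName, MenuModel menu, TimeSpan timeToLive);
    void Remove(string userName);
}

// Simplest in-process implementation.
public class InProcMenuCache : IMenuCache
{
    private static readonly MemoryCache cache = MemoryCache.Default;

    public MenuModel Get(string userName)
    {
        return cache.Get("menu:" + userName) as MenuModel;
    }

    public void Set(string userName, MenuModel menu, TimeSpan timeToLive)
    {
        cache.Set("menu:" + userName, menu, DateTimeOffset.Now.Add(timeToLive));
    }

    public void Remove(string userName)
    {
        cache.Remove("menu:" + userName);
    }
}
```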

Load the entire menu once for each user and cache it on the client.
It won't need to be reloaded unless they invalidate the cache (via a reload).
A query will still be needed on each page to validate the user has permissions.
(Answer attempt #2 for my own question)

Here's an option...
Keep the Tier1, Tier2, Tier3 data tables in the application data cache.
Store the primary keys of the Tier1, Tier2, Tier3 tables in arrays in session for each user, i.e. a Tier1Key, Tier2Key, Tier3Key int[].
This should reduce the memory footprint somewhat. There is still going to be some generation cost on each load. You may be able to address this by using a partial view for the submenu and caching the partial view on the client. (This would require a delayed partial menu load via ajax, triggered on MenuItem click; see the sketch below.)
You may still need a query to validate access on each page.
For client caching, see the question "OutputCache Location=Client does not appear to work".
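A hedged sketch of that ajax-loaded submenu with client-side output caching follows; the action, view and repository names are made up. (Note that OutputCacheLocation.Client applies to a normal request such as this ajax call; child actions rendered with Html.Action only support server-side output caching, which may be what the linked question runs into.)

```csharp
using System.Web.Mvc;
using System.Web.UI;

// Sketch only: MenuRepository and the _Submenu view are placeholders.
public class MenuController : Controller
{
    // Called from javascript on MenuItem click, e.g.
    //   $("#submenu").load("/Menu/Submenu?menuItemId=3");
    // The response is cached by the user's browser for 10 minutes, so
    // repeat clicks on the same item don't hit the server at all.
    [OutputCache(Duration = 600,
                 Location = OutputCacheLocation.Client,
                 VaryByParam = "menuItemId")]
    public ActionResult Submenu(int menuItemId)
    {
        // Tier2/Tier3 rows for this menu item, filtered by the
        // current user's group permissions.
        var model = new MenuRepository().GetSubmenu(menuItemId, User.Identity.Name);
        return PartialView("_Submenu", model);
    }
}
```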
(Answer attempt for my own question)

Related

Dynamics AX Preload Behaviour

Questions
Does the user option preload refer to caching on the client or on the server?
Are there any ways to make this occur asynchronously so that users don't take a large performance hit when first requesting data from a table?
More Info
In Dynamics Ax 2012, under File > User Options > Preload a user can select which tables are preloaded the first time they're accessed.
I've not found anything to say whether this behaviour relates to caching on the client or the AOS.
The fact it's a user setting implies that it's the client.
But it could be an AOS setting where users with this option take the initial hit of preloading the entire table, whilst those without would benefit from any caching caused by other users, but wouldn't trigger the load themselves.
If it's the latter, we could improve performance by removing this option from all (human) users and leaving it enabled only on our batch user account, with scheduled jobs on each AOS requesting a record from each table, thus triggering the preload without any user being negatively impacted.
Ref: http://dynamicbusinesssolutions.ru/axshared.en/html/9cd36702-2fa7-470c-a627-08
If a table is large or frequently changed it is not a candidate for entire table cache. This applies to ordinary users and batch users alike.
The EntireTable cache is located on the server, but the load is initiated by the user; the first user doing the select takes a performance hit.
To disable preloading of a table, you can disable it using the Admin user, which will apply to all users, or you can let all users disable it by themselves.
Personally I never change the user setup. If a table is large I change the table CacheLookup property as a customization.
See Set-based Caching:
When you set a table's CacheLookup property to EntireTable, all the records in the table are placed in the cache after the first select. This type of caching follows the rules of single record caching. This means that the SELECT statement WHERE clause must include equality tests on all fields of the unique index that is defined in the table's PrimaryIndex property.
The EntireTable cache is located on the server and is shared by all connections to the Application Object Server (AOS). If a select is made on the client tier to a table that is EntireTable cached, it first looks in its own cache and then searches the server-side EntireTable cache.
An EntireTable cache is created for each table for a given company. If you have two selects on the same table for different companies, the entire table is cached twice.
Note: Avoid using EntireTable caches for large tables, because once the cache size reaches 128 KB the cache is moved from memory to disk. A disk search is much slower than an in-memory search.

Does the page table change with a context switch?

Suppose the page table changed with each process; then we wouldn't require a TLB or memory for the page table, and could implement it with some reasonable number of registers. But the Galvin book says (not precisely, but my interpretation) that the page table has an entry for every page, and that there is a separate table for each process, so a pointer is used to refer to a particular table.
Is my understanding of the book correct?
If so, what is the need to change the page table on each context switch?
If we are arguing that we could use one page table for the whole system, the simple answer is that a page table per process provides more security by enforcing memory isolation among the processes running on the same system. Because each process has its own page table, it cannot interfere with another process's memory. Page table management cannot be achieved through registers alone, due to the size and number of page tables: even if you added extra registers to hold the active page table, you would still need memory to store the inactive page tables, which is an equally expensive approach (this addresses your first line). I suggest you spend some time understanding present hardware facilities and OS functionality, and then try to come up with innovations in the design.
Your OP title asks "does the page table change with a context switch": yes, the page table changes on a context switch.
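To make that concrete, here is a toy illustration (written in C# only for readability; all the names are made up). Each process owns its own page table, and the context switch merely changes which table is consulted for translations, analogous to reloading the page-table base register (e.g. CR3 on x86):

```csharp
using System;
using System.Collections.Generic;

class PageTableDemo
{
    // A page table maps virtual page numbers to physical frame numbers.
    class PageTable : Dictionary<int, int> { }

    class Process
    {
        public string Name;
        public PageTable Table = new PageTable();
    }

    // Stand-in for the page-table base register (e.g. CR3 on x86): it always
    // points at the page table of the currently running process.
    static PageTable activePageTable;

    static void ContextSwitch(Process next)
    {
        // The page tables themselves stay in memory; the switch only changes
        // which one the MMU walks for address translation.
        activePageTable = next.Table;
        Console.WriteLine("Switched to process " + next.Name);
    }

    static int Translate(int virtualPage)
    {
        // The same virtual page maps to a different physical frame per
        // process, which is exactly how memory isolation is enforced.
        return activePageTable[virtualPage];
    }

    static void Main()
    {
        var a = new Process { Name = "A" };
        var b = new Process { Name = "B" };
        a.Table[0] = 42;   // virtual page 0 -> physical frame 42 for A
        b.Table[0] = 99;   // virtual page 0 -> physical frame 99 for B

        ContextSwitch(a);
        Console.WriteLine(Translate(0));   // prints 42

        ContextSwitch(b);
        Console.WriteLine(Translate(0));   // prints 99
    }
}
```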

Windows Forms: Best way to store table lookups

I'm developing a new C# .NET 4.5 Windows Forms application and I want to code it "right". I'm developing a couple of User Controls, which are shared across several tabs. The controls contain some common drop-down boxes that are populated with the same SQL Server table data (one or two columns). I want to read the DB once and have the lookup data available during the entire user experience. The app will be used by many users. What's the best way to store this data in my new code? A cache? A static list? Example code appreciated. Help! Thanks!
A global DataTable (DataSet) would do. Or, if you want control over the contents of the list, a SortedDictionary containing your own custom class for each row would suffice.
The custom class is a tidy way of holding a cache (for the data you want from each row), as you can override the ToString method and populate the user controls easily; a rough sketch is shown below.
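A minimal sketch of that idea, assuming plain ADO.NET is acceptable; the connection string, table and column names are made up:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// One row of lookup data; overriding ToString lets the item bind
// straight into a ComboBox or ListBox.
public class LookupItem
{
    public int Id { get; set; }
    public string Description { get; set; }

    public override string ToString()
    {
        return Description;
    }
}

public static class LookupCache
{
    // Lazy<T> means the database is hit only once, on first access,
    // and the initialization is thread-safe by default.
    private static readonly Lazy<List<LookupItem>> items =
        new Lazy<List<LookupItem>>(LoadFromDatabase);

    public static IList<LookupItem> Items
    {
        get { return items.Value; }
    }

    private static List<LookupItem> LoadFromDatabase()
    {
        var result = new List<LookupItem>();
        using (var conn = new SqlConnection("<your connection string>"))
        using (var cmd = new SqlCommand("SELECT Id, Description FROM LookupTable", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(new LookupItem
                    {
                        Id = reader.GetInt32(0),
                        Description = reader.GetString(1)
                    });
                }
            }
        }
        return result;
    }
}

// Usage in a user control (needs System.Linq for ToArray):
//   comboBox1.Items.AddRange(LookupCache.Items.ToArray());
```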
Sharing this cache amongst many users is not easy, and could prove more trouble than it's worth. Each user running a separate copy of the program would have their own copy of the cache (in the two methods above). (The user controls will also contain subsets of this cache.) And each program instance would need to load the user controls anyway, so perhaps this sharing-across-multiple-instances direction is moot.

How to structure models, beans, controllers, and views for different JSP pages that all persist to one database table?

This is a new project we are doing using Spring MVC 2.5, JSP, Java 7, Ajax, and HTML5. On my part, I am going to have 7-10 JSP pages which contain one form each. These pages are sequential, i.e. one has to pass the first page successfully to go to the second, pass the second to go to the third, and so on.
For the data to be persisted, one has to get to the last page (after passing the rest successfully) and confirm the information is correct. Once the user confirms, I have to persist all the data stored in a bean or session (all or none). No incomplete data should be persisted. Let's call our database table "employee".
I am new to Spring MVC but got the idea and implemented the page flow using a controller.
My question is: should I have one model class or bean to store all the data, or use the session to store each page's information and keep it there until it gets persisted?
Or is it better to have one model class, but multiple controllers/beans to control the data flow from each page? Which do you recommend? Is there an existing design pattern that answers my question? If you have a better idea, please feel free to share it.
There are two approaches, as you have already mentioned. Which one to use depends on the data size and other requirements, for example whether the user can come back later and continue from where they left off. There need not be just one model and one controller; they can be designed appropriately.
a) Store data from each screen in the session:
Pros: Unnecessary data is not persisted to the db. Data can be manipulated within the session when the user traverses back and forth between the screens, which is faster.
Cons of this approach: Too much information in the session can cause memory issues. It may not be very helpful during session failover. The user cannot log back in and continue from where they left off, if this functionality is required.
b) Persist each screen data as the user moves on:
Pros: The session is lighter, so only minimum relevant information is stored in it. The user can log back in and continue from where they left off.
Separate in-progress db tables can be used to store this information, with the data inserted/updated into the actual tables only on final submit; otherwise the db would contain a lot of unsubmitted data. This way the in-progress tables can be cleaned up periodically.
Cons: Need to make db calls to persist and retrieve for every screen, even though it may not be submitted by the user.
You are correct about your use of the HTTP session for storing the state of the forms.
or use session to store each pages information and keep it in the
session until it gets persisted?
because of this requirement:
No incomplete data should be persisted
As for
should I need to have one model class or bean to store all the data
You can model this as you see fit. Perhaps a model to represent the flow and then an object for each page. Depends on how the data is split across the pages.
Although, as noted in a comment above, you might be able to make use of WebFlow to achieve this. However, that is ultimately just a lightweight framework on top of Spring MVC.

Caching expensive SQL query in memory or in the database?

Let me start by describing the scenario. I have an MVC 3 application with SQL Server 2008. In one of the pages we display a list of Products that is returned from the database and is UNIQUE per logged in user.
The SQL query (actually a VIEW) used to return the list of products is VERY expensive.
It is based on very complex business requirements which cannot be changed at this stage.
The database schema cannot be changed or redesigned as it is used by other applications.
There are 50k products and 5k users (each user may have access to 1 up to 50k products).
In order to display the Products page for the logged in user we use:
SELECT TOP X * FROM [VIEW] WHERE UserID = #UserId -- where 'X' is the size of the page
The query above returns a maximum of 50 rows (maximum page size). The WHERE clause restricts the number of rows to a maximum of 50k (products that the user has access to).
The page is taking about 5 to 7 seconds to load and that is exactly the time the SQL query above takes to run in SQL.
Problem:
The user goes to the Products page and very likely uses paging, re-sorts the results, goes to the details page, etc and then goes back to the list. And every time it takes 5-7s to display the results.
That is unacceptable, but at the same time the business team has accepted that the first time the Products page is loaded it can take 5-7s. Therefore, we thought about CACHING.
We now have two options to choose from; the most "obvious" one, at least to me, is using .NET caching (in memory / in proc). (Please note that a distributed cache is not allowed at the moment due to technical constraints with our provider / hosting partner.)
But I'm not very comfortable with this. We could end up with lots of products in memory (when there are 50 or 100 users logged in simultaneously) which could cause other issues on the server, like .Net constantly removing cache items to free up space while our code inserts new items.
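For concreteness, a hedged sketch of what that in-proc option might look like with System.Runtime.Caching.MemoryCache follows; Product and ProductRepository are placeholders for your own types, and the 20-minute expiration is an arbitrary example.

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

// Sketch only: Product and ProductRepository stand in for your own types.
public class CachedProductService
{
    private static readonly MemoryCache cache = MemoryCache.Default;

    public IList<Product> GetProductsForUser(int userId)
    {
        string key = "products:" + userId;

        var products = cache.Get(key) as IList<Product>;
        if (products == null)
        {
            // First hit for this user: pay the expensive view query once...
            products = new ProductRepository().GetProductsFromView(userId);

            // ...then keep the result. Absolute expiration bounds staleness,
            // and MemoryCache will also evict entries under memory pressure.
            cache.Set(key, products, DateTimeOffset.Now.AddMinutes(20));
        }
        return products;
    }

    // Call when products are added or a user's permissions change.
    public void Invalidate(int userId)
    {
        cache.Remove("products:" + userId);
    }
}
```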
The SECOND option:
The main problem here is that it is very EXPENSIVE to generate the User x Product x Access view, so we thought we could create a flat table (or in other words a CACHE of all products x users in the database). This table would be exactly the result of the view.
However the results can change at any time if new products are added, user permissions are changed, etc. So we would need to constantly refresh the table (which could take a few seconds) and this started to get a little bit complex.
Similarly, we thought we could implement some sort of Cache Provider and, upon request from a user, we would run the original SQL query and select the products from the view (5-7s, acceptable only once) and save that result in a flat table called ProductUserAccessCache in SQL. On the next request, we would get the values from this cached table (as we could easily identify the results cached for that particular user) with a fast query without calculations in SQL.
Any time a product was added or a permission changed, we would truncate the cached-table and upon a new request the table would be repopulated for the requested user.
It doesn't seem too complex to me, but what we are doing here basically is creating a NEW cache "provider".
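A hedged sketch of that flow is below. The ProductUserAccessCache table name and [VIEW] come from the description above; the column names are made up, so adjust them to the real schema.

```csharp
using System.Data.SqlClient;

// Sketch of the "cache table" provider described above.
public class ProductUserAccessCacheProvider
{
    private readonly string connectionString;

    public ProductUserAccessCacheProvider(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Make sure the cache table holds this user's rows before the page queries it.
    public void EnsureCachedFor(int userId)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            var check = new SqlCommand(
                "SELECT COUNT(*) FROM ProductUserAccessCache WHERE UserID = @UserId", conn);
            check.Parameters.AddWithValue("@UserId", userId);

            if ((int)check.ExecuteScalar() == 0)
            {
                // Cache miss: pay the expensive view once and materialise the result.
                var fill = new SqlCommand(
                    "INSERT INTO ProductUserAccessCache (UserID, ProductID) " +
                    "SELECT UserID, ProductID FROM [VIEW] WHERE UserID = @UserId", conn);
                fill.Parameters.AddWithValue("@UserId", userId);
                fill.ExecuteNonQuery();   // the 5-7s hit, taken only on a miss
            }
        }
    }

    // When a product is added or a permission changes, wipe the cache so it
    // repopulates lazily, per user, on the next request.
    public void InvalidateAll()
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            new SqlCommand("TRUNCATE TABLE ProductUserAccessCache", conn).ExecuteNonQuery();
        }
    }
}
```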
Does any one have any experience with this kind of issue?
Would it be better to use .Net Caching (in proc)?
Any suggestions?
We were facing a similar issue some time ago, and we were thinking of using EF caching in order to avoid the delay in retrieving the information. Our problem was a 1-2 second delay. Here is some info that might help on how to cache a table by extending EF. One of the drawbacks of caching is how fresh the information needs to be, so you set your cache expiration accordingly. Depending on that expiration, users might have to wait longer than they would like for fresh info, but if your users can accept that they might be seeing outdated info in order to avoid the delay, then the trade-off would be worth it.
In our scenario, we decided it was better to have fresh info than quick info, but as I said before, our waiting period wasn't that long.
Hope it helps

Resources