What is a good way to store data by key and value? - asp.net-mvc-3

I store data with
HttpContext.Current.Application.Add(appKey, value);
and read it back with
HttpContext.Current.Application[appKey];
The advantage for me is that it stores a value under a key, but after a short time (about 20 minutes) it stops working and I can no longer find [appKey], because the data is lost when IIS recycles the application.
Is there another way to store my data by key and value?
I do not want SQL Server, files, etc.; I want to store the data on the server, not on the client.
I store some per-user data in it.
Thanks for your help.

Since IIS may recycle and throw away any cache/memory contents at any time, the only way to get data persisted is to store it outside IIS. Some examples are listed below (yes, I included the ones you stated you didn't want, just to make the list a bit more complete; feel free to skip them):
A SQL database (there are quite a few free ones if price is the concern)
A NoSQL database (again, quite a few free ones, and usually simpler to use for key/value; see the sketch after this list)
A file (which you also stated you didn't want)
Some kind of external memory cache, à la AppFabric Cache or memcached
Cookies (somewhat limited in size and not secure in any way by default)
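If you go with an external key/value store, the usage pattern stays very close to what you already have. Below is a minimal sketch assuming a Redis server on localhost and the StackExchange.Redis client; the class and member names are just placeholders, not part of any framework.

using StackExchange.Redis;

public static class KeyValueStore
{
    // The multiplexer is designed to be created once and shared by the whole application.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379");

    public static void Save(string appKey, string value)
    {
        // The data lives in the Redis process, so it survives IIS application pool recycles.
        Redis.GetDatabase().StringSet(appKey, value);
    }

    public static string Load(string appKey)
    {
        // Returns null if the key was never stored.
        return Redis.GetDatabase().StringGet(appKey);
    }
}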

You could create a persistent cookie on the user's machine so that the session doesn't expire, or increase the session timeout to a value that works better for your situation/users; see the sketch after these links:
How to create persistent cookies in asp.net?
Session timeout in ASP.NET
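For reference, both options look roughly like this in a controller action. This is a minimal sketch: the action, the cookie name, the 30-day lifetime and the 120-minute timeout are all illustrative, and in practice the timeout is usually set in web.config under the sessionState element rather than in code.

using System;
using System.Web;
using System.Web.Mvc;

public class DemoController : Controller
{
    // Hypothetical action just to show both options side by side.
    public ActionResult Remember(string appKey, string value)
    {
        // Option 1: a persistent cookie that survives browser restarts until it expires.
        var cookie = new HttpCookie(appKey, value)
        {
            Expires = DateTime.UtcNow.AddDays(30),
            HttpOnly = true
        };
        Response.Cookies.Add(cookie);

        // Option 2: keep the data server side and give the session a longer lifetime (in minutes).
        Session[appKey] = value;
        Session.Timeout = 120;

        return new EmptyResult();
    }
}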

You're talking about persisting data beyond the scope of a session, so you're going to have to use some form of persistent storage (database, file, caching server).
Have you considered using AppFabric? It's actually pretty easy to implement. You could either access it directly from your code using the NuGet packages, or you could just configure it as a session store. (I think) doing the latter would get rid of the session timeout issue.

Do you understand that whatever you decide to store in Application will be available to all users of your application?
Now regarding your actual question: what kind of data do you plan on storing? If it's user-sensitive data, then it probably makes sense to store it in the session. If it's client-specific and doesn't contain any sensitive information, then cookies are probably a reasonable way forward.
If it is indeed application-wide data that must be the same for every user of your application, then you can make configuration changes (for example to the application pool's idle timeout and recycling settings in IIS) so that it doesn't expire after 20 minutes.
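To make the first point concrete: Application state is one dictionary shared by the entire site, so anything written there is visible to every request from every user. A minimal sketch using the same API as in the question (the key name is made up):

// Writes to Application state should be wrapped in Lock/UnLock to avoid racing other requests.
HttpContext.Current.Application.Lock();
try
{
    HttpContext.Current.Application["siteWideValue"] = "visible to every user";
}
finally
{
    HttpContext.Current.Application.UnLock();
}

// Any other user's request reads the exact same entry:
var shared = (string)HttpContext.Current.Application["siteWideValue"];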

Related

Clarification on database caching

Correct me if I'm wrong, but from my understanding, "database caches" are usually implemented with an in-memory database that is local to the web server (the same machine as the web server). Also, these "database caches" store the actual results of queries. I have also read up on the various caching strategies: Cache Aside, Read Through, Write Through, Write Behind, and Write Around.
For some context, in the Write Through strategy the application writes to the cache and the cache then writes to the database, while in the Cache Aside strategy the application itself talks to both the cache and the database.
I believe that the "Application" refers to a backend server with a REST API.
My first question is, in the Write Through strategy (application writes to cache, cache then writes to database), how does this work? From my understanding, the most commonly used database caches are Redis or Memcached - which are just key-value stores. Suppose you have a relational database as the main database, how are these key-value stores going to write back to the relational database? Do these strategies only apply if your main database is also a key-value store?
In a Write Through (or Read Through) strategy, the cache sits in between the application and the database. How does that even work? How do you get the cache to talk to the database server? From my understanding, the web server (the application) is always the one facilitating the communication between the cache and the main database - which is basically a Cache Aside strategy. Unless Redis has some kind of functionality that allows it to talk to another database, I don't quite understand how this works.
Isn't it possible to mix and match caching strategies? From how I see it, Cache Aside and Read Through are caching strategies for application reads (user wants to read data), while Write Through and Write Behind are caching strategies for application writes (user wants to write data). Couldn't you have a strategy that uses both Cache Aside and Write Through? Why do most articles always seem to portray them as independent strategies?
What happens if you have a cluster of web servers? Do they each have their own local in-memory database that acts as a cache?
Could you implement a cache using a normal (not in-memory) database? I suppose this would still be somewhat useful since you do not need to make an additional network hop to the database server (since the cache lives on the same machine as the web server)?
Introduction & clarification
I guess you have one misunderstood point: the cache is NOT necessarily stored on the same server as the web server. Sometimes not even the database is separated onto its own server from the web server. If you think of APIs, like HTTP REST APIs, you can use caching to avoid spending too many resources on database connections and queries. Generally, you want to use as few database connections and queries as possible. Now imagine the following setting:
You have a web server that serves your application and a REST API, which is used by the web server to work with some resources. Those resources come from a database (let's say a relational database) which is also stored on the same server. Now there is one endpoint which serves, for example, a list of posts (like blog posts). Every user can fetch all posts (to keep this example simple). Now we have a case where one can say that this API request could be cached, so that users don't hit the database over and over again just to query the same resources (via the REST API). Here comes caching. Redis is one of many tools which can be used for caching. Since Redis is a simple in-memory key-value store, you can just put all of your posts (remember the REST API) into the cache after the first DB query. All future requests for the posts list would first check whether the posts are already cached or not. If they are, the API will return the cache content for this specific request.
This is one simple example to show what caching can be used for; a sketch follows below.
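A minimal sketch of that posts example in C#, assuming the StackExchange.Redis client and a placeholder LoadPostsFromDatabase() helper; the key name and the five-minute expiry are arbitrary choices, not requirements:

using System;
using StackExchange.Redis;

public class PostsService
{
    // One shared connection for the whole application.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379");

    public string GetPostsJson()
    {
        IDatabase cache = Redis.GetDatabase();

        // 1. Check the cache first.
        RedisValue cached = cache.StringGet("posts:all");
        if (cached.HasValue)
        {
            return cached;                       // cache hit: no database query at all
        }

        // 2. Cache miss: query the relational database once...
        string postsJson = LoadPostsFromDatabase();

        // 3. ...and store the result so later requests are served from memory.
        cache.StringSet("posts:all", postsJson, TimeSpan.FromMinutes(5));
        return postsJson;
    }

    private string LoadPostsFromDatabase()
    {
        // Placeholder for the actual SQL query against the posts table.
        return "[]";
    }
}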
Answers to your questions
My first question is, why would you ever write to a cache?
To reduce the amount of database connections and queries.
how is writing to these key-value stores going to help with updating the relational database?
It does not help you with updating; instead, it helps you spend fewer resources. It also helps in terms of "temporarily backing up" some data, but only as a very small side effect; there are more attractive solutions for that (Redis is also not persistent by default, although it does support persistence).
Do these cache writing strategies only apply if your main database is also a key-value store?
No, it does not matter which database you use, whether it's a NoSQL or a SQL DB. It strongly depends on what you want to cache and how the database and its tables are set up. Do you have frequent changes in your resources? Do resources get updated manually or only by user-initiated actions? Those are the questions that lead you to the right caching implementation; the sketch below shows one way a key-value cache can sit in front of a relational database.
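To illustrate (a sketch, not a library API): with a relational main database and Redis as the cache, the "write through" behaviour is usually implemented as a small wrapper layer that the application calls and that updates both stores together, so the cache itself never needs to talk SQL. StackExchange.Redis and System.Data.SqlClient are assumed, and the class, table and key names below are made up.

using System.Data.SqlClient;
using StackExchange.Redis;

public class PostStore
{
    private readonly IDatabase _cache;
    private readonly string _connectionString;

    public PostStore(IDatabase cache, string connectionString)
    {
        _cache = cache;
        _connectionString = connectionString;
    }

    // The application only calls SavePost(); the wrapper keeps the SQL database
    // (the system of record) and the key-value cache in sync on every write.
    public void SavePost(int id, string body)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("UPDATE Posts SET Body = @body WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@body", body);
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            cmd.ExecuteNonQuery();
        }

        // Refresh the cached copy immediately, keyed by post id.
        _cache.StringSet("post:" + id, body);
    }
}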
Isn't it possible to mix and match caching strategies?
I am not an expert on caching strategies, but let me try:
I guess it is possible, but it highly depends on what you are doing in your DB and what kind of application you have. Once you figure out what kind of application you are building, you will know which strategy you have to use. I would also not recommend mixing those strategies, because each strategy is coupled to a type of application; in other words, it will probably not work out well.
What happens if you have a cluster of web servers? Do they each have their own local in-memory database that acts as a cache?
I guess both are possible. Usually you have one database, maybe clustered or synchronized with copies, to which your web servers (e.g. REST APIs) make their requests. Each of your API servers could then have its own cache, so it doesn't have to query the database at all (in cloud-based applications your database may also be on another, separate server, i.e. another "hop" in terms of networking). Or (which I can also imagine) you have another middleware layer between your APIs (clustered) and your DB (maybe also clustered), but I guess nobody would do that because of the extra network traffic; it would result in a higher response time, which you usually want to prevent.
Could you implement a cache using a normal (not in-memory) database?
Yes, you could, but it would be much slower. A machine can access in-memory data faster than it can open another (local) connection to a database and query your cached entries, and the database also has to write the entries to files on disk to persist the data.
Conclusion
All in all, it is about keeping response times low and avoiding unnecessary network traffic. I hope I could help you out a little bit.

Is there a point in using Redis for small sized Gorilla sessions

It seems to me that as long as you only want to store simple values like a timestamp for the last visit and maybe a user ID in the session, there's really no point at all in using Redis for session persistence with Gorilla sessions, since they seem to store everything in cookies on the client side anyway.
Am I correct or not in this assumption?
I understand that there's a size limit, and also that if I were to store sessions on file (the other available storage option with Gorilla sessions), it would be impossible to scale beyond that machine. But again, is this whole "session store" a non-issue with Gorilla sessions' cookie store?
Btw, I've seen this question here and NO, it doesn't address this issue, so it's not a duplicate: What is the advantage of using Gorilla sessions custom backend?
Using Redis (or any other server-side store) can help avoid a whole class of problems, namely:
Large cookie sizes adding per-request overhead - even an additional 4K per request can be a lot on mobile connections.
Severely reducing the risk of cookie data being manipulated as it is stored server-side.
Ability to store more than 4K in the session (e.g. form data from a multi-step form)
... and in Redis' case, the ability to easily expire the server-side sessions (something that's more error-prone with MySQL or a filesystem store).
A cookie is still required as it must store an identifier so the user can be associated with their server-side session. This isn't particular to gorilla/sessions whatsoever and is how nearly all other server-side session implementations behave.
If you think your use case is simple then sure, stick with cookie-based sessions. gorilla/sessions makes it easy enough to change out the backing store at a later date.

Store 600 KB of session data per registered user in Heroku

For a given application, we would need to store about 600 KB of data in the web session per registered user who connects to our website. We would have about 1,000 registered users in parallel, hence we need to store about 600 MB of session data.
The reason we need so much data in the session is to avoid frequently querying a table with about 1 billion rows in the database.
I understood that Heroku stores session information in the database. This is fine, as it means the session data is available across dynos (no session affinity).
Is there a more efficient way of storing information across dynos? Reading the docs, I found MemCachier.
My questions are the following:
Do you think storing that amount of session data in the database would be performant enough?
Would you suggest caching systems other than MemCachier to store session information available across different dynos?
Thanks a lot for your help!
Olivier
Heroku does not store session information at all; how session information is stored depends entirely on your application and your application's framework, and that works the same way regardless of whether you are deployed on Heroku or any other system.
As for what kind of storage is sensible: it sounds like cookie storage is right out, due to the volume of data. Database storage was the de facto default for web applications for a long time and there's nothing wrong with it. Memcached would be faster, and how much faster exactly depends on your configuration (are you using connection pooling? does each page view hit the database for something else anyway? what is your caching system like?). But as long as you're sure this strategy of storing so much info in session data is sound, the difference between database and memcached storage will not be great.

Caching Dynamic data that isn't really dynamic in an IIS7 environment

Okay, so I have an old ASP Classic website. I've determined I can reduce a huge number of DB calls by caching the data daily. Our site data is read only, and changes very slowly. I think based on our site usage, I would be able to cache pages by query string for every visit each day, without a hit to our server.
My first thought was to use Output Caching, but the problem I discovered right away was that it wasn't until the third page request was generated that I gained any performance. I verified this using SQL profiler, but I'm not sure why.
My second thought was to add this ObjPageCache include file from https://web.archive.org/web/20211020131054/https://www.4guysfromrolla.com/webtech/032002-1.shtml After some research I discovered that this could cause more issues than it may solve http://support.microsoft.com/kb/316451
I'm hoping someone on here will tell me that, since 2002, the issue with sending ServerXMLHTTP or WinHTTP requests to the same server has been resolved by Microsoft.
Depending on how your data is maintained you could choose from a number of ways to cache it.
If your data is changed and saved in one single place, you could choose to generate an HTML file which you save to the server's disk and refer to in your links. This requires write access for the process running your site, though (e.g. NETWORK SERVICE). It will produce fast pages, as the server serves them without any scripting engine getting involved.
Another option is reading the data into a DomDocument which you store in the Application object and refer to on the page that needs it (hence saving the round trip to the database). You could keep two timestamps together with the cached data (one for the caching time and one for the time the data was last changed in the database). The timestamps allow a fast staleness check on the cached data: if cached timestamp <> database timestamp, refresh the data; otherwise use the cached data. One thing to note about this approach is that Application does not accept objects other than multithreaded objects, so you will have to use MSXML2.FreeThreadedDomDocument.6.0.
Personally I prefer the last one, as it allows for more dynamic usage and I don't have to worry about write access permissions for the process running my site (which would probably pose security risks anyway).

What are the benefits of a stateless web application?

It seems some web architects aim to have a stateless web application. Does that mean basically not storing user sessions? Or is there more to it?
If it is just the user session storing, what is the benefit of not doing that?
Reduces memory usage. Imagine if Google stored session information about every one of their users.
Easier to support server farms. If you need session data and you have more than 1 server, you need a way to sync that session data across servers. Normally this is done using a database.
Reduce session expiration problems. Sometimes expiring sessions cause issues that are hard to find and test for. Sessionless applications don't suffer from these.
URL linkability. Some sites store the ID of what the user is looking at in the session. This makes it impossible for users to simply copy and paste the URL or send it to friends.
NOTE: session data is really cached data. That is what it should be used for. If you have an expensive query whose result is going to be reused, then save it into the session. Just remember that you cannot assume it will be there when you try to get it later; always check whether it exists before retrieving it.
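In code, that "check before you use it" advice looks roughly like this, here phrased in ASP.NET MVC terms as a minimal sketch; ReportData and ExpensiveQuery() are hypothetical placeholders for whatever the costly call actually is.

using System.Web.Mvc;

public class ReportController : Controller
{
    public ActionResult Monthly()
    {
        // Treat the session like a cache: never assume the entry is still there.
        var report = Session["monthlyReport"] as ReportData;
        if (report == null)
        {
            report = ExpensiveQuery();             // recompute on a miss (e.g. after the session expired)
            Session["monthlyReport"] = report;     // and put it back for later requests
        }
        return View(report);
    }

    // Placeholder for whatever the real expensive call is.
    private ReportData ExpensiveQuery()
    {
        return new ReportData();
    }
}

public class ReportData { }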
From a developer's perspective, statelessness can help make an application more maintainable and easier to work with. If I know a website I'm working on is stateless, I need not worry about things being correctly initialized in the session before loading a particular page.
From a user's perspective, statelessness allows resources to be linkable. If a page is stateless, then when I link a friend to that page, I know that they'll see what I'm seeing.
From the scaling and performance perspective, see tster's answer.
