Azure Web Sites multiple instances using Http Runtime Cache

Our plan is to migrate our Azure Web Roles to Azure Web Sites. So far the Web Roles have been using Azure Caching, which is shared across instances.
Our first thought was to switch to Redis Cache. But after further discussion we started considering just the HTTP runtime cache, as our data isn't big (we do not store any images or other large data); it's all strings and numbers.
Suppose we go for the HTTP runtime cache, using it on five instances of one Azure Web Site. Could the following scenario happen?
A request comes to the first instance, which serves content from freshly cached data.
The user then clicks on an item, but that request goes to a second instance whose cache is older at that moment and does not contain the item.
Would this result in an error? Is this situation likely? Can we be sure that a user's requests will always go to the same instance?

By default, Azure Web Sites implements sticky sessions (via an ARR affinity cookie): when a user's request gets routed to instance A, all future requests from that user will also go to instance A for as long as instance A stays up.
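That said, sticky sessions are not an absolute guarantee: an instance can be restarted or scaled away, and the affinity cookie can be disabled. A safer pattern is to treat each instance's runtime cache as a best-effort local copy and fall back to the data store on a miss. Here is a minimal C# sketch of that pattern; GetItemFromDatabase, the Item type, and the five-minute expiration are illustrative assumptions, not part of the question:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class LocalItemCache
    {
        // Each instance keeps its own copy of the data. A request that
        // lands on a "cold" instance reloads from the source of truth
        // instead of failing.
        public static Item GetItem(int id)
        {
            string key = "item:" + id;
            var cached = HttpRuntime.Cache[key] as Item;
            if (cached != null)
                return cached;

            // Cache miss on this instance (the item may well be cached
            // on another one): reload rather than returning an error.
            Item item = GetItemFromDatabase(id); // hypothetical data access
            if (item != null)
            {
                HttpRuntime.Cache.Insert(key, item, null,
                    DateTime.UtcNow.AddMinutes(5),   // absolute expiration
                    Cache.NoSlidingExpiration);
            }
            return item;
        }

        static Item GetItemFromDatabase(int id)
        {
            // Placeholder: query SQL / table storage here.
            return null;
        }
    }

    public class Item { public int Id; public string Name; }

Coded this way, the worst case in the scenario above is a slightly slower request (one extra database round trip), not an error.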

Related

Shared http session lifetime

We have several web applications using the same identity provider (which we also manage); most of them, including the identity provider, are using .NET Core.
The requirement is that if a user is logged in to two or more applications at the same time (in one browser) and is actively using one app, the session lifetime is automatically extended in all of the applications.
So while the user is using at least one application, they don't get logged out of any of them. There is another requirement: auto-logout after a certain period of inactivity (that part is easy, of course).
I thought of using a Redis server to manage this shared session lifetime, using a SessionId that each app would receive from the identity server via claims. Each time the user performs some action, the backend contacts Redis, checks whether the user's session is still active, and extends the session lifetime if it is, or logs the user out if it's not.
The problem is, the applications are not allowed to access this Redis server directly (for security reasons). So I thought of adding a separate web service that these apps contact over a standard HTTP endpoint: basically a middleman between Redis and the web apps.
Is there a better way to do it? I'm not sure how common a requirement this is.
Redis is usually used as a distributed cache, which means it lives on another server (or server farm) rather than inside your application. That is exactly why your application runs into restrictions: it is not allowed to access an external server directly.
If your application is under development, or still in its growth phase, my recommendation is to use the InMemory cache or to consider the response caching middleware.
Also, these are very small amounts of data; if that is all you are going to store in the cache, then for a start the InMemory cache is definitely worth considering.
Of course, I understand the appeal of Redis, namely that it:
Is coherent (consistent) across requests to multiple servers.
Survives server restarts and app deployments, because the cache lives in a different location (e.g. Azure).
Doesn't use local memory.
Is scalable.
Etc.
For larger applications, consider replacing the InMemory cache with a distributed cache. The two are used in much the same way in code, but the InMemory cache lives in the process of a single server and therefore requires sticky sessions, while a distributed cache does not.
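For completeness, here is a minimal sketch of the middleman service the question proposes, written as an ASP.NET Core minimal API in front of StackExchange.Redis. The route shape, the "session:" key prefix, the 20-minute sliding window, and the connection string are all assumptions for illustration:

    using StackExchange.Redis;

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddSingleton<IConnectionMultiplexer>(
        ConnectionMultiplexer.Connect("redis-host:6379"));
    var app = builder.Build();

    // "Touch" a session: if it is still alive, slide its expiration
    // forward; if it has already expired, report that so the caller
    // can log the user out.
    app.MapPost("/sessions/{sessionId}/touch",
        async (string sessionId, IConnectionMultiplexer mux) =>
    {
        var db = mux.GetDatabase();

        // KeyExpireAsync returns false when the key no longer exists,
        // i.e. the shared session has already timed out everywhere.
        bool alive = await db.KeyExpireAsync(
            "session:" + sessionId, TimeSpan.FromMinutes(20));
        return alive ? Results.Ok() : Results.NotFound();
    });

    app.Run();

Each application calls this endpoint on user activity; a 404 response means the shared session has expired and the app should log the user out.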

Change session mode to migrate to Windows Azure

I am hosting an on-premises website which I want to migrate to a Windows Azure virtual machine. I will be using multiple instances of Azure VMs. Currently I am using the InProc session management technique. Do I really need to change this session mode to migrate the website to the cloud? Why?
If you want to have more than one web role instance (for load balancing / scalability / redundancy purposes), then yes, you do need to change it. Just a reminder: you need a minimum of two instances in order to be eligible for the 99.9% SLA.
InProc means that the session information is stored in that web role's process. A second web role instance has no knowledge of the data held in-process by the first web role.
So if your first web request goes to WEBROLE_1, it has your session information.
If your second request goes to WEBROLE_2, it won't know that you already have some session data stored in the other role.
There are a number of other options for storing your session info, including Table Storage, SQL Azure, or the AppFabric Cache.
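Moving away from InProc is mostly a web.config change. As a minimal sketch, here is what the SQL Server option looks like; the connection string is a placeholder, and note that against SQL Azure the classic ASPState setup scripts may not work as-is, so you may need an updated session-state provider package:

    <system.web>
      <!-- Session state lives in SQL Server instead of the web role's
           process, so any instance can serve any request. -->
      <sessionState mode="SQLServer"
                    sqlConnectionString="data source=yourserver;user id=...;password=..."
                    cookieless="false"
                    timeout="20" />
    </system.web>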

Does Varnish Handle User Web Sessions

I've had two sysadmins from two large hosting organizations tell me that Varnish will handle session sharing between web servers. I can find nothing online to support this, and in fact found a post where the author specifically says it does not. I cannot tell whether the author is a Varnish employee, just a contributor, or something else.
Just looking for more verification on this point.
A session allows you to store many things (shopping carts, logged-in user, etc.), and is commonly identified by a cookie (e.g. sessionid). A web server knows how to get a session using this sessionid (and can access/update your shopping cart), but Varnish only handles cookies. Varnish can do load-balanced lookups to backends, regardless of the cookie values or based on some rules (you need to write your own Varnish config).
However, a challenge in session sharing between web servers is whether a web server can access sessions created or updated by another web server. In many Java web containers, sessions are by default stored in memory (of only one web server), with load balancers implementing some kind of 'sticky session' mechanism (sending a user with a session to a specific backend all the time, which can easily be set up with Varnish). Another option is to store the (serialized) session values in a shared database, so they can be retrieved by any backend (and will keep working if a web server goes down). A third option is to completely serialize the session into a cookie and stop using session ids, but this is complex (size is limited, bandwidth grows, and security requires some signing mechanism, although it scales very well).
All approaches have advantages and disadvantages. You have to choose: Varnish supports any of these options but will not 'automagically' do what you want, so prepare to write a bit of Varnish configuration...
If you describe how you want to load balance, or what you are trying to achieve, you can get a more specific answer.
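To make the sticky-session option concrete, here is a minimal VCL sketch (Varnish 4.1+, using the bundled directors vmod) that pins each session to one backend via a hash director. The backend addresses and the sessionid cookie name are assumptions:

    vcl 4.1;
    import directors;

    backend web1 { .host = "192.0.2.10"; .port = "80"; }
    backend web2 { .host = "192.0.2.11"; .port = "80"; }

    sub vcl_init {
        # A hash director maps the same key to the same backend,
        # which is exactly what "sticky" means here.
        new sticky = directors.hash();
        sticky.add_backend(web1, 1.0);
        sticky.add_backend(web2, 1.0);
    }

    sub vcl_recv {
        if (req.http.Cookie ~ "sessionid=") {
            # Extract the session id and always route this user to
            # the backend that id hashes to.
            set req.backend_hint = sticky.backend(
                regsub(req.http.Cookie, ".*sessionid=([^;]+).*", "\1"));
            # Don't serve cached pages to users with a session.
            return (pass);
        }
        # Requests without a session cookie fall through to the
        # default backend (web1) and normal caching.
    }

Note that this only pins users to a backend; the session data itself still lives on that one web server, which is exactly the limitation described above.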

Doing Memory Job on every site in whole IIS on every instance of whole Azure Deployment

Let's assume the "memory job" is something like setting or cleaning a cache. I know about AppFabric; I'm just using the cache as an example.
I have an ASP.NET MVC3 deployment with 4 instances in Azure at the moment.
Each instance hosts multiple websites (18 of them), defined in the Sites section of ServiceDefinition.csdef. They are laid out as folders 1 to 17 under sitesroot in the .cspkg package.
I want to know: is there any mechanism to loop over all Azure instances, and then over all IIS sites, to call an MVC controller on each?
The simple answer is no, there is no built-in functionality that will iterate through all of your sites on all of your instances.
Presuming you want to do this to tell these sites that some data in the cache has changed, the simplest way around it is to have each site poll for updates to your cache, or just expire data out of the cache after a certain period of time.
The more complicated option is to give each site an internal endpoint; when the site starts up, it registers this internal endpoint with a cache controller service. When some cacheable data is updated, the cache controller service contacts all of the endpoints registered with it and tells them to refresh their caches.
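Here is a minimal C# sketch of the simpler polling option. It assumes a hypothetical central endpoint that returns an opaque version stamp; the URL, the 30-second interval, and the "catalog" cache key are placeholders:

    using System;
    using System.Net;
    using System.Threading;
    using System.Web;

    public static class CacheVersionPoller
    {
        static string _lastVersion;
        static Timer _timer;

        // Call once from each site's Application_Start.
        public static void Start()
        {
            _timer = new Timer(_ => Poll(), null,
                TimeSpan.Zero, TimeSpan.FromSeconds(30));
        }

        static void Poll()
        {
            try
            {
                string version;
                using (var client = new WebClient())
                {
                    // The central service bumps this stamp whenever
                    // the underlying data changes.
                    version = client.DownloadString(
                        "http://cachecontrol.example.com/version");
                }
                if (_lastVersion != null && version != _lastVersion)
                {
                    // Stale: drop this instance's local copy so the
                    // next request reloads fresh data.
                    HttpRuntime.Cache.Remove("catalog");
                }
                _lastVersion = version;
            }
            catch (WebException)
            {
                // Endpoint unreachable: keep serving the current cache
                // and try again on the next tick.
            }
        }
    }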

Amazon EC2 ELB directing load to other instances and session stores

If we scale up (add an instance to the ELB), could we redirect some existing requests to the new instance? The users that we force onto the new server would then be asked to log in again.
If we scale down (remove an instance from the ELB), all users from that server will automatically be redirected by the ELB to the remaining servers. These users should not be asked to log in again.
Is this possible (including the redirect of requests)? How?
Any ideas are welcome, but I presume this can be solved using a central session store. I just don't know how to implement it.
And what are the options for a central session store? SimpleDB? Redis? Memcached?
Our application is just a simple web application hosted in Apache. We have two instances of it behind the Amazon ELB, and we are using PHP.
Any ELB/PHP-specific suggestions, so that no user-visible symptoms appear when a scale up/down happens?
For the most part, this should be completely transparent to your end users without many changes on your end.
The biggest aspect to look at on your side will be ensuring that sessions are persisted and available through the addition and removal of instances.
You can do this by setting a cookie on the client (the default behavior of session_start()) and ensuring that all of your PHP web servers have a way to look up the session data for that session id.
Some people use memcached to do this, and there is native integration in PHP for storing sessions in memcached; see the sketch below.
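A minimal sketch of that native integration, assuming the PHP memcached extension is installed and a memcached node (e.g. ElastiCache) is reachable from every instance; the host and port are placeholders:

    <?php
    // Store sessions in memcached instead of local files, so any
    // instance behind the ELB can read any user's session.
    ini_set('session.save_handler', 'memcached');
    ini_set('session.save_path', 'cache-node.example.com:11211');

    session_start();              // the session id still travels in the cookie
    $_SESSION['user_id'] = 42;    // now visible from every instance

With this in place, adding or removing an instance doesn't log anyone out, because no session data lives on the web servers themselves.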
There are quite a few ways to set up centralized session management. Some of them are listed below:
DB:
http://ocklin.org/session_management_cluster.html
Memcache:
http://www.migrate2cloud.com/blog/how-to-configure-memcached-on-aws-ec2-a-starters-guide (make sure the hosts are able to connect without any problem),
http://www.dotdeb.org/2008/08/25/storing-your-php-sessions-using-memcached/
http://php.net/manual/en/memcached.sessions.php
Msession:
http://in.php.net/manual/en/ref.msession.php
