Will an out-of-proc session survive a Windows Azure VIP swap?

If I'm hosting a site in Windows Azure with an out-of-proc session provider and perform a VIP swap, will the session persist through the VIP swap, since the session is being provided out of process?
I will come back and answer the question after I perform a test, but I'm pretty sure someone here will have the answer sooner, and Azure deployments take a while.

I see no reason why it shouldn't work.
It should be no different from hitting different instances that are part of the same deployment.

If you use Azure Cache for your session state, the VIP swap should not affect it. The browser is using the same domain, so the same session cookie is sent, and a VIP swap does not touch the cache server.
If you are not using Azure Cache for session state, I recommend it; there is even a NuGet package for it.

Related

Shared HTTP session lifetime

We have several web applications using the same identity provider (which we also manage); most of them, including the identity provider, use .NET Core.
The requirement is that if a user is logged in to two or more applications at the same time (in one browser) and is actively using one of them, the session lifetime is automatically extended in all of the applications.
So while the user is actively using at least one application, they don't get logged out of any of them. The other requirement is auto-logout after a certain period of inactivity (that part is easy, of course).
I thought of using a Redis server to manage this shared session lifetime, keyed by the SessionId that each app receives from the identity server via claims. Each time the user performs an action, the backend contacts Redis, checks whether the user's session is still active, extends the session lifetime if it is, and logs the user out if it is not.
The problem is that the applications are not allowed to access this Redis server directly (for security reasons), so I thought of adding a separate web service that the apps contact over a standard HTTP endpoint: basically just a middleman between Redis and each web app.
Is there a better way to do this? I'm not sure how common a requirement this is.
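A minimal sketch of the proposed middleman idea, assuming a hypothetical session service that exposes a POST sessions/{id}/touch endpoint (extends the shared lifetime in Redis, returns a non-success status when the session has expired). The claim type "sid", the client name, and the URLs are illustrative placeholders, not part of the question.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Http;

// Runs on every request: asks the session service (the "middleman" in front of Redis)
// to extend the shared session, and logs the user out locally if it has already expired.
public class SharedSessionMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IHttpClientFactory _clientFactory;

    public SharedSessionMiddleware(RequestDelegate next, IHttpClientFactory clientFactory)
    {
        _next = next;
        _clientFactory = clientFactory;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // "sid" claim issued by the identity provider -- the claim type is an assumption.
        var sessionId = context.User.FindFirst("sid")?.Value;
        if (!string.IsNullOrEmpty(sessionId))
        {
            // Named client whose BaseAddress is assumed to point at the session service.
            var client = _clientFactory.CreateClient("session-service");
            var response = await client.PostAsync($"sessions/{sessionId}/touch", null);
            if (!response.IsSuccessStatusCode)
            {
                // Shared session expired somewhere else: force a local logout.
                await context.SignOutAsync();
                context.Response.Redirect("/account/login");
                return;
            }
        }
        await _next(context);
    }
}
```

It would be registered with app.UseMiddleware&lt;SharedSessionMiddleware&gt;() after the authentication middleware; the main trade-off of the middleman approach is the extra HTTP round trip on every request.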
Redis usually serves as a distributed cache, which means it lives on another server or server farm. That is why your application faces restrictions: it is not allowed to access an external server directly.
If your application is still under development, or still in its growth phase, my recommendation is to use the InMemory cache, or to consider the response caching middleware.
Also, these are very small amounts of data; if that is all you are going to store in the cache, then to start with you would definitely consider the InMemory option.
Of course, I understand your reasons for wanting Redis, namely that it:
Is coherent (consistent) across requests to multiple servers.
Survives server restarts and app deployments, because the cache usually lives in a different location (e.g. Azure).
Doesn't use local memory.
Is scalable.
etc.
For larger applications, consider replacing the InMemory cache with the Distributed Memory cache. The two are very similar, because both keep the data in memory on the server where the application runs; the difference is that the InMemory cache requires sticky sessions, while the Distributed Memory cache does not.
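To make the choice above concrete, here is a minimal ASP.NET Core registration sketch; the Redis connection string and the idle timeout are placeholders.

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public static class SessionCacheSetup
{
    // Called from Startup.ConfigureServices.
    public static void Configure(IServiceCollection services)
    {
        // Distributed Memory cache: an in-memory implementation of IDistributedCache,
        // a reasonable starting point that can later be swapped for a real distributed store.
        services.AddDistributedMemoryCache();

        // For a server farm, swap in Redis behind the same IDistributedCache interface
        // (package Microsoft.Extensions.Caching.StackExchangeRedis):
        // services.AddStackExchangeRedisCache(o => o.Configuration = "localhost:6379");

        // Session state is stored in whichever IDistributedCache is registered above.
        services.AddSession(o => o.IdleTimeout = TimeSpan.FromMinutes(20));
    }
}
```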

Share Sessions in IIS Web Farm

We have Windows Server 2008 R2 with IIS in a web farm environment. I initiate a Classic ASP session, and every so often when refreshing the page the session value doesn't show, but then it comes back again.
I go to http://mainurl.com, but have two boxes called http://devbox1.com and http://devbox2.com.
I put the files onto one of the dev boxes, which replicates them to the other one.
After some reading, I guess this comes down to a "common" issue with sharing sessions across a web farm.
Could someone please help me resolve this?
Update:
As it's not clear in my post: do not use Session.SessionID as the identifier for the cookie, as it will change across environments (Microsoft recommends never storing the SessionID in a database).
Quote from MSDN Library - Session.SessionID
You should not use the SessionID property to generate primary key values for a database application. This is because if the Web server is restarted, some SessionID values may be the same as those generated before the server was stopped. Instead, you should use an auto-increment column data type, such as IDENTITY with Microsoft® SQL Server™ or COUNTER with Microsoft Access.
Instead, use a self-generated ID value that you then store in both the cookie and the database. This way your Session object can be re-created.
There seems to be some discussion about the database-backed solution, so just to clarify: Classic ASP stores the Session object in memory, which means that the minute you switch machines (load balanced or otherwise) you still lose the session.
Interesting article on the IIS.net forums about this topic: IIS 7 Load Balancing
Quote from Bill Staples (who at the time was Product Unit Manager, IIS)
One thing to consider, however, is what to do with any application / session state. Classic ASP stores session state in memory that only one process can access. As soon as you scale the sites onto more than one machine, you can no longer guarantee that each incoming request for a particular user session is landing on the same machine, which means the client may suddenly 'lose state' between requests. This is why we recommend that you not use the built-in session support in ASP for these kinds of scenario. Instead we recommend you use SQL or another database to store this kind of data.
My recommendation would be to store the session in a database, create a cookie on the client machine, and then use that cookie to identify the session in the database.
Cookies can be tampered with, so I would still recommend using secure cookies over an SSL-secured website, especially if the data is of a sensitive nature.
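The cookie-plus-database pattern recommended above, sketched in C# for brevity (the question itself is Classic ASP, where the equivalent would use Response.Cookies and an ADO query); the table, column, and cookie names are placeholders.

```csharp
using System;
using System.Data.SqlClient;
using System.Web;

public static class DbSession
{
    // Returns the farm-wide session key, creating it (and its cookie) on first use.
    public static string GetOrCreateSessionKey(HttpRequest request, HttpResponse response)
    {
        var cookie = request.Cookies["FarmSessionKey"];
        if (cookie != null && !string.IsNullOrEmpty(cookie.Value))
            return cookie.Value;

        // Self-generated identifier -- deliberately NOT Session.SessionID.
        var key = Guid.NewGuid().ToString("N");
        response.Cookies.Add(new HttpCookie("FarmSessionKey", key)
        {
            HttpOnly = true,
            Secure = true   // only send over SSL, as recommended above
        });
        return key;
    }

    // Reads a single named value for this session key from the shared database.
    public static string LoadValue(string connectionString, string key, string name)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Value FROM FarmSession WHERE SessionKey = @key AND Name = @name", conn))
        {
            cmd.Parameters.AddWithValue("@key", key);
            cmd.Parameters.AddWithValue("@name", name);
            conn.Open();
            return cmd.ExecuteScalar() as string;
        }
    }
}
```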
You should set up sticky sessions when working with web farms, because you most likely have a load-balanced system which, under a standard configuration, directs traffic to the least loaded node. As a result, your users will lose their session from time to time.
Ask your network admin team how to configure sticky sessions for your particular load balancers and network configuration; they should know exactly what this means. Here is one example of what this is and how to configure it: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_StickySessions.html. But once again, it depends on what you are using.
The solution with cookies or database entries is not the best way to handle this situation, because, again depending on your web farm configuration, IIS may simply reject any attempt to overwrite the session ID you have stored in the database, or, if security is tight enough, even refuse to serve the page when the request lands on another node.

Change session mode to migrate to Windows Azure

I am hosting an on-premises website which I want to migrate to a Windows Azure virtual machine. I will be using multiple Azure VM instances. Currently I am using the InProc session management technique. Do I really need to change this session mode to migrate the website to the cloud? Why?
If you want to have more than one web role instance (for load balancing/scalability/redundancy purposes), then yes, you do need to change it. Just a reminder: you need a minimum of two instances in order to be eligible for the 99.95% SLA.
InProc means that the session information is stored in that web role's process; a second web role instance has no knowledge of the data held in-process by the first.
So if your first web request goes to WEBROLE_1 it has your session information.
If your second request goes to WEBROLE_2, it won't know that you already have some session data stored in the other role.
There are a number of other options for storing your session info, including Table Storage, SQL Azure, or AppFabric Cache.

Windows Azure Caching (Preview) ErrorCode<ERRCA0017>:SubStatus<ES0006>:

I'm using the role-based caching feature for a Windows Azure web role, configured as co-located. I've followed the steps given by the Windows Azure docs for Caching (Preview), but I get the following error:
ErrorCode <ERRCA0017>:SubStatus<ES0006>:There is a temporary failure.
Please retry later. (One or more specified cache servers are
unavailable, which could be caused by busy network or servers. For
on-premises cache clusters, also verify the following conditions.
Ensure that security permission has been granted for this client
account, and check that the AppFabric Caching Service is allowed
through the firewall on all cache hosts. Also the MaxBufferSize on the
server must be greater than or equal to the serialized object size
sent from the client.). Additional Information : The client was trying
to communicate with the server: net.tcp://127.255.0.4:20010/.
I'm running everything on localhost, using the local development storage, and my cache client is in the same role as the server. I've changed many configuration attributes, but I always get that exception or a similar one like "cannot connect to tcp....".
I'd appreciate some help. Thanks.
There are a couple of things which could be going wrong in your application.
The very first thing is to make sure that you have SDK 1.7 on your machine to go with the Windows Azure Caching Services, and then verify that your references come from Windows Azure Cache (not from the Windows Server AppFabric SDK). I have seen such misconfiguration lead to errors like this in the past.
Next, have you changed your dataCacheClient identifier to your role name as described in the documentation linked here? If you follow the documentation as described you should not hit any error, so for the sake of checking what could be wrong, you can create the exact same application as described in that link and see whether it works.
To get a more detailed error, be sure to increase the DataCacheFactoryConfiguration.ChannelOpenTimeout value to something longer, e.g. 2 minutes instead of the default 20 seconds, as described here. This will surface the inner exception, which may point you to the actual root cause of your problem.
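A small sketch of the timeout change suggested above, assuming the default dataCacheClient name from the Caching (Preview) walkthrough; adjust the client name to match your own configuration.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class CacheDiagnostics
{
    static DataCache Connect()
    {
        // "default" is the dataCacheClient section name used in the walkthrough; yours may differ.
        var config = new DataCacheFactoryConfiguration("default");

        // Raise the channel-open timeout from the 20 second default to 2 minutes so the
        // underlying inner exception has time to surface instead of a generic retry error.
        config.ChannelOpenTimeout = TimeSpan.FromMinutes(2);

        var factory = new DataCacheFactory(config);
        return factory.GetDefaultCache();
    }
}
```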
We use Azure co-located caching (no longer in preview) to back our sessions and have fairly regular outages, about once a month.
We tried using the Enterprise Library Transient Fault Handling block, but our instances still hang when caching experiences problems. I think the transient-fault code would work for data caching, but for session backing there is some activity closer to the metal that we can't seem to code against.
The error codes have become more informative over the last year and go something like...
ErrorCode:SubStatus:The request timed out..
Additional Information : The client was trying to communicate with the
server: net.tcp://10.xx.xxx.xx:xxxxx/.
Our best guess so far, from experimenting and from MS support, is that each co-located cache role/instance (or at least one of them) needs to know the IPs of all the other instances; since Azure can destroy and re-provision instances whenever it wants, this sometimes fails to propagate to the dependent instances. This is secret sauce for Azure, but it is not a secret when our site goes down. I'm looking for any more information on this and to see how others are working around the issue.
One possible work-around: one of our talented platform administrators found that resetting IIS on the instances and scaling up by two more instances seems to help. This makes sense to me, because it gives caching another chance to gather the required information about the other instances. This is NOT CONFIRMED to solve the problem, but if it works again during the next outage it could be a valid work-around.

Amazon EC2 ELB directing load to other instances and session stores

If we scale up (add an instance to the ELB), could we redirect some existing requests to the new instance? The users we force onto the new server would then be asked to log in again.
If we scale down (remove an instance from the ELB), all users from that server will automatically be redirected by the ELB to the remaining servers. These users should not be asked to log in again.
Is this possible (including the redirecting of requests)? How?
Any ideas are welcome, but I presume this can be solved using a central session store; I just don't know how to implement it.
And what are the options for a central session store? SimpleDB? Redis? Memcached?
Our application is just a simple web application hosted in Apache. We have two instances of it behind the Amazon ELB, and we are using PHP.
Any ELB/PHP-specific suggestions, so that no user-visible symptoms appear when a scale up/down happens?
For the most part, this should be completely transparent to your end users without many changes on your end.
The biggest aspect to look at on your side will be ensuring that sessions are persisted / available through the addition / removal of instances.
You can do this by setting a cookie on the client (the default behavior of session_start()) and ensuring that all of your PHP web servers have a way to look up the session data for a given session ID.
Some people use memcached to do this, and PHP has native integration for storing sessions in memcached.
There are quite a few ways to set up centralized session management. Some of them are listed below:
DB:
http://ocklin.org/session_management_cluster.html
Memcache:
http://www.migrate2cloud.com/blog/how-to-configure-memcached-on-aws-ec2-a-starters-guide (make sure the hosts are able to connect without any problem),
http://www.dotdeb.org/2008/08/25/storing-your-php-sessions-using-memcached/
http://php.net/manual/en/memcached.sessions.php
Msession:
http://in.php.net/manual/en/ref.msession.php
