MachineKey in web.config going out of sync between two IIS applications - asp.net-web-api

We've got an ASP.NET 4.5 Web API service that is hosted on two separate AWS instances behind a load balancer. Both machines have a standard system.web configuration section containing the machineKey element, with both the decryption and validation keys and algorithms set. The web.config files on both machines are identical:
<system.web>
...
<machineKey decryption="AES" decryptionKey="..." validation="SHA1" validationKey="..." />
...
</system.web>
These keys are used by the OAuth middleware to decrypt/validate bearer tokens in requests.
Everything has been working absolutely fine for the last two years, but in the last month, requests going to the second web box have been failing because the OAuth bearer token appears to be invalid.
If we only run one 'leg' of the load balancer at a time, everything is fine, no matter which leg handles all requests. But if both are running, requests to the second box fail. This is odd, as there have been no code changes, and no environmental changes that we are aware of, either to the web boxes or the load balancer.
It all suggests that the two machines are using different machine keys, even though the keys are hard-coded in both web.config files.
Is there some way the machine key could be getting overridden? Are there any other explanations for what might be going wrong?

Check that the system clocks on both AWS instances match. I seem to recall this happening to me a few years ago.
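If the clocks are fine, a quick way to check whether the two boxes really are using the same effective key is a throwaway diagnostic that round-trips a value through System.Web.Security.MachineKey. This is only a sketch (the class, method names, and purpose string are made up); expose the two methods through whatever temporary endpoint is convenient, call ProtectSample on box A, and pass the result to UnprotectSample on box B:

using System;
using System.Text;
using System.Web.Security;

// Temporary diagnostic only: if the effective machine keys differ between the boxes,
// UnprotectSample throws a CryptographicException instead of returning the payload.
public static class MachineKeyCheck
{
    private static readonly string[] Purposes = { "machinekey-diagnostic" };

    public static string ProtectSample()
    {
        byte[] payload = Encoding.UTF8.GetBytes("ping " + DateTime.UtcNow.ToString("o"));
        return Convert.ToBase64String(MachineKey.Protect(payload, Purposes));
    }

    public static string UnprotectSample(string token)
    {
        byte[] payload = MachineKey.Unprotect(Convert.FromBase64String(token), Purposes);
        return Encoding.UTF8.GetString(payload);
    }
}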

Related

Can a person add CORS headers using the ELB Application Load Balancer (sitting in front of Solr)?

We have a number of EC2 instances running Solr, which we've accessed in the past through another application. We would like to move towards allowing users (via web browser) to access Solr directly.
Without something "in front" of Solr this creates a security risk, so we have opted to try ELB (specifically the Application Load Balancer) as a simple and maintenance-free way of preventing certain requests from hitting Solr (i.e. preventing the public from deleting or otherwise modifying the documents in Solr).
This worked great, but we realize that we need to deal with the CORS issue. In other words, we need to add the appropriate CORS headers to the responses returned to the browser. I have not yet seen a way of doing this with the Application Load Balancer, but I am wondering if it is possible somehow. If it is not possible, I would love a recommendation for the easiest and least complicated way of adding these headers. We would really hate to add something like nginx in front of Solr, because then we have another layer to keep redundant, more servers, etc.
Thank you!
There is not much I can find on CORS for ALB either, and I remember that when I used Elastic Beanstalk with ELB I had to add CORS support in my Java application directly.
Having said that, I can find a lot of articles on how to set up CORS in Solr itself.
Could that be an option for you?

How does Windows Azure Web Sites handle session?

I was investigating one of the new offerings in Windows Azure, specifically "Web Sites", and I'm not able to find anything about how it handles session data. Does anybody know? I moved the slider up to 2 instances and it all seems to "just work", but I would feel better about using it if I knew for sure whether it was sharing session data (or not).
If you'd like to learn more about the architecture of Windows Azure Web Sites, I would suggest watching this session from TechEd 2012: Windows Azure Web Sites: Under the Hood.
You have a few options to solve this problem:
SQL solution
Table Storage solution
Memcache solution
SQL is the classic solution: sessions are stored in a SQL database and handled with ordinary SQL requests.
Table Storage works wonders (in my experience). It's really easy to scale and really simple to implement (just a few lines in your web.config).
The Memcache solution is the best solution. Azure provides a cluster of "cache servers" to store session state (or other serializable objects). It's really easy to scale and very fast. I am using this solution in my production environments with zero problems and really good performance.
To implement the Memcache solution, you just need to add these lines to your web.config:
<configuration>
  <configSections>
    <section name="dataCacheClients" type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere"/>
    <!-- more config sections here -->
  </configSections>
  <dataCacheClients>
    <dataCacheClient name="default">
      <hosts>
        <host name="YOUR_NAME_HERE.cache.windows.net" cachePort="YOUR_PORT_HERE"/>
      </hosts>
      <securityProperties mode="Message">
        <messageSecurity authorizationInfo="YOUR_KEY_HERE">
        </messageSecurity>
      </securityProperties>
    </dataCacheClient>
  </dataCacheClients>
  <!-- more configurations here -->
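For reference, the same cache can also hold other serializable objects directly from code. A minimal sketch of the AppFabric/Azure Caching client API, assuming the "default" dataCacheClient configured above:

using Microsoft.ApplicationServer.Caching;

// Uses the dataCacheClient named "default" from the web.config section above.
DataCacheFactory factory = new DataCacheFactory();
DataCache cache = factory.GetDefaultCache();

cache.Put("greeting", "hello from the cache");   // store any serializable object
string value = (string)cache.Get("greeting");    // returns null if the key is missing or expired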
Summary
If you don't care about the cost and you wish to achieve the best performance possible, go for the Memcache solution. If you need to keep your costs really low, go for Table Storage.
Since the linked video above is quite out of date, I thought I would share what I was able to find regarding sessions on Azure.
Azure Web Sites makes use of Application Request Routing (ARR).
ARR cleverly keeps track of connecting users by giving them a special cookie (known as an affinity cookie), which allows it to know, upon subsequent requests, which server instance they were talking to. This way, we can be sure that once a client establishes a session with a specific server instance, it will keep talking to the same server as long as its session is active.
Reference:
https://azure.microsoft.com/en-us/blog/disabling-arrs-instance-affinity-in-windows-azure-web-sites/.
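If you want to see that affinity behaviour from inside your own app, a rough sketch like the following works. The ARRAffinity cookie name and the WEBSITE_INSTANCE_ID setting are what Azure Web Sites uses, but treat the snippet as illustrative:

// Inside an MVC action or ASP.NET page: show which instance served the request
// and whether the ARR affinity cookie is present.
var affinity = Request.Cookies["ARRAffinity"];
var instanceId = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
Response.Write((affinity != null ? affinity.Value : "(no affinity cookie)") + " -> " + instanceId);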
Are you targeting ASP.NET 4.5?
If you don't explicitly configure any providers with 4.5, it will default to using the ASP.NET Universal Providers which are now included in Machine.config. So it will be using a SQL Session State provider by default. I would expect it to use a local DB, though, so I'm not sure how it would be sharing the state.
You could test it by opening up some sessions, then taking the number of instances back down to one and seeing whether some sessions lose state (see the sketch at the end of this answer).
The load balancer could be using session affinity, in which case you might not notice even if session state isn't being shared.
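A minimal sketch of that test, with a made-up action name, showing a per-session counter and which machine served the request:

// Hit this repeatedly through the load balancer; if the counter resets when you scale
// the instance count down (or when you bounce between instances), session state is not shared.
public ActionResult SessionCheck()
{
    int hits = (Session["hits"] as int?) ?? 0;
    Session["hits"] = ++hits;
    return Content(string.Format("hits={0}, served by {1}", hits, Environment.MachineName));
}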
How many web roles do you have? If you keep it to 1 you should be OK, but you can read the details here about how multiple web roles create the same session-state problems you'd encounter running a web farm. When running a web farm, one option is keeping session state in your database. So, as you can imagine, if you needed to run multiple web roles you could lean on SQL Azure (though Table Storage is really cool, and likely a great fit for something like session state).
But to answer your question more directly: you can use multiple web roles to distribute processing load, and a web role is just a "front-end web application and content hosted inside of IIS". So again, if you're only using one web role, your app probably is working just fine. But just be aware that if you ever need to scale your web roles out, it will break your session persistence.

IIS hangs on specific route requests in ASP.NET MVC 3 app UNLESS running in debugger

We have a very strange issue that we're dealing with.
We have an MVC 3 application that we are still developing, and on Monday we started running into an issue with four specific routes (methods) in one of our controllers. All four of these routes are used in the management of roles and deal with either creating or editing a role. We have two different tiers of roles in the application, so there is one route for creating a role in each tier and one route for editing a role in each tier. The two create methods call the same view, and likewise the two edit methods call the same view. Both views call a shared partial view that contains the form fields corresponding to the properties of the role being created or edited.
Here's the issue.
If I attempt to hit these routes without running the debugger first, IIS will hang. It will not return an error or even register the request in the IIS log.
HOWEVER, if I attempt to access those routes in the debugger, regardless of whether I have a breakpoint set up or not, the routes function as they should.
To make life a little more interesting, if I attempt to access those same routes AFTER I've run and shut down the debugger, the routes STILL function as they should.
We can reproduce this behavior on EVERY machine on our development team AND our staging server (except the debugging part on staging).
The methods that correspond to all of these routes themselves rely on a couple of methods in the same web service in our middle tier. Those methods work properly outside of the debugger in our integration tests.
We've checked for endless loops in the code, but can't find anything that would create an endless loop under these conditions. In fact, there's only one loop in the shared view, but it's a foreach loop, which shouldn't ever result in an endless loop.
Lastly, when I attempt to hit any of these four routes without running under the debugger, or at least having run it on a previous request, IIS essentially hangs. It will not time out. It will not throw an error. It will not log an error to the IIS log. Finally, it will eat up system resources to the point that you have to either restart IIS or reboot the entire machine.
Has anyone ever seen this behavior before? Any ideas on how to get around it? I've NEVER seen this behavior before, and the only thing anyone in our development group could come up with was some sort of permissions issue on a file, but we're not accessing the file system (outside of the view files themselves, and they have proper permissions) at any point during the processing of these methods.
I'm open to any and all suggestions.
UPDATE #1:
I have also posted this question on the ASP.NET forums and I had someone ask a question for more information. Here's my response to their questions.
What IIS are we talking about?
IIS 7.5. We're using the full-fledged IIS, not IIS Express.
What error?
That's just it. There is no error. No error is being reported. In fact, the request itself isn't being recorded in the IIS log for the site IF we're attempting to access these routes without the debugger running. If the debugger is running, everything works as you would expect it to.
VS Cassini?
Nope. IIS 7.5 that comes with Windows 7.
If you deploy a default WebForms project on IIS, does it work?
Yes. Without an issue. I actually have a number of WebForms applications that I maintain for customers running on my development box. They all work without any issue whatsoever.
If you deploy a default MVC project on IIS, does it work?
Yes. I have a number of sites running on this box. All of these sites are running without a hitch. IN FACT, the vast majority of routes on this site can be accessed without any issue. The vast majority of routes WITHIN THIS VERY CONTROLLER can be accessed without any issue!!!
To reiterate, this controller allows a user to manage users, roles, and permissions within the application. We have methods in there for listing, creating, and updating users, roles, and permissions. The routes that hit the methods for managing users and permissions work regardless of whether the debugger is running or not. The ONLY routes giving us issues are the four routes that I described above.
We currently have 19 controllers in this application, each with a varying number of defined route methods. EVERY OTHER route defined for the application is working properly and is not exhibiting this behavior. These are the only four methods (routes) in this one particular controller where we are seeing this.
UPDATE #2:
I've narrowed this down to a REST call (to a service that we control) within the controller. Here's the weird part - if I go into the REST service and immediately return a value (don't process anything), it still hangs outside of the debugger. If I'm running inside of the debugger or immediately after running the debugger, everything works as it should.
If I attempt to hit that REST service in fiddler directly, it works like a charm.
I'm going to try changing the URL in the service contract for the web service I'm calling and see if that works. Maybe it's something to do with the REST URL on the web service.
UPDATE #3:
Just to add further confusion, I set up Fiddler to act as a proxy between my MVC application and the REST middle tier. For EVERY other REST call within the application, the proxy gets the request. For this particular REST call, the proxy NEVER gets the request.
Now here's the annoying part. The WebChannelFactory that we use to call all of the methods in the middle tier through REST is created using a utility class in our MVC application. This code is used to generate every channel, so there's no difference between the requests that populate the list of users and the one that populates the list of permissions (the one that's hanging).
This is a GET request that's hanging, so I was able to call it directly in the browser. It works without an issue. The issue doesn't appear to be on the service end, but somewhere in the MVC application.
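For reference, the channel-creation pattern described above looks roughly like this (illustrative names only, not our actual utility class):

using System;
using System.ServiceModel.Web;

// Illustrative sketch of a WebChannelFactory-based utility for creating REST channels.
public static class RestChannelUtility
{
    public static TContract Create<TContract>(string baseAddress)
    {
        var factory = new WebChannelFactory<TContract>(new Uri(baseAddress));
        return factory.CreateChannel();
    }
}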
Make sure that you don't pass ViewBag.Variable.ToString() into the call: since ViewBag is dynamic, it will not work!
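In other words, materialise the dynamic value into a statically typed variable before handing it to the service call. A hypothetical illustration (the names are made up):

// ViewBag is dynamic, so passing ViewBag.RoleName (or ViewBag.RoleName.ToString())
// straight into the call makes the whole invocation dynamically bound, which can fail
// at runtime. Cast it to a concrete type first:
string roleName = (string)ViewBag.RoleName;
permissionService.GetPermissionsForRole(roleName);   // hypothetical middle-tier call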

1 A-record for every subdomain (10000+); any potential issues? Any other solution?

Most solutions I've read here for supporting subdomain-per-user at the DNS level are to point everything to one IP using *.domain.com.
It is an easy and simple solution, but what if I want to point the first 1000 registered users to serverA, and the next 1000 registered users to serverB? This is the preferred approach for us to keep our software and hardware costs for clustering down.
(diagram quoted from the MS IIS site: http://learn.iis.net/file.axd?i=1101)
The most logical solution seems to be one A-record per subdomain in the zone data files. BIND doesn't seem to have any size limit on zone data files; they are only restricted by available memory.
However, my team is worried about the latency of getting a new subdomain up and ready, since creating a new subdomain consists of inserting a new A-record and restarting the DNS server.
Is the performance impact of restarting the DNS server something we should worry about?
Thank you in advance.
UPDATE:
It seems like most of you suggest that I use a reverse proxy setup instead:
(diagram of ARR, IIS7's reverse proxy solution: http://learn.iis.net/file.axd?i=1102)
However, here are the CONS I can see:
single point of failure
cannot strategically set up servers in different locations based on IP geolocation.
Use the wildcard DNS entry, then use load balancing to distribute the load between servers, regardless of which client it is.
While you're at it, skip the URL rewriting step and have your application determine which account it is based on the URL as entered (you can just as easily determine what X is in X.domain.com as in domain.com?user=X).
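A minimal sketch of that host-header approach in ASP.NET (assuming a two-part base domain like domain.com):

// For a request to alice.domain.com this yields "alice"; for plain domain.com it yields null.
string host = Request.Url.Host;                 // e.g. "alice.domain.com"
string[] parts = host.Split('.');
string account = parts.Length > 2 ? parts[0] : null;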
EDIT:
Based on your additional info, you may want to develop a "broker" that stores which clients are to access which servers. Make that public-facing, then draw from the resources associated with the client as stored with the broker. Your front end can be load-balanced, and then you can pull from the file/DB servers based on who the client is.
The front-end proxy with a wild-card DNS entry really is the way to go with this. It's how big sites like LiveJournal work.
Note that this is not just a TCP-layer load balancer - there are plenty of solutions that will examine the host part of the URL to figure out which back-end server to forward the query to. You can easily do it with Apache running on a low-spec server with suitable configuration.
The proxy ensures that each user's session always goes to the right back-end server, and most session-handling methods will just keep on working.
Also the proxy needn't be a single point of failure. It's perfectly possible and pretty easy to run two or more front-end proxies in a redundant configuration (to avoid failure) or even to have them share the load (to avoid stress).
I'd also second John Sheehan's suggestion that the application just look at the left-hand part of the URL to determine which user's content to display.
If using Apache for the back-end, see this post too for info about how to configure it.
If you use tinydns, you don't need to restart the nameserver when you modify its database, and it should not be a bottleneck because it is generally very fast. I don't know whether it performs well with 10000+ entries, though (it would surprise me if it didn't).
http://cr.yp.to/djbdns.html

Is there a performance hit when "Windows" authentication is enabled on an anonymous website?

I've been having performance issues with a high-traffic ASP.NET 2.0 site running on Windows 2000. While editing the web.config file I noticed that the authentication mode was set to 'Windows', so I changed it to 'None'. The only users this site has are anonymous, and it gets 25,000+ page views a day. Could this setting have been causing performance issues?
There is a small potential, but if you are not securing any folders, it shouldn't be an issue.
In reality it would mostly be an issue if you needed to secure a folder path.
There might be a SMALL performance hit but I can't imagine it would be that bad.
It's very unlikely. Windows authentication is performed within IIS, and then a token is sent on to ASP.NET, so if you're using Anonymous Authentication, then it'll be effectively instantaneous, as this token will be created when the security context is created and that'll be it.
The 'None' authentication mode is intended for custom authentication, rather than for anonymous access; anonymous authentication is one of the IIS-level authentication choices (i.e. IIS auth).
Perhaps you should set up tracing on the app and have methods log how long they take, to see where it's slow. It's likely to be a slow-running query, a timeout issue, a lack of disk space/swap space, or something like that.
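A minimal sketch of that kind of timing trace (RunReportQuery is a made-up stand-in for whatever call you suspect); enable tracing with <trace enabled="true" /> under system.web and view the output at /trace.axd:

using System.Diagnostics;
using System.Web;

// Time a suspect operation and write the result to the ASP.NET trace output.
Stopwatch stopwatch = Stopwatch.StartNew();
RunReportQuery();                 // hypothetical stand-in for the suspect call
stopwatch.Stop();
HttpContext.Current.Trace.Write("timing", "RunReportQuery took " + stopwatch.ElapsedMilliseconds + " ms");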
Check out: http://msdn.microsoft.com/en-us/library/aa291347(VS.71).aspx for more detail on the authentication methods.
