How to do Continuous Integration with a live website without affecting users? - visual-studio-2010

I have implemented Continuous Integration using TFS Version Control and TFS Build 2010. The compiled website project gets dropped in a shared folder with a version number.
Now I have a very basic, maybe even stupid, question. When we normally deploy a website project from VS 2010 to a web server, it uploads an App_Offline.htm file to the website folder so that no requests are served to users. After publishing completes, the App_Offline.htm file is removed. During that period users see an outage.
If we use CI on a live website, how can we eliminate that outage that users see? I believe the whole point of CI is that users get to see newer features and the site is never down.
How is this accomplished? If we deploy the website project to the root folder, existing users will be affected, and that is certainly not advisable.
I wanted to know the recommended practice with VS 2010, TFS 2010 Build and Version Control.

There's no real foolproof method for this; service uptime is never 100%, which is why people usually measure it in 'nines'.
But if you had multiple web servers (backup, fail-over, mirror, etc.), you could roll the update out across them, so that as you update some servers, the others are still online (albeit with the old version) to serve users.
In general, only some of the largest websites have to worry so meticulously about being down for a few short minutes, so make sure you're focusing your energy in the right place ;)

Regarding taking the site down for the shortest time possible, the only way I've seen this done successfully is with multiple sites: either load balancing, or two sites on the same machine plus swapping host headers after the release/warm-up. In most cases it's not worth the effort, though; a release shouldn't take the site down for more than a few seconds, during which there should be relatively few requests. You're better off trying a few things that help your users live through a site release.
Move session out of proc.
If the user's session lives in the worker process (InProc), it will be lost when a new version is released; change the config to move it into a state server or the database.
Specify a machine key for the website
ViewState (and forms authentication cookies) is protected using a key that is generated when the site starts; if the site restarts due to a release, any users filling out a form will receive an invalid ViewState exception on postback. (Note: this may have other security implications.)

Related

WildFly 10 HA deploy: not losing sessions

I have been reading posts and documentation all day long about this topic, and still can't find something easy to understand and trust.
I currently have my webapp deployed on WildFly 10, as a simple war file.
It's an e-commerce website, in production for a few weeks, and every time we need to deploy a new release, well... that's very annoying, because some customers could be shopping right now, and deployment will obviously make them lose their sessions, and that's very bad.
I need a solution to deploy a new war without restarting the application server. At first, I read the docs about clustering (domain configuration over standalone configuration), but I'm not sure that's enough for me...
Imagine the same customer with a few items in the shopping cart (http session), accessing the first node of the cluster.
Then I take it down, because I'm deploying.
OK, the customer will be redirected to the second node of the cluster but... will the session data still be available? Will he 'lose' the shopping cart items?
I read about sticky sessions, but nothing about configuring them in WildFly. I am on Amazon AWS, so I can use ELB (load balancer), too.
Can you help me understand exactly what I need to learn and use?
Each WildFly instance will have its own session ID that it keeps in a cookie. This ID will only restore the session on the particular node it came from.
Sticky sessions mean that the ELB will always redirect the user to the same node in your cluster, so that won't quite solve your problem.
Some things to think about:
Clustering
Clustering may help (it doesn't need to be domain mode). With HA enabled, sessions are replicated between the nodes automatically, so the cookie in the client's browser will be able to restore the session on either node. This of course has the issue that if you upgrade one of the WAR files first, you may have a session object that can no longer be deserialized because its class changed.
Clustering WildFly on AWS is a little tricky as well, because you can't use UDP multicast for the nodes to discover each other. We use a database connection to keep track of nodes and do clustering.
Roll your own
One option is to roll your own solution that keeps just the minimum amount of information on the client. Something like:
Create a record in the database with a GUID.
Set the GUID in a cookie.
Save the items in their cart in the database, keyed by the GUID.
Have a filter that checks for the GUID cookie and can restore their cart each time they hit the site.
I've used an approach like this for e-commerce apps in the past. It has another side effect: you now have each person's shopping cart saved in your database, so it's easy to see exactly what people were interested in buying.
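The original question is about a Java .war on WildFly, but the pattern itself is framework-agnostic. Purely as an illustration of the steps above, here is a minimal sketch in TypeScript/Express, with an in-memory Map standing in for the database table and the route, cookie, and helper names made up:

    import express from 'express';
    import cookieParser from 'cookie-parser';
    import { randomUUID } from 'crypto';

    const app = express();
    app.use(cookieParser());

    // Stand-in for the database table keyed by GUID; in the real app this
    // would be persisted so the cart survives a redeploy.
    const carts = new Map<string, string[]>();

    // "Filter": make sure every visitor has a cart GUID cookie and restore
    // their cart from storage on each request.
    app.use((req, res, next) => {
      let guid = req.cookies['cartGuid'];
      if (!guid) {
        guid = randomUUID();                  // create a record keyed by a GUID
        carts.set(guid, []);
        res.cookie('cartGuid', guid, {        // set the GUID in a cookie
          httpOnly: true,
          maxAge: 30 * 24 * 60 * 60 * 1000,   // keep the cart for 30 days
        });
      }
      (req as any).cartGuid = guid;
      (req as any).cart = carts.get(guid) ?? []; // restore the cart
      next();
    });

    // Save items in the cart, keyed by the GUID.
    app.post('/cart/:item', (req, res) => {
      const guid = (req as any).cartGuid as string;
      carts.set(guid, [...(carts.get(guid) ?? []), req.params.item]);
      res.sendStatus(204);
    });

    app.listen(3000);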
Use Tomcat parallel deployment
Does your application require a full app server? If it is just servlet-based, you could try Tomcat and its parallel deployment functionality. It allows you to deploy a new version of your .war alongside the old one (Tomcat tells them apart by version-suffixed WAR names such as mywebapp##001.war and mywebapp##002.war). It will then keep serving existing sessions from the old WAR, while new sessions go to the new WAR.
Parallel deployment is very cool if your app is simple enough to run on Tomcat.

Is there a way to back up a Liberty server deployment in Bluemix?

I am deploying a packaged Liberty server into Bluemix that contains my application.
I want to update my application but before I do so, I'm wondering what's the best way to backup what I have currently up and running? If my update is bad, I would like to restore the previous version of my app.
In other words, what is the best practice or recommended way to update a web application running on a Liberty server in Bluemix. Do I simply keep a backup of the zip I pushed to Bluemix and restore it if something goes wrong? Or is there management capability provided by Bluemix for backup and restore?
It's understood that manually backing up the pushed zip is an acceptable strategy. Additionally, I found the Bluemix documentation Blue-green deployments to describe a reasonable solution: it's a deployment technique used with continuous delivery that allows you to roll back the app if any issues arise.
The Cloud Foundry article Using Blue-Green Deployment to Reduce Downtime and Risk succinctly explains the deployment steps (since Bluemix is based on Cloud Foundry, the steps are similar to the Example: Using the cf map-route command steps in the previously cited Bluemix documentation).
I agree with Ryan's recommendation to use the blue/green approach, though the term may be unfamiliar to those new to cloud server deployments. Martin Fowler summarizes the problem it addresses in BlueGreenDeployment:
One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production. You usually need to do this quickly in order to minimize downtime. The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.
Solving this problem is one of the main benefits of PaaS.
That said, for historical context, it's worth noting this blue/green strategy isn't new to cloud computing. Allow me to elaborate on one of the "old" ways of handling this problem:
Let's assume I have a website hosted on a dedicated server, myexample.com. My public-facing server's IP address ("blue") would be the one in the DNS "@" (root) record or behind a CNAME alias; another server ("green") would host the newer version of the application. To test the new application in a public-facing manner without impacting the live production environment, I simply update /etc/hosts to map the domain name to the green server's IP address. For example:
129.42.208.183 www.myexample.com myexample.com
Once I flush the local DNS cache and close all browsers, all my requests are directed to the green pre-production environment. Once I've confirmed everything works as expected, I update the DNS entry for the live environment (myexample.com in this case): the A record value if the site is addressed by IP, or the CNAME record value if by alias. Assuming the record has a reasonably short TTL, such as 300 seconds, the change is propagated to DNS servers within minutes. To confirm the propagation of the new DNS values, I comment out the aforementioned /etc/hosts change, flush the local DNS cache, and run traceroute. Assuming it resolves correctly locally, I do a final double-check that all is well in the rest of the world with a free online DNS checker (e.g., whatsmydns.net).
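Just to illustrate that propagation check programmatically, here is a small Node/TypeScript sketch that compares the answer from the machine's default resolver with a public resolver (the domain and IP come from the example above; the choice of public resolver is arbitrary):

    // Compare the A record seen by the default resolver with the answer
    // from a public resolver to confirm the new value has propagated.
    import { promises as dns } from 'dns';

    async function checkPropagation(host: string, expectedIp: string) {
      const local = await dns.resolve4(host);     // machine's default resolver

      const remoteResolver = new dns.Resolver();  // second opinion
      remoteResolver.setServers(['8.8.8.8']);
      const remote = await remoteResolver.resolve4(host);

      console.log({ local, remote });
      return local.includes(expectedIp) && remote.includes(expectedIp);
    }

    checkPropagation('www.myexample.com', '129.42.208.183')
      .then((ok) => console.log(ok ? 'Propagated' : 'Not yet propagated'))
      .catch(console.error);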
The above assumes an update to the public-facing content server (e.g., an Apache server connecting to a database or application server); the switch-over from pre-production to production is more involved if the update applies to a central database or similar transactional data server. If it's not too disruptive for site visitors, I disable login and drop all active sessions, effectively rendering the site read-only. Then I update the backend in much the same manner as described above: I point the pre-production green front end at a replica serving as the pre-production green backend, test, and when everything checks out, switch the green front end to become the live (blue) one and re-enable login. Voila.
The good news is that with Bluemix, the same strategy above applies, but is simplified since there's no need to fuss with DNS entries or separate servers.
Instead, you create two applications, one that is live ("blue") and one that is pre-production ("green"). Instead of changing your site's DNS entries and waiting for the update to propagate around the world, you can update your pre-production application (cf push Green pushes the new code to your pre-production application), test it with its own URL (Green.ng.mybluemix.net), and once you're confident it's production-ready, add the application to the routing table (cf map-route Green ng.mybluemix.net -n Blue), at which point both applications "blue" and "green" will receive incoming requests. You can then take the previous application version offline by unmapping it (cf unmap-route Blue ng.mybluemix.net -n Blue).
Site visitors will experience no service disruption and unlike the "old" way I outlined previously, the deployment team (a) won't have to bite their nails waiting for DNS entries to propagate around the world before knowing if something doesn't work and (b) can immediately revert to the previous known working production version if a serious problem is discovered post-deployment.
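If you want to script that switch, the whole blue/green cut-over is just the three cf commands above run in order. A minimal sketch (TypeScript on Node, reusing the app and route names from the example; it assumes you are already logged in and targeting the right org and space):

    // Blue/green switch using the cf CLI.
    import { execSync } from 'child_process';

    const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

    run('cf push Green');                                // deploy the new version
    // ...smoke-test Green.ng.mybluemix.net here...
    run('cf map-route Green ng.mybluemix.net -n Blue');  // send live traffic to Green as well
    run('cf unmap-route Blue ng.mybluemix.net -n Blue'); // take the old version out of rotation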
You should be using some sort of source control, such as Git or SVN. Bluemix is nicely integrated with IBM DevOps Services (IDS), which can use Git or an external GitHub repo to manage your project. When you open your app's dashboard, you should see a link in the upper right-hand corner that says "ADD GIT". That will automatically create a Git repo for your project in IDS.
Using an SCM tool, you can manage versions of your code with relative ease. IDS provides you with an ability to deploy directly to Bluemix as part of your build pipeline.
After you have your code managed as above, then you can think about green/blue deployments, etc. as recommended above.

Need for creating different Projects in Google API console

I basically have two URLs: http://xyzwebsite.com (for development testing) and http://abcwebsite.com (for production). I have a simple login mechanism where a user can click the Google Plus icon to log in rather than using their username and password. I created one project for development and another for production, each with its own client ID.
But I tested both of the URLs above with the client ID of the development project and it worked fine. I am wondering why there is a need for having multiple projects in the Google API console?
There is no particular need. A single project can have several URLs and client IDs for use.
Some reasons you might use multiple projects include:
Changing project settings in dev without worrying about breaking production
If you have a development script that gets into an endless loop or something, it might use up all of the quota, and the production app might start throwing errors
You might want clear branding on the dev app that explicitly identifies it as not being production.
Some unknown reason I can't think of.

windows azure website load time

Sometimes when I access my Windows Azure website, the initial response time is very slow. After the first page load the website is fast. Some background: the website is not visited very often at the moment. Further, I am using a keepalivecontroller to keep the website running, and the website is running in shared mode. I am wondering: are websites that are not very active removed from memory in Windows Azure? Or is it just that background tasks at the operational level of Windows Azure sometimes interfere? It is not transparent to me what is happening, so is there some SLA or something for Windows Azure websites?
There is now a new feature available for Windows Azure Websites in 'Reserved' mode that will keep your website warm. You can now turn on "Always On" under the "Configure" tab of your Azure Website. As explained in this blog post:
When the new “Always On” feature is enabled on a site, “Windows Azure will automatically ping your website regularly to ensure that the website is always active and in a warm/running state,” Guthrie writes. “This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests).”
Easiest way to keep a website warm is to call it regularly using the Scheduler feature in Windows Azure Mobile Services.
You simply write a script in the Scheduler that pings your website every x minutes.
Here's a post covering how to do that: http://fabriccontroller.net/blog/posts/job-scheduling-in-windows-azure/
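As a rough idea of what such a scheduled job can look like: the Mobile Services scheduler runs Node.js scripts, and the sketch below assumes the 'request' module is available there (as it typically is); the URL is a placeholder.

    // Scheduled job that pings the site on every run to keep it warm.
    var request = require('request');

    function keepSiteWarm() {
        request.get('http://yoursite.azurewebsites.net/', function (error, response) {
            if (error) {
                console.error('Keep-alive ping failed: ' + error);
            } else {
                console.log('Keep-alive ping returned HTTP ' + response.statusCode);
            }
        });
    }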
The Windows Azure Web Sites are still in preview, so there is currently no SLA with that service.
The Web Sites do idle out when in Free or Shared mode, which is likely what you are seeing. When the site idles out, it is actually removed from memory, and indeed the IIS process hosting the site is shut down. This is how they can get the density of hosting 100 sites on the same VM.
You can find a lot of info on the Channel9 site about why this is the case, or, as a shameless plug, here is an article that talks about how the process is handled.
Now, you mentioned that you were using a keepalivecontroller, but what exactly do you mean by that? I use pingdom.com to constantly request data from one of my websites, and that seems to do pretty well. It is still possible that a request doesn't come in and the idle timeout is reached, which then recycles the site. It is also possible that, even if you always keep the site running, the VM the site sits on needs its underlying OS updated, in which case Azure would move the site process to another VM, which could also cause the slow start-up on the next request.
I'd start logging your application start ups and then look through your logs to see how often that is happening.
If you only need to warm it up once (vs. keeping it warm) and are mostly trying to prevent your customers from experiencing page cold starts, I believe the correct tool is IIS Application Initialization. You can configure it with a list of URLs to hit before it deems the app ready for action.
My site suffers from page cold starts, and that is severely magnified in Azure Websites (even on an S3), but it is absolutely speedy after it has served that first request, thanks to several layers of caching (our inefficient use of Umbraco's dynamic nodes query language creates a lot of database churn, which we're cleaning up opportunistically).
From what I've read and my own web.config attempts this is still not available in Azure Websites. I've asked Microsoft for it here: MS IDEA: Application Initialization to warm up specific pages when app pool starts. Please consider voting for it.
For each service/site you need to go to "Configure", then switch "Always On" to ON. Also make sure you click Save; it took about two minutes before my website noticed the change.
Why this is not the default is kind of mind-boggling, because my setup on HostGator ran much faster than Azure. I guess Microsoft figures that if nobody is accessing your site, it's okay for it to have a long load time.

How can a web application sync a folder of text files on the client's PC?

I want to be able to synchronize several text files on a user's PC in real time from my web application. Basically, I want a few data files on the local PC to mirror the state of the user's data in my web application, so that if the web application or the user's internet connection is lost, he can use those data files to get some critical info (possibly using HTML/JavaScript code stored alongside those files that would run in offline mode against those data files).
I know that Google Gears has a lot of interesting tools for working with offline state, but I'd prefer an even simpler approach in HTML/JavaScript that isn't as reliant on Google Gears. I'd rather use Google Gears just to create those files and slowly keep them in sync with the web application's version of the data throughout the day.
Update on answers:
PersistJS is a good suggestion I will look into, but I was hoping people would direct me towards really good Google Gears tutorials and resources.
You can save data in the browser using PersistJS, which uses the best client-side persistent storage mechanism it can find, supporting:
Flash
Google Gears
HTML 5 storage specs
browser-specific extensions
cookies
When your app reconnects, you can resync. Creating and reading text files is something the browser will generally block your web site from doing.
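For a rough idea of what that looks like in code, here is a minimal PersistJS sketch (the store name, key, and data shape are all made up):

    // Create (or reopen) a named client-side store; PersistJS picks the best
    // available backend (Gears, HTML5 storage, Flash, cookies, ...) for you.
    var store = new Persist.Store('MyWebApp');

    // Mirror the latest server state whenever you sync.
    store.set('criticalData', JSON.stringify({ cart: ['item-1'], updated: Date.now() }));

    // Later (e.g. when the connection is lost), read it back.
    store.get('criticalData', function (ok, value) {
        if (ok && value) {
            var data = JSON.parse(value);
            // ...render the offline view from "data"...
        }
    });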
At the risk of stating the obvious: if you want to store user state locally, aren't cookies the standard way?
Maybe more than one cookie will be needed, but that sounds like the simplest approach.
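A minimal sketch of that cookie approach (browser-side; the names are made up, and keep in mind each cookie is limited to roughly 4 KB):

    // Write a small piece of state into a cookie.
    function saveState(name, value) {
        document.cookie = name + '=' + encodeURIComponent(value) +
            '; path=/; max-age=' + (60 * 60 * 24 * 7); // keep for a week
    }

    // Read it back.
    function loadState(name) {
        var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
        return match ? decodeURIComponent(match[1]) : null;
    }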
You're going to need to make an ActiveX control and a Firefox plugin to get these permissions. Short of that, I agree with orip: try using PersistJS.
You could ask the user to download a Subversion client that is preconfigured to interface with your Subversion server only. Then write your web application to interface with the Subversion server from your side only.
There is a good deal of security harm associated with granting access to a user's file system, so you will want to lock down all possible points of exploitation:
Ensure that the user cannot access the Subversion server except through the client you ask them to install.
Ensure the connection between the application server and the Subversion server is extremely secure, so that the transmission path cannot be compromised and malicious logic loaded onto the application server cannot access the Subversion server. I would encrypt the transmission path between those two servers and put the Subversion server behind the firewall separating your network DMZ.
Use a challenge/response mechanism between the application server and the Subversion server to prevent malicious code from appearing to be legitimate decisions made on the application server.
Ensure that data flows from the application server to the Subversion server in a unidirectional fashion only, because if malicious logic is planted on your application server, then any data that comes from the Subversion server is compromised without that server even being accessed.
You could use the File System Object (FSO) through JavaScript; however, it depends on Microsoft technology, as it is an ActiveX control, and it would also require permissions in the browser, or perhaps an HTA (HTML Application).
http://www.webreference.com/js/column71/
It's a real security issue, so most avenues are inherently closed down.
The web model was inherently designed not to let the server push upstream to the client. Things are changing slowly now; maybe you could do this with WebSockets?
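As a rough sketch of that WebSocket idea (the endpoint and message format are made up, and localStorage stands in for the local text files, which a normal web page cannot write directly):

    // Open a socket to a (hypothetical) sync endpoint and mirror every pushed
    // snapshot into localStorage so it survives losing the connection.
    var socket = new WebSocket('wss://example.com/sync');

    socket.onmessage = function (event) {
        // Assume the server pushes the user's current critical data as JSON.
        localStorage.setItem('criticalData', event.data);
    };

    socket.onclose = function () {
        // Offline: the last mirrored snapshot is still available.
        console.log('Connection lost; last known data:', localStorage.getItem('criticalData'));
    };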

Resources