Azure: What if a guest OS update breaks my application?

We are investigating the option of shifting our small company's infrastructure to Azure PaaS (Websites, Cloud Services, SQL) as we do not have the resources to maintain our infrastructure at scale and it takes a lot of developer time to keep our current servers maintained.
The last problem we have with moving to Azure PaaS is that control over updates seems somewhat limited: according to this article, Azure enforces that you remain within two patch versions of the guest OS that Microsoft rolls out.
Aside from the fact that this places a testing burden on us (we would have to test that our software works with new OS releases forced upon us), there is nothing about what can be done if an Azure update DOES break one of our applications... and it has happened before with Windows Updates.
How is this supposed to be dealt with? Has no one else had this problem?

This is typically dealt with by updating your applications and/or fixing your custom code to work with newer patches and/or updates.
There's really very little else you can do. I've worked at places that didn't, and seen the results of blocking an incompatible update long-term (or turning off updates altogether), and it's far worse than just keeping your code maintained. Failure to do so is how you end up paying a group of consultants thousands of dollars an hour to troubleshoot a code base or application that isn't compatible with anything made in the last decade.

I would like to add that you may want to have your whole deployment replicated, but always running on the latest available patch.
This way you can test updates weeks in advance of updating your production environment.
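For Cloud Services, a minimal sketch of how you might wire that up (the service and role names here are placeholders, and the pinned version string is only an example): the replica's ServiceConfiguration.cscfg tracks the newest guest OS automatically via osVersion="*", while production pins an explicit release.

    <!-- Staging replica: osVersion="*" tells Azure to apply new guest OS
         releases automatically, so this deployment always runs the latest
         patch. Production would instead pin a specific version, e.g.
         osVersion="WA-GUEST-OS-2.12_201208-02" (illustrative value). -->
    <ServiceConfiguration serviceName="MyService" osFamily="2" osVersion="*"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="WebRole1">
        <Instances count="2" />
      </Role>
    </ServiceConfiguration>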

Related

Jelastic maintenance windows & process across providers

Does anyone know if all Jelastic providers have the same outage windows & process? I'm liking Jelastic for the simplicity but the last maintenance window caused issues for my application despite only impacting the dashboard (supposedly).
Are all Jelastic providers going to be down for maintenance at the same time?
Are the upgrade process/scripts uniform across the providers? i.e. provided by jelastic?
Cheers
Are all Jelastic providers going to be down for maintenance at the same time?
Jelastic releases are rolled out to most/all providers around the same time (i.e. within a period of a few weeks).
Are the upgrade process/scripts uniform across the providers? i.e. provided by jelastic?
Jelastic development team engineers create the upgrade scripts and perform the upgrade process, so the actual upgrade experience should be approximately the same at every provider.
Most platform upgrades are targeted at improving the underlying platform (e.g. adding new dashboard features, core infrastructure etc.) and should not have any impact on your application servers.
Maintenance windows are announced to provide notification that some operations will be performed, because there is always a risk that something unforeseen can happen in such cases (if zero risk could be guaranteed, there would be no need for any notification).
Just as you do with your own application, every Jelastic platform change is tested thoroughly in a dev environment, and the full upgrade process is simulated in a staging environment, so that as many issues as possible are identified and eliminated before it touches the live production platform - but software engineering is complex (as everyone reading Stack Overflow appreciates) and there is no such thing as 100% error-free software.
PS the fact that I can say exactly which environment you're referring to should tell you something about the number of environments affected. Unfortunately you were very unlucky in this instance, and Jelastic engineers are still investigating the exact details to identify the root cause and will outline the necessary steps to avoid a recurrence. Application uptime and stability are critical issues for us all.

How could Database Projects and TFS help our dev team?

I have just started working with a new company in a very small IT department, and an even smaller dev team (just 2 developers including me). We mainly develop in-house web applications for the company.
Now, my background is in desktop applications, so this job comes with a slight learning curve for me, having never developed ASP.NET web applications before. Currently we do not use TFS; however, I have made the suggestion and it is something we are going to be adopting soon.
I am also considering recommending we move our SQL databases into database projects. Currently we do nightly backups to protect the data, but all the updates are manual: we connect to the database and execute queries, etc.
I'm not a DBA, but in my last job we were in the process of migrating our databases to database projects and the DBAs seemed to love the idea. What would the benefits and potential downfalls of this be? Would it aid us with updating databases in our live environment after development has been done? Obviously we don't want to lose any data, but just update tables / stored procs etc.
As a side question, I have very limited knowledge of TFS, and although we are going to be using it to handle our version control, is it possible to use TFS to update our live websites automatically once development has finished?
Sorry if this is quite a broad question, I am attempting to research this myself but I would like to hear from people who actually use the products and do these things.
Thanks
About database projects: I, and several DBAs I know, have had mixed experiences with them. I'm not sure they are exactly where they should be at this time, but it may simply be a function of how I work. The deployment model is... difficult and can result in some unexpected behavior. If you go this route, test, test, test to make sure you understand exactly what's happening.
If you are just trying to get version control for the database, you might consider SQL Source Control from Red Gate. It looks pretty nice and hooks into TFS. I used one of the early versions (beta and 1.0) for a while and was very happy with it. Now that I think about it, I'm not sure why I don't have it here... ;)
As far as deploying out of TFS, you can absolutely do this. We have a build server set up so that whenever code is checked in, the build server automatically spins up to compile and deploy it out to one of our testing sites. Look here for a primer to get you started. This does require some configuration on your web server to properly support it, and the documentation is spotty at best.
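To give a flavour of what that looks like (a sketch only; the server, share, and project names are made up), TFS 2008 builds are driven by an MSBuild file, TFSBuild.proj, and you can override its AfterDropBuild target to push the compiled site to a test server:

    <!-- Fragment of a TFSBuild.proj (TFS 2008). Paths below are
         hypothetical placeholders. -->
    <Target Name="AfterDropBuild">
      <ItemGroup>
        <!-- Web apps land under _PublishedWebsites in the drop folder -->
        <SiteFiles Include="$(DropLocation)\$(BuildNumber)\Release\_PublishedWebsites\MyWebApp\**\*.*" />
      </ItemGroup>
      <!-- Copy the freshly built site to the test web server's share -->
      <Copy SourceFiles="@(SiteFiles)"
            DestinationFiles="@(SiteFiles->'\\testserver\wwwroot\MyWebApp\%(RecursiveDir)%(Filename)%(Extension)')" />
    </Target>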
Once you are happy with the test area, you can hook something into changes to the Build Quality so that it pushes the code to a staging or even production server... or simply have a build set up to recompile and deploy out there. Although I don't recommend doing production pushes this way. Not because of a technical issue, but rather a timing one: it's usually much faster to just copy/paste from one location to production when necessary, thereby limiting downtime.

Upgrade Oracle database from 9.2.0.7 to 9.2.0.8

We are planning to upgrade from Oracle 9.2.0.7 to 9.2.0.8. The main reason for the proposed upgrade is to address the exception "terminated with error: ORA-00904: "T2"."SYS_DS_ALIAS_4": invalid identifier" that we get when we try to execute DBMS_STATS.GATHER_SCHEMA_STATS.
We are concerned that the proposed upgrade may have a negative impact on our Java application or, in the worst case, may not even be supported by our Java application.
What are the possible approaches or strategies we can take to ensure the upgrade from Oracle 9.2.0.7 to 9.2.0.8 will not have an adverse impact on our Java application and will not cause it to function incorrectly? Essentially we just want to confirm that our application will still support Oracle 9.2.0.8.
Thank you.
Your first step should be to ensure you set up a test system with your exact production layout and current software (9.2.0.7).
Run it for a bit to make sure it's okay, then perform the upgrade on your test system and run it for a bit longer to ensure it hasn't broken anything. I'm not talking about the cowboy-developer "if it runs for five minutes, that's okay" type of test. It should be a thorough test of all functionality, and of performance if possible.
Once you're happy with the level of testing, you can plan to do the same thing to production.
This isn't rocket science; you should always have a production mirror on which you can test upgrades of software, both your own and third-party stuff. And you should have backout strategies on the off-chance that the production upgrade fails anyway despite your testing.
We're pretty paranoid so we actually set up a whole new machine well in advance doing as much as possible. Then, at cutover time, we disable current production, perform whatever transfer is still required to the new machine, then bring that up and test it. If at any point during testing something cannot be fixed in the upgrade window, the old machine is put back online and we try again later, with appropriate kicks in the rear end for those responsible for the failure :-)
I've upvoted Paxdiablo's answer - there are few shortcuts around testing with as much application coverage as possible on a full-size copy of your production system.
I think you're generally looking to answer two questions with an upgrade:
Have new bugs been introduced in the Oracle functionality used by the application?
Have changes in the optimizer changed the execution plans (for the worse!) for any application queries?
I believe that as early as 9.2 the optimizer would include system stats in the determination of execution plans, so you want to at least bring that information over into the test system to reduce that variable for the optimizer, in case your test system hardware differs from production.
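One way to carry those stats over, assuming the standard DBMS_STATS staging-table approach (the table and schema names here are arbitrary):

    -- On production: stage the system stats in a transportable table.
    EXEC DBMS_STATS.CREATE_STAT_TABLE('PERFSTAT', 'SYS_STATS_PROD');
    EXEC DBMS_STATS.EXPORT_SYSTEM_STATS(stattab => 'SYS_STATS_PROD', statown => 'PERFSTAT');

    -- Move the PERFSTAT.SYS_STATS_PROD table to the test database
    -- (e.g. via exp/imp), then load the stats there:
    EXEC DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => 'SYS_STATS_PROD', statown => 'PERFSTAT');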
If you upgrade to Oracle 11g and have the $$$$, you can license and configure Real Application Testing. This will let you essentially record and play back database activity in a test instance to answer these two questions.
In addition to the excellent answers by dpbradley and paxdiablo, before the database is patched it is worth looking on the Oracle support site, support.oracle.com, at the known issues that this patch may introduce, which may stop you losing more than you gain!
You will need a valid support license to log into the Oracle support site but there is a note for 9.2.0.8 filed under:
9.2.0.8 Patch Set - Availability and Known Issues [ID 388281.1]

How should I support and continue development on a forked application?

We have a large application that runs at roughly 5 locations. None of these locations are running the same version of the application. This makes patching and updating very complicated.
Try to follow this example: We'll call the application I'm talking about "application A". Now we want to roll out an application B to one of these locations and it must implement application A. We'll have to modify A to accept B's requirements. However, our development version of A (the version that eventually each location will have) must have support for B as well. This means we have to rollback to the state of the software running on the site requiring B as well as make these changes to our development version of A. This also means that the 4 other locations are implementing a version of A without B support.
So one can see where it gets frustrating to control the versioning. If we want to support sites 2-5 we can't use our development version of the source, we have to rollback to the specific version on that site. What's the best way to do this? Keep in mind we're using Visual Studio 2008 and Team Foundation Server.
The first thing I would recommend is to refactor your app so that every location uses the same version of the core product.
Try to logically break specific functionality out into its own modules, and access that functionality through an adapter-type pattern.
John points out a crucial first step. Nothing good can happen until your code base is internally stable.
But even after you split your app into reusable components with well-defined interfaces, you will likely still have to maintain multiple versions ("forks") of many/all those components and develop their releases in parallel. That's where branching comes in.
I highly recommend reading the new TFS Branching Guidance papers cover-to-cover. I was quite critical of earlier revisions, but the documentation team has really improved their offering here. (thanks in part to additional feedback as TFS finds broader adoption...but in equal part to finally paying attention to established SDLC norms that far predate Microsoft's entry to the marketplace)

Tips to upgrade workstations for development team?

I have secured the budget to upgrade the individual workstations and laptops. While newer, bigger screens were welcomed with enthusiasm, the thought of re-installing tools and settings caused most of them to blanch, and I got one "Do I really have to?".
How much downtime do you usually have when you move to a new machine?
Do you employ tools or scripts to set up your dev environment, tools, DBs, debuggers, etc., specifically for a Windows environment?
Is there a standard image that you keep and then let devs move in and tweak the machine as necessary?
My company essentially virtualized in order to stop wasting so much time with upgrades/system failures.
Whenever a desktop/laptop failed, we'd have to spend the better part of a day fixing it and reloading the software.
So we went out, bought iMacs for everyone, and loaded Parallels (a VMware-like product for OS X) on them. Then we made a standard dev image for everyone and just copied it to everyone's machines.
Essentially, if anyone's configuration got messed up, we just loaded in a fresh image and kept on truckin'. Saved a lot of time.
Some additional benefits:
When new software is out, we just make a new image and distribute it. No OS re-installs or anything like that.
If hardware changes, it doesn't matter; just move the image.
You can run multiple OSes concurrently for testing.
You can take "snapshots" of your current image and revert if you really messed something up.
Multiple builds on the same machine... since you can run multiple OSes.
Surprisingly the overhead of a virtualized system is quite low.
We only run the software on a real machine for performance tuning/testing purposes.
One day is generally enough for upgrades. I do keep digital copies of VS.NET, which are much easier to install.
When it comes to other tools, generally it's just better to go to the websites and install the latest versions.
Also, it's a good idea to install tools whenever you need them instead of trying to install everything at the same time.
The last time I upgraded to a new machine, I think it took about 4 hours to get most of the necessary tools reinstalled. Over time, I've had to re-install quite a few more tools, but I think it's worth it.
If you can get a ghost/image of the default tool set (Visual Studio 2003-2008, Eclipse, NetBeans, or whatever you're using), and all the major service packs, that would help a lot with the initial setup.
I think the downtime is definitely worth it, a new, faster machine will make anyone more productive.
You can have zero downtime by having both machines available, though you will not have as much productivity.
This depends on the number of tools needed by the development team. Tools such as Rational Software Architect can take hours to install on their own. Having the developers list the applications they need before the move can help you optimize your deployment strategy. Both machines should be available for a fixed period of time; having them both lets developers keep working while kicking off long-running installs at the same time.
Creating a standard image based on the list provided to you can improve efficiency. Having the relevant software on a share could also let them cherry-pick as needed and give the development team the feeling that they can go back as necessary.
Tools to assist in capturing user settings exist. I have only ever had experience with Doctor Mover. If you have 100 or more developers to move, it may be worth the cost. I can't complain too much, but it wasn't perfect.
I have never had a problem with just getting a list of all the software a particular user uses. In fact, I have never found the base install to be much of an issue. The part I tend to spend the most time on is re-configuring all of the user's custom settings (very common with developers, I find). This is where it is very valuable to have the old machine around for a while, so that the user can at a minimum remote-desktop to it and see how they have things set up.
Depending on how your team works, I would highly recommend that every user receiving a new computer get the latest source tree from your source control repository rather than copying entire directories. And I would also recommend doing that before actually sending the old workstation elsewhere, or even disconnecting it.
One of the great things about tools like CVS and SVN is that it is quite easy for developers to end up with an unofficial "personal branch" from things that are not properly checked in, merged, etc.
While it will cost time to deal with the shift if things are not properly synchronized, it is an invaluable opportunity to catch those things before they come back to haunt you later.
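If you're on SVN, for instance, a quick pass like this on the old machine will surface anything that never made it into the repository (the repository URL below is a placeholder):

    # On the OLD machine, in each working copy: '?' lines are files SVN
    # has never seen; 'M' lines are uncommitted local edits.
    svn status

    # Commit or shelve whatever it reports, then pull a clean tree
    # on the new machine:
    svn checkout http://svnserver/repos/myproject/trunk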
