How could Database Projects and TFS help our dev team? - visual-studio-2010

I have just started working with a new company in a very small IT department, and an even smaller dev team (just 2 developers including me). We mainly develop in house web applications for the company.
My background is in desktop applications, so this job comes with a slight learning curve for me, having never developed ASP.NET web applications before. Currently we do not use TFS; however, I have made the suggestion and it is something we are going to be adopting soon.
I am also considering recommending that we move our SQL databases into database projects. Currently we do nightly backups to protect the data, but all the updates are manual: we connect to the database and execute queries by hand.
I'm not a DBA, but in my last job we were in the process of migrating our databases to database projects and the DBAs seemed to love the idea. What would the benefits and potential downfalls of this be? Would it help us update the databases in our live environment after development has been done? Obviously we don't want to lose any data; we just want to update tables, stored procedures, and so on.
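To give an idea of what those manual updates look like today, here is a rough sketch of the kind of script we run by hand (the table, column, and procedure names are invented purely for illustration):

    -- Purely illustrative; this is the sort of change script we execute manually.
    -- Add a column only if it is not already there, so existing data is untouched.
    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID('dbo.Orders') AND name = 'ShippedDate')
        ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;
    GO

    -- Update a stored procedure in place.
    ALTER PROCEDURE dbo.GetOpenOrders
    AS
        SELECT OrderId, CustomerId, ShippedDate
        FROM dbo.Orders
        WHERE ShippedDate IS NULL;
    GO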
As a side question: I have very limited knowledge of TFS, and although we are going to be using it for version control, is it possible to use TFS to update our live websites automatically once development has finished?
Sorry if this is quite a broad question; I am attempting to research this myself, but I would like to hear from people who actually use these products and do these things.
Thanks

About database projects: I, and several DBAs I know, have had mixed experiences with them. I'm not sure they are exactly where they should be at this time, but that may simply be a function of how I work. The deployment model is... difficult, and it can result in some unexpected behavior. If you go this route, test, test, test to make sure you understand exactly what's happening.
If you are just trying to get version control for the database, you might consider SQL Source Control from Red Gate. It looks pretty nice and hooks into TFS. I used one of the early versions (beta and 1.0) for a while and was very happy with it. Now that I think about it, I'm not sure why I don't have it here... ;)
As far as deploying out of TFS, you can absolutely do this. We have a build server set up so that whenever code is checked in, the build server automatically spins up to compile the code and deploy it out to one of our testing sites. Look here for a primer to get you started. This does require some configuration on your web server to support it properly, and the documentation is spotty at best.
Once you are happy with the test area, you can hook something into changes to the Build Quality so that it pushes the code to a staging or even production server... or simply have a build set up to recompile and deploy out there. I don't recommend doing production pushes this way, though; not because of a technical issue, but rather a timing one. It's usually much faster to just copy/paste from one location to production when necessary, thereby limiting downtime.

Related

Any scenarios where Meteor.js autopublish would actually be useful?

Lately I've been getting more and more interested in Meteor.js. At the moment I'm developing a new web project of mine. What I can't get out of my mind is the autopublish feature of Meteor. At the time of writing, my MongoDB has a total of 32453 records, so, as you can probably guess, I had to turn off autopublish and publish/subscribe manually.
I've read a handful of guides now and it seems to be completely common practice to turn off autopublish as soon as your application is created. This makes me wonder: does the feature have any practical use in a real-world scenario? I can see it being useful for the shock-and-awe effect of the examples, but aside from that it seems more or less pointless. I might be missing something very obvious, though.
Autopublish is designed to be turned off before production. It's simply a feature to speed up development in the early stages, and that's all. From the Meteor Docs:
By default, a new Meteor app includes the autopublish and insecure packages, which together mimic the effect of each client having full read/write access to the server's database. These are useful prototyping tools, but typically not appropriate for production applications. When you're ready, just remove the packages.
You are not missing anything. It was added to make the examples work and to get users up and running quickly when working on new projects. I can't think of a compelling reason for a production app to have autopublish on.

Set up a development environment for MVC2 as it would be for a development team

So at work we have our environment set up as most of you probably do as well. We have a centralized code base (controlled through SVN), which runs off a database on the same server (Integration). We bring this code base down and copy the database locally to work with on our own machines.
This is what I need to figure out how to set up. I want to set up a database in SQL Server 2008 locally, have it connected to my MVC 2 app, and also have the site set up locally in IIS so I can test it without going into the debugger and running in the VS2010 development server every time.
So far my searching hasn't really turned up any articles that explain how to set this up, even though I feel like it is the most common thing to do (as most software shops are set up this way).
Any sources or directions would be awesome.
Thanks!
I am running Windows 7 Ultimate, Visual Studio 2010, SQL Server 2008, and IIS (whichever version comes with Windows 7).
The answer is "it depends". Although most software shops are set up like this, there are tweaks in the setup, especially when it comes to the database.
I found 2 cases:
In most places where I have worked, there is a DEV database server and the developers are given access to a particular database which the entire team works on. They install SQL Management Studio on the dev machines for connecting to the server/database.
Some shops have SQL Express set up on each developer machine, where they maintain a local copy of the database (same as yours)... This comes with the additional headache of syncing multiple copies of the database. We used this with Visual Studio Database Projects in the past and it worked like a charm in many cases, where we would generate "delta" updates and apply them to the server database. Obviously, these updates were done by someone who knew the VS DB Pro features and was given some dedicated hours to perform the sync.
I still prefer a "controlled environment" as opposed to #2, where schema changes are controlled by only a few...
Just my 2 cents...
Lots of this really depends on the app and local details, but we do the same sorts of things all the time. First and foremost, you'll want to develop some standardization and/or conventions about the environment: it makes life a lot easier if everyone agrees that they should be running the local test DB at .\SQLEXPRESS, and if they can agree on what the local URLs should be.
Perhaps the tallest pole in the tent is automating the database setup. There are some real challenges there, especially if your app needs a significant amount of data to be usable. I haven't found a perfect solution here; typically we use a combination of a utility called sseutil to create database instances and a database migration framework to make schema changes. Something like RoundhousE looks compelling here.
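As a rough sketch of the migration-tracking idea that frameworks like RoundhousE are built around (the table, column, and script names below are hypothetical, not any tool's actual schema), the core trick is recording which change scripts have already run so the same script can be applied safely to any developer's copy:

    -- Track which change scripts have been applied to this copy of the database.
    IF OBJECT_ID('dbo.SchemaVersion') IS NULL
        CREATE TABLE dbo.SchemaVersion (
            ScriptName NVARCHAR(255) NOT NULL PRIMARY KEY,
            AppliedOn  DATETIME      NOT NULL DEFAULT GETDATE()
        );
    GO

    -- A change script only runs once, no matter how many times the file is executed.
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion
                   WHERE ScriptName = '0002_add_customer_email')
    BEGIN
        ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;  -- hypothetical table
        INSERT INTO dbo.SchemaVersion (ScriptName) VALUES ('0002_add_customer_email');
    END
    GO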

Why should my development team have a build server?

We know this is good to have, but I find myself justifying it to my employer. Please pitch in on why a development team needs a build server.
There are multiple reasons to use build servers. In no particular order and off the top of my head:
You simplify the developers' workflow and reduce the chance of mistakes. Your build server can take care of multiple steps such as checking out latest code, having required software installed, etc. There's no chance of a developer having some stray DLLs on their machine that can cause the build to pass or fail seemingly at random.
Your build server can replicate your target environment (operating system, etc.) and there's less of a chance of something working on developers' desktops and breaking in production.
While it's a good practice for developers to test everything they check in, sometimes they just don't. Then it's good to have the build server there to catch test errors and let the team know the product is broken.
Centralized builds provide easy access to code metrics -- which tests passed, which failed, how often, how well is your code covered by your tests, etc. Having a solid understanding of the quality state of the codebase reduces maintenance and testing costs by providing timely feedback that allows errors to be fixed quickly and easily.
Product deployment is simplified -- the developer or QA doesn't have to remember multiple manual steps. It can be easily automated.
The link between developers and QA is simplified. QA personnel can go to a known location to grab the latest, properly versioned builds.
It's easy to set up builds for release branches, providing an extra safety net for products in their release stage, when making code changes must be done with extra care.
To avoid the "but it works on my box" issue.
Have a consistent, known environment where the software is built to avoid dependencies on local dev boxes.
You can use a virtual server to avoid (much) extra cost if you need to.
You get immediate knowledge of which unit tests currently pass and which do not; furthermore, you'll also know if a once-passing unit test starts to fail.
This should sum up why it is critical to have a build server:
http://www.codinghorror.com/blog/2006/10/the-build-server-your-projects-heart-monitor.html
It's a continuous quality test dashboard; it shows you statistics about how the quality of your software is doing, and it shows them to you now. (JUnit, Cobertura)
It makes sure developers aren't hamstrung by other developers breaking the build, and encourages developers to write better code. (FindBugs, PMD)
It saves you time and money throughout the year by getting better code from developers the first time - less money on testing and retesting - and by getting more code from the same developers, because they're less likely to trip each other up.
Two main reasons that non-technical people can relate to:
It improves the productivity of the dev team because problems are identified earlier.
It makes the state of the project very obvious. I've shown my management the build status dashboard and now they look at it all the time.
One more thing. Something like Hudson is very simple to set up - you might want to simply run it somewhere in a corner for a while and then show it later.
This is my principal argument:
all official releases must be built in a controlled environment. No exceptions.
simply because you never know how the developers create their personal releases.
You also don't need to talk about a build server as in "blade that costs an arm and a leg". The first build server I set up was a desktop machine that had been sitting unplugged in a corner. It served us very well for more than 3 years.
Once you have your build machine, you can start adding some features (Hudson is great) and implement everything that the other posters mentioned.
Once your build machine becomes indispensable to your organization (and everyone sees its benefits), you will be able to ask for a shiny new blade if you wish :-)
The simplest way to convince your employer to get a build server is to tell them that they will be able to release faster and with better quality.
Faster releases come from the immediate feedback about the quality of the build. If someone breaks the build, he or she can fix it immediately, thus avoiding a delay in the build and release schedule. Without a build server, the team has to spend time working out what happened, when it happened, and how to fix it.
Better quality is achieved by the build server running bug-detection tools automatically every time someone checks changes into the version control system. You don't mention the main development language in your organization, but such tools, from advanced commercial ones to simple free ones, exist for practically all languages. Lint, FxCop, FindBugs and PMD come to mind.
You may also check this presentation on benefits of continuous integration for a more extensive discussion.

What kind of safeguards do you use to avoid accidentally making unintended changes to your production environment?

Because we don't have a good staging environment we often have to debug issues on our production systems. We have web, application, and database servers.
What kind of safeguards do you use to avoid accidentally making unintended changes to your production environment when doing this?
EDIT:
The application is a very complex B2B vertical web application. There is a lot of data involved. Some tables have close to 100 million records.
EDIT:
The staging environment we have in place does not have the capacity to mirror production. There are also hundreds of gigabytes of data files involved besides the actual database data.
EDIT:
We do use source control for the code, but not for the stored procedures. There are some old stored procedures in source control, but nobody keeps them updated anymore.
The main concerns are the database and data on the file system.
BTW, I am a consultant at this company, not an actual employee.
The most direct answer is: "Don't do that."
Source control: nothing like a rollback when things go irreparably wrong. Also, a diff can help you replicate the changes to other production systems.
New production releases go via our systems guys; the programmers and developers can only request that their new system go live. Approval is needed as well, and we show that each change has been tested (by including a snapshot of everything that was tested in this release in the production request).
We keep the previous production releases for fallback in case of issues.
If things do break (which they shouldn't do often, given a proper testing procedure and managed releases), then we can either roll back or hotfix. Often, when things break in live and the fix is small, we hotfix, then move the fix to test to do a proper test.
Regardless, sometimes things get by...
Only allow certain accounts write access, so you have to log in differently to make a change.
On the web server, have two directory structures that mirror each other: one where only one ID can write, and a staging directory where everyone can write.
On the database server, have one production DB where only one ID can write, and a staging DB where everyone can write. The staging DB can have the nightly backup restored to it (a sketch of that restore is below).
HOWEVER, if you have a bad query or some resource hog in your staging system, resources will be pulled from production and the machine could hang.
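To illustrate the nightly-restore idea (database names, logical file names, and paths below are all made up, and the MOVE logical names must match whatever is actually in your backup), refreshing the staging copy from last night's production backup might look like:

    -- Hypothetical names and paths: overwrite the staging database with last
    -- night's production backup so everyone can write to it without touching production.
    RESTORE DATABASE StagingDb
    FROM DISK = N'D:\Backups\ProductionDb_nightly.bak'
    WITH REPLACE,
         MOVE 'ProductionDb_Data' TO N'D:\Data\StagingDb.mdf',
         MOVE 'ProductionDb_Log'  TO N'D:\Data\StagingDb_log.ldf';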
For web and application servers, I would try to copy the environment to a new location (but within the same environment) and have the affected people reproduce the behavior on the copy. This will at least give you a level of separation from accidentally screwing with 100% of your clients.
For Database Servers, I would configure user accounts on the production system to give them read only rights.
Read-only/guest accounts. Seriously. It's the same reason you don't always log in as root or Administrator.
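A minimal sketch of what that looks like on SQL Server (the login and database names are hypothetical):

    -- Hypothetical names: a support login that can read production but never write.
    CREATE LOGIN support_ro WITH PASSWORD = N'use-a-strong-password-here';
    USE ProductionDb;
    CREATE USER support_ro FOR LOGIN support_ro;
    EXEC sp_addrolemember N'db_datareader', N'support_ro';
    DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO support_ro;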
This is a tough thing, and it goes with the territory of "no staging environment."
For many reasons, it's best to have a dedicated duplicate of PROD that you can use to stage deploys to... and to debug on, but I know that sometimes when you're starting out, that doesn't come together as quickly or thoroughly as we'd want.
One thing I've seen work is the use of VMs: aside from the debug environment, you can create a mini-PROD in a VM and use that to debug. This may not be practical given the type of app you're developing, so additional detail in that area would be helpful.
As for avoiding changes to PROD during debugging: is there a reason you'd need to change anything to facilitate debugging? If so, that might be worth looking into solving another way.
Version control is immensely helpful for controlling changes to production environments - just make your production environment a working copy of the appropriate directory or directories from the repository. When you roll out an update, your source control system makes sure that ALL the changed files get copied. When an update breaks something, you can roll the production working copy back to the last revision which wasn't broken. Also, you can check your production WC out from a tag instead of from the trunk; that way you can decide which repository revisions to apply to the production environment by adjusting the tag.
If you're not familiar with the concepts of version control systems, I'd advise you to do some research. They're conceptually complex but incredibly useful and powerful. The Wikipedia article is a good place to start:
http://en.wikipedia.org/wiki/Revision_control
I'm sorry, but you have to have a staging environment. There's no getting around this. If it means you have to cull the size of your datasets, then that's what you have to do. Use VMware and VMware Converter to import the production systems during down periods, if you have them (this is a many-hour process, so it may not be practical).
There is a certain class of problems you can't solve without full access to production DBs (or a copy); performance is one of these. But you really should build a staging environment, even if it's on someone's desktop machine with a stripped-down dataset.
That aside, I've had to live with a few of these situations in the past, and really, there's nothing you can do except take lots of backups. Every change you make should be preceded by an incremental backup. That way, if you fubar something, the amount you've lost is not substantial. SQL Server can take differential backups that limit the amount of disk space used for backups. Oracle can as well.
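For example (database name and file paths invented), you could take one full backup as a base and then a quick differential before each risky change:

    -- Hypothetical names and paths: one full backup as a base, then small
    -- differential backups so a mistake only costs the work since the last one.
    BACKUP DATABASE ProductionDb
    TO DISK = N'D:\Backups\ProductionDb_full.bak'
    WITH INIT;

    BACKUP DATABASE ProductionDb
    TO DISK = N'D:\Backups\ProductionDb_diff.bak'
    WITH DIFFERENTIAL, INIT;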
In case you really have no other choice, and it is likely to be a chronic situation... consider adding some way, in the application data (files or database), to flag a set of data as "please, god, do not actually change production state with this data". Combined with data dumps at critical points in the process when this flag is active, you may be able to exercise most of the production logic without the data actually being acted upon.

MS Team Foundation Server in distributed environments - hints tips tricks needed

Is anyone out there using Team Foundation Server within a team that is geographically distributed? We're in the UK, trying to work with a team in Australia, and we're finding it quite tough.
Our main two issues are:
Things are being checked out to us, without our asking, on a get-latest.
Even when using a proxy, most things take a while to happen.
Lots of really annoying little things like this are hardening our arteries, stopping us from delivering code, and frankly creating a user experience akin to pushing golden syrup up a sand dune.
Is anyone out there actually using TFS in this manner, on a daily basis with (relative) success?
If so, do you have any hints, tips, tricks or gotchas that would be worth knowing?
P.S. Upgrading to CruiseControl.NET is not an option.
Definitely upgrade to TFS 2008 and Visual Studio 2008, as it is the "v2" version of Team System in every way. Fixes lots of small and medium sized problems.
As for "things being randomly checked out" this is almost always due to Visual Studio deciding to edit files on your behalf. Try getting latest from the Team Explorer, with nothing open in Visual Studio, and see if that behavior persists. I bet it won't!
Multiple TFS servers is a bad idea. Make sure your proxy is configured correctly, as it caches repeated GETs. That said, TFS is a server connected model, so it'll always be a bit slower than true "offline" source control systems.
Also, if you could edit your question to contain more specific complaints or details, that would help -- right now it's awfully vague, so I can't answer very well.
We use TFS with a somewhat distributed team - they aren't too far away but connect via a slow and unreliable VPN.
For your first issue, get latest on checkout is not the default behaviour. (Here's an explanation) There is an add-in that will do it for you, though.
Here's the workflow that works for us:
Get latest
Build and verify nothing's broken
Work (changes pended)
Get latest again
Deal with merge conflicts
Build and verify nothing's broken
Check in
[edit] OK, it looks like you rephrased this part of the question. Yes, Jeff's right: VS decides to check some files out "for you", like .sln and .proj files. It also automatically checks out any source file that you edit (that's what you want, though, right? You can change that setting in Tools > Options > Source Control.)
The proxy apparently takes a while to ramp up (we don't use it), but once it has cached most of the tree it's supposed to be pretty quick. Can you do some monitoring and find the bottleneck(s)?
Anything else giving you trouble, other than get-latest-on-checkout and speed?
From my understanding, you can have multiple TFS application servers in different locations. They can either both talk to the same SQL Server, or you could use SQL Server mirroring. Having your own local TFS server would likely speed up your development times.
