How do you sell continuous integration to clients?

I know that using continuous integration improves the quality of my code base, and speeds up releases, but what is the best way to convince clients that they want it on their next project?

Say exactly what you've said in the question:
Speeding up releases = earlier market penetration = more money
Improving code quality = less time fixing bugs = less cost
So long as you can help them set it up reasonably quickly and cheaply, I can't see why it would be a problem.

In addition to making the standard arguments I quote the data from this paper:
Alan MacCormack, Chris Kemerer, Michael Cusumano, and Bill Crandall, “Trade-offs between Productivity and Quality in Selecting Software Development Practices”, IEEE Software, September-October 2003
Namely:
Integration/regression testing at each code check-in = 36% reduction in defect rate
Daily builds = 93% rise in LOC output/programmer
So CI gives you higher productivity and better quality. Who doesn't want that?

You have made some assertions. If you want to sell the idea to your clients you are going to have to answer these questions:
How does it improve your code quality?
Compilation/build issues are identified on a regular basis. And if used in conjunction with automated integration and unit tests, you will be able to identify bugs just as regularly.
How does it speed up your releases?
If you automate the build and deployment process you eliminate the downtime required from the development team to ship a new build for testing.
You have a history of successful builds to fall back on if you run out of time and are prepared to ship with incomplete features.
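To make the "automate the build and deployment process" point concrete, here is a minimal sketch of the kind of script a CI server could run on every check-in. It is only an illustration: the build and test commands (msbuild, vstest.console) and the file paths are assumptions and should be replaced with whatever your project actually uses.

    # ci_checkin_build.py - illustrative sketch of a per-check-in CI job.
    # The build/test commands and paths below are assumptions; substitute your own.
    import subprocess
    import sys

    STEPS = [
        ["msbuild", "MySolution.sln", "/p:Configuration=Release"],   # compile everything
        ["vstest.console", r"Tests\bin\Release\Tests.dll"],          # run the automated tests
    ]

    def main():
        for step in STEPS:
            print("Running:", " ".join(step))
            if subprocess.run(step).returncode != 0:
                # A failing step marks the build as broken immediately,
                # so the team learns about the problem within minutes.
                print("Step failed - build is broken.")
                return 1
        print("Build and tests passed - this build can be shipped for testing.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Hooking a script like this to every check-in is what turns "we found out on release day" into "we found out ten minutes after the commit".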

I am not sure how interested clients are in continuous integration. I think selling the idea to the development team is a more worthwhile exercise in many cases.
That said, clients will always like to hear:
Your project will always be in a working state.
All code is tested as we write it.

Related

Continuous Integration tools

I'm doing research regarding continuous integration tools and their benefits. For my research I'm looking at the following tools:
GitLab CI
Jenkins
Bamboo
GoCD
TeamCity
Now I won't bother you with all the requirements and benefits. But so far I'm not finding many differences between the tools except for these:
Fan-in/fan-out support (GoCD)
Community size; Jenkins and GitLab seem to have the most contributors
Costs
Open source or not
Number of plugins available
I was wondering if some people who have had to choose a continuous integration tool as well could share their experience and why they chose that tool, and whether there are certain differences worth thinking about before choosing that I didn't cover.
Now I'm leaning towards GoCD because of the fan-in/fan-out support and the visualisation of the continuous delivery pipeline. Does anybody have experience with the support on issues for this tool?
Thanks in advance,
Disclaimer: I was an active contributor to GoCD until last fall.
I haven't used GitLab CI so won't talk about that :) Also, I haven't used any of these tools in the past year.
I think TeamCity is a good CI tool. It integrates very well with the IDE if you want to debug some failures. The test reports are brilliant. But I don't think they are that advanced in the CD space, and in my opinion you need both. But if you are interested only in CI, you might want to give it a look. However, you will miss out on some of the good features of GoCD I've mentioned below.
Jenkins has a huge community, but Jenkins has its own disadvantages. Many times one plugin doesn't work because of compatibility issues with another plugin, for instance.
GoCD has fan-in/fan-out support, which avoids many unnecessary builds, saving a lot of build time and resources. The value stream map is intuitive and helps to get a better picture of the build stage from a developer's, QA's or even Delivery Manager's point of view. The pipeline modeling in GoCD is also very good. If you read Jez Humble and David Farley's book on Continuous Delivery, you will see the power behind such a build design.
Now, to your second question:
Now I'm leaning towards GoCD because of the fan-in/fan-out support and the visualisation of the continuous delivery pipeline. Does anybody have experience with the support on issues for this tool?
Good to hear that :P I love GoCD. The support is good. If you choose to go the Open Source way, the mailing list is pretty active. You can expect a reply from the GoCD team within a day or two. Of course, your questions have to be genuine and specific. Looking through the forums before posting a question helps :)
You can also choose to buy support for GoCD from ThoughtWorks. They used to offer multiple support tiers; I'm not sure of the current support model. You might face issues only when your DB grows too huge (~5-7 GB), at which point you might want to go for the proprietary Postgres DB support from ThoughtWorks. I've seen very few users of GoCD with that DB size.
I have a lot of experience with TeamCity and some with GoCD. If you are interested in fan-in/fan-out, it's also possible to do the same in TeamCity -- it's called Build Chains.
Also, there is a good post about this topic on the official blog.
If I could choose, I would prefer TeamCity. It's a more mature and feature-rich product, suitable for use in a corporate environment.

What’s the ROI of Continuous Integration?

Currently, our organization does not practice Continuous Integration.
In order for us to get a CI server up and running, I will need to produce a document demonstrating the return on the investment.
Aside from cost savings by finding and fixing bugs early, I'm curious about other benefits/savings that I could stick into this document.
My #1 reason for liking CI is that it helps prevent developers from checking in broken code, which can sometimes cripple an entire team. Imagine if I make a significant check-in involving some db schema changes right before I leave for vacation. Sure, everything works fine on my dev box, but I forget to check in the db schema change script, which may or may not be trivial. Well, now there are complex changes referring to new/changed fields in the database, but nobody who is in the office the next day actually has that new schema, so now the entire team is down while somebody looks into reproducing the work I already did and just forgot to check in.
And yes, I used a particularly nasty example with db changes but it could be anything, really. Perhaps a partial check-in with some emailing code that then causes all of your devs to spam your actual end-users? You name it...
So in my opinion, avoiding a single one of these situations will make the ROI of such an endeavor pay off VERY quickly.
If you're talking to a standard program manager, they may find continuous integration to be a little hard to understand in terms of simple ROI: it's not immediately obvious what physical product that they'll get in exchange for a given dollar investment.
Here's how I've learned to explain it: "Continuous Integration eliminates whole classes of risk from your project."
Risk management is a real problem for program managers that is outside the normal ken of software engineering types who spend more time writing code than worrying about how the dollars get spent. Part of working with these sorts of people effectively is learning to express what we know to be a good thing in terms that they can understand.
Here are some of the risks that I trot out in conversations like these. Note, with sensible program managers, I've already won the argument after the first point:
Integration risk: in a continuous integration-based build system, integration issues like "he forgot to check in a file before he went home for a long weekend" are much less likely to cause an entire development team to lose a whole Friday's worth of work. Savings to the project avoiding one such incident = number of people on the team (minus one due to the villain who forgot to check in) * 8 hours per work day * hourly rate per engineer. Around here, that amounts to thousands of dollars that won't be charged to the project. ROI Win!
Risk of regression: with a unit test / automatic test suite that runs after every build, you reduce the risk that a change to the code breaks something that used to work. This is much more vague and less assured. However, you are at least providing a framework wherein some of the most boring and time-consuming (i.e., expensive) human testing is replaced with automation.
Technology risk: continuous integration also gives you an opportunity to try new technology components. For example, we recently found that Java 1.6 update 18 was crashing in the garbage collection algorithm during a deployment to a remote site. Due to continuous integration, we had high confidence that backing down to update 17 had a high likelihood of working where update 18 did not. This sort of thing is much harder to phrase in terms of a cash value but you can still use the risk argument: certain failure of the project = bad. Graceful downgrade = much better.
CI assists with issue discovery. Measure the amount of time it currently takes to discover broken builds or major bugs in the code. Multiply that by the cost to the company for each developer using that code during that time frame. Multiply that by the number of times breakages occur during the year.
There's your number.
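As a purely illustrative sketch of that calculation (every number below is a placeholder assumption, not data; plug in your own measurements):

    # ci_roi_estimate.py - back-of-the-envelope cost of slow breakage discovery.
    hours_to_discover_breakage = 4       # assumed average time a breakage goes unnoticed
    developers_affected = 10             # assumed developers blocked during that window
    hourly_cost_per_developer = 75.0     # assumed fully loaded hourly cost
    breakages_per_year = 50              # assumed number of breakages per year

    annual_cost = (hours_to_discover_breakage
                   * developers_affected
                   * hourly_cost_per_developer
                   * breakages_per_year)

    print(f"Estimated annual cost of late breakage discovery: {annual_cost:,.0f}")
    # With these placeholder numbers: 4 * 10 * 75 * 50 = 150,000

Even with conservative inputs, that figure usually dwarfs the cost of standing up a CI server.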
Every successful build is a release candidate - so you can deliver updates and bug fixes much faster.
If part of your build process generates an installer, this allows a fast deployment cycle as well.
From Wikipedia:
when unit tests fail or a bug emerges, developers might revert the codebase back to a bug-free state, without wasting time debugging
developers detect and fix integration problems continuously - avoiding last-minute chaos at release dates (when everyone tries to check in their slightly incompatible versions)
early warning of broken/incompatible code
early warning of conflicting changes
immediate unit testing of all changes
constant availability of a "current" build for testing, demo, or release purposes
immediate feedback to developers on the quality, functionality, or system-wide impact of code they are writing
frequent code check-in pushes developers to create modular, less complex code
metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and features complete) focus developers on developing functional, quality code, and help develop momentum in a team
a well-developed test suite is required for best utility
We use CI (two builds a day) and it saves us a lot of time by keeping working code available for test and release.
From a developer's point of view, CI can be intimidating when the automatic build result, sent by email to the whole world (developers, project managers, etc.), says:
"Error in loading DLL Build of 'XYZ.dll' failed." and you are Mr. XYZ and they know who you are :)!
Here's my example from my own experiences...
Our system has multiple platforms and configurations with over 70 engineers working on the same code base. We suffered from somewhere around 60% build success for the less commonly used configs and 85% for the most commonly used. There was a constant flood of e-mails on a daily basis about compile errors or other failures.
I did some rough calculations and estimated that we lost an average of an hour a day per programmer to bad builds, which totals nearly 10 man days of work every day. That doesn't factor in the costs that occur in iteration time when programmers refuse to sync to the latest code because they don't know if it's stable; that costs us even more.
After deploying a rack of build servers managed by TeamCity we now see an average success rate of 98% on all configs, the average compile error stays in the system for minutes rather than hours, and most of our engineers are now comfortable staying at the latest revision of the code.
In general I would say that a conservative estimate on our overall savings was around 6 man months of time over the last three months of the project compared with the three months prior to deploying CI. This argument has helped us secure resources to expand our build servers and focus more engineer time on additional automated testing.
Our biggest gain is from always having a nightly build for QA. Under our old system each product, at least once a week, would find out at 2AM that someone had checked in bad code. This meant there was no nightly build for QA to test with; the remedy was to send release engineering an email. They would diagnose the problem and contact a dev. Sometimes it took as long as noon before QA actually had something to work with. Now, in addition to having a good installer every single night, we actually install it on VMs of all the different supported configurations every night. So now when QA comes in, they can start testing within a few minutes. Under the old way, QA came in, grabbed the installer, fired up all the VMs, installed it, then started testing. We save each QA person probably 15 minutes per configuration they test on.
There are free CI servers available, and free build tools like NAnt. You can implement it on your dev box to discover the benefits.
If you're using source control, and a bug-tracking system, I imagine that consistently being the first to report bugs (within minutes after every check-in) will be pretty compelling. Add to that the decrease in your own bug-rate, and you'll probably have a sale.
The ROI is really the ability to provide what the customer wants. This is of course very subjective, but when implemented with involvement of the end customer, you will see that customers start appreciating what they are getting, and hence you tend to see fewer issues during User Acceptance.
Would it save cost? Maybe not.
Would the project fail during UAT? Definitely no.
Would the project be closed in between? There is a high possibility of that when the customers find that it would not deliver the expected result.
Would you get real-time and real data about the project? Yes.
So it helps in failing faster, which helps mitigate risks earlier.

How to effectively measure developer's work hours? [closed]

I have a few software developers working for my projects and I would like to provide them a way to register the time they spend on real development.
There is goodwill to register development hours, no force involved, but we try to avoid techniques like an Excel sheet register because that is so uncomfortable.
I can track svn commits, but this is unreliable. Developers also help support different projects during the day, so assuming they work on one project for the whole day is not true.
I've seen utilities that pop up a message every hour to confirm the project you're working on, but this is annoying.
Some kind of active-window-title-analyzer might help (you can get the solution name from there in the case of Visual Studio) but I have no experience with such an idea.
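For illustration, a minimal sketch of such an analyzer (Windows-only, via ctypes; the one-minute polling interval and plain print-style logging are just assumptions) could look like this:

    # window_title_logger.py - rough sketch of an active-window-title analyzer.
    # Windows-only: uses the Win32 API through ctypes.
    import ctypes
    import time

    def active_window_title():
        user32 = ctypes.windll.user32
        hwnd = user32.GetForegroundWindow()
        length = user32.GetWindowTextLengthW(hwnd)
        buf = ctypes.create_unicode_buffer(length + 1)
        user32.GetWindowTextW(hwnd, buf, length + 1)
        return buf.value

    if __name__ == "__main__":
        while True:
            # For Visual Studio the title normally contains the solution name,
            # which could be mapped back to a project.
            print(time.strftime("%Y-%m-%d %H:%M:%S"), active_window_title())
            time.sleep(60)  # sample once a minute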
If you have any experience with programmers/designers work hours registration, please share with me.
Thanks
It's a good question, and the very best way you can measure hours spent on a development project is not to measure hours spent at all.
You say there is goodwill to register hours, but I have my doubts. Realistically, from a developer's point of view, excessive time management is a distraction for most (perhaps all) developers on the planet.
I can understand why time is measured so excessively on oDesk. There is a good reason for this: project time is paid up front by the client to oDesk, and the developer needs to prove hours worked to oDesk. Payment is also guaranteed, and it's unlikely oDesk providers and developers ever meet in real life, so there is no trust.
Since it's unlikely you're paying your developers upfront, and it's more likely you have better trust established, you need to perhaps switch your attention from a management strategy that's kludgy and annoying to something way more useful.
Yes, ladies and gentlemen, I am talking about Scrum. Throw any notion of keeping hour-per-hour tabs on your developers out the window (they'll love you for it). And instead introduce Scrum management into the scenario.
Create some sprints (milestones) for your product development, and within those have a list of iterations (batched deliverables); try to keep your iterations down to a weekly period.
Create a product backlog, and make sure you're aware who exactly is working on what. Find someone on the team to act as a scrum master on your behalf, or take on this responsibility yourself. Make sure you have daily feedback meetings; keep them short and focused, to identify any risks that might get in the way of deliverables. Let the developers more or less drive the timing process, and get realistic estimates for tasks.
Read a book or 2 on scrum, and get others and team members involved in the learning curve. Tweak the base scrum methodology to best suit your particular style of management, and I assure you, you will have a very happy team.
Measure your time in man days, and try to avoid getting on the back of a developer for hour-by-hour progress...
There are various time-tracking software tools, as you have probably already seen from doing Google searches. But in all honesty, you are asking for the Holy Grail of time tracking. For example, do you consider a developer staring at her code and thinking as development time? She might stare at the screen for 1 hour and only type for 10 minutes. In this case it looks like they don't work much when really they worked for 1 hour and 10 minutes. I don't mean to say what you are asking for isn't a valid thing to want; it just seems to be one of those problems that doesn't have a perfect solution.
Good luck.
I think you're asking the wrong question and are headed for a slippery slope. There's so much that goes into development that has nothing to do with actually coding.
I think a better solution, if you're dead set on tracking something, is to track the time spent on activities NOT related to development. Of course there's some grey area there too. For example, a meeting to discuss user requirements should probably be counted towards development even though no coding will get done.
You need something like this dashboard to measure time on task. The only way to know the real software developer time is to have them track it. That way when they switch tasks they stop the clock, so to speak. I think the hardest thing would be getting the developers to use it as a measure of how long they work on a certain project or even code module, etc. If you can then use those metrics to reduce distractions and other time sucks, you might at least be able to get a decent rough estimate of how much time they spend coding versus email versus talking to other developers, etc.
If you're trying to measure what the developer has as their active window, you have to assume goodwill, because any decent developer can sneak around that if you're trying to turn the screws on them. I spend about a third of my "development time" in Firefox looking at references, for example.
Maybe ask the developers just to keep a log, so you know where their time is going? While that's not ideal, you're never going to do much better than that.
If you are trying to measure the time spent on distractions, disturbances and other tasks, would it not be in your developers' interest to give you this information willingly?
You said somewhere that you are implementing scrum.
If you really have to, either take it up at the daily scrum, making it part of the ritual, or add a very short daily meeting at the end of the day. Let the developers guesstimate how much of the day was spent on distractions, disturbances and other tasks. To me that feels like it will be as close to "correct" as any other way of measuring, considering the difficulties involved.
So, instead of having the developers note down the time, make it the scrum master's job to sum it up, and make this as painless as possible for the devs. Make sure that the developers gain something tangible from doing it as well, otherwise it is going to end up on the impediment list awfully quickly.
As Dean J implied, you have to trust the developers anyway.
It depends on your IDE - if you are using Eclipse then I recommend using the Mylyn plugin. You can measure the time spent on each task. Tasks can be fetched from most well-known task repositories, e.g. Tuleap. Details here.
The user only needs to put the task in active mode, and deactivate it when the task is finished to stop the timer. I think Mylyn will support such a process - if the status of a task changes then this would trigger active mode (if closed, then deactivate the task).
Sometimes development involves using a browser or a terminal. Eclipse can be used as a browser and as a terminal as well, so the developer does not need to leave Eclipse and almost every activity related to a task can be measured.
Try DotProject or XPlanner
An active window analyzer won't give you reliable results, because your developers will swap between programs (Outlook, File Explorer, version control, internet browser, etc.). Your proposed analyzer won't log that time, although it will very likely be part of the development time that the developer puts into the project (software development is not just coding in VS all the time).
Trying to measure a developer's work hours is the wrong notion. A proper question to ask is what is a programmer's effectiveness. That can't be measured by coding time, time spent sitting in front of a computer, or the like.
As Joel Spolsky put it well in a blog on software craftsmanship, software development "...is not a manufacturing process."
A related but somewhat different discussion appears in this SO article on an invasive programmer productivity measurement tool.
I definitely would not recommend using any time-measuring software that the developer is forced to use. It is a massive distraction to developers' concentration.
Instead, the following simple techniques can be used:
Spreadsheet: for smaller projects or teams of developers
There's probably nothing easier than to create and share a spreadsheet online, add project tasks to it, assign tasks to a developer, let the developers estimate hours for their tasks, let them update their task status (a very rough value between 0% and 100%) as they want, and let them specify the time (hours) it really took to complete the task.
So in the spreadsheet you may have columns:
task name, assigned to, estimated hours, real hours, done (%)
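A purely illustrative example (task names, people and numbers are made up):

    Task name        Assigned to   Estimated hours   Real hours   Done (%)
    Login screen     Alice         8                 10           100
    Export to PDF    Bob           16                -            40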
Google Drive spreadsheet may be the answer.
This is a very simple and fast method which distracts developer(s) minimally.
Scrum: for medium to large projects or teams of developers
Tasks and the Scrum information are recorded and kept on a board in the office and/or a special Scrum application can be used. A good web Scrum application is Pivotal Tracker which I would recommend for any size of project or team.
More about Scrum:
http://en.wikipedia.org/wiki/Scrum_(development)
In both cases, the product owners (clients or those who deal with clients), project managers and all developers can do the following better and faster:
communicate
estimate
see progress of the team and all individuals involved

Build Quality

We have 3 branches {Dev,Test,Release} and will have continuous integration set up for each branch. We want to be able to assign build qualities to each branch i.e. Dev - Ready for test...
Has anyone any experience with this that can offer any advice/best practice approach?
We are using TFS 2008 and we are aware that it has Build Qualities built in. What we are looking for is when to apply a quality and what kinds of qualities people use.
Thanks
:)
Your goal here is to get the highest quality possible in each branch, balanced against the burden of verifying that level of quality.
Allowing quality to drop in any branch is always harmful. Don't think you can let the Dev branch go to hell and then fix it up before merging. It doesn't work well, for two reasons:
Recovering is harder than you expect. When a branch is badly broken, you don't know how broken it really is. That's because each issue hides others. It's also hard to make any progress on any issue because you'll run into other problems along the way.
Letting quality drop saves you nothing. People sometimes say "quality, cost, schedule - pick any 2" or something like that. The false assumption here is that you "save" by allowing quality to slip. The problem is that as soon as quality drops, so does "velocity" - the speed at which you get work done. The good news is that keeping quality high doesn't really cost extra, and everyone enjoys working with a high-quality code base.
The compromise you have to make is on how much time you spend verifying quality.
If you do Test Driven Development well, you will end up with a comprehensive set of extremely fast, reliable unit tests. Because of those qualities, you can reasonably require developers to run them before checking in, and run them regularly in each branch, and require that they pass 100% at all times. You can also keep refactoring as you go, which lets you keep velocity high over the life of the project.
Similarly, if you write automated integration / customer tests well, so they run quickly and reliably, then you can require that they be run often, as well, and always pass.
On the other hand, if your automated tests are flaky, if they run slowly, or if you regularly operate with "known failures", then you will have to back off on how often people must run them, and you'll spend a lot of time working through these issues. It sucks. Don't go there.
Worst case, most of your tests are not automated. You can't run them often, because people are really slow at these things. Your non-release branch quality will suffer, as will the merging speed and development velocity.
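As a rough illustration of "require developers to run them before checking in", a pre-check-in gate can be as small as the sketch below; the test command is an assumption, so substitute whatever runs your fast suite:

    # precheckin.py - sketch of a "run the fast tests before check-in" gate.
    # The test command is an assumption; replace it with your own test runner.
    import subprocess
    import sys

    def main():
        print("Running the fast unit test suite before check-in...")
        result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
        if result.returncode != 0:
            print("Tests failed - fix them before checking in.")
        else:
            print("All tests passed - OK to check in.")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())

The point is not the tooling but the habit: the gate only stays tolerable if the suite is fast and reliable, which is exactly why flaky or slow tests erode the whole practice.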
Assessing the quality of a build in a deterministic and reproducible way is definitely challenging. I suggest the following:
If you are set up to do automated regression testing then all those tests should pass.
Developers should integration test each of their changes using an official Dev build newly installed on an official and clean test rig and give their personal stamp of approval.
When these two items are satisfied for a particular Dev build you can be reasonably certain that promoting this build to Test will not be wasting the time of your QA team.

Scrum: too much or not enough? [closed]

My company has recently started using Scrum; we've done 2 sprints. We're still learning, but we've definitely exposed and fixed some problems in our development process already. So in general I think it has been good for us.
In reading many of the internet musings about Scrum from evangelists, cynics and everyone in between, three common and somewhat contradictory themes have stood out to me:
Scrum implementation fails because the processes of Scrum are not followed closely enough.
Scrum implementation fails because the organization does not adapt Scrum to its own environment/culture/practices.
The processes of Scrum are not important; only the values in the Agile Manifesto matter.
Examples of these can be seen in the responses to these SO questions:
Have you had a bad experience with Scrum or Sprinting?
Is Scrum evil?
Is Agile Development Dead?
I have to admit that we're not yet following all the guidelines of Scrum: we haven't done a release at the end of the sprints, our Scrum Master doesn't want us to move tasks out of the sprint backlog near the end of the sprint so that he can see how much our planning was off (which means the burndown chart never goes to 0), and urgent customer support issues still have incredible power to disrupt everyone's planning, for a few examples.
My question is: in trying to solve these and other issues, is it better to try and be closer to the official Scrum processes, better to be closer to some of our pre-Scrum processes, or better to meditate on the principles of Scrum to try and come up with a different process altogether?
I would say that you are really missing one of the key components of agility if you don't release early and often. To the degree that you don't do this, your process is not agile and bound to suffer the same sorts of problems that traditional, plan-driven processes have. It may be that this is a temporary condition as you are just getting used to things, but you need to start releasing soon (and regularly).
You'll always have the problem with show-stoppers, but you may be able to help this by shortening your sprint length. The customer may not be able to wait a month, but they may be able to wait 2 weeks for some things. A shorter sprint length, then, may help you to defer some requests to the next sprint making them less disruptive. You also need to be upfront with the customer that the disruptions are actually causing your pace to suffer. They may voluntarily choose to wait if they know that their chosen features are being delayed by some requests.
Another observation that I would make is that, as with almost anything, it's better to start out by following the pattern as closely as you can while you are learning. Once you have a good grasp of the fundamental principles, you can then see where some principles can be bent, broken, or replaced much more clearly to improve the process. Until you really get it, the things you change may hurt or help -- you really have no idea since you don't have the experience that tells you how things ought to be working. Unless your Scrum master is really experienced, you may want to hew closer to the defined practices until you've got a few more sprints under your belt.
Almost everything I've read on Scrum says that one of the keys is to adapt the process to fit your own situation. No two development teams are the same, and different things work for different people.
The main ideas behind Scrum are:
Have a tight feedback loop from requirements to development and back to the stakeholder(s).
This allows the development team to continually verify that they are building something that's actually wanted and allows the development to be easily adjusted as requirements and expectations change. Stakeholders can add or remove features at any point and they can adjust the priority of the features as their needs change.
Keep the software in a state where it's releasable at the end of any given sprint.
That's not to say you have releases every sprint, but that you could if the customer decides they want to have the latest stuff. This also helps a development team avoid the integration hell that comes from people going off and working on a piece of the project for months at a time in isolation.
Be completely transparent with what's going on in development and everyone needs to be willing to make tradeoffs.
This is where most projects fail and where Scrum can really succeed if everyone buys into the process. So many development projects are set up so that a release has to have X features released on Y date with no flexibility in changing that. This results in half-done features and bug-ridden software as the developers cram to get in all the required features on their checklist.
The reality is, unexpected things happen in software development. With open communication and willing participants in the Scrum process, customers and developers can continually evaluate the current state of the project and make educated decisions on prioritizing the work remaining on the project.
Scrum does work. Not with all teams in all situations, but it has been shown to work.
I would suggest trying to embrace textbook Scrum as much as your business environment allows, see how that works out, and then tune it.
Why does your Scrum master not want to move tasks out of the sprint backlog? Does he not 100% embrace the principles of Scrum? (I would see that as worrying in a Scrum master)
Most problems implementing Scrum are actually just problems in the team or business being exposed by the Scrum process, e.g. if your sprints are thrown out by unforeseen support issues, this suggests you are not allocating enough resources to support.
Every company is different, every project is different and every client is different.
I think it's just as easy to fail by following scrum (or any other methodology) too closely in an environment that doesn't fit the methodology as it is to fail because you follow scrum too loosely in a project that does fit.
In the end, a generic answer on a Q&A site is no replacement for serious analysis of your own project, company, team and clients - there is no magic formula and you have to make your own decision.
Answer: You need to adopt both Scrum and XP together to get the full benefits of scrum.
Reasons:
The reasons are based on years of doing XP and scrum, and specifically on what I learned from Jeff Sutherland's talk (for the ACCU in London, May 2009)
Scrum is a management technique - not necessarily a software production method. Some people use scrum in other domains e.g. preparing museum exhibitions and running religious institutions... so it has the mechanisms you need to make a multidisciplinary team deliver work adaptably in small increments.
Scrum originally included all the extreme programming practices. Jeff Sutherland actually said that he's never seen a scrum project achieve the higher orders of productivity measured for scrum without using the extreme programming practices.
Scrum and XP both come from the same background - Object-oriented programming, specifically with Smalltalk. The programmers went off and developed XP whilst the management people created scrum. You need both aspects - development practices and management practices.
The XP practices were deliberately removed from Scrum to make it easier to adopt - implementing the XP practices is hard and it's difficult to get them adopted quickly. Jeff actually said that Ken Schwaber removed the XP practices to help people get started with scrum. The danger now is that this minimal scrum has become all that people see and expect.
Lots of non-technical project managers now teach scrum - but they don't have the skillset to teach XP
Not all developers find the XP practices easy to adopt - they can be a hard sell and it takes a few months, rather than the 2 days it takes to establish basic scrum.
Scrum doesn't attempt to address the technical issues in software development. It's just a small management process.
The strength of scrum is that it doesn't get in the way by prescribing lots of unnecessary or irrelevant technical work.
The weakness of scrum is that it doesn't tell you what good technical practices to do.
Extreme Programming does address the technical issues involved in software development and it fits very well within scrum. The reason the scrum people didn't force everyone to do the XP technical practices is that it takes about 6 months to implement those tech practices, rather than the 2 days it takes to implement the most basic scrum.
Whether or not scrum is "evil" - there are certainly drawbacks with it. We discussed the uneasy relationship between XP and Scrum at length at XP Days, London, 2009: http://xpday-london.editme.com/WhereHasXpGone
Scrum is not really the problem you are describing. Most development methodologies work; even waterfall, as much as we like to bash it, works. Scrum does make you concentrate a little more on the important things, but it won't stop people from making bad decisions like not really following the process.
The system is pretty simple at its core.
See the problem.
Define what done is.
Create a series of tasks that will get you to done.
Estimate those tasks.
Select enough of those so that you can get something done in a short period of time.
Complete the tasks.
Rinse and repeat.
OK admittedly these steps are simplified, and I haven't thrown in a scrum master and a customer. But the point is that the framework is just a basic time management strategy. If the people in your system are chaotic and not good at getting things done then scrum really won't help them.
It's better to start applying Scrum by the book, and to really understand the underlying principles and values from the Agile Manifesto, prior to customizing it, so that the process does not get diluted. Be sure to run retrospectives at the end of each and every iteration (Sprint) to "inspect and adapt" your process and eliminate waste.
As for your Scrum Master, he can track what is removed from the current Sprint. Also, Sprints are planned based on the previous Sprint's achievement, not on what was previously scheduled, so I do not see the point of it.

Resources