Agile and code release [closed]

Do you know of any agile process that is designed for code releases? One of the main themes of agile is frequent releases, yet each company/client has its own test/approval processes that control code releases. Most of the time these slow down the pace of "frequent releases".
Currently we have a proprietary tool-based workflow. A team that needs a code promotion creates a promotion request to one of the final UAT servers. Once this is complete, and once tests are done, certain customers and technical/non-technical managers need to approve; only then does it go into the production deploy stage. Meanwhile, there is no sprint planning meeting or anything of that sort.
What code release process (one that is agile) has worked for you?

Why is there no sprint planning meeting of any sort while the workflow proceeds? Tag your repository and get on with the next release straight away. If you need bug fixes on the candidate release, branch from the tag and fix them there. The approvals workflow and final UAT testing should neither involve nor delay the development team. (Excuse non-distributed SCM terminology if you are actually on something like Git or Mercurial.)
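To make that concrete, here is a minimal sketch of the tag-and-branch flow, driving Mercurial's CLI from Python; `hg` is assumed to be on PATH, and the repository path and naming scheme are hypothetical:

```python
import subprocess

REPO = "."  # hypothetical path to the working copy

def hg(*args):
    subprocess.run(["hg", *args], cwd=REPO, check=True)

def cut_release_candidate(version):
    # Tag the current revision as the candidate; development then
    # continues on the default branch straight away.
    hg("tag", f"release-{version}")

def open_hotfix_branch(version):
    # Branch from the tag so fixes to the candidate don't block new work.
    hg("update", "-r", f"release-{version}")
    hg("branch", f"hotfix-{version}")
    # Commit fixes on this branch; merge back to default when done.
```

The same flow works with `git tag` and `git checkout -b` if you are on Git.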
If you take an Agile process like Scrum, the release output is "releasable software" not "released software". If you have an overhead getting stuff released to production, then it can just happen in parallel. I should add that the majority of the testing should have been as part of the sprint - perhaps you need to revisit exactly what testing is done when in your cycle?

If you are having problems testing "big" releases, then your release cycle is too long. The underlying principle of releasing often is that often == smaller releases. If you are still having problems even though you are only releasing small sets of features that don't take long to test, then your release engineering team is the bottleneck, and their waterfall approval process needs to change.
Release into a common dev environment all during the sprint, release to a QA environment during the sprint.
Release into a reference environment at the end of the sprint for the demo of only the completed (and tested) features.
Release to production whenever the product owners want.
Risk of bugs should not be an issue, since bugs have no inherent correlation to the frequency of releases; in fact, more releases should mean less risk and fewer bugs. Testing should be done during the sprint, not after. If something isn't fully tested and might be buggy, then it isn't done and should not be demoed, much less released to production.
In the end, release to production should be the product owner's call. A politicized waterfall release engineering process almost never keeps bugs out of production; it just makes them show up later rather than sooner. Managers ticking a checkbox on a form with their "ok" aren't keeping buggy code out of customers' eyes. Frequent releases to QA during development will. Testing should not be part of the release engineering cycle; it should be part of the development cycle.
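As an illustration of that flow, here is a minimal sketch (hypothetical environment names and a placeholder test hook) of a promotion pipeline where every stage is gated on green tests and only production additionally waits for the product owner:

```python
STAGES = ["dev", "qa", "reference", "production"]

def tests_pass(stage):
    """Placeholder for the real test suite run against that environment."""
    return True

def promote(build, target, product_owner_approved=False):
    if target not in STAGES:
        raise ValueError(f"unknown environment: {target}")
    if target == "production" and not product_owner_approved:
        raise PermissionError("production releases are the product owner's call")
    if not tests_pass(target):
        raise RuntimeError(f"tests failed; {build} stays out of {target}")
    print(f"{build} promoted to {target}")

# During the sprint: continuous promotions to dev and qa.
promote("build-142", "dev")
promote("build-142", "qa")
# Whenever the product owner wants:
promote("build-142", "production", product_owner_approved=True)
```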

This depends on how mission-critical your product is. By "release", do you mean launching your life-critical software to a hospital? Or is it a casual gaming website?
If your work is mission-critical or life-critical, agile may not be able to work for you. In that case, you may need to do more formal testing before deploying.
If you work on a website that isn't mission-critical, you have the freedom to be a little buggy (and that trade-off is often worth making!). This lets you iterate faster and release again and again.
For that kind of product, which agile is perfect for, let developers test it themselves, let clients see the results, then launch to a small group of users (ideally randomly selected active users -- hallway testing) as soon as possible; if it's a small change, even to your whole user base. On a web service, you can do this quickly and fix problems without going through much pain.


Are monthly releases, waterfall in disguise? [closed]

I am starting my deep dive into agile and have questions on how certain companies promote their releases. I need input on whether the community agrees that monthly release cycles for services are, in theory, the same as waterfall. My reasoning is that if a team bundles up several service changes/features and makes one mass monthly release, then it's the same as waterfall. Wouldn't the "agile way" be to release each change/fix/feature as it is merged?
One of the agile values is responding to change over following a plan.
Note that it doesn't specify that you need to release according to a particular frequency or method. This is because Agile is an approach; it is neither a framework nor a methodology.
One organisation might be able to release monthly and still respond well to change; a lot will depend on the nature of the product and the environment. Another organisation might need to release as soon as a change/fix/feature is ready. Both organisations can still be following the Agile approach.
As an extreme example, imagine a product that is only ever used by its customers at Christmas. There is still value in releasing frequently, as this helps to reduce technical risk, but it might be considered overkill to release every time a new feature is completed.
The original book on Scrum, "Agile Software Development with Scrum," specified monthly sprints. However, it and other methods disconnect sprints from releases--that is, development from delivery--by specifying that each sprint creates a "potentially shippable product." The product is supposed to be in a state that could be delivered to customers, but for business reasons the company may choose not to do so. (One reason I have witnessed, by the way, is that the customer only wanted quarterly releases for anything except security patches.)
On the flip side, although this is debated in the Agile community, Continuous Delivery need not be blocked by sprint dates: You could deliver as often as desired, getting acceptance on the fly, and use end-of-sprint ceremonies to show stakeholders everything that was approved and delivered over the sprint.
Speaking as an Agile coach who maintains his waterfall certification (PMP) because waterfall is appropriate for some types of projects, I believe saying Agile is a subset of waterfall is a misperception based on tying deliveries to cycles, which isn't necessary.

How to manage multiple products that share code [closed]

The company I just joined has a system of products that share a large percentage of their code base (via shared links in Visual SourceSafe). There are about 25 product types in this system as well as a PC interface.
The products network together using proprietary protocols that are largely undocumented. Historically, the method for maintaining this mess has been to require that all firmware and software be released as a package. This, of course, causes significant delays in release schedules due to the required regression testing.
Has anyone else had a successful method of dealing with this type of issue? We're really getting beat up over it by management (I honestly can't fault them for feeling this way).
My first thought is to try to separate the device releases from each other somehow: pull shared functionality into libraries which are versioned, then only update devices that use the libraries that have changed. I see issues with version mismatches from this, however.
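For illustration, a minimal sketch of that idea (all names hypothetical), assuming semantic versioning where only a major-version bump breaks compatibility:

```python
DEVICE_DEPS = {
    "thermostat_fw": {"netproto": "2.1.0", "crypto": "1.4.2"},
    "pc_interface":  {"netproto": "2.1.0", "ui_widgets": "3.0.1"},
}

def compatible(required, released):
    # Same major version: assume the API is still compatible.
    return required.split(".")[0] == released.split(".")[0]

def devices_needing_regression(lib, new_version):
    """Only devices whose pinned dependency breaks need full regression."""
    return [
        device
        for device, deps in DEVICE_DEPS.items()
        if lib in deps and not compatible(deps[lib], new_version)
    ]

print(devices_needing_regression("netproto", "3.0.0"))  # both devices
print(devices_needing_regression("crypto", "1.5.0"))    # none: minor bump
```

The version-mismatch worry then reduces to how strictly each library honours that compatibility rule.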
This is an organizational question. I understand how to keep the house of cards going via testing and processes, but I believe that better organization of the code base could have many good results.
I appreciate the advice.
significant delays in release schedules due to the required regression testing.
That's why folks do a "daily build".
Daily builds typically include a set of tests, sometimes called a smoke test (as in "where there is smoke there is fire"). These tests are included to assist in determining what may have been broken by the changes included in the latest build. The critical piece of this process is to include new and revised tests as the project progresses.
When the organization -- as a whole -- has to keep the daily build working, then people change their responsibilities, points of view, biases, complaints and actions to keep the daily build running.
Daily stand-up meetings become focused on things that might break the build.
Individual developers have to refactor their code more carefully to avoid breaking the build.
Breaking the build becomes an immediate, instantaneous indicator of something being out of sync. Immediate. No delay. If I break the build today, everyone will know it tomorrow morning. No days were wasted assuming (or hoping) that things still worked. We can immediately roll changes back, or apply changes to keep going forward.
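A smoke test in this sense can be tiny. Here is a minimal sketch using Python's built-in unittest; `start_service` is a hypothetical stand-in for the real system's entry point:

```python
import unittest

def start_service():
    # Hypothetical stand-in; in practice you would import and start
    # the real product here.
    return {"status": "ok"}

class SmokeTest(unittest.TestCase):
    """Run by the daily build: checks the critical path, not every feature."""

    def test_service_starts(self):
        self.assertEqual(start_service()["status"], "ok")

if __name__ == "__main__":
    unittest.main()  # a non-zero exit code marks the daily build as broken
```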

Test Case Design and responsibility of Testers, Developers, Customers [closed]

So it seems like a lot of people are playing the blame game around where I work, and it brings up an interesting question.
Knowns:
Requirements team writes requirements for product.
Developers create their own unit tests according to requirements.
Testing team creates Test Conditions, Test Design and Test Cases according to requirements.
Product released if and only if X% of test cases from the Testing team pass.
After delivery, the customer does acceptance tests --> the customer response team gets bugs from the field and lets the testing team know about these issues.
Question:
If the customer ends up filing a lot of defects, who is to blame? Is it the Testing team for not covering those? Or is it the requirements team for not writing better requirements? And how does one improve upon the system?
The statement "Product released if and only if X% of testcases from Testing team passes" really bothers me. The team may want to consider having better release criteria which is gated on more than just test pass rates. For example, are the scenarios known, understood, accounted for (and tested)? Certainly not all bugs will be fixed, but are the ones that have been postponed or not fixed been triaged correctly? Have you reached your stress testing and performance goals? Have you threat modelled and accounted for mitigations to potential threats? Have x amount of customers (internal/external) deployed builds and provided feedback prior to release (i.e. "dogfood")? Do developers understand the bugs coming from the field and the testers to create regression unit tests? Does the requirements team understand these bugs coming in to see why the scenarios weren't accounted for? Are there key integration points between features which weren't accounted for in specs, development, or testing?
A few suggestions to the team would be to first do a postmortem on the issues found and understand where it broke, and strive to push quality upstream as much as possible. Make sure the requirements team, devs, and testers are communicating frequently and well throughout the planning, dev, and testing cycle to make sure everyone is on the same page and knows who is doing what. You would be amazed at how much product quality can be gained when people actually talk to each other during development!
Bugs can enter the system at both the requirements and development steps. The requirements team could make some mistakes or over-simplifying assumptions when creating the requirements, and the developers could misinterpret the requirements or make their own assumptions.
To improve things, the customer should sign off on the requirements before development proceeds, and should be involved, at least to some extent, in monitoring development to ensure things are on the right track.
The first question in my mind would be, "how do the defects stack up against the requirements?"
If the requirement reads, "OK button should be blue" and the defect is "OK button is green", I would blame development and test -- clearly, neither read the requirements. On the other hand, if the complaint is, "OK button is not yellow", clearly, there was an issue with requirements gathering or your change-control process.
There's no easy answer to this question. A system can have a large number of defects with responsibility spread between everyone involved in the process -- after all, a "defect" is just another way of saying "unmet customer expectation". Expectations, in themselves, are not always correct.
"Product released if and only if X% of test cases from Testing team passes" - is one the criteria for release. In this case "Coverage of Tests" in written TCs is very important. It needs good review of TCs whether any functionality or scenario is missed or not. If anything is missed in TCs there might possibility to find bugs as some of requirement is not covered in test cases.
It also takes some ad-hoc testing, as well as exploratory testing, to find bugs not covered by the TCs. And the team needs to define exit criteria for testing.
If the customer/client finds a bug/defect, it is necessary to investigate:
i) What type of bug was found?
ii) Is there any test case written covering it?
iii) If there is such a test case, was it executed properly?
iv) If it is absent from the TCs, why was it missed?
and so on.
After that investigation, a decision can be taken on who should be blamed. If it is a very simple and obvious bug/defect, then definitely the testers should be blamed.

How to use agile tools/methods within a geographically distributed team [closed]

I'm working on a software project on which several members work from home and some others are part-timers. We meet physically at least once a month and communicate mostly by email. Our source code repository (Mercurial) is on a Jungle Disk (workgroup) that we share.
We have a working product and one customer, but we are not agile enough (i.e. one change in the code sometimes breaks something else, we don't have unit tests, the code is not documented, etc.). I want to use an agile methodology to coordinate our work and track our progress. I also want to use TDD.
The team has no experience with agile methodologies (or other methodologies).
What is the best approach to using an agile methodology with a geographically distributed team? Which methodology works best with that kind of team? How can we implement it efficiently with the least resistance possible?
Thanks!
I have done this as part of a distributed XP team sharing source code and stories across 3 sites, each site being 12 hours apart (Seattle, Bournemouth UK, and Singapore).
Here are some write-ups of what we did:
Distributed Agile Patterns: http://www.keithbraithwaite.demon.co.uk/professional/papers/index.html#europlop2005
http://www.keithbraithwaite.demon.co.uk/professional/papers/index.html#xp2005
We found that it helps to get everybody physically together at the start of the project to establish standards and to build relationships.
We also found that it helps to have "ambassadors" - shipping different people around between teams to spread knowledge and build trust.
We were lucky to have three sites that were each 12 hours apart - so we could have a stand-up meeting first thing in the morning and last thing in the evening. We called them "hand-over meetings" and did them over video-conference between the incoming team and the outgoing team.
We also found remote pair-programming worked - between a local pair and a remote pair (i.e. four people) but that it's very intense and draining and best done only for short periods of time when it's really critical to see what other people are doing remotely.
Aside: Kent Beck's Advice for people using Eclipse to remote pair: http://www.threeriversinstitute.org/blog/?p=584
Well, my first thought, given what you specified:
Add unit tests to your source code!
Without unit testing, most Agile methodology isn't all that useful. Being Agile is about being light and being able to respond to change quickly - unit testing is one of the main things that makes that work. Without unit testing, you'll never have the freedom to make changes without risking major breakage.
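Even a tiny first test starts paying off, because it pins down current behaviour before you change anything. A minimal sketch with Python's built-in unittest (`calculate_total` is a hypothetical function standing in for your existing code):

```python
import unittest

def calculate_total(prices, tax_rate):
    # Hypothetical stand-in for an existing, previously untested function.
    return round(sum(prices) * (1 + tax_rate), 2)

class CalculateTotalTest(unittest.TestCase):
    def test_known_good_case(self):
        # Pin down today's behaviour so refactoring can't silently break it.
        self.assertEqual(calculate_total([10.0, 5.0], 0.2), 18.0)

    def test_empty_order(self):
        self.assertEqual(calculate_total([], 0.2), 0.0)

if __name__ == "__main__":
    unittest.main()
```

A handful of these around the code that breaks most often is a realistic first milestone for a small team.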
As you add tests, I would document your code. This, again, is critical for being able to change things, even more so when the team is distributed.
Once that's done, you can start implementing other methodology over time. Personally, I would have the entire team do this, and get started on having daily/weekly stand-ups (which work fine with a distributed team via conference calls, etc), where everyone describes what they've tested, how they're progressing, etc.
That will at least get you on the proper track...
Have a quick browse through this blog:
You're not agile if your team is dispersed. Yeah right!
Start with Continuous Integration (an automated build). I used CruiseControl.Net. I had two builds set up: 1) an automated build after every check-in, and 2) a test build that could be run on demand.
You have to improve your communication for a start. Yes, engineering practices are important, but the key to agile is communication. Email is not the most effective tool to coordinate an agile project, and there is no shortage of tools out there that can help.
We have had great success with Skype (mostly instant messaging, but also normal calls), and with tools like MS SharedView it is possible to demo and even pair program across sites.
Once you start to communicate effectively and feel like a team, the rest will follow. Agile is all about inspecting and adapting, so try things out and have fun with it. Start with the daily stand-up and move on from there. Regular retrospectives will help you identify your problems and improve.
If you are into tools: to be able to do pair programming or synchronous code reviews remotely, you could try the Eclipse plugin Saros, which enables collaborative editing (including support for driver/observer roles and following users through the code).
(Disclaimer: Saros is a project of my working group at Freie Universität Berlin)

Scrum: too much or not enough? [closed]

My company has recently started using Scrum; we've done 2 sprints. We're still learning, but we've definitely exposed and fixed some problems in our development process already. So in general I think it has been good for us.
In reading many of the internet musings about Scrum from evangelists, cynics and everyone in between, three common and somewhat contradictory themes have stood out to me:
Scrum implementation fails because the processes of Scrum are not followed closely enough.
Scrum implementation fails because the organization does not adapt Scrum to its own environment/culture/practices.
The processes of Scrum are not important; only the values in the Agile Manifesto matter.
Examples of these can be seen in the responses to these SO questions:
Have you had a bad experience with Scrum or Sprinting?
Is Scrum evil?
Is Agile Development Dead?
I have to admit that we're not yet following all the guidelines of Scrum. To give a few examples: we haven't done a release at the end of our sprints; our Scrum Master doesn't want us to move tasks out of the sprint backlog near the end of the sprint, so that he can see how much our planning was off (which means the burndown chart never reaches 0); and urgent customer support issues still have incredible power to disrupt everyone's planning.
My question is: in trying to solve these and other issues, is it better to try and be closer to the official Scrum processes, better to be closer to some of our pre-Scrum processes, or better to meditate on the principles of Scrum to try and come up with a different process altogether?
I would say that you are really missing one of the key components of agility if you don't release early and often. To the degree that you don't do this, your process is not agile and bound to suffer the same sorts of problems that traditional, plan-driven processes have. It may be that this is a temporary condition as you are just getting used to things, but you need to start releasing soon (and regularly).
You'll always have the problem with show-stoppers, but you may be able to help this by shortening your sprint length. The customer may not be able to wait a month, but they may be able to wait 2 weeks for some things. A shorter sprint length, then, may help you to defer some requests to the next sprint making them less disruptive. You also need to be upfront with the customer that the disruptions are actually causing your pace to suffer. They may voluntarily choose to wait if they know that their chosen features are being delayed by some requests.
Another observation that I would make is that, as with almost anything, it's better to start out by following the pattern as closely as you can while you are learning. Once you have a good grasp of the fundamental principles, you can then see where some principles can be bent, broken, or replaced much more clearly to improve the process. Until you really get it, the things you change may hurt or help -- you really have no idea since you don't have the experience that tells you how things ought to be working. Unless your Scrum master is really experienced, you may want to hew closer to the defined practices until you've got a few more sprints under your belt.
Almost everything I've read on Scrum says that one of the keys is to adapt the process to fit your own situation. No two development teams are the same, and different things work for different people.
The main ideas behind Scrum are:
Have a tight feedback loop from requirements to development and back to the stakeholder(s).
This allows the development team to continually verify that they are building something that's actually wanted and allows the development to be easily adjusted as requirements and expectations change. Stakeholders can add or remove features at any point and they can adjust the priority of the features as their needs change.
Keep the software in a state where it's releasable at the end of any given sprint.
That's not to say you have releases every sprint, but that you could if the customer decides they want the latest stuff. This also helps a development team avoid the integration hell that comes from people going off and working on a piece of the project for months at a time in isolation.
Be completely transparent about what's going on in development, and make sure everyone is willing to make tradeoffs.
This is where most projects fail and where Scrum can really succeed if everyone buys into the process. So many development projects are set up so that a release has to have X features shipped on Y date, with no flexibility to change that. This results in half-done features and bug-ridden software as the developers cram to get in all the required features on their checklist.
The reality is, unexpected things happen in software development. With open communication and willing participants in the Scrum process, customers and developers can continually evaluate the current state of the project and make educated decisions on prioritizing the work remaining on the project.
Scrum does work. Not with all teams in all situations, but it has been shown to work.
I would suggest trying to embrace textbook Scrum as much as your business environment allows, see how that works out, and then tune it.
Why does your Scrum master not want to move tasks out of the sprint backlog? Does he not 100% embrace the principles of Scrum? (I would see that as worrying in a Scrum master)
Most problems implementing Scrum are actually just problems in the team or business being exposed by the Scrum process, e.g. if your sprints are thrown out by unforeseen support issues, this suggests you are not allocating enough resources to support.
Every company is different, every project is different and every client is different.
I think it's just as easy to fail by following scrum (or any other methodology) too closely in an environment that doesn't fit the methodology as it is to fail because you follow scrum too loosely in a project that does fit.
In the end, a generic answer on a Q&A site is no replacement for serious analysis of your own project, company, team and clients. There is no magic formula; you have to make your own decision.
Answer: You need to adopt both Scrum and XP together to get the full benefits of scrum.
Reasons:
The reasons are based on years of doing XP and scrum, and specifically on what I learned from Jeff Sutherland's talk (for the ACCU in London, May 2009)
Scrum is a management technique - not necessarily a software production method. Some people use scrum in other domains e.g. preparing museum exhibitions and running religious institutions... so it has the mechanisms you need to make a multidisciplinary team deliver work adaptably in small increments.
Scrum originally included all the Extreme Programming practices. Jeff Sutherland actually said that he's never seen a scrum project achieve the higher orders of productivity measured for scrum without using the extreme programming practices.
Scrum and XP both come from the same background - Object-oriented programming, specifically with Smalltalk. The programmers went off and developed XP whilst the management people created scrum. You need both aspects - development practices and management practices.
The XP practices were deliberately removed from Scrum to make it easier to adopt. Implementing the XP practices is hard, and it's difficult to get them adopted quickly. Jeff actually said that Ken Schwaber removed the XP practices to help people get started with scrum. The danger now is that this minimal scrum has become all that people see and expect.
Lots of non-technical project managers now teach scrum - but they don't have the skillset to teach XP
Not all developers find the XP practices easy to adopt - they can be a hard sell, and it takes a few months rather than the 2 days it takes to establish basic scrum.
Scrum doesn't attempt to address the technical issues in software development. It's just a small management process.
The strength of scrum is that it doesn't get in the way by prescribing lots of unnecessary or irrelevant technical work.
The weakness of scrum is that it doesn't tell you what good technical practices to do.
Extreme Programming does address the technical issues involved in software development and it fits very well within scrum. The reason the scrum people didn't force everyone to do the XP technical practices is that it takes about 6 months to implement those tech practices, rather than the 2 days it takes to implement the most basic scrum.
Whether or not scrum is "evil" - there are certainly drawbacks with it. We discussed the uneasy relationship between XP and Scrum at length at XP Days, London, 2009: http://xpday-london.editme.com/WhereHasXpGone
Scrum is not really the problem you are describing. Most development methodologies work; even waterfall, as much as we like to bash it, works. Scrum does make you concentrate a little more on the important things, but it won't stop people from making bad decisions, like not really following the process.
The system is pretty simple at its core.
See the problem.
Define what done is.
Create a series of tasks that will get you to done.
Estimate those tasks.
Select enough of those so that you can get something done in a short period of time.
Complete the tasks.
Rinse and repeat.
OK admittedly these steps are simplified, and I haven't thrown in a scrum master and a customer. But the point is that the framework is just a basic time management strategy. If the people in your system are chaotic and not good at getting things done then scrum really won't help them.
It's better to start applying Scrum by the book, and to really understand the underlying principles and values from the Agile Manifesto, before customizing it, so that the process does not get watered down. Be sure to run retrospectives at the end of each and every iteration (Sprint) to "inspect and adapt" your process and eliminate waste.
As for your Scrum Master: he can track what is removed from the current Sprint anyway, and Sprints are planned based on the previous Sprint's achievement, not on what was previously scheduled, so I don't see the point of his approach.
