Test Case Design and responsibility of Testers, Developers, Customers [closed]

So it seems like a lot of people are playing the blame game around where I work, and it brings up an interesting question.
Knowns:
Requirements team writes requirements for product.
Developers create their own unit tests according to requirements.
Testing team creates Test Conditions, Test Design and Test Cases according to requirements.
Product is released if and only if X% of test cases from the Testing team pass.
After delivery the customer runs acceptance tests --> the customer response team gets bugs from the field and lets the testing team know about these issues.
Question:
If the customer ends up filing a lot of defects, who is to blame? Is it the Testing team for not covering those? Or is it the requirements team for not writing better requirements? And how does one improve upon the system?

The statement "Product released if and only if X% of testcases from Testing team passes" really bothers me. The team may want to consider having better release criteria which is gated on more than just test pass rates. For example, are the scenarios known, understood, accounted for (and tested)? Certainly not all bugs will be fixed, but are the ones that have been postponed or not fixed been triaged correctly? Have you reached your stress testing and performance goals? Have you threat modelled and accounted for mitigations to potential threats? Have x amount of customers (internal/external) deployed builds and provided feedback prior to release (i.e. "dogfood")? Do developers understand the bugs coming from the field and the testers to create regression unit tests? Does the requirements team understand these bugs coming in to see why the scenarios weren't accounted for? Are there key integration points between features which weren't accounted for in specs, development, or testing?
A few suggestions for the team: first, do a postmortem on the issues found, understand where the process broke down, and strive to push quality upstream as much as possible. Make sure the requirements team, devs, and testers communicate frequently and well throughout the planning, dev, and testing cycle, so everyone is on the same page and knows who is doing what. You would be amazed at how much product quality can be gained when people actually talk to each other during development!

Bugs can enter the system at both the requirements and development steps. The requirements team could make some mistakes or over-simplifying assumptions when creating the requirements, and the developers could misinterpret the requirements or make their own assumptions.
To improve things, the customer should sign off on the requirements before development proceeds, and should be involved, at least to some extent, in monitoring development to ensure things are on the right track.

The first question in my mind would be, "how do the defects stack up against the requirements?"
If the requirement reads, "OK button should be blue" and the defect is "OK button is green", I would blame development and test -- clearly, neither read the requirements. On the other hand, if the complaint is, "OK button is not yellow", clearly, there was an issue with requirements gathering or your change-control process.
There's no easy answer to this question. A system can have a large number of defects with responsibility spread across everyone involved in the process -- after all, a "defect" is just another way of saying "unmet customer expectation". Expectations, in themselves, are not always correct.

"Product released if and only if X% of test cases from Testing team passes" - is one the criteria for release. In this case "Coverage of Tests" in written TCs is very important. It needs good review of TCs whether any functionality or scenario is missed or not. If anything is missed in TCs there might possibility to find bugs as some of requirement is not covered in test cases.
It also needs some ad-hoc testing as well as Exploratory testing to find uncover bugs in TCs. And it also needs to define "Exit criteria" for testing.
If customer/client finds any bug/defect it is necessary to investigate as: i) What type of bug is found? ii) Is there any Test case written regarding that? iii) If there is test case(s) regarding that executed properly? iv) If it is absent in TCs why it was missed? and so on
After investigation decision can be taken who should be blamed. If it is very simple and open-eyed bug/defect, definitely testers should be blamed.
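A simple way to support question ii) above is a traceability check that flags requirements no test case covers. This is a minimal sketch in Python; the requirement and test-case IDs are hypothetical, and in practice they would come from your requirements and test management tools:

```python
# Hypothetical requirement IDs and test-case-to-requirement mapping.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

test_cases = {
    "TC-101": {"REQ-1"},
    "TC-102": {"REQ-1", "REQ-2"},
    "TC-103": {"REQ-3"},
}

# Any requirement absent from every test case is a coverage gap.
covered = set().union(*test_cases.values())
uncovered = requirements - covered
print("Requirements with no test case:", sorted(uncovered))  # ['REQ-4']
```

A gap found this way points at the test design; a customer defect against a requirement nobody ever wrote down points back at requirements gathering.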

Is Test Driven Development Agile? [closed]

From the agile manifesto, agile values:
Individuals and interactions over processes and tools,
Working software over comprehensive documentation,
Customer collaboration over contract negotiation,
Responding to change over following a plan
Yet doesn't TDD create a plan and almost structure out a contract negotiation?
"What are the features you want?"
"1,2,3"
Developer writes tests for 1,2,3 -> Team delivers code
"Here's 1,2,3 give us our money"
It's also a form of comprehensive documentation, and also a process. Once the tests are written, individuals and interactions no longer matter as much because the "source of truth" is no longer with people but ironed out in the code.
Just wondering how they fit together: are they opposed, or do they work together?
TDD is more a practice for individual contributors than a process. "Test" here usually refers to unit tests, which are part of the development work, as opposed to comprehensive test suites such as performance, functional, and integration tests.
In many cases TDD helps an individual contributor really think about the requirement and the implementation (respond to change and come up with working software). I personally do not adopt this practice, but it is an agile practice that can be adopted by a single contributor. Do not confuse it with higher-level tests and their related documents.
Yet doesn't TDD create a plan
Nope. TDD does not mean "write tests up front"; it means "write tests before writing code". The whole "do as much as you need and no more" principle comes into play. You are not expected to write the tests for all your features before writing any code, just for the feature you are currently on. And then (depending on the level of testing) only a small subset of the feature will need tests right now.
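As a minimal illustration of "write tests before writing code" for just the feature at hand, here is a sketch using Python and pytest; the price-rounding function is a hypothetical example, not anything from the question:

```python
from decimal import Decimal, ROUND_HALF_UP

# Step 1 (red): write a failing test for only the feature you are on.
def test_rounds_price_to_nearest_cent():
    assert round_price(10.005) == 10.01  # fails until round_price exists

# Step 2 (green): write just enough code to make the test pass.
def round_price(amount: float) -> float:
    quantized = Decimal(str(amount)).quantize(Decimal("0.01"),
                                              rounding=ROUND_HALF_UP)
    return float(quantized)

# Step 3 (refactor): clean up with the test as a safety net, then move
# on to the next small slice of the feature.
```

Nothing here was planned beyond the current feature, which is exactly the point being made above.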
It's also a form of comprehensive documentation and also a process
It also aids with working software.
Working software over comprehensive documentation,
Over, not instead of. If you can get both, great.
Once the tests are written, individuals and interactions no longer matter as much because the "source of truth" is no longer with people but ironed out in the code.
The oracle for what it does is always the code. The oracle for what it should do is always people.
TDD done well also aids with the communication.
Any insight as to why some people seem to be getting mad at the question?
The question comes off as very troll-y. You are twisting the manifesto to make it sound like anything that aids the latter is "bad" and you are twisting the definition of TDD to be an all-encompassing, completely up-front process. Neither of which are true.
Individuals and interactions over processes and tools,
BDD is a great tool for aiding interactions at a dev/BA/stakeholder level. TDD (xUnit and the like) is a great tool for aiding interactions at a dev level.
Working software over comprehensive documentation
TDD helps create working software.
Customer collaboration over contract negotiation
(BDD) Being able to describe in a common language the specification and have that execute is awesome.
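For illustration, here is a rough sketch of that given/when/then shape in plain Python with pytest; BDD tools such as Cucumber or pytest-bdd drive the same structure from plain-language feature files, and the cart API below is entirely hypothetical:

```python
class Cart:
    # Hypothetical domain object standing in for the real system.
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

def test_adding_an_item_updates_the_total():
    # Given an empty cart
    cart = Cart()
    # When the customer adds a book costing 12.50
    cart.add("book", 12.50)
    # Then the cart total is 12.50
    assert cart.total() == 12.50
```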
Responding to change over following a plan
A well-tested code base can change with ease. An untested or badly tested code base is frozen in place.
That is, while there is value in the items on the right, we value the items on the left more.
I also agree with Tom’s answer: ‘if it is possible to do agile well, then I believe that is always a good thing. If it is not possible to do it well, then I believe that it can be harmful.’ Agile simply isn’t the right answer for every company. It is difficult to do well in a large company, and its lack of focus on software architecture can really affect the usefulness of the resulting technology. Digital Animal have written an interesting article on Agile and why it doesn’t work for them. http://digitalanimal.com/blog/slaying-the-agile-dragon-the-game-of-thrones-methodology/?AT=D8c953

What are the Documents involved in SDLC? [closed]

From estimation to delivery - throughout the software development life cycle:
Which documents are involved, and
What is the order?
I am not sure whether the methodology has much impact on the documents; anyway, let us consider Waterfall.
The answer is - as has been stated - it depends. I'm sure lots of people will answer for Agile methodologies (which are a far more movable feast) so for completeness I'll go with what you'd have for a fairly standard waterfall methodology:
A scope document - very high level outlining what is and more importantly what is not in scope of the project and what assumptions are being made. The main purpose of this document is to set expectations of what will ultimately be delivered - you're not saying how things will work but you're trying to answer questions such as will there be reporting? Will it pass data to other systems? Will you have to write your own user management functionality or pull from AD? Where you can't get definite answers to these things then include an assumptions section and list what you're assuming will be the case so people can correct you if you're wrong. It should also include things like target implementation dates (not as a commitment but so people know what is anticipated and manage expectations accordingly).
A functional specification - What the application should do on a business level. This can be divided out into business requirements (the business processes it's automating and how they work) and functional requirements (what the system does and how it does it - screen navigation, how calculations are made and so on) but more commonly they're combined except for the largest systems. It should also include "non-functional" requirements such as performance, load, security and so on.
A technical specification - The most likely to be missed out. A detailed technical design including things such as object models, schema diagrams and information on how detailed technical problems are being addressed.
Test plans and test scripts - How the application is being tested with detailed test cases, data and expected results, covering all elements of the system.
User guide and Release Notes - How to install, configure and use the application.
The one I'd add to this is a support document - a short (less than 10 pages) crash course in what the app does and how it does it. Developers will often not read the full specifications (either because they don't have time or don't want to) so this document should be enough to allow them to understand what it does, how it works, the areas of the application which are most likely to be problematic and so on. It would be written a few weeks after go live by the team who built and implemented the system.
Of course depending on your methodology you may have none of these documents but if you're running a standard project in an old school structured, waterfall way, this would be pretty normal.
I'll use the typical consulting answer... 'It Depends'.
To start, methodology has an enormous impact on the documentation artifacts (not to mention project success), and I would place waterfall-style project management on the same level as allowing my doctor to cover me with leeches to cure a broken leg.
That being said - I have seen folks use the Microsoft Solutions Framework, and here's a link where you can grab their templates:
http://www.microsoft.com/downloads/details.aspx?FamilyID=9D2016AD-6F8A-47F5-84FA-BEC389DB18C1&displaylang=en&displaylang=en
In reality, I would strongly recommend any project to use Agile methodologies and engineering practices (at least, if you want it to have a much higher chance of success than a waterfall project).
http://www.agilealliance.com/ has some good reading, as does wikipedia at http://en.wikipedia.org/wiki/Agile_software_development
Good luck!
In a typical production scenario, where development is not carried out at the client location, the waterfall model of the SDLC is generally followed, and documents pertaining to its various stages are prepared:
Requirement gathering - a Business Requirement Specification that details the complete requirement. This is functional in nature. It is accompanied by test case scenarios provided by the users, in which they describe the testing and test cases they would carry out on the desired functionality. This also serves as a guideline for the development team when scoping the functionality and validations.
Requirement Analysis - During this phase, the BA associated with the project carries out the impact analysis and feasibility analysis. Any limitations in the requirements, constraints, and assumptions are documented, shared with the business users, and signed off to avoid any later surprises.
Development Approach - During this phase, the development team lead or the system analyst prepares an approach document that defines the process flow, screen design, controls that will be placed on the screen, validations, attributes, database diagram, etc. This is then signed off with the BA. If the development team foresees any technical constraint that will impact the desired functionality, it is shared with the business team again and signed off.
Testing - When the users carry out testing on the release, they validate it against the test cases and test scenarios provided earlier. The defects found are documented and sent back to the development team. The defects are first validated by the BA to determine whether each reported defect is a misunderstanding, a lapse in the functional requirements, or a technical bug, and resolution is provided accordingly. During this phase, care is taken that all the test cases complete successfully and all the bugs are resolved. If any test case or bug is to be parked for the next run, then based on its impact on the functionality, a joint call on the risk involved is taken by the development team and the business users. At the end, the business users prepare a testing sign-off document in which they note the time taken by each resource for testing, observations, and process improvement suggestions.
Production Deployment - This includes the deployment instructions for the deployment team and the server and database administrators to carry out the deployment.
Feel free to provide your suggestions.

How to avoid conflict with the Team Lead? [closed]

Currently we are facing some problems with our Team Lead regarding the work assignment hierarchy and responsibility for the work done. It is generally seen that if some targets are not met by the team, the Team Lead openly starts blaming the team and sometimes pinpoints individual developers. Further, when allocating work to the developers, the Team Lead does not properly explain the work to be done, yet expects us to complete it fully.
The worst part is that the Project Manager and Team Lead are cousins, and the Project Manager always takes the Team Lead's side when such issues are raised with him by the developers.
Please advise what the developers can best do to create a healthy work environment.
Thanks in advance.
This is double-sided, and very subjective. It might depend solely on what kind of person the Team Lead is, and whether they are open to discussion and questions.
The team lead should be openly addressed about this, BUT also, if a developer is unsure about what to do they should ask.
It never hurts to ask questions, you will be amazed at what you can learn.
Well, personal relationships should not be mixed with professional life. The developers should first of all organize a meeting with the team lead and put forward their issues in a healthy and constructive way. Also keep the Project Manager in the loop with your views. Do not wait for anybody else to make a healthy environment for you... start moving in that direction yourself.
One should be able to adapt to the various environments and cultures that differ from one organization to another. Always go with the flow.
I'm not sure that you can avoid conflict! The challenge is deciding what to do so everyone can learn and not too many people get hurt.
A well-run team should run itself. That is to say, the team lead's role should be to get a good framework in place so the team can decide on priorities, techniques, methodologies and even process by talking together.
So good managers will ask team members "OK, so what would you do?" They'll then get the appropriate support put in place so that can happen.
I'd suggest that as a group you
Regularly get together (perhaps weekly) to review progress and learn from mistakes made in implementation.
Make sure that all tasks are given to the team as a whole, not to individual developers. Everyone should know the high-level summary of a job.
Get together daily to very quickly summarise progress. Keep this meeting limited to 10 minutes.
In these meetings it's best to avoid blaming people. Blame the code instead, or the process, but don't get personal.
And if your company culture allows it, try reading up on some of the literature around agile project management: there are many parts of that process that are designed to avoid conflict of this nature. However, it can be quite a hard shift for some organisations to devolve quite so much power to developers...
If possible, schedule a meeting with the Project Manager and Team Lead. Openly discuss the issues in a mature and positive light. Tell the Team Lead what you do like (as a group), and tell him what you think can be done to improve quality, expectations, deadlines, etc. If critical requirements are habitually missing, let him know that. Although his cousin is the Project Manager, his answers may be guarded and he could get defensive no matter what the real circumstances are.
Ultimately, in my opinion, the PM/TL relation is a formula for disaster. If the problem is the Team Lead, and the Project Manager is part of that problem, then the next logical step is to go to the PM's boss.

Where does "Change Management" end and "Project Failure" begin? [closed]

I got into a mini-argument with my boss recently regarding "project failure." After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data.
Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot.
So I ask you this:
What is change management, and how does it apply to a project?
Where does "change management" end, and "project failure" begin?
#shog9:
I wasn't asking about a blame game with the consultants, especially since in this case I represent the consultants. I was looking for views on when a project should be considered "failed" regardless of if the needed functionality was finally implemented.
I'm looking for the difference between "this is actually a little more complex than we thought, and it's going to be another week" which I'd expect is somewhat typical, and "project failure" - however you want to define failure. Is there even a difference? Does this minor level of schedule slippage constitute statistical "project failure?"
I think that, most of the time, we developers forget that all of this is, after all, about business.
From that point of view, a project is not a failure while the client is willing to pay for it. It all depends on the client: some clients have more patience and understand the risks of software development better; others just won't pay if there's a substantial delay.
Anyway, about your question. Whenever you run a project there are risks involved; maybe you schedule the end of the project for a certain date but it ends up taking six months longer than you expected. In that case you have to balance what you have already spent, and what you stand to gain, against the risks you're taking. There's actually an entire discipline called "decision making" that studies this at the software level, so your boss is not wrong at all.
Let's look at some questions. Is the client willing to wait for the project? Is he willing to absorb certain cost overruns? Even if he isn't, is it worth completing the project and absorbing the extra costs rather than throwing away all the work already done? Can the company absorb what's already lost?
The real answer to your problem lies behind those questions. You can't fix a point in time and say: here, if the project isn't done by now, it's a failure. As for your specific situation, who knows? Your boss probably has more information than you do, so your job is to tell him how the project is going, how long it will take, and how much it will cost (in man-hours if you wish).
Unless the goals were clearly stated in the beginning of the project, there are no clear lines between "success" and "failure." Often, a project would have varying degree of success/failure.
For some, just getting some concepts into code would be a success, while others may measure success as recovering all investments and making a profit.
Two well-known failure modes are schedule slip and quality deterioration, but in the real world people do not seem to care much about them.
Simple ways to slip the schedule are letting managers make requests whenever they want (feature creep) and letting programmers code whatever they feel is right (cowboy coding). Change management processes, such as Scrum's sprint planning and XP's planning game, are examples of attempts by management and developers to ship reliable products on time. If either party is not interested in "reliable" or "on time", then change management will not be useful.
I suppose how successful the project is depends on who the client is. If the client is the company directors and they are happy, then the project was successful regardless of the failures along the way.
Andy Rutledge has written a pretty interesting article on success. Though the title is "Pre-bid Discussions", the article defines what makes a successful project, which for Andy entails:
Will I or my team be allowed to bring our best work to the final result?
Is the client prepared to engage in the project appropriately?
Is the client prepared to begin this project?
Is the client prepared to invest trust in my or my team’s ideas?
Am I or is my team prepared to fulfill or exceed the project requirements?
This article was pointed out by Obie Fernandez, a successful consultant, in his Do the Hustle conference about consulting.
What is change management, and how does it apply to a project?
Change management is about approving and communicating changes to a project before they happen. If someone on your project (user, sponsor, team member... whoever) wants to add a feature, the change needs to be documented and analysed for its effects. Any resulting changes to scope, budget and schedule must then be approved before the change is undertaken. These changes are typically approved by your sponsor, your steering committee or your client.
Once the changes have been approved and accepted that is your new plan. It doesn't matter what the original budget or schedule was.
Change Management on projects is all about the principle of "No Surprises". The right people (your Change Control Board) need to approve any changes to Scope, Schedule and Budget before they are acted upon.
One thing to remember is that there may be certain explicit or implicit constraints and tolerances for change. You may have to deliver your project by a certain date to meet government regulatory requirements. Or your organisation may have a threshold such that once a project budget is 30% over the original, it must go to "C" level or the project is killed. Investigating and explicitly stating these thresholds and tolerances up front is a good way of having more successful projects.
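As a trivial sketch of making such a tolerance explicit (the 30% figure is just the example from the paragraph above, and the function is hypothetical):

```python
def requires_escalation(original_budget: float, spent: float,
                        threshold: float = 0.30) -> bool:
    """Flag a project for senior review once spend exceeds the agreed
    tolerance over the original budget."""
    overrun = (spent - original_budget) / original_budget
    return overrun > threshold

print(requires_escalation(100_000, 125_000))  # False: 25% over budget
print(requires_escalation(100_000, 135_000))  # True: 35% over budget
```

Stating the rule this plainly up front leaves no argument later about whether a change was within tolerance.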
Where does "change management" end, and "project failure" begin?
If a project delivers on the approved scope, schedule and budget then it is successful.
However, it may still be viewed as a failure. Post-Implementation Reviews are a good tool to qualify this with your stakeholders (not just your boss). Benefit Realisation is also worth looking into, to see beyond the black box of the project to its impact on the business as a whole.

Which Agile software development methods have you had the most success with? [closed]

There are numerous Agile software development methods. Which ones have you used in practice to deliver a successful project, and how did the method contribute to that success?
I've been involved with quite a few organisations which claimed to work in an 'agile' way, and their processes usually seemed to be based on XP (extreme programming), but none of them ever followed anywhere near all the practices.
That said, I can probably comment on a few of the XP practices:
Unit testing seems to prove very useful if it's done from the start of a project, but it seems very difficult to come into an existing code-base and start trying to add unit tests. If you get the opportunity to start from scratch, test driven development is a real help.
Continuous integration seems to be a really good thing (or rather, the lack of it is really bad). That said, the organisations I've seen have usually been so small as to make any other approach seem foolish.
User story cards are nice in that it's great to have a physical object to throw around for prioritisation, but they're not nearly detailed enough unless your developer really knows the domain, or you've got an onsite customer (which I've never actually seen).
Standup meetings tend to be really useful for new team members to get to know everyone, and what they work on. The old hands very quickly slack off, and just say things like 'I'm still working on X', which they've been doing for the past week - It takes a strong leader to force them to delve into details.
Refactoring is now a really misused term, but when you've got sufficient unit tests, it's really useful to conceptually separate the activity of 'changing the design of the existing code without changing the functionality' from 'adding new functionality'.
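As a small sketch of that separation (Python, with a hypothetical function): the test pins down the behaviour, so the design can change while the test stays green:

```python
def test_word_count_is_unchanged_by_refactoring():
    assert word_count("the quick brown fox") == 4

# Before refactoring (clumsy but working):
# def word_count(text):
#     count = 0
#     for word in text.split(" "):
#         if word != "":
#             count += 1
#     return count

# After refactoring: same behaviour, cleaner design. The test above
# passes unchanged, which shows no functionality changed.
def word_count(text: str) -> int:
    return len(text.split())
```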
Scrum, because it shows where the slackers are. It also identifies much faster that the business unit usually doesn't have a clue what it really wants delivered.
Scrum.
The daily standup meeting is a great way to make sure things stay on track and progress is being made. I also think it's key to get the product/market folks involved in the process in a real, meaningful way. It'll create a more collaborative environment and removes a lot of the adversarial garbage that comes up when the product team and the dev teams are separate "silos".
Having regular retrospectives is a great way to help a team become more effective/agile.
More than adhering to a specific flavor of Agile this practice can help a team identify what is working well and adapt to a changing environment.
Just make sure the person running the retrospective knows what he/she is doing otherwise it can degenerate into a complaining session.
There are a number of exercises you can take a team through to help them reflect and extract value from the retrospective. I suggest listening to the interview with Linda Rising on Software Engineering Radio for a good introduction.
Do a Google search for "Heartbeat retrospectives" for more information.
I've been working with a team using XP and Scrum practices sprinkled with some lean. It's been very productive.
Daily Standup- helps us keep complete track of what everyone is working on and where they are.
Pair Programming- has improved our code base and helped prevent "silly" bugs from being introduced into the system.
iterative development- using 1-week iterations has helped us improve our velocity by setting more direct goals, which has also helped us size requirements
TDD- has helped me change my way of programming: now I don't write any code that doesn't fix a broken test, and I don't write any test that doesn't have a clearly defined requirement. We've also been using executable requirements, which has really helped devs and BAs reach a shared understanding of requirements (see the sketch after this list).
kanban boards- show in real time where we are. We have one for the Milestone as well as the current iteration. At a glance you can see what is left to do and what's being done and what's done and accepted. If you don't report in your daily standup something pertaining to what's on the board you have explaining to do.
co-located team- everyone is up to speed and on the same page with what everyone else is doing. Communication is just-in-time and very productive; I don't miss my cube at all.
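The "executable requirements" mentioned above can be as simple as a requirements table the BAs and devs agree on, executed directly as parametrized tests. A rough sketch with pytest, using hypothetical shipping rules and requirement IDs:

```python
import pytest

def shipping_cost(order_total: float) -> float:
    # Hypothetical rule under test: free shipping at 50.00 and above.
    return 0.0 if order_total >= 50.00 else 4.99

# Each row is one agreed requirement; the suite executes the table.
@pytest.mark.parametrize("req_id, order_total, expected", [
    ("REQ-SHIP-1", 49.99, 4.99),
    ("REQ-SHIP-2", 50.00, 0.00),
    ("REQ-SHIP-3", 120.00, 0.00),
])
def test_shipping_requirements(req_id, order_total, expected):
    assert shipping_cost(order_total) == expected
```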
