From Estimation to Delivery - throughout the software development life cycle,
which documents are involved, and
what is the order?
I am not sure whether the methodology has much impact on the documents; in any case, let us consider Waterfall.
The answer is - as has been stated - it depends. I'm sure lots of people will answer for Agile methodologies (which are a far more movable feast) so for completeness I'll go with what you'd have for a fairly standard waterfall methodology:
A scope document - very high level, outlining what is and, more importantly, what is not in scope for the project, and what assumptions are being made. The main purpose of this document is to set expectations of what will ultimately be delivered. You're not saying how things will work, but you're trying to answer questions such as: will there be reporting? Will it pass data to other systems? Will you have to write your own user management functionality or pull from AD? Where you can't get definite answers to these things, include an assumptions section listing what you're assuming will be the case, so people can correct you if you're wrong. It should also include things like target implementation dates (not as a commitment, but so people know what is anticipated and can manage expectations accordingly).
A functional specification - What the application should do on a business level. This can be divided out into business requirements (the business processes it's automating and how they work) and functional requirements (what the system does and how it does it - screen navigation, how calculations are made and so on) but more commonly they're combined except for the largest systems. It should also include "non-functional" requirements such as performance, load, security and so on.
A technical specification - The most likely to be missed out. A detailed technical design including things such as object models, schema diagrams and information on how detailed technical problems are being addressed.
Test plans and test scripts - How the application is being tested with detailed test cases, data and expected results, covering all elements of the system.
User guide and Release Notes - How to install, configure and use the application.
The one I'd add to this is a support document - a short (less than 10 pages) crash course in what the app does and how it does it. Developers will often not read the full specifications (either because they don't have time or don't want to) so this document should be enough to allow them to understand what it does, how it works, the areas of the application which are most likely to be problematic and so on. It would be written a few weeks after go live by the team who built and implemented the system.
Of course depending on your methodology you may have none of these documents but if you're running a standard project in an old school structured, waterfall way, this would be pretty normal.
I'll use the typical consulting answer... 'It Depends'.
To start, methodology has an enormous impact on the documentation artifacts (not to mention project success), and I would place waterfall-style project management on the same level as allowing my doctor to cover me with leeches to cure a broken leg.
That being said - I have seen folks use the Microsoft Solutions Framework, and here's a link where you can grab their templates:
http://www.microsoft.com/downloads/details.aspx?FamilyID=9D2016AD-6F8A-47F5-84FA-BEC389DB18C1&displaylang=en&displaylang=en
In reality, I would strongly recommend any project to use Agile methodologies and engineering practices (at least, if you want it to have a much higher chance of success than a waterfall project).
http://www.agilealliance.com/ has some good reading, as does wikipedia at http://en.wikipedia.org/wiki/Agile_software_development
Good luck!
In a typical production scenario, where the development is not carried out at the client location, generally the waterfall model of the SDLC is followed and documents pertaining to its various stages are prepared:
Requirement gathering - A Business Requirement Specification that details the complete requirement. This is functional in nature. It is accompanied by test case scenarios provided by the users, in which they describe the testing and test cases they would carry out on the desired functionality. This also serves as a guideline for the development team in building out the scope of the functionality and validations.
Requirement Analysis - During this phase, the BA associated with the project carries out the impact analysis and feasibility analysis. Any limitations in the requirements, constraints, and assumptions are documented, shared with the business users, and signed off to avoid any further surprises.
Development Approach - During this phase, the development team lead or the system analyst prepares an approach document that defines the process flow, screen design, controls that will be placed on the screen, validations, attributes, database diagram, etc. This is then signed off with the BA. If the development team foresees any technical constraint that will impact the desired functionality, it is shared with the business team again and signed off.
Testing - When the users carry out testing on the release, they validate it against the test cases and test scenarios provided earlier. The defects found are documented and sent back to the development team. Each defect is first validated by the BA to ascertain whether it is a misunderstanding, a lapse in the functional requirements, or a technical bug, and a resolution is provided accordingly. During this phase, care is taken that all the test cases are completed successfully and all the bugs are resolved. If any test case or bug is to be parked for the next run, then based on the impact it will have on the functionality, a joint call is taken by the development team and the business users on the risk involved. At the end, the business users prepare a testing sign-off document in which they record the time taken by each resource for testing, observations, and process-improvement suggestions.
Production Deployment - This includes the deployment instructions for the deployment team and the server and database administrators to carry out the deployment.
Feel free to provide your suggestions.
I have to design and develop a standalone desktop application using NetBeans, Java and MySQL. I need to know how to plan my software step by step before coding - creating the SRS document, drawing use cases, planning the ER diagram, flow charts, BP diagrams, class diagrams, etc. - so that I end up with a complete, quality product with fewer errors.
As per my understanding, the development model needs to be determined first: waterfall or prototypical. The waterfall model is not much in use these days, as far as I know. Under the waterfall model, coding begins only after the requirements specification and software design are fully developed and nailed down, such that there is almost negligible chance they will change. In the modern world, however, the agile or prototypical software development model is followed: we start with basic requirements and a basic software design, then proceed directly to coding, testing, and sometimes releasing the product as soon as possible. All the steps of SRS, design, coding, testing and releasing are then repeated continuously throughout the application's lifetime; the product gets better with each release, and after only a few releases it reaches a point where it has many features live in production.
The iterative model of software development is more popular because the requirements keep changing, and it is hard to nail down the requirements for all the features of the product beforehand; the stakeholders don't have a full idea of what they want and/or how they want it. The same is true for design: as requirements change, the software design also needs to change, so it is not beneficial to lock down the software design either.
That being said, it is not the case that iterative development has no SRS or design specs. I would suggest starting with a basic SRS and a basic software design that capture the very core of the application, and keeping them flexible so they can accommodate changes easily.
The diagrams and documents that you mentioned are all good starting points. However, they need to be kept minimal, capturing only the core part of the application, so that the coding, testing and releasing can proceed quickly and accomplish the goal of getting an initial version (a proof of concept) out that can be demonstrated to the stakeholders.
Let us say it is a shopping application; its core might have these features:
Ability to add items to inventory
Ability to show all the items to the user (search comes later, user authentication comes later)
Ability for a user to view details of the item
Ability to make a purchase (fake purchase, actual payment processing can be done later)
Ability to view the orders and order details.
These features complete the critical path of the application, so that it becomes a working application as soon as possible and can be demonstrated and iterated on. The features that are not critical initially can be stubbed out - such as authentication, search, payment processing, sending emails and so on; one such stub is sketched below.
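For instance, the payment stub could be as small as the following Java sketch. The names PaymentProcessor and FakePaymentProcessor are made up for illustration; the point is just to show a seam where a real gateway can be plugged in later:

    // Hypothetical seam for payment processing: checkout code depends only on
    // this interface, so a real gateway can replace the fake later without
    // touching the ordering code.
    interface PaymentProcessor {
        boolean charge(String orderId, double amount);
    }

    // Stub used until real payment processing is prioritized.
    class FakePaymentProcessor implements PaymentProcessor {
        @Override
        public boolean charge(String orderId, double amount) {
            System.out.printf("FAKE charge of %.2f for order %s%n", amount, orderId);
            return true; // always "succeeds" so a purchase can be demoed end to end
        }
    }

Because the rest of the code only sees the interface, swapping in the real implementation later is a one-line change where the object is constructed.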
I am not sure if this answers your question but hope it provides some pointers in order to start the application development from scratch.
If you have more time, then follow the waterfall model.
You can go for an Agile methodology for fast delivery of the application.
Planning of the software depends on the following factors:
1) Scope of the project
2) Deadlines of the project
3) Number of resources available
4) Cost of the project
5) R&D work time, etc.
I hope it will help you.
So it seems like a lot of people are playing the blame game around where I work, and it brings up an interesting question.
Knowns:
Requirements team writes requirements for product.
Developers create their own unit tests according to requirements.
Testing team creates Test Conditions, Test Design and Test Cases according to requirements.
Product released if and only if X% of test cases from Testing team passes.
After delivery, the customer does acceptance tests --> the customer response team gets bugs from the field and lets the testing team know about these issues.
Question:
If the customer ends up filing a lot of defects, who is to blame? Is it the Testing team for not covering those? Or is it the requirements team for not writing better requirements? And how does one improve upon the system?
The statement "Product released if and only if X% of testcases from Testing team passes" really bothers me. The team may want to consider having better release criteria which is gated on more than just test pass rates. For example, are the scenarios known, understood, accounted for (and tested)? Certainly not all bugs will be fixed, but are the ones that have been postponed or not fixed been triaged correctly? Have you reached your stress testing and performance goals? Have you threat modelled and accounted for mitigations to potential threats? Have x amount of customers (internal/external) deployed builds and provided feedback prior to release (i.e. "dogfood")? Do developers understand the bugs coming from the field and the testers to create regression unit tests? Does the requirements team understand these bugs coming in to see why the scenarios weren't accounted for? Are there key integration points between features which weren't accounted for in specs, development, or testing?
A few suggestions to the team would be to first do a postmortem on the issues found and understand where it broke, and strive to push quality upstream as much as possible. Make sure the requirements team, devs, and testers are communicating frequently and well throughout the planning, dev, and testing cycle to make sure everyone is on the same page and knows who is doing what. You would be amazed at how much product quality can be gained when people actually talk to each other during development!
Bugs can enter the system at both the requirements and development steps. The requirements team could make some mistakes or over-simplifying assumptions when creating the requirements, and the developers could misinterpret the requirements or make their own assumptions.
To improve things, the customer should sign off on the requirements before development proceeds, and should be involved, at least to some extent, in monitoring development to ensure things are on the right track.
The first question in my mind would be, "how do the defects stack up against the requirements?"
If the requirement reads, "OK button should be blue" and the defect is "OK button is green", I would blame development and test -- clearly, neither read the requirements. On the other hand, if the complaint is, "OK button is not yellow", clearly, there was an issue with requirements gathering or your change-control process.
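One way to keep that traceability honest is to turn each written requirement into an executable regression check. A minimal JUnit 5 sketch, assuming a hypothetical OkButton component and a made-up requirement ID:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Minimal stand-in for the UI component under test (hypothetical).
    class OkButton {
        String getColor() { return "blue"; }
    }

    class OkButtonRequirementTest {
        // Traces to the (hypothetical) requirement REQ-42: "OK button should be blue".
        @Test
        void okButtonIsBlue() {
            assertEquals("blue", new OkButton().getColor());
        }
    }

A field complaint of "OK button is green" then points at a failing or missing test rather than at a guessing game between teams.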
There's no easy answer to this question. A system can have a large number of defects with responsibility spread between everyone involved in the process -- after all, a "defect" is just another way of saying "unmet customer expectation". Expectations, in themselves, are not always correct.
"Product released if and only if X% of test cases from Testing team passes" - is one the criteria for release. In this case "Coverage of Tests" in written TCs is very important. It needs good review of TCs whether any functionality or scenario is missed or not. If anything is missed in TCs there might possibility to find bugs as some of requirement is not covered in test cases.
It also needs some ad-hoc testing as well as Exploratory testing to find uncover bugs in TCs. And it also needs to define "Exit criteria" for testing.
If customer/client finds any bug/defect it is necessary to investigate as: i) What type of bug is found? ii) Is there any Test case written regarding that? iii) If there is test case(s) regarding that executed properly? iv) If it is absent in TCs why it was missed? and so on
After investigation decision can be taken who should be blamed. If it is very simple and open-eyed bug/defect, definitely testers should be blamed.
Web development is a mess.
This is because we have to interact with a lot of people:
business people, designers, developers, leads, etc.
A website is a mixture of a lot of skills, involving programmers, designers, SEO experts, business people, ergonomists, etc.
So the question is: how do you work to make all those people understand each other and interact together?
How could I decompose the several steps leading to a website?
Because a lot of companies sell a design first, how could you then add the right functionalities?
For example, we can decompose a project like this :
Functional scopes (CRUD, Resources, ACL)
Designing the interface
Start development
Write XHTML/CSS according to the interface and the functional requirements
I may have forgotten steps, or disordered them.
EDIT:
For example, here is how I proceed:
I write a short overview of the project: what is the main goal?
I try to identify which resources (users, articles, products, etc.) are involved.
I write a short CRUD list for each resource, which helps me get an overview of the features (see the sketch after this list).
I start designing the database (with MySQL Workbench, for example).
That done, I try to work out whether there are roles and privileges, and relate them to the resources.
I start development (+ testing).
Then I write the XHTML so as to respect W3C standards and web semantics.
I start adding the visual design with CSS.
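To make the CRUD-list step concrete, the list for each resource can be captured as a small contract. A sketch in Java (used here purely for illustration; the interface and type names are hypothetical, and the same idea translates to PHP or any other stack):

    import java.util.List;
    import java.util.Optional;

    // Hypothetical CRUD contract, written once per resource
    // (users, articles, products, ...). The roles/privileges step can
    // later wrap these operations with ACL checks.
    interface ResourceCrud<T, ID> {
        T create(T entity);
        Optional<T> read(ID id);
        T update(T entity);
        void delete(ID id);
        List<T> list();
    }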
So what about you? What are your steps to be efficient?
I would say:
Overall Site Intent
User Analysis (Determine site/application Demographics, User Groups, etc.)
Conceptual Design
Graphic Design
Functional Scope
Interface Design (Prototype, Wireframes, etc.)
Interface Mockups
Development/Unit Testing
User Acceptance Testing
...pick and choose the parts you need. Doing all of them may be overkill, but probably not if you're working on a large team with many groups giving their input. Making sure you don't miss steps gives a chance for everybody to give their input and decide on a course of action.
Web development is different from other types of software development because frequently there aren't any users among the development personnel. For example, "users" are absent from your list of people involved.
The users exist as a notional bunch of faceless people who are out there (we hope, because that's what the business plan is predicated on). Requirements are gathered and design decisions taken on the basis of assumptions about what the putative users might like or want.
So in many ways web development more resembles opening a restaurant or launching a new political party than rolling out an ERP system.
I don't think there's actually anything unique about web-development here compared to regular software development (with the exception of seo, which is just another technical challenge). I don't think there's anything inherently more "messy" about web-development. Read through the terms in your question again - do any of the terms (excluding seo as mentioned) not apply to general software development (substitute "xhtml/css" for "frontend development")?
Personally, I think any software engineering methodology which you've found works for your team-size/work environment/colleagues/etc is applicable to web-development.
There's nothing magical about the fact that the end-product runs in a browser.
XP and Agile methodologies look at creating teams whose members have all the skills needed for the project, such as project manager, developer, business analyst, designer, tester, etc.
Having teams means there is better communication between everyone involved, including the client.
The subject is massive, so do some Google searches on XP, Agile, Scrum, Kanban.
Yup, you are right, there are several steps in developing a dynamic website. If you want to develop a static site, then it's easy:
only design work is needed, plus some functionality added by the designer, like email forms and so on.
But if you are going to develop a dynamic website, then it is accomplished by these steps:
1. First, make sure about the requirements.
2. Then decide on its interface and layout.
3. The designer designs all the forms that are needed.
4. Then the developers/programmers add functionality to the forms.
5. After the coding part is complete, the project goes to testing for errors.
6. If any error occurs, it is rectified by the programmer and the project goes back to testing; this process goes on until all errors have been removed.
7. Finally, the website is published and hosted on a server.
A website is a mixture of a lot of skills which involves programmers, designers, seo experts, business persons, ergonomists, etc...
If you're really lucky you will have a team of talented multidisciplinarians who can take on more than one role.
That's when you tend to get the best web products.
Design by committee, which you will always get if everyone only gets to 'wear one hat', rarely produces kick-arse products.
Let's say I have my vision and now my product backlog of items. That part is in writing and ready to be used. I am about to create my sprint(s). I am curious. When does a programming team sit down and say "let's use this platform, this framework, and language" and things like "We need a class here," or "I see a way we can use an Interface with that" and so on. When is that kind of talk going on?
Are there meetings that come before the sprint where someone decides for all teams - "We are going to use Linux, MySQL, PHP and CodeIgniter" - and then one of the teams has a sprint to implement that infrastructure, while the other teams wait for its completion to start their own sprints (e.g. team A2 builds a UI or security model and its features from the product backlog)?
Is this also where tools like trac would be used at the team level, when the sprints first begin?
Sorry if I'm all over the place with this. I've just never seen it done and just when I think I understand it I think of a new question.
Also it's beside the point but what do you name your teams? Bob's team, Smith's team, something more colorful?
Thank you.
The short answer is "it depends". As for the first part, there could be other teams that dictate those kinds of terms to some extent. For example, on my current project some things are almost a given, e.g. IDE = Visual Studio, bug tracking = HP Quality Center, version control = Subversion, O/S for developers = XP Professional, etc.
There can be a Sprint 0 where some infrastructural elements are handled, like a CI server, a wiki for the team, making sure everyone has accounts in SVN, and other administrative matters.
Team names, like code names, can come up at any time, though they can have different meanings: what someone uses for a team name in one place may not be so good somewhere else, e.g. Team Voltron may not go over well with those completely unfamiliar with the term.
Some teams start their projects with a Sprint Zero, where they refine the vision, define the global architecture (choices of platform and language, not classes or interfaces), the definition of "done"...
This sprint is special: it's about preparation and, unlike the other sprints, might not lead the team to deliver any working software.
If you are part of an agile Scrum team, chances are your company already has defined patterns and architectures.
In my opinion Scrum teams are not responsible for design; there are separate design teams responsible for the overall design and integration plan of any ongoing projects.
The design team does the strategic part of a project's development phase, which is the architecture, design and integration plan. These teams may have their own Scrum sprints.
The Scrum master, along with the team leads, is responsible for the tactics of implementing projects as per the design.
Programmers, testers and QA engineers have the operational responsibility of writing and testing code.
I would split it into a few parts.
Things like choosing tools/platforms (Linux, MySQL, PHP, etc.) I'd have agreed on before even starting sprint 0. I consider sprint 0 more about setting the vision and high-level architecture, which, to some extent, depends on the tools/platforms of your choice. The people you'd want in the team will also be different for an ASP.NET project than for a PHP project.
Another thing is moving to discussions like "I need a class here and an interface there." This level of detail can't really be decided up front during sprint 0; we make these decisions all along the way. This means we're changing our architecture rather often, but it's rare for the changes to be deep, and almost always when we change something it is a well-grounded decision.
To summarize: key technology decisions before you start, high-level architecture during sprint 0, lower-level design decisions whenever needed (during sprints).
"Sprint 0" is the standard approach to starting up. For ongoing major architecture decisions (switch toolkit, language, platform), a series of investigation spike stories have worked well if they are as small and focused as possible. The story is to address a specific question or prove a concept. Infrastructure questions can -- and I'd argue must -- be broken down into small stories or you may wander off the map.
Smaller infrastructure changes have sometimes worked well as a "tax" on other stories, sometimes not (e.g. research and add a dependency injection tool, switch to a generic hibernation tool). Taxing stories requires excellent communication between product and development. It presumes that some eager dev has already done some late-night homework on the infrastructure.
Without success, we've tried hoping major architecture decisions would happen over the course of normal work. This fails because scrum keeps you too focused.
I got into a mini-argument with my boss recently regarding "project failure." After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data.
Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot.
So I ask you this:
What is change management, and how does it apply to a project?
Where does "change management" end, and "project failure" begin?
#shog9:
I wasn't asking about a blame game with the consultants, especially since in this case I represent the consultants. I was looking for views on when a project should be considered "failed" regardless of if the needed functionality was finally implemented.
I'm looking for the difference between "this is actually a little more complex than we thought, and it's going to be another week" which I'd expect is somewhat typical, and "project failure" - however you want to define failure. Is there even a difference? Does this minor level of schedule slippage constitute statistical "project failure?"
I think, most of the time, we developers forget that all this we do is, after all, about business.
From that point of view a project is not a failure while the client is willing to pay for it. It all depends on the client: some clients have more patience and understand better the risks of software development; others just won't pay if there's a substantial delay.
Anyway, about your question. Whenever you run a project there are risks involved: maybe you schedule the end of the project for a certain date, but it takes six months longer than you expected. In that case you have to balance what you have already spent, and what you have to gain, against the risks you're taking. There's actually an entire discipline called "decision making" that studies this at the software level, so your boss is not wrong at all.
Let's look at some questions. Is the client willing to wait for the project? Is he willing to assume certain cost overruns? Even if he isn't, is it worth completing the project and assuming the extra costs, instead of throwing away all the work already done? Can the company absorb what's already lost?
The real answer to your problem lies behind those questions. You can't establish a point and say: here, if the project isn't done by this time, then it's a failure. As for your specific situation, who knows? Your boss probably has more information than you do, so your job is to tell him how the project is going, how long it will take, and how much it will cost (in man-hours if you wish).
Unless the goals were clearly stated at the beginning of the project, there are no clear lines between "success" and "failure." Often, a project will have varying degrees of success/failure.
For some, just getting some concepts in code would be a success, while other may measure success as recovering all investments and making profit.
Two well-known modes of failure are schedule slip and quality deterioration, but in the real world people do not seem to care much about them.
Simple ways to slip the schedule are to let the managers make requests whenever they want (feature creep) and to let the programmers code whatever they feel is right (cowboy coding). Change management processes such as the sprint planning of Scrum and the planning game of XP are some examples: attempts by the management and the developers to ship reliable products on time. If either party is not interested in reliability or on-time delivery, then change management will not be useful.
I suppose how successful the project is depends on who the client is. If the client is the company directors and they are happy, then the project was successful regardless of the failures along the way.
Andy Rutledge has written a pretty interesting article on success. Though the title is Pre-bid Discussions, the article defines having a successful project, which for Andy entails:
Will I or my team be allowed to bring our best work to the final result?
Is the client prepared to engage in the project appropriately?
Is the client prepared to begin this project?
Is the client prepared to invest trust in my or my team’s ideas?
Am I or is my team prepared to fulfill or exceed the project requirements?
This article was pointed out by Obie Fernandez, a successful consultant, in his Do the Hustle conference about consulting.
What is change management, and how does it apply to a project?
Change management is about approving and communicating changes to a project before they happen. If someone on your project (user, sponsor, team member... whoever) wants to add a feature, the change needs to be documented and analysed for its effect. Any resulting changes to scope, budget and schedule must then be approved before the change is undertaken. These changes are typically approved by your sponsor, your steering committee or your client.
Once the changes have been approved and accepted that is your new plan. It doesn't matter what the original budget or schedule was.
Change Management on projects is all about the principle of "No Surprises". The right people (your Change Control Board) need to approve any changes to Scope, Schedule and Budget before they are acted upon.
One thing to remember is that there may be certain explicit or implicit constraints and tolerances for change. You may have to deliver your project by a certain date to meet government regulatory requirements. Or your organisation may have a threshold such that once a project budget is 30% over the original, it must go to "C" level or the project is killed. Investigating and explicitly stating these thresholds and tolerances up front is a good way of having more successful projects.
Where does "change management" end, and "project failure" begin?
If a project delivers on the approved scope, schedule and budget then it is successful.
However, it may still be viewed as a failure. Post Implementation Reviews are a good tool to qualify this with your stakeholders (not just your boss). Benefit Realisation is also worth looking into, to see outside the black box of the project to its impact on the business as a whole.