Purpose of automation testing - Feasibility [closed] - project-management

What is the purpose of automation testing?
In my view, the main purposes are:
It is fast
It removes repetitive manual work
My main question is this: if automation only reduces repetitive manual work, but the testing still takes almost as much time as it did before, is automation feasible in this case? After all, making the testing automated takes the tester some time to create.
So if one resource dedicates 15 working days to creating the automation testing framework, and later finds that the automation only reduces his repetitive work but does not reduce the time required, what does the organisation gain from this automation framework, given that the resource stays dedicated to the part he automated?

The profit is long term:
Short term, it takes time to create the tests.
Short / middle term, you gain some time running them, but that is balanced by the time it took to write them.
Long / very long term, you can run the tests over and over again; each day, you gain some more time ;-)
You also have the advantage of having reproducible tests: it's easier to get the same results each time and to compare two builds to see if, and what, went wrong...
Also, once your tests are complete, lots of things are tested each time they run. On the other hand, would a human being do the same tests over and over again each day? Would you?
Considering that too many developers don't even fully test their application once, I bet no one will test their application every day or each time a modification is made.
Considering feasibility: well, last year I spent something like 20 days writing automated tests; those are still run twice a day, every day, and they still sometimes identify regressions in parts of the application that developers rarely use and that no one would test manually, or in parts that are so hard to get to (many screens with long forms and a complex process) that no one ever tests manually either...
It took time, yes, but it definitely was a great investment!

Building escalators and elevators takes a great deal of time and money. They also require maintenance.
But the people using them have the convenience of quickly getting to the floor they need. And they are still doing the walking, too.
As you can see from this analogy, Test Automation is clearly not the same as Automated Testing.
But once it's implemented, testers can use it to get test results automatically. That saves time and helps to extend coverage.
You also don't really need an elevator in a small house with 2-3 storeys. For a 5-7 storey building it becomes valuable. For a building of 10 storeys and up it is a necessity, and the more floors you have, the more elevators and escalators are required.
Replace storeys with functionality modules and you are back to Test Automation needs.
Thanks.

The main benefit of automating your testing is that it will expose when a change to the code has caused a regression: something that used to work fine is now broken. The payback computation on the automation work really depends on how much your code changes. If you're unlikely to ever touch the code once it's tested as working, then automation is of limited value relative to what it costs to develop. But if developers are going to be hacking at the program regularly, you can bet that automating the tests that pass will eventually pay for itself. You'll find regressions as soon as they're introduced, rather than later when the cause is much harder to determine, and it doesn't take many such expensive events to pay for the cost of automating the tests in the first place. And if you consider the quality of the released code to be important, automated tests that find regressions before something goes out are even more valuable.
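To make that payback computation concrete, here is a back-of-the-envelope sketch. Every number in it is a made-up assumption (loosely matching the 15-day framework effort from the question), so substitute your own figures.

    # Rough payback sketch for automating a regression suite.
    # All numbers are assumptions for illustration only.
    automation_cost_days = 15        # up-front effort to build the suite
    manual_run_days = 0.5            # one full manual regression pass
    automated_run_days = 0.05        # babysitting one automated run
    runs_per_month = 20              # e.g. once per working day

    saved_per_month = runs_per_month * (manual_run_days - automated_run_days)
    months_to_break_even = automation_cost_days / saved_per_month
    print(round(months_to_break_even, 1))   # ~1.7 months until the effort pays off

The more often the suite runs (and the more the code changes), the faster the break-even point arrives.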

It's quick.
It takes care of regression testing for you.
You only need to work manually on the updated module.
Very little manual intervention is needed.
The time saved can be used for further enhancement of the automation.

Test strategy for non-functional test cases in continuous integration [closed]

In large-system development, the non-functional requirements are frequently the most important, and implementing them takes the majority of the development time. The non-functional tests are expensive and often take a long time to run. Non-functional tests frequently cannot be run in the normal continuous-integration cycle because they may take too long to execute (a stability test might take two weeks). Could anyone suggest a good test strategy to achieve manual execution of non-functional testing in a continuous integration process where an automated build is created every 2 hours?
Some lengthy tests could (and if so, should) be split into several shorter tests which can be executed in parallel.
In some cases it may be preferable to spend some money to increase the number of testbeds, and thus the overall test bandwidth/capacity, which would allow multiple tests to overlap each other, reducing or even eliminating the impact of the long test duration. You could still use this in (some) CI systems: no one says that if the CI pipelines start every 2 h they also need to complete within 2 h. They can continue and overlap (staggered) as long as the resource capacity allows it (or at least a decent CI system should support such overlapping).
Alternatively, the CI system could be configured to selectively run longer tasks depending on its capacity: say, do the typical stuff for every pipeline (2 h apart), but run a test with a capacity of 1 per day only once every 12 pipelines, or whenever resources for the long test are available (maybe selecting a pipeline which already passed the shorter verifications, giving higher chances of passing the longer test and more meaningful results). This could even be done "manually", by firing the long tests with artifacts from a subset of the CI executions.
In some cases the long duration is a side effect of limitations of the testing infrastructure or of the actual test code, for example the inability to execute tasks in parallel even where that wouldn't fundamentally affect the test. In such cases, switching to a more appropriate infrastructure or, respectively, rewriting the tests to allow/improve parallelism could shorten (sometimes significantly) the test duration.
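As a rough sketch of the staggered/selective scheduling idea above: the BUILD_NUMBER variable, the tests/short and tests/long directories, and the once-every-12-pipelines cadence are all assumptions for illustration, not part of any particular CI system.

    # Minimal sketch: every pipeline runs the short checks; only a subset of
    # pipelines (ones that already passed them) also trigger the long suite.
    import os
    import subprocess
    import sys

    LONG_SUITE_EVERY_N_PIPELINES = 12   # roughly once per day for 2-hourly builds

    def main() -> int:
        build_number = int(os.environ.get("BUILD_NUMBER", "0"))

        # Short, fast checks run on every build.
        short = subprocess.call([sys.executable, "-m", "pytest", "tests/short"])
        if short != 0:
            return short   # don't spend long-test capacity on a broken build

        # Long-running non-functional suite only on selected pipelines.
        if build_number % LONG_SUITE_EVERY_N_PIPELINES == 0:
            return subprocess.call([sys.executable, "-m", "pytest", "tests/long"])
        return 0

    if __name__ == "__main__":
        sys.exit(main())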
First of all, congratulations on understanding the importance of non-functional requirements; this is still uncommon knowledge!
You've mentioned tests running for 2 weeks; this seems far too long to me. Continuous integration is about an immediate feedback loop. If any test takes that long, you may be notified of a serious problem only 2 weeks after it was introduced. I'd think twice about whether it really has to be like that.
Manual execution of non-functional testing in continuous integration should be avoided as much as possible. Tests should run automatically straight after deployment. If for some reason certain tests can't run in this fashion (e.g. because they take longer to execute), they should be triggered periodically, automatically of course.
There are a couple of options to speed up NFT execution time:
Scale down the tests - e.g. instead of 1000 threads with ramp up = x, run 100 threads with ramp up = x/10. If you scale all necessary parameters properly, you may get accurate feedback much earlier.
Parallelise NFT execution across a number of test environments once the functional tests have passed. If you use a platform like Amazon, this should be perfectly possible. And if you pay for the time the machines are up, this doesn't have to significantly raise the cost; the overall machine time may be similar.
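As a minimal sketch of that parallelisation idea, assuming each suite can run against its own environment: the suite paths, the environment URLs, and the TARGET_URL convention below are all hypothetical placeholders.

    # Run independent NFT suites concurrently, each against its own environment.
    import os
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor

    # Each entry: (suite directory, environment the suite should target)
    SUITES = [
        ("nft/load", "https://env-1.example.test"),
        ("nft/stress", "https://env-2.example.test"),
        ("nft/endurance", "https://env-3.example.test"),
    ]

    def run_suite(path, target):
        # TARGET_URL is a made-up convention the tests would read; because each
        # suite has its own environment, the suites can safely overlap in time.
        env = dict(os.environ, TARGET_URL=target)
        return subprocess.call([sys.executable, "-m", "pytest", path], env=env)

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=len(SUITES)) as pool:
            results = list(pool.map(lambda args: run_suite(*args), SUITES))
        sys.exit(max(results) if results else 0)

Wall-clock time drops to roughly the duration of the slowest suite, while total machine time stays about the same.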

Unit tests in TDD

I am quite new to TDD, and the first question that came to my mind is whether I should apply unit tests to every developed component. I am asking because I have observed that unit testing takes a lot of time, especially when changes to the requirements arrive. So, could you suggest some best practices for unit testing in TDD?
A short description of TDD expressed in three rules:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
And a longer description: http://jamesshore.com/Agile-Book/test_driven_development.html
And a more in-depth description: http://www.growing-object-oriented-software.com/
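As a tiny illustration of those three rules (a sketch in Python with a made-up discount function, not taken from the linked material): the test is written first and fails, then just enough production code is written to make it pass.

    # Step 1: write a failing unit test that specifies the behaviour you want.
    # (discount() does not exist yet at that point, so the test fails first.)
    import unittest

    def discount(price, percent):
        # Step 2: write only enough production code to make the test pass.
        return price * (1 - percent / 100.0)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertAlmostEqual(discount(200.0, 10), 180.0)

    if __name__ == "__main__":
        unittest.main()

Step 3 is to refactor with the test kept green, then repeat with the next small test.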
Just an observation: you said "I observed that unit testing takes a lot of time", which is true in the short view. But for code that's going to be around a while, the extra work saves time. I stressed the value of unit testing on one project I was on some years ago, telling everyone who would listen, "We don't have time to skip that step." And it's true. There are many places where time will be saved: testers will spend less time testing the app and kicking bugs back through whatever bug-tracking process you use, you'll spend less time weeks or months later remembering what you did so you can fix the bug, and users will see fewer bugs, meaning they will spend less time yelling at you about broken apps. You'll spend less time on the phone at 2 AM hoping you can get the app fixed before the users come in the next day.
It all comes down to economics, in a way. For any code that's going into production, trust me, you don't have time to skip that step. It'll cost you more time and your company more cash.
If the code's not going to production, that is, it's a utility you wrote to help with some task or to see how the network layer really works, then adjust the amount of testing you do to suit the need. Experience will help guide you to the right amount.
TDD means that the tests drive the development of your components. You start by writing a unit test that specifies the behavior of the component, then you implement the component. So to answer your question:
"Should I apply unit tests to every developed component?"
No, because the unit tests should already be written before developing the component.
The tests drive the development of your code. TDD is really all about defining the desired behaviour of your software; the fact that it's all testable is just a nice side effect.
A good essay on TDD is available here

Becoming the most efficient one-man team [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 11 years ago.
Improve this question
Like many here, I am a one-man development team. I'm responsible for everything: gathering project requirements, designing concept screens, planning and developing databases, and writing all the code.
Being a one-man team is nice, but has its negatives. I don't have the ability to quickly consult with other developers, I rarely get a second set of eyes for my code, and I'm sure you guys can come up with many other negatives too.
To make the most of my time, and commit myself most efficiently to my work, what tips or practices could I implement into my day-to-day routine to be the best one-man team possible?
Make a daily list of what I am going to do.
Remove as many distractions as possible to focus on tasks. Turn off email, turn off IM, etc., even if only for a set period of time, and then check them during a break.
Take time to learn about other coding techniques, tools and programming wisdom. This I have found to be crucial to my development. It's too easy to just code away and feel productive. Think about what could be if you just had some more knowledge/weaponry under your belt to bang out that next widget. I know this one really sounds counter-productive, but it isn't. Knowledge/know-how is our real currency. The more we know, the better the decisions we can make about how something should be done, and the faster we can do it.
Take breaks and be aware of your body. When we are tired we don't think as well, we make more mistakes, we become frustrated more easily, etc.
Learn to use the 80/20 rule to your advantage. I don't mean skimp or be lazy. Often, though, we will work our tail off for that last 20% when it wasn't necessary.
Set goals for yourself (daily, weekly, bi-weekly). Make sure the goals are also in line with those of the people you are coding for, or you may find you have wasted some time.
From a technical aspect, consider:
Unit testing / TDD. I have found in my own work that this actually saves time. It takes a while to get the hang of, but as with anything, you will get better.
Care for your code. Refactor it (especially if you start unit testing). The better your code is, the easier it is to maintain, which takes less time. The easier it is to understand, the faster you can change/implement features.
I'm learning to spend a lot more time planning out my day than I used to. This includes planning out projects, down to writing pseudo-code for the programming I need to do. I find that, with all the interruptions in my schedule, it's difficult for me to get started on something. Having everything broken down into small tasks makes it much easier to start again after an interruption.
According to operations research, shortest job first is the best scheduling policy for getting the most things done.
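A quick worked illustration of that claim; the task durations below are made up for the example.

    # Shortest job first vs. first-in-first-out for the same three tasks.
    def average_completion_time(durations):
        elapsed, total = 0, 0
        for d in durations:
            elapsed += d          # time at which this task finishes
            total += elapsed
        return total / len(durations)

    fifo = [8, 2, 1]              # tackle tasks in arrival order (hours)
    sjf = sorted(fifo)            # shortest job first

    print(average_completion_time(fifo))  # ~9.67 hours on average
    print(average_completion_time(sjf))   # 5.0 hours on average

The total work is identical, but items spend far less time sitting on the list on average when the shortest ones go first.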
I write and run integration and system tests, but no unit tests, because I've no need for early (pre-integration) testing: Should one test internal implementation, or only test public behaviour?
A corollary of Conway's Law is that you need to test the internal software interfaces which separate/integrate developers, whereas a "one-man army" doesn't need to explicitly test his internal interfaces in this way.
A lot of the other tips are good, but they apply equally to developers working in a team and to a lone developer.
I think the hardest thing as a one-man team is communicating effectively with the rest of your company. You will always be a lone programmer's voice in any meeting or discussion about how best to build software.
As a result, I'd advise working on your negotiation skills and focusing on improving the way you describe technical concepts in terms a non-programmer can understand. Reading books such as Getting to Yes and How to Win Friends and Influence People is a good way to start.
When more than one person agrees on a viewpoint, the viewpoint automatically gains credibility with those you are trying to convince. In the absence of this possibility, you need to work extra hard at preparing your arguments with well-researched evidence and a balanced view.
I'm in the same situation. There's already a lot of good advice above, but one thing I'd add is: find out when your best coding times are and make sure you're coding during that time. I have a few hours in the morning when I seem to be at my best for coding, and I try to keep that time free of all distractions. Plan things like meetings, writing documentation, testing (at least the tedious, repetitive stuff), and all that other stuff for your less productive time. Keep those hours when you're 2 to 5 times more productive reserved for coding.
Make sure you refactor early and often. That serves almost like a second set of eyes (for me, at least).
Don't work insane hours (especially tricky if you're working from home). Actually, working fewer hours often proves more productive, as the impending break/end-of-day pressure increases your efficiency.
You may want to look up Parkinson's Law for work/time management.
I use a text file to collect all the things I do every day. Every time I run into a problem or have a question or find a solution, I add it to my file. It's very low-tech but it provides a wealth of information, like "where am I spending most of my time?" or "how did I fix that problem before?". Also makes it super-quick to give your client a list of hours at the end of your billing cycle.
I also use another text file (per client) that contains all the work items on my plate, arranged in order of priority, and updated frequently. It helps both me and my clients focus on what I should be working on next, so the pump is always primed.
Eventually I'll move away from flat text files to using something like FogBugz, but for now I can't beat the price, or how easy it is to search, or how easy it is to e-mail.

TDD in a large project: How do you get started? [closed]

Simple question. Let's put on our engineer/project manager hat for a second:
You have a large project and you will have several different developers working on different parts. You have a solid functional spec and are ready to start figuring out your implementation. You want to practice Test Driven Development. Assume you will be given reasonable but not infinite time and money.
How do you start preparing the modules for implementation by the other developers? Do you start writing interfaces or do you start writing tests? Do you mix n' match?
How would you change your answer if you had a team of veteran coders? A team of less experienced folks? Would your coding environment change your decisions?
Strictly speaking, if you are doing TDD (not just unit testing), then you start with the tests, before writing the function that the unit tests actually test.
All functionality that you write needs to have tests written to verify the code that you are about to write.
The developers themselves would be the ones writing the unit tests - functional/acceptance tests are a separate issue and could be (partially) written prior to work commencing.
If the team is not experienced with unit testing, they should start implementing features and then write unit tests as soon as each little class/small piece of functionality is finished.
Unit tests (and TDD as a result) are not for testing modules of systems; they test at a more granular level, verifying that functions and classes do what the developer expects them to.
Testing at a higher level is outside the bounds of TDD and steps into other types of testing.
If you have a good functional spec, I'd split the work...and start working on test cases.
Once you've got the test cases worked out, the developers can begin coding their modules and the tests to go with them.
...and the approach wouldn't change with different developers or coding environment.
Without searching and being fairly new here, I'm guessing there's been a lot of discussion around this, but let me give an answer anyway.
First, I'd ask what you mean by a 'large' project? I've seen some people label a project taking a few months and involving 4 or 5 programmers as a 'large project'. To others, a 'large project' means a multiple year duration and 30 or 40 devs. I'll assume it's somewhere in the middle, given you mention 'several developers'. To be safe, let's assume it's a half year to a year in duration with 5-10 devs.
As others have said, if you're doing TDD you'd start with the tests first, not a lot of design work. The design is emergent. In practice, however, I think there's a balance between the TDD approach (which I see as valuable but not essential) and simply ensuring you have good unit test coverage, which is essential in my view.
If you're a coder yourself and you have experience with TDD, I'd suggest you should be practicing what you'll be preaching. Rather than trying to design the whole system at an abstract level, defining interfaces, and so on, choose a core piece of the system. Make sure to do the simplest thing possible, get a green bar, refactor, add another test, and so on.
The biggest impediment to using TDD on a project with multiple developers is lack of experience with the methodology. Give your programmers a concrete example (even if it's a small bit of functionality) that really shows them how to do it the right way, pair with people as they come onto the project, have regular reviews of people's unit tests, and make sure it continues to be a topic that's at the forefront of what you're doing, not just an afterthought. If you're doing Scrum, make it part of your definition of 'done' for any task/story.
I'd also spend some time setting up something like EMMA or Cobertura (big fan of Cobertura), so you have some hard metrics by which to assess people's tests. Coverage numbers don't tell you how effective your tests are, but they are a data point. If you have 20% test coverage, you can be pretty sure people aren't writing the tests they should. Conversely, just because you have 90% test coverage doesn't ensure the tests are good, which is the reason for things like pairing and reviews.
So, the simple answer is: give your guys an example, focus on pairing/reviews, and put some things in place like a code coverage tool to help keep the team honest.
TDD isn't exactly what you're looking for. TDD happens at the beginning of the project and drives development, hence Test Driven Development.
What you're likely looking for is a test-writing strategy. Having been on numerous big projects that implemented testing later on, here are some tips:
Start small. Don't try to get 100% coverage on the entire project, choose one class or one set of functions and begin there.
The impulse will be to start with your smallest/simplest class/functions just to get some quick wins and have code coverage at 100% for one file. Don't.
Instead, as you get bug reports and fix them, write tests to support yourself. It becomes an incredibly effective exercise: write a test that demonstrates the bug, fix the bug, run the test, and no longer see the bug.
As you get new feature requests, write tests to support yourself. It's the same effect as fixing bugs.
Once you have a few tests written, run them continuously. There's no point to having tests if they're all broken and/or not being run.
True, you don't end up with 100% code coverage, but you can steadily and regularly apply more and more tests, and after a dozen bugs or so you'll have quite a few useful tests. More importantly, you'll get more and more tests in the "worst" or most problematic areas of the system. We're doing this now with web2project: http://caseysoftware.com/blog/unit-testing-strategy-web2project.
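As a small sketch of that bug-report-driven approach (the parse_price function and the bug itself are hypothetical, not taken from web2project): the test is written to reproduce the reported behaviour first, and it stays in the suite afterwards as a regression guard.

    # Hypothetical bug report: parse_price("1,200.50") returned 1.2 instead of
    # 1200.50. Write the test first, watch it fail, then fix the function; the
    # test then stays green on every future run.
    import unittest

    def parse_price(text):
        # The fix: strip thousands separators before converting.
        return float(text.replace(",", ""))

    class ParsePriceBugTest(unittest.TestCase):
        def test_thousands_separator(self):
            self.assertEqual(parse_price("1,200.50"), 1200.50)

    if __name__ == "__main__":
        unittest.main()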
In traditional development (non TDD), between having a functional spec and writing the first line of code there is a lot of designing involved. You would still need to go through this process and this definitely depends on the skill level of the team. The only thing that would be different in TDD is to write the test cases just before writing the code. You obviously cannot write the tests without knowing what classes/interfaces are involved.

How Much Time Should be Allotted for Testing & Bug Fixing [closed]

Every time I have to estimate time for a project (or review someone else's estimate), time is allotted for testing/bug fixing that will be done between the alpha and production releases. I know very well that estimating so far into the future regarding a problem-set of unknown size is not a good recipe for a successful estimate. However for a variety of reasons, a defined number of hours invariably gets assigned at the outset to this segment of work. And the farther off this initial estimate is from the real, final value, the more grief those involved with the debugging will have to take later on when they go "over" the estimate.
So my question is: what is the best strategy you have seen with regards to making estimates like this? A flat percentage of the overall dev estimate? Set number of hours (with the expectation that it will go up)? Something else?
Something else to consider: how would you answer this differently if the client is responsible for testing (as opposed to internal QA) and you have to assign an amount of time for responding to the bugs that they may or may not find (so you need to figure out time estimates for bug fixing but not for testing)?
It really depends on a lot of factors. To mention but a few: the development methodology you are using, the amount of testing resource you have, the number of developers available at this stage in the project (many project managers will move people onto something new at the end).
As Rob Rolnick says, 1:1 is a good rule of thumb; however, in cases where a specification is bad, the client may push for "bugs" which are actually badly specified features. I was recently involved in a project which used many releases, but more time was spent on bug fixing than on actual development, due to the terrible specification.
Ensure a good specification/design and your testing/bug-fixing time will be reduced, because it will be easier for testers to see what to test and how, and clients will have less leeway to push for extra features.
Maybe I just write buggy code, but I like having a 1:1 ratio between dev and test. I don't wait until alpha to test, but rather do it throughout the whole project. The logic? Depending on your release schedule, there could be a good deal of time between when development starts and your alpha, beta, and ship dates. Furthermore, the earlier you catch bugs, the easier (and cheaper) they are to fix.
A good tester, who finds bugs soon after each check-in, is invaluable. (Or, better yet, before a check-in from a PR or DPK.) Simply put, I am still extremely familiar with my code, so most bug fixes become super simple. With this approach, I tend to leave roughly 15% of my dev time for bug fixing, at least when I do estimates. So in a 16-week run I'd leave around 2-3 weeks.
Only a good amount of accumulated statistics from previous projects can help you give precise estimates. If you have a well-defined set of requirements, you can make a rough calculation of how many use cases you have. As I said, you need to have some statistics for your team. You need to know the average bugs-per-LOC number to estimate the total bug count. If you don't have such numbers for your team, you can use industry averages. After you have estimated the LOC (number of use cases * NLOC per use case) and the average bugs per line, you can give a more or less accurate estimate of the time required to release the project.
From my practical experience, time spent on bug fixing is equal to or greater than (in 99% of cases :) ) the time spent on the original implementation.
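A small worked example of that calculation; every figure below is a made-up assumption, so substitute your own team's (or industry) statistics.

    # Estimate bug-fixing effort from use cases and historical defect density.
    use_cases = 40
    loc_per_use_case = 250             # NLOC per use case, from past projects
    bugs_per_kloc = 15                 # team's historical defect density
    hours_per_bug_fix = 4              # average time to find and fix one bug

    estimated_loc = use_cases * loc_per_use_case            # 10,000 LOC
    estimated_bugs = estimated_loc / 1000 * bugs_per_kloc   # 150 bugs
    bug_fixing_hours = estimated_bugs * hours_per_bug_fix   # 600 hours
    print(estimated_loc, estimated_bugs, bug_fixing_hours)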
From the testing Bible, Testing Computer Software, p. 31: "Testing [...] accounts for 45% of initial development of a product." A good rule of thumb is thus to allocate about half of your total effort to testing during initial development.
Use a language with Design-by-Contract or "Code-contracts" (preconditions, check assertions, post-conditions, class-invariants, etc) to get "testing" as close to your classes and class features (methods and properties) as possible. Then use TDD to test your code with its contracts.
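A rough flavour of that contract idea in Python (the answer has Eiffel's require/ensure/invariant clauses in mind; the Account class and its numbers here are made up for illustration): the checks live on the class itself, so any test that drives the class also exercises its contracts.

    # Design-by-Contract style checks expressed as assertions.
    class Account:
        def __init__(self, balance=0.0):
            self.balance = balance
            self._check_invariant()

        def withdraw(self, amount):
            assert amount > 0, "precondition: amount must be positive"
            assert amount <= self.balance, "precondition: sufficient funds"
            old_balance = self.balance
            self.balance -= amount
            assert self.balance == old_balance - amount, "postcondition"
            self._check_invariant()

        def _check_invariant(self):
            assert self.balance >= 0, "class invariant: balance never negative"

    if __name__ == "__main__":
        acct = Account(100.0)
        acct.withdraw(30.0)
        print(acct.balance)   # 70.0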
Use as much self-built code-generation as you possibly can. Generated code is proven, predictable, easier to debug, and easier/faster to fix than all-hand-coded code. Why write what you can generate? However, do not use OPG (other-peoples-generators)! Code YOU generate is code you control and know.
You can expect the ratio to invert over the course of your project: you will write lots of hand-code and contracts at the start (1:1). As you see patterns, teach a code generator YOU WRITE to generate the code for you and reuse it. The more you generate, the less you design, write, debug, and test. By the end of the project, you will find that the equation has inverted: you're writing less of your core code, and your focus shifts to your "leaf code" (the last mile) or specialized (vs. generalized and generated) code.
Finally--get a code analyzer. A good, automated code analysis rule system and engine will save you oodles of time finding "stupid-bugs" because there are well-known gotchas in how people write code in particular languages. In Eiffel, we now have Eiffel Inspector, where we not only use the 90+ rules coming with it, but are learning to write our own rules for our own discovered "gotchas". Such analyzers not only save you in terms of bugs, but enhance your design--even GREEN programmers "get it" rather quickly and stop making rookie mistakes earlier and learn faster!
The rule of thumb for rewriting existing systems is this: "If it took 10 years to write, it will take 10 years to re-write." In our case, using Eiffel, Design-by-Contract, Code Analysis, and Code Generation, we have re-written a 14 year system in 4 years and will fully deliver in 4 1/2. The new system is about 4x to 5x more complex than the old system, so this is saying a lot!
