What are the differences between using a Visual Studio-integrated tool like TestDriven.NET and using a GUI test runner like Icarus or the NUnit GUI?
What do you prefer and why?
So far I've found that reports are better in Icarus than in TestDriven.NET, which only offers command-line output.
However, TestDriven.NET is faster to use: I can execute single tests more easily, without having to uncheck the rest first. The NCover integration is also very nice.
Icarus has one great feature that keeps me using it: the automatic reload and rerun of tests.
I keep Icarus hovering over on the left-hand monitor. Each time I build in Visual Studio, Icarus reloads the assemblies and runs all the tests. It's sort of like the instant feedback of ReSharper's Solution Analysis, except for tests instead of syntax. Running the tests is automatic and doesn't seem to affect the performance of Visual Studio (likely because Icarus is its own process, not hosted inside the IDE).
To enable this configuration, go to Icarus -> Options -> Test Explorer and check 'Always reload files' and 'Run tests after reload'.
Do you have a continuous integration server (like a build server, but one that also runs your unit tests)?
If so, you can set up Gallio to run your unit tests there and have all the reporting information in one place, while letting developers use something with faster feedback while they are working.
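As a rough illustration of the CI-server side, the build job mostly just needs to call Gallio's console runner and propagate its exit code. Below is a minimal sketch in Python, assuming the runner is Gallio.Echo.exe; the install path and the test assembly name are hypothetical, and report-generation switches are omitted because they vary by Gallio version.

```python
# Minimal sketch: have the CI job call Gallio's console runner and fail the
# build when any test fails. The install path and test assembly are hypothetical.
import subprocess
import sys

exit_code = subprocess.call([
    r"C:\Program Files\Gallio\bin\Gallio.Echo.exe",  # Gallio's console test runner
    r"build\MyApp.Tests.dll",                        # hypothetical test assembly
])

# Gallio.Echo exits non-zero when tests fail, so propagating the code lets the
# CI server mark the build red and pick up whatever reports you configure.
sys.exit(exit_code)
```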
If that's not an option, I prefer something that is integrated into the IDE, like TestDriven.NET. The immediate feedback is really helpful when refactoring a piece of code or developing something new under TDD. Besides, if you don't have sanity checking going on at a single point (like a CI server), you are going to want as many eyes as you can find on those unit tests. Developers tend to use whatever is easiest, and, generally, an integrated test suite is easier than a separate component.
Does Cypress have a browser recorder tool, the way Katalon Recorder does for Selenium? It seems that it might be quicker to write tests that way. It would also make tests easier for non-technical team members to maintain, and easier and quicker to set up and automate.
I recommend Preflight's Cypress recorder.
https://cypress.preflight.com
It creates very good CSS and XPath selectors. The scripts also adapt to UI changes.
Email, visual, SMS and PDF testing are also available.
I just tried email testing; it took me two minutes to create a Cypress script.
There are several browser plugins that allow recording Cypress commands, but none of them looks like a tool powerful enough to be compared to Katalon Recorder.
Besides that, there are a couple of solutions that I believe can compete with the Katalon one:
There is a Cypress Studio tool (though it's still marked as 'experimental'). It works inside the Cypress runner itself.
For the IntelliJ platform, there is a paid plugin, 'Cypress Pro', which integrates a recorder with the IDE. There is a tutorial video for it.
https://www.dakka.dev/ is a Chrome extension for generating tests for Cypress.io.
It also supports assertions and suggests element selectors that are closer to how we write end-to-end tests.
It also supports Playwright and Puppeteer.
Here's the direct link to the extension: https://chrome.google.com/webstore/detail/dakka/gllikifiancbeplnkdnpnmmhhlncghej
No, there is no capture-and-replay capability built into Cypress. However, you should keep in mind that the captured, auto-generated tests in Katalon can become messy and hard to maintain.
It is not as powerful as Katalon, but there are Chrome extensions which can record actions and generate Cypress code. They can be useful for getting started with tests, although you'll still need to write the assertions yourself.
https://chrome.google.com/webstore/detail/cypress-recorder/glcapdcacdfkokcmicllhcjigeodacab
https://github.com/KabaLabs/Cypress-Recorder
Until someone (Cypress?) makes a more complete recorder/code generator, I'm confident recommending it, but as @Ivan said, it's not as complex or feature-rich as Katalon.
I'm working as the sole developer for a company, and also have a consultancy where I periodically do contract work. In both of these areas, I am typically the only developer on each project.
I currently do TDD as much as it makes sense to do so, so most of my software has decent test coverage. Here's what I want to know from you:
Does it make sense to implement continuous integration in a single-developer environment? If so, why? If not, why not?
I like the idea of continuous integration, but, short of working on a project with at least one more developer, I don't really see the point - or am I missing the point entirely?
Thanks,
Joe
While continuous integration is a great teamwork tool, I believe it's also a good fit in your case.
It's true that you won't be verifying that your software pieces work nicely with other people's (you could try practicing multiple personalities... gollum! gollum! we're going to destroy this branch... NO, the developer is a good guy...!), but think about automatic test runs and deployments from a dedicated machine.
That is the strong point: you keep developing while another machine executes the tests and deploys your latest change. It can also help (or force) you to develop with a certain self-discipline.
In my previous position I was the single QA person responsible for a project with 15 developers. I had a similar approach to yours: TDD. And I still prefer CI, because testing is more than unit tests passing. If I can give you one piece of advice: just adopt whatever is useful for you from the CI process. After all, CI was originally intended to be used in combination with automated unit tests written through the practices of test-driven development. But if we're going to talk about serious testing, CI is a mandatory process. Why? Mostly because it allows you to run your unit tests automatically on a regular basis and get a results report. If I were you, I'd keep these in mind first:
make the build self-testing (after the build, all tests should run to confirm the expected behavior; see the sketch after this list)
use a separate CI environment; it will help when you have a large library of tests and don't want to interrupt your development process
use staging builds
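For the first point, a self-testing build can be as small as a script that refuses to report success unless every test passes. Here is a minimal sketch in Python, assuming an MSBuild solution and the NUnit console runner; all paths and names are hypothetical.

```python
# Minimal sketch of a self-testing build: compile, then run the whole test
# suite, and fail the build if either step fails. Paths are hypothetical.
import subprocess
import sys

def run(cmd):
    print("running:", " ".join(cmd))
    return subprocess.call(cmd)

# 1. Compile the solution.
if run(["msbuild", r"src\MyApp.sln", "/p:Configuration=Release"]) != 0:
    sys.exit("compilation failed")

# 2. Run the tests; the console runner exits non-zero on any failing test.
if run([r"tools\nunit-console.exe",
        r"src\MyApp.Tests\bin\Release\MyApp.Tests.dll"]) != 0:
    sys.exit("tests failed")

print("build is green")
```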
Update:
If you find it useful, feel free to use my CI implementation for .NET 4.5.
Cheers
I'm looking to enhance our current test suites and continuous integration builds with full-stack integration/acceptance testing.
I'm looking into tools like Culerity and Selenium that can execute front-end JavaScript while running user stories. I'm looking for something that can provide coverage of front-end JavaScript and high-level features without sucking up tons of development time maintaining a complex test environment. We're currently using RSpec, Cucumber, and CruiseControl.rb, so easy integration with those tools would be ideal.
Are any of the headless browsers and JavaScript-capable test environments at a point where they are worth the trouble of setting up and maintaining? What are the best options you've come across, and what pitfalls should be avoided?
Thanks.
You sound like you are way further down this road than I am, but I'll comment anyway.
I am working on a JavaScript project (with a Java + MySQL back end) and decided to use Selenium for testing, trying to achieve as thorough coverage as I could. I also poked around with a few other testing tools, but I can't say I really got to know any of them. None of them appeared, from their web sites, to be as polished or popular as Selenium. I am planning to integrate with CruiseControl eventually, but haven't done so yet.
This has been an interesting project, and at the end of the day I am quite happy with Selenium. Selenium plusses:
Test 'scripts' can all be written in Java; no obscure scripting language is involved. Among other things, you can easily manipulate and verify the data in your database before and after tests.
Selenium also supports Perl, C#, etc., I think, although that is of no interest to me.
Selenium IDE is a great tool for quickly understanding how Selenium works, how locators work, etc. You don't want to actually run tests long-term using the IDE, but it's great for getting your feet wet and for figuring things out as you go.
Selenium seems to work flawlessly with JUnit. Probably with TestNG as well, but I have not tried that yet; it's on my to-do list.
Excellent documentation and web site.
Minuses:
I spent a LOT of time figuring out how to locate elements in all cases. This is partially the 'fault' of the framework I am using (ExtJS), not Selenium.
It seems that no matter what you do, Selenium has timing dependencies, i.e. places where you have to inject artificial pauses to make it work (see the generic polling sketch after this list).
There are also monitor-size dependencies in my tests. I think this is extremely undesirable, but in some places it seems to be unavoidable. Basically, this is because there are many element types that JavaScript won't let you click on programmatically.
Related to the previous point, in places I am forced to drive the mouse. That means you have to have a dedicated test PC, which is no big deal, but doesn't seem right.
Tests are slow, mainly due to the time it takes Selenium to launch Firefox. No doubt this is partially my environment, and I suspect I could do lots of things to improve it. However, it is really noticeable, and it's not obvious why. It takes about 10 minutes to run about 40 tests.
The support forum is very spotty. Well, you get what you pay for. But time and again I found someone had posted about my problem, and the post was either ignored or an invalid solution was offered with no follow-up when the OP pointed out that the suggestion was bogus.
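The usual way around hard-coded pauses is to poll for the condition you actually care about, with a timeout, instead of sleeping a fixed amount. The answer above drives Selenium from Java, but the pattern is language-neutral; here is a generic sketch in Python, with a purely illustrative condition.

```python
# Generic polling wait: retry a condition until it holds or a timeout expires,
# instead of injecting fixed sleeps. The condition is illustrative; in practice
# it would ask the browser whether an element is present or visible.
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` (a no-argument callable) until it returns True."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)

# Hypothetical usage: wait for the save button to exist before clicking it.
# wait_until(lambda: page_has_element("save-button"))
```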
HTH, cheers.
We're currently not applying the automated building and testing of continuous integration in our project. We haven't bothered so far because there are only two developers working on it, but even with a team of two I still think it would be valuable to use continuous integration and get confirmation that our builds don't break and our tests don't start failing.
We're using .NET with C# and WPF. We have created Python scripts for building the application - using MSBuild - and for running all the tests. Our source is in SVN.
What would be the best approach to applying continuous integration with this setup? Which tool should we get? It should be one that doesn't require a lot of setup. Simple procedures to get started and little maintenance are a must.
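Whichever tool is chosen, it essentially needs a single command it can run on every check-in (or nightly) and judge by its exit code. Here is a minimal sketch of such a wrapper around the existing scripts; build.py and run_tests.py are hypothetical names, and most CI servers will do the SVN checkout themselves, making that step optional.

```python
# Minimal sketch of the one command a CI server (TeamCity, CruiseControl.NET,
# Hudson, ...) could be pointed at: update the working copy, run the existing
# build and test scripts, and report success or failure via the exit code.
import subprocess
import sys

STEPS = [
    ["svn", "update"],                 # usually handled by the CI tool itself
    [sys.executable, "build.py"],      # hypothetical MSBuild-driving build script
    [sys.executable, "run_tests.py"],  # hypothetical test-running script
]

for step in STEPS:
    if subprocess.call(step) != 0:
        # A non-zero exit code is what lets the CI tool mark the build as
        # broken and send its failure notifications.
        sys.exit(1)

print("update, build and tests all succeeded")
```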
Have a look at JetBrains' TeamCity. Free for a small team like yours. Easy to install and minimal fuss. And it looks good. Far better than CruiseControl.NET.
CruiseControl.NET is good too, but definitely requires more work to get set up.
I've been using Hudson (open-source software) and found it really flexible. It's more popular in the Java community, but there are MSBuild and MSTest plug-ins available. Hudson also makes it easy to schedule builds or run builds when changes are checked into SVN. I found this blog very useful as a starting point.
CruiseControl.NET
Try Cruise (http://www.thoughtworks-studios.com/cruise-release-management), the rewrite of CruiseControl.NET by ThoughtWorks. It's very sexy, much easier to get going, and very nice to use. Great feedback too. And it's free for teams of fewer than 10.
Even with two developers it's a great tool to have, and once you've done it once it's much easier to set up other projects. Having it build fresh from SVN when you check in and then tell you everything is OK is a really nice feeling that's easy to get used to.
Allow a good two days, though, for any build system to be wired up properly - that's not installing, that's just getting everything wired up as it should be. The trick is to take baby steps: get it checking out your code, then add more and more layers as you go. Once you have a base set up, you can add the other bells and whistles when you get time, until after a week or two you have the whole thing singing and dancing. It sounds like a lot of work, but it's well worth it.
We need to compile our code after check-ins, be notified if compilation fails, run tests, be notified of the test results, and publish our application (publish a website or create an MSI file for a desktop app) on a daily basis.
We are using SVN and were considering TeamCity or CruiseControl.NET as the continuous integration server for our .NET projects, which have msunit tests.
My project manager came up with HP Quality Center and QuickTest Professional (they have already been purchased) and suggested using them for issue tracking (currently we are using Jira) and continuous integration.
Does it make sense?
No. I'm using it now at a client, and I hate it. It does not support non-Microsoft browsers (it relies on ActiveX, etc.), so on OS X we are stuck with VMs and the like. Moreover, the interface is quite clunky and slow. It's ancient, horrible legacy tech. There are much better options.
We have lots of customers who integrate QC defect and test tracking into a pipelined continuous integration setup. But QC is not driving the process; it's being integrated into the CI and CD process.
We use QC to run what are called Test Sets, and we have been very successful running things in this manner. You can use QC to notify you of a failed execution, which would of course notify you if something did not compile on QTP's end. You can also set up other QTP and LoadRunner scripts to run if a script fails.
Not a good idea. I've done a proof of concept with QC and Borland tools (for HP), and although it is possible, there are too many areas where the synchronization has to be perfect, and the slow response time of QC (sometimes due to the network, etc.) makes triggering the right file, getting the compilation result, and publishing a bit shaky. Again, technically, via the API it is completely feasible.