How do you write UI tests for a command line app in Xcode?

I'm making a small Zork-type game, and I don't want to have to type my way through the whole thing to test the gameplay every time I change something. Is there a way to use UI testing to do it for me?
I've tried looking around, but everyone just talks about running UI tests from the command line, whereas I'd like to know how to write tests for a console app.

I don't know what your codebase looks like, but Xcode's UI testing is built around launching and driving an app's user interface, so it isn't a natural fit for a command-line target. Your best bet is probably to create test files that exercise the logic you want to test directly. To make that possible, keep all game logic input-independent, so that commands can be fed in either by your tests or by a user typing at the prompt.
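The pattern is language-agnostic, so here is a minimal sketch of what "input-independent" means in practice. It's written in TypeScript purely for illustration, and every name in it is hypothetical; the Swift version has the same shape. The engine consumes an abstract input source instead of reading stdin, so a test can script an entire playthrough:

// Hypothetical sketch: the engine never touches stdin directly.
interface InputSource {
  next(): string | undefined; // next player command, or undefined when done
}

class ScriptedInput implements InputSource {
  private i = 0;
  constructor(private commands: string[]) {}
  next(): string | undefined {
    return this.i < this.commands.length ? this.commands[this.i++] : undefined;
  }
}

class GameEngine {
  run(input: InputSource): string[] {
    const transcript: string[] = [];
    let cmd: string | undefined;
    while ((cmd = input.next()) !== undefined) {
      // Stand-in for your real parser and world model.
      transcript.push(cmd === "look" ? "You are in a dark room." : `You ${cmd}.`);
    }
    return transcript;
  }
}

// A test replays a whole session without anyone typing:
const transcript = new GameEngine().run(
  new ScriptedInput(["look", "go north", "take lamp"])
);
console.assert(transcript[0] === "You are in a dark room.");

In production you'd wire up an input source that reads from the terminal; in tests you hand the engine a scripted one and assert on the transcript.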

Related

Run all specs from command line in cypress

How can I run all specs from the command line in Cypress? I have 3 spec files which depend on each other, and the browser shouldn't reset after each test.
"But when you click on "Run all specs" button after cypress open, the Test Runner bundles and concatenates all specs together..."
I want to accomplish the exact same thing through the command line.
You might not like this answer, but you're running headfirst into a wall there.
One of the goals in pretty much any testing project is making your tests completely independent from one another, and there are plenty of reasons to do so, just a few being:
If one test fails, it doesn't break a whole chain of subsequent tests.
Similarly, changing or updating one test case doesn't break a chain.
You can run your tests in parallel, which is a serious point in any project that plans to scale.
As far as I know, this browser/runner reset after each spec file is desired behavior on Cypress's side, precisely to make parallelization possible (though I can't remember where I read that), so I don't think there's any workaround for your problem.
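For the literal command-line question, cypress run already executes every spec file in one session; each spec just starts with fresh browser state. If your specs depend on each other because of shared setup (a logged-in user, seeded data), the usual fix is to repeat that setup per spec in beforeEach rather than chaining specs. A rough sketch, with hypothetical URLs and selectors:

// Each spec provisions its own state, so it can run alone or in parallel.
describe("cart", () => {
  beforeEach(() => {
    // Repeat the prerequisites a "previous" spec used to provide.
    cy.visit("/login");
    cy.get("#email").type("user@example.com");
    cy.get("#password").type("secret{enter}");
  });

  it("adds an item", () => {
    cy.visit("/products/1");
    cy.contains("Add to cart").click();
    cy.get("[data-testid=cart-count]").should("have.text", "1");
  });
});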

Is there a recommended debugging strategy for E2E automation tests?

What is an elegant approach to debugging a large E2E test?
I am using the TestCafe automation framework, and I'm currently facing multiple flaky tests that need fixing.
The problem is that every time I modify something in the test code, I need to run the entire test from the start to see whether the change worked.
I would like to hear ideas about strategies for debugging an E2E test without losing your mind.
Current debug methods:
Using the built-in TestCafe debugging mechanism at the problematic area in the code, and trying to comment out everything before that line.
But that really doesn't feel like the best approach.
When there are prerequisite data such as user credentials, URLs, etc., I manually declare those again just before the debug() call.
PS: I know that tests should be as focused and as small as possible, but this is what we have now.
Thanks in advance
You can try using the flag
--debug-on-fail
This pauses the test when it fails and allows you to view the tested page and determine the cause of the failure.
Also, use test.only to specify that only a particular test or fixture should run while all others are skipped.
https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#--debug-on-fail
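Putting the two together, a debugging session might look like this (the fixture name, URL, and selectors below are hypothetical):

import { Selector } from "testcafe";

fixture("Checkout").page("https://example.com/checkout");

// .only narrows the run to this single test while you iterate on it.
test.only("applies a discount code", async t => {
  await t
    .typeText(Selector("#discount"), "SAVE10")
    // Pause here to inspect the live page state in the browser:
    .debug()
    .click(Selector("#apply"))
    .expect(Selector(".total").innerText).contains("90");
});

// Run with:  testcafe chrome checkout.ts --debug-on-fail
// so a failing assertion also pauses the page for inspection.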
You can use the takeScreenshot action to capture the state of the application at a given point during the test. TestCafe stores the screenshots inside the screenshots sub-directory and names each file with a timestamp. Alternatively, you can pass the takeOnFails option on the command line to capture the screen automatically whenever a test fails, i.e. at the exact point of failure.
Another option is to slow the test down so that it's easier to observe while it runs. You can adjust the speed with the --speed command-line flag: 1 is the fastest and 0.01 the slowest. You can also record the test run with the --video command-line flag, but you need FFmpeg set up for that.
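A sketch of that in practice (the URL, selectors, and output paths are hypothetical, and the exact screenshot-flag syntax varies between TestCafe versions, so check the docs for yours):

import { Selector } from "testcafe";

fixture("Signup").page("https://example.com/signup");

test("shows validation errors", async t => {
  await t.click(Selector("button[type=submit]"));
  // Capture the page state at exactly this step:
  await t.takeScreenshot();
  await t.expect(Selector(".error").exists).ok();
});

// Example invocation (half speed, screenshots and video enabled):
//   testcafe chrome signup.ts --screenshots screenshots --video videos --speed 0.5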

How to run Jasmine on multiple pages?

A lot of examples out there focus on writing tests, and pretty much everything is contained in one single HTML file.
I'm more interested in how to actually run Jasmine within a real-life app (which is not an SPA).
For the first test I included my SpecRunner.html at the very end of my app's framework:
include PATH_TOOLS.'/tests/jasmine/SpecRunner.html';
and this works: wherever I go in my app, I have the test results at the bottom of the page. Obviously, mixing test code with the framework's code like this is not the cleanest approach; every time I push my commits to the repository, I'd have to remove this line.
On the other hand, if I open SpecRunner.html directly, I cannot navigate to my app from there, unless the app were opened in an iframe, but is that common practice? I doubt it. I know I can always run Jasmine in the terminal, but I would prefer to see the results alongside my app.
Perhaps I can somehow run Jasmine from the command line and force it to open my app in a real browser, like Selenium does?

Visual Studio and Individual Unit Tests as Actual Admin Tools

This question may be a bit nebulous, so please bear with me.
I am using Visual Studio, and I am new to the entire realm of unit testing. One thing I do a lot, though, is use unit tests as a quick-and-dirty, ad-hoc "administration UI" when I need to just TRY things but don't have time to build an actual admin system.
What I mean is ... sometimes I just want to get some data thrown into my database to see how it looks on a page. So I'll make a dirty unit test...
[Fact]
public void install_some_test_data()
{
    using (var database = RavenDocumentStore())
    {
        using (var session = database.OpenSession())
        {
            // create some objects
            // add some objects
            // save some objects
        }
    }
}
Nowhere in here have I really cared about "testing"; I just like the fact that I can right-click, say "Run it", and it goes, without launching the program, without having to have a lot of interaction, etc.
So my question is this:
Is this okay to do? I realize this isn't how a program should be managed long-term, but I was scolded for doing this by another developer simply because I wanted to quickly show them how something they wanted to see worked. Is there really a problem with using these convenient tools as a simpler way of running commands against my database that gives me immediate feedback on whether they passed or failed?
My follow-up question is this: I have had to search pretty hard to find a unit test runner that lets me run individual methods instead of every single unit test in a class. Is this normal? This whole thing confuses me in general. Even when I do use tests for actual testing, why would I always want to run EVERY test every time? Isn't that extremely wasteful of time and resources? And how do you control the order they run in? That seems even more imperative when tests depend on some data existing before they can run.
At the moment I am using xUnit with the ReSharper test runner, but out of the box Visual Studio doesn't seem to let me run unit tests individually, just as huge batches in a single class.
Am I the only person confused by this?
Sure, it's actually very easy to execute a single unit test (no external test runner required).
Go to TEST -> Windows -> Test Explorer
The Test Explorer window opens and lists all the discovered tests in your solution.
Now you can right-click the test you want to execute and select "Run Selected Tests".
As to your other question: it's hard to tell exactly what you're asking. Are you demonstrating a proof of concept with a unit test? How does this test replace your admin panel?
Just put your cursor on a test method name, then go to Test -> Run Selected Scope (or similar; the name changes by context). That test should execute and also create a test list for you automatically.

Best way to test a Delphi application

I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI.
My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually), and used to work.
I have set up the application to use command-line switches to open a specific form to be tested, and I can create the set of values and clicks that need to be performed.
But I have a few questions before I do anything drastic... (and before purchasing anything)
Is it worth it?
Would this be a good way to test?
The results of the test should end up in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test DB, other than drop user cascade, create user, ..., impdp?
Is there a way in TestComplete to specify command-line parameters for an exe?
Does anybody have any similar experiences?
I would suggest you plan to use both DUnit and something like TestComplete, as they each serve a different purpose.
DUnit is great for Unit Testing, but is difficult to use for overall application testing and UI testing.
TestComplete is one of the few automated testing products that actually has support for Delphi, and our QA Engineer tells me that their support is very good.
Be aware though that setting up automated testing is a large and time-consuming job. If you rigorously apply unit testing and automated UI testing, you could easily end up with more test code than production code.
With a large (existing) application you're in a difficult situation with regards to implementing automated testing.
My recommendation is to set up Unit Testing first, in conjunction with an automated build server. Every time someone checks anything in to source control, the Unit Tests get run automatically. DO NOT try to set up unit tests for everything straight up - it's just too big an effort for an existing application. Just remember to create unit tests whenever you are adding new functionality, and whenever you are about to make changes. I also strongly suggest that whenever a bug is reported that you create a unit test that reproduces the bug BEFORE you fix it.
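That last point, a reproducing test written before the fix, is worth a tiny illustration. In this codebase it would be a DUnit test; the sketch below uses TypeScript and an invented parsePrice function purely to show the workflow:

import assert from "node:assert";

// Hypothetical bug report: parsePrice("1,50") returned 150 instead of 1.5,
// because the comma was stripped rather than treated as a decimal separator.
function parsePrice(s: string): number {
  // Fixed implementation; the buggy one did s.replace(",", "")
  return Number(s.replace(",", "."));
}

// This test was written first, while it still failed, so the bug
// is pinned down and can never silently return.
assert.strictEqual(parsePrice("1,50"), 1.5);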
I'm in a similar situation. (Large app with lots of dependencies). There is almost no automated testing. But there is a big wish to fix this issue. And that's why we are going to tackle some of the problems with each new release.
We are about to release the first version of the new product, and the first signs are good. But it was a lot of work, so for the next release we definitely need some way to automate the test process. That's why I'm already introducing unit tests. Although, due to the dependencies, these are not real unit tests, you have to start somewhere.
Things we have done:
Introduced a more OO approach, because a big part of the code was still procedural.
Moved stuff between files.
Eliminated dependencies where possible.
But there is far more on the list of requirements, ensuring enough work for the entire team until retirement.
And maybe I'm a bit strange, but cleaning up code can be fun. Refactoring without unit tests is a dangerous undertaking, especially if there are a lot of side effects. We used pair programming to avoid stupid mistakes. And lots of test sessions. But in the end we have cleaner code, and the amount of new bugs introduced was extremely low.
Oh, and be aware that this is an expensive process. It takes lots of time, and you have to fight the tendency to tackle more than one problem at a time.
I can't answer everything, as I've never used TestComplete, but I can answer some of those.
1 - Yes. Regression testing is worth it. It's quite embarrassing as a developer when the client comes back to you because you've broken something that used to work. It's always a good idea to make sure everything that used to work still does.
4 - Oracle has something called Flashback, which lets you create a restore point in the database. After you've done your testing you can just jump back to this restore point. You can script it too, e.g. FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2009-02-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS'); and so on.
We're looking at using VMWare to isolate some of our testing.
You can start from a saved snapshot, so you always have a consistent environment and local database state.
VMWare actions can be scripted, so you can automatically install your latest build from a network location, launch your tests and shut down afterwards.
Is it worth it?
Probably. Setting up and maintaining tests can be a big job, but when you have them, tests can be executed very easily and consistently. If your project is evolving, some kind of test suite is very helpful.
Would this be a good way to test?
I would say that a proper DUnit test suite is the better first step. However, if you have a large codebase that was not engineered for testability, setting up functional tests is an even bigger pain than setting up GUI tests.
The results of the test should end up in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
TestComplete has ADO and BDE interfaces. Alternatively, you can use the OLE interface from VBScript to access everything that's available.
Is there a way in TestComplete to specify command-line parameters for an exe?
Yes.
One way to introduce unit testing in an (old) application could be to have a "start database" (like the Flashback feature described by Rich Adams).
The program then runs unit tests that use DUnit to control the GUI.
See "GUI testing with DUnit" at http://delphixtreme.com/wordpress/?p=181
Every test run starts by restoring the "start database", so that a known set of data can be used.
I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test DB?
Use transactions: perform a rollback when the test completes. This reverts everything to the initial state.
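The shape of that pattern, sketched in TypeScript against a hypothetical database client (in Delphi you'd do the same with your connection component's transaction methods): every test body runs inside a transaction that is unconditionally rolled back. The caveat is that this only works if the code under test doesn't commit or manage its own transactions.

// Hypothetical minimal client interface; substitute your real one.
interface Db {
  query(sql: string): Promise<void>;
}

// Run a test body inside a transaction that is always rolled back,
// so the database returns to its initial state afterwards.
async function withRollback(db: Db, testBody: () => Promise<void>): Promise<void> {
  await db.query("BEGIN");
  try {
    await testBody();
  } finally {
    await db.query("ROLLBACK"); // undo everything the test changed
  }
}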
Recommended reading:
http://xunitpatterns.com/
