I want to implement unit testing in my Xcode project, and would like to run tests without requiring the application to be started.
The reasons: I have a Core Data based document app that also uses a CVDisplayLink to drive continuous rendering on a background thread.
It strikes me that I do not need a running application to test Core Data data-model functionality; that should be distinct from the view stuff anyway. I would also like to isolate and performance-test my background rendering processes, something that seems very difficult with the app running but would be easy without it: just instantiate the right classes and feed them the correct data.
I've seen other questions that have answers for Xcode versions before six, but the answers don't seem to work for the current version.
The docs now make a distinction between application and library tests. Library tests are run against library targets.
I'm not sure I want to reorganise my code into distinct libraries at the moment, and would prefer to avoid it or fake it somehow.
I saw somewhere an Open Radar relating to this on iOS, but I'm interested in OS X.
Has anyone any insight into this?
EDIT: Learning to cope with the existing setup for now: testing with the full app running, I can run some checks on that, then close all documents and shut down the display link.
I can then run tests that create my own persistent store coordinator, in-memory data store, and context, as well as test my rendering classes without fear of conflict with the other display thread.
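For reference, a minimal sketch of the kind of in-memory stack I mean, in a test case (the "Item" entity and its "name" attribute are placeholders for whatever is in your model):

    import XCTest
    import CoreData

    final class DataModelTests: XCTestCase {
        var context: NSManagedObjectContext!

        override func setUp() {
            super.setUp()
            // Merge the app's model and back it with an in-memory store so
            // the tests never touch the documents the app has open on disk.
            let model = NSManagedObjectModel.mergedModel(from: [Bundle.main])!
            let psc = NSPersistentStoreCoordinator(managedObjectModel: model)
            _ = try! psc.addPersistentStore(ofType: NSInMemoryStoreType,
                                            configurationName: nil,
                                            at: nil, options: nil)
            context = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
            context.persistentStoreCoordinator = psc
        }

        func testInsertAndFetch() throws {
            // "Item" and "name" are hypothetical; substitute your own entity.
            let item = NSEntityDescription.insertNewObject(forEntityName: "Item",
                                                           into: context)
            item.setValue("example", forKey: "name")
            try context.save()
            let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
            XCTAssertEqual(try context.fetch(request).count, 1)
        }
    }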
I'm now running into trouble with linking sources; I just can't seem to get it right. I fiddle with settings and it seems to work for a bit, then suddenly stops building again with "Undefined symbols for architecture x86_64" errors, or with problems linking against third-party private frameworks. I look through the web, change a few things, and it starts working again. Then I add some tests, import more of my classes, and things stop working again... Infuriating.
EDIT 2: Pretty much all sorted now, though maybe not terribly efficiently. For each test case class, I either open or close documents and start or stop the display link in the + (void)setUp method. I don't do anything in + (void)tearDown, and let the setup decide how to proceed based on the current state.
Although this means it's possible to flow from one test class to another while minimizing document opens and closes, there doesn't seem to be a way to order the tests so that I could group them together.
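Roughly, each test class's setup looks like this sketch (TestWorld and its flags are hypothetical stand-ins for the app state I actually consult):

    import XCTest

    // Hypothetical tracker for the app-wide state the tests care about.
    final class TestWorld {
        static let shared = TestWorld()
        var displayLinkRunning = true
        var hasOpenDocuments = true
        func stopDisplayLink() { displayLinkRunning = false }
        func closeAllDocuments() { hasOpenDocuments = false }
    }

    final class RenderingTests: XCTestCase {
        override class func setUp() {
            super.setUp()
            // Decide how to proceed from the current state rather than
            // unconditionally undoing everything in tearDown().
            if TestWorld.shared.displayLinkRunning {
                TestWorld.shared.stopDisplayLink()
            }
            if TestWorld.shared.hasOpenDocuments {
                TestWorld.shared.closeAllDocuments()
            }
        }
    }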
BTW, I also solved the linking troubles I mentioned (XCode 6 Testing Target Troubles); not really relevant to this question, though.
It sounds like you landed on the standard solution: give your app a way to tell when it's being stood up for testing rather than for use, and then have applicationDidFinishLaunching: skip your usual launch-time behaviors, leaving it to specific tests to provide any setup they need.
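One common way to do that detection (an assumption on my part, not something from the question; the environment-variable check is a well-known trick, and startDisplayLink/restoreOpenDocuments are hypothetical app methods):

    import Cocoa

    final class AppDelegate: NSObject, NSApplicationDelegate {
        func applicationDidFinishLaunching(_ notification: Notification) {
            // XCTest sets this environment variable when it hosts the app
            // for a test run.
            let isTestRun = ProcessInfo.processInfo
                .environment["XCTestConfigurationFilePath"] != nil
            guard !isTestRun else { return }  // tests provide their own setup

            startDisplayLink()       // hypothetical normal launch-time work
            restoreOpenDocuments()   // hypothetical
        }

        private func startDisplayLink() { /* start the CVDisplayLink */ }
        private func restoreOpenDocuments() { /* reopen documents */ }
    }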
You might benefit from creating multiple test suites to deal with different expected conditions, like all the tests that work around a specific document being open.
Related
I'm trying to start a .NET MAUI app following Microsoft's tutorial in their official docs.
I'm just opening the starter MAUI project (the built-in default one) and VS 2022 just won't have it. I get 40+ errors, most of them about missing references and duplicated classes/functions.
A glimpse of the errors I get:
I have already seen a post here about somewhat the same problem, but the solutions (restarting, and installing workloads from the CLI using dotnet workload install) just didn't work for me.
I haven't made any changes to the code whatsoever, so I really don't get what the problem is here.
Any help would be appreciated.
Edit 1:
The app does seem to work when I run the Android emulator… which makes it even weirder.
This seems to be a bug in the tooling at the moment. If you look at the errors, especially the ones in your screenshot, you can see that they talk about Android. If you widen the Project column a little, you will see the list of target platforms each error is about.
Because everything is in one project now, it gives errors about platform-specific code even though it is only looking at the one target it's building. In this case, maybe you were building for iOS, and it gives errors about not being able to find Android types. That would make sense, except that we shouldn't be seeing these errors in this case.
It's a bit hard to explain like this; I hope it makes sense.
Long story short: it's a bug, and it's being worked on. You should be able to ignore the errors and the app will still run, as you've already discovered yourself. They add a lot of noise, however, and if there is an actual error you will have to find it in this list and fix it.
When using AppCode, autocomplete will only show results directly related to my struct. This is a great feature, as it keeps everything very clean, and I know that accessing these properties isn't going to give me an error.
Except I don't like AppCode and its non-native-looking UI.
In Xcode it's quite different. Why am I getting flatMap, map, description, debugDescription?
Obviously if it's my own code, I know which properties/functions are okay to use, because I wrote them. I can just ignore the noise. But if I'm using someone else's library, this can slow things down, especially when I'm just guessing or trying to remember a function.
Is there a way to fix this — to have Xcode not show me functions/properties that I can't use?
Xcode always shows the default system objects' properties if the object/class is not found.
It's just the behavior of Xcode.
The situation is as follows:
I am trying to define a path along which a certain character travels in a game. This can be done by typing in all the coordinates for the path. However, that requires a lot of testing and recompiling as you view the path in the product and check whether it is what you wanted. This is very tedious.
Clearly there is no graphical interface built in for every purpose, and obviously not for this case. So I proceed to build another application, or another few files, that serves as a custom graphical interface for my path class: for editing paths by coordinates and changing them interactively.
This works, but such an application does not fit in the app, nor does it make sense as a standalone application just for programming one specific class. Additionally, if I wanted an application for each of my more complicated data structures, things would become very messy and hard to manage.
I recall that there is a playground feature in Swift. This is perfect for me, as it is interactive, and I am thinking:
Is there a way to program a playground-like application inside the same project?
(Since I demand that my programming be pretty:) can this be done without switching to a different project just for this purpose? Is there such a feature?
Equivalently, is there a way to program something that assists programming within Swift, such as an extension for Swift?
Again, I emphasize that this is needed only to save trouble and to make an application more self-contained.
Turns out to be a stupid question. There is obviously an option to add a new playground file to an existing project.
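In case it helps anyone: once the playground is in the project, something like this gives the interactive path editing described above (CharacterPath and PathView are illustrative; in practice the real path class would come from a framework target the playground can import):

    import AppKit
    import PlaygroundSupport
    // import MyGameFramework  // hypothetical framework holding the real path class

    // Illustrative stand-in for the path type being tuned.
    struct CharacterPath {
        var points: [CGPoint]
    }

    // Edit these coordinates and watch the live view update on each run.
    let path = CharacterPath(points: [CGPoint(x: 20, y: 20),
                                      CGPoint(x: 120, y: 90),
                                      CGPoint(x: 240, y: 40)])

    final class PathView: NSView {
        var points: [CGPoint] = [] { didSet { needsDisplay = true } }
        override func draw(_ dirtyRect: NSRect) {
            NSColor.windowBackgroundColor.setFill()
            dirtyRect.fill()
            guard let first = points.first else { return }
            let line = NSBezierPath()
            line.move(to: first)
            points.dropFirst().forEach { line.line(to: $0) }
            NSColor.systemBlue.setStroke()
            line.lineWidth = 2
            line.stroke()
        }
    }

    let view = PathView(frame: NSRect(x: 0, y: 0, width: 300, height: 120))
    view.points = path.points
    PlaygroundPage.current.liveView = view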
background
Over the past year or so I have designed many tools to help me program for XPages. These tools primarily include helper Java classes, extended logging (making use of OpenLogger and my own stuff), and a few other things that I personally feel I cannot work without. It has been discussed with my employer, and we feel that it might be a good idea to start publishing these items to OpenNTF. Since these tools are made up of about three .nsfs, all designed to use the same Java code, key JavaScript classes, CSS, and even a custom control or two, I would like to consolidate the key items into a plug-in that can be installed at the server and client level. I want to do this consolidation before I even think about publishing any of the work I've done so far; it would just be far too much to maintain otherwise, not just for me, but for potential users. I have not really found any information on how to do such a thing in Google searches. I also have to make sure that I am able to make use of the ExtLib libraries, the OpenNTF Domino API, and the Notes API.
my questions
How does one best go about designing such plug-ins? Must a designer use Eclipse, or is it possible to do this directly in the Notes Designer?
How does a designer best go about keeping a server and client up to date while designing and updating the plug-in code? Is this why GitHub is often used?
Where is the best place to get material to get started in this direction? I sort of feel lost in the woods, knowing I need to head north, but not having a compass for that first step.
Thank you very much for your input.
In my experience, I found that diving into plug-in development is a huge PITA until you get used to it, but it's definitely worth it overall.
As for whether you can use Designer for plugin development: yes, but you will likely eventually want to not do so. I started out by using Designer for this sort of thing for a while, presumably with the same sentiment as you: why bother installing another instance of Eclipse when I'm already sitting in one all day? However, between Designer's age (it's roughly equivalent to, I think, Eclipse 3.4), oddities when it comes to working sets between the "Applications" and "Project Explorer" views, and, in my case, my desire to use a Mac app, I ended up switching.
There are two major starting points: the XSP Starter Kit (http://www.openntf.org/internal/home.nsf/project.xsp?name=XSP%20Starter%20Kit) and Niklas Heidloff's video on setting up Eclipse for XPages development (http://www.openntf.org/main.nsf/blog.xsp?permaLink=NHEF-8RVB5H). The latter mentions the XPages SDK (http://www.openntf.org/internal/home.nsf/project.xsp?name=XPages%20SDK%20for%20Eclipse%20RCP), which is also useful. In my setup, I found the video largely useful, but some aspects either difficult to find (IBM's downloads are shifting sands) or optional (debugging, which will depend on whether or not you're using Eclipse on Windows).
Those resources should generally get you set up. The main thing to worry about when setting up your Eclipse environment will be making sure your Plug-In Execution Environment is properly done. If you're following the SDK setup instructions, that SHOULD get you where you need to be.
The next thing to know about is the way plugins are structured. Each plugin you want to install in Designer or Domino will also be paired with a feature project (a feature can house several plugins), and potentially an update site - the last one is optional if you just want to import the features into an Update Site NSF. That's how I often do my normal plugin development: export the paired feature to a directory and then import the feature into the server's Update Site NSF and then install in Designer from there using Application -> Install. You can also set things up so that you deploy into the server's plugin/feature directories instead of taking the step of installing into an update site if you'd prefer. GitHub doesn't really come into play for this aspect - it's more about sharing/collaborating with your code and also having a remote storage location for your git repositories (which I highly advise).
And as for the "lost in the woods" feeling: yep, you'll have that for a good while. There are lots of moving parts and esoteric concepts to get a hold of all at once. If you mostly follow the above links and then start with some basics from the XSP Starter Kit (which is itself a plugin project that you can pair with a feature) - say, printing text in the Activator class and making an implicit global variable just to make sure it works - that should help get your feet wet.
It's best done in Eclipse. You can debug your code running on the server from there, as well as run it directly from there. The editors are also more up-to-date. You want:
Eclipse for RCP and RAP developers
XPages SDK for Eclipse RCP (from OpenNTF)
XPages Debug Plugin (from OpenNTF - basically allows you to load the plugins to the Domino server dynamically, rather than exporting to an Update Site all the time)
XSP Starter Kit on OpenNTF is a good starting point for a plugin. There are various references to the library id, which has to be unique for your plugin. Basically, references to org.openntf.xsp.starter need changing to whatever you want to call your plugin. You're also best advised to remove what you don't need. I tend to work in a copy of the Starter, remove stuff, build and if there are errors with required classes (Activator.java obviously will be required and some others), then paste them back in from the Starter.
XPages OpenLog Logger is a good cross-reference; it was built from the XSP Starter Kit. It's pretty much stripped down, and you'll be able to see what had to be changed. A lot of the elements of the XSP Starter Kit correspond to Java classes you'll probably be familiar with from your XPages Java development.
GitHub etc. tend to be used for source control, which is useful for working out what's changed from time to time.
My goal is to be able to write core tests that I can use within a unit testing framework as well as for UI testing with Selenium.
For a simple test like:
Scenario: Add two numbers
Given I have entered 50 into the calculator
And I have entered 70 into the calculator
When I press add
Then the result should be 120
I would create unit tests to prove that my core API passes, as well as a Selenium test to prove that my UI is doing the correct thing too.
I briefly tried to find anyone doing something similar through Google, but couldn't find any examples. So I guess my question is, has anyone here done anything similar?
One approach I had thought of was simply adding the feature files to a common project or directory and using "Add Existing Item > Add as Link" as the solution.
Update: Adding the feature files to a common directory and adding them as links appears to be working great. The feature bindings regenerate for each project the feature file is included in, so I can run unit tests in one and Selenium UI tests in the other.
First, let's start with why you might want to do this. It's laziness of the good kind.
The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don't have to answer so many questions about it. Hence, the first great virtue of a programmer. Also hence, this book. See also impatience and hubris. (p.609)
Larry Wall, Programming Perl
Except it isn't, because we aren't going to reduce our overall energy expenditure.
When you are using SpecFlow, the easy part to keep up to date is the plain text. You will find yourself refactoring the [Binding]s again and again, but the scenarios tend to be quite easy to work with, and need very little revision once they have been agreed.
In addition, the [Binding]s are global: load them in from any assembly and they are available to the SpecFlow runner. For what you are trying, this actually makes things harder, as you need to put effort into keeping the UI bindings from being mixed up with the non-UI bindings.
Also consider the way that SpecFlow actually runs the tests from feature files. It's a two-stage process:
When you save the .feature file the SpecFlow VS plugin generates a .feature.cs file.
When you run your test engine (e.g. NUnit) it ignores the plain text and uses compiled code from .feature.cs
So if you start using linked .feature files, I have no idea whether the SpecFlow plugin will generate a .feature.cs for both instances of the file. (If you try this, please let us know.)
Second, let's consider the features themselves. I think you will constantly find yourself compromising your tests to make them fit the other place they are used. Already, in the example you have given, the wording implies a screen ("I press add"). If you are working with just the core API then there won't be a screen, so do we change this to fit better in a non-UI scenario?
Finally, you have another thing to consider: just how useful will your tests be? If you already have a test that exercises the core API, what will it mean to run the same test via Selenium? All you will really be testing is the UI layer. In my current employment we have a great number of regression tests that perform this very kind of testing, running up a client that connects to a server and manipulating the UI to enact the desired scenarios. These are the most fragile tests we have, due to their scale. They constantly break, and we basically have to check our entire codebase to find the line that broke them. Often something like 10-100 of them break for a single one-line change. If these tests weren't so important to the regression cycle, the effort of maintaining them would just be too much. In my own personal projects I tend to remove these tests completely; instead, with UIs, I avoid testing the view layer. With WPF MVVM, I execute Commands and test for results in ViewModels. If somebody then decides the TextBox should be a ComboBox, or that it will work better in mauve, my testing is isolated.
In short, there is a reason you can't find anything about this on Google :-)
In general (see http://martinfowler.com/bliki/TestPyramid.html), one should limit the number of automated tests that test the UI directly, and prefer tests that start at the presentation layer (just below the view layer), or below.
SpecFlow is agnostic; the tests can be implemented using e.g. Selenium at the UI layer or just MSTest or NUnit at any of the layers below.
However, having said that, I appreciate that you will have situations where you are doing ATDD and want to implement SpecFlow scenarios to match each of the acceptance criteria. Some of the criteria will be perfectly fine to test at a lower architectural level, but one or two of them may be specific to the GUI, for example testing login and ensuring that the user is redirected to the home page after a successful login. If you are using Angular 2 or React routing (see https://en.wikipedia.org/wiki/Single-page_application), that redirect is likely done in the GUI layer itself.
I don't have a perfect answer yet, but as a certified SpecFlow trainer, I have a vested interest in this! The way I am currently leaning is to use a complementary tool like CucumberJS for the front-end-specific tests (such as testing React router redirects) and SpecFlow for tests at lower architectural layers. Our front end uses Node.js/Express and our back end is .NET Core. The idea is that the front-end tests mostly exercise the front end only, with mocked-out AJAX calls to the back end (see Sinon.JS), and the back-end tests use EF Core with the in-memory option (see docs.efproject.net/en/latest/providers/in-memory/). So the tests all run fast.
Of course, you still need a few tests that actually go all the way through, but those are different: we should call those integration tests. I do not believe that acceptance tests need to be integration tests. That way, you have a suite of acceptance tests from doing ATDD, plus a relatively small set of integration tests that run all the way front to back. The integration tests run more slowly and require more maintenance, so you separate them out into a different part of the CI/CD build chain.
I hope this makes sense. It is not so much solving the problem as avoiding the problem.