As I learned from the DevGuide, testing ReSharper plugins works as follows:
The plugin is loaded and a test input file is passed to it.
The plugin performs its actions on the passed file.
ReSharper's test environment writes the results of the plugin's actions to a .tmp file in a special format that depends on the type of functionality being tested (for example, if we test completion, the .tmp file will contain the list of generated completion items).
ReSharper's test environment compares the .tmp file with a .gold file to decide whether the test failed or succeeded.
But I need the following scenario. The first two steps are the same as above; then:
I write code that obtains the results of the plugin's actions and checks whether they are what I expect, so I can make the test fail if needed.
How can I achieve this?
I need this because I have code that uses the AST generated by ReSharper to build some graphs, and I want to test whether the graphs are built correctly.
Yes, you can do this. You need to create your own test base class, instead of using one of the provided ones.
There is a hierarchy of base classes, each adding extra functionality. Usually, you'll derive from something like QuickFixAvailabilityTestBase or QuickFixTestBase, which add the functionality for testing quick fixes. These are the classes that will do something and write the output to a .tmp file that is then compared to the .gold file.
These classes themselves derive from something like BaseTestWithSingleProject, which provides the functionality to set up an in-memory solution and project populated with the files you specify in your test, or BaseTestWithTextControl, which also gives you a text control for the file you're testing. If you derive from one of these classes directly (or via your own custom base class), you can perform the action you need for the actual test, and either assert something in memory or write the appropriate text to the .tmp file to compare against the .gold file.
You should override the DoTest method. This will give you an IProject that is already set up, and you can do whatever you need to in order to test your extension's functionality. You can use project.Solution.GetComponent<> to get at any shell or solution component, and use the ExecuteWithGold method to execute something, write to the .tmp file and have ReSharper compare to the .gold file for you.
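To make this concrete, here is a minimal sketch of such a base class. The exact namespaces and the DoTest signature vary between SDK versions, and IMyGraphBuilder with its BuildGraph method is a hypothetical stand-in for your own graph-building component:

using JetBrains.Lifetimes;
using JetBrains.ProjectModel;
using JetBrains.ReSharper.TestFramework;
using NUnit.Framework;

[TestFixture]
public class GraphBuilderTests : BaseTestWithSingleProject
{
    protected override string RelativeTestDataPath => @"GraphBuilder";

    // Older SDKs declare DoTest(IProject project) without the lifetime.
    protected override void DoTest(Lifetime lifetime, IProject project)
    {
        // Resolve any shell or solution component, as described above.
        var builder = project.GetSolution().GetComponent<IMyGraphBuilder>(); // hypothetical component

        // Either assert something in memory...
        var graph = builder.BuildGraph(project); // hypothetical API
        Assert.IsNotNull(graph, "expected a graph to be built");

        // ...or write the result to the .tmp file and let ReSharper
        // diff it against the .gold file.
        ExecuteWithGold(writer => writer.Write(graph.ToString()));
    }

    [Test]
    public void SimpleGraph() { DoTestSolution("SimpleGraph.cs"); }
}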
I want to ignore test files in my CodeQL results, but this query still includes them:
import codeql.ruby.AST
from RegExpLiteral t, File f
where not f.getBaseName().regexpMatch("spec")
select t
How can I ignore the test files in the result?
regexpMatch requires that the given pattern match the entire receiver. In your case that means it would only succeed if the file name is exactly "spec". You probably want to test for ".*spec.*" instead (or use matches("%spec%")).
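For example, a sketch of the corrected query; note that it also ties the file to the literal via its location, which the original from clause never did (f was unconstrained relative to t):

import codeql.ruby.AST

from RegExpLiteral t
where not t.getLocation().getFile().getBaseName().matches("%spec%")
select t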
I am not sure though if that answers your question. As far as I know there is in general no direct way to ignore test sources. You could however do one of the following things:
Exclude the test directory when building the CodeQL database; for GitHub code scanning see the documentation (a minimal config sketch follows this list)
For GitHub code scanning, filter out non-application-code alerts in the repository alerts list (see documentation)
Manually add conditions to your query which exclude tests, for example a file-name check as you have done, or checking the code for certain test-related constructs
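For the first option, GitHub code scanning reads a configuration file in which you can exclude paths. A minimal sketch, assuming the conventional config location and that your tests live under spec/:

# .github/codeql/codeql-config.yml, referenced from the codeql-action
# workflow via its config-file input
paths-ignore:
  - "spec/**"
  - "**/*_spec.rb"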
I have two Cucumber features (DeleteAccountingYear.feature and AddAccountingYear.feature).
How can I make DeleteAccountingYear.feature run before AddAccountingYear.feature?
I concur with @alannichols about tests being independent of each other. That's a fundamental aspect of an automation suite; otherwise we will end up with an unmaintainable, flaky test suite.
Needing to run a certain feature file before another appears to me like a test design issue.
Cucumber provides a few options to solve issues like this:
a) Is DeleteAccountingYear.feature really a feature of its own? If not, you can use Cucumber's Background: option. The steps provided in the background will be run before each scenario in that feature file. So your AddAccountingYear.feature will look like this:
Feature: AddingAccountingYear

  Background:
    Given I have deleted the accounting year

  Scenario: Add new accounting year
    Then I add a new accounting year
b) If DeleteAccountingYear.feature is indeed a feature of its own and needs to be in its own feature file, then you can use setup and teardown functions. In Cucumber this is achieved with hooks. You can tag AddAccountingYear.feature with a certain tag, say @doAfterDeleteAccountYear, and then do the required setup for that specific tag in a Before hook. The Before hook (for Ruby) will look like this:
Before('@doAfterDeleteAccountYear') do
  # Call the function to delete the accounting year
end
If the delete-accounting-year logic is written as a function, then the only thing required is to call that function in the Before hook. This way the code will be DRY-compliant as well.
If these options don't work for you, another way of forcing the order of execution is to use a batch/shell script. You can add an individual cucumber command for each feature, in the order you would like them to execute, and then just run the script. The downside is that a separate report will be generated for each feature file, and it is something I wouldn't recommend anyway, for the reasons mentioned above.
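A minimal sketch of such a script, assuming both features live directly under features/:

#!/bin/sh
# Each cucumber invocation runs separately, so each produces its own report.
cucumber features/DeleteAccountingYear.feature
cucumber features/AddAccountingYear.feature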
From Justin Ko's website (https://jkotests.wordpress.com/2013/08/22/specify-execution-order-of-cucumber-features/), the run order is determined in the following way:
Alphabetically by feature file directory
Alphabetically by feature file name
Order of scenarios within the feature file
So to run one feature before another, you could change the name of the feature file, or put it in a separate feature folder with a name that sorts alphabetically first, as sketched below.
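For example, a rename along these lines (hypothetical layout) makes the alphabetical order match the desired run order:

mv features/DeleteAccountingYear.feature features/01_DeleteAccountingYear.feature
mv features/AddAccountingYear.feature features/02_AddAccountingYear.feature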
However, it is good practice to make all of your tests independent of one another. One of the easiest ways to do this is to use mocks to create your data (i.e. the data you want to delete), but that isn't always an option. Another way would be to create the data you want to delete in the setup of the delete tests. The downside is the duplication of effort, but then it won't matter what order the tests run in. This may not be an issue now, but with a larger test suite and/or multiple coders using the test repo, it may be difficult to maintain test ordering based solely on alphabetical sorting.
Another option would be to combine the add and delete tests. This goes against the general rule that one test should test one thing but is often a pragmatic approach if your tests take a long time to run and adding the add data step to the set up for delete would add a lot of time to your test suite.
Edit: after reading that link to Justin Ko's site, you can specify the features to run when you invoke cucumber, and it will run them in the order you give them. For any features whose order you don't care about, just put the whole feature folder at the end, and cucumber will run through them, skipping any that have already been run. Copy-paste example from the link above:
cucumber features\folder2\another.feature features\folder1\some.feature features
I have a class in Xcode with its interface written in testclass.h and its implementation written in testclass.m. I wish that when I update a declaration in testclass.m, its counterpart in testclass.h could be updated automatically.
For example, I have the following method declared in both testclass.h and testclass.m:
-(void)testfunction
Then, for some reason, I rename it in testclass.m to:
-(void)another_test_function
If I want this code to compile, I need to manually change the declaration in the header. I'm very new to programming, but I can imagine how frustrating it must be to modify something in a big program with a lot of different files invoking the renamed method. I wish Xcode could detect this change and update the declaration in the header file to -(void)another_test_function automatically.
Is there any way to do that? All I found by searching the internet is that you can use a shortcut to "Edit All in Scope", but this only affects occurrences in the same file, not the header file.
Right-click the method name you would like to change (in either the header or the implementation file) and then select Refactor > Rename. You can then change the name of the method, and Xcode will show you what it will change.
If that looks good, you can accept the changes and you're done.
Can you include expressions in the "Output Files" section of a build rule in Xcode? E.g.:
$(DERIVED_FILE_DIR)$(echo "/dynamic/dir")/$(INPUT_FILE_BASE).m
Specifically, when translating Java files with j2objc, the resulting files are saved in subfolders based on the Java packages (e.g. $(DERIVED_FILE_DIR)/com/google/Class.[hm]). This is without using --no-package-directories, which I can't use because of duplicate file names in different packages.
The issue is in Output Files: Xcode doesn't know to look for the output file at the correct location. The default location is $(DERIVED_FILE_DIR)/$(INPUT_FILE_BASE).m, but I need to perform a string substitution to insert the correct path. However, any expression added as $(expression) gets ignored, as if it were never there.
I also tried to export a variable from the custom script and use it in Output Files, but that doesn't work either, because the Output Files are transformed into SCRIPT_OUTPUT_FILE_X variables before the custom script is run.
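For illustration, a simplified sketch of the rule script (the j2objc -d flag is real; the rest is schematic). By the time it runs, Xcode has already fixed the output paths and exported them as SCRIPT_OUTPUT_FILE_0, SCRIPT_OUTPUT_FILE_1, and so on, so nothing computed here can feed back into them:

# Build-rule script: output paths were finalized before this script started.
echo "input:   ${INPUT_FILE_PATH}"
echo "output0: ${SCRIPT_OUTPUT_FILE_0}"
# -d sets the output root; j2objc creates package subdirectories beneath it.
j2objc -d "${DERIVED_FILE_DIR}" "${INPUT_FILE_PATH}"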
Unfortunately, Xcode's build support is pretty primitive (compared to, say, make, which is thirty-odd years older :-). One option to try is splitting the Java source, so that the two classes with the same names are in different sub-projects. If you then use different prefixes for each sub-project, the names will be disambiguated.
A more fragile, but maybe simpler, approach is to define a separate rule for one of the two classes, so that it can have a unique prefix assigned (see the sketch below). Then add an early build phase to translate it before any other Java classes, so the rules don't overlap.
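A sketch of what that separate rule's script might invoke, assuming j2objc's --prefix option (which maps a Java package to an Objective-C class prefix) and a hypothetical package name:

# Rule matching only the conflicting class: give its package a unique prefix.
j2objc -d "${DERIVED_FILE_DIR}" --prefix com.example.dupes=XYZ "${INPUT_FILE_PATH}"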
For me, the second alternative does work (Xcode 7.3.x) - to a point.
My rule is not for Java but for Google Protobuf, and I tried to maintain the same hierarchy in the generated code (like your Java package hierarchy) as in the source .proto files. Indeed, files (.pb.cc and .pb.h) were created as expected, with their hierarchies, inside the Build/Intermediates/myProject.build/Debug/DerivedSources directory.
However, Xcode usually knows to continue and compile the generated output into the current target, and that breaks here, as it only looks for files directly in ${DERIVED_FILE_DIR}, not within sub-directories underneath.
Could you please explain "Output Files are transformed into SCRIPT_OUTPUT_FILE_X" in more detail? I do not understand.
I am working on a module that supplies methods for navigating directories and manipulating files. Basically it will be a combination of the Dir and File classes, with options specific to the needs of a project I'm working on.
Right now I have started writing tests for some of these methods and things are getting messy.
Example
One of the methods I have is a tree function that returns a hash of files and folders, where you can pass options like tree(only: 'folders', limit: 3). In order to test that it only goes down 3 levels, I would have to have 4+ levels of nested subfolders with dummy files in them.
The Problem
Right now I'm testing on folders outside the project since the subfolders are already there, but I want to move away from this, especially considering the implausibility of testing on system files once I start testing methods equivalent to rm -rf (as well as the lack of portability).
I'm starting to think that I need to create a "lab rat" type folder that I do all my "experiments" on, but I have no clue how to approach creating it.
Do I create a function that creates the files?
Do I pull files and folders from another location?
Do I use some sort of "lorem ipsum" generator for file structures?
Do I make all these files and folders manually? (ugh)
Do I just mock and stub the hell out of everything and not actually create/delete the files and folders? (I don't see this happening)
So...
How would someone normally approach testing excessive amounts of file and folder manipulation?
I don't think you want to use mocks/stubs. The file system of your OS should be well tested and fast, so the benefit of mocks/stubs is minimal. Creating a mock/stub system increases the complexity without much benefit.
Here are my answers:
Do I create a function that creates the files?
Yes. You can create tests for these functions to make sure that they are correct. Instead of calling Dir and File directly, write helper functions that keep the code simple and readable (a sketch follows). Maybe you can even share the helper functions between the source and test code...
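For instance, a minimal sketch of such a helper (all names are hypothetical), built on Ruby's tmpdir so everything is cleaned up automatically:

require "tmpdir"
require "fileutils"

# Build a throwaway directory tree described by a hash, yield its root,
# and remove it when the block returns.
def with_tree(spec)
  Dir.mktmpdir do |root|
    create_entries(root, spec)
    yield root
  end
end

def create_entries(base, spec)
  spec.each do |name, contents|
    path = File.join(base, name)
    if contents.is_a?(Hash)   # a nested hash becomes a subdirectory
      FileUtils.mkdir_p(path)
      create_entries(path, contents)
    else                      # anything else becomes a file with that content
      File.write(path, contents.to_s)
    end
  end
end

# Four nested levels, enough to exercise tree(only: 'folders', limit: 3).
spec = { "a" => { "b" => { "c" => { "d" => {} } } }, "notes.txt" => "hi" }
with_tree(spec) do |root|
  puts Dir.glob("**/*", base: root) # inspect what was created
end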
Do I pull files and folders from another location?
Not sure what this is for...
Do I use some sort of "lorem ipsum" generator for file structures?
Yes, if you mean create functions that generate file structures.
Do I make all these files and folders manually(ugh)?
No.
Do I just mock and stub the hell out of everything and not actually create/delete the files and folders?(I don't see this happening)
No. One benefit of creating files/directories is that you can manually check what is going on and not be 100% dependent on the tests. This is actually a good approach because, without it, there could be a bug where both the source code and the test code are not doing what you expect, but you wouldn't know, because everything would seem to be working.