Dependent JvmModelInferrer invoked twice? - gradle

tldr;
When I build a project with DSL-B (that depends on DSL-A) the generateXtext gradle task executes the JvmModelInferrer of DSL-A too often.
Details
Here is a simple example to reproduce the issue: ex.xtext.twog
There are two Xtext projects:
DSL A: grammar a, which is independent
DSL B: grammar b, which references grammar a
and two demo projects, which use the Xtext projects:
demo/demoA: (uses DSL a) with this simple model def DefStr java.lang.String
demo/demoB: (uses DSL b) with this simple model use UseStrCls DefStr
I've added some debug-log messages to the JvmModelInferrers to see what's going on.
The xtext generation in demoB calls the AJvmModelInferrer 3 times (see build output):
:demoB:generateXtext
AJvmModelInferrer: infer definition=DefStr isPreIndexingPhase=true
AJvmModelInferrer: infer definition=DefStr isPreIndexingPhase=false
BJvmModelInferrer: infer use=UseStrCls isPreIndexingPhase=true
BJvmModelInferrer: infer use=UseStrCls isPreIndexingPhase=false
AJvmModelInferrer: infer definition=DefStr isPreIndexingPhase=false
Why is AJvmModelInferrer called again after BJvmModelInferrer?
Note: I could not find good docs or examples on how to use multiple grammars, so it's quite possible that I am doing something wrong in my setup. Here are the relevant parts:
BStandaloneSetup.createInjectorAndDoEMFRegistration() calls AStandaloneSetup.doSetup() (see the sketch below)
build.gradle of project demoA adds src/main/java as a resource dir, so that the Xtext model of A ends up in the jar file (and thus demoB can find it)
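For reference, the cross-DSL wiring usually looks roughly like the sketch below. This is only an illustration of the pattern described above; the class names (BStandaloneSetupGenerated etc.) follow the usual Xtext naming conventions and are assumptions, not the actual sources of ex.xtext.twog.

    import com.google.inject.Injector;

    // Sketch only: BStandaloneSetupGenerated is the setup class Xtext generates for DSL-B.
    public class BStandaloneSetup extends BStandaloneSetupGenerated {

        public static void doSetup() {
            new BStandaloneSetup().createInjectorAndDoEMFRegistration();
        }

        @Override
        public Injector createInjectorAndDoEMFRegistration() {
            // Register DSL-A first, so that B models can resolve cross-references into A models.
            AStandaloneSetup.doSetup();
            return super.createInjectorAndDoEMFRegistration();
        }
    }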

Related

Custom name for BOOST_DATA_TEST_CASE

Using googletest, you can name your parameterized tests based on the parameters, using the last argument of INSTANTIATE_TEST_SUITE_P.
Now I am using BOOST_DATA_TEST_CASE, and the tests are currently named _0, ..., _N, which makes them hard to distinguish. Is there any way the Boost tests can be named in a similar way to googletest's parameterized tests?

AutoIT Page/Window Object Model

I would like to ask if we can also achieve a Page/Window Object Model in AutoIt. The majority of my project assignments were on web automation, and I'm using Selenium WebDriver with a framework that uses the Page Object Model. Currently, I'm assigned to a project for GUI automation. I would like to implement this kind of approach in AutoIt too, if feasible, so that I can reuse the objects in other classes. We are planning to use AutoIt standalone. I noticed that in most of the examples available on the internet, the objects are created in each class/script.
Your insights are highly appreciated.
Thanks!
General:
That common approach of using the Page Object Model (POM) design pattern isn't really feasible in AutoIt. Of course you can create an object structure with AutoIt too, but the language wasn't intended for that. Anyway, some of the goals of POM can be achieved with the example test structure suggested below.
Please notice:
Since you don't provide enough information about your application under test (AUT), I will explain a basic structure. The implementation depends on your application (Swing/RCP, WinForms, etc.). It's also important which tool support you need for your page object recognition. Besides WinForms controls, which can be handled with the ControlCommand functions in AutoIt, a proper way is to use UIASpy or au3_uiautomation as helper tools.
UIASpy - UI Automation Spy Tool
au3_uiautomation
It's an advantage to know the POM structure in the context of Selenium. I usually include a test case description using behavior-driven development (BDD; Gherkin syntax with Cucumber or SpecFlow), but that will not be part of this example.
Example structure:
The structure consists of two applications under test, Calc and VlcPlayer. Both follow the common structure of PageObjects and Tests. You should try to divide your page objects (files) into subfolders to keep an overview. This substructure should be similar for the Tests folder/subfolders.
In the Tests area you could include several test stages or test categories, depending on your test goals (acceptance/UI tests, just functional smoke tests, and so on). It's also a good idea to control the execution order with a separate wrapper file, TestCaseExecutionOrder.au3. This should exist for each test category to avoid mixing them.
This wrapper .au3 file contains the function calls; it is the starting point that controls the processing.
Approach description:
TestCaseExecutionOrder.au3
Calls the functions which are the test cases in the subfolders (Menu, PlaylistContentArea, SideNavigation).
Test case NiceName consists of some test steps.
These test steps have to be included into that script/file by:
#include-once ; this line is optional
#include "Menu\OpenFolder.au3"
Test step OpenFolder.au3 (which is part of a test case) contains the function(s) that load the folder and its content.
In those functions, the page object MenuItemMedia.au3 is loaded/included into the script/file by:
#include-once ; this line is optional
#include "..\..\..\PageObjects\Menu\MenuItemMedia.au3"
The file MenuItemMedia.au3 should only contain the recognition mechanism for that area, and its actions.
This could be "find menu item Media" (as a function), or "find open folder menu item" (as a function), and so on.
Func _findMenuItemMedia()
    ; do the recognition action
    ; ...
    Return $oMenuItem
EndFunc
The test step OpenFolder.au3 then calls _findMenuItemMedia() like:
Global $oMedia = _findMenuItemMedia()
and on $oMedia a .click can be executed, or something like .getText, etc.
The test cases should only #include the files which are necessary (test steps). The test steps should also only #include the necessary files (page objects), and so on. That way it's possible to adjust the recognition functions in one place and have the change picked up by the corresponding test steps.
Conclusion:
Of course it's hard to explain it this way, but with this approach you can work in a similar way to Selenium for web testing. Please notice that you will probably have to use Global variables often. You have to ensure the includes are correct and not lose the overview of your test, which is much easier in OOP-based test approaches.
I recommend using VS Code, because you can jump from file to file via the #include statements. That's pretty handy.
I hope this will help you.

Jacoco coverage inconsistent on anonymous classes

We use JaCoCo, but on different builds and different machines, with the same code and Gradle script, it gives different results. The problem seems to be anonymous classes: they sometimes do not line up with the test run, even though it's all done as part of the same clean build. We get this:
[ant:jacocoReport] Classes in bundle 'SomeThing' do no match with execution data. For report generation the same class files must be used as at runtime.
[ant:jacocoReport] Execution data for class a/b/c/$Mappings$1 does not match.
[ant:jacocoReport] Execution data for class a/b/c/$Mappings$3 does not match.
So it looks like the anonymous classes are getting out of sync. Any idea how to fix this? It's really messing with our rules, as we have, for instance, a 100% class coverage requirement, and this means some classes sometimes show up as not covered.
To generate a report based on the jacoco.exec file, JaCoCo performs an analysis of bytecode, i.e. of the compiled classes.
Different versions of the Java compiler produce different bytecode. Even recompilation might produce different bytecode - see https://bugs.openjdk.java.net/browse/JDK-8067422 and https://github.com/jacoco/jacoco/issues/383.
If the classes used at runtime (during creation of jacoco.exec) are different (e.g. created on another machine with a different compiler), then they can't be associated with the classes used during creation of the report, leading to the message "Execution data for class ... does not match". You can read more about Class Ids in the JaCoCo documentation.
All in all, to avoid this message, use the exact same class files during creation of jacoco.exec and during generation of the report.
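For illustration only, here is a made-up class (not from the build in question) showing why anonymous classes are particularly sensitive to this: each one is compiled into its own numbered class file, and both the numbering and the bytecode can shift between compilations.

    public class Mappings {

        // Compiled into its own class file, e.g. Mappings$1.class
        Runnable first = new Runnable() {
            @Override
            public void run() {
                System.out.println("first");
            }
        };

        // Compiled into its own class file, e.g. Mappings$2.class
        Runnable second = new Runnable() {
            @Override
            public void run() {
                System.out.println("second");
            }
        };
    }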

Best data structure for ANT like Utility

I am trying to make an ANT-like utility in which I load a configuration.xml (similar to an Ant build.xml). This configuration.xml has different 'target' tags that need to be executed based on the target attributes and properties. Each target has 'dependent' targets, which must be executed prior to executing the calling target.
Which is the best data structure for such processing?
Currently I am using a HashMap together with a Stack.
I am reading the configuration.xml with a SAX parser and loading each target as an object (with all its properties and dependencies) onto a HashMap.
This HashMap is then iterated, and dependencies are kept on a stack. Once the stack is built, it is popped and each target is executed.
Is this the optimal solution, or is there a better data structure?
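For what it's worth, the HashMap-plus-Stack scheme described above amounts to a depth-first topological sort. A minimal sketch of that idea (with a hypothetical Target class, not the actual code from the question; cycle detection omitted for brevity) could look like this:

    import java.util.*;

    class Target {
        final String name;
        final List<String> dependencies = new ArrayList<>();

        Target(String name) {
            this.name = name;
        }
    }

    class TargetScheduler {

        // Returns the target names in an order where every dependency comes before its dependents.
        // Assumes every dependency name is present in the map.
        static List<String> executionOrder(String start, Map<String, Target> targets) {
            List<String> order = new ArrayList<>();
            Set<String> visited = new HashSet<>();
            visit(start, targets, visited, order);
            return order;
        }

        private static void visit(String name, Map<String, Target> targets,
                                  Set<String> visited, List<String> order) {
            if (!visited.add(name)) {
                return; // already scheduled
            }
            for (String dependency : targets.get(name).dependencies) {
                visit(dependency, targets, visited, order); // dependencies first
            }
            order.add(name); // then the target itself
        }
    }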
One approach is to use an XSLT transformation and generate an ANT file that is dynamically executed. The following example illustrates the principle:
iterate int xml file using ant
But perhaps a better approach is to use a dynamic scripting language like Groovy and create a custom DSL for your application.

What is a good Visual Studio source code layout for cross-project inheritance hierarchies

I develop similar calculation packages for different clients (Bob, Rick, Sue, Eve). The calculation packages consist of several calculation modules (A, B, C, D, ...), which are organized in a "Chain of Responsibility" pattern. Assembly is done with an Abstract Factory.
As a lot of code is shared between the calculation modules of different clients, they are organized in a hierarchy:
ICalcA
  AbstrCalcA
    AbstrCalcAMale
      CalcABob
      CalcARick
    AbstrCalcAFemale
      CalcASue
      CalcAEve
(same hierarchy for B, C, D, ...).
Now release management dictates that I organize the source code per inheritance level:
Project: CalcCommon
[CalcA]
ICalcA.cs
AbstrCalcA.cs
[CalcB]
ICalcB.cs
AbstrCalcB.cs
[CalcC]
...
Project: CalcMale
[CalcA]
AbstrCalcAMale.cs
[CalcB]
AbstrCalcBMale.cs
[CalcC]
....
Project: CalcBob
[CalcA]
CalcABob.cs
[CalcB]
CalcBBob.cs
[CalcC]
....
Project: CalcFemale
....
For Bob, I release CommonCalc.dll, CalcMale.dll and CalcBob.dll.
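To tie the hierarchy to the assemblies in one place: sketched below in Java-style syntax for brevity (the real sources are C#, and these declarations simply restate the diagram and project layout above), module A spans the projects like this:

    // CalcCommon (ships as CommonCalc.dll)
    interface ICalcA { }
    abstract class AbstrCalcA implements ICalcA { }

    // CalcMale (ships as CalcMale.dll)
    abstract class AbstrCalcAMale extends AbstrCalcA { }

    // CalcBob (ships as CalcBob.dll)
    class CalcABob extends AbstrCalcAMale { }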
Now this is all good, but with many modules, helper classes, and so on, it is very cumbersome to work within the module hierarchy. Closely related classes (e.g. ICalcA and CalcABob) are far apart in the Solution Explorer. No one on my team seems to find anything without searching for the class name, if they can remember it. Features tend to get implemented at the wrong hierarchy level, or at multiple levels.
How can I improve the situation?
I was thinking of creating one project per module and hierarchy level (projects: CalcCommonA, CalcMaleA, CalcBobA, CalcRickA, CalcCommonB, CalcMaleB, ...) and grouping them via solution folders.
I just found out that the new search bar at the top of the Solution Explorer comes in handy for this.
Step 1: Make sure all classes related to Feature A contain "FeatureA" in their class name.
Step 2: If working in the FeatureA hierarchy, enter "FeatureA" into the search/filter bar.
This will display just the classes of this particular hierarchy, as required.
