I have written unit tests for my adapter code. The results are in a text file containing the module name and whether the unit test passed or failed, as the string SUCCESS or FAILURE. How can I use this text file to show code coverage in a SonarQube analysis? Please help me with this.
I want to set covered to true at the folder level rather than per line number. How do I specify that in the generic XML format? – Umap
Your best bet is to convert your results into the Generic Test Data format. However, that format is designed to take coverage data about lines, not modules, so you may face difficulties with your data granularity.
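For reference, the generic coverage report is an XML file along the following lines (the file path and line numbers below are placeholders). Note that coverage can only be declared per line of each listed file, so "covered for a whole folder" has to be expressed by listing every file in the folder and its lines:

<coverage version="1">
  <file path="src/adapter/module_one.ext">
    <lineToCover lineNumber="1" covered="true"/>
    <lineToCover lineNumber="2" covered="true"/>
  </file>
  <!-- one <file> element per source file in the folder -->
</coverage>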
As the title suggests, I am just trying to do a simple export of a DataStage job. The issue occurs when we export the XML and begin examining it. For some reason, the wrong information is being pulled from the job and placed in the XML.
As an example, the SQL in a transform of the job may be:
SELECT V1,V2,V3 FROM TABLE_1;
Whereas the XML for the same transform may produce:
SELECT V1,Y6,Y9 FROM TABLE_1,TABLE_2;
It makes no sense to me how the export of a job could be different from the actual architecture.
The parameters I am using to export are:
Exclude Read Only Items: No
Include Dependent Items: Yes
Include Source Code with Routines: Yes
Include Source Code with Job Executable: Yes
Include Source Content with Data Quality Specifications: No
What tool are you using to view the XML? Try using something less smart, such as Notepad or WordPad. This will help you determine whether the problem is with your XML viewer, or rule that out.
You might also try exporting in DSX format and examining that output, to see whether the same symptoms are visible there.
Thank you all for the feedback. I realized that the issue wasn't necessarily with the XML. It had to do with numerous factors within our DataStage environment. As mentioned above, the data connections were old and unreliable. For some reason this does not impact our current production refresh, so it's a non-issue.
The other issue was the way that the generated SQL and custom SQL options work when creating the XML. In my case, there were times when old code was kept in the system, but the option was switched from custom SQL to generating SQL based on the columns. This led to inconsistent output from my script, so the mini-project was scrapped.
This is a how-to/best-practice question.
I have a code base with a suite of unit tests run with pytest
I have a set of *.rst files which provide explanation of each test, along with a table of results and images of some mathematical plots
Each time the pytest suite runs, it dynamically updates the *.rst files with the results of the latest test data, updating numerical values, time-stamping the tests, etc
I would like to integrate this with the project docs. I could:
Build these rst files separately with sphinx-build whenever I want to view the test results [this seems bad, since it's labor-intensive and not automated]
Tell Sphinx to render these pages separately and include them in the project docs [better, but I'm not sure how to configure this]
Have a separate set of Sphinx docs for the test results which I can build after each run of the test suite
Which approach (or another approach) is most effective? Is there a best practice for doing this type of thing?
Maybe take a look at Sphinx-Test-Reports, which reads in all the information from JUnit-based XML files (pytest can produce these with its --junitxml option) and generates the output during the normal Sphinx build phase.
So you are free to add custom information around the test results.
Example from webpage:
.. test-report:: My Report
   :id: REPORT
   :file: ../tests/data/pytest_sphinx_data_short.xml
So, the complete answer to your question: take none of the given approaches and let a Sphinx extension do it at build time.
Currently I am working on moving some API DDT tests (data from CSV) from RobotFramework to JMeter, and what troubles me is the lack of a proper JSON assertion that can ignore some keys during comparison. I am totally new to JMeter, so I am not sure whether such an option is available.
I am pretty sure we are using the wrong tool for this job, especially because the functional testers would take over writing the new tests. However, my approach (to make it as easy as possible for them) is to create a JMeter plugin which takes the response and compares it to a baseline (excluding ignored keys defined in its GUI). What do you think? Is there any built-in I can use instead? Or do you know of any existing plugin?
Thanks in advance
The "proper" assertion is JSON Assertion available since JMeter 4.0
You can use arbitrary JSON Path queries to filter response according to your expected result
If that is not enough, you can always go for the JSR223 Assertion; the Groovy language has built-in JSON support, so it will be far more flexible than any existing or future plugin.
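Whichever route you take (a Groovy JSR223 script or a custom plugin), the core logic is the same: parse both documents, drop the ignored keys, and compare what is left. Below is a rough sketch of that logic in plain Java using the Jackson library; the class and method names are my own illustration, not anything JMeter ships out of the box:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.util.Set;

public class JsonComparer {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // true if the actual response equals the baseline once the ignored keys are removed
    public static boolean matchesIgnoring(String actualJson, String baselineJson,
                                          Set<String> ignoredKeys) throws Exception {
        JsonNode actual = MAPPER.readTree(actualJson);
        JsonNode baseline = MAPPER.readTree(baselineJson);
        stripIgnoredKeys(actual, ignoredKeys);
        stripIgnoredKeys(baseline, ignoredKeys);
        return actual.equals(baseline);   // JsonNode.equals does a deep comparison
    }

    // recursively removes the ignored keys from every object in the tree
    private static void stripIgnoredKeys(JsonNode node, Set<String> ignoredKeys) {
        if (node.isObject()) {
            ObjectNode obj = (ObjectNode) node;
            ignoredKeys.forEach(obj::remove);
            obj.fields().forEachRemaining(e -> stripIgnoredKeys(e.getValue(), ignoredKeys));
        } else if (node.isArray()) {
            node.forEach(child -> stripIgnoredKeys(child, ignoredKeys));
        }
    }
}

A Groovy JSR223 script can do the same with JsonSlurper and plain map comparison, which is probably less work than maintaining a plugin.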
Please find below the approach that I can think of:
Take the response/HTML/JSON source dump for the baseline using "Save Responses to a file".
Take the response dump for the AUT that needs to be compared, or simply the dump from a second run.
Use two FTP Samplers to fetch the locally saved response dumps.
Use a Compare Assertion to compare the two FTP responses. In the Compare Assertion you can use the RegEx String and Substitution fields to mask timestamps or user IDs with something common to both, so that they are ignored in the comparison.
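For example, to neutralize a timestamp field in both dumps before comparison, the RegEx String / Substitution pair might look like the following (the exact pattern depends on your payload; this is only an illustration):

Regex String:  "timestamp"\s*:\s*"[^"]*"
Substitution:  "timestamp": "MASKED"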
You need to take care with how you save and fetch the response dumps.
Hope this helps.
I am writing a complex application (a compiler analysis). To debug it I need to examine the application's execution trace to determine how its values and data structures evolve during its execution. It is quite common for me to generate megabytes of text output for a single run and sifting my way through all that is very labor-intensive. To help me manage these logs I've written my own library that formats them in HTML and makes it easy to color text from different code regions and indent code in called functions. An example of the output is here.
My question is: is there any better solution than my own home-spun library? I need some way to emit debug logs that may include arbitrary text and images, visually structure them, and, if possible, index them so that I can easily find the region of the output I'm most interested in. Is there anything like this out there?
Even though you didn't mention which language you're using, I'd like to propose the Apache Log4XXX family: http://logging.apache.org/
It offers customizable levels of detail as well as tag-driven loggers. The GUI tool (Chainsaw) can be combined with the good old grep approach (so you see only what you're interested in at the moment).
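For instance, per-region loggers combined with log4j's nested diagnostic context (NDC) could cover the "color by code region" and "indent inside called functions" needs, and Chainsaw or grep can then filter on the logger name or the context. A minimal sketch (the class and logger names are made up for illustration):

import org.apache.log4j.Logger;
import org.apache.log4j.NDC;

public class DataflowAnalysis {
    // one logger per code region, so Chainsaw/grep can filter on the logger name
    private static final Logger log = Logger.getLogger("compiler.analysis.dataflow");

    void analyzeFunction(String functionName) {
        NDC.push(functionName);            // nested context stands in for indentation
        try {
            log.debug("entering " + functionName);
            // ... analysis work, possibly calling analyzeFunction() recursively ...
        } finally {
            NDC.pop();
        }
    }
}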
Colorizing, searching, and filtering using an expression syntax are available in the latest developer snapshot of Chainsaw. The expression syntax also supports regular expressions (using the 'like' keyword).
Chainsaw can parse any regular text log file, not just log files generated by log4j.
The latest developer snapshot of Chainsaw is available here:
http://people.apache.org/~sdeboy
The File > Load Chainsaw configuration menu item is where you define the 'format' and location of the log file you want to process, and the expression syntax can be found in the tutorial, available from the Help menu.
Feel free to email the log4j users list if you have additional questions.
I created a framework that might help you: https://github.com/pablito900/VisualLogs
I'm a newbie to Unit Testing and I'm after some best practice advice. I'm coding in Cocoa using Xcode.
I've got a method that's validating a URL that a user enters. I want it to accept only the http:// protocol and only URLs that contain valid characters.
Is it acceptable to have one test for this and use a test data file? The data file provides example valid/invalid URLs and whether or not the URL should validate. I'm also using this to check the description and domain of the error message.
Why I'm doing this
I've read Pragmatic Unit Testing in Java with JUnit and this gives an example with an external data file, which makes me think this is OK. Plus it means I don't need to write lots of unit tests with very similar code just to test different data.
But on the other hand...
If I'm testing for:
invalid characters
and an invalid protocol
and valid URLs
all in the same test data file (and therefore in the same test), will this cause me problems later on? I read that one test should only fail for one reason.
Is what I'm doing OK?
How do other people use test data in their unit tests, if at all?
In general, use a test data file only when it's necessary. There are a number of disadvantages to using a test data file:
The code for your test is split between the test code and the test data file. This makes the test more difficult to understand and maintain.
You want to keep your unit tests as fast as possible. Having tests that unnecessarily read data files can slow down your tests.
There are a few cases where I do use data files:
The input is large (for example, an XML document). While you could use String concatenation to create a large input, it can make the test code hard to read.
The test is actually testing code that reads a file. Even in this case, you might want to have the test write a sample file to a temporary directory, so that all of the code for the test is in one place (see the sketch after this list).
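For that second case, a sketch of what keeping all of the test's code in one place might look like (UrlFileReader is a hypothetical class under test; imports from java.io, java.nio.file, and java.util are omitted for brevity):

public void testReadsUrlsFromFile() throws Exception {
    // create the sample input from within the test, in a temporary location
    File input = File.createTempFile("urls", ".txt");
    input.deleteOnExit();
    Files.write(input.toPath(), Arrays.asList("http://foo.com", "http://foo.com/home"));

    // UrlFileReader is a hypothetical class that reads URLs from a file
    List<String> urls = new UrlFileReader().read(input);
    assertEquals(2, urls.size());
}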
Instead of encoding the valid and invalid URLs in the file, I suggest writing the tests in code. I suggest creating a test for invalid characters, a test for invalid protocol(s), a test for invalid domain(s), and a test for a valid URL. If you don't think that has enough coverage, you can create a mini integration test to test multiple valid and invalid URLs. Here's an example in Java and JUnit:
public void testManyValidUrls() {
    UrlValidator validator = new UrlValidator();
    assertValidUrl(validator, "http://foo.com");
    assertValidUrl(validator, "http://foo.com/home");
    // more asserts here
}

private static void assertValidUrl(UrlValidator validator, String url) {
    assertTrue(url + " should be considered valid", validator.isValid(url));
}
While I think this is a perfectly reasonable question to ask, I don't think you should be overly concerned about this. Strictly speaking, you are correct that each test should only test for one thing, but that doesn't preclude your use of a data file.
If your System Under Test (SUT) is a simple URL parser/validator, I assume that it takes a single URL as a parameter. As such, there's a limit to how much simultaneously invalid data you can feed into it. Even if you feed in a URL that contains both invalid characters and an invalid protocol, it would only cause a single result (that the URL was invalid).
What you are describing is a Data-Driven Test (also called a Parameterized Test). If you keep the test itself simple, feeding it with different data is not problematic in itself.
What you do need to be concerned about is that you want to be able to quickly locate why a test fails when/if that happens some months from now. If your test output points to a specific row in your test data file, you should be able to quickly figure out what went wrong. On the other hand, if the only message you get is that the test failed and any of the rows in the file could be at fault, you will begin to see the contours of a test maintainability nightmare.
Personally, I lean slightly towards having the test data as closely associated with the tests as possible. That's because I view the concept of Tests as Executable Specifications as very important. When the test data is hard-coded within each test, it can very clearly specify the relationship between input and expected output. The more you remove the data from the test itself, the harder it becomes to read this 'specification'.
This means that I tend to define the values of the input data within each test. If I have to write a lot of very similar tests where the only variation is the input and/or expected output, I write a Parameterized Test, but still invoke that Parameterized Test from hard-coded tests (each of which is only a single line of code). I don't think I've ever used an external data file.
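In code, that pattern might look something like this (reusing the hypothetical UrlValidator from the earlier answer):

public void testUrlWithInvalidCharactersIsRejected() {
    assertUrlIsRejected("http://foo.com/<not allowed>");
}

public void testUrlWithInvalidProtocolIsRejected() {
    assertUrlIsRejected("ftp://foo.com");
}

// the "Parameterized Test": all of the setup and assertion logic lives here
private void assertUrlIsRejected(String url) {
    UrlValidator validator = new UrlValidator();
    assertFalse(url + " should be considered invalid", validator.isValid(url));
}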
But then again, these days, I don't even know what my input is, since I use Constrained Non-Determinism. Instead, I work with Equivalence Classes and Derived Values.
take a look at: http://xunitpatterns.com/Data-Driven%20Test.html