Running JSCoverage with Jasmine

I'm a little new to JavaScript coding, so please bear with me.
I read through the following link:
jasmine with jscoverage automated testing
However, since I don't have a Ruby project, it didn't seem to be what I wanted.
Here are the steps I executed:
Copy my js file to be tested to a directory 'input'
Run the following command 'jscoverage input output'
Copy my spec and src folders for Jasmine as well as SpecRunner.html into 'output'
Copy the instrumented source file into src
Open jscoverage.html in Chrome
Open SpecRunner.html in the 'Browser' tab
At this point, the browser page displays my Jasmine tests. However, the 'Summary' page shows 0 files tested and the 'Source' tab is grayed out.
I know I messed up somewhere but am not sure where. Please help me out. Thanks!

The correct steps to be followed are as follows:
Copy the jasmine files (spec and src folders for Jasmine as well as SpecRunner.html) to a directory 'input'
Run the following command 'jscoverage input output'
Open jscoverage.html in Firefox (Chrome will not show the individual files that were tested)
Open SpecRunner.html in the 'Browser' tab
This approach has the drawback that all the files, including the Jasmine-related JS, show up. But you can select the JS file being tested and just look at the code coverage for it.

The standard jscoverage approach is to instrument the entire codebase you want coverage on, then run a suite of tests and generate a report. This approach is a bit heavy-handed, since a codebase only needs a coverage report at some set frequency. Unlike CI tests, how often does a dev really need to know how the coverage percentage changed? Weekly?
The node.js jscoverage project uses the same 'instrumentation' approach as the larger jscoverage project, but it can be run from the node CLI on an individual file, or from code on one or more files as they are called from the tests themselves. In place of a separate step to 'instrument' a batch of files, the node jscoverage (confusingly, the same name) 'instruments' at test runtime.
I have been working towards a pattern that allows jasmine tests to run either in a browser or at the CLI using the same source code setup and test configuration. It is still beta; the jasmine just-in-time 'instrumentation' is not done yet.
https://github.com/d1b1/jasmine-jscoverage

Related

VS Code - Go: Prevent removal of test coverage markings after save

I'm using VS Code for a Go project. I have it configured to show the code coverage with gutter markings after I run the tests.
The issue I'm having is that if I make any changes to a file in the package, and then save it, VS Code then removes all of the coverage markings from all package files. I've gone through all of the Go settings and cannot find one that prevents this from happening.
There are options to re-run the tests on every save, which would then show the new coverage markings, but that's not feasible for this package because the tests take a relatively long time to run.
I'd like to find a way for it to keep the most recent test coverage markings when I save a file, instead of removing them all. Is this possible?

How to add breakpoints to .feature files

As a developer
I want to put breakpoint(s) in feature files
So that I can debug a feature/scenario/step
Have any of you implemented this functionality with Behave or Cucumber?
You cannot put breakpoints in .feature files, because they are plain text files. Instead, you can put breakpoints inside the test steps which implement your BDD steps.
Example
When I click button "save"
Then saved page opened
You need to go inside that step, where you'll see something like this:
// Cucumber step definition bound to the "When I click button ..." step
@When("^I click button \"([^\"]*)\"$")
public void iClickButton(String buttonName) {
    getButtonByName(buttonName).click();
}
You can put a breakpoint inside the method iClickButton and debug it.
That is how you can debug execution in BDD style.
I can't say I've ever been able to put breakpoints in feature files. Instead, I put them inside the step files, so when a step runs you can verify it did its job. It's a lot of switching back and forth between feature and step files, but it works.
I just want to point out that in 2018 feature files are used by a lot more than Behave or Cucumber (e.g. Codeception for PHP). "Gherkin" is supposed to be a business-readable DSL, and the concept of a "breakpoint" really doesn't apply in that domain; I'd say it's definitionally impossible to put a breakpoint in a Gherkin feature file. A given tool could certainly offer something like it, but there isn't a standard way of doing it, and it could well confuse things in a large organization or team.
First of all, you need to put the breakpoint in the step definition, not in the feature file.
Right now you are running it with a Maven command like:
mvn clean install
But you need to run it using JUnit or TestNG.
Put a breakpoint in your step definition and run the project as JUnit/TestNG in debug mode.
OR
If you still need to run it using Maven, you can try the following parameters in the Maven command:
mvn -Dmaven.surefire.debug -DforkCount=0 test

Viewing which params were requested during load test

I'm using Visual Studio Online Load Testing to test an API with variable parameters coming from a CSV file.
My test is parameterised from the CSV file. In the properties I set "Show Separate Request Results" to True, hoping that I would be able to see which parameters were used during the test, but I cannot find anything about this in the report.
Is this the way to do this or am I doing something wrong?
Visual Studio load tests are not great at showing how individual test cases worked. The test case logs show the data source values used by a test; look in the context section of the log. By default, logs of the first 200 test cases that fail are retained; this can be altered via "Maximum test logs" in the run settings. Logs of successful tests can also be retained by altering "Save log frequency for completed tests" in the run settings.
Whilst the log files have the data in their context sections, it is hard work (i.e. lots of mouse waving and clicking) to open each log file, view the context, scroll the right section into view, close the log file, etc.
The mechanism I use to record data source usage etc. is to have a web test plugin with a PostWebTest method. It writes useful data to a simple text file as each test case finishes. I write one line per test case, formatted as CSV so it can easily be read and analysed in a spreadsheet. The data written includes the date, time, test outcome, some data source values, and some context parameter values extracted or generated during the run. Tests run with multiple agents will get one file written on each agent. Gathering these files will be a little work, but less than viewing individual test case log files. Unfortunately I have not found a way of collecting these files from load tests run with Visual Studio Team Services (previously known as Visual Studio Online).
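A minimal sketch of such a plugin, assuming a fixed log path and a made-up data source column key (both would differ in a real run):

using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class CsvResultLoggerPlugin : WebTestPlugin
{
    // Illustrative output path; each agent writes its own file.
    private const string LogPath = @"C:\LoadTestLogs\results.csv";
    private static readonly object FileLock = new object();

    public override void PostWebTest(object sender, PostWebTestEventArgs e)
    {
        // Data source values live in the web test context; the key
        // "DataSource1.Users#csv.UserName" is a made-up example.
        object userName;
        e.WebTest.Context.TryGetValue("DataSource1.Users#csv.UserName", out userName);

        string line = string.Format("{0:yyyy-MM-dd},{0:HH:mm:ss},{1},{2}",
            DateTime.Now, e.WebTest.Outcome, userName);

        // Serialise writes, because many virtual users share one agent.
        lock (FileLock)
        {
            File.AppendAllText(LogPath, line + Environment.NewLine);
        }
    }
}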
An early version of the plugins I wrote can be found here.

Brief "run-down" on setting up Unit Tests in Visual Studio 2008

I am forcing myself to learn test-driven development, and so far I'm enjoying it. There are a few quirks in Visual Studio unit testing that are driving me batty, though. A bit of background information: my project folder looks like this:
[Root] BitFlex
BitFlex\Code
BitFlex\Debug
BitFlex\Documents
BitFlex\Release
Of course all the source code is stored in the Code folder, and on a build the project output goes to either the Debug or Release folder depending on the current configuration. For my unit testing, I have it set up so the test project output goes to either:
BitFlex\Debug\Unit Tests\
BitFlex\Release\Unit Tests\
At this point, everything is fine and dandy, but there are two problems. 1) The first is that when I run a test it cannot find the assembly; it gives me this error:
Error AssignDefaultProgramTest BitFlex.UnitTests The test assembly 'D:\src\DCOM Productions\BitFlex\Code\TestResults\David Anderson_DCOMPRODUCTIONS 2009-07-31 23_21_00\Out\BitFlex.UnitTests.dll' cannot be loaded. Error details: Could not find file 'D:\src\DCOM Productions\BitFlex\Code\TestResults\David Anderson_DCOMPRODUCTIONS 2009-07-31 23_21_00\Out\BitFlex.UnitTests.dll'.
I cannot seem to find information on this error or how to resolve it, so I suppose that's where everyone's expertise around here comes into play.
2) My other beef is that Visual Studio generates the "Test Results" folder in my code directory; I would prefer to move that to my Unit Tests folder in either output configuration. Is there a way to do this, or a better practice for setting up well-organized unit tests with my folder hierarchy?
By default the MSTest framework runs all tests in an 'isolated' location and not from the binaries directory.
To fix this you can do one of two things:
1. Go to the test configuration file and, under Deployment, uncheck deploying the tests.
2. Don't use paths when looking for external files; instead use the deployment attribute or the test configuration to deploy the needed files along with your tests (see the sketch below).
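For option 2, a minimal sketch using MSTest's DeploymentItem attribute; the test name comes from the question's error message, while the file path is illustrative:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ProgramAssignmentTests
{
    // Copies the named file into the deployment folder next to the
    // test assembly before the test runs, so a relative path works.
    [TestMethod]
    [DeploymentItem(@"TestData\programs.xml")]
    public void AssignDefaultProgramTest()
    {
        Assert.IsTrue(System.IO.File.Exists("programs.xml"));
    }
}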
For doing TDD with MSTest, turn off deployment. You shouldn't need it for "unit testing."
Also, never ever EVER have VS automatically generate tests for you. What's generated may be fine for some types of functional testing, but it usually makes for very poor unit tests.

Visual Studio unit testing - how to access external files?

I have data files used as input to my unit tests. These files are quite big and I don't want to copy them each time the unit tests are executed. The tests are executed without deployment, so I can just put the files into a folder under my solution, and... how do I obtain the path to my solution (or the test project source code) while a unit test is executing?
Because you can run a test project in different ways (TD.NET, Visual Studio, R# etc.), the path used to reference the tests can change.
For this reason, I embed the files the tests need in my test assembly and draw them out from there.
You can use:
Assembly.GetExecutingAssembly().Location
in your tests to get the path of the assembly containing the unit tests.
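For example, a small sketch that resolves a data file relative to the test assembly (the folder and file names are made up):

using System.IO;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DataFileTests
{
    [TestMethod]
    public void CanFindDataFile()
    {
        // Folder containing the compiled test assembly at run time.
        string assemblyDir = Path.GetDirectoryName(
            Assembly.GetExecutingAssembly().Location);

        // Illustrative relative path to where the large files live.
        string dataFile = Path.Combine(assemblyDir, @"TestData\big-input.bin");

        Assert.IsTrue(File.Exists(dataFile));
    }
}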
Simple: make the location of the files configurable (and testable).
Then either set it in the unit testing code or set it through a config file.
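A minimal sketch of the config-file route, assuming an appSettings key named "TestDataDir" (the key and paths are illustrative; requires a reference to System.Configuration):

using System.Configuration;
using System.IO;

public static class TestPaths
{
    // App.config for the test project might contain:
    //   <appSettings>
    //     <add key="TestDataDir" value="D:\src\MySolution\TestData" />
    //   </appSettings>
    public static string Resolve(string fileName)
    {
        string dir = ConfigurationManager.AppSettings["TestDataDir"];
        return Path.Combine(dir, fileName);
    }
}

The tests then call TestPaths.Resolve("big-input.bin") instead of hard-coding a path, and each machine can point the setting at its own copy of the data.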
