In my Protractor test script, I use the usual notation:
describe("mytest", function () {
  ...
  it("should do this", function () {
    ...
  });
  it("should do that", function () {
    ...
  });
});
I would like to be able to see what test and what part of each is currently running when I run them. Is there any option I can use to output test descriptions to the console?
There is a reporter that should do what you are looking for. Take a look at https://www.npmjs.com/package/jasmine-spec-reporter and https://github.com/bcaudan/jasmine-spec-reporter/tree/master/examples/protractor
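As a rough sketch of how it is wired up (the exact export and option names vary between jasmine-spec-reporter versions, so treat the details below as assumptions to verify against the README):
// protractor.conf.js - minimal sketch, verify against your jasmine-spec-reporter version
var SpecReporter = require('jasmine-spec-reporter').SpecReporter;

exports.config = {
  // ... your existing config ...
  onPrepare: function () {
    // prints each describe/it to the console as it runs, instead of the default dots
    jasmine.getEnv().addReporter(new SpecReporter());
  }
};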
You can use the --verbose option to print more information about your tests, but it will not tell you which test is currently being run.
I suggest you create an issue if you want that feature: https://github.com/angular/protractor/issues/new
$ ./node_modules/protractor/bin/protractor protractor-config.js --verbose
------------------------------------
PID: 7985 (capability: chrome #1)
------------------------------------
Using the selenium server at http://localhost:4444/wd/hub
angularjs homepage
should greet the named user
todo list
should list todos
should add a todo
Finished in 5.915 seconds
3 tests, 5 assertions, 0 failures
Since Protractor runs in Node, you can use console.log as you normally would in JavaScript; for more console options, see the Node docs.
I like to place any logs after the functionality so that I know it has completed, so wrapping the log inside a .then() callback worked best for me.
example:
element(elementToFind).click().then(function () {
    console.log("clicked element");
    // continue with the rest of the test here
});
If you need to know the name of the spec currently running, you could use jasmine.getEnv().currentSpec.description and log it.
I use
console.log('\nTest spec: ' + __filename + '\n');
to log the test file, but I have no idea how to log the it block being executed.
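If all you need is to see which it block is currently running, one option (assuming Jasmine 2+, where custom reporters receive specStarted/specDone callbacks) is to register a tiny reporter in onPrepare:
// minimal sketch: log each spec as it starts and finishes
jasmine.getEnv().addReporter({
  specStarted: function (result) {
    console.log('Running: ' + result.fullName);
  },
  specDone: function (result) {
    console.log(result.status + ': ' + result.fullName);
  }
});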
I have an InSpec test, and this works great:
inspec exec scratchpad/profiles/forum_profile --reporter yaml
The trouble is that I want to run this from a script and capture the output in an array.
I cannot find the documentation that indicates which method I need to use to get the same output.
I do this:
require 'inspec'

def my_func
  http_checker = Inspec::Runner.new()
  http_checker.add_target('scratchpad/profiles/forum_profile')
  http_checker.run
  puts http_checker.report
end
The report method seems to give me a load of output of the equivalent type, and much more. Does anyone have documentation or advice on returning the same output as the --reporter yaml response, but from within a script? I want to parse the response so I can share the output with another function.
I've never touched InSpec, so take the following with a grain of salt, but according to https://github.com/inspec/inspec/blob/master/lib/inspec/runner.rb#L140, you can provide a reporter option while instantiating the runner. Looking at https://github.com/inspec/inspec/blob/master/lib/inspec/reporters.rb#L11, I think it should be something like ["yaml", {}]. So, could you please try
# ...
http_checker = Inspec::Runner.new(reporter: ["yaml", {}])
# ...
(chances are it will give you the desired output)
I have two load tests, each in its own test case. This is using the free version of SoapUI:
Currently I have to select a load test, run it, wait until it finishes, and then manually export the results before moving on to the next load test and repeating the same actions.
Is there a way (and if so, how) to automatically run all the load tests one by one and export each one's results to a file (test step, min, max, avg, etc.)? This would save the tester from manual intervention; they could just let the tests run while they do other things.
You can use the load test command-line runner; the doc is here.
Something like
loadtestrunner -ehttp://localhost:8080/services/MyService c:\projects\my-soapui-project.xml -r -f folder_name
Using these two options:
-r : turns on exporting of a LoadTest statistics summary report
-f : specifies the root folder to which test results should be exported
Then a file like LoadTest_1-statistics.txt will be written to the specified folder, containing the statistics results in CSV format.
(Inspired by the answer of #aristotll.)
loadtestrunner.bat runs the following class: com.eviware.soapui.tools.SoapUITestCaseRunner
From Groovy you can call the same class like this:
com.eviware.soapui.tools.SoapUITestCaseRunner.main([
    "-ehttp://localhost:8080/services/MyService",
    "c:\\projects\\my-soapui-project.xml",
    "-r",
    "-f",
    "folder_name"
] as String[])
But the main method calls System.exit(), so SoapUI will exit in this case.
So let's go deeper:
def res = new com.eviware.soapui.tools.SoapUITestCaseRunner().runFromCommandLine([
    "-ehttp://localhost:8080/services/MyService",
    "c:\\projects\\my-soapui-project.xml",
    "-r",
    "-f",
    "folder_name"
] as String[])
assert res == 0 : "SoapUITestCaseRunner failed with code $res"
PS: I have not tested this - it's just an idea.
Hi, I am using a flag when testing in Go:
file_test.go
var ip = flag.String("ip", "noip", "test")
I am only using this in one test file, and it works fine when testing only that file. But when I run:
go test ./... -ip 127.0.0.1
all of the other test files fail, saying: flag provided but not defined.
Have you seen this?
Regards
flag.Parse() is being called before your flag is defined.
You have to make sure that all flag definitions happen before calling flag.Parse(), usually by defining all flags inside init() functions.
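For illustration, a minimal sketch of that pattern (package and test names here are made up): the flag is registered at package level, so it already exists by the time the testing framework parses the command line.
// file_test.go - sketch only; package and test names are hypothetical
package mypkg

import (
	"flag"
	"testing"
)

// registered at package level, before any flag parsing happens
var ip = flag.String("ip", "noip", "IP address used by the tests")

func TestWithIP(t *testing.T) {
	if *ip == "noip" {
		t.Skip("no -ip flag provided")
	}
	t.Logf("testing against %s", *ip)
}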
If you've migrated to Go 1.13, the order of the test initializer changed,
so it could lead to something like
flag provided but not defined: -test.timeout
as a possible workaround, you can use
// register the testing package's flags before anything else calls flag.Parse()
var _ = func() bool {
    testing.Init()
    return true
}()
that would call test initialization before the application one. More info can be found on the original thread:
https://github.com/golang/go/issues/31859#issuecomment-489889428
Note: do not call flag.Parse() in any init() function.
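If you do need flag.Parse() to run explicitly for your own flags, TestMain is the usual place for it (a sketch, assuming flag, os, and testing are already imported in the same _test.go file):
// parse flags in TestMain rather than init(), once the testing flags are registered
func TestMain(m *testing.M) {
	flag.Parse()
	os.Exit(m.Run())
}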
I'm very late to the party, but is this broken (again) in Go 1.19.5?
I hit the same errors reported on this thread and the same solution reported above (https://github.com/golang/go/issues/31859#issuecomment-489889428) fixes it.
I see a call to flags.Parse() was added back in go_test.go in v1.18
https://go.googlesource.com/go/+/f7248f05946c1804b5519d0b3eb0db054dc9c5d6%5E%21/src/cmd/go/go_test.go
I am only just learning Go so it'd be nice to have some verification from people more skilled before I report this elsewhere.
If you get this error when running a command via docker-compose, then you have incorrect quoting. E.g.:
services:
  app:
    ...
    image: kumina/openvpn-exporter:latest
    command: [
      "--openvpn.status_paths", "/etc/openvpn_exporter/openvpn-status.log",
      "--openvpn.status_paths /etc/openvpn_exporter/openvpn-status.log",
    ]
The first is correct; the second is wrong, because the whole line is counted as a single parameter. You need to split the flag and its value into two separate strings, as in the first line.
How can I run a specific thread group in a Test Plan from the command line? I have a Test Plan (project file) that contains two "thread groups": one for crawling a site and another for calling specific urls with parameters. From the command line I execute with Maven, like so:
mvn.bat -Dnamescsv=src/test/resources/RandomLastNames.csv
-Ddomainhost=stgweb.domain.com -Dcrawlerthreads=2 -Dcrawlerloopcount=10 -Dsearchthreads=5 -Dsearchloopcount=5 -Dresultscsv=JmeterResults.csv clean test verify
I want to pass an argument to run only one of the two thread groups in that project file. Can you do that with JMeter? I don't want to use an IF controller unless I have to, because it feels like a "hack". I know that SoapUI lets you do this with the '-s' option.
I asked this question on the JMeter forum also.
In our tests we use the while controller. It doesn't look like a hack to me and works well. You can turn thread groups on and off easily with the JMeter properties. Note you can't change its status when the test is already running though.
Add a While Controller with the condition ${__P(threadActive)}
Set the JMeter property when launching JMeter (-JthreadActive=true)
Run the test
Please note that ${__P(threadActive)} equates to ${__P(threadActive)} == true; anything other than true will result in that thread group not running.
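For example, on a plain non-GUI run the property can be passed on the command line (the plan file and property name below are just placeholders):
jmeter -n -t my-test-plan.jmx -JthreadActive=true -l results.jtl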
When running a large set of tests using MsTest from the command line, I can see each test executing and its outcome logged in the window like so:
Passed Some.NameSpace.Test1
Passed Some.NameSpace.Test2
And so on for thousands of tests. Once completed, MsTest will spit out a summary like this
Summary
---------
Test run failed
Passed 2000
Failed 1
------------
Total 2001
At this point I either have to start scrolling backwards in the window trying to find the needle in a haystack that represents my single failing test, or I can open the huge xml file that represents the result, and text-search for some keyword indicating a failed test.
Isn't there an easier way? Can I have MsTest report progress without dumping Passed test names to the console (still logging failed ones), or can I have a summary of just Failed tests at the end?
I think it's obvious what any command-line user wants to do: follow progress AND know the outcome at the end, without having to read XML or scroll back through the cmd window history.
Answering my own question: a simple wrapper/parser script that calls MsTest.exe and parses/summarizes the output (either the stdout or the .trx file) seems to be the only solution.
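As a crude first pass before writing a full parser, you can filter the console output so that only lines which do not start with "Passed" reach the window (the test container name below is a placeholder):
MSTest.exe /testcontainer:MyTests.dll | findstr /V /B /C:"Passed"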
You could use TestContext.CurrentTestOutcome at the end of each test to determine whether the test failed, and then log all failed tests to a different file.
// TestContext is injected by MSTest
public TestContext TestContext { get; set; }

[TestCleanup]
public void CleanUp()
{
    if (TestContext.CurrentTestOutcome.ToString().Equals("Failed"))
    {
        TestContext.WriteLine("{0}.{1} ==> {2}", TestContext.FullyQualifiedTestClassName,
            TestContext.TestName, TestContext.CurrentTestOutcome.ToString());
        // Log the result to a file.
    }
}
I don't know if this could help you.