TestNG: specify different users - Maven

I am running our automated tests using TestNG. The reason we picked TestNG is that we can pass variable inputs into the test methods, for example public void testXX(String userId), where the userId can change for each test.
The code below shows three different userIds I can use to execute my tests, so my exact same test will run three times, once for each of the three users. This feature is awesome and really enables me to test multiple scenarios with one test, because each of our users carries a different profile.
// All valid Pricing Leads
@DataProvider(name = "userIds")
public Object[][] createPricingLeadUsersParameters() {
    return new Object[][] {
        { "TestUser001" },
        { "TestUser002" },
        { "TestUser003" }
    };
}

@Test(dataProvider = "userIds")
public void createGroup(String userId) {
    // ...
}
The problem I am having right now is that under certain conditions only one userId can be used, or else all of my tests will fail. I would like to keep the exact same test but pass in only one userId, not the three shown above. Is there a way to configure TestNG to make this variable on the command line, so that sometimes I would use the three defined above, but under other conditions only one of the three, or a new userId?

Sure, there are plenty of ways to do this. How about passing a system property when you run TestNG?
java -Dfoo=bar org.testng.TestNG...
and then your data provider can test the value of foo with System.getProperty() and adjust what it returns accordingly.
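For example, here is a minimal sketch of such a data provider. The property name userId and the single-user fallback behaviour are assumptions for illustration, not something TestNG prescribes:

@DataProvider(name = "userIds")
public Object[][] createPricingLeadUsersParameters() {
    // Hypothetical property name; supply it on the command line as -DuserId=TestUser002
    String override = System.getProperty("userId");
    if (override != null) {
        // A single userId was passed in, so the test runs once with just that user.
        return new Object[][] { { override } };
    }
    // No override given: fall back to the full set of valid Pricing Leads.
    return new Object[][] {
        { "TestUser001" },
        { "TestUser002" },
        { "TestUser003" }
    };
}

Run as java -DuserId=TestUser002 org.testng.TestNG... the test executes once; without the property it executes three times as before.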

Related

Mocking is sometimes not applied when running a multi-class test suite

I am testing a service which heavily relies on project reactor.
For many tests I am mocking the return value of the component responsible for API calls.
The tests are split over multiple files.
When I run the tests of one file, they are green, but when I execute all of the test files at once, some tests fail, with the error message indicating that the mocking did not succeed (either the injected component returns null, or the actual component's implementation is invoked).
In the logs, there is no information about the mocking failing.
A code example:
interface API {
    Flux<Bird> getBirds();
}

@Component
class BirdWatcher {
    API api;

    BirdWatcher(API api) {
        this.api = api;
    }

    Flux<Bird> getUncommonBirds() {
        return api.getBirds() // Although this is mocked in the test, in some runs it returns null or invokes the actual component's implementation
            .filter(Bird::isUncommon);
    }
}
@SpringBootTest
class BirdWatcherTests {
    @Autowired
    BirdWatcher birdWatcher;

    @MockBean
    API api;

    @Test
    void findsUncommonBirds() {
        // Assemble
        Bird birdCommon = new Bird("Sparrow", "common");
        Bird birdUncommon = new Bird("Parrot", "uncommon");
        Mockito.when(api.getBirds()).thenReturn(Flux.just(birdCommon, birdUncommon));

        // Act
        Flux<Bird> uncommonBirds = birdWatcher.getUncommonBirds();

        // Assert
        assertThat(uncommonBirds.collectList().block().size(), equalTo(1));
    }
}
To me the issue seems like a race condition, but I don't know where and how this might happen, or how I can check for and fix it.
I am using spring-boot-test:2.7.8, which pulls in org.mockito:mockito-core:4.5.1, org.mockito:mockito-junit-jupiter:4.5.1, and org.junit.jupiter:junit-jupiter:5.8.2, with Gradle 7.8.
For Reactor, I use spring-boot-starter-webflux:2.7.8, which depends on reactor:2.7.8.

Laravel Mix - Is Chaining Required to Ensure Execution Order?

TLDR;
Do you have to chain Laravel Mix methods to maintain the execution order? Are any methods async that would prevent one from using the following non-chaining pattern, mix.scripts(); mix.js(); mix.sass();?
The few tests I've run suggest I do not need to chain.
An Example
Due to how our Laravel app is set up, we need more than one Laravel Mix setup. Instead of copy-and-pasting a webpack.mix.js file and modifying a few lines here and there in each file, we're looking at creating a config object that is passed to a single webpack.mix.js file. In this file, we would check whether various things have been configured and, if so, run the appropriate Mix method. Below is a pseudo-code example.
if ( config.js ) {
    mix.js( config.js.src, config.js.dist );
}
if ( config.sass ) {
    mix.sass( config.sass.src, config.sass.dist );
}
if ( config.concat ) {
    if ( config.concat.styles ) {
        // Could be more than one set of files that need to be combined, so array.
        config.concat.styles.forEach( ( files ) => {
            mix.styles( files.src, files.dist );
        } );
    }
    if ( config.concat.scripts ) {
        // Could be more than one set of files that need to be combined, so array.
        config.concat.scripts.forEach( ( files ) => {
            mix.scripts( files.src, files.dist );
        } );
    }
}
Currently, our code is more like most examples you see on the web.
mix
.options()
.webpackConfig()
.styles()
.styles()
.scripts()
.js()
.sass();
laravel-mix abstracts configuration of webpack and dynamically generates the webpack config.
Its API is organized using the Builder pattern with a fluent (chainable) interface.
This means that to produce a particular configuration, you only call the steps that are necessary.
You need to ensure that code in your webpack.mix.js module can be properly imported.
You need to be careful about the ordering of custom tasks such as copy, copyDirectory, combine, and version. In v5.0.0, custom tasks are run without regard to their asynchronous nature; however, changes are coming to ensure they run sequentially.
Other API methods can be called in any order.
The few tests I've run suggest I do not need to chain.
You're absolutely correct!
Laravel Mix is written in JavaScript and makes use of Method Chaining.
You can visualize code execution with OnlinePythonTutor.
If you look at the code below, you'll find that you don't necessarily need to chain methods to maintain execution order.
class Person {
    setName(name) {
        this.name = name
        return this // returning `this` is what makes chaining possible
    }
    setAge(age) {
        this.age = age
        return this
    }
}

var p = new Person()
p.setName("Alice").setAge(42) // chained: set name, then age
p.setName("Bob") // unchained: set name
p.setAge(42) // unchained: set age
You can visualize this code here

Why is a custom message not working in Grails with the following code?

I am a beginner in Grails and I have the following problem. Please help.
package racetrack

class Users {
    String userName
    String password

    static constraints = {
        userName(nullable: false, maxSize: 20)
        password(password: true, minSize: 8,
            validator: {
                return (it.matches("(.*[\\d])")) ? true : ['noNumber']
                return (it.matches("(.*[\\W])")) ? true : ['noSpecialCh']
                return (it.matches("(.*[a-z])")) ? true : ['noLower']
                return (it.matches("(.*[A-Z])")) ? true : ['noUpper']
            }
        )
    }
}
I created the above domain, and in message.properties I added the following:
users.password.validator.noNumber=should contain at least one number
users.password.validator.noLower=should contain at least one lower case letter as well
users.password.validator.noUpper=should contain number as well
users.password.validator.noSpecialCh=should contain number as well
However, I am not getting the required messages when testing with faulty values. For example, if I give a password with no number, I expected the "should contain at least one number" message, but I only get the default "does not match custom validation" message.
The core problem is that Groovy, unlike Java, allows unreachable statements after a return; if you converted that code to Java it wouldn't compile.
Since only the first return statement ever executes, your validator performs one check, not four, essentially
(it.matches("(.*[\\d])")) ? true : ['noNumber']
It should be something like this:
if (!it.matches("(.*[\\d])")) {
    return ['noNumber']
}
if (!it.matches("(.*[\\W])")) {
    return ['noSpecialCh']
}
if (!it.matches("(.*[a-z])")) {
    return ['noLower']
}
if (!it.matches("(.*[A-Z])")) {
    return ['noUpper']
}
except that all of the regexes are broken, but that's a separate issue.
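Briefly, the problem is that String.matches() must match the entire input, so a pattern like "(.*[\\d])" only accepts strings that end in a digit. A minimal standalone Java sketch to show this (the sample password is made up):

public class RegexDemo {
    public static void main(String[] args) {
        String password = "pa55word"; // contains digits, but not as the last character

        // matches() must consume the whole string, so this pattern only accepts
        // strings that END with a digit and wrongly rejects "pa55word".
        System.out.println(password.matches("(.*[\\d])")); // false

        // Allowing anything after the digit matches a digit anywhere in the string.
        System.out.println(password.matches(".*\\d.*")); // true
    }
}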

Is it possible to provide UI selectable options to custom msbuild tasks?

I have built a custom msbuild task that I use to convert 3D models in the format I use in my engine. However there are some optional behaviours that I would like to provide. For example allowing the user to choose whether to compute the tangent array or not, whether to reverse the winding order of the indices, etc.
In the actual UI where you select the Build action for each file, is it possible to define custom fields that would then be fed to the input parameters of the task? Such as a "Compute Tangents" dropdown where you can choose True or False?
If that is possible, how? Are there any alternatives besides defining multiple tasks? I.e. ConvertModelTask, ConvertModelComputeTangentTask, ConvertModelReverseIndicesTask, etc.
Everything in an MSBuild custom task has to have "settable properties" to drive behavior.
Option 1.
Define an enum-esque property to drive your behavior.
From memory, the MSBuild.ExtensionPack.tasks and MSBuild.ExtensionPack.Xml.XmlFile TaskAction="ReadElementText" does this type of thing.
The "TaskAction" is the enum-esque thing. I say "esque" because all you can do from the outside is set a string, and then in the code you convert the string to an internal enum.
See code here:
http://searchcode.com/codesearch/view/14325280
Option 2: You can still use OO on the tasks. Create an abstract BaseTask for the shared logic, then subclass it; the subclass is the MSBuild task that you call.
SvnExport does this. SvnClient is the base class, and it has several subclasses.
See code here:
https://github.com/loresoft/msbuildtasks/blob/master/Source/MSBuild.Community.Tasks/Subversion/SvnExport.cs
You can probably dive deep with EnvDTE or UITypeEditor, but since you already have a custom task, why not keep it simple with a basic WinForm?
using Microsoft.Build.Utilities;

namespace ClassLibrary1
{
    public class Class1 : Task
    {
        public bool ComputeTangents { set { _computeTangents = value; } }
        private bool? _computeTangents;

        public override bool Execute()
        {
            // If the property was not set in the project file, ask the user via a dialog.
            // Form1 is a simple WinForms dialog with a checkBox1 control, defined elsewhere in the project.
            if (!_computeTangents.HasValue)
            {
                using (var form1 = new Form1())
                {
                    form1.ShowDialog();
                    _computeTangents = form1.checkBox1.Checked;
                }
            }
            Log.LogMessage("Compute Tangents: {0}", _computeTangents.Value);
            return !Log.HasLoggedErrors;
        }
    }
}

Duplicate the behaviour of a data driven test

Right now, if you have a test that looks like this:
[TestMethod]
[DeploymentItem("DataSource.csv")]
[DataSource(
    Microsoft.VisualStudio.TestTools.DataSource.CSV,
    "DataSource.csv",
    "DataSource#csv",
    DataAccessMethod.Sequential)]
public void TestSomething()
{
    string data = TestContext.DataRow["ColumnHeader"].ToString();
    /*
        do something with the data
    */
}
You'll get as many test runs as you have data values when you execute this test.
What I'd like to do is duplicate this kind of behaviour in code while still having a data source. For instance, let's say I want to run this test against multiple deployed versions of a web service (this is a functional test, so nothing is being mocked; i.e. it could very well be a Coded UI test against a web site deployed to multiple hosts).
[TestMethod]
[DeploymentItem("DataSource.csv")]
[DataSource(
    Microsoft.VisualStudio.TestTools.DataSource.CSV,
    "DataSource.csv",
    "DataSource#csv",
    DataAccessMethod.Sequential)]
public void TestSomething()
{
    var svc = helper.GetService(/* external file - NOT a datasource */);
    string data = TestContext.DataRow["ColumnHeader"].ToString();
    /*
        do something with the data
    */
}
Now, if I have 2 deployment locations listed in the external file, and 2 values in the data source for the test method, I should get 4 tests.
You might be asking why I don't just add the values to the data source. The data in the external file will be pulled in via the deployment items in the .testsettings for the test run, because they can and will be defined differently for each person running the tests, and I don't want to force a rebuild of the test code in order to run the tests, or explode the number of data files for tests. Each test might/should be able to specify which locations it would like to test against (the types are known at compile time, not the physical locations).
Likewise, creating a test for each deployment location isn't possible because the deployment locations can and will be dynamic in location, and in quantity.
Can anyone point me to some info that might help me solve this problem of mine?
UPDATE! This works for Visual Studio 2010 but does not seem to work on 2012 and 2013.
I had a similar problem where I had a bunch of files I wanted to use as test data in a data driven test. I solved it by generating a CSV file before executing the data driven test. The generation occurs in a static method decorated with the ClassInitialize attribute.
I guess you could basically do something similar and merge your current data source with your "external file", outputting a new CSV data source that your data-driven test uses.
public TestContext TestContext { get; set; }

const string NameColumn = "NAME";
const string BaseResourceName = "MyAssembly.UnitTests.Regression.Source";

[ClassInitialize()]
public static void Initialize(TestContext context)
{
    var path = Path.Combine(context.TestDeploymentDir, "TestCases.csv");
    using (var writer = new StreamWriter(path, false))
    {
        // Write column headers
        writer.WriteLine(NameColumn);
        string[] resourceNames = typeof(RegressionTests).Assembly.GetManifestResourceNames();
        foreach (string resourceName in resourceNames)
        {
            if (resourceName.StartsWith(BaseResourceName))
            {
                writer.WriteLine(resourceName);
            }
        }
    }
}

[TestMethod]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestCases.csv", "TestCases#csv", DataAccessMethod.Random)]
public void RegressionTest()
{
    var resourceName = TestContext.DataRow[NameColumn].ToString();
    // Get testdata from resource and perform test.
}
