[Script Taxonomy][1]
[1]: https://i.stack.imgur.com/2r5S5.jpg
Attached is a picture that shows how our scripts are structured for our testing needs.
We have a JMeter project (e.g. Main.jmx) with multiple thread groups, as shown in the picture, and each thread group calls an external JMX file (e.g. Sub1.jmx, Sub2.jmx) using an Include Controller. In each external JMX file (e.g. Sub1.jmx, Sub2.jmx) we have created a thread group containing Simple Controllers, each holding a series of steps that represents a test case. Each step in a Simple Controller calls a Test Fragment residing in the same Sub1.jmx using a Module Controller.
The Module Controller defined in the Simple Controller fails to locate the Test Fragments and produces the following error from the Sub1.jmx file:
Error occurred starting thread group :[TG]-Subscriptions, error message:ModuleController:[MC]-2. Login to portal/mobile has no selected Controller (did you rename some element in the path to target controller?), test was shutdown as a consequence,
see log file for more details
org.apache.jorphan.util.JMeterStopTestException: ModuleController:[MC]-2. Login to portal/mobile has no selected Controller (did you rename some element in the path to target controller?), test was shutdown as a consequence
at org.apache.jmeter.control.ModuleController.resolveReplacementSubTree(ModuleController.java:143) ~[ApacheJMeter_components.jar:5.4.1]
at org.apache.jmeter.control.ModuleController.restoreSelected(ModuleController.java:126) ~[ApacheJMeter_components.jar:5.4.1]
at org.apache.jmeter.control.ModuleController.clone(ModuleController.java:70) ~[ApacheJMeter_components.jar:5.4.1]
at org.apache.jmeter.engine.TreeCloner.addNodeToTree(TreeCloner.java:76) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.engine.TreeCloner.addNode(TreeCloner.java:63) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:993) ~[jorphan.jar:5.4.1]
at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:994) ~[jorphan.jar:5.4.1]
at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:994) ~[jorphan.jar:5.4.1]
at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:994) ~[jorphan.jar:5.4.1]
at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:994) ~[jorphan.jar:5.4.1]
at org.apache.jorphan.collections.HashTree.traverse(HashTree.java:976) ~[jorphan.jar:5.4.1]
at org.apache.jmeter.threads.ThreadGroup.cloneTree(ThreadGroup.java:535) ~[ApacheJMeter_core.jar:?]
at org.apache.jmeter.threads.ThreadGroup.makeThread(ThreadGroup.java:310) ~[ApacheJMeter_core.jar:?]
at org.apache.jmeter.threads.ThreadGroup.startNewThread(ThreadGroup.java:265) ~[ApacheJMeter_core.jar:?]
at org.apache.jmeter.threads.ThreadGroup.start(ThreadGroup.java:244) ~[ApacheJMeter_core.jar:?]
at org.apache.jmeter.engine.StandardJMeterEngine.startThreadGroup(StandardJMeterEngine.java:527) [ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:452) [ApacheJMeter_core.jar:5.4.1]
at java.lang.Thread.run(Thread.java:834) [?:?]
Please advise whether there is any way to get rid of the above error and achieve a successful connection between the Module Controller defined in the Simple Controller and the Test Fragment (in Sub1.jmx).
I cannot reproduce your issue using a simplified test plan consisting of:
Sub1.jmx test plan with Test Fragment and Module Controller pointing to that Test Fragment
Main.jmx test plan with Include Controller pointing to Sub1.jmx
So most probably you either forgot to "select" the necessary Test Fragment in the Module Controller, or you renamed the Test Fragment after selecting it, so the Module Controller cannot find its target anymore.
If I'm reading your question incorrectly and the problem is still there, I would ask you to provide a minimal reproducible example test plan (or plans) based on Debug Samplers.
Also be aware that according to JMeter Best Practices you should always use the latest version of JMeter, so you can try upgrading, as you might be suffering from a JMeter bug which has already been fixed.
I have the following situation with JMeter: I have two thread groups, and I want to use a variable extracted from a response in the first thread group in the second one.
What I am doing:
the variable is extracted with the JSON/YAML Path Extractor and then set as a property with a BeanShell Assertion: ${__setProperty(id, ${id})};
Then in the second thread group I have a BeanShell PreProcessor where I am trying to modify the value with the following script:
String ids2 = props.get("id");
String ids3 = vars.put(${__intSum(2,-4)});
String ids = vars.put(ids,"${__intSum(${ids2},${ids3})}");
As a result I am getting an exception in JMeter:
java.lang.NumberFormatException: For input string: ""
at java.lang.NumberFormatException.forInputString(Unknown Source) ~[?:1.8.0_221]
at java.lang.Integer.parseInt(Unknown Source) ~[?:1.8.0_221]
at java.lang.Integer.parseInt(Unknown Source) ~[?:1.8.0_221]
at org.apache.jmeter.functions.IntSum.execute(IntSum.java:66) ~[ApacheJMeter_functions.jar:5.4.1]
at org.apache.jmeter.engine.util.CompoundVariable.execute(CompoundVariable.java:138) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.engine.util.CompoundVariable.execute(CompoundVariable.java:113) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.testelement.property.FunctionProperty.getStringValue(FunctionProperty.java:91) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.testbeans.TestBeanHelper.unwrapProperty(TestBeanHelper.java:129) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.testbeans.TestBeanHelper.prepare(TestBeanHelper.java:84) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.engine.StandardJMeterEngine.notifyTestListenersOfStart(StandardJMeterEngine.java:202) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:382) ~[ApacheJMeter_core.jar:5.4.1]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_221]
The goal is to reduce the variable by 2 and then transfer it to another variable which I will use only in the second thread group.
Any help will be highly appreciated
Since JMeter 3.1 it's recommended to use JSR223 Test Elements and the Groovy language for scripting.
Your approach might work for one thread, but with more than one thread the property will be overwritten. If you want each thread (virtual user) to have its own value, you should add the current thread number as a prefix or suffix to the property name. The relevant function is __threadNum(); if you prefer scripting, you can call ctx.getThreadNum() instead.
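For example, a minimal Groovy sketch of the idea (props, ctx and vars are the standard JSR223 bindings; the property name id comes from your question, everything else is illustrative):

    // JSR223 PostProcessor in the first thread group: store the extracted
    // variable in a property keyed by the current thread number.
    props.put('id_' + ctx.getThreadNum(), vars.get('id'))

    // JSR223 PreProcessor in the second thread group: read the property back,
    // subtract 2, and expose the result as a variable local to this thread.
    def id = props.get('id_' + ctx.getThreadNum()) as int
    vars.put('ids', String.valueOf(id - 2))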
In the vast majority of cases Inter-Thread Communication Plugin is easier to use
In short: how can I save a file both locally AND on the cloud, and similarly, how can I set it to read from the local copy?
Longer description: There are two scenarios: 1) building the model, and 2) serving the model through an API. When building the model, a series of analyses is run to generate the features and the model, and the results are written locally; at the end everything is uploaded to S3. When serving, all the required files generated in the first step are downloaded first.
I am curious how I can leverage Kedro here. Perhaps I can define two entries for each file in conf/base/catalog.yml, one corresponding to the local version and the second to the S3 one, but that is perhaps not the most efficient way when I am dealing with 20 files.
Alternatively, I can upload the files to S3 using my own script and exclude the synchronization from Kedro; in other words, Kedro would be blind to the fact that copies exist on the cloud. But perhaps this approach is not the most Kedro-friendly way.
Not quite the same, but my answer here could potentially be useful.
I would suggest that the simplest approach in your case is indeed defining two catalog entries and having Kedro save to both of them (and load from the local one for an additional speed-up), which gives you the ultimate flexibility, though I admit it isn't the prettiest.
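For example, a pair of hypothetical entries for one file might look like this (dataset names, types and paths are illustrative, not from your project):

    model_input_local:
      type: pandas.CSVDataSet
      filepath: data/05_model_input/model_input.csv

    model_input_s3:
      type: pandas.CSVDataSet
      filepath: s3://my-bucket/model_input/model_input.csv

A node that should persist to both would then list both names in its outputs, which is exactly the duplication the hook below avoids.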
In terms of avoiding all your node functions needing to return two values, I'd suggest applying a decorator to the nodes you tag with a certain tag, e.g. tags=["s3_replica"], taking inspiration from the below script (stolen from a colleague of mine):
from typing import Any, Callable, Dict, List

from kedro.framework.hooks import hook_impl
from kedro.io import DataCatalog
from kedro.pipeline.node import Node


class S3DataReplicationHook:
    """
    Hook to replicate the output of any node tagged with `s3_replica` to S3.

    E.g. if a node is defined as:

        node(
            func=myfunction,
            inputs=['ds1', 'ds2'],
            outputs=['ds3', 'ds4'],
            tags=['tag1', 's3_replica'],
        )

    then the hook will expect to see `ds3.s3` and `ds4.s3` in the catalog.
    """

    @hook_impl
    def before_node_run(
        self,
        node: Node,
        catalog: DataCatalog,
        inputs: Dict[str, Any],
        is_async: bool,
        run_id: str,
    ) -> None:
        if "s3_replica" in node.tags:
            node.func = _duplicate_outputs(node.func)
            node.outputs = _add_local_s3_outputs(node.outputs)


def _duplicate_outputs(func: Callable) -> Callable:
    def wrapped(*args, **kwargs):
        outputs = func(*args, **kwargs)
        # Return the outputs twice: once for the original entries and once
        # for the corresponding `.s3` catalog entries.
        return (outputs,) + (outputs,)

    return wrapped


def _add_local_s3_outputs(outputs: List[str]) -> List[str]:
    return outputs + [f'{o}.s3' for o in outputs]
The above is a hook, so you'd place it in your project's hooks.py file (or wherever you keep your hooks), then import it in your settings.py file and put:
from .hooks import ProjectHooks, S3DataReplicationHook
hooks = (ProjectHooks(), S3DataReplicationHook())
in your settings.py.
You can be slightly cleverer with your output naming convention so that only certain outputs are replicated (for example, maybe you agree that every catalog entry ending with .local must also have a corresponding .s3 entry, and you mutate the node's outputs in the hook accordingly rather than doing it for every output).
If you wanted to be even cleverer, you could inject the corresponding S3 entry into the catalog using an after_catalog_created hook rather than manually writing the S3 version of the dataset in your catalog, again following a naming convention of your choice. Though I'd argue that writing the S3 entries out is more readable in the long run.
There are two ways I can think of. The simpler approach is to use a separate --env configuration for cloud and local: https://kedro.readthedocs.io/en/latest/04_kedro_project_setup/02_configuration.html#additional-configuration-environments
conf
├── base
│ └──
├── cloud
│ └── catalog.yml
└── my_local
└── catalog.yml
And you can call kedro run --env=cloud or kedro run --env=my_local depending on which env you want to use.
Another, more advanced way is to use the TemplatedConfigLoader: https://kedro.readthedocs.io/en/stable/kedro.config.TemplatedConfigLoader.html
conf
├── base
│ └── catalog.yml
├── cloud
│ └── globals.yml (contains `base_path: s3-prefix-path`)
└── my_local
└── globals.yml (contains `base_path: my_local_path`)
In catalog.yml you can then refer to base_path (which carries the full S3 prefix in the cloud env and a local path in the local env) like this:
my_dataset:
  filepath: ${base_path}/my_dataset
And you can call kedro run --env=cloud or kedro run --env=my_local depending on which env you want to use.
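Note that for the ${base_path} substitution to happen the project has to actually use the TemplatedConfigLoader. A minimal sketch of the wiring, assuming a Kedro 0.17-era hook API (this has moved between Kedro versions, so check the docs for yours):

    from kedro.config import TemplatedConfigLoader
    from kedro.framework.hooks import hook_impl

    class ProjectHooks:
        @hook_impl
        def register_config_loader(self, conf_paths):
            # Fill ${...} placeholders from any globals.yml in the active env.
            return TemplatedConfigLoader(conf_paths, globals_pattern="*globals.yml")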
I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. When it is, the code is executed in the client instance and fails with a 'not a server' error. When this was fixed I received:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can this be done?
Another question: currently the code checks whether the region exists, and if not it calls the function. In both cases it also tries to create the client region. Is this a correct approach?
Region<?, ?> region = instance.getRegion(name);
if (region == null) {
    Execution execution = FunctionService.onServers(instance);
    ArrayList<String> argList = new ArrayList<>();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
ClientRegionFactory<Object, Object> cf = this.instance
        .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. When it is, the code is executed in the client instance and fails with a 'not a server' error. When this was fixed I received:
Once the Function is registered on the server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), in which case you'll also avoid the serialization filter error. As an example: FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
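A hedged sketch of the two sides (names like regionName and clientCache are illustrative, and CreateRegionFunction.ID is assumed to be the ID constant from the documentation example):

    // Server side, e.g. in a server initializer: register the function once.
    FunctionService.registerFunction(new CreateRegionFunction());

    // Client side: only the ID string crosses the wire, so the function class
    // never goes through the serialization filter.
    ArrayList<String> args = new ArrayList<>();
    args.add(regionName);
    FunctionService.onServers(clientCache)
        .setArguments(args)
        .execute(CreateRegionFunction.ID)
        .getResult();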
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can this be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
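A hedged sketch of the idea (the property name is real; the exact MemberVM startup signature varies between Geode versions, so treat the surrounding wiring as illustrative):

    // Let the test's classes through Geode's serialization filter.
    Properties props = new Properties();
    props.setProperty("serializable-object-filter", "org.restcomm.cache.geode.**");

    // Pass the properties when starting the server VM in the dunit test
    // (cluster is an assumed ClusterStartupRule, locatorPort likewise).
    MemberVM server = cluster.startServerVM(1, props, locatorPort);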
Another question: currently the code checks whether the region exists, and if not it calls the function. In both cases it also tries to create the client region. Is this a correct approach?
If the dynamically created region is used by the client application then yes, you should create it, otherwise you won't be able to use it.
As a side note, there's a lot of internal logic implemented by Geode when creating a region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
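For example, creating a region administratively is a one-liner (region name and type are illustrative):

    gfsh> create region --name=exampleRegion --type=PARTITION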
I have a FHIR Device resource that contains a FHIR DeviceComponent resource. I use the following HAPI FHIR code to 'insert' one resource into the other:
protected static void insertResourceInResource(BaseResource resource, BaseResource resourceToInsert)
{
    ContainedDt containedDt = new ContainedDt();
    ArrayList<IResource> resourceList = new ArrayList<IResource>();
    resourceList.add(resourceToInsert);
    containedDt.setContainedResources(resourceList);
    resource.setContained(containedDt);
}
According to the Eclipse debugger the insertion works fine. The resource with its contained resource is then added to a bundle, and when all the work is done the Eclipse debugger shows the resource with the contained resource properly placed in the bundle. However, when generating a JSON string the contained resources are not there. The encoding operation is as follows:
return fhirContext.newJsonParser().setPrettyPrint(true)
.encodeResourceToString(bundle);
Any ideas what I am doing wrong?
It turns out that one must reference the contained resource from the parent resource using "#" to prefix the reference. If one does that, the contained resource will be present in the XML and JSON output.
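A minimal sketch of the pattern (HAPI DSTU2-style API to match the question's code; the id value is illustrative):

    // Give the contained resource a local id that starts with "#".
    resourceToInsert.setId("#component1");

    // Contain it in the parent as before, then point some reference field of
    // the parent at the local id; without such a reference the parser omits
    // the contained resource when encoding.
    ResourceReferenceDt ref = new ResourceReferenceDt("#component1");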
Admittedly this requirement makes no sense to me. Why would I include a resource INSIDE another scoping resource if I did not think it was important?
The ASP.NET Web API Help Page project does not produce complete documentation for F# record types used as parameters or result types for Web API controller actions. Members are listed, but summary information in XML comments is not displayed in the generated documentation. How do I fix this?
Example
Consider the following F# record type being used as a parameter or result type for a Web API action method:
[<CLIMutable>]
type ExampleRecord = {
    /// Example property.
    Prop : int
}
Expected output
The generated help page documentation for this type should include the summary information in the description column for this member.
Name │ Description │ Type │ Additional information
══════╪═══════════════════╪═════════╪═══════════════════════
Prop │ Example property. │ integer │ None.
Actual output
What we actually see is that the summary information is completely absent.
Name │ Description │ Type │ Additional information
══════╪═════════════╪═════════╪═══════════════════════
Prop │ │ integer │ None.
Specifics
This issue occurs in relation to the following specific technologies:
Microsoft ASP.NET Web API Help Pages v5.1.1;
Visual Studio Professional 2013 (Update 1); and
F# 3.1 compiler.
Despite the self-answer, the floor is wide open to better solutions as what I've got currently doesn't really cut mustard.
Update: the doubled namespace issue has been fixed. Future readers may need to adjust the code below. Specifically, you might need to change "${namespace}${namespace}${class}" to "${namespace}${class}". Don't say I didn't warn you!
The problem arises because of two bugs related to how XML documentation is generated for F# record types:
"When the F# compiler generates the documentation file, it actually documents the internal field instead of the public property of the record member."—Axel Habermaier
The namespace of a record member is doubled in the generated XML.
Barring an update to Visual Studio 2013 (or perhaps just the F# compiler), the best fix for this would probably be a post-build action that cleans up the generated XML. For now, I have a temporary fix that involves changing the method that gets the documentation for members. In Areas/HelpPage/XmlDocumentationProvider, find the method with the signature:
public string GetDocumentation(MemberInfo member)
…and replace the definition with:
public string GetDocumentation(MemberInfo member)
{
    string selectExpression;
    bool isRecord = FSharpType.IsRecord(member.DeclaringType, FSharpOption<BindingFlags>.None);
    if (isRecord)
    {
        // Workaround for a bug in VS 2013.1: duplicated namespace in documentation for record types.
        Regex matchTypeName = new Regex(@"(?<namespace>(?:[_\p{L}\p{Nl}]+\.)*)(?<class>[_\p{L}\p{Nl}]+)$");
        string classExpression = matchTypeName.Replace(GetTypeName(member.DeclaringType), "${namespace}${namespace}${class}");
        string memberExpression = String.Format(CultureInfo.InvariantCulture, "{0}.{1}", classExpression, member.Name);
        selectExpression = String.Format(CultureInfo.InvariantCulture, FieldExpression, memberExpression);
    }
    else
    {
        string expression = member.MemberType == MemberTypes.Field ? FieldExpression : PropertyExpression;
        string memberName = String.Format(CultureInfo.InvariantCulture, "{0}.{1}", GetTypeName(member.DeclaringType), member.Name);
        selectExpression = String.Format(CultureInfo.InvariantCulture, expression, memberName);
    }
    XPathNavigator propertyNode = _documentNavigator.SelectSingleNode(selectExpression);
    return GetTagValue(propertyNode, "summary");
}
This is a very temporary fix! It will be overwritten if you update the Web API Help Pages package, and will break things if the aforementioned bugs are fixed. I'd really appreciate any help finding a better solution.