How to capture TPS for multiple systems involved with JMeter - jmeter

In my scenario, I have three systems: A, B and C. I am creating load on system A, and system A communicates in the following way: A -> B -> C. The response then comes back to system A: A <- B <- C. I am able to capture TPS via JMeter for system A only. How can I capture TPS for the other two systems?

JMeter knows nothing about the architecture of your application; it treats the system under test as a black box.
If you want to check the performance of systems B and C, you need to mimic the communication between A and B in the JMeter test, and do the same for B and C. This approach is known as Isolation Testing.
If for some reason that isn't doable, you can try to collect the information from systems B and C using, for example, the JMeter PerfMon Plugin, which supports a variety of metrics and allows executing arbitrary commands or reading files in order to retrieve and plot the collected information.

Related

Is there a way to do collaborative learning with h2o.ai (flow)?

Relatively new to ML and h2o. Is there a way to do collaborative learning/training with h2o? I would prefer a way that uses the Flow UI; otherwise I would be using Python.
My use case is that new feature samples x=[a, b, c, d] would periodically come into a system where an h2o algorithm (say, running from a Java program using a MOJO) assigns a binary class, which users should be able to manually reclassify as either good (0) or bad (1); at that point these samples (with their newly assigned responses) get sent back to the h2o algorithm to be used to further train it.
Thanks
The Flow UI is great for prototyping something very quickly with H2O without writing a single line of code. You can ingest the data, build the desired model and then evaluate the results. Unfortunately the Flow UI cannot be extended for the purpose you describe, and it is limited in that respect.
For collaborative learning you can write your whole application directly in Python or R, and it will work as expected.
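For instance, a minimal Python sketch of the retraining loop you describe could rely on H2O's checkpoint mechanism to continue training an existing model on newly relabelled samples. The column names, model ids and file paths below are illustrative assumptions, not part of the question:
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()
features = ["a", "b", "c", "d"]

# Initial training on the historical data (hypothetical file).
train = h2o.import_file("historical_samples.csv")
train["label"] = train["label"].asfactor()          # binary response as a factor
model = H2OGradientBoostingEstimator(model_id="gbm_v1", ntrees=50)
model.train(x=features, y="label", training_frame=train)

# Later: users have relabelled a batch of samples; continue training from the
# previous model via its checkpoint (ntrees must exceed the checkpointed value).
new_batch = h2o.import_file("relabelled_samples.csv")
new_batch["label"] = new_batch["label"].asfactor()
updated = H2OGradientBoostingEstimator(model_id="gbm_v2", checkpoint="gbm_v1", ntrees=100)
updated.train(x=features, y="label", training_frame=new_batch)

# Export a fresh MOJO for the Java scoring side mentioned in the question.
updated.download_mojo(path=".")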

OMNeT++: Parallelize Single Run Simulation

I'm trying to parallelize my model (I want to parallelize a single config run, not run multiple configs in parallel).
I'm using OMNeT++ 4.2.2, but the version probably doesn't matter.
I've read the Parallel Distributed Simulation chapter of the OMNeT++ manual
and the principle seems very straightforward:
simply assign different modules/submodules to different partitions.
Following the provided cqn example
*.tandemQueue[0]**.partition-id = 0
*.tandemQueue[1]**.partition-id = 1
*.tandemQueue[2]**.partition-id = 2
If I try to simulate relatively simple models, everything works fine and I can partition the model as I wish.
However, when I run simulations that use the StandardHost module, or modules that are interconnected using Ethernet links, it no longer works.
If I take, for example, the INET-provided example WiredNetWithDHCP (inet/examples/dhcp/eth) as an experiment, let's say I want to run the hosts in a different partition than the switch.
I therefore assign the switch to a partition and everything else to another:
**.switch**.partition-id = 1
**.partition-id = 0
The different partitions are separated by links with delay, so it should be possible to partition this way.
When I run the model using the graphical interface, I can see that the model is correctly partitioned; however, the connections are somehow wrong and I get the following error message:
during network initialization: the input/output datarates differ
Clearly the datarates don't differ (and running the model sequentially works perfectly). Checking the source of the error message shows that this exception is also triggered by a link that is not connected, and that is indeed what happens: it seems that the gates are not correctly linked.
Clearly I'm missing something in the link connection mechanism; should I partition somewhere else?
Due to the simplicity of the paradigm I feel like an idiot, but I'm not able to solve this issue by myself.
Just to give some feedback:
It seems that this cannot be done directly; in short, the full INET as it is cannot be parallelized because it uses global variables in some places.
In this particular case, MAC address assignment is one of the issues (it uses a global variable), hence the Ethernet interface cannot be parallelized.
For more details, refer to this paper explaining why this is not possible:
Enabling Distributed Simulation of OMNeT++ INET Models:
For reference and a possible solution, refer to the authors' webpage at Aachen University, where you can download a complete copy of OMNeT++ and INET that can be parallelized:
project overview and code

Different dashboards based on same analyse run

Sonar-Qube: V.5.1.1
C#-Plugin: V.4.0
ReSharper-Plugin: V.2.0
Due to the long analysis runs I would like to have the following:
Let's assume I analyse my source with rules A, B, C and D. Now I would like to have one dashboard based on the issues found with rules A and B, another dashboard based on the issues found with rules C and D, and a third one based on all rules. But I don't want to have an analysis run for each of those combinations! Currently an analysis run takes 4 hours!
What you're after isn't possible.
===Edit===
Based on our comment conversation, I'd advise putting all rules in the same profile and setting the severity of "next year's" rules to Info. The teams can easily use the issues page to choose which sets of issues to see at one time.
When it's time to make rule set II official, you can simply upgrade the severities of the relevant rules.

Trying to perfect my cucumber scenarios

I know either of these will work, but I am trying to become a better member of the Ruby/Cucumber community. I have a story that tests that if a section of my website doesn't have any links under it, that section should not display; this applies to several sections. So which of these two ways is the better way to write the scenarios? Once again, I understand either will work, but I'm looking for the best-practice solution. I would normally use option B, as the scenarios all test different "Then" steps; however, I have been doing some research and I'm second-guessing myself, since I can test all the scenarios with the same "Given" statement, and I have read that you should only create a new scenario if you are changing both the "Given" and "Then" steps.
A.
Scenario: A user that cannot access A, B, C, or D
Given I am a user without access to A, B, C, or D
When I navigate to reports
Then I see the A header
But I cannot click on A's header
And I see error message under A stating the user does not have access
And I do not see the B section
And I do not see the C section
And I do not see the D section
OR
B.
Scenario: A user that cannot access A
Given I am a user without access to A
When I navigate to reports
Then I see the A header
And I see error message under A stating the user does not have access
But I cannot click on A's header
Scenario: A user that cannot access B
Given I am a user without access to B
When I navigate to reports
Then I do not see the B section
Scenario: A user that cannot access C
Given I am a user without access to C
When I navigate to reports
Then I do not see the C section
Scenario: A user that cannot access D
Given I am a user without access to D
When I navigate to reports
Then I do not see the D section
I believe best practice is to break features down into their various parts (in this case, scenarios).
Option B is better because it adheres to the single responsibility principle (which of course can be applied to many different parts of code). The way B is written is clear and direct. If you come back to this in 6 months, or a new developer sees this for the first time, you will both have a good idea of the goal of the test.
Option A seems to be doing a lot, and although this is an integration test, you should keep the specific parts of code being tested as independent as possible. Ask yourself this: when this test fails, will you know exactly why, or will you have to start digging around to see which part of the test actually failed?
Best practice, in this context, advocates smaller sections of code. If these tests start repeating themselves (DRY, don't repeat yourself), you can start to refactor them, perhaps with a Background, as in the sketch below.
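For instance, if the same restricted user were reused across scenarios, a hypothetical Background (the step wording here is illustrative, not taken from the question) could factor out the shared setup:
Background:
  Given I am a user without access to A, B, C, or D
Scenario: The A section is shown but locked
  When I navigate to reports
  Then I see the A header
  But I cannot click on A's header
Scenario: The B section is hidden
  When I navigate to reports
  Then I do not see the B section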
Granular scenarios are preferable because they communicate the desired behavior more explicitly and provide better diagnostics when there is a regression. As your application evolves, small scenarios are easier to maintain. In a long scenario, it is difficult to figure out all of the setup and side effects of the steps; the result is a "gravitational attraction" where long scenarios keep growing and growing.
A scenario outline can make your tests both granular and concise. In the following example, it's obvious at a glance that resources B, C, and D all have the same policy, while resource A is different:
Scenario Outline: A user cannot access an unauthorized resource
Given I am a user without access to <resource>
When I navigate to reports
Then I do not see the <resource> section
Examples:
| resource |
| B |
| C |
| D |
Scenario: A user that cannot access A
Given I am a user without access to A
When I navigate to reports
Then I see the A header
And I see error message under A stating the user does not have access
But I cannot click on A's header
I would replace A, B, C and D with something more readable. Just imagine that your grandma needs to understand this definition; she wouldn't understand what A, B, C and D mean. So let's put it this way:
given a basic user
..
..
then the user cannot see the edit tools
given a super user
..
..
then the super user should see the edit tools
Just try to group those A, B, C, D into something meaningful, such as a group name, level n, team, etc.
Then you can cover each of the individual items A, B, C, D with unit tests if you wish.

Debug Gradle's parallel mode

We are trying Gradle for our very large and complex enterprise app. We are using a multi-project build structure and are very excited about Gradle's parallel execution feature.
Our codebase is structured in domain layers like this:
UI modules (~20) -> shared ui -> domain -> dao -> framework
Dependencies are unidirectional and the build happens bottom-up.
Unfortunately we are not seeing a big boost in our build times. It's pretty much the same as what we were getting with Ant before.
Looking at the execution sequence of tasks in parallel mode, a few things don't look right.
Our expectation is that Gradle will run tasks in sequence initially while it is building the core layers, so after it assembles framework, dao, domain and shared ui, it should kick off everything else in parallel.
But the execution sequence we are seeing is something like this:
framework.assemble -> dao.assemble -> domain.assemble -> shared.ui.assemble -> other UI modules.assemble (in parallel) -> war -> other UI.check + shared.ui.check + dao.check (in parallel) -> domain.check -> framework.check
The bottleneck is at the end, when it runs the checks for domain and framework in sequence rather than in parallel. These two modules are the biggest ones for us, with around 12k unit tests, and they take around 4 minutes to run.
We spent a lot of time looking at the dependencies using gradle tasks --all; the test tasks for these modules are completely independent and there is nothing that should hold up their execution.
We are wondering whether this is a known issue, or whether there is a way to enable some extra debugging in Gradle to get more insight into how Gradle determines the execution order in parallel mode. Any help is appreciated.
As of Gradle 1.4, parallel task execution is (intentionally) constrained in a few ways. In particular, the set of tasks executing at any time won't contain two tasks belonging to the same project. This will be improved over time. I'm not aware of any debugging aids other than what you get from the logs (e.g. with --debug).
Note that parallel test execution is a separate feature. If you have a lot of tests in the same project, test.maxParallelForks = x with x > 1 should show a noticeable speedup. The value for x is best determined experimentally. A good starting point is the number of physical cores on the machine (e.g. Runtime.getRuntime().availableProcessors() / 2).
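For example, a minimal sketch of that setting in one of the big subprojects' build.gradle (the divisor assumes two logical processors per physical core, i.e. hyper-threading; tune it experimentally):
test {
    // Fork several test JVMs so test classes within this project run in parallel.
    // Rough guess at the physical core count; adjust based on measurements.
    maxParallelForks = Math.max(Runtime.getRuntime().availableProcessors().intdiv(2), 1)
}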
