VS Load Test Scenario - How can we add custom scenario info to the database?

I have a Load Test project that works perfectly and saves all the perf counters in the LoadTest database as expected.
I want to add some specific scenario attribute information to my test run.
That way, when I create a report in Excel at the end, I'll be able to filter based on those attributes.
Example:
Environment Attribute: (QA, PreProd, Production)
Target Attribute: (UI, API, ...)
I searched everywhere but couldn't find the information. Not sure if I have to create a new table in that DB and populate it myself, or if there is another easier way.

The closest I have achieved to doing what you ask is by modifying the "reporting names" of requests to include the wanted information. On one test I added a test case that executed a small number of times (so few that the overall results would not be skewed). It had two requests. The first collected some data and saved it to a context parameter. The second had a PreRequest plugin that wrote the data to the ReportingName field of that request. (Experiments showed that setting the field in a PostRequest plugin had no effect.)
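The plugin itself is straightforward. As a rough sketch (not the exact code from that test, and with "ScenarioInfo" standing in for whatever context parameter the first request populates):

using Microsoft.VisualStudio.TestTools.WebTesting;

public class ScenarioReportingNamePlugin : WebTestRequestPlugin
{
    // Setting ReportingName has to happen in PreRequest; as noted above,
    // setting it in a PostRequest plugin had no effect.
    public override void PreRequest(object sender, PreRequestEventArgs e)
    {
        object value;
        if (e.WebTest.Context.TryGetValue("ScenarioInfo", out value))
        {
            e.Request.ReportingName = "Scenario=" + value;
        }
    }
}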
Another approach I have used is to have a PostWebTest plugin that writes a line of useful data to a logging file, so I get one line per test case executed.
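A sketch of that logging plugin, with a made-up log path and context parameter name:

using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class CaseLoggingPlugin : WebTestPlugin
{
    // Appends one line per executed test case iteration.
    public override void PostWebTest(object sender, PostWebTestEventArgs e)
    {
        File.AppendAllText(@"C:\LoadTestLogs\cases.log",
            DateTime.Now + "\t" + e.WebTest.Context["ScenarioInfo"] + Environment.NewLine);
    }
}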
It may be possible to add another table to the load test database, but I would wonder whether it would be properly handled by the Open, Import, Export and Delete commands of the Open and Manage Test Results window.

Related

Flow Triggering Itself (Possibly), Each run hits past IDs that were edited

I am pretty new to Power Automate. I created a flow that triggers when an item is created or modified. It initializes some variables and then does some switch cases to assign values to each of them. The variables then go into an array, and another variable is incremented to get the total of the array. I then have a conditional to assign a value to a column in the list. I tested the flow by going into the modern view of the list and clicking the save button. This worked a bunch of times and I sent it for user testing. One of the users edited multiple items by double clicking into the item, which saves after each column change (and which I assume triggers a run of the flow).
The flow seemingly works but seemed to get bogged down at a point based on run history. I let it sit overnight and then tested again and now it shows runs from multiple IDs at a time even though I only edited one specific one.
I had another developer take a look at my flow and he could not spot anything wrong with it. It never had a hard error in testing, only warnings about conditionals possibly causing a loop, but all my conditionals resolve. Pictures included. I am just not sure of any caveats I might be missing.
I am currently letting the flow sit to see if it finishes getting caught up. I have read about the concurrent run option as well as conditions on the trigger itself. I am curious as to why it seems to run on two records (or more) all at once without me or anyone editing each one.
You might be able to ignore the updates made by the service account (the account used in the connection of the flow's actions) by using the following trigger condition expression:
@not(equals(triggerOutputs()?['body/Editor/Claims'], 'i:0#.f|membership|johndoe@contoso.onmicrosoft.com'))

Maximo: Use script to update work order when a related table is updated

I have an automation script in Maximo 7.6.1.1 that updates custom fields in the WORKORDER table.
I want to execute the automation script when the LatitudeY and LongitudeX fields (in the WOSERVICEADDRESS table) are edited by users.
What kind of launch point do I need to do this?
Edit:
For anyone who's learning automation scripting in Maximo, I strongly recommend Bruno Portaluri's Automation Scripts Quick Reference PDF. It doesn't have information about launch points, but it's still an incredibly valuable resource.
I wish I'd known about it when I was learning automation scripting...it would have made my life so much easier.
You can create an attribute action launch point on the latitudeY field and another on the longitudeX field. These will trigger whenever each field is modified: one fires when the latitudeY field is changed, again if the longitudeX field is changed, again if the longitudeX field is changed again, and so on. This all happens before the data is saved, so the user may still choose to cancel their changes, but the scripts will already have fired.
You could also make an "on save" object launch point for WOSERVICEADDRESS (if that is what is actually being updated via the map). This will run any time data in the object is saved, so you would have to add extra checks to see whether either of those fields has changed before doing your logic, but at least it would run only once and only if the user commits their changes.
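For the object launch point variant, the change check can be done with Mbo.isModified. A minimal Jython sketch, assuming the implicit mbo variable that Maximo passes to automation scripts and the same field names as in the related script below:

# Only act if either coordinate actually changed in this save.
if mbo.isModified("LATITUDEY") or mbo.isModified("LONGITUDEX"):
    woMbo = mbo.getOwner()
    if woMbo is not None:
        # Copy the new coordinates up to the owning work order's custom fields.
        woMbo.setValue("WOSAY", mbo.getString("LATITUDEY"))
        woMbo.setValue("WOSAX", mbo.getString("LONGITUDEX"))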
Related:
The following script populates WORKORDER.WOSAX and WORKORDER.WOSAY (custom fields) from the values in WOSERVICEADDRESS.LONGITUDEX and WOSERVICEADDRESS.LATITUDEY.
# Runs on the WOSERVICEADDRESS mbo; the owning WORKORDER holds the custom fields.
woMbo = mbo.getOwner()

longitudex = mbo.getString('longitudex')
latitudey = mbo.getString('latitudey')

if woMbo is not None:
    wosax = woMbo.getString('WOSAX')
    wosay = woMbo.getString('WOSAY')

    # Only write back when the value actually differs, to avoid needless updates.
    if longitudex != wosax:
        woMbo.setValue('WOSAX', longitudex)
    if latitudey != wosay:
        woMbo.setValue('WOSAY', latitudey)
The launch points are Attribute Launch Points, not Object Launch Points.

Does JMeter insert the data in real time?

Currently I am using JMeter. I have a test plan with the order: Login, then Add Patient (after login). After executing the test plan successfully, I checked the application to see whether the new patient was added. Yes, it is added in the application when the test plan (HTTP) is successful. Now I have two questions:
The data is added in the application; how do I check whether it is correct or not?
Suppose I don't want the data to actually be inserted into the application; what do I have to do? Is there any configuration for that?
You can use, for example, a Response Assertion to automatically check whether the patient was added or not.
If you are considering a form of negative testing, i.e.:
ensure that patient will NOT be added if wrong credentials are provided
ensure that patient will NOT be added if logged in user doesn't have permission to add patients
You can use the same Response Assertion but with the condition reversed.
See the How to Use JMeter Assertions in Three Easy Steps article for more information on how to conditionally set pass and/or fail criteria based on request results.

What are the most commonly used JMeter listeners?

I am using JMeter for load testing and using listeners to get the response results, but I am not sure which listeners are most commonly used and will give useful data for analysis.
I know the basic ones such as View Results in Table and View Results Tree; kindly advise which listeners I should use.
JMeter documentation provides a very good overview of the listeners and when/how to use them.
While you are debugging and developing your plan, there's nothing better than View Results Tree, which also serves as a tester for RegEx, CSS/JQuery and XPath expressions. However, this particular listener must be disabled or deleted during the real load test, as it will eventually crash JMeter with an OOM exception.
During the real load test you need to record statistics (how long requests took, etc.) and errors. In non-interactive mode, the best option is the Simple Data Writer with CSV format, which is considered very efficient. If you use interactive mode, or both interactive and non-interactive modes, it's very convenient to use the Aggregate Report or Summary Report, since they display stats right away and you can see immediately if something goes wrong. They also have the ability to write to a file, just like the Simple Data Writer.
Finally, if you want to include some custom result collecting (not provided by any listener), you can use a BeanShell Listener or BSF Listener.
In terms of organization, I find it convenient to separate successes and failures, so I always have 2 listeners:
For successes (the "Successes" box checked under the "Log/Display Only" option) I either record only statistics using the Aggregate/Summary Report (if the test will run interactively and for a long time) or record a file in CSV format (if I need raw data about each request).
I always record failures (the "Errors" box checked under the "Log/Display Only" option) to a file in XML format (for example using the Simple Data Writer). The XML format is not as efficient, but a test is not supposed to have many failures (if it does, it should basically be stopped and analyzed). The XML format allows recording the failing request and response headers and body, which is convenient for further debugging.
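As a side note, the output format and how much detail is kept for failed samples can also be set globally, for example in user.properties (these are standard JMeter property names; adjust the values as needed):

# Write results in CSV format by default (XML also works)
jmeter.save.saveservice.output_format=csv
# For failed samples, also save the response data for debugging
jmeter.save.saveservice.response_data.on_error=true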
Hope this helps.
While executing the real test, it is better to avoid adding listeners; the only one worth adding is the Simple Data Writer, and from the file it produces you can later generate any type of listener report you need.
While preparing the script you can use any type of listener; that will not be an issue.

How to force ActiveRecord to ALWAYS insert, update, delete and retrieve from the database?

We are using ActiveRecord stand-alone (not part of a Rails application) to do unit testing with RSpec. We are testing a database trigger that inserts rows into an audit table.
The classes are:
Folder has many File
Folder has many FileAudit
The sequence of events is like this:
Create Folder
START TEST ONE
Create File
Do some stuff to File
Get Folder.file_audits
Check associated FileAudit records
Destroy File
Destroy FileAudits
END TEST ONE
START TEST TWO
Create File
Do some other stuff to File
Get Folder.file_audits
Check associated FileAudit records
Destroy File
Destroy FileAudits
END TEST TWO
Destroy Folder
The FileAudits from test one are getting destroyed, but not from test two. ActiveRecord seems to think that there is nothing new in that table to delete at the end of the second test.
I can do Folder.file_audits(true) to refresh the cache, but I would rather just disable any and all kinds of caching and have ActiveRecord just do what I tell it instead of it doing what it thinks is best.
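For example, at the moment the specs have to do something like this to see the second test's rows (ActiveRecord 3.1; Folder and FileAudit are our models, and folder is the Folder instance created at the start):

# Passing true to the association reader forces a re-read from the database
audits = folder.file_audits(true)
audits.each do |audit|
  # assertions against the audit rows go here
end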
I also need to set a flag on File to the same value and verify that the trigger did not create an audit record. When I set the flag to a different value, I can see the update statement in the log, but when I set it to the same value and save, there is no update in the log.
I am sure that the caching etc. is fine for a web site, but we are not doing that. We need it to always get all records from the database and always update and delete no matter what. How can we do that?
We are using ActiveRecord 3.1.3.
Thanks
My initial guess would be that you are doing something with transactions in one of the tests. If so, you are effectively eliminating the outer transaction that wraps the unit test itself, which causes the unit test cleanup to have nothing to roll back.
I don't know if this applies to you or not, but I've had problems in the past with calling model.save instead of model.save!. Sometimes I would get validation errors on save, but without the bang the validation errors don't raise an actual exception, so I never knew that the save wasn't successful.
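A minimal illustration of the difference, with made-up model and attribute names:

file = FileRecord.new(folder: folder, name: nil)  # assume name is a required attribute
file.save    # returns false; the validation failure is silent unless you check the return value
file.save!   # raises ActiveRecord::RecordInvalid, so the spec fails loudly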
