BIRT Report: first report very slow

I have a problem. First of all, my application is working properly and my reports are generated correctly.
Now I have a small concern about the 1st report generated, which takes more than 45 seconds.
After that, if I run the same report or any other report, it is done in 2-3 seconds.
Do you have any idea how to solve this problem for the 1st report?
Thank you

Obviously, initialization takes most of the time.
You'll have to figure out which part of the initialization.
I think you'll have to add logging with timestamps at several places in the code, or use profiling, to see how long each part takes:
1) Starting up the Java process and loading the BIRT classes
2) Starting up the BIRT report engine
3) Loading resources inside the report (e.g. JS files and libraries)
4) Connecting to the DB (in particular, if you are using connection pooling)
5) DB initialization (often the DB caches data very efficiently, so subsequent SQL statements selecting the same or similar data can run very fast)
For example, you could add log statements inside the initialization event of the report itself, inside the beforeOpen and afterOpen events of the Data Source, inside the beforeOpen and afterOpen events of the Data Sets, and inside your Java code calling the reports, as in the sketch below.
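As a rough timing sketch only (the report path, output path and class name are placeholders, and it assumes the standard BIRT Report Engine API is used to call the reports), separating the engine startup cost from the first run-and-render cost could look like this:
// Timing sketch: measure engine startup (steps 1-2) separately from the first
// report run (steps 3-5). Paths and class name are placeholders.
import org.eclipse.birt.core.framework.Platform;
import org.eclipse.birt.report.engine.api.*;

public class BirtTiming {
    public static void main(String[] args) throws Exception {
        long t0 = System.currentTimeMillis();
        EngineConfig config = new EngineConfig();
        Platform.startup(config);                       // load platform and BIRT classes
        IReportEngineFactory factory = (IReportEngineFactory) Platform
                .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
        IReportEngine engine = factory.createReportEngine(config);
        long t1 = System.currentTimeMillis();
        System.out.println("Engine startup: " + (t1 - t0) + " ms");

        IReportRunnable design = engine.openReportDesign("reports/myReport.rptdesign"); // placeholder
        IRunAndRenderTask task = engine.createRunAndRenderTask(design);
        HTMLRenderOption options = new HTMLRenderOption();
        options.setOutputFileName("out/myReport.html"); // placeholder
        task.setRenderOption(options);
        task.run();                                     // resources, DB connection, queries, rendering
        task.close();
        long t2 = System.currentTimeMillis();
        System.out.println("First run-and-render: " + (t2 - t1) + " ms");

        engine.destroy();
        Platform.shutdown();
    }
}
Running a second report right after the first, with the same timing statements, should show which of the two phases is responsible for the 45 seconds.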

Related

Generate JMeter custom pie chart

I'm looking for a way to display a pie chart/table after running 100 tests, but all the available built-in reports seem to accumulate data on time spent per sample and controller, and on performance metrics.
Although the tests mostly check performance and some of those metrics are useful, we also need statistics on the actual response data.
Each HTTP request queries the service for item availability per product.
After the tests finish, we would also like a pie chart to appear with 3 sections:
Available
Low on stock
Unavailable
Now, I found "Save Responses to a file" listener but it generates separate files which isn't very good. Also with "View Results Tree" we can specify filename where responses will be dumped.
We don't need the whole response object and preferably not even write anything to disk.
And than, how to actually visualize that data in JMeter after tests complete? Would it be Aggregate Graph?
So to recap: while threads run, each value from json response object (parsed with JPath) should be remembered somewhere and after tests complete these variables should be grouped and displayed as a pie chart.
The only thing I can think of is the Sample Variables property. If you add the following line to the user.properties file:
sample_variables=value1,value2,etc.
then the next time you run the JMeter test in command-line non-GUI mode, the .jtl result file will contain as many extra columns as you have Sample Variables, and each cell will contain the respective value of the variable for each Sampler.
You will be able to use Excel or an equivalent to build your charts (see the sketch below for one way to tally such a column after the run). Alternatively, you could use a Backend Listener and come up with a Grafana dashboard showing what you need.
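As a rough illustration only (the results.jtl file name, the availabilityStatus variable name and the naive CSV parsing are assumptions, not part of the question), the extra column produced by Sample Variables could be tallied after the run like this:
// Sketch: count how many samples fell into each availability bucket, using the
// extra column added by sample_variables=availabilityStatus. Assumes the default
// CSV output with a header row and no embedded commas in any field.
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.*;

public class AvailabilityCounter {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> counts = new TreeMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader("results.jtl"))) {
            List<String> header = Arrays.asList(in.readLine().split(","));
            int col = header.indexOf("availabilityStatus"); // column added by Sample Variables
            String line;
            while ((line = in.readLine()) != null) {
                counts.merge(line.split(",")[col], 1, Integer::sum);
            }
        }
        // e.g. Available: 85, Low on stock: 10, Unavailable: 5 - feed these into any pie chart tool
        counts.forEach((status, n) -> System.out.println(status + ": " + n));
    }
}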

View all HTTP request errors during/after a JMeter load test

I was wondering if there was an easier way to do this. Below is a simple load test specification:
When I run high loads, the Summary Report might report a percentage of errors. And you can also probably view those requests in the View Results Tree page (that is, if we catch the errored request quickly enough).
Now what do we do if we want to study all the errors to see if there is some pattern in them, or simply to know all the kinds of errors in the HTTP load test? I am looking for some feature or hack to this effect.
You can generate the HTML Reporting Dashboard (for example by adding -e -o <output folder> to the non-GUI command line), which provides:
A Statistics table providing, in one table, a summary of all metrics per transaction, including 3 configurable percentiles; basically the same as your Summary Report listener
An error table providing a summary of all errors and their proportion in the total requests
A Top 5 Errors by Sampler table providing for every Sampler (excluding Transaction Controller by default) the top 5 Errors
Response codes per second zoomable chart
There is a separate Listener - Response Codes per Second
JMeter .jtl result files are basically CSV files, so you can open them with MS Excel or an equivalent and perform grouping or plot error messages on a timeline chart (a rough grouping sketch follows this answer).
And last but not least, for "high loads" it's recommended to disable or even remove all the Listeners (especially the View Results Tree one) because they don't add any value and just consume valuable resources.
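To make the grouping idea concrete (the results.jtl file name is an assumption, and the default CSV header with the success, responseCode and responseMessage columns is assumed), a rough sketch could look like:
// Sketch: count failed samples per response code/message from a .jtl CSV file.
// Assumes a header row and that no field contains an embedded comma.
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.*;

public class ErrorSummary {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> errors = new TreeMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader("results.jtl"))) {
            List<String> header = Arrays.asList(in.readLine().split(","));
            int success = header.indexOf("success");
            int code = header.indexOf("responseCode");
            int message = header.indexOf("responseMessage");
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.split(",");
                if (!"true".equals(cols[success])) {
                    errors.merge(cols[code] + " " + cols[message], 1, Integer::sum);
                }
            }
        }
        errors.forEach((error, n) -> System.out.println(n + " x " + error));
    }
}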

Changes made to a Power BI report with JavaScript are not applied every time

I have embedded a Power BI report in a Dynamics Portal. I am trying to update settings on the Power BI report with JavaScript, but the changes are not applied every time. I can see that the changes made by the script are sometimes applied and sometimes not, while the script itself executes every time (I have kept an alert box that shows when the script runs). Is there any way to make sure the settings are applied to the Power BI report every time?
https://github.com/Microsoft/PowerBI-JavaScript/wiki/Handling-Events
Use the Power BI event handlers and don't apply changes to the report unless you know it is fully loaded, rendered and available. To do this you can make use of the rendered event.
var report = powerbi.embed(reportContainer, config);
report.on("rendered", function(event) {
    // do stuff here like apply filters or whatever it is you want to do to manipulate the report
});

JMeter - how to find the time taken for a report to generate

How to check the time taken for a report to generate using JMeter?
In my application we need to submit a report and check the time taken for the report to be completed. It usually takes about 15-20 minutes. How can I check that using JMeter?
I checked the listeners (View Results in Table, Aggregate Report) and they don't have that information. Kindly help.
In JMeter you can use the logic below to check the completion time of the report generation activity,
assuming your report generation activity exposes something like a start time and an end time,
from which you can get the approximate time for your report generation activity. (This raises the question: if you can see this through the UI, why do you say you can't find out the execution time for the report?)
If you want it in a JMeter load test, then you probably need custom code (a Beanshell PostProcessor) extracting both values from the result and logging them to a log file; a rough sketch follows the workflow below.
The status check workflow would be somewhat like:
While Controller ------- to loop continuously until our condition is satisfied (the condition should be status != complete)
HTTP Request ---- refresh request
Constant Timer ---- delay of 2 seconds
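As a rough sketch only (the reportStart, reportEnd and status variable names are placeholders, and it assumes earlier extractors stored epoch-millisecond timestamps in them), the Beanshell PostProcessor part could be as simple as:
// Beanshell PostProcessor sketch: compute and log how long report generation took.
long start = Long.parseLong(vars.get("reportStart"));   // set earlier by an extractor
long end = Long.parseLong(vars.get("reportEnd"));       // set earlier by an extractor
log.info("Report generation took " + ((end - start) / 1000) + " s");
The While Controller condition itself can be expressed, for example, as ${__groovy(vars.get('status') != 'complete')} so the refresh request keeps firing until the extracted status changes.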

Specify test end condition in Visual Studio Load Test

I'm testing a large BizTalk system using Visual Studio Load Test. The Load Test pushes messages into MQ; these are picked up by BizTalk and then processed.
Rather than having the test finish (and all performance counters ending) as soon as Visual Studio has finished injecting messages into MQ, I want the test to end if and only if some condition is met (in my case, when (SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool) == 4).
I can see a bunch of ways to run stuff after the test is complete, but no obvious way to extend the test and continue monitoring unless some user-defined exit condition is met.
Is this possible, or if not, does anyone have an idea for a good work-around/hack to achieve this?
You'll want to write a custom load test plugin. Details begin at this URL: http://msdn.microsoft.com/en-us/library/ms243153.aspx
The plugin can manipulate the scenario, extending the duration of the test until your condition is met.
I imagine you want to keep the load test running after queueing up a bunch of requests in order to continue to monitor the performance while the requests are processed. Although we can't control the load test duration, there is a way to achieve this.
Don't limit the test duration: Set the load test duration (or number of iterations) to a very large value -- larger than you anticipate (or know) it will take for the end condition to be satisfied.
Limit the scenario that queues up requests: In the load test scenario properties, in the Options section, set the Maximum Test Iterations so that the user load will drop to zero after sending the desired number of requests. If setting an iteration limit is not possible for some reason, you can instead write a load test plugin that sets the user load to zero in a specified scenario after a certain amount of test time has elapsed.
Check for end condition: Write a web test plugin that checks the database for your end condition. Attach this plugin to a new webtest in a new scenario and set Think Time Between Test Iterations on the scenario so that the test runs only as often as needed (60 seconds?). When the condition is reached, the plugin should write a predetermined value into the user context (the user context is accessible in the web test context as $LoadTestUserContext, and is only available in a load test, not when running a webtest standalone).
Abort the test: Write a load test plugin that looks for the flag value in the user context in the TestFinished event. When the value is found, the plugin calls LoadTest.Abort().
There is one minor disadvantage to this method: the test state is marked as Aborted in the results database.
At the time of writing there is (still) no way to extend the duration of the test using a custom load test plugin, nor by having a virtual user type that refuses to exit, nor by locking the close-down period of the test and preventing it from exiting that way.
The only way we managed to do something like this was to directly manipulate the LoadTest database and inject performance counter data afterwards from log files, but this is neither smart nor recommended.
Oh well..
