Changes to Power BI report with JavaScript are not applied every time - dynamics-crm

I have embedded a Power BI report in a Dynamics Portal. I am trying to update settings on the Power BI report with JavaScript, but the changes are not applied every time. I can see the changes being applied sometimes and sometimes not. I can see that the script executes every time (I added an alert box that fires when the script runs). Is there any way to make sure the settings are applied to the Power BI report every time?

https://github.com/Microsoft/PowerBI-JavaScript/wiki/Handling-Events
Use the Power BI event handlers and don't apply changes to the report until you know it is fully loaded, rendered, and available. To do this you can make use of the rendered event.
var report = powerbi.embed(reportContainer, config);
report.on("rendered", function (event) {
    // The report is fully rendered here, so it is now safe to apply
    // filters, update settings, or otherwise manipulate the report.
});

Related

Generate JMeter custom pie chart

I'm looking for a way to display a pie chart/table after running 100 tests, but all the available built-in reports seem to accumulate data on time spent per sample and controller, and on performance metrics.
Although the tests mostly check performance and some metrics are useful, we also need statistics on the actual response data.
Each http request queries service for items availability per product.
After the tests finish, we would also like a pie chart to appear with 3 sections:
Available
Low on stock
Unavailable
Now, I found the "Save Responses to a file" listener, but it generates separate files, which isn't very good. Also, with "View Results Tree" we can specify a filename where responses will be dumped.
We don't need the whole response object, and preferably we wouldn't write anything to disk at all.
And then, how do we actually visualize that data in JMeter after the tests complete? Would it be the Aggregate Graph?
So to recap: while threads run, each value from the JSON response object (parsed with JSON Path) should be remembered somewhere, and after the tests complete these values should be grouped and displayed as a pie chart.
I can think only of the Sample Variables property: add the next line to the user.properties file:
sample_variables=value1,value2,etc.
Next time you run the JMeter test in command-line non-GUI mode, the .jtl result file will contain as many extra columns as you have Sample Variables, and each cell will contain the respective value of the variable for each Sampler.
You will be able to use Excel or an equivalent to build your charts. Alternatively, you could use the Backend Listener and come up with a Grafana dashboard showing what you need.
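As a minimal sketch of the aggregation step, assuming you exported a sample variable named `availability` (a hypothetical name; substitute whatever you put in `sample_variables`), counting the pie-chart categories from the .jtl could look like:

```python
import csv
from collections import Counter

def availability_counts(jtl_path, column="availability"):
    # Count each availability category in a JMeter .jtl (CSV) result file.
    # "availability" is a hypothetical sample variable exported via
    # sample_variables=availability in user.properties.
    with open(jtl_path, newline="") as f:
        return Counter(row[column] for row in csv.DictReader(f))

# counts = availability_counts("results.jtl")
# counts then maps "Available" / "Low on stock" / "Unavailable" to totals,
# ready to feed into any pie-chart tool.
```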

Trains: Can I reset the status of a task? (from 'Aborted' back to 'Running')

I had to stop training in the middle, which set the Trains status to Aborted.
Later I continued it from the last checkpoint, but the status remained Aborted.
Furthermore, automatic training metrics stopped appearing in the dashboard (though custom metrics still do).
Can I reset the status back to Running and make Trains log training stats again?
Edit: When continuing training, I retrieved the task using Task.get_task() and not Task.init(). Maybe that's why training stats are not updated anymore?
Edit2: I also tried Task.init(reuse_last_task_id=original_task_id_string), but it just creates a new task, and doesn't reuse the given task ID.
Disclaimer: I'm a member of Allegro Trains team
When continuing training, I retrieved the task using Task.get_task() and not Task.init(). Maybe that's why training stats are not updated anymore?
Yes, that's the only way to continue the same exact Task.
You can also mark it as started with task.mark_started(). That said, the automatic logging will not kick in, as Task.get_task() is usually used for accessing previously executed tasks, not for continuing them (if you think the continue use case is important, please feel free to open a GitHub issue; I can definitely see the value there).
You can also do something a bit different and just create a new Task that continues from the last iteration at which the previous run ended. Notice that if you load the weights file (PyTorch/TF/Keras/Joblib), it will automatically be connected with the model that was created in the previous run (assuming the model was stored in the same location, or that you have the model on https/S3/GS/Azure and you are using trains.StorageManager.get_local_copy())
from trains import Task
import torch

previous_run = Task.get_task(task_id='<aborted-task-id>')  # ID of the aborted run
task = Task.init('examples', 'continue training')
task.set_initial_iteration(previous_run.get_last_iteration())
torch.load('/tmp/my_previous_weights')  # loading the weights connects the previous run's model
BTW:
I also tried Task.init(reuse_last_task_id=original_task_id_string), but it just creates a new task, and doesn't reuse the given task ID.
This is a great idea for an interface to continue a previous run, feel free to add it as GitHub issue.

How to automatically log CAN data when a test fails

I'm using CAPL to build automated tests. I want to log CAN bus data at a certain point (such as the point where a test fails), and I need the CAN bus data from the 10 seconds before that point (not after it).
Can someone help me?
Open Analysis -> Measurement Setup
Right-click on the right side of the Measurement Setup window
Add a new logging block
Select CAPL as the toggle on/off method
Set the pre-trigger time to 10 000 ms, or however much you need
Configure the output file as you need it
Whenever a test fails, trigger the block with startLogging(nameOfYourLoggingBlock)
Remember that you need to end the logging at some point with stopLogging(nameOfYourLoggingBlock)
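Putting the last two steps together, a minimal CAPL sketch might look like the following (the logging block name "FailLog" and the failure hook are assumptions; adapt them to your logging block name and test framework):

```capl
// Call this from wherever your test verdict turns to "fail".
// "FailLog" is the name assumed for the logging block in Measurement Setup;
// its 10 000 ms pre-trigger captures the bus traffic before the failure.
void logOnTestFail()
{
  startLogging("FailLog");    // pre-trigger keeps the previous 10 s of CAN data
  testWaitForTimeout(1000);   // optionally record a short time after the failure too
  stopLogging("FailLog");
}
```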

BIRT report: first report very slow

I have a problem. First of all, my application is working properly and my reports are generated correctly.
Now I have a small concern: the first report generated takes more than 45 s.
Subsequently, if I run the same report, or any other report, it is done in 2-3 seconds.
Do you have any idea how to solve this problem for the first report?
Thank you
Obviously, initialization takes most of the time.
You'll have to figure out which part of the initialization.
I think you'll have to add logging with timestamps at several places in the code, or use profiling, to see how long each part takes:
1) Starting up the Java process and loading the BIRT classes
2) Starting up the BIRT report engine
3) Loading resources inside the report (e.g. JS files and libraries)
4) Connecting to the DB (in particular, if you are using connection pooling)
5) DB initialization (often the DB caches data very efficiently, so subsequent SQL statements selecting the same or similar data can run very fast)
For example, you could add log statements inside the initialization event of the report itself, inside the beforeOpen and afterOpen events of the Data Source, inside the beforeOpen and afterOpen events of the Data Sets, and inside your Java code calling the reports.

Plotting JMeter test results dynamically in HTML chart

I want to be able to run a JMeter test for thousands of users and plot the results dynamically using a jQuery-based charting library like Highcharts, i.e. the response from every virtual user must be plotted in near real time to show a stock-ticker-like chart which gets updated dynamically. I am OK with running the test in non-GUI mode.
I have tried the following,
- Run the JMeter test in non-GUI mode and write the response to a file. What I notice is that the results get written to the file in a buffered manner, which means that even if I have a program monitoring the file for new records, I won't get them in real time.
I am looking for suggestions on how this can be achieved:
1. Do I need to write a custom JMeter plugin? In this case, how would it work?
2. Is there some listener which can give me the desired data?
3. Can this be done via a post-processor?
I have seen real time reporting being done on some cloud based load testing websites which use JMeter, so I'm sure it can be done, but how?
There is some buffering when writing to a file, but it shouldn't be more than a few seconds' worth of data.
I'd go with the route of reading the log file into something like StatsD using something like Logstash, and from there you can probably find an existing solution that pushes it to a chart.
You can disable buffering by adding this line to the user.properties file:
jmeter.save.saveservice.autoflush=true
This slightly impacts performance for tests that have few or no pauses.
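With autoflush enabled, tailing the results file becomes workable. As a sketch (the file path and column names are whatever your .jtl actually contains; `max_rows` is only a test convenience, a real monitor would run unbounded), a small follower that yields each new result row could look like:

```python
import csv
import time

def follow_jtl(path, max_rows=None, poll_seconds=0.1):
    # Tail a JMeter .jtl (CSV) results file, yielding each appended
    # result row as a dict keyed by the header columns. max_rows bounds
    # the run for testing; a live monitor would leave it as None.
    yielded = 0
    with open(path, newline="") as f:
        header = None
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_seconds)  # wait for JMeter to append more data
                continue
            fields = next(csv.reader([line]))
            if header is None:
                header = fields  # the first .jtl line is the column header
                continue
            yield dict(zip(header, fields))
            yielded += 1
            if max_rows is not None and yielded >= max_rows:
                return
```

Each yielded dict can then be pushed over a WebSocket (or similar) to the browser-side chart as the test runs.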
To do what you want you could use this kind of library:
http://www.chartjs.org/