I have a report which, when rendered on its own, has the following performance times (taken from the ExecutionLogStorage table; values are in milliseconds):
TimeDataRetrieval: 6776
TimeProcessing: 142
TimeRendering: 30
When this report is used as a sub-report which is repeated 34 times, the performance of the overall report comes out as follows:
TimeDataRetrieval: 9255
TimeProcessing: 187709
TimeRendering: 35
Furthermore, the memory consumption of my IIS process (which hosts the ReportViewer web control) goes up by several hundred MB.
Are these performance issues inherent to sub-reports, or is there something wrong with my report?
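For reference, a rough sketch of how timings like the ones above can be pulled from the ExecutionLogStorage table; the connection string is only an assumption, so point it at your own ReportServer database:
using System;
using System.Data.SqlClient;

class ExecutionLogCheck
{
    static void Main()
    {
        // Assumed connection string; adjust server and database names for your instance.
        var connectionString = "Server=.;Database=ReportServer;Integrated Security=true";
        var sql = @"SELECT TOP 20 TimeStart, TimeDataRetrieval, TimeProcessing, TimeRendering
                    FROM ExecutionLogStorage
                    ORDER BY TimeStart DESC";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // The three timing columns are reported in milliseconds.
                    Console.WriteLine("{0}: retrieval={1} ms, processing={2} ms, rendering={3} ms",
                        reader["TimeStart"], reader["TimeDataRetrieval"],
                        reader["TimeProcessing"], reader["TimeRendering"]);
                }
            }
        }
    }
}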
We have an 11g ORACLE Forms/Reports application. Some reports have multiple images which work fine in 11g, but when we move them to our new 12c environment, the report hangs.
Experimentation shows that when all images bar one are removed, the report runs OK. You can introduce multiple copies of the same image into the report and it will still run, but if you have a mix of different images, it hangs. It does not matter whether the images are linked in or inserted, or in what order or where they appear; it still fails.
By hanging, I mean that the report server says that the report is formatting page X (where X is the page containing the second image), and you cannot cancel the report. Trace logs show that the failure occurs when it is processing an image.
Since I have seen no complaints about 12c images on the web, I assume it is not an ORACLE bug, and I also assume that such a restriction cannot be a feature. I assume that some setting is restricting the number of images which can be processed. Does anyone know what that setting is and how to lift it?
I don't have a solution, but I do have a few suggestions:
Recompile the report using the "all" option (Ctrl + Shift + K); sometimes that works magic.
I've noticed similar behavior when images are (too) large; try to make them smaller, for example by reducing their quality. What counts as "too large" depends: in my case, a report with several thousand pages displayed relatively small images (around 20 KB each), but multiplied by the number of pages it just didn't work. Reducing the images to ~4 KB fixed it. I'm not saying the same will work for you, but if possible, try it.
I agree - the fact that the same report works fine on 11g drives you crazy, huh... I sincerely hope that recompiling will help, as it is the simplest option I can think of.
I managed to find a near identical report in a near identical application, which worked. By creating a report with 2 images which could run in either application, and changing the applications so they used the same report server, I found that the test report worked in 1 app but hung in the other. The only difference then lay in how the report was submitted. For the hanging report, I rewrote the submission code from scratch and the report worked fine. I still don't know the critical difference, but now it doesn't matter.
My website uses the following optimizations to free up the main thread and optimize the content load process:
- Web workers for loading async data as well as images.
- Defer images until all the content on the page has loaded first
- Typekit webfontloader for optimized font loading
Now, since I completely switched over to web workers for all network (async) related tasks, I have noticed an increased occurrence (by ~50%) of the following errors:
But my score seems to be unaffected.
My question is, how accurate is this score?
P.S.: My initial data is huge, so styling and rendering take ~1300 ms and ~1100 ms respectively [a required constraint].
After doing a few experiments and glancing through the Lighthouse (the engine that powers PSI) source code, I think the problem is that once everything has loaded (the page load event), Lighthouse only runs for a few seconds before terminating.
However, your JS runs for quite some time afterwards, with the service workers performing some tasks nearly 11 seconds after page load on one of my runs (probably storing some images which take a long time to download).
I am guessing you are getting intermittent errors because sometimes the CPU goes quiet for long enough to calculate JS execution time and sometimes it does not (depending on how long the gaps are between the tasks it is performing).
To see what I mean, open developer tools in Chrome -> Performance tab -> set CPU throttling to 4x slowdown (which is what Lighthouse emulates) and press 'record' (top left). Now reload the page and, once it has fully loaded, stop recording.
You will get a performance profile with a 'Main' section that you can expand to see the main-thread load (the main thread is still used despite the worker, as it needs to decode the base64-encoded images; I'm not sure whether that can be shifted onto a different thread).
You will see that tasks continue to use CPU for 3-4 seconds after page load.
It is a bug in Lighthouse, but at the same time something to address on your side, as it is symptomatic of a problem (why base64-encoded images? That is where you are taking the performance hit on what would otherwise be a well-optimised site).
I'm using Nopcommerce 2.40.
I've run a load test with 5 virtual users for 1 minute against ErrorPage.htm, which is a simple HTML page, and found that it is taking 25 to 35% of the CPU.
I think this is going to be a serious performance problem if a simple HTML page is taking this much CPU. There is no need to check other pages, and it does not matter whether you are using output caching or other caching to improve performance.
What could be the reason behind this?
Serving that page still executes several SQL commands. There is a fix available here:
http://nopcommerce.codeplex.com/SourceControl/changeset/changes/f693be2bc2e0
It adds .htm and .html pages to the list of routes to ignore.
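I haven't reproduced the changeset verbatim, but the change is along these lines in the route registration (a sketch only; the actual fix may differ):
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // Keep requests for static .htm/.html files out of the MVC pipeline,
        // so they are served directly without hitting the database.
        routes.IgnoreRoute("{resource}.htm/{*pathInfo}");
        routes.IgnoreRoute("{resource}.html/{*pathInfo}");

        // ... existing nopCommerce route registrations follow ...
    }
}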
I am measuring page rendering speed by starting a Stopwatch at OnBeginRequest and stopping it at OnResultExecuted, thereby measuring the entire page render cycle.
I get the following time stamps during rendering:
0 ms - OnBeginRequest
+1.1 ms - OnActionExecuting
+2 ms - OnActionExecuted
+3 ms - OnResultExecuted
The latter three timestamps are of course application-specific, but I am wondering: what happens during the 1.1 milliseconds between the moment the app receives the request and the moment the action method gets control?
How can I reduce this time?
What is the maximum rendering speed you ever obtained with MVC.NET (pages per second) and how did you do it?
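For context, here is a rough sketch of the kind of measurement described above, assuming MVC 3-style global filters; the class and item-key names are purely illustrative, not the asker's actual code:
using System;
using System.Diagnostics;
using System.Web;
using System.Web.Mvc;

public class MvcApplication : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Start timing as soon as the request arrives.
        HttpContext.Current.Items["RequestStopwatch"] = Stopwatch.StartNew();
    }
}

// Register globally with GlobalFilters.Filters.Add(new TimingFilter());
public class TimingFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        Log("OnActionExecuting", filterContext.HttpContext);
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        Log("OnResultExecuted", filterContext.HttpContext);
    }

    private static void Log(string stage, HttpContextBase context)
    {
        // Report the elapsed time since the request entered the pipeline.
        var stopwatch = (Stopwatch)context.Items["RequestStopwatch"];
        Debug.WriteLine(string.Format("{0}: +{1:F1} ms", stage, stopwatch.Elapsed.TotalMilliseconds));
    }
}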
Many things happen: routes are parsed, the controller is located and instantiated, and the action method is called. Make sure that you are running in Release mode:
<compilation debug="false" />
so that your measurement results are more realistic. In reality the time between receiving the request and invoking the controller action is never a bottleneck. It is the time spent inside the controller action that you should focus on reducing. This is where your application might gain a real performance boost. There are different techniques for improving performance and a popular one is to use caching.
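For example, output caching a controller action looks roughly like this (the controller name and attribute values are only illustrative):
using System.Web.Mvc;

public class HomeController : Controller
{
    // Cache the rendered result of this action for 60 seconds (value is arbitrary),
    // so repeated requests are served from the output cache instead of re-rendering.
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View();
    }
}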
According to Gu:
Today’s ASP.NET MVC 3 RC2 build contains many bug fixes and performance optimizations. Our latest performance tests indicate that ASP.NET MVC 3 is now faster than ASP.NET MVC 2, and that existing ASP.NET MVC applications will experience a slight performance increase when updated to run using ASP.NET MVC 3.
Time taken to render a page and the number of requests per second are different values that are not directly correlated (similar to FPS versus time per frame in game development; see here), especially in a multi-threaded environment.
Personally, on my machine an empty MVC application renders the default controller and view in 0.8-1.1 ms. Of course the route collection is almost empty, so that presumably saves a lot of time. There are a few optimizations you can make (you can find them on the net easily); one of the primary ones is to clear the view engines and add back just the view engine you are using, which saves a round trip to the hard drive on every request:
ViewEngines.Clear();
ViewEngines.Engines.Add(new WebFormViewEngine());
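If the site uses Razor views (MVC 3 and later) rather than Web Forms views, the equivalent would presumably be:
ViewEngines.Clear();
ViewEngines.Engines.Add(new RazorViewEngine());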
As for real websites, I was able to get a real-world MVC application to serve more than 2000 requests per second. One thing you might want to try is to put your Temporary ASP.NET Files folder and your website folder on a RAM drive, since MVC and IIS do hit the physical assembly files on every request, but realistically the gain is too small to be noticeable or worth anyone's time.
If you look at the source code here, the page generation time is about 1 ms (not entirely accurate, since it's measured in the middle of the view, but very close nevertheless). That server is running on a RAM drive. You can speed things up a little more by moving the ASP.NET temp files to a RAM drive as well, but I couldn't get it under 0.8 ms no matter what.
I have a report where the HTML generation for a preview takes about 39 seconds. When I try to preview the report as PDF, it's not done after 4 minutes. Is that normal? My other reports show a time difference of about 50% at most.
If it's not normal, how can I speed up the PDF report generation?
Thanks!
(BIRT 2.1.3, RCP Designer)
I would say a 6x increase in generation time for PDF over HTML is not to be expected.
Most of my reports take no more than twice as long to export to PDF than they do to HTML. XLS export is in between HTML and PDF.
I was able to gain some optimisation in execution time by splitting up some data sets and combining others. Some experimentation may provide you with good results.
However, a key thing to note was that my optimisation was spread across all export types, not just limited to PDF.
That isn't really much help, but it gives you something to try.