With my new C++/CLI DLL for deserializing HTML into OO elements ready, I went ahead and tested its performance. The benchmark was a random Stack Overflow page saved to a file and processed 1000 times, with the DLL called from a VB.NET test app. In debug mode it takes ~15 ms per page and in release mode ~2.9 ms, but when I ran the performance profiler on the release build I surprisingly got ~1.5 ms.
How is that possible? That is nearly half the time. Is the profiler enabling some performance-boosting compiler options or something?
In my Delphi application, when I execute TOpenDialog, it loads a lot of modules before showing the dialog.
This can take a while (2-3 seconds).
On subsequent runs it is much faster.
Is there a way to preload these modules when launching the application?
The issue is not that the modules take a long time to load. The issue is that the Delphi debugger instruments each new module load so it can report it in the Output window.
The slowness you observe the first time the file dialog is shown is an artefact of running under the IDE debugger. If you run your program without the debugger, the file dialog's initial load time is greatly reduced.
Whilst you could track down the names of the modules that are loaded and force them to load when the process starts, I would absolutely recommend that you do not. The set of modules loaded is likely to vary from system to system, so attempting this could easily produce versions of your software that fail to start. And even if you do it only for your private debug builds, you will be guaranteeing slower load times every time you debug the program.
I must work with a Visual Studio (2017) Solution that contains 654 projects, and counting. The projects are a mixture of C++ and C# projects - possibly 2/3 C++.
The problem is, VS2017 (we're already running 15.8) is highly unstable at this project count, but for some tasks I need to open the whole solution.
One can (and should) question the design, but please not here. Are there any viable tricks to make working with such a sln bearable?
The problems we have are:
After it has fully loaded, it's sluggish as hell, even on our beefy dev machines. It hangs often.
It crashes many times a day. (We isolated a few cases that reliably crash it, like opening the C++ settings dialog, but it's still unstable.)
Crashes are often observed when VS peaks at ~2.6 GB RAM.
Not Problems:
Solution load times: The solution is loaded in a decent amount of time. We don't need to optimize for this at the moment.
Compilation times: Devs don't do full-solution builds anyway. (But some tasks require working on your corner of the code within the full context of the whole solution.)
I already tried disabling VS Intellisense, but it didn't help. Disabling our VisualAssistX plugin also didn't really help.
Historically the VS team has always said that they're going to fix the problem of VS loading too much at some indefinite point in the future. That's also one reason why they haven't made it 64-bit yet. Now that they've disabled the selective loading API, you're pretty much at their mercy.
For older VS versions, there's Funnel.
It allows you to selectively load a subset of the projects, automatically loading dependencies. The added benefit is that refactoring, search etc. only works in context of loaded projects, making it much faster. You can also save and organize your filters, making it easier to switch between different subsets.
This is my first time using the Microsoft Visual Studio 2010 Performance Profiler. After the application program finishes, I use the CPU Sampling method for profiling and create a performance session, then launch profiling. The problem is that each time I profile the same program I get different sample counts. The following picture illustrates my problem:
In the above picture, the baseline file and the comparison file come from the same application program. I expected these two profile files to be the same, but in reality they are not. I was wondering what I could do to obtain consistent results. Thanks!
That's just not possible when you use the sampling method to profile your program. It works by periodically interrupting your program and finding out what it is doing at that instant. Inevitably, the odds that it will interrupt your program at exactly the same places on repeated profiling runs are very low. The data you get is only statistically relevant: it is an estimate, useful for finding the hotspots in your code, which is what you should always check first when you profile.
You'll need the instrumentation method to get hard numbers for how many times a function executes; the profiler then reliably records every function entry. The biggest problem with instrumentation is that it drastically slows down your program.
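The statistical nature of sampling is easy to demonstrate with a small, self-contained simulation (plain Python, not tied to the VS profiler itself; the 70/30 time split is an arbitrary assumption). A sampler that interrupts at random instants reports slightly different counts on every run, yet every run still identifies the same hotspot:

```python
import random

def sample_profile(seed, samples=1000):
    """Simulate a sampling profiler: each periodic interrupt catches the
    simulated program in hot() ~70% of the time and in cold() ~30%."""
    rng = random.Random(seed)
    counts = {"hot": 0, "cold": 0}
    for _ in range(samples):
        counts["hot" if rng.random() < 0.7 else "cold"] += 1
    return counts

# Two profiling sessions of the same "program" will almost always report
# slightly different sample counts...
run1 = sample_profile(seed=1)
run2 = sample_profile(seed=2)
print(run1)
print(run2)

# ...but both agree on where the hotspot is, which is what sampling is for.
assert run1["hot"] > run1["cold"]
assert run2["hot"] > run2["cold"]
```

An instrumented profiler, by contrast, would count every call exactly and give identical numbers on every run, at the cost of the slowdown described above.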
Background info on these profiling methods is available in this MSDN Library article.
I mostly don't wait for the results or IntelliSense because it is too slow.
But sometimes, when I am not sure of a type, I would like it to be there, and it takes very long for the inference to run.
However, my CPU is only at around 2% the whole time.
Is it possible to make Visual Studio more aggressive with my computer's resources?
Update
I use Visual Studio 11 Beta.
While IntelliSense does appear immediately after loading a solution, it takes a while after modifying the code (around 20-30 seconds).
The F# VS implementation is not stingy about using resources to give you feedback in the editor. If any IntelliSense info is out of date, it will happily burn one full core (or a little more) trying to catch back up. And if any stale information is available in the meantime, it should serve up the stale results. A wait of tens of seconds is unexpected for any 'warm' solution.
(If this is a relatively new install, you might run `ngen eqi` (short for `executeQueuedItems`) from a VS command prompt to ensure NGEN has finished its queued work after the installation; the F# compiler components are slow unless they have been NGEN'd, and this happens in the background after the VS install.)
If you see IntelliSense this slow, I'd be curious to know more about your solution (number of files, size of files, using type providers?, ...) to identify the problem.
I have a BIRT report with performance problems: it takes approximately 5 minutes to run.
At the beginning I thought the problem was the database: this report uses a quite complex SQL Server stored procedure to retrieve its data. After a lot of SQL optimization this procedure now takes ~20 seconds to run (in the management console).
However, the report itself still takes too much time (several minutes). How do I identify the other bottlenecks in BIRT report generation? Is there a way to profile the entire process? I'm running it using the www viewer (running inside Tomcat 5.5), and I don't have any Java event handlers, everything is done using standard SQL and JavaScript.
I watched the webinar "Designing High Performance BIRT Reports"; it has some interesting considerations but it didn't help much.
As I write this answer the question is getting close to 2 years old, so presumably you found a way around the problem. No one has offered a profiler for the entire process, so here are some ways of identifying bottlenecks.
Startup time - about a minute can be spent here.
Running a couple of reports one after the other, or starting a second while the first is still running, can help diagnose issues.
SQL query run time - good solutions are mentioned in the question.
Any SQL trace and performance testing will identify issues.
Building the report - this is where I notice the lion's share of the time being taken. Run a SQL trace while the report is being created. Even a relatively simple table with lots of data can take around a minute to configure and display (HTML via Apache Tomcat) after the SQL trace indicates the query is done.
Simplify the report, or create a clone with fewer graphs or tables, and run with and without pieces to see if any of them makes a notable difference.
Modify the query to bring back fewer records; fewer records are easier to display.
Delivery method - PDF, Excel and HTML can each have different issues.
Try rendering the report to different formats.
If one is significantly slower, try different emitters.
For anyone else having problems with BIRT performance, here are a few more hints.
Profiling a BIRT report can be done using any Java profiler - write a simple Java test that runs your report and then profile that.
As an example I use the unit tests from the SpudSoft BIRT Excel Emitters and run JProfiler from within Eclipse.
The problem isn't with the difficulty in profiling it, it's in understanding the data produced :)
Scripts associated with DataSources can absolutely kill performance. Even a script that looks as though it should only have an up-front impact can really slow things down. This is the biggest performance killer I've found (so big that I rewrote quite a chunk of the Excel Emitters to make it unnecessary).
The emitter you use has an impact.
If you are trying to narrow down performance problems always do separate Run and Render tasks so you can easily see where to concentrate your efforts.
Different emitter options can impact performance, particularly with the third party emitters (the SpudSoft emitters now have a few options for making large reports faster).
The difference between Fixed-Layout and Auto-Layout is significant, try both.
Have you checked how much memory you are giving Tomcat? You may not be assigning enough. A quick test is to launch the BIRT Designer with additional memory assigned (for example via the JVM's `-Xmx` option) and then run the report from within the Designer.