This is a school project. We built a JavaFX GUI to take inputs for an algorithm. The algorithm runs in about 45 minutes in stand-alone mode, but when it is called from the JavaFX GUI the running time doubles. Any idea what might cause this? Also, the JavaFX GUI consumes about 400 MB of memory. Is this normal?
Any help is appreciated.
Yulian
The algorithm and the GUI draw calls are competing for memory, and this will slow down both the algorithm and the GUI.
Allocate more memory to the Java runtime (for example with the -Xmx option) to fix this.
There are a few subtle things to keep in mind during development if you need to optimize a JavaFX application performance-wise:
1. Many people think they should write the entire application in pure JavaFX code. That may be fine for small test applications, but a real-world application should separate the I/O portion, the business logic and all other non-UI code into different layers.
2. Use the JavaFX asynchronous APIs with callbacks (e.g. javafx.concurrent.Task) so that this code executes outside the event dispatch thread.
I am trying to decide what application paradigm to use for my iOS app built with Appcelerator. Many people describe the Tweetanium way, i.e. a single context, as the best approach.
I think I am going to use it, but I have a few questions about it.
Since I include all "windows" on the first page, does that mean the application has to load all of its windows at app start?
Will this paradigm really be much faster and more memory-conservative than the "normal" way used by, for example, the Kitchen Sink?
What is the downside of using Tweetanium's way of doing things?
Is it suitable for complex apps?
Thankful for all input!
Short version: Yes :)
Longer version:
Multi-context apps (like the Kitchen Sink) are also fine generally speaking, but with larger apps you run into the following two problems:
1.) Sharing data between windows/contexts within the app
2.) Being unsure whether the code for a given window has already been run
You can also (potentially) keep a pointer to a UI object created in one context after the window associated with that context has been closed, which under some circumstances can cause your app to leak memory. Single context is easier and will ultimately get you into less trouble. Also, if your app is large, be sure to load scripts only as you need them, not all up front.
I am currently trying to automate a Windows Forms application using the Microsoft UI Automation Library and C#, but I am running into serious performance problems. Identifying single elements with a PropertyCondition, or iterating over all elements of a window, takes very long (up to 4 minutes). Once I have an AutomationElement, everything is fine (e.g. GetCurrentPropertyValue responds within 100 ms).
The poor performance only applies to one application. I don't have access to the source, but if something needs to be changed or checked, I can talk to the responsible programmer. As far as I know, some events (e.g. paint) were overridden in the application. A typical window of the application contains about 100 elements, which are found by the FindAll method.
I also tried the COM interface of the UI Automation library, which is about twice as fast, but that does not really solve the problem.
Does anyone have an idea how to solve this problem, or has anyone experienced similar behavior?
We found the answer when we took a closer look at the main loop. In most cases Application.Run is used to start the main window and run the application, but for some reason the following code was used instead:
[...]
MainForm.Show();
while (DoStop == false)
{
    System.Threading.Thread.Sleep(10);
    Application.DoEvents();
}
[...]
As the Microsoft UI Automation Library uses window messages, all of the System.Threading.Thread.Sleep(10) calls added up and made element detection extremely slow. This does not happen if Application.Run is used.
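For reference, a minimal sketch of the conventional startup; MainForm stands in for the application's main window class:

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        // Application.Run pumps window messages efficiently and blocks
        // until MainForm closes; no Sleep/DoEvents polling loop is needed.
        Application.Run(new MainForm());
    }
}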
Suppose you are making a GUI application, and you need to load/parse/calculate a bunch of things before the user can use a certain tool, and you know beforehand what you have to do.
It then makes sense to start doing these calculations in the background over a period of time (as opposed to "in one go" at start-up, or exactly when they are needed). However, doing too much in the background will hurt the responsiveness of the application.
Are there any standard practices for this kind of approach? Perhaps ways to detect low CPU load, or the user being idle, and execute the code then? Any arguments against this type of approach?
Thanks!
Without knowing your app or your audience, I can't give you specific advice.
My main argument against the approach: unless you have a high-profile application that will see a lot of use by non-programmers, I wouldn't bother. It sounds like a lot of busy work that could instead be spent developing or refining features that actually let people do new things with your app.
That being said, if there is a reason to do it, there is nothing wrong with lazy-loading data.
The problem with waiting until idle time is that some people have programs like SETI@home installed on their computer, in which case it has little to no idle time. If loading at full throttle kills the responsiveness of your app, you could try injecting sleeps. This is what a lot of video games do when you specify a target frame rate, to avoid pegging the CPU. It also gets the data loaded sooner than waiting for idle time.
If parts of your app depend on the data and the user invokes one of those parts, you will have to abandon the lazy-loading approach and go back to a full CPU/disk-taxing load (see the sketch below). If that takes a long time, or makes the app unresponsive, you could display a loading dialog with a progress bar.
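A rough sketch of that throttled loader in C#; the chunk type, LoadChunk and the Boost flag are made up for illustration:

using System.Collections.Generic;
using System.Threading;

class BackgroundLoader
{
    // Flip this to true when the user invokes a feature that needs the
    // data, so loading switches from throttled to full speed.
    public volatile bool Boost;

    public void Run(IEnumerable<string> chunks)
    {
        foreach (var chunk in chunks)
        {
            LoadChunk(chunk);        // the actual expensive work
            if (!Boost)
                Thread.Sleep(10);    // yield the CPU between chunks
        }
    }

    void LoadChunk(string chunk)
    {
        // parse/calculate one unit of work here
    }
}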
If your target audience tends to have multicore CPUs, and your app's startup and background-initialization tasks won't contend for the same other resources (e.g. disk I/O, network, ...) to the point of creating a new bottleneck, you might want to kick off a background thread to perform the initialization tasks (or even one thread per task, if several of them can run in parallel). That makes more efficient use of multicore hardware.
You didn't specify your target platform, but it's exceedingly easy to achieve this in .NET and so I have begun doing it in many of my desktop apps.
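For instance, a minimal .NET sketch using thread-pool tasks; the two initialization steps are hypothetical placeholders:

using System.Threading.Tasks;

static class StartupInit
{
    public static Task DbReady;
    public static Task CacheReady;

    // Call early in Main(): each independent task runs on its own
    // thread-pool thread, so a multicore machine runs them in parallel.
    public static void Begin()
    {
        DbReady    = Task.Run(() => OpenDatabase());
        CacheReady = Task.Run(() => WarmCaches());
    }

    // Later, just before a feature that needs the database:
    //   StartupInit.DbReady.Wait();

    static void OpenDatabase() { /* placeholder */ }
    static void WarmCaches()   { /* placeholder */ }
}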
My application has several large forms with lots of images, which dramatically increases the size of the built executable. Over time the startup performance has become sluggish, and it doesn't seem to be getting any better.
If I put all of the forms besides the main form in a separate dll, would it alleviate some of the pressure put on the application during startup?
I'd test it myself, but I have a LOT of forms and I don't want to do it unless someone can confirm that such an action will prove useful.
Many factors can affect startup performance. Have you used any tools to prove that it's the images?
For a start, go through these tips:
http://devcomponents.com/blog/?p=361
And consider using multithreading to load bigger objects in the background.
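For example, a sketch of loading a heavyweight image off the UI thread; the control and file name are placeholders:

using System;
using System.Drawing;
using System.Threading.Tasks;
using System.Windows.Forms;

public class MainForm : Form
{
    readonly PictureBox banner = new PictureBox { Dock = DockStyle.Top };

    public MainForm()
    {
        Controls.Add(banner);
        // Show the form immediately; decode the image on a thread-pool
        // thread and assign it back on the UI thread when it is ready.
        Shown += (s, e) =>
            Task.Run(() => Image.FromFile("big-banner.png"))
                .ContinueWith(t => banner.Image = t.Result,
                              TaskScheduler.FromCurrentSynchronizationContext());
    }
}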
I'm not quite sure about that, but if I were you I would use a profiler when it comes to improving performance.
Before guessing at what's wrong, I consult it and work my way up from there, because it tells me which methods and classes cost the most in my code.
Another tip that may be useful: use NGEN to generate a precompiled native image of your assemblies (e.g. ngen install YourApp.exe). This reduced my application's startup time from 2 minutes to under 10 seconds on a low-end thin client.
I'm wondering whether, if you were to use MEF and lazy loading, you could instantiate each module (form) only when you actually need it, by calling .Value.
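The core of that idea, sketched with plain Lazy<T> and a made-up form class (MEF wires up the same pattern via imports of Lazy<T> parts):

using System;
using System.Windows.Forms;

class SettingsForm : Form { /* a heavyweight form, stubbed here */ }

static class Forms
{
    // The SettingsForm constructor (and any image resources it pulls in)
    // only runs on the first access to .Value, not at application startup.
    public static readonly Lazy<Form> Settings =
        new Lazy<Form>(() => new SettingsForm());
}

// Usage, e.g. in a menu click handler:
//   Forms.Settings.Value.Show();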
There are a couple of things I do with applications containing a lot of forms:
Create a UI .exe: basically only my forms.
Create a backend .dll: everything that does the work behind the UI.
Are the images actually included in the .dll? If so, I would put the images into a .dll separate from the UI.
Given that the images are for toolbars, I wouldn't split them out as resources. I'll still stand by my advice to split into multiple .dlls.
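If you do go the separate-assembly route, here is a sketch of pulling an image out of a resource-only .dll; the assembly and resource names are made up:

using System.Drawing;
using System.Reflection;

static class ImageStore
{
    public static Image LoadToolbarImage(string name)
    {
        // The resource-only assembly is loaded on first use, so its
        // images never weigh down the UI .exe at startup.
        var asm = Assembly.Load("MyApp.Images");
        var stream = asm.GetManifestResourceStream("MyApp.Images." + name);
        // The stream must stay open for the lifetime of the Image,
        // so it is intentionally not disposed here.
        return Image.FromStream(stream);
    }
}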
As others said, profile, don't guess.
Not just any profiler will do.
Here's a user (besides me) who discovered random pausing on his own.
You say the "intense" methods are all in DLLs you don't have source code for; that's typical and normal.
What you need to know is which statements in your code are requesting that the time be spent, and those can't be restricted to CPU-only time.
Most profilers don't tell you this, but random pausing does.
If you're interested, here's a recent discussion of the issues.
What do you do to increase the startup speed (or decrease the startup time) of your Delphi app?
Apart from application-specific measures, is there a standard trick that always works?
Note: I'm not talking about fast algorithms or the like, only about the performance increase at startup, in terms of speed.
In the project options, don't auto-create all of your forms up front. Create and free them as needed.
Try to do as little as possible in your main form's OnCreate event. Rather, move some initialization to a different method and perform it once the form is shown to the user. An indicator that the app is busy, along with a busy mouse cursor, goes a long way.
Experiments show that if you take the exact same application and simply add a startup notification to it, users actually perceive that app as starting up faster!
Other than that you can do the usual things like exclude debug information and enable optimization in the compiler.
On top of that, don't auto-create all your forms. Create them dynamically as you need them.
Well, as Argalatyr suggested, I'm turning my comment into a separate answer:
As an extension to the "don't auto-create forms" answer (which will be quite effective by itself), I suggest delaying the opening of connections to databases, the internet, COM servers and any peripheral devices until you first need them.
Three things happen before your form is shown:
1. All 'initialization' blocks in all units are executed in "first seen" order.
2. All auto-created forms are created (loaded from DFM files, and their OnCreate handlers are called).
3. Your main form is displayed (OnShow and OnActivate are called).
As others have pointed out, you should auto-create only a small number of forms (especially if they are complicated forms with lots of components) and should not put lengthy processing in the OnCreate events of those forms. If, by chance, your main form is very complicated, you should redesign it. One possibility is to split the main form into multiple frames which are loaded on demand.
It's also possible that one of the initialization blocks is taking some time to execute. To verify, put a breakpoint on the first line of your program (the main 'begin..end' block in the .dpr file) and start the program. All initialization blocks will be executed, and then the breakpoint will stop the execution.
In a similar way you can then step (F8) over the main program; you'll see how long it takes for each auto-created form to be created.
Display a splash screen, so people won't notice the long startup times :).
The fastest code is the code that never runs. Quite obvious, really ;)
Deployment of the application can (and usually does!) happen in ways the developer may not have considered. In my experience this generates more performance issues than anyone would want.
A common bottleneck is file access: a configuration or INI file that is required to launch the application may perform well on a developer machine but abysmally in different deployment situations. Similarly, application logging can impede performance, whether because of file access or log file growth.
What I see so often are rich-client applications deployed in a Citrix environment, or on a shared network drive, where the infrastructure team decides that user temporary files or personal files are stored in a location the application has issues with, and this leads to performance or stability problems.
Another issue I often see affecting application performance is the method used to import and export data to files. Commonly in Delphi business applications I see export functions that work off DataSets, iterating and writing to file. Consider the method used to write to the file, consider the memory available, and consider that the 'folder' being written to or read from may be local to the machine or on a remote server.
A developer may argue that these are installation issues, outside the scope of their concern. I usually see many cycles of developer analysis on this sort of issue before it is identified as an 'infrastructure issue'.
The first thing to do is to clear the auto-created forms list (look for Project Options). Create forms on the fly when needed, especially if the application uses a database connection (datamodule) or forms that make heavy use of controls.
Also consider using form inheritance to decrease the exe size (resource usage is minimized).
Decrease the number of forms and merge similar or related functionality into a single form.
Put long-running tasks (opening database connections, connecting to app servers, etc.) that have to be performed on startup into a thread. Any functionality that depends on those tasks is disabled until the thread is done.
It's a bit of a cheat, though. The main form comes up right away, but you're only giving the appearance of a faster startup time.
Compress your executable and any dlls using something like ASPack or UPX. The decompression time is more than made up for by the faster load time.
UPX was used as an example of how to load Firefox faster.
Note that there are downsides to exe compression.
This is just for the IDE, but Chris Hesick wrote a blog post about increasing startup performance under the debugger.