I've noticed that an app I'm currently working on is starting to accumulate a fair number of external native modules (some open source, some I wrote myself).
I know that when a Titanium app starts, the framework verifies each module's license against Appcelerator's servers. Simple logic says that the more modules there are, the more licenses it will have to verify. I'm also guessing that some part of each module has to be loaded at app start.
Modules also add to the size of the application (depending on the module, of course). In my case most of them are simple, yet I'm guessing they share some common "framework" elements that are probably duplicated between modules.
So my question is: should modules be avoided as much as possible? Do they have a performance impact on app load? On the app in general? On app size?
Modules will definitely have an impact on app load time and app size. Generally, if we load the modules in alloy.js, then Ti has to spend more time loading them and preparing them for use by the application. Also, the modules are bundled into the executable (APK or IPA), so the bigger the native library (.so), the bigger the executable.
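One mitigation, sticking with the point about alloy.js above, is to require a module only in the controller that actually uses it rather than globally at startup. A minimal sketch, with "ti.map" standing in for whatever module you use:

    // app/alloy.js: avoid eagerly requiring every module here,
    // since this file runs on every app start
    // var Map = require('ti.map');

    // app/controllers/mapScreen.js: require the module only when
    // this screen is actually opened
    var Map = require('ti.map');

    var mapView = Map.createView({
        region: { latitude: 37.33, longitude: -122.03, latitudeDelta: 0.1, longitudeDelta: 0.1 }
    });
    $.win.add(mapView);

Note that this changes when the loading cost is paid, not whether it is paid; the module is still bundled into the APK/IPA either way.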
Related
I am halfway through migrating an Ionic app to NativeScript. While Googling I found some articles and repos about lazily loading modules in {N}.
e.g. https://github.com/sis0k0/lazyNinjas
I am not really sure why I should care about lazy loading when all the app files are already stored on the device. Does it bring any performance improvement? If so, how?
Should I consider restructuring?
Thanks.
The performance increase you will see is mainly in loading time. Because the modules are lazily loaded, your application needs to parse less code when it first loads, so you get that boost. Note that users really do appreciate apps that open fast and are ready to use as soon as they tap them.
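For reference, lazy loading in a NativeScript Angular app is set up in the routing configuration, much like on the web. A minimal sketch, assuming the older string-based loadChildren syntax and purely illustrative module names:

    // app-routing.module.ts: "home" and "settings" are illustrative feature modules
    import { NgModule } from "@angular/core";
    import { Routes } from "@angular/router";
    import { NativeScriptRouterModule } from "nativescript-angular/router";

    const routes: Routes = [
        { path: "", redirectTo: "/home", pathMatch: "full" },
        { path: "home", loadChildren: "./home/home.module#HomeModule" },
        // This module's code is only loaded and parsed when the user
        // actually navigates to /settings
        { path: "settings", loadChildren: "./settings/settings.module#SettingsModule" }
    ];

    @NgModule({
        imports: [NativeScriptRouterModule.forRoot(routes)],
        exports: [NativeScriptRouterModule]
    })
    export class AppRoutingModule { }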
We're doing a rewrite of our UI using Angular 2. Since this is so new, there are very few resources available, so please excuse me if my question seems silly and has already been answered.
First, a little background. Our product is built out of "modules", which are widgets that can be dropped on a page. Since not all modules are being rewritten to use Angular 2, there will be a mix of non-Angular and Angular modules on one page at the same time. For this reason, we've decided to make each Angular 2 module a stand-alone Angular app.
In the prototype phase all looked fine and dandy; however, fast forward a few months, and with just weeks before the release, someone looked at our page load times and was less than impressed. On my machine, with prod mode enabled, it takes 5 modules about 2.5 seconds to render, with 2 of our most complicated modules taking a second each. The two biggest templates I've got are 32 KB and 80 KB in size, but since their processing time is the same, I suspect linear length doesn't contribute as much as structural complexity, and they are pretty complex. The other 3 modules are much simpler.
From this timeline it seems that a lot of time is spent parsing templates and loading components. So I thought maybe this is because each module is an independent Angular app, and they probably don't share the cached compiled components. So I moved BROWSER_APP_COMPILER_PROVIDERS from the app providers into the platform providers list. This caused all modules to reuse a single RuntimeCompiler (I think).
In the grand scale of things, however, it did not improve the situation much: the total time went down to 2.3 seconds, which makes it hardly worth the hassle.
Now, the modules are mostly wizards. That is, they sit and look pretty until the user taps/clicks on them to engage. So this got me thinking: what if I could stage the template parsing? If I could tell Angular to parse wizard steps on demand, I could lower load time in exchange for some lag when interacting with the module. This is what I'm researching now, but I would love to hear the community's input.
Thank you for reading.
UPDATE: I am running RC.3.
In Angular 2.4 we have a concept known as AOT (ahead-of-time compilation), which can improve performance!
An Angular application consists largely of components and their HTML templates. Before the browser can render the application, the components and templates must be converted to executable JavaScript by the Angular compiler.
You can compile the app in the browser, at runtime, as the application loads, using the just-in-time (JIT) compiler. This is the standard development approach shown throughout the documentation. It's great but it has shortcomings.
JIT compilation incurs a runtime performance penalty. Views take longer to render because of the in-browser compilation step. The application is bigger because it includes the Angular compiler and a lot of library code that the application won't actually need. Bigger apps take longer to transmit and are slower to load.
Compilation can uncover many component-template binding errors. JIT compilation discovers them at runtime, which is late in the process.
The ahead-of-time (AOT) compiler can catch template errors early and improve performance by compiling at build time.
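To make the difference concrete, here is a minimal sketch of the two bootstrap styles, shown as two alternative versions of the same main.ts (file paths and module names are illustrative, and this assumes the NgModule-based setup of the 2.x final releases rather than RC.3):

    // main.ts with JIT: the Angular compiler ships to the browser and
    // turns templates into JavaScript while the app is loading.
    import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
    import { AppModule } from './app/app.module';

    platformBrowserDynamic().bootstrapModule(AppModule);

    // main.ts with AOT: templates were already compiled to code by ngc at
    // build time, so the app boots from the generated factory and skips
    // in-browser compilation entirely.
    import { platformBrowser } from '@angular/platform-browser';
    import { AppModuleNgFactory } from './app/app.module.ngfactory';

    platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);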
An offline (pre)compiler could help, but it is not ready yet.
You could try replacing copy-pasted/duplicated HTML with separate components to reduce template size. A tree-like UI structure built from reusable components might help.
Another way is to use lazy loading: load a wizard only when the user clicks it (possibly combined with preloading).
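Since template compilation on RC.3 isn't easy to stage, one pragmatic approximation is to defer view creation with *ngIf, so a wizard step's view isn't instantiated until the user engages (this defers rendering work rather than template compilation itself). A minimal sketch with made-up component names; depending on your Angular version the child component is declared either in the component's directives array or in an NgModule:

    import { Component } from '@angular/core';

    @Component({
        selector: 'my-wizard',
        template: `
            <button (click)="open = true">Start wizard</button>
            <!-- The step's view is only created once the user clicks -->
            <wizard-step-one *ngIf="open"></wizard-step-one>
        `
    })
    export class WizardComponent {
        open = false;
    }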
I think it would be helpful to submit a CPU profile to the Angular team, so that they can optimize the compiler/parser.
OK, thanks for writing this up. So, if I understand correctly, every widget acts like a separate Angular 2 app, and the base page therefore hosts multiple independent Angular apps. I am assuming you have already taken care of bundling and minification of the production scripts.
Apart from that, I wanted to ask about the SLA of the API serving the requests. Many times I've observed people complaining about UI slowness when it was actually the backend not meeting its SLAs. Check whether your backend SLA is fine.
Next, what is your caching strategy for the Angular modules, and how frequently do they make requests back to the server? There is a catch here: around caching, not all components have been released in a battle-ready state yet. Another thing I wanted to understand is whether you are using RxJS rather than promises for async handling; RxJS can dramatically improve the performance of an Angular 2 app.
Next is reusability of code, say wrapping reusable markup in a directive and injecting it in all the required places. This will cut the compilation cost.
Also, keep checking the Angular 2 milestones and the new fixes being released around performance. I am expecting a final, battle-ready production framework for Angular 2 by December; then we will also migrate our apps to Angular 2. Even at Nasdaq we are currently running on the 1.5.x version. I hope this helps you improve the performance of your app.
So, I've done a bit of reading around the forums about AssetBundles and the Resources folder in Unity 3D, and I can't figure out the optimal solution for the problem I'm facing. Here's the problem:
I've got a program designed for standalone that loads "books" full of .png and .jpg images. The pages are, at the moment, the same every time the program starts. At the start of the scene for any "book", it loads all of those images at once using www.texture and a path. I'm realizing now, however, that this is possibly a non-performant way to access things at runtime: it's slow! It means the user can't do anything for 5-20 seconds while the scene starts and the book's page images load (on non-legendary computers). So, I can't figure out which of these three options would be the fastest:
1) Loading one asset bundle per book (say 20 textures at 1 MB each).
2) Loading one asset bundle per page (1 MB each).
3) Either of the first two options, but loaded from the resources folder.
Which one would be faster, and why? I understand that asset bundles are packaged by Unity, but does this mean the textures inside will be pre-compressed and easier on memory at load time? Does the Resources folder give shorter load times? What gives? As I understand it, the Resources folder loads into a cache, but is it the same cache that the standalone player uses normally, or is this extra, unused space? I guess another issue is that I'm not sure what the difference is between loading things from memory and storing them in the cache.
Cheers, folks...
The Resources folders contain bundled, managed assets. That means they will be compressed by Unity, following the settings you apply in the IDE, and are therefore efficient to load at runtime. You can tailor the compression per platform, which should further optimize performance.
We make extensive use of Resources.Load() to pull assets, and it performs well on both desktop and mobile.
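For context, loading from Resources in code looks something like this (the folder layout and asset names are just an example):

    using UnityEngine;

    public class BookPageLoader : MonoBehaviour
    {
        void Start()
        {
            // Loads Assets/Resources/Books/MyBook/page01.png
            // (note: no "Resources/" prefix and no file extension)
            Texture2D page = Resources.Load<Texture2D>("Books/MyBook/page01");
            if (page != null)
            {
                GetComponent<Renderer>().material.mainTexture = page;
            }
        }
    }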
There is also a special folder, called StreamingAssets, that you can use for bundled un-managed assets. This is where we put the videos we want to play at runtime but don't want Unity to convert to the default Ogg codec. On mobile these play in the native video player. You can also put images in there, and loading them is like using the WWW class: slow, because Unity needs to sanitize and compress the images at load time.
Loading via WWW is slower due to the overhead of processing the asset, as mentioned above, but it lets you pull data from a server or from outside the application "sandbox".
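As a point of reference, loading an image out of StreamingAssets with WWW looks roughly like this (the file name is illustrative, and WWW is the API of this Unity era; newer versions would use UnityWebRequest instead):

    using System.Collections;
    using System.IO;
    using UnityEngine;

    public class StreamingPageLoader : MonoBehaviour
    {
        IEnumerator Start()
        {
            string path = Path.Combine(Application.streamingAssetsPath, "page01.png");

            // On desktop a local file needs the file:// prefix; on Android,
            // streamingAssetsPath already points inside the APK as a URL.
            if (!path.Contains("://"))
            {
                path = "file://" + path;
            }

            WWW www = new WWW(path);
            yield return www;

            // www.texture decodes and uploads the image, which is the slow part
            GetComponent<Renderer>().material.mainTexture = www.texture;
        }
    }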
Only load what you need to display, and implement a background process to fetch additional content while the user is busy going through the first pages of each book. This avoids blocking the UI for too long.
Optimize the images to reduce file size: use TinyPNG if you need transparent images, or stick to compressed JPGs.
Try using power-of-two images where possible; this should speed up the runtime processing a little.
Great answer from Jerome about Resources. To add some additional info for future searches regarding AssetBundles, here are two scenarios:
Your game is too big
You have a ton of textures, say, and your iOS game is above 100 MB, meaning Apple will show a warning to users and prevent them from downloading over cellular. Resources won't help, because everything in that folder is bundled with the app.
Solution: move the artwork you don't absolutely need on first run into asset bundles. Build the bundles, upload them to a server somewhere, then download them at runtime as needed. Now your game is much smaller and won't trigger any scary warnings.
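A minimal sketch of the download-at-runtime part, using the WWW.LoadFromCacheOrDownload API that was current at the time (the URL, version number, and asset name are all illustrative):

    using System.Collections;
    using UnityEngine;

    public class BundleDownloader : MonoBehaviour
    {
        IEnumerator Start()
        {
            string url = "https://example.com/bundles/book-pages";
            int version = 1; // bump this to invalidate the cached copy

            // Downloads the bundle the first time, serves it from the
            // local cache on subsequent runs.
            WWW www = WWW.LoadFromCacheOrDownload(url, version);
            yield return www;

            AssetBundle bundle = www.assetBundle;
            Texture2D page = bundle.LoadAsset<Texture2D>("page01");
            GetComponent<Renderer>().material.mainTexture = page;

            // Release the bundle but keep the assets we already loaded from it
            bundle.Unload(false);
        }
    }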
You need different versions of artwork for different platforms
Alternative scenario: you're developing for iPhone and iPad. For the same reasons as above, you shrink your artwork as much as possible to stay under the 100 MB limit for iPhone. But now the game looks terrible on iPad. What do you do?
Solution: you create an asset bundle with two variants, one for phones with low-res artwork and one for tablets with high-res artwork. In this case the asset bundles can be shipped with the game or served from a server. At run time you pick the correct variant and load from that asset bundle, getting the appropriate artwork without having to if/else everywhere.
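Picking the variant at run time can be as simple as choosing the right suffix before loading (the bundle name, the "hd"/"sd" variant names, and the tablet check are all illustrative and must match how you built your bundles):

    using System.IO;
    using UnityEngine;

    public class VariantBundleLoader : MonoBehaviour
    {
        void Start()
        {
            // Crude device check, just for the sake of the example
            string variant = SystemInfo.deviceModel.Contains("iPad") ? "hd" : "sd";
            string path = Path.Combine(Application.streamingAssetsPath, "artwork." + variant);

            AssetBundle bundle = AssetBundle.LoadFromFile(path);
            Texture2D splash = bundle.LoadAsset<Texture2D>("splash");
            GetComponent<Renderer>().material.mainTexture = splash;
        }
    }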
With all that being said, asset bundles are more complicated to use, poorly documented, and Unity's demos don't always work properly. So seriously evaluate whether you need them.
The question is quite simple:
Can we use JavaFX as a thin client running in a browser while a Java server does most of the work?
I.e.: create the UI and its controllers with JavaFX and have the business logic, database connections, etc. run on a server?
Even if it's possible, would it be complicated to pull off?
Based on the information you've provided, I wouldn't necessarily say that JavaFX is a good fit, but on the other hand I would not worry about the load times. My rationale is: the downside of JavaFX is that you have an additional tech requirement for your clients (a JVM) and you need some form of installation (even if it is just an applet); neither is a factor for HTML5. JavaFX has benefits over HTML5 if one of these cases is true:
1) You have complex controls and/or a lot of user interaction with the UI
2) You need your application to be really flashy, e.g. by incorporating animations
3) You have a complex business logic that you would like to execute on the client (e.g. because you had a previous implementation as a rich client)
'Some tables and simple controls' don't really fit here.
The reason I wouldn't worry too much about the download time is that most users of an enterprise application will be using your app a lot from a few machines, so caching should deal with that problem (plus an FX app is not going to be that large).
There is an interesting article on the topic to be found here: http://www.oracle.com/technetwork/articles/java/casa-1919152.html . Since it is coming directly from Oracle, you should of course take it with a pinch of salt, but I for one do agree with the general notion. The article also outlines some (subjective) experiences when switching to JavaFX.
If it's an enterprise app and you already know that your users will have Java installed on their clients, JavaFX is a good solution. If not, downloading the JavaFX jar can be quite a buzzkill the first time an app is run, as it's (understandably) quite large. I'm using it for enterprise apps, and the Web Start functionality works well.
And don't forget, if you're using JDK 7, there is a JavaFX packager which will create a single-file installer/runtime for your app. I can't provide a lot of detail on that, as I haven't bothered with it yet.
I have an application composed of a GUI and 3 command-line executables that run as launchd daemons.
I'm planning to put the executables for the launchd daemons inside the .app bundle for the GUI.
These apps use 2 frameworks (both fairly small) which I have created.
Is it a better idea to put these frameworks in /Library/Frameworks (and thus save multiple applications loading the same code) or to keep them in the application bundle (thus making the application self-contained except for the launchd plists)?
Bottom line: I'd opt for self-contained unless/until you have convincing proof that putting the frameworks in /Library/Frameworks/ will offer a noticeable improvement for your particular scenario. Check the impact of doing it both ways, but I prefer to group the frameworks with the app to begin with.
The dynamic linker (dyld) is pretty smart about loading frameworks and reusing what has already been loaded in most cases. If there are several apps in disparate locations using a framework, it is definitely preferred to install it to /Library/Frameworks/. However, since it seems that all the "apps" you're referring to are within your .app bundle, it wouldn't seem there's much benefit to this approach, since they'll all be linked to the same path, which dyld should pick up on. (Not only is a user required to have admin permissions to modify /Library, but the install process instantly becomes more complex.) See the last part of my answer to a related SO question for ideas about analyzing launch performance of executables, including examining dyld activity.
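To see what this looks like in practice, you can list what an executable links against and watch dyld load it at launch (the app name and paths below are placeholders; DYLD_PRINT_LIBRARIES and DYLD_PRINT_STATISTICS are standard dyld environment variables):

    # Which frameworks/dylibs does the binary link against, and at what paths?
    otool -L MyApp.app/Contents/MacOS/MyApp

    # Trace library loads and print launch-time statistics for one run
    DYLD_PRINT_LIBRARIES=1 DYLD_PRINT_STATISTICS=1 ./MyApp.app/Contents/MacOS/MyApp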
Another consideration is the degree of control you have over framework versioning when they are stored within your app bundle. If you purposefully "publish" a framework by putting it in a canonical public location, you must be prepared to accept the consequences of others possibly choosing to link against it. However, if a framework is only distributed inside your app bundle, any developer who chooses to link against it anyway has nobody to blame but themselves. :-)
According to the Apple Framework Programming Guide "for nearly all cases, installing your frameworks in /Library/Frameworks is the best choice", because the code common to the applications sharing the frameworks is loaded only once in memory.