AppMessage maximum size considerations? - pebble-watch

I'm looking to get a clearer idea of the factors that affect the maximum allowed size of app messages incoming to the watch. The maximum size that the SDK guarantees will work is 124 bytes, and the docs say that "in some context, Pebble might be able to provide your application with larger inbox/outbox. You can call app_message_inbox_size_maximum() and app_message_outbox_size_maximum() in your code to get the largest possible value you can use."
I tried this out on my Pebble and app_message_inbox_size_maximum() returned 2044 (which is more than enough for my app), but I imagine that's not reliable across Pebbles? What is the "some context" the docs mention?

There are two factors that will impact how much memory is available:
Are you talking to a JavaScript program, or to an iOS/Android program using the PebbleKit iOS/Android libraries?
In the case of JavaScript, you will have much more memory available because Pebble will use the same buffer that is used for installing apps and upgrading the firmware. Unfortunately, the channel used to communicate with third-party apps written with the PebbleKit native libraries is much smaller (about 500 bytes).
The version of Pebble OS you are using
There will be small differences between versions but nothing major.

When developing apps, I'd say the biggest thing to remember is that the message has to exist within your app's memory space while it's being processed. app_message_open allocates that space for you, and it comes out of the 24 kB that contains your app binary and app heap (see the Pebble Dev FAQ). So ~2 kB for AppMessage buffers may or may not be a problem.
However, the app_message_xxx_size_maximum functions can't know how much RAM you are eventually going to use for other things. Since you can't resize the inbox and outbox after opening them, you have to get the sizes right the first time using your own judgement.
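To make that concrete, here is a minimal C sketch of the pattern I would use, with hypothetical per-app budgets (the 512/64-byte figures below are made up for illustration, not values from the SDK):

    #include <pebble.h>

    // Hypothetical budgets for this particular app's messages.
    #define INBOX_BUDGET  512
    #define OUTBOX_BUDGET  64

    static void prv_open_app_message(void) {
      // Never ask for more than the system says it can provide...
      uint32_t inbox = app_message_inbox_size_maximum();
      uint32_t outbox = app_message_outbox_size_maximum();

      // ...but also don't grab the full ~2 kB maximum if a few hundred
      // bytes will do, since these buffers come out of the app heap.
      if (inbox > INBOX_BUDGET) { inbox = INBOX_BUDGET; }
      if (outbox > OUTBOX_BUDGET) { outbox = OUTBOX_BUDGET; }

      app_message_open(inbox, outbox);
    }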
Beyond that, who knows. Pebble OS is closed source, so there's no easy way to figure out what's going on under the hood. But we can try! First, since there is no way to query the inbox/outbox size after creation, we might assume that the OS does not resize them once they have been created. Then, since the OS does not allow more than one app to run at once, one could surmise that the size limits would be consistent across app launches on the same hardware. Finally, the maximum size should only get larger over time, since reducing it would risk breaking apps that relied on a particular size.

Related

Making an IO application more CPU efficient

I have an application that takes files from one place and moves them to another place - pretty much all this application does is check whether files are in S3 and download the ones that are not to another S3 bucket. Currently the application uses very little of the CPU provided to it. From this post, my understanding is that this is to be expected (seeing as my app is pretty much I/O and nothing more).
My initial idea was to lower the number of CPUs provided to the app. However, providing less and less negatively impacted the speed at which my app performs its duties (which, according to this article, kind of makes sense - fewer CPUs means less total clock speed). This is not an option, as it needs to run somewhat fast.
I am using Kafka messages to start my app. So another idea of mine was to increase the number of partitions in the topic from which my app consumes messages (so that I can increase the number of threads that run concurrently). That allowed me to reduce the number of CPUs I provide to my app (while maintaining the desired processing speed), but my app still uses very little CPU.
My app runs in Kubernetes, whose cluster is deployed to EC2s, if that makes any difference. My app is Spring Boot (Java). I tried to give it only a minimal number of CPUs while maxing out the number of concurrent threads in my app, but again I can see a lot of wasted CPU there.
My question is then as follows: is it possible to somehow make an application use all available CPU (thus making it more efficient) in this scenario? Is there a config or a method or something that does that? Or is this expected behavior for an app that checks whether data is present and downloads it somewhere else - increasing the amount of available resources improves the speed at which my app runs, but as a consequence there will be wasted CPU? (So am I in the classic "good comes with the bad" sort of situation here?)

Is there any way to force JavaFX to release video memory?

I'm writing an application leveraging JavaFX that scrolls a large amount of image content on and off of the screen every 20-30 seconds. It's meant to be able to run for multiple hours, pulling in completely new content and discarding old content every couple of minutes. I have 512 MB of graphics memory on my system, and after several minutes all of that memory has been consumed by JavaFX; no matter what I do with my JavaFX scene, none of it is released. I've been very careful to discard nodes when they drop off of the scene, and at most I have 50-60 image nodes in memory at one time. I really need to be able to do a hard release of the graphics memory that was backing these images, but haven't been able to figure out how to accomplish that, as the Image interface in JavaFX seems to be very high level. JavaFX will continue to run fine, but other graphics-heavy applications will fail to load due to limited resources.
I'm looking for something like the flush() method on java.awt.image.Image:
http://docs.oracle.com/javase/7/docs/api/java/awt/Image.html#flush()
I'm running Java 7u13 on Linux.
EDIT:
I managed to work out a potential workaround (see below), but have also entered a JavaFX JIRA ticket to request the functionality described above:
RT-28661
Add explicit access to a native resource cleanup function on nodes.
The best workaround that I could come up with was to set my JVM's max heap to half of my graphics card's available memory (I have 512 MB of graphics memory, so I set this to -Xmx256m). This forces the GC to be more proactive in cleaning up my discarded javafx.image.Image objects, which in turn seems to trigger graphics memory cleanup on the part of JavaFX.
Previously my heap space was set to 512 MB (I have 4 GB of system memory, so this is a very manageable limit). The problem with that seems to be that the JVM was being very lazy about cleaning up my images until it started approaching this 512 MB limit. Since all of my image data was copied into graphics memory, this meant I had most likely exhausted my graphics memory before the JVM really started caring about cleanup.
I did try some of the suggestions by jewelsea:
I am calling setCache(false), so this may be having a positive effect, but I didn't notice an improvement until I dropped my max heap size.
I tried running with Java 8, with some positive results. It did seem to behave better in its graphics memory management, but it still ate up all of my memory and didn't seem to start caring about graphics memory until I was almost out. If reducing your application's heap limit is not feasible, then evaluating the Java 8 pre-release may be worthwhile.
I will be posting some feature requests to the JavaFX project and will provide links to the JIRA tickets.
Perhaps you are encountering behaviour related to the root cause of the following issue:
RT-16011 Need mechanism for PG nodes to know when they are no longer part of a scene graph
From the issue description:
Some PG nodes contain handles to non-heap resources, such as GPU textures, which we would want to aggressively reclaim when the node is no longer part of a scene graph. Unfortunately, there is no mechanism to report this state change to them so that they can release their resources so we must rely on a combination of GC, Ref queues, and sometimes finalization to reclaim the resources. Lazy reclamation of some of these resources can result in exceptions when garbage collection gets behind and we run out of these limited resources.
There are numerous other related issues you can see when you look at the issue page I linked (signup is required to view the issue, but anybody can sign up).
A sample related issue is:
RT-15516 image data associated with cached nodes that are removed from a scene are not aggressively released
On which a user commented:
I found a workaround for my app: just setting cache to false for all frequently used nodes. 2 days working without any crashes.
So try calling setCache(false) on your nodes.
Also try using a Java 8 preview release where some of these issues have been fixed and see if it increases the stability of your application. Though currently, even in the Java 8 branch, there are still open issues such as the following:
RT-25323 Need a unified Texture resource management system for Prism
Currently texture resources are managed separately in at least 2 places depending on how it is used; one is a texture cache for images and the other is the ImagePool for RTTs. This approach is flawed in its design, i.e. the 2 caches are unaware of each other and it assumes system has unlimited native resources.
Using a video card with more memory may either reduce or eliminate the issue.
You may also wish to put together a minimal executable example which demonstrates your issue and raise a bug request against the JavaFX Runtime project so that a JavaFX developer can investigate your scenario and see if it is new or a duplicate of a known issue.

Drastic performance improvement in .NET CF after app gets moved out of the foreground. Why?

I have a large (500K lines) .NET CF (C#) program, running on CE6/.NET CF 3.5 (v3.5.10181.0). This runs on a Freescale i.MX31 (ARM) @ 400 MHz. It has 128 MB RAM, with ~80 MB available to applications. My app is the only significant one running (this is a dedicated, embedded system). Managed memory in use (as reported by GC.Collect) is about 18 MB.
To give a better idea of the app size, here are some stats culled from the .NET CF Remote Performance Monitor after starting up the application:
GC:
Garbage Collections 131
Bytes Collected by GC 97,919,260
Managed Bytes in use after GC 17,774,992
Total Bytes in use after GC 24,117,424
GC Compactions 41
JIT:
Native Bytes Jitted: 10,274,820
Loader:
Classes Loaded 7,393
Methods Loaded 27,691
Recently, I have been trying to track down a performance problem. I found that my benchmark, after running the app in two different startup configurations, would run in approximately 2 seconds (slow case) vs. 1 second (fast case). In the slow case, the time for the benchmark could change randomly from EXE run to EXE run, anywhere from 1.1 to 2 seconds, but for any given EXE run it would not change for the life of the application. In other words, you can re-run the benchmark and the time for the test stays the same until you restart the EXE, at which point a new time is established and remains consistent.
I could not explain the 1.1x to 2x slowdown via any conventional mechanism, or by narrowing the slowdown down to any particular part of the benchmark code. It appeared that the overall process was just running slower, almost as if a thread were spinning and taking away some of "my" CPU.
Then, I randomly discovered that just by switching away from my app (the GUI loses the foreground) to another app, my performance issue disappears. It stays gone even after returning my app to the foreground. I now have a tentative workaround where my app, after startup, launches an auxiliary app with a 1x1 window that kills itself after 5 ms. Thus the aux app takes the foreground, then relinquishes it.
The question is, why does this speed up my application?
I know that code gets pitched when a .NET CF app loses the foreground. I also notice that when performing a "GC Heap" capture with the .NET CF Remote Performance Monitor, a code pitch is logged -- and this also triggers the performance improvement in my app. So I suspect that code pitching is somehow related to, or even responsible for, fixing the performance. But I'm at a loss as to how to determine whether that is really the case, or how to explain why pitching code could help in this way. Does pitching out lots of code somehow improve locality of reference of the code pages (which are re-JITted, presumably near each other in memory) enough to help this much? (My benchmark spans probably 3 dozen routines and hundreds of lines of code.)
Most importantly, what can I do in my app to reliably avoid this slower condition. Any pointers to relevant .NET CF / JIT / Code pitching information would be greatly appreciated.
Your app going to the background auto-triggers a GC.Collect, which collects, may compact the GC heap, and may pitch code. Have you checked whether a manual GC.Collect without going to the background gives the same behavior? It might not be pitching that's giving the perf gain; it might be the collection or compaction. If a significant number of dead roots are swept up, walking the root tree may get faster. I can't say I've specifically seen this issue, so this is all conjecture.
Send your app a WM_HIBERNATE before your performance loop. It will clean things up.
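For illustration only, here is a minimal native (Win32/Windows CE) sketch of that suggestion; in a .NET CF app the same call would typically be made by P/Invoking SendMessage from coredll.dll, and the 0x3FF fallback definition below is an assumption for when the CE headers aren't available:

    #include <windows.h>

    // WM_HIBERNATE is the Windows CE "low memory" notification; if the
    // platform headers don't define it, 0x3FF is the commonly cited value.
    #ifndef WM_HIBERNATE
    #define WM_HIBERNATE 0x3FF
    #endif

    // Ask our own top-level window to shed caches (and let the runtime
    // pitch code) before entering a timing-sensitive loop.
    void request_hibernate(HWND hwnd_app) {
      SendMessage(hwnd_app, WM_HIBERNATE, 0, 0);
    }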
We have a similar issue with our .NET CF application.
Over time, our application progressively slows down, eventually grinding to a halt, with what I anticipate is high CPU load or, as @wil-s says, as if a thread is spinning and consuming CPU. The only assumption/conclusion I've come to so far is that either we have a rogue thread in our code, or there's an under-the-covers issue in .NET CF, maybe with the JITter.
Closing the application and re-launching returns our application to normal expected performance.
I have yet to implement the code change to test issuing WM_HIBERNATE or launching a dummy app which quits itself (as above) to force a code pitch, but based on the above comments I am fairly sure this will resolve our issue (so many thanks for that).
However, I'm subsequently interested to know whether a root cause was ever found for this specific issue?
Incidentally, and seemingly off topic (but bear with me), we're using a Freescale i.MX28 processor and are experiencing unpredictable FlashDisk corruption: we are seeing 2K blocks of 0xFFs (erased blocks) in random files located on NAND flash.
I'm mentioning this as I now believe the CPU and FlashDisk corruption issues are linked, due to this article as well as this one:
https://electronics.stackexchange.com/questions/26720/flash-memory-corruption-due-to-electricals
In the article, @jwygralak67 comments:
I recently worked through a flash corruption issue, on a WinCE system, as part of a development team. We would sporadically find 2K blocks of flash that were erased (all bytes 0xFF). For about 6 months we tested everything from ESD, to dirty power, to EMI and RFI interference; we bought brand new devices and tracked usage to make sure we weren't exceeding the erase cycle limit and burning out blocks; we went through our (application level) software with a fine-toothed comb.
In the end it turned out to be an obscure bug in the very low level flash driver code, which only occurred under periods of heavy CPU load. The driver came from a 3rd party. We informed them of the issue we found, but I don't know if they ever released a patch.
Unfortunately, we're yet to make contact with him.
With all of this in mind, if we work around the high CPU load, then perhaps the corruption will no longer be present. Another case of conjecture!
On that assumption, however, this doesn't give a firm root cause for either situation, which is what I'm desperately seeking!
Any knowledge or insight, however small, would be very gratefully received.
@ctacke - we've spoken before regarding OpenNETCF via email, so I'm pleased to see your name!

How much memory should a caching system use on Windows?

I'm developing a client/server application where the server holds large pieces of data, such as big images or video files, which are requested by the client, and I need to create an in-memory caching system on the client to hold a few of those large items and speed up the process. Just to be clear, each individual image or video is not that big, but the overall size of all of them can be really big.
But I'm faced with the "how much data should I cache" problem, and was wondering if there are any golden rules on Windows about what strategy I should adopt. The caching is done on the client; I do not need caching on the server.
Should I stay under x% of global memory usage at all times? And how much would that be? What will happen if another program is launched and takes up a lot of memory - should I empty the cache?
Should I query how much free memory is available prior to caching and use a fixed percentage of that memory for my needs?
I hope I do not have to go there, but should I ask the user how much memory he is willing to allocate to my application? If so, how can I calculate the default value for that property for those who will never touch that setting?
Rather than create your own caching algorithm, why don't you write the data to a file with the FILE_ATTRIBUTE_TEMPORARY attribute and make use of the client machine's own file cache?
Although this approach appears to imply that you use a file, if there is memory available in the system then the file will never leave the cache and will remain in memory the whole time.
Some advantages:
You don't need to write any code.
The system cache takes account of all the other processes running. It would not be practical for you to take that on yourself.
On 64 bit Windows the system can use all the memory available to it for the cache. In a 32 bit Delphi process you are limited to the 32 bit address space.
Even if your cache is full and your files do get flushed to disk, local disk access is much faster than querying the database and then transmitting the files over the network.
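A minimal Win32 (C) sketch of this suggestion; the function name and path parameter are placeholders and error handling is omitted:

    #include <windows.h>

    // Create a scratch file that Windows is encouraged to keep in the
    // system file cache (FILE_ATTRIBUTE_TEMPORARY) and that disappears
    // when the last handle is closed (FILE_FLAG_DELETE_ON_CLOSE).
    HANDLE create_cached_temp_file(const wchar_t *path) {
      return CreateFileW(path,
                         GENERIC_READ | GENERIC_WRITE,
                         FILE_SHARE_READ,
                         NULL,                 // default security
                         CREATE_ALWAYS,
                         FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                         NULL);                // no template file
    }

Write each downloaded image or video into a handle like this as it arrives; as long as memory is plentiful the data stays in the cache, and the OS discards the file when the handle is closed.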
It depends on what other software runs on the server. I would make it configurable manually at first. Develop a system that can use a specific amount of memory. If you can, build it so that you can change that value while it is running.
If you have those possibilities, you can try some tweaking to see what works best. I don't know any golden rules, but I figure you should be able to set a percentage of total memory, or of total available memory, with a specific minimum amount of memory kept free for the system at all times. If you reserve a minimum of, say, 500 MB for the server OS, you can use the rest, or 90% of the rest, for your cache. But those numbers depend on the version of the OS and the other applications running on the server.
I think it's best to make the numbers configurable from the outside and create a management tool that lets you set the values manually first. Then, once you have found out what works best, you can derive formulas to calculate those values and integrate them into your management tool. This tool should not be an integral part of the cache program itself (which will probably be a service without a GUI anyway).
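As a rough sketch of the "percentage of what's available, with a floor reserved for the OS" rule above (the 500 MB reserve and 90% fraction are just the example numbers from this answer, not recommendations):

    #include <windows.h>

    // Suggest a cache budget: keep `reserve_for_os` bytes untouched and
    // use `fraction` of whatever physical memory remains available.
    ULONGLONG suggested_cache_bytes(double fraction, ULONGLONG reserve_for_os) {
      MEMORYSTATUSEX ms;
      ms.dwLength = sizeof(ms);
      if (!GlobalMemoryStatusEx(&ms)) {
        return 0;  // fall back to a manually configured value
      }
      if (ms.ullAvailPhys <= reserve_for_os) {
        return 0;  // nothing left once the OS reserve is honoured
      }
      return (ULONGLONG)((ms.ullAvailPhys - reserve_for_os) * fraction);
    }

    // Example: suggested_cache_bytes(0.9, 500ULL * 1024 * 1024)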
Questions:
Can one image be requested by multiple clients? Or can one image be requested multiple times in a short interval?
How short is the interval?
Is the speed of the network really high? Higher than the speed of the hard drive? If you have a normal network, the hard drive will be able to read the files from disk and deliver them over the network in real time, especially since Windows already does some good caching, so the most recent files are already in the cache.
Is the main purpose of the computer running the server app to run the server, or is it a normal computer also used for other tasks? In other words, is it a dedicated server or a normal workstation/desktop?
but should I ask the user how much memory he is willing to allocate to my application?
I would definitely go there!
If the user thinks that the server application is not an important application, they will probably give it low priority (a small cache). Otherwise, if they think it is the most important running app, they will allow it to allocate all the RAM it needs, to the detriment of other, less important applications.
Just deliver the application with that setting set by default to an acceptable value (something like x% of the total amount of RAM). I would use around 70% of total RAM if the main purpose of the computer is to host this server application, and about 40-50% if its purpose is to be a 'general use' computer.
A server application usually needs resources set aside for its own use by its administrator. I would not worry about other applications' behaviour; I would care about being a "polite" application, so it should allow the memory cache size and so on to be configurable by the administrator, who is the only one who (usually...) knows how to configure his systems properly.
Default values should in any case take into consideration how much memory is available overall, especially on 32-bit systems with less than 4 GB of memory (as long as Delphi delivers only 32-bit apps), to leave something free for the operating system and avoid too-frequent swapping. Asking the user to select it at setup is also advisable.
If the application is the only one running on a server, a value between 40% and 75% of available memory could be OK (depending on how much memory is needed beyond the cache), but again, ask the user, because it's almost impossible to know what other running applications may need. You can also have a minimum cache size and a maximum cache size: start by allocating the lower value, then grow it when and if needed, and shrink it if necessary.
On a 32-bit system this is the kind of memory usage that could benefit from using PAE/AWE to access more than 3 GB of memory.
Update: you can also monitor cache hits/misses and calculate which cache size would best fit the user's needs (it could be too small, but too large as well), and then advise the user about that.
To be honest, the questions you ask would not be my main concern. I would be more concerned with how effective my cache would be. If your files are really that big, how many can you hold in the cache? And if your client/server app has many users, what are the chances that your cache will actually hold something someone else will use?
It might be worth doing an analysis before you burn too much time on the fine details.

How do I ensure GUI responsiveness when using OpenCL on the display GPU?

In my relatively short time learning OpenCL I frequently see my application cause the operating system UI to become significantly less responsive (several seconds for a window to respond to a drag for example). I have encountered this problem on Windows Vista and Mac OS X both with NVidia GPUs.
What can I do when using OpenCL on the same GPU as the display to ensure that my application does not significantly degrade UI responsiveness like this? Also, can this be done without taking needless performance losses within my application? (I.e., if the user is not doing some UI-intensive task, then I would not expect my application to run any slower than it does now.)
I understand that any answers will be very platform specific (where platform includes OS/GPU/driver combo).
As described in Dr. David Gohara's OpenCL Tutorial Episode 6 (beginning at 43:49), graphics cards cannot be preemptively scheduled at this time. As a result, using the same graphics card both for an intensive OpenCL kernel and the UI (or other GPU-using operations) will result in clunkiness or the visual appearance of freezing. Until graphics cards get preemptively scheduled multitasking (if ever), there's no way to do exactly what you want with just a single graphics card. I don't believe this is a platform-specific issue at all.
However, this problem might be solvable by dividing the work up. Given the relative speed of whatever single GPU is available (you'll have to do testing to find the right setup), divide your OpenCL problem up so the kernel runs multiple times over different parts of the input data, and combine the output data once all sets of kernels are complete. I would recommend creating kernel sets that take less than 100 milliseconds to run (on a given GPU) so that the lag is, if not unnoticeable, at least not significantly annoying (the 100 millisecond figure is a good "rule of thumb" according to this paper).
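A rough host-side C sketch of that chunking idea, assuming a one-dimensional problem and a queue and kernel that are set up elsewhere; the chunk size would need tuning so each slice finishes in well under 100 ms on the target GPU (note that a non-NULL global work offset requires OpenCL 1.1; on 1.0 you would pass the offset as a kernel argument instead):

    #include <CL/cl.h>

    // Run `kernel` over `total_items` work-items in slices of `chunk`,
    // waiting after each slice so the GPU can service the display.
    void run_in_chunks(cl_command_queue queue, cl_kernel kernel,
                       size_t total_items, size_t chunk) {
      for (size_t offset = 0; offset < total_items; offset += chunk) {
        size_t count = total_items - offset;
        if (count > chunk) {
          count = chunk;
        }
        clEnqueueNDRangeKernel(queue, kernel, 1,
                               &offset,   // where this slice starts
                               &count,    // how many items in this slice
                               NULL, 0, NULL, NULL);
        clFinish(queue);  // block until the slice finishes, yielding the GPU
      }
    }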
Based on your comment about your program being a command-line application, I assume your application will only run once at any given time, versus being a continuously running application with real-time output, as a lot of OpenCL demos are. My above answer is only satisfactory for non-continuous applications, since real-time performance isn't inherently expected. However, if your application is supposed to be continuous, the only solution currently available is to add a second, simpler graphics card that will only be used for UI.
