Using same CGImageRef Buffer for different images? - performance

I am looking for a way to create a CGImageRef buffer once and use it again for different images.
My application is facing a performance hit because it creates an image and then draws it into a context. This process runs in a timer that fires every 1 ms. I wonder if there is anything I could do to avoid calling
CGBitmapContextCreateImage(bitmapcontext); on every single tick.
Thanks

There is one way you could theoretically do it:
Create an NSMutableData or CFMutableData.
Use its mutableBytes (or CFDataGetMutableBytePtr for CFMutableData) as the backing buffer of the bitmap context.
Create a CGDataProvider with the data object.
Create a CGImage with that data provider, making sure to use all the same parameter values you created the bitmap context with (see the sketch below).
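If you want to try that route, here is a minimal sketch of those steps. The buffer size and pixel format are just example values, and CreateReusableImage is a hypothetical helper, not anything from the question:

```objc
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

// One buffer shared by a bitmap context (for drawing) and a CGImage (for display).
static CGImageRef CreateReusableImage(CGContextRef *outContext) {
    const size_t width = 256, height = 256, bytesPerRow = width * 4;

    NSMutableData *backing = [NSMutableData dataWithLength:bytesPerRow * height];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    *outContext = CGBitmapContextCreate(backing.mutableBytes,
                                        width, height, 8, bytesPerRow, colorSpace,
                                        (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);

    // The data provider retains the data object, so the buffer stays alive.
    CGDataProviderRef provider =
        CGDataProviderCreateWithCFData((__bridge CFDataRef)backing);

    // These parameters must match the bitmap context exactly.
    CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                     (CGBitmapInfo)kCGImageAlphaPremultipliedFirst,
                                     provider, NULL, false, kCGRenderingIntentDefault);

    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    // Drawing into *outContext now writes into the same bytes the image reads from.
    return image; // caller is responsible for CGImageRelease
}
```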
However, I'm not sure that this is guaranteed to work. More specifically, I don't think that the CGImage is guaranteed not to copy, cache, and reuse any data that your data provider provided. If it ever does, you'll find your app showing a stale image (or even an image that is partly stale).
You might be better off simply holding on to the CGImage(s). If you generate the image based on some input, consider whether you might be able to cache resulting images by that input—for example, if you're drawing a number or two into the context, consider caching the CGImages in a dictionary or NSCache keyed by the number(s) string. Of course, how feasible this is depends on how big the images are and how limited memory is; if this is on iOS, you'd probably be dropping items from that cache pretty quickly.
Also, doing anything every 1 ms will not actually be visible to the user. If you mean to show these images to the user, there's no way to do that 1000 times per second—even if you could do that in your application, the user simply cannot see that fast. As of Snow Leopard (and I think since Tiger, if not earlier), Mac OS X limits drawing to 60 frames per second; I think this is also true on iOS. What you should do at a reasonable interval—1/60th of a second being plenty reasonable—is set a view as needing display, and you should do this drawing/image generation only when the view is told to draw.
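For the drawing side, a rough AppKit-flavored sketch of that pattern; the view class name and timer interval here are illustrative only (on iOS the equivalent would be a UIView and setNeedsDisplay):

```objc
#import <Cocoa/Cocoa.h>

@interface MyImageView : NSView
- (void)startUpdating;
@end

@implementation MyImageView

- (void)startUpdating {
    // ~60 Hz; anything faster than the display refresh is wasted work.
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                     target:self
                                   selector:@selector(tick:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)tick:(NSTimer *)timer {
    // Just invalidate; AppKit coalesces this into the next display pass.
    [self setNeedsDisplay:YES];
}

- (void)drawRect:(NSRect)dirtyRect {
    // Generate and draw the image only here, when the view is asked to draw.
}

@end
```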

Related

Images storage performance react native (base64 vs uri path)

I have an app to create reports with some data and images (min 1 image, max 6). These reports stay saved in my app until the user sends them to the API (which can be done the same day the report was created, or a week later).
But my question is: what's the proper way to store these images (I'm using Realm), saving the path (URI) or a base64 string? My current version keeps the base64 for these images (roughly 500-800 KB per image), and after my users send their reports to the API, I delete that base64 data.
I was developing a way to save the path to the image and then display it from there. But the URI returned by image-picker is temporary, so to do this I need to copy the file somewhere else and then save that path. Doing that, I end up (for around two or three days) with each image stored twice on the phone, using extra storage.
So before I develop all this, I was wondering: will copying the image to another path and then saving the path be more performant than storing the base64 string on the phone, or shouldn't it make much difference?
I try to avoid text-only answers; including code is best practice, but the question about storing images comes up frequently and it's not really covered in the documentation, so I thought it should be addressed at a high level.
Generally speaking, Realm is not a solution for storing blob-type data: images, PDFs, etc. There are a number of technical reasons for that, but most importantly, an image can go well beyond the capacity of a Realm field. Additionally, it can significantly impact performance (especially in a syncing use case).
If this is a local-only app, store the images on disk on the device and keep a reference in Realm to where they are stored (their path). That will enable the app to be fast and responsive with a minimal footprint.
If this is a synced solution where you want to share images across devices or with other users, there are several cloud-based solutions that accommodate image storage; you then store a URL to the image in Realm.
One option, part of the MongoDB family of products (which also includes MongoDB Realm), is GridFS. Another option, a solid product we've leveraged for years, is Firebase Cloud Storage.
Now that I've made those statements, I'll backtrack just a bit and refer you to the article Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps, which is a fantastic piece about implementing Realm in a real-world application and, in particular, how to deal with images.
In that article, note they do store the images in Realm for a short time. However, one thing they left out of that (which was revealed in a forum post) is that the images are compressed to ensure they don't go above the Realm field size limit.
I am not totally on board with general use of that technique but it works for that specific use case.
One more note: the image sizes mentioned in the question are pretty small (500-800 KB), which is a tiny amount of data that would really not have an impact, so storing them in Realm as a data object would work fine. The caveat is future expansion: if you decide later to store larger images, it would require a complete re-write of the code, so why not plan for that up front?

slow-loading persistent store coordinator in core data

I have been developing a Cocoa app with Core Data. Initially everything seemed fine, but as I added data to the application, I found that the initial data window took ages to load. To fix that, I moved to another startup window that didn't have the data, so start-up was snappy. However, no matter what I do, my first fetch AND my first attempt to load a data window (with table views) are always slow. (That is, if I do a fetch first and then ask for the data window, both will be slow the first time around.) After that, performance is acceptable.
I traced through my application and found that while I can quickly step through the program, no matter what, the step that retrieves the persistent store coordinator is incredibly slow ... 15 - 20 seconds can elapse with a spinning beach ball.
I've read elsewhere that I might want to denormalize the data. I don't think that will be sufficient. An earlier version was far less "interconnected" between the entities, and it still was a slug at startup. Now I'm looking at entities that may have as high as 18,000 managed objects. Some of the relations are essential to having the data work correctly.
I've also read about the option of employing a separate managed object context in the background. The problem with this is that even this background context would take too long to be usable. If the user tries to run a search, he or she will still be waiting forever for that context to load. I might buy myself a few seconds while the user decides what to type in to the search field, but I can't afford to stall for 25 seconds.
I noticed that once data is imported into the persistent store, even searches on a table that is not related to others (and only has 1000 objects) still takes ages to load. The reason seems to be that it's the coordinator retrieval itself that's slow, not the actual fetch or the context.
Can anyone point me in the right direction on how to resolve this? Thanks!
Before you create your data model:
If you’re storing large objects such as photos, audio or video, you need to be very careful with your model design.
The key point to remember is that when you bring a managed object into a context, you’re bringing all of its data into memory.
If large photos are within managed objects cut from the same entity that drives a table-view, performance will suffer. Even if you’re using a fetched results controller, you could still be loading over a dozen high-resolution images at once, which isn’t going to be instant.
To get around this issue, attributes that will hold large objects should be split off into a related entity. This way the large objects can remain in the persistent store and can be represented by a fault instead, until they really are needed.
If you need to display photos in a table view, you should use auto-generated thumbnail images instead.
Read the whole article
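To make the advice about splitting large attributes off concrete, here is a hedged sketch; the entity and attribute names (Photo, thumbnail, fullImage, data) are assumptions for illustration, not taken from the question:

```objc
#import <CoreData/CoreData.h>

// The table view only ever touches the small thumbnail attribute, so the
// related PhotoData object stays a fault (its bytes stay in the store).
static NSData *ThumbnailForRow(NSManagedObject *photo) {
    return [photo valueForKey:@"thumbnail"];
}

// Traversing the relationship fires the fault, so the large blob is loaded
// into memory only when it is actually needed (e.g. in a detail view).
static NSData *FullImageDataOnDemand(NSManagedObject *photo) {
    return [[photo valueForKey:@"fullImage"] valueForKey:@"data"];
}
```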
You might be getting ahead of yourself thinking PSC is the culprit.
There is more going on behind the scenes with CoreData than is readily obvious -- PSC is very flexible and must be directed.
A realistic approach for the data size you specified (roughly 18,000 objects) is to focus on modularizing the logic of your fetch request templates and validation for specific size cases (think small, medium, large, extra large, etc.), as sketched below.
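As a hedged sketch of what that modularization might look like, assuming a fetch request template named "TasksBySize" with a $SIZE substitution variable (both names invented for this example):

```objc
#import <CoreData/CoreData.h>

static NSArray *FetchTasksForSizeCase(NSManagedObjectContext *context,
                                      NSManagedObjectModel *model,
                                      NSString *sizeCase) {
    // Copy so the stored template itself is never mutated.
    NSFetchRequest *request =
        [[model fetchRequestFromTemplateWithName:@"TasksBySize"
                           substitutionVariables:@{@"SIZE": sizeCase}] copy];

    // Only a window of the ~18,000 objects is materialized at a time.
    request.fetchBatchSize = 50;

    NSError *error = nil;
    return [context executeFetchRequest:request error:&error];
}
```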
The suggestion to denormalize your data does not take into account the overhead of getting your data into a fully denormalized state; plus, a (sometimes) unintended side-effect of denormalization is sparsity (unless you have a very specific model, of course).
Since you usually do not know beforehand what data will be accessed and modified, make a one-to-many relationship between your central task and any subtasks. This will free up some constraints on your data access.
You can always give your end users the option to choose how they want to handle the larger datasets.

How can I get the *original* data behind an NSImage?

I have an instance of NSImage that's been handed to me by an API whose implementation I don't control.
I would like to obtain the original data (NSData) from which that NSImage was created, without the data being converted to another representation/format (or otherwise "molested"). If the image was created from a file, I want the exact, byte-for-byte contents of the file, including all metadata, etc. If the image was created from some arbitrary NSData instance I want an exact, byte-for-byte-equivalent copy of that NSData.
To be pedantic (since this is the troublesome case I've come across), if the NSImage was created from an animated GIF, I need to get back an NSData that actually contains the original animated GIF, unmolested.
EDIT: I realize that this may not be strictly possible for all NSImages all the time; How about for the subset of images that were definitely created from files and/or data?
I have yet to figure out a way to do this. Anyone have any ideas?
I agree with Ken, and having a subset of conditions (I know it's a GIF read from a file) doesn't change anything. By the time you have an NSImage, a lot of things have already happened to the data. Cocoa doesn't like to hold a bunch of data in memory that it doesn't directly need. If you had an original CGImage (not one generated out of the NSImage), you might get really lucky and find the data you wanted in CGDataProviderCopyData, but even if it happened to work, there's no promises about it.
But thinking through how you might, if you happened to get incredibly lucky, try to make it work:
Get the list of representations with -representations.
Find the one that matches the original (hopefully there's just the one)
Get a CGImage from it with -CGImageForProposedRect:context:hints:. You probably want a rect that matches the size of the image, and I'd probably pass a hint of no interpolation.
Get the data provider with CGImageGetDataProvider
Copy its data with CGDataProviderCopyData. (But I doubt this will be the actual original data including metadata, byte-for-byte.)
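If you wanted to attempt those steps anyway, a sketch might look like the following; as the answer warns, the bytes that come back are almost certainly not the original file data:

```objc
#import <Cocoa/Cocoa.h>

static NSData *ProbablyNotTheOriginalData(NSImage *image) {
    NSImageRep *rep = image.representations.firstObject; // hopefully just one
    if (!rep) return nil;

    NSRect rect = NSMakeRect(0, 0, rep.pixelsWide, rep.pixelsHigh);
    CGImageRef cgImage =
        [rep CGImageForProposedRect:&rect
                            context:nil
                              hints:@{NSImageHintInterpolation:
                                          @(NSImageInterpolationNone)}];
    if (!cgImage) return nil;

    // The provider is owned by the image; only the copied data is ours.
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    return CFBridgingRelease(CGDataProviderCopyData(provider));
}
```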
There are callbacks that will get you a direct byte pointer into the internal data of a CGDataProvider (like CGDataProviderGetBytePointerCallback), but I don't know of any way to request the list of callbacks from an existing CGDataProvider. That's typically something Quartz accesses, and that we just pass during creation.
I strongly suspect this is impossible.
This is not possible.
For one thing, not all images are backed by data. Some may be procedural. For example, an image created using +imageWithSize:flipped:drawingHandler: takes a block which draws the image.
But, in any case, even CGImage converts the data on import, and that's about as low-level as the Mac frameworks get.

Is there a way to determine what CGLFlushDrawable is doing to the back buffer?

According to Apple's documentation, CGLFlushDrawable, or its Cocoa equivalent flushBuffer, may behave in a couple of different ways. Normally, for a windowed application, the contents of the back buffer are copied to the visible buffer, as stated here:
CGLFlushDrawable
Copies the back buffer of a double-buffered context to the front buffer.
I assume the contents of the drawing buffer are left untouched (see question 1). Even if I'm wrong, this can be ensured by passing the kCGLPFABackingStore attribute to CGLChoosePixelFormat.
But further reading reveals that, under some circumstances, the buffers may be swapped rather than copied:
If the backing store attribute is set to false, the buffers can be exchanged rather than copied. This is often the case in full-screen mode.
And also this states
When there is no content above your full-screen window, Mac OS X automatically attempts to optimize this context’s performance. For example, when your application calls flushBuffer on the NSOpenGLContext object, the system may swap the buffers rather than copying the contents of the back buffer to the front buffer. (...) Because the system may choose to swap the buffers rather than copy them, your application must completely redraw the scene after every call to flushBuffer.
And here go my questions:
If the back buffer is copied, is it guaranteed that its contents are preserved even without the backing store attribute?
If the buffers are swapped, does the back buffer get the contents of the front buffer, or is it undefined, so it could just as well contain random data?
The system may choose to swap buffers, but is there any way to determine if it actually did choose to do so?
In any of those cases, is there a way to determine if the buffer was preserved, exchanged with the front buffer or got messed up?
Also, any information on how this is handled in WGL, GLX or EGL would be appreciated. I particularly need the answer to question 4.
No, it's not guaranteed.
It might be random.
No, I don't believe so.
No. If you don't specify kCGLPFABackingStore or NSOpenGLPFABackingStore, then you can't make any assumptions about the contents of the back buffer, which is why the docs say you must redraw from scratch for every frame.
I'm not sure what you're asking about WGL, GLX, and EGL.
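On the CGL side, the backing-store guarantee can only be requested up front when the pixel format is created. A minimal sketch, with attribute choices that are just an example:

```objc
#import <OpenGL/OpenGL.h>

static CGLContextObj CreateBackingStoreContext(void) {
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFADoubleBuffer,
        kCGLPFABackingStore,   // flush copies the back buffer instead of swapping
        kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0
    };

    CGLPixelFormatObj pixelFormat = NULL;
    GLint numFormats = 0;
    CGLChoosePixelFormat(attrs, &pixelFormat, &numFormats);

    CGLContextObj context = NULL;
    CGLCreateContext(pixelFormat, NULL, &context);
    CGLDestroyPixelFormat(pixelFormat);
    return context;
}
```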

Switch OpenGL contexts or switch context render target instead, what is preferable?

On MacOS X, you can render OpenGL to any NSView object of your choice, simply by creating an NSOpenGLContext and then calling -setView: on it. However, you can only associate one view with a single OpenGL context at any time. My question is, if I want to render OpenGL to two different views within a single window (or possibly within two different windows), I have two options:
Create one context and always change the view, by calling setView as appropriate each time I want to render to the other view. This will even work if the views are within different windows or on different screens.
Create two NSOpenGLContext objects and associate one view with each. These two contexts could be shared, which means most resources (like textures, buffers, etc.) will be available in both views without using twice the memory. In that case, though, I have to keep switching the current context each time I want to render to the other view, by calling -makeCurrentContext on the right context before making any OpenGL calls. (Both options are sketched in code below.)
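In code, the two options look roughly like this; the view and context variables are placeholders:

```objc
#import <Cocoa/Cocoa.h>

// Option 1: a single context, re-targeted to whichever view is being drawn.
static void RenderBothViewsWithOneContext(NSOpenGLContext *context,
                                          NSView *viewA, NSView *viewB) {
    [context setView:viewA];
    [context makeCurrentContext];
    // ... GL calls for viewA ...
    [context flushBuffer];

    [context setView:viewB];   // re-target the same context
    // ... GL calls for viewB ...
    [context flushBuffer];
}

// Option 2: one context per view, sharing resources with an existing context.
static NSOpenGLContext *MakeSharedContextForView(NSOpenGLPixelFormat *format,
                                                 NSOpenGLContext *shareWith,
                                                 NSView *view) {
    NSOpenGLContext *context =
        [[NSOpenGLContext alloc] initWithFormat:format shareContext:shareWith];
    [context setView:view];
    return context;
}
```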
I have in fact used either option in the past, and each of them worked okay for my needs; however, I asked myself which way is better in terms of performance, compatibility, and so on. I read that context switching is actually horribly slow, or at least it used to be very slow in the past; that might have changed in the meantime. It may depend on how much data is associated with a context (e.g. resources), since switching the active context might cause data to be transferred between system memory and GPU memory.
On the other hand switching the view could be very slow as well, especially if this might cause the underlying renderer to change; e.g. if your two views are part of two different windows located on two different screens that are driven by two different graphic adapters. Even if the renderer does not change, I have no idea if the system performs a lot of expensive OpenGL setup/clean-up when switching a view, like creating/destroying render-/framebuffer objects for example.
I investigated context switching between 3 windows on Lion, where I tried to resolve some performance issues with a somewhat misused VTK library, which itself is terribly slow already.
Whether you switch render contexts or the windows doesn't really matter, because there is always the overhead of making both of them current to the calling thread as a triple. I measured roughly 50 ms per switch, where some OS/window-manager overhead comes in as well. This overhead also depends greatly on the arrangement of other GL calls, because the driver could be forced to wait for commands to finish, which can be achieved manually by a blocking call to glFinish().
The most efficient setup I got working is similar to your second option, but with two dedicated render threads, each having its (shared) render context and window permanently bound. The aforesaid context switches/bindings are done just once at init.
The threads can be controlled using some threading stuff like a common barrier, which lets both threads render single frames in sync (both get stalled at the barrier before they can be launched again). Data handling must also be interlocked, which can be done in one thread while stalling other render threads.
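A bare-bones sketch of that arrangement, with a pair of semaphores standing in for the "common barrier"; all names are placeholders and the actual GL work is elided:

```objc
#import <Cocoa/Cocoa.h>

// Each view gets a dedicated render thread whose context is bound exactly once.
static void StartRenderThread(NSOpenGLContext *context,
                              dispatch_semaphore_t go,
                              dispatch_semaphore_t done) {
    [NSThread detachNewThreadWithBlock:^{
        [context makeCurrentContext];          // bound once, never switched again
        for (;;) {
            dispatch_semaphore_wait(go, DISPATCH_TIME_FOREVER);
            // ... render one frame into this thread's view ...
            [context flushBuffer];
            dispatch_semaphore_signal(done);   // report the frame as finished
        }
    }];
}

// Driver: release both threads for one frame, then wait for both to finish
// before touching shared data or kicking off the next frame.
static void RenderFrameInSync(dispatch_semaphore_t goA, dispatch_semaphore_t goB,
                              dispatch_semaphore_t done) {
    dispatch_semaphore_signal(goA);
    dispatch_semaphore_signal(goB);
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}
```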
