I've got a Mac application that I've developed.
I use it to create sqlite files that are bundled with my iPhone app. The Mac app uses Core Data and bindings, and it is working fine except for one "weird" issue.
I use an NSImageView (or Image Well) to allow me to drag and drop jpg files.
This is bound through to an optional binary attribute in my model class.
For some reason, when I drag and drop a 4 KB jpg file onto the image well and save the sqlite file, the data saved to the binary column is over 15 times larger than it should be.
Whereas if I use an application like SQLiteManager to add the image to the row in the database, the binary data is the correct (expected) size.
File: 4 KB jpg
Actual size: 2,371 bytes.
Size persisted via Core Data: 35,810 bytes.
Can anyone give me a suggestion as to why this might be happening?
Do I need to set some setting in Interface Builder or write some custom code?
Create a dump from the sqlite3 file (e.g. with the .dump command in the sqlite3 shell) and check which content is using your space. I use plain sqlite3 to store an image cache in Galileo, and as far as I know, the db size there is roughly the total size of the images.
TL;DR: In macOS 10.13, an MTLTexture has a maximum width and height of 16,384. What strategies can you use to be able to process and display images larger than 16,384 pixels using Metal?
In a photo viewer that I'm currently working on, I've moved most of the image viewing into a Metal-backed view that uses Core Image for any image adjustments. This is working really well, but I've recently started testing against some really large images (panoramas) and I'm now hitting some limits that I'm not entirely sure how to work around while remaining relatively performant.
My current environment looks like this:
Load and decode an image from an NSURL into an IOSurface. This is done using either Image I/O directly or a Core Image pipeline that renders into the IOSurface. The IOSurface is then passed from an XPC service back into the main app.
In the main app, a new MTLTexture is created that is backed by the IOSurface. Then, a CIImage is created from the MTLTexture and that CIImage is used throughout an image pipeline as the root "source image".
However, if I attempt to open an image larger than 16,384 pixels in one dimension, then I'm unable to create the original IOSurface on my laptop (13" MBP-TB 2016).
But even if I could create a larger IOSurface, then I'm still stuck with the same limit on the MTLTexture.
See: Apple's Metal Feature Set Tables
I'm curious what strategies others would recommend to allow one to open large image files while still taking advantage of Core Image and Metal.
One attempt I've made is to have the root source image be a CIImage created from a CGImageRef. However, there's a significant performance drop between that arrangement and a CIImage backed by a texture, even for smaller images.
Another idea I've had, but haven't yet explored, is to use CIImageProvider in some capacity, but I'm not entirely sure how I'd go about "tiling" potentially several IOSurfaces or MTLTextures, whether that even makes sense, or whether it would be better to just allocate a single large buffer to read from. (Or perhaps use dispatch_data in some capacity?)
(macOS 10.13 or even 10.14 would be fine.)
When creating an image in a Windows Store App, how do you control when data is read from disk?
In WPF, you could control when an image was read from disk using BitmapCacheOption. BitmapCacheOption.OnDemand would postpone reading data from disk until the image data was actually needed. There were a few downsides to this:
IO costs often appeared as UI delays;
if a stream was used as the image source, then the stream could not be closed;
if a file was used as the image source, then the file was locked.
To address that problem you could use BitmapCacheOption.OnLoad to read the image into memory immediately.
How do you control when image data is loaded into memory in Windows Store Apps?
WPF code would look something like this:
var bitmapImage = new BitmapImage();
bitmapImage.BeginInit();
// OnLoad forces the whole file to be read and decoded at EndInit,
// instead of lazily when the image is first rendered
bitmapImage.CacheOption = BitmapCacheOption.OnLoad;
bitmapImage.UriSource = path;
bitmapImage.EndInit();
// Freeze so the bitmap is immutable and usable across threads
bitmapImage.Freeze();
Edit - More info
WPA shows that getting an 8.8 MB image onto the screen costs ~330 ms. Of that, 170 ms is spent on file IO (including 37 ms for antivirus to check the file) and 160 ms on WIC decoding.
Any ideas how to control when the file IO happens or how to trigger WIC decoding?
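For reference, this sketch is the kind of explicit control I'm after in the Windows Store app; I'm assuming that SetSourceAsync is where the file IO and WIC decode would happen, but that assumption is exactly what I'm trying to confirm:

using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.Streams;
using Windows.UI.Xaml.Media.Imaging;

async Task<BitmapImage> LoadExplicitlyAsync(string path)
{
    // Do the file IO explicitly by opening the stream myself...
    StorageFile file = await StorageFile.GetFileFromPathAsync(path);
    using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read))
    {
        var bitmapImage = new BitmapImage();
        // ...then hand the stream to the BitmapImage; presumably the
        // decode work happens here, which is what I'd like to confirm
        await bitmapImage.SetSourceAsync(stream);
        return bitmapImage;
    }
}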
For Windows Store Apps I suggest looking into the AccessCache APIs - http://msdn.microsoft.com/en-us/library/windows/apps/windows.storage.accesscache.aspx
Specifically, the StorageApplicationPermissions class. With it you can add storage items (files and folders) to lists that your app can reopen later without re-prompting the user for access.
Take a look at the FilePickerSample (or FileAccessSample) app for more information on how it can be used.
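A minimal sketch of the pattern, assuming file is a StorageFile the user has already granted access to (e.g. via a FileOpenPicker) and that the code runs inside an async method:

using Windows.Storage;
using Windows.Storage.AccessCache;

// Remember the file; persist the returned token yourself, e.g. in app settings
string token = StorageApplicationPermissions.FutureAccessList.Add(file);

// Later, even across app launches, reopen the file via the token
StorageFile reopened =
    await StorageApplicationPermissions.FutureAccessList.GetFileAsync(token);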
I made a few forms in Access 2010 and added the company logo to the form header. The picture is a .jpg and its size is 70 KB. I don't know why the size of the .mdb immediately increased from 4 MB to 12 MB (a few forms, all with the same logo). Is there perhaps some image compression option?
Taken from http://office.microsoft.com/en-us/access-help/store-images-in-a-database-HP005280225.aspx
..."However, embedding images can rapidly inflate the size of your
database and cause it to run slowly. This is especially true if you
store GIF and JPEG files, because OLE creates additional bitmap files
that contain display information for each of your image files, and
those additional files can be larger than your original images. In
addition, this method only supports the Windows Bitmap (.bmp) and
Device Independent Bitmap (.dib) graphic file formats. If you want to
display other common types of image files, such as GIF and JPEG
images, you have to install additional software."...
To explain how these bitmap files are stored, the link below offers more detail than the Microsoft site:
Taken from http://www.ammara.com/support/kb/showkbe5cc.html
..."OLE Linking & Embedding is a technique used by Microsoft Access to
store 'Objects' in database tables.The technique relies on the
associated external application to store, present and edit the data.
In some cases an additional uncompressed 'preview' image is also saved
in the table (even when linking). This preview image is used for
faster display of the data, or when the server application isn't
available. This can cause a massive overhead. If you're storing jpeg
images the uncompressed preview can be ten or twenty times the actual
image size, causing the size of the database to rocket."...
So, when you drop an image onto a form in MS Access, uncompressed image data is saved to the system tables. This is actual uncompressed table data, so a compact and repair may offer little help.
The common workaround seems to be to store the path to the image in a database table, and use that path to load the image on the form.
I don't know WHY (and I don't care), but I've noticed that behaviour as well. My workaround for company logos and the like is to insert the image in ONE form, which I then insert as a subform wherever I need it. This has the added benefit that if the logo changes one day, there is only one place to update.
I am using sample code from the Microsoft Windows SDK (CameraCapture, found in C:\Program Files\Windows Mobile 6 SDK\Samples\PocketPC\CPP\win32\CameraCapture) to capture an image on Windows Mobile. The program saves the image to a file, but I am interested in storing the image in memory rather than saving it to storage.
Any suggestions?
I don't have access to that particular sample, but I assume the project is using the CameraCaptureDialog class.
http://msdn.microsoft.com/en-us/library/microsoft.windowsmobile.forms.cameracapturedialog_members.aspx
Unfortunately, using this class will only return the image path. You don't state why you need to keep it in memory rather than on disk, but if your intention is to do some basic processing, then you could just load it from disk once captured.
using (CameraCaptureDialog cameraCapture = new CameraCaptureDialog())
{
    // Only continue if the user actually took a picture
    if (cameraCapture.ShowDialog() == DialogResult.OK)
    {
        // Get the path of the last image taken
        string fileName = cameraCapture.FileName;

        // Load the image from disk into our image object
        Image image = new Bitmap(fileName);
    }
}
A word of caution here: I believe when saved to disk the images will be in a compressed format, but when loaded into memory they will be uncompressed. It is very easy to cause an out-of-memory exception when working with images in the Compact Framework, so I wouldn't recommend holding lots of images in memory at once, especially if working with a high-resolution camera.
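If the goal is simply to keep the captured image in memory, one approach (a sketch, assuming the fileName from the snippet above) is to hold the compressed file bytes in a buffer and only decode to a Bitmap when the pixels are actually needed:

using System.Drawing;
using System.IO;

// Read the captured file's compressed bytes into memory;
// FileStream and MemoryStream are available on the Compact Framework.
static byte[] ReadFileIntoMemory(string fileName)
{
    using (FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        byte[] buffer = new byte[(int)fs.Length];
        // Loop because Read is not guaranteed to fill the buffer in one call
        int read = 0;
        while (read < buffer.Length)
        {
            int n = fs.Read(buffer, read, buffer.Length - read);
            if (n == 0) break;
            read += n;
        }
        return buffer;
    }
}

// Usage: hold only the compressed bytes, decode on demand
// byte[] jpegBytes = ReadFileIntoMemory(cameraCapture.FileName);
// using (Image image = new Bitmap(new MemoryStream(jpegBytes))) { /* ... */ }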
We would like to display very large (50 MB plus) images in Internet Explorer. We would like to avoid compression, as compression algorithms are not what CSI would have us believe and the resulting files are too lossy.
As a result, we have come up with two options: Silverlight Deep Zoom or a Flash based solution (such as Zoomify). The issue is that both of these require conversion to a tiled output and/or conversion to a specific file type (Zoomify supports a single proprietary file type, PFF).
What we are wondering is if a solution exists which will allow us to view the image without a conversion beforehand.
PS: I know that you can write an application to tile the images (as needed or after the load process) and output them; however, we would like to do this without chopping up the file.
The tiled approach really is the right way to do it.
Your users don't want to download a 50 MB file before they can start viewing the image. You don't want to spend the bandwidth to serve 50 megs to every user who might only view a fraction of your image.
If you serve the whole file, users will eventually be able to load and view it, but it won't run smoothly for most of them.
There is no simple non-tiled way to serve just a portion of an image unless you want to use a server-side library like ImageMagick or PIL to extract a specific subset of the image for each user. You probably don't want to do that, because it will place a significant load on your server.
Alternatively, you might use something like Google's map tool to provide zooming and scaling. Some comments on doing that are available here:
http://webtide.wordpress.com/2008/08/27/custom-google-maps/
Take a look at OpenSeadragon. To make an image work with OpenSeadragon, you should generate one of the zoomable image formats mentioned here, then follow the getting-started guide here.
The browser isn't going to smoothly load a 50 meg file; if you don't chop it up, there's no reasonable way to make it not lag.
If you don't want to tile, you could have the server open the file and render a screen-sized view of the image for display in the browser at the particular zoom resolution requested. That way you aren't sending 50 MB files across the line when someone only wants an overview of the image. That is: the browser requests a set of coordinates and an output size in pixels; the server opens the larger image, creates a smaller image that fits the desired view, and sends that back to the web browser.
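A minimal sketch of that server-side step, assuming System.Drawing is available on the server and using hypothetical names (region, outputSize) for the values parsed from the browser's request:

using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;

// Render the requested region of a large source image at the requested
// output size; region and outputSize come from the browser's request.
static byte[] RenderView(string sourcePath, Rectangle region, Size outputSize)
{
    using (var source = new Bitmap(sourcePath))
    using (var view = new Bitmap(outputSize.Width, outputSize.Height))
    using (var g = Graphics.FromImage(view))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        // Scale the requested region of the big image into the small view
        g.DrawImage(source,
            new Rectangle(0, 0, outputSize.Width, outputSize.Height),
            region, GraphicsUnit.Pixel);

        using (var ms = new MemoryStream())
        {
            view.Save(ms, ImageFormat.Jpeg); // bytes to send back to the browser
            return ms.ToArray();
        }
    }
}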
As for compression, you say it's too lossy, but if that's what you are seeing, you are probably using the wrong compression algorithm or settings for the type of image you have. The JPEG format has quality settings to control lossiness, and PNG compression is lossless (the pixels you get after decompressing are the exact values you had before compression). So consider changing the compression you use, and don't just rely on the default settings in an image editor.
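For example, with System.Drawing you can choose the JPEG quality explicitly instead of taking an editor's default; a sketch (the helper name and the 0-100 quality value are illustrative):

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

static void SaveJpegWithQuality(Image image, string path, long quality)
{
    // Find the built-in JPEG encoder
    ImageCodecInfo jpegEncoder = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);

    // Quality runs 0-100; higher means less lossy but larger files
    using (var parameters = new EncoderParameters(1))
    {
        parameters.Param[0] = new EncoderParameter(Encoder.Quality, quality);
        image.Save(path, jpegEncoder, parameters);
    }
}

// e.g. SaveJpegWithQuality(bigImage, "out.jpg", 90L);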