I made a few forms in Access 2010 and added the company logo to each form header. The picture is a .jpg of about 70 KB. I don't know why the size of the .mdb immediately increased from 4 MB to 12 MB (a few forms, all with the same logo). Is there maybe some option for image compression?
Taken from http://office.microsoft.com/en-us/access-help/store-images-in-a-database-HP005280225.aspx
..."However, embedding images can rapidly inflate the size of your
database and cause it to run slowly. This is especially true if you
store GIF and JPEG files, because OLE creates additional bitmap files
that contain display information for each of your image files, and
those additional files can be larger than your original images. In
addition, this method only supports the Windows Bitmap (.bmp) and
Device Independent Bitmap (.dib) graphic file formats. If you want to
display other common types of image files, such as GIF and JPEG
images, you have to install additional software."...
To explain how these bitmap files are stored, the link below offers more explanation than the Microsoft site:
Taken from http://www.ammara.com/support/kb/showkbe5cc.html
..."OLE Linking & Embedding is a technique used by Microsoft Access to
store 'Objects' in database tables. The technique relies on the
associated external application to store, present and edit the data.
In some cases an additional uncompressed 'preview' image is also saved
in the table (even when linking). This preview image is used for
faster display of the data, or when the server application isn't
available. This can cause a massive overhead. If you're storing jpeg
images the uncompressed preview can be ten or twenty times the actual
image size, causing the size of the database to rocket."...
So, when you drop an image onto a form in MS Access, uncompressed image data is saved to the system tables. This is actual uncompressed table data, so a compact and repair may offer little help.
The common workaround seems to be to store the path to the image in a database table, and use that path to load the image on the form.
I don't know WHY (and I don't care), but I have noticed that behaviour as well. My workaround for company logos and the like is to insert the image in ONE form, which I then embed as a subform wherever I need it. This has the added benefit that if the logo changes one day, there is only one place to update.
I noticed that images are sometimes sliced up in PDFs.
Steps:
insert an image with a high resolution (3000x1800) into a .docx
use Word's "Microsoft Print to PDF" option to convert to PDF
extract all images with pdfimages or pymupdf
Result:
The image is sliced horizontally into three images
Questions:
What exactly happens in the transition from .docx to PDF (or in general in the process of producing a PDF) that makes the converter slice the image into three images instead of one?
Do the individual XObjects of the sliced images contain information which says that these three images originally belonged to one?
How do I know how the images are sliced (horizontally / vertically)? And what if originally there were two images inserted into the .docx file and both of them are sliced - can you tell whether slice x belongs to original image y or z?
So, as you have found out: because the code which generates the PDF chose to do so.
The technical reasons may be various - it could be that historically there were printers which only had so much memory and needed images of limited size when printing, and someone at some point, when writing the PDF export code present in Microsoft Office, chose to apply this limit.
Anyway, technically, as noted in the comments, an image in a PDF file can be composed of an unlimited number of smaller images collated together.
Now, the second part, and your actual question: to know whether images in a PDF file belong together as a single original image, one would need a custom extractor tool to check the geometry of all images in the document and find out which images have no margins or boundaries between them. That would not be that hard to do for well-behaved files (and we can't know whether MS Office-generated files are well behaved: there are ways to obfuscate image positioning by specifying it indirectly). The metadata in the image parts may or may not contain information that would allow one to recompose the original image: it is up to the code generating the PDF whether to include such metadata - but the geometry can't lie in this case: if the final document visually presents a single image, it is possible to detect that when fetching the images.
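As a rough illustration of that geometry check, here is a minimal sketch using PyMuPDF (fitz). It lists the rectangle each image XObject is drawn into and flags pairs whose rectangles share their left/right edges and touch vertically, which is what horizontal slices of one original image would look like. The filename "sliced.pdf" and the 1-point tolerance are assumptions for the example, not anything mandated by the PDF format.

    import fitz  # PyMuPDF

    doc = fitz.open("sliced.pdf")
    for page in doc:
        placements = []
        for img in page.get_images(full=True):
            xref = img[0]
            for rect in page.get_image_rects(xref):
                placements.append((xref, rect))
        # Slices of one original typically share the same left/right edges and
        # their bottom/top edges touch (within rounding error).
        placements.sort(key=lambda p: (p[1].x0, p[1].y0))
        for (xref_a, a), (xref_b, b) in zip(placements, placements[1:]):
            touching = (abs(a.x0 - b.x0) < 1 and abs(a.x1 - b.x1) < 1
                        and abs(a.y1 - b.y0) < 1)
            if touching:
                print(f"page {page.number}: images {xref_a} and {xref_b} look like "
                      f"vertically adjacent slices of one picture")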
This only happens on the product pages, with images taller than roughly 500px. Caching is disabled. Products display correctly at smaller sizes, but I need a solution that doesn't require resizing images before uploading.
I believe it's something to do with having used multiple image resizing programs and some of the meta information in the image.
Thanks
It sounds like there is EXIF data in the JPEG which records which way 'up' is. Either this info is being ignored when you upload but not on your PC - explaining why the image looks the right way up when you view it on your desktop but the wrong way up in Magento - or vice versa.
Can you use an art program or bulk converter like XnView to either apply or remove the EXIF data before uploading? Then you might need to manually rotate some images.
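If a scripted pass is easier than a GUI tool, here is a minimal sketch of the "apply then strip" idea using Pillow; the filenames are placeholders and the quality setting is just an example.

    from PIL import Image, ImageOps

    img = Image.open("product_in.jpg")
    img = ImageOps.exif_transpose(img)       # bake the EXIF Orientation tag into the pixels
    img.save("product_out.jpg", quality=90)  # saved without the original EXIF block

After this, the file displays the same way up everywhere, because the rotation no longer depends on whether the viewer honours EXIF.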
This may well be a bit of an open-ended question.
The site I am working on needs to be optimised for performance. One of the key areas is to optimise the file sizes of the images used on the site.
Unfortunately, these images are being created by employees who do not have the knowledge required to create images for the web, and it is my job to produce a set of guidelines for them to use.
I was wondering whether there is any resource/guidelines/literature regarding typical file sizes for images of different dimensions, as I would like to include something like this to help them ensure their images are being created properly.
Any info would be greatly appreciated.
Thanks in advance
I can't answer the opinion question, but I can suggest some guidelines that will keep your images smaller.
First off, if they're using Photoshop to edit their images, it's likely they're storing a whole bunch of crap in the headers (digital papertrail, EXIF data, and such). Also, folks will frequently save in too high a bit depth.
For novice users, trying to explain why they need to use "save for web" is more likely to confuse them. Instead, just point them at:
http://www.smushit.com/ysmush.it/
This site is rather handy - it will compress all the images on a page you specify, or you can upload the images.
You should strongly consider writing some guidelines about where images are stored as well. It's frequently very beneficial to have your static image content stored on several servers, apart from your dynamic content. Most browsers will only download a limited # of files at a time from any given website (usually it's 2).
Unless there's a good reason not to, all your images should be cached using one of the HTTP cache techniques (Expires, ETags, etc.).
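To make the Expires/ETags point concrete, here is a sketch of the kind of response headers one might attach to a static image. The helper function and the 30-day lifetime are just illustrative choices, not something any particular server requires.

    import hashlib
    import time
    from email.utils import formatdate

    def cache_headers(image_bytes, max_age_days=30):
        # A strong validator (ETag) plus a far-future expiry, so repeat visitors hit their cache.
        max_age = max_age_days * 86400
        return {
            "Cache-Control": f"public, max-age={max_age}",
            "Expires": formatdate(time.time() + max_age, usegmt=True),
            "ETag": '"%s"' % hashlib.md5(image_bytes).hexdigest(),
        }

    # Example: cache_headers(open("logo.png", "rb").read())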
Good luck.
72 dpi as a resolution and either JPEG or PNG formats work best.
Try to use images at the exact pixel size at which they will be displayed. This is specified by the image's height and width attributes.
You can set the output quality of a JPEG image, which will also save file size, although there is a trade-off against image quality.
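For example, a quick sketch with Pillow that covers both points - resize to the dimensions actually used on the page and pick a JPEG quality. The filenames and the 800x600 target are placeholders.

    from PIL import Image

    img = Image.open("photo_original.jpg")
    img = img.resize((800, 600))  # match the width/height the page will display
    img.save("photo_web.jpg", quality=75, optimize=True)  # lower quality = smaller file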
I hope this is of use.
We would like to display very large (50 MB plus) images in Internet Explorer. We would like to avoid compression, as compression algorithms are not what CSI would have us believe they are, and the resulting files are too lossy.
As a result, we have come up with two options: Silverlight Deep Zoom or a Flash based solution (such as Zoomify). The issue is that both of these require conversion to a tiled output and/or conversion to a specific file type (Zoomify supports a single proprietary file type, PFF).
What we are wondering is whether a solution exists which will allow us to view the image without a conversion beforehand.
PS: I know that you can write an application to tile the images (as needed or after the load process) and output them; however, we would like to do this without chopping up the file.
The tiled approach really is the right way to do it.
Your users don't want to download a 50 MB file before they can start viewing the image. You don't want to spend the bandwidth to serve 50 MB to every user who might only view a fraction of your image.
If you serve the whole file, users will eventually be able to load and view it, but it won't run smoothly for most of them.
There is no simple non-tiled way to serve just a portion of an image unless you want to use a server-side library like ImageMagick or PIL to extract a specific subset of the image for each user. You probably don't want to do that because it will place a significant load on your server.
Alternatively, you might use something like Google's map tool to provide zooming and scaling. Some comments on doing that are available here:
http://webtide.wordpress.com/2008/08/27/custom-google-maps/
Take a look at OpenSeadragon. To make an image work with OpenSeadragon, you first need to generate one of the zoomable image formats it supports, and then follow its getting-started guide.
The browser isn't going to smoothly load a 50 meg file; if you don't chop it up, there's no reasonable way to make it not lag.
If you don't want to tile, you could have the server open the file and render a screen-sized view of the image for display in the browser at the particular zoom resolution requested. This way you aren't sending 50 MB files across the line when someone only wants to get an overview of the image. That is, the browser requests a set of coordinates and an output size in pixels, the server opens the larger image and creates a smaller image that fits the desired view, and sends that back to the web browser.
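A minimal sketch of that request/render cycle with Pillow, where the coordinates, output size and the "huge_source.tif" filename are all placeholders rather than anything from the original setup:

    from PIL import Image

    def render_view(source_path, left, top, right, bottom, out_w, out_h):
        with Image.open(source_path) as img:
            region = img.crop((left, top, right, bottom))  # region the browser asked for
            return region.resize((out_w, out_h))           # scale to the requested viewport

    # Example: a 1024x768 view of the top-left corner of the large image
    # render_view("huge_source.tif", 0, 0, 8000, 6000, 1024, 768).save("view.jpg")

The heavy lifting (decoding a 50 MB source per request) is exactly the server load the previous answer warns about, so some caching of rendered views would probably be needed in practice.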
As far as compression goes, you say it's too lossy, but if that's what you are seeing you are probably using the wrong compression algorithm or setting for the type of image you have. The JPEG format has quality settings to control lossiness, and PNG compression is lossless (the pixels you get after decompressing are the exact values you had prior to compression). So consider changing the compression you are using, and don't just rely on the default settings in an image editor.
I've got a Mac application that I've developed.
I use it to create SQLite files that are bundled with my iPhone app. The Mac app uses Core Data and bindings and is working fine except for one "weird" issue.
I use an NSImageView (or Image Well) to allow me to drag and drop jpg files.
This is bound through to an optional binary attribute in my model class.
For some reason, when I drag and drop a 4 KB jpg file onto the image well and save the sqlite file, the data saved to the binary column is over 15 times larger than it should be.
Whereas if I use an application like SQLiteManager and add the image to the row in the database, the binary data is the correct (expected) size.
File: 4 KB jpg
Actual size: 2371 bytes
Persisted via Core Data size: 35810 bytes
Can anyone give me a suggestion as to why this might be happening?
Do I need to set some setting in Interface Builder or write some custom code?
Create a dump from the sqlite3 file and check which content uses your space. I use plain sqlite3 to store an image cache in Galileo, and as far as I know the database size there is roughly equal to the total size of the images.
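If you'd rather query than read a dump, here is a sketch of checking the blob sizes directly. Core Data usually maps an entity to a table named Z<ENTITY> with columns Z<ATTRIBUTE>, but the ZPHOTO/ZIMAGE names and the filename below are hypothetical - substitute whatever .schema shows for your file.

    import sqlite3

    con = sqlite3.connect("bundle.sqlite")  # placeholder filename
    for pk, nbytes in con.execute("SELECT Z_PK, length(ZIMAGE) FROM ZPHOTO"):
        print(f"row {pk}: image blob is {nbytes} bytes")
    con.close()

If the blobs themselves come back at around 35 KB, the inflation happened before the data reached SQLite, not in SQLite's storage.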