Oracle JPEG compression - file sizes - image

We have an unsupported Oracle application running 9i as the database server and 10g on the application server.
As part of the application we can load and save photos (JPEG) within the database. We have started using the photos for different purposes, and I have noticed a huge difference in file size when comparing the preload file with the exported image. Can anyone explain why this is happening?
As an example, when we load a 10 MB file, the photo is only just over 1 MB when extracted; on average there is an 84% decrease in the exported file size compared with the original photo. The image dimensions are unchanged, as is the DPI. Any ideas why the exported images are significantly smaller than the images loaded into the database?
Thanks,
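
One quick way to narrow this down (a diagnostic sketch of my own; the APP_PHOTOS table, PHOTO column, and connection details are hypothetical, and the python-oracledb driver is assumed) is to compare the file on disk with DBMS_LOB.GETLENGTH of the stored BLOB. If the stored length already matches the small exported size, the image is being re-encoded at load time; if it still matches the original, the shrinkage happens on export.

    import os
    import oracledb

    conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orcl")

    def compare_sizes(photo_id: int, original_path: str) -> None:
        # Length of the BLOB as actually stored in the database.
        with conn.cursor() as cur:
            cur.execute(
                "SELECT dbms_lob.getlength(photo) FROM app_photos WHERE id = :1",
                [photo_id],
            )
            stored = cur.fetchone()[0]
        disk = os.path.getsize(original_path)
        print(f"on disk: {disk} bytes, stored: {stored} bytes")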

Related

Reducing the size of oracle database docker image

Oracle provides Docker images for its database. Is there a way to reduce the size of the Docker image from the current 5.06 GB? That size is what a build with the defaults produces. Tested on 12.1.0.2-se2.

Is the stripe size in the ORC file dump represented in compressed format?

We have Snappy-compressed ORC files. I'm trying to understand the ORC file dump log. I know that by default the stripe size for ORC is 64 MB, but I see that each stripe in the file is generally around 5-10 MB. I just want to know whether those sizes are reported in compressed form, or whether my stripes really are smaller than the default 64 MB.
Note: I'm using the latest EMR instance in the background and the files are in S3.
Stripe size denotes the size of the in-memory buffer used to convert row storage to column storage before writing to HDFS, so the stripes you see in HDFS are always smaller than the configured stripe size (i.e. 64 MB).
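
To check this on a concrete file, here is a sketch using pyarrow (my choice of tool, not mentioned in the question; the file name assumes a local copy of one of the S3 objects). It lists the stripes and the rows each one holds; the configured stripe size is an uncompressed in-memory buffer, so the compressed on-disk stripes reported by the dump are normally well below 64 MB.

    from pyarrow import orc

    f = orc.ORCFile("part-00000.orc")  # hypothetical local copy
    print(f"stripes: {f.nstripes}, total rows: {f.nrows}")
    for i in range(f.nstripes):
        batch = f.read_stripe(i)  # one stripe materialized as a RecordBatch
        print(f"stripe {i}: {batch.num_rows} rows")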

Oracle small blob vs large blob

I would like to know the better way of handling large files, such as 3-4 gigabytes, as an Oracle SecureFile BLOB.
The scenario is this: I am planning to upload large files to an Oracle database over a WCF service. I split each file into smaller chunks of 200 MB and upload them one by one; on the Oracle side, I append each chunk to a single BLOB until the whole file has been uploaded. This happens sequentially. However, I am thinking of uploading the chunks in parallel to speed up the operation. That will not be possible to handle on the Oracle end as-is, because I can't update a single BLOB from multiple uploads: the bytes would be written in the order they arrive from the service, not in file order. Is it better, then, to insert each chunk as a separate BLOB and merge them into a single BLOB record on the Oracle side once the upload is complete?
Thanks
Jay
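
A minimal sketch of the insert-chunks-then-merge idea, assuming hypothetical tables FILE_CHUNKS(FILE_ID, SEQ, DATA) and FILES(ID, DATA) and the python-oracledb driver. Chunks can be inserted concurrently from separate connections; the merge then runs once, appending in SEQ order, so the byte order is correct no matter which chunk arrived first.

    import oracledb

    def upload_chunk(conn: oracledb.Connection, file_id: int,
                     seq: int, chunk: bytes) -> None:
        # Safe to call concurrently, one connection per worker.
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO file_chunks (file_id, seq, data) VALUES (:1, :2, :3)",
                [file_id, seq, chunk],
            )
        conn.commit()

    # Run once after all chunks have arrived.
    MERGE_PLSQL = """
    DECLARE
      dest BLOB;
    BEGIN
      SELECT data INTO dest FROM files WHERE id = :file_id FOR UPDATE;
      FOR c IN (SELECT data FROM file_chunks
                 WHERE file_id = :file_id ORDER BY seq) LOOP
        DBMS_LOB.APPEND(dest, c.data);
      END LOOP;
    END;
    """

    def merge_chunks(conn: oracledb.Connection, file_id: int) -> None:
        with conn.cursor() as cur:
            cur.execute(MERGE_PLSQL, file_id=file_id)
        conn.commit()

Only the final merge is serial; the network transfer, which dominates for multi-gigabyte files, is the part that runs in parallel.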

SSRS - Unzip image varbinary(max) data and display

I'm working with a database-driven application that lets users upload images, which are then zipped and stored in the database as varbinary(max). I am now trying to display such an image within an SSRS report (using BI 2005).
How can I convert the file data (which is 65,438 characters long when zipped and 65,535 characters when not zipped) into a normal varbinary format that I can then display in SSRS?
Many thanks in advance!
You'll have to embed a reference to a DLL in your project and use a function to decompress the data within SSRS; see for example SharpZipLib. Consider storing the data uncompressed if possible, as the CPU/space trade-off is unlikely to be in your favour here: image data is usually already compressed, so it compresses poorly a second time.
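
If you can change where the work happens, an alternative (my suggestion, not part of the original answer) is to unzip the bytes before they ever reach the report, so SSRS only sees plain image data. A sketch, assuming the varbinary column holds a standard zip archive with a single image entry:

    import io
    import zipfile

    def unzip_image(blob: bytes) -> bytes:
        # Extract the one image entry from the zipped varbinary payload.
        with zipfile.ZipFile(io.BytesIO(blob)) as zf:
            name = zf.namelist()[0]  # single-entry archive assumed
            return zf.read(name)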

Transfer of oracle dump file through mail which allows only 5mb max upload

I want to transfer my Oracle database dump file from one place to another. The dump is 80 MB; even after 7-Zip compression it is still 9 MB, but mail allows me to upload a maximum of 5 MB. Can I break my dump file into pieces? At the same time, I don't want to lose the key structure in the database.
P.S. All other mail services are blocked, and cloud storage is blocked as well.
To meet the constraints of your network, you can have the export itself create dump files of 5 MB (or smaller):
exp user/pass FILE=D:\P1.dmp,E:\P2.dmp FILESIZE=5m LOG=splitdump.log
I have not tried the above syntax, but I have tried this one, which uses a substitution variable so that you need not worry about how many dump files to specify beforehand. It automatically generates as many dump files as needed, of the requisite size:
expdp user/pass tables=test directory=dp_dir dumpfile=dump%u.dmp filesize=5m
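
If the dump (or the 9 MB 7-Zip archive) already exists, a generic splitter works as well; a minimal sketch, not Oracle-specific. Because the parts are reassembled byte for byte, the dump, and the key structure inside it, is preserved exactly. Note that with the expdp approach above, impdp accepts the same dumpfile=dump%u.dmp wildcard, so those pieces never need manual reassembly.

    from pathlib import Path

    PART = 5 * 1024 * 1024  # the 5 MB mail limit

    def split(path: str) -> int:
        # Write path.part000, path.part001, ... and return the part count.
        data = Path(path).read_bytes()
        n = 0
        for offset in range(0, len(data), PART):
            Path(f"{path}.part{n:03d}").write_bytes(data[offset:offset + PART])
            n += 1
        return n

    def join(path: str, nparts: int) -> None:
        # Concatenate the mailed parts back into the original file.
        with open(path, "wb") as out:
            for i in range(nparts):
                out.write(Path(f"{path}.part{i:03d}").read_bytes())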
