What is the most efficient way to transfer pictures over Wi-Fi from FlashAir in Java 8?

I'm creating a program that reads pictures (JPG, max size about 10 MB per file) from a FlashAir card as soon as they're taken, displays them on a bigger screen for review and saves them to a local folder. It is paramount to reduce the time from the moment a picture is taken until it is displayed to the user, and to prevent loss of quality (they are macro pictures). Now, the camera works with JPG, so changing that is not an option for the moment. All the pictures must be saved locally at the maximum possible quality.
I was wondering what would be the best way to achieve this. Since the FlashAir card is in the camera and moves around, the bottleneck will probably be the wireless transfer (max speed is 54 Mbit/s).
The picture could be displayed within the Java app or sent to a different app for editing, but I want to reduce I/O operations (I don't want to have to re-read the picture once it is saved locally just to display it).
What is the best way to achieve this using pure Java 8 classes?
My test implementation uses the ImageIO.read() and ImageIO.write() methods. The problems I have with this approach are that it takes a long time for the picture to be displayed (it is actually read back from the destination folder) and that the image is re-encoded and compressed, losing quality compared to the original file on the SD card.
I feel there should be a way to transfer the bytes very efficiently over the network first and then run two parallel processes: one saving the untouched bytes to disk, the other decoding and displaying the image (which could then potentially be edited and saved to a different location).
I don't need a fully working example. My main concern is which Java 8 I/O classes are best suited for this job and whether my approach is the right one to achieve these results.
Edit
After some research I was thinking of using a ReadableByteChannel to store the picture bytes in a ByteBuffer and then pass copies of it to two jobs running in parallel: the one saving the bytes would use a FileChannel, and the one displaying the image would use them to create an ImageIcon.
I don't know if there is a better/recommended approach.
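That plan is workable with plain Java 8 classes. Below is a minimal sketch along those lines (the FlashAir URL, file names and fallback buffer size are made-up placeholders; the card exposes its contents over HTTP): the JPEG crosses the network exactly once, the copy written to disk is byte-for-byte identical to the original, and decoding for display runs in parallel with the save.

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.net.URL;
    import java.net.URLConnection;
    import java.nio.ByteBuffer;
    import java.nio.channels.Channels;
    import java.nio.channels.FileChannel;
    import java.nio.channels.ReadableByteChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.CompletableFuture;
    import javax.imageio.ImageIO;
    import javax.swing.ImageIcon;

    public class FlashAirTransfer {

        public static void main(String[] args) throws Exception {
            // Hypothetical URL: FlashAir serves the card contents over HTTP.
            URL url = new URL("http://flashair/DCIM/100__TSB/IMG_0001.JPG");

            // 1. Single network read: pull the raw JPEG bytes into a ByteBuffer.
            URLConnection conn = url.openConnection();
            int length = conn.getContentLength();                    // files are ~10 MB at most
            ByteBuffer buffer = ByteBuffer.allocate(length > 0 ? length : 16 * 1024 * 1024);
            try (ReadableByteChannel in = Channels.newChannel(conn.getInputStream())) {
                while (buffer.hasRemaining() && in.read(buffer) != -1) {
                    // keep filling until EOF or the buffer is full
                }
            }
            buffer.flip();
            byte[] jpegBytes = new byte[buffer.remaining()];
            buffer.get(jpegBytes);

            // 2. Two parallel jobs over the same bytes: save untouched, decode for display.
            CompletableFuture<Void> save = CompletableFuture.runAsync(() -> {
                try (FileChannel out = FileChannel.open(Paths.get("IMG_0001.JPG"),
                        StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                        StandardOpenOption.TRUNCATE_EXISTING)) {
                    ByteBuffer src = ByteBuffer.wrap(jpegBytes);     // original bytes, no re-encoding
                    while (src.hasRemaining()) {
                        out.write(src);
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });

            CompletableFuture<ImageIcon> show = CompletableFuture.supplyAsync(() -> {
                try {
                    BufferedImage img = ImageIO.read(new ByteArrayInputStream(jpegBytes));
                    return new ImageIcon(img);                       // hand off to the Swing EDT to display
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });

            CompletableFuture.allOf(save, show).join();
            System.out.println("Saved original and decoded a " + show.join().getIconWidth()
                    + "x" + show.join().getIconHeight() + " image for display");
        }
    }

Here the bytes are copied out of the ByteBuffer once and then shared read-only between the two jobs, which avoids duplicating the buffer. In a real application the ImageIcon would be handed to the Swing event dispatch thread (SwingUtilities.invokeLater) rather than printed, and the fixed fallback buffer size only matters when the server does not report a Content-Length.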

Related

LabVIEW - Store very large 3D array (array of images)

I am working on a LabVIEW project in which I have to process some video (for example 5000 images of 640x480 pixels, so a lot of data to process). Using a for loop I am processing one image at a time, so on that side everything is okay. On the other side, though, I have to store the results so that, after the processing, I can visualise the result for whichever image is requested. Until now I have always worked with arrays, but here LabVIEW does not have enough memory to do the job (which is quite normal).
Is there a better way to deal with the data, using some other solution such as clusters, saving the images to the local disk, etc.?
For information, the processing is quite long (several minutes for a single image) and I don't have to save the results until the user asks for them, so I am anticipating the case where the whole video is processed without the results being saved.
Thank you in advance.
How much RAM do you have? Assuming 4 bytes per pixel, 5000 640 x 480 images would take about 6 GB, so if you have 16 GB of RAM or more you might be able to handle this data in RAM, as long as you're using 64-bit LabVIEW and you're careful about how memory is allocated - read through VI Memory Usage in the help, for a start.
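For reference, that estimate is just:

    5000 images x 640 x 480 pixels x 4 bytes per pixel = 6,144,000,000 bytes ≈ 6.1 GB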
Alternatively, you can look at storing the data on disk in a format that lets you access an arbitrary chunk of the file. I haven't used it much myself, but HDF5 seems to be the obvious choice - if you're on Windows you can install the LiveHDF5 library from the VI Package Manager.
Did you consider storing the images as files in the system temporary directory and deleting them afterwards? Since the processing takes a long time per image, it should be easy to keep an "image queue" of 5 images loaded in memory at all times (to avoid any performance drop from loading a file right before processing it) while the rest sit on the disk.

Rapid load and display of still images

I'm developing an app (Windows) that will allow essentially random access to about 1000 1920x1080 images. In effect this is a movie, but using stills and not presented sequentially -- the user can "scrub" to any image very rapidly.
I gather there are three factors to trade off: load time, decode time (if needed) and presentation time. I can specify the hardware, within limits, so SSD and good graphics card can be assumed.
Compressed images (PNG, JPG, etc.) will load more quickly but add a decode step. Raw or BMP images will be slower to load but avoid the decode step. Presentation time should be the same in all cases once the image is in the proper form, right?
Is there an obviously superior approach, codec, library, hardware etc? Can anyone point to a study of the tradeoffs, or offer personal experience as a guide?
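One practical way to settle this for specific hardware is to time the load and the decode steps separately. A rough sketch of such a measurement (in Java, purely for illustration; the frame file name is a placeholder):

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import javax.imageio.ImageIO;

    public class LoadVsDecode {
        public static void main(String[] args) throws Exception {
            // Hypothetical test frame; try the same frame saved as PNG, JPG and BMP.
            Path file = Paths.get(args.length > 0 ? args[0] : "frame_0001.png");

            long t0 = System.nanoTime();
            byte[] raw = Files.readAllBytes(file);                           // load: pure disk I/O
            long t1 = System.nanoTime();
            BufferedImage img = ImageIO.read(new ByteArrayInputStream(raw)); // decode: pure CPU
            long t2 = System.nanoTime();

            System.out.printf("%s: load %.1f ms, decode %.1f ms (%dx%d)%n",
                    file, (t1 - t0) / 1e6, (t2 - t1) / 1e6, img.getWidth(), img.getHeight());
        }
    }

For raw or BMP frames the decode step is close to a straight memory copy, so the comparison comes down to whether the saved disk time outweighs the added CPU time for compressed formats.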

Writing multiple files vs. writing one big file [on a solid-state drive]

(I was not able to find a clear answer to my question, maybe I used the wrong search term)
I want to record many images from a camera, with either no compression or lossless compression, on a not-so-powerful device with a single solid-state drive.
After investigating, I have decided that, if there is any compression, it will simply be PNG, image by image (this is not part of the discussion).
Given these constraints, I want to be able to record from the camera at the maximum possible frequency. The bottleneck is the speed of the (single) drive. I want to use RAM for queuing and the few available cores for compressing the images in parallel, so that there is less data to write.
Once the data is compressed, do I gain any write speed by streaming all the bytes into one single file, or, given that I am working with a solid-state drive, can I just write one file per image (say about 1 or 2 MB each) and still work at the maximum disk bandwidth (or very close to it, say >90%)?
I don't know if it matters, but this will be done using C++ and its libraries.
My question is "simply" whether, by writing my output to a single file instead of many 2 MB files, I can expect a significant benefit when working with a solid-state drive.
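For concreteness, here is a rough sketch of the pipeline shape being described (written in Java only because the rest of this page uses it; the queue sizes, thread count and file names are arbitrary placeholders, and the same bounded-queue structure maps directly onto C++ threads): a bounded queue of raw frames, a few compressor workers, and a single writer thread that owns the drive.

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import javax.imageio.ImageIO;

    public class RecorderPipeline {

        // Raw frames from the camera wait here; the bound caps RAM use.
        static final BlockingQueue<BufferedImage> rawFrames = new ArrayBlockingQueue<>(64);
        // PNG-encoded frames wait here for the single writer thread.
        static final BlockingQueue<byte[]> encodedFrames = new ArrayBlockingQueue<>(64);

        public static void main(String[] args) {
            // A few compressor threads: raw frame in, PNG bytes out.
            int compressors = Math.max(1, Runtime.getRuntime().availableProcessors() - 2);
            for (int i = 0; i < compressors; i++) {
                new Thread(RecorderPipeline::compressLoop, "compress-" + i).start();
            }
            // Exactly one thread touches the drive, so writes never compete with each other.
            new Thread(RecorderPipeline::writeLoop, "writer").start();

            // The camera capture loop (not shown) would call rawFrames.put(frame).
        }

        static void compressLoop() {
            try {
                while (true) {
                    BufferedImage frame = rawFrames.take();
                    ByteArrayOutputStream png = new ByteArrayOutputStream();
                    ImageIO.write(frame, "png", png);            // lossless, CPU-bound
                    encodedFrames.put(png.toByteArray());
                }
            } catch (InterruptedException | IOException e) {
                Thread.currentThread().interrupt();              // shut down quietly in this sketch
            }
        }

        static void writeLoop() {
            int n = 0;
            try {
                while (true) {
                    byte[] data = encodedFrames.take();
                    // One ~1-2 MB file per frame; switching to a single big file
                    // would only change this one call to an appending write.
                    Files.write(Paths.get(String.format("frame_%06d.png", n++)),
                            data, StandardOpenOption.CREATE_NEW);
                }
            } catch (InterruptedException | IOException e) {
                Thread.currentThread().interrupt();              // shut down quietly in this sketch
            }
        }
    }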
There's a benefit, but not a significant one. The file system driver for a solid-state drive already knows how to distribute a file's data across many non-adjacent clusters, so doing it yourself doesn't help; that ability is needed anyway to fit a large file on a drive that already contains files. By breaking the data up into many files, you force extra writes to add the directory entries for those files.
The type of solid-state drive matters, but this scattering is in general already done by the drive to implement "wear-leveling" - in other words, the data is intentionally scattered across the drive. This avoids wearing out flash memory cells: they can only be written a limited number of times before they physically wear out and fail. Traditionally that was only guaranteed for 10,000 writes; they've gotten better, but you will of course exercise this. Also notable is that flash drives are fast to read but slow to write, which matters in your case.
There's one notable advantage to breaking the image data up into separate files: it is easier to recover from a drive error, whether a disastrous failure or the drive simply filling up before you stop in time - you don't lose the entire shoot. The downside is that it is inconvenient for whatever program reads the images off the drive, since it has to glue them back together. That is an important design goal too: if you make it too impractical, with a non-standard uncompressed file format, too slow to transfer, or just too inconvenient in general, it will simply not get used very often.

XAP file from XNA game is huge, how can I compile resources without images being so big?

I have typically been writing XNA games for Windows Phone 7 and set all my content to a build action of Compile, which is the default. What I've noticed after finishing a new project is that my XAP file is now huge: it seems to have taken 15 MB worth of images and blown them up to 200 MB. Is there any way to make the build smaller while keeping the images compiled? From what I've read, it compiles images basically as full bitmaps. What other direction can I take to resolve this? Forcing users to download a 200 MB app seems unfair when it should take up 15-20 MB at most.
The XNA Content Pipeline basically stores images the way they will be used on the GPU: either as an uncompressed bitmap, or DXT-compressed (which doesn't compress them by much).
So if your original files were in JPEG format (or, to a lesser extent, PNG), you will find that they are much smaller than the built XNB files.
So the answer is to distribute your original JPEG and PNG files and load them with Texture2D.FromStream. Note that this uses more CPU power to convert them into the right format at runtime (although I've heard reports of faster loading in some cases, because less data is being transferred). You will also have to handle premultiplied alpha manually yourself (along with anything else the content pipeline was handling for you).
Another thing you might want to look into is turning on compression for your sound effects. By default they are uncompressed. See this answer for details.
For more info, this article looks helpful.

Comparing 2 kernel images and flashing the diff to FLASH memory

I have existing old-version images (kernel image, filesystem image, application images) in my NAND flash.
I want to put a new, modified kernel or application image onto the NAND flash, replacing the older one.
But the new images are 90% identical to the old ones,
so I don't want to transfer the entire new image.
Instead, I am thinking of some kind of comparison between the old and new images, so that I only send the difference to the flash memory and avoid transferring a large amount of data.
Can this be done? I need some guidance on how to do it.
It's certainly possible; however, with flash you'll have to take into account the difference between the erase sector size and the write sector size (typically an erase block is multiple write sectors in size).
This would be very difficult, for two reasons.
The Linux kernel is stored compressed, so a small change can cause all the compression output following that point to be different.
If a modification changes the size of some code, everything stored after that will have to shift forward or back.
In theory, you could create your own way of linking and/or compressing the kernel so that code stays in one place and compression happens in a block-aware way, but that would be a lot of work -- probably not worth it just to save a few minutes of erase/write time during kernel upgrades.
