How to display JPEG image on microcontroller LCD?

I am currently developing some firmware on the STM3210E development board, which has an ARM Cortex-M3 processor. It is interfaced to a 240x320 LCD. After going through the demo firmware, I realised that images are encoded in 32-bit variables (correct me if I am wrong) stored in an array as shown below.
uint32_t STM32Banner[50] = {0x6461EB7A, 0x646443BC, 0x64669BFE, 0x6468F440, 0x646B4C82,
0x646DA4C4, 0x646FFD06, 0x64725548, 0x6474AD8A, 0x647705CC,
0x64795E0E, 0x647BB650, 0x647E0E92, 0x648066D4, 0x6482BF16,
0x64851758, 0x64876F9A, 0x6489C7DC, 0x648C201E, 0x648E7860,
0x6490D0A2, 0x649328E4, 0x64958126, 0x6497D968, 0x649A31AA,
0x649C89EC, 0x649EE22E, 0x64A13A70, 0x64A392B2, 0x64A5EAF4,
0x64A84336, 0x64AA9B78, 0x64ACF3BA, 0x64AF4BFC, 0x64B1A43E,
0x64B3FC80, 0x64B654C2, 0x64B8AD04, 0x64BB0546, 0x64BD5D88,
0x64BFB5CA, 0x64C20E0C, 0x64C4664E, 0x64C6BE90, 0x64C916D2,
0x64CB6F14, 0x64CDC756, 0x64D01F98, 0x64D277DA, 0x64D4D01C};
Could you please explain to me how to convert a JPEG/PNG/BMP image to this format (RGB565)?

You have two choices:
Write your own set of decoders.
Use available free decoders.
The first solution is only really viable for BMP (and perhaps GIF), which is quite a simple format compared to PNG and JPEG. Even so, writing a BMP decoder that gracefully handles all the different versions and special cases of BMP takes quite a bit of work (I have tried it). Hacking together something that can extract the image data from the most common BMP variants is quite easy, though.
The second solution is probably the way to go for the other formats. Most open-source decoders are available under the LGPL or similar, so licensing shouldn't really be a problem. For JPEG images use libjpeg, for PNG use libpng, and for GIF use giflib.
Most of these decoders do not support decoding directly to RGB565, so you will have to write a small converter from RGB888 to RGB565.
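For illustration, such a converter is only a few lines of C. This is a minimal sketch (the function names are my own, and depending on your LCD controller you may additionally need to swap the bytes of each 16-bit word):

#include <stddef.h>
#include <stdint.h>

/* Pack one 8-bit-per-channel pixel into RGB565: keep the top 5 bits of
   red and blue and the top 6 bits of green. */
static inline uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xF8u) << 8) | ((g & 0xFCu) << 3) | (b >> 3));
}

/* Convert 'count' pixels from a packed RGB888 buffer (3 bytes per pixel)
   into an RGB565 buffer (one uint16_t per pixel). */
void convert_rgb888_to_rgb565(const uint8_t *src, uint16_t *dst, size_t count)
{
    for (size_t i = 0; i < count; i++)
        dst[i] = rgb888_to_rgb565(src[3 * i], src[3 * i + 1], src[3 * i + 2]);
}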

Use a program like GIMP to convert the image to an uncompressed BMP (which is what you normally get when you save-as BMP).
A BMP has something like a 54-byte header, then the pixel data. Each row of pixels is either 3 bytes (RGB) or 4 bytes (RGBX) per pixel. Each row is aligned to a 4-byte boundary, so if you have 3 bytes per pixel and multiply that by the width in pixels, and the result is not a multiple of four (say 3 pixels wide * 3 = 9 bytes, as a simple example), then there will be some padding at the end of each row. You know how wide the image is from opening the file in GIMP; you probably want to use GIMP to resize the image to match your LCD screen anyway. The first bytes of data after the header are the pixel in the lower-left corner of the image, so you might need to flip the image in the Y axis, or just start off this way and see what happens.
Knowing the size of your image (from opening it with GIMP), you can do a little math to see whether the size of the file matches what I am saying; if it is dramatically smaller, then there is some compression going on and you need to save again with different BMP settings.
Once you have this figured out, write a simple program to extract the pixels from the BMP and save them in the format you desire; a rough sketch is shown below. Even better, read the code and docs and understand how to program the LCD, and you can get from raw pixels to the LCD without having to go through their specific format/code.
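As a rough sketch only: this little C program assumes the most common case (uncompressed 24-bit BMP, bottom-up row order, little-endian host) and does no error checking, but it shows where the header fields and the row padding come into play. It prints the pixels, top row first, as RGB565 words.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Read a 32-bit little-endian value from the header. */
static uint32_t le32(const uint8_t *p)
{
    return p[0] | (p[1] << 8) | (p[2] << 16) | ((uint32_t)p[3] << 24);
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    uint8_t header[54];
    fread(header, 1, sizeof(header), f);
    uint32_t data_offset = le32(&header[10]);                /* start of pixel data    */
    int32_t  width       = (int32_t)le32(&header[18]);
    int32_t  height      = (int32_t)le32(&header[22]);       /* assumed positive here  */
    uint32_t row_size    = ((uint32_t)width * 3 + 3) & ~3u;  /* rows padded to 4 bytes */

    uint8_t *row = malloc(row_size);
    for (int32_t y = height - 1; y >= 0; y--) {              /* BMP rows are stored bottom-up */
        fseek(f, (long)(data_offset + (uint32_t)y * row_size), SEEK_SET);
        fread(row, 1, row_size, f);
        for (int32_t x = 0; x < width; x++) {
            uint8_t b = row[3 * x], g = row[3 * x + 1], r = row[3 * x + 2]; /* BMP stores BGR */
            uint16_t px = (uint16_t)(((r & 0xF8u) << 8) | ((g & 0xFCu) << 3) | (b >> 3));
            printf("0x%04X,%s", px, ((x + 1) % 8 == 0) ? "\n" : " ");
        }
    }
    free(row);
    fclose(f);
    return 0;
}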

Related

Use Ghostscript 9.21 to convert text to outlines, and how to keep the resolution of the picture

I use Ghostscript to convert text to outlines with the following command:
gswin32c.exe -sDEVICE=pdfwrite -sOutputFile=output.pdf -dQUIET -dNOPAUSE -dBATCH -dNoOutputFonts -f test_new.pdf
It works, but I get a very small output file (from 2.5 MB down to 70 KB), and I find that the picture has become blurred in the PDF.
Adding -dPDFSETTINGS=/default gives the same result.
It's better with -dPDFSETTINGS=/printer or -dPDFSETTINGS=/prepress, but 300 dpi is not enough for me (or for my boss).
Is there any way to keep the original resolution of the picture?
Or how can I set a higher dpi for the images in the output PDF?
The test file is here.
Thanks in advance.
The answer to your question is 'yes' (but see later). Don't use PDFSETTINGS; that sets lots of things all in one go. If you want control, then you need to specify each setting individually.
Rather than use this shotgun approach you need to read the documentation, decide which controls affect areas you want to change, and alter those controls only.
However, image downsampling is not your problem. If you don't use -dPDFSETTINGS, then the PDF file written by Ghostscript contains an image at exactly the same resolution as the image in the original file.
Your problem is that the image is being written with JPEG compression, and JPEG is a lossy compression, so you are losing fidelity. Note that in the original file the image is written uncompressed, which is why it's so large.
It looks like the original image was a JPEG, and the free PDF editor you are using has realised that, so it saved the image uncompressed (I may be giving it too much credit here; it may save all images uncompressed). Applying JPEG to an image which has already been quantised simply amplifies the artefacts.
Instead, you need to specify that you want images compressed with Flate, which is a lossless compression. The documentation for the pdfwrite controls can be found here; you need to change AutoFilterColorImages and ColorImageFilter.
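For example, building on the command from the question, something along these lines should switch the colour images to Flate (check the pdfwrite documentation for the exact switches on your version):
gswin32c.exe -sDEVICE=pdfwrite -sOutputFile=output.pdf -dQUIET -dNOPAUSE -dBATCH -dNoOutputFonts -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode -f test_new.pdf
If the file also contains greyscale images, AutoFilterGrayImages and GrayImageFilter work the same way.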
Note that by not applying JPEG quantisation (a second time) and DCT encoding, the compression is less effective than in your first experiment. For me the output file comes in at just over 600 KB (leaving the font in place, and the text as text, would make it a couple of KB smaller). However, the image is identical, as expected.
Since you are clearly using Ghostscript in a commercial environment, can I just point you at the licence and ask you to check that your usage is compatible with the AGPL, bearing in mind that this covers software as a service usage as well.

How does a computer store images or videos as data, where are they stored, and can I make operations on them?

As you know, a computer stores images as channels and pixels within those channels. Pixel values are something like "00110101", which fills 8 bits in memory. I want to know where exactly those bits are stored in memory, and how I can perform operations on them.
Thanks!
Well, the standard book is Digital Image Processing by Gonzalez and Woods.
Another book, where you can pick up the PDF for free is Image Processing in C by Dwayne Philips - PDF here.
First, you need to get a decent C compiler and development system. Personally I use Mac OS X, but I guess you would want the free edition of Visual Studio on Windows.
Then you need to get started with some simple reading and writing of files and memory allocation. I would go with greyscale images in the NetPBM format, probably just PGM files, described here, as they are the easiest. You can download the NetPBM programs and run them in a Windows Command Prompt to see how they work, and then try to implement them yourself in C. You can also download ImageMagick for Windows and try converting images from colour to greyscale and resizing them like this:
convert input.png -colorspace gray result.jpg
convert input.tif -resize 400x400 result.pgm
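If you then want a first go at implementing something yourself, here is a minimal C sketch that reads a binary (P5) PGM, inverts it, and writes it back out. It assumes the simplest case: no comment lines in the header and a maximum value of 255.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int width, height, maxval;
    FILE *in = fopen("input.pgm", "rb");
    if (!in || fscanf(in, "P5 %d %d %d", &width, &height, &maxval) != 3) return 1;
    fgetc(in);                                    /* consume the single whitespace after the header */

    unsigned char *pixels = malloc((size_t)width * height);
    fread(pixels, 1, (size_t)width * height, in); /* raw 8-bit greyscale samples */
    fclose(in);

    for (long i = 0; i < (long)width * height; i++)
        pixels[i] = (unsigned char)(maxval - pixels[i]);    /* a first "operation": invert */

    FILE *out = fopen("inverted.pgm", "wb");
    fprintf(out, "P5\n%d %d\n%d\n", width, height, maxval);
    fwrite(pixels, 1, (size_t)width * height, out);
    fclose(out);
    free(pixels);
    return 0;
}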
When you have got that, I would move on to colour PPM format and then maybe PNG and/or JPEG. Remember there are libraries for TIF/JPEG/PNG/BMP so don't be afraid to use them.
Finally, move on to displaying images yourself with Windows GDI etc.
Come back to StackOverflow if you get stuck - questions are free!
tl;dr wildly different with different encodings/filesystems/os'es/drivers
Well, that depends on the image format. BMP is one of the easier formats; details of what these files look like can be found, for instance, on Wikipedia.
And to answer "where it's stored": it is stored on permanent storage (hard drive/SSD); where exactly depends on the filesystem (FAT/NTFS/ext etc.).
When an image is to be displayed, it's read into memory, where it can be manipulated, and through some APIs this data can be put into a memory region specifically meant for displaying the current images on your screen.

Animated GIF image to individual TIFF images capturing individual frames

I have a small problem: I have a set of animated GIF images. I want to take individual GIF files and create multiple TIFF images capturing the individual frames.
I am looking to do it in Python/Java.
Help would be appreciated!
You can do this easily from the command line using ImageMagick. It is available for free from here. It has bindings for Perl, C/C++, Python and lots of others, and it comes pre-installed in many Linux distros.
Your command looks like this:
convert -coalesce input.gif %02d.tif
which will produce TIFF format output files, numbered 01.tif, 02.tif etc. according to the frame number.
You can also extract an individual frame, say frame 7, like this:
convert -coalesce input.gif[7] my_favourite.tif
or a sequence of frames, say 3-7 like this:
convert -coalesce input.gif[3-7] frames%02d.tif
Note, however, that when you extract individual frames you may get artefacts, depending on how well compressed your original GIF files are, since they sometimes only store DIFFERENCES between frames. So you may be best advised to extract all the frames and then discard any you don't want.

Save uint16 tiff image as truecolor with Matlab

I am processing microscopy images (in Matlab) in the tiff format, normally uint8 or uint16. Basically I read them, put them in a cell array for processing and then export them in the tiff format either as an image sequence or a stack (using imwrite and either the 'overwrite' or 'append' writemode property of imwrite, respectively). Up to now everything works very well.
The problem I'm having is the following:
When I open the images with ImageJ, they are not in truecolor "RGB" color mode, but rather in composite mode. For example, ImageJ reads the data as 8 bit, which it is, but does not open the image as truecolor (sorry for the bad choice of words; I don't know the right terminology). Hence I have to manually combine the 3 channels together, which is bothersome for large datasets.
Here is a screenshot explaining. On the left is what I would like, i.e. what I obtain if I open the image directly with ImageJ, and on the right is what I currently have after saving images with Matlab and opening them with ImageJ, which I don't want.
The code I'm using to export the image sequence is the following. "FinalSequenceToExport" is the cell array containing the images.
for i = 1:SliceNumber
ExportedName = sprintf('%s%s%d.tiff',fileName,'Z',i);
imwrite(FinalSequenceToExport{i},ExportedName,'tif','WriteMode','overwrite','Compression','none');
end
If I ask Matlab for the size of FinalSequenceToExport{1}, for instance, it gives 512 x 512 x 3.
If I open a given image in the command window and then save it with the same code as above, it does what I want and the resulting image opens as I want in ImageJ. Hence my guess would be that the problem arises from the use of the cell array but I don't understand how.
I hope I've been clear enough. If not please ask for more details.
Thanks for the help!
You need to specify the 'ColorSpace'.
Try this:
imwrite(FinalSequenceToExport{i},ExportedName,...
'tif','WriteMode','overwrite','Compression','none', ...
'ColorSpace', 'rgb');
After revisiting this question, I found the following to work, thanks to the hint from @Ashish:
imwrite(uint8(FinalSequenceToExport{i}/255),...);
I just needed to divide by 255 and then convert to uint8.

Uncrush PNG image on Ubuntu?

IPA packages use pngcrush to compress PNG images, but I want to uncrush a PNG image on Ubuntu.
Can anyone give me any idea?
The standard PNG utility pngcrush has been modified by Apple, which makes it produce technically invalid PNGs: a new chunk is inserted before the mandatory first chunk IHDR, the RGB(A) byte order of the pixel data is swapped, and the RGB pixels are premultiplied with their alpha.
Hence, I'd rather call these PNGs "fried", rather than just "crushed".
Try my own pngdefry. The source code was written on a Mac OS X machine, but it should be compilable for other OSes as well; it's pretty straightforward C code.
