Converting simple image to Intel HEX

So I am developing a kind of tamagotchi (virtual pet) for my microprocessors final. I made my own images at 128x64 pixels, as I am using a display with that resolution, so each image weighs 1 Kbyte. I am using an AT89S52 (8052) microcontroller and it doesn't have enough memory to store all the animations I want. My plan (and I kind of want to keep it that way) is to use an EPROM to save all my images in Intel HEX format (the programmer I am using is SUPERPRO and it imports that type of file). Of course the assembly code will be easy for me after the point where I have the data in the ROM. I am not a good enough programmer to develop code that does exactly what I want (convert images to Intel HEX), and all the software I have tried doesn't generate it correctly (it inserts hex values that aren't supposed to be there; for example, where a blank area should contain only zeroes, some other value appears). I have tried PNG with a transparent background, with a white background, and JPG. The images I have are like these:
http://imgur.com/a/yiCOb
(it seems I am not allowed to post images here)
I don't see much help elsewhere on the internet, so the answer to this question would be of great help for future MCU-based programmers. Thank you.

It's about 30 years since I last made an EPROM :-)
Anyway, you need 2 things...
Part One
Firstly, your files are in PNG format, which means they have dates, times, palettes, gamma chunks and a bunch of zlib-compressed data, so you can't just copy them to a screen buffer. So, you need to convert the PNGs to a simple binary format where 0 is off and 1 is on and there is nothing else in the file. The easiest way to do that is with ImageMagick, which is installed on most Linux platforms and is available for free on macOS and Windows. Let's say one of your frames is called anim.png and we want to get it into a simple format, like PGM (Portable GreyMap - see the Wikipedia description); we can use ImageMagick like this at the console:
convert anim.png -compress none anim.pgm
The first few lines will be:
P2
128 64
255
255 255 255 255 255 255 255 ...
...
...
because the image is 128x64 and the maximum brightness in the file is 255. Then all the data follows in ASCII (because I put -compress none). In there, 255 represents white and 0 represents black.
That is too big to show in full here, but hopefully you can see that your black box appears as a bunch of zeroes in the middle, towards the bottom.
Now, if you run that same command again, but remove the -compress none, the same header will be produced but the data will follow in binary.
convert anim.png anim.pgm
And if we write the PGM to stdout instead of a file, we can pipe it through sed to delete the 3 header lines:
convert anim.png pgm: | sed '1,3d' > anim.bin
Now you have a binary file of just pure pixels - free of dates/times, author and copyright information, palettes and compressed data - that you can pass to the next part.
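If the sed approach feels fragile on binary data, a tiny script can do the same header-stripping job. This is just an illustrative sketch in Python (untested): it skips the 3-line header of the binary PGM that convert writes and saves the raw pixel bytes; the filenames anim.pgm/anim.bin are only examples.

# Strip the 3-line PGM header (P5, "width height", maxval) and keep only the raw pixels
with open("anim.pgm", "rb") as f:
    for _ in range(3):           # the header written by convert is three newline-terminated lines
        f.readline()
    pixels = f.read()            # 128 x 64 = 8192 bytes, one byte per pixel

with open("anim.bin", "wb") as f:
    f.write(pixels)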
Part Two
Secondly, once you have got your data in a sensible binary format you need to convert it to Intel HEX, and for that you need srec_cat, which is readily available for Linux and via homebrew on a Mac.
Now, I haven't tested this and have never used it, but I think you will want something like:
srec_cat anim.bin -binary -output -intel
:020000040000FA
:20000000323535203235352032353520323535203235352032353520323535203235352000
:200020003235352032353520323535203235352032353520323535203235352032353520E0
:200040003235352032353520323535203235352032353520323535203235352032353520C0
:200060003235352032353520323535203235352032353520323535203235352032353520A0
:20008000323535203235352032353520323535203235352032353520323535203235352080
...
:207E8000353520323535203235352032353520323535203235352032353520323535203202
:207EA0003535203235352032353520323535203235352032353520323535203235352032E2
:147EC000353520323535203235352032353520323535200A2A
:00000001FF
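For anyone wondering what those lines mean: each Intel HEX record is a colon, a byte count, a 16-bit address, a record type, the data bytes and a two's-complement checksum. Here is a small illustrative Python sketch of the record format (not how srec_cat works internally):

# Build one Intel HEX data record (record type 00) at a 16-bit address
def ihex_record(address, data):
    body = bytes([len(data), (address >> 8) & 0xFF, address & 0xFF, 0x00]) + data
    checksum = (-sum(body)) & 0xFF            # two's complement of the sum of all record bytes
    return ":" + body.hex().upper() + f"{checksum:02X}"

# For example, 32 bytes of 0xFF stored at address 0x0000
print(ihex_record(0x0000, bytes([0xFF] * 32)))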
Summary
You can abbreviate and simplify what I am suggesting above - I will leave it there so folks can understand it in future though!
convert YourImage.png gray: | srec_cat - -binary -output -intel
The gray: is a very simple ImageMagick format equivalent to just the binary part of a PGM file without any header. Like PGM it uses one byte per pixel so it will be somewhat inefficient for your pure black and white needs. You can see that by looking at the file size - the PGM file is 8192 bytes, so 1 byte per pixel. If you really, really want 1 bit per pixel, you could use PBM format like this:
convert YourImage.png pbm: | sed '1,3d' | srec_cat - -binary -output -intel
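If you would rather do the 1-bit packing yourself (for example to match whatever byte layout your display driver expects), a rough Python sketch of row-major, MSB-first packing of the 8192 grey bytes might look like this - untested, and the threshold and bit order are assumptions you may need to change:

# Pack 8192 one-byte-per-pixel grey values into 1024 bytes, 8 pixels per byte,
# row by row, most significant bit = left-most pixel
with open("anim.bin", "rb") as f:
    gray = f.read()                     # 128 x 64 = 8192 bytes

packed = bytearray()
for i in range(0, len(gray), 8):
    byte = 0
    for bit, value in enumerate(gray[i:i + 8]):
        if value < 128:                 # treat dark pixels as "on"; invert this test if your display expects the opposite
            byte |= 0x80 >> bit
    packed.append(byte)

with open("anim_1bpp.bin", "wb") as f:
    f.write(bytes(packed))              # 1024 bytes = 1 Kbyte per 128x64 frame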
Note:
From v7 of ImageMagick onwards, you should replace convert with magick to avoid clashing with Windows' built-in convert command, which converts filesystems to NTFS.
ImageMagick is quite a large package; you could do this equally well with just the NetPBM suite, using the tool called pngtopnm in place of convert.

Related

Issue with LEAD Technologies JPG V1.01 - Bad Image

I have a few JPG images that seem to be corrupt - yet the program dealing with them has no problems at all. I need to convert them to a new database - using C# or Delphi to do it.
The images are stored in a DB (which I can then save to file if I need to) - and the image has the following starting text in the header....
(screenshot: "Bad Image" - the corrupted header bytes)
When it should be something like:
(screenshot: "Example of Good JPG Header")
Note that the image has the text LEAD Technologies V1.01. I have contacted the company and they are currently on version 20.x - so it is so old even their latest tools will not read this image properly.
Has anyone out there had to deal with this issue in the past? If so - any thoughts as to how to deal with this one?
It looks as if the image is corrupted - but as I noted, the original program can still use it as an image file...
As requested - Full Image to review
Full Image Download
I have been trying to analyse your files and see if I can work out how they are corrupted.
Normally, JPEG files have well-known markers in them, which consist of two bytes - namely 0xFF followed by a second byte that is not 0x00.
If you scan a normal JPEG file for markers, like this:
xxd -c16 -g1 -u normal.jpg | ggrep --color=always "FF [1-9A-F][1-9A-F]"
you will get a hex dump with the markers highlighted, and you can see:
SOI (start of image) - 0xFFD8
DQT (define quantization table) - 0xFFDB
DHT (define Huffman table) - 0xFFC4
SOS (start of scan) - 0xFFDA
EOI (end of image) - 0xFFD9
If, on the other hand, you scan your images, you just get pages of junk - sadly I cannot work out what the pattern is. If anyone else can, I can probably remove it - so ping me with a comment if you can!
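If you prefer a small script to the xxd/grep approach, here is a minimal, untested Python sketch that scans a file for those markers and names the well-known ones (normal.jpg is just a placeholder filename):

# Scan a JPEG file for 0xFF marker bytes and print the well-known ones
import sys

MARKERS = {
    0xD8: "SOI (start of image)",
    0xDB: "DQT (define quantization table)",
    0xC4: "DHT (define Huffman table)",
    0xDA: "SOS (start of scan)",
    0xD9: "EOI (end of image)",
}

data = open(sys.argv[1] if len(sys.argv) > 1 else "normal.jpg", "rb").read()
for i in range(len(data) - 1):
    if data[i] == 0xFF and data[i + 1] not in (0x00, 0xFF):   # skip stuffed 0x00 and 0xFF fill bytes
        print(f"offset {i:#010x}: FF {data[i + 1]:02X}  {MARKERS.get(data[i + 1], '')}")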

How to produce a Code 39 that can be reliably read after faxing

My application is generating a Code 39 barcode but a customer is having problems with their document management system recognizing the barcode after the prints have been scanned and re-printed.
I have also tested it using an online barcode reader which confirms that the barcode on their end document is not readable.
Is there a best type of barcode to use, that would give the best results after printing, scanning and re-printing elsewhere?
Here is an original barcode in the PDF straight from the application:
Here is a barcode once it has been printed, scanned and re-printed:
Testing using an online barcode reader results in:
We are sorry, we could not found any barcode in the uploaded image.
I am using GNU Barcode to generate the barcode:
$ barcode -h
barcode: Options:
-i <arg> input file (strings to encode), default is stdin
-o <arg> output file, default is stdout
-b <arg> string to encode (use input file if missing)
-e <arg> encoding type (default is best fit for first string)
-u <arg> unit ("mm", "in", ...) used to decode -g, -t, -p
-g <arg> geometry on the page: [<wid>x<hei>][+<margin>+<margin>]
-t <arg> table geometry: <cols>x<lines>[+<margin>+<margin>]
-m <arg> internal margin for each item in a table: <xm>[,<ym>]
-n "numeric": avoid printing text along with the bars
-c no Checksum character, if the chosen encoding allows it
-E print one code as eps file (default: multi-page ps)
-P create PCL output instead of postscript
-p <arg> page size (refer to the man page)
Known encodings are (synonyms appear on the same line):
"ean", "ean13", "ean-13", "ean8", "ean-8"
"upc", "upc-a", "upc-e"
"isbn"
"39", "code39"
"128c", "code128c"
"128b", "code128b"
"128", "code128"
"128raw"
"i25", "interleaved 2 of 5"
"cbr", "codabar"
"msi"
"pls", "plessey"
"code93", "93"
Code 39 is a low data density barcode that is tolerant of a wide X-dimension (the width of a narrow bar) and a highly discriminating narrow-wide ratio (up to 1:3). As far as barcode symbologies go, this makes it better suited than others to being transferred over a low-resolution, noisy medium.
The Code 39 standard permits the use of a modulo 43 check digit which reduces the possibility of misreads. I notice that this isn't present in your scanned image (although it is in your source image) so perhaps your system can be upgraded to accommodate this.
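For reference, the modulo-43 check character is computed over the 43-character Code 39 set; here is a tiny illustrative Python sketch (assuming plain Code 39 data without the start/stop asterisks):

# Compute the optional Code 39 modulo-43 check character
CODE39 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%"

def code39_check_char(data):
    total = sum(CODE39.index(c) for c in data)    # each character has a value 0..42
    return CODE39[total % 43]

print(code39_check_char("CODE39"))                # -> "W"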
The most significant problem with the images that you have provided is that the width of the narrowest spaces is under-sized, leading to a corruption of the barcode. In the case of the source image this is due to excessive "print growth" (ink spread) resulting from pixel-grazing. In the case of the scanned image this has been exaggerated because the chosen X-dimension is insufficient to survive the imaging imperfections introduced by the end-to-end process.
To demonstrate the effect of print growth I have superimposed your scanned image with a cleaner rendition of the same data:
You can observe that, towards the right-hand side of the image, the narrow space between two adjacent narrow bars has been compressed out of the image to form a single wide bar.
To improve things at source you can try the following:
Avoid pixel-grazing by ensuring that the barcode generation is performed so that the X-dimension of the symbol is set to a multiple of the output device's native resolution – a process sometimes referred to as "grid-fitting" (a rough arithmetic sketch follows this list).
Compensate for ink-spread by modifying the GNU Barcode library to subtract a small, fixed amount from the bar widths in order to be compatible with your printing and scanning processes.
Maximise the narrow-wide ratio of the bars and spaces to 1:3.
Maximise your X-dimension.
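As a rough arithmetic sketch of the grid-fitting and ink-spread compensation suggested above (the resolution, X-dimension and bar-width reduction figures below are placeholders, not measurements from your setup):

# Snap the X-dimension to a whole number of printer dots, then draw bars slightly
# narrower (and spaces slightly wider) to allow for ink spread
DPI = 300                    # assumed printer resolution
TARGET_X_MM = 0.50           # desired X-dimension (narrow element width)
BWR_MM = 0.05                # assumed bar-width reduction for print growth

dot_mm = 25.4 / DPI                            # one device dot in mm
x_dots = max(1, round(TARGET_X_MM / dot_mm))   # X-dimension as a whole number of dots
x_mm = x_dots * dot_mm                         # grid-fitted X-dimension

narrow_bar_mm = x_mm - BWR_MM                  # bars grow on paper, so draw them narrower...
narrow_space_mm = x_mm + BWR_MM                # ...and draw the spaces wider to compensate

print(f"X-dimension: {x_dots} dots = {x_mm:.3f} mm")
print(f"Draw narrow bars at {narrow_bar_mm:.3f} mm, narrow spaces at {narrow_space_mm:.3f} mm")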
Migrating to another linear barcode symbology is unlikely to help as these same issues will probably affect it to an even greater extent.
Further information about high-quality barcode generation is given in this answer.

Dust and scratch removal with open source graphic libraries

I am trying to automate the cleanup process of a large amount of scanned films. I have all the images in 48-bit RGBI TIFF files (RGB + Infrared), and I can use the infrared channel to create masks for dust removal. I wonder if there is any decent open source implementation of in-painting that I can use to achieve this (all the other software I use for batch processing are open source libraries I access through Ruby interfaces).
My first choice was ImageMagick, but I couldn't find any advanced in-painting option in it (maybe I am wrong, though). I have heard this can be done with MagickWand libraries, but I haven't been able to find a concrete example yet.
I have also had a look at OpenCV, but it seems that OpenCV's in-paint method accepts only 8-bit-per-channel images, while I must preserve the 16 bits.
Is there any other library, or even an interesting code snippet I am not aware of? Any help is appreciated.
Samples:
Full Picture
IR Channel
Dust and scratch mask
What I want to remove automatically
What I consider too large to remove with no user intervention
You can also download the original TIFF file here. It contains two alpha channels. One is the original IR channel, and the other one is the IR channel already prepared for dust removal.
I have had an attempt at this, and can go some way to achieving some of your objectives... I can read in your 16-bit image, detect the dust pixels using the IR channel data, and replace them, and write out the result without any alpha channel and all the while preserving your 16-bit data.
The part that is lacking is the replacement algorithm - I have just propagated the next pixel from above. You, or someone cleverer than me on Stack Overflow, may be able to implement a better algorithm but this may be a start.
It is in Perl, but I guess it could be readily converted to another language. Here is the code:
#!/usr/bin/perl
use strict;
use warnings;
use Image::Magick;
# Open the input image
my $image = Image::Magick->new;
$image->ReadImage("pa.tiff");
my $v=0;
# Get its width and height
my ($width,$height)=$image->Get('width','height');
# Create output image of matching size
my $out= $image->Clone();
# Remove alpha channel from output image
$out->Set(alpha=>'off');
# Load Red, Green, Blue and Alpha channels of input image into arrays, values normalised to 1.0
my (@R,@G,@B,@A);
for my $y (0..($height-1)){
   my $j=0;
   my @RGBA=$image->GetPixels(map=>'RGBA',height=>1,width=>$width,x=>0,y=>$y,normalize=>1);
   for my $x (0..($width-1)){
      $R[$x][$y]=$RGBA[$j++];
      $G[$x][$y]=$RGBA[$j++];
      $B[$x][$y]=$RGBA[$j++];
      $A[$x][$y]=$RGBA[$j++];
   }
}
# Now process image
my ($d,$r,$s,@colours);
for my $y (0..($height-1)){
   for my $x (0..($width-1)){
      # See if IR channel says this is dust, and if so, replace with pixel above
      if($A[$x][$y]<0.01){
         $colours[0]=$R[$x][$y-1];
         $colours[1]=$G[$x][$y-1];
         $colours[2]=$B[$x][$y-1];
         $R[$x][$y]=$R[$x][$y-1];
         $G[$x][$y]=$G[$x][$y-1];
         $B[$x][$y]=$B[$x][$y-1];
         $out->SetPixel(x=>$x,y=>$y,color=>\@colours);
      }
   }
}
$out->write(filename=>'out.tif',compression=>'lzw');
The result looks like this, but I had to make it a JPEG just to fit on SO:
I cannot comment, so I am writing an answer instead.
I suggest using G'MIC with its "inpaint" filter.
You should load the image, convert the IR channel to black and white, then tell the inpaint filter to fill the areas marked in the IR image.
OpenCV has a good algorithm for image inpainting, which is basically what you were searching for.
https://docs.opencv.org/3.3.1/df/d3d/tutorial_py_inpainting.html
If that does not help, then only neural-network-based algorithms will.
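A minimal Python sketch of that OpenCV approach (untested; note the 8-bit limitation you mentioned still applies, so this does not preserve 16-bit data, and the filenames are placeholders):

# Inpaint dust using a mask built from the IR channel (OpenCV's inpaint works on 8-bit images)
import cv2

img = cv2.imread("scan_8bit.tif", cv2.IMREAD_COLOR)        # 8-bit BGR version of the frame
ir = cv2.imread("ir_channel.tif", cv2.IMREAD_GRAYSCALE)    # IR channel: dust and scratches show up dark

# Dust pixels are the dark areas of the IR channel; make them white in the mask
_, mask = cv2.threshold(ir, 10, 255, cv2.THRESH_BINARY_INV)

# Fill the masked areas from the surrounding pixels (radius 3, Telea's method)
cleaned = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("cleaned.tif", cleaned)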

PostScript to PDF conversion/slow print issue [GhostScript]

I have several large PDF reports (>500 pages) with grid lines and background shading overlay that I converted from postscript using GhostScript's ps2pdf in a batch process. The PDFs that get created look perfect in the Adobe Reader.
However, when I go to print the PDF from Adobe Reader I get about 4-5 ppm from our Dell laser printer, with long, 10+ second pauses between each page. The same report PDF generated from another proprietary process (not GhostScript) yields a fast 25+ ppm on the same printer.
The PDF file sizes on both are nearly the same at around 1.5 MB each, but when I print both versions of the PDF to file (i.e. postscript), the GhostScript generated PDF postscript output is about 5 times larger than that of the other (2.7 mil lines vs 675K) or 48 MB vs 9 MB. Looking at the GhostScript output, I see that the background pattern for the grid lines/shading (referenced by "/PatternType1" tag) is defined many thousands of times throughout the file, where it is only defined once in the other PDF output. I believe this constant re-defining of the background pattern is what is bogging down the printer.
Is there a switch/setting to force GhostScript to only define a pattern/image only once? I've tried using the -r and -dPdfsettings=/print switches with no relief.
Patterns (and indeed images) and many other constructs should only be emitted once; you don't need to do anything to have this happen.
Forms, however, do not get reused, and it's possible that this is the source of your actual problem. As Kurt Pfeifle says above, it's not possible to tell without seeing a file which causes the problem.
You could raise a bug report at http://bugs.ghostscript.com, which will give you the opportunity to attach a file. If you do this please do NOT attach a >500 page file; it would be appreciated if you would try to find the time to create a smaller file which shows the same kind of size inflation.
Without seeing the PostScript file I can't make any suggestions at all.
I've looked at the source PostScript now, and as suspected the problem is indeed the use of a form. This is a comparatively unusual area of PostScript, and it's even more unusual to see it actually being used properly.
Because it is so rarely used, we haven't had any impetus to implement the feature to preserve forms in the output PDF, and this is what results in the large PDF. The way the pattern is defined inside the form doesn't help either. You could try defining the pattern separately; at least that way pdfwrite might be able to detect the multiple pattern usage and only emit it once (the pattern contains an imagemask so this may be worthwhile).
This construction:
GS C20 setpattern 384 151 32 1024 RF GR
GS C20 setpattern 384 1175 32 1024 RF GR
is inefficient, you keep re-instantiating the pattern, which is expensive, this:
GS C20 setpattern
384 151 32 1024 RF
384 1175 32 1024 RF
GR
is more efficient.
In any event, there's nothing you can do with pdfwrite to really reduce this problem.
'[...] when I print both versions of the PDF to file (i.e. postscript), the GhostScript generated PDF postscript output is about 5 times larger than that of the other (2.7 mil lines vs 675K) or 48 MB vs 9 MB.'
Which version of Ghostscript do you use? (Try gs -v or gswin32c.exe -v or gswin64c.exe -v to find out.)
How exactly do you 'print to file' the PDFs? (Which OS platform, which application, which kind of settings?)
Also, ps2pdf may not be your best option for the batch process. It's a small shell/batch script anyway, which internally calls a Ghostscript command.
Using Ghostscript directly will give you much more control over the result (though its commandline 'usability' is rather inconvenient and awkward -- that's why tools like ps2pdf are so popular...).
Lastly, without direct access to one of your PS input samples for testing (as well as the PDF generated by the proprietary converter) it will not be easy to come up with good suggestions.

Read an image pixel by pixel in Ruby

I'm trying to open an image file and store a list of pixels by color in a variable/array so I can output them one by one.
Image type: Could be BMP, JPG, GIF or PNG. Any of them is fine and only one needs to be supported.
Color Output: RGB or Hex.
I've looked at a couple of libraries (RMagick, Quick_Magick, Mini_Magick, etc.) and they all seem like overkill. Heroku also has some sort of difficulty with ImageMagick and my tests don't run. My application is in Sinatra.
Any suggestions?
You can use RMagick's each_pixel method for this. each_pixel receives a block. For each pixel, the block is passed the pixel, the column number and the row number of the pixel. It iterates over the pixels from left to right and top to bottom.
So something like:
pixels = []
img.each_pixel do |pixel, c, r|
  pixels.push(pixel)
end
# pixels now contains each individual pixel of img
I think Chunky PNG should do it for you. It's pure ruby, reasonably lightweight, memory efficient, and provides access to pixel data as well as image metadata.
If you are only opening the file to display the bytes, and don't need to manipulate it as an image, then it's a simple process of opening the file like any other, reading X number of bytes, then iterating over them. Something like:
File.open('path/to/image.file', 'rb') do |fi|
  byte_block = fi.read(1024)
  byte_block.each_byte do |b|
    puts b
  end
end
That will merely output bytes as decimal. You'll want to look at the byte values and build up RGB values to determine colors, so maybe using each_slice(3) and reading in multiples of 3 bytes will help.
Various image formats contain differing header and trailer blocks used to store information about the image, the data format and EXIF information from the capturing device, depending on the type. Going with something that is uncompressed, such as uncompressed TIFF, would probably be good if you are going to read a file and output the bytes directly. Once you've decided on that you can jump into the file to skip headers if you want, or just read those too to see or learn what's in them. Wikipedia's Image file formats page is a good jumping-off place for more info on the various formats available.
If you only want to see the image data then one of the high-level libraries will help, as they have interfaces to grab particular sections of the image. But actually accessing the bytes isn't hard, nor is it hard to jump around.
If you want to learn more about the EXIF block, which is used to describe a lot of different vendors' JPEG and TIFF formats, ExifTool can be handy. It's written in Perl so you can look at how the code works. The docs nicely show the header blocks and fields, and you can read/write values using the app.
I'm in the process of testing a new router so I haven't had a chance to test that code, but it should be close. I'll check it in a bit and update the answer if that didn't work.
