Acumatica and code128 barcodes not scanning - barcode

We are creating barcodes (code128) in Acumatica's Report Designer. We have several different types of reports with barcodes, and they generally seem to work fine. However, we are seeing issues where certain serial numbers can't be scanned. In fact, for these problem items, the barcodes print out with an "x" over the barcode. I know that is indicative of overlapping components, but in this case I have removed all of the components from the report for testing, and we still see the same issue. I have tried the same number on other reports where it's normally OK, and we then see the same issue, so I suspect it's something about the number itself (maybe in combination with the barcode settings we have set up). I'm a bit stuck, so I'm hoping that someone has some troubleshooting advice.
Example serial # that works fine: 101230
Example problem Serial #: FL23432
Could it be the number of characters? The fact that one has letters?
Here is a screenshot of the barcode settings as well:
https://www.dropbox.com/s/1484s1qtgrpgilk/Screenshot%202019-05-07%2020.40.23.png?dl=0
Any help would be very much appreciated.

With a value of 10 for the BarWidth property it displays properly.
I can reproduce the issue with a BarWidth value of 40. I'm pretty sure Acumatica is trying to help you out here. The gaps are too wide for accurate tracking by a common lower-end hand scanner. You need to reduce the value of the BarWidth property until the X disappears.
EDIT:
Actually, the red X just indicates that the barcode doesn't fit in the control size. You can eliminate it by making the control wider, but I'd think that a gap of 40 is still too big for practical scanning in a real-life scenario.

To print your labels, try Asgard Labels, the now-official Fulfilled By Acumatica solution for label printing. It is much easier and supports all barcodes supported by your printers, including 2D.

Related

Cytoscape vs STRING for long list of proteins

I am mid-way through my university project, and I have run into an issue. I have a long list of around 1000 proteins that I wanted to analyse in STRING; however, my list is too large. I decided to try and utilise Cytoscape (and downloaded the stringApp), but the networks generated are still very messy. I've attached a screenshot here. Is there any way to improve the presentation of the network by downloading any Cytoscape apps or by tweaking the settings?
Thanks in advance
Well, the short answer is "no". A slightly longer answer is "it depends".
Showing a hairball really isn't helpful, usually, so you need to refine things somewhat. What is your data source (i.e. where did the 1000 proteins come from)? What do you hope to see in the network? If you are looking for particular groups of proteins (e.g. complexes), you would probably want to use MCL to cluster them first. If you have some other data you want to map, such as transcriptomic or proteomic data, you could refine your network based on fold change or abundance values.
All that being said, here are some things you might try. First, you are seeing the "fast" version of the network. Try clicking on the show graphics details button (the diamond in the network view toolbar). That will give you the full graphics details. Second, you might try spreading the network out a bit by using Layout->Layout Tools. Turn off "Selected Only" and then adjust the scale. Finally, depending on your biological question, you might want to eliminate proteins that are only present in the nucleus or cytoplasm, or are only in lung tissue. This is all possible using the sliders provided by the stringApp's Results Panel.
-- scooter

How to add redundancy into an OCR-scanned code

This is more of an algorithm question - I am not very mathematical, so I was looking for an engineering-style solution... If this is off topic for SO, let me know and I will delete the question.
I created a mashup of open source goodness to do Optical Character Recognition on difficult backgrounds: https://github.com/metalaureate/tesseract-docker-ocr
I want to use it to scan labels with a pre-defined ID code, e.g., 2826672. The accuracy is about 70% for digits.
Question: how do I programmatically add redundancy to my code to increase accuracy to 99%, and how do I decode it? I can imagine some really kludgy ways, like doubling and inverting the digits, but I don't know how to do this in a way that honors information theory without having to translate a lot of math.
How do I add and decode digits to correct for OCR errors?
If you have the freedom of actually printing the labels, then there's no real reason to stick with plain ol' numbers. Use QR codes instead. Both the size (information capacity) and the information redundancy are configurable, so you can customize it to fit your specific scenario. Internally, Reed-Solomon error correction is used. There are plenty of libraries for both QR code generation and recognition from a scan.
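For instance, here is a minimal sketch in Python, assuming the third-party qrcode and pyzbar packages (pip install qrcode[pil] pyzbar) and using the example ID from the question:
import qrcode
from PIL import Image
from pyzbar.pyzbar import decode

# Generate a QR code at the highest error-correction level (roughly 30% of
# the symbol can be damaged and it will still decode).
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H,
                   box_size=10, border=4)
qr.add_data("2826672")
qr.make(fit=True)
qr.make_image(fill_color="black", back_color="white").save("label.png")

# Read it back from a scanned or photographed image.
results = decode(Image.open("label.png"))
if results:
    print(results[0].data.decode("ascii"))  # -> 2826672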
Further info is available in Wikipedia.

Teller transactions archive - print barcode on papers

I am looking into options for auto-indexing the daily documents generated by tellers in bank operations. The documents do not have any reference number and are handwritten by the customer.
So, to auto-index these documents and store them in EDMS, we have to put the core bank transaction reference number on each one. What options do I have? Print a barcode label containing this transaction number and attach it to the paper? Or have a machine that I can feed the paper into so it can print a barcode on it?
Does anyone know the right hardware or software for this?
Thanks
It depends on how complex you want to be. Perhaps these documents could be multiple (stapled?) pages. Would you want to index each page, and would the documents then form an associated sequence (e.g. doc. 00001-01 to -20)?
The next thing is to consider the form of the number. It's best to formulate a check-digit system so that a printed number can be manually entered and the check digit verifies that the number hasn't been miskeyed.
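As a concrete illustration of one such scheme, here is a minimal sketch of the well-known Luhn mod-10 check digit in Python (the function names are just for illustration):
def luhn_check_digit(payload: str) -> str:
    # Double every second digit starting from the rightmost payload digit,
    # subtract 9 from any doubled value over 9, and pick the digit that
    # brings the total up to a multiple of 10.
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def is_valid(number_with_check: str) -> bool:
    return luhn_check_digit(number_with_check[:-1]) == number_with_check[-1]

# e.g. a transaction reference of 7992739871 would be printed as 79927398713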
Now - if these documents may be different sizes for instance, or potentially a wad of paperwork, how would you feed them through a printer?
So I'd suggest that a good choice would be to produce your numbers on a specialist barcode-printer with human-readable line on the same label. Some idiot will want to insist on using cheap thermally-sensitive labels, but these almost inevitably deteriorate with time. I'd choose thermal-transfer labels which are a little more complex - your tellers would need to be able to load label-rolls and also the transfer-ribbon (a little like a typewriter-ribbon, if you remember those) but basically any monkey could do it.
Even then, there are three grades of ribbon - wax, resin and a combination. The problem with wax is that it can become worn - the same thing you get with laser printing, where the pages get stuck together if they are left to their own devices for a while. Another reason you don't use laser printers in this role - apart from the fact that you'd need to produce sheets of labels to attach rather than ones and twos on demand - is that the laser processing will cook the glue on the sheets. Fine for an address label with a lifetime of a few days, but disastrous when you may be storing documents for years. Document goes one way, label another...
Resin is the best but most expensive choice. It has better wearing characteristics.
My choice would be a Zebra TLP2824plus using thermal-transfer paper and resin ribbon. The software is easy - it just means you need to go back 20 years in time and forget all about drivers - just send a string to the printer as if it were a generic text printer. The formatting of the label - well, the manual will show you that...
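As a rough sketch of that "generic text printer" approach in Python (assumptions: the printer is reachable over the network on the usual raw port 9100, and the ZPL positions and data below are placeholders you would adjust; for a serial or USB connection you would write the same string to the port instead):
import socket

# Minimal ZPL: start the label, position the field, print a Code 128
# barcode 100 dots high with its human-readable line, then end the label.
zpl = "^XA^FO30,30^BY2^BCN,100,Y,N,N^FD0012345678^FS^XZ"

with socket.create_connection(("printer-hostname", 9100), timeout=5) as s:
    s.sendall(zpl.encode("ascii"))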
Other technologies and approaches would probably be more complex than simply producing and attaching barcode labels. For instance, if you were to have an inkjet printer like those that are used to mark (milk/juice) cartons - well, it would have to deal with different sizes of paper, and different weights from near-cardboard to airmail paper. It would also have a substantial footprint since the paper would need to be physically presented to the printer. Then there's all the problems of disassembling and reassembling a stapled wad. And who can control precisely where the printing would occur? What may suit one document may not suit another - it may have inconveniently-placed logos or other artwork in the "standard" position for that-sized paper.
Another issue is colour. There's no restriction on background colour with a label (yellow or fluoro pink for example) - it would be easy to locate when necessary. Contrast that with the-ink's-running-low washed-out ink printing on a grey background. White labels wouldn't stand out all that well on the majority of (white) documents.
BUT a strong alternative technology would be to have reels of labels pre-printed by a commercial printing establishment rather than producing them with a special printer on-demand. Reels are better than sheets - they are easier to use especially for people with short fingernails.

OCR for scanning printed receipts. [duplicate]

Would OCR Software be able to reliably translate an image such as the following into a list of values?
UPDATE:
In more detail the task is as follows:
We have a client application, where the user can open a report. This report contains a table of values.
But not every report looks the same - different fonts, different spacing, different colors, maybe the report contains many tables with different number of rows/columns...
The user selects an area of the report which contains a table. Using the mouse.
Now we want to convert the selected table into values - using our OCR tool.
At the time when the user selects the rectangular area, I can ask for extra information to help with the OCR process, and ask for confirmation that the values have been correctly recognised.
It will initially be an experimental project, and therefore most likely with an OpenSource OCR tool - or at least one that does not cost any money for experimental purposes.
The simple answer is YES, you just have to choose the right tools.
I don't know if open source can ever get close to 100% accuracy on those images, but based on the answers here, probably yes, if you spend some time on training and solve the table analysis problem and stuff like that.
When we talk about commercial OCR like ABBYY or others, it will provide you 99%+ accuracy out of the box and it will detect tables automatically. No training, no anything, it just works. The drawback is that you have to pay for it $$. Some would object that for open source you pay with your time to set it up and maintain it - but everyone decides for himself here.
However, if we talk about commercial tools, there is actually more choice, and it depends on what you want. Boxed products like FineReader are actually targeted at converting input documents into editable documents like Word or Excel. Since you actually want to get the data, not a Word document, you may need to look into a different product category - Data Capture, which is essentially OCR plus some additional logic to find the necessary data on the page. In the case of an invoice that could be the company name, total amount, due date, line items in the table, etc.
Data Capture is a complicated subject and requires some learning, but used properly it can give guaranteed accuracy when capturing data from documents. It uses different rules for data cross-checks, database lookups, etc. When necessary it may send data for manual verification. Enterprises widely use Data Capture applications to enter millions of documents every month and heavily rely on the extracted data in their everyday workflow.
And there are also OCR SDKs, of course, that will give you API access to the recognition results so you can program what to do with the data.
If you describe your task in more detail I can advise you on which direction is easier to go.
UPDATE
So what you are doing is basically a Data Capture application, but not fully automated, using the so-called "click to index" approach. There are a number of applications like that on the market: you scan images, and an operator clicks on the text in the image (or draws a rectangle around it) and then populates fields in a database. It is a good approach when the number of images to process is relatively small, and the manual workload is not big enough to justify the cost of a fully automated application (yes, there are fully automated systems that can handle images with different fonts, spacing, layout, number of rows in the tables and so on).
If you have decided to develop this yourself instead of buying, then all you need here is to choose an OCR SDK. All the UI you are going to write yourself, right? The big choice is: open source or commercial.
The best open source option is Tesseract OCR, as far as I know. It is free, but it may have real problems with table analysis; with a manual zoning approach that should not be a problem, though. As to OCR accuracy - people often train the OCR for a particular font to increase accuracy, but this should not be the case for you, since the fonts could be different. So you can just try Tesseract out and see what accuracy you get - this will influence the amount of manual work needed to correct it.
Commercial OCR will give higher accuracy but will cost you money. I think you should take a look anyway to see whether it is worth it, or whether Tesseract is good enough for you. I think the simplest way would be to download a trial version of some boxed OCR product like FineReader. You will then get a good idea of what the accuracy would be with an OCR SDK.
If you always have solid borders in your table, you can try this solution (see the sketch after the list):
Locate the horizontal and vertical lines on each page (long runs of black pixels)
Segment the image into cells using the line coordinates
Clean up each cell (remove borders, threshold to black and white)
Perform OCR on each cell
Assemble results into a 2D array
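A rough sketch of those steps in Python, assuming OpenCV and pytesseract are installed (the file name, kernel sizes and size thresholds are guesses you would tune for real scans):
import cv2
import pytesseract

img = cv2.imread("table_page.png", cv2.IMREAD_GRAYSCALE)
# Invert-threshold so lines and text become white on black.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# Keep only long horizontal/vertical runs (the cell borders).
h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (img.shape[1] // 20, 1))
v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, img.shape[0] // 20))
grid = cv2.add(cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel),
               cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel))

# The enclosed regions between the lines are the cells; take their bounding
# boxes, crop each one (trimming the border pixels), and OCR it.
contours, _ = cv2.findContours(255 - grid, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
results = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if 20 < w < img.shape[1] * 0.9 and 10 < h < img.shape[0] * 0.9:
        cell = img[y + 2:y + h - 2, x + 2:x + w - 2]
        results.append((y, x, pytesseract.image_to_string(cell, config="--psm 6").strip()))
# Sorting by (y, x) gives a roughly row-by-row, left-to-right reading order.
for _, _, text in sorted(results):
    print(text)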
If instead your document has a borderless table, you can try to follow this approach:
Optical Character Recognition is pretty amazing stuff, but it isn’t always perfect. To get the best possible results, it helps to use the cleanest input you can. In my initial experiments, I found that performing OCR on the entire document actually worked pretty well as long as I removed the cell borders (long horizontal and vertical lines). However, the software compressed all whitespace into a single empty space. Since my input documents had multiple columns with several words in each column, the cell boundaries were getting lost. Retaining the relationship between cells was very important, so one possible solution was to draw a unique character, like “^” on each cell boundary – something the OCR would still recognize and that I could use later to split the resulting strings.
I found all this information in this link by asking Google for "OCR to table". The author published a full algorithm using Python and Tesseract, both open-source solutions!
If you want to try the Tesseract power, maybe you should try this site:
http://www.free-ocr.com/
Which OCR are you talking about?
Will you be developing code based on that OCR, or will you be using something off the shelf?
FYI:
Tesseract OCR
It has a document-reading executable implemented, so you can feed the whole page in and it will extract the characters for you. It recognizes blank spaces pretty well, so it might be able to help with tab spacing.
I've been OCR'ing scanned documents since '98. This is a recurring problem for scanned docs, especially for those that include rotated and/or skewed pages.
Yes, there are several good commercial systems that, once well configured, can provide a terrific automatic data-mining rate, asking for the operator's help only for those very degraded fields. If I were you, I'd rely on one of them.
If commercial choices threaten your budget, OSS can lend a hand. But "there's no free lunch", so you'll have to rely on a bunch of tailor-made scripts to scaffold an affordable solution for processing your pile of docs. Fortunately, you are not alone. In fact, over the past decades many people have been dealing with this. So, IMHO, the best and most concise answer to this question is provided by this article:
https://datascience.blog.wzb.eu/2017/02/16/data-mining-ocr-pdfs-using-pdftabextract-to-liberate-tabular-data-from-scanned-documents/
It is worth reading! The author offers useful tools of his own, but the article's conclusion is very important for getting the right mindset about how to solve this kind of problem.
"There is no silver bullet."
(Fred Brooks, The Mythical Man-Month)
It really depends on implementation.
There are a few parameters that affect the OCR's ability to recognize:
1. How well the OCR is trained - the size and quality of the examples database
2. How well it is trained to detect "garbage" (besides knowing what's a letter, you need to know what is NOT a letter).
3. The OCR's design and type
4. If it's a Neural Network, the network's structure affects its ability to learn and "decide".
So, if you're not making one of your own, it's just a matter of testing different kinds until you find one that fits.
You could try another approach. With Tesseract (or other OCRs) you can get coordinates for each word. Then you can try to group those words by vertical and horizontal coordinates to get rows/columns, for example to tell the difference between a white space and a tab space. It takes some practice to get good results, but it is possible. With this method you can detect tables even if the tables use invisible separators - no lines. The word coordinates are a solid base for table recognition.
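Here is a minimal sketch of that idea with pytesseract (the file name and the 10-pixel row-bucketing tolerance are assumptions you would tune):
import pytesseract
from PIL import Image

# image_to_data returns one record per recognized word, including its
# left/top pixel coordinates.
data = pytesseract.image_to_data(Image.open("report.png"),
                                 output_type=pytesseract.Output.DICT)

rows = {}
for word, top, left in zip(data["text"], data["top"], data["left"]):
    if word.strip():
        # Words whose top coordinates fall into the same ~10 px band are
        # treated as belonging to the same table row.
        rows.setdefault(top // 10, []).append((left, word))

for _, words in sorted(rows.items()):
    print("\t".join(w for _, w in sorted(words)))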
We have also struggled with the issue of recognizing text within tables. There are two solutions which do it out of the box, ABBYY Recognition Server and ABBYY FlexiCapture. Recognition Server is a server-based, high-volume OCR tool designed for converting large volumes of documents to a searchable format. Although it is available with an API for those types of uses, we recommend FlexiCapture. FlexiCapture gives low-level control over extraction of data from table formats, including automatic detection of table items on a page. It is available in a full API version without a front end, or in the off-the-shelf version that we market. Reach out to me if you want to know more.
Here are the basic steps that have worked for me. Tools needed include Tesseract, Python, OpenCV, and ImageMagick if you need to do any rotation of images to correct skew.
Use Tesseract to detect rotation and ImageMagick mogrify to fix it.
Use OpenCV to find and extract tables.
Use OpenCV to find and extract each cell from the table.
Use OpenCV to crop and clean up each cell so that there is no noise that will confuse OCR software.
Use Tesseract to OCR each cell.
Combine the extracted text of each cell into the format you need.
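As a small illustration of the first step only (assuming pytesseract is installed, ImageMagick's mogrify is on the path, and the file name is hypothetical):
import re
import subprocess
import pytesseract
from PIL import Image

# Tesseract's orientation/script detection (OSD) reports the rotation it
# thinks is needed; mogrify then rewrites the file in place.
osd = pytesseract.image_to_osd(Image.open("scan.png"))
angle = int(re.search(r"Rotate: (\d+)", osd).group(1))
if angle:
    subprocess.run(["mogrify", "-rotate", str(angle), "scan.png"], check=True)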
The code for each of these steps is extensive, but if you want to use a python package, it's as simple as the following.
pip3 install table_ocr
python3 -m table_ocr.demo https://raw.githubusercontent.com/eihli/image-table-ocr/master/resources/test_data/simple.png
That package and demo module will turn the following table into CSV output.
Cell,Format,Formula
B4,Percentage,None
C4,General,None
D4,Accounting,None
E4,Currency,"=PMT(B4/12,C4,D4)"
F4,Currency,=E4*C4
If you need to make any changes to get the code to work for table borders with different widths, there are extensive notes at https://eihli.github.io/image-table-ocr/pdf_table_extraction_and_ocr.html

Combining semacodes and steganography?

Update
I asked this question quite a while ago now, and I was curious if anything like this has been developed since I asked the question?
I don't even know if there is a term for this kind of algorithm, and I guess there won't be if nobody has invented it yet. However it also makes googling for this a bit hard. Does anybody know if there is a term for this algorithm/principle yet?
This is an idea I have been thinking about, but I do not quite know how to solve it. I would like to know if any solutions like this exist out there, or if you guys have any idea how this could be implemented.
Steganography
Steganography is basically the art of hiding messages. In modern days we do this digitally by e.g. modifying the least significant bits in an image like the one below. Thus, for every pixel and for every colour component of that pixel, we might be able to hide a bit or two.
This alteration is not visible to the naked eye, but analysing the least significant bits might reveal patterns that expose the existence and possibly the content of a hidden message. To counter this we simply encrypt the message before embedding it in the image, which keeps the message safe and also helps prevent discovery of the existence of a hidden message.
Thus, in principle, steganography provides the following:
Hiding an encrypted message in any kind of media data. (Images, music, video, etc.)
Complete deniability of the existence of a hidden message without the correct key.
Extraction of the hidden message with the correct key.
Semacodes
Semacodes are a way of encoding data in a visual representation that may be printed, copied, and scanned easily. The Data Matrix shown below is an example of a semacode containing the famous Lorem Ipsum text. This is essentially a 2D barcode with a higher capacity than usual barcodes. Programs for generating semacodes are readily available, and ditto for software for reading them, especially for cell phones. Semacodes usually contain error-correcting codes, are generally very robust, and can be read in very damaged conditions.
Thus semacodes have the following properties:
Data encoding that may be printed and copied.
May be scanned and interpreted even in damaged (dirty) conditions, and generally a very robust encoding.
Combining it
So my idea is to create something that combines these two, with all of the combined properties. This means it would have to:
Embed an encrypted message in any media, probably a scanned image.
The message should be extractable even if the image is printed and scanned, and even partly damaged.
The existence of an embedded message should be undetectable without the key used for encryption.
So, first of all I would like to know if any solutions, algorithms or research is available on this? Secondly I would like to hear any ideas/thoughts on how this might be done?
I really hope to get a good discussion going on the possibilities and feasibility of implementing something like this, and I am looking forward to reading your answers.
Update
Thanks for all the good input on this. I will probably work a bit more on this idea when I have more time. I am convinced it must be possible. Think about research in embedding watermarks in music and movies.
I imagine part of the robustness of a semacode to damage/dirt/obscuration is the high contrast between the two states of any "cell". The reader can still make a good guess as to the actual state, even with some distortion.
That sort of contrast is not available in a photographic image, and is the very reason why steganography works - the lsb bit-flipping has almost no visual effect on the image itself, while digital fidelity ensures that a non-visual system can still very accurately read the embedded data.
As the two applications are sort of at opposite ends of the analog/digital spectrum (semacodes are all about being decipherable by analog (visual) processing but are on paper, not digital; steganography is all about the bits in the file and cares nothing for the analog representation, whether light or sound or something else), I imagine a combination of the two will be extremely difficult, if not impossible.
Essentially what you're thinking of is being able to steganographically embed something in an image, print the image, make a colour photocopy of it, scan it in, and still be able to extract the embedded data.
I'm afraid I can't help, but if anyone achieves this, I'll be DAMN impressed! :)
It's not a complete answer, but you should look at watermarking. This technique addresses your first two goals (embeddable in a printed image and readable even from a partly damaged scan).
Part of watermarking's resilience to distortion and transcription errors (from going from digital to analog and back) comes from redundancy (e.g. repeating the data several times). That would make the watermark detectable even without a key. However, you might be able to use redundancy techniques that are more subtle, maybe something related to erasure coding or secret sharing.
I know that's not a complete answer, but hopefully those leads will point you in the right direction!
What language/environment are you using? It shouldn't be that hard to write code that opens both the image and the semacode as bitmaps (the latter as a monochrome) and sets the lowest bit(s) of each byte of each pixel in the colour image to the value of the corresponding pixel of the monochrome bitmap.
(Optionally, first expand the semacode bitmap to the same pixel dimensions, extending with white.)
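A minimal sketch of that in Python with Pillow and NumPy (the file names are hypothetical, and note this only survives lossless digital copies - the print-and-scan robustness discussed above is the part it does not address):
import numpy as np
from PIL import Image

carrier = np.array(Image.open("photo.png").convert("RGB"), dtype=np.uint8)
# 0 = black module, 1 = white, one value per semacode pixel.
code = (np.array(Image.open("semacode.png").convert("L")) > 127).astype(np.uint8)

# Expand the semacode to the carrier's pixel dimensions, padding with white.
padded = np.ones(carrier.shape[:2], dtype=np.uint8)
padded[:code.shape[0], :code.shape[1]] = code

# Clear the least significant bit of every colour component, then store the
# corresponding semacode pixel in it.
stego = (carrier & 0xFE) | padded[..., None]
Image.fromarray(stego).save("stego.png")  # must be a lossless format

# Recovery: read the LSB plane back out of any one channel.
recovered = (np.array(Image.open("stego.png"))[..., 0] & 1) * 255
Image.fromarray(recovered.astype(np.uint8)).save("recovered_semacode.png")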
