Quagga Javascript Barcode Scanner - quaggajs

Is there a way to scan Code 39 and UPC at the same time, without switching the barcode type in the options? I tried adding both of them to the active barcodes list, but it still doesn't scan unless I change the option to UPC or Code 39.

I left you some questions asking for the details you should have provided so people can help you more easily, but I'll try to answer what I think your question is.
I believe you're asking why you need to configure the reader engine before scanning a barcode. I just read the docs for a minute and it is answered there:
Inside the decoder section:
"The most important property is readers which takes an array of types of barcodes which should be decoded during the session.
(...)
Why are not all types activated by default? Simply because one
should explicitly define the set of barcodes for their use-case. More
decoders means more possible clashes, or false-positives. One should
take care of the order the readers are given, since some might return
a value even though it is not the correct type (EAN-13 vs. UPC-A).
The multiple property tells the decoder if it should continue decoding
after finding a valid barcode. If multiple is set to true, the results
will be returned as an array of result objects. Each object in the
array will have a box, and may have a codeResult depending on the
success of decoding the individual box."
You need to use a config like this one with several readers:
{
  readers: [
    'code_39_reader',
    'code_39_vin_reader',
    'upc_reader',
    'upc_e_reader'
  ],
  multiple: false
}
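For a live camera scan, a minimal sketch of wiring this into Quagga might look like the following (the #scanner element id is a placeholder for your own container):

Quagga.init({
  inputStream: {
    type: 'LiveStream',
    target: document.querySelector('#scanner') // placeholder container element
  },
  decoder: {
    // both symbologies active in the same session
    readers: ['code_39_reader', 'upc_reader']
  }
}, function (err) {
  if (err) {
    console.log(err);
    return;
  }
  Quagga.start();
});

Quagga.onDetected(function (result) {
  // result.codeResult.format tells you which reader produced the match
  console.log(result.codeResult.code, result.codeResult.format);
});

Keep the caveat from the docs in mind: the more readers you activate, the higher the chance of clashes and false positives.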

Related

What is the essential difference between Document and Collection in YAML syntax?

Warning: This question is more philosophical than practical, but I think it is worth asking and answering in a practical context (a forum like Stack Overflow here, rather than the Software Engineering Stack Exchange site), given the way YAML is actually used de facto and the way its specification has evolved and gained features over time. Let's ask:
As opposed to formats/languages/protocols such as JSON, the YAML format allows you (according to this link, which seems to be a fairly official, or at least accurate and reliable, source on the YAML specification) to embed multiple 'Documents' within one file/stream, using the three-dash marker ("---").
If so, it's hard to ignore the fact that the concept/model/idea of a 'Document' in YAML is no longer an external definition, or a "meta" directive that helps the human/parser organize multiple distinct documents alongside each other (similar to the way file systems define the concept of a "file" to organize different files, while each file in itself does not necessarily recognize that it is a file, or that it is part of a file system that wraps it, AFAIK).
Rather, when YAML allows multi-Document files that gather collections of Documents in a single YAML file (perhaps in a way analogous to the HTTP pipelining approach of the HTTP protocol), the concept/model/idea/goal of a Document receives a new, wider definition/character de facto, as part of the YAML grammar and its productions, and not just as an assistive concept or format description that helps to describe the specification.
If so, with the Document being part of the language itself, what is the added value of this data structure compared to the existing, familiar and well-used good old data structure of a Collection (an array of items)?
I'm asking because I've seen in this link (here) a snippet (the second example) that describes a YAML sequence which is actually a collection of logs. For some reason, the author of the example chose to present each log as a separate "Document" (separated by three dashes), gathered together in the same YAML file, instead of writing a file that holds a "Collection" of logs represented with the data type of an array. Why did he choose to do this? Is his choice fitting, correct, ideal?
I can speculate that the added value of the distinction between a Document and a Collection becomes relevant when using more advanced features of the YAML grammar, such as Anchors, Tags and References. I guess every Document provides a guarantee that all these identifiers form a unique set, with no collisions or duplicates among them. Am I right? And if so, is this the only advantage, or are there more justifications for the existence of these two rather similar data structures?
My best view for now is to see a Document as a "meta"-Collection that is stricter and lacks high-level logic, or as two different layers of collection schemes. Is that a correct, accurate way to see it?
And even if I am right, why, in the above example (the logs document from the link), where no duplications, collisions, identifiers/anchors or compound structures are used, implied or expected at all, does the author still choose to represent the collection's items as separate documents? Is this just a not-so-well-chosen example? Or am I missing something, and this is a redundancy in the specification, or syntactic sugar that evolved out of practical needs?
Because the example was written on a website that looks serious, with official information written by professionals who deal with the essence of the language and the definition, theory and philosophy behind it (as opposed to practical uses in the wild), and also in light of the other examples provided there and the added value of their being meticulous, I prefer not to assume that the example is simply imperfect or ill-fitting, and to believe that there may be a good reason to write it this way rather than another in the specific case shown.
First, let's look at the technical difference between the list of documents in a YAML stream and a YAML sequence (which is a collection of ordered items). For this, I'll discuss YAML tags, which are an advanced feature so I'll provide a quick overview:
YAML nodes can have tags, such as !!str (the official tag for string values) or !dice (a local tag that can be interpreted by your application but is unknown to others). This applies to all nodes: scalars, mappings and sequences. Nodes that do not have such a tag set in the source will be assigned the non-specific tag ?, except for quoted scalars, which get ! instead. These non-specific tags are later resolved to specific tags, thereby defining what kind of data structure the node will be deserialized into.
YAML implementations in scripting languages, such as PyYAML, usually only implement resolution by looking at the node's value. For example, a scalar node containing true will become a boolean value, 42 will become an integer, and droggeljug will become a string.
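For example, with PyYAML (a minimal sketch; the resulting types come from PyYAML's default resolver):

import yaml

data = yaml.safe_load("a: true\nb: 42\nc: droggeljug")
print(type(data["a"]), type(data["b"]), type(data["c"]))
# <class 'bool'> <class 'int'> <class 'str'>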
YAML implementations for languages with static types, however, do this differently. For example, assume you deserialize your YAML into a Java class
public class Config {
String name;
int count;
}
Assume the YAML is
name: 42
count: five
The 42 will become a String despite the fact that it looks like a number. Likewise, five will generate an error because it is not a number; it won't be deserialized into a string. This means that it is not the content of the node that defines how it will be deserialized, but the path to the node.
What does this have to do with documents? Well, the YAML spec says:
Resolving the tag of a node must only depend on the following three parameters: (1) the non-specific tag of the node, (2) the path leading from the root to the node and (3) the content (and hence the kind) of the node.
So, the technical difference is: If you put your data into a single document with a collection at the top, the YAML processor is allowed to take into account the position of the data in the top-level collection when resolving a tag. However, when you put your data in different documents, the YAML processor must not depend on the position of the document in the YAML stream for resolving the tag.
What does this mean in practice? It means that YAML documents are structurally disjoint from one another. Whether a YAML document is valid or not must not depend on any preceding or succeeding documents. Consequently, even when deserialization runs into a semantic problem (such as with the five above) in one document, a following document may still be deserialized successfully.
The goal of this design is to be able to concatenate arbitrary YAML documents together without altering their semantics: A middleware component may, without understanding the semantics of the YAML documents, collect multiple streams together or split up a single stream. As long as they are syntactically correct, stream splitting and merging are sound operations that do not invalidate a YAML document even if another document is structurally invalid.
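To illustrate with the Config class above (a hypothetical stream): a merger can concatenate these two documents without caring what they mean, and the first still deserializes even though the second fails semantically:

--- # valid against Config
name: foo
count: 3
--- # fails: count is not a number, but only this document is affected
name: bar
count: five
...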
This design primarily focuses on sending and receiving data over networks. Of course, nowadays YAML is primarily used as a configuration language; this is why this feature is seldom used and of rather little importance.
Edit: (Reply to comment)
What about edge cases, like a string-tagged Document that starts with a folded string, making even the following "---" and "..." just characters of the global string?
That is not the case, see rules l-bare-document and c-forbidden. A line containing un-indented ... not followed by non-whitespace will always end a document if one is open.
Moreover, ... doesn't do anything if no document is open. This ensures that a stream merger can always append ... to a document to ensure that the current document is closed, but no additional one is created.
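For example (a minimal sketch), the unindented ... below ends the folded scalar's document rather than becoming part of the string:

--- >
this folded scalar ends
at the marker below
...
--- this is already a second document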
--- has widely been adopted as a separator between YAML documents (and, perhaps more prominently, between YAML front matter and content in tools like Jekyll) where ... would have been more appropriate. This gives the false impression that --- is what tooling should use to separate documents, when in reality ... is the syntactic element designed for exactly that use case.

Do Synthea names generally end with a number

I'm using data from Synthea and it looks like most (all?) of the given and family names I'm getting back end with a three-digit number (e.g. Gregg522). Is this part of the design of Synthea, or am I parsing the data incorrectly? A snippet of the JSON I'm getting back is shown below. If this is part of the design, what is the motivation for ending the name with a number? (I would think this would make the data less realistic.)
Yes, they generally do. It is sometimes nice to be able to see that the patients are fake/synthetic ones. However, this is a setting you can change: in the synthea.properties file, look for the setting "append_numbers_to_person_names" and set it to false.
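In synthea.properties that looks like the following (a sketch; the key may carry a prefix in some Synthea versions):

# synthea.properties
append_numbers_to_person_names = false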

Most elegant way to represent lots of fields

So, I have this list of attributes that I would like to represent in my Go code. That is, I'm looking for some data structure that can convey all these.
Mind like a diamond
Knows what's best
Shoes that cut
Eyes that burn like cigarettes
The right allocations
Fast
Thorough
Sharp as a tack
Playing with her jewellry
Putting up her hair
Touring the facilities
Picking up slack
Short skirt
Long jacket
Gets up early
Stays up late
Uninterrupted prosperity
Uses a machete to cut through red tape
Fingernails that shine like justice
Voice that is dark like tinted glass
Smooth liquidation
Right dividends
Wants a car with a cupholder arm rest
Wants a car that will get her there
Changing her name from Kitty to Karen
Trading her MG for a white Chrysler LeBaron
They're not going to change any time soon, but I will want to do the following things with them:
Read them from and write them to a JSON file.
Associate a boolean with each attribute (bonus points for a way to associate any one arbitrary type).
Easily display these strings to a human.
Easily count how many are "set" (That is, a function that, when called on someone who gets up early, has the right dividends, and is sharp as a tack, but nothing else, returns 3).
Should be as strongly typed as possible (It should be impossible, or at least difficult, to ask for an attribute that doesn't exist. You should be able to see all the attributes by looking at the code).
Preserving this order would be a plus, but not strictly necessary.
The approaches I've thought of so far are:
A struct with all of these as fields, but the count function is inelegant to write, and the 'nice' string name of the fields has to be stored elsewhere. Even if that can be solved with tags, the count problem remains.
A map with constant keys (or an array with enumerated values). Counting is a simple loop, and the keys can be the nice string names. However, this creates the problem of asking for keys that don't exist, and you have to hide the dictionary behind a function to stop users from trying to add new keys.
Is there something I'm missing here, or will I have to compromise?
I'd use a uint64 as a bitset (or an actual bitset type, of which none are built in but some are open source); a minimal sketch follows this list. It meets your requirements in the following ways:
JSON storage as "010101" string or base64 (11 chars).
Easy display: just define an array of the strings in the same order they are stored in the bitset.
Easily count how many are "set" - use "popcount" (open source implementations are easy to come by).
Order is naturally preserved: the first attribute will always be bit 0, etc.
If you have more than 64 attributes, you can use math/big.
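Here is a minimal sketch of the bitset approach (the type, constant and variable names are mine, and only the first few attributes are shown):

package main

import (
	"fmt"
	"math/bits"
)

// AttrSet holds one bit per attribute; the declaration order fixes
// both the bit position and the index into the display names below.
type AttrSet uint64

const (
	MindLikeADiamond AttrSet = 1 << iota
	KnowsWhatsBest
	ShoesThatCut
	// ... one constant per remaining attribute
)

// names must stay in the same order as the constants above.
var names = []string{
	"Mind like a diamond",
	"Knows what's best",
	"Shoes that cut",
}

// Count reports how many attributes are set, using a popcount.
func (s AttrSet) Count() int { return bits.OnesCount64(uint64(s)) }

// Strings returns the display names of all set attributes, in order.
func (s AttrSet) Strings() []string {
	var out []string
	for i, n := range names {
		if s&(1<<uint(i)) != 0 {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	s := MindLikeADiamond | ShoesThatCut
	fmt.Println(s.Count(), s.Strings()) // 2 [Mind like a diamond Shoes that cut]
}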
You can use a map[int]interface{} to store a value of any type inside the interface, with an integer as the key, or you can use a slice of interfaces ([]interface{}) to loop over the data. Later on you can type-assert the interface to get the underlying value, which is a string in your case.
Since it is a slice or map of interfaces, you can easily marshal it to JSON and save it to a file, which can then be read or written easily.
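A sketch of that suggestion (note that it gives up the static typing the question asks for):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	attrs := map[int]interface{}{0: "Mind like a diamond", 1: "Knows what's best"}
	b, _ := json.Marshal(attrs) // integer keys become JSON object keys ("0", "1")
	fmt.Println(string(b))      // {"0":"Mind like a diamond","1":"Knows what's best"}
	name := attrs[0].(string)   // a type assertion recovers the underlying string
	fmt.Println(name)
}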

Globally unique option field numbers for protobufs

In the protobuf documentation (https://developers.google.com/protocol-buffers/docs/proto#customoptions) it says this about custom options:
One last thing: Since custom options are extensions, they must be
assigned field numbers like any other field or extension. In the
examples above, we have used field numbers in the range 50000-99999.
This range is reserved for internal use within individual
organizations, so you can use numbers in this range freely for
in-house applications. If you intend to use custom options in public
applications, however, then it is important that you make sure that
your field numbers are globally unique. To obtain globally unique
field numbers, please send a request to
protobuf-global-extension-registry@google.com. Simply provide your
project name (e.g. Objective-C plugin) and your project website (if
available). Usually you only need one extension number.
Why do the options field numbers have to be globally unique for public applications? In what way can collisions be a problem?
Basically, because you wouldn't know whether the data you get is correct.
The protobuf binary wire format only stores the field numbers and the payload (which is itself, for complex types, just field numbers and sub-payloads). There is no name data. So: when you store and retrieve an extension field, all you're saying is "fetch field {field number}, interpret it as {type}". If two different systems have extended the same data using the same field number, then you don't have any way of knowing whether the data you're fetching was actually in that format.
Normally this isn't a problem, as it is rare to conflict like this on the same data; but custom options are different! I'm a library author; I might want to add a custom option that my schema parsing tools recognize, by extending (say) MessageOptions. MessageOptions is the extension point for DescriptorProto, which is to say: that's where option (foo) = "bar"; goes inside a message.
To do that, I need to assign a number for foo. I choose 5000 arbitrarily (MessageOptions declares "extensions 1000 to max;", so that's fine). All is good. My tooling works.
Unknown to me, another library author has chosen to do something similar and has also used 5000. Once the schema is compiled (by protoc or similar), all I have is numbers. If I ask for the data from field 5000, I don't know whether I'm getting my extension, or the other one. The meaning is lost. OK, at a push I could also check the dependency list on the FileDescriptorProto, but ... that's hit and miss.
I don't know whether the presence of value 1 in field 5000 is:
option (.mystuff.someext) = 1;
vs
option (.anotherlib.whatever) = -1; // stored as sint32
vs
option (.yetanother.library.option) = true;
If those extensions all have number 5000, they appear identically on the wire.
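For reference, declaring such an option looks like this (a hypothetical sketch; the package and option names are mine, mirroring the first case above):

syntax = "proto2";

package mystuff;

import "google/protobuf/descriptor.proto";

extend google.protobuf.MessageOptions {
  optional int32 someext = 5000; // the arbitrarily chosen field number
}

message Example {
  option (mystuff.someext) = 1; // on the wire: field 5000, varint 1
}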

Can Groups be used to emulate the "class" or "struct" data structures from other languages

Is there a data structure within LiveCode that can be used as a "holder" for associated data, letting me handle it collectively? I come from a Java / Javascript / C background so I am looking for a Class or Struct sort of data structure.
I've found examples of Groups, which seem to have some of this functionality, but it feels a bit like I'm bending the language to meet my needs.
As a specific example, suppose I had an image field on my screen that would randomly display an image and, when pressed, play an associated sound clip. I'd expect to create a list of "structures" that contained the path to the image and the path to the associated sound clip, and use that data to populate the image field and to decide what sound clip to play.
Would a Group be the correct structure to use in this case? Or am I approaching this in a way that isn't really fitting with the way LiveCode works?
It takes a little getting used to, but the xTalk world is much simpler and more open than any ordinary procedural language. So much of what you once had to manage is no longer required.
So when splash21 said that you could store all your image and sound references in a custom property, he was really saying that the LiveCode environment contains intrinsic, high level functionality that makes these sorts of things instantly accessible, and the only thing required of you is to call for them, and they simply work.
The only way to appreciate this is to make a few simple programs, to really see what is possible. Make your application. Everything you mentioned can be accomplished with perhaps a dozen lines of code in a single handler. I recommend that you join the LiveCode use-list and forums. The community is vibrant and eager to help, frequently with full-blown solutions to specific problems, but more importantly, as guides and mentors to new users.
Craig Newman
Arrays in LiveCode are actually associative arrays (like hash maps). A key is associated with a value, and the value may itself be another array.
Chapter 5.5.7 of the User's Guide says
Array elements may contain nested or sub-elements, making them multi-dimensional.
This type of array is ideal for processing hierarchical data structures such as trees or
XML. To access a sub-element, simply declare it using an additional set of square
brackets.
put "ABC" into myVariable["myKeyName"][“aChildElement”]
see also
How to store pictures in a stack?
Dave- I'm hoping to get a struct-like container implemented in the near future. Meanwhile you can, as splash21 mentioned, use custom properties (or better yet, custom property sets) to do what you want. This will give you a pseudo-struct for each object and you can implement the file and sound specifications into the properties. And if you use that in conjunction with a behavior object you'll end up very close to a real inheritable class formation.
