Multiple RFID tags reading collision

Consider passive RFID tags for this question:
When a reader reads multiple RFID tags, do the tags all transmit at the same frequency? If so, there must be collisions?
How does the reader deal with this problem (if it exists)?
Thanks a lot,
Ali Tariq

Yes, and yes, there are collisions. Those are handled by the protocol layer. In the RAIN RFID/EPC Gen2 protocol, for example, this is handled with a slotted-ALOHA mechanism: the reader divides the available time into slots, and each tag transmits in one randomly chosen slot. Collisions still occur, but the reader keeps retrying until all tags have been read. This is efficient enough for most use cases.
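To make the mechanism concrete, here is a minimal Python sketch of slotted-ALOHA inventorying. The frame size and tag count are invented for illustration, and real Gen2 readers adapt the slot count per round (the Q algorithm) rather than keeping it fixed:

```python
import random

def inventory(num_tags=20, slots_per_round=16):
    """Count how many rounds it takes to read every tag."""
    unread = set(range(num_tags))
    rounds = 0
    while unread:
        rounds += 1
        # Each unread tag picks a random slot for this round.
        slots = {}
        for tag in unread:
            slots.setdefault(random.randrange(slots_per_round), []).append(tag)
        # A slot with exactly one responder is a successful read;
        # slots with two or more responders are collisions.
        for tags in slots.values():
            if len(tags) == 1:
                unread.discard(tags[0])
    return rounds

print("rounds needed:", inventory())
```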

Related

Can protocol buffers be used without gRPC?

Hello everyone. I am getting my hands dirty with gRPC and protocol buffers, and I came across an article which mentions that the binary protobuf file for a message is 5 times smaller than its JSON counterpart, but that this level of compression can only be achieved if transmitting via gRPC. This particular comment, "compression possible when transmitting via gRPC", I can't seem to understand, because my understanding was that protocol buffers are a serialization format that works irrespective of gRPC. Is this understanding flawed? What does this mean? Here is the link to the article and the screenshot.
https://www.datascienceblog.net/post/programming/essential-protobuf-guide-python/
You are correct - protocol buffers "provide a serialization format for packets of typed, structured data that are up to a few megabytes in size" and have many uses (e.g. serialising data to disk, network transmission, passing data between applications, etc.).
The article you reference is somewhat misleading when it says:
we can only achieve this level of compression if we are transmitting binary Protobuf using the gRPC protocol
The reasoning behind the statement becomes a bit clearer when you consider the following paragraph:
If gRPC is not an option, a common pattern is to encode the binary Protobuf data using the base64-encoding. Although this encoding irrevocably increases the size of the payload by 33%, it is still much smaller than the corresponding REST payload.
So this appears to be focusing on transmitting data over HTTP, which is a text-based protocol (HTTP/2 changes this somewhat).
Because Protobuf is a binary format, it is often encoded before being transmitted via HTTP (technically it could be transferred in binary using something like application/octet-stream). The article mentions base64 encoding, and using this increases the payload's size.
If you are not using HTTP (e.g. writing to disk, a direct TCP/IP link, WebSockets, MQTT, etc.), then this does not apply.
Having said that, I believe the example may be overstating the benefits of Protobuf a little, because a lot of HTTP traffic is compressed (this article reported a 9% difference in their test).
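To see the base64 overhead concretely, here is a small Python sketch; the payload is a stand-in byte string, not a real Protobuf message:

```python
import base64

payload = bytes(range(256)) * 4  # stand-in for serialized Protobuf data
encoded = base64.b64encode(payload)
print(len(payload), "->", len(encoded))  # 1024 -> 1368, roughly a 33% increase
```

The increase follows from base64 mapping every 3 input bytes to 4 output characters.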
I agree with Brits' answer.
Some separate points. It's not quite accurate to talk about GPB compressing data; it merely goes some way toward minimally encoding data (e.g. integers). If you send a GPB message that has 10 strings all the same, it won't compress that down in the way you might expect zip to.
GPB is not the most efficient encoder of data out there. ASN.1 uPER is even better, being able to exploit knowledge that can be put into an ASN.1 schema but cannot be expressed in a GPB .proto file. For example, in ASN.1 you can constrain the value of an integer to, say, 1000..1003. In ASN.1 uPER that's 2 bits (there are 4 possible values). In GPB, that's at least two bytes of encoded data.
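For reference, here is a minimal Python sketch of how GPB encodes an unsigned integer on the wire as a varint: 7 payload bits per byte, with the high bit set while more bytes follow. This is why values like 1000..1003 cost two bytes before the field tag is even counted:

```python
def encode_varint(value):
    """Protobuf-style varint encoding of a non-negative integer."""
    out = bytearray()
    while True:
        byte = value & 0x7F   # low 7 bits
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

for n in (1, 300, 1000, 1003):
    print(n, encode_varint(n).hex())  # 1000 -> 'e807' (two bytes)
```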

Is there a way to track an RFID tag using the RFID reader?

I have a long-range RFID reader together with RFID tags. Is there any possibility to track the tags? Thanks. I use VB.NET as my programming language for saving the tag to the database.
The tag only replies with an ID, not with a timestamp of when it started sending, right? You need the time to measure speed. With the time, knowing which antenna received the tag's answer, and knowing when the reader started asking for tags, you could make a rough estimate of distance and maybe direction of travel.
To really track movement you need to know the building environment and have more than one reader.
An RFID tag is not a GPS transponder.
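As a rough illustration of that idea, here is a hedged Python sketch that infers direction of travel from the first-read timestamps at two antennas; the antenna names and timestamps are invented for illustration:

```python
def direction(reads):
    """reads: list of (antenna, timestamp) events observed for one tag ID."""
    first_seen = {}
    for antenna, t in reads:
        first_seen.setdefault(antenna, t)  # keep the earliest read per antenna
    # If both antennas saw the tag, order of first sightings gives direction.
    if {"door_outside", "door_inside"} <= first_seen.keys():
        if first_seen["door_outside"] < first_seen["door_inside"]:
            return "entering"
        return "leaving"
    return "unknown"

print(direction([("door_outside", 10.0), ("door_inside", 11.5)]))  # entering
```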

Compression Algorithm for Small Amounts of Data

I have a server-side program that generates JSON for a client. A few of my colleagues have suggested using zip/gzip compression in order to reduce the amount of data being sent over the wire. However, when tested against one of my average JSON messages, they both actually increased the amount of data being sent. It wasn't until I sent an unusually large response that the zipping kicked in and was useful.
So I started poking around stackoverflow, and I eventually found LZO, which, when tested, did exactly what I wanted it to do. However, I can't seem to find documentation of the run time of the algorithm, and I'm not quite good enough to sit down with the code and figure it out myself :)
tl;dr? RUN TIME OF LZO?
I'm going to ignore your question about the runtime of LZO (answer: almost certainly fast enough) and discuss the underlying problem.
You are exchanging JSON data structures over the wire and want to reduce your bandwidth. At the moment you are considering general-purpose compression algorithms like DEFLATE and LZO. However, any compression algorithm based on Lempel-Ziv techniques works best on large amounts of data. These algorithms work by building up a dictionary of frequently occurring sequences of data, so that they can encode a reference to the dictionary instead of the whole sequence when it repeats. The bigger the dictionary, the better the compression ratio. For very small amounts of data, like individual data packets, the technique is useless: there isn't time to build up the dictionary, and there isn't time for lots of repeats to appear.
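You can see this effect directly: gzip adds roughly 20 bytes of header and trailer, plus dictionary bookkeeping, so a tiny JSON message often grows. A minimal Python demonstration (the message content is invented):

```python
import gzip
import json

msg = json.dumps({"id": 42, "status": "ok", "value": 3.14}).encode()
packed = gzip.compress(msg)
print(len(msg), "->", len(packed))  # the "compressed" form is larger
```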
If you are using JSON to encode a wire protocol, then your packets are very likely stereotyped, with similar structures and a small number of common keys. So I suggest investigating Google's Protocol Buffers, which are designed specifically for this use case.
Seconding the suggestion to avoid LZO and any other type of generic/binary-data compression algorithm.
Your other options are basically:
Google's Protocol Buffers
Apache Thrift
MessagePack
The best choice depends on your server/language setup, your speed-vs-compression requirements, and personal preference. I'd probably go with MessagePack myself, but you won't go wrong with Protocol Buffers either.
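As an illustration of the size difference, here is a minimal Python sketch comparing JSON and MessagePack encodings of the same record (it assumes the third-party msgpack package is installed; the record is invented):

```python
import json

import msgpack  # pip install msgpack

record = {"id": 42, "name": "example", "ok": True}
as_json = json.dumps(record).encode()
as_msgpack = msgpack.packb(record)
print(len(as_json), "vs", len(as_msgpack))  # MessagePack is noticeably smaller
```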

Data structures for audio editor

I have been writing an audio editor for the last couple of months, and have been recently thinking about how to implement fast and efficient editing (cut, copy, paste, trim, mute, etc.). There doesn't really seem to be very much information available on this topic, however... I know that Audacity, for example, uses a block file strategy, in which the sample data (and summaries of that data, used for efficient waveform drawing) is stored on disk in fixed-sized chunks. What other strategies might be possible, however? There is quite a lot of info on data-structures for text editing - many text (and hex) editors appear to use the piece-chain method, nicely described here - but could that, or something similar, work for an audio editor?
Many thanks in advance for any thoughts, suggestions, etc.
Chris
The classical problem for editors handling relatively large files is how to cope with deletion and insertion. Text editors obviously face this, as the user typically enters characters one at a time. Audio editors don't typically do "sample by sample" inserts, i.e. the user doesn't interactively enter one sample at a time, but you do have cut-and-paste operations.
I would start with a representation where an audio file is represented by chunks of data stored in a (binary) search tree. Insert works by splitting the chunk you are inserting into in two, adding the inserted chunk as a third one, and updating the tree. To make this efficient and responsive to the user, you should then have a background process that defragments the representation on disk (or in memory) and then makes an atomic update to the tree holding the chunks. This should make inserts and deletes as fast as possible.
Many other audio operations (effects, normalize, mix) operate in place and do not require changes to the data structure, but doing e.g. a normalize on the whole audio sample is a good opportunity to defragment it at the same time. If the audio samples are large, you can keep the chunks on hard disk as well. I don't believe the chunks need to be fixed size; they can be variable size, preferably 1024 x (a power of two) bytes to make file operations efficient, though a fixed-size strategy can be easier to implement.
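To make the chunk-splitting idea concrete, here is a minimal Python sketch of such a structure. It uses a plain list instead of a balanced search tree, and bytes stand in for sample data, so it illustrates the insert-by-splitting idea rather than a production design:

```python
class Chunk:
    def __init__(self, data):
        self.data = data  # an immutable slice of sample data

class AudioPieces:
    def __init__(self, data=b""):
        self.chunks = [Chunk(data)] if data else []

    def insert(self, pos, data):
        """Insert by splitting the chunk containing pos into two."""
        new, offset, inserted = [], 0, False
        for c in self.chunks:
            end = offset + len(c.data)
            if not inserted and offset <= pos <= end:
                cut = pos - offset
                if cut > 0:
                    new.append(Chunk(c.data[:cut]))   # left half
                new.append(Chunk(data))               # inserted piece
                if cut < len(c.data):
                    new.append(Chunk(c.data[cut:]))   # right half
                inserted = True
            else:
                new.append(c)
            offset = end
        if not inserted:  # empty document or append at the end
            new.append(Chunk(data))
        self.chunks = new

    def read(self):
        return b"".join(c.data for c in self.chunks)

buf = AudioPieces(b"abcdef")
buf.insert(3, b"XY")
print(buf.read())  # b'abcXYdef'
```

A background defragmenter, as described above, would periodically coalesce small chunks and swap in the rebuilt structure atomically.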

Optimizing Data Translation

Our business deals with houses, and over the years we have created several business objects to represent them. We also receive lots of data from outside sources, and send data to external consumers. Every one of these represents the house in a different way, and we spend a lot of time and energy translating one format into another. I'm looking for some general patterns or best practices for how to deal with this situation. How can I write a universal data translator that is flexible, extensible, and fast?
Background: A house generally has 30-40 attributes such as size, number of bedrooms, roof type, construction material, siding material, etc. These are typically represented as key/value pairs. A typical translation problem is that one vendor will represent the number of bedrooms as a single key/value pair: NumBedrooms=3, while a different vendor will have a key/value pair per bedroom: Bedroom=master, Bedroom=small, Bedroom=small.
There's nothing particularly hard about the translation, but we spend a lot of time and energy writing and testing translations. How can I optimize this?
Thanks
(My environment is .Net)
The best place to start is by creating an "internal representation", which is the representation that your processing will always use. Then create translators from and to "external representations" as needed. I'd imagine that this is what you are already doing, but it should be mentioned for completeness. The optimization comes from being able to write importers and exporters selectively, only when you need them.
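As a minimal sketch of this pattern, using the bedroom example from the question (Python purely for brevity; the function and key names are invented):

```python
def from_vendor_a(pairs):
    """Vendor A sends a single NumBedrooms=3 pair."""
    raw = dict(pairs)
    return {"bedrooms": int(raw["NumBedrooms"])}

def from_vendor_b(pairs):
    """Vendor B sends one Bedroom=... pair per room."""
    return {"bedrooms": sum(1 for key, _ in pairs if key == "Bedroom")}

TRANSLATORS = {"A": from_vendor_a, "B": from_vendor_b}

def to_internal(vendor, pairs):
    """All processing downstream sees only the internal representation."""
    return TRANSLATORS[vendor](pairs)

print(to_internal("A", [("NumBedrooms", "3")]))
print(to_internal("B", [("Bedroom", "master"), ("Bedroom", "small"), ("Bedroom", "small")]))
```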
A good implementation strategy is to externalize the transformation if you can. If you can get your inputs and outputs into XML documents, then you can write XSLT transforms between your internal and external representations. The goal is to be able to set up a pipeline of transformations from an input XML document to your internal representation. If everything is represented in XML and using a common protocol (say... hmm... HTTP), then the process can be controlled using configuration. BTW - this is essentially the Pipes and Filters design pattern.
Take a look at Yahoo pipes, Apache Cocoon, XML pipeline, and NetKernel for inspiration.
My employer back in the 90s faced this problem. We had a standard format we converted the customers' data to and from, as D.Shawley suggests.
I went further and designed a simple format-description language; we described our standard format in that language and then, for a new dataset, we'd write up its format too. Then a program would take both descriptions and convert the data from one format to the other, with automatic type conversions, safety checks, etc. (This came in handy for some other operations as well, not just these initial/final conversions.)
The particulars probably won't help you -- chances are you deal with completely different kinds of data. You can likely profit from the general principle, though. The "data definition language" needn't necessarily be a fancy thing with a parser and scanner; you might define it directly with a data structure in IronPython, say.
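In that spirit, here is a hedged sketch of what describing a format "directly with a data structure" might look like, with a generic converter doing the automatic type conversions and safety checks; all field names are invented for illustration:

```python
# A format description is just a list of (field name, Python type) pairs.
VENDOR_FORMAT = [
    ("NumBedrooms", int),
    ("RoofType", str),
]

def parse_record(fmt, pairs):
    """Convert raw key/value pairs to typed values, checking for gaps."""
    raw = dict(pairs)
    out = {}
    for name, typ in fmt:
        if name not in raw:
            raise ValueError(f"missing field: {name}")
        out[name] = typ(raw[name])  # automatic type conversion
    return out

print(parse_record(VENDOR_FORMAT, [("NumBedrooms", "3"), ("RoofType", "gable")]))
```

A second description for the standard format, plus a field-mapping table between the two, would complete the one-program-converts-everything setup described above.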
