Is AMF only for Flash?

I am new to AMF and I learned that AMF is supposedly very fast.
I was wondering if I should use it for all my web services.
Is it still fast without the Flash VM?

This question could have two answers. One answer is about AMF as a protocol, the other answer is about implementations.
As a protocol, AMF produces small output that has much of the redundancy stripped out. As compared with a similar SOAP implementation, AMF will use fewer bytes of network traffic. In some applications this could be thought of as "faster".
AMF can also be fast if the implementation that encodes it is fast. The ActionScript VM can encode it quite quickly. However, you state you will not be using the Flash VM; in that case, you might be thinking of using Python. For Python, there are two open-source choices: PyAmf and AmFast. AmFast is reportedly about eighteen times faster than PyAmf at encoding.
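To make that concrete, here is a minimal sketch of AMF3 encoding on the server side with PyAmf (an assumption on my part: the payload is made up, and the API shown is from PyAmf's 0.x releases, so details may differ in your version):

    # Minimal sketch: AMF3 encoding in Python with the open-source PyAmf library.
    import pyamf

    payload = {'id': 42, 'name': 'example', 'tags': ['a', 'b']}

    # pyamf.encode returns a buffered byte stream; getvalue() yields raw bytes.
    amf3_bytes = pyamf.encode(payload, encoding=pyamf.AMF3).getvalue()
    print(len(amf3_bytes))  # typically far fewer bytes than an equivalent SOAP/XML body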
So the ultimate answer is this: determine what kind of "fast" you need, then compare the capabilities of the encoders you can choose from.

AMF was specifically designed for ActionScript, but it is just a binary serialization format. You could use it anywhere, as long as both the client and the server understand AMF.
There are many libraries for many different server-side languages that understand AMF, but I am not sure where else you would want to use AMF on the client side except for Flash.

AMF is fast in the sense that, compared to other formats you could use, it's:
smaller, so responses take less time to transfer over the network than they would in bigger formats (which affects you no matter what you're using AMF with), and
closer to the native format of the client, so there's less parsing work for the client to do (which is mainly a benefit with Flash Player, though it's still probably closer to any client's native form than e.g. XML).
There's a good comparison of AMF's performance against other protocols/formats used with Flex at: http://www.jamesward.com/census2/, but I haven't seen a comparison that covers the performance of any other AMF client (even James Ward's own pure JavaScript AMF decoder).

Related

Desktop Duplication API vs Windows.Graphics.Capture

I'm writing a screen recording app for a specific game, with a focus on performance. Support for older versions of Windows is irrelevant, so either of the two APIs seems valid compatibility-wise, but I can't find much data on performance. As far as I know, NVIDIA deprecated NVFBC in favor of DDA (https://developer.download.nvidia.com/designworks/capture-sdk/docs/NVFBC_Win10_Deprecation_Tech_Bulletin.pdf), and according to that document, DDA with GPU encoding (whether NVENC or equivalent) performs quite well. Where does Windows.Graphics.Capture fit into all of this? I know it's newer, but is it more performant? I can't run my own benchmarks, because I'm not familiar with either API; by coding either one suboptimally I might end up comparing the performance of my implementation instead of the API. The game is mostly CPU-bound on most systems, so for most users the GPU would have spare power for encoding. Aside from performance, what are the benefits and drawbacks of using either of the aforementioned APIs?

In terms of HTTP request performance, AJAX or Flash?

In terms of HTTP request performance, should I pick AJAX or Flash? To be more specific, I'm more into Flash than AJAX, and I'm currently working on a large-scale web project. I wanted to try AJAX for once, and now it's getting too messy for me. Before it gets more complicated, I thought maybe I could run Flash in the background for HTTP requests and drive it from JavaScript.
I couldn't find any benchmark on the Internet, but I think AJAX is faster than Flash. So what's your personal experience? Is there much difference between Flash and AJAX?
Flash and JS both use the browser to send HTTP requests so I don't see any reason there would be a difference in performance between them.
From my personal experience, AJAX tends to be a little faster than Flash, depending on the movie you're loading. If your movie is extremely large it will take longer, but for small content they're virtually as fast; the difference is barely noticeable. However, keep in mind I'm testing on a fairly good laptop; on other devices, like phones, the difference might be bigger (Flash would probably be slower).
Hope this helps a bit!
N.S.
I agree that AJAX is generally faster than Flash performing a similar request, but really the speed difference should be a negligible consideration. Adding the requirement of a Flash movie just to act as an HTTP communication tool seems like a bad idea, because you will still need a JavaScript fallback wherever Flash is unavailable.
I wonder where the proof is in any of these responses. I've used both; I started off doing lots of HTML and JS programming and used AJAX when it was first gaining traction, and found its performance to be okay. AMF3 is faster than JSON, hands down. Why? Not because of differences in the HTTP transport they both ride on, but because of how the data itself is represented (the compression schemes and serialization/deserialization mechanisms make all the difference).
Go ahead and check it out for yourself: http://www.jamesward.com/census2/
(after all, the best proof is a test)
Dojo JSON using gzip compression comes closest to AMF3, but still produces a payload about 160% the size of the AMF payload; one and a half times larger is, in my opinion, never going to be faster assuming equivalent bandwidth. I believe that with the latest JavaScript engines, deserializing directly in the browser rather than in the Flash plugin might make JSON faster for small payloads, but for large amounts of data I don't think that processing-time difference makes up for the payload size.
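If you want to sanity-check the payload-size claim yourself, here is a rough sketch (assuming the PyAmf package is installed; the test data is made up, and exact numbers depend entirely on your data's shape):

    # Compare AMF3, raw JSON, and gzipped JSON sizes for the same made-up data.
    import gzip, io, json
    import pyamf

    data = [{'x': i, 'y': i * 2, 'label': 'point %d' % i} for i in range(1000)]

    amf3 = pyamf.encode(data, encoding=pyamf.AMF3).getvalue()
    raw_json = json.dumps(data).encode('utf-8')

    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
        gz.write(raw_json)

    print('AMF3: %d  JSON: %d  gzipped JSON: %d'
          % (len(amf3), len(raw_json), len(buf.getvalue())))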

What are the key differences between Apache Thrift, Google Protocol Buffers, MessagePack, ASN.1 and Apache Avro?

All of these provide binary serialization, RPC frameworks and IDL. I'm interested in key differences between them and characteristics (performance, ease of use, programming languages support).
If you know of any other similar technologies, please mention them in an answer.
ASN.1 is an ISO/IEC standard. It has a very readable source language and a variety of back-ends, both binary and human-readable. Being an international standard (and an old one at that!) the source language is a bit kitchen-sinkish (in about the same way that the Atlantic Ocean is a bit wet), but it is extremely well-specified and has a decent amount of support. (You can probably find an ASN.1 library for any language you name if you dig hard enough, and if not there are good C-language libraries available that you can use in FFIs.) It is, being a standardized language, obsessively documented and has a few good tutorials available as well.
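For a small taste of what using ASN.1 from code looks like, here is a sketch with pyasn1, one open-source Python option (my choice for illustration; the answer doesn't name a library, and the Point type is made up):

    # Define a two-field ASN.1 SEQUENCE and round-trip it through DER.
    from pyasn1.type import univ, namedtype
    from pyasn1.codec.der import encoder, decoder

    class Point(univ.Sequence):
        componentType = namedtype.NamedTypes(
            namedtype.NamedType('x', univ.Integer()),
            namedtype.NamedType('y', univ.Integer()),
        )

    p = Point()
    p['x'] = 3
    p['y'] = 4

    der_bytes = encoder.encode(p)   # DER: canonical, compact binary encoding
    decoded, _rest = decoder.decode(der_bytes, asn1Spec=Point())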
Thrift is not a standard. It originated at Facebook and was later open-sourced; it is currently a top-level Apache project. It is not well-documented -- especially at the tutorial level -- and at my (admittedly brief) glance doesn't appear to add anything that other, earlier efforts don't already do (and in some cases do better). To be fair, it supports a rather impressive number of languages out of the box, including a few higher-profile non-mainstream ones. The IDL is also vaguely C-like.
Protocol Buffers is not a standard. It is a Google product that has been released to the wider community. It is a bit limited in terms of languages supported out of the box (only C++, Python, and Java), but it has a lot of third-party support for other languages (of highly variable quality). Google does pretty much all of its work using Protocol Buffers, so it is a battle-tested, battle-hardened protocol (albeit not as battle-hardened as ASN.1). It has much better documentation than Thrift, but, being a Google product, it is likely to be unstable (in the sense of ever-changing, not in the sense of unreliable). The IDL is also C-like.
All of the above systems use a schema defined in some kind of IDL to generate code for a target language that is then used in encoding and decoding. Avro does not. Avro's typing is dynamic and its schema data is used at runtime directly both to encode and decode (which has some obvious costs in processing, but also some obvious benefits vis a vis dynamic languages and a lack of a need for tagging types, etc.). Its schema uses JSON which makes supporting Avro in a new language a bit easier to manage if there's already a JSON library. Again, as with most wheel-reinventing protocol description systems, Avro is also not standardized.
Personally, despite my love/hate relationship with it, I'd probably use ASN.1 for most RPC and message transmission purposes, although it doesn't really have an RPC stack (you'd have to make one, but IOCs make that simple enough).
We just did an internal study on serializers; here are some results (for my future reference too!)
Thrift = serialization + RPC stack
The biggest difference is that Thrift is not just a serialization protocol, it's a full-blown RPC stack, like a modern-day SOAP stack. After serialization, the objects can (but are not required to) be sent between machines over TCP/IP. In SOAP you started with a WSDL document that fully described the available services (remote methods) and the expected arguments/objects, and those objects were sent via XML. In Thrift, the .thrift file fully describes the available methods and expected parameter objects, and the objects are serialized via one of the available serializers (Compact Protocol, an efficient binary protocol, being the most popular in production).
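As a sketch of what that looks like in practice, here is Compact Protocol serialization in Python, assuming a hypothetical struct User generated by `thrift --gen py user.thrift` (the struct and its fields are made up for illustration):

    # Serialize a generated Thrift struct with the Compact Protocol (no RPC needed).
    from thrift.protocol import TCompactProtocol
    from thrift.transport import TTransport
    from user.ttypes import User  # hypothetical code-gen module

    u = User(name='Ada', id=1)

    write_buf = TTransport.TMemoryBuffer()
    u.write(TCompactProtocol.TCompactProtocol(write_buf))
    data = write_buf.getvalue()     # compact binary bytes, ready for TCP/IP

    read_buf = TTransport.TMemoryBuffer(data)
    decoded = User()
    decoded.read(TCompactProtocol.TCompactProtocol(read_buf))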
ASN.1 = the granddaddy
ASN.1 was designed by telecom folks in the '80s and is awkward to use due to limited library support compared to the recent serializers that emerged from CompSci folks. There are two encodings you will commonly meet: DER (binary) and PEM (ASCII; essentially base64-wrapped DER). Both are fast, but DER is the faster and more size-efficient of the two. In fact, ASN.1 DER can easily keep up with (and sometimes beat) serializers designed 30 years after it, a testament to its well-engineered design. It is very compact, smaller than Protocol Buffers and Thrift, and beaten only by Avro. The issue is library support; right now Bouncy Castle seems to be the best option for C#/Java. ASN.1 is king in security and crypto systems and isn't going to go away, so don't worry about 'future proofing'. Just get a good library...
MessagePack = middle of the pack
It's not bad, but it's neither the fastest, nor the smallest, nor the best supported. There is no production reason to choose it.
Common
Beyond that, they are fairly similar. Most are variants of the basic TLV (Type-Length-Value) principle.
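As a toy illustration of the TLV idea (the one-byte type tags and the 4-byte length field here are made up, not any real format's wire layout):

    # Toy TLV codec: 1-byte type tag, 4-byte big-endian length, raw value bytes.
    import struct

    def tlv_encode(type_tag, value):
        return struct.pack('>BI', type_tag, len(value)) + value

    def tlv_decode(buf):
        type_tag, length = struct.unpack_from('>BI', buf, 0)
        value = buf[5:5 + length]
        return type_tag, value, buf[5 + length:]  # tag, payload, remaining bytes

    blob = tlv_encode(1, b'hello') + tlv_encode(2, b'\x00\x2a')
    tag, payload, rest = tlv_decode(blob)  # -> 1, b'hello', <second record>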
Protocol Buffers (Google originated), Avro (Apache based, used in Hadoop), Thrift (Facebook originated, now an Apache project) and ASN.1 (telecom originated) all involve some level of code generation: you first express your data in a serializer-specific format, then the serializer "compiler" generates source code for your language via the code-gen phase. Your app source then uses these code-gen classes for IO. Note that certain implementations (e.g. Microsoft's Avro library or Marc Gravell's protobuf-net) let you directly decorate your app-level POCO/POJO objects, and the library then uses those decorated classes directly instead of any code-gen classes. We've seen this offer a performance boost, since it eliminates an object-copy stage (from application-level POCO/POJO fields to code-gen fields).
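The code-gen workflow, sketched with Protocol Buffers' Python binding (assuming a hypothetical person.proto compiled with `protoc --python_out=. person.proto`; the message and fields are made up):

    # person.proto (assumed): message Person { optional string name = 1;
    #                                          optional int32 id = 2; }
    import person_pb2  # module produced by the protoc code-gen phase

    p = person_pb2.Person()
    p.name = 'Ada'
    p.id = 1
    data = p.SerializeToString()   # compact binary wire format

    q = person_pb2.Person()
    q.ParseFromString(data)        # round-trip back into a typed object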
Some results and a live project to play with
This project (https://github.com/sidshetye/SerializersCompare) compares important serializers in the C# world. The Java folks already have something similar.
1000 iterations per serializer, average times listed
Sorting result by size
Name Bytes Time (ms)
------------------------------------
Avro (cheating) 133 0.0142
Avro 133 0.0568
Avro MSFT 141 0.0051
Thrift (cheating) 148 0.0069
Thrift 148 0.1470
ProtoBuf 155 0.0077
MessagePack 230 0.0296
ServiceStackJSV 258 0.0159
Json.NET BSON 286 0.0381
ServiceStackJson 290 0.0164
Json.NET 290 0.0333
XmlSerializer 571 0.1025
Binary Formatter 748 0.0344
Serialized via ASN.1 DER encoding to 148 bytes in 0.0674ms (hacked experiment!)
Adding to the performance perspective, Uber recently evaluated several of these libraries on their engineering blog:
https://eng.uber.com/trip-data-squeeze/
The winner for them? MessagePack + zlib for compression
Our goal was to find the combination of encoding protocol and compression algorithm with the most compact result at the highest speed. We tested encoding protocol and compression algorithm combinations on 2,219 pseudorandom anonymized trips from Uber New York City (put in a text file as JSON).
The lesson here is that your requirements drive which library is right for you. For Uber, they couldn't use an IDL-based protocol due to the schemaless nature of their message passing. This eliminated a bunch of options. Also, for them it's not only raw encoding/decoding time that comes into play, but also the size of data at rest.
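For reference, the winning combination is only a few lines in most languages; a Python sketch (assuming the msgpack package; the trip record is made up):

    # Encode with MessagePack, then compress with zlib, as in the Uber post.
    import msgpack
    import zlib

    trip = {'trip_id': 'abc123', 'lat': 40.7484, 'lng': -73.9857, 'fare': 23.5}

    blob = zlib.compress(msgpack.packb(trip))           # encode, then compress
    restored = msgpack.unpackb(zlib.decompress(blob))   # decompress, then decode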
(The size and speed result charts are in the linked post.)
The one big thing about ASN.1 is that it is designed for specification, not implementation. Therefore it is very good at hiding/ignoring implementation detail in any "real" programming language.
It is the job of the ASN.1 compiler to apply encoding rules to the asn1 file and generate executable code from both of them. The encoding rules might be given in Encoding Control Notation (ECN) or might be one of the standardized ones such as BER/DER, PER, or XER/EXER.
That is, ASN.1 is the types and structures, the encoding rules define the on-the-wire encoding, and, last but not least, the compiler translates it all into your programming language.
The free compilers support C, C++, C#, Java, and Erlang to my knowledge. The (much too expensive and patent/license-ridden) commercial compilers are very versatile, usually absolutely up to date, and sometimes support even more languages; see their sites (OSS Nokalva, Marben etc.).
It is surprisingly easy to specify an interface between parties from totally different programming cultures (e.g. "embedded" people and "server farmers") using this technique: an asn.1 file, an encoding rule such as BER, and e.g. a UML interaction diagram. No worries about how it is implemented; let everyone use "their thing"! For me it has worked very well.
By the way, at OSS Nokalva's site you may find at least two free-to-download books about ASN.1 (one by Larmouth, the other by Dubuisson).
IMHO, most of the other products try only to be yet another RPC-stub generator, pumping a lot of air into the serialization issue. Well, if one needs that, one might be fine. But to me, they look like reinventions of Sun RPC (from the late '80s); but hey, that worked fine too.
Microsoft's Bond (https://github.com/Microsoft/bond) is very impressive in performance, functionality, and documentation. However, it does not support many target platforms as of now (13 Feb 2015); I can only assume that is because it is very new. Currently it supports Python, C#, and C++. It's being used by MS everywhere. I have tried it; to me as a C# developer, using Bond is better than using protobuf. I have used Thrift as well; the only problem I faced was with the documentation, where I had to try many things to understand how things are done.
A few resources on Bond: https://news.ycombinator.com/item?id=8866694 , https://news.ycombinator.com/item?id=8866848 , https://microsoft.github.io/bond/why_bond.html
For performance, one data point is the jvm-serializers benchmark -- it's quite specific (small messages), but it might help if you are on the Java platform. I think performance in general will often not be the most important difference. Also: never take authors' words as gospel; many advertised claims are bogus (the msgpack site, for example, has some dubious claims; it may be fast, but the information is very sketchy and the use case not very realistic).
One big difference is whether a schema must be used: PB and Thrift at least require one; with Avro it may be optional; ASN.1 I think requires one as well; MsgPack, not necessarily.
Also: in my opinion it is good to be able to use layered, modular design; that is, RPC layer should not dictate data format, serialization. Unfortunately most candidates do tightly bundle these.
Finally, when choosing data format, nowadays performance does not preclude use of textual formats. There are blazing fast JSON parsers (and pretty fast streaming xml parsers); and when considering interoperability from scripting languages and ease of use, binary formats and protocols may not be the best choice.

Decoding 802.11b

I have raw data grabbed from a spectrometer that was monitoring Wi-Fi (802.11b) channel 6
(two laptops in ad-hoc mode pinging each other).
I would like to decode this data in MATLAB.
I have it as a complex vector of 4.6 million complex samples.
Its spectrum looks quite nice. I am looking for a document a bit less complicated than the IEEE 802.11 standard (which I have).
I can share the measurement data with other people.
There are now a few solutions around for decoding 802.11 using Software Defined Radio (SDR) techniques. As mentioned in a previous answer, there is software based on GNU Radio -- specifically gr-ieee802-11 and also 802.11n+. Plus, higher-end SDR boards like WARP utilise FPGA-based implementations of 802.11. There are also a bunch of 802.11 implementations for MATLAB available, e.g. 802.11a.
If your data is really raw, then you basically have to build every piece of the signal-processing chain in software, which is possible but not really straightforward (sketched below). Have you checked the relevant Wikipedia page? You might use GNU Radio instead of starting from scratch.
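To give a flavor of what building the chain yourself means, here is a very rough sketch of the first demodulation step for 1 Mb/s 802.11b in Python/NumPy (the question asks about MATLAB, but the idea translates directly; this assumes the capture has already been resampled to one sample per chip at 11 Mchip/s and coarse-frequency-corrected, and it ignores synchronization, scrambling, and PLCP header parsing entirely):

    # Despread 802.11b DSSS chips with the 11-chip Barker code, then DBPSK-demap.
    import numpy as np

    BARKER = np.array([1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1], dtype=float)

    def despread(samples):
        # One symbol per 11 chips: correlate each chip group against the code.
        n_sym = len(samples) // 11
        chips = samples[:n_sym * 11].reshape(n_sym, 11)
        return chips @ BARKER  # one complex correlation value per symbol

    def dbpsk_demap(symbols):
        # Differential BPSK: compare each symbol's phase with the previous one.
        diff = symbols[1:] * np.conj(symbols[:-1])
        return (diff.real < 0).astype(int)  # ~180 degree shift -> bit 1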
I have used the IEEE 802.11 standard to encode and decode data in MATLAB.
Encoding data is an easy task.
Decoding is a bit more sophisticated.
I agree with Stan, it is going to be tough doing everything yourself. You may get some ideas from the projects on CGRAN, like:
https://www.cgran.org/wiki/WifiLocalization

Would you recommend Google Protocol Buffers or Caucho Hessian for a cross-language over-the-wire binary format?

Would you recommend Google Protocol Buffers or Caucho Hessian for a cross-language over-the-wire binary format? Or anything else, for that matter - Facebook Thrift for example?
We use Caucho Hessian because of the reduced integration costs and simplicity. Its performance is very good, so it's perfect for most cases.
For a few apps where cross-language integration is not that important, there's an even faster library called Kryo that can squeeze out even more performance.
Unfortunately it's not that widely used, and its protocol is not quasi-standard like Hessian's.
Depends on the use case. PB is much more tightly coupled and best used internally with closely-coupled systems; it is not good for shared/public interfaces (as in, interfaces shared between more than two specific systems).
Hessian is a bit more self-descriptive and has nice performance on Java. It did better than PB in my tests, but I'm sure that depends on the use case. PB seems to have trouble with textual data; perhaps it has been optimized for integer data.
I don't think either is particularly good for public interfaces, but given you want binary format, that is probably not a big problem.
EDIT: Hessian performance is actually not all that good, per the jvm-serializers benchmark. And PB is pretty fast as long as you make sure to add the flag that forces use of the fast options on Java.
And if PB is not good for public interfaces, what is? IMO, open formats like JSON are superior externally, and more often than not fast enough that performance does not matter a lot.
For me, Caucho Hessian is the best.
It is very easy to get started with, and the performance is good. I have tested it locally: latency is about 3 ms, and on a LAN you can expect about 10 ms.
With Hessian you don't have to write another file to define the model (we use Java on both ends). It saves a lot of time for development and maintenance.
If you need to interconnect apps across many languages/platforms, then Hessian is the best. If you use only Java, then Kryo is even faster.
I'm looking into this myself; no good conclusions so far, but I found http://dewpoint.snagdata.com/2008/10/21/google-protocol-buffers/ which summarizes all the options.
Muscle has a binary message transport. Sorry that I can't comment on the others as I haven't tried them.
I tried Google Protocol Buffers. It works with C++/MFC, C#, PHP and more languages (see: http://code.google.com/p/protobuf/wiki/ThirdPartyAddOns) and works really well regardless of transport and disk save/loading.
I would say that Protocol Buffers, Thrift, and Hessian are fairly similar as far as their binary formats are concerned, in that they all provide cross-language serialization support. The inherent serialization might show some small performance differences between them (size/space trade-offs), but this is not the most important thing. Protocol Buffers is certainly a well-performing IDL-defined format with extensibility features that make it attractive.
HOWEVER, the use of "over-the-wire" in the question implies the use of a communications library. Here Google has provided an interface definition for protobuf RPC, which amounts to a specification where all implementation details are left to the implementer. This is unfortunate, because it means there is de facto NO cross-language implementation -- unless you can find one among the third-party add-ons listed at http://code.google.com/p/protobuf/wiki/ThirdPartyAddOns. I have seen RPC implementations which support Java and C, or C and C++, or Python and C, etc., but here you just have to find a library which satisfies your concrete requirements, and evaluate it carefully, otherwise you're likely to be disappointed. (At least I was disappointed enough to write protobuf-rpc-pro.)
Kryo is a serialization format like protobuf, but Java-only. KryoNet is a Java-only RPC implementation using Kryo messages. So neither is a good choice for cross-language communication.
Today it would seem that ICE (http://www.zeroc.com/) and Thrift, which provide an RPC implementation out of the box, are the best cross-language RPC implementations out there.
