Info about the 3MF file format (for 3D printing) - Windows

Microsoft announced new support for 3D printing in Windows 8.1. They invented a new file format called 3MF for communicating with the printer.
Does anyone have any kind of spec for the 3MF file format, or even any detailed information about it? I'm especially interested in how geometry is represented.
Also, any thoughts on how 3MF might co-exist with the existing AMF format? Are the two formats competing or complementary?

You can request the Microsoft 3D Printing SDK (which includes the 3D Manufacturing Format spec) at ask3dprint at microsoft dot com.
Geometry is represented very similarly to AMF, with a more compact (fewer bytes) format.
3MF is intended to be a native 3D spool file format in the Windows print pipeline, something which AMF could not be (due to flaws in the spec, reported to ASTM). 3MF and AMF are not the same, but they follow a similar design approach. Since 3MF supports custom metadata, it is possible to transform an AMF document into 3MF format (and back).
3MF does not support curved triangles or a scripting language, however. The 3D printing community generally isn't using those anyway.

The spec can be downloaded here:
http://3mf.io/what-is-3mf/3mf-specification
I read the spec, and it stores individual mesh vertices with XML tags.
And, this is justified as follows: "The variable-precision nature of ASCII encoding is a significant advantage over fixed-width binary formats, and helps make up the difference in storage efficiency."
Anyone who has done web work knows that XML tags can eat up more than 60% of the storage space; just compare the number of bytes used for XML markup against the bytes of useful data. XML also requires more CPU time to parse. In addition, storing digits as variable-width ASCII is still longer, byte for byte, than fixed-width IEEE double precision.
This essentially cripples 3MF for handling large meshes, unless of course the companies involved develop their own proprietary extension that overcomes it.
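To make the size argument concrete, here is a rough back-of-the-envelope comparison in Python; the element and attribute names are only an approximation of the 3MF style, not the exact spec markup:

    import struct

    # One vertex as 3MF-style XML (illustrative markup) vs. fixed-width binary.
    xml_vertex = '<vertex x="10.467911" y="-3.120581" z="77.000000" />'

    binary_f64 = struct.pack("<3d", 10.467911, -3.120581, 77.0)  # 3 IEEE doubles
    binary_f32 = struct.pack("<3f", 10.467911, -3.120581, 77.0)  # 3 IEEE floats

    print(len(xml_vertex))  # 52 bytes, most of them markup
    print(len(binary_f64))  # 24 bytes
    print(len(binary_f32))  # 12 bytes, ample for printing tolerances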
I call on consumers and users to openly reject the current 3MF spec until the 3MF consortium fixes this particular aspect: modify the public spec to include an alternative binary format section for vertex/normal/index streaming.


Why doesn't protobuf support an alphanumeric type?

I'm wondering why protobuf doesn't implement support for a commonly used alphanumeric type.
This would make it possible to encode several characters in only a few bytes (more if case-insensitive) very efficiently, since no compression of any sort would be involved.
Is this something the protobuf developers are planning to implement in the future?
Thanks,
In today's global world, the number of times when "alphanumeric" means the 62 characters in the range 0-9, A-Z and a-z is fairly minimal. If we just consider the basic multilingual plane, there are about 48k code units (which is to say: over 70% of the available range) that are counted as "alphanumeric" - and a fairly standard (although even this may be suboptimal in some locales) way of encoding them is UTF-8, which protobuf already uses for the string type.
I cannot see much advantage in using a dedicated wire-type for this category of data, and any additional wire-type would have the problem that support would need to be added in multiple libraries, because an unknown wire-type renders the stream unreadable to down-level parsers: you cannot even skip over unwanted data if you don't know the wire-type (the wire-type defines the skip rules).
Of course, since you also have the bytes type available, you can feel free to do anything bespoke you want inside that.
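To illustrate the kind of bespoke scheme that last point allows, here is a sketch that packs a case-insensitive alphanumeric string at 6 bits per symbol, suitable for storage in a bytes field. The alphabet and framing are invented for illustration; none of this is protobuf API:

    # Pack a case-insensitive alphanumeric string at 6 bits per symbol.
    # The 36-symbol alphabet and the framing are invented for illustration;
    # protobuf's length-prefixed `bytes` field would carry the result as-is.
    ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"
    ENC = {ch: i + 1 for i, ch in enumerate(ALPHABET)}  # codes 1..36; 0 = padding
    DEC = {i + 1: ch for i, ch in enumerate(ALPHABET)}

    def pack6(text: str) -> bytes:
        acc = nbits = 0
        out = bytearray()
        for ch in text.lower():
            acc, nbits = (acc << 6) | ENC[ch], nbits + 6
            while nbits >= 8:
                nbits -= 8
                out.append((acc >> nbits) & 0xFF)
        if nbits:  # left-align the leftover bits; zero bits act as padding
            out.append((acc << (8 - nbits)) & 0xFF)
        return bytes(out)

    def unpack6(data: bytes) -> str:
        acc = nbits = 0
        chars = []
        for byte in data:
            acc, nbits = (acc << 8) | byte, nbits + 8
            while nbits >= 6:
                nbits -= 6
                code = (acc >> nbits) & 0x3F
                if code:  # code 0 is padding, not a symbol
                    chars.append(DEC[code])
        return "".join(chars)

    packed = pack6("user42")             # 6 chars -> 5 bytes instead of 6
    print(len(packed), unpack6(packed))  # -> 5 user42

As the saving suggests, the gain over plain ASCII is modest for short strings, which is part of why a dedicated wire-type would not pay for its complexity.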

Font layout algorithm, Win32

I'm trying to come up with a platform-independent way to render Unicode text to some platform-specific surface, but since most platforms support something at least roughly similar, maybe we can talk in terms of the Win32 API. I'm most interested in rendering LARGE buffers and supporting rich text, so while I definitely never want to look inside a Unicode buffer myself, I'd like to be told what to draw and be hinted where to draw it, so that if the buffer is modified I can properly request updates on partial regions of the buffer.
So, the actual questions. GetTextExtentExPointW clearly lets me get the width of each character, but how do I get the width of a non-breaking extent of text? If I have some long word, it should probably be put on a new line rather than split. How can I tell where to break the text? Will I need to actually look inside the Unicode buffer? That seems very dangerous. Also, how do I figure out how far apart the baselines should be while rendering?
This is already looking like it's going to be extremely complicated. Are there alternative strategies for doing what I'm trying to do? I really don't want to re-render a HUGE buffer each time I change a tiny chunk of it. I want something between handling individual glyphs and just being given a box to spit text into; a naive sketch of what I mean is below.
Finally, I'm not interested in anything like Knuth's line-breaking algorithm. No hyphenation. Ideally I'd like to render justified text, but that's on me once the word positions are given. A ragged right margin is fine by me.
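For concreteness, here is the kind of naive, measure-driven greedy breaker I have in mind, assuming a measure() callback (e.g. backed by GetTextExtentExPointW) and breaking only at spaces; it ignores shaping, bidi, and real Unicode break rules, which is exactly what the answers below warn about:

    # A minimal greedy word-wrap sketch. measure() is assumed to return the
    # rendered width of a string; breaking only at ASCII spaces is a gross
    # simplification of real Unicode line breaking (UAX #14).
    def greedy_wrap(text, max_width, measure):
        lines, current = [], ""
        for word in text.split(" "):
            candidate = word if not current else current + " " + word
            if measure(candidate) <= max_width or not current:
                current = candidate      # still fits (or word stands alone)
            else:
                lines.append(current)    # flush the full line
                current = word
        if current:
            lines.append(current)
        return lines

    # Toy usage with a fixed-width "font" where each character is 1 unit wide.
    print(greedy_wrap("the quick brown fox jumps over the lazy dog", 12, len))
    # -> ['the quick', 'brown fox', 'jumps over', 'the lazy dog']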
What you're trying to do is called shaping in Unicode jargon. Don't bother writing your own shaping engine; it's a full-time job that requires continuous updates to keep up with changes in the Unicode and OpenType standards. If you want to spend any time on the rest of your app, you'll need to delegate shaping to a third-party engine (harfbuzz-ng, Uniscribe, ICU, etc.).
As others wrote:
– Unicode font rendering is hellishly complex, much more than you expect
– the Win32 API is not cross-platform at all
The three common strategies for rendering unicode text are:
1. write one backend per system (plugging into the system's native text stack), or
2. select one set of cross-platform libs (for example fribidi + harfbuzz-ng + freetype + fontconfig, or a framework like Qt) and recompile them for each target system, or
3. take compliance shortcuts
The only strategy I strongly advise against is the last one. You cannot control unicode.org normalization (e.g. the addition of capital ẞ to German), you do not understand worldwide script usage (both African languages and Vietnamese are written in Latin variants, yet they exercise unexpected Unicode properties), and you will underestimate the ingenuity of font creators (oh, Indic users requested this OpenType property, but it turns out to be really handy for this English use case…).
The first two strategies have their own drawbacks. It's simpler to maintain a single text backend, but deploying a complete text stack on a foreign system is far from hassle-free. Almost every project that tries to go cross-platform has to get rid of MSVC first, since it targets Windows and its language dialect won't build elsewhere; cross-platform libs typically only compile easily under gcc or llvm.
I think harfbuzz-ng has reached parity with Uniscribe even on Windows, so that's the lib I'd target if I wanted cross-platform today (Chrome, Firefox and LibreOffice use it, at least on some platforms). That said, LibreOffice at least uses the multi-backend strategy; I have no idea whether that reflects the current state of the libraries or some past assessment. There are not many cross-platform apps with heavy text usage to look at, and most of them carry the burden of legacy choices.
Unicode rendering is surprisingly complicated. Line breaking is just the beginning; there are many other subtleties that I don't think you've appreciated (vertical text, right-to-left text, glyph combining, and many more). Microsoft has several teams dedicated to doing nothing but implementing text rendering, for example.
Sounds like you're interested in DirectWrite. Note that this is NOT platform-independent, obviously. It's also possible to do a less accurate job if you don't care about being language-independent; many of the more unusual features only occur in rarer languages. (Chinese being a notable exception.)
If you want something perfectly multi-platform, there will be problems. If you draw one sentence with GDI, one with GDI+, one with Direct2D, one on Linux, and one on a Mac, all at the same font size into the same buffer, you'll see differences: some round positions to integers, others use floats, for example.
There is not one problem here but at least two. Drawing text and computing text positions (line breaks and so on) are very different jobs. Some libraries do both; some do only the computing or only the rendering. A very simplified way to put it: drawing just renders a single character at the position you ask for, with transformations (zoom, rotation, anti-aliasing); computing does everything else, choosing each character's position within words and sentences, line breaks, paragraphs, etc.
If you want to be platform-independent, you could use FreeType to read the font files and get full information on each character. That library produces exactly the same result on every platform, and predictability is a virtue when dealing with fonts. The main problem with fonts is the amount of bad, missing, or outright wrong information in character descriptions. Nobody does text perfectly, because it's very hard (tipping my hat to Word, Acrobat, and every team that deals directly with fonts).
If your font computation is good, there is still a lot of work to reproduce everything you see in a good word processor (spacing between characters and words, line breaks, alignment, rotation, weight, anti-aliasing...), and only then comes the rendering, which should be the easier part. With the same computation you can drive a GDI, Direct2D, PDF, or printer rendering path.

What are the key differences between Apache Thrift, Google Protocol Buffers, MessagePack, ASN.1 and Apache Avro?

All of these provide binary serialization, RPC frameworks, and an IDL. I'm interested in the key differences between them and their characteristics (performance, ease of use, programming language support).
If you know of any other similar technologies, please mention them in an answer.
ASN.1 is an ISO/IEC standard. It has a very readable source language and a variety of back-ends, both binary and human-readable. Being an international standard (and an old one at that!) the source language is a bit kitchen-sinkish (in about the same way that the Atlantic Ocean is a bit wet), but it is extremely well-specified and has a decent amount of support. (You can probably find an ASN.1 library for any language you name if you dig hard enough, and if not there are good C libraries available that you can use via FFIs.) Being a standardized language, it is obsessively documented and has a few good tutorials available as well.
Thrift is not a standard. It is originally from Facebook and was later open-sourced; it is currently a top-level Apache project. It is not well documented -- especially at the tutorial level -- and to my (admittedly brief) glance doesn't appear to add anything that other, previous efforts don't already do (and in some cases do better). To be fair to it, it supports a rather impressive number of languages out of the box, including a few of the higher-profile non-mainstream ones. The IDL is also vaguely C-like.
Protocol Buffers is not a standard. It is a Google product that is being released to the wider community. It is a bit limited in terms of languages supported out of the box (only C++, Python and Java), but it does have a lot of third-party support for other languages (of highly variable quality). Google does pretty much all of its work using Protocol Buffers, so it is a battle-tested, battle-hardened protocol (albeit not as battle-hardened as ASN.1). It has much better documentation than Thrift, but, being a Google product, it is highly likely to be unstable (in the sense of ever-changing, not in the sense of unreliable). The IDL is also C-like.
All of the above systems use a schema defined in some kind of IDL to generate code for a target language that is then used in encoding and decoding. Avro does not. Avro's typing is dynamic, and its schema data is used directly at runtime both to encode and decode (which has some obvious costs in processing, but also some obvious benefits vis-à-vis dynamic languages, the lack of a need for tagging types, etc.). Its schema uses JSON, which makes supporting Avro in a new language a bit easier to manage if there's already a JSON library. Again, as with most wheel-reinventing protocol description systems, Avro is not standardized.
Personally, despite my love/hate relationship with it, I'd probably use ASN.1 for most RPC and message transmission purposes, although it doesn't really have an RPC stack (you'd have to make one, but IOCs make that simple enough).
We just did an internal study on serializers, here are some results (for my future reference too!)
Thrift = serialization + RPC stack
The biggest difference is that Thrift is not just a serialization protocol; it's a full-blown RPC stack, like a modern-day SOAP stack. After serialization, the objects can (but are not required to) be sent between machines over TCP/IP. In SOAP, you started with a WSDL document that fully described the available services (remote methods) and the expected arguments/objects, and those objects were sent via XML. In Thrift, the .thrift file fully describes the available methods and expected parameter objects, and the objects are serialized via one of the available serializers (Compact Protocol, an efficient binary protocol, being the most popular in production).
ASN.1 = granddaddy
ASN.1 was designed by telecom folks in the '80s and is awkward to use due to limited library support compared to the recent serializers, which emerged from CompSci folks. There are two common flavors: DER (binary) encoding and PEM (ASCII, essentially Base64-wrapped DER). Both are fast, but DER is the faster and more size-efficient of the two. In fact, ASN.1 DER can easily keep up with (and sometimes beat) serializers designed 30 years after it, a testament to its well-engineered design. It's very compact, smaller than Protocol Buffers and Thrift, beaten only by Avro. The issue is library support; right now Bouncy Castle seems to be the best option for C#/Java. ASN.1 is king in security and crypto systems and isn't going to go away, so don't worry about 'future-proofing'. Just get a good library...
MessagePack = middle of the pack
It's not bad, but it's neither the fastest, nor the smallest, nor the best supported. There's no production reason to choose it.
Common
Beyond that, they are fairly similar. Most are variants of the basic TLV (Type-Length-Value) principle, sketched below.
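Here is a toy TLV round-trip in Python, purely to illustrate the shared principle. The type codes and framing are invented; real wire formats (protobuf tags, DER identifiers, Thrift field headers) differ in the details:

    import struct

    # Toy Type-Length-Value round-trip. The type codes and framing are
    # invented; real formats differ in the details, but the skeleton is
    # the same: a type tag, a payload length, then the payload bytes.
    T_INT, T_STR = 1, 2

    def tlv_encode(fields):
        out = bytearray()
        for value in fields:
            if isinstance(value, int):
                payload = struct.pack("<q", value)  # fixed 8-byte integer
                out += struct.pack("<BH", T_INT, len(payload)) + payload
            else:
                payload = value.encode("utf-8")
                out += struct.pack("<BH", T_STR, len(payload)) + payload
        return bytes(out)

    def tlv_decode(data):
        fields, pos = [], 0
        while pos < len(data):
            t, length = struct.unpack_from("<BH", data, pos)  # type + length
            pos += 3
            payload = data[pos:pos + length]
            pos += length
            # The length field is what lets a parser skip an unknown type,
            # as the protobuf wire-type answer above points out.
            fields.append(struct.unpack("<q", payload)[0] if t == T_INT
                          else payload.decode("utf-8"))
        return fields

    print(tlv_decode(tlv_encode([42, "hello"])))  # -> [42, 'hello']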
Protocol Buffers (Google originated), Avro (Apache based, used in Hadoop), Thrift (Facebook originated, now an Apache project) and ASN.1 (telecom originated) all involve some level of code generation: you first express your data in a serializer-specific format, then the serializer "compiler" generates source code for your language via the code-gen phase. Your app source then uses these code-gen classes for IO. Note that certain implementations (e.g. Microsoft's Avro library or Marc Gravell's protobuf-net) let you directly decorate your app-level POCO/POJO objects, and the library then uses those decorated classes directly instead of any code-gen classes. We've seen this offer a performance boost, since it eliminates an object copy stage (from application-level POCO/POJO fields to code-gen fields).
Some results and a live project to play with
This project (https://github.com/sidshetye/SerializersCompare) compares important serializers in the C# world. The Java folks already have something similar.
1000 iterations per serializer, average times listed
Results sorted by size:
Name               Bytes  Time (ms)
------------------------------------
Avro (cheating)      133     0.0142
Avro                 133     0.0568
Avro MSFT            141     0.0051
Thrift (cheating)    148     0.0069
Thrift               148     0.1470
ProtoBuf             155     0.0077
MessagePack          230     0.0296
ServiceStackJSV      258     0.0159
Json.NET BSON        286     0.0381
ServiceStackJson     290     0.0164
Json.NET             290     0.0333
XmlSerializer        571     0.1025
Binary Formatter     748     0.0344
Serialized via ASN.1 DER encoding to 148 bytes in 0.0674ms (hacked experiment!)
Adding to the performance perspective, Uber recently evaluated several of these libraries on their engineering blog:
https://eng.uber.com/trip-data-squeeze/
The winner for them? MessagePack + zlib for compression
Our goal was to find the combination of encoding protocol and compression algorithm with the most compact result at the highest speed. We tested encoding protocol and compression algorithm combinations on 2,219 pseudorandom anonymized trips from Uber New York City (put in a text file as JSON).
The lesson here is that your requirements drive which library is right for you. Uber couldn't use an IDL-based protocol due to the schemaless nature of their message passing, which eliminated a bunch of options. And for them it's not only raw encoding/decoding time that matters, but also the size of the data at rest.
(The linked post includes charts of the size results and speed results.)
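For what it's worth, that winning combination is only a few lines in Python, assuming the third-party msgpack package:

    import zlib
    import msgpack  # third-party: pip install msgpack

    trip = {"trip_id": 12345, "city": "New York", "distance_km": 4.2}

    packed = msgpack.packb(trip)        # compact binary encoding
    compressed = zlib.compress(packed)  # zlib compression on top

    restored = msgpack.unpackb(zlib.decompress(compressed))
    print(len(packed), len(compressed), restored)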
The one big thing about ASN.1 is that it is designed for specification, not implementation. Therefore it is very good at hiding/ignoring implementation detail in any "real" programming language.
It's the job of the ASN.1 compiler to apply encoding rules to the asn1 file and generate executable code from both of them. The encoding rules might be given in Encoding Control Notation (ECN) or might be one of the standardized ones such as BER/DER, PER, or XER/EXER.
That is, ASN.1 defines the types and structures, the encoding rules define the on-the-wire encoding, and last but not least the compiler translates it all into your programming language.
The free compilers support C, C++, C#, Java, and Erlang to my knowledge. The (much too expensive and patent/license-ridden) commercial compilers are very versatile, usually absolutely up to date, and sometimes support even more languages; see their sites (OSS Nokalva, Marben etc.).
It is surprisingly easy to specify an interface between parties of totally different programming cultures (e.g. "embedded" people and "server farmers") using these techniques: an asn.1 file, an encoding rule such as BER, and, say, a UML interaction diagram. No worries about how it is implemented; let everyone use "their thing"! For me it has worked very well.
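For a small taste without a full ASN.1 compiler, here is a sketch in Python using the third-party pyasn1 package and a hypothetical two-field SEQUENCE:

    from pyasn1.type import univ, namedtype
    from pyasn1.codec.der import encoder, decoder

    # A hypothetical ASN.1 type: Point ::= SEQUENCE { x INTEGER, y INTEGER }
    class Point(univ.Sequence):
        componentType = namedtype.NamedTypes(
            namedtype.NamedType('x', univ.Integer()),
            namedtype.NamedType('y', univ.Integer()),
        )

    point = Point()
    point['x'] = 10
    point['y'] = 20

    der = encoder.encode(point)  # DER, the binary encoding rules
    print(der.hex())             # 8 bytes: 30 06 02 01 0a 02 01 14

    decoded, _ = decoder.decode(der, asn1Spec=Point())
    print(int(decoded['x']), int(decoded['y']))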
By the way: at OSS Nokalva's site you can find at least two free-to-download books about ASN.1 (one by Larmouth, the other by Dubuisson).
IMHO, most of the other products try only to be yet another RPC stub generator, pumping a lot of air into the serialization issue. Well, if one needs that, one might be fine. But to me they look like reinventions of Sun RPC (from the late '80s); and hey, that worked fine too.
Microsoft's Bond (https://github.com/Microsoft/bond) is very impressive in performance, functionality, and documentation. However, it does not support many target platforms as of now (13 Feb 2015); I can only assume that is because it is very new. Currently it supports Python, C#, and C++. It's being used by MS everywhere. I tried it; to me, as a C# developer, using Bond is better than using protobuf. I have used Thrift as well, and the only problem I faced was with the documentation: I had to try many things to understand how things are done.
A few resources on Bond: https://news.ycombinator.com/item?id=8866694 , https://news.ycombinator.com/item?id=8866848 , https://microsoft.github.io/bond/why_bond.html
For performance, one data point is the jvm-serializers benchmark -- it's quite specific (small messages), but it might help if you are on the Java platform. I think performance in general will often not be the most important difference. Also: NEVER take authors' words as gospel; many advertised claims are bogus (the msgpack site, for example, has some dubious claims; it may be fast, but the information is very sketchy and the use case not very realistic).
One big difference is whether a schema must be used (PB and Thrift require one; with Avro it may be optional; ASN.1 I think also requires one; MsgPack does not, necessarily).
Also, in my opinion it is good to be able to use a layered, modular design; that is, the RPC layer should not dictate the data format or serialization. Unfortunately, most candidates do tightly bundle these.
Finally, when choosing a data format, performance nowadays does not preclude the use of textual formats. There are blazing-fast JSON parsers (and pretty fast streaming XML parsers), and when you consider interoperability with scripting languages and ease of use, binary formats and protocols may not be the best choice.

Old ASCII Protocol Avatar Question

For anyone who remembers the protocol Avatar (I'm pretty sure that was its name), I'm trying to find information on it. All I've found so far is that it's an ANSI-style compression protocol, which works by compressing common ANSI escape sequences.
But back in the day (the early '90s), I swear I remember it being used to compress ASCII text for modems like the early 2400-baud bis modems. (I don't recall all the protocol versions, names, etc. from back then, sorry.)
Anyway, this made reading messages and using remote shells a lot nicer, due to the display speed. It didn't do anything for file transfers or whatnot; it was just a way of compressing ASCII text down as small as possible.
I'm trying to do research on this topic, and figured this is a good place to start looking. I think the protocol used every trick in the book to compress ASCII, like replacing common words with a single byte, or maybe even a bit.
I don't recall the ratio you could get out of it, but as I recall it was fairly decent.
Anyone have any info on this? Compressing ASCII text to fewer than 7 bits per character, protocol information on Avatar, or maybe even an answer to whether it even DID the ASCII compression I'm describing?
Wikipedia has something about the AVATAR protocol:
The AVATAR protocol (Advanced Video Attribute Terminal Assembler and Recreator) is a system of escape sequences occasionally used on Bulletin Board Systems (BBSes). It has largely the same functionality as the more popular ANSI escape codes, but has the advantage that the escape sequences are much shorter. AVATAR can thus render colored text and artwork much faster over slow connections.
The protocol is defined by FidoNet technical standard proposal FSC-0025.
AVATAR was later extended in late 1989 to AVT/0 (sometimes referred to as AVT/0+), which included facilities to scroll areas of the screen (useful for split-screen chat, or full-screen mail-writing programs), as well as more advanced pattern compression.
AVATAR was originally implemented in the Opus BBS, but was later popularised by RemoteAccess. RemoteAccess came with a utility, AVTCONV, that allowed for easy translation of ANSI documents into AVATAR, helping its adoption.
Also:
FSC-0025 - AVATAR proposal at FidoNet Technical Standards Committee.
FSC-0037 - AVT/0 extensions
If I remember correctly, the Avatar compression scheme was some simple kind of RLE (run-length encoding) that would compress repeated runs of the same character into something smaller. Unfortunately, I don't remember the details either.
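For illustration only, the general RLE idea looks like the sketch below; Avatar's actual AVT/0 pattern compression differed in its framing:

    def rle_encode(data: bytes) -> bytes:
        """Naive run-length encoding: emit (count, byte) pairs."""
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out += bytes([run, data[i]])
            i += run
        return bytes(out)

    def rle_decode(data: bytes) -> bytes:
        out = bytearray()
        for count, value in zip(data[::2], data[1::2]):
            out += bytes([value]) * count
        return bytes(out)

    line = b" " * 70 + b"Hello" + b" " * 5   # a mostly-blank 80-column BBS row
    packed = rle_encode(line)
    assert rle_decode(packed) == line
    print(len(line), "->", len(packed))      # 80 -> 12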
Did you check out AVATAR on Wikipedia?

Identifying Algorithms in Binaries

Does anyone know a technique to identify algorithms in already-compiled files, e.g. by testing the disassembly for certain patterns?
The scant information I have is that there is some (unexported) code in a library that decompresses the contents of a byte[], but I have no clue how it works.
I have some files which I believe to be compressed in that unknown way, and it looks as if the files come without any compression header or trailer. I assume there's no encryption, but as long as I don't know how to decompress them, they're worth nothing to me.
The library I have is an ARM9 binary for low-capacity targets.
EDIT:
It's a lossless compression, storing binary data or plain text.
You could go a couple of directions: static analysis with something like IDA Pro, or loading it into GDB or an emulator and following the code that way. They may be XOR'ing the data to hide the algorithm, since there are already many good lossless compression techniques.
Decompression algorithms spend most of their time in tight loops, so you might first start looking for loops (decrement register, jump backwards if not zero); a minimal sketch of that search follows.
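As a rough starting point, a script using the third-party capstone disassembler can flag backward branches (the classic loop shape); the file name, base address, and Thumb mode here are assumptions you would adjust for the actual library:

    from capstone import Cs, CS_ARCH_ARM, CS_MODE_THUMB

    def backward_branches(code: bytes, base: int):
        """Yield (address, target) for branches that jump backwards,
        a cheap heuristic for spotting tight loops."""
        md = Cs(CS_ARCH_ARM, CS_MODE_THUMB)  # assume Thumb code on ARM9
        for insn in md.disasm(code, base):
            # Branch mnemonics start with 'b' and print immediate targets
            # as '#0x...'; register branches are skipped by the '#' check.
            if insn.mnemonic.startswith("b") and insn.op_str.startswith("#"):
                target = int(insn.op_str[1:], 0)
                if target <= insn.address:
                    yield insn.address, target

    with open("libunknown.bin", "rb") as f:  # hypothetical dumped library
        for addr, target in backward_branches(f.read(), base=0x8000):
            print(f"loop candidate: {addr:#x} -> {target:#x}")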
Given that it's a small target, you have a good chance of decoding it by hand. Though it looks hard now, once you dive into it you'll find that you can identify various programming structures yourself.
You might also consider decompiling it to a higher-level language, which would be easier to read than assembly, though still hard if you don't know how it was compiled.
http://www.google.com/search?q=arm%20decompiler
-Adam
The reliable way to do this is to disassemble the library and read the resulting assembly code for the decompression routine (and perhaps step through it in a debugger) to see exactly what it is doing.
However, you might be able to look at the magic number of the compressed file and figure out what kind of compression was used. If it's zlib-wrapped DEFLATE, for example, the first two bytes will typically be hexadecimal 78 9c; if bzip2, 42 5a; if gzip, 1f 8b.
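Those checks are trivial to script; here is a minimal sniffer using just the three signatures above:

    # Map the magic numbers mentioned above to their formats.
    MAGIC = {
        b"\x78\x9c": "zlib-wrapped DEFLATE",
        b"\x42\x5a": "bzip2",
        b"\x1f\x8b": "gzip",
    }

    def sniff(data: bytes) -> str:
        return MAGIC.get(data[:2], "unknown")

    print(sniff(b"\x1f\x8b\x08\x00"))  # -> gzip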
In my experience, most of the time such files are compressed using plain old DEFLATE. You can try using zlib to open them, starting from different offsets to compensate for custom headers. The problem is that zlib itself adds its own header. In Python (and I guess other implementations have this feature as well), you can pass -15 to zlib.decompress as the history buffer size (i.e. zlib.decompress(data, -15)), which causes it to decompress raw DEFLATE data without zlib's headers.
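Putting that together, here is a brute-force sketch that tries raw DEFLATE from successive offsets (the offset limit and file name are arbitrary placeholders):

    import zlib

    def try_raw_inflate(data: bytes, max_offset: int = 64):
        """Try raw-DEFLATE decompression from successive start offsets
        to skip over a possible custom header."""
        for offset in range(max_offset):
            try:
                # wbits=-15 selects raw deflate: no zlib header or checksum.
                return offset, zlib.decompress(data[offset:], -15)
            except zlib.error:
                continue
        return None

    with open("mystery.bin", "rb") as f:  # hypothetical captured file
        result = try_raw_inflate(f.read())
        if result:
            offset, payload = result
            print(f"raw deflate stream found at offset {offset}, "
                  f"{len(payload)} bytes decompressed")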
Reverse engineering done by viewing the assembly may have copyright issues. In particular, doing this to write a program for decompressing is almost as bad, from a copyright standpoint, as just using the assembly yourself. But the latter is much easier. So, if your motivation is just to be able to write your own decompression utility, you might be better off just porting the assembly you have.
