Does anyone know where there is a good example of how to use the asn1 Marshal and Unmarshal funcs in Go?
I'm familiar with the concept of how DER encoding with ASN.1 works, but do not have experience dealing with it directly in code (usually I'm using another library that wraps it - openldap or whatever).
Yes, I've looked at the documentation (http://golang.org/pkg/encoding/asn1/), which seems to describe a tagging system much like what is available for JSON and XML in Go; however I have yet to find a good practical example of this anywhere for the encoding/asn1 package. (Hm, okay I see the Certificate example in asn1_test.go - anyone know of anything else?)
(Overall, I'm trying to implement a very small subset of LDAP (the server side) in Go.)
UPDATE: My question is flawed by the fact that LDAP uses BER, not DER. So encoding/asn1 isn't going to help. In any case, I ended up making this: https://github.com/bradleypeabody/godap (which uses this for BER+ASN1: https://github.com/go-asn1-ber/asn1-ber )
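For anyone who ends up in the same BER boat, building and decoding a packet with go-asn1-ber looks roughly like the sketch below. This is from memory, so check the package docs in case the API has shifted:

```go
package main

import (
	"fmt"

	ber "github.com/go-asn1-ber/asn1-ber"
)

func main() {
	// Build a SEQUENCE { INTEGER 1, OCTET STRING "hello" } by hand,
	// the same general shape an LDAP message envelope starts with.
	seq := ber.Encode(ber.ClassUniversal, ber.TypeConstructed, ber.TagSequence, nil, "sequence")
	seq.AppendChild(ber.NewInteger(ber.ClassUniversal, ber.TypePrimitive, ber.TagInteger, 1, "messageID"))
	seq.AppendChild(ber.NewString(ber.ClassUniversal, ber.TypePrimitive, ber.TagOctetString, "hello", "value"))

	raw := seq.Bytes()
	fmt.Printf("BER: % x\n", raw)

	// Decode it back and walk the children.
	decoded := ber.DecodePacket(raw)
	for _, child := range decoded.Children {
		fmt.Printf("tag=%v value=%v\n", child.Tag, child.Value)
	}
}
```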
https://web.archive.org/web/20160816005220/https://jan.newmarch.name/go/serialisation/chapter-serialisation.html
and
https://ipfs.io/ipfs/QmfYeDhGH9bZzihBUDEQbCbTc5k5FZKURMUoUvfmc27BwL/dataserialisation/asn1.html
have quite a few examples with asn1.Marshal / asn1.Unmarshal
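If you just want a self-contained starting point, a DER round trip with encoding/asn1 looks like this (Message is a made-up struct; the available tag options are listed in the package docs):

```go
package main

import (
	"encoding/asn1"
	"fmt"
)

// Message is a hypothetical type: structs marshal as an ASN.1 SEQUENCE,
// and struct tags tweak the encoding, much like encoding/json tags.
type Message struct {
	ID   int
	Name string `asn1:"utf8"`
}

func main() {
	der, err := asn1.Marshal(Message{ID: 7, Name: "hello"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("DER: % x\n", der)

	var decoded Message
	if _, err := asn1.Unmarshal(der, &decoded); err != nil {
		panic(err)
	}
	fmt.Printf("decoded: %+v\n", decoded)
}
```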
The ComplexExpr and ComplexFunc classes in the links below seem very convenient for working with complex numbers. Is there a plan to include them into the official Halide API? Or is there a reason why they are not included?
https://github.com/halide/Halide/blob/master/apps/fft/complex.h
https://github.com/halide/Halide/blob/be1269b15f4ba8b83df5fa0ef1ae507017fe1a69/apps/fft/funct.h
Speaking as a Halide developer...
Or is there a reason why they are not included?
We haven't included these historically since we didn't want to bless a particular representation for complex numbers. There are a few valid ways of dealing with them and the headers in question are just one.
Is there a plan to include them into the official Halide API?
We've started talking about packaging some of this type of code into a set of header-only "Halide tools" libraries, so named to avoid the normative implication of calling it something like "stdlib". So as of right now, there is no concrete plan, but the odds are nonzero.
In the meantime, the code is MIT licensed, so you should feel free to use those files, regardless.
Disclaimer: yes I know that this is not "supposed to be done" and "use interface composition and delegation" and "the authors of the language know better". However, I am confronted with a choice of either copy-pasting from the standard library and creating my own packages, or doing what I am asking. So please do not reply with "What you want to do is wrong, you are a bad dev and you should feel bad."
So, in Go we have the http stdlib package. This package has a number of functions for dealing with HTTP Range headers and responses (parsers, a struct for "offset+size" and so forth). For various reasons I want to use something that is very similar to ServeContent but works a bit differently (long story short: the amount of plumbing needed to do the ReaderAt gymnastics is suboptimal for what I want to accomplish), so I want to parse the HTTP Range header myself, using the utility functions/structs from the http stdlib package, and then deal with the ranges manually. Basically, I want a changed version of ServeContent :-)
Is there a way for me to "reopen" the http stdlib package to use its unexported identifiers? ABI is not a concern for me, as the source is mine, the program gets compiled from scratch every time, etc., and it does not need binary compatibility with older/other Go versions. I.e. I am able to ensure that the build is done on a specific Go version, and there are tests to catch the case where an unexported identifier disappears. So...
If there is a package called foo in the Go standard library, but it only exposes a MagicMegamethod that does the thing I do not need, and uses usefulFunc and usefulStruct that I want to get access to, is there a way for me to get access to those identifiers? Either by reopening the package, or using some other way... that does not involve copy-pasting dozens of lines from stdlib without tests etc.
There exist (rather gruesome) ways of accessing unexported symbols, but they require nontrivial amounts of tricky code, so there's unlikely to be a net win (a sketch of one such trick is at the end of this answer).
Since you've ruled out the "don't do this" direction, it seems that the answer is either NO, or use the methods described in the post I linked to (and this repo).
FWIW I'd personally just copy the code I need from the standard library and tweak it to my needs. This would likely take less time than the time it took you to write this SO question :-)
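That said, if you do decide to go the gruesome route, here is roughly what the //go:linkname trick looks like. Heavy caveats: parseRange and httpRange below mirror unexported declarations in net/http/fs.go and are an assumption you must verify against the exact Go version you build with; the package needs a (possibly empty) .s file so the compiler accepts the bodyless declaration; and recent Go versions restrict linkname into the stdlib, so you may also need to build with -ldflags=-checklinkname=0.

```go
package main

import (
	"fmt"

	_ "net/http" // the target package must be linked into the binary
	_ "unsafe"   // required for //go:linkname
)

// Local mirror of net/http's unexported httpRange struct; the field
// layout must match the stdlib declaration exactly.
type httpRange struct {
	start, length int64
}

// Pull the unexported Range-header parser out of net/http. Verify the
// name and signature against fs.go in your Go version's source.
//
//go:linkname parseRange net/http.parseRange
func parseRange(s string, size int64) ([]httpRange, error)

func main() {
	ranges, err := parseRange("bytes=0-99,200-", 1000)
	if err != nil {
		panic(err)
	}
	for _, r := range ranges {
		fmt.Printf("offset=%d length=%d\n", r.start, r.length)
	}
}
```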
I am new to Protobufs; I haven't had much exposure to them. One of the API endpoints we require data from uses Protobuf-encoded data. This generally wouldn't be an issue if I were using a 'supported' language such as JavaScript, Java, Python or even R to decode the data...
Unfortunately, I am trying to automate the process using Alteryx. Rather than this being an Alteryx-specific question, I have a few questions about Protobufs themselves so I can understand the situation better. I've read through the implementation of Protobufs in Java and Python, and have a basic understanding of how to use them.
To summarize (please correct me if I am wrong): a Protobuf is a method of serializing structured data, where a .proto schema is used to encode / decode the data to and from raw binary. My confusion lies with the compiler. Google's documentation and examples for Python / Java show how a Protobuf compiler (library) is required in order to run the encoding and decoding process. The Google website says that Protobufs are 'language neutral and platform neutral', but I can't see how that is possible if you need the compiler (and the .proto file!) to do the decoding. For example, how would anyone using a language for which Google has not created a compiler possibly decode Protobuf-encoded data? Am I missing something?
I figure I'm missing something, since it seems weird that a public API would force this constraint.
"language/platform neutral" here simply means that you can reliably get the same data back from any language/framework/platform. The serialization format is defined independently and does not rely on the nuances of any particular framework.
This might seem a low bar, but you'd be surprised how many serialization formats fail to clear it.
Because the format is specified, anyone can create a tool for some other platform. It is a little fiddly if you're not used to dealing in bits, but: totally doable. The protobuf landscape is not dependent on Google - here's a list of some of the known non-Google tools: https://github.com/protocolbuffers/protobuf/blob/master/docs/third_party.md
Also, note that technically you don't even need a .proto; you just need some mechanism for specifying which fields map to which field numbers (since protobuf doesn't include the names). Quite a few in that list can work either from a .proto, or from the field/number map being specified in some other way. The advantage of .proto is simply that it is easy to convey as the schema - and again: isn't tied to any particular language. You can write plugins for "protoc" to add your own tooling, so you don't need to write your own parser from scratch. Or you can write your own parser from scratch if you prefer.
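To make "dealing in bits" concrete, here is a short sketch of reading the wire format by hand in Go; the bytes are the field-1-equals-150 example from the official encoding docs, plus a made-up string field:

```go
package main

import "fmt"

// readVarint decodes a base-128 varint from the front of b, returning
// the value and the number of bytes consumed (0 if b is truncated).
func readVarint(b []byte) (uint64, int) {
	var v uint64
	for i, c := range b {
		v |= uint64(c&0x7f) << (7 * uint(i))
		if c&0x80 == 0 {
			return v, i + 1
		}
	}
	return 0, 0
}

func main() {
	// Hand-built wire bytes for a hypothetical message:
	//   0x08 = key (field 1, wire type 0 = varint), value 150 -> 0x96 0x01
	//   0x12 = key (field 2, wire type 2 = length-delimited), length 2, "hi"
	data := []byte{0x08, 0x96, 0x01, 0x12, 0x02, 'h', 'i'}

	for len(data) > 0 {
		key, n := readVarint(data)
		if n == 0 {
			fmt.Println("truncated input")
			return
		}
		data = data[n:]
		field, wire := key>>3, key&0x7

		switch wire {
		case 0: // varint
			v, n := readVarint(data)
			data = data[n:]
			fmt.Printf("field %d (varint) = %d\n", field, v)
		case 2: // length-delimited: strings, bytes, nested messages
			length, n := readVarint(data)
			data = data[n:]
			fmt.Printf("field %d (bytes) = %q\n", field, data[:length])
			data = data[length:]
		default:
			fmt.Printf("wire type %d not handled in this sketch\n", wire)
			return
		}
	}
}
```

Notice the field names never appear in the bytes; that is exactly why you need the .proto (or an equivalent field/number map) to make sense of the data.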
You can't really speak of an unsupported platform in this case: it is more about languages for which you can't find a protobuf implementation.
My 2 cents: if you can't find a protobuf implementation for your language, find another language you're familiar with (and that is popular in the protobuf community), handle the protobuf serialization/deserialization with it, and then call it via a REST API, an executable ... whatever.
I googled, but I can't find a satisfactory answer. This SO question is related but kinda old, and it's the exact opposite of what I am looking for, which is a way to do screen-scraping using XPath, not CSS selectors.
I've used enlive for some basic screen-scraping but sometimes one needs the power of XPath selectors. So here it is:
Is there any equivalent to Nokogiri or lxml for Clojure (Java)? What is the state of the "pure Java Nokogiri"? Any way to use that library from Clojure? Any better alternatives than this hack?
There are a couple of possibilities here.
Several of these require semi-well-formed XML to work. If you don't have that, I would pair clj-tagsoup with hiccup to produce the XML (parse with clj-tagsoup, which produces a form that hiccup can write back out as XML) and work with that.
First, just use the native JDK capabilities. Assuming the document is well formed enough, try clj-xpath, which provides a wrapper around the native JDK parsing.
If that doesn't suffice, consider taking a more Clojure data structure based route. A simpler path could just use the output of TagSoup and a combination of maps, filters, and nths.
If you need something more advanced, consider using zippers to provide structure around the data, making it easier to manipulate. Use clojure.xml/parse and clojure.zip/xml-zip to produce the zipper, and go from there. An example can be found at http://techbehindtech.com/2010/06/25/parsing-xml-in-clojure/.
Using the native structures is my preferred route for anything complicated, as you can bring the full power of the language to bear.
If you provide a sample of why you need XPath, I can provide some sample code.
Does protobuf-net have any APIs to dump a protobuf into human readable form? I was hoping for something like TextFormat.
At the moment, no. I'm in two minds as to whether it is worthwhile adding; in my mind, this defeats most of the benefits of protocol buffers.
However, since Jon's version is a port of the Java version, you should find that it is feature-compatible, so it should exist there.
There is one for Java: the build.toString() method returns a string representation, but you'll lose the serialization.