I am translating some Lisp code to Tcl and wonder if there is anything like Lisp's defstruct in Tcl for creating data structures?
If nothing is built into Tcl, which Tcl extension packages would you recommend that can be used in a commercial application?
Thanks.
-William
Consider using dictionaries, which work more or less like a hashmap. You can set the key/value pairs much like you would any other structure.
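For example, a defstruct-like "point" record as a dict (a minimal sketch; dicts require Tcl 8.5 or later):

    # a "point" record with named fields
    set p [dict create x 1 y 2]

    dict set p x 10              ;# update a field
    puts [dict get $p x]         ;# read a field -> 10

    dict for {field value} $p {  ;# iterate over all fields
        puts "$field = $value"
    }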
You could take a look at the pages on Rosetta Code under Data Structures. They all have Tcl examples.
TclTutor covers dictionaries in lessons 44 through 47.
I googled, but I can't find a satisfactory answer. This SO question is related, but it's kinda old and also the exact opposite of what I am looking for: a way to do screen-scraping using XPath, not CSS selectors.
I've used enlive for some basic screen-scraping but sometimes one needs the power of XPath selectors. So here it is:
Is there any equivalent to Nokogiri or lxml for Clojure (Java)? What is the state of the "pure Java Nokogiri"? Is there any way to use that library from Clojure? Any better alternatives than this hack?
There are a couple of possibilities here.
Several of these require semi-well-formed XML to work. If you don't have that, I would pair clj-tagsoup with hiccup to produce the XML (parse with clj-tagsoup, which produces a form that hiccup can write back out as XML) and work with that.
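A sketch of that round trip (the namespaces are what I believe clj-tagsoup and hiccup use; the URL is a placeholder):

    (require '[pl.danieljanus.tagsoup :as tagsoup]
             '[hiccup.core :as hiccup])

    ;; clj-tagsoup parses messy HTML/XML into hiccup-style vectors
    (def tree (tagsoup/parse "http://example.com/messy.html"))

    ;; ...massage the tree as plain Clojure data, then render it back out
    (def xml-str (hiccup/html tree))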
First, just use the native JDK capabilities. Assuming the document is well formed enough, try using clj-xpath which provides a wrapper around the native JDK parsing.
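For instance (a sketch; the XML is made up for illustration):

    (require '[clj-xpath.core :refer [$x:text $x:text*]])

    (def doc "<entries><entry id=\"1\">foo</entry><entry id=\"2\">bar</entry></entries>")

    ($x:text "/entries/entry[@id='2']" doc)  ;; => "bar"
    ($x:text* "//entry" doc)                 ;; => ("foo" "bar")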
If that doesn't suffice, consider taking a route based more directly on Clojure data structures. A simpler path could just use the output of TagSoup with a combination of map, filter, and nth.
If you need something more advanced, consider using zippers to provide structure around the data, making it easier to manipulate. Use clojure.xml/parse and clojure.zip/xml-zip to produce the zipper, and go from there. An example can be found at http://techbehindtech.com/2010/06/25/parsing-xml-in-clojure/.
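A minimal sketch of the zipper route ("doc.xml" is a placeholder path):

    (require '[clojure.xml :as xml]
             '[clojure.zip :as zip])

    (def z (zip/xml-zip (xml/parse (java.io.File. "doc.xml"))))

    ;; collect the tag of every element in document order
    (->> (iterate zip/next z)
         (take-while (complement zip/end?))
         (map zip/node)
         (keep :tag))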
Using the native structures is my preferred route for anything complicated, as you can bring the full power of the language to bear.
If you provide a sample of why you need XPath, I can provide some sample code.
We have a logging system, and an Erlang/OTP server writes its logs as Erlang terms.
We also have a Rails interface for internal users, and I want to provide log analysis for them.
I have tried to find an Erlang term parser (not a full Erlang parser) written in Ruby, but no luck yet.
Erlang terms are simple: atoms, tuples, lists (including strings), binaries, and pids/refs.
an atom is like a symbol
a tuple is like a hash
a list is like an array
binaries, pids, and refs are like strings
Does anyone know of an existing Erlang-to-Ruby term parser?
Maybe this isn't quite what you're looking for, but you could check out BERT-RPC. It has serializers, clients, and servers for various languages, including Ruby (they are listed at the bottom of the page).
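For instance, with the bert gem (a sketch; note that BERT works on Erlang's binary external term format, so the Erlang side would have to log with term_to_binary/1 rather than printing terms):

    require 'bert'  # gem install bert

    packed = BERT.encode(t[:log_entry, "disk full", 42]) # Ruby -> BERT binary
    term   = BERT.decode(packed)                         # BERT binary -> Ruby
    # => t[:log_entry, "disk full", 42]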
BERT is new, it seems like overkill to me, and I didn't see any existing code for this purpose, so I made my own:
https://github.com/bighostkim/erl_to_ruby
This module from the people at Basho seems to be exactly what you need:
https://github.com/basho/erlang_template_helper
So, just as a fun project, I decided I'd write my own XML parser. No, not to parse a specific document, and no, not using an XML parser library. I mean writing code to parse out any XML document into a usable data structure. Just because I like the challenge. :-)
With that said, so far it's proved to be... interesting. It's not as easy to parse (especially when you start taking into account special characters, CDATA, empty tags, comments, etc.) as it initially looked.
Are there any well documented XML parsing algorithms or explanations anywhere that anyone knows of? It seems like there are well-documented Queue and Stack and BTree and etc. etc. etc. implementations everywhere, but I'm not sure I've ever seen a simple, well-documented XML parser algorithm...
I repeat: I am not looking for a pre-built parser library! I am looking for information on how to create my own! Do not tell me "use expat" or "use SAX" or whatever. That's not what I'm asking for.
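For concreteness, this is the kind of stack-based core loop I have in mind (a toy sketch that discards attributes and ignores CDATA, comments, and entities):

    import re

    # toy tokenizer: split the input into tag tokens and text tokens
    TOKEN = re.compile(r"<[^>]+>|[^<]+")

    def tokenize(xml):
        for m in TOKEN.finditer(xml):
            tok = m.group()
            yield ("tag", tok) if tok.startswith("<") else ("text", tok)

    def parse(xml):
        """Build a nested (tag-name, children) tree with an explicit stack."""
        root = ("#document", [])
        stack = [root]
        for kind, tok in tokenize(xml):
            if kind == "text":
                if tok.strip():
                    stack[-1][1].append(tok)
            elif tok.startswith("</"):      # closing tag: pop one level
                stack.pop()
            else:                           # opening or self-closing tag
                node = (tok.strip("<>/").split()[0], [])
                stack[-1][1].append(node)
                if not tok.endswith("/>"):  # self-closing tags don't nest
                    stack.append(node)
        return root

    print(parse("<root a='1'>hi<child/></root>"))
    # ('#document', [('root', ['hi', ('child', [])])])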
ANTLR offers a tutorial on parsing XML. It breaks the process down into phases: lexing, parsing, tree parsing, and so on. It looks pretty interesting.
I don't know if it would be "cheating" in your book, but you could try parsing your XML with a ready-built all-purpose language parser like ANTLR. The result would be a list of tokens (if you just use the lexer) or a parse tree (if you include the parser) and you could then re-build the parse tree almost 1:1 into an XML structure.
Maybe. I haven't thought about the ways in which XML might be different from "normal" ANTLR fodder like programming languages, and whether you would be able to define a suitable grammar.
VTD-XML is probably the simplest parsing technique possible...
http://expat.sourceforge.net/
Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags). An introductory article on using Expat is available on xml.com.
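To make that handler-registration model concrete, here is a minimal sketch using Python's built-in Expat binding (xml.parsers.expat), which is worth studying even if you then roll your own:

    import xml.parsers.expat

    def start_element(name, attrs):
        print("start:", name, attrs)

    def end_element(name):
        print("end:  ", name)

    def char_data(data):
        print("text: ", repr(data))

    parser = xml.parsers.expat.ParserCreate()
    parser.StartElementHandler = start_element
    parser.EndElementHandler = end_element
    parser.CharacterDataHandler = char_data

    # the second argument marks the end of the input stream
    parser.Parse("<root a='1'>hi<child/></root>", True)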
As I understand it, Scheme procedures like map, apply, and append are written in Scheme itself. Is there an easy way to see the implementation of these procedures right inside the REPL?
I do not believe there is a standard way to dump the source code of a procedure, but a lot of the list functions are defined here, and you can look through the source code for your implementation to see the rest. Note that apply is probably a primitive, though.
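That said, some implementations can show you source for interpreted code. In MIT Scheme, for instance, pp will print the source of a compound procedure (primitives just print as compiled-procedure objects):

    (define (my-map f lst)
      (if (null? lst)
          '()
          (cons (f (car lst)) (my-map f (cdr lst)))))

    (pp my-map)
    ;; (define (my-map f lst)
    ;;   (if (null? lst)
    ;;       '()
    ;;       (cons (f (car lst)) (my-map f (cdr lst)))))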
I have one large project with components in multiple languages that each depend on some of the same enum values. What solutions have you come up with to unify enums across multiple arbitrary languages? I can think of a few, but I'm looking for the best solution.
(In my implementation, I'm using PHP, Java, JavaScript, and SQL.)
You can put all of the enums in a text file, then use a code generator to write out the appropriate syntax for each language from that common file so that each component has the enums. Make that text file the authoritative source of information.
You can express the text file in XML, but I'd think a tab-delimited flat file would work just fine.
Put them in a format that every language can understand or has a library for. I am using JSON for this at the moment.
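For example, a single source-of-truth file (the enum here is made up):

    {
      "OrderStatus": {
        "PENDING":   0,
        "SHIPPED":   1,
        "CANCELLED": 2
      }
    }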
Then you can include it in two ways:

For development: load it from a file or URL at runtime
  good for small changes you want to see immediately
  slow

For production use: compile it into the source files using a build script
  fast
  no instant feedback
I would apply the DRY principle and use a code generator; that way you could easily add a new language, even one that has no native enum type. A sketch of such a generator follows.
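For instance (a toy sketch in Python; the file names and the enum itself are made up), reading one JSON source of truth and emitting Java and SQL:

    import json

    with open("enums.json") as f:
        enums = json.load(f)  # e.g. {"OrderStatus": {"PENDING": 0, ...}}

    for name, members in enums.items():
        # Java enum with an explicit integer value
        body = ",\n    ".join(f"{k}({v})" for k, v in members.items())
        java = (f"public enum {name} {{\n    {body};\n"
                f"    public final int value;\n"
                f"    {name}(int value) {{ this.value = value; }}\n}}\n")
        with open(f"{name}.java", "w") as out:
            out.write(java)

        # SQL lookup table mirroring the same values
        rows = ",\n".join(f"  ({v}, '{k}')" for k, v in members.items())
        sql = (f"CREATE TABLE {name.lower()} (id INT PRIMARY KEY, name VARCHAR(32));\n"
               f"INSERT INTO {name.lower()} (id, name) VALUES\n{rows};\n")
        with open(f"{name.lower()}.sql", "w") as out:
            out.write(sql)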