Pantheios wide characters? - Windows

I'm trying to integrate logging into my Windows C++ application, and I wanted to use Pantheios, as it generally gets very favorable comments. That said, all the included examples use macros like PANTHEIOS_LITERAL_STRING etc. for wrapping string literals, and require typedefs like:
typedef std::basic_string<PAN_CHAR_T> string_t;
to compile correctly. I think this is ugly, and would prefer not to use these typedefs.
Here's an example: http://www.pantheios.org/doc/html/cpp_2misc_2example_8cpp_8misc_8strings_2example_8cpp_8misc_8strings_8cpp-example.html
I tried compiling Pantheios with PANTHEIOS_USE_WIDE_STRINGS disabled but get lots of build errors -- any ideas?

As you've observed, the file back-end defaults to multibyte output in a multibyte build and wide output in a wide build, but IIRC there are initialisation options (for be.file) that let you force it one way or the other, regardless of how you're building.
fwiw, I would think that the examples have to account for all permutations, and that's why the "ugliness" you report is there. If you're only building for one character encoding or the other, you don't have to do that. It's much like Windows code that uses TCHAR and all the _tcsXXX() functions: you only need those if you want your code to work with both.
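For example, here is a minimal sketch of what a wide-only application could look like, assuming Pantheios was built with PANTHEIOS_USE_WIDE_STRINGS and the stock fe.simple front-end plus a back-end such as be.file are linked in ("myapp" is just a placeholder identity):

// Wide-only build: plain L"" literals can be used directly, with no
// PANTHEIOS_LITERAL_STRING() wrapping and no string_t typedef, as long
// as you never build a multibyte variant of this code.
#include <pantheios/pantheios.hpp>

// fe.simple requires the application to define its process identity.
extern "C" const PAN_CHAR_T PANTHEIOS_FE_PROCESS_IDENTITY[] = L"myapp";

int main()
{
    pantheios::log_NOTICE(L"starting up");
    pantheios::log_ERROR(L"something went wrong: ", L"details here");
    return 0;
}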
HTH

Boost.Test: Specify function name instead of main

I am working on a project using the CMake build system. CMake has a nice framework for generating a single test executable from a set of C/C++ sources; the command is called create_test_sourcelist. What it does is generate a C/C++ dispatcher with a single main entry point which calls the other test functions.
Therefore I have a bunch of C/C++ files with a function signature such as: int TestFunctionality1(int argc, char *argv[]), which I'd like to keep as-is, unless of course it means even more work.
How can I keep this system in place and start using BOOST_CHECK? I could not figure out how to tell Boost that the actual entry point is not called int main(int argc, char *argv[]).
My goal is to have a framework for integration with Jenkins; since the project already uses Boost, I believe this should be doable without rewriting the existing CMake test suite and turning every test into an independent main function.
Unfortunately, it seems there is no straightforward and clean way to do that.
On one side, the only useful thing create_test_sourcelist does is generate a test driver: a pretty simple, naive C/C++ translation unit with little room for hacking or extension, based on ${cmake-prefix}/share/cmake/Templates/TestDriver.cxx.in (and there is no way to select a different template).
On the other side, Boost UTF offers its own test runner (the equivalent of a test driver in CMake's terminology), but every variant of it (static, dynamic, single-header) contains a definition of main() in one way or another (even in the case of the external test runner).
…so you end up with two main() functions and no ability to choose a single one.
Digging a little into the sources of create_test_sourcelist, I really wonder why it was implemented as a built-in command and not as an ordinary (external) CMake module: it does nothing special that couldn't be implemented in the CMake language. The command is really quite dumb: it doesn't check that the required functions actually exist (you'll get compile errors if something is wrong), and there is no way to customize the generated source file at all. All it does is strip paths and extensions from a list of source files and substitute them into the mentioned template using an ordinary configure_file()…
So, personally, I see no reason to use it at all. That is why I've done the same (but better ;) job in the module mentioned in the comment above.
If you still want to use that command, the generated test driver is completely useless if you want to use Boost UTF. You need to provide your own initialization function anyway (and it is not main()), where you can manually register your test cases into a master test suite (or organize your tests into some more complex tree). In that case there is absolutely no reason to use create_test_sourcelist! All you can get from it is the list of sources to pass to add_executable(), which is much easier to do with set()… The command can't even help you with the list of test functions to call (actually the list of filenames without extensions): it is used internally and is not exported. Do you still want to use that command?
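For illustration, a minimal sketch of that manual-registration route, assuming the single-header variant of Boost UTF; TestFunctionality1 stands in for your existing test functions and keeps its original signature:

#define BOOST_TEST_ALTERNATIVE_INIT_API
#include <boost/test/included/unit_test.hpp>

// A hypothetical existing test, kept with its original signature.
int TestFunctionality1(int argc, char* argv[]);

// Adapter: Boost UTF test cases take no arguments, so forward the
// master test suite's argc/argv to the legacy function.
void run_TestFunctionality1()
{
    boost::unit_test::master_test_suite_t& mts =
        boost::unit_test::framework::master_test_suite();
    BOOST_CHECK(TestFunctionality1(mts.argc, mts.argv) == 0);
}

// The custom initialization function called by Boost's own main():
// register each legacy test by hand into the master test suite.
bool init_unit_test()
{
    boost::unit_test::framework::master_test_suite().add(
        BOOST_TEST_CASE(&run_TestFunctionality1));
    return true;
}

Build this translation unit (plus the files containing the real test functions) with add_executable() directly; Boost supplies main(), so the create_test_sourcelist driver is not needed at all.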

Run user-submitted code in Go

I am working on an application which allows users to compare the execution of different string comparison algorithms. In addition to several algorithms (including Boyer-Moore, KMP, and other "traditional" ones) that are included, I want to allow users to put in their own algorithms (these could be their own algorithms or modifications to the existing ones) to compare them.
Is there some way in Go to take code from the user (for example, from an HTML textarea) and execute it?
More specifically, I want the following characteristics:
I provide a method signature and they fill in whatever they want in the method.
A crash or a syntax error in their code should not cause my whole program to crash. It should instead allow me to catch the error and display an error message.
(In this case, I am not worried about security against malicious code because users will only be executing my program on their own machines, so security is their own responsibility.)
If it is not possible to do this natively with Go, I am open to embedding one of the following languages to use for the comparison functions (in order of preference): JavaScript, Python, Ruby, C. Is there any way to do any of those?
A clear No.
But you can do fancy stuff: why not recompile the program including the user-provided code?
Split the program in two: a driver which collects the user code, recompiles the actual worker, executes it, and reports the outcome; a sketch follows below.
Including interpreters for other languages can also be done, e.g. Otto is a JavaScript interpreter written in Go. (C will be hard :-)
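A minimal sketch of such a driver, assuming the go tool is on the PATH; the match() signature and file names are made up, and the driver is shown in C++ only for uniformity with the other sketches on this page (the same structure works written in Go itself):

// Driver: collect user code, wrap it in a scaffold, recompile it with
// the Go toolchain, and run it in a separate process, so a crash in
// the user's code cannot take the driver down.
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

int main()
{
    // 1. Collect the user's code (here: read it from stdin).
    std::string userCode((std::istreambuf_iterator<char>(std::cin)),
                         std::istreambuf_iterator<char>());

    // 2. Embed it in a scaffold supplying the fixed entry point.
    //    The user is expected to define: func match(text, pat string) int
    std::ofstream src("usermatch.go");
    src << "package main\n"
        << "import (\"fmt\"; \"os\")\n"
        << userCode << "\n"
        << "func main() { fmt.Println(match(os.Args[1], os.Args[2])) }\n";
    src.close();

    // 3. Recompile; a syntax error shows up as a nonzero exit status.
    if (std::system("go build -o usermatch usermatch.go") != 0) {
        std::cerr << "user code failed to compile\n";
        return 1;
    }

    // 4. Execute in a child process and report the raw wait status.
    int status = std::system("./usermatch \"some text\" \"pattern\"");
    std::cout << "child status: " << status << "\n";
    return 0;
}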
Have you considered doing something similar to the gopherjs playground? According to this, the compilation is being done client-side.

Syntax Checking with unsupported languages

I have some files that have a particular syntax that is similar to Ada (not identical though), and I would like to verify the syntax before running them. There isn't a compiler for these files, so I can't check them that way. I tried to use the following:
gcc -c -gnats <file>
However, this says "compilation unit expected". I've tried a few variations on this, but to no avail.
I just want to make sure the file is syntactically correct before using it, but I'm not sure how to do it, and I really don't want to write an entire syntax checker just for this.
Is there some way to make gcc handle an additional, unsupported language without going through a recompile? Is that simply a matter of giving gcc a file that describes the syntax constructs, or what would be entailed? I don't need a full compile, only a syntax check.
Alternately, are there any syntax checkers I can use that I can update an ada syntax check with the small number of changes required for this language?
I've listed Ada as a tag, since the syntax is nearly identical, and finding something that will do Ada syntax checking without compiling would be a 90% solution for me.
You could try running the files through gnatchop first. The GCC Ada compiler is rather unusual in that it expects filenames to match up with the main unit names inside the file; that may be what your error message is trying to say.
gnatchop will go through any files you give it and write out Ada source files with the appropriate names to make gcc happy (even splitting files into multiple files if needed).
Another option you might be interested in is OpenToken. It is a parser construction toolkit, written in Ada, that allows you to build your own parsers fairly easily. It comes with a syntax recognizer for Ada, so you may just be able to tweak that a bit for your needs.

Sweave with non-R code chunks?

I often use Sweave to produce LaTeX documents where certain chunks are produced dynamically by executing R code. This works well - but is it also possible to have code chunks that are executed in different ways, e.g. by executing the code in the shell, or by running Perl, and so on? It would be helpful to be able to mix things up, so I could do things like run some shell commands to fetch some data, run some perl commands to pre-process it, and then run R commands to analyze it.
Of course I could use all R chunks and use system() as a poor-man's substitute, but that doesn't make for very pleasant reading in the document.
The new new thing for multi-language, multi-format docs may be dexy.it, which, for example, the folks at opengamma.org use as their backend.
Ana, who is behind dexy, also gives a lot of talks about it, so have a look at the dexy blog too.
It's not directly related to Sweave, but org-babel, which is part of Emacs org-mode, allows you to mix code chunks of different languages in one file, pass data from one chunk to another, execute them, and generate LaTeX or HTML export from the output.
You can find more information about org-mode here:
http://www.orgmode.org/
And to see how org-babel works:
http://orgmode.org/worg/org-contrib/babel/
There is certainly no easy way to do this other than through either foreign language interfaces from R (maybe through inline if it's supported), or system(). For what it's worth, I would just use system(); that should be easy enough.
You can see this previous question about having a Sweave equivalent for Python, where one of the respondents actually creates a separate interface. This can give you a sense of what it would take to embed other languages which may not already be supported. At a minimum, you would have to do major hacking on the Sweave driver.
Do you know Emacs' org-mode and, more specifically, Babel? If you already know Emacs or are willing to switch to Emacs, then org-mode and Babel are the answer to your question(s).
For instance, I am currently working on a document which contains some shell-scripts, does computations with R and creates flow charts with dot (graphviz). Org-mode can export a variety of formats, e.g. LaTeX (that's what I use).
There is the StatWeave project, which uses Java rather than R to do the weaving, but will run multiple programs instead of just R. I don't know how hard it would be to get it to handle Perl or other programs like that, but the homepage indicates that it already works with R, SAS, Stata, and others:
http://www.cs.uiowa.edu/~rlenth/StatWeave/

Unifying enums across multiple languages

I have one large project with components in multiple languages that each depend on some of the same enum values. What solutions have you come up with to unify enums across multiple arbitrary languages? I can think of a few, but I'm looking for the best solution.
(In my implementation, I'm using PHP, Java, JavaScript, and SQL.)
You can put all of the enums in a text file, then use a code generator to write out the appropriate syntax for each language from that common file so that each component has the enums. Make that text file the authoritative source of information.
You can express the text file in XML but I'd think a tab-delimited flat file would work just fine.
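For instance, a minimal generator sketch along those lines, written here in C++ (the generator's language is independent of the target languages); the one-entry-per-line "NAME value" format, the enums.txt file name, and the Status type are all made up for illustration:

// Reads the authoritative flat file and emits a matching Java enum and
// PHP constants class; add one emitter per target language as needed.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Entry { std::string name; int value; };

int main()
{
    std::ifstream in("enums.txt");   // the single authoritative source
    std::vector<Entry> entries;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        Entry e;
        if (fields >> e.name >> e.value)   // tab- or space-delimited
            entries.push_back(e);
    }

    // Java target
    std::ofstream java("Status.java");
    java << "public enum Status {\n";
    for (std::size_t i = 0; i < entries.size(); ++i)
        java << "    " << entries[i].name << "(" << entries[i].value << ")"
             << (i + 1 < entries.size() ? "," : ";") << "\n";
    java << "    private final int v;\n"
         << "    Status(int v) { this.v = v; }\n"
         << "}\n";

    // PHP target
    std::ofstream php("Status.php");
    php << "<?php\nclass Status {\n";
    for (std::size_t i = 0; i < entries.size(); ++i)
        php << "    const " << entries[i].name
            << " = " << entries[i].value << ";\n";
    php << "}\n";
    return 0;
}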
Make them in a format that every language can understand or has a library for. I am using JSON for this at the moment.
Then you can include it in two ways:
For development: load it from a file/URL at runtime. Good for small changes you want to see immediately, but slow.
For production use: bake it into the files using a build script. Fast, but no instant feedback.
I would apply the DRY principle and use a code generator; that way you could easily add a new language, even one that has no native enums.
