Doxygen include plain text

Is there a way to include plain text in a Doxygen file? For example:
Hello World
----------
#include helloworld.txt
without it ending up in a fragment like:
I don't want this
I wanted it to end up like the other text I write in a .markdown file:
I want this

I'm sorry to say I don't think there's an easy way to do this as of May 2016 - what you really need is an mdinclude command to match htmlinclude.
The problem is that Markdown is processed in a pre-processing step, and the output of that step is then run through the normal Doxygen processing, which detects generated HTML and special commands.
So, unless you want to make a submission to Doxygen, the only way I can think of is to do your own pre-processing for this particular include.
You could do that in a build step where you use some Unix commands or a script to merge in a file at the include stage.
If you wanted this driven by Doxygen, you could use a custom file extension and write your own filter, as described in this SO answer.
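For the filter route, a rough sketch of what that could look like (nothing here is built into Doxygen: the TEXTINCLUDE marker and the expand_includes.rb script are made-up names; FILTER_PATTERNS is the real Doxyfile option that wires the filter in):

# Doxyfile: run markdown pages through a small pre-processing filter
FILTER_PATTERNS = *.markdown=./expand_includes.rb

#!/usr/bin/env ruby
# expand_includes.rb (illustrative): Doxygen invokes the filter with the
# input file as its argument and reads the filtered text from stdout.
File.foreach(ARGV[0]) do |line|
  if line =~ /^TEXTINCLUDE\((.+)\)\s*$/
    print File.read($1.strip)   # splice the plain-text file in as-is
  else
    print line
  end
end

A page could then pull in helloworld.txt with a TEXTINCLUDE(helloworld.txt) line, and the merged text is what Doxygen's markdown processing actually sees.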

Better way to include content as-is with AsciiDoc include directive

Context
I am making a script that dynamically inserts include directives in code blocks on an AsciiDoc file and then generates a PDF out of that. A generated AsciiDoc file could look like this:
= Title
[source,java]
----
include::foo.java[]
----
I want the user to be free to include whatever char-based file he or she wants, even other AsciiDoc files.
Problems
My goal is to show the contents of these included files exactly as they are. I run into problems when the included file:
is recognized as AsciiDoc because of its extension, and thus any include directives it has are interpreted. I don't want nested includes, just to show the include directive in the code block. Example of undesired behaviour:
contains the code block delimiter ----, in which case I end up with two code blocks instead of the intended single one (as seen in the image above). In this case, it does not matter whether the file is recognized as an AsciiDoc file; the problem persists.
My workaround
The script I am writing uses AsciidoctorJ, and I am leveraging the fact that I can control how the content of each file is included by using an include processor. Using the include processor, I wrap each line of each file with the pass:[] macro. Additionally, I activate macro substitution on the desired code block. A demonstration of this idea is shown in the image above.
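For reference, here is the same idea sketched against the Ruby Asciidoctor extension API (the question uses AsciidoctorJ, so this is only an illustration; the class name is made up):

require 'asciidoctor'

# Illustrative include processor: wrap every included line in pass:[]
# so it is shown verbatim instead of being interpreted.
class VerbatimIncludeProcessor < Asciidoctor::Extensions::IncludeProcessor
  def handles? target
    true                                  # claim every include:: target
  end

  def process doc, reader, target, attributes
    lines = File.readlines(target).map { |l| "pass:[#{l.chomp}]" }
    reader.push_include lines, target, target, 1, attributes
    reader
  end
end

Asciidoctor::Extensions.register do
  include_processor VerbatimIncludeProcessor
end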
Is there a better way to show the exact contents of a file? This works, but it seems like a hack. I would much rather prefer not having to change the read lines as I am currently doing.
EDIT for further information
I would like to:
not have to escape the block delimiter. I am not exclusively referring to ----, but to whatever the delimiter happens to be. For example, the answer by cirrus still has the problem when a line of the included file contains .... (the literal block delimiter).
not have to escape the include directives in files recognized as AsciiDoc.
In a general note, I don't want to escape (or modify in any way) any lines.
Problem I found with my workaround:
If the last char of a line is a backslash (\), it escapes the closing bracket of the pass:[] macro.
You can try using a literal block. Based on your above example:
a.adoc:
= Title
....
include::c.adoc[]
....
If you use include:: in c.adoc, asciidoctor will still try to find and include the file. As such you will need to replace include:: with \include::
c.adoc:
\include::foo.txt[]
----
----
Which should render the included file's contents verbatim in the output PDF.

Sphinx replace in inclusion command

I am struggling with this issue:
I have to document a fairly large project composed of a C core engine and different APIs that are built on top of it, say Java, Python, C#.
The docs must be deployed separately for each API, i.e. for each language, but 99% of the docs are the same; mainly the code snippets and examples need to change.
I set the type of language in the conf.py file by defining a global variable
I have used primary_domain and highlight_language to set the correct syntax highlighting
For each example I have a source file with the same name but different extension
Now I'd like to include, say, an example using the literalinclude directive, specifying the name of the file and letting its extension change depending on the language in use. I naively tried to use the replace substitution, but with no success:
rst_prolog = ".. |ext| replace:: .%s\n" % primary_domain
This correctly replaces |ext| throughout the docs, but not in the directive:
.. literalinclude:: filename|ext|
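Putting the pieces above together, the conf.py side looks roughly like this (a sketch; the language_tag variable and the "py" value are illustrative placeholders for however the build selects the API):

# conf.py (sketch)
language_tag = "py"                    # e.g. "py", "java", "cs"; set per build
primary_domain = language_tag          # default domain
highlight_language = language_tag      # default syntax highlighting
# Expanded in ordinary text, but substitutions are not applied inside directive arguments:
rst_prolog = ".. |ext| replace:: .%s\n" % primary_domain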
Is there any way I can do this, except parse rst files using sed or the like?

How do I use SPHINXOPTS to trigger the "only" directive when invoking Sphinx with a Makefile?

I am generating a PDF via Sphinx using the autogenerated Makefile. I usually generate it using:
make latexpdf
However, I am now including the only directive, so that some sections appear conditionally (this should happen if I include the relevant tag at the command line).
I added the following reST markup to my source file:
Hello world

.. only:: draft

   This is some draft content.
I tried generating the PDF as follows:
SPHINXOPTS="-t draft" make latexpdf
...but the output is the same as if I'd just run make latexpdf as normal, the "only" section does not appear. Is there a problem in my reST or my command line invocation?
(Also, I'd like to specify multiple tags if possible, e.g. draft and admin.)
You need to modify the command a little (the variable assignment must come after make). Either of these work for me (using GNU make):
make SPHINXOPTS="-t draft" latexpdf
or
make latexpdf SPHINXOPTS="-t draft"
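As for the multiple-tag part of the question: sphinx-build accepts -t more than once, so (with the same autogenerated Makefile) both tags can be passed together, e.g.
make latexpdf SPHINXOPTS="-t draft -t admin"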

How can you get vim to add a header comment to new files?

I write a lot of Rails apps these days and would like to have vim add header comments to all the code I work on.
I tend to store my projects in
~/Development/Repos/Personal
And
~/Development/Repos/Work
Can I get vim to use different copyright headers etc. based on whereabouts the file is being created?
You can just save a header template as a plain text file and read it into a new file with :read. As for checking the path, just write a Ruby script to produce the desired text and invoke it with :read!. Creating a true vim plugin is also an option; however, why waste time learning a new language and API when you already know how to deal with text and paths in Ruby? That said, a bash script would create even less friction if you are comfortable with it.
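A minimal sketch of that approach (the template paths and script name below are examples, not anything vim ships with):

" ~/.vimrc: read a different header template depending on where the new file lives
autocmd BufNewFile */Development/Repos/Personal/* 0read ~/.vim/headers/personal.txt
autocmd BufNewFile */Development/Repos/Work/*     0read ~/.vim/headers/work.txt

" Or let a Ruby script decide what to emit based on the full path:
autocmd BufNewFile */Development/Repos/* execute '0read !ruby ~/bin/make_header.rb ' . shellescape(expand('%:p'))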
I suggest you use one of the many snippet plugins, like XPTemplate or snipMate, to create a 'header' snippet and then use it. The strength of these plugins is that you just have to type a word and then press Tab to get the expanded snippet.
Here's a snippet from my vimrc which puts in boilerplate when I create a file named test_something.rb. You can probably use a similar autocmd to conditionally add the copyright you desire. You may have to check for the expanded path in the function, but it seems doable with some vimscripting.
" Autocommands
autocmd BufNewFile *test*.rb call MakeRubyUnitTester()
"
" Functions
" Fill in the boilerplate for Ruby Unit Tests
function! MakeRubyUnitTester()
  exec "normal irequire 'test/unit'\<CR>\<CR>class TC_Simple < Test::Unit::TestCase"
endfunction

Ruby library for manipulating XML with minimal diffs?

I have an XML file (actually a Visual C# project file) that I want to manipulate using a Ruby script. I want to read the XML into memory, do some work on it that includes changing some attributes and some text (fixing up some path references), and then write the XML file back out. This isn't so hard.
The hard part is, I want the file I write to look the same as the file I read in, except where I made changes. If the input file used double quotes, I want the output to use double quotes. If the input had a space before />, I want the output to do the same. Basically, I want the output to be the same as the input, except where I explicitly made changes (which, in my case, will only be to attribute values, or to the text content of an element).
I want minimal diffs because this project file is checked into version control -- and because the next time I make a change in Visual Studio, it's going to rewrite it in its preferred format anyway. I want to avoid checking in a bunch of meaningless diffs that will then be changed back again in the near future. I also want to avoid having to open the project in Visual Studio, make a change, and save, before I can commit my Ruby script's changes. I want my Ruby script to just make its changes, nothing more.
I originally just parsed the file with regexes, but ran into cases where I really needed an XML library because I needed to know more about child elements. So I switched to REXML. But it makes the following undesirable changes to my formatting:
It changes all the attributes from double quotes to single quotes.
It escapes all the apostrophes inside attribute values (changing them to &apos;).
It removes the space before />.
It sorts each element's attributes alphabetically, rather than preserving the original order.
I'm working around this by doing a bunch of gsub calls on REXML's output, but is there a Ruby XML-manipulation library that's a better fit for "minimal diff" scenarios?
You can build your own SAX handler (using Nokogiri, for example; it's very easy and I recommend it) to parse your XML file, change some data in it, and write the processed XML back out with your own customized, built-from-scratch XML generator. The bad news is that you have to build a tiny XML library and generator routine in this case, so it is not a trivial task.
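A bare-bones sketch of that SAX-echo idea with Nokogiri (class and file names are made up, and a real version would also have to handle comments, CDATA, self-closing tags, and whatever quoting and whitespace conventions you want to preserve):

require 'nokogiri'

class EchoingHandler < Nokogiri::XML::SAX::Document
  def initialize(out)
    @out = out
  end

  def xmldecl(version, encoding, standalone)
    @out << %(<?xml version="#{version}" encoding="#{encoding}"?>\n)
  end

  def start_element(name, attrs = [])
    # attrs arrives as [name, value] pairs, so the original order is preserved
    rendered = attrs.map { |k, v| %( #{k}="#{escape(v)}") }.join
    @out << "<#{name}#{rendered}>"
  end

  def characters(text)
    @out << escape(text)
  end

  def end_element(name)
    @out << "</#{name}>"
  end

  private

  def escape(s)
    s.gsub('&', '&amp;').gsub('<', '&lt;').gsub('"', '&quot;')
  end
end

output = +''
Nokogiri::XML::SAX::Parser.new(EchoingHandler.new(output)).parse(File.read('MyProject.csproj'))
File.write('MyProject.csproj', output)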
Another way: don't build a SAX parser, but write an XML generator. Parse the XML with your favourite library, change what you need to change, and generate whatever you want. You just need to recursively walk through all the nodes in your document and output them following your own conventions.
