In our Read the Docs project we have a use case where we need to show some specific docs on multiple pages within the same version of the docs. As of now, we do this in one of the following ways:
1. Copy-pasting the content into each page's .rst file
2. Writing it in one of the concerned files with a label, and using :std:ref: in the rest of the files to redirect the reader to the main file
I would like to write the content in only one file and then show it (without any redirection for the reader) in each of the other files. Is that possible?
Use the include directive in the parent file.
.. include:: includeme.rst
Note that the included file will be interpreted in the context of the parent file. Therefore section levels (headings) in the included file must be consistent with the parent file, and labels in the included file might generate duplicate warnings.
You can use the include directive for this purpose.
Say you write the text in dir/text.rst.
The following will include it in other documents:
.. include:: /dir/text.rst
where the path is either relative (then without the leading slash) or absolute, which is possible in Sphinx (doc):
in Sphinx, when given an absolute include file path, this directive
takes it as relative to the source directory
I want to use the .. include:: function inline, but I can only get it to actually include the file I want if I separate it with two new lines from the previous text.
Before anyone asks: the file I want to include is a protocol number, so no, it doesn't benefit from a new line at all. I want to be able to change it easily so I can use it in multiple places of my documentation. I guess an example would be "We currently use the protocol (proto.txt)." I'm new to Sphinx and rst, so maybe there is a very obvious solution I haven't found.
Inline includes are not possible with Sphinx.
However, you can define global aliases (substitutions) in the rst_epilog variable of your build configuration file.
For example, you can add the following lines to your conf.py file:
rst_epilog = """
.. |version| replace:: 4.1
.. |protocol| replace:: httpx
"""
Now, you can access the variables |version| and |protocol| from any .rst file within your project, for example like this:
Version |version| uses the |protocol| protocol.
becomes
Version 4.1 uses the httpx protocol.
If other parts of your software require protocol (or other variables) to be specified in a specific file or format, you can write a script to read it from there as a variable into the Sphinx configuration file.
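A minimal sketch of such a script (the file name proto.txt and the helper name are hypothetical; the idea is simply to read the value at build time and inject it into rst_epilog from conf.py):

```python
def build_rst_epilog(path):
    """Read a single value from a plain-text file and expose it to
    every .rst page as a reStructuredText substitution."""
    with open(path) as f:
        protocol = f.read().strip()
    return "\n.. |protocol| replace:: %s\n" % protocol

# In conf.py you would then set, e.g.:
# rst_epilog = build_rst_epilog("proto.txt")
```

This keeps the protocol number in one place; any other tooling that needs it can read the same file.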
Context
I am making a script that dynamically inserts include directives in code blocks on an AsciiDoc file and then generates a PDF out of that. A generated AsciiDoc file could look like this:
= Title
[source,java]
----
include::foo.java[]
----
I want the user to be free to include whatever char-based file he or she wants, even other AsciiDoc files.
Problems
My goal is to show the contents as are of these included files. I run into problems when the included file:
is recognized as AsciiDoc because of its extension, and thus any include directives it contains are interpreted. I don't want nested includes, just to show the include directive in the code block.
contains the code block delimiter ----, in which case I end up with two code blocks instead of the intended single one. Here it does not matter whether the file is recognized as AsciiDoc; the problem persists.
My workaround
The script I am writing uses AsciidoctorJ, and I am leveraging the fact that I can control how the content of each file is included by using an include processor. Using the include processor, I wrap each line of each file with the pass:[] macro. Additionally, I activate macro substitution on the desired code block.
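The line wrapping done by the include processor can be sketched as follows (Python for illustration only; the actual processor is written in Java against the AsciidoctorJ include-processor API):

```python
def wrap_with_pass_macro(text):
    # Wrap every line in the pass:[] macro so AsciiDoc renders it verbatim.
    # Caveat (noted below): if a line ends in a backslash, that backslash
    # escapes the closing bracket of pass:[], breaking the macro.
    return "\n".join("pass:[%s]" % line for line in text.splitlines())
```

For example, `wrap_with_pass_macro("include::foo.java[]\n----")` yields two lines, each wrapped in `pass:[...]`, so neither the include directive nor the delimiter is interpreted.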
Is there a better way to show the exact contents of a file? This works, but it seems like a hack. I would much rather prefer not having to change the read lines as I am currently doing.
EDIT for further information
I would like to:
not have to escape the block delimiter. I am not exclusively referring to ----, but to whatever the delimiter happens to be. For example, the answer by cirrus still has the problem when a line of the included file contains ....
not have to escape the include directives in files recognized as AsciiDoc.
In a general note, I don't want to escape (or modify in any way) any lines.
Problem I found with my workaround:
If the last char of a line is a backslash (\), it escapes the closing bracket of the pass:[] macro.
You can try using a literal block. Based on your above example:
a.adoc:
= Title
....
include::c.adoc[]
....
If you use include:: in c.adoc, asciidoctor will still try to find and include the file. As such you will need to replace include:: with \include::
c.adoc:
\include::foo.txt[]
----
----
This should render the intended single code block in the output PDF.
When converting Markdown files with cross-document links to HTML, docx or PDF, the links get broken in the process.
I use pandoc 1.19.1 and MiKTeX.
This is my testcase:
File1: doc1.md
[link1](/doc2.md)
File2: doc2.md
[link2](/doc1.md)
The result in html with this call to pandoc:
pandoc doc1.md doc2.md -o test.html
looks like this:
<p><a href="/doc2.md">link1</a> <a href="/doc1.md">link2</a></p>
As PDF, a link is created but it does not work. Exported as docx, it looks the same.
I would have assumed that when multiple files are processed and concatenated into the same output file, the result would contain document-internal links, such as anchor links for HTML output. But instead the link is created in the output file just as it was in the input files. Even the original file extension .md is preserved in the created links.
What am I doing wrong?
My problem looks a bit like this:
pandoc command line parameters for resolving internal links
In the comments of this question the bug is said to be fixed by a pull request in May. But the bug still seems to exist.
Greetings
Georg
I had a similar problem when trying to export a Gitlab wiki to PDF. There, links between pages look like filename-of-page#anchor-name and links within a page look like #anchor-name. I wrote a (finicky and fragile) pandoc filter that solved the problem for me; perhaps it is useful to others.
Example files
To explain my solution I'll have two test files, 101-first-page.md:
# First page // Gitlab automatically creates an anchor here named #first-page
Some text.
## Another section // Gitlab automatically creates an anchor here named #another-section
A link to the [first section](#first-page)
and 102-second-page.md:
# Second page // Gitlab automatically creates an anchor here named #second-page
Some text and [a link to the first page](101-first-page#first-page).
When concatenating them to render as one document in pandoc, links between pages break as anchors change. Below the concatenated file with the anchors in comments.
# First page // anchor=#first-page
Some text.
## Another section // anchor=#another-section
A link to the [first section](#first-page)
# Second page // anchor=#second-page
Some text and [a link to the first page](101-first-page#first-page). // <-- this anchor no longer exists.
The link from the second to the first page breaks as the link target is incorrect.
Solution
By pre-processing all markdown files first individually via a pandoc filter, and then concatenating the resulting json files I was able to get all links working.
Requirements
pandoc
latex
python
pandocfilters
Every file should start with a level 1 header that matches the filename (minus the number at the beginning). E.g. the file 101-A file on the wiki.md should start with a level 1 header named A file on the wiki.
Filter
The filter itself (together with the pandoc script) is available in this gist.
What it does is:
It gets the label of the first level 1 header, e.g. first-page
It prepends that label to all other labels in the same file, e.g. first-page-another-section.
It renames all links to the same file such that the prefix is taken into account, e.g. #first-page-first-page
It renames all links to other files such that the (assumed) prefix of the other files is taken into account, e.g. 101-first-page#first-page becomes #first-page-first-page.
After it has run every markdown file through this filter individually and converted them to JSON files, it concatenates the JSON files and converts the result to a PDF.
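The anchor-rewriting rules above can be sketched as plain string manipulation (a simplified Python illustration of steps 2-4, not the actual pandocfilters code; the slug function assumes Gitlab-style anchor generation and the numeric filename prefix convention from the requirements):

```python
import re

def slug(heading):
    # Gitlab-style anchor: lowercase, drop punctuation, spaces -> hyphens
    return re.sub(r"[^\w\- ]", "", heading).strip().lower().replace(" ", "-")

def rewrite_link(target, page_label):
    """Rewrite a link target after concatenation.

    page_label is the slug of the current file's level 1 header,
    e.g. 'first-page'.
    """
    if target.startswith("#"):
        # intra-page link: '#another-section' -> '#first-page-another-section'
        return "#%s-%s" % (page_label, target[1:])
    if "#" in target:
        # cross-page link: '101-first-page#first-page' -> '#first-page-first-page'
        fname, anchor = target.split("#", 1)
        other_label = re.sub(r"^\d+-", "", fname)  # strip the numeric prefix
        return "#%s-%s" % (other_label, anchor)
    return target  # external links etc. are left alone
```

With these rules, the broken link from the example, 101-first-page#first-page, becomes #first-page-first-page, which does exist in the concatenated document.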
As the pandoc README states:
If multiple input files are given, pandoc will concatenate them all (with blank lines between them) before parsing.
So for the parsing done by pandoc, it sees it all as one document... so you'll have to construct your links across multiple files as if they were all in one file; see also this answer for details.
I have a project which has a lot of useful docs outside of the src directory which I'd like to render as usual DocPad documents.
Examples:
Files at the root of the project: README.md, LICENSE, Contributing.md and similar, which are already there and can be used in things like GitHub. I would like to reuse the content from those files to create the corresponding readme, license and contributing pages, or to include the contents from those files somewhere in layout or a document.
I have a project that has some docs inside, and I'd like to render the .md files in it as DocPad documents by including it in package.json, so those files would be in node_modules at the root.
In both these cases there are files outside of src/documents that I'd like to use as partials or documents, and it seems that the partial plugin can't help me (or I couldn't find a way to make it do what I need), and @getCollection can only get things from src/documents.
So, the question is: Is there a way I can tell DocPad to treat some of the files/folders from outside the src folder as documents? Am I missing something?
If not, then what would be the best way to do it as a plugin, which direction should I dig?
DocPad natively supports storing documents outside of the default src folder. The way you do this is via the documentsPaths config option in the DocPad Configuration File (e.g. docpad.coffee). Something like this:
path = require('path')
docpadConfig = {
documentsPaths: [
'documents'
path.resolve('..','data','documents')
]
....
Of course, where this will fall down is if you want to just include arbitrary, individual files somewhere on the file system. In such cases symlinks would be the way to go.
Template helpers can also be used for this, as they can do whatever you want.
For instance, the Bevry Learning Centre website uses template helpers to render arbitrary files by relative paths as code examples:
Template Helper: https://github.com/bevry/learn/blob/6e202638f2321eec2633d1dbeaf1078bdb953562/docpad.coffee#L244-L257
Template using the Template Helper: https://github.com/bevry/documentation/blob/f24901251d19ec1cfa56fcee14c2c6836c0a995c/node/handsonnode/03-server.html.md.eco
If you would also like to render them, you could combine such a solution with the Text Plugin.
The combination would be like so:
Template Helper in DocPad Configuration File:
docpadConfig =
templateData:
readProjectPath: (relativePath) ->
fullPath = require('path').join(__dirname, relativePath)
return @readFullPath(fullPath)
readRelativePath: (relativePath) ->
fullPath = @getPath(relativePath)
return @readFullPath(fullPath)
readFullPath: (fullPath) ->
result = require('fs').readFileSync(fullPath)
if result instanceof Error
throw result
else
return result.toString()
Usage of Template Helper with Text Plugin using Eco as templating engine:
<t render="markdown"><%- @readProjectPath('README.md') %></t>
The answer would be a rather simple one: relative symbolic links. Docpad handles them perfectly.
This way, to have a symlink of README.md inside your documents, you should do this (with pwd of src/documents):
ln -s ../../README.md readme.html.md
Or, in the case of docs from inside one of the project's modules:
ln -s ../../node_modules/foobar/docs/ docs
Both those variants work perfectly.
Note: Symlinks can be tricky. Refer to these for some common gotchas:
https://github.com/docpad/docpad/issues/878#issuecomment-53197720
https://github.com/docpad/docpad/issues/878#issuecomment-53209674
Just as a comparison between the different answers:
Use the paths solution when you want to add entire extra directories to be included inside the DocPad database to be treated as normal by DocPad.
Use the sym/hard link solution when you want to include specific documents or files that you would like to be treated as a DocPad document or file, with all the intelligent file parsing and document rendering, including layout, database, and caching features.
Use the template helper solution when you want to include specific files that you do not want included in the DocPad database for whatever reason.
Made this answer a community wiki one, so it can be updated accordingly for new answers and better details.