Build translated Sphinx docs in separate directories - internationalization

I work on documentation that will be published in several languages, which is one of the reasons I use Sphinx.
I know how to generate the translated versions, but with the setup described in the documentation, the resulting files replace the ones generated before. So when building multiple translations, I have to move the files to another directory before doing anything else. It would be more practical (and easier to deploy) to generate each translation in its own directory.
Is there a way to tell Sphinx or the makefile that when I run
make -e SPHINXOPTS="-D language='(lang)'" (format)
the files should be generated in /build/(format)/(lang)?
For now, only the HTML builder is used (and I doubt anything else will be), so an HTML-specific solution is acceptable if it is not possible to do it globally.
Sphinx version is 1.4.6.

I found a working solution by replacing the Makefile with a custom Python script (build.py).
Using sys.argv, I emulate the make target behaviour and add a few options for the language. With the subprocess module, specifically its call() function, I can run commands with a given set of options. The script is built around a function that generates the command to be executed by subprocess.call():
def build_command(target, build_dir, lang=None):
    # Assemble the sphinx-build invocation for one target/language pair.
    lang_opt = []
    if lang:
        lang_opt = ["-D", "language='" + lang + "'"]
        build_dir += "/" + lang
    else:
        build_dir += "/default"
    return ["sphinx-build", "-b", target, "-aE"] + lang_opt + ["source", "build/" + build_dir]
The lang parameter is what lets me keep each language separate, independently of the target. Later in the code, I simply run
subprocess.call(build_command(target, target, lang))
to build the documentation in the desired language with the specified target (usually target = "html"). The script can also emulate make gettext:
subprocess.call(build_command("gettext", "locale"))
And so on...
A better solution may exist, but at least this one will do the job.
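For reference, here is a simplified, self-contained sketch of the whole script (the actual build.py handles more options, and the command-line layout shown here -- target as the first argument, optional language codes after it -- is only one way to do it):

import subprocess
import sys

def build_command(target, build_dir, lang=None):
    # Assemble the sphinx-build invocation for one target/language pair.
    lang_opt = []
    if lang:
        lang_opt = ["-D", "language='" + lang + "'"]
        build_dir += "/" + lang
    else:
        build_dir += "/default"
    return ["sphinx-build", "-b", target, "-aE"] + lang_opt + ["source", "build/" + build_dir]

if __name__ == "__main__":
    # e.g. "python build.py html fr de" builds into build/html/fr and build/html/de;
    # with no languages given, the default docs go to build/html/default.
    target = sys.argv[1] if len(sys.argv) > 1 else "html"
    languages = sys.argv[2:] or [None]
    for lang in languages:
        subprocess.call(build_command(target, target, lang))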

Related

What's the 'correct' way to stage optional server certificates for a bitbake recipe?

Given a recipe for a server program that's written to use SSL if a cert/key pair is provided on the system image -- what's the 'proper' way to handle installing those certs from a bitbake viewpoint? Especially during development of the server software, when I need to provide self-signed certs while we test things.
I have a solution in place, but I'm not sure it's optimal, and it felt like I was fighting the tooling too much to do this. So it's time to ask.
Here's what I have.
If you set up whitelisted environment variables for:
SERVER_RECIPE_NAME_CERT = '/absolute/path/to/cert.pem'
SERVER_RECIPE_NAME_CERT_KEY = '/absolute/path/to/key.pem'
Then, in the server recipe, where I'd normally just have the SRC_URI, I've created a Python function that gets expanded into the SRC_URI if the cert variables are set:
def certfile_src(d):
    files = ''
    if d.getVar('SERVER_RECIPE_NAME_CERT') is not None:
        files = files + 'file://' + d.getVar('SERVER_RECIPE_NAME_CERT', True)
    if d.getVar('SERVER_RECIPE_NAME_CERT_KEY') is not None:
        files = files + ' ' + 'file://' + d.getVar('SERVER_RECIPE_NAME_CERT_KEY', True)
    return files

SRC_URI = "\
    git://${GO_IMPORT} \
    ${@certfile_src(d)} \
"
I had issues using the python function syntax instead of the def syntax, but in retrospect that may have been because I had the function below the SRC_URI assignment. I should probably try it that way again, as I preferred that syntax.
So to summarize the questions:
Have I reinvented the wheel in a less efficient manner? Is there a 'right way' or 'better way' to do this with existing tooling?
I probably should have used ${PN} in the getVar, so that this could be copied / pasted cross-recipe since this is a common pattern for some things I'm working with.
I probably should make this a 'class' ... which makes me wonder if there is one already that I missed, but I'm not sure if a class can modify the SRC_URI... do they even need to? Could I have just done all this in a do_install_append() and copied the certs from the absolute path source into ${D}${sysconfdir}/... without making QA checks fail like crazy?
You have to make sure that your environment variables are listed in the exported host environment variable BB_ENV_EXTRAWHITE (cf. https://docs.yoctoproject.org/bitbake/bitbake-user-manual/bitbake-user-manual-ref-variables.html#term-BB_ENV_EXTRAWHITE). This is required; otherwise, a change in those environment variables won't be picked up by the build.
You do want to go through SRC_URI because it checksums the files between builds: if the path to your certs stays the same but the certs themselves have changed, the recipe is still rebuilt.

autoconf: how do I substitute the library prefix?

CLISP's interface to PARI is configured with the configure.in containing AC_LIB_LINKFLAGS([pari]) from lib-link.m4.
The build process also requires the Makefile to know where the datadir of PARI is located. To this end, Makefile.in has
prefix = @LIBPARI_PREFIX@
DATADIR = @datadir@
and expects to find $(DATADIR)/pari/pari.desc (normally
/usr/share/pari/pari.desc or /usr/local/share/pari/pari.desc).
This seems to work on Mac OS X where PARI is installed by homebrew in /usr/local (and LIBPARI_PREFIX=/usr/local), but not on Ubuntu, where PARI is in /usr, and LIBPARI_PREFIX is empty.
How do I insert the location of the PARI's datadir into the Makefile?
PS. I also asked this on the autoconf mailing list.
PPS. In response to @BrunoHaible's suggestion, here is my meager attempt at debugging on Linux (where LIBPARI_PREFIX is empty).
$ bash -x configure 2>&1 | grep found_dir
+ found_dir=
+ eval ac_val=$found_dir
+ eval ac_val=$found_dir
You are trying to use $(prefix) in an unintended way. In an Autotools-based build system, the $(prefix) represents a prefix to the target installation location of the software you're building. By setting it in your Makefile.in, you are overriding the prefix that configure will try to assign. However, since you appear not to have any installation targets anyway, at least at that level, that's probably more an issue of poor form than a cause for malfunction.
How do I insert the location of the PARI's datadir into the Makefile?
I'd recommend computing or discovering the needed directory in your configure script and exporting it to the generated Makefile via its own output variable. Let's take the second part first, since it's simple. Suppose that in configure.in you have, in some manner, located the wanted data directory and assigned it to a variable:
DATADIR=...
You would then make an output variable of it via the AC_SUBST macro:
AC_SUBST([DATADIR])
Since you are using only Autoconf, not Automake, you would then manually receive that into your Makefile by changing the assignment in your Makefile.in:
DATADIR = @DATADIR@
Now, as for locating the data directory in the first place, you have to know what you're trying to implement before you can implement it. From your question and followup comments, it seems to me that you want this:
1. Use a data directory explicitly specified by the user, if there is one. Otherwise,
2. look for a data directory relative to the location of the shared library. If it's not found there, then
3. (optional) look under the prefix specified to configure, or specifically in the specified datadir (both of which may come from the top-level configure). Finally, if it still has not been found,
4. look in some standard locations.
To create a configure option by which the user can specify a custom data directory, you would probably use the AC_ARG_WITH macro, maybe like this:
AC_ARG_WITH([pari-datadir], [AS_HELP_STRING([--with-pari-datadir],
        [explicitly specifies the PARI data directory])],
    [], [with_pari_datadir=''])
Thanks to @BrunoHaible, we see that although the Gnulib manual does not document it, the macro's internal documentation specifies that if AC_LIB_LINKFLAGS locates libpari then it will set LIBPARI_PREFIX to the library directory prefix. You find that that does work when the --with-libpari option is used to give it an alternative location to search, so I suggest working with that. You certainly can try to debug AC_LIB_LINKFLAGS to make it set LIBPARI_PREFIX in all cases in which the lib is found, but if you don't want to go to that effort then you can work around it (see below).
Although the default or specified installation prefix is accessible in configure as $prefix, I would suggest instead going to the specified $datadir. That is slightly tricky, however, because by default it refers to the prefix indirectly. Thus, you might do this:
eval "datadir_expanded=${datadir}"
Finally, you might hardcode a set of prefixes such as /usr and /usr/local.
Following on from all the foregoing, then, your configure.in might do something like this:
DATADIR=
for d in \
    ${with_pari_datadir} \
    ${LIBPARI_PREFIX:+${LIBPARI_PREFIX}/share/pari} \
    ${datadir_expanded}/pari \
    /usr/local/share/pari \
    /usr/share/pari
do
    AS_IF([test -r "$[]d/pari.desc"], [DATADIR="$[]d"; break])
done
AS_IF([test x = "x$DATADIR"], [AC_MSG_ERROR(["Could not identify PARI data directory"])])
AC_SUBST([DATADIR])
Instead of guessing the location of datadir, why don't you ask PARI/GP where its datadir is located? Namely,
$ echo "default(datadir)" | gp -qf
"/usr/share/pari"
does the trick.

How to input a parameter in a custom target with cmake

I have a custom target:
add_custom_target(
    create-po
    COMMAND ${MSGINIT} --no-translator -i "${PROJECT_SOURCE_DIR}/data/${PACKAGE}.pot" -o "${PROJECT_SOURCE_DIR}/po/es.po" -l es_MX.utf8
)
so it is invoked like this:
# make create-po
My idea is to change it to something like this:
# make create-po "es"
so that any user can create a .po file for a custom locale. I don't know the exact term for this, but I'd like to add a parameter to the target name... is it possible with CMake? Thanks
A long time later, I found this question for the same reason: can I use CMake to initialize a .po file when I want to add a new translation? I expect to use it only once in a while for my project, so making the build system do it seems more comfortable to me than figuring out all the required options and paths every time.
I ended up with the following CMake snippet:
set(INIT_LANG CACHE STRING "give a locale here to create a target which initializes a related .po file")
IF(INIT_LANG)
    add_custom_target(
        create-po-${INIT_LANG}
        ... # integrate INIT_LANG in your command
    )
ENDIF(INIT_LANG)
Then, if you want to initialize a new translation file, call (assuming your build dir is under the project root):
# cmake -DINIT_LANG=es_MX.utf8 ..
... and you should get a corresponding make target:
# make create-po-es_MX.utf8
Yes, it's not as straightforward as the OP's idea/expectation (or mine), but users can create new .po files by themselves (of course, this will be documented properly for them in the project ;) ).

Organizing asset files in a Go project

I have a project that contains a folder to manage file templates, but it doesn't look like Go provides any support for non-Go-code project files. The project itself compiles to an executable, but it needs to know where this template folder is in order to operate correctly. Right now I do a search for $GOPATH/src/<templates>/templates, but this feels like kind of a hack to me because it would break if I decided to rename the package or host it somewhere else.
I've done some searching and it looks like a number of people are interested in being able to "compile" the asset files by embedding them in the final binary, but I'm not sure how I feel about this approach.
Any ideas?
Either pick a path (or a list of paths) that users are expected to put the supporting data in (/usr/local/share/myapp, ...) or just compile it into the binary.
It depends on how you are planning to distribute the program. As a package? With an installer?
For most of my programs I enjoy having just a single file to deploy, and I only have a few templates to include, so that's what I do.
I have an example using go-bindata where I build the HTML template with a Makefile, but if I build with the 'devel' flag it reads the file at runtime instead, to make development easier.
I can think of two options: use a -cwd flag, or infer from the cwd and arg 0.
-cwd path/to/assets
path/to/exe -cwd=$(path/to/exe/assets)
Internally, the executable would chdir to wherever -cwd points, and then it can use relative paths throughout the application. This has the added benefit that the user can change the assets without having to recompile the program.
I do this for config files. Basically the order goes:
process cmd arguments, looking for a -cwd variable (it defaults to empty)
chdir to -cwd
parse config file
reparse cmd arguments, overwriting the settings in the config file
I'm not sure how many arguments your app has, but I've found this to be very useful, especially since Go doesn't have a standard packaging tool that will compile these assets in.
infer from arg 0
Another option is to use the first argument and get the path to the executable. Something like this:
here := path.Dir(os.Args[0])
if !path.IsAbs(os.Args[0]) {
    wd, _ := os.Getwd() // os.Getwd returns (dir, err); error ignored here for brevity
    here = path.Join(wd, here)
}
This will get you the path to where the executable is. If you're guaranteed the user won't move this without moving the rest of your assets, you can use this, but I find it much more flexible to use the above -cwd idea, because then the user can place the executable anywhere on their system and just point it to the assets.
The best option would probably be a mixture of the two. If the user doesn't supply a -cwd flag, they probably haven't moved anything, so infer from arg 0 and the cwd. The cwd flag overrides this.

How to set Sphinx's `exclude_patterns` from the command line?

I'm using Sphinx on Windows.
Most of my documentation is for regular users, but there are some sub-pages with content for administrators only.
So I want to build two versions of my documentation: a complete version, and a second version with the "admin" pages excluded.
I used the exclude_patterns in the build configuration for that.
So far, it works. Every file in every subfolder whose name contains "admin" is ignored when I put this into the conf.py file:
exclude_patterns = ['**/*admin*']
The problem is that I'd like to run the build once to get both versions.
What I'm trying to do right now is running make.bat twice and supplying different parameters on each run.
According to the documentation, I can achieve this by setting the BUILDDIR and SPHINXOPTS variables.
So now I have a build.bat that looks like this:
path=%path%;c:\python27\scripts
rem BUILD ADMIN DOCS
set SPHINXOPTS=
set BUILDDIR=c:\build\admin
call make clean
call make html
rem BUILD USER DOCS
set SPHINXOPTS=-D exclude_patterns=['**/*admin*']
set BUILDDIR=c:\build\user
call make clean
call make html
pause
The build in the two different directories works when I delete the line set BUILDDIR=build from the sphinx-generated make.bat file.
However, the exclude pattern does not work.
The batch file listed above outputs this for the second build (the one with the exclude pattern):
Making output directory...
Running Sphinx v1.1.3
loading translations [de]... done
loading pickled environment... not yet created
Exception occurred:
File "C:\Python27\lib\site-packages\sphinx-1.1.3-py2.7.egg\sphinx\environment.
py", line 495, in find_files
['**/' + d for d in config.exclude_dirnames] +
TypeError: coercing to Unicode: need string or buffer, list found
The full traceback has been saved in c:\users\myusername\appdata\local\temp\sphinx-err-kmihxk.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
Either send bugs to the mailing list at <http://groups.google.com/group/sphinx-dev/>,
or report them in the tracker at <http://bitbucket.org/birkenfeld/sphinx/issues/>.
What am I doing wrong?
Is the syntax for exclude_patterns in the sphinx-build command line different than in the conf.py file?
Or is there a better way to build two different versions in one step?
My first thought was that this was a quoting issue, quoting being notoriously difficult to get right on the Windows command line. However, I wasn't able to come up with any combination of quoting that changed the behavior at all. (The problem is easy to replicate)
Of course it could still just be some quoting issue I'm not smart enough to figure out, but I suspect this is a Sphinx bug of some kind, and hope you will report it to the Sphinx developers.
In the meantime, here's an alternate solution:
Quoting from the Sphinx documentation:
There is a special object named tags available in the config file. It can be used to query and change the tags (see Including content based on tags). Use tags.has('tag') to query, tags.add('tag') and tags.remove('tag') to change.
This allows you to essentially pass flags into the conf.py file from the command line, and since the conf.py file is just Python, you can use if statements to set the value of exclude_patterns conditionally based on the tags you pass in.
For example, you could pass Sphinx options like:
set SPHINXOPTS=-t foradmins
to pass the "foradmins" tag, and then check for it in your conf.py like so:
exclude_patterns = blah
if tags.has('foradmins'):
    exclude_patterns = []
That should allow you to do what you want. Good Luck!
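To make that concrete, here's a sketch of the conf.py part, reusing the exclude pattern from your question and the 'foradmins' tag name from above (adjust both to taste):

# Exclude admin-only pages unless the build was started with "-t foradmins".
exclude_patterns = ['**/*admin*']
if tags.has('foradmins'):
    exclude_patterns = []

Your batch file would then set SPHINXOPTS=-t foradmins only for the admin build and leave SPHINXOPTS empty for the user build, so the exclusion applies only to the latter.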

Resources