Code Formatting for the ReadTheDocs System - magento

I'm using Read the Docs for the first time. I'm writing docs for a command line system, and my "code samples" include a log of shell output. The shell output ends up looking like this
That is -- the service (or my use of it?) is trying to format this example of running a shell command as though it were source code, and is treating magento2:generate as though it were a class constant.
Can I control which code blocks get source-code formatting in Read the Docs? I've tried setting no base language in the admin, but it doesn't seem to have an effect. Or is this something I need to control at the MkDocs or Sphinx level? (Read the Docs works by turning your Markdown or Sphinx files into nice HTML files.) Or something else? Or am I out of luck?

You need to define the "language" of the code block in your source document. Both Sphinx and MkDocs will attempt to guess the language, which often is good enough. However, on occasion, it will guess incorrectly and result in weird highlighting. To avoid that, both implementations provide a mechanism to manually define the language of each code block.
Sphinx
For Sphinx, you can use the code-block directive and include the "language" of the block:
.. code-block:: console

   Your shell commands go here
In the above example, I used console for a shell session. The alias shell-session would work as well. Note that the alternative lexer bash (and its aliases sh, ksh, zsh, and shell) would not strictly be appropriate, as they are for a shell script, whereas you are displaying both the command and the output in a shell session.
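For instance, a console block can carry the prompt, the command, and its output, which is exactly what the console lexer is meant to highlight. The command and output below are generic placeholders, not taken from the question:
.. code-block:: console

   $ some-command --with-options
   ...output of the command...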
A complete list of supported language codes can be found in the Pygments documentation.
MkDocs
MkDocs makes use of the Fenced Code Block Markdown extension to define the "language" of a code block:
```shell
Your shell commands go here
```
As MkDocs uses highlight.js rather than Pygments, the list of supported languages is different. Therefore, I used shell (for a shell-session) in the above example.
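The same kind of session can be shown in a fenced block as well; again, the command and output here are just placeholders:
```shell
$ some-command --with-options
...output of the command...
```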

Related

julia: static analyzer/linter for the command line

I'm looking for a shell command-line tool to lint Julia scripts (a static analyzer), e.g.
local:~ $ linter(myjuliascript.jl)
which would print its results to the terminal, either as
// messages inlined with the text of myjuliascript.jl, or
...
...
// messages indicating line numbers of myjuliascript.jl
...
...
I found https://github.com/tonyhffong/Lint.jl, but it does not look promising (it does not compile).
Question: do you know of any good-quality command-line tool for linting Julia scripts?
I'd rather avoid IDE plugins, since I'm a little tired of IDEs; they are often too much kung-fu fighting for too little benefit. E.g. I tried to get the VS Code Julia linter working with no luck; the VS Code Julia linter doesn't work (on Mac).
The closest thing currently available appears to be https://github.com/julia-vscode/StaticLint.jl. While you could technically call this from the command line, if so desired, using julia -e, the interface would not seem to be very conducive to that sort of usage.
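To illustrate, a command-line invocation would have to look roughly like the sketch below; lint_file is a hypothetical placeholder rather than a documented StaticLint.jl entry point, and the real package needs more setup than a one-liner:
$ julia -e 'using StaticLint; lint_file("myjuliascript.jl")'   # lint_file is hypothetical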

Why does Scala use a reversed shebang (!#) instead of just setting interpreter to scala

The scala documentation shows that the way to create a scala script is like this:
#!/bin/sh
exec scala "$0" "$@"
!#
/* Script here */
I know that this executes scala with the name of the script file and the arguments passed to it, and that the scala command apparently knows to read a file that starts like this and to ignore everything up to the reversed shebang !#.
My question is: is there any reason why I should use this (rather verbose) format for a scala script, rather than just:
#!/bin/env scala
/* Script here */
This, as far as I can tell from a quick test, does exactly the same thing, but is less verbose.
How old is the documentation? Usually, this sort of thing (often referred to as 'the exec hack') was recommended before /bin/env was common, and this was the best way to get the functionality. Note that /usr/bin/env is more common than /bin/env, and ought to be used instead.
Note that it's /usr/bin/env, not /bin/env.
There are no benefits to using an intermediate shell instead of /usr/bin/env, except running in some rare antique Unix variants where env isn't in /usr/bin. Well, technically SCO still exists, but does Scala even run there?
However, the advantage of the shell variant is that it gives an opportunity to tune what is executed, for example to add elements to PATH or CLASSPATH, or to add options such as -savecompiled to the interpreter (as shown in the manual). This may be why the documentation suggests the shell form.
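As a sketch of that variant (assuming the -savecompiled runner option mentioned in the manual; the script body is just a placeholder), the shell header gives you a place to adjust things before handing control to scala:
#!/bin/sh
# the header runs as an ordinary shell script, so PATH, CLASSPATH or runner
# options can be tweaked here before exec'ing scala on this same file
exec scala -savecompiled "$0" "$@"
!#
println("Hello, " + args.mkString(" "))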
I am not on the Scala development team and I don't know what the historical motivation for the Scala documentation was.
Scala did not always support /usr/bin/env. No particular reason for it, just, I imagine, the person who wrote the shell scripting support was not familiar with that syntax, back in the mid 00's. The documentation followed what was supported, and I added /usr/bin/env support at some point (iirc), but never bothered changing the documentation, it would seem.

TeX compiler in ruby

I'm looking for a gem that allows compiling TeX files (XeLaTeX or just LaTeX) into PDF. I don't need any templating or partial rendering, just a simple compiler. Are there any bindings for latex2pdf or something?
Are you looking for a TeX compiler written in Ruby, or a Ruby script that calls LaTeX?
If you are looking for the second one:
http://rubygems.org/gems/rake4latex
It defines a rake task to generate a PDF based on TeX sources. It checks how many TeX runs are needed; makeindex, bibtex, etc. are run if required.
Supports splitindex, gloss...
Can be used with LaTeX, pdfLaTeX, XeLaTeX...
Can't you just call the command line directly with backtick notation?
`latex2pdf <options>`
TeX's syntax is so horribly flexible that you will actually need TeX (or one of its variants) to interpret TeX files in general.
So actually calling the command line pdflatex or xelatex (or any wrapper around this, like in peakxu's answer) is the best bet here.
I have no idea if someone packaged a TeX distribution (like TeX Live) into a Ruby Gem, I suppose not.
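For reference, the kind of command such a wrapper ends up running looks roughly like this (document.tex is a placeholder; a second run is commonly needed to resolve cross-references):
$ pdflatex -interaction=nonstopmode -halt-on-error document.tex
$ pdflatex -interaction=nonstopmode -halt-on-error document.tex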

How does bash tab completion work?

I have been spending a lot of time in the shell lately and I'm wondering how tab autocompletion works. What's the mechanism behind it? How does bash know the contents of every directory?
There are two parts to the autocompletion:
The readline library, as already mentioned by fixje, manages the command line editing and calls back into bash when Tab is pressed to enable completion. Bash then provides (see the next point) a list of possible completions, and readline inserts as many characters as are unambiguously identified by the characters already typed. (You can configure the readline library quite extensively; see the section Command Line Editing of the Bash manual for details.)
Bash itself has the built-in complete to define a completion mechanism for individual commands. If nothing is defined for the current command, it uses completion by file name (using opendir/readdir, as Ignacio said).
How to define your own completions is described in the section Programmable Completion. In short, complete «options» «command» defines the completion for some command. For example, complete -u su says: when completing an argument for the su command, search the users of the current system. If this is more complicated than the normal options can cover (e.g. different completions depending on the argument index, or on previous arguments), you can use -F function, which will invoke a shell function to generate the list of possible completions. (This is used, for example, for the git completion, which is very complicated, depending on the subcommand and sometimes on the options given, and sometimes using names of branches, which are not something bash itself knows about.)
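As a minimal sketch (the deploy command and its sub-commands are made up for illustration), a -F completion looks like this:
_deploy_completions() {
    # the word currently being completed
    local cur=${COMP_WORDS[COMP_CWORD]}
    # offer the fixed set of sub-commands matching what has been typed so far
    COMPREPLY=( $(compgen -W "staging production rollback" -- "$cur") )
}
complete -F _deploy_completions deploy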
You can list the existing completions defined in your current bash environment simply by running complete, to get an impression of what is possible. If you have the bash-completion package installed (or whatever it is named on your system), completions for a lot of commands are provided, and, as Wrikken said, /etc/bash_completion contains a bash script which is often executed at shell startup to configure this. Additional custom completion scripts may be placed in /etc/bash_completion.d; those are all sourced from /etc/bash_completion.
If you are interested in the basics:
Bash uses readline, which provides history and basic completion. You could inspect the source if you want a detailed understanding.
Furthermore, you can use readline to build your own CLI interfaces with completion.
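One small way to see readline at work from a script of your own is bash's read -e, which hands the prompt to readline (the prompt text here is made up):
# -e enables readline line editing (and its default filename completion)
read -e -p "path> " target
echo "You typed: $target"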

Compilers for shell scripts

Do you know if there's any tool for compiling bash scripts?
It doesn't matter if that tool is just a translator (for example, something that converts a bash script to a C program), as long as the translated result can be compiled.
I'm looking for something like shc (it's just an example -- I know that shc doesn't work as a compiler). Are there any other similar tools?
A Google search brings up CCsh, but it will set you back $50 per machine for a license.
The documentation says that CCsh compiles Bourne Shell (not bash ...) scripts to C code and that it understands how to replicate the functionality of 50 odd standard commands avoiding the need to fork them.
But CCsh is not open source, so if it doesn't do what you need (or expect) you won't be able to look at the source code to figure out why.
I don't think you're going to find anything, because you can't really "compile" a shell script. You could write a simple script that converts all lines to calls to system(3) and then "compile" that as a C program, but this wouldn't give a major performance boost over anything you're currently using, and it might not handle variables correctly. Don't do this.
The problem with "compiling" a shell script is that shell scripts just call external programs.
In theory you could actually get a good performance boost.
Think of all the
if [ x"$MYVAR" == x"TheResult" ]; then echo "TheResult Happened"; fi
(note the invocation of test and then echo, as well as the interpretation work that needs to be done)
which could be replaced by
if ( !strcmp(myvar, "TheResult") ) printf("TheResult Happened");
In C: no process launching, no having to do path searching. Lots of goodness.
