Type aliases in type hints are not preserved - python-sphinx

My code (reduced to just a few lines for this demo) looks like this:
from typing import List

AgentAssignment = List[int]

def assignment2str(assignment: AgentAssignment):
    pass
The produced html documentation has List[int] as the type hint. This seems to be a resolved issue (#6518), but I am still running into it in 2022 with Sphinx version 5.1.1 (and Python 3.8.2). What am I missing?

I am not sure whether Sphinx supports this yet, but you need to use an explicit PEP 613 TypeAlias annotation, introduced in Python 3.10, because otherwise the type resolver cannot differentiate between a normal variable assignment and a type alias. This is a generic Python solution to the type-alias problem, beyond the scope of Sphinx.
from typing import List, TypeAlias

AgentAssignment: TypeAlias = List[int]
P.S. I am having the same issue with Sphinx.

So, one needs to add at the beginning of the file (yes, before all other imports):
from __future__ import annotations
Then in conf.py:
autodoc_type_aliases = {'AgentAssignment': 'AgentAssignment'}
Not that this identity-transformation dictionary makes any sense to me, but it did the trick...
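For reference, a minimal sketch of how the pieces fit together (the module name agents.py is illustrative):
# agents.py
from __future__ import annotations  # must be the first statement, before all other imports

from typing import List

AgentAssignment = List[int]

def assignment2str(assignment: AgentAssignment) -> str:
    """Render an agent assignment as a comma-separated string."""
    return ", ".join(str(agent) for agent in assignment)

# conf.py
extensions = ["sphinx.ext.autodoc"]
autodoc_type_aliases = {"AgentAssignment": "AgentAssignment"}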

I also had this issue and was very confused why the accepted answer here (stackoverflow.com/a/73273330/1822018), which suggests adding your type aliases to the autodoc_type_aliases dictionary as explained in the documentation (https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#confval-autodoc_type_aliases), was not working for me. I solved my problem and am posting it here for the benefit of others.
In my case, I had also installed the Python package sphinx-autodoc-typehints, which extends/overrides certain Sphinx functionality; in particular, it appears to supplant the functionality of the autodoc_type_aliases dictionary. To anyone trying to debug this issue, I would suggest removing 'sphinx_autodoc_typehints' from the extensions list in your Sphinx conf.py file.
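For illustration, the relevant part of conf.py might look like this (the exact extension list will differ per project):
# conf.py
extensions = [
    "sphinx.ext.autodoc",
    # "sphinx_autodoc_typehints",  # comment out / remove while debugging autodoc_type_aliases
]
autodoc_type_aliases = {"AgentAssignment": "AgentAssignment"}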

Related

How to add a custom tmLanguage syntax to Sphinx/RST

Is there a method to import a tmLanguage.json into Sphinx to add support for a new/custom language for RST?
Not directly; if necessary you'll have to write a lexer for the new language in Python. I say "if necessary" because Sphinx's syntax highlighting is provided under the hood by Pygments, which supports a huge number of languages; you just need to turn support on in Sphinx using the highlight_language config value. The short names for all the various lexers are listed in the Pygments documentation.
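For example, assuming Pygments already ships a lexer whose short name is mylang (a placeholder), the default can be set in conf.py; individual blocks can still override it with the usual .. code-block:: directive:
# conf.py
highlight_language = "mylang"  # placeholder for the Pygments lexer short name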
If, somehow, your language doesn't have a lexer already, there are instructions on how to write your own. It's largely (but not entirely) a process of translating the Oniguruma regexes in the .tmLanguage.json file to Python-flavored ones.
One would hope that you'd contribute any new lexer back to the Pygments GitHub project, too.

Parallel processing in Julia throws errors

My understanding is that parallelization is included by default in a base Julia installation.
However, when I try to use it, I am getting errors that the functions and macros are not defined. For example:
nprocs()
Throws an error:
ERROR: UndefVarError: nprocs not defined
Stacktrace:
[1] top-level scope at none:0
Nowhere in any Julia documentation can I find mention of any packages that need to be included in order to use these functions. Am I missing something here?
I am using Julia version 1.0.5 inside the JuliaPro/Atom IDE
I figured it out. I'll leave this up for anyone else who is having this problem.
The solution is to import the Distributed package using:
using Distributed
Why this is not included in the documentation I do not know.
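A minimal sketch of what that looks like in the REPL (the worker count is arbitrary):
using Distributed   # standard library providing nprocs, addprocs, nworkers, pmap, @distributed, ...

addprocs(2)         # start two local worker processes
nprocs()            # returns 3: the master process plus the two workers
nworkers()          # returns 2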
Once you know that nprocs is the function you need, there are a couple of options for finding where it is defined.
A search through the documentation can help: https://docs.julialang.org/en/v1/search/?q=nprocs
Without leaving the Julia REPL, and even before nprocs gets imported into your session, you can use apropos to find out more about it and determine that you need to import the Distributed package:
julia> apropos("nprocs")
Distributed.nprocs
Distributed.addprocs
Distributed.nworkers
Another way of using apropos is via the help REPL mode:
julia> # type `?` when the cursor is right after the prompt to enter help REPL mode
# note the use of double quotes to trigger "apropos" instead of a regular help query
help?> "nprocs"
Distributed.nprocs
Distributed.addprocs
Distributed.nworkers
The previous options work well in the case of nprocs because it is part of the standard library. JuliaHub is another option, which allows searching more broadly, across the entire Julia ecosystem. As an example, looking for nprocs in JuliaHub's "Doc Search" tool also returns relevant results: https://juliahub.com/ui/Documentation?q=nprocs

Sourcing data into rstudio [duplicate]

This is meant to be a FAQ question, so please be as complete as possible. The answer is a community answer, so feel free to edit if you think something is missing.
This question was discussed and approved on meta.
I am using R and tried some.function but I got the following error message:
Error: could not find function "some.function"
This question comes up very regularly. When you get this type of error in R, how can you solve it?
There are a few things you should check (a short sketch putting them together follows this list):
Did you write the name of your function correctly? Names are case sensitive.
Did you install the package that contains the function? install.packages("thePackage") (this only needs to be done once)
Did you attach that package to the workspace?
require(thePackage) (and check its return value) or library(thePackage) (this should be done every time you start a new R session)
Are you using an older R version where this function didn't exist yet?
Are you using a different version of the specific package? This could be in either direction: functions are added and removed over time, and it's possible the code you're referencing is expecting a newer or older version of the package than what you have installed.
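Putting the first few checks together, a minimal sketch (thePackage and some.function are placeholders):
install.packages("thePackage")   # once per machine
library(thePackage)              # every new R session
exists("some.function")          # is the name visible now?
packageVersion("thePackage")     # which version of the package is installed?
getRversion()                    # which version of R is running?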
If you're not sure in which package that function is situated, you can do a few things.
If you're sure you installed and attached/loaded the right package, type help.search("some.function") or ??some.function to get an information box that can tell you in which package it is contained.
find and getAnywhere can also be used to locate functions.
If you have no clue about the package, you can use findFn in the sos package as explained in this answer.
RSiteSearch("some.function") or searching with rdocumentation or rseek are alternative ways to find the function.
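For example, with some.function as a placeholder name:
help.search("some.function")   # search the help pages of installed packages
??some.function                # shorthand for help.search()
find("some.function")          # which attached packages define it?
getAnywhere("some.function")   # also searches namespaces that are loaded but not attached
RSiteSearch("some.function")   # search R's online help resources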
Sometimes you need to use an older version of R but run code created for a newer version. Newly added functions (e.g. hasName in R 3.4.0) won't be found then. If you use an older R version and want to use a newer function, you can use the backports package to make such functions available. You can also find a list of functions that need to be backported in the git repo of backports. Keep in mind that R versions older than R 3.0.0 are incompatible with packages built for R 3.0.0 and later versions.
Another problem, in the presence of a NAMESPACE, is that you are trying to run an unexported function from package foo.
For example (contrived, I know, but):
> mod <- prcomp(USArrests, scale = TRUE)
> plot.prcomp(mod)
Error: could not find function "plot.prcomp"
Firstly, you shouldn't be calling S3 methods directly, but let's assume plot.prcomp was actually some useful internal function in package foo. Calling such a function, if you know what you are doing, requires the use of :::. You also need to know the namespace in which the function is found. Using getAnywhere() we find that the function is in package stats:
> getAnywhere(plot.prcomp)
A single object matching ‘plot.prcomp’ was found
It was found in the following places
registered S3 method for plot from namespace stats
namespace:stats
with value
function (x, main = deparse(substitute(x)), ...)
screeplot.default(x, main = main, ...)
<environment: namespace:stats>
So we can now call it directly using:
> stats:::plot.prcomp(mod)
I've used plot.prcomp just as an example to illustrate the purpose. In normal use you shouldn't be calling S3 methods like this. But as I said, if the function you want to call exists (it might be a hidden utility function for example), but is in a namespace, R will report that it can't find the function unless you tell it which namespace to look in.
Compare this to the following:
stats::plot.prcomp
The above fails because, while stats uses plot.prcomp, it is not exported from stats, as the error rightly tells us:
Error: 'plot.prcomp' is not an exported object from 'namespace:stats'
This is documented as follows:
pkg::name returns the value of the exported variable name in namespace pkg, whereas pkg:::name returns the value of the internal variable name.
I can usually resolve this problem when a computer is under my control, but it's more of a nuisance when working with a grid. When a grid is not homogeneous, not all libraries may be installed, and my experience has often been that a package wasn't installed because a dependency wasn't installed. To address this, I check the following:
Is Fortran installed? (Look for 'gfortran'.) This affects several major packages in R.
Is Java installed? Are the Java class paths correct?
Check that the package was installed by the admin and available for use by the appropriate user. Sometimes users will install packages in the wrong places or run without appropriate access to the right libraries. .libPaths() is a good check.
Check ldd results for R, to be sure about shared libraries.
It's good to periodically run a script that just loads every package needed and does some little test (see the sketch after this list). This catches package issues as early as possible in the workflow. It's akin to build testing or unit testing, except it's more like a smoke test to make sure that the very basic stuff works.
If packages can be stored in a network-accessible location, are they? If they cannot, is there a way to ensure consistent versions across the machines? (This may seem OT, but correct package installation includes availability of the right version.)
Is the package available for the given OS? Unfortunately, not all packages are available across platforms. This goes back to step 5. If possible, try to find a way to handle a different OS by switching to an appropriate flavor of a package or switch off the dependency in certain cases.
Having encountered this quite a bit, some of these steps become fairly routine. Although #7 might seem like a good starting point, these are listed in approximate order of the frequency that I use them.
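The smoke-test script mentioned above can be as small as this sketch (the package names are placeholders):
# smoke-test.R: load every required package and fail loudly if one is missing
required <- c("data.table", "rJava", "ggplot2")   # placeholder package list
ok <- vapply(required, requireNamespace, logical(1), quietly = TRUE)
if (any(!ok)) stop("Missing packages: ", paste(required[!ok], collapse = ", "))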
If this occurs while you check your package (R CMD check), take a look at your NAMESPACE.
You can solve this by adding the following statement to the NAMESPACE:
exportPattern("^[^\\.]")
This exports everything that doesn't start with a dot ("."). This allows you to have your hidden functions, starting with a dot:
.myHiddenFunction <- function(x) cat("my hidden function")
I had the error
Error: could not find function some.function
happen when doing R CMD check of a package I was making with RStudio. I found adding
exportPattern(".")
to the NAMESPACE file did the trick. As a side note, I had initially configured RStudio to use Roxygen to generate the documentation, and selected the option where Roxygen writes the NAMESPACE file for me, which kept erasing my edits. So, in my case I unchecked NAMESPACE in the Roxygen configuration and added exportPattern(".") to NAMESPACE to solve this error.
This error can occur even if the name of the function is valid, when some mandatory arguments are missing (i.e. you did not provide enough arguments).
I got this in an Rcpp context, where I wrote a C++ function with optional arguments and did not provide those arguments from R. It turned out that optional arguments on the C++ side were seen as mandatory by R. As a result, R could not find a matching function for the correct name but an incorrect number of arguments.
Rcpp function: SEXP RcppFunction(SEXP arg1, int arg2 = 0) {}
R calls:
RcppFunction(0) raises the error
RcppFunction(0, 0) does not
Rdocumentation.org has a very handy search function that, among other things, lets you find functions from all the packages on CRAN, as well as from Bioconductor and GitHub.
If you are using parallelMap, you'll need to export custom functions to the slave jobs, otherwise you get a "could not find function" error.
If you set a non-missing level in parallelStart, the same argument should be passed to parallelExport, or else you get the same error. So this should be strictly followed:
parallelStart(mode = "<your mode here>", N, level = "<task.level>")
parallelExport("<myfun>", level = "<task.level>")
You may be able to fix this error by namespacing the function call with ::, changing
comparison.cloud(colors = c("red", "green"), max.words = 100)
to
wordcloud::comparison.cloud(colors = c("red", "green"), max.words = 100)
I got the same error. I was running RStudio version 0.99xxx; I checked for updates from the Help menu and updated to 1.0x, and the error no longer appeared.
So, simple solution: just update your RStudio.

Top-level class documentation

RuboCop outputs messages like:
app/controllers/welcome_controller.rb:1:1: C: Missing top-level class documentation comment.
class WelcomeController < ApplicationController
^^^^^
I wonder what top-level class documentation looks like. It's not just a comment, is it? It needs to have a special format, but which one?
That said, a simple comment like the following will do nicely:
# This shiny device polishes bared foos
class FooBarPolisher
...
I ended up here looking for a way to disable this check. If that's your case, put
Documentation:
Enabled: false
in your .rubocop.yml file.
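Alternatively, if you want to keep the check on globally and silence it only in specific places, RuboCop also understands inline disable comments; assuming the cop's full name is Style/Documentation, that looks like:
# rubocop:disable Style/Documentation
class WelcomeController < ApplicationController
end
# rubocop:enable Style/Documentation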
From the RuboCop documentation:
RuboCop is a Ruby static code analyzer. Out of the box it will enforce many of the guidelines outlined in the community Ruby Style Guide.
The Ruby Style Guide "comment" section doesn't use the phrase "Missing top-level class documentation comment" but from reading the guide section on comments, you can quickly infer from the examples that commenting classes and modules is recommended.
The reason is that, when using RDoc, the comments on classes/modules are used to generate the reference documentation for the code, something that is important whether you're writing code for yourself, for a team, or for general release.

Do you know an alternative ctags generator for Ruby

Exuberant Ctags does not work well with Ruby; you can see there are many hacks in the ruby.c code and it basically fails to recognize many cases. One of the most important is this bit:
class SomeModule::SomeClass
end
Ctags generates:
SomeModule someclass.rb /^class SomeModule::SomeClass$/;" c
which is wrong. The correct and expected entry is:
SomeClass someclass.rb /^class SomeModule::SomeClass$/;" c
This is very limiting. There are some patches for ctags available which do not work, e.g. https://github.com/xtao/overlay/blob/master/dev-util/ctags/files/ctags-5.5.4-ruby-classes.patch, but looking at the ctags Ruby code, it really needs a complete rewrite.
So I have been playing with another option, https://github.com/rdoc/rdoc-tags, which works better, but it is slow. I mean really SLOW. Generating tags on my project takes 2 seconds with ctags but one hour with this tool. Really.
I found one old project that parsed Ruby on its own and generated tags, but it was only for Ruby 1.8. It was slower than ctags, but not that bad.
So I am searching for some alternatives. Do you know about any other working ruby ctags generators which give you proper output and are fast?
Thanks!
Edit: I have found a very nice project that works with Ruby 1.9+ and is accurate and fast. I recommend it:
https://github.com/tmm1/ripper-tags
The ripper-tags effort solves everything described here. It is based on the official Ruby parser and is also quite fast. https://github.com/tmm1/ripper-tags
gem install ripper-tags
cd your_project/
ripper-tags -R
It also supports Emacs.
Exuberant ctags out of the box doesn’t do a number of useful things:
It doesn’t deal with:
module A::B
It doesn’t tag (at least some of) the “operator” methods like ‘==’
It doesn't support qualified tags, --type=+
It doesn’t output tags for constants or attributes.
Patch available, but it is only for version 5.5 and does not work anymore.
Other projects:
https://github.com/tmm1/ripper-tags (best option for Ruby 1.9+)
https://rubygems.org/gems/rdoc-tags (very slow but works with 1.8)
Source
Add the following to your ~/.ctags:
--regex-ruby=/(^|;)[ \t]*(class|module)[ \t]+([A-Z][[:alnum:]_]+(::[A-Z][[:alnum:]_]+)+)/\3/c,class,constant/
So you can:
deal with: module A::B
See more here: https://github.com/bltavares/dot-files/blob/master/ctags
A patch is available as of 2013-02
https://github.com/fishman/ctags (ctags patch for Ruby, including rspec)
the rspec tag generator will not properly recognize describe blocks that start with a symbol (:some-method), but other than that, it's great.
There is also https://github.com/eapache/starscope
It doesn't support the extended tag format (yet) but it does other things such as exporting cscope databases.
