I'm refactoring a function and I want to know every file that calls it. With aliases and imports, a simple grep would list other functions with the same name in different modules or would miss some calls.
I tried using mix xref but it doesn't work with functions, only modules (I'm using Elixir 1.12.1).
$ mix xref callers MySchema.changeset/2
** (Mix) xref callers MODULE expects a MODULE, got: MySchema.changeset/2
Is there a tool or a xref command to list callers of a function in Elixir?
There is a deprecated Mix.Tasks.Xref.calls/1 function, but it has been deprecated for a reason: compilation tracers are far more powerful.
You might set a tracer for {:remote_function, _, YourModule, :your_fun, your_arity} and simply IO.puts/2 from there.
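For instance, a minimal tracer sketch (Elixir 1.10+); CallerTracer and the MySchema.changeset/2 target are illustrative names, not a library API:

defmodule CallerTracer do
  # Match remote calls to the function of interest and print the call site.
  def trace({:remote_function, meta, MySchema, :changeset, 2}, env) do
    IO.puts("#{env.file}:#{meta[:line]} calls MySchema.changeset/2")
    :ok
  end

  # Ignore every other compilation event.
  def trace(_event, _env), do: :ok
end

Register it with Code.put_compiler_option(:tracers, [CallerTracer]) and force a recompile (mix compile --force). The tracer module itself must already be compiled before tracing starts, so keep it out of the code being traced.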
I'm moderately new to Common Lisp, but have extensive experience with other 'separate compilation' languages (think C/C++/FORTRAN and such).
I know how to do an ASDF system definition. I know how to separate stuff in packages. I'm using SBCL, by the way.
The question is this: what's the best practice for splitting code (large packages) between .lisp files? I mean, in C there are include files, while Lisp lives with the current image state. So with multiple files I need to handle dependencies or serial order in the system definition. But without something like forward declarations it's painful.
Simple example of what I want to do: I have, for example, two defstructs that are part of the same bigger data structure (say struct1 is a parent of some set of struct2). Some functions work on one, some on the other, and some use both.
So I would have: a packages.lisp, a fun1.lisp (with the first defstruct and related functions), a fun2.lisp (with the other defstruct and functions) and a funmix.lisp (with functions that use both). In an ideal world everything is sealed and compiling these in this order would be fine. As most of you know, in practice this almost never happens.
If I need to use struct2 functions from the struct1 ones I would need to either reorder or add a dependency. But then if there's some kind of back call (that can't be done with a closure) I would have struct1.lisp depending on struct2.lisp and vice versa, which is obviously not valid. So what? I could break the loop by putting the defstructs in a separate file (say, structs.lisp), but what if either struct's functions need to access the common functions in the third file? I would like to avoid style notes.
What's the common way to solve this, i.e. to keep loosely related code in the same file but still be able to interface with the other files? Is the correct solution to seal everything in one compilation unit (a single file)? Use a package for every file, with exports?
Lisp dependencies are simple, because in many cases, a Lisp implementation doesn't need to process the definition of something in order to compile its use.
Some exceptions to the rule are:
Macros: macros must be loaded in order to be expanded. There is a compile-time dependency between a file which uses macros and the file which defines them (see the sketch after this list).
Packages: a package foo must be defined in order to use symbols like foo:bar or foo::priv. If foo is defined by a defpackage form in some foo.lisp file, then that file has to be loaded (either in source or compiled form).
Constants: constants defined with defconstant should be seen before their use. Similar remarks apply to inline functions and compiler macros.
Any custom things in a "domain specific language" which enforces definition before use. E.g. if Whizbang Inference Engine needs rules to be defined when uses of the rules are compiled, you have to arrange for that.
For certain diagnostics to be suppressed, like calls to undefined functions, the defining and using files must be treated as a single compilation unit. (See below.)
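For instance, a small sketch of the macro case (file names and definitions are made up):

;; macros.lisp
(defmacro with-logging (&body body)
  `(progn (format t "entering~%") ,@body))

;; callers.lisp -- compile-time dependency: macros.lisp must be
;; loaded before this file is compiled, so WITH-LOGGING can expand.
(defun do-work ()
  (with-logging (+ 1 2)))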
All the above remarks also have implications for incremental recompilation.
When there is a dependency like the above between files, so that one is a prerequisite of the other, the dependent one must be recompiled whenever the prerequisite is touched.
How to split code into files is going to be influenced by all the usual things: cohesion, coupling and what have you. A Common-Lisp-specific reason to keep certain things together in one file is inlining: a call to a function which is in the same file as the caller may be inlined. If your program supports any in-service upgrade, the granularity of code loading is individual files. If some functions foo and bar should be independently redefinable, don't put them in the same file.
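As a sketch of the inlining point (names are made up):

;; HELPER may be inlined into CALLER because the compiler has already
;; seen its definition in the same file; DECLAIM INLINE requests it.
(declaim (inline helper))
(defun helper (x) (* x x))
(defun caller (y) (+ (helper y) 1))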
Now about compilation units. Suppose you have a file foo.lisp which defines a function called foo and bar.lisp which calls (foo). If you just compile bar.lisp, you will likely get a warning that an undefined function foo has been called. You could compile foo.lisp first and then load it, and then compile bar.lisp. But that will not work if there is a circular reference between the two: say foo.lisp also calls (bar) which bar.lisp defines.
In Common Lisp, you can defer such warnings to the end of a compilation unit, and what defines a compilation unit isn't a single file, but a dynamic scope established by a macro called with-compilation-unit. Simply put, if we do this:
(with-compilation-unit ()
  (compile-file "foo.lisp")  ;; contains (defun foo () (bar))
  (compile-file "bar.lisp")) ;; contains (defun bar () (foo))
If a compile-file isn't surrounded by with-compilation-unit then there is a compilation unit spanning that file. Otherwise, the outermost nesting of the with-compilation-unit macro determines the scope of what is in the compilation unit.
Warnings about undefined functions (and such) are deferred to the end of the compilation unit. So by putting foo.lisp and bar.lisp compilation into one unit, we suppress the warnings about either foo or bar not being defined and we can compile the two in any order.
Build systems use with-compilation-unit under the hood, as appropriate.
The compilation unit isn't about dependencies but diagnostics. Above, we don't have a compile time dependency. If we touch foo.lisp, bar.lisp doesn't have to be recompiled or vice versa.
By and large, Lisp codebases don't have a lot of hard dependencies among the files. Incremental compilation often means that just the affected files that were changed have to be recompiled. The C or C++ problem that everything has to be rebuilt because a core header file was touched is essentially nonexistent.
but what if
No matter how you first organize your code, if you change it significantly you are going to have to refactor. IMO there is no ideal way of grouping dependencies in advance.
As a rule of thumb it is generally safe to define generic functions first, then types, then actual methods, for example. For non-generic functions, you can cut circular dependencies by adding forward declarations:
(declaim (ftype function ...))
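For example (names are made up):

;; Forward-declare BAR so the call in FOO compiles without an
;; undefined-function style warning.
(declaim (ftype function bar))

(defun foo (x) (bar x)) ; BAR is not defined yet at this point
(defun bar (x) (1+ x))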
Having too much circular dependency is a bit of a code smell.
Is the correct solution to seal everything in a compilation unit
Yes: if you group the definitions in the same compilation unit (the same file), the file compiler will be able to silence the style notes until it reaches the end of the file; at that point it knows whether there are still missing references or all the cross-references have been resolved.
But then if there's some kind of back call (that can't be done with a closure)
If you have a specific example in mind please share, but typically you can define struct1 and its functions in a way that can be self-contained; maybe it can accept a map that binds event names to callbacks:
(make-struct-1 :callbacks (list :on-empty #'one-is-empty
                                :on-full #'one-is-full))
Similarly, struct2 can accept callbacks too (dependency injection), and the main struct ties them together, perhaps with closures.
Alternatively, you can design your data structures so that they signal conditions, and then in the caller code you intercept them to bind things together.
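A rough sketch of the conditions approach (every name here is made up):

(defstruct buf (items '()) (limit 3))

(define-condition buf-full (condition)
  ((buf :initarg :buf :reader buf-of)))

(defun buf-push (b item)
  ;; Signal instead of calling any struct2 code directly; if nobody
  ;; handles the condition, SIGNAL simply returns and we carry on.
  (when (>= (length (buf-items b)) (buf-limit b))
    (signal 'buf-full :buf b))
  (push item (buf-items b)))

;; The caller ties the two halves together:
(let ((b (make-buf)))
  (handler-bind ((buf-full (lambda (c)
                             (setf (buf-items (buf-of c)) '()))))
    (dotimes (i 5) (buf-push b i))))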
My understanding is that parallelization is included by default in a base Julia installation.
However, when I try to use it, I am getting errors that the functions and macros are not defined. For example:
nprocs()
Throws an error:
ERROR: UndefVarError: nprocs not defined
Stacktrace:
[1] top-level scope at none:0
Nowhere in any Julia documentation can I find mention of any packages that need to be included in order to use these functions. Am I missing something here?
I am using Julia version 1.0.5 inside the JuliaPro/Atom IDE
I figured it out. I'll leave this up for anyone else who is having this problem.
The solution is to import the Distributed package using:
using Distributed
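With that loaded, the functions resolve; a quick check in a fresh session:

julia> using Distributed

julia> nprocs()
1

julia> addprocs(2);

julia> nworkers()
2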
Why this is not included in the documentation I do not know.
Once you know that nprocs needs to be used, there are a couple of options for finding where it is defined.
A search through the documentation can help: https://docs.julialang.org/en/v1/search/?q=nprocs
Without leaving the Julia REPL, and even before nprocs is imported into your session, you can use apropos to find out more about it and determine that you need to import the Distributed package:
julia> apropos("nprocs")
Distributed.nprocs
Distributed.addprocs
Distributed.nworkers
Another way of using apropos is via the help REPL mode:
julia> # type `?` when the cursor is right after the prompt to enter help REPL mode
# note the use of double quotes to trigger "apropos" instead of a regular help query
help?> "nprocs"
Distributed.nprocs
Distributed.addprocs
Distributed.nworkers
The previous options work well in the case of nprocs because it is part of the standard library. JuliaHub is another option, which allows searching more broadly, across the entire Julia ecosystem. As an example, looking for nprocs in JuliaHub's "Doc Search" tool also returns relevant results: https://juliahub.com/ui/Documentation?q=nprocs
Is there a function that works like debug.getupvalues / debug.getupvalue from the Lua debug library that I could use instead? I won't be able to use either soon, and I depend on them slightly to keep parts of my code working.
Also, if I could get the source code for debug.getupvalue it would be a great help, as I could use that as a function instead of using the debug library any more, although I doubt it is written in Lua.
And before you say it: yes, I know the debug library is the most undependable library in all of Lua, but it made my code work and I would like to find a way to stop using it before it's too late.
The debug library is not meant to be used in production code (as opposed to tests and unusual debugging situations). There are three possible solutions. Two of them require changes to the code where the closures are defined; the other requires you to know C:
Add more closures in the same scope as the upvalues that will give you the access that you need.
Use tables instead of closures (see the sketch after this list).
Write a C library that makes use of lua_getupvalue.
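As a sketch of option 2 (all names are made up):

-- State hidden in an upvalue is only reachable via debug.getupvalue:
local function make_counter_closure()
  local count = 0
  return function() count = count + 1; return count end
end

-- State kept in a table stays inspectable without the debug library:
local function make_counter_table()
  local self = { count = 0 }
  function self.incr() self.count = self.count + 1; return self.count end
  return self
end

local c = make_counter_table()
c.incr()
print(c.count) --> 1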
To see the source code of debug.getupvalue, download Lua 5.3.5 and look at src/ldblib.c, line 260. lua_getupvalue is in src/lapi.c, line 1222.
Is there any way to autoload Hack type aliases? I've placed them in separate files on PSR-4-compliant paths, and although I understand they are Hack-only and aren't formally mentioned in PSR-0 or PSR-4, I figured one of the following would happen:
HHVM would expand type aliases to their base types, or
spl_autoload would treat the type as a class, function or interface name and execute the script, resolving the alias.
However, neither happens. At runtime, the method calls fail due to incompatibility with the type hints, i.e.:
Catchable fatal error: Argument passed to {method_name} must be an instance of {type_alias}, {concrete_type} given.
Edit: I should mention that I am specifically using Composer. I'm unsure if this is Composer-specific or not.
Yes, you can autoload types in HHVM. You need to use a class-map approach and the HH\autoload_set_paths function.
There is the hhvm-autoload package, which adds support for generating the necessary map as part of a Composer setup.
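For illustration, a hand-written map might look roughly like this; the exact map keys and the paths here are assumptions on my part, so check the function's documentation:

<?hh
// The 'type' section is what lets HHVM resolve type aliases through
// autoloading; 'class' is shown for comparison. Paths are relative
// to the root passed as the second argument.
HH\autoload_set_paths(
  array(
    'class' => array('mylib\widget' => 'src/Widget.php'),
    'type'  => array('mylib\widgetid' => 'src/WidgetId.php'),
  ),
  __DIR__.'/',
);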
I don't believe this is possible. PHP doesn't register type hints for autoloading. And it doesn't need to, because the only way to fulfill the type hint is to pass that class or a subclass, whose construction would have triggered the autoloader call. It is impossible, then, for a type hint to be unknown to the interpreter at the time it checks against it.
This is only a problem in Hack because type aliases introduce this possibility. To stay consistent with PHP, I expect the only viable solution of the two mentioned would be for HHVM to expand type aliases while compiling the bytecode.
I'm struggling with the last pieces of logic to make our Ada builder work as expected with variantdir. The problem is caused by the fact that the inflexible tools gnatbind and gnatlink don't allow the binder files to be placed in a directory other than the current one. This leaves me with two options:
Let gnatbind write the binder files to topdir and then let gnatlink pick them up from there. This may however cause race conditions if we allow simultaneous builds for different architectures and compiler versions, which we want to do.
Modify the calls to gnatbind and gnatlink to temporarily go down to the build directory, in our case build/$ARCH/src-path. I successfully fixed the gnatbind step, as it is explicitly called using an env.Execute from within the Ada builder. To fix the linking step I've modified the Program env using
env["LINKCOM"] = SCons.Action.Action(ada_linkcom)
where ada_linkcom is defined as
def ada_linkcom(source, target, env):
    ...
    return ret
where ret is a string describing what should be done in the shell. I need this to be a function because it contains somewhat complicated logic to convert paths from being relative to the top level to just containing their basenames.
This however fails with an error in scons-2.3.1/SCons/Executor.py on line 347 in function do_execute. Isn't env["LINKCOM"] allowed to be a function with ada_linkcom's signature?
No, it's not. You seem to think that 'env["LINKCOM"]' is what actually calls/executes the final build command, and that's not quite correct. Instead, environment variables like LINKCOM get expanded by the Executor/Builder for each specified Action, and are then executed.
You can have Python functions as Actions, and also use a so-called "generator" to create your Action strings on-the-fly. But you have to assign this Action to a Builder, and can't set it as an environment variable directly.
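As a sketch of the generator approach (the command line and builder name are illustrative, not your actual link logic):

import os
from SCons.Builder import Builder

def ada_link_generator(source, target, env, for_signature):
    # A generator returns the command string; SCons expands and runs it.
    srcs = " ".join(os.path.basename(str(s)) for s in source)
    return "gnatlink -o %s %s" % (target[0], srcs)

# Attach the generator to a Builder instead of setting env["LINKCOM"]:
env.Append(BUILDERS={"AdaProgram": Builder(generator=ada_link_generator)})
prog = env.AdaProgram("myprog", ["main.ali"])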
Please also have a look at the UserGuide ( http://www.scons.org/doc/production/HTML/scons-user.html ), especially section 18.4 "Builders That Execute Python Functions". Our basic guide for writing Builders and Tools might also prove to be helpful: http://www.scons.org/wiki/ToolsForFools