Implementation details of Scheme library procedures?

As per my understanding, Scheme procedures like map, apply, append, etc. are written in Scheme itself. Is there an easy way to see the implementation of these procedures right inside the REPL?

I do not believe there is a standard way to dump the source code of a procedure, but a lot of the list functions are defined here, and you can look through the source code for your implementation to see the rest. Note that apply is probably a primitive, though.
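To get a feel for what such a definition looks like, here is a rough sketch (not taken from any particular implementation) of how a single-list map could be written in Scheme itself; real implementations also accept multiple list arguments and do proper error checking:

    ;; simplified map over a single list
    (define (my-map f lst)
      (if (null? lst)
          '()
          (cons (f (car lst))
                (my-map f (cdr lst)))))

    (my-map (lambda (x) (* x x)) '(1 2 3))  ; => (1 4 9)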

How to record scheme session in a file?

I'm trying to record my session while writing some Scheme code, but I don't know the right way to record the session while evaluating expressions.
R5RS
R5RS has an optional procedure transcript-on that takes a file name and outputs the interaction to that file until transcript-off is called. (Thanks to @molbdnilo for pointing this out in a comment.)
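Assuming your implementation provides these optional procedures, a session might look like this:

    (transcript-on "session.log")   ; start copying the interaction to session.log
    (+ 1 2)                         ; everything typed and printed is recorded
    (transcript-off)                ; stop recording and close the transcript file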
R6RS and R7RS
This is not supported by these reports. Even (scheme-report-environment 5) is specified not to contain the optional procedures load, interaction-environment, transcript-on, transcript-off, and char-ready?.
Implementation lock-ins
Individual implementations might include such a feature; if you just need it for your chosen implementation, you'll have to read its documentation to find it. I guess this is for tooling rather than production code, so using implementation-specific features isn't as bad as using non-standard Scheme forms.
Roll your own
You can write your own REPL that does what you want, with whatever file output you choose, and it would work the same across all implementations.
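Here is a minimal sketch of such a recording REPL, assuming eval and interaction-environment are available (both are optional in the reports, so this still depends a little on your implementation):

    (define (recording-repl filename)
      (let ((log (open-output-file filename)))
        (let loop ()
          (display "> ")
          (let ((expr (read)))
            (if (eof-object? expr)
                (close-output-port log)
                (let ((result (eval expr (interaction-environment))))
                  ;; echo the input and the result to the log file
                  (write expr log) (newline log)
                  (write result log) (newline log)
                  ;; and show the result on the console as usual
                  (write result) (newline)
                  (loop)))))))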

Source code documentation in Racket (Scheme)

Is it possible to write documentation in source files, like in Common Lisp or Go, for example, and extract it from the source files? Or does everybody use Scribble to document their code?
The short answer is you can write in-source documentation by using scribble/srcdoc.
Unlike the other languages you mentioned, these aren't "doc strings":
Although you can write plain text, you have full Racket at-expressions and may use scribble/manual forms and functions.
Not only does this allow for "richer" documentation, but the documentation also goes into its own documentation submodule, similar to how you can put tests into test submodules. This means the documentation and tests add no runtime overhead.
You do need one .scrbl file, in which you use scribble/extract to include the documentation submodule(s). However you probably want such a file, anyway, for non-function-specific documentation (topics such as introduction, installation, or "user's guide" prose as opposed to "reference" style documentation).
Of course you can define your own syntax to wrap scribble/srcdoc. For example, in one project I defined a little define/doc macro, which expands into proc-doc/names as well as a (module+ test ___) form. That way, doc examples can also be used as unit tests.
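As a rough illustration (the names here are made up, and you should check the current scribble/srcdoc documentation for the exact grammar of proc-doc/names), a module using in-source documentation might look something like this:

    #lang at-exp racket
    (require scribble/srcdoc
             (for-doc racket/base scribble/manual))

    (provide
     (proc-doc/names
      positive-integer?
      (-> any/c boolean?)
      (v)
      @{Returns @racket[#t] when @racket[v] is an exact positive integer.}))

    (define (positive-integer? v)
      (and (exact-integer? v) (positive? v)))

A companion .scrbl file can then pull this prose in via scribble/extract's include-extracted.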
How Racket handles in-source documentation intersects a few interesting aspects of Racket:
Submodules let you define things like "test-time" and "doc-time" as well as run-time.
At-expressions are a different syntax for s-expressions, especially good when writing text (a small example follows this list).
Scribble is an example of using a custom language -- documentation-as-a-program -- demonstrating Racket's ability to be not just a programming language, but a programming-language programming language.
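For instance, here is a tiny made-up illustration of how the at-expression reader maps onto ordinary s-expressions:

    #lang at-exp racket/base
    (define (shout . strs)
      (string-upcase (apply string-append strs)))

    @shout{hello, world}    ; reads as (shout "hello, world") => "HELLO, WORLD"
    (shout "hello, world")  ; the same call written as a plain s-expression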

Java Bytecode manipulation libraries

I am starting to work on a project and for one of the tasks I need to analyze the source code in order to gather information about the classes and their methods. More specifically, for each method I need to know which internal attributes and external objects (references) it uses throughout the entire method body.
I discussed it with my supervisors and they think that Bytecode manipulation libraries is the way to go. I already looked at BCEL, ASM and Javassist but I'm not sure which one I need to use. Do they all provide access to the method body where I can see all the instructions and get the information I need?
Any advice would be appreciated. Thank you!
If you really "need to analyze the source code", then libraries that let you inspect the bytecode are not the way to go.
Otherwise, you really need to define your task precisely. Either you are about to analyze classes, regardless of whether you look at their source code or byte code, or you want to analyze source code and consider doing it by compiling first and then analyzing the compiled result. In the latter case, you have to compare the effort of both steps with alternative solutions, which may, e.g., involve direct source code analysis.
Parsing byte code is rather easy, easier than analyzing source code, which is one reason bytecode is produced prior to the execution of Java programs. To answer your concrete question: yes, all three libraries offer you a way to analyze the instructions and the associated information. Which one best fits your needs is a question beyond the scope of Stack Overflow.
Whether analyzing the byte code helps depends on your exact requirements. When it comes to field and method access, you can get most of it precisely with that approach; only inlined compile-time constants lose their origins. When it comes to type use, you have to consider that not every source code artifact has a counterpart in the byte code: e.g. widening casts produce no actual code, and local variables usually don't have a declared type (debugging information aside), only an implied type that depends on how they are actually used. They also carry no information about generics, unless debugging information has been included.

Scala dynamic class management

I would like to know if the following is possible in Scala (though I think the question also applies to Java):
Create a Scala file dynamically (ok, no problem here)
Compile it (I don't think this would be a real problem)
Load/Unload the new class dynamically
Aside from knowing whether dynamic code loading/reloading is possible (it's possible in Java, so I think it's feasible in Scala too), I would also like to know the implications of this in terms of performance degradation (I could have many, many classes, with no name clashes, but really a lot of them!).
TIA!
P.S.: I know other questions about class loading in Scala exist, but I haven't been able to find an answer about performance!
Yes, everything you want to do is certainly possible. You might like to take a look at ScalaMock, which is an example of creating Scala source code dynamically, and at SBT, which is an example of calling the compiler from code. And then there are many different systems that load classes dynamically - look at the documentation for loadLibrary as a starting point.
But, depending on what you want to achieve, you might like to look at Scala Macros instead. They provide the same kind of flexibility as you would get by generating source code and then compiling it, but without many of the downsides of that approach. The original version of ScalaMock used to work by generating source code, but I'm in the process of moving to using macros instead.
It's all possible in Scala, as is clearly demonstrated by the REPL. It's even going to be relatively easy with Scala 2.10.

How to achieve a recursive deftype

I'm curious as to how to do a Clojure deftype that contains a reference to itself, e.g.
(deftype BinaryTree [^BinaryTree left ^BinaryTree right])
This doesn't work... however I see no intrinsic reason why it shouldn't be possible since the underlying Java class is perfectly capable of referring to itself.
What am I doing wrong here?
Mike.
Currently, ^Class hints on fields (as opposed to ^primitive hints) are discarded, so there's nothing to gain from adding them. This may change in the future.
However, auto-reference in a type definition (e.g. in method bodies, not in fields) somewhat works, but the implementation is a bit of a hack. There's little incentive to fix auto-reference in the current Java compiler, given the promised rewrite of the compiler in Clojure.
