I am using CLISP 2.49 on Windows 7. I start the command window, navigate to the directory with the .lisp file, run clisp, and try to load the file. I get the error "there is no package with name C". The C here refers to drive C, since the absolute path of the file starts with C:/../../lispFile. I have also tried loading the file in Allegro CL but got the same error.
EDIT:
I have identified that the line of code that was causing the error message is:
(defparameter c:\\workdir\\aima\\ (truename "~/public_html/code/");
"The root directory where the code is stored.")
I am not sure if the syntax is incorrect.
Solved: I figured out what I did wrong. I was given instructions to modify the Lisp file but misunderstood them and replaced the wrong part of the line. Here is the corrected line of code:
(defparameter *aima-root* (truename "c:\\workdir\\aima\\");
"The root directory where the code is stored.")
Note that one can also compute the directory during load time:
(defparameter *aima-root*
  (when *load-pathname*
    (make-pathname :defaults *load-pathname*
                   :name nil
                   :type nil))
  "The root directory where the code is stored.")
*load-pathname* is a standard Common Lisp variable that is bound during loading to a pathname similar to the one passed to the load function; thus it points to the file being loaded. We then construct a new pathname whose defaults are taken from the load pathname and whose name and type components are nil.
Thus you can set the *aima-root* variable from that computation, and whenever you load the file the correct directory will be computed.
There are two Common Lisp variables bound during loading: *load-pathname* and *load-truename*. The latter is the real physical pathname of the file. Usually I prefer *load-pathname*, which need not be related to the physical pathname structure. Since the code here uses the function truename, it might be necessary to use *load-truename* instead. Common Lisp implementations often record the location where functions and other things are defined by storing the pathname. Finding the file later is sometimes easier with a pathname than with a truename, because a pathname can provide a device/machine-independent indirection using logical pathnames.
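For completeness, here is a variant of the same idea using *load-truename* (a sketch along the same lines; whether it is needed depends on how the rest of the code compares pathnames):
(defparameter *aima-root*
  (when *load-truename*
    (make-pathname :defaults *load-truename*
                   :name nil
                   :type nil))
  "The root directory where the code is stored.")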
Related
In Scheme R7RS there is both a load and include form.
Include is described as:
Semantics: Both include and include-ci take one or more filenames expressed as string literals, apply an implementation-specific algorithm to find corresponding files, read the contents of the files in the specified order as if by repeated applications of read, and effectively replace the include or include-ci expression with a begin expression containing what was read from the files. The difference between the two is that include-ci reads each file as if it began with the #!fold-case directive, while include does not. Note: Implementations are encouraged to search for files in the directory which contains the including file, and to provide a way for users to specify other directories to search.
Load is described as:
An implementation-dependent operation is used to transform filename into the name of an existing file containing Scheme source code. The load procedure reads expressions and definitions from the file and evaluates them sequentially in the environment specified by environment-specifier. If environment-specifier is omitted, (interaction-environment) is assumed. It is unspecified whether the results of the expressions are printed. The load procedure does not affect the values returned by current-input-port and current-output-port. It returns an unspecified value. Rationale: For portability, load must operate on source files. Its operation on other kinds of files necessarily varies among implementations.
What is the rationale for the two forms? I assume it is historic. Is there any important semantic difference between the two forms? I see that load can optionally take an environment specifier and include doesn't have that, and include-ci has no direct equivalent using load. But comparing load and include alone, what is the difference, and is it important?
I think the critical difference is that include is syntax (or, in traditional Lisp terms, a macro) while load is a function. In traditional Lisp terms (there will be a much more formal definition of this in Scheme terms, which I am not competent to give), this means that include does its work at macro-expansion time, while load does its work at evaluation time. These times can be very different for an implementation which has a file compiler: macro-expansion happens during compilation of files, while evaluation happens only much later, when the compiled files are loaded.
So, if we consider two files, f1.scm containing
(define foo 1)
(include "f2.scm")
and f2.scm containing
(define bar 2)
then if you load or compile f1.scm, it is exactly the same as if you had loaded or compiled a file fe.scm which contained:
(define foo 1)
(begin
  (define bar 2))
which in turn is the same as if fe.scm contained:
(define foo 1)
(define bar 2)
In particular this inclusion of the files happens at macro-expansion time, which happens when the compiler runs: the object file (fasl file) produced by the compiler will include compiled definitions of foo and bar, and will not in any way depend on f2.scm or its compiled equivalent existing.
Now consider f3.scm containing:
(define foo 1)
(load "f2")
(note I have assumed that (load "f2") (as opposed to (load "f2.scm")) loads the compiled file if it can find it, and the source file if it can't: I think this is implementation-dependent).
Loading the source of this file will do the same thing as loading f1.scm: it will cause foo and bar to be defined. But compiling this file will not: it will produce a compiled file which, when it is later loaded, will try to load either the source or compiled version of f2.scm. If that file exists at load time, then it will be loaded and the effect will be the same as in the include case. If it does not exist at load time, bad things will happen. Compiling f3.scm will not cause the definitions in f2.scm to be compiled.
Depending on your background it might be worth comparing this to, say, C-family languages. What include does is what #include does: it splices in source files as they are read, and in C (as in many Scheme/Lisp systems) this happens as the files are compiled. What load does is to load code at runtime, which, in C, you would need to do by invoking the dynamic linker or something.
Historically, Lisp implementations did not offer module systems.
Large programs used load in order to run a set of instructions: the load function works like a REPL script, reading S-expressions from a file one by one and passing them to eval.
Include, on the other hand, is used to inline the code read from a file into your code. It does not evaluate the code.
...replace the include or include-ci expression with a begin expression containing what was read from the files
The added 'begin' prepares the code read from the file to be evaluated sequentially.
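To make that concrete, here is a minimal sketch of what such a load amounts to (naive-load is an illustrative name, not a standard procedure; it assumes the R7RS (scheme base), (scheme file), (scheme read) and (scheme eval) libraries):
(define (naive-load filename env)
  ;; Read forms from the file one at a time and evaluate each in env,
  ;; roughly what a traditional load does.
  (call-with-input-file filename
    (lambda (port)
      (let loop ((form (read port)))
        (unless (eof-object? form)
          (eval form env)
          (loop (read port)))))))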
Sources: question quotes, Racket docs
I have seen some references and tutorials about the commands of WinDbg.
Some of them look like lm, some like .echo, some like !running, and some like nt!_PDB.
What is the difference between these categories?
xxx
.xxx
!xxx
xxx!yyy
?
They look so confusing.
There are built-in commands, meta commands (dot commands) and extension commands (bang commands).
My personal opinion is that you needn't care too much about the difference between built-in commands and meta commands, since there are enough examples where those definitions do not match properly. It's sufficient to know that they are always there and don't need an extension to be loaded.
Good examples for built-in commands, which are mainly about controlling and getting information from the debugging target:
g - go
k - call stack
~ - list threads
Examples where IMHO this definition does not really match:
version - show version of the debugger
vercommand - show command line that was used to start the debugger
n - set number base
Good examples for meta commands, which are intended to affect only the debugger, not the target:
.cls - clear screen
.chain - display loaded extensions
.effmach - change behavior of the debugger regarding the architecture
.prefer_dml - change output format
Examples where IMHO this definition does not really match:
.lastevent - show last exception or event that occurred (in the target)
.ttime - display thread times (of the target)
.call - call a function (in the target)
.dvalloc - allocate memory (in the target)
However, it's good to understand that extension commands are different, especially because the same command may produce different output depending on which extension is loaded or appears first in the extension list, and because you can affect that order (e.g. with .load, .unload, .setdll). Besides the simple form !command, note that there is also the !extension.command syntax to specify the extension explicitly; I'll use it in the example below. (There's even !c:\path\to\extension.command.)
The following example of a collision of extension commands comes from a kernel debugging session, where one !heap gives no output and the other obviously needs a parameter to work:
0: kd> !ext.heap
0: kd> !exts.heap
Invalid type information
The last format mentioned in your question (xxx!yyy) is not a command but a method or type reference, where xxx denotes the module (DLL) and yyy denotes the method or type name. Often this is also seen with an additional offset in bytes for locations inside the method (xxx!yyy+0xhhh).
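To put the four forms side by side, here are a few illustrative commands (the breakpoint example assumes a user-mode target that has kernel32 loaded):
g - built-in command: resume execution of the target
.reload - meta command: tell the debugger to reload symbols
!analyze -v - extension command: analyze the last exception or event in detail
bp kernel32!CreateFileW - module!symbol used as the address expression for a breakpoint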
See the following:
xxx - these are built in commands
.xxx - these are meta commands
!xxx - these are extension commands, so they call a command from an extension dll
xxx!yyy - this looks like the syntax for referencing an exported function from a dll:
<dll_name>!<method_name>
You may find the following useful: http://windbg.info/doc/1-common-cmds.html
I have a large source tree with a directory that has several files in it. I'd like gdb to break every time any of those functions are called, but don't want to have to specify every file. I've tried setting break /path/to/dir/:*, break /path/to/dir/*:*, rbreak /path/to/dir/.*:* but none of them catch any of the functions in that directory. How can I get gdb to do what I want?
There seems to be no direct way to do it:
rbreak file:. does not seem to accept directories, only files. Also note that you would want a dot (.), not an asterisk (*).
there seems to be no way to loop over symbols in the Python API, see https://stackoverflow.com/a/30032690/895245
The best workaround I've found is to loop over the files with the Python API, and then call rbreak with those files:
import os
import gdb

class RbreakDir(gdb.Command):
    """rbreak-dir DIR: run rbreak on every file found recursively under DIR."""
    def __init__(self):
        super().__init__(
            'rbreak-dir',
            gdb.COMMAND_BREAKPOINTS,
            gdb.COMPLETE_NONE,
            False
        )
    def invoke(self, arg, from_tty):
        # Walk the directory tree and rbreak on each file individually.
        for root, dirs, files in os.walk(arg):
            for basename in files:
                path = os.path.abspath(os.path.join(root, basename))
                gdb.execute('rbreak {}:.'.format(path), to_string=True)

RbreakDir()
Sample usage:
source a.py
rbreak-dir directory
This is ugly because of the gdb.execute call, but seems to work.
It is however too slow if you have a lot of files under the directory.
My test code is in my GitHub repo.
You could probably do this using the Python scripting that comes with modern versions of gdb. Two options: one is to list all the symbols and then, if they are in the required directory, create an instance of the Breakpoint class at the appropriate place to set the breakpoint. (Sorry, I can't recall offhand how to get a list of all the symbols, but I think you can do this.)
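As a minimal sketch of the Breakpoint-class idea (break_on_all is an illustrative helper and the function names are placeholders; how to obtain the list of symbols is left open, as noted above):
import gdb

def break_on_all(function_names):
    # Set a breakpoint on each named function via the Python API.
    for name in function_names:
        gdb.Breakpoint(name)

# Hypothetical usage from the gdb prompt after sourcing this file:
#   (gdb) python break_on_all(["foo", "bar"])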
You haven't said why exactly you need to do this, but depending on your use-case an alternative may be to use reversible debugging - i.e. let it crash, and then step backwards. You can use gdb's inbuilt reversible debugging, or for radically improved performance, see UndoDB (http://undo-software.com/)
I have a clojure program that at some point executes a function called db-rebuild-files-table.
This function takes a directory filename as a single string argument and calls a recursive function that descends into the directory file tree, extracts certain data from the files there and logs each file in a mysql database. The end result of this command is a "files" table populated by all files in the tree under the given directory.
What I need is to be able to run this command periodically from the shell.
So, I added the :gen-class directive in the file containing my -main function that actually calls (db-rebuild-files-table *dirname*). I run lein uberjar and generate a jar which I can then execute with:
java -jar my-project-SNAPSHOT-1.0.0-standalone.jar namespace.containing.main
Sure enough, the function runs, but the database ends up with only a single entry, for the directory *dirname*. When I execute the exact same sexp in the Clojure REPL I get the right behaviour: the whole file tree under *dirname* gets processed.
What am I doing wrong? Why does the call (db-rebuild-files-table *dirname*) behave inconsistently when called from the REPL and when executed from the command line?
[EDIT] What's even weirder is that I get no error anywhere. All function calls seem to work as they should. I can even run the -main function in the REPL and it updates the table correctly.
If this works in the REPL, but not when executed stand-alone, then I would guess that you may be bitten by the laziness of Clojure.
Does your code perhaps need a doseq in order to get the benefits of a side-effect (e.g. writing to your database)?
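For illustration, a minimal sketch of the difference (log-file! and the other names are made up here, not taken from the original program):
;; log-file! is a placeholder for the real per-file logging function.
;; Lazy: nothing is logged if the returned seq is never realized.
(defn log-files-lazy [files]
  (map log-file! files))
;; Eager: doall forces the whole lazy seq, so every file is logged.
(defn log-files-eager [files]
  (doall (map log-file! files)))
;; Idiomatic for side effects only: doseq is never lazy and returns nil.
(defn log-files! [files]
  (doseq [f files]
    (log-file! f)))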
Nailed it. It was a very insidious bug in my program: I got bitten by Clojure's laziness.
My file-tree function used map internally, and so produced just the first value, the root directory. For some reason I still can't figure out, when executed at the REPL, evaluation was actually forced and the whole tree seq was produced. I just added a doall in my function and it solved it.
Still trying to figure out why executing something at the REPL forces evaluation, though. Any thoughts?
Are file descriptors supported on Windows? Why do things "seem to work" in Perl with fds?
Things like "fileno", "dup" and "dup2" were working but then randomly inside some other environment, stopped working. It's hard to give details, mostly what I'm looking for is answers from experienced Windows programmers and how file descriptors work/don't work on Windows.
I would guess that it's the PerlIO layer playing games and making it seem as though file descriptors work, but that's only a guess.
Example of what is happening:
open($saveout, ">&STDOUT") or die();
...
open(STDOUT, ">&=".fileno($saveout)) or die();
The second line die()s but only in certain situations (which I have yet to nail down).
Windows supports file descriptors natively through its C runtime; see Low-Level I/O on MSDN. Those functions report errors through the C variable errno, which means the errors show up in Perl's $!.
Note that you can save yourself a bit of typing:
open(STDOUT, ">&=", $saveout) or ...;
This works because the documentation for open in perlfunc provides:
If you use the 3-arg form then you can pass either a number, the name of a filehandle or the normal “reference to a glob.”
Finally, always include meaningful diagnostics when you call die! The program below identifies itself ($0), tells what it was trying to do (open), and why it failed ($!). Also, because the message doesn't end with a newline, die adds the name of the file and line number where it was called.
my $fakefd = 12345;
open(STDOUT, ">&=", $fakefd) or die("$0: open: $!");
This produces
prog.pl: open: Bad file descriptor at prog.pl line 2.
According to the documentation for _fdopen (because you used >&= and not >&), it has two failure modes:
If execution is allowed to continue, errno is set either to EBADF, indicating a bad file descriptor, or EINVAL, indicating that mode was a null pointer.
The second would be a bug in perl and is highly unlikely, because I don't see anything in perlio.c that involves a computed mode: they're all static strings.
Something appears to have gone wrong with $saveout. Could $saveout have been closed before you try to restore it? From your example, it's unclear whether you enabled the strict pragma. If it's not lexical (declared with my), are you calling a function that also monkeys with $saveout?
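For what it's worth, here is a sketch (not your original code, just an assumption about its shape) of saving and restoring STDOUT with a lexical handle under strict, checking the handle before reusing its descriptor:
use strict;
use warnings;

# Duplicate STDOUT into a lexical handle.
open(my $saveout, '>&', \*STDOUT) or die("$0: dup STDOUT: $!");

# ... code that redirects or closes STDOUT goes here ...

# Before restoring, confirm the saved handle still has a file descriptor.
defined(fileno($saveout))
    or die("$0: \$saveout no longer has a file descriptor (closed somewhere?)");
open(STDOUT, '>&=', $saveout) or die("$0: restore STDOUT: $!");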