Are file descriptors supported on windows? Why do things "seem to work" in Perl with fds?
Things like "fileno", "dup" and "dup2" were working, but then, in some other environment, they stopped working for no apparent reason. It's hard to give details; mostly what I'm looking for is answers from experienced Windows programmers about how file descriptors do and don't work on Windows.
I would guess that it's the PerlIO layer playing games and making it seem as though file descriptors work, but that's only a guess.
Example of what is happening:
open($saveout, ">&STDOUT") or die();
...
open(STDOUT, ">&=".fileno($saveout)) or die();
The second line die()s but only in certain situations (which I have yet to nail down).
Windows supports file descriptors natively; see Low-Level I/O on MSDN. The low-level I/O functions all report errors through the C variable errno, which means they show up in Perl's $!.
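For instance, a tiny sketch using the POSIX module (the particular descriptor is arbitrary): descriptor-level calls behave as on POSIX systems, and any failure is reported through $!.
use POSIX ();

# Duplicate the descriptor underlying STDOUT; a failure would show up in $!.
my $fd = POSIX::dup(fileno(STDOUT));
defined($fd) or die "$0: dup: $!";
print "duplicated STDOUT onto descriptor $fd\n";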
Note that you can save yourself a bit of typing:
open(STDOUT, ">&=", $saveout) or ...;
This works because the documentation for open in perlfunc provides:
If you use the 3-arg form then you can pass either a number, the name of a filehandle or the normal “reference to a glob.”
Finally, always include meaningful diagnostics when you call die! The program below identifies itself ($0), tells what it was trying to do (open), and why it failed ($!). Also, because the message doesn't end with a newline, die adds the name of the file and line number where it was called.
my $fakefd = 12345;
open(STDOUT, ">&=", $fakefd) or die("$0: open: $!");
This produces
prog.pl: open: Bad file descriptor at prog.pl line 2.
According to the documentation for _fdopen (because you used >&= and not >&), it has two failure modes:
If execution is allowed to continue, errno is set either to EBADF, indicating a bad file descriptor, or EINVAL, indicating that mode was a null pointer.
The second would be a bug in perl and is highly unlikely: I don't see anything in perlio.c that computes the mode; they're all static strings.
Something appears to have gone wrong with $saveout. Could $saveout have been closed before you try to restore it? From your example, it's unclear whether you enabled the strict pragma. If it's not lexical (declared with my), are you calling a function that also monkeys with $saveout?
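For reference, here is a minimal sketch of the save-and-restore pattern with a lexical handle and meaningful diagnostics (the redirection in the middle is only implied; adapt it to your situation):
use strict;
use warnings;

open(my $saveout, '>&', \*STDOUT)     or die "$0: save STDOUT: $!";
# ... redirect STDOUT elsewhere and do the real work ...
open(STDOUT, '>&=', fileno($saveout)) or die "$0: restore STDOUT: $!";
close($saveout)                       or die "$0: close saved handle: $!";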
According to the Bash Reference Manual, the Bash scripting language consists of four distinct classes of syntactic elements:
built-in commands (alias, cd)
reserved words (if, function)
parameters and variables ($, IFS)
functions (abort, end-of-file - activated with keybindings such as Ctrl-d)
Apart from reading the manual, I became curious whether there is a programmatic way to list out or generate all such keywords, at least for one of the above categories. I think this could be useful in some contexts. Sometimes I wish I could see all the options available to me for what I can write at any given moment, and having that information as data, instead of as a formatted manual, is convenient and focused, and it can be edited, in case you want to strike out commands you know well, or that are too obscure for now.
My understanding is that Bash takes input on stdin and passes it to the running shell process. When code is distributed in a production-ready form, it is compiled, so it runs faster. Unlike a Python REPL, you don't have access to the Bash source code from within Bash, so writing a program that searches through source files to find the various defined commands is not a very direct route. I mean that if you wanted to list all functions, Python has the dir() function, which programmatically looks for the names in a namespace, but I don't think Bash can do that. I don't think it has a special syntax in its source files which makes it easy to find and identify all the keywords. Instead, they will be found if you simply enter them - like cd will “find” the program cd because $PATH returns the path to that command - but there's no special way to discover them.
Or am I wrong? Technically, you could run a “brute force” search by generating every combination of symbols of every length and record when you did not get “error: unknown command” as a response.
Is there any other clever programmatic way to do this?
I mean I want to see a list of every symbol or string that the bash compiler
Bash is not a compiler. It and every other shell I know are interpreters of various languages.
recognises and knows what to do with, including commands like “ls” or just a symbol like “*”. I also want to see the inputs and outputs for each symbol, i.e., some commands are executed in the shell prompt by themselves, but what data type do they return?
All commands executed by the shell have an exit status, which is a number between 0 and 255. This is as close to a "return type" as you get. Many of them also produce idiosyncratic output to one or two streams (a standard output stream and a standard error stream) under some conditions, and many have other effects on the shell environment or operating environment.
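For example, a trivial sketch (the particular commands are arbitrary):
grep -q root /etc/passwd
echo "grep exit status: $?"    # 0 if a line matched, 1 if not, 2 on error
false
echo "false exit status: $?"   # the false builtin always exits with 1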
And some require a certain data type to standard input.
I can't think of a built-in utility whose expected input is well characterized as having a particular data type. That's not really a stream-oriented concept.
I want to do this just as a rigorous way to study the language.
If you want to rigorously study the language, then you should study its manual, where everything you describe has already been compiled. You might also want to study the POSIX shell command language manual for a slightly different perspective, which is more thorough in some areas, though what it documents differs in a few details from Bash's default behavior.
If you want to compile your own summary of Bash syntax and behavior, then those are the best source materials for such an effort.
You can get a list of all reserved words and syntactic elements of bash using this trick:
help -s '*' | cut -d: -f1
Or more accurately:
help -s \* | awk -F ': ' 'NR>2&&!/variables/{print $1}'
I have a lot of old Perl code that gets called frequently. I have been writing a new module, and all of a sudden I'm getting a lot of warnings in my Apache error_log. They appear for every module currently being used, e.g.,
"my" variable $variable masks earlier declaration in same statement at
/path/to/module.pm line 40 (#1)
Useless use of hash element in void context at
/path/to/another/module.pm line 212 (#2)
The main layout of the codebase is one giant script that includes the modules and directs requests to whichever module is needed to create a given page for the website; the main script then handles static elements like menus.
My current project is separate from this main script and doesn't use it. However, any time I call my code using Ajax, there are some other Ajax calls that go through the main script, and the warnings only seem to appear from those requests, and only when I'm calling my project.
I have grepped every module and none of them have use warnings (or -w) in them. I have also tried using no warnings 'all' in the main script and in my own project, but it's not doing anything.
At this point I'm out of ideas on what to do next, so all help is appreciated. I'd just like to suppress the warnings; the codebase is quite old and poorly written, so going through and correcting each issue that causes the warnings in the first place isn't doable.
The Apache server is running mod_perl as well, in case that makes a difference. I have a feeling it might be something to do with CGI, but I can't seem to find any evidence.
I take it that the code gets called by running certain top-level Perl script(s).
Then use the __WARN__ hook in those script(s) to stop warnings from being printed:
BEGIN { $SIG{__WARN__} = sub {} };
Place this BEGIN block before the use statements so that it affects the modules as well.
An empty subroutine is the way to mute warnings since __WARN__ doesn't support 'IGNORE'.
See warn and %SIG in perlvar.
See this post and this post for comments and some examples.
To investigate further and track down the warnings, you can use Carp:
BEGIN {
    require Carp;
    $SIG{__WARN__} = \&Carp::cluck;   # or \&Carp::confess to also die
}
which will make it print a full stack trace for each warning. This can be fine-tuned as you please, since we can write our own sub to be called. Or use Carp::Always.
See this post for some more drastic measures (like overriding CORE::GLOBAL::warn).
Once you find a more precise level at which to suppress warnings, then local $SIG{__WARN__} is the way to go, if possible. This is used in a post linked above, and here is another example. It is of course far better to suppress warnings only where needed instead of everywhere.
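A minimal sketch of the localized approach (the subroutine and module names here are hypothetical):
sub call_noisy_legacy_code {
    # Warnings are muted only for the dynamic scope of this block.
    local $SIG{__WARN__} = sub {};
    return Legacy::Module::do_work(@_);   # hypothetical call that would warn
}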
More detail
Getting stack traces in Perl?
How can I get a call stack listing in Perl?
Note that longmess is unfortunately no longer so standard and well supported.
I am invoking mingw32-make from my tool using CreateProcess(...). After the execution I want to know the results (such as whether the target was generated, or whether there were any compilation errors, etc.).
Can someone help me with how to parse the result?
Usually one does not parse the result from a compile, except as a followup to analyze the errors. Instead (in scripts and makefiles), one uses the exit-status of the compiler, which returns a nonzero (error) code on failure.
In a makefile, unless the magic "-" flag precedes the command which runs a program, any nonzero exit status will stop the build.
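For example, a minimal sketch of checking that exit status from the calling tool (error handling abbreviated; the command line "mingw32-make all" is only an assumption):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    DWORD code = 1;
    char cmd[] = "mingw32-make all";   /* hypothetical command line */

    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", (unsigned long)GetLastError());
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);   /* wait for make to finish */
    GetExitCodeProcess(pi.hProcess, &code);       /* 0 means the build succeeded */
    printf("mingw32-make exit status: %lu\n", (unsigned long)code);

    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return code == 0 ? 0 : 2;
}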
Error messages and diagnostics are written to the standard error stream. It is possible to parse these messages; several text editors (e.g., emacs, vile, vim) can do this, positioning the cursor at the location in the source file where an error was detected. There are several variants. The most common uses
filename:line:message
like the grep program. Some add the column number, like one of these:
filename:line.column:message
filename:line:column:message
The error messages may include multiple lines, providing additional information about the message, but in most cases the colon-separated filename, line-number and message is enough to position the cursor to the right place.
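As a rough sketch (real diagnostics may contain extra colons or multi-line context; the sample line is hypothetical), a line in the first format can be split like this:
#include <stdio.h>

int main(void)
{
    const char *diag = "main.c:42: error: 'x' undeclared";   /* hypothetical line */
    char file[256], message[256];
    int line;

    /* %[^:] collects everything up to the next colon; the rest is the message. */
    if (sscanf(diag, "%255[^:]:%d:%255[^\n]", file, &line, message) == 3)
        printf("file=%s  line=%d  message=%s\n", file, line, message);
    return 0;
}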
If I do C-u M-x recompile inside a buffer that's not the *compilation* buffer (the source file, for instance), I get this error - "Wrong type argument: consp, nil" - after it prompts for the compilation command. Why is this? I want to run recompile interactively (so the compilation runs in comint mode), sometimes from outside the compilation buffer. How do I do this?
Try using emacs -Q, just to be sure (yes, I know you said you commented out all of your init file, but just to be sure -- and it's a lot easier to do than comment-out everything).
Next, set debug-on-error to t -- You can do M-x toggle-debug-on-error to do that, if you prefer.
Next, provoke the error and look at the debugger *Backtrace*. It will show you not only which function raised the error because it expected a cons and got nil instead, but also what function called it, passing the bad argument. And so on down the stack.
If necessary, you can click mouse-2 on functions in the stack (at the left) to see their source code. Or put the cursor on them and use C-h f to see their doc -- in particular, what arguments they expect and what their return values should be.
In this way it's pretty easy to find the code that is the culprit. (Most likely, in spite of what you said, it is some non-vanilla Emacs Lisp code you loaded somehow.)
Also, state your emacs version: M-x emacs-version. If you are using a development snapshot then the problem could come from vanilla code (i.e., emacs -Q); otherwise, that's not so likely.
Also, you say you get the error after it prompts you. Immediately after it prompts, before you type anything? After you type a command name and hit RET? Try to be more specific.
Update after your comment:
Load library compile.el (not .elc). Then do M-x debug-on-entry RET recompile RET, and step through the debugger using d when function recompile is entered. What you are interested in is when compilation-start is called (applied to its args).
It seems that the value of compilation-arguments that is passed to it is no good. The command name you enter at the prompt becomes the first of the list of compilation-arguments. The others are taken from when you last invoked compile: recompile just reuses the same arguments (except the command name): (mode name-function highlight-regexp).
However, be aware that compilation-arguments is buffer-local. So if you changed to a different buffer then its value is likely not what you need. You need the value from your last compile, so you should do the recompile in the same buffer where you did compile.
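To see whether that is what is happening, here is a small sketch you can evaluate with M-: in the buffer from which you intend to run recompile (it only inspects the variable):
(progn
  (require 'compile)
  ;; A non-nil argument list is what recompile expects; nil in this buffer
  ;; would explain the "Wrong type argument: consp, nil" error.
  (list (local-variable-p 'compilation-arguments)
        compilation-arguments))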
(FWIW, I don't use (re)compile myself, as I don't develop software anymore. I just took a look at the source code.)
Such errors usually indicate a problem with your configuration. Try investigating the *Messages* buffer output; there can be some clues there.
And of course, it is normal to call a recompile command from a buffer with your code. It is a convention to bind it to C-c C-c.
I'm using Fortran for my research and sometimes, for debugging purposes, someone will insert in the code something like this:
write(*,*) 'Variable x:', varx
The problem is that sometimes we forget to remove that statement from the code, and it becomes difficult to find where it is being printed. I can usually get a good idea of where it is from the name 'Variable x', but sometimes that information is not present and I just see random numbers showing up.
One can imagine that doing a grep for write(*,*) is basically useless so I was wondering if there is an efficient way of finding my culprit, like forcing every call of write(*,*) to print a file and line number, or tracking stdout.
Thank you.
Intel's Fortran preprocessor defines a number of macros, such as __FILE__ and __LINE__, which will be replaced by, respectively, the file name (as a string) and the line number (as an integer) when the preprocessor runs. For more details consult the documentation.
GFortran offers similar facilities; consult the documentation.
Perhaps your compiler offers similar capabilities.
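A minimal sketch of how those macros might be used (the file name and the gfortran -cpp invocation are assumptions; any compiler with the preprocessor enabled should behave similarly):
! locate_write.F90 -- compile with, e.g.,  gfortran -cpp locate_write.F90
program locate_write
  implicit none
  real :: varx
  varx = 1.5
  ! The preprocessor substitutes the file name and line number here,
  ! so the stray output identifies its own origin.
  write(*,*) __FILE__, __LINE__, ' Variable x:', varx
end program locate_write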
As has been previously implied, there is no Fortran way (although there may be a compiler-specific approach) to change the behaviour of the write statement as you want. However, as your problem is more to do with handling (unintentionally produced) bad code, there are options.
If you can't easily find an unwanted write(*,*) in your code, that suggests you have many legitimate such statements. One solution is to reduce the count (a short sketch follows the list):
use an explicit format, rather than list-directed output (* as the format);
instead of * as the output unit, use output_unit from the intrinsic module iso_fortran_env.
[Having an explicit format for "proper" output is a good idea, anyway.]
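Here is a short sketch of those two suggestions together (the program name and the format are only illustrative):
program tidy_output
  use, intrinsic :: iso_fortran_env, only : output_unit
  implicit none
  real :: varx
  varx = 1.5
  ! Explicit unit and explicit format, so any remaining write(*,*) is easy to grep for.
  write(output_unit, '(a, f8.3)') 'Variable x: ', varx
end program tidy_output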
If that fails, use your version control system to compare an old "good" version against the new "bad" version. Perhaps even have your version control system flag/block commits with new write(*,*)s.
And if all that still doesn't help, then the pre-processor macros previously mentioned could be a final resort.