Why does a missing "require" / "include" call the error handler an extra time?

I have a custom error handler set up using set_error_handler. When I try to include a file that doesn't exist, PHP calls the error handler one more time than it should:
<?php
error_reporting(E_ALL | E_STRICT);
set_error_handler(function ($errno, $errstr, $errfile, $errline, $errcontext) {
    if (error_reporting() !== 0) {
        echo "<br>";
        echo "<br>In Custom Error Handler...";
        echo "<br>Err String: ", $errstr;
        echo "<br>Passing to Default Handler...";
    }
    return false; // allow the default handler to run as well
});
include("/missing_file.php"); // line 11
?>
Output:
In Custom Error Handler...   // this is the extra error handler call
Err String: include(/missing_file.php) [function.include]: failed to open stream: No such file or directory
Passing to Default Handler...
// the default handler does nothing, even though error_reporting is not zero

// Next phase:
In Custom Error Handler...
Err String: include() [function.include]: Failed opening '/missing_file.php' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php')
Passing to Default Handler...

Warning: include() [function.include]: Failed opening '/missing_file.php' for inclusion in /home/yccom/public_html/apr/test.php on line 11
The same behavior is observed with require.
For example, changing line 11 to require will give this output:
In Custom Error Handler...   // this is the extra error handler call
Err String: require(/missing_file.php) [function.require]: failed to open stream: No such file or directory
Passing to Default Handler...
// the default handler does nothing, even though error_reporting is not zero

// Next phase:
Fatal error: require() [function.require]: Failed opening required '/missing_file.php' in /home/yccom/public_html/apr/test.php on line 11
What may be causing the error handler's additional call?

It's quite simple, really. PHP's lifecycle consists of 4 distinct phases:
Scanning
Parsing
Compilation
Execution
Scanning translates the raw source into tokens, and parsing turns those tokens into meaningful expressions; for that to happen, all files that are included/required have to be fetched. Your file doesn't exist, so a warning is issued.
Next, the compilation phase encounters the same include statement and tries to convert the expressions into opcodes. The file still doesn't exist, so another warning is issued.
Execution time... the file cannot be executed, because it is missing.
Why would PHP work like this? Isn't it stupid to blunder along even though a file is missing?
In a way, yes, but include is meant for files that are non-critical to the script; if you really need a file's contents, you use require (or preferably require_once). The latter emits, as you stated, a fatal error and stops everything dead in its tracks. That's what should happen when your code depends on another file to function.
The require construct issues an E_COMPILE_ERROR, which effectively halts the compiler (not unlike __halt_compiler()) at a given offset (the line where the failing require statement resides).
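If you want include's forgiving behaviour without any warnings at all, one option (a sketch, not part of the original question; the file name is hypothetical) is to resolve the path yourself before including:
<?php
// stream_resolve_include_path() returns the resolved absolute path, or
// false if the file can't be found anywhere on the include_path.
$path = stream_resolve_include_path('missing_file.php'); // hypothetical file
if ($path !== false) {
    include $path; // safe: the file is known to exist
} else {
    error_log('optional file missing, skipping'); // no E_WARNING raised
}
?>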
Check these slides for more details on each of the 4 main phases.
The reason your code emits more than one warning is simply that PHP tries to locate and open the file more than once. Try running the script from the command line, but under strace:
$ strace -o output.txt php yourScript.php
open the output file, and see the internals of the Zend engine. Pay special attention to lines that look like:
lstat("/your/path/./file.php", 0x50113add8355) = -1   // the 0x5... value is just some buffer address
You'll see where PHP goes looking for the file: all of its include_path directories, the cwd, /usr/share/php, probably a pear or lib directory, and any include path you set explicitly.
I got the idea to do this from this site, and based on the output I saw, this seems to me the most plausible explanation as to why you see multiple errors.
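Separately, to inspect (or narrow) that search path from PHP itself, a quick sketch:
<?php
// Print the lookup path used by include/require for relative names:
var_dump(get_include_path()); // e.g. ".:/usr/lib/php:/usr/local/lib/php"
// Restrict lookups to the cwd only; the old path is returned for restoring:
$old = set_include_path('.');
?>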

Related

Where should I start to debug when Make throws a particular error

My knowledge of Make is small. I have been told that everything you put after make (that does not start with a "-") is a target.
Well, a build process I have is failing.
First there is a line
make path/to/configuration_file
configuration_file is not a target. It is an autogenerated configuration file buried inside the directory structure ("path/to") that is of the form
#
# Boot Configuration
#
#
# DRAM Component
#
CONFIG_DRAM_TYPE_LPDDR4=y
# CONFIG_DRAM_TYPE_DDR4 is not set
CONFIG_DDR_SIZE=0x80000000
#
# Boot Device
#
# CONFIG_ENABLE_EMMC_BOOT is not set
# CONFIG_ENABLE_NAND_BOOT is not set
CONFIG_ENABLE_SPINAND_BOOT=y
# CONFIG_ENABLE_SPINOR_BOOT is not set
CONFIG_EMMC_ACCESS_8BIT=y
# CONFIG_EMMC_ACCESS_4BIT is not set
# CONFIG_EMMC_ACCESS_1BIT is not set
so I cannot understand how this is a target. For reference, when I run make there is a Makefile, but this Makefile does not reference this file.
Still, this step completes successfully.
The step where it fails says
make diags
and I have verified there is no "diags" target.
I will paste here the error output, which may give us more info about what is happening:
GEN cortex_a/output/Makefile
Init diag test "orc_scheduler" ...
remoteconfig: Failed to generate configure in cortex_a/soc/visio/tests/orc_scheduler!
Makefile:11: recipe for target 'orc_scheduler-init' failed
make[10]: *** [orc_scheduler-init] Error 25
At least what I would like to know is how to interpret this error message. I don't know what the "11" or the "10" or the "25" refers to.
make is fundamentally a tool for automatically running commands in the right order so you don't have to type them in yourself. All the commands make runs are commands you could type at your shell prompt, and all the errors those commands generate are the same ones you would see if you typed the commands yourself. So looking at make to try to understand those errors is looking in the wrong place: you have to look at the documentation for whatever command was invoked.
A "target" is just a file that make knows how to build. The fact that when you typed make <somefile> is didn't give you an error that it doesn't know how to build <somefile>, means that <somefile> is a target as far as your makefiles are concerned.
The error message Makefile:11: simply refers to the filename Makefile, line 11, which is where the failing command that make ran can be found. The make[10] prefix means the failure happened in a sub-make nested ten levels deep, and Error 25 is simply the exit status the failed command returned to make. But this likely won't help you solve the problem of why the command failed (unless the problem is that you invoked it with the wrong arguments and you need to adjust the makefile to specify different arguments).
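As a rough sketch (a made-up makefile, not your build), this is how that exact message shape arises:
# hypothetical Makefile; run with: make orc_scheduler-init
# (recipe lines must begin with a literal tab)
orc_scheduler-init:
	@echo 'Init diag test "orc_scheduler" ...'
	@exit 25    # simulate the generator failing with exit status 25
# GNU make then reports something like (line 5 being the failing command):
#   Makefile:5: recipe for target 'orc_scheduler-init' failed
#   make: *** [orc_scheduler-init] Error 25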
The command that failed generated the message:
remoteconfig: Failed to generate configure in cortex_a/soc/visio/tests/orc_scheduler!
I don't know what that means, but it's not related to make. You'll need to find out what this remoteconfig command is, what it does, and why it failed. It's unfortunate that it doesn't show any better error message as to why it failed to "generate configure", but again there's nothing make can do about that.
If you want to learn more about make you can look at the GNU make manual (note: GNU make is only one implementation of make; there are others that are fundamentally the same but differ in the details).

How to debug `Error while processing function` in `vim` and `nvim`?

TL;DR
How do I find where exactly a vim or nvim error started (which file?) when I'm interested in fixing the actual issue and not just removing the bad plugin? Is there anything better than strace and guesswork to find the error's origin?
Issue
I often add a plugin to my vim or nvim config and end up getting errors on hooks (buffer open, close, write):
"test.py" [New] 0L, 0C written
Error detected while processing function 343[12]..272:
line 8:
E716: Key not present in Dictionary: _exec
E116: Invalid arguments for function get(a:args, 'exec', a:1['_exec'])
E15: Invalid expression: get(a:args, 'exec', a:1['_exec'])
The problem is, I have no idea where those come from; I only get a line number in some unknown file, and I know it's not my vim/nvim config file.
Somewhere, you have a plugin that has defined a dictionary with anonymous functions (check the help for the anonymous-function tag).
For the curious ones, it's done this way:
let d = {}
function! d.whatever() abort
throw "blah"
endfunction
When you execute this function, you'll get the kind of error you're currently observing. That's why I stopped working this way and now prefer:
let d = {}
function s:whatever() abort
throw "blah"
endfunction
let d.whatever = function('s:whatever') " a workaround is required for older versions of vim
" At least this way I'll get a `<SNR>42_whatever` in the exception throwpoint, and thus a scriptname.
That's the why. Now, back to your problem: AFAIK, the only things you'll be able to learn are the two functions that have been called:
at line 12 of function {343}, you've called
function {272}, which contains an error at its line 8.
Thanks to the two commands shown below (they may need to be prefixed with :verbose, I don't remember exactly), you'll get the source code of the two functions, which you should be able to grep your plugins for to find where they appear.
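Concretely, a sketch using the numbers from the message above (343 and 272):
" Dump the source of the two numbered functions from the error message.
" With :verbose, Vim should also print where each one was last defined.
:verbose function {343}
:verbose function {272}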

How to read a Yii session in a CodeIgniter controller (I am new to both frameworks)

Following is the code I have used:
require(realpath(dirname(__FILE__).'/../../../email/apps/common/framework/YiiBase.php'));
$config = require(realpath(dirname(__FILE__).'/../../../email/apps/customer/config/main.php'));
Yii::createWebApplication($config);
var_dump(Yii::app()->User->id);
I am getting the following error:
Message: include(): Failed opening 'Yii.php' for inclusion (include_path='.:/usr/share/php:/usr/share/pear')
Filename: framework/YiiBase.php
Line Number: 428
Please be kind and post the full absolute path of the script that executes the code, and the locations of the files YiiBase.php and main.php.
A solution from the creator of Yii says:
require_once('path/to/yii.php');
Yii::createWebApplication($config);
// you can access Yii::app() now as usual
$session = Yii::app()->session;
//....use session as you wish
Here is a file of mine that can help you get started:
// change the following paths if necessary
$yii=dirname(__FILE__).'/../yii/yii.php';
$config=dirname(__FILE__).'/protected/config/main.php';
// remove the following lines when in production mode
defined('YII_DEBUG') or define('YII_DEBUG',true);
// specify how many levels of call stack should be shown in each log message
defined('YII_TRACE_LEVEL') or define('YII_TRACE_LEVEL',3);
require_once($yii);
Yii::createWebApplication($config);
NOTE
It is not recommended to use YiiBase directly. Use either yii or yiilite. YiiBase serves as a base class and is not intended to be used directly (to the best of my knowledge). Also check that the paths really point to the files!
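Putting it together, a minimal sketch of reading the Yii session from inside a CodeIgniter controller; the controller name and both paths are hypothetical and must be adjusted to your layout:
<?php
// Hypothetical CodeIgniter controller bootstrapping Yii to read its session.
class Dashboard extends CI_Controller
{
    public function index()
    {
        require_once '/var/www/email/framework/yii.php';  // yii.php, not YiiBase.php
        $config = require '/var/www/email/apps/customer/config/main.php';

        Yii::createWebApplication($config); // create, but do NOT call ->run()

        $session = Yii::app()->session;     // CHttpSession, array-style access
        $userId  = isset($session['user_id']) ? $session['user_id'] : null;
        var_dump($userId);
    }
}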

Ruby require_relative not loading file, not throwing error

I am having trouble getting constant definitions loaded via an external file. I have narrowed the problem down to the following.
require_relative '../../common/config.rb'
A_CONSTANT = 'something'
puts "A_CONSTANT: #{A_CONSTANT}"
When I run this as written, it prints the message correctly. The same constant is declared in the file common/config.rb. The relative path is correct for the location of this file. Just for completeness, the above code is in /watir/dashboard/spec/ex.rb. The constant is declared in /watir/common/config.rb.
As I see it, the above code should error out for a duplicate constant declaration. It does not. If I comment out the constant declaration above and rerun, the puts statement shows an error for 'uninitialized constant.' Any ideas what's wrong?
Edit - The contents of the file common/config.rb are below.
A_CONSTANT = 'something'
On a lark, I changed the filename to common/conf.rb. When I modify the require_relative statement to load the renamed file, I get the results I originally expected. The file is loaded and the second constant declaration throws a warning saying 'already initialized constant.' If I comment out the second declaration, the script runs perfectly.
It appears that the filename 'config.rb' is somehow special when loaded by a relative path. I have used that filename in other scripts where it was in the same folder as the loading script or a sub-folder. This is the first time I have had to move up the tree to load it.
Ruby allows redefining constants, and will only print a warning. Some setting in your Ruby is just hiding that warning from you.
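A minimal sketch of that behaviour (the file name is made up): redefining a constant only warns, and setting $VERBOSE to nil suppresses even that warning, which would explain seeing nothing at all:
# constant_redefinition.rb -- run with: ruby constant_redefinition.rb
A_CONSTANT = 'from config'
A_CONSTANT = 'redefined'   # prints: warning: already initialized constant A_CONSTANT

$VERBOSE = nil             # silences warnings (some tools set this for you)
A_CONSTANT = 'again'       # no warning at all
puts "A_CONSTANT: #{A_CONSTANT}"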

How to get R script line numbers at error?

If I am running a long R script from the command line (R --slave script.R), then how can I get it to give line numbers at errors?
I don't want to add debug commands to the script if at all possible; I just want R to behave like most other scripting languages.
This won't give you the line number, but it will tell you where the failure happens in the call stack which is very helpful:
traceback()
[Edit:] When running a script from the command line you will have to skip one or two calls, see traceback() for interactive and non-interactive R sessions
I'm not aware of another way to do this without the usual debugging suspects:
debug()
browser()
options(error=recover) [followed by options(error = NULL) to revert it]
You might want to look at this related post.
[Edit:] Sorry...just saw that you're running this from the command line. In that case I would suggest working with the options(error) functionality. Here's a simple example:
options(error = quote({dump.frames(to.file=TRUE); q()}))
You can create as elaborate a script as you want on an error condition, so you should just decide what information you need for debugging.
Otherwise, if there are specific areas you're concerned about (e.g. connecting to a database), then wrap them in a tryCatch() function.
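For example, a sketch of wrapping a fragile step in tryCatch() (the input file name is made up):
# Wrap a fragile step so a failure carries context instead of killing the
# script with an opaque message.
result <- tryCatch(
  read.csv("data/input.csv"),   # hypothetical input file
  error = function(e) {
    message("Failed while reading input: ", conditionMessage(e))
    NULL                        # sentinel value the caller can check for
  }
)
if (is.null(result)) quit(status = 1)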
Doing options(error=traceback) provides a little more information about the content of the lines leading up to the error. It causes a traceback to appear if there is an error, and for some errors it includes the line number, prefixed by #. But it's hit or miss; many errors won't get line numbers.
Support for this will be forthcoming in R 2.10 and later. Duncan Murdoch just posted to r-devel on Sep 10 2009 about findLineNum and setBreakpoint:
I've just added a couple of functions to R-devel to help with
debugging. findLineNum() finds which line of which function
corresponds to a particular line of source code; setBreakpoint() takes
the output of findLineNum, and calls trace() to set a breakpoint
there.
These rely on having source reference debug information in the code.
This is the default for code read by source(), but not for packages.
To get the source references in package code, set the environment
variable R_KEEP_PKG_SOURCE=yes, or within R, set
options(keep.source.pkgs=TRUE), then install the package from source
code. Read ?findLineNum for details on how to tell it to search
within packages, rather than limiting the search to the global
environment.
For example,
x <- " f <- function(a, b) {
if (a > b) {
a
} else {
b
}
}"
eval(parse(text=x)) # Normally you'd use source() to read a file...
findLineNum("<text>#3") # <text> is a dummy filename used by
parse(text=)
This will print
f step 2,3,2 in <environment: R_GlobalEnv>
and you can use
setBreakpoint("<text>#3")
to set a breakpoint there.
There are still some limitations (and probably bugs) in the code; I'll be fixing those.
You do it by setting
options(show.error.locations = TRUE)
I just wonder why this setting is not the default in R? It should be, as it is in every other language.
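A quick sketch of what that setting buys you (the file name is hypothetical); note that source references have to be kept for a line number to appear:
# a sketch; keep.source must be on for line numbers to be recorded
options(keep.source = TRUE, show.error.locations = TRUE)
source("script_with_bug.R")   # hypothetical file whose line 3 calls stop()
# roughly: Error in f() (from script_with_bug.R#3) : something went wrong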
Specifying the global R option for handling non-catastrophic errors worked for me, along with a customized workflow for retaining info about the error and examining this info after the failure. I am currently running R version 3.4.1.
Below, I've included a description of the workflow that worked for me, as well as some code I used to set the global error handling option in R.
As I have it configured, the error handling also creates an RData file containing all objects in working memory at the time of the error. This dump can be read back into R using load() and then the various environments as they existed at the time of the error can be inspected interactively using debugger(errorDump).
I will note that I was able to get line numbers in the traceback() output from any custom functions within the stack, but only if I used the keep.source=TRUE option when calling source() for any custom functions used in my script. Without this option, setting the global error handling option as below sent the full output of the traceback() to an error log named error.log, but line numbers were not available.
Here are the general steps I took in my workflow and how I was able to access the memory dump and error log after a non-interactive R failure.
I put the following at the top of the main script I was calling from the command line. This sets the global error handling option for the R session. My main script was called myMainScript.R. The various lines in the code have comments after them describing what they do. Basically, with this option, when R encounters an error that triggers stop(), it will create an RData (*.rda) dump file of working memory across all active environments in the directory ~/myUsername/directoryForDump and will also write an error log named error.log with some useful information to the same directory. You can modify this snippet to add other handling on error (e.g., add a timestamp to the dump file and error log filenames, etc.).
options(error = quote({
  # Set the working directory where you want the dump to go, since
  # dump.frames() doesn't seem to accept absolute file paths.
  setwd('~/myUsername/directoryForDump')
  # First dump to file; this dump is not accessible by the R session.
  dump.frames("errorDump", to.file = TRUE, include.GlobalEnv = TRUE)
  # Specify a sink file to redirect all output.
  sink(file = "error.log")
  # Dump again to be able to retrieve the error message and write it to the
  # log; this dump is accessible by the R session since it isn't sent to file.
  dump.frames()
  # Print the error message to the log, along with a simplified stack trace.
  cat(attr(last.dump, "error.message"))
  cat('\nTraceback:')
  cat('\n')
  # Print the full traceback of function calls with all parameters. The 2
  # passed to traceback omits the outermost two function calls.
  traceback(2)
  sink()
  q()
}))
Make sure that from the main script and any subsequent function calls, anytime a function is sourced, the option keep.source=TRUE is used. That is, to source a function, you would use source('~/path/to/myFunction.R', keep.source=TRUE). This is required for the traceback() output to contain line numbers. It looks like you may also be able to set this option globally using options( keep.source=TRUE ), but I have not tested this to see if it works. If you don't need line numbers, you can omit this option.
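That is, in code (the path comes from the example above):
# Per call: keep srcrefs when sourcing helpers, so traceback() has line numbers.
source('~/path/to/myFunction.R', keep.source = TRUE)

# Or globally, before any source() calls (untested, per the note above):
options(keep.source = TRUE)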
From the terminal (outside R), call the main script in batch mode using Rscript myMainScript.R. This starts a new non-interactive R session and runs the script myMainScript.R. The code snippet given in step 1 that has been placed at the top of myMainScript.R sets the error handling option for the non-interactive R session.
Encounter an error somewhere within the execution of myMainScript.R. This may be in the main script itself, or nested several functions deep. When the error is encountered, handling will be performed as specified in step 1, and the R session will terminate.
An RData dump file named errorDump.rda and an error log named error.log are created in the directory specified by '~/myUsername/directoryForDump' in the global error handling option setting.
At your leisure, inspect error.log to review information about the error, including the error message itself and the full stack trace leading to the error. Here's an example of the log that's generated on error; note the numbers after the # character are the line numbers of the error at various points in the call stack:
Error in callNonExistFunc() : could not find function "callNonExistFunc"
Calls: test_multi_commodity_flow_cmd -> getExtendedConfigDF -> extendConfigDF
Traceback:
3: extendConfigDF(info_df, data_dir = user_dir, dlevel = dlevel) at test_multi_commodity_flow.R#304
2: getExtendedConfigDF(config_file_path, out_dir, dlevel) at test_multi_commodity_flow.R#352
1: test_multi_commodity_flow_cmd(config_file_path = config_file_path,
spot_file_path = spot_file_path, forward_file_path = forward_file_path,
data_dir = "../", user_dir = "Output", sim_type = "spot",
sim_scheme = "shape", sim_gran = "hourly", sim_adjust = "raw",
nsim = 5, start_date = "2017-07-01", end_date = "2017-12-31",
compute_averages = opt$compute_averages, compute_shapes = opt$compute_shapes,
overwrite = opt$overwrite, nmonths = opt$nmonths, forward_regime = opt$fregime,
ltfv_ratio = opt$ltfv_ratio, method = opt$method, dlevel = 0)
At your leisure, you may load errorDump.rda into an interactive R session using load('~/path/to/errorDump.rda'). Once loaded, call debugger(errorDump) to browse all R objects in memory in any of the active environments. See the R help on debugger() for more info.
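Concretely, the post-mortem session looks something like this (paths as above):
# In a fresh interactive R session, after the failed batch run:
load('~/myUsername/directoryForDump/errorDump.rda')  # restores `errorDump`
debugger(errorDump)  # choose a frame number to browse its environment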
This workflow is enormously helpful when running R in some type of production environment where you have non-interactive R sessions being initiated at the command line and you want information retained about unexpected errors. The ability to dump memory to a file you can use to inspect working memory at the time of the error, along with having the line numbers of the error in the call stack, facilitate speedy post-mortem debugging of what caused the error.
First, set options(show.error.locations = TRUE) and then call traceback(). The error line number will be displayed after the #.
