Perl warnings printed without 'use warnings' or -w in any files - ajax

I have a lot of old Perl code that gets called frequently. I have been writing a new module, and all of a sudden I'm getting a lot of warnings in my Apache error_log; they appear for every module currently in use, e.g.:
"my" variable $variable masks earlier declaration in same statement at
/path/to/module.pm line 40 (#1)
Useless use of hash element in void context at
/path/to/another/module.pm line 212 (#2)
The main layout of the codebase is one giant script that includes the modules and dispatches requests to whichever of them are needed to build particular pages for the website; the main script then handles static elements like menus.
My current project is separate from this main script and doesn't use it. However, whenever I call my code using Ajax, there are other Ajax calls that do use the main script, and the warnings only seem to appear from those requests, and only while my project is being called.
I have grepped every module and none of them contain use warnings (or -w). I have also tried no warnings 'all' in the main script and in my own project, but it doesn't do anything.
At this point I'm out of ideas on what to do next, so all help is appreciated. I'd just like to suppress the warnings; the codebase is quite old and poorly written, so going through and correcting each issue that causes the warnings in the first place isn't doable.
The Apache server is running mod_perl as well, in case that makes a difference. I have a feeling it might be something to do with CGI, but I can't seem to find any evidence.

I take it that the code gets called by running certain top-level Perl script(s).
Then use the __WARN__ hook in those script(s) to stop warnings from being printed:
BEGIN { $SIG{__WARN__} = sub {} };
Place this BEGIN block before the use statements so that it affects modules as well.
An empty subroutine is the way to mute warnings since __WARN__ doesn't support 'IGNORE'.
See warn and %SIG in perlvar.
See this post and this post for comments and some examples.
To investigate further and track down the warnings you can use Carp:
BEGIN {
    require Carp;
    $SIG{__WARN__} = \&Carp::cluck;  # or \&Carp::confess to also die
}
which will make it print full stack traces. This can be fine-tuned as you please, since you can write your own sub to be called. Or use Carp::Always.
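For example, a handler that only lets through warnings originating in your own code and silences the rest might look like this (a rough sketch; the path pattern is purely illustrative):
BEGIN {
    $SIG{__WARN__} = sub {
        my ($msg) = @_;
        # Print only warnings that mention our own project tree;
        # everything else is silently dropped
        print STDERR $msg if $msg =~ m{/path/to/my/project/};
    };
}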
See this post for some more drastic measures (like overriding CORE::GLOBAL::warn).
Once you find a more precise level at which to suppress warnings then local $SIG{__WARN__} is the way to go, if possible. This is used in a post linked above, and here is another example. It is of course far better to suppress warnings only where needed instead of everywhere.
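For instance, a minimal sketch of suppressing warnings only around one noisy call (the sub and module names are hypothetical):
sub render_page {
    my ($request) = @_;
    # The previous handler is restored automatically when this block exits
    local $SIG{__WARN__} = sub {};
    return Legacy::Module::build_page($request);
}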
More detail
Getting stack traces in Perl?
How can I get a call stack listing in Perl?
Note that longmess is unfortunately no longer so standard and well supported.

Related

Python: Detect code which never gets executed in production

I need to do refactoring in a big legacy Python code base.
Often I think "these lines don't get executed in production any more".
But I am unsure.
There are some tests which touch these lines. But I can't tell for sure if really no usage happens in production.
What can I do in this situation?
This question is about coverage on a production system. This question is not about coverage during testing/CI.
I don't want to comment out those lines, since I don't want to produce an error in the production system.
Common practice is to use logging inside those lines of code. E.g. you have a block of code you think is not in use: you add a try/except block at the beginning of that block of code, and inside it you append an entry to a file named after your suspicious block of code.
import datetime
import pickle

try:
    # Load the activity log for this suspicious block (start fresh if it doesn't exist yet)
    try:
        with open("block1.dat", "rb") as file:
            activity = pickle.load(file)
    except FileNotFoundError:
        activity = []
    curtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    currentact = "dt = {}; code done that: var1 = {}, var2 = {}".format(
        curtime, var1, var2)
    activity.append(currentact)
    # Rewrite the whole list so the next load sees every recorded entry
    with open("block1.dat", "wb") as file:
        pickle.dump(activity, file)
except Exception:
    pass
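Later you can read the collected entries back to see whether (and when) the block ever ran; a minimal sketch, assuming the same file name:
import pickle

with open("block1.dat", "rb") as file:
    for entry in pickle.load(file):
        print(entry)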
You can use the Telegram API to log to as well. After a while you'll get information on how often your code runs and what it does.
Then you monitor for a while, and if nothing happens in a month, you can comment out the block.
Is the production system deterministic?
Is it interactive?
Does control flow depend on input data?
Do you have access to all possible inputs?
Do the tests exist for a reason or just because?
I'd be careful about removing code based on what logging says is needed, unless I knew there were no exceptional situations that occur only rarely.
I would follow the common code paths to try to understand the codebase piece by piece in order to figure out what can be simplified. It's hard to give more specific advice without knowing more about the system you're dealing with.
We use a simple pattern to handle this: looks_like_dead_code(my_string)
This is a method which logs the string "my_string".
Example usage:
if ext == '.jpe':
    looks_like_dead_code('2018-11-30 tguettler: looks fixed in mime_type_to_extension')
Using the date and the developer login is not enforced, it is just best practice.
If the line gets executed the one who is responsible for checking the logs will talk to the developer.
Since our production environments get updated roughly once in two weeks, you can be sure that this line was not executed during the last months.
I like this solution since in most cases it is like this:
you want to fix a bug or implement a new feature
you look at the code and see some lines which look like dead code. I mean code which is useless, since it won't get executed any more.
You don't have hours of time to investigate, and you can't act on just a vague guess that this is dead code. You want to do your actual work (fix a bug or implement a new feature; see step 1).
The method looks_like_dead_code() gives you a way to actually do something and leave a note for other developers. It only costs a few seconds and improves the current situation.
If you have a Tickler file System you can remind yourself to check this code in six months. At least in my context I can be very sure that this is dead code if this line was not executed for several months.
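The answer doesn't show the implementation of looks_like_dead_code(); a minimal sketch of what such a helper could look like, assuming the standard logging module:
import logging

logger = logging.getLogger("dead_code_candidates")

def looks_like_dead_code(message):
    # If this ever shows up in the logs, the surrounding code path
    # is evidently not dead and the note should be revisited.
    logger.warning("looks_like_dead_code hit: %s", message)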

To BEGIN or not to BEGIN

(Crossposting note: This question has already been asked at Ruby Forum, but did not get any answer so far)
For a project (which consists of several independent Ruby programs), I wanted to provide an easy way to enable warnings, but not necessarily force it to be used in every single application. I ended up with a file warnings.rb, which has the following:
BEGIN { $VERBOSE = true }
To enable warnings in an application, one can either
require 'warnings'
or run the application using
ruby -r warnings .....
One might argue that the latter could also be achieved by running the program using ruby -w, but warnings.rb also provides other stuff related to this topic.
This solution works to our satisfaction, but the BEGIN block seems unnecessary. The only case I could come up with where it would matter would be when someone else runs code in a BEGIN block which, when executed, would cause warnings.
This is a pretty exotic situation, so I think I can remove the BEGIN block without risking any harm, but I'm not absolutely sure. Am I right with this?

Can't list source using debug in ruby 1.8 [duplicate]

With this minimal ruby code:
require 'debug'
puts
in a file called, e.g. script.rb
if I launch it like so: ruby -rdebug script.rb
and then press l on the debug prompt, I get the listing, as expected
if I instead run it normally as ruby script.rb
when pressing l I get:
(rdb:1) l
[-3, 6] in script.rb
No sourcefile available for script.rb
The error message seems misleading at best: the working directory hasn't changed, and the file is definitely still there!
I'm unable to find documentation on this behavior (I tried it on both jruby and mri, and the behavior is the same)
I know about 'debugger' and 'pry', but they serve a different use case:
I'm used to other scripting languages with a builtin debug module that lets me put a statement anywhere in the code to drop into a debug shell and inspect code, variables and such... the advantage of having it builtin is that it is available everywhere, without having to set up an environment for it, even when I'm on a machine that's not my own
I could obviously work around this by always calling the interpreter with -rdebug and manually setting the breakpoint, but I find this more work than the alternative
After looking into the debug source code, I found a workaround and a fix:
the workaround can be:
trace on
next
trace off
list
this will let you get the listing without restarting the interpreter with -rdebug, with the disadvantage that you'll get some otherwise unwanted output from the tracing, and that you'll only see the listing after moving forward by one statement
for the fix: the problem is that the SCRIPT_LINES__ Hash lacks a value for the current file... apparently it's only set inside tracer.rb
I've changed line 161, replacing the Hash with a subclass that tracks where []= has been called from, but I wasn't able to dig up the actual code that does the work when stepping into a function that comes from a different file
Also: I haven't found a single person yet who actively uses this module (I asked on #ruby, #jruby and #pry on freenode), and together with the fact that it uses a function that is now obsolete, this leads me to be a bit pessimistic about the maintenance state of this module
nonetheless, I submitted a pull request for the fix (it's actually quite dumb and simple, but to do otherwise I'd need a deeper understanding of this module, and possibly to refactor some parts of it... but if this module isn't actively maintained, I'm not sure that's a good thing to put effort into)
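For context, the SCRIPT_LINES__ mechanism mentioned above works roughly like this on Ruby 1.8/1.9 (the required library name is hypothetical):
# If the constant SCRIPT_LINES__ is defined as a Hash, the interpreter records
# the source lines of files subsequently loaded via require/load; this is what
# the debug module's `list` command reads from.
SCRIPT_LINES__ = {}
require 'some_library'        # hypothetical library
puts SCRIPT_LINES__.keys      # paths whose source has been captured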
I suspect the Ruby interpreter doesn't have the ability to load the sourcefile without the components in the debug module. So, no -rdebug, no access to the sourcefile. And I agree it is a misleading error. "Debugging features not loaded" might be better.

how to not forget to delete debug lines in code

This seems to me to be a novel idea (since I haven't found any solutions or anyone having implemented it)...
A shell script that runs automatically whenever you git commit (or similar) and lets you know if you forgot to delete any debugging or development-environment-specific lines of code in your project.
For example:
Often times (in my Ruby projects) I'll leave lines of code to output variables like
puts params.inspect
or
raise params.inspect
Also, sometimes I'll use different methods so I can easily see the effects such as in cases like using delayed_job where I'd rather call the method without a delay during development.
The problem is sometimes I forget to change those methods back or forget to delete a call to raise params.inspect and I'll inadvertently push that code.
So I thought maybe the simplest solution was to add a comment to any such debugging line such as
raise params.inspect #debug
In essence, this flags the line as a development-only/debug line. Then a shell script that runs before some other command like git commit can use awk or grep to search through all of the most recently modified files for that #debug comment, stop execution, and alert you. However, I don't know much about shell scripting, so I thought I'd ask for help :)
Although I whole-heartedly recommend following cdeszaq's advice and discourage doing this sort of thing, it is pretty easy to write a git hook that will prevent you from committing any lines with a particular string. For simplicity, I'm not showing the git rev-parse --verify HEAD that you should use to make this hook work on an initial commit, but if you simply put the following in .git/hooks/pre-commit (and make it executable), you will not be able to commit any lines of code that contain the string '#debug':
#!/bin/sh
if git diff-index -p -M --cached HEAD | grep '#debug' > /dev/null; then
    echo 'debug lines found in commit. Aborting' >&2
    exit 1
fi
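For completeness, a sketch of the same hook with the initial-commit handling the answer alludes to (adapted from git's sample pre-commit hook):
#!/bin/sh
if git rev-parse --verify HEAD >/dev/null 2>&1; then
    against=HEAD
else
    # Initial commit: diff against the empty tree object
    against=$(git hash-object -t tree /dev/null)
fi
if git diff-index -p -M --cached "$against" | grep '#debug' > /dev/null; then
    echo 'debug lines found in commit. Aborting' >&2
    exit 1
fi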
Rather than having to remember to do additional work (removing lines of code) only to have to do more work later when things break again (re-adding that code), why not put in sensible debugging statements from the beginning?
Most languages have fairly expressive and often cheap logging libraries that will allow you to write out various levels of information (error, info, debug, trace) to a number of different locations (a file, a database). Many of these libraries will even let you adjust the logging level for a specific chunk of the code at runtime or even while the program is running.
So, rather than try to bandage up brute-force debugging by scripting away the problem, why not do yourself, and the rest of the world that has to use what you produce, a favor and use an actual logging framework for logging?
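For instance, a minimal sketch using Ruby's standard Logger (params here just stands in for whatever you would otherwise dump with puts):
require 'logger'

logger = Logger.new($stdout)
logger.level = Logger::INFO            # raise to WARN or ERROR in production

logger.debug { "params: #{params.inspect}" }   # block is only evaluated at DEBUG level
logger.info('request handled')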
As I said in my comment, you can use any programming language you feel comfortable with.
Anyway, searching for other commit hooks, I think this one could be a good one to start with. It basically looks for certain words in your files and can be customized just by changing the checks array at the top of the file.
#cdeszaq is correct about the logging part.
For behaviour that differs depending on environment, the common way to achieve this is to make the behaviour configurable. delayed_job should read a value from the config file to decide how long to delay. For production environments the config would have one value and for development environments the config would have a different value.
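A small sketch of that idea (the file name and keys are just assumptions):
require 'yaml'

env      = ENV.fetch('APP_ENV', 'development')
settings = YAML.load_file('config/settings.yml')[env]   # hypothetical config file
delay    = settings['job_delay_seconds']                 # e.g. 0 in development, 300 in production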

How can I make an external toolbox available to a MATLAB Parallel Computing Toolbox job?

As a continuation of this question and the subsequent answer, does anyone know how to have a job created using the Parallel Computing Toolbox (using createJob and createTask) access external toolboxes? Is there a configuration parameter I can specify when creating the function to specify toolboxes that should be loaded?
According to this section of the documentation, one way you can do this is to specify either the 'PathDependencies' property or the 'FileDependencies' property of the job object so that it points to the functions you need the job's workers to be able to use.
You should be able to point the way to the KbCheck function in PsychToolbox, along with any other functions or directories needed for KbCheck to work properly. It would look something like this:
obj = createJob('PathDependencies',{'path_to_KbCheck',...
'path_to_other_PTB_functions'});
A few comments, based on my work troubleshooting this:
It appears that there are inconsistencies in how well nested functions and anonymous functions work with the Parallel Computation toolkit. I was unable to get them to work, while others have been able to. (Also see here.) As such, I would recommend having each function stored in its own file, and including those files using the PathDependencies or FileDependencies properties, as described by gnovice above.
It is very hard to troubleshoot the Parallel Computation toolkit, as everything happens outside your view. Use breakpoints liberally in your code, and the inspect command is your friend. Also note that if there is an error, task objects will contain an error parameter, which in turn will contain ErrorMessage string, and possibly the Error.causes MException object. Both of these were immensely useful in debugging.
When including Psychtoolbox, you need to do it as follows. First, create a jobStartup.m file with the following lines:
PTB_path = '/Users/eliezerk/Documents/MATLAB/Psychtoolbox3/';
addpath( PTB_path );
cd( PTB_path );
SetupPsychtoolbox;
However, since the Parallel Computation toolkit can't handle any graphics functionality, running SetupPsychtoolbox as-is will actually cause your thread to crash. To avoid this, you need to edit the PsychtoolboxPostInstallRoutine function, which is called at the very end of SetupPsychtoolbox. Specifically, you want to comment out the line AssertOpenGL (line 496, as of the time of this answer; this may change in future releases).
