I have a Perl script that runs via a system() command from C. On one specific site (SunOS 5.10), that script nearly always takes 6 seconds or more. On other sites, it runs pretty much instantly (0.1s). If I run the script manually, i.e. not from the C code, it also runs instantly.

I eventually tracked the slowness down (by printing the time in a whole bunch of different places) to a single require line. The file that it is requiring is another Perl script we wrote. The script consists of a single require (this file here), 3 scalars that are assigned integer values, and a handful of time/date conversion routines. The file ends with a 1;. That single require appears to take as much as 6 seconds on occasion, but as I said, not always, even on the same machine.

I'm absolutely stumped here. My last remaining thought is to turn on profiling, but the site doesn't have Devel::Profiler, and my only other option (that I know of) would be to add it to the Perl command, which would require altering and recompiling the C code (doable but non-trivial).
Anybody have ANY idea what could be going on here? I don't think I can (or want to) post the entire date.pl that is being required, but it's pretty much exactly as I described; I can answer any questions you have about it.
Thanks in advance.
You might be interested in A Timely Start by Jean-Louis Leroy. He had a similar problem and tracked it down to a long and deep module search path, where Perl usually found the modules only in the last entries of @INC.
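If you want to check whether a long @INC search is actually what's happening on the slow machine, a small test along these lines might help. This is only a sketch, not your script; "date.pl" is just the name used in the question, and Time::HiRes is assumed to be installed.

use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Show the directories Perl will walk through to satisfy the require.
print "Module search path:\n";
print "  $_\n" for @INC;

# Time the suspect require and report where the file was finally found.
my $t0 = [gettimeofday];
require "date.pl";
printf "require took %.3f s, resolved to %s\n",
    tv_interval($t0), $INC{"date.pl"};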
Six seconds is a long time. Have you checked what your network is doing during this?
My first thought was that spawning the new process when using the system() command could be the problem, but six seconds is too long.
I don't know much about Perl, but I could imagine that, for whatever reason, accessing the time module triggers a call to a network time server, just to get synchronized. Maybe that takes a long time, or maybe it is hitting a timeout.
It could be that this only happens for a newly spawned process -- hence only when you use the system() command.
just wild guessing...
So, this does nothing to answer your question directly, but please tell me that you're not actually running on perl 4? Assuming you're on perl 5, you could remove the entire file and replace the require with use POSIX qw(ctime) to get the version that comes with Perl.
If you do have to support perl4, I'll merely grumble something about version 5 being fifteen years old now and go away. :)
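For reference, the POSIX-based replacement suggested above might look roughly like this. It's only a sketch: the routine names in your date.pl are unknown, so this just shows the stock POSIX calls themselves.

use strict;
use warnings;
use POSIX qw(ctime strftime);

my $now = time();
print ctime($now);                                       # e.g. "Thu Jun 12 08:15:30 2014\n"
print strftime("%Y-%m-%d %H:%M:%S\n", localtime($now));  # custom format via strftime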
Is there a way that I can single step through part (or all) of a Perl 6 program? I expected that there would be a -d, but of course there is not:
% perl6 -d test.p6
I thought perhaps eval-ing the file, but that does the whole thing at once:
% perl6
> EVALFILE 'test.p6'
As I expected, that just runs the entire file.
I suspect someone hasn't implemented this sort of thing. Is there some way I can hook into the runtime to insert actions between statements and so on? In Perl 5 land, that would be the DB class.
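For context, the Perl 5 hook referred to above looks roughly like this (it is essentially what Devel::Trace does). This is only a sketch, and Devel::StepTrace is a made-up module name; save it as Devel/StepTrace.pm and run it with perl -d:StepTrace script.pl.

package Devel::StepTrace;   # made-up name, for illustration only

package DB;

sub DB {
    # perl calls DB::DB before every statement when the -d flag is in effect
    my ($pkg, $file, $line) = caller;
    print STDERR "$file:$line\n";
}

1;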
Aside from that, does Perl 6 work with any general debuggers? If I were using the JVM backend, would it even make sense to use a Java tool (or is it gibberish by that point)?
I started working on this problem as one of the features of LREP. I haven't worked on it for a while, so I'm not sure how well it still works. Since the last time I worked on LREP, we've done a lot of cleanup on the internal REPL; I hope to modify LREP to work more cleanly and add in debugger features.
I have a Ruby script, apparently correct, that sometimes stops working (probably on some calls to PostgreSQL through the pg gem). The problem is that it freezes without producing any error, so I can't see the line number, and I always have to isolate the offending line by adding puts "ok1", puts "ok2", etc. and seeing where the script stops.
Is there any better way to see the current line being executed (without changing the script)? And maybe the current stack?
You could investigate ruby-debug, a project that has been rewritten several times for several different versions of Ruby; it should allow you to step through your code line by line. I personally prefer printf debugging in a lot of cases, though. Also, if I had to take an absolutely random guess at your problem, I might investigate whether you're running into a race condition and/or a deadlock in your DB.
I'm running into some considerable speed bottlenecks with a Python-Matplotlib-Xcode combination. I know some immediate responses will probably ask "Why are you doing Python stuff in Xcode, just man up and use vim?" I like the organizing ability and the built-in version control; they make elements of my work easier to deal with.
Getting Python to run in Xcode in the first place was a bit trickier than I had hoped, but it's possible. Now I have the following scenario:
A master file, 'main.py', does all the import work for me and sets up some universal formatting to make all the figures (for eventual inclusion in my PhD thesis) nice and uniform. Afterwards it runs a series of execfile commands to generate whichever graphics I need. Two things I can think of right off the bat:
1) At the very beginning of main.py, after I import all the normal Python stuff you tend to need, I call a system script which checks whether a certain filesystem is mounted. I keep all my climate model data on there, since my local hard drive is too small to hold all of it at once. Python pauses itself and waits for the system to do its thing, but once the filesystem has been found, it keeps going. Usually this only needs to happen once in the morning when I get to work, or if the VPN server kicked me off for whatever reason. (Side question: it'd be cool to know if there's a trick to automate a VPN login so it reconnects as soon as it notices it's not connected.)
2) I'm not sure how much overhead Xcode adds on its own; running the same program from the terminal is (somewhat) faster. I've tried to be memory-conscious and turn off stuff I don't need while running the Python/Xcode combination.
Also, Python launches a little window whenever I call plt.show(), and this in itself takes time. I've considered just saving the figures as quick PNG files and opening them with some other viewer, although I guess that would also take some time to open. Given how often these graphics change as I add model runs or think of nicer ways of displaying the data, it'd be nice not to waste something on the order of 15 to 30 minutes (possibly more) of the day twiddling my thumbs and waiting for a window to pop up.
Benchmark it!
import datetime
start = datetime.datetime.now()
# your plotting code
td = datetime.datetime.now() - start
print td.total_seconds() # requires python version >= 2.7
Run it in Xcode and from the command line, and see what the difference is.
I am making an app with a TON of features. My problem is that AppleScript seems to have a cut-off point: after a certain number of lines, the script stops working. It runs fine up to that point, then simply stops. I have moved the code around to make sure it is not an error within the code itself. Help?
I might be wrong, but I believe one long script is not a good way to organize your code.
It's really hard to read, debug, or maintain, as one slight change in one part can have unexpected consequences in another part of your file.
If your script is very long, I suggest you break your code into multiple parts.
First, you may use functions if some part of the code is reused several times.
Another benefit of functions is that you can validate them separately from the rest of the execution code.
Besides, it makes your code easier to read.
on doWhatYouHaveTo(anArgument)
say "hello!"
end doWhatYouHaveTo
If the functions are used by different scripts, you may want to keep them in a separate library that you can load as needed:
set cc to load script alias ((path to library folder as string) & "Scripts:Common:CommonLibrary.app")
cc's doWhatYouHaveTo(oneArgument)
Finally, something I sometimes do is call a different script with some arguments, when one long piece of code serves slightly different purposes:
run script file mainFileName with parameters {oneWay}
This last trick has a great yet curious benefit: it can speed up execution for a reason I've never been able to explain (and when I say speed up, I mean reduce the execution time by a factor of 17 or so for the very same code).
I sweated over the question above. The answer I'm going to supply took me a while to piece together, but it still seems hopelessly primitive and hacky compared to what one could do were completion to be redesigned to be less staticky. I'm almost afraid to ask if there's some good reason that completion logic seems to be completely divorced from the program it's completing for.
I wrote a command-line library (it can be seen in the Scala trunk) which lets you flip a switch to get a "--bash" option. If you run
./program --bash
It calculates the completion file, writes it out to a tempfile, and echoes
. /path/to/temp/file
to the console. The result is that you can use backticks like so:
`./program --bash`
and you will have completion for "program" in the current shell since it will source the tempfile.
For a concrete example: check out scala trunk and run test/partest.