How to check Matplotlib's speed in Xcode and increase performance?

I'm running into some considerable speed bottlenecks with a Python-Matplotlib-Xcode combination. I know some immediate responses will probably ask "Why are you doing Python stuff in Xcode, just man up and use vim?" I like the organizing ability and the built-in version control; they make parts of my work easier to deal with.
Getting Python to run in Xcode in the first place was trickier than I had hoped, but it's possible. Now I have the following scenario:
A master file, 'main.py', does all the imports for me and sets up some universal formatting to make all the figures (for eventual inclusion in my PhD thesis) nice and uniform. Afterwards it runs a series of execfile commands to generate whichever graphics I need. Two things I can think of right off the bat:
1) At the very beginning of main.py, after I import all the normal Python stuff you tend to need, I call a system script which checks whether a certain filesystem is mounted. I keep all my climate model data on there, since my local hard drive is too small to deal with all of it at once. Python pauses itself and waits for the system to do its thing, but once the filesystem has been found, it keeps going. Usually this only needs to happen once in the morning when I get to work, or if the VPN server kicked me off for whatever reason. (Side question: it'd be cool to know if there's a trick to automate a VPN login to reconnect as soon as it notices it's not connected.)
2) I'm not sure how much overhead Xcode adds on its own. Running the same program from the terminal is (somewhat) faster. I've tried to be memory conscious and turn off stuff I don't need while running the Python/Xcode combination.
Also, Python launches a little window whenever I call plt.show(), and this in itself takes time. I've considered just saving the figures as quick PNG files and opening them with some other viewer, although I guess that would also take time to open up (a sketch of that approach is below). Given how often these graphics change as I add model runs or think of nicer ways of displaying the data, it'd be nice not to waste something on the order of 15 to 30 minutes (possibly more) out of the entire day twiddling my thumbs and waiting for a window to pop up.
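For reference, a minimal sketch of that save-instead-of-show approach (the figure contents and file name are placeholders, not taken from the scripts above); selecting the non-interactive Agg backend before importing pyplot prevents any window from being created:
import matplotlib
matplotlib.use('Agg')  # must be set before pyplot is imported
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])  # stand-in for the real plotting code
fig.savefig('figure01.png', dpi=150)
plt.close(fig)  # free the figure's memory before the next one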

Benchmark it!
import datetime
start = datetime.datetime.now()
# your plotting code
td = datetime.datetime.now() - start
print td.total_seconds() # requires python version >= 2.7
Run it in Xcode and from the command line, and see what the difference is.
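From the shell, the time builtin gives the same whole-script number without touching the code (python here stands for whichever interpreter Xcode invokes for you):
$ time python main.py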

Related

Simple bash tool to execute command on vocal input

I am trying to record mouse positions and events for later playback (Linux, Ubuntu 20.04).
Right now this happens at regular intervals: every 2 seconds I record where the mouse is with xdotool and take a screenshot. In a separate question elsewhere I am looking for better tools to do this.
However, in a separate train of thought, I am also looking for something that would execute a command when I, for example, loudly say "NOW".
I have looked for tools, and while there's definitely interesting stuff out there, this is most likely a one-off, and the installation and training process offsets the benefits.
So: is there anything out there that will do something when, for example, a sound above a threshold is captured by the microphone?
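As a minimal sketch of the threshold idea, assuming the python-sounddevice and numpy packages (neither is named in the question) and placeholder values for the threshold and the command:
import subprocess
import numpy as np
import sounddevice as sd

THRESHOLD = 0.5                  # peak amplitude, 0..1; tune by printing levels first
COMMAND = ['touch', '/tmp/now']  # placeholder for whatever should run on "NOW"

def callback(indata, frames, time, status):
    # indata is a NumPy array holding this block of microphone samples
    if np.abs(indata).max() > THRESHOLD:
        subprocess.Popen(COMMAND)  # fire and forget; doesn't block the audio thread

with sd.InputStream(callback=callback):
    input('Listening... press Enter to quit\n')
A real version would want a short cooldown so one shout doesn't fire the command once per audio block.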

Is there a way to keep the Xcode console log from overflowing and locking up the session?

I have a Mac app (that is a testbed for a phone app) that spews massive amounts of output into the console log. Mostly this is what I want, but sometimes I run large "batch" runs and the console log essentially fills up and Xcode locks up. The only way I've found to prevent this is to monitor the job and every 30 seconds or so press "Clear", hoping that I'm not so close to the end that I clear out the 50 or so final lines giving the results of the run.
Yes, I could go through the code and reduce the number of lines output, but there are several reasons (not purely based on laziness) for not doing that.
Does anyone know of a way to tell Xcode to maintain the console as a "rotating buffer" of sorts, clearing old stuff from time to time so that it doesn't fill up?
You could write your own rotating buffer implementation, and log to that rather than with printf.
Or if you don't want to replace all your printfs:
#define printf rotatingPrintf
Perhaps it would work to write a command-line tool that keeps a rotating buffer, then pipe the output of your app into that tool. You can launch GUI apps from the command line like this:
$ /Applications/Foo.app/Contents/MacOS/Foo
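As a sketch of such a tool (the file name and buffer size are invented for the example), a few lines of Python with collections.deque keep only the tail of whatever is piped in:
#!/usr/bin/env python
# rotate.py -- keep only the last N lines of whatever is piped in
import sys
from collections import deque

N = 1000                    # size of the rotating buffer
buf = deque(maxlen=N)       # appending past maxlen silently drops the oldest line

for line in sys.stdin:
    buf.append(line)

sys.stdout.writelines(buf)  # on EOF, dump the surviving tail
Then $ /Applications/Foo.app/Contents/MacOS/Foo | python rotate.py > tail.log keeps the last thousand lines, results included, without any console filling up.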

Playing Sound in Perl script

I'm trying to add sound to a Perl script to alert the user that the transaction was OK (the user may not be looking at the screen all the time while working). I'd like to stay as portable as possible, as the script runs on both Windows and Linux stations.
I can
use Win32::Sound;
Win32::Sound::Play('SystemDefault',SND_ASYNC);
for Windows. But I'm not sure how to call a generic sound on Linux (Gnome). So far, I've come up with
system('paplay /usr/share/sounds/gnome/default/alert/sonar.ogg');
But I'm not sure if I can count on that path being available.
So, three questions:
1) Is there a better way to call a default sound in GNOME?
2) Is that path pretty universal (at least among Debian/Ubuntu flavors)?
3) paplay takes a while to exit after playing a sound; is there a better way to call it?
I'd rather stay away from beeping the system speaker; it sounds awful (this is going to get played a lot) and Ubuntu blacklists the PC speaker anyway.
Thanks!
A more portable way to get the path to paplay (assuming it's there) might be to use File::Which. Then you could get the path like:
use File::Which;
my $paplay_path = which 'paplay';
And to play the sound asynchronously, you can fork a subprocess:
my $pid = fork;
die "fork failed: $!" unless defined $pid;
if ( !$pid ) {
    # in the child process
    system $paplay_path, '/usr/share/sounds/gnome/default/alert/sonar.ogg';
    exit;  # without this, the child would fall through into the parent's code
}
# parent proc continues here
Notice also that I've used the multi-argument form of system; doing so bypasses the shell and runs the requested program directly, which avoids dangerous quoting bugs (and is more efficient).

OSX: How to force multiple-file-copy operation to plough through errors

This is so wrong.
I want to perform a large copy operation; moving 250 GB from my laptop hard drive to an external drive.
OS X Lion claims this will take about five hours.
After a couple of hours of chugging, it reports that one particular file could not be copied (for whatever reason; I cannot remember and I don't have the patience to repeat the experiment at the moment).
And on that note it bails.
I am frankly left aghast.
That this problem persists in this day and age is to me scarcely believable. I remember hitting up against the same scenario 20 years back with Windows 3.1.
How hard would it be for the folks at Apple (or Microsoft, for that matter) to implement file copying in such a way that it skips over failures, writing a list of failed operations on the fly to stderr? And how much more useful would that implementation be? (Both of these questions are rhetorical, by the way; they're simply an expression of my utter bewilderment. Please don't answer them except in comments or as supplements to an answer to the actual question, which follows.)
More to the point (and this is my actual question), how can I implement this myself in OS X?
PS I'm open to all solutions here: programmatic / scripting / third-party software
I hear and understand your rant, but this is bordering on being a SuperUser-type question rather than a programming question (saved only by the fact that you said you'd like to implement this yourself).
From the description, it sounds like the Finder bailed when it couldn't copy one particular file (my guess is that it was looking for admin and/or root permission for some privileged folder).
For massive copies like this, you can use the Terminal command line:
e.g.
cp
or
sudo cp
with options like "-R" (which continues copying even if errors are detected, unless you're running in "legacy" mode) or "-n" (don't copy if the file already exists at the destination). You can see all the possible options by typing "man cp" at the Terminal command line.
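For example (the paths here are placeholders), redirecting stderr captures exactly the on-the-fly list of failed operations wished for above:
$ cp -Rv /Users/me/Data /Volumes/External/Data 2> copy-failures.log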
If you really wanted to do this programmatically, there are options in NSWorkspace (the performFileOperation:source:destination:files:tag: method; look at the NSWorkspaceCopyOperation constant). You can also do more low-level stuff via NSFileManager and its copyItemAtPath:toPath:error: method, but that's really getting into brute-force territory.
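Since the question is open to scripting: as a sketch (the paths come from the command line; nothing else is assumed), Python's shutil.copytree keeps copying past per-file failures and raises a single shutil.Error at the end that lists every file it could not copy:
# copy_through.py -- copy a tree, ploughing through per-file errors
import shutil
import sys

src, dst = sys.argv[1], sys.argv[2]  # dst must not exist yet
try:
    shutil.copytree(src, dst)
except shutil.Error as e:
    # e.args[0] is a list of (source, destination, reason) tuples,
    # one per file that could not be copied
    for failed_src, _failed_dst, reason in e.args[0]:
        sys.stderr.write('FAILED: %s (%s)\n' % (failed_src, reason))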

Why is this Perl require line taking so much time?

I have a Perl script that runs via a system() command from C. On one specific site (SunOS 5.10), when that script runs, it nearly always takes 6 seconds or more. On other sites it runs pretty much instantly (0.1s). If I run the script manually, i.e. not from the C code, it also runs instantly. By printing out the time in a lot of different places, I eventually tracked the slowness down to a single require line. The file being required is another Perl script we wrote. That script consists of a single require (this file here), 3 scalars that are assigned integer values, and a handful of time/date conversion routines, and it ends with a 1;.
That single require appears to take as much as 6 seconds on occasion, but as I said, not always, even on the same machine. I'm absolutely stumped. My only remaining thought is to turn on profiling, but the site doesn't have Devel::Profiler, and my only other option (that I know of) would be to add it to the Perl command line, which would mean altering and recompiling the C code (doable but non-trivial).
Anybody have ANY idea what could be going on here? I don't think I can/want to put the entire date.pl that is being required, but it's pretty much exactly as I described; I could answer any questions about it that you have.
Thanks in advance.
You might be interested in A Timely Start by Jean-Louis Leroy. He had a similar problem and tracked it down to a long and deep module search path, where perl usually found the modules in the last entries of @INC.
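One way to test that theory on the SunOS box (assuming truss is available there; the script name is a placeholder): trace the open calls during startup and see how many directories perl tries before the require succeeds:
$ truss -t open perl yourscript.pl 2>&1 | grep ENOENT
A long stream of ENOENT failures across the @INC directories would point at the search path rather than the file itself.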
Six seconds is a long time. Have you checked what your network is doing during this?
My first thought was that spawning the new process when using the system() command could be the problem, but six seconds is too long.
I don't know much about Perl, but I could imagine that, for some reason, accessing the time module invokes a call to a network time server, just to get synchronized. Maybe that takes a long time, or maybe it is hitting a timeout.
It could be that this only happens for a newly spawned process, hence only when you use the system() command.
Just wild guessing...
So, this does nothing to answer your question directly, but please tell me you're not actually running on Perl 4? Assuming you're on Perl 5, you could remove the entire file and replace the require with use POSIX qw(ctime) to get the version that comes with Perl.
If you do have to support Perl 4, I'll merely grumble something about version 5 being fifteen years old now and go away. :)
